\begin{document}
\title{Two-ended quasi-transitive graphs}
\begin{abstract} The well-known characterization of two-ended groups says that every two-ended group splits over a finite subgroup, which means it is isomorphic either to a free product with amalgamation~$A\ast_C B$, where~$C$ is a finite group and~$[A:C]=[B:C]=2$, or to an HNN-extension~$\ast_{\phi} C$, where~$C$ is a finite group and~$\phi\in\ensuremath{\mathsf{Aut}}(C)$. In this paper we show that there is a way to split two-ended quasi-transitive graphs without dominated ends and two-ended transitive graphs over finite subgraphs in the above sense. As an application, we characterize all groups acting with finitely many orbits almost freely on those graphs.
\end{abstract}
\section{Introduction} End theory plays a crucial role in graph theory, topology and group theory, see the work of Diestel, Halin, Hughes, Ranicki, M\"{o}ller and Wall~\cite{sur2,sur1,halin64,RoggiEndsI,RoggiEndsII,wall-geometry}. In 1931, Freudenthal~\cite{freu31} defined the concept of ends for topological spaces and topological groups for the first time. Let~$X$ be a locally compact Hausdorff space. In order to define the ends of the topological space~$X$, consider infinite sequences~$U_1\supseteq U_2 \supseteq \cdots$ of non-empty connected open subsets of~$X$ such that the boundary of each~$U_i$ is compact and~$\bigcap\overline{U_i}=\emptyset$. Two sequences~$U_1\supseteq U_2 \supseteq \cdots$ and~$V_1\supseteq V_2 \supseteq \cdots$ are \emph{equivalent} if for every~${i\in\ensuremath{\mathbb{N}}}$ there are~$j,k\in\ensuremath{\mathbb{N}}$ such that~$U_i\supseteq V_j$ and~$V_i\supseteq U_k$. The equivalence classes of those sequences are the \emph{ends} of~$X$. The ends of groups arose from the ends of topological spaces in the work of Hopf~\cite{hopf}. In 1964, Halin~\cite{halin64} independently defined ends (vertex-ends) for infinite graphs as equivalence classes of rays, i.e.\ one-way infinite paths. Diestel and K\"{u}hn~\cite{ends} showed that if we consider locally finite graphs as one-dimensional simplicial complexes, then these two concepts coincide. We can define the number of ends of a given finitely generated group~$G$ as the number of ends of a Cayley graph of~$G$. It is known that any two Cayley graphs of the same group have the same number of ends, as long as the generating sets are finite, see~\cite{mei}.\footnote{Even stronger, they are quasi-isometric.} Freudenthal~\cite{freu44} and Hopf~\cite{hopf} proved that the number of ends of an infinite group~$G$ is either 1, 2 or~$\infty$. Subsequently Diestel, Jung and M\"oller~\cite{DiestelJungMoeller} extended the above result to arbitrary (not necessarily locally finite) transitive graphs.
They proved that the number of ends of an arbitrary infinite connected transitive graph is either 1, 2 or~$\infty$. In 1943, Hopf~\cite{hopf} characterized two-ended finitely generated groups. Later, Scott and Wall~\cite{ScottWall} gave another characterization of two-ended finitely generated groups. We summarize these results in the following theorem: \begin{thm}\label{Classifiication}
Let~$G$ be a finitely generated group. Then the following statements are equivalent:
\begin{enumerate}[\rm (i)]
\item~$G$ is a two-ended group.
\item Every Cayley graph of~$G$ is quasi-isometric to~$\Gamma(\ensuremath{\mathbb{Z}},\{\pm 1\})$, i.e.\ the double ray.
\item~$G$ has an infinite cyclic subgroup of finite index.
\item~$G$ is isomorphic either to a free product with amalgamation~$A\ast_{C} B$, where~$C$ is finite and~$[A:C]=[B:C]=2$, or to an HNN-extension~$\ast_\phi C$, where~$C$ is finite and~$\phi \in \ensuremath{\mathsf{Aut}}( C)$.
\end{enumerate} \end{thm}
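To make item (iv) concrete, here are the two standard examples of two-ended groups (these examples are ours, not taken from the sources cited above):

```latex
% The infinite dihedral group splits as a free product with
% amalgamation over the trivial group C = 1, with [A:C] = [B:C] = 2:
D_\infty \;\cong\; \mathbb{Z}_2 \ast \mathbb{Z}_2
        \;=\; \langle a, b \mid a^2 = b^2 = 1 \rangle,
% while the infinite cyclic group is the HNN-extension of the
% trivial group over its unique automorphism:
\mathbb{Z} \;\cong\; \ast_{\mathrm{id}}\, 1.
% Both groups contain an infinite cyclic subgroup of finite index
% (for D_\infty the subgroup generated by ab has index 2), in
% accordance with item (iii).
```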
Our aim is to extend the above theorem to quasi-transitive graphs. The first obstacle is that free products with amalgamation and HNN-extensions are group-theoretical notions. It turns out that the tree-amalgamation is a good approach to generalizing the above theorem to two-ended graphs. In particular, two-ended graphs split over finite subgraphs via tree-amalgamations. Indeed we will show that every two-ended quasi-transitive graph without dominated ends can be expressed as a tree-amalgamation of two rayless graphs, see Theorem \ref{char-two-ended-graph}. In particular, if the graph is locally finite, then it is a tree-amalgamation of two finite graphs, in analogy with Theorem \ref{Classifiication}.\\ In 1984, Jung and Watkins~\cite{jung_watkins} studied groups acting on two-ended transitive graphs. In this paper, we also generalize the results mentioned above to two-ended quasi-transitive graphs without dominated ends.
\section{Preliminaries}
We refer the readers to~\cite{diestelBook10noEE} for graph-theoretical notation and terminology and to~\cite{harpe-book} for combinatorial group-theoretical notation.\\ In the following we recall the most important definitions and notations for the reader's convenience.
\subsection{Graph theory}
Let~$\Gamma$ be a graph with vertex set~$V$ and edge set~$E$.
For a set~$X \subseteq V$ we set~$\Gamma[X]$ to be the subgraph of~$\Gamma$ induced on~$X$. A \emph{ray} is a one-way infinite path in a graph; the infinite subpaths of a ray are its \emph{tails}. An \emph{end} of a graph is an equivalence class of rays, where two rays are equivalent if and only if there exists no finite vertex set~$S$ such that after deleting~$S$ the two rays have tails contained in different components. A sequence of finite vertex sets~$(F_i) _{i \in \ensuremath{\mathbb{N}}}$ is a \emph{defining sequence} of an end~$\omega$ if~$C_{i+1} \subsetneq C_{i}$, with~${C_i \defi C(F_i,\omega)}$, and~$\bigcap C_i = \emptyset$. We define the \emph{degree of an end~$\omega$} as the supremum of the number of pairwise disjoint rays belonging to~$\omega$. We say an end~$\omega$ \emph{lives} in a component~$C$ of~$\Gamma \setminus X$, where~$X$ is a subset of~$V(\Gamma)$ or of~$E(\Gamma)$, when a ray of~$\omega$ has a tail completely contained in~$C$, and we denote~$C$ by~$C(X,\omega)$. We say a component of a graph is \emph{big} if there is an end which lives in that component. Components which are not big are called \emph{small}. We define~$s(\Gamma)$ to be the maximum number of disjoint double rays in the graph~$\Gamma$. An end~$\omega$ of a graph~$\Gamma$ is \emph{dominated by a vertex}~$v$ if~$v$ cannot be separated from~$\omega$ by a finite vertex set avoiding~$v$, i.e.\ if for every finite vertex set~$S$ with~$v \notin S$ the vertex~$v$ has a neighbour in~$C(S,\omega)$. Note that this implies that~$v$ has infinite degree. An end is said to be~\emph{dominated} if there exists a vertex dominating it.
A finite set~$C \subseteq E$ is a \emph{finite cut} if there exists a partition~$(A,A^\ast)$ of~$V$ such that~$C$ consists exactly of the edges between~$A$ and~$A^\ast$, which we denote by~$C=E(A,A^\ast)$. A cut~$C=E(A,A^\ast)$ is the cut \emph{induced} by the partition~$(A,A^\ast)$. We note that if~$C=E(A,A^\ast)$ is a cut, then the partition~$(gA,gA^\ast)$ induces a cut for every~$g\in \mathsf{Aut}(\Gamma)$. For the sake of simplicity we denote this new cut by~$gC$. A finite cut~$C=E(A,A^\ast)$ is called \emph{tight} if $\Gamma[A]$ and $\Gamma[\ensuremath{A^*}]$ are connected; moreover if $|E(A,\ensuremath{A^*})|=k$, then we say that $C$ is $k$-\emph{tight}.\\
A concept similar to cuts is the concept of separations. A \emph{separation} is a pair~${(A,A^\ast)}$ with~${A,A^\ast \subseteq V}$ such that~$\Gamma=\Gamma[A]\cup \Gamma[A^\ast]$. The set~$A \cap A^\ast$ is called the \emph{separator} of this separation. The \emph{order} of a separation is the size of its separator. In this paper we only consider separations of finite order, thus from here on, any separation will always be a separation of finite order.
For two-ended graphs we call a separation~$\mbox{\emph{$k$-tight}}$ if the following holds: \begin{enumerate}
\item~$|A \cap A^*| = k$.
\item There is an end~$\omega_A$ living in a component~$C_A$ of~$A \setminus A^\ast$.
\item There is an end~$\omega_{A^\ast}$ living in a component~$C_{A^\ast}$ of~$A^\ast \setminus A$.
\item Each vertex in~$A \cap A^\ast$ is adjacent to vertices in both~$C_A$ and~$C_{A^\ast}$. \end{enumerate} If a separation~$(A,A^\ast)$ is~$k$-tight for some~$k$, then this separation is just called \emph{tight}. Note that finding tight separations is always possible in two-ended graphs. In an analogous manner to finite cuts, one may see that~$(gA,gA^\ast)$ is a tight separation for every~$g\in \mathsf{Aut}(\Gamma)$ whenever~$(A,A^\ast)$ is a tight separation.
Assume that~${(A,A^\ast)}$ and~${(B,B^\ast)}$ are two separations of~$\Gamma$. We say~${(A,A^\ast) \leq (B,B^\ast)}$ if and only if~$A \subseteq B$ and~$A^\ast \supseteq B^\ast$. Let us call~${(A, A^\ast)}$ and~${(B, B^\ast)}$ \emph{nested} if~$(A, A^\ast)$ is comparable with~$(B,B^\ast)$ or with~${(B^\ast,B)}$ under~$\leq$. A separation~$(A,A^\ast)$ is \emph{connected} if~$\Gamma[A \cap A^\ast]$ is connected.
Next we recall the definition of the \emph{tree-amalgamation} for graphs which was first defined by Mohar~\cite{Mohar06}. We use the tree-amalgamation to obtain a generalization of factoring quasi-transitive graphs in a similar manner to HNN-extensions or free-products with amalgamation over groups.
A tree~$T$ is \emph{$(p_1,p_2)$-semiregular} if there exist~$p_1,p_2 \in \{1,2,\ldots \} \cup \{\infty\}$ such that for the canonical bipartition~$\{V_1,V_2\}$ of~$V(T)$ the vertices in~$V_i$ all have degree~$p_i$ for~$i=1,2$.
In the following let~$T$ be the~$(p_1,p_2)$-semiregular tree. Suppose that there is a mapping~$c$ which assigns to each edge of~$T$ a pair~$(k,\ell),\, 0 \leq k < p_1,\, 0 \leq \ell < p_2$, such that for every vertex~$v \in V_1$, the first coordinates of the pairs in~{$\{c(e) \mid v$ is incident with~$e\}$} are distinct and take all values in the set~$\{k \mid 0 \leq k < p_1\}$, and for every vertex in~$V_2$, the second coordinates are distinct and exhaust all values of the set~$\{\ell \mid 0 \leq \ell < p_2 \}$.
Let~$\Gamma_1$ and~$\Gamma_2$ be graphs. Suppose that~$\{S_k \mid 0 \leq k < p_1\}$ is a family of subsets of~$V(\Gamma_1)$, and~${\{ T_\ell \mid 0 \leq \ell < p_2\}}$ is a family of subsets of~${V(\Gamma_2)}$. We shall assume that all sets~$S_k$ and~$T_\ell$ have the same cardinality, and we let~${\phi_{k\ell}\colon S_k \rightarrow T_\ell}$ be a bijection. The maps~$\phi_{k\ell}$ are called \emph{identifying maps}.
For each vertex~$v \in V_i$, take a copy~$\Gamma_i^v$ of the graph~${\Gamma_i},\ i = 1, 2$. Denote by~${S_k^v}$ (if~${i = 1}$) and~${T^v_\ell}$ (if~${i = 2}$) the corresponding copies of~$S_k$ or~$T_\ell$ in~${V(\Gamma^v_i)}$. Let us take the disjoint union of the graphs~${\Gamma^v_i ,\ v \in V_i ,\ i = 1, 2}$. For every edge~${st \in E(T )}$ (with~$s \in V_1,\ t \in V_2$) such that~${c(st) = (k, \ell)}$ we identify each vertex~${x \in S^s_k}$ with the vertex~$y = \phi_{k\ell}(x)$ in~$T^t_\ell$. The resulting graph~$Y$ is called the \emph{tree-amalgamation} of the graphs~$\Gamma_1$ and~$\Gamma_2$ over the \emph{connecting tree}~$T$. We denote~$Y$ by~$\Gamma_1\ast\!\! _T \Gamma_2$. In the context of tree-amalgamations the sets~$\{S_k\}$ and~$\{T_\ell\}$ are also called \emph{the sets of adhesions} and a single~$S_k$ or~$T_\ell$ might be called an \emph{adhesion} of this tree-amalgamation.
In the case that~$\Gamma_1 = \Gamma_2$ and that~$\phi_{k \ell}$ is the identity for all~$k$ and~$\ell$ we may say that~$\{S_k\}$ is the set of adhesions of this tree-amalgamation. A tree-amalgamation~$\Gamma_1\ast_T\Gamma_2$ is called \emph{thin} if all adhesions are finite and~$T$ is the double ray; if moreover~$\Gamma_1$ and~$\Gamma_2$ are rayless, then we call it \emph{strongly thin}.
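As a minimal illustration of the definition (our own toy instance, not from~\cite{Mohar06}): take~$T$ to be the $(2,2)$-semiregular tree, i.e.\ the double ray, and let both factors be a single edge.

```latex
% Gamma_1 = Gamma_2 = K_2 with vertex set {a, b}, adhesion sets
% S_0 = T_0 = {a} and S_1 = T_1 = {b}, and every identifying map
% phi_{k l} the identity. Gluing the copies along the double ray T
% identifies the b-vertex of each copy with the a-vertex of the
% next copy, so
K_2 \ast_T K_2 \;\cong\; R, \quad\text{the double ray.}
% Since K_2 is rayless and all adhesions are finite, this
% tree-amalgamation is strongly thin.
```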
\subsection{Combinatorial group theory}
Let a group~$G$ act on a set~$X$. By~$\mathsf{St}_G(x)$ we denote the stabilizer of~$x\in X$, i.e.\ the set of all elements of~$G$ fixing~$x$. If~$\mathsf{St}_G(x)$ is finite for all $x\in X$, we say that $G$ acts \emph{almost freely} on $X$.
Let~$(X,d_X)$ and~$(Y,d_Y)$ be two metric spaces and let~$\phi\colon X\to Y$ be a map. The map~$\phi$ is a \emph{quasi-isometric embedding} if there is a constant~$\lambda\geq 1$ such that for all~$x,x^\prime \in X$: $$\frac{1}{\lambda}d_X(x,x')-\lambda\leq d_Y(\phi(x),\phi(x'))\leq{\lambda}d_X(x,x')+\lambda$$
The map~$\phi$ is called \emph{quasi-dense} if there is a~$\lambda'$ such that for every~$y\in Y$ there exists~$x\in X$ with~$d_Y(\phi(x),y)\leq \lambda'$. Finally~$\phi$ is called a \emph{quasi-isometry} if it is both quasi-dense and a quasi-isometric embedding. If~$X$ is quasi-isometric to~$Y$, then we write~$X\sim_{QI} Y$.\\ Recall that~$G=\langle S\rangle$ can be equipped with the word metric induced by~$S$. Thus any finitely generated group can be turned into a metric space by considering a Cayley graph, so we are able to talk about quasi-isometric groups, and the notation~${G\sim_{QI} H}$ for two groups~$G$ and~$H$ is unambiguous. Now we have the following important lemma which reveals the connection between the Cayley graphs of a group with respect to different generating sets.
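As a sanity check of these definitions, here is a small example (ours): the identity map on~$\mathbb{Z}$ is a quasi-isometry between the Cayley graphs~$\Gamma(\mathbb{Z},\{\pm 1\})$ and~$\Gamma(\mathbb{Z},\{\pm 2,\pm 3\})$.

```latex
% With lambda = 3 the defining inequalities hold for all x, y:
\tfrac{1}{3}\, d_{\{\pm 1\}}(x,y) - 3
  \;\le\; d_{\{\pm 2,\pm 3\}}(x,y)
  \;\le\; 3\, d_{\{\pm 1\}}(x,y) + 3,
% since every step by ±2 or ±3 changes a vertex by at most 3 in the
% word metric of {±1}, and conversely ±1 = ∓2 ± 3 is a product of
% two generators from {±2, ±3}. The identity is surjective on the
% vertex set, hence quasi-dense.
```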
\begin{lemma}{\rm\cite[Theorem 11.37]{mei}}\label{diff generators}
Let~$G$ be a finitely generated group and let~$S$ and~$T$ be two finite generating sets.
Then~$\Gamma(G,S)\sim_{QI}\Gamma(G,T)$. \end{lemma}
By Lemma~\ref{diff generators} we know that any two Cayley graphs of the same group are quasi-isometric if the corresponding generating sets are finite. Let~$G=\langle S\rangle$ be a finitely generated group. Brick~\cite{Brick} studied the connection between quasi-isometric groups and their end spaces. He proved the following important lemma. \begin{lemma}{\rm\cite[Corollary 2.3]{Brick}}
\label{endsequall}
Finitely generated quasi-isometric groups all have the same number of ends. \end{lemma} \begin{coro}{\rm\cite[Theorem 11.23]{mei}}
\label{same ends}
The number of ends of a group~$G$ is independent of the choice of a finite generating set. \end{coro}
Next we review the definitions of the free product with amalgamation and the HNN-extension. Let~$G_1,G_2,G_3$ be groups such that there are monomorphisms~$\phi_i\colon G_2\to G_i$ for~${i=1,3}$. Then we denote the \emph{free product with amalgamation} of~$G_1$ and~$G_3$ over~$G_2$ and the HNN\emph{-extension} over~$G_2$ by~$G_1\ast_{G_2} G_3$ and~$\ast_{\phi_1}G_1$, respectively. Finally, for a subset~$A$ of a set~$X$ we denote the complement of~$A$ by~$A^c$, and we denote the disjoint union of two sets~$A$ and~$B$ by~$A \sqcup B$.
\section{Characterization of two-ended graphs}
\begin{thm}
\label{char-two-ended-graph}
Let~$\Gamma$ be a connected quasi-transitive graph without dominated ends.
Then the following statements are equivalent:
\begin{enumerate}[\rm (i)]
\item~$\Gamma$ is two-ended.
\item~$\Gamma$ can be split as a strongly thin tree-amalgamation~$\ensuremath{\bar{\Gamma}} \ast_T \ensuremath{\bar{\Gamma}}$ fulfilling the following properties:
\begin{enumerate}[\rm a)]
\item~$\ensuremath{\bar{\Gamma}}$ is a connected rayless graph of finite diameter.
\item The identifying maps are all the identity.
\item All adhesions of the tree-amalgamation contained in~$\ensuremath{\bar{\Gamma}}$ are finite, connected and pairwise disjoint.
\end{enumerate}
\item~$\Gamma\sim_{QI}$ the double ray.
\end{enumerate} \end{thm}
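Before the proof, it may help to see the splitting of (ii) in a concrete case (our own example, not from the text): the two-way infinite ladder.

```latex
% Let Gamma = R x K_2 be the double ladder, with vertices (i, j) for
% i in Z and j in {0, 1}. It is locally finite, transitive and
% two-ended. Take \bar{\Gamma} to be the 4-cycle induced on
% {(0,0), (0,1), (1,0), (1,1)}, with the two rungs as adhesion sets:
S_1 = \{(0,0),(0,1)\}, \qquad S_2 = \{(1,0),(1,1)\}.
% With T the double ray and all identifying maps the identity,
% Gamma = \bar{\Gamma} \ast_T \bar{\Gamma}: consecutive 4-cycles are
% glued along their shared rung. Here \bar{\Gamma} is connected,
% rayless and of finite diameter, and the adhesions are finite,
% connected and pairwise disjoint, as required by a)-c).
```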
In Theorem~\ref{char-two-ended-graph} we characterize graphs which are quasi-isometric to the double ray. It is worth mentioning that Kr\"{o}n and M\"{o}ller \cite{kron2008quasi} have studied arbitrary graphs which are quasi-isometric to trees.
Before we can prove Theorem~\ref{char-two-ended-graph} we have to collect some tools used in its proof. The first tool is the following Lemma~\ref{good separation} which basically states that in a two-ended quasi-transitive graph~$\Gamma$ we can find a separation fulfilling some nice properties. For that let us define a \emph{type 1 separation} of~$\Gamma$ as a separation~${(A,\ensuremath{A^*})}$ of~$\Gamma$ fulfilling the following conditions: \begin{enumerate}[\rm (i)]
\item~$A \cap A^\ast$ contains an element from each orbit.
\item~$\Gamma[A \cap A^\ast]$ is a finite connected subgraph.
\item Exactly one component of~$A \setminus A^\ast$ is big. \end{enumerate}
\begin{lemma}
\label{good separation}
Let~$\Gamma$ be a connected two-ended quasi-transitive graph.
Then there exists a type 1 separation of~$\Gamma$. \end{lemma}
\begin{proof}
As the two ends of~$\Gamma$ are not equivalent, there is a finite~$S$ such that the ends of~$\Gamma$ live in different components of~$\Gamma \setminus S$.
Let~$C$ be a big component of~$\Gamma \setminus S$.
We set~$\bar{A} \defi C \cup S$ and~$\bar{A}^\ast \defi V(\Gamma) \setminus C$ and obtain a separation~$(\bar{A}, \bar{A}^\ast)$ fulfilling condition (iii).
Because~${\bar{A} \cap \bar{A}^\ast = S}$ is finite, we only need to add finitely many finite paths to~$\bar{A} \cap \bar{A}^*$ to connect~${\Gamma[\bar{A} \cap \bar{A}^\ast]}$.
As~$\Gamma$ is quasi-transitive there are only finitely many orbits of the action of~$\mathsf{Aut}(\Gamma)$ on~$V(\Gamma)$.
Picking a vertex from each orbit and a path from that vertex to~$\bar{A} \cap \bar{A}^\ast$ yields a separation~$(A,A^\ast)$ fulfilling all the above listed conditions. \end{proof}
In the proof of Lemma~\ref{good separation} we start by picking an arbitrary separation which we then extend to obtain a type 1 separation. The same process can be used when we start with a tight separation, which yields the following corollary:
\begin{coro}
\label{type 2 separation}
Let~$\Gamma$ be a two-ended quasi-transitive graph and let~$(\bar{A},\bar{A}^*)$ be a tight separation of~$\Gamma$.
Then there is an extension of~$(\bar{A},\bar{A}^*)$ to a type 1 separation~$(A,A^*)$ such that~$\bar{A} \cap \bar{A}^* \subseteq A \cap A^*$. \qed \end{coro}
Every separation~$(A,A^*)$ which can be obtained by Corollary~\ref{type 2 separation} is a \emph{type 2 separation}. We also say that the tight separation~$(\bar{A},\bar{A}^*)$ induces the type 2 separation~$(A,A^*)$.
In Lemma~\ref{far away} we prove that in a quasi-transitive graph without dominated ends there are vertices which have arbitrarily large distances from one another. This is very useful as it allows us to map separators of type 1 separations far enough into big components, so that the image and the preimage of that separation are disjoint.
\begin{lemma}
\label{far away}
Let~$\Gamma$ be a connected two-ended quasi-transitive graph without dominated ends, and let~$(A,A^\ast)$ be a type 1 separation.
Then for every~$k \in \ensuremath{\mathbb{N}}$ there is a vertex in each big component of~$\Gamma \setminus (A \cap A^\ast)$ that has distance at least~$k$ from~$A\cap A^\ast$. \end{lemma}
\begin{proof}
Let~$\Gamma$ and~$(A,A^\ast)$ be given and set~$S \defi A \cap A^\ast$.
Additionally let~$\omega$ be an end of~$\Gamma$ and set~$C\defi C(S,\omega)$.
For a contradiction let us assume that there is a~$k \in \ensuremath{\mathbb{N}}$ such that every vertex of~$C$ has distance at most~$k$ from~$S$.
Let~$R=r_1,r_2, \ldots$ be a ray belonging to~$\omega$.
We now define a forest~$T$ as the union of a sequence of forests~$T_i$.
Let~$T_1$ be a path from~$r_1$ to~$S$ realizing the distance of~$r_1$ and~$S$, i.e.~$T_1$ is a shortest path between~$r_1$ and~$S$.
Assume that~$T_i$ is defined.
To define~$T_{i+1}$ we start in the vertex~$r_{i+1}$ and follow a shortest path from~$r_{i+1}$ to~$S$.
Either this path meets a vertex contained in~$T_{i}$, say~$v_{i+1}$, or it does not meet any vertex contained in~$T_{i}$.
In the first case let~$P_{i+1}$ be the path from~$r_{i+1}$ to~$v_{i+1}$.
In the second case we take the entire path as~$P_{i+1}$.
Set~$T_{i+1} \defi~T_i \cup P_{i+1}$.
Note that all~$T_i$ are forests by construction.
For a vertex~$v \in T_i$ let~$d_i(v,S)$ be the length of a shortest path in~$T_i$ from~$v$ to any vertex in~$S$.
Note that, as each component of each~$T_i$ contains exactly one vertex of~$S$ by construction, this is always well-defined.
Let~$P=r_i,x_1,x_2,\ldots,x_n,s$ with~$s \in S$ be a shortest path between~$r_i$ and~$S$.
As~$P$ is a shortest path between~$r_i$ and~$S$ the subpath of~$P$ starting in~$x_j$ and going to~$s$ is a shortest~$x_j-s$ path.
This implies that for every vertex~$v$ of any~$T_i$ we have~$d_i(v,S)\leq k$.
We now conclude that the diameter of all components of~$T_i$ is at most~$2k$ and hence each component of~${T \defi \bigcup T_i}$ also has diameter at most~$2k$, furthermore note that~$T$ is a forest.
As~$S$ is finite there is an infinite component of~$T$, say~$T^\prime$.
As~$T^\prime$ is an infinite tree of bounded diameter it contains a vertex of infinite degree, say~$u$.
So there are infinitely many paths from~$u$ to~$R$ which pairwise meet only in~$u$.
But this implies that~$u$ is dominating the ray~$R$, a contradiction. \end{proof}
Our next tool used in the proof of Theorem~\ref{char-two-ended-graph} is Lemma~\ref{smalldiameter} which basically states that small components have small diameter.
\begin{lemma}
\label{smalldiameter}
Let~$\Gamma$ be a connected two-ended quasi-transitive graph without dominated ends.
Additionally let~$S=S_1 \cup S_2$ be a finite vertex set such that the following holds:
\begin{enumerate}[\rm (i)]
\item~$S_1 \cap S_2 = \emptyset$.
\item~$\Gamma[S_i]$ is connected for~$i=1,2$.
\item~$S_i$ contains an element of each orbit for~$i=1,2$.
\end{enumerate}
Let~$H$ be a rayless component of~$\Gamma \setminus S$.
Then~$H$ has finite diameter. \end{lemma}
\begin{proof}
Let~$\Gamma,S$ and~$H$ be given.
Assume for a contradiction that~$H$ has unbounded diameter.
We are going to find a ray inside of~$H$ to obtain a contradiction.
Our first aim is to find a~$g \in \ensuremath{\mathsf{Aut}}(\Gamma)$ such that the following holds:
\begin{enumerate}[(i)]
\item~$gS_i \subsetneq H$.
\item~$gH \subsetneq H$.
\end{enumerate}
Let~$d_m$ be the maximal diameter of the~$S_i$, and let~$d_d$ be the distance between~$S_1$ and~$S_2$.
Finally let~$d_S = d_d+2d_m$.
First assume that~$H$ only has neighbors in exactly one~$S_i$.
This implies that~$\Gamma \setminus H$ is connected.
Let~$w$ be a vertex in~$H$ of distance greater than~$2d_S$ from~$S$ and let~$g \in \mathsf{Aut}(\Gamma)$ such that~$w\in gS$.
This implies that~$gS \subsetneq H$.
But as~$\Gamma \setminus H$ contains a ray, we can conclude that~$gH \subsetneq H$.
Otherwise~$gH$ would contain a ray, as~$\Gamma \setminus H$ contains a ray and is connected.
So let us now assume that~$H$ has a neighbor in both~$S_i$.
Let~$P$ be a shortest~$S_1$--$S_2$ path contained in~$H \cup S_1 \cup S_2$, say~$P$ has length~$k$.
We pick a vertex~$w \in H$ of distance at least~$2d_S +k+1$ from~$S$, and we pick a~$g \in \ensuremath{\mathsf{Aut}}(\Gamma)$ such that~$w \in gS$.
Obviously we know that~$gP \subseteq (gH \cup gS)$.
By the choice of~$g$ we also know that~$gP \subseteq H$.
This yields that~$gH \subseteq H$, as~$gH$ is small.
We can conclude that~$gH \neq H$ and hence~$gS_i \subsetneq H$ follows directly by our choice of~$g$.
Note that as $gH$ is a component of $\Gamma \setminus gS$ fulfilling all conditions we had
on $H$ we can iterate the above defined process with $gH$ instead of $H$.
We can now pick a vertex $v\in S$. Let $U$ be the set of images of $v$ under the automorphisms used in the iterated process above.
As $H$ is connected we apply the Star-Comb lemma, see \cite[Lemma 8.2.2.]{diestelBook10noEE}, to $H$ and $U$.
We now show that the result of the Star-Comb lemma cannot be a star. So assume that we obtain a star with center $x$, and let $\ell \defi |S|$.
Let $d_x$ be the distance from $S$ to $x$. By our construction we know that there is a step in which we use a $g_x\in \ensuremath{\mathsf{Aut}}(\Gamma)$ such that $d(S, g_xS) > d_x$.
Now pick $\ell+ 1$ many leaves of the star which come from steps in the process after we used $g_x$.
This implies that in the star all the paths from those $\ell+1$ leaves to $x$ have to pass through a separator of size $\ell$, which is a contradiction. So the Star-Comb lemma yields a comb and hence a ray.
\end{proof}
\begin{lemma}
\label{element infinite order}
Let~$\Gamma$ be a two-ended connected quasi-transitive graph without dominated ends and let~$(A,A^\ast)$ be a type 1 separation and let~$C$ be the big component of~$A \setminus A^\ast$.
Then there is a~$g \in \ensuremath{\mathsf{Aut}(\Gamma)}$ such that~$g(C) \subsetneq C$. \end{lemma}
\begin{proof}
Let~$\Gamma$ be a two-ended connected quasi-transitive graph without dominated ends and let~$(A,A^\ast)$ be a type 1 separation of~$\Gamma$.
Set~${d \defi \ensuremath{\mathsf{diam}}(A \cap A^\ast)}$.
Say the ends of~$\Gamma$ are~$\omega_1$ and~$\omega_2$ and set~${C_i \defi C(A \cap A^*,\omega_i)}$.
Our goal now is to find an automorphism~$g$ such that~$g(C_1) \subsetneq C_1$.
To find the desired automorphism~$g$ first pick a vertex~$v$ of distance~${d+1}$ from~${A \cap A^\ast}$ in~$C_1$.
As~$(A,A^\ast)$ is a type 1 separation of the quasi-transitive graph~$\Gamma$ there is an automorphism~$h$ of~$\Gamma$ that maps a vertex of~$A \cap A^\ast$ to~$v$.
Because~${\Gamma[A \cap A^\ast]}$ is connected and because~${d(v,A \cap A^\ast) \geq d+1}$ we can conclude that~$(A \cap A^\ast)$ and~${h (A \cap A^\ast)}$ are disjoint.
If~$h(C_1) \subsetneq C_1$ we can choose~$g$ to be~$h$, so let us assume that~$h(C_1) \supseteq C_2$.
Now pick a vertex~$w$ in~$C_1$ of distance at least~$3d+1$ from~$A \cap A^\ast$, which is again possible by Lemma~\ref{far away}.
Let~$f$ be an automorphism such that~${w \in f(A \cap A^\ast)}$.
Because~${d(w,A \cap A^\ast) \geq 3d+1}$ we can conclude that~
$${A \cap A^\ast, ~h(A\cap A^\ast)} \mbox{ and } {f(A \cap A^\ast)}$$ are pairwise disjoint and hence in particular~${f \neq h}$.
Again if~$f(C_1) \subsetneq C_1$ we may pick~$f$ as the desired~$g$, so assume that~$f(C_1) \supseteq C_2$.
This implies in particular that~$fC_2 \subsetneq hC_2$ which yields that
$$h^{-1}f(C_2) \subsetneq C_2$$ which concludes this proof. \end{proof}
Note that the automorphism in Lemma~\ref{element infinite order} has infinite order. Now we are ready to prove Theorem~\ref{char-two-ended-graph}.
\begin{proof}[\rm \bf Proof of Theorem~\ref{char-two-ended-graph}]
We start with {\bf (i)~$\Rightarrow$ (ii)}.
\noindent So let~$\Gamma$ be a graph fulfilling the conditions in Theorem~\ref{char-two-ended-graph} and let~$\Gamma$ be two-ended.
Additionally let~$(A,A^\ast)$ be a type 1 separation of~$\Gamma$ given by Lemma~\ref{good separation} and let~$d$ be the diameter of~$\Gamma[A \cap A^\ast]$.
Say the ends of~$\Gamma$ are~$\omega_1$ and~$\omega_2$ and set~${C_i \defi C(A\cap A^*, \omega_i)}$.
By Lemma~\ref{element infinite order} we know that there is an element~${g \in \ensuremath{\mathsf{Aut}(\Gamma)}}$ such that~$g(C_1) \subsetneq C_1$.
We know that either~$A \cap gA^*$ or~$\ensuremath{A^*} \cap gA$ is not empty, without loss of generality let us assume the first case happens.
Now we are ready to define the desired tree-amalgamation.
We define the two graphs~$\Gamma_1$ and~$\Gamma_2$ like follows:
\begin{align*}
\Gamma_1 \defi \Gamma_2 \defi \Gamma[A^\ast \cap g A].
\end{align*}
Note that as~$A \cap A^\ast$ is finite and because any vertex of any ray in~$\Gamma$ with distance greater than~$3d+1$ from~${A\cap A^\ast}$ is not contained in~$\Gamma_i$, we can conclude that~$\Gamma_i$ is a rayless graph.\footnote{Here we use that any ray belongs to an end in the following manner: since~$A \cap A^*$ and~${g(A \cap A^*)}$ are finite separators of~$\Gamma$ separating~$\Gamma_1$ from each~$C_i$, no ray in~$\Gamma_i$ can be equivalent to any ray in any~$C_i$; otherwise~$\Gamma$ would contain at least three ends.}
The tree~$T$ for the tree-amalgamation is just a double ray.
The families of subsets of~$V(\Gamma_i)$ are just~$A \cap A^\ast$ and~$g(A\cap A^\ast)$ and the identifying maps are the identity.
It is straightforward to check that this indeed defines the desired tree-amalgamation.
The only thing remaining is to check that~$\Gamma_i$ is connected and has finite diameter.
It follows straight from the construction and the fact that~$\Gamma$ is connected that~$\Gamma_i$ is indeed connected.
It remains to show that~$\Gamma_i$ has finite diameter.
We can conclude this from Lemma~\ref{smalldiameter} by setting~$S \defi g^{-1}(A \cap A^\ast) \cup g^2 (A\cap A^\ast)$, as~$\Gamma_i$ is then contained in a rayless component of~$\Gamma \setminus S$.
\vspace*{0.5cm} \noindent {\bf (ii)~$\Rightarrow$ (iii)} Let~$\Gamma= \ensuremath{\bar{\Gamma}} \ast_T \ensuremath{\bar{\Gamma}}$, where~$\ensuremath{\bar{\Gamma}}$ is a rayless graph of diameter~$\lambda$ and~$T$ is a double ray. As~$T$ is a double ray there are exactly two adhesion sets, say~$S_1$ and~$S_2$, in each copy of~$\ensuremath{\bar{\Gamma}}$. We define~$\hat\Gamma \defi \ensuremath{\bar{\Gamma}}\setminus S_2$. Note that~$\hat\Gamma \neq \emptyset$. It is not hard to see that~$V(\Gamma)=\bigsqcup_{i\in\ensuremath{\mathbb{Z}}} V(\Gamma_i)$, where each~$\Gamma_i$ is isomorphic to~$\hat\Gamma$. We are now ready to define our quasi-isometric embedding between~$\Gamma$ and the double ray~${R=\ldots,v_{-1},v_0,v_1,\ldots}$.
Define~$\phi\colon V(\Gamma)\to V(R)$ such that~$\phi$ maps every vertex of~$\Gamma_i$ to the vertex~$v_i$ of~$R$.
Next we show that~$\phi$ is a quasi-isometric embedding.
Let~$v,v'$ be two vertices of~$\Gamma$. We can suppose that~$v\in V(\Gamma_i)$ and~$v'\in V(\Gamma_j)$, where~$i\leq j$. One can see that~$d_{\Gamma}(v,v')\leq (|j-i|+1)\lambda$ and so we infer that
$$\frac{1}{\lambda} d_{\Gamma}(v,v')-\lambda\leq d_R(\phi(v),\phi(v'))=|j-i| \leq \lambda d_{\Gamma}(v,v')+\lambda.$$
As~$\phi$ is surjective we know that~$\phi$ is quasi-dense.
Thus we proved that~$\phi$ is a quasi-isometry between~$\Gamma$ and~$R$.
\noindent {\bf (iii)~$\Rightarrow$ (i)} Suppose that~$\phi$ is a quasi-isometry between~$\Gamma$ and the double ray, say~$R$, with associated constant~$\lambda$.
We shall show that~$\Gamma$ has exactly two ends, the case that~$\Gamma$ has exactly one end leads to a contradiction in an analogous manner.
Assume to the contrary that there is a finite subset of vertices~$S$ of~$\Gamma$ such that~$\Gamma\setminus S$ has at least three big components.
Let~$R_1:=\{u_i\}_{i\in \ensuremath{\mathbb{N}}}$,~${R_2:=\{v_i\}_{i\in \ensuremath{\mathbb{N}}}}$ and~${R_3:=\{r_i\}_{i\in \ensuremath{\mathbb{N}}}}$ be three rays of~$\Gamma$, exactly one in each of those big components.
In addition one can see that~$d_{R}(\phi(x_i),\phi(x_{i+1}))\leq 2\lambda$, where~$x_i$ and~$x_{i+1}$ are two consecutive vertices of one of those rays.
Since~$R$ is a double ray and thus has exactly two ends, two of the infinite sets~$\phi(R_i) \defi \{\phi(x)\mid x\in R_i\}$ for~$i=1,2,3$ converge to the same end of~$R$.
Suppose that~$\phi(R_1)$ and~$\phi(R_2)$ converge to the same end.
For a given vertex~$u_i\in R_1$ let~$v_{j_i}$ be a vertex of~$R_2$ such that the distance~$d_R(\phi(u_i),\phi(v_{j_i}))$ is minimum.
We note that~$d_R(\phi(u_i),\phi(v_{j_i}))\leq2\lambda$.
As~$\phi$ is a quasi-isometry we can conclude that~$d_{\Gamma}(u_i,v_{j_i})\leq3\lambda^2$.
Since each~$u_i$--$v_{j_i}$ path has length at most~$3\lambda^2$ and has to meet the finite set~$S$, some vertex of~$S$ has bounded distance to infinitely many vertices of~$R_1$; hence there is a vertex dominating a ray, and so we have a dominated end, which yields a contradiction. \end{proof}
\begin{thm}
\label{thinend}
Let~$\Gamma$ be a two-ended quasi-transitive graph without dominated ends.
Then each end of~$\Gamma$ is thin. \end{thm}
\begin{proof}
By Lemma~\ref{good separation} we can find a type 1 separation~$(A,A^\ast)$ of~$\Gamma$.
Suppose that the diameter of~${\Gamma[A\cap A^\ast]}$ is equal to~$d$.
Let~$C$ be a big component of~${\Gamma \setminus (A\cap A^\ast)}$ and let~$R$ be a ray of the end living in~$C$.
By Lemma~\ref{far away} we can pick a vertex~$r_i$ of the ray~$R$ with distance greater than~$d$ from~$A \cap A^\ast$.
As~$\Gamma$ is quasi-transitive and~${A \cap A^\ast}$ contains an element of each orbit, we can find an automorphism~$g$ such that~${r_i \in g(A \cap A^\ast)}$.
By the choice of~$r_i$ we now have that
$$(A \cap A^\ast) \cap g(A \cap A^\ast) = \emptyset.$$
Repeating this process yields a defining sequence of vertex sets for the end living in~$C$, each of the same finite size.
This implies that the degree of the end living in~$C$ is finite. \end{proof}
For a two-ended quasi-transitive graph~$\Gamma$ without dominated ends let~$s(\Gamma)$ be the maximal number of disjoint double rays in~$\Gamma$. By Theorem~\ref{thinend} this number is always finite. With a slight modification to the proof of Theorem~\ref{thinend} we obtain the following corollary:
\begin{coro}
Let~$\Gamma$ be a two-ended quasi-transitive graph without dominated ends.
Then the degree of each end of~$\Gamma$ is at most~$s(\Gamma)$. \end{coro}
\begin{proof}
Instead of starting the proof of Theorem \ref{thinend} with an arbitrary separation of finite order we now start with a separation~$(B,B^{\ast})$ of order~$s(\Gamma)$ separating the ends of~$\Gamma$ which we then extend to a connected separation~$(A,A^\ast)$ containing an element of each orbit.
The proof then follows identically with only one additional argument.
After finding the defining sequence as images of~$(A,A^\ast)$, which is too large compared to~$s(\Gamma)$, we can reduce this back down to the separations given by the images of~$(B,B^{\ast})$ because~$(B\cap B^{\ast}) \subseteq (A \cap A^\ast)$ and because~$(B,B^{\ast})$ already separated the ends of~$\Gamma$. \end{proof}
It is worth mentioning that Jung \cite{jung1981note} proved that if a connected locally finite quasi-transitive graph has more than one end, then it has a thin end.
\subsection{Two-ended graphs with dominated ends} A natural question arises: what can we say about two-ended quasi-transitive graphs with dominated ends? An easy example is a two-ended quasi-transitive graph with only finitely many dominating vertices such that removing the dominating vertices leaves a connected graph. In this case we discard the dominating vertices and apply Theorem \ref{char-two-ended-graph} to obtain a strongly thin tree-amalgamation. Adding the removed dominating vertices back to the adhesions of the tree-amalgamation then yields a strongly thin tree-amalgamation for the original graph. But not all examples are as easy as the above one. In this section we show that arbitrary two-ended quasi-transitive graphs need not admit a strongly thin tree-amalgamation, let alone one satisfying the assumption of Theorem \ref{char-two-ended-graph}. Indeed we construct a family of two-ended quasi-transitive graphs with dominated ends which do not admit the splitting introduced in Theorem \ref{char-two-ended-graph}. However, we show that two-ended transitive graphs always admit a strongly thin tree-amalgamation.
\begin{example}
Let~$\Gamma_1$ be a one-ended quasi-transitive graph (for instance the complete graph~$K_{\aleph_0}$ on~$\aleph_0$ vertices).
We take two copies of $\Gamma_1$ and we identify a vertex of the first copy with a vertex of the second copy.
We denote the resulting graph by $\Gamma_{1}'$.
Take a rayless quasi-transitive graph~$\Gamma$ and join a vertex~$v$ of $\Gamma_1'$ to all vertices of~$\Gamma$.
We obtain a new graph~$\Lambda'$ which is quasi-transitive and has exactly two ends. \end{example}
\begin{thm} The graph~$\Lambda'$ does not admit any strongly thin tree-amalgamation. \end{thm}
\begin{proof} Assume to the contrary that the graph~$\Lambda'$ admits a strongly thin tree-amalgamation~${\Lambda_1\ast_T\Lambda_2}$, where~$T$ is the double ray. More precisely, assume that~$S_i$ and~$T_i$ are the adhesions of~$\Lambda_1$ and~$\Lambda_2$ in the tree-amalgamation, respectively, for~$i=1,2$, and let~$S_i$ correspond to~$T_i$ for~$i=1,2$. We note that~$\Lambda_1$ and~$\Lambda_2$ are rayless graphs. On the other hand, $\Lambda'$ is a two-ended graph. So one of the adhesions~$S_1$ or~$S_2$ of the tree-amalgamation separates the two ends of~$\Lambda'$, and hence one of the~$S_i$ has to contain~$v$. Let~$k$ be one more than the maximum of the sizes of~$S_1$ and~$S_2$. Since each end of~$\Lambda'$ is thick, we are able to find at least~$k$ disjoint rays belonging to each end. Pick one of the ends and consider~$k$ disjoint rays converging to it in the tree-amalgamation. We note that every copy of~$\Lambda_1$ is attached to~$\Lambda_2$ via the identification map~$id\colon S_1\to T_1$ and each copy of~$\Lambda_2$ is attached to~$\Lambda_1$ via~$id\colon S_2\to T_2$. Thus the~$k$ disjoint rays converging to that end of~$\Lambda'$ all meet~$S_1$ (equivalently~$T_1$) or~$S_2$ (equivalently~$T_2$), and so we derive a contradiction, as these sets have size at most~$k-1$. \end{proof}
Next we can ask what happens if we replace quasi-transitivity with transitivity. We answer this question in the following theorem, but first we need a lemma.
\begin{lemma}{\rm\cite[Proposition 4.1]{ThomassenWoess}}\label{thomasenwoess}
Let $\Gamma$ be a connected infinite graph, let $e$ be an edge of $\Gamma$ and let $k\in\mathbb N$.
Then $\Gamma$ has only finitely many $k$-tight cuts meeting $e$. \end{lemma}
Next we show that a two-ended transitive graph cannot have any dominated end.
\begin{thm} Let $\Gamma$ be a two-ended graph with a dominated end such that~$\ensuremath{\mathsf{Aut}(\Gamma)}$ has~$k$ orbits on~$V(\Gamma)$. Then~$k$ is at least 2. \end{thm}
\begin{proof}
Assume to the contrary that~$k=1$, so that~$\Gamma$ is a transitive graph. Consider a vertex~$v\in V(\Gamma)$. We claim that $v$ dominates both ends of $\Gamma$. Suppose not. Then we can divide the vertex set into two sets: let $W_1$ be the set of vertices dominating one fixed end and let $W_2$ be the set of the remaining vertices, which must dominate the other end. We note that $W_1$ does not intersect $W_2$: otherwise some vertex would dominate both ends, and since the graph is transitive, every vertex would dominate both ends. Now take a finite separator separating the two ends. Since each vertex of the graph would dominate both ends, infinitely many paths meeting pairwise in only one vertex would have to pass through this finite separator, a contradiction. Hence we may assume that $W_1\cap W_2=\emptyset$, and we note that $V(\Gamma)=W_1\sqcup W_2$. If the number of edges in $E(W_1,W_2)$ is finite, then $E(W_1,W_2)$ forms a tight cut. Indeed, if $W_i$ is not connected for $i=1,2$, then one component of $W_i$ must be big and the remaining components are small. The small components contain dominating vertices, which yields a contradiction. Thus each $W_i$ is connected and $E(W_1,W_2)$ forms a tight cut. On the other hand, $\ensuremath{\mathsf{Aut}}(\Gamma)$ is infinite, and by Lemma \ref{thomasenwoess} we are able to find a $g\in \ensuremath{\mathsf{Aut}}(\Gamma)$ such that $gE(W_1,W_2)$ does not touch $E(W_1,W_2)$ and $gE(W_1,W_2)\subseteq \Gamma[W_1]$. Thus $gE(W_1,W_2)$ divides $W_1$ into at least two subgraphs in such a way that one of them is small. But each vertex of $W_1$ is a dominating vertex, which yields a contradiction, as $|gE(W_1,W_2)|$ is finite. Hence $E(W_1,W_2)$ is infinite. There is a finite separator $S$ in $\Gamma$ separating the ends. Without loss of generality assume that $S\subseteq W_1$. Let $C_1$ and $C_2$ be the big components of $\Gamma\setminus S$ containing $\omega_L$ and $\omega_R$, respectively. Furthermore we may assume that $C_2$ contains $E(W_1,W_2)$ and that $\omega_L$ lives in $W_1$ and $\omega_R$ lives in $W_2$. So there is a vertex of $W_1$ in $C_2$, and this vertex dominates the end $\omega_L$.
Therefore we derive a contradiction, as $S$ is a finite separator and infinitely many paths would need to pass through it to reach $\omega_L$. Hence the claim is proved.
As~$v$ dominates both ends of~$\Gamma$, infinitely many paths meeting pairwise only in~$v$ must cross any finite separator separating the ends, which yields a contradiction. So~$\Gamma$ cannot be a transitive graph and hence~$k\geq 2$, as desired. \end{proof} The above theorem implies the following corollary, which characterizes two-ended transitive graphs.
\begin{coro} Let~$\Gamma$ be a connected transitive graph. Then the following statements are equivalent: \begin{enumerate}[\rm (i)] \item~$\Gamma$ is two-ended. \item~$\Gamma$ can be split as a strongly thin tree-amalgamation. \item~$\Gamma$ is quasi-isometric to the double ray.
\end{enumerate} \end{coro}
\section{Groups acting on two-ended graphs} \label{action} In this section we investigate groups acting with finitely many orbits on two-ended graphs without dominated ends. We start with the following lemma, which states that there are only finitely many~$k$-tight separations containing a given vertex. Lemma~\ref{New Lemma 5} is a separation version of a result of Thomassen and Woess for vertex cuts~\cite[Proposition 4.2]{ThomassenWoess}, with a proof closely following theirs.
\begin{lemma}\label{New Lemma 5}
Let~$\Gamma$ be a two-ended graph without dominated ends. Then for any vertex~$v \in V(\Gamma)$ there are only finitely many~$k$-tight separations containing~$v$. \end{lemma}
\begin{proof}
We apply induction on~$k$.
The case~$k =1$ is trivial.
So let~$k \geq 2$ and let~$v$ be a vertex contained in the separator of a~$k$-tight separation~$(A,A^\ast)$.
Let~$C_1$ and~$C_2$ be the two big components of~$\Gamma \setminus (A \cap A^\ast)$.
As~$(A,A^\ast)$ is a~$k$-tight separation we know that~$v$ is adjacent to both~$C_1$ and~$C_2$.
We now consider the graph~$\Gamma^- \defi \Gamma -v$.
As~$v$ does not dominate any end, we can find finite vertex sets~$S_1 \subsetneq C_1$ and~$S_2 \subsetneq C_2$ such that~$S_i$ separates~$v$ from the end living in~$C_i$ for~$i \in \{1,2\}$.\footnote{A finite vertex set~$S$ separates a vertex~$v \notin S$ from an end~$\omega_1$ if~$v$ is not contained in the component of~$\Gamma \setminus S$ in which~$\omega_1$ lives.}
For each pair~$x,y$ of vertices with~$x \in S_1$ and~$y \in S_2$ we now pick an~$x$--$y$ path~$P_{xy}$ in~$\Gamma^-$.
This is possible as~$k \geq 2$ and because~$(A,A^\ast)$ is~$k$-tight.
Let~$\ensuremath{\mathcal{P}}$ be the set of all those paths and let~$V_P$ be the set of vertices contained in the paths in~$\ensuremath{\mathcal{P}}$.
Note that~$V_P$ is finite because each path~$P_{xy}$ is finite and both~$S_1$ and~$S_2$ are finite.
By the induction hypothesis we know that for each vertex in~$V_P$ there are only finitely many~$(k-1)$-tight separations meeting that vertex.
So we infer that there are only finitely many~$(k-1)$-tight separations of~$\Gamma^-$ meeting~$V_P$.
Suppose that there is a~$k$-tight separation~$(B,\ensuremath{B^*})$ such that~$v \in B \cap \ensuremath{B^*}$ and~$B \cap \ensuremath{B^*}$ does not meet~$V_P$.
As~$(B,\ensuremath{B^*})$ is~$k$-tight we know that~$v$ is adjacent to both big components of~$\Gamma \setminus B \cap \ensuremath{B^*}$.
But this contradicts our choice of~$S_i$.
Hence there are only finitely many~$k$-tight separations containing~$v$, as desired. \end{proof}
In the following we extend the notion of diameter from connected graphs to arbitrary (not necessarily connected) graphs. Let~$\Gamma$ be a graph. We denote the set of all subgraphs of~$\Gamma$ by~$\mathcal P(\Gamma)$. We define the function~$\rho\colon\mathcal P(\Gamma)\to\ensuremath{\mathbb{Z}} \cup \{\infty\}$ by setting~$\rho(X)=\mathsf{sup}\{\mathsf{diam}(C)\mid C\text{ is a component of } X\}$.\footnote{If a component~$C$ does not have finite diameter, we say its diameter is infinite.}
\begin{lemma}\label{finitecomplement}
Let~$\Gamma$ be a quasi-transitive two-ended graph without dominated ends such that $|\mathsf{St}(v)|<\infty$ for every vertex $v$ of $\Gamma$, and let~$(A,A^*)$ be a tight separation of~$\Gamma$.
Then for infinitely many~${g\in \mathsf{Aut}(\Gamma)}$ either~$\rho({A\Delta gA})$ or~$\rho((A\Delta gA)^c)$ is finite. \end{lemma}
\begin{proof}
It follows from Lemma \ref{New Lemma 5} and $|\mathsf{St}(v)|<\infty$ that~$(A,A^\ast)$ and~$g(A,A^\ast)$ are nested for all but finitely many~$g\in\ensuremath{\mathsf{Aut}(\Gamma)}$.
Let~$g \in \ensuremath{\mathsf{Aut}(\Gamma)}$ such that
$${(A\cap A^\ast) \cap g(A \cap A^\ast) = \emptyset}.$$
By definition we know that either~${A\Delta gA}$ or~$({A\Delta gA})^c$ contains a ray.
Without loss of generality we may assume the second case.
The other case is analogous.
We now show that the number~${\rho(A\Delta gA)}$ is finite.
Suppose that~$C_1$ is the big component of~$\Gamma\setminus (A\cap A^\ast)$ which does not meet~$g(A\cap A^\ast)$ and~$C_2$ is the big component of~$\Gamma\setminus g(A\cap A^\ast)$ which does not meet~$(A\cap A^\ast)$.
By Lemma~\ref{far away} we are able to find type 1 separations~$(B,\ensuremath{B^*})$ and~$(C,\ensuremath{C^*})$ in such a way that~${B\cap\ensuremath{B^*}\subsetneq C_1}$ and~${C\cap \ensuremath{C^*}\subsetneq C_2}$, and such that~$B \cap \ensuremath{B^*}$ and~$C \cap \ensuremath{C^*}$ each have empty intersection with~$A \cap \ensuremath{A^*}$ and~$g(A \cap \ensuremath{A^*})$.
Now it is straightforward to verify that~${A\Delta gA}$ is contained in a rayless component~$X$ of~${\Gamma \setminus \left( (B \cap \ensuremath{B^*})\bigcup (C \cap\ensuremath{C^*})\right )}$.
Using Lemma~\ref{smalldiameter} we can conclude that~$X$ has finite diameter and hence~${\rho(A \Delta gA)}$ is finite. \end{proof}
Assume that an infinite group~$G$ acts on a two-ended graph~$\Gamma$ without dominated ends with finitely many orbits and let~$(A,A^\ast)$ be a tight separation of~$\Gamma$. By Lemma~\ref{finitecomplement} we may assume~$\rho(A\Delta gA)$ is finite. We set
$$H:=\{g\in G\mid \rho(A\Delta gA) < \infty \}.$$
We call~$H$ the \emph{separation subgroup} induced by~$(A,A^\ast)$.\footnote{See the proof of Lemma~\ref{index 2 subgroup} for a proof that~$H$ is indeed a subgroup.} In the sequel we study separation subgroups. We note that we infer from Lemma \ref{finitecomplement} that~$H$ is infinite.
\begin{lemma}\label{index 2 subgroup}
Let~$G$ be an infinite group acting on a two-ended graph~$\Gamma$ without dominated ends with finitely many orbits almost freely.
Let~$H$ be the separation subgroup induced by a tight separation~$(A,A^\ast)$ of~$\Gamma$.
Then~$H$ is a subgroup of~$G$ of index at most~$2$. \end{lemma}
\begin{proof}
We first show that~$H$ is indeed a subgroup of~$G$.
As automorphisms preserve distances, for~$h \in H$ and~$g \in G$ we have
$$\rho (g(A\Delta hA))=\rho (A\Delta hA)<\infty.$$
As this is in particular true for~$g = h^{-1}$ we only need to show that~$H$ is closed under multiplication and this is straightforward to check as one may see that
\begin{align*}
A\Delta h_1h_2A& =(A\Delta h_1A)\Delta (h_1A\Delta h_1h_2A)\\
& =(A\Delta h_1A)\Delta h_1(A\Delta h_2A) .
\end{align*}
Since~$\rho(A\Delta h_iA)$ is finite for~$i=1,2$, we conclude that~$h_1h_2$ belongs to~$H$.
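The set identity used here is an instance of two elementary facts, which we record for convenience: symmetric difference satisfies
$$X\Delta Z=(X\Delta Y)\Delta (Y\Delta Z)$$
for all sets~$X,Y,Z$ (as~$\Delta$ is associative and~$Y\Delta Y=\emptyset$), and automorphisms commute with Boolean operations, so~$g(X\Delta Y)=gX\Delta gY$ for every~$g\in\ensuremath{\mathsf{Aut}}(\Gamma)$. Taking~$X=A$,~$Y=h_1A$ and~$Z=h_1h_2A$ yields exactly the displayed computation.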
Now we only need to establish that~$H$ has index at most two in~$G$.
Assume that~$H$ is a proper subgroup of~$G$ whose index is bigger than two.
Let~$H$,~$Hg_1$ and~$Hg_2$ be three distinct cosets.
Furthermore, by Lemma~\ref{finitecomplement} we may assume that~$\rho((A \Delta g_i A)^c)$ is finite for~$i=1,2$.
Note that
$$A\Delta g_1g^{-1}_2A=(A\Delta g_1A)\Delta g_1(A\Delta g_2^{-1}A).$$
On the other hand we already know that
$$A\Delta g_1g^{-1}_2A=(A\Delta g_1A)^c\Delta (g_1(A\Delta g_2^{-1}A))^c.$$
We notice that~$\rho(A\Delta g_i A)$ is infinite for~$i=1,2$, as~$g_i\notin H$.
Since~$g_2 \notin H$ we know that~$g_2^{-1} \notin H$ and so~$\rho(g_1(A\Delta g_2^{-1}A))$ is infinite.
By Lemma~\ref{finitecomplement} we infer that~$\rho(g_1(A\Delta g_2^{-1}A)^c)$ is finite.
Now as the two numbers~${\rho((A\Delta g_1A)^c)}$ and~${\rho( g_1(A\Delta g_2^{-1}A)^c)}$ are finite, we conclude that~${\rho( A \Delta g_1g_2^{-1}A) < \infty}$.
Thus we conclude that~$g_1g_2^{-1}$ belongs to~$H$.
It follows that~${H = H g_1g_2^{-1}}$ and multiplying by~$g_2$ yields~${Hg_1 = Hg_2}$ which contradicts~$H g_1 \neq H g_2$. \end{proof}
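We record the complement identity invoked in the proof above: for any sets~$X$ and~$Y$,
$$X^c\Delta Y^c=(X^c\cap Y)\cup(X\cap Y^c)=X\Delta Y,$$
so replacing both sides of a symmetric difference by their complements leaves the symmetric difference unchanged.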
We now are ready to state the main theorem of this section.
\begin{theorem}\label{cyclic finite index}
Let~$G$ be a group acting with only finitely many orbits on a two-ended graph~$\Gamma$ without dominated ends almost freely.
Then~$G$ contains an infinite cyclic subgroup of finite index. \end{theorem}
\begin{proof}
Let~$(A,A^*)$ be a tight separation and let~$(\bar{A}, \bar{A}^\ast)$ be the type 2 separation given by Corollary~\ref{type 2 separation}.
Additionally let~$H$ be the separation subgroup induced by~$(A,A^\ast)$.
We now use Lemma~\ref{element infinite order} on~$(\bar{A},\bar{A}^*)$ to find an element~${h \in G}$ of infinite order.
It is straightforward to check that~$h \in H$.
Now it only remains to show that~$L \defi \langle h \rangle$ has finite index in~$H$.
Suppose for a contradiction that~$L$ has infinite index in~$H$ and for simplicity set~$Z:= A\cap A^*$.
This implies that~$H = \bigsqcup_{i \in \ensuremath{\mathbb{N}}} L h_i$.
We have the two following cases:\\
{\bf{Case I:}} There are infinitely many~${i\in \ensuremath{\mathbb{N}}}$ and~${j_i\in \ensuremath{\mathbb{N}}}$ such that~${h_iZ=h^{j_i}Z}$ and so~${Z=h^{-j_i}h_iZ}$.
It follows from Lemma \ref{New Lemma 5} that there are only finitely many~$f$-tight separations meeting~$Z$ where~$|Z|=f$.
We infer that there are infinitely many~$k\in \ensuremath{\mathbb{N}}$ such that~$h^{-j_\ell}h_{\ell}Z=h^{-j_k}h_kZ$ for a specific~$\ell\in\ensuremath{\mathbb{N}}$.
Since the size of~$Z$ is finite, we deduce that there is a~$v\in Z$ such that for a specific~$m\in \ensuremath{\mathbb{N}}$ we have~$h^{-j_m}h_{m}v=h^{-j_n}h_nv$ for infinitely many~$n\in\ensuremath{\mathbb{N}}$.
If infinitely many of the elements~$h^{-j_n}h_n$ were distinct, then the stabilizer of~$v$ would be infinite, contradicting the almost freeness of the action.
Hence there are~$n_1\neq n_2\in \ensuremath{\mathbb{N}}$ with
$$(h^{-j_m}h_{m})^{-1}h^{-j_{n_1}}h_{n_1}=(h^{-j_m}h_m)^{-1}h^{-j_{n_2}}h_{n_2}.$$
The above equality implies that~$h^{-j_{n_1}}h_{n_1}=h^{-j_{n_2}}h_{n_2}$, and so~$Lh_{n_1}=Lh_{n_2}$, which yields a contradiction.
\newline
{\bf{Case II:}} We suppose that there are only finitely many~${i\in \ensuremath{\mathbb{N}}}$ and~${j_i\in \ensuremath{\mathbb{N}}}$ such that~$h_iZ=h^{j_i}Z$.
Define the graph~$X \defi \Gamma[A \Delta hA]$ and note that~${\Gamma=\bigcup_{i\in\ensuremath{\mathbb{Z}}} h^iX}$.
We can assume that~${h_iZ\subseteq h^{j_i}X}$ for infinitely many~$i\in \ensuremath{\mathbb{N}}$ and~$j_i\in \ensuremath{\mathbb{N}}$ and so we have~$h^{-j_i}h_iZ\subseteq X$.
Let~$p$ be a shortest path between~$Z$ and~$hZ$.
For every vertex~$v$ of~$p$, by Lemma \ref{New Lemma 5} we know that there are only finitely many tight separations~$gZ$ for~$g\in G$ meeting~$v$.
So we infer that there are infinitely many~$k\in \ensuremath{\mathbb{N}}$ such that~$h^{-j_\ell}h_{\ell}Z=h^{-j_k}h_kZ$ for a specific~$\ell\in\ensuremath{\mathbb{N}}$.
Then, by a method analogous to the one used in the preceding case, the stabilizer of at least one vertex of~$Z$ would otherwise be infinite, and again we conclude that~$(h^{-j_m}h_{m})^{-1}h^{-j_{n_1}}h_{n_1}=(h^{-j_m}h_m)^{-1}h^{-j_{n_2}}h_{n_2}$ for some~$n_1\neq n_2\in \ensuremath{\mathbb{N}}$.
Again this yields a contradiction. Hence each case gives a contradiction, which proves the theorem.
\end{proof}
We close the paper with the following corollary which is an immediate consequence of the above theorem and Theorem \ref{Classifiication}.
\begin{coro} Let~$G$ be an infinite group acting with only finitely many orbits on a two-ended graph~$\Gamma$ without dominated ends almost freely. Then~$G$ is two-ended.\qed \end{coro}
\end{document} |
\begin{document}
\title{Bounded gaps between primes in short intervals }
\author{Ryan Alweiss and Sammy Luo}
\maketitle
\begin{abstract} Baker, Harman, and Pintz showed that a weak form of the Prime Number Theorem holds in intervals of the form $[x-x^{0.525},x]$ for large $x$. In this paper, we extend a result of Maynard and Tao concerning small gaps between primes to intervals of this length. More precisely, we prove that for any $\delta\in [0.525,1]$ there exist positive integers $k,d$ such that for sufficiently large $x$, the interval $[x-x^\delta,x]$ contains $\gg_{k} \frac{x^\delta}{(\log x)^k}$ pairs of consecutive primes differing by at most $d$. This confirms a speculation of Maynard that results on small gaps between primes can be refined to the setting of short intervals of this length.
\end{abstract}
\section{Introduction} The classical Prime Number Theorem gives an asymptotic estimate for $\pi(x)$, the number of primes $ \leq x$. It can be written in the form
\begin{equation} \label{eqn:pnt} \pi(x)=\Li(x)+ O(x\exp(-c(\log x)^{\frac{1}{2}})), \end{equation} for some constant $c$, where \[ \Li(x)=\int_{2}^x \frac{dt}{\log t}\sim \frac{x}{\log x}. \] Under the assumption of the Riemann Hypothesis, the error term can be improved to $O(x^{\frac{1}{2}}\log x)$. Assuming this improved bound on the error term, we can estimate the number of primes in a short interval $[x-h,x]$ for $x^{\frac{1}{2}+\epsilon}\leq h \leq x$ and $\epsilon>0$, obtaining \begin{equation} \label{eqn:largebound} \pi(x)-\pi(x-h)\sim h(\log x)^{-1}. \end{equation} More generally, if the error term in \eqref{eqn:pnt} could be improved to $O(x^\delta)$, where $0<\delta<1$, we would obtain \eqref{eqn:largebound} for $ x^{\delta+\epsilon} \leq h \leq x$. Since no such improvement is known unconditionally, it is remarkable that results of the form \eqref{eqn:largebound} have nevertheless been shown for some range of values of $\delta<1$. The first result of this form is due to Hoheisel \cite{hoheisel},
who obtained \eqref{eqn:largebound} for $\delta=1-\frac{1}{33000}$. The best range of $\delta$ for which \eqref{eqn:largebound} is currently known is $\delta\in [\frac{7}{12},1]$, a result due to Heath-Brown \cite{hb}. These results extend readily to the setting of primes in arithmetic progressions. Let $\pi(x;q,a)$ be the number of primes $p\leq x$ such that $p\equiv a\bmod q$. If $\gcd(a,q)=1$, the result corresponding to \eqref{eqn:largebound} is that \[ \pi(x;q,a)-\pi(x-h;q,a)\sim \frac{h}{\phi(q)\log x}, \] where $x^\delta \leq h \leq x$, for $\delta > \frac{7}{12}$ and $q\leq (\log x)^A$.
If we are content with a lower bound on the number of primes in the range $[x-x^\delta,x]$ in place of an asymptotic formula, it is possible to extend these results to smaller values of $\delta$. Heath-Brown and Iwaniec \cite{hbi} showed that \begin{equation} \label{eqn:lbprime} \pi(x)-\pi(x-h)\gg h (\log x)^{-1} \end{equation} for $x^\delta \leq h \leq x$ when $\delta > \frac{11}{20}$.
The range of $\delta$ was subsequently improved several times. In 1996, Baker and Harman \cite{1bhp} showed \eqref{eqn:lbprime} for $\delta\geq 0.535$. In 2001, Baker, Harman, and Pintz (BHP) \cite{bhp} further extended the result to all $\delta\geq 0.525$. To date, this remains the best range of $\delta$ for which \eqref{eqn:lbprime} is known. As an immediate consequence of the work of BHP, we have \[ p_{n+1}-p_n \ll p_n^{0.525}, \] where $p_n$ denotes the $n$th prime. This is the best known unconditional upper bound on $p_{n+1}-p_n$ for sufficiently large $n$. (See the work of Ford, Green, Konyagin, Maynard, and Tao \cite{longgaps} for lower bounds on large gaps between primes.)
In this paper, we carry out a suggestion of Maynard and combine the ideas of BHP with recent advances in the study of small gaps between primes. To make this precise, we first recall the Twin Prime Conjecture, which asserts that \begin{equation} \label{eqn:tpc} \liminf_{n \to \infty}(p_{n+1}-p_n)=2. \end{equation} The Prime Number Theorem implies that the average gap $p_{n+1}-p_n$ between consecutive primes is asymptotic to $\log p_n$. In 2005, Goldston, Pintz, and Y{\i}ld{\i}r{\i}m (GPY) \cite{gpy} showed that $p_{n+1}-p_n$ can be arbitrarily small compared to $\log p_n$. Specifically, they proved that \[ \liminf_{n \to \infty}\frac{p_{n+1}-p_n}{\log p_n}=0. \] GPY further showed that, assuming that the primes are sufficiently well-distributed among residue classes modulo most moduli $q$, it is possible to obtain a bound \begin{equation} \label{eqn:bg16} \liminf_{n \to \infty}(p_{n+1}-p_n)\leq 16. \end{equation} To be more precise, we say that the primes have \emph{level of distribution} $\theta$ if for any $A>0$, \begin{equation} \label{eqn:bv}
\sum_{q\leq x^\theta} \max_{(a,q)=1}\left|\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\right|\ll_A \frac{x}{(\log x)^A}. \end{equation} The celebrated Bombieri-Vinogradov Theorem states that \eqref{eqn:bv} holds for every $\theta<\frac{1}{2}$. This means that the bound on the error term in the Prime Number Theorem for arithmetic progressions given by the Generalized Riemann Hypothesis holds for almost all moduli $q$. It is conjectured that \eqref{eqn:bv} in fact holds for all $\theta<1$; this conjecture is known as the Elliott-Halberstam Conjecture. The methods of GPY show that the left hand side of~\eqref{eqn:tpc} is finite assuming any level of distribution greater than $\frac{1}{2}$. In particular, if we can take $\theta\geq 0.971$, then we have the claimed bound in~\eqref{eqn:bg16}.
The first unconditional proof that the left hand side of~\eqref{eqn:tpc} is bounded was given by Zhang in 2013 \cite{zhang}. He obtained the bound \[ \liminf_{n \to \infty}(p_{n+1}-p_n)< 7\cdot 10^7. \] While Zhang's results are inspiring, his methods are highly technical and do not easily generalize. That same year, Maynard \cite{maynard} discovered a way to extend the methods of GPY by modifying the sieve used. He used this to lower the bound obtained by Zhang, as well as give a generalization to gaps between $p_n$ and $p_{n+m}$ for arbitrary fixed $m$. His method obtains the results \[ \liminf_{n \to \infty}(p_{n+1}-p_n)\leq 600, \] and \[ \liminf_{n \to \infty}(p_{n+m}-p_n)\ll m^3 e^{4m}. \] Tao discovered the underlying sieve independently, but arrived at slightly weaker conclusions.
The techniques used in these approaches in fact operate within a more general setting. Say that a set of linear forms $\mathcal{H}=\{L_1,\dots,L_k\}$, where $L_i(n)=a_i n+h_i$, is \emph{admissible} if for every prime $p$ there exists a value of $n$ such that none of the $L_i(n)$ are divisible by $p$. We will often take $a_1=\cdots=a_k=1$, in which case we can think of $\mathcal{H}$ as the set $\{h_1,\dots,h_k\}$. The goal is then to look for integers $n$ such that many of $L_1(n),\dots,L_k(n)$ are simultaneously prime. Hardy and Littlewood conjectured that for any admissible $\mathcal{H}$, \[ \#\{n\leq N\mid a_i n+h_i\text{ prime }\forall i\in [1,k]\} \sim \mathfrak{G}\frac{N}{(\log N)^k}, \] where $\mathfrak{G}>0$ is an effective constant depending only on $\mathcal{H}$. Maynard's results above follow from showing for every admissible set $\mathcal{H}=\{L_1,\dots,L_k\}$ that for large enough $N$, there is some $n\in [N,2N]$ such that $\gg \log k$ of the integers $L_1(n),\dots,L_k(n)$ are prime.
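As a concrete illustration of this definition (not part of the original argument), admissibility of a shift set with $a_1=\cdots=a_k=1$ is finitely checkable: for a prime $p>k$ the $k$ shifts occupy at most $k<p$ residue classes modulo $p$, so some class is always avoided, and only primes $p\leq k$ need to be tested. A minimal sketch in Python; the function names are our own:

```python
def primes_up_to(n):
    """Return all primes <= n via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def is_admissible(H):
    """Check admissibility of the shift set H (all a_i = 1): H must avoid
    some residue class modulo every prime p; for p > len(H) this is
    automatic, so only primes p <= len(H) are tested."""
    for p in primes_up_to(len(H)):
        if len({h % p for h in H}) == p:  # H covers every class mod p
            return False
    return True
```

For instance, $\{0,2,6\}$ is admissible, whereas $\{0,2,4\}$ is not, since the latter covers all residue classes modulo $3$.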
In \cite{pintz2015}, Pintz extended Maynard's method to prove the lower bound \begin{align*}
\#\{n \in [x,2x] & \mid \#(\{L_1(n),...,L_k(n)\}\cap \mathbb{P})\geq c \log k,\text{ and }\max_{1\leq i\leq k} P^{-}(L_i(n))\geq n^{c_1(k)}\} \\ & \gg_k \frac{x}{(\log x)^k}, \end{align*} for some $c>0$ and $c_1(k)>0$. Here $\mathbb{P}$ is the set of all primes and $P^-(n)$ is the smallest prime factor of $n$ for $n>1$. This makes the work in \cite{maynard} quantitative and strengthens it by ensuring that none of the $L_i(n)$ have small prime factors.
It is natural to consider a localized version of the question of small gaps between primes, looking for such small gaps within an interval $[x-h,x]$, where $x^\delta\leq h\leq x$ for some $\delta<1$. An analogue of the Bombieri-Vinogradov Theorem holds for such intervals under certain restrictions on $\delta$. The form of the statement is \begin{equation} \label{eqn:bvshorter}
\sum_{q\leq x^\theta} \max_{(a,q)=1}\left|(\pi(x;q,a)-\pi(x-h;q,a))-\frac{\pi(x)-\pi(x-h)}{\phi(q)}\right|\ll_A \frac{h}{(\log x)^A}, \end{equation} for $x^{\delta}\leq h \leq x$. The best known result of this form is due to Timofeev \cite{timofeev}, who showed that \eqref{eqn:bvshorter} holds for any $0<\theta<\frac{1}{30}$ and all $x^{\delta}\leq h \leq x$ when $\delta > \frac{7}{12}$. We refer the reader to \cite{pps} for a history of the development of bounds of this form. Using Timofeev's result, Maynard \cite{maynard2016dense} proved a quantitative, localized analogue of his earlier work on small gaps between primes. Specifically, he obtained that there exists a sufficiently small constant $c_{\delta}>0$ (depending only on $\delta$) such that for all $k>c_{\delta}^{-1}$ for any $\delta > \frac{7}{12}$, \begin{equation} \label{eqn:lbadmis} \#\{n \in [x-h,x] \mid \#(\{L_1(n),...,L_k(n)\}\cap \mathbb{P})\geq c_{\delta} \log k\}\gg_{k,\delta} \frac{h}{(\log x)^k}, \end{equation} for all $h\geq x^\delta$.
The goal of this paper is to shorten the interval $[x-h,x]$ in the result above. For $h\leq x^{\frac{7}{12}}$, \eqref{eqn:bvshorter} is not known for any $\theta>0$. However, a result due to Kumchev \cite{kumchev} gives a Bombieri-Vinogradov type average result for a \emph{lower bound} on the prime indicator function $1_{\mathbb{P}}(n)$ over intervals of size at least $x^{0.53}$. By applying the arguments of BHP \cite{bhp} in conjunction with a generalization of Watt's mean-value theorem to all Dirichlet $L$-functions \cite{wattl}, we extend Kumchev's result to all intervals of size at least $x^{0.525}$. Confirming Maynard’s speculation, this allows us to show~\eqref{eqn:lbadmis} for all $h\geq x^{0.525}$, with the additional property that all the $L_i(n)$ involved have no small prime factors, as in Pintz's result from \cite{pintz2015}. Our main theorem is the following.
\begin{theorem} \label{thm:main} For every $\delta\in [0.525,1]$, there is a constant $c_\delta>0$ such that for all $k\geq c_\delta^{-1}$, we have \begin{align*} \#\{n \in [x-h,x] &\mid \#(\{L_1(n),...,L_k(n)\}\cap \mathbb{P})\geq c_{\delta} \log k,\text{ and }\max_{1\leq i\leq k} P^{-}(L_i(n))\geq n^{c_1(k)}\} \\ & \gg_{k,\delta} \frac{h}{(\log x)^k}, \end{align*} where $x^\delta \leq h\leq x$ and $c_1(k)>0$ is a constant depending on $k$. \end{theorem}
Setting $a_1=\cdots=a_k=1$ and taking $k$ large enough so that $c_\delta \log k > 1$ yields the following important corollary.
\begin{corollary} \label{cor:bgshort} For any $\delta\in [0.525,1]$, there exist some positive integers $k,d$ such that for sufficiently large $x$, the interval $[x-h,x]$ contains $\gg_{k} \frac{h}{(\log x)^k}$ pairs of consecutive primes differing by at most $d$ when $x^\delta \leq h\leq x$. \end{corollary}
To obtain our results, we follow the strategy suggested by Maynard in Section~3 of \cite{maynard2016dense}. His idea was to synthesize the results in~\cite{wattl} and~\cite{kumchev} to exhibit bounded gaps between primes in intervals of the above length. The main novelty in this paper is verifying that one can indeed combine these results, the details of which are carried out in Section~\ref{sec:kummore}. In Section~\ref{sec:defs}, we go over some of the background for our results, stating results we will use and reviewing basic notation and definitions used in the remainder of the paper. In Section~\ref{sec:1stresult}, we give estimates of weighted sums that are short interval analogues of the sums appearing in \cite{maynard}, which suffice to show that when $k$ is sufficiently large, every interval $(x-h,x]$ for $h\geq x^{0.525}$ and large enough $x$ contains at least one $n$ for which $\gg \log k$ of the $L_i(n)$ are prime. In Section~\ref{sec:2ndresult}, we prove Theorem~\ref{thm:main} in full.
\section{Notation and Background} \label{sec:defs}
We will closely follow the notation of Maynard in \cite{maynard}, with a few modifications. Denote by $(a,b)$ and $[a,b]$ the greatest common factor and least common multiple, respectively, of $a$ and $b$. Let $\tau(n)$ denote the number of divisors of $n$, and let $\tau_{r}(n)$ denote the number of ways to write $n$ as the product of an $r$-tuple of positive integers. Let $\phi(n)$ be the Euler totient function and let $\mu(n)$ be the M\"{o}bius function. As mentioned previously, $1_{\mathbb{P}}(n)$ is the indicator function for the primes, and $P^{-}(n)$ denotes the smallest prime factor of $n$ for $n>1$. We write $n\sim N$ to mean $N/2<n\leq N$ and $n\asymp N$ to mean $c_1N\leq n\leq c_2 N$, where $c_1,c_2\geq 0$ are unspecified absolute constants.
We fix $k$ and the admissible set $\mathcal{H}=\{L_1,\dots,L_k\}$, where $L_i(n)=a_i n+h_i$ with $(a_i,h_i)=1$. Throughout, $x$ is taken to be a large integer, and $\eta_k$ is a sufficiently small constant in terms of $k$, not necessarily the same in every appearance. Let \[ W=\prod_{p \le D_0}p. \] Unlike in \cite{maynard}, where $D_0$ is taken to be $\log(\log(\log(x)))$, we define $D_0$ to depend only on $k$ and $\mathcal{H}$, as in \cite{pintz2015}. We defer the precise definition of $D_0$ to Section~\ref{sec:1stresult}. Pick a residue $v_0$ mod $W$ corresponding to an integer $v$ such that each $L_i(v)$ is relatively prime to $W$. The existence of such a $v$ is guaranteed by admissibility and the Chinese Remainder Theorem. We let $h \ge x^\delta$ for some $\delta\in [0.525,1]$, and let $R=x^{\frac{\theta}{2}-\epsilon}$ for some small $\epsilon>0$, where $\theta>0$ will be defined later.
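As an illustrative aside (the function names and the toy value of $D_0$ below are ours, not the paper's), both admissibility and the existence of $v_0$ can be verified mechanically for small tuples: only primes $p\leq k$ can cover all residue classes, and $v_0$ can then be found by exhaustive search modulo $W$.

```python
from math import gcd

# Illustrative sketch (not from the paper). A tuple of forms L_i(n) = a_i*n + h_i
# is admissible iff for every prime p some residue class mod p avoids all forms;
# only primes p <= k can obstruct this, since k forms occupy at most k classes.

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:: p] = [False] * len(sieve[p * p:: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(forms):
    k = len(forms)
    for p in primes_up_to(k):
        if all(any((a * n + h) % p == 0 for a, h in forms) for n in range(p)):
            return False
    return True

def find_v0(forms, D0):
    # W = product of primes up to D0; brute force is only feasible for toy D0.
    W = 1
    for p in primes_up_to(D0):
        W *= p
    for v in range(W):
        if all(gcd(a * v + h, W) == 1 for a, h in forms):
            return v, W
    return None, W

print(is_admissible([(1, 0), (1, 2), (1, 6)]))  # {n, n+2, n+6} is admissible
print(is_admissible([(1, 0), (1, 2), (1, 4)]))  # {n, n+2, n+4} covers all classes mod 3
v0, W = find_v0([(1, 0), (1, 2), (1, 6)], 10)   # here W = 2*3*5*7 = 210
```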
We now describe the approach of GPY and Maynard. Define the expressions \[ S_1(x_1,x_2)=\sum_{\substack{x_1<n \leq x_2\\ n \equiv v_0 \bmod W}} w(n), \] and \[ S_2(x_1,x_2) = \sum_{m=1}^k S_2^{(m)}(x_1,x_2), \] where \[ S_2^{(m)}(x_1,x_2)=\sum_{\substack {x_1<n \leq x_2\\ n\equiv v_0\bmod W }}1_{\mathbb{P}}(a_mn+h_m)w(n). \] Here $w(n)$ is a nonnegative weight function that will be defined shortly.
The goal in the work of GPY and Maynard is to show that for some $\rho>0$, \begin{equation} \label{eqn:gpym} \sum_{\substack{x< n\leq 2x\\ n\equiv v_0\bmod W}} \Big(\sum_{i=1}^k 1_{\mathbb{P}}(n+h_i)-\rho\Big)w(n) = S_2(x,2x)-\rho S_1(x,2x) > 0, \end{equation} for all sufficiently large $x$. Assuming \eqref{eqn:gpym} holds, we know that for some $n\in (x,2x]$, at least $r(k):=\lfloor \rho+1\rfloor$ of the numbers $n+h_1,\dots, n+h_k$ are prime. This yields infinitely many $n$ such that at least $r(k)$ of the numbers $n+h_i$ are prime, implying that \[ \liminf_{n\to \infty} (p_{n+r(k)-1}-p_n) \leq \max_{1\leq i,j\leq k} (h_i-h_j). \] Taking $\{h_1,\dots,h_k\}$ to be, for example, the first $k$ primes larger than $k$, we obtain an upper bound of $O(k\log k)$ by the Prime Number Theorem.
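To spell out the final step (an illustrative aside; only the Prime Number Theorem is used): if $h_1<\dots<h_k$ are the first $k$ primes exceeding $k$, then

```latex
% The largest of the first k primes exceeding k is at most p_{2k},
% since pi(k) <= k, and the Prime Number Theorem gives p_{2k} ~ 2k*log(2k), so
\max_{1\le i,j\le k}(h_i-h_j)\;\le\; h_k\;\le\; p_{2k}\;=\;O(k\log k).
% Admissibility holds since each h_i is a prime greater than k: no prime
% p <= k divides any h_i, so the h_i omit the class 0 modulo every p <= k.
```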
The weights $w(n)$ are constructed as follows. Let $F(t_1,\dots,t_k)$ be a smooth function supported on the subset of $[0,1]^k$ with $\sum_{i=1}^k t_i \leq 1$. Let $R=x^{\frac{\theta}{2}-\epsilon}$ for some small fixed $\epsilon>0$, and define \[ y_{\vec r}=F\Big(\frac{\log r_1}{\log R},\dots,\frac{\log r_k}{\log R}\Big), \] where $\vec r=(r_1,\dots,r_k)$, and let \[ \lambda_{\vec d}=\left(\prod_{i=1}^k \mu(d_i) d_i\right) \sum_{\substack{\vec r\\d_i\mid r_i\forall i\\ (r_i,W)=1\forall i}} \frac{\mu(\prod_{i=1}^k r_i)^2}{\prod_{i=1}^k \phi(r_i)} y_{\vec r}. \] We set \[ w(n)=\Big(\sum_{d_i \mid a_in+h_i \forall i} \lambda_{\vec d} \Big)^2. \]
With this choice of weights, Maynard gives the following estimates for $S_1(x,2x)$ and $S_2(x,2x)$. Here we have made the dependence of the error terms on $D_0$ explicit, writing $O_k\Big(\frac{1}{D_0}\Big)$ to highlight that the implied constants also depend on $k$.
\begin{proposition}[Maynard, {\cite[Proposition~4.1]{maynard}}] \label{thm:maynard} With $S_1,S_2$ as defined above, we have \begin{align*} S_1(x,2x)&=\frac{\Big(1+O_k\Big(\frac{1}{D_0}\Big)\Big) \phi(W)^k x (\log{R})^k}{W^{k+1}} I_k(F),\\ S_2(x,2x)&=\frac{\Big(1+O_k\Big(\frac{1}{D_0}\Big)\Big) \phi(W)^k x (\log{R})^{k+1}}{W^{k+1}\log{x}}\sum_{m=1}^kJ_k^{(m)}(F), \end{align*} provided $I_k(F)\ne 0$ and $J_k^{(m)}(F)\ne 0$ for each $m$, where \begin{align*} I_k(F)&=\int_0^1\dotsi \int_0^1 F(t_1,\dotsc, t_k)^2dt_1\dotsc dt_k,\\ J_k^{(m)}(F)&=\int_0^1\dotsi \int_0^1 \left(\int_0^1 F(t_1,\dotsc,t_k)dt_m\right)^2 dt_1\dotsc dt_{m-1} dt_{m+1}\dotsc dt_k. \end{align*} \end{proposition}
Defining \[ M_k=\sup_{F} \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}, \] the estimates in Proposition~\ref{thm:maynard} allow Maynard to obtain \eqref{eqn:gpym} with \[ \lfloor \rho + 1\rfloor = \left\lceil \frac{\theta M_k}{2}\right\rceil, \] where $\theta$ is a level of distribution of the primes. For $k$ sufficiently large, we have $M_k>\log k-2\log \log k-2$, so that \[ \lfloor \rho + 1\rfloor = \Big\lceil \Big(\frac{\theta}{2}+o(1)\Big)\log k\Big\rceil. \] These results generalize readily to the setting where the linear forms $a_i n+h_i$ do not necessarily have $a_i=1$.
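For orientation (an illustrative aside, assuming the classical Bombieri-Vinogradov level $\theta=\tfrac{1}{2}$), the resulting numerology reads:

```latex
% With theta = 1/2 and M_k > log k - 2 log log k - 2, one may take
\lfloor \rho + 1\rfloor
   \;=\; \Big\lceil \tfrac{M_k}{4} \Big\rceil
   \;\ge\; \Big\lceil \tfrac{1}{4}\big(\log k-2\log\log k-2\big) \Big\rceil,
% so forcing m+1 prime values among the k forms requires k of size
% e^{(4+o(1))m}, and the O(k log k) diameter bound then yields
% p_{n+m} - p_n << e^{(4+o(1))m}  for infinitely many n.
```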
A key ingredient in our work is the following result of Kumchev \cite{kumchev}, which provides an arithmetic function, denoted $Y(n)$, that is a lower bound for $1_{\mathbb{P}}(n)$ and satisfies a Bombieri-Vinogradov type result adapted to short intervals.
\begin{theorem}[Kumchev, {\cite[Theorem~1]{kumchev}}] \label{thm:primelower} There is an arithmetic function $Y$ with the following properties: \begin{enumerate}[(i)] \item if $n$ is an integer in $[2,x)$, then \[ Y(n)\leq \begin{cases} 1\text{ if }n\text{ is prime,}\\ 0\text{ otherwise;} \end{cases} \] \item if $x/2\leq y<x$ and $z_0=x\exp(-3(\log x)^{1/3})$, then \[ \sum_{y-z_0<n\leq y}Y(n) \gg \frac{z_0}{\log x} ;\] \item there is an absolute constant $\epsilon>0$ such that if \[ E_{Y}(y,h;q,a):=\sum_{\substack{y-h<n\leq y \\ n\equiv a\bmod q}} Y(n)-\frac{hz_0^{-1}}{\phi(q)}\sum_{y-z_0<n\leq y}Y(n), \] and if \[ x^{0.53}\leq z\leq x,\qquad Q\leq zx^{-0.53+\epsilon}, \] then for any $A>0$ \[
\sum_{q\leq Q}\max_{(a,q)=1}\max_{h\leq z} \max_{x/2\leq y<x}|E_Y(y,h;q,a)|\ll_A \frac{z}{(\log x)^A}. \] \end{enumerate} \end{theorem}
By applying a result of Harman, Watt, and Wong \cite{wattl}, we can replace the constant $0.53$ in Theorem~\ref{thm:primelower} by $0.525$.
\begin{theorem} \label{thm:kum525} There is an arithmetic function $Y$ with the following properties: \begin{enumerate}[(i)] \item if $n$ is an integer in $[2,x)$, then \[ Y(n)\leq \begin{cases} 1\text{ if }n\text{ is prime,}\\ 0\text{ otherwise;} \end{cases} \] \item if $x/2\leq y<x$ and $z_0=x\exp(-3(\log x)^{1/3})$, then \[ \sum_{y-z_0<n\leq y}Y(n) \gg \frac{z_0}{\log x} ;\] \item there is an absolute constant $\epsilon>0$ such that if \[ E_{Y}(y,h;q,a):=\sum_{\substack{y-h<n\leq y \\ n\equiv a\bmod q}} Y(n)-\frac{hz_0^{-1}}{\phi(q)}\sum_{y-z_0<n\leq y}Y(n), \] and if \[ x^{0.525}\leq z\leq x,\qquad Q\leq zx^{-0.525+\epsilon}, \] then for any $A>0$ \[
\sum_{q\leq Q}\max_{(a,q)=1}\max_{h\leq z} \max_{x/2\leq y<x}|E_Y(y,h;q,a)|\ll_A \frac{z}{(\log x)^A}. \] \end{enumerate} \end{theorem}
We leave the details of this extension to Section~\ref{sec:2ndresult}.
\begin{remark} We can take the implied constant in (ii) to be $\geq 1-\beta$ for some $\beta<1$ made explicit in the computations of \cite{kumchev} and \cite{bhp}. For sufficiently small $\epsilon$, we can take $\beta \leq 0.94$. \end{remark}
\section{Estimates on the Weighted Sums} \label{sec:1stresult}
In this section we give, for $0.525\leq \delta\leq 1$, a value of $\rho=\rho_{k,\delta}$ such that \begin{equation} \label{eqn:rhocond} S_2(x-h,x)-\rho S_1(x-h,x) > 0, \end{equation} when $x^\delta\leq h \leq x$. We will give an asymptotic estimate for $S_1(x-h,x)$ as in \cite{maynard}, but for $S_2(x-h,x)$ it will suffice to give a lower bound. For the proofs of the next two propositions, the only condition we need on $D_0$ is that \[ D_0>\max(\max_{1\leq i<j\leq k}(a_jh_i-a_ih_j), \max_{1\leq i\leq k}h_i, \max_{1\leq i\leq k}a_i). \] We will impose an additional condition on $D_0$ at the end of this section, in order to make the error term in~\eqref{eqn:finals2s1bd} sufficiently small.
\begin{prop} \label{thm:s2bound} Let $\delta\geq 0.525$ and $x^{\delta} \leq h\leq x$. We have, for sufficiently large $x$, \[ S_2^{(m)}(x-h,x)\geq \left(1-\beta+O_k\left(\frac{1}{D_0}\right)\right)\frac{h}{W}\frac{\log R}{\log x}\left(\frac{\phi(W)}{W}\log R\right)^k J_k^{(m)}(F), \] where $\beta<1$ is an absolute constant. \end{prop}
\begin{proof} We proceed as in \cite{maynard}, with only a few alterations to the argument. We start with the definition \[ S_2^{(m)}(x-h,x)=\sum_{\substack {x-h<n\leq x\\ n\equiv v_0\bmod W }}1_{\mathbb{P}}(a_mn+h_m)w(n), \] where \[ w(n)=\Big(\sum_{\substack {\vec d:\: d_i\mid a_in+h_i \forall i}} \lambda_{\vec d}\Big)^2. \]
First note that for $x$ sufficiently large, if $a_mn+h_m$ is prime then $d_m\mid a_mn+h_m$ forces $d_m=1$, since $d_m\leq R<a_mn+h_m$ on the support of $\lambda_{\vec d}$. We can therefore replace $w(n)$ with the modified weights \[ w'(n)=\Big(\sum_{\substack {\vec d:\: d_i\mid a_in+h_i \forall i\\ d_m=1}} \lambda_{\vec d}\Big)^2, \] restricting $d_m$ to equal $1$. Since the weights $w'(n)$ are nonnegative, we have the lower bound \[ S_2^{(m)}(x-h,x)=\sum_{\substack {x-h<n\leq x\\ n\equiv v_0\bmod W }}1_{\mathbb{P}}(a_mn+h_m)w'(n)\geq \sum_{\substack {x-h<n\leq x\\ n\equiv v_0\bmod W }}Y(a_mn+h_m)w'(n). \]
Now we expand the square in the definition of $w'(n)$ and switch the order of summation, to obtain \[ \sum_{\substack{\vec d, \vec e\\ d_m=e_m=1}} \lambda_{\vec d}\lambda_{\vec e} \sum_{\substack{ x-h<n\leq x \\ n\equiv v_0\bmod W\\ [d_i,e_i]\mid a_in+h_i \forall i }} Y(a_mn+h_m). \] As in \cite{maynard}, for large enough $x$ the only contribution is from terms where $W,[d_1,e_1]$, $\dots$, $[d_k,e_k]$ are pairwise relatively prime. Indeed, we have chosen $v_0$ such that $(a_in+h_i,W)=1$ for all $i$ when $n\equiv v_0\bmod W$. If $[d_i,e_i]$ and $[d_j,e_j]$ have a common prime factor $q$, then $q\mid a_jh_i-a_ih_j$. Because $(a_i,h_i)=(a_j,h_j)=1$, $a_jh_i-a_ih_j$ is nonzero and bounded, so our choice of $D_0$ guarantees that all prime factors of $a_jh_i-a_ih_j$ divide $W$. In particular $q\mid W$, a contradiction since $q\mid d_i\mid a_in+h_i$. Thus we can apply the Chinese Remainder Theorem to reduce the restriction on $n$ in the inner sum to a single modular restriction \[ n \equiv b \bmod q, \] where $q=W\prod_{i=1}^k [d_i,e_i]$. Therefore, \[ a_m n + h_m \equiv a_m b + h_m \bmod qa_m. \] Set $b'=a_m b + h_m$.
We approximate the resulting inner sum using property~(iii) of Theorem~\ref{thm:kum525}. Let $z_1=(a_m x+h_m)\exp(-3(\log (a_m x+h_m))^{1/3})\ll a_m z_0$, so that we obtain \begin{align*} & \sum_{\substack{ x-h<n\leq x \\ n\equiv v_0\bmod W\\ [d_i,e_i]\mid a_in+h_i \forall i }} Y(a_mn+h_m)=\sum_{\substack{ a_m(x-h)+h_m<n\leq a_mx+h_m \\ n\equiv b' \bmod qa_m }} Y(n) \\ & = \frac{a_mhz_1^{-1}}{\phi(qa_m)}\sum_{a_mx-z_1+h_m<n\leq a_mx+h_m} Y(n) + E_Y(a_mx+h_m,a_mh;a_mq,b'), \end{align*} where again \[ E_{Y}(y,h;q,a)=\sum_{\substack{y-h<n\leq y \\ n\equiv a\bmod q}} Y(n)-\frac{hz_0^{-1}}{\phi(q)}\sum_{y-z_0<n\leq y}Y(n), \] with $z_0$ computed at the scale of $y$, so that here it equals $z_1$. We let $q'=qa_m$, and note $\phi(q')=a_m\phi(q)$ because all prime factors of $a_m$ divide $W$. Thus, the above expression becomes \[ \frac{hz_1^{-1}}{\phi(q)}\sum_{a_mx-z_1+h_m<n\leq a_mx+h_m} Y(n) + E_Y(a_mx+h_m,a_mh;q',b'). \] Let $X_h=hz_1^{-1}\sum_{a_mx-z_1+h_m<n\leq a_mx+h_m} Y(n)$, which does not depend on $\vec d$ and $\vec e$, so that our lower bound for $S_2^{(m)}$ is \[ \frac{X_h}{\phi(W)}\sideset{}{'}\sum_{\substack{\vec d, \vec e\\ d_m=e_m=1}} \frac{\lambda_{\vec d}\lambda_{\vec e}}{\prod_{i=1}^k \phi([d_i,e_i])}+\sum_{\substack{\vec d, \vec e\\ d_m=e_m=1}} \lambda_{\vec d}\lambda_{\vec e} E_Y(a_mx+h_m,a_mh;q',b'). \] The sum in our main term appears exactly as in \cite{maynard}, so the argument there, encapsulated in Proposition~\ref{thm:maynard}, shows that our main term is \begin{align*} &\frac{X_h}{\phi(W)}\left(\sum_{\vec u} \frac{(y^{(m)}_{\vec u})^2}{\prod_{i=1}^k g(u_i)}+O_k\left(\frac{(y^{(m)}_{max})^2}{D_0}\left(\frac{ \phi(W)\log R}{W}\right)^{k-1}\right) \right) \\ &= \left(1+O_k\left(\frac{1}{D_0}\right)\right)\frac{X_h}{\phi(W)}\left(\frac{\phi(W)\log R}{W}\right)^{k+1}J_k^{(m)}(F). \end{align*} By property~(ii) of Theorem~\ref{thm:kum525}, we have \[ X_h \geq \left(1-\beta+O_k\left(\frac{1}{D_0}\right)\right) hz_1^{-1}\frac{z_1}{\log (a_m x+h_m)} = \left(1-\beta+O_k\left(\frac{1}{D_0}\right)\right)\frac{h}{\log x}. 
\] Hence, it follows that \[ \frac{X_h}{\phi(W)}\sideset{}{'}\sum_{\substack{\vec d, \vec e\\ d_m=e_m=1}} \frac{\lambda_{\vec d}\lambda_{\vec e}}{\prod_{i=1}^k \phi([d_i,e_i])} \ge \left(1-\beta+O_k\left(\frac{1}{D_0}\right)\right)\frac{h}{W}\frac{\log R}{\log x}\left(\frac{\phi(W)\log R}{W}\right)^{k}J_k^{(m)}(F). \]
Meanwhile, we can bound our error term as in \cite{maynard}, using (iii)~of Theorem~\ref{thm:kum525} in place of the Bombieri-Vinogradov theorem. We obtain \begin{align} \label{error} & \sum_{\substack{\vec d, \vec e\\ d_m=e_m=1}} \lambda_{\vec d}\lambda_{\vec e} E_Y(a_mx+h_m,a_mh;q',b')\notag\\ & \ll \lambda_{max}^2 \sum_{q< R^2W} \mu(q)^2 \tau_{3k}(q) |E_Y(a_mx+h_m,a_mh;q',b')|\notag \\ & \ll y_{max}^2 (\log R)^{2k} \sum_{q< R^2W} \mu(q)^2 \tau_{3k}(q) |E_Y(a_mx+h_m,a_mh;q',b')| .
\end{align} Let \[
E_Y^*(x,z;q)=\max_{(a,q)=1}\max_{h\leq z} \max_{x/2\leq y<x}|E_Y(y,h;q,a)|. \] As in \cite{maynard}, we use the Cauchy-Schwarz inequality, part (iii)~of Theorem~\ref{thm:kum525}, and the trivial bound \[
|E_Y(a_mx+h_m,a_mh;q',b')|\ll \frac{a_mh}{\phi(a_m q)}=\frac{h}{\phi(q)}, \] to show that \eqref{error} is \begin{align*} & \ll y_{max}^2 (\log R)^{2k} \left(\sum_{q<R^2W}\mu(q)^2 \tau_{3k}^2(q)\frac{h}{\phi(q)}\right)^{1/2}\left(\sum_{q<R^2W} \mu(q)^2 E_Y^*(a_mx+h_m,a_mh;a_mq)\right)^{1/2} \\ & \ll_A \frac{y_{max}^2 h}{(\log x)^{A}}. \end{align*} Here we can use (iii)~of Theorem~\ref{thm:kum525} as long as $(a_m x)^{0.525}\leq a_mh \leq a_m x$ and $WR^2 \leq hx^{-0.525+\epsilon_0}$, where $\epsilon_0$ is the absolute constant $\epsilon$ from (iii)~of Theorem~\ref{thm:kum525}. If $R \leq x^{\frac{1}{2}(\delta-0.525+\epsilon_0/2)}$, this error term is dominated by already existing error terms. Therefore, \[ S_2^{(m)}(x-h,x)\geq \left(1-\beta+O_k\left(\frac{1}{D_0}\right)\right)\frac{h}{W}\frac{\log R}{\log x}\left(\frac{\phi(W)\log R}{W}\right)^{k}J_k^{(m)}(F), \] which is the desired lower bound. \end{proof}
The estimate of $S_1(x-h,x)$ is an even more direct adaptation of the argument from \cite{maynard}.
\begin{prop} Let $\delta\geq 0.525$ and $x^{\delta} \leq h\leq x$. We have \[ S_1(x-h,x)=\left(1+O_k\left(\frac{1}{D_0}\right)\right)\frac{h}{W}\left(\frac{\phi(W)}{W}\log R\right)^k I_k(F). \] \end{prop} \begin{proof} As before, we expand out the square and switch the order of summation, obtaining \[ S_1(x-h,x)=\sum_{\substack {x-h<n\leq x\\ n\equiv v_0\bmod W }}w(n)=\sum_{\vec d, \vec e} \lambda_{\vec d}\lambda_{\vec e} \sum_{\substack{ x-h<n\leq x \\ n\equiv v_0\bmod W\\ [d_i,e_i]\mid a_in+h_i \forall i }} 1. \] The inner sum is now $\frac{h}{W\prod_{i=1}^k [d_i,e_i]}+O(1)$, as opposed to $\frac{x}{W\prod_{i=1}^k [d_i,e_i]}+O(1)$ as in \cite{maynard}. Besides this, the proof is identical to the proof in \cite{maynard}. Note that since $R^2 \leq x^{\delta-0.525+\epsilon_0/2} \ll \frac{h}{(\log x)^A}$ for any $A$, the first error term we obtain is still appropriately bounded. \end{proof} With these estimates in hand, finding an appropriate value of $\rho$ is straightforward. Following the argument of Maynard outlined in Section~\ref{sec:defs}, we can achieve
\begin{equation} \label{eqn:finals2s1bd} \frac{S_2}{S_1} \geq \left(1-\beta+O_k\left(\frac{1}{D_0}\right)\right) (M_k-\epsilon)\frac{\log R}{\log x}. \end{equation} Defining $D_0$ to be sufficiently large with respect to $k$, we can make the $O_k(\frac{1}{D_0})$ term small enough to obtain~\eqref{eqn:rhocond} for $\rho$ satisfying \[ \lfloor \rho+1\rfloor\geq \Big\lceil \frac{\delta-0.525+\epsilon_0}{2} (1-\beta) M_k \Big\rceil. \]
\section{Density of Bounded Gaps in Short Intervals} \label{sec:2ndresult} In this section we prove Theorem~\ref{thm:main}, which we will now state in a more precise form.
\begin{theorem} \label{thm:density} For any positive integer $k$ and any $\delta\in [0.525,1]$, let $m=\lceil \frac{\delta-0.525+\epsilon_0}{2} (1-\beta) M_k \rceil - 1$, where $\epsilon_0$ is the positive constant appearing in Theorem~\ref{thm:kum525}. There exists a constant $c_1(k)>0$ such that for any admissible set $\mathcal{H}=\{L_1,\dots,L_k\}$ with $L_i(n)=a_i n+h_i$, the set \[ S(\mathcal{H}):=\left\{n\in \mathbb{N}:\: \sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i) \geq m+1, P^{-}\left(\prod_{i=1}^k (a_in+h_i)\right)\geq n^{c_1(k)}\right\} \]
satisfies $|S(\mathcal{H})\cap [x-h,x]|\gg_k h(\log x)^{-k}$ for all sufficiently large $x$, where $x^{\delta}\leq h \leq x$. \end{theorem}
Our argument is analogous to that in \cite[Section~2]{pintz2015}, modified to fit our study of short intervals. We follow the exposition of \cite{patterns}, which considers a similar problem in a slightly different setting. We begin with the following lemma.
\begin{lemma} For any $1\leq j\leq k$ there exists $\epsilon>0$ such that for any prime $p>D_0$ with $p<R^\epsilon$ we have \[
S_{1,p}^{(j)}:=\sum_{\substack{x-h<n\leq x \\ n\equiv v_0 \bmod W \\ p|a_jn+h_j}} w_n \ll_k \frac{(\log p)^2}{p(\log R)^2}\frac{h(\log R)^k}{W}. \] \end{lemma}
\begin{proof} The proof is almost identical to the proof of \cite[Lemma~5.1]{patterns}. By symmetry it suffices to show the result for $j=1$. Expanding the square and rearranging the order of summation as usual gives \[
S_{1,p}^{(1)}=\sum_{\vec d, \vec e} \lambda_{\vec d}\lambda_{\vec e} \sum_{\substack{x-h<n\leq x\\ n\equiv v_0\bmod W\\ [d_i,e_i]\mid a_in+h_i\forall i\\ p|a_1n+h_1}} 1. \]
As before we have that $W,[d_1,e_1],\dots,[d_k,e_k]$ are pairwise relatively prime. Since $p>D_0$, we have $(W,p)=1$, and by our choice of $D_0$, we also have $([d_i,e_i],p)=1$ for $i\neq 1$. The Chinese Remainder Theorem, as before, gives that the inner sum is \[ \frac{h}{W[d_1,e_1,p]\prod_{i=2}^k [d_i,e_i]}+O(1), \] so that \[ S_{1,p}^{(1)}=\frac{h}{pW}\sum_{\vec d,\vec e}\frac{\lambda_{\vec d}\lambda_{\vec e}}{\frac{[d_1,e_1,p]}{p}\prod_{i=2}^k [d_i,e_i]}+O\Big(\sum_{\vec d,\vec e}\lambda_{\vec d}\lambda_{\vec e}\Big). \] As in \cite{maynard}, we see that the error term is $\ll y_{max}^2 R^2 (\log R)^{4k}\ll_k \frac{h}{(\log x)^A}$ for any $A$. The sum in the main term is independent of the interval $n$ ranges over, so as in \cite{patterns} it is bounded above by \[ \sum_{\vec d,\vec e}\frac{\lambda_{\vec d}\lambda_{\vec e}}{\frac{[d_1,e_1,p]}{p}\prod_{i=2}^k [d_i,e_i]} \ll_k \left(\frac{\log p}{\log R}\right)^2 (\log R)^k, \] so that \[ S_{1,p}^{(1)} \ll_k \frac{h}{pW}\left(\frac{\log p}{\log R}\right)^2 (\log R)^k=\frac{(\log p)^2}{p(\log R)^2}\frac{h(\log R)^k}{W}, \] as claimed. \end{proof}
\begin{lemma} For any $\epsilon(k)>0$ there exists $c_1(k)>0$ such that for sufficiently large $x$, \[ S_1^-(x-h,x):=\sum_{\substack{x-h<n\leq x\\ n\equiv v_0\bmod W\\ P^{-}(\prod_{i=1}^k (a_in+h_i))<n^{c_1(k)}}} w_n \leq \epsilon(k)\frac{h(\log R)^k}{W}. \] \end{lemma} \begin{proof} We follow the proof of \cite[Lemma~5.2]{patterns}. We have \[ S_1^-(x-h,x)\leq \sum_{j=1}^k\sum_{D_0<p<x^{c_1(k)}}S_{1,p}^{(j)}. \] When $c_1(k)\leq \epsilon \frac{\log R}{\log x}$, we can apply the previous lemma and obtain \[ S_1^-(x-h,x)\ll_k \frac{h(\log R)^k}{W} \sum_{D_0<p<x^{c_1(k)}} \frac{(\log p)^2}{p(\log R)^2} \ll \frac{h(\log R)^k}{W}\frac{(c_1(k)\log x)^2}{(\log R)^2}. \] Picking $c_1(k)$ sufficiently small thus gives \[ S_1^{-}(x-h,x)\leq \epsilon(k)\left(\frac{h(\log R)^k}{W}\right), \] as desired. \end{proof} A similar bound for the contribution to $S_2$ of $n$ such that some $a_in+h_i$ has a small prime factor follows easily. \begin{cor} For any $\epsilon(k)>0$ there exists $c_1(k)>0$ such that for sufficiently large $x$, \[ S_2^-(x-h,x):=\sum_{\substack{x-h<n\leq x\\ n\equiv v_0\bmod W\\ P^{-}(\prod_{i=1}^k (a_in+h_i))<n^{c_1(k)}}} \sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i)w_n\leq \epsilon(k)\frac{h(\log R)^k}{W}. \] \end{cor} \begin{proof} Since $w_n$ is nonnegative and $\sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i)\leq k$, we have \[ S_2^-(x-h,x)\leq \sum_{\substack{x-h<n\leq x\\ n\equiv v_0\bmod W\\ P^{-}(\prod_{i=1}^k (a_in+h_i))<n^{c_1(k)}}} k w_n = kS_1^-(x-h,x), \] which is bounded appropriately by the previous lemma. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:density}] Let $\rho_m$ be such that $\lfloor \rho_m\rfloor = m$. Note that by definition of $S(\mathcal{H})$, we have \[ 0 < \sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i)-\rho_m \leq k, \] for $n\in S(\mathcal{H})$. Also note that for $n\in S(\mathcal{H})$, since the smallest prime factor of each $a_in+h_i$ is at least $n^{c_1(k)}$, each $a_in+h_i$ has a number of divisors bounded in terms of $c_1(k)$ and the $h_i$, so \[
w_n=\Big(\sum_{\substack{\vec d:\: d_i|a_in+h_i\:\forall i }} \lambda_{\vec{d}}\Big)^2\ll_{c_1(k),\mathcal{H}} \lambda_{max}^2 \ll_k y_{max}^2 (\log R)^{2k}. \]
Since $y_{max}\ll F_{max}$, and our choice of $F$ only depends on $k$, assuming that our choice of $\epsilon(k)$ and therefore of $c_1(k)$ only depends on $k$, we in fact obtain $w_n \ll_{k,\mathcal{H}} (\log R)^{2k}$, or equivalently, \[ 1 \gg_{k,\mathcal{H}} \frac{w_n}{(\log R)^{2k}}. \] We then have \begin{align*}
& |S(\mathcal{H})\cap [x-h,x]|=\sum_{\substack{x-h<n\leq x\\ n\in S(\mathcal{H})}} 1 \\ & \gg_{k,\mathcal{H}} \frac{1}{(\log R)^{2k}}\sum_{\substack{x-h<n\leq x\\ n\in S(\mathcal{H})}} \left(\sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i)-\rho_m \right) w_n. \end{align*} Now, define \[ S_1^+(x-h,x)=S_1(x-h,x)-S_1^-(x-h,x), \] \[ S_2^+(x-h,x)=S_2(x-h,x)-S_2^-(x-h,x). \] For $n$ satisfying $P^{-}(\prod_{i=1}^k (a_in+h_i))\geq n^{c_1(k)}$, we have $\sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i) - \rho_m > 0$ if and only if $n\in S(\mathcal{H})$. So, we have that \begin{align*} S_2^+(x-h,x)-\rho_m S_1^+(x-h,x) &=\sum_{\substack{x-h<n\leq x\\ n\equiv v_0\bmod W\\ P^{-}(\prod_{i=1}^k (a_in+h_i))\geq n^{c_1(k)}}} \left(\sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i) - \rho_m \right) w_n \\ & \leq \sum_{\substack{x-h<n\leq x\\ n\in S(\mathcal{H})}} \left(\sum_{i=1}^k 1_{\mathbb{P}}(a_in+h_i)-\rho_m \right) w_n. \end{align*} Furthermore, we have \begin{align*} S_2^+(x-h,x)-\rho_m S_1^+(x-h,x) &= (S_2-\rho_m S_1) + (S_2^-(x-h,x)-\rho_m S_1^-(x-h,x)) \\ & \geq \frac{h}{W}\left(\frac{\phi(W)\log R}{W}\right)^k I_k(F) \Big((1-\beta)\Big(\frac{\theta}{2}-\epsilon_0\Big)(M_k-\epsilon_0)-\rho_m\Big) \\ & + O_k\Big(\frac{h}{W}\left(\frac{\phi(W)\log R}{W}\right)^k\Big) + O\Big(k\epsilon(k)\frac{h(\log R)^k}{W}\Big). \end{align*} Since we are picking $D_0$ to only depend on $k$, we have $\frac{\phi(W)}{W}\gg_k 1$ and similarly $\frac{1}{W}\gg_k 1$. Thus, when $\epsilon_0, \epsilon(k)$ are chosen to be small enough based on $k$, we have \[ S_2^+(x-h,x)-\rho_m S_1^+(x-h,x) \gg_k h(\log R)^k. \]
Thus, combining with the previous bounds, we obtain \[
|S(\mathcal{H})\cap [x-h,x]| \gg_{k,\mathcal{H}} \frac{1}{(\log R)^{2k}} (S_2^+(x-h,x)-\rho_m S_1^+(x-h,x)) \gg_k h(\log R)^{-k}, \] as claimed. \end{proof}
\section{Proof of \texorpdfstring{Theorem~\ref{thm:kum525}}{Theorem 2.3}} \label{sec:kummore} In this section we outline a proof of Theorem~\ref{thm:kum525}, the extension of Kumchev's main result in~\cite{kumchev} to all exponents $\delta \geq 0.525$. To do this, we will synthesize the argument of Kumchev in \cite{kumchev}, which shows the result with $0.53$ in place of $0.525$, and the argument of Baker, Harman, and Pintz in \cite{bhp}. To modify the results of \cite{bhp} for use with primes in arithmetic progressions, we replace the use of Watt's theorem by its extension to Dirichlet L-functions in \cite{wattl}. For a function $f$ and a character $\chi$ mod $q$, define \[ E_f(y,h;\chi)=\sum_{y-h<n\leq y } f(n)\chi(n)-\delta(\chi)hz_0^{-1}\sum_{y-z_0<n\leq y}f(n). \] The relation of $E_f(y,h;\chi)$ to $E_f(y,h;q,a)$ is analogous to the relation between the functions $\psi(x;\chi)$ and $\psi(x;q,a)$ in the proof of the Prime Number Theorem for arithmetic progressions given in \cite[Chapter~20]{davenport}. As shown in \cite[Section~4.1]{kumchev}, a function $f$ satisfies property~(iii) from Theorem~\ref{thm:kum525} if it satisfies \[
\sum_{q\sim Q'} \sideset{}{^*} \sum_\chi \max_{h\leq z}\max_{x/2\leq y<x}|E_f(y,h;\chi)|\ll_A \frac{Q' z}{(\log x)^A} \]
for all $Q'\leq Q$ and $A>0$, as long as the following conditions hold: $|f(n)|\ll d(n)^B$ for some $B>0$, for some $D$ we have $f(n)=0$ if $P^-(n)<D$, and \[ D\geq x^\eta, \qquad Q\leq \min(D^2 x^{-\eta},DHx^{-\eta}), \] for some $\eta>0$. Here the asterisk on the sum over $\chi$ indicates that the sum is restricted to primitive characters modulo $q$.
Define \[ \psi(n,w)=\begin{cases} 1\qquad \text{if }P^-(n)>w,\\ 0\qquad \text{otherwise.} \end{cases} \] Throughout the arguments that follow we make repeated use of Buchstab's identity, \[ \psi(n,w_1)=\psi(n,w_2)-\sum_{\substack{pm=n\\w_2\leq p<w_1}}\psi(m,p), \] where $2\leq w_2<w_1$. Note that for $n\in(x^{\frac{1}{2}},x]$ we have \[ \psi(n,x^{\frac{1}{2}})=1_{\mathbb{P}}(n), \] and for $n\leq x^{\frac{1}{2}}$ we have $\psi(n,x^{\frac{1}{2}})=0$.
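As a quick sanity check (illustrative only; not part of the proof), Buchstab's identity can be verified numerically. The sketch below uses the convention $\psi(n,w)=1$ iff $P^-(n)\geq w$, under which the identity is exact; the strict-inequality convention above differs only at prime powers and at prime values of $w$, which are immaterial in the analytic arguments.

```python
# Numerical check of Buchstab's identity
#   psi(n, w1) = psi(n, w2) - sum_{p*m = n, w2 <= p < w1} psi(m, p),
# with the convention psi(n, w) = 1 iff n has no prime factor below w.

def least_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def psi(n, w):
    # psi(1, w) = 1 by convention (the empty product has no small prime factor).
    return 1 if n == 1 or least_prime_factor(n) >= w else 0

def is_prime(n):
    return n >= 2 and least_prime_factor(n) == n

def buchstab_rhs(n, w1, w2):
    total = psi(n, w2)
    for p in range(max(2, w2), w1):
        if is_prime(p) and n % p == 0:
            total -= psi(n // p, p)
    return total

# The identity holds for every n and every pair w2 < w1:
for n in range(2, 1200):
    assert psi(n, 12) == buchstab_rhs(n, 12, 4)
    assert psi(n, 30) == buchstab_rhs(n, 30, 2)
```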
Our strategy is to apply Buchstab's identity repeatedly to obtain a decomposition \begin{equation} \label{eqn:decompform} \psi(n,x^{\frac{1}{2}})=\sum_{j=1}^k c_j(n)-\sum_{j=k+1}^\ell c_j(n), \end{equation} for some nonnegative arithmetic functions $c_j(n)$ satisfying a particular set of properties.
\begin{defn} \label{def:fine} Define a decomposition of the form~\eqref{eqn:decompform} to be a \emph{fine} decomposition if for some $r<k$ and $0.524< \delta \leq 1$, the following properties hold: \begin{enumerate}[(1)] \item For all $1\leq j\leq \ell$, $c_j(n)\ll d(n)^B$ for some $B$; \item $c_j(n)=0$ if $P^-(n)<x^{2\delta-1}$; \item for any $A>0$ we have \[
\sum_{q\sim Q}\sideset{}{^*}\sum_{\chi} \max_{h\leq z}\max_{y\sim x}|E_{c_j}(y,h;\chi)|\ll Qz(\log x)^{-A}, \] for $Q\leq zx^{-\delta}$ and $j\in [1,r]\cup [k+1,\ell]$; \item if $y\sim x$, $h_0=x\exp(-3(\log x)^{\frac{1}{3}})$, and $\delta\geq 0.525-\epsilon$, then \begin{equation} \label{eqn:fine4} \sum_{y-h_0<n\leq y}\sum_{j=r+1}^k c_j(n)\leq (\beta+o(1))\frac{h_0}{\log x}, \end{equation} where $\beta<1$ is an absolute constant, which we can take to be $0.94$. \end{enumerate} \end{defn}
Given a fine decomposition, we can then take, as in \cite{kumchev}, \[ Y(n)=\sum_{j=1}^r c_j(n) - \sum_{j=k+1}^\ell c_j(n). \] This satisfies the conditions of Theorem~\ref{thm:kum525} by the same argument as in \cite{kumchev}. Namely, since the $c_j$ are nonnegative, \eqref{eqn:decompform} gives \[ Y(n)=\psi(n,x^{\frac{1}{2}})-\sum_{j=r+1}^k c_j(n)\leq \psi(n,x^{1/2})\leq \begin{cases} 1\text{ if }n\text{ is prime,}\\ 0\text{ otherwise,} \end{cases} \] so property~(i) of Theorem~\ref{thm:kum525} is satisfied. Property~(ii) of Theorem~\ref{thm:kum525} follows, when $\delta\geq 0.525-\epsilon$, from the equation \[ \sum_{y-h_0<n\leq y}Y(n)=\sum_{y-h_0<n\leq y}\left(\psi(n,x^{\frac{1}{2}})-\sum_{j=r+1}^k c_j(n) \right), \] upon applying the Prime Number Theorem and property~(4) above. Finally, by properties~(1)-(3) and the aforementioned argument from Section~4.1 of \cite{kumchev}, property~(iii) of Theorem~\ref{thm:kum525} holds as long as \[ Q\leq \min(x^{4\delta-2-\eta},Hx^{-\delta-\eta}), \] for some sufficiently small $\eta$. It suffices to consider $H\leq x^{3/5+\eta}$; then $H\leq x^{5\delta-2}$, so the second term of the minimum dominates and, since $\delta\geq 0.52+\eta$, the constraint is just $Q\leq Hx^{-0.525-\eta}$. This gives property~(iii) of Theorem~\ref{thm:kum525} upon taking $\delta=0.525-\epsilon$. Thus, it remains to find a fine decomposition for $\psi(n,x^{\frac{1}{2}})$.
The following sections contain many technical results giving bounds on expressions involving Dirichlet polynomials or weighted sums of the function $\psi(n,w)$, in many cases very similar to results in \cite{bhp} or \cite{kumchev}. Where relevant, significant changes in the proofs from those in \cite{bhp} and \cite{kumchev} are indicated.
\subsection{Dirichlet polynomials} The lemmas in this section are essentially the same as the lemmas of \cite[Section~2]{bhp} translated for general Dirichlet L-functions, with the addition of a few tools from \cite[Section~2]{kumchev} to deal with the additional factors of $Q$ that appear. They can be seen as strengthened versions of the lemmas of \cite[Section~2]{kumchev}.
We borrow our notation from both \cite{bhp} and \cite{kumchev}, as appropriate. Recall that $\delta\geq 0.525$. Let $\mathcal{L}=\log x$, $\Psi(T)=\min(zx^{-\frac{1}{2}},x^{\frac{1}{2}}T^{-1})$, and $w=\exp(\mathcal{L}/\log \mathcal{L})$. Write $Q=x^\theta$. $\epsilon$ and $\eta$ are taken to be small constants, not necessarily the same in every appearance. Likewise, $B$ is taken to be a large constant, not necessarily the same in every appearance. When the expression $\mathcal{L}^{-A}$ appears, $A$ can be taken to be arbitrarily large. We note that in many of the results below we will impose the condition that $Q\leq zx^{-\delta-\epsilon/2}$. This condition gives the bound \[ \Psi(T)QT\leq (x^{\frac{1}{2}}T^{-1}) zTx^{-\delta-\epsilon/2}=x^{1-\delta-\epsilon/2}(zx^{-\frac{1}{2}}), \] which will be useful for the integral bounds we wish to show.
Define an \emph{$L$-factor} to be a Dirichlet polynomial of the form \[ \sum_{k\sim K} \chi(k) k^{-s}\qquad \text{or}\qquad \sum_{k\sim K} \chi(k)(\log k) k^{-s}. \]
We assume without further comment that all Dirichlet polynomials defined in the following results are of the form \[ M(s,\chi)=\sum_{m\sim M} a_m \chi(m) m^{-s}, \] where the coefficients $a_m$ are bounded by $(\tau(m))^B$ for some $B$.
When considering integrals over an interval $[U_0,U]$, we define the $L_p$ norms \[
\|N\|_p:=\begin{cases}
\left(\sum_{q\sim Q}\sideset{}{^*}\sum_{\chi} \int_{U_0}^U \big|N\big(\tfrac{1}{2}+it,\chi\big)\big|^p\,dt\right)^{1/p}\qquad \text{if }1\leq p<\infty,\\
\sup_{(t,q,\chi):\,q\sim Q,\ t\in [U_0,U]}\big|N\big(\tfrac{1}{2}+it,\chi\big)\big| \qquad \text{if } p=\infty. \end{cases} \]
Note that these standard norms are related by a simple bound to the norms that Kumchev defines in \cite{kumchev} in terms of well-spaced sets. As in \cite{kumchev}, we define a \emph{well-spaced set} $\mathcal{T}=\mathcal{T}(Q',T)$ to be a set of tuples $(t,q,\chi)$ with $|t|\leq T$, $q\sim Q'$ such that if $(t,q,\chi),(t',q,\chi)\in \mathcal{T}$ with $t\neq t'$, then $|t-t'|\geq 1$. By considering for each ordered pair $(q,\chi)$ the optimal choice of $t$ in each unit interval, we obtain the bound \[
\sum_{q\sim Q'}\sideset{}{^*}\sum_{\chi} \int_{-T}^T \Big|N\Big(\frac{1}{2}+it,\chi\Big)\Big|^p\,dt \leq 2 \max_{\mathcal{T}}\sum_{(t,q,\chi)\in\mathcal{T}} \Big|N\Big(\frac{1}{2}+it,\chi\Big)\Big|^p, \] so that asymptotic bounds on Kumchev's norms apply to these standard $L_p$ norms as well. We start by recalling the following result giving a bound on the $L_2$ norm of a generic Dirichlet polynomial. \begin{lemma}[Kumchev, {\cite[Lemma~1]{kumchev}}] \label{lem:kum1} Given a Dirichlet polynomial $N(s,\chi)=\sum_{n\sim N} b_n\chi(n) n^{-s}$, we have \[
\|N\|_2^2\ll (N+Q'^2 U)G\mathcal{L}, \] where
$G=\sum_{n\sim N}|b_n|^2 n^{-1}$. \end{lemma} Since the Dirichlet series we work with always have coefficients bounded by a power of the divisor function, we always have $G\ll N^\varepsilon$ for any $\varepsilon>0$. In certain special cases, we can obtain stronger bounds on similar norms. The following lemma is an analogue of Lemma~2 from \cite{bhp}, and is essentially a form of the $L$-function analogue of Watt's theorem, proven in \cite{wattl}.
\begin{lemma} \label{lem:lbhp2} If $K(s,\chi)$ is an L-factor, $M<x$, $K \leq 4Q'U$, and $Q' \leq \max(K,U)$, then \[
\sum_{q\sim Q'}\sideset{}{^*}\sum_{\chi}\int_{1/2+iU/2}^{1/2+iU}|M(s,\chi)|^2|K(s,\chi)|^4|ds| \ll (UQ'^2)^{1+\epsilon}M^\epsilon (1+M^2(UQ')^{-1/2}). \] \end{lemma} \begin{proof} For $K\leq (Q'U)^\frac{1}{2}$, this is essentially proven in \cite{wattl} in the course of proving the main theorem there. For $(Q'U)^\frac{1}{2}\leq K\leq 4Q'U$, we proceed as in~\cite{bhp}, this time based on an approximate functional equation for Dirichlet L-functions as stated in \cite{wang}. Namely, if $s=\frac{1}{2}+it$ and $\chi$ is a primitive character to the modulus $q$, we have for any $X,Y>0$ satisfying $X \gg q$ and $2\pi XY=qt$, \begin{equation} \label{eqn:approxfe} L(s,\chi)=\sum_{n \le X}\chi(n)n^{-s}+A(s,\chi)\sum_{n \le Y}G(n,\chi)n^{s-1}+R(X,Y), \end{equation} where \[ A(s,\chi)=iq^{-s}(2\pi)^{s-1}\Gamma(1-s)e^{\frac{-\pi is}{2}}\chi(-1) \ll q^{-1/2}, \] \[ G(n,\chi)=\sum_{r=1}^{q}\chi(r)e(rn/q)=\overline{\chi(n)}G(1,\chi), \] with $e(t):=e^{2\pi it}$, and \[ R(X,Y) \ll q^{1/2}X^{-1/2}\log(Y+q+2)+Y^{-1/2}. \] If $Q'<U$, then $K \ge \sqrt{Q'U} \ge Q'$, so since $Q' \le \max(U,K)$, it follows for $X=K,2K$ that $R(X,Y) \ll \log(1+Q'U)$. Arguing as in \cite{bhp} then gives the bound \[
|K|\ll |K'|+|E|, \]
for some $L$-factor $K'$ with $K'\leq (Q'U)^{\frac{1}{2}}$ and some error $E\ll \log (1+Q'U)$. So, $|K|^4\ll |K'|^4+|E|^4$. Applying Lemma~\ref{lem:kum1} yields \begin{align*}
& \sum_{q\sim Q'}\sideset{}{^*}\sum_{\chi}\int_{1/2+iU/2}^{1/2+iU}|M(s,\chi)|^2|E|^4|ds| \ll \log^4(1+Q'U) \sum_{q\sim Q'}\sideset{}{^*}\sum_{\chi}\int_{1/2+iU/2}^{1/2+iU}|M(s,\chi)|^2|ds| \\ & \ll (Q'U)^{\epsilon} (M+Q'^2 U) M^\epsilon, \end{align*} which is absorbed into the claimed bound. By applying the argument in the previous case to $K'$, the lemma follows in this case as well.
\end{proof}
The lemmas that follow will make reference to a particular kind of Dirichlet polynomial, those defined by \[ N(s,\chi)=\sum_{p_i\sim P_i} \chi(p_1\cdots p_u) (p_1\cdots p_u)^{-s}, \] where $u\leq B$ for some constant $B$, $P_i\geq w$ for all $i$, and $P_1\cdots P_u \leq x$. We call such polynomials ``of bounded product type.''
The following lemma is an analogue of \cite[Lemma~1]{bhp}, and its proof is essentially the same, using a variant of Heath-Brown's identity \cite{6bhp} for $L$-functions instead of for the zeta function. \begin{lemma} \label{lem:lbhp1} Let $N(s,\chi)$ be a Dirichlet polynomial of bounded product type. Then for $\text{Re}(s)=\frac{1}{2}$, \[
|N(s,\chi)| \le g_1(s,\chi)+ \cdots +g_r(s,\chi),\: \text{with }r \le \mathcal{L}^B, \] where each $g_i$ is of the form \[
\mathcal{L}^B \prod_{i=1}^{h}|N_i(s,\chi)|,\qquad \text{with }h \le B,\: \prod_{i=1}^{h}N_i \le x, \] and among the Dirichlet polynomials $N_1, \cdots, N_h$ the only polynomials of length greater than $(QT)^{1/2}$ are L-factors. \end{lemma}
Let $\alpha'=\max(\alpha,\theta+(1-\delta))$ from this point onward. The next two lemmas are direct analogues of Lemmas~3 and~4 of \cite{bhp}. We will go through the proofs in some detail to highlight the appearance of the $\theta$ terms introduced by working with Dirichlet characters and the small extent to which they affect the bounds.
\begin{lemma} \label{lem:lbhp3} Let $MN_1N_2K=x$, $T\leq x$. Suppose that $M$, $N_1$, and $N_2$ are of bounded product type, and $K(s)$ is an L-factor. Further suppose that $Q\leq \min(zx^{-\delta-\epsilon/2},T)$. Let $M=x^{\alpha}$ and $N_j=x^{\beta_j}$ for $j=1,2$. Suppose that
\begin{equation} \label{eqn:bhp3cond1} \alpha \le \delta+\theta, \end{equation}
\begin{equation} \label{eqn:bhp3cond2} \alpha'+\beta_1+\frac{1}{2}\beta_2 \le \frac{1}{2}(1+\delta)+\frac{3}{2}\theta, \end{equation}
\begin{equation} \label{eqn:bhp3cond3} \alpha'+\beta_2 \le \frac{1}{4}(1+3\delta)+\theta, \end{equation}
\begin{equation} \label{eqn:bhp3cond4} \alpha'+\beta_1+\frac{3}{2}\beta_2 \le \frac{1}{4}(3+\delta)+\frac{3}{2}\theta. \end{equation}
Then for $1 \le U \le T$ and $1\leq Q'\leq Q$, \[
\Psi(T)\sum_{q\sim Q'}\sideset{}{^*}\sum_\chi \int_{U/2}^{U}|(MN_1N_2K)(\frac{1}{2}+it,\chi)|dt \ll Q z \mathcal{L}^{-A}. \] \end{lemma}
Note that this lemma immediately gives a bound for the same sum over all $q\leq Q$ via a dyadic decomposition of $[1,Q]$.
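Explicitly, splitting $[1,Q]$ into $O(\mathcal{L})$ dyadic ranges and applying the lemma, with $A+1$ in place of $A$, to each value $Q'=Q2^{-j}$ gives
\[
\Psi(T)\sum_{q\leq Q}\sideset{}{^*}\sum_\chi \int_{U/2}^{U}\Big|(MN_1N_2K)\Big(\frac{1}{2}+it,\chi\Big)\Big|\,dt \ll \mathcal{L}\cdot Qz\mathcal{L}^{-A-1}=Qz\mathcal{L}^{-A}.
\]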
\begin{proof} The case $Q'=1$ can be dealt with as in \cite[Lemma~3]{bhp}, giving an upper bound of $x^{1/2}\mathcal{L}^{-A}$. So, we can assume that $Q'>1$. First consider the case where $K>4Q'U$ and let $N=MN_1N_2$. By Lemma~6 of \cite{kumchev}, we have \[ K\Big(\frac{1}{2}+it,\chi\Big) \ll \delta(\chi) K^\frac{1}{2}U^{-1}+K^{-\frac{1}{2}}(qU)^\frac{1}{2}\log(qU) \ll \delta(\chi) K^\frac{1}{2}U^{-1}+\mathcal{L}. \]
Since $q>1$, we then have $\|K\|_\infty \ll \mathcal{L}$, so by H\"{o}lder's inequality and Lemma~\ref{lem:kum1}, the sum we wish to bound is at most \begin{align*}
\|KN\|_1 & \leq \|K\|_\infty \|1\|_2 \|N\|_2 \ll (Q'^2 U)^{\frac{1}{2}}(N+Q'^2U)^{\frac{1}{2}} U^{\epsilon/4} \mathcal{L}^B \\ & \ll Q^2 T^{1+\epsilon/4} \mathcal{L}^B + (Q'^2 U \frac{x}{K})^{\frac{1}{2}} U^{\epsilon/4} \mathcal{L}^B \ll Qx^{1-\delta-\frac{\epsilon}{4}} \mathcal{L}^{B}(zx^{-\frac{1}{2}-\epsilon/2}\Psi(T)^{-1}) + Q'^{\frac{1}{2}}U^{\epsilon/4}x^{\frac{1}{2}}\mathcal{L}^B \\ & \ll Qz\mathcal{L}^{-A}\Psi(T)^{-1}. \end{align*}
The last line follows because we can pick $\epsilon$ small enough such that $Q'^{\frac{1}{2}}U^{\epsilon/4}\ll Q^{\frac{1}{2}}x^{\epsilon/4} \ll x^{\theta}\mathcal{L}^{-A}=Q\mathcal{L}^{-A}$.
Now take $K\leq 4Q'U$. The Cauchy-Schwarz inequality yields that \[
\|KN\|_1 \leq \|M\|_2 \|N_1N_2^{\frac{1}{2}}\|_4 \|KN_2^{\frac{1}{2}}\|_4, \] where the integrals implicit in the norms are over $s=\frac{1}{2}+it$ with $t\in [U/2,U]$. If $Q'\leq U$ or $Q'\leq K$, then Lemma~\ref{lem:lbhp2} gives the bound \begin{equation} \label{eqn:lbhplem3cs}
\|KN_2^{\frac{1}{2}}\|_4^4 \ll (UQ'^2)^{1+\epsilon}N_2^\epsilon(1+N_2^2(UQ')^{-1/2}) \ll (Q^2 T)^{1+\epsilon} (1+N_2^2 (QT)^{-1/2}). \end{equation} On the other hand, if $K,U\leq Q'$, then Lemma~\ref{lem:kum1} gives \[
\|KN_2^{\frac{1}{2}}\|_4^4=\|K^2 N_2\|_2^2 \ll (K^2 N_2 + Q'^2 U)x^\epsilon \ll (Q'^2 N_2 + Q'^3)x^\epsilon \ll (Q^2 T + N_2^2 Q(QT)^{1/2}) x^\epsilon, \] since $Q'\leq T$. In either case, applying Lemma~\ref{lem:kum1} to the remaining norms $\|M\|_2$ and $\|N_1N_2^{\frac{1}{2}}\|_4$ in the Cauchy--Schwarz bound then gives, for arbitrarily small $\epsilon>0$, \begin{align*}
\|KN\|_1 & \ll x^{\epsilon/50}(M+Q^2 T)^{1/2}(N_1^2 N_2+Q^2 T)^{1/4}(Q^2 T)^{1/4} (1+N_2^2 (QT)^{-1/2})^{1/4} \\ & \ll \max(1,zx^{-\frac{1}{2}-\epsilon/2}\Psi(T)^{-1})\max(M,Qx^{1-\delta})^{1/2}\max(N_1^2 N_2,Qx^{1-\delta})^{1/4} \\ & \qquad \max(Qx^{1-\delta},N_2^2x^{(1-\delta)/2})^{1/4} \\ & \ll x^\gamma \max(1,zx^{-\frac{1}{2}-\epsilon/2}\Psi(T)^{-1}), \end{align*} where \[ \gamma=\frac{1}{2}\alpha'+\frac{1}{4}\max(2\beta_1+\beta_2,\theta+(1-\delta))+\frac{1}{4}(\theta+(1-\delta))+\frac{1}{4}\max(0,2\beta_2-\frac{1}{2}(1-\delta))-\frac{1}{25}\epsilon. \] Conditions (\ref{eqn:bhp3cond1})-(\ref{eqn:bhp3cond4}) guarantee that $\gamma\leq \frac{1}{2}+\theta-\frac{1}{25}\epsilon$, so we obtain that \[
\Psi(T)\|KN\|_1 \ll Qx^{\frac{1}{2}-\frac{1}{25}\epsilon}\max(zx^{-\frac{1}{2}}, zx^{-\frac{1}{2}-\epsilon/2}) \ll Qz\mathcal{L}^{-A}, \] as needed. \end{proof}
\begin{lemma} \label{lem:lbhp4} The conclusion of Lemma~\ref{lem:lbhp3} still holds if hypotheses (\ref{eqn:bhp3cond2})-(\ref{eqn:bhp3cond4}) are replaced by the following: \begin{enumerate}[(i)] \item Either $\beta_1 \leq \frac{1}{2}\theta+\frac{1}{2}(1-\delta)$ or $N_1$ is an L-factor; \item \[\beta_2 \le \frac{1}{8}(1+3\delta)-\frac{1}{2}\alpha'+\frac{1}{2}\theta. \] \end{enumerate} \end{lemma} \begin{proof} If either $K>4Q'U$, or $N_1>4Q'U$ and $N_1$ is an L-factor, then the argument from the beginning of the proof of Lemma~\ref{lem:lbhp3} still applies. So, assume we are not in these cases. Applying Cauchy-Schwarz in a slightly different way and then proceeding as before, we have \begin{align*}
\|KN\|_1 & \leq \|M\|_2 \|N_1\|_4 \|KN_2\|_4 \\
& \ll x^{\epsilon/50}(M+Q^2 T)^{1/2}\|N_1\|_4 (Q^2 T)^{1/4} (1+N_2^4 (QT)^{-1/2})^{1/4}. \end{align*} Here we either have $\beta_1\leq \frac{1}{2}\theta + \frac{1}{2}(1-\delta)$, in which case Lemma~\ref{lem:kum1} gives \[
\|N_1\|_4=\|N_1^2\|_2^{1/2}\ll (N_1^2+Q'^2 T)^{1/4} \ll Q^{1/4}x^{\frac{1-\delta}{4}}x^{\epsilon/4}\max(1,zx^{-\frac{1}{2}-\epsilon/2}\Psi(T)^{-1})^{1/4}, \] or $N_1$ is an L-factor with $N_1\leq 4Q'U$, in which case applying Lemma~\ref{lem:lbhp2} or Lemma~\ref{lem:kum1} depending on whether $Q'\leq \max(K,U)$ again gives \[
\|N_1\|_4 \ll (Q'^2 T)^{1/4}(1+(Q'T)^{-1/2})^{1/4} x^{\epsilon/4} \ll Q^{1/4}x^{\frac{1-\delta}{4}}x^{\epsilon/4}\max(1,zx^{-\frac{1}{2}-\epsilon/2}\Psi(T)^{-1})^{1/4}. \]
Thus, $\|KN\|_1\ll x^\gamma \max(1,zx^{-\frac{1}{2}-\epsilon/2}\Psi(T)^{-1})$, where we now let \[ \gamma=\frac{1}{2}\alpha'+\frac{1}{2}(\theta+1-\delta) - \frac{1}{10}\epsilon + \frac{1}{4}\max(0,4\beta_2-\frac{1}{2}(1-\delta)). \] Condition~(ii) then gives $x^\gamma \ll Qx^\frac{1}{2} \mathcal{L}^{-A}$, and the proof then concludes as before. \end{proof}
\begin{lemma} \label{lem:lbhp5} Let $T\leq x$ and suppose $Q\leq \min(zx^{-\delta-\epsilon/2},T)$. Let $K(s)$ be an L-factor. Let $M=x^{\alpha}$, $N=x^{\beta}$, with $KMN=x$, and suppose that $\alpha \le \delta+\theta$ and \[ \beta \leq \min\left(\frac{1}{2}(3\delta+1-4\alpha')+2\theta,\frac{1}{5}(3+\delta-4\alpha')+\frac{6}{5}\theta\right). \] Suppose further that $M(s)$ and $N(s)$ are of bounded product type. Then \[
\Psi(T)\sum_{q\leq Q} \sideset{}{^*}\sum_{\chi}\int_{2}^{T}\left|(MNK)\left(\frac{1}{2}+it,\chi\right)\right|dt \ll Q z \mathcal{L}^{-A}. \] \end{lemma}
\begin{proof} The proof is essentially the same as the proof of \cite[Lemma~5]{bhp}, with $a$ now defined by \[ a=\max(0,2\beta-(1+\delta)+2\alpha'-3\theta), \] and with the invocations of Lemmas~1-4 of~\cite{bhp} replaced by invocations of Lemmas~\ref{lem:lbhp2}-\ref{lem:lbhp4}.
\end{proof}
Note that by symmetry we have the same bound for the integral over the range $[-T,-2]$. Since $M,N$ are of bounded product form and $K$ is an $L$-factor, by \cite[Lemma~5]{2bhp} and \cite[Lemma~6]{kumchev} we have \[
\|MNK\|_\infty \ll (MN)^{\frac{1}{2}}\mathcal{L}^{-A} \ll x^{1/2} \mathcal{L}^{-A}, \] so that we can extend the result of Lemma~\ref{lem:lbhp5} to an integral over the whole interval $[-T,T]$. That is, \[
\Psi(T) \sum_{q\leq Q} \sideset{}{^*}\sum_{\chi}\int_{-T}^{T}|(MNK)(\frac{1}{2}+it,\chi)|dt \ll Q z \mathcal{L}^{-A}. \]
\subsection{Sieve Estimates} We now use the bounds on Dirichlet polynomials derived in the previous section to obtain the Bombieri--Vinogradov style error bounds we want for our sequences of coefficients, under suitable conditions on the lengths of the polynomials involved.
The following lemma, analogous to Lemma~6 of \cite{bhp}, is the key to going from Dirichlet polynomial bounds to these error bounds. \begin{lemma} \label{lem:lbhp6} Let $F(s,\chi)=\sum_{k \sim x}c_k\chi(k) k^{-s}$. If for every $T\leq x$ and $Q\leq \min(zx^{-\delta-\eta},T)$ we have \[
\Psi(T) \sum_{q\sim Q} \sideset{}{^*} \sum_\chi\int_{-T}^{T} \Big|F\Big(\frac{1}{2}+it,\chi\Big)\Big|dt \ll Q z \mathcal{L}^{-A}, \] then \begin{equation} \label{eqn:errorbd}
\sum_{q\sim Q} \sideset{}{^*} \sum_\chi \max_{h\leq z}\max_{y\sim x} |E_c(y,h;\chi)|\ll Qz\mathcal{L}^{-A}. \end{equation}
\end{lemma} \begin{proof} For the term with $q=1$ and $\chi$ the trivial character, \cite[Lemma~6]{bhp} gives a bound of $z\mathcal{L}^{-A}$. For every other $\chi$, applying the truncated Perron formula and arguing as in \cite[Lemma~9]{kumchev} yields \[
\max_{h\leq z}\max_{y\sim x} |E_c(y,h;\chi)|\ll \mathcal{L} \max_{Q\leq T\leq x} \Psi(T) \int_{-T}^T \Big|F\Big(\frac{1}{2}+it,\chi\Big)\Big|dt+x^{\eta}, \] which yields the desired result upon summing over $q$ and $\chi$. \end{proof}
The following lemma is an analogue of Lemma~8 in \cite{bhp}, and the proof carries over with no significant changes.
\begin{lemma} \label{lem:lbhp8} Let $M(s)=\sum_{m \sim M}a_m \chi(m) m^{-s}$, $N(s)=\sum_{n \sim N}b_n \chi(n) n^{-s}$, $M=x^{\alpha}$, $N=x^{\beta}$. Suppose that $\alpha \le \delta+\theta-\epsilon$ and \[ \beta \le \min\Big(\frac{1}{2}(3\delta+1-4\alpha')+2\theta,\frac{1}{5}(3+\delta-4\alpha')+\frac{6}{5}\theta\Big)-2\epsilon. \] Finally, suppose that $M$ and $N$ are of bounded product type, and $Q\leq \min(zx^{-\delta-\epsilon/2},T)$.
Let \[ c(k)=\sum_{\substack{mn\ell=k \\ m\sim M, n\sim N}} a_m b_n \psi(\ell,w), \] where $w=\exp\left(\frac{\mathcal{L}}{\log \mathcal{L}}\right)$. Then equation~\eqref{eqn:errorbd} holds. \end{lemma}
The next lemma is analogous to Lemma~12 of \cite{bhp} and Lemma~10 of \cite{kumchev}.
\begin{lemma} \label{lem:lbhp12} Let $\alpha \in [0,\frac{1}{2}]$, and write \[ h=\Big\lceil \frac{\frac{1}{2}-\alpha}{2\delta-1}\Big\rceil, \] \[ \alpha^*=\max\left(\frac{2h(1-\delta)-\alpha}{2h-1},\frac{2(h-1)\delta+\alpha}{2h-1}\right). \] Suppose \[ 0 \le \beta \le \min\left(\frac{1}{2}(3\delta+1-4\alpha^{*}), \frac{1}{5}(3+\delta-4\alpha^{*})\right)-2\epsilon, \] and $Q\leq zx^{-\delta-\eta}$. Let $M(s)=\sum_{m \sim M}a_m \chi(m) m^{-s}$, $N(s)=\sum_{n \sim N}b_n\chi(n) n^{-s}$, $2M=x^{\alpha}$, $N=x^{\beta}$, where $M(s)$ and $N(s)$ are of bounded product type. Let \[ I_h=\left[\frac{1}{2}-2h\left(\delta-\frac{1}{2}\right),\frac{1}{2}-(2h-2)\left(\delta-\frac{1}{2}\right)\right), \] and let \[ \nu(\alpha)=\min\left(\frac{2}{2h-1}(\delta-\alpha),\frac{36\delta-17}{19}\right), \] when $\alpha \in I_h$, for $h \ge 1$. Then equation~\eqref{eqn:errorbd} holds for \[ c(k)=\sum_{\substack{mn\ell=k \\ m\sim M, n\sim N}} a_m b_n \psi(\ell,x^{\nu}), \] for every $\nu\leq \nu(\alpha)$. \end{lemma} \begin{proof} The proof proceeds as in \cite[Lemma~12]{bhp}, with an application of Lemma~\ref{lem:lbhp8} using $x^{\alpha^*}$ in place of $M=x^\alpha$. In place of \cite[Lemma~9]{bhp}, \cite[Lemma~2]{kumchev} is used. As in \cite[Lemma~10]{kumchev}, since $1-\delta \leq \alpha^* \leq \frac{1}{2}$, the dependence on $Q=x^\delta$ is eliminated from the bounds on $\alpha$ and $\beta$. \end{proof}
An analogue of Lemma~13 in \cite{bhp} follows immediately from the same arguments. It is stated here for ease of reference.
\begin{lemma} \label{lem:lbhp13} Suppose that $Q\leq zx^{-\delta-\eta}$. Let $M=x^{\alpha}$, $N_1=x^{\beta}$, $N_2=x^{\gamma}$, where $M(s,\chi),N_1(s,\chi),N_2(s,\chi)$ are of bounded product type. Suppose $\alpha\leq \frac{1}{2}$ and either \begin{enumerate}[(i)] \item \begin{align*} 2\beta+\gamma&\leq 1+\delta-2\alpha^*-2\epsilon,\\ \gamma &\leq \frac{1}{4}(1+3\delta)-\alpha^*-\epsilon,\\ 2\beta+3\gamma &\leq \frac{1}{2}(3+\delta)-2\alpha^*-2\epsilon, \end{align*} or \item \[ \beta\leq \frac{1}{2}(1-\theta),\qquad \gamma\leq \frac{1}{8}(1+3\theta-4\alpha^*)-\epsilon. \] \end{enumerate} Let \[ b_n=\sum_{\substack{n_1n_2=n\\ n_1\sim N_1,n_2\sim N_2}} A_{n_1} B_{n_2}, \] where $A_{n_1},B_{n_2}$ are the coefficients of $N_1$ and $N_2$. Then equation~\eqref{eqn:errorbd} holds for \[ c(k)=\sum_{\substack{mn\ell=k \\ m\sim M, n\sim N}} a_m b_n \psi(\ell,x^{\nu}), \] for every $\nu\leq \nu(\alpha)$. \end{lemma}
The last result we need to borrow is an adaptation of Lemma~18 from \cite{bhp}. This gives a bound of the form \begin{equation} \label{eqn:lbhp18}
\Psi(T)\sum_{q\leq Q}\sideset{}{^*}\sum_{\chi}\int_{2}^T \left|L_1\left(\frac{1}{2}+it,\chi\right)\cdots L_\ell \left(\frac{1}{2}+it,\chi\right)\right|dt \ll Qz\mathcal{L}^{-A}, \end{equation} for $L_1\cdots L_\ell=x$, $\ell\geq 3$, $L_j=x^{\alpha_j}$, $\alpha_j\geq \epsilon$, assuming that $(\alpha_1,\dots,\alpha_\ell)$ lies in a particular region in $[0,1]^\ell$. The full list of bounds required is omitted for the sake of brevity; it is the same list as in the statement of \cite[Lemma~18]{bhp}, with only the additional condition that $Q\leq zx^{-\delta-\eta}$. The proof is likewise analogous to the proof of \cite[Lemma~18]{bhp}, with factors of $Q$ inserted before occurrences of $T$ in bounds (compare the difference between the proofs of \cite[Lemma~9]{bhp} and \cite[Theorem~4]{2bhp}). An application of Lemma~\ref{lem:lbhp6} then gives us an estimate on sums of the form \[ \sum_{\substack{p_1\cdots p_{\ell-1}m=n\\p_i\sim L_i \forall i}} \psi(m,p_{\ell-1}), \] which will be useful in estimating terms in our final decomposition.
\subsection{Final Decomposition} We finish the proof of Theorem~\ref{thm:kum525} by outlining the construction of a fine decomposition of $\psi(n,x^{\frac{1}{2}})$. We start the process by following Kumchev's procedure. Let $w_0=x^{2\delta-1}$, and let $w(m)=x^{\nu(\alpha)}$, where $m=x^\alpha$ and $\nu(\alpha)$ is defined as in the statement of Lemma~\ref{lem:lbhp12}. Note that $w(m)\geq x^{2\delta-1}$ for all $m<x^{\frac{1}{2}}$.
Applying Buchstab's identity twice to $\psi(n,x^{\frac{1}{2}})$ gives \[ \psi(n,x^{\frac{1}{2}})=\psi(n,w_0)-\sum_{\substack{n=mp\\w_0\leq p<x^{\frac{1}{2}}}} \psi(m,w(p))+\sum_{\substack{n=mp_1p_2\\w(p_1)\leq p_2<p_1<x^{\frac{1}{2}}}} \psi(m,p_2). \] We can set \[ c_1(n)=\psi(n,w_0), \qquad c_{k+1}(n)=\sum_{\substack{n=mp\\w_0\leq p<x^{\frac{1}{2}}}} \psi(m,w(p)), \] and \[ c_2(n)=\sum_{\substack{n=mp_1p_2\\w(p_1)\leq p_2<p_1<x^{\frac{1}{2}}\\p_2<w(p_1 p_2)}} \psi(m,p_2). \] As argued in \cite{kumchev}, $c_1,c_2,c_{k+1}$ are nonnegative by construction and satisfy properties~(1)-(3). It remains to further decompose the remaining sum \begin{equation} \label{eqn:buchmain} \sum_{\substack{n=mp_1p_2\\w(p_1)\leq p_2<p_1<x^{\frac{1}{2}}\\p_2\geq w(p_1 p_2)}} \psi(m,p_2). \end{equation}
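For reference, the identity being iterated here is Buchstab's identity, which in the present notation reads, for $2\leq z_2\leq z_1$,
\[
\psi(n,z_1)=\psi(n,z_2)-\sum_{\substack{n=mp\\ z_2\leq p<z_1}}\psi(m,p),
\]
obtained by sorting the integers counted by $\psi(\cdot,z_2)$ but not by $\psi(\cdot,z_1)$ according to their least prime factor $p$; the second application above applies this again to each term $\psi(m,p)$, with $w(p)$ in place of $z_2$.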
Our goal, as in \cite{kumchev}, is to split the range of the sum in \eqref{eqn:buchmain} into several parts, then further decompose the sum over each part using Buchstab's identity. To make sure the decomposition we end with satisfies property~(3) of a fine decomposition, we want most of the terms $c$ to satisfy \eqref{eqn:errorbd}, so that we can set them to be $c_j$ for $j\leq r$ or $j\geq k+1$, depending on whether they are positive or not. To satisfy property~(4), we want the remaining terms $c$, which we set to be $c_j$ for $r+1\leq j\leq k$, to all be nonnegative and to not contribute much to the sum on the left hand side of \eqref{eqn:fine4}. Since all terms in our decomposition are constructed using repeated applications of Buchstab's identity with $w_2\geq w_0\geq x^{2\delta-1}$, properties~(1) and~(2) are automatically satisfied.
We decompose along the same lines as in \cite{bhp}, in order to obtain sharper bounds on the terms contributing to property~(4) than obtained by using Kumchev's decomposition \cite{kumchev}. For any multiset $\mathcal{E}$ of positive integers, the decomposition given in \cite{bhp} is presented as a decomposition of the function $S(\mathcal{E},z)$ defined by \[ S(\mathcal{E},z)=\sum_{n\in \mathcal{E}} \psi(n,z). \] When $c_j$ is given in terms of the function $\psi$, the left hand side of \eqref{eqn:fine4} can be written in terms of the function $S$. The transformation on sums over $S$ that BHP calls a ``r\^{o}le-reversal'' can also be applied to sums over $\psi$; we refer the reader to \cite[Section~4]{kumchev} for details.
First, we split off the part of \eqref{eqn:buchmain} with $p_1 p_2^2\geq x$. This term is $c_{r+1}(n)$ in the decomposition given by Kumchev, and his analysis shows that its contribution to the left hand side of \eqref{eqn:fine4} is $O\left(\frac{h_0}{\log^2 x}\right)=o(1)\frac{h_0}{\log x}$.
The remainder of the sum is split into six parts exactly as in \cite{bhp}. Letting $p_1=x^{\alpha_1},p_2=x^{\alpha_2}$, we divide the set of pairs $(\alpha_1,\alpha_2)$ counted in the remaining sum into the regions \begin{align*} A:\: & \frac{1}{4}\leq \alpha_1\leq \frac{2}{5},\: \frac{1}{3}(1-\alpha_1)\leq \alpha_2\leq \min(\alpha_1,\frac{1}{2}(3\delta-1),1-2\alpha_1);\\ B:\: & \frac{1}{4}(3-3\delta)\leq \alpha_1\leq \frac{1}{2},\\ & \max(\frac{1}{2}\alpha_1,1-2\alpha_1)\leq \alpha_2\leq \min(\frac{1}{2}(3\delta-1),\frac{1}{2}(1-\alpha_1));\\ C:\: & \nu(0)\leq \alpha_1\leq \frac{1}{3}, \: \nu(\alpha_1)\leq \alpha_2\leq \min(\alpha_1,\frac{1}{3}(1-\alpha_1));\\ D:\: & \frac{1}{3}\leq \alpha_1\leq \frac{1}{2}, \: \nu(\alpha_1)\leq \alpha_2\leq \max(\frac{1}{3}(1-\alpha_1),\frac{1}{2}\alpha_1);\\ E:\: & \frac{1}{2}(3\delta-1)\leq \alpha_1\leq \frac{1}{4}(3-3\delta),\: \frac{1}{2}(3\delta-1)\leq \alpha_2\leq \min(\alpha_1,1-2\alpha_1); \\ F:\: & \frac{1}{3}\leq \alpha_1\leq 2-3\delta,\: \max(1-2\alpha_1,\frac{1}{2}(3\delta-1))\leq \alpha_2\leq \frac{1}{2}(1-\alpha_1). \end{align*}
As in \cite{bhp}, we note that $(\alpha_1,\alpha_2)\in A$ if and only if $(1-\alpha_1-\alpha_2,\alpha_2)\in B$, and similarly for $E$ and $F$. Moreover, in all of these regions, if $\psi(m,p_2)=1$ and $n=mp_1p_2$, then $m$ must be prime. So, the contributions of the regions $A$ and $B$ are equal, and likewise those of $E$ and $F$ are equal.
We handle the sums over these regions using the same procedure as in \cite{bhp}. In place of \cite[Lemmas~12 and~13]{bhp}, we use their analogues, Lemmas~\ref{lem:lbhp12} and~\ref{lem:lbhp13}. The result is a contribution of $\leq 0.3$ to the constant factor $\beta$ in \eqref{eqn:fine4} from $A\cup B$. For $E\cup F$, for simplicity of exposition we place it among the $c_j$ for $r+1\leq j\leq k$. As noted in \cite{bhp}, the total contribution to $\beta$ from this is $\leq 0.09$. For $C$ and $D$, we again follow the argument of \cite{bhp}, with the lemma replacements mentioned above where needed. The contributions to $\beta$ from $C$ and $D$ are $<0.21$ and $<0.34$, respectively. Thus in total we have a fine decomposition with \[ \beta \leq 0.3+0.09+0.21+0.34 = 0.94 < 1, \] as desired.
\end{document} |
\begin{document}
\version{1}
\title{ \centering A deep learning theory\\ for neural networks grounded in physics }
\author{Benjamin Scellier}
\copyrightyear{2020}
\department{Département d'informatique et de recherche opérationnelle}
\date{December 31, 2020}
\sujet{Informatique}
\president{Irina Rish} \directeur{Yoshua Bengio} \membrejury{Pierre-Luc Bacon} \examinateur{Yann Ollivier}
\maketitle
\anglais \chapter*{Abstract}
In the last decade, deep learning has become a major component of artificial intelligence, leading to a series of breakthroughs across a wide variety of domains. The workhorse of deep learning is the optimization of loss functions by stochastic gradient descent (SGD). Traditionally in deep learning, neural networks are differentiable mathematical functions, and the loss gradients required for SGD are computed with the backpropagation algorithm. However, the computer architectures on which these neural networks are implemented and trained suffer from speed and energy inefficiency issues, due to the separation of memory and processing in these architectures. To solve these problems, the field of neuromorphic computing aims at implementing neural networks on hardware architectures that merge memory and processing, just like brains do. In this thesis, we argue that building large, fast and efficient neural networks on neuromorphic architectures also requires rethinking the algorithms to implement and train them. We present an alternative mathematical framework, also compatible with SGD, which offers the possibility to design neural networks in substrates that directly exploit the laws of physics. Our framework applies to a very broad class of models, namely those whose state or dynamics are described by variational equations. This includes physical systems whose equilibrium state minimizes an energy function, and physical systems whose trajectory minimizes an action functional (principle of least action). We present a simple procedure to compute the loss gradients in such systems, called equilibrium propagation (EqProp), which requires solely locally available information for each trainable parameter. Since many models in physics and engineering can be described by variational principles, our framework has the potential to be applied to a broad variety of physical systems, whose applications extend to various fields of engineering, beyond neuromorphic computing.
\paragraph{Keywords:} deep learning, machine learning, physical learning, equilibrium propagation, energy-based model, variational principle, principle of least action, local learning rule, stochastic gradient descent, Hopfield networks, resistive networks, circuit theory, principle of minimum dissipated power, co-content, neuromorphic computing
\anglais
\cleardoublepage \pdfbookmark[chapter]{\contentsname}{toc}
\tableofcontents \cleardoublepage \phantomsection \listoftables \cleardoublepage \phantomsection \listoffigures
\chapter*{Abbreviations List}
\begin{twocolumnlist}{.2\textwidth}{.7\textwidth}
BPTT & Backpropagation Through Time\\
CHL & Contrastive Hebbian Learning\\
CIFAR-10 & A dataset of images of animals and objects (a standard benchmark in machine learning)\\
ConvNet & Convolutional Network\\
DHN & Deep Hopfield Network\\
EBM & Energy-Based Model\\
EqProp & Equilibrium Propagation\\
GDD & Gradient Descending Dynamics, a property that relates the dynamics of EqProp to the loss gradients, in the discrete-time setting of Chapter \ref{chapter:discrete-time}\\
GPU & Graphics Processing Unit\\
LBM & Lagrangian-Based Model\\
MNIST & A dataset of images of handwritten digits (a standard benchmark in machine learning)\\
RBP & Recurrent Back-Propagation\\
SGD & Stochastic Gradient Descent\\ \end{twocolumnlist}
\cleardoublepage
\vspace*{2cm} \epigraph{
\itshape To the memory of my mother}
\vspace*{1cm}
\section{On Artificial Intelligence}
Computers have long been able to surpass humans at tasks that we like to think of as intellectually advanced. For example, in 1997, Deep Blue, a computer program created by IBM, beat the world champion Garry Kasparov at the game of chess \citep{campbell2002deep}. While this was an impressive achievement, the form of intelligence that we may want to attribute to Deep Blue is very rudimentary. The strategy of Deep Blue consists in analyzing essentially every possible scenario to pick the move leading to the best possible outcome. The state of technology at the time made it possible, with enough computing power, to automate this brute-force strategy. Humans, on the other hand, are not able to analyze every possible scenario at the game of chess in a reasonable amount of time. Our brains haven't evolved to do that. Instead, we develop intuitions about what the most promising moves are. Developing the right intuitions is what makes the game of chess challenging for us.
Conversely, there are plenty of tasks that humans (and other animals) do so naturally and effortlessly that it can be hard to appreciate the difficulty of building machines to automate them. Consider for example the task of classifying images of cats and dogs. Although we now have computer programs that can classify images fairly reliably, it is only in the past ten years that we have seen impressive improvements in this area. No one is said to be `intelligent' for being able to tell apart cats from dogs, given that a two-year-old can already do this. So why was it so difficult to design programs to classify images? Solving this task is in fact far more complex than it seems. When we see something, myriads of calculations are performed continuously and automatically in our brains. These calculations happen `behind the scenes', unconsciously, until the concept of `cat' or `dog' pops up to our consciousness. We take for granted the fact that our brains do all these calculations for us, every second of every day. Perhaps one way to appreciate the difficulty of writing a program to classify images is to keep in mind that when our eyes see a cat, the computer program sees a bunch of numbers corresponding to pixel intensities (Fig.~\ref{fig:cifar-samples}). The difficulty for the programmer thus resides in making sense of and handling these numbers in such a way that they produce the answer `cat'. Similarly, the human brain is able to learn to hear and recognize sounds, to learn to process the sense of touch, etc. We can perceive the world around us, make sense of it and interact with it like no computer or machine can do today. Undoubtedly, humans (and other animals) are in many ways vastly smarter than machines today.
\begin{figure}\label{fig:cifar-samples}
\end{figure}
\subsection{Human Intelligence as a Benchmark}
Artificial intelligence (AI) emerged as a research discipline in the 1950s from the idea that every aspect of human intelligence can in principle be discovered, understood, and built into machines. The idea of AI started after Alan Turing formalized the notion of computation and began to study how computers can solve problems on their own \citep{turing2009computing}, but the term \textit{artificial intelligence} was first coined by John McCarthy in 1956. Today, the term AI is usually used in a broader sense, to refer to the field of study concerned with designing computational systems that solve practical tasks we want to automate, whether or not such tasks require some form of human-like intelligence, and whether or not such computational systems are inspired by the brain. Nevertheless, human intelligence is a natural benchmark for building `intelligent' machines. This is an arbitrary choice, and by no means implies that we humans should be regarded as the `perfect' or `ultimate' form of intelligence. But until we are able to build machines that can do all the incredible things that we humans can do, as easily and as effortlessly as we do them, it seems natural to take human intelligence as a benchmark. In the quest to build intelligent machines, Turing proposed that the goal would be reached when our machines can exhibit intelligent behaviours indistinguishable from those of humans.
\subsection{Machine Learning Basics}
Consider the task of classifying images of cats and dogs mentioned earlier. Say that each image is made of $1000$ by $1000$ pixels, each pixel being described by three numbers (in the RGB color representation). Thus, each image can be represented by a vector of three million numbers. The goal is to come up with a program which, given such a vector $x$ as input, produces `dog' or `cat' as output, accordingly. Because there exist many very different vectors $x$ associated with the concept of `dog', there is no obvious, simple and reliable rule to recognize a dog. To solve this task, the program must combine a very large number of `weak rules'.
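As a concrete (purely illustrative) aside, here is what this vector representation looks like in code, using a tiny hypothetical 4-by-4 RGB image in place of the 1000-by-1000 one:

```python
# A tiny stand-in for an RGB image: height x width x 3 channels,
# with each entry a pixel intensity between 0 and 255.
height, width, channels = 4, 4, 3
image = [[[(10 * r + c) % 256 for _ in range(channels)]
          for c in range(width)]
         for r in range(height)]

# A classification program does not "see" a picture: it sees the image
# as one flat vector of numbers, exactly as described above.
x = [value
     for row in image
     for pixel in row
     for value in pixel]

print(len(x))  # 4 * 4 * 3 = 48 numbers; a 1000-by-1000 image gives 3,000,000
```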
Early forms of AI consisted of explicit, manually crafted rules, e.g. based on formal logic. However, using this methodology to figure out all the weak rules necessary to correctly classify images is a daunting task, given the complexity of real-world images. One of the key features of the brain, which these traditional programs did not have, is its ability to learn from experience and to adapt to the environment. Arthur Samuel introduced a new approach to AI, called \textit{machine learning} (ML) \citep{samuel1959some}, that takes inspiration from how we learn. In the ML approach, instead of operating with predetermined (i.e. immutable) instructions, the program is made of flexible rules that depend on adjustable parameters. As we modify the parameters, the program changes. The goal is then to tune these parameters so that the resulting program solves the task we want.
To solve the task of image classification, the ML approach requires collecting lots of examples that specify the correct output (the label) for a given input (the image). Such a collection of examples is called a \textit{dataset}. Then we use these examples to adjust the parameters of the ML program so that, for each input image, the program produces as output the label associated with that image. Such a procedure for adjusting the parameters is called a \textit{learning algorithm}: the ML program \textit{learns} from examples to solve the problem. Once trained, the program can then be used to predict outputs for new unseen inputs. The performance of the program is assessed on a separate set of examples called the \textit{test dataset}.
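To make the notions of adjustable parameters, learning algorithm and test dataset concrete, here is a minimal, purely illustrative sketch (not a program from this thesis): a one-parameter model $y \approx w \cdot x$ fitted by gradient descent on a toy dataset, then evaluated on a held-out example.

```python
# Toy supervised learning: fit y = w * x by gradient descent on squared error.
train_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) examples
test_data = [(4.0, 8.0)]                           # held-out test example

w = 0.0      # the adjustable parameter, initially arbitrary
lr = 0.05    # learning rate: the size of each adjustment

# The learning algorithm: repeatedly nudge w to reduce the error
# on each training example.
for epoch in range(200):
    for x, y in train_data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of (w*x - y)^2 with respect to w
        w -= lr * grad

# Assess the trained program on unseen data.
test_error = sum((w * x - y) ** 2 for x, y in test_data)
print(round(w, 3))  # w has been tuned close to 2.0, the rule behind the data
```

Real neural networks differ mainly in scale: millions of parameters, and gradients computed by backpropagation rather than by hand.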
In the setting of image classification, the labels are provided together with the corresponding images. This setting, where the expected output is known in advance for the available data, is called \textit{supervised learning}. This type of learning is currently the most widely used and successful approach to ML. Depending on the task that we want to solve and the type and amount of data available, there are two other main machine learning paradigms for training an ML program: unsupervised learning and reinforcement learning. Unsupervised learning refers to the case where no explicitly identified labels exist for the data. Reinforcement learning refers to the case where no exact labels exist, but a scalar value (usually called a `reward') is available that provides some knowledge of whether a proposed output is good or bad.
\section{Neural Networks}
Artificial neural networks (ANNs) are a family of ML models inspired by the basic working mechanisms of the brain. In recent years, ANNs have had resounding success in AI, in areas as diverse as image recognition, speech recognition, image generation, speech synthesis, text generation and machine translation. We start this section by presenting the basic concepts of neuroscience that have inspired the design of ANNs. Then we present the key principles at the heart of these ANNs. Finally, we point out some weaknesses in the current hardware implementation of these principles, which make these neural networks orders of magnitude less energy efficient than brains.
Subsection \ref{sec:neuroscience} is inspired by \citet[Chapter 5]{dehaene2020we}.
\subsection{Neuroscience Basics} \label{sec:neuroscience}
The foundations of modern neuroscience were laid by Santiago Ramon y Cajal, several decades before AI research started. Cajal was the first to observe the brain's micro-organisation with a microscope. He observed that the brain consists of disjoint nerve cells (the \textit{neurons}), not of a continuous network as the proponents of the \textit{reticular theory} thought before him. Neurons have a very particular shape. Each neuron is composed of three main parts (Figure \ref{fig:neuron}): a large `tree' composed of thousands of branches (the \textit{dendrites}\footnote{In Greek, the word \textit{dendron} means tree.}), a cell body (also called the \textit{soma}), and a long fiber which extends out of the cell body towards other neurons (the \textit{axon}). A neuron collects information from other neurons through its dendritic tree. The messages collected in the dendrites converge to the cell body, where they are compiled. After compilation, the neuron sends a single message, called an \textit{action potential} (or \textit{spike}), which is carried along its axon away from the cell body. In turn, this message is delivered to other neurons.
\begin{figure}\label{fig:neuron}
\end{figure}
\footnotetext{https://towardsdatascience.com/a-gentle-introduction-to-neural-networks-series-part-1-2b90b87795bc}
While neurons are distinct cells, they come into contact at certain points called \textit{synapses} (Figure \ref{fig:synapse}). Synapses are junction zones through which neurons communicate. Specifically, each synapse is the point of contact between the axon of one neuron (the \textit{pre-synaptic} neuron) and the dendrite of another neuron (the \textit{post-synaptic} neuron). The message traveling through the axon of the pre-synaptic neuron is electrical, but the synapse turns it into a chemical message. The axon terminal of the pre-synaptic neuron contains small pockets (the \textit{vesicles}) filled with molecules (the \textit{neurotransmitters}). When the electrical signal reaches the axon terminal, the vesicles open and the neurotransmitters flow into the small synaptic gap between the two neurons. The neurotransmitters then bind with the membrane of the post-synaptic neuron at specific points (the \textit{receptors}). A neurotransmitter acts on a receptor like a key in a lock: it opens `gates' (called \textit{channels}) in the post-synaptic membrane. As a result, ions flow from the extra-cellular fluid through these channels and generate a current in the post-synaptic neuron. To sum up, the message coming from the pre-synaptic neuron has gone from electrical to chemical and back to electrical, and in the process it has been transmitted to the post-synaptic neuron.
Each synapse is a chemical factory in which numerous elements can be modified: the number of vesicles and their size, the number of receptors and their efficacy, as well as the size and the shape of the synapse itself. All these elements affect the strength with which the pre-synaptic electrical message is transmitted to the post-synaptic neuron. Synapses are constantly modified and these modifications reflect what we \textit{learn}.
\begin{figure}\label{fig:synapse}
\end{figure}
\footnotetext{https://thesalience.wordpress.com/neuroscience/the-chemical-synapse/chemical-synapses/}
The human brain is composed of around 100 billion ($10^{11}$) neurons, interconnected by a total of around a quadrillion ($10^{15}$) synapses. The brain is a huge parallel computer: in this incredibly complex machine, all the synapses work in parallel -- like independent nanoprocessors -- to process the messages sent between the neurons. Besides, synapses are modified in response to experience, and in turn these modifications alter our behaviours. Thus, synapses are both the computing units and the memory units of the brain. Everything we do, all our thoughts, memories and behaviours, emerges from the neural activity generated by this machinery.
One of the fundamental questions of neuroscience is that of figuring out the \textit{learning algorithms} of the brain: what is the set of rules which translate the experiences we have into synaptic changes, and how do these synaptic changes modify our behaviour? Understanding the brain’s learning algorithms is not only key to understanding the biological basis of intelligence, but would also unlock the development of truly intelligent machines.
\subsection{Artificial Neural Networks}
Artificial neural networks are ML models that draw inspiration from real brains. Artificial neurons imitate the functionality of biological neurons. These models are highly simplified: they keep some essential ideas from real neurons and synapses but discard many details of their working mechanisms. The first neuron model was introduced by \citet{mcculloch1943logical}, but the idea of using such artificial neurons in machine learning was proposed by \citet{rosenblatt1958perceptron} and \citet{widrow1960adaptive}. The artificial neurons used today in deep learning are essentially unchanged and rely on the same basic mathematics.
Each neuron $i$ is described by a single number $y_i$. This number can be thought of as the \textit{firing rate} of neuron $i$, that is the rate of spikes sent along its axon. Each synapse is also described by a single number, representing its strength. The strength of the synapse connecting pre-synaptic neuron $j$ to post-synaptic neuron $i$ is denoted $W_{ij}$. These artificial synapses can transmit signals with different efficacies depending on their strength. The neuron calculates a nonlinear function of the weighted sum of its inputs: \begin{equation}
\label{eq:artificial-neuron}
y_i = \sigma \left( \sum_j W_{ij} y_j \right). \end{equation} The \textit{pre-activation} $x_i = \sum_j W_{ij} y_j$ is a weighted sum of the messages received from other neurons, weighted by the corresponding synaptic strengths. $x_i$ can be thought of as the membrane voltage of neuron $i$. $\sigma$ is a function called an \textit{activation function}, which maps $x_i$ onto the firing rate $y_i$.
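As a concrete illustration (not taken from the thesis itself), the computation of Eq.~\ref{eq:artificial-neuron} can be sketched in a few lines of Python, here with a sigmoid as an illustrative choice of $\sigma$ and made-up firing rates and weights:

```python
import numpy as np

def sigmoid(x):
    # One common (illustrative) choice of activation function sigma
    return 1.0 / (1.0 + np.exp(-x))

# Firing rates of three pre-synaptic neurons (the y_j's) -- made-up values
y = np.array([0.2, 0.5, 0.9])
# Synaptic strengths W_ij from each neuron j to neuron i -- made-up values
W_i = np.array([1.0, -2.0, 0.5])

# Pre-activation x_i = sum_j W_ij y_j (the 'membrane voltage' of neuron i)
x_i = W_i @ y
# Firing rate y_i = sigma(x_i)
y_i = sigmoid(x_i)
```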
Such artificial neurons can be combined to form an artificial neural network (ANN). Each neuron in the network receives messages from other neurons (the $y_j$'s), compiles them ($x_i$), and sends in turn a message to other neurons ($y_i$). Thus, a network of interconnected neurons exploits the composition of many elementary operations to form more complex computations. The synaptic strengths (the $W_{ij}$'s), also called \textit{weights}, play the role of adjustable parameters that parameterize this computation.
\textit{Deep learning} refers to ANNs composed of multiple layers of neurons \citep{lecun2015deep,goodfellow2016deep}. These \textit{deep neural networks} were inspired by the structure of the visual cortex in the brain, each layer corresponding to a different brain region. One of the core ideas of neural networks is that of \textit{distributed representations}: the idea that the vector of neuron states can represent abstract concepts, as opposed to other approaches to AI that use discrete symbols to represent concepts. In a deep network, each layer of neurons applies specialized operations and transformations to its inputs, with the intuition that each layer builds up more abstract concepts than the previous one \citep{bengio2009learning}.
\subsection{Energy-Based Models vs Differentiable Neural Networks}
Several families of neural networks emerged in the 1980s. One of these families is that of \textit{energy-based models}, which includes the Hopfield network \citep{hopfield1982neural} and the Boltzmann machine \citep{ackley1985learning}. In these models, under the assumption that the synaptic weights are symmetric (i.e. $W_{ij}=W_{ji}$ for every pair of neurons $i$ and $j$), the dynamics of the network converge to an equilibrium state after Eq.~\ref{eq:artificial-neuron} has been iterated a large number of times for every neuron $i$. Because of the large number of iterations required, these models tend to be slow. This is one of the reasons why these neural networks have been mostly abandoned today. However, by reinterpreting the equilibrium equation of energy-based models as a variational principle of physics, I believe that these models could be the basis of a new generation of fast, efficient and scalable neural networks grounded in physics. We will come back to this point later in the discussion (Section \ref{sec:deep-learning-theory}).
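To make the convergence claim concrete, here is a minimal sketch of the classic binary Hopfield network (with illustrative random weights, not values from the cited works): with symmetric weights and zero self-connections, asynchronous sign updates never increase the energy and eventually reach a fixed point of the update equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric weights with zero diagonal (W_ij = W_ji, W_ii = 0)
n = 20
A = rng.normal(size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

def energy(s):
    # Hopfield energy; asynchronous sign updates never increase it
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)   # random initial binary state
energies = [energy(s)]
for sweep in range(100):
    changed = False
    for i in range(n):                # update each neuron asynchronously
        new = 1.0 if W[i] @ s >= 0 else -1.0
        if new != s[i]:
            s[i] = new
            changed = True
    energies.append(energy(s))
    if not changed:                   # fixed point: s_i = sign(sum_j W_ij s_j)
        break
```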
The family of neural networks that is at the heart of the on-going deep learning revolution is that of \textit{differentiable neural networks}, which became popular thanks to the discovery of the \textit{backpropagation algorithm} to train them \citep{rumelhart1988learning}. In such neural networks, each operation in the process of computation is differentiable. The earliest models of this kind were feedforward neural networks (e.g. the multi-layer perceptron), wherein the connections between the neurons do not form loops. Recurrent neural networks can also be cast to this category of differentiable neural networks, by unfolding the graph of their computations in time. Since their inception, differentiable neural networks have come a long way. Many novel architectures have been introduced, in particular: convolutional neural networks \citep{fukushima1980neocognitron,lecun1989backpropagation}, Long Short-term Memory \citep{hochreiter1997long,graves2013generating}, and attention mechanisms \citep{bahdanau2014neural,vaswani2017attention}.
\subsection{Stochastic Gradient Descent}
The computations performed by a neural network are parameterized by its synaptic weights. The goal is to find weight values for which the computations of the neural network solve the task of interest. One essential idea of machine learning is to introduce a \textit{loss function}, which provides a numerical measure of how good or bad the computations of the model are, with respect to the task that we want to solve. The goal is then to minimize the loss function with respect to the model weights. For example, in image classification, the computations of the model produce an output which represents a `guess' for the class of that image, and the loss provides a graded measure of `wrongness' between that guess and the actual image label. A smaller value of the loss function means that the model produces an output closer to the desired target. The loss function is minimal when the output is equal to the desired target.
One of the most important ideas of deep learning today is \textit{stochastic gradient descent} (SGD). Provided that the loss function is differentiable with respect to the network weights, its gradient indicates the direction of steepest increase of the loss, so stepping against the gradient decreases the loss. SGD consists in taking examples from the training set one at a time, and adjusting the network weights iteratively in proportion to the negative of the gradient of the loss function. At each iteration, the network performance (as measured by the loss value) tends to improve slightly.
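As a sketch of the idea (in an assumed toy setup: a linear model with a squared-error loss, rather than a deep network), the following loop runs SGD one example at a time and recovers the weights that generated the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised dataset: targets generated by a 'true' linear model
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
t = X @ true_w

w = np.zeros(2)   # model weights, initialized at zero
lr = 0.05         # learning rate

for epoch in range(20):
    for x, target in zip(X, t):   # one training example at a time
        y = x @ w                 # model output
        # squared-error loss L = 0.5 * (y - target)^2; its gradient wrt w:
        grad = (y - target) * x
        w -= lr * grad            # step against the gradient
```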
A key discovery that has greatly eased and accelerated deep learning research is the following. Given a computer program that computes a \textit{differentiable} scalar function $f: \mathbb{R}^n \to \mathbb{R}$, it is possible to automatically transform the program into another program that computes the gradient operator $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$. The gradient $\nabla f(\theta)$ can then be evaluated at any given point $\theta \in \mathbb{R}^n$, with a computational overhead that scales linearly with the complexity of the original program. This technique, known as \textit{reverse-mode automatic differentiation} \citep{speelpenning1980compiling}, provides a general framework for `backpropagating loss gradients' \citep{rumelhart1988learning} in any \textit{differentiable computational graph}. In the last decade, dozens of deep learning frameworks and libraries have been developed, which exploit reverse-mode automatic differentiation to compute gradients in arbitrary differentiable computational graphs. This includes Theano \citep{bergstra2010theano} -- a framework that was developed at Université de Montréal -- and the more recent Tensorflow \citep{abadi2016tensorflow} and PyTorch \citep{paszke2017automatic} frameworks. The emergence of these deep learning frameworks has considerably accelerated deep learning research, by enabling researchers to quickly design neural network architectures and train them by SGD. Thanks to these frameworks, deep learning researchers can explore the space of differentiable computational graphs much more rapidly, as they seek novel and more effective neural architectures.
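The following minimal scalar reverse-mode differentiation sketch (a toy illustration, not how Theano, TensorFlow or PyTorch are actually implemented) records local partial derivatives during the forward computation and propagates gradients backwards through the resulting graph:

```python
import math

class Var:
    """Minimal reverse-mode automatic differentiation on scalars."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs (input Var, local partial derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(v):
    return Var(math.sin(v.value), [(v, math.cos(v.value))])

def backward(node, grad=1.0):
    # Accumulate gradient contributions by walking the graph backwards.
    # (Real frameworks use a single reverse topological pass for efficiency.)
    node.grad += grad
    for parent, local in node.parents:
        backward(parent, grad * local)

# f(a, b) = a*b + sin(a); its gradient is (b + cos(a), a)
a, b = Var(2.0), Var(3.0)
f = a * b + sin(a)
backward(f)
```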
\subsection{Landscape of Loss Functions}
It was not obvious that deep neural networks could be trained at all. Until the seminal work of \citet{hinton2006fast}, the common belief was that neural networks with more than two layers were essentially impossible to train. In particular, because the landscape of the loss function associated with a deep network is typically highly non-convex, a common misconception was that gradient-descent methods would likely get stuck at bad local minima. In terms of generalization performance, the large over-parameterization of neural networks also went against general prescriptions from classical statistics and learning theory. One of the surprising discoveries of the deep learning revolution was that, in such highly non-convex and over-parameterized statistical models, provided that the loss landscape has an appropriate shape, SGD can solve complex tasks by finding excellent parameter values that generalize well to unseen examples.
Several elements contributed to unlocking the training of deep neural networks; among others: the discovery \citep{glorot2011deep} that the ReLU (`Rectified Linear Unit') activation function usually outperforms the sigmoid activation function, the discovery of better weight initialization schemes \citep{glorot2010understanding,saxe2013exact,he2015delving}, and the batch-normalization technique to systematically normalize signal amplitudes at each layer of a network \citep{ioffe2015batch}. Besides, fundamental advances have come from the introduction of new network architectures such as the ones mentioned earlier, and from novel machine learning paradigms such as that of generative adversarial networks \citep{goodfellow2014generative}. All these techniques have the effect of modifying the landscape of the loss function, as well as the starting point (before training) in parameter space. Understanding how the landscape of the loss function can be appropriately shaped to ease optimization by SGD is an active area of research \citep{poggio2017theory,arora2018toward}.
\subsection{Deep Learning Revolution}
In recent years, deep neural networks have proved capable of solving very complex problems across a wide variety of domains. Today, they achieve state-of-the-art performance in image recognition \citep{he2016deep}, speech recognition \citep{hinton2012deep,chan2016listen}, machine translation \citep{vaswani2017attention}, image-to-text \citep{sharma2018conceptual}, text-to-speech \citep{oord2016wavenet}, text generation \citep{brown2020language}, and synthesis of realistic-looking portraits \citep{karras2019style}, among many other applications. Neural networks have become better than humans at playing Atari games \citep{mnih2013playing}, playing the game of Go \citep{silver2018general} and playing Starcraft \citep{vinyals2019alphastar}.
Perhaps what is most exciting is that, although these neural networks are designed to solve very different tasks and deal with different types of data, they are all trained using the same handful of basic principles. As we scale these neural networks, more advanced aspects of intelligence seem to emerge from this handful of principles.
While the pace of progress in neural network research is breathtaking, we should also emphasize that, without any question, neural networks are nowhere close to `surpassing' humans. In their current form, they miss key elements of human intelligence. Whenever neural networks beat humans, they beat us at a very specific task and/or under very specific conditions.
We may need new learning paradigms to move away from `task-specific' neural networks towards `multi-functional' and continually learning neural networks.
Besides, as of today, neural networks cannot handle and combine abstract concepts nearly as flexibly as humans do. Making progress along these lines may require new breakthroughs, to give these neural networks the ability to develop a `thought language' and a sense of causal reasoning, among others.
\subsection{Graphics Processing Units}
The on-going deep learning revolution also owes much to other technological developments. In the past decades, the amount of data available has greatly increased, and powerful multi-core parallel processors for general purpose computing, such as Graphics Processing Units (GPUs), have emerged \citep{owens2007survey}. Training large models on large datasets -- here is part of the recipe to make a deep neural network solve a challenging task. In the 1980s, when deep neural networks were first conceived, the lack of data and computational power made it practically infeasible to demonstrate their effectiveness.
The last decade of neural network research seems to hint at a simple and straightforward strategy to further improve performance of our AI systems: scaling. Using more memory and more compute to train larger models on larger datasets -- here is the current trend to build state-of-the-art deep learning systems. The largest model ever built thus far, GPT-3 \citep{brown2020language}, has a capacity of 175 billion parameters.
Training these neural networks requires very large amounts of computations. Standard practice today is to distribute the computations across more and more GPUs, to train larger and larger models. Still, even using thousands of GPUs working in parallel, training these neural networks can take months. For example, AlphaZero learnt to play the game of Go by playing 140 million games, which took 5000 processors and two weeks. Moreover, training such models can cost millions of dollars, just for electricity consumption, not to mention their ecological impact. Yet, even these large neural networks are only a tiny fraction of the size of the human brain. What causes such inefficiency, preventing us from building models of the size of the human brain?
\subsection{The Von Neumann Bottleneck} \label{sec:von-neumann-bottleneck}
While at the conceptual level the neural networks used today take their overall strategy from the brain, at the hardware implementation level they use little of the cleverness of nature. Our current processors, on which these neural networks are trained and run, operate in fundamentally different ways than brains. They rely on the \textit{von Neumann architecture}. In this computer architecture, the memory unit, where information is stored, is separated from the processing unit, where calculations are done. A so-called \textit{bus} moves information back and forth between these two units. Over the course of the history of computing, this architecture has become the norm, and today the von Neumann architecture is used in virtually all computer systems: laptops, smartphones, and all kinds of embedded systems. The GPUs massively used for neural network training today also rely on the von Neumann architecture, where memory is separated from computing.
The brain on the other hand deeply merges memory and computing by using the same functional unit: the synapse. The human brain is composed of a quadrillion ($10^{15}$) synapses. In other words, the human brain has $10^{15}$ nanoprocessors working in parallel.
Training neural networks on von Neumann hardware as we do today is extraordinarily energy inefficient in comparison with the way brains operate. The necessity to move data back and forth between the memory and processing units in the von Neumann architecture is energy intensive and creates considerable latency. This limitation is known as the \textit{von Neumann bottleneck}. How inefficient is it compared to biological systems like the brain? The human brain is composed of $10^{11}$ neurons and consumes around 20~W to conduct all of its activities \citep{attwell2001energy}. In comparison, training a BERT model (a state-of-the-art natural language processing model) on a modern supercomputer requires 1,500~kWh \citep{strubell2019energy}, which is the total amount of energy consumed by a brain in nine years. Besides, a GPU running real-time object detection with YOLO \citep{redmon2016you}, a network smaller than the brain by four orders of magnitude, consumes around 200~W.
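The `nine years' figure can be verified with a quick back-of-envelope computation, using the rounded figures quoted above:

```python
# Back-of-envelope check of the figures quoted in the text
brain_power_w = 20          # approximate power consumption of the brain, in watts
bert_training_kwh = 1500    # energy to train BERT (Strubell et al., 2019), in kWh

hours_per_year = 24 * 365
brain_kwh_per_year = brain_power_w * hours_per_year / 1000  # 175.2 kWh per year
years = bert_training_kwh / brain_kwh_per_year              # roughly 8.6 years
```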
This striking mismatch holds more broadly with biological systems in general. For example, \citet{kempes2017thermodynamic} study the energy efficiency of `cellular computation' in the process of biological translation. What is the amount of energy required (in ATP equivalents) by ribosomes to assemble amino acids into proteins?
They point out that ``the best supercomputers perform a bit operation at roughly $5.27 \times 10^{-13}~\mathrm{J}$, [...] which is about five orders of magnitude less efficient than biological translation.''
To sum up, our current neural networks are orders of magnitude less energy efficient than biological systems at processing information, and the von Neumann bottleneck is largely responsible for this inefficiency. While using more and more GPUs may increase speed, thereby accelerating training and inference, this strategy cannot improve energy efficiency.
\subsection{In-Memory Computing}
In order to build massively parallel neural networks that are energy efficient and can scale to the size of the human brain, we need to fundamentally rethink the underlying computing hardware. We need to design neural networks so that computations are performed at the physical location of the synapses, where the strengths of the connections (the weights of the neural network) are stored and adjusted, just like in the brain. The concept of hardware that merges memory and computing is called \textit{in-memory computing} (or \textit{in-memory processing}), and the field tackling this problem is called \textit{neuromorphic computing}. This field of research, started by Carver Mead in the 1980s \citep{mead1989analog}, aims at mimicking brains at the hardware level by building physical neurons and synapses onto a chip.
The most common approach to in-memory computing today is to use \textit{programmable resistors} as synapses. Programmable resistors, such as \textit{memristors} \citep{chua1971memristor}, are resistors whose conductance can be changed (or `programmed'). The weights of a neural network can be encoded in the conductance of such devices. In the last decade, important advances in nanotechnology were made, and a number of new technologies have emerged and have been studied as potentially promising programmable resistors \citep{burr2017neuromorphic,xia2019memristive}.
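As an illustration of why programmable resistors are attractive, a crossbar of such devices computes a matrix-vector product in a single physical step: applying voltages $V_j$ to the columns produces, by Ohm's and Kirchhoff's laws, row currents $I_i = \sum_j G_{ij} V_j$. The following sketch (with made-up conductance and voltage values) mimics this analog computation numerically:

```python
import numpy as np

# In a resistive crossbar, applying voltages V_j on the columns and reading
# the currents I_i on the rows computes I = G V in one step: Ohm's law for
# each device, Kirchhoff's current law to sum the currents along each row.
G = np.array([[1.0, 0.5, 0.2],    # conductances (the stored 'weights'), in siemens
              [0.3, 0.8, 0.1]])
V = np.array([0.1, 0.2, 0.3])     # input voltages, in volts

I = G @ V                         # row currents: I_i = sum_j G_ij V_j
```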
Neuromorphic computing thus explores analog computations that fundamentally depart from the standard digital computing paradigm.
\subsection{Challenges of Analog Computing}
Analog processing differs from digital processing in important ways. Whereas digital circuits manipulate binary signals with reliably distinguishable \textit{on}- and \textit{off}-states, analog circuits manipulate real-valued currents and voltages that are subject to analog noise, which bounds the precision with which computation may be performed. More importantly, analog devices suffer from \textit{mismatches}, i.e. small random variations in the physical characteristics of devices, which occur during manufacturing. No two devices are exactly alike in their characteristics, and it is impossible to make a perfect clone of one. These variations result in behavioral differences between identically designed devices. Due to the accumulation of the mismatch errors from individual devices, it is very difficult to analytically predict the behavior of a large analog circuit.
A growing field of research in the neuromorphic literature attempts to perform in analog the operations that we normally do in software, so as to implement feedforward neural networks and the backpropagation algorithm efficiently. In this approach, the starting point is an equation of the kind of Eq.~\ref{eq:artificial-neuron}. Many of these operations are then performed in analog and combined to form the computations of a feedforward network. However, because of device mismatches, it is hard to perform such idealized operations, and as we combine many of these operations, the resulting computation may be different from the desired one. Either these idealized operations are performed with low precision, or we may spend a lot of energy trying to improve precision, e.g. by using analog-to-digital conversion.
Not coincidentally, the constraint of device nonidealities and device variability is shared by biology too. No two neurons are exactly the same. Biology thus demonstrates, in principle, that it is possible to train neural networks even in the presence of noise and imperfect `devices'. It invites us to rethink the learning algorithm for neural networks (and the notion of computation altogether).
\section{A Deep Learning Theory for Neural Networks Grounded in Physics} \label{sec:deep-learning-theory}
In this thesis, we propose an alternative theoretical framework for neural network inference and training, with potential implications for neuromorphic computing. Our theoretical framework preserves the key principles that power deep learning today, such as optimization by stochastic gradient descent (SGD), but we use variational formulations of the laws of physics as first principles, so as to implement neural networks directly in physics. We present two very broad classes of neural network models, called \textit{energy-based models} and \textit{Lagrangian-based models}, whose state or dynamics derive from variational principles. The learning algorithm, called \textit{equilibrium propagation} (EqProp), makes it possible to estimate the gradients of arbitrary loss functions in such physical systems using solely locally available information for each parameter.
\subsection{Physical Systems as Deep Learning Models}
The general idea of the manuscript is the following. We consider a physical system composed of multiple parts whose characteristics and working mechanisms may be only partially known. The system has some `adjustable parameters', some of them playing the role of `inputs', and we may read or measure a `response' on some other `output' part of the system. We can think of this black box system as performing computations and implementing a nonlinear input-to-output mapping function (which may be analytically unknown). We wish to tune the adjustable parameters of this system by stochastic gradient descent (SGD), as we normally do in deep learning. The question is: how can we compute or estimate the gradients in such a physical system, by relying on the physics of the system?
The main theoretical result of the thesis is that, for a large class of physical systems (those whose state or dynamics derive from a variational principle), there is a simple procedure to estimate the parameter gradients, which in many practical situations requires only locally available information for each parameter. This procedure, called \textit{equilibrium propagation} (EqProp), preserves the key benefit of being compatible with SGD, while offering the possibility to directly exploit physics to implement and train neural networks.
\subsection{Variational Principles of Physics as First Principles}
Rather than Eq.~\ref{eq:artificial-neuron}, our starting point is a \textit{variational equation} of the form \begin{equation}
\label{eq:variational-principle}
\frac{\partial E}{\partial s}=0, \end{equation} where $E$ is a scalar function. If $s$ is the state of the system, then Eq.~\ref{eq:variational-principle} is an equilibrium condition. In this case, we say that the system is an \textit{energy-based model} (EBM) and we call $E$ the \textit{energy function}.
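For concreteness, here is a toy energy-based model (an illustrative quadratic energy, chosen for simplicity rather than taken from the thesis) in which the equilibrium state satisfying $\frac{\partial E}{\partial s}=0$ is reached by relaxing the state along the negative energy gradient:

```python
import numpy as np

# Toy energy-based model: quadratic energy E(s) = 0.5 s^T A s - b^T s,
# with A symmetric positive definite. The equilibrium state satisfies
# the variational equation dE/ds = A s - b = 0.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -1.0])

def grad_E(s):
    return A @ s - b

# Reach equilibrium by relaxing the state along -dE/ds
s = np.zeros(2)
for _ in range(2000):
    s -= 0.1 * grad_E(s)
```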
In this thesis, we also introduce the concept of \textit{Lagrangian-based model} (LBM). Variational equations characterize not only equilibrium states, but also entire trajectories. Many physical systems are such that their trajectory derives from a \textit{principle of stationary action} (e.g. a principle of least action). Denoting by $\mathrm{s_t}$ the state of the system at time $t$, this means that the (continuous-time) trajectory $\mathrm{s} = \{ \mathrm{s_t} \}_{0 \leq t \leq T}$ over a time interval $[0,T]$ minimizes a functional of the form \begin{equation}
\mathcal{S} = \int_0^T L(\mathrm{s_t},\mathrm{\dot{s}_t}) \, dt, \end{equation} where $\mathrm{\dot{s}_t}$ is the time derivative of $\mathrm{s_t}$, $L$ is a function called the \textit{Lagrangian function} of the system, and $\mathcal{S}$ is a scalar functional called the \textit{action}. The stationarity of the action tells us that $\frac{\delta \mathcal{S}}{\delta \mathrm{s}}=0$, which is another variational equation of the kind of Eq.~\ref{eq:variational-principle}. These systems, which we call Lagrangian-based models (LBMs), are particularly suitable in settings with time-varying inputs, and can thus play the role of `recurrent neural networks'.
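For concreteness, we recall the standard calculus-of-variations result: requiring $\frac{\delta \mathcal{S}}{\delta \mathrm{s}}=0$ for all perturbations of the trajectory with fixed endpoints yields the Euler-Lagrange equation
\begin{equation}
\frac{d}{dt} \frac{\partial L}{\partial \mathrm{\dot{s}_t}} = \frac{\partial L}{\partial \mathrm{s_t}},
\end{equation}
which the trajectory of an LBM satisfies at every time $t$.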
Equilibrium propagation (EqProp) makes it possible to compute gradients with respect to arbitrary loss functions in these EBMs and LBMs. Furthermore, if the energy function (resp. Lagrangian function) of the system has a property called \textit{sum-separability}, meaning that it is the sum of the energies (resp. Lagrangians) of its parts, then computing the loss gradients with EqProp requires only information that is locally available to each parameter (i.e. the learning rule is \textit{local}).
\subsection{Universality of Variational Principles in Physics}
In the 1650s, Pierre de Fermat proposed the \textit{principle of least time}, which states that, between two given points, light travels the path that takes the least time. He showed that both the laws of reflection and refraction can be derived from this principle. Fermat's least time principle is an instance of what we call more generally a \textit{variational principle}. Today, the variational approach pervades much of modern physics and engineering \citep{lanczos2012variational}, with applications not only in optics, but also in mechanics, electromagnetism, thermodynamics, etc. Even at a fundamental level of description, our universe seems to behave according to variational principles: for example, Einstein's equations of general relativity can be derived from the Einstein-Hilbert action, and in a sense, Feynman's path integral formulation of quantum mechanics can be seen as a generalized principle of least action \citep{feynman2005principle}.
Thus, many physical systems qualify as energy-based models or Lagrangian-based models. This offers, in principle, many options for implementing our proposed method on physical substrates. In this manuscript we will present one such option in detail: \textit{nonlinear resistive networks}.
\subsection{Rethinking the Notion of Computation}
Interestingly, our theoretical framework invites us to rethink not only the von Neumann architecture on which our current processors rely, but also the notion of computation altogether.
Much of computer science deals with computation in the abstract, without worrying about physical implementation \citep{lee2017plato}. Our computers today rely on the computing paradigm introduced by Turing, where computers operate on digital data and carry out computations algorithmically, via step-by-step (discrete-time) processes. The von Neumann architecture was invented to implement these computations (i.e. to bridge the gap between physics and these abstract computations), but suffers from the speed and energy efficiency problems mentioned earlier (Section~\ref{sec:von-neumann-bottleneck}).
The theoretical framework presented in this manuscript can be seen as an alternative approach to computing that takes advantage of the way Nature operates. We suggest a novel computing paradigm, which uses the variational formulations of the laws of physics as first principles. Together with the \textit{equilibrium propagation} (EqProp) training procedure, our approach suggests a way to implement the core principles of deep learning by exploiting physics directly. As will become apparent in Section \ref{sec:resistive-networks-algo}, the process of `computation' in EqProp is very different from the step-by-step processes of Turing's conventional computing paradigm. Although we may call EqProp a learning `algorithm', it is not an algorithm in the conventional sense (one that performs step-by-step computations).
\subsection{A Novel Differentiation Method Compatible with Variational Principles}
More technically, the main idea of the manuscript can be summarized by the following mathematical identity -- a novel differentiation method compatible with variational principles. Consider two functions $E(\theta,s)$ and $C(s)$ of the two variables $\theta$ and $s$. We wish to compute the gradient $\frac{d}{d\theta}C(s_\theta)$, where $s_\theta$ is such that $\frac{\partial E}{\partial s}(\theta,s_\theta) = 0$. To do this, we introduce the function $F(\theta,\beta,s) = E(\theta,s) + \beta \; C(s)$, where $\beta$ is a scalar. For fixed $\theta$, we further define $s_\star^\beta$ by the relationship $\frac{\partial F}{\partial s}(\theta,\beta,s_\star^\beta) = 0$, for any $\beta$ in the neighborhood of $\beta=0$. In particular $s_\star^0 = s_\theta$. Then we have the identity \begin{equation}
\frac{d}{d\theta}C(s_\theta) = \left. \frac{d}{d\beta} \right|_{\beta=0} \frac{\partial E}{\partial \theta}(\theta,s_\star^\beta), \end{equation} where $\frac{\partial E}{\partial \theta}$ denotes the partial derivative of the function $E$ with respect to its first argument. This result is developed and proved in Chapter \ref{chapter:eqprop}.
\section{Overview of the Manuscript and Link to Prior Works}
The manuscript is organized as follows. \begin{itemize}
\item In Chapter \ref{chapter:eqprop}, we present EqProp in its original formulation, as a learning algorithm to train energy-based models (EBMs). We show that, provided the energy function of the system is \textit{sum-separable}, the learning rule of EqProp is local. This corresponds to Section 3 and Appendix A of \citet{scellier2017equilibrium}.
\item In Chapter \ref{chapter:hopfield}, we use EqProp to train a particular class of EBMs called \textit{gradient systems}. This includes the continuous Hopfield network, a neural network model introduced by Hopfield in the 1980s and studied by both the neuroscience community and the neuromorphic community. In this setting, the learning rule of EqProp is a form of contrastive Hebbian learning. The first part of this chapter corresponds to the result established in \citet{scellier2019equivalence}. The second part corresponds to Sections 2, 4 and 5 of \citet{scellier2017equilibrium}.
\item In Chapter \ref{chapter:neuromorphic}, we show that a class of analog neural networks called \textit{nonlinear resistive networks} are energy-based models: they possess an energy function called the \textit{co-content} of the circuit, as a reformulation of Kirchhoff's laws. Furthermore the co-content has the sum-separability property. Therefore we can train these nonlinear resistive networks with EqProp using a local learning rule. This chapter corresponds to \citet{kendall2020training}.
\item In Chapter \ref{chapter:discrete-time}, we present a class of discrete-time neural network models trainable with EqProp, which is useful to accelerate computer simulations. This formulation, which uses notations closer to those of conventional deep learning, is also better suited to training more advanced network architectures such as convolutional networks. This chapter corresponds to \citet{ernoult2019updates} and \citet{laborieux2020scaling}.
\item In Chapter \ref{chapter:future}, we present ongoing developments. In particular, we introduce the concept of \textit{Lagrangian-based models}, a wide class of machine learning models that can serve as recurrent neural networks and can be implemented directly in physics by exploiting the \textit{principle of stationary action}. These Lagrangian-based models can also be trained with an EqProp-like training procedure. We also present an extension of EqProp to stochastic systems, which was introduced in Appendix C of \citet{scellier2017equilibrium}. Finally, we briefly present the \textit{contrastive meta-learning} framework of \citet{zucchet2021contrastive}, which uses the EqProp technique to train the meta-parameters of a meta-learning model. \end{itemize}
\iffalse \section{Contributions.}
This manuscript is based on the following papers: \citet{scellier2017equilibrium,scellier2019equivalence,scellier2018generalization,ernoult2019updates,ernoult2020equilibrium,kendall2020training,laborieux2020scaling}. My contributions in these papers include: \begin{itemize}
\item the general formulation of EqProp, and the theorems/proofs presented in \citet{scellier2017equilibrium}, as well as the simulations,
\item the theoretical result presented in \citet{scellier2019equivalence},
\item the mathematical formulation of the `vector field version' of EqProp \citep{scellier2018generalization},
\item the main theoretical result presented in \citet{ernoult2019updates},
\item the theoretical results presented in \citet{ernoult2020equilibrium},
\item the mathematical formulation and the proof of the theorem presented in \citet{kendall2020training},
\item the idea of using symmetric difference estimators for the loss gradients in \citet{laborieux2020scaling}. \end{itemize} \fi \chapter{Equilibrium Propagation: A Learning Algorithm for Systems Described by Variational Equations} \label{chapter:eqprop}
Much of machine learning today is powered by stochastic gradient descent (SGD). The standard method to compute the loss gradients required at each iteration of SGD is the backpropagation (Backprop) algorithm. Equilibrium propagation (EqProp) is an alternative to Backprop to compute the loss gradients. The difference between EqProp and Backprop lies in the class of models that they apply to: while Backprop applies to \textit{differentiable neural networks}, EqProp is broadly applicable to systems described by \textit{variational equations}, i.e. systems whose state or dynamics is a stationary point of a scalar function or functional. Since many physical systems have this property \citep{lanczos2012variational}, EqProp offers the perspective to implement and train machine learning models which use the laws of physics at their core.
In this chapter, we present EqProp in its original formulation \citep{scellier2017equilibrium}, as an algorithm to train energy-based models (EBMs). EBMs are systems whose equilibrium states are stationary points of a scalar function called the \textit{energy function}. EBMs are suitable in particular when the input data is static. In most of the manuscript, we consider for simplicity of presentation the supervised learning setting with static input, e.g. the setting of image classification where the input is an image and the target is the category of that image. However, EqProp is applicable beyond this setting. In Section \ref{sec:time-varying-setting}, we introduce the concept of \textit{Lagrangian-based models} (LBMs) which, by definition, are physical systems whose dynamics derives from a \textit{principle of stationary action}, and we show how EqProp can be applied to such systems. LBMs are suitable in the context of time-varying data, and can thus play the role of `recurrent neural networks'. We also present an extension of EqProp to stochastic systems (Section \ref{sec:stochastic-setting}), and to the setting of meta-learning (Section \ref{sec:contrastive-meta-learning}).
The present chapter is organized as follows. \begin{itemize}
\item In section \ref{sec:sgd}, we present the stochastic gradient descent (SGD) algorithm, which is at the heart of current deep learning. We present SGD in the setting of supervised learning, which we will consider in most of the manuscript to illustrate the ideas of the equilibrium propagation training framework. We note however that SGD is also the workhorse of state-of-the-art unsupervised and reinforcement learning algorithms.
\item In section \ref{sec:energy-based-models} we define the notion of \textit{energy-based model} (EBM) that we will use throughout the manuscript.
\item In section \ref{sec:loss-gradients}, we present the general formula for computing the loss gradients in an EBM, and in section \ref{sec:equilibrium-propagation}, we present the equilibrium propagation (EqProp) algorithm to estimate the loss gradients. Under the assumption that the energy function of the system satisfies a property called \textit{sum-separability}, the learning rule for each parameter is local.
\item In section \ref{sec:ebms-examples}, we give a few examples of models trainable with EqProp, which we will study in the next chapters. Besides the well-known Hopfield model, nonlinear resistive networks, flow networks and elastic networks are examples of sum-separable energy-based models and, as such, are trainable with EqProp using a local learning rule.
\item In section \ref{sec:remarks}, we discuss the general applicability of the framework presented here and the conditions under which EqProp is applicable. \end{itemize}
\section{Stochastic Gradient Descent} \label{sec:sgd}
In most of the manuscript, we consider the supervised learning setting, e.g. the setting of image classification where the data consists of images together with the labels associated with these images. In this scenario, we want to build a system that is able, given an input $x$, to `predict' the label $y$ associated with $x$. To do this, we design a \textit{parametric} system, meaning a system that depends on a set of \textit{adjustable parameters} denoted $\theta$. Given an input $x$, the system produces an output $f(\theta, x)$ which represents a `guess' for the label of $x$. Thus, the system implements a mapping function $f(\theta, \cdot)$ from an input space (the space of $x$) to an output space (the space of $y$), parameterized by $\theta$. The goal is to tune $\theta$ so that for most $x$ of interest, the output $f(\theta, x)$ is close to the target $y$. The `closeness' between $f(\theta, x)$ and $y$ is measured using a scalar function $C(f(\theta, x), y)$ called the \textit{cost function}. The overall performance of the system is measured by the expected cost $\mathcal{R}(\theta) = \mathbb E_{(x, y)}[C(f(\theta, x), y)]$ over examples $(x, y)$ from the data distribution of interest. The goal is then to minimize $\mathcal{R}(\theta)$ with respect to $\theta$.
In deep learning, the core idea and leading approach to tune the set of parameters $\theta$ is \textit{stochastic gradient descent} (SGD). The first step consists in gathering a (large) dataset of examples $\mathcal{D}_{\rm train} = \{ (x^{(i)}, y^{(i)}) \}_{1 \leq i \leq N}$, called \textit{training set}, which specifies for each input $x^{(i)}$ the correct output $y^{(i)}$. Then, each step of the training process proceeds as follows. First, a sample $(x, y)$ is drawn from the training set. Input $x$ is presented to the system, which produces $f(\theta, x)$ as output. This output is compared with $y$ to evaluate the loss \begin{equation}
\mathcal{L}(\theta, x, y) = C(f(\theta, x), y). \end{equation} Subsequently, the gradient $\frac{\partial {\mathcal L}}{\partial \theta} \left( \theta, x, y \right)$ is computed or estimated using some procedure, and the parameters are updated proportionally to the loss gradient: \begin{equation}
\Delta \theta = - \eta \frac{\partial {\mathcal L}}{\partial \theta} \left( \theta, x, y \right), \end{equation} where $\eta$ is a step-size parameter called the \textit{learning rate}. This process is repeated multiple times (often millions of times) until convergence (or for as long as desired). Once trained, the performance of the system is evaluated on a separate set of \textit{previously unseen} examples, called the \textit{test set} and denoted $\mathcal{D}_{\rm test} = \{ (x_{\rm test}^{(i)}, y_{\rm test}^{(i)}) \}_{1 \leq i \leq M}$. The \textit{test loss} is $\widehat{\mathcal{R}}_{\rm test}(\theta) = \frac{1}{M} \sum_{i=1}^M \mathcal{L} \left( \theta, x_{\rm test}^{(i)}, y_{\rm test}^{(i)} \right)$.
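The per-example update described above can be sketched in a few lines. The example below (a minimal sketch of our own, not from the manuscript) fits a one-parameter model $f(\theta, x) = \theta x$ to noiseless data generated with $\theta_{\rm true} = 3$, using the squared-error cost $C(f, y) = \tfrac{1}{2}(f - y)^2$:

```python
# Minimal SGD loop: one training example at a time, update proportional to the
# per-example loss gradient dL/dtheta = (theta*x - y)*x for the squared-error cost.

import random

random.seed(0)
# synthetic training set: y = 3*x exactly
train_set = [(x, 3.0 * x) for x in [random.uniform(-1, 1) for _ in range(100)]]

theta = 0.0
eta = 0.1  # learning rate
for _ in range(20):                  # epochs
    for x, y in train_set:           # one example at a time
        grad = (theta * x - y) * x   # dL/dtheta for this example
        theta -= eta * grad          # SGD update

print(theta)  # close to 3.0
```

The loop recovers the generating parameter; in practice the gradient is computed by backpropagation (or, in this manuscript, by EqProp) rather than by hand.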
Several variants of SGD have been proposed, which use adaptive learning rates to accelerate the optimization process. This includes the \textit{momentum method} \citep{sutskever2013importance}
and \textit{Adam} \citep{kingma2014adam}. In some cases, these methods are not only faster, but also achieve better test performance than standard SGD. Besides, common practice is to average the loss gradients over \textit{mini-batches} of data examples before updating the weights -- a method sometimes called \textit{mini-batch gradient descent} -- but in this manuscript we consider for simplicity of presentation that training examples are processed one at a time.
The SGD algorithm described above powers nearly all of deep learning today. There are, however, two ingredients that we have not specified so far: the `system' that implements the mapping function $f(\theta, x)$, and the `procedure' to compute the loss gradient $\frac{\partial {\mathcal L}}{\partial \theta} \left( \theta, x, y \right)$. In conventional deep learning, the `system' is a \textit{differentiable neural network}, and the loss gradients are computed with the \textit{backpropagation algorithm}. In this chapter, we present an alternative framework for optimization by SGD, where the `system' (i.e. the neural network) is an \textit{energy-based model}, and the procedure to compute the loss gradients is called \textit{equilibrium propagation} (EqProp).
\section{Energy-Based Models} \label{sec:energy-based-models}
There exist different definitions for the concept of energy-based model in the literature -- see \citet{lecun2006tutorial} for a tutorial. In this manuscript, we reserve the term to refer to the specific class of machine learning models described in this section.
In the context of supervised learning, an \textit{energy-based model} (EBM) is specified by three variables: a parameter variable $\theta$, an input variable $x$, and a state variable $s$. An essential ingredient of an EBM is the \textit{energy function}, which is a scalar function $E$ that specifies how the state $s$ depends on the parameter $\theta$ and the input $x$. Given $\theta$ and $x$, the energy function associates to each \textit{conceivable} configuration $s$ a real number $E(\theta, x, s)$. Among all conceivable configurations, the \textit{effective} configuration of the system is by definition a state $s(\theta, x)$ such that \begin{equation}
\label{eq:free-equilibrium-state}
\frac{\partial E}{\partial s}(\theta, x, s(\theta, x)) = 0. \end{equation} We call $s(\theta, x)$ an \textit{equilibrium state} of the system. The aim is to minimize the loss at equilibrium: \begin{equation}
\mathcal{L}(\theta, x, y) = C(s(\theta, x), y). \end{equation} In this expression, $s(\theta, x)$ is the `prediction' from the model, and plays the role of the `output' $f(\theta, x)$ of the previous section. A conceptual difference between $s(\theta, x)$ and $f(\theta, x)$ is that, in conventional deep learning, $f(\theta, x)$ is usually thought of as the output layer of the model (i.e. the last layer of the neural network), whereas here $s(\theta, x)$ represents the entire state of the system. Another difference is that $f(\theta, x)$ is usually \textit{explicitly} determined by $\theta$ and $x$ through an analytical formula, whereas here $s(\theta, x)$ is \textit{implicitly} specified through the variational equation of Eq.~(\ref{eq:free-equilibrium-state}) and may not be expressible by an analytical formula in terms of $\theta$ and $x$. In particular, there exist, in general, several states $s(\theta, x)$ that satisfy Eq.~(\ref{eq:free-equilibrium-state}). We further point out that $s(\theta, x)$ need not be a minimum of the energy function $E$; it may be a maximum or, more generally, any saddle point of $E$.
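Because the equilibrium state is defined implicitly by $\frac{\partial E}{\partial s} = 0$ rather than by a formula, it is typically found by letting the system relax. A generic way to do this in simulation (a sketch with a toy energy of our own choosing, not from the manuscript) is gradient descent on $s$:

```python
# Finding an equilibrium state numerically: relax s along -dE/ds until stationarity.
# Toy energy (our choice): E(theta, x, s) = (s - theta*x)^2 / 2, whose unique
# equilibrium is s = theta * x.

def dE_ds(theta, x, s):
    return s - theta * x

def equilibrium(theta, x, s0=0.0, lr=0.5, steps=100):
    s = s0
    for _ in range(steps):
        s -= lr * dE_ds(theta, x, s)  # follow -dE/ds
    return s

theta, x = 2.0, 1.5
s_eq = equilibrium(theta, x)
print(s_eq)  # approx theta * x = 3.0
```

In a physical implementation, this relaxation is performed by the physics of the system itself, with no digital computation involved.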
We note that, just like the energy function $E$, the cost function $C$ is defined for any conceivable configuration $s$, not just the equilibrium state $s(\theta, x)$. Although $C(s, y)$ may depend on the entire state $s$, in practical situations that we will study in the next chapters, $C(s, y)$ depends only on a subset of $s$ that plays the role of `outputs'.
We also introduce another key concept: the concept of \textit{sum-separability}. Let $\theta = (\theta_1, \ldots, \theta_N)$ be the adjustable parameters of the system. For each $\theta_k$, we denote $\{x, s\}_k$ the information about $(x, s)$ which is locally available to $\theta_k$. We say that the energy function $E$ is \textit{sum-separable} if it is of the form \begin{equation}
\label{eq:sum-separability}
E(\theta, x, s) = E_0(x, s) + \sum_{k=1}^N E_k(\theta_k, \{x, s\}_k), \end{equation} where $E_0(x, s)$ is a term that is independent of the parameters to be adjusted, and $E_k$ is a scalar function of $\theta_k$ and $\{x, s\}_k$, for each $k \in \{ 1, \ldots, N \}$. Importantly, many physical systems are energy-based models, and many of them have the sum-separability property; we give examples in section \ref{sec:ebms-examples}.
\iffalse \begin{figure}\label{fig:diagram-ebm}
\end{figure} \fi
\section{Gradient Formula} \label{sec:loss-gradients}
The central ingredient of the equilibrium propagation training method is the \textit{total energy function} $F$, defined by $F = E + \beta \; C$, where $\beta$ is a real-valued variable called \textit{nudging factor}. The intuition here is that we augment the energy of the system by bringing an additional energy term $\beta C$. By varying $\beta$, the total energy $F$ is modified, and so is the equilibrium state relative to $F$. Specifically, assuming that the functions $E$ and $C$ are continuously differentiable, there exists a continuous mapping $\beta \mapsto s_\star^\beta$ such that $s_\star^0 = s(\theta, x)$ and\footnote{We note that it is also possible to define $s_\star^\beta$ differently, by the relationship $\frac{\partial E}{\partial s}(\theta, x, s_\star^\beta) + \beta \; \frac{\partial C}{\partial s}(s(\theta, x), y) = 0$, without changing the conclusions of Theorem \ref{thm:static-eqprop}. In Chapter \ref{chapter:neuromorphic} we will use this modified definition of $s_\star^\beta$.} \begin{equation}
\label{eq:nudged-steady-state}
\frac{\partial E}{\partial s}(\theta, x, s_\star^\beta) + \beta \; \frac{\partial C}{\partial s}(s_\star^\beta, y) = 0 \end{equation} for any value of the nudging factor $\beta$. Theorem \ref{thm:static-eqprop} provides a formula to compute the loss gradients by varying the nudging factor $\beta$.
\begin{thm}[Gradient formula for energy-based models] \label{thm:static-eqprop} The gradient of the loss is equal to \begin{equation}
\label{eq:static-eqprop}
\frac{\partial \mathcal{L}}{\partial \theta}(\theta, x, y) = \left. \frac{d}{d\beta} \right|_{\beta=0} \frac{\partial E}{\partial \theta} \left( \theta, x, s_\star^\beta \right). \end{equation} Furthermore, if the energy function $E$ is sum-separable, then the loss gradient for each parameter $\theta_k$ depends only on information that is locally available to $\theta_k$: \begin{equation}
\label{eq:static-eqprop-local}
\frac{\partial \mathcal{L}}{\partial \theta_k}(\theta, x, y) = \left. \frac{d}{d\beta} \right|_{\beta=0} \frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{x, s_\star^\beta\}_k \right). \end{equation} \end{thm}
\begin{proof} Eq.~(\ref{eq:static-eqprop}) is a consequence of Lemma \ref{lma:main} (Section \ref{chapter:fundamental-lemma}) applied to the total energy function\footnote{With the modified definition of $s_\star^\beta$, the total energy function to consider is $F(\theta, \beta, s) = E(\theta, x, s) + \beta \; C(s(\theta, x), y)$.} $F$, defined for a fixed input-target pair $(x, y)$ by $F(\theta, \beta, s) = E(\theta, x, s) + \beta \; C(s, y)$, at the point $\beta=0$. Eq.~(\ref{eq:static-eqprop-local}) is a consequence of Eq.~(\ref{eq:static-eqprop}) and the definition of sum-separability (Eq.~(\ref{eq:sum-separability})). \end{proof}
\section{Equilibrium Propagation} \label{sec:equilibrium-propagation}
We can use Theorem \ref{thm:static-eqprop} to derive a learning algorithm for energy-based models. Let us assume that the energy function is sum-separable. We can estimate the loss gradients using finite differences, for example with \begin{equation} \label{eq:one-sided-estimator} \widehat{\nabla}_{\theta_k}(\beta) = \frac{1}{\beta} \left( \frac{\partial E_k}{\partial \theta_k}(\theta_k, \{ x, s_\star^\beta \}_k) - \frac{\partial E_k}{\partial \theta_k}(\theta_k, \{ x, s_\star^0 \}_k) \right) \end{equation} to approximate the right-hand side of Eq.~(\ref{eq:static-eqprop-local}). We arrive at the following two-phase training procedure to update the parameters in proportion to their loss gradients.
\paragraph{Free phase (inference).} The nudging factor $\beta$ is set to zero, and the system settles to an equilibrium state $s_\star^0$, characterized by Eq.~(\ref{eq:free-equilibrium-state}). We call $s_\star^0$ the \textit{free state}. For each parameter $\theta_k$, the quantity $\frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{x, s_\star^0\}_k \right)$ is measured locally and stored locally.
\paragraph{Nudged phase.} The nudging factor $\beta$ is set to a nonzero value (positive or negative), and the system settles to a new equilibrium state $s_\star^\beta$, characterized by Eq.~(\ref{eq:nudged-steady-state}). We call $s_\star^\beta$ the \textit{nudged state}. For each parameter $\theta_k$, the quantity $\frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{x, s_\star^\beta\}_k \right)$ is measured locally.
\paragraph{Update rule.} Finally, each parameter $\theta_k$ is updated locally in proportion to its gradient as $\Delta \theta_k = - \eta \widehat{\nabla}_{\theta_k}(\beta)$, where $\eta$ is a learning rate and $\widehat{\nabla}_{\theta_k}(\beta)$ is the gradient estimator of Eq.~(\ref{eq:one-sided-estimator}).
The training scheme described above is natural because the free phase and the nudged phase can be related to the standard training procedure for neural networks (the backpropagation algorithm), in which there is an inference phase (forward pass) followed by a gradient computation phase (backward pass). However, due to the approximation of derivatives by finite differences, the gradient estimator prescribed by the above training scheme is biased. As detailed in Appendix \ref{chapter:appendix}, the mismatch between this gradient estimator ($\widehat{\nabla}_{\theta_k}(\beta)$) and the true gradient ($\frac{\partial \mathcal{L}}{\partial \theta_k}$) is of the order $O(\beta)$. As proposed in \citet{laborieux2020scaling}, this bias can be reduced by means of a symmetric gradient estimator: \begin{equation} \label{eq:two-sided-estimator} \widehat{\nabla}_{\theta_k}^{\rm sym}(\beta) = \frac{1}{2 \beta} \left( \frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{x, s_\star^\beta\}_k \right) - \frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{x, s_\star^{-\beta}\}_k \right) \right). \end{equation} To achieve this, we can modify the above training procedure to include two nudged phases: one with positive nudging ($+\beta$) and one with negative nudging ($-\beta$). The update rule for parameter $\theta_k$ is then $\Delta \theta_k = - \eta \widehat{\nabla}_{\theta_k}^{\rm sym}(\beta)$. The mismatch between this symmetric gradient estimator and the true gradient is only of the order $O(\beta^2)$. We note that higher-order methods that use more values of $\beta$ are also possible, to further reduce the bias of the gradient estimator (e.g. with $+2\beta$, $+\beta$, $-\beta$ and $-2\beta$).
We call such training procedures \textit{equilibrium propagation} (EqProp), with the intuition that the equilibrium state $s_\star^\beta$ `propagates' across the system as $\beta$ is varied.
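The full procedure, with a free phase, two nudged phases, and the symmetric estimator, can be sketched end to end on a toy energy-based model (our own choice of $E$ and $C$, not from the manuscript): $E(\theta, x, s) = \tfrac{1}{2}(s - \theta x)^2$ and $C(s, y) = \tfrac{1}{2}(s - y)^2$. Equilibria of $F = E + \beta C$ are found by relaxation, as a physical system would:

```python
# EqProp training with symmetric nudging on a toy energy-based model.
# Free and nudged equilibria are found by relaxing s along -dF/ds.

def relax(theta, x, y, beta, s0=0.0, lr=0.5, steps=200):
    s = s0
    for _ in range(steps):
        dF_ds = (s - theta * x) + beta * (s - y)  # dE/ds + beta * dC/ds
        s -= lr * dF_ds
    return s

def dE_dtheta(theta, x, s):
    # local quantity measured at equilibrium
    return -(s - theta * x) * x

theta, eta, beta = 0.0, 0.5, 0.01
x, y = 1.0, 2.0  # a single training example

for _ in range(200):
    s_free = relax(theta, x, y, 0.0)                 # free phase
    s_plus = relax(theta, x, y, +beta, s0=s_free)    # nudged phase, +beta
    s_minus = relax(theta, x, y, -beta, s0=s_free)   # nudged phase, -beta
    grad_est = (dE_dtheta(theta, x, s_plus)
                - dE_dtheta(theta, x, s_minus)) / (2 * beta)
    theta -= eta * grad_est                          # local update rule

print(theta * x)  # prediction at equilibrium, approx y = 2.0
```

Starting each nudged relaxation from the free state (as above) mirrors the physical procedure, where the nudging term perturbs the system around its free equilibrium.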
\section{Examples of Sum-Separable Energy-Based Models} \label{sec:ebms-examples}
We give here a few examples of sum-separable energy-based models. As a first example, the Hopfield model is useful to develop intuitions, as it is well known and well studied in both the machine learning literature and the neuroscience literature; the case of continuous Hopfield networks will be developed in Chapter \ref{chapter:hopfield}. We then briefly present resistive networks, which will be developed in Chapter \ref{chapter:neuromorphic}. Nonlinear resistive networks are potentially promising for the development of neuromorphic hardware, towards the goal of building fast and energy-efficient neural networks. We also briefly present two further instances of energy-based physical systems: flow networks and elastic networks. All these systems can be trained with EqProp.
\paragraph{Hopfield networks.} In the Hopfield model \citep{hopfield1982neural} and its continuous version \citep{cohen1983absolute,hopfield1984neurons}, neurons are interconnected via bi-directional synapses. Each neuron $i$ is characterised by a scalar $s_i$, and each synapse connecting neurons $i$ and $j$ is characterised by a number $W_{ij}$ representing the synaptic strength. The energy function of the model, called \textit{Hopfield energy}, is of the form \begin{equation}
\label{eq:hopfield-energy-generic}
E(\theta, s) = - \sum_{i, j} W_{ij} s_i s_j \end{equation} (or a variant of Eq.~\ref{eq:hopfield-energy-generic}). In this expression, the vector of neural states $s = (s_1, s_2, \ldots, s_N)$ represents the state variable of the system, and the vector of synaptic strengths $\theta = \{ W_{ij} \}_{i, j}$ represents the parameter variable (the set of adjustable parameters). At inference, neurons stabilize to a minimum of the energy function, where the condition $\frac{\partial E}{\partial s} = 0$ is met. Furthermore, the Hopfield energy is sum-separable with each factor of the form $E_{ij}(W_{ij}, s_i, s_j) = - W_{ij} s_i s_j$. Since the energy gradients are equal to $\frac{\partial E_{ij}}{\partial W_{ij}} = - s_i s_j$, the Hopfield model can be trained with EqProp using a sort of contrastive Hebbian learning rule (Chapter \ref{chapter:hopfield}).
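The locality of the Hopfield learning rule follows from the fact that $\frac{\partial E}{\partial W_{ij}} = -s_i s_j$ involves only the two neurons that the synapse connects. The sketch below (with arbitrary small numbers of our own) checks this against a finite-difference derivative of the full energy:

```python
# Finite-difference check that the Hopfield energy gradient with respect to each
# weight W_ij is the local quantity -s_i * s_j.

def energy(W, s):
    # Hopfield energy: E = - sum_{(i,j)} W_ij * s_i * s_j
    return -sum(W[(i, j)] * s[i] * s[j] for (i, j) in W)

s = [0.5, -1.0, 2.0]                          # neural states
W = {(0, 1): 0.3, (1, 2): -0.7, (0, 2): 1.1}  # synaptic strengths

eps = 1e-6
for (i, j) in W:
    W_plus = dict(W)
    W_plus[(i, j)] += eps
    numeric = (energy(W_plus, s) - energy(W, s)) / eps
    analytic = -s[i] * s[j]
    print((i, j), numeric, analytic)  # the two values agree
```

Since the energy is linear in each $W_{ij}$, the finite-difference derivative matches $-s_i s_j$ up to floating-point error.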
\paragraph{Resistive networks.} A linear resistance network is an electrical circuit composed of nodes interconnected by linear resistors. Let $N$ be the number of nodes in the circuit, and denote $V = (V_1, \ldots, V_N)$ the vector of node voltages. Since the power dissipated in a resistor of conductance $g_{ij}$ is $\mathcal{P}_{ij} = g_{ij} \left(V_j-V_i \right)^2$, where $V_i$ and $V_j$ are the terminal voltages of the resistor, the total power dissipated in the circuit is \begin{equation}
\mathcal{P}(\theta, V) = \sum_{i, j} g_{ij} \left(V_j-V_i \right)^2, \end{equation} where $\theta = \{ g_{ij} \}_{i, j}$ is the set of conductances of the circuit, which plays the role of `adjustable parameters'. Notably, linear resistance networks satisfy the so-called \textit{principle of minimum dissipated power}: if the voltages are imposed at a set of input nodes, then the circuit chooses the voltages at other nodes so as to minimize the total power dissipated ($\mathcal{P}$). This implies in particular that $\frac{\partial \mathcal{P}}{\partial V_i} = 0$ for any floating node voltage $V_i$. Thus, linear resistance networks are energy-based models, with $\mathcal{P}$ playing the role of `energy function'. Furthermore, the function $\mathcal{P}$ has the sum-separability property, with each factor of the form $\mathcal{P}_{ij}(g_{ij}, V_i, V_j) = g_{ij} (V_i - V_j)^2$, and each gradient equal to $\frac{\partial \mathcal{P}_{ij}}{\partial g_{ij}} = (V_i - V_j)^2$. Crucially, as we will see in Chapter \ref{chapter:neuromorphic}, in circuits consisting of arbitrary resistive devices, there exists a generalization of the notion of power function $\mathcal{P}$ called \textit{co-content} \citep{millar1951cxvi}. Such circuits, called \textit{nonlinear resistive networks}, can implement analog neural networks, using memristors (to implement the synaptic weights), diodes (to play the role of nonlinearities), voltage sources (to set the voltages of input nodes) and current sources (to inject loss gradients as currents during training).
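The principle of minimum dissipated power can be illustrated on the smallest possible circuit (a sketch with values of our own choosing): two conductances in series between nodes held at fixed voltages, with one floating node in between. Minimizing $\mathcal{P}$ over the floating voltage recovers the familiar voltage-divider formula $V_1 = (g_{01} V_0 + g_{12} V_2)/(g_{01} + g_{12})$ given by Kirchhoff's current law:

```python
# Principle of minimum dissipated power on a two-resistor series circuit:
# P(V1) = g01*(V1 - V0)^2 + g12*(V1 - V2)^2, minimized over the floating voltage V1.

g01, g12 = 2.0, 3.0   # conductances (the 'adjustable parameters')
V0, V2 = 1.0, 0.0     # voltages imposed at the input nodes

V1 = 0.5  # initial guess, then gradient descent on the dissipated power
for _ in range(200):
    dP_dV1 = 2 * g01 * (V1 - V0) + 2 * g12 * (V1 - V2)
    V1 -= 0.05 * dP_dV1

print(V1)  # approx (g01*V0 + g12*V2) / (g01 + g12) = 0.4
```

A real circuit performs this minimization instantaneously through its physics; the gradient descent here only emulates it in simulation.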
\paragraph{Flow networks.} The EqProp framework may also have implications in other areas of engineering, beyond neuromorphic computing. For example, \citet{stern2020supervised} study the case of \textit{flow networks}, e.g. networks of nodes interconnected by pipes. This setting is analogous to the case of resistive networks described above. In a flow network, each node $i$ is described by its pressure $p_i$, and each pipe connecting node $i$ to node $j$ is characterized by its conductance $k_{ij}$. The total dissipated power in the network, which is minimized, is $\mathcal{P}(\theta, p) = \sum_{i, j} k_{ij} \left(p_j-p_i \right)^2$, where $\theta = \{ k_{ij} \}$ is the set of parameters to be adjusted, and $p = \{ p_i \}$ plays the role of the state variable of the system.
\paragraph{Elastic networks.}
\citet{stern2020supervised} also study the case of \textit{central force spring networks}. In this setting, we have a set of $N$ nodes interconnected by linear springs. Each node $i$ is characterized by its 3D position $s_i$. The elastic energy stored in the spring connecting node $i$ to node $j$ is $E_{ij} = \frac{1}{2} k_{ij} \left( r_{ij} - \ell_{ij} \right)^2$, where $k_{ij}$ is the spring constant, $\ell_{ij}$ is the spring's equilibrium length, and $r_{ij} = \| s_i - s_j \|$ is the Euclidean distance between nodes $i$ and $j$. Thus, the total elastic energy stored in the network, which is minimized, is given by \begin{equation}
E(\theta,r) = \sum_{i,j} \frac{1}{2} k_{ij} \left( r_{ij} - \ell_{ij} \right)^2, \end{equation} where $\theta = \{ k_{ij}, \ell_{ij} \}$ is the set of adjustable parameters, and $r = \{ r_{ij} \}$ plays the role of state variable. The energy gradients in this case are $\frac{\partial E_{ij}}{\partial k_{ij}} = \frac{1}{2} \left( r_{ij} - \ell_{ij} \right)^2$ and $\frac{\partial E_{ij}}{\partial \ell_{ij}} = k_{ij} \left( \ell_{ij} - r_{ij} \right)$.
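The per-spring gradients stated above can be verified numerically (a sketch with arbitrary values of our own) by comparing them to finite differences of the elastic energy:

```python
# Finite-difference check of the elastic-energy gradients:
# E_ij = (k/2)*(r - ell)^2, dE/dk = (r - ell)^2 / 2, dE/dell = k*(ell - r).

def spring_energy(k, ell, r):
    return 0.5 * k * (r - ell) ** 2

k, ell, r = 2.0, 1.0, 1.8  # spring constant, rest length, current length
eps = 1e-6

dE_dk_num = (spring_energy(k + eps, ell, r) - spring_energy(k, ell, r)) / eps
dE_dell_num = (spring_energy(k, ell + eps, r) - spring_energy(k, ell, r)) / eps

print(dE_dk_num, 0.5 * (r - ell) ** 2)  # both approx 0.32
print(dE_dell_num, k * (ell - r))       # both approx -1.6
```

Both gradients involve only the spring's own parameters and the distance between its two endpoints, which is what makes the elastic network's energy sum-separable.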
\section{Fundamental Lemma} \label{chapter:fundamental-lemma}
In this section, we present the fundamental lemma of the equilibrium propagation framework, from which Theorem \ref{thm:static-eqprop} derives.
\begin{lma}[\citet{scellier2017equilibrium}] \label{lma:main} Let $F(\theta, \beta, s)$ be a twice differentiable function of the three variables $\theta$, $\beta$ and $s$. For fixed $\theta$ and $\beta$, let $s_\theta^\beta$ be a point that satisfies the stationarity condition \begin{equation} \label{eq:stationary} \frac{\partial F}{\partial s}(\theta, \beta, s_\theta^\beta) = 0, \end{equation} and suppose that $\frac{\partial^2 F}{\partial s^2}(\theta, \beta, s_\theta^\beta)$ is invertible. Then, in the neighborhood of this point, we can define a continuously differentiable function $(\theta, \beta) \mapsto s_\theta^\beta$ such that Eq.~\ref{eq:stationary} holds for any $(\theta, \beta)$ in this neighborhood. Furthermore, we have the following identity: \begin{equation} \label{eq:fund-equation-main-lemma} \frac{d}{d\theta} \frac{\partial F}{\partial \beta}(\theta, \beta, s_\theta^\beta) = \frac{d}{d\beta} \frac{\partial F}{\partial \theta}(\theta, \beta, s_\theta^\beta). \end{equation} \end{lma}
\begin{proof}[Proof of Lemma \ref{lma:main}] The first statement follows from the implicit function theorem. It remains to prove Eq.~\ref{eq:fund-equation-main-lemma}. Let us consider $F \left(\theta, \beta, s_\theta^\beta \right)$ as a function of $(\theta, \beta)$ (not only through $F(\theta, \beta, \cdot)$ but also through $s_\theta^\beta$). Using the chain rule of differentiation and the stationarity condition of Eq.~(\ref{eq:stationary}), we have \begin{equation}
\frac{d}{d\beta} F(\theta, \beta, s_\theta^\beta) = \frac{\partial F}{\partial \beta} \left( \theta, \beta, s_\theta^\beta \right)
+ \underbrace{\frac{\partial F}{\partial s} \left( \theta, \beta, s_\theta^\beta \right)}_{= \; 0} \cdot \frac{\partial s_\theta^\beta}{\partial \beta}. \end{equation} Similarly, we have \begin{equation}
\frac{d}{d\theta} F(\theta, \beta, s_\theta^\beta) = \frac{\partial F}{\partial \theta} \left( \theta, \beta, s_\theta^\beta \right)
+ \underbrace{\frac{\partial F}{\partial s} \left( \theta, \beta, s_\theta^\beta \right)}_{= \; 0} \cdot \frac{\partial s_\theta^\beta}{\partial \theta}. \end{equation} Combining these equations and using the symmetry of second-derivatives, we get: \begin{equation} \frac{d}{d\theta} \frac{\partial F}{\partial \beta}(\theta, \beta, s_\theta^\beta) = \frac{d}{d\theta} \frac{d}{d\beta} F(\theta, \beta, s_\theta^\beta) = \frac{d}{d\beta} \frac{d}{d\theta} F(\theta, \beta, s_\theta^\beta) = \frac{d}{d\beta} \frac{\partial F}{\partial \theta}(\theta, \beta, s_\theta^\beta). \end{equation} \end{proof}
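The identity of Lemma \ref{lma:main} can be checked numerically on a toy function. The sketch below uses $F(\theta, \beta, s) = \frac{1}{2}(s - \theta)^2 + \beta \cdot \frac{1}{2}(s - 1)^2$ (our own example, not from the text), whose stationary point $s_\theta^\beta = (\theta + \beta)/(1 + \beta)$ is available in closed form, and compares the two sides of Eq.~\ref{eq:fund-equation-main-lemma} by central finite differences:

```python
# Toy function: F(theta, beta, s) = 0.5*(s - theta)**2 + beta*0.5*(s - 1)**2.
# Its stationary point (dF/ds = 0) is s = (theta + beta) / (1 + beta).

def s_star(theta, beta):
    return (theta + beta) / (1.0 + beta)

def dF_dbeta(theta, beta):            # partial F / partial beta at s = s_star
    return 0.5 * (s_star(theta, beta) - 1.0) ** 2

def dF_dtheta(theta, beta):           # partial F / partial theta at s = s_star
    return -(s_star(theta, beta) - theta)

theta, beta, eps = 0.3, 0.2, 1e-6
# d/dtheta [dF/dbeta]  vs  d/dbeta [dF/dtheta], by central finite differences
lhs = (dF_dbeta(theta + eps, beta) - dF_dbeta(theta - eps, beta)) / (2 * eps)
rhs = (dF_dtheta(theta, beta + eps) - dF_dtheta(theta, beta - eps)) / (2 * eps)
```

For this example both sides equal $(\theta - 1)/(1 + \beta)^2$ analytically, which the finite differences reproduce.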
\section{Remarks} \label{sec:remarks}
\paragraph{Stationary points.} In the static setting presented in this chapter, EqProp applies to any system whose equilibrium states satisfy the stationary condition of Eq.~\ref{eq:free-equilibrium-state} -- what we have called an \textit{energy-based model} (EBM). While early works \citep{scellier2017equilibrium,scellier2019equivalence} proposed to apply EqProp to EBMs whose equilibrium states are minima of the energy function (e.g. Hopfield networks), we stress here that the equilibrium states may more generally be any stationary points (saddle points or maxima) of the energy function. We note that the landscape of the energy function can in general contain exponentially many more stationary points than (local) minima. For instance, the recently introduced `modern Hopfield networks' \citep{ramsauer2020hopfield} are EBMs in the sense of Eq.~\ref{eq:free-equilibrium-state}.
\paragraph{Infinite dimensions.} In section \ref{sec:ebms-examples}, we have given examples of EBMs in which the state variable $s$ has finitely many dimensions. We note however that $s$ may also belong to an infinite dimensional space (mathematically, a Banach space). In this case, the expression $\frac{\partial F}{\partial s} \left( \theta, \beta, s_\theta^\beta \right) \cdot \frac{\partial s_\theta^\beta}{\partial \beta}$ in Lemma \ref{lma:main} must be thought of as the differential of the function $F \left( \theta, \beta, \cdot \right)$ at the point $s_\theta^\beta$, applied to the vector $\frac{\partial s_\theta^\beta}{\partial \beta}$.
\paragraph{Variational principles of physics.} In physics, many systems can be described by a variational equation of the form of Eq.~\ref{eq:free-equilibrium-state}; we have given examples in Section \ref{sec:ebms-examples}. In fact, the framework presented here transfers directly to time-varying physical systems (Chapter \ref{chapter:future}). In this case, $s$ must be thought of as the \textit{trajectory} of the system, $E$ as a functional called the \textit{action functional}, and the stationary condition of Eq.~\ref{eq:free-equilibrium-state} as a \textit{principle of stationary action}.
\paragraph{Singularities.} We have proved Theorem \ref{thm:static-eqprop} using Lemma \ref{lma:main}. Yet the formula of Lemma \ref{lma:main} assumes that the Hessian $\frac{\partial^2 E}{\partial s^2}(\theta, x, s(\theta, x))$ is invertible. While this assumption is likely to be valid at most iterations of training, it is also likely that, as $\theta$ evolves during training, $\theta$ goes through values where the Hessian of the energy is singular for some input $x$. At such points, it is not clear how the update rule of EqProp behaves. One branch of mathematics that studies these aspects is \textit{Catastrophe Theory}. Although these aspects raise interesting questions, diving into them would take us far from the main thrust of this manuscript. Throughout, we pretend everything is differentiable.
\paragraph{Beyond supervised learning.} Although we have focused on supervised learning, the framework presented in this chapter can be adapted to other machine learning paradigms. For example, we note that the formula of Theorem \ref{thm:static-eqprop} can be directly transposed to compute the loss gradients with respect to input variables of the network: \begin{equation}
\frac{\partial \mathcal{L}}{\partial x}(\theta, x, y) = \left. \frac{d}{d\beta} \right|_{\beta=0} \frac{\partial E}{\partial x} \left( \theta, x, s_\star^\beta \right). \end{equation} This formula may be useful in applications where one wants to do gradient descent in the input space, e.g. image synthesis. This may also be useful in the setting of generative adversarial networks \citep{goodfellow2014generative}, in which we need to compute the loss gradients with respect to inputs of the discriminator network, to further propagate error signals in the generator network. The framework presented in this chapter may also be adapted to model-free reinforcement learning algorithms such as temporal difference (TD) learning (e.g. Q-learning). Finally, the EqProp training procedure has also been used in the meta-learning setting to train the meta-parameters of a model, a method called \textit{contrastive meta-learning} \citep{zucchet2021contrastive}. We briefly present this framework in section \ref{sec:contrastive-meta-learning}. \chapter{Training Continuous Hopfield Networks with Equilibrium Propagation} \label{chapter:hopfield}
In the previous chapter, we have presented equilibrium propagation (EqProp) as a general learning algorithm for energy-based models. In this chapter, we use EqProp to train a class of energy-based models called gradient systems. In particular we apply EqProp to continuous Hopfield networks, a class of neural networks that has inspired neuroscience and neuromorphic computing since the 1980s. The present chapter is essentially a compilation of \citet{scellier2017equilibrium,scellier2019equivalence}, and is organized as follows. \begin{itemize}
\item In section \ref{sec:gradient-system}, we apply EqProp to a class of continuous-time dynamical systems called \textit{gradient systems}. In a gradient system, the state dynamics descend the gradient of a scalar function (the \textit{energy function}) and stabilise to a minimum of that energy function. Thus, in this setting, equilibrium states correspond to energy minima. We provide an analytical formula for the transient states of the system between the free state and the nudged state of the EqProp training process, and we link these transient states to the recurrent backpropagation algorithm of \citet{almeida1987learning} and \citet{pineda1987generalization}.
\item In section \ref{sec:hopfield-model}, we apply EqProp to the continuous Hopfield model, an energy-based neural network model described by an energy function called the \textit{Hopfield energy}. The gradient dynamics associated with the Hopfield energy yields the neural dynamics in Hopfield networks: neurons are seen as leaky integrator neurons, with the constraint that synapses are bidirectional and symmetric. In addition, the update rule of EqProp for each synapse is local (more specifically Hebbian).
\item In section \ref{sec:experiments-hopfield}, we present numerical experiments on deep Hopfield networks trained with EqProp on the MNIST digit classification task.
\item In section \ref{sec:contrastive-hebbian-learning}, we study the relationship between EqProp and the contrastive Hebbian learning algorithm of \citet{movellan1991contrastive}. \end{itemize}
\section{Gradient Systems} \label{sec:gradient-system}
In this section, we present a theoretical result which holds for arbitrary energy functions and cost functions. This section, which deals with the concepts of energy and cost functions in the abstract, is largely independent of the rest of this chapter. The reader who is eager to see how Hopfield networks can be trained with EqProp may skip this section and go straight to the next one.
\subsection{Gradient Systems as Energy-Based Models}
We have seen in Chapter \ref{chapter:eqprop} that EqProp is an algorithm to train systems that possess \textit{equilibrium states}, i.e. states characterized by a variational equation of the form $\frac{\partial E}{\partial s} \left( \theta, x, s_\star \right) = 0$, where $E \left( \theta, x, s \right)$ is a scalar function called \textit{energy function}. Recall that $\theta$ is the set of adjustable parameters of the system, and $x$ is an input. We have called such systems \textit{energy-based models}. The class of energy-based models that we study here is that of systems whose dynamics spontaneously minimizes the energy function $E$ by following its gradient. In such a system, called \textit{gradient system}, the state follows the dynamics \begin{equation} \label{eq:continuous-time-free-phase} \frac{d s_t}{dt} = - \frac{\partial E}{\partial s} \left( \theta, x, s_t \right). \end{equation} Here $s_t$ denotes the state of the system at time $t$. The energy of the system decreases until $\frac{d s_t}{dt} = 0$, and the equilibrium state $s_\star$ reached after convergence of the dynamics is an energy minimum (either local or global). The function $E$ is also sometimes called a \textit{Lyapunov function} for the dynamics of $s_t$.
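A minimal sketch of such a dynamics, for a toy quadratic energy of our own (with hypothetical names; the unique minimum is $s_\star = Wx$), integrates Eq.~\ref{eq:continuous-time-free-phase} with Euler steps:

```python
import numpy as np

# Toy quadratic energy (hypothetical): E(theta, x, s) = 0.5 * ||s - W x||^2,
# whose unique minimum is s_star = W x.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))       # adjustable parameters theta
x = np.array([1.0, -2.0])         # clamped input

def dE_ds(s):
    return s - W @ x

# Euler discretization of ds/dt = -dE/ds
s = np.zeros(3)
dt = 0.1
for _ in range(500):
    s = s - dt * dE_ds(s)
# s has now relaxed (numerically) to the energy minimum W @ x
```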
In this setting, equilibrium states are \textit{stable}: if the state is slightly perturbed around equilibrium, the dynamics will tend to bring the system back to equilibrium. For this reason, such equilibrium states are also called `attractors' or `retrieval states', because the system's dynamics can `retrieve' them if they are only partially known. Thus, gradient systems can recover incomplete data, by storing `memories' in their point attractors.
In this manuscript, we are more specifically interested in the supervised learning problem, where the loss to optimize is of the form \begin{equation} \mathcal{L} = C(s_\star, y). \end{equation} After training is complete, the model can be used to `retrieve' a label $y$ associated to a given input $x$.
\subsection{Training Gradient Systems with Equilibrium Propagation}
In a gradient system, EqProp takes the following form. In the first phase, or free phase, the state of the system follows the gradient of the energy (Eq.~\ref{eq:continuous-time-free-phase}). At the end of the first phase the system is at equilibrium ($s_\star$). In the second phase, or nudged phase, starting from the equilibrium state $s_\star$, a term $- \beta \; \frac{\partial C}{\partial s}$ (where $\beta > 0$ is a hyperparameter called \textit{nudging factor}) is added to the dynamics of the state and acts as an external force nudging the system dynamics towards decreasing the cost $C$. Denoting $s_t^\beta$ the state of the system at time $t$ in the second phase (which depends on the value of the nudging factor $\beta$), the dynamics is defined as\footnote{As discussed in the case of Eq.~\ref{eq:nudged-steady-state}, we can also define $\frac{d s_t^\beta}{dt} = -\frac{\partial E}{\partial s} \left( \theta, x, s_t^\beta \right) - \beta \; \frac{\partial C}{\partial s} \left( s_\star, y \right)$ without changing the conclusions of the theoretical results.} \begin{equation} \label{eq:continuous-time-nudged-phase}
s_0^\beta = s_\star \qquad \text{and} \qquad \forall t \geq 0, \quad \frac{d s_t^\beta}{dt} = -\frac{\partial E}{\partial s} \left( \theta, x, s_t^\beta \right) - \beta \; \frac{\partial C}{\partial s} \left( s_t^\beta, y \right). \end{equation} The system eventually settles to a new equilibrium state $s_\star^\beta$. Recall from Theorem \ref{thm:static-eqprop} that the gradient of the loss $\mathcal{L}$ can be estimated based on the two equilibrium states $s_\star$ and $s_\star^\beta$. Specifically, in the limit $\beta \to 0$, \begin{equation} \label{eqprop-gradient-system} \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial E}{\partial \theta} \left( \theta, x, s_\star^\beta \right) - \frac{\partial E}{\partial \theta} \left( \theta, x, s_\star \right) \right) = \frac{\partial \mathcal{L}}{\partial \theta}. \end{equation} Furthermore, if the energy function has the sum-separability property (as defined by Eq.~\ref{eq:sum-separability}), then the learning rule for each parameter $\theta_k$ is local: \begin{equation} \label{eqprop-gradient-system-local} \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{ x, s_\star^\beta \}_k \right) - \frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{ x, s_\star \}_k \right) \right) = \frac{\partial \mathcal{L}}{\partial \theta_k}. \end{equation}
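The two-phase procedure can be sketched end to end on a toy gradient system (our own example: $E(\theta, s) = \frac{1}{2}\|s - \theta\|^2$ and $C(s, y) = \frac{1}{2}\|s - y\|^2$, so that $s_\star = \theta$ and $\frac{\partial \mathcal{L}}{\partial \theta} = \theta - y$); the contrastive estimate then approaches $\theta - y$ for small $\beta$:

```python
import numpy as np

# Toy gradient system (our own): E(theta, s) = 0.5*||s - theta||^2,
# C(s, y) = 0.5*||s - y||^2, hence the free state is s_star = theta
# and the exact loss gradient is dL/dtheta = theta - y.
theta = np.array([0.5, -1.0])
y = np.array([1.0, 1.0])
beta, dt = 0.01, 0.1

def dE_ds(s):      return s - theta
def dC_ds(s):      return s - y
def dE_dtheta(s):  return theta - s

# Free phase: relax under -dE/ds.
s = np.zeros_like(theta)
for _ in range(1000):
    s = s - dt * dE_ds(s)
s_free = s.copy()

# Nudged phase: relax under -dE/ds - beta*dC/ds, starting from the free state.
for _ in range(1000):
    s = s - dt * (dE_ds(s) + beta * dC_ds(s))
s_nudged = s

# Contrastive estimate of the loss gradient (finite beta).
grad_estimate = (dE_dtheta(s_nudged) - dE_dtheta(s_free)) / beta
exact_grad = theta - y
```

With $\beta = 0.01$ the estimate equals $(\theta - y)/(1 + \beta)$ up to relaxation error, i.e. it matches the exact gradient to about one percent.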
\subsection{Transient Dynamics}
Note that the learning rule of Eqs.~\ref{eqprop-gradient-system}-\ref{eqprop-gradient-system-local} only depends on the equilibrium states $s_\star$ and $s_\star^\beta$, not on the specific trajectory that the system follows to reach them. Indeed, as we have seen in the previous chapter, EqProp applies to any energy-based model, not just gradient systems. But under the assumption of a gradient dynamics, we can say more about the transient states, when the system gradually moves from the free state ($s_\star$) towards the nudged state ($s_\star^\beta$): we show that the transient states ($s_t^\beta$ for $t \geq 0$) perform gradient computation with respect to a function called the \textit{projected cost function}.
Recall that in the free phase, the system follows the dynamics of Eq.~\ref{eq:continuous-time-free-phase}. In particular, the state $s_t$ at time $t \geq 0$ depends not just on $\theta$ and $x$, but also on the initial state $s_0$ at time $t=0$. Let us define the \textit{projected cost function} \begin{equation}
\label{eq:projected-cost-function}
L_t(\theta, s_0) = C \left( s_t \right), \end{equation} where we omit $x$ and $y$ for brevity of notations. $L_t(\theta, s_0)$ is the cost of the state projected a duration $t$ in the future, when the system starts from $s_0$ and follows the dynamics of the free phase. Note that $L_t$ depends on $\theta$ and $s_0$ (as well as $x$) implicitly through $s_t$. For fixed $s_0$, the process $\left( L_t(\theta, s_0) \right)_{t \geq 0}$ represents the successive cost values taken by the state of the system along the free dynamics when it starts from the initial state $s_0$. In particular, for $t=0$, the projected cost is simply the cost of the initial state, i.e. $L_0(\theta, s_0) = C \left( s_0 \right)$. As $t \to \infty$, we have $s_t \to s_\star$ and therefore $L_t(\theta, s_0) \to C(s_\star) = \mathcal{L}(\theta)$, i.e. the projected cost converges to the loss at equilibrium.
The following result shows that the transient states of EqProp ($s_t^\beta$) can be expressed in terms of the projected cost function ($L_t$), when $s_0 = s_\star$.
\begin{thm}[\citet{scellier2019equivalence}] \label{thm:truncated-eqprop} The following identities hold for any $t \geq 0$: \begin{gather}
\label{eq:truncated-eqprop-parameter}
\lim_{\beta \to 0} \frac{1}{\beta} \left(
\frac{\partial E}{\partial \theta} \left( \theta, s_t^\beta \right) - \frac{\partial E}{\partial \theta} \left( \theta, s_\star \right)
\right) = \frac{\partial L_t}{\partial \theta} \left( \theta, s_\star \right), \\
\label{eq:truncated-eqprop-state}
\lim_{\beta \to 0} \frac{1}{\beta} \; \frac{d s_t^\beta}{d t} = - \frac{\partial L_t}{\partial s} \left( \theta, s_\star \right). \end{gather} Furthermore, if the energy function has the sum-separability property (as defined by Eq.~\ref{eq:sum-separability}), then \begin{equation}
\label{eq:truncated-eqprop-local}
\lim_{\beta \to 0} \frac{1}{\beta} \left(
\frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{ s_t^\beta \}_k \right) - \frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{ s_\star \}_k \right)
\right) = \frac{\partial L_t}{\partial \theta_k} \left( \theta, s_\star \right). \end{equation} \end{thm}
\begin{proof}
Eqs.~\ref{eq:truncated-eqprop-parameter}-\ref{eq:truncated-eqprop-state} follow directly from Lemma \ref{lma:rec-backprop} (next subsection). Eq.~\ref{eq:truncated-eqprop-local} follows from Eq.~\ref{eq:truncated-eqprop-parameter} and the definition of sum-separability. \end{proof}
The left-hand side of Eq.~\ref{eq:truncated-eqprop-parameter} represents the gradient provided by EqProp if we substitute $s_t^\beta$ for $s_\star^\beta$ in the gradient formula (Eq.~\ref{eqprop-gradient-system}). This corresponds to a \textit{truncated} version of EqProp, in which the second phase (nudged phase) is halted before convergence to the nudged equilibrium state. Eq.~\ref{eq:truncated-eqprop-parameter} provides an analytical formula for this truncated gradient in terms of the projected cost function, when $s_0 = s_\star$.
The left-hand side of Eq.~\ref{eq:truncated-eqprop-state} is the temporal derivative of $s_t^\beta$ rescaled by a factor $\frac{1}{\beta}$. In essence, Eq.~\ref{eq:truncated-eqprop-state} shows that, in the second phase of EqProp (nudged phase), the temporal derivative of the state \textit{codes} for gradient information (namely the gradients of the projected cost function, when $s_0 = s_\star$).
\begin{figure*}\label{fig:thm-gradient-system}
\end{figure*}
\subsection{Recurrent Backpropagation} \label{sec:proof-temporal-derivatives}
In this section, we prove Theorem \ref{thm:truncated-eqprop}. In doing so, we also establish a link between EqProp and the recurrent backpropagation algorithm of \citet{almeida1987learning} and \citet{pineda1987generalization}, which we briefly present below.
First, let us introduce the temporal processes $(\overline{S}_t, \overline{\Theta}_t)$ and $(\widetilde{S}_t, \widetilde{\Theta}_t)$ defined by \begin{equation}
\label{eq:process-bar}
\forall t \geq 0, \qquad \overline{S}_t = \frac{\partial L_t}{\partial s} \left( \theta, s_\star \right), \qquad \overline{\Theta}_t = \frac{\partial L_t}{\partial \theta} \left( \theta, s_\star \right), \end{equation} and \begin{equation}
\label{eq:process-tilde}
\forall t \geq 0, \qquad \widetilde{S}_t = -\lim_{\beta \to 0} \frac{1}{\beta} \; \frac{d s_t^\beta}{d t}, \qquad
\widetilde{\Theta}_t = \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial E}{\partial \theta} \left( \theta, s_t^\beta \right) - \frac{\partial E}{\partial \theta} \left( \theta, s_\star \right) \right). \end{equation} The processes $\overline{S}_t$ and $\widetilde{S}_t$ take values in the state space (space of the state variable $s$). The processes $\overline{\Theta}_t$ and $\widetilde{\Theta}_t$ take values in the parameter space (space of the parameter variable $\theta$). Using these notations, Theorem \ref{thm:truncated-eqprop} states that $\overline{S}_t = \widetilde{S}_t$ and $\overline{\Theta}_t = \widetilde{\Theta}_t$ for every $t \geq 0$. These identities are a direct consequence of the following lemma.
\footnotetext{https://github.com/ernoult/updatesEPgradientsBPTT}
\begin{restatable}{lma}{lmarbp}
\label{lma:rec-backprop}
Both the processes $(\overline{S}_t, \overline{\Theta}_t)$ and $(\widetilde{S}_t, \widetilde{\Theta}_t)$ are solutions of the same (linear) differential equation:
\begin{align}
\label{eq:Cauchy-1}
S_0 & = \frac{\partial C}{\partial s} \left( s_\star \right), \\
\label{eq:Cauchy-2}
\Theta_0 & = 0, \\
\label{eq:Cauchy-3}
\frac{d}{dt} S_t & = - \frac{\partial^2 E}{\partial s^2} \left( \theta, s_\star \right) \cdot S_t, \\
\label{eq:Cauchy-4}
\frac{d}{dt} \Theta_t & = - \frac{\partial^2 E}{\partial \theta \partial s} \left( \theta, s_\star \right) \cdot S_t.
\end{align}
By uniqueness of the solution, the processes $(\overline{S}_t, \overline{\Theta}_t)$ and $(\widetilde{S}_t, \widetilde{\Theta}_t)$ are equal. \end{restatable}
We refer to \citet{scellier2019equivalence} for a proof of this result. Since the differential equation of Lemma \ref{lma:rec-backprop} is linear with constant coefficients, we can express $S_t$ and $\Theta_t$ using closed-form formulas. Specifically $S_t = \exp \left( - t \frac{\partial^2 E}{\partial s^2} \left( \theta, s_\star \right) \right) \cdot \frac{\partial C}{\partial s} \left( s_\star \right)$ and, integrating Eq.~\ref{eq:Cauchy-4} from $\Theta_0 = 0$, $\Theta_t = - \frac{\partial^2 E}{\partial \theta \partial s} \left( \theta, s_\star \right) \cdot \left( \frac{\partial^2 E}{\partial s^2} \left( \theta, s_\star \right) \right)^{-1} \cdot \left[ \textrm{Id} - \exp \left( - t \frac{\partial^2 E}{\partial s^2} \left( \theta, s_\star \right) \right) \right] \cdot \frac{\partial C}{\partial s} \left( s_\star \right)$.
Lemma \ref{lma:rec-backprop} also suggests an alternative procedure to compute the parameter gradients of the loss $\mathcal{L}$ numerically. This procedure, known as \textit{Recurrent Backpropagation} (RBP), was introduced independently by \citet{almeida1987learning} and \citet{pineda1987generalization}. Specifically, RBP consists of the following two phases. The first phase is the same as the free phase of EqProp: $s_t$ follows the free dynamics (Eq.~\ref{eq:continuous-time-free-phase}) and relaxes to the equilibrium state $s_\star$. The state $s_\star$ is necessary for evaluating $\frac{\partial^2 E}{\partial s^2} \left( \theta, s_\star \right)$ and $\frac{\partial^2 E}{\partial \theta \partial s} \left( \theta, s_\star \right)$, which the second phase requires. In the second phase, $S_t$ and $\Theta_t$ are computed iteratively for increasing values of $t$ using Eq.~\ref{eq:Cauchy-1}-\ref{eq:Cauchy-4}. Finally, $\Theta_t$ provides the desired loss gradient in the limit $t \to \infty$. To see this, we first note that Lemma \ref{lma:rec-backprop} tells us that the vector $\Theta_t$ computed by this procedure is equal to $\widetilde{\Theta}_t$ for any $t \geq 0$. Then by definition, $\widetilde{\Theta}_t = \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial E}{\partial \theta} \left( \theta, s_t^\beta \right) - \frac{\partial E}{\partial \theta} \left( \theta, s_\star \right) \right)$. It follows that, as $t \to \infty$, we have $\widetilde{\Theta}_t \to \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial E}{\partial \theta} \left( \theta, s_\star^\beta \right) - \frac{\partial E}{\partial \theta} \left( \theta, s_\star \right) \right) = \frac{\partial \mathcal{L}}{\partial \theta}$.
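The second phase of RBP can be sketched on a toy quadratic energy (our own example: $E(\theta, s) = \frac{1}{2} s^\top A s - \theta^\top s$ with $A$ symmetric positive definite, so that $\frac{\partial^2 E}{\partial s^2} = A$ and $\frac{\partial^2 E}{\partial \theta \partial s} = -\mathrm{Id}$, and $\frac{\partial \mathcal{L}}{\partial \theta} = A^{-1}(s_\star - y)$ in closed form). Euler-integrating Eqs.~\ref{eq:Cauchy-1}-\ref{eq:Cauchy-4} then recovers the loss gradient:

```python
import numpy as np

# Toy quadratic setup (our own): E(theta, s) = 0.5 s^T A s - theta^T s,
# C(s) = 0.5 ||s - y||^2.  Then d2E/ds2 = A, d2E/(dtheta ds) = -Id, and the
# free equilibrium solves A s = theta.
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])            # symmetric positive definite Hessian
theta = np.array([1.0, -0.5])
y = np.array([0.2, 0.4])

s_star = np.linalg.solve(A, theta)    # phase 1: free equilibrium

# Phase 2 of RBP: Euler-integrate the linear ODE of the lemma.
S = s_star - y                        # S_0 = dC/ds(s_star)
Theta = np.zeros(2)                   # Theta_0 = 0
dt = 0.01
for _ in range(5000):
    Theta = Theta + dt * S            # dTheta/dt = -d2E/(dtheta ds) . S = +S here
    S = S - dt * (A @ S)              # dS/dt = -d2E/ds2 . S = -A S

# For this toy, dL/dtheta = A^{-1} (s_star - y), to which Theta converges.
exact_grad = np.linalg.solve(A, s_star - y)
```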
An important benefit of EqProp over RBP is that EqProp requires only one kind of dynamics for both phases of training, whereas RBP requires a special computational circuit in the second phase for computing the gradients.
The original RBP algorithm was described
for a general state-to-state dynamics. Here, we have presented RBP
in the particular case of gradient dynamics. We refer to \citet{lecun1988theoretical} for a more general derivation of RBP based on the adjoint method.
\section{Continuous Hopfield Networks} \label{sec:hopfield-model}
In the previous section we have presented a theoretical result which is generic and involves the energy function $E$ and cost function $C$ in their abstract form. In this section, we study EqProp in the context of a neural network model called the continuous Hopfield model \citep{hopfield1984neurons}.
\subsection{Hopfield Energy}
A Hopfield network is a neural network with the following characteristics. The state of a neuron $i$ is described by a scalar $s_i$, loosely representing its membrane voltage. The state of a synapse connecting neuron $i$ to neuron $j$ is described by a real number $W_{ij}$ representing its efficacy (or `strength'). The notation $\sigma(s_i)$ is further used to denote the firing rate of neuron $i$. The function $\sigma$ is called the \textit{activation function}; it takes a real number as input and returns a real number as output. Using the formalism of the previous section, the state of the system is the vector $s= \left( s_1, s_2, \ldots, s_N \right)$ where $N$ is the number of neurons in the network, and the set of parameters to be adjusted is $\theta = \{ W_{ij} \}_{ij}$. As we will see shortly, one biologically unrealistic requirement of the Hopfield model is that synapses are assumed to be bidirectional and symmetric: the synapse connecting $i$ to $j$ shares the same weight value as the synapse connecting $j$ to $i$, i.e. $W_{ij} = W_{ji}$.
\paragraph{Hopfield Energy.} \citet{hopfield1984neurons} introduced the following energy function\footnote{The energy function of Eq.~\ref{eq:hopfield-energy} is in fact the one proposed by \citet{bengio2017stdp}. The energy function introduced by Hopfield is slightly different, but this technical detail is not essential for our purpose.}: \begin{equation}
\label{eq:hopfield-energy}
E(\theta, s) = \frac{1}{2} \sum_i s_i^2 - \sum_{i < j} W_{ij} \sigma(s_i) \sigma(s_j), \end{equation} which we will call the \textit{Hopfield energy}. We calculate \begin{equation}
\label{eq:leaky-integrator-hopfield}
-\frac{\partial E}{\partial s_i} = \sigma'(s_i) \left( \sum_{j \neq i} W_{ij} \sigma(s_j) \right) - s_i. \end{equation} Thus, the gradient dynamics for neuron $s_i$ with respect to the Hopfield energy is given by the formula $\frac{d s_i}{dt} = \sigma'(s_i) \left( \sum_{j \neq i} W_{ij} \sigma(s_j) \right) - s_i$. This dynamics is reminiscent of the leaky integrator neuron model, a simplified neuron model commonly used in neuroscience. The main difference with the standard leaky-integrator neuron model is the fact that synaptic weights are constrained to be bidirectional and symmetric, a biologically unrealistic constraint often referred to as the \textit{weight transport problem}. Another difference is the presence of the term $\sigma'(s_i)$ which modulates the total input to neuron $i$.
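The gradient formula of Eq.~\ref{eq:leaky-integrator-hopfield} can be verified numerically. The sketch below (a small random network of our own, with $\sigma = \tanh$) compares it to a finite difference of the Hopfield energy:

```python
import numpy as np

# Small random Hopfield network (our own setup, for checking purposes).
rng = np.random.default_rng(1)
N = 4
W = rng.normal(size=(N, N))
W = (W + W.T) / 2                     # bidirectional, symmetric weights
np.fill_diagonal(W, 0.0)              # no self-connections
s = rng.normal(size=N)

def sigma(u):                         # activation function (firing rate)
    return np.tanh(u)

def dsigma(u):
    return 1.0 - np.tanh(u) ** 2

def E(state):
    # 0.5 * sum_i s_i^2 - sum_{i<j} W_ij sigma(s_i) sigma(s_j)
    rho = sigma(state)
    return 0.5 * np.sum(state ** 2) - 0.5 * rho @ W @ rho

# Analytical formula: -dE/ds_i = sigma'(s_i) * sum_j W_ij sigma(s_j) - s_i
grad = dsigma(s) * (W @ sigma(s)) - s

# Finite-difference check on one coordinate
i, eps = 2, 1e-6
e = np.zeros(N); e[i] = eps
fd = -(E(s + e) - E(s - e)) / (2 * eps)
```

(With symmetric $W$ and zero diagonal, $\sum_{i<j} W_{ij}\sigma(s_i)\sigma(s_j) = \frac{1}{2}\sigma(s)^\top W \sigma(s)$, which the code uses.)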
\paragraph{Squared Error.} In the supervised setting that we study here, a set of neurons are input neurons, denoted $x$, and are always clamped to their input values. Among the `free' neurons ($s$), a subset of them are \textit{output neurons} (denoted $o$), meaning that they represent the network's output. The network's prediction is the state of output neurons at equilibrium (denoted $o_\star$). We call all other neurons the \textit{hidden neurons} and denote them $h$. Thus, the state of the network is $s = \left( h, o \right)$. The cost function considered here is the squared error \begin{equation}
\label{eq:sq-cost-function}
C(s, y) = \frac{1}{2} \left\lVert o-y \right \rVert^2, \end{equation} which measures the discrepancy between the state of output neurons ($o$) and their target values ($y$).
\paragraph{Total Energy.} One of the novelties of EqProp with respect to prior learning algorithms for energy-based models is the \textit{total energy function} $F$, which takes the form $F = E + \beta \; C$, where $\beta$ is a real-valued scalar (the \textit{nudging factor}). The function $C$ not only represents the cost to minimize, but also contributes to the total energy of the system by acting like an external potential for the output neurons ($o$). Thus, the total energy $F$ is the sum of two potential energies: an `internal potential' ($E$) that models the interactions within the network, and an `external potential' ($\beta \; C$) that models how the targets influence the output neurons. The resulting gradient dynamics $\frac{d s_t}{dt} = - \frac{\partial E}{\partial s} - \beta \frac{\partial C}{\partial s}$ consists of two `forces' which act on the temporal derivative of $s_t$. The `internal force' (induced by $E$) is that of a leaky integrator neuron (Eq.~\ref{eq:leaky-integrator-hopfield}). The `external force' (induced by $\beta \; C$) on $s=(h,o)$ takes the form: \begin{equation}
\label{eq:external-force}
- \beta \frac{\partial C}{\partial h} = 0 \qquad \text{and} \qquad
- \beta \frac{\partial C}{\partial o} = \beta ( y-o ). \end{equation} This external force acts on output neurons only: it can pull them (if $\beta > 0$) towards their target values ($y$), or repel them (if $\beta < 0$). The nudging factor $\beta$ controls the strength of this interaction between output neurons and targets. In particular, when $\beta=0$, the output neurons are not sensitive to the targets.
\begin{figure*}\label{fig:network_undirected}
\end{figure*}
\subsection{Training Continuous Hopfield Networks with Equilibrium Propagation} \label{sec:eqprop-hopfield}
Consider a deep Hopfield network (DHN) of the kind depicted in Figure \ref{fig:network_undirected}. For each training example $(x, y)$ in the dataset, EqProp training proceeds as follows.
\paragraph{Free Phase.} At inference, inputs $x$ are clamped, and both the hidden neurons ($h^1$ and $h^2$) and output neurons ($o$) evolve freely, following the gradient of the energy. The hidden and output neurons subsequently stabilize to an energy minimum, called \textit{free state} and denoted $s_\star = \left( h_\star^1, h_\star^2, o_\star \right)$. The state of output neurons at equilibrium ($o_\star$) plays the role of prediction for the model.
\paragraph{Nudged Phase.} After relaxation to the free state $s_\star$, the target $y$ is observed, and the nudging factor $\beta$ takes on a positive value, gradually driving the state of output neurons ($o$) towards $y$. Since the external force only acts on the output neurons, the hidden layers ($h^1$ and $h^2$) are still at equilibrium at the beginning of the nudged phase. The perturbation introduced at the output neurons gradually propagates backwards through the layers of the network, until the system settles to a new equilibrium state ($s_\star^\beta$).
\begin{prop}[\citet{scellier2017equilibrium}] \label{prop:eqprop-hopfield}
Denote $s_i^0$ and $s_i^\beta$ the free state and nudged state of neuron $i$, respectively. Then, we have the following formula to estimate the gradient of the loss $\mathcal{L} = \frac{1}{2} \| o_\star - y \|^2$ : \begin{equation} \lim_{\beta \to 0} \frac{1}{\beta} \left( \sigma \left( s_i^\beta \right) \sigma \left( s_j^\beta \right) - \sigma \left( s_i^0 \right) \sigma \left( s_j^0 \right) \right) = -\frac{\partial \mathcal{L}}{\partial W_{ij}}. \end{equation} \end{prop}
\begin{proof} This is a direct consequence of the main Theorem \ref{thm:static-eqprop}, applied to the Hopfield energy function (Eq.~\ref{eq:hopfield-energy}) and the squared error cost function (Eq.~\ref{eq:sq-cost-function}). Notice that the Hopfield energy has the sum-separability property (as defined by Eq.~\ref{eq:sum-separability}), with each factor of the form $E_{ij}(W_{ij}, s_i, s_j) = - W_{ij} \sigma(s_i) \sigma(s_j)$. \end{proof}
Proposition \ref{prop:eqprop-hopfield} suggests for each synapse $W_{ij}$ the update rule \begin{equation} \label{eq:global-update-hopfield} \Delta W_{ij} = \frac{\eta}{\beta} \left( \sigma \left( s_i^\beta \right) \sigma \left( s_j^\beta \right) - \sigma \left( s_i^0 \right) \sigma \left( s_j^0 \right) \right), \end{equation} where $\eta$ is a learning rate. This learning rule is a form of contrastive Hebbian learning (CHL), with a Hebbian term at one equilibrium state, and an anti-Hebbian term at the other equilibrium state. We will discuss in Section \ref{sec:contrastive-hebbian-learning} the relationship between EqProp and the CHL algorithm of \citet{movellan1991contrastive}.
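The update rule of Eq.~\ref{eq:global-update-hopfield} can be written in a few lines. In the sketch below the free and nudged states are random placeholders (illustration only; in practice they are the equilibria reached in the two phases), and the resulting update is symmetric, as required by $W_{ij} = W_{ji}$:

```python
import numpy as np

# Placeholder free and nudged states (illustration only; in practice these
# come from the free and nudged relaxation phases of EqProp).
rng = np.random.default_rng(2)
N, eta, beta = 5, 0.1, 0.5
s_free = rng.normal(size=N)           # s_i^0
s_nudged = rng.normal(size=N)         # s_i^beta

rho_free = np.tanh(s_free)            # sigma(s_i^0)
rho_nudged = np.tanh(s_nudged)        # sigma(s_i^beta)

# Contrastive Hebbian update: Hebbian term at the nudged state,
# anti-Hebbian term at the free state.
dW = (eta / beta) * (np.outer(rho_nudged, rho_nudged)
                     - np.outer(rho_free, rho_free))
np.fill_diagonal(dW, 0.0)             # no self-connections
```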
\subsection{`Backpropagation' of Error Signals}
It is interesting to note that EqProp is similar in spirit to the backpropagation algorithm \citep{rumelhart1988learning}. The free phase of EqProp, which corresponds to inference, plays the role of the forward pass in a feedforward net. The nudged phase of EqProp is similar to the backward pass of backpropagation, in that the target output is revealed and it involves the propagation of loss gradient signals. This analogy is even more apparent in a layered network like the one depicted in Fig.~\ref{fig:network_undirected}: in the nudged phase of EqProp, error signals (back-)propagate across the layers of the network, from output neurons to input neurons. Theorem \ref{thm:truncated-eqprop} gives a more quantitative description of how gradient computation is performed in the nudged phase, with the temporal derivatives of neural activity carrying gradient signals. Thus, like backprop, the learning process in EqProp is driven by an error signal; but unlike backprop, neural computation in EqProp corresponds to both inference and error back-propagation. The idea that error signals in neural networks can be encoded in the temporal derivatives of neural activity was also explored by \citet{hinton1988learning,movellan1991contrastive,o1996biologically}, and has been recently formulated as a hypothesis for neuroscience \citep{lillicrap2020backpropagation}.
Because error signals are propagated in the network via the neural dynamics, synaptic plasticity can be driven directly by the dynamics of the neurons. Indeed, the global update of EqProp (Eq.~\ref{eq:global-update-hopfield}) is equal to the temporal integration of infinitesimal updates \begin{equation}
\label{eq:real-time-update}
dW_{ij} = \frac{\eta}{\beta} d \left( \sigma(s_i) \sigma(s_j) \right) \end{equation} over the nudged phase, as the neurons gradually move from their free state ($s_\star$) to their nudged state ($s_\star^\beta$). This suggests an alternative method to implement the global weight update: in the first phase, while the neurons relax to the free state, no synaptic update occurs ($\Delta W_{ij} = 0$); in the second phase, the real-time update of Eq.~\ref{eq:real-time-update} is performed as the neurons evolve from their free state to their nudged state. This idea is formalized and tested numerically in \citet{ernoult2020equilibrium}.
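The equality between this temporal integration and the two-point global update is a telescoping sum: the increments of $\sigma(s_i)\sigma(s_j)$ accumulated along the relaxation trajectory sum to the difference of the endpoint values. A quick numerical check, with a made-up trajectory interpolating between a free state and a nudged state:

```python
import numpy as np

def sigma(s):
    return np.clip(s, 0.0, 1.0)

eta, beta = 0.1, 1.0
s_free, s_nudged = np.array([0.2, 0.7]), np.array([0.35, 0.55])

# fabricated relaxation trajectory from the free state to the nudged state
trajectory = [s_free + t * (s_nudged - s_free) for t in np.linspace(0.0, 1.0, 50)]

# accumulate the real-time updates dW = (eta/beta) * d(sigma(s_i) sigma(s_j))
dW_accumulated = np.zeros((2, 2))
prev = np.outer(sigma(trajectory[0]), sigma(trajectory[0]))
for s in trajectory[1:]:
    curr = np.outer(sigma(s), sigma(s))
    dW_accumulated += (eta / beta) * (curr - prev)
    prev = curr

# the accumulated update equals the two-point global update
dW_global = (eta / beta) * (np.outer(sigma(s_nudged), sigma(s_nudged))
                            - np.outer(sigma(s_free), sigma(s_free)))
```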
From a biological perspective, perhaps the most unrealistic assumption in this model of credit assignment is the requirement of symmetric weights. This constraint can be relaxed at the cost of computing a biased gradient \citep{scellier2018generalization,ernoult2020equilibrium,laborieux2020scaling,tristany2020equilibrium}.
\section{Numerical Experiments on MNIST} \label{sec:experiments-hopfield}
In this section, we present the experimental results of \citet{scellier2017equilibrium}. In these simulations, we train deep Hopfield networks of the kind depicted in Fig.~\ref{fig:network_undirected}. Our networks have no skip-layer connections and no lateral connections\footnote{We stress that the models trainable by EqProp are not limited to the chain-like architecture of Fig.~\ref{fig:network_undirected}. Other works have studied the effect of adding skip-layer connections \citep{gammell2020layer} and introducing sparsity \citep{tristany2020equilibrium}.}. We recall that these Hopfield networks, unlike feedforward networks, are recurrently connected, with bidirectional and symmetric connections (i.e. the synapse from neuron $i$ to neuron $j$ shares the same weight value as the synapse from neuron $j$ to neuron $i$).
We train these Hopfield networks on the MNIST digits classification task \citep{lecun1998gradient}. The MNIST dataset (the `modified' version of the National Institute of Standards and Technology dataset) of handwritten digits is composed of 60,000 training examples and 10,000 test examples. Each example $x$ in the dataset is a $28 \times 28$ gray-scaled image and comes with a label $y \in \left\{ 0, 1, \ldots, 9 \right\}$ indicating the digit that the image represents. Given an input $x$, the network's prediction $\widehat{y}$ is the index of the output neuron (among the $10$ output neurons) whose activity at equilibrium is maximal, that is \begin{equation}
\widehat{y} = \underset{i \in \{ 0, 1, \ldots, 9 \}}{\arg \max} \; o_{\star, i}. \end{equation} The network is optimized by stochastic gradient descent (SGD). The process to perform one training iteration on a sample of the training set (i.e. to compute the corresponding gradient and to take one step of SGD) is the one described in section \ref{sec:eqprop-hopfield}. For efficiency of the experiments, we use minibatches of $20$ training examples.
\subsection{Implementation Details}
The hyperparameters chosen for each model are shown in Table \ref{table:hopfield-results}. The code is available online\footnote{\url{https://github.com/bscellier/Towards-a-Biologically-Plausible-Backprop}}.
\paragraph{Architecture.} We train deep Hopfield networks with $1$, $2$ and $3$ hidden layers. The input layer consists of $28 \times 28 = 784$ neurons. The hidden layers consist of $500$ hidden neurons each. The output layer consists of $10$ output neurons.
\paragraph{Weight initialization.} The weights of the network are initialized\footnote{Little is known about how to initialize the weights of recurrent neural networks with static input. More exploration is needed to find appropriate initialization schemes for such networks.} according to the Glorot-Bengio initialization scheme \citep{glorot2010understanding}, i.e. each weight matrix is initialized by drawing i.i.d. samples uniformly at random in the range $[L, U]$, where $L=-\frac{\sqrt{6}}{\sqrt{n_i + n_{i+1}}}$ and $U=\frac{\sqrt{6}}{\sqrt{n_i + n_{i+1}}}$, with $n_i$ the fan-in and $n_{i+1}$ the fan-out of the weight matrix.
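A sketch of this initialization scheme, for the layer sizes of the one-hidden-layer network used in the text (the function name is ours):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    # U[-sqrt(6)/sqrt(fan_in + fan_out), +sqrt(6)/sqrt(fan_in + fan_out)]
    bound = np.sqrt(6.0) / np.sqrt(fan_in + fan_out)
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

rng = np.random.default_rng(0)
layer_sizes = [784, 500, 10]  # input, hidden, output layers of DHN-1h
weights = [glorot_uniform(n_in, n_out, rng)
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
```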
\paragraph{Implementation of the neural dynamics.} Recall that, for a fixed input-target pair $(x, y)$, the total energy is $F(\theta, \beta, s) = E(\theta, x, s) + \beta \; C(s, y)$. We implement the gradient dynamics $\frac{ds_t}{dt} = -\frac{\partial F}{\partial s} \left( \theta, \beta, s_t \right)$ using the Euler scheme, meaning that we discretize time into short time lapses of duration $\epsilon$ and iteratively update the state of the network (hidden and output neurons) according to \begin{equation}
\label{eq:gradient-descent}
s_{t+1} = s_t - \epsilon \frac{\partial F}{\partial s} \left( \theta, \beta, s_t \right). \end{equation} This process can be thought of as one step of gradient descent (in the state space) on the total energy $F$, with learning rate $\epsilon$. In practice we find it necessary to restrict each state variable (i.e. each neuron) to a bounded interval; we choose the interval $[0, 1]$. This amounts to using a modified (clipped) version of the Euler scheme: \begin{equation}
\label{eq:clipped-gradient-descent}
s_{t+1} = \min \left( \max \left( 0, s_t - \epsilon \frac{\partial F}{\partial s} \left( \theta, \beta, s_t \right) \right), 1 \right). \end{equation} We choose $\epsilon = 0.5$ in the simulations. The number of iterations in the free phase is denoted $T$. The number of iterations in the nudged phase is denoted $K$.
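The clipped Euler update of Eq.~\ref{eq:clipped-gradient-descent} can be sketched as follows, here on a toy quadratic energy standing in for the total energy $F$ (the quadratic energy and the value $T = 50$ are illustrative assumptions):

```python
import numpy as np

def clipped_euler_step(s, grad_F, epsilon):
    # one step of the clipped Euler scheme: gradient descent on F,
    # with each neuron restricted to the interval [0, 1]
    return np.clip(s - epsilon * grad_F(s), 0.0, 1.0)

# toy energy F(s) = 0.5 * ||s - target||^2, so grad F(s) = s - target
target = np.array([0.3, 0.9])
grad_F = lambda s: s - target

s = np.array([0.0, 0.0])
for _ in range(50):  # free-phase relaxation, T = 50 iterations
    s = clipped_euler_step(s, grad_F, epsilon=0.5)
```

With $\epsilon = 0.5$, the state halves its distance to the minimum at every step, so it reaches the energy minimum to machine precision well within the $T$ iterations.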
\paragraph{Number of iterations in the free phase ($T$).} We find experimentally that for the network to be successfully trained, it is necessary that the equilibrium state be reached with very high precision in the free phase (otherwise the gradient estimate of EqProp is unreliable). As a consequence, we require a large number of iterations (denoted $T$) to reach this equilibrium state. Moreover we find that $T$ grows fast as the number of layers increases (see Table \ref{table:hopfield-results}). Nevertheless, we will see in Chapter \ref{chapter:discrete-time} that we can experimentally cut down the number of iterations by a factor of five by rewriting the free phase dynamics differently. Importantly, we stress that the large number of time steps required in the free phase is only a concern for computer simulations; we will see in Chapter \ref{chapter:neuromorphic} that inference can potentially be extremely fast if performed appropriately on analog hardware (by using the physics of the circuit, rather than numerical optimization on conventional computers).
\paragraph{Number of iterations in the nudged phase ($K$).} During the second phase of training, we find experimentally that full relaxation to the nudged equilibrium state is not necessary. This observation is also partly justified by Theorem \ref{thm:truncated-eqprop}, which gives an explicit formula for the `truncated gradient' provided by EqProp when the nudged phase is halted before convergence. As a heuristic, we choose $K$ (the number of iterations in the nudged phase) proportional to the number of layers, so that the `error signals' are able to propagate from output neurons back to input neurons.
\paragraph{Nudging factor ($\beta$).} In spite of its intrinsic bias (Lemma \ref{lma:gradient-estimators}), we find that the one-sided gradient estimator performs well on MNIST (as also observed by \citet{ernoult2019updates}). We choose $\beta=1$ in the experiments. Although it is not crucial, we find that the test accuracy is slightly improved by choosing the sign of $\beta$ at random in the nudged phase of each training iteration (with probability $p(\beta=1)=1/2$ and $p(\beta=-1)=1/2$). Randomizing $\beta$ indeed has the effect of cancelling on average the $O(\beta)$-error term of the one-sided gradient estimator.
While this is not necessary on MNIST, we will see in Chapter \ref{chapter:discrete-time} that on a more complex task such as CIFAR-10, unbiasing the gradient estimator is necessary, and that the symmetric nudging estimator (Eq.~\ref{eq:two-sided-estimator}) further helps stabilize training and improve test accuracy.
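The bias cancellation achieved by randomizing the sign of $\beta$ can be illustrated on a toy scalar function: the one-sided estimator $(f(\beta) - f(0))/\beta$ carries an error term whose sign flips with $\beta$, so averaging estimates obtained with $\beta = +1$ and $\beta = -1$ removes it. A made-up check with $f(\beta) = a\beta + b\beta^2$, whose true derivative at $0$ is $a$:

```python
a, b = 2.0, 0.7
f = lambda beta: a * beta + b * beta**2  # toy stand-in for beta -> loss at equilibrium

def one_sided_estimate(beta):
    # finite-difference estimator of f'(0), biased by an O(beta) term
    return (f(beta) - f(0.0)) / beta

est_pos = one_sided_estimate(+1.0)   # a + b: biased by +b
est_neg = one_sided_estimate(-1.0)   # a - b: biased by -b
est_avg = 0.5 * (est_pos + est_neg)  # the O(beta) biases cancel on average
```

For this quadratic toy function the cancellation is exact; in general the averaged estimator is accurate to $O(\beta^2)$.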
\paragraph{Learning rates.}
We find experimentally that we need different learning rates for the weight matrices of different layers. We choose these learning rates heuristically as follows. Denote by $h^0, h^1, \cdots, h^N$ the layers of the network (where $h^0 = x$ and $h^N = o$) and by $W_k$ the weight matrix between the layers $h^{k-1}$ and $h^k$. We choose the learning rate $\alpha_k$ for $W_k$ proportionally to $\frac{\left\lVert W_k \right \rVert}{\mathbb{E} \left[ \left\lVert \nabla_{W_k} \right \rVert \right]}$, where $\mathbb{E} \left[ \left\lVert \nabla_{W_k} \right \rVert \right]$ represents the norm of the EqProp gradient for layer $W_k$, averaged over training examples.
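A sketch of this learning-rate heuristic (the function name, the proportionality constant and the gradient norms below are illustrative assumptions; in practice the gradient norms would be averaged over training examples):

```python
import numpy as np

def layerwise_learning_rates(weights, avg_grad_norms, scale=0.02):
    """Choose a learning rate per layer, proportional to
    ||W_k|| / E[||grad_{W_k}||] (Frobenius norms)."""
    return [scale * np.linalg.norm(W) / g_norm
            for W, g_norm in zip(weights, avg_grad_norms)]

rng = np.random.default_rng(0)
weights = [rng.normal(size=(784, 500)), rng.normal(size=(500, 10))]
avg_grad_norms = [5.0, 1.0]  # made-up averaged EqProp gradient norms per layer
alphas = layerwise_learning_rates(weights, avg_grad_norms)
```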
\subsection{Experimental Results}
Table \ref{table:hopfield-results} (top) presents the experimental results of \citet{scellier2017equilibrium}. These experiments aim to demonstrate that the EqProp training scheme is able to perfectly (over)fit the training dataset, i.e. to get the error rate on the training set down to $0.00\%$. To achieve this, we use the following trick to reach the equilibrium state of the first phase more easily: at each epoch of training, for each example in the training set, we store the corresponding equilibrium state (i.e. the state of the hidden and output neurons at the end of the free phase), and we use this configuration as a starting point for the next free phase relaxation on that example. This method, which is similar to the PCD (Persistent Contrastive Divergence) algorithm for sampling from the equilibrium distribution of the Boltzmann machine \citep{tieleman2008training}, speeds up the first phase and allows the equilibrium state to be reached with higher precision.
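The caching trick can be sketched with a dictionary mapping each training example to the equilibrium state found at the previous epoch (an illustrative minimal version; \texttt{relax} stands for the free-phase relaxation and the toy energy is a placeholder):

```python
import numpy as np

state_cache = {}  # example index -> equilibrium state found at the last epoch

def relax(s_init, grad_F, epsilon=0.5, n_steps=20):
    # free-phase relaxation using the clipped Euler scheme
    s = s_init
    for _ in range(n_steps):
        s = np.clip(s - epsilon * grad_F(s), 0.0, 1.0)
    return s

def free_phase(example_idx, grad_F, state_dim):
    # warm-start from the cached state if available, else from zero
    s_init = state_cache.get(example_idx, np.zeros(state_dim))
    s_star = relax(s_init, grad_F)
    state_cache[example_idx] = s_star  # reused at the next epoch
    return s_star

# toy usage: two 'epochs' on one example with energy 0.5 * ||s - 0.4||^2
grad_F = lambda s: s - 0.4
s1 = free_phase(0, grad_F, state_dim=3)
s2 = free_phase(0, grad_F, state_dim=3)  # starts from s1, already near equilibrium
```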
However, this technique hurts generalization performance. Table~\ref{table:hopfield-results} (bottom) shows the experimental results of \citet{ernoult2019updates}, which do not use this technique: during training, for each training example in the dataset, the state of the network is initialized to zero at the beginning of each free phase relaxation. The resulting test error rate is lower, though the number of iterations required in the free phase to converge to equilibrium is larger.
\begin{table}[ht!] \centering
$\begin{array}{|c|c|cc|ccccc|cccc|} \hline
\hbox{Model} & {\rm cached} & {\rm Test \; er.} & {\rm Train \; er.} & T & K & \epsilon & \beta & {\rm Epochs} & \alpha_1 & \alpha_2 & \alpha_3 & \alpha_4 \\ \hline
\hbox{DHN-1h} & Y & \sim 2.5 \; \% & 0.00 \; \% & 20 & 4 & 0.5 & 1.0 & 25 & 0.1 & 0.05 & & \\
\hbox{DHN-2h} & Y & \sim 2.3 \; \% & 0.00 \; \% & 100 & 6 & 0.5 & 1.0 & 60 & 0.4 & 0.1 & 0.01 & \\
\hbox{DHN-3h} & Y & \sim 2.7 \; \% & 0.00 \; \% & 500 & 8 & 0.5 & 1.0 & 150 & 0.128 & 0.032 & 0.008 & 0.002 \\ \hline
\hbox{DHN-1h} & N & 2.06 \; \% & 0.13 \; \% & 100 & 12 & 0.2 & 0.5 & 30 & 0.1 & 0.05 & & \\
\hbox{DHN-2h} & N & 2.01 \; \% & 0.11 \; \% & 500 & 40 & 0.2 & 0.8 & 50 & 0.4 & 0.1 & 0.01 & \\ \hline \end{array}$ \caption[Experimental results of \citet{scellier2017equilibrium} on deep Hopfield networks trained on MNIST.]{ "DHN-$\#$h" stands for Deep Hopfield Network with $\#$ hidden layers. `cached' refers to whether or not the equilibrium states are cached and reused as a starting point at the next free phase relaxation. $T$ is the number of iterations in the free phase. $K$ is the number of iterations in the nudged phase. $\epsilon$ is the step size for the dynamics of the state variable $s$. $\beta$ is the value of the nudging factor in the nudged phase. $\alpha_k$ is the learning rate for updating the parameters in layer $k$. \textbf{Top.} Experimental results of \citet{scellier2017equilibrium} with the caching trick. Test error rates and train error rates are reported on single trials. \textbf{Bottom.} Experimental results of \citet{ernoult2019updates} without the caching trick. Test error rates and train error rates are averaged over five trials. } \label{table:hopfield-results} \end{table}
Since these early experiments, thanks to new insights, new ideas and more perseverance, new results have been obtained that improve on these in terms of simulation speed, test accuracy, and complexity of the tasks solved. We present these more recent experimental results in Chapter \ref{chapter:discrete-time}. In addition, we stress that the real potential of EqProp is more likely to shine on neuromorphic substrates (Chapter \ref{chapter:neuromorphic}) than on digital computers.
\section{Contrastive Hebbian Learning (CHL)} \label{sec:contrastive-hebbian-learning}
In the setting of continuous Hopfield networks studied in this Chapter, EqProp is similar to the generalized recirculation algorithm (GeneRec) \citep{o1996biologically}. The main novelty of EqProp with respect to GeneRec is the formalism based on the concepts of nudging factor ($\beta$) and total energy function ($F$), which makes it possible to formulate a general framework for training energy-based models (Chapter \ref{chapter:eqprop}) and Lagrangian-based models (Chapter \ref{chapter:future}), applicable not just to the continuous Hopfield model, but also to many other network models, including nonlinear resistive networks (Chapter \ref{chapter:neuromorphic}) and convolutional networks (Chapter \ref{chapter:discrete-time}).
EqProp is also similar in spirit to the contrastive Hebbian learning algorithm (CHL), which we present in this section. The CHL algorithm was originally introduced in the case of the Boltzmann machine \citep{ackley1985learning} and then extended to the case of the continuous Hopfield network \citep{movellan1991contrastive,baldi1991contrastive}.
We note that Boltzmann machines may be trained with EqProp, via the stochastic version presented in Section \ref{sec:stochastic-setting}.
\subsection{Contrastive Hebbian Learning in the Continuous Hopfield Model}
Like EqProp, the CHL algorithm proceeds in two phases, the first of which is a free phase. But unlike EqProp, its second phase is a \textit{clamped phase} rather than a \textit{nudged phase}. Bringing this modification to the EqProp training procedure described in section \ref{sec:eqprop-hopfield}, we arrive at the following algorithm, proposed by \citet{movellan1991contrastive}.
\paragraph{Free phase.} As in EqProp, the first phase is a free phase (also called `negative phase'): inputs $x$ are clamped, and both the hidden and output neurons evolve freely, following the gradient of the energy function. The hidden and output neurons stabilize to an energy minimum called free state and denoted $\left( h_\star^-, o_\star^- \right)$. We write $s^- = \left( x, h_\star^-, o_\star^- \right)$. At the free state, every synapse undergoes an anti-Hebbian update. That is, for any synapse $W_{ij}$ (connecting neuron $i$ to neuron $j$), we perform the weight update $\Delta W_{ij} = - \eta \; \sigma \left( s_i^- \right) \sigma \left( s_j^- \right)$.
\paragraph{Clamped phase.} The second phase is a `clamped phase' (also called `positive phase'): not only inputs are clamped, but also outputs are now clamped to their target value $y$. The hidden neurons evolve freely and stabilize to another energy minimum $h_\star^+$. We write $s^+ = \left( x, h_\star^+, y \right)$ and call this configuration the \textit{clamped state}. At the clamped state, every synapse undergoes a Hebbian update. That is, for any synapse $W_{ij}$ (connecting neuron $i$ to neuron $j$), we perform the weight update $\Delta W_{ij} = + \eta \; \sigma \left( s_i^+ \right) \sigma \left( s_j^+ \right)$.
\paragraph{Global update.} Putting the weight updates of the free phase and clamped phase together, we get the global update of the CHL algorithm: \begin{equation}
\label{eq:global-chl-update}
\Delta W_{ij} = \eta \left( \sigma \left( s_i^+ \right) \sigma \left( s_j^+ \right) - \sigma \left( s_i^- \right) \sigma \left( s_j^- \right) \right). \end{equation}
\subsection{An Intuition Behind Contrastive Hebbian Learning}
Both CHL and EqProp have the desirable property that learning stops when the network correctly predicts the target. Specifically, in CHL, when the equilibrium state of the free phase (the free state) matches the equilibrium state of the clamped phase (the clamped state), the two terms of the weight update (Eq.~\ref{eq:global-chl-update}) cancel out, thus yielding an effective weight update of zero. In other words, if the network already provides the correct output, then no learning occurs.
It is instructive to verify that EqProp preserves this property, even in its general formulation (Chapter \ref{chapter:eqprop}). Suppose that the equilibrium state ($s_\star^0$) corresponding to an input $x$ provides the correct answer ($y$), i.e. suppose that $s_\star^0$ is a minimum of the function $s \mapsto C(s, y)$. This implies that $\frac{\partial C}{\partial s}(s_\star^0, y)=0$. Using the fact that $\frac{\partial E}{\partial s}(\theta, x, s_\star^0) = 0$ by definition of $s_\star^0$, we get $\frac{\partial E}{\partial s}(\theta, x, s_\star^0) + \beta \; \frac{\partial C}{\partial s}(s_\star^0, y)=0$ for any value of $\beta$. This implies that $s_\star^\beta = s_\star^0$ for any $\beta$, by definition of $s_\star^\beta$. As a consequence, the two terms in the learning rule of EqProp cancel out. We note that this property remains true in the case of the symmetric difference estimator (Eq.~\ref{eq:two-sided-estimator}).
\subsection{A Loss Function for Contrastive Hebbian Learning}
The global learning rule of the CHL algorithm can be rewritten in terms of the energy function (the Hopfield energy) as \begin{equation}
\label{eq:chl-learning-rule}
\Delta \theta = \eta \left( - \frac{\partial E}{\partial \theta} \left( \theta, x, h_\star^+, y \right) + \frac{\partial E}{\partial \theta} \left( \theta, x, h_\star^-, o_\star^- \right) \right). \end{equation} Here $\left( x, h_\star^+, y \right)$ is the clamped state, and $\left( x, h_\star^-, o_\star^- \right)$ is the free state. In this form, the CHL update rule stipulates to decrease the energy value of the clamped state and to increase the energy value of the free state. Since low-energy configurations correspond to preferred states of the model under the gradient dynamics, the CHL update rule thus increases the likelihood that the model produces the correct output ($y$), and decreases the likelihood that it generates the same output ($o_\star^-$) again.
\begin{prop}[\citet{movellan1991contrastive}] \label{prop:loss-chl} The CHL update rule (Eq.~\ref{eq:chl-learning-rule}) is equal to \begin{equation} \Delta \theta = - \eta \frac{\partial \mathcal{L}^{\rm CHL}}{\partial \theta}(\theta, x, y), \end{equation} where $\mathcal{L}^{\rm CHL}$ is the loss defined by \begin{equation}
\mathcal{L}^{\rm CHL}(\theta, x, y) = E \left( \theta, x, h_\star^+, y \right) - E \left( \theta, x, h_\star^-, o_\star^- \right). \end{equation} \end{prop}
The loss $\mathcal{L}^{\rm CHL}$ has the problem that the two phases of the CHL algorithm may stabilize in different modes of the energy function. \citet{movellan1991contrastive} points out that when this happens, the weight update is inconsistent and learning usually deteriorates. Similarly, \citet{baldi1991contrastive} note abrupt discontinuities due to basin hopping phenomena.
EqProp solves this problem by optimizing the loss $\mathcal{L} = C(s_\star, y)$, whose gradient can be estimated using nudged states ($s_\star^\beta$) that are infinitesimal continuous deformations of the free state ($s_\star$), and are thus in the same `mode' of the energy landscape. \chapter{Training Nonlinear Resistive Networks with Equilibrium Propagation} \label{chapter:neuromorphic}
In the previous chapter, we have discussed the bio-realism of EqProp in the setting of Hopfield networks. Learning in this context is achieved using solely leaky integrator neurons (in both phases of training) and a local (Hebbian) weight update. These bio-realistic features are of interest not only for neuroscience, but also for neuromorphic computing, towards the goal of building fully analog neural networks supporting on-chip learning. Recently, several works have proposed analog implementations of EqProp in the context of Hopfield networks \citep{zoppo2020equilibrium,foroushani2020analog,ji2020towards} and spiking variants \citep{o2019training,martin2020eqspike}.
Here we investigate a different approach to implement EqProp on neuromorphic chips. We emphasize that EqProp is not limited to the Hopfield model and the gradient systems of Chapter \ref{chapter:hopfield}, but more broadly applies to any system whose equilibrium state $s_\star$ is a solution of a variational equation $\frac{\partial E}{\partial s}(s_\star) = 0$, where $E(s)$ is a scalar function -- what we have called an \textit{energy-based model} (EBM) in Chapter \ref{chapter:eqprop}. Importantly, many physical systems can be described by variational principles, as a reformulation of the physical laws characterizing their state. This suggests a path to build highly efficient energy-based models grounded in physics, with EqProp as a learning algorithm for training.
In this chapter, we exploit the fact that a broad class of analog neural networks called \textit{nonlinear resistive networks} can be described by such a variational principle. Nonlinear resistive networks are electrical circuits consisting of nodes interconnected by (linear or nonlinear) resistive elements. These circuits can serve as analog neural networks, in which the weights to be adjusted are implemented by the conductances of programmable resistive devices such as memristors \citep{chua1971memristor}, and the nonlinear transfer functions (or `activation functions') are implemented by nonlinear components such as diodes. The `energy function' in these nonlinear resistive networks is a quantity called the \textit{co-content} \citep{millar1951cxvi} or \textit{total pseudo-power} \citep{johnson2010nonlinear} of the circuit, and its existence can be derived directly from Kirchhoff's laws. Moreover, this energy function has the sum-separability property: the total pseudo-power of the circuit is the sum of the pseudo-powers of its individual elements. As a consequence, we can train these analog networks with EqProp, and the update rule for each conductance, which follows the gradient of the loss, is local. Specifically, we show mathematically that the gradient with respect to a conductance can be estimated using solely the voltage drop across the corresponding resistor. This theoretical result provides a principled method to train end-to-end analog neural networks by stochastic gradient descent, thus suggesting a path towards the development of ultra-fast, compact and low-power learning-capable neural networks.
The present chapter, which is essentially a rewriting of \citet{kendall2020training}, is articulated as follows. \begin{itemize} \item In section \ref{sec:analog-neural-network}, we briefly present a class of analog neural networks called \textit{nonlinear resistive networks}, as well as the concept of \textit{programmable resistors} that play the role of synapses. \item In section \ref{sec:nonlinear-resistive-network-ebm}, we show that these nonlinear resistive networks are energy-based models: at inference, the configuration of node voltages chosen by the circuit corresponds to the minimum of a mathematical function (the \textit{energy function}) called the \textit{co-content} (or \textit{total pseudo-power}) of the circuit, as a consequence of Kirchhoff's laws (Lemma \ref{lma:power}). This suggests an implementation of energy-based neural networks grounded in electrical circuit theory, which also bridges the conceptual gap between energy functions (at a mathematical level\footnote{In an energy-based model, the \textit{energy function} is a mathematical abstraction of the model, not a physical energy.}), and physical energies\footnote{Specifically the power dissipated in resistive devices.} (at a hardware level). \item In section \ref{sec:nonlinear-resistive-network-eqprop}, we show how these nonlinear resistive networks can be trained with EqProp, and we derive the formula for updating the conductances (the synaptic weights) in proportion to their loss gradients, using solely the voltage drops across the corresponding resistive devices (Theorem \ref{lma:gradients}). \item In section \ref{sec:analog-network-model}, as a proof of concept of what is possible with this neuromorphic hardware methodology, we propose an analog network architecture inspired by the deep Hopfield network, which alternates linear and nonlinear processing stages (Fig.~\ref{fig:network}). 
\item In section \ref{sec:numerical-simulations}, we present numerical simulations on the MNIST dataset, using a SPICE-based framework to simulate the circuit's dynamics. \end{itemize}
By explicitly decoupling the training procedure (EqProp in Section \ref{sec:nonlinear-resistive-network-eqprop}) from the specific neural network architecture presented (Section \ref{sec:analog-network-model}), we stress that this optimization method is applicable to any resistive network architecture, not just the one of Section \ref{sec:analog-network-model}. This modular approach thus offers the possibility to explore the design space of analog network architectures trainable with EqProp, in essentially the same way as deep learning researchers explore the design space of differentiable neural networks trainable with backpropagation.
\section{Nonlinear Resistive Networks as Analog Neural Networks} \label{sec:analog-neural-network}
Nonlinear resistive networks are electrical circuits consisting of arbitrary two-terminal resistive elements -- see \citet[Chapter~3]{muthuswamy2018introduction} for an introduction. We can use such circuits to build neural networks. In the supervised learning scenario, we use a subset of the nodes of the circuit as input nodes, and another subset of the nodes as output nodes. We use voltage sources to impose the voltages at input nodes: after the circuit has settled to steady state, the voltages of output nodes indicate the `prediction'. The circuit thus implements an input-to-output mapping function, with the node voltages representing the state of the network. This mapping function can be nonlinear if we include nonlinear resistive elements such as diodes in the circuit, and the conductance values of resistors can be thought of as parameterizing this mapping function.
A \textit{programmable resistor} is a resistor whose conductance can be changed (or `programmed'), and which can thus play the role of a `weight' to be adjusted. Programmable resistors can therefore implement the synapses of a neural network. In the last decade, many technologies have emerged and been proposed and studied as programmable resistors. We refer to \citet{burr2017neuromorphic} and \citet{xia2019memristive} for reviews on existing technologies, their working mechanisms, and how they are used for neuromorphic computing. For convenience, in most of this chapter we will think of programmable resistors as ideally tunable, which is a convenient abstraction to formalize mathematically the goal of learning in nonlinear resistive networks. However, this is an idealized and unrealistic assumption: in practice, far from being ideally tunable, current programmable resistive devices present important challenges that the coming decade of research will need to overcome. We refer to \citet{chang2017mitigating} for an analysis of these challenges. In this manuscript, we will not discuss how the programming of a conductance can be implemented in hardware.
We note that nonlinear resistive networks have been studied as neural network models since the 1980s \citep{hutchinson1988computing,harris1989resistive}.
\section{Nonlinear Resistive Networks are Energy-Based Models} \label{sec:nonlinear-resistive-network-ebm}
In this section we show that, in a nonlinear resistive network, the steady state of the circuit imposed by Kirchhoff's laws is a stationary point of a function called the \textit{co-content}, or \textit{total pseudo-power} (Lemma \ref{lma:power}). Thus, nonlinear resistive networks are energy-based models whose energy function is the total pseudo-power. Furthermore, the total pseudo-power has the sum-separability property, being by definition the sum of the pseudo-powers of its individual components.
We first present in section \ref{sec:linear-resistance-network} the case of linear resistance networks. Although this model is not very useful as a neural network model, studying it helps to understand the working mechanisms of analog neural networks: it clarifies the limits of linear resistances and the need to introduce nonlinear elements (section \ref{sec:resistive-elements}). In section \ref{sec:nonlinear-resistive}, we derive the general result for nonlinear resistive networks.
\subsection{Linear Resistance Networks} \label{sec:linear-resistance-network}
A \textit{linear resistance network} is an electrical circuit whose nodes are linked pairwise by \textit{linear resistors}, i.e. resistors that satisfy Ohm's law. We recall that, in a linear resistor, Ohm's law states that $I_{ij} = g_{ij} (V_i-V_j)$, where $I_{ij}$ is the current through the resistor, $g_{ij}$ is its conductance ($g_{ij} = \frac{1}{R_{ij}}$ where $R_{ij}$ is the resistance), and $V_i$ and $V_j$ are its terminal voltages.
Consider the following question: we impose the voltages at a set of input nodes, and we want to know what the voltages are at the other nodes of the circuit. We can answer this question by writing Ohm's law in every branch, Kirchhoff's current law at every node, and by solving the resulting set of equations for all node voltages and all branch currents. But there is a more elegant way to characterize the steady state of the circuit. Kirchhoff's current law gives $\sum_j I_{ij} = 0$ for every node $i$. Combined with Ohm's law, we get $\sum_j g_{ij} (V_i-V_j) = 0$. Now note that the left-hand side of this expression is equal to $\frac{1}{2} \frac{\partial \mathcal{P}}{\partial V_i}$, where $\mathcal{P}(V_1, V_2, \ldots, V_N)$ is the functional defined by \begin{equation}
\mathcal{P}(V_1, V_2, \ldots, V_N) = \sum_{i<j} g_{ij} \left(V_j-V_i \right)^2.
\label{eq:power-linear-resistance-network} \end{equation} This means that, among all \textit{conceivable} configurations of node voltages, the configuration that is physically realized is a stationary point of the functional $\mathcal{P}(V_1, V_2, \ldots, V_N)$. Therefore, linear resistance networks are energy-based models, with the configuration of node voltages $V = (V_1, V_2, \ldots, V_N)$ playing the role of state variable, and the functional $\mathcal{P}(V_1, V_2, \ldots, V_N)$ playing the role of energy function.
The functional $\mathcal{P}(V_1, V_2, \ldots, V_N)$ is called the \textit{power functional}, because it represents the total power dissipated in the circuit, with $g_{ij} \left(V_j-V_i \right)^2$ being the power dissipated in the resistor connecting node $i$ to node $j$. Since $\mathcal{P}$ is convex, the steady state of the circuit is not just a stationary point of $\mathcal{P}$, but also its global minimum. This well-known result of circuit theory is called the \textit{principle of minimum dissipated power}: if we impose the voltages at a set of input nodes, the circuit will choose the voltages at the other nodes so as to minimize the total power dissipated in the resistors (Fig.~\ref{fig:minimum-power}).
However, linear resistance networks are not very useful as neural network models since they cannot implement nonlinear operations. Rewriting Kirchhoff's current law at node $i$, we get $V_i = \frac{\sum_j g_{ij} V_j}{\sum_j g_{ij}}$. This operation resembles the usual multiply-accumulate operation of artificial neurons in conventional deep learning, but with the notable difference that there is no nonlinear activation function. Another difference is the presence of the factor $G_i = \sum_j g_{ij}$ in the denominator, which replaces the usual weighted sum by a weighted mean: each floating node voltage $V_i$ is a weighted mean of the voltages of its neighbors.
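A small numerical check of this characterization, for a hypothetical four-node circuit with two input nodes: solving Kirchhoff's current law for the floating node voltages reproduces the weighted-mean property, and the resulting configuration indeed minimizes the power functional (conductance values are made up):

```python
import numpy as np

# symmetric conductance matrix of a 4-node circuit (g[i, j] = g_ij, g[i, i] = 0)
g = np.array([[0.0, 0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0, 1.0],
              [1.0, 3.0, 0.0, 2.0],
              [2.0, 1.0, 2.0, 0.0]])

V_in = np.array([0.0, 1.0])  # voltages imposed at input nodes 0 and 1
flo = [2, 3]                 # floating nodes

# Kirchhoff's current law at each floating node i: sum_j g_ij (V_i - V_j) = 0,
# a linear system in the floating voltages
G = np.diag(g[flo].sum(axis=1)) - g[np.ix_(flo, flo)]
rhs = g[np.ix_(flo, [0, 1])] @ V_in
V_flo = np.linalg.solve(G, rhs)
V = np.concatenate((V_in, V_flo))

# each floating voltage is the conductance-weighted mean of its neighbors
weighted_means = g[flo] @ V / g[flo].sum(axis=1)

def power(V):
    # power functional P = sum_{i<j} g_ij (V_j - V_i)^2
    n = len(V)
    return sum(g[i, j] * (V[j] - V[i]) ** 2
               for i in range(n) for j in range(i + 1, n))
```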
From this analysis, it appears that nonlinear elements such as diodes are necessary to perform nonlinear operations. In the rest of this section, we generalize the result of this subsection to the setting of nonlinear resistive networks.
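To make the weighted-mean form of Kirchhoff's current law concrete, here is a minimal numerical sketch (not part of the original derivation): a hypothetical four-node linear resistance network in which two input voltages are clamped and the floating node voltages are obtained by repeatedly applying the weighted-mean update until the steady state is reached.

```python
import numpy as np

# Hypothetical 4-node linear resistance network: nodes 0 and 1 are inputs
# (voltages clamped), nodes 2 and 3 are floating. g[i, j] is the
# conductance of the resistor between nodes i and j (0 if absent).
g = np.array([[0.0, 0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0, 1.0],
              [1.0, 3.0, 0.0, 1.0],
              [2.0, 1.0, 1.0, 0.0]])
V = np.array([1.0, -1.0, 0.0, 0.0])   # inputs set; floating nodes start at 0
floating = [2, 3]

# Iterate Kirchhoff's current law: each floating node voltage is the
# conductance-weighted mean of its neighbors' voltages.
for _ in range(200):
    for i in floating:
        V[i] = g[i] @ V / g[i].sum()

# Check KCL at each floating node: sum_j g_ij (V_i - V_j) = 0.
for i in floating:
    assert abs(g[i] @ (V[i] - V)) < 1e-9
```

The same steady state could be obtained by solving the linear system directly; the iteration above is just the weighted-mean reading of KCL made executable.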
\begin{figure*}\label{fig:minimum-power}
\end{figure*}
\subsection{Two-Terminal Resistive Elements} \label{sec:resistive-elements}
In this subsection, we follow the method of \citet{johnson2010nonlinear} to generalize the notion of `power dissipated in a linear resistor' to arbitrary two-terminal resistive elements.
\paragraph{Current-voltage characteristic.} Consider a two-terminal resistive element with terminals $i$ and $j$, characterised by a well-defined and continuous current-voltage characteristic $\gamma_{ij}$. The function $\gamma_{ij}$ takes as input the voltage drop $\Delta V_{ij} = V_i - V_j$ across the component and returns the current $I_{ij} = \gamma_{ij} \left( \Delta V_{ij} \right)$ moving from node $i$ to node $j$ in response to $\Delta V_{ij}$. Since the current flowing from $i$ to $j$ is the negative of the current flowing from $j$ to $i$, we have by definition: \begin{equation}
\label{eq:antisymmetry}
\forall i, j, \qquad \gamma_{ij} \left( \Delta V_{ij} \right) = - \gamma_{ji} \left( \Delta V_{ji} \right) \end{equation} where $\Delta V_{ji} = - \Delta V_{ij}$.
For example, the current-voltage characteristic of a linear resistor of conductance $g_{ij}$ linking node $i$ to node $j$ is, by Ohm's law, $I_{ij} = g_{ij} \Delta V_{ij}$. By definition of $\gamma_{ij}$, this implies that \begin{equation}
\label{eq:IVresistor}
\gamma_{ij} \left( \Delta V_{ij} \right) = g_{ij} \Delta V_{ij}. \end{equation}
\paragraph{Pseudo-power.} For each two-terminal element with current-voltage characteristic $I_{ij} = \gamma_{ij}(\Delta V_{ij})$, we define $p_{ij}(\Delta V_{ij})$ as the primitive function of $\gamma_{ij}(\Delta V_{ij})$ that vanishes at $0$, i.e. \begin{equation}
\label{eq:pseudo-power}
p_{ij}(\Delta V_{ij}) = \int_0^{\Delta V_{ij}} \gamma_{ij}(v) dv. \end{equation} The quantity $p_{ij} \left( \Delta V_{ij} \right)$ has the physical dimensions of power, being a product of a voltage and a current. We call $p_{ij} \left( \Delta V_{ij} \right)$ the \textit{pseudo-power} along the branch from $i$ to $j$, following the terminology of \citet{johnson2010nonlinear}. Note that as a consequence of Eq.~\ref{eq:antisymmetry} we have \begin{equation}
\label{eq:symmetry}
\forall i, j, \qquad p_{ij}(\Delta V_{ij}) = p_{ji}(\Delta V_{ji}), \end{equation} i.e. the pseudo-power from $i$ to $j$ is equal to the pseudo-power from $j$ to $i$. We call this property the \textit{pseudo-power symmetry}.
For example, in the case of a linear resistor of conductance $g_{ij}$ linking node $i$ to node $j$, the pseudo-power corresponding to the current-voltage characteristic of Eq.~\ref{eq:IVresistor} is: \begin{equation} \label{eq:pseudo-resistor} p_{ij}(\Delta V_{ij}) = \frac{1}{2} g_{ij} \Delta V_{ij}^2. \end{equation} In this case, the pseudo-power is half the physical power dissipated in the resistor.
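Since the pseudo-power is defined by an integral of the current-voltage characteristic, it can be computed numerically for any continuous characteristic. The sketch below assumes nothing beyond Eq.~\ref{eq:pseudo-power}; the diode characteristic is a hypothetical Shockley-type curve with made-up constants, used only to show that nonlinear elements also have a well-defined pseudo-power.

```python
import numpy as np

def pseudo_power(gamma, dv, n=10001):
    # Trapezoidal integration of the current-voltage characteristic:
    # p(dv) = integral of gamma(v) from 0 to dv.
    v = np.linspace(0.0, dv, n)
    y = gamma(v)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(v)) / 2)

# Linear resistor of conductance g: gamma(v) = g*v, so p(dv) = g*dv^2/2.
g = 0.5
assert abs(pseudo_power(lambda v: g * v, 2.0) - 0.5 * g * 2.0**2) < 1e-9

# A hypothetical diode-like characteristic (made-up saturation current
# and thermal voltage) also yields a well-defined, positive pseudo-power:
diode = lambda v: 1e-3 * (np.exp(v / 0.026) - 1.0)
p_diode = pseudo_power(diode, 0.3)
assert p_diode > 0
```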
\subsection{Nonlinear Resistive Networks} \label{sec:nonlinear-resistive}
A \textit{nonlinear resistive network} is a circuit consisting of interconnected two-terminal resistive elements. We number the nodes of the circuit $i=1, 2, \ldots, N$.
\paragraph{Configuration.} We call a vector of voltage values $V = \left( V_1, V_2, \ldots, V_N \right)$ a \textit{configuration}. Importantly, a configuration can be any vector of voltage values, even those that are not compatible with Kirchhoff's current law (KCL).
\paragraph{Total pseudo-power (also called co-content).} Recall the definition of the pseudo-power of a two-terminal element (Eq.~\ref{eq:pseudo-power}). We define the \textit{total pseudo-power} of a configuration $V = \left( V_1, V_2, \ldots, V_N \right)$ as the sum of pseudo-powers along all branches: \begin{equation}
\label{eq:total-pseudo-power}
\mathcal{P}(V_1, \cdots, V_N) = \sum_{i<j} p_{ij}(V_i - V_j). \end{equation} We note that the pseudo-power symmetry (Eq.~\ref{eq:symmetry}) guarantees that this definition does not depend on node ordering. In the case of a linear resistance network, the total pseudo-power of the circuit is half the power functional of Eq.~\ref{eq:power-linear-resistance-network}.
We stress that $\mathcal{P}$ is a mathematical function defined on any configuration $V_1, V_2, \ldots, V_N$, even those that are not compatible with KCL.
\paragraph{Steady state.} We denote $V_1^\star$, $V_2^\star$, $\ldots$, $V_N^\star$ the configuration of node voltages imposed by Kirchhoff's current law (KCL), and we call $V^\star = \left( V_1^\star, V_2^\star, \ldots, V_N^\star \right)$ the \textit{steady state} of the circuit. Specifically, for every (internal or output) floating node $i$, KCL implies $\sum_{j=1}^N I_{ij} = 0$, which rewrites \begin{equation}
\label{eq:KCL}
\sum_{j=1}^N \gamma_{ij} \left( V_i^\star-V_j^\star \right) = 0. \end{equation}
The following result, known since \citet{millar1951cxvi}, shows that the circuit is an energy-based model, whose energy function is the total pseudo-power.
\begin{lma} \label{lma:power} The steady state of the circuit, denoted $\left( V_1^\star, V_2^\star, \ldots, V_N^\star \right)$, is a stationary point\footnote{With further assumptions on the current-voltage characteristics $\gamma_{ij}$, \citet{christianson2007dirichlet}, as well as \citet{johnson2010nonlinear}, show that the function $\mathcal{P}$ is convex, so that the steady state is the global minimum of $\mathcal{P}$. However, in the context of EqProp, all one needs is the first order condition, i.e. the fact that the steady state is a stationary point of $\mathcal{P}$, not necessarily a minimum.} of the total pseudo-power: for every floating node $i$, we have \begin{equation}
\frac{\partial \mathcal{P}}{\partial V_i} \left( V_1^\star, V_2^\star, \ldots, V_N^\star \right) = 0. \end{equation} \end{lma}
\begin{proof}[Proof of Lemma \ref{lma:power}] We use the definition of the total pseudo-power (Eq.~\ref{eq:total-pseudo-power}), the pseudo-power symmetry (Eq.~\ref{eq:symmetry}), the definition of the pseudo-power (Eq.~\ref{eq:pseudo-power}) and the fact that the steady state of the circuit satisfies Kirchhoff's current law (Eq.~\ref{eq:KCL}). For every floating node $i$ we have: \begin{equation} \frac{\partial \mathcal{P}}{\partial V_i} \left( V_1^\star, V_2^\star, \ldots, V_N^\star \right) = \sum_j \frac{\partial p_{ij}}{\partial V_i}(V_i^\star-V_j^\star) = \sum_j \gamma_{ij}(V_i^\star-V_j^\star) = 0. \end{equation} \end{proof}
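The proof shows that the KCL residual at a floating node is exactly the gradient of the total pseudo-power with respect to that node's voltage. The sketch below illustrates this numerically on a tiny network whose branches all have a hypothetical monotone cubic characteristic $\gamma(v) = g v + a v^3$ (so $p(v) = g v^2/2 + a v^4/4$): descending the pseudo-power gradient over the floating voltage drives the KCL residual to zero.

```python
import numpy as np

# Three-node network: nodes 0 and 2 are clamped, node 1 is floating.
# Every branch has the (hypothetical) characteristic gamma(v) = g*v + a*v**3.
g, a = 1.0, 0.2
G = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # which branches exist
V = np.array([1.0, 0.5, -1.0])           # V[0], V[2] clamped; V[1] free
floating = [1]

def kcl_residual(V):
    # Entry i is sum_j gamma(V_i - V_j), which equals dP/dV_i, the
    # gradient of the total pseudo-power with respect to V_i.
    dv = V[:, None] - V[None, :]
    return np.sum(G * (g * dv + a * dv**3), axis=1)

# Gradient descent on the total pseudo-power over the floating voltage.
for _ in range(2000):
    V[floating] -= 0.1 * kcl_residual(V)[floating]

# At the stationary point, Kirchhoff's current law holds at node 1.
assert abs(kcl_residual(V)[1]) < 1e-10
```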
Equipped with this result, we can now derive a procedure to train nonlinear resistive networks with EqProp.
\section{Training Nonlinear Resistive Networks with Equilibrium Propagation} \label{sec:nonlinear-resistive-network-eqprop}
\subsection{Supervised Learning Setting}
In the supervised learning setting, a subset of the nodes of the circuit are \textit{input nodes}, at which input voltages (denoted $X$) are sourced. All other nodes -- the \textit{internal nodes} and \textit{output nodes} -- are left floating: after the voltages of input nodes have been set, the voltages of internal and output nodes settle to their steady state. The output nodes, denoted $\widehat{Y}$, represent the readout of the system, i.e. the model prediction. The architecture and the components of the circuit determine the $X \mapsto \widehat{Y}$ mapping function. Specifically, the conductances of the programmable resistors, denoted $\theta$, parameterize this mapping function. That is, $\widehat{Y}$ can be written as a function of $X$ and $\theta$ in the form $\widehat{Y}(\theta, X)$. Training such a circuit consists in adjusting the values of the conductances ($\theta$) so that the voltages of output nodes ($\widehat{Y}$) approach the target voltages ($Y$). Formally, we cast the goal of training as an optimization problem in which the loss to be optimized (corresponding to an input-target pair $(X, Y)$) is of the form: \begin{equation}
\label{eq:loss}
\mathcal{L}(\theta, X, Y) = C \left( \widehat{Y}(\theta, X), Y \right). \end{equation}
We have seen that nonlinear resistive networks are energy-based models (Lemma \ref{lma:power}) and that the energy function (the total pseudo-power) is sum-separable, by definition (Eq.~\ref{eq:total-pseudo-power}). This enables us to use EqProp in such analog neural networks to compute the gradient of the loss. Theorem \ref{lma:gradients} below provides a formula for computing the loss gradient with respect to a conductance using solely the voltage drop across the corresponding resistor.
\subsection{Training Procedure} \label{sec:resistive-networks-algo}
Given an input $X$ and associated target $Y$, EqProp proceeds in the following two phases.
\paragraph{Free phase.} At inference, input voltages are sourced at input nodes ($X$), while all other nodes of the circuit (the internal nodes and output nodes) are left floating. All internal and output node voltages are stored\footnote{On practical neuromorphic hardware, this can be achieved using a capacitor or sample-and-hold amplifier (SHA) circuit, for instance. We note that we only need one SHA per node (neuron), not per synapse. We will not discuss these aspects of implementation here.
}. In particular, the voltages of the output nodes ($\widehat{Y}$), which constitute the prediction, are compared with the target ($Y$) to compute the loss $\mathcal{L} = C(\widehat{Y}, Y)$.
\paragraph{Nudged phase.} For each output node $\widehat{Y}_k$, a current $I_k = - \beta \frac{\partial C}{\partial \widehat{Y}_k}$ is sourced at $\widehat{Y}_k$, where $\beta$ is a positive or negative scaling factor (the \textit{nudging factor}). All internal node voltages and output node voltages are measured anew.
\begin{thm}[\citet{kendall2020training}] \label{lma:gradients} Consider a two-terminal component whose terminals are $i$ and $j$. Denote $\Delta V_{ij}^0$ the voltage drop across this two-terminal component in the free phase (when no current is sourced at output nodes), and $\Delta V_{ij}^\beta$ the voltage drop in the nudged phase (when a current $I_k = - \beta \frac{\partial C}{\partial \widehat{Y}_k}$ is sourced at each output node $\widehat{Y}_k$). Let $w_{ij}$ denote an adjustable parameter of this component, and $p_{ij}$ its pseudo-power (which depends on $w_{ij}$). Then, the gradient of the loss ${\mathcal L} = C \left( \widehat{Y}, Y \right)$ with respect to $w_{ij}$ can be estimated as \begin{equation} \label{eq:device-gradient} \frac{\partial {\mathcal L}}{\partial w_{ij}} = \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial p_{ij} \left( \Delta V^\beta_{ij} \right)}{\partial w_{ij}} - \frac{\partial p_{ij} \left( \Delta V^0_{ij} \right)}{\partial w_{ij}} \right). \end{equation} In particular, if the component is a linear resistor of conductance $g_{ij}$, then the loss gradient with respect to $g_{ij}$ can be estimated as \begin{equation} \label{eq:conductance-gradient} \frac{\partial {\mathcal L}}{\partial g_{ij}} = \lim_{\beta \to 0} \frac{1}{2 \beta} \left( \left( \Delta V^\beta_{ij} \right)^2 - \left( \Delta V^0_{ij} \right)^2 \right). \end{equation} \end{thm}
\begin{proof} For simplicity, we have stated Theorem \ref{lma:gradients} in the case where the cost function $C(\widehat{Y}, Y)$ depends only on output node voltages ($\widehat{Y}$). But this result can be directly generalized to the case of a cost function $C(V, Y)$ that depends on any node voltages ($V$), not just output node voltages. In this case, in the nudged phase of EqProp, currents $I_k = - \beta \frac{\partial C}{\partial V_k}$ must be sourced at every node $V_k$ (not just at output nodes).
Let $\theta$ denote the vector of adjustable parameters (e.g. the conductances), $X$ the voltages of input nodes, and $V$ the voltages of floating nodes (which includes the internal nodes and output nodes). Further let $\mathcal{P}(\theta, X, V)$ denote the total pseudo-power of the circuit in the free phase. By Lemma \ref{lma:power}, the steady state $V_\star$ of the free phase is such that $\frac{\partial \mathcal{P}}{\partial V}(\theta, X, V_\star) = 0$. In the nudged phase, when a current $I_k = -\beta \; \frac{\partial C}{\partial V_k}(V_\star, Y)$ is sourced at every floating node $V_k$, Kirchhoff's current law at the steady state $V_\star^\beta$ implies that $\frac{\partial \mathcal{P}}{\partial V}(\theta, X, V_\star^\beta) + \beta \; \frac{\partial C}{\partial V}(V_\star, Y) = 0$. Furthermore, the total pseudo-power (Eq.~\ref{eq:total-pseudo-power}) has the sum-separability property: an adjustable parameter $w_{ij}$ of a component whose terminals are $i$ and $j$ contributes to $\mathcal{P}(\theta, X, V)$ only through the pseudo-power $p_{ij}(V_i - V_j)$ of that component. Therefore, Eq.~\ref{eq:device-gradient} follows from the main Theorem \ref{thm:static-eqprop}.
In the case of a linear resistor, the adjustable parameter is $w_{ij} = g_{ij}$ and the pseudo-power is given by Eq.~\ref{eq:pseudo-resistor}. Thus, Eq.~\ref{eq:conductance-gradient} follows from Eq.~\ref{eq:device-gradient} and the fact that $\frac{\partial p_{ij} \left( \Delta V_{ij} \right)}{\partial g_{ij}} = \frac{1}{2} \left( \Delta V_{ij} \right)^2$. \end{proof}
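Theorem \ref{lma:gradients} can be checked on the smallest possible circuit: a voltage divider with two programmable resistors between an input node, a single floating output node, and ground. The sketch below is a toy verification under ideal assumptions (the steady state is computed in closed form rather than simulated), not the training procedure of \citet{kendall2020training}; it compares the two-phase conductance-gradient estimate of Eq.~\ref{eq:conductance-gradient} against the analytic loss gradient.

```python
# Voltage divider: input node at X, one floating output node V1, ground at
# 0 V. g1 connects input to output, g2 connects output to ground.
X, Y = 1.0, 0.2          # input voltage and target output voltage
g1, g2 = 1.0, 3.0
beta = 1e-4              # nudging factor

def steady_state(I_out):
    # KCL at the floating node: g1*(V1 - X) + g2*V1 = I_out.
    return (g1 * X + I_out) / (g1 + g2)

# Free phase: no current sourced at the output node.
V1_free = steady_state(0.0)
# Nudged phase: source I = -beta * dC/dY_hat, with C = (Y_hat - Y)^2 / 2.
V1_nudge = steady_state(-beta * (V1_free - Y))

# EqProp estimate of dL/dg1 from the two voltage drops across g1.
dv_free, dv_nudge = X - V1_free, X - V1_nudge
grad_eqprop = (dv_nudge**2 - dv_free**2) / (2 * beta)

# Analytic gradient: Y_hat = g1*X/(g1+g2), L = (Y_hat - Y)^2 / 2.
Y_hat = g1 * X / (g1 + g2)
grad_exact = (Y_hat - Y) * X * g2 / (g1 + g2)**2

assert abs(grad_eqprop - grad_exact) < 1e-4
```

With these numbers the free-phase output $\widehat{Y} = 0.25$ sits above the target $0.2$, so increasing $g_1$ increases the loss and both gradient values come out positive.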
As explained in the general setting (Section \ref{sec:equilibrium-propagation}), it is possible to reduce the bias and the variance of the gradient estimator by performing two nudged phases: one with a positive nudging ($+\beta$) and one with a negative nudging ($-\beta$).
Although the framework we have presented here is deterministic, we note that analog circuits in practice are affected by noise. In Section \ref{sec:stochastic-setting} we present a stochastic version of EqProp which can model such forms of noise and incorporate thermodynamic effects.
\subsection{On the Loss Gradient Estimates}
\paragraph{Computing the sign of the gradients.} Theorem \ref{lma:gradients} provides a formula for computing the gradient of a given device, assuming that the pseudo-power gradient ($\frac{\partial p_{ij}}{\partial w_{ij}}$) of this device is known, and that its terminal voltages can be measured, stored\footnote{The node voltages must be measured and stored at the end of the first phase, since they are no longer physically available after the second phase, at the moment of the weight update. We can achieve this with a sample and hold amplifier circuit.} and retrieved with arbitrary precision. In practice however, these conditions are too stringent.
A piece of good news is that, empirically, training neural networks by stochastic gradient descent (SGD) works well even when, for each weight, only the sign of the weight gradient is known. At each step of such a training procedure, the weight update for $\theta_k$ takes the form $\Delta \theta_k = - \eta \; \text{sign} \left( \frac{\partial {\mathcal L}}{\partial \theta_k} \right)$. The effectiveness of this optimization method has been shown empirically in the context of differentiable neural networks trained with backpropagation \citep{bernstein2018signsgd}.
In the context of nonlinear resistive networks trained with EqProp, if we aim to get the correct sign (rather than its exact value) of the gradient for a given resistor, precise knowledge of the voltage values at the terminals is not necessary. As a corollary of Theorem \ref{lma:gradients}, the sign of the gradient can be obtained by comparing $\left| \Delta V_{ij}^0 \right|$ and $\left| \Delta V_{ij}^\beta \right|$, i.e. the absolute values of the voltages across the resistor in the free phase and the nudged phase\footnote{Equivalently, we can compare $\left| I_{ij}^0 \right|$ and $\left| I_{ij}^\beta \right|$, i.e. the currents through the resistor in the free phase and the nudged phase.}. This means that, if we aim to compute the sign of the gradient, we only need to perform a `compare' operation reliably.
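The resulting update rule for a single resistor can be sketched in a few lines; the measurement values below are hypothetical, and a positive nudging factor $\beta$ is assumed so that a larger absolute voltage drop in the nudged phase signals a positive gradient.

```python
def sign_sgd_update(dv_free, dv_nudged, eta):
    """Sign-SGD style conductance update from a single comparison of the
    absolute voltage drops measured in the two phases (beta > 0 assumed)."""
    grad_sign = 1.0 if abs(dv_nudged) > abs(dv_free) else -1.0
    return -eta * grad_sign

# Hypothetical measurements: the drop grew in magnitude during nudging,
# so the loss gradient is positive and the conductance is decreased.
delta_g = sign_sgd_update(dv_free=0.75, dv_nudged=0.7512, eta=1e-3)
assert delta_g == -1e-3
```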
\paragraph{Robustness to characteristics variability.} A large body of work aims at implementing the backpropagation algorithm in analog \citep{burr2017neuromorphic,xia2019memristive}. However, the weight gradients computed by backpropagation are sensitive to the characteristics variability of analog devices. This is because the mathematical derivation of the backpropagation algorithm relies on a \textit{global coordination of elementary operations}: if any of the elementary operations of the algorithm is inaccurate, then the computed gradients are inaccurate (i.e. biased).
Although there is no experimental evidence for this fact yet, there are reasons to believe that the gradient estimates of EqProp are more robust to device mismatches than the gradients of Backprop. The reason is that, in EqProp, the same circuit is used in both phases of training. Intuitively, any device mismatch will affect the steady states of both phases (free phase and nudged phase), and since the gradient estimate depends on the difference between the measurements of the two phases, the effects of the mismatch will cancel out. More precisely, Theorem \ref{lma:gradients} tells us that the quality of the gradient estimate for a given device does not depend on the characteristics of other devices in the circuit.
We note that this argument holds only for the computation/estimation of the weight gradients. In EqProp, as in Backprop, the challenge of the weight update asymmetry of programmable resistors remains.
\section{Example of a Deep Analog Neural Network Architecture} \label{sec:analog-network-model}
The theory of Section \ref{sec:nonlinear-resistive-network-eqprop} applies to any nonlinear resistive network. In this section, as an example of what is possible with this general method, we present the neural network architecture proposed by \citet{kendall2020training}, inspired by the deep Hopfield network model (Figure~\ref{fig:network}). It is composed of multiple layers, alternating linear and non-linear processing stages. The linear transformations are performed by crossbar arrays of programmable resistors, that play the role of weight matrices that parameterize the transformations. The nonlinear transfer function is implemented using a pair of diodes, followed by a linear amplifier. These crossbar arrays of programmable resistors and these nonlinear transfer functions are alternated to form a deep network.
\begin{figure*}\label{fig:network}
\end{figure*}
\subsection{Antiparallel Diodes} \label{sec:diodes}
We propose to implement the neuron nonlinearities (or `activation functions') as shunt conductances. To do this, we place two diodes antiparallel between the neuron's node and ground. Each diode is placed in series with a voltage source, used to shift the bounds of the activation function. The diodes ensure that the neuron's voltage remains bounded even as its input current grows large, because for any additional input current, one of the diodes turns on and sinks the extra current to ground.
\subsection{Bidirectional Amplifiers} \label{sec:amplifiers}
In a circuit composed only of resistors and diodes, voltages decay through the resistive layers. This \textit{vanishing signals effect} can be explained by the fact that currents always flow from high electric potential to low electric potential. Thus, extremal voltage values are necessarily reached at input nodes, whose voltages are set.
To counter signal decay, one option is to use voltage-controlled voltage sources (VCVS) to amplify the voltages of hidden neurons in the forward direction. Current-controlled current sources (CCCS) can also be used to amplify currents in the backward direction, to better propagate error signals in the nudged phase. We call such a combination of a forward-directed VCVS and a backward-directed CCCS a `bidirectional amplifier'.
\subsection{Positive Weights} \label{sec:positive-weights}
Unlike conventional neural networks trained in software whose weights are free to take either positive or negative values, one constraint of analog neural networks is that the conductances of programmable resistors (which represent the weights) are positive. Several approaches are proposed in the literature to overcome this structural constraint. One approach consists in decomposing each weight as the difference of two (positive) conductances \citep{wang2019reinforcement}. Another approach is to shift the mean of the weight matrix by a constant factor \citep{hu2016dot}.
A third approach, proposed here, consists in doubling the number of input nodes and duplicating the input values, inverting one set. We also double the number of output nodes so that, in a classification task with $K$ classes, the network has two output nodes for each class $k$, denoted $\widehat{Y}_k^+$ and $\widehat{Y}_k^-$, with $\widehat{Y}_k^+ - \widehat{Y}_k^-$ representing a score assigned to class $k$. The prediction of the model is then \begin{equation}
\widehat{Y}_{\rm pred} = \underset{1 \leq k \leq K}{\arg \max} \left( \widehat{Y}_k^+ - \widehat{Y}_k^- \right). \end{equation} We optimize the loss associated with the squared error cost function, i.e. $\mathcal{L} = C(V_\star, Y)$, where the target vector $Y = (Y_1, Y_2, \ldots, Y_K)$ is the one-hot code of the class label, and \begin{equation} \label{eq:loss3} C(\widehat{Y}, Y) = \frac{1}{2} \sum_{k=1}^K \left( \widehat{Y}_k^+-\widehat{Y}_k^--Y_k \right)^2. \end{equation}
\subsection{Current Sources} \label{sec:current-sources}
The nudged phase requires injecting currents $I_k^+$ and $I_k^-$ at the output nodes $\widehat{Y}_k^+$ and $\widehat{Y}_k^-$. These currents must be proportional to the gradients of the cost function with respect to the output node voltages $\widehat{Y}_k^+$ and $\widehat{Y}_k^-$, i.e. \begin{equation}
I_k^+ = - \beta \frac{\partial C}{\partial \widehat{Y}_k^+} = \beta \left( Y_k+\widehat{Y}_k^--\widehat{Y}_k^+ \right), \qquad I_k^- = - \beta \frac{\partial C}{\partial \widehat{Y}_k^-} = \beta \left( \widehat{Y}_k^+-\widehat{Y}_k^- - Y_k \right), \end{equation} where the nudging factor $\beta$ has the physical dimensions of a conductance. We can inject these currents in the nudged phase using current sources. In the free phase, these current sources are set to zero current and do not influence the voltages of output nodes, acting like open circuits.
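The prediction and nudging currents for the doubled-output readout can be sketched as follows; the measured output voltages are hypothetical placeholder values, used only to show the formulas in action.

```python
import numpy as np

K = 3
Y = np.array([0.0, 1.0, 0.0])          # one-hot target (class 1)
Y_plus = np.array([0.3, 0.6, 0.1])     # hypothetical measured voltages
Y_minus = np.array([0.1, 0.2, 0.2])
beta = 1e-3                            # nudging factor (a conductance)

# Class scores and model prediction:
score = Y_plus - Y_minus
pred = int(np.argmax(score))           # here: class 1

# Currents sourced at the output nodes during the nudged phase:
I_plus = beta * (Y + Y_minus - Y_plus)
I_minus = beta * (Y_plus - Y_minus - Y)
assert np.allclose(I_plus, -I_minus)   # equal and opposite nudging currents
```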
\section{Numerical Simulations on MNIST} \label{sec:numerical-simulations}
\citet{kendall2020training} present simulations on the MNIST digits classification task, performed using the high-performance SPICE-class parallel circuit simulator \textit{Spectre} \citep{2020spectre}. SPICE (simulation program with integrated circuit emphasis) is a framework for realistic simulations of circuit dynamics \citep{vogt2020ngspice}. Specifically, SPICE is used in the simulations to perform the free phase and the nudged phase of the EqProp training process. The other operations are performed in Python: this includes weight initialization (before training starts), calculating loss and gradient currents (between the free phase and the nudged phase), weight gradient calculation (at the end of the nudged phase) and performing the weight updates (resistances are updated in software). We refer to \citet{kendall2020training} for full details of the implementation and simulation results.
Simulations are performed on a small network with a single hidden layer of $100$ neurons. Training is stopped after 10 epochs, when the SPICE network achieves a test error rate of $3.43\%$. For comparison, \citet{lecun1998gradient} report results with different kinds of linear classifiers and logistic regression models (corresponding to different pre-processing methods), all performing at $> 7\%$ test error, which is significantly worse than the SPICE network. This demonstrates that the SPICE network benefits from the non-linearities offered by the diodes.

\chapter{Training Discrete-Time Neural Network Models with Equilibrium Propagation} \label{chapter:discrete-time}
In the previous chapters, we have presented EqProp in its general formulation (Chapter \ref{chapter:eqprop}), and we have applied it to gradient systems (such as the continuous Hopfield model, Chapter \ref{chapter:hopfield}), and to physical systems that can be described by a variational principle (such as nonlinear resistive networks, Chapter \ref{chapter:neuromorphic}). Although EqProp is a potentially promising tool for training neuromorphic hardware, such hardware remains to be developed. Whereas the previous two chapters have mostly focused on neuroscience and neuromorphic considerations, it is also essential to demonstrate the potential of EqProp on practical tasks. In this chapter, we focus on the scalability of EqProp in software, to demonstrate its usefulness as a learning strategy.
When simulated on digital computers, the models presented in the previous chapters are very slow and require long inference times to converge to equilibrium. More importantly, these models have thus far not been proved to scale to tasks harder than MNIST. In this chapter, we present a class of models trainable with EqProp, specifically aimed at accelerating simulations in software, and at scaling EqProp training to larger models and more challenging tasks. As a consequence of this change of perspective, some of the techniques introduced in this chapter can be viewed as a step backward from biorealism and neuromorphic considerations (e.g. the use of shared weights in the convolutional network models). However, the introduction of such techniques allows us to broaden the scope of EqProp and to benchmark it against more advanced models of deep learning.
The present chapter, which is essentially a compilation and a rewriting of \citet{ernoult2019updates} and \citet{laborieux2020scaling}, is organized as follows. \begin{itemize}
\item In Section \ref{sec:discrete-time-EqProp}, we present a discrete-time formulation of EqProp, which allows training neural network models closer to those used in conventional deep learning.
\item In Section \ref{sec:discrete-time-models}, we present discrete-time neural network models trainable with EqProp, including a fully-connected model (close in spirit to the Hopfield model) and a convolutional one. In contrast with previous chapters where we have only considered the squared error as a cost function, we present here a method to optimize the cross-entropy loss commonly used for classification tasks.
\item In Section \ref{sec:discrete-time-experiments}, we present the experimental results of \citet{ernoult2019updates} and \citet{laborieux2020scaling}. Compared to the experiments of Chapter \ref{chapter:hopfield}, discrete-time models reduce the computational cost of inference and make it possible to scale EqProp to deeper architectures and more challenging tasks. In particular, a ConvNet model trained with EqProp achieves an $11.68\%$ test error rate on CIFAR-10. Furthermore, these experiments highlight the importance of reducing the bias and variance of the loss gradient estimators on complex tasks. We also discuss some challenges to overcome in order to unlock the scaling of EqProp to larger models and harder tasks, as well as some promising avenues towards this goal.
\item In Section \ref{sec:discrete-time-transient-dynamics}, we present a theoretical result linking the transient states in the second phase of EqProp to the partial derivatives of the loss to optimize (Theorem \ref{thm:gdd} and Fig.~\ref{fig:gdd}). This property, which we call the \textit{gradient descending dynamics} (GDD) property, is useful in practice as it prescribes a criterion to decide when the dynamics of the first phase of training has converged to equilibrium. \end{itemize}
\section{Discrete-Time Dynamical Systems with Static Input} \label{sec:discrete-time-EqProp}
In this section, we apply EqProp to a class of discrete-time dynamical systems, as proposed by \citet{ernoult2019updates}.
\subsection{Primitive Function}
Consider an energy function of the form \begin{equation}
\label{eq:primitive-function}
E(\theta, x, s) = \frac{1}{2}\| s \|^2 - \Phi(\theta, x, s), \end{equation} where $\Phi$ is a scalar function that we will choose later. With this choice of energy function, the equilibrium condition $\frac{\partial E}{\partial s} \left( \theta, x, s_\star \right) = 0$ of Eq.~\ref{eq:free-equilibrium-state} rewrites as a fixed point condition: \begin{equation} \label{eq:free-fixed-point} s_\star = \frac{\partial \Phi}{\partial s} \left( \theta, x, s_\star \right). \end{equation} Assuming that the function $s \mapsto \frac{\partial \Phi}{\partial s} \left( \theta, x, s \right)$ is contracting, by the contraction mapping theorem, the sequence of states $s_1$, $s_2$, $s_3$, $\ldots$ defined by \begin{equation} \label{eq:free-phase-discrete-time} s_{t+1} = \frac{\partial \Phi}{\partial s} \left( \theta, x, s_t \right) \end{equation} converges to $s_\star$. This dynamical system can be viewed as a recurrent neural network (RNN) with static input $x$ (meaning that the same input $x$ is fed to the RNN at each time step) and transition function $F = \frac{\partial \Phi}{\partial s}$. Because $\Phi$ is a primitive function of the transition function $F$, we call $\Phi$ the \textit{primitive function} of the system. In light of Eq.~\ref{eq:free-fixed-point}, in this chapter we will call $s_\star$ a \textit{fixed point} (rather than an equilibrium state).
The question of necessary and sufficient conditions on $\Phi$ for the dynamics of Eq.~\ref{eq:free-phase-discrete-time} to converge to a fixed point is out of the scope of the present manuscript. We refer to \citet{scarselli2009graph} where conditions on the transition function are discussed.
\subsection{Training Discrete-Time Dynamical Systems with Equilibrium Propagation}
Recall that we want to optimize a loss of the form \begin{equation} \mathcal{L} = C \left( s_\star, y \right), \end{equation} where $C(s, y)$ is a scalar function called \textit{cost function}, defined for any state $s$. In the discrete-time setting, EqProp takes the following form.
\paragraph{Free Phase.} In the free phase, the dynamics of Eq.~\ref{eq:free-phase-discrete-time} is run for $T$ time steps, until the sequence of states $s_1, s_2, s_3, \ldots, s_T$ has converged. At the end of the free phase, the network is at the \textit{free fixed point} $s_\star$ characterized by Eq.~\ref{eq:free-fixed-point}, i.e. $s_T = s_\star$.
\paragraph{Nudged Phase.} In the nudged phase, starting from the free fixed point $s_\star$, an additional term $- \beta \; \frac{\partial C}{\partial s}$ is introduced in the dynamics of the neurons, where $\beta$ is a positive or negative scalar, called \textit{nudging factor}. This term acts as an external force nudging the system dynamics towards decreasing the cost function $C$. Denoting $s_0^\beta, s_1^\beta, s_2^\beta, \ldots$ the sequence of states in the second phase (which depends on the value of $\beta$), we have \begin{equation}
s_0^\beta = s_\star \qquad \text{and} \qquad \forall t \geq 0, \quad s_{t+1}^\beta = \frac{\partial \Phi}{\partial s} \left( \theta, x, s_t^\beta \right) - \beta \; \frac{\partial C}{\partial s} \left( s_t^\beta, y \right).
\label{eq:nudged-phase-discrete-time} \end{equation} The network eventually settles to a new fixed point $s_\star^\beta$, called \textit{nudged fixed point}.
\paragraph{Update Rule.} In this context, the formula for estimating the loss gradients using the two fixed points $s_\star$ and $s_\star^\beta$ takes the form \begin{equation} \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_\star^\beta \right) - \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_\star \right) \right) = -\frac{\partial \mathcal{L}}{\partial \theta}. \label{eq:eqprop-phi-grad} \end{equation} Furthermore, if the primitive function $\Phi$ has the sum-separability property, i.e. if it is of the form $\Phi(\theta, x, s) = \Phi_0(x, s) + \sum_{k=1}^N \Phi_k(\theta_k, x, s)$ where $\theta=(\theta_1, \theta_2, \ldots, \theta_N)$, then \begin{equation} \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial \Phi_k}{\partial \theta_k} \left( \theta_k, x, s_\star^\beta \right) - \frac{\partial \Phi_k}{\partial \theta_k} \left( \theta_k, x, s_\star \right) \right) = -\frac{\partial \mathcal{L}}{\partial \theta_k}. \label{eq:eqprop-phi-grad-local} \end{equation} Eq.~\ref{eq:eqprop-phi-grad} follows directly from Theorem \ref{thm:static-eqprop} and the definition of $\Phi$ in terms of $E$ (Eq.~\ref{eq:primitive-function}). Eq.~\ref{eq:eqprop-phi-grad-local} follows from Eq.~\ref{eq:eqprop-phi-grad} and the definition of sum-separability.
In the discrete-time setting, as in the other settings, we can reduce the bias and the variance of the gradient estimate by using a symmetrized gradient estimator (see Eq.~\ref{eq:two-sided-estimator}). This requires two nudged phases: one with a positive nudging ($+\beta$) and one with a negative nudging ($-\beta$).
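The two phases and the resulting gradient estimate can be illustrated on a toy scalar model (a minimal sketch; the choice of primitive function, cost function, and constants is ours, not one of the models studied in this chapter):

```python
# Toy scalar model (illustrative choice):
#   Phi(theta, x, s) = s^2/2 - eps * (s - theta*x)^2 / 2,
#   C(s, y) = (s - y)^2 / 2.
eps, theta, x, y = 0.5, 0.8, 1.5, 2.0

def dPhi_ds(s):               # transition function F = dPhi/ds
    return s - eps * (s - theta * x)

def dPhi_dtheta(s):
    return eps * x * (s - theta * x)

def dC_ds(s):
    return s - y

# Free phase: iterate s_{t+1} = dPhi/ds(s_t) to the free fixed point.
s = 0.0
for _ in range(200):
    s = dPhi_ds(s)
s_free = s                    # converges to theta * x = 1.2

# Nudged phase, starting from the free fixed point.
beta = 1e-3
s = s_free
for _ in range(500):
    s = dPhi_ds(s) - beta * dC_ds(s)
s_nudged = s

# One-sided estimate of -dL/dtheta.
est = (dPhi_dtheta(s_nudged) - dPhi_dtheta(s_free)) / beta

# Exact gradient: L(theta) = (theta*x - y)^2 / 2, so dL/dtheta = (theta*x - y) * x.
exact = (theta * x - y) * x   # = -1.2
```

For this model the estimate matches the exact gradient $-\frac{\partial \mathcal{L}}{\partial \theta}$ up to a bias of order $\beta$, in agreement with Eq.~\ref{eq:eqprop-phi-grad}.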
\subsection{Recovering Gradient Systems}
We note that if we choose a primitive function $\Phi$ of the form $\Phi(\theta, x, s) = \frac{1}{2} \| s \|^2 - \epsilon \; \widetilde{E}(\theta, x, s)$, where $\epsilon$ is a positive hyperparameter and $\widetilde{E}$ is a scalar function, then the dynamics of Eq.~\ref{eq:free-phase-discrete-time} rewrites as $s_{t+1} = s_t - \epsilon \frac{\partial \widetilde{E}}{\partial s} \left(\theta, x, s_t\right)$. This is the Euler scheme with discretization step $\epsilon$ of the gradient dynamics $\frac{d}{dt} s_t = - \frac{\partial \widetilde{E}}{\partial s} \left( \theta, x, s_t \right)$, which was used in the simulations of Chapter \ref{chapter:hopfield}. In this sense, the setting of gradient systems of Chapter \ref{chapter:hopfield} can be seen as a particular case of the discrete-time formulation presented in this chapter.
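A quick numerical sanity check of this correspondence (with an arbitrary illustrative choice of $\widetilde{E}$ and step size) confirms that iterating $s_{t+1} = \frac{\partial \Phi}{\partial s}(s_t)$ performs gradient descent on $\widetilde{E}$:

```python
# Sanity check: with Phi(theta, x, s) = s^2/2 - eps * Etilde(s), the update
# s_{t+1} = dPhi/ds(s_t) is exactly the Euler step s_t - eps * dEtilde/ds(s_t),
# and therefore decreases Etilde for a small enough step size eps.
eps = 0.1

def Etilde(s):           # an arbitrary smooth energy (illustrative choice)
    return s ** 4 / 4

def dPhi_ds(s):          # dPhi/ds = s - eps * dEtilde/ds = s - eps * s^3
    return s - eps * s ** 3

s = 1.5
energies = [Etilde(s)]
for _ in range(50):
    s = dPhi_ds(s)       # identical to the Euler step on Etilde
    energies.append(Etilde(s))

assert all(b < a for a, b in zip(energies, energies[1:]))  # Etilde decreases
```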
\section{RNN Models with Static Input} \label{sec:discrete-time-models}
The algorithm presented in the previous section is generic and holds for an arbitrary primitive function $\Phi$ and cost function $C$. In this section, we present two models corresponding to different choices of primitive function. The first model is a vanilla RNN with static input and symmetric weights (a variant of the Hopfield model). The second model is a convolutional RNN model. We also propose different choices of cost function: whereas in Chapter \ref{chapter:hopfield} and Chapter \ref{chapter:neuromorphic} we have only considered the squared error between outputs and targets, here we also present an implementation of the cross-entropy cost function.
\subsection{Fully Connected Layers}
To implement a fully connected layer, we consider the following primitive function: \begin{equation}
\Phi_k^{\rm fc}(w_k, h^{k-1}, h^k) = (h^k)^\top \cdot w_k \cdot h^{k-1}. \end{equation} In this expression, $h^{k-1}$ and $h^k$ are two consecutive layers of neurons, and $w_k$ is a weight matrix of size $\dim(h^k) \times \dim(h^{k-1})$ connecting $h^{k-1}$ to $h^k$. We note that $\Phi_k^{\rm fc}$ is closely related to the Hopfield energy of Eq.~\ref{eq:hopfield-energy}.
By stacking several of these fully connected layers, we can form a network of multiple layers of the kind considered in the previous chapters (e.g. as depicted in Fig~\ref{fig:network_undirected}). The corresponding primitive function is obtained by summing together the primitive functions of individual pairs of layers. For example, consider $\Phi = \cdots + \Phi_k^{\rm fc} + \Phi_{k+1}^{\rm fc} + \cdots$, i.e. \begin{equation}
\Phi = \cdots + (h^k)^\top \cdot w_k \cdot h^{k-1} + (h^{k+1})^\top \cdot w_{k+1} \cdot h^k + \cdots. \label{eq:model-simple-ep-phi} \end{equation} For this choice of primitive function, we have $\frac{\partial \Phi}{\partial h^k} = w_k \cdot h^{k-1} + w_{k+1}^\top \cdot h^{k+1}$. In practice, we find that it is necessary that the values of the state variable be bounded. For this reason, we apply an activation function $\sigma$ and arrive at the following dynamics in the free phase, which is a discrete-time variant of the dynamics of the Hopfield network studied in Chapter \ref{chapter:hopfield}: \begin{equation} h^k_{t+1} = \sigma \left( w_k \cdot h^{k-1}_t + w_{k+1}^\top \cdot h^{k+1}_t \right). \label{eq:EqProp-simple-first-phase} \end{equation}
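The free-phase update of a fully connected layer can be sketched in a few lines of NumPy (a hypothetical toy configuration; the layer sizes, the hard-sigmoid choice of $\sigma$, and the finite-difference check are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_sigmoid(u):                         # a common choice for sigma
    return np.clip(u, 0.0, 1.0)

# Hypothetical sizes for three consecutive layers h^{k-1}, h^k, h^{k+1}.
n_prev, n_k, n_next = 4, 3, 2
h_prev = rng.random(n_prev)
h_k    = rng.random(n_k)
h_next = rng.random(n_next)
w_k  = rng.standard_normal((n_k, n_prev))    # connects h^{k-1} to h^k
w_k1 = rng.standard_normal((n_next, n_k))    # connects h^k to h^{k+1}

def phi(hk):
    # The two terms of Phi involving h^k:
    # (h^k)^T w_k h^{k-1} + (h^{k+1})^T w_{k+1} h^k
    return hk @ w_k @ h_prev + h_next @ w_k1 @ hk

# Gradient appearing in the free-phase dynamics.
grad = w_k @ h_prev + w_k1.T @ h_next

# Finite-difference check of dPhi/dh^k.
eye = np.eye(n_k)
num = np.array([(phi(h_k + 1e-6 * eye[i]) - phi(h_k - 1e-6 * eye[i])) / 2e-6
                for i in range(n_k)])
assert np.allclose(grad, num, atol=1e-5)

# One step of the free-phase dynamics for layer h^k.
h_k_new = hard_sigmoid(w_k @ h_prev + w_k1.T @ h_next)
```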
\subsection{Convolutional Layers} \label{sec:convolutional-layer}
\citet{ernoult2019updates} propose to implement a convolutional layer with the following primitive function: \begin{equation} \label{eq:phiCNN}
\Phi_k^{\rm conv}(w_k, h^{k-1}, h^k) = h^k\bullet\mathcal{P}\left(w_k\star h^{k-1}\right). \end{equation} In this expression, $w_k$ is the kernel (convolutional weights) for that layer, $\star$ is the convolution operator, $\mathcal{P}$ is a pooling operation, and $\bullet$ is the canonical scalar product for pairs of tensors with the same dimensions. In particular, in this expression, $h^k$ and $\mathcal{P}\left(w_k\star h^{k-1}\right)$ are tensors of the same size. This implementation is similar to the one proposed by \citet{lee2009convolutional} in the context of restricted Boltzmann machines.
By stacking several of these convolutional layers, we can form a deep ConvNet (specifically a recurrent convolutional network with static input and symmetric weights). Consider a primitive function of the form $\Phi = \cdots + \Phi_k^{\rm conv} + \Phi_{k+1}^{\rm conv} + \cdots$, i.e. \begin{equation}
\Phi = \cdots + h^k\bullet\mathcal{P}\left(w_k\star h^{k-1}\right) + h^{k+1}\bullet\mathcal{P}\left(w_{k+1}\star h^k\right) + \cdots. \end{equation} We have $\frac{\partial \Phi}{\partial h^k} = \mathcal{P}\left(w_k\star h^{k-1}\right) + \tilde{w}_{k+1}\star \mathcal{P}^{-1}\left(h^{k+1}\right)$, where $\mathcal{P}^{-1}$ is an `inverse pooling' operation, and $\tilde{w}_k$ is the flipped kernel, which forms the transpose convolution. We refer to \citet{ernoult2019updates}, where these operations are defined in detail. After restricting the space of the state variables by using the hardsigmoid activation function $\sigma$ to clip the states, we obtain the following dynamics for layer $h^k$: \begin{equation} \label{eq:conv-dynamics} h^k_{t+1} = \sigma \left( \mathcal{P}\left(w_k\star h^{k-1}_t\right) + \tilde{w}_{k+1}\star \mathcal{P}^{-1}\left(h^{k+1}_{t}\right)\right). \end{equation}
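A minimal one-dimensional sketch (with pooling taken to be the identity, $\mathcal{P} = \mathrm{id}$, and layer sizes of our choosing) makes the role of the flipped kernel concrete: the gradient of $\Phi_k^{\rm conv}$ with respect to $h^{k-1}$ is the `full' convolution of $h^k$ with the kernel, i.e. the transpose convolution:

```python
import numpy as np

rng = np.random.default_rng(1)

L, K = 8, 3                        # hypothetical input length and kernel size
h_prev = rng.random(L)             # layer h^{k-1}
w = rng.standard_normal(K)         # kernel w_k
h_k = rng.random(L - K + 1)        # layer h^k, sized to the 'valid' output

def corr(a, v):                    # cross-correlation, 'valid' mode
    return np.correlate(a, v, mode="valid")

def phi(hp):                       # Phi_k^conv with P = identity
    return float(h_k @ corr(hp, w))

# dPhi/dh^{k-1}: 'full' convolution of h^k with w, i.e. cross-correlation
# with the flipped kernel (the transpose convolution of the dynamics).
grad = np.convolve(h_k, w, mode="full")     # shape (L,), matches h_prev

# Finite-difference check.
eye = np.eye(L)
num = np.array([(phi(h_prev + 1e-6 * eye[i]) - phi(h_prev - 1e-6 * eye[i])) / 2e-6
                for i in range(L)])
assert np.allclose(grad, num, atol=1e-5)
```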
We can also combine convolutional layers, followed by fully connected layers, to form a more practical deep ConvNet. Denoting $N^{\rm conv}$ and $N^{\rm fc}$ the number of convolutional layers and fully connected layers, the total number of layers is $N^{\rm tot} = N^{\rm conv} + N^{\rm fc}$ and the primitive function is \begin{equation}
\Phi(\theta, x, s) = \sum_{k=1}^{N^{\rm conv}} \Phi_k^{\rm conv}(w_k, h^{k-1}, h^k) + \sum_{k = N^{\rm conv}+1}^{N^{\rm tot}} \Phi_k^{\rm fc}(w_k, h^{k-1}, h^k), \end{equation} where the set of parameters is $\theta = \{w_k\}_{1 \leq k \leq N^{\rm tot}}$, the input is $x=h^0$, and the state variable is $s = \{h^k\}_{1 \leq k \leq N^{\rm tot}}$.
\subsection{Squared Error}
We have already studied in Chapters \ref{chapter:hopfield} and \ref{chapter:neuromorphic} the case where we optimize the loss associated with the squared error cost function. In this setting, the state variable of the network is of the form $s=(h, o)$, where $h$ represents the \textit{hidden neurons} and $o$ the \textit{output neurons}, and the cost function is \begin{equation}
C(o, y) = \frac{1}{2} \left\| o - y \right\|^2. \end{equation} The nudged phase dynamics of the hidden neurons and output neurons read, in this context: \begin{equation}
h_{t+1}^{\beta} = \frac{\partial \Phi}{\partial h}(\theta, x, h_t^{\beta}, o_t^{\beta}), \qquad o_{t+1}^{\beta} = \frac{\partial \Phi}{\partial o}(\theta, x, h_t^{\beta}, o_t^{\beta}) + \beta \; (y - o_t^{\beta}). \end{equation}
\subsection{Cross-Entropy}
\citet{laborieux2020scaling} present a method to implement the output layer of the neural network as a softmax output, which can be used in conjunction with the cross-entropy loss. In this setting, the state of the output neurons ($o$) is not part of the state variable ($s$), but is instead viewed as a readout, which is a function of $s$ and of a weight matrix $w_{\rm out}$ of size $\dim(y) \times \dim(s)$. Specifically, the state of output neurons at time step $t$ is defined by the formula: \begin{equation}
o_t = \mbox{softmax}(w_{\rm out}\cdot s_t). \end{equation} Denoting $M=\dim(y)$ the number of categories in the classification task of interest, the cross-entropy cost function associated with the softmax output is then: \begin{equation} C(s, y, w_{\rm out}) = - \sum_{i=1}^M y_i \log(\textrm{softmax}_i(w_{\rm out} \cdot s)). \end{equation} Using the fact that $\frac{\partial C}{\partial s}(s, y, w_{\rm out}) = w_{\rm out}^\top \cdot \left( \textrm{softmax}(w_{\rm out} \cdot s) - y \right)$, the nudged phase dynamics corresponding to the cross-entropy cost function read \begin{equation} s_{t+1}^{\beta} = \frac{\partial \Phi}{\partial s}(\theta, x, s_t^\beta) + \beta \; w_{\rm out}^\top \cdot \left( y - o_t^\beta \right), \end{equation} where $o_t^\beta = \textrm{softmax}(w_{\rm out} \cdot s_t^\beta)$. Note that in this context the loss $\mathcal{L} = C(s_\star, y, w_{\rm out})$ also depends on the parameter $w_{\rm out}$. The loss gradient with respect to $w_{\rm out}$ is given by \begin{equation} \frac{\partial \mathcal{L}}{\partial w_{\rm out}} = \left( o_\star - y \right) \cdot s_\star^\top, \end{equation} where $o_\star = \textrm{softmax}(w_{\rm out} \cdot s_\star)$.
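The expression for $\frac{\partial C}{\partial s}$ used in the nudged-phase dynamics can be verified numerically (a sketch with arbitrary dimensions and random values of our choosing):

```python
import numpy as np

rng = np.random.default_rng(2)

dim_s, M = 5, 3                          # hypothetical state size and class count
s = rng.standard_normal(dim_s)
w_out = rng.standard_normal((M, dim_s))  # readout weights, shape dim(y) x dim(s)
y = np.eye(M)[1]                         # one-hot target

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

def cost(s_):                            # C(s, y, w_out)
    return -float(y @ np.log(softmax(w_out @ s_)))

# Analytic gradient used in the nudged-phase dynamics:
# dC/ds = w_out^T (softmax(w_out s) - y).
grad = w_out.T @ (softmax(w_out @ s) - y)

# Finite-difference check.
eye = np.eye(dim_s)
num = np.array([(cost(s + 1e-5 * eye[i]) - cost(s - 1e-5 * eye[i])) / 2e-5
                for i in range(dim_s)])
assert np.allclose(grad, num, atol=1e-5)
```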
In practice, the state variable is of the form $s = (h^1, h^2, \ldots, h^N)$, where $h^1, h^2, \ldots, h^N$ are the hidden layers of the network, and $w_{\rm out}$ connects only the last hidden layer $h^N$ (not all hidden layers) to the output layer $o$. The weight matrix $w_{\rm out}$ has size $\dim(y) \times \dim(h^N)$ in this case.
\section{Experiments on MNIST and CIFAR-10} \label{sec:discrete-time-experiments}
In this section, we present the experimental results of \citet{ernoult2019updates} and \citet{laborieux2020scaling}, on the MNIST (Table~\ref{table:mnist-convnet-results}) and the CIFAR-10 (Table~\ref{table:cifar-convnet-results}) classification tasks, respectively. The CIFAR-10 dataset \citep{krizhevsky2009learning} consists of $60,000$ colour images of $32 \times 32$ pixels. These images are split into $10$ classes (each corresponding to an object or animal), with $6,000$ images per class. The training set consists of $50,000$ images and the test set of $10,000$ images.
Experiments are performed on different network architectures (composed of multiple fully-connected and/or convolutional layers), using different cost functions (either the squared error or the cross-entropy loss) and different loss gradient estimators. Using the notations of this chapter, the \textit{one-sided} gradient estimator ($\widehat{\nabla}_\theta(\beta)$) and the \textit{symmetric} gradient estimator ($\widehat{\nabla}_\theta^{\rm sym}(\beta)$) presented in Chapter~\ref{chapter:eqprop} take the form \begin{align} \label{eq:one-sided-phi} \widehat{\nabla}_\theta(\beta) & = \frac{1}{\beta} \left( \frac{\partial \Phi}{\partial \theta}(\theta, x, s_\star^\beta) - \frac{\partial \Phi}{\partial \theta}(\theta, x, s_\star) \right), \\ \label{eq:symmetric-phi} \widehat{\nabla}_\theta^{\rm sym}(\beta) & = \frac{1}{2 \beta} \left( \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_\star^\beta \right) - \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_\star^{-\beta} \right) \right). \end{align} We refer to \citet{ernoult2019updates} and \citet{laborieux2020scaling} for the implementation and simulation details. Finally, since the models considered here are RNNs, we can also train them with the more conventional \textit{backpropagation through time} (BPTT) algorithm\footnote{In this case, BPTT is used on RNNs of a very specific kind. The RNN models considered here have a transition function of the form $F = \frac{\partial \Phi}{\partial s}$, a static input $x$ at each time step, and a single target $y$ at the final time step.}, and use BPTT as a benchmark for EqProp.
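The different behaviour of the two estimators can be reproduced on a toy scalar model (an illustrative sketch; the model and constants are ours, not those of the cited experiments): the bias of the one-sided estimator scales as $O(\beta)$, while that of the symmetric estimator scales as $O(\beta^2)$.

```python
# Toy scalar model (illustrative constants of our choosing):
#   Phi(theta, x, s) = s^2/2 - eps*(s - theta*x)^2/2,  C(s, y) = (s - y)^2/2.
eps, theta, x, y = 0.5, 0.8, 1.5, 2.0
true_grad = (theta * x - y) * x                  # exact dL/dtheta = -1.2

def fixed_point(beta, s0=0.0, n=2000):           # iterate the (nudged) dynamics
    s = s0
    for _ in range(n):
        s = s - eps * (s - theta * x) - beta * (s - y)
    return s

def dPhi_dtheta(s):
    return eps * x * (s - theta * x)

s_free = fixed_point(0.0)                        # free fixed point

def one_sided(beta):                             # cf. the one-sided estimator
    return (dPhi_dtheta(fixed_point(beta, s_free)) - dPhi_dtheta(s_free)) / beta

def symmetric(beta):                             # cf. the symmetric estimator
    return (one_sided(beta) + one_sided(-beta)) / 2

# Both estimate -dL/dtheta = 1.2; the one-sided bias shrinks like O(beta),
# the symmetric bias like O(beta^2).
for beta in (0.1, 0.05):
    print(beta, abs(one_sided(beta) + true_grad), abs(symmetric(beta) + true_grad))
```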
\begin{table}[ht!] \centering
$\begin{array}{|cc|ccc|ccc|cc|} \hline
& & \multicolumn{3}{c|}{\rm EqProp \; Error \; (\%)} & & & & \multicolumn{2}{c|}{\rm BPTT \; Error \; (\%)} \\
\hbox{Model} & \hbox{Loss} & \hbox{Estimator} & {\rm Test} & {\rm Train} & T & K & {\rm Epochs} & {\rm Test} & {\rm Train} \\ \hline
\hbox{DHN-1h} & \multirow{2}{*}{Squared Error} & \multirow{2}{*}{One-sided} & 2.06 & 0.13 & 100 & 12 & 30 & 2.11 & 0.46 \\
\hbox{DHN-2h} & & & 2.01 & 0.11 & 500 & 40 & 50 & 2.02 & 0.29 \\ \hline
\hbox{FC-1h} & \multirow{4}{*}{Squared Error} & \multirow{4}{*}{One-sided} & 2.00 & 0.20 & 30 & 10 & 30 & 2.00 & 0.55 \\
\hbox{FC-2h} & & & 1.95 & 0.14 & 100 & 20 & 50 & 2.09 & 0.37 \\
\hbox{FC-3h} & & & 2.01 & 0.10 & 180 & 20 & 100 & 2.30 & 0.32 \\
\hbox{ConvNet} & & & 1.02 & 0.54 & 200 & 10 & 40 & 0.88 & 0.12 \\ \hline \end{array}$ \caption[Experimental results of \citet{ernoult2019updates} on discrete-time neural network models trained on MNIST.]{ Experimental results of \citet{ernoult2019updates} on MNIST. EqProp is benchmarked against BPTT. `DHN' stands for the `deep Hopfield networks' of Chapter \ref{chapter:hopfield}. `FC' means `fully connected', and `-$\#$h' stands for the number of hidden layers.
The test error rates and training error rates (in \%) are averaged over five trials. $T$ is the number of iterations in the first phase. $K$ is the number of iterations in the second phase. All these results are obtained with the squared error and the one-sided gradient estimator. } \label{table:mnist-convnet-results} \end{table}
\begin{table}[ht!] \centering
$\begin{array}{|cc|ccc|ccc|cc|} \hline
& & \multicolumn{3}{c|}{\rm EqProp \; Error \; (\%)} & & & & \multicolumn{2}{c|}{\rm BPTT \; Error \; (\%)} \\
\hbox{Model} & \hbox{Loss} & \hbox{Estimator} & {\rm Test} & {\rm Train} & T & K & {\rm Epochs} & {\rm Test} & {\rm Train} \\ \hline
\multirow{3}{*}{\rm ConvNet} & \multirow{3}{*}{\rm Squared Error} & \hbox{One-sided} & 86.64 & 84.90 & 250 & 30 & 120 & \multirow{3}{*}{11.10} & \multirow{3}{*}{3.69} \\
& & \hbox{Random Sign} & \color{blue}{12.61^\star} & \color{blue}{8.64^\star} & 250 & 30 & 120 & & \\
& & \hbox{Symmetric} & 12.45 & 7.83 & 250 & 30 & 120 & & \\ \hline
\hbox{ConvNet} & \hbox{Cross-Ent.} & \hbox{Symmetric} & 11.68 & 4.98 & 250 & 25 & 120 & 11.12 & 2.19 \\ \hline \end{array}$ \caption[Experimental results of \citet{laborieux2020scaling} on ConvNets trained on CIFAR-10.]{Experimental results of \citet{laborieux2020scaling} on CIFAR-10. EqProp is benchmarked against BPTT. The test error rates and training error rates (in \%) are averaged over five trials. $T$ is the number of iterations in the first phase. $K$ is the number of iterations in the second phase. The `one-sided' and `symmetric' gradient estimators refer to Eq.~\ref{eq:one-sided-phi} and Eq.~\ref{eq:symmetric-phi}, respectively. `random sign' refers to the one-sided estimator with $\beta$ being positive or negative with equal probability.\\ \color{blue}{$^\star$In the simulations with random $\beta$, the training process collapsed in one trial out of five, leading to a performance similar to the one-sided estimator. The test error mean and train error mean reported here include only the four successful trials.} } \label{table:cifar-convnet-results} \end{table}
Table~\ref{table:mnist-convnet-results} compares the performance on MNIST of the discrete-time models presented in this chapter (FC-\#h and ConvNet) with the continuous-time Hopfield networks of Chapter \ref{chapter:hopfield} (DHN-\#h). No degradation of accuracy is observed when using discrete-time rather than continuous-time networks, although the former require far fewer time steps in the first phase of training ($T$). The lowest test error rate ($\sim 1 \%$) is achieved with the ConvNet model.
Table~\ref{table:cifar-convnet-results} shows the performance of a ConvNet model on CIFAR-10, for different gradient estimators and different loss functions. Unlike in the MNIST experiments, the one-sided gradient estimator with a nudging factor ($\beta$) of constant sign works poorly on CIFAR-10: training is unstable and the network is unable to fit the training data (84.90\% train error). The bias of the one-sided gradient estimator can be reduced on average by choosing the sign of $\beta$ at random in the second phase: with this technique, \citet{laborieux2020scaling} report that training proceeded well in four runs out of five, yielding a mean test error of 12.61\%, but training collapsed in the last run in a way similar to the one-sided gradient estimator with constant sign. The symmetric difference estimator reduces not only the bias but also the variance, and stabilizes the training process consistently across runs (12.45\% test error). Finally, the best test error rate, $11.68\%$, is obtained with the cross-entropy loss, and comes within 0.6\% of the performance of BPTT in accuracy.
\subsection{Challenges with EqProp Training} \label{eq:difficulties}
The theoretical guarantee that EqProp can approximate with arbitrary precision the gradient of arbitrary loss functions for a very broad class of models (energy-based models) suggests that EqProp could eventually train large networks on challenging tasks, as was proved feasible in the last decade with other deep learning training methods (e.g. backpropagation) relying on stochastic gradient descent. Nevertheless, EqProp training on current processors (GPUs) presents several challenges.
One difficulty encountered with EqProp training is that the gradient formula (Eq.~\ref{eq:eqprop-phi-grad}) requires that $\frac{\partial \Phi}{\partial \theta}$ be measured \textit{exactly} at the fixed points, whereas in many situations these fixed points are only approached up to a certain precision. Empirically, we observe that for learning to work, the fixed point of the first phase of training must be approximated with very high accuracy; otherwise the gradient estimate is of poor quality and fails to optimize the loss function. This implies that the equations of the first phase of training need to be iterated for a large number of time steps, until convergence to the fixed point. Table~\ref{table:mnist-convnet-results} and Table~\ref{table:cifar-convnet-results} show that hundreds of iterations are required for the networks to converge, even though these networks consist of just a few layers.
Various methods have been investigated to accelerate convergence, none of which has proved fully satisfactory so far. \citet{scellier2016towards} propose a method based on variational inference, in which the state variables are split into two groups (specifically the layers of odd indices and the layers of even indices): at each iteration, one group of state variables remains fixed, while the other group is updated by solving for the stationarity condition. \citet{bengio2016feedforward} give a sufficient condition so that initialization of the network with a forward pass provides sensible initial states for inference; the condition is that any two successive layers must form a `good autoencoder'. \citet{o2018initialized} use a side network to learn these initial states for inference (in the main network).
One promising avenue to solve the problem of long inference times is offered by the recent work of \citet{ramsauer2020hopfield}, which shows that for a certain class of \textit{modern Hopfield networks}, equilibrium states are reached in exactly one step.
This idea could considerably accelerate simulations in software and make it possible to demonstrate EqProp training on more advanced architectures and harder tasks. We emphasize however that the difficulty of long inference times is specific to numerical simulations (i.e. simulations on digital computers), and may not be a problem for neuromorphic hardware, where energy minimization is performed by the physics of the system (Chapter \ref{chapter:neuromorphic}).
A second difficulty with EqProp training is due to the saturation of neurons. All experiments so far have found that for EqProp training to be effective, the neurons' states need to be clipped to a closed interval, typically $[0, 1]$. This is achieved in most experiments by applying the hard-sigmoid activation function $\sigma(s) = \min(\max(0, s), 1)$ after each iteration during inference. Due to this technique however, many neurons `saturate', i.e. they have a value of exactly $0$ or $1$ at equilibrium. In the second phase of training, due to these saturated neurons, error signals have difficulty propagating from output neurons across the network when the nudging factor $\beta$ is small. To mitigate this problem, in most experiments $\beta$ is chosen large enough so as to amplify and better propagate error signals along the layers, at the cost of degrading the quality of the gradient estimate. To counter this problem, \citet{o2018initialized} suggest using a modified activation function which includes a leak term, namely $\sigma^{\rm mod}(s) = \sigma(s) + 0.01 s$. Another avenue to further reduce the saturation effect is to search for weight initialization schemes specifically suited to the kind of network models trained with EqProp. The weight initialization schemes that dominate deep learning today have been designed to fit feedforward nets \citep{he2015delving} and RNNs \citep{saxe2013exact} trained with automatic differentiation. Finding appropriate weight initialization schemes for the kind of bidirectional networks studied in our context is a largely unexplored area of research.
The third difficulty with EqProp training is hyperparameter tuning, due to the high sensitivity of the training process to some of the hyperparameters. Initial learning rates for example need to be tuned layer-wise. In addition to the usual hyperparameters (architecture, learning rates, ...), EqProp requires tuning some additional hyperparameters: the number of iterations in the free phase ($T$), the number of iterations in the nudged phase ($K$), the value of the nudging factor ($\beta$), ... In the next section, we present a theoretical result called the \textit{GDD property} that can help accelerate hyperparameter search. As we will see, the GDD property provides a criterion to decide whether the fixed point of the first phase has been reached or not.
\section{Gradient Descending Dynamics (GDD)} \label{sec:discrete-time-transient-dynamics}
The gradient formula of Eq.~\ref{eq:eqprop-phi-grad} depends only on the fixed points $s_\star$ and $s_\star^\beta$, not on the specific trajectory that the network follows to reach them. But, similarly to the real-time setting of Chapter \ref{chapter:hopfield}, if we assume the dynamics of Eq.~\ref{eq:nudged-phase-discrete-time}, in which the neurons gradually move from their free fixed point values ($s_\star$) towards their nudged fixed point values ($s_\star^\beta$), we can show that the transient states of the network ($s_t^\beta$ for $t \geq 0$) perform step-by-step gradient computation.
\subsection{Transient Dynamics}
First, note that the gradient of EqProp (Eq.~\ref{eq:eqprop-phi-grad}), which is equal to the gradient of the loss in the limit $\beta \to 0$, can be decomposed as a telescoping sum: \begin{equation}
\label{eq:telescoping-sum}
\frac{1}{\beta} \left( \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_\star^\beta \right) - \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_\star \right) \right) = \sum_{t=0}^\infty \frac{1}{\beta} \left( \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_{t+1}^\beta \right) - \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_t^\beta \right) \right). \end{equation}
Second, we rewrite the dynamics of the free phase (Eq~\ref{eq:free-phase-discrete-time}) in the form \begin{equation}
s_{t+1} = \frac{\partial \Phi}{\partial s} \left( \theta_{t+1} = \theta, x, s_t \right), \end{equation} where $\theta_t$ denotes the parameter of the model at time step $t$, the value $\theta$ being shared across all time steps. We consider the loss after $T$ time steps: \begin{equation}
\mathcal{L}_T = C \left( s_T, y \right). \end{equation} $\mathcal{L}_T$ is what we have called the \textit{projected cost function} in the setting of real-time dynamics (Eq.~\ref{eq:projected-cost-function}). Rewriting the free phase dynamics this way allows us to define the partial derivative $\frac{\partial \mathcal{L}_T}{\partial \theta_t}$ as the sensitivity of the loss $\mathcal{L}_T$ with respect to $\theta_t$, when $\theta_1, \ldots, \theta_{t-1}, \theta_{t+1}, \ldots, \theta_T$ remain fixed (set to the value $\theta$). With these notations, the full gradient of the loss can be decomposed as \begin{equation} \label{eq:total-gradient} \frac{\partial \mathcal{L}_T}{\partial \theta} = \frac{\partial \mathcal{L}_T}{\partial \theta_1} + \frac{\partial \mathcal{L}_T}{\partial \theta_2} + \cdots + \frac{\partial \mathcal{L}_T}{\partial \theta_T}. \end{equation}
The following result links the right-hand sides of Eq.~\ref{eq:telescoping-sum} and Eq.~\ref{eq:total-gradient} term by term.
\begin{thm}[\citet{ernoult2019updates}] \label{thm:gdd} Let $s_0, s_1, \ldots, s_T$ be the sequence of states in the free phase. Suppose that the sequence has converged to the fixed point $s_\star$ after $T-K$ time steps for some $K \geq 0$, i.e. that $s_\star = s_T = s_{T-1} = \ldots = s_{T-K}$. Then, the following identities hold at any time $t = 0, 1, \ldots, K-1$ in the nudged phase: \begin{gather} \label{eq:gdd-theta} \lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_{t+1}^\beta \right) - \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_t^\beta \right) \right) = - \frac{\partial \mathcal{L}_T}{\partial \theta_{T-t}},\\ \label{eq:gdd-s} \lim_{\beta \to 0} \frac{1}{\beta} \left( s_{t+1}^\beta - s_t^\beta \right) = - \frac{\partial \mathcal{L}_T}{\partial s_{T-t}}. \end{gather} \end{thm}
We refer to \citet{ernoult2019updates} for a proof. Theorem \ref{thm:gdd} relates neural computation to gradient computation, and is, as such, a discrete-time variant of Theorem \ref{thm:truncated-eqprop}. In essence, Theorem \ref{thm:gdd} shows that in the nudged phase of EqProp, the temporal variations in neural activity and incremental weight updates represent loss gradients. Since the sequence of states in the nudged phase satisfies $s_{t+1}^\beta = s_t^\beta - \beta \; \frac{\partial \mathcal{L}_T}{\partial s_{T-t}} + o(\beta)$ as $\beta \to 0$, i.e. the states descend the gradients of the loss $\mathcal{L}_T$, we call this property the \textit{gradient descending dynamics} (GDD) property.
As mentioned in section \ref{eq:difficulties}, one of the challenges with EqProp training comes from the empirical observation that learning is successful only if we are \textit{exactly} at the fixed point at the end of the first phase, although in practice we use numerical methods to \textit{approximate} this fixed point. In particular, we need a criterion to `decide' when the fixed point has been reached with high enough accuracy. Theorem \ref{thm:gdd} provides such a criterion: a necessary condition for the fixed point of the first phase to be reached is that the identities of Eqs.~\ref{eq:gdd-theta}-\ref{eq:gdd-s} hold.
\subsection{Backpropagation Through Time}
On the one hand, we define the neural and weight increments of EqProp, which can be computed during the second phase: \begin{align}
\Delta_\theta^{\rm EP}(\beta, t) & = \frac{1}{\beta} \left( \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_{t+1}^\beta \right) - \frac{\partial \Phi}{\partial \theta} \left( \theta, x, s_t^\beta \right) \right), \\
\Delta_s^{\rm EP}(\beta, t) & = \frac{1}{\beta} \left( s_{t+1}^\beta-s_t^\beta \right). \end{align}
On the other hand, the loss gradients $\frac{\partial \mathcal{L}_T}{\partial s_{T-t}}$ and $\frac{\partial \mathcal{L}_T}{\partial \theta_{T-t}}$ appearing on the right-hand sides of Eqs.~\ref{eq:gdd-theta}-\ref{eq:gdd-s} can be computed by automatic differentiation. Specifically, these loss gradients are the `partial derivatives' computed by the \textit{backpropagation through time} (BPTT) algorithm. Here BPTT is applied in the very specific setting of an RNN with transition function $F = \frac{\partial \Phi}{\partial s}$, with static input $x$ at each time step, and with target $y$ at the final time step. In particular, there is no time-dependence in the data. We denote these partial derivatives computed by BPTT: \begin{align}
\nabla^{\rm BPTT}_\theta(t) & = \frac{\partial \mathcal{L}_T}{\partial \theta_{T-t}}, \\
\nabla^{\rm BPTT}_s(t) & = \frac{\partial \mathcal{L}_T}{\partial s_{T-t}}. \end{align}
Using these notations, the GDD property (Theorem \ref{thm:gdd}) states that under the condition that $s_\star = s_T = s_{T-1} = \ldots = s_{T-K}$, we have for every $t = 0, 1, \ldots, K-1$ that $\lim_{\beta \to 0} \Delta_s^{\rm EP}(\beta, t) = - \nabla^{\rm BPTT}_s(t)$ and $\lim_{\beta \to 0} \Delta_\theta^{\rm EP}(\beta, t) = - \nabla^{\rm BPTT}_\theta(t)$. The GDD property is illustrated in Figure \ref{fig:gdd}.
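The GDD property can be checked numerically on a toy scalar model (an illustrative sketch of our choosing; for this model the BPTT partial derivatives have a closed form):

```python
# Toy scalar model (illustrative): Phi(theta, x, s) = s^2/2 - eps*(s - theta*x)^2/2,
# C(s, y) = (s - y)^2/2, whose free fixed point is s_star = theta*x.
eps, theta, x, y = 0.5, 0.8, 1.5, 2.0
s_star = theta * x                   # free fixed point, reached exactly here

def dPhi_dtheta(s):
    return eps * x * (s - theta * x)

# BPTT: with s_{t+1} = (1 - eps)*s_t + eps*theta_{t+1}*x and L_T = (s_T - y)^2/2,
# dL_T/dtheta_{T-t} = (s_T - y) * (1 - eps)^t * eps * x   (here s_T = s_star).
def bptt_grad(t):
    return (s_star - y) * (1 - eps) ** t * eps * x

beta = 1e-4
s = s_star
eqprop, bptt = [], []
for t in range(5):
    s_next = s - eps * (s - theta * x) - beta * (s - y)           # nudged dynamics
    eqprop.append((dPhi_dtheta(s_next) - dPhi_dtheta(s)) / beta)  # Delta_theta^EP
    bptt.append(-bptt_grad(t))                                    # -grad^BPTT
    s = s_next

# The two sequences agree term by term, up to O(beta).
```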
We note that the GDD property also implies that the gradient computed by `truncated EqProp' (i.e. EqProp where the second phase is halted before convergence to the second fixed point) corresponds to truncated BPTT.
\begin{figure}\label{fig:gdd}
\end{figure} \chapter{Extensions of Equilibrium Propagation} \label{chapter:future}
In this chapter, we present research directions for the development of the equilibrium propagation framework. In section \ref{sec:time-varying-setting}, we present a general framework for training dynamical systems with time-varying inputs, which exploits the principle of least action. In section \ref{sec:stochastic-setting}, we adapt the EqProp framework to the setting of stochastic systems. In section \ref{sec:contrastive-meta-learning}, we briefly present the \textit{contrastive meta-learning} framework of \citet{zucchet2021contrastive}, where they use the EqProp method to train the meta-parameters of a meta-learning model.
\section{Equilibrium Propagation in Dynamical Systems with Time-Varying Inputs} \label{sec:time-varying-setting}
In Chapter \ref{chapter:eqprop}, we have derived the EqProp training procedure for a class of models called energy-based models (EBMs). A key element of the theory is the fact that, in EBMs, equilibrium states are characterized by variational equations. In this section, we show that the EqProp training strategy can be applied to other situations where variational equations appear. The equations of motion of many physical systems can also be characterized by variational equations -- their trajectory can be derived through a \textit{principle of stationary action} (e.g. a principle of least action). In such systems, the quantity that is stationary is not the energy function (as in an EBM), but the \textit{action functional}, which is by definition the time integral of the Lagrangian function. Such systems, which we call \textit{Lagrangian-based models} (LBMs), can play the role of machine learning models with time-varying inputs. This idea was first proposed by \citet{baldi1991contrastive} in the context of the \textit{contrastive learning framework}.
\subsection{Lagrangian-Based Models}
A Lagrangian-based model (LBM) is specified by a set of adjustable parameters, denoted $\theta$, a time-varying input, and a state variable. We write $\mathbf{x}_t$ the input value at time $t$, and $\mathbf{s}_t$ the state of the model at time $t$. We study the evolution of the system over a time interval $[0,T]$, and we write $\mathbf{x}$ and $\mathbf{s}$ the entire input and state trajectories over this time interval. The model is further described in terms of a \textit{functional} $\mathcal{S}$ which, given a parameter value $\theta$ and an input trajectory $\mathbf{x}$, associates to each trajectory $\mathbf{s}$ the real number \begin{equation}
\mathcal{S}(\theta, \mathbf{x}, \mathbf{s}) = \int_0^T L(\theta, \mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t) dt, \end{equation}
where $L(\theta, \mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t)$ is a scalar function of the parameters ($\theta$), the external input ($\mathbf{x}_t$), the state of the system ($\mathbf{s}_t$) as well as its time derivative ($\dot{\mathbf{s}}_t$). The function $L$ is called the \textit{Lagrangian function} of the system, and $\mathcal{S}$ is called the \textit{action functional}. The action functional is defined for any \textit{conceivable} trajectory $\mathbf{s}$, but, among all conceivable trajectories, the \textit{effective} trajectory of the system (subject to $\theta$ and $\mathbf{x}$), denoted $\mathbf{s}(\theta, \mathbf{x})$, satisfies by definition $\left. \frac{d}{d \epsilon} \right|_{\epsilon=0} \mathcal{S}(\theta, \mathbf{x}, \mathbf{s}(\theta, \mathbf{x}) + \epsilon \mathbf{s}) = 0$ for any trajectory $\mathbf{s}$ that satisfies the boundary conditions $\mathbf{s}_0=0$ and $\mathbf{s}_T=0$. We write for short \begin{equation}
\label{eq:free-effective-trajectory}
\frac{\delta \mathcal{S}}{\delta \mathbf{s}}(\theta, \mathbf{x}, \mathbf{s}(\theta, \mathbf{x})) = 0. \end{equation} Intuitively, $\delta \mathcal{S}$ can be thought of as the variation of $\mathcal{S}$ associated to a small variation $\delta \mathbf{s}$ around the trajectory $\mathbf{s}(\theta, \mathbf{x})$. Mathematically, $\frac{\delta \mathcal{S}}{\delta \mathbf{s}}(\theta, \mathbf{x}, \mathbf{s}(\theta, \mathbf{x}))$ represents the differential of the function $\mathcal{S}(\theta, \mathbf{x}, \cdot)$ at the point $\mathbf{s}(\theta, \mathbf{x})$. We say that the effective trajectory is stationary with respect to the action functional, and that the dynamics of the system derives from a \textit{principle of stationary action}. Since the action functional ($\mathcal{S}$) is defined in terms of the Lagrangian of the system ($L$), we call such a time-varying system a \textit{Lagrangian-based model}.
The loss to be minimized is an integral of cost values of the states along the effective trajectory: \begin{equation}
\mathcal{L}_0^T(\theta, \mathbf{x}, \mathbf{y}) = \int_0^T c_t(\mathbf{s}_t(\theta, \mathbf{x}), \mathbf{y}_t) dt. \end{equation} In this expression, $\mathbf{y}_t$ is the desired target at time $t$, $\mathbf{y}$ is the corresponding target trajectory, $\mathbf{s}_t(\theta, \mathbf{x})$ is the state at time $t$ along the effective trajectory $\mathbf{s}(\theta, \mathbf{x})$, and $c_t(\mathbf{s}_t, \mathbf{y}_t)$ is a scalar function (the \textit{cost function} at time $t$).
Similarly to the setting of EBMs, the concept of \textit{sum-separability} is useful in LBMs. Let $\theta = (\theta_1, \ldots, \theta_N)$ be the adjustable parameters of the system. Let $\{ \mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t \}_k$ denote the information about $(\mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t)$ at time $t$ which is locally available to parameter $\theta_k$. We say that the Lagrangian $L$ is \textit{sum-separable} if it is of the form \begin{equation}
\label{eq:sum-separability-Lagrangian}
L(\theta, \mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t) = L_0(\mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t) + \sum_{k=1}^N L_k( \theta_k, \{ \mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t \}_k), \end{equation} where $L_0(\mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t)$ is a term that is independent of the parameters to be adjusted, and $L_k$ is a scalar function of $\theta_k$ and $\{\mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t\}_k$ for each $k \in \{ 1, \ldots, N \}$.
\subsection{Gradient Formula}
Similarly to the static case (Section \ref{sec:loss-gradients}), we introduce the \textit{total action functional} \begin{equation}
\mathcal{S}^\beta(\mathbf{s}) = \int_0^T \left( L(\theta, \mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t) + \beta \; c_t(\mathbf{s}_t, \mathbf{y}_t) \right) dt, \end{equation} defined for any value of the nudging factor $\beta$ (for fixed $\theta$, $\mathbf{x}$ and $\mathbf{y}$). Intuitively, by varying $\beta$, the action functional $\mathcal{S}^\beta$ is modified, and so is its stationary solution, i.e. the effective trajectory of the system. Specifically, let us denote by $\mathbf{s}^\beta$ the trajectory characterized by the stationarity condition $\frac{\delta \mathcal{S}^\beta}{\delta \mathbf{s}}(\mathbf{s}^\beta) = 0$. Note in particular that for $\beta=0$ we have $\mathbf{s}^0 = \mathbf{s}(\theta, \mathbf{x})$.
\begin{thm}[Gradient formula for Lagrangian-based models] \label{thm:time-varying-eqprop} The gradient of the loss can be computed using the following formula: \begin{equation}
\label{eq:time-varying-eqprop}
\frac{\partial \mathcal{L}_0^T}{\partial \theta}(\theta, \mathbf{x}, \mathbf{y}) = \left. \frac{d}{d\beta} \right|_{\beta=0} \int_0^T \frac{\partial L}{\partial \theta} \left( \theta, \mathbf{x}_t, \mathbf{s}^\beta_t, \dot{\mathbf{s}}^\beta_t \right) dt. \end{equation} Furthermore, if the Lagrangian function $L$ is sum-separable, then the gradient for each parameter $\theta_k$ depends only on information that is locally available to $\theta_k$: \begin{equation}
\label{eq:time-varying-eqprop-local}
\frac{\partial \mathcal{L}_0^T}{\partial \theta_k}(\theta, \mathbf{x}, \mathbf{y}) = \left. \frac{d}{d\beta} \right|_{\beta=0} \int_0^T \frac{\partial L_k}{\partial \theta_k} (\theta_k, \{ \mathbf{x}_t, \mathbf{s}^\beta_t, \dot{\mathbf{s}}^\beta_t \}_k) dt. \end{equation} \end{thm}
\begin{proof}[Proof of Theorem \ref{thm:time-varying-eqprop}] We derive Theorem \ref{thm:time-varying-eqprop} as a corollary of Theorem \ref{thm:static-eqprop}. Recall that the action functional is by definition $\mathcal{S}(\theta, \mathbf{x}, \mathbf{s}) = \int_0^T L(\theta, \mathbf{x}_t, \mathbf{s}_t, \dot{\mathbf{s}}_t) \, dt$ and that the effective trajectory $\mathbf{s}(\theta, \mathbf{x})$ satisfies the stationarity condition $\frac{\delta \mathcal{S}}{\delta \mathbf{s}}(\theta, \mathbf{x}, \mathbf{s}(\theta, \mathbf{x})) = 0$. We can define a cost functional $\mathcal{C}$ on any conceivable trajectory $\mathbf{s}$ by the formula $\mathcal{C}(\mathbf{s}, \mathbf{y}) = \int_0^T c_t(\mathbf{s}_t, \mathbf{y}_t) dt$. The loss $\mathcal{L}_0^T$ then rewrites $\mathcal{L}_0^T(\theta, \mathbf{x}, \mathbf{y}) = \mathcal{C}(\mathbf{s}(\theta, \mathbf{x}), \mathbf{y})$, the total action functional rewrites $\mathcal{S}^\beta(\mathbf{s}) = \mathcal{S}(\theta, \mathbf{x}, \mathbf{s}) + \beta \; \mathcal{C}(\mathbf{s}, \mathbf{y})$, and the nudged trajectory $\mathbf{s}^\beta$ satisfies the stationarity condition $\frac{\delta \mathcal{S}}{\delta \mathbf{s}}(\theta, \mathbf{x}, \mathbf{s}^\beta) + \beta \; \frac{\delta \mathcal{C}}{\delta \mathbf{s}}(\mathbf{s}^\beta, \mathbf{y}) = 0$.
Using these notations, the first formula to be proved (Eq.~\ref{eq:time-varying-eqprop}) rewrites \begin{equation}
\frac{\partial \mathcal{L}_0^T}{\partial \theta}(\theta, \mathbf{x}, \mathbf{y}) = \left. \frac{d}{d\beta} \right|_{\beta=0} \frac{\partial \mathcal{S}}{\partial \theta}\left(\theta, \mathbf{x}, \mathbf{s}^\beta \right), \end{equation} which is exactly the first formula of Theorem \ref{thm:static-eqprop}. Finally, the second formula to be proved (Eq.~\ref{eq:time-varying-eqprop-local}) is a direct consequence of Eq.~\ref{eq:time-varying-eqprop} and the definition of sum-separability (Eq.~\ref{eq:sum-separability-Lagrangian}).
\end{proof}
\subsection{Training Sum-Separable Lagrangian-Based Models}
Theorem \ref{thm:time-varying-eqprop} suggests the following EqProp-like procedure for training Lagrangian-based models, which updates each parameter in proportion to its loss gradient. Let us assume that the Lagrangian function has the sum-separability property.
\paragraph{Free phase (inference).} Set the system in some initial state $(\mathbf{s}_0,\dot{\mathbf{s}}_0)$ at time $t=0$, and set the nudging factor $\beta$ to zero. Play the input trajectory $\mathbf{x}$ over the time interval $[0, T]$, and let the system follow the trajectory $\mathbf{s}^0$ (i.e. the effective trajectory characterized by Eq.~\ref{eq:free-effective-trajectory}). We call $\mathbf{s}^0$ the \textit{free trajectory}. For each parameter $\theta_k$, the quantity $\frac{\partial L_k}{\partial \theta_k} (\theta_k, \{ \mathbf{x}_t, \mathbf{s}^0_t, \dot{\mathbf{s}}^0_t \}_k)$ is measured and integrated from $t=0$ to $t=T$, and the result is stored locally.
\paragraph{Nudged phase.} Set the system in the same initial state $(\mathbf{s}_0,\dot{\mathbf{s}}_0)$ as in the free phase, and now set the nudging factor $\beta$ to some positive or negative (nonzero) value. Play again the input trajectory $\mathbf{x}$ over the time interval $[0, T]$, as well as the target trajectory $\mathbf{y}$, and let the system follow the trajectory $\mathbf{s}^\beta$ (i.e. the effective trajectory that is stationary with respect to $\mathcal{S}^\beta$). For each parameter $\theta_k$, the quantity $\frac{\partial L_k}{\partial \theta_k} (\theta_k, \{ \mathbf{x}_t, \mathbf{s}^\beta_t, \dot{\mathbf{s}}^\beta_t \}_k)$ is measured and integrated from $t=0$ to $t=T$.
\paragraph{Update rule.} Finally, each parameter $\theta_k$ is updated locally in proportion to its gradient, i.e. $\Delta \theta_k = - \eta \widehat{\nabla}_{\theta_k}(\beta)$, where $\eta$ is a learning rate, and \begin{equation} \widehat{\nabla}_{\theta_k}(\beta) = \frac{1}{\beta} \left( \int_0^T \frac{\partial L_k}{\partial \theta_k} (\theta_k, \{ \mathbf{x}_t, \mathbf{s}^\beta_t, \dot{\mathbf{s}}^\beta_t \}_k) dt - \int_0^T \frac{\partial L_k}{\partial \theta_k} (\theta_k, \{ \mathbf{x}_t, \mathbf{s}^0_t, \dot{\mathbf{s}}^0_t \}_k) dt \right). \end{equation}
As in the static setting (Chapter \ref{chapter:eqprop}), it is possible to reduce the bias and the variance of the gradient estimator by using the symmetrized version \begin{equation} \widehat{\nabla}_{\theta_k}^{\rm sym}(\beta) = \frac{1}{2 \beta} \left( \int_0^T \frac{\partial L_k}{\partial \theta_k} (\theta_k, \{ \mathbf{x}_t, \mathbf{s}^\beta_t, \dot{\mathbf{s}}^\beta_t \}_k) dt - \int_0^T \frac{\partial L_k}{\partial \theta_k} (\theta_k, \{ \mathbf{x}_t, \mathbf{s}^{-\beta}_t, \dot{\mathbf{s}}^{-\beta}_t \}_k) dt \right). \end{equation} This requires two nudged phases: one with a positive nudging ($+\beta$) and one with a negative nudging ($-\beta$).
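As a numerical sanity check of the procedure above, consider the following sketch. All modelling choices here are illustrative assumptions, not part of the theory: a toy scalar Lagrangian $L = \frac{1}{2}\dot{s}^2 + \frac{1}{2}\theta(s - x_t)^2$, a quadratic cost $c_t(s_t, y_t) = \frac{1}{2}(s_t - y_t)^2$, and a finite-difference discretization of the action on a time grid. With these choices the discretized total action is quadratic in the trajectory, so its stationarity condition is a linear system; for simplicity the trajectory is left free at both endpoints rather than pinned to an initial state. The symmetrized estimator can then be compared against a finite difference of the loss:

```python
import numpy as np

# Toy scalar Lagrangian-based model (illustrative assumption):
#   L(theta, x_t, s_t, sdot_t) = 1/2 sdot_t^2 + 1/2 theta (s_t - x_t)^2,
#   c_t(s_t, y_t) = 1/2 (s_t - y_t)^2.
N, dt, theta, beta = 50, 0.1, 0.7, 1e-3
t = np.arange(N + 1) * dt
x, y = np.sin(t), np.cos(t)                  # input and target trajectories

def stationary_trajectory(theta, beta):
    """Solve the stationarity condition dS^beta/ds = 0 of the discretized action."""
    # Kinetic term contributes (1/dt) times a discrete Laplacian (free endpoints).
    K = (np.diag(np.r_[1.0, 2.0 * np.ones(N - 1), 1.0])
         - np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1)) / dt
    A = K + (theta + beta) * dt * np.eye(N + 1)
    b = dt * (theta * x + beta * y)
    return np.linalg.solve(A, b)

def integral_dL_dtheta(s):
    """Time integral of dL/dtheta = 1/2 (s_t - x_t)^2 along a trajectory."""
    return 0.5 * np.sum((s - x) ** 2) * dt

# Symmetrized EqProp estimate of the loss gradient (two nudged phases).
grad_eqprop = (integral_dL_dtheta(stationary_trajectory(theta, +beta))
               - integral_dL_dtheta(stationary_trajectory(theta, -beta))) / (2 * beta)

# Ground truth: finite difference of the loss along the free trajectory.
def loss(theta):
    s = stationary_trajectory(theta, 0.0)
    return 0.5 * np.sum((s - y) ** 2) * dt

eps = 1e-5
grad_fd = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
print(grad_eqprop, grad_fd)
```

In this quadratic setting the two phases reduce to linear solves, and the two gradient estimates agree up to the $O(\beta^2)$ and $O(\epsilon^2)$ discretization errors of the two finite differences.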
Although the EqProp training method for Lagrangian-based models requires playing the input trajectory twice (once in the free phase and once in the nudged phase), we stress that it does not require storing the past states of the system, unlike the backpropagation through time (BPTT) algorithm used to train conventional recurrent neural networks.
\subsection{From Energy-Based to Lagrangian-Based Models}
Conceptually, we have the following correspondence between the static setting (energy-based models) and the time-varying setting (Lagrangian-based models). \begin{itemize} \item The concept of \textit{configuration} ($s$) is replaced by that of \textit{trajectory} ($\mathbf{s}$). A trajectory $\mathbf{s}$ is a function from the time interval $[0, T]$ to the space of configurations, which assigns to each time $t \in [0, T]$ a configuration $\mathbf{s}_t$. \item The concept of \textit{energy function} ($E$) is replaced by that of \textit{action functional} ($\mathcal{S}$). Whereas an energy function $E$ assigns a real number $E(s)$ to each configuration $s$, an action functional $\mathcal{S}$ assigns a real number $\mathcal{S}(\mathbf{s})$ to each trajectory $\mathbf{s}$. \item The concept of \textit{equilibrium state} (denoted $s(\theta, x)$ or $s_\star$) is replaced by that of \textit{effective trajectory} (denoted $\mathbf{s}(\theta, \mathbf{x})$). Whereas an equilibrium state is characterized by the stationarity of the energy ($\frac{\partial E}{\partial s} = 0$), an effective trajectory is characterized by the stationarity of the action ($\frac{\delta \mathcal{S}}{\delta \mathbf{s}} = 0$). \end{itemize}
\subsection{Lagrangian-Based Models Include Energy-Based Models}
Consider a Lagrangian-based model whose Lagrangian function does not depend on $\dot{\mathbf{s}}_t$, i.e. $L$ is of the form \begin{equation} L(\theta,\mathbf{x}_t,\mathbf{s}_t,\dot{\mathbf{s}}_t) = E(\theta,\mathbf{x}_t,\mathbf{s}_t). \end{equation} Further suppose that the input signal $\mathbf{x}$ is static, i.e. $\mathbf{x}_t = x$ for any $t$. Denote by $s_\star$ the equilibrium state characterized by $\frac{\partial E}{\partial s}(\theta,x,s_\star) = 0$. Then the trajectory $\mathbf{s}$ constantly equal to $s_\star$ (i.e. such that $\mathbf{s}_t = s_\star$ for all $t$) is a stationary solution of the action functional \begin{equation} \mathcal{S}(\theta,x,\mathbf{s}) = \int_0^T E(\theta,x,\mathbf{s}_t) dt. \end{equation} Indeed, for any variation $\delta \mathbf{s}$ around $\mathbf{s}$, we have $\delta \mathcal{S} = \int_0^T \delta E(\theta,x,\mathbf{s}_t) dt = \int_0^T \frac{\partial E}{\partial s}(\theta,x,s_\star) \cdot \delta \mathbf{s}_t dt = 0$. In this sense, energy-based models are special instances of Lagrangian-based models. Furthermore, assuming that the target signal $\mathbf{y}$ and the cost function $c_t$ are also static (i.e. $\mathbf{y}_t = y$ and $c_t = c$ at any time $t$), then the loss is equal to $\mathcal{L}_0^T = \int_0^T c(s_\star,y) dt$, which is the loss in the static setting (up to a constant $T$). In this case, the EqProp learning algorithm for Lagrangian-based models boils down to the EqProp learning algorithm for energy-based models (up to a constant $T$).
\section{Equilibrium Propagation in Stochastic Systems} \label{sec:stochastic-setting}
Unlike neural networks trained on digital computers which can reliably process information in a deterministic way, physical systems (including analog circuits and biological networks) are subject to noise. In this section we present an extension of the equilibrium propagation framework to stochastic systems, which allows us to take such forms of noise into account, and may therefore be useful both from the neuromorphic and neuroscience points of view.
We note that the question of whether the brain is stochastic or deterministic is controversial. However, even if the brain were deterministic, the precise trajectory of the neural activity is likely to be fundamentally unpredictable (i.e. chaotic) and thus easier to study statistically. In this case, the brain can still be usefully modelled with probability distributions (using probability theory or ergodic theory).
\subsection{From Deterministic to Stochastic Systems}
In the stochastic setting, when presented with an input $x$, instead of an equilibrium state $s_\star$, the model defines a probability distribution $p_\star(s)$ over the space of possible configurations $s$. Thus, rather than a stationarity condition of the form $\frac{\partial E}{\partial s}(\theta, x, s_\star) = 0$, we now have an equilibrium distribution $p_\star(s)$ such that \begin{equation}
\label{eq:free-boltzmann-distribution}
p_\star(s) = \frac{e^{-E(\theta, x, s)}}{Z_\star}, \qquad \text{with} \qquad Z_\star = \int e^{-E(\theta, x, s)}ds. \end{equation} The probability distribution defined by $p_\star(s)$ is called the Boltzmann distribution (or Gibbs distribution), and the normalizing constant $Z_\star$ is called the \textit{partition function}.
In this setting, the loss that we want to minimize is the expected cost over the equilibrium distribution \begin{equation}
\label{eq:loss-stochastic}
\mathcal{L}_{\rm sto}(\theta, x, y) = \mathbb{E}_{s \sim p_\star(s)} \left[ C \left( s, y \right) \right]. \end{equation} We note that $\mathcal{L}_{\rm sto}$ depends on $\theta$ and $x$ through the equilibrium distribution $p_\star(s)$.
\subsection{Gradient Formula}
As in the deterministic framework, the stochastic version of equilibrium propagation makes use of the total energy function $E(\theta, x, s) + \beta \; C(s, y)$. The notion of nudged equilibrium state ($s_\star^\beta$) is replaced accordingly by a nudged equilibrium distribution $p_\star^\beta(s)$, which is the Boltzmann distribution associated with the total energy function, i.e. \begin{equation}
\label{eq:nudged-boltzmann-distribution}
p_\star^\beta(s) = \frac{e^{-E(\theta, x, s) - \beta \; C(s, y)}}{Z_\star^\beta}, \qquad \text{with} \qquad Z_\star^\beta = \int e^{-E(\theta, x, s) - \beta \; C(s, y)} ds. \end{equation}
The following theorem extends Theorem \ref{thm:static-eqprop} to stochastic systems.
\begin{thm}[\citet{scellier2017equilibrium}]
\label{thm:equi-prop-langevin}
The gradient of the objective function with respect to $\theta$ is equal to
\begin{equation}
\label{eq:thm-stochastic}
\frac{\partial \mathcal{L}_{\rm sto}}{\partial \theta}(\theta, x, y) =
\left. \frac{d}{d \beta} \right|_{\beta=0} \mathbb{E}_{s \sim p_\star^\beta(s)} \left[
\frac{\partial E}{\partial \theta} \left( \theta, x, s \right) \right].
\end{equation}
Furthermore, if the energy function is sum-separable (in the sense of Eq.~\ref{eq:sum-separability}), then
\begin{equation}
\frac{\partial \mathcal{L}_{\rm sto}}{\partial \theta_k}(\theta, x, y) =
\left. \frac{d}{d \beta} \right|_{\beta=0} \mathbb{E}_{s \sim p_\star^\beta(s)} \left[
\frac{\partial E_k}{\partial \theta_k} \left( \theta_k, \{ x, s \}_k \right) \right].
\end{equation} \end{thm}
\begin{proof}[Proof of Theorem \ref{thm:equi-prop-langevin}]
Recall that the total energy function is by definition $F(\theta, \beta, s) = E(\theta, x, s) + \beta \; C(s, y)$, where the notations $x$ and $y$ are dropped for simplicity (since they do not play any role in the proof). We also (re)define $Z_\theta^\beta = \int e^{-F(\theta, \beta, s)}ds$, the partition function, as well as $p_\theta^\beta(s) = \frac{e^{-F(\theta, \beta, s)}}{Z_\theta^\beta}$, the corresponding Boltzmann distribution. Recalling the definition of the loss $\mathcal{L}_{\rm sto}$ (Eq.~\ref{eq:loss-stochastic}), and using the fact that $\frac{\partial F}{\partial \beta} = C$ and that $\frac{\partial F}{\partial \theta} = \frac{\partial E}{\partial \theta}$, the formula to show (Eq.~\ref{eq:thm-stochastic}) is a particular case of the following formula, evaluated at the point $\beta=0$:
\begin{equation}
\label{eq:lemma-eqprop-stochastic}
\frac{d}{d\theta} \mathbb{E}_{s \sim p_\theta^\beta(s)} \left[ \frac{\partial F}{\partial \beta} \left( \theta, \beta, s \right) \right]
= \frac{d}{d\beta} \mathbb{E}_{s \sim p_\theta^\beta(s)} \left[ \frac{\partial F}{\partial \theta} \left( \theta, \beta, s \right) \right].
\end{equation}
Therefore, in order to prove Theorem \ref{thm:equi-prop-langevin}, it is sufficient to prove Eq.~\ref{eq:lemma-eqprop-stochastic}. We do this in two steps. First, the cross-derivatives of the log-partition function $\ln \left( Z_\theta^\beta \right)$ are equal:
\begin{equation}
\label{eq:cross-derivatives}
\frac{d}{d \theta} \frac{d}{d \beta} \ln \left( Z_\theta^\beta \right)
= \frac{d}{d \beta} \frac{d}{d \theta} \ln \left( Z_\theta^\beta \right).
\end{equation}
Second, we have
\begin{equation}
\label{eq:derivative-beta}
\frac{d}{d\beta} \ln \left( Z_\theta^\beta \right)
= \mathbb{E}_{s \sim p_\theta^\beta(s)} \left[ \frac{\partial F}{\partial \beta}(\theta, \beta, s) \right],
\end{equation}
and similarly
\begin{equation}
\label{eq:derivative-theta}
\frac{d}{d\theta} \ln \left( Z_\theta^\beta \right)
= \mathbb{E}_{s \sim p_\theta^\beta(s)} \left[ \frac{\partial F}{\partial \theta}(\theta, \beta, s) \right].
\end{equation}
Plugging Eq.~\ref{eq:derivative-beta} and Eq.~\ref{eq:derivative-theta} in Eq.~\ref{eq:cross-derivatives}, we get Eq.~\ref{eq:lemma-eqprop-stochastic}.
Hence the result. \end{proof}
We note that Eq.~\ref{eq:lemma-eqprop-stochastic} is a stochastic variant of the fundamental lemma of EqProp (Lemma \ref{lma:main}). The quantity $-\ln \left( Z_\theta^\beta \right)$ is called the \textit{free energy} of the system.
\subsection{Langevin Dynamics}
The prototypical dynamical system to sample from the equilibrium distribution $p_\star(s)$ (Eq.~\ref{eq:free-boltzmann-distribution}) is the \textit{Langevin dynamics}, which we describe here. Recall from Chapter \ref{chapter:hopfield} the gradient dynamics $\frac{d s_t}{dt} = -\frac{\partial E}{\partial s}(\theta, x, s_t)$. To go from this (deterministic) gradient dynamics to the (stochastic) Langevin dynamics, we add a new term (a Brownian term) which models a form of noise: \begin{equation}
\label{eq:free-langevin-dynamics}
d S_t = - \frac{\partial E}{\partial s} \left( \theta, x, S_t \right) dt + \sqrt{2} \; dB_t. \end{equation} In this expression, $B_t$ is a mathematical object called a \textit{Brownian motion}. Instead of defining $B_t$ formally, we give here an intuitive definition. Intuitively, each increment $dB_t$ (between time $t$ and time $t+dt$) can be thought of as a normal random variable with mean $0$ and variance $dt$, which is ``independent of past increments''. By following this noisy form of gradient descent with respect to the energy function $E$, the state of the system ($S_t$) settles to the Boltzmann distribution. This can be proved using the Kolmogorov forward equation (a.k.a. Fokker-Planck equation) for diffusion processes.
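This convergence to the Boltzmann distribution can be observed numerically with an Euler-Maruyama discretization of Eq.~\ref{eq:free-langevin-dynamics}. The energy below is an illustrative choice, not a model from the text: $E(s)=\frac{1}{2}s^2$, whose Boltzmann distribution is the standard normal $\mathcal{N}(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of  d S_t = -dE/ds(S_t) dt + sqrt(2) dB_t
# for the illustrative energy E(s) = 1/2 s^2 (Boltzmann distribution: N(0, 1)).
dt, n_chains, n_steps = 0.01, 5000, 2000
s = 3.0 * np.ones(n_chains)          # start all chains far from equilibrium
for _ in range(n_steps):
    s = s - s * dt + np.sqrt(2 * dt) * rng.standard_normal(n_chains)

print(s.mean(), s.var())             # close to the Boltzmann statistics (0, 1)
```

Despite the deterministic, far-from-equilibrium initialization, the empirical mean and variance of the chains approach those of the Boltzmann distribution (up to $O(dt)$ discretization bias and Monte Carlo error).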
Here we have chosen the constant $\sqrt{2}$ in the Langevin dynamics so that the `temperature' of the system is $1$. More generally, if the Brownian motion is scaled by a factor $\sigma(\theta, x)$, i.e. if the dynamics is of the form $d S_t = - \frac{\partial E}{\partial s} \left( \theta, x, S_t \right) dt + \sigma(\theta, x) \; dB_t$, then the equilibrium distribution is the Boltzmann distribution with temperature $T=\frac{1}{2}\sigma^2(\theta, x)$, i.e. $p_\star(s) \propto e^{-E(\theta, x, s)/T}$. We note that if $\sigma(\theta, x)=\sqrt{2}$ then $T=1$.
\subsection{Equilibrium Propagation in Langevin Dynamics}
In the setting of Langevin dynamics, EqProp takes the following form.
\paragraph{Free phase (inference).} In the free phase, the network is shown an input $x$ and the state of the system follows the Langevin dynamics of Eq.~\ref{eq:free-langevin-dynamics}. `Free samples' are drawn from the equilibrium distribution $p_\star(s) \propto e^{-E(\theta, x, s)}$.
\paragraph{Nudged phase.} In the nudged phase, a term $-\beta \; \frac{\partial C}{\partial s}$ is added to the dynamics of Eq.~\ref{eq:free-langevin-dynamics}, where $\beta$ is a scalar hyperparameter (the nudging factor). Denoting $S_t^\beta$ the state of the network at time $t$ in the nudged phase, the dynamics reads: \begin{equation}
d S_t^\beta = \left[ - \frac{\partial E}{\partial s} \left( \theta, x, S_t^\beta \right) - \beta \; \frac{\partial C}{\partial s} \left( S_t^\beta, y \right) \right] dt + \sqrt{2} \; dB_t. \end{equation} Here for readability we use the same notation $B_t$ for the Brownian motion of the nudged phase, but it should be understood that this is a new Brownian motion, independent of the one used in the free phase. `Nudged samples' are drawn from the nudged distribution $p_\star^\beta(s) \propto e^{-E(\theta, x, s)- \beta \; C(s, y)}$.
\paragraph{Gradient estimate.} Finally, the gradient of the loss $\mathcal{L}_{\rm sto}$ of Eq.~\ref{eq:loss-stochastic} can be estimated using samples from the free and nudged distributions:
\begin{equation}
\widehat{\nabla}_\theta(\beta) = \frac{1}{\beta} \left( \mathbb{E}_{s \sim p_\star^\beta(s)} \left[
\frac{\partial E}{\partial \theta} \left( \theta, x, s \right) \right]
- \mathbb{E}_{s \sim p_\star(s)} \left[
\frac{\partial E}{\partial \theta} \left( \theta, x, s \right) \right] \right). \end{equation}
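To make this concrete, the sketch below estimates the two expectations by Langevin sampling for a toy scalar model. The energy $E(\theta,x,s)=\frac{1}{2}(s-\theta x)^2$ and cost $C(s,y)=\frac{1}{2}(s-y)^2$ are illustrative assumptions; for this choice the free equilibrium distribution is $\mathcal{N}(\theta x, 1)$ and the exact gradient of $\mathcal{L}_{\rm sto}$ is $x(\theta x - y)$, which gives a ground truth to compare against. We use the symmetrized variant of the estimator (nudging with $\pm\beta$) to reduce its bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar model (illustrative assumption):
#   E(theta, x, s) = 1/2 (s - theta*x)^2,  C(s, y) = 1/2 (s - y)^2.
# Free equilibrium distribution: N(theta*x, 1).
theta, x, y = 0.5, 1.0, -0.5
beta, dt, n_chains, n_steps = 0.2, 0.02, 10000, 1000

def langevin_samples(beta):
    """Run independent Langevin chains on F = E + beta*C; return final states."""
    s = rng.standard_normal(n_chains)
    for _ in range(n_steps):
        drift = -((s - theta * x) + beta * (s - y))   # -dF/ds
        s = s + drift * dt + np.sqrt(2 * dt) * rng.standard_normal(n_chains)
    return s

def mean_dE_dtheta(s):
    return np.mean(-(s - theta * x) * x)              # dE/dtheta = -(s - theta*x) x

# Symmetrized EqProp estimate of dL_sto/dtheta from free/nudged samples.
grad_est = (mean_dE_dtheta(langevin_samples(+beta))
            - mean_dE_dtheta(langevin_samples(-beta))) / (2 * beta)

grad_exact = x * (theta * x - y)     # analytic gradient of E_{p*}[C(s, y)]
print(grad_est, grad_exact)
```

With enough independent chains, the sample-based estimate matches the analytic gradient up to the $O(\beta^2)$ nudging bias and the Monte Carlo error.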
\section{Contrastive Meta-Learning} \label{sec:contrastive-meta-learning}
Recently, \citet{zucchet2021contrastive} introduced the \textit{contrastive meta-learning} framework, where they propose to train the meta-parameters of a meta-learning model using the EqProp method. In this section, we briefly present the setting of meta-learning and show how \citet{zucchet2021contrastive} derive the contrastive meta-learning rule.
\subsection{Meta-Learning and Few-Shot Learning}
Meta-learning, or \textit{learning to learn}, is a broad field that encompasses \textit{hyperparameter optimization}, \textit{few-shot learning}, and many other use cases. Here, for concreteness, we present the setting of few-shot learning.
In the setting of few-shot learning, the aim is to build a system that is able to learn (or `adapt' to) a given task $\mathcal{T}$ when only very limited data is available for that task.
The system should be able to do so for a variety of tasks coming from a distribution of tasks $p(\mathcal{T})$.
In this setting, the system has two types of parameters: a \textit{meta-parameter} $\theta$ which is shared across all tasks, and a \textit{task-specific parameter} $\phi$ which can be adapted to a given task. In the \textit{adaptation phase}, the task-specific parameter $\phi$ adapts to some task $\mathcal{T}$ using a training set $\mathcal{D}_{\rm train}$ corresponding to that task. The resulting value of the task-specific parameter after this adaptation phase is denoted $\phi(\theta,\mathcal{D}_{\rm train})$, which depends on both $\theta$ and $\mathcal{D}_{\rm train}$. The performance of the resulting $\phi(\theta,\mathcal{D}_{\rm train})$ is then evaluated on a test set $\mathcal{D}_{\rm test}$ from the same task $\mathcal{T}$. This performance is denoted $L(\phi(\theta,\mathcal{D}_{\rm train}),\mathcal{D}_{\rm test})$, where $L$ is a loss function. The goal of meta-learning is then to find the value of the meta-parameter $\theta$ that minimizes the expected loss $\mathcal{R}(\theta) = \mathbb{E}_{(\mathcal{D}_{\rm train},\mathcal{D}_{\rm test})} \left[ L(\phi(\theta,\mathcal{D}_{\rm train}),\mathcal{D}_{\rm test}) \right]$ over pairs of training/test sets $(\mathcal{D}_{\rm train},\mathcal{D}_{\rm test})$ coming from the distribution of tasks $p(\mathcal{T})$. In other words, the goal is to find $\theta$ that generalizes well across tasks from the distribution $p(\mathcal{T})$.
\subsection{Contrastive Meta-Learning}
The idea of the \textit{contrastive meta-learning} framework of \citet{zucchet2021contrastive} is the following. In the adaptation phase, the task-specific parameter $\phi$ minimizes an \textit{inner loss} $L^{\rm in}$, so that \begin{equation}
\phi(\theta,\mathcal{D}_{\rm train}) = \underset{\phi}{\arg\min} \; L^{\rm in}(\theta,\mathcal{D}_{\rm train},\phi). \end{equation} In the setting of \textit{regularization learning} for example, the inner loss is of the form $L^{\rm in}(\theta,\mathcal{D}_{\rm train},\phi) = L(\phi,\mathcal{D}_{\rm train}) + R(\theta,\phi)$, where $L$ is the same loss as the one used on the test set, and $R(\theta,\phi)$ is a regularization term. Exploiting the fact that, at the end of the adaptation phase, the task-specific parameter $\phi(\theta,\mathcal{D}_{\rm train})$ satisfies the `equilibrium condition' $\frac{\partial L^{\rm in}}{\partial \phi}(\theta,\mathcal{D}_{\rm train},\phi(\theta,\mathcal{D}_{\rm train})) = 0$, \citet{zucchet2021contrastive} then propose to use the EqProp method to compute the gradients (with respect to $\theta$) of the meta loss \begin{equation}
\mathcal{L}_{\rm meta} = L^{\rm out}(\phi(\theta,\mathcal{D}_{\rm train}),\mathcal{D}_{\rm test}), \end{equation} where $L^{\rm out}$ is a so-called \textit{outer loss}, e.g. $L^{\rm out} = L$ in regularization learning. To this end, they introduce the `total loss' \begin{equation}
L^{\rm total}(\theta,\beta,\phi) = L^{\rm in}(\theta,\mathcal{D}_{\rm train},\phi) + \beta \; L^{\rm out}(\phi,\mathcal{D}_{\rm test}), \end{equation} where $\beta$ is a scalar parameter (the nudging factor). Then they consider \begin{equation}
\phi_\star^\beta = \underset{\phi}{\arg\min} \; L^{\rm total}(\theta,\beta,\phi), \end{equation} defined for any $\beta$, and they exploit the formula \begin{equation}
\frac{\partial \mathcal{L}_{\rm meta}}{\partial \theta} = \left. \frac{d}{d\beta} \right|_{\beta=0} \frac{\partial L^{\rm in}}{\partial \theta} \left( \theta, \mathcal{D}_{\rm train}, \phi_\star^\beta \right), \end{equation} which is a reformulation of Theorem \ref{thm:static-eqprop}.
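As a worked toy instance of regularization learning, consider ridge regression, where the meta-parameter $\theta$ is the regularization strength. The data and the particular choices of $L^{\rm in}$ and $L^{\rm out}$ below are illustrative assumptions. Both the adaptation phase and the nudged phase reduce to linear solves, and the contrastive (here symmetrized, with $\pm\beta$) estimate of the meta-gradient can be checked against a finite difference of the meta-loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative regularization-learning instance: the inner loss is ridge
# regression with meta-parameter theta (the regularization strength),
#   L_in(theta, phi) = 1/2 ||A_tr phi - b_tr||^2 + theta/2 ||phi||^2,
# and the outer loss is the unregularized test error.
A_tr, b_tr = rng.standard_normal((20, 5)), rng.standard_normal(20)
A_te, b_te = rng.standard_normal((30, 5)), rng.standard_normal(30)
theta, beta = 0.3, 1e-3

def phi_star(theta, beta):
    """Minimizer of L_total = L_in + beta * L_out (both quadratic in phi)."""
    H = A_tr.T @ A_tr + theta * np.eye(5) + beta * (A_te.T @ A_te)
    g = A_tr.T @ b_tr + beta * (A_te.T @ b_te)
    return np.linalg.solve(H, g)

def dLin_dtheta(phi):
    return 0.5 * phi @ phi               # d/dtheta of theta/2 ||phi||^2

# Symmetrized contrastive estimate of the meta-gradient.
meta_grad = (dLin_dtheta(phi_star(theta, +beta))
             - dLin_dtheta(phi_star(theta, -beta))) / (2 * beta)

# Check against a finite difference of the meta-loss L_out(phi(theta)).
def meta_loss(theta):
    r = A_te @ phi_star(theta, 0.0) - b_te
    return 0.5 * r @ r

eps = 1e-5
meta_grad_fd = (meta_loss(theta + eps) - meta_loss(theta - eps)) / (2 * eps)
print(meta_grad, meta_grad_fd)
```

Note that the meta-gradient is obtained purely from the inner-loss quantity $\frac{\partial L^{\rm in}}{\partial \theta} = \frac{1}{2}\|\phi\|^2$ evaluated at the two adapted solutions, without differentiating through the adaptation phase.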
More generally, the contrastive meta-learning method applies to any functions $L^{\rm in}$ and $L^{\rm out}$, and to any bilevel optimization problem where the aim is to optimize $\mathcal{L}_{\rm meta}(\theta) = L^{\rm out}(\theta,\phi(\theta))$ with respect to $\theta$, under the constraint that $\frac{\partial L^{\rm in}}{\partial \phi}(\theta,\phi(\theta)) = 0$.
\chapter{Conclusion}
In this thesis, we have presented a mathematical framework for training systems whose state or dynamics is the solution of a variational equation, while preserving the key benefits of backpropagation. This framework may have implications both for neuromorphic computing and for neuroscience.
\section{Implications for Neuromorphic Computing}
Current deep learning research is grounded in a very general and powerful mathematical principle: automatic differentiation for backpropagating error gradients in differentiable neural networks. This mathematical principle is at the heart of all deep learning libraries (TensorFlow, PyTorch, Theano, etc.). The emergence of such software libraries has greatly eased deep learning research and fostered the large-scale development of parallel processors for deep learning (e.g. GPUs and TPUs). However, these processors are power inefficient by orders of magnitude if we take the brain as a benchmark. The rapid increase in energy consumption raises concerns as the use of deep learning systems in society keeps growing \citep{strubell2019energy}.
At a more abstract level of description, the backpropagation algorithm of conventional deep learning allows neural networks to be trained by stochastic gradient descent (SGD). In this thesis, we have presented a mathematical framework which preserves the key benefits of SGD, but opens a path for implementation on neuromorphic processors which directly exploit physics and the in-memory computing concept to perform the desired computations. Building neuromorphic systems that can match the performance of current deep learning systems is still in the future, but the potential speedup and power reduction is extremely appealing. This would also allow us to scale neural networks to sizes beyond the reach of current GPU-based deep learning models. Besides, by mimicking the working mechanisms of the brain more closely, such neuromorphic systems could also inform neuroscience.
\section{Implications for Neuroscience}
How do the biophysical mechanisms of neural computation give rise to intelligence? Ultimately, if we want to explain how our thoughts, memories and behaviours emerge from neural activities, we need a mathematical theory. Here, we explain how the mathematical framework presented in this thesis may help for this purpose.
\subsection{Variational Formulations of Neural Computation?}
A number of ideas at the core of today's deep learning systems draw inspiration from the brain. However, these deep neural networks are not biologically realistic in detail. In particular, the neuron models may look overly simplistic from a neurophysiological point of view. In these models, the state of a neuron is described by a single number, which can be thought of as its firing rate. A real neuron, on the other hand, like any other biological cell, is an extraordinarily complex machinery, composed of a very large quantity of proteins interacting in complex ways. Because of this complexity, the hope of ever coming up with a mathematical theory of the brain may seem vain.
This complexity should not discourage us, however.
One key point is that not all details of neurobiology may be relevant to explain the fundamental working mechanisms of the brain that give rise to emerging properties such as memory and learning. \citet{hertz1991introduction} puts it in these words: "Just as most of the details of the separate parts of a large ship are unimportant in understanding the behaviour of the ship (e.g. that it floats or transport cargo), so many details of single nerve cells may be unimportant in understanding the collective behaviour of a network of cells". Which biophysical characteristics of neural computation are essential to explain how information is processed in brains, and which can be abstracted away? While current deep learning systems use \textit{rate models} (i.e. neuron models relying on the neuron's firing rate), a simple but more realistic neuron model is the leaky integrate-and-fire (LIF) model, which accounts for the \textit{spikes} (a.k.a. \textit{action potentials}) and the electrical activity of neurons at each point in time. A more elaborate model is the Hodgkin-Huxley model of action potentials, which takes into account ion channels to describe how spikes are initiated. At a more detailed level, real neurons have a spatial layout, and each part of the neuron has its own voltage value and ion concentration values. In recent years, more realistic neuron models that include spikes \citep{zenke2017superspike,payeur2020burst} and multiple compartments \citep{bengio2016feedforward,guerguiev2017towards,sacramento2018dendritic,richards2019dendritic,payeur2020burst} have been proposed for deep learning. Can we figure out which elements of neurobiology are essential to explain the mechanisms underlying intelligence, abstracting out those that are not necessary to understand these mechanisms?
In this thesis, we have presented a mathematical theory which applies to a broad class of systems whose state or dynamics is the solution of a variational equation. Given the predominance of variational principles in physics, a question arises: can neural dynamics in the brain be derived from variational principles too? We note that various variational principles for neuroscience modelling have been proposed \citep{friston2010free,betti2019cognitive,dold2019lagrangian,kendall2021gradient}.
\subsection{SGD Hypothesis of Learning}
Today, the neural networks of conventional deep learning are trained by stochastic gradient descent (SGD), using the backpropagation algorithm to compute the loss gradients. The backpropagation algorithm is not biologically realistic, as it requires that neurons emit two quite different types of signals: an activation signal in the forward pass, and a signed gradient signal in the backward pass. Real neurons, on the other hand, communicate with only one sort of signal -- the \textit{spikes}. Worse, the backpropagation through time (BPTT) algorithm used in recurrent networks requires storing the past hidden states of the neurons.
Although these deep neural networks are not biologically realistic in their details, they have proved to be valuable not just for AI applications, but also as models for neuroscience. In recent years, deep learning models have been used for neuroscience modelling of the visual and auditory cortex. Deep neural networks have been found to outperform other biologically plausible models at matching neural representations in the visual cortex \citep{mante2013context,cadieu2014deep,kriegeskorte2015deep,sussillo2015neural,yamins2016using,pandarinath2018inferring} and at predicting auditory neuron responses \citep{kell2018task}. Because SGD-optimized neural networks are state-of-the-art at solving a variety of tasks in AI, and also state-of-the-art models at predicting neocortical representations, a hypothesis emerges: the cortex may possess general-purpose learning algorithms that implement SGD. More generally, a view emerges that the fundamental principles of current deep learning systems may provide a useful theoretical framework for gaining insight into the principles of neural computation \citep{richards2019deep}.
While the backpropagation algorithm is not biologically realistic, a more reasonable hypothesis is that the brain uses a different mechanism to compute the loss gradients required to perform SGD. A long-standing idea is that the loss gradients may be encoded in the difference of neural activities to drive synaptic changes \citep{hinton1988learning,lillicrap2020backpropagation}. If variational principles for neural dynamics exist, and if their corresponding energy function or Lagrangian function has the sum-separability property, then EqProp would suggest a learning mechanism involving local learning rules and compatible with optimization by SGD. Whereas in the setting of energy-based models EqProp suggests that gradients are encoded in the difference of \textit{neural activities} (as hypothesized by \citet{lillicrap2020backpropagation}), in the Lagrangian-based setting EqProp suggests that gradients are encoded in the difference of \textit{neural trajectories}.
We note that the SGD hypothesis of learning also raises several questions. First, what is the loss function that is optimized? Unlike in conventional machine learning, there are likely a variety of such loss functions, which may vary across brain areas and time \citep{marblestone2016toward}. Second, SGD dynamics depend on the metric that we choose for the space of synaptic weights \citep{surace2020choice}. Also, while the SGD hypothesis is reasonable for the function of the cortex, other components of the brain such as the hippocampus may use different learning algorithms.
\subsection{The Role of Evolution}
In this manuscript we have emphasized the importance of learning. The ability for individuals to learn within their lifetime is indeed an essential component of intelligence. But learning alone is not the only key to human and animal intelligence. Far from being a blank slate, at birth, the brain is pre-wired and structured. This structure provides us, straight from birth, with innate intuitions, abilities, and mechanisms which predispose us to learn much more quickly \citep[Chapters 3 and 4]{dehaene2020we}. These innate structures and mechanisms have arisen through evolution. Machine learning models account for these innate aspects of intelligence using \textit{inductive biases} (or \textit{priors}). Traditionally, these inductive biases are manually crafted. However, given the complexity of the brain, one may wonder whether one will ever manage to reverse-engineer the inductive biases of the brain `by hand'.
Evolution by natural selection can be regarded as another optimization process where, loosely speaking, the `adjustable parameters' are the \textit{genes}, and the `objective' that is maximized is the \textit{fitness} of the individual. The human genome has around $3 \times 10^9$ `parameters' (base pairs). Just like moving from manually crafted computations (in classical AI) to learned computations (in machine learning) proved extremely fruitful both for AI and neuroscience modelling, one may benefit from `evolving' inductive biases, by mimicking the process of evolution in some way. One branch of machine learning which is relevant to address questions related to the optimization process carried out by evolution is \textit{meta-learning} (Section \ref{sec:contrastive-meta-learning}).
A related path, proposed by \citet{zador2019critique}, is to reverse-engineer the program encoded in the genome which wires up the brain during embryonic development.
We note that the learning rules and loss functions of the brain have also arisen through evolution and are possibly much more complex than in the traditional view of machine learning (as we have formulated it in this manuscript).
\section{Synergy Between Neuroscience and AI}
Is a mathematical theory of the brain all we need to understand the brain? Or do we need to build brain-like machines to claim that we understand it? The answer depends, of course, on what we mean by `understanding'; this is one of the fundamental questions of the philosophy of science. In many fields of science, we have mathematical models of objects that we cannot build (for example, we have physics models of the Sun, but we cannot build one). Although a theory is all we need in principle to explain the measurements of experimentally accessible variables, it also seems clear that, if we can build a brain, or simulate one, our `understanding' of the brain will further improve, and the underlying theory will become more plausible.
Can we simulate a brain in software? In the introductory chapter, we argued that with current digital hardware this strategy would at best be extremely slow and power-hungry, and more likely just unfeasible. Just as it is impossible for statistical physicists to simulate in software the internal dynamics of a fluid composed of $10^{23}$ particles, simulating a brain composed of $10^{11}$ neurons and $10^{15}$ synapses (and many, many more proteins) seems unfeasible. In this respect, the development of appropriate neuromorphic systems will eventually be necessary to emulate a brain.
More likely, by making it possible to run and train neural networks with more elaborate neural dynamics that more closely mimic those of real neurons, the development of neuromorphic hardware will help us come up with new hypotheses about the working mechanisms of the brain. As we build more brain-like AI systems, and as the performance of these AI systems improves, we can formulate new mathematical theories of the brain. Just as the rise of deep learning as a leading approach to AI has eased the flow of information between different fields of AI (computer vision, speech recognition, natural language processing, etc.), we can expect that the development of neuromorphic systems, together with mathematical frameworks to train them, will ease the flow of information between AI and neuroscience too.
The problem of intelligence is thus a problem both for the natural sciences and for engineering. It counts among the greatest scientific problems, together with the problem of the origin of life, the problem of the origin of the universe, and many others. One special feature of the problem of intelligence is that, as we make progress towards solving it, we can use the knowledge that we acquire to build machines that help us solve other scientific problems more easily. For example, most recently, a program called AlphaFold 2 promises to help us discover the 3D structure of proteins much more rapidly than prior methods, which is key to understanding most biological mechanisms in living organisms.
\appendix \chapter{Gradient Estimators} \label{chapter:appendix}
Recall from Theorem \ref{thm:static-eqprop} that the loss gradient is equal to \begin{equation}
\frac{\partial \mathcal{L}}{\partial \theta}(\theta,x,y) = \left. \frac{d}{d\beta} \right|_{\beta=0} \frac{\partial E}{\partial \theta} \left( \theta,x,s_\star^\beta \right). \end{equation} The \textit{one-sided gradient estimator} is defined as \begin{equation} \widehat{\nabla}_\theta(\beta) = \frac{1}{\beta} \left( \frac{\partial E}{\partial \theta}(\theta, x, s_\star^\beta) - \frac{\partial E}{\partial \theta}(\theta, x, s_\star^0) \right), \end{equation} and the \textit{symmetric gradient estimator} is defined as \begin{equation} \widehat{\nabla}_\theta^{\rm sym}(\beta) = \frac{1}{2\beta} \left( \frac{\partial E}{\partial \theta}(\theta, x, s_\star^\beta) - \frac{\partial E}{\partial \theta}(\theta, x, s_\star^{-\beta}) \right). \end{equation}
\begin{lma} \label{lma:gradient-estimators} Let $\theta$, $x$ and $y$ be fixed. Assuming that the function $\beta \mapsto \frac{\partial E}{\partial \theta}(\theta,x,s_\star^\beta)$ is three times differentiable, we have, as $\beta \to 0$: \begin{align} \widehat{\nabla}_\theta(\beta) &= \frac{\partial \mathcal{L}}{\partial \theta}(\theta,x,y) + \frac{A}{2} \beta + O(\beta^2), \label{eq:estimate-EP}\\ \widehat{\nabla}_\theta^{\rm sym}(\beta) &= \frac{\partial \mathcal{L}}{\partial \theta}(\theta,x,y) + O(\beta^2),\label{eq:estimate-EP-sym} \end{align}
where $A = \left. \frac{d^2}{d\beta^2} \right|_{\beta=0} \frac{\partial E}{\partial \theta}(\theta,x,s_\star^\beta)$ is a constant (independent of $\beta$, but dependent on $\theta$, $x$ and $y$). \end{lma}
Lemma \ref{lma:gradient-estimators} shows that the one-sided estimator $\widehat{\nabla}_\theta(\beta)$ possesses a first-order error term in $\beta$, which the symmetric estimator $\widehat{\nabla}_\theta^{\rm sym}(\beta)$ eliminates.
\begin{proof}
Define $f(\beta) = \frac{\partial E}{\partial \theta}(\theta,x,s_\star^\beta)$ and note that $f'(0) = \frac{\partial \mathcal{L}}{\partial \theta}(\theta,x,y)$ by Theorem \ref{thm:static-eqprop}, and that $f''(0) = \left. \frac{d^2}{d\beta^2} \right|_{\beta=0} \frac{\partial E}{\partial \theta}(\theta, x, s_\star^\beta)$. As $\beta \to 0$, we have the Taylor expansion $f(\beta) = f(0) + \beta \; f'(0) + \frac{\beta^2}{2} f''(0) + O(\beta^3)$. With these notations, the one-sided estimator reads $\widehat{\nabla}_\theta(\beta) = \frac{1}{\beta} \left( f(\beta) - f(0) \right) = f'(0) + \frac{\beta}{2} f''(0) + O(\beta^2)$, and the symmetric estimator, which is the mean of $\widehat{\nabla}_\theta(\beta)$ and $\widehat{\nabla}_\theta(-\beta)$, reads $\widehat{\nabla}_\theta^{\rm sym}(\beta) = f'(0) + O(\beta^2)$. \end{proof}
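The error orders in Lemma~\ref{lma:gradient-estimators} are easy to verify numerically. The sketch below uses an invented one-dimensional example (our choice, not from the text): energy $E(\theta,s)=(s-\theta)^2/2$ and cost $C(s,y)=(s-y)^2/2$, for which $s_\star^\beta=(\theta+\beta y)/(1+\beta)$ in closed form and $\frac{\partial \mathcal{L}}{\partial \theta}=\theta-y$.

```python
# Toy check of the estimator error orders (illustration only; this
# one-dimensional energy is an invented example, not from the text).
# E(theta, s) = (s - theta)^2 / 2,  C(s, y) = (s - y)^2 / 2, so the
# augmented energy E + beta*C is minimized at s = (theta + beta*y)/(1 + beta).
theta, y = 0.3, 1.0

def s_star(beta):
    # closed-form minimizer of the augmented energy
    return (theta + beta * y) / (1.0 + beta)

def dE_dtheta(s):
    # partial derivative of E(theta, s) with respect to theta
    return -(s - theta)

def one_sided(beta):
    return (dE_dtheta(s_star(beta)) - dE_dtheta(s_star(0.0))) / beta

def symmetric(beta):
    return (dE_dtheta(s_star(beta)) - dE_dtheta(s_star(-beta))) / (2.0 * beta)

true_grad = theta - y  # closed form of dL/dtheta for L = (theta - y)^2 / 2
```

Halving $\beta$ should roughly halve the one-sided error and quarter the symmetric error, in agreement with the $O(\beta)$ and $O(\beta^2)$ terms of the lemma.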
\end{document} |
\begin{document}
\title[Mirror model] {Lower bound for the escape probability in the Lorentz Mirror Model on $\mathbb Z^2$}
\author{Gady Kozma} \address{Gady Kozma, Department of Mathematics, Ziskind Building, Weizmann Institute of Science, Rehovot 76100, Israel} \email{[email protected]} \author{Vladas Sidoravicius} \address{Vladas Sidoravicius, IMPA, Estrada Dona Castorina 110, 22460-320, Rio de Janeiro, Brasil.} \email{[email protected]}
\begin{abstract} We show that in the Lorentz mirror model, at any density of mirrors, \[ \mathbb{P}(0\leftrightarrow\partial Q(n))\ge\frac{1}{2n+1}. \]\end{abstract} \maketitle
Let $0 \le p \le 1$. Designate each vertex $x \in \mathbb Z^2$ a double-sided mirror with probability $p$, the designations being independent and identically distributed over the vertices. Each vertex designated a mirror obtains, with probability $1/2$, a north-west mirror, and otherwise a north-east mirror. A ray of light traveling along the edges of $\mathbb Z^2$ is reflected when it hits a mirror (see the image on the right) and keeps its direction unchanged at vertices which are not designated a mirror. The question is whether for all values of $0 < p \leq 1$ the orbits of the ray of light are periodic, or whether for some values of $p < 1$ there is a positive probability that the light travels to infinity. For $p=1$, a simple argument, see for instance \cite[\S 13.3]{G}, shows that this question is equivalent to the question of the existence of an infinite open path in the bond percolation model on $\mathbb{Z}^2$ at the parameter value $1/2$, which, due to the seminal result of \cite{H}, is known to have a negative answer. If $p=1$ and the mirrors are oriented with probabilities $p_{_{NW}} \neq p_{_{NE}}$, we reach the same conclusion, see \cite[p.\ 54]{K}. No similar result is known for $p < 1$. We are ready to state our theorem. By $[0 \leftrightarrow A]$ we denote the event that a ray of light starting at the origin reaches the set $A \subset \mathbb{Z}^2$, and we let $Q(n) = [-n, n]^2$. \parshape 13 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt 10cm 0pt \hsize \vadjust{\kern -7.6cm \vtop to 7.6cm{\hbox to \hsize{\hss\includegraphics{reflection.eps}}}}
\begin{thm*} In the mirror model at any density $0 < p \leq 1$ of mirrors, \[ \mathbb{P}_p(0\leftrightarrow Q(n)^c)\ge\frac{1}{2n+1}. \] \end{thm*} \begin{proof} Examine the mirror model on an infinite cylinder $\mathbb Z \times S_{2n+1}$ of odd width $2n+1$. We first note that, deterministically, there must exist an infinite path. Indeed, examine the paths crossing the $2n+1$ horizontal edges whose left-end vertex has first coordinate $0$.
Each finite path must cross an even number of these edges, since each crossing moves it from the left half of the cylinder to the right half or back. Since the total number of these edges is odd, at least one of them cannot belong to a finite path; hence it belongs to an infinite path.
Consequently, the expected number of these edges that belong to an infinite path is $\ge1$. Since the cylinder is invariant under rotations, each of the $2n+1$ edges belongs to an infinite path with the same probability, which equals $\frac{1}{2n+1}$ of this expectation. This probability is therefore $\ge\frac{1}{2n+1}$.
Finally, since the path cannot tell the difference between being in the cylinder and being in $\mathbb{Z}^{2}$ before reaching distance $n$, we get the desired claim on $\mathbb{Z}^{2}$. \end{proof} \medskip \noindent \textbf{Remarks.} \begin{enumerate} \item The argument works for reasonable high-dimensional analogs of the mirror model. \item The argument does not apply to the periodic Manhattan model (see
\cite[p.\ 13]{C}) because the Manhattan model has the evenness built into it and cannot be defined consistently on a cylinder of odd width. Thus we avoid a contradiction, as the result is not true for the Manhattan model. It is not difficult to convince oneself that the path of the ray is contained inside the vacant $*$-cluster of the origin. Therefore, when the density of obstacles is bigger than the critical value for site percolation on $\mathbb{Z}^2$, the probability $\mathbb{P}_p(0\leftrightarrow Q(n)^c)$ decays exponentially. \item On the other hand, the argument does apply to the \emph{randomly
oriented} Manhattan model. In this model the orientations of the
``streets'' and ``avenues'' are random and i.i.d\@. Here there is no
problem to define the model on a cylinder with odd width and the
proof carries through literally. \item Rotating mirrors. In this model the mirrors change their orientation deterministically after each interaction with the beam of light, flipping by $90^\circ$. If $p=1$, it is easy to see that the path of the ray of light is unbounded; however, it is not known to the authors whether in this case it is recurrent. \end{enumerate}
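The ray dynamics are easy to simulate; the sketch below is purely an illustration, not part of the argument, and its conventions are our own choice (the ray is launched eastward, and a mirror at the starting vertex itself is ignored). It traces a ray until it either leaves $Q(n)$ or revisits a (vertex, direction) state, and estimates $\mathbb{P}_p(0\leftrightarrow Q(n)^c)$ by Monte Carlo.

```python
import random

# Conventions (our own, for this illustration): the ray starts at the origin
# heading east; a mirror at the starting vertex is ignored at launch.
# A north-east mirror '/' maps direction (dx, dy) to (dy, dx);
# a north-west mirror '\' maps (dx, dy) to (-dy, -dx).

def trace_ray(mirrors, n, start=(0, 0), direction=(1, 0)):
    """Return True if the ray escapes Q(n), False if its orbit is periodic."""
    (x, y), (dx, dy) = start, direction
    seen = set()
    while abs(x) <= n and abs(y) <= n:
        if (x, y, dx, dy) in seen:
            return False  # state repeats: the orbit is periodic inside Q(n)
        seen.add((x, y, dx, dy))
        x, y = x + dx, y + dy
        m = mirrors.get((x, y))
        if m == '/':
            dx, dy = dy, dx
        elif m == '\\':
            dx, dy = -dy, -dx
    return True  # the ray reached Q(n)^c

def escape_probability(p, n, trials=1000, seed=0):
    """Monte Carlo estimate of P_p(0 <-> Q(n)^c)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mirrors = {}
        for x in range(-n, n + 1):
            for y in range(-n, n + 1):
                if rng.random() < p:
                    mirrors[(x, y)] = '/' if rng.random() < 0.5 else '\\'
        hits += trace_ray(mirrors, n)
    return hits / trials
```

For instance, mirrors of types $/,\backslash,/,\backslash$ at $(1,0),(1,1),(0,1),(0,0)$ trap the ray in a unit loop, while the empty configuration always escapes; in our runs at $p=1/2$ and $n=3$ the estimate comfortably exceeds the bound $1/7$ of the theorem.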
We thank S. Smirnov for a useful discussion.
\end{document} |
\begin{document}
\title{A Chv\'{a}tal-type result for the potential-Ramsey number of trees versus cliques} \footnotetext[1]{Department of Mathematics, Iowa State University, Ames, IA 50011; {\tt $\{$cocox,rymartin$\}[email protected]}} \footnotetext[2]{Department of Mathematical and Statistical Sciences, University of Colorado Denver, Denver, CO 80217; {\tt [email protected]}.} \footnotetext[3]{Research supported in part by NSF grant DMS-1427526, ``The Rocky Mountain - Great Plains Graduate Research Workshop in Combinatorics''.} \footnotetext[4]{Research supported in part by Simons Foundation Collaboration Grant \#206692 (to Michael Ferrara).} \footnotetext[5]{Department of Mathematics, University of Illinois Urbana-Champaign, Urbana, IL 60801; {\tt [email protected]}.} \footnotetext[6]{This author's research was partially supported by the National Security Agency (NSA) via grant H98230-13-1-0226. This author's contribution was completed in part while he was a long-term visitor at the Institute for Mathematics and its Applications. He is grateful to the IMA for its support and for fostering such a vibrant research community.} \begin{abstract} A sequence of nonnegative integers $\pi =(d_1,d_2,\dotsc,d_n)$ is {\it graphic} if there is a (simple) graph $G$ of order $n$ having degree sequence $\pi$. In this case, $G$ is said to {\it realize} or be a {\it realization of} $\pi$. Given a graph $H$, a graphic sequence $\pi$ is \textit{potentially $H$-graphic} if there is some realization of $\pi$ that contains $H$ as a subgraph.
In this paper, we consider a degree sequence analogue to classical graph Ramsey numbers. For graphs $H_1$ and $H_2$, the \textit{potential-Ramsey number} $r_{pot}(H_1,H_2)$ is the minimum integer $N$ such that for any $N$-term graphic sequence $\pi$, either $\pi$ is potentially $H_1$-graphic or the complementary sequence $\overline{\pi}=(N-1-d_N,\dots, N-1-d_1)$ is potentially $H_2$-graphic.
We prove that if $s\ge 2$ is an integer and $T_t$ is a tree of order $t> 7(s-2)$, then $$r_{pot}(K_s, T_t) = t+s-2.$$ This result, which is best possible up to the bound on $t$, is a degree sequence analogue to a classical 1977 result of Chv\'{a}tal on the graph Ramsey number of trees vs.\ cliques. To obtain this theorem, we prove a sharp condition that ensures an arbitrary graph packs with a forest, which is likely to be of independent interest.
\end{abstract} \section{Introduction}
A sequence of nonnegative integers $\pi =(d_1,d_2,\dotsc,d_n)$ is {\it graphic} if there is a (simple) graph $G$ of order $n$ having degree sequence $\pi$. In this case, $G$ is said to {\it realize} or be a {\it realization of} $\pi$, and we will write $\pi=\pi(G)$. Unless otherwise stated, all sequences in this paper are assumed to be nonincreasing.
There are a number of theorems characterizing graphic sequences, including classical results by Havel~\cite{hav} and Hakimi~\cite{hak} and an independent characterization by Erd\H{o}s and Gallai~\cite{EG}. However, a given graphic sequence may have a diverse family of nonisomorphic realizations; as such, it has become of recent interest to determine when the realizations of a given graphic sequence have various properties. As suggested by A.R.~Rao in~\cite{Ra79}, such problems can be broadly classified into two types, the first described as ``forcible'' problems and the second as ``potential'' problems. In a forcible degree sequence problem, a specified graph property must exist in every realization of the degree sequence $\pi$, while in a potential degree sequence problem, the desired property must be found in at least one realization of $\pi$.
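As an aside, the Havel--Hakimi characterization mentioned above yields a simple recursive test for whether a sequence is graphic; the following is a minimal sketch (our own illustration).

```python
def is_graphic(seq):
    # Havel--Hakimi test: repeatedly remove the largest remaining term d
    # and subtract 1 from each of the next d largest terms.
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False  # not enough remaining vertices to attach to
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return all(x == 0 for x in seq)
```

For example, $(3,3,3,3)$ is graphic (realized by $K_4$) while $(3,3,1,1)$ is not, even though its sum is even.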
Results on forcible degree sequences are often stated as traditional problems in structural or extremal graph theory, where a necessary and/or sufficient condition is given in terms of the degrees of the vertices (or equivalently the number of edges) of a given graph (e.g.~Dirac's theorem on hamiltonian graphs).
A number of degree sequence analogues to classical problems in extremal graph theory appear throughout the literature, including potentially graphic sequence variants of Hadwiger's Conjecture~\cite{CO,DM,RoSo10}, extremal graph packing theorems~\cite{BFHJKW, DFJS}, the Erd\H{o}s-S\'{o}s Conjecture~\cite{LiYin09}, and the Tur\'{a}n Problem~\cite{EJL, FLMW}.
\subsection{Potential-Ramsey Numbers}
Given graphs $H_1$ and $H_2$, the \textit{Ramsey number} $r(H_1,H_2)$ is the minimum positive integer $N$ such that every red/blue coloring of the edges of the complete graph $K_N$ yields either a copy of $H_1$ in red or a copy of $H_2$ in blue. In~\cite{BFHJ}, the authors introduced the following potential degree sequence version of the graph Ramsey numbers.
Given a graph $H$, a graphic sequence $\pi=(d_1,\dots,d_n)$ is \textit{potentially $H$-graphic} if there is some realization of $\pi$ that contains $H$ as a subgraph. For graphs $H_1$ and $H_2$, the \textit{potential-Ramsey number} $r_{pot}(H_1,H_2)$ is the minimum integer $N$ such that for any $N$-term graphic sequence $\pi$, either $\pi$ is potentially $H_1$-graphic or the complementary sequence $$\overline{\pi}=(\overline{d}_1,\dots,\overline{d}_N)=(N-1-d_N,\dots, N-1-d_1)$$ is potentially $H_2$-graphic.
In traditional Ramsey theory, the enemy gives us a coloring of the edges of $K_N$, and we must be sure that there is some copy of $H_1$ or $H_2$ in the appropriate color, regardless of the enemy's choice. When considering the potential-Ramsey problem, we are afforded the additional flexibility of being able to replace the enemy's red subgraph with any graph having the same degree sequence. Consequently, it is immediate that for any $H_1$ and $H_2$, $$r_{pot}(H_1,H_2)\le r(H_1,H_2).$$
While this bound is sharp for certain pairs of graphs, for example $H_1=P_n$ and $H_2=P_m$ (see~\cite{BFHJ}), in general it is far from tight. For instance, determining $r(K_k,K_k)$ is generally considered to be one of the most difficult open problems in combinatorics, if not all of mathematics. Indeed, the best known asymptotic lower bound, due to Spencer \cite{S}, states that $$r(K_k,K_k)\ge (1+o(1))\frac{\sqrt{2}}{e}k2^{\frac{k}{2}}.$$ By contrast, the following theorem appears in \cite{BFHJ}.
\begin{theorem}[Busch, Ferrara, Hartke and Jacobson~\cite{BFHJ}]\label{theorem:K_nK_t}\label{theorem:ramsey_clique} For $n\ge t\ge 3$, $$r_{pot}(K_n,K_t) = 2n+t-4$$ except when $n=t=3$, in which case $r_{pot}(K_3,K_3)=6$. \end{theorem}
Despite this theorem, which implies that the potential-Ramsey numbers are at worst a ``small'' linear function of the order of the target graphs, it remains a challenging problem to determine a meaningful bound on $r_{pot}(H_1,H_2)$ for arbitrary $H_1$ and $H_2$. In addition to its connections to classical Ramsey theory, the problem of determining $r_{pot}(H_1,H_2)$ also contributes to the robust body of work on subgraph inclusion problems in the context of degree sequences (cf.~\cite{ChenLiYin,DM,EJL,FLMW,HS,LS}).
\section{Potential-Ramsey Numbers for Trees vs.\ Cliques} In this paper, we are motivated by the following classical result of Chv\'{a}tal~\cite{Chv}, which is one of the relatively few exact results for graph Ramsey numbers that applies to a broad collection of target graphs.
\begin{theorem} For any positive integers $s$ and $t$ and any tree $T_t$ of order $t$, $$r(K_s,T_t)=(s-1)(t-1)+1.$$ \end{theorem}
This elegant result states that the Ramsey number $r(K_s,T_t)$ depends only on $s$ and $t$, and not on $T_t$ itself. Our goal in this paper is to investigate the existence of a similarly general result for $r_{pot}(K_s, T_t)$. For $s\ge 2$, let $G=K_{s-2}\vee \overline{K_{t-1}}$ and note that (a) $G$ is the unique realization of its degree sequence, and (b) $G$ does not contain $K_s$ and $\overline{G}$ does not contain any graph $H$ of order $t$ without isolated vertices. This implies the following general lower bound on the potential-Ramsey number.
\begin{prop}\label{prop:LBtreevsnoisol} If $H$ is any graph of order $t$ without isolated vertices, then $r_{pot}(K_s,H)\geq t+s-2$. In particular, for any tree $T_t$ of order $t$, $r_{pot}(K_s,T_t)\ge t+s-2$. \end{prop}
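For small parameters, the two properties of $G=K_{s-2}\vee \overline{K_{t-1}}$ used above can be confirmed by brute force. The sketch below (an illustration only, run with $s=4$ and $t=5$) checks that $\omega(G)=s-1$ and that every $t$-vertex subset of $\overline{G}$ contains a vertex that is isolated within the subset.

```python
from itertools import combinations

def join_graph(s, t):
    # G = K_{s-2} joined to the empty graph on t-1 vertices;
    # vertices 0..s-3 form the clique side, the rest are independent.
    n = t + s - 3
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(s - 2), 2):
        adj[u].add(v); adj[v].add(u)
    for u in range(s - 2):
        for v in range(s - 2, n):
            adj[u].add(v); adj[v].add(u)
    return adj

def clique_number(adj):
    # brute-force clique number, feasible for these tiny graphs
    n = len(adj)
    for k in range(n, 0, -1):
        if any(all(v in adj[u] for u, v in combinations(S, 2))
               for S in combinations(range(n), k)):
            return k
    return 0

def complement(adj):
    n = len(adj)
    return {v: {u for u in range(n) if u != v and u not in adj[v]}
            for v in range(n)}
```

In the complement, the $s-2$ clique-side vertices are isolated, so any $t$ of the $t+s-3$ vertices must include one of them.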
The following theorems show that for fixed $t$, different choices of $T_t$ may have different potential-Ramsey numbers.
\begin{theorem}[Busch, Ferrara, Hartke and Jacobson~\cite{BFHJ}]\label{theorem:pathclique} For $t\ge 6$ and $s\ge 3$,
$$r_{pot}(K_s, P_t)=\left\{\begin{array}{ll}
2s - 2 + \left\lfloor\frac{t}{3}\right\rfloor, & \text{ if $s > \left\lfloor \frac{2t}{3} \right\rfloor$},
\\
t+s-2, & \text{ otherwise}.\\
\end{array}\right.$$
\end{theorem}
\begin{theorem}\label{theorem:potram_clique_star} For $s,t\geq4$, $$r_{pot}(K_s, K_{1,t-1}) = \left\{\begin{array}{ll}
2s, & \text{ if } t< s+2,
\\
t+s-2, & \text{ otherwise}.\\
\end{array}\right.$$
\end{theorem}
We will prove Theorem~\ref{theorem:potram_clique_star} in Section~\ref{section:proofs}.
There is one notable common feature between these results; namely that the potential-Ramsey number matches the bound given in Proposition~\ref{prop:LBtreevsnoisol} when $t$ is large enough relative to $s$. This suggests the question: does there exist a function $f(s)$ such that if $t\ge f(s)$, then for any tree $T_t$ of order $t$, $$r_{pot}(K_s,T_t)=t+s-2?$$ Our next result answers this question in the affirmative. Let $\ell(T)$ denote the number of leaves of a tree $T$.
\begin{theorem}\label{theorem:tree_clique} Let $T_t$ be a tree of order $t$ and let $s\ge 2$ be an integer. If either \begin{itemize} \item[(i)] $\ell(T_t)\geq s+1$ or \item[(ii)] $t> 7(s-2)$, \end{itemize} \noindent then $r_{pot}(K_s, T_t)=t+s-2$. \end{theorem}
We believe that the coefficient of $7$ in part (ii) of the theorem can be improved; indeed, it is feasible that $r_{pot}(K_s,T_t)=t+s-2$ whenever $t\gtrsim \frac{3}{2}s$, as suggested by Theorem~\ref{theorem:pathclique}, although we pose no formal conjecture here.
To obtain Theorem~\ref{theorem:tree_clique}, we give a sharp condition that ensures an arbitrary graph and a forest pack. This result, which we prove in Section~\ref{section:packing}, is likely of independent interest. In Section~\ref{section:lemmas} we present some technical lemmas, and in Section~\ref{section:proofs} we prove both Theorem~\ref{theorem:potram_clique_star} and Theorem~\ref{theorem:tree_clique}.
\section{Packing Forests and Arbitrary Graphs}\label{section:packing}
For two graphs $G$ and $H$ where $|V(G)|\geq |V(H)|$, we say that $G$ and $H$ \emph{pack} if there is an injective function $f:V(H)\to V(G)$ such that for any $xy\in E(H)$, $f(x)f(y)\notin E(G)$. In other words, $G$ and $H$ pack if we can embed both of them into $K_{|V(G)|}$ without any edges overlapping. One of the most notable results about packing is the following theorem due to Sauer and Spencer.
\begin{theorem}[Sauer and Spencer~\cite{SS}]\label{thm:sauerspencer} Let $G$ and $H$ be graphs of order $n$. If $2\Delta(G)\Delta(H)<n$, then $G$ and $H$ pack. \end{theorem}
While Theorem \ref{thm:sauerspencer} is tight (see~\cite{KK} for the characterization of the extremal graphs), the condition is not necessary to ensure that $G$ and $H$ pack. In particular, even if $H$ has large maximum degree, if $H$ is sparse enough, then it may still pack with $G$. In light of this, we provide the following theorem, which in many cases improves on the Sauer--Spencer theorem when one of the graphs is a forest.
\begin{theorem}\label{thm:sauerspencerforest} Let $F$ be a forest and $G$ be a graph both on $n$ vertices, and let $\comp(F)$ be the number of components of $F$ that contain at least one edge and $\ell(F)$ the number of vertices of $F$ with degree 1. If \[ 3\Delta(G)+\ell(F)-2\comp(F)< n, \] then $F$ and $G$ pack. \end{theorem} \begin{proof} Let $e\in E(F)$ and let $F'=F\setminus\{e\}$. Notice that $\ell(F')\leq\ell(F)+2$, but if $\ell(F')>\ell(F)$, then $\comp(F')>\comp(F)$. Further, if $\comp(F')<\comp(F)$, then $e$ was an isolated edge, so $\ell(F')\leq\ell(F)-2$. Therefore, $\ell(F')-2\comp(F')\leq\ell(F)-2\comp(F)$.
Consequently, if $3\Delta(G)+\ell(F)-2\comp(F)< n$, then any subgraph of $F$ attained by deleting edges also satisfies this inequality. Therefore we may suppose, for the sake of contradiction, that $F$ is minimal in the sense that for any edge $e\in E(F)$, $F\setminus\{e\}$ packs with $G$. Going forward, we follow the proof of the Sauer-Spencer theorem from \cite{KK} and improve the bounds at certain steps with the knowledge that $F$ is a forest.
For an embedding $f:V(G)\to V(F)$, we call an edge $uv\in E(F)$ a \emph{conflicting edge} if there is some edge $xy\in E(G)$ such that $f(x)=u$ and $f(y)=v$. As $G$ and $F$ do not pack, any embedding must have at least one conflicting edge. Notice that by the minimality of $F$, there is a packing $f$ of $G$ with $F\setminus\{uv\}$, and since $G$ does not pack with $F$, inserting the edge $uv$ must create a conflicting edge. Therefore, for any $uv\in E(F)$, there is an embedding in which $uv$ is the only conflicting edge.
For an embedding $f:V(G)\to V(F)$ and $u,v\in V(F)$, a \emph{$(u,v;G,F)_f$-link} is a triple $(u,w,v)$ such that $uw$ is an edge in the embedding of $G$ and $wv$ is an edge of $F$. Similarly, a \emph{$(u,v;F,G)_f$-link} is a triple $(u,w,v)$ such that $uw$ is an edge in $F$ and $wv$ is an edge in the embedding of $G$. Let $f$ be an embedding such that $ux$ is the only conflicting edge, and let $v\in V(F)\setminus\{x\}$. We claim that there is either a $(u,v;G,F)_f$-link or a $(u,v;F,G)_f$-link.
As $ux$ is both an edge of $F$ and an edge in the embedding of $G$, $xv$ is not an edge of $F$ and also not an edge in the embedding of $G$. Supposing that $f(u')=u$ and $f(v')=v$, let $f':V(G)\to V(F)$ be defined as $f'(u')=v$, $f'(v')=u$, and $f'(y)=f(y)$ for all $y\notin\{u',v'\}$. As $F$ and $G$ do not pack, there must be a conflicting edge under $f'$. This conflicting edge cannot be incident to $x$, but must be incident to either $u$ or $v$. If the conflicting edge is $uw$, then $(u,w,v)$ is a $(u,v;F,G)_f$-link; if the conflicting edge is instead $vw$, then $(u,w,v)$ is a $(u,v;G,F)_f$-link.
Let $u\in V(F)$ be a non-isolated vertex and let $f:V(G)\to V(F)$ be an embedding such that $ux$ is the only conflicting edge for some $x$. Define \begin{align*} V_1 &=\{v\in V(F)\colon\,\text{there is a $(u,v;F,G)_f$-link}\},\\ V_2 &=\{v\in V(F)\colon\,\text{there is a $(u,v;G,F)_f$-link}\}. \end{align*} Since $u$ is incident to $\deg_F(u)$ edges of $F$ and each $w\in N_F(u)$ is incident to at most $\Delta(G)$ edges of the embedding of $G$, we have \[
|V_1|\leq\deg_F(u)\Delta(G). \] Similarly, with $u'\in V(G)$ such that $f(u')=u$, \begin{align*}
|V_2| &\leq \sum_{w'\in N_G(u')} \deg_F(f(w')) \\
&= 2\deg_G(u') + \sum_{w'\in N_G(u')} (\deg_F(f(w'))-2) \\
&\leq 2\Delta(G) + \sum_{\substack{w\in V(F):\\ \deg_F(w)\geq2}} (\deg_F(w)-2) \\
&= 2\Delta(G) + \ell(F)-2\comp(F), \end{align*} where the last equality is justified by the fact that for any tree $T$ with at least two vertices, \[ \ell(T)=2+\sum_{\substack{w\in V(T):\\ \deg_T(w)\geq 2}}(\deg_T(w)-2). \] Since every $v\in V(F)\setminus\{x\}$ is an element of either $V_1$ or $V_2$ and $u\in V_1\cap V_2$, \[
n\leq |V_1| + |V_2| \leq \deg_F(u)\Delta(G) + 2\Delta(G)+\ell(F)-2\comp(F). \] Since this holds for every nonisolated vertex $u$ of $F$, it holds for a leaf of $F$. Therefore, by choosing $u$ to be a leaf, \[ n\leq 3\Delta(G)+\ell(F)-2\comp(F), \]
contradicting the hypothesis of the theorem. \end{proof}
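The first step of the proof, that $\ell(F)-2\comp(F)$ cannot increase when an edge is deleted, can be sanity-checked computationally on random forests. The following sketch (an illustration only, not part of the proof) counts the components containing at least one edge with a small union-find structure.

```python
import random

def stat(n, edges):
    # ell(F) - 2*comp(F): number of degree-1 vertices minus twice the
    # number of components containing at least one edge.
    deg = [0] * n
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        parent[find(u)] = find(v)
    leaves = sum(1 for d in deg if d == 1)
    comps = len({find(u) for u, _ in edges})  # roots of edge endpoints
    return leaves - 2 * comps

def random_forest(n, rng):
    # random labelled tree (attach each vertex to a random earlier one),
    # then delete a random subset of edges to obtain a forest
    tree = [(rng.randrange(i), i) for i in range(1, n)]
    return [e for e in tree if rng.random() < 0.7]
```

On the star $K_{1,4}$, for instance, $\ell-2\comp = 4-2 = 2$, and deleting any edge drops the value to $1$.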
As we are interested in attaining Chv\'{a}tal-type results for the potential-Ramsey number, we will make use of Theorem \ref{thm:sauerspencerforest} when $F$ is a tree.
\begin{cor}\label{cor:sauerspencertree} Let $G$ and $T$ be graphs of order $n$ where $T$ is a tree. If $3\Delta(G)+\ell(T)-2< n$, then $G$ and $T$ pack. \end{cor}
The tightness of Corollary~\ref{cor:sauerspencertree}, and hence of Theorem~\ref{thm:sauerspencerforest}, is seen by letting $n$ be even, $G={n\over 2}K_2$ and $T=K_{1,n-1}$. In this case, $\Delta(G)=1$ and $\ell(T)=n-1$, so $3\Delta(G)+\ell(T)-2=n$; however, $G$ and $T$ do not pack. The following proposition shows the asymptotic tightness of Corollary~\ref{cor:sauerspencertree} for any $\Delta(G)$.
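This extremal pair can be verified by exhaustive search for small $n$. The following Python sketch (our own illustration; the function name \texttt{packs} is ours) confirms for $n=6$ that $G=3K_2$ and $T=K_{1,5}$ do not pack: under every bijection, the star's center lands on a vertex whose unique $G$-neighbor it must also dominate.

```python
from itertools import permutations

# Tightness instance for n = 6: G = 3K_2, T = K_{1,5},
# so 3*Delta(G) + l(T) - 2 = 3 + 5 - 2 = 6 = n.
n = 6
G_edges = {frozenset(e) for e in [(0, 1), (2, 3), (4, 5)]}
T_edges = {frozenset((0, i)) for i in range(1, n)}

def packs(E1, E2):
    """True if some bijection of {0,...,n-1} maps E2 edge-disjointly from E1."""
    for p in permutations(range(n)):
        mapped = {frozenset((p[a], p[b])) for a, b in (tuple(e) for e in E2)}
        if not (mapped & E1):
            return True
    return False

assert not packs(G_edges, T_edges)  # the star's center always meets its G-neighbor
```

By contrast, replacing the star with the path $P_6$ gives a pair that does pack, which matches the corollary's sensitivity to $\ell(T)$.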
\begin{prop} For any $\epsilon>0$ and $r\in\mathbb{Z}^+$, there is a graph $G$ with $\Delta(G)=r$ and a tree $T$, each with $n$ vertices, such that $3\Delta(G)+\ell(T)-2< (1+\epsilon)n$, but $G$ and $T$ do not pack. \end{prop}
\begin{proof} For $m,r\geq 2$ and $n=mr$, let $G=mK_r$ and let $T$ be any tree of order $n$ that is a subdivision of $K_{1,(m-1)r+1}$. It is readily seen that $G$ and $T$ do not pack for any $m$ and $r$. Given $\epsilon>0$, choose $m$ such that $m> 2/\epsilon$. Thus, \begin{align*} 3\Delta(G)+\ell(T)-2 &= 3(r-1)+(m-1)r+1-2 \\ &= 2r+rm-4=\left(1+{2\over m}\right)n-4\\ &< (1+\epsilon)n.\qedhere \end{align*} \end{proof}
\section{Technical Lemmas}\label{section:lemmas}
We begin with the following useful result of Yin and Li.
\begin{theorem}[Yin and Li~\cite{YiLi05}]\label{thm:cliquegraph} Let $\pi=(d_1,\dots,d_n)$ be a graphic sequence and $s\geq 1$ be an integer. \begin{itemize} \item[(i)] If $d_s\geq s-1$ and $d_{2s}\geq s-2$, then $\pi$ is potentially $K_s$-graphic. \item[(ii)] If $d_s\geq s-1$ and $d_i\geq 2s-2-i$ for $1\leq i\leq s-1$, then $\pi$ is potentially $K_s$-graphic. \end{itemize} \end{theorem}
The following lemma from~\cite{GJL} is an extension of a corresponding result of Rao~\cite{Ra79} for cliques.
\begin{lemma}[Gould, Jacobson and Lehel~\cite{GJL}]\label{lemma:gould} If $\pi$ is potentially $H$-graphic, then there is a realization $G$ of $\pi$
containing $H$ as a subgraph on the $|V(H)|$ vertices of largest degree in $G$.
\end{lemma}
We next give a simple sufficient condition for a graphic sequence to have a realization containing any tree of order $t$.
\begin{lemma}\label{lemma:tree_pot} Let $T$ be a tree of order $t$ and let $\pi=(d_1,\dots,d_n)$ be a graphic sequence with $n\geq t$. If $d_{t-1}\geq t-1$, then $\pi$ is potentially $T$-graphic. \end{lemma}
\begin{proof} We proceed by induction on $t$. If $t=2$, the claim follows immediately, so suppose that $t>2$ and let $\pi=(d_1,\dots,d_n)$ be a graphic sequence with $d_{t-1}\geq t-1$.
Let $T$ be any tree of order $t$, $v$ be a leaf of $T$, and $T'=T\setminus\{v\}$. Thus, $T'$ is a tree of order $t-1$. As $\pi$ is a non-increasing sequence, $d_{t-2}\geq t-1>t-2$, so by the induction hypothesis, $\pi$ is potentially $T'$-graphic. By Lemma~\ref{lemma:gould}, there is a realization $G$ of $\pi$ such that $T'$ is a subgraph of $G[v_1,\dots,v_{t-1}]$.
As $\deg_G(v_i)\geq t-1$ for all $i\leq t-1$, $|N_G(v_i)\cap\{v_t,\dots,v_n\}|\geq 1$ for all $i\leq t-1$. Hence we may attach a leaf to any vertex of $T'$ to attain a copy of $T$, so $\pi$ is potentially $T$-graphic. \end{proof}
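Lemma~\ref{lemma:tree_pot} can likewise be confirmed exhaustively in the smallest nontrivial case $t=4$, $n=5$: for every degree sequence realized by a graph on five vertices and satisfying $d_{3}\geq 3$, some realization contains each of the two trees of order four. The following sketch (our illustration) performs this check.

```python
from itertools import combinations, permutations

n, t = 5, 4
pairs = list(combinations(range(n), 2))
TREES = [[(0, 1), (1, 2), (2, 3)],      # the path P_4
         [(0, 1), (0, 2), (0, 3)]]      # the star K_{1,3}

def contains_tree(edges, tree):
    E = {frozenset(e) for e in edges}
    return any(all(frozenset((p[a], p[b])) in E for a, b in tree)
               for p in permutations(range(n), t))

# group all labeled graphs on 5 vertices by their degree sequence
by_seq = {}
for mask in range(1 << len(pairs)):
    edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    by_seq.setdefault(tuple(sorted(deg, reverse=True)), []).append(edges)

checked, violations = 0, []
for seq, graphs in by_seq.items():
    if seq[t - 2] >= t - 1:             # the lemma's hypothesis d_{t-1} >= t-1
        checked += 1
        for tree in TREES:
            if not any(contains_tree(g, tree) for g in graphs):
                violations.append((seq, tree))

assert checked > 0 and not violations
```

Enumerating actual graphs (rather than abstract sequences) automatically restricts attention to graphic sequences.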
Finally, we combine Lemma~\ref{lemma:tree_pot} and Theorem~\ref{thm:cliquegraph} to obtain the following.
\begin{prop}\label{prop:degrees} Let $T$ be a tree on $t$ vertices, let $s\leq t-2$, and set $n=t+s-2$. If $\pi=(d_1,\dots,d_n)$ is a graphic sequence such that
$\pi$ is not potentially $T$-graphic and
$\overline{\pi}$ is not potentially $K_s$-graphic, then $d_{t-s-1}\geq t$ and $d_t\geq t-s+1$. \end{prop}
\begin{proof} If $\pi$ is not potentially $T$-graphic, then $d_{t-1}\leq t-2$. Therefore, $\overline{d}_s=n-1-d_{t-1}\geq s-1$. By Theorem \ref{thm:cliquegraph}, if $\overline{d}_{2s}\geq s-2$, then $\overline{\pi}$ is potentially $K_s$-graphic. As this is not the case, $\overline{d}_{2s}\leq s-3$. Hence, $d_{t-s-1}=n-1-\overline{d}_{2s}\geq t$.
Furthermore, since $\overline{d}_s\geq s-1$, we see that $\overline{d}_i\leq 2s-3-i$ for some $1\leq i\leq s-2$ (else $\overline{\pi}$ is potentially $K_s$-graphic by Theorem~\ref{thm:cliquegraph}). Therefore \[d_t=n-1-\overline{d}_{s-1}\geq n-1-\overline{d}_i \geq n-1-(2s-4)=t-s+1.\qedhere\] \end{proof}
\section{Proof of Theorems \ref{theorem:potram_clique_star} and \ref{theorem:tree_clique}}\label{section:proofs}
We begin by proving Theorem \ref{theorem:potram_clique_star}.
\begin{proof}[Proof of Theorem~\ref{theorem:potram_clique_star}]
When $s\leq t-2$, the lower bound is established by Proposition~\ref{prop:LBtreevsnoisol}. When $s\geq t-2$, the lower bound is established by considering the graphic sequence $\pi=(d_1,\dots,d_{2s-1})$ where $d_i=2$ for all $i$. As $t\geq 4$, $\pi$ is not potentially $K_{1,t-1}$-graphic. Further, in any realization of $\pi$, the largest independent set has size at most $s-1$, so $\overline{\pi}$ is not potentially $K_s$-graphic.
For the upper bound, consider any graphic sequence $\pi=(d_1,\dotsc,d_n)$ with $n=s+\max\{s,t-2\}$. If $d_1\geq t-1$, then $\pi$ is potentially (in fact forcibly) $K_{1,t-1}$-graphic. So suppose $d_1\leq t-2$. Thus, we have $\overline{d}_n=n-1-d_1\geq n-1-(t-2)=n-t+1\geq s-1$; since also $n\geq2s$, Theorem~\ref{thm:cliquegraph} implies that $\overline{\pi}$ is potentially $K_s$-graphic. \end{proof}
We are now ready to prove Theorem~\ref{theorem:tree_clique}, the main result of this paper.
\begin{proof}[Proof of Theorem~\ref{theorem:tree_clique}.]
Let $\pi=(d_1,\dots,d_n)$ be a graphic sequence with $n=t+s-2$ such that $\pi$ is not potentially $T$-graphic and $\overline{\pi}$ is not potentially $K_s$-graphic.
(i) Suppose that $\ell(T)\geq s+1$ and let $S$ be any set of $s+1$ leaves of $T$. Let $T'=T\setminus S$, so $T'$ is a tree of order $t-s-1$. By Proposition~\ref{prop:degrees}, $d_{t-s-2}\geq t>t-s-2$, so by Lemma~\ref{lemma:tree_pot}, $\pi$ is potentially $T'$-graphic. Let $G$ be a realization of $\pi$ with $T'$ as a subgraph of $G[v_1,\dots,v_{t-s-1}]$. Because $\deg_G(v_i)\geq t$ for all $i\leq t-s-1$, $|N_G(v_i)\cap\{v_{t-s},\dots,v_{n}\}|\geq s+2$ for all $i\leq t-s-1$. Hence, we can reattach the vertices in $S$ to attain a copy of $T$, contradicting the fact that $\pi$ is not potentially $T$-graphic.
(ii) Suppose that $\ell(T)\leq s$ and let $G$ be any realization of $\pi$ and define $G_t=G[v_1,\dots,v_t]$. As $d_t\geq t-s+1$ and $|V(G)|=t+s-2$, $\delta(G_t)\geq d_t-(s-2)=t-2s+3$, so $\Delta(\overline{G_t})=t-1-\delta(G_t)\leq 2s-4$. Because $t>7(s-2)$, \[ 3\Delta(\overline{G_t})+\ell(T)-2\leq 3(2s-4)+s-2=7s-14< t. \] By applying Corollary~\ref{cor:sauerspencertree}, we see that $T$ and $\overline{G_t}$ pack, or in other words, $T$ is a subgraph of $G_t$, again contradicting the fact that $\pi$ is not potentially $T$-graphic. \end{proof}
\end{document}
\begin{document}
\author{Jacek Jakubowski and
Maciej Wi\'sniewolski }
\title[Verhulst process]{Exact distribution of Verhulst process}
\maketitle
\begin{center} {\small
Institute of Mathematics, University of Warsaw \\
Banacha 2, 02-097 Warszawa, Poland \\ e-mail: {\tt [email protected] } \\ and \\
{\tt [email protected] } } \end{center}
\begin{abstract} We investigate the Verhulst process, a special functional of geometric Brownian motion that has many applications, among others in biology and in stochastic volatility models. We present an exact formula for the density of the one-dimensional distribution of the Verhulst process. A simple formula for this density is obtained in the special case when the drift of the geometric Brownian motion equals $-\frac12$. Some special properties of the process are discussed; for example, it turns out that under Girsanov's change of measure a Verhulst process remains a Verhulst process, but with different parameters. \end{abstract}
\noindent \begin{quote} \noindent \textbf{Key words}: geometric Brownian motion, Verhulst process, Girsanov's change of measure, Laplace transform, exponential functional of Brownian motion
\ \\ \textbf{2010 AMS Subject Classification}: 60J65, 60J70.
\end{quote}
\section{Introduction}
The paper is devoted to the study of the distribution of the Verhulst process, a special functional of geometric Brownian motion with drift and of its integral. The process has been studied, among others, by Mackevi\v{c}ius \cite{Mac} and by Lungu and {\O}ksendal \cite{LO}, where it is called a process of population growth in a stochastic crowded environment. The results in both papers were obtained by approximation arguments. {\O}ksendal \cite{O} indicated some applications of the Verhulst process in finance and biology. Mackevi\v{c}ius \cite{Mac1}, in his presentation at the Vilnius Conference 2014, pointed out the need to find closed-form expressions for the Laplace transform and the exact distribution of the Verhulst process. In this paper we provide formulas for the one-dimensional distribution of the process. Results obtained by Yor and collaborators in several papers and monographs (see, e.g., \cite{CMY}, \cite{DYM}, \cite{DMY}, \cite{Mans08}, \cite{Mat-I}, \cite{MatII}, \cite{MatIII}) on the distribution of $(B_t^{(\mu)},\int_0^te^{B_u^{(\mu)}}du)$, where $B_t^{(\mu)}=B_t + \mu t$ is a Brownian motion with drift, give us the background for deriving closed formulas for the density of the Verhulst process, which in the case of drift $\mu = -\frac{1}{2}$ become especially simple. We also present some interesting and important properties of the Verhulst process, among them the fact that, under Girsanov's change of measure, a Verhulst process remains a Verhulst process, but with different parameters. The ideas presented below are original and are not easy consequences of previous results.
\section{Distribution and properties of the Verhulst process} We work on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ with a filtration $(\mathcal{F}_t)_{t \in [0,\infty)}$ and a Brownian motion $B$ defined on it. We define the Verhulst process $\theta^{(\mu,\beta)}$ starting from $1$ as the following functional of a Brownian motion with drift
\begin{equation}\label{VP}
\theta_t^{(\mu,\beta)} = \frac{e^{B_t +\mu t}}{1+\beta\int_0^te^{B_s+\mu s}ds}. \end{equation} It is easy to see that this functional is
the unique strong solution of the SDE \begin{equation}\label{SDE}
d\theta_t^{(\mu,\beta)} = \theta_t^{(\mu,\beta)}dB_t + \Big((\mu+1/2)\theta_t^{(\mu,\beta)}-\beta(\theta_t^{(\mu,\beta)})^2\Big)dt, \ \theta_0^{(\mu,\beta)} = 1, \end{equation} for $\mu\in\mathbb{R}$, $\beta\geq 0$ and $t\geq 0$. It is worth noting that taking $\theta_t^{(\mu,\beta)}$ starting from any $x> 0$ does not change any of its probabilistic properties (see also Section 3). As mentioned, the process $\theta^{(\mu,\beta)}$ is also known in the literature as a process of population growth in a crowded stochastic environment (see \cite{O}). An approximation of the process was presented in \cite{Mac}. The properties of the Verhulst process were also studied by Jakubowski and Wi\'sniewolski \cite{JWII}. In particular, it was shown there (see Theorem 2.19) that for $\lambda >0$, $a\in\mathbb{R}$, we have \begin{equation}\label{Lap}
\mathbb{E}\Big[ \exp\Big(-\frac{\lambda e^{aB^{(\mu)}_t}}{1+\beta
A_t^{(\mu)}}\Big) \Big]
= \mathbb{E} \Big[F_{B_t^{(\mu)}}\Big(\beta^{-1}R^{(\lambda e^{aB^{(\mu)}_t})}(1/2)\Big)\Big], \end{equation} where $B^{(\mu)}_t = B_t +\mu t$, $ A_t^{(\mu)} = \int_0^te^{2B^{(\mu)}_s}ds$, and $F_x$ is given by \begin{equation}\label{postac-F}
F_{x}(z) = \exp\Big(-\frac{\varphi_z(x)^2 - x^2}{2t}\Big) \end{equation} with \begin{align} \label{varphi}
& \varphi_x(y) = \mbox{arcosh}(xe^{-y} + \cosh(y))\\
& = \ln\Big(xe^{-y} + \cosh(y)+\sqrt{x^2e^{-2y}+\sinh^2(y) +2xe^{-y}\cosh(y)}\Big).\notag
\end{align} Moreover, $R^x$ is a squared Bessel process with index $-1$, starting at $x$ and independent of
$(B^{(\mu)}_t,t\geq 0)$.
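As a numerical sanity check, the closed-form functional \eqref{VP} can be compared with an Euler--Maruyama discretization of the SDE \eqref{SDE} driven by the same Brownian increments; the two paths should agree up to the discretization error. The Python sketch below does this for arbitrarily chosen parameter values ($\mu=0.1$, $\beta=0.5$; the choices are ours).

```python
import numpy as np

rng = np.random.default_rng(0)                 # fixed seed for reproducibility
mu, beta, T, n = 0.1, 0.5, 1.0, 100_000       # illustrative parameter values
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))
s = np.linspace(0.0, T, n + 1)

# Closed form (VP): theta_t = e^{B_t + mu t} / (1 + beta * int_0^t e^{B_u + mu u} du)
g = np.exp(B + mu * s)
A = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))  # trapezoid rule
theta_vp = g / (1.0 + beta * A)

# Euler--Maruyama for (SDE): d theta = theta dB + ((mu + 1/2) theta - beta theta^2) dt
theta_em = np.empty(n + 1)
theta_em[0] = 1.0
for k in range(n):
    th = theta_em[k]
    theta_em[k + 1] = th + th * dB[k] + ((mu + 0.5) * th - beta * th * th) * dt

err = float(np.max(np.abs(theta_em - theta_vp)))  # strong error of order sqrt(dt)
```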
Observe that from \eqref{Lap} we easily obtain the Laplace transform of $\theta_t^{(\mu,\beta)}$. \begin{proposition} For $\lambda > 0$ we have \begin{equation}
\mathbb{E}e^{-\lambda \theta_t^{(\mu,\beta)}} = \mathbb{E} \Big[F_{B_t^{(2\mu)}}\Big((4\beta)^{-1}R^{(\lambda e^{2B^{(2\mu)}_t})}(1/2)\Big)\Big]. \end{equation} \end{proposition} \begin{proof} Exactly as in the proof of Theorem 2.20 in \cite{JWII}, we have \begin{align*} \theta_{4t}^{(\mu,\beta)} = \frac{e^{2(B_{4t}/2 +2t\mu)}}{1+4\beta\int_0^{t}e^{2(B_{4u}/2 +2u\mu)}du}. \end{align*} Since $(B_{4t}/2)_{t\geq 0}$ is a Brownian motion, the statement follows from \eqref{Lap} with $a = 2$, $\mu$ replaced by $2\mu$ and $\beta$ replaced by $4\beta$. \end{proof} The following lemma will be used later. \begin{lemma}\label{LQ} Fix $\gamma >0$, $\mu\in\mathbb{R}$, $\beta\geq 0$. Then \begin{equation}\label{defM}
M^{(\gamma,\mu,\beta)}_t = e^{-\gamma\int_0^t\theta_s^{(\mu,\beta)}dB_s -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}, \qquad t \in[0,\infty) \end{equation} is a martingale. \end{lemma} \begin{proof} It is enough to prove that for fixed $T>0$ the local martingale $M^{(\gamma,\mu,\beta)}_t, t \in[0,T]$ is bounded. It is so, since SDE \eqref{SDE} implies that \begin{align*} &\exp \Big({-\gamma\int_0^t\theta_s^{(\mu,\beta)}dB_s -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}\Big) \\ &= e^{-\gamma\Big(\theta_t^{(\mu,\beta)}-1 -(\mu+1/2)\int_0^t\theta_s^{(\mu,\beta)}ds+\beta\int_0^t(\theta_s^{(\mu,\beta)})^2ds\Big) -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}\\
&= e^{-\gamma(\theta_t^{(\mu,\beta)}-1)-(\gamma\beta+\frac{\gamma^2}{2})\int_0^t\Big((\theta_s^{(\mu,\beta)})^2- 2\theta_s^{(\mu,\beta)}\frac{\gamma(\mu+1/2)}{\gamma^2+2\gamma\beta} +
\Big(\frac{\gamma(\mu+1/2)}{\gamma^2+2\gamma\beta}\Big)^2\Big)ds } \times \\ & \times e^{ \frac{(\gamma(\mu+1/2))^2} {4(\gamma\beta+\gamma^2/2)}t } <\infty. \end{align*} \end{proof} \begin{remark} One can wonder if $ \overline{M}^{(\gamma,\mu,\beta)}_t = e^{\gamma\int_0^t\theta_s^{(\mu,\beta)}dB_s -\frac{\gamma^2}{2}\int_0^t(\theta_s^{(\mu,\beta)})^2ds}$, for a fixed $\gamma >0$, could be a martingale as well. In Remark 1.1 \cite{DMY} it was noticed that $ \overline{M}^{(\gamma,\mu,\beta)}$ can not be a martingale. \end{remark} Lemma \ref{LQ} enables us to define a new probability measure \begin{equation}\label{MQ}
\frac{d\mathbb{Q}^{(\gamma,\mu,\beta,T)}}{d\mathbb{P}}\Big|_{\mathcal{F}_T} = M^{(\gamma,\mu,\beta)}_T. \end{equation} By Girsanov's theorem, $V_s = B_s + \gamma\int_0^s\theta_u^{(\mu,\beta)}du$, $s\leq T$, is a Brownian motion under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$. This leads us to the following \begin{theorem} Let $\theta_t^{(\mu,\beta)}$ be defined by (\ref{VP}) and $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$ by (\ref{MQ}). Then \begin{equation}
\theta_t^{(\mu,\beta)} = \frac{e^{V_t +\mu t}}{1+ (\beta + \gamma)\int_0^te^{V_u +\mu u }du}, \qquad t\leq T, \end{equation} where $V$ is a standard Brownian motion under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$. \end{theorem} \begin{proof} From It\^o's lemma and \eqref{SDE} it follows that \begin{align}\label{Int}
\ln \theta_t^{(\mu,\beta)} = B_t + \int_0^t(\mu - \beta \theta_s^{(\mu,\beta)})ds. \end{align} Taking $V_t = B_t + \gamma\int_0^t\theta_u^{(\mu,\beta)}du$, a Brownian motion under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$, we obtain from the last equality that \begin{align}\label{EV}
V_t + \mu t = \ln \theta_t^{(\mu,\beta)} + (\beta + \gamma)\int_0^t\theta_u^{(\mu,\beta)}du. \end{align} Thus direct computation yields \begin{align*}
\int_0^te^{V_u + \mu u}du &= \int_0^t\theta_u^{(\mu,\beta)}e^{(\beta + \gamma)\int_0^u\theta_s^{(\mu,\beta)}ds}du\\
&= \frac{1}{\gamma+\beta}\Big(e^{(\beta + \gamma)\int_0^t\theta_u^{(\mu,\beta)}du} -1\Big). \end{align*} From the last equality and (\ref{EV}) we finally have \begin{align*}
\theta_t^{(\mu,\beta)} = \frac{e^{V_t +\mu t}}{1+ (\beta + \gamma)\int_0^te^{V_u +\mu u }du}. \end{align*} \end{proof} This theorem shows that a Verhulst process $\theta^{(\mu,\beta)}$ on $(\Omega,\mathcal{F},\mathbb{P})$ remains a Verhulst process under $\mathbb{Q}^{(\gamma,\mu,\beta,T)}$, though with different parameters. \begin{proposition} \label{Stb} If $(\theta_t^{(\mu,\beta)})$, $t \leq T$, is a Verhulst process with parameters $(\mu,\beta)$ on $(\Omega,\mathcal{F},\mathbb{P})$, then $(\theta_t^{(\mu,\beta)})$ is, on $(\Omega,\mathcal{F},\mathbb{Q}^{(\gamma,\mu,\beta,T)})$, a Verhulst process with parameters $(\mu,\gamma + \beta)$. \end{proposition} We are now ready to derive the formula for the density of the one-dimensional distribution of the Verhulst process, i.e.\ the density of $\theta_t^{(\mu,\beta)}$ for a fixed $t> 0$. Let us introduce the notation \begin{align*}
\mathbb{P}(\theta_t^{(\mu,\beta)}\in dx) = g_t(\beta,x)dx. \end{align*} Observe that for $\beta = 0$, $g_t(0,x)$ is the density of geometric Brownian motion $e^{B_t +\mu t}$. Proposition \ref{Stb} implies that \begin{align*}
g_t(\beta+\gamma,x) = g_t(\beta, x)\mathbb{E}(M^{(\gamma,\mu,\beta)}_t|\theta_t^{(\mu,\beta)} = x). \end{align*} In particular, for $\beta = 0$ we obtain \begin{align}\label{Dens}
g_t(\gamma,x) = g_t(0, x)\mathbb{E}(M^{(\gamma,\mu,0)}_t|e^{B_t +\mu t} = x). \end{align} Theorem 8.1 in Matsumoto and Yor \cite{MatII} states that for any $t>0$, $\lambda > 0$, $v > 0$, $x\in\mathbb{R}$, \begin{align}\label{MYOR}
&\psi_t^{(\mu)}(v,x)\mathbb{E}\Big(e^{-\frac{\lambda^2}{2}\int_0^te^{2B_u +2\mu u}du} \Big|\int_0^te^{B_u+\mu u}du = v, B_t + \mu t = x\Big)\\
&= e^{\mu x - \mu^2t/2}\frac{\lambda}{4\sinh(\lambda v/2)}e^{-\lambda(1+e^x)\coth(\lambda v/2)}\Theta(\phi(v,x,\lambda), t/4),\notag \end{align} where \begin{align*}
\psi_t^{(\mu)}(v,x) &= \frac{1}{16}e^{\mu x - \mu^2t/2}\frac{1}{v}e^{-\frac{2(1+e^x)}{v}}\Theta(4e^{x/2}/v, t/4),\\
\Theta(r,t) &= \frac{r}{\sqrt{2\pi^3t}}e^{\frac{\pi^2}{2t}}\int_0^{\infty}e^{-\frac{z^2}{2t}}e^{-r\cos(z)}\sinh(z)\sin(\pi z/t)dz, \\
\phi(v,x,\lambda) &= \frac{2\lambda e^{x/2}}{\sinh(\lambda v/2)}. \end{align*} Using this result we can write down the formula for the density of the Verhulst process. \begin{theorem} \label{th:Vdens} The density of the Verhulst process is given by \begin{align*}
g_t(\gamma,x) = g_t(0, x)e^{-\gamma(x-1)}\mathbb{E}H_t(a_t^{(\mu)},x), \end{align*} where $a_t^{(\mu)} = \int_0^te^{B_u+\mu u}du$, and \begin{align*}
H_t(y,x) = e^{\gamma(\mu+1/2)y}\mathbb{E}\Big(e^{-\frac{\gamma^2}{2}\int_0^te^{2B_u +2\mu u}du}|a_t^{(\mu)} = y, B_t + \mu t = \ln x\Big). \end{align*} \end{theorem} \begin{proof} We have from (\ref{defM}) and (\ref{SDE}) \begin{align*}
\mathbb{E}(M^{(\gamma,\mu,0)}_t|e^{B_t +\mu t} = x) &= e^{-\gamma(x-1)}\mathbb{E}\Big(e^{\gamma(\mu +1/2)a_t^{(\mu)} - \frac{\gamma^2}{2}\int_0^te^{2B_u +2\mu u}du}|e^{B_t +\mu t} = x\Big)\\ &= e^{-\gamma(x-1)}\int_0^{\infty}H_t(y,x)\mathbb{P}(a_t^{(\mu)}\in dy)\\ &= e^{-\gamma(x-1)}\mathbb{E}H_t(a_t^{(\mu)},x). \end{align*} Thus, the result follows from (\ref{Dens}). \end{proof} \begin{remark} Combining Theorem \ref{th:Vdens} with \eqref{MYOR} enables us to write down a closed formula for the density of the Verhulst process. The density of $a_t^{(\mu)}$ can be obtained from the formula \begin{align*}
\mathbb{P}(a_t^{(\mu)}\in dv, B_t +\mu t\in dx) = \psi_t^{(\mu)}(v,x)dvdx \end{align*} (see \cite[Theorem 4.1]{MatII}). Another formula for the density of $a_t^{(\mu)}$ can also be found in \cite{SB}, page 612, formula (1.8.4). \end{remark} \begin{proposition} For a Verhulst process $\theta_t^{(\mu,\beta)}$, a Brownian motion $B$, $\beta>0$ and $\mu \neq -1/2$, we have \begin{align*}
\theta_t^{(\mu,\beta)}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} &= e^{B_t + \mu t},\\
\mathbb{E}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} &= 1 + \frac{\beta}{\mu + 1/2}\Big(e^{(\mu+1/2)t}-1\Big). \end{align*} \end{proposition} \begin{proof} The first equality follows immediately from (\ref{Int}). For the second observe that \begin{align} \label{1/6}
e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} = (1+\beta\int_0^te^{B_u +\mu u}du), \end{align} so \begin{align*}
e^{(\mu+1/2)t} = \mathbb{E}e^{B_t+\mu t} = \mathbb{E}\theta_t^{(\mu,\beta)}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} = \frac{1}{\beta}\frac{\partial}{\partial t}\mathbb{E}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds}. \end{align*} Thus \begin{align*} \mathbb{E}e^{\beta\int_0^t\theta_s^{(\mu,\beta)}ds} = 1 + \frac{\beta}{\mu + 1/2}\Big(e^{(\mu+1/2)t}-1\Big). \end{align*} \end{proof} As an example of applications, we present a solution to the problem of finding a special representation of Brownian motion with drift. For other distributional equations of this kind see, e.g., Section 13 in \cite{Mat}. \begin{example} Let $B$ be a Brownian motion under $\mathbb{P}$, $\mu\in\mathbb{R}$ and $\alpha\in(0,1)$. Our aim is to find a measure $\mathbb{Q}$, a Brownian motion $V$ under $\mathbb{Q}$ and a random variable $G$ such that the distributions of $G$ under $\mathbb{P}$ and under $\mathbb{Q}$ belong to the same class, and for fixed $t > 0$ \begin{align} \label{2/6}
B_t + \mu t = \alpha (V_t + \mu t) + (1-\alpha) G. \end{align} Fix $T$, $T>t$, and $\gamma > 0$. To find a representation \eqref{2/6} we take $\mathbb{Q}=\mathbb{Q}^{(\gamma, \mu,\beta,T)}$ given by (\ref{MQ}) with $\beta = \frac{\gamma\alpha}{1-\alpha}$. Then $V_t = B_t + \gamma\int_0^t\theta_s^{(\mu,\beta)}ds$ is a Brownian motion under $\mathbb{Q}$. By (\ref{Int}) and \eqref{1/6} we have \begin{align*}
e^{V_t +\mu t} = \theta_t^{(\mu,\beta)}\Big(1+\beta\int_0^te^{B_s +\mu s}ds\Big)^{\frac{\beta+\gamma}{\beta}}. \end{align*} From the definition of $\theta_t^{(\mu,\beta)}$ and the last equality we obtain \begin{align} \label{2/7}
e^{B_t + \mu t} = e^{\frac{\beta}{\beta+\gamma}(V_t + \mu t)}(\theta_t^{(\mu,\beta)})^{\frac{\gamma}{\beta +\gamma}}. \end{align} Since $\alpha= \frac{\beta}{\beta+\gamma}$, taking logarithms of both sides of \eqref{2/7} yields $$ B_t+ \mu t = \alpha (V_t + \mu t) + (1-\alpha ) \ln \theta_t^{(\mu,\beta)}, $$ which is \eqref{2/6} with $G = \ln \theta_t^{(\mu,\beta)}$. Observe that $\ln\theta_t^{(\mu,\beta)}$, under both $\mathbb{P}$ and $\mathbb{Q}$, belongs to the same family, namely logarithms of Verhulst processes. This finishes the construction. \end{example} Now we present another formula for the Laplace transform of the Verhulst process. \begin{proposition} \label{LapT2} Let $\theta_t^{(\mu,\beta)}$ be a Verhulst process with parameters $(\mu,\beta)$. Then for $\lambda \geq 0$ \begin{equation}
\mathbb{E} e^{-\lambda \theta_t^{(\mu,\beta)}} = e^{\beta}\mathbb{E}e^{-(\beta+\lambda)e^{B_t+\mu t}+\beta(\mu+1/2)\int_0^te^{B_s+\mu s}ds -\frac{\beta^2}{2}\int_0^te^{2B_s+2\mu s}ds}, \end{equation} where $B$ is a standard Brownian motion under $\mathbb{P}$. \end{proposition} \begin{proof}
From Proposition \ref{Stb} we know that a geometric Brownian motion $e^{B_t+\mu t}$
under $\mathbb{P}$ becomes a Verhulst process $\tilde{\theta}_t^{(\mu,\beta)}$ with parameters $(\mu,\beta)$ under $\mathbb{Q} = \mathbb{Q}^{(\beta,\mu,0,t)}$ given by \eqref{MQ}. Thus
\begin{align*}
\mathbb{E} e^{-\lambda \theta_t^{(\mu,\beta)}} = \mathbb{E}_{\mathbb{Q}}e^{-\lambda \tilde{\theta}_t^{(\mu,\beta)}} = \mathbb{E} e^{-\lambda e^{B_t+\mu t}} M^{(\beta,\mu,0)}_t,
\end{align*}
where $M^{(\beta,\mu,0)}$ is defined by (\ref{defM}). The assertion follows from \eqref{SDE} and some simple algebra. \end{proof} \section{Exponential random time and drift $\mu = -\frac{1}{2}$} In this section we consider a Verhulst process $\theta$ starting from $x>0$ with $\mu = -\frac{1}{2}$ and $\beta=x$. Thus, $\theta$ is of the special form \begin{align} \label{1/7}
\theta_t = \frac{xe^{B_t - \frac{t}{2}}}{1+x\int_0^te^{B_u - \frac{u}{2}}du}, \end{align} where $B$ is a Brownian motion. Let $T_{\lambda}$ be an exponential random variable with parameter $\lambda>0$, independent of $B$. We have \begin{lemma}\label{expdens} Let $v = \sqrt{2\lambda+1/4}$. The density of $\theta_{T_{\lambda}}$ is given on $(0,\infty)$ by \begin{equation}
\mathbb{P}(\theta_{T_{\lambda}}\in dz) = 2\lambda e^{x-z}\sqrt{x/z^3}F_{v}(x,z)dz, \end{equation} where $$F_{v}(x,y) = I_{v}(x\wedge y)K_{v}(x\vee y)$$ is a product of two modified Bessel functions. \end{lemma} \begin{proof} For $r\geq 0$ from Proposition \ref{LapT2} (where we put $\mu = -1/2$, $\lambda = xr$, $\beta = x$) we obtain \begin{align*}
\mathbb{E}e^{-r\theta_t} = e^x\mathbb{E}e^{-x(r+1)e^{B_t-t/2} - \frac{x^2}{2}\int_0^te^{2B_u -u}du}. \end{align*} Thus, using \cite[Theorem 4.11]{MatII} describing the joint density of the vector $(e^{B_{T_{\lambda}}-{T_{\lambda}}/2},\int_0^{T_{\lambda}}e^{2B_u -u}du)$, we have \begin{align} \label{1/8}
\mathbb{E}e^{-r\theta_{T_{\lambda}}} &= e^x\mathbb{E}e^{-x(r+1)e^{B_{T_{\lambda}}-{T_{\lambda}}/2} - \frac{x^2}{2}\int_0^{T_{\lambda}}e^{2B_u -u}du}\\ \notag
&= e^x\int_0^{\infty}\int_0^{\infty}e^{-x(r+1)y-\frac{x^2}{2}w}\frac{\lambda}{y^{v+5/2}}p^{(v)}(w,1,y)dydw, \end{align} where $p^{(v)}$ is the transition probability density of the Bessel process with index $v$. Again from \cite{MatII} (see Remark 2.1) we have for $\alpha >0$ $$
\int_0^{\infty}e^{-\alpha t}p^{(v)}(t,x,y)dt = 2y\Big(\frac{y}{x}\Big)^vF_{v}(\sqrt{2\alpha}x, \sqrt{2\alpha}y). $$ Thus by \eqref{1/8}, Fubini's theorem and the last identity we have \begin{align*} e^x&\int_0^{\infty}\int_0^{\infty}e^{-x(r+1)y-\frac{x^2}{2}w}\frac{\lambda}{y^{v+5/2}}p^{(v)}(w,1,y)dydw\\ &= \int_0^{\infty}e^{x-x(r+1)y}\frac{\lambda}{y^{v+5/2}}2y^{v+1}F_v(x,xy)dy\\ &= \int_0^{\infty}e^{x-rw -w}\frac{2\lambda}{w^{3/2}}\sqrt{x}F_v(x,w)dw \end{align*} and the assertion follows from the standard Laplace transform argument. \end{proof} Now we are ready to derive the exact formula for the density of $\theta_t$. \begin{theorem} Fix $t> 0$. Then on $(0,\infty)$ \begin{equation}
\mathbb{P}(\theta_{t}\in dw) = e^{-\frac{t}{8}+x -w}\sqrt{\frac{x}{w^3}}\Big(\int_0^{\infty}\frac{1}{z}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}\Theta(xw/z,t)dz\Big)dw, \end{equation} where \begin{equation}
\Theta(r,t) = \frac{r}{\sqrt{2\pi^3t}}e^{\frac{\pi^2}{2t}}\int_0^{\infty}e^{-\frac{z^2}{2t}}e^{-r\cos(z)}\sinh(z)\sin(\pi z/t)dz. \end{equation} \end{theorem} \begin{proof}
We have on $(0,\infty)$ \begin{align}\label{RHS}
\mathbb{P}(\theta_{T_{\lambda}}\in dw) = \lambda\int_0^{\infty}e^{-\lambda t}\mathbb{P}(\theta_{t}\in dw)dt, \end{align} where $T_{\lambda}$ is an exponential random variable with parameter $\lambda$, independent of the process $\theta$. On the other hand, from Lemma \ref{expdens}, for $\lambda >0$ and $v=\sqrt{2\lambda+1/4}$, we have on $(0,\infty)$ \begin{align*}
\mathbb{P}&(\theta_{T_{\lambda}}\in dw) = 2\lambda e^{x-w}\sqrt{\frac{x}{w^3}}F_{v}(x,w)dw\\
&= 2\lambda e^{x-w}\sqrt{\frac{x}{w^3}}I_{v}(x\wedge w)K_{v}(x\vee w)dw\\
&= 2\lambda e^{x-w}\sqrt{\frac{x}{w^3}}\frac12\int_0^{\infty}e^{-\frac{z}{2}-\frac{(x\vee w)^2+(x\wedge w)^2}{2z}}I_v((x\vee w)(x\wedge w)/z)\frac{dz}{z}dw\\
&= \lambda e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}I_v(xw/z)\frac{dz}{z}dw, \end{align*} where in the third equality we used the identity $$ I_{v}(x)K_{v}(w) = \frac12\int_0^{\infty}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}I_v(xw/z)\frac{dz}{z} $$ for $w\geq x>0$ (see (2.7) in \cite{MatII}). To continue we use another identity for modified Bessel functions (see (2.10) in \cite{MatII}) \begin{align*}
I_v(r) = \int_0^{\infty}e^{-\frac{v^2}{2}t}\Theta(r,t)dt, \ r>0 \end{align*} and obtain \begin{align*}
\lambda &e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}I_v(xw/z)\frac{dz}{z}\\
&= \lambda e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}\frac{1}{z}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}\int_0^{\infty}e^{-\frac{v^2}{2}t}\Theta(xw/z,t)\,dt\,dz\\
&= \lambda e^{x-w}\sqrt{\frac{x}{w^3}}\int_0^{\infty}\int_0^{\infty}e^{-\lambda t -\frac{t}{8}}\Theta(xw/z,t)\frac{1}{z}e^{-\frac{z}{2}-\frac{x^2+w^2}{2z}}dzdt, \end{align*} where in the last equality we used Fubini's theorem and that $v^2 = 2\lambda + 1/4$. To finish the proof we compare the last expression with (\ref{RHS}). \end{proof} Now we consider a family of processes defined by \eqref{1/7} for all $x>0$, and to underline the dependence on $x$ we denote this process by $\theta(x)$, i.e. \begin{align*}
\theta_t(x) = \frac{xe^{B_t - \frac{t}{2}}}{1+x\int_0^te^{B_u - \frac{u}{2}}du}. \end{align*} From Lemma \ref{expdens} we can deduce \begin{proposition} Let $T_{\lambda}, \hat{T}_2$ be two independent exponential random variables, which are independent of the Brownian motion $B$. Then on $(0,\infty)$ \begin{equation}
z^2\mathbb{P}\Big(\theta_{T_{\lambda}}(\hat{T}_2)\in dz\Big) = \mathbb{P}(\hat{T}_2\in dz)\mathbb{E}(\theta_{T_{\lambda}}(z))^2. \end{equation} \end{proposition} \begin{proof} From Lemma \ref{expdens} for $x>0$, we have on $(0,\infty)$ \begin{align*}
e^{-2x}z^2\mathbb{P}(\theta_{T_{\lambda}}(x)\in dz) = 2\lambda e^{-x-z}\sqrt{xz}F_{v}(x,z)dz. \end{align*} Thus \begin{align*}
e^{-2x}\mathbb{E}(\theta_{T_{\lambda}}(x))^2 = \int_0^{\infty}2\lambda e^{-x-z}\sqrt{xz}F_{v}(x,z)dz. \end{align*} From symmetry of $F_v$, after integrating on $x$, we obtain \begin{align*}
z^2\int_0^{\infty}e^{-2x}\mathbb{P}(\theta_{T_{\lambda}}(x)\in dz)dx &= \Big(\int_0^{\infty}2\lambda e^{-x-z}\sqrt{xz}F_{v}(x,z)dx\Big)dz\\
&= e^{-2z}\mathbb{E}(\theta_{T_{\lambda}}(z))^2dz. \end{align*} The assertion follows. \end{proof}
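The product formula for $I_vK_v$ used above (formula (2.7) in \cite{MatII}) is easy to verify numerically. The sketch below (the order $v=0.75$ and arguments $x=1$, $w=2$ are our arbitrary choices) evaluates both sides with SciPy, using the exponentially scaled Bessel function \texttt{ive} to avoid overflow in the integrand near $z=0$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, kv, ive

v, x, w = 0.75, 1.0, 2.0          # illustrative values with w >= x > 0

lhs = iv(v, x) * kv(v, w)

# Integrand of (2.7) rewritten with the scaled Bessel function
# ive(v, z) = I_v(z) e^{-z}, so that
# e^{-z/2 - (x^2+w^2)/(2z)} I_v(xw/z) = e^{-z/2 - (w-x)^2/(2z)} ive(v, xw/z).
def integrand(z):
    if z == 0.0:
        return 0.0
    return float(np.exp(-z / 2.0 - (w - x) ** 2 / (2.0 * z)) * ive(v, x * w / z) / z)

rhs = 0.5 * quad(integrand, 0.0, np.inf)[0]
```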
\end{document}
\begin{document}
\begin{abstract} We prove symplectic non-squeezing for the cubic nonlinear Schr\"odinger equation on the line via finite-dimensional approximation. \end{abstract}
\maketitle
\section{Introduction}
The main result of this paper is a symplectic non-squeezing result for the cubic nonlinear Schr\"odinger equation on the line: \begin{align*}\label{nls}\tag{NLS}
iu_t+\Delta u= \pm |u|^2 u. \end{align*} We consider this equation for initial data in the underlying symplectic Hilbert space $L^2({\mathbb{R}})$. For this class of initial data, the equation is globally well-posed in both the defocusing and focusing cases, that is, with $+$ and $-$ signs in front of the nonlinearity, respectively. Correspondingly, we will be treating the defocusing and focusing cases on equal footing.
Our main result is the following:
\begin{thm}[Non-squeezing for the cubic NLS]\label{thm:nsqz}
Fix $z_*\in L^2({\mathbb{R}})$, $l\in L^2({\mathbb{R}})$ with $\|l\|_2=1$, $\alpha\in {\mathbb{C}}$, $0<r<R<\infty$, and $T>0$. Then there exists $u_0\in B(z_*, R)$ such that the solution $u$ to \eqref{nls} with initial data $u(0)=u_0$ satisfies \begin{align}\label{th:1}
|\langle l, u(T)\rangle-\alpha|>r. \end{align} \end{thm}
Colloquially, this says that the flow associated to \eqref{nls} does not carry any ball of radius $R$ into any cylinder whose cross-section has radius $r<R$. Note that it is immaterial where the ball and cylinder are centered; however, it is essential that the cross-section of the cylinder is defined with respect to a pair of canonically conjugate coordinates.
The formulation of this result is dictated by the non-squeezing theorem of Gromov, \cite[\S0.3A]{Gromov}, which shows the parallel assertion for \emph{any} symplectomorphism of finite-dimensional Hilbert spaces. At the present time, it is unknown whether this general assertion extends to the infinite-dimensional setting.
Non-squeezing has been proved for a number of PDE models; see \cite{Bourg:approx,Bourg:aspects,CKSTT:squeeze,HongKwon,KVZ:nsqz2d,Kuksin,Mendelson,Roumegoux}. We have given an extensive review of this prior work in our paper \cite{KVZ:nsqz2d} and so will not repeat ourselves here. Rather, we wish to focus on our particular motivations for treating the model \eqref{nls}.
With the exception of \cite{KVZ:nsqz2d}, which considers the cubic NLS on ${\mathbb{R}}^2$, all the papers listed above considered the non-squeezing problem for equations posed on tori. One of the initial goals for the paper \cite{KVZ:nsqz2d} was to treat (for the first time) a problem in infinite volume. Moreover, we sought also to obtain a first unconditional result where the regularity required to define the symplectic form coincided with the scaling-critical regularity for the equation.
Many of the central difficulties encountered in \cite{KVZ:nsqz2d} stem from the criticality of the problem considered there, to the point that they obscure the novel aspects associated to working in infinite volume. One of our main motivations in writing this paper is to elaborate our previous approach in a setting unburdened by the specter of criticality. In this way, we hope also to provide a more transparent framework for attacking (subcritical) non-squeezing problems in infinite volume.
In keeping with the expository goal just mentioned, we have elected here to treat a single model, namely, the cubic NLS in one dimension. What follows applies equally well to any mass-subcritical NLS in any space dimension --- it is simply a matter of adjusting the H\"older/Strichartz exponents appropriately.
Let us now briefly outline the method of proof. Like previous authors, the goal is to combine a suitable notion of finite-dimensional approximation with Gromov's theorem in that setting. The particular manner in which we do this mirrors \cite{KVZ:nsqz2d}, but less so other prior work.
In the presence of a frequency truncation, NLS on a torus (of, say, large circumference) becomes a finite-dimensional system and so is non-squeezing in the sense of Gromov. In particular, there is an initial datum $u_0$ in the ball of radius $R$ about $z_*$ so that the corresponding solution $u$ obeys \eqref{th:1} at time $T$. We say that $u$ is a \emph{witness} to non-squeezing.
Now choosing a sequence of frequency cutoff parameters $N_n\to\infty$ and a sequence of circumferences $L_n\to\infty$, Gromov guarantees that there is a sequence of witnesses $u_n$. Our overarching goal is to take a ``limit'' of these solutions and so obtain a witness to non-squeezing for the full (untruncated) model on the whole line. This goal is realized in two steps: (i) Removal of the frequency cutoff for the problem in infinite volume; see Section~\ref{S:4}. (ii) Approximation of the frequency-truncated model in infinite volume by that on a large torus; see Section~\ref{S:5}. The frequency truncation is essential for the second step since it enforces a form of finite speed of propagation.
The principal simplifications afforded by working in the subcritical case appear in the treatment of step (i); they are two-fold. First, the proof of large-data space-time bounds for solutions to \eqref{nls} is elementary and applies also (after trivial modifications) to the frequency-truncated PDE. This is not true for the critical problem. Space-time bounds for the mass-critical NLS are a highly nontrivial result of Dodson \cite{Dodson:3,Dodson:1,Dodson}; moreover, the argument does not apply in the frequency-truncated setting because the truncation ruins the monotonicity formulae at the heart of his argument. For a proof of uniform space-time bounds for suitably frequency-truncated cubic NLS on ${\mathbb{R}}^2$, see Section~4 in \cite{KVZ:nsqz2d}.
The second major simplification relative to \cite{KVZ:nsqz2d} appears when we prove wellposedness in the weak topology on $L^2$. Indeed, the reader will notice that the statement of Theorem~\ref{T:weak wp} here is essentially identical to that of Theorem~6.1 in \cite{KVZ:nsqz2d}; however, the two proofs bear almost no relation to one another. Here we exploit the fact that bounded-mass solutions are compact on bounded sets in space-time in a scaling-critical $L^p$ norm; this is simply not true in the mass-critical case. See Section~4 for further remarks on this topic.
\section{Preliminaries}
Throughout this paper, we will write the nonlinearity as $F(u):=\pm|u|^2u$.
\begin{defn}[Strichartz spaces] We define the Strichartz norm of a space-time function via $$
\| u \|_{S(I\times{\mathbb{R}})} := \| u\|_{C^{ }_t L^2_x (I\times{\mathbb{R}})} + \| u\|_{L_t^4 L^\infty_x (I\times{\mathbb{R}})} $$ and the dual norm via $$
\| G \|_{N(I\times{\mathbb{R}})} := \inf_{G=G_1+G_2} \| G_1\|_{L^1_t L^2_x (I\times{\mathbb{R}})} + \| G_2 \|_{L^{4/3}_t L^{1 }_x (I\times{\mathbb{R}})}. $$ We define Strichartz spaces on the torus analogously. \end{defn}
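One convenience of the $S$-norm is that, via H\"older's inequality, it controls the full family of one-dimensional admissible space-time norms: whenever $\tfrac2q+\tfrac1r=\tfrac12$ with $4\le q\le\infty$, interpolating between the two constituent norms gives \begin{align*}
\| u \|_{L^q_t L^r_x(I\times{\mathbb{R}})} \le \| u \|_{C^{ }_t L^2_x(I\times{\mathbb{R}})}^{1-\frac4q} \, \| u \|_{L^4_t L^\infty_x(I\times{\mathbb{R}})}^{\frac4q} \le \| u \|_{S(I\times{\mathbb{R}})}. \end{align*} In particular, taking $q=r=6$, the $S$-norm controls the $L^6_{t,x}$ norm that pervades the analysis below.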
The preceding definition permits us to write the full family of Strichartz estimates in a very compact form; see \eqref{StrichartzIneq} below. The other basic linear estimate that we need is local smoothing; see \eqref{LocalSmoothing} below.
\begin{lem}[Basic linear estimates] Suppose $u:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ obeys $$ (i\partial_t +\Delta) u = G. $$ Then for every $T>0$ and every $R>0$, \begin{align}\label{StrichartzIneq}
\| u\|_{S([0,T]\times{\mathbb{R}})} \lesssim \| u(0) \|_{L^2_x} + \| G \|_{N([0,T]\times{\mathbb{R}})}, \end{align} \begin{align}\label{LocalSmoothing}
\| u\|_{L^2_{t,x}([0,T]\times[-R,R])}
\lesssim R^{1/2} \Bigl\{ \| |\nabla|^{-1/2} u(0) \|_{L^2_x} + \| |\nabla|^{-1/2} G \|_{L^1_t L^2_x([0,T]\times{\mathbb{R}})} \Bigr\}. \end{align} \end{lem}
Let $m_{\le 1}:{\mathbb{R}}\to[0,1]$ be smooth, even, and obey $$
m_{\le 1}(\xi) = 1 \text{ for $|\xi|\leq 1$} \qtq{and} m_{\le 1}(\xi) = 0 \text{ for $|\xi|\geq 2$.} $$ We define Littlewood--Paley projections onto low frequencies according to \begin{align}\label{E:LP defn} \widehat{ P_{\leq N} f }(\xi) := m_{\le 1}(\xi/N) \hat f(\xi) \end{align} and then projections onto individual frequency bands via \begin{align}\label{E:LP defn'} f_N := P_N f := [ P_{\leq N} - P_{\leq N/2} ] f . \end{align}
\section{Well-posedness theory for several NLS equations}
In the course of the proof of Theorem~\ref{thm:nsqz}, we will need to consider the cubic NLS both with and without frequency truncation. To consider both cases simultaneously, we consider the following general form: \begin{align}\label{eq:1} iu_t+\Delta u= \mathcal P F(\mathcal P u), \end{align} where $\mathcal P$ is either the identity or the projection to low frequencies $P_{\leq N}$ for some $N\in 2^{\mathbb{Z}}$. For the results of this section, it only matters that $\mathcal P$ is $L^p$ bounded for all $1\leq p\leq\infty$.
\begin{defn}[Solution]\label{D:solution} Given an interval $I\subseteq{\mathbb{R}}$ with $0\in I$, we say that $u:I\times{\mathbb{R}}\to{\mathbb{C}}$ is a \emph{solution} to \eqref{eq:1} with initial data $u_0\in L^2({\mathbb{R}})$ at time $t=0$ if $u$ lies in the classes $C^{ }_t L^2_x(K\times{\mathbb{R}})$ and $L^6_{t,x}(K\times{\mathbb{R}})$ for any compact $K\subset I$ and obeys the Duhamel formula \begin{align*} u(t) = e^{it\Delta} u_0 - i \int_{0}^{t} e^{i(t-s)\Delta} \mathcal P F\bigl(\mathcal P u(s) \bigr) \, ds \end{align*} for all $t \in I$. \end{defn}
Such solutions to \eqref{eq:1} are unique and conserve both mass and energy: \begin{align*}
\int_{{\mathbb{R}}} |u(t,x)|^2 \,dx \qtq{and}
E(u(t)):=\int_{{\mathbb{R}}}\tfrac 12 |\nabla u(t,x)|^2 \pm\tfrac14 |\mathcal P u(t,x)|^4 \,dx, \end{align*} respectively. Indeed, \eqref{eq:1} is the Hamiltonian evolution associated to $E(u)$ through the standard symplectic structure: $$ \omega:L^2({\mathbb{R}})\times L^2({\mathbb{R}})\to {\mathbb{R}} \qtq{with} \omega(u,v)=\Im \int_{\mathbb{R}} u(x)\bar v(x)\, dx. $$
The well-posedness theory for \eqref{eq:1} reflects the subcritical nature of this equation with respect to the mass. We record this classical result without a proof.
\begin{lem}[Well-posedness of \eqref{eq:1}]\label{lm:loc}
Let $u_0\in L^2({\mathbb{R}})$ with $\|u_0\|_2\le M$. There exists a unique global solution $u:{\mathbb{R}}\times{\mathbb{R}}\to {\mathbb{C}}$ to \eqref{eq:1} with initial data $u(0)=u_0$. Moreover, for any $T>0$, \begin{align*}
\|u\|_{S([0,T]\times{\mathbb{R}})}\lesssim_T M. \end{align*} If additionally $u_0\in H^1({\mathbb{R}})$, then \begin{align*}
\|\partial_x u\|_{S([0,T]\times{\mathbb{R}})} \lesssim_{M,T} \| \partial_x u_0 \|_{L^2({\mathbb{R}})}. \end{align*} \end{lem}
\section{Local compactness and well-posedness in the weak topology}\label{S:4}
The arguments presented in this section show that families of solutions to \eqref{nls} that are uniformly bounded in mass are precompact in $L^p_{t,x}$ for $p<6$ on bounded sets in space-time. Furthermore, we have well-posedness in the weak topology on $L^2$; specifically, if we take a sequence of solutions $u_n$ to \eqref{nls} for which the initial data $u_n(0)$ converges \emph{weakly} in $L^2({\mathbb{R}})$, then $u_n(t)$ converges weakly at all times $t\in{\mathbb{R}}$. Moreover, the pointwise in time weak limit is in fact a solution to \eqref{nls}.
Justification of the assertions made in the first paragraph can be found within the proof of \cite[Theorem~1.1]{Nakanishi:SIAM}; however, this is not sufficient for our proof of symplectic non-squeezing. Rather, we have to prove a slightly stronger assertion that allows the functions $u_n$ to obey different equations for different $n$; specifically, \begin{align}\label{eqpn} i\partial_t u_n+\Delta u_n=P_{\le N_n} F(P_{\le N_n}u_n) \end{align} where $N_n\to \infty$; see Theorem~\ref{T:weak wp} below. For the sake of completeness, we give an unabridged proof of this theorem, despite substantial overlap with the arguments presented in~\cite{Nakanishi:SIAM}.
What follows adapts easily to the setting of any mass-subcritical NLS. It does \emph{not} apply at criticality: compactness fails in any scale-invariant space-time norm (even on compact sets). Nevertheless, well-posedness in the weak topology on $L^2$ does hold for the mass-critical NLS; see \cite{KVZ:nsqz2d}. Well-posedness in the weak topology has also been demonstrated for some energy-critical models; see \cite{BG99, KMV:cq}. In all three critical results \cite{BG99, KMV:cq, KVZ:nsqz2d}, the key to overcoming the lack of compactness is to employ concentration compactness principles. We warn the reader, however, that the arguments presented in \cite{KVZ:nsqz2d} are far more complicated than would be needed to merely verify well-posedness in the weak topology for the mass-critical NLS. In that paper, we show non-squeezing and so (as here) we were compelled to consider frequency-truncated models analogous to \eqref{eqpn}. Due to criticality, this change has a profound effect on the analysis; see \cite{KVZ:nsqz2d} for further discussion.
Simple necessary and sufficient conditions for a set $\mathcal F\subseteq L^p({\mathbb{R}}^n)$ to be precompact were given in \cite{Riesz}, perfecting earlier work of Kolmogorov and Tamarkin. In addition to boundedness (in $L^p$ norm), the conditions are tightness and equicontinuity: $$
\lim_{R\to\infty} \sup_{f\in\mathcal F} \|f\|_{L^p(\{|x|>R\})} =0
\qtq{and} \lim_{h\to 0} \sup_{f\in\mathcal F} \|f(\cdot+h)-f(\cdot)\|_{L^p({\mathbb{R}}^n)} =0, $$ respectively. The basic workhorse for equicontinuity in our setting is the following lemma:
\begin{lem}\label{L:workhorse} Fix $T>0$ and suppose $u:[-2T,2T]\times{\mathbb{R}}\to{\mathbb{C}}$ obeys \begin{align}
\| u \|_{\tilde S} := \| u \|_{L^\infty_t L^2_x([-2T,2T]\times {\mathbb{R}})} + \| (i\partial_t+\Delta) u \|_{L^2_{t,x}([-2T,2T]\times{\mathbb{R}})} < \infty. \end{align} Then \begin{align}\label{22Holder}
\| u(t+\tau,x+y) - u(t,x) \|_{L^2_{t,x}([-T,T]\times[-R,R])} \lesssim_{R,T} \bigl\{ |\tau|^{1/5} + |y|^{1/3} \bigr\} \| u \|_{\tilde S}, \end{align}
uniformly for $|\tau|\leq T$ and $y\in {\mathbb{R}}$. \end{lem}
\begin{proof} It is sufficient to prove the result when $y=0$ and when $\tau=0$; the full result then follows by the triangle inequality. In both cases, we use \eqref{LocalSmoothing} to estimate the high-frequency portion as follows: \begin{align}\label{22uhi}
\| u_{>N}(t+\tau,x+y) - u_{>N}(t,x) \|_{L^2_{t,x}([-T,T]\times[-R,R])} \lesssim R^{\frac12} N^{-\frac12}(1+T^{\frac12}) \| u \|_{\tilde S}. \end{align}
Next we turn to the low-frequency contribution. Consider first the case $\tau=0$. By Bernstein's inequality, $$
\| u_{\leq N}(t,x+y) - u_{\leq N}(t,x) \|_{L^2_x({\mathbb{R}})} \lesssim N|y| \, \| u(t) \|_{L^2_x({\mathbb{R}})} \lesssim N|y| \, \| u \|_{\tilde S}. $$
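This instance of Bernstein's inequality can be seen directly from the fundamental theorem of calculus: writing \begin{align*}
u_{\leq N}(t,x+y) - u_{\leq N}(t,x) = \int_0^1 y\,(\partial_x u_{\leq N})(t,x+\theta y)\,d\theta, \end{align*} Minkowski's inequality and the bound $\|\partial_x u_{\le N}(t)\|_{L^2_x}\lesssim N\|u(t)\|_{L^2_x}$ (Plancherel, since $\widehat{u_{\le N}}$ is supported where $|\xi|\le 2N$) yield the claim.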
Therefore, setting $N=|y|^{-2/3}$, integrating in time, and using \eqref{22uhi}, we obtain \begin{align}\label{22ulo2}
\| u(t,x+y) - u(t,x) \|_{L^2_{t,x}([-T,T]\times[-R,R])} \lesssim (R^{\frac12}+(RT)^{\frac12}+T^{\frac12} ) |y|^{\frac13} \| u \|_{\tilde S}. \end{align}
Consider now the case $y=0$. Here it is convenient to use the Duhamel representation of $u$: $$ u(t) = e^{it\Delta} u(0) - i \int_0^t e^{i(t-s)\Delta} (i\partial_s+\Delta) u(s)\,ds. $$ To exploit this identity, we first observe that $$
\bigl\| P_{\leq N} \bigl[ e^{i\tau\Delta} - 1] e^{it\Delta} u(0) \bigr\|_{L^2_{x}({\mathbb{R}})} \lesssim N^2 |\tau| \| u(0) \|_{L^2_{x}({\mathbb{R}})}. $$ Then, by the Duhamel representation and the Strichartz inequality, we obtain \begin{align*}
\bigl\| u_{\leq N}(t+\tau) - u_{\leq N}(t) \bigr\|_{L^2_x({\mathbb{R}})}
&\lesssim N^2 |\tau| \| u(0) \|_{L^2_{x}({\mathbb{R}})} + \| (i\partial_t+\Delta)u\|_{L^1_t L^2_x([t,t+\tau]\times{\mathbb{R}})} \\
&\lesssim \{ N^2 |\tau| + |\tau|^{\frac12} \} \| u \|_{\tilde S}. \end{align*}
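Here the $N^2|\tau|$ prefactor in the free-evolution term originates (via Plancherel) in the elementary symbol bound \begin{align*}
\bigl| m_{\le 1}(\xi/N)\bigl(e^{-i\tau\xi^2}-1\bigr)\bigr| \le |\tau|\,\xi^2\,\mathbf{1}_{\{|\xi|\le 2N\}} \le 4N^2|\tau|, \end{align*} while the $|\tau|^{\frac12}$ term comes from applying Cauchy--Schwarz in time over the interval $[t,t+\tau]$.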
Combining this with \eqref{22uhi} and choosing $N=|\tau|^{-2/5}$ yields \begin{equation}\label{22ulo1}
\bigl\| u(t+\tau) - u(t) \bigr\|_{L^2_{t,x}([-T,T]\times[-R,R])}
\lesssim (R^{\frac12}+(RT)^{\frac12}+ T^{\frac12}) (|\tau|^{\frac15} +|\tau|^{\frac12}) \| u \|_{\tilde S}. \end{equation}
This completes the proof of the lemma. \end{proof}
\begin{prop}\label{P:compact} Let $u_n:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ be a sequence of solutions to \eqref{eqpn} corresponding to some sequence of $N_n>0$. We assume that \begin{equation}\label{apmassbound}
M:= \sup_n \| u_n(0) \|_{L^2_x({\mathbb{R}})} < \infty. \end{equation} Then there exist $v:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ and a subsequence in $n$ so that \begin{align}\label{E:P:compact}
\lim_{n\to\infty} \| u_n - v \|_{L^p_{t,x}([-T,T]\times[-R,R])} = 0, \end{align} for all $R>0$, all $T>0$, and all $1\leq p <6$. \end{prop}
\begin{proof} A simple diagonal argument shows that it suffices to consider a single fixed pair $R>0$ and $T>0$. In what follows, implicit constants will be permitted to depend on $R$, $T$, and $M$.
In view of Lemma~\ref{lm:loc} and \eqref{apmassbound}, we have \begin{equation}\label{apbound}
\sup_n \| u_n \|_{S([-4T,4T]\times{\mathbb{R}})} \lesssim 1. \end{equation} Consequently, if we define $\chi:{\mathbb{R}}^2\to[0,1]$ as a smooth cutoff satisfying $$
\chi(t,x)= \begin{cases} 1 &:\text{ if } |t|\leq T \text{ and } |x|\leq R, \\
0 &:\text{ if } |t| > 2T \text{ or } |x|> 2R, \end{cases} $$ then the sequence $\chi u_n$ is uniformly bounded in $L^2_{t,x}({\mathbb{R}}\times{\mathbb{R}})$. Moreover, by Lemma~\ref{L:workhorse} and \eqref{apbound}, it is also equicontinuous. As it is compactly supported, it is also tight. Thus, $\{\chi u_n\}$ is precompact in $L^2_{t,x}({\mathbb{R}}\times{\mathbb{R}})$ and so there is a subsequence such that \eqref{E:P:compact} holds with $p=2$. That it holds for the other values $1\leq p <6$ then follows from H\"older and \eqref{apbound}, which implies that $\{\chi u_n\}$ is uniformly bounded in $L^6_{t,x}([-T,T]\times{\mathbb{R}})$. \end{proof}
\begin{thm}\label{T:weak wp} Let $u_n:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ be a sequence of solutions to \eqref{eqpn} corresponding to a sequence $N_n\to\infty$. We assume that \begin{equation} u_n(0) \rightharpoonup u_{\infty,0} \quad\text{weakly in $L^2({\mathbb{R}})$} \end{equation} and define $u_\infty$ to be the solution to \eqref{nls} with $u_\infty(0)=u_{\infty,0}$. Then \begin{align}\label{E:P:weak wp} u_n(t) \rightharpoonup u_\infty(t) \quad\text{weakly in $L^2({\mathbb{R}})$} \end{align} for all $t\in{\mathbb{R}}$. \end{thm}
\begin{proof} It suffices to verify \eqref{E:P:weak wp} along a subsequence; moreover, we may restrict our attention to a fixed (but arbitrary) time window $[-T,T]$. Except where indicated otherwise, all space-time norms in this proof will be taken over the slab $[-T,T]\times{\mathbb{R}}$.
Weak convergence of the initial data guarantees that this sequence is bounded in $L^2({\mathbb{R}})$ and so by Lemma~\ref{lm:loc} we have \begin{equation}\label{apb}
\sup_n \ \| u_n \|_{L^\infty_t L^2_x} + \| u_n \|_{L^6_{t,x}} \lesssim 1. \end{equation} The implicit constant here depends on $T$ and the uniform bound on $u_n(0)$ in $L^2({\mathbb{R}})$. Such dependence will be tacitly permitted in all that follows.
Passing to a subsequence, we may assume that \eqref{E:P:compact} holds for some $v$, all $R>0$, and our chosen $T$. This follows from Proposition~\ref{P:compact}. Combining \eqref{apb} and \eqref{E:P:compact} yields \begin{equation}\label{apb'}
\| v \|_{L^\infty_t L^2_x} + \| v \|_{L^6_{t,x}} \lesssim 1. \end{equation} Moreover, as $L^2({\mathbb{R}})$ admits a countable dense collection of $C^\infty_c$ functions, \eqref{E:P:compact} guarantees that \begin{equation}\label{tinOmega} u_n(t) \rightharpoonup v(t) \quad\text{weakly in $L^2({\mathbb{R}})$ for all $t\in\Omega$}, \end{equation} where $\Omega\subseteq[-T,T]$ is of full measure.
We now wish to take weak limits on both sides of the Duhamel formula \begin{equation*}
u_n(t) = e^{it\Delta} u_n(0) - i \int_0^t e^{i(t-s)\Delta} P_{\leq N_n} F\bigl(P_{\leq N_n} u_n(s)\bigr)\,ds. \end{equation*} Clearly, $e^{it\Delta} u_n(0)\rightharpoonup e^{it\Delta} u_{\infty,0}$ weakly in $L^2({\mathbb{R}})$. We also claim that \begin{equation}\label{v du hamel main} \wlim_{n\to\infty} \int_0^t e^{i(t-s)\Delta} P_{\leq N_n} F\bigl(P_{\leq N_n} u_n(s)\bigr)\,ds = \int_0^t e^{i(t-s)\Delta} F\bigl(v(s)\bigr)\,ds, \end{equation} where the weak limit is with respect to the $L^2({\mathbb{R}})$ topology. This assertion will be justified later. Taking this for granted for now, we deduce that \begin{equation}\label{v du hamel}
\wlim_{n\to\infty} u_n(t) = e^{it\Delta} u_\infty(0) - i \int_0^t e^{i(t-s)\Delta} F(v(s))\,ds. \end{equation} Moreover, we observe that RHS\eqref{v du hamel} is continuous in $t$, with values in $L^2({\mathbb{R}})$, and that LHS\eqref{v du hamel} agrees with $v(t)$ for almost every $t$. Correspondingly, after altering $v$ on a space-time set of measure zero, we obtain $v\in C([-T,T];L^2({\mathbb{R}}))$ that still obeys \eqref{apb'} but now also obeys \begin{equation}\label{v du hamel'}
v(t) = e^{it\Delta} u_\infty(0) - i \int_0^t e^{i(t-s)\Delta} F(v(s))\,ds \qtq{and} \wlim_{n\to\infty} u_n(t) = v(t) , \end{equation} for \emph{all} $t\in[-T,T]$. By Definition~\ref{D:solution} and Lemma~\ref{lm:loc}, we deduce that $v=u_\infty$ on $[-T,T]$ and that \eqref{E:P:weak wp} holds for $t\in [-T,T]$.
To complete the proof of Theorem~\ref{T:weak wp}, it remains only to justify \eqref{v du hamel main}. To this end, let us fix $\psi\in L^2({\mathbb{R}})$. We will divide our task into three parts.
Part 1: By H\"older's inequality, \eqref{apb}, and the dominated convergence theorem, \begin{align*}
\biggl| \biggl\langle \psi,&\ \int_0^t e^{i(t-s)\Delta} \bigl[ F\bigl(P_{\leq N_n} u_n(s)\bigr) - P_{\leq N_n} F\bigl(P_{\leq N_n} u_n(s)\bigr) \bigr]\,ds \biggr\rangle \biggr| \\
&\leq \biggl| \int_0^t \ \Bigl\langle e^{-i(t-s)\Delta} P_{>N_n}\psi,\ F\bigl(P_{\leq N_n} u_n(s)\bigr) \Bigr\rangle \,ds\, \biggr| \\
&\leq \sqrt{T} \| P_{>N_n}\psi \|_{L^2_x} \| u_n \|_{L^6_{t,x}}^3 = o(1) \quad\text{as $n\to\infty$.} \end{align*}
Part 2: Let $\chi_R$ denote the indicator function of $[-R,R]$ and let $\chi_R^c$ denote the indicator of the complementary set. Arguing much as in Part~1, we have \begin{align*}
\sup_n \biggl| \biggl\langle \psi,&
\ \int_0^t e^{i(t-s)\Delta} \chi_R^c \bigl[ F\bigl(P_{\leq N_n} u_n(s)\bigr) - F\bigl(v(s)\bigr) \bigr]\,ds \biggr\rangle \biggr| \\
&\leq T^{1/2} \| \chi_R^c e^{it \Delta} \psi \|_{L_{t,x}^6} \Bigl\{ \| v \|_{L^6_{t,x}}^2\|v\|_{L_t^\infty L_x^2} + \sup_n \| u_n \|_{L^6_{t,x}}^2\|u_n\|_{L_t^\infty L_x^2} \Bigr\} \\
&= o(1) \quad\text{as $R\to\infty$.} \end{align*}
Part 3: An easy application of Schur's test shows that $$
\| \chi_R P_{\leq N_n} \chi_{2R}^c f \|_{L^p({\mathbb{R}})} \lesssim_\beta (N_nR)^{-\beta} \| f \|_{L^p({\mathbb{R}})} $$ for any $1\leq p\leq \infty$ and any $\beta>0$. Correspondingly, \begin{align*}
\bigl\| \chi_R P_{\leq N_n} (u_n - v) \bigr\|_{L^6_{t,x}} \lesssim \bigl\| \chi_{2R} (u_n - v) \bigr\|_{L^6_{t,x}}
+ (N_nR)^{-\beta} \bigl\{ \| u_n\|_{L^6_{t,x}} + \| v \|_{L^6_{t,x}} \bigr\}. \end{align*}
Using this estimate together with \eqref{apb}, \eqref{apb'}, \eqref{E:P:compact}, and the fact that $N_n\to\infty$, we deduce that \begin{align*}
\bigl\| \chi_R \bigl[ (P_{\leq N_n} u_n) - v \bigr] \bigr\|_{L^6_{t,x}}
&\lesssim \bigl\| \chi_R P_{\leq N_n} (u_n - v) \bigr\|_{L^6_{t,x}} + \bigl\| P_{> N_n} v \bigr\|_{L^6_{t,x}} = o(1) \end{align*} as $n\to\infty$. From this, \eqref{apb}, and \eqref{apb'}, we then easily deduce that \begin{align*}
&\limsup_{n\to\infty}\, \biggl| \biggl\langle \psi,
\ \int_0^t e^{i(t-s)\Delta} \chi_R \bigl[ F\bigl(P_{\leq N_n} u_n(s)\bigr) - F\bigl(v(s)\bigr) \bigr]\,ds \biggr\rangle \biggr| \\
&\ \ \ \lesssim T^{1/2} \| \psi \|_{2}
\limsup_{n\to\infty} \, \bigl\| \chi_R\bigl[ (P_{\leq N_n} u_n) - v \bigr] \bigr\|_{L^6_{t,x}} \Bigl\{ \| v \|_{L^6_{t,x}}^2 + \| u_n \|_{L^6_{t,x}}^2 \Bigr\} = 0. \end{align*}
Combining all three parts proves \eqref{v du hamel main} and so completes the proof of Theorem~\ref{T:weak wp}. \end{proof}
\section{Finite-dimensional approximation}\label{S:5}
As mentioned in the introduction, to prove the non-squeezing result for the cubic NLS on the line we will prove that solutions to this equation are well approximated by solutions to a finite-dimensional Hamiltonian system. As an intermediate step, in this section, we will prove that solutions to the frequency-localized cubic Schr\"odinger equation on the line are well approximated by solutions to the same equation on ever larger tori; see Theorem~\ref{thm:app} below.
To do this, we will need a perturbation theory for the frequency-localized cubic NLS on the torus, which in turn relies on suitable Strichartz estimates for the linear propagator. In Lemma~\ref{lm:stri} below we exploit the observation that with a suitable inter-relation between the frequency cut-off and the torus size, one may obtain the full range of mass-critical Strichartz estimates in our setting. We would like to note that other approaches to Strichartz estimates on the torus \cite{Bourg:1993,BourgainDemeter} are not well suited to the scenario considered here --- they give bounds on unnecessarily long time intervals, so long in fact, that the bounds diverge as the circumference of the circle diverges.
\subsection{Strichartz estimates and perturbation theory on the torus}
Arguing as in \cite[\S7]{KVZ:nsqz2d} one readily obtains frequency-localized finite-time $L^1\to L^\infty$ dispersive estimates on the torus ${\mathbb{T}}_L:={\mathbb{R}}/L{\mathbb{Z}}$, provided the circumference $L$ is sufficiently large. This then yields Strichartz estimates in the usual fashion:
\begin{lem}[Torus Strichartz estimates, \cite{KVZ:nsqz2d}]\label{lm:stri} Given $T>0$ and $1\leq N\in 2^{\mathbb{Z}}$, there exists $L_0=L_0(T,N)\geq 1$ sufficiently large so that for $L\ge L_0$, \begin{gather*}
\|P_{\le N}^L u\|_{S([-T,T]\times{\mathbb{T}}_L)}\lesssim \|u(0)\|_{L^2({\mathbb{T}}_L)} + \|(i\partial_t+\Delta)u\|_{N([-T,T]\times{\mathbb{T}}_L)}. \end{gather*} Here, $P_{\le N}^L$ denotes the Fourier multiplier on ${\mathbb{T}}_L$ with symbol $m_{\leq 1}(\cdot / N)$. \end{lem}
Using these Strichartz estimates one then obtains (in the usual manner) a stability theory for the following frequency-localized NLS on the torus ${\mathbb{T}}_L$: \begin{align}\label{stabt:1} \begin{cases} (i\partial_t+\Delta )u =P_{\le N}^ L F(P_{\le N}^L u),\\ u(0)=P_{\le N}^L u_0. \end{cases} \end{align}
\begin{lem}[Perturbation theory for \eqref{stabt:1}]\label{lm:stab} Given $T>0$ and $1\leq N\in 2^{\mathbb{Z}}$, let $L_0$ be as in Lemma~\ref{lm:stri}. Fix $L\geq L_0$ and let $\tilde u$ be an approximate solution to \eqref{stabt:1} on $[-T,T]$ in the sense that \begin{align*} \begin{cases} (i\partial_t+\Delta) \tilde u=P_{\le N}^L F(P_{\le N}^L \tilde u) +e, \\ \tilde u(0)=P_{\le N}^L \tilde u_0 \end{cases} \end{align*} for some function $e$ and $\tilde u_0\in L^2({\mathbb{T}}_L)$. Assume \begin{align*}
\|\tilde u\|_{L_t^\infty L_x^2([-T,T]\times{\mathbb{T}}_L)}\le M \end{align*} and the smallness conditions \begin{align*}
\|u_0-\tilde u_0\|_{L^2({\mathbb{T}}_L)} \le {\varepsilon} \qtq{and} \|e\|_{N([-T,T]\times{\mathbb{T}}_L)}\le {\varepsilon}. \end{align*} Then if ${\varepsilon}\le {\varepsilon}_0(M,T)$, there exists a unique solution $u$ to \eqref{stabt:1} such that \begin{align*}
\|u-\tilde u\|_{S([-T,T]\times{\mathbb{T}}_L)}\le C(M,T) {\varepsilon}. \end{align*} \end{lem}
\subsection{Approximation by finite-dimensional PDE}
Fix $M>0$, $T>0$, and $\eta_n\to 0$. Let $N_n\to \infty$ be given and let $L_n=L_n(M,T, N_n, \eta_n)$ be large constants to be chosen later; in particular, we will have $L_n\to \infty$. Let $\mathbb{T}_n:={\mathbb{R}}/L_n{\mathbb{Z}}$ and let \begin{align}\label{1241}
u_{0, n}\in \mathcal H_n:=\bigl\{f\in L^2(\mathbb{T}_n):\,P_{>2N_n}^{L_n} f=0 \bigr\} \qtq{with} \|u_{0,n}\|_{L^2(\mathbb{T}_n)}\le M. \end{align}
Consider the following finite-dimensional Hamiltonian systems: \begin{align}\label{per1} \begin{cases} (i\partial_t+\Delta)u_n = P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n} u_n), \qquad (t,x)\in{\mathbb{R}}\times\mathbb{T}_n,\\ u_n(0)=u_{0,n}. \end{cases} \end{align} We will show that for $n$ sufficiently large, solutions to \eqref{per1} can be well approximated by solutions to the corresponding problem posed on ${\mathbb{R}}$ on the fixed time interval $[-T,T]$. Note that as a finite-dimensional system with a coercive Hamiltonian, \eqref{per1} automatically has global solutions.
To continue, we subdivide the interval $[\frac{L_n}4,\frac{L_n}2]$ into at least $16M^2/\eta_n$ disjoint subintervals, each of length $\tfrac{20}{\eta_n} N_n T$. This can be achieved so long as \begin{align}\label{cf:1} L_n\gg \tfrac{M^2}{\eta_n} \cdot \tfrac 1{\eta_n} N_nT. \end{align} By the pigeonhole principle, there exists one such subinterval, which we denote by \begin{align*} I_n:=[c_n-\tfrac {10}{\eta_n} N_n T, c_n+\tfrac {10}{\eta_n}N_nT], \end{align*} such that \begin{align*}
\|u_{0,n}\chi_{I_n}\|_2\le \tfrac 14 \eta_n ^{1/2}. \end{align*} For $0\leq j\leq 4$, let $\chi_{n}^j:{\mathbb{R}}\to [0,1]$ be smooth cutoff functions adapted to $I_n$ such that \begin{align*} \chi_{n}^j(x)=\begin{cases}1, & x\in [c_n-L_n+\tfrac {10-2j}{\eta_n}N_nT, c_n-\tfrac {10-2j}{\eta_n}N_nT],\\ 0, & x\in (-\infty, c_n-L_n+\tfrac {10-2j-1}{\eta_n}N_nT)\cup(c_n-\tfrac {10-2j-1}{\eta_n}N_nT,\infty). \end{cases} \end{align*}
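The pigeonhole step above is just mass counting: if $J_1,\dots,J_K$ denote the chosen subintervals, with $K\ge 16M^2/\eta_n$, then disjointness gives \begin{align*}
\sum_{k=1}^{K} \| u_{0,n}\chi_{J_k} \|_2^2 \le \| u_{0,n} \|_{L^2(\mathbb{T}_n)}^2 \le M^2, \end{align*} so some subinterval $J_k$ obeys $\|u_{0,n}\chi_{J_k}\|_2^2 \le M^2/K \le \eta_n/16$; we take $I_n$ to be this subinterval and $c_n$ its center.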
The following properties of $\chi_n^j$ follow directly from the construction above: \begin{align}\label{cf:1.0} \left\{\quad\begin{gathered} \chi_n^j\chi_n^i =\chi_n^j \quad\text{for all $0\leq j<i\leq 4$,} \\
\|\partial_x^k \chi_n^j \|_{L^\infty} = o\bigl( (N_nT)^{-k} \bigr) \quad\text{for each $k\geq 1$}, \\
\|(1-\chi_n^j)u_{0,n}\|_{L^2({\mathbb{T}}_n)} = o(1) \quad\text{for all $0\le j\le 4$}. \end{gathered}\right. \end{align} Here and subsequently, $o(\cdot)$ refers to the limit as $n\to\infty$. To handle the frequency truncations appearing in \eqref{per1} and \eqref{lm:1}, we need to control interactions between these cutoffs and Littlewood--Paley operators. This is the role of the next lemma.
\begin{lem}[Littlewood--Paley estimates]\label{L:LP est} For $L_n$ sufficiently large and all $0\leq j\leq 4$, we have the following norm bounds as operators on $L^2$\textup{:} \begin{gather}
\|\chi_n^j (P_{\le N_n}- P_{\le N_n}^{L_n}) \chi_n^j \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} =o(1), \label{E:P2P}\\
\|[\chi_n^j, P_{\le N_n}^{L_n} ]\|_{L^2(\mathbb{T}_n)\to L^2(\mathbb{T}_n)} + \|[\chi_n^j, P_{\le N_n}]\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} =o(1), \label{E:Pcom} \end{gather} as $n\to\infty$. Moreover, if $i>j$ then \begin{gather}
\|\chi_n^j P_{\le N_n}^{L_n} (1-\chi_n^i) \|_{L^2(\mathbb{T}_n)\to L^2(\mathbb{T}_n)} + \|\chi_n^j P_{\le N_n} (1-\chi_n^i) \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} =o(1). \label{E:Pmis} \end{gather} \end{lem}
\begin{proof} The proof of \eqref{E:P2P} is somewhat involved; one must estimate the difference between a Fourier sum and integral and then apply Schur's test. See \cite[\S8]{KVZ:nsqz2d} for details.
The estimate \eqref{E:Pcom} follows readily from the rapid decay of the kernels associated to Littlewood--Paley projections on both the line and the circle; specifically, by Schur's test, \begin{gather*}
\text{LHS\eqref{E:Pcom}} \lesssim N_n^{-1} \|\nabla \chi_n^j\|_{L^\infty_x} =o(1). \end{gather*} We may then deduce \eqref{E:Pmis} from this. Indeed, as $\chi_n^j\chi_n^i=\chi_n^j$ we have \begin{gather*} \chi_n^j P_{\le N_n} (1-\chi_n^i) = [\chi_n^j, P_{\le N_n}] (1-\chi_n^i), \end{gather*} and analogously on the torus. \end{proof}
Now let $\tilde u_n$ denote the solution to \begin{align}\label{lm:1} \begin{cases} (i\partial_t+\Delta)\tilde u_n=P_{\le N_n} F(P_{\le N_n}\tilde u_n),\qquad (t,x)\in{\mathbb{R}}\times{\mathbb{R}},\\ \tilde u_n(0,x)=\chi_n^0(x) u_{0,n}(x+L_n{\mathbb{Z}}), \end{cases} \end{align} where $u_{0,n}\in L^2(\mathbb{T}_n)$ is as in \eqref{1241}. It follows from Lemma~\ref{lm:loc} that these solutions are global and, moreover, that they obey \begin{align}\label{c1}
\|\partial_x^k \tilde u_n \|_{S([-T,T]\times{\mathbb{R}})}\lesssim_T M N_n^k \end{align} uniformly in $n$ and $k\in\{0,1\}$. By providing control on the derivative of $\tilde u_n$, this estimate also controls the transport of mass:
\begin{lem}[Mass localization for $\tilde u_n$]\label{lm:sm} Let $\tilde u_n$ be the solution to \eqref{lm:1} as above. Then for every $0\leq j\leq 4$ we have \begin{align*}
\|(1-\chi_n^j)\tilde u_n\|_{L_t^\infty L_x^2([-T,T]\times{\mathbb{R}})} =o(1) \quad\text{as $n\to\infty$}. \end{align*} \end{lem}
\begin{proof} Direct computation (cf. \cite[Lemma~8.4]{KVZ:nsqz2d}) shows \begin{align*}
\frac{d\ }{dt} \int_{{\mathbb{R}}}|1-\chi_n^j(x)|^2 |\tilde u_n(t,x)|^2 \,dx&= - 4\Im\int_{{\mathbb{R}}}(1-\chi_n^j)(\nabla\chi_n^j) \overline{\tilde u_n} \nabla\tilde u_n \,dx \\ &\quad+2\Im\int_{{\mathbb{R}}}F(P_{\le N_n} \tilde u_n)[P_{\le N_n},(1-\chi_n^j)^2]\overline{\tilde u_n} \,dx. \end{align*} From this, the result can then be deduced easily using \eqref{cf:1.0}, \eqref{c1}, and \eqref{E:Pcom}. \end{proof}
With these preliminaries complete, we now turn our attention to the main goal of this section, namely, to prove the following result:
\begin{thm}[Approximation]\label{thm:app} Fix $M>0$ and $T>0$. Let $N_n\to \infty$ and let $L_n$ be sufficiently large depending on $M, T, N_n$. Assume $u_{0,n}\in \mathcal H_n$ with $\|u_{0,n}\|_{L^2(\mathbb{T}_n)} \le M$. Let $u_n$ and $\tilde u_n$ be solutions to \eqref{per1} and \eqref{lm:1}, respectively. Then
\begin{align}\label{pert:2.0}
\lim_{n\to \infty} \|P_{\le 2N_n}^{L_n}(\chi_n^2 \tilde u_n)-u_n\|_{S([-T,T]\times \mathbb{T}_n)}=0.
\end{align}
\end{thm}
\begin{rem} Note that for $0\leq j\leq 4$ and any $t\in{\mathbb{R}}$, the function $\chi_n^j\tilde u_n(t)$ is supported inside an interval of size $L_n$; consequently, we can view it naturally as a function on the torus ${\mathbb{T}}_n$. Conversely, the functions $\chi_n^ju_n(t)$ can be lifted to functions on ${\mathbb{R}}$ that are supported in an interval of length $L_n$. In what follows, the transition between functions on the line and the torus will be made without further explanation. \end{rem}
\begin{proof} The proof of Theorem~\ref{thm:app} is modeled on that of \cite[Theorem~8.9]{KVZ:nsqz2d}. For brevity, we write
\begin{align*}
z_n:=P_{\le 2 N_n}^{L_n}(\chi_n^2 \tilde u_n).
\end{align*} We will deduce \eqref{pert:2.0} as an application of the stability result Lemma~\ref{lm:stab}. To this end, it suffices to verify the following:
\begin{gather}
\|z_n\|_{L_t^\infty L_x^2 ([-T,T]\times\mathbb{T}_n)}\lesssim M \mbox{ uniformly in } n,\label{per:1}\\
\lim_{n\to \infty}\|z_n (0)-u_n(0)\|_{L^2(\mathbb{T}_n)}=0, \label{per:2}\\
\lim_{n\to \infty}\|(i\partial_t+\Delta)z_n- P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n} z_n)\|_{N([-T,T]\times\mathbb{T}_n)}=0.\label{per:3}
\end{gather}
Claim \eqref{per:1} is immediate:
\begin{align*}
\|z_n\|_{L_t^\infty L_x^2([-T,T]\times\mathbb{T}_n)}\lesssim \|\tilde u_n\|_{L_t^\infty L_x^2 ([-T,T]\times{\mathbb{R}})}\lesssim \|\tilde u_n(0) \|_{L_x^2({\mathbb{R}})}
\lesssim \| u_{0,n} \|_{L_x^2(\mathbb{T}_n)} \lesssim M.
\end{align*}
To prove \eqref{per:2}, we use $u_{0,n}\in \mathcal H_n$ and \eqref{cf:1.0} as follows:
\begin{align*}
\|z_n(0)-u_n(0)\|_{L^2(\mathbb{T}_n)}
&=\|P_{\le 2N_n}^{L_n} (\chi_n^2 u_{0,n}-u_{0, n})\|_{L^2(\mathbb{T}_n)}\\
&\lesssim \|\chi_n^2 u_{0,n}-u_{0,n}\|_{L^2(\mathbb{T}_n)} =o(1) \qtq{as} n\to \infty.
\end{align*}
It remains to verify \eqref{per:3}. Direct computation gives \begin{align*} (i\partial_t+\Delta)z_n - P_{\le N_n}^{L_n} &F( P_{\le N_n}^{L_n} z_n)\\ &=P_{\le 2N_n}^{L_n}\Bigl[2(\partial_x \chi_n^2)(\partial_x \tilde u_n) + (\Delta \chi_n^2) \tilde u_n\Bigr] \\
&\quad+P_{\le 2N_n}^{L_n} \Bigl[\chi_n^2 P_{\le N_n} F(P_{\le N_n} \tilde u_n)- P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n}(\chi_n^2 \tilde u_n))\Bigr]. \end{align*} In view of the boundedness of $P_{\le 2N_n}^{L_n}$, it suffices to show that the terms in square brackets converge to zero in $N([-T,T]\times\mathbb{T}_n)$ as $n\to \infty$.
Using \eqref{c1} and \eqref{cf:1.0}, we obtain \begin{align*}
\|(\partial_x \chi_n^2) (\partial_x\tilde u_n) \|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}
&\le T \|\partial_x \chi_n^2\|_{L^\infty_x({\mathbb{R}})} \|\partial_x\tilde u_n \|_{L_t^\infty L^2_x([-T,T]\times{\mathbb{R}})} = o(1),\\
\|(\Delta \chi_n^2) \tilde u_n\|_{L^1_t L^2_x ([-T,T]\times\mathbb{T}_n)}&\le T\|\partial_x^2 \chi_n^2\|_{L^\infty_x({\mathbb{R}})}\|\tilde u_n\|_{L_t^\infty L_x^2([-T,T]\times{\mathbb{R}})}= o(1), \end{align*} as $n\to\infty$.
To estimate the remaining term, we decompose it as follows: \begin{align} \chi_n^2 P_{\le N_n} F(P_{\le N_n} \tilde u_n)&- P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n}(\chi_n^2 \tilde u_n))\notag\\ &=\chi_n^2 P_{\le N_n} \bigl[F(P_{\le N_n} \tilde u_n)-F(P_{\le N_n}(\chi_n^2\tilde u_n))\bigr]\label{per:6}\\ &\quad+\chi_n^2 P_{\le N_n}(1-\chi_n^3) F(P_{\le N_n}(\chi_n^2 \tilde u_n))\label{per:7}\\ &\quad+\chi_n^2P_{\le N_n}\chi_n^3 \bigl[F(P_{\le N_n}(\chi_n^2\tilde u_n))- F( P_{\le N_n}^{L_n}(\chi_n^2 \tilde u_n))\bigr]\label{per:8}\\ &\quad+\chi_n^2\bigl(P_{\le N_n}- P_{\le N_n}^{L_n}\bigr)\chi_n^3 F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n))\label{per:9}\\ &\quad+[\chi_n^2, P_{\le N_n}^{L_n}]\chi_n^3 F( P_{\le N_n}^{L_n} (\chi_n^2\tilde u_n))\label{per:10}\\ &\quad+ P_{\le N_n}^{L_n}(\chi_n^2-1) F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n)).\label{per:11} \end{align}
To estimate \eqref{per:6}, we use H\"older and Lemma~\ref{lm:sm}: \begin{align*}
\|\eqref{per:6}\|_{N([-T,T]\times\mathbb{T}_n)}
&\lesssim \|F(P_{\le N_n} \tilde u_n)-F(P_{\le N_n}(\chi_n^2\tilde u_n))\|_{L^{6/5}_{t,x}([-T,T]\times{\mathbb{R}})}\\
&\lesssim T^{\frac 12}\|(1-\chi_n^2)\tilde u_n\|_{L^\infty_t L^2_x([-T,T]\times{\mathbb{R}})}\|\tilde u_n\|_{L^6_{t,x}([-T,T]\times{\mathbb{R}})}^2= o(1). \end{align*}
We next turn to \eqref{per:7}. As \begin{align*}
\|\eqref{per:7}\|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}
&\lesssim \|\chi_n^2 P_{\le N_n}(1-\chi_n^3)\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})} T^{\frac 12} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^3, \end{align*} it follows from \eqref{E:Pmis} and \eqref{c1} that this is $o(1)$ as $n\to\infty$.
We now consider \eqref{per:8}. Using \eqref{E:P2P} and \eqref{c1}, we estimate \begin{align*}
\|&\eqref{per:8}\|_{L_{t,x}^{6/5}([-T,T]\times\mathbb{T}_n)}\\
&\lesssim T^{\frac 12} \|\chi_n^3(P_{\le N_n}- P_{\le N_n}^{L_n})\chi_n^2\tilde u_n \|_{L_t^\infty L_x^2([-T,T]\times\mathbb{T}_n)}\|\chi_n^2 \tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2\\
&\lesssim T^{\frac 12} \|\chi_n^3(P_{\le N_n}- P_{\le N_n}^{L_n})\chi_n^3 \|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})}\|\chi_n^2 \tilde u_n\|_{L^\infty_t L^2_x ([-T,T]\times{\mathbb{R}})}\|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2 \\ &= o(1). \end{align*}
Next we turn to \eqref{per:9}. Using \eqref{cf:1.0}, \eqref{E:P2P}, and \eqref{c1}, we get \begin{align*}
\|\eqref{per:9}&\|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}\\
&\lesssim \|\chi_n^3(P_{\le N_n}- P_{\le N_n}^{L_n})\chi_n^3\|_{L^2({\mathbb{R}})\to L^2({\mathbb{R}})}\|\chi_n^4 F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n))\|_{L^1_tL^2_x([-T,T]\times{\mathbb{R}})}\\
&\lesssim o(1) \cdot T^{\frac 12} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^3 = o(1). \end{align*}
To estimate \eqref{per:10}, we use \eqref{E:Pcom} and \eqref{c1} as follows: \begin{align*}
\|\eqref{per:10}&\|_{L^1_tL^2_x([-T,T]\times\mathbb{T}_n)}\\
&\lesssim\|[\chi_n^2, P_{\le N_n}^{L_n}]\|_{L^2(\mathbb{T}_n)\to L^2(\mathbb{T}_n)} \| \chi_n^3 F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n))\|_{L_t^1L_x^2([-T,T]\times\mathbb{T}_n)}\\
&\lesssim o(1) \cdot T^{\frac 12} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^3 = o(1). \end{align*}
Finally, to estimate \eqref{per:11}, we write $\tilde u_n = \chi_n^1\tilde u_n + (1-\chi_n^1)\tilde u_n$ and then use \eqref{c1}, \eqref{E:Pmis}, and \eqref{cf:1.0}:\begin{align*}
\|\eqref{per:11}\|_{N([-T,T]\times\mathbb{T}_n)} &\lesssim \| (\chi_n^2-1) F( P_{\le N_n}^{L_n}(\chi_n^2\tilde u_n)) \|_{L^{6/5}_{t,x}([-T,T]\times{\mathbb{R}})} \\
&\lesssim T^{\frac12} \|(1-\chi_n^2) P_{\le N_n}^{L_n} \chi_n^1\tilde u_n\|_{L^\infty_t L^2_x([-T,T]\times{\mathbb{R}})} \|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2 \\
&\quad+ T^{\frac12} \|(1-\chi_n^1)\tilde u_n\|_{L^\infty_tL_x^2([-T,T]\times{\mathbb{R}})}\|\tilde u_n\|_{L_{t,x}^6([-T,T]\times{\mathbb{R}})}^2\\ &= o(1) \qtq{as} n\to \infty. \end{align*}
This completes the proof of the theorem. \end{proof}
\section{Proof of Theorem \ref{thm:nsqz}}
In this section, we complete the proof of Theorem \ref{thm:nsqz}. To this end, fix parameters $z_*\in L^2({\mathbb{R}})$, $l\in L^2({\mathbb{R}})$ with $\|l\|_2=1$, $\alpha \in {\mathbb{C}}$, $0<r<R<\infty$, and $T>0$. Let $M:=\|z_*\|_2 + R$. Let $N_n\to \infty$ and choose $L_n$ diverging to infinity sufficiently fast so that all the results of Section~\ref{S:5} hold.
By density, we can find $\tilde z_*, \tilde l \in C_c^\infty({\mathbb{R}})$ such that \begin{align}\label{tildeapprox}
\| z_*-\tilde z_*\|_{L^2} \le \delta \qquad\qtq{and}\qquad \|l-\tilde l\|_{L^2} \le \delta M^{-1} \qtq{with} \|\tilde l\|_2=1, \end{align} for a small parameter $\delta>0$ chosen so that $\delta < (R-r)/8$. For $n$ sufficiently large, the supports of $\tilde z_*$ and $\tilde l$ are contained inside the interval $[-L_n/2, L_n/2]$, which means that we can view $\tilde z_*$ and $\tilde l$ as functions on ${\mathbb{T}}_n={\mathbb{R}}/L_n{\mathbb{Z}}$. Moreover, \begin{equation}\label{z*n}
\|\tilde z_* - P_{\leq N_n}^{L_n} \tilde z_*\|_{L^2(\mathbb{T}_n)}\lesssim N_n^{-1} \|\tilde z_*\|_{H^1(\mathbb{T}_n)} = o(1), \end{equation} as $n\to\infty$. Similarly, \begin{equation}\label{l*n}
\| P_{>2N_n}^{L_n} \tilde l\|_{L^2(\mathbb{T}_n)} = o(1) \quad\text{as $n\to\infty$.} \end{equation}
Consider now the initial-value problem \begin{equation}\label{329} \begin{cases} (i\partial_t+\Delta) u_n= P_{\le N_n}^{L_n} F( P_{\le N_n}^{L_n} u_n), \qquad(t,x)\in{\mathbb{R}}\times\mathbb{T}_n,\\ u_n(0)\in \mathcal H_n=\{f\in L^2(\mathbb{T}_n): \, P_{>2 N_n}^{L_n} f=0\}. \end{cases} \end{equation} This is a finite-dimensional Hamiltonian system with respect to the standard Hilbert-space symplectic structure on $\mathcal H_n$; the Hamiltonian is $$
H(u) = \int_{\mathbb{T}_n} \tfrac12 |\partial_x u|^2 \pm \tfrac14 | P_{\le N_n}^{L_n} u|^4\,dx. $$ Therefore, by Gromov's symplectic non-squeezing theorem, there exist initial data \begin{align}\label{main:0} u_{0,n}\in B_{\mathcal H_n}(P_{\leq N_n}^{L_n} \tilde z_*, R-4\delta) \end{align} such that the solution to \eqref{329} with initial data $u_n(0)=u_{0,n}$ satisfies \begin{align}\label{main:1}
|\langle \tilde l, u_n(T)\rangle_{L^2(\mathbb{T}_n)} -\alpha|> r + 4\delta. \end{align}
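\begin{rem} For orientation, we record the finite-dimensional form of the non-squeezing theorem invoked here, stated for a finite-dimensional complex Hilbert space $\mathcal H$: if $\Phi:\mathcal H\to\mathcal H$ is a symplectomorphism (in particular, the flow map of a Hamiltonian system), $z_0\in\mathcal H$, $\ell\in\mathcal H$ with $\|\ell\|=1$, $\alpha\in{\mathbb{C}}$, and $0<\rho<\mathcal R$, then
\begin{align*}
\Phi\bigl(B(z_0,\mathcal R)\bigr)\not\subseteq \bigl\{u\in\mathcal H:\ |\langle \ell, u\rangle-\alpha|\le \rho\bigr\}.
\end{align*}
We applied this with $\Phi$ the time-$T$ flow of \eqref{329}, $z_0=P_{\leq N_n}^{L_n}\tilde z_*$, $\ell=\tilde l$, $\mathcal R=R-4\delta$, and $\rho=r+4\delta$; note that $r+4\delta<R-4\delta$ precisely because $\delta<(R-r)/8$. \end{rem}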
Just as in Section~\ref{S:5} we let $\tilde u_n:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ denote the global solution to \begin{align*} \begin{cases} (i\partial_t+\Delta) \tilde u_n=P_{\le N_n} F(P_{\le N_n} \tilde u_n),\\ \tilde u_n(0)=\chi_n^0 u_{0,n}, \end{cases} \end{align*} and write $z_n:=P_{\le 2N_n}^{L_n} (\chi_n^2 \tilde u_n)$. By Theorem~\ref{thm:app}, we have the following approximation result: \begin{align}\label{main:2}
\lim_{n\to \infty}\|z_n-u_n\|_{L^\infty_t L^2_x([-T,T]\times\mathbb{T}_n)}=0. \end{align}
We are now ready to select initial data that witnesses the non-squeezing for the cubic NLS on the line. By the triangle inequality, \eqref{cf:1.0}, \eqref{main:0}, \eqref{z*n}, and \eqref{tildeapprox}, \begin{align*}
\|\chi_n^0 u_{0,n}-z_*\|_{L^2({\mathbb{R}})}
&\le \|(\chi_n^0-1) u_{0,n}\|_{L^2(\mathbb{T}_n)} + \|u_{0,n}-P_{\leq N_n}^{L_n}\tilde z_*\|_{L^2(\mathbb{T}_n)}\\
&\quad +\|P_{\leq N_n}^{L_n}\tilde z_*-\tilde z_*\|_{L^2(\mathbb{T}_n)} + \| \tilde z_*-z_*\|_{L^2({\mathbb{R}})}\\ &\le o(1) + R-4\delta+ o(1) + \delta\le R-\delta, \end{align*} provided we take $n$ sufficiently large. Therefore, passing to a subsequence, we may assume that \begin{align}\label{main:3} \chi_n^0 u_{0,n}\rightharpoonup u_{0,\infty}\in B(z_*,R) \quad \text{weakly in $L^2({\mathbb{R}})$}. \end{align}
Now let $u_\infty:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{C}}$ be the global solution to \eqref{nls} with initial data $u_\infty (0)=u_{0,\infty}$. By Theorem~\ref{T:weak wp}, \begin{align*} \tilde u_n (T) \rightharpoonup u_\infty (T) \quad \text{weakly in $L^2({\mathbb{R}})$}. \end{align*} Combining this with Lemma \ref{lm:sm}, we deduce \begin{align*} \chi_n^2 \tilde u_n(T)\rightharpoonup u_\infty (T) \quad \text{weakly in $L^2({\mathbb{R}})$}. \end{align*} Thus, using also \eqref{l*n}, the definition of $z_n$, \eqref{main:2}, and \eqref{main:1}, we get \begin{align*}
\bigl|\langle \tilde l, u_\infty (T)\rangle_{L^2({\mathbb{R}})}-\alpha\bigr |
&= \lim_{n\to \infty}\bigl|\langle \tilde l, \chi_n^2 \tilde u_n(T)\rangle_{L^2(\mathbb{T}_n)}-\alpha\bigr|\\
&= \lim_{n\to \infty}\bigl |\langle P_{\le 2N_n}^{L_n} \tilde l, \chi_n^2\tilde u_n (T)\rangle_{L^2(\mathbb{T}_n)} -\alpha\bigr|\\
&= \lim_{n\to \infty}\bigl |\langle \tilde l, z_n(T)\rangle_{L^2(\mathbb{T}_n)} -\alpha\bigr| \\
&= \lim_{n\to \infty}\bigl |\langle \tilde l, u_n(T)\rangle_{L^2(\mathbb{T}_n)} -\alpha\bigr| \\ &\geq r + 4\delta. \end{align*}
Therefore, using $$
\|u_\infty(T)\|_{L^2({\mathbb{R}})} = \|u_{0,\infty}\|_{L^2({\mathbb{R}})} < R + \|z_*\|_{L^2({\mathbb{R}})} = M $$ (cf. \eqref{main:3}) together with \eqref{tildeapprox}, we deduce that \begin{align*}
\bigl |\langle l, u_\infty(T)\rangle-\alpha\bigr|\ge r + 4\delta - \|l-\tilde l\|_2\|u_\infty(T)\|_{L^2} \geq r + 3\delta > r. \end{align*} This shows that $u_\infty(T)$ lies outside the cylinder $C_r(\alpha, l)$, despite the fact that $u_\infty(0)\in B(z_*,R)$, and so completes the proof of the theorem.\qed
\end{document} |
\begin{document}
\title{On $(h,k,\mu,\nu)$-trichotomy of evolution operators in Banach spaces}
\author{ Mihail Megan, Traian Ceau\c su, Violeta Crai}
\begin{abstract} The paper considers some concepts of trichotomy with different growth rates for evolution operators in Banach spaces. Connections between these concepts and characterizations in terms of Lyapunov-type norms are given. \end{abstract} \section{Introduction} In the qualitative theory of evolution equations, exponential dichotomy, essentially introduced by O. Perron in \cite{peron}, is one of the most important asymptotic properties, and in recent years it has been treated from various perspectives.
For some of the most relevant early contributions in this area we refer to the books of J.L. Massera and J.J. Schaffer \cite{masera}, Ju. L. Dalecki and M.G. Krein \cite{daletchi} and W.A. Coppel \cite{copel}. We also refer to the book of C. Chicone and Yu. Latushkin \cite{chicone}.
In some situations, particularly in the nonautonomous setting, the concept of uniform exponential dichotomy is too restrictive and it is important to consider more general behaviors. Two different perspectives can be identified for generalizing the concept of uniform exponential dichotomy: on the one hand, one can define dichotomies that depend on the initial time (and are therefore nonuniform); on the other hand, one can consider growth rates that are not necessarily exponential.
The first approach leads to concepts of nonuniform exponential dichotomies and can be found in the works of L. Barreira and C. Valls \cite{ba2} and in a different form in the works of P. Preda and M. Megan \cite{preda-megan} and M. Megan, L. Sasu and B. Sasu \cite{megan-sasu}.
The second approach is present in the works of L. Barreira and C. Valls \cite{ba3}, A.J.G. Bento and C.M. Silva \cite{bento} and M. Megan \cite{megan}.
A more general dichotomy concept, called $(h,k)$-dichotomy, where $h$ and $k$ are growth rates, is introduced by M. Pinto in \cite{pinto2}. The concept of $(h,k)$-dichotomy has a great generality and permits the construction of similar notions for systems with dichotomic behavior which are not described by the classical theory of J.L. Massera \cite{masera}.
As a natural generalization of exponential dichotomy (see \cite{ba3}, \cite{vio}, \cite{elaydi1}, \cite{saker}, \cite{sasu} and the references therein), exponential trichotomy is one of the most complex asymptotic properties of dynamical systems arising from the central manifold theory (see \cite{carr}). In the study of the trichotomy the main idea is to obtain a decomposition of the space at every moment into three closed subspaces: the stable subspace, the unstable subspace and the central manifold.
Two concepts of trichotomy have been introduced: the first by R.J. Sacker and G.L. Sell \cite{saker} (called (S,S)-trichotomy) and the second by S. Elaydi and O. Hayek \cite{elaydi1} (called (E,H)-trichotomy).
The existence of exponential trichotomies is a strong requirement and hence it is of considerable interest to look for more general types of trichotomic behaviors.
In previous studies of uniform and nonuniform trichotomies, the growth rates are always assumed to be functions of the same type. However, nonuniformly hyperbolic dynamical systems vary greatly in form, and no single notion of nonuniform trichotomy can characterize all nonuniform dynamics. Thus it is necessary and reasonable to look for more general types of nonuniform trichotomies.
The present paper considers the general concept of nonuniform $(h,k,\mu,\nu)$-trichotomy, which not only incorporates the existing notions of uniform and nonuniform trichotomy as special cases, but also allows different growth rates in the stable subspace, the unstable subspace and the central manifold.
We give characterizations of nonuniform $(h,k,\mu,\nu)$-trichotomy using families of norms equivalent to the initial norm of the state space. Thus we obtain a characterization of nonuniform $(h,k,\mu,\nu)$-trichotomy in terms of a certain type of uniform $(h,k,\mu,\nu)$-trichotomy.
As an original reference for considering families of norms in the nonuniform theory we mention Ya. B. Pesin's works \cite{pesin} and \cite{pesin1}. Our characterizations using families of norms are inspired by the work of L. Barreira and C. Valls \cite{ba3} where characterizations of nonuniform exponential trichotomy in terms of Lyapunov functions are given. \section{Preliminaries}
Let $X$ be a Banach space and $\mathcal{B}(X)$ the Banach algebra of all linear and bounded operators on $X$. The norms on $X$ and on $\mathcal{B}(X)$ will be denoted by $\|\cdot\|$. The identity operator on $X$ is denoted by $I$. We also denote by $\Delta=\{(t,s)\in\mathbb{R}_+^2:t\geq s\geq 0\}$.
We recall that an application $U:\Delta\to\mathcal{B}(X)$ is called \textit{evolution operator} on $X$ if \begin{itemize}
\item[$(e_1)$]$U(t,t)=I$, for every $t\geq 0$
\item[] and
\item[$(e_2)$]$U(t,t_0)=U(t,s)U(s,t_0)$, for all $(t,s),(s,t_0)\in\Delta$. \end{itemize} \begin{definition} A map $P:\mathbb{R}_+\to\mathcal{B}(X)$ is called
\begin{itemize}
\item[(i)] \textit{a family of projectors} on $X$ if
$$P^2(t)=P(t),\text{ for every } t\geq 0;$$
\item [(ii)] \textit{invariant} for the evolution operator $U:\Delta\to\mathcal{B}(X)$ if
\begin{align*}
U(t,s)P(s)x=P(t)U(t,s)x,
\end{align*}
for all $(t,s,x)\in \Delta\times X$;
\item[(iii)] \textit{strongly invariant} for the evolution operator $U:\Delta\to\mathcal{B}(X)$ if it is invariant for $U$ and for all $(t,s)\in\Delta$ the restriction of $U(t,s)$ to Range $P(s)$ is an isomorphism from Range $P(s)$ to Range $P(t)$.
\end{itemize}
\end{definition} \begin{remark}
It is obvious that if $P$ is strongly invariant for $U$ then it is also invariant for $U$. The converse is not valid (see \cite{mihit}). \end{remark} \begin{remark}\label{rem-proiectorstrong}
If the family of projectors $P:\mathbb{R}_+\to\mathcal{B}(X)$ is strongly invariant for the evolution operator $U:\Delta\to\mathcal{B}(X)$ then (\cite{lupa}) there exists a map $V:\Delta\to\mathcal{B}(X)$ with the properties:
\begin{itemize}
\item[$v_1)$ ] $V(t,s)$ is an isomorphism from Range $ P(t)$ to Range $ P(s)$,
\item [$v_2)$ ] $U(t,s)V(t,s)P(t)x=P(t)x$,
\item[$v_3)$ ] $V(t,s)U(t,s)P(s)x=P(s)x$,
\item[$v_4)$ ]$V(t,t_0)P(t)=V(s,t_0)V(t,s)P(t)$,
\item[$v_5)$ ]$V(t,s)P(t)=P(s)V(t,s)P(t)$,
\item[$v_6)$ ] $V(t,t)P(t)=P(t)V(t,t)P(t)=P(t)$,
\end{itemize}
for all $(t,s),(s,t_0)\in \Delta$ and $x\in X$. \end{remark} \begin{definition}
Let $P_1,P_2,P_3:\mathbb{R}_+\to\mathcal{B}(X)$ be three families of projectors on $X$. We say that the family $\mathcal{P}=\{P_1,P_2,P_3\}$ is
\begin{itemize}
\item [(i)] \textit{orthogonal} if
\begin{itemize}
\item [$o_1)$]$P_1(t)+P_2(t)+P_3(t)=I$ for every $t\geq 0$\\
and
\item[$o_2)$] $P_i(t)P_j(t)=0$ for all $t\geq 0$ and all $i,j\in\{1,2,3\}$ with $i\neq j$;
\end{itemize}
\item[(ii)] \textit{compatible} with the evolution operator $U:\Delta\to\mathcal{B}(X)$ if
\begin{itemize}
\item[$c_1)$] $P_1$ is invariant for $U$\\
and
\item[$c_2)$] $P_2,P_3$ are strongly invariant for $U$.
\end{itemize}
\end{itemize} \end{definition} In what follows we shall denote by $V_j(t,s)$ the isomorphism (given by Remark \ref{rem-proiectorstrong}) from Range $P_j(t)$ to Range $P_j(s)$ for $j\in\{2,3\}$, where $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with $U.$ \begin{definition} We say that a nondecreasing map $h:\mathbb{R}_+\to[1,\infty)$ is a \textit{growth rate} if \begin{align*} \lim\limits_{t\to\infty}h(t)=\infty. \end{align*} \end{definition} As particular cases of growth rates we remark: \begin{itemize}
\item [$r_1)$ ] \textit{exponential rates}, i.e.
$h(t)=e^{\alpha t}$ with $\alpha>0;$
\item [$r_2)$ ]\textit{polynomial rates}, i.e.
$h(t)=(t+1)^\alpha$ with $\alpha>0.$ \end{itemize} Let $\mathcal{P}=\{P_1, P_2, P_3\}$ be an orthogonal family of projectors which is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ and $h,k,\mu,\nu:\mathbb{R}_+\to[1,\infty)$ be four growth rates. \begin{definition}\label{def-tricho}
We say that the pair $(U,\mathcal{P})$ is \textit{$(h,k,\mu,\nu)$-trichotomic} (and we denote $(h,k,\mu,\nu)-t$) if there exists a nondecreasing function $N:\mathbb{R}_+\to [1,\infty)$ such that
\begin{itemize}
\item [$(ht_1)$ ]$h(t)\|U(t,s)P_1(s)x\|\leq N(s)h(s) \|P_1(s)x\|$
\item [$(kt_1)$ ]$k(t)\|P_2(s)x\|\leq N(t) k(s) \|U(t,s)P_2(s)x\|$
\item [$(\mu t_1)$ ]$\mu(s)\|U(t,s)P_3(s)x\|\leq N(s) \mu(t) \|P_3(s)x\|$
\item [$(\nu t_1)$ ]$\nu(s)\|P_3(s)x\|\leq N(t)\nu(t) \|U(t,s)P_3(s)x\|,$
\end{itemize}
for all $(t,s,x)\in \Delta\times X.$ \end{definition} In particular, if the function $N$ is constant then we obtain the \textit{uniform $(h,k,\mu,\nu)$-trichotomy} property, denoted by $u-(h,k,\mu,\nu)-t$. \begin{remark}
As important particular cases of $(h,k,\mu,\nu)$-trichotomy we have:
\begin{itemize}
\item[(i)] \textit{(nonuniform) exponential trichotomy} ($et$) and respectively \textit{uniform exponential trichotomy} ($uet$) when the rates $h,k,\mu,\nu$ are exponential rates;
\item[(ii)]\textit{(nonuniform) polynomial trichotomy} ($pt$) and respectively \textit{uniform polynomial trichotomy} ($upt$) when the rates $h,k,\mu,\nu$ are polynomial rates;
\item[(iii)]\textit{(nonuniform) $(h,k)-$dichotomy} ($(h,k)-d$) respectively {uniform $(h,k)-$dichotomy} ($u-(h,k)-d$) for $P_3=0$;
\item[(iv)] \textit{(nonuniform) exponential dichotomy} ($ed$) and respectively \textit{uniform exponential dichotomy} ($ued$) when $P_3=0$ and the rates $h,k$ are exponential rates;
\item[(v)]\textit{(nonuniform) polynomial dichotomy} (p.d.) and respectively \textit{uniform polynomial dichotomy} ($upd$) when $P_3=0$ and the rates $h,k$ are polynomial rates;
\end{itemize} \end{remark}
It is obvious that if the pair $(U,\mathcal{P})$ is $u-(h,k,\mu,\nu)-t$ then it is also $(h,k,\mu,\nu)-t$. In general, the converse of this statement is not valid, a phenomenon illustrated by \begin{example} Let $U:\Delta\to\mathcal{B}(X)$ be the evolution operator defined by \begin{align} U(t,s)=\frac{u(s)}{u(t)}\left( \frac{h(s)}{h(t)}P_1(s)+ \frac{k(t)}{k(s)}P_2(s)+\frac{\mu(t)}{\mu(s)}\frac{\nu(s)}{\nu(t)}P_3(s)\right) \end{align} where $u,h,k,\mu,\nu:\mathbb{R}_+\to[1,\infty)$ are growth rates and $P_1, P_2, P_3:\mathbb{R}_+\to\mathcal{B}(X)$ are families of projectors on $X$ with the properties: \begin{itemize}
\item[(i)]$P_1(t)+P_2(t)+P_3(t)=I$ for every $t\geq 0$;
\item[(ii)]\[ P_i(t)P_j(s)=\left\{ \begin{array}{ll} 0&\mbox{if $i\neq j$} \\
P_i(s),& \mbox{ if $i=j$},
\end{array} \right. \]for all $(t,s)\in\Delta.$
\item[(iii)] $U(t,s)P_i(s)=P_i(t)U(t,s)$ for all $(t,s)\in\Delta$ and all $i\in\{1,2,3\}$. \end{itemize} For example if $P_1,P_2,P_3$ are constant and orthogonal then the conditions (i),(ii) and (iii) are satisfied.
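\begin{remark} One can check directly that $U$ above is indeed an evolution operator: for $t=s$ all ratios of growth rates equal $1$, so (i) gives $U(t,t)=P_1(t)+P_2(t)+P_3(t)=I$; moreover, by (ii) the cross terms $P_i(s)P_j(t_0)$ with $i\neq j$ vanish and $P_i(s)P_i(t_0)=P_i(t_0)$, hence
\begin{align*}
U(t,s)U(s,t_0)=\frac{u(t_0)}{u(t)}\left( \frac{h(t_0)}{h(t)}P_1(t_0)+ \frac{k(t)}{k(t_0)}P_2(t_0)+\frac{\mu(t)}{\mu(t_0)}\frac{\nu(t_0)}{\nu(t)}P_3(t_0)\right)=U(t,t_0),
\end{align*}
for all $(t,s),(s,t_0)\in\Delta$, by the multiplicativity of the ratios. \end{remark}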
We observe that \begin{align*}
h(t)\|U(t,s)P_1(s)x\|&=\frac{u(s)h(s)}{u(t)}\|P_1(s)x\|\leq u(s) h(s)\|P_1(s)x\|\\
u(t)k(s)\|U(t,s)P_2(s)x\|&=u(s){k(s)}\|P_2(s)x\|\geq k(t)\|P_2(s)x\|\\
\mu(s)\|U(t,s)P_3(s)x\|&=\frac{u(s)\mu(t)\nu(s)}{u(t)\nu(t)}\|P_3(s)x\|\leq u(s)\mu(t)\|P_3(s)x\|\\
u(t)\nu(t)\|U(t,s)P_3(s)x\|&=\frac{u(s)\nu(s)\mu(t)}{\mu(s)}\|P_3(s)x\|\geq \nu(s)\|P_3(s)x\| \end{align*} for all $(t,s,x)\in\Delta\times X.$
Thus the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$. \\ If we assume that the pair $(U,\mathcal{P})$ is $u-(h,k,\mu,\nu)-t$ then there exists a real constant $N\geq 1$ such that \begin{align*} N u(s)\geq u(t),\text{ for all } (t,s)\in\Delta. \end{align*} Taking $s=0$ and letting $t\to\infty$, we obtain a contradiction, since $u$ is a growth rate and hence unbounded. \end{example} \begin{remark} The previous example shows that for all four growth rates $h,k,\mu,\nu$ there exists a pair $(U,\mathcal{P})$ which is $(h,k,\mu,\nu)-t$ and is not $u-(h,k,\mu,\nu)-t$.
\end{remark} In the particular case when $\mathcal{P}$ is compatible with $U$ a characterization of $(h,k,\mu,\nu)-t$ is given by \begin{proposition}\label{prop strong invariant trichotomy}
If $\mathcal{P}=
\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic if and only if there exists a nondecreasing function $ N_1:\mathbb{R}_+\to[1,\infty)$ such that
\begin{itemize}
\item [$(ht_2)$ ]$h(t)\|U(t,s)P_1(s)x\|\leq N_1(s)h(s)\|x\|$
\item [$(kt_2)$ ]$k(t)\|V_2(t,s)P_2(t)x\|\leq N_1(t) k(s) \|x\|$
\item [$(\mu t_2)$ ]$\mu(s)\|U(t,s)P_3(s)x\|\leq N_1(s) \mu(t)\|x\|$
\item[$(\nu t_2)$ ]$\nu(s)\|V_3(t,s)P_3(t)x\|\leq N_1(t) \nu(t) \|x\|$
\end{itemize}
for all $ (t,s,x)\in \Delta\times X$, where $V_j(t,s)$ for $j\in\{2,3\}$ is the isomorphism from Range $P_j(t)$ to Range $P_j(s)$. \end{proposition} \begin{proof}
\textit{Necessity.} By Remark \ref{rem-proiectorstrong} and Definition \ref{def-tricho} we obtain
\begin{align*}
(ht_2)\thickspace& \thickspace h(t)\|U(t,s)P_1(s)x\|\leq N(s)h(s)\|P_1(s)x\|\leq N(s)\|P_1(s)\|h(s)\|x\|\\
&\leq N_1(s)h(s)\|x\|\\
(kt_2)\thickspace &\thickspace k(t)\|V_2(t,s)P_2(t)x\|=k(t)\|P_2(s)V_2(t,s)P_2(t)x\|\\
&\leq N(t)k(s)\|U(t,s)P_2(s)V_2(t,s)P_2(t)x\|\\
&=N(t)k(s)\|P_2(t)x\|\leq N(t)\|P_2(t)\|k(s)\|x\|\leq N_1(t)k(s)\|x\|\\
(\mu t_2)\thickspace &\thickspace \mu(s)\|U(t,s)P_3(s)x\|\leq N(s) \mu(t)\|P_3(s)x\|\leq N(s)\|P_3(s)\|\mu(t)\|x\|\\
&\leq N_1(s)\mu(t)\|x\|\\
(\nu t_2 )\thickspace&\thickspace \nu(s)\|V_3(t,s)P_3(t)x\|=\nu(s)\|P_3(s)V_3(t,s)P_3(t)x\|\\
&\leq N(t)\nu(t)\|U(t,s)P_3(s)V_3(t,s)P_3(t)x\|\\
&=N(t)\nu(t)\|P_3(t)x\|\leq N(t)\|P_3(t)\|\nu(t)\|x\|\leq N_1(t)\nu(t)\|x\|,
\end{align*}for all $(t,s,x)\in\Delta\times X,$ where
$$N_1(t)=\sup_{s\in[0,t]}N(s)(\|P_1(s)\|+\|P_2(s)\|+\|P_3(s)\|).$$
\textit{Sufficiency.} The implications $(ht_2)\Rightarrow(ht_1)$ and $(\mu t_2)\Rightarrow(\mu t_1)$ follow by replacing $x$ with $P_1(s)x$ and $P_3(s)x$, respectively.
For the implications $(kt_2)\Rightarrow(kt_1)$ and $(\nu t_2)\Rightarrow (\nu t_1)$ we have (by Remark \ref{rem-proiectorstrong})
\begin{align*}
k(t)\|P_2(s)x\|&=k(t)\|V_2(t,s)U(t,s)P_2(s)x\|\leq N(t)k(s)\|U(t,s)P_2(s)x\|\\
&\text{and}\\
\nu(s)\|P_3(s)x\|&=\nu(s)\|V_3(t,s)U(t,s)P_3(s)x\|\leq N(t)\nu(t)\|U(t,s)P_3(s)x\|,
\end{align*}for all $(t,s,x)\in\Delta\times X.$ \end{proof} A similar characterization of the $u-(h,k,\mu,\nu)-t$ concept results under the hypothesis that the projectors $P_1,P_2,P_3$ are bounded. A characterization with a compatible family of projectors, without assuming the boundedness of the projectors, is given by \begin{proposition}\label{prop strong invariant trichotomy uniform}
If $\mathcal{P}=
\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)$-trichotomic if and only if there exists a constant $ N\geq 1$ such that
\begin{itemize}
\item [$(uht_1)$ ]$h(t)\|U(t,s)P_1(s)x\|\leq Nh(s)\|P_1(s)x\|$
\item [$(ukt_1)$ ]$k(t)\|V_2(t,s)P_2(t)x\|\leq N k(s) \|P_2(t)x\|$
\item [$(u\mu t_1)$ ]$\mu(s)\|U(t,s)P_3(s)x\|\leq N \mu(t)\|P_3(s)x\|$
\item[$(u\nu t_1)$ ]$\nu(s)\|V_3(t,s)P_3(t)x\|\leq N \nu(t) \|P_3(t)x\|$
\end{itemize}
for all $ (t,s,x)\in \Delta\times X$, where $V_j(t,s)$ for $j\in\{2,3\}$ is the isomorphism from Range $P_j(t)$ to Range $P_j(s)$. \end{proposition} \begin{proof}
It is similar to the proof of Proposition \ref{prop strong invariant trichotomy}. \end{proof} \section{The main result} In this section we give a characterization of $(h,k,\mu,\nu)-$trichotomy in terms of a certain type of uniform $(h,k,\mu,\nu)-$trichotomy using families of norms equivalent with the norms of $X$. Firstly we introduce \begin{definition}\label{def-norma-compatibila}
A family $\mathcal{N}=\{\|\cdot\|_t: t\geq0\}$ of norms on the Banach space $X$ (endowed with the norm $\|\cdot\|$) is called \textit{compatible} to the norm $\|\cdot\|$ if there exists a nondecreasing map $C:\mathbb{R}_+\to[1,\infty)$ such that
\begin{align}
\|x\|&\leq \|x\|_t\leq C(t)\|x\|,\label{normprop-fara-proiectori1}
\end{align}
for all $(t,x)\in\mathbb{R}_+\times X$. \end{definition} \begin{proposition}\label{ex-norma-trichotomie2}
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then the family of norms $\mathcal{N}_1=\{\|\cdot\|_t:t\geq0\}$ given by
\begin{align}
\|x\|_t&=\sup_{\tau\geq t} \frac{h(\tau)}{h(t)}\|U(\tau,t)P_1(t)x\|+\sup_{r\leq t} \frac{k(t)}{k(r)}\|V_2(t,r)P_2(t)x\|\nonumber\\
&+\sup_{\tau\geq t} \frac{\mu(t)}{\mu(\tau)}\|U(\tau,t)P_3(t)x\|\label{norma-tricho-sus}
\end{align}
is compatible with $\|\cdot\|$. \end{proposition} \begin{proof}
For $\tau=t=r$ in (\ref{norma-tricho-sus}) we obtain that
\begin{align*}
\|x\|_t&\geq \|P_1(t)x\|+\|P_2(t)x\|+\|P_3(t)x\|\geq \|x\|
\end{align*}
for all $t\geq 0$.
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then by Proposition \ref{prop strong invariant trichotomy} there exists a nondecreasing function $N_1:\mathbb{R}_+\to[1,\infty)$ such that
\begin{align*}
\|x\|_t\leq 3N_1(t)\|x\|, \text{ for all } (t,x)\in\mathbb{R}_+\times X.
\end{align*}
Finally we obtain that $\mathcal{N}_1$ is compatible with $\|\cdot\|.$ \end{proof} \begin{proposition}\label{ex-norma-trichotomie1}
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then the family of norms $\mathcal{N}_2=\{\||\cdot\||_t,t\geq0\}$ defined by
\begin{align}
\||x\||_t&=\sup_{\tau\geq t} \frac{h(\tau)}{h(t)}\|U(\tau,t)P_1(t)x\|+ \sup_{r\leq t} \frac{k(t)}{k(r)}\|V_2(t,r)P_2(t)x\|\nonumber\\
&+
\sup_{r\leq t} \frac{\nu(r)}{\nu(t)}\|V_3(t,r)P_3(t)x\|\label{norma-tricho-jos}
\end{align}
is compatible with $\|\cdot\|.$ \end{proposition} \begin{proof} If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-t$ then by Proposition \ref{prop strong invariant trichotomy} there exists a nondecreasing function $N_1:\mathbb{R}_+\to[1,\infty)$ such that
\begin{align*}
\||x\||_t\leq 3N_1(t)\|x\|, \text{ for all } (t,x)\in\mathbb{R}_+\times X.
\end{align*}
On the other hand, for $\tau=t=r$ in the definition of $\||\cdot\||_t$ we obtain
\begin{align*}
\||x\||_t&\geq \|P_1(t)x\|+\|P_2(t)x\|+\|P_3(t)x\|\geq\|x\|.
\end{align*}
In consequence, by Definition \ref{def-norma-compatibila} it results that the family of norms $\mathcal{N}_2$ is compatible to $\|\cdot\|.$ \end{proof}
The main result of this paper is \begin{theorem}\label{unif=neunif-trichotomie}If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ such that the following take place
\begin{itemize}
\item[($ht_3$) ] $h(t)\|U(t,s)P_1(s)x\|_t\leq h(s) \|P_1(s)x\|_s$
\item[($kt_3$) ] $k(t)\||V_2(t,s)P_2(t)x\||_s\leq k(s) \||P_2(t)x\||_t$
\item [$(\mu t_3)$ ]$\mu(s)\|U(t,s)P_3(s)x\|_t\leq \mu(t) \|P_3(s)x\|_s$
\item[$(\nu t_3)$ ]$\nu(s)\||V_3(t,s)P_3(t)x\||_s\leq \nu(t) \||P_3(t)x\||_t$
\end{itemize}
for all $(t,s,x)\in\Delta\times X.$ \end{theorem} \begin{proof}
\textit{Necessity.}
If the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic then by Propositions \ref{ex-norma-trichotomie2} and \ref{ex-norma-trichotomie1} there exist families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with $\|\cdot\|$.
$\boldsymbol{(ht_1)\Rightarrow(ht_3)}.$ We have that \begin{align*}
h(t)\|U(t,s)P_1(s)x\|_t&=h(t)\|P_1(t)U(t,s)P_1(s)x\|_t\\
&=h(t)\sup_{\tau\geq t} \frac{h(\tau)}{h(t)}\|U(\tau,t)P_1(t)U(t,s)P_1(s)x\|\\
&\leq h(s)\sup_{\tau\geq s} \frac{h(\tau)}{h(s)}\|U(\tau,s)P_1(s)x\|= h(s)\|P_1(s)x\|_s, \end{align*}for all $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(kt_2)\Rightarrow(kt_3)}.$ If $(kt_2)$ holds then \begin{align*}
k(t)\||V_2(t,s)P_2(t)x\||_s&=k(t)\||P_2(s)V_2(t,s)P_2(t)x\||_s\\
&=k(t)\sup_{r\leq s} \frac{k(s)}{k(r)}\|V_2(s,r)P_2(s)V_2(t,s)P_2(t)x\|\\
&\leq k(s) \sup_{r\leq t} \frac{k(t)}{k(r)}\|V_2(t,r)P_2(t)x\|= k(s)\||P_2(t)x\||_t \end{align*}for all $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(\mu t_1)\Rightarrow(\mu t_3)}.$ If $(U,\mathcal{P})$ is $(h,k,\mu,\nu)-$ trichotomic then by $(\mu t_1)$ it results \begin{align*}
\mu(s)\|U(t,s)P_3(s)x\|_t&=\mu(s)\|P_3(t)U(t,s)P_3(s)x\|_t\\
&=\mu(s)\sup_{\tau\geq t} \frac{\mu(t)}{\mu(\tau)}\|U(\tau,t)P_3(t)U(t,s)P_3(s)x\|\\
&= \mu(s)\sup_{\tau\geq t}\frac{\mu(t)}{\mu(\tau)}\|U(\tau,s)P_3(s)x\| \leq\mu(t)\sup_{\tau\geq s}\frac{\mu(s)}{\mu(\tau)}\|U(\tau,s)P_3(s)x\|\\
&=\mu(t)\|P_3(s)x\|_s, \end{align*}for all $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(\nu t_2)\Rightarrow(\nu t_3)}.$ Using the definition \eqref{norma-tricho-jos} and Remark \ref{rem-proiectorstrong} we obtain \begin{align*}
\nu(s)\||V_3(t,s)P_3(t)x\||_s&=\nu(s)\||P_3(s)V_3(t,s)P_3(t)x\||_s\\
&=\nu(s)\sup_{r\leq s} \frac{\nu(r)}{\nu(s)}\|V_3(s,r)P_3(s)V_3(t,s)P_3(t)x\|\\
&\leq \nu(t)\sup_{r\leq t}\frac{\nu(r)}{\nu(t)}\|V_3(t,r)P_3(t)x\| =\nu(t)\||P_3(t)x\||_t, \end{align*} for all $(t,s,x)\in\Delta\times X$.
\textit{Sufficiency.} We assume that there are two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ such that the inequalities $(ht_3)$--$(\nu t_3)$ take place. Let $(t,s,x)\in\Delta\times X$.
$\boldsymbol{(ht_3)\Rightarrow(ht_2)}.$ The inequality $(ht_3)$ and Definition \ref{def-norma-compatibila} imply that \begin{align*}
h(t)\|U(t,s)P_1(s)x\|&\leq h(t)\|U(t,s)P_1(s)x\|_t\leq h(s)\|P_1(s)x\|_s\\
&\leq h(s)C(s)\|P_1(s)x\| \leq C(s)\|P_1(s)\|h(s) \|x\|. \end{align*} $\boldsymbol{(kt_3)\Rightarrow(kt_2)}.$ Similarly, \begin{align*}
k(t)\|V_2(t,s)P_2(t)x\|&\leq k(t)\||V_2(t,s)P_2(t)x\||_s\leq k(s)\||P_2(t)x\||_t\\
&\leq k(s)C(t)\|P_2(t)x\|\leq C(t)\|P_2(t)\|k(s)\|x\|. \end{align*} $\boldsymbol{(\mu t_3)\Rightarrow(\mu t_2)}.$ From Definition \ref{def-norma-compatibila} and inequality $(\mu t_3)$ we have \begin{align*}
\mu(s)\|U(t,s)P_3(s)x\|&\leq \mu(s) \|U(t,s)P_3(s)x\|_t
\leq \mu(t)\|P_3(s)x\|_s\\
&\leq C(s) \mu(t)\|P_3(s)x\|\leq C(s)\|P_3(s)\| \mu(t)\|x\|. \end{align*} $\boldsymbol{(\nu t_3)\Rightarrow(\nu t_2)}.$ Similarly, \begin{align*}
\nu(s)\|V_3(t,s)P_3(t)x\|&\leq \nu(s)\||V_3(t,s)P_3(t)x\||_s\leq \nu(t)\||P_3(t)x\||_t\\
&\leq C(t) \nu(t)\|P_3(t)x\|\leq C(t)\|P_3(t)\| \nu(t)\|x\| . \end{align*} If we denote by
$$N(t)=\sup_{s\in[0,t]}C(s)(\|P_1(s)\|+\|P_2(s)\|+\|P_3(s)\|)$$ then we obtain that the inequalities $(ht_2),(kt_2),(\mu t_2),(\nu t_2)$ are satisfied. By Proposition \ref{prop strong invariant trichotomy} it follows that $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic. \end{proof} As a particular case, we obtain a characterization of (nonuniform) exponential trichotomy given by \begin{corollary}\label{cor1}
If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is exponentially trichotomic if and only if there are four real constants $\alpha,\beta,\gamma,\delta>0$ and two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ such that \begin{itemize}
\item[($et_1$) ] $\|U(t,s)P_1(s)x\|_t\leq e^{-\alpha(t-s)}\|P_1(s)x\|_s$
\item[($et_2$) ] $\||V_2(t,s)P_2(t)x\||_s\leq e^{-\beta(t-s)}\||P_2(t)x\||_t$
\item [($e t_3$) ]$\|U(t,s)P_3(s)x\|_t\leq e^{\gamma(t-s)} \|P_3(s)x\|_s$
\item[($e t_4$) ]$\||V_3(t,s)P_3(t)x\||_s\leq e^{\delta(t-s)}\||P_3(t)x\||_t$, \end{itemize} for all $(t,s,x)\in\Delta\times X.$ \end{corollary} \begin{proof}
It follows from Theorem \ref{unif=neunif-trichotomie} applied with $$h(t)=e^{\alpha t},k(t)=e^{\beta t},\mu(t)=e^{\gamma t},\nu(t)=e^{\delta t},$$ with $\alpha,\beta,\gamma,\delta>0.$ \end{proof} If the growth rates are of polynomial type then we obtain a characterization of (nonuniform) polynomial trichotomy given by \begin{corollary}\label{cor2}
Let $\mathcal{P}=\{P_1,P_2,P_3\}$ be compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$.
Then $(U,\mathcal{P})$ is nonuniformly polynomially trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the norm $\|\cdot\|$ and four real constants $\alpha,\beta,\gamma,\delta>0$ such that
\begin{itemize}
\item[($pt_1$) ] $(t+1)^\alpha\|U(t,s)P_1(s)x\|_t\leq (s+1)^\alpha\|P_1(s)x\|_s$
\item[($pt_2$) ] $(t+1)^\beta\||V_2(t,s)P_2(t)x\||_s\leq (s+1)^\beta\||P_2(t)x\||_t$
\item [($p t_3$) ]$(s+1)^\gamma\|U(t,s)P_3(s)x\|_t\leq (t+1)^\gamma \|P_3(s)x\|_s$
\item[($p t_4$) ]$(s+1)^\delta\||V_3(t,s)P_3(t)x\||_s\leq (t+1)^\delta
\||P_3(t)x\||_t$,
\end{itemize}
for all $(t,s,x)\in\Delta\times X.$ \end{corollary} \begin{proof}
It follows from Theorem \ref{unif=neunif-trichotomie} applied with $$h(t)=(t+1)^\alpha,k(t)=(t+1)^\beta,\mu(t)=(t+1)^\gamma,\nu(t)=(t+1)^\delta,$$
with $\alpha,\beta,\gamma,\delta>0.$ \end{proof} \begin{definition}
A family of norms $\mathcal{N}=\{\|\cdot\|_t,t\geq 0\}$ is \textit{uniformly compatible} with the norm $\|\cdot\|$ if there exists a constant $c>0$
such that \begin{equation}\label{norma compatibila uniform}
\|x\|\leq \|x\|_t\leq c\|x\|, \text{ for all } (t,x)\in\mathbb{R}_+\times X. \end{equation} \end{definition} \begin{remark}
From the proofs of Propositions \ref{ex-norma-trichotomie2}, \ref{ex-norma-trichotomie1} it follows that if the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)$-trichotomic then the families of norms $\mathcal{N}_1=\{\|\cdot\|_t:t\geq 0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t:t\geq 0\}$ (given by (\ref{norma-tricho-sus}) and (\ref{norma-tricho-jos})) are uniformly compatible with the norm $\|\cdot\|.$ \end{remark} A characterization of the uniform $(h,k,\mu,\nu)$-trichotomy is given by \begin{theorem}
Let $\mathcal{P}=\{P_1,P_2,P_3\}$ be compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$. Then
the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)$-trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t: t\geq0\}$ and $\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ uniformly compatible with the norm $\|\cdot\|$ such that the inequalities $(ht_3),(kt_3),(\mu t_3)$ and $(\nu t_3)$ are satisfied. \end{theorem} \begin{proof} It follows from the proof of Theorem \ref{unif=neunif-trichotomie} (via Proposition \ref{prop strong invariant trichotomy uniform}). \end{proof} \begin{remark}
Similarly as in Corollaries \ref{cor1} and \ref{cor2}, one can obtain characterizations for uniform exponential trichotomy and uniform polynomial trichotomy, respectively. \end{remark} Another characterization of the $(h,k,\mu,\nu)$-trichotomy is given by \begin{theorem}\label{unif=neunif-trichotomie-fara-proiectori}If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is $(h,k,\mu,\nu)$-trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t,t\geq 0\},\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ compatible with the family of projectors $\mathcal{P}=\{P_1,P_2,P_3\}$ such that
\begin{itemize}
\item[($ht_4$) ] $h(t)\|U(t,s)P_1(s)x\|_t\leq h(s) \|x\|_s$
\item[($kt_4$) ] $k(t)\||V_2(t,s)P_2(t)x\||_s\leq k(s) \||x\||_t$
\item [$(\mu t_4)$ ]$\mu(s)\|U(t,s)P_3(s)x\|_t\leq \mu(t) \|x\|_s$
\item[$(\nu t_4)$ ]$\nu(s)\||V_3(t,s)P_3(t)x\||_s\leq \nu(t) \||x\||_t$
\end{itemize}
for all $(t,s,x)\in\Delta\times X.$ \end{theorem} \begin{proof}
\textit{Necessity.} It follows from Theorem \ref{unif=neunif-trichotomie} and the inequalities
\begin{eqnarray*}
\|P_i(t)x\|_t\leq \|x\|_t&\text{and}&
\||P_i(t)x\||_t\leq \||x\||_t,
\end{eqnarray*} for all $(t,x)\in\mathbb{R}_+\times X$ and $i\in\{1,2,3\}$.
\textit{Sufficiency.} It follows by replacing $x$ with $P_1(s)x$ in $(ht_4)$, $x$ with $P_2(t)x$ in $(kt_4)$, $x$ with $P_3(s)x$ in $(\mu t_4)$ and $x$ with $P_3(t)x$ in $(\nu t_4)$. \end{proof} The variant of the previous theorem for uniform $(h,k,\mu,\nu)$-trichotomy is given by \begin{theorem}
If $\mathcal{P}=\{P_1,P_2,P_3\}$ is compatible with the evolution operator $U:\Delta\to\mathcal{B}(X)$ then
the pair $(U,\mathcal{P})$ is uniformly $(h,k,\mu,\nu)$-trichotomic if and only if there exist two families of norms $\mathcal{N}_1=\{\|\cdot\|_t:t\geq 0\},\mathcal{N}_2=\{\||\cdot\||_t: t\geq0\}$ uniformly compatible with the family of projectors $\mathcal{P}=\{P_1,P_2,P_3\}$ such that the inequalities $(ht_4),(kt_4),(\mu t_4)$ and $(\nu t_4)$ are satisfied. \end{theorem} \begin{proof}
It is similar to the proof of Theorem \ref{unif=neunif-trichotomie}. \end{proof} \begin{remark}
If the growth rates are of exponential or polynomial type, then we obtain characterizations for exponential trichotomy, uniform exponential trichotomy and uniform polynomial trichotomy. \end{remark}
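As a quick sanity check of the exponential case, the following Python sketch (ours, not part of the paper) takes the toy scalar evolution operator $U(t,s)=e^{-2(t-s)}$ with $P_1=\mathrm{id}$ and rate $h(t)=e^{t}$, all hypothetical choices, builds a discretized version of the Lyapunov-type norm $\|x\|_t=\sup_{\tau\geq t}\frac{h(\tau)}{h(t)}\|U(\tau,t)x\|$, and verifies inequality $(et_1)$ on a grid.

```python
import math

ALPHA = 1.0  # hypothetical exponential rate: h(t) = exp(ALPHA * t)

def U(t, s):
    # Toy scalar evolution operator (our choice, not from the paper); t >= s.
    return math.exp(-2.0 * (t - s))

def lyapunov_norm(t, x, tau_grid):
    # Discretization of ||x||_t = sup_{tau >= t} h(tau)/h(t) * |U(tau, t) x|.
    return max(math.exp(ALPHA * (tau - t)) * abs(U(tau, t) * x)
               for tau in tau_grid if tau >= t)

grid = [0.5 * k for k in range(41)]  # tau in [0, 20]
ok = True
for s in grid:
    for t in grid:
        if t < s:
            continue
        lhs = lyapunov_norm(t, U(t, s) * 1.0, grid)            # ||U(t,s)x||_t
        rhs = math.exp(-ALPHA * (t - s)) * lyapunov_norm(s, 1.0, grid)
        ok = ok and lhs <= rhs + 1e-12
print(ok)
```

Since the decay rate $2$ of $U$ exceeds $\alpha=1$, the supremum defining $\|x\|_t$ is attained at $\tau=t$, so the new norm coincides with $|x|$ here and $(et_1)$ reduces to $e^{-2(t-s)}\leq e^{-(t-s)}$.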
\end{document}
\begin{document}
\title[Characterizing Follower and Extender Set Sequences]{Characterizing Follower and Extender Set Sequences}
\begin{abstract}
Given a one-dimensional shift $X$, let $|F_X(\ell)|$ be the number of follower sets of words of length $\ell$ in $X$. We call the sequence $\{|F_X(\ell)|\}_{\ell \in \mathbb{N}}$ the follower set sequence of the shift $X$. Extender sets are a generalization of follower sets (see ~\cite{KassMadden}), and we define the extender set sequence similarly. In this paper, we explore which sequences may be realized as follower set sequences and extender set sequences of one-dimensional sofic shifts. We show that any follower set sequence or extender set sequence of a sofic shift must be eventually periodic. We also show that, subject to a few constraints, a wide class of eventually periodic sequences are possible. In fact, any natural number difference in the $\limsup$ and $\liminf$ of these sequences may be achieved, so long as the $\liminf$ of the sequence is sufficiently large. \end{abstract}
\date{} \author{Thomas French} \address{Thomas French\\ Department of Mathematics\\ University of Denver\\ 2280 S. Vine St.\\ Denver, CO 80208} \email{[email protected]}
\subjclass[2010]{Primary: 37B10}
\maketitle
\section{Introduction} \label{intro}
The complexity function of a shift $X$, $\Phi_X(\ell)$, counts the number of words of a given length $\ell$ in the language of the shift $X$. This function is natural to study; in particular, it may be used to calculate topological entropy of symbolic shifts. The Morse-Hedlund Theorem implies that if there exists $\ell \in \mathbb{N}$ with $\Phi_X(\ell) \leq \ell$, then every sequence in $X$ must be periodic. (See ~\cite{MorseHedlund}) \newline
\indent For any $\mathbb{Z}$ shift $X$ and finite word $w$ appearing in some point of $X$, the \textbf{follower set} of $w$, written $F_X(w)$, is defined as the set of all one-sided infinite sequences $s$ such that the infinite word $ws$ occurs in some point of $X$. The \textbf{extender set} of $w$, written $E_X(w)$, is the set of all pairs of infinite sequences $(s,u)$, $s$ left-infinite and $u$ right-infinite, such that $swu$ is a point of $X$. It is well-known that for a $\mathbb{Z}$ shift $X$, finiteness of $\{F_X(w) \ | \ w \text{ in the language of $X$} \}$ is equivalent to $X$ being sofic, that is, the image of a shift of finite type under a continuous shift-commuting map. (see ~\cite{LindMarcus}) (In fact it is true that finiteness of $\{E_X(w) \ | \ w \text{ in the language of $X$}\}$ is equivalent to $X$ being sofic as well, see ~\cite{OrmesPavlov}).
We define the set $F_X(\ell)$ to be $\{F_X(w) \ | \ w \text{ has length }\ell\}$ for any positive integer $\ell$. Thus $|F_X(\ell)|$ is the total number of follower sets which correspond to some word $w$ of length $\ell$ in $X$. $|E_X(\ell)|$ is defined similarly for extender sets. Since the alphabet is finite, there are only finitely many words of a given length $\ell$, and so for any shift (sofic or not), $|F_X(\ell)|$ and $|E_X(\ell)|$ are finite for every $\ell$. If $X$ is sofic, $\{F_X(w) \ | \ w \text{ in the language of $X$} \}$ is finite, and thus the follower set sequence $\{|F_X(\ell)|\}_{\ell \in \mathbb{N}}$ must be bounded, and similarly for the extender set sequence $\{|E_X(\ell)|\}_{\ell \in \mathbb{N}}$. Ormes and Pavlov have proved a result in the style of Morse-Hedlund, that if $|E_X(\ell)| \leq \ell$ for any $\ell \in \mathbb{N}$, then $X$ is necessarily sofic. (See ~\cite{OrmesPavlov}) This may lead one to believe that there is a connection between those sequences which may appear as complexity sequences of shifts and those which may appear as extender set sequences of shifts. The results in this paper suggest otherwise; in particular, extender set sequences need not be monotonic!
First, any follower set sequence or extender set sequence of a one-dimensional sofic shift must be eventually periodic:
\begin{theorem}\label{eventuallyperiodic}
Let $X$ be a one-dimensional sofic shift, $p$ be one greater than the total number of extender sets in $X$, and $p_0$ be one greater than the total number of follower sets in $X$. Then the extender set sequence $\{|E_X(\ell)|\}_{\ell \in \mathbb{N}}$ is eventually periodic, where the periodicity must begin before the $p(1+p!)^{th}$ term, and the least eventual period is at most $p!$. The follower set sequence $\{|F_X(\ell)|\}_{\ell \in \mathbb{N}}$ is eventually periodic, where the periodicity must begin before the $p_0(1+p_0!)^{th}$ term, and the least eventual period is at most $p_0!$. \end{theorem}
In simple examples, the follower set sequence and extender set sequence of a sofic shift are eventually constant, but in fact, sequences may be realized which are merely eventually periodic. This is in contrast to complexity sequences of shifts, which must be nondecreasing. Martin Delacourt discovered the first such example in 2013. (see page 8 of ~\cite{OrmesPavlov}) In fact, a wide class of eventually periodic sequences may be realized:
\begin{theorem}\label{mainthm} Let $n \in \mathbb{N}$, and $\mathcal{A} = \{A_1, A_2, A_3, ..., A_k\}$ be a nontrivial partition of $\{0, 1, ..., n-1\}$. Let $0 = r_1 < r_2< ... < r_k$ be natural numbers. Then there exists $m \in \mathbb{N}$ and an irreducible graph $\mathcal{G}$ such that the number of follower sets in $X_\mathcal{G}$ of words of length $\ell$ where $\ell \geq n+2$ and $\ell \pmod n \in A_j$ will be exactly $m + r_j$ for all $1 \leq j \leq k$. Furthermore, $m$ may be chosen such that $m < (6n+3)r_k$. \end{theorem}
We also prove a similar result for extender sets:
\begin{theorem}\label{ESmain} Let $n \in \mathbb{N}$, and $\mathcal{A} = \{A_1, A_2, A_3, ..., A_k\}$ be a nontrivial partition of $\{0, 1, ..., n-1\}$. Let $0 = r_1 < r_2< ... < r_k$ be natural numbers. Then there exists $m \in \mathbb{N}$ and an irreducible graph $\mathcal{G}$ such that the number of extender sets in $X_\mathcal{G}$ of words of length $\ell$ where $\ell \geq 14r_kn - 1$ and $\ell \pmod n \in A_j$ will be exactly $m + r_j$ for all $1 \leq j \leq k$. Furthermore, $m$ may be chosen such that $m \leq 39n^2r_k^2$. \end{theorem}
The proofs of Theorems~\ref{mainthm} and~\ref{ESmain} are broken into two parts. First, we define a process, given $n \in \mathbb{N}$ and $S \subset \{0, 1, ... , n-1\}$, of constructing a graph $\mathcal{G}_{n,S}$ which gives words of length $\ell \pmod n \in S$ one greater follower and extender set than words of length $\ell \pmod n \notin S$ whenever $\ell$ is sufficiently large. We then describe a method of combining these graphs which results in a new shift, where for each $\ell$, the number of follower or extender sets of words of length $\ell$ is the sum of the number of follower or extender sets of words of length $\ell$ in each of the original shifts, plus a constant which does not depend on $\ell$. Combining these two propositions proves the result.
Finally, while non-sofic shifts always have follower set sequences and extender set sequences which go to infinity, we show that they need not do so in a monotone increasing fashion:
\begin{theorem}\label{Non-sofic}
There exists an irreducible non-sofic shift $X$ such that $\{|F_X(\ell)|\}_{\ell \in \mathbb{N}}$ and $\{|E_X(\ell)|\}_{\ell \in \mathbb{N}}$ are not monotone increasing. \end{theorem}
\section{Definitions and preliminaries} \label{defns} Let $A$ denote a finite set, which we will refer to as our alphabet.
\begin{definition} A \textbf{word} over $A$ is a member of $A^n$ for some $n \in \mathbb{N}$. \end{definition}
\begin{definition} For any words $v \in A^n$ and $w \in A^m$, we define the \textbf{concatenation} $vw$ to be the pattern in $A^{n+m}$ whose first $n$ letters are the letters forming $v$ and whose next $m$ letters are the letters forming $w$. \end{definition}
\begin{definition} The \textbf{language} of a $\mathbb{Z}$ shift $X$, denoted by $L(X)$, is the set of all words which appear in points of $X$. For any finite $\ell \in \mathbb{N}$, $L_\ell(X) := L(X) \cap A^\ell$, the set of words in the language of $X$ with length $\ell$. \end{definition}
\begin{definition} For any one-dimensional shift $X$ over the alphabet $A$, and any word $w$ in the language of $X$, we define the \textbf{follower set of w in $X$}, $F_X(w)$, to be the set of all one-sided infinite sequences $s \in A^\mathbb{N}$ such that the infinite word $ws$ occurs in some point of $X$. \end{definition}
\begin{definition} For any one-dimensional shift $X$ over the alphabet $A$, and any word $w$ in the language of $X$, we define the \textbf{extender set of w in $X$}, $E_X(w)$, to be the set of all pairs $(s, u)$ where $s$ is a left-infinite sequence of symbols in $A$, $u$ is a right-infinite sequence of symbols in $A$, and $swu$ is a point of $X$. \end{definition}
\begin{remark} For any word $w \in L(X)$, define a projection function $f_w:E_X(w) \rightarrow F_X(w)$ by $f_w(s,u) = u$. This function sends the extender set of $w$ onto the follower set of $w$. Any two words $w, v$ with the same extender set then have the property that $f_w(E_X(w))= f_v(E_X(v))$, that is, that $w$ and $v$ have the same follower set. \end{remark}
\begin{definition}
For any positive integer $\ell$, define the set $F_X(\ell) = \{F_X(w) \ | \ w \in L_{\ell}(X)\}$. Thus the cardinality $|F_X(\ell)|$ is the number of distinct follower sets of words of length $\ell$ in $X$. Similarly, define $E_X(\ell) = \{E_X(w) \ | \ w \in L_\ell(X)\}$, so that $|E_X(\ell)|$ is the number of distinct extender sets of words of length $\ell$ in $X$. \end{definition}
\begin{definition}
Given a shift $X$, the \textbf{follower set sequence of $X$} is the sequence $\{|F_X(\ell)|\}_{\ell \in \mathbb{N}}$. The \textbf{extender set sequence of $X$} is the sequence $\{|E_X(\ell)|\}_{\ell \in \mathbb{N}}$. \end{definition}
\begin{example}
Let $X$ be a full shift on the alphabet $A$. Then any word $w \in L(X)$ may be followed legally by any sequence in $A^{\mathbb{N}}$, and thus the follower set of any word is the same. Hence there is only one follower set in a full shift. Similarly, there is only one extender set in a full shift. Then $\{|F_X(\ell)|\}_{\ell \in \mathbb{N}} = \{|E_X(\ell)|\}_{\ell \in \mathbb{N}} = \{1, 1, 1, ...\}$. \end{example}
\begin{example}
The even shift is the one-dimensional sofic shift with alphabet $\{0,1\}$ defined by forbidding odd runs of zeros between ones. It is a simple exercise to show that the even shift has three follower sets, $F(0), F(1),$ and $F(10)$. The follower set sequence of the even shift is $\{|F_X(\ell)|\}_{\ell \in \mathbb{N}} = \{2, 3, 3, 3, ...\}$. It is easy to verify that for any word $w$ in the language of the even shift, the follower set of $w$ is identical to the follower set of one of these three words. \end{example}
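The even shift computation can be checked mechanically. The following Python sketch (our illustration, not from the paper) presents the even shift by its standard two-vertex right-resolving labeled graph and counts, for each length $\ell$, the distinct sets of terminal vertices of paths labeled by words of length $\ell$; since this presentation is follower-separated, distinct terminal-vertex sets correspond here to distinct follower sets.

```python
# Two-vertex presentation of the even shift: A --1--> A, A --0--> B, B --0--> A.
EDGES = {("A", "1"): "A", ("A", "0"): "B", ("B", "0"): "A"}
VERTICES = {"A", "B"}
ALPHABET = {"0", "1"}

def terminal_sets(length):
    """Distinct terminal-vertex sets over all words of the given length."""
    current = {(): frozenset(VERTICES)}  # the empty word ends at every vertex
    for _ in range(length):
        nxt = {}
        for word, ends in current.items():
            for a in ALPHABET:
                new_ends = frozenset(EDGES[(v, a)] for v in ends if (v, a) in EDGES)
                if new_ends:  # keep only words in the language of the shift
                    nxt[word + (a,)] = new_ends
        current = nxt
    return set(current.values())

follower_sequence = [len(terminal_sets(l)) for l in range(1, 7)]
print(follower_sequence)
```

The computed sequence begins $2, 3, 3, 3, \ldots$, matching the three follower sets $F(0)$, $F(1)$, $F(10)$ described above.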
\begin{definition} A one-dimensional shift $X$ is \textbf{sofic} if it is the image of a shift of finite type under a continuous shift-commuting map. \end{definition}
Equivalently, a shift $X$ is sofic iff there exists a finite directed graph $\mathcal{G}$ with labeled edges such that the points of $X$ are exactly the sets of labels of bi-infinite walks on $\mathcal{G}$. Then $\mathcal{G}$ is a \textbf{presentation} of $X$ and we say $X = X_\mathcal{G}$ (or that $X$ is the \textbf{edge shift} presented by $\mathcal{G}$). Another well-known equivalence is that sofic shifts are those with only finitely many follower sets, that is, a shift $X$ is sofic iff $\{F_X(w) \ | \ w \text{ in the language of $X$} \}$ is finite. The same equivalence exists for extender sets: $X$ is sofic iff $\{E_X(w) \ | \ w \text{ in the language of $X$} \}$ is finite. (see ~\cite{OrmesPavlov}) This necessarily implies that for a sofic shift $X$, the follower set sequence and extender set sequence of $X$ are bounded. In fact, the converse is also true: if the follower set or extender set sequence of a shift $X$ is bounded, then $X$ is necessarily sofic. (See ~\cite{OrmesPavlov})
\begin{definition} A directed labeled graph $\mathcal{G}$ is \textbf{irreducible} if for every ordered pair $(I, J)$ of vertices in $\mathcal{G}$, there exists a path in $\mathcal{G}$ from $I$ to $J$. \end{definition}
Results about shifts presented by graphs which are not irreducible may often be found by considering the reducible graph's irreducible components; for this reason, results in Sections ~\ref{followers} and ~\ref{XcrossY} of this paper will focus on the irreducible case.
\begin{definition} A directed labeled graph $\mathcal{G}$ is \textbf{primitive} if $\exists \> N \in \mathbb{N}$ such that for every $n \geq N$, for every ordered pair $(I, J)$ of vertices in $\mathcal{G}$, there exists a path in $\mathcal{G}$ from $I$ to $J$ of length $n$. The least such $N$ is the \textbf{primitivity distance} for $\mathcal{G}$. \end{definition}
\begin{definition} A directed labeled graph $\mathcal{G}$ is \textbf{right-resolving} if for each vertex $I$ of $\mathcal{G}$, all edges starting at $I$ carry different labels. Similarly, $\mathcal{G}$ is \textbf{left-resolving} if for each vertex $I$ of $\mathcal{G}$, all edges ending at $I$ carry different labels. \end{definition}
\begin{definition} A directed labeled graph $\mathcal{G}$ is \textbf{follower-separated} if distinct vertices in $\mathcal{G}$ correspond to distinct follower sets--for all vertices $I, J$ in $\mathcal{G}$, there exists a one-sided infinite sequence $s$ of labels which may follow one vertex but not the other. \end{definition}
\begin{definition} A directed labeled graph $\mathcal{G}$ is \textbf{extender-separated} if distinct pairs of vertices correspond to distinct extender sets--for any two distinct pairs of initial and terminal vertices $\{I \rightarrow I'\}$ and $\{J \rightarrow J'\}$ such that there exist paths in $\mathcal{G}$ from $I$ to $I'$ and from $J$ to $J'$, there exists some word $w$ which is the label of a path in $\mathcal{G}$ beginning and ending with one pair of vertices, and pair $(s,u)$, $s$ a left-infinite sequence, $u$ a right-infinite sequence, such that $swu$ is a point of $X_\mathcal{G}$, but for every word $v$ which is the label of some path beginning and ending with the other pair of vertices, $svu$ is not a point of $X_\mathcal{G}$. \end{definition}
\begin{definition} Given a directed labeled graph $\mathcal{G}$, a word $w$ is \textbf{right-synchronizing} if all paths in $\mathcal{G}$ labeled $w$ terminate at the same vertex. The word $w$ is \textbf{left-synchronizing} if all paths in $\mathcal{G}$ labeled $w$ begin at the same vertex. The word $w$ is \textbf{bi-synchronizing} if $w$ is both left- and right-synchronizing. A \textbf{bi-synchronizing letter} is a bi-synchronizing word of length 1. \end{definition}
In fact, every one-dimensional sofic shift has a presentation $\mathcal{G}$ which is right-resolving, follower-separated, and contains a right-synchronizing word. (See ~\cite{LindMarcus})
\section{Eventual Periodicity of Follower and Extender Set Sequences} \label{periodicity}
First, we show that only eventually periodic sequences may appear as follower set sequences or extender set sequences of one-dimensional sofic shifts. To establish this result, we first need a lemma which is reminiscent of the pumping lemma. (See ~\cite{Lawson})
\begin{lemma}\label{pumping}
Let $X$ be a sofic shift, define $p$ to be one greater than the total number of extender sets in $X$, and define $p_0$ to be one greater than the total number of follower sets in $X$. Then all words $w$ in $L(X)$ of length $\ell \geq p $ may be written as $w = xyz$, where $|y| \geq 1$ and the word $xy^iz$ has the same extender set as the word $w$ for all $i \in \mathbb{N}$. Furthermore, all words $w$ in $L(X)$ of length $\ell \geq p_0$ may be written as $w = xyz$, where $|y| \geq 1$ and the word $xy^iz$ has the same follower set as the word $w$ for all $i \in \mathbb{N}$. \end{lemma}
\begin{proof}
Since $X$ is sofic, $X$ has only finitely many extender sets. Let $p$ be one greater than the number of extender sets in $X$. Since $X$ is represented by a finite labeled graph $\mathcal{G}$, and there are only $|V(\mathcal{G})|^2$ possible pairs of vertices in $\mathcal{G}$, and each extender set must correspond to a non-empty set of pairs, we have that $p \leq 2^{(|V(\mathcal{G})|^2)}$. Let $w$ be a word in $L(X)$ of length $\ell \geq p$. Consider the prefixes of $w$. Since $w$ is of length at least $p$, there exist two prefixes of $w$ (one necessarily a strict subword of the other) with the same extender set, say $x$ and $xy$, where $|y| \geq 1$. Then for any pair $(s, u)$ of infinite sequences, $sxu$ is a point of $X$ if and only if $sxyu$ is a point of $X$ also. Call the remaining portion of $w$ by $z$ so that $w = xyz$. (We may have $|z| = 0$).\newline \indent Now, let $(s, u)$ be in the extender set of $w$, that is, $swu$ is a point of $X$. But $swu = sxyzu$, so $(s, yzu)$ is in the extender set of $x$. By the above, then, $(s, yzu)$ is in the extender set of $xy$ also, that is, $sxyyzu$ is a point of $X$. Hence, $(s, u)$ is in the extender set of $xyyz$. So $E_X(xyz) \subseteq E_X(xyyz)$. On the other hand, if $(s, u)$ is in the extender set of $xyyz$, then $sxyyzu$ is a point of $X$, and so $(s, yzu)$ is in the extender set of $xy$, and therefore the extender set of $x$. Thus $sxyzu$ is a point of $X$, and so $(s, u)$ is in the extender set of $xyz = w$. Therefore $E_X(xyz) = E_X(xyyz)$. Applying this argument repeatedly gives that $E_X(xyz) = E_X(xy^iz)$ for any $i \in \mathbb{N}$. \newline \indent Letting $p_0$ be one greater than the total number of follower sets in $X$, an identical argument gives the corresponding result for follower sets. \end{proof}
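For a concrete instance of the lemma (our illustration, reusing the even shift's two-vertex presentation), the following Python sketch finds a pumpable decomposition $w = xyz$ by locating two prefixes of $w$ with the same terminal-vertex set, and then checks that $xy^iz$ keeps that terminal-vertex set, and hence, for this follower-separated presentation, the same follower set, for several values of $i$.

```python
# Even shift presentation again: A --1--> A, A --0--> B, B --0--> A.
EDGES = {("A", "1"): "A", ("A", "0"): "B", ("B", "0"): "A"}
VERTICES = {"A", "B"}

def terminal_set(word):
    """Set of terminal vertices of paths labeled by `word`."""
    ends = set(VERTICES)
    for a in word:
        ends = {EDGES[(v, a)] for v in ends if (v, a) in EDGES}
    return frozenset(ends)

def pump_decomposition(word):
    """Find w = xyz with |y| >= 1 and two prefixes x, xy of equal class."""
    seen = {}
    for i in range(len(word) + 1):
        t = terminal_set(word[:i])
        if t in seen:
            j = seen[t]
            return word[:j], word[j:i], word[i:]
        seen[t] = i
    raise ValueError("word too short: no repeated prefix class found")

w = "100110"  # a word in the language of the even shift
x, y, z = pump_decomposition(w)
pumped_ok = all(terminal_set(x + y * i + z) == terminal_set(w) for i in range(5))
print(x, y, z, pumped_ok)
```

Here the pigeonhole step of the proof is explicit: the even shift has only three terminal-vertex classes, so any word of length at least three admits two prefixes in the same class.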
We use this lemma to establish Theorem~\ref{eventuallyperiodic}, that all follower set and extender set sequences of one-dimensional sofic shifts must be eventually periodic:
\begin{proof}[Proof of Theorem~\ref{eventuallyperiodic}]
Let $X$ be a one-dimensional sofic shift. We prove that the sequences $\{F_X(\ell)\}$ and $\{E_X(\ell)\}$ (that is, the sequences which record not only the number of follower sets of each length, but also the identities of those sets) are eventually periodic, which will trivially imply eventual periodicity of $\{|F_X(\ell)|\}$ and $\{|E_X(\ell)|\}$. \newline
\indent Let $w$ be a word of length $p$ in $L(X)$, where $p$ is defined as in Lemma~\ref{pumping}. Then by Lemma~\ref{pumping}, $w = xy_wz$ where $|y_w| \geq 1$ and $xy_w^iz$ has the same extender set as $w$ for all $i \in \mathbb{N}$. Let $k = \mathrm{lcm}\{|y_w|\> | \> w \in L_p(X)\}$. Since each word $y_w$ has length at most $p$, we have $k \leq p!$. Clearly for any word $w \in L_p(X)$, there is an $i \in \mathbb{N}$ such that $xy_w^iz \in L_{p+k}(X)$. Therefore, every extender set in $E_X(p)$ is also an extender set in $E_X(p+k)$, so $E_X(p) \subseteq E_X(p+k)$. \newline \indent Now, let $w$ be a word of length $\ell > p$ in $L(X)$. Then $w$ has some word $w' = w_1...w_p \in L_p(X)$ as a prefix. Applying Lemma ~\ref{pumping} to $w'$ as above, we get a word $w''$ of length $p+k$ with the same extender set as $w'$. If $(s,u)$ is in the extender set of $w = w'w_{p+1}...w_\ell$, then $sw'w_{p+1}...w_\ell u$ is a point of $X$, and so $(s, w_{p+1}...w_\ell u)$ is in the extender set of $w'$. Since $w'$ and $w''$ have the same extender set, $sw''w_{p+1}...w_\ell u$ is a point of $X$, and $(s,u)$ is in the extender set of $w''w_{p+1}...w_\ell$. Similarly, if $(s,u)$ is in the extender set of $w''w_{p+1}...w_\ell$, then $(s,u)$ is in the extender set of $w$ as well. Therefore $w''w_{p+1}...w_\ell$ is a word in $X$ of length $\ell + k$ with the same extender set as $w$. Hence, every extender set in $E_X(\ell)$ is an extender set in $E_X(\ell + k)$. So $E_X(\ell) \subseteq E_X(\ell + k)$ for any $\ell \geq p$. \newline
\indent But sofic shifts only have finitely many extender sets, so eventually, the sequence $\{|E_X(\ell + jk)|\}_{j \in \mathbb{N}}$ must stop growing. Thus, we have $E_X(\ell) = E_X(\ell+k)$ for all sufficiently large $\ell$, and the sequence $\{E_X(\ell)\}$ is eventually periodic with period $k$, where $k \leq p!$. Certainly, this implies that the extender set sequence is eventually periodic with period $k$ as well. \newline
\indent Suppose $\ell \geq p$ and $E_X(\ell) = E_X(\ell+k)$. Then we claim that $E_X(\ell+k) = E_X(\ell+2k)$: By the above, we have $E_X(\ell+k) \subseteq E_X(\ell+2k)$. Suppose $w$ is a word of length $\ell + 2k$, say $w = w_1w_2...w_{\ell+k}w_{\ell+k+1}...w_{\ell+2k}$. Then, because $E_X(\ell) = E_X(\ell+k)$, there exists a word $z$ of length $\ell$ such that $z$ and $w_1w_2...w_{\ell+k}$ have the same extender set. Let $(s,u)$ be in the extender set of $w$. Then $sw_1w_2...w_{\ell+k}w_{\ell+k+1}...w_{\ell+2k}u$ is a point of $X$, so $(s,w_{\ell+k+1}...w_{\ell+2k}u)$ is in the extender set of $w_1w_2...w_{\ell+k}$, and thus in the extender set of $z$. So $szw_{\ell + k + 1}w_{\ell + k + 2}...w_{\ell+2k}u$ is a point of $X$. Hence $(s, u)$ is in the extender set of $zw_{\ell + k + 1}w_{\ell + k + 2}...w_{\ell+2k}$, a word of length $\ell +k$. Similarly, if $(s,u)$ is in the extender set of $zw_{\ell + k + 1}w_{\ell + k + 2}...w_{\ell+2k}$, then $(s,u)$ is in the extender set of $w$, giving $E_X(w) = E_X(zw_{\ell + k + 1}w_{\ell + k + 2}...w_{\ell+2k})$. Therefore we have $E_X(\ell+2k) \subseteq E_X(\ell+k)$ and we may conclude that $E_X(\ell+k) = E_X(\ell+2k)$. \newline \indent Now, $|E_X(\ell)| < p$ for any given $\ell \in \mathbb{N}$. Moreover, we have proven that the sequence $\{E_X(\ell + jk)\}_{j \in \mathbb{N}}$ is nondecreasing and nested by inclusion, and once two terms of the sequence are equal, it will stabilize for all larger $j$. The sequence $\{E_X(\ell + jk)\}_{j \in \mathbb{N}}$ must grow fewer than $p$ times, so the periodicity of the sequence $\{E_X(\ell)\}$ (and thus of $\{|E_X(\ell)|\}$) must begin before the $(p + pk)^{th}$ term, and $p + pk \leq p + p(p!) = p(1 + p!)$. \newline \indent Again, a similar argument using the follower set portion of Lemma ~\ref{pumping} establishes the corresponding result for follower sets. \end{proof}
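The containments $E_X(\ell) \subseteq E_X(\ell+k)$ and the eventual stabilization can be watched numerically. In the Python sketch below (ours; the even shift is used only as a convenient example), the extender set of a word is represented by its set of (initial, terminal) vertex pairs of paths in the two-vertex presentation; distinct pair-sets always bound $|E_X(\ell)|$ from above, and for this presentation they appear to be in bijection with the extender sets.

```python
# Even shift presentation: A --1--> A, A --0--> B, B --0--> A.
EDGES = {("A", "1"): "A", ("A", "0"): "B", ("B", "0"): "A"}
VERTICES = ("A", "B")

def pair_set_family(length):
    """Distinct sets of (initial, terminal) pairs over all words of a length."""
    current = {frozenset((v, v) for v in VERTICES)}  # empty word: identity pairs
    for _ in range(length):
        nxt = set()
        for rel in current:
            for a in ("0", "1"):
                # Extend every path by the letter a; drop empty relations
                # (they correspond to words outside the language).
                new_rel = frozenset((i, EDGES[(t, a)]) for (i, t) in rel
                                    if (t, a) in EDGES)
                if new_rel:
                    nxt.add(new_rel)
        current = nxt
    return current

counts = [len(pair_set_family(l)) for l in range(1, 9)]
print(counts)
```

The counts come out $2, 4, 5, 5, 5, \ldots$, eventually constant, while the family of pair-sets itself alternates with period $2$ from length $3$ on, illustrating that it is the family $E_X(\ell)$, not just the count, that becomes periodic.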
\section{Existence of Eventually Periodic Nonconstant Follower and Extender Set Sequences} \label{followers}
Now we demonstrate the existence of sofic shifts with follower set sequences which are not eventually constant. Given $n \in \mathbb{N}$ and $S \subset \{0, 1, ..., n-1\}$, construct an irreducible graph $\mathcal{G}_{n,S}$ in the following way: \newline
First, place edges labeled $p, q$ and $b$ as below, followed by a loop of $n$-many edges labeled $a$. We will refer to the initial vertex of the edge $p$ as ``Start." \newline
\includegraphics[scale=.7]{Step1}
Choose a fixed $i^* \in S$. Then for every $i \in S, \> i \neq i^*$, place two consecutive edges $c_i$ and $d_i$, such that the initial vertex of $c_i$ is the $(i-2 \pmod n)^{th}$ vertex of the loop of edges labeled $a$ (where we make the convention that the terminal vertex of the edge $b$ is the $0^{th}$ vertex of the loop, the next vertex the $1^{st}$ vertex of the loop, and so on) and the terminal vertex of $d_i$ is ``Start." For $i^*$, we still add the edge $c_{i^*}$, but follow it instead by another edge labeled $b$ and another loop of $n$-many edges labeled $a$: \newline \newline
\includegraphics[scale=.7]{Step2}
Again, for every $i \in S$, at the $(i - 2 \pmod n)^{th}$ vertex of the new loop, add an edge $c_i$, and if $i \neq i^*$, follow with an edge $e_i$ returning to Start. After $c_{i^*}$, add a third loop of $n$-many edges labeled $a$: \newline
\includegraphics[scale=.7]{Step3}
Finally, for each $i \in S$ (including $i^*$), add an edge $c_i$ from the $(i - 2 \pmod n)^{th}$ vertex of the third loop returning to Start. The resulting graph is $\mathcal{G}_{n,S}$. Due to the choice of $i^*$, there are $|S|$-many possible graphs $\mathcal{G}_{n,S}$; the results of this paper will hold for any of them.
\begin{figure}\label{G503}
\end{figure}
We first make some basic observations about the graphs $\mathcal{G}_{n,S}$. It is easy to check that for any $n, S$, the graph $\mathcal{G}_{n,S}$ will be irreducible, right-resolving and left-resolving, follower-separated and extender-separated. We furthermore observe that the graph is primitive:
\begin{lemma}\label{primitivity} For any $n \in \mathbb{N}, S \subset \{0, 1, ..., n-1\}$, the graph $\mathcal{G}_{n,S}$ is primitive, and the primitivity distance of $\mathcal{G}_{n,S}$ is at most $3n + 3$. \end{lemma}
\begin{proof}
Any irreducible graph with a self-loop is primitive, so due to the self-loop labeled $q$, $\mathcal{G}_{n,S}$ is primitive. So long as a path passes the vertex at which $q$ is anchored, that path may be inflated to any greater length by following the self-loop $q$ repeatedly. Given any two vertices $I$ and $J$ in $\mathcal{G}_{n,S}$, we may clearly get from $I$ to $J$, while being certain to also pass the vertex anchoring $q$, by traveling through each loop of edges labeled $a$ at most once (and following at most $n-1$ of the edges in each loop), and using no more than six letters total for the connecting paths between the loops. Thus, the longest path required to travel from one vertex to another in $\mathcal{G}_{n,S}$, requiring that such a path pass through the vertex which anchors $q$, is of length at most $3(n-1) + 6 = 3n+3$. For instance, if $S = \{1\}$, the shortest possible path from the initial vertex of $p$ to itself is labeled by the word $pba^{n-1}c_1ba^{n-1}c_1a^{n-1}c_1$, which clearly travels through the vertex anchoring $q$. This is a sort of ``worst-case" scenario, where, because $|S| = 1$, the shortest path requires traveling through all three loops of the graph, and with $i^*=1$, the exit of each loop is as far away from the entry point as possible. \end{proof}
Any graph $\mathcal{G}_{n,S}$ created by the above construction yields a shift whose follower set sequence and extender set sequence are eventually periodic with period $n$, and whose $\limsup$ and $\liminf$ differ by 1:
\begin{theorem}\label{Lemma1}
Let $n \in \mathbb{N}$ and $S \subset \{0, 1, ... , n-1\}$. Then the shift $X_{\mathcal{G}_{n, S}}$ has $3n + 3|S| + 4$ follower sets for words of length $\ell$ where $\ell \geq n + 2$ and $\ell \pmod n \in S$, and only $3n + 3|S| + 3$ follower sets for words of length $\ell'$ where $\ell' \geq n + 2$ and $\ell' \pmod n \notin S$. Furthermore, the shift $X_{\mathcal{G}_{n, S}}$ has $(3n + 2|S| + 1)^2 + |S| + 3$ extender sets for words of length $\ell$ where $\ell \geq 3n+3$ and $\ell \pmod n \in S$, and only $(3n + 2|S| + 1)^2 + |S| + 2$ extender sets for words of length $\ell'$ where $\ell' \geq 3n+3$ and $\ell' \pmod n \notin S$. \end{theorem}
\begin{proof} Let $\ell \geq n+2$. Let $\mathcal{G} = \mathcal{G}_{n, S}$ as defined above. We will find the number of follower sets and extender sets of words of length $\ell$ in $X_\mathcal{G}$ (though for one part of the extender set case, we will need to require $\ell \geq 3n+3$). Each follower set $F_X(w)$ is uniquely determined by the set of terminal vertices of paths labeled $w$ in $\mathcal{G}$, and each extender set $E_X(w)$ is determined by the set of pairs $\{I \rightarrow T \}$ of initial and terminal vertices of paths labeled $w$ in $\mathcal{G}$. \newline
\indent Since in $\mathcal{G}$ the words $p$, $q$, $c_{i^*}b$, $d_i$, $e_i$, and $c_{i^*}a$ are right-synchronizing, the longest path required to get from a right-synchronizing word to any vertex of $\mathcal{G}$ has length at most $n + 2$. Because $\ell \geq n+2$, and the graph $\mathcal{G}$ is irreducible and right-resolving, every singleton represents the follower set of some word $w$ of length $\ell$. There are $3n + 2|S| + 1$ such follower sets, all distinct as $\mathcal{G}$ is follower-separated. \newline
\indent It is easy to see that the right-synchronizing words listed above are in fact bi-synchronizing. Since the graph $\mathcal{G}$ is both left- and right-resolving, if a legal word $w$ contains any bi-synchronizing word, then only one pair $\{ I \rightarrow T \}$ can be the initial and terminal vertices of paths labeled $w$. If $\ell \geq 3n+3$, by Lemma~\ref{primitivity} every pair $\{I \rightarrow T \}$ corresponds to the extender set of some word $w$ of length $\ell$. There are $(3n + 2|S| + 1)^2$ such extender sets, all distinct as $\mathcal{G}$ is extender-separated. \newline \indent Note that for the graph $\mathcal{G}$, recording the labels of two edges beyond any loop of edges labeled $a$, whether before or after the loop, results in a bi-synchronizing word. Since any word which would be capable of having a follower set not corresponding to a singleton (or of having an extender set not corresponding to a single pair of initial and terminal vertices) must be one which avoids all bi-synchronizing words, and $\ell \geq n+2>2$, any word of length $\ell$ with such a follower or extender set must include a string of $a$'s, and no more than 1 letter on either side of such a string. So only words of the forms $a^\ell$, $ka^{\ell-1}$, $a^{\ell-1}k'$, and $ka^{\ell-2}k'$ (where $k$ and $k'$ are labels appearing in $\mathcal{G}$ not equal to $a$) can terminate (or begin) at more than one vertex. \newline \indent The word $a^\ell$ has 1 follower set, corresponding to all $3n$ vertices involved in loops of edges labeled $a$. This follower set is distinct from those corresponding to singletons, for which we have previously accounted. Similarly, the extender set of the word $a^\ell$ is distinct from those for which we have previously accounted.\newline
\indent The label $a$ is only followed in $\mathcal{G}$ by the labels $a$ and $c_i$ for all $i \in S$. For each $c_i$, the word $a^{\ell-1}c_i$ has a unique follower set corresponding to three terminal vertices, one for each loop in which the $a^{\ell-1}$ may occur. Thus there are $|S|$-many follower sets of this form, all distinct from previous follower sets. Similarly, there are $|S|$-many extender sets of this form, all distinct from previous extender sets as well.\newline \indent The label $a$ is only preceded in $\mathcal{G}$ by the labels $a$, $b$, and $c_{i^*}$. Since $c_{i^*}a$ is bi-synchronizing, the follower set for the word $c_{i^*}a^{\ell-1}$ corresponds to a singleton and has already been counted. The word $ba^{\ell-1}$ has a follower set corresponding to two terminal vertices, one in each loop of edges labeled $a$ which is preceded by $b$. Hence there is 1 additional distinct follower set of this form, and again this behavior is mirrored by the extender sets--there is 1 additional distinct extender set for the word $ba^{\ell -1}$. \newline \indent Finally, based on our above observations, if a word of the form $ka^{\ell-2}k'$ is to have a follower set corresponding to a greater number of vertices than one, that word must be of the form $ba^{\ell-2}c_i$. By construction, a path with this label only exists in $\mathcal{G}$ if $\ell \pmod n \in S$. If such a path exists, it contributes a single new follower set corresponding to two terminal vertices, each one edge past a loop of edges labeled $a$ that is preceded by the label $b$. This follower set cannot repeat one that we already found: if $i \neq i^*$, then the follower set of $ba^{\ell -2}c_i$ is exactly the set of all legal sequences beginning with $d_i$ or $e_i$, clearly not equal to the follower set of any other word of length $\ell$. 
If $i = i^*$, the follower set of $ba^{\ell -2}c_i$ contains sequences beginning with each of the letters $a$ and $b$, but no other letter, again setting it apart from any other follower set previously discussed. Similarly, the word $ba^{\ell -2}c_i$ contributes a single new extender set for any length $\ell$ for which a path with this label exists. \newline
\indent Therefore, in $X_\mathcal{G}$, if $\ell \geq n+2$ and $\ell \pmod n \in S$, there are $3n + 2|S| + 1 + 1 + |S| + 1 + 1 = 3n + 3|S| + 4$ follower sets of words of length $\ell$, while if $\ell' \geq n+2$ and $\ell' \pmod n \notin S$, there are only $3n + 3|S| + 3$ follower sets of words of length $\ell'$. Moreover, if $\ell \geq 3n+3$ and $\ell \pmod n \in S$, there are $(3n + 2|S| + 1)^2 + 1 + |S| + 1 + 1 = (3n + 2|S| + 1)^2 + |S| + 3$ extender sets of words of length $\ell$, while if $\ell' \geq 3n+3$ and $\ell' \pmod n \notin S$, there are only $(3n + 2|S| + 1)^2 + |S| + 2$ extender sets of words of length $\ell'$. \end{proof}
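The counting principle driving this proof, that a follower set is determined by the set of terminal vertices of paths presenting the word, is easy to experiment with computationally. The sketch below applies it to the standard two-vertex presentation of the even shift (an outside example of our own, not one of the graphs $\mathcal{G}_{n,S}$), recovering its well-known total of three follower sets at every length $\ell \geq 2$.

```python
def words_to_terminal_sets(edges, length):
    """Map each legal word of the given length to the set of terminal
    vertices of paths presenting it; for a follower-separated
    presentation, this set determines the follower set."""
    # edges: list of (source, label, target)
    paths = {}  # word (tuple of labels) -> set of (start, end) pairs
    for (s, a, t) in edges:
        paths.setdefault((a,), set()).add((s, t))
    for _ in range(length - 1):
        longer = {}
        for word, pairs in paths.items():
            for (s, t) in pairs:
                for (u, a, v) in edges:
                    if u == t:
                        longer.setdefault(word + (a,), set()).add((s, v))
        paths = longer
    return {w: {t for (_, t) in prs} for w, prs in paths.items()}

# Standard presentation of the even shift: a self-loop labeled 1 at A,
# and a two-edge loop A -0-> B -0-> A.
even = [("A", "1", "A"), ("A", "0", "B"), ("B", "0", "A")]
for ell in range(2, 7):
    terminal_sets = words_to_terminal_sets(even, ell)
    distinct = {frozenset(T) for T in terminal_sets.values()}
    print(ell, len(distinct))  # always 3: {A}, {B}, and {A,B}
```

Here $a^\ell$ plays the same role as in the proof above: the word $0^\ell$ is the only one whose terminal-vertex set is not a singleton.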
So, for the shift presented by the graph in Figure~\ref{G503}, for $\ell \geq 7$, the follower set sequence oscillates between $|F_X(\ell)| = 3(5) + 3(2) + 4 = 25$ if $\ell \equiv 0$ or $3 \pmod 5$, and $|F_X(\ell)| = 24$ if $\ell \equiv 1$, $2$, or $4 \pmod 5$. Moreover, for $\ell \geq 18$, the extender set sequence oscillates between $|E_X(\ell)| = (3(5) + 2(2) + 1)^2 + (2) + 3 = 405$ if $\ell \equiv 0$ or $3 \pmod 5$, and $|E_X(\ell)| = 404$ if $\ell \equiv 1$, $2$, or $4 \pmod 5$. \newline
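These counts are direct instances of the formulas just established; a two-line check with $n=5$ and $|S|=2$ confirms the arithmetic:

```python
# Values quoted above for the shift of Figure G503: n = 5, |S| = 2.
n, s = 5, 2
follower_peak = 3*n + 3*s + 4
extender_peak = (3*n + 2*s + 1)**2 + s + 3
print(follower_peak, extender_peak)  # 25 405
```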
We have now demonstrated the existence of sofic shifts whose follower set sequences and extender set sequences eventually oscillate between two different (but adjacent) values. We may furthermore combine these graphs, forming new graphs presenting shifts whose follower set sequences and extender set sequences oscillate by more than 1:
\begin{theorem}\label{Lemma2}
Let $\mathcal{G}_1$ and $\mathcal{G}_2$ be two finite, irreducible, right-resolving, left-resolving, primitive, extender-separated labeled graphs with disjoint label sets, containing self-loops labeled by bi-synchronizing letters $q_1$ and $q_2$ respectively. Let $I_1$ be the anchoring vertex of $q_1$ in $\mathcal{G}_1$ and $I_2$ be the anchoring vertex of $q_2$ in $\mathcal{G}_2$. Let $x,y$ be letters not in the label set of $\mathcal{G}_1$ or $\mathcal{G}_2$. Construct a new graph $\mathcal{G}$ by taking the disjoint union of $\mathcal{G}_1$ and $\mathcal{G}_2$ and adding an edge labeled $x$ beginning at $I_1$ and terminating at $I_2$ and an edge labeled $y$ beginning at $I_2$ and terminating at $I_1$. Then $\mathcal{G}$ is finite, irreducible, right-resolving, left-resolving, primitive, extender-separated, contains a self-loop labeled with a bi-synchronizing letter, and for any $\ell \in \mathbb{N}$, $$|F_{X_\mathcal{G}}(\ell)| = |F_{X_{\mathcal{G}_1}}(\ell)| + |F_{X_{\mathcal{G}_2}}(\ell)|.$$ Moreover, for any $\ell$ greater than twice the maximum of the primitivity distances of $\mathcal{G}_1$ and $\mathcal{G}_2$, $$|E_{X_\mathcal{G}}(\ell)| = |E_{X_{\mathcal{G}_1}}(\ell)| + |E_{X_{\mathcal{G}_2}}(\ell)| + 2|V(\mathcal{G}_1)|\cdot |V(\mathcal{G}_2)|.$$ \end{theorem}
\begin{figure}
\caption{The graph $\mathcal{G}$ constructed as in Theorem~\ref{Lemma2}}
\end{figure}
\begin{proof}
The reader may check that $\mathcal{G}$ is finite, irreducible, right-resolving, left-resolving, primitive, extender-separated and contains a self-loop labeled with a bi-synchronizing letter. We first check that this construction does not cause any collapsing of follower or extender sets. That is, if two words $w$ and $v$ had distinct follower or extender sets in $X_{\mathcal{G}_1}\sqcup X_{\mathcal{G}_2}$, then they have distinct follower or extender sets in $X_{\mathcal{G}}$. We present the argument for follower sets; the argument for extender sets is similar. \newline
\indent If for two words $w$ and $v$ in $L_\ell(X_{\mathcal{G}_1})\sqcup L_\ell(X_{\mathcal{G}_2})$, we have $F(w) \neq F(v)$, then there exists a sequence $s$ in $X_{\mathcal{G}_1}$ or $X_{\mathcal{G}_2}$ which may follow one word but not the other--without loss of generality, say that $s$ may follow $w$ but not $v$. Such a sequence may still follow $w$ in $\mathcal{G}$ by following the same path labeled $s$ which followed $w$ in $\mathcal{G}_1$ or $\mathcal{G}_2$ to begin with. The sequence $s$ still may not follow $v$, as no path existed in the parent graph $\mathcal{G}_1$ or $\mathcal{G}_2$ labeled $s$ following $v$, and any new paths introduced by our construction may not be labeled $s$, as $s$ did not contain the letters $x$ or $y$. Hence $|F_{X_{\mathcal{G}_1}}(\ell)| + |F_{X_{\mathcal{G}_2}}(\ell)| \leq |F_{X_{\mathcal{G}}}(\ell)|$, and similarly, $|E_{X_{\mathcal{G}_1}}(\ell)| + |E_{X_{\mathcal{G}_2}}(\ell)| \leq |E_{X_{\mathcal{G}}}(\ell)|$. \newline \indent Now we establish that no extra follower sets are introduced by this construction. If two words $w$ and $v$ had the same follower set in $X_{\mathcal{G}_1}\sqcup X_{\mathcal{G}_2}$, then they certainly exist in the same parent graph, $\mathcal{G}_1$ or $\mathcal{G}_2$; without loss of generality, say $\mathcal{G}_1$. Let $s$ be some sequence following $w$ in $\mathcal{G}$. If the path in $\mathcal{G}$ labeled $s$ is contained within $\mathcal{G}_1$, then $s$ is part of the follower set of $w$ in $X_{\mathcal{G}_1}$ and thus, is part of the follower set of $v$ in $X_{\mathcal{G}_1}$. So $s$ may follow $v$ in $\mathcal{G}$. On the other hand, if $s$ is presented by a path traveling through both graphs, then $s$ contains the letter $x$. Let $z$ denote the maximal finite prefix of $s$ without the letter $x$. A path labeled $z$ follows $w$ in $\mathcal{G}_1$, and since $z$ must terminate at $I_1$ in order to be followed by $x$, a path labeled $zq_1$ follows $w$ in $\mathcal{G}_1$ as well. 
Then a path labeled $zq_1$ must also follow $v$ in $\mathcal{G}_1$, and since $q_1$ is left-synchronizing, there must exist a path labeled $z$ following $v$ in $\mathcal{G}_1$ terminating at $I_1$. Such a path may certainly, then, be followed by $x$, and indeed, the remaining portion of $s$, so $s$ is in the follower set of $v$. Thus in $X_\mathcal{G}$, the follower sets of $w$ and $v$ remain the same. \newline
\indent Moreover, if a word $v$ is not in either $L_\ell(X_{\mathcal{G}_1})$ or $L_\ell(X_{\mathcal{G}_2})$, then $v$ includes either the letter $x$ or $y$. Since $x$ and $y$ are right-synchronizing, a path labeled $v$ may terminate only at a single vertex. Due to the fact that $\mathcal{G}_1$ and $\mathcal{G}_2$ are each irreducible, right-resolving, and contain a right-synchronizing letter, there exists a word $w$ in either $L_\ell(\mathcal{G}_1)$ or $L_\ell(\mathcal{G}_2)$ which terminates at the same unique vertex as paths labeled $v$. Since the right-synchronizing letter terminates at the same vertex as $x$ or $y$, depending on the graph, we may construct $w$ to be of the desired length $\ell$ in the following way: Let $v = v_1v_2...v_\ell$ and let $v_i$ be the last occurrence of $x$ or $y$ in $v$. Then set $w_{i+1}w_{i+2}...w_\ell = v_{i+1}v_{i+2}...v_\ell$. Create $w_1...w_i$ by replacing $v_i$ by $q_1$ or $q_2$ (choosing the one which terminates at the same place as $v_i$) and then following any path backward in that same parent graph ($\mathcal{G}_1$ or $\mathcal{G}_2$) to fill in $i-1$ labels before $w_i$. Then $w$ is a right-synchronizing word contained entirely in either $\mathcal{G}_1$ or $\mathcal{G}_2$ of length $\ell$ terminating at the same single vertex as $v$, and so $w$ and $v$ have the same follower set in $X_\mathcal{G}$. Therefore the construction introduced no extra follower sets, and $|F_{X_\mathcal{G}}(\ell)| = |F_{X_{\mathcal{G}_1}}(\ell)| + |F_{X_{\mathcal{G}_2}}(\ell)|.$ \newline \indent This construction also causes no splitting of extender sets: If $w$ and $v$ have the same extender set in $X_{\mathcal{G}_1}\sqcup X_{\mathcal{G}_2}$, then they certainly exist in the same parent graph, $\mathcal{G}_1$ or $\mathcal{G}_2$; without loss of generality, say $\mathcal{G}_1$. Let $(s,u)$ be in the extender set of $w$. If $s$ and $u$ are both contained within $\mathcal{G}_1$, then $(s,u)$ is certainly in the extender set of $v$ as well. 
Otherwise, let $z$ be the maximal suffix of $s$ with no appearance of the letter $y$ and $z'$ be the maximal prefix of $u$ with no appearance of $x$. (Note that in this case one of $z$ and $z'$ may be infinite, but not both.) Then $zwz'$ is contained in $\mathcal{G}_1$ and since $w$ and $v$ have the same extender set in $\mathcal{G}_1$, a path labeled $zvz'$ exists in $\mathcal{G}_1$ as well. If $z = s$ but $z'$ is finite, then as above, $z'$ may be followed by $q_1$ in $\mathcal{G}_1$, which is left-synchronizing, so there must exist a path labeled $z'$ following $sv$ in $\mathcal{G}_1$ terminating at $I_1$. Such a path may certainly, then, be followed by $x$, and the remaining portion of $u$, and so $(s,u)$ is in the extender set of $v$. Similarly, if $z$ is finite but $z' = u$, then $z$ may be preceded by $q_1$, which is right-synchronizing, so there must exist a path labeled $z$ preceding $vu$ in $\mathcal{G}_1$ beginning at $I_1$. Such a path may then be preceded by $y$, and the preceding portion of $s$, and so $(s,u)$ is in the extender set of $v$. If both $z$ and $z'$ are finite, then the path $q_1zvz'q_1$ exists in $\mathcal{G}_1$, and so a path labeled $zvz'$ exists in $\mathcal{G}_1$ beginning and ending at $I_1$ which then may be extended to an infinite path labeled $svu$, so $(s,u)$ is in the extender set of $v$. Thus $E(w) = E(v)$. \newline
\indent Finally, if a word $v$ is not in either $L_\ell(X_{\mathcal{G}_1})$ or $L_\ell(X_{\mathcal{G}_2})$, then $v$ includes either the letter $x$ or $y$. Since $x$ and $y$ are bi-synchronizing and $\mathcal{G}$ is both left- and right-resolving, paths labeled $v$ have exactly one pair $\{ I \rightarrow T \}$ of initial and terminal vertices. If $I$ and $T$ are in the same parent graph $\mathcal{G}_i$, we observe that if $\ell$ is longer than twice the primitivity distance for $\mathcal{G}_i$, we can construct a path $w$ from $I$ to $I_i$, and a path $u$ from $I_i$ to $T$, such that the path labeled $wq_iu$ has length $\ell$. Because $q_i$ is bi-synchronizing, paths labeled $wq_iu$ have only one pair of initial and terminal vertices; the exact same initial and terminal vertices as $v$, so $E(v) = E(wq_iu)$. On the other hand, if $I$ and $T$ are in different parent graphs, then $E(v)$ is certainly not equal to the extender set of any word in $L_\ell(X_{\mathcal{G}_1})\sqcup L_\ell(X_{\mathcal{G}_2})$, so the construction did introduce new extender sets of words of length $\ell$, but only at most $2|V(\mathcal{G}_1)|\cdot |V(\mathcal{G}_2)|$ many of them. Furthermore, if $\ell$ is longer than the primitivity distance of $\mathcal{G}$, then all $2|V(\mathcal{G}_1)|\cdot |V(\mathcal{G}_2)|$ such extender sets will be realized, and since $\mathcal{G}$ is extender-separated, they will all be distinct. The primitivity distance for $\mathcal{G}$ is at most one greater than the sum of the primitivity distances of $\mathcal{G}_1$ and $\mathcal{G}_2$, so if $\ell$ is greater than twice the maximum of the primitivity distances of $\mathcal{G}_1$ and $\mathcal{G}_2$, we have $|E_{X_\mathcal{G}}(\ell)| = |E_{X_{\mathcal{G}_1}}(\ell)| + |E_{X_{\mathcal{G}_2}}(\ell)| + 2|V(\mathcal{G}_1)|\cdot |V(\mathcal{G}_2)|.$ \end{proof}
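The counting formulas of this theorem can be tested on the smallest graphs satisfying its hypotheses: two single-vertex graphs, each carrying only its bi-synchronizing self-loop ($q_1$, respectively $q_2$), so each presents a one-point shift with exactly one follower set and one extender set at every length. The sketch below (our own illustration, not part of the proof) joins them with the bridge edges $x$ and $y$ and counts follower and extender sets by enumerating labeled paths.

```python
def count_sets(edges, length):
    """Count distinct follower sets (terminal-vertex sets) and extender
    sets (sets of initial/terminal vertex pairs) of words of the given
    length, for an extender-separated presentation."""
    paths = {}  # word -> set of (start, end) pairs
    for (s, a, t) in edges:
        paths.setdefault((a,), set()).add((s, t))
    for _ in range(length - 1):
        nxt = {}
        for word, pairs in paths.items():
            for (s, t) in pairs:
                for (u, a, v) in edges:
                    if u == t:
                        nxt.setdefault(word + (a,), set()).add((s, v))
        paths = nxt
    followers = {frozenset(t for (_, t) in prs) for prs in paths.values()}
    extenders = {frozenset(prs) for prs in paths.values()}
    return len(followers), len(extenders)

# One-vertex graphs with bi-synchronizing self-loops q1 and q2.
g1 = [("I1", "q1", "I1")]
g2 = [("I2", "q2", "I2")]
# Join as in the theorem: bridges x: I1 -> I2 and y: I2 -> I1.
joined = g1 + g2 + [("I1", "x", "I2"), ("I2", "y", "I1")]
print(count_sets(joined, 4))  # (2, 4)
```

The output matches the theorem: $1+1 = 2$ follower sets, and $1 + 1 + 2\cdot 1\cdot 1 = 4$ extender sets, one for each of the pairs $(I_1,I_1)$, $(I_1,I_2)$, $(I_2,I_1)$, $(I_2,I_2)$.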
It is evident that the process outlined in Theorem~\ref{Lemma2} may be repeated an arbitrary number of times, and since the constant introduced ($0$ in the follower set case, $2|V(\mathcal{G}_1)|\cdot |V(\mathcal{G}_2)|$ in the extender set case) does not depend on $\ell$, we may use this process to increase the oscillations in the follower and extender set sequences of the resulting shift. We formalize this idea in Theorems~\ref{mainthm} and ~\ref{ESmain}, which state that there exist sofic shifts with follower set sequences and extender set sequences of every eventual period and with any natural number as the difference in $\limsup$ and $\liminf$ of the sequence.
\begin{proof}[Proof of Theorem~\ref{mainthm}] Let $\mathcal{G}_{n,S}$ denote the graph constructed from $n$ and $S \subset \{0,1,..., n-1\}$ as in Theorem~\ref{Lemma1}. First construct $\mathcal{G}_{n,A_2\cup A_3\cup ...\cup A_k}$. By Theorem~\ref{Lemma1}, this graph will give one more follower set to words of length $\ell \geq n+2$ and $\ell \pmod n \in A_2\cup A_3\cup ...\cup A_k$ than to words of length $\ell \geq n+2$ and $\ell \pmod n \in A_1$. \newline \indent Now, as $\mathcal{G}_{n, A_2\cup A_3 \cup ... \cup A_k}$ is finite, irreducible, right-resolving, left-resolving, primitive, extender-separated, and contains a self-loop labeled with the bi-synchronizing letter $q$, we may use the process defined in Theorem~\ref{Lemma2} to join together $r_2$ many copies of $\mathcal{G}_{n, A_2\cup A_3 \cup ... \cup A_k}$ only by giving each copy a disjoint set of labels. Call the resulting graph $\mathcal{G}_2$. For each $\ell$, the number of follower sets of words of length $\ell$ in $\mathcal{G}_2$ is the sum of the number of follower sets of words of length $\ell$ in each of the $r_2$ copies of $\mathcal{G}_{n, A_2\cup A_3 \cup ... \cup A_k}$. Therefore, the graph $\mathcal{G}_2$ gives $r_2$ more follower sets to words of length $\ell \geq n+2$ with $\ell \pmod n \in A_2\cup A_3\cup ... \cup A_k$ than to words of length $\ell \geq n+2$ with $\ell \pmod n \in A_1$. \newline \indent Using the same process, we may now join onto $\mathcal{G}_2$ another $(r_3-r_2)$ many copies of the graph $\mathcal{G}_{n, A_3\cup A_4 \cup ... \cup A_k}$, and call the resulting graph $\mathcal{G}_3$. Now, words of length $\ell \geq n+2$ where $\ell \pmod n \in A_3 \cup A_4 \cup ... \cup A_k$ will have $(r_3 - r_2) + r_2 = r_3$ more follower sets than words of length $\ell \geq n+2$ where $\ell \pmod n \in A_1$, while words of length $\ell \geq n+2$ where $\ell \pmod n \in A_2$ will have only $r_2$ greater follower sets than words of length $\ell \geq n+2$ where $\ell \pmod n \in A_1$. 
\newline \indent Continue in this way, adjoining next $(r_4 - r_3)$ copies of $\mathcal{G}_{n, A_4 \cup A_5\cup ... \cup A_k}$ to make $\mathcal{G}_4$, and so forth, terminating after constructing $\mathcal{G}_k$. The graph will clearly be irreducible, and in $\mathcal{G}_k$, for each $1 \leq j \leq k$, words of length $\ell \geq n+2$ where $\ell \pmod n \in A_j$ will have $r_j$ more follower sets than words of length $\ell \geq n+2$ where $\ell \pmod n \in A_1$. That is, if $m$ is defined to be the number of follower sets of words of length $\ell \geq n+2$ where $\ell \pmod n \in A_1$, then words of length $\ell \geq n+2$ where $\ell \pmod n \in A_j$ will have $m + r_j$ many follower sets in $X_{\mathcal{G}_k}$. \newline
\indent Using the formula established in Theorem~\ref{Lemma1}, we see that for each $\mathcal{G}_{n,S}$, words of lengths $\ell \geq n+2$ whose residue classes are not in $S$ have $3n + 3|S| + 3$ many follower sets. Since $A_1 \subseteq S^c$ for every graph used in the construction of $\mathcal{G}_k$, words of length $\ell \geq n+2$ where $\ell \pmod n \in A_1$ must have the following number of follower sets: \newline \begin{align*}
m &= r_2(3n + 3|A_2\cup A_3\cup ... \cup A_k| + 3) + (r_3 - r_2)(3n + 3|A_3\cup A_4\cup ... \cup A_k| + 3) + ... \\
&\phantom{====================}+ (r_k - r_{k-1})(3n + 3|A_k| + 3) \\
&= r_2(3n + 3\sum_{j=2}^k|A_j| + 3) + (r_3 - r_2)(3n + 3\sum_{j=3}^k|A_j| + 3) + ...\\
&\phantom{====================}+ (r_k - r_{k-1})(3n+ 3\sum_{j = k}^k|A_j| + 3) \\
&= \sum_{i=2}^k(r_i - r_{i-1})(3n + 3\sum_{j = i}^k|A_j| + 3). \end{align*}
Furthermore, since for all $i \geq 2$, we have $\displaystyle \sum_{j = i}^k|A_j| < n$, we get that \newline \begin{align*} m &< \sum_{i=2}^k(r_i - r_{i-1})(6n + 3) \\ &=(6n + 3) \sum_{i=2}^k(r_i - r_{i-1}) \\ &= (6n+3)r_k. \end{align*} \end{proof}
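The formula for $m$ and the final bound are easy to sanity-check numerically. In the sketch below the values of $n$, the sizes $|A_j|$, and the $r_i$ (with $r_1 = 0$, as the telescoping in the proof requires) are sample choices of our own, constrained as in the theorem:

```python
# Sanity check of the displayed formula for m and the bound m < (6n+3) r_k.
n = 5
sizes = {2: 2, 3: 1, 4: 1}       # |A_2|, |A_3|, |A_4|; their sum 4 < n
r = {1: 0, 2: 1, 3: 3, 4: 4}     # r_1 = 0 < r_2 < r_3 < r_4
k = 4
m = sum((r[i] - r[i-1]) * (3*n + 3*sum(sizes[j] for j in range(i, k+1)) + 3)
        for i in range(2, k+1))
print(m, (6*n + 3) * r[k])  # 99 132 -- and indeed 99 < 132
```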
This theorem shows that we may construct a sofic shift whose follower set sequence follows any desired oscillation scheme--increasing or decreasing by specified amounts at specified lengths $\ell$, and repeating with any desired eventual period. A similar result holds for extender set sequences, though the bounds for $m = \liminf \{|E_X(\ell)|\}$ and for the start of the periodicity of the sequence are different.
\begin{proof}[Proof of Theorem~\ref{ESmain}] We follow the same construction as in the proof of Theorem~\ref{mainthm}, combining $r_k$-many graphs using Theorem~\ref{Lemma2} to construct the graph $\mathcal{G}_k$. Then, in $\mathcal{G}_k$, for each $1 \leq j \leq k$, sufficiently long words of length $\ell$ where $\ell \pmod n \in A_j$ will have $r_j$ more extender sets than sufficiently long words of length $\ell$ where $\ell \pmod n \in A_1$. That is, if $m$ is defined to be the number of extender sets of sufficiently long words of length $\ell$ where $\ell \pmod n \in A_1$, then words of sufficient length $\ell$ where $\ell \pmod n \in A_j$ will have $m + r_j$ many extender sets in $X_{\mathcal{G}_k}$ for all $1 \leq j \leq k$. \newline \indent To discover what length is sufficient for periodicity of the extender set sequence to begin, we observe that for every graph $\mathcal{G}$ used in the construction of $\mathcal{G}_k$, the primitivity distance of $\mathcal{G}$ is less than or equal to $3n + 3$ as in Lemma~\ref{primitivity}. Note that, since $n \geq 1$, we have $3n + 3 \leq 7n -1$. (In fact, in any interesting case, $n \geq 2$, so $3n + 3 \leq 5n -1$, but $n = 1$ certainly may be chosen as a trivial case, where $S = \{0\}$ necessarily.) By Theorem~\ref{Lemma2}, the eventual periodicity of the extender set sequence of the combination of two graphs begins before the $(2z + 1)^{st}$ term, where $z$ is the maximum of the primitivity distances of the two graphs. So, when adding two graphs together in this construction, we get that the eventual periodicity begins before $(3n + 3) + (3n + 3) + 1 \leq (7n -1) + (7n -1) + 1 = 14n -1$. (We observe that the primitivity distance of the resulting graph will also be less than $14n -1$.)
Since $n$ is the same for each graph involved in the construction, it does not matter which type of graph we are adding at each step, whether a copy of $\mathcal{G}_{n, A_2\cup...\cup A_k}, \mathcal{G}_{n, A_3\cup ...\cup A_k}$, up to $\mathcal{G}_{n, A_k}$. \newline
\indent Though we have performed the same construction as in Theorem~\ref{mainthm}, we add our graphs in a more efficient order to minimize the effect of the constant $2|V(\mathcal{G}_1)|\cdot |V(\mathcal{G}_2)|$. First consider the case where $r_k$ is a power of 2. Then we may choose to construct $\mathcal{G}_k$ in such a way that at each step, we add two graphs each made up of the same number of components. (2 graphs each consisting of 2 components to make 4, 2 graphs each consisting of 4 components to make 8, and so on.) Then if $a_i$ is an upper bound for the start of periodicity at the $i^{th}$ step, an upper bound for the start of periodicity at the $(i+1)^{st}$ step is $2(a_i)+1$. Then letting $a_1 = 7n -1$, the value of the sequence $a_i = 2(a_{i-1}) + 1$ at $i = \log_2(r_k) + 1$ will give an upper bound for the start of periodicity for $\mathcal{G}_k$, since we must add together two pieces of equal numbers of components $\log_2(r_k)$ times to construct $\mathcal{G}_k$. \newline \indent We claim that for all $i$, $a_i = 2^{i-1}(7n) -1$. This is trivially true for the base case, $i = 1$. By induction, suppose $a_{i-1} = 2^{i-2}(7n) - 1$. Then \begin{align*} a_i &= 2(a_{i-1}) +1 \\ &= 2(2^{i-2}(7n) -1) + 1 \\ &= 2^{i-1}(7n) - 2 + 1 \\ &= 2^{i-1}(7n) -1. \end{align*} Thus, when $r_k$ is a power of 2, the periodicity of the extender set sequence of $\mathcal{G}_k$ begins before $a_{\log_2(r_k) + 1} = 2^{\log_2(r_k)}(7n) - 1 = 7nr_k -1$. \newline \indent Now, the upper bound for the beginning of the periodicity of the extender set sequence certainly increases as $r_k$ increases--increasing $r_k$ means adding more graphs to construct $\mathcal{G}_k$--and so, since $r_k \leq 2^{\lceil \log_2(r_k)\rceil}$ for all $r_k \in \mathbb{N}$, and $2^{\lceil \log_2(r_k)\rceil}$ is a power of 2, for any $r_k$, the periodicity of the extender set sequence of $\mathcal{G}_k$ must begin at the latest when $\ell = 7(2^{\lceil \log_2(r_k)\rceil})n -1 \leq 7(2^{\log_2(r_k) + 1})n - 1 = 14(r_k)n -1$. \newline
\indent Finally, it remains to show that in this construction, $m \leq 39n^2r_k^2$. We first show that $\mathcal{G}_2$ (that is, $r_2$ combined copies of $\mathcal{G}_{n, A_2 \cup...\cup A_k}$) has at most $6nr_2$ vertices and will have $m \leq 39n^2r_2^2$. The bound on the number of vertices is clear--for any graph $\mathcal{G}_{n, S}$ constructed by the method defined at the beginning of this section, $\mathcal{G}_{n,S}$ has $3n + 2|S| + 1$ vertices, and since $|S| \leq n$ and $n \geq 1$, we have $3n + 2|S| + 1 \leq 6n$. With each graph having at most $6n$ vertices, it is trivial that $\mathcal{G}_2$ has at most $6nr_2$ vertices. As discussed in Theorem ~\ref{Lemma1}, the number of extender sets for $\ell \geq 3n+3$ and $\ell \pmod n \notin S$ (that is, $m$ for $\mathcal{G}_{n,S}$) is $(3n + 2|S| + 1)^2 + |S| + 2 \leq 36n^2 + n + 2 \leq 39n^2$. This proves the base case, when $r_2 = 1$. Suppose for an induction that after joining together $i$ copies of $\mathcal{G}_{n, A_2 \cup...\cup A_k}$ to make a graph $\mathcal{G}$, we get $m \leq 39i^2n^2$, and we then adjoin a single copy of $\mathcal{G}_{n, A_2\cup...\cup A_k}$ to $\mathcal{G}$. Then, by Theorem ~\ref{Lemma2}, words of sufficient length $\ell$ in the new graph where $\ell \pmod n \in A_1$ will have a number of extender sets equal to the number of extender sets for words of such length in $\mathcal{G}$ (bounded above by $39i^2n^2$) plus the number of extender sets for words of such length in $\mathcal{G}_{n, A_2\cup ... \cup A_k}$ (bounded above by $39n^2$) plus twice the product of the number of vertices in $\mathcal{G}$ and $\mathcal{G}_{n, A_2\cup ... \cup A_k}$ (bounded above by $2(6in)(6n) < 2i(39n^2)$). \newline \indent Thus, for the resulting graph containing $i + 1$ copies of $\mathcal{G}_{n, A_2\cup ... \cup A_k}$, we have: $$m < 39i^2n^2 + 39n^2 + 2i(39n^2) = 39n^2(i^2 + 1 + 2i) = 39n^2(i+1)^2,$$ giving the result for $\mathcal{G}_2$ when $i = r_2$. 
\newline \indent We next consider adding $(r_3-r_2)$ many copies of $\mathcal{G}_{n, A_3\cup ...\cup A_k}$ to $\mathcal{G}_2$ to make $\mathcal{G}_3$. By the same argument as above, the graph consisting of $(r_3-r_2)$ many copies of $\mathcal{G}_{n, A_3\cup ...\cup A_k}$ will have at most $6n(r_3-r_2)$ vertices and $m \leq 39n^2(r_3-r_2)^2$. Again using Theorem ~\ref{Lemma2} to combine the two graphs, the resulting graph $\mathcal{G}_3$ will have at most $6nr_2 + 6n(r_3 - r_2) = 6nr_3$ vertices, and will have \begin{align*} m &\leq 39n^2r_2^2 + 39n^2(r_3-r_2)^2 + 2(6nr_2)(6n(r_3-r_2))\\
&< 39n^2r_2^2 + 39n^2(r_3-r_2)^2 + 39n^2(2(r_2)(r_3-r_2)) \\
&= 39n^2(r_2^2 + (r_3-r_2)^2 + 2(r_2)(r_3-r_2)) \\
&= 39n^2(r_2 + (r_3-r_2))^2 \\
&= 39n^2r_3^2. \end{align*} Continuing inductively, we can see that for $\mathcal{G}_k$, we will have $m \leq 39n^2r_k^2$. \end{proof}
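Both inductions in this proof, the closed form $a_i = 2^{i-1}(7n) - 1$ and the bound $m \leq 39n^2 r_k^2$, can be replayed numerically. The loop below checks them for small $n$, using the worst-case recurrences from the proof:

```python
# Check the two inductions: a_i = 2 a_{i-1} + 1 with a_1 = 7n - 1 has
# closed form 2^(i-1) * 7n - 1, and the worst-case extender count m
# stays below 39 n^2 i^2 when joining i copies.
for n in range(1, 6):
    a = 7*n - 1
    for i in range(2, 12):
        a = 2*a + 1
        assert a == 2**(i-1) * 7*n - 1
    m = 39 * n * n                  # base case: one copy has m <= 39 n^2
    for i in range(1, 30):
        # adjoin one more copy: its own m (<= 39 n^2) plus the vertex term
        m = m + 39*n*n + 2 * (6*i*n) * (6*n)
        assert m <= 39 * n * n * (i + 1)**2
print("ok")
```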
While we may achieve any desired oscillation scheme, we cannot achieve any eventually periodic sequence we like--$m$ must be sufficiently large.
\begin{theorem}\label{mlowerbds}
Let $X$ be an irreducible sofic shift with $\liminf\{|F_X(\ell)|\} = m$ and $ \limsup \{|F_X(\ell)|\} = m +r$ with least eventual period $n$. Then we have that $m > \log_2(r)$ and $m > \frac{1}{2}\log_2(\log_2(n))$. If $\liminf \{|E_X(\ell)|\} = m'$ and $\limsup \{|E_X(\ell)|\} = m' + r'$ with least eventual period $n'$, then $m' > \sqrt{\log_2(r')}$ and $m' > \sqrt{\frac{1}{2}\log_2(\log_2(n'))}$. \end{theorem}
\begin{proof}
Let $\mathcal{G}$ be an irreducible right-resolving presentation of $X$ which contains a right-synchronizing word. Let $|V(\mathcal{G})|$ be denoted by $V$. Since a follower set of a word $w$ in a sofic shift is determined by the non-empty set of terminal vertices of paths labeled $w$ in a presentation $\mathcal{G}$, $X$ has fewer than $|\mathcal{P}(V(\mathcal{G}))| = 2^V$ follower sets, and so $m + r < 2^V$. Because $\mathcal{G}$ is irreducible, right-resolving, and contains a right-synchronizing word, for large enough $\ell$, each singleton will correspond to the follower set of some word for any length greater than $\ell$. Because $m$ occurs infinitely often in the follower set sequence, $m$ must be greater than or equal to the number of singletons in $\mathcal{G}$, that is, $m \geq V$. Thus, we have: $$2^m \geq 2^V > m + r > r$$ $$m > \log_2(r).$$ Moreover, as there are fewer than $2^V$ follower sets, the least eventual period $n$ of the follower set sequence is less than or equal to $(2^V)!$, as in Theorem~\ref{eventuallyperiodic}. Thus $(2^V)! \geq n$. So we have: $$(2^m)^{(2^m)} > (2^m)! \geq (2^V)! \geq n$$ $$(2^m)^{(2^m)} > n $$ $$\log_2((2^m)^{(2^m)}) > \log_2(n)$$ $$2^m\log_2(2^m) > \log_2(n)$$ $$2^m(m) > \log_2(n)$$ $$\log_2(2^m) + \log_2(m) > \log_2(\log_2(n))$$ $$m + \log_2(m) > \log_2(\log_2(n)).$$ Since $m > \log_2(m)$, we have: $$2m > \log_2(\log_2(n))$$ $$m > \frac{1}{2}\log_2(\log_2(n)).$$ Now, an extender set of a word $w$ in a sofic shift is determined by the non-empty set of pairs of initial and terminal vertices of paths labeled $w$ in $\mathcal{G}$, so $X$ has fewer than $2^{V^2}$ extender sets, that is, $m' + r' < 2^{V^2}$. Since two words with the same extender set have the same follower set, $m' \geq m \geq V$. Thus we have: $$2^{{m'}^2} \geq 2^{V^2} > m' + r' > r'$$ $${m'}^2 > \log_2(r')$$ $$m' > \sqrt{\log_2(r')}.$$ Finally, as there are fewer than $2^{V^2}$ extender sets, by Theorem~\ref{eventuallyperiodic}, $(2^{V^2})!
\geq n'$, giving: $$(2^{{m'}^2})^{(2^{{m'}^2})} > (2^{{m'}^2})! \geq (2^{V^2})! \geq n'$$ $$(2^{{m'}^2})^{(2^{{m'}^2})} > n'$$ $$(2^{{m'}^2})({m'}^2) > \log_2(n')$$ $${m'}^2 + \log_2({m'}^2) > \log_2(\log_2(n')).$$ Since ${m'}^2 > \log_2({m'}^2)$, we have: $$2{m'}^2 > \log_2(\log_2(n'))$$ $$m' > \sqrt{\frac{1}{2}\log_2(\log_2(n'))}.$$ \end{proof}
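The first of these bounds can also be verified exhaustively for small $V$: whenever $m \geq V$ and $m + r < 2^V$ with $r \geq 1$, we must have $m > \log_2(r)$. A brute-force check of our own:

```python
from math import log2

# Exhaustive check: if a sofic shift has an irreducible right-resolving
# presentation on V vertices, then m >= V and m + r < 2^V, which
# together force m > log2(r).
for V in range(1, 11):
    for m in range(V, 2**V):
        for r in range(1, 2**V - m):
            assert m > log2(r)
print("ok")
```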
\begin{remark} For the examples discussed in this paper, we may take $r = r_k$ or $r' = r_k$. \end{remark}
\section{The Non-Sofic Case} \label{XcrossY}
In this section we demonstrate the existence of a non-sofic shift whose follower set sequence and extender set sequence are not monotone increasing. The construction uses the following fact about Sturmian shifts (for a definition, see~\cite{Fogg}):
\begin{lemma}\label{Sturmian}
If $Y$ is a Sturmian shift, then for any $\ell \in \mathbb{N}$, $|F_Y(\ell)| = |E_Y(\ell)| = \ell + 1$. \end{lemma}
\begin{proof}
From~\cite{Fogg}, it is known that for a fixed length $\ell$, Sturmian shifts have exactly $\ell + 1$ words in $L_\ell(Y)$, so it suffices to show that any two words of length $\ell$ in $Y$ have distinct follower and extender sets. We also know from~\cite{Fogg} that Sturmian shifts are symbolic codings of irrational circle rotations, say by $\alpha \notin \mathbb{Q}$, where the circle is represented by the interval $[0,1]$ with the points $0$ and $1$ identified, and the interval coded with $1$ has length $\alpha$. We may take $\alpha < \frac{1}{2}$ by simply switching the labels $0$ and $1$ whenever $\alpha > \frac{1}{2}$. Furthermore, the cylinder sets of words of length $\ell$ correspond to a partition of the circle into $\ell + 1$ subintervals, so each of two distinct words of length $\ell$ in $Y$ corresponds to a subinterval of the circle, and the two subintervals are disjoint. Let $w$ and $v$ be two distinct words in $L_\ell (Y)$ corresponding to disjoint intervals $I_w$ and $I_v$, let $[0, \alpha)$ be the interval coded with $1$, and let $T_\alpha$ be the rotation by $\alpha$. We claim that there exists an $N \in \mathbb{N}$ such that $T_\alpha^{-N}[0,\alpha)$ intersects one of $I_w$ and $I_v$ but not the other. Since $\alpha < \frac{1}{2}$, and since $\{n\alpha \ | \ n \in \mathbb{N} \}$ is dense in the circle, if one of $I_w$ and $I_v$ has length at least $\frac{1}{2}$, then there exists $N \in \mathbb{N}$ such that $T_\alpha^{-N}[0, \alpha)$ is contained entirely inside that large interval, and thus is disjoint from the other. Otherwise, consider $I_w^c$, which has length greater than $\frac{1}{2}$, and find an $N \in \mathbb{N}$ such that $T_\alpha^{-N}[0,\alpha)$ is contained inside $I_w^c$ and intersects $I_v \subseteq I_w^c$; this is again possible by the density of $\{n\alpha \ | \ n \in \mathbb{N} \}$. 
Hence we have proved our claim: there exists $N \in \mathbb{N}$ such that $T_\alpha^{-N}[0,\alpha)$ intersects one of $I_w$ and $I_v$ but not the other, and therefore the symbol $1$ may occur exactly $N$ positions after one of the words $w$ and $v$, but not after the other. Therefore $w$ and $v$ have distinct follower sets, and thus distinct extender sets, completing the proof. \end{proof}
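The count in Lemma~\ref{Sturmian} can also be observed experimentally on the best-known Sturmian example, the Fibonacci word (an illustration we add here; it plays no role in the proofs): every sufficiently long prefix contains exactly $\ell + 1$ distinct subwords of each small length $\ell$.

```python
# Empirical check of |L_l(Y)| = l + 1 for the Fibonacci word, the symbolic
# coding of the circle rotation by the golden mean (a Sturmian sequence).
def fibonacci_word_prefix(iterations):
    """Iterate the substitution 0 -> 01, 1 -> 0 starting from '0'."""
    w = "0"
    for _ in range(iterations):
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

def factor_count(w, length):
    """Number of distinct subwords of the given length occurring in w."""
    return len({w[i:i + length] for i in range(len(w) - length + 1)})

prefix = fibonacci_word_prefix(20)   # length 17711, ample for lengths <= 50
for l in (1, 2, 5, 10, 25, 50):
    assert factor_count(prefix, l) == l + 1
```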
We use Lemma~\ref{Sturmian} and the sofic shifts constructed in Section~\ref{followers} to build a non-sofic shift whose follower and extender set sequences are not monotone increasing:
\begin{proof}[Proof of Theorem~\ref{Non-sofic}]
Take a sofic shift $X = X_{\mathcal{G}_{n, S}}$ for any $n, S$ as defined in Section~\ref{followers}, and take the direct product of $X$ and a Sturmian shift $Y$. Two words in $X \times Y$ have the same extender set if and only if their projections to the first and second coordinates have the same extender sets in $X$ and $Y$, respectively. That is, if two words $w$ and $v$ have different extender sets in $X$, then any two words whose projections to the first coordinate are $w$ and $v$ will have different extender sets in $X \times Y$, and similarly for words $w'$ and $v'$ with different extender sets in $Y$. Therefore $|E_{X\times Y}(\ell)| = |E_X(\ell)|\cdot |E_Y(\ell)|$, and by the same logic, $|F_{X\times Y}(\ell)| = |F_X(\ell)|\cdot |F_Y(\ell)|$. \newline
\indent Thus, if we let $m = \liminf \{|E_X(\ell)|\}_{\ell \in \mathbb{N}}$, then for any $\ell \geq 3n+3$ with $\ell \pmod n \notin S$, we have $|E_{X\times Y}(\ell)| = m\cdot (\ell + 1)$, and if $\ell \pmod n \in S$, then $|E_{X\times Y}(\ell)| = (m + 1)(\ell + 1)$. As $m$ is fixed and $\ell$ approaches infinity, it is clear that $\{|E_{X\times Y}(\ell)|\}$ is unbounded, and thus the shift $X \times Y$ is non-sofic. Furthermore, $X \times Y$ is irreducible, being the direct product of a mixing shift ($X$ is primitive by Lemma~\ref{primitivity}, and therefore mixing) and an irreducible shift. \newline \indent Choose $\ell$ large enough that $\ell > m -1$ and $\ell \geq 3n+3$, and such that $\ell \pmod n \in S$ and $\ell + 1 \pmod n \notin S$. Then \begin{align*}
|E_{X\times Y}(\ell)| &= (m + 1)(\ell + 1) \\ &= m\ell + m + \ell + 1 \\ &> m\ell + m + (m -1) +1 \\ &= m\ell +2m \\ &= m(\ell + 2) \\
&= |E_{X\times Y}(\ell +1)|. \end{align*} Therefore the extender set sequence of $X\times Y$ is not monotone increasing. A similar argument shows that the follower set sequence of $X\times Y$ is not monotone increasing as well. \end{proof}
\begin{example}
Let $X= X_{\mathcal{G}_{2, \{0\}}}$. Then $m = \liminf \{|E_X(\ell)|\}_{\ell \in \mathbb{N}} = (3n + 2|S| + 1)^2 + |S| + 2 = 84$, so for $\ell = 84$ (since $84 > 3n + 3, 84 > m-1, 84 \pmod 2 \in \{0\},$ and $85 \pmod 2 \notin \{0\}$), $|E_{X\times Y}(\ell)| > |E_{X\times Y}(\ell + 1)|$. In particular, $|E_{X\times Y}(84)| = (85)(85) = 7225$ while $|E_{X\times Y}(85)| = (84)(86) = 7224$. \end{example}
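The arithmetic of this example can be replayed mechanically. The sketch below assumes the piecewise formula derived in the proof of the theorem, namely $|E_{X\times Y}(\ell)| = (m+1)(\ell+1)$ when $\ell \pmod n \in S$ and $m(\ell+1)$ otherwise, valid for $\ell \geq 3n+3$; the helper name `extender_count` is ours, not notation from the paper.

```python
# Extender set sequence of X x Y for X = X_{G_{2,{0}}}, assuming the
# piecewise formula from the proof (n = 2, S = {0}, m = 84).
n, S, m = 2, {0}, 84

def extender_count(l):
    """|E_{X x Y}(l)| for l >= 3n + 3, per the formula in the proof."""
    return (m + 1) * (l + 1) if l % n in S else m * (l + 1)

assert extender_count(84) == 85 * 85 == 7225
assert extender_count(85) == 84 * 86 == 7224

# The sequence decreases exactly at the even lengths l >= 84,
# hence at infinitely many lengths.
decreases = [l for l in range(3 * n + 3, 200)
             if extender_count(l) > extender_count(l + 1)]
assert decreases == list(range(84, 200, 2))
```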
\begin{remark} The reader may observe that once $\ell$ is sufficiently large for the follower or extender set sequence of $X \times Y$ to decrease, these decreases will happen for exactly the same lengths $\ell$ as the decreases in the follower or extender set sequence of $X = X_{\mathcal{G}_{n,S}}$. Thus there are infinitely many lengths for which the follower or extender set sequence of $X \times Y$ decreases. \end{remark}
\end{document}
\begin{document}
\title{How Many Components Should Be Retained from a Multivariate Time Series PCA?}
\input Abstract.tex
\input Introduction.tex
\input Data.tex
\input Method.tex
\input Results.tex
\input Discussion.tex
\appendix
\input HeatMaps.tex
\end{document}
\begin{document}
\title[Radii of starlikeness of some special functions]{Radii of starlikeness of some special functions}
\author[\'A. Baricz]{\'Arp\'ad Baricz} \address{Department of Economics, Babe\c{s}-Bolyai University, Cluj-Napoca 400591, Romania} \email{[email protected]}
\author[D. K. Dimitrov]{Dimitar K. Dimitrov} \address{Departamento de Matem\'atica Aplicada, IBILCE, Universidade Estadual Paulista UNESP, S\~{a}o Jos\'e do Rio Preto 15054, Brazil} \email{[email protected]}
\author[H. Orhan]{Halit Orhan} \address{Department of Mathematics, Ataturk University, Erzurum 25240, Turkey} \email{[email protected]}
\author[N. Yagmur]{Nihat Yagmur} \address{Department of Mathematics, Erzincan University, Erzincan 24000, Turkey} \email{[email protected]}
\thanks{The research of \'A. Baricz is supported by the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, under Grant PN-II-RU-TE-2012-3-0190. The research of D. K. Dimitrov is supported by the Brazilian foundations CNPq under Grant 307183/2013--0 and FAPESP under Grant 2009/13832--9.}
\begin{abstract} Geometric properties of the classical Lommel and Struve functions, both of the first kind, are studied. For each of them, three different normalizations are applied in such a way that the resulting functions are analytic in the unit disc of the complex plane. For each of the six functions we determine the radius of starlikeness precisely. \end{abstract}
\maketitle
\section{Introduction and statement of the main results}
Let $\mathbb{D}_{r}$ be the open disk $\left\{ {z\in \mathbb{C}:\left\vert z\right\vert <r}\right\} ,$ where $r>0,$ and set $\mathbb{D}=\mathbb{D}_{1}$. By $\mathcal{A}$ we mean the class of analytic functions $f:\mathbb{D}_r\to\mathbb{C}$ which satisfy the usual normalization conditions $f(0)=f'(0)-1=0.$ Denote by $\mathcal{S}$ the class of functions belonging to $\mathcal{A}$ which are univalent in $\mathbb{D} _r$ and let $\mathcal{S}^{\ast }(\alpha )$ be the subclass of $\mathcal{S}$ consisting of functions which are starlike of order $\alpha $ in $\mathbb{D} _r,$ where $0\leq \alpha <1.$ The analytic characterization of this class of functions is \begin{equation*} \mathcal{S}^{\ast }(\alpha )=\left\{ f\in \mathcal{S}\ :\ \Re \left(\frac{zf'(z)}{f(z)}\right)>\alpha\ \ \mathrm{for\ all}\ \ z\in \mathbb{ D}_r \right\}, \end{equation*} and we adopt the convention $\mathcal{S}^{\ast}=\mathcal{S}^{\ast }(0)$. The real number \begin{equation*} r_{\alpha }^{\ast}(f)=\sup \left\{ r>0\ :\ \Re \left(\frac{zf'(z)}{f(z)}\right)>\alpha\ \ \mathrm{ for\ all}\ \ z\in \mathbb{D}_{r}\right\}, \end{equation*} is called the radius of starlikeness of order $\alpha $ of the function $f.$ Note that $r^{\ast }(f)=r_0^{\ast}(f)$ is the largest radius such that the image region $f(\mathbb{D}_{r^{\ast }(f)})$ is a starlike domain with respect to the origin.
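As a toy illustration of these definitions (our example; it is not used later in the paper), consider the normalized entire function $f(z)=ze^{z}\in\mathcal{A}$, for which the radius of starlikeness of order $\alpha$ can be computed in closed form:

```latex
\frac{zf'(z)}{f(z)} = \frac{z(1+z)e^{z}}{ze^{z}} = 1+z,
\qquad
\Re\left(\frac{zf'(z)}{f(z)}\right) = 1+\Re(z) > \alpha
\ \text{ for all }\ z\in\mathbb{D}_{r}
\iff r\leq 1-\alpha,
```

so that $r_{\alpha}^{\ast}(ze^{z})=1-\alpha$ and, in particular, $r^{\ast}(ze^{z})=1$.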
We consider two classical special functions, the Lommel function of the first kind $s_{\mu ,\nu }$ and the Struve function of the first kind $\mathbf{H}_{\nu}$. They are explicitly defined in terms of the hypergeometric function $\,_{1}F_{2}$ by \begin{equation} s_{\mu ,\nu }(z)=\frac{z^{\mu +1}}{(\mu -\nu +1)(\mu +\nu +1)}\, _{1}F_{2}\left( 1;\frac{\mu -\nu +3}{2},\frac{\mu +\nu +3}{2};-\frac{z^{2}}{4}\right),\ \ \frac{1}{2}(-\mu \pm \nu-3) \not\in \mathbb{N}, \label{LomHypG} \end{equation} and \begin{equation} \mathbf{H}_{\nu}(z)=\frac{\left(\frac{z}{2}\right)^{\nu+1}}{\sqrt{\frac{\pi}{4}}\, \Gamma\left(\nu+\frac{3}{2}\right)} \,_{1}F_{2} \left( 1;\frac{3}{2},\nu + \frac{3}{2};-\frac{z^{2}}{4}\right),\ \ -\nu-\frac{3}{2} \not\in \mathbb{N}. \label{SrtHypG} \end{equation} A common feature of these functions is that they are solutions of inhomogeneous Bessel differential equations \cite{Wat}. Indeed, the Lommel function of the first kind $s_{\mu ,\nu }$ is a solution of \begin{equation*} z^{2}w''(z)+zw'(z)+(z^{2}-{\nu }^{2})w(z)=z^{\mu +1} \end{equation*} while the Struve function $\mathbf{H}_{\nu}$ obeys \begin{equation*} z^{2}w''(z)+zw'(z)+(z^{2}-{\nu }^{2})w(z)=\frac{4\left( \frac{z}{2}\right) ^{\nu +1}}{\sqrt{\pi }\Gamma \left( \nu +\frac{1}{2} \right) }. \end{equation*} We refer to Watson's treatise \cite{Wat} for comprehensive information about these functions and recall some more recent contributions. In 1972 Steinig \cite{stein} examined the sign of $s_{\mu ,\nu }(z)$ for real $\mu ,\nu $ and positive $z$. He showed, among other things, that for $\mu <\frac{1}{2}$ the function $s_{\mu ,\nu }$ has infinitely many changes of sign on $(0,\infty )$. In 2012 Koumandos and Lamprecht \cite{Kou} obtained sharp estimates for the location of the zeros of $s_{\mu -\frac{1}{2},\frac{1}{2}}$ when $\mu \in (0,1)$. 
The Tur\'{a}n type inequalities for $s_{\mu -\frac{1}{2},\frac{1}{2}}$ were established in \cite{Bar2} while those for the Struve function were proved in \cite{BPS}.
Geometric properties of $s_{\mu -\frac{1}{2},\frac{1}{2}}$ and of the Struve function were obtained in \cite{Bar3} and in \cite{H-Ny,N-H}, respectively. Motivated by those results we study the problem of starlikeness of certain analytic functions related to the classical special functions under discussion. Since neither $s_{\mu ,\nu }$, nor $\mathbf{H}_{\nu}$ belongs to $\mathcal{A}$, first we perform some natural normalizations. We define three functions originating from $s_{\mu ,\nu }$: \begin{equation*} f_{\mu ,\nu }(z)=\left( (\mu -\nu +1)(\mu +\nu +1)s_{\mu ,\nu }(z)\right)^{\frac{1}{\mu +1}}, \end{equation*} \begin{equation*} g_{\mu ,\nu }(z)=(\mu -\nu +1)(\mu +\nu +1)z^{-\mu }s_{\mu ,\nu }(z) \end{equation*} and \begin{equation*} h_{\mu ,\nu }(z)=(\mu -\nu +1)(\mu +\nu +1)z^{\frac{1-\mu }{2}}s_{\mu ,\nu }( \sqrt{z}). \end{equation*} Similarly, we associate with $\mathbf{H}_{\nu}$ the functions $$ u_{\nu }(z)=\left(\sqrt{\pi }2^{\nu }\Gamma \left( \nu +\frac{3}{2} \right) \mathbf{H}_{\nu }(z)\right)^{\frac{1}{\nu +1}},$$ $$ v_{\nu }(z)=\sqrt{\pi }2^{\nu }z^{-\nu }\Gamma \left( \nu + \frac{3}{2} \right) \mathbf{H}_{\nu }(z) $$ and $$ w_{\nu }(z)=\sqrt{\pi }2^{\nu }z^{\frac{1-\nu }{2}}\Gamma \left( \nu +\frac{3}{2}\right) \mathbf{H}_{\nu }(\sqrt{z}). $$
Clearly the functions $f_{\mu ,\nu }$, $g_{\mu ,\nu }$, $h_{\mu ,\nu }$, $u_{\nu }$, $v_{\nu }$ and $w_{\nu }$ belong to the class $\mathcal{A}$. The main results in the present note concern the exact values of the radii of starlikeness of these six functions, for some ranges of the parameters.
Let us set $$ f_{\mu }(z)=f_{\mu -\frac{1}{2},\frac{1}{2}}(z),\ \ g_{\mu }(z)=g_{\mu-\frac{1}{2},\frac{1}{2}}(z)\ \ \ \mbox{and}\ \ \ h_{\mu }(z)=h_{\mu-\frac{1}{2},\frac{1}{2}}(z).$$ The first principal result we establish reads as follows:
\begin{theorem} \label{theo1} Let $\mu\in(-1,1),$ $\mu\neq0.$ The following statements hold:
\begin{enumerate} \item[\textbf{a)}] If $0\leq\alpha<1$ and $\mu\in \left(-\frac{1}{2},0\right),$ then $r_{\alpha }^{\ast }(f_{\mu })=x_{\mu ,\alpha }$, where $x_{\mu ,\alpha }$ is the smallest positive root of the equation \begin{equation*} z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-\alpha \left(\mu + \frac{1}{2} \right)s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=0. \end{equation*}
Moreover, if $0\leq\alpha<1$ and $\mu\in \left(-1,-\frac{1}{2}\right),$ then $r_{\alpha }^{\ast }(f_{\mu })=q_{\mu ,\alpha }$, where $q_{\mu ,\alpha }$ is the unique positive root of the equation $$izs_{\mu -\frac{1}{2},\frac{1}{2} }'(iz)-\alpha \left(\mu +\frac{1}{2}\right)s_{\mu -\frac{1}{2},\frac{1}{2} }(iz)=0.$$
\item[\textbf{b)}] If $0\leq\alpha<1,$ then $r_{\alpha }^{\ast }(g_{\mu })=y_{\mu ,\alpha }$, where $y_{\mu ,\alpha }$ is the smallest positive root of the equation \begin{equation*} z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-\left(\mu +\alpha- \frac{1}{2} \right) s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=0. \end{equation*}
\item[\textbf{c)}] If $0\leq\alpha<1,$ then $r_{\alpha }^{\ast }(h_{\mu })=t_{\mu ,\alpha }$, where $ t_{\mu ,\alpha }$ is the smallest positive root of the equation \begin{equation*} zs_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-\left(\mu +2\alpha -\frac{3}{2 }\right)s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=0. \end{equation*}
\end{enumerate} \end{theorem}
The corresponding result on the radii of starlikeness of the functions related to the Struve function is:
\begin{theorem}
\label{theo2} Let $|\nu|<\frac{1}{2}.$ The following assertions are true:
\begin{enumerate} \item[\textbf{a)}] If $0\leq \alpha <1,$ then $r_{\alpha }^{\ast }(u_{\nu })=\delta _{\nu ,\alpha }$, where $\delta _{\nu ,\alpha }$ is the smallest positive root of the equation \begin{equation*} z\mathbf{H}_{\nu }'(z)-\alpha (\nu +1)\mathbf{H}_{\nu }(z)=0. \end{equation*}
\item[\textbf{b)}] If $0\leq \alpha <1,$ then $r_{\alpha }^{\ast }(v_{\nu })=\rho _{\nu ,\alpha }$, where $\rho _{\nu ,\alpha }$ is the smallest positive root of the equation \begin{equation*} z\mathbf{H}_{\nu }'(z)-(\alpha +\nu )\mathbf{H}_{\nu }(z)=0. \end{equation*}
\item[\textbf{c)}] If $0\leq\alpha<1,$ then $r_{\alpha }^{\ast }(w_{\nu })=\sigma _{\nu ,\alpha }$, where $\sigma _{\nu ,\alpha }$ is the smallest positive root of the equation \begin{equation*} z\mathbf{H}_{\nu }'(z)-(2\alpha +\nu -1)\mathbf{H}_{\nu }(z)=0. \end{equation*}
\end{enumerate} \end{theorem}
It is worth mentioning that the starlikeness of $h_{\mu }$ for $\mu \in (-1,1)$, $\mu \neq 0,$ as well as that of $w_{\nu }$ under the restriction $\left\vert \nu \right\vert \leq \frac{1}{2}$, was established in \cite{Bar3}, where it was also proved that all the derivatives of these functions are close-to-convex in $\mathbb{D}.$
\section{Preliminaries} \setcounter{equation}{0}
\subsection{Hadamard's factorization} The following preliminary result is the content of Lemmas 1 and 2 in \cite{Bar2}.
\begin{lemma} \label{lem1} Let \begin{equation*} \varphi _{k}(z)=\, _{1}F_{2}\left( 1;\frac{\mu -k+2}{2},\frac{\mu -k+3}{2};- \frac{z^{2}}{4}\right) \end{equation*} where $z\in \mathbb{C}$, $\mu \in \mathbb{R}$ and $k\in \left\{ 0,1,\dots\right\} $ are such that ${\mu -k}$ is not in $\left\{ 0,-1,\dots\right\} $. Then $\varphi _{k}$ is an entire function of order $ \rho =1$ and of exponential type $\tau =1.$ Consequently, Hadamard's factorization of $\varphi _{k}$ is of the form \begin{equation} \varphi _{k}(z)=\prod\limits_{n\geq 1}\left( 1-\frac{z^{2}}{z_{\mu ,k,n}^{2}} \right) , \label{1.6} \end{equation} where $\pm z_{\mu ,k,1},$ $\pm z_{\mu ,k,2},\dots$ are all zeros of the function $\varphi _{k}$ and the infinite product is absolutely convergent. Moreover, for $z,$ ${\mu }$ and $k$ as above, we have \begin{equation*} (\mu -k+1)\varphi _{k+1}(z)=(\mu -k+1)\varphi _{k}(z)+z\varphi _{k}'(z), \label{1.7} \end{equation*} \begin{equation*} \sqrt{z}s_{\mu -k-\frac{1}{2},\frac{1}{2}}(z)=\frac{z^{\mu -k+1}}{(\mu -k)(\mu -k+1)}\varphi _{k}(z). \label{1.8} \end{equation*} \end{lemma}
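The first displayed identity of Lemma~\ref{lem1} admits a quick numerical check via truncated series (a sketch with arbitrarily chosen test values of $\mu$, $k$, $z$; it uses the equivalent expansion $\varphi_{k}(z)=\sum_{j\geq0}(-1)^{j}z^{2j}/(\mu-k+2)_{2j}$, obtained from the ${}_1F_2$ definition via the duplication formula for the Pochhammer symbol):

```python
import math

def poch(a, k):
    """Pochhammer symbol (a)_k = Gamma(a + k) / Gamma(a)."""
    return math.gamma(a + k) / math.gamma(a)

def phi(mu, k, z, terms=60):
    """phi_k(z) = sum_j (-1)^j z^(2j) / (mu - k + 2)_(2j), truncated."""
    return sum((-1) ** j * z ** (2 * j) / poch(mu - k + 2, 2 * j)
               for j in range(terms))

def phi_prime(mu, k, z, terms=60):
    """Term-by-term derivative of the series above."""
    return sum((-1) ** j * 2 * j * z ** (2 * j - 1) / poch(mu - k + 2, 2 * j)
               for j in range(1, terms))

mu, k, z = 0.4, 0, 0.7   # test values with mu - k not in {0, -1, ...}
lhs = (mu - k + 1) * phi(mu, k + 1, z)
rhs = (mu - k + 1) * phi(mu, k, z) + z * phi_prime(mu, k, z)
assert abs(lhs - rhs) < 1e-12
```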
\subsection{Quotients of power series} We will also need the following result (see \cite{biernacki,pv}):
\begin{lemma}\label{lempower} Consider the power series $f(x)=\displaystyle\sum_{n\geq 0}a_{n}x^n$ and $g(x)=\displaystyle\sum_{n\geq 0}b_{n}x^n$, where $a_{n}\in \mathbb{R}$ and $b_{n}>0$ for all $n\geq 0$. Suppose that both series converge on $(-r,r)$, for some $r>0$. If the sequence $\lbrace a_n/b_n\rbrace_{n\geq 0}$ is increasing (decreasing), then the function $x\mapsto{f(x)}/{g(x)}$ is increasing (decreasing) too on $(0,r)$. The result remains true for the power series $$f(x)=\displaystyle\sum_{n\geq 0}a_{n}x^{2n}\ \ \ \mbox{and}\ \ \ g(x)=\displaystyle\sum_{n\geq 0}b_{n}x^{2n}.$$ \end{lemma}
\subsection{Zeros of polynomials and entire functions and the Laguerre-P\'olya class} In this subsection we provide the necessary information about polynomials and entire functions with real zeros. An algebraic polynomial is called hyperbolic if all its zeros are real.
The simple statement that two real polynomials $p$ and $q$ possess real and interlacing zeros if and only if every linear combination of $p$ and $q$ is a hyperbolic polynomial is sometimes called Obrechkoff's theorem. We formulate the following specific statement that we shall need. \begin{lemma} \label{OLem} Let $p(x)=1-a_1 x +a_2 x^2 -a_3 x^3 + \cdots +(-1)^n a_n x^n = (1-x/x_1)\cdots (1-x/x_n)$ be a hyperbolic polynomial with positive zeros $0< x_1\leq x_2 \leq \cdots \leq x_n$, normalized by $p(0)=1$. Then, for any constant $C$, the polynomial $q(x) = C p(x) - x\, p'(x)$ is hyperbolic. Moreover, its smallest zero $\eta_1$ belongs to the interval $(0,x_1)$ if and only if $C<0$. \end{lemma}
The proof is straightforward; it suffices to apply Rolle's theorem and then count the sign changes of the linear combination at the zeros of $p$. We refer to \cite{BDR, DMR} for further results on monotonicity and asymptotics of zeros of linear combinations of hyperbolic polynomials.
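A minimal numerical illustration of Lemma~\ref{OLem} (with an arbitrarily chosen quadratic, purely for orientation): take $p(x)=(1-x)(1-x/2)$, so $x_1=1$, and $C=-1<0$; then $q(x)=Cp(x)-xp'(x)$ is again hyperbolic and its smallest zero lies in $(0,x_1)$.

```python
import math

# p(x) = (1 - x)(1 - x/2) = 1 - 1.5 x + 0.5 x^2, zeros x1 = 1, x2 = 2.
# With C = -1, q(x) = C p(x) - x p'(x) = -1 + 3x - 1.5 x^2.
a, b, c = -1.5, 3.0, -1.0            # coefficients of q(x) = a x^2 + b x + c
disc = b * b - 4 * a * c             # positive discriminant => q hyperbolic
assert disc > 0
roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1.0, -1.0))
assert 0 < roots[0] < 1              # smallest zero of q precedes x1 = 1
```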
A real entire function $\psi$ belongs to the Laguerre-P\'{o}lya class $\mathcal{LP}$ if it can be represented in the form $$ \psi(x) = c x^{m} e^{-a x^{2} + \beta x} \prod_{k\geq1} \left(1+\frac{x}{x_{k}}\right) e^{-\frac{x}{x_{k}}}, $$ with $c,$ $\beta,$ $x_{k} \in \mathbb{R},$ $a \geq 0,$ $m\in \mathbb{N} \cup\{0\},$ $\sum x_{k}^{-2} < \infty.$ Similarly, $\phi$ is said to be of type I in the Laguerre-P\'{o}lya class, written $\phi \in \mathcal{LP}I$, if $\phi(x)$ or $\phi(-x)$ can be represented as $$ \phi(x) = c x^{m} e^{\sigma x} \prod_{k\geq1}\left(1+\frac{x}{x_{k}}\right), $$ with $c \in \mathbb{R},$ $\sigma \geq 0,$ $m \in \mathbb{N}\cup\{0\},$ $x_{k}>0,$ $\sum 1/x_{k} < \infty.$ The class $\mathcal{LP}$ is the closure of the set of hyperbolic polynomials in the topology induced by uniform convergence on compact subsets of the complex plane, while $\mathcal{LP}I$ is the closure of the set of hyperbolic polynomials whose zeros possess a preassigned constant sign. Given an entire function $\varphi$ with the Maclaurin expansion $$\varphi(x) = \sum_{k\geq0}\gamma_{k} \frac{x^{k}}{k!},$$ its Jensen polynomials are defined by $$ g_n(\varphi;x) = g_{n}(x) = \sum_{j=0}^{n} {n\choose j} \gamma_{j} x^j. $$ Jensen \cite{Jen12} proved the following: \begin{THEO}\label{JTh} The function $\varphi$ belongs to $\mathcal{LP}$ ($\mathcal{LP}I$, respectively) if and only if all the polynomials $g_n(\varphi;x)$, $n=1,2,\ldots$, are hyperbolic (hyperbolic with zeros of equal sign). Moreover, the sequence $g_n(\varphi;z/n)$ converges locally uniformly to $\varphi(z)$. \end{THEO} Further information about the Laguerre-P\'olya class can be found in \cite{Obr, RS} while \cite{DC} contains references and additional facts about the Jensen polynomials in general and also about those related to the Bessel function.
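A simple instance of Theorem~\ref{JTh} (our illustration, not from the paper): for $\varphi=\exp$ all Maclaurin coefficients are $\gamma_j=1$, the Jensen polynomials are $g_n(x)=\sum_j\binom{n}{j}x^j=(1+x)^n$, hyperbolic with all zeros at $-1$, and $g_n(\exp;z/n)=(1+z/n)^n$ indeed converges to $e^z$.

```python
import math

# Jensen polynomials of phi = exp: g_n(x) = sum_j C(n, j) x^j = (1 + x)^n.
def jensen_exp(n, x):
    return sum(math.comb(n, j) * x ** j for j in range(n + 1))

assert abs(jensen_exp(12, 0.37) - 1.37 ** 12) < 1e-9

# Convergence g_n(z/n) = (1 + z/n)^n -> e^z, as asserted in Jensen's theorem.
z = 1.0
for n in (10, 100, 10_000):
    assert abs((1 + z / n) ** n - math.exp(z)) < math.e / n  # error is O(1/n)
```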
Special emphasis has been given to the question of characterizing the kernels whose Fourier transforms belong to $\mathcal{LP}$ (see \cite{DR}). The following is a typical result of this nature, due to P\'olya \cite{pol}.
\begin{THEO} \label{PTh} \label{pol} Suppose that the function $K$ is positive, strictly increasing and continuous on $[0, 1)$ and integrable there. Then the entire functions \begin{equation*} U(z)=\int_{0}^{1}K(t) \sin (zt)dt\ \ \ \mbox{and} \ \ \ V(z)=\int_{0}^{1}K(t)\cos(zt)dt \end{equation*} have only real and simple zeros and their zeros interlace. \end{THEO}
In other words, the latter result states that both the sine and the cosine transforms of a kernel are in the Laguerre-P\'olya class provided the kernel is compactly supported, positive, and increasing on its support.
\begin{theorem}\label{ThZ} Let $\mu\in(-1,1),$ $\mu\neq0,$ and let $c$ be a constant such that $c<{\mu}+\frac{1}{2}$. Then the function $z\mapsto z s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-c s_{\mu -\frac{1}{2},\frac{1}{2}}(z)$ can be represented in the form \begin{equation} \mu (\mu+1) \left( z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)- c\, s_{\mu -\frac{1}{2},\frac{1}{2}}(z) \right) = z^{\mu+\frac{1}{2}} \psi_\mu(z), \label{psi} \end{equation} where $\psi_\mu$ is an even entire function belonging to $\mathcal{LP}$. Moreover, the smallest positive zero of $\psi_\mu$ does not exceed the first positive zero of $s_{\mu-\frac{1}{2},\frac{1}{2}}$.
Similarly, if $|\nu |<\frac{1}{2}$ and $d$ is a constant satisfying $d<\nu+1$, then \begin{equation} {\frac{\sqrt{\pi}}{2}}\, \Gamma\left(\nu+\frac{3}{2}\right)\ \left(\, z \mathbf{H}_{\nu }'(z)- d \mathbf{H}_{\nu }(z) \, \right) = \left(\frac{z}{2}\right)^{\nu+1}\, \phi_\nu(z), \label{phinu} \end{equation} where $\phi_\nu$ is an entire function in the Laguerre-P\'olya class and the smallest positive zero of $\phi_\nu$ does not exceed the first positive zero of $\mathbf{H}_{\nu}$. \end{theorem}
\begin{proof} First suppose that $\mu\in(0,1).$ Since, by (\ref{LomHypG}), $$ \mu (\mu+1) s_{\mu -\frac{1}{2},\frac{1}{2}}(z) = \sum_{k\geq0} \frac{(-1)^kz^{2k+\mu+\frac{1}{2}}}{2^{2k}\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k}, $$ we have $$ \mu (\mu+1) z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z) = \sum_{k\geq0} \frac{(-1)^k\left(2k+\mu+\frac{1}{2}\right)z^{2k+\mu+\frac{1}{2}}}{2^{2k}\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k}. $$ Therefore, (\ref{psi}) holds with $$ \psi_\mu(z) = \sum_{k\geq0} \frac{2k+{\mu}+\frac{1}{2} -c}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k}. $$ On the other hand, by Lemma \ref{lem1}, $$ \mu (\mu+1) s_{\mu -\frac{1}{2},\frac{1}{2}}(z) = z^{\mu+\frac{1}{2}} \varphi_0(z), $$ and, by \cite[Lemma 3]{Bar2}, we have \begin{equation} z\varphi _{0}(z)=\mu (\mu +1)\int_{0}^{1}(1-t)^{\mu -1}\sin (zt)dt, \ \ \mathrm{for}\ \mu >0. \label{integ} \end{equation} Therefore $\varphi_0$ has the Maclaurin expansion $$ \varphi_0(z) = \sum_{k\geq0}\frac{1}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k}. $$ Moreover, (\ref{integ}) and Theorem \ref{PTh} imply that $\varphi_0 \in \mathcal{LP}$ for $\mu\in(0,1)$, so that the function $\tilde{\varphi}_0(\zeta):= \varphi_0(2\sqrt{\zeta})$, $$ \tilde{\varphi}_0(\zeta) = \sum_{k\geq0} \frac{1}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\zeta\right)^{k}, $$ belongs to $\mathcal{LP}I$. Then it follows from Theorem \ref{JTh} that its Jensen polynomials $$ g_n(\tilde{\varphi}_0;\zeta) = \sum_{k=0}^n {n\choose k} \frac{k!}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\zeta \right)^{k} $$ are all hyperbolic. 
Observe now that the Jensen polynomials of $\tilde{\psi}_\mu(\zeta):= \psi_\mu(2\sqrt{\zeta})$ satisfy $$ -\frac{1}{2}g_n(\tilde{\psi}_\mu;\zeta) = -\frac{1}{2}\left({\mu}+\frac{1}{2}-c\right)\, g_n(\tilde{\varphi}_0;\zeta) - \zeta\, g_n'(\tilde{\varphi}_0;\zeta). $$ Lemma \ref{OLem} implies that all zeros of $g_n(\tilde{\psi}_\mu;\zeta)$ are real and positive and that the smallest one precedes the first zero of $g_n(\tilde{\varphi}_0;\zeta)$. In view of Theorem \ref{JTh}, the latter conclusion immediately yields that $\tilde{\psi}_\mu \in \mathcal{LP}I$ and that its first zero precedes the one of $\tilde{\varphi}_0$. Finally, the first statement of the theorem for $\mu\in(0,1)$ follows after we go back from $\tilde{\psi}_\mu$ and $\tilde{\varphi}_0$ to $\psi_\mu$ and $\varphi_0$ by setting $\zeta=\frac{z^2}{4}$.
Now we prove \eqref{psi} for the case $\mu\in(-1,0)$. Observe that for $\mu\in(0,1)$ the function $$\varphi_1(z)=\sum_{k\geq0}\frac{1}{\left(\frac{\mu+1}{2}\right)_k \left(\frac{\mu+2}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k}=\mu \int_{0}^{1}(1-t)^{\mu -1}\cos (zt)dt$$ (see \cite[Lemma 3]{Bar2}) also belongs to the Laguerre-P\'olya class $\mathcal{LP},$ and hence the Jensen polynomials of $\tilde{\varphi}_1(\zeta):= \varphi_1(2\sqrt{\zeta})$ are hyperbolic. Straightforward calculations show that the Jensen polynomials of $\tilde{\psi}_{\mu-1}(\zeta):= \psi_{\mu-1}(2\sqrt{\zeta})$ satisfy $$-\frac{1}{2}g_n(\tilde{\psi}_{\mu-1};\zeta) = -\frac{1}{2}\left({\mu}-\frac{1}{2}-c\right)\, g_n(\tilde{\varphi}_1;\zeta) - \zeta\, g_n'(\tilde{\varphi}_1;\zeta).$$ Lemma \ref{OLem} implies that for $\mu\in(0,1)$ all zeros of $g_n(\tilde{\psi}_{\mu-1};\zeta)$ are real and positive and that the smallest one precedes the first zero of $g_n(\tilde{\varphi}_1;\zeta)$. This fact, together with Theorem \ref{JTh}, yields that $\tilde{\psi}_{\mu-1} \in \mathcal{LP}I$ and that its first zero precedes the one of $\tilde{\varphi}_1$. Consequently, the first statement of the theorem for $\mu\in(-1,0)$ follows after we go back from $\tilde{\psi}_{\mu-1}$ and $\tilde{\varphi}_1$ to $\psi_{\mu-1}$ and $\varphi_1$ by setting $\zeta=\frac{z^2}{4}$ and substituting $\mu$ by $\mu+1$.
In order to prove the corresponding statement for (\ref{phinu}), we recall first that the hypergeometric representation (\ref{SrtHypG}) of the Struve function is equivalent to $$ \frac{\sqrt{\pi}}{2}\, \Gamma\left(\nu+\frac{3}{2}\right)\, \mathbf{H}_{\nu}(z) = \sum_{k\geq0} \frac{(-1)^k}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left( \frac{z}{2} \right)^{2k+\nu+1}, $$ which immediately yields $$
\phi_\nu(z) = \sum_{k\geq0} \frac{2k+\nu+1-d}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k}\left( -\frac{z^2}{4} \right)^{k}. $$ On the other hand, the integral representation \begin{equation*} \mathbf{H}_{\nu }(z)=\frac{2\left(\frac{z}{2}\right) ^{\nu }}{\sqrt{\pi } \Gamma \left( \nu +\frac{1}{2}\right) }\int_{0}^{1}(1-t^{2})^{\nu -\frac{1}{2 }}\sin (zt)dt, \end{equation*} which holds for $\nu >-\frac{1}{2},$ and Theorem \ref{PTh} imply that the even entire function $$ \mathcal{H}_\nu(z) = \sum_{k\geq0} \frac{1}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k} $$
belongs to the Laguerre-P\'olya class when $|\nu|<\frac{1}{2}$. Then the function $\tilde{\mathcal{H}}_\nu(\zeta):= \mathcal{H}_{\nu}(2\sqrt{\zeta})$, $$ \tilde{\mathcal{H}}_\nu(\zeta) = \sum_{k\geq0} \frac{1}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left(-\zeta \right)^{k}, $$ is in $\mathcal{LP}I$. Therefore, its Jensen polynomials $$ g_n(\tilde{\mathcal{H}}_\nu;\zeta) = \sum_{k=0}^n {n\choose k} \frac{k!}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left( -\zeta \right)^{k} $$ are hyperbolic, with positive zeros. Then, by Lemma \ref{OLem}, the polynomial $ -\frac{1}{2}\left({\nu}+{1}-d\right)\, g_n(\tilde{\mathcal{H}}_\nu;\zeta) - \zeta\, g_n'(\tilde{\mathcal{H}}_\nu;\zeta) $ possesses only real positive zeros. The latter polynomial coincides with the $n$th Jensen polynomial of $\tilde{\phi}_\nu(\zeta) = \phi_\nu(2\sqrt{\zeta})$, that is, $$ -\frac{1}{2}g_n(\tilde{\phi}_\nu;\zeta) = -\frac{1}{2}\left({\nu}+{1}-d\right)\, g_n(\tilde{\mathcal{H}}_\nu;\zeta) - \zeta\, g_n'(\tilde{\mathcal{H}}_\nu;\zeta). $$ Moreover, the smallest zero of $g_n(\tilde{\phi}_\nu;\zeta)$ precedes the first positive zero of $g_n(\tilde{\mathcal{H}}_\nu;\zeta)$. This implies that $\phi_\nu \in \mathcal{LP}$ and that its first positive zero is smaller than the one of $\mathcal{H}_\nu$. \end{proof}
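The factorization \eqref{phinu} itself can be sanity-checked numerically from the two series involved (a sketch; the values of $\nu$, $d$ and $z$ are arbitrary admissible test choices):

```python
import math

def poch(a, k):
    """Pochhammer symbol (a)_k = Gamma(a + k) / Gamma(a)."""
    return math.gamma(a + k) / math.gamma(a)

def struve(nu, z, terms=40):
    """Struve function H_nu(z) via its Maclaurin series."""
    return sum((-1) ** k * (z / 2) ** (2 * k + nu + 1)
               / (math.gamma(k + 1.5) * math.gamma(k + nu + 1.5))
               for k in range(terms))

def struve_deriv(nu, z, terms=40):
    """Term-by-term derivative of the series for H_nu."""
    return sum((-1) ** k * (2 * k + nu + 1) * (z / 2) ** (2 * k + nu) / 2
               / (math.gamma(k + 1.5) * math.gamma(k + nu + 1.5))
               for k in range(terms))

def phi_nu(nu, d, z, terms=40):
    """phi_nu(z) = sum_k (2k+nu+1-d) / ((3/2)_k (nu+3/2)_k) * (-z^2/4)^k."""
    return sum((2 * k + nu + 1 - d) / (poch(1.5, k) * poch(nu + 1.5, k))
               * (-z * z / 4) ** k for k in range(terms))

nu, d, z = 0.3, 0.5, 1.2
lhs = (math.sqrt(math.pi) / 2 * math.gamma(nu + 1.5)
       * (z * struve_deriv(nu, z) - d * struve(nu, z)))
rhs = (z / 2) ** (nu + 1) * phi_nu(nu, d, z)
assert abs(lhs - rhs) < 1e-12
```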
\section{Proofs of the main results} \setcounter{equation}{0}
\begin{proof}[Proof of Theorem \ref{theo1}] We need to show that for the corresponding values of $\mu$ and $\alpha$ the inequalities \begin{equation} \Re \left( \frac{zf_{\mu }'(z)}{f_{\mu }(z)}\right) >\alpha , \text{ \ \ }\Re \left( \frac{zg_{\mu }'(z)}{g_{\mu }(z)}\right) >\alpha \text{ \ and \ }\Re \left( \frac{zh_{\mu }'(z)}{h_{\mu }(z)}\right) >\alpha \text{ \ \ } \label{2.0} \end{equation} are valid for $z\in \mathbb{D}_{r_{\alpha }^{\ast }(f_{\mu })}$, $z\in \mathbb{D}_{r_{\alpha }^{\ast }(g_{\mu })}$ and $z\in \mathbb{D}_{r_{\alpha }^{\ast }(h_{\mu })}$ respectively, and each of the above inequalities does not hold in larger disks. It follows from (\ref{1.6}) that \begin{equation*} f_{\mu }(z)=f_{\mu -\frac{1}{2},\frac{1}{2}}(z)=\left(\mu (\mu +1)s_{\mu - \frac{1}{2},\frac{1}{2}}(z)\right)^{\frac{1}{\mu +\frac{1}{2}}}=z\left(\varphi _{0}(z)\right)^{\frac{1}{\mu +\frac{1}{2}}}, \end{equation*} \begin{equation*} g_{\mu }(z)=g_{\mu -\frac{1}{2},\frac{1}{2}}(z)=\mu (\mu +1)z^{-\mu +\frac{1 }{2}}s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=z\varphi _{0}(z), \end{equation*} \begin{equation*} h_{\mu }(z)=h_{\mu -\frac{1}{2},\frac{1}{2}}(z)=\mu (\mu +1)z^{\frac{3-2\mu }{4}}s_{\mu -\frac{1}{2},\frac{1}{2}}(\sqrt{z})=z\varphi _{0}(\sqrt{z}), \end{equation*} which in turn imply that \begin{equation*} \frac{zf_{\mu }'(z)}{f_{\mu }(z)}=1+\frac{z\varphi _{0}'(z) }{(\mu +\frac{1}{2})\varphi _{0}(z)}=1-\frac{1}{\mu +\frac{1}{2}} \sum\limits_{n\geq 1}\frac{2z^{2}}{z_{\mu ,0,n}^{2}-z^{2}}, \end{equation*} \begin{equation*} \frac{zg_{\mu }'(z)}{g_{\mu }(z)}=1+\frac{z\varphi _{0}'(z) }{\varphi _{0}(z)}=1-\sum\limits_{n\geq 1}\frac{2z^{2}}{z_{\mu ,0,n}^{2}-z^{2}}, \end{equation*} \begin{equation*} \frac{zh_{\mu }'(z)}{h_{\mu }(z)}=1+\frac{1}{2}\frac{\sqrt{z} \varphi _{0}'(\sqrt{z})}{\varphi _{0}(\sqrt{z})}=1-\sum\limits_{n \geq 1}\frac{z}{z_{\mu ,0,n}^{2}-z}, \end{equation*} respectively. 
We note that for $\mu \in (0,1)$ the function $\varphi _{0}$ has only real and simple zeros (see \cite{Bar2}). For $\mu \in (0,1)$ and $n\in \left\{ 1,2,\dots \right\}$, let $\xi _{\mu ,n}=z_{\mu ,0,n}$ be the $n$th positive zero of $\varphi _{0}.$ We know (see \cite[Lemma 2.1]{Kou}) that $\xi _{\mu ,n}\in (n\pi ,(n+1)\pi )$ for all $\mu \in (0,1)$ and $n\in \left\{ 1,2,\dots \right\}$, which implies that $\xi _{\mu ,n}>\xi _{\mu ,1}>\pi >1$ for all $\mu \in (0,1)$ and $n \geq 2$. On the other hand, it is known \cite{sz} that if $z\in \mathbb{C}$ and $\beta \in \mathbb{R}$ are such that $\beta >{\left\vert z\right\vert }$, then \begin{equation} \frac{{\left\vert z\right\vert }}{\beta -{\left\vert z\right\vert }}\geq \Re\left( \frac{z}{\beta -z}\right) . \label{2.5} \end{equation} Hence the inequality $$ \frac{{\left\vert z\right\vert }^{2}}{\xi _{\mu ,n}^{2}-{\left\vert z\right\vert }^{2}}\geq\Re\left( \frac{z^{2}}{\xi _{\mu ,n}^{2}-z^{2}} \right) $$ holds for every $\mu \in (0,1)$, $n\in \mathbb{N}$ and ${\left\vert z\right\vert <}\xi _{\mu ,1}$. 
Therefore, \begin{equation*} \Re\left(\frac{zf_{\mu }'(z)}{f_{\mu }(z)}\right)=1-\frac{1 }{\mu +\frac{1}{2}}\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{\xi _{\mu ,n}^{2}-z^{2}}\right) \geq 1-\frac{1}{\mu +\frac{1}{2}} \sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{\xi _{\mu ,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert f_{\mu }'(\left\vert z\right\vert )}{f_{\mu }(\left\vert z\right\vert )}, \end{equation*} \begin{equation*} \Re\left( \frac{zg_{\mu }'(z)}{g_{\mu }(z)}\right) =1-\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{\xi _{\mu ,n}^{2}-z^{2}}\right) \geq 1-\sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{\xi _{\mu ,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert g_{\mu }'(\left\vert z\right\vert )}{g_{\mu }(\left\vert z\right\vert )} \end{equation*} and \begin{equation*} \Re\left(\frac{zh_{\mu}'(z)}{h_{\mu }(z)}\right)=1-\Re\left(\sum\limits_{n\geq 1}\frac{z}{\xi _{\mu ,n}^{2}-z}\right) \geq 1-\sum\limits_{n\geq 1}\frac{\left\vert z\right\vert }{\xi _{\mu ,n}^{2}-\left\vert z\right\vert }=\frac{\left\vert z\right\vert h_{\mu }'(\left\vert z\right\vert )}{h_{\mu }(\left\vert z\right\vert )}, \end{equation*} where equalities are attained only when $z=\left\vert z\right\vert =r$. The latter inequalities and the minimum principle for harmonic functions imply that the corresponding inequalities in (\ref{2.0}) hold if and only if $ \left\vert z\right\vert <x_{\mu ,\alpha },$ $\left\vert z\right\vert <y_{\mu ,\alpha }$ and $\left\vert z\right\vert <t_{\mu ,\alpha },$ respectively, where $x_{\mu ,\alpha }$, $y_{\mu ,\alpha }$ and $t_{\mu ,\alpha }$ are the smallest positive roots of the equations \begin{equation*} rf_{\mu }'(r)/f_{\mu }(r)=\alpha ,\text{ \ }rg_{\mu }'(r)/g_{\mu }(r)=\alpha ,\ rh_{\mu }'(r)/h_{\mu }(r)=\alpha. 
\end{equation*} Since their solutions coincide with the zeros of the functions $$r\mapsto rs_{\mu -\frac{1}{2},\frac{1}{2}}'(r)-\alpha \left( \mu +\frac{1}{2} \right) s_{\mu -\frac{1}{2},\frac{1}{2}}(r),\ r\mapsto rs_{\mu -\frac{1}{2},\frac{1}{2}}'(r)-\left( \mu +\alpha -\frac{1}{2}\right) s_{\mu -\frac{1}{2},\frac{1}{2}}(r), $$ $$ r\mapsto rs_{\mu -\frac{1}{2},\frac{1}{2}}'(r)-\left( \mu +2\alpha -\frac{3}{2}\right) s_{\mu -\frac{1}{2},\frac{1}{2}}(r), $$ the result we need follows from Theorem \ref{ThZ}. In other words, Theorem \ref{ThZ} shows that all the zeros of the above three functions are real and that their first positive zeros do not exceed the first positive zero $\xi_{\mu,1}$ of $\varphi_0$. This guarantees that the above inequalities hold. This completes the proof of our theorem when $\mu\in(0,1)$.
Now we prove that the inequalities in \eqref{2.0} also hold for $\mu\in\left(-1,0\right),$ except the first one, which is valid for $\mu\in\left(-\frac{1}{2},0\right).$ In order to do this, suppose that $\mu\in(0,1)$ and adapt the above proof, substituting $\mu$ by $\mu-1$, $\varphi_0$ by the function $\varphi_1$ and taking into account that the $n$th positive zero of $\varphi _{1},$ denoted by $\zeta_{\mu ,n}=z_{\mu ,1,n},$ satisfies (see \cite{Bar3}) $\zeta _{\mu ,n}>\zeta _{\mu ,1}>\frac{\pi }{2} >1 $ for all $\mu \in (0,1)$ and $n\geq 2$. It is worth mentioning that \begin{equation*} \Re\left(\frac{zf_{\mu-1}'(z)}{f_{\mu-1}(z)}\right)=1-\frac{1 }{\mu -\frac{1}{2}}\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{\zeta _{\mu ,n}^{2}-z^{2}}\right) \geq 1-\frac{1}{\mu -\frac{1}{2}} \sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{\zeta_{\mu ,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert f_{\mu-1 }'(\left\vert z\right\vert )}{f_{\mu-1}(\left\vert z\right\vert )}, \end{equation*} remains true for $\mu\in\left(\frac{1}{2},1\right)$. In this case we use the minimum principle for harmonic functions to ensure that \eqref{2.0} is valid for $\mu-1$ instead of $\mu.$ Thus, using again Theorem \ref{ThZ} and replacing $\mu$ by $\mu+1$, we obtain the statement of the first part for $\mu\in\left(-\frac{1}{2},0\right)$. For $\mu\in(-1,0)$ the proof of the second and third inequalities in \eqref{2.0} goes along similar lines.
To prove the statement for part {\bf a} when $\mu \in \left(-1,-\frac{1}{2}\right)$ we observe that the counterpart of (\ref{2.5}) is \begin{equation} \Re\left( \frac{z}{\beta -z}\right) \geq \frac{-{\left\vert z\right\vert }}{\beta +{\left\vert z\right\vert }}, \label{2.10} \end{equation} which holds for all ${z\in \mathbb{C}}$ and $\beta \in \mathbb{R}$ such that $\beta >{\left\vert z\right\vert }$ (see \cite{sz}). From (\ref{2.10}), we obtain the inequality $$ \Re\left( \frac{z^{2}}{\zeta _{\mu ,n}^{2}-z^{2}}\right) \geq \frac{-{ \left\vert z\right\vert }^{2}}{\zeta _{\mu ,n}^{2}+{\left\vert z\right\vert } ^{2}}, $$ which holds for all $\mu \in \left(0,\frac{1}{2}\right),$ $n\in \mathbb{N}$ and ${\left\vert z\right\vert <}\zeta _{\mu ,1}$, and implies that \begin{equation*} \Re\left(\frac{zf_{\mu-1}'(z)}{f_{\mu-1}(z)}\right) =1- \frac{1}{\mu -\frac{1}{2}}\Re\left(\sum\limits_{n\geq1}\frac{ 2z^{2}}{\zeta _{\mu ,n}^{2}-z^{2}}\right) \geq 1+\frac{1}{\mu -\frac{1}{2}} \sum\limits_{n\geq1}\frac{2\left\vert z\right\vert ^{2}}{\zeta _{\mu ,n}^{2}+\left\vert z\right\vert ^{2}}=\frac{i\left\vert z\right\vert f_{\mu -1}'(i\left\vert z\right\vert )}{f_{\mu -1}(i\left\vert z\right\vert )}. \end{equation*} In this case equality is attained if $z=i\left\vert z\right\vert =ir.$ Moreover, the latter inequality implies that \begin{equation*} \Re\left( \frac{zf_{\mu -1}'(z)}{f_{\mu -1}(z)}\right) >\alpha \end{equation*} if and only if $\left\vert z\right\vert <q_{\mu ,\alpha }$, where $q_{\mu ,\alpha }$ denotes the smallest positive root of the equation $irf_{\mu -1}'(ir)/f_{\mu-1}(ir)=\alpha,$ which is equivalent to \begin{equation*} i r s_{\mu -\frac{3}{2},\frac{1}{2}}'(ir)-\alpha \left(\mu -\frac{1}{2}\right)s_{\mu -\frac{3}{2},\frac{1}{2}}(ir)=0,\text{ for }\mu \in \left(0,\frac{1}{2}\right). 
\end{equation*} Substituting $\mu$ by $\mu +1,$ we obtain \begin{equation*} i r s_{\mu -\frac{1}{2},\frac{1}{2}}'(i r)-\alpha \left(\mu +\frac{1}{2}\right)s_{\mu -\frac{1}{2},\frac{1}{2}}(ir)=0,\text{ for }\mu \in \left(-1,-\frac{1}{2}\right). \end{equation*} It follows from Theorem \ref{ThZ} that the first positive zero of $z\mapsto izs_{\mu -\frac{1}{2},\frac{1}{2}}'(iz)-\alpha \left(\mu +\frac{1}{2}\right)s_{\mu -\frac{1}{2},\frac{1}{2}}(iz)$ does not exceed $\zeta_{\mu,1}$ which guarantees that the above inequalities are valid. All we need to prove is that the above function has actually only one zero in $(0,\infty)$. Observe that, according to Lemma \ref{lempower}, the function $$ r\mapsto \frac{irs_{\mu -\frac{1}{2},\frac{1}{2}}'(ir)}{s_{\mu -\frac{1}{2},\frac{1}{2}}(ir)} $$ is increasing on $(0,\infty)$ as a quotient of two power series whose positive coefficients form the increasing ``quotient sequence'' $\left\{2k+\mu+\frac{1}{2}\right\}_{k\geq0}.$ On the other hand, the above function tends to $\mu+\frac{1}{2}$ when $r\to0,$ so that its graph can intersect the horizontal line $y=\alpha\left(\mu+\frac{1}{2}\right)>\mu+\frac{1}{2}$ only once. This completes the proof of part {\bf a} of the theorem when $\mu\in(-1,0)$. \end{proof}
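The final step above rests on Lemma \ref{lempower}: a quotient of two power series with positive coefficients whose coefficient quotients form an increasing sequence is itself increasing, and its limit at the origin is the first quotient. The snippet below illustrates this on a toy pair of series with quotient sequence $\{k+1\}_{k\ge 0}$ (an illustrative example, unrelated to the functions $s_{\mu-\frac12,\frac12}$ of the proof):

```python
import math

# f(r) = sum_k a_k r^k with a_k = 1/k!, g(r) = sum_k b_k r^k with b_k = 1/(k!(k+1));
# the quotient sequence a_k/b_k = k+1 is increasing, so f/g should increase on (0, inf)
# and tend to a_0/b_0 = 1 at the origin (cf. the limit mu + 1/2 in the proof).
def f(r):
    return sum(r**k / math.factorial(k) for k in range(60))

def g(r):
    return sum(r**k / (math.factorial(k) * (k + 1)) for k in range(60))

grid = [0.1 * i for i in range(1, 60)]
ratios = [f(r) / g(r) for r in grid]
increasing = all(x < y for x, y in zip(ratios, ratios[1:]))
print("f/g increasing on the grid:", increasing, "; value near 0:", round(ratios[0], 3))
```

Here $f(r)=e^r$ and $g(r)=(e^r-1)/r$, so the quotient is $re^r/(e^r-1)$, which is indeed increasing on $(0,\infty)$.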
\begin{proof}[Proof of Theorem 2] As in the proof of Theorem \ref{theo1} we need to show that, for the corresponding values of $\nu $ and $\alpha $, the inequalities \begin{equation} \Re\left( \frac{zu_{\nu }'(z)}{u_{\nu }(z)}\right) >\alpha ,\ \ \Re\left( \frac{zv_{\nu }'(z)}{v_{\nu }(z)}\right) >\alpha \ \text{and\ }\Re\left( \frac{zw_{\nu }'(z)}{w_{\nu }(z)} \right) >\alpha \ \ \label{strv1} \end{equation} are valid for $z\in \mathbb{D}_{r_{\alpha }^{\ast }(u_{\nu })}$, $z\in \mathbb{D}_{r_{\alpha }^{\ast }(v_{\nu })}$ and $z\in \mathbb{D}_{r_{\alpha }^{\ast }(w_{\nu })}$ respectively, and each of the above inequalities does not hold in any larger disk.
If $\left\vert \nu \right\vert \leq \frac{1}{2},$ then (see \cite[Lemma 1]{BPS}) the Hadamard factorization of the transcendental entire function $\mathcal{H}_{\nu }$, defined by \begin{equation*} \mathcal{H}_{\nu }(z)=\sqrt{\pi }2^{\nu }z^{-\nu -1}\Gamma \left( \nu +\frac{ 3}{2}\right) \mathbf{H}_{\nu }(z), \end{equation*} reads as follows \begin{equation*} \mathcal{H}_{\nu }(z)=\prod\limits_{n\geq 1}\left( 1-\frac{z^{2}}{h_{\nu ,n}^{2}}\right) , \end{equation*} which implies that \begin{equation*} \mathbf{H}_{\nu }(z)=\frac{z^{\nu +1}}{\sqrt{\pi }2^{\nu }\Gamma \left( \nu + \frac{3}{2}\right) }\prod\limits_{n\geq 1}\left( 1-\frac{z^{2}}{h_{\nu ,n}^{2}}\right), \end{equation*} where $h_{\nu ,n}$ stands for the $n$th positive zero of the Struve function $\mathbf{H}_{\nu }.$
We know that (see \cite[Theorem 2]{Bar3}) $h_{\nu ,n}>h_{\nu ,1}>1$ for all $ \left\vert \nu \right\vert \leq \frac{1}{2}$ and $n\in \mathbb{N}$. If $\left\vert \nu \right\vert \leq \frac{1}{2}$ and $\left\vert z\right\vert <h_{\nu ,1}$, then (\ref{2.5}) implies \begin{equation*} \Re\left(\frac{zu_{\nu }'(z)}{u_{\nu }(z)}\right) =1-\frac{1 }{\nu +1}\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{h_{\nu ,n}^{2}-z^{2}}\right) \geq 1-\frac{1}{\nu +1}\sum\limits_{n\geq 1}\frac{ 2\left\vert z\right\vert ^{2}}{h_{\nu ,n}^{2}-\left\vert z\right\vert ^{2}}= \frac{\left\vert z\right\vert u_{\nu }'(\left\vert z\right\vert )}{ u_{\nu }(\left\vert z\right\vert )}, \end{equation*} \begin{equation*} \Re\left(\frac{zv_{\nu }'(z)}{v_{\nu }(z)}\right)=1-\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{h_{\nu ,n}^{2}-z^{2}}\right) \geq 1-\sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{h_{\nu ,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert v_{\nu }'(\left\vert z\right\vert )}{v_{\nu }(\left\vert z\right\vert )} \end{equation*} and \begin{equation*} \Re\left(\frac{zw_{\nu }'(z)}{w_{\nu }(z)}\right)=1-\Re\left(\sum\limits_{n\geq 1}\frac{z}{h_{\nu ,n}^{2}-z}\right)\geq 1-\sum\limits_{n\geq 1}\frac{\left\vert z\right\vert }{h_{\nu ,n}^{2}-\left\vert z\right\vert }=\frac{\left\vert z\right\vert w_{\nu}'(\left\vert z\right\vert )}{w_{\nu }(\left\vert z\right\vert )}, \end{equation*} where equalities are attained when $z=\left\vert z\right\vert =r.$ Then the minimum principle for harmonic functions implies that the corresponding inequalities in (\ref{strv1}) hold if and only if $\left\vert z\right\vert <\delta _{\nu ,\alpha },$ $\left\vert z\right\vert <\rho _{\nu ,\alpha }$ and $\left\vert z\right\vert <\sigma _{\nu ,\alpha },$ respectively, where $\delta _{\nu ,\alpha }$, $\rho _{\nu ,\alpha }$ and $ \sigma _{\nu ,\alpha }$ are the smallest positive roots of the equations \begin{equation*} ru_{\nu }'(r)/u_{\nu }(r)=\alpha ,\text{ \ }rv_{\nu }'(r)/v_{\nu }(r)=\alpha ,\ 
rw_{\nu }'(r)/w_{\nu }(r)=\alpha. \end{equation*} The solutions of these equations are the zeros of the functions $$r\mapsto r\mathbf{H}_{\nu }'(r)-\alpha (\nu +1)\mathbf{H}_{\nu }(r),\ r\mapsto r\mathbf{H}_{\nu }'(r)-(\alpha +\nu )\mathbf{H}_{\nu }(r),\ r\mapsto r\mathbf{H}_{\nu }'(r)-(2\alpha +\nu -1)\mathbf{H}_{\nu }(r),$$ which, in view of Theorem \ref{ThZ}, have only real zeros, and the smallest positive zero of each of them does not exceed the first positive zero of $\mathbf{H}_{\nu }$. \end{proof}
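The bound $h_{\nu,1}>1$ quoted above from \cite[Theorem 2]{Bar3} can be checked numerically for $\nu=0$ directly from the power series $\mathbf H_0(x)=\sum_{k\ge0}(-1)^k(x/2)^{2k+1}/\Gamma(k+\frac32)^2$. The sketch below assumes that truncating the series at $80$ terms is adequate on the sampled interval; it is a sanity check, not part of the proof.

```python
import math

def struve_h0(x):
    # truncated power series of the Struve function H_0
    return sum((-1)**k * (x / 2)**(2 * k + 1) / math.gamma(k + 1.5)**2
               for k in range(80))

# H_0 is positive near the origin; bracket a sign change and bisect.
a, b = 1.0, 6.0
assert struve_h0(a) > 0 and struve_h0(b) < 0
for _ in range(60):
    m = 0.5 * (a + b)
    if struve_h0(m) > 0:
        a = m
    else:
        b = m
h01 = 0.5 * (a + b)  # approximation of the first positive zero of H_0
print("first positive zero of H_0 is approximately", round(h01, 4))
```

The computed zero lies well above $1$, consistent with the cited bound.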
\end{document}
\begin{document}
{\hskip 115mm December 7, 2010 \vskip 5mm}
\title{Topological entropy and irregular recurrence}
\author{Lenka Obadalov\'a}
\address{ Mathematical Institute, Silesian University, CZ-746 01 Opava, Czech Republic}
\email{[email protected]}
\thanks{The research was supported, in part, by grant SGS/15/2010 from the Silesian University in Opava.}
\begin{abstract} This paper is devoted to problems stated by Z. Zhou and F. Li in 2009. They concern relations between almost periodic, weakly almost periodic, and quasi-weakly almost periodic points of a continuous map $f$ and its topological entropy. The negative answer follows from our recent paper. But for continuous maps of the interval and other more general one-dimensional spaces we give more results; in some cases, the answer is positive.
{\small {2000 {\it Mathematics Subject Classification.}} Primary 37B20, 37B40, 37D45, 37E05.} \end{abstract}
\maketitle
\section{Introduction}
Let $(X,d)$ be a compact metric space, $I=[0,1]$ the unit interval, and $\mathcal C(X)$ the set of continuous maps $f :X\rightarrow X$. By $\omega (f,x)$ we denote the {\it $\omega$-limit set} of $x$, which is the set of limit points of the {\it trajectory} $\{f ^{i}(x)\} _{i\ge 0}$ of $x$, where $f^{i}$ denotes the $i$th iterate of $f$. We consider sets $W(f)$ of {\it weakly almost periodic points} of $f$, and $QW(f)$ of {\it quasi-weakly almost periodic points} of $f$. They are defined as follows, see \cite{zhou}: $$ W(f)= \left\{x \in X; \forall \varepsilon ~\exists N>0 \ \text{such that} \ \sum_{i=0}^{nN-1} \chi_{B(x,\varepsilon)}(f^i(x)) \geq n, \forall n >0\right\}, $$ $$ QW(f)= \left\{x \in X; \forall \varepsilon ~\exists N>0, \exists \{n_j\} \ \text{such that} \ \sum_{i=0}^{n_jN-1} \chi_{B(x,\varepsilon)}(f^i(x)) \geq n_j, \forall j >0\right\}, $$ where ${B(x,\varepsilon)}$ is the $\varepsilon$-neighbourhood of $x$, $\chi_{A}$ the characteristic function of a set $A$, and $\{n_j\}$ an increasing sequence of positive integers. For $x \in X$ and $t>0$, let \begin{eqnarray} \label{novew} \Psi_x (f, t) &=& \liminf_{n \to \infty} \tfrac1n \# \{0\le j<n; d(x, f^j(x))<t\}, \\ \label{noveqw} \Psi_x^* (f, t) &=& \limsup_{n \to \infty} \tfrac1n \# \{0\le j<n; d(x, f^j(x))<t\}. \end{eqnarray} Thus, $\Psi _x(f,t)$ and $\Psi ^*_x(f,t)$ are the {\it lower} and {\it upper Banach density} of the set $\{ n\in\mathbb N; f^n(x)\in B(x,t)\}$, respectively. In this paper we make use of more convenient definitions of $W(f)$ and $QW(f)$ based on the following lemma.
\begin{lm} \label{l1} Let $f \in \mathcal C(X)$. Then
{\rm (i)} \ $x \in W(f)$ if and only if $\Psi_x (f, t)>0$, for every $t>0$,
{\rm (ii)} \ $x \in QW(f)$ if and only if $\Psi_x^* (f, t)>0$, for every $t>0$. \end{lm}
\begin{proof} It is easy to see that, for every $\varepsilon>0$ and $N>0$, \begin{equation} \label{99} \sum_{i=0}^{nN-1} \chi_{B(x,\varepsilon)}(f^i(x)) \geq n \ \ \ \text{if and only if} \ \ \ \#\{0 \leq j <nN; f^j(x) \in B(x, \varepsilon)\} \geq n. \end{equation} (i) If $x\in W(f)$ then, for every $\varepsilon >0$, there is an $N>0$ such that the condition on the left side in (\ref{99}) is satisfied for every $n$. Hence, by the condition on the right, $\Psi _x(f,\varepsilon )\ge 1/N>0$. If $x\notin W(f)$ then there is an $\varepsilon >0$ such that, for every $N>0$, there is an $n>0$ for which the condition on the left side of (\ref{99}) is not satisfied. Hence, by the condition on the right, $\Psi _x(f,\varepsilon )\le 1/N$ for every $N>0$, so $\Psi _x(f,\varepsilon )=0$. The proof of (ii) is similar. \end{proof}
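As a concrete illustration of the densities \eqref{novew} and \eqref{noveqw} (an informal example, not used in the sequel), consider an irrational rotation of the circle: every point is uniformly recurrent, so the finite-time approximation of $\Psi_x(f,t)$ stays bounded away from zero. The rotation angle below is an illustrative choice.

```python
import math

alpha = math.sqrt(2) - 1  # irrational rotation angle

def circle_dist(a, b):
    # distance on the circle R/Z
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

x, t, n = 0.0, 0.05, 20000
hits, y = 0, x
for _ in range(n):
    if circle_dist(x, y) < t:
        hits += 1
    y = (y + alpha) % 1.0
psi_approx = hits / n  # finite-time approximation of Psi_x(f, t)
print("approximate Psi_x(f, 0.05):", round(psi_approx, 4))
```

By equidistribution the value is close to $2t=0.1$, in line with $x\in W(f)$ here.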
Obviously, $W(f)\subseteq QW(f)$. The properties of $W(f)$ and $QW(f)$ were studied in the nineties by Z. Zhou et al., see \cite{zhou} for references. The points in $IR(f):=QW(f)\setminus W(f)$ are {\it irregularly recurrent points}, i.e., points $x$ such that $\Psi_x^*(f, t) >0 $ for any $t>0$, and $\Psi_x(f, t_0)=0$ for {\it some} $t_0>0$, see \cite{lenka}. Denote by $h(f)$ the {\it topological entropy} of $f$ and by $R(f)$, $UR(f)$ and $AP(f)$ the sets of {\it recurrent}, {\it uniformly recurrent} and {\it almost periodic} points of $f$, respectively. Thus, $x\in R(f)$ if, for every neighborhood $U$ of $x$, $f^j(x)\in U$ for infinitely many $j\in\mathbb N$, $x\in UR(f)$ if, for every neighborhood $U$ of $x$, there is a $K>0$ such that every interval $[n, n+K]$ contains a $j\in\mathbb N$ with $f^j(x)\in U$, and $x\in AP(f)$ if, for every neighborhood $U$ of $x$, there is a $k>0$ such that $f^{kj}(x)\in U$ for every $j\in\mathbb N$. Recall that $x\in R(f)$ if and only if $x\in\omega (f,x)$, and $x\in UR(f)$ if and only if $\omega (f,x)$ is a {\it minimal set}, i.e., a closed set $\emptyset\ne M\subseteq X$ such that $f(M)=M$ and no proper subset of $M$ has this property. Denote by $\omega (f)$ the union of all $\omega$-limit sets of $f$. The next relations follow by definition: \begin{equation} \label{eq10} AP(f)\subseteq UR(f)\subseteq W(f)\subseteq QW(f)\subseteq R(f)\subseteq \omega (f). \end{equation} The next theorem will be used in Section 2. Its part (i) is proved in \cite{zhou2} but we are able to give a simpler argument, and extend it to part (ii).
\begin{theorem} If $f\in\mathcal C(X)$ then
{\rm (i)} \ $W(f) = W(f^m)$,
{\rm (ii)} \ $QW(f) = QW(f^m)$,
{\rm (iii)} \ $IR(f) = IR(f^m)$. \label{dalsi3vlastnosti} \end{theorem}
\begin{proof} Since $\Psi _x(f,t)\ge \tfrac 1m \Psi _x(f^m,t)$, $x\in W(f^m)$ implies $x\in W(f)$ and similarly, $QW(f^m)\subseteq QW(f)$. Since (iii) follows by (i) and (ii), it suffices to prove that for every $\varepsilon >0$ there is a $\delta >0$ such that, for every prime integer $m$, \begin{equation} \label{eq11} \Psi _x(f^m,\varepsilon) \ge\Psi _x(f,\delta ) \ \text{and} \ \Psi ^*_x(f^m,\varepsilon )\ge \Psi ^*_x(f,\delta ). \end{equation} For every $i\ge 0$, denote $\omega _i:=\omega (f^m,f^i(x))$ and $\omega _{ij}:=\omega _i\cap\omega _j$. Obviously, $\omega (f,x)=\bigcup _{0\le i<m}\omega _i$, and $f(\omega _{i})=\omega _{i+1}$, where $i$ is taken mod $m$. Moreover, $f^m(\omega _i)=\omega _i$ and $f^m(\omega _{ij})=\omega _{ij}$, for every $0\le i<j<m$. Hence \begin{equation} \label{equ12} \omega _i\ne \omega _{ij} \ \text{implies} \ \omega _j\ne\omega _{ij}, \ \text{and} \ f^i(x), f^j(x)\notin \omega _{ij}. \end{equation} Let $k$ be the least period of $\omega _0$. Since $m$ is prime, there are two cases.
(a) If $k=m$ then the sets $\omega _i$ are pairwise distinct and, by (\ref{equ12}), there is a $\delta>0$ such that $B(x,\delta )\cap \omega _i=\emptyset$, $0<i<m$. It follows that if $f^r(x)\in B(x,\delta )$ then $r$ is a multiple of $m$, with finitely many exceptions. Consequently, (\ref{eq11}) is satisfied for $\varepsilon =\delta$, even with $\ge$ replaced by the equality.
(b) If $k=1$ then $\omega _i=\omega _0$, for every $i$. Let $\varepsilon >0$. For every $i$, $0\le i<m$, there is the minimal integer $k_i\ge 0$ such that $f^{mk_i+i}(x)\in B(x,\varepsilon)$. By the continuity, there is a $\delta >0$ such that $f^{mk_i+i}(B(x,\delta ))\subseteq B(x,\varepsilon )$, $0\le i<m$. If $f^r(x)\in B(x,\delta )$ and $r\equiv i ({\rm mod} \ m)$, $r=ml+i$, then $f^{m(l+1+k_{m-i})}(x)=f^{r+ mk_{m-i}+{m-i}}(x)\in f^{mk_{m-i} +m-i}(B(x,\delta ))\subseteq B(x,\varepsilon )$. This proves (\ref{eq11}). \end{proof}
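The elementary inequality $\Psi _x(f,t)\ge \tfrac 1m \Psi _x(f^m,t)$ from the first line of the proof can also be observed numerically; the circle rotation below is an illustrative choice, not an example from the paper, and both densities are finite-time approximations.

```python
import math

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def psi_approx(step, t, n=30000, x=0.0):
    # finite-time approximation of the lower density Psi_x for the rotation by `step`
    hits, y = 0, x
    for _ in range(n):
        if circle_dist(x, y) < t:
            hits += 1
        y = (y + step) % 1.0
    return hits / n

alpha = (math.sqrt(5) - 1) / 2  # golden-mean rotation
m, t = 3, 0.05
p_f = psi_approx(alpha, t)               # approximates Psi_x(f, t)
p_fm = psi_approx((m * alpha) % 1.0, t)  # approximates Psi_x(f^m, t)
print(p_f, p_fm, p_f >= p_fm / m)
```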
In 2009 Z. Zhou and F. Li stated, among others, the following problems, see \cite{zhou3}.
\noindent {\bf Problem 1.} Does $IR(f)\ne\emptyset$ imply $h(f)>0$?
\noindent {\bf Problem 2.} Does $W(f)\ne AP(f)$ imply $h(f)>0$?
\noindent In general, the answer to either problem is negative. In \cite{lenka} we constructed a skew-product map $F:Q\times I\to Q\times I$, $(x,y)\mapsto (\tau (x), g_x(y))$, where $Q=\{ 0,1\}^{\mathbb N}$ is a Cantor-type set, $\tau$ the adding machine (or odometer) on $Q$ and, for every $x$, $g_x$ is a nondecreasing mapping $I\to I$, with $g_x(0)=0$. Consequently, $h(F)=0$ and $Q_0:=Q\times\{ 0\}$ is an invariant set. On the other hand, $IR(F)\ne\emptyset$ and $Q_0=AP(F)\ne W(F)$. This example answers both problems in the negative.
However, for maps $f\in\mathcal C(I)$, $h(f)>0$ is equivalent to $IR(f)\ne\emptyset$. On the other hand, the answer to Problem 2 remains negative even for maps in $\mathcal C(I)$. Instead, we are able to show that such maps with $W(f)\ne AP(f)$ are Li-Yorke chaotic. These results are given in the next section, as Theorems 2 and 3. Then, in Section 3 we show that these results can be extended to maps of more general one-dimensional compact metric spaces like topological graphs and topological trees, but not dendrites, see Theorems \ref{gen1} and \ref{gen2}.
\section{Relations with topological entropy for maps in $\mathcal C(I)$}
\begin{theorem} For $f\in\mathcal C (I)$, the conditions $h(f)>0$ and $IR(f)\ne\emptyset$ are equivalent. \label{entropie} \end{theorem}
\begin{proof} If $h(f)=0$ then $UR(f)=R(f)$ (see, e.g., \cite{block}, Corollary VI.8). Hence, by (\ref{eq10}), $W(f)=QW(f)$. If $h(f)>0$ then $W(f)\ne QW(f)$; this follows by Theorem \ref{dalsi3vlastnosti} and Lemmas \ref{shift} and \ref{kladna} stated below. \end{proof}
Let $(\Sigma _2,\sigma )$ be the shift on the set $\Sigma _2$ of sequences of two symbols, $0, 1$, equipped with a metric $\rho$ of pointwise convergence, say, $\rho (\{x_i\}_{i\ge 1},\{y_i\}_{i\ge 1})=1/k$ where $k=\min \{ i\ge 1; x_i\ne y_i\}$.
\begin{lm} $IR(\sigma)$ is non-empty and contains a transitive point. \label{shift} \end{lm}
\begin{proof} Let $$ k_{1,0},k_{1,1},k_{2,0},k_{2,1},k_{2,2},k_{3,0},\cdots ,k_{3,3},k_{4,0},\cdots ,k_{4,4},k_{5,0},\cdots $$ be an increasing sequence of positive integers. Let $\{ B_n\} _{n\ge 1}$ be a sequence of all finite blocks of digits 0 and 1. Put $A_0 = 10$, $A_1=(A_0)^{k_{1,0}}0^{k_{1,1}}B_1$ and, in general, \begin{equation} \label{equ2} A_{n}=A_{n-1}(A_0)^{k_{n,0}}(A_1)^{k_{n,1}}\cdots (A_{n-1})^{k_{n,n-1}}0^{k_{n,n}}B_{n}, \ n\ge 1. \end{equation}
Denote by $|A|$ the length of a finite block of 0's and 1's, and let \begin{equation} \label{equ20}
a_n=|A_n|, \ b_n=|B_n|, \ c_n=a_n-b_{n} -k_{n, n}, \ n\ge 1, \end{equation} and \begin{equation} \label{equ21}
\lambda _{n,m}=\left|A_{n-1}(A_0)^{k_{n,0}}(A_1)^{k_{n,1}}\cdots (A_m)^{k_{n,m}}\right|, \ 0\le m< n. \end{equation} By induction we can take the numbers $k_{i,j}$ such that \begin{equation} \label{equ22} k_{n,m+1}=n\cdot \lambda _{n,m}, \ 0\le m< n. \end{equation} Let $N(A)$ be the cylinder of all $x \in \Sigma_2$ beginning with a finite block $A$. Then $\{ N(B_n)\}_{n\ge 1}$ is a base of the topology of $\Sigma _2$, and $\bigcap _{n=1}^\infty N(A_n) $ contains exactly one point; denote it by $u$.
Since $\sigma ^{a_n-b_n}(u)\in N(B_n)$, i.e., since the trajectory of $u$ visits every $N(B_n)$, $u$ is a transitive point of $\sigma$. Moreover, $\rho (u, \sigma ^{j} (u))=1$, whenever $c_n\le j<a_n-b_n$. By (\ref{equ22}) it follows that $\Psi _u(\sigma ,t)=0$ for every $t\in (0,1)$. Consequently, $u\notin W(\sigma )$.
It remains to show that $u\in QW(\sigma )$. Let $t\in (0,1)$. Fix an $n_0\in\mathbb N$ such that $1/a_{n_0}<t$. Then, by (\ref{equ2}), $$ \#\left\{ j<\lambda _{n,n_0}; \rho (u,\sigma ^j(u))<t\right\} \ge k_{n,n_0}, \ n>n_0, $$ hence, by (\ref{equ21}) and (\ref{equ22}), $$ \lim _{n\to\infty}\frac{\#\left\{ j<\lambda _{n,n_0}; \rho (u,\sigma ^j(u))<t\right\}} {\lambda _{n,n_0}}\ge \lim _{n\to\infty}\frac{k_{n,n_0}}{\lambda _{n,n_0}} =\lim_{n\to\infty}\frac{k_{n,n_0}}{\lambda _{n,n_0-1}+a_{n_0}k_{n,n_0}} =\lim _{n\to\infty}\frac n{1+a_{n_0}n}=\frac 1{a_{n_0}}. $$
Thus, $\Psi^*_u(\sigma ,t)\ge 1/{a_{n_0}}$ and, by Lemma \ref{l1},
$u\in QW(\sigma )$. \end{proof}
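The mechanism of the construction — blocks of frequent returns separated by much longer gaps — is what separates the lower and upper densities. The toy subset of $\mathbb N$ below (an illustrative choice, not the set from the proof) exhibits the same effect: its lower density is $0$, while its upper density is at least $1/2$.

```python
# union of blocks [a_k, 2 a_k) with doubly exponential gaps a_k = 2^(2^k)
blocks = [2**(2**k) for k in range(1, 5)]

def count_hits(N):
    # number of elements of the union of blocks lying in [0, N)
    return sum(max(0, min(N, 2 * a) - a) for a in blocks)

dens_low = [count_hits(a) / a for a in blocks]              # sampled at N = a_k
dens_high = [count_hits(2 * a) / (2 * a) for a in blocks]   # sampled at N = 2 a_k
print("densities along a_k:", dens_low)
print("densities along 2 a_k:", dens_high)
```

Along $N=a_k$ the density tends to $0$; along $N=2a_k$ it stays at least $1/2$.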
\begin{lm} Let $f\in \mathcal C (I)$ have positive topological entropy. Then $IR(f)\ne\emptyset$. \label{kladna} \end{lm}
\begin{proof} If $h(f)>0$ then $f^m$ is strictly turbulent for some $m$. This means that there exist disjoint compact intervals $K_0$, $K_1$ such that $f^m(K_0)\cap f^m(K_1)\supset K_0\cup K_1$, see \cite{block}, Theorem IX.28. This condition is equivalent to the existence of a continuous map $g: X\subset I \rightarrow \Sigma_2$, where $X$ is of Cantor type, such that $g \circ f^m(x) = \sigma \circ g(x)$ for every $x \in X$, and such that each point in $\Sigma_2$ is the image of at most two points in $X$ (\cite{block}, Proposition II.15). By Lemma~\ref{shift}, there is a $u\in IR(\sigma )$. Hence, for every $t>0$, $\Psi_u^*(\sigma, t)>0$, and there is an $s>0$ such that $\Psi_u(\sigma, s)=0$. There are at most two preimages, $u_0$ and $u_1$, of $u$. Then, by the continuity, $\Psi_{u_i}(f^m, r)=0$, for some $r>0$ and $i= 0, 1$, and $\Psi_{u_i}^*(f^m, k)>0$ for at least one $i\in\{0, 1\}$ and every $k>0$. Thus, $u_0\in IR(f^m)$ or $u_1\in IR(f^m )$ and, by Theorem \ref{dalsi3vlastnosti}, $IR (f)\ne\emptyset$. \end{proof}
Recall that $f\in\mathcal C (X)$ is {\it Li-Yorke chaotic}, or {\it LYC}, if there is an uncountable set $S\subseteq X$ such that, for every $x\ne y$ in $S$, $\liminf _{n\rightarrow\infty} \rho (\varphi ^n(x), \varphi ^n(y))=0$ and $\limsup _{n\rightarrow\infty} \rho (\varphi ^n(x), \varphi ^n(y))>0$.
\begin{theorem} \label{LiYorke} For $f\in \mathcal C(I)$, $W(f)\ne AP(f)$ implies that $f$ is Li-Yorke chaotic, but does not imply $h(f)>0$. \end{theorem}
\begin{proof} Every continuous map of a compact metric space with positive topological entropy is Li-Yorke chaotic \cite{BGKM}. Hence to prove the theorem it suffices to consider the class $\mathcal C_0\subset \mathcal C(I)$ of maps with zero topological entropy and show that
(i) for every $f\in\mathcal C_0$, $W(f)\ne AP(f)$ implies {\it LYC}, and
(ii) there is an $f\in\mathcal C_0$ with $W(f)\ne AP(f)$.
\noindent For $f\in\mathcal C_0$, $R(f)=UR(f)$, see, e.g., \cite{block}, Corollary VI.8. Hence, by (\ref{eq10}), $W(f)\ne AP(f)$ implies that $f$ has an infinite minimal $\omega$-limit set $\widetilde\omega$ possessing a point which is not in $AP(f)$. Recall that for every such $\widetilde\omega$ there is an {\it associated system} $\{ J_n\}_{n\ge 1}$ of compact periodic intervals such that $J_n$ has period $2^n$, and $\widetilde\omega\subseteq \bigcap _{n\ge 1}\bigcup _{0\le j<2^n} f^{j}(J_n)$ \cite{smital}. For every $x\in\widetilde\omega$ there is a sequence $\iota (x)=\{j_n\}_{n\ge 1}$ of integers, $0\le j_n<2^n$, such that $x\in\bigcap _{n\ge 1}f^{j_n}(J_n)=:Q_x$. For every $x\in \widetilde\omega$, the set $\widetilde\omega \cap Q_x$ contains one (i.e., the point $x$) or two points. In the second case $Q_x=[a,b]$ is a compact wandering interval (i.e., $f^n(Q_x)\cap Q_x=\emptyset$ for every $n\ge 1$) such that $a,b\in\widetilde\omega$ and either $x=a$ or $x=b$. Moreover, if, for every $x\in\widetilde\omega$, $\widetilde\omega \cap Q_x$ is a singleton then $f$ restricted to $\widetilde\omega$ is the adding machine, and $\widetilde\omega \subseteq AP(f)$, see \cite{brucknersmital}. Consequently, $W(f)\ne AP(f)$ implies the existence of an infinite $\omega$-limit set $\widetilde\omega$ such that \begin{equation} \label{eq12} \widetilde\omega\cap Q_x=\{a,b\}, \ a<b, \ \text{for some} \ x\in\widetilde\omega . \end{equation} This condition characterizes {\it LYC} maps in $\mathcal C_0$ (see \cite{smital} or subsequent books like \cite{block}) which proves (i).
To prove (ii) note that there are maps $f\in\mathcal C_0$ such that both $a$ and $b$ in (\ref{eq12}) are non-isolated points of $\widetilde\omega$, see \cite{brucknersmital} or \cite{MS}. Then $a,b\in UR(f)$ are minimal points. We show that in this case either $a\notin AP(f)$ or $b\notin AP(f)$ (actually, neither $a$ nor $b$ is in $AP(f)$ but we do not need this stronger property). So assume that $a,b\in AP(f)$ and $U_a$, $U_b$ are their disjoint open neighborhoods. Then there is an {\it even} $m$, $m=(2k+1)2^n$, with $n\ge 1$, such that $f^{jm}(a)\in U_a$ and $f^{jm}(b)\in U_b$, for every $j\ge 0$. Let $\{ J_n\} _{n\ge 1}$ be the system of compact periodic intervals associated with $\widetilde\omega$. Without loss of generality we may assume that, for some $n$, $[a,b]\subset J_n$. Since $J_n$ has period $2^n$, for arbitrary odd $j$, $f^{jm}(J_n)\cap J_n=\emptyset$. If $f^{jm}(J_n)$ is to the left of $J_n$, then $f^{jm}(J_n)\cap U_b=\emptyset$, otherwise $f^{jm}(J_n)\cap U_a=\emptyset$. In any case, $f^{jm}(a)\notin U_a$ or $f^{jm}(b)\notin U_b$, which is a contradiction. \end{proof}
\section{Generalization for maps on more general one-dimensional spaces}
Here we show that the results given in Theorems \ref{entropie} and \ref{LiYorke} concerning maps in $\mathcal C(I)$ can be generalized to more general one-dimensional compact metric spaces like topological graphs or trees, but not dendrites. Recall that $X$ is a {\it topological graph} if $X$ is a non-empty compact connected metric space which is the union of finitely many arcs (i.e., continuous images of the interval $I$) such that every two arcs can have only end-points in common. A {\it tree} is a topological graph which contains no subset homeomorphic to the circle. A {\it dendrite} is a locally connected continuum containing no subset homeomorphic to the circle. The proofs of the generalized results are based on the same ideas as the proofs of Theorems \ref{entropie} and \ref{LiYorke}. We only need some recent, nontrivial results concerning the structure of $\omega$-limit sets of such maps, see \cite{HrMa} and \cite{KKM}. Therefore we give here only an outline of the proofs, pointing out the main differences. \begin{theorem} \label{gen1} Let $f\in\mathcal C(X)$.
{\rm (i)} \ If $X$ is a topological graph then $h(f)>0$ is equivalent to $QW(f)\ne W(f)$.
{\rm (ii)} \ There is a dendrite $X$ such that $h(f)>0$ and $QW(f)=W(f)=UR(f)$. \end{theorem}
\begin{proof} To prove (i) note that, for $f\in\mathcal C(X)$ where $X$ is a topological graph, $h(f)>0$ if and only if, for some $n\ge 1$, $f^n$ is turbulent \cite{HrMa}. Hence the proof of Lemma \ref{kladna} applies also to this case and $h(f)>0$ implies $IR(f)\ne\emptyset$. On the other hand, if $h(f)=0$ then every infinite $\omega$-limit set is a solenoid (i.e., it has an associated system of compact periodic intervals $\{ J_n\}_{n\ge 1}$, $J_n$ with period $2^n$) and consequently, $R(f)=UR(f)$ \cite{HrMa} which gives the other implication.
(ii) In \cite{KKM} there is an example of a dendrite $X$ with a continuous map $f$ possessing exactly two $\omega$-limit sets: a minimal Cantor-type set $Q$ such that $h(f|_Q)> 0$ and a fixed point $p$ such that $\omega (f,x)=\{ p\}$ for every $x\in X\setminus Q$. \end{proof}
\begin{theorem} \label{gen2} Let $f\in\mathcal C(X)$.
{\rm (i)} \ If $X$ is a compact tree then $W(f)\ne AP(f)$ implies LYC, but does not imply $h(f)>0$.
{\rm (ii)} \ If $X$ is a dendrite, or a topological graph containing a circle, then $W(f)\ne AP(f)$ implies neither LYC nor $h(f)>0$. \end{theorem}
\begin{proof} (i) Similarly as in the proof of Theorem \ref{LiYorke} we may assume that $h(f)=0$. Then every infinite $\omega$-limit set of $f$ is a solenoid and the argument, with obvious modifications, applies.
(ii) If $X$ is the circle, take $f$ to be an irrational rotation. Then obvioulsy $X=UR(f)\setminus AP(f)=W(f)\setminus AP(f)$ but $f$ is not {\it LYC}. On the other hand, let $\widetilde\omega$ be the $\omega$-limit set used in the proof of part (ii) of Theorem \ref{LiYorke}. Thus, $\widetilde\omega$ is a minimal set intersecting $UR(f)\setminus AP(f)$. A modification of the construction from \cite{KKM} yields a dendrite with exactly two $\omega$-limit sets, an infinite minimal set $Q=\widetilde\omega$ and a fixed point $q$ (see proof of part (ii) of preceding theorem). It is easy to see that $f$ is not {\it LYC}. \end{proof}
\begin{remark} By Theorems \ref{gen1} and \ref{gen2}, for a map $f\in\mathcal C(X)$ where $X$ is a compact metric space, the properties $h(f)>0$ and $W(f)\ne AP(f)$ are independent. Similarly, $h(f)>0$ and $IR(f)\ne\emptyset$ are independent. An example of a map $f$ with $h(f)=0$ and $IR(f)\ne\emptyset$ is given in \cite{lenka} (see also the text at the end of Section 1), and any minimal map $f$ with $h(f)>0$ yields $IR(f)=\emptyset$. \end{remark}
{\bf Acknowledgments}
The author thanks Professor Jaroslav Sm\'{\i}tal for his heedful guidance and helpful suggestions.
\thebibliography{99}
\bibitem{BGKM} Blanchard F., Glasner E., Kolyada S. and Maass A., On Li-Yorke pairs, J. Reine Angew. Math., 547 (2002), 51--68.
\bibitem{block} Block L.S. and Coppel W.A., Dynamics in One Dimension, Springer-Verlag, Berlin Heidelberg, 1992.
\bibitem{brucknersmital} Bruckner A. M. and Sm\'{i}tal J., A characterization of $\omega$-limit sets of maps of the interval with zero topological entropy, Ergod. Th. \& Dynam. Sys., 13 (1993), 7--19.
\bibitem{HrMa} Hric R. and M\'alek M., Omega-limit sets and distributional chaos on graphs, Topology Appl. 153 (2006), 2469--2475.
\bibitem{KKM} Ko\v can Z., Korneck\'a-Kurkov\'a V. and M\'alek M., Entropy, horseshoes and homoclinic trajectories on trees, graphs and dendrites, Ergod. Th. \& Dynam. Sys., 30 (2011), to appear.
\bibitem{MS} Misiurewicz M. and Sm\'{\i}tal J., Smooth chaotic mappings with zero topological entropy, Ergod. Th. \& Dynam. Sys., 8 (1988), 421--424.
\bibitem{lenka} Obadalov\'{a} L. and Sm\'{i}tal J., Distributional chaos and irregular recurrence, Nonlin. Anal. A - Theor. Meth. Appl., 72 (2010), 2190--2194.
\bibitem{smital} Sm\'{i}tal J., Chaotic functions with zero topological entropy, Trans. Amer. Math. Soc. 297 (1986), 269--282.
\bibitem{zhou2} Zhou Z., Weakly almost periodic point and measure centre, Science in China (Ser. A), 36 (1993), 142--153.
\bibitem{zhou3} Zhou Z. and Li F., Some problems on fractal geometry and topological dynamical systems. Anal. Theor. Appl. 25 (2009), 5--15.
\bibitem{zhou} Zhou Z. and Feng L., Twelve open problems on the exact value of the Hausdorff measure and on topological entropy, Nonlinearity 17 (2004), 493--502.
\end{document}
\begin{document}
\baselineskip=17pt
\title [Corona problem with data in ideal spaces of sequences] {Corona problem with data\\in ideal spaces of sequences}
\author [D.~V.~Rutsky] {Dmitry V. Rutsky} \address { Steklov Mathematical Institute\\ St. Petersburg Branch\\ Fontanka 27\\ 191023 St. Petersburg, Russia} \email {[email protected]}
\date{}
\begin{abstract} Let $E$ be a Banach lattice on $\mathbb Z$ with order continuous norm. We show that for any function $f = \{f_j\}_{j \in \mathbb Z}$ from the Hardy space $\hclass {\infty} {E}$ such that
$\delta \leqslant \|f (z)\|_E \leqslant 1$ for all $z$ from the unit disk~$\mathbb D$, there exists a solution $g = \{g_j\}_{j \in \mathbb Z} \in \hclass {\infty} {E'}$,
$\|g\|_{\hclass {\infty} {E'}} \leqslant C_\delta$, of the B\'ezout equation $\sum_j f_j g_j = 1$, also known as the vector-valued corona problem with data in~$\hclass {\infty} {E}$. \end{abstract}
\keywords {corona problem, ideal sequence spaces}
\maketitle
The (classical) Corona Problem (see, e.~g., \cite [Appendix~3] {nikolsky}) has the following equivalent formulation: given a finite number of bounded analytic functions
$f = \left\{f_j\right\}_{j = 1}^N \subset \hclassg {\infty}$ on the unit disk~$\mathbb D$, is the condition $\inf_{z \in \mathbb D} \max_j |f_j (z)| > 0$ sufficient as well as necessary for the existence of a solution $g = \left\{g_j\right\}_{j = 1}^N \subset \hclassg {\infty}$ of the B\'ezout equation $\sum_j f_j g_j = 1$? A positive answer to this problem was first established by L.~Carleson in~\cite {carleson1962}, and later a relatively simple proof was found by T.~Wolff (see, e.~g., \cite [Appendix~3] {nikolsky}); we also mention another approach to the proof based on the theory of analytic multifunctions (see, e.~g., \cite {slodkowski1986}).
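As a concrete toy instance of the B\'ezout equation above (an example of our own, chosen purely for illustration; it is not taken from the cited works), one can take $f_1 (z) = z$ and $f_2 (z) = 1 - z/2$, which have no common zero on the closed disk, and observe that the constants $g_1 = 1/2$, $g_2 = 1$ solve the equation exactly:

```python
# Toy corona/Bezout data on the unit disk: f1(z) = z and f2(z) = 1 - z/2
# satisfy inf_z max(|f1(z)|, |f2(z)|) > 0, and the bounded analytic
# functions g1 = 1/2, g2 = 1 solve f1*g1 + f2*g2 = 1 identically.
def f1(z): return z
def f2(z): return 1 - z / 2
def g1(z): return 0.5
def g2(z): return 1.0

# spot-check the identity at a few points of the disk
for z in [0j, 0.5 + 0.5j, -0.9j, 0.99]:
    assert abs(f1(z) * g1(z) + f2(z) * g2(z) - 1) < 1e-12
```

Here the solution happens to be constant; the content of the corona theorem is that bounded analytic solutions exist for arbitrary data bounded away from zero.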
These important results laid the groundwork for many subsequent developments. One question to ask is what estimates are possible for the solutions $g$ in terms of the estimates on $f$. In particular, estimates in norms such as the $\hclass {2} {\lsclass {2}}$ norm of $g$ make it possible to extend these results to infinite sequences $f$. We state this problem somewhat loosely for now; see Section~\ref {statementsoftheresults} below for exact definitions. The notation $\langle f, g\rangle = \sum_j f_j g_j$ will be used for suitable sequences $f = \{f_j\}$ and $g = \{g_j\}$. Let $E$ be a normed space of sequences, and let $$
E' = \left\{ g\,\, \mid \,\, |\langle f, g \rangle| < \infty \text { for all $f \in E$}\right\} $$ be the space of sequences dual to~$E$ with respect to the pairing~$\langle \cdot, \cdot\rangle $. Suppose that $f \in \hclass {\infty} {E}$ satisfies
$\delta \leqslant \|f (z)\|_{E} \leqslant 1$ for all $z \in \mathbb D$ with some~$\delta > 0$. We are interested in the existence of a function $g \in \hclass {\infty} {E'}$ satisfying $\langle f, g\rangle = 1$, and in the possible norm estimates of~$g$ in terms of~$\delta$. If this is the case for all such $f$ then we say that $E$ has the \emph {corona property}.
T.~Wolff's argument allowed M.~Rosenblum, V.~A.~Tolokonnikov and A.~Uchi\-yama to obtain (independently of one another) a positive answer to this question in the Hilbert space case, showing that $\lsclass {2}$ has the corona property. The corresponding estimates were later improved many times; apparently, the best one at the time of this writing can be found in~\cite {treilwick2005}. A.~Uchiyama also obtained in~\cite {uchiyama1980} a different estimate using a rather involved argument based on the original proof of L.~Carleson, thus establishing that~$\lsclass {\infty}$ also has the corona property. The intermediate spaces $E = \lsclass {p}$, $2 < p < \infty$ between these two cases can be reduced to the case $p = 2$ (see~\cite [\S 3] {kisliakovrutsky2012en}), following Tolokonnikov's (unpublished) remark that Wolff's method can be directly extended to this case at least for even values of $p > 2$, but very little was known about the corona property for other spaces~$E$.
Recently, in~\cite {kisliakov2015pen} S.~V.~Kislyakov extended the corona property to a large class of Banach ideal sequence spaces~$E$. Specifically, in this result $E$ is assumed to be $q$-concave with some $q < \infty$ and to satisfy the Fatou property, and the space $\lclass {\infty} {E}$ is assumed to be $\BMO$-regular. These conditions are satisfied by all $\mathrm {UMD}$ lattices with the Fatou property, and in particular by the spaces~$\lsclass {p}$, $1 \leqslant p < \infty$. A novel and somewhat counterintuitive idea leading to this result is that for suitable spaces $E_0$ and $E_1$ the corona property of $E_0$ implies the corona property of their pointwise product $E_0 E_1$. The proof uses the theory of interpolation for Hardy-type spaces to reduce the result to the well-known case $E = \lsclass {2}$.
In the present work we show how the approach of~\cite {kisliakov2015pen} can be modified to obtain a complete answer: it turns out that \emph {all} ideal sequence spaces with order continuous norm have the corona property. Note that this, in particular, includes all finite-dimensional ideal sequence spaces. These conditions are more general than those of~\cite {kisliakov2015pen}; see Proposition~\ref {myctmog} at the end of Section~\ref {cpmpr2}. It remains unclear whether the assumption of the order continuity of the norm can be weakened.
Compared to~\cite {kisliakov2015pen}, our proof relies on somewhat less elementary means, namely we use a fixed point theorem and a selection theorem to reduce the problem to Uchiyama's difficult case $E = \lsclass {\infty}$, but otherwise the reduction appears to be rather simple and straightforward. Moreover, for $q$-concave lattices $E$ with $q < \infty$ the problem is still reduced in this manner to the relatively easy standard case $E = \lsclass {2}$ using the same argument as in~\cite {kisliakov2015pen}.
We also mention that very recently in~\cite {zlotnikov2016} the method described in the present work was also applied to the problem of characterizing the ideals $I (f) = \left\{ \langle f, g\rangle \mid g \in \hclass {\infty} {E'} \right\}$. Certain classical results concerning the case $E = \lsclass {2}$ were extended to the case $E = \lsclass {1}$. The approach~\cite {kisliakov2015pen} based on interpolation of Hardy-type spaces does not seem to lend itself to such an extension.
We briefly outline the implications for the estimates $C_{E, \delta}$ of the norms of the solutions $g$ in terms of $\delta$. Theorem~\ref {cpmultt} below can be stated quantitatively: $C_{E_0 E_1, \delta} \leqslant C_{E_0, \frac \delta 2}$ for $0 < \delta < 1$, provided that $E_0 E_1$ is a Banach lattice with order continuous norm. Thus for any lattice~$E$ with order continuous norm we obtain an estimate \begin {equation} \label {bascest} C_{E, \delta} \leqslant C_{\lsclass {\infty}, \frac \delta 2} \leqslant c_1 \delta^{-c_2} \end {equation} with some constants $c_1, c_2 > 0$ independent of $E$. Furthermore, if a Banach lattice $E$ is $q$-concave with some $1 < q < \infty$ then $E = \lsclass {q} E_1$ with a Banach lattice $E_1$ (see the proof of Proposition~\ref {myctmog} below), and we get an estimate $C_{E, \delta} \leqslant C_{\lsclass {q}, \frac \delta 2}$ that may be sharper than~\eqref {bascest} for some values of $q$. Indeed, we also have an estimate $C_{\lsclass {p}, \delta} \leqslant C_{\lsclass {2}, \delta^{\frac p 2}}$ for $p \geqslant 2$ (see, e.~g., \cite [\S 3] {kisliakovrutsky2012en}) and $C_{\lsclass {2}, \delta} \leqslant \frac 1 \delta + c \frac 1 {\delta^2} \log \frac 1 \delta$ by~\cite {treilwick2005} with an explicit constant $c \approx 8.4$. The latter estimate is known to be close to optimal in terms of the rate of growth as $\delta \to 0$. Thus, for $q$-concave lattices $E$ with some $2 \leqslant q < \infty$ we also have an estimate $C_{E, \delta} \leqslant c \frac 1 {\delta^q} (\log \frac 1 \delta)^{\frac q 2}$ for small enough $\delta$. Our knowledge about sharp estimates for the value of $C_{\lsclass {p}, \delta}$ with $p \neq 2$ seems to be lacking.
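The asymptotics quoted in this paragraph are easy to tabulate. The following sketch is for illustration only: the function names are ours, while the constant $c \approx 8.4$ and both reductions are exactly the ones cited above.

```python
import math

# C_{l^2, delta} <= 1/delta + c * (1/delta^2) * log(1/delta), c ~ 8.4
# (the Treil--Wick bound quoted in the text)
def c_l2(delta, c=8.4):
    return 1.0 / delta + c * math.log(1.0 / delta) / delta ** 2

# reduction C_{l^p, delta} <= C_{l^2, delta^{p/2}} for p >= 2
def c_lp(delta, p):
    return c_l2(delta ** (p / 2.0))

# the bound grows polynomially in 1/delta, with a logarithmic factor
assert c_l2(0.1) < c_l2(0.01) < c_l2(0.001)
```

For instance, `c_lp(delta, 4)` evaluates the induced bound for $\lsclass {4}$ at lower bound `delta`, which by the reduction equals the $\lsclass {2}$ bound at `delta ** 2`.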
\section {Statements of the results}
\label {statementsoftheresults}
A quasi-normed \emph {lattice}
$X$ of measurable functions on a measurable space~$\Omega$, also called an \emph {ideal space}, is a quasi-normed space of measurable functions such that $f \in X$ and $|g| \leqslant |f|$ imply $g \in X$ and $\|g\|_X \leqslant \|f\|_X$. Ideal spaces of sequences $E$ are lattices on $\Omega = \mathbb Z$. A lattice~$X$ is said to have \emph {order continuous quasi-norm} if for any sequence $f_n \in X$
such that $\sup_n |f_n| \in X$ and $f_n \to 0$ almost everywhere one also has $\|f_n\|_X \to 0$. Lattices~$\lsclass {p}$ have order continuous quasi-norm if and only if $p < \infty$. For a lattice $X$ of measurable functions the \emph {order dual} $X'$, also called the \emph {associate space}, is the lattice of all measurable functions~$g$ such that the norm $$
\|g\|_{X'} = \sup_{ f \in X, \, \|f\|_X \leqslant 1} \int |f g| $$ is finite. For example, the order dual of $\lsclass {p}$ is $\lsclass {p'}$ for all $1 \leqslant p \leqslant \infty$. If~$X$ is a Banach lattice, the order dual is contained in the topological dual space~$X^*$ of all continuous linear functionals on~$X$, and $X' = X^*$ if and only if~$X$ has order continuous norm. For more on lattices see, e.~g.,~\cite {kantorovichold}.
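The last example admits a finite-dimensional sanity check (a sketch of our own; the particular sequence and exponent are arbitrary): for $E = \lsclass {p}$ the supremum defining $\|g\|_{E'}$ is attained at the normalized H\"older extremizer and equals the $\lsclass {p'}$ norm of $g$.

```python
# Verify (l^p)' = l^{p'} on a finite sequence: the normalized Hoelder
# extremizer f ~ |g|^{p'-1} attains sup_{||f||_p <= 1} sum |f g| = ||g||_{p'}.
p = 3.0
pp = p / (p - 1.0)                        # conjugate exponent p'
g = [1.0, 2.0, 0.5, 4.0]

def norm(x, s):
    return sum(abs(t) ** s for t in x) ** (1.0 / s)

f = [abs(t) ** (pp - 1.0) for t in g]     # extremizer, up to normalization
nf = norm(f, p)
f = [t / nf for t in f]                   # now ||f||_p = 1
pairing = sum(a * abs(b) for a, b in zip(f, g))
assert abs(pairing - norm(g, pp)) < 1e-9
```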
\begin {definition} \label {cpdef} Suppose that $E$ is a normed lattice on $\mathbb Z$. We say that $E$ has the corona property with constant $C_\delta$, $0 < \delta < 1$, if for any $f \in \hclass {\infty} {E}$ such that\footnote {
Replacing this condition with the strict inequality $\delta < \|f (z)\|_E \leqslant 1$ would make it possible to simplify the arguments in Section~\ref {cpmpred} below somewhat; however, the closed form looks nicer. }
$\delta \leqslant \|f (z)\|_E \leqslant 1$ for all $z \in \mathbb D$ there exists some $g \in \hclass {\infty} {E'}$
such that $\|g\|_{\hclass {\infty} {E'}} \leqslant C_\delta$ and $\langle f (z), g (z)\rangle = 1$ for all $z \in \mathbb D$. Such a function~$f$ is called the data for the corona problem with lower bound $\delta$, and such a function $g$ is called the solution for the corona problem with data $f$. \end {definition}
For any two quasi-normed lattices $E_0$ and $E_1$ on the same measurable space the set of pointwise products $$ E_0 E_1 = \{ h_0 h_1 \mid h_0 \in E_0, h_1 \in E_1\} $$ is a quasi-normed lattice with the quasi-norm defined by $$
\|h\|_{E_0 E_1} = \inf_{h = h_0 h_1} \|h_0\|_{E_0} \|h_1\|_{E_1}. $$ For example, the H\"older inequality shows that $\lsclass {p} \lsclass {q} = \lsclass {r}$ with $\frac 1 r = \frac 1 p + \frac 1 q$. It is easy to see that $X \lclassg {\infty} = X$ for any lattice $X$.
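The H\"older identity $\lsclass {p} \lsclass {q} = \lsclass {r}$ comes with an exact factorization that is easy to verify on finite sequences (a sketch of our own; the exponents and the sequence are arbitrary): for nonnegative $h$ the choice $h_0 = h^{r/p}$, $h_1 = h^{r/q}$ gives $h_0 h_1 = h$ with $\|h_0\|_{p} \|h_1\|_{q} = \|h\|_{r}$, so the infimum in the product quasi-norm is attained.

```python
# Exact factorization for l^p l^q = l^r with 1/r = 1/p + 1/q.
p, q = 3.0, 6.0
r = 1.0 / (1.0 / p + 1.0 / q)             # here r = 2
h = [0.5, 1.0, 2.0, 0.25]                  # a nonnegative finite sequence

def norm(x, s):
    return sum(abs(t) ** s for t in x) ** (1.0 / s)

h0 = [t ** (r / p) for t in h]             # h0 = h^{r/p}
h1 = [t ** (r / q) for t in h]             # h1 = h^{r/q}
assert all(abs(a * b - c) < 1e-12 for a, b, c in zip(h0, h1, h))
assert abs(norm(h0, p) * norm(h1, q) - norm(h, r)) < 1e-9
```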
\begin {theorem} \label {cpmultt} Suppose that $E_0$, $E_1$ are Banach lattices on $\mathbb Z$ such that $E = E_0 E_1$ is also a Banach lattice having order continuous norm. If $E_0$ has the corona property with constants $C_\delta$, $0 < \delta < 1$, then $E$ also has it with constants $C_{\frac \delta 2}$. \end {theorem}
The proof of Theorem~\ref {cpmultt} is given in Section~\ref {cpmpr2} below. Since $\lsclass {\infty}$ has the corona property by~\cite {uchiyama1980}, applying Theorem~\ref {cpmultt} with $E_0 = \lsclass {\infty}$ and $E_1 = E$ yields the main result, stated as follows.
\begin {theorem} \label {ecoronat} Every Banach lattice on $\mathbb Z$ with order continuous norm has the corona property. \end {theorem}
\section {Proof of Theorem~\ref {cpmultt}}
\label {cpmpr2}
We begin with some preparations. The following result shows that finite-dimensional approximation of the corona property is possible under the assumption that $E$ has order continuous norm. \begin {proposition} \label {cpfdred} Suppose that $E$ is a Banach lattice on $\mathbb Z$ with order continuous norm such that for any $\varepsilon > 0$ and a finite $I \subset \mathbb Z$ the restriction of $E$ onto $I$ has the corona property with constants $(1 + \varepsilon) C_\delta$, $0 < \delta < 1$, where $C_\delta$ are independent of~$I$ and~$\varepsilon$. Then~$E$ has the corona property with constants $C_\delta$, $0 < \delta < 1$. \end {proposition} The proof of Proposition~\ref {cpfdred} is given in Section~\ref {cpmpred} below.
A set-valued map $\Phi : X \to 2^Y$ between normed spaces is called \emph {lower semicontinuous} if for any $x_n \in X$, $x_n \to x$ in $X$ and $y \in \Phi (x)$ there exists a subsequence $n'$ and some $y_{n'} \in \Phi (x_{n'})$ such that $y_{n'} \to y$ in $Y$. We need the following well-known result. \begin {theoremmichael} [{\cite {michael1956}}\footnote { For the sake of simplicity we omitted the converse part of this famous theorem. }] Let $Y$ be a Banach space, $X$ a paracompact space and $\varphi : X \to 2^Y$ a lower semicontinuous multivalued map taking values that are nonempty, convex and closed. Then there exists a continuous selection $f : X \to Y$ of $\varphi$, i.~e. $f (x) \in \varphi (x)$ for all $x \in X$. \end {theoremmichael}
This allows us to conclude that the factorization corresponding to the product of finite-dimensional Banach lattices can be made continuous (in the more general infinite-dimensional cases this seems to be unclear). \begin {proposition} \label {multcsel} Suppose that $F_0$ and $F_1$ are finite-dimensional Banach lattices of functions on the same measurable space. Then for every $\varepsilon > 0$ there exists a continuous map $\Delta : F_0 F_1 \setminus \{0\} \to F_1$ taking nonnegative values such that
$\|\Delta f\|_{F_1} \leqslant 1$ and
$\|f (\Delta f)^{-1}\|_{F_0} \leqslant (1 + \varepsilon) \|f\|_{F_0 F_1}$. \end {proposition} Indeed, we first consider a set-valued map $\Delta_0 : F_0 F_1 \setminus \{0\} \to 2^{F_1}$ defined~by \begin {multline*} \Delta_0 (f) = \left\{ g \in F_1 \mid \text {$g > 0$ everywhere},\right. \\ \left.
\|g\|_{F_1} < 1,
\left\|f g^{-1}\right\|_{F_0} < (1 + \varepsilon) \|f\|_{F_0 F_1} \right\} \end {multline*} for $f \in F_0 F_1 \setminus \{0\}$. By the definition of the space $F_0 F_1$, the map $\Delta_0$ takes nonempty values. It is easy to see that $\Delta_0$ has open graph, and hence $\Delta_0$ is a lower semicontinuous map. The graph of a map $\overline\Delta_0 : F_0 F_1 \setminus \{0\} \to 2^{F_1}$ defined by \begin {equation*} \overline\Delta_0 (f) = \left\{ g \in F_1 \mid g \geqslant 0,
\|g\|_{F_1} \leqslant 1,
\left\|f g^{-1}\right\|_{F_0} \leqslant (1 + \varepsilon) \|f\|_{F_0 F_1} \right\} \end {equation*} (with the conventions $0 \cdot 0^{-1} = 0$ and $a \cdot 0^{-1} = \infty$ for $a \neq 0$) is easily seen to be the closure of the graph of the map $\Delta_0$, therefore $\overline\Delta_0$ is also a lower semicontinuous map. The values of $\overline\Delta_0$ are convex and closed, so by the Michael selection theorem $\overline\Delta_0$ admits a continuous selection $\Delta$, that is, $\Delta (f) \in \overline\Delta_0 (f)$ for all $f \in F_0 F_1 \setminus \{0\}$. This selection satisfies the conclusion of Proposition~\ref {multcsel}.
\begin {theoremfanky} [{\cite {fanky1952}}] Suppose that $K$ is a compact set in a locally convex linear topological space. Let $\Phi$ be a mapping from $K$ to the set of nonempty subsets of $K$ that are convex and compact, and assume that the graph of $\Phi$ is closed. Then $\Phi$ has a fixed point, i.~e. $x \in \Phi (x)$ for some $x \in K$. \end {theoremfanky}
A quasi-normed lattice~$X$ of measurable functions is said to have the
\emph {Fatou property} if for any $f_n, f \in X$ such that $\|f_n\|_X
\leqslant 1$ and the sequence~$f_n$ converges to~$f$ almost everywhere it is also true that $f \in X$ and $\|f\|_X \leqslant 1$.
The following formula (also appearing in \cite [Lemma~1] {kisliakov2015pen}) seems to be rather well known; see, e.~g., \cite [Theorem~3.7] {schep2010}. \begin {proposition} \label {xyxm} Suppose that $X$ and $Y$ are Banach lattices of measurable functions on the same measurable space having the Fatou property such that $X Y$ is also a Banach lattice. Then $X' = (X Y)' Y$. \end {proposition}
In order to achieve the best estimate possible with the method used without assuming that $C_\delta$ is continuous in $\delta$, we take advantage of the fact that the decomposition in the definition of the pointwise product of Banach lattices can be made exact if both lattices satisfy the Fatou property. \begin {proposition} \label {fatouexactd} Let $X$ and $Y$ be Banach lattices of measurable functions on the same measurable space having the Fatou property. Then for every function $f \in X Y$ there exist some $g \in X$ and $h \in Y$ such that $f = g h$ and
$\|g\|_X \|h\|_Y \leqslant \|f\|_{X Y}$. \end {proposition}
This appears to be rather well known but hard to find in the literature, so we give a proof. We may assume that $\|f\|_{X Y} = 1$. Let $\varepsilon_n \to 0$ be a decreasing sequence. The sets \begin {multline*} F_n = \left\{ g \mid g \geqslant 0, \,\,\text {$\supp g = \supp f$\ up to a set of measure $0$},\right. \\ \left.
\|g\|_X \leqslant 1, \left\|f g^{-1} \right\|_Y \leqslant 1 + \varepsilon_n \right\} \subset X \end {multline*} are nonempty and form a nonincreasing sequence. It is easy to see that the sets $F_n$ are convex (one uses the convexity of the map $t \mapsto t^{-1}$, $t > 0$). By the Fatou property of~$X$ and~$Y$ the sets~$F_n$ are closed with respect to the convergence in measure, and they are bounded in~$X$. The intersection of such a sequence of sets is nonempty (see~\cite [Chapter 10, \S 5, Theorem~3] {kantorovichold}), so there exists some $g \in \bigcap_n F_n$, which together with $h = f g^{-1}$ yields the required decomposition.
Now we are ready to prove Theorem~\ref {cpmultt}. Suppose that under its assumptions $E = E_0 E_1$, the lattice $E_0$ has the corona property with constant $C_\delta$ for some $0 < \delta < 1$, and we are given some $f \in \hclass {\infty} {E}$ such that
$\delta \leqslant \|f (z)\|_E \leqslant 1$ for all $z \in \mathbb D$; we need to find a suitable $g \in \hclass {\infty} {E'}$ solving $\langle f, g \rangle = 1$.
Proposition~\ref {cpfdred} allows us to assume that the lattices have finite support $I \subset \mathbb Z$, and moreover, we may relax the claimed estimate for the norm of a solution to $(1 + \varepsilon) C_{\frac \delta 2}$ for arbitrary $\varepsilon > 0$. It is easy to see that finite-dimensional lattices always have the Fatou property, so we may assume that both $E_0$ and $E_1$, and thus both $\lclass {\infty} {E_0}$ and $\lclass {\infty} {E_1}$ have the Fatou property. By Proposition~\ref {fatouexactd}
there exist some $\weightu \in \lclass {\infty} {E_0}$ and $\weightv \in \lclass {\infty} {E_1}$ such that
$|f| = \weightu \weightv$ and
$\|\weightu\|_{\lclass {\infty} {E_0}} \|\weightv\|_{\lclass {\infty} {E_1}} \leqslant \|f\|_{\lclass {\infty} {E}} \leqslant 1$. We may further assume that $\|\weightu\|_{\lclass {\infty} {E_0}} \leqslant 1$ and
$\|\weightv\|_{\lclass {\infty} {E_1}} \leqslant 1$.
Since $f = \{f_j\}_{j \in I}$ is analytic and bounded, if we restrict $I$ so that $\mathbb T \times I$ becomes the support of $f$, we may assume that $\log |f_j| \in \lclassg {1}$ for all $j \in I$. Boundedness of $\weightu$ and $\weightv$ further implies that $\log |\weightv_j| \in \lclassg {1}$ for all $j \in I$.
Let us fix some $\varepsilon > 0$ and a sequence $0 < r_j < 1$ such that $r_j \to 1$. We denote by $P_r$ the operator of convolution with the Poisson kernel for radius $0 < r < 1$, that is, $P_r a (z) = a (r z)$ for any harmonic function $a$ on $\mathbb D$ and any $z \in \mathbb D$.
Let \begin {equation} \label {bsetdef}
B = \left\{ \log \weightw \mid \weightw \in \lclass {\infty} {E_1}, \|\weightw\|_{\lclass {\infty} {E_1}} \leqslant 2, \weightw \geqslant \weightv\right\} \subset \lclassg {1}. \end {equation} This set is convex, which follows from the well-known logarithmic convexity of the norm of a Banach lattice. We endow $B$ with the weak topology of~$\lclassg {1}$. By the Fatou property of $\lclass {\infty} {E_1}$ it is easy to see that $B$ is closed with respect to the convergence in measure, so $B$ is also closed in $\lclassg {1}$ and thus weakly closed. The Dunford--Pettis theorem easily shows that $B$ is a compact set, since the functions from $B$ are uniformly bounded from above and below by some summable functions.
For convenience, we denote by $B_Z$ the closed unit ball of a Banach space~$Z$. We endow $\hclass {\infty} {E_0'}$ with the topology of uniform convergence on compact sets in $\mathbb D \times I$, and define a (single-valued) map $ \Phi_0^{(j)} : C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}} \to B $ by $$
\Phi_0^{(j)} (h) = \log \left(\Delta (|P_{r_j} h|) + \weightv\right), \quad h \in C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}} $$ with a map $\Delta$ from Proposition~\ref {multcsel} applied to $F_0 = E'$ and $F_1 = E_1$ (observe that by Proposition~\ref {xyxm} we have $E_0' = E' E_1$) and the chosen value of $\varepsilon$. It is easy to see that $\Phi_0^{(j)}$ is continuous.
We endow $\hclass {\infty} {E_1}$ with the topology of uniform convergence on compact sets in $\mathbb D \times I$ and define a (single-valued) map $ \Phi_1 : B \to 2 B_{\hclass {\infty} {E_1}} $ by \begin {equation} \label {phi1def} \Phi_1 (\log \weightw) (z, \omega) = \exp \left(\frac 1 {2 \pi} \int_0^{2 \pi} \frac {e^{i \theta} + z} {e^{i \theta} - z} \log \weightw \left(e^{i \theta}, \omega\right) d\theta \right) \end {equation} for all $\log \weightw \in B$, $z \in \mathbb D$ and $\omega \in I$. This map is easily seen to be continuous and (since the integral under the exponent is the convolution with the Schwarz kernel)
$|\Phi_1 (\log \weightw)| = \weightw$ almost everywhere.
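The boundary behaviour of $\Phi_1$ is straightforward to check numerically. The following Riemann-sum discretization of~\eqref {phi1def} is a sketch of our own (with an arbitrary positive weight standing in for $\weightw$): the modulus of the resulting analytic function equals the exponential of the Poisson extension of the logarithm of the weight, and at the origin it is the geometric mean of the weight.

```python
import cmath, math

# Riemann-sum discretization of the Schwarz integral defining Phi_1:
# the kernel (e^{i theta} + z) / (e^{i theta} - z) applied to log w.
N = 2048
thetas = [2 * math.pi * k / N for k in range(N)]
w = [2.0 + math.sin(t) for t in thetas]        # a positive boundary weight

def schwarz(z):
    return sum((cmath.exp(1j * t) + z) / (cmath.exp(1j * t) - z)
               * math.log(wt) for t, wt in zip(thetas, w)) / N

z = 0.3 + 0.2j                                  # a point in the unit disk
phi1 = cmath.exp(schwarz(z))
# |Phi_1(z)| = exp(Re Schwarz integral) = exp(Poisson extension of log w)
assert abs(abs(phi1) - math.exp(schwarz(z).real)) < 1e-9
# at z = 0 the kernel is identically 1: Phi_1(0) is the geometric mean of w
gm = math.exp(sum(math.log(wt) for wt in w) / N)
assert abs(abs(cmath.exp(schwarz(0))) - gm) < 1e-9
```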
Observe that if $\psi = \Phi_1 (\log \weightw)$ for some $\log \weightw \in B$ and $\varphi = \frac f \psi$ then
$|\varphi| = \frac {|f|} {\weightw} \leqslant \frac {|f|} {\weightv} = \weightu$ and we have $\varphi \in \hclass {\infty} {E_0}$ with $\|\varphi\|_{\hclass {\infty} {E_0}} \leqslant 1$. On the other hand, $$
\delta \leqslant \|f (z)\|_{E} =
\left\|\varphi (z) \psi (z)\right\|_{E_0 E_1} \leqslant \|\varphi (z)\|_{E_0} \|\psi (z)\|_{E_1} \leqslant 2 \|\varphi (z)\|_{E_0}, $$
so $\frac \delta 2 \leqslant \|\varphi (z)\|_{E_0} \leqslant 1$ for all $z \in \mathbb D$. This means that $\varphi$ belongs to the set $$
D = \left\{ \varphi \in \hclass {\infty} {E_0} \mid \frac \delta 2 \leqslant \|\varphi (z)\|_{E_0} \leqslant 1 \text { for all $z \in \mathbb D$}\right\} $$ of corona data functions corresponding to the assumed corona property of~$E_0$. Thus we may define a (single-valued) map $$ \Phi_2 : \Phi_1 (B) \to D $$ by $\Phi_2 (\psi) = \frac f {\psi}$ for $\psi \in \Phi_1 (B)$. We endow $D$ with the topology of uniform convergence on compact sets in $\mathbb D \times I$. The continuity of $\Phi_2$ is evident.
We define a set-valued map $\Phi_3 : D \to 2^{C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}}}$ by $$ \Phi_3 (\varphi) = \left\{h \in \hclass {\infty} {E_0'} \mid \langle \varphi, h\rangle = 1,
\|h\|_{\hclass {\infty} {E_0'}} \leqslant C_{\frac \delta 2} \right\} $$ for $\varphi \in D$. By the assumed corona property of $E_0$ the map $\Phi_3$ takes nonempty values. Since the condition $\langle \varphi, h\rangle = 1$ is equivalent to $\langle \varphi (z), h (z)\rangle = 1$ for all $z \in \mathbb D$, it is easy to see that the values of $\Phi_3$ are convex and closed, and thus they are compact. Similarly, the closedness of the graph of $\Phi_3$ is easily verified.
Now we define a set-valued map $\Phi^{(j)} : C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}} \to 2^{C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}}}$ by $\Phi^{(j)} = \Phi_3 \circ \Phi_2 \circ \Phi_1 \circ \Phi_0^{(j)}$. The graph of $\Phi^{(j)}$ is closed since all individual maps are continuous in the appropriate sense (specifically, as a composition of upper semicontinuous maps, but it is easy to establish the continuity in this case directly using compactness). The domain $C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}}$ with the introduced topology is a compact set in a locally convex linear topological space. Thus $\Phi^{(j)}$ satisfies the assumptions of the Fan--Kakutani fixed point theorem, which implies that the maps $\Phi^{(j)}$ admit some fixed points $h_j \in C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}}$, that is, $h_j \in \Phi^{(j)} (h_j)$ for all $j$. This means that with $\log \weightw_j = \Phi_0^{(j)} (h_j)$, $\psi_j = \Phi_1 (\log \weightw_j)$ and
$\varphi_j = \Phi_2 (\psi_j)$ we have $h_j \in \Phi_3 (\varphi_j)$. The first two conditions imply that $|\psi_j| = \Delta (|P_{r_j} h_j|) + \weightv \geqslant \Delta (|P_{r_j} h_j|)$, so $$
\left\|(P_{r_j} h_j) (\psi_j)^{-1}\right\|_{\lclass {\infty} {E'}}
\leqslant \left\| \frac {|P_{r_j} h_j|} {\Delta (|P_{r_j} h_j|)}\right\|_{\lclass {\infty} {E'}} \leqslant \left(1 + \varepsilon\right) C_{\frac \delta 2} $$ by Proposition~\ref {multcsel}. Thus \begin {equation} \label {phi61}
\left\|\frac {h_j (r_j z)} {\psi_j (z)}\right\|_{E'} \leqslant \left(1 + \varepsilon\right) C_{\frac \delta 2} \end {equation} for all $z \in \mathbb D$, and condition $h_j \in \Phi_3 (\varphi_j)$ implies that \begin {equation} \label {phi62} 1 = \left\langle \frac {f (z)} {\psi_j (z)}, h_j (z)\right\rangle = \left\langle f (z), \frac {h_j (z)} {\psi_j (z)}\right\rangle \end {equation} for all $z \in \mathbb D$. Since sequences $\psi_j$ and $h_j$ are uniformly bounded on compact sets in $\mathbb D \times I$, by passing to a subsequence we may assume that $\psi_j \to \psi$ with some $\psi \in \Phi_1 (B)$ and $h_j \to h$ with some $h \in C_{\frac \delta 2} B_{\hclass {\infty} {E_0'}}$ uniformly on compact sets in $\mathbb D \times I$. Thus we may pass to the limits in~\eqref {phi61} and~\eqref {phi62} to see that $\frac h \psi$ is a suitable solution for the corona problem with data~$f$, which concludes the proof of Theorem~\ref {cpmultt}.
We remark that this construction can be modified to use the Tychonoff fixed point theorem, which is the particular case of single-valued maps in the setting of the Fan--Kakutani theorem. It suffices to find a continuous selection for the slightly enlarged map $\Phi_3$, which is the purpose of the next result; the arbitrarily small increase in the estimate is inconsequential for the scheme of the proof.
\begin {proposition} \label {cpcsel} Suppose that a finite-dimensional lattice $E$ has the corona property with constant $C_\delta$ for some $0 < \delta < 1$. Let $$
D_E = \left\{ f \in \hclass {\infty} {E} \mid \delta \leqslant \|f (z)\|_{E} \leqslant 1 \text { for all $z \in \mathbb D$}\right\}. $$ Then for any $\varepsilon > 0$ there exists a continuous map $$ K : D_E \to (1 + \varepsilon) C_\delta B_{\hclass {\infty} {E'}} $$ such that $\langle f, K (f)\rangle = 1$ for any $f \in D_E$. \end {proposition} Indeed, let $0 < \alpha < 1$. We define a set-valued map $$ K_0 : D_E \to (1 + \alpha) C_\delta B_{\hclass {\infty} {E'}} $$ by $$
K_0 (f) = \left\{g \in \hclass {\infty} {E'} \mid \|\langle f, g \rangle - 1\|_{\hclassg {\infty}} < \alpha,
\|g\|_{\hclass {\infty} {E'}} < (1 + \alpha) C_\delta \right\} $$ for $f \in D_E$. By the corona property assumption on $E$ the map $K_0$ takes nonempty values. It is easy to see that $K_0$ has open graph and thus $K_0$ is a lower semicontinuous map. The graph of a map $\overline K_0 : D_E \to (1 + \alpha) C_\delta B_{\hclass {\infty} {E'}}$ defined by \begin {multline*}
\overline K_0 (f) = \left\{g \in \hclass {\infty} {E'} \mid \|\langle f, g \rangle - 1\|_{\hclassg {\infty}} \leqslant \alpha,
\|g\|_{\hclass {\infty} {E'}} \leqslant (1 + \alpha) C_\delta \right\} \end {multline*} is easily seen to be the closure of the graph of the map $K_0$, and hence $\overline K_0$ is also a lower semicontinuous map. The values of $\overline K_0$ are convex and closed. By the Michael selection theorem there exists a continuous selection~$K_1$
of the map $\overline K_0$, that is, $K_1 (f) \in \overline K_0 (f)$ for all $f \in D_E$. Now observe that $|\langle f (z), K_1 (f) (z)\rangle - 1| \leqslant \alpha$ implies
$|\langle f (z), K_1 (f) (z)\rangle | \geqslant 1 - \alpha$ for all $z \in \mathbb D$ and $f \in D_E$, so we may set $K (f) = \frac {K_1 (f)} {\langle f, K_1 (f)\rangle }$ and have $\langle f, K (f)\rangle = 1$ with
$\|K (f)\|_{\hclass {\infty} {E'}} \leqslant \frac {1 + \alpha} {1 - \alpha} C_\delta$. Choosing $\alpha$ small enough yields the claimed range of~$K$.
Finally, we mention that Theorem~\ref {ecoronat} includes the result \cite [Corollary~2] {kisliakov2015pen}. This is implied by the following known observation; we give a proof for convenience. \begin {proposition} \label {myctmog} Suppose that~$X$ is a Banach lattice of measurable functions having the Fatou property and $X$ is $q$-concave with some $1 < q < \infty$. Then~$X$ has order continuous norm. \end {proposition}
The lattice $Z^\delta$ is defined by the norm $\|f\|_{Z^\delta} = \left\| |f|^{\frac 1 \delta} \right\|_Z^\delta$ for a quasi-normed lattice $Z$ of measurable functions and $\delta > 0$. The lattice~$X'$ is $q'$-convex, and hence $Y = (X')^{q'}$ is a Banach lattice with the Fatou property. Then $X' = Y^{\frac 1 {q'}}$. The Fatou property is equivalent to the order reflexivity $X = X''$, and using the well-known formula for the duals of the Calder\'on-Lozanovsky products (see, e.~g., \cite [Theorem~2.10] {schep2010}) we may write $$ X = (X')' = \left({Y}^{\frac 1 {q'}} {\lclassg {\infty}\strut}^{\frac 1 q}\right)' = Y'^{\frac 1 {q'}} {\lclassg {1}\strut}^{\frac 1 q} = Y'^{\frac 1 {q'}} \lclassg {q}. $$ Since the lattice $\lclassg {q}$ has order continuous norm, it suffices to establish the following. \begin {proposition} \label {ocnmult} Suppose that $X$ and $Y$ are quasi-normed lattices of measurable functions and $Y$ has order continuous quasi-norm. Then $X Y$ also has order continuous quasi-norm. \end {proposition}
Let $f_n \in X Y$ be a sequence with $f = \sup_n |f_n| \in X Y$ such that $f_n \to 0$ almost everywhere. Then $f = g h$ with some $g \in X$ and $h \in Y$. We may assume that $g, h \geqslant 0$. The sequence $h_n = \frac {f_n} g$ also converges to $0$ almost everywhere, and $|h_n| \leqslant \frac f g = h$, so $\sup_n |h_n| \in Y$. By the order continuity of the quasi-norm of $Y$
we have $\|h_n\|_Y \to 0$, hence $\|f_n\|_{X Y} \leqslant \|g\|_X \|h_n\|_Y \to 0$.
\section {Proof of Proposition~\ref {cpfdred}}
\label {cpmpred}
First, observe that if a lattice $E$ on $\mathbb Z$ has order continuous norm then $\lclass {1} {E}$ also has order continuous norm, we have $\lclass {\infty} {E'} = \left[\lclass {1} {E}\right]' = \left[\lclass {1} {E}\right]^*$, and $\hclass {\infty} {E'}$ is easily seen to be $w^*$-closed in $\lclass {\infty} {E'}$ (see, e.~g., \cite [\S 1.2.1] {kisliakovrutsky2012en}). The $w^*$-convergence of a sequence $h_k \in \hclass {\infty} {E'}$ to some $h$ implies that $h_k (z) \to h (z)$ in the $*$-weak topology of $E' = E^*$ for all $z \in \mathbb D$.
Now suppose that under the assumptions of Proposition~\ref {cpfdred} $0 < \delta < 1$ and $f \in \hclass {\infty} {E}$ satisfies
$\delta \leqslant \|f (z)\|_E \leqslant 1$ for all $z \in \mathbb D$. Let $I_k \subset \mathbb Z$ be a nondecreasing sequence such that $\bigcup_k I_k = \mathbb Z$, and fix a sequence $\varepsilon_j > 0$, $\varepsilon_j \to 0$. We consider the natural approximations $f_{A, r, k} (z) = A f (r z) \chi_{I_k}$, $z \in \mathbb D$, for $0 < r \leqslant 1$ and $A \geqslant 1$. If there exists some sequence of parameters $A_j \to 1$, $r_j \to 1$ and $k_j \to \infty$ such that $f_j = f_{A_j, r_j, k_j}$ is data for the corona problem with lower bound $\delta$ then by the assumptions there exist some $g_j$ such that $\langle f_j, g_j\rangle = 1$ and
$\|g_j\|_{\hclass {\infty} {E'}} \leqslant (1 + \varepsilon_j) C_\delta$. By passing to a subsequence we may assume that $A_j$ is nonincreasing, $r_j$ and $k_j$ are nondecreasing, and $g_j \to g$ in the $*$-weak topology of $\lclass {\infty} {E'}$ for some $g \in \hclass {\infty} {E'}$. Observe that \begin {multline} \label {cpfdreddec} 1 = \langle f_j (z), g_j (z)\rangle = \\ \langle f (z), g_j (z)\rangle + \langle f (r_j z) - f (z), g_j (z)\rangle + \langle f_j (z) - f (r_j z), g_j (z)\rangle \end {multline} for all $z \in \mathbb D$. The first term in~\eqref {cpfdreddec} converges to $\langle f (z), g (z)\rangle $. Since the $E$-valued analytic function $f$ is strongly continuous at every $z \in \mathbb D$, the second term in~\eqref {cpfdreddec} converges to $0$. By the assumptions \begin {multline} \label {cpfdreddec2}
|f_j (z) - f (r_j z)| = |A_j f (r_j z) \chi_{I_{k_j}} - f (r_j z)| \leqslant \\
\chi_{I_{k_j}} |A_j f (r_j z) \chi_{I_{k_j}} - f (r_j z)| +
\chi_{\mathbb Z \setminus I_{k_j}} |A_j f (r_j z) \chi_{I_{k_j}} - f (r_j z)| = \\
\chi_{I_{k_j}} |A_j f (r_j z) - f (r_j z)| +
\chi_{\mathbb Z \setminus I_{k_j}} |f (r_j z)| \leqslant \\
(A_j - 1) |f (r_j z)| +
\chi_{\mathbb Z \setminus I_{k_j}} |f (z)| + |f (r_j z) - f (z)|. \end {multline} The norm in $E$ of the first term in~\eqref {cpfdreddec2} is estimated by $(A_j - 1)$, and thus it converges to $0$. The second term in~\eqref {cpfdreddec2} converges to $0$ in $E$ by the assumption that $E$ has order continuous norm. The third term in~\eqref {cpfdreddec2} converges to $0$ in $E$ by the strong continuity of $f$ in $\mathbb D$. It follows that the third term in~\eqref {cpfdreddec} is dominated by
$\|f_j (z) - f (r_j z)\|_E \|g_j (z)\|_{E'} \leqslant \|f_j (z) - f (r_j z)\|_E (1 + \varepsilon_j) C_\delta$, and so it also converges to $0$. Therefore passing to the limit in \eqref {cpfdreddec} yields $\langle f (z), g (z)\rangle = 1$ for all $z \in \mathbb D$.
We also have $\|g\|_{\hclass {\infty} {E'}} \leqslant \limsup_j \|g_j\|_{\hclass {\infty} {E'}} \leqslant C_\delta$, so~$g$ is a solution for the corona problem with data~$f$ having the claimed constant~$C_\delta$.
Thus it suffices to find a suitable sequence of parameters. We consider two cases. In the first case
$\left\|f (z)\right\|_E = 1$ for all $z \in \mathbb D$. We take $A_j = 1$ and any increasing sequence $r_j \to 1$. By the order continuity of norm we have
$\left\|f (z) \chi_{I_k}\right\|_E \to \left\|f (z)\right\|_E = 1$ for every $z \in \mathbb D$, so by the compactness of the closed disks $\overline {r_j \mathbb D} \subset \mathbb D$ and the assumption that $\delta < 1$
we have $\left\|f (r_j z) \chi_{I_{k_j}}\right\|_E \geqslant \delta$ for all $z \in \mathbb D$ once $k_j$ is taken large enough. Thus $f_j$ is suitable corona data in this case.
In the second case $\left\|f (z_0)\right\|_E < 1$ for some $z_0 \in \mathbb D$. With the help of an automorphism we may assume for convenience that $z_0 = 0$. We also fix any increasing sequence $r_j \to 1$. A simple consequence of the Schwarz lemma (see, e.~g., \cite [Chapter~1, Corollary~1.3] {garnett1981baf}) shows that \begin {equation} \label {cpfdreddec3}
|\langle f (z), e'\rangle | \leqslant \frac {|\langle f (0), e'\rangle | + |z|} {1 + |\langle f (0), e'\rangle |\,\, |z|} \end {equation}
for all $z \in \mathbb D$ and $e' \in E' = E^*$ with $\|e'\|_{E'} \leqslant 1$. Since the function $(x, y) \mapsto \frac {x + y} {1 + x y}$ is increasing in both $x \in [0, 1]$ and $y \in [0, 1]$, taking the supremum in \eqref {cpfdreddec3} over all such $e'$ yields
$\|f (z)\|_E \leqslant \frac {\|f (0)\|_E + |z|} {1 + \|f (0)\|_E |z|}$, thus $\alpha_j = \sup_{z \in \mathbb D} \|f (r_j z)\|_E < 1$. Setting $A_j = \frac 1 {\alpha_j}$ yields $\|A_j f (r_j z)\|_E \leqslant 1$ for all $z \in \mathbb D$. Again, since $E$ has order continuous norm we have
$\left\|A_j f (z) \chi_{I_k}\right\|_E \to \left\|A_j f (z)\right\|_E \geqslant A_j \delta > \delta$ for every $z \in \mathbb D$, and we also have
$\left\|A_j f (r_j z) \chi_{I_{k_j}}\right\|_E \geqslant \delta$ for large enough $k_j$, so $f_j$ is suitable corona data in this case as well. The proof of Proposition~\ref {cpfdred} is complete.
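The monotonicity of the bound obtained from the Schwarz lemma is elementary but easy to sanity-check. The following snippet is a purely numerical illustration, not part of the proof (the name `phi` is ours): it verifies on a grid that $(x, y) \mapsto \frac{x + y}{1 + x y}$ is increasing in each variable and stays strictly below $1$ on $[0, 1)^2$, which is exactly what yields $\alpha_j = \sup_{z \in \mathbb D} \|f (r_j z)\|_E < 1$.

```python
# Numerical illustration only: the Moebius-type bound used after the
# Schwarz lemma step.  phi(x, y) = (x + y) / (1 + x y).
def phi(x, y):
    return (x + y) / (1 + x * y)

grid = [i / 100 for i in range(100)]  # sample points in [0, 1)

# increasing in the first argument for each fixed second argument;
# by symmetry phi is then increasing in the second argument as well
assert all(phi(grid[i], y) <= phi(grid[i + 1], y)
           for y in grid for i in range(len(grid) - 1))

# strictly below 1 on [0, 1) x [0, 1), since 1 - phi = (1-x)(1-y)/(1+xy)
assert all(phi(x, y) < 1 for x in grid for y in grid)
```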
\subsection* {Acknowledgements}
The author is grateful to S.~V.~Kislyakov for stimulating discussions and a surprising conjecture that eventually became the statement of Theorem~\ref {cpmultt}, and to the referee for thorough and helpful remarks.
\normalsize \baselineskip=17pt
\bibliographystyle {acmx}
\bibliography {bmora}
\end{document}
\begin{document}
\begin{abstract} Let $G$ be a connected complex reductive algebraic group with Lie algebra $\mathfrak{g}$. The Lusztig--Vogan bijection relates two bases for the bounded derived category of $G$-equivariant coherent sheaves on the nilpotent cone $\mathcal{N}$ of $\mathfrak{g}$. One basis is indexed by $\Lambda^+$, the set of dominant weights of $G$, and the other by $\Omega$, the set of pairs $(\mathcal{O}, \mathcal{E})$ consisting of a nilpotent orbit $\mathcal{O} \subset \mathcal{N}$ and an irreducible $G$-equivariant vector bundle $\mathcal{E} \rightarrow \mathcal{O}$. The existence of the Lusztig--Vogan bijection $\gamma \colon \Omega \rightarrow \Lambda^+$ was proven by Bezrukavnikov, and an algorithm computing $\gamma$ in type $A$ was given by Achar. Herein we present a combinatorial description of $\gamma$ in type $A$ that subsumes and dramatically simplifies Achar's algorithm. \end{abstract}
\title{Computing the Lusztig--Vogan Bijection}
\tableofcontents
\section*{Overview} In 1989, Lusztig concluded his landmark four-part study of cells in affine Weyl groups \cite{Lusztig1, Lusztig2, Lusztig3, Lusztig4} with an almost offhand remark:
\begin{quote} ``\ldots we obtain a (conjectural) bijection between $X_{\text{dom}}$ and the set of pairs $(u, \rho)$, (up to $G$-conjugacy) with $u \in G$ unipotent and $\rho$ an irreducible representation of $Z_G(u)$.'' \end{quote}
By $X_{\text{dom}}$, Lusztig meant the set of dominant weights of a connected complex reductive algebraic group $G$. (We refer to this set as $\Lambda^+$.) We denote by $\Omega$ the set of pairs $(\mathcal{C}, V)$, where $\mathcal{C} \subset G$ is a unipotent conjugacy class and $V$ is an irreducible representation of the centralizer $Z_G(u)$ for $u \in \mathcal{C}$, which is uniquely determined by $\mathcal{C}$ up to inner isomorphism.
So elementary an assertion was Lusztig's claim of a bijection between $\Lambda^+$ and $\Omega$ that its emergence from so deep an opus was in retrospect an obvious indication that the close connection between the sets in question transcends the setting in which it was first glimpsed.
Indeed, Vogan's work on associated varieties \cite{Vogan} led him to the same supposition only two years later. Let $\mathfrak{g}$ denote the Lie algebra of $G$, and let $\mathcal{N}^*$ denote the nilpotent cone of the dual space $\mathfrak{g}^*$. Fixing a compact real form $K$ of $G$ with Lie algebra $\mathfrak{k}$, let $\mathfrak{C}$ be the category of finitely generated $(S(\mathfrak{g}/\mathfrak{k}), K)$-modules for which each prime ideal in the support corresponds under the Nullstellensatz to a subvariety of $(\mathfrak{g}/\mathfrak{k})^* \subset \mathfrak{g}^*$ contained in $\mathcal{N}^*$. In 1991, Vogan \cite{Vogan} showed that $\Omega$ --- in an alternate incarnation as the set of pairs $(\mathcal{O}, V)$, where $\mathcal{O} \subset \mathcal{N}^*$ is a coadjoint orbit and $V$ is an irreducible representation of the stabilizer $G_X$ for $X \in \mathcal{O}$ --- indexes a basis for the Grothendieck group $K_0(\mathfrak{C})$. That $\Lambda^+$ also indexes such a basis pointed to an uncharted bijection.
Further evidence for the existence of what has come to be known as the \textit{Lusztig--Vogan bijection} was uncovered by Ostrik \cite{Ostrik}, who was first to consider $\Omega$ and $\Lambda^+$ in the context in which the conjecture was ultimately confirmed by Bezrukavnikov \cite{Bezrukav} --- that of the equivariant $K$-theory of the nilpotent cone of $\mathfrak{g}$. Let $\mathcal{N}$ denote this nilpotent cone. Ostrik examined $(G \times \mathbb{C}^*)$-equivariant coherent sheaves on $\mathcal{N}$. Subsequently, Bezrukavnikov examined $G$-equivariant coherent sheaves on $\mathcal{N}$ and proved Lusztig and Vogan's claim.
Let $\mathfrak{D} := \textbf{D}^b(\operatorname{Coh}^G(\mathcal{N}))$ be the bounded derived category of $G$-equivariant coherent sheaves on $\mathcal{N}$. Bezrukavnikov \cite{Bezrukav} showed not only that $\Omega$ and $\Lambda^+$ both index bases for the Grothendieck group $K_0(\mathfrak{D})$, but also that there exists a bijection $\gamma \colon \Omega \rightarrow \Lambda^+$ uniquely characterized by the following property: For any total order $\leq$ on $\Lambda^+$ compatible with the root order, if $\leq$ is imposed on $\Omega$ via $\gamma^{-1}$, then the change-of-basis matrix is upper triangular.
In his proof, Bezrukavnikov did not construct $\gamma$. Instead, he exhibited a $t$-structure on $\mathfrak{D}$, the heart of which is a quasi-hereditary category with irreducible objects indexed by $\Omega$ and costandard objects indexed by $\Lambda^+$. This entailed the existence of $\gamma$, but left open the question of how $\gamma$ is computed.\footnote{In type $A$, the existence of the Lusztig--Vogan bijection also follows from Xi's work on the based ring of the affine Weyl group \cite{Xi}, in which he proved a more general conjecture of Lusztig \cite{Lusztig4}. }
In his 2001 doctoral thesis \cite{Achart}, Achar set $G := GL_n(\mathbb{C})$ and formulated algorithms to compute mutually inverse maps $\Omega \rightarrow \Lambda^+$ and $\Lambda^+ \rightarrow \Omega$ that yield an upper triangular change of basis in $K_0(\mathfrak{C})$. Then, in a follow-up article \cite{Acharj}, he showed that his calculations carry over to $K_0(\mathfrak{D})$ and therefore that his bijection agrees with Bezrukavnikov's.
Achar's algorithm for $\gamma^{-1}$ is elegant and simple. Unfortunately, his algorithm for $\gamma$ is a series of nested while loops, set to terminate upon reaching a configuration satisfying a list of conditions. Progress is tracked by a six-part monovariant, which is whittled down as the algorithm runs. Achar \cite{Achart, Acharj} proved that his algorithm halts on every input after finitely many steps. But it does not directly describe the image of a given pair $(\mathcal{O}, V) \in \Omega$.
In this article, we present an algorithm that directly describes the terminal configuration returned by Achar's algorithm on an input in $\Omega$, bypassing all of Achar's while loops and obviating the need for an accompanying monovariant. The upshot is a combinatorial algorithm to compute $\gamma$ for $G = GL_n(\mathbb{C})$ that encompasses and expedites Achar's algorithm and holds the prospect of extension to other classical groups.\footnote{A conjectural algorithm, to compute $\gamma$ for \textit{even} nilpotent orbits in type $C$, is featured in Chapter 7 of the author's 2017 doctoral thesis \cite{Rush}. }
\eject
\section*{Index of Notation}
\begin{tabularx}{6.0in}{l X l}
$G$ & connected complex reductive algebraic group & \S 1.1 \\
$\mathfrak{g}$ & Lie algebra of $G$ & \S 1.1 \\
$\mathcal{N}$ & nilpotent cone of $\mathfrak{g}$ & \S 1.1 \\
$\mathfrak{D}$ & bounded derived category of $G$-equivariant coherent sheaves on $\mathcal{N}$ & \S 1.1 \\
$X$ & nilpotent element & \S 1.1 \\
$\mathcal{O}_X$ & nilpotent orbit of $X$ & \S 1.1 \\
$G_X$ & stabilizer of $X$ & \S 1.1 \\
$(\mathcal{O}_X, V)$ & pair consisting of nilpotent orbit $\mathcal{O}_X$ and irreducible $G_X$-representation $V$ & \S 1.1 \\
$IC_{(\mathcal{O}_X, V)}$ & intersection cohomology complex associated to $(\mathcal{O}_X, V)$ & \S 1.1 \\
$\Omega$ & equivalence classes of pairs $(\mathcal{O}_X, V)$ & \S 1.1 \\
$A_{\lambda}$ & complex associated to weight $\lambda$ via Springer resolution & \S 1.1 \\
$\Lambda$ & weight lattice of $G$ & \S 1.1 \\
$\Lambda^+$ & dominant weights of $G$ & \S 1.1 \\
$\gamma(\mathcal{O}_X, V)$ & Lusztig--Vogan bijection & \S 1.1 \\
$A^P_{\lambda}$ & complex associated to weight $\lambda$ via $T^*(G/P) \rightarrow \overline{\mathcal{O}}$ & \S 1.1 \\
$[\alpha_1, \ldots, \alpha_{\ell}]$ & partition associated to $X$ & \S 1.2 \\
$[k_1^{a_1}, \ldots, k_m^{a_m}]$ & distinct parts of $\alpha$ with multiplicities & \S 1.2 \\
$G_X^{\text{red}}$ & reductive part of $G_X$ & \S 1.2 \\
$[\alpha^*_1, \ldots, \alpha^*_s]$ & conjugate partition to $\alpha$ & \S 1.2 \\
$P_X$ & parabolic subgroup associated to $X$ & \S 1.2 \\
$L_X$ & Levi factor of $P_X$ & \S 1.2 \\
$L_X^{\text{ref}}$ & Levi subgroup of $L_X$ containing $G_X^{\text{red}}$ & \S 1.2 \\
$X_{\alpha}$ & representative element of $\mathcal{O}_X$ & \S 1.3 \\
$\mathcal{O}_{\alpha}$ & $\mathcal{O}_{X_{\alpha}}$ & \S 1.3 \\
$V^{\nu(t)}$ & irreducible $GL_{a_t}$-representation with highest weight $\nu(t)$ & \S 1.3 \\
$V^{(\nu(1), \ldots, \nu(m))}$ & $V^{\nu(1)} \boxtimes \cdots \boxtimes V^{\nu(m)}$ & \S 1.3 \\
$[\nu_1, \ldots, \nu_{\ell}]$ & integer sequence & \S 1.3 \\
$G_{\alpha}$ & $G_{X_{\alpha}}$ & \S 1.3 \\
$G_{\alpha}^{\text{red}}$ & $G_{X_{\alpha}}^{\text{red}}$ & \S 1.3 \\
$V^{(\alpha, \nu)}$ & $G_{\alpha}$-representation arising from $V^{(\nu(1), \ldots, \nu(m))}$ & \S 1.3 \\
$P_{\alpha}$ & $P_{X_{\alpha}}$ & \S 1.3 \\
$L_{\alpha}$ & $L_{X_{\alpha}}$ & \S 1.3 \\
$\Lambda^+_{\alpha}$ & dominant weights of $L_{\alpha}$ & \S 1.3 \\
$W^{\lambda^j}$ & irreducible $GL_{\alpha^*_j}$-representation with highest weight $\lambda^j$ & \S 1.3 \\
$W^{\lambda}$ & $W^{\lambda^1} \boxtimes \cdots \boxtimes W^{\lambda^s}$ & \S 1.3 \\
$A^{\alpha}_{\lambda}$ & $A^{P_{\alpha}}_{\lambda}$ & \S 1.3 \\
$W_{\alpha}$ & Weyl group of $L_{\alpha}$ & \S 1.3 \\
$\rho_{\alpha}$ & half-sum of positive roots of $L_{\alpha}$ & \S 1.3 \\
$W$ & Weyl group of $G$ & \S 1.3 \\
$\operatorname{dom}(\mu)$ & unique dominant weight of $G$ in $W$-orbit of $\mu$ & \S 1.3 \\
$\Omega_{\alpha}$ & dominant integer sequences with respect to $\alpha$ & \S 1.3 \\
$\Lambda^+_{\alpha, \nu}$ & dominant weights $\mu$ of $L_{\alpha}$ such that $V^{(\alpha, \nu)}$ occurs in decomposition of $W^{\mu}$ as direct sum of irreducible $G_{\alpha}^{\text{red}}$-representations & \S 1.3 \\
$\mathfrak{A}(\alpha, \nu)$ & integer-sequences version of algorithm & \S 1.5 \\
$\mathsf{A}(\alpha, \nu)$ & Achar's algorithm & \S 1.5 \\
$\mathcal{A}(\alpha, \nu)$ & weight-diagrams version of algorithm & \S 1.5 \\
$\operatorname{dom}(\iota)$ & rearrangement of entries of $\iota$ in weakly decreasing order & \S 2.1 \\
$\mathcal{C}_{-1}(\alpha, \nu, i, I_a, I_b)$ & candidate-ceiling function & \S 2.2 \\
$\mathcal{R}_{-1}(\alpha, \nu)$ & ranking-by-ceilings function & \S 2.2 \\
$\sigma$ & permutation & \S 2.2 \\
$\mathbb{Z}^{\ell}_{\text{dom}}$ & weakly decreasing integer sequences of length $\ell$ & \S 2.2 \\
$\mathcal{U}_{-1}(\alpha, \nu, \sigma)$ & column-ceilings function & \S 2.2 \\
$\mathfrak{A}_{\operatorname{iter}}(\alpha, \nu)$ & iterative integer-sequences version of algorithm & \S 2.2 \\
$D_{\alpha}$ & weight diagrams of shape-class $\alpha$ & \S 3 \\
$X$ & weight diagram & \S 3 \\
$X^j_i$ & $i^{\text{th}}$ entry from top in $j^{\text{th}}$ column of $X$ & \S 3 \\
$EX$ & map $D_{\alpha} \rightarrow D_{\alpha}$ & \S 3 \\
$(X, Y)$ & diagram pair & \S 3 \\
$\kappa(X)$ & map $D_{\alpha} \rightarrow \Omega_{\alpha}$ & \S 3 \\
$h(X)$ & map $D_{\alpha} \rightarrow \Lambda^+_{\alpha}$ & \S 3 \\
$\eta(Y)$ & map $D_{\alpha} \rightarrow \Lambda^+$ & \S 3 \\
$D_{\ell}$ & weight diagrams with $\ell$ rows & \S 4.1 \\
$X_{i,j}$ & entry of $X$ in $i^{\text{th}}$ row and $j^{\text{th}}$ column & \S 4.1 \\
$\mathcal{S}(\alpha, \sigma, \iota)(i)$ & row-survival function & \S 4.1 \\
$\mathcal{k}$ & number of branches & \S 4.1 \\
$\ell_x$ & number of rows surviving into $x^{\text{th}}$ branch & \S 4.1 \\
$\mathcal{C}_1(\alpha, \nu, i, I_a, I_b)$ & candidate-floor function & \S 4.2 \\
$\mathcal{R}_1(\alpha, \nu)$ & ranking-by-floors function & \S 4.2 \\
$\mathcal{U}_1(\alpha, \nu, \sigma)$ & column-floors function & \S 4.2 \\
$\alpha^*_j$ & $|\lbrace i: \alpha_i \geq j \rbrace|$ & \S 4.2 \\
$\#(X,i)$ & number of boxes in $i^{\text{th}}$ row of $X$ & \S 4.3 \\
$\Sigma(X,i)$ & sum of entries in $i^{\text{th}}$ row of $X$ & \S 4.3 \\
$\mathcal{P}(\alpha, \iota)(i)$ & row-partition function & \S 5 \\
$\operatorname{Cat}$ & diagram-concatenation function & \S 5 \\
$\mathcal{T}_j(X)$ & column-reduction function & \S 5 \\
\end{tabularx}
\eject
\section{Introduction}
\subsection{Sheaves on the nilpotent cone} Let $G$ be a connected complex reductive algebraic group with Lie algebra $\mathfrak{g}$. An element $X \in \mathfrak{g}$ is \textit{nilpotent} if $X \in [\mathfrak{g}, \mathfrak{g}]$ and the endomorphism $\operatorname{ad} X \colon \mathfrak{g} \rightarrow \mathfrak{g}$ is nilpotent. The \textit{nilpotent cone} $\mathcal{N}$ comprises the nilpotent elements of $\mathfrak{g}$. Since $\mathcal{N}$ is a subvariety of $\mathfrak{g}$ (cf. Jantzen \cite{Jantzen}, section 6.2), we may consider the bounded derived category $\mathfrak{D} := \textbf{D}^b(\operatorname{Coh}^G(\mathcal{N}))$ of $G$-equivariant coherent sheaves on $\mathcal{N}$.
Let $X \in \mathfrak{g}$ be nilpotent, and write $\mathcal{O}_X \subset \mathcal{N}$ for the orbit of $X$ in $\mathfrak{g}$ under the adjoint action of $G$. We refer to $\mathcal{O}_X$ as the \textit{nilpotent orbit} of $X$.
Write $G_X$ for the stabilizer of $X$ in $G$. To an irreducible representation $V$ of $G_X$ corresponds the $G$-equivariant vector bundle \[E_{(\mathcal{O}_X, V)} := G \times_{G_X} V \rightarrow \mathcal{O}_X\] with projection given by $(g, v) \mapsto \operatorname{Ad}(g) (X)$. Its sheaf of sections $\mathcal{E}_{(\mathcal{O}_X, V)}$ is a $G$-equivariant coherent sheaf on $\mathcal{O}_X$. To arrive at an object in the derived category $\mathfrak{D}$, we build the complex $\mathcal{E}_{(\mathcal{O}_X, V)}[\frac{1}{2} \dim \mathcal{O}_X]$ consisting of $\mathcal{E}_{(\mathcal{O}_X, V)}$ concentrated in degree $-\frac{1}{2} \dim \mathcal{O}_X$. Then we set \[IC_{(\mathcal{O}_X, V)} := j_{!*}\left(\mathcal{E}_{(\mathcal{O}_X, V)}\left[\frac{1}{2} \dim \mathcal{O}_X\right]\right) \in \mathfrak{D},\] where $j_{!*}$ denotes the Goresky--MacPherson extension functor obtained from the inclusion $j \colon \mathcal{O}_X \rightarrow \mathcal{N}$ and Bezrukavnikov's $t$-structure on $\mathfrak{D}$.
Let ${\Omega}^{\text{pre}}$ be the set of pairs $\lbrace (\mathcal{O}_X, V) \rbrace_{X \in \mathcal{N}}$ consisting of a nilpotent orbit $\mathcal{O}_X$ and an irreducible representation $V$ of the stabilizer $G_X$. We assign an equivalence relation to ${\Omega}^{\text{pre}}$ by stipulating that $(\mathcal{O}_X, V) \sim (\mathcal{O}_Y, W)$ if there exist $g \in G$ and an isomorphism of vector spaces $\pi \colon V \rightarrow W$ such that $\operatorname{Ad}(g) X = Y$ and the group isomorphism $\operatorname{Ad}(g) \colon G_X \rightarrow G_Y$ manifests $\pi$ as an isomorphism of $G_X$-representations.
Note that $(\mathcal{O}_X, V) \sim (\mathcal{O}_Y, W)$ implies $\mathcal{O}_X = \mathcal{O}_Y$ and $E_{(\mathcal{O}_X, V)} \cong E_{(\mathcal{O}_Y, W)}$. Thus, the map associating the intersection cohomology complex $IC_{(\mathcal{O}_X, V)}$ in $\mathfrak{D}$ to the equivalence class of $(\mathcal{O}_X, V)$ in $\Omega^{\text{pre}}$ is well-defined. Set $\Omega := \Omega^{\text{pre}} / \sim$. Then $\Omega$ indexes the family of complexes $\lbrace IC_{(\mathcal{O}_X, V)} \rbrace_{(\mathcal{O}_X, V) \in \Omega}$. (The notation $(\mathcal{O}_X, V) \in \Omega$ is shorthand for the equivalence class represented by $(\mathcal{O}_X, V)$ belonging to $\Omega$.)
On the other hand, weights of $G$ also give rise to complexes in $\mathfrak{D}$. To see this, let $B$ be a Borel subgroup of $G$, and fix a maximal torus $T \subset B$. A weight $\lambda \in \operatorname{Hom}(T, \mathbb{C}^*)$ is a character of $T$, from which we obtain a one-dimensional representation $\mathbb{C}^{\lambda}$ of $B$ by stipulating that the unipotent radical of $B$ act trivially. Then \[L_{\lambda} := G \times_B \mathbb{C}^{\lambda} \rightarrow G/B \] is a $G$-equivariant line bundle on the flag variety $G/B$. Its sheaf of sections $\mathcal{L}_{\lambda}$ is a $G$-equivariant coherent sheaf on $G/B$ which may be pulled back to the cotangent bundle $T^*(G/B)$ along the projection $p \colon T^*(G/B) \rightarrow G/B$.
From the Springer resolution of singularities $\pi \colon T^*(G/B) \rightarrow \mathcal{N}$, we obtain the direct image functor $\pi_{*}$, and then the total derived functor $R\pi_{*}$. We set \[A_{\lambda} := R\pi_{*} p^{*} \mathcal{L}_{\lambda} \in \mathfrak{D}.\]
Let $\Lambda := \operatorname{Hom}(T, \mathbb{C}^*)$ be the weight lattice of $G$, and let $\Lambda^+ \subset \Lambda$ be the subset of dominant weights with respect to $B$. The family of complexes $\lbrace A_{\lambda} \rbrace_{\lambda \in \Lambda^+}$ is sufficient to generate the Grothendieck group $K_0(\mathfrak{D})$, so it is this family which we compare to $\lbrace IC_{(\mathcal{O}_X, V)} \rbrace_{(\mathcal{O}_X, V) \in \Omega}$. Entailed in the relationship is the Lusztig--Vogan bijection.
\begin{thm}[Bezrukavnikov \cite{Bezrukav}, Corollary 4] \label{bez} The Grothendieck group $K_0(\mathfrak{D})$ is a free abelian group for which both the sets $\lbrace [IC_{(\mathcal{O}_X, V)}] \rbrace_{(\mathcal{O}_X, V) \in \Omega}$ and $\lbrace [A_{\lambda}] \rbrace_{\lambda \in \Lambda^+}$ form bases. There exists a unique bijection $\gamma \colon \Omega \rightarrow \Lambda^+$ such that \[\left[IC_{(\mathcal{O}_X, V)}\right] \in \operatorname{span} \lbrace [A_{\lambda}] : \lambda \leq \gamma(\mathcal{O}_X, V) \rbrace, \] where the partial order on the weights is the root order, viz., the transitive closure of the relations $\upsilon \lessdot \omega$ for all $\upsilon, \omega \in \Lambda$ such that $\omega - \upsilon$ is a positive root with respect to $B$.
Furthermore, the coefficient of $[A_{\gamma(\mathcal{O}_X, V)}]$ in the expansion of $[IC_{(\mathcal{O}_X, V)}]$ is $\pm 1$. \end{thm}
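For $G = GL_n$ the root order in Theorem~\ref{bez} admits a concrete test: by the standard type-$A$ fact, $\omega - \upsilon$ is a nonnegative integer combination of the simple roots $e_i - e_{i+1}$ if and only if the entries of $\omega - \upsilon$ sum to zero and all proper partial sums are nonnegative. A hedged sketch (the helper name is ours, not the paper's):

```python
# Root-order comparison for GL_n weights, i.e. integer sequences of
# length n: upsilon <= omega iff omega - upsilon is a nonnegative
# integer combination of the simple roots e_i - e_{i+1}.
def root_order_leq(upsilon, omega):
    assert len(upsilon) == len(omega)
    diff = [w - u for u, w in zip(upsilon, omega)]
    if sum(diff) != 0:          # same total weight is necessary
        return False
    partial = 0
    for d in diff[:-1]:         # proper partial sums must be >= 0
        partial += d
        if partial < 0:
            return False
    return True

# [1, 1, 0] <= [2, 0, 0]: the difference [1, -1, 0] is the simple root e_1 - e_2
assert root_order_leq([1, 1, 0], [2, 0, 0])
assert not root_order_leq([2, 0, 0], [1, 1, 0])
```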
The association of the complex $A_{\lambda}$ to the weight $\lambda$ evinces a more general construction of objects in $\mathfrak{D}$ that is instrumental in identifying the bijection $\gamma$. Let $P \supset B$ be a parabolic subgroup, and let $U_P$ be its unipotent radical. Denote the Lie algebra of $U_P$ by $\mathfrak{u}_P$. The unique nilpotent orbit $\mathcal{O}$ for which $\mathcal{O} \cap \mathfrak{u}_P$ is an open dense subset of $\mathfrak{u}_P$ is called the \textit{Richardson orbit} of $P$, and there exists a canonical map $\pi \colon T^*(G/P) \rightarrow \overline{\mathcal{O}}$ analogous to the Springer resolution.
Let $L$ be the Levi factor of $P$ that contains $T$. From a weight $\lambda \in \Lambda$ dominant with respect to the Borel subgroup $B_L := B \cap L$ of $L$, we obtain an irreducible $L$-representation $W^{\lambda}$ with highest weight $\lambda$, which we may regard as a $P$-representation by stipulating that $U_P$ act trivially. Then \[M_{\lambda} := G \times_P W^{\lambda} \rightarrow G/P\] is a $G$-equivariant vector bundle on $G/P$. Pulling back its sheaf of sections $\mathcal{M}_{\lambda}$ to the cotangent bundle $T^*(G/P)$ along the canonical projection $p \colon T^*(G/P) \rightarrow G/P$, and then pushing the result forward onto $\mathcal{N}$, we end up with the complex \[A^P_{\lambda} := R\pi_* p^*\mathcal{M}_{\lambda} \in \mathfrak{D}.\]
Note that the Richardson orbit of $B$ is the \textit{regular nilpotent orbit} $\mathcal{O}^{\text{reg}}$, uniquely characterized by the property $\overline{\mathcal{O}^{\text{reg}}} = \mathcal{N}$. The Levi factor of $B$ containing $T$ is $T$ itself. Thus, for all $\lambda \in \Lambda$, the complex $A^B_{\lambda}$ is defined and coincides with $A_{\lambda}$, meaning that the above construction specializes to that of $\lbrace A_{\lambda} \rbrace_{\lambda \in \Lambda}$, as we claimed.
\subsection{The nilpotent cone of $\mathfrak{gl}_n$} Henceforward we set $G := GL_n(\mathbb{C})$. Then $\mathfrak{g} = \mathfrak{gl}_n(\mathbb{C})$. Let $X \in \mathfrak{g}$ be nilpotent. The existence of the Jordan canonical form implies the existence of positive integers $\alpha_1 \geq \cdots \geq \alpha_{\ell}$ summing to $n$ and vectors $v_1, \ldots, v_{\ell}$ such that \[\mathbb{C}^n = \operatorname{span} \lbrace X^j v_i : 1 \leq i \leq \ell, 0 \leq j \leq \alpha_i -1 \rbrace \] and $X^{\alpha_i} v_i = 0$ for all $i$ (cf. Jantzen \cite{Jantzen}, section 1.1).
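The passage from a nilpotent $X$ to its partition $\alpha$ is effective: $\dim \ker X^j = \alpha^*_1 + \cdots + \alpha^*_j$, so $\alpha^*_j = \operatorname{rank} X^{j-1} - \operatorname{rank} X^j$, and $\alpha$ is recovered by conjugating back. A minimal numerical sketch (the function name is ours; it assumes the input matrix is genuinely nilpotent):

```python
import numpy as np

def jordan_type(X):
    """Partition of Jordan block sizes of a nilpotent matrix X."""
    n = X.shape[0]
    ranks = [n]                  # ranks of X^0, X^1, X^2, ...
    P = np.eye(n)
    while ranks[-1] > 0:         # terminates since X is nilpotent
        P = P @ X
        ranks.append(np.linalg.matrix_rank(P))
    # conjugate partition: alpha*_j = rank X^{j-1} - rank X^j
    alpha_star = [ranks[j - 1] - ranks[j] for j in range(1, len(ranks))]
    # conjugate back to obtain alpha itself (weakly decreasing)
    return [sum(1 for a in alpha_star if a >= j)
            for j in range(1, alpha_star[0] + 1)]

# one Jordan block of size 2 and one of size 1
X = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
assert jordan_type(X) == [2, 1]
```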
Express the partition $\alpha := [\alpha_1, \ldots, \alpha_{\ell}]$ in the form $[k_1^{a_1}, \ldots, k_m^{a_m}]$, where $k_1 > \cdots > k_m$ are the distinct parts of $\alpha$ and $a_t$ is the multiplicity of $k_t$ for all $1 \leq t \leq m$. Let $V_t$ be the $a_t$-dimensional vector space spanned by the set $\lbrace v_i : \alpha_i = k_t \rbrace$. Define a map \[\varphi_X \colon GL(V_1) \times \cdots \times GL(V_m) \rightarrow G_X\] by $\varphi_X(g_1, \ldots, g_m)(X^j v_i) := X^j g_t v_i$ for $v_i \in V_t$.
Note that $\varphi_X$ is injective. Let $G_X^{\text{red}}$ be the image of $\varphi_X$, and let $R_X$ be the unipotent radical of $G_X$. From Jantzen \cite{Jantzen}, sections 3.8--3.10, we see that $G_X^{\text{red}}$ is reductive and $G_X = G_X^{\text{red}} R_X$. Since $R_X$ acts trivially in any irreducible $G_X$-representation, specifying an irreducible representation of $G_X$ is equivalent to specifying an irreducible representation of $G_X^{\text{red}}$, which means specifying an irreducible representation of $GL_{a_1, \ldots, a_m} := GL_{a_1} \times \cdots \times GL_{a_m}$.
Let $\alpha^* = [\alpha^*_1, \ldots, \alpha^*_s]$ be the conjugate partition to $\alpha$, where $s := \alpha_1$. For all $1 \leq j \leq s$, let $V(j)$ be the $\alpha^*_j$-dimensional vector space spanned by the set $\lbrace X^{\alpha_i - j} v_i : \alpha_i \geq j \rbrace$, and set $V^{(j)} := V(1) \oplus \cdots \oplus V(j)$.
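In code the conjugate partition is a one-liner; the following hedged snippet (our notation) computes $\alpha^*_j = \#\lbrace i : \alpha_i \geq j \rbrace$, which by the definitions above is $\dim V(j)$, and checks that conjugation is an involution.

```python
# Conjugate partition: alpha*_j counts the parts of alpha of size >= j.
def conjugate(alpha):
    return [sum(1 for a in alpha if a >= j) for j in range(1, alpha[0] + 1)]

alpha = [4, 3, 2, 1, 1]
assert conjugate(alpha) == [5, 3, 2, 1]       # the dimensions of V(1), ..., V(s)
assert conjugate(conjugate(alpha)) == alpha   # conjugation is an involution
```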
Define subgroups $L_X \subset P_X \subset G$ by \[P_X := \lbrace g \in G : g\big(V^{(j)}\big) = V^{(j)} \text{ for all } 1\leq j \leq s \rbrace\] and \[L_X := \lbrace g \in G : g\big(V(j)\big) = V(j) \text{ for all } 1 \leq j \leq s \rbrace.\]
Since $P_X$ is the stabilizer of the partial flag \[\lbrace 0 \rbrace \subset V^{(1)} \subset \cdots \subset V^{(s)} = \mathbb{C}^n,\] it follows immediately that $P_X \subset G$ is a parabolic subgroup and $L_X \subset P_X$ is a Levi factor. Furthermore, the Richardson orbit of $P_X$ is none other than $\mathcal{O}_X$ (cf. Jantzen \cite{Jantzen}, section 4.9). For general $G$, this implies that the connected component of the identity in $G_X$ is contained in $P_X$. In our case $G = GL_n$, the conclusion is stronger: $G_X \subset P_X$, and $G_X^{\text{red}} \subset L_X$. (That we could find $P_X$ so that $\mathcal{O}_X$ is its Richardson orbit is also due to the assumption that $G$ is of type $A$.)
The claim $G_X \subset P_X$ follows from the observation that $V^{(j)}$ is the kernel of $X^j$ for all $1 \leq j \leq s$. To see $G_X^{\text{red}} \subset L_X$, we find a Levi subgroup of $L_X$ that contains $G_X^{\text{red}}$. Since $X^{k_t - j}V_t \subset V(j)$, the direct sum decomposition \[\mathbb{C}^n = \bigoplus_{t = 1}^m \bigoplus_{j = 1}^{k_t} X^{k_t - j} V_t\] is a refinement of the decomposition $\mathbb{C}^n = \bigoplus_{j=1}^s V(j)$. Set \[L_X^{\text{ref}} := \lbrace g \in G : g(X^{k_t-j} V_t) = X^{k_t-j} V_t \text{ for all } 1 \leq t \leq m, 1 \leq j \leq k_t \rbrace.\] Then $L_X^{\text{ref}} \subset L_X$, and the inclusion $G_X^{\text{red}} \subset L_X^{\text{ref}}$ follows directly from the definition of $\varphi_X$.
Let $\chi_X$ be the isomorphism \[L_X^{\text{ref}} \rightarrow \prod_{t = 1}^m \prod_{j=1}^{k_t} GL(X^{k_t -j} V_t)\] given by $g \mapsto \prod_{t=1}^m (g|_{X^{k_t-1} V_t}, \ldots, g|_{V_t})$, and let $\psi_X$ be the isomorphism \[L_X \rightarrow GL(V(1)) \times \cdots \times GL(V(s))\] given by $g \mapsto \left(g|_{V(1)}, \ldots, g|_{V(s)}\right)$.
From the analysis above, we may conclude that the composition \[\psi_X \varphi_X \colon GL_{a_1, \ldots, a_m} \rightarrow GL_{\alpha^*_1, \ldots, \alpha^*_s}\] factors as the composition \[\chi_X \varphi_X \colon GL_{a_1, \ldots, a_m} \rightarrow \prod_{t=1}^m (GL_{a_t})^{k_t}\] (which coincides with the product, over all $1 \leq t \leq m$, of the diagonal embeddings $GL_{a_t} \rightarrow (GL_{a_t})^{k_t}$), followed by the product, over all $1 \leq j \leq s$, of the inclusions $\prod_{t : k_t \geq j} GL_{a_t} \rightarrow GL_{\alpha^*_j}$. This description of $\psi_X \varphi_X$ allows us to detect the appearance of certain $[IC_{(\mathcal{O}_X, V)}]$ classes in the expansion on the $\Omega$-basis of a complex arising via the resolution $T^*(G/P_X) \rightarrow \overline{\mathcal{O}_X}$ (cf. Lemma~\ref{omega}).
\begin{exam} \label{colors}
\setcounter{MaxMatrixCols}{20} Set $n := 11$. Then $G = GL_{11}$. Set \[X := \begin{bmatrix}
0 & 1 & 0 & 0 & & & & & & & \\
0 & 0 & 1 & 0 & & & & & & & \\
0 & 0 & 0 & 1 & & & & & & & \\
0 & 0 & 0 & 0 & & & & & & & \\
& & & & 0 & 1 & 0 & & & & \\
& & & & 0 & 0 & 1 & & & & \\
& & & & 0 & 0 & 0 & & & & \\
& & & & & & & 0 & 1 & & \\
& & & & & & & 0 & 0 & & \\
& & & & & & & & & 0 & \\
& & & & & & & & & & 0 \end{bmatrix}.\]
The partition encoding the sizes of the Jordan blocks of $X$ is $\alpha = [4,3,2,1,1]$. The Young diagram of $\alpha$ is depicted in Figure~\ref{rowleng}. Each Jordan block of $X$ corresponds to a row of $\alpha$.
\begin{figure}
\caption{The Young diagram of $\alpha$, colored by rows}
\label{rowleng}
\end{figure}
We may express $\alpha$ in the form $[4^1,3^1,2^1,1^2]$, where $4>3>2>1$ are the distinct parts of $\alpha$. Then $G_X^{\text{red}}$ is the image under the isomorphism $\varphi_X$ of \[ GL_1 \times GL_1 \times GL_1 \times GL_2.\] Each factor of the preimage corresponds to a distinct part of $\alpha$ (cf. Figure~\ref{rowmult}).
\begin{figure}
\caption{The Young diagram of $\alpha$, partitioned by distinct parts}
\label{rowmult}
\end{figure}
The conjugate partition $\alpha^*$ is $[5,3,2,1]$. The isomorphism $\psi_X$ maps $L_X$ onto \[GL_5 \times GL_3 \times GL_2 \times GL_1.\] Each factor of the image corresponds to a column of $\alpha$ (cf. Figure~\ref{col}).
\begin{figure}
\caption{The Young diagram of $\alpha$, colored by columns}
\label{col}
\end{figure}
The group $L_X^{\text{ref}}$ lies inside $L_X$ and contains $G_X^{\text{red}}$. The isomorphism $\chi_X$ maps $L_X^{\text{ref}}$ isomorphically onto \[(GL_1)^4 \times (GL_1)^3 \times (GL_1)^2 \times (GL_2)^1.\] Each factor of the image corresponds to an ordered pair consisting of a distinct part of $\alpha$ \textit{and} a column of $\alpha$ (cf. Figure~\ref{both}).
\begin{figure}
\caption{The Young diagram of $\alpha$, partitioned by distinct parts and colored by columns}
\label{both}
\end{figure}
The composition \[\psi_X \varphi_X \colon GL_{1,1,1,2} \rightarrow GL_{5,3,2,1}\] factors as the product of diagonal embeddings \[\chi_X \varphi_X \colon GL_{1,1,1,2} \rightarrow (GL_1)^4 \times (GL_1)^3 \times (GL_1)^2 \times (GL_2)^1,\] followed by the product of the inclusions \[ GL_{1,1,1,2} \rightarrow GL_5, \quad GL_{1,1,1} \rightarrow GL_3, \quad GL_{1,1} \rightarrow GL_2, \quad \text{and} \quad GL_1 \rightarrow GL_1.\] \end{exam}
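The numerology of the example can be reproduced mechanically. The following sketch (Python, our own notation) computes the distinct parts of $\alpha = [4, 3, 2, 1, 1]$ with multiplicities, giving the factors of $GL_{a_1, \ldots, a_m}$, and the conjugate partition, giving the factors of $GL_{\alpha^*_1, \ldots, \alpha^*_s}$.

```python
from itertools import groupby

alpha = [4, 3, 2, 1, 1]

# distinct parts with multiplicities: the pairs (k_t, a_t)
parts = [(k, len(list(g))) for k, g in groupby(alpha)]
assert parts == [(4, 1), (3, 1), (2, 1), (1, 2)]   # so G_X^red is GL_1^3 x GL_2

# conjugate partition: column lengths of the Young diagram
alpha_star = [sum(1 for a in alpha if a >= j) for j in range(1, alpha[0] + 1)]
assert alpha_star == [5, 3, 2, 1]                  # so L_X is GL_5 x GL_3 x GL_2 x GL_1
```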
\subsection{Sheaves on the nilpotent cone of $\mathfrak{gl}_n$} Let $e_1, \ldots, e_n$ be the standard basis for $\mathbb{C}^n$. From the nilpotent orbit $\mathcal{O}_X$, we choose the representative element $X_{\alpha} \in \mathfrak{g}$ given by \[e_i \mapsto 0\] for all $1 \leq i \leq \alpha^*_1$ and \[e_{\alpha^*_1 + \cdots + \alpha^*_{j-1} + i} \mapsto e_{\alpha^*_1 + \cdots + \alpha^*_{j-2} + i}\] for all $2 \leq j \leq s$, $1 \leq i \leq \alpha^*_j$.
\begin{exam} Maintain the notation of Example~\ref{colors}. Then \[X_{\alpha} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & & & \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & & & \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & & & \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & \\
& & & & & 0 & 0 & 0 & 1 & 0 & \\
& & & & & 0 & 0 & 0 & 0 & 1 & \\
& & & & & 0 & 0 & 0 & 0 & 0 & \\
& & & & & & & & 0 & 0 & 1 \\
& & & & & & & & 0 & 0 & 0 \\
& & & & & & & & & & 0
\end{bmatrix}.\] \end{exam}
To see that $X_{\alpha} \in \mathcal{O}_X$, let $g \in G$ be given by $X^{\alpha_i - j} v_i \mapsto e_{\alpha^*_1 + \cdots + \alpha^*_{j-1} + i}$, and observe that $X_{\alpha} = gXg^{-1}$. Thus, $\mathcal{N} = \bigcup_{\alpha \vdash n} \mathcal{O}_{X_{\alpha}}$. For $\alpha$ a partition of $n$, we write $\mathcal{O}_{\alpha}$ for the orbit $\mathcal{O}_{X_{\alpha}}$. The uniqueness of the Jordan canonical form implies that the orbits $\mathcal{O}_{\alpha}$ and $\mathcal{O}_{\beta}$ are disjoint for distinct partitions $\alpha$ and $\beta$, so $\lbrace \mathcal{O}_{\alpha} \rbrace_{\alpha \vdash n}$ constitutes the set of nilpotent orbits of $\mathfrak{g}$.
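The definition of $X_{\alpha}$ translates directly into a construction from the conjugate partition. The following sketch (Python, with helper names of our own choosing, following the indexing conventions above) builds $X_{\alpha}$ and verifies the expected nilpotency index $s = \alpha_1$ and kernel dimension $\alpha^*_1$ for the running example.

```python
import numpy as np

def X_alpha(alpha_star):
    """Representative nilpotent X_alpha, built from the conjugate
    partition alpha* = [alpha*_1, ..., alpha*_s]."""
    n = sum(alpha_star)
    offsets = [0]
    for part in alpha_star:
        offsets.append(offsets[-1] + part)   # offsets[j] = alpha*_1 + ... + alpha*_j
    M = np.zeros((n, n), dtype=int)
    for j in range(1, len(alpha_star)):      # basis vectors of V(j+1) map into V(j)
        for i in range(alpha_star[j]):
            M[offsets[j - 1] + i, offsets[j] + i] = 1
    return M

M = X_alpha([5, 3, 2, 1])   # alpha = [4, 3, 2, 1, 1], so alpha* = [5, 3, 2, 1]
assert np.array_equal(np.linalg.matrix_power(M, 4), np.zeros((11, 11), dtype=int))
assert np.linalg.matrix_rank(M) == 11 - 5    # dim ker X_alpha = alpha*_1
```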
For each factor $GL_{a_t}$ of $GL_{a_1, \ldots, a_m}$, we identify the weight lattice with the character lattice $\mathbb{Z}^{a_t}$ of the maximal torus $(\mathbb{C}^*)^{a_t}$ of invertible diagonal matrices, and we assign the partial order induced by the Borel subgroup of invertible upper triangular matrices. Then the isomorphism classes of irreducible $GL_{a_1, \ldots, a_m}$-representations are indexed by $m$-tuples of integer sequences $(\nu(1), \ldots, \nu(m))$ such that $\nu(t)$ is a dominant weight of $GL_{a_t}$ for all $1 \leq t \leq m$. The $m$-tuple $(\nu(1), \ldots, \nu(m))$ corresponds to the representation \[V^{(\nu(1), \ldots, \nu(m))} := V^{\nu(1)} \boxtimes \cdots \boxtimes V^{\nu(m)},\] where $V^{\nu(t)}$ denotes the irreducible $GL_{a_t}$-representation with highest weight $\nu(t)$.
We say that an integer sequence $\nu = [\nu_1, \ldots, \nu_{\ell}]$ is \textit{dominant} with respect to $\alpha$ if $\alpha_i = \alpha_{i+1}$ implies $\nu_i \geq \nu_{i+1}$. Note that the dominance condition holds precisely when $\nu$ is the concatenation of an $m$-tuple of dominant weights $(\nu(1), \ldots, \nu(m))$. For such $\nu$, we denote by $V^{(\alpha, \nu)}$ the representation of $G_{\alpha} := G_{X_{\alpha}}$ (or of $G_{\alpha}^{\text{red}} := G_{X_{\alpha}}^{\text{red}}$, depending on context) arising from the representation $V^{(\nu(1), \ldots, \nu(m))}$ of $GL_{a_1, \ldots, a_m}$. Via the association \[(\alpha, \nu) \mapsto \big(\mathcal{O}_{\alpha}, V^{(\alpha, \nu)}\big),\] we construe $\Omega$ as consisting of pairs of integer sequences $(\alpha, \nu)$ such that $\alpha = [\alpha_1, \ldots, \alpha_{\ell}]$ is a partition of $n$ and $\nu = [\nu_1, \ldots, \nu_{\ell}]$ is dominant with respect to $\alpha$.
Let $B \subset G$ be the Borel subgroup of invertible upper triangular matrices, and let $T \subset B$ be the maximal torus of invertible diagonal matrices. The weight lattice $\Lambda = \operatorname{Hom}(T, \mathbb{C}^*) \cong \mathbb{Z}^n$ comprises length-$n$ integer sequences $\lambda = [\lambda_1, \ldots, \lambda_n]$. Those weights $\lambda \in \Lambda$ which are weakly decreasing are dominant with respect to $B$ and belong to $\Lambda^+$.
Set $P_{\alpha} := P_{X_{\alpha}}$ and $L_{\alpha} := L_{X_{\alpha}}$. Then \[P_{\alpha} = \left\lbrace g \in G : ge_{\alpha^*_1 + \cdots + \alpha^*_{j-1} + i} \in \operatorname{span} \lbrace e_1, \ldots, e_{\alpha^*_1 + \cdots + \alpha^*_j} \rbrace \right\rbrace \] and \[L_{\alpha} = \left\lbrace g \in G : ge_{\alpha^*_1 + \cdots + \alpha^*_{j-1} + i} \in \operatorname{span} \lbrace e_{\alpha^*_1 + \cdots + \alpha^*_{j-1} + 1}, \ldots, e_{\alpha^*_1 + \cdots + \alpha^*_j} \rbrace \right\rbrace. \]
We see immediately that $P_{\alpha} \supset B$ and $L_{\alpha} \supset T$. Thus, $\Lambda$ doubles as the weight lattice of $L_{\alpha}$. Given a weight $\lambda \in \Lambda$, let $\lambda^j$ be its restriction to the factor $GL_{\alpha^*_j}$ of $L_{\alpha} \cong GL_{\alpha^*_1, \ldots, \alpha^*_s}$. This realizes $\lambda$ as the concatenation of the $s$-tuple of integer sequences $(\lambda^1, \ldots, \lambda^s)$. If $\lambda^j$ is weakly decreasing for all $1 \leq j \leq s$, then $\lambda$ is dominant with respect to the Borel subgroup $B_{\alpha} := B_{L_{\alpha}}$, in which case $\lambda$ belongs to $\Lambda^+_{\alpha}$, the set of dominant weights of $L_{\alpha}$ with respect to $B_{\alpha}$. For $\lambda \in \Lambda^+_{\alpha}$, we denote by $W^{\lambda^j}$ the irreducible $GL_{\alpha^*_j}$-representation with highest weight $\lambda^j$, and we set \[W^{\lambda} := W^{\lambda^1} \boxtimes \cdots \boxtimes W^{\lambda^s},\] which indeed has highest weight $\lambda$.
We rely on the complexes $A^{\alpha}_{\lambda} := A^{P_{\alpha}}_{\lambda}$ associated to weights $\lambda \in \Lambda^+_{\alpha}$ to interpolate between the $\Omega$- and $\Lambda^+$-bases for $K_0(\mathfrak{D})$. Weights of $L_{\alpha}$ are also weights of $G$, so it is reasonable to expect that the expansion of $[A^{\alpha}_{\lambda}]$ on the $\Lambda^+$-basis be easy to compute. On the other hand, representations of $L_{\alpha}$ restrict to representations of $G_{\alpha}^{\text{red}}$, and it turns out that this relationship lifts to the corresponding objects in $\mathfrak{D}$. The following results of Achar \cite{Acharj} encapsulate these statements formally.
\begin{lem}[Achar \cite{Acharj}, Corollary 2.5] \label{omega} Let $(\alpha, \nu) \in \Omega$, and let $\lambda \in \Lambda^+_{\alpha}$. Suppose that $V^{(\alpha, \nu)}$ occurs in the decomposition of the $L_{\alpha}$-representation $W^{\lambda}$ as a direct sum of irreducible $G_{\alpha}^{\emph{red}}$-representations. Then, when $[A^{\alpha}_{\lambda}]$ is expanded on the $\Omega$-basis for $K_0(\mathfrak{D})$, the coefficient of $[IC_{(\alpha, \nu)}]$ is nonzero. \end{lem}
\begin{lem}[Achar \cite{Acharj}, Corollary 2.7] \label{lambda} Let $W_{\alpha}$ be the Weyl group of $L_{\alpha}$, and let $\rho_{\alpha}$ be the half-sum of the positive roots of $L_{\alpha}$. For all $\lambda \in \Lambda^+_{\alpha}$, the following equality holds in $K_0(\mathfrak{D})$: \[[A^{\alpha}_{\lambda}] = \sum_{w \in W_{\alpha}} (-1)^w [A_{\lambda + \rho_{\alpha} - w \rho_{\alpha}}].\] \end{lem}
Let $W$ be the Weyl group of $G$, and, for all $\mu \in \Lambda$, let $\operatorname{dom}(\mu) \in \Lambda^+$ be the unique dominant weight in the $W$-orbit of $\mu$. When $[A_{\mu}]$ is expanded on the $\Lambda^+$-basis for $K_0(\mathfrak{D})$, the coefficient of $[A_{\lambda}]$ is zero unless $\lambda \leq \operatorname{dom}(\mu)$ (cf. Achar \cite{Acharj}, Proposition 2.2). Thus, if $\mu \in \Lambda^+_{\alpha}$, it follows from Lemma~\ref{lambda} that $[A^{\alpha}_{\mu}] \in \operatorname{span} \lbrace [A_{\lambda}] : \lambda \leq \operatorname{dom}(\mu + 2 \rho_{\alpha}) \rbrace$.
Let $\Omega_{\alpha}$ be the set of all dominant integer sequences $\nu$ with respect to $\alpha$. Given $\nu \in \Omega_{\alpha}$, set \[\Lambda^+_{\alpha, \nu} := \left \lbrace \mu \in \Lambda^+_{\alpha} : \dim \operatorname{Hom}_{G_{\alpha}^{\text{red}}} \big(V^{(\alpha, \nu)}, W^{\mu}\big) > 0 \right \rbrace.\] On input $(\alpha, \nu)$, our algorithm finds a weight $\mu \in \Lambda^+_{\alpha, \nu}$ such that $||\mu + 2 \rho_{\alpha}||$ is minimal. As demonstrated by Achar \cite{Achart, Acharj}, this guarantees that $\gamma(\alpha, \nu) = \operatorname{dom}(\mu + 2 \rho_{\alpha})$.\footnote{This follows from Claim 2.3.1 in \cite{Achart}, except that $\gamma$ is defined differently. In \cite{Acharj}, Theorem 8.10, Achar shows that the bijection $\gamma$ constructed in \cite{Achart} coincides with the bijection in Theorem~\ref{bez}. }
The intuition behind this approach is straightforward. For all $\mu \in \Lambda^+_{\alpha, \nu}$, the expansion of $[A^{\alpha}_{\mu}]$ on the $\Omega$-basis takes the form \[\big[A^{\alpha}_{\mu}\big] = \dim \operatorname{Hom}_{G_{\alpha}^{\text{red}}} \big(V^{(\alpha, \nu)}, W^{\mu}\big) \big[IC_{(\alpha, \nu)}\big] + \sum_{\upsilon \in \Omega_{\alpha} : \upsilon \neq \nu} c_{\alpha, \upsilon} \big[IC_{(\alpha, \upsilon)}\big] + \sum_{(\beta, \xi) \in \Omega : \beta \vartriangleleft \alpha} c_{\beta, \xi} \big[IC_{(\beta, \xi)}\big],\] where $\trianglelefteq$ denotes the dominance order on partitions of $n$. On the other hand, the expansion of $[A^{\alpha}_{\mu}]$ on the $\Lambda^+$-basis takes the form \[\big[A^{\alpha}_{\mu}\big] = \pm \big[A_{\operatorname{dom}(\mu + 2 \rho_{\alpha})}\big] + \sum_{\lambda < \operatorname{dom}(\mu + 2 \rho_{\alpha})} c_{\lambda} \big[A_{\lambda} \big].\]
We compare the equations. There is a single maximal-weight term in the right-hand side of the second equation. It follows that there is a single maximal-weight term in the expansion of the right-hand side of the first equation on the $\Lambda^+$-basis. By Theorem~\ref{bez}, the maximal weight must be $\gamma(\alpha, \nu)$ or among the sets $\lbrace \gamma(\alpha, \upsilon) : \upsilon \neq \nu \rbrace$ and $\lbrace \gamma(\beta, \xi) : \beta \vartriangleleft \alpha \rbrace$. In the former case, we may conclude immediately that $\gamma(\alpha, \nu) = \operatorname{dom}(\mu + 2 \rho_{\alpha})$. It turns out that mandating the minimality of $||\mu + 2 \rho_{\alpha}||$ suffices to preclude the latter possibility.
\subsection{The Lusztig--Vogan bijection for $GL_2$} Set $n := 2$. Then $G = GL_2$. The weight lattice $\Lambda$ comprises ordered pairs $[\lambda_1, \lambda_2] \in \mathbb{Z}^2$, and $\Lambda^+ = \lbrace [\lambda_1, \lambda_2] \in \mathbb{Z}^2 : \lambda_1 \geq \lambda_2 \rbrace$.
The variety $\mathcal{N} \subset \mathfrak{g}$ is the common zero locus of the trace and determinant polynomials (the coefficients of the characteristic polynomial). Each nonzero matrix in $\mathcal{N}$ is similar to $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, so $\mathcal{N}$ is the union of $\begin{bmatrix} 0 & 0 \\ 0 & 0\end{bmatrix}$ (the \textit{zero orbit}) and the $G$-orbit of $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ (the \textit{regular orbit}).
To the zero orbit corresponds the partition $[1,1]$. Note that $G_{[1,1]}^{\text{red}} = L_{[1,1]} = G$. Hence \[\Omega_{[1,1]} = \lbrace [\nu_1, \nu_2] \in \mathbb{Z}^2 : \nu_1 \geq \nu_2 \rbrace \quad \text{and} \quad \Lambda^+_{[1,1]} = \lbrace [\mu_1, \mu_2] \in \mathbb{Z}^2 : \mu_1 \geq \mu_2 \rbrace.\]
For all $[\mu_1, \mu_2] \in \Lambda^+_{[1,1]}$, the irreducible $L_{[1,1]}$-representation $W^{[\mu_1, \mu_2]}$ is isomorphic as a $G_{[1,1]}^{\text{red}}$-representation to $V^{([1,1], [\mu_1, \mu_2])}$. Thus, for all $[\nu_1, \nu_2] \in \Omega_{[1,1]}$, \[\Lambda^+_{[1,1], [\nu_1, \nu_2]} = \lbrace [\nu_1, \nu_2] \rbrace.\] Our algorithm sets $[\mu_1, \mu_2] := [\nu_1, \nu_2]$.
On the $\Omega$-basis, $[A^{[1,1]}_{[\mu_1, \mu_2]}]$ expands as \[ \left[A^{[1,1]}_{[\nu_1, \nu_2]}\right] = \left[IC_{([1,1], [\nu_1, \nu_2])}\right].\]
Since $W_{[1,1]} = W = \mathfrak{S}_2$ and $\rho_{[1,1]} = [\frac{1}{2}, - \frac{1}{2}]$, it follows that $[A^{[1,1]}_{[\mu_1, \mu_2]}]$ expands on the $\Lambda^+$-basis as \[\left[A^{[1,1]}_{[\nu_1, \nu_2]}\right] = - \left[A_{[\nu_1 + 1, \nu_2 - 1]}\right] + \left[A_{[\nu_1, \nu_2]}\right].\]
Hence $\gamma([1,1], [\nu_1, \nu_2]) = [\nu_1 + 1, \nu_2 - 1] = \operatorname{dom}([\mu_1, \mu_2] + 2 \rho_{[1,1]})$, which confirms that the output is correct.
We turn our attention to the regular orbit, to which corresponds the partition $[2]$. Recall that $G_{[2]}^{\text{red}} \cong GL_1$ and $L_{[2]} \cong GL_1 \times GL_1$. Hence \[\Omega_{[2]} = \lbrace [\nu_1] \in \mathbb{Z}^1 \rbrace \quad \text{and} \quad \Lambda^+_{[2]} = \lbrace [\mu_1, \mu_2] \in \mathbb{Z}^2 \rbrace.\]
Furthermore, the composition $\psi_{X_{[2]}} \varphi_{X_{[2]}}$ of the isomorphisms $\varphi_{X_{[2]}} \colon GL_1 \rightarrow G_{[2]}^{\text{red}}$ and $\psi_{X_{[2]}} \colon L_{[2]} \rightarrow GL_1 \times GL_1$ coincides with the diagonal embedding $GL_1 \rightarrow GL_1 \times GL_1$. For all $[\mu_1, \mu_2] \in \Lambda^+_{[2]}$, the irreducible $L_{[2]}$-representation $W^{[\mu_1, \mu_2]}$ is isomorphic as a $G_{[2]}^{\text{red}}$-representation to $V^{([2], [\mu_1 + \mu_2])}$.
Thus, for all $[\nu_1] \in \Omega_{[2]}$, \[\Lambda^+_{[2], [\nu_1]} = \lbrace [\mu_1, \mu_2] \in \Lambda^+_{[2]} : \mu_1 + \mu_2 = \nu_1 \rbrace.\] Our algorithm sets \[\left[\mu_1, \mu_2 \right] := \left[ \left \lceil \frac{\nu_1}{2} \right \rceil, \left \lfloor \frac{\nu_1}{2} \right \rfloor \right].\]
On the $\Omega$-basis, $[A^{[2]}_{[\mu_1, \mu_2]}]$ expands as \[ \left[A^{[2]}_{\left[ \left \lceil \frac{\nu_1}{2} \right \rceil, \left \lfloor \frac{\nu_1}{2} \right \rfloor \right]} \right] = \left[IC_{([2], [\nu_1])}\right] + \sum_{[\xi_1, \xi_2] \in \Omega_{[1,1]}} c_{[1,1], [\xi_1, \xi_2]}\left[IC_{([1,1], [\xi_1, \xi_2])}\right].\]
Since $W_{[2]}$ is trivial and $\rho_{[2]} = [0, 0]$, it follows that $[A^{[2]}_{[\mu_1, \mu_2]}]$ expands on the $\Lambda^+$-basis as \[\left[A^{[2]}_{\left[ \left \lceil \frac{\nu_1}{2} \right \rceil, \left \lfloor \frac{\nu_1}{2} \right \rfloor \right]} \right] = \left[A_{\left[ \left \lceil \frac{\nu_1}{2} \right \rceil, \left \lfloor \frac{\nu_1}{2} \right \rfloor \right]} \right].\]
From our analysis above, we know that $\gamma([1,1], [\xi_1, \xi_2]) = [\xi_1 + 1, \xi_2 - 1]$, whose entries differ by at least $2$ (as $\xi_1 \geq \xi_2$). Since $\lceil \frac{\nu_1}{2} \rceil - \lfloor \frac{\nu_1}{2} \rfloor \leq 1$, there cannot exist $[\xi_1, \xi_2] \in \Omega_{[1,1]}$ such that $\gamma([1,1], [\xi_1, \xi_2]) = [\lceil \frac{\nu_1}{2} \rceil, \lfloor \frac{\nu_1}{2} \rfloor]$.
Hence $\gamma([2], [\nu_1]) = [\lceil \frac{\nu_1}{2} \rceil, \lfloor \frac{\nu_1}{2} \rfloor] = \operatorname{dom}([\mu_1, \mu_2] + 2 \rho_{[2]})$.\footnote{It follows immediately from Theorem~\ref{bez} that $c_{[1,1],[\xi_1, \xi_2]} = 0$ for all $[\xi_1, \xi_2] \in \Omega_{[1,1]}$. }
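The two closed forms just computed should together hit each dominant weight exactly once, as Theorem~\ref{bez} predicts: weights $[\lambda_1, \lambda_2]$ with $\lambda_1 - \lambda_2 \geq 2$ come from the zero orbit, and those with $\lambda_1 - \lambda_2 \in \lbrace 0, 1 \rbrace$ from the regular orbit. A quick mechanical check (hypothetical function names, not from the paper):

```python
def gamma_zero_orbit(nu):
    # gamma([1,1], [nu1, nu2]) = [nu1 + 1, nu2 - 1], for nu1 >= nu2
    n1, n2 = nu
    assert n1 >= n2
    return (n1 + 1, n2 - 1)

def gamma_regular_orbit(nu1):
    # gamma([2], [nu1]) = [ceil(nu1/2), floor(nu1/2)];
    # -(-x // 2) is an integer ceiling that is correct for negative x
    return (-(-nu1 // 2), nu1 // 2)

# every dominant weight [l1, l2] in a window arises from exactly one orbit
for l1 in range(-10, 11):
    for l2 in range(-10, l1 + 1):
        if l1 - l2 >= 2:
            assert gamma_zero_orbit((l1 - 1, l2 + 1)) == (l1, l2)
            assert gamma_regular_orbit(l1 + l2) != (l1, l2)
        else:
            assert gamma_regular_orbit(l1 + l2) == (l1, l2)
```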
\subsection{Outline} The cynosure of this article is the \textit{integer-sequences version} of our algorithm, which admits as input a pair $(\alpha, \nu) \in \Omega$ and yields as output a weight $\mathfrak{A}(\alpha, \nu) \in \Lambda^+_{\alpha}$. The output, which consists of a weight of each factor of $L_{\alpha}$, is obtained recursively: The weight of the first factor $GL_{\alpha^*_1}$ is computed; then the input is adjusted accordingly, and the algorithm is called on the residual input to determine the weight of each of the remaining factors.
The algorithm design is guided by the objective of locating $\mathfrak{A}(\alpha, \nu)$ in $\Lambda^+_{\alpha, \nu}$ and keeping $||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ as small as possible. Our main theorem is the following.
\begin{thm} \label{main} Let $(\alpha, \nu) \in \Omega$. Then $\gamma(\alpha, \nu) = \operatorname{dom}(\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha})$. \end{thm}
We prove the main theorem by verifying that \[||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}|| = \min \lbrace ||\mu + 2 \rho_{\alpha}|| : \mu \in \Lambda^+_{\alpha, \nu} \rbrace.\] However, our approach is indirect and relies on a combinatorial apparatus introduced by Achar \cite{Achart, Acharj} --- \textit{weight diagrams}.
A weight diagram $X$ of shape-class $\alpha$ encodes several integer sequences, including a weight $h(X) \in \Lambda^+_{\alpha}$. On input $(\alpha, \nu)$, Achar's algorithm outputs a weight diagram $\mathsf{A}(\alpha, \nu)$ of shape-class $\alpha$ such that $h \mathsf{A}(\alpha, \nu) \in \Lambda^+_{\alpha, \nu}$ and $||h \mathsf{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ is minimal (cf. \cite{Acharj}, Corollary 8.9). Achar's conclusion (cf. \cite{Acharj}, Theorem 8.10) is that Theorem~\ref{main} holds with $h \mathsf{A}(\alpha, \nu)$ in place of $\mathfrak{A}(\alpha, \nu)$.
The minimality of $||h \mathsf{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ is basic to Achar's algorithm, which maintains a candidate output $X$ at each step, and performs only manipulations that do not increase $||hX + 2 \rho_{\alpha}||$. In contrast, $\mathfrak{A}(\alpha, \nu)$ is computed one entry at a time. The minimality of $||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ is an emergent property, which we prove by comparison of our algorithm with Achar's.
Rather than attempt to connect $\mathfrak{A}$ to $\mathsf{A}$, we introduce a third algorithm $\mathcal{A}$, built with the same tools as $\mathfrak{A}$, but configured to output weight diagrams rather than integer sequences.\footnote{$\mathcal{A}$ actually outputs pairs of weight diagrams, so what we refer to in the introduction as $\mathcal{A}(\alpha, \nu)$ is denoted in the body by $p_1 \mathcal{A}(\alpha, \nu)$. } The relationship between this \textit{weight-diagrams version} and Achar's algorithm is impossible to miss: $\mathcal{A}(\alpha, \nu)$ always exactly matches $\mathsf{A}(\alpha, \nu)$. Hence $||h\mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ is minimal.
While it is not the case that $\mathfrak{A}(\alpha, \nu)$ always coincides with $h\mathcal{A}(\alpha, \nu)$,\footnote{In the author's thesis \cite{Rush}, the integer-sequences version $\mathfrak{A}$ is defined so that $\mathfrak{A}(\alpha, \nu) = h \mathcal{A}(\alpha, \nu)$, but the proof that this equation holds is laborious and not altogether enlightening (cf. Chapter 5). Relaxing this requirement allows us to simplify the definition of $\mathfrak{A}$ and focus on proofs more pertinent to $\gamma$. } we show nonetheless that \begin{equation} \label{minimality}
||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}|| = ||h \mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha}||, \end{equation} which implies that \[\operatorname{dom}(\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}) = \operatorname{dom}(h \mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha}),\] confirming that $\mathfrak{A}$ is a bona fide version of $\mathcal{A}$. The main theorem follows immediately.
In summary, the algorithm $\mathfrak{A}$ is a bee-line for computing $\gamma$, akin to an ansatz, which works because $\mathfrak{A}(\alpha, \nu)$ belongs to $\Lambda^+_{\alpha, \nu}$ and $||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ is minimal. The minimality of $||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ is a consequence of the minimality of $||h \mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha}||$, and we deduce the latter by identifying $\mathcal{A}(\alpha, \nu)$ with $\mathsf{A}(\alpha, \nu)$.
The rest of this article is organized as follows. In section 2, we present the integer-sequences version of our algorithm, along with several example calculations.
In section 3, we define weight diagrams. A weight diagram of shape-class $\alpha$ encodes an element each of $\Omega_{\alpha}$, $\Lambda^+_{\alpha}$, and $\Lambda^+$, and we give a correct proof of Proposition 4.4 in Achar \cite{Acharj} regarding the relations between the corresponding objects in $\mathfrak{D}$.
In section 4, we present the weight-diagrams version of our algorithm and delineate its basic properties. Then we prove Equation~\ref{minimality} holds, assuming that $||h \mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha}||$ is minimal.
In section 5, we state Achar's criteria for a weight diagram to be \textit{distinguished}, and we prove that $\mathcal{A}$ outputs a distinguished diagram on any input. As we explain, this implies that the diagrams $\mathcal{A}(\alpha, \nu)$ and $\mathsf{A}(\alpha, \nu)$ are identical for all $(\alpha, \nu) \in \Omega$.
Finally, in the appendix, we cite Achar's algorithm for $\gamma^{-1}$ as heuristic evidence that our algorithm for $\gamma$ is the conceptually correct counterpart. Achar's algorithm for $\gamma$ does not parallel his algorithm for $\gamma^{-1}$, but ours does.
\eject
\section{The Algorithm, Integer-Sequences Version}
\subsection{Overview} Fix a partition $\alpha = [\alpha_1, \ldots, \alpha_{\ell}]$ with conjugate partition $\alpha^* = [\alpha^*_1, \ldots, \alpha^*_s]$. Given an integer sequence $\iota$ of any length, let $\operatorname{dom}(\iota)$ be the sequence obtained by rearranging the entries of $\iota$ in weakly decreasing order. (This is consistent with the notation of section 1.3, for $\operatorname{dom}(\iota) \in W \iota \cap \Lambda^+$ if $\iota \in \Lambda$.)
Let $\nu \in \Omega_{\alpha}$. On input $(\alpha, \nu)$, our algorithm outputs an integer sequence $\mu$ of length $n$ satisfying the following conditions: \begin{enumerate}
\item $\mu$ is the concatenation of an $s$-tuple of weakly decreasing integer sequences $(\mu^1, \ldots, \mu^s)$ such that $\mu^j$ is of length $\alpha^*_j$ for all $1 \leq j \leq s$;
\item There exists a collection of integers $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq \ell \\ 1 \leq j \leq \alpha_i}}$ such that \[\nu_i = \nu_{i, 1} + \cdots + \nu_{i, \alpha_i}\] for all $1 \leq i \leq \ell$ and $\mu^j = \operatorname{dom}([\nu_{1, j}, \ldots, \nu_{\alpha^*_j, j}])$ for all $1 \leq j \leq s$. \end{enumerate} Recall that the first condition indicates $\mu \in \Lambda^+_{\alpha}$. The second condition implies $\mu \in \Lambda^+_{\alpha, \nu}$ (cf. Corollary~\ref{decamp}).
Although we could construct a collection $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq \ell \\ 1 \leq j \leq \alpha_i}}$ such that $\nu_i = \nu_{i,1} + \cdots + \nu_{i, \alpha_i}$ for all $i$ and obtain $\mu$ as a by-product (by setting $\mu^j := \operatorname{dom}([\nu_{1,j}, \ldots, \nu_{\alpha^*_j, j}])$ for all $j$), our algorithm instead computes each $\mu^j$ directly, alongside a permutation $\sigma^j \in \mathfrak{S}_{\alpha^*_j}$, so that $\nu_i = \mu^1_{\sigma^1(i)} + \cdots + \mu^{\alpha_i}_{\sigma^{\alpha_i}(i)}$ for all $i$. (Then a collection fit to $\mu$ is given by $\nu_{i,j} := \mu^j_{\sigma^j(i)}$.)
\begin{rem} \label{motiv}
Were we seeking to minimize $||\mu||$, it would suffice to choose, for all $i$, integers $\nu_{i, 1}, \ldots, \nu_{i, \alpha_i} \in \lbrace \lceil \frac{\nu_i}{\alpha_i} \rceil, \lfloor \frac{\nu_i}{\alpha_i} \rfloor \rbrace$ summing to $\nu_i$, and let the collection $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq \ell \\ 1 \leq j \leq \alpha_i}}$ induce the output $\mu$.
However, our task is to minimize $||\mu + 2 \rho_{\alpha}||$, in which case we cannot confine each $\nu_{i,j}$ to the set $\lbrace \lceil \frac{\nu_i}{\alpha_i} \rceil, \lfloor \frac{\nu_i}{\alpha_i} \rfloor \rbrace$.\footnote{See section 2.4 for an example in which there exist $i, j$ such that $\nu_{i,j}$ must not belong to $\lbrace \lceil \frac{\nu_i}{\alpha_i} \rceil, \lfloor \frac{\nu_i}{\alpha_i} \rfloor \rbrace$. } Specifying the collection $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq \ell \\ 1 \leq j \leq \alpha_i}}$ straightaway, and learning the (numerical) order of the entries in each sequence $[\nu_{1, j}, \ldots, \nu_{\alpha^*_j, j}]$ post hoc, risks needlessly inflating \[\sum_{j=1}^s \left|\left|\operatorname{dom}([\nu_{1,j}, \ldots, \nu_{\alpha^*_j, j}]) + 2 \left[\frac{\alpha^*_j - 1}{2}, \ldots, \frac{1 - \alpha^*_j}{2} \right]\right|\right|^2 = ||\mu + 2 \rho_{\alpha}||^2.\]
But how can we know what the order among the integers $\nu_{1,j}, \ldots, \nu_{\alpha^*_j, j}$ will be before their values are assigned? Our answer is simply to stipulate the order, and pick values pursuant thereto --- by deciding $\sigma^j$, then $\mu^j$, and setting $[\nu_{1, j}, \ldots, \nu_{\alpha^*_j, j}] := [\mu^j_{\sigma^j(1)}, \ldots, \mu^j_{\sigma^j(\alpha^*_j)}]$. \end{rem}
The algorithm runs by recursion. Roughly: $\sigma^1$ is determined via a \textit{ranking} function, which compares \textit{candidate ceilings}, each measuring how the addition of $2 \rho_{\alpha}$ to $\mu$ might affect a subset of the collection $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq \ell \\ 1 \leq j \leq \alpha_i}}$, subject to a hypothesis about $\sigma^1$. After $\sigma^1$ is settled, the corresponding candidate ceilings are tweaked (under the aegis of a \textit{column} function) to compute $\mu^1$. Then $\mu^1$ is ``subtracted off,'' and the algorithm is called on the residual input $\nu'$, defined by $\nu'_i := \nu_i - \mu^1_{\sigma^1(i)}$, returning $\mu^2, \ldots, \mu^s$.
\subsection{The algorithm} Describing the algorithm explicitly requires us to introduce formally several preliminary functions. \begin{df} Given a pair of integer sequences $(\alpha, \nu) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell}$, an integer $i \in \lbrace 1, \ldots, \ell \rbrace$, and an ordered pair of disjoint sets $(I_a, I_b)$ satisfying $I_a \cup I_b = \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i \rbrace$, we define the \textit{candidate-ceiling} function $\mathcal{C}_{-1}$ as follows:
\[\mathcal{C}_{-1}(\alpha, \nu, i , I_a, I_b) := \left \lceil \frac{\nu_i - \sum_{j \in I_a} \min \lbrace \alpha_i, \alpha_j \rbrace + \sum_{j \in I_b} \min \lbrace \alpha_i, \alpha_j \rbrace}{\alpha_i} \right \rceil.\] \end{df}
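The candidate-ceiling function transcribes directly into code. The sketch below (our own illustration; the helper names are hypothetical) uses negated floor division for the integer ceiling so that negative numerators are handled correctly, and takes $i$ and the members of $I_a$, $I_b$ as $1$-based indices to match the text:

```python
def ceil_div(a, b):
    # integer ceiling of a / b for b > 0, correct for negative a
    return -((-a) // b)

def cand_ceiling(alpha, nu, i, Ia, Ib):
    # the candidate-ceiling function C_{-1}(alpha, nu, i, Ia, Ib)
    m = lambda j: min(alpha[i - 1], alpha[j - 1])
    num = nu[i - 1] - sum(m(j) for j in Ia) + sum(m(j) for j in Ib)
    return ceil_div(num, alpha[i - 1])
```

For $\alpha = [2,1]$ this recovers the evaluations $\mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2 \rbrace) = \lceil \frac{\nu_1 + 1}{2} \rceil$ and $\mathcal{C}_{-1}(\alpha, \nu, 2, \varnothing, \lbrace 1 \rbrace) = \nu_2 + 1$ worked out in section 2.3.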
\begin{df} The \textit{ranking-by-ceilings} algorithm $\mathcal{R}_{-1}$ computes a function $\mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \rightarrow \mathfrak{S}_{\ell}$ iteratively over $\ell$ steps.
Say $\mathcal{R}_{-1}(\alpha, \nu) = \sigma$. On the $i^{\text{th}}$ step of the algorithm, $\sigma^{-1}(1), \ldots, \sigma^{-1}(i-1)$ have already been determined. Set \[J_i := \lbrace \sigma^{-1}(1), \ldots, \sigma^{-1}(i-1) \rbrace \quad \text{and} \quad J'_i := \lbrace 1, \ldots, \ell \rbrace \setminus J_i.\] Then $\sigma^{-1}(i)$ is designated the numerically minimal $j \in J'_i$ among those for which \[(\mathcal{C}_{-1}(\alpha, \nu, j, J_i, J'_i \setminus \lbrace j \rbrace), \alpha_j, \nu_j)\] is lexicographically maximal. \end{df}
\begin{df} The \textit{column-ceilings} algorithm $\mathcal{U}_{-1}$ is iterative with $\ell$ steps and computes a function $\mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \times \mathfrak{S}_{\ell} \rightarrow \mathbb{Z}^{\ell}_{\text{dom}}$, where $\mathbb{Z}^{\ell}_{\text{dom}} \subset \mathbb{Z}^{\ell}$ denotes the subset of weakly decreasing sequences.
Say $\mathcal{U}_{-1}(\alpha, \nu, \sigma) = [\iota_1, \ldots, \iota_{\ell}]$. On the $i^{\text{th}}$ step of the algorithm, $\iota_1, \ldots, \iota_{i-1}$ have already been determined. Then \[\iota_i := \mathcal{C}_{-1}(\alpha, \nu, \sigma^{-1}(i), \sigma^{-1} \lbrace 1, \ldots, i-1 \rbrace, \sigma^{-1} \lbrace i+1, \ldots, \ell \rbrace) - \ell + 2i - 1\] unless the right-hand side is greater than $\iota_{i-1}$, in which case $\iota_i := \iota_{i-1}$. \end{df}
We assemble these constituent functions into a recursive algorithm $\mathfrak{A}$ that computes a map $\mathbb{Y}_{n, \ell} \times \mathbb{Z}^{\ell} \rightarrow \mathbb{Z}^{n}$, where $\mathbb{Y}_{n, \ell}$ denotes the set of partitions of $n$ with $\ell$ parts.
On input $(\alpha, \nu)$, the algorithm sets \[\sigma^1 := \mathcal{R}_{-1}(\alpha, \nu) \quad \text{and} \quad \mu^1 := \mathcal{U}_{-1}(\alpha, \nu, \sigma^1).\]
If $\alpha_1 = 1$, it returns $\mu^1$.
Otherwise, it defines $(\alpha', \nu') \in \mathbb{Y}_{n-\ell, \alpha^*_2} \times \mathbb{Z}^{\alpha^*_2}$ by setting \[\alpha'_i := \alpha_i - 1 \quad \text{and} \quad \nu'_i := \nu_i - \mu^1_{\sigma^1(i)}\] for all $1 \leq i \leq \alpha^*_2$.
Then it prepends $\mu^1$ to $\mathfrak{A}(\alpha', \nu')$ and returns the result.
\begin{rem} \label{iter}
The use of recursion makes our instructions for computing $\mathfrak{A}(\alpha, \nu)$ succinct. At the cost of a bit of clarity, we can rephrase the instructions to use iteration, and thereby delineate every step in the computation.
Consider the algorithm $\mathfrak{A}_{\operatorname{iter}} \colon \mathbb{Y}_{n, \ell} \times \mathbb{Z}^{\ell} \rightarrow \mathbb{Z}^n$ defined as follows.
On input $(\alpha, \nu)$, it starts by setting $\alpha^1 := \alpha$, $\nu^1 := \nu$, $\sigma^1 := \mathcal{R}_{-1}(\alpha^1, \nu^1)$, and $\mu^1 := \mathcal{U}_{-1}(\alpha^1, \nu^1, \sigma^1)$.
Then, for $2 \leq j \leq s$: \begin{itemize} \item It defines $\alpha^j$ by $\alpha^j_i :=\alpha^{j-1}_i - 1$ for all $1 \leq i \leq \alpha^*_j$; \item It defines $\nu^{j}$ by $\nu^{j}_i := \nu^{j-1}_i - \mu^{j-1}_{\sigma^{j-1}(i)}$ for all $1 \leq i \leq \alpha^*_{j}$; \item It sets $\sigma^j := \mathcal{R}_{-1}(\alpha^j, \nu^j)$; \item It sets $\mu^j := \mathcal{U}_{-1}(\alpha^j, \nu^j, \sigma^j)$. \end{itemize}
Finally, it returns the concatenation of $(\mu^1, \ldots, \mu^s)$.
It should be clear that $\mathfrak{A}_{\operatorname{iter}}(\alpha, \nu)$ agrees with $\mathfrak{A}(\alpha, \nu)$. To see this, we induct on $s$. For the inductive step, it suffices to show that $\mathfrak{A}(\alpha', \nu')$ is the concatenation of $(\mu^2, \ldots, \mu^s)$. But $(\alpha', \nu') = (\alpha^2, \nu^2)$, so the inductive hypothesis yields $\mathfrak{A}(\alpha', \nu') = \mathfrak{A}_{\operatorname{iter}}(\alpha^2, \nu^2)$, which is the concatenation of $(\mu^2, \ldots, \mu^s)$ by construction.
\end{rem}
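The recursive description can be transcribed into a short program. The following sketch is our own rendering, not the paper's; the names \texttt{C}, \texttt{R}, \texttt{U}, \texttt{A} stand in for $\mathcal{C}_{-1}$, $\mathcal{R}_{-1}$, $\mathcal{U}_{-1}$, $\mathfrak{A}$, and all indices are $1$-based to match the text:

```python
def ceil_div(a, b):
    # integer ceiling of a / b for b > 0, correct for negative a
    return -((-a) // b)

def C(alpha, nu, i, Ia, Ib):
    # the candidate-ceiling function; i and the members of Ia, Ib are 1-based
    m = lambda j: min(alpha[i - 1], alpha[j - 1])
    num = nu[i - 1] - sum(m(j) for j in Ia) + sum(m(j) for j in Ib)
    return ceil_div(num, alpha[i - 1])

def R(alpha, nu):
    # ranking-by-ceilings: returns sigma as a dict j -> sigma(j)
    l = len(alpha)
    order = []  # order[i - 1] = sigma^{-1}(i)
    for _ in range(l):
        J = set(order)
        Jp = [j for j in range(1, l + 1) if j not in J]
        # maximize (candidate ceiling, alpha_j, nu_j) lexicographically,
        # breaking remaining ties by the numerically minimal j
        best = max(Jp, key=lambda j: (C(alpha, nu, j, J, set(Jp) - {j}),
                                      alpha[j - 1], nu[j - 1], -j))
        order.append(best)
    return {j: i + 1 for i, j in enumerate(order)}

def U(alpha, nu, sigma):
    # column-ceilings: the weakly decreasing sequence mu^1
    l = len(alpha)
    inv = {i: j for j, i in sigma.items()}
    iota = []
    for i in range(1, l + 1):
        cand = C(alpha, nu, inv[i],
                 {inv[k] for k in range(1, i)},
                 {inv[k] for k in range(i + 1, l + 1)}) - l + 2 * i - 1
        iota.append(cand if not iota else min(cand, iota[-1]))
    return iota

def A(alpha, nu):
    # the recursive integer-sequences algorithm
    sigma = R(alpha, nu)
    mu1 = U(alpha, nu, sigma)
    if alpha[0] == 1:
        return mu1
    k = sum(1 for a in alpha if a >= 2)  # alpha*_2
    return mu1 + A([alpha[i] - 1 for i in range(k)],
                   [nu[i] - mu1[sigma[i + 1] - 1] for i in range(k)])
```

On the input $([3,2,2,1], [15,8,8,4])$ of section 2.3, this sketch returns $[4,4,4,4,5,4,4,6]$, in agreement with the hand computation there.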
\subsection{Examples}
We study three examples. First, to illustrate the workings of the ranking function, we consider the orbit $\mathcal{O}_{[2,1]}$. Given $\nu \in \Omega_{[2,1]}$, the algorithm makes exactly one meaningful comparison --- to determine whether $\sigma^1$ is the trivial or nontrivial permutation in $\mathfrak{S}_2$.
Second, to underscore the advantages of our approach, we consider an input pair $(\alpha, \nu)$ for which there exists only one collection $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq \ell \\ 1 \leq j \leq \alpha_i}}$ such that $\nu_{i,j} \in \lbrace \lceil \frac{\nu_i}{\alpha_i} \rceil, \lfloor \frac{\nu_i}{\alpha_i} \rfloor \rbrace$ for all $i,j$, and setting $\mu^j := \operatorname{dom}([\nu_{1,j}, \ldots, \nu_{\alpha^*_j,j}])$ for all $j$ yields an incorrect answer for $\gamma(\alpha, \nu)$. The input pair is $([3,2,2,1], [15,8,8,4])$.
Last, we revisit the orbit $\mathcal{O}_{[4,3,2,1,1]}$ featured in Example~\ref{colors} and compute $\mathfrak{A}$ on the input pair $([4,3,2,1,1], [15,14,9,4,4])$, taken from Achar's thesis \cite{Achart}. We also discuss the computation of $\mathfrak{A}_{\operatorname{iter}}$.
\begin{exam} Set $\alpha := [2,1]$. Then $\alpha^* = [2,1]$. Reading $G_{\alpha}^{\text{red}}$ and $L_{\alpha}$ off the Young diagram of $\alpha$ (cf. Figure~\ref{rank}), we see that $G_{[2,1]}^{\text{red}} \cong GL_1 \times GL_1$ and $L_{[2,1]} \cong GL_2 \times GL_1$. \begin{figure}
\caption{The Young diagram of $[2,1]$}
\label{rank}
\end{figure}
Note that \[\Omega_{[2,1]} = \lbrace [\nu_1, \nu_2] \in \mathbb{Z}^2 \rbrace \quad \text{and} \quad \Lambda^+_{[2,1]} = \lbrace [\lambda_1, \lambda_2, \lambda_3] \in \mathbb{Z}^3 : \lambda_1 \geq \lambda_2 \rbrace.\]
Let $\nu = [\nu_1, \nu_2] \in \Omega_{[2,1]}$. On input $(\alpha, \nu)$, the algorithm computes $\sigma^1 := \mathcal{R}_{-1}(\alpha, \nu)$. Since $\alpha_1 > \alpha_2$, the triple \[(\mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2 \rbrace), \alpha_1, \nu_1)\] is lexicographically greater than the triple \[(\mathcal{C}_{-1}(\alpha, \nu, 2, \varnothing, \lbrace 1 \rbrace), \alpha_2, \nu_2)\] if and only if \begin{align} \label{pare} \mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2 \rbrace) \geq \mathcal{C}_{-1}(\alpha, \nu, 2, \varnothing, \lbrace 1 \rbrace). \end{align}
Therefore, $(\sigma^1)^{-1}(1) = 1$ if and only if Inequality~\ref{pare} holds. By construction of the ranking-by-ceilings algorithm, $(\sigma^1)^{-1}(2) \in \lbrace 1, 2 \rbrace \setminus \lbrace (\sigma^1)^{-1}(1) \rbrace$, so $\sigma^1$ is the identity in $\mathfrak{S}_2$ if Inequality~\ref{pare} holds, and transposes $1$ and $2$ otherwise.
Evaluating the candidate ceilings, we find: \begin{align*} & \mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2 \rbrace) = \mathcal{C}_{-1}([2,1], [\nu_1, \nu_2], 1, \varnothing, \lbrace 2 \rbrace) = \left \lceil \frac{\nu_1 + 1}{2} \right \rceil; \\ & \mathcal{C}_{-1}(\alpha, \nu, 2, \varnothing, \lbrace 1 \rbrace) = \mathcal{C}_{-1}([2,1], [\nu_1, \nu_2], 2, \varnothing, \lbrace 1 \rbrace) = \nu_2 + 1. \end{align*}
Observe that \[\left \lceil \frac{\nu_1 + 1}{2} \right \rceil \geq \nu_2 + 1 \Longleftrightarrow \nu_1 \geq 2 \nu_2.\]
Hence \[\sigma^1 = \begin{cases} 12 & \nu_1 \geq 2 \nu_2 \\ 21 & \nu_1 \leq 2 \nu_2 - 1 \end{cases}.\]
We treat each case separately.
\begin{enumerate} \item Suppose $\nu_1 \geq 2 \nu_2$.
The algorithm computes $\mu^1 := \mathcal{U}_{-1}(\alpha, \nu, \sigma^1)$. By definition, \[\mu^1_1 = \mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2 \rbrace) - 1 = \left \lceil \frac{\nu_1 - 1}{2} \right \rceil.\]
Since \[\mathcal{C}_{-1}(\alpha, \nu, 2, \lbrace 1 \rbrace, \varnothing) + 1 = \nu_2,\] and $\lceil \frac{\nu_1 - 1}{2} \rceil \geq \nu_2$, it follows that \[\mu^1_1 \geq \mathcal{C}_{-1}(\alpha, \nu, 2, \lbrace 1 \rbrace, \varnothing) + 1.\]
Hence \[\mu^1_2 = \mathcal{C}_{-1}(\alpha, \nu, 2, \lbrace 1 \rbrace, \varnothing) + 1 = \nu_2.\]
Then the algorithm sets $\alpha' := [1]$, and it defines $\nu'$ by \[\nu'_1 := \nu_1 - \mu^1_1 = \nu_1 - \left \lceil \frac{\nu_1 - 1}{2} \right \rceil = \left \lfloor \frac{\nu_1+1}{2} \right \rfloor.\]
Clearly, \[\mathfrak{A}(\alpha', \nu') = \mathcal{C}_{-1}(\alpha', \nu', 1, \varnothing, \varnothing) = \nu'_1 = \left \lfloor \frac{\nu_1+1}{2} \right \rfloor.\]
Hence \[\mathfrak{A}([2,1], [\nu_1, \nu_2]) = \left[\left \lceil \frac{\nu_1 - 1}{2} \right \rceil, \nu_2, \left \lfloor \frac{\nu_1 + 1}{2} \right \rfloor \right].\]
\item Suppose $\nu_1 \leq 2 \nu_2 - 1$.
The algorithm computes $\mu^1 := \mathcal{U}_{-1}(\alpha, \nu, \sigma^1)$. By definition, \[\mu^1_1 = \mathcal{C}_{-1}(\alpha, \nu, 2, \varnothing, \lbrace 1 \rbrace) - 1 = \nu_2.\]
Since \[\mathcal{C}_{-1}(\alpha, \nu, 1, \lbrace 2 \rbrace, \varnothing) + 1 = \left \lceil \frac{\nu_1 + 1}{2} \right \rceil\] and $\nu_2 \geq \lceil \frac{\nu_1 + 1}{2} \rceil$, it follows that \[\mu^1_1 \geq \mathcal{C}_{-1}(\alpha, \nu, 1, \lbrace 2 \rbrace, \varnothing) + 1.\]
Hence \[\mu^1_2 = \mathcal{C}_{-1}(\alpha, \nu, 1, \lbrace 2 \rbrace, \varnothing) + 1 = \left \lceil \frac{\nu_1+1}{2} \right \rceil.\]
Then the algorithm sets $\alpha' := [1]$, and it defines $\nu'$ by \[\nu'_1 := \nu_1 - \mu^1_2 = \nu_1 - \left \lceil \frac{\nu_1 + 1}{2} \right \rceil = \left \lfloor \frac{\nu_1 - 1}{2} \right \rfloor.\]
Clearly, \[\mathfrak{A}(\alpha', \nu') = \mathcal{C}_{-1}(\alpha', \nu', 1, \varnothing, \varnothing) = \nu'_1 = \left \lfloor \frac{\nu_1 - 1}{2} \right \rfloor.\]
Hence \[\mathfrak{A}([2,1], [\nu_1, \nu_2]) = \left[\nu_2, \left \lceil \frac{\nu_1 + 1}{2} \right \rceil, \left \lfloor \frac{\nu_1 - 1}{2} \right \rfloor \right].\] \end{enumerate}
We conclude that \[\mathfrak{A}([2,1], [\nu_1, \nu_2]) = \begin{cases} \left[\left \lceil \frac{\nu_1 - 1}{2} \right \rceil, \nu_2, \left \lfloor \frac{\nu_1 + 1}{2} \right \rfloor \right] & \nu_1 \geq 2 \nu_2 \\ \left[\nu_2, \left \lceil \frac{\nu_1 + 1}{2} \right \rceil, \left \lfloor \frac{\nu_1 - 1}{2} \right \rfloor \right] & \nu_1 \leq 2 \nu_2 - 1 \end{cases}.\]
Since $\rho_{[2,1]} = [\frac{1}{2}, -\frac{1}{2}, 0]$, assuming Theorem~\ref{main} holds, we find \[\gamma([2,1], [\nu_1, \nu_2]) = \begin{cases} \left[\left \lceil \frac{\nu_1 + 1}{2} \right \rceil, \left \lfloor \frac{\nu_1 + 1}{2} \right \rfloor, \nu_2 - 1 \right] & \nu_1 \geq 2 \nu_2 \\ \left[\nu_2 + 1, \left \lceil \frac{\nu_1 - 1}{2} \right \rceil, \left \lfloor \frac{\nu_1 - 1}{2} \right \rfloor \right] & \nu_1 \leq 2 \nu_2 - 1 \end{cases}.\] \end{exam}
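The case analysis above admits a direct transcription. The following Python sketch (function names hypothetical) encodes the closed form for $\mathfrak{A}([2,1], [\nu_1, \nu_2])$ and the resulting $\gamma$, taking $\gamma$ to be the dominant (weakly decreasing) rearrangement of $\mathfrak{A}([2,1], \nu) + 2 \rho_{[2,1]}$:

```python
def ceil_div(a, b):
    # Ceiling of a/b for integers (avoids floating point).
    return -(-a // b)

def A_21(nu1, nu2):
    # Closed form for the algorithm's output on ([2,1], [nu1, nu2]),
    # following the case analysis above.
    if nu1 >= 2 * nu2:
        return [ceil_div(nu1 - 1, 2), nu2, (nu1 + 1) // 2]
    # otherwise nu1 <= 2*nu2 - 1
    return [nu2, ceil_div(nu1 + 1, 2), (nu1 - 1) // 2]

def gamma_21(nu1, nu2):
    # gamma([2,1], nu) = dom(A([2,1], nu) + 2*rho), with 2*rho_[2,1] = [1, -1, 0].
    mu = A_21(nu1, nu2)
    return sorted([mu[0] + 1, mu[1] - 1, mu[2]], reverse=True)

# nu = [5, 2] falls in the first case; nu = [3, 2] in the second.
assert A_21(5, 2) == [2, 2, 3] and gamma_21(5, 2) == [3, 3, 1]
assert A_21(3, 2) == [2, 2, 1] and gamma_21(3, 2) == [3, 1, 1]
```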
\begin{exam} Set $\alpha := [3,2,2,1]$. Then $\alpha^* = [4,3,1]$. Reading $G_{\alpha}^{\text{red}}$ and $L_{\alpha}$ off the diagram of $\alpha$ (cf. Figure~\ref{task}), we see that $G_{\alpha}^{\text{red}} \cong GL_1 \times GL_2 \times GL_1$ and $L_{\alpha} \cong GL_4 \times GL_3 \times GL_1$.
\begin{figure}
\caption{The Young diagram of $[4,3,1]$}
\label{task}
\end{figure}
Note that \[\Omega_{\alpha} = \lbrace \nu \in \mathbb{Z}^4 : \nu_2 \geq \nu_3 \rbrace\] and \[\Lambda^+_{\alpha} = \lbrace \lambda \in \mathbb{Z}^{8} : \lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \lambda_4; \lambda_5 \geq \lambda_6 \geq \lambda_7 \rbrace.\]
Set $\nu := [15, 8, 8, 4] \in \Omega_{\alpha}$. On input $(\alpha, \nu)$, the algorithm computes \[\sigma^1 := \mathcal{R}_{-1}(\alpha, \nu) = 1234.\]
Next it computes \[\mu^1 := \mathcal{U}_{-1}(\alpha, \nu, \sigma^1) = [4, 4, 4, 4].\]
Then it sets \[\alpha' := [2, 1, 1] \quad \text{and} \quad \nu' := [11, 4, 4].\]
To finish off, it computes \[\mathfrak{A}(\alpha', \nu') = [5, 4, 4, 6].\]
Thus, \[\mathfrak{A}(\alpha, \nu) = [4, 4, 4, 4, 5, 4, 4, 6].\]
Since \[\rho_{\alpha} = \left[\frac{3}{2}, \frac{1}{2}, - \frac{1}{2}, - \frac{3}{2}, 1, 0, -1, 0 \right],\] assuming Theorem~\ref{main} holds, we find \[\gamma(\alpha, \nu) = [7,7,6,5,4,3,2,1].\]
Note that \[\frac{\nu_1}{\alpha_1} = 5 \quad \text{and} \quad \frac{\nu_2}{\alpha_2} = \frac{\nu_3}{\alpha_3} = \frac{\nu_4}{\alpha_4} = 4.\]
Therefore, if $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq 4 \\ 1 \leq j \leq \alpha_i}} \subset \mathbb{Z}$ is a collection such that $\nu_{i,j} \in \lbrace \lceil \frac{\nu_i}{\alpha_i} \rceil, \lfloor \frac{\nu_i}{\alpha_i} \rfloor \rbrace$ for all $i, j$, then \[\nu_{1,1} = \nu_{1,2} = \nu_{1,3} = 5 \quad \text{and} \quad \nu_{2,1} = \nu_{2,2} = \nu_{3,1} = \nu_{3,2} = \nu_{4,1} = 4.\]
Setting \[\mu^1 := [5, 4, 4, 4], \quad \mu^2 := [5, 4, 4], \quad \mu^3 := [5],\] we arrive at the induced weight $\mu = [5, 4, 4, 4, 5, 4, 4, 5]$, which has smaller norm than the output $\mathfrak{A}(\alpha, \nu) = [4, 4, 4, 4, 5, 4, 4, 6]$.
However, \[[5, 4, 4, 4, 5, 4, 4, 5] + 2 \rho_{\alpha} = [8, 5, 3, 1, 7, 4, 2, 5],\] which has larger norm than \[[4, 4, 4, 4, 5, 4, 4, 6] + 2 \rho_{\alpha} = [7, 5, 3, 1, 7, 4, 2, 6].\]
Thus, attempting to minimize $||\mathfrak{A}(\alpha, \nu)||$ leads to an incorrect answer for $\gamma(\alpha, \nu)$. It is essential to minimize $||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}||$, which is accomplished by our algorithm (cf. Remark~\ref{motiv}). \end{exam}
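Both norm comparisons in this example can be checked mechanically. A small Python sketch (names hypothetical) using the data above:

```python
def norm_sq(v):
    # Squared Euclidean norm of an integer sequence.
    return sum(x * x for x in v)

# Data from the example: alpha = [3,2,2,1], nu = [15,8,8,4].
two_rho = [3, 1, -1, -3, 2, 0, -2, 0]   # 2 * rho_alpha
output  = [4, 4, 4, 4, 5, 4, 4, 6]      # A(alpha, nu)
rival   = [5, 4, 4, 4, 5, 4, 4, 5]      # the induced weight mu

def shift(v):
    return [a + b for a, b in zip(v, two_rho)]

# mu has smaller norm than the algorithm's output ...
assert norm_sq(rival) < norm_sq(output)
# ... but mu + 2*rho has larger norm than A(alpha, nu) + 2*rho.
assert norm_sq(shift(rival)) > norm_sq(shift(output))
```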
\begin{exam} \label{acharexam} Set $\alpha := [4,3,2,1,1]$. Then $\alpha^* = [5,3,2,1]$. Recall from Example~\ref{colors} that \[G_{\alpha}^{\text{red}} \cong GL_1 \times GL_1 \times GL_1 \times GL_2 \quad \text{and} \quad L_{\alpha} \cong GL_5 \times GL_3 \times GL_2 \times GL_1.\]
Note that \[\Omega_{\alpha} = \lbrace \nu \in \mathbb{Z}^5 : \nu_4 \geq \nu_5 \rbrace\] and \[\Lambda^+_{\alpha} = \lbrace \lambda \in \mathbb{Z}^{11} : \lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \lambda_4 \geq \lambda_5; \lambda_6 \geq \lambda_7 \geq \lambda_8; \lambda_9 \geq \lambda_{10} \rbrace.\]
Set $\nu := [15, 14, 9, 4, 4] \in \Omega_{\alpha}$. On input $(\alpha, \nu)$, the algorithm computes \[\sigma^1 := \mathcal{R}_{-1}(\alpha, \nu) = 42135.\]
Next it computes \[\mu^1 := \mathcal{U}_{-1}(\alpha, \nu, \sigma^1) = [4, 4, 4, 4, 4].\]
Then it sets \[\alpha' := [3, 2, 1] \quad \text{and} \quad \nu' := [11, 10, 5].\]
To finish off, it computes \[\mathfrak{A}(\alpha', \nu') = [5, 5, 5, 5, 4, 2].\]
Thus, \[\mathfrak{A}(\alpha, \nu) = [4, 4, 4, 4, 4, 5, 5, 5, 5, 4, 2].\]
If we run $\mathfrak{A}_{\operatorname{iter}}$ on input $(\alpha, \nu)$, we obtain the following table.
\begin{center}
\begin{tabular}{ |l|l|l|l| }
\hline
$\alpha^1 = [4,3,2,1,1]$ & $\nu^1 = [15, 14, 9, 4, 4]$ & $\sigma^1 = 42135$ & $\mu^1 = [4, 4, 4, 4, 4]$ \\
$\alpha^2 = [3,2,1]$ & $\nu^2 = [11, 10, 5]$ & $\sigma^2 = 312$ & $\mu^2 = [5, 5, 5]$ \\
$\alpha^3 = [2,1]$ & $\nu^3 = [6, 5]$ & $\sigma^3 = 21$ & $\mu^3 = [5, 4]$ \\
$\alpha^4 = [1]$ & $\nu^4 = [2]$ & $\sigma^4 = 1$ & $\mu^4 = [2]$ \\
\hline
\end{tabular} \end{center}
Hence \[\mathfrak{A}_{\operatorname{iter}}(\alpha, \nu) = [4, 4, 4, 4, 4, 5, 5, 5, 5, 4, 2] = \mathfrak{A}(\alpha, \nu).\]
Since \[\rho_{\alpha} = \left[2, 1, 0, -1, -2, 1, 0, -1, \frac{1}{2}, -\frac{1}{2}, 0 \right],\] assuming Theorem~\ref{main} holds, we find \[\gamma(\alpha, \nu) = [8, 7, 6, 6, 5, 4, 3, 3, 2, 2, 0].\]
This agrees with Achar's answer (cf. \cite{Achart}, Appendix A). \end{exam}
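The final passage from $\mathfrak{A}(\alpha, \nu)$ to $\gamma(\alpha, \nu)$ is just a shift by $2 \rho_{\alpha}$ followed by a dominant rearrangement, which is easy to verify in Python (names hypothetical):

```python
def dom(v):
    # Dominant rearrangement: sort into weakly decreasing order.
    return sorted(v, reverse=True)

# Data from the example: alpha = [4,3,2,1,1], nu = [15,14,9,4,4].
A_out   = [4, 4, 4, 4, 4, 5, 5, 5, 5, 4, 2]      # A(alpha, nu)
two_rho = [4, 2, 0, -2, -4, 2, 0, -2, 1, -1, 0]  # 2 * rho_alpha
gamma = dom([a + r for a, r in zip(A_out, two_rho)])
assert gamma == [8, 7, 6, 6, 5, 4, 3, 3, 2, 2, 0]
```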
\eject
\section{Weight Diagrams} In this section, we define a class of combinatorial models, which Achar christened \textit{weight diagrams}. Akin in form to Young tableaux, weight diagrams function to capture, at the level of integer sequences, the interactions in $K_0(\mathfrak{D})$ described in Lemmas~\ref{omega} and~\ref{lambda}. A weight diagram $X$ of shape-class $\alpha \vdash n$ simultaneously depicts a dominant integer sequence $\kappa(X)$ with respect to $\alpha$ and a dominant weight $h(X)$ of $L_{\alpha}$. We establish herein that $[IC_{(\alpha, \kappa(X))}]$ occurs in the decomposition of $[A^{\alpha}_{h(X)}]$ on the $\Omega$-basis.
Let $\alpha = [\alpha_1, \ldots, \alpha_{\ell}]$ be a partition of $n$ with conjugate partition $\alpha^* = [\alpha^*_1, \ldots, \alpha^*_s]$. Let $k_1 > \cdots > k_m$ be the distinct parts of $\alpha$, and $a_t$ be the multiplicity of $k_t$ for all $1 \leq t \leq m$.
\begin{df} \label{blank} A \textit{blank diagram} of \textit{shape-class} $\alpha$ is a collection of unit squares (referred to as boxes) arranged in $\ell$ left-justified rows, which differs from a Young diagram of shape $\alpha$ only by permutation of the rows. \end{df}
\begin{df} \label{diagram} A \textit{weight diagram} of \textit{shape-class} $\alpha$ is a filling of a blank diagram of shape-class $\alpha$ by integer entries, with one entry in each box. \end{df}
Let $D_{\alpha}$ be the set of all weight diagrams of shape-class $\alpha$. For a weight diagram $X \in D_{\alpha}$, we denote by $X^j_i$ the $i^{\text{th}}$ entry from the top in the $j^{\text{th}}$ column from the left. We next define a combinatorial map $E \colon D_{\alpha} \rightarrow D_{\alpha}$.
\begin{df} Let $X$ be a weight diagram of shape-class $\alpha$. Set $EX$ to be the filling of the same blank diagram as $X$ given by $EX^j_i := X^j_i + \alpha^*_j - 2i + 1$ for all $1 \leq j \leq s$, $1 \leq i \leq \alpha^*_j$. \end{df}
For the sake of convenience, we consider weight diagrams in pairs in which the second diagram is obtained from the first via $E$. When formulated to build diagram pairs rather than individual diagrams, the weight-diagrams version of our algorithm simultaneously stores the combinatorial information pertinent to the corresponding elements of $\Omega_{\alpha}$ and $\Lambda^+$.
\begin{df} Let $\overline{E} \colon D_{\alpha} \rightarrow D_{\alpha} \times D_{\alpha}$ denote the composition of the diagonal map $D_{\alpha} \rightarrow D_{\alpha} \times D_{\alpha}$ with the map $\operatorname{Id} \times E \colon D_{\alpha} \times D_{\alpha} \rightarrow D_{\alpha} \times D_{\alpha}$. A \textit{diagram pair} of \textit{shape-class} $\alpha$ is an ordered pair of diagrams $(X, Y)$ in $\overline{E}(D_{\alpha})$. \end{df}
The nomenclature ``weight diagram'' is attributable to the natural maps $\kappa \colon D_{\alpha} \rightarrow \Omega_{\alpha}$, $h \colon D_{\alpha} \rightarrow \Lambda^+_{\alpha}$, and $\eta \colon D_{\alpha} \rightarrow \Lambda^+$, which we proceed to define.
\begin{df} Let $X$ be a weight diagram of shape-class $\alpha$. For all $1 \leq t \leq m$, $1 \leq i \leq a_t$, $1 \leq j \leq k_t$, let $\kappa_X^j(t, i)$ be the entry of $X$ in the $j^{\text{th}}$ column and the $i^{\text{th}}$ row from the top among rows of length $k_t$. Then set \[\kappa_X(t) := \operatorname{dom} \left(\sum_{j=1}^{k_t} [\kappa_X^j(t, 1), \ldots, \kappa_X^j(t, a_t)] \right).\] Set $\kappa(X)$ to be the concatenation of the $m$-tuple $(\kappa_X(1), \ldots, \kappa_X(m))$. \end{df}
\begin{df} Let $X$ be a weight diagram of shape-class $\alpha$. For all $1 \leq j \leq s$, set $h_X^j := \operatorname{dom}([X^j_1, \ldots, X^j_{\alpha^*_j}])$. Then set $h(X)$ to be the concatenation of the $s$-tuple $(h_X^1, \ldots, h_X^s)$. \end{df}
\begin{df} Let $Y$ be a weight diagram of shape-class $\alpha$. Set $\eta(Y) := \operatorname{dom}(h(Y))$. \end{df}
Suppose that the entries of $X$ are weakly decreasing down each column. Then $E$ lifts the addition of $2 \rho_{\alpha}$ to the underlying $L_{\alpha}$-weight of $X$; in other words, $h(EX) = h(X) + 2 \rho_{\alpha}$. Hence \begin{align} \label{compat} \eta(EX) = \operatorname{dom}(h(X) + 2 \rho_{\alpha}). \end{align}
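The maps $E$, $\kappa$, and $h$, and the compatibility~(\ref{compat}), are straightforward to machine-check. A minimal Python sketch (diagrams stored as lists of rows; helper names hypothetical), verified on a small diagram of shape-class $[2,2]$:

```python
def columns(X):
    # Columns of a left-justified filling X, given as a list of rows.
    width = max(len(row) for row in X)
    return [[row[j] for row in X if len(row) > j] for j in range(width)]

def E(X):
    # (EX)^j_i = X^j_i + alpha*_j - 2i + 1, with i, j 1-indexed.
    heights = [len(c) for c in columns(X)]  # the conjugate partition alpha*
    out = [row[:] for row in X]
    for j in range(len(heights)):
        i = 0
        for row in out:
            if len(row) > j:
                i += 1
                row[j] += heights[j] - 2 * i + 1
    return out

def h(X):
    # Concatenation of the dominant rearrangements of the columns.
    return [x for col in columns(X) for x in sorted(col, reverse=True)]

def kappa(X):
    # For each distinct row length k (taken in decreasing order), the
    # dominant rearrangement of the row sums of the rows of length k;
    # the summation over columns in the definition of kappa_X(t)
    # collapses to taking row sums.
    out = []
    for k in sorted({len(row) for row in X}, reverse=True):
        out += sorted((sum(row) for row in X if len(row) == k), reverse=True)
    return out

X1 = [[1, 1], [0, 0]]  # a diagram of shape-class [2,2]
assert kappa(X1) == [2, 0] and h(X1) == [1, 0, 1, 0]
# The entries of X1 weakly decrease down each column, so
# h(E X1) = h(X1) + 2*rho, where 2*rho_[2,2] = [1, -1, 1, -1]:
assert h(E(X1)) == [2, -1, 2, -1]
```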
If $X$ is \textit{distinguished} (cf. Definition~\ref{dis}), then the pair $(\alpha, \kappa(X)) \in \Omega$ and the dominant weight $\eta(EX) \in \Lambda^+$ correspond under $\gamma$ (cf. Theorem~\ref{achar}), and both can be read off the diagram pair $(X, EX)$. The task of the weight-diagrams version of our algorithm is to find, on input $(\alpha, \nu)$, a distinguished diagram $X$ such that $\kappa(X) = \nu$, and output $(X, EX)$.
\begin{exam} \label{thes} We present a diagram pair of shape-class $[4,3,2,1,1] \vdash 11$, taken from Achar's thesis \cite{Achart}.
\begin{figure}
\caption{A diagram pair of shape-class $[4,3,2,1,1]$}
\end{figure}
We see that $\kappa(X) = [15, 14, 9, 4, 4]$ and $h(X) = [4, 4, 4, 4, 4, 5, 5, 4, 5, 4, 3]$. Furthermore, $Y = EX$, and $\eta(Y) = [8, 7, 6, 6, 5, 4, 3, 3, 2, 2, 0]$. As noted in Example~\ref{acharexam}, \[\gamma([4,3,2,1,1], \kappa(X)) = \eta(Y).\] \end{exam}
\begin{thm} \label{decomp} Let $(X, Y) \in \overline{E}(D_{\alpha})$ be a diagram pair of shape-class $\alpha$. Then $V^{(\alpha, \kappa(X))}$ occurs in the decomposition of $W^{h(X)}$ as a direct sum of irreducible $G_{\alpha}^{\text{red}}$-representations. Furthermore, $[IC_{(\alpha, \kappa(X))}]$ occurs in the decomposition of $[A^{\alpha}_{h(X)}]$ on the $\Omega$-basis. \end{thm}
\begin{proof} It suffices to prove the former statement, for the latter follows from the former in view of Lemma~\ref{omega}. For all $1 \leq t \leq m$, $1 \leq j \leq k_t$, set \[\kappa_X^j(t) := \operatorname{dom}([\kappa_X^j(t, 1), \ldots, \kappa_X^j(t, a_t)]).\] For all $1 \leq j \leq s$, let $\kappa_X^j$ be the concatenation of the tuple $(\kappa_X^j(t))_{t \,:\, k_t \geq j}$. Finally, set $\kappa_X^{\text{ref}}$ to be the concatenation of $(\kappa_X^1, \ldots, \kappa_X^s)$.
Observe first that $\kappa_X^{\text{ref}}$ is a dominant weight of $L_{\alpha}^{\text{ref}} := L_{X_{\alpha}}^{\text{ref}}$ with respect to the Borel subgroup $B_{\alpha}^{\text{ref}} := B_{L_{\alpha}^{\text{ref}}}$. To see this, note that $\kappa_X^j(t)$ is weakly decreasing for all $1 \leq t \leq m$, $1 \leq j \leq k_t$, and $L_{\alpha}^{\text{ref}}$ is included in $L_{\alpha}$ via the product, over all $1 \leq j \leq s$, of the inclusions $\prod_{t : k_t \geq j} GL_{a_t} \rightarrow GL_{\alpha^*_j}$ (cf. section 1.2).
Since $\kappa_X^j$ is a permutation of $h_X^j$ for all $1 \leq j \leq s$, it follows that $\kappa_X^{\text{ref}}$ belongs to the $W_{\alpha}$-orbit of $h(X)$, so $\kappa_X^{\text{ref}}$ is a weight of the $L_{\alpha}$-representation $W^{h(X)}$. Let $w \in W_{\alpha}$ be chosen so that $w(\kappa_X^{\text{ref}}) = h(X)$. We claim that $\kappa_X^{\text{ref}}$ is a highest weight of the restriction of $W^{h(X)}$ to $L_{\alpha}^{\text{ref}}$.
Let $\Phi_{\alpha} \subset \Lambda$ be the set of roots of $L_{\alpha}$, and let $\Phi_{\alpha}^{\text{ref}} \subset \Phi_{\alpha}$ be the subset of roots of $L_{\alpha}^{\text{ref}}$. Assume for the sake of contradiction that there exists a root $\beta \in \Phi_{\alpha}^{\text{ref}}$, positive with respect to $B_{\alpha}^{\text{ref}}$, such that $\kappa_X^{\text{ref}} + \beta$ is a weight of $W^{h(X)}$. Let $\beta^{\vee}$ denote the coroot corresponding to $\beta$. Then $\langle \kappa_X^{\text{ref}}, \beta^{\vee} \rangle \geq 0$, which implies $\langle h(X), \beta_1^{\vee} \rangle \geq 0$, where $\beta_1 := w(\beta)$.
However, $w(\kappa_X^{\text{ref}} + \beta) = h(X) + \beta_1$ is a weight of $W^{h(X)}$, so $\beta_1$ must be negative with respect to $B_{\alpha}$. Since $h(X)$ is dominant with respect to $B_{\alpha}$, it follows that $\langle h(X), \beta_1^{\vee} \rangle \leq 0$.
We conclude that $\langle h(X), \beta_1^{\vee} \rangle = 0$. Let $s_{\beta_1} \in W_{\alpha}$ be the reflection corresponding to $\beta_1$. Then $s_{\beta_1}(h(X)) = h(X)$. Hence $s_{\beta_1}(h(X) + \beta_1) = h(X) - \beta_1$ is a weight of $W^{h(X)}$ that exceeds $h(X)$ in the root order, contradicting the maximality of the highest weight $h(X)$.
Let $V$ be the $(GL_{a_1})^{k_1} \times \cdots \times (GL_{a_m})^{k_m}$-representation given by \[ V := \left(V^{\kappa_X^1(1)} \boxtimes \cdots \boxtimes V^{\kappa_X^{k_1}(1)}\right) \boxtimes \cdots \boxtimes \left(V^{\kappa_X^1(m)} \boxtimes \cdots \boxtimes V^{\kappa_X^{k_m}(m)}\right).\]
What we have just shown implies that $V$ occurs in the decomposition of $W^{h(X)}$ as a direct sum of irreducible $L_{\alpha}^{\text{ref}}$-representations. Recall from section 1.2 that $G_{\alpha}^{\text{red}}$ is embedded in $L_{\alpha}^{\text{ref}}$ via the product, over all $1 \leq t \leq m$, of the diagonal embeddings $GL_{a_t} \rightarrow (GL_{a_t})^{k_t}$. It follows that the restriction of $V$ to $G_{\alpha}^{\text{red}} \cong GL_{a_1, \ldots, a_m}$ is \[\left(V^{\kappa_X^1(1)} \otimes \cdots \otimes V^{\kappa_X^{k_1}(1)}\right) \boxtimes \cdots \boxtimes \left(V^{\kappa_X^1(m)} \otimes \cdots \otimes V^{\kappa_X^{k_m}(m)}\right).\]
Therefore, to see that \[\dim \operatorname{Hom}_{G_{\alpha}^{\text{red}}} \left(V^{(\alpha, \kappa(X))}, V \right) > 0,\] it suffices to show that \[\dim \operatorname{Hom}_{GL_{a_t}}\left(V^{\kappa_X(t)}, V^{\kappa_X^1(t)} \otimes \cdots \otimes V^{\kappa_X^{k_t}(t)}\right) > 0\] for all $1 \leq t \leq m$.
This is a consequence of the Parthasarathy--Ranga Rao--Varadarajan conjecture, first proved for complex semisimple algebraic groups (via sheaf cohomology) by Kumar \cite{Kumar} in 1988. For complex general linear groups, a combinatorial proof via honeycombs is given in Knutson--Tao \cite{Knutson}, section 4. \end{proof}
\begin{rem} In Achar's work, the corresponding claim is Proposition 4.4 in \cite{Acharj}. Unfortunately, Achar's proof is incorrect: He implicitly assumes that the combinatorial map $\kappa \colon D_{\alpha} \rightarrow \Omega_{\alpha}$ lifts the action of a representation-theoretic map $\Lambda^+_{\alpha} \rightarrow \Omega_{\alpha}$, which he also denotes by $\kappa$, so that $\kappa(X) = \kappa(h(X))$. This is manifestly untrue, for permuting the entries within a column of $X$ affects $\kappa(X)$ but leaves $h(X)$ unchanged.
Thus, Achar's assertion: \begin{quote} ``\ldots the $G^{\alpha}$-submodule generated by the $\mu$-weight space of $V^L_{\mu}$ is a representation whose highest weight is the restriction of $\mu$, \textit{which is exactly what $E$ is}'' [emphasis added] \end{quote} is false unless $\kappa_X^{\text{ref}}$ coincides with $h(X)$ and $\kappa_X(t) = \sum_{j=1}^{k_t} \kappa_X^j(t)$ for all $1 \leq t \leq m$ --- in which case the $L_{\alpha}^{\text{ref}}$-subrepresentation of $W^{h(X)}$ generated by the highest weight space is isomorphic to $V$, and the highest weight of its restriction to $G_{\alpha}^{\text{red}}$ is $\kappa(X)$. \end{rem}
\begin{exam} Set $\alpha := [2,2]$. Note that $G_{[2,2]}^{\text{red}} \cong GL_{2}$ and $L_{[2,2]}^{\text{ref}} = L_{[2,2]} \cong (GL_{2})^2$. Furthermore, $G_{[2,2]}^{\text{red}}$ is embedded in $L_{[2,2]}$ via the diagonal embedding $GL_2 \rightarrow (GL_2)^2$.
Let $X_1$ and $X_2$ be the weight diagrams $\begin{smallmatrix} 1 & 1 \\ 0 & 0 \end{smallmatrix}$ and $\begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix}$, respectively. Then \[\kappa(X_1) = [2,0], \quad \kappa(X_2) = [1,1], \quad \text{and} \quad h(X_1) = h(X_2) = [1,0,1,0].\]
The restriction of the $L_{[2,2]}$-representation \[W^{[1,0,1,0]} = W^{[1,0]} \boxtimes W^{[1,0]}\] to $G_{[2,2]}^{\text{red}}$ is \[W^{[1,0]} \otimes W^{[1,0]} \cong W^{[2,0]} \oplus W^{[1,1]}.\]
Hence Theorem~\ref{decomp} holds for $X_1$ and $X_2$.
However, Achar's proof is valid for $X_1$ only. To see this, let $v$ and $w$ be weight vectors of $W^{[1,0]}$ of weight $[1,0]$ and $[0,1]$, respectively. Up to scaling, \[\lbrace v \otimes v, v \otimes w, w \otimes v, w \otimes w \rbrace\] is the unique basis of weight vectors for $W^{[1,0]} \boxtimes W^{[1,0]}$. Whereas $v \otimes v$ and $w \otimes w$ each generates a $GL_2$-subrepresentation isomorphic to $W^{[2,0]}$, both $v \otimes w$ and $w \otimes v$ are cyclic vectors. No weight space of $W^{[1,0]} \boxtimes W^{[1,0]}$ generates a $GL_2$-subrepresentation isomorphic to $W^{[1,1]}$ (instead, $W^{[1,1]}$ is generated by $v \otimes w - w \otimes v$). \end{exam}
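The decomposition $W^{[1,0]} \otimes W^{[1,0]} \cong W^{[2,0]} \oplus W^{[1,1]}$ invoked above can be confirmed on characters. A minimal numeric sketch (function names hypothetical), checking the degree-2 polynomial identity at a handful of sample points:

```python
# Characters of the relevant GL_2 representations, as polynomials in the
# eigenvalues x and y of a diagonal matrix:
def ch_std(x, y):  return x + y                  # W^[1,0]
def ch_sym2(x, y): return x * x + x * y + y * y  # W^[2,0]
def ch_alt2(x, y): return x * y                  # W^[1,1]

# (x + y)^2 = (x^2 + xy + y^2) + xy as polynomials; a few evaluation
# points suffice for a sanity check:
for x, y in [(1, 1), (2, 3), (5, -1), (7, 2), (0, 4)]:
    assert ch_std(x, y) ** 2 == ch_sym2(x, y) + ch_alt2(x, y)
```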
\begin{cor} \label{decamp} Let $\nu \in \Omega_{\alpha}$, and let $\lbrace \nu_{i,j} \rbrace_{\substack{1 \leq i \leq \ell \\ 1 \leq j \leq \alpha_i}}$ be a collection of integers such that \[\nu_i = \nu_{i, 1} + \cdots + \nu_{i, \alpha_i}\] for all $1 \leq i \leq \ell$. For all $1 \leq j \leq s$, set $\mu^j := \operatorname{dom}([\nu_{1, j}, \ldots, \nu_{\alpha^*_j, j}])$. Set $\mu$ to be the concatenation of $(\mu^1, \ldots, \mu^s)$. Then $\mu \in \Lambda^+_{\alpha, \nu}$. \end{cor}
\begin{proof} Let $X$ be the filling of the Young diagram of shape $\alpha$ for which $\nu_{i,j}$ is the entry in the $i^{\text{th}}$ row and $j^{\text{th}}$ column of $X$ for all $i, j$. Then $\kappa(X) = \nu$, and $h(X) = \mu$. Hence the result follows from Theorem~\ref{decomp}. \end{proof}
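The construction of $\mu$ from a splitting in Corollary~\ref{decamp} is mechanical. A Python sketch (names hypothetical), checked against the splitting used in an earlier example with $\alpha = [3,2,2,1]$ and $\nu = [15,8,8,4]$:

```python
def mu_from_splitting(alpha, split):
    # split[i] lists the alpha[i] integers nu_{i+1, j} summing to nu_{i+1}.
    # For each column j, mu^j is the dominant rearrangement of the j-th
    # entries of the rows reaching that column; mu concatenates the mu^j.
    assert all(len(split[i]) == alpha[i] for i in range(len(alpha)))
    width = max(alpha)
    mu = []
    for j in range(width):
        mu += sorted((row[j] for row in split if len(row) > j), reverse=True)
    return mu

# The splitting from the earlier example (all parts 5 or 4):
split = [[5, 5, 5], [4, 4], [4, 4], [4]]
assert mu_from_splitting([3, 2, 2, 1], split) == [5, 4, 4, 4, 5, 4, 4, 5]
```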
\begin{cor} \label{inside} Let $\nu \in \Omega_{\alpha}$. Then $\mathfrak{A}(\alpha, \nu) \in \Lambda^+_{\alpha, \nu}$. \end{cor}
\begin{proof} By Remark~\ref{iter}, it suffices to show that $\mathfrak{A}_{\operatorname{iter}}(\alpha, \nu) \in \Lambda^+_{\alpha, \nu}$. For all $1 \leq i \leq \ell$ and $1 \leq j \leq \alpha_i$, set $\nu_{i,j} := \mu^j_{\sigma^{j}(i)}$. Then Corollary~\ref{decamp} implies the result. \end{proof}
\eject
\section{The Algorithm, Weight-Diagrams Version}
\subsection{Overview} In this section, we reengineer our algorithm from section 2.2 to output diagram pairs rather than weights. Let $D_{\ell}$ be the set of weight diagrams, of any shape-class, with $\ell$ rows. For a diagram $X \in D_{\ell}$, we denote by $X_{i,j}$ the entry of $X$ in the $i^{\text{th}}$ row and the $j^{\text{th}}$ column.
We define a recursive algorithm $\mathcal{A}$ that computes a map \[\mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \times \lbrace \pm 1 \rbrace \rightarrow D_{\ell} \times D_{\ell}\] by determining the entries in the first column of each diagram of its output and using recursion to ascertain the entries in the remaining columns. Whenever we write $\mathcal{A}(\alpha, \nu)$, we refer to $\mathcal{A}(\alpha, \nu, -1)$.
Let maps $p_1, p_2 \colon D_{\ell} \times D_{\ell} \rightarrow D_{\ell}$ be given by projection onto the first and second factors, respectively. We refer to $p_1 \mathcal{A}(\alpha, \nu)$ as the \textit{left} diagram and to $p_2 \mathcal{A}(\alpha, \nu)$ as the \textit{right} diagram. The algorithm $\mathcal{A}$ computes the Lusztig--Vogan bijection via $\gamma(\alpha, \nu) = \eta p_2 \mathcal{A}(\alpha, \nu)$.
While $\mathcal{A}$ relies on the same functions as $\mathfrak{A}$ for its computations, it also requires companion versions of these functions that use floors rather than ceilings. The \textit{candidate-floor} function $\mathcal{C}_1$, and the \textit{ranking-by-floors} and \textit{column-floors} algorithms $\mathcal{R}_1$ and $\mathcal{U}_1$, are analogous to the function $\mathcal{C}_{-1}$, and the algorithms $\mathcal{R}_{-1}$ and $\mathcal{U}_{-1}$, respectively, and we define them formally in section 4.2.
More substantively, the recursive structure of $\mathcal{A}$ differs from that of $\mathfrak{A}$. The integer-sequences version is singly recursive: On input $(\alpha, \nu)$, it reduces the task of determining the output to one sub-problem, namely, computing $\mathfrak{A}(\alpha', \nu')$. In contrast, the weight-diagrams version is multiply recursive, and, depending on the input, it may require the solutions to several sub-problems to be assembled in order to return the output.
After computing the first column of each output diagram, the weight-diagrams version creates a separate branch for each \textit{distinct} entry in the first column of the left diagram. Then it attaches each branch's output diagrams to the first columns already computed to build the output diagrams of the whole recursion tree. The attachment process is trivial; preparing each branch for its recursive call is not.\footnote{Thus, $\mathcal{A}$ deviates from the pattern of most prototypical divide-and-conquer algorithms, such as mergesort, for which dividing the residual input into branches is easier than combining the resulting outputs. }
On input $(\alpha, \nu, \epsilon)$, the algorithm $\mathcal{A}$ undertakes the following steps to compute $p_1 \mathcal{A}(\alpha, \nu, \epsilon)$ (the diagram $p_2 \mathcal{A}(\alpha, \nu, \epsilon)$ is computed simultaneously and similarly): \begin{enumerate} \item It computes $\sigma := \mathcal{R}_{\epsilon}(\alpha, \nu)$, which it construes as permuting the rows of a blank diagram of shape $\alpha$;\footnote{By a diagram of shape $\alpha$, we mean a diagram for which the $i^{\text{th}}$ row contains $\alpha_i$ boxes for all $1 \leq i \leq \ell$. } \item It fills in the first column of the (permuted) diagram with the entries of $\iota := \mathcal{U}_{\epsilon}(\alpha, \nu, \sigma)$; \item For each row, it appeals to the \textit{row-survival function} to query whether the row \textit{survives} into the residual input (viz., is of length greater than $1$), and, if so, determine which branch of the residual input it is sorted into (and its position therein); \item For all $x$, it records the surviving rows in the $x^{\text{th}}$ branch in $\alpha^{(x)}$, and subtracts off the corresponding entries in $\iota$ from those in $\nu$ to obtain $\nu^{(x)}$; \item For all $x$, it adjusts $\nu^{(x)}$ to $\hat{\nu}^{(x)}$ to reflect the data from the other branches; \item For all $x$, it sets $X^{(x)} := p_1 \mathcal{A}(\alpha^{(x)}, \hat{\nu}^{(x)}, -\epsilon)$ and attaches $X^{(x)}$ to the first column. \end{enumerate}
After the rows of a blank diagram of shape $\alpha$ have been permuted according to $\sigma \in \mathfrak{S}_{\ell}$, the $i^{\text{th}}$ row from the top is of length $\alpha_{\sigma^{-1}(i)}$. Thus, the $i^{\text{th}}$ row \textit{survives} into the residual input if and only if $\alpha_{\sigma^{-1}(i)} > 1$. Which branch it belongs to depends on its first-column entry.
The first column of the permuted diagram is filled in with the entries of $\iota$. Each distinct entry $\iota^{\circ}$ in $\iota$ gives rise to its own branch, comprising the surviving rows whose first-column entry is $\iota^{\circ}$ (a branch may be empty). If the $i^{\text{th}}$ row does survive, it is sorted into the $x^{\text{th}}$ branch, where $x$ is the number of distinct entries in the subsequence $[\iota_1, \ldots, \iota_i]$; if, furthermore, exactly $i'$ rows among the first $i$ survive into the $x^{\text{th}}$ branch, then the $i^{\text{th}}$ row becomes the $i'^{\text{th}}$ row in the $x^{\text{th}}$ branch.
To encompass these observations, we define the \textit{row-survival} function as follows. \begin{df}
For all $(\alpha, \sigma, \iota) \in \mathbb{N}^{\ell} \times \mathfrak{S}_{\ell} \times \mathbb{Z}^{\ell}_{\text{dom}}$, \[\mathcal{S}(\alpha, \sigma, \iota) \colon \lbrace 1, \ldots, \ell \rbrace \rightarrow \lbrace 1, \ldots, \ell \rbrace \times \lbrace 0, 1, \ldots, \ell \rbrace\] is given by \[\mathcal{S}(\alpha, \sigma, \iota)(i) := \big(|\lbrace \iota_{i'} : i' \leq i \rbrace|, |\lbrace i' \leq i : \iota_{i'} = \iota_i; \alpha_{\sigma^{-1}(i')} > 1 \rbrace| \cdot 1_{i} \big),\] where \[1_{i} := \begin{cases} 1 & \alpha_{\sigma^{-1}(i)} > 1 \\ 0 & \alpha_{\sigma^{-1}(i)} = 1 \end{cases}.\] \end{df}
\begin{rem} Suppose $\mathcal{S}(\alpha, \sigma, \iota)(i) = (x, i')$. Assuming $i' > 0$, the $i^{\text{th}}$ row becomes the $i'^{\text{th}}$ row in the $x^{\text{th}}$ branch (if $i' = 0$, the row dies). \end{rem}
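A Python transcription of $\mathcal{S}$ (with $\sigma$ supplied in one-line notation, so that `sigma[i-1]` $= \sigma(i)$; names hypothetical) reproduces both computations of the row-survival function appearing in the examples of this section:

```python
def survival(alpha, sigma, iota):
    # S(alpha, sigma, iota): for each row i of the permuted diagram,
    # return (branch index, position in branch), with position 0 when
    # the row dies (i.e., has length 1).
    ell = len(alpha)
    inv = {sigma[i - 1]: i for i in range(1, ell + 1)}  # sigma^{-1}
    out = []
    for i in range(1, ell + 1):
        branch = len({iota[k] for k in range(i)})  # distinct entries so far
        if alpha[inv[i] - 1] > 1:
            pos = len([k for k in range(1, i + 1)
                       if iota[k - 1] == iota[i - 1]
                       and alpha[inv[k] - 1] > 1])
        else:
            pos = 0
        out.append((branch, pos))
    return out

# alpha = [4,3,2,1,1], sigma = 42135, iota = [4,4,4,4,4]:
assert survival([4, 3, 2, 1, 1], [4, 2, 1, 3, 5], [4, 4, 4, 4, 4]) == \
    [(1, 1), (1, 2), (1, 0), (1, 3), (1, 0)]
# alpha = [1,2,3], sigma = 123, iota = [5,5,4]:
assert survival([1, 2, 3], [1, 2, 3], [5, 5, 4]) == \
    [(1, 0), (1, 1), (2, 1)]
```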
\begin{exam} \label{surv} We revisit the input $(\alpha, \nu) := ([4,3,2,1,1], [15,14,9,4,4])$ from Example~\ref{acharexam}. As noted therein, $\sigma := \mathcal{R}_{-1}(\alpha, \nu) = 42135$ and $\iota := \mathcal{U}_{-1}(\alpha, \nu, \sigma) = [4,4,4,4,4]$. Thus, beginning with a blank diagram of shape $\alpha$, we see that the permuted diagram (with first column filled in) looks like \begin{figure}
\caption{The left diagram after steps 1 and 2}
\end{figure}
From the picture, it is clear that there is exactly one branch, comprising the first, second, and fourth rows. The row-survival function indicates the same, for \[\big(\mathcal{S}(\alpha, \sigma, \iota)(1), \ldots, \mathcal{S}(\alpha, \sigma, \iota)(5)\big) = \big((1,1), (1,2), (1,0), (1,3), (1,0)\big).\]
We see later that $\mathcal{A}(\alpha, \nu) = (X, Y)$ in the notation of Example~\ref{thes}.
\end{exam}
\subsection{The algorithm}
Before we describe the algorithm, we define the preliminary functions that use floors.
\begin{df} Given a pair of integer sequences \[(\alpha, \nu) = ([\alpha_1, \ldots, \alpha_{\ell}], [\nu_1, \ldots, \nu_{\ell}]) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell},\] an integer $i \in \lbrace 1, \ldots, \ell \rbrace$, and an ordered pair of disjoint sets $(I_a, I_b)$ satisfying $I_a \cup I_b = \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i \rbrace$, we define the \textit{candidate-floor} function $\mathcal{C}_1$ as follows:
\[\mathcal{C}_1(\alpha, \nu, i , I_a, I_b) := \left \lfloor \frac{\nu_i - \sum_{j \in I_a} \min \lbrace \alpha_i, \alpha_j \rbrace + \sum_{j \in I_b} \min \lbrace \alpha_i, \alpha_j \rbrace}{\alpha_i} \right \rfloor.\] \end{df}
\begin{df} The \textit{ranking-by-floors} algorithm $\mathcal{R}_1$ computes a function $\mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \rightarrow \mathfrak{S}_{\ell}$ iteratively over $\ell$ steps.
Say $\mathcal{R}_1(\alpha, \nu) = \sigma$. On the $i^{\text{th}}$ step of the algorithm, $\sigma^{-1}(\ell), \ldots, \sigma^{-1}(\ell-i+2)$ have already been determined. Set \[J_i := \lbrace \sigma^{-1}(\ell), \ldots, \sigma^{-1}(\ell-i+2) \rbrace \quad \text{and} \quad J'_i := \lbrace 1, \ldots, \ell \rbrace \setminus J_i.\] Then $\sigma^{-1}(\ell-i+1)$ is designated the numerically maximal $j \in J'_i$ among those for which \[(\mathcal{C}_1(\alpha, \nu, j, J'_i \setminus \lbrace j \rbrace, J_i), -\alpha_j, \nu_j)\] is lexicographically minimal. \end{df}
\begin{df} The \textit{column-floors} algorithm $\mathcal{U}_1$ is iterative with $\ell$ steps and computes a function $\mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \times \mathfrak{S}_{\ell} \rightarrow \mathbb{Z}^{\ell}_{\text{dom}}$.
Say $\mathcal{U}_1(\alpha, \nu, \sigma) = [\iota_1, \ldots, \iota_{\ell}]$. On the $i^{\text{th}}$ step of the algorithm, $\iota_{\ell}, \ldots, \iota_{\ell - i + 2}$ have already been determined. Then \[\iota_{\ell - i + 1} := \mathcal{C}_1(\alpha, \nu, \sigma^{-1}(\ell - i + 1), \sigma^{-1} \lbrace 1, \ldots, \ell - i \rbrace, \sigma^{-1} \lbrace \ell - i + 2, \ldots, \ell \rbrace) + \ell - 2i + 1\] unless the right-hand side is less than $\iota_{\ell - i +2}$, in which case $\iota_{\ell - i + 1} := \iota_{\ell - i + 2}$. \end{df}
We assemble these functions, together with the preliminary functions that use ceilings, and the row-survival function, into the recursive algorithm $\mathcal{A} \colon \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \times \lbrace \pm 1 \rbrace \rightarrow D_{\ell} \times D_{\ell}$.
On input $(\alpha, \nu, \epsilon)$, the algorithm sets \[\sigma := \mathcal{R}_{\epsilon}(\alpha, \nu) \quad \text{and} \quad \iota := \mathcal{U}_{\epsilon}(\alpha, \nu, \sigma).\]
Next it sets \[X_{i,1} := \iota_{i} \quad \text{and} \quad Y_{i, 1} := \iota_i + \ell - 2i + 1\] for all $1 \leq i \leq \ell$.
For all $(x, i')$ in the image of $\mathcal{S}(\alpha, \sigma, \iota)$ such that $i' > 0$, we write \[i_{(x, i')} := \mathcal{S}(\alpha, \sigma, \iota)^{-1}(x, i').\]
The algorithm sets $\mathcal{k} := |\lbrace \iota_1, \ldots, \iota_{\ell} \rbrace|$, which counts the number of branches.
For all $1 \leq x \leq \mathcal{k}$, it sets \[\ell_x := \max \left\lbrace i' : (x, i') \in \mathcal{S}(\alpha, \sigma, \iota) \lbrace 1, \ldots, \ell \rbrace \right\rbrace.\]
Note that $\ell_x$ counts the number of rows surviving into the $x^{\text{th}}$ branch; if $\ell_x = 0$, then the $x^{\text{th}}$ branch is empty.
If $\ell_x > 0$, then the $x^{\text{th}}$ branch contains $\ell_x$ surviving rows, and the algorithm sets \[\alpha^{(x)} := \left [\alpha_{\sigma^{-1}\left(i_{(x, 1)}\right)} - 1, \ldots, \alpha_{\sigma^{-1}\left(i_{(x, \ell_x)}\right)} - 1 \right] \] and \[\nu^{(x)} := \left [\nu_{\sigma^{-1}\left(i_{(x, 1)}\right)} - \iota_{i_{(x, 1)}}, \ldots, \nu_{\sigma^{-1}\left(i_{(x, \ell_x)}\right)} - \iota_{i_{(x, \ell_x)}} \right].\]
The algorithm does not call itself on $(\alpha^{(x)}, \nu^{(x)})$ directly, because it must first adjust $\nu^{(x)}$ to reflect the data from the other branches, if any are present.
For all $1 \leq i' \leq \ell_x$, it sets \[\hat{\nu}^{(x)}_{i'} := \nu^{(x)}_{i'} - \sum_{x' = 1}^{x-1} \sum_{i_0 = 1}^{\ell_{x'}} \min \left\lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right\rbrace + \sum_{x' = x+1}^{\mathcal{k}} \sum_{i_0 = 1}^{\ell_{x'}} \min \left\lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right\rbrace.\]
Then it sets $\hat{\nu}^{(x)} := \left[\hat{\nu}^{(x)}_1, \ldots, \hat{\nu}^{(x)}_{\ell_x}\right]$ and $\left(X^{(x)}, Y^{(x)} \right) := \mathcal{A}\left(\alpha^{(x)}, \hat{\nu}^{(x)}, -\epsilon \right)$.
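The adjustment step can be sketched in Python as follows (names hypothetical). On the two-branch data $(\alpha^{(1)}, \nu^{(1)}) = ([1],[5])$ and $(\alpha^{(2)}, \nu^{(2)}) = ([2],[7])$ arising later in this section, it yields $\hat{\nu}^{(1)} = [6]$ and $\hat{\nu}^{(2)} = [6]$:

```python
def adjust(alphas, nus):
    # Adjust each branch's nu^(x) to nu-hat^(x): subtract the overlaps
    # min{alpha^(x)_i, alpha^(x')_i0} with earlier branches and add the
    # overlaps with later branches.
    k = len(alphas)
    out = []
    for x in range(k):
        adjusted = []
        for i, a in enumerate(alphas[x]):
            delta = 0
            for x2 in range(k):
                if x2 == x:
                    continue
                overlap = sum(min(a, a2) for a2 in alphas[x2])
                delta += -overlap if x2 < x else overlap
            adjusted.append(nus[x][i] + delta)
        out.append(adjusted)
    return out

# Two branches ([1], [5]) and ([2], [7]):
assert adjust([[1], [2]], [[5], [7]]) == [[6], [6]]
# A single branch needs no adjustment:
assert adjust([[1, 2, 3]], [[5, 10, 11]]) == [[5, 10, 11]]
```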
The algorithm fills in the rest of the entries of $X$ and $Y$ according to the following rule: For all $(i', j') \in \mathbb{N} \times \mathbb{N}$ such that $X^{(x)}$ and $Y^{(x)}$ each have an entry in the $i'^{\text{th}}$ row and $j'^{\text{th}}$ column, \begin{align} \label{attachx} X_{i_{(x, i')}, j'+1} := X^{(x)}_{i', j'} + \sum_{x' = 1}^{x-1} (\alpha^{(x')})^*_{j'} - \sum_{x' = x+1}^{\mathcal{k}} (\alpha^{(x')})^*_{j'}, \end{align}
where $(\alpha^{(x')})^*_{j'} := |\lbrace i_0 : \alpha^{(x')}_{i_0} \geq j' \rbrace|$, and \begin{align} \label{attachy} Y_{i_{(x,i')}, j'+1} := Y^{(x)}_{i', j'}. \end{align}
Finally, it returns $(X, Y)$.
Henceforward we adopt the notation of Equation~\ref{attachx} and denote $|\lbrace i : \alpha_i \geq j \rbrace|$ by $\alpha^*_j$ for all integer sequences $\alpha$, regardless of whether $\alpha$ is a partition.
\begin{exam} \label{cont} Maintain the notation of Example~\ref{surv}. We proceed to compute $\mathcal{A}(\alpha, \nu)$.
Since $\sigma := \mathcal{R}_{-1}(\alpha, \nu) = 42135$ and $\iota := \mathcal{U}_{-1}(\alpha, \nu, \sigma) = [4,4,4,4,4]$, we see that \[ [X_{1, 1}, X_{2, 1}, X_{3, 1}, X_{4, 1}, X_{5, 1}] = [4, 4, 4, 4, 4]\] and \[ [Y_{1, 1}, Y_{2, 1}, Y_{3, 1}, Y_{4, 1}, Y_{5, 1}] = [8, 6, 4, 2, 0].\]
Set $f := \mathcal{S}(\alpha, \sigma, \iota)$. Recall from Example~\ref{surv} that \[\big(f(1), f(2), f(3), f(4), f(5)\big) = \big((1,1), (1,2), (1,0), (1,3), (1,0)\big).\]
Thus, $\mathcal{k} = 1$ and $\ell_1 = 3$. Furthermore, $(i_{(1,1)}, i_{(1,2)}, i_{(1,3)}) = (1, 2, 4)$. It follows that \[\alpha^{(1)} = [1, 2, 3] \quad \text{and} \quad \hat{\nu}^{(1)} = \nu^{(1)} = [5, 10, 11].\]
(Since the first branch is the only branch, no adjustment to $\nu^{(1)}$ is required and $\hat{\nu}^{(1)} = \nu^{(1)}$.)
As it happens, we find that $X^{(1)}$ and $Y^{(1)}$ look as depicted in Figure~\ref{firstbranch}. \begin{figure}
\caption{The diagram pair obtained from the first branch}
\label{firstbranch}
\end{figure}
Finally, we ``attach'' $X^{(1)}$ and $Y^{(1)}$ to the first columns of $X$ and $Y$, respectively, to complete the output.
\begin{figure}
\caption{The diagram pair obtained from the recursion tree}
\end{figure}
\end{exam}
Since Example~\ref{cont} involves only one branch, it does not fully illustrate the contours of the algorithm. For this reason, we also show how the algorithm computes the pair $(X^{(1)}, Y^{(1)})$ from Example~\ref{cont}; this computation involves multiple branches.
\begin{exam}
Set $\alpha := [1,2,3]$ and $\nu := [5,10,11]$. We compute $(\mathsf{X}, \mathsf{Y}) := \mathcal{A}(\alpha, \nu, 1)$.
We find $\sigma := \mathcal{R}_1(\alpha, \nu) = 123$ and $\iota := \mathcal{U}_1(\alpha, \nu, \sigma) = [5,5,4]$, so \[[\mathsf{X}_{1,1}, \mathsf{X}_{2,1}, \mathsf{X}_{3,1}] = [5,5,4] \quad \text{and} \quad [\mathsf{Y}_{1,1}, \mathsf{Y}_{2,1}, \mathsf{Y}_{3,1}] = [7,5,2].\]
Set $\mathsf{f} := \mathcal{S}(\alpha, \sigma, \iota)$. Note that \[\big(\mathsf{f}(1), \mathsf{f}(2), \mathsf{f}(3)\big) = \big((1,0), (1,1), (2,1)\big).\]
Thus, $\mathcal{k} = 2$ and $\ell_1 = \ell_2 = 1$. Furthermore, $i_{(1,1)} = 2$ and $i_{(2,1)} = 3$. It follows that \[(\alpha^{(1)}, \nu^{(1)}) = ([1], [5]) \quad \text{and} \quad (\alpha^{(2)}, \nu^{(2)}) = ([2], [7]).\]
Hence \[(\alpha^{(1)}, \hat{\nu}^{(1)}) = ([1], [6]) \quad \text{and} \quad (\alpha^{(2)}, \hat{\nu}^{(2)}) = ([2], [6]).\]
We draw the diagram pairs $(\mathsf{X}^{(1)}, \mathsf{Y}^{(1)})$ and $(\mathsf{X}^{(2)}, \mathsf{Y}^{(2)})$ below.
\begin{figure}
\caption{The diagram pairs obtained from the first and second branches}
\end{figure}
Finally, we ``attach'' these diagrams to the first columns computed above to complete the output.
\begin{figure}
\caption{The diagram pair obtained from the recursion tree}
\end{figure}
\textit{Nota bene.} Equation~\ref{attachx} dictates that the entries of $\mathsf{X}^{(1)}$ and $\mathsf{X}^{(2)}$ must be modified before they can be adjoined to $\mathsf{X}$, but the entries of $\mathsf{Y}^{(1)}$ and $\mathsf{Y}^{(2)}$ are adjoined to $\mathsf{Y}$ unchanged. \end{exam}
\subsection{Properties}
The following propositions delineate a few properties of $\mathcal{A}$.
\begin{prop} \label{permute} Let $(\beta, \xi), (\alpha, \nu) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell}$, and suppose that the multisets \[\lbrace (\beta_1, \xi_1), \ldots, (\beta_{\ell}, \xi_{\ell}) \rbrace \quad \text{and} \quad \lbrace (\alpha_1, \nu_1), \ldots, (\alpha_{\ell}, \nu_{\ell}) \rbrace\] are coincident. Then $\mathcal{A}(\beta, \xi, \pm 1) = \mathcal{A}(\alpha, \nu, \pm 1)$. \end{prop}
\begin{proof} We prove $\mathcal{A}(\beta, \xi, -1) = \mathcal{A}(\alpha, \nu, -1)$; the proof of $\mathcal{A}(\beta, \xi, 1) = \mathcal{A}(\alpha, \nu, 1)$ is analogous.
Set $\tau := \mathcal{R}_{-1}(\beta, \xi)$ and $\sigma := \mathcal{R}_{-1}(\alpha, \nu)$. It suffices to show that \begin{align} \label{perm} \beta_{\tau^{-1}(i)} = \alpha_{\sigma^{-1}(i)} \quad \text{and} \quad \xi_{\tau^{-1}(i)} = \nu_{\sigma^{-1}(i)} \end{align} for all $1 \leq i \leq \ell$. The proof is by induction on $i$; we show the inductive step. In other words, we assume that Equation~\ref{perm} holds for all $1 \leq i_0 \leq i-1$ and show it holds for $i$.
Set $J := \tau^{-1} \lbrace 1, \ldots, i-1 \rbrace$ and $J' := \lbrace 1, \ldots, \ell \rbrace \setminus J$. Also set $I := \sigma^{-1} \lbrace 1, \ldots, i-1 \rbrace$ and $I' := \lbrace 1, \ldots, \ell \rbrace \setminus I$. By the inductive hypothesis, the multisets \[\lbrace (\beta_j, \xi_j) \rbrace_{j \in J} \quad \text{and} \quad \lbrace (\alpha_j, \nu_j) \rbrace_{j \in I} \] are coincident.
Therefore, the multisets \[\lbrace (\beta_j, \xi_j) \rbrace_{j \in J'} \quad \text{and} \quad \lbrace (\alpha_j, \nu_j) \rbrace_{j \in I'} \] are coincident, and there exists a bijection $\zeta \colon I' \rightarrow J'$ such that $(\beta_j, \xi_j) = (\alpha_{\zeta^{-1}(j)}, \nu_{\zeta^{-1}(j)})$ for all $j \in J'$.
Then \[(\mathcal{C}_{-1}(\beta, \xi, j, J, J' \setminus \lbrace j \rbrace), \beta_j, \xi_j) = (\mathcal{C}_{-1}(\alpha, \nu, \zeta^{-1}(j), I, I' \setminus \lbrace \zeta^{-1}(j) \rbrace), \alpha_{\zeta^{-1}(j)}, \nu_{\zeta^{-1}(j)})\] for all $j \in J'$.
Hence the lexicographically maximal value of the function \[j \mapsto (\mathcal{C}_{-1}(\beta, \xi, j, J, J' \setminus \lbrace j \rbrace), \beta_j, \xi_j)\] over the domain $J'$ coincides with the lexicographically maximal value of the function \[j \mapsto (\mathcal{C}_{-1}(\alpha, \nu, j, I, I' \setminus \lbrace j \rbrace), \alpha_j, \nu_j) \] over the domain $I'$.
By definition of $\mathcal{R}_{-1}$, the former value is attained at $j = \tau^{-1}(i)$, and the latter value is attained at $j = \sigma^{-1}(i)$. The result follows.
\end{proof}
\begin{prop} \label{eworks} Let $(\alpha, \nu, \epsilon) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \times \lbrace \pm 1 \rbrace$. Then $\mathcal{A}(\alpha, \nu, \epsilon) \in \overline{E}(D_{\operatorname{dom}(\alpha)})$. \end{prop}
\begin{proof} Set $(X, Y) := \mathcal{A}(\alpha, \nu, \epsilon)$. We first show $(X, Y) \in D_{\operatorname{dom}(\alpha)} \times D_{\operatorname{dom}(\alpha)}$. The proof is by induction on $\max \lbrace \alpha_1, \ldots, \alpha_{\ell} \rbrace$; we show the inductive step. Since $X$ has an entry in the $i^{\text{th}}$ row and $j^{\text{th}}$ column if and only if $Y$ does for all $(i, j) \in \mathbb{N} \times \mathbb{N}$, it suffices to show $X \in D_{\operatorname{dom}(\alpha)}$. By construction, the first column of $X$ has $\alpha^*_1$ entries. Applying the inductive hypothesis, viz., $(X^{(x)}, Y^{(x)}) \in D_{\operatorname{dom}(\alpha^{(x)})} \times D_{\operatorname{dom}(\alpha^{(x)})}$, we find the $(j'+1)^{\text{th}}$ column of $X$ has $\sum_{x=1}^{\mathcal{k}} (\alpha^{(x)})^*_{j'} = \alpha^*_{j'+1}$ entries for all $1 \leq j' \leq \max \lbrace \alpha_1, \ldots, \alpha_{\ell} \rbrace -1$. We conclude that $X$ is of shape $\alpha$.
To see that $(X, Y) \in \overline{E}(D_{\operatorname{dom}(\alpha)})$, we show $EX = Y$. Again the proof is by induction on $\max \lbrace \alpha_1, \ldots, \alpha_{\ell} \rbrace$, and we show the inductive step. By construction, $X^1_i + \alpha^*_1 - 2i + 1 = Y^1_i$ for all $1 \leq i \leq \ell$. Let $(i', j') \in \mathbb{N} \times \mathbb{N}$ such that $X^{(x)}$ and $Y^{(x)}$ each have an entry in the ${i'}^{\text{th}}$ row and ${j'}^{\text{th}}$ column, and set $\mathcal{i}$ so that $X^{(x)}_{i', j'} = (X^{(x)})^{j'}_{\mathcal{i}}$. Note that \[X_{i_{(x, i')}, j'+1} = X^{j'+1}_{\mathcal{i} + \sum_{x'=1}^{x-1} (\alpha^{(x')})^*_{j'}}.\]
Therefore, \begin{align*} EX_{i_{(x, i')}, j'+1} & = X_{i_{(x, i')}, j'+1} + \alpha^*_{j'+1} - 2\left(\mathcal{i} + \sum_{x'=1}^{x-1} (\alpha^{(x')})^*_{j'}\right) + 1 \\ & = X^{(x)}_{i', j'} + \sum_{x'=1}^{x-1} (\alpha^{(x')})^*_{j'} - \sum_{x' = x+1}^{\mathcal{k}} (\alpha^{(x')})^*_{j'} + \alpha^*_{j'+1} - 2\mathcal{i}- 2 \sum_{x'=1}^{x-1} (\alpha^{(x')})^*_{j'} + 1 \\ & = X^{(x)}_{i', j'} - \sum_{x'=1}^{x-1} (\alpha^{(x')})^*_{j'} - \sum_{x' = x+1}^{\mathcal{k}} (\alpha^{(x')})^*_{j'} + \sum_{x'=1}^{\mathcal{k}} (\alpha^{(x')})^*_{j'} - 2\mathcal{i} + 1 \\ & = X^{(x)}_{i', j'} + (\alpha^{(x)})^*_{j'} - 2\mathcal{i} + 1 = Y^{(x)}_{i', j'} = Y_{i_{(x,i')}, j'+1}, \end{align*} where the second-to-last equality follows from the inductive hypothesis. \end{proof}
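The entrywise identity $(EX)^j_i = X^j_i + \alpha^*_j - 2i + 1$ used in the proof above is easy to apply column by column. A minimal Python sketch (ours; storing diagrams as lists of columns is our assumption):

```python
def apply_E(cols):
    """Apply the E map columnwise: the i-th entry from the top (1-indexed)
    of a column with m entries gains m - 2i + 1, since alpha*_j = m for
    that column."""
    return [[x + len(col) - 2 * i + 1 for i, x in enumerate(col, start=1)]
            for col in cols]
```

For instance, the first column $[4,4,4,4,4]$ of Example~\ref{cont} is sent to $[8,6,4,2,0]$, in agreement with the values recorded there.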
For a diagram $X \in D_{\ell}$, let $\#(X,i)$ be the number of boxes in the $i^{\text{th}}$ row of $X$, and set $\Sigma(X, i) := \sum_{j=1}^{\#(X,i)} X_{i,j}$.
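These row statistics, together with the multiset comparison that recurs throughout this section, admit a direct translation; in the sketch below (ours; rows stored as Python lists, positions 1-indexed as in the text):

```python
from collections import Counter

def row_count(X, i):
    """#(X, i): the number of boxes in the i-th row of the diagram X."""
    return len(X[i - 1])

def row_sum(X, i):
    """Sigma(X, i): the sum of the entries in the i-th row of X."""
    return sum(X[i - 1])

def multisets_coincide(pairs1, pairs2):
    """Multiset equality, ignoring order but respecting multiplicity."""
    return Counter(pairs1) == Counter(pairs2)
```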
\begin{prop} \label{multi} Set $(X, Y) := \mathcal{A}(\alpha, \nu, \epsilon)$. For all $1 \leq i \leq \ell$, set $\beta_i := \#(X,i)$ and $\xi_i := \Sigma(X,i)$. Then the multisets \[\lbrace (\beta_1, \xi_1), \ldots, (\beta_{\ell}, \xi_{\ell}) \rbrace \quad \text{and} \quad \lbrace (\alpha_1, \nu_1), \ldots, (\alpha_{\ell}, \nu_{\ell}) \rbrace\] are coincident. \end{prop}
\begin{proof} We prove the assertion for $\epsilon = -1$. The proof is by induction on $\max \lbrace \alpha_1, \ldots, \alpha_{\ell} \rbrace$; we show the inductive step. By Proposition~\ref{permute}, we may assume $\mathcal{R}_{-1}(\alpha, \nu) = \operatorname{id}$ without loss of generality.
Let $i \in \lbrace 1, \ldots, \ell \rbrace$. Set $\iota := \mathcal{U}_{-1}(\alpha, \nu, \operatorname{id})$. We first claim that $\alpha_i = 1$ entails $\iota_i = \nu_i$. To see this, suppose $\alpha_i = 1$. Then $\mathcal{C}_{-1}(\alpha, \nu, i, I_a, I_b) = \nu_i - |I_a| + |I_b|$. Thus, \begin{align*} \iota_i & = \mathcal{C}_{-1}(\alpha, \nu, i, \lbrace 1, \ldots, i-1 \rbrace, \lbrace i+1, \ldots, \ell \rbrace) - \ell + 2i - 1 \\ & = \nu_i - (i-1) + (\ell -i) - \ell + 2i -1 = \nu_i \end{align*} unless $\nu_i > \iota_{i-1}$. If indeed $\nu_i > \iota_{i-1}$, then let $i_0$ be minimal such that $\iota_{i-1} = \iota_{i_0}$. Since $\mathcal{R}_{-1}(\alpha, \nu) = \operatorname{id}$, \begin{align*} \nu_i + \ell - 2i_0 + 1 & = \mathcal{C}_{-1}(\alpha, \nu, i, \lbrace 1, \ldots, i_0-1 \rbrace, \lbrace i_0, i_0+1, \ldots, i-1, i+1, \ldots, \ell \rbrace) \\ & \leq \mathcal{C}_{-1}(\alpha, \nu, i_0, \lbrace 1, \ldots, i_0-1 \rbrace, \lbrace i_0+1, \ldots, \ell \rbrace) \\ & = \iota_{i_0} + \ell - 2i_0 + 1 < \nu_i + \ell - 2i_0 + 1. \end{align*} This is a contradiction, so $\iota_i = \nu_i$ for all $i$ such that $\alpha_i = 1$. It follows that \[(\alpha_i, \nu_i) = (1, \iota_i) = (\beta_i, \xi_i)\] for all $i$ such that $\alpha_i = 1$.
Thus, it suffices to show, for all $1 \leq x \leq \mathcal{k}$, that the multisets \[\left \lbrace \big (\beta_{i_{(x, 1)}}, \xi_{i_{(x, 1)}} \big ), \ldots, \big (\beta_{i_{(x, \ell_x)}}, \xi_{i_{(x, \ell_x)}} \big) \right \rbrace \] and \[ \left \lbrace \big (\alpha_{i_{(x, 1)}}, \nu_{i_{(x, 1)}} \big ), \ldots, \big (\alpha_{i_{(x, \ell_x)}}, \nu_{i_{(x, \ell_x)}} \big ) \right \rbrace \] are coincident. For all $1 \leq i' \leq \ell_x$, set $\beta^{(x)}_{i'} := \#(X^{(x)}, i')$ and $\xi^{(x)}_{i'} := \Sigma(X^{(x)}, i')$. Note that \begin{align*} \xi_{i_{(x, i')}} & = \iota_{i_{(x, i')}} + \xi^{(x)}_{i'} + \sum_{j' =1}^{\beta^{(x)}_{i'}} \left(\sum_{x' = 1}^{x-1} (\alpha^{(x')})^*_{j'} - \sum_{x' = x+1}^{\mathcal{k}} (\alpha^{(x')})^*_{j'}\right) \\ & = \iota_{i_{(x, i')}} + \xi^{(x)}_{i'} + \sum_{x' = 1}^{x-1} \sum_{i_0 = 1}^{\ell_{x'}} \min \left \lbrace \beta^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace - \sum_{x' = x+1}^{\mathcal{k}} \sum_{i_0 =1}^{\ell_{x'}} \min \left \lbrace \beta^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace. \end{align*}
Therefore, as multisets, \begin{align*} & \left \lbrace \big (\beta_{i_{(x, i')}}, \xi_{i_{(x, i')}} \big ) \right \rbrace_{i' =1}^{\ell_x} \\ & = \left \lbrace \Big (1 + \beta^{(x)}_{i'}, \iota_{i_{(x, i')}} + \xi^{(x)}_{i'} + \sum_{x' = 1}^{x-1} \sum_{i_0 = 1}^{\ell_{x'}} \min \left \lbrace \beta^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace - \sum_{x' = x+1}^{\mathcal{k}} \sum_{i_0 =1}^{\ell_{x'}} \min \left \lbrace \beta^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace \Big) \right \rbrace_{i'=1}^{\ell_x} \\ & = \left \lbrace \Big (1 + \alpha^{(x)}_{i'}, \iota_{i_{(x, i')}} + \hat{\nu}^{(x)}_{i'} + \sum_{x' = 1}^{x-1} \sum_{i_0 = 1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace - \sum_{x' = x+1}^{\mathcal{k}} \sum_{i_0 =1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace \Big) \right \rbrace_{i'=1}^{\ell_x} \\ & = \left \lbrace \big (\alpha_{i_{(x, i')}}, \iota_{i_{(x, i')}} + \nu^{(x)}_{i'} \big) \right \rbrace_{i'=1}^{\ell_x} = \left \lbrace \big (\alpha_{i_{(x, i')}}, \nu_{i_{(x, i')}} \big) \right \rbrace_{i'=1}^{\ell_x}, \end{align*} where we obtain the second equality by recalling $\iota_{i_{(x, 1)}} = \cdots = \iota_{i_{(x, \ell_x)}}$ and applying the inductive hypothesis. \end{proof}
\begin{cor} \label{multiforward} Set $\sigma := \mathcal{R}_{-1}(\alpha, \nu)$ and $\iota := \mathcal{U}_{-1}(\alpha, \nu, \sigma)$. Then the multisets \[\left \lbrace (\beta_i - 1, \xi_i - \iota_i) : \beta_i > 1 \right \rbrace \quad \text{and} \quad \left \lbrace (\alpha_i - 1, \nu_i - \iota_{\sigma(i)}) : \alpha_i > 1 \right \rbrace \] are coincident. \end{cor}
\begin{proof} Maintain the notation of Proposition~\ref{multi}. By Proposition~\ref{multi}, for all $1 \leq x \leq \mathcal{k}$, \[\left \lbrace \big(\beta^{(x)}_{i'}, \xi^{(x)}_{i'}\big) \right \rbrace_{i'=1}^{\ell_x} = \left \lbrace \big(\alpha^{(x)}_{i'}, \hat{\nu}^{(x)}_{i'}\big) \right\rbrace_{i'=1}^{\ell_x}\] is an equality of multisets.
Therefore, as multisets, \begin{align*} & \left \lbrace \big (\beta_{i_{(x, i')}} - 1, \xi_{i_{(x, i')}} - \iota_{i_{(x,i')}} \big ) \right \rbrace_{i' =1}^{\ell_x} \\ & = \left \lbrace \Big (\beta^{(x)}_{i'}, \xi^{(x)}_{i'} + \sum_{x' = 1}^{x-1} \sum_{i_0 = 1}^{\ell_{x'}} \min \left \lbrace \beta^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace - \sum_{x' = x+1}^{\mathcal{k}} \sum_{i_0 =1}^{\ell_{x'}} \min \left \lbrace \beta^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace \Big) \right \rbrace_{i'=1}^{\ell_x} \\ & = \left \lbrace \Big (\alpha^{(x)}_{i'}, \hat{\nu}^{(x)}_{i'} + \sum_{x' = 1}^{x-1} \sum_{i_0 = 1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace - \sum_{x' = x+1}^{\mathcal{k}} \sum_{i_0 =1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace \Big) \right \rbrace_{i'=1}^{\ell_x} \\ & = \left \lbrace \big (\alpha^{(x)}_{i'}, \nu^{(x)}_{i'} \big) \right \rbrace_{i'=1}^{\ell_x} = \left \lbrace \big (\alpha_{\sigma^{-1}(i_{(x, i')})} - 1, \nu_{\sigma^{-1}(i_{(x, i')})} - \iota_{i_{(x, i')}} \big) \right \rbrace_{i'=1}^{\ell_x}. \end{align*}
Taking the union of both sides over $1 \leq x \leq \mathcal{k}$, we obtain the equality of multisets \[\left \lbrace(\beta_i - 1, \xi_i - \iota_i) : \beta_i > 1 \right \rbrace = \left \lbrace (\alpha_{\sigma^{-1}(i)} -1, \nu_{\sigma^{-1}(i)} - \iota_i) : \alpha_{\sigma^{-1}(i)} > 1 \right \rbrace,\] whence the result follows. \end{proof}
Fix again a partition $\alpha = [\alpha_1, \ldots, \alpha_{\ell}]$ of $n$ with conjugate partition $\alpha^* = [\alpha^*_1, \ldots, \alpha^*_s]$. Recall from section 3 that a diagram pair $(X, Y) \in \overline{E}(D_{\alpha})$ encodes three sequences related to objects in $\mathfrak{D}$ --- $\kappa(X)$, $h(X)$, and $\eta(Y)$. If $(X,Y)$ is an output of the weight-diagrams version of the algorithm, then all three sequences carry crucial information: $\kappa(X)$ returns the input; $h(X)$ is a weight in $\Lambda^+_{\alpha, \nu}$ such that $||h(X) + 2 \rho_{\alpha}||$ is minimal; and $\eta(Y)$ is the output of the Lusztig--Vogan bijection.
\begin{prop} \label{kap} Let $\nu \in \Omega_{\alpha}$. Then $\kappa p_1 \mathcal{A}(\alpha, \nu) = \nu$. (Hence $hp_1\mathcal{A}(\alpha, \nu) \in \Lambda^+_{\alpha, \nu}$.) \end{prop}
\begin{proof} This is a direct consequence of Proposition~\ref{multi}. (The statement in parentheses follows from Theorem~\ref{decomp}.) \end{proof}
\begin{thm} \label{mini}
Let $\nu \in \Omega_{\alpha}$. Then $||hp_1\mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha}|| = \min \lbrace ||\bar{\mu} + 2 \rho_{\alpha}|| : \bar{\mu} \in \Lambda^+_{\alpha, \nu} \rbrace$. \end{thm}
\begin{cor} \label{etz} Let $\nu \in \Omega_{\alpha}$. Then $\operatorname{dom}(hp_1 \mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha}) = \gamma(\alpha, \nu)$. \end{cor}
\begin{proof} This follows from (the parenthetical statement in) Proposition~\ref{kap} and Theorem~\ref{mini}, as discussed in the introduction (cf. footnote 3). \end{proof}
\begin{cor} \label{eta} Let $\nu \in \Omega_{\alpha}$. Then $\eta p_2 \mathcal{A}(\alpha, \nu) = \gamma(\alpha, \nu)$. \end{cor}
\begin{proof} Recall from Equation~\ref{compat} that $\eta p_2 \mathcal{A}(\alpha, \nu) = \operatorname{dom}(hp_1 \mathcal{A}(\alpha, \nu) + 2 \rho_{\alpha})$.\footnote{Equation~\ref{compat} holds under the assumption that the entries of $X$ are weakly decreasing down each column, which is certainly the case if $X$ is distinguished (cf. condition (4) of Definition~\ref{dis}). We prove that $p_1 \mathcal{A}(\alpha, \nu)$ is distinguished in the next section (cf. Corollary~\ref{dist}). } \end{proof}
Corollaries~\ref{etz} and~\ref{eta} express that $\mathcal{A}$ computes the Lusztig--Vogan bijection. Just as Corollary~\ref{etz} follows from Proposition~\ref{kap} and Theorem~\ref{mini}, the fact that $\mathfrak{A}$ computes the Lusztig--Vogan bijection (as expressed in Theorem~\ref{main}) follows from Corollary~\ref{inside} and the following theorem, which we deduce from Theorem~\ref{mini}.
\begin{thm} \label{thesame}
Let $\nu \in \Omega_{\alpha}$. Then $||\mathfrak{A}(\alpha, \nu) + 2 \rho_{\alpha}|| = \min \lbrace ||\bar{\mu} + 2 \rho_{\alpha} || : \bar{\mu} \in \Lambda^+_{\alpha, \nu} \rbrace$. \end{thm}
\begin{proof} Assume Theorem~\ref{mini} holds. Set $\mu := \mathfrak{A}(\alpha, \nu)$ and $\breve{\mu} := hp_1 \mathcal{A}(\alpha, \nu)$. Recall from Corollary~\ref{inside} that $\mu \in \Lambda^+_{\alpha, \nu}$ and from Proposition~\ref{kap} that $\breve{\mu} \in \Lambda^+_{\alpha, \nu}$. Note that \begin{align} \label{first} [\mu_1, \ldots, \mu_{\ell}] = [\breve{\mu}_1, \ldots, \breve{\mu}_{\ell}]. \end{align}
In other words, the first $\ell$ entries of $\mathfrak{A}(\alpha, \nu)$ agree with the first $\ell$ entries of $hp_1 \mathcal{A}(\alpha, \nu)$; both $\mathfrak{A}$ and $hp_1\mathcal{A}$ assign the same weight to the first factor $GL_{\alpha^*_1}$ of $L_{\alpha}$.
To prove the theorem, we induct on $s$. For the inductive step, set $\mu' := [\mu_{\ell + 1}, \ldots, \mu_n]$ and $\breve{\mu}' := [\breve{\mu}_{\ell+1}, \ldots, \breve{\mu}_n]$.
Maintain the notation of sections 2.1 and 2.2. By Corollary~\ref{inside}, $\mu' \in \Lambda^+_{\alpha', \nu'}$. We claim that $\breve{\mu}' \in \Lambda^+_{\alpha', \nu'}$. Indeed, since $(\alpha_i - 1, \nu_i - \mu_{\sigma(i)}) = (\alpha'_i, \nu'_i)$ for all $1 \leq i \leq \alpha^*_2$, Corollary~\ref{multiforward} (in view of Equation~\ref{first}) tells us that the multisets \[\lbrace \big(\beta_i - 1, \xi_i - \breve{\mu}_i \big) : \beta_i > 1 \rbrace \quad \text{and} \quad \lbrace (\alpha'_i, \nu'_i) \rbrace_{i=1}^{\alpha^*_2}\] are coincident.
Therefore, there exists a function $\zeta \colon \lbrace 1, \ldots, \alpha^*_2 \rbrace \rightarrow \lbrace 1, \ldots, \ell \rbrace$ such that $\beta_{\zeta(i)} > 1$ and $(\beta_{\zeta(i)} - 1, \xi_{\zeta(i)} - \breve{\mu}_{\zeta(i)}) = (\alpha'_i, \nu'_i)$ for all $1 \leq i \leq \alpha^*_2$. For all $1 \leq i \leq \alpha^*_2$ and $1 \leq j \leq \alpha'_i$, set $\nu'_{i,j} := X_{\zeta(i), j+1}$. Then the claim follows from Corollary~\ref{decamp}.
Thus, by the inductive hypothesis, \[||\mu' + 2 \rho_{\alpha'}|| = \min \lbrace || \bar{\mu}' + 2 \rho_{\alpha'}|| : \bar{\mu}' \in \Lambda^+_{\alpha', \nu'} \rbrace \leq ||\breve{\mu}' + 2 \rho_{\alpha'}||.\]
It follows that \begin{align*}
||\mu + 2 \rho_{\alpha}||^2 & = ||[\mu_1 + \ell - 1, \ldots, \mu_{\ell} + 1 - \ell]||^2 + ||\mu' + 2 \rho_{\alpha'}||^2 \\ & \leq ||[\breve{\mu}_1 + \ell - 1, \ldots, \breve{\mu}_{\ell} + 1 - \ell]||^2 + ||\breve{\mu}' + 2 \rho_{\alpha'}||^2 \\ & = ||\breve{\mu} + 2 \rho_{\alpha}||^2 \\ & = \min \lbrace ||\bar{\mu} + 2 \rho_{\alpha}||^2 : \bar{\mu} \in \Lambda^+_{\alpha, \nu} \rbrace \\ & \leq ||\mu + 2 \rho_{\alpha}||^2, \end{align*} where the first inequality follows from Equation~\ref{first} and the third equality follows from Theorem~\ref{mini}.
We conclude that $||\mu + 2 \rho_{\alpha}|| = \min \lbrace ||\bar{\mu} + 2 \rho_{\alpha}|| : \bar{\mu} \in \Lambda^+_{\alpha, \nu} \rbrace$, as desired. \end{proof}
It remains to prove Theorem~\ref{mini}. In the next section, we make good on our pledge to prove that the weight-diagrams version of our algorithm encompasses Achar's algorithm; in particular, we prove the following theorem, whence Theorem~\ref{mini} follows immediately.
\begin{thm} \label{mata} Let $\nu \in \Omega_{\alpha}$. Then $p_1 \mathcal{A}(\alpha, \nu) = \mathsf{A}(\alpha, \nu)$. \end{thm}
\begin{cor} Theorem~\ref{mini} holds. \end{cor}
\begin{proof}
Note that $||h\mathsf{A}(\alpha, \nu) + 2 \rho_{\alpha}|| = \min \lbrace ||\bar{\mu} + 2 \rho_{\alpha}|| : \bar{\mu} \in \Lambda^+_{\alpha, \nu} \rbrace$ (cf. Achar \cite{Acharj}, Corollary 8.9). Hence the claim follows from Theorem~\ref{mata}. \end{proof}
\eject
\section{Proof of Theorem~\ref{mata}}
The crux of the proof is a simple characterization of the diagram pairs that occur as outputs of the algorithm $\mathcal{A}$. These \textit{distinguished} diagram pairs are images under $\overline{E}$ of the \textit{distinguished} diagrams of Achar \cite{Acharj}. We start by defining distinguished diagrams and diagram pairs.
\begin{df} Let $Y$ be a diagram of shape-class $\alpha$. The entry $Y^j_i$ is \textit{$E$-raisable} if $i=1$, or if $i > 1$ and $Y^j_{i-1} > Y^j_i + 2$. The entry $Y^j_i$ is \textit{$E$-lowerable} if $i = \alpha^*_j$, or if $i < \alpha^*_j$ and $Y^j_{i+1} < Y^j_i - 2$. \end{df}
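The two predicates above translate directly into code; in the sketch below (ours, for experimentation only), a column of $Y$ is a Python list read from top to bottom, and positions are 1-indexed as in the text:

```python
def e_raisable(col, i):
    """Y^j_i is E-raisable if i = 1, or if the entry above it exceeds
    Y^j_i + 2 (col is the j-th column of Y; i counts from 1)."""
    return i == 1 or col[i - 2] > col[i - 1] + 2

def e_lowerable(col, i):
    """Y^j_i is E-lowerable if i is the last position in its column, or if
    the entry below it is less than Y^j_i - 2."""
    return i == len(col) or col[i] < col[i - 1] - 2
```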
\begin{df} \label{dis} Let $X \in D_{\alpha}$, and set $Y := EX$. Then the diagram $X$ and the diagram pair $(X,Y)$ are \textit{odd-distinguished} if the following four conditions hold. \begin{enumerate}
\item $Y_{i,j+1} - Y_{i,j} \in \lbrace 0, (-1)^j \rbrace$.
\item For all $1 \leq j < j' \leq s$ such that $j \equiv j' \pmod 2$:
\begin{enumerate}
\item If $j$ and $j'$ are odd and $Y_{i,j} \leq Y_{i,j'} - 1$, then $Y_{i,j}$ is not $E$-raisable;
\item If $j$ and $j'$ are even and $Y_{i,j} \geq Y_{i, j'} + 1$, then $Y_{i,j}$ is not $E$-lowerable.
\end{enumerate}
\item For all $1 \leq j < j' \leq s$:
\begin{enumerate}
\item If $Y_{i,j} \leq Y_{i,j'} - 2$, then $Y_{i,j}$ is not $E$-raisable;
\item If $Y_{i,j} \geq Y_{i,j'} + 2$, then $Y_{i,j}$ is not $E$-lowerable.
\end{enumerate}
\item $Y^j_i - Y^j_{i+1} \geq 2$. \end{enumerate} \end{df}
\begin{df}
Let $X \in D_{\alpha}$, and set $Y := EX$. Then the diagram $X$ and the diagram pair $(X,Y)$ are \textit{even-distinguished} if the following four conditions hold.
\begin{enumerate}
\item $Y_{i,j+1} - Y_{i,j} \in \lbrace 0, (-1)^{j+1} \rbrace$.
\item For all $1 \leq j < j' \leq s$ such that $j \equiv j' \pmod 2$:
\begin{enumerate}
\item If $j$ and $j'$ are even and $Y_{i,j} \leq Y_{i,j'} - 1$, then $Y_{i,j}$ is not $E$-raisable;
\item If $j$ and $j'$ are odd and $Y_{i,j} \geq Y_{i, j'} + 1$, then $Y_{i,j}$ is not $E$-lowerable.
\end{enumerate}
\item For all $1 \leq j < j' \leq s$:
\begin{enumerate}
\item If $Y_{i,j} \leq Y_{i,j'} - 2$, then $Y_{i,j}$ is not $E$-raisable;
\item If $Y_{i,j} \geq Y_{i,j'} + 2$, then $Y_{i,j}$ is not $E$-lowerable.
\end{enumerate}
\item $Y^j_{i} - Y^j_{i+1} \geq 2$.
\end{enumerate} \end{df}
We refer to odd-distinguished diagrams and diagram pairs simply as \textit{distinguished}.
\begin{rem} \label{weak} The definition of distinguished diagram in Achar \cite{Acharj} is weaker than ours inasmuch as it requires $Y^j_i - Y^j_{i+1} \geq 1$ rather than $Y^j_{i} - Y^j_{i+1} \geq 2$. However, Achar's definition of the $E$ map differs slightly from ours, so it does not suffice for our purposes to copy his definition wholesale. Our definition of distinguished ensures that, if $(X,Y)$ is distinguished, then $Y = EX$ under Achar's definition as well as ours --- so it guarantees that any diagram distinguished by our reckoning is distinguished by Achar's \textit{a fortiori}. \end{rem}
To simplify our analysis of the algorithm, we define the \textit{row-partition} function, which is similar to the row-survival function, but does not discriminate between surviving and non-surviving rows.
\begin{df}
For all $(\alpha, \iota) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell}_{\operatorname{dom}}$, \[\mathcal{P}(\alpha, \iota) \colon \lbrace 1, \ldots, \ell \rbrace \rightarrow \lbrace 1, \ldots, \ell \rbrace \times \lbrace 1, \ldots, \ell \rbrace\] is given by \[\mathcal{P}(\alpha, \iota)(i) :=\big(|\lbrace \iota_{i'} : i' \leq i \rbrace|, |\lbrace i' \leq i : \iota_{i'} = \iota_i \rbrace | \big).\] \end{df}
\begin{rem} After the rows of a blank diagram of shape $\alpha$ have been permuted according to $\sigma \in \mathfrak{S}_{\ell}$ and the first column of the permuted diagram is filled in with the entries of $\iota$, the row-survival function $\mathcal{S}(\alpha, \sigma, \iota)$ tells us, for each surviving row, which branch it belongs to, and its position within that branch. (For each row that does not survive, the row-survival function still records a branch, but records its position within that branch as $0$.)
If we construe each branch as comprising all rows with a particular first-column entry, not merely the surviving such rows, then, for each row, the row-partition function $\mathcal{P}(\alpha, \iota)$ tells us which branch it belongs to, and its position among all rows within that branch. \end{rem}
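Since $\mathcal{P}(\alpha, \iota)$ depends only on $\iota$, it is simple to tabulate. A Python sketch (ours; $\iota$ is assumed weakly decreasing, i.e., dominant):

```python
def row_partition(iota):
    """P(alpha, iota)(i) for i = 1..len(iota): the pair
    (|{iota_{i'} : i' <= i}|, |{i' <= i : iota_{i'} = iota_i}|),
    i.e., the branch of row i and its position within that branch."""
    out = []
    for i in range(1, len(iota) + 1):
        branch = len({iota[k] for k in range(i)})
        pos = sum(1 for k in range(i) if iota[k] == iota[i - 1])
        out.append((branch, pos))
    return out
```

On $\iota = [5,5,4]$ (the first column of the two-branch example above), this yields $(1,1), (1,2), (2,1)$: two branches, of sizes two and one.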
\begin{df} Given diagrams $Z^{(1)}, \ldots, Z^{(k)}$ such that $Z^{(x)} \in D_{\ell_x}$ for all $1 \leq x \leq k$, construct $Z \in D_{\ell_1 + \cdots + \ell_k}$ as follows: For all $(i, j) \in \mathbb{N} \times \mathbb{N}$ such that $Z^{(x)}$ has an entry in the $i^{\text{th}}$ row and $j^{\text{th}}$ column, $Z_{i + \sum_{x'=1}^{x-1} \ell_{x'}, j} := Z^{(x)}_{i, j}$. The \textit{diagram-concatenation} function $\operatorname{Cat} \colon D_{\ell_1} \times \cdots \times D_{\ell_k} \rightarrow D_{\ell_1 + \cdots + \ell_k}$ is given by $(Z^{(1)}, \ldots, Z^{(k)}) \mapsto Z$. \end{df}
\begin{rem} Diagram concatenation is associative: If $Z^{(x)} = \operatorname{Cat} (Z^{(x, 1)}, \ldots, Z^{(x, \omega_x)})$ for all $x$, then \[Z = \operatorname{Cat}\big(Z^{(1, 1)}, \ldots, Z^{(1, \omega_1)}, \ldots, Z^{(k, 1)}, \ldots, Z^{(k, \omega_k)}\big).\] \end{rem}
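With diagrams stored as lists of rows (our convention, not the paper's), $\operatorname{Cat}$ is simply list concatenation, and the nesting property of the remark above becomes flattening:

```python
def cat(*diagrams):
    """Cat(Z^(1), ..., Z^(k)): stack the rows of the diagrams in order."""
    return [row for Z in diagrams for row in Z]
```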
Let $(\alpha, \nu, \epsilon) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \times \lbrace \pm 1 \rbrace$. Set $\sigma := \mathcal{R}_{\epsilon}(\alpha, \nu)$. Set $\iota := \mathcal{U}_{\epsilon}(\alpha, \nu, \sigma)$. For all $(x, i')$ in the image of $\mathcal{P}(\alpha, \iota)$, set $p_{(x,i')} := \mathcal{P}(\alpha, \iota)^{-1}(x, i')$.
For all $1 \leq x \leq \mathcal{k}$, set \[\ell^{\circ}_x := \max \lbrace i' : (x,i') \in \mathcal{P}(\alpha, \iota) \lbrace 1, \ldots, \ell \rbrace \rbrace. \]
Set \[\mathcal{a}^{(x)} := \left[\alpha_{\sigma^{-1}(p_{(x,1)})}, \ldots, \alpha_{\sigma^{-1}(p_{(x,\ell^{\circ}_x)})}\right] \quad \text{and} \quad \mathcal{n}^{(x)} := \left[\nu_{\sigma^{-1}(p_{(x,1)})}, \ldots, \nu_{\sigma^{-1}(p_{(x,\ell^{\circ}_x)})}\right].\]
For all $1 \leq i' \leq \ell^{\circ}_x$, set \[\hat{\mathcal{n}}^{(x)}_{i'} := \mathcal{n}^{(x)}_{i'} - \sum_{x'=1}^{x-1} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \mathcal{a}^{(x)}_{i'}, \mathcal{a}^{(x')}_{i_0} \right \rbrace + \sum_{x'=x+1}^{\mathcal{k}} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \mathcal{a}^{(x)}_{i'}, \mathcal{a}^{(x')}_{i_0} \right \rbrace.\]
Then set $\hat{\mathcal{n}}^{(x)} := \left[ \hat{\mathcal{n}}^{(x)}_{1}, \ldots, \hat{\mathcal{n}}^{(x)}_{\ell^{\circ}_x} \right]$ and $\left (\mathcal{X}^{(x)}, \mathcal{Y}^{(x)} \right) := \mathcal{A} \left (\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, \epsilon \right)$.
\begin{lem} \label{collapse} Set $(X,Y) := \mathcal{A}(\alpha, \nu, \epsilon)$. Then $Y = \operatorname{Cat}(\mathcal{Y}^{(1)}, \ldots, \mathcal{Y}^{(\mathcal{k})})$. \end{lem}
\begin{proof} We show the claim for $\epsilon = -1$ only. By Proposition~\ref{permute}, we may assume $\sigma = \operatorname{id}$ without loss of generality.
Fix $1 \leq x \leq \mathcal{k}$. Set $\mathcal{s}_{(x)} := \mathcal{R}_{-1}(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)})$ and $\mathcal{m}_{(x)} := \mathcal{U}_{-1}(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, \mathcal{s}_{(x)})$. We claim that $\mathcal{s}_{(x)} = \operatorname{id}$, which is to say that $\mathcal{s}_{(x)}(i') = i'$ for all $1 \leq i' \leq \ell^{\circ}_x$. The proof is by induction on $i'$; we show the inductive step.
Set $J := \lbrace 1, \ldots, i'-1 \rbrace$ and $J' := \lbrace 1, \ldots, \ell^{\circ}_x \rbrace \setminus J$. For all $I \subset \lbrace 1, \ldots, \ell^{\circ}_x \rbrace$, set $p^{(x)}_I := \lbrace p_{(x, i_0)} : i_0 \in I \rbrace$. Also set $\mathcal{Q}_{<x} := \lbrace p_{(x', i_0)} : x' < x \rbrace$ and $\mathcal{Q}_{>x} := \lbrace p_{(x', i_0)} : x' > x \rbrace$.
For all $j \in J'$, \begin{align*} & \mathcal{n}^{(x)}_{j} - \sum_{x'=1}^{x-1} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \mathcal{a}^{(x)}_{j}, \mathcal{a}^{(x')}_{i_0} \right \rbrace + \sum_{x'=x+1}^{\mathcal{k}} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \mathcal{a}^{(x)}_{j}, \mathcal{a}^{(x')}_{i_0} \right \rbrace \\ & - \sum_{i_0 \in J} \min \left \lbrace \mathcal{a}^{(x)}_{j}, \mathcal{a}^{(x)}_{i_0} \right \rbrace + \sum_{i_0 \in J' \setminus \lbrace j \rbrace} \min \left \lbrace \mathcal{a}^{(x)}_{j}, \mathcal{a}^{(x)}_{i_0} \right \rbrace \\ & = \nu_{p_{(x,j)}} - \sum_{i \in \mathcal{Q}_{<x} \cup p^{(x)}_J} \min \lbrace \alpha_{p_{(x,j)}}, \alpha_i \rbrace + \sum_{i \in \mathcal{Q}_{>x} \cup p^{(x)}_{J' \setminus \lbrace j \rbrace}} \min \lbrace \alpha_{p_{(x,j)}}, \alpha_i \rbrace. \end{align*}
Therefore, \begin{align} \label{pro} \mathcal{C}_{-1}\big(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, j, J, J' \setminus \lbrace j \rbrace\big) = \mathcal{C}_{-1}\big(\alpha, \nu, p_{(x,j)}, \mathcal{Q}_{<x} \cup p^{(x)}_J, \mathcal{Q}_{>x} \cup p^{(x)}_{J' \setminus \lbrace j \rbrace}\big). \end{align}
Note that $\mathcal{Q}_{<x} \cup p^{(x)}_J = \lbrace 1, \ldots, p_{(x,i')} - 1 \rbrace$. Since $\sigma(p_{(x,i')}) = p_{(x,i')}$, it follows that the lexicographically maximal value of the function \[j \mapsto \left(\mathcal{C}_{-1}\big(\alpha, \nu, p_{(x,j)}, \mathcal{Q}_{<x} \cup p^{(x)}_J, \mathcal{Q}_{>x} \cup p^{(x)}_{J' \setminus \lbrace j \rbrace}\big), \alpha_{p_{(x,j)}}, \nu_{p_{(x,j)}} \right)\] over the domain $J'$ is attained at $j = i'$.
Thus, the lexicographically maximal value of the function \[j \mapsto \left(\mathcal{C}_{-1}\big(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, j, J, J' \setminus \lbrace j \rbrace\big), \mathcal{a}^{(x)}_{j}, \mathcal{n}^{(x)}_{j} \right)\] over the domain $J'$ is attained at $j = i'$.
Since $\hat{\mathcal{n}}^{(x)}_j - \mathcal{n}^{(x)}_j$ as a function of $j$ depends only on $\mathcal{a}^{(x)}_j$, the lexicographically maximal value of the function \[j \mapsto \left(\mathcal{C}_{-1}\big(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, j, J, J' \setminus \lbrace j \rbrace\big), \mathcal{a}^{(x)}_{j}, \hat{\mathcal{n}}^{(x)}_{j} \right)\] over the domain $J'$ is also attained at $j = i'$.
Since $i' \in J'$ is numerically minimal, it follows that $\mathcal{s}_{(x)}(i') = i'$, as desired.
We next claim that \begin{align} \label{art} X_{p_{(x,i')}, 1} = \mathcal{X}^{(x)}_{i', 1} + \sum_{x'=1}^{x-1} \ell^{\circ}_{x'} - \sum_{x'=x+1}^{\mathcal{k}} \ell^{\circ}_{x'}. \end{align}
Again the proof is by induction on $i'$. For all $1 \leq i' \leq \ell^{\circ}_x$, we see from Equation~\ref{pro} that \begin{align*} & \mathcal{C}_{-1}(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, i', \lbrace 1, \ldots, i'-1 \rbrace, \lbrace i'+1, \ldots, \ell^{\circ}_x \rbrace) \\ & = \mathcal{C}_{-1}(\alpha, \nu, p_{(x,i')}, \lbrace 1, \ldots, p_{(x,i')} -1 \rbrace, \lbrace p_{(x,i')} + 1, \ldots, \ell \rbrace). \end{align*}
Denote the value $\iota_{p_{(x,1)}} = \cdots = \iota_{p_{(x, \ell^{\circ}_x)}}$ by $\iota^{\circ}_x$.
Note that \[\iota^{\circ}_x = \mathcal{C}_{-1}(\alpha, \nu, p_{(x,1)}, \lbrace 1, \ldots, p_{(x,1)} -1 \rbrace, \lbrace p_{(x,1)} + 1, \ldots, \ell \rbrace) - \ell + 2p_{(x,1)} - 1,\] for otherwise $\iota^{\circ}_{x} = \iota^{\circ}_{x-1}$, contradicting the definition of $\mathcal{P}(\alpha, \iota)$.
Hence \begin{align*} \mathcal{X}_{1,1}^{(x)} & = \mathcal{C}_{-1}(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, 1, \varnothing, \lbrace 2, \ldots, \ell^{\circ}_x \rbrace) - \ell^{\circ}_x + 1 \\ & = \mathcal{C}_{-1}(\alpha, \nu, p_{(x,1)}, \lbrace 1, \ldots, p_{(x,1)} -1 \rbrace, \lbrace p_{(x,1)} + 1, \ldots, \ell \rbrace) - \ell^{\circ}_x + 1 \\ & = \iota^{\circ}_x + \ell - 2(p_{(x,1)} - 1) - \ell^{\circ}_x \\ & = X_{p_{(x,1)}, 1} - \sum_{x'=1}^{x-1} \ell^{\circ}_{x'} + \sum_{x' = x+1}^{\mathcal{k}} \ell^{\circ}_{x'}, \end{align*} which proves the base case.
For the inductive step, note that \[\iota^{\circ}_x \leq \mathcal{C}_{-1}(\alpha, \nu, p_{(x,i')}, \lbrace 1, \ldots, p_{(x,i')} -1 \rbrace, \lbrace p_{(x,i')} + 1, \ldots, \ell \rbrace) - \ell + 2p_{(x,i')} - 1.\]
Thus, \begin{align*} & \mathcal{C}_{-1}(\mathcal{a}^{(x)}, \hat{\mathcal{n}}^{(x)}, i', \lbrace 1, \ldots, i' -1 \rbrace, \lbrace i'+1, \ldots, \ell^{\circ}_x \rbrace) - \ell^{\circ}_x + 2i' - 1 \\ & = \mathcal{C}_{-1}(\alpha, \nu, p_{(x,i')}, \lbrace 1, \ldots, p_{(x,i')} -1 \rbrace, \lbrace p_{(x,i')} + 1, \ldots, \ell \rbrace) - \ell^{\circ}_x + 2i' - 1 \\ & \geq \iota^{\circ}_x + \ell - 2(p_{(x,i')} -i') - \ell^{\circ}_x \\ & = X_{p_{(x,i'-1)}, 1} - \sum_{x'=1}^{x-1} \ell^{\circ}_{x'} + \sum_{x' = x+1}^{\mathcal{k}} \ell^{\circ}_{x'} \\ & = \mathcal{X}^{(x)}_{i'-1, 1}, \end{align*} where the last equality follows from the inductive hypothesis.
We conclude that \begin{align*} \mathcal{X}^{(x)}_{i', 1} = \mathcal{X}^{(x)}_{i'-1, 1} & = X_{p_{(x,i'-1)}, 1} - \sum_{x'=1}^{x-1} \ell^{\circ}_{x'} + \sum_{x' = x+1}^{\mathcal{k}} \ell^{\circ}_{x'} \\ & = X_{p_{(x,i')}, 1} - \sum_{x'=1}^{x-1} \ell^{\circ}_{x'} + \sum_{x' = x+1}^{\mathcal{k}} \ell^{\circ}_{x'}. \end{align*}
This establishes Equation~\ref{art}, which implies $Y_{p_{(x,i')}, 1} = \mathcal{Y}^{(x)}_{i', 1}$, proving the result for the first column of $Y$.
We turn now to the successive columns. By Equation~\ref{art}, \[\mathcal{m}^{(x)}_1 = \cdots = \mathcal{m}^{(x)}_{\ell^{\circ}_x} = \iota^{\circ}_x - \sum_{x'=1}^{x-1} \ell^{\circ}_{x'} + \sum_{x' = x+1}^{\mathcal{k}} \ell^{\circ}_{x'}.\]
Set $\mathcal{f}_x := \mathcal{S}(\mathcal{a}^{(x)}, \operatorname{id}, \mathcal{m}^{(x)})$ and $\mathcal{f} := \mathcal{S}(\alpha, \operatorname{id}, \iota)$. Suppose that $\ell_x > 0$. Note that \[p_{(x, \mathcal{f}_x^{-1}(1,i'))} = \mathcal{f}^{-1}(x, i')\] for all $1 \leq i' \leq \ell_x$.
Set \[({\mathcal{a}^{(x)}})' := \left [\mathcal{a}^{(x)}_{\mathcal{f}_x^{-1}(1,1)} - 1, \ldots, \mathcal{a}^{(x)}_{\mathcal{f}_x^{-1}(1, \ell_x)} - 1 \right ]\] and \[({\hat{\mathcal{n}}^{(x)}})' := \left [\hat{\mathcal{n}}^{(x)}_{\mathcal{f}_x^{-1}(1,1)} - \mathcal{m}^{(x)}_{\mathcal{f}_x^{-1}(1,1)}, \ldots, \hat{\mathcal{n}}^{(x)}_{\mathcal{f}_x^{-1}(1,\ell_x)} - \mathcal{m}^{(x)}_{\mathcal{f}_x^{-1}(1,\ell_x)}\right].\]
Then set \[\left((\mathcal{X}^{(x)})', (\mathcal{Y}^{(x)})'\right) := \mathcal{A}_{1} \left(({\mathcal{a}^{(x)}})', ({\hat{\mathcal{n}}^{(x)}})' \right).\]
Since \[({\mathcal{a}^{(x)}})'_{i'} = \mathcal{a}^{(x)}_{\mathcal{f}_x^{-1}(1,i')} - 1 = \alpha_{\mathcal{f}^{-1}(x,i')} - 1 = \alpha^{(x)}_{i'}\] and \begin{align*} & ({\hat{\mathcal{n}}^{(x)}})'_{i'} = \hat{\mathcal{n}}^{(x)}_{\mathcal{f}_x^{-1}(1,i')} - \mathcal{m}^{(x)}_{\mathcal{f}_x^{-1}(1,i')} \\ & = \nu_{\mathcal{f}^{-1}(x,i')} - \sum_{x'=1}^{x-1} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \alpha_{\mathcal{f}^{-1}(x,i')}, \alpha_{p_{(x',i_0)}} \right \rbrace + \sum_{x'=x+1}^{\mathcal{k}} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \alpha_{\mathcal{f}^{-1}(x,i')}, \alpha_{p_{(x',i_0)}} \right \rbrace \\ & - \iota^{\circ}_x + \sum_{x'=1}^{x-1} \ell^{\circ}_{x'} - \sum_{x'=x+1}^{\mathcal{k}} \ell^{\circ}_{x'} \\ & = \nu^{(x)}_{i'} - \sum_{x'=1}^{x-1} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \alpha_{\mathcal{f}^{-1}(x,i')} - 1, \alpha_{p_{(x',i_0)}} - 1 \right \rbrace + \sum_{x'=x+1}^{\mathcal{k}} \sum_{i_0 = 1}^{\ell^{\circ}_{x'}} \min \left \lbrace \alpha_{\mathcal{f}^{-1}(x,i')} - 1, \alpha_{p_{(x',i_0)}} - 1 \right \rbrace \\ & = \nu^{(x)}_{i'} - \sum_{x'=1}^{x-1} \sum_{i_0 =1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha_{\mathcal{f}^{-1}(x', i_0)} - 1 \right \rbrace + \sum_{x'=x+1}^{\mathcal{k}} \sum_{i_0 = 1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha_{\mathcal{f}^{-1}(x', i_0)} - 1\right \rbrace \\ & = \nu^{(x)}_{i'} - \sum_{x'=1}^{x-1} \sum_{i_0 =1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace + \sum_{x'=x+1}^{\mathcal{k}} \sum_{i_0 = 1}^{\ell_{x'}} \min \left \lbrace \alpha^{(x)}_{i'}, \alpha^{(x')}_{i_0} \right \rbrace = \hat{\nu}^{(x)}_{i'}, \end{align*} it follows that $((\mathcal{X}^{(x)})', (\mathcal{Y}^{(x)})') = (X^{(x)}, Y^{(x)})$.
Thus, $Y_{p_{(x, \mathcal{f}_x^{-1}(1,i'))}, j' + 1} = Y_{\mathcal{f}^{-1}(x,i'), j' + 1} = Y^{(x)}_{i',j'} = (\mathcal{Y}^{(x)})'_{i',j'} = \mathcal{Y}^{(x)}_{\mathcal{f}_x^{-1}(1, i'), j' + 1}$ for all $j' \geq 1$. The result follows. \end{proof}
\begin{thm} \label{differences} Let $(\alpha, \nu, \epsilon) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell} \times \lbrace \pm 1 \rbrace$. Set $(X,Y) := \mathcal{A}(\alpha, \nu, \epsilon)$. Then $Y_{i, j+1} - Y_{i,j} \in \lbrace 0, \epsilon (-1)^{j+1} \rbrace$. \end{thm}
\begin{proof} The proof is by induction on $\max \lbrace \alpha_1, \ldots, \alpha_{\ell} \rbrace$. We show the inductive step for $\epsilon = -1$ only. Thus, we set $(X,Y) := \mathcal{A}(\alpha, \nu, -1)$ and prove $Y_{i, 2} - Y_{i, 1} \in \lbrace 0, -1 \rbrace$. The rest follows from the inductive hypothesis. To see this, set $\sigma := \mathcal{R}_{-1}(\alpha, \nu)$ and $\mu := \mathcal{U}_{-1}(\alpha, \nu, \sigma)$. Then note that \[Y_{i_{(x,i')}, j'+2} - Y_{i_{(x,i')},j'+1} = Y^{(x)}_{i', j'+1} - Y^{(x)}_{i', j'} \in \lbrace 0, (-1)^{j'+1} \rbrace\] for all $(x,i')$ in the image of $\mathcal{S}(\alpha, \sigma, \mu)$ such that $i' > 0$.
By Lemma~\ref{collapse}, we may assume $\mu_1 = \cdots = \mu_{\ell}$. Denote the common value by $\mu^{\circ}$. Furthermore, by Proposition~\ref{permute}, we may assume without loss of generality that $\sigma = \operatorname{id}$. Hence \begin{align*} \mu^{\circ} & = \mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2, \ldots, \ell \rbrace) - \ell + 1 \\ & = \left \lceil \frac{\nu_1 + \sum_{i = 2}^{\ell} \min \lbrace \alpha_1, \alpha_i \rbrace}{\alpha_1} \right \rceil - \ell + 1 \\ & = \left \lceil \frac{\nu_1 + \alpha^*_1 + \cdots + \alpha^*_{\alpha_1}}{\alpha_1} \right \rceil - \ell. \end{align*}
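The second equality above rewrites $\sum_{i=2}^{\ell} \min \lbrace \alpha_1, \alpha_i \rbrace$ via the conjugate partition as $\alpha^*_1 + \cdots + \alpha^*_{\alpha_1} - \alpha_1$, after which the $-\alpha_1$ is absorbed into the ceiling. As a numerical aside, the underlying identity $\sum_{i} \min \lbrace \alpha_1, \alpha_i \rbrace = \alpha^*_1 + \cdots + \alpha^*_{\alpha_1}$ can be checked by brute force; the sketch below is illustrative only, and its helper names are ours rather than notation from the proof.

```python
import itertools

def conjugate(alpha):
    """Conjugate partition: alpha*_j = #{i : alpha_i >= j}."""
    if not alpha:
        return []
    return [sum(1 for a in alpha if a >= j) for j in range(1, max(alpha) + 1)]

def check(alpha):
    # alpha[0] plays the role of alpha_1 in the text; the identity holds
    # for any sequence of positive integers, maximality is not needed.
    a1 = alpha[0]
    star = conjugate(alpha)
    lhs = sum(min(a1, a) for a in alpha)
    rhs = sum(star[:a1])
    return lhs == rhs

# exhaustive check over all weakly decreasing sequences with parts at most 4
for length in range(1, 5):
    for alpha in itertools.product(range(1, 5), repeat=length):
        if all(alpha[i] >= alpha[i + 1] for i in range(length - 1)):
            assert check(list(alpha))
```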
It follows that \[Y_{i,1} = X_{i,1} + \ell - 2i + 1 = \left \lceil \frac{\nu_1 + \alpha^*_1 + \cdots + \alpha^*_{\alpha_1}}{\alpha_1} \right \rceil - 2i + 1.\]
Set $\mathcal{f} := \mathcal{S}(\alpha, \operatorname{id}, \mu)$. Set $\ell' := \alpha^*_2$. For all $1 \leq i' \leq \ell'$, set $i_{i'} := \mathcal{f}^{-1}(1, i')$. Set \[\alpha' := \left [\alpha_{i_1} - 1, \ldots, \alpha_{i_{\ell'}} - 1 \right] \quad \text{and} \quad \nu' := \left [\nu_{i_1} - \mu^{\circ}, \ldots, \nu_{i_{\ell'}} - \mu^{\circ} \right].\]
Set $\tau := \mathcal{R}_1 (\alpha', \nu')$. Set $\mu' := \mathcal{U}_1(\alpha', \nu', \tau)$. Additionally, set $(X', Y') := \mathcal{A}(\alpha', \nu', 1)$. Note that \[Y_{i_{i'},2} = Y'_{i',1} = X'_{i',1} + \ell' - 2i' + 1 = \mu'_{i'} + \ell' - 2i' + 1.\]
We claim that $Y_{i_{i'}, 2} - Y_{i_{i'}, 1} \in \lbrace 0, -1 \rbrace$ for all $1 \leq i' \leq \ell'$. The proof is by (backwards) induction on $i'$. For the inductive step, assume that the claim holds for all $i' + 1 \leq i_0 \leq \ell'$. Set $I_b := \tau^{-1} \lbrace i' + 1, \ldots, \ell' \rbrace$ and $I'_b := \lbrace 1, \ldots, \ell' \rbrace \setminus I_b$. To show the claim holds for $i'$, we split into two cases.
\begin{enumerate}
\item If $i' = \ell'$, or $i' < \ell'$ and $\alpha_{i_{i'} + 1} = 1$, we show:
\begin{enumerate}
\item For all $\mathcal{i} \in I'_b$, \[\mathcal{C}_1 \left (\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b \right) \geq \left \lceil \frac{\nu_1 + \alpha^*_1 + \cdots + \alpha^*_{\alpha_1}}{\alpha_1} \right \rceil - 2i_{i'}.\]
\item There exists $\mathcal{i} \in I'_b$ such that \[\mathcal{C}_1 \left (\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b \right) \leq \left \lceil \frac{\nu_1 + \alpha^*_1 + \cdots + \alpha^*_{\alpha_1}}{\alpha_1} \right \rceil - 2i_{i'} + 1.\]
\end{enumerate}
\item If $i' < \ell'$ and $\alpha_{i_{i'} + 1} \geq 2$, we show (b) only. \end{enumerate}
We first prove that the properties indicated are sufficient to obtain the desired conclusion; we then show that they indeed hold.
\begin{enumerate}
\item Suppose $i' = \ell'$, or $i' < \ell'$ and $\alpha_{i_{i'}+1} = 1$, and suppose (a) and (b) hold. We first claim that
\begin{align} \label{echo}
\mu'_{i'} = \mathcal{C}_1(\alpha', \nu', \tau^{-1}(i'), I'_b \setminus \lbrace \tau^{-1}(i') \rbrace, I_b) -\ell' + 2i' - 1.
\end{align}
If $i' = \ell'$, the claim follows immediately. If $i' < \ell'$ and $\alpha_{i_{i'}+1} = 1$, it suffices to show \[\mathcal{C}_1(\alpha', \nu', \tau^{-1}(i'), I'_b \setminus \lbrace \tau^{-1}(i') \rbrace, I_b) -\ell' + 2i' - 1 \geq \mu'_{i'+1}.\] Applying (a) and the inductive hypothesis, we find
\begin{align*}
\mathcal{C}_1(\alpha', \nu', \tau^{-1}(i'), I'_b \setminus \lbrace \tau^{-1}(i') \rbrace, I_b) & \geq \left \lceil \frac{\nu_1 + \alpha^*_1 + \cdots + \alpha^*_{\alpha_1}}{\alpha_1} \right \rceil - 2i_{i'} \\ & = \mu^{\circ} + \ell - 2i_{i'} \\ & = X_{i_{i' + 1}, 1} + \ell - 2i_{i'} \\ & = Y_{i_{i'+1},1} + 2i_{i'+1} - 2i_{i'}- 1 \\ & \geq Y_{i_{i'+1}, 1} + 3 \\ & \geq Y_{i_{i'+1}, 2} + 3 \\ & = \mu'_{i'+1} + \ell' - 2i' + 2.
\end{align*}
From Equation~\ref{echo}, invoking (a) yields
\begin{align*}
Y_{i_{i'}, 2} & = \mu'_{i'} + \ell' - 2i' + 1 \\ & = \mathcal{C}_1(\alpha', \nu', \tau^{-1}(i'), I'_b \setminus \lbrace \tau^{-1}(i') \rbrace, I_b) \\ & \geq \mu^{\circ} + \ell - 2i_{i'} \\ & = X_{i_{i'}, 1} + \ell - 2i_{i'} \\ & = Y_{i_{i'}, 1} - 1.
\end{align*}
Since the minimum value of the function given by \[\mathcal{i} \mapsto \mathcal{C}_1(\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b) \] is attained at $\mathcal{i} = \tau^{-1}(i')$, invoking (b) yields
\begin{align*}
Y_{i_{i'}, 2} & = \mathcal{C}_1(\alpha', \nu', \tau^{-1}(i'), I'_b \setminus \lbrace \tau^{-1}(i') \rbrace, I_b) \\ & \leq \mu^{\circ} + \ell - 2{i_{i'}} + 1 \\ & = X_{i_{i'}, 1} + \ell -2i_{i'} + 1 \\ & = Y_{i_{i'}, 1}.
\end{align*}
It follows that $Y_{i_{i'}, 2} - Y_{i_{i'}, 1} \in \lbrace 0, -1 \rbrace$.
\item Suppose $i' < \ell'$ and $\alpha_{i_{i'} + 1} \geq 2$, and suppose (b) holds. Note that $i_{i'+1} = i_{i'} + 1$. Thus,
\begin{align*}
Y_{i_{i'}, 2} & = \mu'_{i'} + \ell' - 2i' + 1 \\ & \geq \mu'_{i'+1} + \ell' - 2i' + 1 \\ & = Y_{i_{i'+1}, 2} + 2 \\ & = Y_{i_{i'} +1, 2} + 2 \\ & \geq Y_{i_{i'} + 1, 1} + 1 \\ & = X_{i_{i'} + 1, 1} + \ell - 2i_{i'}\\ & = X_{i_{i'}, 1} + \ell - 2i_{i'} \\ & = Y_{i_{i'}, 1} - 1.
\end{align*}
If Equation~\ref{echo} holds, then $Y_{i_{i'}, 2} \leq Y_{i_{i'}, 1}$ follows from invoking (b) as above. Otherwise, $\mu'_{i'} = \mu'_{i' + 1}$, and
\begin{align*}
Y_{i_{i'}, 2} & = \mu'_{i'} + \ell' - 2i' + 1 \\ & = \mu'_{i'+1} + \ell' - 2i' + 1 \\ & = Y_{i_{i'+1}, 2} + 2 \\ & = Y_{i_{i'} + 1, 2} + 2 \\ & \leq Y_{i_{i'} + 1, 1} + 2 \\ & = X_{i_{i'} +1, 1} + \ell - 2i_{i'} + 1 \\ & = X_{i_{i'}, 1} + \ell - 2i_{i'} + 1 \\ & = Y_{i_{i'}, 1}.
\end{align*} \end{enumerate}
Therefore, it suffices to show (i) that (a) holds if $i' = \ell'$, or $i' < \ell'$ and $\alpha_{i_{i'} + 1} = 1$, and (ii) that (b) holds always. We prove these claims subject to the following assumption: For all $1 \leq i \leq \ell$ such that $\alpha_i = 1$, the set $I_i := \lbrace i_0 \in \lbrace 1, \ldots, \ell' \rbrace : i_{i_0} > i \rbrace$ is preserved under $\tau$. Finally, we justify the assumption.
\begin{enumerate}
\item[(i)] Suppose $i' = \ell'$, or $i' < \ell'$ and $\alpha_{i_{i'} + 1} = 1$. Assume for the sake of contradiction that there exists $\mathcal{i} \in I'_b$ such that \[\mathcal{C}_1 \left (\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b \right) < \mu^{\circ} + \ell - 2i_{i'}.\]
For all $1 \leq i, j \leq \ell$, set $m_{i,j} := \min \lbrace \alpha_i, \alpha_j \rbrace$.
Note that
\begin{align*}
\mathcal{C} \left (\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b \right) & = \nu'_{\mathcal{i}} - \sum_{i_0 \in I'_b \setminus \lbrace \mathcal{i} \rbrace} \min \left \lbrace \alpha'_{\mathcal{i}}, \alpha'_{i_0} \right \rbrace + \sum_{i_0 \in I_b} \min \left \lbrace \alpha'_{\mathcal{i}}, \alpha'_{i_0} \right \rbrace \\ & = \nu_{i_{\mathcal{i}}} - \mu^{\circ} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) + \alpha_{i_{\mathcal{i}}} - 1 - 2 (\ell' - i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}}.
\end{align*}
Hence \[\mathcal{C}_1(\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b) = \left \lfloor \frac{ \nu_{i_{\mathcal{i}}} - \mu^{\circ} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}}}{\alpha_{i_{\mathcal{i}}} - 1} \right \rfloor + 1.\]
Thus, \[\left \lfloor \frac{ \nu_{i_{\mathcal{i}}} - \mu^{\circ} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}}}{\alpha_{i_{\mathcal{i}}} - 1} \right \rfloor < \mu^{\circ} + \ell - 2i_{i'} - 1.\]
Since the right-hand side is an integer, it follows that
\begin{align*}
& \frac{ \nu_{i_{\mathcal{i}}} - \mu^{\circ} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}}}{\alpha_{i_{\mathcal{i}}} - 1} < \mu^{\circ} + \ell - 2i_{i'} - 1 \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} - (\ell - 2i_{i'} - 1)(\alpha_{i_{\mathcal{i}}} - 1)}{\alpha_{i_{\mathcal{i}}} - 1} \\ & < \mu^{\circ} \left( 1 + \frac{1}{\alpha_{i_{\mathcal{i}}} - 1} \right) \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} - (\ell - 2i_{i'} - 1)(\alpha_{i_{\mathcal{i}}} - 1)}{\alpha_{i_{\mathcal{i}}}} < \mu^{\circ} \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} - (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2 \ell' + 2 i' - 2i_{i'} - 1}{\alpha_{i_{\mathcal{i}}}} - \ell + 2i_{i'} + 1 < \mu^{\circ} \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} + (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - \alpha_{i_{\mathcal{i}}} - 2 \sum_{i_0 = 1}^{i_{\mathcal{i}} - 1} m_{i_{\mathcal{i}}, i_0} - 1}{\alpha_{i_{\mathcal{i}}}} \\ & + \frac{- 2 \sum_{i_0 = i_{\mathcal{i}} + 1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2\sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} - \ell + 2i_{i'} < \mu^{\circ} \\ & \Longleftrightarrow \frac{\mathcal{C}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, i_{\mathcal{i}}-1 \rbrace, \lbrace i_{\mathcal{i}} +1, \ldots, \ell \rbrace) -1}{\alpha_{i_{\mathcal{i}}}} \\ & + \frac{- 2 \sum_{i_0 = i_{\mathcal{i}} + 1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2\sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 
2i_{\mathcal{i}} < \mu^{\circ} + \ell - 2i_{\mathcal{i}}.
\end{align*}
We observe that $I_b = \lbrace i'+1, \ldots, \ell' \rbrace$. If $i' = \ell'$, this holds vacuously; otherwise, it follows from the assumption indicated above, for $\alpha_{i_{i'} + 1} =1$ implies $\lbrace i'+1, \ldots, \ell' \rbrace = I_{i_{i'} +1}$ is preserved under $\tau$. Since $\mathcal{i} \in I'_b$, we see also that $\mathcal{i} \leq i'$.
Thus,
\begin{align*}
& \frac{- 2 \sum_{i_0 = i_{\mathcal{i}} + 1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2\sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 2i_{\mathcal{i}} \\ & = \frac{-2 \sum_{i_0 = i_{\mathcal{i}} + 1}^{i_{i'}} m_{i_{\mathcal{i}}, i_0} - 2 \sum_{i_0 = i_{i'} + 1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2 \sum_{i_0 = i' +1}^{\ell'} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 2i_{\mathcal{i}} \\ & = \frac{-2 \sum_{i_0=i_{\mathcal{i}} + 1}^{i_{i'}} m_{i_{\mathcal{i}}, i_0}}{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 2i_{\mathcal{i}} \geq 0.
\end{align*}
Furthermore, \[\mu^{\circ} = \mu_{i_{\mathcal{i}}} \leq \mathcal{C}_{-1}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, i_{\mathcal{i}} - 1 \rbrace, \lbrace i_{\mathcal{i}} + 1, \ldots, \ell \rbrace) - \ell + 2i_{\mathcal{i}} - 1.\]
Set \[\mathcal{c} := \mathcal{C}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, i_{\mathcal{i}} - 1 \rbrace, \lbrace i_{\mathcal{i}} + 1, \ldots, \ell \rbrace).\]
Then
\begin{align*}
& \frac{\mathcal{c}-1}{\alpha_{i_{\mathcal{i}}}} < \left \lceil \frac{\mathcal{c}}{\alpha_{i_{\mathcal{i}}}} \right \rceil - 1 \\ & \Longleftrightarrow \mathcal{c} -1 < \alpha_{i_{\mathcal{i}}} \left \lceil \frac{\mathcal{c}}{\alpha_{i_{\mathcal{i}}}} \right \rceil - \alpha_{i_{\mathcal{i}}} \\ & \Longleftrightarrow \mathcal{c} \leq \alpha_{i_{\mathcal{i}}} \left \lceil \frac{\mathcal{c}}{\alpha_{i_{\mathcal{i}}}} \right \rceil - \alpha_{i_{\mathcal{i}}} \\ & \Longrightarrow \mathcal{c} < \alpha_{i_{\mathcal{i}}} \left( \frac{\mathcal{c}}{\alpha_{i_{\mathcal{i}}}} + 1 \right) - \alpha_{i_{\mathcal{i}}} = \mathcal{c},
\end{align*}
which is a contradiction.
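The contradiction just derived ultimately rests on the elementary bound $\frac{\mathcal{c}-1}{\alpha} \geq \left \lceil \frac{\mathcal{c}}{\alpha} \right \rceil - 1$ for integers $\mathcal{c}$ and $\alpha \geq 1$, which follows from $\lceil \mathcal{c}/\alpha \rceil \leq (\mathcal{c} + \alpha - 1)/\alpha$. As a numerical aside (illustrative only, with names of our choosing), the bound can be verified by brute force:

```python
import math

def bound_holds(c, a):
    # (c - 1)/a >= ceil(c/a) - 1 for all integers c and a >= 1,
    # since ceil(c/a) <= (c + a - 1)/a
    return (c - 1) / a >= math.ceil(c / a) - 1

# exhaustive check over a range of integers, including negative numerators
assert all(bound_holds(c, a) for c in range(-50, 51) for a in range(1, 20))
```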
\item[(ii)] If $\alpha_j > 1$ for all $1 \leq j < i_{i'}$, set $j_0 := 0$. Otherwise, let $j_0 < i_{i'}$ be maximal such that $\alpha_{j_0} = 1$. Analogously, if $\alpha_j > 1$ for all $i_{i'} < j \leq \ell$, set $j_1 := \ell + 1$. Otherwise, let $j_1 > i_{i'}$ be minimal such that $\alpha_{j_1} = 1$. Set $I_c := I_{j_0} \setminus I_{j_1}$. By assumption, $I_c$ is preserved under $\tau$. Hence $I_c \cap I'_b \neq \varnothing$, for otherwise $I_c \subset I_b$, meaning $I_c \subset \lbrace i' + 1, \ldots, \ell' \rbrace$, which is impossible because $i' \in I_c$.
Let $\mathcal{i} \in I_c \cap I'_b$ be chosen so that $\alpha_{i_{\mathcal{i}}}$ is minimal. We claim that \[\mathcal{C}_1 \left (\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b \right) \leq \mu^{\circ} + \ell - 2i_{i'} + 1.\]
Assume for the sake of contradiction that \[\mathcal{C}_1 \left (\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b \right) \geq \mu^{\circ} + \ell - 2i_{i'} + 2.\]
As above, \[\mathcal{C}_1(\alpha', \nu', \mathcal{i}, I'_b \setminus \lbrace \mathcal{i} \rbrace, I_b) = \left \lfloor \frac{ \nu_{i_{\mathcal{i}}} - \mu^{\circ} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}}}{\alpha_{i_{\mathcal{i}}} - 1} \right \rfloor + 1.\]
Thus,
\begin{align*}
& \frac{ \nu_{i_{\mathcal{i}}} - \mu^{\circ} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}}}{\alpha_{i_{\mathcal{i}}} - 1} \geq \mu^{\circ} + \ell - 2i_{i'} + 1 \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} - (\ell - 2i_{i'} + 1)(\alpha_{i_{\mathcal{i}}} - 1)}{\alpha_{i_{\mathcal{i}}} - 1} \\ & \geq \mu^{\circ} \left( 1 + \frac{1}{\alpha_{i_{\mathcal{i}}} - 1} \right) \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} - (\alpha^*_2 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - 2(\ell'-i') + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} - (\ell - 2i_{i'} + 1)(\alpha_{i_{\mathcal{i}}} - 1)}{\alpha_{i_{\mathcal{i}}}} \geq \mu^{\circ} \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} - (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) + 2 \sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2 \ell - 2 \ell' + 2 i' - 2i_{i'}+1}{\alpha_{i_{\mathcal{i}}}} - \ell + 2i_{i'} - 1 \geq \mu^{\circ} \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} + (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - \alpha_{i_{\mathcal{i}}} - 2 \sum_{i_0 = 1}^{j_0} m_{i_{\mathcal{i}}, i_0} + 1}{\alpha_{i_{\mathcal{i}}}} \\ & + \frac{- 2 \sum_{i_0 = j_0+1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2\sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} - \ell + 2i_{i'} \geq \mu^{\circ} \\ & \Longleftrightarrow \frac{\nu_{i_{\mathcal{i}}} + (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - \alpha_{i_{\mathcal{i}}} - 2 \sum_{i_0 = 1}^{j_0} m_{i_{\mathcal{i}}, i_0} + 1}{\alpha_{i_{\mathcal{i}}}} \\ & + \frac{- 2 \sum_{i_0 = j_0 + 1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2\sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 
2j_0 \geq \mu^{\circ} + \ell - 2 j_0.
\end{align*}
From the inclusions $I_{j_1} \subset \lbrace i'+1, \ldots, \ell' \rbrace \subset I_{j_0}$, we see that $I_{j_1} \subset I_b \subset I_{j_0}$, so $I_{j_1} = I_b \setminus I_c$. Furthermore, $|I_c \cap I'_b| = |I_c \cap \lbrace 1, \ldots, i' \rbrace| = i_{i'} - j_0$. Thus,
\begin{align*}
& \frac{- 2 \sum_{i_0 = j_0 + 1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2\sum_{i_0 \in I_b} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 2j_0 \\ & = \frac{- 2 \sum_{i_0 = j_0 + 1}^{j_1 - 1} m_{i_{\mathcal{i}}, i_0} + 2\sum_{i_0 \in I_c \cap I_b} m_{i_{\mathcal{i}}, i_{i_0}} }{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 2j_0 \\ & + \frac{- 2 \sum_{i_0 = j_1}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2 \sum_{i_0 \in I_{j_1}} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} \\ & = \frac{- 2\sum_{i_0 \in I_c \cap I'_b} m_{i_{\mathcal{i}}, i_{i_0}}}{\alpha_{i_{\mathcal{i}}}} + 2i_{i'} - 2j_0 \\ & + \frac{-2 \sum_{i_0 = i_{i'}}^{\ell} m_{i_{\mathcal{i}}, i_0} + 2 \sum_{i_0 = i'}^{\ell'} m_{i_{\mathcal{i}}, i_{i_0}} + 2\ell - 2\ell' + 2i' - 2i_{i'}}{\alpha_{i_{\mathcal{i}}}} \\ & = 0 + 0 = 0.
\end{align*}
Hence \[ \frac{\nu_{i_{\mathcal{i}}} + (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - \alpha_{i_{\mathcal{i}}} - 2 \sum_{i_0 = 1}^{j_0} m_{i_{\mathcal{i}}, i_0} + 1}{\alpha_{i_{\mathcal{i}}}} \geq \mu^{\circ} + \ell - 2j_0.\]
If $j_0 = 0$, then
\begin{align*}
\frac{\mathcal{C}(\alpha, \nu, i_{\mathcal{i}}, \varnothing, \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace) + 1}{\alpha_{i_{\mathcal{i}}}} & = \frac{\nu_{i_{\mathcal{i}}} + (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - \alpha_{i_{\mathcal{i}}} + 1}{\alpha_{i_{\mathcal{i}}}} \\ & \geq \mu^{\circ} + \ell \\ & = \mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2, \ldots, \ell \rbrace) + 1 \\ & \geq \mathcal{C}_{-1}(\alpha, \nu, i_{\mathcal{i}}, \varnothing, \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace) + 1 \\ & \geq \frac{\mathcal{C}(\alpha, \nu, i_{\mathcal{i}}, \varnothing, \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace)}{\alpha_{i_{\mathcal{i}}}} + 1,
\end{align*}
which is impossible because $\alpha_{i_{\mathcal{i}}} > 1$.
Thus, $j_0 \geq 1$. From Proposition~\ref{multi}, it follows that $\nu_{j_0} = \mu^{\circ}$. Hence \[\mathcal{C}_{-1}(\alpha, \nu, j_0, \lbrace 1, \ldots, j_0 - 1 \rbrace, \lbrace j_0 + 1, \ldots, \ell \rbrace) = \mu^{\circ} + \ell - 2j_0 + 1.\]
Since $j_0 < i_{\mathcal{i}}$ and $\alpha_{j_0} < \alpha_{i_{\mathcal{i}}}$, it follows that \[\mathcal{C}_{-1}(\alpha, \nu, j_0, \lbrace 1, \ldots, j_0 - 1 \rbrace, \lbrace j_0 + 1, \ldots, \ell \rbrace) > \mathcal{C}_{-1}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, j_0 - 1 \rbrace, \lbrace j_0, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace).\]
Therefore, \[\mu^{\circ} + \ell - 2j_0 \geq \mathcal{C}_{-1}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, j_0 - 1 \rbrace, \lbrace j_0, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace).\]
Then
\begin{align*}
& \frac{\mathcal{C}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, j_0 - 1 \rbrace, \lbrace j_0, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace) - 1}{\alpha_{i_{\mathcal{i}}}} \\ & = \frac{\nu_{i_{\mathcal{i}}} + (\alpha^*_1 + \cdots + \alpha^*_{\alpha_{i_{\mathcal{i}}}}) - \alpha_{i_{\mathcal{i}}} - 2 \sum_{i_0 = 1}^{j_0} m_{i_{\mathcal{i}}, i_0} + 1}{\alpha_{i_{\mathcal{i}}}} \\ & \geq \mu^{\circ} + \ell - 2j_0 \\ & \geq \mathcal{C}_{-1}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, j_0 - 1 \rbrace, \lbrace j_0, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace) \\ & \geq \frac{\mathcal{C}(\alpha, \nu, i_{\mathcal{i}}, \lbrace 1, \ldots, j_0 - 1 \rbrace, \lbrace j_0, \ldots, \ell \rbrace \setminus \lbrace i_{\mathcal{i}} \rbrace)}{\alpha_{i_{\mathcal{i}}}},
\end{align*}
which is a contradiction.
\end{enumerate}
It remains to justify the assumption that $I_i$ is preserved under $\tau$ for all $1 \leq i \leq \ell$ such that $\alpha_i = 1$. Given a subset $J \subset \lbrace 1, \ldots, \ell' \rbrace$, set $i_J := \lbrace i_j : j \in J \rbrace$. Given a subset $I \subset \lbrace 1, \ldots, \ell \rbrace$, let $m(I)$ be its minimal element, and let $M(I)$ be its maximal element. Say that $I$ is \textit{consecutive} if $M(I) - m(I) + 1 = |I|$. Partition $\lbrace 1, \ldots, \ell' \rbrace$ into disjoint blocks $J_1, \ldots, J_k$ such that $i_{J_r}$ is consecutive for all $1 \leq r \leq k$ and $m(i_{J_{r+1}}) - M(i_{J_r}) > 1$ for all $1 \leq r \leq k-1$.
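The partition into blocks $J_1, \ldots, J_k$ is simply the decomposition of the positions $i_1 < \cdots < i_{\ell'}$ into maximal runs of consecutive integers. As an illustrative aside (the function name is ours, not notation from the text), this decomposition can be computed as follows:

```python
def consecutive_blocks(positions):
    """Split sorted distinct integers into maximal runs of consecutive
    values; return the blocks as lists of 1-based indices into the input,
    mirroring the blocks J_1, ..., J_k of the text."""
    if not positions:
        return []
    blocks, current = [], [1]
    for j in range(1, len(positions)):
        if positions[j] == positions[j - 1] + 1:
            current.append(j + 1)  # same run: positions differ by exactly 1
        else:
            blocks.append(current)  # gap of more than 1 starts a new block
            current = [j + 1]
    blocks.append(current)
    return blocks

# positions 2,3,4 | 7,8 | 11 give the index blocks {1,2,3}, {4,5}, {6}
```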
We claim that $J_r$ is preserved under $\tau$ for all $1 \leq r \leq k$. The proof is by (backwards) induction on $r$. For the inductive step, suppose the claim holds for all $r + 1 \leq r_0 \leq k$. Let $c$ be the cardinality of $J_r$, and let $j^r_1, \ldots, j^r_c$ be the elements of $J_r$, arranged in increasing order.
The claim for $r$ is then that $\tau^{-1}(j^r_{w}) \in J_r$ for all $1 \leq w \leq c$. If $r=1$, this follows immediately from the inductive hypothesis, so we may assume $r \geq 2$. Set $q := i_{j^r_1} - 1$. Then $\alpha_q = 1$ and $i_{j^r_w} = q + w$. We prove the claim by (backwards) induction on $w$.
Suppose $\tau^{-1}(j^r_{w_0}) \in J_r$ for all $w + 1 \leq w_0 \leq c$. Set $j_0 := \tau^{-1}(j^r_{w})$. Assume for the sake of contradiction that $j_0 \notin J_r$. By the inductive hypothesis (on $r$), we see that $j_0 \in J_1 \cup \cdots \cup J_{r-1}$. Thus, $i_{j_0} < q$.
Note that
\begin{align*}
\mu^{\circ} = \mu_{i_{j_0}} & \leq \mathcal{C}_{-1}(\alpha, \nu, i_{j_0}, \lbrace 1, \ldots, i_{j_0} - 1 \rbrace, \lbrace i_{j_0} + 1, \ldots, \ell \rbrace) - \ell + 2i_{j_0} - 1 \\ & = \left \lceil \frac{\nu_{i_{j_0}} - \sum_{i_0=1}^{i_{j_0}-1} m_{i_{j_0}, i_0} + \sum_{i_0 = i_{j_0} + 1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0}}{\alpha_{i_{j_0}}} \right \rceil - \ell + 2i_{j_0} - 1 \\ & \leq \left \lceil \frac{\nu_{i_{j_0}} - \sum_{i_0=1}^{i_{j_0}-1} m_{i_{j_0}, i_0} - \sum_{i_0 = i_{j_0} + 1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0}}{\alpha_{i_{j_0}}}\right \rceil - \ell + 2q - 3 \\ & = \left \lceil \frac{\nu_{i_{j_0}} - \sum_{i_0=1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0}}{\alpha_{i_{j_0}}}\right \rceil - \ell + 2q - 2.
\end{align*}
Thus,
\begin{align}
& \left \lfloor \frac{\nu_{i_{j_0}} - \mu^{\circ} - \sum_{i_0 = 1}^{q-1} m_{i_{j_0}, i_0} + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0} - \ell + 2q - 1}{\alpha_{i_{j_0}} - 1}\right \rfloor \\ & \geq \left \lfloor \frac{\nu_{i_{j_0}} - \sum_{i_0 = 1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0} - \left \lceil \frac{\nu_{i_{j_0}} - \sum_{i_0=1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0}}{\alpha_{i_{j_0}}}\right \rceil}{\alpha_{i_{j_0}} -1} \right \rfloor \\ & = \left \lfloor \frac{\left \lfloor \left(\nu_{i_{j_0}} - \sum_{i_0=1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0} \right) \left( \frac{\alpha_{i_{j_0}} - 1}{\alpha_{i_{j_0}}} \right)\right \rfloor}{\alpha_{i_{j_0}} - 1}\right \rfloor \\ & = \left \lfloor \frac{\nu_{i_{j_0}} - \sum_{i_0=1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0}}{\alpha_{i_{j_0}}}\right \rfloor \\ & \geq \left \lceil \frac{\nu_{i_{j_0}} - \sum_{i_0=1}^{q-1} m_{i_{j_0}, i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{i_{j_0}, i_0}}{\alpha_{i_{j_0}}}\right \rceil - 1 \\ & \geq \mu^{\circ} + \ell - 2q + 1.
\end{align}
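The middle equalities in the chain above rely on two standard floor/ceiling identities: $N - \lceil N/\alpha \rceil = \lfloor N(\alpha-1)/\alpha \rfloor$, and the nested-floor identity $\lfloor \lfloor N(\alpha-1)/\alpha \rfloor / (\alpha - 1) \rfloor = \lfloor N/\alpha \rfloor$; the same two facts reappear in the chain below. As a numerical aside (illustrative only, with names of our choosing), both can be verified by brute force:

```python
def ceil_div(n, d):
    """Exact ceiling of n/d for integers with d > 0."""
    return -((-n) // d)

def identities_hold(N, a):
    # Python's // is floor division, also for negative operands
    inner = (N * (a - 1)) // a  # floor(N(a-1)/a)
    return (N - ceil_div(N, a) == inner        # N - ceil(N/a) = floor(N(a-1)/a)
            and inner // (a - 1) == N // a)    # nested-floor identity

# exhaustive check, including negative N
assert all(identities_hold(N, a) for N in range(-100, 101) for a in range(2, 15))
```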
Set \[J_r^w := \left \lbrace w_0 \in \lbrace 1, \ldots, c \rbrace : j^r_{w_0} \in J_r \setminus \tau^{-1} \lbrace j^r_{w+1}, \ldots, j^r_{c} \rbrace \right \rbrace.\] Let $w' \in J_r^w$ be chosen so that $\alpha_{i_{j^r_{w'}}}$ is minimal.
Since $q < q + w'$ and $\alpha_q = 1 < \alpha_{q+w'}$, it follows that
\begin{align*}
&\mathcal{C}_{-1}(\alpha, \nu, q + w', \lbrace 1, \ldots, q-1 \rbrace, \lbrace q, \ldots, \ell \rbrace \setminus \lbrace q+w' \rbrace) \\ & < \mathcal{C}_{-1}(\alpha, \nu, q, \lbrace 1, \ldots, q-1 \rbrace, \lbrace q+1, \ldots, \ell \rbrace) \\ & = \nu_q + \ell - 2q + 1 \\ & = \mu^{\circ} + \ell - 2q + 1,
\end{align*}
where the last equality follows from Proposition~\ref{multi}.
Hence
\begin{align*}
\mu^{\circ} + \ell - 2q + 1 & \geq \mathcal{C}_{-1}(\alpha, \nu, q + w', \lbrace 1, \ldots, q-1 \rbrace, \lbrace q, \ldots, \ell \rbrace \setminus \lbrace q+w' \rbrace) + 1 \\ & = \left \lceil \frac{\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0}}{\alpha_{q + w'}} \right \rceil.
\end{align*}
Therefore,
\begin{align}
& \left \lfloor \frac{\nu_{q+w'} - \mu^{\circ} - \sum_{i_0 = 1}^{q-1} m_{q+w', i_0} + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0} - \ell + 2q - 1}{\alpha_{q+w'}-1} \right \rfloor \\ & \leq \left \lfloor \frac{\nu_{q+w'} - \sum_{i_0 = 1}^{q-1} m_{q+w', i_0} + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0} - \left \lceil \frac{\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0}}{\alpha_{q + w'}} \right \rceil}{\alpha_{q+w'}-1} \right \rfloor \\ & \leq \left \lfloor \frac{\nu_{q+w'} - \sum_{i_0 = 1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0} - \left \lceil \frac{\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0}}{\alpha_{q + w'}} \right \rceil}{\alpha_{q+w'}-1} \right \rfloor \\ & = \left \lfloor \frac{\left \lfloor \left(\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0} \right) \left(\frac{\alpha_{q+w'}-1}{\alpha_{q+w'}} \right) \right \rfloor}{\alpha_{q+w'}-1} \right \rfloor \\ & = \left \lfloor \frac{\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0}}{\alpha_{q + w'}} \right \rfloor \\ & \leq \left \lceil \frac{\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0}}{\alpha_{q + w'}} \right \rceil \\ & \leq \mu^{\circ} + \ell - 2q + 1.
\end{align}
Combining (5.4) -- (5.9) and (5.10) -- (5.16), we see that $(5.4) \geq (5.10)$, with equality if and only if all the inequalities are in fact equalities. However, if $(5.14) = (5.15)$, then \[z:= \frac{\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0}}{\alpha_{q + w'}} \in \mathbb{Z},\] in which case \[\frac{\nu_{q+w'} - \sum_{i_0=1}^{q-1} m_{q+w', i_0} + 1 + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0}-z}{\alpha_{q + w'} -1} = z \in \mathbb{Z},\] so $(5.12) = z$ and $(5.11) = \left \lfloor z - \frac{1}{\alpha_{q+w'} - 1} \right \rfloor = z-1$, making the inequality $(5.11) \leq (5.12)$ strict; if instead $(5.14) < (5.15)$, the chain $(5.10)$ -- $(5.16)$ already contains a strict inequality. In either case not all of the inequalities are equalities, so $(5.4) > (5.10)$.
For all $1 \leq i, j \leq \ell$, set $m'_{i,j} := m_{i,j} - 1$. Note that $m'_{i,j} = 0$ unless $\alpha_i, \alpha_j > 1$. Additionally, \[\frac{-2\sum_{w_0 \in J_r^w} m'_{i_{j_0}, q+w_0}}{\alpha_{i_{j_0}} - 1} \geq -2w = \frac{-2\sum_{w_0 \in J_r^w} m'_{q+w', q+w_0}}{\alpha_{q+w'} - 1}.\]
Thus,
\begin{align*}
& \mathcal{C}_1(\alpha', \nu', j_0, (J_1 \cup \cdots \cup J_r) \setminus \tau^{-1} \lbrace j^r_{w}, \ldots, j^r_c \rbrace, \tau^{-1} \lbrace j^r_{w+1}, \ldots, j^r_c \rbrace \cup J_{r+1} \cup \cdots \cup J_k) \\ & = \left \lfloor \frac{\nu'_{j_0} - \sum_{i_0 = 1}^{q-1} m'_{i_{j_0}, i_0} - \sum_{w_0 \in J_r^w} m'_{i_{j_0}, q + w_0} + \sum_{w_0 \in \lbrace 1, \ldots, c \rbrace \setminus J_r^w} m'_{i_{j_0}, q + w_0} + \sum_{i_0 = q + c + 1}^{\ell} m'_{i_{j_0}, i_0}}{\alpha_{i_{j_0}} -1} \right \rfloor + 1 \\ & = \left \lfloor \frac{\nu_{i_{j_0}} - \mu^{\circ} - \sum_{i_0 = 1}^{q-1} m'_{i_{j_0}, i_0} + \sum_{i_0 = q + 1}^{\ell} m'_{i_{j_0}, i_0}}{\alpha_{i_{j_0}} -1} + \frac{-2\sum_{w_0 \in J_r^w} m'_{i_{j_0}, q + w_0}}{\alpha_{i_{j_0}} - 1} \right \rfloor + 1 \\ & \geq \left \lfloor \frac{\nu_{i_{j_0}} - \mu^{\circ} - \sum_{i_0 = 1}^{q-1} m_{i_{j_0}, i_0} + \sum_{i_0 = q + 1}^{\ell} m_{i_{j_0}, i_0} - \ell + 2q - 1}{\alpha_{i_{j_0}} -1} \right \rfloor - 2w + 1 \\ & > \left \lfloor \frac{\nu_{q+w'} - \mu^{\circ} - \sum_{i_0 = 1}^{q-1} m_{q+w', i_0} + \sum_{i_0 = q+1}^{\ell} m_{q+w', i_0} - \ell + 2q - 1}{\alpha_{q+w'}-1} \right \rfloor - 2w + 1 \\ & = \left \lfloor \frac{\nu_{q+w'} - \mu^{\circ} - \sum_{i_0 = 1}^{q-1} m'_{q+w', i_0} + \sum_{i_0 = q+1}^{\ell} m'_{q+w', i_0}}{\alpha_{q+w'}-1} + \frac{-2 \sum_{w_0 \in J_r^w} m'_{q+w', q+ w_0}}{\alpha_{q+w'}-1} \right \rfloor + 1 \\ & = \left \lfloor \frac{\nu'_{j^r_{w'}} - \sum_{i_0=1}^{q-1} m'_{i_{j^r_{w'}}, i_0} - \sum_{w_0 \in J_r^w} m'_{i_{j^r_{w'}}, q + w_0} + \sum_{w_0 \in \lbrace 1, \ldots, c \rbrace \setminus J_r^w} m'_{i_{j^r_{w'}}, q + w_0} + \sum_{i_0 = q+c+1}^{\ell} m'_{i_{j^r_{w'}}, i_0}}{\alpha_{q+w'}-1} \right \rfloor + 1 \\ & = \mathcal{C}_1(\alpha', \nu', j^r_{w'}, (J_1 \cup \cdots \cup J_r) \setminus (\lbrace j^r_{w'} \rbrace \cup \tau^{-1} \lbrace j^r_{w+1}, \ldots, j^r_{c} \rbrace), \tau^{-1} \lbrace j^r_{w+1}, \ldots, j^r_c \rbrace \cup J_{r+1} \cup \cdots \cup J_k).
\end{align*}
Set $J := \tau^{-1} \lbrace j^r_{w+1}, \ldots, j^r_c \rbrace \cup J_{r+1} \cup \cdots \cup J_k$ and $J' := \lbrace 1, \ldots, \ell' \rbrace \setminus J$. By the inductive hypothesis, \[J = \tau^{-1} \lbrace j \in \lbrace 1, \ldots, \ell' \rbrace : j > j^r_w \rbrace.\]
From our work above, we see that \[\mathcal{C}_1(\alpha', \nu', j_0, J' \setminus \lbrace j_0 \rbrace, J) > \mathcal{C}_1(\alpha', \nu', j^r_{w'}, J' \setminus \lbrace j^r_{w'} \rbrace, J),\] which means that the function given by \[j \mapsto \mathcal{C}_1(\alpha', \nu', j, J' \setminus \lbrace j \rbrace, J)\] does not attain its minimal value over the domain $j \in J'$ at $j = j_0 = \tau^{-1}(j^r_w)$. This contradicts the definition of $\tau$. \end{proof}
\begin{df} Given a diagram $X$ and a positive integer $j$, the diagram $\mathcal{T}_j(X)$ is obtained from $X$ by removing the leftmost $j-1$ columns of $X$, and then removing the empty rows from the remaining diagram. \end{df}
\begin{rem} We refer to $\mathcal{T}_j$ as the \textit{column-reduction} function. Inductively, we see that $\mathcal{T}_j \mathcal{T}_{j'}(X) = \mathcal{T}_{j+j'-1}(X)$ for all $j, j' \in \mathbb{N}$. \end{rem}
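The column-reduction function is straightforward to realise in code. Below is a minimal Python sketch (our own illustration, not part of the formal development): a diagram is represented as a list of rows, and the function name is ours.

```python
def column_reduce(X, j):
    """T_j: remove the leftmost j-1 columns of the diagram X
    (a list of rows), then drop any rows that have become empty."""
    reduced = [row[j - 1:] for row in X]
    return [row for row in reduced if row]
```

The composition law of the remark, $\mathcal{T}_j \mathcal{T}_{j'}(X) = \mathcal{T}_{j+j'-1}(X)$, holds because removing $j'-1$ columns and then $j-1$ more removes $j+j'-2$ columns in total.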
\begin{lem} \label{bigentry} Let $(\alpha, \nu) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell}$. Set $(X,Y) := \mathcal{A}(\alpha, \nu, -1)$. Suppose $X_{1,1} = \cdots = X_{\ell, 1}$. Then $Y_{1,1} \geq Y_{i,j}$ for all $(i, j) \in \mathbb{N} \times \mathbb{N}$ such that $Y$ has an entry in the $i^{\text{th}}$ row and $j^{\text{th}}$ column. \end{lem}
\begin{proof} The proof is by induction on $M := \max \lbrace \alpha_1, \ldots, \alpha_{\ell} \rbrace$. Clearly, $Y_{1, 1} \geq Y_{i, 1}$ for all $i$, and it follows from Theorem~\ref{differences} that $Y_{i, 1} \geq Y_{i, 2}$ for all $i$, so $Y_{1,1} \geq Y_{i,2}$. This settles the claim whenever $Y$ has at most two columns, which covers the cases $M \leq 2$; thus, we may assume $M \geq 3$.
Maintain the notation from the proof of Theorem~\ref{differences} (continue to assume without loss of generality that $\sigma = \operatorname{id}$). Set $\mathcal{f}' := \mathcal{S}(\alpha', \tau, \mu')$. For all $(x, i_0)$ in the image of $\mathcal{f}'$ such that $i_0 > 0$, set $i'_{(x,i_0)} := {\mathcal{f}'}^{-1}(x,i_0)$. Also set $\mathcal{k}' := |\lbrace \mu'_1, \ldots, \mu'_{\ell'} \rbrace|$. For all $1 \leq x \leq \mathcal{k}'$, set \[\ell'_x := \max \lbrace i_0 : (x, i_0) \in \mathcal{f}' \lbrace 1, \ldots, \ell' \rbrace \rbrace.\]
If $\ell'_x > 0$, then set \[{\alpha'}^{(x)} := \left [\alpha'_{{\tau}^{-1}\left(i'_{(x, 1)}\right)} - 1, \ldots, \alpha'_{{\tau}^{-1}\left(i'_{(x, \ell'_x)}\right)} - 1 \right] \] and \[{\nu'}^{(x)} = \left [\nu'_{{\tau}^{-1}\left(i'_{(x, 1)}\right)} - \mu'_{i'_{(x, 1)}}, \ldots, \nu'_{{\tau}^{-1}\left(i'_{(x, \ell'_x)}\right)} - \mu'_{i'_{(x, \ell'_x)}} \right].\]
For all $1 \leq i_0 \leq \ell'_x$, set \[\widehat{\nu'}^{(x)}_{i_0} := {\nu'}^{(x)}_{i_0} - \sum_{x' = 1}^{x-1} \sum_{i_1 = 1}^{\ell'_{x'}} \min \left\lbrace {\alpha'}^{(x)}_{i_0}, {\alpha'}^{(x')}_{i_1} \right\rbrace + \sum_{x' = x+1}^{\mathcal{k}'} \sum_{i_1 = 1}^{\ell'_{x'}} \min \left\lbrace {\alpha'}^{(x)}_{i_0}, {\alpha'}^{(x')}_{i_1} \right\rbrace.\] Then set $\widehat{\nu'}^{(x)} := \left[\widehat{\nu'}^{(x)}_1, \ldots, \widehat{\nu'}^{(x)}_{\ell'_x}\right]$ and $\left({X'}^{(x)}, {Y'}^{(x)} \right) := \mathcal{A}\left({\alpha'}^{(x)}, \widehat{\nu'}^{(x)}, -1 \right)$.
By construction $\mathcal{T}_2(Y) = Y'$ and \[\mathcal{T}_3(Y) = \mathcal{T}_2 (Y') = \operatorname{Cat}\left({Y'}^{(1)}, \ldots, {Y'}^{(\mathcal{k}')}\right).\]
Note that $\mathcal{T}_2(Y)_{i, 1} \geq \mathcal{T}_2(Y)_{i+1, 1} + 1$. To see this, suppose the $i^{\text{th}}$ row of $\mathcal{T}_2(Y)$ is contained within the $j^{\text{th}}$ row of $Y$, and the $(i+1)^{\text{th}}$ row of $\mathcal{T}_2(Y)$ is contained within the $k^{\text{th}}$ row of $Y$. Then $k - j \geq 1$, so $Y_{j, 1} \geq Y_{k, 1} + 2$, and it follows from Theorem~\ref{differences} that $Y_{j, 2} \geq Y_{k, 2} + 1$.
Applying exactly the same reasoning, we find $\mathcal{T}_2(Y')_{i, 1} \geq \mathcal{T}_2(Y')_{i+1, 1}$, so the entries of $\mathcal{T}_3(Y) = \mathcal{T}_2 (Y')$ down the first column are weakly decreasing. In particular, ${Y'}^{(1)}_{1,1} \geq {Y'}^{(x)}_{1, 1}$ for all $1 \leq x \leq \mathcal{k}'$. By the inductive hypothesis, we see that ${Y'}^{(1)}_{1,1} \geq \mathcal{T}_3(Y)_{i,j}$ for all $(i, j)$.
Thus, it suffices to show $Y_{1,1} \geq {Y'}^{(1)}_{1,1}$. Assume for the sake of contradiction that ${Y'}^{(1)}_{1,1} > Y_{1,1}$. Set $\phi := \mathcal{R}_{-1}({\alpha'}^{(1)}, \widehat{\nu'}^{(1)})$. Set $i_0 := \phi^{-1}(1)$. Also set $i'_0 := \tau^{-1}\left(i'_{(1,i_0)}\right)$. Note that \begin{align*} {Y'}^{(1)}_{1,1} & = {X'}^{(1)}_{1,1} + \ell'_1 - 1 \\ & = \mathcal{C}_{-1}({\alpha'}^{(1)}, \widehat{\nu'}^{(1)}, i_0, \varnothing, \lbrace 1, \ldots, \ell'_1 \rbrace \setminus \lbrace i_0 \rbrace) \\ & = \left \lceil \frac{\widehat{\nu'}^{(1)}_{i_0} + \sum_{i_1 = 1}^{\ell'_1} \min \left \lbrace {\alpha'}^{(1)}_{i_0}, {\alpha'}^{(1)}_{i_1} \right \rbrace}{{\alpha'}^{(1)}_{i_0}} \right \rceil - 1 \\ & = \left \lceil \frac{{\nu'}^{(1)}_{i_0} + \sum_{i_1 = 1}^{\ell'_1} \min \left \lbrace {\alpha'}^{(1)}_{i_0}, {\alpha'}^{(1)}_{i_1} \right \rbrace + \sum_{x' = 2}^{\mathcal{k}'} \sum_{i'=1}^{\ell'_{x'}} \min \left \lbrace {\alpha'}^{(1)}_{i_0},{\alpha'}^{(x')}_{i_1} \right \rbrace}{{\alpha'}^{(1)}_{i_0}} \right \rceil - 1 \\ & = \left \lceil \frac{\nu'_{i'_0} - \mu'_{i'_{(1,i_0)}} + \sum_{x = 1}^{\mathcal{k}'} \sum_{i'=1}^{\ell'_{x}} \min \left \lbrace {\alpha'}_{i'_0} - 1, {\alpha'}_{\tau^{-1}(i'_{(x, i_1)})} - 1 \right \rbrace}{{\alpha'}_{i'_0} - 1} \right \rceil - 1 \\ & = \left \lceil \frac{\nu'_{i'_0} - \mu'_{i'_{(1,i_0)}} + \sum_{i' = 1}^{\ell'} \min \left \lbrace {\alpha'}_{i'_0} - 1, {\alpha'}_{i'} - 1 \right \rbrace}{{\alpha'}_{i'_0} - 1} \right \rceil - 1 \\ & = \left \lceil \frac{\nu'_{i'_0} - \mu'_{i'_{(1,i_0)}} + \left({\alpha'}^*_1 + \cdots + {\alpha'}^*_{{\alpha'}_{i'_0}} \right) - \ell'}{{\alpha'}_{i'_0} - 1} \right \rceil - 1 \\ & = \left \lceil \frac{\nu_{i_{i'_0}} - \mu^{\circ} - \mu'_{i'_{(1,i_0)}} + \left({\alpha}^*_1 + \cdots + {\alpha}^*_{{\alpha}_{i_{i'_0}}} \right) - \ell - \ell'}{{\alpha}_{i_{i'_0}} - 2} \right \rceil - 1. \end{align*}
Suppose that the topmost row of $\mathcal{T}_3(Y)$ is contained within the $p^{\text{th}}$ row of $Y$. If $p > 1$, then \[\mathcal{T}_3(Y)_{1,1} = Y_{p, 3} \leq Y_{p, 2} + 1 \leq Y_{p, 1} + 1 \leq Y_{1,1} - 1.\] It follows that $p = 1$, so there are at least three boxes in the first row of $Y$. Thus, the first row of $Y'$ is contained within the first row of $Y$, and, furthermore, there are at least two boxes in the first row of $Y'$. Hence $\alpha'_{\tau^{-1}(1)} > 1$, whence $\mathcal{f}'(1) = (1, 1)$.
Since $\mathcal{T}_3(Y)_{1,1} = Y_{1, 3}$ and $Y_{1, 3} \leq Y_{1, 2} + 1 \leq Y_{1, 1} + 1$, the assumption $Y_{1,3} > Y_{1,1}$ entails $Y_{1,2} = Y_{1,1}$. Note that $Y_{1,1} = \mu^{\circ} + \ell - 1$ and $Y_{1, 2} = Y'_{1,1} = \mu'_1 + \ell' - 1$, so $\mu^{\circ} + \ell = \mu'_1 + \ell'$. Then $\mu^{\circ} + \ell = \mu'_{i'_{(1,i_0)}} + \ell'$ because $\mu'_{i'_{(1,i_0)}} = \mu'_{i'_{(1,1)}} = \mu'_1$.
Thus, \[{Y'}^{(1)}_{1,1} = \left \lceil \frac{\nu_{i_{i'_0}} + \left({\alpha}^*_1 + \cdots + {\alpha}^*_{{\alpha}_{i_{i'_0}}} \right) - 2(\mu^{\circ} + \ell)}{{\alpha}_{i_{i'_0}} - 2} \right \rceil - 1.\]
From ${Y'}^{(1)}_{1,1} > Y_{1,1} = \mu^{\circ} + \ell - 1$, we obtain \begin{align*} & \left \lceil \frac{\nu_{i_{i'_0}} + \left({\alpha}^*_1 + \cdots + {\alpha}^*_{{\alpha}_{i_{i'_0}}} \right) - 2(\mu^{\circ} + \ell)}{{\alpha}_{i_{i'_0}} - 2} \right \rceil > \mu^{\circ} + \ell \\ & \Longleftrightarrow \frac{\nu_{i_{i'_0}} + \left({\alpha}^*_1 + \cdots + {\alpha}^*_{{\alpha}_{i_{i'_0}}} \right) - 2(\mu^{\circ} + \ell)}{{\alpha}_{i_{i'_0}} - 2} > \mu^{\circ} + \ell \\ & \Longleftrightarrow \frac{\nu_{i_{i'_0}} + \left({\alpha}^*_1 + \cdots + {\alpha}^*_{{\alpha}_{i_{i'_0}}} \right)}{{\alpha}_{i_{i'_0}} - 2} > (\mu^{\circ} + \ell) \left(1 + \frac{2}{\alpha_{i_{i'_0}} - 2} \right) \\ & \Longleftrightarrow \frac{\nu_{i_{i'_0}} + \left({\alpha}^*_1 + \cdots + {\alpha}^*_{{\alpha}_{i_{i'_0}}} \right)}{{\alpha}_{i_{i'_0}}} > \mu^{\circ} + \ell \\ & \Longleftrightarrow \left \lceil \frac{\nu_{i_{i'_0}} + \left({\alpha}^*_1 + \cdots + {\alpha}^*_{{\alpha}_{i_{i'_0}}} \right)}{{\alpha}_{i_{i'_0}}} \right \rceil - 1 > \mu^{\circ} + \ell - 1 \\ & \Longleftrightarrow \mathcal{C}_{-1}(\alpha, \nu, i_{i'_0}, \varnothing, \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i_{i'_0} \rbrace) > \mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2, \ldots, \ell \rbrace). \end{align*}
However, the function given by \[i \mapsto \mathcal{C}_{-1}(\alpha, \nu, i, \varnothing, \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i \rbrace)\] attains its maximal value over the domain $i \in \lbrace 1, \ldots, \ell \rbrace$ at $i = \sigma^{-1}(1) = 1$.
Therefore, \[\mathcal{C}_{-1}(\alpha, \nu, i_{i'_0}, \varnothing, \lbrace 1, \ldots, \ell \rbrace \setminus \lbrace i_{i'_0} \rbrace) \leq \mathcal{C}_{-1}(\alpha, \nu, 1, \varnothing, \lbrace 2, \ldots, \ell \rbrace).\] This is a contradiction. \end{proof}
\begin{lem} \label{smallentry} Let $(\alpha, \nu) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell}$. Set $(X,Y) := \mathcal{A}(\alpha, \nu, 1)$. Suppose $X_{1,1} = \cdots = X_{\ell, 1}$. Then $Y_{\ell,1} \leq Y_{i,j}$ for all $(i, j) \in \mathbb{N} \times \mathbb{N}$ such that $Y$ has an entry in the $i^{\text{th}}$ row and $j^{\text{th}}$ column. \end{lem}
\begin{proof} The proof is analogous to that of Lemma~\ref{bigentry}. \end{proof}
\begin{thm} \label{puttog} Let $(\alpha, \nu) \in \mathbb{N}^{\ell} \times \mathbb{Z}^{\ell}$. Then $\mathcal{A}(\alpha, \nu, -1)$ is odd-distinguished, and $\mathcal{A}(\alpha, \nu, 1)$ is even-distinguished. \end{thm}
\begin{proof} The proof is by induction on $\max \lbrace \alpha_1, \ldots, \alpha_{\ell} \rbrace$. We show the inductive step for the former statement only.
Maintain the notation following the definitions of the row-survival and row-partition functions. Recall that \[Y = \operatorname{Cat}(\mathcal{Y}^{(1)}, \ldots, \mathcal{Y}^{(\mathcal{k})}) \quad \text{and} \quad \mathcal{T}_2(Y) = \operatorname{Cat}(Y^{(1)}, \ldots, Y^{(\mathcal{k})}).\] By the inductive hypothesis, $Y^{(x)}$ is even-distinguished for all $1 \leq x \leq \mathcal{k}$. To see that $Y$ is odd-distinguished, we prove that $Y$ satisfies the four conditions delineated in Definition~\ref{dis}.
\begin{enumerate}
\item This follows immediately from Theorem~\ref{differences}.
\item \begin{enumerate}
\item Suppose $j < j'$ are odd and $Y_{i,j} \leq Y_{i, j'} - 1$. We split into two cases.
If $j > 1$, note that there are at least two boxes in the $i^{\text{th}}$ row of $Y$. Setting $(x, i') := \mathcal{S}(\alpha, \sigma, \iota)(i)$, we obtain $i' > 0$, so $i = i_{(x,i')}$. Thus, \[Y^{(x)}_{i', j-1} = Y_{i,j} \leq Y_{i, j'} - 1 = Y^{(x)}_{i', j' - 1} - 1.\] Since $Y^{(x)}$ is even-distinguished, it follows that $Y^{(x)}_{i', j-1}$ is not $E$-raisable in $Y^{(x)}$, so $Y_{i,j}$ is not $E$-raisable in $Y$.
If $j = 1$, assume for the sake of contradiction that $Y_{i,j}$ is $E$-raisable. Setting $(x, i') := \mathcal{P}(\alpha, \iota)(i)$, we obtain $i' = 1$, so $i = p_{(x,1)}$. Then \[\mathcal{Y}^{(x)}_{1, 1} = Y_{i, 1} \leq Y_{i, j'} - 1 = \mathcal{Y}^{(x)}_{1, j'} - 1,\] which contradicts Lemma~\ref{bigentry}.
\item Suppose $j < j'$ are even and $Y_{i, j} \geq Y_{i, j'} + 1$. As above, note that there are at least two boxes in the $i^{\text{th}}$ row of $Y$. Setting $(x, i') := \mathcal{S}(\alpha, \sigma, \iota)(i)$, we again obtain $i' > 0$, so $i = i_{(x,i')}$. Thus, \[Y^{(x)}_{i', j-1} = Y_{i,j} \geq Y_{i, j'} + 1 = Y^{(x)}_{i', j'-1} + 1.\] Since $Y^{(x)}$ is even-distinguished, it follows that $Y^{(x)}_{i', j-1}$ is not $E$-lowerable in $Y^{(x)}$, so $Y_{i,j}$ is not $E$-lowerable in $Y$.
\end{enumerate}
\item For both parts of this condition, it suffices to address the case $j = 1$; the condition holds for $j > 1$ by the inductive hypothesis (as in the proof of condition (2)).
\begin{enumerate}
\item Suppose $Y_{i,1} \leq Y_{i,j'} - 2$. If $j'$ is odd, then $Y_{i,1}$ is not $E$-raisable by condition (2). Otherwise, $j'$ is even, and we see from Theorem~\ref{differences} that \[Y_{i, 1} \leq Y_{i, j'} - 2 \leq Y_{i, j'-1} - 2.\] Thus, $j' -1 > 1$, and, again invoking condition (2), we find $Y_{i,1}$ is not $E$-raisable.
\item Suppose $Y_{i,1} \geq Y_{i,j'} + 2$, and assume for the sake of contradiction that $Y_{i,1}$ is $E$-lowerable. Setting $(x,i') := \mathcal{P}(\alpha, \iota)(i)$, we obtain $i' = \ell^{\circ}_x$, so $i = p_{(x, \ell^{\circ}_x)}$. By Theorem~\ref{differences},
\begin{align} \label{compare}
\mathcal{Y}^{(x)}_{\ell^{\circ}_x, 2} = Y_{i, 2} \geq Y_{i, 1} - 1 \geq Y_{i, j'} + 1 = \mathcal{Y}^{(x)}_{\ell^{\circ}_x, j'} + 1.
\end{align}
Note that $\mathcal{Y}^{(x)}_{\ell^{\circ}_x, 2}$ is $E$-lowerable in $\mathcal{Y}^{(x)}$ (\textit{even if} $Y_{i,2}$ is \textit{not} $E$-lowerable in $Y$). Thus, if $j'$ is even, then $j' > 2$, and Equation~\ref{compare} contradicts condition (2). Otherwise, $j'$ is odd, and \[\mathcal{Y}^{(x)}_{\ell^{\circ}_x, 2} \geq \mathcal{Y}^{(x)}_{\ell^{\circ}_x,j'} + 1 \geq \mathcal{Y}^{(x)}_{\ell^{\circ}_x,j'-1} + 1,\] which means $j' - 1 > 2$, and again yields a contradiction with condition (2).
\end{enumerate}
\item Clearly this condition holds for $j=1$. Therefore, since $Y^{(x)}$ is even-distinguished for all $1 \leq x \leq \mathcal{k}$, it suffices to show $x < x'$ implies $Y^{(x)}_{i, j} \geq Y^{(x')}_{i', j} + 2$.
We claim that $Y^{(x)}_{i, j} \geq Y^{(x)}_{\ell'_x, 1}$. To see this, note that Lemma~\ref{collapse} tells us that there exists a positive integer $\mathcal{k}_x$ and pairs of integer sequences \[\left(\alpha^{(x;1)}, \nu^{(x;1)}\right), \ldots, \left(\alpha^{(x;\mathcal{k}_x)}, \nu^{(x; \mathcal{k}_x)}\right)\] such that the diagram pairs $\left(X^{(x;y)}, Y^{(x;y)}\right) := \mathcal{A}_1 \left(\alpha^{(x;y)}, \nu^{(x;y)} \right)$ satisfy the following conditions: For all $1 \leq y \leq \mathcal{k}_x$, the entries in the first column of $X^{(x;y)}$ are all equal, and \[Y^{(x)} = \operatorname{Cat}\left(Y^{(x;1)}, \ldots, Y^{(x; \mathcal{k}_x)}\right).\]
Let $y$ be chosen so that the $i^{\text{th}}$ row of $Y^{(x)}$ is contained in $Y^{(x;y)}$. By Lemma~\ref{smallentry}, $Y^{(x)}_{i,j}$ is greater than or equal to the bottommost entry in the first column of $Y^{(x;y)}$. This entry belongs to the first column of $Y^{(x)}$, so it is itself greater than or equal to $Y^{(x)}_{\ell'_x,1}$, which proves the claim.
By Theorem~\ref{differences}, \[Y^{(x)}_{\ell'_x, 1} = Y_{i_{(x,\ell'_x)}, 2} \geq Y_{i_{(x,\ell'_x)},1} - 1 \geq Y_{p_{(x,\ell^{\circ}_x)}, 1} - 1.\]
Furthermore, \[Y_{p_{(x', 1)}, 1} + 2 = \mathcal{Y}^{(x')}_{1,1} + 2 \geq Y^{(x')}_{i',j} + 2,\] where the inequality follows from Lemma~\ref{bigentry} because $Y^{(x')}$ is contained in $\mathcal{Y}^{(x')}$, as shown in Lemma~\ref{collapse}.
Thus, it suffices to show $Y_{p_{(x,\ell^{\circ}_x)}, 1} - 1 \geq Y_{p_{(x',1)}, 1} + 2$. By definition, \[\mathcal{P}(\alpha, \iota)(p_{(x,\ell^{\circ}_x)}) = (x, \ell^{\circ}_x) \quad \text{and} \quad \mathcal{P}(\alpha, \iota)(p_{(x',1)}) = (x', 1).\] Hence $x < x'$ implies $X_{p_{(x,\ell^{\circ}_x)}, 1} > X_{p_{(x',1)}, 1}$, so $X_{p_{(x,\ell^{\circ}_x)}, 1} - 1 \geq X_{p_{(x',1)}, 1}$, whence the result follows. \end{enumerate} \end{proof}
\begin{cor} \label{dist} Let $\alpha \vdash n$, and let $\nu \in \Omega_{\alpha}$. Then $\mathcal{A}(\alpha, \nu)$ is distinguished. \end{cor}
Corollary~\ref{dist}, in view of Proposition~\ref{kap}, suffices to prove Theorem~\ref{mata} --- thanks to the following theorem.
\begin{thm}[Achar \cite{Acharj}, Theorem 8.8] \label{achar} Let $\alpha \vdash n$, and let $\nu \in \Omega_{\alpha}$. Then $\mathsf{A}(\alpha, \nu)$ is the unique distinguished diagram of shape-class $\alpha$ in $\kappa^{-1}(\nu)$. \end{thm}
\begin{rem} Again (cf. Remark~\ref{weak}), Achar's definition of distinguished is weaker than ours, but this causes no difficulty: $p_1 \mathcal{A}(\alpha, \nu)$ is distinguished by our definition, hence also by Achar's definition, so $p_1\mathcal{A}(\alpha, \nu) = \mathsf{A}(\alpha, \nu)$. \end{rem}
This completes the proof of Theorem~\ref{mata}. \qed
\eject
\appendix
\section{Afterword}
Achar's algorithm $\mathsf{A}$ computes a map \[\Omega \rightarrow \bigcup_{\ell = 1}^n D_{\ell}.\] On input $(\alpha, \nu) \in \Omega$, the corresponding dominant weight $\gamma(\alpha, \nu) \in \Lambda^+$ is obtained from the output by taking $\eta E \mathsf{A}(\alpha, \nu)$.
Achar's algorithm for $\gamma^{-1}$, which we denote by $\mathsf{B}$, computes a map \[\Lambda^+ \rightarrow \bigcup_{\ell = 1}^n D_{\ell}.\] On input $\lambda \in \Lambda^+$, the corresponding pair $\gamma^{-1}(\lambda) \in \Omega$ is obtained from the output by taking $(\delta E^{-1} \mathsf{B}(\lambda), \kappa E^{-1} \mathsf{B}(\lambda))$, where the map $\delta$ sends a diagram to its shape-class.
Consider the following diagram.
\[\Omega \xleftarrow{(\delta p_1, \kappa p_1)} \bigcup_{\ell=1}^n D_{\ell} \times D_{\ell} \xrightarrow{\eta p_2} \Lambda^+\]
The algorithms $\mathsf{A}$ and $\mathsf{B}$ yield sections $(\mathsf{A}, E \mathsf{A})$ and $(E^{-1}\mathsf{B}, \mathsf{B})$ of the projections $(\delta p_1, \kappa p_1)$ and $\eta p_2$ onto $\Omega$ and $\Lambda^+$, respectively, for which \[\eta p_2 \circ (\mathsf{A}, E \mathsf{A}) = \gamma \quad \text{and} \quad (\delta p_1, \kappa p_1) \circ (E^{-1} \mathsf{B}, \mathsf{B}) = \gamma^{-1}.\] That the maps computed by $\mathsf{A}$ and $\mathsf{B}$ play symmetric roles suggests that the algorithms themselves should exhibit structural symmetry. Unfortunately, they do not.
We address this incongruity by introducing the algorithm $\mathcal{A}$, which computes the section $(\mathsf{A}, E \mathsf{A})$, yet has the same recursive structure as $\mathsf{B}$: Both $\mathcal{A}$ and $\mathsf{B}$ recur after determining the entries in the first column of their output diagram(s).\footnote{Achar \cite{Acharj} phrases the instructions for $\mathsf{B}$ in terms of iteration rather than recursion, but they amount to carrying out the same computations.} Thus, the weight-diagrams version of our algorithm achieves structural parity with $\mathsf{B}$; the integer-sequences version $\mathfrak{A}$ is a singly recursive simplification that sidesteps weight diagrams altogether.
Having established that our algorithm is correct, we are mindful that Einstein's admonition, ``Everything should be made as simple as possible, but not simpler,'' will have the last word. We believe the appeal of our weight-by-weight, column-by-column approach is underscored by its consonance with Achar's algorithm $\mathsf{B}$, which has stood the test of time. To demonstrate the complementarity between $\mathcal{A}$ and $\mathsf{B}$, we offer the following description of $\mathsf{B}$. More details can be found in Achar \cite{Acharj}, section 6.
\subsubsection*{The algorithm}
We define a recursive algorithm $\mathsf{B}$ that computes a map \[\mathbb{Z}^n_{\operatorname{dom}} \times \lbrace \pm 1 \rbrace \rightarrow \bigcup_{\ell = 1}^n D_{\ell}\] by filling in the entries in the first column of its output diagram and using recursion to fill in the entries in the remaining columns. Whenever we write $\mathsf{B}(\lambda)$, we refer to $\mathsf{B}(\lambda, -1)$.
The algorithm $\mathsf{B}$ is multiply recursive and begins by dividing a weakly decreasing integer sequence into \textit{clumps}. From each clump, it builds a diagram by first extracting a maximal-length \textit{majuscule} sequence to form the entries of the first column, and then calling itself on the clump's remains. To obtain the output, it concatenates the diagrams constructed from all the clumps.
\begin{df} A subsequence of a weakly decreasing integer sequence is \textit{clumped} if no two of its consecutive entries differ by more than $1$. A clumped subsequence is a \textit{clump} if it is not contained in a longer clumped subsequence. \end{df}
\begin{df} An integer sequence $\iota = [\iota_1, \ldots, \iota_{\ell}]$ is \textit{majuscule} if $\iota_i - \iota_{i+1} \geq 2$ for all $1 \leq i \leq \ell - 1$. \end{df}
We are ready to describe $\mathsf{B}$.
On input $(\lambda, \epsilon)$, the algorithm designates by $\mathsf{c}$ the number of clumps in $\lambda$. For all $1 \leq x \leq \mathsf{c}$, it designates by $n_x$ the number of entries in the $x^{\text{th}}$ clump.
Let $\lambda^{(x)}$ denote the $x^{\text{th}}$ clump, and $\mathcal{Y}^{(x)}$ the diagram to be built from $\lambda^{(x)}$. For all $1 \leq x \leq \mathsf{c}$, the algorithm obtains a majuscule sequence $\iota^{(x)}$ from $\lambda^{(x)}$ as follows: \begin{itemize} \item If $\epsilon = -1$, then $\iota^{(x)}$ is the maximal-length majuscule sequence contained in $\lambda^{(x)}$ that begins with $\lambda^{(x)}_1$; \item If $\epsilon = 1$, then $\iota^{(x)}$ is the maximal-length majuscule sequence contained in $\lambda^{(x)}$ that ends with $\lambda^{(x)}_{n_x}$. \end{itemize}
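To make the first-column construction concrete, here is a Python sketch (ours, for illustration only) of the clump decomposition and the $\epsilon$-dependent extraction of a maximal-length majuscule sequence. Sequences are plain lists; the greedy scan attains the maximal length because the input is weakly decreasing.

```python
def clumps(lam):
    """Split a weakly decreasing integer sequence into its clumps:
    maximal runs in which consecutive entries differ by at most 1."""
    out = [[lam[0]]]
    for a, b in zip(lam, lam[1:]):
        if a - b > 1:
            out.append([b])   # gap of more than 1 starts a new clump
        else:
            out[-1].append(b)
    return out

def extract_majuscule(clump, eps):
    """Greedily extract a maximal-length majuscule subsequence
    (consecutive entries differing by at least 2).  For eps = -1 it
    begins with the first entry; for eps = +1 it ends with the last.
    Returns (majuscule, remains), each in weakly decreasing order."""
    seq = clump if eps == -1 else list(reversed(clump))
    taken, rest = [seq[0]], []
    for a in seq[1:]:
        if abs(taken[-1] - a) >= 2:
            taken.append(a)
        else:
            rest.append(a)
    if eps == 1:
        taken.reverse()
        rest.reverse()
    return taken, rest
```

For instance, the clump $[8,7,6,6,5,4,3,3,2,2]$ with $\epsilon = -1$ yields the majuscule sequence $[8,6,4,2]$ with remains $[7,6,5,3,3,2]$, matching the worked example below.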
Then it sets \[\mathcal{Y}^{(x)}_{i, 1} := \iota^{(x)}_i\] for all $1 \leq i \leq \ell_x$, where $\ell_x$ is the length of $\iota^{(x)}$. This determines the entries in the first column of $\mathcal{Y}^{(x)}$.
If $\iota^{(x)} = \lambda^{(x)}$, the diagram $\mathcal{Y}^{(x)}$ is complete. Otherwise, the algorithm arranges the elements of the (multiset) difference $\lambda^{(x)} \setminus \iota^{(x)}$ in weakly decreasing order, leaving a weakly decreasing integer sequence $\bar{\lambda}^{(x)}$, and it sets \[Y^{(x)} := \mathsf{B}(\bar{\lambda}^{(x)}, -\epsilon).\]
It proceeds to attach $Y^{(x)}$ to the first column of $\mathcal{Y}^{(x)}$. For all $i'$ such that $Y^{(x)}$ has an $i'^{\text{th}}$ row, the algorithm finds the unique $i \in \lbrace 1, \ldots, \ell_x \rbrace$ such that $Y^{(x)}_{i',1} - \mathcal{Y}^{(x)}_{i,1} \in \lbrace 0, \epsilon \rbrace$. Then, for all $j'$ such that $Y^{(x)}$ has an entry in the $i'^{\text{th}}$ row and $j'^{\text{th}}$ column, it sets \[\mathcal{Y}^{(x)}_{i,j'+1} := Y^{(x)}_{i',j'}.\]
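The attachment step can likewise be sketched in Python (again our own illustration): since the first column is majuscule, its entries differ by at least $2$, so at most one row index $i$ can satisfy $Y^{(x)}_{i',1} - \mathcal{Y}^{(x)}_{i,1} \in \lbrace 0, \epsilon \rbrace$, and the matching is well defined.

```python
def attach(first_col, Y, eps):
    """Attach the rows of a recursively computed diagram Y (list of rows)
    to the right of the given first column, matching each row of Y to the
    unique row i of the first column with Y_row[0] - first_col[i] in {0, eps}."""
    rows = [[v] for v in first_col]
    for yrow in Y:
        i = next(i for i, v in enumerate(first_col) if yrow[0] - v in (0, eps))
        rows[i].extend(yrow)
    return rows
```

(The inputs in the test below are illustrative values chosen to exercise the matching rule.)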
Finally, it sets \[Y := \operatorname{Cat}\left (\mathcal{Y}^{(1)}, \ldots, \mathcal{Y}^{(\mathsf{c})} \right )\] and returns $Y$.
\begin{exam} In Example~\ref{acharexam}, we found \[\gamma([4,3,2,1,1], [15,14,9,4,4]) = [8,7,6,6,5,4,3,3,2,2,0].\]
Here we set $\lambda := [8,7,6,6,5,4,3,3,2,2,0]$ and compute $\mathsf{B}(\lambda)$.
We start by observing that $\lambda$ has $2$ clumps, \[\lambda^{(1)} = [8,7,6,6,5,4,3,3,2,2] \quad \text{and} \quad \lambda^{(2)} = [0].\]
The maximal-length majuscule sequence contained in $\lambda^{(1)}$ that begins with $\lambda^{(1)}_1 = 8$ is $\iota^{(1)} = [8,6,4,2]$. Hence \[\big[\mathcal{Y}^{(1)}_{1,1}, \mathcal{Y}^{(1)}_{2,1}, \mathcal{Y}^{(1)}_{3,1}, \mathcal{Y}^{(1)}_{4,1} \big] = [8,6,4,2].\]
Upon removing $\iota^{(1)}$ from $\lambda^{(1)}$, we see that \[\bar{\lambda}^{(1)} = [7,6,5,3,3,2].\]
As it happens, $\mathsf{B}(\bar{\lambda}^{(1)}, 1)$ looks as follows.
\begin{figure}
\caption{The diagram obtained from the remains of the first clump}
\end{figure}
We complete $\mathcal{Y}^{(1)}$ by attaching $Y^{(1)}$ to the first column of $\mathcal{Y}^{(1)}$.
\begin{figure}
\caption{The diagram obtained from the first clump}
\end{figure}
Since $\lambda^{(2)}$ consists of a single entry, it follows that $\iota^{(2)} = \lambda^{(2)}$, so $\mathcal{Y}^{(2)}$ consists of a single box.
\begin{figure}
\caption{The diagram obtained from the second clump}
\end{figure}
Concatenating $\mathcal{Y}^{(1)}$ and $\mathcal{Y}^{(2)}$, we arrive at $Y$.
\begin{figure}
\caption{The diagram obtained from the recursion}
\end{figure} \end{exam}
Comparing our result with that of Example~\ref{cont}, we see d\'ej\`a vu --- $\mathcal{A}(\alpha, \nu) = (E^{-1}Y, Y)$. This corroborates that the sections $\mathcal{A}$ and $(E^{-1} \mathsf{B}, \mathsf{B})$ send $(\alpha, \nu)$ and $\lambda$, respectively, to the same diagram pair whenever $(\alpha, \nu)$ and $\lambda$ correspond under the Lusztig--Vogan bijection, as in this case they do.
\eject
\section*{Acknowledgments}
This research is based on the author's doctoral dissertation at the Massachusetts Institute of Technology. His gratitude to his advisor David A. Vogan Jr. --- for suggesting the problem and offering invaluable help and encouragement along the way to a solution --- cannot be overstated. Thanks are also due to Roman Bezrukavnikov and George Lusztig for serving on the author's thesis committee and offering thoughtful remarks.
John W. Chun, the author's late grandfather, emigrated from Korea and fell in love with the author's grandmother and American literature. He earned a doctorate in English at the Ohio State University, becoming the first member of the author's family to be awarded a Ph.D. This article is dedicated to his memory, fondly and with reverence.
Throughout his graduate studies, the author was supported by the US National Science Foundation Graduate Research Fellowship Program.
\end{document} |
\begin{document}
\title{Complex and Hypercomplex Discrete Fourier Transforms
Based on Matrix Exponential Form of Euler's Formula} \begin{abstract} We show that the discrete complex, and numerous hypercomplex, Fourier transforms defined and used so far by a number of researchers can be unified into a single framework based on a matrix exponential version of Euler's formula $e^{j\theta}=\cos\theta+j\sin\theta$, and a matrix root of $-1$ isomorphic to the imaginary root $j$. The transforms thus defined can be computed using standard matrix multiplications and additions with no hypercomplex code, the complex or hypercomplex algebra being represented by the form of the matrix root of $-1$, so that the matrix multiplications are equivalent to multiplications in the appropriate algebra. We present examples from the complex, quaternion and biquaternion algebras, and from Clifford algebras \clifford{1}{1} and \clifford{2}{0}. The significance of this result is both in the theoretical unification, and also in the scope it affords for insight into the structure of the various transforms, since the formulation is such a simple generalization of the classic complex case. It also shows that hypercomplex discrete Fourier transforms may be {computed} using standard matrix arithmetic packages {without the need for a hypercomplex library}, which is of importance in providing a reference implementation for {verifying} implementations based on hypercomplex code. \end{abstract} \section{Introduction} The discrete Fourier transform is widely known and used in signal and image processing, and in many other fields where data is analyzed for frequency content \cite{Bracewell:2000}. 
The discrete Fourier transform in one dimension is classically formulated as: \begin{equation} \label{eqn:classicdft} \begin{aligned} F[u] &= S\sum_{m=0}^{M-1}f[m]\exp\left( -j 2\pi\frac{mu}{M}\right)\\ f[m] &= T\sum_{u=0}^{M-1}F[u]\exp\left(\phantom{-}j 2\pi\frac{mu}{M}\right) \end{aligned} \end{equation} where $j$ is the imaginary root of $-1$, $f[m]$ is real or complex valued with $M$ samples, $F[u]$ is complex valued, also with $M$ samples, and the two scale factors $S$ and $T$ must multiply to $1/M$. If the transforms are to be unitary then $S$ must equal $T$ also. In this paper we {discuss the formulation of the transform} using a matrix exponential form of Euler's formula in which the imaginary square root of $-1$ is replaced by {an isomorphic} matrix root. {This formulation works for the complex DFT, but more importantly, it works for hypercomplex DFTs (reviewed in §\,\ref{hypercomplex}).} The matrix exponential formulation is equivalent to all the known hypercomplex generalizations of the DFT known to the authors, based on quaternion, biquaternion or Clifford algebras, through a suitable choice of matrix root of $-1$, isomorphic to a root of $-1$ in the corresponding hypercomplex algebra. {All associative hypercomplex algebras (and indeed the complex algebra) are known to be isomorphic to matrix algebras over the reals or the complex numbers. For example, Ward \cite[\S\,2.8]{Ward:1997} discusses isomorphism between the quaternions and $4\times4$ real or $2\times2$ complex matrices so that quaternions can be replaced by matrices, the rules of matrix multiplication then being equivalent to the rules of quaternion multiplication \emph{by virtue of the pattern of the elements of the quaternion within the matrix}. 
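As a concrete check of this claim in the simplest (complex) case, the following Python sketch (ours; NumPy's FFT is used only as a reference, and the function name is our own) computes the DFT of \eqref{eqn:classicdft} entirely with real $2\times2$ matrix arithmetic, using a matrix root $J$ of $-1$ and the matrix form of Euler's formula $e^{J\theta} = \cos\theta\, I + \sin\theta\, J$.

```python
import numpy as np

I2 = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # J @ J == -I2, so J is a matrix root of -1

def matrix_dft(f):
    # DFT computed entirely with real 2x2 matrix arithmetic:
    # a + jb is represented as a*I2 + b*J, i.e. [[a, -b], [b, a]],
    # and Euler's formula becomes exp(J*t) = cos(t)*I2 + sin(t)*J.
    M = len(f)
    out = []
    for u in range(M):
        acc = np.zeros((2, 2))
        for m in range(M):
            t = -2.0 * np.pi * m * u / M
            acc = acc + f[m] * (np.cos(t) * I2 + np.sin(t) * J)
        out.append(acc[0, 0] + 1j * acc[1, 0])  # read a + jb back out
    return np.array(out)
```

With the scale factors $S = 1$, $T = 1/M$, this agrees with the usual unnormalized forward DFT convention.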
Also in the quaternion case, Ickes \cite{Ickes:1970} wrote an important paper showing how multiplication of quaternions could be accomplished using a matrix-vector or vector-matrix product that could accommodate reversal of the product ordering by a partial transposition within the matrix. This paper, more than any other, led us to the {observations} presented here.}
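For illustration, one standard choice of this pattern (conventions differ between authors; the layout below is ours, not necessarily Ward's or Ickes's) is the left-multiplication matrix of a quaternion, under which matrix products reproduce quaternion products.

```python
import numpy as np

def qmat(w, x, y, z):
    # Left-multiplication matrix of the quaternion w + x*i + y*j + z*k:
    # the product of two such matrices is the matrix of the quaternion
    # product, by virtue of the pattern of the elements.
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]], dtype=float)
```

The assertions below confirm $i^2 = -1$, $ij = k$ and $ji = -k$; the non-commutativity of the algebra is carried by the order of the matrix factors.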
{The fact that a hypercomplex DFT may be formulated using a matrix exponential may not be surprising. Nevertheless, to our knowledge, those who have worked on hypercomplex DFTs have not so far noted or exploited the observations made in this paper, which is surprising, given the ramifications discussed later.}
\section{{Hypercomplex transforms}} \label{hypercomplex} { The first published descriptions of hypercomplex transforms that we are aware of date from the late 1980s, using quaternions. In all three known earliest formulations, the transforms were defined for two-dimensional signals (that is, functions of two independent variables). The two earliest formulations \cite[§\,6.4.2]{Ernst:1987} and \cite[Eqn.~20]{Delsuc:1988} are almost equivalent (they differ only in the placement of the exponentials relative to the signal, and in the signs inside the exponentials)\footnote{{In comparing the various formulations of hypercomplex transforms, we have changed the symbols used by the original authors in order to make the comparisons clearer. We have also made trivial changes such as the choice of basis elements used in the exponentials.}}: \[ F(\omega_1,\omega_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
f(t_1, t_2)
e^{\i\omega_1 t_1}e^{\j\omega_2 t_2}\textrm{d}\xspace t_1\textrm{d}\xspace t_2 \] In a non-commutative algebra the ordering of exponentials within an integral is significant and, of course, two exponentials with different roots of $-1$ cannot be combined trivially. Therefore there are other possible transforms that can be defined by positioning the exponentials differently. The first transform in which the exponentials were placed on either side of the signal function was that of Ell \cite{Ell:thesis,Ell:1993}: \[ F(\omega_1,\omega_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
e^{\i\omega_1 t_1}
f(t_1, t_2)
e^{\j\omega_2 t_2}\textrm{d}\xspace t_1\textrm{d}\xspace t_2 \] } This style of transform was followed by Chernov, B\"{u}low and Sommer \cite{Chernov:1995,Bulow:1999,BulowSommer:2001} and others since. In 1998 the present authors described a single-sided hypercomplex transform for the first time \cite{SangwineEll:1998} exactly as in \eqref{eqn:classicdft} except that $f$ and $F$ were quaternion-valued and $j$ was replaced by a general quaternion root of $-1$. {Expressed in the same form as the transforms above, this would be: \[ F(\omega_1,\omega_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
e^{\mu(\omega_1 t_1 + \omega_2 t_2)}
f(t_1, t_2) \textrm{d}\xspace t_1\textrm{d}\xspace t_2 \] where $\mu$ is now an arbitrary root of $-1$, not necessarily a basis element of the algebra. The realisation that an arbitrary root of $-1$ could be used meant that it was possible to define a hypercomplex transform applicable to one dimension: \[ F(\omega) = \int_{-\infty}^{\infty}e^{\mu\omega t} f(t) \textrm{d}\xspace t \] } Pei \textit{et al.}\ have studied efficient implementation of quaternion FFTs and presented a transform based on commutative reduced biquaternions \cite{PeiDingChang:2001,PeiChangDing:2004}. Ebling and Scheuermann defined a Clifford Fourier transform \cite[\S\,5.2]{10.1109/TVCG.2005.54}, but their transform used the pseudoscalar (one of the basis elements of the algebra) as the square root of $-1$.
{A}part from the works by the present authors \cite{SangwineEll:1998,SangwineEll:2000b,10.1109/TIP.2006.884955}, the idea of using a root of $-1$ different from the basis elements of a hypercomplex algebra was not developed {further until 2006}, with the publication of a paper setting out the roots of $-1$ in biquaternions (a quaternion algebra with complex numbers as the components of the quaternions) \cite{10.1007/s00006-006-0005-8}. This work prepared the ground for a biquaternion Fourier transform \cite{10.1109/TSP.2007.910477} based on the present authors' one-sided quaternion transform \cite{SangwineEll:1998}. More recently, the idea of finding roots of $-1$ in other algebras has been advanced in Clifford algebras by Hitzer and Ab{\l}amowicz \cite{10.1007/s00006-010-0240-x} with the express intent of using them in Clifford Fourier transforms, perhaps generalising the ideas of Ebling and Scheuermann \cite{10.1109/TVCG.2005.54}. Finally, in this very brief summary of prior work we mention that the idea of applying hypercomplex algebras in signal processing has been studied by other authors apart from those referenced above. For an overview see \cite{AlfsmannGocklerSangwineEll:2007}.
{In what follows we concentrate on DFTs in one dimension for simplicity, returning to the two dimensional case in §\,\ref{twodimdft}.} \section{Matrix formulation of the discrete Fourier transform} \subsection{Matrix form of Euler's formula} \label{sec:mateuler} The transform presented in this paper depends on a generalization of Euler's formula: $\exp i\theta = \cos\theta + i\sin\theta$, in which the imaginary root of $-1$ is replaced by a matrix root, that is, a matrix that squares to give a negated identity matrix. Even among $2\times2$ matrices there is an infinite number of such roots \cite[p16]{Nahin:2006}. In the matrix generalization, the exponential must, of course, be a matrix exponential \cite[\S\,11.3]{Golub:1996}. {The following Lemma is not claimed to be original but we have not been able to locate any published source that we could cite here. Since the result is essential to Theorem~\ref{theorem:matrixdft}, we set it out in full.} \begin{lemma} \label{lemma:euler} Euler's formula may be generalized as follows: \begin{equation*} e^{\mat{J}\theta} = \mat{I}\cos\theta + \mat{J}\sin\theta \end{equation*} where \mat{I} is an identity matrix, and $\mat{J}^2 = -\mat{I}$. \end{lemma} \begin{proof} The result follows from the series expansions of the matrix exponential and the trigonometric functions. From the definition of the matrix exponential \cite[\S\,11.3]{Golub:1996}: \begin{align*} e^{\mat{J}\theta} &= \sum_{k=0}^{\infty} \frac{\mat{J}^k\theta^k}{k!} = \mat{J}^0 + \mat{J}\theta + \frac{\mat{J}^2\theta^2}{2!} +
\frac{\mat{J}^3\theta^3}{3!} + \cdots
\intertext{Noting that $\mat{J}^0=\mat{I}$ (see \cite[Index Laws]{CollinsDictMaths}), and separating the series into even and odd terms:}
&= \mat{I} - \frac{\mat{I}\theta^2}{2!}
+ \frac{\mat{I}\theta^4}{4!}
- \frac{\mat{I}\theta^6}{6!} + \cdots\\ &\phantom{=} + \mat{J}\theta - \frac{\mat{J}\theta^3}{3!}
+ \frac{\mat{J}\theta^5}{5!}
- \frac{\mat{J}\theta^7}{7!} + \cdots\\
&= \mat{I}\cos\theta + \mat{J}\sin\theta \end{align*} \qed \end{proof} Note that matrix versions of the trigonometric functions are not needed to compute the matrix exponential, because $\theta$ is a scalar. In fact, if the exponential is evaluated numerically using a matrix exponential algorithm or function, the trigonometric functions are not even explicitly evaluated \cite[\S\,11.3]{Golub:1996}. In practice, given that this is a special case of the matrix exponential (because $\mat{J}^2=-\mat{I}$), it is likely to be numerically preferable to evaluate the trigonometric functions and to sum scaled versions of \mat{I} and \mat{J}.
{Notice that the matrix $e^{\mat{J}\theta}$ has a structure with the cosine of $\theta$ on the diagonal and the (scaled) sine of $\theta$ where there are non-zero elements of \mat{J}.}
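Lemma~\ref{lemma:euler} and the structure just noted can be checked numerically. The following Python sketch (ours, with hypothetical helper names) compares the closed form $\mat{I}\cos\theta + \mat{J}\sin\theta$ against a truncated power series for $e^{\mat{J}\theta}$, using the $2\times2$ root $\left(\begin{smallmatrix}0&-1\\1&\phantom{-}0\end{smallmatrix}\right)$:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def euler(J, theta):
    # Lemma: e^{J theta} = I cos(theta) + J sin(theta), whenever J^2 = -I
    n = len(J)
    c, s = math.cos(theta), math.sin(theta)
    return [[c * (i == j) + s * J[i][j] for j in range(n)] for i in range(n)]

J = [[0.0, -1.0], [1.0, 0.0]]   # a 2x2 real root of -1
theta = 0.7
E = euler(J, theta)

# compare with a truncated power series for the matrix exponential
series = [[1.0, 0.0], [0.0, 1.0]]            # k = 0 term
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 30):
    term = [[theta / k * x for x in row] for row in mat_mul(term, J)]
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

assert all(abs(E[i][j] - series[i][j]) < 1e-9 for i in range(2) for j in range(2))
# structure: cos(theta) on the diagonal, sin(theta) where J is non-zero
assert abs(E[0][0] - math.cos(theta)) < 1e-12
assert abs(E[1][0] - math.sin(theta)) < 1e-12
```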
\subsection{Matrix form of DFT} \label{sec:matdft} The classic discrete Fourier transform of \eqref{eqn:classicdft} may be generalized to a matrix form in which the signals are vector-valued with $N$ components each and the root of $-1$ is replaced by an $N\times N$ matrix root $\mat{J}$ such that {$\mat{J}^2=-\mat{I}$}. In this form, subject to choosing the correct representation for the matrix root of $-1$, we may represent a wide variety of complex and hypercomplex Fourier transforms. \begin{theorem}\label{theorem:matrixdft} The following are a discrete Fourier transform pair\footnote{The colon notation used here will be familiar to users of \textsc{matlab}\textregistered\xspace (an explanation may be found in \cite[\S\,1.1.8]{Golub:1996}). Briefly, $\mat{f}[:,m]$ means the $m\textsuperscript{th}$ column of the matrix \mat{f}.}: \begin{align} \label{eqn:forward} \mat{F}[:,u] &= S\sum_{m=0}^{M-1}\exp\left( -\mat{J}\,2\pi\frac{mu}{M}\right)\mat{f}[:,m]\\ \label{eqn:inverse} \mat{f}[:,m] &= T\sum_{u=0}^{M-1}\exp\left(\phantom{-}\mat{J}\,2\pi\frac{mu}{M}\right)\mat{F}[:,u] \end{align} where \mat{J} is an $N\times N$ matrix root of $-1$, \mat{f} and \mat{F} are $N\times M$ matrices with one sample per column, and the two scale factors $S$ and $T$ multiply to give $1/M$. \end{theorem} \begin{proof} The proof is based on substitution of the forward transform \eqref{eqn:forward} into the inverse \eqref{eqn:inverse} followed by algebraic reduction to a result equal to the original signal $\mat{f}$. We start by substituting \eqref{eqn:forward} into \eqref{eqn:inverse}, replacing $m$ by $\mathcal{M}$ to keep the two indices distinct{, and at the same time replacing the two scale factors by their product $1/M$}: \begin{equation*} \mat{f}[:,m] = {\frac{1}{M}}\sum_{u=0}^{M-1}\left[e^{\mat{J}2\pi\frac{mu}{M}}
\sum_{\mathcal{M}=0}^{M-1}e^{-\mat{J}2\pi\frac{\mathcal{M} u}{M}}\mat{f}[:,\mathcal{M}]\right] \end{equation*} The exponential of the outer summation can be moved inside the inner, because it is constant with respect to the summation index $\mathcal{M}$: \begin{equation*} \mat{f}[:,m] = \frac{1}{M}\sum_{u=0}^{M-1}\sum_{\mathcal{M}=0}^{M-1}
e^{\mat{J}\,2\pi\frac{mu}{M}}
e^{-\mat{J}\,2\pi\frac{\mathcal{M} u}{M}}
\mat{f}[:,\mathcal{M}] \end{equation*} The two exponentials have the same root of $-1$, namely \mat{J}, and therefore they can be combined: \begin{equation*} \mat{f}[:,m] = \frac{1}{M}\sum_{u=0}^{M-1}\sum_{\mathcal{M}=0}^{M-1}
e^{\mat{J}\,2\pi\frac{(m-\mathcal{M})u}{M}}
\mat{f}[:,\mathcal{M}] \end{equation*} We now isolate out from the inner summation the case where $m=\mathcal{M}$. In this case the exponential reduces to an identity matrix, and we have: \begin{align*} \mat{f}[:,m] &= \frac{1}{M}\sum_{u=0}^{M-1}\mat{f}[:,m]\\
&+ \frac{1}{M}\sum_{u=0}^{M-1}\left[\left.
\sum_{\mathcal{M}=0}^{M-1}\right|_{\mathcal{M}\ne m}
e^{\mat{J}\,2\pi\frac{(m-\mathcal{M})u}{M}}
\mat{f}[:,\mathcal{M}]\right] \end{align*} The first line on the right sums to $\mat{f}[:,m]$, which is the original signal, as required. To complete the proof, we have to show that the second line on the right reduces to zero. Taking the second line alone, and changing the order of summation, we obtain: \[
\left.\sum_{\mathcal{M}=0}^{M-1}\right|_{\mathcal{M}\ne m} \left[\sum_{u=0}^{M-1}e^{\mat{J}\,2\pi\frac{(m-\mathcal{M})u}{M}}\right]\mat{f}[:,\mathcal{M}] \] Using Lemma~\ref{lemma:euler} we now write the matrix exponential as the sum of a cosine and sine term. \begin{equation*}
\left.\sum_{\mathcal{M}=0}^{M-1}\right|_{\mathcal{M}\ne m} \left[ \begin{aligned}
\mat{I}&\sum_{u=0}^{M-1}\cos\left(\!2\pi\frac{(m-\mathcal{M})u}{M}\right)\\ +\mat{J}&\sum_{u=0}^{M-1}\sin\left(\!2\pi\frac{(m-\mathcal{M})u}{M}\right) \end{aligned} \right]\mat{f}[:,\mathcal{M}] \end{equation*} Since both of the inner summations are sinusoids summed over an integral number of cycles, they vanish, and this completes the proof. \qed \end{proof} Notice that the requirement for $\mat{J}^2=-\mat{I}$ is the only constraint on \mat{J}. It is not necessary to constrain the elements of \mat{J} to be real. Note that $\mat{J}^2=-\mat{I}$ implies that $\mat{J}^{-1}=-\mat{J}$, and hence the inverse transform is obtained by negating or inverting the matrix root of $-1$ (the two operations are equivalent).
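Theorem~\ref{theorem:matrixdft} can be exercised directly. A minimal Python sketch (ours; function and variable names are hypothetical) computes \eqref{eqn:forward} and \eqref{eqn:inverse} column by column and confirms the round trip, here with the $2\times2$ complex-algebra root:

```python
import math

def mexp(J, theta):
    # e^{J theta} = I cos(theta) + J sin(theta) (Lemma euler), valid since J^2 = -I
    n = len(J)
    c, s = math.cos(theta), math.sin(theta)
    return [[c * (i == j) + s * J[i][j] for j in range(n)] for i in range(n)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def matrix_dft(f, J, sign, scale):
    # eqns (forward)/(inverse): f is a list of M samples, each an N-vector
    # (i.e. the columns of the N x M signal matrix)
    M, N = len(f), len(f[0])
    out = []
    for u in range(M):
        acc = [0.0] * N
        for m in range(M):
            col = mat_vec(mexp(J, sign * 2 * math.pi * m * u / M), f[m])
            acc = [a + x for a, x in zip(acc, col)]
        out.append([scale * a for a in acc])
    return out

J = [[0.0, -1.0], [1.0, 0.0]]                            # complex algebra as 2x2 matrices
f = [[1.0, 0.5], [2.0, -1.0], [0.0, 3.0], [-2.0, 1.0]]   # M = 4 samples, N = 2
F = matrix_dft(f, J, -1, 1.0)                            # forward, S = 1
g = matrix_dft(F, J, +1, 1.0 / len(f))                   # inverse, T = 1/M
assert all(abs(a - b) < 1e-9 for c1, c2 in zip(f, g) for a, b in zip(c1, c2))
```

Negating \mat{J} instead of the sign of the exponent gives the same inverse, as noted above.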
The matrix dimensions must be consistent according to the ordering inside the summation. As written above, for a complex transform represented in matrix form, \mat{f} and \mat{F} must have two rows and $M$ columns. If the exponential were to be placed on the right, \mat{f} and \mat{F} would have to be transposed, with two columns and $M$ rows.
It is important to realize that \eqref{eqn:forward} is entirely different from the classical matrix formulation of the discrete Fourier transform, as given for example by Golub and Van Loan \cite[\S\,4.6.4]{Golub:1996}. The classic DFT given in \eqref{eqn:classicdft} can be formulated as a matrix equation in which a large $M\times M$ {Vandermonde} matrix containing $M\textsuperscript{th}$ roots of unity multiplies the signal $f$ expressed as a vector of real or complex values. Instead, in \eqref{eqn:forward} each matrix exponential multiplies \emph{one column} of \mat{f}, corresponding to \emph{one sample} of the signal represented by \mat{f}, and the dimensions of the matrix exponential are set by the dimensionality of the algebra (2 for complex, 4 for quaternions \textit{etc.}). In \eqref{eqn:forward} it is the multiplication of the exponential and the signal samples, dependent on the algebra involved, that is expressed in matrix form, not the structure of the transform itself.
Readers who are already familiar with hypercomplex Fourier transforms should note that the ordering of the exponential within the summation \eqref{eqn:forward} is not related to the ordering within the hypercomplex formulation of the transform (which is significant because of non-commutative multiplication). The hypercomplex ordering can be accommodated within the framework presented here by changing the representation of the matrix root of $-1$, in a non-trivial way, shown for the quaternion case by Ickes \cite[Equation 10]{Ickes:1970} and called \emph{transmutation}. We have studied the generalisation of Ickes' transmutation to the case of Clifford algebras, and it appears {that there is a more general} operation. {In the cases we have studied this can be} described as negation of the off-diagonal elements of the lower-right sub-matrix, excluding the first row and column\footnote{This gives the same result as transmutation in the quaternion case.}. We believe a more general result is known in Clifford algebra but we have not been able to locate a clear statement that we could cite. We therefore leave this for later work, as a full generalisation to Clifford algebras of arbitrary dimension requires further work, and is more appropriate to a more mathematical paper.
\section{Examples in specific algebras} \label{sec:examples} In this section we present the information necessary for \eqref{eqn:forward} and \eqref{eqn:inverse} to be verified numerically. In each of the cases below, we present an example root of $-1$ and a matrix representation{\footnote{{The matrix representations of roots of $-1$ are not unique -- a transpose of the matrix, for example, is equally valid. The operations that leave the square of the matrix invariant probably correspond to fundamental operations in the hypercomplex algebra, for example negation, conjugation, reversion.}}}. We include in the Appendix a short \textsc{matlab}\textregistered\xspace function for computing the transform in \eqref{eqn:forward}. The same code will compute the inverse by negating \mat{J}. This may be used to verify the results in the next section and to compare the results obtained with the classic complex FFT. In order to verify the quaternion or biquaternion results, the reader will need to install the QTFM toolbox \cite{qtfm}{, or use some other specialised software for computing with quaternions.}
\subsection{Complex algebra} \label{sec:complex} The $2\times2$ real matrix $\left( \begin{smallmatrix} 0 & -1\\ 1 & \phantom{-}0 \end{smallmatrix} \right)$ can be easily verified by eye to be a square root of the negated identity matrix $\left(\begin{smallmatrix}-1 & \phantom{-} 0\\ \phantom{-} 0 &-1\end{smallmatrix}\right)$, and it is easy to verify numerically that Euler's formula gives the same numerical results in the classic complex case and in the matrix case for an arbitrary $\theta$. This root of $-1$ is based on the well-known isomorphism between a complex number $a+j b$ and the matrix representation $\left(\begin{smallmatrix}a & -b\\b & \phantom{-}a\end{smallmatrix}\right)$ \cite[Theorem 1.6]{Ward:1997}\footnote{We have used the transpose of Ward's representation for consistency with the quaternion and biquaternion representations in the two following sections.}. The structure of a matrix exponential $e^{\mat{J}\theta}$ using the above matrix for \mat{J} is $\left( \begin{smallmatrix} C & -S\\ S & \phantom{-}C \end{smallmatrix} \right)$ where $C=\cos\theta$ and $S=\sin\theta$.
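The isomorphism just described can be verified in a few lines. This Python fragment (ours) checks that the matrix map preserves multiplication and that $j$ itself maps to the root of $-1$ used above:

```python
def to_mat(z):
    # a + jb  <->  [[a, -b], [b, a]]  (transpose of Ward's representation)
    return [[z.real, -z.imag], [z.imag, z.real]]

def mat_mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
# multiplication is preserved: the map is an algebra isomorphism
assert to_mat(z * w) == mat_mul2(to_mat(z), to_mat(w))
# j maps to the 2x2 root of -1 given in the text
assert to_mat(1j) == [[0.0, -1.0], [1.0, 0.0]]
```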
\subsection{Quaternion algebra} \label{sec:quaternion} The quaternion roots of $-1$ were discovered by Hamilton \cite[pp\,203, 209]{Hamiltonpapers:V3:7}, and consist of all unit pure quaternions, that is quaternions of the form $x\i+y\j+z\k$ subject to the constraint $x^2+y^2+z^2=1$. A simple example is the quaternion $\boldsymbol\mu=(\i+\j+\k)/\sqrt{3}$, which can be verified by hand to be a square root of $-1$ using the usual rules for multiplying the quaternion basis elements ($\i^2=\j^2=\k^2=\i\j\k=-1$). Using the isomorphism with $4\times4$ matrices given by Ward \cite[\S\,2.8]{Ward:1997}, between the quaternion $w+x\i+y\j+z\k$ and the matrix: \begin{align*} &
\begin{pmatrix}
w & -x & -y & -z\\
x & \phantom{-}w & -z & \phantom{-}y\\
y & \phantom{-}z & \phantom{-}w & -x\\
z & -y & \phantom{-}x & \phantom{-}w
\end{pmatrix} \intertext{we have the following matrix representation:} \boldsymbol\mu = \frac{1}{\sqrt{3}} &
\begin{pmatrix}
0 & -1 & -1 & -1\\
1 & \phantom{-}0 & -1 & \phantom{-}1\\
1 & \phantom{-}1 & \phantom{-}0 & -1\\
1 & -1 & \phantom{-}1 & \phantom{-}0
\end{pmatrix} \end{align*} Notice the structure that is apparent in this matrix: the $2\times2$ blocks on the leading diagonal {at the top left and bottom right} can be recognised as roots of $-1$ in the complex algebra as shown in \S\,\ref{sec:complex}. \begin{proposition} \label{prop:quatroot} Any matrix of the form: \[ \begin{pmatrix}
0 & -x & -y & -z\\
x & \phantom{-}0 & -z & \phantom{-}y\\
y & \phantom{-}z & \phantom{-}0 & -x\\
z & -y & \phantom{-}x & \phantom{-}0 \end{pmatrix} \] with $x^2+y^2+z^2=1$ is a square root of the negated $4\times4$ identity matrix. Thus the matrix representations of the quaternion roots of $-1$ are all roots of the negated $4\times4$ identity matrix. \end{proposition} \begin{proof} The matrix is anti-symmetric, and the inner product of the $i\textsuperscript{th}$ row and $i\textsuperscript{th}$ column is $-x^2-y^2-z^2$, which is $-1$ because of the stated constraint. Therefore the diagonal elements of the square of the matrix are $-1$. Note that the rows of the matrix have one or three negative values, whereas the columns have zero or two. The product of the $i\textsuperscript{th}$ row with the $j\textsuperscript{th}$ column, $i\ne j$, is the sum of two values of opposite sign and equal magnitude (the other two terms vanish against the zero diagonal entries). Therefore all off-diagonal elements of the square of the matrix are zero. \qed \end{proof} The structure of a matrix exponential $e^{\mat{J}\theta}$ using a matrix as in Proposition \ref{prop:quatroot} for \mat{J} is: \[ \begin{pmatrix} \phantom{x}C & -xS & -yS & -zS\\
xS & \phantom{-x}C & -zS & \phantom{-}yS\\
yS & \phantom{-}zS & \phantom{-x}C & -xS\\
zS & -yS & \phantom{-}xS & \phantom{-x}C \end{pmatrix} \] where, as before, $C=\cos\theta$ and $S=\sin\theta$. \subsection{Biquaternion algebra} The biquaternion algebra \cite[Chapter 3]{Ward:1997} (quaternions with complex elements) can be handled exactly as in the previous section, except that the $4\times4$ matrix representing the root of $-1$ must be complex (and the signal matrix must have {four} complex elements {per column}). The set of square roots of $-1$ in the biquaternion algebra is given in \cite{10.1007/s00006-006-0005-8}. A simple example is $\i+\j+\k+\I(\j-\k)$ (where \I denotes the classical complex root of $-1$; that is, the biquaternion has real part $\i+\j+\k$ and imaginary part $\j-\k$). Again, this can be verified by hand to be a root of $-1$, and its matrix representation is: \[ \begin{pmatrix} 0 & -1 & -1 - \I & -1 + \I\\ 1 & \phantom{-}0 & -1 + \I & \phantom{-}1 + \I\\ 1 +\I & \phantom{-}1 -\I & 0 & -1\\ 1 -\I & -1 -\I & 1 & \phantom{-}0 \end{pmatrix} \] Again, sub-blocks of the matrix have recognizable structure. The {upper left and lower right} diagonal $2\times2$ blocks are roots of $-1$, while the {lower left and upper right} off-diagonal {$2\times2$} blocks are nilpotent -- that is, their squares vanish.
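Both the quaternion and biquaternion examples can be checked mechanically. The sketch below (ours) builds Ward's $4\times4$ representation, with real entries for the quaternion case and complex entries for the biquaternion case, and confirms that each example squares to the negated identity matrix:

```python
import math

def ward_matrix(w, x, y, z):
    # Ward's 4x4 representation of the (bi)quaternion w + x*i + y*j + z*k;
    # the entries may be real (quaternion) or complex (biquaternion)
    return [[w, -x, -y, -z],
            [x,  w, -z,  y],
            [y,  z,  w, -x],
            [z, -y,  x,  w]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def squares_to_minus_identity(J, tol=1e-12):
    J2 = mat_mul(J, J)
    n = len(J)
    return all(abs(J2[i][j] - (-1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

s = 1 / math.sqrt(3)
assert squares_to_minus_identity(ward_matrix(0.0, s, s, s))          # (i+j+k)/sqrt(3)
assert squares_to_minus_identity(ward_matrix(0, 1, 1 + 1j, 1 - 1j))  # i+j+k+I(j-k)
```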
\subsection{Clifford algebras} Recent work by Hitzer and Ab{\l}amowicz \cite{10.1007/s00006-010-0240-x} has explored the roots of $-1$ in Clifford algebras \clifford{p}{q} up to those with $p+q = 4$, which are 16-dimensional algebras\footnote{$p$ and $q$ are non-negative integers such that $p+q=n$ and $n\ge1$. The dimension of the algebra (strictly the dimension of the space spanned by the basis elements of the algebra) is $2^n$.}. The derivations of the roots of $-1$ for the 16-dimensional algebras are long and difficult. Therefore, for the moment, we confine the discussion here to lower-order algebras, noting that, since all Clifford algebras are isomorphic to a matrix algebra, we can be assured that if roots of $-1$ exist, they must have a matrix representation. Using {the results obtained by Hitzer and Ab{\l}amowicz}, and by finding from first principles the layout of a real matrix isomorphic to a Clifford multivector in a given algebra, it has been possible to verify that the transform formulation presented in this paper is applicable to at least the lower order Clifford algebras. The quaternions and biquaternions are isomorphic to the Clifford algebras \clifford{0}{2} and \clifford{3}{0} respectively so this is not surprising. Nevertheless, it is an important finding, because until now quaternion and Clifford Fourier transforms were defined in different ways, using different terminology, and it was difficult to make comparisons between the two. Now, with the matrix exponential formulation, it is possible to handle quaternion and Clifford transforms (and indeed transforms in different Clifford algebras) within the same algebraic and/or numerical framework.
We present examples here from two of the 4-dimensional Clifford algebras, namely \clifford{1}{1} and \clifford{2}{0}. These results have been verified against the \textsc{CLICAL}\xspace package \cite{CLICAL-User-manual} to ensure that the multiplication rules have been followed correctly and that the roots of $-1$ found by Hitzer and Ab{\l}amowicz are correct.
Following the notation in \cite{10.1007/s00006-010-0240-x}, we write a multivector in \clifford{1}{1} as $\alpha + b_1 e_1 + b_2 e_2 + \beta e_{12}$, where $e_1^2=+1, e_2^2=-1, e_{12}^2=+1$ and $e_1 e_2 = e_{12}$. A possible real matrix representation is as follows: \[ \begin{pmatrix} \alpha & \phantom{-}b_1 & -b_2 & \beta\\ b_1 & \phantom{-}\alpha & -\beta & b_2\\ b_2 & -\beta & \phantom{-}\alpha & b_1\\ \beta & -b_2 & \phantom{-}b_1 & \alpha \end{pmatrix} \] In this algebra, the constraints on the coefficients of a multivector for it to be a root of $-1$ are as follows: $\alpha=0$ and $b_1^2-b_2^2+\beta^2=-1$ \cite[Table 1]{10.1007/s00006-010-0240-x}\footnote{We have re-arranged the constraint compared to \cite[Table 1]{10.1007/s00006-010-0240-x} to make the comparison with the quaternion case easier: we see that the signs of the squares of the coefficients match the signs of the squared basis elements.}. Choosing $b_1=\beta=1$ gives $b_2=\sqrt{3}$ and thus $e_1 + \sqrt{3}e_2 + e_{12}$ which can be verified algebraically or in \textsc{CLICAL}\xspace to be a root of $-1$. The corresponding matrix is then: \[ \begin{pmatrix} 0 & \phantom{-}1 & -\sqrt{3} & 1\\ 1 & \phantom{-}0 & -1 & \sqrt{3}\\ \sqrt{3} & -1 & \phantom{-}0 & 1\\ 1 & -\sqrt{3} & \phantom{-}1 & 0 \end{pmatrix} \] Following the same notation in algebra \clifford{2}{0}, in which $e_1^2=e_2^2=+1, e_{12}^2=-1$, a possible matrix representation is: \[ \begin{pmatrix} \alpha & \phantom{-}b_1 & b_2 & -\beta\\ b_1 & \phantom{-}\alpha & \beta & -b_2\\ b_2 & -\beta & \alpha & \phantom{-}b_1\\ \beta & -b_2 & b_1 & \phantom{-}\alpha \end{pmatrix} \] The constraints on the coefficients are $\alpha=0$ and $b_1^2+b_2^2-\beta^2=-1$, and choosing $b_1=b_2=1$ gives $\beta=\sqrt{3}$ and thus $e_1 + e_2 + \sqrt{3}e_{12}$ is a root of $-1$. 
The corresponding matrix is then: \[ \begin{pmatrix} 0 & \phantom{-}1 & 1 & -\sqrt{3}\\ 1 & \phantom{-}0 & \sqrt{3} & -1\\ 1 & -\sqrt{3} & 0 & \phantom{-}1\\ \sqrt{3} & -1 & 1 & \phantom{-}0 \end{pmatrix} \] Notice that in both of these algebras the matrix representation of a root of $-1$ is very similar to that given for the quaternion case in Proposition~\ref{prop:quatroot}, with zeros on the leading diagonal, an odd number of negative values in each row and an even number in each column. It is therefore simple to see that minor modifications to Proposition~\ref{prop:quatroot} would cover these algebras and the matrices presented above.
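The two Clifford roots above can likewise be confirmed numerically; a short Python check (ours, with the matrices transcribed from the text):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

r3 = math.sqrt(3)
# e1 + sqrt(3) e2 + e12 in Cl(1,1)
J_cl11 = [[0,    1, -r3,  1],
          [1,    0,  -1, r3],
          [r3,  -1,   0,  1],
          [1,  -r3,   1,  0]]
# e1 + e2 + sqrt(3) e12 in Cl(2,0)
J_cl20 = [[0,    1,   1, -r3],
          [1,    0,  r3,  -1],
          [1,  -r3,   0,   1],
          [r3,  -1,   1,   0]]
for J in (J_cl11, J_cl20):
    J2 = mat_mul(J, J)
    assert all(abs(J2[i][j] - (-1.0 if i == j else 0.0)) < 1e-12
               for i in range(4) for j in range(4))
```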
\section{An example not based on a specific\\algebra} \label{sec:mystery} We show here, using an arbitrary $2\times2$ matrix root of $-1$, that it is possible to define a Fourier transf{or}m {without a specific} algebra. Let an arbitrary real matrix be given as $J = \left(\begin{smallmatrix}a & b\\c & d\end{smallmatrix}\right)$; then by brute-force expansion of $J^2=-{I}$ we find that the original four equations reduce to but two independent equations. Picking $(a,b)$ and solving for the remaining coefficients, we find that any matrix of the form: \begin{equation*} \begin{pmatrix}a & \phantom{-} b\\ -(1+a^2)/b &-a\end{pmatrix} \end{equation*} with finite $a$ and $b$, and $b\ne0$, is a root of $-1$. Choosing instead $(a,c)$ we get the transpose form: \begin{equation*} \begin{pmatrix}a & -(1+a^2)/c \\ c &-a\end{pmatrix} \end{equation*} where $c\ne0$. Choosing the cross-diagonal terms $(b,c)$ yields: \begin{equation} \label{eqn:ellipseroot} { \begin{pmatrix} \pm\kappa & b\\
c & \mp\kappa \end{pmatrix} } \end{equation} {where $\kappa=\sqrt{ -1 - bc}$ and} $bc\leq-1$.
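Each of the three parameterised families can be verified by direct multiplication. In this Python sketch (ours), the chosen numerical values of $a$, $b$ and $c$ are arbitrary, subject to the stated constraints:

```python
import math

def sq(J):
    # square of a 2x2 matrix
    return [[sum(J[i][k] * J[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_root_of_minus_one(J, tol=1e-12):
    J2 = sq(J)
    return all(abs(J2[i][j] - (-1.0 if i == j else 0.0)) < tol
               for i in range(2) for j in range(2))

a, b = 0.5, 2.0
assert is_root_of_minus_one([[a, b], [-(1 + a * a) / b, -a]])    # (a, b) family
a, c = 0.5, -2.0
assert is_root_of_minus_one([[a, -(1 + a * a) / c], [c, -a]])    # (a, c) family
b, c = 3.0, -1.0                                                 # requires b*c <= -1
kappa = math.sqrt(-1 - b * c)
assert is_root_of_minus_one([[kappa, b], [c, -kappa]])           # (b, c) family
```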
In all cases the resulting matrix has eigenvalues of $\lambda = \pm i$. (This is a direct consequence of the fact that this matrix squares to $-1$.) Each form, however, has different eigenvectors. The standard matrix {representation} for the complex operator {$i$ is} {$\left(\begin{smallmatrix}0 & -1\\1 &\phantom{-}0\end{smallmatrix}\right)$} {with} eigenvectors $v = [1,\pm\,i]$. In the matrix with $(a,b)$ parameters the eigenvectors are $v = [1,-b/(a\,\pm\,i)]$ whereas the cross-diagonal form with $(b,c)$ parameters has eigenvectors {$v = [1,(\kappa\,\pm\,i)/c]$}.
These forms suggest the interesting question: which algebra, if any, applies here\footnote{It is possible that there is no corresponding `algebra' in the usual sense. Note that there are only two Clifford algebras of dimension 2, one of which is the algebra of complex numbers. The other has no multivector roots of $-1$ \cite[\S\,4]{10.1007/s00006-010-0240-x}, and therefore the roots of $-1$ given above cannot be roots of $-1$ in any Clifford algebra.}; and how can the Fourier coefficients (the `spectrum') be interpreted? {We are not able to answer the first question in this paper. The `interpretation' of the spectrum is relatively simple. Consider a spectrum $\mat{F}$ containing only one non-zero column at index $u_0$ with value $\left(\begin{smallmatrix}x\\y\end{smallmatrix}\right)$ and invert this spectrum using \eqref{eqn:inverse}. Ignoring the scale factor, the result will be the signal: \[ \mat{f}[:,m] = \exp\left(\mat{J}\,2\pi\frac{mu_0}{M}\right)\begin{pmatrix}x\\y\end{pmatrix} \] The form of the matrix exponential depends on \mat{J}. In the classic complex case, as given in \S\,\ref{sec:complex}, the matrix exponential takes the form: \[ \begin{pmatrix} \cos\theta & - \sin\theta\\ \sin\theta & \phantom{-}\cos\theta \end{pmatrix} \] where $\theta=2\pi\frac{mu_0}{M}$. }
\cos\theta + \kappa\sin\theta & b\sin\theta\\
c\sin\theta & \cos\theta - \kappa\sin\theta \end{array} \right) \] Instead of mapping a real unit vector $\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$ to a {point on a circle}, this matrix maps to an ellipse. Thus, we see that a transform based on a matrix such as that in \eqref{eqn:ellipseroot} has basis functions that are projections of an elliptical, rather than a circular path in the complex plane, as in the classical complex Fourier transform. We refer the reader to a discussion on a similar point for the one-sided quaternion discrete Fourier transform in our own 2007 paper \cite[\S\,VI]{10.1109/TIP.2006.884955}, in which we showed that the quaternion coefficients of the Fourier spectrum also represent elliptical paths through the space of the signal samples.
{It is possible that the matrices discussed in this section could be transformed by similarity transformations into matrices representing elements of a Clifford algebra\footnote{{We are grateful to Dr Eckhard Hitzer for pointing this out, in September 2010.}}. Note that in the quaternion case, any root of -1 lies on the unit sphere in 3-space, and can therefore be transformed into another root of -1 by a rotation. It is possible that the same applies in other algebras, the transformation needed being dependent on the geometry.} Clearly there are interesting issues to be studied here, and further work to be done.
\section{Non-existence of transforms in algebras with odd dimension} \label{sec:nonexist} In this section we show that there are no real matrix roots of $-1$ with odd dimension. This is not unexpected, since the existence of such roots would {suggest} the existence of a hypercomplex algebra of odd dimension. The significance of this result is to show that there is no discrete Fourier transform as formulated in Theorem \ref{theorem:matrixdft} for an algebra of dimension $3$, which is of importance for the processing of signals representing physical 3-space quantities, or the values of colour image pixels. We thus conclude that the choice of a quaternion Fourier transform or a Clifford Fourier transform of dimension $4$ is inevitable in these cases. This is not an unexpected conclusion; nevertheless, in the experience of the authors, some researchers in signal and image processing hesitate to accept the idea of using four dimensions to handle three-dimensional samples or pixels. (This is despite the rather obvious parallel of needing two dimensions -- complex numbers -- to represent the Fourier coefficients of a real-valued signal or image.) \begin{theorem} There are no $N\times N$ matrices $\mat{J}$ with real elements such that $\mat{J}^2=-\mat{I}$ for odd values of $N$. \end{theorem} \begin{proof}
The determinant of a diagonal matrix is the product of its diagonal entries. Therefore $|-\mat{I}|=-1$ for odd $N$. Since the product of two determinants is the determinant of the product,
$|\mat{J}^2|=-1$ requires $|\mat{J}|^2=-1$, which cannot be satisfied if $\mat{J}$ has real elements. \qed \end{proof}
\section{Extension to two-sided DFTs} \label{twodimdft} There have been various definitions of two-sided hypercomplex Fourier transforms and DFTs. We consider here only one case to demonstrate that the approach presented in this paper is applicable to two-sided as well as one-sided transforms: this is a matrix exponential Fourier transform based on Ell's original two-sided two-dimensional quaternion transform \cite[Theorem 4.1]{Ell:thesis}, \cite{Ell:1993}, \cite{10.1049/el:19961331}. A more general formulation is: \begin{equation} \mat{F}[u, v] = S\!\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\! e^{-\mat{J} 2\pi\frac{mu}{M}}\mat{f}[m,n]e^{-\mat{K} 2\pi\frac{nv}{N}} \label{eqn:twodforward} \end{equation} \begin{equation} \mat{f}[m, n] = T\!\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}\! e^{+\mat{J} 2\pi\frac{mu}{M}}\mat{F}[u,v]e^{+\mat{K} 2\pi\frac{nv}{N}} \label{eqn:twodinverse} \end{equation} in which \emph{each element} of the two-dimensional arrays \mat{F} and \mat{f} is a {square} matrix representing a complex or hypercomplex number using a matrix isomorphism for the algebra in use, for example the representations already given in \S\,\ref{sec:quaternion} in the case of the quaternion algebra; the two scale factors multiply to give $1/MN$, and \mat{J} and \mat{K} are matrix representations of two \emph{arbitrary} roots of $-1$ in the chosen algebra. (In Ell's original formulation, the roots of $-1$ were $\j$ and $\k$, that is two of the \emph{orthogonal} quaternion basis elements. The following theorem shows that there is no requirement for the two roots to be orthogonal in order for the transform to invert.)
\begin{theorem} \label{theorem:twodmatrixdft} The transforms in \eqref{eqn:twodforward} and \eqref{eqn:twodinverse} are a two-dimensional discrete Fourier transform pair, {provided that $\mat{J}^2 = \mat{K}^2 = -\mat{I}$.} \end{theorem} \begin{proof} The proof follows the same scheme as the proof of Theorem~\ref{theorem:matrixdft}, but we adopt a more concise presentation to fit the available column space. We start by substituting \eqref{eqn:twodforward} into \eqref{eqn:twodinverse}, replacing $m$ and $n$ by $\mathcal{M}$ and $\mathcal{N}$ respectively to keep the indices distinct, and at the same time replacing the two scale factors by their product $1/MN$: \begin{align*} \mat{f}[m, n] &= {\frac{1}{M N}}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}
e^{\mat{J} 2\pi\frac{mu}{M}}\\
&\times\left[\sum_{\mathcal{M}=0}^{M-1}\sum_{\mathcal{N}=0}^{N-1}
e^{-\mat{J} 2\pi\frac{\mathcal{M} u}{M}}
\mat{f}[\mathcal{M},\mathcal{N}]
e^{-\mat{K} 2\pi\frac{\mathcal{N} v}{N}}\right]\\
&\times
e^{\mat{K} 2\pi\frac{nv}{N}} \end{align*} The scale factors can be moved outside both summations, and replaced with their product $1/M N$; and the exponentials of the outer summations can be moved inside the inner, because they are constant with respect to the summation indices $\mathcal{M}$ and $\mathcal{N}$. At the same time, adjacent exponentials with the same root of $-1$ can be merged. With these changes{, and omitting the scale factor to save space}, the right-hand side of the equation becomes: \begin{equation*} \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} \sum_{\mathcal{M}=0}^{M-1}\sum_{\mathcal{N}=0}^{N-1} e^{\mat{J} 2\pi\frac{(m-\mathcal{M})u}{M}} \mat{f}[\mathcal{M},\mathcal{N}] e^{\mat{K} 2\pi\frac{(n-\mathcal{N})v}{N}} \end{equation*} We now isolate from the inner pair of summations the case where $\mathcal{M}=m$ and $\mathcal{N}=n$. In this case the exponentials reduce to identity matrices, and, restoring the scale factor, we have: \begin{equation*} \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}\mat{f}[m,n] \end{equation*} This sums to $\mat{f}[m,n]$, which is the original two-dimensional signal, as required. To complete the proof we have to show that the rest of the summation, excluding the case $\mathcal{M}=m$ and $\mathcal{N}=n$, reduces to zero.
Dropping the scale factor, and changing the order of summation, we have the following inner double summation: \begin{equation*} \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} e^{\mat{J} 2\pi\frac{(m-\mathcal{M})u}{M}} \mat{f}[\mathcal{M},\mathcal{N}] e^{\mat{K} 2\pi\frac{(n-\mathcal{N})v}{N}} \end{equation*} Noting that the first exponential and \mat{f} are independent of the second summation index $v$, we can move them outside the second summation (we could do similarly with the exponential on the right and the first summation): \begin{equation*} \sum_{u=0}^{M-1} e^{\mat{J} 2\pi\frac{(m-\mathcal{M})u}{M}} \mat{f}[\mathcal{M},\mathcal{N}] \sum_{v=0}^{N-1} e^{\mat{K} 2\pi\frac{(n-\mathcal{N})v}{N}} \end{equation*} and, as in Theorem \ref{theorem:matrixdft}, the summation on the right is over an integral number of cycles of cosine and sine, and therefore vanishes. \qed \end{proof} Notice that it was not necessary to assume that \mat{J} and \mat{K} were orthogonal: it is sufficient that each be a root of $-1$. This has been verified numerically using the two-dimensional code given in the Appendix.
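The inversion result can also be checked numerically without a hypercomplex library. The following Python sketch (our own illustration, not part of the paper's code; all helper names are ours) uses two $2\times2$ real matrix roots of $-1$: the rotation generator and a deliberately \emph{non-orthogonal} root, exploiting the closed form $e^{\mat{J}\theta}=\mat{I}\cos\theta+\mat{J}\sin\theta$, which is valid whenever $\mat{J}^2=-\mat{I}$:

```python
import math, random

I2 = [[1.0, 0.0], [0.0, 1.0]]
J = [[0.0, -1.0], [1.0, 0.0]]   # rotation generator: J*J = -I
K = [[1.0, -2.0], [1.0, -1.0]]  # non-orthogonal root of -1 (trace 0, det 1), K*K = -I

def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scl(A, s):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

def expm_root(R, theta):
    # exp(R*theta) = I cos(theta) + R sin(theta), valid whenever R*R = -I
    return add(scl(I2, math.cos(theta)), scl(R, math.sin(theta)))

def dft2(f, sign, scale):
    # two-sided transform: F[u][v] = scale * sum_{m,n} e^{sign J 2pi mu/M} f[m][n] e^{sign K 2pi nv/N}
    M, N = len(f), len(f[0])
    F = [[None] * N for _ in range(M)]
    for u in range(M):
        for v in range(N):
            acc = [[0.0, 0.0], [0.0, 0.0]]
            for m in range(M):
                for n in range(N):
                    L = expm_root(J, sign * 2 * math.pi * m * u / M)
                    Rv = expm_root(K, sign * 2 * math.pi * n * v / N)
                    acc = add(acc, mul(mul(L, f[m][n]), Rv))
            F[u][v] = scl(acc, scale)
    return F

random.seed(1)
M, N = 4, 3
f = [[[[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
      for _ in range(N)] for _ in range(M)]
g = dft2(dft2(f, -1, 1.0), +1, 1.0 / (M * N))   # forward (S = 1), then inverse (T = 1/MN)
err = max(abs(g[m][n][i][j] - f[m][n][i][j])
          for m in range(M) for n in range(N) for i in range(2) for j in range(2))
assert err < 1e-8
```

The round trip recovers the original array to floating-point accuracy even though the two roots are not orthogonal, in line with the theorem.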
\section{Discussion} We have shown that any discrete Fourier transform in an algebra that has a matrix representation can be formulated in the way shown here. This includes the complex, quaternion, biquaternion, and Clifford algebras (although we have demonstrated only certain cases of Clifford algebras, we believe the result holds in general). {This observation provides a theoretical unification of diverse hypercomplex DFTs.}
Several immediate possibilities for further work, as well as ramifications, now suggest themselves. Firstly, the study of roots of $-1$ is accessible from the matrix representation as well as direct representation in whatever algebra is employed for the transform. All of the results obtained so far in hypercomplex algebras, and known to the authors \cite[pp\,203, 209]{Hamiltonpapers:V3:7}, \cite{10.1007/s00006-006-0005-8,10.1007/s00006-010-0240-x}, were achieved by working \emph{in the algebra} in question, that is by algebraic manipulation of quaternion, biquaternion or Clifford multivector values. An alternative approach would be to work in the equivalent matrix algebra, but this seems difficult even for the lower order cases. Nevertheless, it merits further study because of the possibility of finding a systematic approach that would cover many algebras in one framework. Following the reasoning in \S\,\ref{sec:mystery}, it is possible to define matrix roots of $-1$ that appear not to be isomorphic to any Clifford or quaternion algebra, and these merit further study.
Secondly, the matrix formulation presented here lends itself to analysis of the structure of the transform, including possible factorizations for fast algorithms, as well as parallel or vectorized implementations for single-instruction, multiple-data (\textsc{simd}) processors, and of course, factorizations into multiple complex FFTs as has been done for quaternion FFTs (see for example \cite{SangwineEll:2000b}). In the case of matrix roots of $-1$ which do not correspond to Clifford or quaternion algebras, analysis of the structure of the transform may give insight into possible applications of transforms based on such roots.
Finally, at a practical level, hypercomplex transforms implemented {directly} in hypercomplex arithmetic are likely to be much faster than any implementation based on matrices, but the simplicity of the matrix exponential formulation {discussed in this paper}, and the fact that it can be computed using standard real or complex matrix arithmetic, {\emph{without using a hypercomplex library},} means that the matrix exponential formulation provides a very simple reference implementation which can be used for verification of {the correctness of} hypercomplex code. {This is an important point, because verification of the correctness of hypercomplex FFT code is otherwise non-trivial. Verification of inversion is simple enough, but establishing that the spectral coefficients have the correct values is much less so. }
\appendix \section{\textsc{matlab}\textregistered\xspace code} We include here two short \textsc{matlab}\textregistered\xspace functions for computing the forward transforms given in \eqref{eqn:forward} and \eqref{eqn:twodforward}, \emph{apart from the scale factors}. The inverses can be computed simply by interchanging the input and output and negating the matrix roots of $-1$. Neither function is coded for speed; on the contrary, the coding is intended to be simple and easily verified against the equations. \begin{alltt} {\color{blue}function} F = matdft(f, J) M = size(f, 2); F = zeros(size(f)); {\color{blue}for} m = 0:M-1
{\color{blue}for} u = 0:M-1
F(:, u + 1) = F(:, u + 1) {\color{blue}...}
+ expm(-J .* 2 .* pi .* m .* u./M) {\color{blue}...}
* f(:, m + 1);
{\color{blue}end} {\color{blue}end} \end{alltt}
\begin{alltt} {\color{blue}function} F = matdft2(f, J, K) A = size(J, 1); M = size(f, 1) ./ A; N = size(f, 2) ./ A; F = zeros(size(f)); {\color{blue}for} u = 0:M-1
{\color{blue}for} v = 0:N-1
{\color{blue}for} m = 0:M-1
{\color{blue}for} n = 0:N-1
F(A*u+1:A*u+A, A*v+1:A*v+A) = {\color{blue}...}
F(A*u+1:A*u+A, A*v+1:A*v+A) + {\color{blue}...}
expm(-J .* 2*pi .* m .* u./M) {\color{blue}...}
* f(A*m+1:A*m+A, A*n+1:A*n+A) {\color{blue}...}
* expm(-K .* 2*pi .* n .* v./N);
{\color{blue}end}
{\color{blue}end}
{\color{blue}end} {\color{blue}end} \end{alltt}
\nocite{Hamilton:1848}
\end{document} |
\begin{document}
\title{Difference Nullstellensatz in the case of a finite group\footnote{\Xy-pic package is used}}
\begin{abstract} We develop a geometric theory for difference equations with a given group of automorphisms. To solve this problem we extend the class of difference fields to the class of absolutely flat simple difference rings called pseudofields. We prove the Nullstellensatz over pseudofields and investigate geometric properties of pseudovarieties. \end{abstract}
\section{Introduction}
Our purpose is to produce a geometric technique that yields a Picard-Vessiot theory of difference equations with difference parameters. Unfortunately, the usual geometric approaches do not produce a Picard-Vessiot extension with difference parameters. Roughly speaking, the problem is that morphisms of varieties in those theories are not constructible. Therefore, we have to develop entirely new machinery. The main advantage of our theory is that morphisms of varieties are constructible. We also describe general properties of our varieties. Using these results, one can obtain a Picard-Vessiot theory of difference equations with difference parameters~\cite{AOT}. The most important application of this theory is the description of difference relations among solutions of difference equations, especially for Jacobi's theta function.
This article is devoted to producing a general geometric theory of difference equations, so it is hard to single out the main results. Nevertheless, we highlight the following ones. Theorem~\ref{equtheor} describes difference closed pseudofields. Proposition~\ref{homst} is used to obtain all the geometric results; in particular, it shows that morphisms of pseudovarieties are constructible. By describing the global regular functions, Theorem~\ref{regularf} reduces the geometric theory to the algebraic one. Lemma~\ref{taylormod} appears in different forms throughout the text (for example, Proposition~\ref{taylor}); this full version connects the theory of pseudovarieties with the theory of algebraic varieties. A more detailed survey of the main results is given in the following section.
In this paper, we develop a geometric theory of difference equations. But what does this mean? In commutative algebra, given an algebraically closed field, there is a one-to-one correspondence between the set of affine algebraic varieties and the set of radical ideals in a polynomial ring. Moreover, the category of affine algebraic varieties is equivalent to the category of finitely generated algebras. This point of view was extended to the case of differential equations by Ritt and his successors. The notion of a differential algebraic variety led to the notion of a differentially closed field. Similar results appeared in difference algebra. The first results in this direction were obtained by Cohn~\cite{Cohn}. He discovered the following difficulty: to obtain sufficiently many solutions of a given system of difference equations, we have to consider solutions in several distinct difference fields at the same time. This effect long prevented the discovery of a notion of difference closed field. However, such a notion was eventually introduced by Macintyre~\cite{Maci}. In model theory, the theory of a difference closed field is called ACFA; a detailed survey of the ACFA theory can be found in~\cite{Chad}. The appearance of the notion of a difference closed field made it possible to define difference varieties in the same manner as in the differential case.
In~\cite{Hrush}, Hrushovski develops difference algebraic geometry in a scheme-theoretic language, but his machinery can be applied only to well-mixed rings. Unfortunately, there is a deeper problem: all the attempts mentioned so far at building a geometric theory deal with fields. Let us recall the main difficulties: (1) there exist pairs of difference fields admitting no common difference overfield; (2) morphisms of difference varieties are not constructible; (3) a maximal difference ideal is not necessarily prime; (4) the set of all common zeros of a nontrivial difference ideal is sometimes empty. An example can be found in~\cite[Proposition~7.4]{Rosen}. The essential idea is to extend the class of difference fields. Such an approach was used in~\cite{vPS,Tace,AmMas,Wib}. In particular, in the Picard-Vessiot theory of difference equations one finds that the ring containing enough solutions is not necessarily a field but rather a finite product of fields. Solving a similar problem, Takeuchi considered simple Artinian rings. In~\cite{Wib}, Wibmer combined the ideas of Hrushovski with ideas similar to those in Takeuchi's work to develop the Galois theory of difference equations in the most general context. All these works ultimately deal with finite products of fields.
As we can see, the Galois theory of difference equations requires considering finite products of fields. This simple improvement made it possible to produce a difference-differential Picard-Vessiot theory with differential parameters~\cite{HardSin}. The next step is to obtain a Picard-Vessiot theory for equations with difference parameters. A first idea is to use difference closed fields and to repeat the arguments developed in~\cite{vPS,HardSin}. Unfortunately, this method does not work, and the general problem is that morphisms of difference varieties are not constructible. This effect appears when we construct a Picard-Vessiot extension for an equation with difference parameters. In this situation we expect the constants of the extension to coincide with the constants of the initial field, and we use this fact to realize the Galois group as a linear algebraic group. However, the constants of a Picard-Vessiot extension need not form a field; see, for example,~\cite[Example~2.6]{AOT}.
Therefore, we must develop a new geometric theory. The first question is which class of difference rings is appropriate for our purpose. The answer is the class of all simple absolutely flat difference rings; a detailed discussion of how to arrive at this class can be found in~\cite{DNull}. We shall use the term pseudofield for such rings. Our plan is to introduce difference closed pseudofields and to develop the corresponding theory of pseudovarieties.
Here we briefly discuss some milestones of the theory. First of all, we need the notion of a difference closed pseudofield. A similar problem appears in differential algebra of prime characteristic, where one has to deal with quasifields instead of fields. In~\cite{Quasi}, differentially closed quasifields are introduced. The crucial role in that theory is played by the ring of Hurwitz series. A difference algebraic analogue is introduced in Section~\ref{sec33} and is called the ring of functions. Such a construction has appeared in many papers, for example~\cite{vPS,Mor,UmMor}.
Our theory divides into two parts: the case of a finite group and that of an infinite one. This paper deals with finite groups. We show that functions on the group give a full classification of difference closed pseudofields. In this situation, pseudofields are finite products of fields; therefore, pseudofields in our sense and pseudofields in~\cite{Wib} coincide. The case of an infinite group is much harder and is scrutinized in~\cite{DNull}.
The theory of difference rings has one important technical difficulty: we cannot use arbitrary localizations. For example, suppose that we need to investigate an inequality $f\neq0$. To do this, one could consider the localization with respect to the powers of $f$. Unfortunately, the resulting ring is not necessarily a difference ring. To find the ``minimal'' difference ring containing $1/f$, we should take the smallest invariant multiplicative subset generated by $f$; but this subset often contains zero. Therefore, we have to develop new machinery to avoid this difficulty. This machinery is developed in Section~\ref{sec42} and is called inheriting. Roughly speaking, all the results of the paper rest on the classification of difference closed pseudofields and the inheriting machinery.
\subsection{Structure of the paper}
All necessary terms and notation are introduced in Section~\ref{sec2}. Section~\ref{sec3} is devoted to the basic techniques used in the subsequent sections. In Section~\ref{sec31}, we introduce pseudoprime ideals and investigate their properties. In Section~\ref{sec32}, we deal with pseudospectra and introduce a topology on them. In Section~\ref{sec33}, the most important class of difference rings is presented, and we prove the Taylor homomorphism theorem for this class of rings (Proposition~\ref{taylor}).
The most interesting case for us, the case of finite groups of automorphisms, is treated in Section~\ref{sec4}. In Section~\ref{sec41}, we sharpen the basic technical results obtained in Section~\ref{sec3}. Section~\ref{sec42} provides the relation between the commutative structure of a ring and its difference structure. Since in difference algebra we cannot form fraction rings with respect to arbitrary multiplicatively closed sets, we need an alternative technique, which is based on the inheriting of properties. The main technical result is Proposition~\ref{inher}, which allows us to avoid localization.
The structure of pseudofields is scrutinized in Section~\ref{sec43}. We introduce difference closed pseudofields and classify them up to isomorphism (Proposition~\ref{equtheor}). We prove that every pseudofield (and thus every field) can be embedded into a difference closed pseudofield (Propositions~\ref{difclosemin} and~\ref{difclosuni}). Our technique is illustrated by a sequence of examples. Section~\ref{sec44} plays an auxiliary role; its results have a special geometric interpretation. The most important statements are Proposition~\ref{homst} and its Corollaries~\ref{cor1} and~\ref{cor2}.
Using difference closed pseudofields, one can produce a geometric theory of difference equations with a finite group of automorphisms. In Section~\ref{sec45}, we introduce the basic geometric notions. The main result of the section is the difference Nullstellensatz for pseudovarieties (Proposition~\ref{nullth}). In Section~\ref{sec46}, we construct two different structure sheaves of regular functions. The first one consists of functions that are given by a fraction $a/b$ in a neighborhood of each point. Every pseudofield has an operation generalizing division, and we use this operation to produce the second sheaf. The main result is that these sheaves coincide and the ring of global sections consists of polynomial functions. Section~\ref{sec47} contains nontrivial geometric results about pseudovarieties; for example, Theorem~\ref{constrst} says that morphisms are constructible.
There is a natural way to identify a pseudoaffine space with an affine space over some algebraically closed field. Thus, every pseudovariety can be considered as a subset of an affine space. One can show that pseudovarieties are closed in the Zariski topology. Moreover, there is a one-to-one correspondence between pseudovarieties and algebraic varieties. We prove this in Section~\ref{sec48} and we show how to derive geometric properties of pseudovarieties using the adjoint variety in Section~\ref{sec49}. The final section contains the basic results on the dimension.
\section{Terms and notation}\label{sec2}
This section is devoted to the basic notions and objects used in what follows. We shall define the class of rings of interest to us and the notion of a pseudospectrum.
Let $\Sigma$ be an arbitrary group. A ring $A$ will be said to be a difference ring if $A$ is an associative commutative ring with an identity element such that the group $\Sigma$ acts on $A$ by ring automorphisms. A difference homomorphism of difference rings is a homomorphism preserving the identity element and commuting with the action of $\Sigma$. A difference ideal is an ideal stable under the action of the group $\Sigma$. We shall write $\Sigma$ instead of the word difference. A simple difference ring is a ring with no nontrivial difference ideals. The set of all $\Sigma$-ideals of $A$ will be denoted by $\operatorname{Id^\Sigma} A$. For every ideal $\frak a\subseteq A$ and every element $\sigma\in\Sigma$ the image of $\frak a$ under $\sigma$ will be denoted by $\frak a^\sigma$.
The sets of all ideals, radical ideals, prime ideals, and maximal ideals of $A$ will be denoted by $\operatorname{Id} A$, $\operatorname{Rad} A$, $\operatorname{Spec} A$, $\operatorname{Max} A$, respectively. The set of all prime difference ideals of $A$ will be denoted by $\operatorname{Spec^\Sigma} A$. For every ideal $\frak a\subseteq A$ the largest $\Sigma$-ideal lying in $\frak a$ will be denoted by $\frak a_\Sigma$. Such an ideal exists because it coincides with the sum of all difference ideals contained in $\frak a$. Note that $$ \frak a_\Sigma=\{\, a\in\frak a\mid\forall \sigma\in\Sigma\colon \sigma(a)\in\frak a \,\}. $$ So, we have a mapping $$ \pi\colon \operatorname{Id} A\to \operatorname{Id^\Sigma} A $$ defined by the rule $\frak a\mapsto \frak a_\Sigma$. A straightforward calculation shows that for every family of ideals $\frak a_\alpha$ we have $$ \pi(\bigcap_{\alpha} \frak a_\alpha)=\bigcap_{\alpha} \pi(\frak a_\alpha). $$ It is easy to see that for any ideal $\frak a$ there is the equality $$ \frak a_\Sigma=\bigcap_{\sigma\in\Sigma} \frak a^\sigma. $$
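For a finite group these identities can be verified exhaustively. The following Python sketch is a toy illustration of our own (not from the text): take $A=\mathbb F_2\times\mathbb F_2$ with $\Sigma=\mathbb Z/2$ swapping the factors, enumerate all ideals, and check both descriptions of $\frak a_\Sigma$ together with $\pi(\bigcap_\alpha\frak a_\alpha)=\bigcap_\alpha\pi(\frak a_\alpha)$:

```python
from itertools import combinations

R = [(a, b) for a in (0, 1) for b in (0, 1)]          # the ring F2 x F2
add = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
mul = lambda x, y: (x[0] * y[0], x[1] * y[1])
swap = lambda x: (x[1], x[0])                          # generator of Sigma = Z/2

def ideals():
    # enumerate all subsets containing 0 that are closed under + and multiplication by A
    out = []
    for k in range(1, len(R) + 1):
        for sub in combinations(R, k):
            S = set(sub)
            if (0, 0) in S and all(add(x, y) in S for x in S for y in S) \
                           and all(mul(r, x) in S for r in R for x in S):
                out.append(frozenset(S))
    return out

def pi(a):
    # a_Sigma via the pointwise description {x in a : sigma(x) in a for all sigma}
    return frozenset(x for x in a if swap(x) in a)

for a in ideals():
    # a_Sigma also equals the intersection of the images a^sigma
    assert pi(a) == a & frozenset(swap(x) for x in a)
for a in ideals():
    for b in ideals():
        # pi commutes with intersections
        assert pi(a & b) == pi(a) & pi(b)

# e.g. the ideal F2 x {0} has trivial Sigma-core:
assert pi(frozenset({(0, 0), (1, 0)})) == frozenset({(0, 0)})
```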
We shall now define the notion of a pseudoprime ideal of a $\Sigma$-ring $A$. Let $S\subseteq A$ be a multiplicatively closed subset containing the identity element, and let $\frak q$ be a $\Sigma$-ideal maximal among the $\Sigma$-ideals not meeting $S$. Then the ideal $\frak q$ will be called pseudoprime. The set of all pseudoprime ideals will be denoted by $\operatorname{PSpec} A$ and is called the pseudospectrum.
Note that the restriction of $\pi$ to the spectrum gives a mapping $$ \pi\colon \operatorname{Spec} A\to \operatorname{PSpec} A. $$ The ideal $\frak p$ will be called $\Sigma$-associated with the pseudoprime ideal $\frak q$ if $\pi(\frak p)=\frak q$. Let $\frak q$ be a pseudoprime ideal, and let $S$ be a multiplicatively closed set as in the definition of $\frak q$; then every prime ideal containing $\frak q$ and not meeting $S$ is $\Sigma$-associated with $\frak q$, and such a prime ideal exists by Zorn's lemma. So, the mapping $\pi\colon \operatorname{Spec} A\to \operatorname{PSpec} A$ is surjective.
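These notions can be made concrete in a toy example of our own. For $A=\mathbb F_2\times\mathbb F_2$ with $\Sigma=\mathbb Z/2$ swapping the factors, the only proper difference ideal is the zero ideal, so $0$ is pseudoprime (take $S=\{1\}$) although it is not prime, and both prime ideals of $A$ are $\Sigma$-associated with it. A brief Python check:

```python
from itertools import combinations

R = [(a, b) for a in (0, 1) for b in (0, 1)]          # A = F2 x F2
add = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
mul = lambda x, y: (x[0] * y[0], x[1] * y[1])
swap = lambda x: (x[1], x[0])                          # Sigma = Z/2 swaps the factors

ideals = []
for k in range(1, len(R) + 1):
    for sub in combinations(R, k):
        S = set(sub)
        if (0, 0) in S and all(add(x, y) in S for x in S for y in S) \
                       and all(mul(r, x) in S for r in R for x in S):
            ideals.append(frozenset(S))

zero = frozenset({(0, 0)})
whole = frozenset(R)

# difference ideals = swap-stable ideals; only 0 and A survive, so A is a simple difference ring
diff_ideals = [I for I in ideals if frozenset(map(swap, I)) == I]
assert set(diff_ideals) == {zero, whole}

# the zero ideal is pseudoprime (maximal difference ideal avoiding S = {1}) but not prime:
assert mul((1, 0), (0, 1)) == (0, 0)                   # product lies in 0, neither factor does

# both prime ideals of A are Sigma-associated with 0, so pi is surjective here
primes = [I for I in ideals if I != whole and
          all(mul(x, y) not in I or x in I or y in I for x in R for y in R)]
assert len(primes) == 2
assert all(frozenset(x for x in P if swap(x) in P) == zero for P in primes)
```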
Let $S$ be a multiplicatively closed set and $\frak a$ be an ideal of $A$. Then the saturation of $\frak a$ with respect to $S$ will be the following ideal $$ S(\frak a)=\bigcup_{s\in S}(\frak a:s). $$ If $s$ is an element of $A$ then the saturation of $\frak a$ with respect to $\{s^n\}$ will be denoted by $\frak a:s^\infty$.
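In the ordinary ring $\mathbb Z$ (no difference structure needed), the saturation $\frak a:s^\infty$ can be computed by iterating the colon operation until it stabilizes. A small illustrative Python sketch of our own:

```python
from math import gcd

def colon(n, s):
    # (nZ : s) = (n // gcd(n, s)) Z, for positive generators
    return n // gcd(n, s)

def saturation(n, s):
    # (nZ : s^infinity) = union over k of (nZ : s^k); stabilizes after finitely many steps
    while True:
        m = colon(n, s)
        if m == n:
            return n
        n = m

assert colon(12, 2) == 6            # (12Z : 2) = 6Z
assert saturation(12, 2) == 3       # (12Z : 2^inf) = 3Z: every factor of 2 is divided out
assert saturation(45, 15) == 1      # 45 divides 15^2, so the saturation is all of Z
assert saturation(35, 4) == 35      # gcd(35, 4) = 1: the saturation changes nothing
```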
If $S$ is a multiplicatively closed subset of $A$ then the ring of fractions of $A$ with respect to $S$ will be denoted by $S^{-1}A$. If $S=\{t^n\}_{n=0}^\infty$ then the ring $S^{-1}A$ will be denoted by $A_t$. If $\frak p$ is a prime ideal of $A$ and $S=A\setminus \frak p$ then the ring $S^{-1}A$ will be denoted by $A_{\frak p}$.
For any subset $X\subseteq A$ the smallest difference ideal containing $X$ will be denoted by $[X]$. The smallest radical difference ideal containing $X$ will be denoted by $\{X\}$. The radical of an ideal $\frak a$ will be denoted by $\frak r(\frak a)$. So, we have that $\{X\}=\frak r([X])$.
Let $f\colon A\to B$ be a homomorphism of rings and let $\frak a$ and $\frak b$ be ideals of $A$ and $B$, respectively. Then we define the extension $\frak a^e$ to be the ideal $f(\frak a)B$ generated by $f(\frak a)$. The contraction $\frak b^c$ is the ideal $f^*(\frak b)=f^{-1}(\frak b)$. If the homomorphism $f\colon A\to B$ is a difference one then both extension and contraction of difference ideals are difference ones.
Let $f\colon A\to B$ be a $\Sigma$-homomorphism of difference rings, and let $\frak q$ be a pseudoprime ideal of $B$. The contraction $\frak q^c$ is pseudoprime because $\pi$ is surjective and commutes with $f^*$. So, we have a mapping from $\operatorname{PSpec} B$ to $\operatorname{PSpec} A$. This mapping will be denoted by $f^*_\Sigma$. It follows from the definition that the following diagram is commutative $$ \xymatrix{
\operatorname{Spec} B\ar[r]^{f^*}\ar[d]^{\pi}&\operatorname{Spec} A\ar[d]^{\pi}\\
\operatorname{PSpec} B\ar[r]^{f^*_\Sigma}&\operatorname{PSpec} A\\ } $$
The set of all radical $\Sigma$-ideals of $A$ will be denoted by $\operatorname{Rad^\Sigma} A$. For convenience, maximal difference ideals will be called pseudomaximal; the set of all of them will be denoted by $\operatorname{PMax} A$. It is clear that every pseudomaximal ideal is pseudoprime (take $S=\{1\}$). It is easy to see that every radical difference ideal can be presented as an intersection of pseudoprime ideals. So, the objects with the prefix pseudo behave in the same way as the objects without it.
The ring of difference polynomials $A\{Y\}$ is the ring $A[\Sigma Y]$, where $\Sigma$ acts in the natural way. A difference ring $B$ will be called an $A$-algebra if there is a difference homomorphism $A\to B$. It is clear that every $A$-algebra can be presented as a quotient ring of some polynomial ring $A\{Y\}$.
\section{Basic technique}\label{sec3}
In this section we shall prove basic results about the introduced set of difference ideals.
\subsection{Pseudoprime ideals}\label{sec31}
\begin{proposition}\label{TechnicalStatement} Let $\frak q$ and $\frak q'$ be pseudoprime ideals of a difference ring $A$. Then \begin{enumerate} \item The ideal $\frak q$ is radical. \item For every ideal $\frak p$ $\Sigma$-associated with $\frak q$ there is the equality $$ \frak q=\bigcap_{\sigma\in \Sigma}\frak p^\sigma. $$ \item For every element $s\notin \frak q$ there is the equality $$ (\frak q:s^\infty)_\Sigma=\frak q. $$ \item If $\frak q:s^\infty=\frak q':s^\infty$ for some element $s$, then either $s$ belongs to both $\frak q$ and $\frak q'$, or $\frak q=\frak q'$. \item For every two difference ideals $\frak a$ and $\frak b$ the inclusion $\frak a\frak b\subseteq \frak q$ implies either $\frak a\subseteq \frak q$ or $\frak b\subseteq \frak q$. \end{enumerate} \end{proposition} \begin{proof} (1). Let $S$ be a multiplicatively closed subset of $A$ such that $\frak q$ is a maximal difference ideal not meeting $S$. Then $\frak r(\frak q)$ is a difference ideal containing $\frak q$ and not meeting $S$. Consequently, $\frak r(\frak q)=\frak q$.
(2). The equality $\frak p_\Sigma=\cap \frak p^\sigma$ is always true. But from the definition we have $\frak q=\frak p_\Sigma$.
(3). Let $\frak p$ be a $\Sigma$-associated with $\frak q$ prime ideal. Then it follows from~(2) that there exists $\sigma\in\Sigma$ such that $s\notin \frak p^\sigma$. Therefore, there is the inclusion $$ (\frak q:s^\infty)\subseteq(\frak p^\sigma:s^\infty)=\frak p^\sigma, $$ and, consequently, $$ (\frak q:s^\infty)_\Sigma\subseteq\frak p^\sigma_\Sigma=\frak q. $$ The other inclusion is obvious.
(4). Note that for every radical ideal $\frak a$ the equality $\frak a:s^\infty=A$ holds if and only if $s\in \frak a$. Therefore, we need only consider the case $s\notin \frak q$ and $s\notin \frak q'$. From the previous item we have $$ \frak q=(\frak q:s^\infty)_\Sigma=(\frak q':s^\infty)_\Sigma=\frak q'. $$
(5). Let $\frak p$ be a $\Sigma$-associated with $\frak q$ prime ideal. Then either $\frak a\subseteq \frak p$, or $\frak b\subseteq \frak p$. Suppose that the first one holds. Then $$ \frak a=\frak a_\Sigma\subseteq \frak p_\Sigma=\frak q. $$ \end{proof}
We shall show that the analogue of condition~(3) for the saturation with respect to an arbitrary multiplicatively closed subset $S$ does not hold.
\begin{example} Let $\Sigma=\mathbb Z$ and consider the ring $A=K^\Sigma$, where $K$ is a field. This ring has Krull dimension zero, so every prime ideal is maximal. It is a well-known fact that the maximal ideals of $A$ can be described in terms of maximal filters on $\Sigma$. Namely, for an arbitrary filter $\mathcal F$ on $\Sigma$ we define the ideal $$ \frak m_{\mathcal F}=\{\,x\in A\mid \{\,n\mid x_n=0\,\}\in\mathcal F\,\}. $$ There are two different types of maximal ideals: the first type corresponds to principal maximal filters $$ \frak m_k=\{\,x\in A\mid x_k=0\,\} $$ and the second type consists of the ideals $\frak m_{\mathcal F}$ for non-principal (free) ultrafilters $\mathcal F$. It is clear that for all ideals of the first type we have $(\frak m_k)_\Sigma=0$. But for any free ultrafilter $\mathcal F$ the ideal $\frak m_{\mathcal F}$ contains the ideal $K^{\oplus\Sigma}$ consisting of all sequences with finite support. Therefore, $(\frak m_{\mathcal F})_\Sigma\neq0$. As we can see, not every minimal prime ideal containing the zero ideal is $\Sigma$-associated with it. Additionally, set $S=A\setminus \frak m_{\mathcal F}$, where $\mathcal F$ is a free ultrafilter. Then $$ (S(0))_\Sigma=(\frak m_{\mathcal F})_\Sigma\neq0. $$ \end{example}
Let us note one peculiarity of radical difference ideals.
\begin{example} Let $\Sigma = \mathbb Z$ and consider the ring $A=K\times K$, where $\Sigma$ acts by permuting the factors. Then $$ \{(1,0)\}\{(0,1)\}\nsubseteq \{(1,0)(0,1)\}, $$ because the left-hand side is $A$ and the right-hand side is $0$. So, the condition $\{X\}\{Y\}\subseteq\{XY\}$ does not hold. \end{example}
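This peculiarity can be checked by brute force in a finite model of our own ($K=\mathbb F_2$, with $\Sigma$ acting through the swap): the difference-ideal closure of $(1,0)$ is already the whole ring, while $(1,0)(0,1)=(0,0)$ generates only the zero ideal. A Python sketch:

```python
R = [(a, b) for a in (0, 1) for b in (0, 1)]          # A = F2 x F2
add = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
mul = lambda x, y: (x[0] * y[0], x[1] * y[1])
swap = lambda x: (x[1], x[0])                          # Sigma acts through the swap

def diff_ideal(gens):
    # smallest difference ideal [gens]: close under +, multiplication by A, and swap;
    # in this finite reduced ring every ideal is radical, so [X] = {X}
    S = {(0, 0)} | set(gens)
    while True:
        T = set(S)
        T |= {add(x, y) for x in S for y in S}
        T |= {mul(r, x) for r in R for x in S}
        T |= {swap(x) for x in S}
        if T == S:
            return S
        S = T

assert diff_ideal([(1, 0)]) == set(R)        # {(1,0)} is the whole ring A
assert diff_ideal([(0, 1)]) == set(R)        # likewise {(0,1)}, so the product ideal is A
assert mul((1, 0), (0, 1)) == (0, 0)
assert diff_ideal([(0, 0)]) == {(0, 0)}      # {(1,0)(0,1)} is the zero ideal
# hence {(1,0)}{(0,1)} = A is not contained in {(1,0)(0,1)} = 0
```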
\subsection{Pseudospectrum}\label{sec32}
We shall provide the pseudospectrum with the structure of a topological space such that the mapping $\pi$ is continuous.
Let $A$ be an arbitrary difference ring and $X$ be the set of all its pseudoprime ideals. For every subset $E\subseteq A$ let $V(E)$ denote the set of all pseudoprime ideals containing $E$.
\begin{proposition}\label{pspectop} With the above notation, the following holds: \begin{enumerate} \item If $\frak a$ is a difference ideal generated by $E$, then $$ V(E)=V(\frak a)=V(\frak r(\frak a)). $$ \item $V(0)=X$, $V(1)=\emptyset$. \item Let $(E_i)_{i\in I}$ be a family of subsets of $A$. Then $$ V\left(\bigcup_{i\in I}E_i\right)=\bigcap_{i\in I}V\left(E_i\right). $$ \item For any difference ideals $\frak a$, $\frak b$ in $A$ the following holds $$ V(\frak a\cap\frak b)=V(\frak a\frak b)=V(\frak a)\cup V(\frak b). $$ \end{enumerate} \end{proposition} \begin{proof} Condition~(1) immediately follows from the definition of $V(E)$ and the fact that every pseudoprime ideal is radical. Conditions~(2) and~(3) are obvious. The last statement immediately follows from condition~(5) of Proposition~\ref{TechnicalStatement}. \end{proof}
So, we see that the sets $V(E)$ satisfy the axioms for the closed sets of a topology. We fix this topology on the pseudospectrum. Consider the mapping $$ \pi\colon \operatorname{Spec} A\to \operatorname{PSpec} A. $$ For every difference ideal $\frak a$ we have $$ \pi^{-1}(V(\frak a))=V(\frak a), $$ where the $V(\frak a)$ on the right-hand side denotes the closed subset of $\operatorname{Spec} A$ defined by $\frak a$; that is, the mapping $\pi$ is continuous. Recall that $\pi$ is always surjective.
Let us denote the pseudospectrum of a difference ring $A$ by $X$. Then for every element $s\in A$ the complement of $V(s)$ will be denoted by $X_s$. From the definition of the topology, every open subset can be presented as a union of sets of the form $X_s$. In other words, the family $\{X_s\mid s\in A\}$ forms a basis of the topology. It should be noted that the intersection $X_s\cap X_t$ is not necessarily of the form $X_u$.
\begin{proposition}\label{pspecst} With the above notation, we have \begin{enumerate} \item $X_s\cap X_t=\cup_{\sigma,\tau\in\Sigma} X_{\sigma(s) \tau(t)}$. \item $X_s=\emptyset$ iff $s$ is nilpotent. \item $X$ is quasi-compact (that is, every open covering of $X$ has a finite subcovering). \item There is a one-to-one correspondence between the set of all closed subsets of the pseudospectrum and the set of all radical difference ideals: $$ \frak t\mapsto V(\frak t)\:\mbox{ and }\:V(E)\mapsto \bigcap_{\frak q\in V(E)} \frak q. $$ \end{enumerate} \end{proposition} \begin{proof} Condition~(1) is proved by a straightforward calculation.
(2). Note that $X_s$ is not empty if and only if the set of all prime ideals not containing $s$ is not empty. The last condition is equivalent to $s$ not being nilpotent.
(3). Let $\{V(\frak a_i)\}$ be a centered family of closed subsets (that is, every intersection of finitely many members is nonempty), where the $\frak a_i$ are difference ideals. We need to show that $\cap_i V(\frak a_i)$ is nonempty. Suppose the contrary, i.e., $\cap_i V(\frak a_i)=\emptyset$. Then $$ \bigcap_i V(\frak a_i)=V(\sum_i \frak a_i)=\emptyset. $$ The last equality is equivalent to the condition that $1$ belongs to $\sum_i\frak a_i$. But then $1$ already belongs to a finite subsum. Therefore, the corresponding intersection of finitely many closed subsets is empty, a contradiction.
(4). The statement immediately follows from the equality $$ \frak r([E])=\bigcap_{\frak q\in V(E)} \frak q. $$ Let us show that this equality holds. The inclusion $\subseteq$ is obvious. For the converse, suppose that $g$ does not belong to the radical of $[E]$ and consider the set of all difference ideals containing $E$ and not meeting $\{g^n\}_{n=0}^\infty$. This set is not empty, since $[E]$ belongs to it. By Zorn's lemma, it contains a maximal element, and by definition this ideal is pseudoprime. It contains $E$ but does not contain $g$, so $g$ does not belong to the right-hand side. \end{proof}
\subsection{Functions on the group}\label{sec33}
For every commutative ring $B$ the set of all functions from $\Sigma$ to $B$ will be denoted by $\operatorname{F} B$. As a commutative ring it coincides with the product $\prod_{\sigma\in\Sigma} B$. Let us provide $\operatorname{F} B$ with the structure of a difference ring. We define $\sigma(f)(\tau)=f(\sigma^{-1}\tau)$. For every element $\sigma$ of the group $\Sigma$ there is a homomorphism \begin{equation*} \begin{split} \gamma_{\sigma}\colon\operatorname{F} B&\to B\\ f&\mapsto f(\sigma) \end{split} \end{equation*} It is clear that $\gamma_{\tau}(\sigma f)=\gamma_{\sigma^{-1}\tau}(f)$.
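As a quick sanity check (immediate from the definition, though not spelled out in the text), the formula $\sigma(f)(\tau)=f(\sigma^{-1}\tau)$ does define a left action of $\Sigma$ on $\operatorname{F} B$ by ring automorphisms: for $\sigma,\nu,\tau\in\Sigma$ we have $$ ((\sigma\nu)f)(\tau)=f(\nu^{-1}\sigma^{-1}\tau)=(\nu f)(\sigma^{-1}\tau)=(\sigma(\nu f))(\tau), $$ so $(\sigma\nu)f=\sigma(\nu f)$; and each $\sigma$ acts by permuting the coordinates of the product $\prod_{\sigma\in\Sigma}B$, hence is a ring automorphism.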
\begin{proposition}\label{taylor} Let $A$ be a difference ring, and let $\varphi\colon A\to B$ be a homomorphism of rings. Then for every element $\sigma\in \Sigma$ there exists a unique difference homomorphism $\Phi_{\sigma}\colon A\to \operatorname{F} B$ such that the following diagram is commutative $$ \xymatrix{
& {\operatorname{F} B}\ar[d]^{\gamma_{\sigma}} \\
A\ar[r]^{\varphi}\ar[ur]^{\Phi_\sigma} & B } $$ \end{proposition} \begin{proof} Suppose that such a homomorphism $\Phi_\sigma$ exists. Since it is a difference homomorphism making the diagram commutative, for all $a\in A$ and $\tau\in \Sigma$ we have $$ \Phi_\sigma(a)(\tau^{-1}\sigma)=(\tau\Phi_\sigma(a))(\sigma)=(\Phi_\sigma(\tau a))(\sigma)=\varphi(\tau a). $$ Consequently, if $\Phi_\sigma$ exists then it is unique. Now define the mapping $\Phi_\sigma$ by the rule $$ \Phi_\sigma(a)(\tau)=\varphi(\sigma\tau^{-1} a). $$ It is clear that this mapping is a ring homomorphism, and the following calculation shows that it is a difference homomorphism: $$ (\nu \Phi_\sigma(a))(\tau)=\Phi_\sigma(a)(\nu^{-1}\tau)=\varphi(\sigma\tau^{-1}\nu a)=\Phi_\sigma(\nu a)(\tau). $$ \end{proof}
The ring $\operatorname{F} B$ is a natural analogue of the ring of Hurwitz series, and the elements of $\operatorname{F} B$ are analogues of Taylor series. The homomorphisms $\Phi_\sigma$ are analogues of the Taylor homomorphism for the Hurwitz series ring; therefore, we shall call them the Taylor homomorphisms at $\sigma$. The Taylor homomorphism at the identity of the group will be called simply the Taylor homomorphism.
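To make the analogy concrete, here is a minimal worked instance (our illustration, derived directly from the definitions above): take $\Sigma=\mathbb Z/2\mathbb Z$ with nontrivial element $\sigma$, and identify $\operatorname{F} B$ with $B\times B$ via $f\mapsto(f(e),f(\sigma))$. For a homomorphism $\varphi\colon A\to B$ the Taylor homomorphism at the identity becomes $$ \Phi_e(a)=\bigl(\varphi(a),\varphi(\sigma a)\bigr), $$ since $\Phi_e(a)(\tau)=\varphi(\tau^{-1}a)$ and $\sigma^{-1}=\sigma$. One checks directly that it is a difference homomorphism: $\sigma\Phi_e(a)=(\varphi(\sigma a),\varphi(a))=\Phi_e(\sigma a)$. The homomorphisms appearing in the examples below have exactly this shape.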
It should be noted that the set of all invariant elements of $\operatorname{F} B$ can be identified with $B$: namely, it coincides with the set of all constant functions. So, we suppose that $B$ is embedded in $\operatorname{F} B$.
\section{The case of finite group}\label{sec4}
\subsection{Basic technique}\label{sec41}
From now on we shall assume that the group $\Sigma$ is finite. First of all we shall prove some more refined technical results for the finite case.
\begin{proposition}\label{techbasic} Let $A$ be a difference ring, let $\frak q$ be a pseudoprime ideal of $A$, and let $S$ be a multiplicatively closed subset of $A$. Then \begin{enumerate} \item Every minimal prime ideal containing $\frak q$ is $\Sigma$-associated with $\frak q$. \item The restriction of $\pi$ to $\operatorname{Max} A$ is a well-defined mapping $\pi\colon \operatorname{Max} A\to\operatorname{PMax} A$. \item If $S\cap\frak q=\emptyset$ then $(S(\frak q))_{\Sigma}=\frak q$. \end{enumerate} \end{proposition} \begin{proof} (1). Let $\frak p$ be a prime ideal $\Sigma$-associated with $\frak q$. Then $\frak q=\cap_\sigma \frak p^\sigma$. Now let $\frak p'$ be an arbitrary minimal prime ideal containing $\frak q$. Then $\cap_\sigma \frak p^\sigma=\frak q\subseteq\frak p'$. Consequently, $\frak p^\sigma\subseteq \frak p'$ for some $\sigma$ and, thus, $\frak p^\sigma=\frak p'$ by minimality.
(2). Let $\frak m$ be a maximal ideal and let $\frak q=\frak m_{\Sigma}$. We shall show that $\frak q$ is a maximal difference ideal. Since the mapping $\operatorname{Spec} A\to\operatorname{PSpec} A$ is surjective, it suffices to show that every prime ideal containing $\frak q$ coincides with $\frak m^\sigma$ for some $\sigma$. Indeed, let $\frak q\subseteq\frak p$. Then since $\frak q=\cap_\sigma \frak m^\sigma$, we have $\frak m^\sigma\subseteq\frak p$ for some $\sigma$. The desired result holds because $\frak m$ is maximal.
(3). By the hypothesis there is a prime ideal $\frak p$ such that $\frak q\subseteq \frak p$ and $S\cap\frak p=\emptyset$; hence there exists a minimal prime ideal $\frak p'$ with the same properties. From the definition we have $S(\frak q)\subseteq S(\frak p')=\frak p'$. Thus, the equality $\frak q=\frak p'_\Sigma$ follows from condition~(1). \end{proof}
Let $A$ be a difference ring, and $X$ be the pseudospectrum of $A$. For any radical difference ideal $\frak t$ we define the closed subset $V(\frak t)$ in $X$. Conversely, for every closed subset $Z$ we define the radical difference ideal $\cap_{\frak q\in Z}\frak q$.
\begin{proposition} The mappings above are mutually inverse bijections between $\operatorname{Rad}^\Sigma A$ and the set $\{\,Z\subseteq X\mid Z=V(E)\,\}$ of closed subsets of $X$. Suppose additionally that every radical difference ideal of $A$ is an intersection of finitely many prime ideals (for example, if $A$ is Noetherian). Then a closed set is irreducible if and only if it corresponds to a pseudoprime ideal. \end{proposition} \begin{proof}
The first statement follows from Proposition~\ref{pspecst}~(4). Let us show that irreducible sets correspond to pseudoprime ideals.
Let $\frak q$ be a pseudoprime ideal and let $V(\frak q)=V(\frak a)\cup V(\frak b)=V(\frak a\cap\frak b)$. Then $\frak q\supseteq\frak a\cap\frak b$. Thus, either $\frak q\supseteq\frak a$ or $\frak q\supseteq\frak b$ (see Proposition~\ref{pspectop}~(4)). Suppose that the first condition holds. Then $V(\frak q)\subseteq V(\frak a)$, and the reverse inclusion $V(\frak a)\subseteq V(\frak q)$ holds by the choice of $V(\frak a)$. So the decomposition is trivial and $V(\frak q)$ is irreducible.
Conversely, let $\frak t$ be a radical difference ideal and suppose that $\frak t$ is not pseudoprime. Let $\frak p_1,\ldots,\frak p_n$ be all the minimal prime ideals containing $\frak t$. Then the action of $\Sigma$ on this set is not transitive. Thus, the ideals $$ \frak t_i=\bigcap_{\sigma\in\Sigma}\frak p_i^\sigma $$ contain $\frak t$ and do not coincide with $\frak t$. Let $\frak t_1,\ldots,\frak t_s$ be the distinct ideals among the $\frak t_i$ (here $s\geqslant 2$, since there are at least two orbits). Then $\frak t =\cap_i \frak t_i$ is a nontrivial decomposition of the ideal $\frak t$, and accordingly $V(\frak t)=\cup_i V(\frak t_i)$ is a nontrivial decomposition of $V(\frak t)$ into closed subsets, so $V(\frak t)$ is reducible. \end{proof}
\subsection{Inheriting of properties}\label{sec42}
Let $f\colon A\to B$ be a difference homomorphism of difference rings. We shall consider pairs of properties of the following kind: \begin{description} \item{({\bf A1}):} a property of $f$, where $f$ is considered as a ring homomorphism; \item{({\bf A2}):} a property of $f$, where $f$ is considered as a difference homomorphism; \end{description} such that ({\bf A1}) implies ({\bf A2}). The idea is the following: having found such a pair of properties, we reduce a difference problem to a non-difference one.
The homomorphism $f\colon A\to B$ is said to have the going-up property if for every chain of prime ideals $$\frak p_1\subseteq \frak p_2\subseteq\ldots\subseteq\frak p_n$$ in $A$ and every chain of prime ideals $$\frak q_1\subseteq \frak q_2\subseteq\ldots\subseteq\frak q_m$$ in $B$ such that $0<m<n$ and $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant m$), the second chain can be extended to a chain $$\frak q_1\subseteq \frak q_2\subseteq\ldots\subseteq\frak q_n$$ with the condition $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant n$).
The homomorphism $f\colon A\to B$ is said to have the going-down property if for every chain of prime ideals $$\frak p_1\supseteq \frak p_2\supseteq\ldots\supseteq\frak p_n$$ in $A$ and every chain of prime ideals $$\frak q_1\supseteq \frak q_2\supseteq\ldots\supseteq\frak q_m$$ in $B$ such that $0<m<n$ and $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant m$), the second chain can be extended to a chain $$\frak q_1\supseteq \frak q_2\supseteq\ldots\supseteq\frak q_n$$ with the condition $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant n$).
Let $f\colon A\to B$ be a difference homomorphism. This homomorphism is said to have the going-up (going-down) property for difference ideals if the properties mentioned above hold for chains of pseudoprime ideals.
\begin{proposition} For every difference homomorphism $f\colon A\to B$ the following holds \begin{enumerate} \item In the following diagram $$ \xymatrix{
\operatorname{Spec} B\ar[r]^{f^*}\ar[d]^{\pi}&\operatorname{Spec} A\ar[d]^{\pi}\\
\operatorname{PSpec} B\ar[r]^{f^*_\Sigma}&\operatorname{PSpec} A\\ } $$ if $f^*$ is surjective, then $f^*_{\Sigma}$ is surjective. \item $f$ has the going-up property $\Rightarrow$ $f$ has the going-up property for difference ideals. \item $f$ has the going-down property $\Rightarrow$ $f$ has the going-down property for difference ideals. \end{enumerate} \end{proposition} \begin{proof} (1). This follows from the commutativity of the diagram and the surjectivity of $\pi$.
(2). Let $\frak q_1\subseteq\frak q_2$ be a chain of pseudoprime ideals of $A$ and let $\frak q'_1$ be a pseudoprime ideal in $B$ contracting to $\frak q_1$. Consider a prime ideal $\frak p'_1$ $\Sigma$-associated with $\frak q'_1$. The contraction of $\frak p'_1$ to $A$ will be denoted by $\frak p_1$. Then $\frak p_1$ will be $\Sigma$-associated with $\frak q_1$. Let $\frak p_2$ be a prime ideal $\Sigma$-associated with $\frak q_2$. Then $\cap_\sigma \frak p^\sigma_1=\frak q_1\subseteq\frak p_2$. Thus, it follows from~\cite[chapter~1, sec.~6, prop.~1.11(2)]{AM} that for some $\sigma$ we have $\frak p^\sigma_1\subseteq\frak p_2$. Consider two chains of prime ideals $\frak p^\sigma_1\subseteq\frak p_2$ in $A$ and $(\frak p'_1)^\sigma$ in $B$. From the going-up property there exists a prime ideal $\frak p'_2$ such that $(\frak p'_1)^\sigma\subseteq\frak p'_2$ and $(\frak p'_2)^c=\frak p_2$. Therefore, the ideal $(\frak p'_2)_\Sigma$ is the desired pseudoprime ideal.
(3). Let $\frak q_1\supseteq\frak q_2$ be a chain of pseudoprime ideals in $A$, and let $\frak q'_1$ be a pseudoprime ideal in $B$ contracting to $\frak q_1$. Let $\frak p'_1$ be a prime ideal $\Sigma$-associated with $\frak q'_1$. Its contraction to $A$ will be denoted by $\frak p_1$. Then $\frak p_1$ is $\Sigma$-associated with $\frak q_1$. Let $\frak p$ be a prime ideal $\Sigma$-associated with $\frak q_2$. Then $\cap_\sigma \frak p^\sigma=\frak q_2\subseteq\frak p_1$. Consequently, for some $\sigma$ we have $\frak p^\sigma\subseteq \frak p_1$ (see~\cite[chapter~1, sec.~6, prop.~1.11(2)]{AM}). The going-down property guarantees that there exists a prime ideal $\frak p'_2$ with $\frak p'_2\subseteq\frak p'_1$ and $(\frak p'_2)^c=\frak p^\sigma$. Then the ideal $(\frak p'_2)_\Sigma$ is the desired one. \end{proof}
Since the ring of fractions need not be a difference ring for every multiplicatively closed set $S$, we need to generalize the previous proposition. Let $f\colon A\to B$ be a difference homomorphism and let $X$ and $Y$ be subsets of the pseudospectra of $A$ and $B$, respectively, such that $f^*_\Sigma(Y)\subseteq X$. We shall say that the going-up property holds for $f^*_\Sigma\colon Y\to X$ if for every chain of pseudoprime ideals $$\frak p_1\subseteq \frak p_2\subseteq\ldots\subseteq\frak p_n$$ in $X$ and every chain of pseudoprime ideals $$\frak q_1\subseteq \frak q_2\subseteq\ldots\subseteq\frak q_m$$ in $Y$ such that $0<m<n$ and $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant m$), the second chain can be extended to a chain $$\frak q_1\subseteq \frak q_2\subseteq\ldots\subseteq\frak q_n$$ in $Y$ with the condition $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant n$).
We shall say that the going-down property holds for $f^*_\Sigma\colon Y\to X$ if for every chain of pseudoprime ideals $$\frak p_1\supseteq \frak p_2\supseteq\ldots\supseteq\frak p_n$$ in $X$ and every chain of pseudoprime ideals $$\frak q_1\supseteq \frak q_2\supseteq\ldots\supseteq\frak q_m$$ in $Y$ such that $0<m<n$ and $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant m$), the second chain can be extended to a chain $$\frak q_1\supseteq \frak q_2\supseteq\ldots\supseteq\frak q_n$$ in $Y$ with the condition $\frak q^c_i=\frak p_i$ ($1\leqslant i\leqslant n$). Now we shall prove a more refined result.
\begin{proposition}\label{inher} Let $f\colon A\to B$ be a difference homomorphism of difference rings. The pseudospectrum of $A$ will be denoted by $X$ and the pseudospectrum of $B$ by $Y$. Then the following holds: \begin{enumerate} \item Let for some $s\in A$ and $u\in B$ the mapping $$ f^*\colon \operatorname{Spec} B_{su}\to \operatorname{Spec} A_s $$ be surjective. Then the mapping $f^*_\Sigma\colon Y_{f(s)u}\to X_s$ is surjective. \item Let for some $s\in A$ the mapping $$ f^*_s\colon \operatorname{Spec} B_s\to \operatorname{Spec} A_s $$ have the going-up property. Then the mapping $f^*_\Sigma\colon Y_{f(s)}\to X_s$ has the going-up property. \item Let for some $s\in A$ and $u\in B$ the mapping $$ f^*\colon \operatorname{Spec} B_{su}\to \operatorname{Spec} A_s $$ have the going-down property. Then the mapping $f^*_\Sigma\colon Y_{f(s)u}\to X_s$ has the going-down property. \end{enumerate} \end{proposition} \begin{proof}
(1). Let $\frak q\in X_s$. Since $s\notin \frak q$ then there exists a prime ideal $\frak p'$ such that $\frak q\subseteq\frak p'$ and $s\notin\frak p'$. Then there exists a minimal prime ideal $\frak p$ with this property. Consequently, $\frak p$ is $\Sigma$-associated with $\frak q$. By the hypothesis there is a prime ideal $\frak p_1$ in $B$ not containing $f(s)u$ such that $\frak p_1^c=\frak p$. Therefore, the ideal $(\frak p_1)_\Sigma$ is the desired one.
(2). Let $\frak q_1\subseteq\frak q_2$ be a chain of pseudoprime ideals of $A$ not containing $s$, and let $\frak q'_1$ be a pseudoprime ideal of $B$ not containing $f(s)$ and contracting to $\frak q_1$. Let $\frak p'_1$ be a prime ideal $\Sigma$-associated with $\frak q'_1$, and let $\frak p_1$ be its contraction to $A$. As in~(1) we find a prime ideal $\frak p_2$ $\Sigma$-associated with $\frak q_2$ and not containing $s$. Then $\cap_\sigma \frak p^\sigma_1=\frak q_1\subseteq\frak p_2$. Thus, for some $\sigma$ we have $\frak p^\sigma_1\subseteq \frak p_2$. Consider the chain of ideals $\frak p^\sigma_1\subseteq \frak p_2$ and the ideal $(\frak p'_1)^\sigma$ contracting to $\frak p^\sigma_1$. By the hypothesis there exists a prime ideal $\frak p'_2$ containing $(\frak p'_1)^\sigma$ and contracting to $\frak p_2$. Then the ideal $(\frak p'_2)_\Sigma$ is the desired one.
(3). Let $\frak q_1\supseteq\frak q_2$ be a chain of pseudoprime ideals in $A$ not containing $s$, and let $\frak q'_1$ be a pseudoprime ideal in $B$ contracting to $\frak q_1$. As in~(2) we find a prime ideal $\frak p'_1$ $\Sigma$-associated with $\frak q'_1$ and not containing $f(s)u$. Its contraction will be denoted by $\frak p_1$. Let $\frak p_2$ be a prime ideal $\Sigma$-associated with $\frak q_2$. Then $\cap_\sigma \frak p_2^\sigma=\frak q_2\subseteq\frak p_1$. Thus, for some $\sigma$ we have $\frak p_2^\sigma\subseteq\frak p_1$. By the hypothesis, applied to the chain $\frak p_1\supseteq\frak p_2^\sigma$ and the ideal $\frak p'_1$, there is a prime ideal $\frak p'_2$ contained in $\frak p'_1$ and contracting to $\frak p_2^\sigma$. Then the ideal $(\frak p'_2)_\Sigma$ is the desired one. \end{proof}
\begin{example} Let $\Sigma=\mathbb Z/2\mathbb Z$, where $\sigma=1$ is the nonzero element of the group, and let $C$ be an algebraically closed field. Let $A=C[x]$, where $\sigma$ coincides with the identity mapping on $A$. Now consider the ring $B=C[t]$, where $\sigma(t)=-t$. There is a difference embedding $\varphi\colon A\to B$ such that $x\mapsto t^2$. So, we can identify $A$ with the subring $C[t^2]$ in $B$.
Let $\operatorname{Spec^\Sigma} B$ and $\operatorname{Spec^\Sigma} A$ be the sets of all prime difference ideals of the rings $B$ and $A$, respectively. It is clear that $\operatorname{Spec^\Sigma} B=\{0\}$ consists of a single point and $\operatorname{Spec^\Sigma} A=\operatorname{Spec} A$. The contraction mapping $$ \varphi^*\colon \operatorname{Spec^\Sigma} B\to\operatorname{Spec^\Sigma} A $$ maps the zero ideal to the zero ideal. We see that the image of $\operatorname{Spec^\Sigma} B$ is dense in $\operatorname{Spec^\Sigma} A$ but contains no nonempty open subset of its closure. So, $\operatorname{Spec^\Sigma} B$ is very poor.
Now let us show what happens if we use pseudoprime ideals instead of prime ones. It is clear that $\operatorname{PSpec} A=\operatorname{Spec} A$. Let us describe $\operatorname{PSpec} B$. Consider the mapping $\pi\colon\operatorname{Spec} B\to \operatorname{PSpec} B$. Every maximal ideal of $B$ is of the form $(t-a)$, and $(t-a)_\Sigma=(t^2-a^2)$ is pseudoprime whenever $a\neq 0$. Therefore, the set of all pseudoprime ideals is the following $$ \operatorname{PSpec} B=\{0\}\cup\{(t)\}\cup\{\,(t^2-a)\mid 0\neq a\in C\,\}. $$ We can identify the pseudomaximal spectrum with the affine line $C$ by the rule $(t^2-a)\mapsto a$ and $(t)\mapsto 0$. Now consider the contraction mapping $$ \varphi^*_{\Sigma}\colon \operatorname{PSpec} B\to \operatorname{PSpec} A. $$ As we can see, $\varphi^*_\Sigma((t^2-a))=(x-a)$ and $\varphi^*_\Sigma((t))=(x)$. Identifying the pseudomaximal spectrum of $A$ with $C$ by the rule $(x-a)\mapsto a$, we see that the mapping $\varphi^*_\Sigma\colon \operatorname{PMax} B\to \operatorname{PMax} A$ is the identity mapping. It is easy to see that the homomorphism $\varphi\colon A\to B$ has the going-up and going-down properties. Therefore, it has the going-up and the going-down properties for difference ideals. But this is obvious from the discussion above. Consequently, the mapping $\varphi^*_\Sigma$ is a homeomorphism between $\operatorname{PSpec} A$ and $\operatorname{PSpec} B$. \end{example}
\subsection{Pseudofields}\label{sec43}
An absolutely flat simple difference ring will be called a pseudofield.
\begin{proposition} For every pseudofield $A$ the group $\Sigma$ acts transitively on $\operatorname{Max} A$. Moreover, as a commutative ring $A$ is isomorphic to $K^n$, where $n$ is the number of maximal ideals of $A$ and $K=A/\frak m$ for a maximal ideal $\frak m$ of $A$. \end{proposition} \begin{proof}
Let $\frak m$ be a prime ideal of $A$. By the hypothesis this ideal is simultaneously maximal and minimal (see~\cite[chapter~3, ex.~11]{AM}). Then $\cap_\sigma\frak m^\sigma$ is a difference ideal and, thus, equals zero. Let $\frak n$ be an arbitrary prime ideal of $A$; then $\cap_\sigma\frak m^\sigma=0\subseteq \frak n$. Consequently, $\frak n=\frak m^\sigma$ for some $\sigma$, i.~e., $\Sigma$ acts transitively on $\operatorname{Max} A$. Let $\frak m_1,\ldots,\frak m_n$ be the set of all maximal ideals of $A$. It then follows from~\cite[chapter~1, sec.~6, prop.~1.10]{AM} that $A$ is isomorphic to $\prod_i A/\frak m_i$. Since every element of $\Sigma$ acts as an automorphism, the field $A/\frak m$ is isomorphic to $A/\frak m^\sigma$ for every $\sigma$. \end{proof}
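A basic example illustrating the proposition (our illustration, under the definitions above): for a field $K$ and a finite group $\Sigma$, the ring $\operatorname{F} K=\prod_{\sigma\in\Sigma}K$ is a pseudofield. Its maximal ideals are the kernels $\frak m_\sigma=\ker\gamma_\sigma$, and the action permutes them transitively: $$ \tau(\frak m_\sigma)=\{\,\tau f\mid f(\sigma)=0\,\}=\{\,g\mid g(\tau\sigma)=0\,\}=\frak m_{\tau\sigma}. $$ Since every ideal of $\prod_\sigma K$ is an intersection of some of the $\frak m_\sigma$ and the action is transitive, the only difference ideals of $\operatorname{F} K$ are $0$ and the whole ring, so $\operatorname{F} K$ is simple; being a finite product of fields, it is also absolutely flat.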
\begin{proposition} Let $A$ be a difference ring and let $\frak q$ be a difference ideal of $A$. The ideal $\frak q$ is pseudomaximal if and only if $A/\frak q$ is a pseudofield. In other words, every simple difference ring is absolutely flat. \end{proposition} \begin{proof} If $A/\frak q$ is a pseudofield then $\frak q$ is a maximal difference ideal and, hence, pseudomaximal. Conversely, let $\frak q$ be pseudomaximal and let $\frak m$ be a maximal ideal containing $\frak q$. Since $\frak q$ is a maximal difference ideal, $\frak m$ is $\Sigma$-associated with $\frak q$. Hence, $\frak q=\cap_\sigma \frak m^\sigma$, and it follows from~\cite[chapter~1, sec.~6, prop.~1.10]{AM} that $A/\frak q=\prod_\sigma A/\frak m^\sigma$. \end{proof}
As we see, a simple difference ring and a pseudofield are the same notion. Note that the ring $\operatorname{F} A$ is a pseudofield if and only if $A$ is a field. We now introduce the notion of a difference closed pseudofield. Let $A$ be a pseudofield. Consider the ring of difference polynomials $A\{y_1,\ldots,y_n\}$. Let $E\subseteq A\{y_1,\ldots,y_n\}$ be an arbitrary subset. The set of all common zeros of $E$ in $A^n$ will be denoted by $V(E)$. Conversely, let $X\subseteq A^n$ be an arbitrary subset. The set of all polynomials vanishing on $X$ will be denoted by $I(X)$. It is clear that for every difference ideal $\frak a\subseteq A\{y_1,\ldots,y_n\}$ we have $\frak r(\frak a)\subseteq I(V(\frak a))$. A pseudofield $A$ is said to be difference closed if for every $n$ and every difference ideal $\frak a\subseteq A\{y_1,\ldots,y_n\}$ the equality $\frak r(\frak a)= I(V(\frak a))$ holds.
\begin{proposition}\label{weakcon} If $A$ is a difference closed pseudofield, then every pseudofield that is difference finitely generated over $A$ coincides with $A$. \end{proposition} \begin{proof} Every pseudofield difference finitely generated over $A$ can be presented as $A\{y_1,\ldots,y_n\}/\frak q$, where $\frak q$ is a pseudomaximal ideal. Since $A$ is difference closed, the ideal $\frak q$ is of the form $I(a)$ for some $a\in A^n$. Hence, $\frak q=[y_1-a_1,\ldots,y_n-a_n]$, and therefore $A\{y_1,\ldots,y_n\}/\frak q$ coincides with $A$. \end{proof}
\begin{proposition}\label{funcon} A pseudofield $\operatorname{F} K$ is difference closed if and only if $K$ is algebraically closed. \end{proposition} \begin{proof} Let $\operatorname{F} K$ be difference closed. We recall that $K$ can be embedded into $\operatorname{F} K$ as the subring of constants. Consider the ring $$ R=\operatorname{F} K\{y\}/(\ldots,\sigma y-y,\ldots)_{\sigma\in\Sigma}. $$ As a commutative ring it is isomorphic to $\operatorname{F} K[y]$. Let $f$ be a nonconstant polynomial in one variable with coefficients in $K$. The ideal $(f(y))$ is a nontrivial ideal in $\operatorname{F} K[y]$. Moreover, since $f(y)$ is an invariant element, the mentioned ideal is difference. Consequently, $B=R/(f(y))$ is a nontrivial difference ring. Let $\frak m$ be a pseudomaximal ideal in $B$. Then the pseudofield $B/\frak m$ coincides with $\operatorname{F} K$ by Proposition~\ref{weakcon}. Denote the image of the element $y$ in $\operatorname{F} K$ by $t$. By definition $f(t)=0$ and $t$ is invariant; thus, $t$ is in $K$. Hence every nonconstant polynomial over $K$ has a root in $K$, i.~e., $K$ is algebraically closed.
Conversely, let $K$ be an algebraically closed field. Let $\frak a$ be an arbitrary difference ideal in $\operatorname{F} K\{y_1,\ldots,y_n\}$. Consider the algebra $$ B=\operatorname{F} K\{y_1,\ldots,y_n\}/\frak a. $$ We shall show that for every element $s\in B$ not belonging to the nilradical there is a difference homomorphism $f\colon B\to\operatorname{F} K$ over $\operatorname{F} K$ such that $f(s)\neq0$. From Proposition~\ref{taylor} it suffices to find a homomorphism $\psi\colon B\to K$ such that for some $\sigma$ the following diagram is commutative $$ \xymatrix@R=10pt{
B\ar[rd]^{\psi}&\\
\operatorname{F} K\ar[r]^{\gamma_\sigma}\ar[u]&K } $$ Indeed, consider the ring $B_s$ and let $\frak n$ be a maximal ideal of $B_s$. Then $B_s/\frak n$ is a finitely generated algebra over $K$ and is a field. Therefore, $B_s/\frak n$ coincides with $K$ (see the Hilbert Nullstellensatz) and this quotient mapping gives us the homomorphism $\psi\colon B\to K$. Let $\frak m = \operatorname{F} K\cap \frak n$, then $\frak m$ coincides with the ideal $\ker \gamma_\sigma$ for some $\sigma\in \Sigma$. So, the restriction of $\psi$ onto $\operatorname{F} K$ coincides with $\gamma_\sigma$. \end{proof}
\begin{proposition}\label{singdef} Let $A$ be a pseudofield. Suppose that every pseudofield difference generated over $A$ by a single element coincides with $A$. Then the Taylor homomorphism is an isomorphism between $A$ and $\operatorname{F} K$, where $K=A/\frak m$ for every maximal ideal $\frak m$ of $A$. \end{proposition} \begin{proof} Let $\frak m$ be a maximal ideal of $A$. Consider the field $K=A/\frak m$ and the ring $\operatorname{F} K$. It follows from Proposition~\ref{taylor} that there exists a difference homomorphism $\Phi\colon A\to \operatorname{F} K$ corresponding to the quotient homomorphism $\pi\colon A\to K$. $$ \xymatrix{
&\operatorname{F} K\ar[d]^{\gamma_e}\\
A\ar[ur]^{\Phi}\ar[r]^{\pi}&K } $$
Since $A$ is a simple difference ring, $\Phi$ is injective. Let us show that $\Phi$ is surjective. Assume the contrary: there is an element $\eta\in\operatorname{F} K\setminus \Phi(A)$. Consider the ring $A\{y\}=A[\ldots,\sigma y,\ldots]$ and its quotient ring $K[\ldots,\sigma y,\ldots]$. The ideal $(\ldots,\sigma y-\eta(\sigma),\ldots)$ is maximal in the latter ring. This ideal contracts to a maximal ideal $\frak m'$ in $A\{y\}$. It follows from Proposition~\ref{techbasic} that the ideal $\frak n=\frak m'_\Sigma$ is pseudomaximal. So, $A\{y\}/\frak n$ is a pseudofield difference generated over $A$ by a single element. Thus, $A\{y\}/\frak n$ coincides with $A$. On the other hand, there is a homomorphism $$ \varphi\colon A\{y\}/\frak n\to A\{y\}/\frak m'= K[\ldots,\sigma y,\ldots]/(\ldots,\sigma y-\eta(\sigma),\ldots)=K. $$ The restriction of this homomorphism to $A$ coincides with the quotient homomorphism $\pi$. Proposition~\ref{taylor} guarantees that there is a difference embedding $\Psi\colon A\{y\}/\frak n\to \operatorname{F} K$. It follows from the uniqueness of the Taylor homomorphism that the restriction of the last mapping to $A$ coincides with $\Phi$. $$ \xymatrix{
A\{y\}/\frak n\ar[r]^{\Psi}\ar[rd]^{\varphi}&\operatorname{F} K\ar[d]^{\gamma_e}\\
A\ar[r]^{\pi}\ar[u]^{Id}&K } $$ From the definition we have $\Psi(y)(\sigma)=\eta(\sigma)$. Consequently, $\Psi(y)=\eta$ and, thus, the image of the pseudofield $A\{y\}/\frak n$ contains $\eta$. On the other hand, this image coincides with $\Phi(A)$, a contradiction. \end{proof}
The following theorem is a corollary of the previous propositions.
\begin{theorem}\label{equtheor} Let $A$ be a pseudofield. Then the following conditions are equivalent: \begin{enumerate} \item $A$ is difference closed. \item Every pseudofield difference finitely generated over $A$ coincides with $A$. \item Every pseudofield difference generated over $A$ by a single element coincides with $A$. \item The pseudofield $A$ is isomorphic to $\operatorname{F} K$, where $K$ is an algebraically closed field. \end{enumerate} \end{theorem} \begin{proof} (1)$\Rightarrow$(2). This follows from Proposition~\ref{weakcon}.
(2)$\Rightarrow$(3). This is trivial.
(3)$\Rightarrow$(4). By Proposition~\ref{singdef}, the ring $A$ is isomorphic to $\operatorname{F} K$. We only need to show that $K$ is algebraically closed (see Proposition~\ref{funcon}). For that we repeat the first half of the proof of Proposition~\ref{funcon}.
We know that every pseudofield difference generated by a single element over $\operatorname{F} K$ coincides with $\operatorname{F} K$. Let us recall that $K$ can be embedded into $\operatorname{F} K$ as the subring of constants. Consider the ring $$ R=\operatorname{F} K\{y\}/(\ldots,\sigma y-y,\ldots)_{\sigma\in\Sigma}. $$ As a commutative ring it is isomorphic to $\operatorname{F} K[y]$. Let $f$ be a nonconstant polynomial in one variable with coefficients in $K$. The ideal $(f(y))$ is a nontrivial ideal in $\operatorname{F} K[y]$. Moreover, since $f(y)$ is an invariant element, the mentioned ideal is difference; hence $B=R/(f(y))$ is a nontrivial difference ring. Let $\frak m$ be a pseudomaximal ideal in $B$; then the pseudofield $B/\frak m$ coincides with $\operatorname{F} K$. The image of $y$ in $\operatorname{F} K$ will be denoted by $t$. By definition we have $f(t)=0$ and $t$ is an invariant element; thus, $t$ is in $K$. Therefore, the field $K$ is algebraically closed.
(4)$\Rightarrow$(1). This follows from Proposition~\ref{funcon}. \end{proof}
\begin{example} Consider the field of complex numbers $\mathbb C$ and its automorphism $\sigma$ (the complex conjugation). This pair can be regarded as a difference ring with the group $\Sigma=\mathbb Z/2\mathbb Z$. Let $\mathbb C[x]$ be the ring of polynomials over $\mathbb C$, where the automorphism $\sigma$ acts by $\sigma(f(x))=\overline{f}(-x)$. Then the ideal $(x^2-1)$ is a difference ideal. Consider the ring $A=\mathbb C[x]/(x^2-1)$. As a commutative ring it can be presented as follows $$ \mathbb C[x]/(x^2-1)=\mathbb C[x]/(x-1)\times\mathbb C[x]/(x+1)=\mathbb C\times \mathbb C. $$ Under this mapping an element $c\in \mathbb C$ maps to $(c,c)$ and $x$ maps to $(1,-1)$. The automorphism acts by $(a,b)\mapsto(\overline{b},\overline{a})$. Consider the projection of $A$ onto its first factor. For this homomorphism there is the Taylor homomorphism $A\to \operatorname{F} \mathbb C$. As a commutative ring the ring $\operatorname{F} \mathbb C$ coincides with $\mathbb C\times \mathbb C$, and the automorphism acts by $(a,b)\mapsto(b,a)$. The Taylor homomorphism is defined by the rule $a+bx\mapsto (a+b,\overline{a}-\overline{b})$.
Now we have two homomorphisms: the first one, $f\colon A\to \mathbb C\times\mathbb C$, is defined by the rule $$ a+bx\mapsto (a+b,a-b), $$ and the second one, $g\colon A\to \mathbb C\times \mathbb C$, is defined by the rule $$ a+bx\mapsto (a+b,\overline{a}-\overline{b}). $$ Then the composition $g\circ f^{-1}$ acts by $(a,b)\mapsto (a,\overline{b})$.
So, the pseudofield $A$ is difference closed by Theorem~\ref{equtheor}. Moreover, the homomorphism $g\circ f^{-1}$ transforms the initial action of $\sigma$ into a simpler one. \end{example}
Let $A$ be a pseudofield and let $\frak m$ be a maximal ideal of $A$. The residue field of $\frak m$ will be denoted by $K$, i.~e., $K=A/\frak m$. Let $L$ be the algebraic closure of $K$. The pseudofield $\operatorname{F} L$ will be denoted by $\overline{A}$. Let $\varphi\colon A\to L$ be the composition of the quotient morphism and the natural embedding of $K$ into $L$. Let $\Phi\colon A\to \overline{A}$ be the Taylor homomorphism corresponding to $\varphi$. We know that $\overline{A}$ is difference closed. Let us show that $\overline{A}$ is a minimal difference closed pseudofield containing $A$.
\begin{proposition}\label{difclosemin} Let $D$ be a difference closed pseudofield such that $A\subseteq D\subseteq \overline{A}$. Then $D=\overline{A}$. \end{proposition} \begin{proof} Consider the sequence of rings $A\subseteq D\subseteq \overline{A}$. Let $\frak m$ be a maximal ideal of $\overline{A}$. Then we have the following sequence of fields $$ A/A\cap\frak m\subseteq D/D\cap\frak m\subseteq \overline{A}/\frak m. $$ Since $D$ is difference closed, it follows from Theorem~\ref{equtheor} that the field $D/D\cap\frak m$ coincides with $L=\overline{A}/\frak m$. Now consider the composition of $D\to D/D\cap\frak m$ and $D/D\cap\frak m\to L$ and let $\Psi\colon D\to L$ be the corresponding Taylor homomorphism. It follows from the uniqueness of the Taylor homomorphism that $\Psi$ coincides with the initial embedding of $D$ to $\overline{A}$. So, $D$ satisfies the condition of Proposition~\ref{singdef}. \end{proof}
\begin{proposition}\label{difclosuni} Let $B$ be a difference closed pseudofield containing $A$. Then there exists an embedding of $\overline{A}$ to $B$ over $A$. \end{proposition} \begin{proof}
On the following diagram arrows present the embeddings of $A$ to $\overline{A}$ and to $B$ respectively: $$ \xymatrix@R=5pt@C=15pt{
\overline{A}&&B\\
&A\ar[ul]\ar[ur]& } $$
Let $\frak m$ be a maximal ideal in $B$. Then it contracts to a maximal ideal $\frak m^c$ in $A$. Since $A$ is an absolutely flat ring there exists an ideal $\frak n$ in $\overline{A}$ contracting to $\frak m^c$ (see~\cite[chapter~3, ex.~29, ex.~30]{AM}). So, we have $$ \xymatrix@R=10pt@C=15pt{
\overline{A}/\frak n&&B\ar[d]\\
&A/\frak m^c\ar[ul]\ar[r]&B/\frak m } $$
By the definition the field $\overline{A}/\frak n$ is the algebraic closure of $A/\frak m^c$ and $B/\frak m$ is algebraically closed (Theorem~\ref{equtheor}). Therefore, there exists an embedding of $\overline{A}/\frak n$ to $B/\frak m$. $$ \xymatrix@R=10pt@C=15pt{
\overline{A}\ar[d]&&B\ar[d]\\
{\overline{A}/\frak n}\ar@/^1pc/[rr]&A/\frak m^c\ar[l]\ar[r]&B/\frak m } $$
So, there is a homomorphism $\overline{A}\to B/\frak m$. Then Proposition~\ref{taylor} guarantees that there is a difference homomorphism $\varphi$ such that the following diagram is commutative $$ \xymatrix@R=10pt@C=15pt{
\overline{A}\ar[d]\ar@/^/[rrd]\ar[rr]^{\varphi}&&B\ar[d]\\
{\overline{A}/\frak n}\ar@/^1pc/[rr]&A/\frak m^c\ar[l]\ar[r]&B/\frak m } $$
The restriction of $\varphi$ onto $A$ coincides with the Taylor homomorphism for the mapping $A\to B/\frak m$. By the uniqueness of the Taylor homomorphism, it coincides with the initial embedding of $A$ into $B$. \end{proof}
\begin{example} Let $\Sigma=\mathbb Z/2\mathbb Z$ and let $\sigma=1$ be the nonzero element of $\Sigma$. Consider the field $\mathbb C(t)$, where $t$ is a transcendental element over $\mathbb C$. We assume that the action of $\Sigma$ on $\mathbb C(t)$ is trivial. Consider the following system of difference equations $$ \left\{ \begin{aligned} \sigma x&=-x,\\ x^2&=t. \end{aligned} \right. $$
Let $L$ be the algebraic closure of the field $\mathbb C(t)$. Then the difference closure of $\mathbb C(t)$ coincides with $\operatorname{F} L$. From the definition we have $\operatorname{F} L=L\times L$, where the first factor corresponds to $0$ and the second one to $1$ in $\mathbb Z/2\mathbb Z$. Then our system has the two solutions $(\sqrt{t},-\sqrt{t})$ and $(-\sqrt{t},\sqrt{t})$.
Moreover, we are able to construct the field containing the solutions of this system. Consider the ring of polynomials $\mathbb C(t)[x]$, where $\sigma x=-x$. Then the ideal $(x^2-t)$ is a maximal difference ideal. Define $$ D=\mathbb C(t)[x]/(x^2-t). $$
By definition, $D$ is a minimal field containing the solutions of the system. On the other hand, Proposition~\ref{taylor} guarantees that $D$ can be embedded into the difference closure of $\mathbb C(t)$. \end{example}
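This componentwise picture can be checked mechanically. The following sketch is only an illustration, not part of the formal development: we specialize the transcendental $t$ to a sample value, model $\operatorname{F} L=L\times L$ with $\sigma$ swapping the two factors (our reading of the right-translation action), and embed $\mathbb C(t)$ diagonally.

```python
import math

# Specialize t to 4.0 so that sqrt(t) is exact in floating point.
t = 4.0
s = math.sqrt(t)            # s = 2.0

def sigma(v):               # action of the nonzero element of Z/2Z on L x L
    return (v[1], v[0])

def neg(v):
    return (-v[0], -v[1])

def mul(v, w):              # componentwise multiplication in L x L
    return (v[0] * w[0], v[1] * w[1])

# The two solutions from the example.
for x in [(s, -s), (-s, s)]:
    assert sigma(x) == neg(x)       # sigma x = -x
    assert mul(x, x) == (t, t)      # x^2 = t, with t embedded diagonally
```

Both checks pass componentwise; the point of the example is that the two solutions live in the product $L\times L$ and are interchanged by $\sigma$.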
\begin{example} Consider the ring $A=\mathbb C\times \mathbb C$ and the group $\Sigma=\mathbb Z/4\mathbb Z$. Let $\sigma=1$ be a generator of $\Sigma$. Let $\Sigma$ act on $A$ by the following rule: $\sigma(a,b)=(b,\overline{a})$. Then $\sigma$ is an automorphism of order four. Consider the projection of $A$ onto the first factor. Then there exists a homomorphism $\Phi\colon A\to \operatorname{F} \mathbb C$ such that the following diagram is commutative $$ \xymatrix{
&\operatorname{F} \mathbb C\ar[d]^{\gamma_e}\\
A\ar[r]^{\pi}\ar[ru]^{\Phi}&\mathbb C } $$ where $\pi$ is the projection onto the first factor of $A$.
The pseudofield $\operatorname{F} \mathbb C$ is of the form $\mathbb C_0\times\mathbb C_1\times\mathbb C_2\times\mathbb C_3$, where $\mathbb C_i$ is a copy of $\mathbb C$ placed at the point $i$ of $\Sigma$. In this notation, the homomorphism $\gamma_e$ coincides with the projection onto the first factor. The element $\sigma$ acts on $\operatorname{F} \mathbb C$ by the right translation. The Taylor homomorphism is defined by the rule $(a,b)\mapsto (a,\overline{b},\overline{a},b)$.
Consider the embedding of $\mathbb C$ into $A$ by the rule $c\mapsto (c,c)$ and the embedding into $\operatorname{F} \mathbb C$ by the rule $c\mapsto(c,\overline{c},\overline{c},c)$. Both embeddings induce structures of $\mathbb C$-algebras. Since the dimensions of $A$ and $\operatorname{F} \mathbb C$ equal $2$ and $4$, respectively, $\operatorname{F} \mathbb C$ is generated by a single element over $A$. We shall find this element explicitly. Consider the element $x=(i,i,i,i)$ of $\operatorname{F} \mathbb C$. This element does not belong to $A$, therefore, $\operatorname{F} \mathbb C=A\{x\}$. We have the following relations on the element $x$: $\sigma x=x$ and $x^2+1=0$. Comparing the dimensions, we get $\operatorname{F} \mathbb C =A\{x\}/[\sigma x-x, x^2+1]$. \end{example}
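These computations can be verified numerically. In the sketch below (an illustration under our reading of the conventions), $\sigma$ acts on $\operatorname{F} \mathbb C$ by the cyclic shift $(x_0,x_1,x_2,x_3)\mapsto (x_3,x_0,x_1,x_2)$, which is one way to realize the right translation; we check that the Taylor homomorphism is equivariant and multiplicative, and that $x=(i,i,i,i)$ satisfies the stated relations.

```python
def sigma_A(v):          # sigma on A = C x C: (a, b) -> (b, conj(a))
    a, b = v
    return (b, a.conjugate())

def Psi(v):              # Taylor homomorphism (a, b) -> (a, conj(b), conj(a), b)
    a, b = v
    return (a, b.conjugate(), a.conjugate(), b)

def sigma_F(x):          # cyclic shift on F C = C^4 (our reading of right translation)
    return (x[3], x[0], x[1], x[2])

def mul(x, y):           # componentwise multiplication
    return tuple(u * v for u, v in zip(x, y))

v, w = (1 + 2j, 3 - 4j), (-2 + 1j, 5j)

# Psi commutes with the two actions of sigma ...
assert Psi(sigma_A(v)) == sigma_F(Psi(v))
# ... and is multiplicative.
assert Psi((v[0] * w[0], v[1] * w[1])) == mul(Psi(v), Psi(w))

# The generator x = (i, i, i, i): sigma x = x and x^2 + 1 = 0.
x = (1j, 1j, 1j, 1j)
assert sigma_F(x) == x
assert mul(x, x) == (-1, -1, -1, -1)
```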
\begin{example} Let $\mathbb C$ be the field of complex numbers considered as a difference ring over $\Sigma=\mathbb Z/2\mathbb Z$ and let $\sigma=1$ be the nonzero element of the group. Then the system of equations $$ \left\{ \begin{aligned} x\sigma x&=0,\\ x+\sigma x&=1 \end{aligned} \right. $$ has no solutions in any difference overfield containing $\mathbb C$. Indeed, in a field the equation $x\sigma x=0$ implies $x=0$ or $\sigma x=0$; since $\sigma$ is an automorphism, either case gives $x=\sigma x=0$, contradicting $x+\sigma x=1$. But the ideal $$ [x\sigma x,x+\sigma x-1] $$ of the ring $\mathbb C\{x\}$ is not trivial. Therefore, the system has solutions in the difference closure of $\mathbb C$. The closure coincides with $\operatorname{F} \mathbb C$. Namely, $\operatorname{F} \mathbb C=\mathbb C\times\mathbb C$, where the first factor corresponds to zero and the second one to the element $\sigma$. Then the solutions are $(1,0)$ and $(0,1)$. \end{example}
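A componentwise check of this example (an illustrative sketch only): in $\operatorname{F} \mathbb C=\mathbb C\times\mathbb C$, $\sigma$ swaps the two factors and the constants $0$ and $1$ are embedded diagonally.

```python
def sigma(v):                # swap the two factors of C x C
    return (v[1], v[0])

def mul(v, w):
    return (v[0] * w[0], v[1] * w[1])

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

for x in [(1, 0), (0, 1)]:
    assert mul(x, sigma(x)) == (0, 0)    # x * sigma(x) = 0
    assert add(x, sigma(x)) == (1, 1)    # x + sigma(x) = 1 (diagonal)
```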
\begin{example} Let $U$ be an open subset in the complex plane $\mathbb C$ and let $\Sigma$ be a finite group of automorphisms of $U$. The ring of all holomorphic functions on $U$ will be denoted by $A$. Then $A$ is a $\Sigma$-algebra with respect to the action $$ (\sigma\varphi)(z)=\varphi(\sigma^{-1} z). $$ The difference closure of $\mathbb C$ is $\operatorname{F} \mathbb C$. Consider an arbitrary point $x\in U$; then there is the substitution homomorphism $\psi_x\colon A\to \mathbb C$, $f\mapsto f(x)$. Proposition~\ref{taylor} says that there exists the corresponding Taylor homomorphism $\Psi_x\colon A\to \operatorname{F} \mathbb C$.
Let us describe the geometric meaning of this mapping. Consider the orbit of the point $x$ and denote it by $\Sigma x$. There is a natural mapping $\Sigma\to \Sigma x$, $\sigma\mapsto \sigma x$. For every function $\varphi\in A$ the composition of $\Sigma\to\Sigma x$ and $\varphi|_{\Sigma x}\colon\Sigma x\to \mathbb C$ coincides with $\Psi_x(\varphi)$. So, the Taylor homomorphism $\Psi_x$ is just the restriction onto the orbit of the given point $x$. \end{example}
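The restriction-to-the-orbit description can be illustrated concretely (an informal sketch; the acting group, the sample function, and the sample point are our own choices). Take $\Sigma=\mathbb Z/2\mathbb Z$ acting on $U=\mathbb C$ by $z\mapsto -z$; then $\Psi_x(\varphi)$ is the pair $(\varphi(x),\varphi(-x))$, and equivariance of $\Psi_x$ can be checked directly.

```python
def sigma_U(z):                  # the nontrivial automorphism z -> -z
    return -z

def act(f):                      # (sigma f)(z) = f(sigma^{-1} z); sigma^{-1} = sigma here
    return lambda z: f(sigma_U(z))

def Psi(f, x):                   # restriction of f to the orbit {x, sigma x}
    return (f(x), f(sigma_U(x)))

def sigma_F(v):                  # induced Z/2Z-action on F C = C x C
    return (v[1], v[0])

f = lambda z: z**3 + 2*z         # a sample holomorphic (polynomial) function
x = 1 + 1j

# Psi_x is a difference homomorphism: it intertwines the two actions.
assert Psi(act(f), x) == sigma_F(Psi(f, x))
```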
\begin{proposition} Let $A$ be a pseudofield. Then the following conditions are equivalent: \begin{enumerate} \item $A$ is difference closed \item For every $n$ and every set $E\subseteq A\{y_1,\ldots,y_n\}$ if there is a common zero for $E$ in $B^n$, where $B$ is a pseudofield containing $A$, then there is a common zero in $A^n$. \item For every $n$, every set $E\subseteq A\{y_1,\ldots,y_n\}$ and every finite set $W\subseteq A\{y_1,\ldots,y_n\}$ if there is a common zero $b$ for $E$ in $B^n$, where $B$ is a pseudofield containing $A$, such that no element of $W$ vanishes on $b$, then there is a common zero for $E$ in $A^n$ such that no element of $W$ vanishes on it. \end{enumerate} \end{proposition} \begin{proof}
(1)$\Rightarrow$(3). First of all, we shall reduce the problem to the case $|W|=1$. Let $b=(b_1,\ldots,b_n)\in B^n$ be the desired common zero. The pseudofield $B$ can be embedded into its difference closure $\overline{B}$. As we know, $\overline{B}$ coincides with a finite product of fields. Consider the substitution homomorphism $$ A\{y_1,\ldots,y_n\}\to B \to \overline{B}. $$ We denote the composition of these two mappings by $\phi$. For every element $w_i\in W$ we have $\phi(w_i)\neq0$. Thus, for some $\sigma_i$ we have $\phi(w_i)(\sigma_i)\neq 0$. By the definition of the $\Sigma$-action on $\overline{B}$, it follows that there is an element $\tau_i\in \Sigma$ such that $\phi(\tau_i w_i)(e)\neq 0$. (Actually, we know that $\tau_i=\sigma_i^{-1}$.) So, $$ \phi\left(\prod_{i=1}^n\tau_iw_i\right)(e)\neq 0. $$ Consider the polynomial $w=\prod_{i=1}^n\tau_iw_i$. It follows from the definition that $\phi(w)\neq 0$. Moreover, the ring $(A\{y_1,\ldots,y_n\}/[E])_w$ is not a zero ring.
Since $A$ is difference closed, it is of the form $A=\operatorname{F} K$, where $K$ is algebraically closed, and there are homomorphisms $\gamma_\sigma\colon A\to K$. Consider an arbitrary maximal ideal $\frak n$ in $D=(A\{y_1,\ldots,y_n\}/[E])_w$ and let $\frak m$ be its contraction to $A$. Then the field $D/\frak n$ is a finitely generated algebra over $A/\frak m$. For the ideal $\frak m$ there is a homomorphism $\gamma_\sigma$ such that $\frak m=\ker \gamma_\sigma$. So, we have $A/\frak m= K$. Since $K$ is algebraically closed, the field $D/\frak n$ coincides with $K$, and we obtain a homomorphism $D\to K$. So, we have the following commutative diagram $$ \xymatrix@R=20pt@C=20pt{
A\{y_1,\ldots,y_n\}\ar[d]\ar[rd]^-{\phi}&\\
(A\{y_1,\ldots,y_n\}/[E])_w\ar[r]&K } $$
such that $\phi|_A=\gamma_\sigma$. By Proposition~\ref{taylor}, it follows that there exists a difference homomorphism $\varphi\colon A\{y_1,\ldots,y_n\}\to A$ such that the following diagram is commutative $$ \xymatrix@R=20pt@C=20pt{
A\{y_1,\ldots,y_n\}\ar[d]\ar[rd]^-{\phi}\ar[r]^-{\varphi}&A\ar[d]^{\gamma_\sigma}\\
(A\{y_1,\ldots,y_n\}/[E])_w\ar[r]&K } $$ So, $\varphi$ is a difference homomorphism over $A$. The images of $y_i$ give us the desired common zero in $A^n$.
(3)$\Rightarrow$(2). This is trivial: take $W=\emptyset$.
(2)$\Rightarrow$(1). Let us show that every pseudofield difference finitely generated over $A$ coincides with $A$. Let $B$ be a pseudofield difference finitely generated over $A$. Then it can be presented in the following form $$ B=A\{y_1,\ldots,y_n\}/\frak m, $$ where $\frak m$ is a pseudomaximal ideal. This ideal has a common zero in $B^n$, namely the images of $(y_1,\ldots,y_n)$. Consequently, there is a common zero $$ (a_1,\ldots,a_n)\in A^n. $$ Consider the substitution homomorphism $A\{y_1,\ldots,y_n\}\to A$ given by the rule $y_i\mapsto a_i$. Then all elements of $\frak m$ map to zero. So, there is a difference homomorphism $B\to A$. Thus, $B$ coincides with $A$. \end{proof}
\begin{proposition} Let $A$ be a difference pseudofield with the residue field $K$ and let $A[\Sigma]$ be the ring of difference operators on $A$. Then the ring of difference operators is completely reducible and there is a decomposition $$ A[\Sigma]=A\oplus A\oplus\ldots\oplus A, $$ where the number of summands is equal to the size of the group $\Sigma$. Moreover, we have $$ A[\Sigma]=M_n(K), $$
where $n=|\Sigma|$. \end{proposition} \begin{proof} Let us define the following module $$ A_\tau = \{\,\sum_{\sigma} a_\sigma \delta_\sigma \sigma\tau\,\}, $$ where $\delta_\sigma$ is the indicator of the point $\sigma$. Using the fact that $A=\operatorname{F} K$, we see that $$ A[\Sigma]=\mathop{\oplus}_{\sigma\in \Sigma}A_\sigma. $$ Moreover, every module $A_\sigma$ is isomorphic to $A$ as a difference module. It follows from the equality $$ A[\Sigma]=\operatorname{Hom}_{A[\Sigma]}(A[\Sigma],A[\Sigma])=M_n(\operatorname{Hom}_{A[\Sigma]}(A,A)) $$ that $A[\Sigma]=M_n(K)$. \end{proof}
\begin{remark} It follows from the previous proposition that every difference module over a difference closed pseudofield is free. Moreover, for every such module there is a basis consisting of $\Sigma$-invariant elements. \end{remark}
\subsection{Difference finitely generated algebras}\label{sec44}
This section is devoted to various technical conditions on difference finitely generated algebras.
\begin{lemma}\label{lemma1} Let $A$ be a ring with finitely many minimal prime ideals. Then there exists an element $s\in A$ such that there is only one minimal prime ideal in $A_s$. \end{lemma} \begin{proof}
Let $\frak p_1,\ldots,\frak p_n$ be all the minimal prime ideals of $A$. Then it follows from~\cite[chapter~1, sec.~6, prop.~1.11(II)]{AM} that there exists an element $s$ such that $$ s\in\bigcap_{i=2}^n\frak p_i\setminus\frak p_1. $$ Then there is only one minimal prime ideal in $A_s$, and this ideal corresponds to $\frak p_1$. \end{proof}
\begin{lemma}\label{lemma2} Let $A\subseteq B$ be rings such that $A$ is an integral domain and $B$ is finitely generated over $A$. Then there exists an element $s\in A$ with the following property. For any algebraically closed field $L$ every homomorphism $A_s\to L$ can be extended to a homomorphism $B_s\to L$. \end{lemma} \begin{proof}
Let $S=A\setminus \{0\}$ and consider the ring $S^{-1}B$. This ring is a finitely generated algebra over the field $S^{-1}A$. Hence, there are finitely many minimal prime ideals in $S^{-1}B$. These ideals correspond to the ideals in $B$ contracting to $0$. Let $\frak p$ be one of them. Consider the rings $A\subseteq B/\frak p$. It follows from~\cite[chapter~5, ex.~21]{AM} that there exists an element $s\in A$ with the following property: for every algebraically closed field $L$, every homomorphism $A_s\to L$ can be extended to a homomorphism $(B/\frak p)_s\to L$. Composing the latter with $B_s\to (B/\frak p)_s$, we extend the initial homomorphism to a homomorphism $B_s\to L$. \end{proof}
\begin{lemma}\label{lemma3} Let $A\subseteq B$ be rings such that $A$ is an integral domain and $B$ is finitely generated over $A$. Then there exists an element $s\in A$ such that the corresponding mapping $\operatorname{Spec} B_s\to\operatorname{Spec} A_s$ is surjective. \end{lemma} \begin{proof}
By the previous lemma, we find an element $s$. Let $\frak p$ be a prime ideal in $A$ not containing $s$. The residue field of $\frak p$ will be denoted by $K$, and let $L$ denote the algebraic closure of $K$. The composition of the mappings $A\to K$ and $K\to L$ will be denoted by $\varphi\colon A\to L$. By definition we have $\varphi(s)\neq 0$. Consequently, there exists a homomorphism $\overline{\varphi}\colon B\to L$ extending $\varphi$. Then $\ker \overline{\varphi}$ is the desired ideal lying over $\frak p$. \end{proof}
We shall give two proofs of the following proposition.
\begin{proposition}\label{homst} Let $A\subseteq B$ be difference rings, $B$ being difference finitely generated over $A$, and there are only finitely many minimal prime ideals in $A$. Then there exists a nonnilpotent element $u$ in $A$ with the following property. For every difference closed pseudofield $\Omega$ and every difference homomorphism $\varphi\colon A \to \Omega$ such that $\varphi(u)\neq 0$ there exists a difference homomorphism $\overline{\varphi}\colon B\to
\Omega$ with the condition $\overline{\varphi}|_A=\varphi$. \end{proposition} \begin{proof}[First proof]
It follows from Theorem~\ref{equtheor} that $\Omega$ is of the following form $\operatorname{F} L$, where $L$ is an algebraically closed field. Let $\gamma_\sigma\colon \Omega\to L$ be the corresponding substitution homomorphisms.
We shall reduce the theorem to the case where $A$ and $B$ are reduced. Let us assume that we have proved the theorem for rings without nilpotent elements. Let $\frak a$ and $\frak b$ be the nilradicals of $A$ and $B$, respectively. Let $s'\in A/\frak a$ be the desired element and denote by $s$ some preimage of $s'$ in $A$. Let $\varphi\colon A\to \Omega$ be a difference homomorphism with the condition $\varphi(s)\neq0$. Since $\Omega$ does not contain nilpotent elements, $\frak a$ is in the kernel of $\varphi$. Consequently, there exists a homomorphism $\varphi'\colon A/\frak a\to \Omega$. $$ \xymatrix@R=10pt@C=15pt{
&A\ar[r]\ar[d]\ar[dl]_{\varphi}&B\ar[d]\\
\Omega&A/\frak a\ar[r]\ar[l]_{\varphi'}&B/\frak b } $$ Since $\varphi'(s')=\varphi(s)\neq0$, it follows from our hypothesis that there is a difference homomorphism $\overline{\varphi}'\colon B/\frak b\to \Omega$. $$ \xymatrix@R=10pt@C=15pt{
&A\ar[r]\ar[d]\ar[dl]_{\varphi}&B\ar[d]\\
\Omega&A/\frak a\ar[r]\ar[l]_{\varphi'}&B/\frak
b\ar@/^1pc/[ll]^{\overline{\varphi}'} } $$ Then the desired homomorphism $\overline{\varphi}\colon B\to \Omega$ is the composition of the quotient homomorphism and $\overline{\varphi}'$.
Now we suppose that the nilradicals of $A$ and $B$ are zero. By Lemma~\ref{lemma1}, there exists an element $s\in A$ such that $A_s$ contains only one minimal prime ideal. Since $A$ has no nilpotent elements, $A_s$ is an integral domain. Applying Lemma~\ref{lemma2} to the pair $A_s\subseteq B_s$, we obtain an element $t\in A$ such that for any algebraically closed field $L$ every homomorphism $A_{st}\to L$ can be extended to a homomorphism $B_{st}\to L$. Denote the element $st$ by $u$. Let us show that the desired property holds. Let $\varphi\colon A\to \Omega$ be a difference homomorphism such that $\varphi(u)\neq0$. Then for some $\sigma$ we have $\gamma_\sigma\circ \varphi(u)\neq0$. So, there is a homomorphism $\varphi_\sigma\colon A\to L$ such that $\varphi_\sigma(u)\neq 0$. We shall extend $\varphi_\sigma$ to a homomorphism $B\to L$ as shown in the following diagrams (the numbers indicate the order of construction). $$ \xymatrix@R=10pt@C=10pt{
A_u\ar[rr]&&B_u&A_u\ar[rd]^{1}\ar[rr]&&B_u&A_u\ar[rd]^{1}\ar[rr]&&B_u\ar[dl]_{2}\\
&L&\ar@{.>}[r]&&L&\ar@{.>}[r]&&L&&\\
A\ar[uu]\ar[rr]^{\varphi}\ar[ru]^{\varphi_\sigma}&&\Omega\ar[ul]_{\gamma_\sigma}&A\ar[uu]\ar[rr]^{\varphi}\ar[ru]^{\varphi_\sigma}&&\Omega\ar[ul]_{\gamma_\sigma}&A\ar[uu]\ar[rr]^{\varphi}\ar[ru]^{\varphi_\sigma}&&\Omega\ar[ul]_{\gamma_\sigma} } $$
The homomorphism~$1$ exists by the condition $\varphi_\sigma(u)\neq0$ and the universal property of localization. The homomorphism~$2$ exists by the definition of $u$. $$ \xymatrix@R=10pt@C=10pt{ A_u\ar[rd]^{1}\ar[rr]&&B_u\ar[dl]_{2}&B\ar[l]\ar[dll]_{3}&A_u\ar[rd]^{1}\ar[rr]&&B_u\ar[dl]_{2}&B\ar[l]\ar[dll]_{3}\ar[ddl]_{\overline{\varphi}}\\
&L&&\ar@{.>}[r]&&L&&\\
A\ar[uu]\ar[rr]^{\varphi}\ar[ru]^{\varphi_\sigma}&&\Omega\ar[ul]_{\gamma_\sigma}&&A\ar[uu]\ar[rr]^{\varphi}\ar[ru]^{\varphi_\sigma}&&\Omega\ar[ul]_{\gamma_\sigma}& } $$ The homomorphism~$3$ is constructed as a composition. Then by Proposition~\ref{taylor} there exists a difference homomorphism $\overline{\varphi}$. Since the diagram is commutative, it follows from the uniqueness of the Taylor homomorphism for $A$ that the restriction of $\overline{\varphi}$ onto $A$ coincides with $\varphi$. \end{proof} \begin{proof}[Second proof] We shall derive this proposition from Proposition~\ref{inher}. Since $B$ is finitely generated over $A$, there exists an element $s$ in $A$ such that the corresponding mapping $$ \operatorname{Spec} B_s\to \operatorname{Spec} A_s $$ is surjective. Then it follows from Proposition~\ref{inher}~(1) that the mapping $$ (\operatorname{PSpec} B)_s\to (\operatorname{PSpec} A)_s $$ is surjective.
Let $\Omega$ be an arbitrary difference closed pseudofield and let $\varphi\colon A\to \Omega$ be a difference homomorphism such that $\varphi(s)\neq 0$. The kernel of $\varphi$ will be denoted by $\frak p$, and we have $s\notin \frak p$. Therefore, there is a pseudoprime ideal $\frak q\subseteq B$ such that $\frak q^c=\frak p$ and $s\notin \frak q$. Consider the following ring $$ R=B/\frak q\mathop{\otimes}_{A/\frak p}\Omega. $$ It follows from the definition that $R$ is difference finitely generated over $\Omega$. Let $\frak m$ be an arbitrary maximal difference ideal of $R$. Since $\Omega$ is difference closed, the quotient ring $R/\frak m$ coincides with $\Omega$. Now we have the following diagram $$ \xymatrix@R=10pt{
B\ar[rr]&&B/\frak q\ar[rd]&&\\
A\ar[r]\ar[u]&A/\frak p\ar[ur]\ar[rd]&&B/\frak q\mathop{\otimes}_{A/\frak p}\Omega\ar[r]&R/\frak m = \Omega\\
&&\Omega\ar[ru]&&\\ } $$ The composition of upper arrows gives us the desired homomorphism from $B$ to $\Omega$. \end{proof}
There are two important particular cases of this proposition.
\begin{corollary}\label{cor1} Let $A\subseteq B$ be difference rings, $B$ being difference finitely generated over $A$, and $A$ is a pseudo integral domain. Then there exists a nonnilpotent element $u$ in $A$ with the following property. For every difference closed pseudofield $\Omega$ and every difference homomorphism $\varphi\colon A \to \Omega$ such that $\varphi(u)\neq 0$ there exists a difference homomorphism $\overline{\varphi}\colon B\to \Omega$ with the condition
$\overline{\varphi}|_A=\varphi$. \end{corollary} \begin{proof}
Since $A$ is a pseudo integral domain, there are finitely many minimal prime ideals in $A$. Indeed, let $\frak p$ be a minimal prime ideal. Then it is $\Sigma$-associated with the zero ideal, so $\cap_\sigma \frak p^\sigma=0$. Let $\frak q$ be an arbitrary minimal prime ideal of $A$. Then $\cap_\sigma\frak p^\sigma=0\subseteq \frak q$. Therefore, for some $\sigma$ we have $\frak p^\sigma\subseteq \frak q$. But $\frak q$ is a minimal prime ideal, hence $\frak p^\sigma=\frak q$. So, the ideals $\frak p^\sigma$ are exactly the minimal prime ideals of $A$. Now the result follows from Proposition~\ref{homst}. \end{proof}
\begin{corollary}\label{cor2}
Let $A\subseteq B$ be difference rings, $B$ being difference finitely generated over $A$, and $A$ is a difference finitely generated algebra over a pseudofield. Then there exists a nonnilpotent element $u$ in $A$ with the following property. For every difference closed pseudofield $\Omega$ and every difference homomorphism $\varphi\colon A \to \Omega$ such that $\varphi(u)\neq 0$ there exists a difference homomorphism $\overline{\varphi}\colon B\to \Omega$ with the condition $\overline{\varphi}|_A=\varphi$. \end{corollary} \begin{proof} Every pseudofield is an Artin ring and, thus, is Noetherian. If $A$ is difference finitely generated over a pseudofield, then $A$ is finitely generated over it. Hence, $A$ is Noetherian. Consequently, there are finitely many minimal prime ideals in $A$. Now the result follows from Proposition~\ref{homst}. \end{proof}
\begin{proposition} Let $K$ be a pseudofield and $L$ be its difference closure. Consider an arbitrary difference finitely generated algebra $A$ over $K$ and a nonnilpotent element $u\in A$. Then there is a difference homomorphism $\varphi\colon A\to L$ such that $\varphi(u)\neq 0$. \end{proposition} \begin{proof} As we know, $L=\operatorname{F} F$ for some algebraically closed field $F$, and there are homomorphisms $\gamma_\sigma\colon L\to F$. Then we have the compositions $\pi_\sigma \colon K\to L\to F$. Every maximal ideal of $K$ can be presented as $\ker \pi_\sigma$ for an appropriate element $\sigma$. So, every residue field $K/\frak m$ of $K$ can be embedded into $F$.
Since $u$ is not nilpotent, the algebra $A_u$ is not the zero ring. Consider an arbitrary maximal ideal $\frak n$ in $A_u$. Let $\frak m$ denote its contraction to $K$. Then $(A/\frak n)_u$ is a field that is finitely generated as an algebra over $K/\frak m$. The field $K/\frak m$ can be embedded into $F$ by some mapping $\pi_\sigma$. Since $F$ is algebraically closed, there is a mapping $\phi_\sigma\colon (A/\frak n)_u\to F$ such that the following diagram is commutative $$ \xymatrix@R=15pt@C=15pt{
(A/\frak n)_u\ar[rd]^{\phi_\sigma}&\\
K/\frak m\ar[r]^{\pi_\sigma}\ar[u]&F } $$ By Proposition~\ref{taylor}, it follows that there exists a mapping $\varphi\colon A\to L $ such that the following diagram is commutative $$ \xymatrix{
(A/\frak n)_u\ar[rrd]^{\phi_\sigma}&A\ar[l]^{\pi}\ar[r]^{\varphi}&L\ar[d]^{\gamma_\sigma}\\
&K/\frak m\ar[r]^{\pi_\sigma}\ar[ul]&F } $$ where $\pi\colon A\to (A/\frak n)_u$ is a natural mapping. Since $\phi_\sigma\circ\pi(u)\neq 0$, we have $\varphi(u)\neq 0$. \end{proof}
The next technical condition concerns extensions of pseudoprime ideals.
\begin{proposition} Let $A\subseteq B$ be difference rings, $B$ being difference finitely generated over $A$, and suppose that there are finitely many minimal prime ideals in $A$. Then there exists an element $u$ in $A$ such that the mapping $$ (\operatorname{PSpec} B)_u\to(\operatorname{PSpec} A)_u $$ is surjective. \end{proposition} \begin{proof} We may suppose that the nilradicals of the rings are zero. By Lemma~\ref{lemma1}, there exists an element $s\in A$ such that $A_s$ is an integral domain. Further, as in Lemma~\ref{lemma2}, there is an element $t$ such that $B_{st}$ is integral over $A_{st}[x_1,\ldots,x_n]$, where the elements $x_1,\ldots,x_n$ are algebraically independent over $A_{st}$. Let $u=st$. By~\cite[chapter~5, th.~5.10]{AM}, the mapping $\operatorname{Spec} B_u\to \operatorname{Spec} A_u[x_1,\ldots,x_n]$ is surjective. It is clear that the mapping $$ \operatorname{Spec} A_u[x_1,\ldots,x_n]\to \operatorname{Spec} A_u $$ is surjective too. So, by Proposition~\ref{inher} the mapping $$ (\operatorname{PSpec} B)_u\to(\operatorname{PSpec} A)_u $$ is surjective. \end{proof}
\begin{proposition}\label{Pmaxst} Let $A\subseteq B$ be difference finitely generated algebras over a pseudofield. Then there exists an element $u$ in $A$ such that the mapping $$ (\operatorname{PMax} B)_u\to(\operatorname{PMax} A)_u $$ is surjective. \end{proposition} \begin{proof} Since the algebra $A$ is difference finitely generated over a pseudofield, it is Noetherian. Consequently, there are finitely many minimal prime ideals in $A$. Following the proof of the previous proposition, we find an element $u$ such that the mapping $\operatorname{Spec} B_u\to\operatorname{Spec} A_u$ is surjective. Since $A_u$ and $B_u$ are finitely generated over an Artin ring, the contraction of any maximal ideal is a maximal ideal. So, the mapping $\operatorname{Max} B_u\to\operatorname{Max} A_u$ is well-defined and surjective. Then Proposition~\ref{techbasic}~(2) completes the proof. \end{proof}
\subsection{Geometry}\label{sec45}
In this section we develop a geometric theory of difference equations with solutions in pseudofields. This theory is quite similar to the theory of polynomial equations.
Let $A$ be a difference closed pseudofield. The ring of difference polynomials $A\{y_1,\ldots,y_n\}$ will be denoted by $R_n$. For every subset $E\subseteq R_n$ we shall define the subset $V(E)$ of $A^n$ as follows $$ V(E)=\{\,a\in A^n \mid \forall f\in E:\:f(a)=0\,\}. $$ This set will be called a pseudovariety. Conversely, let $X$ be an arbitrary subset in $A^n$, then we set $$
I(X)=\{\,f\in R_n\mid f|_X=0\,\}. $$ This ideal is called the ideal of definition of $X$. Let now $\operatorname{Hom}^\Sigma_A(R_n,A)$ denote the set of all difference homomorphisms from $R_n$ to $A$ over $A$. Consider the mapping $$ \varphi\colon A^n\to \operatorname{Hom}^\Sigma_A(R_n,A) $$ defined by the rule: every point $a=(a_1,\ldots,a_n)$ maps to the homomorphism $\xi_a$ such that $\xi_a(f)=f(a)$. Let $$ \psi\colon\operatorname{Hom}^\Sigma_A(R_n,A)\to\operatorname{PMax} R_n $$ denote the mapping defined by the rule $\xi\mapsto\ker\xi$. So, we have the following sequence $$ A^n\stackrel{\varphi}{\longrightarrow} \operatorname{Hom}^\Sigma_A(R_n,A)\stackrel{\psi}{\longrightarrow}\operatorname{PMax} R_n. $$
\begin{proposition}\label{bij} The mappings $\varphi$ and $\psi$ are bijections. \end{proposition} \begin{proof} The inverse mapping for $\varphi$ is given by the rule $$ \xi\mapsto(\xi(y_1),\ldots,\xi(y_n)). $$ Since $A$ is difference closed, for every homomorphism $\xi\colon R_n\to A$ its kernel is of the form $\ker \xi=[y_1-a_1,\ldots,y_n-a_n]$, where $a_i=\xi(y_i)$. So, the mapping $\psi$ is injective and surjective. \end{proof}
It is clear that under the mapping $\psi\circ\varphi$ the set $V(E)\subseteq A^n$ maps to the set $V(E)\subseteq\operatorname{PMax} R_n$. So, the sets $V(E)$ define a topology on $A^n$, and the mentioned mapping is a homeomorphism. Therefore, we can identify the pseudomaximal spectrum of $R_n$ with the affine space $A^n$. Let $\frak a$ be a difference ideal in $R_n$. Then the set $\operatorname{Hom}^\Sigma_A(R_n/\frak a,A)$ can be identified with the set of all homomorphisms in $\operatorname{Hom}^\Sigma_A(R_n,A)$ mapping $\frak a$ to zero. In other words, there is a homeomorphism between $V(\frak a)$ and $\operatorname{PMax} R_n/\frak a$.
\begin{corollary} The mappings $\varphi$ and $\psi$ are homeomorphisms. \end{corollary}
\begin{theorem}\label{nullth} Let $\frak a$ be a difference ideal in $R_n$. Then $\frak r(\frak a)=I(V(\frak a))$. \end{theorem} \begin{proof} Since $A$ is an Artin ring, $A$ is a Jacobson ring. The ring $R_n$ is finitely generated over $A$; consequently, $R_n$ is a Jacobson ring too. Therefore, every radical ideal in $R_n$ can be presented as an intersection of maximal ideals. Hence, every radical difference ideal can be presented as an intersection of pseudomaximal ideals (Proposition~\ref{techbasic}, item~(2)). Now we use the correspondence between the points of $V(\frak a)$ and pseudomaximal ideals (Proposition~\ref{bij}). \end{proof}
\subsection{Regular functions and a structure sheaf}\label{sec46}
Let $X\subseteq A^n$ be a pseudovariety over a difference closed pseudofield $A$ and let $I(X)$ be its ideal of definition in the ring $R_n=A\{y_1,\ldots,y_n\}$. Then the ring $R_n/I(X)$ can be identified with the ring of polynomial functions on $X$ and will be denoted by $A\{X\}$.
Let $f\colon X\to A$ be a function. We shall say that $f$ is regular at $x\in X$ if there are an open neighborhood $U$ containing $x$ and elements $h,g\in A\{x_1,\ldots,x_n\}$ such that for every element $y\in U$ the value $g(y)$ is invertible and $f(y)=h(y)/g(y)$. The condition on $g$ can be stated equivalently as follows: for each element $y\in U$ the value $g(y)$ is not a zero divisor. For any subset $Y$ of $X$, a function is said to be regular on $Y$ if it is regular at each point of $Y$. The set of all regular functions on an open subset $U$ of $X$ will be denoted by $\mathcal O_X(U)$. Since the definition of a regular function is local, the rings $\mathcal O_X(U)$ form a sheaf $\mathcal O_X$, which will be called the structure sheaf on $X$. This definition naturally generalizes the usual one in algebraic geometry. It follows from the definition that there is the inclusion $A\{X\}\subseteq \mathcal O_X(X)$. Importantly, the reverse inclusion also holds.
\begin{theorem}\label{regularf} For an arbitrary pseudovariety $X$ there is the equality $$ A\{X\}=\mathcal O_X(X). $$ \end{theorem} \begin{proof} Let $f$ be a regular function on $X$. It follows from the definition of a regular function that for each point $x\in X$ there exist a neighborhood $U_x$ and elements $h_x,g_x\in A\{X\}$ such that for every element $y\in U_x$ the value $g_x(y)$ is invertible and $$ f(y)=h_x(y)/g_x(y). $$ Replacing $h_x$ and $g_x$ by $h_xg_x$ and $g_x^2$, respectively, we can suppose that the condition $g_x(y)=0$ implies $h_x(y)=0$. The element $\prod_{\sigma}\sigma(g_x)$ is $\Sigma$-constant. So, we replace $h_x$ and $g_x$ by $$ h_x\prod_{\sigma\neq e}\sigma(g_x)\quad \mbox{ and }\quad\prod_{\sigma}\sigma(g_x). $$ Hence, we can suppose that each $g_x$ is $\Sigma$-constant.
The family $\{U_x\}$ covers $X$ and $X$ is compact. So, there is a finite subfamily such that $$ X=U_{x_1}\cup\ldots\cup U_{x_m}. $$ For every $\Sigma$-constant element $s$ the set $X_s$ coincides with the set of all points where $s$ is invertible. Therefore, we have $U_x\subseteq X_{g_x}$. Hence, $X_{g_{x_i}}$ cover $X$ and, thus, $(g_{x_1},\ldots,g_{x_m})=(1)$. So, we have $$ 1=d_1 g_{x_1}+\ldots+d_m g_{x_m}. $$
Now observe that $h_{x}g_{x'}=h_{x'}g_x$, where $x$ and $x'$ are among the $x_i$. Indeed, if $$ g_{x}(y)=0\quad \mbox{ or }\quad g_{x'}(y)=0, $$ then both sides of the equality vanish at $y$. If both $g_{x}(y)$ and $g_{x'}(y)$ are not zero, then the equality holds because $h_x/g_x$ and $h_{x'}/g_{x'}$ define the same function on the intersection $X_{g_{x}}\cap X_{g_{x'}}$. Now set $d=\sum_i d_i h_{x_i}$. We claim that $f=d$. Indeed, $$ dg_{x_j}=\sum_i d_i h_{x_i}g_{x_j}=\sum_i d_i h_{x_j}g_{x_i}=h_{x_j}. $$ \end{proof}
The given definition of a structure sheaf comes from algebraic geometry. Roughly speaking, we use the inverse operation to produce rational functions. When we deal with a field, the inverse operation is defined for every nonzero element. In the case of pseudofields we have a similar operation. Indeed, for every element $a$ of an arbitrary absolutely flat ring there exist unique elements $e$ and $a^*$ with the relations $$ a=ea,\:\:a^*=ea^*,\:\: e = a a^*. $$ A formal definition of these elements is the following. For every element $a$ of an absolutely flat ring $A$ there is an element $x\in A$ such that $a=xa^2$. Then set $e=ax$ and $a^*=ax^2$. These elements can be described as follows. The element $a$ can be considered as a function on the spectrum of $A$. Then the element $e$ is the function that equals $1$ where $a$ is not zero and equals $0$ where $a$ is zero. In other words, $e$ is the indicator of the support of $a$. The element $a^*$ is equal to the inverse of $a$ where $a$ is not zero and to zero otherwise.
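In a finite product of fields, the prototypical absolutely flat ring, both $e$ and $a^*$ are computed componentwise; a minimal sketch:

```python
def support_idempotent(a):       # e: indicator of the support of a
    return tuple(1 if c != 0 else 0 for c in a)

def pseudo_inverse(a):           # a^*: inverts a on its support, 0 elsewhere
    return tuple(1 / c if c != 0 else 0 for c in a)

def mul(v, w):
    return tuple(x * y for x, y in zip(v, w))

a = (2.0, 0.0, -0.5, 4.0)        # an element of the product ring R^4
e, astar = support_idempotent(a), pseudo_inverse(a)

assert mul(e, a) == a                          # a = e a
assert mul(e, astar) == astar                  # a^* = e a^*
assert mul(a, astar) == tuple(map(float, e))   # e = a a^*
assert mul(astar, mul(a, a)) == a              # an x with a = x a^2: take x = a^*
```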
Now we shall define a second structure sheaf; we will see that this new sheaf coincides with the sheaf above. Let $X\subseteq A^n$ be a pseudovariety over a difference closed pseudofield $A$. Consider an arbitrary function $f\colon X\to A$. We shall say that $f$ is pseudoregular at a given point $x\in X$ if there exist a neighborhood $U$ containing $x$ and elements $h,g\in A\{x_1,\ldots,x_n\}$ such that for every $y\in U$ the element $g(y)$ is not zero and $f(y)=h(y)(g(y))^*$. A function pseudoregular at each point of a subset $Y$ is called pseudoregular on $Y$. The set of all pseudoregular functions on an open set $U$ will be denoted by $\mathcal O'_X(U)$. Let us note that there is a natural mapping $\mathcal O_X\to \mathcal O'_X$; actually, the sheaf $\mathcal O_X$ is a subsheaf of $\mathcal O'_X$. Let us show that the two sheaves coincide.
\begin{theorem}\label{regularfnew} Under the above assumptions, we have $$ \mathcal O_X=\mathcal O'_X. $$ \end{theorem} \begin{proof} Let $f$ be a pseudoregular function defined on some open subset of $X$ and let $x$ be an arbitrary point where $f$ is defined. Then it follows from the definition of a pseudoregular function that there are a neighborhood $U$ of $x$ and elements $h,g\in A\{X\}$ such that for every point $y\in U$ the element $g(y)$ is nonzero and $$ f(y)=h(y)(g(y))^*. $$ Let $e$ be the idempotent of $A$ corresponding to $g(x)$. Then in some smaller neighborhood (we should intersect $U$ with $X_{eg}$) $f$ is given by the equality $$ f(y)=eh(y)(eg(y))^*. $$ Let us set $g'(y)= 1-e + eg(y)$; then the value $g'(x)$ is invertible in $A$. Now we consider the functions $$ h_0 = eh\prod_{\sigma\neq e} \sigma(g')\quad \mbox{ and } \quad g_0 = \prod_{\sigma}\sigma(g'). $$ Then $g_0$ is a $\Sigma$-constant element. Therefore, $g_0(y)$ is invertible for every $y\in X_{g_0}$. The set $X_{g_0}$ contains $x$ because $g'(x)$ is invertible. Additionally, we have $$ eh(y)(eg(y))^*=h_0(y)/g_0(y) $$ for all $y\in U\cap X_{g_0}$. Therefore, if a function $f$ is pseudoregular at $x$, then it is also regular at $x$. \end{proof}
So, as we can see, the new method of constructing the structure sheaf gives the same result.
Let $X\subseteq A^n$ and $Y\subseteq A^m$ be pseudovarieties over a difference closed pseudofield $A$. A mapping $f\colon X\to Y$ will be called regular if its coordinate functions are regular. By Theorem~\ref{regularf}, the set of all regular functions on $X$ coincides with the set of polynomial functions; therefore, every regular mapping from $X$ to $Y$ coincides with a polynomial mapping.
Let a mapping $f\colon X\to Y$ be regular. Then all coordinate functions $f_i(x_1,\ldots,x_n)$ are difference polynomials. Every such $f\colon X\to Y$ induces a difference homomorphism $f^*\colon A\{Y\}\to A\{X\}$ defined by the rule $f^*(\xi)=\xi\circ f$.
Conversely, for every difference homomorphism $\varphi\colon A\{Y\}\to A\{X\}$ over $A$ we shall define $$ \varphi^*\colon \operatorname{Hom}^\Sigma_A(A\{X\},A)\to \operatorname{Hom}^\Sigma_A(A\{Y\},A) $$ by the rule $\varphi^*(\xi)=\xi\circ\varphi$. Let us recall that the pseudovariety $X$ can be identified with $\operatorname{Hom}^\Sigma_A(A\{X\},A)$. Then we have the mapping $$ \varphi^*\colon X\to Y. $$
\begin{proposition} The constructed mappings are mutually inverse bijections between the set of all regular mappings from $X$ to $Y$ and the set of all difference homomorphisms from $A\{Y\}$ to $A\{X\}$. \end{proposition} \begin{proof}
If $f\colon A^n\to A^m$ is a polynomial mapping, then $f(X)\subseteq Y$ if and only if $f^*(I(Y))\subseteq I(X)$. Since $$A\{X\}=A\{y_1,\ldots,y_n\}/I(X)$$ and $$A\{Y\}=A\{y_1,\ldots,y_m\}/I(Y),$$ the set of all $$ g\colon A\{y_1,\ldots,y_m\}\to A\{y_1,\ldots,y_n\} $$ with the condition $g(I(Y))\subseteq I(X)$ corresponds bijectively to the set of all $\bar g\colon A\{Y\}\to A\{X\}$. \end{proof}
\subsection{Geometry continuation}\label{sec47}
Here we shall continue the investigation of some geometric properties of pseudovarieties and their morphisms.
Since every pseudofield is an Artin ring and every difference finitely generated algebra over a pseudofield is finitely generated, every algebra difference finitely generated over a pseudofield is Noetherian. So, we have the following.
\begin{proposition}\label{netst} Every pseudovariety is a Noetherian topological space. \end{proposition}
The following propositions are devoted to the geometric properties of regular mappings.
\begin{proposition}\label{imst} Let $f\colon X\to Y$ be a regular mapping with dense image and let $Y$ be irreducible. Then the image of $f$ contains an open subset. \end{proposition} \begin{proof} Let $A\{X\}$ and $A\{Y\}$ be the coordinate rings of the pseudovarieties. Then the mapping $f$ induces the mapping $$ f^*\colon A\{Y\}\to A\{X\}. $$ Since the image of $f$ is dense, the homomorphism $f^*$ is injective. By Proposition~\ref{Pmaxst}, there exists an element $s\in A\{Y\}$ such that the mapping $\operatorname{PMax} A\{X\}_s\to \operatorname{PMax} A\{Y\}_s$ is surjective. By Proposition~\ref{bij}, the last mapping coincides with $f\colon X_s\to Y_s$. Hence the image of $f$ contains the open subset $Y_s$; note that, since $Y$ is irreducible, this open subset is dense. \end{proof}
\begin{proposition}\label{constrst} Let $f\colon X\to Y$ be a regular mapping. Then $f$ is constructible. \end{proposition} \begin{proof} Let the rings $A\{X\}$ and $A\{Y\}$ be denoted by $B$ and $D$, respectively. Then we have the corresponding difference homomorphism $f^*\colon D\to B$. We identify the pseudovarieties $X$ and $Y$ with the pseudomaximal spectra of the rings $B$ and $D$, respectively.
Let $E$ be a constructible subset of $X$; then it has the form $$ E=(U_1\cap V_1)\cup\ldots\cup (U_n\cap V_n), $$ where the $U_i$ are open and the $V_i$ are closed. Since taking images preserves unions, we may suppose that $E=U\cap V$, where $U$ is open and $V$ is irreducible and closed. Let $V$ be of the form $V=V(\frak p)$, where $\frak p$ is a pseudoprime ideal of $B$. Taking the quotient by $\frak p$, we reduce to the case where $E$ is open in $X$ and $X$ is irreducible.
Now we are going to show that $f(E)$ is constructible in $Y$. To do this we shall use a criterion~\cite[chapter~7, ex.~21]{AM}. Let $X_0$ be an irreducible closed subset of $Y$ such that $f(E)$ is dense in $X_0$. We must show that the image of $E$ contains an open subset of $X_0$. The set $X_0$ is of the form $V(\frak p)$, where $\frak p$ is a pseudoprime ideal in $D$. The preimage of $X_0$ under $f$ is of the form $V(\frak p^e)$, where $\frak p^e$ is the extension of $\frak p$ to $B$. The closed set $V(\frak p^e)$ can be presented as follows $$ V(\frak p^e)=V(\frak q_1)\cup\ldots\cup V(\frak q_m), $$ where the $\frak q_i$ are pseudoprime ideals of $B$. Therefore, the set $E$ has the following decomposition $$ E=(U\cap V(\frak q_1))\cup\ldots\cup (U\cap V(\frak q_m)). $$ Passing to the quotients by $\frak p$ and $\frak p^e$, we reduce the problem to the case where $D$ is a pseudodomain and $Y=X_0$. Now the image of $E$ is of the form $$ f(E)=f(V(\frak q_1)\cap U)\cup\ldots\cup f(V(\frak q_m)\cap U). $$ Since $f(E)$ is dense in $Y$ and $Y$ is irreducible, there exists an $i$ such that $f(V(\frak q_i)\cap U)$ is dense in $Y$. Replacing $B$ by $B/\frak q_i$, we may suppose that $D\subseteq B$, $B$ is a pseudodomain, and $E=U$ is open. Every open subset is a union of principal open subsets, and since $X$ is a Noetherian topological space this union is finite. Therefore, we may suppose that $E=X_s$, and we need to prove that $f(E)$ contains an open subset of $Y$.
In order to show the last claim we shall prove that there is a nonzero element $t\in D$ such that the mapping $\operatorname{Max} B_{st}\to \operatorname{Max} D_t$ is surjective. Then our proposition follows from Proposition~\ref{techbasic}~(2). First of all, we note that every minimal prime ideal of $B$ is $\Sigma$-associated with the zero ideal. Since $D$ is a subring of $B$, the contraction of every prime ideal $\Sigma$-associated with zero is a prime ideal $\Sigma$-associated with zero in $D$. So, every minimal prime ideal of $B$ contracts to a minimal prime ideal of $D$. Now there exists a minimal prime ideal $\frak q$ of $B$ such that $s\notin\frak q$. The contraction of $\frak q$ to $D$ will be denoted by $\frak p$. Let $\frak p_1,\ldots,\frak p_n$ be the set of all minimal prime ideals of $D$, where $\frak p=\frak p_1$. Then there is an element $t\in D$ such that $t\in \bigl(\bigcap_{i=2}^n\frak p_i\bigr)\setminus \frak p_1$. So, we have the inclusion $D_t\subseteq B_t$. Moreover, the element $t$ was chosen so that $D_t=(D/\frak p)_t$. Therefore, the composition of the embedding $D_t\to B_t$ and the localization $B_t\to B_{st}$ is injective. So, $B_{st}$ is a finitely generated algebra over the integral domain $D_t$. Therefore, there exists an element $u\in D_t$ such that the mapping $\operatorname{Max} B_{stu}\to \operatorname{Max} D_{tu}$ is surjective. Denoting the element $tu$ by $t$, we complete the proof. \end{proof}
\begin{proposition}\label{openst} Let $f\colon Y\to X$ be a regular mapping with dense image. Then there exists an element $u\in A\{X\}$ such that the mapping $f\colon Y_{f^*(u)}\to X_u$ is open. \end{proposition} \begin{proof}
Let the rings $A\{X\}$ and $A\{Y\}$ be denoted by $C$ and $D$, respectively. Since the image of $f$ is dense, $f^*\colon C\to D$ is injective. So, $C$ can be identified with a subring of $D$. By Lemma~\ref{lemma1}, there exists an element $s\in C$ such that $C_s$ is an integral domain and $C_s\subseteq D_s$. Since $C_s$ is an integral domain and $D_s$ is finitely generated over $C_s$, there exists an element $t\in C$ such that $D_{st}$ is a free $C_{st}$-module (see~\cite[chapter~8, sec.~22, th.~52]{Mu}). Let us denote $st$ by $u$. Then $D_u$ is a faithfully flat algebra over $C_u$; thus, by~\cite[chapter~3, ex.~16]{AM} and~\cite[chapter~5, ex.~11]{AM}, the corresponding mapping $\operatorname{Spec} D_u\to \operatorname{Spec} C_u$ is surjective and has the going-down property. Proposition~\ref{inher}~(1) and~(3) guarantees that the mapping $(\operatorname{PSpec} D)_u\to (\operatorname{PSpec} C)_u$ is surjective and has the going-down property. Let us show that $f\colon Y_u\to X_u$ is open.
It suffices to show that the image of every principal open set is open. Let $Y_t$ be a principal open set; then $$ Y_u\cap Y_t=\bigcup_{\sigma,\tau} Y_{\sigma(u)\tau(t)}. $$ Hence it suffices to consider sets of the form $Y_{\sigma(u)\tau(t)}$, and since $Y_{w}=Y_{\sigma(w)}$, it suffices to consider sets of the form $Y_{uv}$.
To show that the set $f(Y_{uv})$ is open we shall use the criterion~\cite[chapter~7, ex.~22]{AM}. Let $Y'$ and $X'$ be the pseudospectra of $D$ and $C$, respectively. Note that every irreducible closed subset in $X$ has the following form $X'_0\cap X_u$, where $X'_0$ is an irreducible subset in $X'$. Consider the set $f(Y'_{uv})$ and let $X'_0$ be an irreducible closed subset in $X'$. Consider $f(Y'_{uv})\cap X'_0$ and suppose that this set is not empty. We have $$ f(Y'_{uv})\cap X'_0=f(Y'_{uv}\cap f^{-1}(X'_0)). $$ Let $X'_0=V(\frak q)$, where $\frak q\in \operatorname{PSpec} C$. Therefore, $$ f(Y'_{uv})\cap X'_0=f(Y'_{uv}\cap V(\frak q^e)). $$ The last set is not empty. Thus, there exists a prime ideal $\frak q'$ in $D$ such that $\frak q^e\subseteq\frak q'$ and $uv\notin \frak q'$. Since $D_{uv}$ is a flat $C_u$-module, using the same arguments as above, we see that the mapping $\operatorname{Spec} D_{uv}\to \operatorname{Spec} C_{u}$ has the going-down property. Therefore, the mapping $f\colon Y'_{uv}\to X_u$ has the going-down property. Now consider the chain of pseudoprime ideals $(\frak q')^c\supseteq \frak q$ in $C$ and the ideal $\frak q'$ in $D$. Then there exists a pseudoprime ideal $\frak q''$ in $D$ such that $(\frak q'')^c=\frak q$. Therefore, the homomorphism $C/\frak q\to D/\frak q^e$ is injective. Now consider the pair of rings $$ (C/\frak q)_u\subseteq(D/\frak q^e)_{uv}. $$ By Lemma~\ref{lemma1}, there exists an element $s\in C/\frak q$ such that $(C/\frak q)_{us}$ is an integral domain. Then Lemma~\ref{lemma3} guarantees that for some element $t\in (C/\frak q)_{su}$ the mapping $$ \operatorname{Spec} (D/\frak q^e)_{uvst}\to \operatorname{Spec} (C/\frak q)_{ust} $$ is surjective. Since the rings in the last expression are finitely generated algebras over an Artin ring, the mapping $$ \operatorname{Max} (D/\frak q^e)_{uvst}\to \operatorname{Max} (C/\frak q)_{ust} $$ is surjective. 
By Proposition~\ref{techbasic}~(2), it follows that the mapping $$ (\operatorname{PMax} D/\frak q^e)_{uvst}\to (\operatorname{PMax} C/\frak q)_{ust} $$ is surjective. Thus, $X_{ust}\cap (X'_0\cap X)$ is contained in $f(Y_{uv})$. Now we are able to apply the criterion~\cite[chapter~7, ex.~22]{AM}. To complete the proof we need to remember that $\operatorname{PMax} C$ can be identified with $X$ and $\operatorname{PMax} D$ with $Y$. \end{proof}
\subsection{Adjoint construction}\label{sec48}
Let $M$ be an abelian group. Consider the set of all functions on $\Sigma$ taking values in $M$. This set has a natural structure of an abelian group. We shall denote it by $\operatorname{F}(M)$. So, we have $$ \operatorname{F}(M)=M^\Sigma=\{\,f\colon \Sigma\to M\,\}. $$ The group $\Sigma$ acts on $\operatorname{F}(M)$ by the following rule: $(\tau f)(\sigma)=f(\tau^{-1}\sigma)$. For an arbitrary element $\sigma\in\Sigma$ we have a homomorphism of abelian groups $\gamma_\sigma\colon \operatorname{F}(M)\to M$ given by the rule $\gamma_\sigma(f)=f(\sigma)$. For an arbitrary homomorphism of abelian groups $h\colon M\to M'$ we have a homomorphism $\operatorname{F}(h)\colon \operatorname{F}(M)\to \operatorname{F}(M')$ given by the rule $\operatorname{F}(h)f=hf$, where $f\in\operatorname{F}(M)$; this homomorphism commutes with the action of $\Sigma$. The group $\operatorname{F}(M)$ has the following universal property.
\begin{lemma}\label{taylormod} Let $N$ and $M$ be abelian groups. Suppose that $\Sigma$ acts on $N$ by automorphisms. Then for every homomorphism of abelian groups $\varphi\colon N\to M$ and every element $\sigma\in \Sigma$ there is a unique homomorphism of abelian groups $\Phi_{\sigma}\colon N\to \operatorname{F}(M)$ such that the diagram $$ \xymatrix{
&\operatorname{F}(M)\ar[d]^{\gamma_{\sigma}}\\
N\ar[r]^{\varphi}\ar[ru]^{\Phi_\sigma}&M\\ } $$ is commutative and $\Phi_\sigma$ commutes with the action of $\Sigma$. \end{lemma} \begin{proof} If such a homomorphism $\Phi_\sigma$ exists, then it satisfies $\nu \Phi_\sigma(n)=\Phi_\sigma (\nu n)$ for all $\nu\in\Sigma$ and $n\in N$. Therefore, we have $$ \Phi_\sigma(n)(\tau)=\Phi_\sigma(n)((\tau\sigma^{-1})\sigma)= \Phi_\sigma((\tau\sigma^{-1})^{-1}n)(\sigma)=\varphi(\sigma\tau^{-1}n). $$ So, such a homomorphism is unique.
For existence let us define $\Phi_\sigma$ by the rule $$ \Phi_\sigma(n)(\tau)=\varphi(\sigma\tau^{-1}n). $$ It is clear that this mapping is a homomorphism of abelian groups. Now we shall check that it commutes with the action of $\Sigma$. Let $\nu\in\Sigma$; then $$ (\nu\Phi_{\sigma}(n))(\tau)=\Phi_{\sigma}(n)(\nu^{-1}\tau)=\varphi(\sigma(\nu^{-1}\tau)^{-1}n)= \varphi(\sigma\tau^{-1}\nu n)=\Phi_{\sigma}(\nu n)(\tau). $$
\end{proof}
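For a toy illustration of these constructions (ours, not part of the original text), let $\Sigma=\{1,\sigma\}$ be the group of order two. Then $$ \operatorname{F}(M)\cong M\times M,\qquad f\mapsto (f(1),f(\sigma)), $$ the element $\sigma$ acts by interchanging the two coordinates, and $\gamma_1$ is the projection onto the first coordinate. For a $\Sigma$-module $N$ and a homomorphism $\varphi\colon N\to M$, the homomorphism $\Phi_1$ of the lemma is simply $n\mapsto(\varphi(n),\varphi(\sigma n))$.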
Let $A$ be a commutative ring with identity. Let us denote the category of all $A$-modules by $A{-}\mathbf{mod}$. For a given ring $A$ we construct the ring $\operatorname{F}(A)$ and denote the category of all difference $\operatorname{F}(A)$-modules by $\Sigma{-}\operatorname{F}(A){-}\mathbf{mod}$.
Let $M$ be an $A$-module; then we can produce an abelian group $\operatorname{F}(M)$. The group $\operatorname{F}(M)$ has a structure of an $\operatorname{F}(A)$-module given by the rule $(fh)(\sigma)=f(\sigma)h(\sigma)$, where $f\in\operatorname{F}(A)$ and $h\in \operatorname{F}(M)$. As we can see, $\operatorname{F}(M)$ is a difference $\operatorname{F}(A)$-module.
Let $f\colon M\to M'$ be a homomorphism of $A$-modules. Then the homomorphism $\operatorname{F}(f)\colon \operatorname{F}(M)\to \operatorname{F}(M')$ is a difference homomorphism of $\operatorname{F}(A)$-modules. As we can see, these data define a functor.
Let $N$ be a difference $\operatorname{F}(A)$-module. Let $e\in\operatorname{F}(A)$ be the indicator of the identity element of $\Sigma$. Consider the abelian subgroup $eN$ of $N$. We have a homomorphism $A\to \operatorname{F}(A)$ sending every element of $A$ to the corresponding constant function. Therefore, the module $N$ carries a structure of an $A$-module, and $eN$ is a submodule under this action of $A$. There is another way to provide an action of $A$ on $eN$. We have a homomorphism $\gamma_e\colon\operatorname{F}(A)\to A$. For any element $a\in A$ we can take a preimage $x$ of $a$ in $\operatorname{F}(A)$ and define $an=xn$ for $n\in eN$. This action is well defined. Indeed, let $x'$ be another preimage of $a$; then $x-x'=(1-e)y$ for some $y$, and therefore $xn=x'n$ for all $n\in eN$. In particular, for every $a\in A$ we can take the constant function with value $a$; hence the two actions coincide.
The second construction can be described in other terms. For the homomorphism $\gamma_e\colon \operatorname{F}(A)\to A$ and a difference $\operatorname{F}(A)$-module $N$ we consider the module $N\mathop{\otimes}_{\operatorname{F}(A)} A_{\gamma_e}$, where the index $\gamma_e$ reminds us of the structure of an $\operatorname{F}(A)$-module on $A$. So, we have a functor in the other direction.
\begin{theorem}\label{equivmod} The functors \begin{align*} \operatorname{F}\colon& A{-}\mathbf{mod}\to \Sigma{-}\operatorname{F}(A){-}\mathbf{mod}\\ {-}\mathop{\otimes}_{\operatorname{F}(A)}A_{\gamma_e}\colon& \Sigma{-}\operatorname{F}(A){-}\mathbf{mod}\to A{-}\mathbf{mod}\\ \end{align*} are mutually inverse equivalences. \end{theorem} \begin{proof} Consider the composition $G\circ \operatorname{F}$, where $G$ denotes the functor ${-}\mathop{\otimes}_{\operatorname{F}(A)}A_{\gamma_e}$. Let $M$ be an arbitrary $A$-module; then $G\operatorname{F} (M)=e\operatorname{F}(M)$. Now we see that the restriction of $\gamma_e$ to $e\operatorname{F}(M)$ is the desired natural isomorphism.
Now, let $N$ be a difference $\operatorname{F}(A)$-module. Then we have $\operatorname{F} G (N)=\operatorname{F}(eN)$. Using Lemma~\ref{taylormod} we can define a homomorphism $\Phi_N\colon N\to \operatorname{F}(eN)$. Let us show that this homomorphism is a homomorphism of $\operatorname{F}(A)$-modules. Consider $a\in\operatorname{F} (A)$ and $n\in N$, and write $a_\tau$ for the value $a(\tau)$. Then $$ \Phi_N(an)(\tau)=e(\tau^{-1}(an))=e\tau^{-1}(a)\tau^{-1}(n)=ea_{\tau}\tau^{-1}(n) $$ and $$ (a\Phi_N(n))(\tau)=a_\tau \Phi_N(n)(\tau)=e a_\tau \tau^{-1}(n), $$ so the two expressions coincide.
We are going to show that $\Phi_N$ is the desired natural isomorphism. For that we need to show that every $\Phi_N$ is an isomorphism. Let $0\neq n\in N$. We have the equality $1=\sum_\sigma e_\sigma$, where $e_\sigma$ is the indicator of the element $\sigma$. Then $n=\sum_\sigma e_\sigma n$. Therefore, there is a $\sigma$ such that $e_\sigma n\neq 0$. Thus, $e\sigma^{-1}n=\sigma^{-1}(e_{\sigma}n)\neq 0$, so $\Phi_N(n)$ is not the zero function and $\Phi_N$ is injective.
Now consider the function $\Phi_N(e_\sigma \sigma(n))$ and let us calculate its values: $$ \Phi_N(e_\sigma \sigma(n))(\tau)=e(\tau^{-1}(e_\sigma\sigma(n)))=e e_{\tau^{-1}\sigma}\tau^{-1}\sigma (n)= \left\{ \begin{aligned} &en, &\tau=\sigma\\ &0, &\tau\neq\sigma\\ \end{aligned} \right. $$ Therefore, $\Phi_N(e_{\sigma}\sigma(n))=(en)e_{\sigma}$. Since every element of $\operatorname{F}(eN)$ is of the form $\sum_\sigma (en_\sigma)e_\sigma$, we obtain surjectivity, which completes the proof. \end{proof}
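To make the isomorphism concrete (an illustration of ours, not in the original), take $\Sigma=\{1,\sigma\}$ of order two, so that $1=e+e_\sigma$ and $N=eN\oplus e_\sigma N$, with $\sigma$ interchanging the two summands. Under the identification $\operatorname{F}(eN)\cong eN\times eN$ we have $$ \Phi_N(n)=(en,\,e\sigma n). $$ If $\Phi_N(n)=0$, then $en=0$ and $e_\sigma n=\sigma(e\sigma n)=0$, whence $n=en+e_\sigma n=0$; surjectivity is checked just as directly.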
Now let $B$ be an $A$-algebra; then $\operatorname{F}(B)$ is a difference $\operatorname{F}(A)$-algebra, and, conversely, if $D$ is a difference $\operatorname{F}(A)$-algebra, then $D\mathop{\otimes}_{\operatorname{F}(A)}A_{\gamma_e}$ is an $A$-algebra. Moreover, for every ring homomorphism $f\colon B\to B'$ the mapping $\operatorname{F}(f)\colon \operatorname{F}(B)\to\operatorname{F}(B')$ is a difference homomorphism, and for an arbitrary difference homomorphism $h\colon D\to D'$ the mapping $h\mathop{\otimes} Id\colon D\mathop{\otimes}_{\operatorname{F}(A)}A_{\gamma_e}\to D'\mathop{\otimes}_{\operatorname{F}(A)}A_{\gamma_e}$ is a ring homomorphism.
Let us denote by $A{-}\mathbf{alg}$ the category of $A$-algebras and by $\Sigma{-}\operatorname{F}(A){-}\mathbf{alg}$ the category of difference $\operatorname{F}(A)$-algebras. So, we have proved the following theorem.
\begin{theorem}\label{equivalg} The functors \begin{align*} \operatorname{F}\colon& A{-}\mathbf{alg}\to \Sigma{-}\operatorname{F}(A){-}\mathbf{alg}\\ {-}\mathop{\otimes}_{\operatorname{F}(A)}A_{\gamma_e}\colon& \Sigma{-}\operatorname{F}(A){-}\mathbf{alg}\to A{-}\mathbf{alg}\\ \end{align*} are mutually inverse equivalences. \end{theorem}
For the difference ring $\operatorname{F}(A)$ there is a homomorphism $\gamma_e\colon \operatorname{F}(A)\to A$. For every difference $\operatorname{F}(A)$-algebra $B$ the last homomorphism induces a homomorphism $B\to B\mathop{\otimes}_{\operatorname{F}(A)}A_{\gamma_e}$. We shall denote this homomorphism also by $\gamma_e$.
\subsection{Adjoint variety}\label{sec49}
Let $X\subseteq A^n$ be a pseudovariety over a difference closed pseudofield $A$. We know that $A=\operatorname{F}(K)$, where $K$ is an algebraically closed field. We shall associate with $X$ a corresponding algebraic variety over $K$. Moreover, if $X$ has a group structure such that all group laws are regular mappings, then the corresponding algebraic variety will be an algebraic group.
From the previous section we have the equivalence of the categories $K{-}\mathbf{alg}$ and $\Sigma{-}A{-}\mathbf{alg}$. Now we note that a difference $A$-algebra $B$ is difference finitely generated over $A$ iff the algebra $G(B)$ is finitely generated over $K$. For any pseudovariety $X$ we have constructed the ring of regular functions $A\{X\}$. This ring is difference finitely generated over $A$; therefore, the ring $B=G(A\{X\})$ is finitely generated over $K$. The latter ring defines an algebraic variety $X^*$ such that the ring of regular functions $K[X^*]$ coincides with $B$. The variety $X^*$ will be called the adjoint variety for $X$.
Now consider an arbitrary pseudovariety $X\subseteq A^n$. The difference affine line $A$ has a natural structure of an affine space over $K$: indeed, $A=\operatorname{F}(K)=K^m$, where $m=|\Sigma|$. Therefore, the set $X$ can be considered as a subset of $K^{mn}$. We claim that this subset is an algebraic variety over $K$ that can be naturally identified with $X^*$.
Now we shall show that there is a natural bijection between $X$ and $X^*$. Indeed, the pseudovariety $X$ can be naturally identified with $$ \hom_{\Sigma{-}A{-}\mathbf{alg}}(A\{X\},A). $$ By Theorem~\ref{equivalg}, the last set can be naturally identified with $$ \hom_{K{-}\mathbf{alg}}(K[X^*],K), $$ and the last set coincides with $X^*$. So, we have constructed a mapping $\varphi\colon X\to X^*$.
We shall describe the bijection $\varphi$ explicitly. Consider a pseudovariety $X$ over a difference closed pseudofield $A$ and let $A=\operatorname{F}(K)$, where $K$ is algebraically closed. Suppose that $X$ is a subset of $A^n$. Then the algebra $A\{X\}$ is of the form $A\{x_1,\ldots,x_n\}/I(X)$, where $I(X)$ is the ideal of definition of $X$. For every point $(a_1,\ldots,a_n)\in X$ we construct a difference homomorphism $A\{X\}\to A$ by the rule $f\mapsto f(a_1,\ldots,a_n)$. Using this rule the pseudovariety $X$ can be identified with $$ \{\, \varphi\colon A\{X\}\to A \mid \varphi\mbox{ is a }\Sigma\mbox{
homomorphism },\: \varphi|_{A}=Id\,\} $$ Then it follows from Theorem~\ref{taylor} that this set coincides with $$ \{\, \varphi\colon A\{X\}\to K\mid \varphi\mbox{ is a homomorphism},
\: \varphi|_{A}=\gamma_e \,\} $$ Let $e\in A$ be the indicator of the identity element of $\Sigma$. Then for every homomorphism $\varphi$ such that
$\varphi|_{A}=\gamma_e$ the element $1-e$ lies in $\ker\varphi$. Identifying the subring $eA$ with $K$, we see that the previous set coincides with $$
\{\,\varphi\colon eA\{X\}\to K\mid\varphi\mbox{ is a homomorphism},\:\varphi|_{eA}=Id\,\} $$ Since $A\{X\}=A\{x_1,\ldots,x_n\}/I(X)$, the ring $eA\{X\}$ is of the form $$ K[\ldots,\sigma x_i,\ldots]/eI(X). $$ Every homomorphism $f\colon eA\{X\}\to K$ is identified with the point $$
(\ldots,f(\sigma x_i),\ldots)\in X^*\subseteq K^{n|\Sigma|}. $$ If $\xi$ represents an element of $X$ and $\epsilon$ represents the corresponding element of $X^*$, then we have the following commutative diagram $$ \xymatrix{
A\{X\}\ar[r]^{\xi}\ar[d]^{\gamma_e}&A\ar[d]^{\gamma_e}\\
K[X^*]\ar[r]^{\epsilon}&K\\ } $$
Let $f\colon X\to Y$ be a regular mapping of pseudovarieties $X$ and $Y$. This mapping induces a difference homomorphism $\bar f\colon A\{Y\}\to A\{X\}$. Applying the functor $G$, we get $G(\bar f)\colon K[Y^*]\to K[X^*]$. The last homomorphism induces a regular mapping $f^*\colon X^*\to Y^*$.
\begin{proposition} Let $f\colon X\to Y$ be a regular mapping of pseudovarieties $X$ and $Y$. Then \begin{enumerate} \item The diagram $$ \xymatrix{
X\ar[r]^{f}\ar[d]^{\varphi}&Y\ar[d]^{\varphi}\\
X^*\ar[r]^{f^*}&Y^*\\ } $$ is commutative. \item $\varphi$ is a homeomorphism. \item $\varphi$ preserves unions, intersections, and complements. \end{enumerate} \end{proposition} \begin{proof} (1) The proof follows from commutativity of the diagram $$ \xymatrix{
A\{Y\}\ar[r]^{\bar f}\ar[d]^{\gamma_e}&A\{X\}\ar[r]^{\xi}\ar[d]^{\gamma_e}&A\ar[d]^{\gamma_e}\\
K[Y^*]\ar[r]^{G(\bar f)}&K[X^*]\ar[r]^{\varphi(\xi)}&K\\ } $$
(2) Let $I$ be an ideal of the ring $K[X^*]$; then $\operatorname{F}(I)$ is an ideal of $A\{X\}$. It follows from Theorem~\ref{equivmod} that a difference homomorphism $\xi\colon A\{X\}\to A$ maps $\operatorname{F}(I)$ to zero iff the homomorphism $\varphi(\xi)\colon K[X^*]\to K$ maps the ideal $I$ to zero. Conversely, by Theorem~\ref{equivmod}, every difference ideal of $A\{X\}$ has the form $\operatorname{F}(I)$ for some ideal $I$ of $K[X^*]$.
(3) Every bijection preserves such operations. \end{proof}
Here we recall that the set $X^*$ coincides with $X$ if we consider the affine pseudoline $\operatorname{F}(K)$ as the affine space $K^{|\Sigma|}$. So, the morphism $f^*\colon X^*\to Y^*$ is just the initial mapping $f\colon X\to Y$ under the identifications of $X$ with $X^*$ and $Y$ with $Y^*$. In other words, there is no difference between pseudovarieties over $\operatorname{F}(K)$ and algebraic varieties over $K$.
Moreover, we shall show that, using this correspondence between pseudovarieties and algebraic varieties, all geometric results such as Propositions~\ref{imst}, \ref{constrst}, and~\ref{openst} can be derived from the same theorems for algebraic varieties. However, if we analyze the proofs of the mentioned propositions, we see that they remain valid even for an arbitrary pseudofield if we take pseudomaximal (or pseudoprime) spectra instead of pseudovarieties. That is why we have given direct proofs of Propositions~\ref{imst}, \ref{constrst}, and~\ref{openst}. Nevertheless, we now derive them from results about algebraic varieties.
\begin{theorem} Let $f\colon X\to Y$ be a regular mapping of pseudovarieties. Then \begin{enumerate} \item If $Y$ is irreducible and $f$ is dominant, then the image of $f$ contains an open subset. \item For every constructible set $E\subseteq X$ the set $f(E)$ is also constructible. \item If the image of $f$ is dense, then there is an open subset $U\subseteq X$ such that the restriction of $f$ onto $U$ is an open mapping. \end{enumerate} \end{theorem} \begin{proof} (1) Consider $f^*\colon X^*\to Y^*$. Then the image of $f^*$ is dense and $Y^*$ is irreducible. Therefore, it follows from~\cite[chapter~5, ex.~21]{AM} that the image of $f^*$ contains an open subset $U$. The corresponding subset $\varphi^{-1}(U)$ is an open subset contained in the image of $f$.
(2) As in the previous situation we consider the mapping $f^*\colon X^*\to Y^*$. The set $E$ is of the form $E=(U_1\cap V_1)\cup\ldots\cup (U_n\cap V_n)$, where the $U_i$ are open and the $V_i$ are closed. Since $\varphi$ is a homeomorphism, the set $\varphi(E)$ is constructible. Then it follows from~\cite[chapter~7, ex.~23]{AM} that $f^*(\varphi(E))$ is constructible in $Y^*$. Applying $\varphi^{-1}$, we conclude that $f(E)$ is constructible.
(3) Again consider the regular mapping $f^*\colon X^*\to Y^*$. We can find an element $s\in K[Y^*]$ such that $K[Y^*]_s$ is an integral domain. Then it follows from~\cite[chapter~8, sec.~22, th.~52]{Mu} that there is an element $u\in K[Y^*]$ such that the ring $K[X^*]_{su}$ is faithfully flat over $K[Y^*]_{su}$. Hence, from~\cite[chapter~7, ex.~25]{AM} we conclude that the mapping $f^*\colon (X^*)_{su}\to (Y^*)_{su}$ is open. Since $\varphi$ is a homeomorphism, we get the desired result. \end{proof}
Now suppose that $X$ is a pseudovariety with a group structure such that all group laws are regular mappings. The latter means that $A\{X\}$ is a Hopf algebra over $A$. So, we have \begin{align*} \mu^*\colon& A\{X\}\to A\{X\times X\}=A\{X\}\mathop{\otimes}_A A\{X\} \\ i^*\colon& A\{X\}\to A\{X\}\\ \varepsilon^*\colon& A\{X\}\to A\\ \end{align*} and these mappings satisfy all the necessary identities. Since the functors $\operatorname{F}$ and $G$ are equivalences, they preserve limits and colimits; in particular, they preserve products and tensor products. So, applying the functor $G$ to $A\{X\}$, we get a Hopf algebra $K[X^*]=G(A\{X\})$ over the field $K$, because $G$ preserves all the identities satisfied by the mappings $\mu^*$, $i^*$, and $\varepsilon^*$.
\subsection{Dimension}\label{sec410}
Let $X\subseteq A^n$ be a pseudovariety over a difference closed pseudofield $A$, and as above we suppose that $A=\operatorname{F}(K)$, where $K$ is an algebraically closed field. Since $A$ is an Artin ring and $A\{X\}$ is a finitely generated algebra over $A$, the ring $A\{X\}$ has finite Krull dimension. Therefore, we can define $\dim X$ as $\dim A\{X\}$.
It follows from Theorem~\ref{equivalg} that $A\{X\}=\operatorname{F}(K[X^*])$. In other words, $A\{X\}$ is a finite product of copies of the ring $K[X^*]$. Therefore, the algebras $A\{X\}$ and $K[X^*]$ have the same Krull dimension. So, we have the following result.
\begin{proposition} For an arbitrary pseudovariety $X$ we have $\dim X=\dim X^*$. \end{proposition}
It should be noted that the affine pseudoline $A$ has dimension $|\Sigma|$. Moreover, we have a more general result.
\begin{proposition}
An affine pseudospace $A^n$ has dimension $n|\Sigma|$. \end{proposition} \begin{proof}
The ring of regular functions on $A^n$ coincides with the ring of difference polynomials $A\{y_1,\ldots,y_n\}$. Its image under the functor $G$ coincides with $K[\ldots,\sigma y_i,\ldots]$, a polynomial ring in $n|\Sigma|$ variables, whence the claim. \end{proof}
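For instance (our illustration, not in the original), let $\Sigma$ be of order two with generator $\sigma$, so that $A=\operatorname{F}(K)\cong K\times K$. Then $$ G(A\{y\})\cong K[y,\sigma y], $$ a polynomial ring in two variables, and the affine pseudoline has dimension $2=|\Sigma|$, in accordance with the identification $A\cong K^2$.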
The last result agrees with our intuition about the structure of $A^n$. Indeed, $A$ can be identified with $K^{|\Sigma|}$, so $A^n$ coincides with $K^{n|\Sigma|}$.
\paragraph{Acknowledgement.}
The idea to write this paper appeared after a conversation with Michael Singer in Newark in 2007. The most significant machinery, such as inheriting and the Taylor homomorphism, was developed under the influence of William Keigher's papers. Eric Rosen brought to my attention details of ACFA theory; he explained the algebraic meaning of logical results to me. Alexander Levin helped me learn difference algebra. Alexey Ovchinnikov suggested using this new geometric theory for the Picard-Vessiot theory with difference parameters.
\end{document}
\begin{document}
\title{Mild assumptions for the derivation of \\ Einstein's effective viscosity formula}
\begin{abstract} We provide a rigorous derivation of Einstein's formula for the effective viscosity of dilute suspensions of $n$ rigid balls, $n \gg 1$, set in a volume of size $1$. So far, most justifications were carried out under a strong assumption on the minimal distance between the balls: $d_{min} \ge c n^{-\frac{1}{3}}$, $c > 0$. We relax this assumption into a set of two much weaker conditions: one expresses essentially that the balls do not overlap, while the other one gives a control of the number of balls that are close to one another. In particular, our analysis covers the case of suspensions modelled by standard Poisson processes with almost minimal hardcore condition. \end{abstract}
\section{Introduction} Mixtures of particles and fluids, called {\em suspensions}, are involved in many natural phenomena and industrial processes. The understanding of their rheology, notably the so-called {\em effective viscosity} $\mu_{eff}$ induced by the particles, is therefore crucial. Many experiments or simulations have been carried out to determine $\mu_{eff}$ \cite{Guaz}. For solid volume fractions $\lambda$ large enough, they seem to exhibit some generic behaviour in terms of the ratio $\lambda/\lambda_c$ between $\lambda$ and the maximal flowable solid volume fraction $\lambda_c$, {\it cf.} \cite{Guaz}. Still, a theoretical derivation of the relation $\mu_{eff} = \mu_{eff}(\lambda/\lambda_c)$ observed experimentally is missing, due to the complex interactions involved: hydrodynamic interactions, direct contacts, \dots Mathematical works related to the analysis of suspensions are mostly limited to the {\em dilute regime}, that is, when $\lambda$ is small.
\noindent In these mathematical works, the typical model under consideration is as follows. One considers $n$ rigid balls $B_i = \overline{B(x_i, r_n)}$, $1 \le i \le n$, in a fixed compact subset of ${\mathbb R}^3$, surrounded by a viscous fluid.
The inertia of the fluid is neglected, leading to the Stokes equations \begin{equation} \label{Sto} \left\{ \begin{aligned} -\mu \Delta u_n + {\nabla} p_n & = f_n, \quad x \in \Omega_n = {\mathbb R}^3 \setminus \cup B_i , \\ \hbox{div \!} u_n & = 0, \quad x \in \Omega_n , \\ u_n\vert_{B_i} & = u_{n,i} + \omega_{n,i} \times (x-x_i). \end{aligned} \right. \end{equation} The last condition expresses a no-slip condition at the rigid spheres, where the velocity is given by some translation velocities $u_{n,i}$ and some rotation vectors $\omega_{n,i}$, $1 \le i \le n$. We neglect the inertia of the balls: the $2n$ vectors $u_{n,i}, \omega_{n,i}$ can then be seen as Lagrange multipliers for the $2n$ conditions \begin{equation} \label{Sto2} \begin{aligned} \int_{{\partial} B_i} \sigma_\mu(u_n,p_n) \nu & = - \int_{B_i} f_n , \quad \int_{{\partial} B_i} \sigma_\mu(u_n,p_n) \nu \times (x-x_i) = - \int_{B_i} (x-x_i) \times f_n \end{aligned} \end{equation} where $\sigma_\mu(u_n,p_n) = 2\mu D(u_n) - p_n I$ is the usual Newtonian stress tensor, and $\nu$ the normal vector pointing outward $B_i$.
\noindent
The general belief is that one should be able to replace \eqref{Sto}-\eqref{Sto2} by an effective Stokes model, with a modified viscosity taking into account the average effect of the particles:
\begin{equation} \label{Stoeff} \left\{ \begin{aligned} -\hbox{div \!} (2 \mu_{eff} D u_{eff} ) + {\nabla} p_{eff} & = f, \quad x \in {\mathbb R}^3, \\ \hbox{div \!} u_{eff} & = 0, \quad x \in {\mathbb R}^3, \end{aligned} \right. \end{equation} with $D = \frac{1}{2}({\nabla} + {\nabla}^t)$ the symmetric gradient. Of course, such average model can only be obtained asymptotically, namely when the number of particles $n$ gets very large. Moreover, for averaging to hold, it is very natural to impose some averaging on the distribution of the balls itself. Our basic hypothesis will therefore be the existence of a limit density, through \begin{equation} \label{A0} \tag{A0} \frac{1}{n} \sum_i \delta_{x_i} \xrightarrow[n \rightarrow +\infty]{} \rho(x) dx \quad \text{weakly in the sense of measures} \end{equation}
where $\rho \in L^\infty({\mathbb R}^3)$ is assumed to be zero outside a smooth open bounded set $\mathcal{O}$. After playing on the length scale, we can always assume that $|\mathcal{O}| = 1$. Of course, we expect $\mu_{eff}$ to be different from $\mu$ only in this region $\mathcal{O}$ where the particles are present.
\noindent
The volume fraction of the balls is then given by $\lambda = \frac{4\pi}{3} n r_n^3$. We shall consider the case where $\lambda$ is small (dilute suspension), but independent of $n$ so as to derive a non-trivial effect as $n \rightarrow +\infty$. The mathematical questions that follow are:
\begin{itemize}
\item Q1 : Can we approximate system \eqref{Sto}-\eqref{Sto2} by a system of the form \eqref{Stoeff} for large $n$?
\item Q2 : If so, can we provide a formula for $\mu_{eff}$ inside $\mathcal{O}$? In particular, for small $\lambda$, can we derive an expansion
$$ \mu_{eff} = \mu + \lambda \mu_1 + \dots \quad ? $$
\end{itemize}
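\noindent Before turning to these questions, let us record an elementary consequence of the definition of $\lambda$: fixing the volume fraction determines the common radius of the balls, through
$$ \lambda = \frac{4\pi}{3} n r_n^3 \quad \Longleftrightarrow \quad r_n = \Big( \frac{3\lambda}{4\pi} \Big)^{\frac{1}{3}} n^{-\frac{1}{3}}. $$
In particular, $r_n$ is a small multiple (of order $\lambda^{\frac{1}{3}}$) of the typical interparticle distance $n^{-\frac{1}{3}}$.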
Regarding Q1, the only work we are aware of is the recent paper \cite{DuerinckxGloria19}. It shows that $u_n$ converges to the solution $u_{eff}$ of an effective model of the type \eqref{Stoeff}, under two natural conditions:
\begin{enumerate}
\item[i)] the balls satisfy the separation condition $\inf_{i \neq j} |x_i - x_j| \ge M \ r_n$, $M > 2$. Note that this is a slight reinforcement of the natural constraint that the balls do not overlap.
\item[ii)] the centers of the balls are obtained from a stationary ergodic point process.
\end{enumerate}
We refer to \cite{DuerinckxGloria19} for all details. Note that in the scalar case, with the Laplacian instead of the Stokes operator, similar results can be found in \cite[paragraph 8.6]{MR1329546}.
\noindent
Q2, and more broadly quantitative aspects of dilute suspensions, have been studied for long. The pioneering work is due to Einstein \cite{Ein}. By {\em neglecting the interaction between the particles}, he computed a first order approximation of the effective viscosity of homogeneous suspensions: $$ \mu_{eff} = (1 + \frac{5}{2} \lambda) \mu \quad \text{ in } \mathcal{O}.$$ This celebrated formula was confirmed experimentally afterwards. It was later extended to the inhomogenous case, with formula \begin{equation} \label{Almog-Brenner} \mu_{eff} = (1 + \frac{5}{2} \lambda \rho) \mu, \end{equation} see \cite[page 16]{AlBr}. Further works investigated the $O(\lambda^2)$ approximation of the effective viscosity, {\it cf.} \cite{BaGr1} and the recent analysis \cite{DGV_MH, GerMec20}.
\noindent Our concern in the present paper is the justification of Einstein's formula. To our knowledge, the first rigorous studies on this topic are \cite{MR813656} and \cite{MR813657}: they rely on homogenization techniques, and are restricted to suspensions that are periodically distributed in a bounded domain. A more complete justification, still in the periodic setting but based on variational principles, can be found in \cite{MR2982744}. Recently, the periodicity assumption was relaxed in \cite{HiWu}, \cite{NiSc}, and replaced by an assumption on the minimal distance: \begin{equation} \label{A1} \tag{A1}
\text{There exists an absolute constant $c$, such that } \quad \forall n, \forall 1 \le i \neq j \le n, \quad |x_i - x_j| \ge c n^{-\frac{1}{3}}. \end{equation} For instance, introducing the solution $u_{E}$ of Einstein's approximate model \begin{equation} \label{Sto_E} -\hbox{div \!} (2 \mu_E Du_E) + {\nabla} p_E = f, \quad \hbox{div \!} u_E = 0 \quad \text{ in } \: {\mathbb R}^3 \end{equation} with $\mu_E = (1 + \frac{5}{2} \lambda \rho) \mu$, it is shown in \cite{HiWu} that for all $ 1 \le p < \frac{3}{2}$,
$$ \limsup_{n \to \infty} ||u_n - u_E||_{L^p_{loc}({\mathbb R}^3)} = O(\lambda^{1+\theta}), \quad \theta = \frac{1}{p} - \frac{2}{3}. $$ We refer to \cite{HiWu} for refined statements, including quantitative convergence in $n$ and treatment of polydisperse suspensions.
\noindent Although it is a substantial gain over the periodicity assumption, hypothesis \eqref{A1} on the minimal distance is still strong. In particular, it is much more stringent than the condition that the rigid balls cannot overlap. Indeed, this latter condition reads: $\forall i \neq j$, $|x_i - x_j| \ge 2 r_n$, or equivalently $|x_i - x_j| \ge c \, \lambda^{1/3} n^{-\frac{1}{3}}$, with $c = 2 (\frac{3}{4\pi})^{1/3}$. It follows from \eqref{A1} at small $\lambda$. On the other hand, one could argue that a simple non-overlapping condition is not enough to ensure the validity of Einstein's formula. Indeed, the formula is based on neglecting the interaction between particles, which is incompatible with too much clustering in the suspension. Still, one can hope that if the balls are not too close to one another {\em on average}, the formula still holds.
\noindent This is the kind of result that we prove here. Namely, we shall replace \eqref{A1} by a set of two relaxed conditions: \begin{align} \label{B1} \tag{B1}
& \text{There exists $M >2$, such that} \quad \forall n, \: \forall 1 \le i \neq j \le n, \quad |x_i - x_j| \ge M r_n. \\
\label{B2} \tag{B2}
& \text{There exist $C,\alpha > 0$, such that} \quad \forall \eta > 0, \quad \#\{i, \: \exists j \neq i, \: |x_i - x_j| \le \eta n^{-\frac13}\} \le C \eta^\alpha n \end{align} Note that \eqref{B1} is slightly stronger than the non-overlapping condition, and was already present in the work \cite{DuerinckxGloria19} to ensure the existence of an effective model. It is possible to relax this condition into a moment bound on the particle separation, see Remark \ref{rem:recentbibli} and Section \ref{sec:B1}.
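\noindent Let us note in passing that \eqref{A1} implies \eqref{B2} for any $\alpha > 0$: under \eqref{A1}, the set appearing in \eqref{B2} is empty when $\eta < c$, while for $\eta \ge c$ its cardinality is trivially bounded by
$$ \#\{i, \: \exists j \neq i, \: |x_i - x_j| \le \eta n^{-\frac13}\} \le n \le c^{-\alpha} \eta^\alpha n, $$
so that \eqref{B2} holds with $C = c^{-\alpha}$.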
As regards \eqref{B2}, one can show that it is satisfied almost surely as $n \to \infty$ when the particle positions are generated by a stationary ergodic point process, provided the process does not favor close pairs of points too much. In particular, it is satisfied by a (hard-core) Poisson point process for $\alpha = 3$. Moreover, \eqref{B2} is satisfied for $\alpha = 3$ with probability tending to $1$ as $n \to \infty$ for independent and identically distributed particles. We postpone further discussion to Section \ref{sec:prob}.
\noindent Under these general assumptions, we obtain: \begin{theorem} \label{main} Let $\lambda > 0$, $f \in L^1({\mathbb R}^3) \cap L^\infty({\mathbb R}^3)$. For all $n$, let $r_n$ be such that $\displaystyle \lambda = \frac{4\pi}{3} n r_n^3$, let $f_n \in L^{\frac65}({\mathbb R}^3)$, and let $u_n \in \dot{H}^1({\mathbb R}^3)$ be the solution of \eqref{Sto}-\eqref{Sto2}. Assume \eqref{A0}-\eqref{B1}-\eqref{B2}, and that $f_n \rightarrow f$ in $L^{\frac65}({\mathbb R}^3)$. Then, there exists $p_{min} > 1$ such that for any $p < p_{min}$, any $q < \frac{3 p_{min}}{3 - p_{min}}$, one can find $\delta > 0$ with the estimate
$$ ||{\nabla} (u - u_E)||_{L^p({\mathbb R}^3)} + \limsup_{n \rightarrow +\infty} ||u_n - u_E||_{L^q(K)} = O(\lambda^{1+\delta}), \quad \forall K \Subset {\mathbb R}^3, \quad \text{as } \: \lambda \rightarrow 0, $$ where $u$ is any weak accumulation point of $u_n$ in $\displaystyle \dot{H}^1({\mathbb R}^3)$ and
$u_E$ satisfies Einstein's approximate model \eqref{Sto_E}.
\end{theorem} \noindent Here, we use the notation $\dot H^1({\mathbb R}^3)$ for the homogeneous Sobolev space
$\dot H^1({\mathbb R}^3) = \{ w \in L^6({\mathbb R}^3) : \nabla w \in L^2({\mathbb R}^3)\}$ equipped with the $L^2$ norm of the gradient.
\begin{remark} \label{rem:exponents}
The following explicit formula for $p_{min}$ and $\delta$ will be obtained in the proof of the theorem:
\begin{align*}
p_{min} = 1 + \frac{\alpha}{6 + \alpha}, \qquad \delta = \frac 1 r - \frac{6}{6 + (2-r)\alpha}, \qquad \text{where } r = \max\left\{p,\frac{3q}{3+q} \right\}.
\end{align*} \end{remark}
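\noindent As a quick sanity check of these exponents (an elementary computation on our part), note that $\delta$ is decreasing in $r$ and vanishes exactly at $r = p_{min}$. For instance, with the Poisson exponent $\alpha = 3$, one has $p_{min} = 1 + \frac{3}{9} = \frac43$ and
$$ \delta\big|_{r = \frac43} = \frac34 - \frac{6}{6 + \frac23 \cdot 3} = \frac34 - \frac34 = 0. $$
Hence $\delta > 0$ precisely when $r < p_{min}$, that is when $p < p_{min}$ and $q < \frac{3 p_{min}}{3 - p_{min}}$.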
\begin{remark} \label{rem:recentbibli}
Since the preprint of our paper, several further results have appeared which we briefly discuss in this remark.
In \cite[version 1]{DuerinckxGloria20}, an extensive study of the effective viscosity at low volume fraction was performed in the context of stationary ergodic particle configurations, under suitable versions of \eqref{B1}-\eqref{B2}. It includes results on the $O(\lambda^2)$ and higher order corrections, see also the recent paper \cite{Gerard-Varet20}. As regards Einstein's $O(\lambda)$ formula, a result analogous to Theorem \ref{main} was shown with methods of a more probabilistic flavour.
It was subsequently shown in \cite{Duerinckx20} and \cite[version 2]{DuerinckxGloria20} that both the existence of an effective viscosity and Einstein's formula hold when relaxing condition \eqref{B1} into a moment bound on the particle separation. We will argue in Section \ref{sec:B1} that our main result still holds under a similar, milder assumption.
Finally, in \cite{HoeferSchubert20}, results have been obtained concerning the coupling of Einstein's formula to the time evolution of sedimenting particles. \end{remark}
\noindent The rest of the paper is dedicated to the proof of Theorem \ref{main}.
\section{Main steps of proof} To prove Theorem \ref{main}, we shall rely on an enhancement of the general strategy explained in \cite{DGV}, to justify various effective models for conducting and fluid media. Let us point out that one of the examples considered in \cite{DGV} is the scalar version of \eqref{Sto}-\eqref{Sto2}. It leads to a proof of a scalar analogue of Einstein's formula, under assumptions \eqref{A0}, \eqref{B1}, plus an abstract assumption intermediate between \eqref{A1} and \eqref{B2}. We refer to the discussion at the end of \cite{DGV} for more details. Nevertheless, to justify the effective fluid model \eqref{Sto_E} under the mild assumption \eqref{B2} will require several new steps. The main difficulty will be to handle particles that are close to one another, and will involve sharp $L^p$ estimates similar to those of \cite{GerMec20}.
\noindent Concretely, let $\varphi$ be a smooth and compactly supported divergence-free vector field. For each $n$, we introduce the solution $\phi_n \in \dot{H}^1({\mathbb R}^3)$ of \begin{equation} \label{Sto_phi} \begin{aligned} - \hbox{div \!}(2\mu D \phi_n) + {\nabla} q_n & = \hbox{div \!} (5 \lambda \mu \rho D \varphi) \: \text{ in } \: \Omega_n, \\
\hbox{div \!} \phi_n & = 0 \: \text{ in } \: \Omega_n, \\
\phi_n & = \varphi + \phi_{n,i} + w_{n,i} \times (x-x_i) \: \text{ in } \: B_i, \: 1 \le i \le n \end{aligned} \end{equation} where the constant vectors $\phi_{n,i}$, $w_{n,i}$ are associated to the constraints \begin{equation} \label{Sto2_phi} \begin{aligned} \int_{{\partial} B_i} \sigma_\mu(\phi_n,q_n) \nu & = - \int_{{\partial} B_i} 5 \lambda \mu \rho D \varphi \nu, \\
\int_{{\partial} B_i}(x-x_i) \times \sigma_\mu(\phi_n,q_n) \nu & = - \int_{{\partial} B_i}(x-x_i) \times 5 \lambda \mu \rho D \varphi \nu.
\end{aligned} \end{equation} Testing \eqref{Sto} with $\varphi - \phi_n$, we find after a few integrations by parts that $$ \int_{{\mathbb R}^3} 2\mu_E Du_n : D \varphi = \int_{{\mathbb R}^3} f_n \cdot \varphi - \int_{{\mathbb R}^3} f_n \cdot \phi_n. $$ Testing \eqref{Sto_E} with $\varphi$, we find $$ \int_{{\mathbb R}^3} 2\mu_E Du_E : D \varphi = \int_{{\mathbb R}^3} f \cdot \varphi. $$ Combining both, we end up with \begin{equation} \label{weak_estimate} \int_{{\mathbb R}^3} 2\mu_E D(u_n - u_E) : D \varphi = \int_{{\mathbb R}^3} (f_n - f) \cdot \varphi - \int_{{\mathbb R}^3} f_n \cdot \phi_n. \end{equation} We recall that the vector fields $u_n, u_E, \phi_n$ depend implicitly on $\lambda$.
\noindent
The main point will be to show \begin{proposition} \label{main_prop} There exists $p_{min} > 1$ such that for all $p < p_{min}$, there exists $\delta > 0$ and $C > 0$, independent of $\varphi$, such that \begin{equation} \label{estimateR}
\limsup_{n \to \infty} \big| \int_{{\mathbb R}^3} f_n \cdot \phi_n \big| \le C \lambda^{1+\delta} ||{\nabla} \varphi||_{L^{p'}}, \quad p' = \frac{p}{p-1}. \end{equation}
\end{proposition} \noindent Let us show how the theorem follows from the proposition. First, by standard energy estimates, we find that $u_n$ is bounded in $\dot{H}^1({\mathbb R}^3)$ uniformly in $n$. Let $u = \lim u_{n_k}$ be a weak accumulation point of $u_n$ in this space. Taking the limit in \eqref{weak_estimate}, we get $$ \int_{{\mathbb R}^3} 2\mu_E D(u - u_E) : D \varphi = \langle R, \varphi \rangle $$ where $\langle R , \varphi \rangle = \lim_{k \rightarrow +\infty} \int_{{\mathbb R}^3} f_{n_k} \cdot \phi_{n_k}$.
Recall that $\varphi$ is an arbitrary smooth and compactly supported divergence-free vector field and that such functions are dense in the homogeneous Sobolev space $\dot{W}^{1,p'}_\sigma$ of divergence-free functions. Thus, Proposition \ref{main_prop} implies that $R$ is an element of $\dot{W}_\sigma^{-1,p}$ with $||R||_{\dot{W}_\sigma^{-1,p}} = O(\lambda^{1+\delta})$. Moreover, the previous identity is the weak formulation of $$ - \hbox{div \!}(2 \mu_E D(u - u_E)) + {\nabla} q = R, \quad \hbox{div \!} (u - u_E) = 0 \quad \text{ in } \: {\mathbb R}^3. $$ Writing these Stokes equations with non-constant viscosity as $$ -\mu \Delta (u - u_E) + {\nabla} q = R + \hbox{div \!} (5 \lambda \mu \rho D(u-u_E)), \quad \hbox{div \!} (u - u_E) = 0 \quad \text{ in } \: {\mathbb R}^3 $$ and using standard estimates for this system, we get
$$ ||{\nabla} (u-u_E)||_{L^p} \le C \left(||R||_{\dot{W}^{-1,p}_\sigma} + \lambda ||{\nabla} (u-u_E)||_{L^p} \right). $$ For $\lambda$ small enough, the last term is absorbed by the left-hand side, and finally
$$ ||{\nabla} (u-u_E)||_{L^p({\mathbb R}^3)} \le C\lambda^{1+\delta}$$ which implies the first estimate of the theorem. Then, by Sobolev imbedding, for any $q \le \frac{3p}{3-p}$, and any compact $K$, \begin{equation} \label{Sob_imbed}
||u-u_E||_{L^q(K)} \le C_{K,q} \, \lambda^{1+\delta}.
\end{equation} We claim that
$ \limsup_{n \to \infty} ||u_n-u_E||_{L^q(K)} \le C_{K,q} \, \lambda^{1+\delta}$. Otherwise, there exists a subsequence $u_{n_k}$ and ${\varepsilon} > 0$ such that $\displaystyle ||u_{n_k} - u_E||_{L^q(K)} \ge C_{K,q} \, \lambda^{1+\delta} + {\varepsilon}$ for all $k$. Denoting by $u$ a (weak) accumulation point of $u_{n_k}$ in $\dot{H}^1$, Rellich's theorem implies that, for a subsequence still denoted $u_{n_k}$, $||u_{n_k} - u||_{L^q(K)} \rightarrow 0$, because $q < 6$ (for $p_{min}$ taken small enough). Combining this with \eqref{Sob_imbed}, we reach a contradiction. As $p$ is arbitrary in $(1, p_{min})$, $q \leq \frac{3p}{3-p}$ is arbitrary in $(1, \frac{3 p_{min}}{3 - p_{min}})$. The last estimate of the theorem is proved.
\noindent It remains to prove Proposition \ref{main_prop}. To this end, we need a better understanding of the solution $\phi_n$ of \eqref{Sto_phi}-\eqref{Sto2_phi}. Neglecting any interaction between the balls, a natural attempt is to approximate $\phi_n$ by \begin{equation} \label{approx_phi} \phi_n \approx \phi_{{\mathbb R}^3} + \sum_i \phi_{i,n} \end{equation} where $\phi_{{\mathbb R}^3}$ is the solution of \begin{equation} \label{eq_phi_R3}
- \mu \Delta \phi_{{\mathbb R}^3} + {\nabla} p_{{\mathbb R}^3} = \hbox{div \!}(5 \lambda \mu \rho D\varphi), \quad \hbox{div \!} \phi_{{\mathbb R}^3} = 0 \quad \text{in } \: {\mathbb R}^3
\end{equation} and $\phi_{i,n}$ solves \begin{equation} \label{eq_phi_i_n}
- \mu \Delta \phi_{i,n} + {\nabla} p_{i,n} = 0, \quad \hbox{div \!}\phi_{i,n} = 0 \quad \text{ outside } \: B_i, \quad \phi_{i,n}\vert_{B_i}(x) = D \varphi(x_i) \, (x-x_i)
\end{equation} Roughly, the idea of approximation \eqref{approx_phi} is that $\phi_{{\mathbb R}^3}$ adjusts to the source term in \eqref{Sto_phi}, while for all $i$, $\phi_{i,n}$ adjusts to the boundary condition at the ball $B_i$. Indeed, using a Taylor expansion of $\varphi$ at $x_i$, and splitting ${\nabla} \varphi(x_i)$ between its symmetric and skew-symmetric part, we find $$ \phi_n\vert_{B_i}(x) \approx D\varphi(x_i) \, (x- x_i) \: + \: \text{\em rigid vector field} = \phi_{i,n}\vert_{B_i}(x) \: + \: \text{\em rigid vector field}. $$ Moreover, $\phi_{i,n}$ can be shown to generate no force and torque, so that the extra rigid vector fields (whose role is to ensure the no-force and no-torque conditions), should be small.
\noindent Still, approximation \eqref{approx_phi} may be too crude: the vector fields $\phi_{j,n}$, $j \neq i$, have a non-trivial contribution at $B_i$, and for the balls $B_j$ close to $B_i$, which are not excluded by our relaxed assumption \eqref{B1}, these contributions may be relatively large. We shall therefore modify the approximation, restricting the sum in \eqref{approx_phi} to balls far enough from the others.
\noindent Therefore, for $\eta > 0$, we introduce a {\em good} and a {\em bad} set of indices: \begin{equation} \label{good_bad_sets}
\mathcal{G}_\eta = \{ 1 \le i \le n, \: \forall j \neq i, |x_i - x_j| \ge \eta n^{-\frac{1}{3}} \}, \quad \mathcal{B}_\eta = \{1, \dots, n\} \setminus \mathcal{G}_\eta. \end{equation} The good set $\mathcal{G}_\eta$ corresponds to balls that are at least $\eta n^{-\frac{1}{3}}$ away from all the others. The parameter $\eta > 0$ will be specified later: we shall consider $\eta = \lambda^\theta$ for some appropriate power $0 < \theta < 1/3$. We set \begin{equation} \label{def_phi_app} \phi_{app,n} = \phi_{{\mathbb R}^3} + \sum_{i \in \mathcal{G}_\eta} \phi_{i,n} \end{equation} Note that $\phi_{{\mathbb R}^3}$ and $\phi_{i,n}$ are explicit:
$$ \phi_{{\mathbb R}^3} = \mathcal{U} \star \hbox{div \!}(5 \lambda \rho D\varphi), \quad \mathcal{U}(x) = \frac{1}{8\pi} \left( \frac{I}{|x|} + \frac{x \otimes x}{|x|^3} \right)$$ and \begin{equation} \label{def_phi_in}
\phi_{i,n} = r_n V[D \varphi(x_i)]\left(\frac{x-x_i}{r_n}\right)
\end{equation} where for all trace-free symmetric matrix $S$, $V[S]$ solves $$ -\Delta V[S] + {\nabla} P[S] = 0, \: \hbox{div \!} V[S] = 0 \quad \text{outside } \: B(0,1), \quad V[S](x) = Sx, \: x \in B(0,1). $$ with expressions
$$ V[S] = \frac{5}{2} S : (x \otimes x) \frac{x}{|x|^5} + Sx \frac{1}{|x|^5} - \frac{5}{2} (S : x \otimes x) \frac{x}{|x|^7}, \quad P[S] = 5 \frac{S : x \otimes x}{|x|^5}. $$
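\noindent One may check directly on these expressions that $V[S]$ matches the prescribed boundary data: on the unit sphere, where $|x| = 1$,
$$ V[S](x) = \frac{5}{2}\, (S : x \otimes x)\, x + Sx - \frac{5}{2}\, (S : x \otimes x)\, x = Sx. $$
Moreover, the first term of $V[S]$ dominates at infinity, so that $|V[S](x)| = O(|x|^{-2})$ and $|{\nabla} V[S](x)| = O(|x|^{-3})$ as $|x| \to \infty$.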
Eventually, we denote
$$ \psi_n = \phi_n - \phi_{app,n}. $$ Tedious but straightforward calculations show that $$ - \hbox{div \!} (\sigma_\mu(V[S], P[S])) = 5 \mu S x s^1 = - \hbox{div \!} (5 \mu S 1_{B(0,1)}) \quad \text{in } \: {\mathbb R}^3 $$ where $s^1$ denotes the surface measure at the unit sphere. It follows that \begin{equation}
- \mu \Delta \phi_{app,n} + {\nabla} p_{app,n} = \hbox{div \!} \Big( 5 \lambda \mu \rho D\varphi - \sum_{i \in \mathcal{G}_\eta} 5 \mu D \varphi(x_i) 1_{B_i} \Big), \quad \hbox{div \!} \phi_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3, \end{equation} Moreover, for all $1 \le i \le n$, \begin{align*} \int_{{\partial} B_i} \sigma_\mu(\phi_{app,n}, p_{app,n}) \nu & = - \int_{{\partial} B_i} 5 \lambda \mu \rho D\varphi \nu, \\ \int_{{\partial} B_i} (x-x_i) \times \sigma_\mu(\phi_{app,n}, p_{app,n}) \nu & = - \int_{{\partial} B_i} (x-x_i) \times 5 \lambda \mu \rho D\varphi \nu. \end{align*} Hence, the remainder $\psi_n$ satisfies \begin{equation} \label{Sto_psi} \begin{aligned} - \mu \Delta \psi_n + {\nabla} q_n & = 0 \: \text{ in } \: \Omega_n, \\
\hbox{div \!} \psi_n & = 0 \: \text{ in } \: \Omega_n, \\
\psi_n & = \varphi - \phi_{app,n} + \psi_{n,i} + w_{n,i} \times (x-x_i) \: \text{ in } \: B_i, \: 1 \le i \le n \end{aligned} \end{equation} where the constant vectors $\psi_{n,i}$, $w_{n,i}$ are associated to the constraints \begin{equation} \label{Sto2_psi} \begin{aligned} \int_{{\partial} B_i} \sigma_\mu(\psi_n,q_n) \nu & = 0, \\
\int_{{\partial} B_i}(x-x_i) \times \sigma_\mu(\psi_n,q_n) \nu & = 0.
\end{aligned} \end{equation} Estimates on $\phi_{app,n}$ and $\psi_n$ will be postponed to sections \ref{sec_app} and \ref{sec_rem} respectively. Regarding $\phi_{app,n}$, we shall prove \begin{proposition} \label{prop_phi_app}
For all $p \ge 1$,
\begin{equation} \label{estimate_phi_app}
\limsup_{n \to \infty} \Big|\int_{{\mathbb R}^3} f \cdot \phi_{app,n} \Big| \le C_{p,f} (\lambda \eta^\alpha)^{\frac{1}{p}} ||{\nabla} \varphi||_{L^{p'}}.
\end{equation}
\end{proposition} \noindent Regarding the remainder $\psi_n$, we shall prove \begin{proposition} \label{prop_psi} For all $1 < p < 2$, there exists $c> 0$ independent of $\lambda$ such that for all $1 \ge \eta \ge c \lambda^{1/3}$,
$$ \limsup_{n \to \infty} \Big|\int_{{\mathbb R}^3} f \cdot \psi_n \Big| \le C_{p,f} \lambda^{\frac 1 2} \Big(\lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} + \big( \eta^\alpha \lambda \big)^{\frac{2-p}{2p}} \Big) ||{\nabla} \varphi||_{L^{p'}}.$$ \end{proposition}
\noindent Let us explain how to deduce Proposition \ref{main_prop} from these two propositions. Let $1 < p < 2$. By standard estimates, we see that $\phi_n$ is bounded uniformly in $n$ in $\dot{H}^1$. It follows that \begin{align*}
\limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f_n \cdot \phi_n \Big| &= \limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f \cdot \phi_n \Big| \le \limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f \cdot \phi_{app,n} \Big| + \limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f \cdot \psi_n \Big|
\\ &\le C_{p,f} \left((\lambda \eta^\alpha)^{\frac{1}{p}} + \lambda^{\frac 3 2 + \frac{2-p}{2p}} \eta^{-\frac{3}{p}} + \lambda^{\frac 1 2} \big( \eta^\alpha \lambda \big)^{\frac{2-p}{2p}} \right) \|\nabla \varphi \|_{L^{p'}}
\end{align*} To conclude, we adjust properly the parameters $p$ and $\eta$. We look for $\eta$ in the form $\eta = \lambda^{\theta}$, with $0 < \theta < \frac{1}{3}$, so that the lower bound on $\eta$ needed in Proposition \ref{prop_psi} will be satisfied for small enough $\lambda$.
Then, we choose $p_{min} = 1 + \frac{\alpha}{6 + \alpha}$ and for $p < p_{min}$ we choose $\theta = \frac{2p}{6 + (2-p) \alpha}$. It is straightforward to check that this yields a right-hand side $\lambda^{1+\delta}$ with $\delta = \frac 1 p - \frac{6}{6 + (2-p)\alpha}$ in accordance with Remark \ref{rem:exponents}.
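\noindent For the reader's convenience, let us detail this elementary optimization. With $\eta = \lambda^\theta$, the three terms in the previous bound have respective exponents (in powers of $\lambda$)
$$ e_1 = \frac{1+\theta \alpha}{p}, \qquad e_2 = \frac32 + \frac{2-p}{2p} - \frac{3\theta}{p}, \qquad e_3 = \frac12 + \frac{2-p}{2p}\,(1 + \theta\alpha). $$
Equating $e_2$ and $e_3$ gives $1 = \frac{3\theta}{p} + \frac{(2-p)\alpha \theta}{2p}$, that is $\theta = \frac{2p}{6 + (2-p)\alpha}$, for which
$$ e_2 = e_3 = 1 + \frac{1}{p} - \frac{6}{6 + (2-p)\alpha} = 1 + \delta, $$
while a direct computation gives $e_1 = (1+\delta) + \frac{p\alpha}{6 + (2-p)\alpha} > 1 + \delta$, so that the first term is of higher order.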
\section{Bound on the approximation} \label{sec_app} This section is devoted to the proof of Proposition \ref{prop_phi_app}. We decompose $$\phi_{app,n} = \phi_{app,n}^1 + \phi_{app,n}^2 + \phi_{app,n}^3$$ where \begin{align*} & - \mu \Delta \phi^1_{app,n} + {\nabla} p^1_{app,n} = \hbox{div \!} \Big( 5 \lambda \mu \rho D\varphi - \sum_{1 \le i \le n} 5 \mu D \varphi(x_i) 1_{B_i} \Big), \quad \hbox{div \!} \phi^1_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3, \\ & - \mu \Delta \phi^2_{app,n} + {\nabla} p^2_{app,n} = \hbox{div \!} \Big( \sum_{i \in \mathcal{B}_\eta} 5 \mu D \varphi(x) 1_{B_i} \Big), \quad \hbox{div \!} \phi^2_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3, \\ & - \mu \Delta \phi^3_{app,n} + {\nabla} p^3_{app,n} = \hbox{div \!} \Big( \sum_{i \in \mathcal{B}_\eta} 5 \mu (D \varphi(x_i) - D\varphi(x)) 1_{B_i} \Big), \quad \hbox{div \!} \phi^3_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3. \end{align*} By standard energy estimates, $\phi^k_{app,n}$ is seen to be bounded in $n$ in $\dot{H}^1$, for all $1 \le k \le 3$. We shall prove next that $\phi^1_{app,n}$ and $\phi^3_{app,n}$ converge in the sense of distributions to zero, while for any $f$ with $\displaystyle D ( \Delta)^{-1} \mathbb{P} f \in L^\infty$ ($\mathbb{P}$ denoting the standard Helmholtz projection), for any $p \ge 1$, \begin{equation} \label{estimate_phi_app_2}
\Big|\int_{{\mathbb R}^3} f \cdot \phi^2_{app,n} \Big| \le C_{f,p} (\lambda \eta^\alpha)^{\frac{1}{p}} ||{\nabla} \varphi||_{L^{p'}}, \quad p' = \frac{p}{p-1}. \end{equation} Proposition \ref{prop_phi_app} follows easily from those properties.
\noindent We start with \begin{lemma} Under assumption \eqref{A0}, $\: \sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i} \: \rightharpoonup \: \lambda \rho D\varphi $ weakly* in $L^\infty$. \end{lemma} \noindent
{\em Proof}. As the balls are disjoint, $|\sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i}| \le ||D\varphi||_{L^\infty}$. Let $g \in C_c({\mathbb R}^3)$, and denote $\delta_n = \frac{1}{n} \sum_{i} \delta_{x_i}$ the empirical measure. We write
\begin{align*}
\int_{{\mathbb R}^3} \sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i}(y) g(y) dy & = \sum_{1 \le i \le n} D\varphi(x_i) \int_{B(0,r_n)} g(x_i+y) dy \\
& = n \int_{{\mathbb R}^3} D\varphi(x) \int_{B(0,r_n)} g(x+y) dy d\delta_n(x) \\
& = n r_n^3 \int_{{\mathbb R}^3} \int_{B(0,1)} g(x+r_nz) dz d\delta_n(x). \end{align*} The sequence of bounded continuous functions $x \rightarrow \int_{B(0,1)} g(x+r_n z) dz$ converges uniformly to the function $x \rightarrow \frac{4\pi}{3} g(x)$ as $n \rightarrow +\infty$. We deduce: $$ \lim_{n \to \infty} \int_{{\mathbb R}^3} \sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i}(y) g(y) dy = \lim_{n \to \infty} \lambda \int_{{\mathbb R}^3} D\varphi(x) g(x) d\delta_n(x) = \lambda \int_{{\mathbb R}^3} D\varphi(x) g(x) \rho(x) dx $$ where the last equality comes from \eqref{A0}. The lemma follows by density of $C_c$ in $L^1$.
\noindent Let now $h \in C^\infty_c({\mathbb R}^3)$ and $v = (\Delta)^{-1} \mathbb{P} h$. We find \begin{align*} \langle \phi_{app,n}^1 , h \rangle & = \langle \phi_{app,n}^1 , \Delta v \rangle = \langle \Delta \phi_{app,n}^1 , v \rangle \\ & = \int_{{\mathbb R}^3} \big( 5 \lambda \mu \rho D\varphi - \sum_{1 \le i \le n} 5 \mu D \varphi(x_i) 1_{B_i} \big) \cdot Dv \: \rightarrow 0 \quad \text{ as } \: n \rightarrow +\infty \end{align*} where we used the previous lemma and the fact that $Dv$ belongs to $L^1_{loc}$ and $\varphi$ has compact support. Hence, $\phi_{app,n}^1$ converges to zero in the sense of distributions. As regards $\phi^3_{app,n}$, we notice that \begin{align*}
||\sum_{i \in \mathcal{B}_\eta} 5 \mu (D \varphi(x) - D\varphi(x_i)) 1_{B_i}||_{L^1} & \le ||{\nabla}^2 \varphi||_{L^\infty} \sum_{1 \le i \le n} \int_{B_i} |x-x_i| dx \\
& \le ||{\nabla}^2 \varphi||_{L^\infty} \lambda r_n \rightarrow 0 \quad \text{ as } \: n \rightarrow +\infty \end{align*} Using the same duality argument as for $\phi^1_{app, n}$ (see also below), we get that $\phi^3_{app,n}$ converges to zero in the sense of distributions.
\noindent
It remains to show \eqref{estimate_phi_app_2}. We use a simple H\"older estimate, and write for all $p \ge 1$: \begin{align*}
||\sum_{i \in \mathcal{B}_\eta} 5 \mu D \varphi 1_{B_i}||_{L^1} & \le 5 \mu || \sum_{i \in \mathcal{B}_\eta} 1_{B_i}||_{L^p} ||D\varphi||_{L^{p'}} = 5 \mu \big( \text{card} \ \mathcal{B}_\eta \, \frac{4\pi}{3} r_n^3 \big)^{\frac{1}{p}} ||D\varphi||_{L^{p'}} \\
& \le C (\eta^\alpha \lambda)^{\frac{1}{p}} ||D\varphi||_{L^{p'}} \end{align*} where the last inequality follows from \eqref{B2}. Denoting $v = ( \Delta)^{-1} \mathbb{P} f$, we have this time \begin{align*}
\int_{{\mathbb R}^3} f \cdot \phi_{app,n}^2 & = \int_{{\mathbb R}^3} D v \cdot \sum_{i \in \mathcal{B}_\eta} 5 \mu D \varphi 1_{B_i} \le C ||Dv||_{L^\infty} (\eta^\alpha \lambda)^{\frac{1}{p}} ||D\varphi||_{L^{p'}} \end{align*} which implies \eqref{estimate_phi_app_2}.
\section{Bound on the remainder} \label{sec_rem} We focus here on estimates for the remainder $\psi_n = \phi_n - \phi_{app,n}$, which satisfies \eqref{Sto_psi}-\eqref{Sto2_psi}. The proof of Proposition \ref{prop_psi} relies on properties of the solutions of the system \begin{equation} \label{Sto_Psi} -\mu \Delta \psi + {\nabla} p = 0, \quad \hbox{div \!} \psi = 0 \quad \text{ in } \: \Omega_n, \quad D \psi = D \tilde{\psi} \quad \text{ in } \: B_i, \quad 1 \le i \le n \end{equation} together with the constraints \begin{equation} \label{Sto2_Psi}
\int_{{\partial} B_i} \sigma_\mu(\psi, p)\nu = \int_{{\partial} B_i} (x-x_i) \times \sigma_\mu(\psi, p)\nu = 0, \quad 1 \le i \le n.
\end{equation}
More precisely, we use a duality argument to prove the following proposition, corresponding to \cite[Proposition 3.2]{GerMec20}. \begin{proposition} \label{prop_estimate_gphi2n} Let $q > 3$. Then, under assumption \eqref{B1} for all $g \in L^{q}({\mathbb R}^3)$ and all $\tilde \psi \in H^1(\cup_i B_i)$, the weak solution $\psi \in \dot H^1({\mathbb R}^3)$ to \eqref{Sto_Psi}-\eqref{Sto2_Psi} satisfies \begin{align} \label{improvement}
\left|\int_{{\mathbb R}^3} g \psi \right| \leq C_{g} \lambda^{\frac 1 2 } \| D \tilde{\psi} \|_{L^2(\cup B_i)}. \end{align} \end{proposition} \begin{proof} We introduce the solution $u_g$ of the Stokes equation \begin{equation} \label{eq_ug}
-\Delta u_g + {\nabla} p_g = g, \quad \hbox{div \!} u_g = 0, \quad \text{ in } \: {\mathbb R}^3.
\end{equation} As $g \in L^{q}$, $q>3$, $u_g \in W^{2,q}_{loc}$, so that $D(u_g)$ is continuous. Integrations by parts yield \begin{align*} \int_{{\mathbb R}^3} g \psi & = \int_{{\mathbb R}^3}(-\Delta u_g + {\nabla} p_g) \psi = 2 \int_{{\mathbb R}^3} D(u_g) : D(\psi) \\ & = 2 \int_{\cup B_i} D(u_g) : D(\psi) - \sum_i \int_{{\partial} B_i} u_g \cdot \sigma(\psi, p)\nu \\ & = 2 \int_{\cup B_i} D(u_g) : D(\psi) - \sum_i \int_{{\partial} B_i} (u_g + u^i_g + \omega^i_g \times (x-x_i)) \cdot \sigma(\psi, p)\nu \end{align*} for any constant vectors $u^i_g$, $\omega^i_g$, $1 \le i \le n$, by the force-free and torque-free conditions on $\psi$. As $u_g + u^i_g + \omega^i_g \times (x-x_i)$ is divergence-free, one has $$ \int_{{\partial} B_i} (u_g + u^i_g + \omega^i_g \times (x-x_i)) \cdot \nu = 0. $$ We can apply classical considerations on the Bogovskii operator: for any $1 \le i \le n$, there exists $U_g ^i \in H^1_0(B(x_i, (M/2)r_n))$ such that $$ \hbox{div \!} U_g ^i = 0 \quad \text{ in } \: B\Big(x_i, \frac{M}{2} r_n\Big), \quad U_g ^i = u_g + u^i_g + \omega^i_g \times (x-x_i) \quad \text{ in } \: B_i $$ and with
$$ ||{\nabla} U_g ^i||_{L^2} \le C_{i,n} ||u_g + u^i_g + \omega^i_g \times (x-x_i)||_{W^{1,2}(B_i)} $$ Furthermore, by a proper choice of $u_g ^i$ and $\omega_g^i$, we can ensure the Korn inequality:
$$ ||u_g + u^i_g + \omega^i_g \times (x-x_i)||_{W^{1,2}(B_i)} \le c'_{i,n} ||D(u_g)||_{L^2(B_i)} $$ resulting in \begin{equation*} \label{control_Ug^i}
||{\nabla} U_g ^i||_{L^2} \le C ||D(u_g)||_{L^2(B_i)} \end{equation*} where the constant $C$ in the last inequality can be taken independent of $i$ and $n$ by translation and scaling arguments. Extending $U_g ^i$ by zero, and denoting $U_g = \sum U_g ^i$, we have \begin{equation} \label{control_Ug}
||{\nabla} U_g||_{L^2} \le C ||D(u_g)||_{L^2(\cup B_i)} \end{equation}
Thus, we find \begin{align*} \int_{{\mathbb R}^3} g \psi & = 2 \int_{\cup B_i} D(U_g) : D(\psi) - \sum_i \int_{{\partial} B_i} U_g \cdot \sigma(\psi, p)\nu \\ & = 2 \int_{{\mathbb R}^3} D(U_g) : D(\psi). \end{align*} By using \eqref{control_Ug} and the Cauchy-Schwarz inequality, we end up with
\begin{align*}
\big| \int_{{\mathbb R}^3} g \psi \big| & \le C ||D(u_g)||_{L^2(\cup B_i)} \|D(\psi)\|_{L^2({\mathbb R}^3)} \le C ||D(u_g)||_{L^\infty} \lambda^{\frac12} \|D(\psi)\|_{L^2({\mathbb R}^3)}. \end{align*} Now the assertion follows from the standard estimate \begin{equation} \label{L2_estimate_Psi}
||{\nabla} \psi||_{L^2({\mathbb R}^3)} \le C \| D \tilde{\psi} \|_{L^2(\cup B_i)} \end{equation} for a constant $C$ independent of $n$. Indeed, by a classical variational characterization of $\psi$, we have $$
||{\nabla} \psi||_{L^2({\mathbb R}^3)}^2 = 2 ||D \psi||_{L^2({\mathbb R}^3)}^2 = \inf \big\{ 2 ||D U||_{L^2({\mathbb R}^3)}^2, \: D U = D \tilde{\psi} \: \text{ on } \cup_i B_i\big\}. $$ Thus, \eqref{L2_estimate_Psi} follows by constructing such a vector field $U$ from $\tilde \psi$ in the same manner as we constructed $U_g$ from $u_g$ above and applying \eqref{control_Ug}. \end{proof}
\noindent By \eqref{Sto_psi} we can apply this proposition with $g=f$, $\psi = \psi_n$ and $\tilde \psi_n = \varphi - \phi_{app,n}$. Thus, for the proof of Proposition \ref{prop_psi}, it remains to show \begin{equation} \label{bound.psi}
\limsup_{n \to \infty} ||D (\varphi - \phi_{app,n})||_{L^2(\cup B_i)} \le C \Big(\lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} + \big( \eta^3 \lambda \big)^{\frac{2-p}{2p}} \Big) ||{\nabla} \varphi||_{L^{p'}}. \end{equation}
We decompose $$\varphi - \phi_{app,n} = \tilde{\psi}^1_n + \tilde{\psi}^2_n + \tilde{\psi}^3_n $$ where \begin{align*} & \forall 1 \le i \le n, \: \forall x \in B_i, \quad \tilde{\psi}^1_n(x) = -\phi_{{\mathbb R}^3}(x) - \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \phi_{j,n}(x) \end{align*} and \begin{align*} & \forall i \in \mathcal{G}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^2_n(x) = \varphi(x) - \varphi(x_i) - {\nabla}\varphi(x_i) (x-x_i) + \Bigl(\varphi(x_i) + \frac 1 2 \hbox{curl \!} \varphi(x_i) \times (x-x_i)\Bigr), \\ & \forall i \in \mathcal{B}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^2_n(x) = 0, \\ & \forall i \in \mathcal{G}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^3_n(x) = 0, \\ & \forall i \in \mathcal{B}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^3_n(x) = \varphi(x). \end{align*} We remind that the sum in \eqref{def_phi_app} is restricted to indices $i \in \mathcal{G}_\eta$ and that $\phi_{i,n}(x) = D\varphi(x_i) (x-x_i)$ for $x$ in $B_i$. This explains the distinction between $\tilde{\psi}^2_n$ and $\tilde{\psi}^3_n$.
\noindent The control of $\tilde \psi^2_n$ is the simplest: \begin{align} \label{psi_2n}
\|D \tilde \psi^2_n\|_{L^2(\cup B_i)} \le C ||D^2 \varphi||_{L^\infty} \Big( \sum_{i \in \mathcal{G}_\eta} \int_{B_i} |x-x_i|^2 dx \Bigr)^{1/2} \le C' \lambda^{1/2} r_n. \end{align} Hence, \begin{equation} \label{lim_psi_2n}
\lim_{n \rightarrow +\infty} \|D \tilde \psi^2_n\|_{L^2(\cup B_i)} = 0. \end{equation}
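Indeed, the factor $\lambda^{1/2} r_n$ in \eqref{psi_2n} can be checked by the elementary computation (using, as throughout, that the volume fraction satisfies $\frac{4\pi}{3} n r_n^3 \le C\lambda$)
$$ \sum_{i \in \mathcal{G}_\eta} \int_{B_i} |x-x_i|^2 dx \le n \int_{B(0,r_n)} |x|^2 dx = \frac{4\pi}{5} \, n \, r_n^5 \le C \lambda r_n^2, $$
which tends to $0$ since $r_n \to 0$.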
Next, we estimate $\tilde \psi^3_n$.
This term expresses the effect of the balls indexed by $\mathcal{B}_\eta$, which are close to one another. By assumption \eqref{B2}, $\mathrm{card}\, \mathcal{B}_\eta \le C \eta^\alpha n$.
Thus, \begin{equation} \label{bound_psi_3n} \begin{aligned}
\|D \tilde \psi^3_n\|_{L^2(\cup B_i)} \le C \|1_{\cup_{i \in \mathcal{B}_\eta} B_i}\|_{L^{\frac{2p}{2-p}}({\mathbb R}^3)} \|D\varphi\|_{L^{p'}(\cup_{i \in \mathcal{B}_\eta} B_i)}
\leq C'(\eta^\alpha \lambda)^{\frac {2-p} {2p}} \|\nabla \varphi\|_{L^{p'}}. \end{aligned} \end{equation}
The final step in the proof of Proposition \ref{prop_psi} is to establish bounds on $\tilde \psi^1_n$. We have \begin{align} \label{bound_psi_2n}
||D \tilde \psi^1_n||_{L^2(\cup B_i)} & \le C \Big( \|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} + \Big(\sum_i \int_{B_i} \big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} D\phi_{j,n}\big|^2 \Big)^{1/2} \Big)
\end{align}
For any $r,s < +\infty$ with $\frac{1}{r} + \frac{1}{s} = \frac{1}{2}$, we obtain
\begin{equation} \label{bound_phi_R3}
\|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} \: \le \: ||1_{\cup B_i}||_{L^r({\mathbb R}^3)} ||D \phi_{{\mathbb R}^3}||_{L^s({\mathbb R}^3)} \: \le C ||1_{\cup B_i}||_{L^r({\mathbb R}^3)} ||\lambda \rho D\varphi||_{L^s({\mathbb R}^3)}
\end{equation} using standard $L^s$ estimate for system \eqref{eq_phi_R3}. Hence,
$$ \|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} \le C' \lambda^{1+ \frac{1}{r}} ||D\varphi||_{L^s({\mathbb R}^3)}. $$ Note that we can choose any $s >2$, the lower bound $s > 2$ coming from the requirement $\frac{1}{r} + \frac{1}{s} = \frac{1}{2}$ with $r < +\infty$. Introducing $p$ such that $s= p'$, we find that for any $p < 2$, \begin{equation} \label{bound_Dphi_R3}
\|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} \le C' \lambda^{\frac{1}{2} + \frac{1}{p}} ||D\varphi||_{L^{p'}({\mathbb R}^3)}.
\end{equation}
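For the reader's convenience, let us record the exponent bookkeeping behind \eqref{bound_Dphi_R3}: with $s = p'$ and $\frac{1}{r} + \frac{1}{s} = \frac{1}{2}$, one has
$$ \frac{1}{r} = \frac{1}{2} - \frac{1}{p'} = \frac{1}{2} - \frac{p-1}{p} = \frac{1}{p} - \frac{1}{2}, \qquad \text{hence} \qquad \lambda^{1 + \frac{1}{r}} = \lambda^{\frac{1}{2} + \frac{1}{p}}. $$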
The treatment of the second term at the r.h.s. of \eqref{bound_psi_2n} is more delicate. We write, see \eqref{def_phi_in}: \begin{align} \label{decompo_phi_jn} D\phi_{j,n}(x) & = DV[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big) = \mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big) \: + \: \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big) \end{align}
where $\: \mathcal{V}[S] = D\Big( \frac{5}{2} S : (x \otimes x) \frac{x}{|x|^5} \Big)$, $\: \mathcal{W}[S] = D \Big( \frac{Sx}{|x|^5} - \frac{5}{2} (S : x \otimes x) \frac{x}{|x|^7} \Big)$.
\noindent We have: \begin{align*}
& \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx
\: \le \: C \, r_n^{10} \, \sum_i \int_{B_i} \Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} |D\varphi(x_j)| \, |x-x_j|^{-5} \Big)^2 dx \end{align*} For all $i$, for all $j \in \mathcal{G}_\eta$ with $j \neq i$, and all $(x,y) \in B_i \times B(x_j, \frac{\eta}{4} n^{-\frac{1}{3}})$, we have for some absolute constants $c,c' > 0$:
$$|x-x_j| \: \ge \: c \, |x - y| \ge c' \, \eta n^{-\frac{1}{3}}. $$ Denoting $B_j^* = B(x_j,\frac{\eta}{4} n^{-\frac{1}{3}})$, we deduce \begin{align*}
& \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx \\ & \le C \, r_n^{10} \sum_i \int_{B_i}
\Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \frac{1}{|B_j^*|} \int_{B_j^*} |x - y|^{-5} 1_{\{|x-y| > c' \eta n^{-\frac{1}{3}}\}}(x-y) |D\varphi(x_j)| dy \Big)^2 dx\\
& \le C' \, n^2 \frac{r_n^{10}}{\eta^6} \int_{{\mathbb R}^3} 1_{\cup B_i}(x) \Big( \int_{{\mathbb R}^3} |x - y|^{-5} 1_{\{|x-y| > c' \eta n^{-\frac{1}{3}}\}}(x-y) \sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*}(y) dy \Big)^2 dx \end{align*} Using H\"older and Young's convolution inequalities, we find that for all $r,s$ with $\frac{1}{r} + \frac{1}{s} = 1$, \begin{align*}
& \int_{{\mathbb R}^3} 1_{\cup B_i}(x) \Big( \int_{{\mathbb R}^3} |x - y|^{-5} 1_{\{|x-y| > c' \eta n^{-\frac{1}{3}}\}}(x-y) \sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*}(y) dy \Big)^2 dx \\
& \le ||1_{\cup B_i}||_{L^r} \, || \big(|x|^{-5} 1_{\{|x| > c' \eta n^{-\frac{1}{3}}\}}\big) \star \sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2 \\
& \le ||1_{\cup B_i}||_{L^r} \, |||x|^{-5} 1_{\{|x| > c' \eta n^{-\frac{1}{3}}\}}||_{L^1}^2 \, ||\sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2 \\
& \le C \lambda^{\frac{1}{r}} \, (\eta n^{-\frac{1}{3}})^{-4} \, \Big( \sum_j |D\varphi(x_j)|^{2s} \eta^3 n^{-1} \Big)^{\frac{1}{s}} \end{align*}
Note that, by \eqref{A0}, $\frac{1}{n} \sum_j |D\varphi(x_j)|^t \rightarrow \int_{{\mathbb R}^3} |D\varphi|^t \rho$ as $n \rightarrow +\infty$, for any fixed exponent $t$; we apply this with $t = 2s$. We end up with \begin{align*}
& \limsup_{n \to \infty} \, \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx
\le C \, \lambda^{\frac{10}{3} + \frac{1}{r}} \, \eta^{-10+\frac{3}{s}} ||D\varphi||^2_{L^{2s}(\mathcal{O})}. \end{align*} We can take any $s > 1$, which yields by setting $p$ such that $p'=2s$: for any $p < 2$ \begin{equation} \label{bound_mW}
\limsup_{n \to \infty} \, \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx
\le C \, \lambda^{\frac{10}{3} + \frac{2-p}{p}} \, \eta^{-4-\frac{6}{p}} ||D\varphi||^2_{L^{2s}(\mathcal{O})}. \end{equation} Here we used that $p'=2s$ gives $\frac{1}{s} = \frac{2(p-1)}{p}$, hence $\frac{1}{r} = 1 - \frac{1}{s} = \frac{2-p}{p}$ and $-10+\frac{3}{s} = -4-\frac{6}{p}$. To treat the first term in the decomposition \eqref{decompo_phi_jn}, we write $$\mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big) = r_n^3 \, \mathcal{M}(x-x_j) \, D\varphi(x_j) $$ for $\mathcal{M}$ a matrix-valued Calder\'on-Zygmund kernel.
We use that for all $i$, all $j \neq i$ with $j \in \mathcal{G}_\eta$, and all $(x,y) \in B_i \times B_j^\ast$, \begin{align*}
|\mathcal{M}(x-x_j) - \mathcal{M}(x-y)| \leq C \eta n^{-1/3} |x- y|^{-4}. \end{align*} Thus, by similar manipulations as before, \begin{align*}
& \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx \\ & \leq C r_n^6 \sum_i \int_{B_i}
\Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \frac{1}{|B_j^*|} \int_{B_j^*} \mathcal{M}(x-y) 1_{\{|x-y| > c \eta n^{-\frac{1}{3}}\}}(x-y) |D\varphi(x_j)| dy \Big)^2 dx\\ & + C \frac{\eta^2}{n^{2/3}} r_n^6 \sum_i \int_{B_i}
\Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \frac{1}{|B_j^*|} \int_{B_j^*} |x- y|^{-4} 1_{\{|x-y| > c \eta n^{-\frac{1}{3}}\}}(x-y) |D\varphi(x_j)| dy \Big)^2 dx \\
& \leq C n^2 \frac{r_n^6}{\eta^6} \, ||1_{\cup B_i}||_{L^r} \, || \big(\mathcal{M}(x) 1_{\{|x| > \eta n^{-\frac{1}{3}}\}}\big) \star \sum_{1 \le j \le n} \, |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2 \\
&+C n^2 \frac{\eta^2}{n^{2/3}} \frac{r_n^6}{\eta^6} ||1_{\cup B_i}||_{L^r} \, |||x|^{-4} 1_{\{|x| > c \eta n^{-\frac{1}{3}}\}}||_{L^1}^2 \, ||\sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2 \\ \end{align*}
As seen in \cite[Lemma 2.4]{DGV_MH}, the kernel $\mathcal{M}(x) 1_{\{|x| > c \eta n^{-\frac{1}{3}}\}}$ defines a singular integral that is continuous over $L^t$ for any $1 < t < \infty$, with operator norm bounded independently of the value $\eta n^{-\frac{1}{3}}$ (by scaling considerations). Applying this continuity property with $t=2s$, writing as before $p'=2s$, we get for all $p < 2$, \begin{align*}
& \limsup_{n \to \infty} \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx \le C \lambda^{2+ \frac{2-p}{p}} \eta^{-\frac{6}{p}} ||D\varphi||^2_{L^{2s}(\mathcal{O})} \end{align*} Combining this last inequality with \eqref{decompo_phi_jn} and \eqref{bound_mW}, we finally get: for all $p < 2$, \begin{align} \label{bound_sum_phi_jn}
& \limsup_{n \to \infty} \Big(\sum_i \int_{B_i} \big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} D\phi_{j,n}\big|^2 \Big)^{1/2} \le C' \lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} ||D\varphi||_{L^{p'}(\mathcal{O})}. \end{align} Finally, inserting \eqref{bound_Dphi_R3} and \eqref{bound_sum_phi_jn} in \eqref{bound_psi_2n}, we obtain that for any $p < 2$, \begin{align} \label{bound_psi_1n_final}
\limsup_{n \to \infty} ||D \tilde \psi^1_n||_{L^2(\cup_i B_i)} & \le C \lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} ||D\varphi||_{L^{p'}({\mathbb R}^3)}
\end{align}
Here we used $\eta \leq 1$.
The desired estimate \eqref{bound.psi} follows from collecting \eqref{lim_psi_2n}, \eqref{bound_psi_1n_final} and \eqref{bound_psi_3n}. This concludes the proof of Proposition \ref{prop_psi}.
\section{Discussion of assumption \texorpdfstring{\eqref{B1}}{(B1)}} \label{sec:B1}
In the light of the recent paper \cite{Duerinckx20}, we will show how condition \eqref{B1} can be replaced by the following assumption: \begin{align} \label{B1'} \tag{B1'}
\forall i, \quad \rho_i := \sup_{ j \neq i} r_n^{-1} |x_i - x_j| - 2 > 0, \qquad \exists s > 1, \quad \limsup_{n \to \infty} \frac{1}{n} \sum_i \rho_i^{-s} < \infty. \end{align}
We will argue that Theorem \ref{main} remains valid with $p_{\min}$ depending in addition on the power $s$ from \eqref{B1'}. More precisely, $p_{\min}$ in Remark \ref{rem:exponents} needs to be replaced by \begin{align} \label{p_min}
p_{\min} = \min\left\{ 1 + \frac{\alpha}{6+\alpha}, 1 + \frac{s-1}{s+1}, \frac{3}{2}\right\}. \end{align}
There are only two instances where we have used assumption \eqref{B1}, both contained in the proof of Proposition \ref{prop_estimate_gphi2n}: one is to prove the estimate \eqref{L2_estimate_Psi} for the solution $\psi$ to the system \eqref{Sto_Psi} -- \eqref{Sto2_Psi}, and the other one is to prove the analogous estimate \eqref{control_Ug}. The proof was based on the construction of suitable functions $\Psi_i \in \dot H^1_0(B(x_i,M/2 r_n))$ with $D(\Psi_i) = D(\tilde \psi)$ in $B_i$. If we drop assumption \eqref{B1}, we can still replace the balls $B(x_i,M/2 r_n)$ by disjoint neighborhoods $B_i^+$ satisfying the assumptions of \cite[section 3.1]{Duerinckx20} (with $I_n, I_n^+$ replaced by $B_i, B_i^{+}$). By \cite[Lemma 3.3]{Duerinckx20}, it then follows that for all $r > 2$ and all $q \ge \max(2,\frac{6r}{5r-6})$, there exists $\Psi_i \in H^1_0(B_i^+)$ such that
$$ \| {\nabla} \Psi_i\|_{L^2(B_i^+)} \le C_{r} \, \rho_i^{\frac{2}{r} - \frac{3}{2}} r_n^{\frac{3}{2} - \frac{3}{q}} \| D(\tilde \psi)\|_{L^q(B_i)}. $$ Setting $\Psi = \sum_i \Psi_i$, we find that \begin{equation} \label{estim_naPsi}
\|\nabla \Psi\|^2_{L^2} \leq C_{r} \sum_i \rho_i^{\frac{4}{r} - 3} r_n^{3-\frac{6}{q}} \| D(\tilde \psi)\|_{L^q(B_i)}^2 \leq C_{r} \lambda^{\frac{q-2}{q}} \left( \frac{1}{n} \sum_i
\left( \rho_i^{\frac{4}{r} - 3}\right)^\frac{q}{q-2} \right)^{\frac{q-2}{q}} \|D(\tilde \psi)\|_{L^q(\cup B_i)}^2
\end{equation}
Note that for $q > 3$ with $\frac{q}{q-2} < s$, where $s$ is the exponent in \eqref{B1'}, taking $r$ close enough to $2$ one can ensure that $q \ge \max(2,\frac{6r}{5r-6})$ and that the first factor on the right-hand side of \eqref{estim_naPsi} is finite. In conclusion, this argument shows that Proposition \ref{prop_estimate_gphi2n} remains valid under assumption \eqref{B1'} with the estimate \eqref{improvement} replaced by
\begin{align} \label{improvement'}
\left|\int_{{\mathbb R}^3} g \psi \right| \leq C_{g,q} \lambda^{\frac 1 2 + \frac{q - 2}{2q}} \| D \tilde{\psi} \|_{L^{q}(\cup B_i)}. \end{align}
It is not difficult to check that this change of the estimate still allows us to conclude the argument in Section \ref{sec_rem} along the same lines as before. Indeed, whenever we used \eqref{improvement}, we also applied Hölder's inequality to replace $\| D \tilde{\psi} \|_{L^{2}(\cup B_i)}$ by a higher Lebesgue norm in order to gain powers in $\lambda$. One could say that the modified estimate \eqref{improvement'} has just partly anticipated Hölder's inequality. The additional restrictions on $q$ (namely $q > 3$ and $\frac{q}{q-2} < s$) are the reason for the additional constraints in $p_{\min}$ in \eqref{p_min}. The estimates in Section \ref{sec_rem} where we use Proposition \ref{prop_estimate_gphi2n} concern the terms $\tilde \psi_n^i$, $i=1,2,3$. First, in the estimate for $\tilde \psi_n^2$ corresponding to \eqref{psi_2n}, we can just use \eqref{improvement'} with $q= \infty$. Second, for $\tilde \psi_n^3$, previously estimated in \eqref{bound_psi_3n}, we use \eqref{improvement'} with $q = p'$. Finally, for $\tilde \psi_n^1$, if one carefully follows the estimates in Section \ref{sec_rem}, one observes that \eqref{improvement'} with $q= p'$ is again sufficient.
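For the reader's convenience, the link between these restrictions on $q$ and the modified exponent $p_{\min}$ in \eqref{p_min} can be made explicit; we only record the elementary conjugate-exponent computation. With $q = p'$,
\begin{align*}
q > 3 \; &\Longleftrightarrow \; p < \frac{3}{2}, \\
\frac{q}{q-2} < s \; \Longleftrightarrow \; q > \frac{2s}{s-1} \; &\Longleftrightarrow \; p < \frac{2s}{s+1} = 1 + \frac{s-1}{s+1},
\end{align*}
which are precisely the additional constraints appearing in \eqref{p_min}.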
\section{Discussion of assumption \texorpdfstring{\eqref{B2}}{(B2)}} \label{sec:prob}
\subsection{Stationary ergodic processes}
Let $\Phi^\delta = \{y_i\}_i \subset {\mathbb R}^3$ be a stationary ergodic point process on ${\mathbb R}^3$ with intensity $\delta$ and hard-core radius $R$, i.e., $|y_i - y_j| \geq R$ for all $i \neq j$. An example of such a process is a hard-core Poisson point process, which is obtained from a Poisson point process upon deleting all points with a neighboring point closer than $R$. We refer to \cite[Section 8.1]{MR1950431} for the construction and properties of such processes.
\noindent Assume that $\mathcal{O}$ is convex and contains the origin. For ${\varepsilon} > 0$, we consider the set \begin{align*}
{\varepsilon} \Phi^\delta \cap \mathcal{O} =: \{ x^{\varepsilon}_i, i =1, \dots, n_{\varepsilon}\}. \end{align*} Let $r < R/2$, denote $r_{\varepsilon} = {\varepsilon} r$, and consider $B_i = \overline{ B(x^{\varepsilon}_i,r_{\varepsilon})}$. The volume fraction of the particles depends on ${\varepsilon}$ in this case. However, it is not difficult to generalize our result to the case when the volume fraction converges to $\lambda$, and this holds in the setting under consideration since \begin{align*}
\frac{4 \pi}{3} n_{\varepsilon} r_{\varepsilon}^3 \to \frac{4 \pi}{3} \delta r^3 =: \lambda(r,\delta) \quad \text{almost surely as } {\varepsilon} \to 0. \end{align*} Clearly, $\lambda(r,\delta) \to 0$, both if $r \to 0$ and if $\delta \to 0$. However, the process behaves fundamentally differently in those cases. Indeed, if we take $r \to 0$ (for $\delta$ and $R$ fixed), we find that condition \eqref{A1}, which implies \eqref{B2}, is satisfied almost surely for ${\varepsilon}$ sufficiently small as \begin{align*}
n_{\varepsilon}^{1/3} |x_i^{\varepsilon} - x_j^{\varepsilon}| \geq n_{\varepsilon}^{1/3} {\varepsilon} R \to \delta^{1/3} R. \end{align*}
\noindent In the case when we fix $r$ and consider $\delta \to 0$ (e.g. by randomly deleting points from a process), \eqref{A1} is in general not satisfied. We want to characterize processes for which \eqref{B2} is still fulfilled almost surely as ${\varepsilon} \to 0$. Indeed, using again the relation between ${\varepsilon}$ and $n_{\varepsilon}$, it suffices to show \begin{align} \label{eq:B2.prob}
\forall \eta > 0, \quad \#\{i, \: \exists j, \: |x_i - x_j| \le \eta {\varepsilon} \} \le C \eta^\alpha \delta^{1 + \frac \alpha 3} {\varepsilon}^{-3}. \end{align} Let $\Phi^\delta_\eta$ be the process obtained from $\Phi^\delta$ by deleting those points $y$ with $B(y,\eta) \cap \Phi^\delta = \{y\}$. Then, the process $\Phi^\delta_\eta$ is again stationary ergodic (since deleting those points commutes with translations\footnote{In detail: let $\mathcal E_\eta$ be the operator that erases all points without a neighboring point closer than $\eta$, and let $T_x$ denote a translation by $x$. Now, let $\mu$ be the measure for the original process $\Phi^\delta$. Then the measure for $\Phi^\delta_\eta$ is given by $\mu_\eta = \mu \circ \mathcal E_\eta^{-1}$. Since $\mathcal E_\eta T_x = T_x \mathcal E_\eta$ (for all $x$, in particular for $T_{-x} = T^{-1}_x$), we have for any measurable set $A$ that $T_x \mathcal E_\eta^{-1} A = \mathcal E_\eta^{-1} T_x A$. This immediately implies that the new process adopts stationarity and ergodicity.}), so that almost surely as ${\varepsilon} \to 0$ \begin{align*}
{\varepsilon}^3 \#\{i, \: \exists j, \: |x_i - x_j| \le \eta {\varepsilon} \} \to {\mathbb E}[\# \Phi^\delta_\eta \cap Q], \end{align*} where $Q = [0,1]^3$. Clearly, $$ {\mathbb E}[\# \Phi^\delta_\eta \cap Q] \le {\mathbb E} \sum_{y \in \Phi^\delta \cap Q} \sum_{y' \neq y \in \Phi^\delta} 1_{B(0,\eta)}(y' - y). $$
We can express this expectation in terms of the 2-point correlation function $\rho^\delta_2(y,y')$ of $\Phi^\delta$ yielding $$ {\mathbb E}[\# \Phi^\delta_\eta \cap Q] \le \int_{{\mathbb R}^6} 1_Q(y) 1_{B(0,\eta)}(y'-y) \rho^\delta_2(y,y') \, \mathrm{d} y \, \mathrm{d} y'. $$ Hence, \eqref{eq:B2.prob} and therefore also \eqref{B2} is in particular satisfied with $\alpha = 3$ if $\rho^\delta_2 \leq C \delta^2$ which is the case for a (hard-core) Poisson point process.
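As a concrete instance of this computation: for the hard-core Poisson point process, inserting the bound $\rho^\delta_2 \le C \delta^2$ into the integral above yields
$$ {\mathbb E}[\# \Phi^\delta_\eta \cap Q] \le C \delta^2 \int_{{\mathbb R}^6} 1_Q(y) 1_{B(0,\eta)}(y'-y) \, \mathrm{d} y \, \mathrm{d} y' = C \delta^2 \, \frac{4\pi}{3} \eta^3, $$
which is exactly the right-hand side of \eqref{eq:B2.prob} (after multiplication by ${\varepsilon}^3$) with $\alpha = 3$.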
Moreover, we observe that \eqref{B2} with $\alpha < 3$ is satisfied even for processes that favor clustering: \eqref{eq:B2.prob} holds if $\rho_2^\delta(y,y') \leq C \delta^{1 + \frac{\alpha}3} |y - y'|^{\alpha - 3}$. This means that $\rho_2^\delta$ can be quite singular at the diagonal and of much higher intensity than $\delta^2$. Examples of such clustering point processes are Neyman-Scott processes (see e.g. \cite[Section 6.3]{MR1950431}).
\subsection{Identically, independently distributed particles}
Focusing on assumption \eqref{B2}, we neglect the non-overlapping condition \eqref{B1} in the following, which is not satisfied for i.i.d. particles. As in the case of hard-core Poisson point processes, it is nevertheless possible to construct a process that satisfies \eqref{B1} by deleting points that have a too close neighbor. As those points will be few for small volume fractions, this will not affect the discussion of \eqref{B2} qualitatively.
\noindent We will show the following result: for $x_1, \dots, x_n$ i.i.d. with a law $\rho \in L^\infty$ ($\rho \geq 0$, $\int \rho = 1$), for all $\eta > 0$: \begin{align} \label{eq:B2.iid}
n^{-1} \# \{ i, \: \exists j \neq i, \: |x_i - x_j| \le \eta n^{-1/3} \} \: \xrightarrow[n \rightarrow +\infty]{} \: 1 - \int_{{\mathbb R}^3} \rho(x) e^{-\rho(x) \frac{4\pi}{3} \eta^3} dx \end{align} in probability. This implies \eqref{B2} with $\alpha = 3$ in probability. We first set $$\eta_n := \eta n^{-1/3}, \quad B^n_j := B(x_j, \eta_n), \quad Y^n_{i} := \prod_{j\neq i} 1_{(B^n_j)^c}(x_i).$$
Note that the random variables $Y^n_{i}$ are identically distributed, but not independent. Note also that $ \displaystyle n^{-1} \# \{ i, \: \exists j \neq i, \: |x_i - x_j| \le \eta_n \} = \frac{1}{n} \sum_{i=1}^n (1-Y_i^n)$. Hence, we need to show that $\frac{1}{n} \sum_{i=1}^n Y_i^n$ converges to $\displaystyle I_{\rho, \eta} := \int_{{\mathbb R}^3} \rho(x) e^{-\rho(x) \frac{4\pi}{3} \eta^3} dx $ in probability.
\noindent {\em Step 1}. We show that $ {\mathbb E} Y_1^n \: \xrightarrow[n \rightarrow +\infty]{} \: I_{\rho,\eta}$. Indeed, by independence, \begin{align*}
{\mathbb E} Y_1^n & = \int_{{\mathbb R}^3} \Big( \int_{{\mathbb R}^3} 1_{B(y,\eta_n)^c}(x) \rho(y)dy \Big)^{n-1} \rho(x) dx = \int_{{\mathbb R}^3} \Big( 1- \int_{B(x, \eta_n)} \rho(y) dy \Big)^{n-1} \rho(x) dx
\end{align*}
At each Lebesgue point $x$ of $\rho$, one has $\frac{1}{|B(x, \eta_n)|} \int_{B(x, \eta_n)} \rho(y) dy \rightarrow \rho(x)$, so that
$$ \Big( 1- \int_{B(x, \eta_n)} \rho(y) dy \Big)^{n-1} \rightarrow e^{-\rho(x) \frac{4\pi}{3} \eta^3} \quad \text{for a.e. $x$} $$ and the result follows by the dominated convergence theorem.
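To spell out the limit used here: at a Lebesgue point $x$ of $\rho$, writing $|B(x,\eta_n)| = \frac{4\pi}{3} \frac{\eta^3}{n}$, we have
$$ (n-1) \int_{B(x, \eta_n)} \rho(y) dy = \frac{n-1}{n} \, \frac{4\pi}{3} \eta^3 \, \Xint-_{B(x,\eta_n)} \rho \; \xrightarrow[n \rightarrow +\infty]{} \; \frac{4\pi}{3} \eta^3 \rho(x), $$
and $\big(1 - \frac{c_n}{n-1}\big)^{n-1} \to e^{-c}$ whenever $c_n \to c$, which gives the stated convergence.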
\noindent {\em Step 2}. We show that $$\text{var} \Big( \frac{1}{n}\sum_{i=1}^n Y^n_i \Big) \rightarrow 0 \quad \text{as} \: n \rightarrow +\infty. $$ By Markov's inequality and Step 1, this implies \eqref{eq:B2.iid}.
\noindent We have \begin{align*} \text{var}\Big( \frac{1}{n} \sum_{i=1}^n Y^n_i \Big) & = \frac{1}{n}\text{var}\, Y^n_1 + \frac{n(n-1)}{n^2} \text{Cov}(Y_1^n, Y_2^n) = \text{Cov}(Y_1^n, Y_2^n) + O\Big(\frac{1}{n}\Big) \end{align*} using that $0 \le Y_1^n \le 1$. It remains to show that the covariance goes to zero. Using the independence of the $x_i$'s, we have the explicit formula \begin{align*}
{\mathbb E} Y_1^n Y_2^n = \int_{{\mathbb R}^6} \Big( \int_{{\mathbb R}^3} 1_{|x-x_1|\ge \eta_n} 1_{|x-x_2|\ge \eta_n} \rho(x) dx \Big)^{n-2} 1_{|x_1-x_2|\ge \eta_n} \rho(x_1) \rho(x_2) dx_1 dx_2. \end{align*} We have \begin{align*}
&\Big( \int_{{\mathbb R}^3} 1_{|x-x_1|\ge \eta_n} 1_{|x-x_2|\ge \eta_n} \rho(x) dx \Big)^{n-2} \\
= & \Big( 1 - \int_{B(x_1,\eta_n)} \!\rho - \int_{B(x_2,\eta_n)}\!\rho + \Big(\int_{B(x_1,\eta_n) \cap B(x_2,\eta_n)} \!\! \rho\Big) \, 1_{|x_1-x_2|\le 2\eta_n}\Big)^{n-2}\\
= & e^{-\frac{4\pi}{3}\eta^3 \Xint-_{B(x_1,\eta_n)} \rho} \, e^{-\frac{4\pi}{3}\eta^3 \Xint-_{B(x_2,\eta_n)} \rho} \, e^{R_n(x_1,x_2)}, \quad |R_n(x_1,x_2)|\le C 1_{|x_1-x_2|\le 2\eta_n} + C n^{-1}. \end{align*} This quantity converges for almost every $(x_1,x_2)$ to $e^{-\frac{4\pi}{3}\eta^3\rho(x_1)} e^{-\frac{4\pi}{3}\eta^3\rho(x_2)}$ and it follows by the dominated convergence theorem that \begin{align*}
{\mathbb E} Y_1^n Y_2^n \rightarrow \Big(\int_{{\mathbb R}^3} e^{-\frac{4\pi}{3}\eta^3\rho(x_1)} \rho(x_1) dx_1 \Big) \Big(\int_{{\mathbb R}^3} e^{-\frac{4\pi}{3}\eta^3\rho(x_2)}\rho(x_2) dx_2 \Big) = \lim_{n \rightarrow +\infty} ({\mathbb E} Y_1^n)^2,
\end{align*} which yields the result.
\end{document} |
\begin{document}
\allowdisplaybreaks
\title{Boundary value problems for Willmore curves in $\R^2$}
\author{Rainer Mandel} \address{R. Mandel
\break Scuola Normale Superiore di Pisa
\break I - Pisa, Italy} \email{[email protected]} \date{}
\subjclass[2000]{Primary: } \keywords{}
\begin{abstract}
In this paper the Navier problem and the Dirichlet problem for Willmore curves in $\R^2$ are solved. \end{abstract}
\maketitle
\section{Introduction}
\parindent0mm
The Willmore functional of a given smooth regular curve $\gamma\in C^\infty([0,1],\R^n)$ is given by
$$
W(\gamma) := \frac{1}{2}\int_\gamma \kappa_\gamma^2
$$
where $\kappa_\gamma$ denotes the signed curvature function of $\gamma$.
The value $W(\gamma)$ can be interpreted as a bending energy of the curve $\gamma$ which one usually tends
to minimize in a suitable class of admissible curves where the latter is chosen according to the mathematical
or real-world problem one is interested in. One motivation for considering boundary value
problems for Willmore curves originates from the industrial fabrication process called wire bonding
\cite{Koo_ideal_bent_contours}. Here, engineers aim at interconnecting small semiconductor devices via wire
bonds in such a way that energy losses within the wire are smallest possible. Typically the initial position
and direction as well as the end position and direction of the curve are prescribed by the arrangement of the
semiconductor devices. Neglecting energy losses due to the length of a wire one ends up with studying the
Dirichlet problem for Willmore curves in $\R^3$. In the easier two-dimensional case the Dirichlet problem
consists of finding a regular curve $\gamma\in C^\infty([0,1],\R^2)$ satisfying
\begin{align} \label{Gl Dirichlet BVP}
W'(\gamma)=0,\qquad
\gamma(0)=A,\; \gamma(1)=B,\qquad
\frac{\gamma'(0)}{\|\gamma'(0)\|}= \vecII{\cos(\theta_1)}{\sin(\theta_1)},\;
\frac{\gamma'(1)}{\|\gamma'(1)\|}= \vecII{\cos(\theta_2)}{\sin(\theta_2)}
\end{align}
for given boundary data $A,B\in\R^2$ and $\theta_1,\theta_2\in\R$. Here we used the shorthand notation
$W'(\gamma)=0$ in order to indicate that $\gamma$ is a Willmore curve which, by
definition, is a critical point of the Willmore functional.
Our main result, however, concerns the Navier problem for Willmore curves in $\R^2$ where one looks for
regular curves $\gamma\in C^\infty([0,1],\R^2)$ satisfying
\begin{align} \label{Gl Navier BVP}
W'(\gamma)=0,\qquad \gamma(0)=A,\;\gamma(1)=B,\qquad
\kappa_\gamma(0)=\kappa_1,\;\kappa_\gamma(1)=\kappa_2
\end{align}
for given $A,B\in\R^2$ and $\kappa_1,\kappa_2\in\R$.
In the special case $\kappa_1=\kappa_2=:\kappa$ the
boundary value problem \eqref{Gl Navier BVP} was investigated by Deckelnick and Grunau
\cite{DecGru_Boundary_value_problems} under the restrictive assumption that $\gamma$ is a smooth symmetric
graph lying on one side of the straight line joining $A$ and $B$. More precisely,
they assumed $A=(0,0),B=(1,0)$ as well as $\gamma(t)=(t,u(t))\,(t\in [0,1])$ for some positive
symmetric function $u\in C^4(0,1)\cap C^2[0,1]$. They found that there is a positive number $M_0\approx 1.34380$ such
that \eqref{Gl Navier BVP} has precisely two graph solutions for $0<|\kappa|<M_0$, precisely one graph solution for $|\kappa|\in\{0,M_0\}$ and no
graph solutions otherwise, see Theorem~1 in \cite{DecGru_Boundary_value_problems}.
In a subsequent paper \cite{DecGru_Stability_and_symmetry} the same authors proved stability and symmetry
results for such Willmore graphs. To the author's knowledge, no further results related to the Navier
problem are known. The aim of our paper is to fill this gap in the literature by providing
the complete solution theory both for the Navier problem and for the Dirichlet problem. Special attention
will be paid to the symmetric Navier problem
\begin{align} \label{Gl Navier BVP symm}
W'(\gamma)=0,\qquad
\gamma(0)=A,\; \gamma(1)=B,\qquad
\kappa_\gamma(0)=\kappa,\; \kappa_\gamma(1)=\kappa
\end{align}
which, as we will show in Theorem \ref{Thm Navier kappa1=kappa2}, admits a beautiful solution theory
extending the results obtained in \cite{DecGru_Boundary_value_problems} which we described above. Our
results concerning this boundary value problem not only address existence and multiplicity issues but also provide qualitative information about the solutions. In \cite{DecGru_Boundary_value_problems} the
authors asked whether symmetric boundary conditions imply the symmetry of the solution. In order to analyse
such interesting questions not only for Willmore graphs as in
\cite{DecGru_Boundary_value_problems,DecGru_Stability_and_symmetry} but also for general Willmore curves we
fix the notions of axially symmetric and pointwise symmetric regular curves, which we believe to be natural.
\begin{defn}
Let $\gamma\in C^\infty([0,1],\R^2)$ be a regular curve of length $L>0$ and let $\hat\gamma$ denote its
arc-length parametrization. Then $\gamma$ is said to be
\begin{itemize}
\item[(i)] axially symmetric if there is an orthogonal matrix $P\in O(2)$ satisfying $\det(P)=-1$
such that $s\mapsto \hat\gamma(s)+P\hat\gamma(L-s)$ is constant on $[0,L]$,
\item[(ii)] pointwise symmetric if $s\mapsto \hat\gamma(s)+\hat\gamma(L-s)$ is constant on $[0,L]$.
\end{itemize}
\end{defn}
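As a simple illustration of (i) (a sanity check, using the notation of the definition): if $\gamma(t)=(t,u(t))$ is a graph with $u(0)=u(1)=0$ and $u$ symmetric about $t=\frac{1}{2}$, then the arc-length parametrization satisfies $\hat\gamma(L-s) = (1-\hat\gamma_1(s),\hat\gamma_2(s))$, so that for the reflection $P = \mathrm{diag}(1,-1)$, which has $\det(P)=-1$, one finds
$$ \hat\gamma(s) + P\hat\gamma(L-s) = (1,0) \quad \text{for all } s \in [0,L], $$
i.e. $\gamma$ is axially symmetric in the sense of (i).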
Let us briefly explain why these definitions are properly chosen. First, being axially or
pointwise symmetric does not depend on the parametrization of the curve, which is a necessary condition for
a geometrical property. Second, the definition from (i) implies
$P(\hat\gamma(L)-\hat\gamma(0))=\hat\gamma(L)-\hat\gamma(0)$ so that $P$ realizes a
reflection about the straight line from $\gamma(0)=\hat\gamma(0)$ to $\gamma(1)=\hat\gamma(L)$. Hence, (i)
may be considered as a natural generalization of the notion of a symmetric function. Similarly, (ii) extends the notion of an odd
function. Since our results describe the totality of Willmore curves in $\R^2$ we will need more refined
ways of distinguishing them. One way of defining suitable types of Willmore curves is based on the observation
(to be proved later) that the curvature function of the arc-length parametrization of such a curve
is given by
\begin{equation} \label{Gl kappa_ab}
\kappa_{a,b}(s) = \sqrt{2}a \cn(as+b) \qquad a\geq 0,\; -T/2\leq b<T/2.
\end{equation}
Here, the symbol $\cn$ denotes Jacobi's elliptic function with modulus $1/\sqrt{2}$ and periodicity $T>0$,
cf. \cite{AbrSte_Handbook}, chapter 16. In view of this result it turns out that the following notion of a
$(\sigma_1,\sigma_2,j)$-type solution provides a suitable classification of all Willmore curves except the
straight line (which corresponds to the special case $a=0$ in \eqref{Gl kappa_ab}).
\begin{defn} \label{Def sigma1sigma2j}
Let $\gamma\in C^\infty([0,1],\R^2)$ be a Willmore curve of length $L>0$ such that the curvature
function of its arc-length parametrization is $\kappa_{a,b}$ for $a>0$ and $b\in [-T/2,T/2)$. Then $\gamma$ is said to be
of type $(\sigma_1,\sigma_2,j)$ if we have
$$
\sign(b)=\sigma_1,\qquad
\sign(b+aL)=\sigma_2,\qquad
jT \leq aL<(j+1)T.
$$
\end{defn}
Here, the sign function is given by $\sign(z):=1$ for $z\geq 0$ and $\sign(z)=-1$ for $z<0$. Roughly
speaking, a Willmore curve is of type $(\sigma_1,\sigma_2,j)$ if its curvature function runs through the
repetitive pattern given by the $\cn$-function more than $j$ but less than $j+1$ times and is increasing
or decreasing at its initial point and at its end point according to $\sigma_1=\pm 1$ and
$\sigma_2=\pm 1$, respectively. This particular shape of the curvature function of a Willmore curve clearly induces a
special shape of the Willmore curve itself as the following examples show.
\begin{figure}
\caption{Willmore curves of type $(\sigma_1,\sigma_2,j)$}
\end{figure}
In order to describe the entire solution set $\mathcal{F}$ of the symmetric Navier problem \eqref{Gl Navier
BVP symm} we introduce the solution classes $\mathcal{F}_0,\mathcal{F}_1,\ldots$ by setting
$\mathcal{F}_k:= \mathcal{F}_k^+\cup\mathcal{F}_k^-$ and
\begin{align} \label{Gl def Fk}
\begin{aligned}
\mathcal{F}_0^\pm
&:= \Big\{ (\gamma,\kappa)\in C^\infty([0,1],\R^2)\times\R: \; \kappa=0,\;
\gamma \text{ is the straight line through } A,B \\
&\qquad\qquad\text{or } 0\leq \pm\kappa<\infty,\; \gamma \text{ is
a } (\mp 1,\pm 1,0)-\text{type solution of \eqref{Gl Navier BVP symm}} \Big\}, \\
\mathcal{F}_k^\pm
&:= \Big\{ (\gamma,\kappa)\in C^\infty([0,1],\R^2)\times\R : \; 0\leq \pm\kappa<\infty \text{ and }
\gamma \text{ is a solution of \eqref{Gl Navier BVP symm}} \\
&\qquad\qquad \text{of type } (\pm 1,\mp 1,k-1) \text{ or }(\mp 1,\pm 1,k) \text{ or } (1,1,k) \text{ or
}(-1,-1,k) \Big\}.
\end{aligned}
\end{align}
In our main result Theorem \ref{Thm Navier kappa1=kappa2} we show amongst other things that every solution
of the symmetric Navier problem \eqref{Gl Navier BVP symm} belongs to precisely one of these sets, in
other words we show $\mathcal{F}= \bigcup_{k\in\N_0} \mathcal{F}_k$ where the union on the right-hand side
is disjoint. In this decomposition the sets $\mathcal{F}_1,\mathcal{F}_2,\ldots$ will turn out to have a
similar ``shape'' whereas $\mathcal{F}_0$ looks different, see figure \ref{Fig 1}.
\begin{figure}
\caption{An illustration of $\mathcal{F}_0$ and $\mathcal{F}_1$}
\label{Fig 1}
\end{figure}
Due to space limitations we only illustrated the union of
$\mathcal{F}_0=\mathcal{F}_0^+\cup\mathcal{F}_0^-$ and $\mathcal{F}_1=\mathcal{F}_1^+\cup\mathcal{F}_1^-$
but the dashed branches emanating from $A_6,A_7$ are intended to indicate that
$\mathcal{F}_1$ connects to the set $\mathcal{F}_2$, which has the same shape as $\mathcal{F}_1$ with
$(2C,M_1)$ replaced by $(4C,M_2)$, and which in turn connects to $\mathcal{F}_3$ and so on. Here the real numbers $0<M_0<2C<M_1<4C<M_2<\ldots$ are given by
$$
C:= \int_0^1 \frac{4t^2}{\sqrt{1-t^4}}\,dt,
\qquad
M_k := \max_{z\in[0,1]} 2z \Big( kC+ 2\int_z^1 \frac{t^2}{\sqrt{1-t^4}}\,dt \Big).
$$
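The constants $C$ and $M_k$ are straightforward to evaluate numerically. The following sketch (ours, assuming NumPy and SciPy are available; it is not part of the paper's argument) computes them by quadrature and a bounded scalar search, and confirms the stated ordering $0<M_0<2C<M_1<4C<M_2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def tail(z):
    # int_z^1 t^2 / sqrt(1 - t^4) dt
    return quad(lambda t: t**2 / np.sqrt(1.0 - t**4), z, 1.0)[0]

C = 4.0 * tail(0.0)

def M(k):
    # M_k = max_{z in [0,1]} 2 z (k C + 2 int_z^1 t^2/sqrt(1-t^4) dt);
    # the maximized function is strictly concave, so a bounded search suffices
    res = minimize_scalar(lambda z: -2.0 * z * (k * C + 2.0 * tail(z)),
                          bounds=(0.0, 1.0), method="bounded")
    return -res.fun

M0, M1, M2 = M(0), M(1), M(2)
print(C, M0, M1, M2)
assert 0 < M0 < 2 * C < M1 < 4 * C < M2   # the ordering stated above
```

The value of $M_0$ obtained this way agrees with the threshold $M_0\approx 1.34380$ quoted later in Remark \ref{Bem 1}~(a).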
The labels $(1,-1,0),(-1,1,0)$, etc. in figure \ref{Fig 1} indicate the type of the solution branch which
is preserved until one of the bifurcation points $B_1,B_2$ or the trivial solution $A_1$ is reached. Let us
remark that the horizontal line does not represent a parameter but only enables us to show seven different solutions $A_1,\ldots,A_7$
of \eqref{Gl Navier BVP symm} for $\kappa=0$ as well as their interconnections in
$C^\infty([0,1],\R^2)\times\R$. The solutions $A_1,A_2,A_3$ belong to
$\mathcal{F}_0$ whereas $A_4,A_5,A_6,A_7$ belong to $\mathcal{F}_1$. The above picture indicates that the
solution set $\mathcal{F}$ is path-connected in any reasonable topology.
In order not to overload this paper we do not give a formal proof of this
statement but only mention that a piecewise parametrization of the solution sets can be realized using the parameter
$z\in [0,1]$ from the proof of Theorem~\ref{Thm Navier kappa1=kappa2}. Our result leading to
figure~\ref{Fig 1} reads as follows.
\begin{thm} \label{Thm Navier kappa1=kappa2}
Let $A,B\in\R^2,A\neq B$ and $\kappa\in\R$. Then the following statements concerning the symmetric Navier
problem \eqref{Gl Navier BVP symm} hold true:
\begin{itemize}
\item[(i)] There are
\begin{itemize}
\item[(a)] precisely three $\mathcal{F}_0$-solutions in case $\kappa=0$,
\item[(b)] precisely two $\mathcal{F}_0$-solutions in case $0<|\kappa|\|B-A\|<M_0$,
\item[(c)] precisely one $\mathcal{F}_0$-solution in case $|\kappa|\|B-A\|=M_0$,
\item[(d)] no $\mathcal{F}_0$-solutions in case $|\kappa|\|B-A\|>M_0$
\end{itemize}
\item[] and for all $k\geq 1$ there are
\begin{itemize}
\item[(e)] precisely four $\mathcal{F}_k$-solutions in case $|\kappa|\|B-A\|<2kC$,
\item[(f)] precisely two $\mathcal{F}_k$-solutions in case $2kC\leq |\kappa|\|B-A\|<M_k$,
\item[(g)] precisely one $\mathcal{F}_k$-solution in case $|\kappa|\|B-A\|=M_k$,
\item[(h)] no $\mathcal{F}_k$-solutions in case $|\kappa|\|B-A\|>M_k$.
\end{itemize}
\item[(ii)] The minimal energy solutions of \eqref{Gl Navier BVP symm} are
given as follows:
\begin{itemize}
\item[(a)] In case $\kappa=0$ the Willmore minimizer is the straight line joining $A$ and $B$.
\item[(b)] In case $2kC<|\kappa|\|B-A\|\leq M_k$ for some $k\in\N_0$ the Willmore minimizer is
of type $(-\sign(\kappa),\sign(\kappa),k)$.
\item[(c)] In case $M_k<|\kappa|\|B-A\|\leq 2(k+1)C$ for some $k\in\N_0$ the Willmore minimizer is the
unique solution of type $(\sign(\kappa),-\sign(\kappa),k)$.
\end{itemize}
\item[(iii)] All $(\sigma_1,\sigma_2,j)$-type solutions with $\sigma_1=-\sigma_2$ are axially symmetric.
The $(\sigma_1,\sigma_2,j)$-type solutions with $\sigma_1=\sigma_2$ are not axially symmetric and
pointwise symmetric only in case $\kappa=0$.
\end{itemize}
\end{thm}
We would like to point out that we can write down almost explicit formulas for the solutions obtained in
Theorem \ref{Thm Navier kappa1=kappa2} using the formulas from our general result which we will provide in
Theorem~\ref{Thm Navier BVP the general case} and the subsequent Remark~\ref{Bem allgemeines Resultat}~(b).
Furthermore, let us note that the proof of Theorem \ref{Thm Navier kappa1=kappa2}~(ii)~(b) reveals which of
the two $(-\sign(\kappa),\sign(\kappa),k)$-type solutions is the Willmore minimizer. We did not provide this
piece of information in the above statement in order not to anticipate the required notation.
At this stage let us only remark that in figure \ref{Fig 1} the Willmore minimizers from Theorem \ref{Thm
Navier kappa1=kappa2}~(ii)~(b) lie on the left parts of the branches consisting of
$(-\sign(\kappa),\sign(\kappa),0)$-type respectively $(-\sign(\kappa),\sign(\kappa),1)$-type solutions.
\begin{bem} \label{Bem 1}~
\begin{itemize}
\item[(a)] The numerical value for $M_0$ is $1.34380$ which is precisely the threshold value
found in Theorem~1 in~\cite{DecGru_Boundary_value_problems}. The existence results for graph-type
solutions from \cite{DecGru_Boundary_value_problems} are therefore reproduced by Theorem \ref{Thm
Navier kappa1=kappa2} (i)(b)-(d). Notice that a $(\sigma_1,\sigma_2,j)$-type solution with $j\geq 1$
cannot be a graph with respect to any coordinate frame. On the other hand, in (i)(a) we found three
$\mathcal{F}_0$-solutions for $\kappa=0$ which are the straight line and two nontrivial Willmore curves of type $(1,-1,0)$ respectively
$(-1,1,0)$. The latter two solutions were not discovered by Deckelnick and Grunau because they are
continuous but non-smooth graphs since the slopes (calculated with respect to the straight line
joining the end points of the curve) at both end points are $+\infty$ or $-\infty$.
Viewed as a curve, however, these solutions are smooth.
\item[(b)] From Theorem \ref{Thm Navier kappa1=kappa2} (i) one deduces that for all
$\kappa$ and all $A\neq B$ there is an infinite sequence of Willmore curves
solving \eqref{Gl Navier BVP symm} whose Willmore energies tend to $+\infty$. Here one uses that the Willmore energies of $\mathcal{F}_k$-solutions
tend to $+\infty$ as $k\to\infty$. We will see in Theorem~\ref{Thm Navier kappa1!=kappa2} that this result
remains true in the nonsymmetric case.
\item[(c)] Using the formula for the length of a Willmore curve from our general result
Theorem~\ref{Thm Navier BVP the general case} one can show that for any given $A,B\in\R^2,\kappa\in\R$
satisfying $A\neq B,\kappa\neq 0$ the solution of \eqref{Gl Navier BVP symm} having minimal length $L$
must be of type $(-\sign(\kappa),\sign(\kappa),j)$ for some $j\in\N_0$. More precisely, defining $L^*:=
T\|B-A\|/(\sqrt{2}C)$ for the periodicity $T$ of $\cn$ (see section \ref{sec prelim}) one can show
$L>L^*,L=L^*,L<L^*$ respectively if the solution curve is of type $(\sigma_1,\sigma_2,j)$ with
$\sigma_1=-\sigma_2=\sign(\kappa), \sigma_1=\sigma_2,\sigma_1=-\sigma_2=-\sign(\kappa)$.
Notice that in the case $\|B-A\|=1$ studied by Linn{\'e}r we have $L^*\approx 2.18844$ which is
precisely the value on page 460 in Linn{\'e}r's paper \cite{Lin_Explicit_elastic}.
Numerical plots indicate that in most cases the Willmore curves of minimal length have to be
of type $(-\sign(\kappa),\sign(\kappa),j)$ where $j$ is smallest possible which, however, is not
true in general as the following numerical example shows.
In the special case $\|B-A\|=1,\kappa=9.885$ there is a $(-1,1,2)$-type solution
generated by $a\approx 7.48526$ having length $L_1\approx 2.08043$ and a $(-1,1,3)$-type
solution generated by $a\approx 11.65140$ which has length $L_2\approx 2.08018$ and we observe
$L_2<L_1$.
\item[(d)] Having a look at figure \ref{Fig 1} we find that the set of Willmore minimizers described
in Theorem~\ref{Thm Navier kappa1=kappa2}~(ii) is not connected as a whole, but it is path-connected
within suitable intervals of the parameter $\kappa\|B-A\|$.
\end{itemize}
\end{bem}
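The numerical example in Remark \ref{Bem 1}~(c) can be reproduced from the length formula of Theorem \ref{Thm Navier BVP the general case}: for a $(-1,1,j)$-type solution of the symmetric problem it reduces to $L=\big(jT+2\cn^{-1}(\kappa/(\sqrt{2}a))\big)/a$. A short sketch (ours, assuming SciPy; the parameter $m=1/2$ in \texttt{scipy.special.ellipj} corresponds to the modulus $1/\sqrt{2}$):

```python
import numpy as np
from scipy.special import ellipj, ellipk
from scipy.optimize import brentq

m = 0.5                    # parameter m = (1/sqrt(2))^2 for modulus 1/sqrt(2)
T = 4.0 * ellipk(m)        # periodicity of cn, approx 7.41630

def cn_inv(t):
    # inverse of cn on [0, T/2], where cn decreases from 1 to -1
    return brentq(lambda u: ellipj(u, m)[1] - t, 0.0, T / 2.0)

def length(kappa, a, j):
    # length of the (-1,1,j)-type solution generated by a (symmetric case)
    return (j * T + 2.0 * cn_inv(kappa / (np.sqrt(2.0) * a))) / a

L1 = length(9.885, 7.48526, 2)    # the (-1,1,2)-type solution
L2 = length(9.885, 11.65140, 3)   # the (-1,1,3)-type solution
print(L1, L2)
assert L2 < L1
```

The computed lengths agree with the values $L_1\approx 2.08043$ and $L_2\approx 2.08018$ quoted above.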
In the nonsymmetric case $\kappa_1\neq \kappa_2$ it seems to be difficult to identify nicely
behaved connected sets which constitute the whole solution set. However, we find existence and
multiplicity results in the spirit of Remark \ref{Bem 1} (b). We can prove the following result.
\begin{thm} \label{Thm Navier kappa1!=kappa2}
Let $A,B\in\R^2, \kappa_1,\kappa_2\in\R$ satisfy $A\neq B,\kappa_1\neq \kappa_2$ and let
$\sigma_1,\sigma_2\in\{-1,+1\}$. Then there is a smallest natural number $j_0$ such that
for all $j\geq j_0$ the Navier problem \eqref{Gl Navier BVP} has at least one
$(\sigma_1,\sigma_2,j)$-type solution $\gamma_j$ such that $W(\gamma_{j_0}) < W(\gamma_{j_0+1}) < \ldots$
and $W(\gamma_j) \to +\infty$ as $j\to\infty$. Moreover, we have $j_0\leq j_0^*$ where $j_0^*$
is the smallest natural number satisfying
\begin{equation*}
\|B-A\|^2
\leq \begin{cases}
\frac{4C^2(j_0^*-\frac{1}{2})^2}{\max\{\kappa_1^2,\kappa_2^2\}}
+ \frac{4(\kappa_2-\kappa_1)^2}{\max\{\kappa_1^4,\kappa_2^4\}} &,\text{if }\sigma_1=\sigma_2\\
\frac{4C^2(j_0^*-1)^2}{\max\{\kappa_1^2,\kappa_2^2\}}
+ \frac{4(\kappa_2-\kappa_1)^2}{\max\{\kappa_1^4,\kappa_2^4\}}&,\text{if } \sigma_1=-\sigma_2.
\end{cases}
\end{equation*}
\end{thm}
The proof of both theorems will be achieved by using an explicit expression for Willmore curves like the
ones used in Linn{\'e}r's work \cite{Lin_Explicit_elastic} and Singer's lectures \cite{Sin_Lectures}. In
Proposition \ref{Prop Navier} we show that the set of Willmore curves emanating from $A\in\R^2$ is
generated by four real parameters so that solving the Navier problem is reduced to adjusting these
parameters in such a way that the four equations given by the boundary conditions are satisfied. The proofs
of Theorem \ref{Thm Navier kappa1=kappa2} and Theorem \ref{Thm Navier kappa1!=kappa2} will therefore be
based on explicit computations which even allow us to write down the complete solution theory for the general
Navier problem \eqref{Gl Navier BVP}, see Theorem \ref{Thm Navier BVP the general case}.
Basically the same strategy can be pursued to solve the general Dirichlet problem~\eqref{Gl Dirichlet
BVP}. In view of the fact that a discussion of this boundary value problem does not reveal any new
methodological aspect we decided to defer it to Appendix~B. Notice that even in the presumably simplest
special cases $\theta_2-\theta_1\in\{-\pi/2,0,\pi/2\}$ it requires non-negligible computational effort to
prove optimal existence and multiplicity results for \eqref{Gl Dirichlet BVP} like the ones in
Theorem~\ref{Thm Navier kappa1=kappa2}. We want to stress that it would be desirable to prove the
existence of infinitely many solutions of the Navier problem and the Dirichlet problem also in
higher dimensions where Willmore curves $\gamma\in C^\infty([0,1],\R^n)$ with $n\geq 3$ are considered. It
would be very interesting to compare such a solution theory to the corresponding theory for Willmore
surfaces established by Sch\"atzle \cite{Sch_The_Willmore_boundary} where the existence of one solution of the
corresponding Dirichlet problem is proved.
Let us finally outline how this paper is organized. In section \ref{sec prelim} we review some preliminary
facts about Willmore curves and Jacobi's elliptic functions appearing in the definition of $\kappa_{a,b}$,
see \eqref{Gl kappa_ab}. Next, in section \ref{sec Navier}, we provide all information concerning the Navier
problem starting with a derivation of the complete solution theory for the general Navier problem \eqref{Gl
Navier BVP} in subsection \ref{subsec general result}. In the subsections \ref{subsec Navier symmertric}
and \ref{subsec Navier nonsymmertric} this result will be applied to the special cases $\kappa_1=\kappa_2$
respectively $\kappa_1\neq \kappa_2$ in order to prove Theorem~\ref{Thm Navier kappa1=kappa2} and Theorem
\ref{Thm Navier kappa1!=kappa2}. In Appendix~A we provide the proof of one auxiliary result (Proposition
\ref{Prop Navier}) and Appendix~B is devoted to the solution theory for the general Dirichlet problem.
\section{Preliminaries} \label{sec prelim}
The Willmore functional is invariant under reparametrization so that the arc-length
parametri\-zation $\hat\gamma$ of a given Willmore curve $\gamma$ is also a Willmore curve. It is known that
the curvature function $\hat\kappa$ of $\hat\gamma$ satisfies the ordinary differential equation
\begin{equation} \label{Gl Willmore equation}
- \hat\kappa''(s) = \frac{1}{2}\hat\kappa(s)^3,
\end{equation}
in particular it is not restrictive to consider smooth curves only. A derivation of \eqref{Gl Willmore
equation} may be found in Singer's lectures \cite{Sin_Lectures}. Moreover, we
will use the fact that every regular planar curve $\gamma\in C^\infty([0,1],\R^2)$ admits the
representation
\begin{align} \label{Gl ansatz gamma allgemein}
\begin{aligned}
\gamma\sim \hat\gamma,\qquad
\hat\gamma(t)
= A+ Q\int_0^t \vecII{\cos(\int_0^s \hat\kappa(r)\,dr)}{\sin(\int_0^s\hat\kappa(r)\,dr)} \,ds
\qquad (t\in [0,L])
\end{aligned}
\end{align}
where $A$ is a point in $\R^2$ and $Q\in SO(2)$ is a rotation matrix. Here we used the symbol
$\gamma\sim\hat\gamma$ in order to indicate that $\gamma,\hat\gamma$ can be transformed into each other by
some diffeomorphic reparametrization. In Proposition \ref{Prop Navier} we will show that a regular curve
$\gamma$ is a Willmore curve if and only if the function $\hat\kappa$ in
\eqref{Gl ansatz gamma allgemein} is given by $\hat\kappa=\kappa_{a,b}$ for $\kappa_{a,b}$ as in \eqref{Gl
kappa_ab} with ${a\geq 0},{L>0},{-T/2\leq b<T/2}$. In such a situation we will say that the
parameters $a,b,L$ are admissible. Since this fact is fundamental for the proofs of our results we mention
some basic properties of Jacobi's elliptic functions $\sn,\cn,\dn$. Each of these functions maps $\R$ into
the interval $[-1,1]$ and is $T$-periodic where $T:= 4\int_0^1 (\tfrac{2}{1-t^4})^{1/2}\,dt \approx
7.41630$. We will use the identities $\cn^4+2\sn^2\dn^2=1$ and $\cn'=-\sn\dn$ on $\R$ as well as
$\arccos(\cn^2)'=\sqrt{2}\cn$ on $[0,T/2]$.
Moreover we will exploit the fact that $\cn$ is $T/2$-antiperiodic, i.e. $\cn(x+T/2)=-\cn(x)$ for all
$x\in\R$, and that $\cn$ is decreasing on $[0,T/2]$ with inverse function $\cn^{-1}$ satisfying
$\cn^{-1}(1)=0$ and $(\cn^{-1})'(t)=-(\tfrac{2}{1-t^4})^{1/2}$ for $t\in (-1,1)$.
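The properties of $\sn,\cn,\dn$ listed above can be checked numerically; the following sketch (ours, assuming NumPy and SciPy) does so, using that the modulus $1/\sqrt{2}$ corresponds to the parameter $m=1/2$ in \texttt{scipy.special.ellipj}:

```python
import numpy as np
from scipy.special import ellipj, ellipk

m = 0.5                  # modulus 1/sqrt(2) corresponds to parameter m = 1/2
T = 4.0 * ellipk(m)      # periodicity, approx 7.41630

u = np.linspace(-10.0, 10.0, 2001)
sn, cn, dn, _ = ellipj(u, m)

# the identity cn^4 + 2 sn^2 dn^2 = 1 on R
assert np.allclose(cn**4 + 2.0 * sn**2 * dn**2, 1.0)

# T-periodicity and T/2-antiperiodicity of cn
assert np.allclose(ellipj(u + T, m)[1], cn, atol=1e-8)
assert np.allclose(ellipj(u + T / 2.0, m)[1], -cn, atol=1e-8)

# cn' = -sn dn, checked by a central difference quotient
h = 1e-6
dcn = (ellipj(u + h, m)[1] - ellipj(u - h, m)[1]) / (2.0 * h)
assert np.allclose(dcn, -sn * dn, atol=1e-5)

print(T)
```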
\section{The Navier problem} \label{sec Navier}
\subsection{The general result} \label{subsec general result}
The Navier boundary value problem \eqref{Gl Navier BVP} does not appear to be easily
solvable since the curvature function $\det(\gamma',\gamma'')|\gamma'|^{-3}$ of a given
regular smooth curve $\gamma$ does not in general solve a reasonable differential equation.
If, however, we parametrize $\gamma$ by arc-length we obtain a smooth regular curve
$\hat\gamma:[0,L]\to\R^2$ whose curvature function $\hat\kappa$ satisfies the differential equation
$-2\hat\kappa''=\hat\kappa^3$ on $[0,L]$ which is explicitly solvable. Indeed, in Proposition~\ref{Prop
Navier} we will show that the nontrivial solutions of this differential equation are given by the functions
$\kappa_{a,b}$ from \eqref{Gl kappa_ab} so that the general form of a Willmore curve $\gamma$ is given by
\begin{align} \label{Gl ansatz gamma}
\begin{aligned}
\gamma\sim\hat\gamma,\qquad
\hat\gamma(t)
&= A+ Q\int_0^t \vecII{\cos(\int_0^s
\kappa_{a,b}(r)\,dr)}{\sin(\int_0^s\kappa_{a,b}(r)\,dr)} \,ds
\qquad \text{for all }t\in [0,L]
\end{aligned}
\end{align}
where $Q\in SO(2)$ is a rotation matrix and $L>0$ denotes the length of $\gamma$.
In such a situation we will say that the Willmore curve $\gamma$ is generated by $a,b,L$.
As a consequence, a Willmore curve $\gamma$ solves the Navier boundary value problem \eqref{Gl Navier BVP}
once we determine the parameters $a,b,L,Q$ such that the equations
\begin{align} \label{Gl Navier BC}
\hat\gamma(L)-\hat\gamma(0)=B-A,\qquad \kappa_{a,b}(0)=\kappa_1,\qquad \kappa_{a,b}(L)=\kappa_2,
\end{align}
are satisfied. Summarizing the above statements and taking into account Definition \ref{Def sigma1sigma2j}
we obtain the following preliminary result the proof of which we defer to Appendix A.
\begin{prop} \label{Prop Navier}
Let $\gamma\in C^\infty([0,1],\R^2)$ be a regular curve. Then the following is true:
\begin{itemize}
\item[(i)] The curve $\gamma$ solves the Navier problem \eqref{Gl Navier BVP} if and only if $\gamma$
is given by \eqref{Gl ansatz gamma} for admissible parameters $a,b,L$ and $Q\in
SO(2)$ satisfying \eqref{Gl Navier BC}.
\item[(ii)] Let $\sigma_1,\sigma_2\in\{-1,+1\},j\in\N_0$ be given and assume $\gamma$ satisfies
\eqref{Gl ansatz gamma}. Then $\gamma$ is of type $(\sigma_1,\sigma_2,j)$ if and only if there is
some $\tilde b\in [-T/2,T/2)$ such that
\begin{align}\label{Gl b+aL}
\begin{aligned}
b+aL = (j+\lceil(b-\tilde b)/T \rceil) T + \tilde b,\qquad
\sign(b)=\sigma_1, \qquad
\sign(\tilde b)=\sigma_2.
\end{aligned}
\end{align}
\end{itemize}
\end{prop}
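Proposition \ref{Prop Navier}~(ii) in particular shows how to read off the type from admissible parameters $a,b,L$: reduce $b+aL$ modulo $T$ into $[-T/2,T/2)$ to obtain $\tilde b$ and solve \eqref{Gl b+aL} for $j$. A minimal sketch (ours; the helper \texttt{classify} and the hard-coded numerical value of $T$ are assumptions made for illustration only):

```python
import math

T = 7.41630  # periodicity of cn (numerical value from section 2)

def sign(z):
    # the paper's convention: sign(z) = 1 for z >= 0 and -1 for z < 0
    return 1 if z >= 0 else -1

def classify(a, b, L):
    """Return (sigma_1, sigma_2, j) for the Willmore curve generated by a, b, L,
    following b + aL = (j + ceil((b - bt)/T)) T + bt with bt in [-T/2, T/2)."""
    s = b + a * L
    bt = ((s + T / 2.0) % T) - T / 2.0      # reduce b + aL into [-T/2, T/2)
    j = round((s - bt) / T) - math.ceil((b - bt) / T)
    return sign(b), sign(bt), j
```

For instance, `classify(1.0, -1.0, 1.5 * T)` yields a $(-1,1,1)$-type curve, consistent with $T\leq aL<2T$ in Definition \ref{Def sigma1sigma2j}.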
In Theorem \ref{Thm Navier BVP the general case} we provide the complete solution
theory for the boundary value problem \eqref{Gl Navier BVP} which, due to Proposition \ref{Prop Navier}, is
reduced to solving the equivalent problem \eqref{Gl Navier BC}. In order to write down the
precise conditions when $(\sigma_1,\sigma_2,j)$-type solutions of \eqref{Gl Navier BVP} exist we
need the functions
\begin{align} \label{Gl def f}
\begin{aligned}
f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a)
&:= \Big(j+\Big\lceil
\frac{\sigma_1\cn^{-1}(\frac{\kappa_1}{\sqrt{2}a})-\sigma_2\cn^{-1}(\frac{\kappa_2}{\sqrt{2}a})}{T}\Big\rceil\Big)\cdot
C \\
&- \sigma_1 \begin{cases}
\int_{\frac{\kappa_1}{\sqrt{2}a}}^{\frac{\kappa_2}{\sqrt{2}a}}\frac{t^2}{\sqrt{1-t^4}}\,dt &
,\text{if }\sigma_1=\sigma_2 \\
\int_{\frac{\kappa_1}{\sqrt{2}a}}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt+\int_{\frac{\kappa_2}{\sqrt{2}a}}^1
\frac{t^2}{\sqrt{1-t^4}}\,dt &,\text{if }\sigma_1\neq \sigma_2
\end{cases}
\end{aligned}
\end{align}
where $\kappa_1,\kappa_2\in\R$ and $a\geq a_0:=\max\{|\kappa_1|,|\kappa_2|\}/\sqrt{2}$.
\begin{thm} \label{Thm Navier BVP the general case}
Let $A,B\in\R^2,A\neq B$ and $\kappa_1,\kappa_2\in\R$ be given, let
$\sigma_1,\sigma_2\in \{-1,+1\},{j\in\N_0}$. Then all solutions of the Navier problem
\eqref{Gl Navier BVP} which are regular curves of type $(\sigma_1,\sigma_2,j)$ are given by \eqref{Gl
ansatz gamma} for $a,b,L$ and the uniquely determined\footnote{The unique
solvability for the rotation matrix $Q$ will be ensured by arranging $a,b,L$ in such a way that the
Euclidean norms on both sides of the equation are equal, see also Remark \ref{Bem allgemeines
Resultat} (c).} rotation matrix $Q\in SO(2)$ satisfying
\begin{align} \label{Gl equation for bQ}
\begin{aligned}
b &= \sigma_1 \cn^{-1}\Big(\frac{\kappa_1}{\sqrt{2}a}\Big),\qquad\qquad
Q^T(B-A) = \int_0^L \vecII{\cos(\int_0^s \kappa_{a,b}(r)\,dr)}{\sin(\int_0^s \kappa_{a,b}(r)\,dr)}\,ds,\\
L &= \frac{1}{a}\Big( \Big(j+ \Big\lceil
\frac{\sigma_1\cn^{-1}(\frac{\kappa_1}{\sqrt{2}a})-\sigma_2\cn^{-1}(\frac{\kappa_2}{\sqrt{2}a})}{T}\Big\rceil\Big)T+
\sigma_2\cn^{-1}\Big(\frac{\kappa_2}{\sqrt{2}a}\Big)-\sigma_1\cn^{-1}\Big(\frac{\kappa_1}{\sqrt{2}a}\Big)\Big)
\end{aligned}
\end{align}
where $a$ is any solution of
\begin{align}\label{Gl eq for a}
\begin{aligned}
\|B-A\|^2 = \frac{2}{a^2} f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a)^2 +
\frac{(\kappa_2-\kappa_1)^2}{a^4}\qquad (a>0,a\geq a_0) \\
\text{provided}\quad
\kappa_1=\pm\sqrt{2}a\Rightarrow\sigma_1=\pm 1,\quad
\kappa_2=\pm\sqrt{2}a\Rightarrow\sigma_2=\pm 1.
\end{aligned}
\end{align}
The Willmore energy of such a curve $\gamma$ is given by
\begin{align*}
W(\gamma)
= \big( a^4 \|B-A\|^2- (\kappa_2-\kappa_1)^2\big)^{1/2}.
\end{align*}
\end{thm}
\begin{proof}
By Proposition \ref{Prop Navier} we have to find all admissible parameters $a,b,L$ satisfying
\eqref{Gl b+aL} and a rotation matrix $Q\in SO(2)$ such that the equation \eqref{Gl Navier BC} holds.
We show that all these parameters are given by \eqref{Gl equation for bQ} and \eqref{Gl eq for a} and it
is a straightforward calculation (which we omit) to check that parameters given by these
equations indeed give rise to a $(\sigma_1,\sigma_2,j)$-type solution of \eqref{Gl Navier BC}. So let
us in the following assume that $a,b,L,Q$ solve \eqref{Gl Navier BC}. Notice that we have $a>0$ by
assumption, i.e. $\gamma$ is not the straight line.
First let us remark that the boundary conditions $\kappa_{a,b}(0)=\kappa_1,\kappa_{a,b}(L)=\kappa_2$ imply
\begin{equation}\label{Gl metaThm 0}
\cn(b)=\frac{\kappa_1}{\sqrt{2}a}, \qquad \cn(b+aL)=\frac{\kappa_2}{\sqrt{2}a},
\end{equation}
so that $|\cn|\leq 1$ gives $\sqrt{2}a\geq \max\{|\kappa_1|,|\kappa_2|\}$ and thus $a\geq a_0$.
{\it 1st step: Simplification.}\; From \eqref{Gl Navier BC} we infer
\begin{align} \label{Gl metaThm 1}
\|B-A\|^2
&= \|\hat\gamma(L)-\hat\gamma(0)\|^2 \notag \\
&= \Big\|\int_0^L Q\vecII{\cos(\int_0^s \kappa_{a,b}(r)\,dr)}{\sin(\int_0^s
\kappa_{a,b}(r)\,dr)} \,ds\Big\|^2 \notag\\
&= \Big\|\int_0^L \vecII{\cos(\int_0^s \kappa_{a,b}(r)\,dr)}{\sin(\int_0^s \kappa_{a,b}(r)\,dr)} \,ds\Big\|^2
\notag \\
&= \Big( \int_0^L \cos\Big(\int_0^s \kappa_{a,b}(r)\,dr\Big)\,ds \Big)^2
+ \Big( \int_0^L \sin\Big(\int_0^s \kappa_{a,b}(r)\,dr\Big)\,ds \Big)^2.
\end{align}
Using the
identities
\begin{align*}
\frac{d}{dr} \Big(2\arctan\Big(\frac{\sn(ar+b)}{\sqrt{2}\dn(ar+b)}\Big)\Big)
&=\sqrt{2}a\cn(ar+b)=\kappa_{a,b}(r) \qquad (r\in\R), \\
\cos(2\arctan(x)-2\arctan(y))
&= \frac{2(1+xy)^2}{(1+x^2)(1+y^2)}-1
\;\;\qquad (x,y\in\R),\\
\sin(2\arctan(x)-2\arctan(y))
&= \frac{2(1+xy)(x-y)}{(1+x^2)(1+y^2)}
\qquad\qquad (x,y\in\R)
\end{align*}
we obtain after some rearrangements
\begin{align} \label{Gl metaThm 2}
\begin{aligned}
\cos\Big(\int_0^s \kappa_{a,b}(r)\,dr\Big)
&= \cn(b)^2 \cn(as+b)^2 + 2\sn(b)\dn(b)\sn(as+b)\dn(as+b), \\
\sin\Big(\int_0^s \kappa_{a,b}(r)\,dr\Big)
&= -\sqrt{2}\sn(b)\dn(b)\cn(as+b)^2 + \sqrt{2}\cn(b)^2\sn(as+b)\dn(as+b).
\end{aligned}
\end{align}
In order to proceed with calculating $\|B-A\|^2$ we set
\begin{equation}\label{Gl metaThm 3}
\alpha:= \int_0^L \cn(as+b)^2\,ds, \qquad \beta:=\int_0^L \sqrt{2}\sn(as+b)\dn(as+b)\,ds,
\end{equation}
so that \eqref{Gl metaThm 1},\eqref{Gl metaThm 2} yield
\begin{align}\label{Gl metaThm 6}
\begin{aligned}
\|B-A\|^2
&= \Big( \cn(b)^2\alpha + \sqrt{2}\sn(b)\dn(b)\beta
\Big)^2 + \Big( -\sqrt{2}\sn(b)\dn(b)\alpha + \cn(b)^2\beta \Big)^2 \\
&= \alpha^2 (\cn(b)^4+2\sn(b)^2\dn(b)^2) + \beta^2 (\cn(b)^4+2\sn(b)^2\dn(b)^2) \\
&= \alpha^2+\beta^2 \\
&\stackrel{\eqref{Gl metaThm 3}}{=} \frac{1}{a^2} \Big( \int_b^{b+aL} \cn(t)^2\,dt \Big)^2
+ \Big( \Big[-\frac{\sqrt{2}}{a}\cn(as+b)\Big]^L_0\Big)^2 \\
&\stackrel{\eqref{Gl metaThm 0}}{=} \frac{1}{a^2} \Big( \int_b^{b+aL} \cn(t)^2\,dt \Big)^2
+ \frac{(\kappa_2-\kappa_1)^2}{a^4}.
\end{aligned}
\end{align}
In order to derive equation \eqref{Gl eq for a} it therefore remains to show
\begin{equation}\label{Gl metaThm 4}
\frac{1}{\sqrt{2}} \int_b^{b+aL} \cn(t)^2\,dt= f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a).
\end{equation}
{\it 2nd step: Proving \eqref{Gl metaThm 4}.}\; For notational convenience we define $\tilde \kappa_1:=
\kappa_1/(\sqrt{2}a),{\tilde \kappa_2:= \kappa_2/(\sqrt{2}a)}$ so that $a\geq a_0$ implies
$\tilde\kappa_1,\tilde\kappa_2\in [-1,1]$. Then \eqref{Gl
b+aL} and the boundary conditions \eqref{Gl metaThm 0} yield
\begin{align}\label{Gl metaThm 7}
\begin{aligned}
b &= \sigma_1 \cn^{-1}(\tilde\kappa_1),\qquad\quad\tilde\kappa_1=\pm 1 \Rightarrow \sigma_1=\pm 1 \\
\tilde b &= \sigma_2 \cn^{-1}(\tilde\kappa_2), \qquad\quad\tilde\kappa_2=\pm 1 \Rightarrow \sigma_2=\pm
1.
\end{aligned}
\end{align}
Using the shorthand notation $m:= j+\lceil (b-\tilde b)/T\rceil$ we obtain
\begin{align*}
&\; \frac{1}{\sqrt{2}}\int_b^{b+aL} \cn(t)^2\,dt \\
&= \frac{1}{\sqrt{2}}\int_{\sigma_1 \cn^{-1}(\tilde\kappa_1)}^{mT+\sigma_2
\cn^{-1}(\tilde\kappa_2)} \cn(t)^2\,dt \\
&= \frac{m}{\sqrt{2}}\int_0^T \cn(t)^2\,dt + \frac{1}{\sqrt{2}}\int_{\sigma_1
\cn^{-1}(\tilde\kappa_1)}^{\sigma_2 \cn^{-1}(\tilde\kappa_2)} \cn(t)^2\,dt \\
&= 2\sqrt{2}m\int_0^{T/4} \cn(t)^2\,dt + \frac{\sigma_1}{\sqrt{2}}
\begin{cases}
\int_{\cn^{-1}(\tilde\kappa_1)}^{\cn^{-1}(\tilde\kappa_2)} \cn(t)^2\,dt &,\text{if }
\sigma_1=\sigma_2\\
-\int_0^{\cn^{-1}(\tilde\kappa_1)} \cn(t)^2\,dt
-\int_0^{\cn^{-1}(\tilde\kappa_2)} \cn(t)^2\,dt &,\text{if }\sigma_1\neq \sigma_2
\end{cases} \\
&= 2\sqrt{2}m\int_0^1 t^2\sqrt{\frac{2}{1-t^4}}\,dt - \frac{\sigma_1}{\sqrt{2}}
\begin{cases}
\int_{\tilde\kappa_1}^{\tilde\kappa_2} t^2\sqrt{\frac{2}{1-t^4}}\,dt &,\text{if }
\sigma_1=\sigma_2\\
\int_{\tilde\kappa_1}^1 t^2\sqrt{\frac{2}{1-t^4}}\,dt
+ \int_{\tilde\kappa_2}^1 t^2\sqrt{\frac{2}{1-t^4}}\,dt &,\text{if }\sigma_1\neq \sigma_2
\end{cases} \\
&= mC -\sigma_1
\begin{cases}
\int_{\tilde\kappa_1}^{\tilde\kappa_2} \frac{t^2}{\sqrt{1-t^4}}\,dt &,\text{if }
\sigma_1=\sigma_2,\\
\int_{\tilde\kappa_1}^1\frac{t^2}{\sqrt{1-t^4}}\,dt
+ \int_{\tilde\kappa_2}^1\frac{t^2}{\sqrt{1-t^4}}\,dt &,\text{if }\sigma_1\neq\sigma_2
\end{cases} \\
&= f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a).
\end{align*}
The formulas for $L$ and $W$ follow from \eqref{Gl b+aL} and \eqref{Gl metaThm 7}:
\begin{align} \label{Gl length energy}
\begin{aligned}
L
&= \frac{1}{a} \big( (j+\lceil (b-\tilde b)/T\rceil)T+\tilde b-b\big) \\
&= \frac{1}{a}\Big( \Big(j+\Big\lceil
\frac{\sigma_1\cn^{-1}(\tilde\kappa_1)-\sigma_2\cn^{-1}(\tilde\kappa_2)}{T}\Big\rceil\Big)T+\sigma_2
\cn^{-1}(\tilde\kappa_2)-\sigma_1 \cn^{-1}(\tilde\kappa_1) \Big), \\
W &= \frac{1}{2}\int_\gamma \kappa_\gamma^2
\; = \; \frac{1}{2}\int_0^L \kappa_{a,b}(s)^2\,ds
\; = \; a^2 \int_0^L \cn(as+b)^2\,ds\\
&\stackrel{\eqref{Gl metaThm 3}}{=} a^2\cdot \alpha
\; \stackrel{\eqref{Gl metaThm 6}}{=} \;a^2 \cdot (\|B-A\|^2-\beta^2)^{1/2} \\
&= \big( a^4 \|B-A\|^2-(\kappa_2-\kappa_1)^2\big)^{1/2}.
\end{aligned}
\end{align}
\end{proof}
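The energy formula of Theorem \ref{Thm Navier BVP the general case} can be cross-checked numerically, since the identity $W^2=a^4\|\hat\gamma(L)-\hat\gamma(0)\|^2-(\kappa_2-\kappa_1)^2$ holds for any Willmore arc irrespective of the boundary conditions. A sketch (ours, assuming NumPy and SciPy; the chosen parameters $a,b,L$ are arbitrary admissible values), using the arctan antiderivative from the first step of the proof:

```python
import numpy as np
from scipy.special import ellipj
from scipy.integrate import quad

m = 0.5
a, b, L = 1.3, 0.4, 2.7     # arbitrary admissible parameters (our choice)

def theta(s):
    # antiderivative of kappa_{a,b}: theta'(s) = sqrt(2) a cn(a s + b)
    sn, cn, dn, _ = ellipj(a * s + b, m)
    sn0, _, dn0, _ = ellipj(b, m)
    return (2.0 * np.arctan(sn / (np.sqrt(2.0) * dn))
            - 2.0 * np.arctan(sn0 / (np.sqrt(2.0) * dn0)))

dx = quad(lambda s: np.cos(theta(s)), 0.0, L)[0]
dy = quad(lambda s: np.sin(theta(s)), 0.0, L)[0]
dist2 = dx**2 + dy**2       # ||gamma_hat(L) - gamma_hat(0)||^2

W = a**2 * quad(lambda s: ellipj(a * s + b, m)[1]**2, 0.0, L)[0]
k1 = np.sqrt(2.0) * a * ellipj(b, m)[1]
k2 = np.sqrt(2.0) * a * ellipj(a * L + b, m)[1]

print(W, np.sqrt(a**4 * dist2 - (k2 - k1)**2))
```

Both printed values coincide up to quadrature accuracy, in agreement with \eqref{Gl length energy}.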
\begin{bem} \label{Bem allgemeines Resultat}~
\begin{itemize}
\item[(a)] Reviewing the proof of Theorem \ref{Thm Navier BVP the general case} one finds that the
boundary value problem \eqref{Gl Navier BVP} does not have nontrivial solutions in case
$A=B$, i.e. there are no closed Willmore curves. This result has already been obtained by Linn\'{e}r,
cf. Proposition 3 in \cite{Lin_Explicit_elastic}. In fact, \eqref{Gl metaThm 6} would imply $aL=0$ so
that the solution must be the ``trivial curve'' consisting of precisely one point.
\item[(b)] For computational purposes it might be interesting to know that the integrals involved in
the definition of $Q$ need not be calculated numerically. With some computational effort one finds that
$Q$ is the unique rotation matrix satisfying $Qw=B-A$ where the vector $w=(w_1,w_2)\in\R^2$ is given
by
\begin{align*}
w_1
&= \frac{\kappa_1^2(a^4 \|B-A\|^2-(\kappa_2-\kappa_1)^2)^{1/2} +
\sigma_1(\kappa_1-\kappa_2)(4a^4-\kappa_1^4)^{1/2} }{2a^4}, \\
w_2 &= \frac{-\sigma_1 ((4a^4-\kappa_1^4)(a^4\|B-A\|^2-(\kappa_2-\kappa_1)^2))^{1/2} +
\kappa_1^2(\kappa_1-\kappa_2)}{2a^4}.
\end{align*}
Hence, $Q$ can be expressed in terms of $B-A,w_1,w_2$ only.
\end{itemize}
\end{bem}
\subsection{Proof of Theorem \ref{Thm Navier kappa1=kappa2}} \label{subsec Navier symmertric} ~
\subsubsection{Proof of Theorem \ref{Thm Navier kappa1=kappa2} (i) -- Existence results} ~
In order to prove Theorem \ref{Thm Navier kappa1=kappa2} (i) we apply Theorem \ref{Thm Navier BVP the
general case} to the special case $\kappa_1=\kappa_2=\kappa$. As before we set
$\sqrt{2}a_0:= \max\{|\kappa_1|,|\kappa_2|\}=|\kappa|$. Theorem \ref{Thm Navier BVP the general case} tells
us that $a>0,a\geq a_0$ generates a solution of type $(\sigma_1,\sigma_2,j)$ if
and only if
\begin{align*}
\begin{aligned}
\|B-A\|
&= \frac{\sqrt{2}}{a} f_{\sigma_1,\sigma_2,j}(\kappa,\kappa,a) \\
&= \frac{\sqrt 2}{a} \Big(
\Big(j+\Big\lceil
\frac{(\sigma_1-\sigma_2)\cn^{-1}(\frac{\kappa}{\sqrt{2}a})}{T}\Big\rceil \Big)C
- \sigma_1 \begin{cases}
0 &, \text{if }\sigma_1=\sigma_2, \\
2\int_{\frac{\kappa}{\sqrt{2}a}}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt &, \text{if }\sigma_1\neq \sigma_2
\end{cases} \Big) \\
&= \frac{\sqrt 2}{a} \cdot
\begin{cases}
jC &, \text{if }\sigma_1=\sigma_2 \text{ or }(\sigma_1,\sigma_2,\frac{\kappa}{\sqrt{2}a})=(-1,1,-1),
\\
jC+2\int_{\frac{\sigma_2\kappa}{\sqrt{2}a}}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt &, \text{if
}\sigma_1\neq \sigma_2 \text{ and }(\sigma_1,\sigma_2,\frac{\kappa}{\sqrt{2}a})\neq (-1,1,-1)
\end{cases}
\end{aligned}
\end{align*}
provided $|\kappa|=\sqrt{2}a$ implies $\sigma_1=\sigma_2=\sign(\kappa)$. Hence, the
above equation is equivalent to \\~\\~\\
\begin{subequations} \label{Gl main equation symmetric}
\renewcommand\theequation*{(\theparentequation)$_j$}
\begin{align*}
&\|B-A\|
= \frac{\sqrt 2}{a} \cdot
\begin{cases}
jC &, \text{if }\sigma_1=\sigma_2, \\
jC+2\int_{\frac{\sigma_2\kappa}{\sqrt{2}a}}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt &, \text{if
}\sigma_1\neq \sigma_2
\end{cases} \\
&\text{provided}\quad |\kappa|=\sqrt{2}a \Rightarrow \sigma_1=\sigma_2=\sign(\kappa)
\end{align*}
\end{subequations}
so that it remains to determine all solutions $a>0,a\geq a_0$ of \eqref{Gl main equation symmetric}$_j$.
First let us note that there are no $(\sigma_1,\sigma_2,0)$-type
solutions with $\sigma_1=\sigma_2$, see \eqref{Gl main equation symmetric}$_0$. Hence, by definition of the
sets $\mathcal{F}_0,\mathcal{F}_1,\ldots,$
all solutions of the symmetric Navier problem \eqref{Gl Navier BVP symm} are contained in
$\bigcup_{k=0}^\infty\mathcal{F}_k$ which shows $\mathcal{F}=\bigcup_{k=0}^\infty\mathcal{F}_k$. In order
to prove the existence and nonexistence results stated in Theorem \ref{Thm Navier kappa1=kappa2} it
therefore remains to find all solutions belonging to $\mathcal{F}_k= \mathcal{F}_k^+ \cup \mathcal{F}_k^-$
for any given $k\in\N_0$.
In the following we will without loss of generality restrict ourselves to the case $\kappa\geq 0$. This is
justified since every $(\sigma_1,\sigma_2,j)$-type solution of \eqref{Gl Navier BVP symm} generated by
$a,b,L$ gives rise to a $(-\sigma_1,-\sigma_2,j)$-type solution of \eqref{Gl Navier BVP symm} with $\kappa$
replaced by $-\kappa$ which is generated by $a,b-\sigma_1T/2,L$. Let us point out that figure
\ref{Fig 1} may help to understand and visualize the proof of Theorem \ref{Thm Navier kappa1=kappa2}
(i).
{\it Proof of Theorem \ref{Thm Navier kappa1=kappa2} (i)(a)-(d):}\;
First we determine all solutions belonging to $\mathcal{F}_0^+$. All solutions from $\mathcal{F}_0^+$ with
$\kappa=0$ are the straight line from $A$ to $B$ and the solution of type
$(-1,1,0)$ which is generated by $a= C(\sqrt{2}\|B-A\|)^{-1}$, see \eqref{Gl
main equation symmetric}. The solutions from $\mathcal{F}_0^+$ with $\kappa>0$ are of type
$(-1,1,0)$ and therefore generated by $a>0,a\geq a_0$ such that
\begin{equation} \label{Gl F0 I}
\kappa\|B-A\| = 4z\int_z^1 \frac{t^2}{\sqrt{1-t^4}}\,dt
\qquad\text{where } z = \frac{\kappa}{\sqrt{2}a} \in (0,1),
\end{equation}
see \eqref{Gl main equation symmetric}. Notice that $z>0$ is due to the assumption $\kappa>0$ whereas $z\leq
1$ follows from $a\geq a_0$ and $z\neq 1$ follows from the last line in \eqref{Gl main equation symmetric}. The
function defined by the right hand side is positive and strictly concave on $(0,1)$ with global maximum $M_0$. Hence, we
find two solutions, one solution or no solution in $\mathcal{F}_0^+$ according to whether $\kappa\|B-A\|$ is smaller
than, equal to or larger than $M_0$. This proves the assertions (i)(a)-(d) in Theorem \ref{Thm Navier kappa1=kappa2}.
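For the reader's convenience let us verify the concavity statement: writing $g(z):=4z\int_z^1 \frac{t^2}{\sqrt{1-t^4}}\,dt$ for the function defined by the right hand side of \eqref{Gl F0 I}, we obtain
$$
g''(z)
= -\frac{4z^2}{\sqrt{1-z^4}} - \frac{4z^2(3-z^4)}{(1-z^4)^{3/2}}
= -\frac{8z^2(2-z^4)}{(1-z^4)^{3/2}} < 0
\qquad\text{for } z\in (0,1),
$$
so that $g$ is indeed strictly concave on $(0,1)$.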
{\it Proof of Theorem \ref{Thm Navier kappa1=kappa2} (i)(e)-(h):}\; Now we investigate the solutions
belonging to $\mathcal{F}_k^+$ for $k\geq 1$. From \eqref{Gl main equation symmetric} we infer that solutions of type
$(\sigma_1,\sigma_2,k)$ with $\sigma_1=\sigma_2$ exist if and only if
\begin{align*}
\sigma_1=\sigma_2=-1,\; 0\leq\kappa\|B-A\|<2kC
\quad\text{ or }\quad
\sigma_1=\sigma_2=1,\; 0\leq \kappa\|B-A\|\leq 2kC.
\end{align*}
In both cases the solution is unique and given by $a=\sqrt{2}kC\|B-A\|^{-1}$.
(Notice that the special case $\kappa\|B-A\|=2kC$ corresponds to $\kappa=\sqrt{2}a$ so that the last
line in \eqref{Gl main equation symmetric} is responsible for the slight difference between the cases
$\sigma_1=\sigma_2=-1$ and $\sigma_1=\sigma_2=1$.)
Next we study the solutions belonging to $\mathcal{F}_k^+$ which are of type
$(1,-1,k-1)$. According to \eqref{Gl main equation symmetric} these solutions are
generated by all $a>0,a\geq a_0$ which satisfy
\begin{equation} \label{Gl Fk I}
\kappa\|B-A\| = 2z\Big((k-1)C+2\int_{-z}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt\Big)
\qquad\text{where }z= \frac{\kappa}{\sqrt{2}a} \in [0,1).
\end{equation}
Here, $z\in [0,1)$ follows from $a\geq a_0$ and the last line in \eqref{Gl
main equation symmetric}. Since the function defined by the right hand side increases from 0 to
$2kC$ we have precisely one solution of type $(1,-1,k-1)$ for
$0\leq \kappa \|B-A\|<2kC$ and no such solution otherwise.
Finally we study the solutions in $\mathcal{F}_k^+$ which are of type $(-1,1,k)$.
For this kind of solutions equation \eqref{Gl main equation symmetric} reads
\begin{equation} \label{Gl Fk II}
\kappa\|B-A\| = 2z\Big(kC+2\int_z^1 \frac{t^2}{\sqrt{1-t^4}}\,dt\Big)
\qquad\text{where }z= \frac{\kappa}{\sqrt{2}a} \in [0,1).
\end{equation}
The function given by the right hand side is the sum of the linear function $2kCz$ and the strictly concave
right hand side of \eqref{Gl F0 I}; hence it is strictly concave on $[0,1)$ with global maximum $M_k$, it attains the value $0$ at $z=0$ and tends to $2kC$ as $z\to 1$. Hence we find precisely one
$(-1,1,k)$-type solution for $0\leq \kappa\|B-A\|\leq 2kC$, two $(-1,1,k)$-type solutions for $2kC<
\kappa\|B-A\|<M_k$, one $(-1,1,k)$-type solution for $\kappa\|B-A\|=M_k$ and no such solution otherwise.
\subsubsection{Proof of Theorem \ref{Thm Navier kappa1=kappa2} (ii) - Willmore curves of minimal Willmore
energy} ~
In case $\kappa=0$ the straight line joining $A$ and $B$ is the unique minimal energy solution of~\eqref{Gl
Navier BVP symm} so that it remains to analyse the case $\kappa\neq 0$. According to Theorem~\ref{Thm
Navier kappa1=kappa2} the energy of a nontrivial $(\sigma_1,\sigma_2,j)$-type solution of \eqref{Gl Navier
BVP symm} is given by $W = a^2\|B-A\|^2$ where $a$ solves~\eqref{Gl main equation symmetric}$_j$. Hence,
finding the minimal energy solution of \eqref{Gl Navier BVP symm} is equivalent to finding
$\sigma_1,\sigma_2\in\{-1,+1\},j\in\N_0$ and a solution $a$ of \eqref{Gl main equation
symmetric}$_j$ such that $a$ is smallest possible. To this end we will distinguish the cases
$$
\text{(a)}\; 2kC<|\kappa|\|B-A\|\leq M_k\quad (k\in\N_0)
\qquad
\text{(b)}\; M_k<|\kappa|\|B-A\|\leq 2(k+1)C\quad (k\in\N_0).
$$
First let us note that in the above situation we only have to consider $(\sigma_1,\sigma_2,j)$-type
solutions where $j$ is smallest possible.
In case (a) there are two $(-\sign(\kappa),\sign(\kappa),k)$-type solutions (belonging to $\mathcal{F}_k$)
and one $(\sign(\kappa),-\sign(\kappa),k)$-type solution (belonging to $\mathcal{F}_{k+1}$), see figure
\ref{Fig 1}. All other $(\sigma_1,\sigma_2,j)$-type solutions satisfy $j\geq k+1$ and, by the observation
made above, need not be taken into account. According to the formulas \eqref{Gl F0 I},\eqref{Gl Fk
I},\eqref{Gl Fk II} the smallest possible $a$ occurs when $\sigma_1=-\sign(\kappa),\sigma_2=\sign(\kappa)$
and $z:=|\kappa|/(\sqrt{2}a)\in (0,1)$ is the larger solution of \eqref{Gl F0 I} in case $k=0$ respectively
\eqref{Gl Fk II} in case $k\geq 1$.
In case (b) there is one $(\sign(\kappa),-\sign(\kappa),k)$-type solution whereas all other solutions
are of type $(\sigma_1,\sigma_2,j)$ with $j\geq k+1$. Hence, this Willmore curve has minimal
Willmore energy among all solutions of \eqref{Gl Navier BVP symm}.
\subsubsection{Proof of Theorem \ref{Thm Navier kappa1=kappa2} (iii) - Symmetry results} ~
In the following let $R(\tau)\in SO(2)$ denote the orthogonal matrix which realizes a rotation through
the rotation angle $\tau\in\R$, i.e.
$$
R(\tau) := \matII{\cos(\tau)}{-\sin(\tau)}{\sin(\tau)}{\cos(\tau)}.
$$
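In the computations below we will use, without further mention, the standard identities
$$
R(\tau_1)R(\tau_2)=R(\tau_1+\tau_2),\qquad R(\tau)^{T}=R(\tau)^{-1}=R(-\tau)
\qquad\text{for all }\tau,\tau_1,\tau_2\in\R.
$$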
With the following proposition at hand, the proof of our symmetry result for the solutions
$\gamma:[0,1]\to\R^2$ from Theorem \ref{Thm Navier kappa1=kappa2} is
reduced to proving symmetry properties of the curvature function
$\hat\kappa=\det(\hat\gamma',\hat\gamma''):[0,L]\to\R$ of its arc-length parametrization
${\hat\gamma:[0,L]\to\R^2}$. Let us recall that $\hat\kappa$ is said to be symmetric in case
$\hat\kappa(L-s)=\hat\kappa(s)$ for all $s\in [0,L]$ and it is called pointwise symmetric (with respect to
its midpoint) in case ${\hat\kappa(L-s)=-\hat\kappa(s)}$ for all $s\in [0,L]$.
\begin{prop} \label{Prop symmetry}
Let $\gamma:[0,1]\to\R^2$ be a twice differentiable regular curve and let
${\hat\gamma:[0,L]\to\R^2}$ denote its parametrization by arc-length with
curvature function $\hat\kappa$. Then $\gamma$ is
\begin{itemize}
\item[(i)] axially symmetric if and only if $\hat\kappa$ is symmetric,
\item[(ii)] pointwise symmetric if and only if $\hat\kappa$ is pointwise symmetric.
\end{itemize}
\end{prop}
\begin{proof}
Let us start with proving (i). We first assume that $\hat\kappa$ is symmetric, i.e.
$\hat\kappa(s)=\hat\kappa(L-s)$ for all $s\in [0,L]$. We have to prove that there is a $P\in O(2)$ with
$\det(P)=-1$ such that
$\hat\gamma(L-s)+P\hat\gamma(s)$ is constant. Indeed, the symmetry assumption on $\hat\kappa$ implies
$$
\theta+\int_0^{L-s} \hat\kappa(r)\,dr + \int_0^s \hat\kappa(r)\,dr = \theta + \int_0^L
\hat\kappa(r)\,dr =:\theta_L \qquad\text{for all }s\in [0,L]
$$
and we obtain from \eqref{Gl ansatz gamma allgemein}
\begin{align*}
\frac{d}{ds}\big( \hat\gamma(L-s)\big)
&= -\hat\gamma'(L-s)\\
&= \vecII{-\cos(\theta+\int_0^{L-s}\hat\kappa(r)\,dr)}{-\sin(\theta+\int_0^{L-s}\hat\kappa(r)\,dr)} \\
&= \vecII{-\cos(\theta_L -\int_0^s\hat\kappa(r)\,dr)}{-\sin(\theta_L -\int_0^s\hat\kappa(r)\,dr)} \\
&= - R(\theta_L) \vecII{\cos(-\int_0^s\hat\kappa(r)\,dr)}{\sin(-\int_0^s\hat\kappa(r)\,dr)} \\
&= - R(\theta_L)
\matII{1}{0}{0}{-1}\vecII{\cos(\int_0^s\hat\kappa(r)\,dr)}{\sin(\int_0^s\hat\kappa(r)\,dr)} \\
&= - R(\theta_L)
\matII{1}{0}{0}{-1}R(-\theta)
\vecII{\cos(\theta+\int_0^s\hat\kappa(r)\,dr)}{\sin(\theta+\int_0^s\hat\kappa(r)\,dr)} \\
&= -P\hat\gamma'(s)
\qquad\text{where }P := R(\theta_L) \matII{1}{0}{0}{-1}R(-\theta).
\end{align*}
Since $P\in O(2)$ satisfies $\det(P)=-1$ we obtain that $\gamma$ is axially symmetric. Vice
versa, the condition $\hat\kappa(s)=\hat\kappa(L-s)$ for all $s\in [0,L]$ is also necessary for
$\gamma$ to be axially symmetric since $\hat\gamma(L-s)+P\hat\gamma(s)=const$ implies
$\hat\gamma'(L-s)=P\hat\gamma'(s),\hat\gamma''(L-s)=-P\hat\gamma''(s)$ and thus
\begin{align*}
\hat\kappa(L-s)
&= \det(\hat\gamma'(L-s),\hat\gamma''(L-s)) \\
&= \det(P\hat\gamma'(s),-P\hat\gamma''(s)) \\
&= -\det(P)\det(\hat\gamma'(s),\hat\gamma''(s)) \\
&= \hat\kappa(s) \qquad\text{for all }s\in [0,L].
\end{align*}
The proof of (ii) is similar. Assuming $\hat\kappa(s)=-\hat\kappa(L-s)$ for all $s\in [0,L]$ one deduces
$\int_0^L \hat\kappa(r)\,dr=0$, hence $\theta=\theta_L$, and thus
$$
\theta+\int_0^{L-s} \hat\kappa(r)\,dr - \int_0^s \hat\kappa(r)\,dr = \theta_L = \theta
\qquad\text{for all }s\in [0,L]
$$
which, as above, leads to $\hat\gamma(L-s)+\hat\gamma(s)=const$.
\end{proof}
In our proof of Theorem \ref{Thm Navier kappa1=kappa2} (iii) we will apply Proposition \ref{Prop symmetry}
to Willmore curves which, according to Proposition \ref{Prop Navier}, are given by \eqref{Gl ansatz gamma}.
In this context the above result reads as follows.
\begin{cor} \label{Cor symmetry}
Let $\gamma:[0,1]\to\R^2$ be a Willmore curve given by \eqref{Gl ansatz gamma}. Then $\gamma$ is
\begin{itemize}
\item[(i)] axially symmetric if and only if $aL+2b$ is an even multiple of $T/2$,
\item[(ii)] pointwise symmetric if and only if $aL+2b$ is an odd multiple of $T/2$.
\end{itemize}
\end{cor}
\begin{proof}
We apply Proposition \ref{Prop symmetry} to the special case $\hat\kappa=\kappa_{a,b}:[0,L]\to\R$
for admissible parameters $a,b,L$. Hence, both (i) and (ii) follow from
\begin{align*}
\kappa_{a,b}(L-s)
&= \sqrt{2}a\cn(a(L-s)+b) \\
&= \sqrt{2}a\cn(-a(L-s)-b) \\
&= \sqrt{2}a\cn(as+b-(aL+2b))
\end{align*}
and $\cn(z+kt)=(-1)^k\cn(z)$ for all $z\in\R,k\in\Z$ if and only if $t=T/2$.
\end{proof}
{\it Proof of Theorem \ref{Thm Navier kappa1=kappa2} (iii):}\; Let $\kappa\in\R$,
$\sigma_1,\sigma_2\in\{-1,+1\}$ and $j\in\N_0$. The proof of Theorem \ref{Thm Navier kappa1=kappa2}
tells us that every $(\sigma_1,\sigma_2,j)$-type solution of the symmetric Navier Problem \eqref{Gl Navier
BVP symm} is generated by admissible parameters $a,b,L$ which satisfy equation \eqref{Gl b+aL}, in
particular
$$
aL+2b
= mT+(\sigma_2+\sigma_1)\cn^{-1}\big(\frac{\kappa}{\sqrt{2}a}\big)
\qquad\text{for some }m\in\N_0.
$$
In case $\sigma_1=-\sigma_2$ this implies that $aL+2b$ is an even multiple of $T/2$ which, by
Corollary~\ref{Cor symmetry}~(i), proves that the solution is axially symmetric. In case
$\sigma_1=\sigma_2$ we find that the solution is axially symmetric if and only if
$2\sigma_1\cn^{-1}\big(\tfrac{\kappa}{\sqrt{2}a}\big)$ is a multiple of $T$. This is equivalent to
$\cn^{-1}\big(\tfrac{\kappa}{\sqrt{2}a}\big)\in \{0,T/2\}$ and thus to $|\kappa|=\sqrt{2}a$. Due to the
restriction $z<1$ in \eqref{Gl F0 I},\eqref{Gl Fk I},\eqref{Gl Fk II} this equality cannot hold true so
that these solutions are not axially symmetric. Finally we find that $\gamma$ is pointwise
symmetric if and only if $\cn^{-1}\big(\tfrac{\kappa}{\sqrt{2}a}\big) = T/4$, i.e. if and only if
$\kappa=0$.~
$\Box$
\subsection{Proof of Theorem \ref{Thm Navier kappa1!=kappa2}} \label{subsec Navier nonsymmertric} ~
{\it 1st step:}\, We first show that a minimal index $j_0$ having the properties claimed in Theorem \ref{Thm
Navier kappa1!=kappa2} exists. To this end let $\sigma_1,\sigma_2\in\{-1,+1\}$ and $k,l\in\N_0$ with $k<l$
be arbitrary. Our aim is to show that if a $(\sigma_1,\sigma_2,k)$-type solution generated by a solution
$a_k\in [a_0,\infty)$ of \eqref{Gl eq for a}$_k$ exists, then there is a $(\sigma_1,\sigma_2,l)$-type solution
generated by some $a_l\in (a_k,\infty)$ and every such solution satisfies $a_l>a_k$ (i.e. $a_0\leq a_l\leq
a_k$ cannot hold). Once this is shown it follows that there is a minimal index
$j_0=j_0(\sigma_1,\sigma_2)$ such that for all $j\geq j_0$ a $(\sigma_1,\sigma_2,j)$-type solution
$\gamma_j$ of \eqref{Gl Navier BVP} exists with
$W(\gamma_{j_0})<W(\gamma_{j_0+1})<\ldots<W(\gamma_j)\to\infty$ as $j\to\infty$. The latter statement
follows from $a_j\to \infty$ as $j\to\infty$. So it remains to prove the statement from above.
For $l>k$ we have
\begin{align*}
\frac{2f_{\sigma_1,\sigma_2,l}(\kappa_1,\kappa_2,a_k)^2}{a_k^2} &+ \frac{(\kappa_2-\kappa_1)^2}{a_k^4}
> \frac{2f_{\sigma_1,\sigma_2,k}(\kappa_1,\kappa_2,a_k)^2}{a_k^2} + \frac{(\kappa_2-\kappa_1)^2}{a_k^4}
\stackrel{\eqref{Gl eq for a}_k}{=} \|B-A\|^2, \\
&\text{and }\quad \lim_{a\to \infty} \Big( \frac{2f_{\sigma_1,\sigma_2,l}(\kappa_1,\kappa_2,a)^2}{a^2} +
\frac{(\kappa_2-\kappa_1)^2}{a^4}\Big) = 0
\end{align*}
so that the intermediate value theorem provides at least one solution $a_l\in (a_k,\infty)$ of
\eqref{Gl eq for a}$_l$. In addition, every such solution has to lie in $(a_k,\infty)$ because $a_0\leq
a\leq a_k$ implies
\begin{align*}
\frac{2f_{\sigma_1,\sigma_2,l}(\kappa_1,\kappa_2,a)^2}{a^2} + \frac{(\kappa_2-\kappa_1)^2}{a^4}
&\geq \frac{2f_{\sigma_1,\sigma_2,k}(\kappa_1,\kappa_2,a_k)^2}{a^2} + \frac{(\kappa_2-\kappa_1)^2}{a^4} \\
&\geq \frac{2f_{\sigma_1,\sigma_2,k}(\kappa_1,\kappa_2,a_k)^2}{a_k^2} +
\frac{(\kappa_2-\kappa_1)^2}{a_k^4} \\
&\stackrel{\eqref{Gl eq for a}_k}{=} \|B-A\|^2,
\end{align*}
where equality cannot hold simultaneously in both inequalities. Hence, equation \eqref{Gl eq for a}$_l$
does not have a solution in the interval $[a_0,a_k]$ which finishes the first step.
{\it 2nd step:}\, For given $\sigma_1,\sigma_2\in\{-1,+1\}$ let us show $j_0\leq j_0^*$ or equivalently that
for all $j\geq j_0^*$ there is at least one $(\sigma_1,\sigma_2,j)$-type solution of the Navier problem
\eqref{Gl Navier BVP}. So let $j\geq j_0^*$. Then \eqref{Gl def f} implies
\begin{align*}
&f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a_0)
\geq (j-\frac{1}{2}) C
\geq (j_0^*-\frac{1}{2}) C
&&\text{if } \sigma_1=\sigma_2, \\
&f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a_0)
\geq (j-1)C
\geq (j_0^*-1)C
&&\text{if } \sigma_1=-\sigma_2
\end{align*}
so that the choice for $j_0^*$ from Theorem \ref{Thm Navier kappa1!=kappa2} and
$\sqrt{2}a_0=\max\{|\kappa_1|,|\kappa_2|\}$ imply
\begin{align*}
\frac{2f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a_0)^2}{a_0^2} &+ \frac{(\kappa_2-\kappa_1)^2}{a_0^4}
=\frac{4 f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a_0)^2}{\max\{\kappa_1^2,\kappa_2^2\}} +
\frac{4(\kappa_2-\kappa_1)^2}{\max\{\kappa_1^4,\kappa_2^4\}}
\geq \|B-A\|^2, \\
&\text{and }\quad \lim_{a\to \infty} \Big( \frac{2f_{\sigma_1,\sigma_2,j}(\kappa_1,\kappa_2,a)^2}{a^2} +
\frac{(\kappa_2-\kappa_1)^2}{a^4}\Big) = 0.
\end{align*}
Hence, the intermediate value theorem provides at least one solution $a\in [a_0,\infty)$ of equation
\eqref{Gl eq for a}$_j$ which, by Theorem \ref{Thm Navier BVP the general case}, implies that there is at
least one $(\sigma_1,\sigma_2,j)$-type solution of the Navier problem \eqref{Gl Navier BVP}.
~
$\Box$
\section*{Appendix A -- Proof of Proposition \ref{Prop Navier}}
{\it Proof of part (i):}\;
In view of the formula \eqref{Gl ansatz gamma allgemein} for regular curves it remains to prove that
for every $\kappa_0,\kappa_0'\in\R$ with $(\kappa_0,\kappa_0')\neq (0,0)$ the unique solution of the
initial value problem
\begin{equation} \label{Gl Prop1 I}
- \hat\kappa''(s) = \frac{1}{2}\hat\kappa(s)^3,\qquad \hat\kappa(0)=\kappa_0,\;\hat\kappa'(0)=\kappa_0'
\end{equation}
is given by $\kappa_{a,b}(s) = \sqrt{2}a \cn(as+b)$ where $a>0, b\in [-T/2,T/2)$ are chosen according to
\begin{equation} \label{Gl Prop1 II}
a= \big( \frac{1}{4}\kappa_0^4 + {\kappa_0'}^2\big)^{1/4},\qquad
b= -\sign(\kappa_0') \cn^{-1}(\frac{\kappa_0}{\sqrt{2}a}).
\end{equation}
Indeed, \eqref{Gl Prop1 II} implies $\kappa_{a,b}(0)=
\kappa_0$ and the equations $\cn'=-\sn\dn, \cn^4+2\sn^2\dn^2=1$ give
\begin{align*}
\kappa_{a,b}'(0)
&= \sqrt{2} a^2 \cn'(b)
\;=\; -a^2 \cdot \sqrt{2}\sn(b)\dn(b)
\:=\; -a^2 \sign(b)\sqrt{1-\cn^4(b)} \\
&= a^2 \sign(\kappa_0')\sqrt{1-\frac{\kappa_0^4}{4a^4}}
\;=\; \sign(\kappa_0')|\kappa_0'| \\
&= \kappa_0'.
\end{align*}
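Moreover, $\kappa_{a,b}$ solves the differential equation in \eqref{Gl Prop1 I}. Indeed, for the elliptic modulus $k^2=\frac{1}{2}$ (which we assume to be the convention behind $\cn$, being consistent with $-\kappa''=\frac{1}{2}\kappa^3$) the identities $\sn^2+\cn^2=1$ and $\dn^2=1-\frac{1}{2}\sn^2$ yield $\cn'' = -\cn\dn^2+\frac{1}{2}\sn^2\cn = -\cn^3$ and hence
$$
\kappa_{a,b}''(s) = \sqrt{2}a^3\cn''(as+b) = -\sqrt{2}a^3\cn^3(as+b) = -\frac{1}{2}\kappa_{a,b}(s)^3.
$$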
In view of unique solvability of the initial value problem \eqref{Gl Prop1 I} this proves the claim.
{\it Proof of part (ii):}\; This follows from setting $\tilde b:= b+aL -mT$ for $m\in\N_0$ such that
$\tilde b\in [-T/2,T/2)$.
$\Box$
\section*{Appendix B -- The Dirichlet problem}
In this section we want to provide the solution theory for the Dirichlet problem~\eqref{Gl Dirichlet
BVP}. As in our analysis of the Navier problem we first reduce the boundary value problem \eqref{Gl
Dirichlet BVP} for the regular curve $\gamma:[0,1]\to\R^2$ to the corresponding boundary value
problem for its parametrization by arc-length $\hat\gamma:[0,L]\to\R^2$. Being given the
ansatz for $\hat\gamma$ from \eqref{Gl ansatz gamma} (cf. Proposition \ref{Prop Navier}) one finds that
solving~\eqref{Gl Dirichlet BVP} is equivalent to finding admissible parameters $a,b,L$ such that
\begin{equation} \label{Gl Dirichlet BC}
\vecII{\cos(\theta_1 + \int_0^L \kappa_{a,b}(r)\,dr)}{\sin(\theta_1+ \int_0^L \kappa_{a,b}(r)\,dr)}
= \vecII{\cos(\theta_2)}{\sin(\theta_2)}
\;\text{and}\;
\vecII{\int_0^L \cos(\theta_1+\int_0^t \kappa_{a,b}(r)\,dr)\,dt}{\int_0^L \sin(\theta_1+\int_0^t
\kappa_{a,b}(r)\,dr)\,dt} = B-A
\end{equation}
where we may without loss of generality assume $|\theta_2-\theta_1|\leq \pi$. In order to formulate our
result we introduce the functions $\bar\alpha_j,\bar\beta$ and $\bar z\in [-1,1]$ as follows:
\begin{align} \label{Gl Dirichlet Definition hatbeta hatalpha}
\begin{aligned}
\bar\beta(z;\eta) &:= z-\bar z,\qquad\text{where }
\bar z:=\eta\cos(\theta_2-\theta_1+\sigma_1\arccos(z^2))^{1/2}, \\
\bar\alpha_j(z;\eta) &:= \Big(j+\Big\lceil
\frac{\sigma_1\cn^{-1}(z)-\sigma_2\cn^{-1}(\bar z)}{T}\Big\rceil\Big) C - \sigma_1\int_z^1
\frac{t^2}{\sqrt{1-t^4}}\,dt + \sigma_2\int_{\bar z}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt
\end{aligned}
\end{align}
where $\eta\in\{-1,+1\},j\in\N_0$ and $z\in [-1,1]$ is chosen in such a way
that $\bar z\in [-1,1]$ is well-defined. Moreover we will use the abbreviation
$v:=R(-\theta_1)(B-A)\in\R^2$ or equivalently $v=(v_1,v_2)$ with
\begin{align} \label{Gl Dirichlet Definition v}
\begin{aligned}
v_1 &= \quad\cos(\theta_1)(B_1-A_1)+\sin(\theta_1)(B_2-A_2),\\
v_2 &= -\sin(\theta_1)(B_1-A_1)+\cos(\theta_1)(B_2-A_2).
\end{aligned}
\end{align}
The solution theory for the Dirichlet problem then reads as follows:
\begin{thm} \label{Thm Dirichlet}
Let $A,B\in\R^2,A\neq B$ and $\theta_1,\theta_2\in\R$ be given such that $|\theta_2-\theta_1|\leq \pi$
holds, let $\sigma_1,\sigma_2\in\{-1,+1\},j\in\N_0$. Then all solutions $\gamma\in C^\infty([0,1],\R^2)$
of the Dirichlet problem \eqref{Gl Dirichlet BVP} which are regular curves of type $(\sigma_1,\sigma_2,j)$
are given by \eqref{Gl ansatz gamma} for
\begin{align} \label{Gl Dirichlet abL}
\begin{aligned}
a&= \frac{\sqrt{2}(\bar\alpha_j(z;\eta)^2+\bar\beta(z;\eta)^2)^{1/2}}{\|B-A\|},\\
b&= \sigma_1 \cn^{-1}(z),\\
L &= \frac{\|B-A\| \big(\big(j+\big\lceil \frac{\sigma_1\cn^{-1}(z)-\sigma_2\cn^{-1}(\bar
z)}{T}\big\rceil\big) T-\sigma_1\cn^{-1}(z) +\sigma_2\cn^{-1}(\bar z)
\big)}{\sqrt{2}(\bar\alpha_j(z;\eta)^2+ \bar\beta(z;\eta)^2)^{1/2}}
\end{aligned}
\end{align}
where $z\in [-1,1]$ is any solution of
\begin{align}\label{Gl Dirichlet equation for z}
\begin{aligned}
&\bar\beta(z;\eta)(z^2 v_1- \sigma_1 \sqrt{1-z^4}v_2)
= \bar\alpha_j(z;\eta)(z^2v_2+\sigma_1\sqrt{1-z^4}v_1) \\
&\text{provided }\qquad
0\leq \sigma_2(\theta_2-\theta_1+\sigma_1\arccos(z^2))\leq \pi/2 \\
&\text{and}\quad
z=\pm 1\Rightarrow\sigma_1= \pm 1,\quad \bar z=\pm 1\Rightarrow\sigma_2= \pm 1,\quad
z=\bar z,\sigma_1=\sigma_2 \Rightarrow j\geq 1.
\end{aligned}
\end{align}
Here, $\bar\alpha_j,\bar\beta,\bar z,v$ are given by \eqref{Gl Dirichlet Definition hatbeta hatalpha} and
\eqref{Gl Dirichlet Definition v}.
\end{thm}
\begin{proof}
As in the proof of Theorem \ref{Thm Navier BVP the general case} we only show that a
$(\sigma_1,\sigma_2,j)$-type solution of \eqref{Gl Dirichlet BVP} generated by
$a,b,L$ satisfies the conditions stated above, i.e. we
will not check that these conditions are indeed sufficient. By Proposition \ref{Prop Navier} (ii) we may
write
\begin{align} \label{Gl Dirichlet parameters}
\begin{aligned}
b+aL=mT+\bar b\quad\text{for } m=j+ \lceil (b-\bar b)/T\rceil \in\N_0 \text{ and } \\
b,\bar b \in [-T/2,T/2) \quad\text{such that}\quad
\sign(b)=\sigma_1,\;\sign(\bar b)=\sigma_2.
\end{aligned}
\end{align}
In the following we set $z:= \cn(b),\bar z:=\cn(\bar b)$ so that our conventions
$\sign(0)=1$ and $b,\bar b\in [-T/2,T/2)$ and $L>0$ imply the third line in
\eqref{Gl Dirichlet equation for z}.
{\it 1st step:}\; Using the first set of equations in \eqref{Gl Dirichlet BC} we prove the second
line in \eqref{Gl Dirichlet equation for z} and
\begin{align} \label{Gl Dirichlet cn(tildeb)}
\bar z
= \eta \cos(\theta_2-\theta_1+\sigma_1\arccos(z^2))^{1/2}
\quad\text{for some }\eta\in\{-1,+1\}.
\end{align}
Indeed, \eqref{Gl Dirichlet BC} implies that there is some $l\in\Z$ such that
\begin{align} \label{Gl Dirichlet angles}
\begin{aligned}
\theta_2-\theta_1 +2l\pi
&= \int_0^L \kappa_{a,b}(r)\,dr \\
&= \int_b^{b+aL} \sqrt{2}\cn(r)\,dr \\
&\stackrel{\eqref{Gl Dirichlet parameters}}{=} m\int_0^T \sqrt{2}\cn(r)\,dr + \int_b^{\bar b}
\sqrt{2}\cn(r)\,dr \\
&= 0 -\sigma_1\int_0^{|b|} \sqrt{2}\cn(r)\,dr + \sigma_2 \int_0^{|\bar b|} \sqrt{2}\cn(r)\,dr \\
&= -\sigma_1\arccos(\cn^2(b))+\sigma_2\arccos(\cn^2(\bar b)) \\
&= -\sigma_1\arccos(z^2)+\sigma_2\arccos(\bar z^2).
\end{aligned}
\end{align}
(Since we have $|\theta_2-\theta_1|\leq \pi$
and since the right hand side in \eqref{Gl Dirichlet angles} lies in $[-\pi/2,\pi/2]$
we even know $l=0$.) Rearranging the above equation and applying sine respectively cosine gives
\begin{align*}
\sigma_2 \sqrt{1-\bar z^4}
&= \sin(\sigma_2\arccos(\bar z^2))
= \sin(\theta_2-\theta_1+\sigma_1\arccos(z^2)), \\
\bar z^2
&= \cos(\sigma_2\arccos(\bar z^2))
= \cos( \theta_2-\theta_1+\sigma_1\arccos(z^2))
\end{align*}
which proves the second line in \eqref{Gl Dirichlet equation for z} and \eqref{Gl Dirichlet cn(tildeb)}.
{\it 2nd step:}\; Next we use the second set of equations in \eqref{Gl Dirichlet BC}
to derive the first line in \eqref{Gl Dirichlet equation for z} and the formulas for $a,b,L$ from
\eqref{Gl Dirichlet abL}. Using the addition theorems for sine and cosine we find
\begin{align} \label{Gl Dirichlet v}
\begin{aligned}
v
&= R(\theta_1)^T(B-A) \\
&\stackrel{\eqref{Gl Dirichlet BC}}{=}
R(\theta_1)^T \vecII{\int_0^L \cos(\theta_1+\int_0^t \kappa_{a,b}(r)\,dr)\,dt}{\int_0^L\sin(\theta_1+
\int_0^t \kappa_{a,b}(r)\,dr)\,dt} \\
&= R(\theta_1)^T R(\theta_1) \vecII{\int_0^L \cos(\int_0^t \kappa_{a,b}(r)\,dr)\,dt}{\int_0^L
\sin(\int_0^t \kappa_{a,b}(r)\,dr)\,dt} \\
&\stackrel{\eqref{Gl metaThm 2}}{=}
\matII{\cn^2(b)}{\sqrt{2}\sn(b)\dn(b)}{-\sqrt{2}\sn(b)\dn(b)}{\cn^2(b)} \vecII{\alpha}{\beta} \\
&= \matII{z^2}{\sigma_1\sqrt{1-z^4}}{-\sigma_1\sqrt{1-z^4}}{z^2} \vecII{\alpha}{\beta}
\end{aligned}
\end{align}
where $\alpha,\beta$ were defined in \eqref{Gl metaThm 3}. From \eqref{Gl Dirichlet v} we obtain
the equations
\begin{equation} \label{Gl Dirichlet alphabeta}
\alpha = z^2v_1 -\sigma_1\sqrt{1-z^4}v_2,\qquad
\beta = \sigma_1\sqrt{1-z^4}v_1 + z^2v_2.
\end{equation}
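Let us note in passing how \eqref{Gl Dirichlet alphabeta} follows from \eqref{Gl Dirichlet v}: the coefficient matrix in the last line of \eqref{Gl Dirichlet v} is orthogonal with determinant $z^4+(1-z^4)=1$, so inverting it amounts to transposing it, which gives
$$
\vecII{\alpha}{\beta}
= \matII{z^2}{-\sigma_1\sqrt{1-z^4}}{\sigma_1\sqrt{1-z^4}}{z^2}\vecII{v_1}{v_2}
\qquad\text{and in particular}\qquad
\alpha^2+\beta^2=\|v\|^2.
$$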
Straightforward calculations give
\begin{align*}
\beta
&= \frac{\sqrt{2}}{a}\int_b^{b+aL}\sn(t)\dn(t)\,dt
= \frac{\sqrt{2}}{a}(\cn(b)-\cn(b+aL))
= \frac{\sqrt{2}}{a}(z-\bar z)
= \frac{\sqrt{2}}{a}\bar\beta(z;\eta)
\intertext{and}
\alpha
&= \frac{1}{a}\int_b^{b+aL}\cn^2(t)\,dt
\;=\; \frac{1}{a}\int_b^{mT+\bar b}\cn^2(t)\,dt \\
&= \frac{1}{a}\Big( m\int_0^T\cn^2(t)\,dt - \sigma_1 \int_0^{|b|}\cn^2(t)\,dt + \sigma_2
\int_0^{|\bar b|}\cn^2(t)\,dt \Big) \\
&= \frac{\sqrt{2}}{a}\Big(mC
- \sigma_1\int_{z}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt
+ \sigma_2\int_{\bar z}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt
\Big)\\
&= \frac{\sqrt{2}}{a} \bar\alpha_j(z;\eta)
\end{align*}
where we have used $m=j+ \lceil (b-\bar b)/T \rceil$ as well as
$b=\sigma_1\cn^{-1}(z),\bar b=\sigma_2\cn^{-1}(\bar z)$ in order to derive the last equality. Using
$\alpha^2+\beta^2=\|v\|^2=\|B-A\|^2$ from \eqref{Gl Dirichlet v} we get
the formula for $a$ from \eqref{Gl Dirichlet abL} as well as
$$
\bar\beta(z;\eta)(z^2 v_1- \sigma_1 \sqrt{1-z^4}v_2)
\stackrel{\eqref{Gl Dirichlet alphabeta}}{=} \frac{a}{\sqrt{2}}\cdot \alpha\beta
\stackrel{\eqref{Gl Dirichlet alphabeta}}{=} \bar\alpha_j(z;\eta)(z^2v_2+\sigma_1\sqrt{1-z^4}v_1)
$$
so that the first line in \eqref{Gl Dirichlet equation for z} holds true. Finally, the formula $b=
\sigma_1\cn^{-1}(z)$ immediately follows from $\cn(b)=z,\sign(b)=\sigma_1$ and the formula for $L$ is a direct
consequence of the formulas for $a,b,\bar b$ and $aL=mT+\bar b-b$, see \eqref{Gl Dirichlet parameters}.
\end{proof}
Let us finally show how the results of Deckelnick and Grunau \cite{DecGru_Boundary_value_problems} can
be reproduced by Theorem~\ref{Thm Dirichlet}. In \cite{DecGru_Boundary_value_problems} the authors analysed
the Dirichlet problem for graph-type curves given by $\gamma(t)=(t,u(t))$
where $u:[0,1]\to\R$ is a smooth positive symmetric function. The Dirichlet boundary conditions were given
by $u(0)=u(1)=0,u'(0)=-u'(1)=\beta$ for $\beta>0$ which in our setting corresponds to $A=(0,0),B=(1,0)$ and
$\theta_1=-\theta_2=\arctan(\beta)>0$.
They showed that for any given $\beta>0$ this boundary value problem has precisely one symmetric positive
solution $u$. This result can be derived from Theorem \ref{Thm Dirichlet} in the following way.
Looking for graph-type solutions of \eqref{Gl Dirichlet BVP} requires to restrict the attention to
$(\sigma_1,\sigma_2,j)$-type solutions with $j=0$. The symmetry condition and Corollary~\ref{Cor
symmetry}~(i) imply that $aL+2b$ is a multiple of~$T$ so that
$\sigma_1\cn^{-1}(z)+\sigma_2\cn^{-1}(\bar z)$ has to be a multiple of $T$.
From this and $\theta_2-\theta_1=-2\theta_1<0$ one can rule out the case $\sigma_1=\sigma_2$ and further
reductions yield ${\sigma_1=1},{\sigma_2=-1},z=\bar z= \eta \cos(\theta_1)^{1/2}$ for some
$\eta\in\{-1,+1\}$.
Finally one can check that $\eta=1$ produces a symmetric solution which is not of graph-type in the
sense of \cite{DecGru_Boundary_value_problems} since the slope becomes $\pm\infty$ at two points and thus
$u\notin C^2((0,1))$. Hence, $\eta$ must be $-1$ so that we end up with
$$
a = \sqrt{2}\Big(C-2\int_{z}^1 \frac{t^2}{\sqrt{1-t^4}}\,dt\Big),\qquad
b= \cn^{-1}(z),\qquad
L = \frac{T-2\cn^{-1}(z)}{\sqrt{2}(C-2\int_{z}^1\frac{t^2}{\sqrt{1-t^4}}\,dt)}
$$
for $z=-\cos(\theta_1)^{1/2}$
and one may verify that this choice for $a,b,L$ indeed solves \eqref{Gl Dirichlet equation for z} and
generates the solution found in \cite{DecGru_Boundary_value_problems}. The following corollary shows that
this particular axially symmetric solution is only one of infinitely many axially symmetric solutions.
Moreover, we prove that axially non-symmetric solutions exist.
\begin{cor}
Let $A=(0,0),B=(1,0)$ and $\theta_1=-\theta_2\in (0,\pi/2)$. Then the Dirichlet problem~\eqref{Gl
Dirichlet BVP} has infinitely many axially symmetric solutions $(\gamma_{1,j})$ and infinitely many
axially non-symmetric solutions $(\gamma_{2,j})$ satisfying $W(\gamma_{1,j}),W(\gamma_{2,j})\to\infty$ as
$j\to\infty$.
\end{cor}
\begin{proof}
The existence of infinitely many axially symmetric solutions $(\gamma_{1,j})_{j\in\N_0}$ of \eqref{Gl
Dirichlet BVP} follows from Theorem \ref{Thm Dirichlet}, Corollary \ref{Cor symmetry} and the fact that
for every $\eta\in\{-1,+1\}$ and every $j\in\N_0$ a solution of \eqref{Gl Dirichlet equation for z} is
given by $\sigma_1=1,\sigma_2=-1,z=\bar z=\eta\cos(\theta_1)^{1/2}$. The formulas for $a$ and $W$ from
Theorem \ref{Thm Dirichlet} imply that these solutions satisfy $W(\gamma_{1,j})\to\infty$ as
$j\to\infty$ and $\sigma_1\cn^{-1}(z)+\sigma_2\cn^{-1}(\bar z)=0$ implies that $\gamma_{1,j}$ is axially
symmetric.
The existence of infinitely many axially non-symmetric solutions $(\gamma_{2,j})$ may be proved for
parameters $\sigma_1=1,\sigma_2=-1,\eta=1$ and all $j\in\N_0$ satisfying $j\geq j_0$ where $j_0\in\N$ is
chosen such that the inequality
$$
\bar\beta(-1;1)\cos(\theta_1)+\sin(\theta_1)\bar\alpha_{j_0}(-1;1) >0
$$
holds. Indeed, using $v_1=\cos(\theta_1),v_2=-\sin(\theta_1)$ in the setting of Theorem \ref{Thm
Dirichlet} it suffices to find a zero of the function
$$
f_j(z)
:= \bar\beta(z;1)(z^2 \cos(\theta_1)+ \sqrt{1-z^4}\sin(\theta_1))
-\bar\alpha_j(z;1)(-z^2\sin(\theta_1)+\sqrt{1-z^4}\cos(\theta_1))
$$
in $(-1,1)$. To this end we apply the intermediate value theorem to $f_j$ on the interval
$(-1,-z^*)$ where $z^*:=\cos(\theta_1)^{1/2}$. Calculations and $j\geq j_0$ imply
$$
f_j(-z^*)=-2\cos(\theta_1)^{1/2}<0,\qquad
f_j(-1) = \bar\beta(-1;1)\cos(\theta_1)+\bar\alpha_j(-1;1)\sin(\theta_1) >0
$$
so that the intermediate value theorem provides the existence of at least one zero in this interval.
Hence, $-1<z<-z^*<0<\bar z<1$ implies that $\sigma_1 \cn^{-1}(z)+\sigma_2\cn^{-1}(\bar z)$ is not a
multiple of $T$ proving that the constructed solutions are not axially symmetric.
\end{proof}
\end{document}
\begin{document}
\title{GENERALIZED FUSION FRAMES IN HILBERT SPACES}
\author[V. Sadri]{Vahid Sadri} \address{Department of Mathematics, Faculty of Tabriz Branch,\\ Technical and Vocational University (TUV), East Azarbaijan, Iran} \email{[email protected]}
\author[Gh. Rahimlou]{Gholamreza Rahimlou} \address{Department of Mathematics, Faculty of Tabriz Branch,\\ Technical and Vocational University (TUV), East Azarbaijan, Iran} \email{[email protected]}
\author[R. Ahmadi]{Reza Ahmadi} \address{Institute of Fundamental Sciences,\\ University of Tabriz, Iran} \email{[email protected]}
\author[R. Zarghami Farfar]{Ramazan Zarghami Farfar} \address{Department of Geomatics and Mathematics,\\ Marand Faculty of Technical and Engineering,\\ University of Tabriz, Iran} \email{[email protected]}
\begin{abstract} After the introduction of g-frames and fusion frames by Sun and Casazza, respectively, combining these frames has become an interesting topic for research. In this paper, we introduce generalized fusion frames (or g-fusion frames) for Hilbert spaces and give characterizations of these frames from the viewpoint of closed-range operators and g-fusion frame sequences. Also, the canonical dual g-fusion frames are presented and Parseval g-fusion frames are introduced. \end{abstract}
\subjclass[2010]{Primary 42C15; Secondary 46C99, 41A58}
\keywords{Fusion frame, g-fusion frame, Dual g-fusion frame, g-fusion frame sequence.}
\maketitle
\section{Introduction} During the past few years, the theory of frames has been growing rapidly and new topics are discovered almost every year. For example, generalized frames (or g-frames), subspaces of frames (or fusion frames), continuous frames (or c-frames), $k$-frames, controlled frames and the combinations of any two of them lead
to c-fusion frames, g-c-frames, c-g-frames, c$k$-frames, c$k$-fusion frames, etc. The purpose of this paper is to introduce and review some of the generalized fusion frames (or g-fusion frames) and their operators. Then, we will obtain some useful propositions about these frames and finally, we will study g-fusion frame sequences.
Throughout this paper, $H$ and $K$ are separable Hilbert spaces and $\mathcal{B}(H,K)$ is the collection of all the bounded linear operators of $H$ into $K$. If $K=H$, then $\mathcal{B}(H,H)$ will be denoted by $\mathcal{B}(H)$. Also, $\pi_{V}$ is the orthogonal projection from $H$ onto a closed subspace $V\subset H$ and $\lbrace H_j\rbrace_{j\in\Bbb J}$ is a sequence of Hilbert spaces where $\Bbb J$ is a subset of $\Bbb Z$. It is easy to check that if $u\in\mathcal{B}(H)$ is an invertible operator, then (\cite{ga}) $$\pi_{uV}u\pi_{V}=u\pi_{V}.$$ \begin{definition}\textbf{(frame)}. Let $\{f_j\}_{j\in\Bbb J}$ be a sequence of members of $H$. We say that $\{f_j\}_{j\in\Bbb J}$ is a frame for $H$ if there exists $0<A\leq B<\infty$ such that for each $f\in H$ \begin{eqnarray*} A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}\vert\langle f,f_j\rangle\vert^2\leq B\Vert f\Vert^2. \end{eqnarray*} \end{definition} \begin{definition}\textbf{(g-frame)} A family $\lbrace \Lambda_j\in\mathcal{B}(H,H_j)\rbrace_{j\in\Bbb J}$ is called a g-frame for $H$ with respect to $\lbrace H_j\rbrace_{j\in\Bbb J}$, if there exist $0<A\leq B<\infty$ such that \begin{equation} \label{1}
A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}\Vert \Lambda_{j}f\Vert^2\leq B\Vert f\Vert^2, \ \ f\in H. \end{equation} \end{definition} \begin{definition}\textbf{(fusion frame)}. Let $\{W_j\}_{j\in\Bbb J}$ be a family of closed subspaces of $H$ and $\{v_j\}_{j\in\Bbb J}$ be a family of weights (i.e. $v_j>0$ for any $j\in\Bbb J$). We say that $(W_j, v_j)$ is a fusion frame for $H$ if there exists $0<A\leq B<\infty$ such that for each $f\in H$ \begin{eqnarray*} A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}v^{2}_j\Vert \pi_{W_j}f\Vert^2\leq B\Vert f\Vert^2. \end{eqnarray*} \end{definition} If an operator $u$ has closed range, then there exists a right-inverse operator $u^\dagger$ (the pseudo-inverse of $u$) in the following sense (see \cite{ch}).
\begin{lemma}\label{l1} Let $u\in\mathcal{B}(K,H)$ be a bounded operator with closed range $\mathcal{R}_{u}$. Then there exists a bounded operator $u^\dagger \in\mathcal{B}(H,K)$ for which $$uu^{\dagger} x=x, \ \ x\in \mathcal{R}_{u}.$$ \end{lemma} \begin{lemma}\label{Ru} Let $u\in\mathcal{B}(K,H)$. Then the following assertions hold: \begin{enumerate} \item
$\mathcal{R}_u$ is closed in $H$ if and only if $\mathcal{R}_{u^{\ast}}$ is closed in $K$. \item $(u^{\ast})^\dagger=(u^\dagger)^\ast$. \item The orthogonal projection of $H$ onto $\mathcal{R}_{u}$ is given by $uu^{\dagger}$. \item The orthogonal projection of $K$ onto $\mathcal{R}_{u^{\dagger}}$ is given by $u^{\dagger}u$.\item$\mathcal{N}_{{u}^{\dagger}}=\mathcal{R}^{\bot}_{u}$ and $\mathcal{R}_{u^{\dagger}}=\mathcal{N}^{\bot}_{u}$.
\end{enumerate} \end{lemma}
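In finite dimensions every range is closed, so the statements of Lemma \ref{l1} and Lemma \ref{Ru} can be tested numerically via the Moore--Penrose pseudo-inverse. The following sketch uses a hypothetical rank-deficient matrix $u$ (chosen only for illustration) and checks items (3)--(5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank-2 operator u : K = R^4 -> H = R^3; in finite
# dimensions its range is automatically closed, so Lemma l1 applies.
u = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 2.0, 0.0]])

u_dag = np.linalg.pinv(u)          # Moore-Penrose pseudo-inverse u†

# Lemma l1:  u u† x = x  for every x in the range R_u.
x = u @ rng.standard_normal(4)
assert np.allclose(u @ u_dag @ x, x)

# Lemma Ru (3): u u† is the orthogonal projection of H onto R_u.
P = u @ u_dag
assert np.allclose(P @ P, P) and np.allclose(P, P.T)

# Lemma Ru (4): u† u is the orthogonal projection of K onto R_{u†}.
Q = u_dag @ u
assert np.allclose(Q @ Q, Q) and np.allclose(Q, Q.T)

# Lemma Ru (5): N_{u†} = R_u^⊥, i.e. u† vanishes on the orthogonal
# complement of the range.
n = np.array([1.0, 1.0, -1.0])     # n is orthogonal to every column of u
assert np.allclose(u.T @ n, np.zeros(4))
assert np.allclose(u_dag @ n, np.zeros(4))
```

The same checks work for any matrix; `np.linalg.pinv` computes $u^\dagger$ from the singular value decomposition.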
\section{Generalized Fusion Frames and Their Operators}
We define the space $\mathscr{H}_2:=(\sum_{j\in\Bbb J}\oplus H_j)_{\ell_2}$ by \begin{eqnarray} \mathscr{H}_2=\big\lbrace \lbrace f_j\rbrace_{j\in\Bbb J} \ : \ f_j\in H_j , \ \sum_{j\in\Bbb J}\Vert f_j\Vert^2<\infty\big\rbrace \end{eqnarray} with the inner product defined by $$\langle \lbrace f_j\rbrace, \lbrace g_j\rbrace\rangle=\sum_{j\in\Bbb J}\langle f_j, g_j\rangle.$$ It is clear that $\mathscr{H}_2$ is a Hilbert space with pointwise operations. \begin{definition} Let $W=\lbrace W_j\rbrace_{j\in\Bbb J}$ be a family of closed subspaces of $H$, $\lbrace v_j\rbrace_{j\in\Bbb J}$ be a family of weights, i.e. $v_j>0$ and $\Lambda_j\in\mathcal{B}(H,H_j)$ for each $j\in\Bbb J$. We say $\Lambda:=(W_j, \Lambda_j, v_j)$ is a \textit{generalized fusion frame} (or \textit{g-fusion frame}) for $H$ if there exists $0<A\leq B<\infty$ such that for each $f\in H$ \begin{eqnarray}\label{g} A\Vert f\Vert^2\leq\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\leq B\Vert f\Vert^2. \end{eqnarray} \end{definition} We call $\Lambda$ a \textit{Parseval g-fusion frame} if $A=B=1$. When the right-hand inequality of (\ref{g}) holds, $\Lambda$ is called a \textit{g-fusion Bessel sequence} for $H$ with bound $B$. If $H_j=H$ for all $j\in\Bbb J$ and $\Lambda_j=I_H$, then we get the fusion frame $(W_j, v_j)$ for $H$. Throughout this paper, $\Lambda$ will be a triple $(W_j, \Lambda_j, v_j)$ with $j\in\Bbb J$ unless otherwise stated. \begin{proposition}\label{2.2} Let $\Lambda$ be a g-fusion Bessel sequence for $H$ with bound $B$. Then for each sequence $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$, the series $\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j$ converges unconditionally. 
\end{proposition} \begin{proof} Let $\Bbb I$ be a finite subset of $\Bbb J$, then \begin{align*} \Vert \sum_{j\in\Bbb I}v_j\pi_{W_j}\Lambda_j^{*} f_j\Vert&=\sup_{\Vert g\Vert=1}\big\vert\langle\sum_{j\in\Bbb I}v_j\pi_{W_j}\Lambda_j^{*} f_j , g\rangle\big\vert\\ &\leq\big(\sum_{j\in\Bbb I}\Vert f_j\Vert^2\big)^{\frac{1}{2}}\sup_{\Vert g\Vert=1}\big(\sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}g\Vert^2\big)^{\frac{1}{2}}\\ &\leq \sqrt{B}\big(\sum_{j\in\Bbb I}\Vert f_j\Vert^2\big)^{\frac{1}{2}}<\infty \end{align*} and it follows that $\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j$ is unconditionally convergent in $H$ (see \cite{diestel} page 58). \end{proof} Now, we can define the \textit{synthesis operator} by Proposition \ref{2.2}. \begin{definition} Let $\Lambda$ be a g-fusion frame for $H$. Then, the synthesis operator for $\Lambda$ is the operator \begin{eqnarray*} T_{\Lambda}:\mathscr{H}_2\longrightarrow H \end{eqnarray*} defined by \begin{eqnarray*} T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j. \end{eqnarray*} \end{definition} We call the adjoint $T_{\Lambda}^*$ of the synthesis operator the \textit{analysis operator}; it is described by the following proposition. \begin{proposition} Let $\Lambda$ be a g-fusion frame for $H$. Then, the analysis operator \begin{equation*} T_{\Lambda}^*:H\longrightarrow\mathscr{H}_2 \end{equation*} is given by \begin{equation*} T_{\Lambda}^*(f)=\lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}. 
\end{equation*} \end{proposition} \begin{proof} If $f\in H$ and $\lbrace g_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$, we have \begin{align*} \langle T_{\Lambda}^*(f), \lbrace g_j\rbrace_{j\in\Bbb J}\rangle&=\langle f, T_{\Lambda}\lbrace g_j\rbrace_{j\in\Bbb J}\rangle\\ &=\langle f, \sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}g_j\rangle\\ &=\sum_{j\in\Bbb J}v_j \langle \Lambda_j \pi_{W_j}f, g_j\rangle\\ &=\langle\lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}, \lbrace g_j\rbrace_{j\in\Bbb J}\rangle. \end{align*} \end{proof} \begin{theorem}\label{t2} The following assertions are equivalent: \begin{enumerate} \item $\Lambda$ is a g-fusion Bessel sequence for $H$ with bound $B$. \item The operator \begin{align*} T_{\Lambda}&:\mathscr{H}_2\longrightarrow H\\ T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j \end{align*} is a well-defined and bounded operator with $\Vert T_{\Lambda}\Vert\leq \sqrt{B}$. \item The series \begin{align*} \sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j \end{align*}
converges for all $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$. \end{enumerate} \end{theorem} \begin{proof} $\textit{(1)}\Rightarrow\textit{(2)} $. It is clear by Proposition \ref{2.2}.\\ $\textit{(2)}\Rightarrow\textit{(1)}$. Suppose that $T_{\Lambda}$ is a well-defined and bounded operator with $\Vert T_{\Lambda}\Vert\leq \sqrt{B}$. Let $\Bbb I$ be a finite subset of $\Bbb J$ and $f\in H$. Therefore \begin{align*} \sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2&=\sum_{j\in\Bbb I}v_j^2\langle \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f, f \rangle\\ &=\langle T_{\Lambda}\lbrace v_j\Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb I}, f\rangle\\ &\leq \Vert T_{\Lambda}\Vert \Vert v_j\Lambda_j \pi_{W_j}f\Vert \Vert f\Vert\\ &=\Vert T_{\Lambda}\Vert\big(\sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big)^{\frac{1}{2}}\Vert f\Vert. \end{align*} Thus, we conclude that \begin{align*} \sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\leq\Vert T_{\Lambda}\Vert^2 \Vert f\Vert^2\leq B\Vert f\Vert^2. \end{align*} $\textit{(1)}\Rightarrow\textit{(3)}$. It is clear.\\ $\textit{(3)}\Rightarrow\textit{(1)}$. Suppose that $\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j$ converges for all $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$. We define \begin{align*} T:&\mathscr{H}_2\longrightarrow H\\ T(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j. \end{align*} Then, $T$ is well-defined. For each $n\in\Bbb N$, let \begin{align*} T_n:&\mathscr{H}_2\longrightarrow H\\ T_n(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j=1}^{n}v_j \pi_{W_j}\Lambda_{j}^{*}f_j. \end{align*} Let $B_n:=(\sum_{j=1}^{n}\Vert v_j \pi_{W_j}\Lambda_{j}^{*}f_j\Vert^2)^{\frac{1}{2}}$. Since $\Vert T_n(\lbrace f_j\rbrace_{j\in\Bbb J})\Vert\leq B_n \Vert \lbrace f_j\rbrace_{j\in\Bbb J}\Vert$, then $\lbrace T_n\rbrace$ is a sequence of bounded linear operators which converges pointwise to $T$. 
Hence, by the Banach-Steinhaus Theorem, $T$ is a bounded operator with $$\Vert T\Vert\leq\liminf\Vert T_n\Vert.$$ So, by the implication $\textit{(2)}\Rightarrow\textit{(1)}$, $\Lambda$ is a g-fusion Bessel sequence for $H$. \end{proof} \begin{corollary}\label{cor} $\Lambda$ is a g-fusion Bessel sequence for $H$ with bound $B$ if and only if for each finite subset $\Bbb I\subseteq\Bbb J$ and $f_j\in H_j$ $$\Vert\sum_{j\in\Bbb I}v_j \pi_{W_j}\Lambda^*_j f_j\Vert^2\leq B\sum_{j\in\Bbb I}\Vert f_j\Vert^2.$$ \end{corollary} \begin{proof} It is an immediate consequence of Theorem \ref{t2} and the proof of Proposition \ref{2.2}. \end{proof} Let $\Lambda$ be a g-fusion frame for $H$. The \textit{g-fusion frame operator} is defined by \begin{align*} S_{\Lambda}&:H\longrightarrow H\\ S_{\Lambda}f&=T_{\Lambda}T^*_{\Lambda}f. \end{align*} Now, for each $f\in H$ we have $$S_{\Lambda}f=\sum_{j\in\Bbb J}v_j^2 \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f$$ and $$\langle S_{\Lambda}f, f\rangle=\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2.$$ Therefore, $$A I\leq S_{\Lambda}\leq B I.$$ This means that $S_{\Lambda}$ is a bounded, self-adjoint, positive and invertible operator. So, we have the reconstruction formula for any $f\in H$: \begin{equation}\label{3} f=\sum_{j\in\Bbb J}v_j^2 \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}S^{-1}_{\Lambda}f =\sum_{j\in\Bbb J}v_j^2 S^{-1}_{\Lambda}\pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f. \end{equation} \begin{example}
We introduce a Parseval g-fusion frame for $H$ by the g-fusion frame operator. Assume that $\Lambda=(W_j, \Lambda_j, v_j)$ is a g-fusion frame for $H$. Since $S_{\Lambda}$ (or $S_{\Lambda}^{-1}$) is positive in $\mathcal{B}(H)$ and $\mathcal{B}(H)$ is a $C^*$-algebra, there exists a unique positive square root $S^{\frac{1}{2}}_{\Lambda}$ (or $S^{-\frac{1}{2}}_{\Lambda}$) and they commute with $S_{\Lambda}$ and $S_{\Lambda}^{-1}$. Therefore, each $f\in H$ can be written as \begin{align*} f&=S^{-\frac{1}{2}}_{\Lambda}S_{\Lambda}S^{-\frac{1}{2}}_{\Lambda}f\\ &=\sum_{j\in\Bbb J}v_j^2 S^{-\frac{1}{2}}_{\Lambda}\pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}f. \end{align*} This implies that \begin{align*} \Vert f\Vert^2&=\langle f, f\rangle\\ &=\langle\sum_{j\in\Bbb J}v_j^2 S^{-\frac{1}{2}}_{\Lambda}\pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}f, f\rangle\\ &=\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}f\Vert^2\\ &=\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}\pi_{S^{-\frac{1}{2}}_{\Lambda}W_j}f\Vert^2, \end{align*} this means that $(S^{-\frac{1}{2}}_{\Lambda}W_j, \Lambda_j \pi_{W_j}S^{-\frac{1}{2}}_{\Lambda}, v_j)$ is a Parseval g-fusion frame. \end{example} \begin{theorem}\label{2.3}
$\Lambda$ is a g-fusion frame for $H$ if and only if
\begin{align*} T_{\Lambda}&:\mathscr{H}_2\longrightarrow H\\ T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j \end{align*} is well-defined, bounded and surjective. \end{theorem} \begin{proof} If $\Lambda$ is a g-fusion frame for $H$, the operator $S_{\Lambda}$ is invertible. Thus, $T_{\Lambda}$ is surjective. Conversely, let $T_{\Lambda}$ be well-defined, bounded and surjective. Then, by Theorem \ref{t2}, $\Lambda$ is a g-fusion Bessel sequence for $H$. So, $T^{*}_{\Lambda}f=\lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J}$ for all $f\in H$. Since $T_{\Lambda}$ is surjective, by Lemma \ref{l1}, there exists an operator $T^{\dagger}_{\Lambda}:H\rightarrow\mathscr{H}_2$ such that $(T^{\dagger}_{\Lambda})^*T^*_{\Lambda}=I_{H}$. Now, for each $f\in H$ we have \begin{align*} \Vert f\Vert^2&\leq\Vert (T^{\dagger}_{\Lambda})^*\Vert^2 \Vert T^*_{\Lambda}f\Vert^2\\ &=\Vert T^{\dagger}_{\Lambda}\Vert^2\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2. \end{align*} Therefore, $\Lambda$ is a g-fusion frame for $H$ with lower g-fusion frame bound $\Vert T^{\dagger}_{\Lambda}\Vert^{-2}$ and upper g-fusion frame bound $\Vert T_{\Lambda}\Vert^2$. \end{proof} \begin{theorem} $\Lambda$ is a g-fusion frame for $H$ if and only if the operator $$S_{\Lambda}:f\longrightarrow\sum_{j\in\Bbb J}v_j^2 \pi_{W_j}\Lambda^*_j \Lambda_j \pi_{W_j}f$$ is well-defined, bounded and surjective. \end{theorem} \begin{proof} The necessity of the statement is clear. Let $S_{\Lambda}$ be a well-defined, bounded and surjective operator. Since $\langle S_{\Lambda}f, f\rangle\geq0$ for all $f\in H$, $S_{\Lambda}$ is positive. Then $$\ker S_{\Lambda}=(\mathcal{R}_{S^*_{\Lambda}})^{\perp}=(\mathcal{R}_{S_{\Lambda}})^{\perp}=\lbrace 0\rbrace$$ thus, $S_{\Lambda}$ is injective. Therefore, $S_{\Lambda}$ is invertible. Thus, $0\notin\sigma(S_{\Lambda})$. Let $C:=\inf_{\Vert f\Vert=1}\langle S_{\Lambda}f, f\rangle$. 
By Proposition 70.8 in \cite{he}, we have $C\in\sigma(S_{\Lambda})$. So $C>0$. Now, we can write for each $f\in H$ $$\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2=\langle S_{\Lambda}f, f\rangle\geq C\Vert f\Vert^2$$ and $$\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2=\langle S_{\Lambda}f, f\rangle\leq \Vert S_{\Lambda}\Vert \Vert f\Vert^2.$$ It follows that $\Lambda$ is a g-fusion frame for $H$. \end{proof} \begin{theorem} Let $\Lambda:=(W_j, \Lambda_j, v_j)$ and $\Theta:=(W_j, \Theta_j, v_j)$ be two g-fusion Bessel sequences for $H$ with bounds $B_1$ and $B_2$, respectively. Let $T_{\Lambda}$ and $T_{\Theta}$ be their synthesis operators such that $T_{\Theta}T^*_{\Lambda}=I_H$. Then, both $\Lambda$ and $\Theta$ are g-fusion frames. \end{theorem} \begin{proof} For each $f\in H$ we have \begin{align*} \Vert f\Vert^4&=\langle f, f\rangle^2\\ &=\vert\langle T^*_{\Lambda}f, T^*_{\Theta}f\rangle\vert^2\\ &\leq\Vert T^*_{\Lambda}f\Vert^2 \Vert T^*_{\Theta}f\Vert^2\\ &=\big(\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big)\big(\sum_{j\in\Bbb J}v_j^2\Vert \Theta_j \pi_{W_j}f\Vert^2\big)\\ &\leq\big(\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big) B_2 \Vert f\Vert^2, \end{align*} thus, $B_2^{-1}\Vert f\Vert^2\leq\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2$. This means that $\Lambda$ is a g-fusion frame for $H$. Similarly, $\Theta$ is a g-fusion frame with the lower bound $B_1^{-1}$. \end{proof} \section{Dual g-Fusion Frames} For the definition of the dual g-fusion frames, we need the following theorem. \begin{theorem}\label{dual} Let $\Lambda=(W_j, \Lambda_j, v_j)$ be a g-fusion frame for $H$. Then $(S^{-1}_{\Lambda}W_j, \Lambda_j \pi_{W_j}S_{\Lambda}^{-1}, v_j)$ is a g-fusion frame for $H$. \end{theorem} \begin{proof}
Let $A,B$ be the g-fusion frame bounds of $\Lambda$ and $f\in H$, then \begin{align*} \sum_{j\in\Bbb J}v^2_j\Vert \Lambda_{j}\pi_{W_j}S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2&=\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_{j}\pi_{W_j}S_{\Lambda}^{-1}f\Vert^2\\ &\leq B\Vert S_{\Lambda}^{-1}\Vert^2 \Vert f\Vert^2. \end{align*} Now, to get the lower bound, by using (\ref{3}) we can write \begin{align*} \Vert f\Vert^4&=\big\vert\langle\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}f, f\rangle\big\vert^2\\ &=\big\vert\sum_{j\in\Bbb J}v_j^2\langle\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}f, \Lambda_j\pi_{W_j}f\rangle\big\vert^2\\ &\leq\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j\pi_{W_j}S^{-1}_{\Lambda}f\Vert^2 \sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j}f\Vert^2\\ &\leq \sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j} S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2\big(B\Vert f\Vert^2\big), \end{align*} therefore \begin{align*} B^{-1}\Vert f\Vert^2\leq \sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j} S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2. \end{align*} \end{proof} Now, by Theorem \ref{dual}, $\tilde{\Lambda}=(S^{-1}_{\Lambda}W_j, \Lambda_j\pi_{W_j} S_{\Lambda}^{-1}, v_j)$ is a g-fusion frame for $H$. Then, $\tilde{\Lambda}$ is called the \textit{(canonical) dual g-fusion frame} of $\Lambda$. Let $S_{\tilde{\Lambda}}=T_{\tilde{\Lambda}}T^*_{\tilde{\Lambda}}$ be the g-fusion frame operator of $\tilde{\Lambda}$. Then, for each $f\in H$ we get $$T^*_{\tilde{\Lambda}}f=\lbrace v_j\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}\pi_{S^{-1}_{\Lambda}W_j}f\rbrace=\lbrace v_j\Lambda_j\pi_{W_j} S^{-1}_{\Lambda}f\rbrace=T^*_{\Lambda}(S^{-1}_{\Lambda}f),$$ so $T_{\Lambda}T^*_{\tilde{\Lambda}}=I_H$. 
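These identities are easy to test numerically. The sketch below uses a hypothetical g-fusion frame on $H=\ensuremath{\mathbb{R}}^3$ (toy subspaces, operators and weights chosen only for illustration): the extreme eigenvalues of $S_{\Lambda}$ give the optimal frame bounds, and applying $T_{\Lambda}$ to the canonical dual coefficients reproduces $f$, i.e. $T_{\Lambda}T^*_{\tilde{\Lambda}}=I_H$.

```python
import numpy as np

# Hypothetical g-fusion frame on H = R^3: W_0 = span{e1,e2},
# W_1 = span{e2,e3}, toy operators Λ_j : R^3 -> R^2 and weights v_j.
P = [np.diag([1.0, 1.0, 0.0]), np.diag([0.0, 1.0, 1.0])]   # π_{W_j}
L = [np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
     np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])]         # Λ_j
v = [1.0, 0.5]

# g-fusion frame operator  S = Σ_j v_j^2 π_{W_j} Λ_j* Λ_j π_{W_j}.
S = sum(w**2 * Pj @ Lj.T @ Lj @ Pj for w, Pj, Lj in zip(v, P, L))
eig = np.linalg.eigvalsh(S)
A, B = eig[0], eig[-1]             # optimal bounds:  A I <= S <= B I
assert A > 0

# Frame inequality for a sample vector f.
f = np.array([1.0, -2.0, 3.0])
val = sum(w**2 * np.linalg.norm(Lj @ Pj @ f)**2 for w, Pj, Lj in zip(v, P, L))
assert A * f @ f - 1e-12 <= val <= B * f @ f + 1e-12

# Canonical dual coefficients  T*_{Λ~} f = {v_j Λ_j π_{W_j} S^{-1} f},
# then synthesis:  T_Λ T*_{Λ~} f = f  (the reconstruction formula).
Sinv = np.linalg.inv(S)
coef = [w * Lj @ Pj @ Sinv @ f for w, Pj, Lj in zip(v, P, L)]
recon = sum(w * Pj @ Lj.T @ cj for w, Pj, Lj, cj in zip(v, P, L, coef))
assert np.allclose(recon, f)
```

With this data $S_{\Lambda}$ is diagonal, so the bounds can also be read off by hand; any other choice of subspaces and operators works the same way as long as $S_{\Lambda}$ stays invertible.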
Also, we have for each $f\in H$, \begin{align*} \langle S_{\tilde{\Lambda}}f, f\rangle&=\sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j\pi_{W_j} S_{\Lambda}^{-1}\pi_{S^{-1}_{\Lambda}W_j}f\Vert^2\\ &=\sum_{j\in\Bbb J}v_j^2\Vert\Lambda_j \pi_{W_j}S_{\Lambda}^{-1}f\Vert^2\\ &=\langle S_{\Lambda}(S_{\Lambda}^{-1}f), S_{\Lambda}^{-1}f\rangle\\ &=\langle S_{\Lambda}^{-1}f, f\rangle \end{align*} thus, $S_{\tilde{\Lambda}}=S_{\Lambda}^{-1}$ and by (\ref{3}), we get for each $f\in H$ \begin{align}\label{frame} f=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j\tilde{\Lambda_j}\pi_{\tilde{W_j}}f= \sum_{j\in\Bbb J}v_j^2\pi_{\tilde{W_j}}\tilde{\Lambda_j}^*\Lambda_j\pi_{W_j}f, \end{align} where $\tilde{W_j}:=S^{-1}_{\Lambda}W_j \ , \ \tilde{\Lambda_j}:=\Lambda_j \pi_{W_j}S_{\Lambda}^{-1}.$
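The dual objects themselves can also be checked in a hypothetical finite-dimensional example: building the projections onto $\tilde{W_j}=S^{-1}_{\Lambda}W_j$ from explicit bases, the sketch verifies the identity $\tilde{\Lambda_j}\pi_{\tilde{W_j}}=\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}$, the relation $S_{\tilde{\Lambda}}=S^{-1}_{\Lambda}$, and the dual reconstruction formula (all data below is illustrative, not from the paper).

```python
import numpy as np

def proj(M):
    # Orthogonal projection onto the column space of M.
    return M @ np.linalg.pinv(M)

# Hypothetical data on H = R^3: bases E_j of W_j, operators Λ_j, weights v_j.
E = [np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),
     np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])]
P = [proj(Ej) for Ej in E]
L = [np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]]),
     np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])]
v = [1.0, 0.5]

S = sum(w**2 * Pj @ Lj.T @ Lj @ Pj for w, Pj, Lj in zip(v, P, L))
Sinv = np.linalg.inv(S)            # S is positive definite for this data

# Dual subspaces W~_j = S^{-1}W_j and dual operators Λ~_j = Λ_j π_{W_j} S^{-1}.
Pt = [proj(Sinv @ Ej) for Ej in E]
Lt = [Lj @ Pj @ Sinv for Pj, Lj in zip(P, L)]

# Λ~_j π_{W~_j} = Λ_j π_{W_j} S^{-1}  (adjoint of  π_{uV} u π_V = u π_V).
for Ltj, Ptj in zip(Lt, Pt):
    assert np.allclose(Ltj @ Ptj, Ltj)

# Frame operator of the dual:  S~ = Σ_j v_j^2 π_{W~_j} Λ~_j* Λ~_j π_{W~_j} = S^{-1}.
St = sum(w**2 * Ptj @ Ltj.T @ Ltj @ Ptj for w, Ptj, Ltj in zip(v, Pt, Lt))
assert np.allclose(St, Sinv)

# Dual reconstruction:  f = Σ_j v_j^2 π_{W_j} Λ_j* Λ~_j π_{W~_j} f.
f = np.array([0.5, 1.0, -1.0])
recon = sum(w**2 * Pj @ Lj.T @ Ltj @ Ptj @ f
            for w, Pj, Lj, Ltj, Ptj in zip(v, P, L, Lt, Pt))
assert np.allclose(recon, f)
```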
The following theorem shows that the canonical dual g-fusion frame gives rise to expansion coefficients with the minimal norm. \begin{theorem}\label{min} Let $\Lambda$ be a g-fusion frame with canonical dual $\tilde{\Lambda}$. For each $g_j\in H_j$, put $f=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j g_j$. Then $$\sum_{j\in\Bbb J}v_j^2\Vert g_j\Vert^2=\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2+\sum_{j\in\Bbb J}v_j^2\Vert g_j-\tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2.$$ \end{theorem} \begin{proof} We can write again \begin{align*} \sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2&=\langle f, S^{-1}_{\Lambda}f\rangle\\ &=\sum_{j\in\Bbb J}v_j^2\langle\pi_{W_j}\Lambda^*_j g_j, S_{\Lambda}^{-1}f \rangle\\ &=\sum_{j\in\Bbb J}v_j^2\langle g_j, \Lambda_j\pi_{W_j}S_{\Lambda}^{-1}f \rangle\\ &=\sum_{j\in\Bbb J}v_j^2\langle g_j, \tilde{\Lambda_j}\pi_{\tilde{W_j}} f \rangle. \end{align*} Therefore, $\mbox{Im}\Big(\sum_{j\in\Bbb J}v_j^2\langle g_j, \tilde{\Lambda_j}\pi_{\tilde{W_j}} f \rangle\Big)=0$. So \begin{align*} \sum_{j\in\Bbb J}v_j^2\Vert g_j-\tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2=\sum_{j\in\Bbb J}v_j^2\Vert g_j\Vert^2 -2\sum_{j\in\Bbb J}v_j^2\langle g_j, \tilde{\Lambda_j}\pi_{\tilde{W_j}} f \rangle+\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2, \end{align*} and, since the middle sum equals $\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda_j}\pi_{\tilde{W_j}}f\Vert^2$, the claimed identity follows. \end{proof}
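The orthogonality argument behind this minimality (the canonical coefficients $\lbrace v_j\tilde{\Lambda_j}\pi_{\tilde{W_j}}f\rbrace$ lie in $\mathcal{N}^{\perp}_{T_{\Lambda}}$, the remainder in $\mathcal{N}_{T_{\Lambda}}$) can be checked numerically; all data below is hypothetical.

```python
import numpy as np

# Hypothetical g-fusion frame on H = R^3 (same toy shape as before).
P = [np.diag([1.0, 1.0, 0.0]), np.diag([0.0, 1.0, 1.0])]   # π_{W_j}
L = [np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
     np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])]         # Λ_j
v = [1.0, 0.5]
S = sum(w**2 * Pj @ Lj.T @ Lj @ Pj for w, Pj, Lj in zip(v, P, L))
Sinv = np.linalg.inv(S)

# Arbitrary coefficients g_j in H_j and the vector f they synthesize.
g = [np.array([1.0, -1.0]), np.array([2.0, 0.5])]
f = sum(w**2 * Pj @ Lj.T @ gj for w, Pj, Lj, gj in zip(v, P, L, g))

# Canonical dual coefficients  Λ~_j π_{W~_j} f = Λ_j π_{W_j} S^{-1} f.
c = [Lj @ Pj @ Sinv @ f for Pj, Lj in zip(P, L)]

# Pythagorean identity for the weighted sequences {v_j g_j} and {v_j c_j}:
lhs = sum(w**2 * gj @ gj for w, gj in zip(v, g))
rhs = sum(w**2 * cj @ cj for w, cj in zip(v, c)) + \
      sum(w**2 * (gj - cj) @ (gj - cj) for w, gj, cj in zip(v, g, c))
assert np.isclose(lhs, rhs)

# Hence the canonical coefficients are never longer than the given ones.
assert sum(w**2 * cj @ cj for w, cj in zip(v, c)) <= lhs + 1e-12
```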
\section{Gf-Complete and g-Fusion Frame Sequences} \begin{definition} We say that $(W_j, \Lambda_j)$ is \textit{gf-complete}, if $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace=H.$ \end{definition} Now, it is easy to check that $(W_j, \Lambda_j)$ is gf-complete if and only if $$\lbrace f: \ \Lambda_j \pi_{W_j}f=0 , \ j\in\Bbb J\rbrace=\lbrace 0\rbrace.$$ \begin{proposition}\label{p3} If $\Lambda=(W_j, \Lambda_j, v_j)$ is a g-fusion frame for $H$, then $(W_j, \Lambda_j)$ is gf-complete. \end{proposition} \begin{proof} Let $f\in(\mbox{span}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace)^{\perp}\subseteq H$. For each $j\in\Bbb J$ and $g_j\in H_j$ we have $$\langle \Lambda_j\pi_{W_j}f, g_j\rangle=\langle f, \pi_{W_j}\Lambda^*_j g_j\rangle=0,$$ so, $\Lambda_j\pi_{W_j}f=0$ for all $j\in\Bbb J$. Since $\Lambda$ is a g-fusion frame for $H$, then $\Vert f\Vert=0$. Thus $f=0$ and we get $(\mbox{span}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace)^{\perp}=\lbrace0\rbrace$. \end{proof} In the following, we check whether the remaining family is still a g-fusion frame after a member is removed from a g-fusion frame. \begin{theorem}\label{del} Let $\Lambda=(W_j, \Lambda_j, v_j)$ be a g-fusion frame for $H$ with bounds $A, B$ and $\tilde{\Lambda}=(S^{-1}_{\Lambda}W_j, \Lambda_j \pi_{W_j}S^{-1}_{\Lambda}, v_j)$ be its canonical dual g-fusion frame. Suppose that $j_0\in\Bbb J$. \begin{enumerate} \item If there is a $g_0\in H_{j_0}\setminus\lbrace 0\rbrace$ such that $\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=g_0$ and $v_{j_0}=1$, then $(W_j, \Lambda_j)_{j\neq j_0}$ is not gf-complete in $H$. \item If there is an $f_0\in H_{j_0}\setminus\lbrace0\rbrace$ such that $\pi_{W_{j_0}}\Lambda^*_{j_0}\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0=f_0$ and $v_{j_0}=1$, then $(W_j, \Lambda_j)_{j\neq j_0}$ is not gf-complete in $H$. 
\item If $I-\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_{j_0}}}\tilde{\Lambda}^*_{j_0}$ is bounded invertible on $H_{j_0}$, then $(W_j, \Lambda_j, v_j)_{j\neq j_0}$ is a g-fusion frame for $H$. \end{enumerate} \end{theorem} \begin{proof} \textit{(1).} Since $\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\in H$, then by (\ref{frame}), $$\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j\tilde{\Lambda_j}\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0.$$ So, $$\sum_{j\neq j_0}v_j^2\pi_{W_j}\Lambda^*_j\tilde{\Lambda_j}\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=0.$$ Let $u_{j_0, j}:=\delta_{j_0, j}g_0$, thus $$\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\sum_{j\in\Bbb J}v_j^2\pi_{W_j}\Lambda^*_j u_{j_0, j}.$$ Then, by Theorem \ref{min}, we have $$\sum_{j\in\Bbb J}v_j^2\Vert u_{j_0, j}\Vert^2=\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\Vert^2+\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0-u_{j_0, j}\Vert^2.$$ Consequently, since $v_{j_0}=1$ and $\tilde{\Lambda}_{j_0}\pi_{\tilde{W}_{j_0}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=g_0$, $$\Vert g_0\Vert^2=\Vert g_0\Vert^2+2\sum_{j\neq j_0}v_j^2\Vert \tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\Vert^2$$ and we get $\tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=0$ for $j\neq j_0$. Therefore, $$\Lambda_j\pi_{W_j}S^{-1}_{\Lambda}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\tilde{\Lambda}_j\pi_{\tilde{W_j}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=0.$$ But, $g_0=\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0=\Lambda_{j_0}\pi_{W_{j_0}}S^{-1}_{\Lambda}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\neq0$, which implies that $S^{-1}_{\Lambda}\pi_{W_{j_0}}\Lambda^*_{j_0}g_0\neq0$ and this means that $(W_j, \Lambda_j)_{j\neq j_0}$ is not gf-complete in $H$.
\textit{(2).} Since $\pi_{W_{j_0}}\Lambda^*_{j_0}\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0=f_0\neq0$, we obtain $\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0\neq0$ and $$\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}\pi_{W_{j_0}}\Lambda^*_{j_0}\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0=\tilde\Lambda_{j_0}\pi_{\tilde{W}_{j_0}}f_0.$$ Now, the conclusion follows from \textit{(1)}.
\textit{(3)}. Using (\ref{frame}), we have for any $f\in H$ $$\Lambda_{j_0}\pi_{W_{j_0}}f=\sum_{j\in\Bbb J}v_j^2\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_j}}\tilde{\Lambda}^*_j\Lambda_j\pi_{W_j}f.$$ So, \begin{equation}\label{com}(I-\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_{j_0}}}\tilde{\Lambda}^*_{j_0})\Lambda_{j_0}\pi_{W_{j_0}}f=\sum_{j\neq j_0}v_j^2\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_j}}\tilde{\Lambda}^*_j\Lambda_j\pi_{W_j}f. \end{equation} On the other hand, we can write \begin{small} \begin{align*} \big\Vert\sum_{j\neq j_0}v_j^2\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_j}}\tilde{\Lambda}^*_j\Lambda_j\pi_{W_j}f\big\Vert^2&=\sup_{\Vert g\Vert=1}\big\vert\sum_{j\neq j_0}v_j^2\big\langle \Lambda_j\pi_{W_j}f, \tilde{\Lambda}_j\pi_{\tilde{W}_j}\pi_{W_{j_0}}\Lambda^*_{j_0}g \big\rangle\big\vert^2\\ &\leq\big(\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2\big)\sup_{\Vert g\Vert=1}\sum_{j\in\Bbb J}v_j^2\Vert \tilde{\Lambda}_j\pi_{\tilde{W}_j}\pi_{W_{j_0}}\Lambda^*_{j_0}g\Vert^2\\ &\leq\tilde{B}\Vert\Lambda_{j_0}\Vert^2(\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2) \end{align*} \end{small} where, $\tilde{B}$ is the upper bound of $\tilde\Lambda$. Now, by (\ref{com}), we have \begin{equation*} \Vert\Lambda_{j_0}\pi_{W_{j_0}}f\Vert^2\leq\Vert(I-\Lambda_{j_0}\pi_{W_{j_0}}\pi_{\tilde{W_{j_0}}}\tilde{\Lambda}^*_{j_0})^{-1}\Vert^2 \tilde{B}\Vert\Lambda_{j_0}\Vert^2(\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2). \end{equation*} Therefore, there is a number $C>0$ such that $$\sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2\leq C\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2$$ and we conclude for each $f\in H$ $$\frac{A}{C}\Vert f\Vert^2\leq\sum_{j\neq j_0}v_j^2\Vert \Lambda_j\pi_{W_j}f\Vert^2\leq B\Vert f\Vert^2.$$ \end{proof} \begin{theorem} $\Lambda$ is a g-fusion frame for $H$ with bounds $A,B$ if and only if the following two conditions are satisfied: \begin{enumerate} \item[(I)] The pair $(W_j, \Lambda_j)$ is gf-complete. 
\item[(II)] The operator $$T_{\Lambda}: \lbrace f_j\rbrace_{j\in\Bbb J}\mapsto \sum_{j\in\Bbb J}v_j\pi_{W_j}\Lambda_j^* f_j$$ is well-defined from $\mathscr{H}_2$ into $H$ and for each $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathcal{N}^{\perp}_{T_{\Lambda}}$, \begin{equation}\label{e7} A\sum_{j\in\Bbb J}\Vert f_j\Vert^2\leq \Vert T_{\Lambda}\lbrace f_j\rbrace_{j\in\Bbb J}\Vert^2\leq B\sum_{j\in\Bbb J}\Vert f_j\Vert^2. \end{equation} \end{enumerate} \end{theorem} \begin{proof} First, suppose that $\Lambda$ is a g-fusion frame. By Proposition \ref{p3}, (I) is satisfied. By Theorem \ref{t2}, $T_{\Lambda}$ is well-defined from $\mathscr{H}_2$ into $H$ and $\Vert T_{\Lambda}\Vert^2\leq B$. This proves the right-hand inequality in (\ref{e7}).
By Theorem \ref{2.3}, $T_{\Lambda}$ is surjective. So, $\mathcal{R}_{T^*_{\Lambda}}$ is closed. Thus $$\mathcal{N}^{\perp}_{T_{\Lambda}}=\overline{\mathcal{R}_{T^*_{\Lambda}}}=\mathcal{R}_{T^*_{\Lambda}}.$$ Now, if $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathcal{N}^{\perp}_{T_{\Lambda}}$, then $$\lbrace f_j\rbrace_{j\in\Bbb J}=T^*_{\Lambda}g=\lbrace v_j\Lambda_j \pi_{W_j}g\rbrace_{j\in\Bbb J}$$ for some $g\in H$. Therefore \begin{align*} (\sum_{j\in\Bbb J}\Vert f_j\Vert^2)^2&=(\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_j \pi_{W_j}g\Vert^2)^2=\vert\langle S_{\Lambda}(g), g\rangle\vert^2\\ &\leq\Vert S_{\Lambda}(g)\Vert^2 \Vert g\Vert^2\\ &\leq\Vert S_{\Lambda}(g)\Vert^2 \big(\frac{1}{A}\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_j \pi_{W_j}g\Vert^2\big). \end{align*} This implies that $$A\sum_{j\in\Bbb J}\Vert f_j\Vert^2\leq\Vert S_{\Lambda}(g)\Vert^2=\Vert T_{\Lambda}\lbrace f_j\rbrace_{j\in\Bbb J}\Vert^2$$ and (II) is proved.
Conversely, let $(W_j, \Lambda_j)$ be gf-complete and suppose that inequality (\ref{e7}) is satisfied. Let $\lbrace t_j\rbrace_{j\in\Bbb J}=\lbrace f_j\rbrace_{j\in\Bbb J}+\lbrace g_j\rbrace_{j\in\Bbb J}$, where $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathcal{N}_{T_{\Lambda}}$ and $\lbrace g_j\rbrace_{j\in\Bbb J}\in\mathcal{N}_{T_{\Lambda}}^{\perp}$. We get \begin{align*} \Vert T_{\Lambda}\lbrace t_j\rbrace_{j\in\Bbb J}\Vert^2&=\Vert T_{\Lambda}\lbrace g_j\rbrace_{j\in\Bbb J}\Vert^2\\ &\leq B\sum_{j\in\Bbb J}\Vert g_j\Vert^2\\ &\leq B\Vert \lbrace f_j\rbrace+\lbrace g_j\rbrace\Vert^2\\ &=B\Vert\lbrace t_j\rbrace_{j\in\Bbb J}\Vert^2. \end{align*} Thus, $\Lambda$ is a g-fusion Bessel sequence.
Assume that $\lbrace y_n\rbrace$ is a sequence of members of $\mathcal{R}_{T_{\Lambda}}$ such that $y_n\rightarrow y$ for some $y\in H$. So, there is a sequence $\lbrace x_n\rbrace\subseteq\mathcal{N}^{\perp}_{T_{\Lambda}}$ such that $T_{\Lambda}x_n=y_n$. By (\ref{e7}), we obtain \begin{align*} A\Vert x_n-x_m\Vert^2&\leq\Vert T_{\Lambda}(x_n-x_m)\Vert^2\\ &=\Vert T_{\Lambda}x_n -T_{\Lambda}x_m\Vert^2\\ &=\Vert y_n-y_m\Vert^2. \end{align*} Therefore, $\lbrace x_n\rbrace$ is a Cauchy sequence in $\mathscr{H}_2$ and hence converges to some $x\in \mathscr{H}_2$; by the continuity of $T_{\Lambda}$, we have $y=T_{\Lambda}(x)\in\mathcal{R}_{T_{\Lambda}}$. Hence $\mathcal{R}_{T_{\Lambda}}$ is closed. Since $\mbox{span}\lbrace \pi_{W_j}\Lambda_{j}^{\ast}(H_j)\rbrace\subseteq\mathcal{R}_{T_{\Lambda}}$, by (I) we get $\mathcal{R}_{T_{\Lambda}}=H$.
Let $T_{\Lambda}^\dagger$ denote the pseudo-inverse of $T_{\Lambda}$. By Lemma \ref{Ru}(3), $T_{\Lambda}T_{\Lambda}^{\dagger}$ is the orthogonal projection onto $\mathcal{R}_{T_{\Lambda}}=H$. Thus for any $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2 $, \begin{eqnarray*} A\Vert T_{\Lambda}^{\dagger}T_{\Lambda}\lbrace f_j\rbrace\Vert^2\leq\Vert T_{\Lambda}T_{\Lambda}^{\dagger}T_{\Lambda}\lbrace f_j\rbrace \Vert^2=\Vert T_{\Lambda}\lbrace f_j\rbrace\Vert^2. \end{eqnarray*} By Lemma \ref{Ru}(5), $\mathcal{N}_{{T}_{\Lambda}^{\dagger}}=\mathcal{R}^{\bot}_{T_{\Lambda}}$, therefore \begin{eqnarray*} \Vert T_{\Lambda}^\dagger\Vert^2\leq\frac{1}{A}. \end{eqnarray*} Also by Lemma \ref{Ru}(2), we have $$ \Vert(T_{\Lambda}^\ast)^{\dagger}\Vert^2\leq\frac{1}{A}.$$ But $(T_{\Lambda}^\ast)^{\dagger}T_{\Lambda}^\ast$ is the orthogonal projection onto \begin{eqnarray*} \mathcal{R}_{(T_{\Lambda}^\ast)^\dagger}=\mathcal{R}_{(T_{\Lambda}^\dagger)^\ast}=\mathcal{N}_{T_{\Lambda}^\dagger}^{\bot}=\mathcal{R}_{T_{\Lambda}}=H. \end{eqnarray*} So, for all $f\in H$ \begin{align*} \Vert f\Vert^2&=\Vert(T_{\Lambda}^\ast)^{\dagger}T_{\Lambda}^\ast f\Vert^2\\ &\leq \frac{1}{A}\Vert T_{\Lambda}^\ast f\Vert^2\\ &=\frac{1}{A}\sum_{j\in\Bbb J}v^2_j\Vert \Lambda_j \pi_{W_j}f\Vert^2. \end{align*} This implies that $\Lambda$ satisfies the lower g-fusion frame condition. \end{proof} Now, we can define a g-fusion frame sequence in the Hilbert space. \begin{definition} We say that $\Lambda$ is a \textit{g-fusion frame sequence} if it is a g-fusion frame for $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace$. \end{definition} \begin{theorem}\label{2.6} $\Lambda$ is a g-fusion frame sequence if and only if the operator
\begin{align*} T_{\Lambda}&:\mathscr{H}_2\longrightarrow H\\ T_{\Lambda}(\lbrace f_j\rbrace_{j\in\Bbb J})&=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda_{j}^{*}f_j \end{align*} is well-defined and has closed range. \end{theorem} \begin{proof} By Theorem \ref{2.3}, it is enough to prove that if $T_{\Lambda}$ has closed range, then $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace=\mathcal{R}_{T_{\Lambda}}$. Let $f\in\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace$, then $$f=\lim_{n\rightarrow\infty}g_n , \ \ \ g_n\in\mbox{span}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace\subseteq \mathcal{R}_{T_{\Lambda}}=\overline{\mathcal{R}}_{T_{\Lambda}}.$$ Therefore, $\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace\subseteq\overline{\mathcal{R}}_{T_{\Lambda}}=\mathcal{R}_{T_{\Lambda}}$. On the other hand, if $f\in\mathcal{R}_{T_{\Lambda}}$, then $$f=\sum_{j\in\Bbb J}v_j \pi_{W_j}\Lambda^*_j f_j\in\overline{\mbox{span}}\lbrace \pi_{W_j}\Lambda^*_j H_j\rbrace$$
and the proof is complete. \end{proof} \begin{theorem}
$\Lambda$ is a g-fusion frame sequence if and only if \begin{equation}\label{4} f \longmapsto \lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J} \end{equation} defines a map from $H$ onto a closed subspace of $\mathscr{H}_2$. \end{theorem} \begin{proof} Let $\Lambda$ be a g-fusion frame sequence. Then, by Theorem \ref{2.6}, $T_{\Lambda}$ is well-defined and $\mathcal{R}_{T_{\Lambda}}$ is closed. So, $T^*_{\Lambda}$ is well-defined and has closed range. Conversely, by hypothesis, for all $f\in H$ $$\sum_{j\in\Bbb J}\Vert v_j \Lambda_j \pi_{W_j}f\Vert^2<\infty.$$ Let $$B:=\sup\big\lbrace \sum_{j\in\Bbb J}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2 : \ \ f\in H, \ \Vert f\Vert=1\big\rbrace$$ and suppose that $g_j\in H_j$ and that $\Bbb I\subseteq\Bbb J$ is finite. We can write \begin{align*} \Vert\sum_{j\in\Bbb I}v_j \pi_{W_j}\Lambda^*_j g_j\Vert^2&=\Big(\sup_{\Vert f\Vert=1}\big\vert\langle\sum_{j\in\Bbb I}v_j \pi_{W_j}\Lambda^*_j g_j, f\rangle\big\vert\Big)^2\\ &\leq\Big(\sup_{\Vert f\Vert=1}\sum_{j\in\Bbb I}v_j\big\vert\langle g_j, \Lambda_j \pi_{W_j}f\rangle\big\vert\Big)^2\\ &\leq\big(\sum_{j\in\Bbb I}\Vert g_j\Vert^2\big)\big(\sup_{\Vert f\Vert=1}\sum_{j\in\Bbb I}v_j^2\Vert \Lambda_j \pi_{W_j}f\Vert^2\big)\\ &\leq B\big(\sum_{j\in\Bbb I}\Vert g_j\Vert^2\big). \end{align*} Thus, by Corollary \ref{cor}, $\Lambda$ is a g-fusion Bessel sequence for $H$. Therefore, $T_{\Lambda}$ is well-defined and bounded. Furthermore, if the range of the map (\ref{4}) is closed, the same is true for $T_{\Lambda}$. So, by Theorem \ref{2.6}, $\Lambda$ is a g-fusion frame sequence. \end{proof} \begin{theorem} Let $\Lambda=(W_j, \Lambda_j, v_j)$ be a g-fusion frame sequence. Then, it is a g-fusion frame for $H$ if and only if the map \begin{equation}\label{5} f \longmapsto \lbrace v_j \Lambda_j \pi_{W_j}f\rbrace_{j\in\Bbb J} \end{equation} from $H$ onto a closed subspace of $\mathscr{H}_2$ is injective. 
\end{theorem} \begin{proof} Suppose that the map (\ref{5}) is injective and $v_j \Lambda_j \pi_{W_j}f=0$ for all $j\in\Bbb J$. Then, the value of the map at $f$ is zero, so $f=0$. This means that $(W_j, \Lambda_j)$ is gf-complete. Since $\Lambda$ is a g-fusion frame sequence, it is a g-fusion frame for $H$.
The converse is evident. \end{proof} \begin{theorem} Let $\Lambda$ be a g-fusion frame for $H$ and $u\in\mathcal{B}(H)$. Then $\Gamma:=(uW_j, \Lambda_j u^*, v_j)$ is a g-fusion frame sequence if and only if $u$ has closed range. \end{theorem} \begin{proof} Assume that $\Gamma$ is a g-fusion frame sequence. So, by Theorem \ref{2.6}, $T_{\Lambda u^*}$ is a well-defined operator from $\mathscr{H}_2$ into $H$ with closed range. If $\lbrace f_j\rbrace_{j\in\Bbb J}\in\mathscr{H}_2$, then \begin{align*} uT_{\Lambda}\lbrace f_j\rbrace_{j\in\Bbb J}&=\sum_{j\in\Bbb J}v_ju\pi_{W_j}\Lambda_j^* f_j\\ &=\sum_{j\in\Bbb J}v_j\pi_{uW_j}u\Lambda_j^* f_j\\ &=\sum_{j\in\Bbb J}v_j\pi_{uW_j}(\Lambda_j u^*)^* f_j\\ &=T_{\Lambda u^*}\lbrace f_j\rbrace_{j\in\Bbb J}, \end{align*} therefore $uT_{\Lambda}=T_{\Lambda u^*}$. Thus $uT_{\Lambda}$ has closed range too. Let $y\in\mathcal{R}_u$; then there is $x\in H$ such that $u(x)=y$. By Theorem \ref{2.3}, $T_{\Lambda}$ is surjective, so there exists $\{f_j\}_{j\in\Bbb J}\in\mathscr{H}_2$ such that $$y=u(T_{\Lambda}\{f_j\}_{j\in\Bbb J}).$$ Thus, $\mathcal{R}_{u}=\mathcal{R}_{uT_{\Lambda}}$ and $u$ has closed range.
For the opposite implication, let \begin{align*} T_{\Lambda u^*}:&\mathscr{H}_2\longrightarrow H\\ T_{\Lambda u^*}\lbrace f_j\rbrace_{j\in\Bbb J}&=\sum_{j\in\Bbb J}v_j\pi_{uW_j}(\Lambda_j u^*)^* f_j. \end{align*} Hence, $T_{\Lambda u^*}=uT_{\Lambda}$. Since $T_{\Lambda}$ is surjective, $T_{\Lambda u^*}$ has closed range and, by Theorem \ref{t2}, is well-defined. Therefore, by Theorem \ref{2.6}, the proof is completed. \end{proof}
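The operator identities used above are easy to sanity-check numerically in finite dimensions. The following sketch (the dimensions, matrices, and weights are hypothetical choices for illustration, not objects from the paper) builds a finite g-fusion system $(W_j,\Lambda_j,v_j)$, verifies the adjoint relation between the synthesis operator $T_\Lambda$ and the analysis operator $T_\Lambda^*$, and assembles the frame operator $S=T_\Lambda T_\Lambda^*$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 6, 4, 2   # dim of H, number of indices j, dim of each W_j and H_j

# A hypothetical finite-dimensional g-fusion system (W_j, Lambda_j, v_j).
Q = [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(m)]
P = [q @ q.T for q in Q]                               # projections pi_{W_j}
Lam = [rng.standard_normal((k, d)) for _ in range(m)]  # Lambda_j : H -> H_j
v = rng.uniform(0.5, 1.5, size=m)                      # weights v_j

def synthesis(gs):
    """T_Lambda {g_j} = sum_j v_j pi_{W_j} Lambda_j^* g_j."""
    return sum(v[j] * P[j] @ Lam[j].T @ gs[j] for j in range(m))

def analysis(f):
    """T_Lambda^* f = { v_j Lambda_j pi_{W_j} f }_j."""
    return [v[j] * Lam[j] @ P[j] @ f for j in range(m)]

f = rng.standard_normal(d)
gs = [rng.standard_normal(k) for _ in range(m)]

# Adjoint relation: <T_Lambda g, f> = sum_j <g_j, (T_Lambda^* f)_j>.
lhs = synthesis(gs) @ f
rhs = sum(g @ h for g, h in zip(gs, analysis(f)))
assert np.isclose(lhs, rhs)

# Frame operator S = T_Lambda T_Lambda^*; its nonzero spectrum encodes the
# optimal frame bounds of the g-fusion frame sequence.
S = sum(v[j]**2 * P[j] @ Lam[j].T @ Lam[j] @ P[j] for j in range(m))
assert np.allclose(S @ f, synthesis(analysis(f)))
```

Since $S$ is a positive semidefinite operator, its smallest nonzero eigenvalue plays the role of the lower frame bound on the closed span of the system.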
\section{Conclusions} In this paper, we transferred some standard properties of general frames to g-fusion frames by defining g-fusion frames and their associated operators. Afterward, using the definitions of dual g-fusion frames and gf-completeness, we revisited a basic theorem about deleting a member in Theorem \ref{del}. Whether the operator defined in part \textit{3} of that theorem can be replaced by other operators analogous to those in parts \textit{1} and \textit{2} remains an open problem. Finally, we defined g-fusion frame sequences and presented their relationship with closed range operators.
\end{document}
\begin{document}
\title{Natural Philosophy and Quantum Theory}\author{Thomas Marlow \\ \emph{School of Mathematical Sciences, University of Nottingham,}\\ \emph{UK, NG7 2RD}}
\maketitle
\begin{abstract} We attempt to show how relationalism might help in understanding Bell's theorem. We also present an analogy with Darwinian evolution in order to pedagogically hint at how one might go about using a theory in which one does not even desire to explain correlations by invoking common causes. \end{abstract}
\textbf{Keywords}: Bell's Theorem; Quantum Theory; Relationalism.\\
\section*{Motivation} \label{sec:convergency}
In the first section we shall introduce Bell's theorem and explain how a relational philosophy might be used in interpreting it. In the second section we will present an analogy with Darwinian evolution via two parables. Our main aim is pedagogy---in explaining Bell's theorem to the layman.
\section*{Bell's Theorem} \label{sec:Bell}
In his seminal work \cite{BellBOOK} Bell defines formally what he means by `locality' and `completeness' and he convincingly proves that in order to maintain his definition of `locality' one must reject his definition of `completeness' (or vice versa). His definitions are very intuitive so they are very difficult to reject---and this is exactly the reason his theorem is so powerful. We will outline Bell's beautiful and, we would argue, irrefutable proof below and then we shall show that using relationalism we can, as Bell proves we should, reject his definition of `completeness' and/or `locality'. Bell's theorem is even more difficult to dissolve than Einstein, Podolsky and Rosen's famous theorem \cite{EPR} because we have, strictly speaking, to reject both his particular `completeness' assumption and his particular `locality' one---this is merely because his `completeness' assumption is an integral part of his `locality' assumption. But we are getting ahead of ourselves. Let us briefly discuss Bell's theorem and then we will discuss relationalism (note that we emphatically do not refute his proof because it is, in our opinion, wholly and profoundly correct---the aim of this note is to explain why this is the case).
If we have two experiments, labeled by parameters $a$ and $b$, which are spacelike separated, where $A$ and $B$ parameterise the particular results that we might receive in $a$ and $b$ respectively, then Bell assumes the following:
\begin{equation} p(AB \vert ab I) = p(A \vert a I)p(B \vert b I) \label{locality} \end{equation}
\noindent where $I$ is any other prior information that goes into the assignment of probabilities (this might include `hidden variables' if one wishes to introduce such odd things). $A,B,a$ and $b$ can take a variety of different values. Bell called this factorisation assumption `local causality' with the explicit hint that any `complete' theory which disobeys this assumption must embody nonlocal causality of a kind.
Jarrett \cite{Jarrett84} showed that this factorisation assumption can be split up into two logically distinct assumptions which, when taken together, imply Eq.\,(\ref{locality}). These two assumptions are, as named by Shimony \cite{Shimon86}, parameter independence and outcome independence. Shimony \cite{Shimon86} argued that orthodox quantum theory itself obeys parameter independence, namely:
\begin{equation} p(A \vert abI) = p(A \vert aI). \label{parameter} \end{equation}
\noindent This means that, in orthodox quantum theory, the probability that one predicts for an event $A$ in one spacetime region does not change when the parameter $b$ chosen in measuring an entangled subsystem elsewhere is known. Knowledge of $b$ doesn't help in predicting anything about $A$. If parameter independence is disobeyed then, some suggest \cite{Shimon86}, we would be able to signal between spacelike separated regions using orthodox quantum theory and, as such, there seems to be no harm in presuming that parameter independence failing means that nonlocal causality is probably manifest. However, by far the more subtle assumption implicit in Bell's `locality' assumption is outcome independence (which orthodox quantum theory does not obey):
\begin{equation} p(A \vert BabI) = p(A \vert abI). \label{outcome} \end{equation}
\noindent Knowledge of $B$ can and does affect the predictions one makes about $A$ in the standard interpretation of quantum theory \cite{Shimon86}. Bell implicitly justified assumption (\ref{outcome}) by presuming `completeness' of a certain kind. If one has `complete' information about the experiments then finding out the outcome $B$ cannot teach you anything about the probability of outcome $A$, hence we should presume (\ref{outcome}) for such `complete' theories. So outcome independence does not necessarily have anything to do with locality. Even if one were a nonlocal observer who gathered his information nonlocally one would still assume (\ref{outcome}) as long as that information was `complete'. So two assumptions go into Bell's factorisation assumption, namely a `locality' assumption (\ref{parameter}) and a `completeness' assumption (\ref{outcome}). Bell then proves elegantly that (\ref{parameter}) and (\ref{outcome}) together imply an inequality that quantum theory disobeys. So, quantum theory does not obey Bell's locality condition (\ref{locality}) which is itself made up of two sub-assumptions (\ref{parameter}) and (\ref{outcome}). Note that \emph{the one sub-assumption that orthodox quantum theory emphatically does not obey is the `completeness' one} (\ref{outcome}). We hope to convince you that any justification of (\ref{outcome}) relies on a category error\footnote{One argument against this presentation \cite{MaudlinBOOK} is that the splitting of Bell's factorisation assumption into (\ref{parameter}) and (\ref{outcome}) is not unique, and one could equally well split the factorisation assumption into different sub-assumptions. There is no \emph{a priori} reason to choose one particular way to split (\ref{locality}) into prior assumptions. This we agree with, but note that there are \emph{a posteriori} reasons for choosing to use Jarrett's analysis while discussing orthodox quantum theory. 
Since we are not refuting Bell's theorem---we would rather merely adopt a different interpretation of his assumptions---it does not matter which way we split (\ref{locality}), we merely split it up into (\ref{parameter}) and (\ref{outcome}) for pedagogical reasons.}.
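The two sub-assumptions can be exhibited concretely. The following numerical sketch (the analyser angles are arbitrary illustrative choices; the singlet correlation $E(a,b)=-\cos(a-b)$ is the textbook quantum prediction, not a formula from Bell's text) shows that singlet statistics obey parameter independence (\ref{parameter}), violate outcome independence (\ref{outcome}), and push the CHSH combination of correlations to $2\sqrt{2}$, beyond the bound of $2$ that (\ref{parameter}) and (\ref{outcome}) together would enforce:

```python
import numpy as np

def joint(A, B, a, b):
    """Singlet joint probability p(A,B|a,b) = (1 + A*B*E(a,b))/4,
    with outcomes A, B in {+1,-1} and correlation E(a,b) = -cos(a-b)."""
    return (1 + A * B * (-np.cos(a - b))) / 4

a, b = 0.0, np.pi / 3

# Parameter independence: the marginal p(A=+1|a,b) equals 1/2 whatever b is.
pA = sum(joint(+1, B, a, b) for B in (+1, -1))            # = 0.5

# Outcome independence fails: conditioning on the remote outcome B shifts
# the probability assigned to A away from the marginal 1/2.
pB = sum(joint(A, +1, a, b) for A in (+1, -1))
pA_given_B = joint(+1, +1, a, b) / pB                     # = 0.25, not 0.5

# CHSH combination at the standard angles reaches magnitude 2*sqrt(2) > 2.
E = lambda x, y: -np.cos(x - y)
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)         # |S| = 2.828...
```

Any theory satisfying both (\ref{parameter}) and (\ref{outcome}) keeps $|S|\leq 2$, which is exactly the inequality that the singlet statistics above disobey.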
So we used completeness to justify demanding (\ref{outcome}) but could we not similarly have used completeness to demand (\ref{parameter})? Yes we could. So the problem arises that it is difficult even to justify (\ref{parameter}) as a locality assumption. The only reason we \emph{do} call (\ref{parameter}) a locality assumption is that some theories which disobey it could perhaps be used to signal between spacelike separated regions. The point, however, is moot because quantum theory (in the orthodox interpretation) obeys (\ref{parameter}) and hence we have no problem with signalling in standard quantum theory \cite{Shimon86}. So Bell's theorem seems to suggest that quantum theory is `incomplete' rather than `nonlocal'. The very assumption that we should demand that we can ever find the \emph{real, absolute, ontological} probability independently of the probabilities we assign to other propositions seems to be \emph{the} assumption that orthodox quantum theory disobeys in Bell's analysis. Hence one might wish to adopt a relational approach to probability theory \cite{Marlow06fold} where we cannot even define the probability of an event independently of the probabilities we assign to other events---\emph{cf.} Cox's axioms of probability \cite{CoxBOOK,Cox46,JaynesBOOK}. So we must, it seems, try and interpret quantum theory in such a way that demanding (\ref{outcome}) is simply a category error of some sort (rather than something we desire in the first place)---Bell's theorem is so elegant and irrefutable that we must try to accommodate his conclusions.
Relational theories are ones in which we do not use statements about mere ontology, we only use facts or definitions about things that we can rationally justify. In fact, this is the only real principle of relational theories introduced by Leibniz, it is called the Principle of Sufficient Reason (PSR) \cite{Smolin05}. The PSR implies another principle called the Principle of Identifying the Indiscernible (PII) which says that if we cannot distinguish two theoretical entities by distinctions that we can rationally justify, we should identify them. In other words, if two things are to be distinct then they must be distinct for an identifiable reason. No two equals are the same. These are desiderata that ensure that we do not introduce facts, statements or distinctions in our theories which we are not rationally compelled to. However natural or obvious our assumptions sound, if we are not rationally compelled to use them we clearly ought not to use them in case we are mistaken (and, of course, there are lots and lots of ways we might be mistaken). So, for example, it seems very natural and obvious to define motion with respect to an absolute `flat' space and time (like in Newton's theory of gravity) because we instinctively na\"{\i}vely feel this is the case but it turns out that, since we are not rationally compelled to use such an absolute definition, we ought not to. This was Einstein's insight and it allowed him to derive a better theory than Newton's. In \cite{Marlow06fold} we argued that such a `parsimonious' relational philosophy can help in interpreting and generalising probability theory and quantum theory. For other relational approaches to quantum theory see \cite{Rovel96,SR06,Marlow06weak}.
\section*{Convergency}
It may seem like an odd jump to now begin discussing an analogy with Darwinian evolution. Nonetheless, this is exactly what we shall do and let us explain why. It is easy to see that Darwin also seemingly used such careful rational desiderata \cite{Smolin05}. Many people used to claim that species were eternal categories. One might perversely say that species possessed `elements of biology' that could not change. Rabbits are rabbits because they embody some eternal `Rabbitness'. A few hundred years ago this assumption didn't seem too implausible. Nonetheless, it is wrong. Utterly fallacious. Darwin argued that one was not rationally compelled to make such an assumption. And rejecting this assumption allowed him to design, with considerable effort and insight, his theories of natural and sexual selection. It might be that an analogous rationalist approach might help in understanding Bell's theorem.
Inspired by \cite{DawkinsBOOK}, let us introduce a little parable. Imagine that, in some horrible future, we find two species which look and behave in identical ways and these two organisms each call home an island that is geologically separate from the other. This amazes us so we search as hard as we possibly can but we find, beyond reasonable doubt, that there is \emph{no} way that a common ancestor of the two species could have recently (in geological time) `rafted', `tunneled', flown or swam between the two islands. This amazes us even more. Furthermore, we check the two organisms' genetics and we find that they each have \emph{exactly the same genome}. Exactly the same. We must conclude that the two species have evolved separately and have exactly the same genome. This more than merely amazes us, it shocks us to our cores, and we reel about jabbering for years trying to understand it. We know of no (and cannot rationally justify a) physical mechanism that can ensure a common genome for separate species. Some biologists commit suicide, some find God, some continue to search for a way that the two species might have evolved and conspired together regardless of the fact they are separate, and others consider it a fluke and move on. However, I am confident that we do not need to worry about this parable, this future will never become reality... beyond reasonable doubt anyway.
Note that this parable suggests a reading that is \emph{exactly opposite} to many expositions of Bell's theorem (especially Mermin's `instruction set' approach, see \cite{Mermin02} and references therein). One might question whether the `instruction sets' (or, equivalently, the catalogue of causes) we assign to two separated correlated particles are common to both. If we assume the instruction sets for each particle should be the same for two correlated particles then we conclude that quantum theory is surprising (it disobeys this assumption or embodies causal nonlocality). Our parable suggests the opposite view. Turning the argument on its head we should be \emph{utterly shocked} to find that the `instruction sets' or catalogues of causes are the same in each region. We know of no (and cannot rationally justify a) physical mechanism that can ensure common causes within the separate regions. We are shocked if two correlated phenotypes evolved separately and ended up embodying the same genetics, but why would we not be shocked if two correlated particles were shown to have arisen from common causes? The orthodoxy \emph{demands} that separate correlated properties should arise from common causes. Why the distinction between the two analogous cases? Why?
But, of course, we need another parable because perhaps the two organisms embody the same phenotypes merely because they live in sufficiently similar surroundings, so it might be that the `instruction sets' or common causes arise in the environment of the two separate convergent species. So we use two different definitions of `niche': an organism has its internal `genetic' niche and it has its external `environmental' niche. Convergent species might evolve in geological separation seemingly \emph{because} their environmental niches are sufficiently similar. But just as two distinct genetics can give rise to the same phenotype, might it not be that two distinct `ecological' niches might also house a common correlated phenotype? Yes!---this happens all over the place; biology is abundantly and wonderfully filled with convergent phenotypes that exist within significantly distinct environmental niches. (Species like Phytophthora, Platypus and Thylacine---Tasmanian Tiger---are all good examples of species which seem to embody convergent phenotypes with respect to other separated species; similarly, Koalas have fingerprints...). So we begin to question whether, in biology, we should ever demand that correlated phenotypes arise from common causes. Certainly we might happily begin to question such a thing without resorting to proclamations about nature being `nonlocal'.
So, another parable. Imagine in the future we again have two geologically separate islands and two seemingly identical species. Let's say they are apes, and that all the phenotypes that we can identify are pretty much the same: they have the same gait, both have grey hair on their heads, squat when they defaecate, speak the same jargon, the same kidney function etc. We would be very surprised if they have the same genetics as long as we rule out the idea that a recent ancestor `rafted' between the islands. We catch these apes and try to make them mate, but they do not seem to want to. We talk to them nicely and even though it disgusts them they mate. They do not produce any offspring or, if they do, their offspring are sterile\footnote{This is a biological version of the relational Principle of Identifying the Indiscernible.}; they are different species with different genetics. We are happy about this because it confirms that they evolved separately as we would expect two geologically separate species would. But then we ask ourselves, why do they share common phenotypes regardless of the fact they evolved in geologically separate niches? Perhaps they share common phenotypes because their niches are sufficiently similarly simian. So now let us list common features of their ecological niches: the forests on each island have similarly shaped trees, a similar array of fruit, a similar array of predators, the temperatures and humidity are the same, the same sun shines down on the two islands, etc. So it seems that the two apes have evolved common phenotypes \emph{because} their niches embody common causes. This seems obvious, right? Bell convincingly, and profoundly, proves that we cannot use analogous sets of common causes to explain correlations in quantum theory while maintaining causal locality, for such a `complete' local theory would obey (\ref{outcome}).
However, note that, in biology at least, we have exactly the same problem we had with the `genetics' parable. We should look at it from \emph{exactly the opposite point of view}. If we were to find that the environmental niches were so amazingly similar between the islands so as to ensure that two simians existed with the same phenotypes we would be utterly shocked. If the local ecologies had `genomes'---we might call them `e-nomes'---we would be shocked if their `e-nomes' were the same (for \emph{exactly the same} reason we would be shocked if two species that have evolved separately for a long time have the same genome). It would be as if the environments of each island were conspiring with each other, regardless of geological separation, so as to design simian apes. \emph{That} would be, by far, the more amazingly shocking conspiracy in comparison to what, in fact, actually happens: convergent phenotypes evolve with distinct genetics and within distinct ecologies. Common causes are the conspiracy---an analogue of the illusion of design. So we must, in biology, be careful to distinguish common causes from `sufficiently similar' uncommon causes which might arise by natural means. We simply \emph{cannot justify} the presumption of common causes. Like a magician on stage, nature does not repeat its tricks in the same way twice\footnote{The space of sets of possible causes for a particular phenotype to be the case is so vast that nature is unlikely to use the same set of causes the next time. There are many ways to \emph{evolve} a cat.}. Bell's theorem proves that we cannot assume an analogous common cause design-conspiracy in quantum theory while maintaining causal locality, but clearly it is a logical possibility that, however unpalatable, we ought \emph{not to assume it in the first place}. 
So, just like Bohr \cite{Bohr35} famously rejected EPR's \cite{EPR} assumptions and dissolved their nonlocality proof, we might yet happily reject Bell's \cite{BellBOOK} assumptions. And Bell (like EPR) \emph{proved} that we should reject his (their) assumptions. Two interpretations remain: some suggest that we should search for a way to maintain causal locality \cite{SR06,Bohr35,Peres03,Jaynes89,BellBOOKsub} while others suggest that we should use causally nonlocal common cause theories because they are, in the least, pedagogical and easy to understand \cite{BellBOOK}.
Furthermore, we can ask ourselves another penetrating question: in our analogy we know that the islands are similar, that they have similar forests, fruits, predators, temperatures and so forth but what, by Darwin's beard, gives us the sheer audacity, the silly ape-centric irrationalism, to call such things `causes'? Such a thing stinks of teleology. The `real' causes are small unpredictable changes by seemingly `random' mutation. Niches don't `cause' particular phenotypes to be the case, neither as a whole nor in their particulars. Analogously, what sheer teleological audacity do we have to discuss hidden `causes' in quantum theory? Measurements don't cause particular properties to be the case. Phenotypes and properties merely evolve---and we can design theories of probabilistic inference in which we assign probabilities to propositions about whether such phenotypes (resp. properties) will be said to be the case in certain niches (resp. measurements)\footnote{This Darwinian analogy correlates quite well with the idea of complementarity \cite{BohrBOOK}. `Complementary' phenotypes rarely arise within the same niche merely because they tend to arise in different niches.}. Nonetheless we don't have to give up on causality, nor do we have to give up local causality (merely some of our mere definitions of `local causality'). Local causality ought to be inviolate unless we find something physical, that we can \emph{rationally justify}, that travels faster than light.
This analogy with biology, and Bell's theorem itself, suggests that common causes might be something that we \emph{desire} but they are not something that we can \emph{demand} of nature; common causes are an anthropomorphic ideal. So, now let us ask ourselves, where might this analogy fail? Correlations between separate phenotypes arise over long periods whereas quantum correlations arise over very short time scales. Perhaps convergency will not have enough `time' to occur in quantum theory. However, `long' is also defined relative to us: the statutory apes. Quantum correlations happen over `long' periods of `Planck' time.
There is one interesting way in which the analogy succeeds brilliantly. Evolution happens by ensuring that small changes of phenotypes occur unpredictably. Quantum mechanics is, at its very core, a theory of small\footnote{\emph{Cf.} Hardy's `damn classical jumps' \cite{Hardy01}.} unpredictable changes as well. Perhaps we can learn lessons from quantum theory that are similar to those we learnt from Darwin. Rationally we cannot justify any conspiracy or design (some demon or god which ensures correlations arise from common causes) so rather, we must invoke a theory that explains all the `magical' and `interesting' correlations we see in nature by manifestly rejecting such accounts. Instead we should search for a physical mechanism by which convergency occurs in a causally local manner (interestingly, this is an approach suggested by Bell himself\footnote{He discusses trying to define a more `human' definition of locality that is weaker than his formal definition---we would argue, in opposition, that his formal definition (\ref{locality}) is the `human' one because it panders to an anthropomorphic ideal and is \emph{clearly not obeyed by nature.}} in \cite{BellBOOKsub}). Let us learn from life.
So, this brings us back to relationalism. If we follow a relational philosophy we need to obey the Principle of Identifying the Indiscernible (PII). Remember that the PII ensures that if all the facts are telling us that two things are exactly the same then we must identify them, they are the same entity and not separate entities. Let us catalogue all the `causes' for one event to be the case, or even for it to probably be the case (one might call these `causes' hidden variables). Similarly for another correlated event at spacelike separation. The PII tells us that these two catalogues of `causes' \emph{must not be the same}. The catalogues of `causes' cannot be the same otherwise we would identify indiscernibles and we could not be discussing separate entities. This \emph{uncommon} `cause' principle should not worry us---it is the very definition we use to allow us to call two things separate in the first place. This absolves us of reasoning teleologically about nonlocality. Instead we can reason rationally about locality. So two species that evolve separately will, in general, be separate species that cannot breed however similar the phenotypes that they embody. If we isolate two species, we know of no natural mechanism which maintains common causes in geological time, there are no `elements of biology'. Analogously, two separate correlated particles are, in fact, separate \emph{because} they arise from uncommon `causes'.
\section*{Summary and Conclusion}
Bell's theorem takes the form: `something' plus causal locality logically implies a condition---let's call it $C$---and quantum theory disobeys $C$. Rather than fruitlessly argue over whether Bell's theorem is logically sound or whether quantum theory disobeys $C$, we have tried to discuss why people choose to reject causal locality rather than the `something else' that goes into Bell's theorem. Bell was quite explicit in noting that the reason he chose to reject causal locality was because the `something' constituted some very basic ideas about realism that he didn't know how to reject. That `something' might be called `completeness' and we obviously don't want quantum theory to be incomplete. Hence Bell's conclusions rely on that `something' being eminently desirable---a very \emph{anthropomorphic} criterion. Also, he noted that we are used to causally nonlocal theories (\emph{cf.} Newtonian gravity) and that they are pedagogically useful and easy to understand. These are all good, if not wholly compelling, reasons for choosing to reject causal locality.
The major problem for those who would rather not reject causal locality (until a particle or information is found to travel faster than light) is in identifying clearly what that `something' is, or what part of that `something' we should reject. We have used a relational standpoint to give two options. Clearly Bell made some very specific assumptions about probability theory and it might be here that we can nullify his theorem. Perhaps probability theory ought to be relational, and thus it is not clear whether we should demand outcome independence (\ref{outcome}) on logical grounds \cite{Jaynes89}. If we define probabilities relationally then what we \emph{mean} by a probability will be its unique catalogue of relationships with all other probabilities (regardless of any separation of the events to which those probabilities refer). This doesn't seem to convince people, perhaps because Bell's justification for (\ref{locality}) didn't seem to come from probability theory \emph{per se} but rather it stemmed from some notion of `completeness' or `realism' (an argument which falters when we note that Bell's ideas of `realism' or `completeness' might themselves stem from a particular understanding of probability theory \cite{Jaynes89}). Nonetheless we have provided a second way out. Instead of assuming that the pertinent part of that `something' is mainly to do with probability theory, let us go to the heart of Bell's theorem and assume that `something' that we might yet reject is ``correlations ought to have common causes''. This common cause principle is the cornerstone of Bell's theorem, but perhaps it is just plain wrong. Like `elements of biology' are just plain wrong.
We have given an example of a physical theory---if also a biological theory---which convincingly inspires doubt about the anthropomorphic desire for common causes. This is not to suggest that Darwinian evolution necessarily disobeys $C$, nor that we necessarily ought to use Darwinian principles in quantum theory. Rather, ``correlations ought to have common causes'' might be a rejectable assumption, and there might be \emph{good reason} to reject it. If one agrees with a relational philosophy then there is a simple argument that suggests that correlations ought to have \emph{un}common causes, \emph{cf.} Leibniz's PII. Even if you do not accept this na\"{\i}ve argument, we hope that the analogy with Darwinian evolution will, in the least, convince you that the assumption of common causes might yet \emph{possibly} be rejected by rational reasoning (there are theories out there, even if quantum theory is not yet one of them, where the presumption of common causes for correlations just doesn't hold true, and for good reason). Perhaps we have even convinced you that uncommon causes are \emph{plausible}, and that there might yet be found some natural mechanism by which quantum-convergency occurs---one that we might yet identify and investigate. It is often suggested that causal nonlocality is a route that we logically need not take, but it is also possible that we \emph{ought} not to take it.
\end{document}
\begin{document}
\title{N-sided Radial Schramm-Loewner Evolution}
\author{Vivian Olsiewski Healey\footnote{research supported in part by NSF DMS-1246999.} \,and Gregory F. Lawler\footnote{research supported by NSF DMS-1513036}}
\date{\today}
\maketitle
\abstract{ We use the interpretation of the Schramm-Loewner evolution as a limit of path measures tilted by a loop term in order to motivate the definition of $n$-radial SLE going to a particular point. In order to justify the definition we prove that the measure obtained by an appropriately normalized loop term on $n$-tuples of paths has a limit. The limit measure can be described as $n$ paths moving by the Loewner equation with a driving term of Dyson Brownian motion. While the limit process has been considered before, this paper shows why it naturally arises as a limit of configurational measures obtained from loop measures.}
\section{Introduction}
Multiple Schramm-Loewner evolution has been studied by a number of authors including \cite{KL}, \cite{Dubedat}, \cite{Mohammad}, \cite{PWGlobal_Local_SLE}, \cite{BPW_GlobalSLE} (chordal) and \cite{Z2SLEbdry}, \cite{Z2SLEint} ($2$-sided radial). For $\kappa\leq 4$, domain $D$, and $n$-tuples $\boldsymbol x$ and $\boldsymbol y$ of boundary points, multiple chordal $SLE$ from $\boldsymbol x$ to $\boldsymbol y$ in $D$ is defined as the measure absolutely continuous with respect to the $n$-fold product measure of chordal $SLE$ in $D$ with Radon-Nikodym derivative \begin{equation}\label{chordal_tilt} Y(\bgamma)= I(\bgamma) \exp\left\{\frac \cent 2 \sum_{j=2}^n \, m[K_j(\bgamma)] \right\}, \end{equation} where $I(\bgamma)$ is the indicator function of \[ \{ \gamma^j\cap \gamma^k =\emptyset, \, 1\leq j <k \leq n \}, \] and $m[K_j(\bgamma)]$ is the Brownian loop measure of loops that intersect at least $j$ paths (see, e.g., \cite{JL_partition_function} for this result; see \cite{ConfInvLERW} for the construction of Brownian loop measure). We would like to define multiple radial $SLE$ by direct analogy with the chordal case, but this is not possible for two reasons. First, in the radial case the event $I(\bgamma)$ would have measure $0$, and second, the Brownian loop measure $m[K_j(\bgamma)]$ would be infinite, since all paths approach $0$. Instead, the method will be to construct a measure on $n$ paths that is absolutely continuous with respect to the product measure on $n$ independent radial $SLE$ curves with Radon-Nikodym derivative analogous to (\ref{chordal_tilt}) but with both $I(\bgamma)$ and $m[K_j(\bgamma)]$ depending only on the truncations of the curves at a large time $T$. Taking $T$ to infinity then gives the definition of multiple radial $SLE$. The precise details of this construction, the effect on the driving functions, and the rate of convergence of the partition function are the main concern of this work.
Schramm-Loewner evolution, originally introduced in \cite{Schramm}, is a distribution on a curve in a domain $D\subset {\mathbb C}$ from a boundary point to either another boundary point (\emph{chordal} $SLE$) or an interior point (\emph{radial} $SLE$). In both the chordal and radial cases, there are various ways to define the $SLE$ measure. Schramm's original observation was that any probability measure on curves satisfying conformal invariance and the domain Markov property can be described in the upper half plane or the disc using the Loewner differential equation. More precisely, after a suitable time change, it is the measure on parameterized curves $\gamma$ such that for each $t\in [0, \infty)$, $D=g_t\left(D\setminus \gamma[0,t]\right)$, where $g_t$ solves the Loewner equation: \[ \text{Chordal: } \dot g_t(z) =\frac{a}{g_t(z)-B_t} ,\quad g_0(z)=z \]
\[ \text{Radial: } \dot g_t(w) = 2a g_t(w) \frac{z_t + g_t(w) }{z_t-g_t(w)} ,\quad g_0(w)=w, \]
where $a=2/\kappa$, $B_t$ is a standard Brownian motion, and $z_t= e^{2i B_t}$. However, this dynamical interpretation is somewhat artificial in the sense that the curves typically arise from limits of models in equilibrium physics and are not ``created'' dynamically using this equation. Indeed, the dynamic interpretation is just a way of describing conditional distributions given certain amounts of information. When studying $SLE$, one goes back and forth between such dynamical interpretations and configurational or ``global'' descriptions of the curve.
One aspect of the global perspective is that
radial $SLE$ measure in different domains may be compared by also considering the partition function $\slepart_D(z,w)$, which assigns a total mass to the set of $SLE$ curves from $z$ to $w$ in the domain $D$. It is defined as the function with normalization $\slepart_\Disk(1,0)=1$ satisfying conformal covariance: \begin{equation}\label{eq:slepart_conformalcovariance} \slepart_D(z,w)= \abs{f'(z)}^b \abs{f'(w)}^{\tilde b}\slepart_{D'}(z',w'), \end{equation} where $f(D)=D'$, $f(z)=z'$, $f(w)=w'$, and \[ b=\frac{6-\kappa}{2\kappa}=\frac{3a-1}{2}, \qquad
\tilde b = b \,\frac{\kappa-2}{4}=b\,\frac{1-a}{2a} \] are the boundary and interior scaling exponents. (This definition requires sufficient smoothness of the boundary near $z$.) Another convention defines the partition function with an additional term for the determinant of the Laplacian; the benefit of our convention is that the value of the partition function is equal to the total mass.
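As a simple illustration of \eqref{eq:slepart_conformalcovariance}, consider the scaled disk $r\Disk$ with marked points $r$ and $0$. The map $f(z) = z/r$ sends $r\Disk$ to $\Disk$ and satisfies $|f'| \equiv 1/r$, so
\[ \slepart_{r\Disk}(r,0) = r^{-b}\, r^{-\tilde b}\, \slepart_\Disk(1,0) = r^{-(b+\tilde b)}. \]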
Considering $SLE$ as a measure with total mass allows for direct comparison between $SLE$ measure in $D$ with $SLE$ measure in a smaller domain $D'\subset D$. This comparison is called either \emph{boundary perturbation} or the \emph{restriction property}, and is stated precisely in Proposition \ref{restriction} \cite{Mohammad}.
Multiple chordal $SLE$ was first considered in \cite{BBK, Dubedat, KL}. Dub\'edat \cite{Dubedat} shows that two (or more) $SLE$s commute only if a system of differential equations is satisfied, and the construction holds until the curves intersect. Using this framework, the uniqueness of global multiple $SLE$ is shown in \cite{PK}, \cite{PWGlobal_Local_SLE}, and \cite{BPW_GlobalSLE}. In these works, the term \emph{local} $SLE$ is used to refer to solutions to the Loewner equation up to a stopping time, while \emph{global} $SLE$ refers to the measure on entire paths.
\begin{figure}
\caption{Initial segments of $n$-radial $SLE$}
\label{fig:unit_circle}
\end{figure}
This work builds on the approach of \cite{KL}, which relies on the loop interpretation to give a global definition for $0<\kappa\leq 4$. However, because we have to take limits, we will need to use both global and dynamical expressions. The dynamical description relies on computations concerning the radial Bessel process (Dyson Brownian motion on the circle) that go back to \cite{Cardy} and hold in the more general setting of $\kappa<8$.
\begin{figure}
\caption{Left: a loop $\ell$ contained in $\loops{j}{t}$. Right: a loop $\ell$ contained in $\pastloops{j}{t}$. The key difference is that loops in $\pastloops{j}{t}$ must intersect $\gamma^j$ before time $t$. In both cases, $\ell$ intersects $\gamma^k_t$ before $\gamma^j$, i.e. $s(\ell)<s^j(\ell)$. }
\label{fig:loops}
\end{figure}
Our main result is the following. Let $n$ be a positive integer and $\bgamma=(\gamma^1, \ldots, \gamma^n)$ an $n$-tuple of curves from $z^j_0 \in \bdry \Disk$ to $0$ with driving functions $z^j_t=e^{2i\theta^j_t}$. We will assume that the curves are parameterized using the $a$-common parameterization, which is defined in \S \ref{sec:locindep}. Let $\Prob$ denote the $n$-fold product measure on independent radial $SLE$ curves from $\gamma^j(0)$ to $0$ in $\Disk$ with this parameterization. (See Figure~\ref{fig:unit_circle}.) Let $\loops{j}{t} = \loops{j}{t}(\bgamma_t)$ be the set of loops $\ell$ that hit the curve $\gamma^j$ and at least one initial segment $\gamma^k_t$ for $k=1, \ldots, n$, $k\neq j$, but do not hit $\gamma^j$ first. (See the lefthand side of Figure \ref{fig:loops}.) Here we are measuring the ``time'' on the curves $\bgamma$ and not on the loops. Define \[ \loopterm_t = \indicator_t \, \exp \left\{ \frac{\cent}{2} \sum_{j=1}^n m_\Disk(\loops{j}{t}) \right\} \] where $\indicator_t$ is the indicator function that $\gamma^j_t \cap \gamma^k = \eset $ for $j \neq k $, and $m_\Disk$ is the Brownian loop measure.
\begin{reptheorem}{maintheorem}
Suppose $0 < \kappa \leq 4$ and
$t>0$. For each $T>t$, let
$\mu_{T}=\mu_{T,t}$ denote the measure
whose Radon-Nikodym
derivative with respect to $\Prob$ is
\[ \frac{ \loopterm_T}
{ {\mathbb E}^{\btheta_0}\left[ \loopterm_T\right]}.\]
Then as $T \rightarrow \infty$, the measure $\mu_{T,t}$,
viewed as a measure on curves stopped at time $t$,
converges to a probability measure in the total variation distance.
Moreover, the measures
are consistent and give a probability measure
on curves $\{\bgamma(t): t \geq 0\}$. This measure can be described as the solution
to the $n$-point Loewner equation with
driving functions $z^j_t=e^{2i \theta^j_t}$ satisfying
\begin{equation}
d \theta_t^j = 2a \sum_{k \neq j}
\cot(\theta_t^j -\theta_t^k) \, dt + dW_t^j,
\end{equation}
where $W^j_t$ are independent standard Brownian motions.
\end{reptheorem}
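For orientation, we note the $n=1$ case of the theorem: the set $\loops{1}{T}$ is empty (there is no second path for a loop to hit), so $\loopterm_T \equiv 1$ and the tilting is trivial. The drift in the driving equation vanishes, leaving
\[ d\theta_t = dW_t, \]
which is the driving process $z_t = e^{2i\theta_t}$ of ordinary radial $SLE_\kappa$.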
A key step in the proof is Theorem \ref{exponentialrateofconv1} which gives exponential convergence of a particular partition function for $n$-radial Brownian motion. This theorem is valid for $0 < \kappa < 8$, but only in the $\kappa \leq 4$ case can we apply this to our model and give a corollary that we now describe.
Let $ \mathcal X ={ \mathcal X}_n$ denote the set of ordered $n$-tuples $\btheta = (\theta^1,\ldots,\theta^n)$ in the torus $[0,\pi)^n$ for which there are representatives with $0 \leq \theta^1 < \theta^2 < \ldots < \theta^n < \theta^1 + \pi$. Denote
\[ F_a(\btheta) = \prod_{1\leq j<k\leq n} |\sin(\theta^k - \theta^j)|^a, \qquad \Integral{a} = \int_{\mathcal X} F_a(\btheta) \, d \btheta, \qquad \beta = \beta(a,n) = \frac{a(n^2-1)}{4}.\]
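For example, when $n = 2$ these quantities are explicit: $F_a(\btheta) = |\sin(\theta^2 - \theta^1)|^a$ and
\[ \beta(a,2) = \frac{a(2^2-1)}{4} = \frac{3a}{4}, \qquad 2an\beta = 2a \cdot 2 \cdot \frac{3a}{4} = 3a^2, \]
so for two paths the exponential rate in Corollary \ref{expconvloopversion} is $e^{-3a^2 t}$.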
\begin{repcorollary}{expconvloopversion}
If $a \geq 1/2$, there exists $u =u(2a,n)> 0$ such that
\[ {\mathbb E}^{\btheta_0}\left[ \loopterm_t \right] = e^{-2an\beta t} \, \frac{\Integral {3a}}{\Integral{4a}} F_a(\btheta_0)
[ 1 + O(e^{-ut})]. \] \end{repcorollary}
The paper is organized as follows. Section \ref{sec:discrete} describes the multiple $\lambda$-SAW model, a discrete model which provides motivation and intuition for the perspective we take in the construction of $n$-radial $SLE$. Section \ref{sec:preliminaries} gives an overview of the necessary background for the radial Loewner equation. Section \ref{sec:MultipleSLE} contains the construction of $n$-radial $SLE$ (Theorem \ref{maintheorem}) as well as locally independent $SLE$. The necessary results about the $n$-radial Bessel process are stated here in the context of $\kappa\leq 4$ without proof. Finally, section \ref{sec:DysonBM} contains our results about the $n$-radial Bessel process, including Theorem \ref{exponentialrateofconv1}. These results hold for all $\kappa<8$ and include proofs of the statements that were needed in section \ref{sec:MultipleSLE}.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Discrete Model}\label{sec:discrete}
Although we will not prove any results about convergence of a discrete model to the continuum, much of the motivation for our work comes from a belief that $SLE$ is a scaling limit of the ``$\lambda$-SAW'' first described in \cite{KL}. In particular, the key insight needed to prove Theorem \ref{maintheorem}, the use of the intermediate process \emph{locally independent} $SLE_\kappa$ as a step between independent $SLE_\kappa$ and $n$-radial $SLE_\kappa$, was originally formulated by considering the partition function of multiple $\lambda$-SAW paths approaching the same point. For this reason, we describe the discrete model in detail here.
The model weights self-avoiding paths using the \emph{random walk loop measure}, so we begin by defining this.
A (rooted) random walk loop in ${\mathbb Z}^2$ is a nearest neighbor path $\ell = [\ell_0,\ell_1,\ldots,\ell_{2k}]$ with $\ell_0 = \ell_{2k}$. The loop measure gives measure $\hat m(\ell) = (2k)^{-1} \, 4^{-2k}$ to each nontrivial loop of length $2k >0$; we write $m$ for the induced measure on unrooted loops. If $V \subset A \subset {\mathbb Z}^2$, we let \[ F_V(A) = \exp \left\{\sum_{\ell \subset A,
\ell \cap V \neq \eset } m(\ell) \right\}, \]
that is, $\log F_V(A)$ is the measure of loops
in $A$ that intersect $V$.
We fix $n$ and some $r_n > 0$ such that there exist $n$ infinite self-avoiding paths starting at the origin that have no intersection after they first leave the ball of radius $r_n$.
(For $n \leq 4$, we can choose $r_n = 0$ but for larger
$n$ we need to choose $r_n$ bigger because one cannot
have five nonintersecting paths starting at the origin.
This is a minor discrete detail that we will not worry about.) If $A \subset {\mathbb Z}^2$ is a finite, simply connected
set containing the disk of radius $r_n$ about the origin, we let $\saws_A$ denote the set of
self-avoiding walks $\eta$ starting at $\p A$,
ending at $0$, and otherwise staying in $A$.
As a slight abuse of notation, we will write $\eta^1
\cap \eta^2 = \eset$ if the paths have no intersections
other than the beginning of the reversed paths up to
the first exit from the ball of
radius $r_n$. (If $n \leq 4$ and $r_n = 0$, this means
that the paths do not intersect anywhere except their terminal
point which is the origin.)
If $\bfeta = (\eta^1,\ldots,\eta^n)$ is an $n$-tuple
of such paths, we let $I(\bfeta)$ be the indicator function
of the event that $\eta^j \cap \eta^k = \eset$ for all $j\neq k$. We write $|\eta^j|$ for the number of edges in $\eta^j$
and $|\bfeta| = |\eta^1| + \cdots + |\eta^n|$.
Let $\bar \saws_A = \bar \saws_{A,n} $ denote the set of $n$-tuples $\bfeta$
in $\saws_A$ with $I(\bfeta) = 1$. We then consider
the measure on configurations given by
\[ \nu_{A,\cent}
(\bfeta) = \exp\{- \beta |\bfeta|\}
\, I(\bfeta)\, F_\bfeta(A)^{\cent/2} . \] Here $\beta = \beta_\cent$ is the critical value of the parameter, chosen so that the measure is at criticality. If $\z \in (\p A)^n$, we write $\bar \saws_A(\z)$ for the set of $\bfeta \in \bar \saws_A$ such that $\eta^j$ starts at $z^j$.
Suppose $D$ is a bounded, simply connected domain in ${\mathbb C}$ containing the origin and let $\z = (z^1,\ldots,z^n)$ be an $n$-tuple of distinct points in $\p D$ oriented counterclockwise. For ease, we assume that for each $j$, $\p D$ in a neighborhood of $z^j$ is a straight line segment parallel to the coordinate axes (e.g., $D$ could be a rectangle and none of the $z^j$ are corner points). For each lattice spacing $N^{-1}$, let $A_N$ be an approximation of $N D$ in $ {\mathbb Z}^2$ and let $\z_N = (z^1_N,\ldots,z^n_N)$ be lattice points corresponding to $N \z$. We can consider the limit as $N \rightarrow \infty$ of the measure on scaled configurations $N^{-1} \, \bfeta$ given by $\nu_{A_N,\cent}$ restricted to $\bar \saws_{A_N}(\z_N)$.
\begin{conjecture} Suppose $ \cent \leq 1$. Then there exist exponents $b,\tilde b_n$, a critical value $\beta = \beta_\cent$, and a partition function $\slepart^*(D;\z,0)$ such that as $N \rightarrow \infty$, \[ \nu_{A_N,\cent} (\bar \saws_{A_N}(\z_N))
\sim \slepart^*(D;\z,0) \, N^{nb}
\, N^{\tilde b_n} . \] Moreover, the scaling limit $N^{-nb} \, N^{-\tilde b_n}
\, \nu_{A_N,\cent} $ is $n$-radial $SLE_\kappa$, $\mu_D(\z,0)$ with
partition function $\slepart(D;\z,0).$ If $f:D \rightarrow
f(D)$ is a conformal transformation with $f(0) = 0$, then
\[ f \circ \mu_D(\z,0) = |f'(\z)|^b \, |f'(0)|^{\tilde{b}_n }\, \mu_{f(D)}(f(\z),0).\] Here
$f(\z) = (f(z^1),\ldots,f(z^n))$ and $ f'(\z)
= f'(z^1) \cdots f'(z^n)$. \end{conjecture}
This conjecture is not precise, and since we are not planning on proving it, we will not make it more precise. The main goal of this paper is to show what, assuming the conjecture, $n$-radial $SLE_\kappa$ should be and what the exponents $b, \tilde b_n$ are.
The case $n=1$ is usual radial $SLE_\kappa$ for $\kappa \leq 4$ and the relation is \[ b = \frac{6-\kappa}{2\kappa}, \;\;\;\;
\tilde b_1 = \tilde b = b \, \frac{\kappa - 2}{4} , \;\;\;\;
\cent = \frac{(3 \kappa-8)(6-\kappa)}{2\kappa}.\] This is understood rigorously in the case of $\cent = -2, \kappa = 2$ since the model is equivalent to the loop-erased random walk. For other cases it is an open problem. For $\cent = 0$,
it is essentially equivalent to most of the very hard open problems about self-avoiding walk. However, assuming the conjecture and using the fact that the limit should satisfy the restriction property, one can determine $\kappa = 8/3,
\cent = 0, b = 5/8, \tilde b = 5/48$. The critical exponents
for SAW can be determined (exactly but
nonrigorously) from these values.
The case $n=2$ is related to {\em two-sided radial $SLE_\kappa$} which can also be viewed as chordal $SLE_\kappa$ from $z^1$ to $z^2$, restricted to paths that go through the origin. In this case, $\tilde b_2 = d-2$ where $d = 1 + \frac \kappa 8$ is the fractal dimension of the paths.
\subsection{Radial $SLE$ and the restriction property} \label{radialsec}
The radial Schramm-Loewner evolution with parameter $\kappa =2/a$ ($SLE_\kappa$) from $ z = e^{2i\theta}$ to the origin in the unit disk is defined as the random curve $\gamma(t)$ with the following properties. Let $D_t$ be the component of $\Disk \setminus \gamma[0,t]$ containing the origin. If $g_t: D_t \rightarrow \Disk $ is the conformal transformation with $g_t(0) = 0, g_t'(0) > 0$, then $g_t$ satisfies \[ \dot g_t(w) = 2a \, g_t(w) \, \frac{e^{2iB_t}+ g_t(w)}
{e^{2iB_t} - g_t(w)} , \;\;\;\; g_0(w) = w, \] where $B_t$ is a standard Brownian motion. More precisely, this is the definition of radial $SLE_\kappa$ when the curve has been parameterized so that $g_t'(0) = e^{2at}$.
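The normalization $g_t'(0) = e^{2at}$ can be read off from the Loewner equation: writing $g_t(w) = g_t'(0)\, w + O(w^2)$ near the origin and matching first-order terms gives
\[ \frac{d}{dt}\, g_t'(0) = 2a\, g_t'(0)\, \frac{e^{2iB_t} + 0}{e^{2iB_t} - 0} = 2a\, g_t'(0), \]
and since $g_0'(0) = 1$, we obtain $g_t'(0) = e^{2at}$.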
We will view $SLE_\kappa$ as a measure on curves modulo reparameterization (there is a natural parameterization that can be given to the curves, but we will not need this in this paper). We extend $SLE_\kappa$ to be a probability measure $\mu^\#_D(z,w)$ where $D$ is a simply connected domain, $z \in \p D$ and $w \in D$ by conformal transformation. It is also useful for us to consider the non-probability measure $\mu_D(z,w) =
\slepart_D(z,w)\, \mu_D^\#(z,w)$. Here
$\slepart_D(z,w)$ is the radial partition function
that can be defined by $\slepart_\Disk(1,0) = 1$
and the scaling rule
\[ \slepart_D(z,w) = |f'(z)|^b \, |f'(w)|^{\tilde b}
\, \slepart_{f(D)}(f(z),f(w)), \]
where
\[ b= \frac{6 - \kappa}{2\kappa}
= \frac{3a-1}{2} , \;\;\;\; \tilde b = b \, \frac{\kappa-2}{4} =
b \, \frac{1-a}{2a} , \]
are the boundary and interior scaling exponents.
This definition requires sufficient smoothness of
the boundary near $z$. However, if $D' \subset D$
and the two domains agree in a neighborhood of $z$, then the ratio
\[ \slepart(D,D';z,w):=
\frac{\slepart_{D'}(z,w)}{\slepart_D(z,w)} \]
is a conformal invariant and hence is well defined
even for rough boundary points.
We will need the restriction property for radial
$SLE_\kappa, \kappa \leq 4$. We state it here in a way that does
not depend on the parameterization, which is the form that we will use.
\begin{definition}
If $D$ is a domain and $K_1,K_2$ are disjoint subsets
of $D$, then $m_D(K_1,K_2)$ is the Brownian loop
measure of loops that intersect both $K_1$ and $K_2$.
\end{definition}
\begin{proposition}[Restriction property]
Suppose $\kappa \leq 4$ and $D = \Disk \setminus K$
is a simply connected domain containing the origin.
Let $z \in \p D$ with ${\rm dist}(z,K) > 0$, and let $\gamma$
be a radial $SLE_\kappa$ path from $z$ to $0$
in $\Disk$. Let
\[ M_t = 1\{\gamma_t \cap K = \eset\} \, \exp\left\{\frac \cent 2
\, m_\Disk(\gamma_t,K) \right\} \, \slepart_t , \]
where $\slepart_t = \slepart(\Disk \setminus
\gamma_t, D \setminus \gamma_t; \gamma(t),0).$
Then $M_t$ is a uniformly integrable martingale,
and the probability
measure obtained from Girsanov's theorem by tilting by $M_t$ is $SLE_\kappa$ from $z$ to $0$ in
$D$. In particular,
\begin{equation} \label{restriction}
{\mathbb E}\left[1\{\gamma \cap K = \eset\}
\, \exp\left\{\frac \cent 2
\, m_\Disk(\gamma,K) \right\} \right] = {\mathbb E}[M_\infty]
= M_0 = \slepart(\Disk,D; z,0).
\end{equation}
\end{proposition}
See \cite{Mohammad} for a proof. It will be useful for us to discuss the ideas in the proof. We parameterize the curve as above and consider $\slepart_t$, the ratio of the partition function at time $t$ of $SLE$ in $D \setminus \gamma_t$ to that of $SLE$ in $\Disk \setminus \gamma_t$. Using the scaling rule and It\^o's formula, one computes the SDE for $\slepart_t,$ \[ d\slepart_t = \slepart_t \, \left[A_t \, dt + R_t \, dB_t\right].\] This is not a local martingale, so we find the compensator and let \[ M_t = \slepart_t \, \exp \left\{-\int_0^t A_s \, ds \right\},\] which satisfies \[ dM_t = R_t \, M_t \, dB_t.\] This is clearly a local martingale. The following observations are made in the calculations: \begin{itemize} \item The compensator term is the same as $\exp\left\{\frac \cent 2 \, m_\Disk(\gamma_t,K) \right\} $.
\item If we use the Girsanov theorem and tilt by this
martingale, we get the same distribution on paths as
$SLE_\kappa$ in $D$. The latter distribution
was defined by conformal invariance.
\end{itemize} All of this is valid for all $\kappa$ up to the first time $t$ that $\gamma(t) \in K$. For $\kappa \leq 4$, we now use the fact that radial $SLE$ in $D$ never hits $K$ and is continuous at the origin. This allows us to conclude that $M_t$ is a uniformly integrable martingale. With probability one under the new measure we have $\gamma \cap K = \eset$, and hence the proposition follows.
We sketched the proof in order to see what happens when we allow the set $D$ to shrink with time. In particular, let $D_t = \Disk \setminus K_t$, where $K_t$ grows with time, and let \[ \slepart_t = \slepart(\Disk \setminus \gamma_t,
D_t\setminus \gamma_t;\gamma(t),0), \]
\[ T = \inf\{t: \gamma_t \cap K_t \neq \eset \}.\] For $t < T$, we can again consider $SLE$ tilted by $\slepart_t$. However, since $K_t$ is growing, the loop term is more subtle in this case. Roughly speaking, the relevant loops are those that intersect $K_s$ for some $s$ smaller than their first intersection time with $\gamma_t$.
More precisely, the local martingale has the form \[ \loopterm_t^{\cent/2} \, \exp\left\{- \int_0^t A_s \, ds \right\} \, \slepart_t, \] where \begin{itemize} \item $\log \loopterm_t$ is the Brownian loop measure of loops $\ell$ that hit $\gamma_t$ and satisfy the following: if $s_\ell$ is the smallest time with $\gamma(s_\ell)\in \ell$, then $\ell\cap K_{s_\ell} \ne \eset$.
\item $ A_t = \rho'(0)$, for \[\rho(\epsilon)= \rho_t(\epsilon) = \slepart(\Disk \setminus \gamma_t,
D_{t+\epsilon}\setminus \gamma_t; \gamma(t),0).\] We assume that $A_t$ is well defined, that is, that $\rho$ is differentiable.
\end{itemize} When we tilt by $\slepart_t$, the process at time $t$ moves like $SLE$ in $D_t$. We will only consider this up to time $T$.
\section{Measures on $n$-tuples of paths} \label{sec:MultipleSLE}
We will use a similar method to define two measures on $n$-tuples of paths, which can be viewed as processes taking values in $\overline \Disk^n$. We start with $n$ independent radial $SLE$ paths. First, we will tilt independent $SLE$ by a loop term to define a process with the property that each of the $n$ paths locally acts like $SLE$ in the disk minus all $n$ initial segments. We will tilt this process again by another loop term and take a limit to give the definition of global multiple radial $SLE$. Splitting the construction into two distinct tiltings will allow us to analyze the contribution of $t$-measurable loops separately from that of ``future loops.'' Furthermore, each of these processes is interesting in its own right, and we show that in each case the driving function satisfies the radial Bessel equation. (See equations (\ref{Bessel-a}) and (\ref{eq:SDE2a}).)
This clarifies which terms cause the multiple paths to avoid each other's past versus the terms that ensure that the paths continue to avoid each other in the future until all curves reach the origin.
\subsection{Notation}
We will set up some basic notation; some of the notation that was used in the single $SLE$ setting above will be repurposed here in the setting of $n$ curves. (See Figure~\ref{fig:unzip}.)
\begin{figure}
\caption{$n$-radial $SLE$}
\label{fig:unzip}
\end{figure}
\begin{itemize}
\item We fix a positive integer $n$ and let $\btheta = (\theta^1,\ldots,\theta^n)$ with \[ \theta^1 < \cdots < \theta^n < \theta^1 +\pi.\] Let $z^j = \exp\{2i \theta^j\}$ and $\bz = (z^1,\ldots,z^n)$. Note that $z^1,\ldots,z^n$ are $n$ distinct points on the unit circle ordered counterclockwise.
\item Let $\bgamma = (\gamma^1,\ldots,\gamma^n)$ be an $n$-tuple of curves $\gamma^j:(0,\infty) \rightarrow \Disk\setminus\{0\}$ with $\gamma^j(0+) = z^j $ and $\gamma^j(\infty) = 0$. We write $\gamma^j_t$ for $\gamma^j[0,t]$ and $\bgamma_t = (\gamma^1_t,\ldots,\gamma^n_t)$.
In a slight abuse of notation, we will use $\gamma^j_t$ to refer to both the set $\gamma^j[0,t]$ and the function $\gamma^j$ restricted to times in $[0,t]$.
\item Let $ D_t^j, D_t$ be the connected components of $ \Disk \setminus \gamma_t^j,
\Disk \setminus \bgamma_t,$ respectively, containing
the origin. Let $g_t^j:
D_t^j \rightarrow \Disk, g_t:D_t \rightarrow \Disk$ be the unique conformal transformations with \[ g_t^j(0) = g_t(0) = 0 , \;\;\;\;\;
(g_t^j)'(0), g_t'(0) > 0.\]
\item Let $T$ be the first time $t$ such that $\gamma_t^j \cap \gamma_t^k \neq \eset $ for some $1 \leq j < k \leq n$.
\item Define $z_t^j = \exp\{2i\theta^j_t\}$ by $g_t(\gamma^j(t)) = z_t^j$. Let $\z_t = (z^1_t,\ldots,z^n_t), \btheta_t = (\theta^1_t,\ldots,\theta^n_t)$. For $\zeta \in \Half$ define $h_t(\zeta )$ to be the continuous function of $t$ with $h_0(\zeta) = \zeta$ and \[ g_t(e^{2i\zeta}) = e^{2ih_t(\zeta)}.\] Note that if $\zeta \in {\mathbb{R}}$ so that $e^{2i\zeta} \in \p \Disk,$ we can differentiate with respect to $\zeta$ to get \begin{equation} \label{jan25.3}
|g_t'(e^{2i\zeta})|
= h_t'(\zeta). \end{equation}
\item More generally, if $\bt = (t_1,\ldots,t_n)$ is an $n$-tuple of times, we define $\bgamma_\bt, D_\bt, g_\bt$ analogously. We let $ \alpha(\bt) = \log g_\bt'(0).$
\item We will say that the curves have the \textit{common (capacity $a$-)parameterization} if for each $t$, \begin{equation} \label{jan25.1}
\p_{j}\alpha(t,t,\ldots,t) = 2a, \;\;\;\; j=1,\ldots,n.
\end{equation} In particular, \begin{equation} \label{jan25.2} g_t'(0) = e^{2ant}.
\end{equation}
Note that \eqref{jan25.1} is a stronger condition than
\eqref{jan25.2}.
\end{itemize}
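To see that \eqref{jan25.1} implies \eqref{jan25.2}, apply the chain rule to $t \mapsto \alpha(t,t,\ldots,t)$:
\[ \frac{d}{dt}\, \alpha(t,t,\ldots,t) = \sum_{j=1}^n \p_j \alpha(t,t,\ldots,t) = 2an, \]
so that $\log g_t'(0) = \alpha(t,t,\ldots,t) = 2ant$.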
The following form of the Loewner differential equation is proved in the same way as in the $n=1$ case.
\begin{proposition}[Radial Loewner equation]\label{prop_radial_Loewner_equation} If $\bgamma_t$ has the common parameterization, then for $t < T$, the functions $g_t,h_t$ satisfy \[ \dot g_t(w) = 2a \, g_t(w) \sum_{j=1}^n
\frac{z_t^j + g_t(w)}{z_t^j - g_t(w)}, \;\;\;\;\;
\dot h_t(\zeta) = a \sum_{j=1}^n \cot(h_t(\zeta) -
\theta_t^j).\] If $\p D_t$ contains an open arc of $\p \Disk$ including $w = e^{2i\zeta}$, then \begin{equation} \label{jan25.4}
|g_t'(w)| = \exp \left\{-a\int_0^t \sum_{j=1}^n \csc^2 (h_s(\zeta)-\theta^j_s)
\, ds \right\}.
\end{equation} \end{proposition}
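The formula \eqref{jan25.4} follows from the equation for $h_t$ by differentiating in $\zeta$: since $\frac{d}{dx} \cot x = -\csc^2 x$,
\[ \p_t h_t'(\zeta) = -a\, h_t'(\zeta) \sum_{j=1}^n \csc^2\left(h_t(\zeta) - \theta^j_t\right), \]
and solving this linear equation with $h_0'(\zeta) = 1$ and applying \eqref{jan25.3} gives \eqref{jan25.4}.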
\subsection{Common parameterization and local independence }\label{sec:locindep}
Suppose that $\gamma^1,\ldots,\gamma^n$ are independent radial $SLE_\kappa$ paths in $\Disk$ starting at $z^1,\ldots,z^n$, respectively, going to the origin. Then we can parameterize the paths so that they have the common
parameterization. (This parameterization is only possible until the first time that two of the paths intersect, but this will not present a problem since we will usually restrict to nonintersecting paths.) Indeed, suppose
$\tilde \gamma^1,\ldots,\tilde \gamma^n$ are independent $SLE_\kappa$ paths with the usual parameterization as in Section \ref{radialsec}. It is not true that $\tilde \bgamma_t = (\tilde \gamma^1_t,\ldots,\tilde \gamma^n_t)$ has the common parameterization. We will write $\gamma^j(t) = \tilde \gamma^j(\sigma^j(t))$ where $\sigma^j(t)$ is the necessary time change. Define $g_{t,j}$ by $g_t =g_{t,j} \circ g_t^j$. The driving functions for $\tilde \bgamma_t$ are independent standard Brownian motions; denote these by $\drjradial{j}{t}$. Define $\drjcommon{j}{t}$ by
$ \drjcommon{j}{t}=\drjradial{j}{\sigma^j(t)} $ so that $ e^{2 i \drjcommon{j}{t} }=g^j_t(\gamma^j(t)). $ Furthermore, define $h_{t}^j$ and $h_{t, j}$ so that \begin{equation}\label{notation:htj} h_t (w)=h_{t,j}\circ h^j_t(w), \end{equation}
and \[ g_t^j (e^{2iw})= e^{2i h_{t}^j(w)}.\] (See Figure \ref{fig:unzip}.)
\begin{lemma} \label{time_change_derivative}
The derivative
$\dot \sigma^j(t)$ depends only on $\bgamma_t$ and is given by \begin{equation} \label{Time_change_common_param} \dot \sigma^j(t)
=h_{t,j}'(\drjcommon{j}{t})^{-2}. \end{equation} \end{lemma}
\begin{proof} Differentiating both sides of equation (\ref{notation:htj}), we obtain \begin{equation}\label{time_deriv_of_h} \dot h_t(w)= \dot h_{t,j}(h^j_t(w)) + h_{t,j}' (h^j_t(w)) \times \dot h^j_t(w). \end{equation} Since $g^j_t$ satisfies the (single-slit) radial Loewner equation with an extra term for the time change, $h^j_t$ satisfies \[ \dot h^j_t (w) = a \cot\left(h^j_t(w) - \drjcommon{j}{t} \right) \times \dot \sigma^j(t). \] On the other hand, $h_{t,j}$ satisfies \[ \dot h_{t,j} (w) = a \sum_{k\neq j} \cot \left(h_{t,j}(w) - \theta^k_t\right).\] Substituting these expressions for $\dot h_t(w)$ and $\dot h^j_t (w)$ into (\ref{time_deriv_of_h}) and using the equation for $\dot h_t(w)$ given in Proposition \ref{prop_radial_Loewner_equation} shows that \begin{equation} a \sum_{k=1}^n \cot \left( h_t(w)-\theta^k_t\right) = a \sum_{k\neq j} \cot \left(h_{t}(w) - \theta^k_t\right) + h'_{t, j}(h^j_t(w)) \times a \dot \sigma^j(t)\cot\left(h^j_t(w) - \drjcommon{j}{t}\right). \end{equation} Solving for $\dot \sigma^j(t)$ and taking the limit as $w\to \gamma^j(t)$ verifies (\ref{Time_change_common_param}). \end{proof}
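As a quick sanity check, when $n = 1$ we have $g_t = g_t^1$, so $h_{t,1}$ is the identity map and \eqref{Time_change_common_param} gives
\[ \dot \sigma^1(t) = h_{t,1}'(\drjcommon{1}{t})^{-2} = 1; \]
that is, for a single curve the common parameterization coincides with the usual radial parameterization.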
The components of $\bgamma$ are not quite independent because the rate of ``exploration'' of
the path $\gamma^j$ depends on the other paths.
However, the paths are still independent in the
sense that the conditional distribution of the
remainder of the paths given $\bgamma_t$ are
independent $SLE$ paths; in the case of $\gamma^j$
it is $SLE$ in $D_t^j $ from $\gamma^j(t)$ to $0$.
We will define another process, which we will call \emph{locally independent} $SLE_\kappa$, that has the property that locally each curve grows like $SLE_\kappa$ from $\gamma^j(t)$ to $0$ in $D_t$ (rather than in $D_t^j$).
This will be done similarly to the case of a single path. Intuitively, at time $t$ each curve $\gamma^j_t$ can ``see'' $\bgamma_t$, but not the future evolution of the other curves.
Recall that
$SLE_\kappa$ in $D\subset \Disk$ is obtained from $SLE_\kappa$ in $\Disk$ by weighting by the appropriate partition function. Since the partition function is not a martingale, this is done by finding an appropriate differentiable compensator so that the product is a martingale, and then applying Girsanov's theorem.
Let
\begin{equation} \label{jan27.1}
\slepart_t^j = \slepart(D_t^j, D_t;\gamma^j(t),0), \;\;\;\;
\slepart_t = \prod_{j=1}^n \slepart_t^j,
\end{equation}
\begin{equation} \label{oct3.1} \sumcsc(\btheta_t) =
\sum_{j=1}^n \sum_{k \neq j} \csc^2(\theta^j_t - \theta^k_t), \end{equation}
\[
\stoptime=\inf \{t: \exists j\neq k \text{ such that }\gamma^j_t \cap \gamma^k_t \neq \emptyset\}.
\]
For any loop, let \[ s^j(\ell) = \inf\{t: \gamma^j(t) \in \ell\},\;\;\;\;\;
s(\ell) = \min_{j} s^j(\ell). \]
We make a simple observation that will make the ensuing
definitions valid. \begin{lemma} Let $\gamma^1,\ldots,\gamma^n$ be nonintersecting curves. Then except for a set of loops of Brownian loop measure zero, either $s(\ell) =\infty$ or there exists a unique $j$ with $s^j(\ell) = s(\ell).$ \end{lemma}
\begin{proof}[Sketch of proof] We consider excursions between the curves $\gamma^1,\ldots, \gamma^n$, that is, times $r$ such that $\ell(r) \in \gamma^k$ for some $k$ and the most recent visit before time $r$ was to a different curve $\gamma^j$. There are only a finite number of such excursions. For each one, the probability of hitting a point with the current smallest index is zero. \end{proof}
Let $\pastloops{j}{t} = \pastloops{j}{t}(\bgamma_t)$ be the set of loops $\ell$ with $s(\ell) < s^j(\ell)
\leq t$, and let \begin{equation} \label{defpastloops}
\pastloopterm_t = \pastindicator_t \, \exp \left\{ \frac{\cent}{2}
\sum_{j=1}^n m_\Disk(\pastloops{j}{t}) \right\} . \end{equation} (See Figure \ref{fig:loops}.) Here $\pastindicator_t$ is the indicator function that $\gamma^j_t \cap \gamma_t^k = \eset $ for $j \neq k $.
We note that while the definitions of $s^j$ and $s$ (and hence $\pastloops{j}{t}$) depend on the parameterization of the curve, $\pastloopterm_t$ depends only on the traces of the curves $\gamma^1_t,\ldots,\gamma^n_t$. For this reason, we could also define $\pastloopterm_\bt$ for an $n$-tuple $\bt = (t_1,\ldots,t_n)$.
\begin{proposition}\label{locally_independent}
Let $0 < \kappa \leq 4$.
If $\bgamma_t$ is independent $SLE_\kappa$ with the common parameterization, and
\begin{equation}\label{def:locindepmart}
M_t = \pastloopterm_t \, \Psi_t \, \exp\left\{ab\int_0^t \sumcsc(\btheta_s) \,ds \right\}, \end{equation}
then $M_t$ is a local martingale for $0 \leq t < \stoptime$.
If $\Prob_*$ denotes the measure obtained by
tilting $\Prob$ by $M_t$, then
\begin{equation}\label{Bessel-a}
d \theta_t^j = a \sum_{k \neq j}
\cot(\theta_t^j -\theta_t^k) \, dt + dW_t^j,
\end{equation}
where $W_t^1,\ldots,W_t^n$ are independent
standard Brownian motions with respect to
$\Prob_*$. Furthermore,
\[ \Prob_*\{\stoptime < \infty\} = 0 . \] \end{proposition}
\begin{definition}We call the $n$-tuple of curves $\bgamma_t$ under the measure $\Prob_*$ \emph{locally independent} $SLE_\kappa$. \end{definition}
The idea of the proof will be to express $M_t$ as a product of martingales
\[M_t = \prod_{j=1}^n M^j_t\] with the following property: after tilting by the martingale $M_t^j$, the curve $\gamma^j$ locally at time $t$ evolves as $SLE_\kappa$ in the domain $D_t=\Disk\setminus \bgamma_t$. The martingales $M^j_t$ are found by following the method of proof in \cite[Proposition 5]{Mohammad}. The construction shows that under $\Prob_*$, at each time $t$ the curves $\gamma^1, \ldots, \gamma^n$ are locally growing as $n$ independent $SLE_\kappa$ curves in $D_t$,
which is the reason for the name \emph{locally independent $SLE$}. Locally independent $SLE$ is revisited in \S\ref{sec:locallyindepSLE}.
\begin{proof}[Proof of Proposition \ref{locally_independent}] Since the $\drjcommon{j}{t}$ are independent standard Brownian motions under the time changes $\sigma^1, \ldots, \sigma^n$, there exist independent standard Brownian motions $B^1_t, \ldots, B^n_t$ such that \[d\drjcommon{j}{t}= \sqrt {\dot \sigma^j(t)} \,dB^j_t, \qquad j=1, \ldots, n. \] By Lemma \ref{time_change_derivative}, \[ dB^j_t=h_{t,j}' (\drjcommon{j}{t})\,d\drjcommon{j}{t} , \quad j=1, \dots, n, \] and It\^o's formula shows that each $\theta^j_t$ satisfies \[ d \theta^j_t = \dot h_{t,j}(\drjcommon{j}{t}) \, dt + \frac{h_{t,j}''(\drjcommon{j}{t})}{2 \left( h_{t,j}'(\drjcommon{j}{t}) \right)^2} \, dt + dB^j_t. \] Define \begin{equation} M^j_t=I^j_t \slepart_t^j \exp \left\{ \frac{\cent}{2} m_\Disk(\pastloops{j}{t}) \right\} \exp\left\{ ab \int _0^t \sum_{k\neq j} \csc^2 (\theta^j_s -\theta^k_s) \,ds \right\}, \quad t<T, \end{equation} so that \begin{equation} M_t=\prod_{j=1}^n M^j_t. \end{equation} Applying the method of proof of the boundary perturbation property for single slit radial $SLE$ \cite[Proposition 5]{Mohammad}, we see that $\slepart_t^j$ satisfies \[ d\slepart^j_t=\slepart^j_t \left[ -\frac{\cent}{2}\, d m_\Disk (\pastloops{j}{t}) - ab \sum_{k\neq j} \csc^2 (\theta^j_t-\theta^k_t)\, dt + \frac{b}{2} \frac{h''_{t,j} (\drjcommon{j}{t})}{h'_{t,j} (\drjcommon{j}{t})} \, d\drjcommon{j}{t}\right], \] where $d m_\Disk(\pastloops{j}{t})$ denotes the differential in $t$ of the loop term, and $M^j_t$ is a local martingale satisfying \[ dM^j_t= M^j_t \frac{b}{2} \frac{h''_{t,j} (\drjcommon{j}{t})}{h'_{t,j} (\drjcommon{j}{t})} \, d\drjcommon{j}{t}, \quad M^j_0=1. \] Since the $\drjcommon{j}{t}$ are independent, $M_t$ satisfies \[ dM_t= M_t\left[ \sum_{j=1}^n \frac{b}{2} \frac{h''_{t,j} (\drjcommon{j}{t})}{h'_{t,j} (\drjcommon{j}{t})} \, d\drjcommon{j}{t} \right]. \] Therefore, $M_t$ is a local martingale, and equation (\ref{Bessel-a}) follows by the Girsanov theorem. \end{proof}
\subsection{Dyson Brownian Motion on the Circle}
The construction of $n$-radial $SLE_\kappa$ in Section \ref{sec:n-radial} will require some results about the $n$-radial Bessel process (Dyson Brownian motion on the circle), which we state here. However, the proofs of these results are postponed until Section \ref{sec:DysonBM}, since they hold in the more general setting of $0<\kappa< 8$ and do not rely on Brownian loop measure.
A note about parameters: we state the results here using parameters $\alpha$ and $b_\alpha$ since the results hold outside of the $SLE$ setting. When we apply these results to $SLE_\kappa$ in the next section, we will set $\alpha=a=2/\kappa$ or $\alpha=2a=4/\kappa$. In particular, when $\alpha=a$, $b_\alpha=b=(3a-1)/2$.
Define
\[ F_\alpha (\btheta) = \prod_{1\leq j<k\leq n} |\sin(\theta^k - \theta^j)|^\alpha ,\;\;\;\; \stoptime=\inf \{ t: F_\alpha(\btheta_t) =0\}, \] and recall the definition of $\sumcsc(\btheta)$ from \eqref{oct3.1}. The next result will be verified in the discussion following the proof of Lemma \ref{lemma:trig_identity}.
\begin{proposition} \label{Mart_a} Let $\theta^1, \ldots, \theta^n$ be independent standard Brownian motions, and let $\alpha>0$. If \begin{equation}\label{eq:defMart_a} \begin{aligned} M_{t,\alpha}
&= F_\alpha( \btheta_t) \exp \left \{ \frac{\alpha^2 n(n^2-1)}{6}t \right \} \, \exp \left\{ \frac{\alpha-\alpha^2}{2} \int_0^t \psi( \btheta_s) ds \right \}, \quad 0\leq t<\tau, \end{aligned} \end{equation} then $M_{t,\alpha}$ is a local martingale for $0 \leq t < \tau$ satisfying \[ dM_{t,\alpha}=M_{t,\alpha}\,\sum_{j=1}^n \left( \sum_{k\neq j} \alpha \cot(\theta^j_t-\theta^k_t) \right) d\theta^j_t. \] If $\Prob_\alpha$ denotes the probability measure obtained after tilting by $M_{t,\alpha}$, then \begin{equation} \label{aBessel} d\theta^j_t=\alpha\sum_{k\neq j}\cot(\theta^j_t-\theta^k_t)\,dt + dW^j_t,\;\;\;\; 0\leq t < \tau, \end{equation} where $W^1_t, \ldots, W^n_t$ are independent standard Brownian motions with respect to $\Prob_\alpha$. Furthermore, if $\alpha \geq 1/2$, \[\mathbb P_\alpha (\tau=\infty)=1.\] \end{proposition}
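For readers who want to experiment, the SDE \eqref{aBessel} can be discretized by a standard Euler--Maruyama scheme. The following Python sketch is an illustration only (the step size, time horizon, starting configuration, and the choice $\alpha=1$ are arbitrary; the scheme is not part of the proofs):

```python
import numpy as np

def simulate_radial_bessel(theta0, alpha, dt=1e-4, steps=20000, seed=1):
    """Euler-Maruyama scheme for
    d theta^j_t = alpha * sum_{k != j} cot(theta^j_t - theta^k_t) dt + dW^j_t."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    n = len(theta)
    off = ~np.eye(n, dtype=bool)           # mask selecting k != j
    for _ in range(steps):
        diff = theta[:, None] - theta[None, :]
        cot = np.zeros_like(diff)
        cot[off] = 1.0 / np.tan(diff[off])
        drift = alpha * cot.sum(axis=1)    # alpha * sum_{k != j} cot(theta^j - theta^k)
        theta = theta + drift * dt + np.sqrt(dt) * rng.standard_normal(n)
    return theta

# four equally spaced starting angles; since alpha >= 1/2 the
# particles should repel and never collide
theta_T = simulate_radial_bessel(np.arange(4) * np.pi / 4, alpha=1.0)
```

For $\alpha \geq 1/2$ the repulsion keeps the angles distinct, in agreement with the last statement of the proposition.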
\begin{proposition} \label{prop:P2a}
Suppose that $\alpha \geq 1/4$ and
\begin{equation} \label{def:N}
N_t = N_{t,\alpha,2\alpha} = F_\alpha(\btheta_t) \,
\exp \left \{ \frac{ \alpha^2 n(n^2-1)}{2}t \right \} \exp \left\{ -\alpha b_\alpha\int_0^t \psi( \btheta_s) ds \right \}
,
\end{equation}
where $b_\alpha = (3\alpha-1)/2$. Then $N_t$ is a $\Prob_\alpha$-martingale, and the measure
obtained by tilting $\Prob_\alpha$ by $N_t$ is $\Prob_{2\alpha}$. \end{proposition}
\begin{proof}
See Proposition \ref{prop:P2a_restated} and its proof. \end{proof}
We will also require the following theorem, which is proven immediately after Proposition \ref{prop:P2a_restated}.
\begin{theorem} \label{exponentialrateofconv1}
If $\alpha \geq 1/2$, there exists $u=u(2\alpha,n) > 0$ such that
\[ {\mathbb E}^{\btheta_0}_\alpha\left[ \exp \left\{ -\alpha b_\alpha \int_0^t \psi( \btheta_s) ds \right \} \right] = e^{-2\alpha n\beta t} \, F_\alpha(\btheta_0) \,\frac{\Integral{3\alpha}}{
\Integral{4\alpha}}\,
[ 1 + O(e^{-ut})], \]
where
\begin{equation}\label{eq:defbeta}
\beta = \beta(\alpha,n) = \frac{\alpha(n^2-1)}{4},
\end{equation}
and ${\mathbb E}_\alpha $ denotes expectation with respect to $\Prob_\alpha$. \end{theorem}
\subsection{$n$-Radial $SLE_\kappa$} \label{sec:n-radial} The remainder of the section is devoted to the construction of \emph{$n$-radial $SLE_\kappa$}, which may also be called \emph{global multiple radial} $SLE_\kappa$. As we have stated before, we will consider three measures on $n$-tuples of curves with the common parameterization. \begin{itemize} \item $\Prob,{\mathbb E}$ will denote independent $SLE_\kappa$ with the common parameterization; \item $\Prob_* , {\mathbb E}_*$ will denote locally independent $SLE_\kappa$; \item $\multProb, \multE$ will denote $n$-radial $SLE_\kappa$. \end{itemize}
In Section \ref{sec:locindep}, we obtained $\Prob_*$ from $\Prob$ by tilting by a $\Prob$-local martingale $M_t$. We will obtain $\multProb$ from $\Prob_{*}$ by tilting by a $\Prob_*$-local martingale $N_{t,T}$ and then letting $T \rightarrow \infty$. Equivalently, we obtain $\multProb$ from $\Prob$ by tilting by $\tildeN{t}{T} := M_t N_{t,T}$ and letting $T \rightarrow \infty$.
Let $\loops{j}{t} = \loops{j}{t}(\bgamma_t)$ be the set of loops $\ell$ with $s(\ell) < s^j(\ell)$ and $s(\ell) \leq t$, as in Figure \ref{fig:loops}. Define \[ \loopterm_t = \indicator_t \, \exp \left\{ \frac{\cent}{2} \sum_{j=1}^n m_\Disk(\loops{j}{t}) \right\} .\] Here $\indicator_t$ is the indicator function of the event that $\gamma^j_t \cap \gamma^k_t = \eset $ for $j \neq k $.
Let \[ \tildeN{t}{T} = {\mathbb E}^{\btheta_0}\left[ \loopterm_T \mid \bgamma_t \right],\;\;\;\;
0 \leq t \leq T,\]
where the conditional expectation is with respect
to $\Prob$. By construction, $\tildeN{t}{T}$ is a martingale for $0\leq t\leq T$ with $\tildeN{0}{T} = {\mathbb E}^{\btheta_0}[\loopterm_T]$.
For the next proposition, recall that $\pastloopterm_t$ weights by loops that hit at least two curves before time $t$; the precise definition is given in (\ref{defpastloops}).
\begin{proposition}
Let $T\geq 0$. If $\bgamma_t$ is independent $SLE_\kappa, 0 < \kappa \leq 4$, with the common parameterization, then
\begin{equation} \label{condition} \tildeN{t}{T} = \pastloopterm_t \, \slepart_t \, {\mathbb E}^{\btheta_t}\left[\loopterm_{T-t}\right], \quad 0\leq t \leq T. \end{equation}
In particular, if
\[ N_{t,T} = \exp\left\{-ab\int_0^t \sumcsc(\btheta_s) \,ds
\right\} \, {\mathbb E}^{\btheta_t}\left[\loopterm_{T-t}\right], \quad 0\leq t \leq T, \]
then $N_{t,T}$ is a $\Prob_*$-martingale for $0 \leq t \leq T$,
and
\begin{equation}\label{ELoop}
{\mathbb E}^{\btheta_t}\left[\loopterm_{T-t}\right]
= {\mathbb E}^{\btheta_t}_*\left[ \exp\left\{-ab\int_0^{T-t} \sumcsc(\btheta_s) \,ds
\right\} \right].
\end{equation}
\end{proposition} Note that the expectation on the righthand side of \eqref{ELoop} is with respect to $\Prob_*$.
\begin{proof}
We may write \[ \loopterm_T= \pastloopterm_t\, \frac{\loopterm_t}{\pastloopterm_t} \, \loopterm_{T,t}, \] where \[ \loopterm_{T,t} =\exp\left\{\frac{\cent}{2}\sum_{j=1}^nm_\Disk (\ell: t<s(\ell)\leq T, s(\ell)<s^j(\ell)) \right\}. \] The term $\loopterm_{T,t}$ should be thought of as the ``future loop'' term, since it accounts for loops that hit at least two curves with the first hit occurring during $(t,T]$.
The restriction property shows that
\[ {\mathbb E}^{\btheta_0} \left[ \pastloopterm_t \left (\frac{\loopterm_t}{\pastloopterm_t}\right) \, \big \vert \, \bgamma_t \right] = \pastloopterm_t \,\slepart_t. \] Moreover, the conditional distribution on $\bgamma_T\setminus \bgamma_t$, after tilting by $\pastloopterm_t \left ({\loopterm_t}/{\pastloopterm_t}\right) $ is that of independent $SLE$ in $D_t$. Since $\loopterm_{T,t}$ depends only on $\bgamma_T\setminus \bgamma_t$,
this gives (\ref {condition}).
For the second part of the proposition, notice that \[ N_{t,T}\, M_t = \tildeN{t}{T},\]
which is a $\Prob$-martingale by construction, so $N_{t,T}$ is a $\Prob_{*}$-martingale. Since
${\mathbb E}^{\btheta_T}\left[ \loopterm_0\right]=1$, this implies that \[ N_{t,T}={\mathbb E}_*^{\btheta_0}\left[ N_{T,T} \mid \bgamma_t \right] = {\mathbb E}_*^{\btheta_0}\left[ \exp \left \{ -ab \int_0^T \sumcsc (\btheta_s)\, ds \right \} \,\vert\, \bgamma_t \right], \]
which verifies (\ref{ELoop}). \end{proof}
\begin{proposition}\label{conditional_distr_T} Let $\Prob_*^T$ denote the probability measure obtained by tilting $\Prob$ by $\tildeN{t}{T}$. Under $\Prob_*^T$, conditionally on $\hat \bgamma^j_T$, the distribution of $\gamma^j$ is $SLE_\kappa$ in $\mathbb D\setminus \hat \bgamma^j_T$. \end{proposition}
\begin{proof}
The result follows by an application of the restriction property. \end{proof}
The next result, which gives the exponential rate of convergence of ${\mathbb E}^{\btheta_0}[\loopterm_T]$, is a direct application of Theorem \ref{exponentialrateofconv1}.
\begin{corollary} \label{expconvloopversion}
There exists $u=u(2a,n)>0$ such
that as $T \rightarrow \infty$,
\[
{\mathbb E}^{\btheta_0}\left[ \loopterm_T\right] =
\frac{\Integral{3a}}{\Integral{4a}}\,
F_a(\btheta_0) \, e^{-2an \beta T}
\, \left [1 + O(e^{-uT})\right],\]
where $\beta$ is given by (\ref{eq:defbeta}).
\end{corollary}
\begin{proof}
Notice that (\ref{ELoop}) implies that
\begin{equation}\label{ELoop2}
\tildeN{0}{T}=
{\mathbb E}^{\btheta_0}\left[
\loopterm_T \right] = {\mathbb E}^{\btheta_0}_*\left[ \exp\left\{-ab\int_0^{T} \sumcsc(\btheta_s) \,ds
\right\} \right].
\end{equation}
Substituting this into Theorem \ref{exponentialrateofconv1} gives the result. \end{proof} We define the \emph{$n$-interior scaling exponent}: \begin{equation}\label{eq:n_scalingexp} \hat \beta_n = \beta -\tilde b(n-1) = \frac{4(n^2-1) + (6-\kappa)(\kappa-2)}{8\kappa}, \end{equation} where $\beta$ is defined by (\ref{eq:defbeta}).
\begin{proposition} \label{prop:tildeM}
With respect to $\Prob$,
\[ \tilde M_t := e^{2an\hat \beta_n t}\, \pastloopterm_t\,
F_a(\btheta_t)\]
is a local martingale.
If $\multProb$ denotes the measure obtained by
tilting by $\tilde M_t$, then
\begin{equation}\label{eq:SDE2a}
d \theta_t^j = 2a \sum_{k \neq j}
\cot(\theta_t^j -\theta_t^k) \, dt + dW_t^j,
\end{equation}
where $W_t^1,\ldots,W_t^n$ are independent
standard Brownian motions with respect to
$\multProb$. \end{proposition}
\begin{proof}
Comparing (\ref{Bessel-a}) and (\ref{aBessel}), we see that tilting $n$ independent Brownian motions by $M_{t, a}$ gives the SDE satisfied by the driving functions of locally independent $SLE_\kappa$.
By Proposition \ref{prop:P2a}, tilting further by $N_{t,a,2a}$ (defined in (\ref{def:N}) above) gives driving functions that satisfy (\ref{eq:SDE2a}), which is the $n$-radial Bessel equation (\ref{aBessel}) for $\alpha=2a$.
This implies that $\Prob_{2a}$ is obtained by tilting $\Prob$ by $M_t N_{t,a,2a}$.
To verify that \[
M_t N_{t,a,2a} = \tilde M_t,
\]
note that the factors $\exp\left\{\pm ab \int_0^t \sumcsc(\btheta_s)\,ds\right\}$ in $M_t$ and $N_{t,a,2a}$ cancel, and use the fact that
\[
\slepart_t=\exp\left\{ -2a\tilde b n(n-1)t \right\},
\]
which follows from conformal covariance of the partition function; the remaining exponential factors combine according to the identity $\hat \beta_n = \beta - \tilde b(n-1)$. \end{proof}
As above, let $\Prob$ denote the measure on $n$ independent radial $SLE_\kappa$ curves from $\btheta_0$ to $ 0$ with the $a$-common parameterization.
\begin{theorem}\label{maintheorem}
Let $0 < \kappa \leq 4$. Let $t>0$ be fixed. For each $T>t$, let
$\mu_{T}=\mu_{T,t}$ denote the measure
whose Radon-Nikodym
derivative with respect to $\Prob$ is
\[ \frac{ \loopterm_T}
{ {\mathbb E}^{\btheta_0}\left[ \loopterm_T\right]}.\]
Then as $T \rightarrow \infty$, the measure $\mu_{T,t}$
approaches $\multProb$ with respect to the variation distance. Furthermore, the driving functions $z^j_t=e^{2i \theta^j_t}$ satisfy
\begin{equation}
d \theta_t^j = 2a \sum_{k \neq j}
\cot(\theta_t^j -\theta_t^k) \, dt + dW_t^j,
\end{equation}
where $W^j_t$ are independent standard Brownian motions in $\multProb$. \end{theorem}
\begin{proof}
We see that
\begin{equation}
\begin{aligned}
\frac{d \mu_{T,t}}{d\Prob_t} = \frac{{\mathbb E}^{\btheta_0} \left[ \loopterm_T \mid \bgamma_t \right]}{ {\mathbb E}^{\btheta_0}\left[ \loopterm_T\right]}
&=\frac{\tildeN{t}{T}}{\tildeN{0}{T}}.
\end{aligned}\end{equation}
By Proposition \ref{prop:tildeM}, $\multProb$ is obtained by tilting $\Prob$ by $\tilde M_t$, so we compare $\tildeN{t}{T}$ to $\tilde M_t$ and apply Corollary \ref{expconvloopversion}:
\begin{equation}
\begin{aligned}
\frac{d \mu_{T,t}/d \Prob_t}{ d\multProb_{t}/d\Prob_t}
&= \frac{\tildeN{t}{T}/\tildeN{0}{T}}{ \tilde M_t / \tilde M_0} \\
&= \frac{{\mathbb E}^{\btheta_t}\left[ \loopterm_{T-t}\right] F_a(\btheta_0) }{{\mathbb E}^{\btheta_0}\left[ \loopterm_T\right] F_a(\btheta_t) \exp \left\{ \frac{a^2 n(n^2-1)}{2}t\right\} }\\
&= 1+O (e^{-u(T-t)}).
\end{aligned}
\end{equation}
Therefore,
\[
\lim_{T\to \infty} \left[
\frac{d \multProb_{t}}{d \Prob_t} \left(\frac{d \mu_{T,t}} {d\Prob_t} -\frac{d\multProb_{t}}{d \Prob_t} \right) \right]=0.
\] But $\frac{d \multProb_{t}}{d \Prob_t}$ does not depend on $T$ (since $t$ is fixed), so this implies convergence of $\mu_{T,t}$ to $\multProb_{t}$ in the variation distance.
\end{proof}
\begin{definition} Let $0<\kappa\leq 4$.
If the curves $\gamma^1, \ldots, \gamma^n$ are distributed according to $\multProb$, we call $\bgamma$ \emph{(global) $n$-radial $SLE_\kappa$}. \end{definition}
\begin{corollary}
Let $\bgamma$ be $n$-radial $SLE_\kappa$ for $0<\kappa \leq 4$. With probability one, $\bgamma$ is an $n$-tuple of simple curves. \end{corollary}
\begin{proof} By construction, $n$-radial $SLE_\kappa$ is a measure on $n$-tuples of curves that is absolutely continuous with respect to $n$ independent $SLE_\kappa$ curves. But since $0<\kappa\leq 4$, each independent $SLE_\kappa$ curve is almost surely simple. \end{proof}
To conclude this section, we remark that the results above do not address the question of continuity at $t=\infty$. Additionally, it would be natural to extend the definition of $n$-radial $SLE$ to apply to $\kappa \in (0, 8)$ by using the measure $\Prob_{2a}$ instead of $\multProb$, but we will not consider this here.
\section{Locally independent $SLE$} \label{sec:locallyindepSLE}
Here we discuss locally independent $SLE$ and explain how it arises as a limit of processes that act like ``independent $SLE$ paths in the current domain.'' For ease of exposition we treat the chordal case with $2$ paths, but the same idea works for any number of paths and for radial $SLE$. Locally independent $SLE$ is defined here for all $\kappa<8$, but when $\kappa \leq 4$ the radial version is the same as the process defined in Proposition \ref{locally_independent}.
This construction clarifies the connection between locally independent $SLE$ and commuting $SLE$ defined in \cite{Dubedat}. Intuitively, given a sequence of commuting $SLE$ increments, as the time duration of the increments goes to $0$, the curves converge to locally independent $SLE$.
Throughout this section we write $\B_t = (B_t^1,B_t^2)$ for a standard two-dimensional Brownian motion, that is, two independent one-dimensional Brownian motions. We will use the fact that $\B_t$ is H\"older continuous. We give a quantitative version here which is stronger than we need.
\begin{itemize}
\item Let $E_h$ denote the event that for all
$0 \leq t \leq 1/h$ and all $0 \leq s \leq h$,
\[ |\B_{t+s} - \B_t|
\leq s^{1/2} \, \log(1/s) . \]
Then as $h \rightarrow 0$, $\Prob(E_h^c)$ decays
faster than any power of $h$. \end{itemize}
We will define the discrete approximation using the same Brownian motions as for the continuum process; the convergence then follows from deterministic estimates coming from the Loewner equation. Since these are standard, we will not give full details. We first define the process. Let $a = 2/\kappa$.
\begin{definition} Let $\X_t =(X_t^1,X_t^2)$ be the
solution to the SDEs,
\[ dX_t^1 = \frac{a}{X_t^1 - X_t^2} \, dt
+ dB_t^1, \;\;\;\;\;
dX^2_t = \frac{a}{X_t^2 - X_t^1} \, dt
+ dB_t^2 , \]
with $X_0^1 = x_1, X_0^2 = x_2$.
Let $\tau_u = \inf\{t: |X_t^2- X_t^1| \leq u\}$,
$\tau = \tau_{0+}$. \end{definition}
Note that $Z_t := X_t^2 - X_t^1 $ satisfies \[ dZ_t = \frac{ 2a}{Z_t} \,dt + \sqrt 2 \, dW_t, \] where $W_t := (B_t^2 - B_t^1)/\sqrt 2$ is a standard Brownian motion. This is a (time change of a) Bessel process, from which we see that $\Prob\{\tau < \infty\} = 0$ if and only if $\kappa \leq 4$. If $4 < \kappa < 8$, we can continue the process for all $t < \infty$ by using reflection. We will consider only $ \kappa < 8$.
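The dichotomy at $\kappa = 4$ can be observed in a simple Monte Carlo experiment. The sketch below is illustrative only (the starting value $Z_0 = 1/2$, the step size, horizon, and sample size are arbitrary choices): it discretizes $dZ_t = (2a/Z_t)\,dt + \sqrt 2\,dW_t$ and estimates the chance of reaching $0$ before time $1$; for $\kappa = 2$ any recorded hits are discretization artifacts.

```python
import numpy as np

def hit_fraction(kappa, n_paths=400, z0=0.5, dt=1e-3, T=1.0, seed=5):
    """Fraction of Euler paths of dZ = (2a/Z) dt + sqrt(2) dW, a = 2/kappa,
    that reach 0 before time T."""
    a = 2.0 / kappa
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    hits = 0
    for _ in range(n_paths):
        z = z0
        for _ in range(steps):
            z = z + (2 * a / z) * dt + np.sqrt(2 * dt) * rng.standard_normal()
            if z <= 0:        # difference process reached 0
                hits += 1
                break
    return hits / n_paths

frac_subcritical = hit_fraction(kappa=2.0)    # kappa <= 4: Z should stay positive
frac_supercritical = hit_fraction(kappa=6.0)  # 4 < kappa < 8: Z hits 0
```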
\begin{definition} If $\kappa < 8$,
locally independent $SLE_\kappa$
is defined to be the collection of conformal maps $g_t$ satisfying the Loewner equation
\[ \p_t g_t(z) = \frac{a}{g_t(z) - X_t^1}
+ \frac{a}{g_t(z) - X_t^2} , \;\;\;\;
g_0(z) = z. \]
This is defined up to time
\[ T_z = \sup\{t: \Im[g_t(z)] > 0\}. \] \end{definition}
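As a concrete illustration of this definition (not part of the formal development; the value of $\kappa$, the step size, and the tracked point are arbitrary choices), one can run an Euler scheme for the driving SDEs together with an Euler step of the two-slit Loewner equation at a fixed point $z$ of the upper half-plane:

```python
import numpy as np

kappa = 3.0
a = 2.0 / kappa
dt = 1e-4
steps = 5000
rng = np.random.default_rng(2)

x = np.array([-1.0, 1.0])   # driving functions: X^1_0 = -1, X^2_0 = 1
z = 2.0j                    # tracked point in the upper half-plane
for _ in range(steps):
    # Euler step for dX^j = a/(X^j - X^{3-j}) dt + dB^j
    drift = np.array([a / (x[0] - x[1]), a / (x[1] - x[0])])
    x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(2)
    # Euler step for the Loewner vector field a/(g - X^1) + a/(g - X^2)
    z = z + (a / (z - x[0]) + a / (z - x[1])) * dt
```

Since each summand of the vector field has strictly negative imaginary part, $\Im[g_t(z)]$ decreases along the flow, and the tracked point is swallowed at time $T_z$.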
Locally independent $SLE_\kappa$ produces a pair of curves $\bgamma(t) = (\gamma^1(t),\gamma^2(t)).$ Note that $\hcap[\bgamma_t] = 2at$. If $ \kappa \leq 4$, then $\gamma^1_t \cap \gamma^2_t = \eset$; this is not true for all $t < \tau$ if $4 < \kappa < 8$.
Let us fix a small number $h = 1/n$ and consider the process viewed at time increments $\{kh:k=0,1,\ldots\}$. The following estimates hold uniformly for $k \leq 1/h$ on the event $E_h$. The first comes just by the definition of the SDE and the second uses the Loewner equation. Let $\Delta_k^j = \Delta_{k,h}^j = B^j_{ kh} - B_{(k-1)h}^j$. \begin{itemize}
\item If $|X_{kh}^2 - X_{kh}^1 | \geq h^{1/8}$,
then
\begin{equation} \label{1.1}
X_{ (k+1)h}^j = X_{kh}^j
+ \frac{ah}{X_{kh}^j - X_{kh}^{3-j}}
+ \Delta_{k+1}^j
+ o(h^{4/3}).
\end{equation}
\item If $\Im[g_{kh}(z)] \geq u/2$,
and $ 0 \leq s \leq h$,
\begin{equation} \label{1.2} g_{kh+s}(z) = g_{kh}(z)
+ \frac{as}{g_{kh}(z)- X_{kh}^1}
+ \frac{as}{g_{kh}(z)- X_{kh}^2}
+ o_u(h^{4/3}).
\end{equation} \end{itemize}
We will compare this process to the process which at each time $kh$ grows independent $SLE_\kappa$ paths in the current domain, increasing the capacity of each path by $h$. Let us start with the first time period in which we have independent $SLE$ paths. Again, we restrict to the event $E_h$. \begin{itemize}
\item Let $\tilde \gamma^1,\tilde \gamma^2$ be independent $SLE_\kappa$
paths starting at $x_1,x_2$ respectively with driving
function $\tilde X^j_t = B^j_t$, each run until
time $h$. To be more precise, if $\tilde g^j_t:
\Half \setminus \tilde \gamma^j_t \rightarrow \Half$
is the standard conformal transformation, then
\[ \p_t \tilde g^j_t(z)
= \frac{a}{\tilde g^j_t(z) - \tilde X^j_t}, \;\;\;\;
\tilde g^j_0(z) = z, \;\;\;\; 0 \leq t \leq h.\]
Note that $\hcap[\tilde \gamma^1_h]
= \hcap[\tilde \gamma^2_h] = ah$. Although $\hcap[\bgamma_h]
< 2ah$, if $|x_2 - x_1| \geq h^{1/8}$,
\[ \hcap[\bgamma_h] = 2ah - o(h^{4/3}).\]
This defines $\bgamma_t$ for $0 \leq t \leq h$ and
we get corresponding conformal maps
\[ \tilde g_t :
\Half \setminus \bgamma_t \rightarrow
\Half, \;\;\;\; 0 \leq t \leq h .\]
If $\Im[ z] \geq 1/2$, then
\[ \tilde g_{h}(z) = z + \frac{ah}
{z - x_1} + \frac{ah}{z-x_2}
+ o(h^{4/3}).\]
Also, by writing $\tilde g_h = \phi \circ \tilde g_h^j$,
we can show that
\[ \hat X_h^j := \tilde g_h(\tilde \gamma^j(h)) =
\tilde X_h^j + \frac{ah}{x_j - x_{3-j}}
+ o(h^{4/3}).\]
\item Recursively, given $\hat X_{kh}$
and $\tilde \gamma_t^1,
\tilde \gamma_t^2$ and $\tilde g_t$ for
$0 \leq t \leq kh$ (the definition of these quantities depends on
$h$ but we have suppressed that from the notation), let
\[ \tilde X_{kh+t}^j
= \hat X_{kh}^j + [B_{kh + t}^j -
B_{kh}^j], \;\;\;\; 0 \leq t \leq h , \]
and let
$\hat \gamma_{t,k}^1,
\hat \gamma_{t,k}^2, 0 \leq t \leq h$ be independent $SLE_\kappa$
paths with driving functions $\tilde X_{kh+t}^j$.
For $j=1,2$, define
\[ \tilde \gamma_{kh+t}^j
= \tilde g_{kh}^{-1} [\hat \gamma_{t,k}^j],
\;\;\;\; 0 \leq t \leq h.\]
This defines $\tilde \bgamma_{kh+t}, 0 \leq t \leq h$
and $\tilde g_{kh+t}: \Half \setminus \bgamma_{kh+t}
\rightarrow \Half$ is defined as before. Set
\[ \hat X_{(k+1)h}^j = \tilde g_{(k+1)h}(\tilde{\gamma}^j((k+1)h)).\]
Note that if $|\hat X_{kh}^2 - \hat X_{kh}^1|
\geq h^{1/8}$, then
\begin{equation} \label{1.3} \hat X_{(k+1)h}^j
= \hat X_{kh}^j + \Delta^j_{k+1}
+ \frac{ah}{ \hat X_{kh}^j - \hat X_{kh}^{3-j}} + o(h^{4/3}). \end{equation}
Also, if $\Im[\tilde g_{kh}(z)] \geq u/2$,
\begin{equation} \label{1.4}
\tilde g_{(k+1)h}(z) =
\tilde g_{kh}(z) + \frac{ah}{ \tilde g_{kh}(z)
- \hat X_{kh}^1 } + \frac{ah}{ \tilde g_{kh}(z)
- \hat X_{kh}^2 }
+ o_u(h^{4/3}).
\end{equation}
\item If at any time $\hat \gamma_{h,k}^1
\cap \hat \gamma_{h,k}^2 \neq \eset$, this procedure
is stopped. \end{itemize}
Note that we are using the same Brownian motions as we used before.
\begin{proposition}
With probability one, for all $t < \tau$ and all
$z \in \Half \setminus \bgamma_t$,
\[ \lim_{h \downarrow 0} \tilde g_t(z)
= g_t(z) . \] \end{proposition}
\begin{proof} We actually prove more. Let
\[ K(u,h) =
\sup\left\{|\tilde g_t(z) - g_t(z)|:
\Im[g_t(z)] \geq u, t \leq \tau_u \wedge u^{-1}
\right\}. \]
Then for each $u > 0$, with probability one,
\[ \lim_{h \downarrow 0} K(u,h) = 0 . \]
We fix $u$ and allow constants to depend on $u$
and assume that $\Im[g_t(z)] \geq u$.
Let
\[ \Theta_k = \max_{r \leq k }\, \max_{j=1,2}
|X_{rh}^j - \hat X_{rh}^j| . \]
Then \eqref{1.1} and \eqref{1.3} imply that
\[ \Theta_{k+1} \leq \Theta_k [1 + O(h)] +
O(h^{4/3} ) , \]
or if $\hat \Theta_k = \Theta_k + k \, h^{4/3}, $
then
$ \hat \Theta_{k+1} \leq \hat \Theta_k[1 + O(h)]
.$ This shows that $\hat \Theta_k$ is bounded
for $k \leq (hu)^{-1}$ and hence
\begin{equation} \label{1.5}
\Theta_k \leq c h^{1/3}, \;\;\;\;
k \leq (hu)^{-1}.
\end{equation}
We now let
\[ D_k = \max_{r \leq k}
|g_{rh}(z) - \tilde g_{rh}(z) | , \]
and see that \eqref{1.2}, \eqref{1.4}, and \eqref{1.5}
imply
\[ D_{k+1} \leq D_k [1 + O(h)] +
O(h^{4/3} ) , \]
which then gives
\[ D_k \leq c h^{1/3}, \;\;\;\;
k \leq (hu)^{-1}.\]
Note that for $kh \leq t \leq (k+1)h$,
\[ g_t(z) = g_{kh}(z) +O(h), \;\;\;\;
\tilde g_t(z) = \tilde g_{kh}(z) + O(h), \]
and hence for all $t \leq \tau_u \wedge u^{-1}$
\[ \tilde g_t(z) = g_t(z) + O(h^{1/3}).\]
\end{proof}
\section{$n$-Radial Bessel process}\label{sec:DysonBM}
In this section we study
the process that we call the $n$-particle radial Bessel process. The image of this process under the
map $z \mapsto e^{2iz}$ will be called Dyson
Brownian motion on the circle. We fix an integer $n\geq 2$ and allow constants to depend on $n$. Let ${\mathcal X}'
= {\mathcal X}_n' $ be the torus $[0, \pi)^n$ with periodic boundary conditions
and ${\mathcal X}= {\mathcal X}_n$ the set of $ \btheta=(\theta^1, \ldots, \theta^n)\in {\mathcal X}'$ such that we can find representatives with \begin{equation} \label{nyd.2} \theta^1<\theta^2< \cdots < \theta^n <\theta^{n+1}:= \theta^1 + \pi. \end{equation} Let $\mathcal X^*_n$ be the set of $\bz = (z^1,\ldots,z^n)$ with $z^j = \exp \{2i\theta^j\}$ and $\btheta \in \mathcal X$. In other words, $\mathcal X^*_n$ is the set of $n$-tuples of distinct points on the unit circle ordered counterclockwise (with a choice of a first point). Note that $\abs{z^j-z^k}=2 \abs{\sin(\theta^k-\theta^j)}$. We let \[ \begin{aligned} \psi( \btheta) &= \sum_{j=1}^n \sum_{k\neq j} \csc^2(\theta^j-\theta^k) = 2 \sum_{1\leq j<k\leq n} \csc^2(\theta^j-\theta^k),
\\ F(\btheta)& = \prod_{1\leq j<k\leq n} |\sin(\theta^k - \theta^j)|
= 2^{-n(n-1)/2} \prod_{1\leq j<k\leq n} \abs{z^k-z^j},
\label{nyd.1} \\ F_\alpha(\btheta)&=F(\btheta)^\alpha \\ d( \btheta)&=\min_{1\leq j\leq n} \abs{\sin(\theta^{j+1}-\theta^j)} \\ f_\alpha(\btheta)& = \Integral{\alpha}^{-1} \, F_\alpha(\btheta), \;\;\;\; \Integral{\alpha} =
\int_{\mathcal X} F_\alpha(\btheta) \, d \btheta. \end{aligned} \]
Here $d \btheta$ denotes integration with respect to Lebesgue measure restricted to ${\mathcal X}$. \begin{remark} We choose to represent points $z^j$ on the unit circle as $\exp\{2i\theta^j\}$ (rather than $\exp\{i\theta^j\}$) because the relation \[ F_\alpha(\btheta) = 2^{-\alpha n(n-1)/2} \prod_{1\leq j<k\leq n} \abs{z^k-z^j}^\alpha,\] makes it easy to relate measures on ${\mathcal X}_n$ with measures that arise in random matrices. (See, for example, Chapter 2 of \cite{Forrester} for the distribution of the eigenvalues of the circular $\beta$-ensemble.) Note that if $\theta^1,\ldots,\theta^n$ are independent standard Brownian motions, then $z^1,\ldots,z^n$ are independent driftless Brownian motions on the circle with variance parameter $4$.
\end{remark}
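The translation between the angular coordinates $\btheta$ and the circular coordinates $\bz$ can be checked mechanically. The following Python snippet (a sanity check only; the sample point and the exponent $\alpha = 3/2$ are arbitrary) verifies $\abs{z^j - z^k} = 2\abs{\sin(\theta^k - \theta^j)}$ and the displayed product formula for $F_\alpha$:

```python
import numpy as np

def F_alpha(theta, alpha):
    """F_alpha(theta) = prod_{j<k} |sin(theta^k - theta^j)|^alpha."""
    diffs = [abs(np.sin(theta[k] - theta[j]))
             for j in range(len(theta)) for k in range(j + 1, len(theta))]
    return float(np.prod(diffs)) ** alpha

theta = np.array([0.3, 0.9, 1.7, 2.6])   # increasing, theta^4 - theta^1 < pi
z = np.exp(2j * theta)                   # z^j = exp(2 i theta^j)
n, alpha = len(theta), 1.5

# chord length |z^j - z^k| versus 2 |sin(theta^k - theta^j)|
chord_ok = all(
    np.isclose(abs(z[k] - z[j]), 2 * abs(np.sin(theta[k] - theta[j])))
    for j in range(n) for k in range(j + 1, n))

# F_alpha(theta) = 2^{-alpha n(n-1)/2} prod_{j<k} |z^k - z^j|^alpha
prod_z = np.prod([abs(z[k] - z[j]) for j in range(n) for k in range(j + 1, n)])
F_from_z = 2.0 ** (-alpha * n * (n - 1) / 2) * float(prod_z) ** alpha
```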
We will use the following trigonometric identity.
\begin{lemma} \label{lemma:trig_identity}
If $\btheta\in \mathcal X_n$,
\begin{equation}\label{cotsum}
\sum_{j=1}^n \left( \sum_{k\neq j} \cot(\theta^j -\theta^k) \right)^2 = \psi( \btheta)-\frac{n(n^2-1)}{3}.
\end{equation} \end{lemma}
\begin{proof}
We first note that if $x,y,z$ are distinct points in $[0,\pi)$, then \begin{equation}\label{apr5.1}
\cot(x - y) \, \cot (x-z)
+ \cot (y-x) \, \cot(y-z)
+ \cot(z-x) \, \cot(z-y )
= - 1.
\end{equation} Indeed, without loss of generality, we may assume that $0 = x < y < z$, in which case the lefthand side is \[ \cot(y-z) \,[\cot y - \cot z] + \cot y \, \cot z,
\] which equals $-1$ using the sum formula \[ \cot(y-z) = \frac{\cot y \, \cot z + 1}{\cot z - \cot y}.\]
When we expand the square on the lefthand side of \eqref{cotsum} we get the sum of two terms, \begin{equation} \label{jan2.1}
\sum_{j=1}^n \sum_{k \neq j } \cot^2 (\theta^j - \theta^k) \end{equation} and \begin{equation} \label{jan2.2}
\sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n \, 1\{j \neq k, j \neq l, k \neq l \}
\cot(\theta^j - \theta^k) \, \cot (\theta^j - \theta^l) . \end{equation}
Using $\cot^2 y + 1 = \csc^2 y$,
we see that \eqref{jan2.1} equals $ \psi( \btheta) - n(n-1) $. We write \eqref{jan2.2} as $2$ times
\[ \sum
\left[\cot(\theta^j - \theta^k) \, \cot (\theta^j - \theta^l)
+ \cot(\theta^k - \theta^j) \, \cot (\theta^k - \theta^l) + \cot(\theta^l - \theta^j) \, \cot (\theta^l - \theta^k) \right], \] where the sum is over all $3$-element subsets $\{j,k,l\}$ of $\{1,\ldots,n\}$. Using \eqref{apr5.1}, we see that \eqref{jan2.2} equals \[ -2 \, \binom{n}{3} = -\frac{n(n-1)(n-2)}{3} . \] Therefore, the lefthand side of \eqref{cotsum} equals \[ \psi( \btheta) - n(n-1)-\frac{n(n-1)(n-2)}{3} = \psi( \btheta) -\frac{n(n^2-1)}{3}. \] \end{proof}
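The identity \eqref{cotsum} is easy to spot-check numerically. The snippet below (an independent sanity check, not part of the proof; the configuration is randomly generated subject to \eqref{nyd.2}) compares the two sides at a point of $\mathcal X_5$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
gaps = rng.uniform(0.05, 0.2, size=n)
theta = np.cumsum(gaps)      # theta^1 < ... < theta^n with theta^n - theta^1 < pi

diff = theta[:, None] - theta[None, :]
off = ~np.eye(n, dtype=bool)             # mask selecting k != j

# left side: sum_j ( sum_{k != j} cot(theta^j - theta^k) )^2
lhs = sum((1.0 / np.tan(diff[j][off[j]])).sum() ** 2 for j in range(n))
# right side: psi(theta) - n(n^2 - 1)/3
psi = (1.0 / np.sin(diff[off]) ** 2).sum()
rhs = psi - n * (n ** 2 - 1) / 3
```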
We will let $ \btheta_t=(\theta^1_t, \ldots, \theta^n_t)$ be a standard $n$-dimensional Brownian motion in $\mathcal X'$ starting at $ \btheta_0 \in \mathcal X$ and stopped at \[ T=\inf\{ t: \btheta_t\not\in \mathcal X \}=\inf \{ t: d(\btheta_t)=0\}, \] defined on the filtered probability space $(\Omega, \mathcal F_t, \BMProb)$.
Differentiation using \eqref{cotsum} shows that \[ \begin{aligned} \partial_j F_\alpha( \btheta)&=F_\alpha(\btheta) \sum_{k\neq j}\alpha \cot(\theta^j-\theta^k), \\ \partial_{jj}F_\alpha( \btheta)&=F_\alpha( \btheta) \left[ \left( \sum_{k\neq j}\alpha \cot(\theta^j-\theta^k) \right)^2 -\alpha \sum_{k\neq j} \csc^2(\theta^j-\theta^k) \right], \\ \Delta F_\alpha( \btheta) &= F_\alpha( \btheta)\left[ -\frac{\alpha^2n(n^2-1)}{3} + (\alpha^2-\alpha)\psi(\btheta) \right]. \end{aligned} \] Hence, if we define \begin{equation}\label{Mart} \begin{aligned} M_{t,\alpha}:&= F_\alpha( \btheta_t) \exp \left \{-\frac{1}{2} \int_0^t \frac{\Delta F_\alpha(\btheta_s)}{F_\alpha( \btheta_s)}ds \right\} \\ &= F_\alpha( \btheta_t) \exp \left \{ \frac{\alpha^2 n(n^2-1)}{6}t \right \} \, \exp \left\{ \frac{\alpha-\alpha^2}{2} \int_0^t \psi( \btheta_s) ds \right \}, \quad t<T, \end{aligned} \end{equation} then $M_{t,\alpha}$ is a local martingale for $0 \leq t < T$
satisfying \[ dM_{t,\alpha}=M_{t,\alpha}\,\sum_{j=1}^n \left( \sum_{k\neq j} \alpha \cot(\theta^j_t-\theta^k_t) \right) d\theta^j_t. \] We will write $\mathbb P_\alpha,{\mathbb E}_\alpha$ for the probability measure obtained after tilting $\BMProb $ by $M_{t,\alpha}$ using the Girsanov theorem. Then \begin{equation} \label{eve.2} d\theta^j_t=\alpha \sum_{k\neq j}\cot(\theta^j_t-\theta^k_t)\,dt + dW^j_t,\;\;\;\; t < T, \end{equation} for independent standard Brownian motions $W^1_t, \ldots, W^n_t$ with respect to $\mathbb P_\alpha$. If $\alpha\geq 1/2$, comparison with the usual Bessel process shows that $\mathbb P_\alpha(T=\infty)=1$. In particular, $M_{t,\alpha}$ is a martingale and $\mathbb P_\alpha \ll {\BMProb }$ on $\mathcal F_t$ for each $t$. (It is not true that ${\BMProb }\ll \mathbb P_\alpha$ since ${\BMProb } \{ T<t \}>0$.)
This leads to the following definitions.
\begin{definition}
The $n$-radial Bessel process with parameter $\alpha$ is the process satisfying \eqref{eve.2} where $W_t^1,\ldots, W_t^n$ are independent Brownian motions.
\end{definition}
\begin{proposition} If $\btheta_t$ satisfies \eqref{eve.2} and $\tilde \btheta_t = \btheta_{t/n}$, then $\tilde \btheta_t$ satisfies \begin{equation} \label{eve.2.new}
d\tilde \theta^j_t = \frac \alpha n \sum_{k\neq j}\cot( \tilde \theta^j_t- \tilde \theta^k_t)\,dt + \frac{1}{\sqrt n}\, d\tilde W^j_t, \;\;\;\; t < \tilde T, \end{equation} where $\tilde W_t^1,\ldots,\tilde W_t^n$ are independent Brownian motions and $\tilde T = nT$.
\end{proposition}
We also refer to a process satisfying \eqref{eve.2.new} as the $n$-radial Bessel process. If $n=2$, $\tilde \theta_t^1, \tilde \theta_t^2$ satisfy \eqref{eve.2.new} and \[ X_t = \tilde \theta_t^2 - \tilde \theta_t^1, \;\;\;\;
B_t = \frac{1}{\sqrt 2} \, [\tilde W_t^2 - \tilde W_t^1],\]
then $B_t$ is a standard Brownian motion and $X_t$
satisfies \[ dX_t = \alpha \, \cot X_t \, dt + d B_t. \] This equation is called the radial Bessel equation.
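The one-dimensional equation is convenient for simulation. The Python sketch below (illustrative only; the parameter $\alpha = 2$, step size, and horizon are arbitrary choices) uses an Euler--Maruyama step; for $\alpha \geq 1/2$ the drift $\alpha \cot x$, which blows up at $0$ and $\pi$, keeps the process in $(0,\pi)$:

```python
import numpy as np

def radial_bessel_path(alpha, x0=np.pi / 2, dt=1e-3, steps=5000, seed=7):
    """Euler-Maruyama sketch of dX_t = alpha * cot(X_t) dt + dB_t on (0, pi)."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        drift = alpha / np.tan(x[i])   # alpha * cot(x)
        x[i + 1] = x[i] + drift * dt + np.sqrt(dt) * rng.standard_normal()
    return x

path = radial_bessel_path(alpha=2.0)
```

In long runs the empirical distribution of the path should approximate the invariant density proportional to $\sin^{2\alpha} x$ (compare Proposition \ref{prop:invdensity} with $n=2$).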
\begin{proposition} Let $p_{t,\alpha}(\btheta,\btheta')$ denote the transition density for the system \eqref{eve.2}. Then for all $t$ and all $\btheta, \btheta'$, \begin{equation} \label{jan24.1}
p_{t,\alpha}(\btheta,\btheta') = \frac{F_{2\alpha}(\btheta')} {F_{2\alpha}(\btheta)} \, p_{t,\alpha} \, (\btheta',\btheta). \end{equation} \end{proposition}
\begin{proof} Let $p_t = p_{t,0}$ be the transition density for independent Brownian motions killed at time $T$. Fix $t,\btheta,\btheta'$. Let $\gamma:[0,t] \rightarrow {\mathcal X}$ be any curve with $\gamma(0) = \btheta, \gamma(t) = \btheta'$ and note that the Radon-Nikodym derivative of $\Prob_\alpha$ with respect to $\BMProb$ evaluated at $\gamma$ is \[ Y(\gamma):= \frac{F_{\alpha}(\btheta ')}{F_{\alpha}(\btheta)}
\,A_t(\gamma) , \;\;\;\; A_t(\gamma) = \exp
\left \{ \frac{\alpha^2 n(n^2-1)}{6}t \right \} \, \exp
\left\{\frac{\alpha-\alpha^2}{2} \int_0^t \psi(\gamma(s)) \, ds
\right\}. \]
If $\gamma^R$ is the reversed path, $\gamma^R(s) = \gamma(t-s)$, then $A_t(\gamma^R) = A_t(\gamma)$ and hence
\[ Y(\gamma^R) = \frac{F_{\alpha}(\btheta )}{F_{\alpha}(\btheta')} \, A_t(\gamma). \] Therefore, \[ \frac{Y(\gamma)}{Y(\gamma^R)} = \frac{F_{\alpha}(\btheta ')^2}{F_{\alpha}(\btheta)^2} =
\frac{F_{2\alpha}(\btheta ')}{F_{2\alpha}(\btheta)}.\] Since the reference measure $\BMProb$ is time reversible and the above holds for every path, \eqref{jan24.1} holds.
\end{proof}
\begin{proposition} If $\alpha \geq 1/2$ and
$\btheta_t$ satisfies \eqref{eve.2}, then with probability one $T = \infty$. \end{proposition}
\begin{proof} This follows by comparison with a usual Bessel process; we will only sketch the proof. Suppose $\Prob\{T < \infty\} >0$. Then there would exist $j < k$ such that with positive probability $\theta_T^j = \theta_T^k$ but $\theta_T^{j-1} < \theta_T^j$ and $\theta_T^{k+1} > \theta_T^k$ (here we are using ``modular arithmetic'' for the indices $j,k$ in our torus). If $k = j+1$, then by comparison to the process \[ dX_t = \left(\frac{\alpha}{X_t} - r\right) \, dt + dB_t, \] one can see that this cannot happen. If $k > j+1$, we can compare to the Bessel process obtained by removing the points with indices $j+1$ to $k-1$. \end{proof}
\begin{proposition} \label{prop:invdensity} If $\alpha \geq 1/2$, then the invariant density for \eqref{eve.2} is $f_{2\alpha}$. Moreover, there exists $u>0$ such that for all $\btheta, \btheta'$, \begin{equation} \label{eq:expconv}
p_{t,\alpha}(\btheta',\btheta) = f_{2\alpha}(\btheta)
\, \left[1 + O(e^{ -ut})\right]. \end{equation} \end{proposition}
\begin{proof} The fact that $f_{2\alpha}$ is invariant follows from \[ \begin{aligned} \int p_{t,\alpha}(\btheta,\btheta') \, f_{2\alpha}(\btheta)
\, d\btheta &=
\int p_{t,\alpha}(\btheta',\btheta)\, \frac{F_{2\alpha}(\btheta')}
{F_{2\alpha}(\btheta)} \, f_{2\alpha}(\btheta) \,d\btheta
\\ &= f_{2\alpha}(\btheta') \int p_{t,\alpha}(\btheta',\btheta)
\,d\btheta =
f_{2\alpha}(\btheta'). \end{aligned} \]
Proposition \ref{bounds} below shows that there exist $0<c_1<c_2<\infty$ such that for all $\bx, \by \in \mathcal X$, \begin{equation}\label{bounds1} c_1 F_{2\alpha}(\by) \leq p_1(\bx, \by) \leq c_2 F_{2\alpha}(\by). \end{equation} The proof of this fact is the subject of \S\ref{subsec:exp_rate_of_conv}. The exponential rate of convergence (\ref{eq:expconv}) then follows by a standard coupling argument (see, for example, \S4 of \cite{Lawler_Minkowski_RealLine}).
\end{proof}
\begin{proposition} \label{prop:P2a_restated}
Suppose $\alpha \geq 1/4$ and
\[ N_{t,\alpha,2\alpha} = F_\alpha(\btheta_t) \,
\exp \left \{ \frac{ \alpha^2 n(n^2-1)}{2}t \right \} \exp \left\{ -\alpha b_{\alpha}\int_0^t \psi( \btheta_s) ds \right \}
,\]
where $b_\alpha = (3\alpha-1)/2$. Then $N_{t, \alpha, 2\alpha}$ is a $\Prob_\alpha$-martingale, and the measure
obtained by tilting $\Prob_\alpha$ by $N_{t,\alpha,2\alpha}$ is $\Prob_{2\alpha}$. \end{proposition}
\begin{proof}
Note that \[ \begin{aligned} M_{t,2\alpha}:&= F_{2\alpha}( \btheta_t) \exp \left \{ \frac{(2\alpha)^2 n(n^2-1)}{6}t \right \} \exp \left\{ \frac{2\alpha-(2\alpha)^2}{2} \int_0^t \psi( \btheta_s) ds \right \} \\ & = M_{t,\alpha} \, N_{t,\alpha,2\alpha}. \end{aligned} \] Indeed, with $M_{t,\alpha}$ of the same form with $\alpha$ in place of $2\alpha$, dividing gives the time exponent $[(2\alpha)^2 - \alpha^2]\, n(n^2-1)/6 = \alpha^2 n(n^2-1)/2$ and the $\psi$ exponent \[ \frac{[2\alpha-(2\alpha)^2] - [\alpha - \alpha^2]}{2} = -\frac{\alpha(3\alpha-1)}{2} = -\alpha b_\alpha, \] matching the definition of $N_{t,\alpha,2\alpha}$. Since $M_{t,\alpha},M_{t,2\alpha}$ are both local martingales, we see that $N_{t,\alpha,2\alpha}$ is a local martingale with respect to $\Prob_\alpha$. Also, the measure induced by ``tilting first by $M_{t,\alpha}$ and then by $N_{t,\alpha,2\alpha}$'' is the same as the measure obtained by tilting by $M_{t,2\alpha}$. Since $2\alpha \geq 1/2$, we see that with probability one, $T = \infty$ under the new measure, from which we conclude that $N_{t,\alpha,2\alpha}$ is a martingale. \end{proof}
We now prove Theorem \ref{exponentialrateofconv1}.
\begin{proof}[Proof of Theorem \ref{exponentialrateofconv1}]
Using the last two propositions, we see that \begin{eqnarray*} {\mathbb E}^\btheta_\alpha\left[ \exp \left\{ -\alpha b_\alpha\int_0^t \psi( \btheta_s) ds \right \} \right]& = &
\exp \left \{ \frac{ - \alpha^2 n(n^2-1)}{2}t \right \}
\, {\mathbb E}^\btheta_\alpha[N_{t,\alpha,2\alpha} \, F_{-\alpha}(\btheta_t)]\\ & = & \exp \left \{ \frac{ - \alpha^2 n(n^2-1)}{2}t \right \}
\,F_\alpha(\btheta) \, {\mathbb E}_{2 \alpha}^\btheta\left [ F_{-\alpha}(\btheta_t)\right ]\\ & = & e^{-2\alpha n\beta t} \, F_\alpha(\btheta) \,\frac{\Integral{3\alpha}}{ \Integral{4\alpha}}\,
[ 1 + O(e^{-ut})] . \end{eqnarray*}
In particular, the last equality follows by applying Proposition \ref{prop:invdensity} with $2\alpha$ in place of $\alpha$. \end{proof}
Setting $\alpha=a$, we can write the result as \[ e^{2 n (n-1)\tilde b t} \, {\mathbb E}^\btheta_a\left[ \exp \left\{ -ab\int_0^t \psi( \btheta_s) ds \right \} \right] = e^{-2an\hat \beta_n t} \, F_a(\btheta) \,\frac{\Integral{3a}}{ \Integral{4a}}\,
[ 1 + O(e^{-ut})] \]
where
\[ \hat \beta_n = \beta - \tilde b (n-1)
\]
is the $n$-interior scaling exponent, as in (\ref{eq:n_scalingexp}).
\subsection{Rate of convergence to invariant density}\label{subsec:exp_rate_of_conv}
It remains to verify the bounds (\ref{bounds1}) used in the proof of Proposition \ref{prop:invdensity}. While related results have appeared elsewhere, including \cite{ErdosYauBook}, we have not found Proposition \ref{bounds} in the literature, and so we provide a full proof here. This is a sharp pointwise result; however, unlike
results coming from random matrices, the constants
depend on $n$ and we prove no uniform result
as $n \rightarrow \infty$. Our argument
uses the general idea of a ``separation lemma'' (originally \cite{Lawler_cutpoints}).
We consider the $n$-radial Bessel process given by the system (\ref{eve.2}) with $\alpha \geq 1/2$. By Proposition \ref{prop:invdensity} its invariant density is $f_{2\alpha}$, so that for $\bx, \by \in \mathcal X$, \[ \frac{p_t(\bx, \by)}{F_{2\alpha}(\by)}=\frac{p_t(\by, \bx)}{F_{2\alpha}(\bx)}. \] We will prove the following.
\begin{proposition}\label{bounds}
For every positive integer $n$ and $\alpha \geq 1/2$,
there exist $0<c_1<c_2<\infty$, such that for all $\mathbf x, \mathbf y\in \mathcal X$,
\[
c_1 F_{2\alpha}(\mathbf y)\leq p_1(\mathbf x, \mathbf y)\leq c_2 F_{2\alpha}(\mathbf y).
\] \end{proposition}
For the remainder of this section, we fix $n$ and $\alpha \geq 1/2$ and allow constants to depend on $n,\alpha$.
We will let $\BMtrans_t (\bx, \by)$ denote the transition density for independent Brownian motions killed upon leaving $ \mathcal X$. If $U \subset \mathcal X$ we will write $p_t({\bf x},{\bf y}; U)$ or $p_t({\bf x},{\bf y}; \overline U)$ for the density of the $n$-radial Bessel process killed upon leaving $U$; we write $\tilde p_t({\bf x},{\bf y}; U)$ or $ \tilde p_t({\bf x},{\bf y}; \overline U)$ for the analogous densities for independent Brownian motions. Then we have \begin{equation}\label{densities} p_t(\bx, \by;U)=\BMtrans_t(\bx, \by;U) \; \frac{M_{t,\alpha}}{M_{0,\alpha}}. \end{equation} We can use properties of the density of Brownian motion to conclude analogous properties for $p_t(\bx, \by)$. For example, we have the following: \begin{itemize} \item For every open $U$ with $\overline U \subset {\mathcal X}$ and every $t_0$, there exists $C = C(U,t_0)$ such that \begin{equation} \label{covid.3}
C^{-1} \,\tilde p_t({\bf x},{\bf y};U) \leq p_t({\bf x},{\bf y};U)\leq C\,\tilde p_t({\bf x},{\bf y};U), \;\;\; 0 \leq t \leq t_0. \end{equation} \end{itemize} Indeed, $M_{t,\alpha}/M_{0,\alpha}$ is uniformly bounded away from $0$ and $\infty$ for $t \leq t_0$ and paths staying in $\bar U$. Another example is the following easy lemma, which we will use in the next lemma.
\begin{lemma} \label{extralemma}
Suppose that $0 \leq \theta_0^1 < \theta_0^2
< \cdots < \theta_0^n < \theta_0^{n+1}$ where $\theta_t^{n+1} = \theta_t^1 + \pi$. For every $r > 0$, there exists $q >0$ such that the following holds. If $\epsilon \leq 1/(8n)$ and $\theta_0^{j+1} - \theta_0^j \geq r \, \epsilon$ for $j=1,\ldots,n$, then with probability at least $q$ the following holds: \[ \theta_t^{j+1} - \theta_t^j\geq 2\epsilon, \;\;\;\;\; \epsilon^2/2 \leq t \leq \epsilon^2, \;\;\; j=1,\ldots,n,\] \[ \theta_t^{j+1} - \theta_t^j \geq r\epsilon/2, \;\;\;\;\; 0\leq t \leq \epsilon^2, \;\;\; j=1,\ldots,n,\]
\[ |\theta_t^j - \theta_0^j| \leq 4n\epsilon, \;\;\;0\leq t \leq \epsilon^2, \;\;\;
j=1,\ldots,n.\]
\end{lemma}
\begin{proof} If the $\theta_t^j$ were independent Brownian motions, then scaling shows that the probability of the event is independent of $\epsilon$, and it is easy to see that it is positive. Also, on this event $M_{t,\alpha}/M_{0,\alpha}$ is bounded away from $0$, uniformly in $\epsilon$. \end{proof}
The next lemma shows that there is a constant $\delta > 0$ such that from any initial configuration, with probability at least $\delta$ all the points are separated by $2\epsilon$ by time $\epsilon^2$.
\begin{lemma}\label{double_distance}
There exists $\delta>0$ such that if $0<d(\btheta_0)\leq \epsilon<\delta$, then
\[
\mathbb P\{ d(\btheta_{\epsilon^2})\geq 2\epsilon \}\geq \delta.
\]
Moreover, if $\tau=\tau_\epsilon=\inf\{ t: d(\btheta_t)=2\epsilon \}$, then for all positive integers $k$,
\[
\mathbb P\{\tau>k\epsilon^2\}\leq(1-\delta)^{k}.
\] \end{lemma}
\begin{proof} The second inequality follows immediately from the first and the Markov property. We will prove a slightly stronger version of the first inequality. Let \[ Y_t = \max_{j=1,\ldots,n}
\max_{0 \leq s \leq t} |\theta_s^j - \theta^j_0|.\] Then we will show that there exists
$\delta_n$ such that \begin{equation} \label{covid.2}
\Prob\{d(\btheta_{\epsilon^2})\geq 2\epsilon;
Y_{\epsilon^2} \leq \delta_n^{-1} \, \epsilon \}
\geq \delta_n.
\end{equation}
We have put the explicit $n$ dependence on $\delta_n$
because our argument will use induction on $n$. Without loss of generality we assume that \[
0 = \theta^1<\theta^2<\cdots <\theta^n<\pi \] and $ \pi - \theta^n \geq \theta^j - \theta^{j-1}$ for $
j = 2,\ldots,n.$
For $n=2$,
$\theta_t^2 - \theta_t^1$ is a radial Bessel process which
for small $\epsilon$ is very closely approximated by a
regular Bessel process. Either by using the explicit
transition density or by scaling, we see that
there exists $c_1 > 0$ such that
for all $\epsilon \leq 1$, if $\theta^2 - \theta^1 \leq \epsilon$,
\[ \Prob\{ \theta_{\epsilon^2}^2 - \theta_{\epsilon^2}^1 \geq 2\epsilon \; ; \; \theta_{t}^2 - \theta _t^1 \leq4 \epsilon
\mbox { for } 0 \leq t \leq \epsilon^2 \} \geq c_1. \]
Let $A_\epsilon $ denote the event
\[ A_\epsilon = \{ \theta_{\epsilon^2}^2 - \theta_{\epsilon^2}^1 \geq 2\epsilon \; ; \; \theta_{t}^2 - \theta _t^1 \leq4 \epsilon
\mbox { for } 0 \leq t \leq \epsilon^2 \;;\;
|W_t^1| , |W_t^2| \leq u \, \epsilon, \; 0 \leq t \leq \epsilon^2\},\]
where $u$ is chosen so
that
\[ \Prob\left\{\max_{0 \leq t \leq 1} |W_t^j| \geq u\right\} = c_1/4.\] Then $\Prob(A_\epsilon) \geq c_1/2$.
Since
\[ \theta_t^j = \theta_0^j + \alpha \int_0^t \cot(\theta^j_s - \theta^{3-j}_s)
\, ds + W_t^j, \]
we see that
\[ 2\alpha \int_0^{\epsilon^2} \cot(\theta^2_s - \theta^1_s)
\, ds =
\theta_{\epsilon^2}^2 - \theta_{\epsilon^2}^1 - (\theta_0^2 - \theta_0^1) - W_{\epsilon^2}^2 + W_{\epsilon^2}^1 \leq (4 + 2u)\, \epsilon,\] and for $0 \leq t \leq \epsilon^2$,
\[ |\theta_t^j - \theta_0^j| \leq \alpha \int_0^t |\cot(\theta^j_s - \theta^{3-j}_s)|
\, ds + |W_t^j| \leq \left(2 + 2u \right) \epsilon. \] This establishes \eqref{covid.2} for $n=2$.
We now assume that \eqref{covid.2} holds for all $j < n$.
We claim that it suffices to prove that there
exists $\delta_n > 0$ such that if $d(\btheta_0) \leq \epsilon < \delta_n$, then \begin{equation} \label{covid.1}
\Prob\{{\tau_\epsilon} \leq \delta^{-2}_n \, \epsilon^2,
Y_{\tau_\epsilon} \leq \delta_n^{-1} \, \epsilon
\} \geq \delta_n .
\end{equation}
If
we apply \eqref{covid.1} to $\tau_{\delta_n n/2}$
we can use Lemma \ref{extralemma} to conclude \eqref{covid.2} for $n$.
Let us first assume that $\epsilon \leq \delta_{n-1}$ and that
there exists $j \in \{1,\ldots,n-1\}$ with
$\theta^{j+1} - \theta^j \geq \epsilon$. Consider independent $j$-radial
and $(n-j)$-radial Bessel processes
$(\theta_t^1,\ldots,\theta_t^j)$ and
$(\theta_t^{j+1},\ldots, \theta_t^n)$. In other words,
remove the terms of the form
\[ \pm \cot(\theta_t^m - \theta^k_t), \;\;\;\;
k \leq j < j+1 \leq m \]
from the drift in the $n$-radial Bessel
process, so that now particles $\{1, \ldots, j\}$ do not interact with particles $\{j+1, \ldots, n\}$.
Using the inductive hypothesis, we can find a $\lambda$
such that with probability at least $\lambda$ we have
\[ d(\btheta_{\lambda^2 \epsilon^2} ) \geq 2\lambda \, \epsilon,\;\;\;\;
Y_{\lambda^2 \epsilon^2} \leq \frac{\epsilon}{4}. \]
This calculation is done with respect to the two independent
processes but we note that on this event,
\[ \theta^{j+1}_t - \theta^j_t
\geq \frac{\epsilon}{2} , \;\;\;\; 0 \leq t \leq \lambda^2 \,
\epsilon^2 .\]
Hence we get a lower bound on the Radon-Nikodym derivative
between the $n$-radial Bessel process and the two independent
processes.
Now suppose there is no such separation.
Let $\sigma_\epsilon = \inf\{t: |\theta^n_t - \theta^1_t| \geq n\, \epsilon \}$; it is possible that $\sigma_\epsilon = 0$. If there were no other particles, $ \theta^n_t - \theta^1_t$ would be a radial Bessel process. The addition of other particles pushes the first particle more to the left and the $n$th particle more to the right. Hence by comparison, we see that \[ \Prob\{\sigma_\epsilon \leq n^2\,\epsilon^2 \}
\geq c_1 \] and as above we can find $u$ such that \[ \Prob\left\{\sigma_\epsilon \leq n^2\,\epsilon^2
\; ; \; \max_{0 \leq t \leq n^2 \epsilon^2} |W_t^j|
\leq u \epsilon \right\} \geq \frac{c_1}{2}, \] and hence (with a different value of $u$) \[ \Prob\left\{\sigma_\epsilon \leq n^2\,\epsilon^2 \; ; \; Y_{\sigma_\epsilon} \leq u \, \epsilon\right\} \geq \frac{c_1}{2} . \]
Note that on this good event there exists
at least one $j=1,\ldots,n-1$ with
$\theta^{j+1}_{\sigma_\epsilon } -
\theta^j_{\sigma_\epsilon } \geq \epsilon$.
\end{proof}
For $\zeta >0 $, let $V_\zeta = \{\btheta \in \mathcal X: d(\btheta) \geq 2^{-\zeta}\}$ and let $\sigma_\zeta$ denote the first time that the process enters $V_\zeta$: \[ \sigma_\zeta= \inf \left \{ t: d(\btheta_t)\geq 2^{-\zeta}\right \}.
\] Define \begin{equation}\label{def_r} r=r(\delta)=\min\{ k \in {\mathbb Z}: 2^{-k}<\delta\}, \end{equation} where $\delta$ is as in Lemma \ref{double_distance}. Note that $r$ is a fixed constant for the remainder of this section.
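The constant $r(\delta)$ is easy to compute directly for $\delta \in (0,1]$; a minimal sketch (the function name is ours):

```python
def r_of_delta(delta):
    """Smallest integer k >= 0 with 2**(-k) < delta, i.e. the
    constant r(delta) of (def_r), for delta in (0, 1]."""
    k = 0
    while 2.0 ** (-k) >= delta:
        k += 1
    return k
```

For example, $\delta = 0.1$ gives $r = 4$, since $2^{-3} = 0.125 \geq 0.1 > 2^{-4}$.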
\begin{lemma}\label{Enters_by_1/2}
There exists $\const>0$ such that for any $\mathbf x
\in \mathcal X$, \begin{equation} \mathbb P^{\mathbf x} \{\sigma_r \leq 1/4\}\geq \const. \end{equation} \end{lemma}
\begin{proof}
We will construct a sequence of times $\frac{1}{8}\leq t_1 \leq t_2 \leq \cdots \leq \frac{1}{4}$ such that if \[ q_k = \inf_{\mathbf x \in V_k } \mathbb P^{\mathbf x} \{\sigma_r \leq t_k\},
\qquad k \in \nat,
\]
then $
q:= \inf _{k} q_k >0. $ Using \eqref{covid.3}, we can see that $q_k > 0$ for each $k$. To show that $q > 0$, it suffices to show that there is a summable sequence $\{u_k\}$ such that for all $k$ sufficiently large, $q_{k+1} \geq q_k \, (1-u_k)$. We will do this with $u_k = (1-\delta)^{k^2}$ where $\delta$ is as in Lemma \ref{double_distance}.
For this purpose, denote
\begin{equation}\label{def:s_k}
s_k= k^2\; 2^{-2(k+1)},
\end{equation}
and let $l\geq r$ be sufficiently large so that
\[
\sum_{k=l}^\infty s_k \leq \frac{1}{8}.
\]
Define the sequence $\{t_k\}$ by
\begin{equation}\label{def:t_k}
t_{k}= \begin{cases} \frac{1}{8}, \quad & k\leq l
\\ t_{k-1} + s_{k-1}, & k> l.
\end{cases}
\end{equation}
This sequence satisfies $\frac{1}{8} \leq t_1 \leq t_2 \cdots \leq \frac{1}{4}$.
Applying Lemma \ref{double_distance}, we see that if
$d(\mathbf x)\leq 2^{-(k+1)}$, then
\[\mathbb P^{\mathbf x} \left \{ \sigma_{k}\leq s_k \right \}\geq 1- u_{k}.\]
Therefore,
\[
\begin{aligned}
\mathbb P^{\mathbf x} \{ \sigma_r\leq t_{k+1} \}
&\geq \mathbb P^{\mathbf x}\{ \sigma_{k}\leq s_{k} \}\; \mathbb P^{\mathbf x} \{\sigma_r \leq t_{k+1}\vert \sigma_{k}\leq s_{k} \}
\\ &\geq (1-u_k) \inf_{\mathbf z \in \partial V_k}\Prob^{\mathbf z} \{\sigma_r \leq t_k\},
\end{aligned}
\]
so that for all $k>r$,
\[
q_{k+1}\geq q_k (1- u_k).
\] \end{proof}
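The induction works because $\sum_k u_k < \infty$ with $u_k = (1-\delta)^{k^2}$, so the infinite product $\prod_k (1-u_k)$ stays bounded away from $0$. A quick numerical sanity check of this fact (not part of the proof; the function name and cutoff are ours):

```python
def tail_product(delta, k0, k_max=300):
    """Numerically evaluate  prod_{k=k0}^{k_max} (1 - u_k)  with
    u_k = (1 - delta)**(k*k).  Since sum_k u_k < infinity, the
    product is bounded away from 0, and converges to a value close
    to 1 once k0 is moderately large.
    """
    prod = 1.0
    for k in range(k0, k_max + 1):
        prod *= 1.0 - (1.0 - delta) ** (k * k)
    return prod
```

For $\delta = 0.1$, the tail product starting at $k_0 = 5$ already exceeds $0.5$, while including the small-$k$ factors only multiplies it by a fixed positive constant.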
We are now prepared to prove Proposition \ref{bounds}; we prove the upper and lower bounds separately.
\begin{proof}[Proof of Proposition \ref{bounds}, lower bound]
We let
\begin{equation}
\hat \epsilon = \inf\left \{ p_t(\mathbf z, \by): \mathbf z \in \bdry V_{r}, \, \by \in V_r, \, \frac{1}{4}\leq t \leq 1 \right\}.
\end{equation}
To see that this is positive, we use \eqref{covid.3} to see that \[ \hat \epsilon \geq c \, \inf\left \{ \tilde p_t(\mathbf z, \by;
V_{r+1}): \mathbf z \in \bdry V_{r}, \, \by \in V_r, \, \frac{1}{4}\leq t \leq 1 \right\},\]
and straightforward arguments show that the righthand side
is positive.
Lemma \ref{Enters_by_1/2} implies that for $\mathbf y \in V_r$, $\mathbf x \in \mathcal X$, and $\frac 12 \leq t \leq 1$,
\[ p_t(\mathbf x, \mathbf y) \geq \Prob^{\mathbf x}\left\{
\sigma_r \leq \frac 14 \right\} \, \hat \epsilon \geq \const \, \hat \epsilon.\]
Also, since $F_{2\alpha}$ is bounded uniformly away
from $0$ in $V_r$, we have
\[ p_t({\bf y},{\bf x}) = \frac{ F_{2\alpha}({\bf x})}{F_{2 \alpha}({\bf y})}
\, p_t({\bf x},{\bf y}) \geq c \, F_{2\alpha}({\bf x}) .\]
This assumes ${\bf x} \in\mathcal X, {\bf y} \in V_r$. More generally,
if
$\mathbf x, \mathbf y \in \mathcal X$, \begin{eqnarray*} p_1({\bf x},{\bf y}) & \geq & \int_{V_r} p_{1/2} ({\bf x},\z) \,
p_{1/2} (\z,{\bf y}) \, d\z \\
& = & {F_{2\alpha}({\bf y})} \int_{V_r} p_{1/2} ({\bf x},\z) \,
p_{1/2} ({\bf y},\z) \, F_{2\alpha}(\z)^{-1}\, d\z \\
& \geq & c \, {F_{2\alpha}({\bf y})}.
\end{eqnarray*}
\end{proof}
In order to prove the upper bound, we will need two more lemmas.
\begin{lemma}\label{upperboundlemma}
If $r$ is as defined in (\ref{def_r}), then
\begin{equation}\label{p*}
p^*:=\sup \left \{ p_t(\mathbf z, \mathbf y): \mathbf z\in \partial V_{r+1}, \mathbf y \in V_r, 0\leq t\leq 1 \right \} <\infty.
\end{equation}
\end{lemma}
\begin{proof}
Let
\[ p^\# = \sup \left \{ p_t(\mathbf z, \mathbf y;
V_{r+2}): \mathbf z\in \partial V_{r+1}, \mathbf y \in V_r, 0\leq t\leq 1 \right \} .\]
By comparison with Brownian motion using \eqref{covid.3}
we can see that $p^\# < \infty$. Also by comparison with
Brownian motion, we can see that there exists $\rho > 0$
such that for all ${\bf x} \in \p V_{r+2}$,
\[ \Prob^{\bf x}\{\btheta[0,1] \cap V_{r+1} = \eset \} \geq
\rho. \] From this and the strong Markov property, we see that
\[ p^* \leq p^\# + (1-\rho) \, p^*.\]
(The term $p^\#$ on the righthand side corresponds to paths from ${\bf z}$ to ${\bf y}$ that stay within $V_{r+2}$. The term $(1-\rho) \, p^*$ corresponds to paths from ${\bf z}$ to ${\bf y}$ that hit $\bdry V_{r+2}$.)
Therefore,
\[
p^*\leq \frac{p^\#}{\rho}<\infty.
\]
\end{proof}
\begin{lemma} There exist $c, \beta< \infty$ such that
if $t \geq 1$, $k > r$, ${\bf x} \in \mathcal X, {\bf y} \in \overline V_{k+1} \setminus V_k$,
then \begin{equation} \label{covid.5}
{ p_{2^{-2k}t} ({\bf x},{\bf y}; V_k^c)}
\leq c \, 2^{ \beta k} \, (1-\delta)^t \, F_{2\alpha}
({\bf y}),
\end{equation}
where $\delta >0$ is the constant in Lemma \ref{double_distance}.
\end{lemma}
\begin{proof} We first consider $t=1$. For independent Brownian motions, we have
\[ \tilde p_{2^{-2k}} ({\bf x},{\bf y}) \leq c \, 2^{nk}, \;\;\; {\bf x} \in V_{k+1}, \]
\[ \tilde p_{t} ({\bf x},{\bf y}) \leq c \, 2^{nk}, \;\;\; 0 \leq t \leq
2^{-k} , \, \;\;\; {\bf x} \in \p V_{k+2}. \] There exist $\lambda,c'$ such that $F_{2\alpha}(\z) \geq c'\, 2^{-\lambda k}$ for $\z \in V_{k+2}$, Using the first inequality and \eqref{covid.3} we have
\[ p_{2^{-2k}} ({\bf x},{\bf y}; V_{k+3}) \leq c \, 2^{(n+\lambda) k}
\leq c \, 2^{\beta k} \, F_{2\alpha}({\bf y}) \]
\[ p_{t} (\z,{\bf y}; V_{k+3} ) \leq c \, 2^{\beta k }
\, F_{2\alpha}({\bf y})
\;\;\;\;\; t \leq 2^{-2k}, \;\;\; \z \in \p V_{k+2} , \]
where $\beta = n + 2\lambda$. Arguing as in the previous lemma, we can conclude that \[ p_{2^{-2k}} ({\bf x},{\bf y} )
\leq c \, 2^{\beta k} \, F_{2\alpha}({\bf y}) .\] This establishes \eqref{covid.5} for $t=1$; the general case follows from Lemma \ref{double_distance} using \[ p_{2^{-2k}t} ({\bf x},{\bf y}; V_k^c)
\leq \Prob^{\bf x}\{\btheta[0,2^{-2k}(t-1)] \subset V_k^c\}
\, \sup_{\z \in \mathcal X} p_{2^{-2k}}(\z,{\bf y}).\] \end{proof}
\begin{proof}[Proof of Proposition \ref{bounds}, upper bound]
For each $k\in \mathbb N$, define
\[
J_k=\sup_{\shortstack{$\scriptstyle{\by \in V_k}$ \\ $\scriptstyle{ \forall \mathbf x}$ \\$\scriptstyle{\tilde t_k\leq t\leq 1} $
}}
\frac{p_t(\mathbf x, \mathbf y)}{F_{2\alpha}(\mathbf y)},
\]
where $\tilde t_k= t_k + 1/2$, so that
\[
\tilde t_{k}=\begin{cases}
\frac{5}{8}, & k\leq l
\\ \tilde t_{k-1}+s_{k-1}, & k> l,
\end{cases}
\]
where $s_k$ and $t_k$ are defined in (\ref{def:s_k}) and (\ref{def:t_k}) above. Since $F_{2\alpha}(\mathbf y) \geq d(\mathbf y)^{n(n-1)/2}$, Lemma \ref{upperboundlemma} implies that for each $k$, $J_k<\infty$.
We will show that $J_{k+1}\leq J_k + c_k$ for a summable sequence $\{c_k \}$, which implies that $\lim_{k\to \infty} J_k <\infty$.
To bound $J_{k+1}$, notice that if $\mathbf y \in V_{k+1}$ and $t_{k+1}\leq t \leq 1$, we have the decomposition for arbitrary $\mathbf x$:
\begin{equation}\label{eq:upperdecomp}
\frac{p_t(\mathbf x, \mathbf y)}{F_{2\alpha} (\mathbf y)}
= \frac{p_t(\mathbf y, \mathbf x)}{F_{2\alpha} (\mathbf x)}
= \frac{p_t(\mathbf y, \mathbf x; \sigma_k \leq s_k)}{F_{2\alpha} (\mathbf x)}
+ \frac{p_t(\mathbf y, \mathbf x; \sigma_k > s_k)}{F_{2\alpha} (\mathbf x)} .
\end{equation}
The strong Markov property implies that the first term on the righthand side is equal to
\[
\int_{\partial V_k}p_{\sigma_k} (\mathbf y, \mathbf z;\sigma_k\leq s_k) \frac{p_{t-\sigma_k} (\mathbf z, \mathbf x)}{F_{2\alpha}(\mathbf x)} \, d\mathbf z
\, \,\, \leq \,\mathbb P^{\mathbf y} \{ \sigma_k\leq s_k \} \sup_{\shortstack{ $\scriptstyle{ \mathbf z \in \partial V_k }$ \\ $ \scriptstyle{\tilde t_{k} \leq s \leq 1}$
} } \frac{p_s(\mathbf x', \mathbf z)}{F_{2\alpha} (\mathbf z)}
\,\,\leq \,\,J_k.
\]
Using \eqref{covid.5}, we can see that the second term on the righthand side of (\ref{eq:upperdecomp}) may be bounded by
\[
\begin{aligned}
\frac{p_t(\mathbf y, \mathbf x; \sigma_k >s_k)}{F_{2\alpha} (\mathbf x)}
& = \int_{\mathcal X} p_{s_k} (\mathbf y, \mathbf z; \sigma_k >s_k) \; \frac{p_{t-s_k} (\mathbf z, \mathbf x)}{F_{2\alpha}(\mathbf x)}\,d\mathbf z \\
& = \int_{V_k^c} p_{s_k} (\mathbf y, \mathbf z; V_k^c)\; \frac{p_{t-s_k} (\mathbf x, \mathbf z)}{F_{2\alpha}(\mathbf z)}\,d\mathbf z
\\ & = \frac{1}{F_{2\alpha}({\bf y})}
\int_{V_k^c} p_{s_k} (\mathbf z, \mathbf y; V_k^c)\; {p_{t-s_k} (\mathbf x, \mathbf z)} \,d\mathbf z
\\
& \leq e^{-O(k^2)} \int_{V_k^c} {p_{t-s_k} (\mathbf x, \mathbf z)} \,d\mathbf z \leq e^{-O(k^2)} .
\end{aligned}
\] Therefore,
\[
J_{k+1} \leq J_k + e^{-O(k^2)},
\]
completing the proof.
\end{proof}
\end{document}
\begin{document}
\thispagestyle{empty} \title{Looking for Bird Nests: Identifying Stay Points with Bounded Gaps}
\begin{abstract} A stay point of a moving entity is a region in which it spends a significant amount of time. In this paper, we identify all stay points of an entity in a certain time interval, where the entity is allowed to leave the region but must return within a given time limit. This definition of stay points seems more natural in many applications of trajectory analysis than those that do not limit the duration of the entity's absence from the region. We present an $O(n \log n)$ algorithm for trajectories in $R^1$ with $n$ vertices and a $(1 + \epsilon)$-approximation algorithm for trajectories in $R^2$ for identifying all such stay points. Our algorithm runs in $O(kn^2)$ time, where $k$ depends on $\epsilon$ and the ratio of the duration of the trajectory to the allowed gap time. \end{abstract}
\section{Introduction}
The question of where a moving entity, such as an animal or a vehicle, spends a significant amount of its time is very common in trajectory analysis \cite{zheng15}. These regions are usually called popular places, hotspots, interesting places, stops, or stay points in the literature. There are several definitions of stay points, and different techniques have been presented to find them \cite{benkert10,gudmundsson13,fort14,perez16,arboleda17}. However, few papers are dedicated to this problem from a geometric perspective, which is the focus of the present paper.
Benkert et al.~\cite{benkert10} defined a popular place to be an axis-aligned square of fixed side length in the plane which is visited by the largest number of distinct trajectories. They modelled a visit either as the inclusion of a trajectory vertex or as the inclusion of any portion of a trajectory edge, and presented optimal algorithms for both cases. Gudmundsson et al.~\cite{gudmundsson13} introduced several different definitions of trajectory hotspots. In some of these definitions, a hotspot is an axis-aligned square that contains a contiguous sub-trajectory of maximum duration, and in others it is an axis-aligned square in which the entity spends the maximum possible duration but its presence may not be contiguous. For hotspots of fixed side length, they presented an $O(n \log n)$ algorithm for the former and an $O(n^2)$ algorithm for the latter, where $n$ is the number of trajectory vertices. Damiani et al.~\cite{damiani14}, like some of the cases considered by Gudmundsson et al.~\cite{gudmundsson13}, allowed gaps in the entity's presence inside a stay point and presented heuristic algorithms for finding such stay points.
There are applications in which we need to identify regions that are regularly visited. Djordjevic et al.~\cite{djordjevic11} concentrated on a limited form of this problem and presented an algorithm to decide if a region is visited almost regularly (in fixed periods of time) by an entity. However, in many applications that require spatio-temporal analysis, these definitions are inadequate. For instance, a bird needs to return to its nest regularly to feed its chicks. In other words, the bird may leave its nest but it cannot be away for a long time. We would like to find all possible locations for its nest.
Arboleda et al.~\cite{arboleda17} studied a problem very similar to the focus of the present paper, except that they assumed the algorithm takes as input, in addition to the trajectories, a set of polygons as potential stay points or interesting sites. They presented a simple algorithm to identify stay points among the given interesting sites; their algorithm computes the longest sub-trajectory visiting each interesting site for each trajectory, while allowing the entity to leave the site for some predefined amount of time. They also mentioned motivating real world examples to show that in some applications, it makes sense to allow the entity to leave the site for short periods of time, like leaving a cinema for the bathroom.
Our goal is identifying all trajectory stay points, i.e.~axis-aligned squares in which the entity is always present, except for short periods of time, where both the side length of the squares and the allowed gap time are specified as inputs of the algorithm and assumed to be fixed. Note that we ignore the duration in which the entity stays in a region. If, for instance, a region with the maximum duration among our stay points is desired, our algorithm can be combined with those that find a stay point with the maximum duration, but allow unbounded entity absence, like the ones presented by Gudmundsson et al.~\cite{gudmundsson13}.
This paper is organized as follows. In Section~\ref{sprel}, we introduce the notation and define some of the main concepts of this paper. In Section~\ref{sone}, we handle trajectories in $R^1$ and present an algorithm to find all stay points of such trajectories with the time complexity $O(n \log n)$. We focus on trajectories in $R^2$ in Section~\ref{stwo} and present an approximation algorithm for finding their stay points. We conclude this paper by showing that the complexity of the stay map of two-dimensional trajectories can be $\Theta(n^2)$.
\section{Preliminaries} \label{sprel} A trajectory $T$ describes the movement of an entity in a certain time interval. Trajectories can be modelled as a set of vertices and edges in the plane. Each vertex of $T$ represents a location at which the entity was observed. The time of this observation is indicated as the timestamp of the vertex. We assume that the entity moves in a straight line and with constant speed from a vertex to the next; the edges of the trajectory connect its contiguous vertices. A sub-trajectory of $T$ for a time interval $(a, b)$ is denoted as $T(a, b)$, and describes the movement of the entity from time $a$ to time $b$. Except possibly the first and the last vertices of a sub-trajectory, which may fall on an edge of $T$, its set of vertices is a subset of those of $T$. The stay points considered in this paper are formally described in Definition~\ref{dstaypoint}. We use the symbols defined here, such as $g$ and $s$, throughout the paper without repeating their description. Also, any square that appears in the rest of this paper is axis-aligned and has side length $s$. \begin{figure}
\caption{An example two-dimensional trajectory.
The number near each vertex shows its timestamp.
The green region is the stay map
and the green square is a stay point ($g = 15$).}
\label{fsmap}
\end{figure}
\begin{defn} \label{dstaypoint} A stay point of a trajectory $T$ in $R^2$ is a square of fixed side length $s$ in the plane such that the entity never spends more than a given time limit $g$ outside it continuously. \end{defn}
The goal of this paper is identifying all stay points of a trajectory, or its stay map (Definition~\ref{dstaymap}). Note that the parameters $s$ and $g$ are assumed to be fixed and specified as inputs of the algorithm.
\begin{defn} \label{dstaymap} The stay map $M$ of a trajectory $T$ in $R^2$ is a subset of the plane such that every square of side length $s$ whose lower left corner is in $M$ is a stay point of $T$, and the lower left corners of all stay points of $T$ are in $M$. \end{defn}
Figure~\ref{fsmap} shows an example trajectory, its stay map, and one of its stay points. Note that every square whose lower left corner is in the stay map is a stay point. Although these definitions are presented for trajectories in $R^2$, they can be trivially adapted for one-dimensional trajectories, as we do in Section~\ref{sone}.
\section{Stay Maps of One-Dimensional Trajectories} \label{sone} Let $T$ be a trajectory in $R^1$. A stay point of $T$ is an interval of length $s$ such that the entity never leaves it for a period of time longer than $g$. The stay map $M$ of $T$ is the region containing the left end points of all stay points of $T$. In this section, we present an algorithm for finding $M$. \begin{figure}
\caption{Mapping a one-dimensional trajectory to
the time-location plane.
The green rectangle of height $s$ shows a possible stay point.}
\label{fonemap}
\end{figure}
\begin{lemma} \label{lonecon} The stay map $M$ of a trajectory $T$ in $R^1$ is continuous. \end{lemma} \begin{proof} To obtain a contradiction, let points $p$ and $q$ be inside $M$ and $v$ be outside it such that $p < v < q$ (the assumption that $M$ is not continuous implies the existence of such a triple). Let $r_p$, $r_q$, and $r_v$ be three segments of length $s$, whose left end points are at $p$, $q$, and $v$, respectively. Clearly, $r_p$ and $r_q$ are stay points while $r_v$ is not. Whenever the entity moves to the left of $v$, it is outside $r_q$, so it must cross $v$ to enter $r_q$ within the time limit $g$. Also, whenever the entity moves beyond the right end point of $r_v$ (and is thus outside $r_p$), it must return to $r_p$, and hence cross the right end point of $r_v$, within the time limit. Therefore, the entity can never be outside $r_v$ for more than time $g$, which implies that $r_v$ is also a stay point and $v$ is inside $M$, the desired contradiction. \end{proof}
\begin{lemma} \label{lonecheck} Given a trajectory $T$ with $n$ vertices in $R^1$, we can answer in $O(n)$ time whether a point $p$ is in the stay map or not, and if not, whether the stay map is on its left side or on its right side. \end{lemma} \begin{proof} Define $r$ as the segment $pq$, where $q = p + s$. Testing each trajectory edge in order, we can compute the duration of each maximal sub-trajectory outside $r$ and check if it is at most $g$. Therefore, we can decide if $p$ is the left end point of a stay point in $O(n)$ time. If $r$ is not a stay point, there is at least one time interval in which the entity spends more than time $g$ on the left or on the right side of $r$. Without loss of generality, suppose it does so on the left side. Then, no point on the right of $p$ can be the left end point of a stay point, and therefore the whole stay map of $T$ must appear on the left of $p$. This again can be tested in $O(n)$ time by processing trajectory edges. \end{proof}
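The $O(n)$ test of Lemma~\ref{lonecheck} can be sketched as follows for a trajectory given as a list of (timestamp, position) pairs; the function names and the splitting of edges at level crossings are our own implementation choices:

```python
def crossing(t0, x0, t1, x1, level):
    """Time in (t0, t1) at which the linear motion crosses `level`, if any."""
    if x0 == x1:
        return []
    tc = t0 + (level - x0) * (t1 - t0) / (x1 - x0)
    return [tc] if t0 < tc < t1 else []

def max_time_outside(verts, lo, hi):
    """Longest continuous time spent outside [lo, hi].

    `verts` is a list of (timestamp, position) pairs with strictly
    increasing timestamps; motion between vertices is linear.
    """
    best, excursion_start = 0.0, None
    for (t0, x0), (t1, x1) in zip(verts, verts[1:]):
        # split the edge at the points where it crosses lo or hi
        cuts = sorted({t0, t1}
                      | set(crossing(t0, x0, t1, x1, lo))
                      | set(crossing(t0, x0, t1, x1, hi)))
        for a, b in zip(cuts, cuts[1:]):
            tm = (a + b) / 2.0                       # midpoint of sub-edge
            xm = x0 + (tm - t0) * (x1 - x0) / (t1 - t0)
            if xm < lo or xm > hi:                   # outside the segment r
                if excursion_start is None:
                    excursion_start = a
                best = max(best, b - excursion_start)
            else:                                    # excursion ends
                excursion_start = None
    return best

def is_stay_point(verts, p, s, g):
    """O(n) test: is the segment [p, p+s] a stay point for gap bound g?"""
    return max_time_outside(verts, p, p + s) <= g
```

For instance, for the trajectory with vertices $(0,0)$, $(4,4)$, $(8,0)$, the entity is outside $[0,1]$ continuously for $6$ time units, so $[0,1]$ is a stay point for $g = 7$ but not for $g = 5$.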
An event point of a trajectory $T$ in $R^1$ is a point on the line at which one of the following occurs: i) a trajectory vertex lies on that point, ii) the time gap between two consecutive visits to that point is exactly $g$.
\begin{lemma} \label{loneends} The stay map $M$ of a trajectory $T$ starts and ends at an event point or at distance $s$ from one. \end{lemma} \begin{proof} By Lemma~\ref{lonecon}, $M$ is continuous. Let $p$ be the left end point of the stay map $M$. Let $r = pq$ be a segment such that $q = p + s$. Whenever the entity leaves $r$ through $p$, it returns by passing it again within the time limit $g$. Similarly, if the entity leaves $r$ through $q$, it visits $q$ again within time $g$. Suppose, for the sake of contradiction, that neither $p$ nor $q$ is an event point (so $p$ is neither an event point nor at distance $s$ from one). Then, we can move $r$ slightly to the left to obtain $r'$. $r'$ must also be a stay point, because every time the entity leaves it from either of its end points, it returns within time $g$: since neither $p$ nor $q$ is an event point, the time between consecutive visits of the entity to these points is not exactly $g$, and they do not lie on a trajectory vertex. This contradicts the choice of $p$ as the left end point of $M$. A similar argument shows that the right end point of $M$ must also be an event point or at distance $s$ from one. \end{proof}
\begin{lemma} \label{toneevents} The set of event points of a trajectory with $n$ vertices can be computed in $O(n \log n)$ time. \end{lemma} \begin{proof} We map the trajectory to a plane such that a trajectory vertex at position $p$ with timestamp $t$ is mapped to point $(t, p)$ (see Figure~\ref{fonemap}). Obviously, the polygonal path representing the trajectory in this plane is $x$-monotone. We perform a plane sweep by sweeping a line parallel to the $x$-axis in the positive direction of the $y$-axis in this plane.
The edges in this plane chop the sweep line into several segments. We maintain the length of every such segment during the sweep. When the sweep line passes a trajectory vertex $v$, an event point is recorded and, based on the other end points of the edges that meet at $v$, one of the following cases occurs: \begin{enumerate} \item If $v$ is the lowest end point of both edges, two new segments are introduced. Based on the slopes of the edges bounding each segment, we record an event at the location where the distance between the edges becomes exactly $g$, if they are long enough. \item If $v$ is the highest end point of both edges that meet at $v$, three segments on the sweep line are merged (below $v$, three segments are created by the edges incident to $v$; at $v$, there are two such segments; after $v$, they merge into one). We also record an event for the location at which the length of the merged segment becomes $g$. \item If $v$ is the highest end point of one edge and the lowest end point of the other, the events scheduled for the locations at which the lengths of the two incident segments on the sweep line become $g$ may need to be updated. \end{enumerate} Note that since the sweep line stops at $n$ vertices and at each vertex only a constant number of event points are added, the total number of event points is $O(n)$. \end{proof}
\begin{theorem} \label{tonealg} The stay map $M$ of a trajectory $T$ with $n$ vertices in $R^1$ can be computed in $O(n \log n)$ time. \end{theorem} \begin{proof} Lemma~\ref{toneevents} implies that the set of event points of $T$ can be computed with the time complexity $O(n \log n)$. From this set, we can obtain an ordered sequence of event points and points at distance exactly $s$ from them in $O(n \log n)$ time (note that the length of this sequence is still $O(n)$). Based on Lemma~\ref{loneends}, $M$ starts and ends at a point of this sequence. Also, Lemma~\ref{lonecheck} implies that we can decide if any of the end points of $M$ appears before or after any point in $O(n)$ time. Therefore, we can perform a binary search on the sequence obtained from the event points of $T$ to find the left and the right end points of $M$. Since the length of the sequence is $O(n)$, the time complexity of the binary search is $O(n \log n)$. \end{proof}
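The binary searches in this proof (one to hit the stay map, then one for each of its end points) can be sketched generically. Here `classify` stands in for the $O(n)$ oracle of Lemma~\ref{lonecheck} and `candidates` for the sorted sequence of event points and points at distance $s$ from them; both names are our own, and the search relies on the stay map being continuous (Lemma~\ref{lonecon}):

```python
def stay_map_endpoints(candidates, classify):
    """Binary-search a sorted candidate list for the end points of the
    (continuous) stay map.  classify(p) returns 'stay' if p is in the
    stay map, 'left' if the map lies left of p, 'right' otherwise.
    Returns (left, right) candidates, or None if no candidate is in
    the map.  O(log n) oracle calls."""
    lo, hi = 0, len(candidates) - 1
    hit = None
    while lo <= hi:                      # find any candidate in the map
        mid = (lo + hi) // 2
        side = classify(candidates[mid])
        if side == 'stay':
            hit = mid
            break
        if side == 'left':               # map lies left of this candidate
            hi = mid - 1
        else:                            # 'right'
            lo = mid + 1
    if hit is None:
        return None
    lo2, hi2 = 0, hit                    # leftmost candidate still in the map
    while lo2 < hi2:
        mid = (lo2 + hi2) // 2
        if classify(candidates[mid]) == 'stay':
            hi2 = mid
        else:
            lo2 = mid + 1
    lo3, hi3 = hit, len(candidates) - 1  # rightmost candidate still in the map
    while lo3 < hi3:
        mid = (lo3 + hi3 + 1) // 2
        if classify(candidates[mid]) == 'stay':
            lo3 = mid
        else:
            hi3 = mid - 1
    return candidates[lo2], candidates[lo3]
```

With an $O(n)$ oracle and an $O(n)$-length candidate sequence this gives the $O(n \log n)$ bound of the theorem; Lemma~\ref{loneends} guarantees that a non-empty stay map contains candidate points, so the first search cannot miss it.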
Unfortunately, this algorithm cannot be adapted for two-dimensional trajectories, because their stay maps may no longer be continuous.
\section{Stay Maps of Two-Dimensional Trajectories} \label{stwo} We use the notation $P(a, b)$ to denote the region that contains the lower left corners of all squares of side length $s$ that contain at least one point of the sub-trajectory $T(a, b)$. We also use $M(a, b)$ to indicate the stay map of the sub-trajectory $T(a, b)$. We assume that trajectory $T$ starts at time $0$ and has total duration $D$. It is clear that every point in the stay map of $T$ must appear in $P(t, t + g)$ for any value of $t$, where $0 \le t \le D - g$ (because the entity cannot be outside a stay point of $T$ for more than time $g$). Therefore, the stay map of $T$ is the intersection of $P(t, t + g)$ for every possible value of $t$, $0 \le t \le D - g$. This suggests the general scheme demonstrated in Algorithm~\ref{atwoexact} for finding the stay map of a two-dimensional trajectory, assuming $D > g$.
\begin{alg} \label{atwoexact} Let $T$ be a two-dimensional trajectory with $n$ edges and total duration $D$. Compute the stay map of $T$ ($M(0, D)$) as follows. \begin{enumerate} \item Compute $P(0, g)$, as the union of polygons $P(u, v)$, for all edges $uv$ in $T(0, g)$. \item Let $M(0, g)$ be $P(0, g)$. This is not strictly correct, as $M(0, t)$ must be the complete plane when $t \le g$ and its value changes to a subset of $P(0, g)$ for any value of $t > g$. This simplifying assumption, however, does not affect the correctness of the algorithm, since $D > g$. \item Incrementally compute $M(0, D)$ as follows. Compute $M(0, b)$ from $M(0, a)$, in which $M(0, a)$ is the last computed stay map and $b$ is the smallest value after $a$ such that $b - g$ or $b$ is the timestamp of a trajectory vertex. Let $V$ be the difference between $M(0, a)$ and $M(0, b)$ (note that $M(0, b)$ is a subset of $M(0, a)$). After computing $V$, we obtain $M(0, b)$ by excluding $V$ from $M(0, a)$. \end{enumerate} \end{alg}
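In step 1, $P(u, v)$ for a single edge $uv$ is the set of lower left corners $c$ such that the $s$-square at $c$ meets the segment $uv$, so membership in $P(u, v)$ reduces to a segment-square intersection test. A minimal sketch (a hypothetical helper of ours, using Liang-Barsky clipping):

```python
def edge_meets_square(u, v, c, s):
    """True iff segment uv intersects the axis-aligned square with
    lower left corner c and side length s, i.e. c lies in P(u, v).
    Liang-Barsky clipping of the segment against the square."""
    (x0, y0), (x1, y1) = u, v
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    # one (p, q) pair per square side: left, right, bottom, top
    for p, q in ((-dx, x0 - c[0]), (dx, c[0] + s - x0),
                 (-dy, y0 - c[1]), (dy, c[1] + s - y0)):
        if p == 0:
            if q < 0:
                return False          # parallel to this side and outside
        else:
            r = q / p
            if p < 0:
                t0 = max(t0, r)      # entering the slab
            else:
                t1 = min(t1, r)      # leaving the slab
    return t0 <= t1                  # a parameter range survives clipping
```

$P(u, v)$ itself is the Minkowski sum of the segment $uv$ with the square $[-s, 0]^2$, a constant-complexity polygon, which is why step 1 takes time proportional to the number of edges in $T(0, g)$.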
\begin{figure}
\caption{The difference $V$ in Algorithm~\ref{atwoexact},
when $P(a - g, b - g)$ and $P(a, b)$ do not overlap.}
\label{fvcomp}
\end{figure} The core of Algorithm~\ref{atwoexact} is the computation of the difference $V$. By the choice of $b$, $T(a - g, b - g)$ and $T(a, b)$ are both line segments. The value of $V$ depends on these segments and $T(b - g, a)$.
Let $r$ be a square whose lower left corner is in $V$ and let $a - g + \delta$ be the time of the entity's departure from $r$ before time $b - g$. Since the lower left corner of $r$ is in $V$, $r$ is not visited by the entity in the sub-trajectory $T(a - g + \delta, a + \delta)$. In other words, any point that is in none of $P(a - g + \delta, b - g)$, $P(b - g, a)$, and $P(a, a + \delta)$ for some value of $\delta$ in $0 \le \delta \le g$ cannot be a stay point.
To make the computation of $V$ easier, we define $V'$ as follows ($V'$ is very similar to $V$, except that it ignores $P(b - g, a)$): { \scriptsize \[ V' = \bigcup\limits_{0 \le \delta \le g} {P(a - g, a - g + \delta) \setminus \left( P(a - g + \delta, b - g) \cup P(a, a + \delta)\right)} \] } $V'$ contains the lower left corners of all squares that have been visited during the interval $(a - g, a - g + \delta)$, but have not been visited in $(a - g + \delta, b - g)$ or $(a, a + \delta)$ for some $\delta$ in $0 \le \delta \le g$. Then, $V = V' \setminus P(b - g, a)$.
If no square intersects both $T(a - g, b - g)$ and $T(a, b)$, $V'$ is $P(a - g, b - g)$. This case is shown in Figure~\ref{fvcomp}, in which $V'$ is the rectangle on the left. Otherwise, $V'$ depends on the relative speed of the entity in these sub-trajectories. In both cases, $V'$ is a polygon of constant complexity and can be computed in constant time. We do not discuss the details of the computation of $V'$ in this paper, however. Since $T(b - g, a)$ consists of $O(n)$ edges, $P(b - g, a)$ is the union of $O(n)$ simple polygons. Therefore, $V' \setminus P(b - g, a)$ is also the union of a set of polygons with the total complexity $O(n)$. Let $V_t$ be the union of the differences $V$ for all iterations of the third step of Algorithm~\ref{atwoexact} (note that the complexity of $V_t$ is $O(n^2)$). When the algorithm finishes, $M(0, D)$ is $P(0, g) \setminus V_t$. Since the computation of $V_t$ requires finding the union of polygons with the total complexity $O(n^2)$, an $O(n^2)$ implementation of this exact algorithm seems unlikely.
\subsection{Approximate Stay Maps of Two-Dimensional Trajectories} In Algorithm~\ref{atwo}, we consider $P(t, t + g)$ for limited discrete values of $t$ to compute \emph{approximate stay maps} of a trajectory (Definitions~\ref{dstaypointapx} and \ref{dstaymapapx}), to improve the time complexity of Algorithm~\ref{atwoexact}.
\begin{defn} \label{dstaypointapx} A $(1 + \epsilon)$-approximate stay point of a trajectory $T$ in $R^2$ is a square of fixed side length $s$, such that the entity is never outside it for more than $g + \epsilon g$ time. \end{defn}
\begin{defn} \label{dstaymapapx} A $(1 + \epsilon)$-approximate stay map of a trajectory $T$ in $R^2$ is the region containing the lower left corners of all exact stay points of $T$ and possibly the lower left corners of some of its $(1 + \epsilon)$-approximate stay points. \end{defn}
\begin{alg} \label{atwo} Let $T$ be a trajectory in $R^2$ with $n$ edges and total duration $D$ and let $\epsilon$ be any real positive constant no greater than $D / g$. Compute a $(1 + \epsilon)$-approximate stay map of $T$ as follows. \begin{enumerate} \item Compute $P(t, t + g)$ for $t = i\lambda$ for integral values of $i$ from $0$ to $D / \lambda$, where $\lambda$ is $\epsilon g$. We call $P(t, t + g)$ for any value of $t$ a snapshot of $T$. \item Compute the intersection of these snapshots. For this, we can use the topological sweep of Chazelle and Edelsbrunner~\cite{chazelle92} on the subdivision of the plane induced by the edges of the snapshots and include in the output the faces present in all snapshots. \end{enumerate} \end{alg}
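Step 2 intersects the exact snapshot polygons with a topological sweep. Purely as an illustration of the snapshot logic, the intersection can also be evaluated pointwise on candidate corners, with each membership test $c \in P(t, t+g)$ approximated by dense time sampling of the sub-trajectory; all names and the sampling resolution below are our own, not the paper's, and this sketch does not achieve the stated time bound:

```python
def position(traj, t):
    """Linearly interpolate the entity's position at time t
    (traj is a list of (t, (x, y)) vertices with increasing t)."""
    for (t0, p0), (t1, p1) in zip(traj, traj[1:]):
        if t0 <= t <= t1:
            a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            return (p0[0] + a * (p1[0] - p0[0]), p0[1] + a * (p1[1] - p0[1]))
    return traj[-1][1]

def in_snapshot(traj, t, g, s, c, nsamples=64):
    """c lies in P(t, t+g) iff some point of T(t, t+g) falls inside the
    s-square at corner c; approximated here by coarse time sampling."""
    return any(c[0] <= x <= c[0] + s and c[1] <= y <= c[1] + s
               for x, y in (position(traj, t + g * i / nsamples)
                            for i in range(nsamples + 1)))

def approx_stay_corners(traj, s, g, eps, corners):
    """Keep the candidate corners lying in every snapshot P(t, t+g)
    for t = 0, eps*g, 2*eps*g, ..., D - g (Algorithm's intersection,
    evaluated pointwise instead of by topological sweep)."""
    D = traj[-1][0]
    steps = int((D - g) / (eps * g))
    ts = [min(i * eps * g, D - g) for i in range(steps + 2)]
    return [c for c in corners
            if all(in_snapshot(traj, t, g, s, c) for t in ts)]
```

For an entity that rests at $(0.5, 0.5)$ for its whole duration, the corner $(0, 0)$ survives every snapshot while a faraway corner does not, matching the definition of a $(1+\epsilon)$-approximate stay map.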
\begin{theorem} \label{ttwo} For trajectory $T$ in $R^2$ with $n$ edges and total duration $D$ and any real positive constant $\epsilon$ no greater than $D / g$, Algorithm~\ref{atwo} computes a $(1 + \epsilon)$-approximate stay map of $T$. \end{theorem} \begin{proof} Since the output of Algorithm~\ref{atwo} is the intersection of different snapshots of $T$, the lower left corner of every stay point must be inside it. Therefore, it suffices to show that every point in the output of the algorithm is the lower left corner of a $(1 + \epsilon)$-approximate stay point.
Let $r$ be a square whose lower left corner is in the region reported by this algorithm. Suppose that the entity leaves $r$ at $t_b$ and reenters $r$ at $t_e$. We can set $t_b = 0$ for handling the initial part of the trajectory, and, if the entity never returns to $r$, we can set $t_e = D$. To prove the approximation factor, we show that $t_e \le t_b + g + \epsilon g$. Let $i$ be the largest index such that $\lambda i \le t_b$ and let $t_1 = \lambda i$. We show that the entity must return before time $t_1 + \lambda + \lambda/\epsilon$. Otherwise, $P(t_1 + \lambda, t_1 + \lambda + \lambda/\epsilon)$, which is a snapshot since $\lambda/\epsilon$ is equal to $g$, does not contain the lower left corner of $r$ (this is demonstrated in Figure~\ref{fsnap}) and this contradicts the assumption that it is included in the region returned by the algorithm. Therefore, the entity cannot be outside $r$ for longer than $\lambda/\epsilon + \lambda$, and $t_e \le t_b + g + \epsilon g$. \end{proof}
\begin{figure}
\caption{The entity leaves a square at $t_b$ and returns at $t_e$.
If $t_e - t_b$ is larger than $g + g\epsilon$, there is a
snapshot in which the entity is outside the square.}
\label{fsnap}
\end{figure} \begin{theorem} \label{ttwoanalysis} The time complexity of Algorithm~\ref{atwo} is $O(n^2 / \epsilon^2 + \sigma^2 / \epsilon^2)$, in which $\sigma$ is $D / g$. \end{theorem} \begin{proof} A subdivision of the plane by $m$ line segments has $O(m^2)$ faces and can be swept with the same time complexity \cite{chazelle92}. Moreover, the number of the segments of each snapshot depends on the number of vertices of the sub-trajectory inside that snapshot (the region containing the lower left corners of the squares that intersect an edge of the sub-trajectory is a polygon with a constant number of sides). We, therefore, count the total number of vertices of the sub-trajectories in all snapshots. There are two types of trajectory vertices in each snapshot: those present in the original trajectory $T$ and the end points of the snapshot, which may not coincide with a trajectory vertex. Since the duration of each snapshot is $g$ and the difference between the start time of contiguous snapshots is $\epsilon g$, each trajectory vertex appears in at most $1 / \epsilon$ snapshots. Therefore, the total number of vertices is at most $n / \epsilon + 2D / (\epsilon g)$ and the time complexity of Algorithm~\ref{atwo} is $O(n^2 / \epsilon^2 + \sigma^2 / \epsilon^2)$. \end{proof}
It is not difficult to see that the stay map of a two-dimensional trajectory may contain $\Theta(n^2)$ faces and therefore we cannot expect an algorithm with the worst-case time complexity $o(n^2)$. In what follows, we demonstrate a trajectory with $O(n)$ edges and a stay map of $\Theta(n^2)$ faces. Trajectory edges are added incrementally, as demonstrated in Figure~\ref{fgrid}, in which filled regions represent the stay map (except for $t \le g$, in which they represent $P(0, t)$) and arrows show trajectory edges. We assume that the entity starts at time $0$ and position $(0, 0)$.
\begin{figure*}
\caption{A trajectory with a stay map of $O(n^2)$ faces.
The arrows indicate trajectory edges and filled
regions indicate the stay map at each step.}
\label{fgrid}
\end{figure*}
Generate $m$ vertical strips as follows. Add the second vertex at $(2s, 0)$ with timestamp $g/2$ (Figure~\ref{fgrid}.a). Move the entity to its initial position using three vertices as shown in Figure~\ref{fgrid}.b; the position of the last vertex is $(0, 0)$ and its timestamp is $g - g / 2m$. Create the vertical strips as follows: after every $g / m$ time, quickly move the entity by $s / m$ to the right (Figures~\ref{fgrid}.c--\ref{fgrid}.e). After $m$ such steps and waiting for at least $g$, the current stay map consists of $m$ vertical strips (Figure~\ref{fgrid}.f).
The same trajectory we used for creating the vertical strips can be used to create horizontal strips, after rotating it by 90 degrees. Performing this after the previous step results in a stay map consisting of $\Theta(m^2)$ small squares (Figure~\ref{fgrid}.g).
\section{Concluding Remarks} The definition of stay points with bounded gaps can be easily extended to multiple trajectories. A multi-trajectory stay point is a square that is visited by at least one of the entities in any interval of duration $g$. It seems possible to compute such stay maps by modifying Algorithm~\ref{atwo} to intersect, over all snapshot times, the union of the snapshots of the different entities. However, the time complexity of this algorithm may no longer be $O(n^2)$, where $n$ is the total number of trajectory vertices. Finding an efficient exact algorithm for the multi-trajectory version of the problem seems interesting.
As shown in Section~\ref{stwo}, the complexity of a stay map can be $\Theta(n^2)$, rendering an algorithm with time complexity $o(n^2)$ impossible. This bound, however, is not tight, and a natural question is whether it is possible to find the exact stay map of two-dimensional trajectories in $O(n^2)$ time. Also, by limiting the size of the output, for instance by finding only one of the stay points, a more efficient algorithm may be possible. Furthermore, it seems interesting to study the problem in higher dimensions.
\end{document} |
\begin{document}
\title{Counting dimer coverings on self-similar Schreier graphs}
\author{Daniele D'Angeli} \address{Departamento de Matem\'{a}ticas, Universidad de los Andes, Cra 1 n. 18A-12 Bogot\'{a}, Colombia} \email{[email protected]}
\author{Alfredo Donno} \address{Dipartimento di Matematica, Sapienza Università di Roma, Piazzale A. Moro, 5 \quad 00185 Roma, Italia} \email{[email protected]}
\author{Tatiana Nagnibeda} \address{Section de Mathématiques, Université de Genève, 2-4, Rue du Lièvre, Case Postale 64 1211 Genève 4, Suisse} \email{[email protected]}
\keywords{Dimer model, partition function, self-similar group, Schreier graph} \date{\today}
\begin{abstract} We study partition functions for the dimer model on families of finite graphs converging to infinite self-similar graphs and forming approximation sequences to certain well-known fractals. The graphs that we consider are provided by actions of finitely generated groups by automorphisms on rooted trees, and thus their edges are naturally labeled by the generators of the group. It is thus natural to consider weight functions on these graphs taking different values according to the labeling. We study in detail the well-known example of the Hanoi Towers group $H^{(3)}$, closely related to the Sierpi\'nski gasket. \end{abstract} \maketitle
\begin{center} {\it Dedicated to Toni Machì} \end{center}
\begin{center} {\footnotesize{\bf Mathematics Subject Classification (2010):} 82B20, 05A15, 20E08.\footnote{This research has been supported by the Swiss National Science Foundation Grant PP0022$_{-}$118946.}} \end{center}
\section{Introduction}
The dimer model is widely studied in different areas of mathematics ranging from combinatorics to probability theory to algebraic geometry. It originated in statistical mechanics, where it was introduced for the purpose of investigating the absorption of diatomic molecules on surfaces. In particular, one wants to find the number of ways in which diatomic molecules, called dimers, can cover a doubly periodic lattice, so that each dimer covers two adjacent lattice points and no lattice point remains uncovered. The first exact results on the dimer model in a finite rectangle of $\mathbb{Z}^2$ were obtained by Kasteleyn \cite{kasteleyn1,kasteleyn2} and independently by Temperley and Fisher \cite{TF} in the 1960s. A much more recent breakthrough is the solution of the dimer model on arbitrary planar bipartite periodic graphs by Kenyon, Okounkov and Sheffield \cite{KOS}. We refer to the lecture notes by Kenyon \cite{kenyon2} for an introduction to the dimer model. \\ \indent Let $Y=(V,E)$ be a finite graph with the vertex set $V$ and the edge set $E$. A \emph{dimer} is a graph consisting of two vertices connected by a non-oriented edge. A \textit{dimer covering} $D$ of $Y$ is an arrangement of dimers on $Y$ such that each vertex of $V$ is the endpoint of exactly one dimer. In other words, dimer coverings correspond exactly to \emph{perfect matchings} in $Y$.
Let $\mathcal{D}$ denote the set of dimer coverings of $Y$, and let $w:E\longrightarrow \mathbb{R}_{+}$ be a weight function defined on the edge set of $Y$. The physical meaning of the weight function can be, for example, the interaction energy between the atoms in a diatomic molecule. We associate with each dimer covering $D\in \mathcal{D}$ its weight $$ W(D):=\prod_{e\in D} w(e), $$ i.e., the product of the weights of the edges belonging to $D$. To each weight function $w$ on $Y$ corresponds the Boltzmann measure $\mu=\mu(Y,w)$ on $\mathcal{D}$, defined as $$ \mu(D)=\frac{W(D)}{\Phi(w)}. $$ The normalizing constant that ensures that this is a probability measure is one of the central objects in the theory; it is called the \textit{partition function}: $$ \Phi(w):=\sum_{D\in \mathcal{D}} W(D). $$ If the weight function is constant equal to 1, the partition function just counts
all the dimer coverings on $Y$.\\ \indent For a growing sequence of finite graphs, $\{Y_n\}_n$, one can ask whether the limit $$
\lim_{n\to \infty}\frac{\log (\Phi_n(w_n))}{|V(Y_n)|} $$ exists, where $w_n$ is the weight function on $Y_n$, and $\Phi_n(w_n)$ is the associated partition function. If it exists, this limit is called the \textit{thermodynamic limit}. For $w_n\equiv 1$, it specializes to the \textit{entropy} of the absorption of diatomic molecules per site.\\ \indent Let us recall the method developed by Kasteleyn \cite{kasteleyn1} to compute the partition function of the dimer model on finite planar graphs. Given a finite graph $Y=(V,E)$, it consists in constructing an anti-symmetric matrix such that the absolute value of its Pfaffian is the partition function of the dimer model on $Y$. Recall that the Pfaffian $Pf(M)$ of an anti-symmetric matrix $M=(m_{ij})_{i,j=1,\ldots,N}$, with $N$ even, is $$ Pf(M):=\sum_{\pi\in Sym(N)}sgn(\pi) m_{p_1p_2}\cdots m_{p_{N-1}p_N}, $$ where the sum runs over all permutations $\pi=\begin{pmatrix}
1 & 2 & \cdots & N \\
p_1 & p_2 & \cdots & p_N \end{pmatrix}$ such that $p_1<p_2$, $p_3<p_4,\ \ldots$, $p_{N-1}<p_N$ and $p_1<p_3<\cdots <p_{N-1}$. One has $(Pf(M))^2= \det(M)$.\\ \indent Given an orientation on $Y$ and a weight function $w$ on $E$, consider the oriented adjacency matrix
$A=(a_{ij})_{i,j=1,\ldots,|V|}$ of $(Y,w)$ with this orientation.
It is of course anti-symmetric. \begin{defi} A good orientation on $Y$ is an orientation of the edges of $Y$ such that the number of clockwise oriented edges around each face of $Y$ is odd. \end{defi} \begin{teo}[\cite{kasteleyn1}] \begin{enumerate} \item Let $Y=(V,E)$ be a planar graph with a good orientation, let $w$ be a weight function on $E$. If $A$ is the associated oriented adjacency matrix, then $$
\Phi(w)=|Pf(A)|. $$ \item If $Y$ is planar, a good orientation on $Y$ always exists. \end{enumerate} \end{teo} In this paper we apply Kasteleyn's method to study dimer partition functions on families of finite graphs that, on the one hand, form approximating sequences for some well-known fractals and, on the other hand, converge locally to interesting self-similar graphs. The graphs that we consider are Schreier graphs of certain finitely generated groups and thus come naturally endowed with a labeling of the edges of the graph by the generators of the group. It is therefore natural to think about the edges with different labels as being of different type, and to consider weight functions on them that take different values according to the type of the edge.\\
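Kasteleyn's theorem can be verified directly on the smallest planar example, the $4$-cycle: orienting the edges $0\to 1$, $1\to 2$, $2\to 3$, $0\to 3$ gives three clockwise edges around the unique bounded face, hence a good orientation, and $|Pf(A)| = \sqrt{\det A} = 2$ matches the two dimer coverings. A self-contained sketch (the brute-force matcher and the exact determinant routine are our own illustrative helpers, not part of the paper):

```python
from fractions import Fraction

def count_matchings(n, edges):
    """Brute-force number of dimer coverings (perfect matchings)
    of the graph on vertices 0..n-1 with the given edge list."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    def rec(rem):
        if not rem:
            return 1
        v = min(rem)                       # match the smallest free vertex
        return sum(rec(rem - {v, w}) for w in adj[v] if w in rem)
    return rec(frozenset(range(n)))

def pf_squared(A):
    """det(A) = Pf(A)^2 for an anti-symmetric matrix of even size,
    computed exactly by Gaussian elimination over the rationals."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    det = Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return det

# 4-cycle 0-1-2-3 with the good orientation 0->1, 1->2, 2->3, 0->3
A = [[ 0,  1,  0,  1],
     [-1,  0,  1,  0],
     [ 0, -1,  0,  1],
     [-1,  0, -1,  0]]
```

Here `count_matchings(4, [(0,1),(1,2),(2,3),(3,0)])` gives $2$ and `pf_squared(A)` gives $4$, consistent with $\Phi(w) = |Pf(A)|$ for the weight function $w \equiv 1$.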
\indent We now turn to Schreier graphs of self-similar groups and recall some basic facts and definitions. Let $T$ be the infinite regular rooted tree of degree $q$, i.e., the rooted tree where each vertex has $q$ children. Every vertex of the $n$-th level of the tree can be regarded as an element of the set $X^n$ of words of length $n$ over the alphabet $X=\{0,1,\ldots, q-1\}$; the set $X^{\omega}$ of infinite words over $X$ can be identified with the set $\partial T$ of infinite geodesic rays in $T$ emanating from the root. Now let $G<Aut(T)$ be a group acting on $T$ by automorphisms, generated by a finite symmetric set $S\subset G$. Throughout the paper we will assume that the action of $G$ is transitive on each level of the tree (note that any action by automorphisms is level-preserving). \begin{defi}\label{defischreiernovembre} The $n$-th {\it Schreier graph} $\Sigma_n$ of the action of $G$ on $T$, with respect to the generating set $S$, is a (labeled, oriented) graph with $V(\Sigma_n)=X^n$, and edges $(u,v)$ between vertices $u$ and $v$ such that $u$ is moved to $v$ by the action of some generator $s\in S$. The edge $(u,v)$ is then labeled by $s$. \\ \indent For an infinite ray $\xi\in\partial T$, the {\it orbital Schreier graph} $\Sigma_\xi$ has vertex set $G\cdot \xi$ and the edge set determined by the action of generators on this orbit, as above. \end{defi} The graphs $\Sigma_n$ are Schreier graphs of stabilizers of vertices of the $n$-th level of the tree and the graphs $\Sigma_\xi$ are Schreier graphs of stabilizers of infinite rays. It is not difficult to see that the orbital Schreier graphs are infinite and that the finite Schreier graphs $\{\Sigma_n\}_{n=1}^\infty$ form a sequence of graph coverings. Finite Schreier graphs converge to infinite Schreier graphs in the space of rooted (labeled) graphs with local convergence (rooted Gromov-Hausdorff convergence \cite{Grom}, Chapter 3). 
More precisely, for an infinite ray $\xi\in X^\omega$ denote by $\xi_n$ the $n$-th prefix of the word $\xi$. Then the sequence of rooted graphs $\{(\Sigma_n,\xi_n)\}$ converges to the infinite rooted graph $(\Sigma_\xi, \xi)$ in the space $\mathcal{X}$ of (rooted isomorphism classes of) rooted graphs endowed with the following metric: the distance between two rooted graphs $(Y_1,v_1)$ and $(Y_2,v_2)$ is $$ Dist((Y_1,v_1),(Y_2,v_2)):=\inf\left\{\frac{1}{r+1};\textrm{$B_{Y_1}(v_1,r)$ is isomorphic to $B_{Y_2}(v_2,r)$}\right\} $$ where $B_Y(v,r)$ is the ball of radius $r$ in $Y$ centered at $v$. \begin{defi}\label{defiselfsimilar} A finitely generated group $G<Aut(T)$ is {\it self-similar} if, for all $g\in G, x\in X$, there exist $h\in G, y\in X$ such that $$ g(xw)=yh(w), $$ for all finite words $w$ over the alphabet $X$. \end{defi} Self-similarity implies that $G$ can be embedded into the wreath product $Sym(q)\wr G$, so that any automorphism $g\in G$ can be represented as \begin{eqnarray}\label{tauleftformula} g=\tau(g_0,\ldots,g_{q-1}), \end{eqnarray} where $\tau\in Sym(q)$ describes the action of $g$ on the first level of $T$ and $g_i\in G$, $i=0,\ldots,q-1$, is the restriction of $g$ on the full subtree of $T$ rooted at the vertex $i$ of the first level of $T$ (observe that any such subtree is isomorphic to $T$). Hence, if $x\in X$ and $w$ is a finite word over $X$, we have $$ g(xw)=\tau(x)g_x(w). $$ See \cite{volo} and references therein for more information about this interesting class of groups, also known as {\it automata groups}.\\ \indent In many cases, self-similarity of a group action allows one to formulate rules for constructing inductively the sequence of Schreier graphs $\{\Sigma_n\}_{n\geq 1}$ \cite{fractal, volo} and thus to describe inductively the action of the group on the $n$-th level of the tree. 
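As a toy illustration of the wreath recursion \eqref{tauleftformula} (not one of the groups studied in this paper), consider the binary adding machine $a = \epsilon(id, a)$ acting on words over $X = \{0, 1\}$: the recursion $g(xw) = \tau(x)g_x(w)$ becomes binary addition of $1$ with carry, and $a$ acts transitively on each level of the tree. A sketch:

```python
def odometer(word):
    """Action of the adding machine a = epsilon(id, a) on a binary word:
    tau swaps the first letter; the restriction at 0 is the identity,
    the restriction at 1 is a itself (the carry)."""
    if not word:
        return word
    if word[0] == '0':
        return '1' + word[1:]          # a(0w) = 1 w
    return '0' + odometer(word[1:])    # a(1w) = 0 a(w)
```

Iterating `odometer` on the word `'000'` cycles through all $2^3$ words of length $3$ before returning to it, which is exactly level transitivity; the Schreier graph $\Sigma_3$ of this action (with respect to $\{a, a^{-1}\}$) is an $8$-cycle.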
More precisely, the action of $g\in G$ on the $n$-th level can be represented by a permutation matrix of size $q$, whose entries are matrices of size $q^{n-1}$. If $g$ is as in \eqref{tauleftformula}, the nonzero entries of the matrix are at position $(i,\tau(i))$ and correspond to the action of the restriction $g_i$ of $g$ on the subtree of depth $n-1$ rooted at $i$, for each $i=0,\ldots, q-1$.\\ \indent In the next sections we will systematically use this description. Our idea is to define recursively an oriented adjacency matrix associated with the action of the generators on the $n$-th level, with some prescribed signs. The rows and columns of this matrix are indexed by the words of length $n$ over the alphabet $\{0,1,\ldots, q-1\}$, in their lexicographic order. The signs can be interpreted as corresponding to a good orientation of the graph $\Sigma_n$, in the sense of Kasteleyn. This allows us to compute the partition function and the number of dimer coverings by studying the Pfaffian of this matrix.\\ \indent In this paper we compute the partition function of the dimer model on the following examples of planar Schreier graphs associated with self-similar actions:\\ \indent - the first Grigorchuk's group of intermediate growth (see \cite{grigorchuk} for a detailed account and further references);\\ \indent - the Basilica group that can be described as the iterated monodromy group of the complex polynomial $z^2-1$ (see \cite{volo} for connections of self-similar groups to complex dynamics);\\ \indent - the Hanoi Towers group $H^{(3)}$ whose action on the ternary tree models the famous Hanoi Towers game on three pegs, see \cite{hanoi}, and whose Schreier graphs are closely related to the Sierpi\'{n}ski gasket. 
Let us mention that counting dimers on the Schreier graphs of the Hanoi Towers group $H^{(3)}$ is related to the computation of the partition function for the Ising model on the Sierpi\'{n}ski triangle, via Fisher's correspondence \cite{fisher}, see Subsection 4.5 in our paper \cite{ddn}, devoted to the Ising model on the self-similar Schreier graphs.\\ \indent Finally, we also compute the partition function of the dimer model on (finite approximations of) the Sierpi\'{n}ski triangle. These graphs cannot be labeled so as to become Schreier graphs of a self-similar group, but they are very similar to the Schreier graphs of the group $H^{(3)}$. They admit several natural ways of labeling the edges by three different types; we describe three such labelings and provide computations for two of them.
\subsection{Plan of the paper} The rest of the paper is structured as follows. In Section \ref{SECTION2} we study the dimer model on the Schreier graphs associated with the action of the Grigorchuk's group and of the Basilica group on the rooted binary tree. Although the model on these graphs can easily be solved directly, we prefer to apply the general Kasteleyn theory: the partition function at each finite level is described, and the thermodynamic limit and the entropy are explicitly computed. In Section \ref{SECTION4} the dimer model is studied on the Schreier graphs of the Hanoi Towers group $H^{(3)}$. First, we follow a combinatorial approach using recursion and the property of self-similarity of these graphs (see Section \ref{SECTIONCOMBINATORIAL}). A recursive description of the partition function is given in Theorem \ref{numerohanoi}. The thermodynamic limit is not explicitly computed, although its existence is proven in two particular cases (see Proposition \ref{a=b=c} and Proposition \ref{COROLLARYEXISTENCE}). Then, the problem is studied by using Kasteleyn's method (Section \ref{2210}): the Pfaffian of the oriented adjacency matrix is recursively investigated via the Schur complement. The description of the partition function that we give in Theorem \ref{PROPOSITIONPARTITION} uses iterations of a rational map. In Section \ref{SECTIONSIERPINSKI}, the dimer model is studied on finite approximations of the well-known Sierpi\'{n}ski gasket: these are self-similar graphs closely related to the Schreier graphs of the group $H^{(3)}$. Two different weight functions on the edges of these graphs are considered and for both of them the partition function, the thermodynamic limit and the entropy are computed. In Section \ref{statistiques} we perform, for the Schreier graph of $H^{(3)}$ and the Sierpi\'{n}ski gasket, a statistical analysis of the random variable defined as the number of occurrences of a fixed label in a random dimer covering.
\section{The partition function of the dimer model on the Schreier graphs of the Grigorchuk's group and of the Basilica group}\label{SECTION2}
In this section we study the dimer model on two examples of Schreier graphs: the Schreier graphs of the Grigorchuk's group and of the Basilica group. Although in these cases the problem can easily be solved combinatorially, we prefer to apply Kasteleyn's theory here, because we will follow the same strategy in the next sections to solve the problem on more complicated graphs. \subsection{The Grigorchuk's group}
Let us start with the Grigorchuk's group: this is the self-similar group acting on the rooted binary tree generated by the automorphisms: $$ a=\epsilon(id,id), \qquad b=e(a,c), \qquad c=e(a,d), \qquad d=e(id,b), $$ where $e$ and $\epsilon$ are, respectively, the trivial and the non-trivial permutations in $Sym(2)$ (observe that all the generators are involutions). The following substitutional rules describe how to construct recursively the graph $\Sigma_{n+1}$ from $\Sigma_n$, starting from the Schreier graph of the first level $\Sigma_1$ \cite{hecke, grigorchuk}. More precisely, the construction consists in replacing the labeled subgraphs of $\Sigma_{n}$ on the top of the picture by new labeled graphs (on the bottom). \begin{center} \begin{picture}(400,110) \letvertex A=(65,100)\letvertex B=(105,100)\letvertex C=(145,100) \letvertex D=(185,100)\letvertex E=(225,100)\letvertex F=(265,100)\letvertex G=(305,100) \letvertex H=(345,100)\letvertex I=(45,20)\letvertex L=(65,20)\letvertex M=(105,20)\letvertex N=(125,20) \letvertex c=(145,20) \letvertex d=(185,20)\letvertex e=(225,20)\letvertex f=(265,20)\letvertex g=(305,20) \letvertex h=(345,20)
\put(82,60){$\Downarrow$}\put(162,60){$\Downarrow$}\put(242,60){$\Downarrow$}\put(322,60){$\Downarrow$}
\put(62,92){$u$} \put(102,92){$v$}\put(142,92){$u$} \put(182,92){$v$}\put(222,92){$u$} \put(262,92){$v$} \put(302,92){$u$}\put(342,92){$v$}
\put(40,10){$1u$} \put(62,10){$0u$}\put(102,10){$0v$} \put(122,10){$1v$}
\put(141,10){$1u$} \put(181,10){$1v$}\put(221,10){$1u$} \put(261,10){$1v$} \put(301,10){$1u$}\put(341,10){$1v$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$}
\drawundirectedloop(L){$d$}\drawundirectedloop(M){$d$}
\drawundirectededge(A,B){$a$}\drawundirectededge(C,D){$b$} \drawundirectededge(E,F){$c$} \drawundirectededge(G,H){$d$} \drawundirectededge(c,d){$d$}\drawundirectededge(e,f){$b$} \drawundirectededge(g,h){$c$} \drawundirectededge[r](L,I){$a$}\drawundirectededge(M,N){$a$} \drawundirectedcurvededge(L,M){$b$}\drawundirectedcurvededge[b](L,M){$c$} \end{picture} \end{center} starting from \begin{center} \begin{picture}(200,40) \letvertex A=(70,25)\letvertex B=(130,25)
\put(69,16){$0$}\put(127,16){$1$}\put(15,22){$\Sigma_1$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$}
\drawundirectedloop[l](A){$b,c,d$}\drawundirectedloop[r](B){$b,c,d$} \drawundirectededge(A,B){$a$} \end{picture} \end{center} In the study of the dimer model on these graphs, we consider the graphs without loops. We keep the notation $\Sigma_n$ for these graphs. The following pictures give examples for $n=1,2,3$. \begin{center} \begin{picture}(300,40) \letvertex A=(60,20)\letvertex B=(90,20)
\letvertex C=(155,20)\letvertex D=(185,20) \letvertex E=(225,20)\letvertex F=(255,20)
\drawundirectededge(A,B){$a$} \drawundirectededge(C,D){$a$} \drawundirectededge(E,F){$a$} \drawundirectedcurvededge(D,E){$b$}\drawundirectedcurvededge(E,D){$c$}
\put(40,18){$\Sigma_1$} \put(270,18){$\Sigma_2$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \end{picture} \end{center}
\begin{center} \begin{picture}(300,40) \letvertex A=(30,20)\letvertex B=(60,20)\letvertex C=(100,20)\letvertex D=(130,20) \letvertex E=(170,20)\letvertex F=(200,20)\letvertex G=(240,20)\letvertex H=(270,20)
\drawundirectededge(A,B){$a$} \drawundirectededge(C,D){$a$} \drawundirectededge(E,F){$a$} \drawundirectededge(G,H){$a$}
\put(285,18){$\Sigma_3$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$}
\drawundirectedcurvededge(B,C){$b$} \drawundirectedcurvededge(C,B){$c$} \drawundirectedcurvededge(D,E){$b$} \drawundirectedcurvededge(E,D){$d$} \drawundirectedcurvededge(F,G){$b$} \drawundirectedcurvededge(G,F){$c$} \end{picture} \end{center} In general, the Schreier graph $\Sigma_n$, without loops, has a linear shape and it has $2^{n-1}$ simple edges, all labeled by $a$, and $2^{n-1}-1$ cycles of length 2 whose edges are labeled by $b,c,d$.\\ \indent What we need in order to apply the Kasteleyn theory is an adjacency matrix giving a good orientation to $\Sigma_n$. We start by providing the (unoriented weighted) adjacency matrix $\Delta_n$ of $\Sigma_n$, which refers to the graph with loops, that one can easily get by using the self-similar definition of the generators of the group. It is defined by putting $$ a_1 = \begin{pmatrix}
0 & 1 \\
1 & 0 \end{pmatrix} \qquad b_1 = c_1 = d_1 = \begin{pmatrix}
1 & 0 \\
0 & 1 \end{pmatrix} $$ and, for every $n\geq 2$, $$ a_n = \begin{pmatrix}
0 & I_{n-1} \\
I_{n-1} & 0 \end{pmatrix}, \qquad b_n = \begin{pmatrix}
a_{n-1} & 0 \\
0 & c_{n-1} \end{pmatrix}, \qquad c_n = \begin{pmatrix}
a_{n-1} & 0 \\
0 & d_{n-1} \end{pmatrix}, \qquad d_n = \begin{pmatrix}
I_{n-1} & 0 \\
0 & b_{n-1} \end{pmatrix}, $$ where $a_n,b_n,c_n,d_n$ and $I_n$ are matrices of size $2^n$. If we put $A_n=aa_n, B_n=bb_n, C_n=cc_n$ and $D_n = dd_n$, then $\Delta_n$ is given by $$ \Delta_n = A_n + B_n + C_n + D_n = \begin{pmatrix}
ba_{n-1}+ca_{n-1}+dI_{n-1} & aI_{n-1} \\
aI_{n-1} & bc_{n-1} + cd_{n-1} + db_{n-1} \end{pmatrix}. $$ We now want to modify $\Delta_n$ in order to get an oriented adjacency matrix $\Delta_n'$ for $\Sigma_n$, corresponding to a good orientation in the sense of Kasteleyn. To do this, it is necessary to delete the nonzero diagonal entries in $\Delta_n$ (this is equivalent to deleting loops in the graph) and to construct an anti-symmetric matrix whose entries coincide, up to the sign, with the corresponding entries of $\Delta_n$. Finally, we have to verify that each cycle of $\Sigma_n$, with the orientation induced by $\Delta_n'$, has an odd number of edges clockwise oriented. So let us define the matrices $$ a_1' = \begin{pmatrix}
0 & 1 \\
-1 & 0 \end{pmatrix} \qquad \ b_1' = c_1' =d_1' = \begin{pmatrix}
1 & 0 \\
0 & 1 \end{pmatrix}. $$ Then, for every $n\geq 2$, we put $$ a_n' = \begin{pmatrix}
0 & I_{n-1} \\
-I_{n-1} & 0 \end{pmatrix}, \qquad b_n' = \begin{pmatrix}
a_{n-1}' & 0 \\
0 & c_{n-1}' \end{pmatrix}, \qquad c_n' = \begin{pmatrix}
a_{n-1}' & 0 \\
0 & d_{n-1}' \end{pmatrix}, \qquad d_n' = \begin{pmatrix}
I_{n-1} & 0 \\
0 & b_{n-1}' \end{pmatrix}. $$ For each $n$, we put $A_n'=aa_n', B_n'=bb_n', C_n'=cc_n'$ and $D_n' = dd_n'$. Moreover, set $$ J_1 = \begin{pmatrix}
b+c+d & 0 \\
0 & b+c+d \end{pmatrix} \quad \mbox{and } \ \ J_n =\begin{pmatrix}
dI_{n-1} & 0 \\
0 & \overline{J_{n-1}} \end{pmatrix} \ \ \mbox{ for } n\geq 2, $$ where, for every $n\geq 1$, the matrix $\overline{J_n}$ is obtained from $J_n$ with the following substitutions: $$ b\mapsto d \qquad c\mapsto b \qquad d\mapsto c. $$ Define $$ \Delta_1' = A_1' + B_1' + C_1' + D_1' - J_1 = \begin{pmatrix}
0 & a \\
-a & 0 \end{pmatrix} $$ and, for each $n\geq 2$, $$ \Delta_n' = A_n' + B_n' + C_n' + D_n' - J_n = \begin{pmatrix}
ba_{n-1}'+ca_{n-1}' & aI_{n-1} \\
-aI_{n-1} & bc_{n-1}' + cd_{n-1}' + db_{n-1}' - \overline{J_{n-1}} \end{pmatrix}. $$ Note that the matrix $J_n$ is introduced to erase the nonzero diagonal entries of $\Delta_n$, corresponding to loops. \begin{prop} The matrix $\Delta_n'$ induces a good orientation on the Schreier graph $\Sigma_n$ of the Grigorchuk's group. \end{prop}
\begin{proof} It is easy to show by induction that $\Delta_n'$ is anti-symmetric and that each entry of $\Delta_n'$ coincides, up to the sign, with the corresponding entry of the adjacency matrix $\Delta_n$ of $\Sigma_n$, where loops are deleted. Finally, we know that all cycles in the Schreier graph have length $2$ and this ensures that each cycle has a good orientation in the sense of Kasteleyn. \end{proof} \begin{teo} The partition function of the dimer model on the Schreier graph $\Sigma_n$ of the Grigorchuk group is $$ \Phi_n(a,b,c,d)= a^{2^{n-1}}. $$ \end{teo}
\begin{proof} It is easy to check, by using the self-similar definition of the generators of the group, that $$ a(1^{n-1}0)=01^{n-2}0 \qquad b(1^{n-1}0)=c(1^{n-1}0)=d(1^{n-1}0) =1^{n-1}0 $$ and $$ a(1^n)=01^{n-1} \qquad b(1^n)=c(1^n)=d(1^n)=1^n. $$ This implies that the vertices $1^{n-1}0$ and $1^n$ are the (only) two vertices of degree $1$ of $\Sigma_n$, for each $n$. This allows us to compute $\det(\Delta_n')$ easily by an iterated application of the Laplace expansion. We begin with the element $a$ at the entry $(2^{n-1},2^n)$, which is the only nonzero element of the column $2^n$. So we can \lq\lq burn\rq\rq the row $2^{n-1}$ and the column $2^n$. Similarly, row $2^n$ and column $2^{n-1}$ can be deleted, and a second factor $a$ appears in $\det(\Delta_n')$. With these deletions, we have \lq\lq deleted\rq\rq in the graph all edges going to and coming from the vertex $01^{n-1}$ (corresponding to the row and column $2^{n-1}$). So the vertex $001^{n-2}$ (which is adjacent to $01^{n-1}$ in $\Sigma_n$) now has degree $1$, and in the row and column corresponding to it there is just one entry $a$ (or $-a$), corresponding to the edge joining it to $101^{n-2}$. Hence, the Laplace expansion can be applied again with respect to this element, and so on. Observe that each simple edge labeled $a$ of $\Sigma_n$ contributes $a^2$ to $\det(\Delta_n')$. The assertion follows since the number of simple edges is $2^{n-1}$. \end{proof}
\begin{cor} The thermodynamic limit is $\frac{1}{2}\log a$. In particular, the entropy of absorption of diatomic molecules per site is zero. \end{cor}
\begin{proof} A direct computation gives \begin{eqnarray*} \lim_{n\to +\infty}
\frac{\log(\Phi_n(a,b,c,d))}{|V(\Sigma_n)|}=\lim_{n\to +\infty} \frac{\log(\Phi_n(a,b,c,d))}{2^n}= \frac{1}{2}\log a. \end{eqnarray*} By putting $a=1$, we get the entropy. \end{proof}
\subsection{The Basilica group}
The Basilica group \cite{primo} is the self-similar group generated by the automorphisms: $$ a=e(b,id), \qquad b=\epsilon(a,id). $$ It acts level-transitively on the binary tree, and the following substitutional rules \cite{ddn2}
allow one to construct $\Sigma_{n+1}$ inductively from $\Sigma_n$, \begin{center} \begin{picture}(400,110) \letvertex A=(120,100)\letvertex B=(100,20)\letvertex C=(140,20)
\letvertex D=(180,100)\letvertex E=(220,100)\letvertex F=(180,20)\letvertex G=(220,20) \letvertex H=(280,100)\letvertex I=(320,100)\letvertex L=(260,10)\letvertex M=(300,20)\letvertex N=(340,10) \put(117,60){$\Downarrow$}\put(197,60){$\Downarrow$}\put(297,60){$\Downarrow$}
\put(117,92){$u$}\put(97,11){$1u$}\put(137,11){$0u$}
\put(177,92){$u$}\put(217,92){$v$}\put(177,11){$0u$}\put(217,11){$0v$}
\put(277,92){$u$}\put(317,92){$v$} \put(257,1){$0u$}\put(296,10){$1v$}\put(337,1){$0v$}
\put(327,97){$u\neq v$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$}
\drawundirectedloop(A){$a$}\drawundirectedloop[l](B){$a$} \drawundirectedcurvededge(B,C){$b$}\drawundirectedcurvededge(C,B){$b$}
\drawundirectededge(D,E){$b$} \drawundirectededge(F,G){$a$} \drawundirectededge(H,I){$a$} \drawundirectedcurvededge(L,M){$b$}\drawundirectedloop(M){$a$} \drawundirectedcurvededge(M,N){$b$} \end{picture} \end{center} starting with the Schreier graph $\Sigma_1$ on the first level. \begin{center} \begin{picture}(200,40) \letvertex A=(70,25)\letvertex B=(130,25)
\put(69,16){$0$}\put(127,16){$1$}\put(155,21){$\Sigma_1$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$}
\drawundirectedloop[l](A){$a$}\drawundirectedloop[r](B){$a$} \drawundirectedcurvededge(A,B){$b$}\drawundirectedcurvededge(B,A){$b$} \end{picture} \end{center}
We consider here the dimer model on the Schreier graphs of the Basilica group without loops, as in the following pictures, for $n=1,...,5$. \unitlength=0,3mm \begin{center} \begin{picture}(300,30) \letvertex A=(30,15)\letvertex B=(70,15)\letvertex C=(150,15)\letvertex D=(190,15) \letvertex E=(230,15)\letvertex F=(270,15)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$}
\drawundirectedcurvededge(A,B){$b$} \drawundirectedcurvededge(B,A){$b$} \drawundirectedcurvededge(C,D){$b$} \drawundirectedcurvededge(D,C){$b$} \drawundirectedcurvededge(D,E){$a$} \drawundirectedcurvededge(E,D){$a$} \drawundirectedcurvededge(E,F){$b$} \drawundirectedcurvededge(F,E){$b$} \put(5,12){$\Sigma_1$} \put(295,12){$\Sigma_2$} \end{picture} \end{center}
\begin{center} \begin{picture}(300,60) \letvertex A=(50,30)\letvertex B=(90,30)\letvertex C=(130,30)\letvertex D=(150,50) \letvertex E=(150,10)\letvertex F=(170,30)\letvertex G=(210,30)\letvertex H=(250,30)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$}
\drawundirectededge(C,D){$b$} \drawundirectededge(E,C){$b$} \drawundirectededge(F,E){$b$} \drawundirectededge(D,F){$b$}
\drawundirectedcurvededge(A,B){$b$} \drawundirectedcurvededge(B,A){$b$} \drawundirectedcurvededge(B,C){$a$} \drawundirectedcurvededge(C,B){$a$} \drawundirectedcurvededge(F,G){$a$} \drawundirectedcurvededge(G,F){$a$} \drawundirectedcurvededge(G,H){$b$} \drawundirectedcurvededge(H,G){$b$} \put(20,27){$\Sigma_3$} \end{picture} \end{center}
\begin{center} \begin{picture}(300,140) \letvertex A=(10,70) \letvertex B=(50,70)\letvertex C=(90,70)\letvertex D=(110,90)\letvertex E=(110,50) \letvertex F=(130,70)\letvertex G=(150,90)\letvertex H=(150,50)\letvertex I=(170,70)
\letvertex J=(150,130)\letvertex K=(150,10)
\letvertex L=(190,90) \letvertex M=(190,50)\letvertex N=(210,70) \letvertex O=(250,70)\letvertex P=(290,70)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawvertex(O){$\bullet$}\drawvertex(P){$\bullet$} \drawvertex(J){$\bullet$}\drawvertex(K){$\bullet$}
\drawundirectedcurvededge(A,B){$b$}\drawundirectedcurvededge(B,A){$b$} \drawundirectedcurvededge(B,C){$a$}\drawundirectedcurvededge(C,B){$a$} \drawundirectededge(C,D){$b$} \drawundirectededge(D,F){$b$} \drawundirectededge(F,E){$b$} \drawundirectededge(E,C){$b$} \drawundirectededge(F,G){$a$} \drawundirectededge(G,I){$a$} \drawundirectededge(I,H){$a$} \drawundirectededge(H,F){$a$}
\drawundirectededge(I,L){$b$} \drawundirectededge(L,N){$b$} \drawundirectededge(N,M){$b$} \drawundirectededge(M,I){$b$}
\drawundirectedcurvededge(G,J){$b$}\drawundirectedcurvededge(J,G){$b$} \drawundirectedcurvededge(H,K){$b$}\drawundirectedcurvededge(K,H){$b$} \drawundirectedcurvededge(N,O){$a$}\drawundirectedcurvededge(O,N){$a$} \drawundirectedcurvededge(O,P){$b$}\drawundirectedcurvededge(P,O){$b$} \put(-20,67){$\Sigma_4$} \end{picture} \end{center}
\begin{center} \begin{picture}(400,210) \letvertex A=(0,110)\letvertex B=(40,110)\letvertex C=(80,110)\letvertex D=(100,130) \letvertex E=(100,90)\letvertex F=(120,110)\letvertex G=(140,130)\letvertex H=(140,160) \letvertex I=(160,110)\letvertex L=(140,90)\letvertex M=(140,60)\letvertex N=(170,140) \letvertex O=(200,150)\letvertex R=(230,140)\letvertex S=(240,110)\letvertex T=(230,80) \letvertex U=(200,70)\letvertex V=(170,80)\letvertex P=(200,180)\letvertex Q=(200,210) \letvertex Z=(200,40)\letvertex J=(200,10)\letvertex K=(260,130)\letvertex X=(280,110) \letvertex W=(260,90)\letvertex g=(260,160)\letvertex h=(260,60)\letvertex c=(300,130) \letvertex Y=(300,90)\letvertex d=(320,110)\letvertex e=(360,110)\letvertex f=(400,110) \drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawvertex(O){$\bullet$}\drawvertex(P){$\bullet$} \drawvertex(J){$\bullet$}\drawvertex(K){$\bullet$} \drawvertex(Q){$\bullet$}\drawvertex(R){$\bullet$} \drawvertex(S){$\bullet$}\drawvertex(T){$\bullet$} \drawvertex(U){$\bullet$}\drawvertex(V){$\bullet$} \drawvertex(W){$\bullet$}\drawvertex(X){$\bullet$} \drawvertex(Y){$\bullet$}\drawvertex(Z){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(d){$\bullet$}\drawvertex(e){$\bullet$}
\drawundirectedcurvededge(A,B){$b$}\drawundirectedcurvededge(B,A){$b$} \drawundirectedcurvededge(B,C){$a$}\drawundirectedcurvededge(C,B){$a$} \drawundirectededge(C,D){$b$} \drawundirectededge(D,F){$b$} \drawundirectededge(F,E){$b$} \drawundirectededge(E,C){$b$}
\drawundirectededge(F,G){$a$} \drawundirectededge(G,I){$a$} \drawundirectededge(I,L){$a$} \drawundirectededge(L,F){$a$}
\drawundirectedcurvededge(G,H){$b$}\drawundirectedcurvededge(H,G){$b$} \drawundirectedcurvededge(L,M){$b$}\drawundirectedcurvededge(M,L){$b$}
\drawundirectededge(I,N){$b$} \drawundirectededge(N,O){$b$} \drawundirectededge(O,R){$b$} \drawundirectededge(R,S){$b$} \drawundirectededge(S,T){$b$} \drawundirectededge(T,U){$b$} \drawundirectededge(U,V){$b$} \drawundirectededge(V,I){$b$}
\drawundirectedcurvededge(O,P){$a$}\drawundirectedcurvededge(P,O){$a$} \drawundirectedcurvededge(Q,P){$b$}\drawundirectedcurvededge(P,Q){$b$}
\drawundirectedcurvededge(U,Z){$a$}\drawundirectedcurvededge(Z,U){$a$} \drawundirectedcurvededge(Z,J){$b$}\drawundirectedcurvededge(J,Z){$b$}
\drawundirectededge(S,K){$a$} \drawundirectededge(K,X){$a$} \drawundirectededge(X,W){$a$} \drawundirectededge(W,S){$b$} \drawundirectededge(X,c){$b$} \drawundirectededge(c,d){$b$} \drawundirectededge(d,Y){$b$} \drawundirectededge(Y,X){$b$}
\drawundirectedcurvededge(d,e){$a$}\drawundirectedcurvededge(e,d){$a$} \drawundirectedcurvededge(e,f){$b$}\drawundirectedcurvededge(f,e){$b$} \drawundirectedcurvededge(K,g){$b$}\drawundirectedcurvededge(g,K){$b$} \drawundirectedcurvededge(W,h){$b$}\drawundirectedcurvededge(h,W){$b$} \put(20,60){$\Sigma_5$} \end{picture} \end{center} It follows from the substitutional rules described above that each $\Sigma_n$ is a cactus (i.e., a separable graph whose blocks are either cycles or single edges), and that the maximal length of a cycle in $\Sigma_n$ is $2^{\lceil\frac{n}{2}\rceil}$. We further compute the number of cycles in $\Sigma_n$, which will be needed later. Denote by $a^i_j$ the number of cycles of length $j$ labeled by $a$ in $\Sigma_i$ and, similarly, denote by $b^i_j$ the number of cycles of length $j$ labeled by $b$ in $\Sigma_i$.
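These structural claims can be cross-checked by brute force from the self-similar action of $a$ and $b$ on binary words. A Python sketch (helper names are ours; we use the convention $g=\sigma(g_0,g_1)$ with $g(xw)=\sigma(x)\,g_x(w)$), verifying that the cycles of length $\geq 2$ number $2^{n-1}+1$ in total and that the maximal cycle length is $2^{\lceil n/2\rceil}$:

```python
from collections import Counter

def act_a(w):
    # a = e(b, id): trivial root permutation, acts as b below 0
    if not w:
        return w
    return '0' + act_b(w[1:]) if w[0] == '0' else w

def act_b(w):
    # b = epsilon(a, id): swaps the root letter, acts as a below 0
    if not w:
        return w
    return '1' + act_a(w[1:]) if w[0] == '0' else '0' + w[1:]

def cycles(act, n):
    """Multiset of cycle lengths of the permutation act on {0,1}^n."""
    words = [format(i, '0{}b'.format(n)) for i in range(2 ** n)]
    seen, lens = set(), Counter()
    for w in words:
        if w in seen:
            continue
        length, x = 0, w
        while x not in seen:
            seen.add(x)
            length += 1
            x = act(x)
        lens[length] += 1
    return lens

for n in range(4, 11):
    a_cyc, b_cyc = cycles(act_a, n), cycles(act_b, n)
    total = sum(m for l, m in (a_cyc + b_cyc).items() if l >= 2)  # discard loops
    assert total == 2 ** (n - 1) + 1
    assert max(max(a_cyc), max(b_cyc)) == 2 ** -(-n // 2)         # 2^(ceil(n/2))
```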
\begin{prop}\label{computationcycles} For any $n\geq 4$ consider the Schreier graph $\Sigma_n$ of the Basilica group. For each $k\geq 1$, the number of cycles of length $2^k$ labeled by $a$ is $$ a^n_{2^k} = \begin{cases} 2^{n-2k-1} & 1\leq k \leq \frac{n-1}{2}-1\\ 2 & k = \frac{n-1}{2} \end{cases}, \mbox{ for } n \mbox{ odd},\qquad a^n_{2^k}= \begin{cases} 2^{n-2k-1} & 1\leq k \leq \frac{n}{2}-1\\ 1 & k = \frac{n}{2}
\end{cases}, \mbox{ for } n \mbox{ even} $$ and the number of cycles of length $2^k$ labeled by $b$ is $$ b^n_{2^k}=\begin{cases} 2^{n-2k} & 1\leq k \leq \frac{n-1}{2}-1\\ 2 & k = \frac{n-1}{2}\\ 1 & k = \frac{n+1}{2} \end{cases}, \mbox{ for } n \mbox{ odd}, \qquad b^n_{2^k}=\begin{cases} 2^{n-2k} & 1\leq k \leq \frac{n}{2}-1\\ 2& k = \frac{n}{2} \end{cases}, \mbox{ for } n \mbox{ even}. $$ \end{prop} \begin{proof} The recursive formulae for the generators imply that, for each $n\geq 3$, one has $$ a^n_2 = b^{n-1}_2 \ \ \ \mbox{and } \ \ \ b^n_2 = a^{n-1}_1 = 2^{n-2} $$ and in general $a^n_{2^k} = a^{n-2(k-1)}_2$ and $b^n_{2^k} = b^{n-2(k-1)}_2$. In particular, for each $n\geq 4$, the number of $2$-cycles labeled by $a$ is $2^{n-3}$ and the number of $2$-cycles labeled by $b$ is $2^{n-2}$. More generally, the number of cycles of length $2^k$ is given by $$ a^n_{2^k} = 2^{n-2k-1}, \ \ \ \ b^n_{2^k}=2^{n-2k}, $$ where the last equality is true if $n-2k+2 \geq 4$, i.e., for $k\leq \frac{n}{2}-1$. Finally, for $n$ odd, there is only one cycle of length $2^{\frac{n+1}{2}}$ labeled by $b$ and there are four cycles of length $2^{\frac{n-1}{2}}$, two of them labeled by $a$ and two labeled by $b$; for $n$ even, there are three cycles of length $2^{\frac{n}{2}}$, two of them labeled by $b$ and one labeled by $a$. \end{proof} \begin{cor} For each $n\geq 4$, the number of cycles labeled by $a$ in $\Sigma_n$ is $$ \begin{cases} \frac{2^{n-1}+2}{3} & n \ \text{odd},\\ \frac{2^{n-1}+1}{3} & n \ \text{even} \end{cases} $$ and the number of cycles labeled by $b$ in $\Sigma_n$ is $$ \begin{cases} \frac{2^n+1}{3} & n \ \text{odd},\\ \frac{2^n+2}{3} & n \ \text{even}. \end{cases} $$ The total number of cycles of length $\geq 2$ is $2^{n-1}+1$ and the total number of edges, without loops, is $3\cdot 2^{n-1}$. \end{cor} Also in this case we construct an adjacency matrix $\Delta_n'$ associated with a good orientation of $\Sigma_n$, in the sense of Kasteleyn. 
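As a quick arithmetic cross-check, summing the counts of Proposition \ref{computationcycles} reproduces the closed forms of the Corollary (a sketch; function names are ours):

```python
def a_counts(n):
    """The numbers a^n_{2^k} from the Proposition, k = 1, 2, ... (n >= 4)."""
    if n % 2:
        return [2 ** (n - 2 * k - 1) for k in range(1, (n - 1) // 2)] + [2]
    return [2 ** (n - 2 * k - 1) for k in range(1, n // 2)] + [1]

def b_counts(n):
    """The numbers b^n_{2^k} from the Proposition, k = 1, 2, ... (n >= 4)."""
    if n % 2:
        return [2 ** (n - 2 * k) for k in range(1, (n - 1) // 2)] + [2, 1]
    return [2 ** (n - 2 * k) for k in range(1, n // 2)] + [2]

for n in range(4, 20):
    a_total = (2 ** (n - 1) + 2) // 3 if n % 2 else (2 ** (n - 1) + 1) // 3
    b_total = (2 ** n + 1) // 3 if n % 2 else (2 ** n + 2) // 3
    assert sum(a_counts(n)) == a_total
    assert sum(b_counts(n)) == b_total
    assert a_total + b_total == 2 ** (n - 1) + 1  # total number of cycles
```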
We first present the (unoriented weighted) adjacency matrix $\Delta_n$ of the Schreier graph of the Basilica group. Define the matrices $$ a_1 = a_1^{-1} = \begin{pmatrix}
1 & 0 \\
0 & 1 \end{pmatrix}, \ \ \ \mbox{and } \ b_1 = b_1^{-1} = \begin{pmatrix}
0 & 1 \\
1 & 0 \end{pmatrix}. $$ Then, for every $n\geq 2$, we put $$ a_n = \begin{pmatrix}
b_{n-1} & 0 \\
0 & I_{n-1} \end{pmatrix}, \qquad a_n^{-1} = \begin{pmatrix}
b_{n-1}^{-1} & 0 \\
0 & I_{n-1} \end{pmatrix}, \qquad b_n = \begin{pmatrix}
0 & a_{n-1} \\
I_{n-1} & 0 \end{pmatrix},\qquad b_n^{-1} = \begin{pmatrix}
0 & I_{n-1} \\
a_{n-1}^{-1} & 0 \end{pmatrix}. $$ If we put $A_n=aa_n, A_n^{-1}=aa_n^{-1}, B_n=bb_n$ and $B_n^{-1} = bb_n^{-1}$, then $\Delta_n$ is given by $$ \Delta_n = A_n + A_n^{-1}+B_n +B_n^{-1} = \begin{pmatrix}
a(b_{n-1}+b_{n-1}^{-1}) & b(a_{n-1}+I_{n-1}) \\
b(a_{n-1}^{-1}+I_{n-1}) & 2aI_{n-1} \end{pmatrix}. $$ We modify now $\Delta_n$ in order to get the oriented adjacency matrix $\Delta_n'$. To do this, we need to delete the nonzero diagonal entries and to construct an anti-symmetric matrix whose entries are equal, up to the sign, to the corresponding entries of $\Delta_n$. Finally, we have to check that each elementary cycle of $\Sigma_n$, with the orientation induced by $\Delta_n'$, has an odd number of edges clockwise oriented. We define the matrices $$ a_1' = \begin{pmatrix}
1 & 0 \\
0 & 1 \end{pmatrix}, \qquad a_1'^{-1} = \begin{pmatrix}
-1 & 0 \\
0 & -1 \end{pmatrix} \ \mbox{and } \ b_1' = b_1'^{-1}= \begin{pmatrix}
0 & 1 \\
-1 & 0 \end{pmatrix}. $$ Then, for every $n\geq 2$, we put $$ a_n' = \begin{pmatrix}
b_{n-1}' & 0 \\
0 & I_{n-1} \end{pmatrix}, \quad a_n'^{-1} = \begin{pmatrix}
b_{n-1}'^{-1} & 0 \\
0 & -I_{n-1} \end{pmatrix}, \quad b_n' = \begin{pmatrix}
0 & a_{n-1}' \\
-I_{n-1} & 0 \end{pmatrix},\quad \ b_n'^{-1} = \begin{pmatrix}
0 & I_{n-1} \\
a_{n-1}'^{-1} & 0 \end{pmatrix}. $$ (Observe that here the exponent $-1$ is just notation and does not denote the inverse in the algebraic sense.) Put $A_n'=aa_n', A_n'^{-1}=aa_n'^{-1}, B_n'=bb_n'$ and $B_n'^{-1} = bb_n'^{-1}$. Then $$ \Delta_1' = A_1' + A_1'^{-1}+B_1' +B_1'^{-1} = \begin{pmatrix}
0 & 2b \\
-2b & 0 \end{pmatrix} $$ and, for each $n\geq 2$, $$ \Delta_n' = A_n' + A_n'^{-1}+B_n' +B_n'^{-1} = \begin{pmatrix}
a(b_{n-1}'+b_{n-1}'^{-1}) & b(a_{n-1}'+I_{n-1}) \\
b(a_{n-1}'^{-1}-I_{n-1}) & 0 \end{pmatrix}. $$ \begin{prop}\label{orientedrules} $\Delta_n'$ induces a good orientation on the Schreier graph $\Sigma_n$ of the Basilica group. \end{prop} \begin{proof} It is easy to show by induction that $\Delta_n'$ is anti-symmetric and that each entry of $\Delta_n'$ coincides, up to the sign, with the corresponding entry of the adjacency matrix $\Delta_n$ of $\Sigma_n$, where loops are deleted. We also prove the assertion about the orientation by induction. For $n=1,2$ we have $\Delta_1' = \begin{pmatrix}
0 & 2b \\
-2b & 0 \end{pmatrix}$ and $\Delta_2' = \begin{pmatrix}
0 & 2a & 2b & 0 \\
-2a & 0 & 0 & 2b \\
-2b & 0 & 0 & 0 \\
0 & -2b & 0 & 0 \end{pmatrix}$, which correspond to\unitlength=0,4mm \begin{center} \begin{picture}(300,30) \letvertex A=(30,15)\letvertex B=(70,15)\letvertex C=(150,15)\letvertex D=(190,15) \letvertex E=(230,15)\letvertex F=(270,15)
\put(22,11){0}\put(72,11){1}\put(138,12){10}\put(186,5){00}\put(226,5){01}\put(271,12){11}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$}
\drawcurvededge(A,B){$b$} \drawcurvededge[b](A,B){$b$} \drawcurvededge(D,C){$b$} \drawcurvededge[b](D,C){$b$} \drawcurvededge(D,E){$a$} \drawcurvededge[b](D,E){$a$} \drawcurvededge(E,F){$b$} \drawcurvededge[b](E,F){$b$} \end{picture} \end{center} Now look at the second block $b(a_{n-1}'+I_{n-1}) = b\begin{pmatrix}
b_{n-2}'+I_{n-2} & 0 \\
0 & 2I_{n-2} \end{pmatrix}$ of $\Delta'_n$. The matrix $2bI_{n-2}$ corresponds to the $2$-cycles \begin{center} \begin{picture}(200,40)
\letvertex A=(70,25)\letvertex B=(130,25)
\put(53,22){$01u$}\put(132,22){$11u$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$}
\drawcurvededge(A,B){$b$}\drawcurvededge[b](A,B){$b$} \end{picture} \end{center} which come from the $a$-loops of $\Sigma_{n-1}$ centered at $1u$ and so they are $2^{n-2}$. The block $b(b_{n-2}'+I_{n-2})$ corresponds to the $b$-cycles of length $2^k\geq 4$. These cycles come from the $a$-cycles of level $n-1$ but they have double length. In particular, $bb_{n-2}'$ corresponds to the $b$-cycles at level $n-2$ (well oriented by induction), that correspond to the $a$-cycles at level $n-1$ with the same good orientation given by the substitutional rule \begin{center} \begin{picture}(400,20) \letvertex D=(100,15)\letvertex E=(140,15)\letvertex F=(260,15)\letvertex G=(300,15)
\put(197,13){$\Longrightarrow$}
\put(97,7){$u$}\put(137,7){$v$}\put(257,7){$0u$}\put(297,7){$0v$}
\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}
\drawedge(D,E){$b$} \drawedge(F,G){$a$} \end{picture} \end{center} Such a cycle labeled $a$ with vertices $u_1, u_2, \ldots, u_{2^{k-1}}$ (of length $2^{k-1}\geq 2$) at level $n-1$ gives rise to a $b$-cycle of length $2^{k}$ in $\Sigma_n$ following the third substitutional rule. In this new cycle, by induction, there is an odd number of clockwise oriented edges of type \begin{center} \begin{picture}(400,20) \letvertex D=(180,15)\letvertex E=(220,15) \put(172,6){$0u_i$}\put(215,6){$1u_{i+1}$} \drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$} \drawundirectededge(D,E){$b$} \end{picture} \end{center} All the other edges have the same orientation (given by the matrix $bI_{n-2}$). Since these edges are in even number, this implies that the $b$-cycle is well oriented. A similar argument can be developed for edges labeled by $a$ and this completes the proof. \end{proof}
\begin{teo} The partition function of the dimer model on the Schreier graph $\Sigma_n$ of the Basilica group is $$ \Phi_n(a,b)=
\begin{cases}
2^{\frac{2^n+1}{3}}b^{2^{n-1}} & n \text{ odd}, \\
2^{\frac{2^n+2}{3}}b^{2^{n-1}} & n \text{ even}.
\end{cases} $$ \end{teo}
\begin{proof} For small $n$ the assertion can be directly verified. Suppose now $n\geq 5$. Observe that we have $\det(\Delta_n') = b^{2^n}(\det(a_{n-1}'+I_{n-1}))^2$, since the matrices $a_{n-1}'+I_{n-1}$ and $a_{n-1}'^{-1}-I_{n-1}$ have the same determinant. Let us prove by induction on $n$ that, for every $n\geq 5$, $(\det(a_{n-1}'+I_{n-1}))^2 = 2^{2^{n-1}}\cdot 2^{2l'}$, where $l'$ is the number of cycles labeled by $b$ in $\Sigma_n$ having length greater than 2. One can verify by direct computation that $\det(\Delta_5') = 2^{22}$ and $\det(\Delta_6') = 2^{44}$. Now \begin{eqnarray*}
\det(\Delta_n') &=& (\det(a_{n-1}'+I_{n-1}))^2 = \left| \begin{matrix}
b_{n-2}'+I_{n-2} & 0 \\
0 & 2I_{n-2}
\end{matrix}\right|^2 = 2^{2^{n-1}}\cdot (\det(b_{n-2}'+I_{n-2}))^2\\ &=& 2^{2^{n-1}}\left| \begin{matrix}
I_{n-3} & a_{n-3}' \\
-I_{n-3} & I_{n-3}
\end{matrix}\right|^2 = 2^{2^{n-1}} (\det(a_{n-3}'+I_{n-3}))^2 = 2^{2^{n-1}}\cdot 2^{2^{n-3}}\cdot 2^{2l''}, \end{eqnarray*} where the last equality follows by induction and $l''$ is the number of $b$-cycles in $\Sigma_{n-2}$ having length greater than 2. Now observe that $l''$ is also equal to the number of $a$-cycles of length greater than 2 in $\Sigma_{n-1}$ but also to the number of $b$-cycles of length greater than 4 in $\Sigma_n$. We already proved that $b^n_4 = b^{n-2}_2 = 2^{n-4}$ and so $2^{2^{n-3}} = 2^{2b^n_4}$. Similarly $2^{2^{n-1}} = 2^{2b^n_2}$. Then one gets the assertion by using computations made in Proposition \ref{computationcycles}. \end{proof}
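As for the Grigorchuk group, the recursion for $\Delta_n'$ can be verified numerically. A Python sketch (helper names are ours; we check $\det(\Delta_n')=\Phi_n^2$ with sample numeric weights, noting that $\Phi_n$ does not depend on $a$):

```python
import numpy as np

def basilica_blocks(n):
    """Oriented blocks a'_n, a'^{-1}_n, b'_n, b'^{-1}_n of size 2^n."""
    if n == 1:
        bp = np.array([[0, 1], [-1, 0]])
        return np.eye(2, dtype=int), -np.eye(2, dtype=int), bp, bp.copy()
    ap1, api1, bp1, bpi1 = basilica_blocks(n - 1)
    m = 2 ** (n - 1)
    I, Z = np.eye(m, dtype=int), np.zeros((m, m), dtype=int)
    return (np.block([[bp1, Z], [Z, I]]),    # a'_n
            np.block([[bpi1, Z], [Z, -I]]),  # a'^{-1}_n
            np.block([[Z, ap1], [-I, Z]]),   # b'_n
            np.block([[Z, I], [api1, Z]]))   # b'^{-1}_n

def delta_prime(n, a, b):
    ap, api, bp, bpi = basilica_blocks(n)
    return a * (ap + api) + b * (bp + bpi)

def phi(n, b):
    """Partition function from the Theorem."""
    e = (2 ** n + 1) // 3 if n % 2 else (2 ** n + 2) // 3
    return 2 ** e * b ** (2 ** (n - 1))

for n in range(1, 7):
    M = delta_prime(n, 3.0, 1.0)
    assert np.allclose(M, -M.T)
    assert np.isclose(np.linalg.det(M), phi(n, 1.0) ** 2, rtol=1e-6)
```

For $n=5$ and $n=6$ (with $b=1$) this reproduces the determinants $2^{22}$ and $2^{44}$ quoted in the proof.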
\begin{cor} The thermodynamic limit is $\frac{1}{3}\log 2 + \frac{1}{2}\log b$. In particular, the entropy of absorption of diatomic molecules per site is $\frac{1}{3}\log 2$. \end{cor}
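The convergence is explicit: writing $\Phi_n = 2^{e_n} b^{2^{n-1}}$ with $e_n/2^n\to\frac{1}{3}$, one has $\frac{\log\Phi_n}{2^n}\to\frac{1}{3}\log 2+\frac{1}{2}\log b$. A minimal numeric check (function name ours):

```python
import math

def log_phi_per_site(n, b):
    """log(Phi_n)/|V(Sigma_n)| for the Basilica Schreier graph."""
    e = (2 ** n + 1) // 3 if n % 2 else (2 ** n + 2) // 3
    return (e * math.log(2) + 2 ** (n - 1) * math.log(b)) / 2 ** n

limit = math.log(2) / 3 + 0.5 * math.log(1.5)
assert abs(log_phi_per_site(30, 1.5) - limit) < 1e-8
assert abs(log_phi_per_site(30, 1.0) - math.log(2) / 3) < 1e-8  # entropy at b = 1
```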
\section{The dimer model on the Schreier graphs of the Hanoi Towers group $H^{(3)}$}\label{SECTION4}
We present here a more sophisticated example of dimer computations on Schreier graphs -- the Schreier graphs of the action of the Hanoi Towers group $H^{(3)}$ on the rooted ternary tree.
\subsection{The Schreier graphs}
The group $H^{(3)}$ is generated by the automorphisms of the ternary rooted tree having the following self-similar form \cite{hanoi}: $$ a= (01)(id,id,a)\qquad b=(02)(id,b,id) \qquad c=(12)(c,id,id), $$ where $(01), (02)$ and $(12)$ are transpositions in $Sym(3)$. Observe that $a,b,c$ are involutions. The associated Schreier graphs are self-similar in the sense of \cite{wagner2}, that is, each $\Sigma_{n+1}$ contains three copies of $\Sigma_n$ glued together by three edges. These graphs can be recursively constructed via the following substitutional rules \cite{hanoi}: \unitlength=0,4mm \begin{center} \begin{picture}(400,115) \letvertex A=(240,10)\letvertex B=(260,44) \letvertex C=(280,78)\letvertex D=(300,112) \letvertex E=(320,78)\letvertex F=(340,44) \letvertex G=(360,10)\letvertex H=(320,10)\letvertex I=(280,10)
\letvertex L=(70,30)\letvertex M=(130,30) \letvertex N=(100,80)
\put(236,0){$00u$}\put(243,42){$20u$}\put(263,75){$21u$} \put(295,116){$11u$}\put(323,75){$01u$}\put(343,42){$02u$}\put(353,0){$22u$} \put(315,0){$12u$}\put(275,0){$10u$}
\put(67,20){$0u$}\put(126,20){$2u$}\put(95,84){$1u$}\put(188,60){$\Longrightarrow$} \put(0,60){Rule I}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$} \drawundirectededge(A,B){$b$}\drawundirectededge(B,C){$a$}\drawundirectededge(C,D){$c$} \drawundirectededge(D,E){$a$}\drawundirectededge(E,C){$b$}\drawundirectededge(E,F){$c$}\drawundirectededge(F,G){$b$} \drawundirectededge(B,I){$c$}\drawundirectededge(H,F){$a$}\drawundirectededge(H,I){$b$} \drawundirectededge(I,A){$a$}\drawundirectededge(G,H){$c$}
\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawundirectededge(M,L){$b$}\drawundirectededge(N,M){$c$}\drawundirectededge(L,N){$a$} \end{picture} \end{center}
\begin{center} \begin{picture}(400,135) \letvertex A=(240,10)\letvertex B=(260,44) \letvertex C=(280,78)\letvertex D=(300,112) \letvertex E=(320,78)\letvertex F=(340,44) \letvertex G=(360,10)\letvertex H=(320,10)\letvertex I=(280,10)
\letvertex L=(70,30)\letvertex M=(130,30) \letvertex N=(100,80)
\put(236,0){$00u$}\put(243,42){$10u$}\put(263,75){$12u$} \put(295,116){$22u$}\put(323,75){$02u$}\put(343,42){$01u$}\put(353,0){$11u$} \put(315,0){$21u$}\put(275,0){$20u$}
\put(67,20){$0u$}\put(126,20){$1u$}\put(95,84){$2u$}\put(188,60){$\Longrightarrow$} \put(0,60){Rule II} \drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$} \drawundirectededge(A,B){$a$}\drawundirectededge(B,C){$b$}\drawundirectededge(C,D){$c$} \drawundirectededge(D,E){$b$}\drawundirectededge(E,C){$a$}\drawundirectededge(E,F){$c$}\drawundirectededge(F,G){$a$} \drawundirectededge(B,I){$c$}\drawundirectededge(H,F){$b$}\drawundirectededge(H,I){$a$} \drawundirectededge(I,A){$b$}\drawundirectededge(G,H){$c$}
\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawundirectededge(M,L){$a$}\drawundirectededge(N,M){$c$}\drawundirectededge(L,N){$b$} \end{picture} \end{center} \begin{center} \begin{picture}(400,80) \letvertex A=(50,10)\letvertex B=(100,10) \letvertex C=(175,10)\letvertex D=(225,10) \letvertex E=(300,10)\letvertex F=(350,10) \letvertex G=(50,50)\letvertex H=(100,50) \letvertex I=(175,50)\letvertex L=(225,50) \letvertex M=(300,50)\letvertex N=(350,50)
\put(45,53){$0u$}\put(45,0){$0v$} \put(95,0){$00v$}\put(95,53){$00u$}\put(170,0){$1v$}\put(170,53){$1u$}\put(220,0){$11v$} \put(220,53){$11u$}\put(295,0){$2v$}\put(295,53){$2u$}\put(345,0){$22v$}\put(345,53){$22u$}
\put(68,27){$\Longrightarrow$}\put(193,27){$\Longrightarrow$}\put(318,27){$\Longrightarrow$} \put(0,30){Rule III} \put(130,30){Rule IV} \put(260,30){Rule V}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$}
\drawundirectededge(A,G){$c$}\drawundirectededge(B,H){$c$}\drawundirectededge(C,I){$b$} \drawundirectededge(D,L){$b$}\drawundirectededge(E,M){$a$}\drawundirectededge(F,N){$a$} \end{picture} \end{center} The starting point is the Schreier graph $\Sigma_1$ of the first level.\unitlength=0,3mm \begin{center} \begin{picture}(400,125) \letvertex A=(240,10)\letvertex B=(260,44) \letvertex C=(280,78)\letvertex D=(300,112) \letvertex E=(320,78)\letvertex F=(340,44) \letvertex G=(360,10)\letvertex H=(320,10)\letvertex I=(280,10)
\letvertex L=(70,30)\letvertex M=(130,30) \letvertex N=(100,80)
\put(228,60){$\Sigma_2$} \put(50,60){$\Sigma_1$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$} \drawundirectededge(A,B){$b$}\drawundirectededge(B,C){$a$}\drawundirectededge(C,D){$c$} \drawundirectededge(D,E){$a$}\drawundirectededge(E,C){$b$}\drawundirectededge(E,F){$c$}\drawundirectededge(F,G){$b$} \drawundirectededge(B,I){$c$}\drawundirectededge(H,F){$a$}\drawundirectededge(H,I){$b$} \drawundirectededge(I,A){$a$}\drawundirectededge(G,H){$c$}
\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawundirectededge(M,L){$b$}\drawundirectededge(N,M){$c$}\drawundirectededge(L,N){$a$}
\drawundirectedloop[l](A){$c$}\drawundirectedloop(D){$b$}\drawundirectedloop[r](G){$a$}\drawundirectedloop[r](M){$a$}
\drawundirectedloop(N){$b$}\drawundirectedloop[l](L){$c$} \end{picture} \end{center} In fact, the substitutional rules determine not only the graphs $\Sigma_n$ but also a particular embedding of each graph in the plane. Throughout the paper we always consider the graphs embedded in the plane as drawn in the figures, up to translation. Observe that, for each $n\geq 1$, the graph $\Sigma_n$ has exactly three loops, at the vertices $0^n,1^n$ and $2^n$, labeled by $c,b$ and $a$, respectively. Since the number of vertices of the Schreier graph $\Sigma_n$ is $3^n$, and hence odd, a dimer covering of $\Sigma_n$ covers either zero or two of the three outermost vertices: the vertices not covered by any dimer are regarded as covered by a loop. For this reason we do not erase the loops in this example.\\ The two subsections below compute the dimer coverings of the Hanoi Schreier graphs by two different methods: a combinatorial one (Subsection \ref{SECTIONCOMBINATORIAL}), using the self-similar structure of the graph, and one via Kasteleyn theory (Subsection \ref{2210}), using the self-similarity of the group $H^{(3)}$ in the construction of the oriented adjacency matrix.
\subsection{A combinatorial approach}\label{SECTIONCOMBINATORIAL}
There are four possible dimer configurations on $\Sigma_1$: \begin{center} \begin{picture}(480,90) \letvertex A=(30,20)\letvertex B=(90,20) \letvertex C=(60,70)\letvertex D=(150,20) \letvertex E=(210,20)\letvertex F=(180,70) \letvertex G=(270,20)\letvertex H=(330,20)\letvertex I=(300,70) \letvertex L=(390,20)\letvertex M=(450,20) \letvertex N=(420,70)
\put(27,6){0}\put(87,6){2}\put(66,66){1}
\put(147,6){0}\put(207,6){2}\put(186,66){1}
\put(267,6){0}\put(327,6){2}\put(306,66){1}
\put(387,6){0}\put(447,6){2}\put(426,66){1}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \thicklines
\drawundirectedloop[l](A){$c$}\drawundirectedloop[r](B){$a$}\drawundirectedloop(C){$b$}\drawundirectedloop[l](D){$c$}\drawundirectedloop(I){$b$} \drawundirectedloop[r](M){$a$}
\drawundirectededge(F,E){$c$}\drawundirectededge(H,G){$b$}\drawundirectededge(L,N){$a$}
\thinlines \drawundirectededge(B,A){$b$}\drawundirectededge(C,B){$c$}\drawundirectededge(A,C){$a$}\drawundirectededge(D,F){$a$} \drawundirectededge(E,D){$b$}\drawundirectededge(I,H){$c$}\drawundirectededge(G,I){$a$} \drawundirectededge(N,M){$c$}\drawundirectededge(M,L){$b$}
\drawundirectedloop[r](E){$a$}\drawundirectedloop(F){$b$}\drawundirectedloop[r](H){$a$}\drawundirectedloop[l](G){$c$} \drawundirectedloop(N){$b$}\drawundirectedloop[l](L){$c$} \end{picture} \end{center} At level $2$, we have eight possible dimer configurations: \begin{center} \begin{picture}(400,125) \letvertex A=(40,10)\letvertex B=(60,44) \letvertex C=(80,78)\letvertex D=(100,112) \letvertex E=(120,78)\letvertex F=(140,44) \letvertex G=(160,10)\letvertex H=(120,10)\letvertex I=(80,10) \put(38,-3){$00$}\put(150,-3){22}\put(105,108){11}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}
\put(188,75){Type I}
\thicklines \drawundirectedloop[l](A){$c$}\drawundirectedloop(D){$b$}\drawundirectedloop[r](G){$a$} \drawundirectededge(E,C){$b$}\drawundirectededge(B,I){$c$}\drawundirectededge(H,F){$a$}
\thinlines \drawundirectededge(A,B){$b$}\drawundirectededge(B,C){$a$}\drawundirectededge(C,D){$c$} \drawundirectededge(D,E){$a$}\drawundirectededge(E,F){$c$} \drawundirectededge(F,G){$b$} \drawundirectededge(H,I){$b$}\drawundirectededge(I,A){$a$}\drawundirectededge(G,H){$c$}
\letvertex a=(240,10)\letvertex b=(260,44) \letvertex c=(280,78)\letvertex d=(300,112) \letvertex e=(320,78)\letvertex f=(340,44) \letvertex g=(360,10)\letvertex h=(320,10)\letvertex i=(280,10) \put(238,-3){$00$}\put(350,-3){22}\put(305,108){11} \drawvertex(a){$\bullet$}\drawvertex(b){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(i){$\bullet$}
\thicklines
\drawundirectedloop[l](a){$c$}\drawundirectedloop(d){$b$}\drawundirectedloop[r](g){$a$} \drawundirectededge(b,c){$a$}\drawundirectededge(e,f){$c$} \drawundirectededge(h,i){$b$}
\thinlines \drawundirectededge(a,b){$b$}\drawundirectededge(c,d){$c$} \drawundirectededge(d,e){$a$}\drawundirectededge(e,c){$b$} \drawundirectededge(f,g){$b$}\drawundirectededge(b,i){$c$}\drawundirectededge(h,f){$a$} \drawundirectededge(i,a){$a$}\drawundirectededge(g,h){$c$} \end{picture} \end{center}
\begin{center} \begin{picture}(400,125) \letvertex A=(40,10)\letvertex B=(60,44) \letvertex C=(80,78)\letvertex D=(100,112) \letvertex E=(120,78)\letvertex F=(140,44) \letvertex G=(160,10)\letvertex H=(120,10)\letvertex I=(80,10)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}
\put(188,75){Type II} \put(38,-3){$00$}\put(150,-3){22}\put(105,108){11} \put(238,-3){$00$}\put(350,-3){22}\put(305,108){11}
\thicklines \drawundirectededge(C,D){$c$}\drawundirectedloop[l](A){$c$} \drawundirectededge(E,F){$c$}\drawundirectededge(B,I){$c$}\drawundirectededge(G,H){$c$}
\thinlines \drawundirectededge(A,B){$b$}\drawundirectededge(B,C){$a$} \drawundirectededge(D,E){$a$} \drawundirectededge(F,G){$b$} \drawundirectededge(H,I){$b$}\drawundirectededge(I,A){$a$} \drawundirectedloop(D){$b$}\drawundirectedloop[r](G){$a$} \drawundirectededge(E,C){$b$}\drawundirectededge(H,F){$a$}
\letvertex a=(240,10)\letvertex b=(260,44) \letvertex c=(280,78)\letvertex d=(300,112) \letvertex e=(320,78)\letvertex f=(340,44) \letvertex g=(360,10)\letvertex h=(320,10)\letvertex i=(280,10)
\drawvertex(a){$\bullet$}\drawvertex(b){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(i){$\bullet$}
\thicklines \drawundirectedloop[l](a){$c$}\drawundirectededge(d,e){$a$} \drawundirectededge(b,c){$a$}\drawundirectededge(f,g){$b$} \drawundirectededge(h,i){$b$}
\thinlines \drawundirectededge(a,b){$b$}\drawundirectededge(c,d){$c$} \drawundirectededge(e,c){$b$} \drawundirectededge(b,i){$c$}\drawundirectededge(h,f){$a$} \drawundirectededge(i,a){$a$}\drawundirectededge(g,h){$c$} \drawundirectedloop(d){$b$}\drawundirectedloop[r](g){$a$} \drawundirectededge(e,f){$c$} \end{picture} \end{center}
\begin{center} \begin{picture}(400,125) \letvertex A=(40,10)\letvertex B=(60,44) \letvertex C=(80,78)\letvertex D=(100,112) \letvertex E=(120,78)\letvertex F=(140,44) \letvertex G=(160,10)\letvertex H=(120,10)\letvertex I=(80,10)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}
\put(186,75){Type III} \put(38,-3){$00$}\put(150,-3){22}\put(105,108){11} \put(238,-3){$00$}\put(350,-3){22}\put(305,108){11} \thicklines \drawundirectededge(E,C){$b$} \drawundirectedloop(D){$b$}\drawundirectededge(A,B){$b$} \drawundirectededge(H,I){$b$}\drawundirectededge(F,G){$b$}
\thinlines \drawundirectededge(B,C){$a$} \drawundirectededge(D,E){$a$} \drawundirectededge(I,A){$a$} \drawundirectedloop[r](G){$a$} \drawundirectededge(H,F){$a$} \drawundirectededge(C,D){$c$}\drawundirectedloop[l](A){$c$} \drawundirectededge(E,F){$c$}\drawundirectededge(B,I){$c$}\drawundirectededge(G,H){$c$}
\letvertex a=(240,10)\letvertex b=(260,44) \letvertex c=(280,78)\letvertex d=(300,112) \letvertex e=(320,78)\letvertex f=(340,44) \letvertex g=(360,10)\letvertex h=(320,10)\letvertex i=(280,10)
\drawvertex(a){$\bullet$}\drawvertex(b){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(i){$\bullet$}
\thicklines \drawundirectedloop(d){$b$}\drawundirectededge(b,c){$a$} \drawundirectededge(e,f){$c$}\drawundirectededge(i,a){$a$}\drawundirectededge(g,h){$c$}
\thinlines \drawundirectededge(a,b){$b$}\drawundirectededge(c,d){$c$} \drawundirectededge(e,c){$b$} \drawundirectededge(b,i){$c$}\drawundirectededge(h,f){$a$} \drawundirectedloop[r](g){$a$} \drawundirectedloop[l](a){$c$}\drawundirectededge(d,e){$a$} \drawundirectededge(f,g){$b$} \drawundirectededge(h,i){$b$} \end{picture} \end{center}
\begin{center} \begin{picture}(400,125) \letvertex A=(40,10)\letvertex B=(60,44) \letvertex C=(80,78)\letvertex D=(100,112) \letvertex E=(120,78)\letvertex F=(140,44) \letvertex G=(160,10)\letvertex H=(120,10)\letvertex I=(80,10)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$} \put(38,-3){$00$}\put(150,-3){22}\put(105,108){11} \put(238,-3){$00$}\put(350,-3){22}\put(305,108){11} \put(186,75){Type IV}
\thicklines \drawundirectedloop[r](G){$a$}\drawundirectededge(D,E){$a$} \drawundirectededge(B,C){$a$}\drawundirectededge(H,F){$a$}\drawundirectededge(I,A){$a$}
\thinlines \drawundirectededge(E,C){$b$}\drawundirectedloop(D){$b$}\drawundirectededge(A,B){$b$} \drawundirectededge(H,I){$b$}\drawundirectededge(F,G){$b$} \drawundirectededge(C,D){$c$}\drawundirectedloop[l](A){$c$} \drawundirectededge(E,F){$c$}\drawundirectededge(B,I){$c$}\drawundirectededge(G,H){$c$}
\letvertex a=(240,10)\letvertex b=(260,44) \letvertex c=(280,78)\letvertex d=(300,112) \letvertex e=(320,78)\letvertex f=(340,44) \letvertex g=(360,10)\letvertex h=(320,10)\letvertex i=(280,10)
\drawvertex(a){$\bullet$}\drawvertex(b){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(i){$\bullet$}
\thicklines \drawundirectedloop[r](g){$a$}\drawundirectededge(c,d){$c$} \drawundirectededge(e,f){$c$}\drawundirectededge(a,b){$b$}\drawundirectededge(h,i){$b$}
\thinlines \drawundirectededge(e,c){$b$}\drawundirectedloop(d){$b$}\drawundirectededge(b,c){$a$} \drawundirectededge(i,a){$a$}\drawundirectededge(g,h){$c$} \drawundirectededge(b,i){$c$}\drawundirectededge(h,f){$a$} \drawundirectedloop[l](a){$c$}\drawundirectededge(d,e){$a$} \drawundirectededge(f,g){$b$} \end{picture} \end{center} More generally, we say that a dimer covering is of type I if it contains all three loops, of type II if it contains only the leftmost loop (at vertex $0^n$), of type III if it contains only the topmost loop (at vertex $1^n$), and of type IV if it contains only the rightmost loop (at vertex $2^n$).\\ \indent For $\Sigma_n$, $n\geq 1$, let us denote by $\Phi^i_n(a,b,c)$ the partition function of the dimer coverings of type $i$, for $i=$ I, II, III, IV, so that $\Phi_n=\Phi_n^I+\Phi_n^{II}+\Phi_n^{III}+\Phi_n^{IV}$. In what follows we omit the variables $a,b,c$ in the partition functions.
\begin{teo}\label{numerohanoi} The functions $\Phi^i_n$, $i=$ I, II, III, IV, satisfy the system of equations \begin{eqnarray}\label{generalsystem} \begin{cases} \Phi^I_{n+1} = \left(\Phi^I_n\right)^3\cdot\frac{1}{abc} + \Phi_n^{II}\Phi_n^{III}\Phi_n^{IV} \\ \Phi^{II}_{n+1} = \left(\Phi_n^{II}\right)^3\cdot\frac{1}{c}+\Phi_n^I\Phi_n^{III}\Phi_n^{IV}\cdot\frac{1}{ab}\\ \Phi^{III}_{n+1} =\left(\Phi_n^{III}\right)^3\cdot\frac{1}{b}+\Phi_n^I\Phi_n^{II}\Phi_n^{IV}\cdot\frac{1}{ac}\\ \Phi^{IV}_{n+1} = \left(\Phi_n^{IV}\right)^3\cdot\frac{1}{a} +\Phi_n^I\Phi_n^{II}\Phi_n^{III}\cdot\frac{1}{bc} \end{cases}, \end{eqnarray} with the initial conditions \begin{eqnarray*} \begin{cases} \Phi^I_{1} = abc \\ \Phi^{II}_{1} = c^2 \\ \Phi^{III}_{1} = b^2 \\ \Phi^{IV}_{1} = a^2 \end{cases}. \end{eqnarray*} \end{teo} \begin{proof} We prove the assertion by induction on $n$. The initial conditions are easily verified. The induction step follows from the substitutional rules. More precisely, in $\Sigma_n$ we have a copy $T_0$ of $\Sigma_{n-1}$ reflected with respect to the bisector of the angle with vertex $0^{n-1}$; a copy $T_1$ of $\Sigma_{n-1}$ reflected with respect to the bisector of the angle with vertex $1^{n-1}$; a copy $T_2$ of $\Sigma_{n-1}$ reflected with respect to the bisector of the angle with vertex $2^{n-1}$. \unitlength=0,4mm \begin{center} \begin{picture}(400,125) \letvertex A=(140,10)\letvertex B=(160,44) \letvertex C=(180,78)\letvertex D=(200,112) \letvertex E=(220,78)\letvertex F=(240,44) \letvertex G=(260,10)\letvertex H=(220,10)\letvertex I=(180,10)
\put(86,75){$\Sigma_n$} \put(156,18){$T_0$} \put(237,18){$T_2$} \put(197,90){$T_1$}
\put(186,111){$1^n$} \put(170,75){$A$} \put(223,75){$F$} \put(150,41){$B$} \put(243,41){$E$} \put(138,0){$0^n$} \put(176,0){$C$} \put(216,0){$D$} \put(253,0){$2^n$}
\put(163,58){$a$} \put(233,58){$c$} \put(198,2){$b$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}
\drawundirectededge(D,E){} \drawundirectededge(B,C){}\drawundirectededge(H,F){}\drawundirectededge(I,A){} \drawundirectededge(E,C){}\drawundirectededge(A,B){} \drawundirectededge(H,I){}\drawundirectededge(F,G){} \drawundirectededge(C,D){} \drawundirectededge(E,F){}\drawundirectededge(B,I){}\drawundirectededge(G,H){}
\drawundirectedloop[r](G){$a$} \drawundirectedloop(D){$b$} \drawundirectedloop[l](A){$c$}
\end{picture} \end{center} Using this information, let us analyze the dimer coverings of $\Sigma_n$ as constructed from dimer coverings of $T_i$, $i=0,1,2$, that can be in turn interpreted as dimer coverings of $\Sigma_{n-1}$. \\ \indent First suppose that the dimer covering of $\Sigma_n$ contains only one loop, and without loss of generality assume it is at $0^n$. There are then two possible cases. \begin{itemize} \item The copy of $\Sigma_{n-1}$ corresponding to $T_0$ was covered using three loops (type I). Then the covering of $\Sigma_n$ must cover the edges connecting
$T_0$ to $T_1$ and $T_2$ (labeled $a$ and $b$, respectively). So in the copy of $\Sigma_{n-1}$ corresponding to $T_1$ we had a dimer covering with only a loop at $A$ (type IV, by reflection) and in $T_2$ a covering with only a loop at $D$ (type III, by reflection). These coverings cover the vertices $E$ and $F$, and so the edge labeled $c$ joining $E$ and $F$ does not belong to the covering of $\Sigma_n$. We describe this situation as: \begin{center} \begin{picture}(200,43) \letvertex A=(85,10)\letvertex B=(115,10) \letvertex C=(100,35)
\put(82,0){I} \put(112,0){III} \put(97,38){IV}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawundirectededge(A,B){} \drawundirectededge(B,C){}\drawundirectededge(C,A){} \end{picture} \end{center} \item The copy of $\Sigma_{n-1}$ corresponding to $T_0$ was covered with only one loop in $0^{n-1}$ (type II), so that the vertices $B$ and $C$ are covered. This implies that the edges joining $A,B$ and $C,D$, labeled $a$ and $b$ respectively, are not covered in $\Sigma_n$. So in the copy of $\Sigma_{n-1}$ corresponding to $T_1$ there was no loop at $1^{n-1}$ (or $A$), which implies that its covering contained only the loop at $F$ (type II, by reflection). Similarly, the covering of the copy of $\Sigma_{n-1}$ corresponding to $T_2$ could only contain a loop in $E$ (type II, by reflection). Consequently, in $\Sigma_n$, the edge joining $E$ and $F$ (labeled $c$) must belong to the covering. Schematically this situation can be described as follows. \begin{center} \begin{picture}(200,43) \letvertex A=(85,10)\letvertex B=(115,10) \letvertex C=(100,35)
\put(82,0){II} \put(112,0){II} \put(97,38){II}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){}\drawundirectededge(C,A){} \end{picture} \end{center} \end{itemize} \indent Now suppose that the covering of $\Sigma_n$ contains loops at $0^n,1^n,2^n$. There are again two possible cases. \begin{itemize} \item We have on $T_0$ a dimer covering with three loops (type I), so that the dimer covering of $\Sigma_n$ must use the edges joining the copy $T_0$ to the copies $T_1$ and $T_2$. So in the copy of $\Sigma_{n-1}$ corresponding to $T_1$ (and similarly for $T_2$) we necessarily have a covering with three loops (type I). \begin{center} \begin{picture}(200,43) \letvertex A=(85,10)\letvertex B=(115,10) \letvertex C=(100,35)
\put(82,0){I} \put(114,0){I} \put(99,38){I}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){}\drawundirectededge(C,A){} \end{picture} \end{center} \item We have on $T_0$ a dimer covering with only one loop in $0^{n-1}$ (type II), so that the vertices $B$ and $C$ are covered. This implies that the edges joining $A,B$ and $C,D$ are not covered in $\Sigma_n$. So in $T_1$ we cannot have a loop at $A$, which implies that the corresponding copy of $\Sigma_{n-1}$ was covered with only a loop in $1^{n-1}$ (type III). Similarly $T_2$ was covered with only a loop in $2^{n-1}$ (type IV). \begin{center} \begin{picture}(200,43) \letvertex A=(85,10)\letvertex B=(115,10) \letvertex C=(100,35)
\put(82,0){II} \put(110,0){IV} \put(95,38){III}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){}\drawundirectededge(C,A){} \end{picture} \end{center} \end{itemize} The claim follows. \end{proof}
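The system \eqref{generalsystem} is straightforward to iterate numerically. The following Python sketch (our code, not part of the proofs; exact rational arithmetic via the standard \texttt{fractions} module) reproduces, for unit weights, the four coverings of $\Sigma_1$ and the eight coverings of $\Sigma_2$ drawn above.

```python
from fractions import Fraction

def hanoi_dimer_types(n, a, b, c):
    """Iterate the system (generalsystem): returns the four partition
    functions (Phi_I, Phi_II, Phi_III, Phi_IV) of Sigma_n, by type."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    I, II, III, IV = a * b * c, c * c, b * b, a * a   # values on Sigma_1
    for _ in range(n - 1):
        I, II, III, IV = (
            I ** 3 / (a * b * c) + II * III * IV,
            II ** 3 / c + I * III * IV / (a * b),
            III ** 3 / b + I * II * IV / (a * c),
            IV ** 3 / a + I * II * III / (b * c),
        )
    return I, II, III, IV

# With unit weights the partition function counts coverings:
# 4 on Sigma_1 and 8 on Sigma_2, matching the configurations above.
print(sum(hanoi_dimer_types(1, 1, 1, 1)))  # 4
print(sum(hanoi_dimer_types(2, 1, 1, 1)))  # 8
```

For $n=3$ the same iteration yields $16$ coverings of each type, in line with the count $2^{\frac{3^{n-1}-1}{2}}$ established below.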
\begin{cor}\label{corollaryentropyHanoi} For each $n\geq 1$, the numbers of dimer coverings of types I, II, III and IV of the Schreier graph $\Sigma_n$ coincide and are equal to $2^{\frac{3^{n-1}-1}{2}}$. Hence, the total number of dimer coverings of the Schreier graph $\Sigma_n$ is $2^{\frac{3^{n-1}+3}{2}}$. The entropy of absorption of diatomic molecules per site is $\frac{1}{6}\log 2$. \end{cor}
\begin{proof} By construction in the proof of Theorem \ref{numerohanoi} we see that the number of configurations of type I of $\Sigma_n$ is given by $2h^3$, where $h$ is (by the inductive hypothesis) the common value of the numbers of configurations of types I, II, III, IV of $\Sigma_{n-1}$, equal to $2^{\frac{3^{n-2}-1}{2}}$. So the number of configurations of type I in $\Sigma_n$ is equal to $$ 2\cdot 2^{\frac{3(3^{n-2}-1)}{2}} = 2^{\frac{3^{n-1}-1}{2}}. $$ Clearly the same count holds for the coverings of types II, III and IV of $\Sigma_n$, and this completes the proof. \end{proof} \begin{os}\rm Analogues of Theorem \ref{numerohanoi} can be deduced for the dimer partition function on Schreier graphs of any self-similar (automata) group of bounded activity (i.e., generated by a bounded automaton). Indeed, V. Nekrashevych in \cite{volo}
introduces an inductive procedure (which he calls \lq\lq inflation'') producing a sequence of graphs that differ from the Schreier graphs only in a bounded number of edges (see also \cite{bondarenkothesis}, where this construction is used to study the growth of infinite Schreier graphs). This inductive procedure makes it possible to describe the partition function of the dimer model on them by a system of recursive equations as in (\ref{generalsystem}). \end{os} Unfortunately, we were not able to find an explicit solution of the system \eqref{generalsystem}. We will come back to these equations in Section \ref{statistiques}, where we study some statistics of the dimer coverings on $\Sigma_n$. Meanwhile, in Subsection \ref{2210}, we will attempt to compute the partition function in a different way, using Kasteleyn theory.\\ \indent In the rest of this subsection we present the solution of the system \eqref{generalsystem} in the particular case when all edge weights are equal, and deduce the thermodynamic limit.
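As an independent sanity check of Corollary \ref{corollaryentropyHanoi} in the smallest nontrivial case, the coverings of $\Sigma_2$ can be enumerated by brute force. The Python sketch below (our code, purely illustrative) encodes the edge list of $\Sigma_2$ read off from the figure, with loops allowed only at the three corner vertices, following the convention fixed above.

```python
# Sigma_2, read off from the figure: vertices are words of length 2.
EDGES = [("00", "20"), ("20", "21"), ("21", "11"), ("11", "01"),
         ("01", "21"), ("01", "02"), ("02", "22"), ("20", "10"),
         ("12", "02"), ("12", "10"), ("10", "00"), ("22", "12")]
LOOPS = {"00", "11", "22"}   # loops sit at the three corner vertices

adj = {}
for u, v in EDGES:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def count_coverings(uncovered):
    """Count coverings in which every vertex is covered exactly once,
    either by a dimer on an edge or by the loop at a corner vertex."""
    if not uncovered:
        return 1
    v = min(uncovered)
    rest = uncovered - {v}
    total = count_coverings(rest) if v in LOOPS else 0
    for w in adj[v] & rest:              # place a dimer on the edge vw
        total += count_coverings(rest - {w})
    return total

print(count_coverings(frozenset(adj)))  # 8, as predicted for n = 2
```

The count agrees with $2^{\frac{3^{1}+3}{2}}=8$ and with the eight configurations drawn at level $2$.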
\begin{prop}\label{a=b=c} The partition function for $a=b=c$ is $$ \Phi_n(a,a,a)= 2^{\frac{3^{n-1}-1}{2}}\cdot a^{\frac{3^n+1}{2}}(a+3). $$ In this case, the thermodynamic limit is $ \frac{1}{6}\log 2 + \frac{1}{2}\log a$. \end{prop}
\begin{proof} By putting $a=b=c$, the system (\ref{generalsystem}) reduces to \begin{eqnarray}\label{systemalla} \begin{cases} \Phi^I_{n+1} = \left(\Phi^I_n\right)^3\cdot\frac{1}{a^3} + (\Phi_n^{II})^3\\ \Phi^{II}_{n+1} = \left(\Phi_n^{II}\right)^3\cdot\frac{1}{a}+\Phi_n^I(\Phi_n^{II})^2\cdot\frac{1}{a^2} \end{cases} \end{eqnarray} with initial conditions $\Phi^I_{1} = a^3$ and $\Phi^{II}_{1} = a^2$, since $\Phi_n^{II} = \Phi_n^{III} = \Phi_n^{IV}$. One can prove by induction that $\frac{\Phi_n^{I}}{\Phi_n^{II}}=a$, so that the first equation in (\ref{systemalla}) becomes $$ \Phi_{n+1}^{I}=\frac{2(\Phi_n^{I})^3}{a^3}, $$ giving $$ \begin{cases} \Phi^I_n = 2^{\frac{3^{n-1}-1}{2}}\cdot a^{\frac{3^n+3}{2}}\\ \Phi^{II}_n = 2^{\frac{3^{n-1}-1}{2}}\cdot a^{\frac{3^n+1}{2}}. \end{cases} $$ The partition function is then obtained as $$ \Phi_n=\Phi_n^I+3\Phi_n^{II} = 2^{\frac{3^{n-1}-1}{2}}\cdot a^{\frac{3^n+1}{2}}(a+3). $$ The thermodynamic limit is $$ \lim_{n\to \infty}\frac{\log(\Phi_n)}{3^n} = \frac{1}{6}\log 2 + \frac{1}{2}\log a. $$ Finally, by putting $a=1$, we get the entropy of absorption of diatomic molecules as in Corollary \ref{corollaryentropyHanoi}. \end{proof} Next, we deduce the existence of the thermodynamic limit of the function $\Phi_n(1,1,c)$, for $c\geq 1$. Observe that $\Phi_n(1,1,c) = \Phi_n^I(1,1,c) + \Phi_n^{II}(1,1,c) + 2\Phi_n^{III}(1,1,c)$, since $\Phi_n^{III}(1,1,c) = \Phi_n^{IV}(1,1,c)$. A similar argument holds for the functions $\Phi_n(a,1,1)$ and $\Phi_n(1,b,1)$.
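Before doing so, we note that the closed formula of Proposition \ref{a=b=c} is easy to cross-check numerically against the reduced system \eqref{systemalla}. A small Python sketch (our code; exact arithmetic with the \texttt{fractions} module):

```python
from fractions import Fraction

def phi_equal_weights(n, a):
    """Iterate the reduced system (systemalla) for equal weights a = b = c
    and return Phi_n = Phi_n^I + 3 * Phi_n^II."""
    a = Fraction(a)
    I, II = a ** 3, a ** 2                 # initial conditions on Sigma_1
    for _ in range(n - 1):
        I, II = (I ** 3 / a ** 3 + II ** 3,
                 II ** 3 / a + I * II ** 2 / a ** 2)
    return I + 3 * II

def closed_formula(n, a):
    """Closed expression for Phi_n(a,a,a) from the proposition above."""
    a = Fraction(a)
    return 2 ** ((3 ** (n - 1) - 1) // 2) * a ** ((3 ** n + 1) // 2) * (a + 3)

# The two computations agree, e.g. Phi_2(2,2,2) = 2 * 2^5 * 5 = 320.
for n in (1, 2, 3, 4):
    for a in (1, 2, 3):
        assert phi_equal_weights(n, a) == closed_formula(n, a)
```

Both exponents $(3^{n-1}-1)/2$ and $(3^n+1)/2$ are integers, so the formula stays within exact rational arithmetic.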
\begin{prop}\label{COROLLARYEXISTENCE} For every $c\geq 1$, the thermodynamic limit $\lim_{n\to \infty}\frac{\log(\Phi_n(1,1,c))}{3^n}$ exists. \end{prop}
\begin{proof} It is clear that, for every $c\geq 1$, the sequence $\varepsilon_n:=\frac{\log(\Phi_n(1,1,c))}{3^n}$ is positive. We claim that $\varepsilon_n$ is decreasing. We have: \begin{eqnarray*} \frac{\varepsilon_{n+1}}{\varepsilon_n}&=& \frac{\log(\Phi_{n+1}(1,1,c))}{3\cdot\log(\Phi_n(1,1,c))}\\ &=&\frac{3\cdot\log(\Phi_n(1,1,c))+\log\left(\frac{\Phi_{n+1}(1,1,c)} {\Phi_n(1,1,c)^3}\right)}{3\cdot\log(\Phi_n(1,1,c))}\\ &=& 1+ \frac{\log\left(\frac{\Phi_{n+1}(1,1,c)} {\Phi_n(1,1,c)^3}\right)}{3\cdot\log(\Phi_n(1,1,c))}. \end{eqnarray*}
Since $\log(\Phi_n(1,1,c))>0$ for every $c\geq 1$, it suffices to prove that $\frac{\Phi_{n+1}(1,1,c)}{\Phi_n(1,1,c)^3}$ is less than or equal to $1$ for every $c\geq 1$.
\begin{eqnarray*} \frac{\Phi_{n+1}(1,1,c)}{\Phi_n(1,1,c)^3}&=& \frac{\Phi^{I}_{n+1}(1,1,c)+\Phi^{II}_{n+1}(1,1,c)+2\Phi^{III}_{n+1}(1,1,c)}{(\Phi^{I}_{n}(1,1,c)+\Phi^{II}_{n}(1,1,c)+2\Phi^{III}_{n}(1,1,c))^3}\\ &=& \frac{\left(\frac{(\Phi^{I}_{n})^3}{c}+ \Phi^{II}_{n}(\Phi^{III}_{n})^2+\frac{ (\Phi^{II}_{n})^3}{c}+\Phi^{I}_{n}(\Phi^{III}_{n})^2 + 2(\Phi^{III}_{n})^3+2\frac{\Phi^{I}_{n}\Phi^{II}_{n}\Phi^{III}_{n}}{c}\right)}{\left( \Phi^{I}_{n}+\Phi^{II}_{n}+2\Phi^{III}_{n} \right)^3}\\ &\leq & \frac{\left((\Phi^{I}_{n})^3+ \Phi^{II}_{n}(\Phi^{III}_{n})^2+ (\Phi^{II}_{n})^3+\Phi^{I}_{n}(\Phi^{III}_{n})^2 + 2(\Phi^{III}_{n})^3+2\Phi^{I}_{n}\Phi^{II}_{n}\Phi^{III}_{n}\right)}{\left( \Phi^{I}_{n}+\Phi^{II}_{n}+2\Phi^{III}_{n} \right)^3}\\ &\leq& 1 \end{eqnarray*}
Hence $\varepsilon_{n+1}\leq\varepsilon_n$. Being positive and decreasing, the sequence $(\varepsilon_n)_{n\geq 1}$ converges, which implies the existence of $\lim_{n\to \infty}\frac{\log(\Phi_n(1,1,c))}{3^n}$. \end{proof}
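The monotonicity used in the proof can also be observed numerically. The sketch below (our code; logarithms taken only at the end of an exact computation) evaluates $\varepsilon_n=\log(\Phi_n(1,1,c))/3^n$ for $c=2$ via the specialization of \eqref{generalsystem} to $a=b=1$, where $\Phi_n^{III}=\Phi_n^{IV}$.

```python
import math
from fractions import Fraction

def eps_sequence(c, nmax):
    """epsilon_n = log(Phi_n(1,1,c)) / 3^n for n = 1..nmax, iterating
    (generalsystem) with a = b = 1, where Phi^III = Phi^IV."""
    c = Fraction(c)
    I, II, III = c, c * c, Fraction(1)       # values on Sigma_1
    out = []
    for n in range(1, nmax + 1):
        phi = I + II + 2 * III               # Phi_n(1, 1, c)
        out.append((math.log(phi.numerator)
                    - math.log(phi.denominator)) / 3 ** n)
        I, II, III = (I ** 3 / c + II * III ** 2,
                      II ** 3 / c + I * III ** 2,
                      III ** 3 + I * II * III / c)
    return out

eps = eps_sequence(2, 6)
assert all(x > y for x, y in zip(eps, eps[1:]))  # strictly decreasing
```

For $c=2$ one gets $\varepsilon_1=\log 8/3\approx 0.693$ and $\varepsilon_2=\log 52/9\approx 0.439$, consistent with the proof.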
\subsection{Partition function by Kasteleyn method}\label{2210}
In order to define a matrix inducing a good orientation, in the sense of Kasteleyn, on the Schreier graphs of $H^{(3)}$, we introduce the matrices $$ a_1 = \begin{pmatrix}
0 & 1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 1 \end{pmatrix}, \qquad b_1 =\begin{pmatrix}
0 & 0 & -1 \\
0 & 1 & 0 \\
1 & 0 & 0 \end{pmatrix}, \qquad c_1 = \begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & -1 & 0 \end{pmatrix}. $$ Then, for every $n$ even, we put $$ a_n = \begin{pmatrix}
0 & -I_{n-1} & 0 \\
I_{n-1} & 0 & 0 \\
0 & 0 & a_{n-1} \end{pmatrix}, \qquad b_n =\begin{pmatrix}
0 & 0 & I_{n-1} \\
0 & b_{n-1} & 0 \\
-I_{n-1} & 0 & 0 \end{pmatrix}, \qquad c_n = \begin{pmatrix}
c_{n-1} & 0 & 0 \\
0 & 0 & -I_{n-1} \\
0 & I_{n-1} & 0 \end{pmatrix}, $$ and for every $n>1$ odd, we put $$ a_n = \begin{pmatrix}
0 & I_{n-1} & 0 \\
-I_{n-1} & 0 & 0 \\
0 & 0 & a_{n-1} \end{pmatrix}, \qquad b_n =\begin{pmatrix}
0 & 0 & -I_{n-1} \\
0 & b_{n-1} & 0 \\
I_{n-1} & 0 & 0 \end{pmatrix}, \qquad c_n = \begin{pmatrix}
c_{n-1} & 0 & 0 \\
0 & 0 & I_{n-1} \\
0 & -I_{n-1} & 0 \end{pmatrix}, $$ where $a_n, b_n,c_n$ and $I_n$ are square matrices of size $3^n$. Now we put $A_n=aa_n, B_n=bb_n$ and $C_n=cc_n$ for each $n\geq 1$ and define $\Delta_n=A_n+B_n+C_n$, so that $$ \Delta_1 = \begin{pmatrix}
c & a & -b \\
-a & b & c \\
b & -c & a \end{pmatrix} $$ and, for each $n > 1$, $$ \Delta_n =\begin{pmatrix}
C_{n-1} & -aI_{n-1} & bI_{n-1} \\
aI_{n-1} & B_{n-1} & -cI_{n-1} \\
-bI_{n-1} & cI_{n-1} & A_{n-1} \end{pmatrix} \ \mbox{for }n \mbox{ even, } \qquad \Delta_n =\begin{pmatrix}
C_{n-1} & aI_{n-1} & -bI_{n-1} \\
-aI_{n-1} & B_{n-1} & cI_{n-1} \\
bI_{n-1} & -cI_{n-1} & A_{n-1} \end{pmatrix} \ \mbox{for }n \mbox{ odd}. $$ We want to prove that, for each $n\geq1$, the oriented adjacency matrix $\Delta_n$ induces a good orientation on $\Sigma_n$. Then we will apply Kasteleyn theory to get the partition function of the dimer model on $\Sigma_n$. One can easily verify that, in this case too, the entries of the matrix $\Delta_n$ coincide in absolute value with the entries of the (unoriented, weighted) adjacency matrix of the Schreier graph of the group. The problem related to the loops and their orientation will be discussed later. The figures below describe the orientation induced on $\Sigma_1$ and $\Sigma_2$ by the matrices $\Delta_1$ and $\Delta_2$, respectively.\unitlength=0,4mm \begin{center} \begin{picture}(400,125) \letvertex A=(240,10)\letvertex B=(260,44) \letvertex C=(280,78)\letvertex D=(300,112) \letvertex E=(320,78)\letvertex F=(340,44) \letvertex G=(360,10)\letvertex H=(320,10)\letvertex I=(280,10)
\letvertex L=(70,30)\letvertex M=(130,30) \letvertex N=(100,80)
\put(236,0){$00$}\put(248,42){$20$}\put(268,75){$21$} \put(305,109){$11$}\put(323,75){$01$}\put(343,42){$02$}\put(353,0){$22$} \put(315,0){$12$}\put(275,0){$10$}
\put(67,20){$0$}\put(126,20){$2$}\put(105,77){$1$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}
\drawedge(A,B){$b$}\drawedge(B,C){$a$}\drawedge(C,D){$c$} \drawedge(D,E){$a$}\drawedge(E,C){$b$}\drawedge(E,F){$c$}\drawedge(F,G){$b$} \drawedge(B,I){$c$}\drawedge(H,F){$a$}\drawedge(H,I){$b$} \drawedge(I,A){$a$}\drawedge(G,H){$c$}
\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawedge(M,L){$b$}\drawedge(N,M){$c$}\drawedge(L,N){$a$}
\drawloop[l](A){$c$}\drawloop(D){$b$}\drawloop[r](G){$a$}\drawloop[r](M){$a$}
\drawloop(N){$b$}\drawloop[l](L){$c$} \end{picture} \end{center} \begin{prop} For each $n\geq 1$, the matrix $\Delta_n$ induces a good orientation on $\Sigma_n$. \end{prop} \begin{proof} Observe that in $\Sigma_1$ the sequence of labels $a,b,c$ appears in anticlockwise order. Following the substitutional rules, we deduce that for every $n$ odd we can read in each elementary triangle the sequence $a,b,c$ in anticlockwise order. On the other hand, for $n$ even, the occurrences of $a,b,c$ in each elementary triangle of $\Sigma_n$ follow a clockwise order. We prove our claim by induction on $n$. For $n=1$, the matrix $\Delta_1$ induces on $\Sigma_1$ the orientation shown in the picture above, so that the assertion is true for $n=1$. Now observe that, for every $n$ odd, the blocks $\pm aI_{n-1}, \pm bI_{n-1}, \pm cI_{n-1}$ in $\Delta_n$ ensure that each elementary triangle in $\Sigma_n$ has the same orientation given by \begin{center} \begin{picture}(400,60) \letvertex L=(170,10)\letvertex M=(230,10) \letvertex N=(200,60)
\put(165,0){$0u$}\put(226,0){$2u$}\put(195,63){$1u$}
\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawedge(M,L){$b$}\drawedge(N,M){$c$}\drawedge(L,N){$a$} \end{picture} \end{center} For $n$ even, the sequence $a,b,c$ is clockwise and the blocks $\pm aI_{n-1}, \pm bI_{n-1}, \pm cI_{n-1}$ in $\Delta_n$ ensure that the orientation induced on the edges is clockwise as the following picture shows: \begin{center} \begin{picture}(400,57) \letvertex L=(170,5)\letvertex M=(230,5) \letvertex N=(200,55)
\put(165,-5){$0u$}\put(226,-5){$1u$}\put(195,58){$2u$}
\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawedge(M,L){$a$}\drawedge(N,M){$c$}\drawedge(L,N){$b$} \end{picture} \end{center} So we conclude that for every $n$ all the elementary triangles of $\Sigma_n$ are clockwise oriented. Now construct the graph $\Sigma_{n+1}$ from $\Sigma_n$ and suppose $n$ odd (the same proof works in the case $n$ even). Rule I gives \begin{center} \begin{picture}(400,115) \letvertex A=(240,10)\letvertex B=(260,44) \letvertex C=(280,78)\letvertex D=(300,112) \letvertex E=(320,78)\letvertex F=(340,44) \letvertex G=(360,10)\letvertex H=(320,10)\letvertex I=(280,10)
\letvertex L=(70,30)\letvertex M=(130,30) \letvertex N=(100,80)
\put(236,0){$00u$}\put(243,42){$20u$}\put(263,75){$21u$} \put(295,116){$11u$}\put(323,75){$01u$}\put(343,42){$02u$}\put(353,0){$22u$} \put(315,0){$12u$}\put(275,0){$10u$}
\put(67,20){$0u$}\put(126,20){$2u$}\put(95,84){$1u$}\put(188,60){$\Longrightarrow$}
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$} \drawedge(A,B){$b$}\drawedge(B,C){$a$}\drawedge(C,D){$c$} \drawedge(D,E){$a$}\drawedge(E,C){$b$}\drawedge(E,F){$c$}\drawedge(F,G){$b$} \drawedge(B,I){$c$}\drawedge(H,F){$a$}\drawedge(H,I){$b$} \drawedge(I,A){$a$}\drawedge(G,H){$c$}
\drawvertex(L){$\bullet$} \drawvertex(M){$\bullet$}\drawvertex(N){$\bullet$} \drawedge(M,L){$b$}\drawedge(N,M){$c$}\drawedge(L,N){$a$} \end{picture} \end{center} In order to understand why the edges not belonging to an elementary triangle have this orientation, we observe that the edge \begin{center} \begin{picture}(400,2) \letvertex N=(170,5)\letvertex K=(230,5)
\put(165,-3){$20u$}\put(225,-3){$21u$}
\drawvertex(N){$\bullet$} \drawvertex(K){$\bullet$} \drawundirectededge(N,K){$a$} \end{picture} \end{center} has the same orientation as the edge \begin{center} \begin{picture}(400,2) \letvertex N=(170,5)\letvertex K=(230,5)
\put(165,-3){$0u$}\put(225,-3){$1u$}
\drawvertex(N){$\bullet$} \drawvertex(K){$\bullet$} \drawundirectededge(N,K){$a$} \end{picture} \end{center} since the entry $(20u,21u)$ of the matrix $\Delta_{n+1}$ is the same as the entry $(0u,1u)$ of the matrix $\Delta_n$. Similarly for the other two edges joining vertices $01u,02u$ and $10u,12u$. This implies that each elementary hexagon has a good orientation. Now note that in $\Sigma_{n+1}$ we have $3^{n-i}$ elementary cycles of length $3\cdot 2^i$, for each $i=0,1, \ldots, n$. We already know that in $\Sigma_{n+1}$ all the elementary triangles and hexagons have a good orientation. Observe that each cycle in $\Sigma_n$ having length $k = 3\cdot 2^m$, with $m\geq 1$, gives rise in $\Sigma_{n+1}$ to a cycle of length $2k$. In this new cycle of $\Sigma_{n+1}$, $k$ edges join vertices starting with the same letter and keep the same orientation as in $\Sigma_n$ (so they are well oriented by induction); the remaining $k$ edges belong to elementary triangles and have the form \begin{center} \begin{picture}(400,3) \letvertex N=(170,5)\letvertex K=(230,5)
\put(165,-3){$xu$}\put(225,-3){$\overline{x}u$}
\drawvertex(N){$\bullet$} \drawvertex(K){$\bullet$} \drawundirectededge(K,N){$ $} \end{picture} \end{center} where $x \neq \overline{x}$ and $x, \overline{x}\in \{0,1,2\}$, $u\in \Sigma_n$. Since the last $k$ edges belong to elementary triangles, they are oriented in the same direction and, since $k$ is even, they give a good orientation to the cycle. The same argument works for each elementary cycle and so the proof is completed. \end{proof}
The matrix $\Delta_n$ cannot be directly used to find the partition function because it is not anti-symmetric (there are three nonzero entries in the diagonal corresponding to loops) and it is of odd size. Let $\Gamma_{n,c}$ be the matrix obtained from $\Delta_n$ by deleting the row and the column indexed by $0^n$ and where the entries $(1^n,1^n)$ and $(2^n,2^n)$ are replaced by $0$, so that the partition function of dimer coverings of type II is given by $c\sqrt{\det(\Gamma_{n,c})}$. Similarly, we define $\Gamma_{n,b},\Gamma_{n,a}$ for dimer coverings of type III, IV, respectively. Now let $\Lambda_n$ be the matrix obtained from $\Delta_n$ by deleting the three rows and the three columns indexed by $0^n,1^n$ and $2^n$, so that the partition function of the dimer coverings of type I is $abc\sqrt{\det{\Lambda_n}}$. This gives \begin{eqnarray}\label{partitionassum} \Phi_n(a,b,c) =c\sqrt{\det(\Gamma_{n,c})}+b\sqrt{\det(\Gamma_{n,b})}+a\sqrt{\det(\Gamma_{n,a})}+abc\sqrt{\det{\Lambda_n}}. \end{eqnarray} In order to compute $\det(\Gamma_{n,c})$ (the case of $\Gamma_{n,b},\Gamma_{n,a}$ and $\Lambda_n$ is analogous), we put $$ a'_1=\begin{pmatrix}
0 & a & 0 \\
-a & 0 & 0 \\
0 & 0 & 0 \end{pmatrix}, \qquad b'_1=\begin{pmatrix}
0 & 0 & -b \\
0 & 0 & 0 \\
b & 0 & 0 \end{pmatrix}, \qquad c'_1=\begin{pmatrix}
1 & 0 & 0\\
0 & 0 & c \\
0 & -c & 0 \end{pmatrix}. $$ Then for every $n>1$ odd we put $$ a'_n = \begin{pmatrix}
0 & aI_{n-1} & 0 \\
-aI_{n-1} & 0 & 0 \\
0 & 0 & a'_{n-1} \end{pmatrix}, \quad b'_n =\begin{pmatrix}
0 & 0 & -bI_{n-1} \\
0 & b'_{n-1} & 0 \\
bI_{n-1} & 0 & 0 \end{pmatrix}, \quad c'_n = \begin{pmatrix}
c'_{n-1} & 0 & 0 \\
0 & 0 & cI_{n-1} \\
0 & -cI_{n-1} & 0 \end{pmatrix}, $$ and for every $n$ even we put $$ a'_n = \begin{pmatrix}
0 & -aI_{n-1} & 0 \\
aI_{n-1} & 0 & 0 \\
0 & 0 & a'_{n-1} \end{pmatrix}, \quad b'_n =\begin{pmatrix}
0 & 0 & bI_{n-1} \\
0 & b'_{n-1} & 0 \\
-bI_{n-1} & 0 & 0 \end{pmatrix}, \quad c'_n = \begin{pmatrix}
c'_{n-1} & 0 & 0 \\
0 & 0 & -cI_{n-1} \\
0 & cI_{n-1} & 0 \end{pmatrix}. $$ Finally, set $$ \overline{A}_1= \overline{B}_1=\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \end{pmatrix}, \qquad \overline{C}_1=\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & c \\
0 & -c & 0 \end{pmatrix}. $$ Then for every $n>1$ odd we put $$ \overline{A}_n = \begin{pmatrix}
0 & aI_{n-1}^0 & 0 \\
-aI_{n-1}^0 & 0 & 0 \\
0 & 0 & a'_{n-1} \end{pmatrix}, \qquad \overline{B}_n =\begin{pmatrix}
0 & 0 & -bI_{n-1}^0 \\
0 & b'_{n-1} & 0 \\
bI_{n-1}^0 & 0 & 0 \end{pmatrix}, \qquad \overline{C}_n = c_n', $$ and for every $n$ even we put $$ \overline{A}_n = \begin{pmatrix}
0 & -aI_{n-1}^0 & 0 \\
aI_{n-1}^0 & 0 & 0 \\
0 & 0 & a'_{n-1} \end{pmatrix}, \qquad \overline{B}_n =\begin{pmatrix}
0 & 0 & bI_{n-1}^0 \\
0 & b'_{n-1} & 0 \\
-bI_{n-1}^0 & 0 & 0 \end{pmatrix}, \qquad \overline{C}_n = c_n', $$ with $$ I_n^0=I_n-\begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & 0 & \ddots & 0 \\
0 & 0 & 0 & 0 \end{pmatrix}. $$ Finally, let $\overline{\Delta}_n = \overline{A}_n+\overline{B}_n+\overline{C}_n$ for each $n\geq 1$, so that $$ \overline{\Delta}_n =\begin{pmatrix}
c'_{n-1} & aI_{n-1}^0 & -bI_{n-1}^0 \\
-aI_{n-1}^0 & b'_{n-1} & cI_{n-1} \\
bI_{n-1}^0 & -cI_{n-1} & a'_{n-1} \end{pmatrix} \ \mbox{for }n \mbox{ odd}, \qquad \overline{\Delta}_n =\begin{pmatrix}
c'_{n-1} & -aI_{n-1}^0 & bI_{n-1}^0 \\
aI_{n-1}^0 & b'_{n-1} & -cI_{n-1} \\
-bI_{n-1}^0 & cI_{n-1} & a'_{n-1} \end{pmatrix} \ \mbox{for }n \mbox{ even}. $$ The introduction of the matrices $I_n^0$ guarantees that $\det(\overline{\Delta}_n)=\det(\Gamma_{n,c})$, since we have performed all the necessary cancellations in $\Delta_n$. Geometrically this corresponds to erasing the loops rooted at the vertices $1^n$ and $2^n$ and the edges connecting the vertex $0^n$ to the rest of $\Sigma_n$.\\ \indent Next, we define a rational function $F:\mathbb{R}^6\longrightarrow \mathbb{R}^6$ as follows $$ F(x_1,x_2,x_3,x_4,x_5,x_6)=\left(x_1,x_2,x_3, \frac{x_1x_4^3+x_2x_3x_5x_6}{x_1x_2x_3+x_4x_5x_6},\frac{x_2x_5^3+x_1x_3x_4x_6}{x_1x_2x_3+x_4x_5x_6},\frac{x_3x_6^3+x_1x_2x_4x_5}{x_1x_2x_3+x_4x_5x_6} \right). $$ Denote $F^{(k)}(\underline{x})$ the $k$-th iteration of the function $F$, and $F_i$ the $i$-th projection of $F$ so that $$ F(\underline{x})=(F_1(\underline{x}), \ldots, F_6(\underline{x})). $$ Set $$ F^{(k)}(a,b,c,a,b,c)=(a,b,c,a^{(k)}, b^{(k)}, c^{(k)}). $$ \begin{teo}\label{PROPOSITIONPARTITION} For each $n\geq 3$, the partition function $\Phi_n(a,b,c)$ of the dimer model on the Schreier graph $\Sigma_n$ of the Hanoi Tower group $H^{(3)}$ is \begin{eqnarray*} \Phi_n(a,b,c)&=& \prod_{k=0}^{n-3}\left(abc+a^{(k)}b^{(k)}c^{(k)} \right)^{ 3^{n-k-2}} \left(abc\left(a^{(n-2)}b^{(n-2)} + a^{(n-2)}c^{(n-2)} + b^{(n-2)}c^{(n-2)}\right.\right.\\ &+& \left.\left. a^{(n-2)}b^{(n-2)}c^{(n-2)} + abc\right) + a^2(a^{(n-2)})^3 + b^2(b^{(n-2)})^3 + c^2(c^{(n-2)})^3 \right). \end{eqnarray*} \end{teo}
\begin{proof} We explicitly analyze the case of dimer coverings of type II. It follows from the discussion above that $\Phi_n^{II}(a,b,c)=c\sqrt{\det(\overline{\Delta}_n)}$. More precisely, the factor $c$ corresponds to the label of the loop at $0^n$, and the factor $\sqrt{\det(\overline{\Delta}_n)}$ is the absolute value of the Pfaffian of the oriented adjacency matrix of the graph obtained from $\Sigma_n$ by deleting the edges connecting the vertex $0^n$ to the rest of the graph and the loops rooted at $1^n$ and $2^n$. The cases corresponding to coverings of type III and IV are analogous. If we expand twice the matrix $\overline{\Delta}_n$ using the recursion formula and perform the permutations $(17)$ and $(58)$ for both rows and columns, we get the matrix (for $n$ odd)\footnotesize $$ \begin{pmatrix}
M_{11} & M_{12} \\
M_{21} & M_{22}
\end{pmatrix}= \left(\begin{array}{cccccc|ccc}
0 & 0 & 0 & -cI_{n-2} & -aI_{n-2} & 0 & bI_{n-2}^0 & 0 & 0 \\
0 & 0 & -cI_{n-2} & 0 & -bI_{n-2} & 0 & 0 & aI_{n-2} & 0 \\
0 & cI_{n-2} & 0 & 0 & 0 & aI_{n-2} & 0 & 0 & -bI_{n-2} \\
cI_{n-2} & 0 & 0 & 0 & 0 & bI_{n-2} & -aI_{n-2}^0 & 0 & 0 \\
aI_{n-2} & bI_{n-2} & 0 & 0 & 0 & 0 & 0 & -cI_{n-2} & 0 \\
0 & 0 & -aI_{n-2} & -bI_{n-2} & 0 & 0 & 0 & 0 & cI_{n-2}\\
\hline
-bI_{n-2}^0 & 0 & 0 & aI_{n-2}^0 & 0 & 0 & c'_{n-2} & 0 & 0 \\
0 & -aI_{n-2} & 0 & 0 & cI_{n-2} & 0 & 0 & b'_{n-2} & 0 \\
0 & 0 & bI_{n-2} & 0 & 0 & -cI_{n-2} & 0 & 0 & a'_{n-2} \end{array}\right). $$\normalsize Note that each entry is a square matrix of size $3^{n-2}$. Hence, the Schur complement formula gives \begin{eqnarray}\label{matrice per hanoi} \det(\overline{\Delta}_n) &=& \det(M_{11})\cdot \det(M_{22}-M_{21}M_{11}^{-1}M_{12})\\
&=& (2abc)^{2\cdot 3^{n-2}}\left|\begin{matrix}
c'_{n-2} & -\frac{a^4+b^2c^2}{2abc}I_{n-2}^0 & \frac{b^4+a^2c^2}{2abc}I_{n-2}^0 \\
\frac{a^4+b^2c^2}{2abc}I_{n-2}^0 & b_{n-2}' & -\frac{c^4+a^2b^2}{2abc}I_{n-2} \\
-\frac{b^4+a^2c^2}{2abc}I_{n-2}^0 & \frac{c^4+a^2b^2}{2abc}I_{n-2} & a_{n-2}'
\end{matrix}\right| . \nonumber \end{eqnarray} The matrix obtained in (\ref{matrice per hanoi}) has the same shape as $\overline{\Delta}_{n-1}$, so we can use recursion by defining $$ \overline{\Delta}_k(x_1,\ldots, x_6)= \begin{pmatrix}
c_{k-1}' & -F_4(x_1, \ldots, x_6)I_{k-1}^0 & F_5(x_1, \ldots, x_6)I_{k-1}^0 \\
F_4(x_1, \ldots, x_6)I_{k-1}^0 & b_{k-1}' & -F_6(x_1, \ldots, x_6)I_{k-1} \\
-F_5(x_1, \ldots, x_6)I_{k-1}^0 & F_6(x_1, \ldots, x_6)I_{k-1} & a_{k-1}' \end{pmatrix}. $$ Hence, (\ref{matrice per hanoi}) becomes \begin{eqnarray*} \det(\overline{\Delta}_n)&=&(2abc)^{2\cdot 3^{n-2}}\cdot\det(\overline{\Delta}_{n-1}(a,b,c,a,b,c))\\ &=& (2abc)^{2\cdot 3^{n-2}}(abc+a^{(1)}b^{(1)}c^{(1)})^{2\cdot 3^{n-3}}\cdot\det\left( \overline{\Delta}_{n-2}(F^{(1)}(a,b,c,a,b,c))\right)\\ &=& \prod_{k=0}^{n-3}\left(abc+a^{(k)}b^{(k)}c^{(k)} \right)^{2\cdot 3^{n-k-2}}\cdot\det\left(\overline{\Delta}_2(F^{(n-3)}(a,b,c,a,b,c)) \right)\\ &=& \prod_{k=0}^{n-3}\left(abc+a^{(k)}b^{(k)}c^{(k)} \right)^{2\cdot 3^{n-k-2}}\cdot\left( ab(a^{(n-2)} b^{(n-2)}) +c(c^{(n-2)})^3 \right)^2. \end{eqnarray*} A similar recurrence holds for coverings of type III and IV. For dimer coverings of type I, by using the Schur complement again, we get $$ \det(\Lambda_n)=\prod_{k=0}^{n-3}\left(abc+a^{(k)}b^{(k)}c^{(k)} \right)^{2\cdot 3^{n-k-2}}\cdot\left( abc+a^{(n-2)} b^{(n-2)}c^{(n-2)} \right)^2. $$ Then we use \eqref{partitionassum} and the proof is completed. \end{proof}
\begin{os}\rm The proof above, with $a=b=c=1$, gives the number of dimer coverings of $\Sigma_n$, which we had already computed to be $2^{\frac{3^{n-1}+3}{2}}$ in Corollary \ref{corollaryentropyHanoi}. \end{os}
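The closed formula is also easy to evaluate numerically. The following Python sketch (our own illustration; the function names are ours, not part of the text) iterates the rational map $F$ and evaluates the product formula of the theorem. At $a=b=c=1$ the point $(1,\ldots,1)$ is fixed by $F$, and one recovers the count $2^{\frac{3^{n-1}+3}{2}}$ of the remark above.

```python
def F(x):
    # one step of the rational map F on R^6; the first three
    # coordinates (a, b, c) are left unchanged
    x1, x2, x3, x4, x5, x6 = x
    d = x1 * x2 * x3 + x4 * x5 * x6
    return (x1, x2, x3,
            (x1 * x4**3 + x2 * x3 * x5 * x6) / d,
            (x2 * x5**3 + x1 * x3 * x4 * x6) / d,
            (x3 * x6**3 + x1 * x2 * x4 * x5) / d)

def phi_hanoi(n, a, b, c):
    # partition function of the dimer model on Sigma_n (n >= 3),
    # evaluated via the closed formula of the theorem;
    # xs[k] = F^{(k)}(a,b,c,a,b,c), so xs[k][3:] = (a^(k), b^(k), c^(k))
    xs = [(a, b, c, a, b, c)]
    for _ in range(n - 2):
        xs.append(F(xs[-1]))
    prod = 1.0
    for k in range(n - 2):                 # k = 0, ..., n-3
        ak, bk, ck = xs[k][3:]
        prod *= (a * b * c + ak * bk * ck) ** (3 ** (n - k - 2))
    aN, bN, cN = xs[n - 2][3:]
    tail = (a * b * c * (aN * bN + aN * cN + bN * cN + aN * bN * cN + a * b * c)
            + a**2 * aN**3 + b**2 * bN**3 + c**2 * cN**3)
    return prod * tail
```

For instance, $\Phi_3(1,1,1)=2^6=64$ and $\Phi_4(1,1,1)=2^{15}$, in agreement with $2^{\frac{3^{n-1}+3}{2}}$.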
\section{The dimer model on the Sierpi\'{n}ski gasket}\label{SECTIONSIERPINSKI}
In this section we study the dimer model on a sequence of graphs $\{\Gamma_n\}_{n\geq 1}$ forming finite approximations to the well-known Sierpi\'{n}ski gasket. The local limit of these graphs is an infinite graph known as the infinite Sierpi\'{n}ski triangle. The graphs $\Gamma_n$ are not Schreier graphs of a self-similar group. However, they are self-similar in the sense of \cite{wagner2}, and their structure is very similar to that of the Schreier graphs $\Sigma_n$ of the group $H^{(3)}$, studied in the previous section. More precisely, one can obtain $\Gamma_n$ from $\Sigma_n$ by contracting the edges joining two different elementary triangles. \\ \indent One can think of a few natural ways to label the edges of these graphs with weights of three types. The one that first springs to mind is the \lq\lq directional\rq\rq\ weight, where the edges are labeled $a,b,c$ according to their direction in the graph drawn on the plane; see the picture of $\Gamma_3$ with the directional labeling below. Note that the labeled graph $\Gamma_n$ is obtained from the labeled graph $\Gamma_{n-1}$ by taking three translated copies of the latter (and identifying three pairs of corners, see the picture). \unitlength=0,2mm\begin{center} \begin{picture}(400,210) \put(90,110){$\Gamma_3$} \letvertex A=(200,210)\letvertex B=(170,160)\letvertex C=(140,110) \letvertex D=(110,60)\letvertex E=(80,10)\letvertex F=(140,10)\letvertex G=(200,10) \letvertex H=(260,10)\letvertex I=(320,10) \letvertex L=(290,60)\letvertex M=(260,110)\letvertex N=(230,160) \letvertex O=(200,110)\letvertex P=(170,60)\letvertex Q=(230,60)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$}\drawvertex(M){$\bullet$} \drawvertex(N){$\bullet$}\drawvertex(O){$\bullet$} \drawvertex(P){$\bullet$}\drawvertex(Q){$\bullet$}
\drawundirectededge(E,D){$a$} \drawundirectededge(D,C){$a$} \drawundirectededge(C,B){$a$} \drawundirectededge(B,A){$a$} \drawundirectededge(A,N){$b$} \drawundirectededge(N,M){$b$} \drawundirectededge(M,L){$b$} \drawundirectededge(L,I){$b$} \drawundirectededge(I,H){$c$} \drawundirectededge(H,G){$c$} \drawundirectededge(G,F){$c$} \drawundirectededge(F,E){$c$} \drawundirectededge(N,B){$c$} \drawundirectededge(O,C){$c$} \drawundirectededge(M,O){$c$} \drawundirectededge(P,D){$c$} \drawundirectededge(L,Q){$c$} \drawundirectededge(B,O){$b$} \drawundirectededge(O,N){$a$} \drawundirectededge(C,P){$b$} \drawundirectededge(P,G){$b$} \drawundirectededge(D,F){$b$} \drawundirectededge(Q,M){$a$} \drawundirectededge(G,Q){$a$} \drawundirectededge(H,L){$a$} \drawundirectededge(F,P){$a$} \drawundirectededge(Q,H){$b$} \end{picture} \end{center}
The dimer model on $\Gamma_n$ with this labeling was previously studied in \cite{wagner1}: the authors wrote down a recursion between levels $n$ and $n+1$, obtaining a system of equations involving the partition functions, but did not arrive at an explicit solution. Unfortunately, we were not able to compute the generating function of the dimer coverings corresponding to this \lq\lq directional\rq\rq\ weight function either. Below we describe two other natural labelings of $\Gamma_n$ for which we were able to compute the partition functions: we refer to them as the \lq\lq Schreier\rq\rq\ labeling and the \lq\lq rotation-invariant\rq\rq\ labeling.
\subsection{The \lq\lq Schreier\rq\rq\ labeling}\label{firstmodelll} In the \lq\lq Schreier\rq\rq\ labeling, at a given corner of the labeled graph $\Gamma_n$ we have a copy of the labeled graph $\Gamma_{n-1}$ reflected with respect to the bisector of the corresponding angle; see the picture below. \unitlength=0,2mm \begin{center} \begin{picture}(500,220) \letvertex a=(30,60)\letvertex b=(0,10)\letvertex c=(60,10)
\letvertex d=(160,110)\letvertex e=(130,60)\letvertex f=(100,10)\letvertex g=(160,10)
\letvertex h=(220,10)\letvertex i=(190,60)
\drawvertex(a){$\bullet$}\drawvertex(b){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(g){$\bullet$}\drawvertex(h){$\bullet$} \drawvertex(i){$\bullet$}
\drawundirectededge(b,a){$a$} \drawundirectededge(c,b){$b$} \drawundirectededge(a,c){$c$} \drawundirectededge(e,d){$c$} \drawundirectededge(f,e){$b$} \drawundirectededge(g,f){$a$}
\drawundirectededge(h,g){$c$} \drawundirectededge(i,h){$b$} \drawundirectededge(d,i){$a$} \drawundirectededge(i,e){$b$} \drawundirectededge(e,g){$c$} \drawundirectededge(g,i){$a$} \put(0,80){$\Gamma_1$}
\put(265,80){$\Gamma_3$}
\put(95,80){$\Gamma_2$} \letvertex A=(380,210)\letvertex B=(350,160)\letvertex C=(320,110)
\letvertex D=(290,60)\letvertex E=(260,10)\letvertex F=(320,10)\letvertex G=(380,10)
\letvertex H=(440,10)\letvertex I=(500,10) \letvertex L=(470,60)\letvertex M=(440,110)\letvertex N=(410,160)
\letvertex O=(380,110)\letvertex P=(350,60)\letvertex Q=(410,60)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$}\drawvertex(M){$\bullet$} \drawvertex(N){$\bullet$}\drawvertex(O){$\bullet$} \drawvertex(P){$\bullet$}\drawvertex(Q){$\bullet$}
\drawundirectededge(E,D){$a$} \drawundirectededge(D,C){$c$} \drawundirectededge(C,B){$b$} \drawundirectededge(B,A){$a$} \drawundirectededge(A,N){$c$} \drawundirectededge(N,M){$b$} \drawundirectededge(M,L){$a$} \drawundirectededge(L,I){$c$} \drawundirectededge(I,H){$b$} \drawundirectededge(H,G){$a$} \drawundirectededge(G,F){$c$} \drawundirectededge(F,E){$b$} \drawundirectededge(N,B){$b$} \drawundirectededge(O,C){$c$} \drawundirectededge(M,O){$a$} \drawundirectededge(P,D){$a$} \drawundirectededge(L,Q){$c$} \drawundirectededge(B,O){$a$} \drawundirectededge(O,N){$c$} \drawundirectededge(C,P){$b$} \drawundirectededge(P,G){$a$} \drawundirectededge(D,F){$c$} \drawundirectededge(Q,M){$b$} \drawundirectededge(G,Q){$c$} \drawundirectededge(H,L){$a$}\drawundirectededge(F,P){$b$} \drawundirectededge(Q,H){$b$} \end{picture} \end{center} It turns out that this labeling of the graph $\Gamma_n$ can be alternatively described by considering the labeled Schreier graph $\Sigma_n$ of the Hanoi Towers group and then contracting the edges connecting copies of $\Sigma_{n-1}$ in $\Sigma_n$, hence the name \lq\lq Schreier\rq\rq labeling.\\ \indent For every $n\geq 1$, the number of vertices of $\Gamma_n$
is $|V(\Gamma_n)| = \frac{3}{2}(3^{n-1}+1)$. This implies that, for $n$ odd, $|V(\Gamma_n)|$ is odd and so we allow dimer coverings touching either two or none of the corners. If $n$ is even, $|V(\Gamma_n)|$ is even, and so we allow dimer coverings touching either one or three corners. We say that a dimer covering of $\Gamma_n$ is: \begin{itemize} \item of type $f$, if it covers no corner; \item of type $g^{ab}$ (respectively $g^{ac},g^{bc}$), if it covers the corner of $\Gamma_n$ where two edges $a$ and $b$ (respectively $a$ and $c$, $b$ and $c$) meet, but does not cover any other corner; \item of type $h^{ab}$ (respectively $h^{ac},h^{bc}$), if it does not cover the corner of $\Gamma_n$ where two edges $a$ and $b$ (respectively $a$ and $c$, $b$ and $c$) meet, but it covers the remaining two corners; \item of type $t$, if it covers all three corners. \end{itemize} Observe that for $n$ odd we can only have configurations of type $f$ and $h$, and for $n$ even we can only have configurations of type $g$ and $t$.\\ \indent From now on, we will denote by $f_n,g^{ab}_n,g^{ac}_n,g^{bc}_n,h^{ab}_n,h^{ac}_n,h^{bc}_n,t_n$ the summand in the partition function $\Phi_n(a,b,c)$ counting the coverings of the corresponding type. For instance, for $n=1$, the only nonzero terms are $f_1=1$, $h^{ab}_1=c$, $h^{ac}_1=b$, $h^{bc}_1=a$, so that we get $$ \Phi_1(a,b,c) = 1 + a + b + c. $$ For $n=2$, the only nonzero terms are $t_2=2abc$, $g^{ab}_2=2ab$, $g^{ac}_2=2ac$, $g^{bc}_2=2bc$, so that we get $$ \Phi_2(a,b,c) = 2(abc+ab+ac+bc). $$ In the following pictures, the dark bullet next to a vertex means that this vertex is covered by a dimer in that configuration. Since the graph $\Gamma_{n+1}$ consists of three copies of $\Gamma_n$, a dimer covering of $\Gamma_{n+1}$ can be constructed from three coverings of $\Gamma_n$. 
For example, a configuration of type $f$ for $\Gamma_{n+1}$ can be obtained using three configurations of $\Gamma_n$ of type $g^{ab},g^{ac},g^{bc}$, as the first of the following pictures shows.\unitlength=0,3mm \begin{center} \begin{picture}(400,70) \letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35) \letvertex R=(190,38)\letvertex S=(215,30)\letvertex T=(195,13) \letvertex U=(330,38)\letvertex V=(305,30)\letvertex Z=(325,13) \drawvertex(R){$\bullet$}\drawvertex(S){$\bullet$} \drawvertex(T){$\bullet$}\drawvertex(U){$\bullet$} \drawvertex(V){$\bullet$}\drawvertex(Z){$\bullet$}
\put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}} \put(137,33){$=$}\put(257,33){$+$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center}
\begin{center} \begin{picture}(400,70)
\letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35) \letvertex u=(345,13)\letvertex r=(205,13)\letvertex Z=(315,13)\letvertex T=(175,13)\letvertex s=(225,13) \letvertex S=(295,13) \letvertex a=(55,13)\letvertex b=(105,13) \letvertex V=(215,30)\letvertex t=(305,30) \letvertex R=(190,38)\letvertex U=(330,38)
\drawvertex(a){$\bullet$}\drawvertex(b){$\bullet$} \drawvertex(u){$\bullet$}\drawvertex(r){$\bullet$} \drawvertex(s){$\bullet$}\drawvertex(t){$\bullet$} \drawvertex(R){$\bullet$}\drawvertex(S){$\bullet$} \drawvertex(T){$\bullet$}\drawvertex(U){$\bullet$} \drawvertex(V){$\bullet$}\drawvertex(Z){$\bullet$}
\put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}}
\put(137,33){$=$}\put(257,33){$+$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center}
\begin{center} \begin{picture}(400,70) \letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35)
\letvertex R=(205,13)\letvertex S=(315,13)\letvertex T=(215,30) \letvertex U=(305,30)\letvertex V=(190,38)\letvertex Z=(330,38) \letvertex u=(200,55)\letvertex v=(320,55)\letvertex a=(80,55)
\drawvertex(a){$\bullet$} \drawvertex(R){$\bullet$}\drawvertex(S){$\bullet$} \drawvertex(T){$\bullet$}\drawvertex(U){$\bullet$} \drawvertex(V){$\bullet$}\drawvertex(Z){$\bullet$} \drawvertex(u){$\bullet$}\drawvertex(v){$\bullet$} \put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}}
\put(137,33){$=$}\put(257,33){$+$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center}
\begin{center} \begin{picture}(400,70)
\letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35)
\letvertex u=(55,13)\letvertex v=(105,13)\letvertex z=(80,55) \letvertex R=(175,13)\letvertex S=(195,13)\letvertex T=(225,13) \letvertex U=(295,13)\letvertex V=(325,13)\letvertex Z=(345,13) \letvertex a=(305,30)\letvertex b=(215,30)\letvertex c=(190,38) \letvertex d=(330,38)\letvertex e=(200,55)\letvertex f=(320,55)
\drawvertex(a){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(b){$\bullet$}\drawvertex(e){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(d){$\bullet$} \drawvertex(R){$\bullet$}\drawvertex(S){$\bullet$} \drawvertex(T){$\bullet$}\drawvertex(U){$\bullet$} \drawvertex(V){$\bullet$}\drawvertex(Z){$\bullet$} \drawvertex(u){$\bullet$}\drawvertex(v){$\bullet$} \drawvertex(z){$\bullet$} \put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}}
\put(137,33){$=$}\put(257,33){$+$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center} By using these recursions and arguments similar to those in the proof of Theorem \ref{numerohanoi}, one can show that for $n$ odd (respectively, for $n$ even), the number of coverings of types $f,h^{ab},h^{ac},h^{bc}$ (respectively of types $t,g^{ab},g^{ac},g^{bc}$) is the same, and so is equal to one quarter of the total number of dimer coverings of $\Gamma_n$.
\begin{teo} For each $n$, the partition function of the dimer model on $\Gamma_n$ is: $$ \begin{cases}
\Phi_n(a,b,c)= 2(4abc)^{\frac{3^{n-1}-3}{4}}(abc+ab+ac+bc)& \text{for } n \text{ even}\\
\Phi_n(a,b,c)= (4abc)^{\frac{3^{n-1}-1}{4}}(1+a+b+c)& \text{for } n \text{ odd}. \end{cases} $$ Hence, the number of dimer coverings of $\Gamma_n$ is $2^{\frac{3^{n-1}+3}{2}}$. \end{teo}
\begin{proof} The recursion shows that, for $n$ odd, one has: \begin{eqnarray}\label{pari}
\begin{cases}
f_n = 2g^{ab}_{n-1}g^{ac}_{n-1}g^{bc}_{n-1}\\
h^{ab}_n= 2t_{n-1}g^{ac}_{n-1}g^{bc}_{n-1}\\
h^{ac}_n = 2t_{n-1}g^{ab}_{n-1}g^{bc}_{n-1}\\
h^{bc}_n = 2t_{n-1}g^{ab}_{n-1}g^{ac}_{n-1}
\end{cases} \end{eqnarray} Similarly, for $n$ even, one has: \begin{eqnarray}\label{dispari} \begin{cases}
t_n = 2h^{ab}_{n-1}h^{ac}_{n-1}h^{bc}_{n-1}\\
g^{ab}_n= 2f_{n-1}h^{ac}_{n-1}h^{bc}_{n-1}\\
g^{ac}_n = 2f_{n-1}h^{ab}_{n-1}h^{bc}_{n-1}\\
g^{bc}_n = 2f_{n-1}h^{ab}_{n-1}h^{ac}_{n-1}
\end{cases} \end{eqnarray} The solutions of systems (\ref{pari}) and (\ref{dispari}), with initial conditions $f_1=1, h^{ab}_1=c, h^{ac}_1=b, h^{bc}_1=a$, can be computed by induction on $n$. We find: $$ \begin{cases} t_n=2abc(4abc)^{\frac{3^{n-1}-3}{4}} \\ g^{ab}_n=2ab(4abc)^{\frac{3^{n-1}-3}{4}}\\ g^{ac}_n=2ac(4abc)^{\frac{3^{n-1}-3}{4}}\\ g^{bc}_n=2bc(4abc)^{\frac{3^{n-1}-3}{4}} \end{cases} \ \ \ n \mbox{ even,} \ \ \ \ \begin{cases} f_n=(4abc)^{\frac{3^{n-1}-1}{4}} \\ h^{ab}_n=c(4abc)^{\frac{3^{n-1}-1}{4}}\\ h^{ac}_n=b(4abc)^{\frac{3^{n-1}-1}{4}}\\ h^{bc}_n=a(4abc)^{\frac{3^{n-1}-1}{4}} \end{cases} \ \ \ n \mbox{ odd.} $$ The assertion follows from the fact that $\Phi_n(a,b,c)= f_n + h^{ab}_n+h^{ac}_n+h^{bc}_n$ for $n$ odd and $\Phi_n(a,b,c)= t_n+g^{ab}_n+g^{ac}_n+g^{bc}_n$ for $n$ even. The number of dimer coverings of $\Gamma_n$ is obtained as $\Phi_n(1,1,1)$. \end{proof} \begin{cor} The thermodynamic limit is $\frac{1}{6}\log(4abc)$. In particular, the entropy of absorption of diatomic molecules per site is $\frac{1}{3}\log 2$. \end{cor} The number of dimer coverings and the value of the entropy have already appeared in \cite{taiwan}, where the dimers on $\Gamma_n$ with the weight function constant 1 were considered.\\ \indent Note also that the number of dimer coverings found for Sierpi\'nski graphs $\Gamma_n$ coincides with the number of dimer coverings for the Schreier graphs $\Sigma_n$ of the group $H^{(3)}$ (see Section \ref{SECTION4}).
\subsection{\lq\lq Rotation-invariant\rq\rq\ labeling}
\unitlength=0,2mm \begin{center} \begin{picture}(400,115) \put(120,60){$\Gamma_2$}
\letvertex D=(200,110)\letvertex E=(170,60)\letvertex F=(140,10)\letvertex G=(200,10)
\letvertex H=(260,10)\letvertex I=(230,60)
\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}
\drawundirectededge(E,D){$a$} \drawundirectededge(F,E){$b$} \drawundirectededge(G,F){$a$}
\drawundirectededge(H,G){$b$} \drawundirectededge(I,H){$a$} \drawundirectededge(D,I){$b$} \drawundirectededge(I,E){$c$} \drawundirectededge(E,G){$c$} \drawundirectededge(G,I){$c$} \end{picture} \end{center}
This labeling is, for $n\geq 2$, invariant under rotation by $\frac{2\pi}{3}$. For $n\geq 3$, the copy of $\Gamma_{n-1}$ at the left (respectively upper, right) corner of $\Gamma_n$ is rotated by $0$ (respectively $2\pi/3$, $4\pi/3$).
\begin{center} \begin{picture}(400,205)
\put(90,110){$\Gamma_3$}
\letvertex A=(200,210)\letvertex B=(170,160)\letvertex C=(140,110)
\letvertex D=(110,60)\letvertex E=(80,10)\letvertex F=(140,10)\letvertex G=(200,10)
\letvertex H=(260,10)\letvertex I=(320,10) \letvertex L=(290,60)\letvertex M=(260,110)\letvertex N=(230,160)
\letvertex O=(200,110)\letvertex P=(170,60)\letvertex Q=(230,60)
\drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$}\drawvertex(D){$\bullet$} \drawvertex(E){$\bullet$}\drawvertex(F){$\bullet$} \drawvertex(G){$\bullet$}\drawvertex(H){$\bullet$} \drawvertex(I){$\bullet$}\drawvertex(L){$\bullet$}\drawvertex(M){$\bullet$} \drawvertex(N){$\bullet$}\drawvertex(O){$\bullet$} \drawvertex(P){$\bullet$}\drawvertex(Q){$\bullet$}
\drawundirectededge(E,D){$b$} \drawundirectededge(D,C){$a$} \drawundirectededge(C,B){$b$} \drawundirectededge(B,A){$a$} \drawundirectededge(A,N){$b$} \drawundirectededge(N,M){$a$} \drawundirectededge(M,L){$b$} \drawundirectededge(L,I){$a$} \drawundirectededge(I,H){$b$} \drawundirectededge(H,G){$a$} \drawundirectededge(G,F){$b$} \drawundirectededge(F,E){$a$} \drawundirectededge(N,B){$c$} \drawundirectededge(O,C){$a$} \drawundirectededge(M,O){$b$} \drawundirectededge(P,D){$c$} \drawundirectededge(L,Q){$c$} \drawundirectededge(B,O){$c$} \drawundirectededge(O,N){$c$} \drawundirectededge(C,P){$b$} \drawundirectededge(P,G){$a$} \drawundirectededge(D,F){$c$} \drawundirectededge(Q,M){$a$} \drawundirectededge(G,Q){$b$} \drawundirectededge(H,L){$c$}\drawundirectededge(F,P){$c$} \drawundirectededge(Q,H){$c$} \end{picture} \end{center} We distinguish here the following types of dimer coverings: we say that a dimer covering is of type $g$ (respectively of type $h$, of type $f$, of type $t$) if exactly one (respectively exactly two, none, all) of the three corners of $\Gamma_n$ is (are) covered.
Observe that by symmetry of the labeling we do not need to define $g^{ab}, g^{ac}, g^{bc}, h^{ab}, h^{ac}, h^{bc}$. It is easy to check that in this model for $n$ odd we can only have configurations of type $f$ and $h$, and for $n$ even we can only have configurations of type $g$ and $t$. By using recursion, one also checks that for $n$ even, the number of coverings of type $t$ is one third of the number of coverings of type $g$, and that for $n$ odd, the number of coverings of type $f$ is one third of the number of dimer coverings of type $h$.\\ \indent Next, we compute the partition function $\Phi_n(a,b,c)$ associated with the \lq\lq rotation-invariant\rq\rq\ labeling. We will denote by $f_n,g_n,h_n,t_n$ the summands in $\Phi_n(a,b,c)$ corresponding to coverings of types $f,g,h,t$. For instance, for $n=2$, we have $g_2=3c(a+b)$, $t_2=a^3+b^3$, so that $$ \Phi_2(a,b,c)= a^3+b^3+3c(a+b). $$ \begin{teo}\label{partition2011} The partition function of $\Gamma_n$, for each $n\geq 2$, is given by: $$ \begin{cases}
\Phi_n(a,b,c)= 2^{\frac{3^{n-2}-1}{2}}(a^3+b^3)^{\frac{3^{n-2}-1}{4}}(ac+bc)^{\frac{3^{n-1}-3}{4}}(a^3+b^3+3c(a+b))& \text{for } n \text{ even} \\
\Phi_n(a,b,c)= 2^{\frac{3^{n-2}-1}{2}}(a^3+b^3)^{\frac{3^{n-2}-3}{4}}(ac+bc)^{\frac{3^{n-1}-1}{4}}(3(a^3+b^3)+c(a+b))& \text{for } n \text{ odd}. \end{cases} $$ \end{teo}
\begin{proof} Similarly to how we computed the partition function for the \lq\lq Schreier\rq\rq labeling, we get, for $n\geq 3$ odd: \begin{eqnarray}\label{parisec}
\begin{cases}
f_n = 2\left(\frac{g_{n-1}}{3}\right)^3\\
h_n= 6t_{n-1}\left(\frac{g_{n-1}}{3}\right)^2 \end{cases} \end{eqnarray} and for $n$ even: \begin{eqnarray}\label{disparisec} \begin{cases}
t_n = 2\left(\frac{h_{n-1}}{3}\right)^3 \\
g_n= 6f_{n-1}\left(\frac{h_{n-1}}{3}\right)^2.
\end{cases} \end{eqnarray} The solutions of systems (\ref{parisec}) and (\ref{disparisec}), with the initial conditions $g_2=3c(a+b)$ and $t_2=a^3+b^3$, can be computed by induction on $n$: one gets, for $n$ even, $$ \begin{cases} t_n=2^{\frac{3^{n-2}-1}{2}}(a^3+b^3)^{\frac{3^{n-2}+3}{4}}(ac+bc)^{\frac{3^{n-1}-3}{4}}\\ g_n=3\cdot 2^{\frac{3^{n-2}-1}{2}}(a^3+b^3)^{\frac{3^{n-2}-1}{4}}(ac+bc)^{\frac{3^{n-1}+1}{4}} \end{cases} $$ and for $n\geq 3$ odd $$ \begin{cases} f_n=2^{\frac{3^{n-2}-1}{2}}(a^3+b^3)^{\frac{3^{n-2}-3}{4}}(ac+bc)^{\frac{3^{n-1}+3}{4}}\\ h_n=3\cdot 2^{\frac{3^{n-2}-1}{2}}(a^3+b^3)^{\frac{3^{n-2}+1}{4}}(ac+bc)^{\frac{3^{n-1}-1}{4}}. \end{cases} $$ The assertion follows from the fact that $\Phi_n(a,b,c)= f_n + h_n$ for $n$ odd and $\Phi_n(a,b,c)= t_n+g_n$ for $n$ even. \end{proof}
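The recursions \eqref{parisec} and \eqref{disparisec} and the closed forms of Theorem \ref{partition2011} can be compared numerically. The following Python sketch (ours, not part of the paper; all function names are ours) evaluates both at a sample point:

```python
# Sanity check (ours, not part of the paper) of the recursions for
# f_n, g_n, h_n, t_n against the closed forms of Theorem \ref{partition2011}.

def phi_closed(n, a, b, c):
    """Closed form of Phi_n(a, b, c) from the theorem (n >= 2)."""
    pre = 2 ** ((3 ** (n - 2) - 1) // 2)
    if n % 2 == 0:
        return (pre * (a**3 + b**3) ** ((3 ** (n - 2) - 1) // 4)
                    * (a*c + b*c) ** ((3 ** (n - 1) - 3) // 4)
                    * (a**3 + b**3 + 3 * c * (a + b)))
    return (pre * (a**3 + b**3) ** ((3 ** (n - 2) - 3) // 4)
                * (a*c + b*c) ** ((3 ** (n - 1) - 1) // 4)
                * (3 * (a**3 + b**3) + c * (a + b)))

def phi_recursive(n, a, b, c):
    """Phi_n via the recursions, starting from g_2 = 3c(a+b), t_2 = a^3+b^3."""
    g, t = 3 * c * (a + b), a**3 + b**3
    f = h = None
    for m in range(3, n + 1):
        if m % 2 == 1:                     # odd level: only f_m, h_m occur
            f, h = 2 * (g / 3) ** 3, 6 * t * (g / 3) ** 2
        else:                              # even level: only t_m, g_m occur
            t, g = 2 * (h / 3) ** 3, 6 * f * (h / 3) ** 2
    return f + h if n % 2 == 1 else t + g

a, b, c = 1.3, 0.7, 2.0
for n in range(3, 7):
    assert abs(phi_recursive(n, a, b, c) / phi_closed(n, a, b, c) - 1) < 1e-9
```

For $n=3$, for instance, both sides reduce to $2c^3(a+b)^3 + 6(a^3+b^3)c^2(a+b)^2$.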
\begin{cor} The thermodynamic limit is $\frac{1}{9}\log 2+\frac{1}{18}\log (a^3+b^3)+\frac{1}{6}\log c(a+b)$. \end{cor}
By putting $a=b=c=1$, one finds the same value of entropy as in the \lq\lq Schreier\rq\rq labeling, as expected.
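The constant in the corollary can also be checked numerically. In the sketch below (ours, not from the paper) we normalize $\log\Phi_n$ by $3^n/2$, the asymptotic number of vertices of $\Gamma_n$; this normalization is our assumption, since the corollary does not restate the definition of the thermodynamic limit:

```python
import math

# Numerical check (ours, not from the paper) that
#     log Phi_n(a, b, c) / (3**n / 2)
# converges to (1/9) log 2 + (1/18) log(a^3 + b^3) + (1/6) log(c (a + b)).
# Normalizing by 3**n / 2 (~ the number of vertices of Gamma_n) is our
# assumption about the intended normalization.

def log_phi(n, a, b, c):
    """log of the closed-form partition function of the theorem above."""
    la, lc = math.log(a**3 + b**3), math.log(a*c + b*c)
    head = (3 ** (n - 2) - 1) / 2 * math.log(2)
    if n % 2 == 0:
        return (head + (3 ** (n - 2) - 1) / 4 * la
                     + (3 ** (n - 1) - 3) / 4 * lc
                     + math.log(a**3 + b**3 + 3 * c * (a + b)))
    return (head + (3 ** (n - 2) - 3) / 4 * la
                 + (3 ** (n - 1) - 1) / 4 * lc
                 + math.log(3 * (a**3 + b**3) + c * (a + b)))

a, b, c = 1.2, 0.9, 1.5
limit = (math.log(2) / 9 + math.log(a**3 + b**3) / 18
         + math.log(c * (a + b)) / 6)
errs = [abs(log_phi(n, a, b, c) / (3 ** n / 2) - limit) for n in (6, 10, 14)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-5
```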
\section{Statistics}\label{statistiques}
In this section we study the statistics of occurrences of edges with a given label in a random dimer covering, for the Schreier graphs of $H^{(3)}$ and for the Sierpi\'{n}ski triangles. We compute the mean and the variance and, in some cases, we are able to find the asymptotic behavior of the moment generating function of the associated normalized random variable.
\subsection{Schreier graphs of the Hanoi Towers group}
Denote by $c_n$ (by symmetry, $a_n$ and $b_n$ can be studied in the same way) the random variable that counts the number of occurrences of edges labeled $c$ in a random dimer covering on $\Sigma_n$. In order to study it, introduce the function $\Phi_n^i(c) = \Phi_n^i(1,1,c)$, for $i=I, II, III, IV$, and observe that $\Phi_n^{III}(c)=\Phi_n^{IV}(c)$. Moreover, denote by $\mu_{n,i}$ and $\sigma^2_{n,i}$ the mean and the variance of $c_n$ in a random dimer covering of type $i$, respectively. Note that we have $\mu_{n,III}= \mu_{n,IV}$ and $\sigma^2_{n,III}=\sigma^2_{n,IV}$. \begin{teo} For each $n\geq 1$, $$ \mu_{n,I}=\frac{3^{n-1}+1}{2}, \qquad \mu_{n,II}=\frac{3^{n-1}+3}{2}, \qquad \mu_{n,III}=\frac{3^{n-1}-1}{2}, $$ $$ \sigma^2_{n,I} = \frac{3^n-6n+3}{4}, \qquad \sigma_{n,II}^2 = \frac{3^n+10n-13}{4}, \qquad \sigma_{n,III}^2 = \frac{3^n-2n-1}{4}. $$ \end{teo} \begin{proof} For $a=b=1$, the system \eqref{generalsystem} reduces to $$ \begin{cases} \Phi^I_{n+1} = \frac{\left(\Phi^I_n\right)^3}{c} + \Phi_n^{II}\left(\Phi_n^{III}\right)^2 \\ \Phi^{II}_{n+1} = \frac{\left(\Phi_n^{II}\right)^3}{c}+\Phi_n^{I}\left(\Phi_n^{III}\right)^2\\ \Phi^{III}_{n+1} = \left(\Phi_n^{III}\right)^3+\frac{\Phi_n^{I}\Phi_n^{II}\Phi_n^{III}}{c} \end{cases} $$ with initial conditions $\Phi^I_1(c)=c$, $\Phi^{II}_1(c)=c^2$ and $\Phi^{III}_1(c)=1$. Now put, for every $n\geq 1$, $q_n=\frac{\Phi_n^{II}}{\Phi_n^I}$ and $r_n=\frac{\Phi_n^{III}}{\Phi_n^I}$. Observe that both $q_n$ and $r_n$ are functions of the single variable $c$. In particular, for each $n$, one has $q_n(1)=r_n(1)=1$, since the number of dimer coverings is the same for each type of configuration. By computing the quotient $\Phi_{n+1}^{II}/\Phi_{n+1}^I$ and dividing each term by $(\Phi_n^I)^3$, one gets \begin{eqnarray}\label{uno1} q_{n+1} = \frac{\frac{q_n^3}{c}+r_n^2}{\frac{1}{c}+q_nr_n^2}. \end{eqnarray} Similarly, one has \begin{eqnarray}\label{due2} r_{n+1} = \frac{r_n^3+\frac{q_nr_n}{c}}{\frac{1}{c}+q_nr_n^2}. 
\end{eqnarray} Using (\ref{uno1}) and (\ref{due2}), one can show by induction that $q_n'(1)=1$ and $r_n'(1)=-1$, for every $n\geq 1$. Moreover, $q_n''(1)=4(n-1)$ and $r_n''(1)=n+1$.\\ From the first equation of the system, one gets $\Phi_n^I(c)=\frac{(\Phi_{n-1}^I)^3}{c}\left(1+q_{n-1}(c)r_{n-1}^2(c)\right)$. By applying the logarithm and using recursion, we have: $$ \log(\Phi_n^I(c))=3^{n-1}\log(\Phi_1^I(c))-\sum_{k=0}^{n-2}3^k\log c+\sum_{k=1}^{n-1}3^{n-1-k}\log(1+cq_k(c)r_k^2(c)). $$ Taking the derivative and putting $c=1$, one gets $$
\mu_{n,I}=\frac{\Phi_n^{I'}(c)}{\Phi_n^I(c)}\left|_{c=1}=\frac{3^{n-1}+1}{2},\right. $$ which is one third of the total number of edges involved in such a covering, as was to be expected from the symmetry of the graph and of the labeling. Differentiating once more, one gets $$
\frac{\Phi_n^{I''}(c)\Phi_n^I(c)-(\Phi_n^{I'}(c))^2}{(\Phi_n^I(c))^2}\left|_{c=1}=\frac{3^{n-1}-6n+1}{4}.\right. $$ Hence, $$ \sigma_{n,I}^2=\frac{\Phi_n^{I'}(1)}{\Phi_n^I(1)} + \frac{\Phi_n^{I''}(1)\Phi_n^I(1)}{(\Phi_n^I(1))^2}-\frac{(\Phi_n^{I'}(1))^2}{(\Phi_n^I(1))^2} =\frac{3^n-6n+3}{4}. $$ In a similar way one can find $\mu_{n,II}, \sigma_{n,II}^2,\mu_{n,III},\sigma_{n,III}^2$. \end{proof}
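The closed formulas for $\mu_{n,I}$ and $\sigma^2_{n,I}$ can be checked directly against the reduced system. The following sketch (ours, not from the paper) iterates the system at $a=b=1$ and recovers mean and variance as the first two cumulants, i.e. the derivatives $\psi'(0)$ and $\psi''(0)$ of $\psi(s)=\log\Phi_n^I(e^s)$, approximated by finite differences:

```python
import math

# Check (ours, not from the paper) of mu_{n,I} and sigma^2_{n,I}: iterate
# the reduced system at a = b = 1 and read the mean and variance off as
# the first two cumulants psi'(0), psi''(0), with psi(s) = log Phi_n^I(e^s),
# approximated by central finite differences.

def phi_I(n, c):
    # initial conditions Phi^I_1 = c, Phi^II_1 = c^2, Phi^III_1 = 1
    pI, pII, pIII = c, c * c, 1.0
    for _ in range(n - 1):
        # the right-hand side tuple is evaluated with the old values, so
        # all three updates use the level-n quantities simultaneously
        pI, pII, pIII = (pI**3 / c + pII * pIII**2,
                         pII**3 / c + pI * pIII**2,
                         pIII**3 + pI * pII * pIII / c)
    return pI

def stats(n, h=1e-3):
    psi = lambda s: math.log(phi_I(n, math.exp(s)))
    mean = (psi(h) - psi(-h)) / (2 * h)
    var = (psi(h) - 2 * psi(0.0) + psi(-h)) / h**2
    return mean, var

for n in (2, 3, 4):
    mean, var = stats(n)
    assert abs(mean - (3 ** (n - 1) + 1) / 2) < 1e-4
    assert abs(var - (3 ** n - 6 * n + 3) / 4) < 1e-3
```

For $n=3$, for example, $\Phi_3^I(1)=16$ and the finite differences recover $\mu_{3,I}=5$ and $\sigma^2_{3,I}=3$.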
Observe that one has $\mu_{n,II} > \mu_{n,I} >\mu_{n,III}$: this corresponds to the fact that the distribution of labels $a,b,c$ is uniform in a configuration of type I, but not in the others. In fact, a configuration of type II has a loop labeled $c$, while a configuration of type III (resp. IV) has a loop labeled $b$ (resp. $a$): so the label $c$ is \lq\lq dominant\rq\rq in type II, whereas the label $b$ (resp. $a$) is \lq\lq dominant\rq\rq in type III (resp. IV).
\subsection{Sierpi\'{n}ski triangles}
\begin{teo}\label{Theoprobabilityfunctions} For Sierpi\'{n}ski triangles with the \lq\lq Schreier\rq\rq labeling, for each $n\geq 1$, the random variable $c_n$ has $$ \mu_n=\frac{3^{n-1}}{4}, \qquad \sigma^2_n=\frac{3}{16}. $$ Moreover, the associated probability density function is $$ f(x) = \begin{cases} \frac{3}{4}\delta(x+\frac{1}{\sqrt{3}})+\frac{1}{4}\delta(x-\sqrt{3})& n \ \text{odd}\\ \frac{3}{4}\delta(x-\frac{1}{\sqrt{3}})+\frac{1}{4}\delta(x+\sqrt{3})&n\ \text{even}, \end{cases} $$ where $\delta$ denotes the Dirac delta function. \end{teo} \begin{proof} Putting $a=b=1$, one gets $$ \begin{cases} \Phi_n(c)= (4c)^{\frac{3^{n-1}-1}{4}}(c+3)& \text{for } n \text{ odd}\\ \Phi_n(c)= 2(4c)^{\frac{3^{n-1}-3}{4}}(3c+1)& \text{for } n \text{ even}. \end{cases} $$ The mean and the variance of $c_n$ can be computed as in the previous theorem, by using logarithmic derivatives. Now let $C_n = \frac{c_n-\mu_n}{\sigma_n}$ be the normalized random variable; then the moment generating function of $C_n$ is given by $$ \mathbb{E}(e^{sC_n}) = e^{-\mu_ns/\sigma_n}\mathbb{E}(e^{sc_n/\sigma_n}) = e^{-\mu_ns/\sigma_n}\frac{\Phi_n(e^{s/\sigma_n})}{\Phi_n(1)}. $$ We get $$ \mathbb{E}(e^{sC_n}) = \begin{cases} \frac{e^{\sqrt{3}s}+3e^{-\frac{s}{\sqrt{3}}}}{4} & n\ \text{odd}\\ \frac{e^{-\sqrt{3}s}+3e^{\frac{s}{\sqrt{3}}}}{4} & n\ \text{even} \end{cases} $$ and the claim follows. \end{proof}
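The identity between the normalized moment generating function computed from $\Phi_n(c)$ and the two-atom law of Theorem \ref{Theoprobabilityfunctions} can be verified numerically; the sketch below (ours, not from the paper) does this for $n$ odd:

```python
import math

# Check (ours, not from the paper) that the moment generating function of
# the normalized variable C_n, computed from Phi_n(c) with a = b = 1,
# agrees with the two-atom law of the theorem above for n odd.

def mgf(n, s):
    """E(exp(s C_n)) for n odd, from Phi_n(c) = (4c)^((3^(n-1)-1)/4) (c+3)."""
    mu, sigma = 3 ** (n - 1) / 4, math.sqrt(3) / 4
    phi = lambda c: (4 * c) ** ((3 ** (n - 1) - 1) / 4) * (c + 3)
    return math.exp(-mu * s / sigma) * phi(math.exp(s / sigma)) / phi(1.0)

for n in (3, 5):
    for s in (-0.7, 0.2, 1.1):
        two_atom = (math.exp(math.sqrt(3) * s)
                    + 3 * math.exp(-s / math.sqrt(3))) / 4
        assert abs(mgf(n, s) - two_atom) < 1e-9
```

Algebraically, the exponent $(3^{n-1}-1)/4 - \mu_n/\sigma_n \cdot \sigma_n = -1/4$ collapses all $n$-dependence, which is why the result only depends on the parity of $n$.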
Observe that the moment generating functions that we have found only depend on the parity of $n$. The following theorem gives an interpretation of the probability density functions given in Theorem \ref{Theoprobabilityfunctions}. \begin{teo}\label{spiegazionedensity} For $n$ odd, the normalized random variable $C_n$ is equal to $\sqrt{3}$ in each covering of type $h^{ab}$ and to $\frac{-1}{\sqrt{3}}$ in each covering of type $f,h^{ac},h^{bc}$. For $n$ even, the normalized random variable $C_n$ is equal to $-\sqrt{3}$ in each covering of type $g^{ab}$ and to $\frac{1}{\sqrt{3}}$ in each covering of type $t,g^{ac},g^{bc}$. \end{teo}
\begin{proof} The assertion can be proved by induction. For $n=1,2$ a direct computation shows that the assertion is true. We give here only the proof for $n>2$ odd. The following pictures show how to get a labeled dimer covering for $\Gamma_n$, $n$ odd, starting from three dimer coverings of $\Gamma_{n-1}$. One can easily check that these recursions hold, by using the definition of the labeling of $\Gamma_n$. \unitlength=0,4mm \begin{center} \begin{picture}(400,70) \letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35) \letvertex R=(190,38)\letvertex S=(215,30)\letvertex T=(195,13) \letvertex U=(330,38)\letvertex V=(305,30)\letvertex Z=(325,13) \drawvertex(R){$\bullet$}\drawvertex(S){$\bullet$} \drawvertex(T){$\bullet$}\drawvertex(U){$\bullet$} \drawvertex(V){$\bullet$}\drawvertex(Z){$\bullet$}
\put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}} \put(137,33){$=$}\put(257,33){$+$}
\put(78,30){$f$}\put(195,42){$g^{bc}$}\put(180,17){$g^{ac}$} \put(210,17){$g^{ab}$}\put(315,42){$g^{ab}$}\put(300,17){$g^{bc}$}\put(330,17){$g^{ac}$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center}
\begin{center} \begin{picture}(400,70)
\letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35) \letvertex u=(345,13)\letvertex r=(205,13)\letvertex Z=(315,13)\letvertex T=(175,13)\letvertex s=(225,13) \letvertex S=(295,13) \letvertex a=(55,13)\letvertex b=(105,13) \letvertex V=(215,30)\letvertex t=(305,30) \letvertex R=(190,38)\letvertex U=(330,38)
\put(78,30){$h^{ac}$}\put(195,42){$g^{bc}$}\put(180,17){$g^{ab}$} \put(213,17){$t$}\put(315,42){$g^{ab}$}\put(303,17){$t$}\put(330,17){$g^{bc}$}
\drawvertex(a){$\bullet$}\drawvertex(b){$\bullet$} \drawvertex(u){$\bullet$}\drawvertex(r){$\bullet$} \drawvertex(s){$\bullet$}\drawvertex(t){$\bullet$} \drawvertex(R){$\bullet$}\drawvertex(S){$\bullet$} \drawvertex(T){$\bullet$}\drawvertex(U){$\bullet$} \drawvertex(V){$\bullet$}\drawvertex(Z){$\bullet$}
\put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}}
\put(137,33){$=$}\put(257,33){$+$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center}
\begin{center} \begin{picture}(400,70) \letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35)
\letvertex a=(80,55)\letvertex b=(55,13)\letvertex d=(200,55)\letvertex e=(190,37) \letvertex i=(211,37)\letvertex f=(175,12)\letvertex g=(205,12)
\letvertex l=(320,55)\letvertex m=(305,30)\letvertex n=(295,12)\letvertex o=(315,12) \letvertex q=(335,30)
\put(78,30){$h^{bc}$}\put(198,42){$t$}\put(180,17){$g^{ab}$} \put(210,17){$g^{ac}$}\put(315,42){$g^{ac}$}\put(303,17){$t$}\put(330,17){$g^{ab}$}
\drawvertex(a){$\bullet$} \drawvertex(b){$\bullet$}\drawvertex(f){$\bullet$} \drawvertex(d){$\bullet$}\drawvertex(g){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(l){$\bullet$} \drawvertex(i){$\bullet$}\drawvertex(m){$\bullet$} \drawvertex(n){$\bullet$} \drawvertex(o){$\bullet$}\drawvertex(q){$\bullet$}
\put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}}
\put(137,33){$=$}\put(257,33){$+$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center}
\begin{center} \begin{picture}(400,70) \letvertex A=(80,60)\letvertex B=(50,10)\letvertex C=(110,10)
\letvertex D=(200,60)\letvertex E=(185,35)\letvertex F=(170,10)\letvertex G=(200,10)
\letvertex H=(230,10)\letvertex I=(215,35) \letvertex L=(320,60)\letvertex M=(305,35)\letvertex N=(290,10)
\letvertex O=(320,10)\letvertex P=(350,10)\letvertex Q=(335,35)
\letvertex a=(80,55)\letvertex c=(105,13)\letvertex d=(200,55)\letvertex e=(190,38) \letvertex i=(210,38)\letvertex g=(195,13)\letvertex h=(225,13) \letvertex l=(320,55)\letvertex m=(305,30)\letvertex q=(335,30) \letvertex o=(325,13)\letvertex p=(345,13)
\put(78,30){$h^{ab}$}\put(198,42){$t$}\put(180,17){$g^{ac}$} \put(210,17){$g^{bc}$}\put(315,42){$g^{ac}$}\put(300,17){$g^{bc}$}\put(333,17){$t$}
\drawvertex(a){$\bullet$}\drawvertex(m){$\bullet$} \drawvertex(c){$\bullet$}\drawvertex(q){$\bullet$} \drawvertex(d){$\bullet$}\drawvertex(o){$\bullet$} \drawvertex(e){$\bullet$}\drawvertex(p){$\bullet$} \drawvertex(i){$\bullet$}\drawvertex(g){$\bullet$} \drawvertex(h){$\bullet$}\drawvertex(l){$\bullet$}
\put(80,60){\circle*{1}}\put(50,10){\circle*{1}}\put(110,10){\circle*{1}} \put(200,60){\circle*{1}}\put(185,35){\circle*{1}}\put(170,10){\circle*{1}}\put(200,10){\circle*{1}} \put(230,10){\circle*{1}}\put(215,35){\circle*{1}} \put(320,60){\circle*{1}}\put(305,35){\circle*{1}}\put(290,10){\circle*{1}} \put(320,10){\circle*{1}}\put(350,10){\circle*{1}}\put(335,35){\circle*{1}}
\put(137,33){$=$}\put(257,33){$+$}
\drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,A){} \drawundirectededge(D,E){} \drawundirectededge(E,F){} \drawundirectededge(F,G){} \drawundirectededge(G,H){} \drawundirectededge(H,I){} \drawundirectededge(I,D){} \drawundirectededge(E,G){} \drawundirectededge(G,I){} \drawundirectededge(I,E){} \drawundirectededge(L,M){} \drawundirectededge(M,N){} \drawundirectededge(N,O){} \drawundirectededge(O,P){} \drawundirectededge(P,Q){} \drawundirectededge(Q,L){} \drawundirectededge(M,Q){} \drawundirectededge(M,O){} \drawundirectededge(O,Q){} \end{picture} \end{center} If we look at the first three pictures, we see that the variable $C_n$ in a dimer covering of type $f, h^{ac}, h^{bc}$ is given, by induction, by the sum of two contributions $1/\sqrt{3}$ and one contribution $-\sqrt{3}$, which gives $-1/\sqrt{3}$. The fourth picture shows that the variable $C_n$ in a dimer covering of type $h^{ab}$ is given, by induction, by the sum of three contributions $1/\sqrt{3}$, which gives $\sqrt{3}$. A similar proof can be given for $n$ even. The statement follows. \end{proof}
\begin{teo} For Sierpi\'nski graphs with the \lq\lq rotation-invariant\rq\rq\ labeling, for each $n\geq 2$, the random variables $a_n$ and $b_n$ have $$ \mu_n=\frac{3^{n-1}}{4} \qquad \sigma^2_n=\frac{4\cdot 3^{n-1}+3}{4} $$ and they are asymptotically normal. The random variable $c_n$ has $$ \mu_n=\frac{3^{n-1}}{4} \qquad \sigma^2_n=\frac{3}{16} $$ and the associated probability density function is $$ f(x) = \begin{cases} \frac{3}{4}\delta(x-\frac{1}{\sqrt{3}})+\frac{1}{4}\delta(x+\sqrt{3}) \qquad \mbox{for }n \mbox{ even}\\ \frac{3}{4}\delta(x+\frac{1}{\sqrt{3}})+\frac{1}{4}\delta(x-\sqrt{3}) \qquad \mbox{for }n \mbox{ odd}. \end{cases} $$ \end{teo} \begin{proof} By putting $b=c=1$ in the partition functions given in Theorem \ref{partition2011}, one gets $$ \begin{cases} \Phi_n(a)= 2^{\frac{3^{n-2}-1}{2}}(a^3+1)^{\frac{3^{n-2}-1}{4}}(a+1)^{\frac{3^{n-1}-3}{4}}(a^3+3a+4)& \text{for } n \text{ even} \\ \Phi_n(a)= 2^{\frac{3^{n-2}-1}{2}}(a^3+1)^{\frac{3^{n-2}-3}{4}}(a+1)^{\frac{3^{n-1}-1}{4}}(3a^3+a+4)& \text{for } n \text{ odd}. \end{cases} $$ Similarly, one can find $$ \begin{cases} \Phi_n(c)= 2^{\frac{3^{n-1}-1}{2}}c^{\frac{3^{n-1}-3}{4}}(3c+1)& \text{for } n \text{ even} \\ \Phi_n(c)= 2^{\frac{3^{n-1}-1}{2}}c^{\frac{3^{n-1}-1}{4}}(c+3)& \text{for } n \text{ odd}. \end{cases} $$ Then one proceeds as in the previously studied cases. \end{proof} A similar interpretation as in the case of the \lq\lq Schreier\rq\rq labeling can be given. \begin{teo} For $n$ even, the normalized random variable $C_n$ is equal to $-\sqrt{3}$ in each covering of type $t$ and to $\frac{1}{\sqrt{3}}$ in each covering of type $g$. For $n$ odd, the normalized random variable $C_n$ is equal to $\sqrt{3}$ in each covering of type $f$ and to $-\frac{1}{\sqrt{3}}$ in each covering of type $h$. \end{teo}
\begin{os}\label{finaleremark}\rm In \cite{wagner1} the authors study the statistical properties of the dimer model on $\Gamma_n$ endowed with the \lq\lq directional\rq\rq labeling: for $n$ even (which is the only case allowing a perfect matching), they get the following expressions for the mean and the variance of the number of labels $c$: $$ \mu_n=\frac{3^{n-1}+1}{4} \qquad \sigma_n^2=\frac{3^{n-1}-3}{4}. $$ Moreover, they show that the associated normalized random variable tends weakly to the normal distribution. \end{os}
\end{document}
\begin{document}
\thispagestyle{title}
{\setlength{\parindent}{0pt}\phantom{x}
Karsten Kremer
{\Large \bf Normal origamis of Mumford curves}
{\small 19 February 2010}
{\small {\bf Abstract.} An origami (also known as square-tiled surface) is a Riemann surface covering a torus with at most one branch point. Lifting two generators of the fundamental group of the punctured torus decomposes the surface into finitely many unit squares. By varying the complex structure of the torus one obtains easily accessible examples of Teichm\"uller curves in the moduli space of Riemann surfaces. The $p$-adic analogues of Riemann surfaces are Mumford curves. A $p$-adic origami is defined as a covering of Mumford curves with at most one branch point, where the bottom curve has genus one. A classification of all normal non-trivial $p$-adic origamis is presented and used to calculate some invariants. These can be used to describe $p$-adic origamis in terms of glueing squares.\\
} }
\hrule
\section{Introduction} An origami is a covering of a nonsingular complex projective curve over an elliptic curve which may be ramified only over zero. We will see in Section \ref{sct:complex} that such an object defines a curve (called origami-curve) in the moduli space of Riemann surfaces. Origamis have been studied recently for example by Lochak \cite{Loch}, Zorich \cite{Zor}, Schmith\"usen \cite{Schm2} and Herrlich \cite{HS1}. Several equivalent definitions for origamis are in use, see \zitat{Ch.~1}{KreDiss}. Our definition can be generalized to other ground fields, such as the $p$-adic field $\ensuremath{\mathds{C}}_p$. In the complex world we get every Riemann surface as a quotient of an open subset $\Omega$ of $\ensuremath{\mathds{P}}^1(\ensuremath{\mathds{C}})$ by a discrete subgroup $G$ of $\PSL_2(\ensuremath{\mathds{C}})$. In the $p$-adic world the analogues of Riemann surfaces, which admit a similar uniformization $\Omega/G$, are called \emph{Mumford curves}. But contrary to the complex world not every nonsingular projective curve over $\ensuremath{\mathds{C}}_p$ is a Mumford curve. Mumford curves have been thoroughly studied; two textbooks giving a comprehensive introduction are \cite{GvdP} and \cite{FvdP}. \pagebreak[2]
As Mumford curves are the $p$-adic analogues of Riemann surfaces we define $p$-adic origamis to be coverings of Mumford curves with only one branch point, where the bottom curve has genus one. In Section \ref{sct:p-adic_origamis} we classify all normal non-trivial $p$-adic origamis. This is done using the description of the bottom curve as an orbifold $\Omega/G$, where $\Omega \subset \ensuremath{\mathds{P}}^1(\ensuremath{\mathds{C}}_p)$ is open and $G$ is a group acting discontinuously on $\Omega$. These groups and the corresponding orbifolds can be studied by looking at the action of $G$ on the Bruhat-Tits tree of a suitable subfield of $\ensuremath{\mathds{C}}_p$ and the resulting quotient graph of groups. This has been done by Herrlich \cite{Her}, and more recently by Kato \cite{Kato2} and Bradley \cite{Brad}.
We will see in Section \ref{sct:p-adic_origamis} that all normal $p$-adic origamis with a given Galois group $H$ are of the type $\Omega/\Gamma \to \Omega/G$ with the following possible choices for the groups $\Gamma$ and $G$: The quotient graph of $G$ can be contracted to \begin{center}
\GraphOrigami{\Delta}{C_a}{C_b}
\end{center} for a $p$-adic triangle group $\Delta$ (where the single vertex represents a subtree with fundamental group $\Delta$, the arrow indicates an end of the graph, and $C_a$ and $C_b$ are finite cyclic groups of order $a$ resp. $b$), which means that $G$ is isomorphic to the fundamental group of this graph, i.e. \[G \cong \left<\Delta,\gamma;\, \gamma\alpha_1=\alpha_2\gamma\right> \text{ with $\alpha_i \in \Delta$ of order $a$.}\] $\Gamma$ is the kernel of a morphism $\varphi : G \to H$ which is injective when restricted to the vertex groups of the quotient graph of $G$. The ramification index of the $p$-adic origami is then $b$. We have a similar result (Theorem \ref{thm:p-adic_origami_aut}) for the automorphism group of the $p$-adic origami.
Given a $p$-adic origami which is defined over ${\overline{\ensuremath{\mathds{Q}}}}$ we can change the ground field to $\ensuremath{\mathds{C}}$ and know that there our origami can be described as a surface glued from squares. Actually doing this is usually hard, because we would have to work out equations for the Mumford curves and for the complex curves corresponding to the Riemann surfaces. Nevertheless in many cases we can find out which complex origami-curve belongs to our $p$-adic origami, as mostly the curve is already uniquely determined by fixing the Galois group. In Section \ref{sct:unique_curve} we prove that this is true for the Galois groups $D_n \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ and $A_4 \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ (with $n,m \in \ensuremath{\mathds{N}}$ and $n$ odd).
We will also discuss some cases where this does not work, i.e. where there are several origami-curves of origamis with the same Galois group. For groups of order less than or equal to 250 this happens only for 30 groups. To construct examples we can take an origami-curve which is not fixed by a certain element $\sigma$ of the absolute Galois group $\Gal({\overline{\ensuremath{\mathds{Q}}}}/\ensuremath{\mathds{Q}})$. As the action of $\Gal({\overline{\ensuremath{\mathds{Q}}}}/\ensuremath{\mathds{Q}})$ on origami-curves is faithful by \zitat{Theorem 5.4}{Mol} we can find such a curve for any given $\sigma$. In this case of course both the curve and its image contain origamis with the same Galois group, and we suspect that all other known invariants of origami-curves are equal as well.
\setlength{\textheight}{19.0cm} \section{Complex origamis}\label{sct:complex}
An \defi{origami} is a covering $p : X \to E$ of degree $d$ of a connected surface $X$ over the torus $E=\ensuremath{\mathds{R}}^2/\ensuremath{\mathds{Z}}^2$ which may be ramified only over $0 \in E$. We can lift the unit square defining $E$ to $X$. This yields a decomposition of $X$ into $d$ copies of the unit square. Removing the ramification points leads to an unramified restriction $p : X^* := X \setminus p^{-1}(0) \to E^* := E \setminus \set{0}$ of degree $d$. \pagebreak[4]
The covering is \defi{normal} if a (Galois) group $G$ acts on $X$ such that $E = X/G$. The monodromy of such a covering is by definition the action\footnote{\label{foot:ori:right_to_left}Note that if we want to consider elements $\alpha,\beta \in \pi_1(E^*,P)$ as permutations of the fiber $p^{-1}(P)$, we need $\alpha\beta$ to be the path \emph{first} along $\beta$ and \emph{afterwards} along $\alpha$. This may not be an intuitive way to define multiplication in $\pi_1(E^*,P)$, but otherwise the group would not act on the fiber from the left.} of the fundamental group $\pi_1(E^*,\overline{P})$ on the fiber $p^{-1}(\overline{P})=G \cdot P$ over any basepoint $\overline{P}=p(P)$ with $P \in X^*$. Without loss of generality we can choose both coordinates of $\overline{P}$ to be non-zero in $\ensuremath{\mathds{R}}/\ensuremath{\mathds{Z}}$. The fundamental group $\pi_1(E^*,\overline{P})$ is isomorphic to the free group generated by $x$ and $y$, where $x$ is the closed path starting at $\overline{P}$ in horizontal direction, and $y$ is the closed path starting at $\overline{P}$ in vertical direction. Now we have $F_2$ acting on the orbit $G \cdot P$, and this action has to be compatible with the group action of $G$, thus the monodromy is defined by a homomorphism $f : F_2 \to G$. If we think of the squares making up $X$ labelled by elements of $G$ then the right neighbor of $g \in G$ is $f(x)g$, and its upper neighbor is $f(y)g$. As the surface is connected $f$ has to be surjective. Of course the monodromy map is only well-defined up to an automorphism of $G$.
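As a small illustration (our hypothetical example, not one from the paper), one can carry out this bookkeeping for $G=S_3$ with monodromy $f(x)=(1\,2\,3)$, $f(y)=(1\,2)$: squares are labelled by the six elements of $G$, surjectivity of $f$ corresponds to connectedness, and the genus follows from the standard facts that the local monodromy over $0$ is $f([x,y])$ and the Riemann-Hurwitz formula for a covering of the torus:

```python
from itertools import permutations

# Hypothetical illustration (ours, not an example from the paper): a normal
# origami with Galois group G = S_3 and monodromy f(x) = (1 2 3),
# f(y) = (1 2).  Squares are labelled by the six elements of G; the right
# neighbour of g is f(x)g and the upper neighbour is f(y)g.  The genus
# computation uses the standard facts that the local monodromy over 0 is
# f([x, y]) and Riemann-Hurwitz for a covering of the torus.

def mul(s, t):                       # composition: apply t first, then s
    return tuple(s[t[i]] for i in range(len(t)))

def inv(s):
    return tuple(sorted(range(len(s)), key=lambda i: s[i]))

def order(s):
    e, t, k = tuple(range(len(s))), s, 1
    while t != e:
        t, k = mul(s, t), k + 1
    return k

fx, fy = (1, 2, 0), (1, 0, 2)        # (1 2 3) and (1 2) acting on {0, 1, 2}

# closure of {fx, fy} under left multiplication: f is surjective (i.e. the
# origami is connected) iff this reaches all of S_3
G = {(0, 1, 2)}
frontier = [(0, 1, 2)]
while frontier:
    u = frontier.pop()
    for s in (fx, fy):
        v = mul(s, u)
        if v not in G:
            G.add(v)
            frontier.append(v)
assert G == set(permutations(range(3)))          # connected, degree d = 6

comm = mul(mul(fx, fy), mul(inv(fx), inv(fy)))   # image of [x, y]
r, d = order(comm), len(G)                       # ramification index, degree
genus = ((d // r) * (r - 1) + 2) // 2            # 2g - 2 = (d/r)(r - 1)
assert (r, d, genus) == (3, 6, 3)
```

Here $f([x,y])$ has order $3$, so all six squares assemble into a surface of genus $3$ branched over a single point with two preimages.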
We identify the torus $E$ with $\ensuremath{\mathds{C}} / \ensuremath{\mathds{Z}}[i]$, thus an origami $p : X \to E$ becomes a Riemann surface using the coordinate charts induced by $p$. In fact, we get a lot of Riemann surfaces: for every $A \in \SL_2(\ensuremath{\mathds{R}})$ we can define the lattice $\Lambda_A = A\cdot \ensuremath{\mathds{Z}}^2$ and the homeomorphism $c_A : \ensuremath{\mathds{R}}^2/\ensuremath{\mathds{Z}}^2 \to \ensuremath{\mathds{R}}^2/\Lambda_A =: E_A, x \mapsto A\cdot x$. The identification of $\ensuremath{\mathds{R}}^2$ with $\ensuremath{\mathds{C}}$ then leads to new coordinate charts induced by $p_A := c_A \circ p$. We get again a complex structure on the surface $X$ which we denote by $X_A$.
If the torus $E_A$ is isomorphic to our torus $E = E_I$ as a Riemann surface, then $p_A : X \to E_A \cong E$ defines another origami. We thus get an action of $\SL_2(\ensuremath{\mathds{Z}})$ on the set of all origamis. If $\varphi \in \Aut(F_2)$ is a preimage of $A \in \SL_2(\ensuremath{\mathds{Z}}) \cong \Out(F_2) = \Aut(F_2)/\Inn(F_2)$, then the action of $A$ on the monodromy $f : F_2 \to G$ of a normal origami is given by $f \circ \varphi^{-1}$ (\zitat{Prop. 1.6}{KreDiss}).
The moduli space $\ensuremath{\mathcal{M}}_{g,n}$ is the set of isomorphism classes of Riemann surfaces of genus $g$ with $n$ punctures, endowed with the structure of an algebraic variety. An origami defines a subset $c(O) = \set{X_A : A \in \SL_2(\ensuremath{\mathds{R}})}$ in $\ensuremath{\mathcal{M}}_{g,n}$. This set turns out to be an algebraic curve (\zitat{Prop. 3.2 ii)}{Loch}), called an origami-curve. The origami-curves of two origamis of the same degree $d$ are equal if and only if the two origamis are in the same $\SL_2(\ensuremath{\mathds{Z}})$ orbit (\zitat{Prop. 5 b)}{HS1}). The set of normal origamis with a given Galois group $G$ can thus be identified with the set $ \Aut(G) \backslash \Epi (F_2,G) / \Out(F_2)$.
We would like to define the automorphism group of an origami as an invariant of the origami-curve. Therefore we call a bijective map $\sigma : X \to X$ an \defi{automorphism} of the origami, if it induces for every $A \in \SL_2(\ensuremath{\mathds{R}})$ via $X \to E \to E_A$ a well-defined automorphism on $E_A$. An automorphism is called \defi{translation} if it induces the identity on $E$. An origami of degree $d$ is normal if and only if it has $d$ translations. In this case the group of translations is isomorphic to the Galois group $G$ (\zitat{Prop. 3.12}{KreDiss}).
\section{Discontinuous groups}\label{sct:discontinuous}
After defining Mumford curves we will construct in this section the \emph{Bruhat-Tits-Tree} $\ensuremath{\mathcal{B}}$ for an extension of $\ensuremath{\mathds{Q}}_p$ using a quite concrete definition from \cite{Her}. A Mumford curve is closely related to the quotient graph of an action of $G$ on a subtree of $\ensuremath{\mathcal{B}}$. Often one defines a suitable subtree such that the quotient becomes a finite graph, but instead we will follow Kato \cite{Kato2}, who uses a slightly larger quotient graph, which can be used to control the ramification behavior of coverings of Mumford curves.
\begin{defn} Let $k$ be a field which is complete with respect to a non-ar\-chi\-me\-de\-an valuation and $G$ a subgroup of $\PGL_2(k)$. A point $x \in \ensuremath{\mathds{P}}^1(k)$ is called a \defi{limit point} of $G$, if there exist pairwise different $\gamma_n \in G$ $(n \in \ensuremath{\mathds{N}})$ and a point $y \in \ensuremath{\mathds{P}}^1(k)$ satisfying $\lim \gamma_n(y) = x$. The set of limit points is denoted by $\ensuremath{\mathcal{L}}(G)$.
$G$ is a \defi{discontinuous group}, if $\Omega(G) := \ensuremath{\mathds{P}}^1(k) \setminus \ensuremath{\mathcal{L}}(G)$ is nonempty and for each $x \in \ensuremath{\mathds{P}}^1(k)$ the closure $\overline{Gx}$ of its orbit is compact. A discontinuous group $G$ is called a \defi{Schottky group} if it is finitely generated and has no non-trivial elements of finite order. Every Schottky group is free (\zitat{Theorem I.3.1}{GvdP}).
A discontinuous group $G$ acts properly discontinuously on $\Omega(G)$. For a Schottky group $G$ we know from \zitat{Theorem III.2.2}{GvdP} that the quotient $\Omega(G)/G$ is the analytification of an algebraic curve. Such a curve is called a \defi{Mumford curve}. \end{defn} If an arbitrary group $G \subset \PGL_2(k)$ contains a discontinuous group $G'$ of finite index, then $\ensuremath{\mathcal{L}}(G)=\ensuremath{\mathcal{L}}(G')$ and $G$ is also discontinuous. We know from \zitat{Ch. I, Theorem 3.1}{GvdP} that every finitely generated discontinuous group contains a Schottky group as a subgroup of finite index.
Let $k \subset \ensuremath{\mathds{C}}_p$ be a finitely generated extension of $\ensuremath{\mathds{Q}}_p$. Then the set of absolute values $\abs{k^\times} := \set{\abs{x}:x\in k^\times}$ is a discrete set in $\ensuremath{\mathds{R}}^\times$. For $r \in \abs{k^\times}$ and $x \in k$ let $B(x,r) := \set{y \in k: \abs{x-y}\leq r}$ be the ``closed'' ball\footnote{Note that as $\abs{k^\times}$ is discrete the ball $B(x,r)$ is both open and closed for the topology induced by the $p$-adic norm.} around $x$. Construct a graph with vertices $B(x,r)$ and insert edges connecting $B(x,r)$ and $B(x',r')$ with $B(x,r) \subset B(x',r')$ and $[r,r'] \cap \abs{k^\times} = \set{r,r'}$. This graph is a simplicial tree, called the \defi{Bruhat-Tits-Tree} $\ensuremath{\mathcal{B}}(k)$. The ends\footnote{An \defi{end of a graph} is an infinite ray up to finitely many edges.} of this graph correspond bijectively to the points in $\ensuremath{\mathds{P}}^1(\hat{k})$, where $\hat{k}$ denotes the completion of $k$. The action of $\PGL_2(k)$ on $\ensuremath{\mathds{P}}^1(k)$ can be continued to an action on $\ensuremath{\mathcal{B}}(k)$, and we can modify $\ensuremath{\mathcal{B}}(k)$ by adding vertices such that this action is without inversion.
Let $\gamma \in \PGL_2(k)$ be hyperbolic or elliptic with two fixed points in $\ensuremath{\mathds{P}}^1(k)$. In this case we define the \defi{axis} $\ensuremath{\mathcal{A}}(\gamma)$ to be the infinite path connecting the two ends of $\ensuremath{\mathcal{B}}(k)$ corresponding to the fixed points of $\gamma$. A hyperbolic element $\gamma \in \PGL_2(k)$ acts on $\ensuremath{\mathcal{A}}(\gamma)$ by shifting the whole axis towards the end corresponding to the attracting fixed point of $\gamma$. An elliptic element fixes $\ensuremath{\mathcal{A}}(\gamma)$ pointwise. It has additional fixed points in $\ensuremath{\mathcal{B}}(k)$ if and only if $\ord(\gamma)$ is a power of $p$ (this is made more precise in \zitat{Lemma 3}{Her}). \pagebreak
Let $G$ be a finitely generated discontinuous subgroup of $\PGL_2(\ensuremath{\mathds{C}}_p)$ and let $F(\gamma)$ be the set of the two fixed points of $\gamma \in G$ in $\ensuremath{\mathds{P}}^1(\ensuremath{\mathds{C}}_p)$. Now let $k$ be the extension of $\ensuremath{\mathds{Q}}_p$ generated by the coefficients of the generators of $G$ and the fixed points of representatives of every conjugacy class of elliptic elements in $G$. There are only finitely many such conjugacy classes, hence $k$ is a finitely generated field extension of $\ensuremath{\mathds{Q}}_p$ and we can therefore construct the Bruhat-Tits-Tree $\ensuremath{\mathcal{B}}(k)$ as described above. Note that by construction $G \subset \PGL_2(k)$ and $F(\gamma) \subset \ensuremath{\mathds{P}}^1(k)$ for all elliptic elements $\gamma \in G$. Moreover we have $F(\gamma) \subset \ensuremath{\mathds{P}}^1(\hat{k})$ for every hyperbolic element $\gamma \in G$ because the endpoints of $\ensuremath{\mathcal{A}}(\gamma)$ correspond to the fixed points of $\gamma$.
As $G$ is discontinuous, $G$ contains only hyperbolic and elliptic elements (\zitat{Lemma 4.2}{Kato2}). The set of all fixed points of $G$ \[F(G) := \bigcup_{\gamma\in G} F(\gamma)\] is a $G$-invariant subset of $\ensuremath{\mathds{P}}^1(\hat{k})$, therefore $G$ also acts on the subtree $\ensuremath{\mathcal{T}}^*(G)$ of $\ensuremath{\mathcal{B}}(k)$ generated by the ends corresponding to $F(G)$. We can now construct the quotient graph $\ensuremath{\mathcal{G}}^*(G) := \ensuremath{\mathcal{T}}^*(G)/G$. Each axis of a hyperbolic element will be mapped to a circle in $\ensuremath{\mathcal{G}}^*(G)$, while each end of $\ensuremath{\mathcal{T}}^*(G)$ corresponding to a fixed point of an elliptic element but not to a fixed point of a hyperbolic element will be mapped to an end of $\ensuremath{\mathcal{G}}^*(G)$. We now turn $\ensuremath{\mathcal{G}}^*(G)$ into a graph of groups\footnote{In this chapter a \emph{graph} will always mean a \emph{graph of groups} to shorten notation. For the definition of a \defi{graph of groups} we refer to \zitat{I.4.4}{Ser}.} by labeling the image of a vertex resp. edge $x \in \ensuremath{\mathcal{T}}^*(G)$ with the conjugacy class of the stabilizing group $G_x$ of $x$.
The graph $\ensuremath{\mathcal{G}}^*(G)$ contains a lot of useful information about the Mumford curve $\Omega(G)/G$. The ramification points of the covering $\Omega(G) \to \Omega(G)/G$ are the fixed points of elliptic elements of $G$ which are not fixed points of hyperbolic elements (\zitat{Prop. 5.6.2}{Kato2}). Therefore the branch points correspond bijectively to the ends of $\ensuremath{\mathcal{G}}^*(G)$. The stabilizing group of such an end is a cyclic group whose order equals the corresponding ramification index. And by studying the action of the hyperbolic elements, we find that the genus\footnote{By the \defi{genus of a graph} we mean its first Betti number.} of $\ensuremath{\mathcal{G}}^*(G)$ equals the genus of $\Omega(G)/G$ (\zitat{\S5.6.0}{Kato2}).
\begin{defn} We define a \defindex{$p$-adic origami}{p-adic origami@$p$-adic origami} to be a covering of Mumford curves $X \to E$ ramified above at most one point with $g(E)=1$. \end{defn}
Starting from a Mumford curve $X= \Omega/\Gamma$ for a Schottky group $\Gamma$ we will later consider the quotient map to $E=\Omega/G$ for an extension $G$ of $\Gamma$. As $\Gamma$ is free the map $\Omega \to \Omega/\Gamma$ is unramified, therefore the branch points of $X \to E$ are equal to those of $\Omega \to \Omega/G$. Thus both pieces of information needed for $X \to E$ to be a $p$-adic origami (the genus of $E$ and the number of branch points) are encoded in the quotient graph $\ensuremath{\mathcal{G}}^*(G)$ (as its genus and the number of its ends). We will now give two examples of how we can use this to construct $p$-adic origamis.
\pagebreak \begin{ex}\label{ex:dihedral_origami} Let $p > 5$ and $n \in \ensuremath{\mathds{N}}$ be odd, $\zeta \in \ensuremath{\mathds{C}}_p$ be a primitive $n$-th root of unity, $q \in \ensuremath{\mathds{C}}_p$ with $\abs{q} < \abs{1-\zeta}$ and set \[ \delta = \mattwo{\zeta}001 \quad \sigma = \mattwo0110 \quad \gamma = \mattwo{1+q}{1-q}{1-q}{1+q}. \] Thus $\delta$ is elliptic of order $n$ with fixed points $0$ and $\infty$, the involution $\sigma$ exchanges the fixed points of $\delta$ and has fixed points $1$ and $-1$, and $\gamma$ is hyperbolic with the same fixed points as $\sigma$. Then we have
\begin{enumerate}[$\bullet$] \setlength{\itemsep}{1pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} \item $\gamma\sigma=\sigma\gamma$ and $\delta\sigma=\sigma\delta^{-1}$. \item $\Delta := \gen{\delta,\sigma}$ is the dihedral group $D_n$ and fixes a single vertex $\ensuremath{\mathcal{A}}(\delta)\cap\ensuremath{\mathcal{A}}(\sigma)$. \item $\Gamma := \gen{\delta^i\gamma\delta^{-i}: i\in\set{0,\dots,n-1}}$ is a Schottky group on $n$ free generators.
\item $\Gamma$ is a normal subgroup of $G := \gen{\delta,\sigma,\gamma}$ of index $2n$, hence $\Omega(G)=\Omega(\Gamma)=:\Omega$. The group $\Gamma$ is the kernel of the map $\varphi : G \to \Delta$ defined by $\varphi|_\Delta = \id$ and $\varphi(\gamma)=1$. \item The quotient graph $\ensuremath{\mathcal{G}}^*$ of $\Omega/G$ is
\begin{center} \GraphOrigami{\Delta}{\left<\sigma\right>}{\left<\delta\right>} \end{center}
where we use the arrow to indicate an end of this graph. \item Since $\ensuremath{\mathcal{G}}^*$ has genus 1 and one end, the map $\Omega/\Gamma \to \Omega/G$ is a normal $p$-adic origami with Galois group $G/\Gamma \cong D_n$. \end{enumerate}
A more detailed investigation of this origami can be found in \zitat{Bemerkung 4.3}{Kre}. \end{ex}
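The relations listed above are easy to check numerically. The following sketch (ours; the helper names and the sample values $n=3$, $q=0.01$ are our choices) verifies $\gamma\sigma=\sigma\gamma$ and $\delta\sigma=\sigma\delta^{-1}$ as identities in $\PGL_2$, i.e. up to a scalar factor:

```python
import cmath

# Sample numerical instance (our choice): n = 3, so zeta = exp(2*pi*i/3),
# and an arbitrary small q.  Equality in PGL_2 means equality of matrices
# up to a nonzero scalar factor.
zeta = cmath.exp(2j * cmath.pi / 3)
q = 0.01

delta = [[zeta, 0], [0, 1]]
sigma = [[0, 1], [1, 0]]
gamma = [[1 + q, 1 - q], [1 - q, 1 + q]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def projectively_equal(A, B, eps=1e-9):
    # A = lambda*B  <=>  all 2x2 minors of the 2x4 matrix with rows
    # (flattened A, flattened B) vanish: A[i]*B[j] - A[j]*B[i] = 0.
    a = [x for row in A for x in row]
    b = [x for row in B for x in row]
    return all(abs(a[i] * b[j] - a[j] * b[i]) < eps
               for i in range(4) for j in range(4))

print(projectively_equal(mul(gamma, sigma), mul(sigma, gamma)))      # True
print(projectively_equal(mul(delta, sigma), mul(sigma, inv(delta)))) # True
```

Here $\gamma\sigma=\sigma\gamma$ even holds on the nose, while $\delta\sigma$ and $\sigma\delta^{-1}$ differ by the scalar $\zeta$.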
\begin{ex}\label{ex:tetrahedral_origami} Let $p > 5$ and $\zeta \in \ensuremath{\mathds{C}}_p$ be a primitive third root of unity, $q \in \ensuremath{\mathds{C}}_p$ with $\abs{q}$ small enough\footnote{This is made more precise in \zitat{Bemerkung 4.4}{Kre}.} and set \[ \delta = \mattwo{\zeta}001 \;\; \sigma = \mattwo{-1}121 \;\; \gamma = \mattwo{q}001. \] Thus $\delta$ is elliptic of order $3$ with fixed points $0$ and $\infty$, and $\gamma$ is hyperbolic with the same fixed points. The fixed points of the involution $\sigma$ are $-\frac12(1\pm\sqrt{3})$, those of $\sigma\delta\sigma$ are $\sigma(0)=1$ and $\sigma(\infty)=-\frac12$. Then we have
\begin{enumerate}[$\bullet$] \setlength{\itemsep}{1pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} \item $\gamma\delta=\delta\gamma$ and $(\delta\sigma)^3=\id$. \item $\Delta := \gen{\delta,\sigma}$ is the tetrahedral group $A_4$ and fixes a single vertex $\ensuremath{\mathcal{A}}(\delta)\cap\ensuremath{\mathcal{A}}(\sigma)$. \item $\Gamma := \gen{\alpha\gamma\alpha^{-1}: \alpha \in \Delta}$ is a Schottky group on $4$ free generators.
\item $\Gamma$ is a normal subgroup of $G := \gen{\delta,\sigma,\gamma}$ of index $12$, hence $\Omega(G)=\Omega(\Gamma)=:\Omega$. The group $\Gamma$ is the kernel of the map $\varphi : G \to \Delta$ defined by $\varphi|_\Delta = \id$ and $\varphi(\gamma)=1$. \item The quotient graph $\ensuremath{\mathcal{G}}^*$ of $\Omega/G$ is
\begin{center} \GraphOrigami{\Delta}{\left<\delta\right>}{\left<\sigma\right>} \end{center}
\item Since $\ensuremath{\mathcal{G}}^*$ has genus 1 and one end, the map $\Omega(\Gamma)/\Gamma \to \Omega(G)/G$ is a normal $p$-adic origami with Galois group $G/\Gamma \cong A_4$. \end{enumerate}
\enlargethispage{12pt} A more detailed investigation of this origami can be found in \zitat{Bemerkung 4.4}{Kre}. \end{ex}
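As in the dihedral case, the stated relations can be verified numerically. The sketch below (ours; $\zeta=e^{2\pi i/3}$ and $q=0.01$ are sample values) checks $\gamma\delta=\delta\gamma$, $\sigma^2=\id$ and $(\delta\sigma)^3=\id$ in $\PGL_2$, i.e. up to a scalar factor:

```python
import cmath

# Sample numerical instance (our choice): zeta = exp(2*pi*i/3), q = 0.01.
zeta = cmath.exp(2j * cmath.pi / 3)
q = 0.01
delta = [[zeta, 0], [0, 1]]
sigma = [[-1, 1], [2, 1]]
gamma = [[q, 0], [0, 1]]
identity = [[1, 0], [0, 1]]

def mul(*Ms):
    # product of an arbitrary number of 2x2 matrices
    R = [[1, 0], [0, 1]]
    for M in Ms:
        R = [[sum(R[i][k] * M[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return R

def projectively_equal(A, B, eps=1e-9):
    # equality up to a scalar factor, i.e. equality in PGL_2
    a = [x for row in A for x in row]
    b = [x for row in B for x in row]
    return all(abs(a[i] * b[j] - a[j] * b[i]) < eps
               for i in range(4) for j in range(4))

print(projectively_equal(mul(gamma, delta), mul(delta, gamma)))  # True: both diagonal
print(projectively_equal(mul(sigma, sigma), identity))           # True: sigma is an involution
print(projectively_equal(mul(delta, sigma, delta, sigma, delta, sigma),
                         identity))                              # True: (delta*sigma)^3 = id
```

In exact arithmetic $(\delta\sigma)^3$ equals the scalar matrix $3(2\zeta+1)\cdot\mathrm{id}$, which is the identity in $\PGL_2$.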
We will see in Section \ref{sct:p-adic_origamis} that the quotient graphs $\ensuremath{\mathcal{G}}^*$ of all non-trivial normal $p$-adic origamis look similar. We will then use this to investigate how the groups $G$ and $\Gamma$ have to be chosen such that the map $\Omega/\Gamma \to \Omega/G$ becomes a $p$-adic origami. But first we have to study the quotient graph $\ensuremath{\mathcal{G}}^*$ more closely.
\section{Properties of the quotient graph}
A graph of groups $\ensuremath{\mathcal{G}}^*$ is called \defindex{$p$-realizable}{p-realizable@$p$-realizable}\footnote{In this section we will often just write \defi{realizable} if the statements hold for arbitrary $p$.}, if there exists a finitely generated discontinuous group $G \subset \PGL_2(\ensuremath{\mathds{C}}_p)$ with $\ensuremath{\mathcal{G}}^*=\ensuremath{\mathcal{G}}^*(G)$. Let $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$ resp. $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ be the subgraph of $\ensuremath{\mathcal{G}}^*$ containing only vertices and edges with non-trivial resp. non-cyclic groups.
\begin{thm}\label{thm:nb_endpoints} The number of ends of a realizable graph $\ensuremath{\mathcal{G}}^*$ is \[n = \chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}) + 2\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})\] where $\chi(\ensuremath{\mathcal{G}})$ is the Euler-characteristic\footnote{Recall that the \defi{Euler-characteristic} (number of vertices minus number of edges) equals the difference of the first two Betti-numbers (number of connected components minus genus).} of a graph $\ensuremath{\mathcal{G}}$ (for infinite $\ensuremath{\mathcal{G}}$ we take the limit of $\chi$ for all finite subgraphs of $\ensuremath{\mathcal{G}}$). \end{thm} \begin{myproof} Let $D$ resp. $d$ be the number of vertices resp. edges in $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$. Then $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=D-d$. Analogously let $C$ resp. $c$ be the number of vertices resp. edges in $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}} \setminus \ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$. Thus $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}) = (C+D)-(c+d)$. Then we have to show \[n = D-d+2((C+D)-(c+d)) = 2(C-c)+3(D-d).\] Thus our statement is just a reformulation of \zitat{Theorem 1}{Brad}. \end{myproof}
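A worked instance of the formula (our computation, using the quotient graph of Example~\ref{ex:dihedral_origami}) may be helpful:

```latex
For the quotient graph of Example~\ref{ex:dihedral_origami} (with $n\geq 3$,
so that $D_n$ is non-cyclic), $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ is the
single vertex labelled $\Delta$, while $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$
additionally contains the loop labelled $\gen{\sigma}$ and the end labelled
$\gen{\delta}$; each finite piece of the end contributes equally many vertices
and edges, so in the limit
\[
\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}) = 1-0 = 1, \qquad
\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}) = 1-1+0 = 0, \qquad
n = 1 + 2\cdot 0 = 1,
\]
in accordance with the single end of that graph.
```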
\begin{lem}\label{lem:subgraphs} Let $G$ be a finitely generated discontinuous group and let $\ensuremath{\mathcal{N}}$ be a subgraph of $\ensuremath{\mathcal{G}}^*(G)$. Then there exists a subgroup $N$ of $G$ with quotient graph $\ensuremath{\mathcal{N}}^* := \ensuremath{\mathcal{G}}^*(N) \supset \ensuremath{\mathcal{N}}$, such that the difference between the two graphs $\ensuremath{\mathcal{N}}$ and $\ensuremath{\mathcal{N}}^*$ is contractible\footnote{An edge in a graph of groups may be \defindex{contracted}{contraction!of an edge in a graph} if it is not a loop and the inclusion of the edge group into one of its vertex groups is an isomorphism. After the contraction only the other vertex remains. Such a contraction does not change the fundamental group\footnotemark of the graph.}, except for the ends of $\ensuremath{\mathcal{N}}^*$. \footnotetext{For the definition of the \defindex{fundamental group}{fundamental group!of a graph of groups} of a graph of groups we refer to \zitat{I.5.1}{Ser}.} \end{lem} \begin{myproof} Choose a spanning tree of $\ensuremath{\mathcal{N}}$ by deleting edges $\set{e_1,\dots,e_g}$ and let $\hat\ensuremath{\mathcal{N}}$ be a preimage of this spanning tree in $\ensuremath{\mathcal{T}}^*(G)$. For each edge $e_i$ connecting vertices $v_i$ and $w_i$ let $\hat{e}_i$ be the lift of $e_i$ with $\hat{v}_i \in \hat\ensuremath{\mathcal{N}}$ and $\hat{e}'_i$ the lift with $\hat{w}'_i \in \hat\ensuremath{\mathcal{N}}$. The other endpoints $\hat{w}_i$ and $\hat{v}'_i$ of $\hat{e}_i$ resp. $\hat{e}'_i$ cannot be contained in $\hat{\ensuremath{\mathcal{N}}}$ because otherwise $\hat{\ensuremath{\mathcal{N}}}$ would contain a circle. Let $N$ be the subgroup of $G$ generated by all stabilizers of vertices in $\hat\ensuremath{\mathcal{N}}$ and for each edge $e_i$ a hyperbolic element $\gamma_i$ mapping $\hat{v}_i$ to $\hat{v}'_i$.
Thus $N$ is isomorphic to the fundamental group\footnotemark[\value{footnote}] of the graph of groups $\ensuremath{\mathcal{N}}$ by \zitat{I.5.4, Theorem 13}{Ser}. The stabilizers in $\ensuremath{\mathcal{N}}$ do not change if we restrict the action from $G$ to $N$, neither do the identifications of vertices via the $\gamma_i$, hence the quotient graph $\ensuremath{\mathcal{T}}^*(G)/N$ contains $\ensuremath{\mathcal{N}}$. Both graphs have the common fundamental group $N$, thus their difference cannot change their fundamental group, and therefore has to be contractible.
The tree $\ensuremath{\mathcal{T}}^*(N)$ is contained in $\ensuremath{\mathcal{T}}^*(G)$, hence $\ensuremath{\mathcal{G}}^*(N)=\ensuremath{\mathcal{T}}^*(N)/N$ is contained in $\ensuremath{\mathcal{T}}^*(G)/N$. Again both graphs have the common fundamental group $N$ and hence their difference is contractible. \end{myproof}
\begin{prop}\label{prop:subgraphs} Let $\ensuremath{\mathcal{G}}^*$ be a realizable graph, and let $\ensuremath{\mathcal{C}}$ be a connected component of $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$. Then there exists a realizable graph $\ensuremath{\mathcal{N}}^*$ with $\ensuremath{\mathcal{N}}\ensuremath{^{\mathrm{nc}}} = \ensuremath{\mathcal{C}}$ (up to contractions) and $g(\ensuremath{\mathcal{N}}\ensuremath{^{\mathrm{nc}}})=g(\ensuremath{\mathcal{N}}^*)$.
\end{prop} \begin{myproof} Let $G$ be a finitely generated discontinuous group with $\ensuremath{\mathcal{G}}^*(G)=\ensuremath{\mathcal{G}}^*$. Subdivide all edges emanating from $\ensuremath{\mathcal{C}}$ (which all have cyclic stabilizers), and let $\partial(\ensuremath{\mathcal{C}})$ be the set of all resulting edges in $\ensuremath{\mathcal{G}}^*\setminus \ensuremath{\mathcal{C}}$ which still have a common vertex with $\ensuremath{\mathcal{C}}$. For the graph $\ensuremath{\mathcal{C}} \cup \partial(\ensuremath{\mathcal{C}})$ Lemma \ref{lem:subgraphs} yields a graph $\ensuremath{\mathcal{N}}^*$ with $\ensuremath{\mathcal{N}}^* \supset \ensuremath{\mathcal{C}}$. As $\ensuremath{\mathcal{N}}^*$ and $\ensuremath{\mathcal{C}}$ differ up to contraction only by ends, and ends are stabilized by cyclic groups, we get $\ensuremath{\mathcal{N}}\ensuremath{^{\mathrm{nc}}}=\ensuremath{\mathcal{C}}$ (up to contractions) and $g(\ensuremath{\mathcal{N}}\ensuremath{^{\mathrm{nc}}})=g(\ensuremath{\mathcal{C}})=g(\ensuremath{\mathcal{N}}^*)$.
\end{myproof}
\begin{prop}\label{prop:circle_contains_cyclic_edge} Let $\ensuremath{\mathcal{G}}$ be a connected graph of noncyclic groups which sa\-tis\-fies $g(\ensuremath{\mathcal{G}})>0$. Then there exists no realizable graph $\ensuremath{\mathcal{G}}^*$ with $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}} = \ensuremath{\mathcal{G}}$ (up to contractions) and $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=g(\ensuremath{\mathcal{G}}^*)$.
\end{prop} \begin{myproof} Assume there is a finitely generated discontinuous group $G$ such that $\ensuremath{\mathcal{G}}^* := \ensuremath{\mathcal{G}}^*(G)$ has the stated properties. $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$ is connected, because if $\sigma$ and $\tau$ are elements of stabilizers of two different connected components of $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$, then $\sigma\tau$ is hyperbolic and its axis contains the path $p$ between the axes of $\sigma$ and $\tau$. The image of $\ensuremath{\mathcal{A}}(\sigma\tau)$ in $\ensuremath{\mathcal{G}}^*$ is a circle which contains $p$ and hence an edge with trivial stabilizer. This would imply $g(\ensuremath{\mathcal{G}}^*)>g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})$ contrary to the assumption.
Thus we have $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})= g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})$ and both $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$ and $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ are connected, hence $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})$. Then Theorem \ref{thm:nb_endpoints} states $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})\geq 0$. But we have $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})<1$ because $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$ is connected and has positive genus. We thus get $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=0$ and therefore $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=1$. Hence by Theorem \ref{thm:nb_endpoints} the graph $\ensuremath{\mathcal{G}}^*$ has no ends.
$G$ contains a normal subgroup $\Gamma$ of finite index which is a Schottky group. As $\ensuremath{\mathcal{G}}^*$ has no ends, the covering $\Omega(G)/\Gamma \to \Omega(G)/G$ is unramified. As $g(\Omega(G)/G)=1$, we conclude $g(\Omega(G)/\Gamma)=1$ by Riemann-Hurwitz. Therefore $\Gamma$ is generated by a single hyperbolic element $\gamma$. All the elements in $G$ have the same axis as $\gamma$ (because otherwise there would be ramification points). Therefore every finite subgroup of $G$ is cyclic, which contradicts the assumption.
\end{myproof}
\begin{prop}\label{prop:genus_Gnc} Let $\ensuremath{\mathcal{G}}^*$ be a realizable graph. Then $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=0$.
\end{prop} \begin{myproof} For every connected component of $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ this follows from Propositions \ref{prop:subgraphs} and \ref{prop:circle_contains_cyclic_edge}.
\end{myproof}
\begin{defn} Let $G\subset\PGL_2(\ensuremath{\mathds{C}}_p)$ be a discontinuous group, $g(\Omega(G)/G) = 0$ and $\Omega(G) \to \Omega(G)/G$ ramified over exactly three points with ramification indices $n_1,n_2,n_3$. Then we call $G$ a ($p$-adic) \defi{triangle group} of type $\Delta(n_1,n_2,n_3)$.
\end{defn}
The graph $\ensuremath{\mathcal{G}}^*(\Delta(n_1,n_2,n_3))$ is a tree with exactly three ends, corresponding to the three branch points. Conversely if $G$ is a discontinuous group, $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$ is a tree, and $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ is connected, then $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=1$, and $G$ is a triangle group by Theorem \ref{thm:nb_endpoints}.
\enlargethispage{12pt}
\begin{thm}\label{thm:contracted_graph} Let $\ensuremath{\mathcal{G}}^*$ be a realizable graph and $\ensuremath{\mathcal{C}}$ be a connected component of $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$. Then the fundamental group of $\ensuremath{\mathcal{C}}$ is a triangle group $\Delta$. This means that $\ensuremath{\mathcal{C}}$ can be replaced by a single vertex with vertex group $\Delta$ without changing the fundamental group of $\ensuremath{\mathcal{G}}^*$. \end{thm}\pagebreak[4] \begin{myproof} Let $\ensuremath{\mathcal{N}}^*$ be the realizable graph associated to $\ensuremath{\mathcal{C}}$ by Proposition \ref{prop:subgraphs}. By Proposition \ref{prop:genus_Gnc} this graph has genus zero, so by Theorem \ref{thm:nb_endpoints} it has three ends. Therefore the discontinuous group $\Delta$ with quotient graph $\ensuremath{\mathcal{N}}^*$ is a triangle group. \end{myproof}
Now that we know that a $p$-realizable graph $\ensuremath{\mathcal{G}}$ with $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}} \neq \emptyset$ is made up of vertices with $p$-adic triangle groups connected by edges with cyclic groups, it becomes vitally important to find all triangle groups which can occur.
Fortunately, for $p > 5$ these triangle groups are well known:
\begin{thm}\label{thm:triangle_groups} For every $p$ there exist the classical spherical triangle groups (i.e. those with $\frac1{n_1}+\frac1{n_2}+\frac1{n_3}>1$): the dihedral group $D_n=\Delta(2,2,n)$, and the symmetry groups of the platonic solids $A_4=\Delta(2,3,3)$, $S_4 = \Delta(2,3,4)$ and $A_5 = \Delta(2,3,5)$. For $p > 5$ there are no other $p$-adic triangle groups.
\end{thm} \begin{myproof} Let $\Delta$ be one of the given groups. It is a finite subgroup of $\PGL_2(\ensuremath{\mathds{C}}_p)$ and hence discontinuous. Its quotient graph $\ensuremath{\mathcal{G}}^*(\Delta)$ consists, up to contraction, of a single vertex (otherwise $\Delta$ would be a non-trivial amalgam or HNN-extension of smaller groups, hence would not be finite). This vertex has to be fixed by the whole group, which is non-cyclic, hence $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=1$. Therefore $\Delta$ is a triangle group by Theorem \ref{thm:nb_endpoints}.
Now let $\Delta$ be a triangle group for $p > 5$. We have $g(\ensuremath{\mathcal{G}}^*(\Delta))=0$, hence $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})\geq 1$ and $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})\geq 1$. By Theorem \ref{thm:nb_endpoints} we have then $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=1$, hence both graphs are connected. One can show that for $p > 5$ all edges of a realizable tree of groups can be contracted, which leaves a single vertex. The stabilizer of a vertex always is a finite subgroup of $\PGL_2(\ensuremath{\mathds{C}}_p)$ and hence either cyclic or isomorphic to one of the groups stated (for a proof we refer to \zitat{Satz 2.7}{Kre}). \end{myproof}
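The spherical condition in the theorem is elementary to check. The following sketch (ours, not from the text) confirms it for the listed types and shows that the type $\Delta(3,3,5)$, which appears below in Example \ref{ex:triangle}, is not spherical:

```python
from fractions import Fraction

# Spherical condition for a triangle group of type Delta(n1, n2, n3):
# 1/n1 + 1/n2 + 1/n3 > 1.  Exact rational arithmetic avoids rounding issues.
def is_spherical(n1, n2, n3):
    return Fraction(1, n1) + Fraction(1, n2) + Fraction(1, n3) > 1

print(is_spherical(2, 2, 7))  # True  (dihedral D_7)
print(is_spherical(2, 3, 3))  # True  (A_4)
print(is_spherical(2, 3, 4))  # True  (S_4)
print(is_spherical(2, 3, 5))  # True  (A_5)
print(is_spherical(2, 3, 6))  # False (Euclidean type)
print(is_spherical(3, 3, 5))  # False (non-spherical; occurs only for p <= 5)
```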
For $p \leq 5 $ there are additional non-spherical triangle groups. Bradley, Kato and Voskuil are currently working on their classification \cite{BKV}. A preliminary version and an idea of the proofs can be found in \cite{Kato1}.
\begin{ex}\label{ex:triangle}
For $p=5$ an elliptic element $\delta$ of order 5 fixes not only its axis, but also all vertices contained in a small tube around this axis. Thus if we start with a vertex $v\in \ensuremath{\mathcal{B}}$ on the axis of $\delta$ fixed by a dihedral group $D_{5}$ generated by an element $\sigma$ of order 2 and $\delta$, then $\delta$ fixes also other vertices on the axis of $\sigma$. These vertices have the stabilizer $\left<\sigma,\delta\right>\cong D_5$ and we can find two elements $\tau$ and $\tau'$ of order 3, each with an axis through one of those vertices but not through $v$, such that the stabilizers of these two vertices under the action of $G := \gen{\sigma,\delta,\tau,\tau'}$ are $\gen{\sigma,\delta,\tau}\cong A_5$ and $\gen{\sigma,\delta,\tau'}\cong A_5$ respectively. If we do all this carefully we can get a discontinuous group whose quotient graph looks like this: \begin{center} \GraphTriangle \end{center} The generated group is thus a $p$-adic triangle group of type $\Delta(3,3,5)$. It is the fundamental group of the graph shown above, which is $A_5 *_{D_5} A_5$, where $*_{D_5}$ is the amalgamated product over the common subgroup $D_5$. Details about amalgams as fundamental groups of trees of groups can be found in \zitat{I.4.5}{Ser}.
One can generalize this example by starting with the dihedral group $D_{5n}$ for $n\in \ensuremath{\mathds{N}}$. Then $\delta^n$ has order 5 and can be used to construct two stabilizers isomorphic to $A_5$. This results in a discontinuous group $A_5 *_{D_5} D_{5n} *_{D_5} A_5$, which is a $p$-adic triangle group of type $\Delta(3,3,5n)$. \end{ex}
\section{Normal $p$-adic origamis}\label{sct:p-adic_origamis}
After the preliminaries in the last two sections we are now ready to formulate our main result. We will restrict ourselves to ramified $p$-adic origamis, i.e. the case $g(X) > 1$. We are particularly interested in normal origamis, which we will now classify:
\begin{thm}\label{thm:p-adic_origami} Let $X \to E$ be a normal $p$-adic origami with $g(X)>1$. Then there is a discontinuous group $G$ and a Schottky group $\Gamma \vartriangleleft G$ of finite index such that $X \cong \Omega/\Gamma$ and $E \cong \Omega/G$ with $\Omega := \Omega(\Gamma)=\Omega(G)$.
The group $G$ is isomorphic to the fundamental group of the graph of groups
\begin{center}
\GraphOrigami{\Delta}{C_a}{C_b} \end{center}
where $\Delta$ is a $p$-adic triangle group of type $\Delta(a,a,b)$.
Thus we get \[G \cong \left<\Delta,\gamma;\, \gamma\alpha_1=\alpha_2\gamma\right> \text{ with $\alpha_i \in \Delta$ of order $a$.}\] The Galois group of the origami is $G/\Gamma$. \end{thm} \begin{myproof} $X$ is a Mumford curve, thus there is a Schottky group $\Gamma \subset \PGL_2(\ensuremath{\mathds{C}}_p)$ such that $X \cong \Omega(\Gamma)/\Gamma$. The automorphism group $\Aut X$ is isomorphic to $N/\Gamma$, where $N$ is the normalizer of $\Gamma$ in $\PGL_2(\ensuremath{\mathds{C}}_p)$ (this is a theorem from \zitat{VII.2}{GvdP}). The Galois group of the covering $X\to E$ is a finite subgroup of $\Aut X$ and therefore takes the form $G/\Gamma$, where $\Gamma$ is a normal subgroup in $G \subseteq N$ of finite index. In this case $G$ is discontinuous and $\Omega(G)=\Omega(\Gamma)$.
The genus of $\ensuremath{\mathcal{G}}^*:=\ensuremath{\mathcal{G}}^*(G)$ equals the genus of $E$, which is 1. The number of ends of $\ensuremath{\mathcal{G}}^*$ equals the number of branch points of the map $\Omega \to \Omega/G$. As the map $\Omega \to \Omega/\Gamma$ is unramified, this number equals the number of branch points of $X \cong \Omega/\Gamma \to E \cong \Omega/G$, which is also 1. Thus $\ensuremath{\mathcal{G}}^*$ is a realizable graph of genus one with one end. The stabilizer of this end is a cyclic group whose order equals the ramification index above the branch point.
We now prove that $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ can be replaced by a vertex whose vertex group is a triangle group of the form $\Delta(a,a,b)$ ($a,b \in \ensuremath{\mathds{N}}$): We know $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}) \leq 1$, hence $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}) \geq 0$, and the same holds for $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$. Theorem \ref{thm:nb_endpoints} states $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})+2\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=1$, therefore $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=1$ and $\chi(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=0$. As $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})\leq 1$ we conclude $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}})=1$ and $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nt}}}$ is connected. Prop. \ref{prop:genus_Gnc} states $g(\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}})=0$, therefore $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ is connected as well. By Theorem \ref{thm:contracted_graph} we can replace $\ensuremath{\mathcal{G}}\ensuremath{^{\mathrm{nc}}}$ by a single vertex $v$ whose vertex group is a triangle group $\Delta$. If we contract the rest of the graph as much as possible, we get an edge from $v$ to $v$ with a cyclic stabilizer. This stabilizer occurs therefore on two ends of $\ensuremath{\mathcal{G}}^*(\Delta)$ (this was $\ensuremath{\mathcal{N}}^*$ in Theorem \ref{thm:contracted_graph}). \end{myproof}
Note that we have seen in Theorem \ref{thm:triangle_groups} that the spherical triangle groups of type $\Delta(a,a,b)$ are $D_n=\Delta(2,2,n)$ and $A_4=\Delta(2,3,3)$, and that for $p>5$ no others exist. For $p \leq 5$ there are additional possible triangle groups; for $p=5$ we have seen the type $\Delta(3,3,5n)$ in Example \ref{ex:triangle}.
We now know that normal $p$-adic origamis are always of the form $\Omega/\Gamma \to \Omega/G$, and we know quite well which groups $G$ can occur. It remains to investigate what groups $\Gamma$ are possible. The only restriction we have for $\Gamma$ is that it has to be a Schottky group of finite index and normal in $G$: As the covering $\Omega \to \Omega/\Gamma$ is always unramified the ramification of $\Omega/\Gamma \to \Omega/G$ is equal to the ramification of $\Omega \to \Omega/G$ and hence only depends on $G$. The genus of $\Omega/G$ also does not depend on the choice of $\Gamma$.
\begin{thm}\label{thm:schottky_equiv} Let $G \subset \PGL_2(\ensuremath{\mathds{C}}_p)$ be a finitely generated discontinuous group and $\Gamma$ be a normal subgroup of $G$ of finite index. Then the following statements are equivalent: \begin{enumerate}[i)] \setlength{\itemsep}{2pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} \item $\Gamma$ is a Schottky group \item $\Gamma \cap G_i = \set{1}$ for every vertex group $G_i$ in $\ensuremath{\mathcal{G}}^*(G)$. \end{enumerate} \end{thm} \begin{myproof} i) $\Rightarrow$ ii) is easy: The vertex groups are finite, therefore every $g \in G_i$ has finite order. $\Gamma$ does not contain elements of finite order. Thus $\Gamma \cap G_i$ is trivial. For ii) $\Rightarrow$ i) we proceed with three steps:
Step 1: Every element of $\Gamma$ has infinite order: $G$ is the fundamental group of $\ensuremath{\mathcal{G}}^*(G)$, hence an HNN-extension of an amalgamated product of the $G_i$. If $g \in G$ has finite order $n > 1$, then $g$ is conjugate to some $g' \in G_i$ (see \zitat{IV.2.4 and IV.2.7}{LS}) with $\ord(g') = \ord(g) > 1$. But by assumption $g' \not\in \Gamma$ and hence $g \not\in \Gamma$ as $\Gamma$ is normal in $G$.
Step 2: $\Gamma$ is free by Ihara's theorem (\zitat{I.1.5, Theorem 4}{Ser}): $G$ acts on the tree $\ensuremath{\mathcal{T}}^*(G)$ with quotient graph $\ensuremath{\mathcal{G}}^*(G)$. For an $x \in \ensuremath{\mathcal{T}}^*(G)$ let $g \in \Stab_G(x)$ be non-trivial. Then $g$ has finite order, and with step 1 we see $g \not\in \Gamma$. Therefore the action on $\ensuremath{\mathcal{T}}^*(G)$ restricted to $\Gamma$ is free, thus $\Gamma$ is a free group by \zitat{\S 3.3}{Ser}.
Step 3: $\Gamma$ is a Schottky group: $\Gamma$ is by definition a finite index normal subgroup of $G$. It is discontinuous because $G$ is, and by Step 1 it contains no elements of finite order. It is finitely generated because $G$ is (by Reidemeister-Schreier, \zitat{Prop. II.4.2}{LS}). Thus we know that $\Gamma$ is a Schottky group. We can even find a finite set of free generators of $\Gamma$ by looking at its action on $\ensuremath{\mathcal{T}}^*(G)$: this action has a finite fundamental domain and this domain therefore has only finitely many neighboring translates. The set of these neighboring translates corresponds to a finite set of free generators for $\Gamma$. \end{myproof}
We are especially interested in the resulting Galois group $H := G/\Gamma$. Thus we now answer the question of which choices of $\Gamma$ are possible if we fix this Galois group: \pagebreak
\begin{cor}\label{cor:schottky_equiv} Let $H$ be a finite group and $G \subset \PGL_2(\ensuremath{\mathds{C}}_p)$ be a finitely generated discontinuous group. Further let $\Gamma$ be the kernel of a homomorphism $\varphi : G \to H$. Then the following statements are equivalent: \begin{enumerate}[i)] \setlength{\itemsep}{2pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} \item $\Gamma$ is a Schottky group \item $\varphi\restrict{G_i}$ is injective for every vertex group $G_i$ in $\ensuremath{\mathcal{G}}^*(G)$. \end{enumerate} \end{cor} \begin{myproof} This follows from the Theorem, since $\ker(\varphi\restrict{G_i}) = \Gamma \cap G_i$. \end{myproof}
\begin{ex}\label{ex:p-adic_origamis}
\begin{enumerate}[a)] \setlength{\itemsep}{2pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item Set $G_n := \left<D_n,\gamma;\, \gamma\sigma=\sigma\gamma\right>$ as in Example \ref{ex:dihedral_origami} (with $n$ odd and $\ord(\sigma)=2$). We extend this example: Choose $m \in \ensuremath{\mathds{N}}$ and define $\varphi: G_n \to D_n \times C_m$ by $\varphi|_{D_n} = (\id,1)$ and $\varphi(\gamma) = (1,c)$ where $c$ is a generator of $C_m$. Then Corollary \ref{cor:schottky_equiv} states that $\Gamma' := \ker(\varphi)$ is a Schottky group and the Galois group of the $p$-adic origami $\Omega/\Gamma' \to \Omega/G_n$ is $D_n \times C_m$. Note that $\Gamma' \subseteq \Gamma := \ker(G_n \to D_n)$ for every $m$, thus we have a covering of origamis $\Omega/\Gamma' \to \Omega/\Gamma \to \Omega/G_n$.
\item Set $G := \left<A_4,\gamma;\, \gamma\delta=\delta\gamma\right>$ as in Example \ref{ex:tetrahedral_origami} (with $\ord(\delta)=3$). We extend this example as in a): Choose $m \in \ensuremath{\mathds{N}}$ and define $\varphi: G \to A_4 \times C_m$ by $\varphi|_{A_4} = (\id,1)$ and $\varphi(\gamma) = (1,c)$ where $c$ is a generator of $C_m$. Then Corollary \ref{cor:schottky_equiv} states that $\Gamma' := \ker(\varphi)$ is a Schottky group and the Galois group of the $p$-adic origami $\Omega/\Gamma' \to \Omega/G$ is $A_4 \times C_m$. Note that $\Gamma' \subseteq \Gamma := \ker(G \to A_4)$ for every $m$, thus we again have a covering of origamis $\Omega/\Gamma' \to \Omega/\Gamma \to \Omega/G$. \item In Example \ref{ex:triangle} we have constructed the $5$-adic triangle group $\Delta(3,3,5)= A_5 *_{D_5} A_5$. The group $G := \left<A_5 *_{D_5} A_5,\gamma;\, \gamma\delta_1=\delta_2\gamma\right>$, where the $\delta_i$ of order 3 are chosen out of the two different $A_5$-components, can be embedded into $\PGL_2(\overline\ensuremath{\mathds{Q}}_5)$ (to show this one can use \zitat{Theorem II}{Kato2}). Then we can define $\varphi: G \to A_5$ as the identity on both $A_5$-components of the amalgamated product, and $\varphi(\gamma)=1$. This leads to a $5$-adic origami with Galois group $A_5$. \item Take the group $G_5$ from part a) and consider the homomorphism $\varphi:G_5 \to \PSL_2(\ensuremath{\mathds{F}}_{11})$ defined by \[\varphi(\sigma)=\mattwo6715,\quad \varphi(\delta)=\mattwo0963, \quad \varphi(\gamma)=\mattwo3788\]
We can calculate that this is indeed a homomorphism by checking the relations of $G_5$ for the given images of $\varphi$. We can also check that $\varphi$ is surjective and that $\varphi|_{D_5}$ is injective. Hence $\ker(\varphi)$ is a Schottky group and the corresponding origami has Galois group $\PSL_2(\ensuremath{\mathds{F}}_{11})$. \end{enumerate} \end{ex}
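These checks lend themselves to a short computation. The following sketch (our own script, not part of the text; the matrix entries are taken from the example above) verifies the relations of $G_5$ in $\PSL_2(\ensuremath{\mathds{F}}_{11})$, i.e.\ modulo sign, and confirms surjectivity by a closure computation ($|\PSL_2(\ensuremath{\mathds{F}}_{11})| = 660$):

```python
p = 11

def mul(A, B):
    # 2x2 matrix multiplication over F_p
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a * e + b * g) % p, (a * f + b * h) % p),
            ((c * e + d * g) % p, (c * f + d * h) % p))

def canon(A):
    # Canonical representative in PSL_2(F_p): normalize the sign so that
    # the first nonzero entry lies in {1, ..., (p-1)/2}.
    lead = next(x for row in A for x in row if x != 0)
    s = 1 if lead <= (p - 1) // 2 else p - 1
    return tuple(tuple((s * x) % p for x in row) for row in A)

I = ((1, 0), (0, 1))
sigma = ((6, 7), (1, 5))   # phi(sigma)
delta = ((0, 9), (6, 3))   # phi(delta)
gamma = ((3, 7), (8, 8))   # phi(gamma)

def power(A, k):
    R = I
    for _ in range(k):
        R = mul(R, A)
    return R

# Relations of G_5: dihedral relations for D_5, and gamma commutes with sigma.
assert canon(power(sigma, 2)) == I                          # sigma^2 = 1
assert canon(power(delta, 5)) == I and canon(delta) != I    # ord(delta) = 5
# sigma delta sigma^{-1} = delta^{-1}; note sigma = sigma^{-1} in PSL_2
assert canon(mul(mul(sigma, delta), sigma)) == canon(power(delta, 4))
assert canon(mul(gamma, sigma)) == canon(mul(sigma, gamma))  # gamma sigma = sigma gamma

# Surjectivity: closure of the three images under multiplication.
gens = [canon(sigma), canon(delta), canon(gamma)]
seen, frontier = {canon(I)}, [canon(I)]
while frontier:
    A = frontier.pop()
    for g in gens:
        B = canon(mul(A, g))
        if B not in seen:
            seen.add(B)
            frontier.append(B)
print(len(seen))  # 660
```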
\section{Automorphisms of $p$-adic origamis}\enlargethispage{12pt}
The Galois group of the covering $X \to E$ is a subgroup of the automorphism group $\Aut X$. For normal complex origamis we know from Section \ref{sct:complex} that the Galois group consists precisely of all possible translations. If the automorphism group is strictly larger than the Galois group, then there have to be automorphisms which are not translations, i.e.\ there is an automorphism which induces not the identity but an involution on $E$. We now investigate the implications when this happens in the $p$-adic setting.
\begin{thm}\label{thm:p-adic_origami_aut} In the situation of Theorem \ref{thm:p-adic_origami} let $\Aut(X)$ contain an element $\sigma$ of order 2 which induces a non-trivial automorphism $\overline{\sigma}$ of $E$ fixing the branch point of $X \to E$. Then there is a discontinuous group $H$ containing $G$ as normal subgroup of index 2, which is isomorphic to the fundamental group of the graph of groups \begin{center} \GraphOrigamiAut{\Delta_1}{\Delta_2}{C_a}{C_2}{C_2}{C_2}{C_{2b}} \end{center} where $\Delta_1$ is the $p$-adic triangle group of type $\Delta(2,2,a)$, i.e. $\Delta_1 \cong D_a$, and $\Delta_2$ is a $p$-adic triangle group of type $\Delta(2,a,2b)$ containing $\Delta$ of index 2. \end{thm} \begin{myproof} Let $L$ be the subgroup of $\Aut X$ generated by $\sigma$ and the Galois group $\Gal(X/E)$. Every $\ell \in L\setminus \Gal(X/E)$ induces $\overline{\sigma}$ on $E$, thus $\ell \circ \sigma \in \Gal(X/E)$. Hence $L$ contains $\Gal(X/E)$ with index 2, and therefore as a normal subgroup. As in the proof of Theorem \ref{thm:p-adic_origami} we have $L \cong H/\Gamma$ for a discontinuous group $H$ and $\Gal(X/E) \cong G/\Gamma$ for a normal subgroup $G$ of index 2 in $H$ with $\Omega(H)=\Omega(G)=\Omega(\Gamma)$. Now $\Omega/H \cong E/\left<\sigma\right> =: P$.
The branch point of $X\to E$ is a fixed point of $\overline{\sigma}$ and therefore a ramification point of $E \to P$. By Riemann-Hurwitz this means that $g(P)=0$ and there are four ramification points of $E \to P$. As the degree of the map $E \to P$ is 2, and this is also the ramification index of the four ramification points, we know that there are also exactly four branch points. The composition $X\to P$ of the maps thus has four branch points. Over three of them the map $X \to E$ is unramified, therefore the corresponding ramification indices of $X \to P$ are $2$. Over the fourth branch point the map $X \to E$ is ramified with ramification index $b$, thus the total ramification index is $2b$.
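Spelled out, the Riemann-Hurwitz computation reads as follows (a sketch; the notation $e_x$ for the ramification indices and $r$ for their number is ours):

```latex
% Riemann-Hurwitz for the degree-2 covering E -> P, with g(E) = 1 and
% r ramification points, each of index e_x = 2:
\[
  \underbrace{2g(E)-2}_{=\,0}
  \;=\; 2\bigl(2g(P)-2\bigr) + \sum_x (e_x - 1)
  \;=\; 4g(P) - 4 + r .
\]
% Since \overline{\sigma} fixes the branch point we have r >= 1, which
% forces g(P) = 0 and hence r = 4.
```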
Now let $\ensuremath{\mathcal{H}}^*$ be the quotient graph corresponding to $H$. Since $\Omega/H \cong P$ the graph $\ensuremath{\mathcal{H}}^*$ is a realizable graph of genus zero with four ends. The stabilizer of one end is a cyclic group of order $2b$, the stabilizers of the other three ends are cyclic groups of order $2$.
We have $g(\ensuremath{\mathcal{H}}^*)=0$, thus this also holds for all subgraphs of $\ensuremath{\mathcal{H}}^*$. Therefore the Euler characteristic of any subgraph equals its number of connected components, and we have $\chi(\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nc}}})\geq1$ and $\chi(\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nt}}})\geq 1$. As $\chi(\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nc}}})+2\chi(\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nt}}})=4$ by Theorem \ref{thm:nb_endpoints}, we conclude $\chi(\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nc}}})=2$ and $\chi(\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nt}}})=1$. Thus $\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nc}}}$ has two connected components, both of which we can, by Theorem \ref{thm:contracted_graph}, replace by single vertices whose vertex groups are triangle groups $\Delta_1$ and $\Delta_2$. Furthermore $\ensuremath{\mathcal{H}}\ensuremath{^{\mathrm{nt}}}$ has to be connected, therefore those two triangle groups have to be connected by a path with a nontrivial cyclic stabilizer. This path can be contracted to a single edge.
It remains to find the connection between the stabilizing groups of $\ensuremath{\mathcal{G}}^*$ and $\ensuremath{\mathcal{H}}^*$. We get the graph $\ensuremath{\mathcal{H}}^*$ as the quotient of $\ensuremath{\mathcal{G}}^*$ by $\sigma$ (if we ignore the ends of both graphs). The single edge from $\ensuremath{\mathcal{G}}^*$ has to be mapped to itself and inverted (because otherwise there would still be a closed edge in $\ensuremath{\mathcal{H}}^*$). Thus we have to insert a vertex on this edge, and for constructing the quotient $\ensuremath{\mathcal{H}}^*$ we have to take one of the half edges. Therefore the stabilizer of this edge (as it is not fixed by $\sigma$) is the same as before (namely $C_a$) and the original vertex is fixed by $\Delta$ and $\sigma$. \end{myproof}
\begin{ex}\label{ex:p-adic_aut} Let $\zeta$ be a primitive 10-th root of unity and choose $\delta,\sigma$ and $\gamma$ as in Example \ref{ex:dihedral_origami}. Then the group $G := \gen{\delta^2,\sigma,\gamma}$ corresponds to the case $n=5$ of this example. But if we work out the quotient graph of $H := \gen{\delta,\sigma,\gamma}$, we note that while there still is a vertex with stabilizer $\gen{\sigma,\delta}\cong D_{10}$, the element $\gamma\delta^5$ is now elliptic of order 2 but does not fix this vertex. Instead there is now another vertex fixed by $\gamma\delta^5$ on the axis of $\sigma$; its stabilizer is therefore $\gen{\sigma,\gamma\delta^5}\cong D_2$. This means that $\gamma$ no longer creates a circle in the graph, but the quotient graph becomes \begin{center} \GraphOrigamiAut{D_2}{D_{10}}{C_2}{C_2}{C_2}{C_2}{C_{10}} \end{center} Now we define a homomorphism $\varphi:H \to \PSL_2(\ensuremath{\mathds{F}}_{11}) \times \ensuremath{\mathds{Z}}/2\ensuremath{\mathds{Z}}$ by \[\varphi(\sigma)=\left(\mattwo6715,0\right),\quad \varphi(\delta)=\left(\mattwo864{10},1\right), \quad \varphi(\gamma)=\left(\mattwo3788,0\right).\]
As in Example \ref{ex:p-adic_origamis} d) we can check that this is indeed a homomorphism and is injective when restricted to the vertex groups. Note that $\varphi(\delta^2)=\left(\smat{0 & 9 \\ 6 & 3},0\right)$, thus $\varphi|_G$ is exactly the homomorphism considered in Example \ref{ex:p-adic_origamis} d), and we have $\ker(\varphi) = \ker(\varphi|_G) = \Gamma$. Thus the $p$-adic origami $\Omega/\Gamma \to \Omega/G$ can be extended to $\Omega/\Gamma \to \Omega/H$ with Galois group $H/\Gamma \cong \PSL_2(\ensuremath{\mathds{F}}_{11})\times \ensuremath{\mathds{Z}}/2\ensuremath{\mathds{Z}}$. This means that the automorphism group of this origami contains the group $\PSL_2(\ensuremath{\mathds{F}}_{11})\times \ensuremath{\mathds{Z}}/2\ensuremath{\mathds{Z}}$. \end{ex}
\section{Complex and $p$-adic origamis} Now we want to connect the complex and the $p$-adic worlds: An origami-curve in $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}}$ is always defined over ${\overline{\ensuremath{\mathds{Q}}}}$, and thus also defines a curve in $\ensuremath{\mathcal{M}}_{g,{\overline{\ensuremath{\mathds{Q}}}}}$, which can in turn be interpreted as a curve in $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}_p}$. Now we ask the question: Does this curve intersect the subspace of $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}_p}$ containing Mumford curves?
The resulting points in $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}_p}$ are still curves which cover an elliptic curve with only one branch point. Thus those curves are Mumford curves if and only if they occur as $p$-adic origamis. We have introduced several invariants of origami-curves in \cite{KreDiss}, some of which turn up in both worlds: The ramification indices, the Galois group of a normal origami and its automorphism group. By the Lefschetz principle (\zitat{Appendix}{Lef}) these algebraic properties of the origamis coincide over $\ensuremath{\mathds{C}}$ and over $\ensuremath{\mathds{C}}_p$. In some cases this is enough information to identify the complex origami-curve which belongs to a given $p$-adic origami.
Let $X_\ensuremath{\mathds{C}} \to E_\ensuremath{\mathds{C}}$ be an origami over $\ensuremath{\mathds{C}}$. We can write $E_\ensuremath{\mathds{C}} = E_{\ensuremath{\mathds{C}},\tau} = \ensuremath{\mathds{C}}/(\ensuremath{\mathds{Z}}+\tau\ensuremath{\mathds{Z}})$ with $\tau \in \ensuremath{\mathds{H}}$, where $0$ is the only branch point of $X_\ensuremath{\mathds{C}} \to E_\ensuremath{\mathds{C}}$. We have the Weierstrass-covering $\wp: E_{\ensuremath{\mathds{C}},\tau} \to \ensuremath{\mathds{P}}^1(\ensuremath{\mathds{C}})$. For any ramification point $x\neq 0$ we have $\wp'(x)^2 = 4\wp^3(x)-g_2\wp(x)-g_3 = 0$, so if $g_2, g_3 \in {\overline{\ensuremath{\mathds{Q}}}}$ we get $\wp(x) \in {\overline{\ensuremath{\mathds{Q}}}}$. As $\wp(0)=\infty$ Belyi's theorem would then imply that both $X_\ensuremath{\mathds{C}}$ and $E_\ensuremath{\mathds{C}}$ are defined over ${\overline{\ensuremath{\mathds{Q}}}}$. Therefore we then have $X_{\overline{\ensuremath{\mathds{Q}}}}$ and $E_{\overline{\ensuremath{\mathds{Q}}}}$ over ${\overline{\ensuremath{\mathds{Q}}}}$ with the following diagram of base changes: \[\begin{CD} X_{\overline{\ensuremath{\mathds{Q}}}} \times_{\overline{\ensuremath{\mathds{Q}}}} \ensuremath{\mathds{C}} = X_\ensuremath{\mathds{C}} @>>> X_{\overline{\ensuremath{\mathds{Q}}}} @<<< X_{\ensuremath{\mathds{C}}_p} = X_{\overline{\ensuremath{\mathds{Q}}}} \times_{\overline{\ensuremath{\mathds{Q}}}} \ensuremath{\mathds{C}}_p\\ @VVV @VVV @VVV \\ E_{\overline{\ensuremath{\mathds{Q}}}} \times_{\overline{\ensuremath{\mathds{Q}}}} \ensuremath{\mathds{C}} = E_\ensuremath{\mathds{C}} @>>> E_{\overline{\ensuremath{\mathds{Q}}}} @<<< E_{\ensuremath{\mathds{C}}_p} = E_{\overline{\ensuremath{\mathds{Q}}}} \times_{\overline{\ensuremath{\mathds{Q}}}} \ensuremath{\mathds{C}}_p\\ \end{CD}\]
By varying $\tau \in \ensuremath{\mathds{H}}$ we get a curve in the moduli-space $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}}$, which leads to a curve in $\ensuremath{\mathcal{M}}_{g,{\overline{\ensuremath{\mathds{Q}}}}}$, which itself can be considered as a subset of a curve in $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}_p}$. This curve may or may not intersect the subset of Mumford curves in $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}_p}$.
\begin{ex}\label{ex:con:kappes1} Consider the following origami: \begin{center} \KappesOrigami \end{center}
Kappes has proven in \zitat{Theorem IV.3.7}{Kap} that the origami-curve in $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}}$ of this origami contains all the curves birationally equivalent to \[y^2=(x^2-1)(x^2-\lambda^2)\left(x^2-\left(\tfrac{\lambda}{\lambda+1}\right)^2\right)\] for $\lambda \in \ensuremath{\mathds{C}} \setminus \set{0,\pm 1,-\frac12,-2}$. If we now restrict the choice of $\lambda$ to ${\overline{\ensuremath{\mathds{Q}}}}$, we get a curve in $\ensuremath{\mathcal{M}}_{g,{\overline{\ensuremath{\mathds{Q}}}}}$. We can now change the base of this curve to $\ensuremath{\mathds{C}}_p$ for an arbitrary prime $p$. This will result in a curve in $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}_p}$. Does this curve intersect the subspace of $\ensuremath{\mathcal{M}}_{g,\ensuremath{\mathds{C}}_p}$ containing Mumford curves?
Fortunately \zitat{Theorem 4.3}{Brad-Cyc} offers a criterion for a hyperelliptic curve $X$ to be a Mumford curve: This is the case if and only if the branch points of $X \to \ensuremath{\mathds{P}}^1$ (in our case $\pm1, \pm\lambda$ and $\pm \frac{\lambda}{\lambda+1}$) can be matched into pairs $(a_i,b_i)$ such that $\ensuremath{\mathds{P}}^1$ can be covered by annuli $U_i$ each containing exactly one of those pairs. In our case we consider only $p>2$, set $\lambda := q-1$ for any $q \in \ensuremath{\mathds{C}}_p^\times$ with $\abs{q}<1$ and match the points as follows: \begin{align*} a_1 &= 1,& b_1 &= -\lambda =1-q & \Rightarrow & \abs{a_1-b_1} = \abs{q} < 1 \\ a_2 &= -1,& b_2 &= \lambda =q-1 & \Rightarrow & \abs{a_2-b_2} = \abs{q} < 1 \\ a_3 &= \tfrac{\lambda}{\lambda+1} = \tfrac1q(q-1), &b_3 &= -\tfrac1q(q-1) & \Rightarrow & \abs{a_3} = \abs{b_3}= \abs{\tfrac1q}> 1 \end{align*} Thus we can choose $U_1 = B(1,1)\setminus B(-1,\abs{q})$, $U_2 = B(1,1)\setminus B(1,\abs{q})$ and $U_3 = \ensuremath{\mathds{P}}^1\setminus B(1,1)$ to get the desired covering\footnote{The balls $B(x,r)$ were defined in Section \ref{sct:discontinuous}.}. \end{ex} \pagebreak
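The three absolute values can be verified for a concrete choice of $p$ and $q$. A minimal sketch (the values $p = 3$, $q = 3$ are our own illustration, not from the text):

```python
from fractions import Fraction

def vp(r, p):
    # p-adic valuation of a nonzero rational
    r = Fraction(r)
    num, den, v = r.numerator, r.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def absp(r, p):
    # p-adic absolute value |r|_p = p^{-v_p(r)}
    return Fraction(0) if r == 0 else Fraction(1, p) ** vp(r, p)

p, q = 3, 3                 # |q|_p = 1/3 < 1
lam = q - 1                 # lambda = q - 1

a1, b1 = 1, -lam
a2, b2 = -1, lam
a3, b3 = Fraction(lam, lam + 1), -Fraction(lam, lam + 1)

print(absp(a1 - b1, p))          # 1/3  (= |q| < 1)
print(absp(a2 - b2, p))          # 1/3  (= |q| < 1)
print(absp(a3, p), absp(b3, p))  # 3 3  (= |1/q| > 1)
```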
In general it is almost impossible to find the equation of a given origami. Therefore we take a simpler approach and work the other way round: We have already constructed $p$-adic origamis; now we try to match them to the corresponding complex origami-curves. We will do this by comparing some of the corresponding invariants.
\section{Galois groups with a unique curve}\label{sct:unique_curve} Some Galois groups occur only for a single origami-curve over $\ensuremath{\mathds{C}}$. We will now prove that this is the case for the Galois groups $D_n \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ and $A_4 \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$, which occurred as Galois groups of $p$-adic origamis in Example \ref{ex:p-adic_origamis} a) and b). Recall from Section \ref{sct:complex} that for a normal origami with monodromy $f : F_2 \twoheadrightarrow G$ the other origamis on the same curve arise from $f$ by composition with elements of $\Aut(F_2)$ and $\Aut(G)$.
\begin{lem}\label{lem:cyclic_origami_01} Let $f : F_2 \to \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ be surjective. Then up to an automorphism of $F_2$ we can assume $f(x)=1$ and $f(y)=0$. \end{lem} \begin{myproof} Let $c_x, c_y \in \ensuremath{\mathds{N}}$ with $c_x= f(x)$ and $c_y = f(y)$ in $\ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$.
We prove first that we can choose the representatives $c_x$ and $c_y$ coprime: Let $p_i$ be the prime factors of $c_x$ and set \[c_y' := c_y + m \cdot \prod_{p_i \nmid c_y}p_i \] Assume that there is a $p_i$ which is a factor of $c_y'$. If $p_i \mid c_y$ then $p_i$ would also have to be a factor of the right-hand summand, and as it is not contained in the product we would then have $p_i \mid m$. But this contradicts $\gen{c_x,c_y} = \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$. If on the other hand $p_i \nmid c_y$, then $p_i$ would be a factor of the right-hand summand but not of the left one, which would contradict the assumption.
Therefore no $p_i$ is a factor of $c_y'$, thus $\gcd(c_x,c_y') = 1$. We may replace $c_y$ by $c_y'$, as the two are congruent modulo $m$.
Now $\gcd(c_x,c_y) = 1$, thus there exist $a,b \in \ensuremath{\mathds{Z}}$ with $ac_x+bc_y = 1$. Set \[A := \mattwo{a}{-c_y}{b}{c_x} \in \SL_2(\ensuremath{\mathds{Z}})\] and let $\varphi \in \Aut(F_2)$ be a lift of $A$. As $\ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ is abelian we get a commutative diagram \begin{center} \commdiag{@!=2mm}{
F_2 \ar[rr]^\varphi \ar[rd] && F_2 \ar[rr]^f \ar[rd] & & \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}} \\
&\ensuremath{\mathds{Z}}^2 \ar[rr]^{\overline{\varphi}} & & \ensuremath{\mathds{Z}}^2 \ar[ur]_{\overline{f}} & } \end{center} where $\overline{\varphi}$ is the multiplication with $A$.
Thus \begin{align*} (f\circ\varphi)(x) &= (\overline{f}\circ\overline{\varphi})\left(\mat{1\\0}\right) = \overline{f}\left(\mat{a \\ b}\right) = ac_x+bc_y = 1 \\ (f\circ\varphi)(y) &= (\overline{f}\circ\overline{\varphi})\left(\mat{0\\1}\right) = \overline{f}\left(\mat{-c_y \\ c_x}\right) = -c_yc_x+c_xc_y = 0 \end{align*} \end{myproof}
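Both steps of the proof can be illustrated numerically. A minimal sketch, with illustrative values $c_x = 12$, $c_y = 8$, $m = 5$ of our own choosing (note $\gen{12,8} = \gen{2,3} = \ensuremath{\mathds{Z}}/5\ensuremath{\mathds{Z}}$):

```python
from math import gcd

def prime_factors(n):
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def coprime_representative(c_x, c_y, m):
    # c_y' = c_y + m * (product of prime factors of c_x not dividing c_y)
    prod = 1
    for q in prime_factors(c_x):
        if c_y % q != 0:
            prod *= q
    return c_y + m * prod

c_x, c_y, m = 12, 8, 5
c_y2 = coprime_representative(c_x, c_y, m)
assert c_y2 % m == c_y % m        # same residue class mod m
assert gcd(c_x, c_y2) == 1        # now coprime

def bezout(u, v):
    # extended Euclid: returns (a, b) with a*u + b*v = gcd(u, v)
    if v == 0:
        return 1, 0
    a, b = bezout(v, u % v)
    return b, a - (u // v) * b

a, b = bezout(c_x, c_y2)
assert a * c_x + b * c_y2 == 1               # first column of A maps x to 1
assert (-c_y2 * c_x + c_x * c_y2) % m == 0   # second column maps y to 0
```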
\begin{prop} Let $n,m \in \ensuremath{\mathds{N}}$ and $n$ be odd. Up to an automorphism of $F_2$ there exists only one curve of origamis with Galois group $D_n \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$. \end{prop} \begin{myproof} Choose $\sigma, \tau \in D_n$ with $\gen{\sigma,\tau}=D_n$ and $\ord(\sigma)=\ord(\tau)=2$, and set $\delta = \sigma\tau$.
Let $f : F_2 \to D_n \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ be the monodromy of a normal origami. By Lemma \ref{lem:cyclic_origami_01} we can apply an automorphism of $F_2$ to get $f(x)=(\alpha,1)$ and $f(y)=(\beta,0)$ for some $\alpha,\beta \in D_n$. As $f$ has to be surjective we know $\gen{\alpha,\beta} = D_n$. Up to an automorphism of $D_n$ we have three cases: \begin{enumerate}[i)] \setlength{\itemsep}{2pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} \item $\ord(\alpha)=2, \ord(\beta)=2$, w.l.o.g. $(\alpha,\beta)=(\tau,\sigma)$ \item $\ord(\alpha)=n, \ord(\beta)=2$, w.l.o.g. $(\alpha,\beta)=(\delta,\sigma)$ \item $\ord(\alpha)=2, \ord(\beta)=n$, w.l.o.g. $(\alpha,\beta)=(\tau,\delta)$ \end{enumerate}
We can apply $\varphi \in \Aut(F_2)$ with $x \mapsto yx$ and $y \mapsto y$ to get from i) to ii): \begin{align*} (f \circ \varphi)(x) &= f(yx) = (\sigma\tau,0+1) = (\delta,1) \text{ and } \\ (f \circ \varphi)(y) &= f(y) = (\sigma,0) \end{align*}
For odd $m$ we can apply $\varphi \in \Aut(F_2)$ with $x \mapsto x$ and $y \mapsto yx^m$ to get from i) to iii): \begin{align*} (f \circ \varphi)(x) &= f(x) = (\tau,1) \text{ and } \\ (f \circ \varphi)(y) &= f(yx^m) = (\sigma\tau^m, 0+m) = (\delta,0) \end{align*}
For even $m$ case iii) is not possible, as $f$ would not be surjective: Assume there is a preimage $z \in f^{-1}(\id,1)$. We know that $f(xy)=f(y^{-1}x)$ (because this holds in both components). Thus we can choose $z$ of the form $x^ay^b$, its image is $(\tau^a\delta^b,a)$. Now for $f(z)_1 = \id$ we need $a$ even, but for $f(z)_2 = 1$ we need $a=1$ in $\ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$. For even $m$ this is impossible. Therefore there is no preimage of $(\id,1)$ in this case. \end{myproof}
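The parity obstruction can be confirmed by brute force. In the following sketch (our own; $D_n$ is encoded as pairs $(k,e) \leftrightarrow r^k s^e$, an encoding chosen by us and not taken from the text) we generate the subgroup of $D_5 \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ given by case iii) for $m=2$ and $m=3$:

```python
n = 5

def dmul(a, b):
    # D_n as pairs (k, e) <-> r^k s^e with s r s^{-1} = r^{-1}
    (k1, e1), (k2, e2) = a, b
    return ((k1 + (-1) ** e1 * k2) % n, (e1 + e2) % 2)

tau = (0, 1)      # a reflection
delta = (1, 0)    # the rotation of order n

def generated(m):
    # subgroup of D_n x Z/mZ generated by (tau, 1) and (delta, 0)
    gens = [(tau, 1 % m), (delta, 0)]
    seen, frontier = set(gens), list(gens)
    while frontier:
        a = frontier.pop()
        for g in gens:
            c = (dmul(a[0], g[0]), (a[1] + g[1]) % m)
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

results = {m: len(generated(m)) == 2 * n * m for m in (2, 3)}
print(results)  # {2: False, 3: True}
```

As expected, the pair fails to generate for even $m$ and succeeds for odd $m$.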
\begin{prop}\label{prop:con:single_tetrahedral} Up to an automorphism of $F_2$ there exists, for any $m \in \ensuremath{\mathds{N}}$, only one curve of origamis with Galois group $A_4 \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$. \end{prop} \begin{myproof} Let $f : F_2 \to A_4 \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ be the monodromy of a normal origami. By Lemma \ref{lem:cyclic_origami_01} we can apply an automorphism of $F_2$ to get $f(x)=(\alpha,1)$ and $f(y)=(\beta,0)$. As $f$ has to be surjective we know $\gen{\alpha,\beta} = A_4$. Up to an isomorphism of $A_4$ we have four cases: \pagebreak[2] \begin{enumerate}[i)] \setlength{\itemsep}{2pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} \item $\ord(\alpha)=2, \ord(\beta)=3$, w.l.o.g. $(\alpha,\beta)=((1\,2)(3\,4),(2\,3\,4))$ \item $\ord(\alpha)=3, \ord(\beta)=2$, w.l.o.g. $(\alpha,\beta)=((1\,2\,3),(1\,2)(3\,4))$ \item $\ord(\alpha)= \ord(\beta)=3$, $\ord(\alpha\beta)=2$, w.l.o.g. $(\alpha,\beta)=((1\,2\,3),(2\,3\,4))$ \item $\ord(\alpha)= \ord(\beta)=3$, $\ord(\alpha\beta)=3$, w.l.o.g. $(\alpha,\beta)=((1\,2\,4),(2\,3\,4))$ \end{enumerate}
We can apply $\varphi \in \Aut(F_2)$ with $x \mapsto xy$ and $y \mapsto y$ to get from iii) to i): \begin{align*} (f \circ \varphi)(x) &= f(xy) = ((1\,2\,3)(2\,3\,4),1) = ((1\,2)(3\,4),1) \text{ and } \\ (f \circ \varphi)(y) &= f(y) = ((2\,3\,4),0) \end{align*}
We can apply $\varphi \in \Aut(F_2)$ with $x \mapsto xy^{-1}$ and $y \mapsto y$ to get from iii) to iv): \begin{align*} (f \circ \varphi)(x) &= f(xy^{-1}) = ((1\,2\,3)(2\,4\,3),1) = ((1\,2\,4),1) \text{ and } \\ (f \circ \varphi)(y) &= f(y) = ((2\,3\,4),0) \end{align*}
If $m$ is not divisible by 3 choose $k \in \ensuremath{\mathds{N}}$ such that $km \equiv 1 \mod 3$. We can then apply $\varphi \in \Aut(F_2)$ with $x \mapsto x$ and $y \mapsto x^{km}y$ to get from iii) to ii): \begin{align*} (f \circ \varphi)(x) &= f(x) = ((1\,2\,3),1) \text{ and } \\ (f \circ \varphi)(y) &= f(x^{km}y) = ((1\,2\,3)(2\,3\,4), 0+km) = ((1\,2)(3\,4),0) \end{align*}
If $m$ is divisible by 3, case ii) is not possible, as $f$ would not be surjective: Assume there is a preimage $z \in f^{-1}(\id,1)$. As the first component of $f(z)$ is $\id \in A_4$ the presentation of $A_4$ tells us that the element $z$ is contained in the normal subgroup generated by $y^2,x^3$ and $(yx)^3$, and hence is a product of conjugates of those elements. But $f(y^2)=(\id,0)$ and $f(x^3)=f((yx)^3)=(\id,3)$. Therefore the second component of $f(z)$ is divisible by 3. As $m$ is also divisible by 3 this contradicts $f(z)=(\id,1)$. \end{myproof}
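The divisibility obstruction admits the same kind of brute-force check (our own sketch, not from the text): generate the subgroup of $A_4 \times \ensuremath{\mathds{Z}}/m\ensuremath{\mathds{Z}}$ given by case ii) and compare its size with $12m$.

```python
def from_cycles(cycles, n=4):
    p = list(range(n + 1))            # 1-indexed; p[0] is unused
    for cyc in cycles:
        for i, v in enumerate(cyc):
            p[v] = cyc[(i + 1) % len(cyc)]
    return tuple(p)

def pmul(p, q):                        # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

x = from_cycles([(1, 2, 3)])           # first component of f(x)
y = from_cycles([(1, 2), (3, 4)])      # first component of f(y)

def generated(m):
    # subgroup of A_4 x Z/mZ generated by (x, 1) and (y, 0)
    gens = [(x, 1 % m), (y, 0)]
    seen, frontier = set(gens), list(gens)
    while frontier:
        a = frontier.pop()
        for g in gens:
            c = (pmul(a[0], g[0]), (a[1] + g[1]) % m)
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

results = {m: len(generated(m)) == 12 * m for m in (2, 3)}
print(results)  # {2: True, 3: False}
```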
\begin{ex}\label{ex:con:kappes2} In Example \ref{ex:con:kappes1} we have shown that the origami $O$ defined by \begin{center} \KappesOrigami \end{center} occurs as a $p$-adic origami. Now we have an alternative way of showing this: Let $O'$ be the minimal normal cover of $O$. The Galois group of $O'$ is $A_4$. Proposition \ref{prop:con:single_tetrahedral} tells us that the origami-curve of $O'$ is the only curve with this Galois group.
In Example \ref{ex:tetrahedral_origami} we constructed a $p$-adic origami with Galois group $A_4$; in fact one for every suitably chosen $q \in \ensuremath{\mathds{P}}^1(\ensuremath{\mathds{C}}_p)$. If one of those origamis $X'$ is (as an algebraic curve) defined over ${\overline{\ensuremath{\mathds{Q}}}}$ and hence over $\ensuremath{\mathds{C}}$ then it has to occur on the origami-curve of $O'$. The covering $O'\to O$ leads to a morphism $X' \to X$, where $X$ is on the origami-curve of $O$. As $X'$ is a Mumford curve the same holds for $X$ by \zitat{Satz 5.24}{Brad-Dip}. \end{ex}
\section{When the group is not enough}
In \zitat{2.2}{KreDiss} we have proposed an algorithm to find all complex origami-curves of normal origamis with a given Galois group $H$. This results in a set of representative origamis given as epimorphisms $f : F_2 \twoheadrightarrow H$ as described in Section \ref{sct:complex}. For most groups there is only one curve with the given Galois group, but there are cases with more than one.\footnote{There are 2386 groups of order at most 250 which can be generated by two elements. Among these there are only 30 for which there is more than one curve with this Galois group, cf. \zitat{A.3}{KreDiss}.} The smallest example of such a group is $A_5$, where there are two curves. Representatives are given by \begin{align*} f_1:F_2 \to A_5, \quad x&\mapsto (1\,5\,3\,4\,2), \quad y \mapsto (1\, 3\, 2\, 4\, 5)\quad \text {and}\\ f_2:F_2 \to A_5, \quad x&\mapsto (1\,5\,2\,4\,3), \quad y \mapsto (2\, 3)(4\, 5) \end{align*} We can easily see that these two origamis do not define the same curve: The ramification index of a normal origami is the order of $f(xyx^{-1}y^{-1})$. Hence the ramification indices of the two origamis are \[ \ord(f_1(xyx^{-1}y^{-1})) = 5 \text { and } \ord(f_2(xyx^{-1}y^{-1})) = 3 \]
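These two ramification indices can be checked mechanically. A minimal sketch (our own script, not part of the text), computing the commutator of the given permutations and its order, with permutations composed right-to-left:

```python
def from_cycles(cycles, n=5):
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for i, v in enumerate(cyc):
            p[v] = cyc[(i + 1) % len(cyc)]
    return p

def compose(p, q):            # (p o q)(i) = p(q(i))
    return {i: p[q[i]] for i in q}

def inverse(p):
    return {v: k for k, v in p.items()}

def order(p):
    r, k = dict(p), 1
    while any(r[i] != i for i in r):
        r, k = compose(r, p), k + 1
    return k

def commutator(x, y):         # x y x^{-1} y^{-1}
    return compose(compose(x, y), compose(inverse(x), inverse(y)))

x1, y1 = from_cycles([(1, 5, 3, 4, 2)]), from_cycles([(1, 3, 2, 4, 5)])
x2, y2 = from_cycles([(1, 5, 2, 4, 3)]), from_cycles([(2, 3), (4, 5)])

print(order(commutator(x1, y1)))  # 5
print(order(commutator(x2, y2)))  # 3
```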
\begin{ex} In Example \ref{ex:p-adic_origamis} c) we have investigated a $5$-adic origami with Galois group $A_5$. Our $5$-adic origami had ramification index 5 (recall that this was the order of the cyclic stabilizer of the single end of the quotient graph), hence this corresponds to the curve of the origami defined by $f_1$. \end{ex}
Sometimes fixing the ramification index makes the origami-curve unique. But there are still some cases where two curves have equal Galois groups and equal ramification index. An example for such a group is the group $\PSL_2(\ensuremath{\mathds{F}}_7)$, where there are even four curves, represented by $f_i : F_2 \to \PSL_2(\ensuremath{\mathds{F}}_7)$ with $f_i(x)=\sigma_i$ and $f_i(y)=\tau_i$ with \begin{align*} &\sigma_1=\sigma_2=\sigma_3 = \mattwo1254, \quad \sigma_4=\mattwo0616 \\ &\tau_1 = \mattwo6026, \quad \tau_2=\sigma_4, \quad \hspace{3pt}\tau_3= \mattwo1365, \quad \tau_4=\mattwo6334 \end{align*} The two curves defined by $f_1$ and $f_2$ both have ramification index 4.
In \zitat{Prop. 3.13}{KreDiss} we show that the automorphism group of a normal origami with Galois group $H$ is either $H$ itself or isomorphic to $C_2 \ltimes_\Phi H$ where $\Phi: C_2 \to \Aut(H)$ maps the generator of the cyclic group $C_2$ to the automorphism $\varphi$ of $H$ defined by $\varphi(f(x))=f(x)^{-1}$ and $\varphi(f(y))=f(y)^{-1}$. In the example above with Galois group $\PSL_2(\ensuremath{\mathds{F}}_7)$ and ramification index 4 the automorphism groups are isomorphic and hence cannot be used to distinguish those two curves. But in some cases they are helpful:
\begin{ex} In Example \ref{ex:p-adic_origamis} d) we have investigated a $5$-adic origami with Galois group $\PSL_2(\ensuremath{\mathds{F}}_{11})$ with ramification index $5$. There are four possible origami-curves, represented by $f_i : F_2 \to \PSL_2(\ensuremath{\mathds{F}}_{11})$ with $f_i(x)=\sigma_i$ and $f_i(y)=\tau_i$ with \begin{align*} &\sigma_1=\sigma_2=\sigma_3 = \mattwo4684,\quad \,\sigma_4=\mattwo{10}{10}76 \\ &\tau_1 = \sigma_4, \quad \tau_2 = \mattwo218{10},\quad \tau_3= \mattwo1401, \quad \tau_4=\mattwo4{10}03 \end{align*} The automorphism groups of $f_2$ and $f_4$ are isomorphic to $\PSL_2(\ensuremath{\mathds{F}}_{11})\times \ensuremath{\mathds{Z}}/2\ensuremath{\mathds{Z}}$, while the other two are isomorphic to $\Aut(\PSL_2(\ensuremath{\mathds{F}}_{11}))$. We have seen in Example \ref{ex:p-adic_aut} that the automorphism group of the $p$-adic origami contains $\PSL_2(\ensuremath{\mathds{F}}_{11})\times \ensuremath{\mathds{Z}}/2\ensuremath{\mathds{Z}}$, thus the corresponding complex origami-curve is defined either by $f_2$ or by $f_4$. We presently cannot decide which of the two curves is the right one. \end{ex}
\end{document} |
\begin{document}
\title{Logical blocks for fault-tolerant topological quantum computation}
\newcommand*\leadauthor{\thanks{Lead authors: \\ \mbox{[email protected]} \\ \mbox{[email protected]} }}
\author{H\'ector Bomb\'in} \author{Chris Dawson} \author{Ryan V. Mishmash} \leadauthor \author{Naomi Nickerson} \author{Fernando Pastawski} \author{Sam Roberts}\leadauthor
\affiliation{PsiQuantum Corp., Palo Alto} \date\today
\begin{abstract}
Logical gates constitute the building blocks of fault-tolerant quantum computation. While quantum error-corrected \emph{memories} have been extensively studied in the literature, explicit constructions and detailed analyses of thresholds and resource overheads of universal logical gate sets have so far been limited. In this paper, we present a comprehensive framework for universal fault-tolerant logic motivated by the combined need for (\textit{i}) platform-independent logical gate definitions, (\textit{ii}) flexible and scalable tools for numerical analysis, and (\textit{iii}) exploration of novel schemes for universal logic that improve resource overheads. Central to our framework is the description of logical gates holistically in a way which treats space and time on a similar footing. Focusing on quantum instruments based on surface codes, we introduce explicit, but platform-independent representations of topological logic gates---called logical blocks---and generate new, overhead-efficient methods for universal quantum computation. As a specific example, we propose fault-tolerant schemes based on surface codes concatenated with more general low-density parity check (LDPC) codes, suggesting an alternative path toward LDPC-based quantum computation. The logical blocks framework enables a convenient software-based mapping from an abstract description of the logical gate to a precise set of physical instructions for executing both circuit-based and fusion-based quantum computation (FBQC). Using this, we numerically simulate a surface-code-based universal gate set implemented with FBQC, and verify that the threshold for fault-tolerant gates is consistent with the bulk threshold for memory. We find, however, that boundaries, defects, and twists can significantly impact the logical error rate scaling, with periodic boundary conditions potentially halving resource requirements. 
Motivated by the favorable logical error rate suppression for boundaryless computation, we introduce a novel computational scheme based on the teleportation of twists that may offer further resource reductions.
\end{abstract}
\maketitle
\section{Introduction}
Quantum fault tolerance will form the basis of large-scale universal quantum computation. The surface code~\cite{kitaev2003fault, kitaev1997quantum,dennis2002topological} and related topological approaches~\cite{raussendorf2007fault, raussendorf2007topological, bolt2016foliated, nickerson2018measurement, brown2020universal} are among the most appealing methods for near term fault-tolerant quantum computing (FTQC), primarily due to their high thresholds and amenability to planar architectures with nearest-neighbor interactions. In recent years there have been numerous studies exploring the surface code memory threshold, which is the error rate below which encoded information can be protected arbitrarily well in the limit of large code size~\cite{dennis2002topological,wang2003confinement, stace2009thresholds, duclos2010fast, bombin2012strong, fowler2012surface, watson2014logical, bravyi2014efficient, darmawan2017tensor}. However, to understand the thresholds and overhead for universal fault-tolerant quantum \emph{computation}, it is necessary to study the behavior of fault-tolerant \emph{logical gates}. In topological codes these gates can be implemented using methods that draw inspiration from condensed matter, where encoded operations are achieved by manipulating topological features such as boundaries, defects, and twists~\cite{raussendorf2007topological, raussendorf2007fault, bombin2009quantum, bombin2010topological, landahl2011fault, horsman2012surface, fowler2012time, barkeshli2013classification, barkeshli2013twist, hastings2014reduced, yoshida2015topological, terhal2015quantum, yoder2017surface, brown2017poking, yoshida2017gapped, roberts2017symmetry, bombin2018transversal, bombin20182d, lavasani2018low, lavasani2019universal, webster2020fault, hanks2020effective, roberts20203, webster2020fault, zhu2021topological,chamberland2021universal,landahl2021logical}.
To date, there has been no systematic analysis quantifying the effect of topological features on the error threshold and below-threshold scaling (see Refs.~\cite{beverland2021cost, chamberland2021universal} for developments in this direction). It is known that the introduction of modified boundary conditions can have a significant impact on the error suppression of the code~\cite{fowler2013accurate,beverland2019role}, and therefore, as technology moves closer to implementing large-scale fault-tolerant quantum computations~\cite{doi:10.1126/science.abi8378,egan2021fault,ryan2021realization,postler2021demonstration}, it is critical to fully understand the behavior not only of a quantum memory, but also of a universal set of logical gates.
In this paper we comprehensively study universal logical gates for fault-tolerant schemes based on \textit{surface codes}. Our approach is centered around a framework for defining and analyzing logical instruments as three-dimensional (3D) objects called fault-tolerant logical instruments, allowing for a fully topological interpretation of logical gates. Focusing on fault-tolerant instruments directly---rather than starting with an error-correcting code and considering operations thereon---is beneficial for several reasons. Firstly, it provides a holistic approach to logical gate optimization, allowing us to explore options that are not particularly natural from a code-centric perspective. Secondly, it provides a way to define explicit logical instruments from physical instruments in a way that is applicable across different physical settings and models; in our case, we provide explicit instructions to compile these fault-tolerant gates to both circuit-based quantum computation (CBQC) based on planar arrays of static qubits and to fusion-based quantum computing (FBQC)~\cite{bartolucci2021fusion}. Thirdly, it enables a unified definition of the fault-distance of a protocol; while distance of a code is straightforwardly defined, logical failures in a protocol can occur in ways that cannot be associated with any single time-step of the protocol, (for instance, timelike chains of measurement errors in a topological code). The fault distance of a protocol provides a go-to proxy for fault tolerance, which avoids the computational overhead of full numerical simulations. Using this framework, we introduce several new approaches to topological quantum computation with surface codes (including both planar and toric), and numerically investigate their performance. An outline of the paper is displayed in Fig.~\ref{LogicBlocksOutline}.
\textbf{A framework to define fault-tolerant instruments.} Our first contribution is to define the framework of \textit{fault-tolerant logical instruments} to describe quantum computation based on stabilizer codes holistically as instruments in space-time rather than as operations on a specific code. This framework, defined in Sec.~\ref{secFTInstruments}, builds upon concepts first introduced in topological measurement-based quantum computation (MBQC)~\cite{raussendorf2007topological,raussendorf2007fault} and extended in related approaches~\cite{fujii2015quantum,brown2017poking,brown2020universal,hanks2020effective, bombin2021interleaving}. Within this framework, we introduce a surface-code-specific construction called a \textit{logical block template}, which allows one to explicitly specify (in a platform-independent way) a fault-tolerant surface code instrument in terms of (2{+}1)D space-time topological features. The ingredients of a logical block template---the \textit{topological features}---consist of boundaries (of which there are two types), corners, symmetry defects, and twists of the surface code, as defined in Sec.~\ref{secElements}. In Sec.~\ref{sec3DElements}, we define logical block templates and show how to compile them into physical instructions (for either CBQC or FBQC), with the resulting instrument being referred to as a \textit{logical block}. This framework highlights similarities between different approaches to fault-tolerant gates, for example between transversal gates, code deformations, and lattice surgeries, as well as between different models of quantum computation.
\textbf{Logic blocks for universal quantum computation.} Our first application of the fault-tolerant instrument framework is to define a universal gate set based on planar codes~\cite{bravyi1998quantum}. Some of these logical blocks offer reduced overhead compared to previous protocols. For example, we show how to perform a phase gate on the distance $d$ rotated planar code~\cite{nussinov2009symmetry,bombin2007optimal,beverland2019role}, using a space-time volume of $4d^3$. This implementation requires no distillation of Pauli-$Y$ eigenstates, and thus we expect it to perform better than conventional techniques. In addition to its reduced overhead, the phase gate we present can be implemented in a static 2D planar (square) lattice of qubits using only the standard 4-qubit stabilizer measurements, without needing higher-weight stabilizer measurements~\cite{bombin2010topological,litinski2019game} or modified code geometries~\cite{yoder2017surface,brown2020universal} that are typically required for braiding twists. These logical blocks can be composed together to produce fault-tolerant circuits, which we illustrate by proposing an avenue for fault-tolerant quantum computation based on concatenating surface codes with more general quantum low-density parity check (LDPC) codes~\cite{tillich2014quantum,gottesman2013fault,fawzi2018constant,fawzi2018efficient,breuckmann2021ldpc,hastings2021fiber,breuckmann2021balanced,panteleev2021asymptotically}. Such concatenated code schemes may offer the advantages of both the high thresholds of surface codes, with the reduced overheads of constant-rate LDPC codes---an attractive prospect for future generations of quantum computers.
\textbf{Fusion-based quantum computation---physical operations, decoding, and simulation.} Fusion-based quantum computation is a new paradigm of quantum computation, where the computation proceeds by preparing many copies of a constant-sized (i.e., independent of the algorithm size) entangled resource state, and performing entangling measurements between pairs (or more) resource states. This model is motivated by photonic architectures, where such resource states can be created with high fidelity, and then destructively measured using \textit{fusion} measurements~\cite{bartolucci2021fusion}.
In Sec.~\ref{sec:topological_fbqc}, we review FBQC and show how logical block templates can be compiled to physical FBQC instructions. In Sec.~\ref{secNumerics}, we introduce tools to decode and simulate such blocks, and numerically investigate the performance of a complete set of logical operations in FBQC (these operations are complete in that they are universal when supplemented with noisy magic states).
Firstly, we verify that the thresholds for these logical operations all agree with the bulk memory threshold. Secondly, we uncover the significant role that boundary conditions have in the resources required to achieve a target logical error rate. Namely, we see that qubits encoded with periodic boundary conditions offer more favorable logical error rate scaling with code size than for qubits defined with boundaries (as has been previously observed in Ref.~\cite{fowler2013accurate}). For instance, at half threshold, nontrivial logic gates can require up to $25\%$ larger distance (about 2 times larger volume) than that estimated for a memory with periodic boundary conditions in all three directions (i.e., lattice on a 3-torus). Our results demonstrate that entropic contributions to the logical error rate can be significant, and should be contemplated in gate design and in overhead estimates for fault-tolerant quantum algorithms.
\textbf{Logical instruments by teleporting twists.} Finally, motivated by the advantages in error suppression offered by periodic boundary conditions, we introduce a novel computational scheme in Sec.~\ref{secPortals}, where fault-tolerant gates are achieved by teleporting twists in time. In this scheme, qubits are encoded in twists of the surface code, and logical operations are performed using space-time defects known as portals. These portals require nonlocal operations to implement, and are naturally suited to, for instance, photonic fusion-based architectures~\cite{bartolucci2021fusion,bombin2021interleaving} for which we prescribe the physical operations required. To our knowledge, this is the first surface code scheme that does not require boundaries to achieve a universal set of gates and may offer even further resource reductions in the overhead of logical gates. This logic scheme is an important example of the power of the fault-tolerant instrument framework, as the operations are difficult to understand as sequences of operations on a 2D quantum code.
\begin{figure}
\caption{Outline of the paper.}
\label{LogicBlocksOutline}
\end{figure}
\section{Fault-tolerant instruments}\label{secFTInstruments}
In this section, we describe the notion of a stabilizer \emph{fault-tolerant logical instrument}, suitable for describing a wide class of logical operations. A fault-tolerant logical instrument takes some number $k_\text{in}$ of encoded quantum states as inputs (encoded in stabilizer codes~\cite{gottesman1997stabilizer}), performs an encoded operation, and outputs some number of encoded states $k_\text{out}$.
We use the term {\it logical port} to refer to a group of physical qubits that together represent logical input or output qubits of the instrument. Each port has a quantum error-correcting code with a fixed number of physical and logical qubits associated with it. In this way, fault-tolerant instruments can be composed with each other only through a pair of compatible input and output ports. Alternatively, {\it logical ports} can be seen as a specific collection of cuts that partition a complex logical quantum circuit into logical blocks, the elementary quantum instruments that are amenable to independent study and optimization. Thus, quantum error-correcting codes continue to play a crucial role in defining the modular interface structure through which to compose fault-tolerant algorithms from elementary logical blocks.
The main feature of a fault-tolerant instrument is the classical data it produces (as intermediate measurement outcomes). These classical data are used both to identify errors (by relying on measurement outcome redundancy in the form of checks) and to determine the Pauli frame required to interpret the logical mapping and logical measurement outcomes, as described below. Examples of fault-tolerant instruments that are included within this model are transversal gates, code deformations, gauge fixing, and lattice surgeries~\cite{bombin2006topological, raussendorf2007topological, raussendorf2007fault, bombin2009quantum, bombin2010topological, horsman2012surface, paetznick2013universal, hastings2014reduced}. Such operations are commonly understood in terms of a series of gates and measurements on a fixed set of physical qubits, a perspective originating from matter-based qubits. Nevertheless, several recent works have introduced new approaches to fault-tolerant memories beyond the setting of static codes~\cite{nickerson2018measurement, newman2020generating, hastings2021dynamically}. Here we generalize and extend these concepts to reach a holistic perspective, viewing logical operations in their own right rather than as operations on an underlying code.
In this section we describe properties of a fault-tolerant logical instrument, building upon the formalism introduced for the one-way measurement-based quantum computer by Raussendorf \textit{et al}.~\cite{raussendorf2007fault} and extended in Refs.~\cite{brown2020universal, bartolucci2021fusion}. In subsequent sections we specialize to logical instruments that are achieved by manipulating topological features of the surface code in (2{+}1)D space-time.
\begin{figure}\label{figAbstractChannel}
\end{figure}
\subsection{Quantum instrument networks}
Here we draw back the curtain and present the stage: a general framework for thinking about FTQC.
\subsubsection{Quantum instruments}
Quantum instruments describe the most general process in which the input is a quantum system and the output is a combination of quantum and classical systems. We refer to classical outputs as {\it outcomes}, the collection of which is labeled by $\mathcal{O}$. Quantum instruments can model {\bf any} of the (idealized) physical devices that take part in a quantum computation (state preparation, unitaries and measurement). A quantum instrument with a single-valued classical outcome may be used to represent quantum state preparations and quantum maps (also called channels).
A quantum instrument is specified as a collection \begin{equation}\label{eqnQuantumInstrument} \{\mathcal E_m\} \end{equation} of completely positive and trace nonincreasing linear maps, such that their sum is trace preserving. The maps are indexed by the outcome $m$: for an input state $\rho$, if the instrument's outcome is $m$, the unnormalized final state is $\mathcal{E}_m(\rho)$ and the probability for the outcome $m$ to occur is the trace of this state.
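As a concrete, purely illustrative instance of Eq.~(\ref{eqnQuantumInstrument}), the following Python sketch represents each map $\mathcal{E}_m$ by a single Kraus operator and recovers the outcome probabilities as traces; the measurement instrument and input state below are hypothetical choices, not constructions from this paper.

```python
import numpy as np

# Purely illustrative sketch: a quantum instrument as a dict
# {outcome m: Kraus operator K_m}, so E_m(rho) = K_m rho K_m^dagger
# and trace preservation means sum_m K_m^dagger K_m = I.
def apply_instrument(kraus, rho):
    """Return {outcome: (probability, normalized post-state)}."""
    result = {}
    for m, K in kraus.items():
        sigma = K @ rho @ K.conj().T          # unnormalized E_m(rho)
        p = float(np.real(np.trace(sigma)))   # Pr(outcome = m)
        result[m] = (p, sigma / p if p > 1e-12 else sigma)
    return result

# Hypothetical example: a Z-basis measurement instrument on |+><+|.
P0 = np.diag([1.0, 0.0]).astype(complex)   # projector onto |0>
P1 = np.diag([0.0, 1.0]).astype(complex)   # projector onto |1>
rho_plus = np.full((2, 2), 0.5, dtype=complex)

outcome_table = apply_instrument({0: P0, 1: P1}, rho_plus)
# Each outcome occurs with probability 1/2.
```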
\subsubsection{Networks}
A natural way to describe a fault-tolerant quantum computation is as a network (or circuit) of quantum instruments.
\begin{defn}
A {\it quantum instrument network} ({\bf QIN}) is a directed acyclic graph ({\bf DAG}) in which \begin{itemize} \item edges are interpreted as quantum systems, \item vertices are interpreted as quantum instruments: their quantum input (output) is the tensor product of incoming (outgoing) edges. \end{itemize}
\end{defn}
Recall that the vertices of a DAG can always be ordered so that all edges point towards the ``largest'' vertex (as per the ordering). Thus we can interpret a QIN as a process in which the quantum instruments are applied sequentially, each mapping a collection of subsystems to a new such collection. Since the specific ordering is immaterial, the DAG is enough to specify the process.
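The ordering property above can be made concrete with a short sketch (the vertex names are hypothetical; this assumes Python's standard-library \texttt{graphlib}):

```python
from graphlib import TopologicalSorter

# Illustrative sketch: a QIN as a DAG mapping each vertex (a quantum
# instrument) to its predecessors; edges are the quantum systems
# passed between instruments. Any topological order of the vertices
# is a valid sequential schedule for applying the instruments.
# Vertex names are hypothetical, not from the paper.
qin = {
    "prep_A": [],                     # state-preparation instruments
    "prep_B": [],
    "entangle": ["prep_A", "prep_B"], # two-qubit instrument
    "measure": ["entangle"],          # final measurement instrument
}
schedule = list(TopologicalSorter(qin).static_order())
# Preparations come first and 'measure' last; the precise interleaving
# of 'prep_A' and 'prep_B' is immaterial, as the text notes.
```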
In such a process, each vertex of the DAG contributes an outcome. Classical beings as we are, the ultimate object of interest is the classical distribution of outcomes. The probability of an outcome configuration can be computed as a tensor network contraction: the tensor network has the same topology as the DAG and, given some choice of basis for each edge, the tensor at a given vertex is obtained from the corresponding (outcome-dependent) linear map.
\subsection{Stabilizer fault tolerance}
In order to make headway in the analysis of fault-tolerant logical blocks, we focus on {\it stabilizer quantum instruments} $\{\mathcal{E}_m\}$ and stabilizer QINs. In terms of domains, this restricts the input and output Hilbert spaces of each instrument to tensor products of qubits and classical outcomes to bit strings. For each classical outcome $m$, the quantum channel $\mathcal{E}_m$ is in fact required to be a stabilizer operator stabilized by $\mathcal{S}_m$. We assume that all such operators share the same stabilizer group up to signs (i.e., $\langle -1, {\mathcal S}_m \rangle = \langle -1, {\mathcal S}_n \rangle$ for outcomes $m$, $n$). Moreover, there is a linear transform (over $\mathbbm{Z}_2$) that relates outcome bits with signs of stabilizer generators (see App.~\ref{secStabilizerOperatorsMapsAndInstruments} for a full definition of the notion of stabilizer instruments). Crucially, this property is preserved under composition (i.e., the quantum instruments resulting from composing elementary stabilizer quantum instruments in QINs will themselves be stabilizer quantum instruments).
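The $\mathbbm{Z}_2$-linear relation between outcome bits and stabilizer signs can be sketched as follows (the transform matrix is hypothetical, chosen only for illustration):

```python
import numpy as np

# Sketch of the Z_2-linear outcome-to-sign relation described above.
# Row i of A (an illustrative, hypothetical matrix) selects the
# outcome bits whose parity fixes the sign of stabilizer generator i:
# sign bit s_i = sum_j A[i, j] * m[j]  (mod 2), with 0 -> +1, 1 -> -1.
A = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)

def generator_signs(outcome_bits):
    m = np.asarray(outcome_bits, dtype=np.uint8)
    return (A @ m) % 2

signs = generator_signs([1, 0, 1])  # both generators pick up a -1 sign
```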
This set of stabilizer quantum instruments includes full or partial Pauli product measurements as well as unitaries from the Clifford group and encoding isometries for stabilizer codes. However, it does not include the full flexibility of {\it adaptivity}, i.e., conditionally applying distinct instruments depending on previously obtained outcomes. While adaptivity is crucial to allow for universal quantum computation at the logical level, it is not required to achieve a restricted form of fault tolerance limited to logical stabilizer operations.
\subsubsection{Pauli frame} The different signs for the resulting stabilizer group can be interpreted as being an outcome-dependent {\it Pauli frame correction}. Thus, similarly to quantum teleportation, in the absence of noise, a specific Pauli correction can be directly ($\mathbbm{Z}_2$ linearly) inferred from a parity combination of the classical outcomes associated with the stabilizer instrument.
\subsubsection{Check generators} Under noise-free operation, not all outcome combinations are possible for a fault-tolerant logical block. In stabilizer fault tolerance, the set of possible outcomes is characterized by linear constraints (considering outcomes as a vector space over $\mathbbm{Z}_2$). The generators for this set of constraints are called {\it check} generators. In the case of topological fault tolerance, check generators can be chosen to be geometrically local (i.e., involving only outcomes in a small neighborhood with respect to the QIN graph).
It is the presence of checks that allows fault-tolerant protocols to reliably extract logical outcomes from noisy physical outcomes. The presence of check violations, together with statistical understanding of the noise model, allows one to infer the correct way to interpret logical outcomes with an increasingly high degree of reliability.
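A minimal sketch of these parity constraints, with an illustrative check matrix:

```python
import numpy as np

# Sketch: check generators as Z_2 parity constraints on the outcome
# bits. Each row of the (illustrative) matrix H lists the outcomes
# entering one check; noise-free operation satisfies H m = 0 (mod 2),
# and nonzero entries of the syndrome flag violated checks.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=np.uint8)

def violated_checks(outcome_bits):
    syndrome = (H @ np.asarray(outcome_bits, dtype=np.uint8)) % 2
    return list(np.flatnonzero(syndrome))

# A single flipped outcome violates every check containing it:
flags = violated_checks([0, 1, 0, 0])  # checks 0 and 1 fire
```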
\subsection{Properties of fault-tolerant instruments \\ (aka logical blocks)}\label{secPropeprtiesGeneralFTChannel}
A fault-tolerant stabilizer instrument $\Phi$ is a stabilizer QIN on which the following specific structure has been identified. \begin{itemize}[leftmargin=*]
\item {\bf Check generators} are the basis of fault tolerance and jointly define a check group $\mathcal{C}$ (a $\mathbbm{Z}_2$-affine subspace).
The check operators are combinations of classical outcomes within the QIN that yield a predefined parity.
We choose a minimal set of low-weight, geometrically local {\it check generators} whenever possible.
\item External {\bf logical ports} are a partitioning of the {\it physical} input and output ports of the QIN into {\it logical} input and output ports.
Each logical port is thus a collection of physical ports in the QIN that have not been attached within it.
\item The outputs and inputs of any given port are related by {\bf port stabilizers} whose signs generally depend on the values of outcomes within the QIN.
Port stabilizers jointly give the port the structure of a quantum error-correcting code (up to an outcome-dependent Pauli frame).
Each port stabilizer is a Pauli stabilizer on the corresponding port qubits together with a collection of QIN outcomes that jointly determine its sign.
\item {\bf Logical correlators} $\mathcal{S}$ in fault-tolerant stabilizer instruments are encoded through logical stabilizer operators in one (or more) ports together with an outcome mask $m$.
Drawing from topological codes, the outcome masks will be commonly referred to as {\it logical membranes} as this is the shape these have in the topological protocols we focus on.
The difference between logical correlators and port stabilizers is not mathematical, but rather one of intent.
In fact, logical correlators are defined up to multiplication by check generators and/or port stabilizers (a form of gauge freedom). We denote by $\mathcal{S}(U)$ the stabilizer group of the channel $U$. See App.~\ref{secStabilizerOperatorsMapsAndInstruments} for more details. \end{itemize} An abstract depiction of these properties is shown in Fig.~\ref{figAbstractChannel}. The concepts will be further clarified in the context of surface codes in Secs.~\ref{sec3DElements}~and~\ref{secLogicBlocks}.
\begin{figure*}
\caption{
Schematic representation of ports and logical correlators (membranes) for surface code computations.
The internal structure for the fault-tolerant quantum instrument is omitted, leaving only the ports depicted, with the input [output] being on the left [right] of each instrument.
(left) Correlator representation of the logical Hadamard.
The first (second) tensor factor corresponds to the input (output) of the instrument.
A fault-tolerant instrument realizing this operation has logical membranes for each stabilizer in $\mathcal{S}(\overline{H})$, which can be understood as mapping logical observables between input and output ports.
(right) Fault-tolerant instruments can be composed along a pair of input and output ports that share a common code, leading to new composite logical correlators.}
\label{figLogicalBlockConcatenation}
\end{figure*}
In CBQC, we can understand logical correlators and membranes as follows. Logical operators at an input port are transformed by every constituent instrument of the QIN. These transformations can be tracked stepwise, following the temporal order of application in the circuit model. At each intermediate step, the logical operator admits an instantaneous representation. This {\it instantaneous representation} is supported on {\it intermediate qubits}, which correspond to contracted quantum inputs and outputs of the constituent quantum instruments. In the case of a noiseless CBQC fault tolerantly representing unitary instruments, the logical correlators need not become correlated with any of the classical outcomes $\mathcal{O}$ of the QIN. Elements of $\mathcal{S}$ correspond to combinations of logical operators on input and output ports. However, this lack of correlation does not persist in other models of computation such as MBQC and FBQC; even the logical correlators for CBQC must be sign corrected in the presence of noise. Noisy operation requires identifying and tracking the most likely error class consistent with visible outcomes, with such errors possibly changing the sign of the logical correlator. This sign correction is referred to as (logical) Pauli frame tracking, as it amounts to applying a logical Pauli operator to the quantum ports of the fault-tolerant instrument.
In certain situations, there may be elements of $\mathcal{S}$ corresponding to state preparation isometries, which are supported exclusively on output ports. Similarly, for measurements and partial projections, there will be elements of $\mathcal{S}$ supported exclusively on input ports. In this latter case, in order for the corresponding fault-tolerant instruments to be trace preserving, the logical stabilizers must be correlated with the classical outcomes of the QIN. An example of this is given by the $X$ or $Z$ measurement instruments in Fig.~\ref{figAllLogicalBlocks} below, wherein the logical block has no quantum output and is closed off by a layer of single-qubit measurements revealing the value of a corresponding logical operator.
In general, outcome masks $m\subset \mathcal{O}$ are required to determine the Pauli frame correction; the product of their outcomes determines the sign of the logical correlator, and hence the Pauli correction operator that is additionally applied. Check generators in $\mathcal{C}$ as well as local port stabilizers, can be understood as trivial logical membranes (i.e., equivalent to logical identity), and thus, logical membranes form equivalence classes under multiplication by checks. In this way, \emph{fault tolerance} can be observed at the level of the logical membranes: a fault-tolerant instrument should have many equivalent logical membranes for a given logical correlator. Whereas in the absence of noise all these membranes lead to a consistent Pauli frame correction, in the presence of noise, it is the job of the decoder algorithm to identify the most likely fault equivalence class and corresponding correction.
\textbf{Logical faults}. For the instrument to be fault tolerant, it is essential that a small number of elementary faults retain the intended noiseless logical interpretation. The choice of what to interpret as elementary faults is model specific and should be motivated by the physics of the device(s) being described. It is common to choose an arbitrary single-qubit Pauli error on any of the underlying physical input or output qubits of the constituent instruments of the QIN. Elementary faults generally also include outcome or measurement errors, which have a purely classical interpretation as an outcome being flipped. The \textit{weight} of a fault combination is defined as the number of elementary faults that compose it. A fault combination is called \textit{undetectable} if it leaves all checks invariant. An undetectable fault combination is a \textit{logical fault} if it leads to an incorrect logical Pauli frame (i.e., it flips the sign of a logical correlator with respect to that of a noiseless scenario). The \textit{fault distance} of a quantum protocol is defined as the smallest number of \textit{elementary faults} that combine to form a logical fault.
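For small instances, the fault distance can be found by brute force directly from these definitions. In the sketch below (our own illustration, not the paper's decoder), fault combinations are $\mathbbm{Z}_2$ vectors; $H$ records which checks each elementary fault flips, and $L$ records which logical correlators it flips:

```python
import itertools
import numpy as np

# Brute-force sketch (illustrative matrices, exponential scaling):
# columns index elementary faults; H maps a fault combination v to
# the checks it flips, L to the logical correlators it flips, both
# over Z_2. The fault distance is the minimum weight of v with
# H v = 0 (undetectable) and L v != 0 (a logical fault).
def fault_distance(H, L):
    n = H.shape[1]
    for w in range(1, n + 1):
        for support in itertools.combinations(range(n), w):
            v = np.zeros(n, dtype=np.uint8)
            v[list(support)] = 1
            if not ((H @ v) % 2).any() and ((L @ v) % 2).any():
                return w
    return None  # no undetectable logical fault exists

# Repetition-code-like toy example: checks compare neighboring
# faults; the logical correlator is flipped by the total parity.
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)
L = np.array([[1, 1, 1]], dtype=np.uint8)
d = fault_distance(H, L)  # all three faults must combine, so d = 3
```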
In practice, to make progress, we focus on the fault distance of individual logical blocks, wherein local port stabilizers are assumed to be perfectly measured. This approach is more pragmatic than focusing on quantum circuits, which are composed of a number of {\it logical blocks} as large as demanded by their intended function rather than by fault-tolerance considerations. The approach we use to isolate individual logical blocks can be interpreted as attaching an idealized decoding partial projection at output ports and an ideal encoding isometry at input ports and, as such, is slightly optimistic. Further discussion on the port boundary conditions and block decoding simulation can be found in App.~\ref{appPortBoundaryConditionsAndDecodingSims}.
\textbf{Example: quantum memory}. We consider a simple circuit-based example of a fault-tolerant instrument performing a logical identity operation (or any single-qubit Pauli operation for that matter) on a surface-code-encoded qubit. We follow the usual circuit model assumption wherein the set of data qubits is fixed throughout the computation. Thus, the only two logical ports ${\mathcal{P}} = \{\text{in}, \text{out}\}$ can be seen as using a common set of labels for the physical qubits. The quantum instrument network is composed of elementary circuit elements repeatedly performing {\it stabilizer measurements}. For the circuit model these measurements are typically described in terms of (i) auxiliary qubit initialization, (ii) two-qubit entangling gates such as controlled-NOT ({\texttt{CX}}) and controlled-phase ({\texttt{CZ}}) gates that map stabilizer information onto auxiliary qubits and (iii) single-qubit measurement applied to auxiliary qubits. The auxiliary qubits are {\it recycled} between measurement and initialization, allowing a local realization through geometrically bound quantum information carriers such as superconducting qubits~\cite{fowler2012surface}. Of these, only (iii) yields a classical outcome, and the circuits are designed such that they yield the measurement value for a code stabilizer generator. These measurements are repeatedly performed throughout the code for a certain number $T$ of {\it code cycles}.
We identify a check generator of $\mathcal{C}$ for every pair of consecutive measurements of the same stabilizer. In practice, the {\it consecutive} qualifier is important as it leads to relatively compact check generators that allow directly revealing information on low-weight faults; the underlying assumption is that the outcome of these check generators can only be affected (i.e., flipped) by a small set of fault generators between the two consecutive stabilizer measurements. For each stabilizer generator of the underlying code, the first (final) measurement round for it leads to a corresponding localized input (output) port stabilizer with sign correlated to the measurement outcome. Finally, as the logical operators of the code are preserved by the stabilizer measurements, the logical operators can be seen as propagating as unperturbed logical membranes $M^{\overline{X},\overline{X}}$, $M^{\overline{Z},\overline{Z}}$ through the (2{+}1)D structure of the QIN from input to output. In the usual circuit model, the logical Pauli frame---in the absence of errors---is trivial (i.e., the identity), as logical operators map deterministically from input to output. This can change when performing logic gates through code deformation, for example. An additional example is shown in Fig.~\ref{figLogicalBlockConcatenation} for a block implementing a logical Hadamard on a surface code.
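The consecutive-round checks in this memory example can be sketched in a few lines (the number of code cycles and error location are illustrative):

```python
import numpy as np

# Sketch of the timelike checks in the memory example: one stabilizer
# generator is measured once per code cycle for T cycles, and every
# pair of consecutive outcomes forms a check. A single measurement
# error flips exactly the two checks adjacent to it.
T = 8
outcomes = np.zeros(T, dtype=np.uint8)  # noise-free: constant outcome
outcomes[3] ^= 1                        # measurement error at cycle 3

checks = outcomes[:-1] ^ outcomes[1:]   # consecutive-cycle parities
violated = list(np.flatnonzero(checks)) # checks 2 and 3 fire
```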
\section{Elements of topological computation}\label{secElements}
We are mostly interested in constructing \textit{topological} fault-tolerant instruments; that is, instruments whose input and output ports are encoded using topological codes, and for which a useful subset of logical correlations can be generated by manipulating topological \emph{features} such as boundaries, domain walls, and twists. This section is devoted to a detailed description of these features.
We now introduce the ingredients that make up a topological fault-tolerant instrument network. In topological quantum computation, fault tolerance is achieved by creating a fault-tolerant \emph{bulk}, generally with a periodic repeating structure that contains parity checks (stabilizers) to enable error correction. An example of this would be the repeated measurement of stabilizer operators on a toric code, or a three-dimensional fusion network in a FBQC setting. The resulting bulk allows logically encoded quantum information to be stored. However, in order to use this system to perform a nontrivial quantum \emph{computation}, the homogeneous nature of the bulk must be broken. One way of achieving this is to introduce \emph{topological features} that can be manipulated to perform logical gates. The simplest example of a topological feature is a boundary, which terminates the bulk in a certain location~\cite{kitaev2006anyons,preskill1999lecture, raussendorf2007topological, raussendorf2007fault}. In surface codes one can also introduce further topological features known as domain walls and twists~\cite{bombin2009quantum}. By introducing these features in an appropriate configuration, logical information can not only be stored, but also manipulated to perform all Clifford operations~\cite{bombin2010topological, barkeshli2013twist, levin2013protected, liu2017quon, yoder2017surface, brown2017poking, barkeshli2019symmetry}.
In this section, we introduce the surface code and its topological features, beginning with explicit examples in two-dimensions. We then interpret these features and their relationships in (2{+}1)D space-time. In particular we describe the symmetries of the code and how they relate to the behavior of anyonic excitations of the code---a tool we use throughout this paper to define and describe the behavior of topological features. These features form the anatomy of a general topological fault-tolerant logical operation, which we explore in the following section.
\subsection{The surface code, anyons, and their symmetries} \label{secSurfaceCode}
The simplest example of the surface code~\cite{kitaev2003fault,wen2003quantum} with no topological features consists of qubits positioned on the vertices of a square lattice with periodic boundary conditions. (Note that there are many variations of the surface code, originally introduced by Kitaev~\cite{kitaev2003fault}. We utilize the symmetric, \textit{rotated} version due to Wen~\cite{wen2003quantum}, because of its better encoding rate~\cite{bombin2006topologicalencoding,bombin2007optimal,tomita2014low}, and comment on the relationship between the two in App.~\ref{appKitaevToWen}.)
\textbf{Stabilizers and logical operators.} The surface code is a stabilizer code, and for each plaquette (face) of the lattice, there is a stabilizer generator $s_{(i,j)} = Z_{(i,j)} X_{(i+1,j)} Z_{(i+1,j+1)} X_{(i,j+1)}$, where $(i,j)$ labels the vertices of the lattice, as described in Fig.~\ref{figToricCode}. We bicolor the faces of the lattice in a checkerboard pattern, as depicted in Fig.~\ref{figToricCode}, and call stabilizers on blue (red) plaquettes primal (dual) stabilizers. This primal and dual coloring is simply a gauge choice. The logical Pauli operators of the encoded qubits are associated with noncontractible cycles on the lattice, and so the number of logical qubits encoded in a surface code depends on the boundary conditions. For example, the surface code on a torus encodes two logical qubits. The surface code can also be defined for many different lattice geometries by associating qubits with the edges of an arbitrary 2D cell complex~\cite{kitaev2003fault}, but for simplicity, we restrict our discussion to the square lattice. The descriptions of topological features that follow apply to arbitrary lattice geometries, provided one first finds the corresponding symmetry representation, which, for a general surface code, is given by a constant-depth circuit.
\textbf{Errors.} When low-weight Pauli errors act on the toric code, they anticommute with some subset of the stabilizers, such that if measured, these stabilizers would produce ``${-}1$'' outcomes. The measurement outcomes for stabilizers form the classical \emph{syndrome} that can then be decoded to identify a suitable correction. Stabilizers with an associated ${-}1$ outcome are said to be flipped and can be viewed as ``excitations'' of the codespace that behave as anyonic quasiparticles. The behavior of these anyons has been widely studied~\cite{kitaev2006anyons,preskill1999lecture,barkeshli2019symmetry}; for our purposes, they are a useful tool for characterizing the behavior of topological features and how these features affect encoded logical information. As illustrated in Fig.~\ref{figToricCode}, anyons can be thought of as residing on any plaquette of the lattice and are created at the endpoints of open strings of Pauli operators. We refer to anyons residing on primal (blue) plaquettes as ``primal'' anyons, and those on dual (red) plaquettes as ``dual'' anyons\footnote{The two types of anyons of the surface code are also often referred to as $e$-type and $m$-type, or alternatively $X$-type and $Z$-type.}. Primal and dual anyons are topologically distinct, in that (in the absence of topological features) there is no local operation that can change one into the other. We refer to the string operators that create primal (dual) anyons as dual (primal) string operators. We can understand primal (dual) stabilizers as being given by primal (dual) string operators supported on closed, topologically trivial loops. Similarly, Pauli-$X$ and Pauli-$Z$ logical operators consist of primal and dual string operators (respectively) supported on nontrivial loops of the lattice.
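To make the stabilizer and syndrome structure concrete, the following sketch (our own illustration; the lattice size and error chain are arbitrary choices) represents Pauli operators in symplectic form, verifies that the $s_{(i,j)}$ generators mutually commute on a torus, and shows that a diagonal chain of $X$ errors flips only the two stabilizers at its endpoints, creating a pair of same-type anyons there.

```python
from itertools import product

L = 6  # linear size of the periodic (toric) lattice; an illustrative choice

def stabilizer(i, j):
    """Symplectic support of s_(i,j) = Z_(i,j) X_(i+1,j) Z_(i+1,j+1) X_(i,j+1);
    each qubit is mapped to an (x, z) bit pair."""
    w = lambda a, b: (a % L, b % L)
    return {w(i, j): (0, 1), w(i + 1, j): (1, 0),
            w(i + 1, j + 1): (0, 1), w(i, j + 1): (1, 0)}

def anticommutes(p, q):
    """Symplectic inner product: 1 iff the two Pauli operators anticommute."""
    return sum(p[u][0] * q[u][1] + p[u][1] * q[u][0]
               for u in set(p) & set(q)) % 2

stabs = {v: stabilizer(*v) for v in product(range(L), repeat=2)}

# All stabilizer generators mutually commute.
assert not any(anticommutes(p, q) for p in stabs.values() for q in stabs.values())

# A diagonal chain of X errors at (1,1), (2,2), (3,3) flips only the two
# stabilizers at its endpoints -- a pair of same-type anyons.
error = {(k, k): (1, 0) for k in range(1, 4)}
syndrome = {v for v in stabs if anticommutes(error, stabs[v])}
assert syndrome == {(0, 0), (3, 3)}
```

Note that the two flipped plaquettes $(0,0)$ and $(3,3)$ have the same checkerboard color, consistent with an error string creating a pair of anyons of a single type.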
\textbf{Symmetries.} Before defining features of the surface code, we first examine its symmetries. The symmetries of the surface code allow us to explicitly construct topological features, as well as to discuss the relationships between them. We refer to any locality-preserving unitary operation (a unitary that maps local operators to local operators) that leaves the bulk codespace invariant as a \emph{symmetry} of the code. Transversal logical gates acting between one or more copies of a code are examples of symmetries. A symmetry can also be understood as an operation that, when applied to the code(s), generates a permutation of the anyon labels that leaves the behavior of the anyons (i.e., their braiding and fusion rules) unchanged (see, e.g., Refs.~\cite{beverland2016protected, yoshida2017gapped}). For a single surface code, there is only one nontrivial symmetry generator, which, for the surface code in Fig.~\ref{figToricCode}, is realized by shifting the checkerboard pattern by one unit in either the horizontal or vertical direction. This symmetry permutes the primal and dual anyons, as shown in Fig.~\ref{figToricCode} (bottom left), and we refer to it as the \emph{primal-dual symmetry} or $\mathbbm{Z}_2$ (translation) symmetry.
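As a toy bookkeeping check (our own illustration, not taken from the text), one can verify that the unit translation permutes the plaquettes of a periodic lattice while exchanging the two checkerboard colors, which is exactly the primal-dual anyon permutation:

```python
L = 4  # even linear size so the checkerboard is consistent on the torus

def color(i, j):
    """Checkerboard coloring of plaquettes: 0 = primal (blue), 1 = dual (red)."""
    return (i + j) % 2

plaquettes = [(i, j) for i in range(L) for j in range(L)]
shift = lambda i, j: ((i + 1) % L, j)  # translate by one unit horizontally

# The translation permutes the set of plaquettes among themselves...
assert sorted(shift(*v) for v in plaquettes) == plaquettes
# ...while exchanging every primal label with a dual one (and vice versa).
assert all(color(*shift(*v)) == 1 - color(*v) for v in plaquettes)
```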
\begin{figure*}
\caption{
(left) Topological features in 2D and their relationship to the $\mathbbm{Z}_2$ primal-dual symmetry transformation. Primal (dual) anyonic excitations are created on primal (dual) plaquettes at the end of open dual (primal) string operators. The symmetry swaps primal and dual stabilizer checks and anyonic excitations. Primal boundaries absorb dual-type excitations, while dual boundaries absorb primal-type excitations. Primal and dual anyons are swapped upon crossing a domain wall. Corners and transparent corners appear on the interface between primal and dual boundaries in the absence and presence of domain walls, respectively. Twists appear on the boundary of domain walls (with the corresponding stabilizer shown).
(right) The (2{+}1) depiction of topological features and their relationships.
Twists and corners are two different manifestations of the same object.
Their location is physical and has observable consequences.
In contrast, transparent domain walls and corners can be relocated and simply correspond to a book-keeping gauge choice.
}
\label{figToricCode}
\end{figure*}
\subsection{Topological features in the surface code}\label{sec:topological_features_surface_code}
We now present three types of topological features that can be introduced in the surface code and how they relate to the primal-dual symmetry. The features are termed boundaries, domain walls and twist defects, and each breaks the translational invariance of the bulk in a different way. Explicit static examples are provided for the square lattice surface code in Fig.~\ref{figToricCode}. As the topological features in a code are dynamically changed over time, they trace out world lines and world sheets in space-time and are naturally interpreted as (2{+}1)D topological objects. We provide a schematic representation of these features in Fig.~\ref{figToricCode} and give a more precise meaning of such (2{+}1)D features in Sec.~\ref{sec3DElements}. We focus on the codimension of a feature (as opposed to its dimension), as it applies to both the 2D code and (2{+}1)D space-time instrument network. Here a codimension-$k$ feature is a ($2{-}k$)D object in a 2D code, or a ($3{-}k$)D object in a space-time instrument network.
\textbf{Primal and dual boundaries.} Boundaries of the surface code are codimension-1 objects that arise when the code is defined on a lattice with a boundary~\cite{bravyi1998quantum} (i.e., they form 1D boundaries in a 2D code, or 2D world sheets in a (2+1)D quantum instrument network). Specific anyonic excitations can be locally created or destroyed at a boundary~\cite{kitaev2012models, levin2012braiding, levin2013protected, barkeshli2013classification, lan2015gapped} (a process referred to as anyon condensation in the physics literature). In terms of the code, this corresponds to error chains that can terminate on a boundary without flipping any boundary stabilizers, as shown in Fig.~\ref{figToricCode}. There are two boundary types: primal boundaries condense dual anyons (i.e., they terminate primal error strings), while dual boundaries condense primal anyons (i.e., they terminate dual error strings). We can encode logical qubits using configurations of primal and/or dual boundaries; logical operators can be formed by string operators spanning between distinct boundaries. Note that in the 2D surface code these boundaries are often referred to as ``rough'' and ``smooth'' boundaries~\cite{bravyi1998quantum}. Examples of these two boundary types are shown in Fig.~\ref{figToricCode}.
\textbf{(Transparent) Domain walls.} A domain wall is a codimension-1 feature formed as the boundary between two bulk regions of the code, where the symmetry transformation has been applied to one of the regions\footnote{Note that generally, the term ``domain wall'' refers to any boundary between two topological phases. Here, we specifically use it to refer to the primal-dual swapping boundary.}, as shown in Fig.~\ref{figToricCode}. When an anyon crosses a domain wall it changes from primal to dual, or vice versa (which follows from the fact that the symmetry implements the anyon permutation). Microscopically, this transformation follows from the interpretation of the checks straddling the domain wall---they are primal on one side and dual on the other (which we can view as a change of gauge). Thus, the corresponding string operators must transform from primal to dual (and vice versa) in order to commute with the stabilizers in the neighborhood of the domain wall. In (2+1)D space-time, domain walls form world sheets that can be used to exchange $\overline{X}$ and $\overline{Z}$ logical operators, as well as primal and dual anyons upon crossing. We note that such domain walls are also referred to as ``transparent boundaries'' in the literature~\cite{kitaev2012models, lan2015gapped}.
\textbf{Corners.} In the absence of any transparent domain walls, the codimension-2 region where primal and dual boundaries meet is called a corner [i.e., it is a 0-dimensional point in the 2D code, or a 1D line in the (2+1)D instrument]. Corners can condense arbitrary anyons, as they straddle a primal boundary on one side and a dual boundary on the other, and they can be used to encode quantum information. For example, using surface codes with alternating segments of primal and dual boundaries, one can encode $n$ logical qubits within $2n{+}2$ corners~\cite{bravyi1998quantum,freedman2001projective}. In the (2+1)D context we also refer to corners as cornerlines, and their manipulation (e.g., braiding) can lead to encoded gates. They can be understood as twist defects (defined below) that have been moved into a boundary.
\textbf{Twists.} A twist is a codimension-2 object that arises when a transparent domain wall terminates in the bulk~\cite{bombin2010topological,barkeshli2013twist}. Similarly to corners, twist defects are topologically nontrivial objects and can carry anyonic charge. In particular, the composite primal-dual anyon can locally condense on a twist, and one can use the charge of a twist to encode quantum information. Indeed, twists and corners can be thought of as two variations of the same topological object; namely, a twist can be thought of as a corner that has been moved into the bulk, leaving behind a transparent corner (defined below), as is readily identified in the space-time picture of Fig.~\ref{figToricCode}. As with corners, $2n{+}2$ twists can be used to encode $n$ logical qubits.
\textbf{Transparent corners.} In the presence of domain walls, primal and dual boundaries may meet in another way. We call the codimension-2 region at which a primal boundary, a dual boundary, and a domain wall meet a transparent corner. Unlike the previous corners, transparent corners carry no topological charge, cannot be used to encode logical information, and should be thought of as the region at which a primal and dual boundary are locally relabeled (i.e., a change of gauge).
For the purposes of defining logical block templates in the following section, we refer to the codimension-1 features (boundaries and domain walls) as \textit{fundamental features}, and the codimension-2 features (twists, cornerlines, and transparent cornerlines) as \textit{derived features}: the locations of derived features are uniquely determined by the locations of fundamental features. We remark that this terminology is a matter of convention and not a statement about the importance of a feature---indeed, most encodings and logical gates can be understood from the perspective of twists and corners alone. These are all the possible features of one copy of the surface code. As we consider more copies of the surface code, the symmetry group becomes richer, and correspondingly there is a much larger set of symmetry defects (domain walls and twists) that can be created between the codes\footnote{For example, the 2D color code (which is locally equivalent to two copies of the surface code~\cite{bombin2012universal}) has a symmetry group containing 72 elements~\cite{yoshida2015topological,scruby2020hierarchy}, compared to the $\mathbbm{Z}_2$ symmetry of a single surface code.}. Sec.~\ref{secPortals}, for example, introduces a particularly interesting defect known as a \emph{portal} that arises when we create defects between copies of the same code.
\section{Fault-tolerant instruments for the surface code in (2{+}1)D}\label{sec3DElements}
Having defined the topological features of the surface code, we now introduce a framework for fault-tolerant logical instruments that are achieved by manipulating these topological features in (2{+}1)D space-time. The central object we define is called a \textit{logical block template}, which is a platform-independent set of instructions for (2{+}1)D surface code fault-tolerant instruments. The template provides an explicit description of the space-time location of topological features (boundaries, domain walls, twists, cornerlines, and transparent cornerlines), allowing for flexibility in the design of logical operations. Templates provide a direct way of identifying checks and logical membranes of the logical instrument, as well as a method of verifying that it is fault tolerant. They can be directly compiled to physical instructions of a QIN, prescribing the qubits, ports and instruments to implement a fault-tolerant instrument in a physical architecture for CBQC and FBQC.
\subsection{Logical block templates: diagrammatic abstraction for (2{+}1)D topological computation}\label{secLogicalTemplate}
\begin{figure*}
\caption{(left) The surface code consists of stabilizer check operators supported on primal [blue] and dual [red] plaquettes. Stabilizers on the primal (dual) boundary consist of truncated primal (dual) stabilizer operators. Logical Pauli operators for the surface code consist of operators supported on nontrivial cycles of the underlying surface, or between distinct boundaries.
(right) Representation of a topological instrument network; if implemented in CBQC, each slice of the network describes stabilizer measurement instructions to perform on a plane of qubits. If implemented in FBQC, the network describes globally what fusions and measurements one should perform between and on resource states.
}
\label{figTemplateInterpretation}
\end{figure*}
Each logical block template is given by a 3D cubical cell complex $\mathcal{L}$ and an accompanying set of cell labels. Here, by cubical cell complex, we mean that $\mathcal{L}$ is a cubic lattice consisting of sets of vertices $\mathcal{L}_0$, edges $\mathcal{L}_1$, faces $\mathcal{L}_2$, and volumes $\mathcal{L}_3$ with appropriate incidence relations\footnote{This cell complex is distinct from the cell complex commonly used in the context of fault-tolerant topological MBQC~\cite{raussendorf2007fault,raussendorf2007topological, nickerson2018measurement} in which the checks correspond to 3-cells and 0-cells of the complex.}. The labels determine regions of the cell complex that support a primal boundary, dual boundary, domain wall, or a port. We denote the set of features by \begin{align} \mathcal{F} = \{& \text{PrimalBoundary}, \text{DualBoundary}, \text{DomainWall}, \nonumber \\ & \text{Port}_i, \emptyset \}, \end{align} where $i$ indexes the distinct ports of the instrument network (e.g., $1, 2, 3$ for the network depicted in Fig.~\ref{figAbstractChannel}). The label $\emptyset$ is used as a convenience to denote the absence of any feature.
We now formally define a logical block template as follows. \begin{defn}
A logical block template is a pair $(\mathcal{L}, F)$ where $\mathcal{L}$ is a cubical cell complex and $F: \mathcal{L}_2 \rightarrow \mathcal{F}$ is a labeling of the 2-cells $\mathcal{L}_2.$ \end{defn}
Note that a logical block template specifies only the locations of the ports and the \textit{fundamental}, codimension-1 features (the boundaries and domain walls). The \textit{derived}, codimension-2 features (twists, cornerlines, and transparent cornerlines) can all be inferred from these. Recall that twists reside at the boundary of a domain wall in the absence of any boundaries, while cornerlines (transparent cornerlines) reside on the interface between primal and dual boundaries in the absence (presence) of a domain wall.
In order to simplify the construction of derived features, it is convenient to decompose the label for topological features into two indicator functions $B$ and $T$ on $\mathcal{L}_2$. The first one, $B$, indicates whether the 2-cell is a boundary or not (this may simply be derived as $\partial \mathcal{L}_3$). The second, $T$, identifies transparent domain walls as well as distinguishing dual boundaries ($B \wedge T$) from primal boundaries ($B \wedge \neg T$). The union of {\it twist defects} and {\it cornerlines} can then be identified as $\partial T$, of which only $(\partial T) \cap B$ are identified as {\it cornerlines}. Finally, elements of $\partial (T \cap B)$ that are not in $\partial T$ are considered {\it transparent cornerlines}. In this way, cornerlines, transparent cornerlines, and twists can be described as labels on 1-cells of the template and one can extend the domain of $F$ to include 1-cells accordingly (see App.~\ref{secLogicalBlockTemplateExtension} for the explicit definition of the extension).
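The set-theoretic derivation above can be prototyped directly. The following sketch is our own toy example (a periodic $4{\times}4$ grid of 2-cells standing in for the template's 2-skeleton, with arbitrary illustrative choices of $B$ and $T$, and edges named by an `h'/`v' convention of ours); it computes the mod-2 boundary $\partial$ by odd incidence and extracts twists, cornerlines, and transparent cornerlines exactly as defined:

```python
from itertools import product

L = 4  # periodic L x L grid of 2-cells; a toy stand-in for the 2-skeleton

def edges(face):
    """Boundary edges of the square 2-cell (i, j) on the periodic grid."""
    i, j = face
    return {('h', i, j), ('h', i, (j + 1) % L),
            ('v', i, j), ('v', (i + 1) % L, j)}

def boundary(faces):
    """Mod-2 boundary: edges incident to an odd number of the given 2-cells."""
    count = {}
    for f in faces:
        for e in edges(f):
            count[e] = count.get(e, 0) + 1
    return {e for e, c in count.items() if c % 2 == 1}

faces = set(product(range(L), repeat=2))
B = {f for f in faces if f[1] == 0}        # indicator B: "boundary" 2-cells
T = {f for f in faces if f[0] in (0, 1)}   # indicator T: domain wall / dual boundary
B_edges = set().union(*(edges(f) for f in B))

twists_and_corners = boundary(T)                    # boundary of T
cornerlines = twists_and_corners & B_edges          # (boundary of T) meeting B
twists = twists_and_corners - B_edges               # the rest terminate in the bulk
transparent = boundary(T & B) - twists_and_corners  # boundary of (T and B), minus boundary of T
```

In this toy configuration the boundary of $T$ is a pair of loops, of which only the two edges lying on $B$ are cornerlines; the remaining six edges play the role of twists.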
\textbf{Remarks.} The logical block templates do not make any explicit reference to a causal or temporal direction. As such, they provide a natural starting point to describe pictures of topological fault tolerance that do not explicitly present a local temporal ordering such as FBQC and MBQC. In order to ascribe a circuit model interpretation, it is necessary to extend the template with a causal (i.e., temporal) order compatible with the input and output status of logical ports. Contrary to the static 2D case, boundaries, domain walls, twists, cornerlines and transparent cornerlines may all exist along planes normal to the time direction. The physical operations generating such features will be explained in the following sections.
Finally, we remark that the cubical complex is natural for blocks based on the square lattice surface code (in CBQC) and the six-ring fusion network (in FBQC). For other surface code geometries or fusion networks, one can generalize the logical block template to other cell complexes. In this case, it remains important that the 3-cells in the bulk be 2-colorable, as they continue to represent primal and dual checks; locations where the 3-cells are not locally 2-colorable correspond to twist defects (and cornerlines, if one considers the exterior as a color). Twist defects in $\mathcal{L}_1$ are associated with an odd number of incident faces from $\mathcal{L}_2$.
\subsection{Fault-tolerant instruments from logical block templates}
Logical templates define fault-tolerant logical instruments without reference to the computational model, but can be directly compiled into physical instructions for different models of computation. We now explain how the different features of the template correspond to measurement instructions, checks and logical membranes.
\textbf{Compiling templates to physical instructions.} Logical block templates can be directly compiled into a network of quantum instruments realizing surface-code-style fault tolerance for CBQC, and FBQC. We provide an overview of this mapping here, leaving the explicit mapping from templates to CBQC instructions in App.~\ref{secTemplateToCBQC}, and to FBQC instructions in Sec.~\ref{sec:topological_fbqc}. One can also obtain MBQC instructions on cluster states using the framework of Ref.~\cite{brown2020universal} applied to the CBQC instructions.
To compile a template into physical instructions for CBQC we must choose a coordinate direction as the temporal direction or otherwise equip the template with a causal structure. From this, each time slice defines a 2D subcomplex of the template, each vertex of which corresponds to a qubit, and each bulk 2-cell of which corresponds to a bulk surface code stabilizer measurement. The feature labels on the subcomplex result in modifications to the measurement pattern, as per Fig.~\ref{figToricCode} and as shown in the example in Fig.~\ref{figTemplateInterpretation}.
In FBQC, the symmetry between space and time is maintained, and measurements may be performed in any order. The flavor of FBQC presented in Ref.~\cite{bartolucci2021fusion}, which uses 6-qubit ring graph states as resource states, is naturally adapted to the logical block template. At each vertex of the template we place a resource state, while each edge of the template corresponds to a fusion measurement (a two-qubit projective measurement) between resource states as determined by the feature labels.
\textbf{Checks.} There is a close connection between the elements used to describe topological codes and those required to characterize fault-tolerant instruments. For instance, stabilizer operators of the code give rise to check operators for the QIN as the stabilizers are repeatedly measured. Similarly, the logical operators of the code give rise to logical correlators, which track how the corresponding degrees of freedom map between ports. In topological fault tolerance, going from code to protocol involves increasing the geometric dimension by one, which in the circuit model is naturally interpreted as the temporal direction. In particular, checks correspond to parity constraints on the outcomes of operators supported on closed (i.e., without boundary), homologically trivial surfaces of codimension 1 (i.e., two-dimensional surfaces) in the template complex. This is analogous to how 2D surface code stabilizers consist of Pauli operators supported on closed, homologically trivial loops. In particular, the surface of every bulk 3-cell of the template corresponds to a check, in the following way. In CBQC, a surface code stabilizer measurement repeated between two subsequent timesteps gives rise to a check, and this check can be identified with the 3-cell whose two faces the measurements are supported on. In FBQC, fusion measurements between resource states supported on the vertices of a 3-cell constitute a resource state stabilizer, and thus a check (this will be carefully validated in Sec.~\ref{sec:topological_fbqc}).
Much like the two-dimensional case, bulk check generators can be partitioned into two disjoint sets, either primal or dual, as depicted in Fig.~\ref{figTemplateInterpretation}. For CBQC, this partition consists of the 2D checkerboard pattern extended in time, while for FBQC, the primal and dual checks follow a 3D checkerboard pattern. Thus, we may label a bulk 3-cell (and its surface) by either Primal or Dual, depending on what subset it belongs to. These checks can be viewed in terms of Gauss's law---they detect the boundaries of chains of errors\footnote{In condensed matter language, this check operator group can be understood as a $\mathbbm{Z}_2\times \mathbbm{Z}_2$ 1-form symmetry~\cite{gaiotto2015generalized,kapustin2017higher,roberts2020symmetry}.}. The presence of features modifies the check operator group: primal and dual boundaries lead to checks supported on truncated 3-cells, while defects and twists lead to checks supported on the surfaces of pairs of 3-cells sharing a defect 2-cell or twist 1-cell. We discuss the check operator group and how it is modified by features in much more detail in Sec.~\ref{sec:topological_fbqc} for FBQC and App.~\ref{secTemplateToCBQC} for CBQC.
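As a small bookkeeping sketch (the integer coordinate convention is our own assumption, and the labels are a gauge choice as noted above), the primal/dual partition of bulk 3-cells can be written as a parity function: a 2D checkerboard extended along time for CBQC, and a full 3D checkerboard for FBQC.

```python
def check_type(x, y, z, model='FBQC'):
    """Primal/dual label of the bulk 3-cell at integer coordinates (x, y, z).
    The assignment is a gauge choice; z plays the role of time in CBQC."""
    parity = (x + y) % 2 if model == 'CBQC' else (x + y + z) % 2
    return 'Primal' if parity == 0 else 'Dual'

# FBQC: any two face-adjacent 3-cells carry opposite labels (3D checkerboard).
assert check_type(0, 0, 0) != check_type(0, 0, 1)
assert check_type(0, 0, 0) != check_type(1, 0, 0)
# CBQC: the label is constant along the time direction.
assert check_type(2, 3, 0, 'CBQC') == check_type(2, 3, 7, 'CBQC')
```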
\textbf{Logical correlators and membranes}. Logical membranes determine how logical information is mapped between input and output ports of the instrument. In the template, logical membranes---which can be thought of as the world sheets of logical operators---are supported on closed, homologically nontrivial surfaces of codimension 1. Logical membranes can be obtained by finding relatively closed surfaces $M \subseteq \mathcal{L}_2$, each with their 2-cells taking labels from $\mathcal{F}_M = \{\text{Primal}, \text{Dual}, \text{PrimalDual}\}$ satisfying certain requirements. Here, by relatively closed, we mean that the surface is allowed to have a boundary on the boundary of the template cell complex. The labels must satisfy the following conditions: (i) only faces with Primal (Dual) labels can terminate on primal (dual) boundaries, (ii) upon crossing a domain wall, the Primal and Dual labels of the membrane are exchanged (and the PrimalDual label is left invariant).
Any such surface corresponds to a logical membrane in the following ways. In CBQC, the membrane can be projected into a given time slice where it corresponds to a primal, dual, or composite logical string operator. The components of a membrane in a plane of constant time correspond to stabilizer measurements that must be multiplied to give the equivalent representative on that slice (and thus their outcomes are used to determine the Pauli frame). In FBQC, the membrane corresponds to a stabilizer of the resource state, whose bulk consists of a set of fusion and measurement operators used to determine the Pauli frame. In both cases, when projected onto a port, these membranes correspond to a primal-type ($X$-type), dual-type ($Z$-type), or a composite primal-dual-type ($Y$-type) string logical operator. Checks can be considered ``trivial'' logical membranes, with logical membranes forming equivalence classes up to multiplication by them (i.e., by local deformations of the membrane surfaces).
\textbf{Logical errors}. Errors can be understood at the level of the template. Elementary errors---Pauli or measurement errors---are categorized as either primal or dual, according to whether they flip dual or primal checks, respectively. Undetectable chains of elementary errors comprise processes involving the creation of primal or dual excitations, their propagation through the logical instrument, transformation through domain walls, and absorption into boundaries or twists. Specifically, primal (dual) excitations can condense on dual (primal) boundaries, composite primal-dual excitations can condense on twists, and primal and dual excitations are swapped upon crossing a transparent domain wall. The primal and dual components of a logical membrane can be thought of as measuring the flux of primal and dual excitations, respectively. If an undetectable error results in an odd number of primal or dual excitations having passed through the primal and dual components of a logical membrane, then a logical error has occurred. The fault distance of the logical instrument is the minimum weight of any logical error.
\textbf{Graphical conventions.} Throughout the rest of the paper, as per Figs.~\ref{figLogicalBlockConcatenation},~\ref{figToricCode}, and ~\ref{figTemplateInterpretation}, we pictorially represent primal boundaries and primal logical membranes in blue, and dual boundaries and dual logical membranes in red. Domain walls are represented in green.
\section{Universal block-sets for topological quantum computation based on planar codes}\label{secLogicBlocks}
In this section we apply the framework of logical block templates to construct a universal set of fault-tolerant instruments based on planar codes~\cite{bravyi1998quantum}. The gates we design are based on fault-tolerant Clifford operations combined with noisy magic state preparation, which together are sufficient for universal fault-tolerant logic (via magic state distillation~\cite{bravyi2005universal,bravyi2012magic,litinski2019game}). To the best of our knowledge, some of the Clifford operations we present---in particular the phase gate and controlled-NOT gate---are the most efficient versions in the literature in terms of volume (defined as the volume of the template cell complex). These Clifford operations form the backbone of the quantum computer; they set, for instance, the cost of distilling magic states and the rate at which they can be consumed (the latter of which can become quite expensive for applications with large numbers of logical qubits~\cite{litinski2019magic,kim2021faulttolerant}). Later, in Sec.~\ref{secPortals}, we show another way of performing Clifford operations---namely, Pauli product measurements---for twist-encoded qubits, using a space-time feature known as a portal. These portals require long-range operations in general, and will be discussed in the context of FBQC.
\begin{figure*}\label{figAllLogicalBlocks}
\end{figure*}
\subsection{Planar code logical block templates for Clifford operations}\label{secCliffordOperators}
We begin by defining logical block templates for a generating set of the Clifford operations on planar codes~\cite{bravyi1998quantum}: the Hadamard gate, the phase gate, and lattice surgery for measuring a Pauli-$X$ string, along with Pauli preparations and measurements. Recall that a fault-tolerant instrument realizes an encoded version of the Clifford operator $U\in \mathcal{C}_{l,m}$ if it has $l$ distinct input ports, $m$ distinct output ports, and an equivalence class of logical membranes $M^{\overline{P},\overline{Q}}$ for every logical correlator $P\otimes Q \in \mathcal{S}(U)$. Each of the block templates is depicted in Fig.~\ref{figAllLogicalBlocks} along with a generating set of membranes, showing the mapping of logical operators between the input and output ports. To complete a universal block set, we present the template for noisy magic state preparation in Fig.~\ref{figFBQCMagicStatePreparation} and App.~\ref{secMagicStatePreparation}. We reemphasize that the instruments we discuss are realized only up to a random but known Pauli operator---known as the Pauli frame. The logical membranes determine the measurement outcomes that are used to infer this Pauli frame.
\textbf{Logical block criteria.} The logical blocks we present are designed for planar code qubits with the following criteria in mind: (i) They each have a fault distance of $d$ for both CBQC and FBQC under independent and identically distributed (IID) Pauli and measurement errors, where $d$ is the (tunable) distance of the planar codes on the ports. (ii) They are composable by transversal concatenation, and this composition preserves distance. This means that for a given distance, we can compose two blocks together by identifying the input port(s) of one with the output port(s) of another. In particular, this requires that input and output ports have a fixed planar code geometry. (iii) They admit a simple implementation in 2D hardware: after choosing any Cartesian direction as the flow of physical time, the 2D time slices can be realized in a 2D rectangular layout.
These criteria allow the logical blocks to be used in a broad range of contexts; however, it is possible to find more efficient representations of networks of logical operations by relaxing them. For instance, we need not require the input and output codes to be the same, as we can classically keep track of rotations on the planar code and compensate accordingly. Secondly, networks of such operations can be compiled into more efficient versions with the same distance. In particular, one may also find noncubical versions of these blocks with reduced volumes.
\textbf{Hadamard and phase gates.} To the best of our knowledge, the phase gate presented in Fig.~\ref{figAllLogicalBlocks} is the most volume-efficient representation in the literature for planar code qubits using the so-called rotated form~\cite{nussinov2009symmetry,bombin2007optimal,beverland2019role}. Moreover, the scheme we present can be implemented in CBQC with a static 2D planar lattice using at most four-qubit stabilizer measurements on neighboring qubits on a square lattice. In particular, the physical operations for the phase gate can be ordered in a way that does not require the usual five-qubit stabilizer measurements~\cite{bombin2010topological,litinski2019game} or modified code geometry~\cite{yoder2017surface,brown2020universal} that are typically required for twists. We explicitly show how to implement this phase gate in CBQC in Fig.~\ref{figPhaseGateCBQC} in App.~\ref{AppPhaseGateCBQC}. We remark that with access to nonlocal gates or geometric operations called ``folds'', one can find an even more efficient phase gate based on the equivalence of the toric and color codes~\cite{kubica2015unfolding,moussa2016transversal}. The Hadamard gate has the same volume as the ``plane rotation'' from Ref.~\cite{litinski2019game}.
\textbf{Lattice surgery and Pauli product measurements}. To complete the Clifford operations, we consider the nondestructive measurement of an arbitrary $n$-qubit Pauli operator, known as a Pauli product measurement (PPM)~\cite{elliott2009graphical} (a general Clifford computation can be performed using a sequence of PPMs alone~\cite{litinski2019game}). By nondestructive, we mean that only the specified Pauli is measured, and not, for example, its constituent tensor factors. A general PPM $M_{P}: (\mathbb{C}^2)^{\otimes n} \rightarrow (\mathbb{C}^2)^{\otimes n}$, $P \in \mathcal{P}_n$, has a stabilizer given by \begin{align} \mathcal{S}(M_{P}) =
\langle P \otimes I, I \otimes P, Q\otimes Q^{\dagger} ~|~ Q\in \mathcal{Z}_{\mathcal{P}_n}(P) \rangle. \end{align} These PPMs can be performed using lattice surgery~\cite{horsman2012surface, litinski2019game}. With access to single-qubit Clifford unitaries, an arbitrary PPM can be generated using lattice surgery in a fixed basis, such as the $X$ basis as depicted in Fig.~\ref{figAllLogicalBlocks}.
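As an illustrative consistency check (ours, not taken from any reference), the listed generators of $\mathcal{S}(M_P)$ can be verified to commute pairwise for a small concrete case such as $P = X\otimes X$ on $n=2$ qubits; a minimal numpy sketch:

```python
import itertools
import numpy as np

# Single-qubit Paulis.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = 1j * X @ Z  # Y = iXZ

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def commute(a, b):
    return np.allclose(a @ b, b @ a)

# Example: P = X (x) X on n = 2 qubits.
P = kron(X, X)
paulis = [I, X, Y, Z]
# Hermitian tensor-product representatives of the centralizer Z_{P_2}(P).
centralizer = [kron(a, b) for a, b in itertools.product(paulis, repeat=2)
               if commute(kron(a, b), P)]

# Generators of S(M_P): P (x) I, I (x) P, and Q (x) Q^dag for centralizer Q.
gens = [kron(P, I, I), kron(I, I, P)]
gens += [np.kron(Q, Q.conj().T) for Q in centralizer]

assert all(commute(g, h) for g in gens for h in gens)
print("all", len(gens), "generators commute")
```

The check confirms that the stabilizer group above is well defined (abelian) for this example.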
To efficiently perform a general PPM using lattice surgery~\cite{horsman2012surface}, one may utilize planar codes with six corners, as described in Ref.~\cite{litinski2019game}. Each six-corner planar code encodes two logical qubits and supports representatives of all logical operators $\overline{X}_i$, $\overline{Y}_i$, and $\overline{Z}_i$ of each qubit $i\in \{1,2\}$ on a boundary. This enables us to measure arbitrary $n$-qubit Pauli operators more efficiently, as no single-qubit Clifford unitaries or code rotations are required between successive lattice surgery operations. The price to pay is that single-qubit Pauli measurements and preparations can no longer be done in constant time. As a further improvement, by utilizing periodic boundary conditions, we can compactly measure any logical Pauli operator on $k$ logical qubits using a block with volume at most $2kd^3$, as shown in Refs.~\cite{bombin2021interleaving, kim2021faulttolerant}.
\begin{figure}\label{figBlockCNOT}
\end{figure}
\textbf{Qubits in space and time: from lattice surgery to controlled-Pauli operations}. The logical flow (the order in which the logical block maps inputs to outputs) and physical flow (the order in which physical operations are implemented to realize the fault-tolerant instrument) for a logical block do not need to be aligned. One can take advantage of this in the design of logic operations. In particular, each qubit participating in the lattice surgery may be regarded as undergoing a controlled-$X$, controlled-$Y$, or controlled-$Z$ gate with a spacelike ancilla qubit (i.e., a qubit propagating in a spatial direction). For example, the $X$-type lattice surgery of Fig.~\ref{figAllLogicalBlocks} can be understood as preparing an ancilla in the $\ket{+}$ state, performing controlled-$X$ gates between the ancilla and target logical qubits, then measuring the ancilla in the $X$ basis.
One can use this to define a logical block template for the controlled-$X$ gate, as depicted in Fig.~\ref{figBlockCNOT}. Therein, one can verify that the block induces the correct action by finding membranes representing stabilizers of $CX: (\mathbb{C}^2)^{\otimes 2}\rightarrow (\mathbb{C}^2)^{\otimes 2}$ \begin{align} \mathcal{S}(CX) = \langle X_1\otimes X_1X_2, X_2 \otimes X_2,~ \nonumber\\
Z_1 \otimes Z_1, Z_2 \otimes Z_1Z_2 \rangle. \end{align} One can similarly define $CZ$ and $CY$ gates by appropriately including domain walls in the template. The stabilizers for the $CZ$ and $CY$ can be obtained by applying a Hadamard or phase to the second qubit after the $CX$.
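The stabilizer pairs above are simply the familiar Pauli propagation rules of the controlled-$X$ gate, which can be checked directly by conjugation; a small numpy sketch (ours, for illustration):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(a, b):
    return np.kron(a, b)

# CX with qubit 1 as control and qubit 2 as target (basis order |q1 q2>).
# CX is Hermitian, so CX^dag = CX.
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=float)

# Heisenberg action P -> CX P CX^dag, matching the pairs in S(CX):
assert np.allclose(CX @ kron(X, I) @ CX, kron(X, X))  # X1 -> X1 X2
assert np.allclose(CX @ kron(I, X) @ CX, kron(I, X))  # X2 -> X2
assert np.allclose(CX @ kron(Z, I) @ CX, kron(Z, I))  # Z1 -> Z1
assert np.allclose(CX @ kron(I, Z) @ CX, kron(Z, Z))  # Z2 -> Z1 Z2
print("CX Pauli propagation verified")
```

Each assertion matches one generator of $\mathcal{S}(CX)$, with the left tensor factor the input Pauli and the right factor its image.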
In the next section, we expand on this concept by designing a logic scheme using ``toric code spiders'' that can be considered an alternative approach to lattice surgery.
\section{Assembling blocks into circuits: Concatenating LDPC codes with surface codes}\label{secConcatenation}
We now introduce a logic scheme that takes advantage of the space-time flexibility inherent to surface code logical blocks and show how to generate larger circuits using these building blocks. An application of this scheme is the construction of fault-tolerant schemes based on surface codes concatenated with more general LDPC codes.
While the surface code (and topological codes) are advantageous due to their very high thresholds, they are somewhat disadvantaged by their asymptotically zero encoding rate (i.e., the ratio of encoded logical qubits to physical qubits vanishes as the code size goes to infinity). Fortunately, there are families of LDPC codes that have nonzero rates~\cite{tillich2014quantum,gottesman2013fault,fawzi2018constant,fawzi2018efficient,breuckmann2021ldpc,hastings2021fiber,breuckmann2021balanced,panteleev2021asymptotically}, meaning that the number of encoded logical qubits increases with the number of physical qubits. Such codes may offer ways of greatly reducing the overhead for fault-tolerant quantum computation~\cite{gottesman2013fault}.
Code concatenation allows us to take advantage of the high threshold of the surface code and the high rates of LDPC codes. The space-time language of the previous sections provides a natural setting for the construction and analysis of the resulting codes. The building blocks for these constructions consist of certain topological projections that we refer to as ``toric code spiders''---these correspond to encoded versions of the $Z$ and $X$ spiders of the ZX-calculus~\cite{coecke2008interacting,van2020zx,de2020zx}. Spiders are encoded Greenberger-Horne-Zeilinger (GHZ) basis projections for surface code qubits. Our protocol is intended to be illustrative, and we emphasize that further investigation into the performance of such concatenated codes is an interesting open problem.
\subsection{Toric code spiders} We now define spiders and toric code spiders, the building blocks for the concatenated codes we consider. There are two types of spiders, which we label by $\spiderx{k}$ and $\spiderz{k}$, where $k\in \mathbbm{N}$ labels the number of input and output ports. We do not distinguish between input and output ports here, and, as such, we write the stabilizer groups for $k>1$ as \begin{align}
\mathcal{S}(\spiderx{k}) =& \langle Z^{\otimes k}, X_{i}X_{i+1} ~|~ i = 1,\ldots, k-1 \rangle, \\
\mathcal{S}(\spiderz{k}) =& \langle X^{\otimes k}, Z_{i}Z_{i+1} ~|~ i = 1,\ldots, k-1 \rangle, \end{align} with $\mathcal{S}(\spiderx{1}) = \langle Z \rangle$ and $\mathcal{S}(\spiderz{1}) = \langle X \rangle$. If all ports are considered outputs, then spiders can be regarded as preparing GHZ states (up to normalization) \begin{align}
\ket{\spiderx{k}} &= \ket{+\ldots +} + \ket{-\ldots -}, \label{eqSpiderxState} \\
\ket{\spiderz{k}} &= \ket{0\ldots 0} + \ket{1\ldots 1}. \label{eqSpiderzState} \end{align} Similarly, if all ports are considered inputs, then the spider performs a GHZ basis measurement. The flexibility arises from considering networks of spiders where each spider may have both input and output ports, i.e., where each type of spider $\spiderx{k}$ and $\spiderz{k}$ is a map $(\mathbb{C}^2)^{\otimes (k-a)} \rightarrow (\mathbb{C}^2)^{\otimes a}$ for any choice of $a\in \{0,1,\ldots, k\}$, and can be obtained by turning some kets to bras in Eqs.~(\ref{eqSpiderxState}) and (\ref{eqSpiderzState}). We note that in this language, Pauli-$X$ (Pauli-$Z$) measurements and preparations may be regarded as 1-port spiders $\spiderz{1}$ ($\spiderx{1}$), while the identity gate may be regarded as a 2-port spider of either type.
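One can verify Eq.~(\ref{eqSpiderxState}) against the stabilizer group $\mathcal{S}(\spiderx{k})$ numerically for a small $k$; a short numpy sketch (illustrative only):

```python
import numpy as np
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

def kron_list(ops):
    return reduce(np.kron, ops)

k = 4
# |spider_x> = |+...+> + |-...->  (unnormalized GHZ state in the X basis).
state = kron_list([plus] * k) + kron_list([minus] * k)
state /= np.linalg.norm(state)

# Check the generators of S(spider_x): Z^{(x)k} and X_i X_{i+1}.
Zk = kron_list([Z] * k)
assert np.allclose(Zk @ state, state)
for i in range(k - 1):
    ops = [I2] * k
    ops[i] = X
    ops[i + 1] = X
    assert np.allclose(kron_list(ops) @ state, state)
print("spider_x stabilizers verified for k =", k)
```

Swapping $X\leftrightarrow Z$ and $\ket{\pm}\leftrightarrow\ket{0},\ket{1}$ gives the analogous check for Eq.~(\ref{eqSpiderzState}).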
Toric code spiders are logical blocks representing encoded versions of these spiders such that each input and output is a qubit encoded in a surface code. We depict example toric code spiders corresponding to $\spiderx{4}$ and $\spiderz{4}$ in Fig.~\ref{figToricCodeSpiders}.
\textbf{Stabilizer measurements.} By composing many toric code spiders in a network along with single-qubit Hadamard and phase gates, we can perform any Clifford circuit. Such circuits can be used to measure the stabilizers of any stabilizer code, and in the following we show how networks of spiders alone are sufficient to measure the stabilizers of any Calderbank-Shor-Steane (CSS) code~\cite{calderbank1996good,steane1996multiple}, thus giving us a recipe to measure the stabilizers of the concatenated codes of interest.
As a simple example, consider the Clifford circuit depicted in Fig.~\ref{figToricCodeSpiderNetwork} that uses one ancilla to measure the Pauli operator $X^{\otimes 4}$ on four qubits. Such a circuit may be regarded as the syndrome measurement of a surface code stabilizer. One can rewrite the circuit as a network of operations consisting of $\spiderx{k}$ and $\spiderz{k}$ (also depicted in the figure), where each spider is represented by a $k$-legged tensor that is composed along ports. One can verify this using standard stabilizer techniques or using the ZX-calculus~\cite{coecke2008interacting,van2020zx} (see also App.~\ref{appSpiderNetworks} for more details). We can arrange a space-time network as depicted in the figure. To measure $Z^{\otimes 4}$, one simply swaps the roles of $\spiderx{k}$ and $\spiderz{k}$.
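The equivalence between the ancilla circuit and an $X^{\otimes 4}$ measurement can be seen in the Heisenberg picture: conjugating the ancilla's $X$ observable backwards through the entangling gates yields $X_a X_1 X_2 X_3 X_4$, so with the ancilla prepared in $\ket{+}$ the measured outcome is that of $X^{\otimes 4}$ on the data qubits. A small numpy sketch (ours) verifying this:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_list(ops):
    return reduce(np.kron, ops)

def cx(control, target, n):
    """Dense CX on n qubits (fine at this toy scale, n = 5)."""
    P0 = np.diag([1.0, 0.0])
    P1 = np.diag([0.0, 1.0])
    ops0 = [I2] * n; ops0[control] = P0
    ops1 = [I2] * n; ops1[control] = P1; ops1[target] = X
    return kron_list(ops0) + kron_list(ops1)

n = 5                       # qubit 0 is the ancilla, qubits 1..4 are data
U = np.eye(2 ** n)
for t in range(1, n):       # CX from the ancilla to each data qubit
    U = cx(0, t, n) @ U

# Measuring X on the ancilla after U is equivalent to measuring
# U^dag X_a U = X_a X_1 X_2 X_3 X_4 before it.
Xa = kron_list([X] + [I2] * 4)
Xall = kron_list([X] * 5)
assert np.allclose(U.conj().T @ Xa @ U, Xall)
print("ancilla X measurement pulls back to X on all five qubits")
```

This is the same circuit identity exploited by the spider rewriting in the figure.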
\begin{figure}
\caption{Logical block templates for toric code spiders. On the left (right) we have the toric code spider for the $\spiderx{4}$ ($\spiderz{4}$) operation. The length of the legs of each spider is exaggerated for illustration purposes. On the bottom row we depict two logical membranes; labelling the ports 1 to 4 clockwise from the left, we have example membranes $X_3X_4 \in \mathcal{S}(\spiderx{4})$ and $X_1X_2X_3X_4 \in \mathcal{S}(\spiderz{4})$.}
\label{figToricCodeSpiders}
\end{figure}
\begin{figure*}
\caption{Arranging toric code spider networks for encoded Clifford circuits. (left) a Clifford circuit for measuring the Pauli operator $X^{\otimes 4}$ using an extra ancilla. Time moves from left to right. (middle) Converting the Clifford circuit into a network of $\spiderx{k}$ and $\spiderz{k}$ operations. Time moves from bottom to top. (right) The resulting space-time network. Time moves from bottom to top. In App.~\ref{appSpiderNetworks} we demonstrate how the spider network is obtained, along with a larger network measuring encoded surface code stabilizers. The length of the legs of the toric code spiders and their spacing are exaggerated for illustration purposes.}
\label{figToricCodeSpiderNetwork}
\end{figure*}
\textbf{Concatenation with toric codes.} The previous example of performing stabilizer measurements can be generalized to the stabilizers of arbitrary concatenated CSS codes. Namely, denote the concatenation of an \textit{inner code} $C_{\text{in}}$ and an \textit{outer code} $C_{\text{out}}$ by $C_{\text{out}}\circ C_{\text{in}}$; it is the result of encoding each logical qubit of $C_{\text{out}}$ into $C_{\text{in}}$. We consider using a surface code as the inner code and a general CSS LDPC code as the outer code. (The high-threshold surface code is used to suppress the error rate to below the threshold of the LDPC outer code.)
We can construct toric code spider networks to measure the stabilizers of the outer code $C_{\text{out}}$ as follows. For each round of $X$-type ($Z$-type) measurements of $C_{\text{out}}$, we place \begin{enumerate}
\item a $\spiderx{\delta + 2}$ ($\spiderz{\delta + 2}$) spider for each code qubit, with $\delta$ equal to the number of $X$ ($Z$) stabilizers the code qubit is being jointly measured with
(the two additional ports can be considered as an input and an output port for the code qubit),
\item a $\spiderz{k}$ ($\spiderx{k}$) spider for each stabilizer measurement, connected to the corresponding code qubit spiders. \end{enumerate} Whenever an $X$ or $Z$ spider with more than four legs arises, it can be decomposed into a sequence of connected spiders with degree at most 4, and thus implementable with toric code spiders. Such a protocol allows us to measure stabilizers of the outer code. We emphasize that in constructing the instrument network to realize the stabilizer measurements, only the graph topology matters, and one may order the instruments in many different ways.
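The decomposition of a high-degree spider into degree-$\le 4$ spiders can be counted explicitly: a linear chain of $m$ spiders joined by $m-1$ internal legs exposes $4m - 2(m-1) = 2m+2$ external legs. The sketch below (illustrative accounting only; other decomposition topologies are possible) computes the required chain length:

```python
import math

def chain_decomposition_size(k):
    """Number of degree-<=4 spiders in a chain exposing k external legs.

    A chain of m spiders joined by m-1 internal legs exposes
    4m - 2(m-1) = 2m + 2 legs, so m = ceil((k - 2) / 2) for k > 4,
    and a single spider suffices otherwise.
    """
    if k <= 4:
        return 1
    return math.ceil((k - 2) / 2)

# Spot-check small cases against the formula above.
for k, m in [(4, 1), (5, 2), (6, 2), (7, 3), (10, 4)]:
    assert chain_decomposition_size(k) == m
print("chain sizes verified")
```

Since only the graph topology of the network matters, the chain may be laid out in whatever space-time arrangement is convenient.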
As a simple example, we depict a surface code concatenated with itself in Fig.~\ref{figConvertingCircuitToSpider} in App.~\ref{appSpiderNetworks}. Note that for simplicity, the outer surface code is the Kitaev version that consists of independent $X$-type and $Z$-type generators.
In general, the stabilizers of a LDPC code cannot be made local in a planar layout, and long-range interactions will be required (necessarily so if the code has a nonzero rate and nonconstant distance~\cite{bravyi2010tradeoffs}). Such long-range connections can be facilitated by toric code spiders with long legs (topologically these ``long legs'' look like long identity gates), each of which comprises local operations, thus dispensing with the need for long-range connections. Alternatively, one may connect distant toric code spiders using portals (as we describe in Sec.~\ref{secPortals}).
For example, embedding the qubits of an $[N,K,D]$ quantum LDPC code in a finite-dimensional Euclidean space will in general require connections between qubits of range $r = \mathrm{poly}(N)$. If the error rate on qubits is proportional to their separation, one can use the surface code concatenation scheme to reduce the error rate experienced by the qubits of the (top-level) LDPC code to a constant rate (independent of range). In particular, this can be achieved using surface code spiders with a distance of $d = \mathcal{O}(\log(r_{\mathrm{max}})) = \mathrm{polylog}(N)$ for each connection (where $r_{\mathrm{max}}$ is the largest qubit separation). This leads to a protocol with rate $\frac{K}{N \mathrm{polylog}(N)}$, which despite being asymptotically zero, may still be larger than that of a pure topological encoding, provided one starts with an LDPC code with good rate~\cite{hastings2021fiber,breuckmann2021balanced,panteleev2021asymptotically}.
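To illustrate the claimed logarithmic scaling of the leg distance, the sketch below assumes the common heuristic suppression model $p_L(d) \sim A\,(p/p_{\mathrm{th}})^{d/2}$ per unit leg length; the constants $A$, $p_{\mathrm{th}}$, and the target error are placeholders of our choosing, not values from the text:

```python
import math

def leg_distance(r, p, p_th=0.01, A=0.1, target=1e-3):
    """Smallest odd distance d keeping a length-r leg below `target`.

    Assumes the heuristic suppression model p_L(d) ~ A (p / p_th)^(d/2)
    per unit length, so a leg of length r fails with probability about
    r * p_L(d). All constants are illustrative placeholders.
    """
    d = 1
    while r * A * (p / p_th) ** (d / 2) > target:
        d += 2   # keep d odd, as is conventional for surface codes
    return d

# The required distance grows logarithmically with the leg length r:
d_short = leg_distance(10, 0.001)
d_long = leg_distance(10_000, 0.001)
assert d_long - d_short <= 2 * math.ceil(math.log(1000) / math.log(10))
print(d_short, d_long)
```

A thousandfold increase in leg length here costs only a constant additive increase in distance, consistent with $d = \mathcal{O}(\log r_{\mathrm{max}})$.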
\textbf{Logical operations.} One can perform fault-tolerant logical gates on such codes using networks of toric code spiders along with magic state injection. In particular, gates are performed by measuring a target logical Pauli operator (a PPM) jointly with an ancilla magic state~\cite{litinski2019game}. The logical Pauli operator is measured either directly or through a code deformation approach~\cite{gottesman2013fault,krishna2021fault,cohen2021low}, using a toric code spider network as described above. The ancilla magic states can be obtained by magic state distillation at the inner (surface) code level. The initial state preparations in the $\overline{X}$ and $\overline{Z}$ bases can be achieved by preparing the inner code qubits in either the $X$ or $Z$ eigenbasis (through an appropriate choice of primal or dual boundaries on the input) followed by a round of measurement of the outer code stabilizers\footnote{More generally, the outer code encoding circuit for an arbitrary state is Clifford for a general CSS code, but it may not be constant depth.}. This approach may be viewed as a spider network approach to LDPC lattice surgery.
\section{Implementing logical blocks in topological fusion-based quantum computation} \label{sec:topological_fbqc}
\begin{figure*}
\caption{The $6$-ring fusion network. The fusion network can be compactly represented by a fusion graph---a cubic graph whose vertices represent resource states ($6$-ring cluster states) and whose edges represent fusions. Logical membranes and check operators for various features are depicted.}
\label{figFBQCFeatures}
\end{figure*}
This section describes the use of logical block templates to implement fault-tolerant instruments in FBQC~\cite{bartolucci2021fusion}. FBQC is a universal model of quantum computation in which entangling \emph{fusion} measurements are performed on \emph{resource states}. It is a natural physical model for many architectures, including photonic architectures where fusions are native operations~\cite{browne2005resource,gimeno2015three,bartolucci2021fusion}. In FBQC a topological computation can be expressed as a sequence of fusions between a number of resource states arranged in (2+1)D space-time. This gives rise to a construction termed a \emph{fusion network}. For concreteness, we consider an example implementation based on the six-ring fusion network introduced in Ref.~\cite{bartolucci2021fusion}, and with it construct fusion networks that realize the templates of Sec.~\ref{sec3DElements}.
\subsection{Resource state and measurement groups} \label{sec:resource_and_measurement_groups}
To begin, we consider the cubical cell complex of a logical template introduced in Subsection~\ref{secLogicalTemplate}. Recall that this complex is referred to as the lattice $\mathcal{L}$. In an FBQC implementation of the template we define a fusion network from the vertices and edges of $\mathcal{L}$. Each vertex specifies the location of a \emph{resource state}, while the edges represent fusions (multiqubit projective measurements) and single-qubit measurements on their qubits.
The six-ring resource state is a graph state~\cite{hein2004graph} with stabilizer generators \begin{equation} \label{eqClusterRingStabilizers}
\mathcal{R}_6 = \langle K_i ~|~ K_i = Z_{i-1} X_i Z_{i+1}, i \in \{0,\ldots, 5\} \rangle. \end{equation} The indexing is not arbitrary and will be described in Sec.~\ref{sec:surviving_stabilizers} below.
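Since the six-ring resource state is a graph state, its generators necessarily commute; this is quickly confirmed with the symplectic inner product. A short pure-Python sketch (ours, for illustration):

```python
# Check that the six-ring generators K_i = Z_{i-1} X_i Z_{i+1}
# (indices mod 6) pairwise commute, in the symplectic representation.
n = 6

def K(i):
    """Return (x, z) bit vectors for the generator K_i."""
    x = [0] * n
    z = [0] * n
    x[i] = 1
    z[(i - 1) % n] = 1
    z[(i + 1) % n] = 1
    return x, z

def commute(p, q):
    (x1, z1), (x2, z2) = p, q
    # Two Paulis commute iff x1.z2 + z1.x2 = 0 (mod 2).
    s = sum(a * b for a, b in zip(x1, z2)) + sum(a * b for a, b in zip(z1, x2))
    return s % 2 == 0

assert all(commute(K(i), K(j)) for i in range(n) for j in range(n))
print("six-ring generators pairwise commute")
```

The same symplectic bookkeeping extends to the measurement and surviving-stabilizer groups discussed below.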
The \emph{resource state group} $\mathcal{R}$ is defined as the tensor product \begin{equation} \mathcal{R} = {\otimes}_{v \in \mathcal{L}_0} \mathcal{R}_{6}, \end{equation} where $\mathcal{L}_0$ is the set of all vertices in $\mathcal{L}$. Each edge $e$ of $\mathcal{L}_1$ thereby corresponds to a pair of qubits from distinct resource states. In the featureless bulk of the lattice we perform fusion measurements on each such pair in the basis \begin{equation}\label{eqFusion} M_{e} = \langle XX, ZZ\rangle. \end{equation}
For features and boundaries, the template labels in $\mathcal{F}$ may specify alternative fusion bases or single-qubit measurements. In the latter case we continue to identify the measurement with an edge $e$; however, the template will specify one vertex as vacant so only a single qubit is involved. The \emph{measurement group} $\mathcal{M}$ is then defined as the group generated by \begin{equation}
\mathcal{M} = \langle M_e ~|~ e \in \mathcal{L}_1 \rangle \end{equation} where $\mathcal{L}_1$ is the set of all edges in $\mathcal{L}$.\footnote{In Ref.~\cite{bartolucci2021fusion} this group is termed the \emph{fusion group} and denoted $F$. It is renamed the measurement group $\mathcal{M}$ here to note the inclusion of single-qubit measurements when required.} Importantly, in FBQC, the measurements to be performed are all commuting, and thus may be performed in any order (for example, layer by layer, or sequentially~\cite{bombin2021interleaving}).
Elements in the resource state group that commute with the measurement group play an important role in FBQC, as we will see in the next section.
\subsection{Surviving stabilizers: checks and membranes} \label{sec:surviving_stabilizers}
Both the error-correcting capabilities and logical instruments of FBQC may be understood using the \emph{surviving stabilizers} formalism \cite{raussendorf2007topological, brown2020universal, bartolucci2021fusion}. The surviving stabilizer subgroup $\mathcal{S}$ is defined as those stabilizers $r \in \mathcal{R}$ that commute with all elements of the measurement group \begin{equation}
\mathcal{S} = \mathcal{Z}_\mathcal{R}\left(\mathcal{M}\right) = \left\{ r \in \mathcal{R} ~|~ rm = mr \textrm{ for all } m \in \mathcal{M} \right\}. \end{equation} Elements of $\mathcal{S}$ are termed surviving stabilizers. The lattice structure allows us to determine this centralizer relatively easily as will be shown below.
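In the symplectic (binary) representation, the defining condition of $\mathcal{S}$ is linear over $\mathrm{GF}(2)$: a product of resource-state generators survives iff its symplectic inner product with every measurement generator vanishes. A toy sketch (ours, on a two-qubit example rather than the full six-ring network) of computing the surviving combinations by a GF(2) nullspace:

```python
def symp(p, q):
    """Symplectic inner product of Paulis p, q given as (x, z) bit tuples."""
    (x1, z1), (x2, z2) = p, q
    s = sum(a * b for a, b in zip(x1, z2)) + sum(a * b for a, b in zip(z1, x2))
    return s % 2

def gf2_nullspace(A):
    """Nullspace basis of a binary matrix via Gaussian elimination."""
    if not A:
        return []
    rows = [r[:] for r in A]
    ncols = len(A[0])
    pivots, r = {}, 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pr is None:
            continue
        rows[r], rows[pr] = rows[pr], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    basis = []
    for c in range(ncols):
        if c in pivots:
            continue
        v = [0] * ncols
        v[c] = 1
        for pc, pr in pivots.items():
            if rows[pr][c]:
                v[pc] = 1
        basis.append(v)
    return basis

# Toy example on 2 qubits: R = <X_1, Z_2>, M = <X_1 X_2>.
X1 = ((1, 0), (0, 0))
Z2 = ((0, 0), (0, 1))
X1X2 = ((1, 1), (0, 0))
R, M = [X1, Z2], [X1X2]

# A[m][r] records whether M-generator m anticommutes with R-generator r;
# surviving combinations are the GF(2) nullspace of A.
A = [[symp(m, g) for g in R] for m in M]
surviving = gf2_nullspace(A)
assert surviving == [[1, 0]]   # only X_1 survives (it commutes with X1X2)
print("surviving combinations:", surviving)
```

The lattice structure of the six-ring network plays the role of making this otherwise generic linear-algebra computation local and easy to read off cell by cell.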
One important subgroup of $\mathcal{S}$ is the intersection $\mathcal{C} = \mathcal{R} \cap \mathcal{M}$, whose elements provide the check operators of the FBQC instrument network. Qubits of each six-ring resource state are arrayed on the edges in such a way that each generator of Eq.~(\ref{eqClusterRingStabilizers}) can be associated with a corner of a $3$-cell; one suitable indexing on Cartesian axes is $\{z_+, x_+, y_+, z_-, x_-, y_-\}$.
Consider now a 3-cell in the bulk of the lattice as shown in Fig.~\ref{fig:surviving_check}. For six of its vertices we may choose a $ZXZ$ generator of Eq.~(\ref{eqClusterRingStabilizers}) on a corner of the cell as shown. For the remaining two corners we take a product of three six-ring stabilizer generators to obtain an $XXX$ operator on the corner. The product of all eight corner stabilizers can be rewritten as a product of elements $XX$ and $ZZ$ from $\mathcal{M}$, and is therefore contained in $\mathcal{C}$.
\begin{figure}
\caption{ A surviving stabilizer in the intersection $\mathcal{R} \cap \mathcal{M}$. On the left are stabilizers of the $6$-ring resource states arrayed on each vertex of a lattice cell. These are of the form $ZXZ$ (green-purple-green) or $XXX$ (purple-purple-purple). The product can be rewritten in terms of $XX$ (purple) and $ZZ$ (green) and is therefore also in the measurement group. This type of surviving stabilizer is a check operator of the FBQC instrument network.}
\label{sfig:check_from_r}
\label{sfig:check_from_m}
\label{fig:surviving_check}
\end{figure}
These check stabilizers corresponding to 3-cells of the lattice are analogous to the plaquette operators of the surface code. Note that each cell edge shows only one outcome of the $\langle XX, ZZ\rangle$ fusion; the other outcome is included in a neighboring check operator. This partitions the cells into a 3D checkerboard of primal and dual check operators, as shown in Fig.~\ref{figFBQCFeatures}. Errors on fusion outcomes flip the value of checks, which can be viewed as anyonic excitations as before.
When all qubits are measured or fused, the surviving stabilizer group is given by the intersection $\mathcal{S} = \mathcal{R} \cap \mathcal{M}$. We now turn to the case where some qubits remain unmeasured, and construct additional elements of $\mathcal{S}$ that will be important in understanding the FBQC implementations of logical instruments.
\textbf{Topological boundary states.} Consider a boxlike region $R$ within a bulk fusion network, and suppose that fusions are performed only inside this box. Qubits involved in these fusions will be referred to as \emph{inner} qubits, and the remaining unmeasured qubits as \emph{outer} qubits. As the measurement group has full (stabilizer) rank on the inner qubits, all surviving stabilizers can be written as $s = s_\mathrm{outer} \otimes s_\mathrm{inner}$. In the interior of the region we can define check operators as above where $s_\mathrm{outer} = \mathbbm{1}$. However, in the presence of unmeasured qubits, we find surviving stabilizers that reveal topological boundary states on the boundary of the measured region $\partial R$. In particular, if we consider a planarlike boundary of the bulk fusion network, we find surviving stabilizers as shown in Fig.~\ref{fig:surviving_on_boundary}. For this boundary geometry, the first type are of the form $XZZX \otimes s_\mathrm{inner}$, where $s_\mathrm{inner}$ consists of a product of fusion measurements. We recognize this first type from Sec.~\ref{secSurfaceCode} as a surface code check operator acting on qubits on the region's boundary. These \emph{boundary checks} partition into primal and dual types, as shown in Fig.~\ref{fig:surviving_on_boundary}. The second type of surviving stabilizers are two-dimensional sheets, some of which, as we will see in the next section, define membranes of the logical instrument implemented by the fusion network. Note that membranes may be deformed by multiplying by the cubic and boundary check operators.
\begin{figure}
\caption{Topological boundary states for the six-ring fusion network. By fusing resource states in a region $R$, we are left with a series of unmeasured, outer qubits on the boundary $\partial R$ (represented by short lines coming out of the page). These outer qubits are in a code state of the surface code (up to signs depending on measurement outcomes), as can be inferred from the surviving stabilizers $XZZX \otimes s_\mathrm{inner}$ depicted. Therein, short purple (green) lines correspond to Pauli-$X$ (Pauli-$Z$) operators, while long purple (green) lines correspond to $XX$ ($ZZ$) operators. Another type of surviving stabilizer $\overline{Z}\otimes s_\mathrm{inner}$ is depicted, where $\overline{Z}$ is a logical operator for the surface code on $\partial R$ and $s_\mathrm{inner}$ is supported on a two-dimensional membrane.}
\label{fig:surviving_on_boundary}
\end{figure}
\subsection{Topological instrument networks in FBQC} We now turn to the implementation of the logical instruments described in Sec.~\ref{secCliffordOperators} and the topological features by which they are implemented. Additionally, using an example, we describe the second type of surviving stabilizer: the logical membrane. In Sec.~\ref{secElements} we understood features in terms of their properties with respect to anyonic excitations, which in turn were derived from the structure of the stabilizer check operators of the code. The FBQC approach is similar in nature. We introduce local modifications to the fusion and measurement patterns of the basic six-ring fusion network to create topological features, and show that this generates a surviving stabilizer group that includes the appropriate checks and stabilizers.
\subsubsection{Logical membranes and the identity gate} \label{sec:fbqc_identity_channel}
In this subsection we describe the FBQC implementation of the identity gate, the simplest logical operation shown in Fig.~\ref{figAllLogicalBlocks}, and in particular the implementation of its ports and boundaries. The fusion graph is cubic with dimensions $d \times d \times d$, and fusions on qubits of neighboring resource states are performed as before. Qubits on the primal and dual boundaries are measured to ensure that only the appropriate boundary checks remain as members of $\mathcal{S}$ (defined in the following). The remaining boundary checks on each port then exactly define a surface code, with membrane stabilizers given by \begin{equation}\label{eqMembraneFBQC} \begin{array}{c}
\overline{X}_\mathrm{in} \otimes \overline{X}_\mathrm{out} \otimes m^{X,X}, \\
\overline{Z}_\mathrm{in} \otimes
\overline{Z}_\mathrm{out} \otimes m^{Z,Z}. \end{array} \end{equation}
The membranes $m^{X, X}, m^{Z,Z}$ represent the world sheet of each logical operator through the fusion network, and they define the action of the logical instrument. They are depicted in Figs.~\ref{fig:surviving_on_boundary} and~\ref{fig:fbqc_membranes}. Undetectable error strings are those that cross from one primal (dual) boundary to the other, thereby introducing an error on the membrane.
\begin{figure}
\caption{The bulk of the logical membranes. Both $m^{X, X}$ and $m^{Z,Z}$ from Eq.~(\ref{eqMembraneFBQC}) are elements of the surviving stabilizer $\mathcal{R} \cap \mathcal{M}$. They can be constructed from stabilizers of the six-ring resource states in such a way that they also clearly belong to the measurement group. Purple and green full (half) edges represent $XX$ ($X$) and $ZZ$ ($Z$) operators. }
\label{sfig:membrane_from_r}
\label{sfig:membrane_from_m}
\label{fig:fbqc_membranes}
\end{figure}
\subsubsection{Ports} While not a topological feature, ports are an important component of the framework, as they are what allow us to define logical blocks. In terms of the fusion network, a port identifies a set of resource states that have unpaired (and thus unfused) qubits. These unpaired qubits remain unmeasured and form the set of outer qubits, each connected component of which forms a surface code state (postmeasurement) encoding the input or output of a block (as depicted in Fig.~\ref{fig:surviving_on_boundary}).
\subsubsection{Boundaries}\label{sec:fbqc_boundaries} Recall from Sec.~\ref{secElements} that boundaries come in two types according to the type of excitation that may condense on them. The primal boundary absorbs only dual-type excitations, and the dual boundary absorbs only primal-type excitations. These boundaries can be created using single-qubit measurements on a two-dimensional sublattice.
To see how they are created, consider a region $R$ with a boundary $\partial R$. As we have seen, by fusing the resource states within $R$, we are left with a topological surface code state on $\partial R$. There are surviving stabilizers $r\in \mathcal{S}$ that describe this boundary state. We consider in particular the surviving stabilizers $r\in \mathcal{S}\setminus \mathcal{C}$ with support on $\partial R$. These surviving stabilizers admit a generating set in terms of primal and dual stabilizers (recall they can be viewed as truncated bulk check operators).
To create a primal or dual boundary on $\partial R$, we perform a set of single-qubit measurements that commute with either the primal or dual surviving stabilizers. In other words, we perform measurements such that only the primal or dual surviving stabilizers $r\in \mathcal{S}\setminus \mathcal{C}$ remain (but not both). For the planar geometry of Fig.~\ref{fig:surviving_on_boundary}, the measurement patterns to create a primal boundary and a dual boundary differ only by a translation. Namely, they consist of an alternating pattern of $X$ and $Z$ single-qubit measurements, as shown in the example of Fig.~\ref{figFBQCMagicStatePreparation} below. The boundary checks are shown in Fig.~\ref{figFBQCFeatures}. Other geometries can be found similarly by implementing the single-qubit measurement pattern that completes either the primal or dual check operators (which are obtained by restricting bulk checks to the region with boundary). We remark that it is often convenient to describe the fusion graph on the dual of the template complex, such that resource states belong to 3-cells, and the measurement basis for each qubit is uniquely determined by the feature label on the 2-cell on which it resides.
\subsubsection{Domain walls} \label{secK6SymmetriesAndDefects}
As is the case for the surface code, nontrivial logic operations can be implemented by introducing defects using the underlying $\mathbbm{Z}_2$ symmetry. The domain wall defect was described in Sec.~\ref{sec:topological_features_surface_code} as a codimension-1 feature formed when the symmetry transformation, i.e., the exchange of primal and dual checks, is applied to a subregion of the lattice. In the logical block template this domain wall is denoted by labeled 2-cells identifying this region. The fusion pattern is modified such that the \emph{next-to-nearest} resource states on opposite sides of the domain wall are fused together directly. Resource states assigned to vertices within the domain wall plane do not participate and may be discarded. The local check operators along the 2D domain wall plane are supported on the two 3-cells intersecting on the domain wall, as shown in Fig.~\ref{fig:domain_wall_check}. This check structure results in the exchange of primal and dual excitations upon crossing.
\begin{figure}
\caption{The measurement pattern and check operator of a FBQC domain wall defect. Purple and green edges represent $XX$ and $ZZ$ operators. The extended edges are the measurements from next-to-nearest-neighbor fusions.}
\label{fig:domain_wall_check}
\end{figure}
\subsubsection{Twist defects}
Twist defect lines occur along the 1D boundaries of a domain wall. In the fusion network, the line of resource states associated with vertices on the twist line each have one qubit that does not partake in a bulk fusion or domain wall fusion. These qubits are associated with the edge directed into the domain wall, and are measured in the $Y$ basis producing the measurement pattern and check operators shown in Fig.~\ref{fig:twist_check}. These operators have overlap with both primal and dual checks, and it can be verified that composite primal-dual excitations may condense on twist lines.
\begin{figure}
\caption{The measurement pattern and check operator of a FBQC twist defect. Purple and green edges represent $XX$ and $ZZ$ operators. Yellow full and half edges represent $YY$ and $Y$ operators. The $YY$ operator is obtained as the product of $XX$ and $ZZ$ measurements of the given fusion.}
\label{fig:twist_check}
\end{figure}
\subsubsection{Cornerlines and transparent cornerlines}
The last feature we consider in the FBQC implementation of logical templates is that of \emph{cornerlines}. Cornerlines and transparent cornerlines arise naturally where two distinct boundaries meet; whether one obtains a cornerline or a transparent cornerline depends on whether a domain wall is present, and in either case no further modifications to the measurement pattern are required. In particular, one performs the appropriate single-qubit measurements on either side of the (transparent) cornerline, according to whether that qubit belongs to the primal or dual boundary type. We depict an example of the measurement pattern that gives rise to cornerlines in Fig.~\ref{figFBQCMagicStatePreparation}.
\subsubsection{Fusion-based magic state preparation} For completeness, we depict in Fig.~\ref{figFBQCMagicStatePreparation} the fusion network and precise measurement pattern that can be used to prepare a noisy magic state. This can be viewed as the fusion-based analogue of the protocol described in Refs.~\cite{lodyga2015simple,brown2020universal}.
\begin{figure}\label{figFBQCMagicStatePreparation}
\end{figure}
\section{Simulating logical blocks}\label{secNumerics}
In this section, we introduce tools to simulate the error rates of logical blocks, and present numerical results for the thresholds and logical error rates of the logical blocks depicted in Fig.~\ref{figAllLogicalBlocks}. We begin by describing the \textit{syndrome graph} representation of a fault-tolerant instrument, which will be a convenient data structure to decode and simulate logical blocks. Moreover, the logical block templates that we have defined can be implemented in software as a tool to automatically generate such syndrome graphs, thereby allowing for the simulation of complex logical block arrangements. Using this software framework, we perform numerical simulations for logical blocks based on the six-ring fusion network under a phenomenological noise model\footnote{Note that this is distinct from the phenomenological noise model that is commonly used to model measurement and Pauli errors in a code-based fault-tolerance scheme. Our model is closer to a gate error model in that it accounts for large-scale entanglement generation from finite-sized resources.} known as the \textit{hardware-agnostic fusion error model} in Ref.~\cite{bartolucci2021fusion}. We demonstrate that all blocks have a threshold that agrees to within error with the bulk memory threshold. We also evaluate the logical error rate scaling of each block in the subthreshold regime and evaluate the resources required to achieve a target logical error rate for practical regions of interest. In doing so, we see the important role that boundary conditions have on the logical error rate, and in particular that periodic boundary conditions provide a favorable scaling.
Finally, we observe that for sufficiently large distances and low error rates, the logical error rates for different logical qubits participating in lattice surgery behave approximately independently\footnote{A potentially related observation is found in Ref.~\cite{farrelly2020parallel} for a different family of codes, whereby different logical qubits can be decoded independently while remaining nearly globally optimal.}.
\subsection{Simulation details} To perform simulations of complex logical block arrangements, we implement the template in software as a cubical complex with labeled 2-cells. Each template corresponds to a set of physical operations, and can be used together with a set of rules (building upon the description in Sec.~\ref{sec:topological_fbqc}) to automatically generate a set of syndrome graphs (as defined below), and a set of bit masks representing the logical membranes thereon. We can subsequently use these syndrome graphs to perform sampling and decoding of errors.
\subsubsection{Syndrome graphs} To evaluate the thresholds and below threshold scaling behavior of logical blocks we rely on the \textit{syndrome graph} representation of errors and checks. This representation can be used for flexible error sampling and decoding, sufficient for many decoders such as minimum-weight perfect matching (MWPM)~\cite{dennis2002topological,kolmogorov2009blossom} and union find (UF)~\cite{delfosse2017almost}. The syndrome graph representation can be defined for any fusion network where each measurement ($X\otimes X$, $Z \otimes Z$, or single-qubit measurement) belongs to precisely two local check operator generators, such as the six-ring network. We define the syndrome graph as follows. \begin{defn}
(Syndrome graph). Let $\mathcal{C}_{\text{local}}$ be the set of local check generators for $\mathcal{C}$ (depicted in Fig.~\ref{figFBQCFeatures} for the six-ring network). We define the syndrome graph $G_{\text{synd}} = (V_{\text{synd}},E_{\text{synd}})$ by placing a vertex $v\in V_{\text{synd}}$ for every local check generator $c\in \mathcal{C}_{\text{local}}$, and an edge between two vertices if their corresponding checks share a measurement. \end{defn}
Flipped measurement outcomes are represented by edges of the syndrome graph, and the syndrome can be obtained by taking their mod-2 boundary (in other words, a flipped check operator corresponds to a vertex with an odd number of flipped edges incident to it). Logical membranes are represented on the syndrome graph as a collection of edges corresponding to the fusions and measurements that it consists of. We refer to this subset of edges as a logical mask. A logical error corresponds to a set of edges whose mod-2 boundary is zero, and that has odd intersection with a logical mask.
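The syndrome-graph bookkeeping above can be made concrete with a minimal sketch (the function and variable names are illustrative, not the paper's software): edges are fusions/measurements joining two local checks, the syndrome is the mod-2 boundary of the flipped-edge set, and a logical error is a syndromeless error with odd overlap with a logical mask. The single-edge mask in the toy example below merely stands in for a projected logical membrane.

```python
# Minimal sketch of the syndrome-graph representation (hypothetical names).
# Each edge joins the two local checks that share a measurement; a set of
# flipped edges yields a syndrome via its mod-2 boundary.

def syndrome(edges, flipped):
    """Checks (vertices) touched by an odd number of flipped edges."""
    counts = {}
    for i in flipped:
        for v in edges[i]:
            counts[v] = counts.get(v, 0) + 1
    return {v for v, c in counts.items() if c % 2 == 1}

def is_logical_error(edges, flipped, mask):
    """Undetected harmful error: empty syndrome, odd overlap with mask."""
    return not syndrome(edges, flipped) and len(flipped & mask) % 2 == 1

# Toy 4-cycle of checks a-b-c-d; the mask {0} stands in for a logical mask.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
assert syndrome(edges, {0}) == {"a", "b"}      # one flip -> a defect pair
assert syndrome(edges, {0, 1, 2, 3}) == set()  # a cycle has no syndrome
assert is_logical_error(edges, {0, 1, 2, 3}, {0})  # odd mask overlap
```
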
For the bulk six-ring fusion network, the syndrome graph consists of two decoupled 12-valent graphs, which are referred to as primal and dual syndrome graphs. Domain walls and twists may prevent the syndrome graph from decomposing into two disconnected components (as is the case in the phase gate for example), as, in particular, the two syndrome graphs are swapped across domain walls and fused together along twists. Example syndrome graphs for a lattice surgery block and phase gate block are depicted in Fig.~\ref{figLatticeSurgerySyndromeGraph}.
\begin{figure}
\caption{(left) Syndrome graph and a syndrome mask (i.e., a logical membrane projected onto the syndrome graph) for lattice surgery. The three input ports are displayed at the front along the bottom of the diagram. (right) Syndrome graph for the phase gate.}
\label{figLatticeSurgerySyndromeGraph}
\end{figure}
\subsubsection{Noise model and decoder} Resource states are in general noisy, each subject to Pauli noise and erasures that can arise during preparation, propagation and measurement. The net effect of noise processes affecting both resource states and operations on them can be phenomenologically captured by modeling each measurement outcome (both fusion outcomes and single-qubit measurement outcomes) as being subject to an erasure error with probability $p_E$ or bit-flip error with probability $p_P$. This is known as the \textit{hardware-agnostic fusion error model} in Ref.~\cite{bartolucci2021fusion}. Specifically, we assign each measurement a probability of $p_E$ that the outcome is erased, a probability of $p_P(1-p_E)$ that the outcome is incorrect (i.e., bit flipped but not erased), and a probability of $(1-p_E)(1-p_P)$ that the measurement is correct. We refer to $p=(p_P, p_E)$ as the physical error rate. (Note that up to a reparametrization, the bit-flip errors are equivalent to an IID Pauli-$X$ and Pauli-$Z$ error channel acting on each qubit.)
To decode these errors, we utilize the union-find decoder of Ref.~\cite{delfosse2017almost} due to its optimal performance against erasures, high performance against bit-flip noise, and fast runtime. We remark that higher tolerance to bit-flip noise can be achieved with the minimum-weight perfect-matching decoder~\cite{dennis2002topological,kolmogorov2009blossom}. More details on the general decoding problem for (2+1)D logical blocks can be found in Sec.~\ref{secDecoding}.
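The erasure-handling core of the union-find decoder can be illustrated by the standard peeling subroutine: build a spanning forest of the erased edges and peel leaves, flipping an edge exactly when its leaf endpoint carries a defect. The sketch below is hypothetical and assumes the error is supported on the erasure; the full decoder of Ref.~\cite{delfosse2017almost} additionally grows clusters to handle bit-flip noise.

```python
# Minimal erasure-peeling sketch (illustrative only). Vertices are checks
# 0..num_vertices-1, edges[i] = (u, v), erased is a set of edge indices,
# and syndrome is the set of defect vertices.

def peel(num_vertices, edges, erased, syndrome):
    syndrome = set(syndrome)
    adj = {v: [] for v in range(num_vertices)}
    for i in sorted(erased):
        u, v = edges[i]
        adj[u].append((i, v))
        adj[v].append((i, u))
    # Build a spanning forest of the erased subgraph by DFS.
    forest, seen = [], set()
    for root in range(num_vertices):
        if root in seen or not adj[root]:
            continue
        stack = [root]
        seen.add(root)
        while stack:
            u = stack.pop()
            for i, v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    forest.append((i, u, v))
                    stack.append(v)
    correction = set()
    # Peel in reverse insertion order: v is then a leaf of the remaining
    # tree, so flip the edge iff v carries a defect, moving defects up.
    for i, u, v in reversed(forest):
        if v in syndrome:
            correction.add(i)
            syndrome.symmetric_difference_update({u, v})
    assert not syndrome  # holds when the erasure supports the error
    return correction

# Toy 4-cycle: edges 0 and 1 are erased and flipped, giving defects {0, 2}.
correction = peel(4, [(0, 1), (1, 2), (2, 3), (3, 0)], {0, 1}, {0, 2})
assert correction == {0, 1}
```
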
\subsubsection{Simulated logical blocks} We simulate six-ring fusion networks for the identity gate, the Hadamard, the phase gate, and the $Z$-type lattice surgery involving a variable number of logical qubits as described in Sec.~\ref{secLogicBlocks}. We also simulate the bulk fusion network on a cube with periodic boundary directions (i.e., a 3-torus), in order to have a bulk comparison. While this block contains neither ports nor nontrivial logical correlators, for simulation purposes, we may define logical membranes on each of the nontrivial 2-cycles of the torus, such that failure is declared for any error spanning a nontrivial 1-cycle of the torus.
For each of these block families, we generate a family of syndrome graphs of varying distance to be used for logical error rate Monte Carlo simulations (involving error sampling and decoding). For the purposes of simulation, we assume certain fictitious boundary conditions for the ports where all qubits are perfectly read out (such that errors terminating on ports always generate syndromes). This allows logical operators to be noiselessly read out on each port.
\begin{figure*}\label{figNumericsDecay}
\end{figure*}
\subsection{Numerical results}\label{secNumericalResults}
\subsubsection{Logical block thresholds} We provide numerical estimates of the noise threshold values for each logical block against both IID erasure noise with rate $p_E$ and IID outcome bit-flip noise with rate $p_P$.
To evaluate the threshold value, we sweep along different error rates $p_E$ and $p_P$ and evaluate the logical error rate for different block sizes. The logical error rate at each physical error rate is obtained by performing $10^{7}$ to $10^{8}$ decoder trials for each block distance, up to $d = 26$. Each decoder trial consists of sampling an error configuration, running the decoder, and declaring success if and only if all of the correlation operators are successfully recovered.
The threshold values for each block are displayed in Table~\ref{tab:threshtable} and the threshold plots from which these values are obtained are displayed in Figs.~\ref{figThresholdsEr} and \ref{figThresholdsPa} below. For each error model, we estimate thresholds for each block that all agree to within error bars. This verifies the conventional intuition that the threshold should be determined by the bulk properties alone, and insensitive to the presence of codimension-1 and codimension-2 features. We remark that \textit{a priori} this may not have been true, due to the well-known error-correction stat-mech correspondence~\cite{dennis2002topological} and the fact that bulk phase properties can be driven by boundary conditions~\cite{henkel1994boundary}. See App.~\ref{secThreshAnalysis} for more details on how the thresholds are estimated.
\subsubsection{Overhead for target error rate and logical error rate fits}
The threshold sets an upper bound on the rate of errors that are tolerable for the scheme. However, to estimate the overhead for fault-tolerant quantum computation, it is important to estimate the block distances required for each logical operation to achieve a target logical error rate.
For a given physical error rate $p$ and block distance $d$, we perform $10^{9}$ to $10^{10}$ trials to obtain estimates of the logical error rate. Following Ref.~\cite{bravyi2013simulation}, we fit the logical error rate to an exponential decay as a function of distance according to \begin{equation}\label{eqLERfit} P_{p}(d) = \alpha_p e^{-\beta_p d}, \end{equation} where both $\alpha_p$ and $\beta_p > 0$ depend on the physical error rate $p$. We refer to $\beta_p$ as the logical decay and to $\alpha_p$ as the logical prefactor. In Fig.~\ref{figNumericsDecay}, we plot the logical decay $\beta_p$ as a function of the physical error rate. In App.~\ref{appLogicalPrefactor} we explain the fitting methodology and also plot the logical prefactor $\alpha_p$ as a function of the physical error rate.
One can invert Eq.~(\ref{eqLERfit}) to obtain the distance required to achieve a target logical block error rate at a given physical error rate. In Fig.~\ref{figNumericsDecay} we display estimates for the required distance based on numerical fits for $\alpha_p$ and $\beta_p$. For concreteness, we choose a target logical block error rate of $10^{-12}$. Such error rates are relevant for many existing algorithms in the fault-tolerant regime, for example for quantum chemistry applications~\cite{kivlichan2020improved, von2020quantum, kim2021faulttolerant, su2021faulttolerant}. The figure shows the importance of having error rates significantly below threshold, as otherwise the distance and fault-tolerance overheads required can become extremely large.
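Inverting the exponential-decay ansatz gives the required distance directly, $d \geq \ln(\alpha_p / P_{\mathrm{target}})/\beta_p$. A sketch with illustrative parameter values (not fitted values from the paper):

```python
import math

# Distance required to reach a target logical block error rate, from
# P_p(d) = alpha_p * exp(-beta_p * d)  =>  d >= ln(alpha_p/P_target)/beta_p.

def required_distance(alpha_p, beta_p, P_target):
    return math.ceil(math.log(alpha_p / P_target) / beta_p)

# Illustrative numbers: alpha_p = 0.5, beta_p = 0.3, target rate 1e-12.
d = required_distance(0.5, 0.3, 1e-12)
assert 0.5 * math.exp(-0.3 * d) <= 1e-12          # target achieved at d
assert 0.5 * math.exp(-0.3 * (d - 1)) > 1e-12     # but not at d - 1
```
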
\begin{table}
\begin{tabular}{ p{1.5cm}||p{2.3cm}|p{2.5cm} }
\multicolumn{3}{c}{Threshold values} \\
\hline
Block & Erasure rate $p_E$ & Bit-flip rate $p_P$ \\
\hline
3-Torus & $(12.02\pm 0.04)\%$ & $(0.948\pm 0.005)\%$ \\
Identity & $(12.04\pm 0.05)\%$ & $(0.955\pm 0.006)\%$ \\
Phase gate & $(12.05\pm 0.06)\%$ & $(0.951\pm 0.005)\%$ \\
Hadamard & $(12.02\pm 0.06)\%$ & $(0.948\pm 0.005)\%$ \\
LS $Z^{\otimes 2}$ & $(12.06\pm 0.07)\%$ & $(0.956\pm 0.006)\%$ \\
LS $Z^{\otimes 3}$ & $(12.07\pm 0.08)\%$ & $(0.957\pm 0.006)\%$ \\
LS $Z^{\otimes 4}$ & $(12.08\pm 0.09)\%$ & $(0.958\pm 0.008)\%$ \\
\hline \end{tabular} \caption{\label{tab:threshtable} Threshold values for various logical blocks. `LS' stands for lattice surgery.} \end{table}
\subsubsection{Periodic vs. open boundary conditions} Next, we observe the importance of boundary conditions for logical error rates. In Fig.~\ref{figNumericsDecay} we note that the distance required for a target logical error rate for the 3-torus block (having periodic boundary directions) is notably smaller than for any of the other blocks, and in particular the identity block. In other words, the logical error rates are more greatly suppressed with periodic boundary conditions as opposed to open boundary conditions, as has previously been observed in Ref.~\cite{fowler2013accurate}. In particular, we plot $\beta_p^{\mathbb{T}^3} / \beta_p$ for each block, where $\beta_p^{\mathbb{T}^3}$ is the logical decay for the 3-torus block and $\beta_p$ is the logical decay for a given block. This ratio is an approximate measure of the distance saving offered by periodic boundary conditions---it tells us the factor that each block distance must be increased by in order to have the same logical error rate scaling as the 3-torus.
This difference in performance may be explained by \textit{entropy}; for a given distance $d$, the identity block (i.e., with open boundary conditions) contains a larger number of logical errors of weight $k\geq d$ than the 3-torus block (i.e., with periodic boundary conditions). In addition to the favorable error rates, periodic boundary conditions can be used to encode more qubits, offering potentially further advantages at least when used for memory. These results demonstrate that entropy plays an important role in the design and performance of logical gates (as has previously been noted for quantum error-correcting codes in Ref.~\cite{beverland2019role}). In particular, the scaling advantage motivates us to study schemes without boundaries, and we present one such proposal---based on the teleportation of twists---in Sec.~\ref{secPortals}.
\subsubsection{Stability of lattice surgery} Finally, we observe that at low physical error rates, the logical decay for lattice surgery is insensitive to the number of logical qubits. This is expected at sufficiently large distances and low error rates, as logical errors act approximately independently on each qubit participating in the lattice surgery. More precisely, the logical error rate for $n$ independent planar codes undergoing memory is expected to behave like
\begin{equation}
\overline{P}_n = 1 - (1 - \overline{P}_1)^n = n\overline{P}_1 + \mathcal{O}(\overline{P}_1^2),
\end{equation}
where $\overline{P}_1$ is the logical error rate of a single planar code. Therefore, at sufficiently low error rates and to first order, we expect the logical decay to be invariant to the number of qubits undergoing lattice surgery, and the logical prefactor $\alpha_p$ to increase proportionally to the number of qubits $n$. This agrees well with the observed data in Fig.~\ref{figNumericsDecay} and Fig.~\ref{figLogicalPrefactor} below.
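The first-order expansion quoted above is easy to check numerically (the numbers below are illustrative, not simulation results):

```python
# For n independent logical qubits with per-qubit logical error rate P1,
# the joint failure rate is 1 - (1 - P1)^n ~= n * P1 for small P1.

def joint_rate(P1, n):
    return 1 - (1 - P1) ** n

P1, n = 1e-6, 4
exact = joint_rate(P1, n)
first_order = n * P1

# The discrepancy is O(P1^2): here it is of order C(n,2) * P1^2.
assert exact < first_order
assert abs(exact - first_order) < 10 * n ** 2 * P1 ** 2
```
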
\section{Topological quantum computation without boundaries: Portals and teleported twists}\label{secPortals}
In this section, we introduce a new computational scheme for twist-encoded qubits. In this scheme, logical information is encoded in twist defects and fault-tolerant logical operations are achieved by introducing space-time defects known as portals, which teleport the twists forward in time. This scheme is motivated by the favorable logical error rate suppression observed for logical blocks that have no boundaries. To our knowledge, this is the first universal surface code scheme that does not require boundaries.
The native gates implementable with twists and portals are PPMs (i.e., measurement of $n$-qubit Pauli operators), which are universal when supplemented with noisy magic states. In this scheme, a PPM is implemented by introducing a pair of portals that teleport a subset of twists to another space-time location. These portals generally require long-range operations to implement. For concreteness, we focus on portals and twists in a photonic FBQC architecture, where such long-range operations are conceivable~\cite{bartolucci2021fusion, bombin2021interleaving}.
\subsection{Encoding in twists}
First consider a standard encoding whereby $n$ logical qubits can be encoded in $2n{+}2$ twists on a (topological) sphere (see, e.g., Refs.~\cite{bombin2010topological,barkeshli2013twist}). This can be understood using the Majorana fermion mapping~\cite{bombin2010topological,barkeshli2013twist}, whereby each twist defect can be expressed by a Majorana fermion operator $\gamma_i$, $i \in \{1,\ldots, 2n{+}2\}$ satisfying \begin{equation}\label{eqMFCommutation} \gamma_j \gamma_k + \gamma_k \gamma_j = 2\delta_{jk}, \end{equation} where $\delta_{jk}$ is the Kronecker delta. In terms of these Majorana fermions, the logical operators may be expressed as \begin{align}\label{eqMFEncoding} \overline{X}_k = i\prod_{1\leq i \leq 2k} \gamma_i, \quad \overline{Z}_k = i\gamma_{2k} \gamma_{2k+1}. \end{align}
One can verify that these logical operators satisfy the correct (anti)commutation relations using Eq.~(\ref{eqMFCommutation}). An arbitrary Pauli operator is thus represented by an even number of Majorana operators, $\mathcal{P}_{n} \cong \langle i , \gamma_j \gamma_k ~|~ j,k \rangle$. In Fig.~\ref{figTwistsEncoding} we depict surface code string operator representatives for the logical operators of Eq.~(\ref{eqMFEncoding}), which are realized as loops enclosing the corresponding twists.
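These algebraic claims can be verified numerically for the smallest case, $n = 1$ (i.e., $2n{+}2 = 4$ Majorana operators on two qubits), using the standard Jordan-Wigner representation $\gamma_{2j-1} = Z^{\otimes(j-1)} \otimes X$, $\gamma_{2j} = Z^{\otimes(j-1)} \otimes Y$. This sketch is purely illustrative and is not taken from the paper:

```python
import numpy as np

# Pauli matrices and the four Jordan-Wigner Majorana operators on 2 qubits.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
g = [np.kron(X, I), np.kron(Y, I), np.kron(Z, X), np.kron(Z, Y)]

# Check Eq. (eqMFCommutation): {gamma_j, gamma_k} = 2 delta_jk.
for j in range(4):
    for k in range(4):
        anti = g[j] @ g[k] + g[k] @ g[j]
        assert np.allclose(anti, 2 * np.eye(4) if j == k else np.zeros((4, 4)))

# Logical operators of Eq. (eqMFEncoding) for k = 1:
# Xbar = i g1 g2, Zbar = i g2 g3; they square to I and anticommute.
Xbar = 1j * g[0] @ g[1]
Zbar = 1j * g[1] @ g[2]
assert np.allclose(Xbar @ Xbar, np.eye(4))
assert np.allclose(Zbar @ Zbar, np.eye(4))
assert np.allclose(Xbar @ Zbar + Zbar @ Xbar, np.zeros((4, 4)))
```
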
\textbf{Operator traceability}. The enabling property for the teleported twist scheme is that all logical Pauli operators $\mathcal{P}_n$ in the twist encoding are \textit{traceable}. Traceability was introduced in Ref.~\cite{krishna2020topological}, and for our purposes, we say that a logical operator is traceable if it can be represented as a connected, non-self-intersecting string operator that is piecewise (i.e., locally) primal or dual. (Note that a traceable string operator can swap between primal and dual as it crosses a domain wall.)\footnote{For instance, logical $\overline{X}$ and $\overline{Z}$ operators of a toric code or planar code are traceable, but the logical $\overline{Y}$ is not, due to the unavoidable self-intersection of string operators (see, for example, Fig.~\ref{figLogicalBlockConcatenation}).} That twist-encoded logical operators are traceable is shown in App.~\ref{appTwistTraceability}. As a consequence of traceability, every logical operator $\overline{P}\in \mathcal{P}_n$ can be identified by a subset of twists $\mathcal{T}_{\overline{P}}$ that it encloses. Furthermore, any logical operator $\overline{Q} \in \mathcal{P}_n$ that commutes with $\overline{P}$ can be generated by traceable loop operators, each contained either within $\mathcal{T}_{\overline{P}}$ or outside of $\mathcal{T}_{\overline{P}}$. Examples of commuting traceable logical operators for the twist encoding are shown in Fig.~\ref{figTwistsEncoding}.
\begin{figure}
\caption{
Encoding $n$ qubits in $2n+2$ twists. (top) The generating set of logical operators is traceable. (bottom) Example of a product of Pauli generators that is also traceable. A general Pauli operator can be associated with the twists it encloses. In Fig.~\ref{figLogicalTraceability} we show that all Pauli operators are traceable.
}
\label{figTwistsEncoding}
\end{figure}
\begin{figure*}
\caption{ Portals in the $6$-ring fusion network. (left) The modified fusion pattern to create a portal pair between two topological disks. The qubits (belonging to resource states) on either side of each portal are fused with the qubits on the corresponding side of the other portal (i.e., qubits of like-colors are fused; pink is fused with pink, and teal is fused with teal). (right) An example of a check operator spanning the portals.
}
\label{figPortalMicroscopics}
\end{figure*}
\subsection{Portals} To perform logic on these twist-encoded qubits, we introduce the concept of a portal. Portals are two-sided, codimension-1 (i.e., two-dimensional) objects that can be thought of as a new type of geometric defect for space-time surface code instruments. They represent geometrically nonlocal correlations. We are specifically interested in portals that come in pairs, but note that self-portals (i.e., those defined on a single connected surface) are also possible and may have other interesting applications. To be concrete, let us consider how to create portals in the six-ring fusion network. These portals are similar to the wormholes introduced in the 2D case in Ref.~\cite{krishna2020topological}.
To microscopically define a portal pair, we modify the bulk fusion pattern of the six-ring fusion network along two two-dimensional topological disks. Firstly, consider the dual of the fusion complex $\mathcal{L}^*$. (We obtain $\mathcal{L}^*$ from $\mathcal{L}$ by replacing vertices with volumes, edges by faces, and so on. In $\mathcal{L}^*$, volumes represent resource states and faces represent fusions between resource states.) Consider a topological disk $D$ consisting of a number of faces in $\mathcal{L}^*$. Let $D'$ be another topological disk obtained by translating (and potentially rotating) $D$. Disks $D$ and $D'$ specify a set of fusions. Each disk has two sides, separating qubits from resource states on either side. In the bulk, qubits on one side of a given disk are fused with qubits on the other side. To create a portal pair, we pair sides from $D$ and $D'$ and fuse qubits on each side of $D$ with qubits on the matching side of $D'$. This is depicted in the fusion graph in Fig.~\ref{figPortalMicroscopics}.
By changing the fusion group in this way, we obtain a new check operator group, $\mathcal{C}$, which contains check operators supported between the two disks, as depicted in Fig.~\ref{figPortalMicroscopics}. One can view the check operator entering one side of the disk as being mapped to the matching side of the other. Correspondingly, excitations can be mapped between disks by chains of errors entering one disk and emerging from the matching side of the other disk. We therefore refer to these disks as portals and their effect is to modify the connectivity geometry of the fusion network (leading to changes in topology and geometry).
\subsection{Logical operations by teleporting twists}
We now show how portals enable nondestructive measurements of logical operators for the twist encoding. As before, for any logical Pauli operator $\overline{P} \in \mathcal{P}_n$, we let $\mathcal{T}_{\overline{P}}$ be the set of twists enclosed by $\overline{P}$. To measure $\overline{P}$, construct a pair of portals to teleport the twists belonging to $\mathcal{T}_{\overline{P}}$ to a future temporal slice. Namely, define a topological disk $D_{\overline{P}}$ that encloses precisely the twists belonging to $\mathcal{T}_{\overline{P}}$ and another topological disk $D_{\overline{P}}'$ obtained by translating $D_{\overline{P}}$ by $d$ timesteps into the future (where $d$ is the fault distance). By matching the top face of one disk with the bottom face of the other (and vice versa), disks $D_{\overline{P}}$ and $D_{\overline{P}}'$ define a pair of portals. The fusion pattern is modified such that twists and defects entering through the side of one portal are transmitted through the corresponding side of the other portal (one can verify that the check operator structure is valid, as we showed for portals in the bulk). We depict this in Fig.~\ref{figTeleportedTwist}.
In general, one needs to resolve the locations of domain walls to ensure compatibility with the locations of twists that are traveling through the portals. Fortunately, the number of twists enclosed by a Pauli operator is always even, and therefore so is the number of twists entering a portal implementing its measurement. Therefore we can always find a compatible domain wall configuration, as exemplified in Fig.~\ref{figTeleportedTwist}.
We claim that this block and fusion pattern implements measurement of $\overline{P}$. In particular, we need to first check that the instrument network contains a logical membrane $M^{\overline{P},\mathbbm{1}}$ corresponding to the logical correlation $P\otimes \mathbbm{1} \in \mathcal{S}(M_{P})$. We verify this graphically in Fig.~\ref{figTeleportedTwist}, where the correlation surface of the fusion network corresponding to the input logical operator $\overline{P}$ is ``capped off'' and thus measured. Secondly, we need to check that any logical operator $\overline{Q} \in \mathcal{P}_n$ commuting with $\overline{P}$ is undisturbed, meaning that there are logical membranes $M^{\overline{Q},\overline{Q}}$ corresponding to the stabilizer $Q\otimes Q \in \mathcal{S}(M_{P})$. This is verified by the traceability property---every such commuting $\overline{Q}$ is generated by loop operators wholly within $D_{\overline{P}}$ or its complement, and the corresponding membranes propagate through the instrument network either through the portals or bulk, following Fig.~\ref{figTeleportedTwist}.
\begin{figure}\label{figTeleportedTwist}
\end{figure}
If the twists are separated by distance $d$ then it is sufficient to separate the portals by a distance $d$ to maintain an overall fault distance of $d$ for the protocol. We remark that many other portal and twist configurations are possible, including spacelike separated portals. Surprisingly, no boundaries need to be utilized in this construction. As we have seen in Sec.~\ref{secNumerics}, the lack of boundaries provides a favorable logical error rate scaling relative to the fault distance. Therefore, this scheme may provide an efficient approach to logical operations, as one can arrange twists in compact geometries.
Beyond twist-based encodings, portals can also be used to save overhead for logical blocks based on planar encodings, as observed in Ref.~\cite{bombin2021interleaving}. For instance, portals can be used to compose the toric code spiders of Sec.~\ref{secConcatenation} that may be spatially separated. This is attractive in the context of measuring the stabilizer checks of a LDPC code.
\section{Conclusion}\label{secConclusion}
In this paper we have introduced a comprehensive framework for the analysis and design of universal fault-tolerant logic. The key components of this are the concept of fault-tolerant logical instruments, along with their specific application to surface-code-based fault tolerance with logical block templates.
\textbf{Platform-independent logical gate definitions.} The framework of logical templates introduces a platform-independent method for defining universal logical operations in topological quantum computation based on surface codes. We have demonstrated how symmetry defects and boundaries can be used to encode and manipulate logical information, and have explicitly shown how these can be mapped onto fusion-based quantum computation as well as circuit-based models. As an application of our framework, we have presented volume-efficient logic templates, which, in addition to fusion-based quantum computation, can be utilized in any surface-code-based architecture. We hope that this can provide a valuable basis for a more unified approach to the study and design of fault-tolerant gates such that new techniques can map easily between different hardware platforms and models of computation.
\textbf{Flexible and scalable tools for numerical analysis.} The logical blocks framework enables a software-based mapping to physical instructions and a powerful tool for performing numerical analysis on complex logic blocks and their composition into small circuits. Using these tools, we have numerically investigated the performance of a set of Clifford logical operations, which when supplemented with noisy magic states (which can be distilled), are universal for fault-tolerant quantum computation. We have verified that the gate and memory thresholds are in agreement, and we have observed the important role that geometry and topology play in the fault-tolerance overhead---an important consideration when estimating resource costs for useful computations. The numerical results indicated that boundaryless computation appears to be a promising direction due to the further suppression of logical error rates for blocks without boundaries. As quantum technologies advance closer to implementing large-scale fault-tolerant computation~\cite{doi:10.1126/science.abi8378,egan2021fault,ryan2021realization,postler2021demonstration}, it is essential to have scalable software tools that allow analysis of complex logical operations. Our simulation framework based on logical block templates enables these advanced simulations by providing explicit definition, error sampling, and decoding of error-prone logical operations on dozens of logical qubits with complex sets of topological features.
\textbf{Exploration of novel logic schemes.} We have focused on designing logical gates directly---as fault-tolerant instruments---rather than as operations on a code, as this holistic view enables the construction of schemes that would be nonobvious from a code-centric view. As a specific example of this, we introduced a new scheme for surface code computation based on the teleportation of twist defects, which was motivated by the improved performance of boundaryless computation. In this scheme, logical qubits are encoded in twists, and logical operations are performed by modifying the global space-time topology with nonlocal defects called portals. Such portals in general require the ability to perform long-range operations, such as those that are available in a photonic FBQC architecture. This scheme may offer further reductions in resource overheads.
\textbf{Future work.} We have focused primarily on surface code fault-tolerant instruments and local operations, but the concepts we have introduced can be applied much more broadly. More general topological codes, in higher dimensions, with non-Euclidean geometry, or color codes may benefit from study in the framework of quantum instrument networks. Further study into the possible operations and resource reductions that can be offered by such codes is an important problem, as such codes can support significantly richer symmetry groups and domain walls. Further study into the power of nonlocal operations is also a promising avenue, with the teleported twist logic scheme we have introduced as one such example. For instance, transversal gates between multiple copies of the surface code may offer drastic resource reductions up to $\mathcal{O}(d)$ per logical operation, where $d$ is the block distance (i.e., the gate volume may reduce from $\mathcal{O}(d^3)$ to $\mathcal{O}(d^2)$). These transversal gates can be thought of in terms of symmetry domain walls between two or more copies of the surface code. It is advantageous to study the properties of transversal gates in the setting of fault-tolerant instruments, as they appear as codimension-1 defects on a constant time slice, where one can more easily reason about fault distances. Furthermore, one may study nontransparent domain walls (e.g., to implement Pauli measurements) that, in contrast to the domain walls we have focused on, allow certain anyonic excitations to condense on them. In these cases, special purpose decoders are generally required to decode the instrument network, and more detailed analysis is required to fairly assess their performance.
Beyond purely topological codes, there is a lot of promise for fault tolerance in more general quantum LDPC codes~\cite{tillich2014quantum,gottesman2013fault,fawzi2018constant,fawzi2018efficient,breuckmann2021ldpc,hastings2021fiber,breuckmann2021balanced,panteleev2021asymptotically}. We have provided a proof-of-principle approach for fault-tolerant quantum computation based on concatenating surface codes with a general LDPC code. Such concatenated schemes may offer the advantages of the high thresholds of surface codes with the reduced overhead of constant-rate LDPC codes, but further analysis is required to determine the regimes in which they may outperform conventional topological code-based schemes.
With recent advances in quantum hardware technology, and increased focus on large-scale, fault-tolerant computation, it is important that there is a unified language for fault tolerance that can span computational models and hardware platforms. We hope that the methods presented here provide a valuable step towards that goal.
\section{Acknowledgements} We thank Daniel Dries, Terry Farrelly, and Daniel Litinski for detailed feedback and discussions on the draft, and Sara Bartolucci, Patrick Birchall, Hugo Cable, Axel Dahlberg, Andrew Doherty, Megan Durney, Mercedes Gimeno-Segovia, Eric Johnston, Konrad Kieling, Isaac Kim, Ye-Hua Liu, Sam Morley-Short, Andrea Olivo, Sam Pallister, Mihir Pant, William Pol, Terry Rudolph, Karthik Seetharam, Jake Smith, Chris Sparrow, Mark Steudtner, Jordan Sullivan, David Tuckett, Andrzej P\'erez Veitia, and all our colleagues at PsiQuantum for useful discussions. R.V.M.'s current affiliation is Microsoft Station Q.
\section{Appendix}\label{secAppendix}
\subsection{Stabilizer states, operators, maps, and instruments}\label{secStabilizerOperatorsMapsAndInstruments}
The {\it stabilizer framework} \cite{gottesman1997stabilizer,gottesman2010introduction} has proven to be an extremely effective tool for describing a constrained form of fault-tolerant quantum computing. In its most basic form, states are described by (Pauli) operators for which they are $+1$ eigenstates, rather than by their amplitudes in some computational reference basis. Because it is possible to have highly entangled states be eigenstates of a set of commuting tensor product operators, this greatly enriches the set of states that can be described efficiently. In general, in the Pauli stabilizer formalism, $n$-qubit pure states are described (up to a global phase) by $n$ commuting and independent Pauli product operators on $n$ qubits. The number of classical bits required to provide such a representation is $\mathcal{O}(n^2)$ and allows representing $2^{\mathcal{O}(n^2)}$ distinct states.
The set of stabilizer states is preserved under the so-called {\it Clifford unitaries}. Together, stabilizer states and Clifford unitaries admit a succinct algebraic description as a symplectic vector space over ${\mathbbm{Z}}_2$. Within this constrained subset, this structure allows for efficient classical simulation as well as exhaustive theoretical analysis~\cite{gottesman1997stabilizer,gottesman2010introduction,aaronson2004improved}. In this appendix, we describe extensions of this algebraic description and of the set of operators, maps, and instruments that allow equally efficient simulation thanks to the underlying algebraic structure. These extensions build on intuition gained from tensor network descriptions of stabilizer states (see, for example, Refs.~\cite{pastawski2015holographic, cao2021quantum,farrelly2021local}). Although, in general, stabilizer operators need not be tensor products of Pauli operators, this article makes the implicit assumption that they are, so the term {\it stabilizer} should be interpreted as {\it Pauli stabilizer}.
{\bf Notation.} Because the Pauli operators \begin{align*} X={\begin{pmatrix}0&1\\1&0\end{pmatrix}},~
Y={\begin{pmatrix}0&-i\\ i&0\end{pmatrix}},~
Z={\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}},~
I={\begin{pmatrix}1&0\\ 0&1\end{pmatrix}} \end{align*} play such a prominent role, we reserve these symbols for said operators. Furthermore, we liberally omit the tensor product operator symbol (``$\otimes$'') when specifying tensor products of such Pauli operators such that $X \otimes Y \otimes I \otimes Z$ may be simply denoted by $XYIZ$ (i.e., omission of an operator symbol among Pauli constants implies the tensor product rather than the usual matrix product).
\subsubsection{Stabilizer states} \begin{defn}\label{defStabState} A pure {\bf stabilizer state} $\ket{\psi}$ in an $n$-qubit Hilbert space is a state that can be specified (up to a scalar) as the common $+1$ eigenstate of a maximal Abelian subgroup ${\mathcal S}$ of $n$-qubit Pauli product operators (${\mathcal{P}}_n$) such that $-1 \not\in \mathcal{S}$. \end{defn} A maximal Pauli stabilizer group $\mathcal{S}$ on $n$ qubits can be defined through $n$ independent and commuting Pauli operators $P_1, \ldots, P_n$, and is denoted as $ \mathcal{S}=\langle P_1, \ldots, P_n \rangle$.
{\bf Example (Three-qubit GHZ state):} The three-qubit entangled state $\ket{\psi} = (\ket{000}+\ket{111})/{\sqrt{2}}$ is a $+1$ eigenstate of the three independent Pauli product operators $Z Z I$, $I Z Z$ and $X X X$. It is thus a stabilizer state stabilized by the group \begin{equation} \mathcal{S}(\ket{\psi}) \equiv \langle Z Z I,~ I Z Z,~ X X X \rangle. \end{equation}
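The stabilizer conditions above are straightforward to check numerically. The following minimal NumPy sketch (the helper \texttt{pauli} is ours, not part of the formalism) verifies that the GHZ state is a common $+1$ eigenstate of the three generators:

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def pauli(factors):
    """Tensor (Kronecker) product of a list of single-qubit operators."""
    return reduce(np.kron, factors)

# Three-qubit GHZ state (|000> + |111>)/sqrt(2).
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

# Generators from the text: ZZI, IZZ, XXX.
generators = [pauli([Z, Z, I]), pauli([I, Z, Z]), pauli([X, X, X])]

# Each generator fixes the state: P|psi> = +|psi>.
for P in generators:
    assert np.allclose(P @ ghz, ghz)
```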
\subsubsection{Stabilizer operators} As mentioned earlier, the group of Clifford unitary operators (sometimes denoted as $\mathcal{C}_n$) is defined in such a way that it plays well with Pauli stabilizer states. These are the $n$-qubit unitary operators that map Pauli product operators onto Pauli product operators: \begin{align}\label{eqCliffordGroup}
{\mathcal{C}}_n = \{ U \in \mathsf{SU}(2^n): P \in {\mathcal{P}_n} \implies U P U^\dagger \in {\mathcal{P}_n} \} \end{align} The defining condition for elements of the Clifford group can be reexpressed as a set of $2n$ stabilizer conditions of the form \begin{align}\label{eqCliffordStabilizer}
Q^\dagger U P = U \quad\text{ with }\quad Q:= UPU^\dagger \end{align} and $P$ ranging over all generators of the Pauli group $\mathcal{P}_n$.
In fact, this is a special case of what we call stabilizer operators (also called generalized Clifford operators). \begin{defn} An operator $O$ taking $k_{\text{in}}$ qubits as input, and $k_{\text{out}}$ qubits as output is a {\it stabilizer operator} if and only if the state $\ket{\psi(O)} := (I^{\otimes k_{\text{in}}} \otimes O)\ket{\Omega_{k_{\text{in}}}}$ is a stabilizer state. \end{defn} Here, $\ket{ \Omega_{k_\text{in} } } = \sum_{j=0}^{2^{k_\text{in}}-1} \ket{jj}$ is the (unnormalized) $2k_\text{in}$ qubit state used in the {\it operator state correspondence} (also known as the Choi-Jamio{\l}kowski isomorphism). In it, the first $k_\text{in}$ qubits and the remaining $k_\text{in}$ qubits are pairwise maximally entangled. It is equivalent (up to permutation of the tensor factors) to the tensor product of $k_\text{in}$ Bell states $\ket{\text{Bell}}^{\otimes k_\text{in}}$.
The stabilizer group $\mathcal{S}(O)$ is given by the stabilizer group of the corresponding state $\ket{\psi(O)}$ under the operator state correspondence. This definition includes, for instance, stabilizer states, Clifford unitaries, and full and partial projectors onto stabilizer subspaces. In the following examples, we use the tensor product to partition the input and output spaces.
{\bf Example (Identity):} For the single-qubit identity $I$, we have $\mathcal{S}(I)=\langle {X}\otimes {X}, {Z}\otimes {Z}\rangle$.
{\bf Example (Phase gate):} The Phase gate \begin{equation} S\equiv {\begin{pmatrix}1&0\\0&i\end{pmatrix}} \end{equation} has a corresponding state $\ket{\psi(S)} = \ket{00} + i \ket{11}$. Its corresponding stabilizer group is thus $\mathcal{S}(S)=\langle {X}\otimes {Y}, {Z}\otimes {Z}\rangle$.
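As a sanity check of the operator-state correspondence, one can construct $\ket{\psi(S)}$ explicitly and verify its stabilizers. A brief NumPy sketch (variable names are ours):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
S = np.array([[1, 0], [0, 1j]])  # phase gate

# Unnormalized operator state |psi(S)> = (I ⊗ S)(|00> + |11>) = |00> + i|11>.
omega = np.array([1, 0, 0, 1], dtype=complex)
psi = np.kron(I, S) @ omega

# Stabilizer generators from the text: X ⊗ Y and Z ⊗ Z.
for P in (np.kron(X, Y), np.kron(Z, Z)):
    assert np.allclose(P @ psi, psi)
```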
{\bf Example (Lowering operator):} The qubit lowering operator $\sigma^- := \ket{0}\bra{1}$ is also a stabilizer operator under this general definition. Its corresponding state is $\ket{\psi(\sigma^-)} = \ket{10}$, which is stabilized by $\mathcal{S}(\sigma^-) = \langle -Z\otimes I, I\otimes Z \rangle$.
{\bf Example (2-repetition encoding):} The encoding isometry $E := \ket{00}\bra{0} + \ket{11}\bra{1}$ that encodes a qubit onto the two-dimensional subspace of two qubits stabilized by $ZZ$, in a way that the $X$ operator is mapped onto $XX$ and the $Z$ operator is mapped onto $IZ$, is a stabilizer operator. Its corresponding state $\ket{\psi(E)}=\ket{000}+\ket{111}$ is a three-qubit GHZ state up to normalization and its stabilizer group is $\mathcal{S}(E)= \langle I\otimes ZZ, X\otimes XX, Z\otimes IZ \rangle $.
{\bf Example (Partial projection):} The partial projection $\Pi$ onto a subspace stabilized by a Pauli stabilizer subgroup $\mathcal{G}_{\Pi}$ is a stabilizer operator.
It is stabilized by $\mathcal{S}(\Pi) = \langle I \otimes G, G^T \otimes I, N^T \otimes N \rangle$, where $G$ is taken over a set of generators for $\mathcal{G}_{\Pi}$ and $N$ is taken over the commutant of $\mathcal{G}_{\Pi}$ (i.e., $\{ N\in\mathcal{P}_n ~|~ NGN^\dagger = G \quad \forall G \in \mathcal{G}_{\Pi}\}$).
\subsubsection{Stabilizer maps} In principle, a general quantum channel $\Lambda$ can be expressed using the Kraus representation as a combination of quantum operators $K_j$ as \begin{align}
\Lambda(\rho) = \sum_j K_j \rho K^\dagger_j. \end{align} The stabilizer formalism only specifies the stabilized operator up to a global scalar. Requiring a channel to be trace preserving fixes this magnitude up to an irrelevant global phase. However, the relative magnitude of operators is important for specifying a channel with multiple Kraus operators. For this reason, stabilizer channels are limited to a single Kraus operator. This makes the trace preservation requirement particularly restrictive, as it excludes all stabilizer operators other than the unitaries (corresponding to the Clifford group) and isometries from being lifted into a trace-preserving quantum channel interpretation.
In principle, it is possible to construct quantum channels from multiple stabilizer operators as Kraus operators, but it then becomes necessary to introduce relative scalars that significantly complicate the picture. We find that, for the idealized fault-tolerant QIN, significant headway can be made by restricting the focus to stabilizer maps and instruments. More general maps will become necessary when incorporating noise modeling.
\subsubsection{Stabilizer instruments} Instruments allow modeling of operations that have both quantum and classical output. The possibility of extracting classical data is what enables entropy extraction in fault-tolerant protocols, which is the main tool for noise mitigation. While stabilizer operators are clearly more general than Clifford unitaries, the restriction that maps be trace preserving and involve a single stabilizer operator leads to a very limited selection of valid stabilizer QINs.
Whereas, for general instruments, the structure of the classical outcomes is not prescribed, we define stabilizer instruments to possess a very specific $\mathbbm{Z}_2$-type linear structure on the classical outcomes. In particular, a quantum instrument will be a quantum map for which a subset of the output qubits can be treated as a classical outcome register. In a stabilizer quantum instrument, the map associated with each specific classical outcome will itself be a stabilizer operator. \begin{defn} A {\bf stabilizer quantum instrument} from $k_\text{in}$ qubit inputs onto $k_\text{out}$ qubit outputs and $b$ classical outcome bits is specified by a stabilizer group $\mathcal{S} \subseteq \mathcal{P}_{k_\text{in} + k_\text{out} + b}$ such that
\begin{itemize}
\item outcome bits $b$ carry classical correlations ($\mathcal{S}|_b \subseteq \mathbf{Z}_b$),
\item $\mathcal{S}$ can be completed into a maximal stabilizer group by including additional generators exclusively from $\mathbf{Z}_b$,
\item the instrument is trace preserving ($\mathcal{S}\|_{\text{in}} = \langle I \rangle$).
\end{itemize} \end{defn}
Here, $\mathbf{Z}_b$ denotes the group generated by $Z$-type operators in $b$ together with real phases and $\mathcal{S}\|_b$ is the subset of Pauli operators in $\mathcal{S}$ with support in subsystem $b$. We obtain $\mathcal{S}|_b$ by restricting each Pauli operator in $\mathcal{S}$ to subsystem $b$. It is only defined up to phases. Each distinct completion of $\mathcal{S}$ through elements of $\mathbf{Z}_b$ corresponds to a distinct stabilizer operator and to one of the terms composing the quantum instrument. The generators of $\mathcal{S}\|_b \equiv \mathbf{Z}_b \cap \mathcal{S}$ are predetermined parity combinations of classical outcomes and correspond to checks in composite quantum instruments. The number of distinct outcomes and stabilizer operators in the instrument is $2^b/|\mathcal{S}\|_b|$. The outcome contains $b-\log_2 |\mathcal{S}\|_b|$ uniformly random bits of information that are uncorrelated with the input state or the transformation performed on it.
The trace-preserving condition is included to guarantee that the instrument does not postselect on a specific subspace. It may be dropped if postselection is to be allowed.
{\bf Example (Single qubit destructive $M_Z$ measurement):} In a single-qubit measurement, the $Z$ observable of a qubit is mapped onto a classical bit. The (incomplete) stabilizer group defining the quantum instrument is $\mathcal{S}(M_Z) = \langle ZZ\rangle$, with the second tensor factor representing the classical outcome. The stabilizer can be completed by adding either $IZ$ or $-IZ$. The corresponding stabilizer operators are stabilized by $\langle ZI, IZ \rangle$ and $\langle -ZI, -IZ\rangle$. The corresponding instrument $\{ \mathcal{E}_0, \mathcal{E}_1 \}$ has two terms corresponding to the two possible outcomes and computational basis projections \begin{align*}
\mathcal{E}_0(\rho) &= \langle 0 \vert \rho \vert 0 \rangle \\
\mathcal{E}_1(\rho) &= \langle 1 \vert \rho \vert 1 \rangle. \end{align*}
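The two instrument terms can be checked numerically. A brief sketch, assuming NumPy, confirming that the outcome probabilities of $\{\mathcal{E}_0, \mathcal{E}_1\}$ resolve unity for a random density matrix (i.e., the instrument is trace preserving):

```python
import numpy as np

rng = np.random.default_rng(7)

# A random single-qubit density matrix (positive semidefinite, unit trace).
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

# The two instrument terms: E_0(rho) = <0|rho|0>, E_1(rho) = <1|rho|1>.
p0 = rho[0, 0].real
p1 = rho[1, 1].real

# Trace preservation: the outcome probabilities sum to one.
assert p0 >= 0 and p1 >= 0
assert np.isclose(p0 + p1, 1.0)
```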
{\bf Example (Partial projective measurement):} Consider a projective measurement of $XX$ on two qubits, wherein the two qubits are retained. The ordering of qubits used to represent the stabilizer is input, output, classical outcome bits. The (incomplete) stabilizer group defining the partial projective measurement is given by $\mathcal{S} = \langle XIXII, IXIXI, ZZZZI, XXIIZ \rangle$. The last generator indicates that the $XX$ observable is mapped onto the classical bit outcome, whereas the first three indicate that observables commuting with $XX$ are preserved. The corresponding instrument $\{ \mathcal{E}_0, \mathcal{E}_1 \}$ has two terms corresponding to the two possible outcomes and the corresponding projections \begin{align*}
\mathcal{E}_0(\rho) &= \Pi_{+XX} \rho \Pi_{+XX} \\
\mathcal{E}_1(\rho) &= \Pi_{-XX} \rho \Pi_{-XX}, \end{align*} with $\Pi_{\pm XX} := (II\pm XX)/{2}$ respectively being projectors onto the $\pm1$ eigenspaces of $XX$.
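A quick numerical check, using NumPy, that $\Pi_{\pm XX}$ are orthogonal projectors resolving the identity:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
XX = np.kron(X, X)
I4 = np.eye(4)

# Projectors onto the ±1 eigenspaces of XX.
Pi_plus = (I4 + XX) / 2
Pi_minus = (I4 - XX) / 2

assert np.allclose(Pi_plus @ Pi_plus, Pi_plus)        # idempotent
assert np.allclose(Pi_minus @ Pi_minus, Pi_minus)     # idempotent
assert np.allclose(Pi_plus @ Pi_minus, np.zeros((4, 4)))  # orthogonal
assert np.allclose(Pi_plus + Pi_minus, I4)            # resolve the identity
```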
\subsection{Kitaev to Wen versions of the toric code}\label{appKitaevToWen}
\begin{figure}
\caption{Logical template for a Hadamard gate based on the (Kitaev-style) planar code. Note that while this geometry of planar code uses twice as many qubits as the Wen-style geometry to encode a qubit at a given distance, the volume of this gate is less than that of the version depicted in Fig.~\ref{figAllLogicalBlocks} for a fixed fault distance.}
\label{figKitaevHadamard}
\end{figure}
The surface code can be defined on a variety of different lattice geometries. The original (CSS) toric code due to Kitaev~\cite{kitaev2003fault} associates qubits with the edges of an arbitrary 2D cell complex, with $X$-checks $X_v$ associated with the coboundary of a vertex $v$, and $Z$-checks $Z_p$ associated with the boundary of a 2-cell $p$, for every vertex $v$ and 2-cell $p$ of the cell complex. These checks are guaranteed to commute for an arbitrary 2D cell complex due to properties of the (co)boundary.
If we consider a square lattice then the symmetry for the code is given by \begin{equation}\label{eqToricCodeLatticeSymKitaev} S_{\text{Kitaev}}(g) = H^{\otimes n} T(\hat{u}), \end{equation} where $T(\hat{u})$ is the translation operator, translating a vertex to a plaquette in the Kitaev lattice, and $H$ is the Hadamard gate. In particular, this symmetry differs from that of the surface code due to Wen by a transversal Hadamard. To map between the Wen and Kitaev formulations of the surface code we can apply (to the stabilizers) a Hadamard transversally to half of the qubits in a bipartite way.
The Wen geometry requires half as many qubits as the Kitaev geometry to achieve the same distance~\cite{bombin2006topologicalencoding,bombin2007optimal,tomita2014low}. However, the logical blocks may require different volumes. For instance, for the planar code with boundaries introduced in Ref.~\cite{bravyi1998quantum}, one can perform a Hadamard using the protocol introduced in Ref.~\cite{dennis2002topological}, leading to the template displayed in Fig.~\ref{figKitaevHadamard}. Ref.~\cite{beverland2019role} illustrated that entropic effects can be significant when comparing the two codes for memory, as we have also seen in the context of gates. Therefore, one should carefully consider the noise model, decoder, and set of logic gates when determining the most efficient surface code geometry.
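The transversal mapping between the two geometries can be illustrated symbolically: conjugation by $H$ exchanges $X$ and $Z$ (and fixes $Y$ up to a sign, which this toy sketch drops). Applying it to alternating qubits of the four-qubit Kitaev checks yields Wen-style mixed checks. The helper below is illustrative and not from the text:

```python
# Conjugation by the Hadamard gate exchanges X and Z (phases dropped).
H_MAP = {"X": "Z", "Z": "X", "Y": "Y", "I": "I"}

def bipartite_hadamard(check, flip):
    """Conjugate a Pauli string by H on the qubits flagged True in `flip`."""
    return "".join(H_MAP[p] if f else p for p, f in zip(check, flip))

# Kitaev-style CSS checks on the four qubits around a star/plaquette become
# Wen-style mixed checks after H on one bipartition class of qubits.
flip = (False, True, False, True)
assert bipartite_hadamard("XXXX", flip) == "XZXZ"
assert bipartite_hadamard("ZZZZ", flip) == "ZXZX"
```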
\subsection{Extending the feature labels to 1-cells}\label{secLogicalBlockTemplateExtension} For a logical block template $(\mathcal{L}, F)$, we can extend the function $F: \mathcal{L}_2 \rightarrow L$ to the 1-cells $\mathcal{L}_1$. Namely, let the set of 1-cell labels be given by $\mathcal{F}_1= \{\text{Cornerline}, \text{TransparentCornerline}, \text{Twist} \}$. For convenience, we denote PrimalBoundary by PB, DualBoundary by DB, DomainWall by DW, Cornerline by Cl, TransparentCornerline by TCl, and Twist by T.
Let $\delta c$ be the coboundary of $c$, consisting of the (at most four) faces that contain $c$. For a 1-cell $c\in \mathcal{L}_1$ we define $F(c)$ as \begin{widetext} \begin{equation} F(c) = \begin{cases}
\text{Cl} & \text{if } |F^{-1} (\text{PB})\cap \delta c| = |F^{-1} (\text{DB})\cap \delta c| = 1 \land |F^{-1} (\text{DW})\cap \delta c| = 0 \\ & \text{or } \{|F^{-1} (\text{PB})\cap \delta c|, ~|F^{-1} (\text{DB})\cap \delta c|\} = \{0, 2\} \land |F^{-1} (\text{DW})\cap \delta c| = 0, \\
\text{TCl} &\text{if } |F^{-1} (\text{PB})\cap \delta c| = |F^{-1} (\text{DB})\cap \delta c| = 1 \land |F^{-1} (\text{DW})\cap \delta c| = 1, \\
\text{T} & \text{if } |F^{-1} (\text{PB})\cap \delta c| = |F^{-1} (\text{DB})\cap \delta c| = 0 \land |F^{-1} (\text{DW})\cap \delta c| = 1. \end{cases} \end{equation} \end{widetext}
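The case analysis above is purely combinatorial and can be written directly as a function of the coboundary face counts. A Python sketch (the function name and interface are ours; bulk 1-cells with no feature label return \texttt{None}):

```python
def label_1cell(n_pb, n_db, n_dw):
    """Label a 1-cell from the labels of the faces in its coboundary.

    n_pb, n_db, n_dw count the coboundary faces labeled PrimalBoundary,
    DualBoundary, and DomainWall, respectively. Returns None for bulk
    1-cells, which carry no feature label.
    """
    if n_dw == 0 and ((n_pb == 1 and n_db == 1) or {n_pb, n_db} == {0, 2}):
        return "Cornerline"
    if n_pb == 1 and n_db == 1 and n_dw == 1:
        return "TransparentCornerline"
    if n_pb == 0 and n_db == 0 and n_dw == 1:
        return "Twist"
    return None

# A corner where one primal and one dual boundary face meet:
assert label_1cell(1, 1, 0) == "Cornerline"
# The same corner pierced by a domain wall:
assert label_1cell(1, 1, 1) == "TransparentCornerline"
# A twist line on the boundary of a single domain wall:
assert label_1cell(0, 0, 1) == "Twist"
```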
\subsection{Converting a template to circuit-based measurement instructions}\label{secTemplateToCBQC} In this section we explain how to convert a template $(\mathcal{L}, F)$ to a system $\Phi=(\mathcal{Q}, \mathcal{P}, \mathcal{O})$ of physical instructions on static qubits for circuit-based quantum computation. We walk through the phase gate example in the following section. Let the coordinates of the complex $\mathcal{L}$ be given by $(x,y,z)$.
\begin{figure*}
\caption{Circuit-based instructions for the phase gate in Fig.~\ref{figAllLogicalBlocks}, assuming time slices are taken progressing from left to right. Data qubits are laid out in a square grid. We also depict the ancilla qubits required, which are placed on each face, and allow for the stabilizer measurements. Ancilla qubits are connected to neighbouring data qubits, between which two qubit gates (such as CNOTs) can be performed. At each step, the stabilizer measurements which are to be performed are highlighted---one may first measure stabilizers on plaquettes of one type before the other. In step 4, a row of data qubits is measured in the Pauli-$Y$ basis, after which all data qubits below are translated up. This translation can be achieved with a two-step process using SWAP gates as shown. One may progress through the block in Fig.~\ref{figAllLogicalBlocks} in other directions, e.g., from front to back or bottom to top, leading to different instruction sets. For example, if we progress from front to back, then 5-qubit twist operators need to be measured. }
\label{figPhaseGateCBQC}
\end{figure*}
\textbf{The quantum system $\mathcal{Q}$.} Logic block templates have no directional preference; the notions of space and time are on equal footing. To compile a template into physical instructions for CBQC we must break this symmetry (due to the static nature of qubits assumed in CBQC). In particular, to map a template to a sequence of stabilizer measurements on a 2D array of qubits, we first choose a coordinate direction, say $\hat{z}$, and define it as the physical time direction. For each time slice, we have a 2D subcomplex of the template, upon each vertex of which we place a qubit. Vertices with the same $x$ and $y$ coordinate but different $z$ coordinates correspond to the same qubit at different times. This defines the quantum system $\mathcal{Q}$.
\textbf{The input and output ports $\mathcal{P}$.} The ports are simply given by the set of qubits at the first and final time slices. In particular, if the template complex $\mathcal{L}$ has $z$ coordinates spanning $z\in[0,1,\ldots, T]$ then the set of qubits living on vertices at $z=0$ ($z=T$) define one or more surface codes forming the input (output) ports.
\begin{figure*}
\caption{Converting a Clifford circuit to a spider network. (top) The CNOTs, $\ket{+}$ preparation and $M_X$ measurement operations can be represented by the spiders $\spiderx{k}$ and $\spiderz{k}$ as shown. One can simplify the spider operations by merging them as shown, giving the minimal network on the right. For the first three figures, time moves from left to right, whereas for the right-most figure, time moves from bottom to top. (bottom) Measuring stabilizers of a (Kitaev) surface code concatenated with itself. Time moves from bottom to top. On the left, we depict a network of $\spiderx{4}$ and $\spiderz{k}$ spiders to implement a round of $X^{\otimes 4}$ surface-code stabilizers. On the right, we depict the corresponding network of logical blocks. To obtain the measurement pattern of the $Z^{\otimes 4}$ surface-code stabilizers, one can swap the role of the $\spiderx{4}$ and $\spiderz{k}$ spiders.}
\label{figConvertingCircuitToSpider}
\end{figure*}
\textbf{The physical instructions $\mathcal{O}$.} At a high level, each slice of the complex at a different $z$ coordinate corresponds to a different set of stabilizer measurements that are to be performed at a given timestep. For example, on a given time slice, each face in the absence of features corresponds to a bulk surface code stabilizer check measurement, as depicted in Fig.~\ref{figToricCode}. The features on the template determine stabilizer measurements to perform, and may have different meanings depending on whether they lie in the $x-y$ plane (a slice of constant time) or have a component in the time direction, owing to the asymmetry between space and time.
\textit{Timelike features.} First, we consider the operations corresponding to features propagating in the time direction. For a given time slice, the intersection pattern of domain walls, twists, and boundaries propagating in the time direction (i.e., a twist supported on a link in the $z$ direction, or a domain wall or boundary supported on a face normal to the $x$ or $y$ direction) defines a configuration of pointlike twists and 1D boundaries and domain walls on a 2D surface code, as depicted in Fig.~\ref{figToricCode}. For such features, one performs stabilizer measurements according to the corresponding 2D surface code features, as depicted by the examples in Fig.~\ref{figToricCode}.
\textit{Spacelike features.} Now we interpret twists, domain walls, and boundaries that are supported in a given time slice. Twists propagating in a spatial direction (i.e., along an $x$ or $y$ link) correspond to a sequence of single-qubit Pauli-$Y$ measurements that need to be performed on all qubits supported on the twist line. Since spacelike twists are supported on the boundary of a timelike domain wall, such measurements allow one to transition between measuring the 2D stabilizer terms along a defect and the 2D bulk stabilizers (each of which is depicted in Fig.~\ref{figToricCode}) while maintaining a useful syndrome history. Domain walls in the spacelike direction (i.e., on faces normal to the $z$ direction) signal that one needs to apply the $\mathbbm{Z}_2$ translation symmetry transformation---the natural direction to translate is toward the twist line. Since the purpose of this symmetry is to swap primal and dual plaquettes, rather than physically apply the translation symmetry to the qubits, one can simply keep track of the transformation and update future stabilizer measurements as appropriate. Finally, primal and dual measurement patterns correspond to checkerboard patterns of single-qubit measurements in the $X$ and $Z$ bases. Namely, for a primal (dual) boundary one needs to measure qubits in the single-qubit $X$ and $Z$ bases according to the restriction of primal (dual) checks to each individual qubit. In other words, the single-qubit measurement pattern should be such that primal (dual) checks can be recovered from the measurement outcomes---such measurement patterns can be thought of as transversal logical readouts.
\begin{figure*}
\caption{Threshold crossing plots for the different logical blocks against IID erasure noise. The logical error rate is defined with respect to any logical error (which is block dependent). In other words, a logical error occurs whenever \textit{any} of the logical membranes is incorrectly recovered. Each data point is the estimated logical error rate from $10^7$ to $10^8$ trials. The estimated thresholds are displayed by dashed lines.}
\label{figThresholdsEr}
\end{figure*}
\begin{figure*}
\caption{Threshold crossing plots as per Fig.~\ref{figThresholdsEr}, but under IID bit-flip noise.}
\label{figThresholdsPa}
\end{figure*}
\textbf{Checks.} Instrument checks may be identified at the level of the template and depend on the locations of features. Firstly, checks arise from repeated measurements (which should give the same outcome in the absence of errors). In the absence of features, we have a check for every repeated bulk stabilizer measurement and thus we have a check for every bulk 3-cell. Similarly, on a primal or dual timelike boundary we have a check for every repeated boundary stabilizer measurement, and these can be identified with boundary 2-cells (note that not every boundary 2-cell corresponds to a check). For spacelike primal (dual) boundaries, we have a check for each primal (dual) 2-cell; in the absence of errors, the stabilizer measurement must agree with the product of the four single-qubit measurements that comprise it. Finally, in the presence of domain walls and twists, we have a check for every pair of 3-cells sharing a defect or twist line: by design, the product of all measurement outcomes supported on the pair of 3-cells---including bulk stabilizers, domain wall stabilizers, twist stabilizers, and single-qubit $Y$ measurements---must deterministically multiply to $+1$. For example, in Fig.~\ref{figPhaseGateCBQC}, a pair of primal and dual stabilizers from step 3 (on either side of the location of the twist line in the following step) may be multiplied with two $Y$ measurements from step 4 on their common support, and one stabilizer from step 5, to produce a check.
\textbf{Logical membranes.} Logical membranes correspond to representative logical operators being tracked through the surface codes at each timestep. To obtain the logical operator that the membrane corresponds to at time slice $\tau \in \{0,1,\ldots,T\}$, we take the restriction of a membrane to the subcomplex between times $0\leq z\leq \tau$ and take its boundary---this will give a set of Primal/Dual/PrimalDual labels on edges that corresponds to the representative logical Pauli operator at that time interval, as depicted in Fig.~\ref{figToricCode}. Components of a membrane in a plane of constant time correspond to stabilizer measurements that must be multiplied to give the equivalent representative on that slice (and thus their outcome is used to determine the Pauli frame). Recall that the membrane can locally be of primal type, dual type, or a composite primal-dual type, meaning that its projection onto a single time slice corresponds to a primal ($X$-type), dual ($Z$-type), or composite ($Y$-type) logical string operator.
In FBQC the symmetry between space and time is restored, as there is no natural direction of time. This means that the fusions may be performed in any order; layer by layer (following the surface code analogy) or in a rastering pattern as in Ref.~\cite{bombin2021interleaving}. This decoupling of ``simulated time'' corresponding to the passage of time that logical qubits experience, and the ``physical time'' corresponding to the order in which physical operations are undertaken presents additional flexibility in FBQC.
We remark that one can intuitively map between the different models of computation using ZX instruments of the diagrammatic ZX calculus~\cite{bombin2023unifying}.
\subsection{Converting a phase gate into circuit-based instructions}\label{AppPhaseGateCBQC} We now present a concrete example of converting a template to circuit-based instructions. We give the phase gate of Fig.~\ref{figAllLogicalBlocks} as an example, as it illustrates many of the possible features in the construction. The instructions are depicted in Fig.~\ref{figPhaseGateCBQC}. Note that, when we propagate through the block in this way, no stabilizer measurements of weight higher than four are required. Importantly, if we progress through the block in a different way, we may require five-qubit measurements along the twist, as is the case when we propagate from front to back and the twist line is perpendicular to our time slices.
\subsection{Magic state preparation}\label{secMagicStatePreparation} To complete a universal gate set, we need a non-Clifford operation. A common approach is to prepare noisy encoded magic states and then distill them~\cite{bravyi2005universal}. A standard magic state to prepare is the $T$-state $\ket{T} = (\ket{0} + e^{\frac{i \pi}{4}} \ket{1})/{\sqrt{2}}$. We show how to prepare the noisy encoded $\ket{T}$ states in Fig.~\ref{figFBQCMagicStatePreparation}, which is based on the preparation scheme in Ref.~\cite{lodyga2015simple}. We refer to the qubit in Fig.~\ref{figFBQCMagicStatePreparation} where the non-Clifford operation is performed as the \textit{injection site}.
The scheme is inherently noisy due to the existence of weight-1 and other low-weight logical errors on and around the injection site. Excluding errors on the injection site, all weight-1 Pauli errors are detectable. Therefore one can perform postselection (which depends on the erasure and syndrome pattern) to filter out noisy preparations. For example, with a simple error model of flipped measurement outcomes with rate $p$, the postselected logical error rate can be made to be of order $p + O(p^2)$. If the error rate $p$ is sufficiently small then the overhead for postselection can also be small~\cite{li2015magic, bombin2022fault}.
\begin{figure*}
\caption{Estimated fits of the logical prefactor $\alpha_p$ to the logical block error rate $P_{p}(d) = \alpha_p e^{-\beta_p d}$ under IID erasure and IID bit-flip noise models for the identity, 3-torus, phase, Hadamard, and multi-qubit lattice surgery blocks.}
\label{figLogicalPrefactor}
\end{figure*}
\begin{figure*}
\caption{All Pauli logical operators for twist encoded qubits are traceable. (a) An arbitrary Pauli operator can be expressed as a product of composite primal-dual (Pauli-$Y$) strings connecting pairs of twists, as represented by the purple strings. (b) Multiply this logical by a trivial primal or dual loop (i.e., a stabilizer). (c) This results in a new string net configuration, involving primal, dual and composite primal-dual strings in general. (d) This string net can be further resolved to a new traceable string net consisting of only primal and dual strings enclosing some number of twists.}
\label{figLogicalTraceability}
\end{figure*}
\begin{figure*}
\caption{Commuting traceable operators can be resolved to only have intersections on strings of the same type. For simplicity we consider a setting absent of domain walls---the general case holds using the same argument up to local string relabellings. Label the two traceable operators $P$ and $Q$. (a) In a region, $P$ is depicted on the left by a single primal (blue) string, while $Q$ is a pair of dual (red) strings. For $P$ and $Q$ to commute, the primal strings of $P$ must overlap an even number of times with the dual strings of $Q$. (b) Consider pairing up the primal and dual intersections between $P$ and $Q$. For each intersection we can multiply by a trivial dual loop (i.e., a stabilizer) in the neighbourhood of the intersection. (c) This loop resolves the crossings, and so we are left only with intersections of the same type.}
\label{figLogicalTraceableCommuting}
\end{figure*}
\begin{figure}
\caption{Since a general Pauli measurement involves an even number of twists, we can always find a valid configuration of domain walls. As shown, we can pair twists and configure domain walls such that their boundary is the desired twists. This portal configuration is to measure the traceable operator depicted at the bottom.}
\label{figTeleportedTwistResolvedDomainWall}
\end{figure}
\subsection{Toric code spider networks}\label{appSpiderNetworks}
In this section we present in more detail the concatenation schemes discussed in Sec.~\ref{secConcatenation}. In particular, in the top panels of Fig.~\ref{figConvertingCircuitToSpider} we demonstrate how to convert the Clifford circuit in Fig.~\ref{figToricCodeSpiderNetwork} to a network of toric code spiders. In the bottom panels of Fig.~\ref{figConvertingCircuitToSpider} we present the logical block for the spider network that measures stabilizers of the surface code concatenated with itself.
\subsection{FBQC instrument networks as gauge fixing on a subsystem code}\label{secFBQCsubsystem} We remark that fusion-based quantum computation in this setting can be understood as gauge fixing on a subsystem code. This observation has previously been identified in the setting of MBQC~\cite{brown2020universal}, and we extend the discussion to include FBQC in this section.
Subsystem (stabilizer) codes are a generalization of stabilizer codes in which some code degrees of freedom are not used to encode information; these are referred to as gauge qubits. A subsystem code~\cite{poulin2005stabilizer,bacon2006operator} is defined by a subgroup of the Pauli group $\mathcal{G}\subseteq \mathcal{P}_n$, not necessarily Abelian, known as the gauge group. For a given gauge group, the stabilizers of the subsystem code are given by elements of the center $\mathcal{Z}_{\mathcal{G}}(\mathcal{G})$, logical degrees of freedom are described by the group $\mathcal{Z}_{\mathcal{P}_n}(\mathcal{G})$, while $\mathcal{G}$ describes operations on the gauge qubits. In particular, we refer to an element of $\mathcal{Z}_{\mathcal{P}_n}(\mathcal{G})$ as a \textit{bare logical operator}---such operators act nontrivially only on logical degrees of freedom. One may multiply any bare logical operator by elements of $\mathcal{G}$ to obtain an operator known as a \textit{dressed logical operator}, which has an equivalent action on the logical degrees of freedom.
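These definitions can be made concrete in the binary symplectic representation, where a phase-free $n$-qubit Pauli is a length-$2n$ vector $(x|z)$ over GF(2) and two Paulis commute iff their symplectic product vanishes. The sketch below (our own illustration; the $[[4,1,2]]$ Bacon--Shor gauge generators are a standard textbook example, not taken from this paper) enumerates a small gauge group and extracts its center, i.e., the stabilizer group.

```python
from itertools import product

N = 4  # qubits; a phase-free Pauli is a length-2N GF(2) vector (x|z)

def symp(u, v, n=N):
    # Symplectic inner product: 0 iff the two Paulis commute.
    return sum((u[i] & v[n + i]) ^ (u[n + i] & v[i]) for i in range(n)) % 2

def generated_group(gens, n=N):
    # Phase-free Pauli products correspond to XOR of binary vectors.
    elems = set()
    for coeffs in product((0, 1), repeat=len(gens)):
        v = [0] * (2 * n)
        for c, g in zip(coeffs, gens):
            if c:
                v = [a ^ b for a, b in zip(v, g)]
        elems.add(tuple(v))
    return elems

def center(gens, n=N):
    # Stabilizers: gauge-group elements commuting with every generator.
    return {g for g in generated_group(gens, n)
            if all(symp(g, h, n) == 0 for h in gens)}

# Gauge generators of the [[4,1,2]] Bacon-Shor code on a 2x2 grid:
# X1X2 and X3X4 (row X-type), Z1Z3 and Z2Z4 (column Z-type).
bacon_shor_gens = [
    (1, 1, 0, 0, 0, 0, 0, 0),
    (0, 0, 1, 1, 0, 0, 0, 0),
    (0, 0, 0, 0, 1, 0, 1, 0),
    (0, 0, 0, 0, 0, 1, 0, 1),
]
```

For these generators the center contains exactly $I$, $XXXX$, $ZZZZ$ and $YYYY$, matching the familiar Bacon--Shor stabilizers.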
We now express an FBQC instrument network in terms of a subsystem code. Namely, in FBQC, we may construct a gauge group $\mathcal{G}=\langle\mathcal{R}\cup \mathcal{M}\rangle$ generated by $\mathcal{R}$ and $\mathcal{M}$. We can understand a fault-tolerant instrument in terms of \textit{gauge fixing}~\cite{paetznick2013universal}, whereby we start in one particular gauge of the subsystem code (i.e., in a joint eigenstate of some subset of $\mathcal{G}$) and project into a different gauge. In particular, the computation begins in the gauge defined by $\mathcal{R}$, and measurements are performed to fix onto the gauge defined by $\mathcal{M}$. The computation can be understood as proceeding by examining how bare logical operators are transformed. In terms of the subsystem code, the bare logical operators are precisely the logical membrane operators that we have previously studied.
\subsection{Decoding}\label{secDecoding}
In this section we briefly describe how to decode in the FBQC setting.
In order to extract useful logical data from the observed measurement data we must use a decoder. The decoder's job is to take as input the (potentially incomplete) measurement outcomes, and produce a recovery operation that is consistent with the observed syndrome. In particular, let $T$ be the set of trivial undetectable errors (i.e., errors that commute with all check operators and membranes). For a Pauli error $E\in\mathcal{P}$, let $\sigma(E)$ be the syndrome (the outcomes of a generating set of checks). The decoder takes the syndrome and produces a recovery operator $R\in \mathcal{P}$ with the same syndrome. The decoding is successful whenever $ER \in T$. In this case we necessarily have $[ER, M^{\alpha,\beta}] = 0$ for all logical membranes $M^{\alpha,\beta}$, implying that the logical Pauli frame observed is identical to the case without errors.
The decoding problem is naturally expressed on the syndrome graph; we can immediately make use of many standard decoders. In particular, on the syndrome graph, the syndrome takes the form $\sigma(E) = \partial(\text{Supp}(E))$, where $\text{Supp}(E)$ is the set of edges on the syndrome graph that correspond to flipped (fusion) measurement outcomes and $\partial$ is the usual mod-2 boundary operator on the graph. The decoder must return a recovery operator $R\in \mathcal{P}$ such that $\sigma(E) = \sigma(R)$. Minimum-weight perfect matching (MWPM)~\cite{dennis2002topological,kolmogorov2009blossom} and union-find (UF)~\cite{delfosse2017almost} are two popular decoding methods that produce a low-weight description of the observed syndrome. Such algorithms can be applied to the primal and dual syndrome graphs separately, where they match pairs of primal-type excitations and separately the dual-type excitations.
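The relation $\sigma(E) = \partial(\text{Supp}(E))$ and the matching step can be illustrated on a toy one-dimensional syndrome graph: a ring of $L$ vertices whose edges carry the flipped measurement outcomes. The decoder below is a minimal stand-in for MWPM that handles at most two defects; it is our own sketch and all names are hypothetical, not the decoders used in this paper.

```python
def syndrome(flipped_edges, L):
    # sigma(E) = boundary of Supp(E): vertices touched by an odd number
    # of flipped edges.  Edge e of the ring joins vertices e and (e+1)%L.
    count = [0] * L
    for e in flipped_edges:
        count[e] += 1
        count[(e + 1) % L] += 1
    return {v for v, c in enumerate(count) if c % 2}

def decode(defects, L):
    # Toy minimum-weight matching on a ring: join the (0 or 2) defects
    # by the shorter of the two arcs between them.
    if not defects:
        return set()
    a, b = sorted(defects)
    arc = set(range(a, b))  # edges a, ..., b-1 have boundary {a, b}
    return arc if len(arc) <= L - len(arc) else set(range(L)) - arc

def success(E, L):
    # E xor R has empty boundary; on a ring it is either empty or wraps
    # the whole ring, and only the wrapping case is a logical error.
    R = decode(syndrome(E, L), L)
    return (E ^ R) != set(range(L))
```

The residual $E\oplus R$ always has trivial syndrome; decoding succeeds exactly when it is also a trivial (non-wrapping) cycle, mirroring the condition $ER\in T$ above.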
\subsection{Threshold analysis}\label{secThreshAnalysis}
We simulate two types of IID noise models: an erasure model and a bit-flip model. In the context of FBQC, such errors arise from (i) photon loss and fusion failure, and (ii) Pauli errors on the qubits of resource states. To estimate the logical error rate of each 3D block at physical error rate $p$, we fix a block distance and perform many Monte Carlo trials. Each trial consists of generating a random sample of erasures or bit-flip errors on each edge of the corresponding 3D syndrome graph with rate $p$, determining the syndrome, decoding based on the syndrome, and finally, checking whether the decoding is successful. In particular, decoding is successful whenever the combined effect of error and recovery leads to the correct measurement outcome of the logical membrane. Note that different blocks have different numbers (and shapes) of logical membranes, and we declare success if and only if no logical errors occur (i.e., all membranes are correct). On the syndrome graph, an error sample results in a chain of flipped edges $E$. The recovery consists of a chain of flipped edges $R$ with the same boundary as $E$. The logical membrane is represented as a subset (a mask) of syndrome graph edges $M$, and to determine correct decoding, we only need to check whether $|(E\oplus R) \cap M| \equiv 0 \pmod 2$; in other words, we verify that the combined error and recovery intersect the membrane an even number of times. An example of a syndrome graph and logical mask is given in Fig.~\ref{figLatticeSurgerySyndromeGraph}.
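A Monte Carlo trial of the kind described above can be sketched end-to-end on a toy ring-shaped syndrome graph, including the membrane-mask parity check $|(E\oplus R)\cap M| \bmod 2$. The sorted-pairing decoder and the single-edge membrane are simplifications of ours for illustration, not the decoders or membranes used for the reported numerics.

```python
import random

def ring_syndrome(E, L):
    # Defect vertices: boundary of the flipped-edge set E on a ring.
    count = [0] * L
    for e in E:
        count[e] += 1
        count[(e + 1) % L] += 1
    return sorted(v for v, c in enumerate(count) if c % 2)

def recover(defects, L):
    # Pair defects in sorted order and join each pair by its shorter arc.
    # This is a valid recovery (same boundary as E), though not minimal.
    R = set()
    for a, b in zip(defects[::2], defects[1::2]):
        arc = set(range(a, b))
        R ^= arc if len(arc) <= L - len(arc) else set(range(L)) - arc
    return R

def trial_success(E, L, membrane=frozenset({0})):
    # Success iff error + recovery cross the membrane an even number of times.
    R = recover(ring_syndrome(E, L), L)
    return len((E ^ R) & membrane) % 2 == 0

def logical_error_rate(L, p, shots, seed=0):
    # IID bit-flip sampling on the edges, decode, check the membrane mask.
    rng = random.Random(seed)
    fails = sum(
        not trial_success({e for e in range(L) if rng.random() < p}, L)
        for _ in range(shots))
    return fails / shots
```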
To estimate the threshold, we find estimates of the logical error rate for different block distances $L\in\{8,14,20,26\}$ and for a range of physical error rates. For each block, we fit the logical error rate for each distance to the cumulative distribution function of the (rescaled and shifted) beta distribution. The threshold is estimated as the value of $p$ at which the logical error rate curves for different distances intersect, i.e., the point at which the logical error rate is invariant under increasing the distance. Figs.~\ref{figThresholdsEr} and \ref{figThresholdsPa} show the logical error rate fits for each logical block, and the crossing point at which the threshold is identified. Error bars for each data point are given by the standard error of the mean for the binomial distribution, from which we can identify error bars for the threshold crossing.
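The crossing-point logic can be sketched as follows: below threshold the larger distance has the lower logical error rate, above threshold it has the higher one, so the threshold lies where the sign of the difference flips. The helper below (a hypothetical name; a linear-interpolation simplification of the beta-CDF fitting procedure described above) locates that crossing on a grid of physical error rates.

```python
def crossing_point(ps, rates_small_d, rates_large_d):
    # Threshold estimate: the p at which the logical-error curve of the
    # larger distance stops lying below that of the smaller distance.
    diff = [a - b for a, b in zip(rates_large_d, rates_small_d)]
    for i in range(len(diff) - 1):
        if diff[i] <= 0 < diff[i + 1]:
            t = -diff[i] / (diff[i + 1] - diff[i])  # linear interpolation
            return ps[i] + t * (ps[i + 1] - ps[i])
    return None  # no crossing in the scanned range
```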
\subsection{Logical decay fits}\label{appLogicalPrefactor} Here we outline how we obtain fitting parameters $\alpha_p$ and $\beta_p$ for the logical error rate $P_{p}(d)$ as a function of the block distance $d$, according to \begin{equation} P_{p}(d) = \alpha_p e^{-\beta_p d}. \end{equation} For a given physical error rate $p$, we estimate the logical error rate $P_{p}(d)$ for a variety of block distances, by running no less than $10^9$ decoder trials. Each estimate carries an error bar given by the standard error of the mean for the binomial distribution. We then perform an ordinary least-squares regression to the logarithm of the estimated logical error rate to directly infer $\alpha_p$ and $\beta_p$. The error bars for the estimates for $\alpha_p$ and $\beta_p$ are given by the heteroskedasticity robust standard errors. Fig.~\ref{figNumericsDecay} contains numerical estimates of the logical decay $\beta_p$, while Fig.~\ref{figLogicalPrefactor} contains the numerical estimates of the logical prefactor $\alpha_p$.
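The fit itself is an ordinary least-squares regression on $\ln P_p(d) = \ln\alpha_p - \beta_p d$. The following sketch implements the unweighted version in closed form (the reported fits additionally use heteroskedasticity-robust standard errors, which are omitted here).

```python
import math

def fit_logical_decay(ds, Ps):
    # OLS fit of ln P(d) = ln(alpha) - beta * d, in closed form.
    xs, ys = list(ds), [math.log(P) for P in Ps]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return math.exp(ybar - slope * xbar), -slope  # (alpha, beta)
```

On data lying exactly on the model curve the fit recovers $\alpha_p$ and $\beta_p$ up to floating-point error.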
\subsection{Twist encoding traceability}\label{appTwistTraceability}
In this section we show that all Pauli logical operators in the twist encoding are traceable. The argument is presented in Fig.~\ref{figLogicalTraceability}, where we demonstrate how to start with a general Pauli logical operator and transform it using stabilizer equivalences into traceable form. In Fig.~\ref{figLogicalTraceableCommuting} we show that commuting traceable operators can be made to have intersection only on strings of the same type. Finally, we remark that a general portal configuration to measure a Pauli operator always admits a compatible domain wall configuration. This is because the measurement of a Pauli logical operator always results in the measurement of an even number of twists. We present an illustrative example in Fig.~\ref{figTeleportedTwistResolvedDomainWall} of how such a domain wall can be found with the correct boundary conditions.
\subsection{Port boundary conditions and block decoding simulations}\label{appPortBoundaryConditionsAndDecodingSims}
In order to simulate a modular component within a larger logical network and calculate its \emph{fault distance}, a pragmatic boundary condition must be chosen that allows the fault-tolerant performance of different logical blocks to be evaluated in isolation. We document this choice here, as it plays a crucial role in specifying the numerical simulations being performed.
For the purpose of evaluating the fault-tolerant performance of a block and its \emph{fault distance}, each port is treated as though ideal stabilizer measurements had been performed on the stabilizer subgroup local to that port. No additional noise is attributed to these measurement outcomes beyond the noise already present in the qubits themselves. Such ideal stabilizer measurements are an idealization: in a real-world quantum computer these operations would generally be noisy. However, the qubits on which the stabilizer measurements are applied are themselves still subject to the underlying noise model.
In the setting of topological fault tolerance, the stabilizer outcomes being extracted correspond to the geometrically local stabilizer generators of the surface code. The situation studied corresponds to a well-defined quantum instrument inserted at said ports. In terms of the blocks already considered, the chosen boundary condition corresponds to attaching a noiseless injection block (from Fig.~\ref{figFBQCMagicStatePreparation}) at every surface code port, with the noise rate being artificially set to zero for such blocks and the injection qubit left unmeasured (in other words, a noiseless encoding and unencoding isometry). These boundary conditions and their impact are displayed in Figs.~\ref{figDecodingWithPortStabilizerBC} and \ref{figBCcapturedErrors}.
The blocks we use in this article are chosen such that they individually have fault-tolerance properties with respect to the idealized port boundary condition, and also such that fault tolerance is preserved under arbitrary combinations of such blocks. While this statement is made as a claim in this article, a set of sufficient conditions satisfied by this set of blocks is presented in Ref.~\cite{modularDecoding} in the context of \textit{modular decoding}.
\begin{figure}
\caption{Without port stabilizers, it would not be possible to associate a sensible fault distance to independent logical blocks.
For any choice of the logical correlator, there would be low-weight fault configurations that result in undetectable errors on the logical correlator.
Incorporating stabilizer outcomes local to the ports is an idealization which allows us to evaluate sensible fault distance and fault-tolerance properties for the proposed logical blocks.
}
\label{figDecodingWithPortStabilizerBC}
\end{figure}
\begin{figure}
\caption{The figure shows how a fault configuration that is undetectable in the context of a composite protocol (right) can become detectable under the idealized boundary condition described by stabilizer measurements, when the complete protocol is partitioned into logical blocks (left).
}
\label{figBCcapturedErrors}
\end{figure}
\begin{thebibliography}{133} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Kitaev}(2003)}]{kitaev2003fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont
{Kitaev}},\ }\bibfield {title} {\emph {\bibinfo {title} {Fault-tolerant
quantum computation by anyons},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Annals of Physics}\ }\textbf {\bibinfo {volume}
{303}},\ \bibinfo {pages} {2} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kitaev}(1997)}]{kitaev1997quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont
{Kitaev}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Quantum
Communication, Computing, and Measurement}}}\ (\bibinfo {publisher}
{Springer},\ \bibinfo {year} {1997})\ pp.\ \bibinfo {pages}
{181--188}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dennis}\ \emph {et~al.}(2002)\citenamefont {Dennis},
\citenamefont {Kitaev}, \citenamefont {Landahl},\ and\ \citenamefont
{Preskill}}]{dennis2002topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Dennis}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kitaev}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Landahl}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\bibfield
{title} {\emph {\bibinfo {title} {Topological quantum memory},\ }}\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Journal of Mathematical
Physics}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {4452}
(\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Raussendorf}\ and\ \citenamefont
{Harrington}(2007)}]{raussendorf2007fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Raussendorf}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Harrington}},\ }\bibfield {title} {\emph {\bibinfo {title} {Fault-tolerant
quantum computation with high threshold in two dimensions},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical review letters}\
}\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {190504} (\bibinfo
{year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Raussendorf}\ \emph {et~al.}(2007)\citenamefont
{Raussendorf}, \citenamefont {Harrington},\ and\ \citenamefont
{Goyal}}]{raussendorf2007topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Raussendorf}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Harrington}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Goyal}},\ }\bibfield {title} {\emph {\bibinfo {title} {Topological
fault-tolerance in cluster state quantum computation},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf
{\bibinfo {volume} {9}},\ \bibinfo {pages} {199} (\bibinfo {year}
{2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bolt}\ \emph {et~al.}(2016)\citenamefont {Bolt},
\citenamefont {Duclos-Cianci}, \citenamefont {Poulin},\ and\ \citenamefont
{Stace}}]{bolt2016foliated}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bolt}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Duclos-Cianci}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Poulin}}, \ and\ \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Stace}},\ }\bibfield {title}
{\emph {\bibinfo {title} {Foliated quantum error-correcting codes},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical review
letters}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {070501}
(\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nickerson}\ and\ \citenamefont
{Bomb{\'\i}n}(2018)}]{nickerson2018measurement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Nickerson}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb{\'\i}n}},\ }\bibfield {title} {\emph {\bibinfo {title} {Measurement
based fault tolerance beyond foliation},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:1810.09621}\ } (\bibinfo
{year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brown}\ and\ \citenamefont
{Roberts}(2020)}]{brown2020universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont
{Brown}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Roberts}},\ }\bibfield {title} {\emph {\bibinfo {title} {Universal
fault-tolerant measurement-based quantum computation},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical Review Research}\
}\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {033305} (\bibinfo {year}
{2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2003)\citenamefont {Wang},
\citenamefont {Harrington},\ and\ \citenamefont
{Preskill}}]{wang2003confinement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Harrington}}, \
and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\
}\bibfield {title} {\emph {\bibinfo {title} {Confinement-higgs transition in
a disordered gauge theory and the accuracy threshold for quantum memory},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Annals of
Physics}\ }\textbf {\bibinfo {volume} {303}},\ \bibinfo {pages} {31}
(\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stace}\ \emph {et~al.}(2009)\citenamefont {Stace},
\citenamefont {Barrett},\ and\ \citenamefont
{Doherty}}]{stace2009thresholds}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont
{Stace}}, \bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont {Barrett}},
\ and\ \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Doherty}},\
}\bibfield {title} {\emph {\bibinfo {title} {Thresholds for topological
codes in the presence of loss},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume}
{102}},\ \bibinfo {pages} {200501} (\bibinfo {year} {2009})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Duclos-Cianci}\ and\ \citenamefont
{Poulin}(2010)}]{duclos2010fast}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Duclos-Cianci}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Poulin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Fast decoders for
topological quantum codes},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {104}},\
\bibinfo {pages} {050504} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombin}\ \emph
{et~al.}(2012{\natexlab{a}})\citenamefont {Bombin}, \citenamefont {Andrist},
\citenamefont {Ohzeki}, \citenamefont {Katzgraber},\ and\ \citenamefont
{Mart{\'\i}n-Delgado}}]{bombin2012strong}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombin}}, \bibinfo {author} {\bibfnamefont {R.~S.}\ \bibnamefont {Andrist}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ohzeki}}, \bibinfo
{author} {\bibfnamefont {H.~G.}\ \bibnamefont {Katzgraber}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.~A.}\ \bibnamefont {Mart{\'\i}n-Delgado}},\
}\bibfield {title} {\emph {\bibinfo {title} {Strong resilience of
topological codes to depolarization},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {2}},\
\bibinfo {pages} {021004} (\bibinfo {year} {2012}{\natexlab{a}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Fowler}\ \emph {et~al.}(2012)\citenamefont {Fowler},
\citenamefont {Mariantoni}, \citenamefont {Martinis},\ and\ \citenamefont
{Cleland}}]{fowler2012surface}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{Fowler}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mariantoni}},
\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}}, \ and\
\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Cleland}},\
}\bibfield {title} {\emph {\bibinfo {title} {Surface codes: Towards
practical large-scale quantum computation},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo
{volume} {86}},\ \bibinfo {pages} {032324} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Watson}\ and\ \citenamefont
{Barrett}(2014)}]{watson2014logical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~H.}\ \bibnamefont
{Watson}}\ and\ \bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont
{Barrett}},\ }\bibfield {title} {\emph {\bibinfo {title} {Logical error rate
scaling of the toric code},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume} {16}},\
\bibinfo {pages} {093045} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ \emph {et~al.}(2014)\citenamefont {Bravyi},
\citenamefont {Suchara},\ and\ \citenamefont {Vargo}}]{bravyi2014efficient}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Suchara}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vargo}},\ }\bibfield
{title} {\emph {\bibinfo {title} {Efficient algorithms for maximum
likelihood decoding in the surface code},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo
{volume} {90}},\ \bibinfo {pages} {032326} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Darmawan}\ and\ \citenamefont
{Poulin}(2017)}]{darmawan2017tensor}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont
{Darmawan}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Poulin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Tensor-network
simulations of the surface code under realistic noise},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical review letters}\
}\textbf {\bibinfo {volume} {119}},\ \bibinfo {pages} {040502} (\bibinfo
{year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bomb{\'\i}n}\ and\ \citenamefont
{Martin-Delgado}(2009)}]{bombin2009quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb{\'\i}n}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Martin-Delgado}},\ }\bibfield {title} {\emph {\bibinfo {title} {Quantum
measurements and gates by code deformation},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Journal of Physics A: Mathematical and
Theoretical}\ }\textbf {\bibinfo {volume} {42}},\ \bibinfo {pages} {095302}
(\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bomb{\'\i}n}(2010)}]{bombin2010topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb{\'\i}n}},\ }\bibfield {title} {\emph {\bibinfo {title} {Topological
order with a twist: Ising anyons from an abelian model},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical review letters}\
}\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages} {030403} (\bibinfo
{year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Landahl}\ \emph {et~al.}(2011)\citenamefont
{Landahl}, \citenamefont {Anderson},\ and\ \citenamefont
{Rice}}]{landahl2011fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Landahl}}, \bibinfo {author} {\bibfnamefont {J.~T.}\ \bibnamefont
{Anderson}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~R.}\ \bibnamefont
{Rice}},\ }\bibfield {title} {\emph {\bibinfo {title} {Fault-tolerant
quantum computing with color codes},\ }}\href
{https://arxiv.org/abs/1108.5738} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:1108.5738}\ } (\bibinfo {year} {2011})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Horsman}\ \emph {et~al.}(2012)\citenamefont
{Horsman}, \citenamefont {Fowler}, \citenamefont {Devitt},\ and\
\citenamefont {Van~Meter}}]{horsman2012surface}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Horsman}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Fowler}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Devitt}}, \ and\ \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Van~Meter}},\ }\bibfield {title}
{\emph {\bibinfo {title} {Surface code quantum computing by lattice
surgery},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New
Journal of Physics}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages}
{123011} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fowler}(2012)}]{fowler2012time}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{Fowler}},\ }\bibfield {title} {\emph {\bibinfo {title} {Time-optimal
quantum computation},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {arXiv preprint arXiv:1210.4626}\ } (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barkeshli}\ \emph
{et~al.}(2013{\natexlab{a}})\citenamefont {Barkeshli}, \citenamefont {Jian},\
and\ \citenamefont {Qi}}]{barkeshli2013classification}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Barkeshli}}, \bibinfo {author} {\bibfnamefont {C.-M.}\ \bibnamefont {Jian}},
\ and\ \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Qi}},\
}\bibfield {title} {\emph {\bibinfo {title} {Classification of topological
defects in abelian topological states},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo
{volume} {88}},\ \bibinfo {pages} {241103} (\bibinfo {year}
{2013}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barkeshli}\ \emph
{et~al.}(2013{\natexlab{b}})\citenamefont {Barkeshli}, \citenamefont {Jian},\
and\ \citenamefont {Qi}}]{barkeshli2013twist}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Barkeshli}}, \bibinfo {author} {\bibfnamefont {C.-M.}\ \bibnamefont {Jian}},
\ and\ \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Qi}},\
}\bibfield {title} {\emph {\bibinfo {title} {Twist defects and projective
non-abelian braiding statistics},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo {volume} {87}},\
\bibinfo {pages} {045130} (\bibinfo {year} {2013}{\natexlab{b}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Hastings}\ and\ \citenamefont
{Geller}(2014)}]{hastings2014reduced}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Hastings}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Geller}},\ }\bibfield {title} {\emph {\bibinfo {title} {Reduced space-time
and time costs using dislocation codes and arbitrary ancillas},\ }}\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:1408.3379}\ } (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yoshida}(2015)}]{yoshida2015topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Yoshida}},\ }\bibfield {title} {\emph {\bibinfo {title} {Topological color
code and symmetry-protected topological phases},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo
{volume} {91}},\ \bibinfo {pages} {245131} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Terhal}(2015)}]{terhal2015quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Terhal}},\ }\bibfield {title} {\emph {\bibinfo {title} {Quantum error
correction for quantum memories},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume}
{87}},\ \bibinfo {pages} {307} (\bibinfo {year} {2015})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Yoder}\ and\ \citenamefont
{Kim}(2017)}]{yoder2017surface}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont
{Yoder}}\ and\ \bibinfo {author} {\bibfnamefont {I.~H.}\ \bibnamefont
{Kim}},\ }\bibfield {title} {\emph {\bibinfo {title} {The surface code with
a twist},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quantum}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {2} (\bibinfo
{year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brown}\ \emph {et~al.}(2017)\citenamefont {Brown},
\citenamefont {Laubscher}, \citenamefont {Kesselring},\ and\ \citenamefont
{Wootton}}]{brown2017poking}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont
{Brown}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Laubscher}},
\bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Kesselring}}, \ and\
\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Wootton}},\
}\bibfield {title} {\emph {\bibinfo {title} {Poking holes and cutting
corners to achieve Clifford gates with the surface code},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical Review X}\ }\textbf
{\bibinfo {volume} {7}},\ \bibinfo {pages} {021029} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yoshida}(2017)}]{yoshida2017gapped}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Yoshida}},\ }\bibfield {title} {\emph {\bibinfo {title} {Gapped boundaries,
group cohomology and fault-tolerant logical gates},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Annals of Physics}\ }\textbf
{\bibinfo {volume} {377}},\ \bibinfo {pages} {387} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Roberts}\ \emph {et~al.}(2017)\citenamefont
{Roberts}, \citenamefont {Yoshida}, \citenamefont {Kubica},\ and\
\citenamefont {Bartlett}}]{roberts2017symmetry}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Roberts}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yoshida}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kubica}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.~D.}\ \bibnamefont {Bartlett}},\ }\bibfield
{title} {\emph {\bibinfo {title} {Symmetry-protected topological order at
nonzero temperature},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo
{pages} {022306} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombin}(2018{\natexlab{a}})}]{bombin2018transversal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Transversal gates
and error propagation in 3D topological codes},\ }}\href
{https://arxiv.org/abs/1810.09575} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:1810.09575}\ } (\bibinfo {year}
{2018}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombin}(2018{\natexlab{b}})}]{bombin20182d}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombin}},\ }\bibfield {title} {\emph {\bibinfo {title} {2D quantum
computation with 3D topological codes},\ }}\href
{https://arxiv.org/abs/1810.09571} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:1810.09571}\ } (\bibinfo {year}
{2018}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lavasani}\ and\ \citenamefont
{Barkeshli}(2018)}]{lavasani2018low}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Lavasani}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Barkeshli}},\ }\bibfield {title} {\emph {\bibinfo {title} {Low overhead
Clifford gates from joint measurements in surface, color, and hyperbolic
codes},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical
Review A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {052319}
(\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lavasani}\ \emph {et~al.}(2019)\citenamefont
{Lavasani}, \citenamefont {Zhu},\ and\ \citenamefont
{Barkeshli}}]{lavasani2019universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Lavasani}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Zhu}}, \
and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Barkeshli}},\
}\bibfield {title} {\emph {\bibinfo {title} {Universal logical gates with
constant overhead: instantaneous Dehn twists for hyperbolic quantum codes},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:1901.11029}\ } (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Webster}\ and\ \citenamefont
{Bartlett}(2020)}]{webster2020fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Webster}}\ and\ \bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont
{Bartlett}},\ }\bibfield {title} {\emph {\bibinfo {title} {Fault-tolerant
quantum gates with defects in topological stabilizer codes},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf
{\bibinfo {volume} {102}},\ \bibinfo {pages} {022403} (\bibinfo {year}
{2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hanks}\ \emph {et~al.}(2020)\citenamefont {Hanks},
\citenamefont {Estarellas}, \citenamefont {Munro},\ and\ \citenamefont
{Nemoto}}]{hanks2020effective}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Hanks}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont
{Estarellas}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont
{Munro}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Nemoto}},\ }\bibfield {title} {\emph {\bibinfo {title} {Effective
compression of quantum braided circuits aided by ZX-calculus},\ }}\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Physical Review X}\ }\textbf
{\bibinfo {volume} {10}},\ \bibinfo {pages} {041030} (\bibinfo {year}
{2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Roberts}\ and\ \citenamefont
{Williamson}(2020)}]{roberts20203}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Roberts}}\ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Williamson}},\ }\bibfield {title} {\emph {\bibinfo {title} {3-fermion
topological quantum computation},\ }}\href {https://arxiv.org/abs/2011.04693}
{\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2011.04693}\
} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhu}\ \emph {et~al.}(2021)\citenamefont {Zhu},
\citenamefont {Jochym-O'Connor},\ and\ \citenamefont
{Dua}}]{zhu2021topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Zhu}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Jochym-O'Connor}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Dua}},\ }\bibfield {title} {\emph {\bibinfo {title} {Topological order,
quantum codes and quantum computation on fractal geometries},\ }}\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:2108.00018}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chamberland}\ and\ \citenamefont
{Campbell}(2021)}]{chamberland2021universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Chamberland}}\ and\ \bibinfo {author} {\bibfnamefont {E.~T.}\ \bibnamefont
{Campbell}},\ }\bibfield {title} {\emph {\bibinfo {title} {Universal quantum
computing with twist-free and temporally encoded lattice surgery},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:2109.02746}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Landahl}\ and\ \citenamefont
{Morrison}(2021)}]{landahl2021logical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Landahl}}\ and\ \bibinfo {author} {\bibfnamefont {B.~C.}\ \bibnamefont
{Morrison}},\ }\bibfield {title} {\emph {\bibinfo {title} {Logical Majorana
fermions for fault-tolerant quantum simulation},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:2110.10280}\ } (\bibinfo
{year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Beverland}\ \emph {et~al.}(2021)\citenamefont
{Beverland}, \citenamefont {Kubica},\ and\ \citenamefont
{Svore}}]{beverland2021cost}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont
{Beverland}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kubica}}, \
and\ \bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont {Svore}},\
}\bibfield {title} {\emph {\bibinfo {title} {Cost of universality: A
comparative study of the overhead of state distillation and code switching
with color codes},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {PRX Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages}
{020341} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fowler}(2013)}]{fowler2013accurate}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{Fowler}},\ }\bibfield {title} {\emph {\bibinfo {title} {Accurate
simulations of planar topological codes cannot use cyclic boundaries},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {062320} (\bibinfo
{year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Beverland}\ \emph {et~al.}(2019)\citenamefont
{Beverland}, \citenamefont {Brown}, \citenamefont {Kastoryano},\ and\
\citenamefont {Marolleau}}]{beverland2019role}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont
{Beverland}}, \bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont
{Brown}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont
{Kastoryano}}, \ and\ \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont
{Marolleau}},\ }\bibfield {title} {\emph {\bibinfo {title} {The role of
entropy in topological quantum error correction},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Journal of Statistical Mechanics: Theory and
Experiment}\ }\textbf {\bibinfo {volume} {2019}},\ \bibinfo {pages} {073404}
(\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Satzinger}\ \emph {et~al.}(2021)\citenamefont
{Satzinger}, \citenamefont {Liu}, \citenamefont {Smith}, \citenamefont
{Knapp}, \citenamefont {Newman}, \citenamefont {Jones}, \citenamefont {Chen},
\citenamefont {Quintana}, \citenamefont {Mi}, \citenamefont {Dunsworth},
\citenamefont {Gidney}, \citenamefont {Aleiner}, \citenamefont {Arute},
\citenamefont {Arya}, \citenamefont {Atalaya}, \citenamefont {Babbush},
\citenamefont {Bardin}, \citenamefont {Barends}, \citenamefont {Basso},
\citenamefont {Bengtsson}, \citenamefont {Bilmes}, \citenamefont {Broughton},
\citenamefont {Buckley}, \citenamefont {Buell}, \citenamefont {Burkett},
\citenamefont {Bushnell}, \citenamefont {Chiaro}, \citenamefont {Collins},
\citenamefont {Courtney}, \citenamefont {Demura}, \citenamefont {Derk},
\citenamefont {Eppens}, \citenamefont {Erickson}, \citenamefont {Faoro},
\citenamefont {Farhi}, \citenamefont {Fowler}, \citenamefont {Foxen},
\citenamefont {Giustina}, \citenamefont {Greene}, \citenamefont {Gross},
\citenamefont {Harrigan}, \citenamefont {Harrington}, \citenamefont {Hilton},
\citenamefont {Hong}, \citenamefont {Huang}, \citenamefont {Huggins},
\citenamefont {Ioffe}, \citenamefont {Isakov}, \citenamefont {Jeffrey},
\citenamefont {Jiang}, \citenamefont {Kafri}, \citenamefont {Kechedzhi},
\citenamefont {Khattar}, \citenamefont {Kim}, \citenamefont {Klimov},
\citenamefont {Korotkov}, \citenamefont {Kostritsa}, \citenamefont
{Landhuis}, \citenamefont {Laptev}, \citenamefont {Locharla}, \citenamefont
{Lucero}, \citenamefont {Martin}, \citenamefont {McClean}, \citenamefont
{McEwen}, \citenamefont {Miao}, \citenamefont {Mohseni}, \citenamefont
{Montazeri}, \citenamefont {Mruczkiewicz}, \citenamefont {Mutus},
\citenamefont {Naaman}, \citenamefont {Neeley}, \citenamefont {Neill},
\citenamefont {Niu}, \citenamefont {O’Brien}, \citenamefont {Opremcak},
\citenamefont {Pató}, \citenamefont {Petukhov}, \citenamefont {Rubin},
\citenamefont {Sank}, \citenamefont {Shvarts}, \citenamefont {Strain},
\citenamefont {Szalay}, \citenamefont {Villalonga}, \citenamefont {White},
\citenamefont {Yao}, \citenamefont {Yeh}, \citenamefont {Yoo}, \citenamefont
{Zalcman}, \citenamefont {Neven}, \citenamefont {Boixo}, \citenamefont
{Megrant}, \citenamefont {Chen}, \citenamefont {Kelly}, \citenamefont
{Smelyanskiy}, \citenamefont {Kitaev}, \citenamefont {Knap}, \citenamefont
{Pollmann},\ and\ \citenamefont {Roushan}}]{doi:10.1126/science.abi8378}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont
{Satzinger}}, \bibinfo {author} {\bibfnamefont {Y.-J.}\ \bibnamefont {Liu}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Smith}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Knapp}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Newman}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Quintana}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Mi}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dunsworth}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Gidney}}, \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Aleiner}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Arute}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Arya}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Atalaya}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}},
\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Bardin}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Basso}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Bengtsson}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Bilmes}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Broughton}}, \bibinfo {author} {\bibfnamefont {B.~B.}\
\bibnamefont {Buckley}}, \bibinfo {author} {\bibfnamefont {D.~A.}\
\bibnamefont {Buell}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Burkett}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Bushnell}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Collins}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Courtney}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Demura}}, \bibinfo {author} {\bibfnamefont
{A.~R.}\ \bibnamefont {Derk}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Eppens}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Erickson}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Faoro}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {A.~G.}\
\bibnamefont {Fowler}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Foxen}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Giustina}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Greene}}, \bibinfo
{author} {\bibfnamefont {J.~A.}\ \bibnamefont {Gross}}, \bibinfo {author}
{\bibfnamefont {M.~P.}\ \bibnamefont {Harrigan}}, \bibinfo {author}
{\bibfnamefont {S.~D.}\ \bibnamefont {Harrington}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Hilton}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Hong}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Huang}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Huggins}},
\bibinfo {author} {\bibfnamefont {L.~B.}\ \bibnamefont {Ioffe}}, \bibinfo
{author} {\bibfnamefont {S.~V.}\ \bibnamefont {Isakov}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Kafri}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Kechedzhi}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Khattar}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {P.~V.}\
\bibnamefont {Klimov}}, \bibinfo {author} {\bibfnamefont {A.~N.}\
\bibnamefont {Korotkov}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Kostritsa}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Landhuis}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Laptev}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Locharla}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Lucero}}, \bibinfo {author} {\bibfnamefont
{O.}~\bibnamefont {Martin}}, \bibinfo {author} {\bibfnamefont {J.~R.}\
\bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{McEwen}}, \bibinfo {author} {\bibfnamefont {K.~C.}\ \bibnamefont {Miao}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mohseni}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Montazeri}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Mruczkiewicz}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Mutus}}, \bibinfo {author} {\bibfnamefont
{O.}~\bibnamefont {Naaman}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Neeley}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Neill}}, \bibinfo {author} {\bibfnamefont {M.~Y.}\
\bibnamefont {Niu}}, \bibinfo {author} {\bibfnamefont {T.~E.}\ \bibnamefont
{O’Brien}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Opremcak}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Pató}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Petukhov}}, \bibinfo {author}
{\bibfnamefont {N.~C.}\ \bibnamefont {Rubin}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont
{V.}~\bibnamefont {Shvarts}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Strain}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Szalay}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Villalonga}}, \bibinfo {author} {\bibfnamefont {T.~C.}\
\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Yao}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Yeh}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Yoo}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zalcman}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Neven}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Megrant}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Kelly}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Smelyanskiy}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kitaev}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Knap}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Pollmann}}, \ and\ \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Roushan}},\ }\bibfield {title} {\emph
{\bibinfo {title} {Realizing topologically ordered states on a quantum
processor},\ }}\href {\doibase 10.1126/science.abi8378} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {374}},\ \bibinfo
{pages} {1237} (\bibinfo {year} {2021})},\ \Eprint
{http://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.abi8378}
{https://www.science.org/doi/pdf/10.1126/science.abi8378} \BibitemShut
{NoStop} \bibitem [{\citenamefont {Egan}\ \emph {et~al.}(2021)\citenamefont {Egan},
\citenamefont {Debroy}, \citenamefont {Noel}, \citenamefont {Risinger},
\citenamefont {Zhu}, \citenamefont {Biswas}, \citenamefont {Newman},
\citenamefont {Li}, \citenamefont {Brown}, \citenamefont {Cetina} \emph
{et~al.}}]{egan2021fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Egan}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Debroy}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Noel}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Risinger}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Biswas}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Newman}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {K.~R.}\
\bibnamefont {Brown}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Cetina}}, \emph {et~al.},\ }\bibfield {title} {\emph {\bibinfo {title}
{Fault-tolerant control of an error-corrected qubit},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {598}},\ \bibinfo {pages} {281} (\bibinfo {year}
{2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ryan-Anderson}\ \emph {et~al.}(2021)\citenamefont
{Ryan-Anderson}, \citenamefont {Bohnet}, \citenamefont {Lee}, \citenamefont
{Gresh}, \citenamefont {Hankin}, \citenamefont {Gaebler}, \citenamefont
{Francois}, \citenamefont {Chernoguzov}, \citenamefont {Lucchetti},
\citenamefont {Brown} \emph {et~al.}}]{ryan2021realization}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Ryan-Anderson}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Bohnet}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Lee}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gresh}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Hankin}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Gaebler}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Francois}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Chernoguzov}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Lucchetti}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Brown}}, \emph {et~al.},\ }\bibfield {title} {\emph
{\bibinfo {title} {Realization of real-time fault-tolerant quantum error
correction},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:2107.07505}\ } (\bibinfo {year} {2021})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Postler}\ \emph {et~al.}(2021)\citenamefont
{Postler}, \citenamefont {Heu{\ss}en}, \citenamefont {Pogorelov},
\citenamefont {Rispler}, \citenamefont {Feldker}, \citenamefont {Meth},
\citenamefont {Marciniak}, \citenamefont {Stricker}, \citenamefont
{Ringbauer}, \citenamefont {Blatt} \emph
{et~al.}}]{postler2021demonstration}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Postler}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Heu{\ss}en}},
\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Pogorelov}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Rispler}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Feldker}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Meth}}, \bibinfo {author} {\bibfnamefont {C.~D.}\
\bibnamefont {Marciniak}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Stricker}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ringbauer}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}}, \emph
{et~al.},\ }\bibfield {title} {\emph {\bibinfo {title} {Demonstration of
fault-tolerant universal quantum gate operations},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2111.12654}\
} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bartolucci}\ \emph {et~al.}(2023)\citenamefont
{Bartolucci}, \citenamefont {Birchall}, \citenamefont {Bombin}, \citenamefont
{Cable}, \citenamefont {Dawson}, \citenamefont {Gimeno-Segovia},
\citenamefont {Johnston}, \citenamefont {Kieling}, \citenamefont {Nickerson},
\citenamefont {Pant} \emph {et~al.}}]{bartolucci2021fusion}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bartolucci}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Birchall}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bombin}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Cable}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Dawson}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Gimeno-Segovia}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Johnston}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Kieling}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Nickerson}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Pant}}, \emph {et~al.},\ }\bibfield {title} {\emph
{\bibinfo {title} {Fusion-based quantum computation},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature Communications}\ }\textbf
{\bibinfo {volume} {14}},\ \bibinfo {pages} {912} (\bibinfo {year}
{2023})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fujii}(2015)}]{fujii2015quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Fujii}},\ }\href {\doibase https://doi.org/10.1007/978-981-287-996-7} {\emph
{\bibinfo {title} {Quantum Computation with Topological Codes: from qubit to
topological fault-tolerance}}},\ \bibinfo {series} {SpringerBriefs in
Mathematical Physics}, Vol.~\bibinfo {volume} {8}\ (\bibinfo {publisher}
{Springer},\ \bibinfo {address} {Singapore},\ \bibinfo {year}
{2015})\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombin}\ \emph {et~al.}(2021)\citenamefont {Bombin},
\citenamefont {Kim}, \citenamefont {Litinski}, \citenamefont {Nickerson},
\citenamefont {Pant}, \citenamefont {Pastawski}, \citenamefont {Roberts},\
and\ \citenamefont {Rudolph}}]{bombin2021interleaving}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombin}}, \bibinfo {author} {\bibfnamefont {I.~H.}\ \bibnamefont {Kim}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Litinski}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Nickerson}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Pant}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Pastawski}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Roberts}}, \ and\ \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Rudolph}},\ }\bibfield {title} {\emph {\bibinfo {title}
{Interleaving: Modular architectures for fault-tolerant photonic quantum
computing},\ }}\href {https://arxiv.org/abs/2103.08612} {\bibfield {journal}
{\bibinfo {journal} {arXiv preprint arXiv:2103.08612}\ } (\bibinfo {year}
{2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont
{Kitaev}(1998)}]{bravyi1998quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~B.}\ \bibnamefont
{Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont
{Kitaev}},\ }\bibfield {title} {\emph {\bibinfo {title} {Quantum codes on a
lattice with boundary},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {arXiv preprint quant-ph/9811052}\ } (\bibinfo {year}
{1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nussinov}\ and\ \citenamefont
{Ortiz}(2009)}]{nussinov2009symmetry}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Nussinov}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Ortiz}},\ }\bibfield {title} {\emph {\bibinfo {title} {A symmetry principle
for topological quantum order},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Annals of Physics}\ }\textbf {\bibinfo {volume}
{324}},\ \bibinfo {pages} {977} (\bibinfo {year} {2009})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Bomb{\'\i}n}\ and\ \citenamefont
{Martin-Delgado}(2007)}]{bombin2007optimal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb{\'\i}n}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Martin-Delgado}},\ }\bibfield {title} {\emph {\bibinfo {title} {Optimal
resources for topological two-dimensional stabilizer codes: Comparative
study},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical
Review A}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {012305}
(\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Litinski}(2019{\natexlab{a}})}]{litinski2019game}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Litinski}},\ }\bibfield {title} {\emph {\bibinfo {title} {A game of surface
codes: Large-scale quantum computing with lattice surgery},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo
{volume} {3}},\ \bibinfo {pages} {128} (\bibinfo {year}
{2019}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tillich}\ and\ \citenamefont
{Z{\'e}mor}(2014)}]{tillich2014quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont
{Tillich}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Z{\'e}mor}},\ }\bibfield {title} {\emph {\bibinfo {title} {Quantum LDPC
codes with positive rate and minimum distance proportional to the square root
of the blocklength},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {IEEE Transactions on Information Theory}\ }\textbf {\bibinfo
{volume} {60}},\ \bibinfo {pages} {1193} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gottesman}(2013)}]{gottesman2013fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Gottesman}},\ }\bibfield {title} {\emph {\bibinfo {title} {Fault-tolerant
quantum computation with constant overhead},\ }}\href
{https://arxiv.org/abs/1310.2984} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:1310.2984}\ } (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Fawzi}\ \emph
{et~al.}(2018{\natexlab{a}})\citenamefont {Fawzi}, \citenamefont
{Grospellier},\ and\ \citenamefont {Leverrier}}]{fawzi2018constant}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Fawzi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Grospellier}},
\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Leverrier}},\ }in\
\href@noop {} {\emph {\bibinfo {booktitle} {2018 IEEE 59th Annual Symposium
on Foundations of Computer Science (FOCS)}}}\ (\bibinfo {organization}
{IEEE},\ \bibinfo {year} {2018})\ pp.\ \bibinfo {pages}
{743--754}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fawzi}\ \emph
{et~al.}(2018{\natexlab{b}})\citenamefont {Fawzi}, \citenamefont
{Grospellier},\ and\ \citenamefont {Leverrier}}]{fawzi2018efficient}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Fawzi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Grospellier}},
\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Leverrier}},\ }in\
\href@noop {} {\emph {\bibinfo {booktitle} {Proceedings of the 50th Annual
ACM SIGACT Symposium on Theory of Computing}}}\ (\bibinfo {year} {2018})\
pp.\ \bibinfo {pages} {521--534}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuckmann}\ and\ \citenamefont
{Eberhardt}(2021{\natexlab{a}})}]{breuckmann2021ldpc}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont
{Breuckmann}}\ and\ \bibinfo {author} {\bibfnamefont {J.~N.}\ \bibnamefont
{Eberhardt}},\ }\bibfield {title} {\emph {\bibinfo {title} {Quantum
low-density parity-check codes},\ }}\href {\doibase
10.1103/PRXQuantum.2.040101} {\bibfield {journal} {\bibinfo {journal} {PRX
Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {040101}
(\bibinfo {year} {2021}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hastings}\ \emph {et~al.}(2021)\citenamefont
{Hastings}, \citenamefont {Haah},\ and\ \citenamefont
{O'Donnell}}]{hastings2021fiber}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Hastings}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Haah}}, \
and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {O'Donnell}},\ }in\
\href@noop {} {\emph {\bibinfo {booktitle} {Proceedings of the 53rd Annual
ACM SIGACT Symposium on Theory of Computing}}}\ (\bibinfo {year} {2021})\
pp.\ \bibinfo {pages} {1276--1288}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuckmann}\ and\ \citenamefont
{Eberhardt}(2021{\natexlab{b}})}]{breuckmann2021balanced}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont
{Breuckmann}}\ and\ \bibinfo {author} {\bibfnamefont {J.~N.}\ \bibnamefont
{Eberhardt}},\ }\bibfield {title} {\emph {\bibinfo {title} {Balanced product
quantum codes},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{IEEE Transactions on Information Theory}\ }\textbf {\bibinfo {volume}
{67}},\ \bibinfo {pages} {6653} (\bibinfo {year}
{2021}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Panteleev}\ and\ \citenamefont
{Kalachev}(2021)}]{panteleev2021asymptotically}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Panteleev}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Kalachev}},\ }\bibfield {title} {\emph {\bibinfo {title} {Asymptotically
good quantum and locally testable classical LDPC codes},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2111.03654}\
} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gottesman}(1997)}]{gottesman1997stabilizer}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Gottesman}},\ }\emph {\bibinfo {title} {Stabilizer codes and quantum error
correction}},\ \href {\doibase
https://doi.org/10.48550/arXiv.quant-ph/9705052} {Ph.D. thesis},\ \bibinfo
{school} {California Institute of Technology}, \bibinfo {address} {Pasadena,
California} (\bibinfo {year} {1997})\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombin}\ and\ \citenamefont
{Martin-Delgado}(2006{\natexlab{a}})}]{bombin2006topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombin}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Martin-Delgado}},\ }\bibfield {title} {\emph {\bibinfo {title} {Topological
quantum distillation},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {97}},\
\bibinfo {pages} {180501} (\bibinfo {year} {2006}{\natexlab{a}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Paetznick}\ and\ \citenamefont
{Reichardt}(2013)}]{paetznick2013universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Paetznick}}\ and\ \bibinfo {author} {\bibfnamefont {B.~W.}\ \bibnamefont
{Reichardt}},\ }\bibfield {title} {\emph {\bibinfo {title} {Universal
fault-tolerant quantum computation with only transversal gates and error
correction},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physical review letters}\ }\textbf {\bibinfo {volume} {111}},\ \bibinfo
{pages} {090505} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Newman}\ \emph {et~al.}(2020)\citenamefont {Newman},
\citenamefont {de~Castro},\ and\ \citenamefont
{Brown}}]{newman2020generating}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Newman}}, \bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont
{de~Castro}}, \ and\ \bibinfo {author} {\bibfnamefont {K.~R.}\ \bibnamefont
{Brown}},\ }\bibfield {title} {\emph {\bibinfo {title} {Generating
fault-tolerant cluster states from crystal structures},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo
{volume} {4}},\ \bibinfo {pages} {295} (\bibinfo {year} {2020})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Hastings}\ and\ \citenamefont
{Haah}(2021)}]{hastings2021dynamically}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Hastings}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Haah}},\ }\bibfield {title} {\emph {\bibinfo {title} {Dynamically generated
logical qubits},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quantum}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {564}
(\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kitaev}(2006)}]{kitaev2006anyons}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kitaev}},\ }\bibfield {title} {\emph {\bibinfo {title} {Anyons in an
exactly solved model and beyond},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Annals of Physics}\ }\textbf {\bibinfo {volume}
{321}},\ \bibinfo {pages} {2} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Preskill}(1999)}]{preskill1999lecture}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Preskill}},\ }\bibfield {title} {\emph {\bibinfo {title} {Lecture notes for
physics 219: Quantum computation},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Caltech Lecture Notes}\ } (\bibinfo {year}
{1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Levin}(2013)}]{levin2013protected}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Levin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Protected edge modes
without symmetry},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo
{pages} {021009} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2017)\citenamefont {Liu},
\citenamefont {Wozniakowski},\ and\ \citenamefont {Jaffe}}]{liu2017quon}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wozniakowski}}, \
and\ \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Jaffe}},\
}\bibfield {title} {\emph {\bibinfo {title} {Quon 3D language for quantum
information},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Proceedings of the National Academy of Sciences}\ }\textbf {\bibinfo
{volume} {114}},\ \bibinfo {pages} {2497} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barkeshli}\ \emph {et~al.}(2019)\citenamefont
{Barkeshli}, \citenamefont {Bonderson}, \citenamefont {Cheng},\ and\
\citenamefont {Wang}}]{barkeshli2019symmetry}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Barkeshli}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Bonderson}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cheng}}, \
and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Wang}},\ }\bibfield
{title} {\emph {\bibinfo {title} {Symmetry fractionalization, defects, and
gauging of topological phases},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo {volume}
{100}},\ \bibinfo {pages} {115147} (\bibinfo {year} {2019})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Wen}(2003)}]{wen2003quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-G.}\ \bibnamefont
{Wen}},\ }\bibfield {title} {\emph {\bibinfo {title} {Quantum orders in an
exact soluble model},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Physical review letters}\ }\textbf {\bibinfo {volume} {90}},\
\bibinfo {pages} {016803} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombin}\ and\ \citenamefont
{Martin-Delgado}(2006{\natexlab{b}})}]{bombin2006topologicalencoding}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombin}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Martin-Delgado}},\ }\bibfield {title} {\emph {\bibinfo {title} {Topological
quantum error correction with optimal encoding rate},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf
{\bibinfo {volume} {73}},\ \bibinfo {pages} {062303} (\bibinfo {year}
{2006}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tomita}\ and\ \citenamefont
{Svore}(2014)}]{tomita2014low}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Tomita}}\ and\ \bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont
{Svore}},\ }\bibfield {title} {\emph {\bibinfo {title} {Low-distance surface
codes under realistic quantum noise},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {90}},\
\bibinfo {pages} {062320} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1}
\BibitemOpen
\bibinfo {note} {The two types of anyons of the surface code are also often
referred to as $e$-type and $m$-type, or alternatively $X$-type and
$Z$-type.}\BibitemShut {Stop} \bibitem [{\citenamefont {Beverland}\ \emph {et~al.}()\citenamefont
{Beverland}, \citenamefont {Buerschaper}, \citenamefont {Koenig},
\citenamefont {Pastawski}, \citenamefont {Preskill},\ and\ \citenamefont
{Sijher}}]{beverland2016protected}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont
{Beverland}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Buerschaper}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Koenig}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pastawski}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Preskill}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Sijher}},\ }\bibfield {title}
{\emph {\bibinfo {title} {Protected gates for topological quantum field
theories},\ }}\href@noop {} {\bibinfo {journal} {Journal of Mathematical
Physics}\ }\BibitemShut {NoStop} \bibitem [{\citenamefont {Kitaev}\ and\ \citenamefont
{Kong}(2012)}]{kitaev2012models}
\BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Kitaev}}\ and\ \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Kong}},\ }\bibfield {title} {\emph {\bibinfo {title}
{Models for gapped boundaries and domain walls},\ }}\href {\doibase
10.1007/s00220-012-1500-5} {\bibfield {journal} {\bibinfo {journal}
{Communications in Mathematical Physics}\ }\textbf {\bibinfo {volume}
{313}},\ \bibinfo {pages} {351} (\bibinfo {year} {2012})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Levin}\ and\ \citenamefont
{Gu}(2012)}]{levin2012braiding}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Levin}}\ and\ \bibinfo {author} {\bibfnamefont {Z.-C.}\ \bibnamefont {Gu}},\
}\bibfield {title} {\emph {\bibinfo {title} {Braiding statistics approach to
symmetry-protected topological phases},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review B}\ }\textbf {\bibinfo
{volume} {86}},\ \bibinfo {pages} {115109} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lan}\ \emph {et~al.}(2015)\citenamefont {Lan},
\citenamefont {Wang},\ and\ \citenamefont {Wen}}]{lan2015gapped}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Lan}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Wang}}, \
and\ \bibinfo {author} {\bibfnamefont {X.-G.}\ \bibnamefont {Wen}},\
}\bibfield {title} {\emph {\bibinfo {title} {Gapped domain walls, gapped
boundaries, and topological degeneracy},\ }}\href {\doibase
10.1103/PhysRevLett.114.076402} {\bibfield {journal} {\bibinfo {journal}
{Physical Review Letters}\ }\textbf {\bibinfo {volume} {114}},\ \bibinfo
{pages} {076402} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{Note2()}]{Note2}
\BibitemOpen
\bibinfo {note} {Note that generally, the term ``domain wall'' refers to any
boundary between two topological phases. Here, we specifically use it to
refer to the primal-dual swapping boundary.}\BibitemShut {Stop} \bibitem [{\citenamefont {Freedman}\ and\ \citenamefont
{Meyer}(2001)}]{freedman2001projective}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Freedman}}\ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Meyer}},\ }\bibfield {title} {\emph {\bibinfo {title} {Projective plane and
planar quantum codes},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Foundations of Computational Mathematics}\ }\textbf {\bibinfo
{volume} {1}},\ \bibinfo {pages} {325} (\bibinfo {year} {2001})}\BibitemShut
{NoStop} \bibitem [{Note3()}]{Note3}
\BibitemOpen
\bibinfo {note} {For example, the 2D color code (which is locally equivalent
to two copies of the surface code~\cite {bombin2012universal}) has a
symmetry group containing 72 elements~\cite
{yoshida2015topological,scruby2020hierarchy}, compared to the $\protect
\mathbbm {Z}_2$ symmetry of a single surface code.}\BibitemShut {Stop} \bibitem [{Note4()}]{Note4}
\BibitemOpen
\bibinfo {note} {This cell complex is distinct from the cell complex commonly
used in the context of fault-tolerant topological MBQC~\cite
{raussendorf2007fault,raussendorf2007topological, nickerson2018measurement}
in which the checks correspond to 3-cells and 0-cells of the
complex.}\BibitemShut {Stop} \bibitem [{Note5()}]{Note5}
\BibitemOpen
\bibinfo {note} {In condensed matter language, this check operator group can
be understood as a $\protect \mathbbm {Z}_2\times \protect \mathbbm {Z}_2$
1-form symmetry~\cite
{gaiotto2015generalized,kapustin2017higher,roberts2020symmetry}.}\BibitemShut
{Stop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont
{Kitaev}(2005)}]{bravyi2005universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kitaev}},\ }\bibfield {title} {\emph {\bibinfo {title} {Universal quantum
computation with ideal Clifford gates and noisy ancillas},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf
{\bibinfo {volume} {71}},\ \bibinfo {pages} {022316} (\bibinfo {year}
{2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont
{Haah}(2012)}]{bravyi2012magic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Haah}},\
}\bibfield {title} {\emph {\bibinfo {title} {Magic-state distillation with
low overhead},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physical Review A}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages}
{052329} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Litinski}(2019{\natexlab{b}})}]{litinski2019magic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Litinski}},\ }\bibfield {title} {\emph {\bibinfo {title} {Magic state
distillation: Not as costly as you think},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {3}},\
\bibinfo {pages} {205} (\bibinfo {year} {2019}{\natexlab{b}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Kim}\ \emph {et~al.}(2022)\citenamefont {Kim},
\citenamefont {Liu}, \citenamefont {Pallister}, \citenamefont {Pol},
\citenamefont {Roberts},\ and\ \citenamefont {Lee}}]{kim2021faulttolerant}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~H.}\ \bibnamefont
{Kim}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Liu}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pallister}}, \bibinfo
{author} {\bibfnamefont {W.}~\bibnamefont {Pol}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Roberts}}, \ and\ \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Lee}},\ }\bibfield {title} {\emph
{\bibinfo {title} {Fault-tolerant resource estimate for quantum chemical
simulations: Case study on Li-ion battery electrolyte molecules},\ }}\href
{\doibase 10.1103/PhysRevResearch.4.023019} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo
{pages} {023019} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kubica}\ \emph {et~al.}(2015)\citenamefont {Kubica},
\citenamefont {Yoshida},\ and\ \citenamefont
{Pastawski}}]{kubica2015unfolding}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kubica}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yoshida}}, \
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pastawski}},\
}\bibfield {title} {\emph {\bibinfo {title} {Unfolding the color code},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of
Physics}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {083026}
(\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Moussa}(2016)}]{moussa2016transversal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont
{Moussa}},\ }\bibfield {title} {\emph {\bibinfo {title} {Transversal
Clifford gates on folded surface codes},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo
{volume} {94}},\ \bibinfo {pages} {042316} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Elliott}\ \emph {et~al.}(2009)\citenamefont
{Elliott}, \citenamefont {Eastin},\ and\ \citenamefont
{Caves}}]{elliott2009graphical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Elliott}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Eastin}}, \
and\ \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Caves}},\
}\bibfield {title} {\emph {\bibinfo {title} {Graphical description of Pauli
measurements on stabilizer states},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Journal of Physics A: Mathematical and Theoretical}\
}\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {025301} (\bibinfo
{year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Coecke}\ and\ \citenamefont
{Duncan}(2008)}]{coecke2008interacting}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Coecke}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Duncan}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {International
Colloquium on Automata, Languages, and Programming}}}\ (\bibinfo
{organization} {Springer},\ \bibinfo {year} {2008})\ pp.\ \bibinfo {pages}
{298--310}\BibitemShut {NoStop} \bibitem [{\citenamefont {van~de Wetering}(2020)}]{van2020zx}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{van~de Wetering}},\ }\bibfield {title} {\emph {\bibinfo {title}
{ZX-calculus for the working quantum computer scientist},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2012.13966}\
} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {de~Beaudrap}\ and\ \citenamefont
{Horsman}(2020)}]{de2020zx}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{de~Beaudrap}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Horsman}},\ }\bibfield {title} {\emph {\bibinfo {title} {The ZX calculus is
a language for surface code lattice surgery},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {4}},\
\bibinfo {pages} {218} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Calderbank}\ and\ \citenamefont
{Shor}(1996)}]{calderbank1996good}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~R.}\ \bibnamefont
{Calderbank}}\ and\ \bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont
{Shor}},\ }\bibfield {title} {\emph {\bibinfo {title} {Good quantum
error-correcting codes exist},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {54}},\
\bibinfo {pages} {1098} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Steane}(1996)}]{steane1996multiple}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Steane}},\ }\bibfield {title} {\emph {\bibinfo {title} {Multiple-particle
interference and quantum error correction},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Proceedings of the Royal Society of London.
Series A: Mathematical, Physical and Engineering Sciences}\ }\textbf
{\bibinfo {volume} {452}},\ \bibinfo {pages} {2551} (\bibinfo {year}
{1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ \emph {et~al.}(2010)\citenamefont {Bravyi},
\citenamefont {Poulin},\ and\ \citenamefont {Terhal}}]{bravyi2010tradeoffs}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Poulin}}, \
and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Terhal}},\
}\bibfield {title} {\emph {\bibinfo {title} {Tradeoffs for reliable quantum
information storage in 2D systems},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo {volume}
{104}},\ \bibinfo {pages} {050503} (\bibinfo {year} {2010})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Krishna}\ and\ \citenamefont
{Poulin}(2021)}]{krishna2021fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Krishna}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Poulin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Fault-tolerant
gates on hypergraph product codes},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review X}\ }\textbf {\bibinfo {volume} {11}},\
\bibinfo {pages} {011023} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cohen}\ \emph {et~al.}(2021)\citenamefont {Cohen},
\citenamefont {Kim}, \citenamefont {Bartlett},\ and\ \citenamefont
{Brown}}]{cohen2021low}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~Z.}\ \bibnamefont
{Cohen}}, \bibinfo {author} {\bibfnamefont {I.~H.}\ \bibnamefont {Kim}},
\bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont {Bartlett}}, \ and\
\bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont {Brown}},\ }\bibfield
{title} {\emph {\bibinfo {title} {Low-overhead fault-tolerant quantum
computing using long-range connectivity},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:2110.10794}\ } (\bibinfo
{year} {2021})}\BibitemShut {NoStop} \bibitem [{Note6()}]{Note6}
\BibitemOpen
\bibinfo {note} {More generally, the outer code encoding circuit for an
arbitrary state is Clifford for a general CSS code, but it may not be
constant depth.}\BibitemShut {Stop} \bibitem [{\citenamefont {Browne}\ and\ \citenamefont
{Rudolph}(2005)}]{browne2005resource}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont
{Browne}}\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Rudolph}},\ }\bibfield {title} {\emph {\bibinfo {title} {Resource-efficient
linear optical quantum computation},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume}
{95}},\ \bibinfo {pages} {010501} (\bibinfo {year} {2005})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Gimeno-Segovia}\ \emph {et~al.}(2015)\citenamefont
{Gimeno-Segovia}, \citenamefont {Shadbolt}, \citenamefont {Browne},\ and\
\citenamefont {Rudolph}}]{gimeno2015three}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Gimeno-Segovia}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Shadbolt}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont
{Browne}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Rudolph}},\ }\bibfield {title} {\emph {\bibinfo {title} {From three-photon
Greenberger-Horne-Zeilinger states to ballistic universal quantum
computation},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physical review letters}\ }\textbf {\bibinfo {volume} {115}},\ \bibinfo
{pages} {020502} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hein}\ \emph {et~al.}(2004)\citenamefont {Hein},
\citenamefont {Eisert},\ and\ \citenamefont {Briegel}}]{hein2004graph}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Hein}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}}, \ and\
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Briegel}},\ }\bibfield
{title} {\emph {\bibinfo {title} {Multiparty entanglement in graph states},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {062311} (\bibinfo
{year} {2004})}\BibitemShut {NoStop} \bibitem [{Note7()}]{Note7}
\BibitemOpen
\bibinfo {note} {In Ref.~\cite {bartolucci2021fusion} this group is termed
the \protect \emph {fusion group} and denoted $F$. It is renamed the
measurement group $\protect \mathcal {M}$ here to note the inclusion of
single-qubit measurements when required.}\BibitemShut {Stop} \bibitem [{\citenamefont {{\L}odyga}\ \emph {et~al.}(2015)\citenamefont
{{\L}odyga}, \citenamefont {Mazurek}, \citenamefont {Grudka},\ and\
\citenamefont {Horodecki}}]{lodyga2015simple}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{{\L}odyga}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Mazurek}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Grudka}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}},\ }\bibfield {title}
{\emph {\bibinfo {title} {Simple scheme for encoding and decoding a qubit in
unknown state for various topological codes},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Scientific reports}\ }\textbf {\bibinfo
{volume} {5}},\ \bibinfo {pages} {8975} (\bibinfo {year} {2015})}\BibitemShut
{NoStop} \bibitem [{Note8()}]{Note8}
\BibitemOpen
\bibinfo {note} {Note that this is distinct from the phenomenological noise
model that is commonly used to model measurement and Pauli errors in a
code-based fault-tolerance scheme. Our model is closer to a gate error model
in that it accounts for large-scale entanglement generation from finite-sized
resources.}\BibitemShut {Stop} \bibitem [{Note9()}]{Note9}
\BibitemOpen
\bibinfo {note} {A potentially related observation is found in Ref.~\cite
{farrelly2020parallel} for a different family of codes, whereby different
logical qubits can be decoded independently while remaining nearly globally
optimal.}\BibitemShut {Stop} \bibitem [{\citenamefont {Kolmogorov}(2009)}]{kolmogorov2009blossom}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Kolmogorov}},\ }\bibfield {title} {\emph {\bibinfo {title} {Blossom V: a
new implementation of a minimum cost perfect matching algorithm},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Mathematical
Programming Computation}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages}
{43} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Delfosse}\ and\ \citenamefont
{Nickerson}(2017)}]{delfosse2017almost}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Delfosse}}\ and\ \bibinfo {author} {\bibfnamefont {N.~H.}\ \bibnamefont
{Nickerson}},\ }\bibfield {title} {\emph {\bibinfo {title} {Almost-linear
time decoding algorithm for topological codes},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:1709.06218}\ } (\bibinfo
{year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Henkel}\ and\ \citenamefont
{Sch{\"u}tz}(1994)}]{henkel1994boundary}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Henkel}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Sch{\"u}tz}},\ }\bibfield {title} {\emph {\bibinfo {title}
{Boundary-induced phase transitions in equilibrium and non-equilibrium
systems},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physica A: Statistical Mechanics and its Applications}\ }\textbf {\bibinfo
{volume} {206}},\ \bibinfo {pages} {187} (\bibinfo {year}
{1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont
{Vargo}(2013)}]{bravyi2013simulation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vargo}},\
}\bibfield {title} {\emph {\bibinfo {title} {Simulation of rare events in
quantum error correction},\ }}\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo
{pages} {062308} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kivlichan}\ \emph {et~al.}(2020)\citenamefont
{Kivlichan}, \citenamefont {Gidney}, \citenamefont {Berry}, \citenamefont
{Wiebe}, \citenamefont {McClean}, \citenamefont {Sun}, \citenamefont {Jiang},
\citenamefont {Rubin}, \citenamefont {Fowler}, \citenamefont {Aspuru-Guzik}
\emph {et~al.}}]{kivlichan2020improved}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~D.}\ \bibnamefont
{Kivlichan}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gidney}},
\bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont {Berry}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Wiebe}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont
{W.}~\bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Jiang}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Rubin}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fowler}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}}, \emph {et~al.},\
}\bibfield {title} {\emph {\bibinfo {title} {Improved fault-tolerant quantum
simulation of condensed-phase correlated electrons via trotterization},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum}\
}\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {296} (\bibinfo {year}
{2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {von Burg}\ \emph {et~al.}(2020)\citenamefont {von
Burg}, \citenamefont {Low}, \citenamefont {H{\"a}ner}, \citenamefont
{Steiger}, \citenamefont {Reiher}, \citenamefont {Roetteler},\ and\
\citenamefont {Troyer}}]{von2020quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {von
Burg}}, \bibinfo {author} {\bibfnamefont {G.~H.}\ \bibnamefont {Low}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {H{\"a}ner}}, \bibinfo
{author} {\bibfnamefont {D.~S.}\ \bibnamefont {Steiger}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Reiher}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Roetteler}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Troyer}},\ }\bibfield {title} {\emph {\bibinfo {title}
{Quantum computing enhanced computational catalysis},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2007.14460}\
} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Su}\ \emph {et~al.}(2021)\citenamefont {Su},
\citenamefont {Berry}, \citenamefont {Wiebe}, \citenamefont {Rubin},\ and\
\citenamefont {Babbush}}]{su2021faulttolerant}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Su}}, \bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont {Berry}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Wiebe}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Rubin}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Babbush}},\ }\bibfield {title} {\emph
{\bibinfo {title} {Fault-tolerant quantum simulations of chemistry in first
quantization},\ }}\href {https://arxiv.org/abs/2105.12767} {\ (\bibinfo
{year} {2021})},\ \Eprint {http://arxiv.org/abs/2105.12767} {arXiv:2105.12767
[quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Krishna}\ and\ \citenamefont
{Poulin}(2020)}]{krishna2020topological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Krishna}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Poulin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Topological
wormholes: Nonlocal defects on the toric code},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical Review Research}\ }\textbf {\bibinfo
{volume} {2}},\ \bibinfo {pages} {023116} (\bibinfo {year}
{2020})}\BibitemShut {NoStop} \bibitem [{Note10()}]{Note10}
\BibitemOpen
\bibinfo {note} {For instance, logical $\protect \overline {X}$ and $\protect
\overline {Z}$ operators of a toric code or planar code are traceable, but
the logical $\protect \overline {Y}$ is not, due to the unavoidable
self-intersection of string operators (see, for example, Fig.~\ref
{figLogicalBlockConcatenation}).}\BibitemShut {Stop} \bibitem [{\citenamefont {Gottesman}(2010)}]{gottesman2010introduction}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Gottesman}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Quantum
information science and its contributions to mathematics, Proceedings of
Symposia in Applied Mathematics}}},\ Vol.~\bibinfo {volume} {68}\ (\bibinfo
{year} {2010})\ pp.\ \bibinfo {pages} {13--58}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aaronson}\ and\ \citenamefont
{Gottesman}(2004)}]{aaronson2004improved}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Aaronson}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Gottesman}},\ }\bibfield {title} {\emph {\bibinfo {title} {Improved
simulation of stabilizer circuits},\ }}\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {70}},\
\bibinfo {pages} {052328} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pastawski}\ \emph {et~al.}(2015)\citenamefont
{Pastawski}, \citenamefont {Yoshida}, \citenamefont {Harlow},\ and\
\citenamefont {Preskill}}]{pastawski2015holographic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Pastawski}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yoshida}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Harlow}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\bibfield {title}
{\emph {\bibinfo {title} {Holographic quantum error-correcting codes: Toy
models for the bulk/boundary correspondence},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Journal of High Energy Physics}\ }\textbf
{\bibinfo {volume} {2015}},\ \bibinfo {pages} {1} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cao}\ and\ \citenamefont
{Lackey}(2021)}]{cao2021quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Cao}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Lackey}},\
}\bibfield {title} {\emph {\bibinfo {title} {Quantum lego: Building quantum
error correction codes from tensor networks},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:2109.08158}\ } (\bibinfo
{year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farrelly}\ \emph {et~al.}(2021)\citenamefont
{Farrelly}, \citenamefont {Tuckett},\ and\ \citenamefont
{Stace}}]{farrelly2021local}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Farrelly}}, \bibinfo {author} {\bibfnamefont {D.~K.}\ \bibnamefont
{Tuckett}}, \ and\ \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont
{Stace}},\ }\bibfield {title} {\emph {\bibinfo {title} {Local tensor-network
codes},\ }}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv
preprint arXiv:2109.11996}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {PsiQuantum}(2023)}]{bombin2023unifying} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}\ \bibnamefont
{Bombin}}, \bibinfo {author} {\bibfnamefont {D.}\ \bibnamefont
{Litinski}}, \bibinfo {author} {\bibfnamefont {N.}\ \bibnamefont
{Nickerson}}, \bibinfo {author} {\bibfnamefont {F.}\ \bibnamefont
{Pastawski}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}\ \bibnamefont
{Roberts}},\ }\bibfield {title} {\emph {\bibinfo {title} {Unifying flavors of fault tolerance with the ZX calculus},\ }}\href{https://arxiv.org/abs/2303.08829} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2303.08829}\ } (\bibinfo {year} {2023})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Li}(2015)}]{li2015magic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Li}},\ }\bibfield {title} {\emph {\bibinfo {title} {A magic state’s
fidelity can be superior to the operations that created it},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf
{\bibinfo {volume} {17}},\ \bibinfo {pages} {023037} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombín}\ \emph {et~al.}(2022)\citenamefont
{Bombín}, \citenamefont {Pant}, \citenamefont {Roberts},\ and\ \citenamefont
{Seetharam}}]{bombin2022fault}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombín}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pant}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Roberts}}, \ and\
\bibinfo {author} {\bibfnamefont {K.~I.}\ \bibnamefont {Seetharam}},\
}\bibfield {title} {\emph {\bibinfo {title} {Fault-tolerant post-selection
for low overhead magic state preparation},\ }}\href
{https://arxiv.org/abs/2212.00813} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:2212.00813}\ } (\bibinfo {year} {2022})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Poulin}(2005)}]{poulin2005stabilizer}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Poulin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Stabilizer
formalism for operator quantum error correction},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Physical review letters}\ }\textbf {\bibinfo
{volume} {95}},\ \bibinfo {pages} {230504} (\bibinfo {year}
{2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bacon}(2006)}]{bacon2006operator}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Bacon}},\ }\bibfield {title} {\emph {\bibinfo {title} {Operator quantum
error-correcting subsystems for self-correcting quantum memories},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {012340} (\bibinfo
{year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {PsiQuantum}(2023)}]{modularDecoding} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}\ \bibnamefont
{Bombin}}, \bibinfo {author} {\bibfnamefont {C.}\ \bibnamefont
{Dawson}}, \bibinfo {author} {\bibfnamefont {Y.}\ \bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {N.}\ \bibnamefont
{Nickerson}}, \bibinfo {author} {\bibfnamefont {F.}\ \bibnamefont
{Pastawski}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}\ \bibnamefont
{Roberts}},\ }\bibfield {title} {\emph {\bibinfo {title} {Modular decoding: parallelizable real-time decoding for quantum computers},\ }}\href{https://arxiv.org/abs/2303.04846} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2303.04846}\ } (\bibinfo {year} {2023})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bombin}\ \emph
{et~al.}(2012{\natexlab{b}})\citenamefont {Bombin}, \citenamefont
{Duclos-Cianci},\ and\ \citenamefont {Poulin}}]{bombin2012universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bombin}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Duclos-Cianci}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Poulin}},\ }\bibfield {title} {\emph {\bibinfo {title} {Universal
topological phase of two-dimensional stabilizer codes},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf
{\bibinfo {volume} {14}},\ \bibinfo {pages} {073048} (\bibinfo {year}
{2012}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scruby}\ and\ \citenamefont
{Browne}(2020)}]{scruby2020hierarchy}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Scruby}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Browne}},\ }\bibfield {title} {\emph {\bibinfo {title} {A hierarchy of
anyon models realised by twists in stacked surface codes},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo
{volume} {4}},\ \bibinfo {pages} {251} (\bibinfo {year} {2020})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Gaiotto}\ \emph {et~al.}(2015)\citenamefont
{Gaiotto}, \citenamefont {Kapustin}, \citenamefont {Seiberg},\ and\
\citenamefont {Willett}}]{gaiotto2015generalized}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Gaiotto}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kapustin}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Seiberg}}, \ and\
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Willett}},\ }\bibfield
{title} {\emph {\bibinfo {title} {Generalized global symmetries},\
}}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of High
Energy Physics}\ }\textbf {\bibinfo {volume} {2015}},\ \bibinfo {pages} {172}
(\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kapustin}\ and\ \citenamefont
{Thorngren}(2017)}]{kapustin2017higher}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kapustin}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Thorngren}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Algebra,
Geometry, and Physics in the 21st Century}}}\ (\bibinfo {publisher}
{Springer},\ \bibinfo {year} {2017})\ pp.\ \bibinfo {pages}
{177--202}\BibitemShut {NoStop} \bibitem [{\citenamefont {Roberts}\ and\ \citenamefont
{Bartlett}(2020)}]{roberts2020symmetry}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Roberts}}\ and\ \bibinfo {author} {\bibfnamefont {S.~D.}\ \bibnamefont
{Bartlett}},\ }\bibfield {title} {\emph {\bibinfo {title}
{Symmetry-protected self-correcting quantum memories},\ }}\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Physical Review X}\ }\textbf
{\bibinfo {volume} {10}},\ \bibinfo {pages} {031041} (\bibinfo {year}
{2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farrelly}\ \emph {et~al.}(2020)\citenamefont
{Farrelly}, \citenamefont {Harris}, \citenamefont {McMahon},\ and\
\citenamefont {Stace}}]{farrelly2020parallel}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Farrelly}}, \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont
{Harris}}, \bibinfo {author} {\bibfnamefont {N.~A.}\ \bibnamefont {McMahon}},
\ and\ \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Stace}},\
}\bibfield {title} {\emph {\bibinfo {title} {Parallel decoding of multiple
logical qubits in tensor-network codes},\ }}\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:2012.07317}\ } (\bibinfo
{year} {2020})}\BibitemShut {NoStop} \end{thebibliography}
\end{document} |
\begin{document}
\title{Alexandrov meets Kirszbraun} \author{S. Alexander, V. Kapovitch, A. Petrunin} \date{} \maketitle
\begin{abstract} We give a simplified proof of the generalized Kirszbraun theorem for Alexandrov spaces, which is due to Lang and Schroeder. We also discuss related questions, both solved and open. \end{abstract}
\section{Introduction}
Kirszbraun's theorem states that any \emph{short map} (i.e. 1-Lipschitz map) from a subset of a Euclidean space to another Euclidean space can be extended to a short map defined on the whole space.
This theorem was first proved by Kirszbraun in \cite{kirszbraun}. Later it was reproved by Valentine in \cite{valentine-sphere} and \cite{valentine-kirszbraun}, where he also generalized it to pairs of Hilbert spaces of arbitrary dimension, as well as to pairs of spheres of the same dimension and pairs of hyperbolic spaces with the same curvature.
Valentine was also interested in pairs of metric spaces, say $\spc{U}$ and $\spc{L}$, with the following property: given a subset $Q\subset\spc{U}$, any short map $Q\to\spc{L}$ can be extended to a short map $\spc{U}\to \spc{L}$. (J.~Isbell in \cite{isbell} studied target spaces that satisfy this property for any source space.) It turns out that this property has a lot in common with the definition of Alexandrov spaces (see theorems \ref{thm:kirsz+}, \ref{thm:kirsz-def} and \ref{thm:cba-kirsz-def}). Surprisingly, this relationship was first discovered only in the 1990's; it was first published by Lang and Schroeder in \cite{lang-schroeder}. (The third author of this paper came to similar conclusions a couple of years earlier and told them to the first author, but did not publish the result.)
We slightly improve the results of Lang and Schroeder. Our proof is based on the barycentric maps introduced by Kleiner in \cite{kleiner}. The material of this paper will be included in the book on Alexandrov geometry that we are currently writing, but it seems useful to publish it now.
\parbf{Structure of the paper.} We introduce notation in Section~\ref{sec:prelim}. In Section~\ref{sec:4pt} we give alternative definitions of Alexandrov spaces based on the Kirszbraun property for 4-point sets. The generalized Kirszbraun theorem is proved in Section~\ref{sec:kirszbraun}. In Sections \ref{sec:1+n} and \ref{sec:2n+2} we describe some comparison properties of finite subsets of Alexandrov spaces. In Section~\ref{sec:kirszbraun:open} we discuss related open problems. Appendices~\ref{sec:baricentric} and \ref{sec:helly} describe Kleiner's barycentric map and an analog of Helly's theorem for Alexandrov spaces.
\parbf{Historical remark. }
Not much is known about the author of this remarkable theorem. The theorem appears in Kirszbraun's master's thesis, which he defended at Warsaw University in 1930. His first name was Moj\.{z}esz; his middle name was likely Dawid, but this is uncertain. He was born in either 1903 or 1904 and died in a ghetto in 1942. After university he worked as an actuary in an insurance company; \cite{kirszbraun} seems to be his only publication in mathematics.
\parbf{Acknowledgment.} We want to thank S.~Ivanov, N.~Lebedeva and A.~Lytchak for useful comments and pointing out misprints. Also we want to thank L.~Grabowski for bringing to our attention the entry about Kirszbraun in the Polish Biographical Dictionary.
\section{Preliminaries}\label{sec:prelim}
In this section we mainly introduce our notation.
\parbf{Metric spaces.} Let $\spc{X}$ be a metric space. The distance between two points $x,y\in\spc{X}$ will be denoted as $\dist{x}{y}{}$ or $\dist{x}{y}{\spc{X}}$.
Given $R\in[0,\infty]$ and $x\in \spc{X}$, the sets \begin{align*} \oBall(x,R)&=\{y\in \spc{X}\mid \dist{x}{y}{}<R\}, \\ \cBall[x,R]&=\{y\in \spc{X}\mid \dist{x}{y}{}\le R\} \end{align*} are called respectively the \emph{open} and \emph{closed ball} of radius $R$ with center at $x$.
A metric space $\spc{X}$ is called \emph{intrinsic} if for any $\eps>0$ and any two points $x,y\in \spc{X}$ with $\dist{x}{y}{}<\infty$ there is an $\eps$-midpoint for $x$ and $y$; i.e. there is a point $z\in \spc{X}$ such that $\dist{x}{z}{},\dist{z}{y}{}<\tfrac{1}{2}\cdot \dist[{{}}]{x}{y}{}+\eps$.
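A standard example of an intrinsic space which is not geodesic is the punctured plane $\RR^2\backslash\{0\}$ with its induced length metric: the points $x=(-1,0)$ and $y=(1,0)$ lie at distance $2$ and are joined by no geodesic, while for any $\eps>0$ the point $z_\delta=(0,\delta)$ is an $\eps$-midpoint for $x$ and $y$ once $\delta$ is small enough, since $\dist{x}{z_\delta}{}=\dist{z_\delta}{y}{}=\sqrt{1+\delta^2}<1+\eps$.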
\parbf{Model space.} $\Lob{m}{\kappa}$ denotes $m$-dimensional model space with curvature $\kappa$; i.e. the simply connected $m$-dimensional Riemannian manifold with constant sectional curvature $\kappa$.
Set $\varpi\kappa=\diam\Lob2\kappa$\index{$\varpi\kappa$}, so $\varpi\kappa=\infty$ if $\kappa\le0$ and $\varpi\kappa=\pi/\sqrt{\kappa}$ if $\kappa>0$. (The letter $\varpi{}$ is a glyph variant of lower case $\pi$, but is usually pronounced as \emph{pomega}.)
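For example, $\varpi1=\pi$ is the diameter of the unit sphere, and $\varpi4=\pi/2$ is the diameter of the sphere of radius $\tfrac12$.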
\parbf{Ghost of Euclid.} Let $\spc{X}$ be a metric space and $\II$ be a real interval. A globally isometric map $\gamma\:\II\to \spc{X}$ will be called a \emph{unit-speed geodesic}. A unit-speed geodesic between $p$ and $q$ will be denoted by $\geod_{[p q]}$. We consider $\geod_{[p q]}$ with parametrization starting at $p$; i.e. $\geod_{[p q]}(0)=p$ and $\geod_{[p q]}(\dist{p}{q}{})=q$. The image of $\geod_{[p q]}$ will be denoted by $[p q]$ and called a \emph{geodesic}\index{geodesic}.
Also we will use the following short-cut notation: \begin{align*} \l] p q \r[&=[p q]\backslash\{p,q\}, & \l] p q \r]&=[p q]\backslash\{p\}, & \l[ p q \r[&=[p q]\backslash\{q\}. \end{align*}
A metric space $\spc{X}$ is called \emph{geodesic} if for any two points $x,y\in \spc{X}$ there is a geodesic $[x y]$ in $\spc{X}$.
Given a geodesic $[p q]$, we denote by $\dir{p}{q}$ its direction at $p$. We may think of $\dir{p}{q}$ as belonging to the space of directions $\Sigma_p$ at $p$, which in turn can be identified with the unit sphere in the tangent space $\T_p$ at $p$. Further we set $\ddir{p}{q}=\dist[{{}}]{p}{q}{}\cdot\dir{p}{q}$; it is a \emph{tangent vector} at $p$, that is, an element of $\T_p$.
For a triple of points $p,q,r\in \spc{X}$, a choice of triple of geodesics $([q r], [r p], [p q])$ will be called a \emph{triangle} and we will use the notation $\trig p q r=([q r], [r p], [p q])$. If $p$ is distinct from $x$ and $y$, a pair of geodesics $([p x],[p y])$ will be called a \emph{hinge}\index{hinge}, and denoted by $\hinge p x y=([p x],[p y])$.
\parbf{Functions.} A locally Lipschitz function $f$ on a metric space $\spc{X}$ is called $\lambda$-convex ($\lambda$-concave) if for any geodesic $\geod_{[p q]}$ in $\spc{X}$ the real-to-real function $$t\mapsto f\circ\geod_{[p q]}(t)-\tfrac\lambda2\cdot t^2$$ is convex (respectively concave). In this case we write $f''\ge \lambda$ (respectively $f''\le \lambda$).
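For example, if $p$ is a point in the Euclidean space $\RR^m$, then the function $f\:x\mapsto \dist{p}{x}{}^2$ satisfies $f''\ge2$ (in fact $f''=2$): for any unit-speed geodesic $\geod$ in $\RR^m$, $$f\circ\geod(t)-t^2 =2\cdot t\cdot\langle \geod(0)-p,\geod'(0)\rangle+\dist{p}{\geod(0)}{}^2$$ is an affine function of $t$. In particular, $f$ is strongly convex.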
A function $f$ is called \emph{strongly convex} (\emph{strongly concave}) if $f''\ge \delta$ (respectively $f''\le -\delta$) for some $\delta>0$.
\parbf{Model angles and triangles.} Let $\spc{X}$ be a metric space, $p,q,r\in \spc{X}$ and $\kappa\in\RR$. Let us define a \emph{model triangle} $\trig{\~p}{\~q}{\~r}$ (briefly, $\trig{\~p}{\~q}{\~r}=\modtrig\kappa(p q r)$) to be a triangle in the model plane $\Lob2\kappa$ such that $$\dist{\~p}{\~q}{}=\dist{p}{q}{}, \ \ \dist{\~q}{\~r}{}=\dist{q}{r}{}, \ \ \dist{\~r}{\~p}{}=\dist{r}{p}{}.$$ If $\kappa\le 0$, the model triangle is said to be defined, since such a triangle always exists and is unique up to an isometry of $\Lob2\kappa$. If $\kappa>0$, the model triangle is said to be defined if in addition $$\dist{p}{q}{}+\dist{q}{r}{}+\dist{r}{p}{}< 2\cdot\varpi\kappa.$$ In this case the triangle also exists and is unique up to an isometry of $\Lob2\kappa$.
If for $p,q,r\in \spc{X}$, the model triangle $\trig{\~p}{\~q}{\~r}=\modtrig\kappa(p q r)$ is defined and $\dist{p}{q}{},\dist{p}{r}{}>0$, then the angle measure of $\trig{\~p}{\~q}{\~r}$ at $\~p$ will be called the \emph{model angle} of the triple $p$, $q$, $r$, and will be denoted by $\angk\kappa p q r$.
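For instance, for $\kappa=0$ the model angle can be computed directly from the Euclidean law of cosines: $$\cos\angk0 p q r =\frac{\dist{p}{q}{}^2+\dist{p}{r}{}^2-\dist{q}{r}{}^2}{2\cdot\dist{p}{q}{}\cdot\dist{p}{r}{}}.$$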
\parbf{Curvature bounded below.} We will denote by $\CBB{}{\kappa}$ the class of complete intrinsic spaces $\spc{L}$ with curvature $\ge\kappa$ in the sense of Alexandrov. Specifically, $\spc{L}\in \CBB{}{\kappa}$ if for any quadruple of points $p,x^1,x^2,x^3\in \spc{L}$, we have $$\angk\kappa p{x^1}{x^2} +\angk\kappa p{x^2}{x^3} +\angk\kappa p{x^3}{x^1}\le 2\cdot\pi,\eqlbl{Yup-kappa}$$ or at least one of the model angles $\angk\kappa p{x^i}{x^j}$ is not defined.
Condition \ref{Yup-kappa} will be called \emph{(1+3)-point comparison}.
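To see that this comparison is sharp already for $\spc{L}=\RR^2\in\CBB{}{0}$, note that in the Euclidean plane the model angles coincide with the actual angles; the three angles which $x^1,x^2,x^3$ subtend at $p$ sum to exactly $2\cdot\pi$ when $p$ lies in the solid triangle with these vertices, and to strictly less otherwise.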
According to Plaut's theorem \cite[Th. 27]{plaut:survey}, any space $\spc{L}\in \CBB{}{}$ is $G_\delta$-geodesic; that is, for any point $p\in \spc{L}$ there is a dense $G_\delta$-set $W_p\subset\spc{L}$ such that for any $q\in W_p$ there is a geodesic $[p q]$.
We will use two more equivalent definitions of $\CBB{}{}$ spaces (see \cite{AKP}). Namely, a complete $G_\delta$-geodesic space is in $\CBB{}{}$ if and only if it satisfies either of the following conditions: \begin{enumerate}
\item\label{POS-CBB-ref} (point-on-side comparison) For any geodesic $[x y]$ and $z\in \l]x y\r[$, we have $$\angk\kappa x p y\le\angk\kappa x p z; \eqlbl{POS-CBB}$$ or, equivalently, $$\dist{\~p}{\~z}{}\le \dist{p}{z}{},$$ where $\trig{\~p}{\~x}{\~y}=\modtrig\kappa(p x y)$, $\~z\in\l] \~x\~y\r[$, $\dist{\~x}{\~z}{}=\dist{x}{z}{}$.
\item (hinge comparison) For any hinge $\hinge x p y$, the angle $\mangle\hinge x p y$ is defined and $$\mangle\hinge x p y\ge\angk\kappa x p y.$$ Moreover, if $z\in\l]x y\r[$, $z\not=p$ then for any two hinges $\hinge z p y$ and $\hinge z p x$ with common side $[z p]$ $$\mangle\hinge z p y + \mangle\hinge z p x\le\pi.$$ \end{enumerate}
We also use the following standard result in Alexandrov geometry, which follows from the discussion in the survey of Plaut \cite[8.2]{plaut:survey}.
\begin{thm}{Theorem}\label{thm:cbb-lin-part} Let $\spc{L}\in \CBB{}{}$. Given an array of points $(x^1,x^2,\dots,x^n)$ in $\spc{L}$, there is a dense $G_\delta$-set $W\subset\spc{L}$ such that for any $p\in W$, all the directions $\dir{p}{x^i}$ lie in an isometric copy of a unit sphere in $\Sigma_p$. (Or, equivalently, all the vectors $\ddir{p}{x^i}$ lie in a subcone of the tangent space $\T_p$ which is isometric to a Euclidean space.) \end{thm}
\parbf{Curvature bounded above.} We will denote by $\Cat{}{\kappa}$ the class of metric spaces $\spc{U}$ in which any two points at distance $<\varpi\kappa$ are joined by a geodesic,
and which have curvature $\le\kappa$ in the following global sense of Alexandrov: namely, for any quadruple of points $p^1,p^2,x^1,x^2\in \spc{U}$, we have $$
\angk{\kappa}{p^1}{x^1}{x^2} \le \angk{\kappa}{p^1}{p^2}{x^1}+\angk{\kappa}{p^1}{p^2}{x^2}, \ \t{or}\ \angk{\kappa} {p^2}{x^1}{x^2}\le \angk{\kappa} {p^2}{p^1}{x^1} + \angk{\kappa} {p^2}{p^1}{x^2}, \eqlbl{gokova:eq:2+2}$$
or one of the six model angles above is undefined.
The condition \ref{gokova:eq:2+2} will be called \emph{(2+2)-point comparison} (or \emph{(2+2)-point $\kappa$-comparison} if a confusion may arise).
We denote the complete $\Cat{}{\kappa}$ spaces by $\cCat{}{\kappa}$.
The following lemma is a direct consequence of the definition:
\begin{thm}{Lemma}\label{lem:cat-complete} Any complete intrinsic
space $\spc{U}$ in which every quadruple $p^1,p^2,x^1,x^2$ satisfies the (2+2)-point $\kappa$-comparison is a $\cCat{}{\kappa}$ space (that is, any two points at distance $<\varpi\kappa$ are joined by a geodesic).
In particular, the completion of a $\Cat{}{\kappa}$ space again lies in $\Cat{}{\kappa}$. \end{thm}
We have the following basic facts (see \cite{AKP}):
\begin{thm}{Lemma}\label{lem:cat-unique} In a $\Cat{}{\kappa}$ space, geodesics of length $<\varpi\kappa$ are uniquely determined by, and continuously dependent on, their endpoint pairs.\end{thm}
\begin{thm}{Lemma}\label{lem:convex-balls} In a $\Cat{}{\kappa}$ space, any open ball $\oBall(x,R)$ of radius $R\le\varpi\kappa/2$ is convex, that is, $\oBall(x,R)$ contains every geodesic whose endpoints it contains. \end{thm}
We also use an equivalent definition of $\Cat{}{\kappa}$ spaces (see \cite{AKP}). Namely, a metric space $\spc{U}$ in which any two points at distance $<\varpi\kappa$ are joined by a geodesic is a $\Cat{}{\kappa}$ space if and only if it satisfies the following condition: \begin{enumerate}
\item (point-on-side comparison)\label{cat-monoton} for any geodesic $[x y]$ and $z\in \l]x y\r[$, we have $$\angk\kappa x p y\ge\angk\kappa x p z,$$ or equivalently, $$\dist{\~p}{\~z}{}\ge \dist{p}{z}{}, \eqlbl{POS-CAT}$$ where $\trig{\~p}{\~x}{\~y}=\modtrig\kappa(p x y)$, $\~z\in\l] \~x\~y\r[$, $\dist{\~x}{\~z}{}=\dist{x}{z}{}$.
\end{enumerate}
We also use Reshetnyak's majorization theorem \cite{reshetnyak:major}. Suppose $\~\alpha$ is a simple closed curve of finite length in $\Lob2{\kappa}$, and $D\subset\Lob2{\kappa}$ is a closed region bounded by $\~\alpha$. If $\spc{X}$ is a metric space, a length-nonincreasing map $F\:D\to\spc{X}$ is called \emph{majorizing} if it is length-preserving on $\~\alpha$. In this case, we say that $D$ \emph{majorizes} the curve $\alpha=F\circ\~\alpha$ under the map $F$.
\begin{thm}{Reshetnyak's majorization theorem} \label{thm:major} Any closed curve $\alpha$ of length $<2\cdot \varpi\kappa$ in $\spc{U}\in\Cat{}{\kappa}$ is majorized by a convex region in $\Lob2\kappa$. \end{thm}
\parbf{Ultralimit of metric spaces.} Given a metric space $\spc{X}$, its ultrapower (i.e. the ultralimit of the constant sequence $\spc{X}_n=\spc{X}$) will be denoted by $\spc{X}^\o$; here $\o$ denotes a fixed nonprincipal ultrafilter. For definitions and properties of ultrapowers, we refer to the paper of Kleiner and Leeb \cite[2.4]{kleiner-leeb}.
We use the following facts about ultrapowers which easily follow from the definitions (see \cite{AKP} for details): \begin{itemize} \item $\spc{X}\in\cCat{}{\kappa}\ \Longleftrightarrow\ \spc{X}^\o\in\cCat{}{\kappa}$. \item $\spc{X}\in\CBB{}{\kappa}\ \Longleftrightarrow\ \spc{X}^\o\in\CBB{}{\kappa}$. \item $\spc{X}$ is intrinsic if and only if $\spc{X}^\o$ is geodesic. \end{itemize}
Note that if $\spc{X}$ is \textit{proper} (namely, bounded closed sets are compact), then $\spc{X}$ and $\spc{X}^\o$ coincide. Thus a reader interested only in proper spaces may ignore everything related to ultrapower in this article.
\section{Short map extension definitions}\label{sec:4pt}
Theorems \ref{thm:kirsz-def} and \ref{thm:cba-kirsz-def}
give characterizations of $\CBB{}{\kappa}$ and $\Cat{}{\kappa}$. Very similar theorems were proved by Lang and Schroeder in \cite{lang-schroeder}.
\begin{thm}{Theorem}\label{thm:kirsz-def} Let $\spc{L}$ be a complete intrinsic space. Then $\spc{L}\in\CBB{}{\kappa}$ if and only if for any 3-point set $V_3$ and any 4-point set $V_4\supset V_3$ in $\spc{L}$,
any short map $f\:V_3\to\Lob2\kappa$ can be extended to a short map $F\:V_4\to\Lob2\kappa$ (so $f=F|_{V_3}$). \end{thm}
\begin{thm}{Theorem}\label{thm:cba-kirsz-def} Let $\spc{U}$ be a metric space in which any pair of points at distance $<\varpi\kappa$ are joined by a unique geodesic. Then $\spc{U}\in\Cat{}{\kappa}$ if and only if for any $3$-point set $V_3$ and $4$-point set $V_4\supset V_3$ in $\Lob2\kappa$, where the perimeter of $V_3$ is $<2\cdot\varpi\kappa$, any short map $f\:V_3\to\spc{U}$ can be extended to a short map $F\:V_4\to\spc{U}$. \end{thm}
The proof of the ``only if'' part of Theorem \ref{thm:kirsz-def} can be obtained as a corollary of Kirszbraun's theorem (\ref{thm:kirsz+}). But we present another proof, based on more elementary ideas. The ``only if'' part of Theorem \ref{thm:cba-kirsz-def} does not follow directly from Kirszbraun's theorem, since the desired extension is in $\spc{U}$, not just the completion of $\spc{U}$.
In the proof of Theorem~\ref{thm:kirsz-def}, we use the following lemma in the geometry of model planes. Here we say that two triangles with a common vertex \emph{do not overlap} if their convex hulls intersect only at the common vertex.
\begin{thm}{Overlap lemma} \label{lem:extend-overlap} Let $\trig{\~x^1}{\~x^2}{\~x^3}$ be a triangle in $\Lob2{\kappa}$. Let $\~p^1,\~p^2,\~p^3$ be points such that, for any permutation $(i,j,\kay)$ of $(1,2,3)$, we have \begin{enumerate}[(i)]
\item \label{no-overlap:px=px} $\dist{\~p^i}{\~x^\kay}{}=\dist{\~p^j}{\~x^\kay}{}$,
\item \label{no-overlap:orient-1} $\~p^i$ and $\~x^i$ lie in the same closed halfspace determined by $[\~x^j\~x^\kay]$,
\item \label{no-overlap:orient-2} $\mangle\hinge {\~x^i}{\~x^j}{\~p^k}+\mangle\hinge {\~x^i}{\~p^j}{\~x^k}<\pi$. \end{enumerate} Set $\mangle\~p^i=\mangle\hinge{\~p^i}{\~x^\kay}{\~x^j}$. It follows that: \begin{subthm}{two-overlap} If $\mangle{\~p^1} +\mangle {\~p^2}+\mangle {\~p^3} \le 2\cdot\pi$ and triangles $\trig{\~p^3}{\~x^1}{\~x^2}$, $\trig{\~p^2}{\~x^3}{\~x^1}$ do not overlap, then $$ \mangle {\~p^1} > \mangle{\~p^2}+ \mangle{\~p^3}.$$ \end{subthm} \begin{subthm}{no-overlap} No pair of triangles $\trig{\~p^i}{\~x^j}{\~x^\kay}$ overlap if and only if $$\mangle{\~p^1} +\mangle {\~p^2}+\mangle{\~p^3}> 2\cdot\pi.$$ \end{subthm} \end{thm}
\begin{wrapfigure}{r}{20mm} \begin{lpic}[t(-3mm),b(0mm),r(0mm),l(0mm)]{pics/contr-no-overlap(0.4)} \lbl[rt]{37,34;$\~p^1$} \lbl[tl]{10,36;$\~p^2$} \lbl[bl]{14,6;$\~p^3$} \lbl[lb]{13,51;$\~x^1$} \lbl[tr]{12,0;$\~x^2$} \lbl[l]{44,30;$\~x^3$} \end{lpic} \end{wrapfigure}
\parbf{Remark.} If $\kappa\le 0$, the ``only if'' part of (\ref{SHORT.no-overlap}) can be proved without using condition (\ref{no-overlap:px=px}). This follows immediately from the formula that relates the sum of angles for the hexagon $[\~p^1\~x^2\~p^3\~x^1\~p^2\~x^3]$ and its area: $$ \mangle\~p^1 - \mangle\~x^2 + \mangle\~p^3 - \mangle\~x^1 + \mangle\~p^2 - \mangle\~x^3 =2\cdot\pi-\kappa\cdot{\area}. $$
In case $\kappa>0$, condition (\ref{no-overlap:px=px}) is essential. An example for $\kappa>0$ can be constructed by perturbing the degenerate spherical configuration on the picture.
\parit{Proof.} Rotate the triangle $\trig{\~p^3}{\~x^1}{\~x^2}$ around $\~x^1$ to make $[\~x^1\~p^3]$ coincide with $[\~x^1\~p^2]$. Let $\dot x^2$ denote the image of $\~x^2$ after rotation. By (\ref{no-overlap:orient-1}) and (\ref{no-overlap:orient-2}), the triangles $\trig{\~p^3}{\~x^1}{\~x^2}$ and $\trig{\~p^2}{\~x^3}{\~x^1}$ do not overlap if and only if $\mangle\hinge{\~x^1}{\~x^3}{\dot x^2} < \mangle\hinge{\~x^1}{\~x^3}{\~x^2}$, and hence if and only if $\dist{\dot x^2}{\~x^3}{} < \dist{\~x^2}{\~x^3}{}$. This inequality holds if and only if $$ \begin{aligned} \mangle\~p^1 &> \mangle\hinge{\~p^2}{\~x^3}{\dot x^2} \\ &= \min\{\mangle\~p^3+\mangle\~p^2,2\cdot\pi -(\mangle\~p^3+\mangle\~p^2)\}, \end{aligned} \eqlbl{eq:no-overlap}$$ since in the inequality, the corresponding hinges have the same pairs of sidelengths. (The two pictures show that both possibilities for the minimum can occur.)
\begin{center}
\begin{lpic}[t(0mm),b(10mm),r(0mm),l(0mm)]{pics/4-pnt-kirsz-x2(0.2)} \lbl[t]{125,52;$\~p^3$} \lbl[l]{154,96;$\~p^1$} \lbl[rb]{108,80;$\~p^2$} \lbl[tr]{5,5;$\~x^1$} \lbl[t]{246,5;$\~x^2$} \lbl[l]{242,65;$\dot x^2$} \lbl[b]{97,200;$\~x^3$}
\lbl[lb]{449,51;$\~p^1$} \lbl[l]{544,185;$\~p^2$} \lbl[l]{590,65;$\~p^3$} \lbl[t]{283,4;$\~x^1$} \lbl[t]{521,5;$\~x^2$} \lbl[lt]{500,105;$\dot x^2$} \lbl[b]{372,200;$\~x^3$} \end{lpic}
\end{center}
If $\mangle\~p^1 + \mangle\~p^2+\mangle\~p^3 \le 2\cdot\pi$, then \ref{eq:no-overlap} implies $\mangle\~p^1>\mangle\~p^2 + \mangle\~p^3$. That proves (\ref{SHORT.two-overlap}).
\parit{``Only if'' part of (\ref{SHORT.no-overlap}).} Suppose no two triangles overlap and $\mangle\~p^1 + \mangle\~p^2+\mangle\~p^3 \le 2\cdot\pi$. By (\ref{SHORT.two-overlap}), for $\{i,j,\kay\}=\{1,2,3\}$ we have $$\mangle\~p^i > \mangle\~p^j+\mangle\~p^\kay.$$ Adding these three inequalities gives a contradiction: $$ \mangle\~p^1+\mangle\~p^2+\mangle\~p^3 > 2\cdot (\mangle\~p^1+\mangle\~p^2+\mangle\~p^3).$$
\parit{``If'' part of (\ref{SHORT.no-overlap}). } Suppose triangles $\trig{\~p^3}{\~x^1}{\~x^2}$ and $\trig{\~p^2}{\~x^3}{\~x^1}$ overlap and $$\mangle\~p^1 + \mangle\~p^2+\mangle\~p^3 > 2\cdot\pi. \eqlbl{eq:<p1+<p2+<p3}$$ By the former, \ref{eq:no-overlap} fails. By \ref{eq:<p1+<p2+<p3}, $\mangle\~p^2+\mangle\~p^3 > \pi$. Therefore $$\mangle\~p^1 \le 2\cdot\pi -(\mangle\~p^2+\mangle\~p^3),$$ which contradicts \ref{eq:<p1+<p2+<p3}. \qeds
\parit{Proof of \ref{thm:kirsz-def}; ``if'' part.} Assume $\spc{L}$ is geodesic. Let $x^1,x^2,x^3\in \spc{L}$ be such that the model triangle $\trig{\~x^1}{\~x^2}{\~x^3}=\modtrig\kappa(x^1 x^2 x^3)$ is defined. Choose $p\in \l]x^1 x^2\r[$. Let $V_3=\{x^1,x^2,x^3\}$ and $V_4=\{x^1,x^2,x^3,p\}$, and set $f(x^i)=\~x^i$. Then a short extension of $f$ to $V_4$ gives point-on-side comparison (see page~\pageref{POS-CBB}).
In case $\spc{L}$ is not geodesic, pass to its ultrapower $\spc{L}^\o$. Note that if $\spc{L}$ satisfies the conditions of Theorem \ref{thm:kirsz-def} then so does $\spc{L}^\o$. Also, recall that $\spc{L}^\o$ is geodesic. Thus, from above, ${\spc{L}^\o}\in\CBB{}{\kappa}$. Hence $\spc{L}\in\CBB{}{\kappa}$.
\parit{``Only if'' part.} Assume the contrary; i.e., $x^1,x^2,x^3,p\in \spc{L}$, and $\~x^1,\~x^2,\~x^3\in\Lob2\kappa$ are such that $\dist{\~x^i}{\~x^j}{}\le\dist{x^i}{x^j}{}$ for all $i,j$ and there is no point $\~p\in \Lob2\kappa$ such that $\dist{\~p}{\~x^i}{}\le \dist{p}{x^i}{}$ for all $i$.
We claim that in this case all comparison triangles $\modtrig\kappa(p x^ix^j)$ are defined. That is always true if $\kappa\le0$. If $\kappa>0$, and say $\modtrig\kappa(p x^1x^2)$ is undefined, then \begin{align*} \dist{p}{x^1}{}+\dist{p}{x^2}{} &\ge 2\cdot\varpi\kappa-\dist{x^1}{x^2}{} \ge \\ &\ge 2\cdot\varpi\kappa-\dist{\~x^1}{\~x^2}{} \ge \\ &\ge \dist{\~x^1}{\~x^3}{}+\dist{\~x^2}{\~x^3}{}. \end{align*} Thus one can take $\~p$ on $[\~x^1\~x^3]$ or $[\~x^2\~x^3]$.
For each $i\in \{1,2,3\}$, consider a point $\~p^i\in\Lob2\kappa$ such that $\dist{\~p^i}{\~x^i}{}$ is minimal among points satisfying $\dist{\~p^i}{\~x^j}{}\le\dist{p}{ x^j}{}$ for all $j\not=i$. Clearly, every $\~p^i$ is inside the triangle $\trig{\~x^1}{\~x^2}{\~x^3}$ (that is, in $\Conv(\~x^1,\~x^2,\~x^3)$), and $\dist{\~p^i}{\~x^i}{}>\dist{p}{ x^i}{}$ for each $i$. It follows that \begin{enumerate}[(i)] \item $\dist{\~p^i}{\~x^j}{}=\dist{p}{ x^j}{}$ for $i\not=j$; \item no pair of triangles from $\trig{\~p^1}{\~x^2}{\~x^3}$, $\trig{\~p^2}{\~x^3}{\~x^1}$, $\trig{\~p^3}{\~x^1}{\~x^2}$ overlap in $\trig{\~x^1}{\~x^2}{\~x^3}$. \end{enumerate}
As follows from Lemma~\ref{no-overlap}, in this case $$\mangle\hinge {\~p^1}{\~x^2}{\~x^3} +\mangle\hinge {\~p^2}{\~x^3}{\~x^1} +\mangle\hinge {\~p^3}{\~x^1}{\~x^2} >2\cdot\pi.$$ Thus we arrive at a contradiction, since $\dist{\~x^i}{\~x^j}{}\le\dist{x^i}{x^j}{}$ implies that $$\mangle\hinge {\~p^\kay}{\~x^i}{\~x^j} \le \angk\kappa p{x^i}{x^j}$$ if $(i,j,k)$ is a permutation of $(1,2,3)$. \qeds
In the proof of Theorem~\ref{thm:cba-kirsz-def}, we use the following lemma in the geometry of model planes:
\begin{thm}{Lemma}\label{lem:smaller-trig} Let $x^1,x^2,x^3,y^1,y^2,y^3\in\Lob{}{\kappa}$ be points such that $\dist{x^i}{x^j}{}\ge\dist{y^i}{y^j}{}$ for all $i,j$. Then there is a short map $\map\:\Lob{}{\kappa}\to\Lob{}{\kappa}$ such that $\map(x^i)=y^i$ for all $i$; moreover, one can choose $\map$ so that $$\Im \map\subset\Conv(y^1,y^2,y^3).$$
\end{thm}
We only give an idea of the proof of this lemma; alternatively, one can get the result as a corollary of Kirszbraun's theorem (\ref{thm:kirsz+}).
\parit{Idea of the proof.} The map $\map$ can be constructed as a composition of the following folding maps: Given a halfspace $H$ in $\Lob{}{\kappa}$, consider the map $\Lob{}{\kappa}\to H$, which is the identity on $H$ and reflects all points outside of $H$ into $H$. This map is a path isometry, in particular, it is short.
One can get the last part of the lemma by composing the above map with foldings along the sides of triangle $\trig{y^1}{y^2}{y^3}$ and passing to a partial limit. \qeds
\parit{Proof of \ref{thm:cba-kirsz-def}; ``if'' part.} The point-on-side comparison (\ref{cat-monoton}) follows by taking $V_3=\{\~x,\~y,\~p\}$ and $V_4=\{\~x,\~y,\~p,\~z\}$ where $z\in \l]x y\r[$. It is only necessary to observe that $F(\~z)=z$ by uniqueness of $[x y]$.
\parit{``Only if'' part.} Let $V_3=\{\~x^1,\~x^2,\~x^3\}$ and $V_4=\{\~x^1,\~x^2,\~x^3,\~p\}$.
Set $y^i\z=f(\~x^i)$ for all $i$; we need to find a point $q\in\spc{U}$ such that $\dist{y^i}{q}{}\le\dist{\~x^i}{\~p}{}$ for all $i$.
Consider the model triangle $\trig{\~y^1}{\~y^2}{\~y^3}=\modtrig\kappa({y^1}{y^2}{y^3})$. Set $D\z=\Conv({\~y^1},{\~y^2},{\~y^3})$.
Note that $\dist{\~y^i}{\~y^j}{}=\dist{y^i}{y^j}{}\le\dist{\~x^i}{\~x^j}{}$ for all $i,j$. Applying Lemma \ref{lem:smaller-trig}, we get a short map $\map\:\Lob{}{\kappa}\to D$ such that $\map\:\~x^i\mapsto\~y^i$.
Further, from Reshetnyak majorization (\ref{thm:major}), there is a short map $F\:D\to \spc{U}$ such that $\~y^i\mapsto y^i$ for all $i$.
Thus one can take $q=F\circ\map(\~p)$. \qeds
\section{(1+\textit{n})-point comparison}\label{sec:1+n}
The following theorem gives a more sensitive analog of (1+3)-point comparison. In a bit more analytic form it was discovered by Sturm in \cite{sturm}.
\begin{thm}{(1+\textit{n})-point comparison} \label{thm:pos-config} Let $\spc{L}\in\CBB{}{\kappa}$. Then for any array of points $p,x^1,\dots,x^n\in \spc{L}$ there is a model array $\~p,\~x^1,\dots,\~x^n\in\Lob{n}\kappa$ such that \begin{subthm}{} $\dist{\~p}{\~x^i}{}=\dist{p}{x^i}{}$ for all $i$. \end{subthm}
\begin{subthm}{}$\dist{\~x^i}{\~x^j}{}\ge\dist{x^i}{x^j}{}$ for all $i,j$. \end{subthm} \end{thm}
\parit{Proof.}
It is enough to show that given $\eps>0$ there is a configuration $\~p,\~x^1,\dots,\~x^n\in\Lob{n}\kappa$ such that $\dist{\~x^i}{\~x^j}{}\ge\dist{x^i}{x^j}{}$ and $\bigl|\dist{\~p}{\~x^i}{}-\dist{p}{x^i}{}\bigr|\le \eps$. Then one can pass to a limit configuration for $\eps\to 0+$.
According to \ref{thm:cbb-lin-part}, there is a point $p'$ such that $\dist{p'}{p}{}\le\eps$ and $\T_{p'}$ contains a subcone $E$ isometric to a Euclidean space which contains all vectors $\ddir{p'}{x^i}$. Passing to a subspace if necessary, we can assume that $\dim E\le n$.
Mark a point $\~p\in \Lob{n}\kappa$ and choose an isometric embedding $\imath\: E\to \T_{\~p}\Lob{n}\kappa$. Set $$\~x^i=\exp_{\~p}\circ\imath\circ\ddir{p'}{x^i}.$$ Thus $\dist{\~p}{\~x^i}{}=\dist{p'}{x^i}{}$ and therefore
$$\bigl|\dist{\~p}{\~x^i}{}-\dist{p}{x^i}{}\bigr|\le \dist{p}{p'}{} \le\eps.$$ From the hinge comparison, we have $$\angk\kappa{\~p}{\~x^i}{\~x^j} =\mangle\hinge{\~p}{\~x^i}{\~x^j} =\mangle\hinge{p'}{x^i}{x^j}\ge \angk\kappa{p'}{x^i}{x^j},$$ thus $$\dist{\~x^i}{\~x^j}{}\ge \dist{x^i}{x^j}{}.$$ \qedsf
\section{Kirszbraun's theorem}\label{sec:kirszbraun}
A slightly weaker version of the following theorem was proved by Lang and Schroeder in \cite{lang-schroeder}. Conjecture~\ref{conj:kirsz} (if true) gives an equivalent condition for the existence of a short extension; roughly, it states that Example~\ref{example:SS_+} is the only obstacle.
\begin{thm}{Kirszbraun's theorem} \label{thm:kirsz+} Let $\spc{L}\in\CBB{}{\kappa}$, $\spc{U}\in\cCat{}{\kappa}$, $Q\subset \spc{L}$ be an arbitrary subset and $f\: Q\to\spc{U}$ be a short map. Assume that there is $z\in\spc{U}$ such that $f(Q)\subset \oBall[z,\tfrac{\varpi\kappa}{2}]$. Then $f\:Q\to\spc{U}$ can be extended to a short map $F\:\spc{L}\to \spc{U}$
(that is, there is a short map $F\:\spc{L}\to \spc{U}$ such that $F|_Q=f$.) \end{thm}
The condition $f(Q)\subset \oBall[z,\tfrac{\varpi\kappa}{2}]$ trivially holds for any $\kappa\le 0$ since in this case $\varpi\kappa=\infty$. The following example shows that this condition is needed for $\kappa>0$.
\begin{thm}{Example}\label{example:SS_+} Let $\SS^m_+$ be a closed $m$-dimensional unit hemisphere. Denote its boundary, which is isometric to $\SS^{m-1}$, by $\partial\SS^m_+$. Clearly, $\SS^m_+\in\CBB{}1$ and $\partial\SS^m_+\in\cCat{}1$ but the identity map ${\partial\SS^m_+}\to \partial\SS^m_+$ cannot be extended to a short map $\SS^m_+\to \partial\SS^m_+$ (there is no place for the pole).
There is also a direct generalization of this example to a hemisphere in a Hilbert space of arbitrary cardinal dimension. \end{thm}
First we prove this theorem in the case $\kappa\le 0$ (\ref{thm:kirsz}). In the proof of the more complicated case $\kappa>0$, we use the case $\kappa=0$. The following lemma is the main ingredient in the proof.
\begin{thm}{Finite$\bm{+}$one lemma}\label{lem:kirsz-neg:new} Let $\kappa\le 0$, $\spc{L}\in\CBB{}{\kappa}$, and $\spc{U}\in\cCat{}{\kappa}$. Let $x^1,x^2,\dots,x^n\in\spc{L}$ and $y^1,y^2,\dots,y^n\in\spc{U}$ be such that $\dist{x^i}{x^j}{}\ge\dist{y^i}{y^j}{}$ for all $i,j$.
Then for any $p\in\spc{L}$, there is $q\in\spc{U}$ such that $\dist{y^i}{q}{}\le\dist{x^i}{p}{}$ for each $i$. \end{thm}
\parit{Proof.} It is sufficient to prove the lemma for $\kappa=0$ and $\kappa=-1$. The proofs of these two cases are identical; only the formulas differ. In the proof, we assume $\kappa=0$ and provide the formulas for $\kappa=-1$ in the footnotes.
From (1+\textit{n})-point comparison (\ref{thm:pos-config}), there is a model configuration $\~p,\~x^1,\~x^2,\dots,\~x^n\in \Lob{n}{\kappa}$ such that $\dist{\~p}{\~x^i}{}=\dist{p}{x^i}{}$ and $\dist{\~x^i}{\~x^j}{}\ge\dist{x^i}{x^j}{}$ for all $i$, $j$.
For each $i$, consider functions $f^i\:\spc{U}\to\RR$ and $\~f^i\:\Lob{n}{\kappa}\to\RR$ defined as follows
\footnote{In case $\kappa=-1$, $$ \begin{aligned} &f^i(y)=\cosh\dist{y^i}{y}{}, & &\~f^i(\~x)=\cosh\dist{\~x^i}{\~x}{}. \end{aligned}\eqno{(A)\mc-}$$}:
$$ \begin{aligned} &f^i(y)=\tfrac{1}{2}\cdot\dist[2]{y^i}{y}{}, & &\~f^i(\~x)=\tfrac{1}{2}\cdot\dist[2]{\~x^i}{\~x}{}. \end{aligned}\eqno{(A)\mc0} $$ Set $\bm{f}=(f^1,f^2,\dots,f^n)\:\spc{U}\to\RR^n$ and $\bm{\~f}=(\~f^1,\~f^2,\dots,\~f^n)\:\Lob{n}{\kappa}\to\RR^n$.
Recall that $\SupSet$ (superset in $\RR^n$) is defined in \ref{def:supset+succcurlyeq}. Note that it is sufficient to prove that $\bm{\~f}(\~p)\in\SupSet\bm{f}(\spc{U})$.
Clearly, $(f^i)''\ge 1$. Thus, by the theorem on barycentric simplex (\ref{bary-iff}), the set $\SupSet\bm{f}(\spc{U})\subset\RR^{n}$ is convex.
Arguing by contradiction, let us assume that $\bm{\~f}(\~p)\not\in\SupSet\bm{f}(\spc{U})$.
Then there exists a supporting hyperplane $\alpha_1x_1+\dots+\alpha_nx_n=c$ to $\SupSet\bm{f}(\spc{U})$, separating it from $\bm{\~f}(\~p)$. Just as in the proof of Theorem~\ref{thm:bary} we have that all $\alpha_i\ge 0$. So by rescaling we can assume that $(\alpha_1,\alpha_2,\dots,\alpha_n)\in\Delta^{n-1}$ and $$\sum_i\alpha_i\cdot\~f^i(\~p) < \inf \set{\sum_i\alpha_i\cdot f^i(q)}{q\in\spc{U}}.$$ The latter contradicts the following claim.
\begin{clm}{Claim} Given $\bm{\alpha}=(\alpha_1,\alpha_2,\dots,\alpha_n)\in\Delta^{n-1}$, set \begin{align*} &h=\sum_i\alpha_i\cdot f^i, & &h\:\spc{U}\to\RR, & &z=\argmin h\in \spc{U}, \\ &\~h=\sum_i\alpha_i\cdot \~f^i, & &\~h\:\Lob{n}{\kappa}\to\RR, & &\~z=\argmin \~h\in \Lob{n}{\kappa}. \end{align*} Then $h(z)\le \~h(\~z)$. \end{clm}
\parit{Proof of the claim.} Note that $\d_z h\ge 0$. Thus, for each $i$, we have \footnote{In case $\kappa=-1$, the same calculations give $$ \begin{aligned} 0 &\le\dots \le -\tfrac{1}{\sinh\dist[{{}}]{z}{y^i}{}} \cdot \sum_j \alpha_j\cdot\l[\cosh\dist[{{}}]{z}{y^i}{}\cdot\cosh\dist[{{}}]{z}{y^j}{}-\cosh\dist[{{}}]{y^i}{y^j}{}\r]. \end{aligned} \eqno{(B)\mc-} $$ } $$ \begin{aligned} 0 &\le (\d_z h)(\dir{z}{y^i}) = \\ &= -\sum_j\alpha_j\cdot\dist[{{}}]{z}{y^j}{}\cdot\cos\mangle\hinge{z}{y^i}{y^j} \le \\ &\le -\sum_j\alpha_j\cdot\dist[{{}}]{z}{y^j}{}\cdot\cos\angk0{z}{y^i}{y^j} = \\ &= -\tfrac{1}{2\cdot\dist[{{}}]{z}{y^i}{}} \cdot \sum_j \alpha_j\cdot\l[\dist[2]{z}{y^i}{}+\dist[2]{z}{y^j}{}-\dist[2]{y^i}{y^j}{}\r]. \end{aligned} \eqno{(B)\mc0}$$ In particular
\footnote{In case $\kappa=-1$, the same calculations give $$ \begin{aligned} \sum_{i}\alpha_i\cdot\l[\sum_j \alpha_j\cdot\l[\cosh\dist[{{}}]{z}{y^i}{}\cdot\cosh\dist[{{}}]{z}{y^j}{} -\cosh\dist[{{}}]{y^i}{y^j}{}\r] \r]\le0 \end{aligned}. \eqno{(C)\mc-} $$ },
$$ \begin{aligned} \sum_{i} \alpha_i \cdot \l[\sum_j \alpha_j \cdot \l[\dist[2]{z}{y^i}{}+\dist[2]{z}{y^j}{}-\dist[2]{y^i}{y^j}{}\r] \r]\le 0, \end{aligned} \eqno{(C)\mc0} $$ or
\footnote{In case $\kappa=-1$, $$(h(z))^2\le \sum_{i,j} \alpha_i\cdot\alpha_j \cdot \cosh\dist[{{}}]{y^i}{y^j}{}. \eqno{(D)\mc-}$$ }
$$4\cdot h(z) \le \sum_{i,j} \alpha_i\cdot\alpha_j \cdot \dist[2]{y^i}{y^j}{}. \eqno{(D)\mc0}$$
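For the reader's convenience, here is a sketch of the elementary expansion behind the passage from $(C)$ to $(D)$; it uses only that the weights $\alpha_i$ sum to $1$ and the definition of $h$:

```latex
% Expanding the double sum and using \sum_i \alpha_i = 1:
$$\sum_{i,j} \alpha_i\cdot\alpha_j \cdot
\l[\dist[2]{z}{y^i}{}+\dist[2]{z}{y^j}{}-\dist[2]{y^i}{y^j}{}\r]
= 2\cdot\sum_i \alpha_i\cdot\dist[2]{z}{y^i}{}
- \sum_{i,j} \alpha_i\cdot\alpha_j\cdot\dist[2]{y^i}{y^j}{}.$$
% Since h(z) = \sum_i \alpha_i (1/2) dist^2(z, y^i), the first term on the
% right-hand side equals 4 h(z), and the bound in (D) follows from (C).
```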
Note that if $\spc{U}\iso\Lob{n}{\kappa}$, then all inequalities in $(B,C,D)$ hold as equalities. Thus the same argument as above, repeated for $\~x^1,\~x^2,\dots,\~x^n\in\Lob{n}{\kappa}$, gives \footnote {In case $\kappa=-1$, $$(\~h(\~z))^2 = \sum_{i,j} \alpha_i\cdot\alpha_j \cdot \cosh\dist[{{}}]{\~x^i}{\~x^j}{}. \eqno{(E)\mc-}$$ } $$4\cdot \~h(\~z) = \sum_{i,j} \alpha_i\cdot\alpha_j \cdot \dist[2]{\~x^i}{\~x^j}{}. \eqno{(E)\mc0}$$ Note that $$\dist{\~x^i}{\~x^j}{}\ge\dist{x^i}{x^j}{}\ge\dist{y^i}{y^j}{}$$ for all $i$, $j$. Thus, $(D)$ and $(E)$ imply the claim. \qeds
\begin{thm}{Kirszbraun's theorem for nonpositive bound} \label{thm:kirsz} Let $\kappa\le0$, $\spc{L}\in\CBB{}{\kappa}$, $\spc{U}\in\cCat{}{\kappa}$, $Q\subset \spc{L}$ be an arbitrary subset and $f\: Q\to\spc{U}$ be a short map. Then there is a short extension
$F\:\spc{L}\to \spc{U}$ of $f$; that is, there is a short map $F\:\spc{L}\to \spc{U}$ such that $F|_Q=f$. \end{thm}
\parbf{Remark.} If $\spc{U}$ is proper, then in the following proof Helly's theorem (\ref{thm:helly}) is not needed. Everything follows directly from the compactness of closed balls in $\spc{U}$.
\parit{Proof of \ref{thm:kirsz}.} By Zorn's lemma, we can assume that $Q\subset\spc{L}$ is a maximal set; i.e., $f\:Q\to\spc{U}$ does not admit a short extension to any larger set $Q'\supset Q$.
Let us argue by contradiction. Assume that $Q\not=\spc{L}$; choose $p\in \spc{L}\backslash Q$. Then $$\bigcap_{x\in Q} \cBall[f(x),\dist{p}{x}{}] = \emptyset.$$
Since $\kappa\le 0$, the balls are convex; thus, by Helly's theorem (\ref{thm:helly}), one can choose a point array $x^1,x^2,\dots, x^n\in Q$ such that $$\bigcap_{i=1}^n \cBall[y^i,\dist{x^i}{p}{}] = \emptyset, \eqlbl{eq:cap=cBalls=0}$$ where $y^i=f(x^i)$. Finally note that \ref{eq:cap=cBalls=0} contradicts the Finite+one lemma (\ref{lem:kirsz-neg:new}). \qeds
\parit{Proof of Kirszbraun's theorem (\ref{thm:kirsz+}).} The case $\kappa\le 0$ is already proved in \ref{thm:kirsz}. Thus it remains to prove the theorem only in case $\kappa>0$. After rescaling we may assume that $\kappa=1$ and therefore $\varpi\kappa=\pi$.
Since $\cBall[z,\pi/2]\in\cCat{}{\kappa}$ (\ref{lem:convex-balls}, \ref{lem:cat-complete}), we can assume $\spc{U}=\cBall[z,\pi/2]$. In particular, any two points of $\spc{U}$ at distance $<\pi$ are joined by a geodesic, and $\diam\spc{U}\le\pi$. If $\dist{x}{y}{}=\pi$ for some $x,y\in\spc{U}$, then the concatenation of $[x z]$ and $[z y]$ forms a geodesic $[x y]$. Hence $\spc{U}$ is geodesic.
Further, we can also assume that $\diam\spc{L}\le\pi$. Otherwise $\spc{L}$ is one-dimensional; in this case the result follows since $\spc{U}$ is geodesic.
Assume the theorem is false. Then there is a set $Q\subset \spc{L}$, a short map $f\: Q\to \spc{U}$ and $p\in \spc{L}\backslash Q$ such that $$\bigcap_{x\in Q} \cBall[f(x),\dist{x}{p}{}]=\emptyset. \eqlbl{eq:cap-of-balls}$$
We are going to apply \ref{thm:kirsz} for $\kappa=0$ to the Euclidean cones $\mathring{\spc{L}}=\Cone \spc{L}$ and $\mathring{\spc{U}}\z=\Cone \spc{U}$. Note that \begin{itemize} \item ${\mathring{\spc{U}}}\in\cCat{}0$, \item since $\diam \spc{L}\le \pi$ we have ${\mathring{\spc{L}}}\in\CBB{}0$. \end{itemize} Further, we view the spaces $\spc{L}$ and $\spc{U}$ as unit spheres in $\mathring{\spc{L}}$ and $\mathring{\spc{U}}$ respectively. In the cones $\mathring{\spc{L}}$ and $\mathring{\spc{U}}$, we use
``$|{*}|$'' for distance to the vertex, say $o$, and ``$\cdot$'' for cone multiplication. We also use short-cuts $\mangle(x,y)\df\mangle\hinge{o}{x}{y}$ and \begin{align*}
\<x,y\>\df&\,|x|\cdot|y|\cdot\cos\mangle\hinge{o}{x}{y}=\\
=&\,\tfrac12\l(|x|^2+|y|^2-\dist[2]{x}{y}{}\r). \end{align*} In particular, \begin{itemize} \item $\dist{x}{y}{\spc{L}}=\mangle(x,y)$ for any $x,y\in\spc{L}$, \item $\dist{x}{y}{\spc{U}}=\mangle(x,y)$ for any $x,y\in\spc{U}$, \item for any $y\in \spc{U}$, we have $$\mangle(z,y)\le\tfrac\pi2.\eqlbl{eq:=<pi/2}$$
\end{itemize} Set $\mathring{Q}=\Cone Q\subset \mathring{\spc{L}}$ and let $\mathring f\:\mathring{Q}\to \mathring{\spc{U}}$ be the natural cone extension of $f$; i.e., $y=f(x)$ $\Rightarrow$ $t\cdot y=\mathring f(t\cdot x)$ for $t\ge0$. Clearly $\mathring f$ is short.
Applying \ref{thm:kirsz} for $\mathring f$, we get a short extension map $\mathring F\:\mathring{\spc{L}}\to\mathring{\spc{U}}$. Set $s=\mathring F(p)$. Thus, $$\dist{s}{\mathring f(w)}{} \le \dist{p}{w}{} \eqlbl{eq:clm:kirszbraun-curv=1-rad-star}$$
for any $w\in \mathring Q$. In particular, $|s|\le 1$. Applying \ref{eq:clm:kirszbraun-curv=1-rad-star} for $w=t\cdot x$ and $t\to\infty$ we get
\begin{wrapfigure}[17]{l}{42mm} \begin{lpic}[t(0mm),b(3mm),r(0mm),l(0mm)]{pics/k_0(0.3)} \lbl[t]{100,150;$\mathring{\spc{U}}=\Cone \spc{U}$} \lbl[br]{75,20;$\nwarrow$} \lbl[tl]{75,20;$\spc{U}$} \lbl[lt]{80,74;$z$} \lbl[lb]{75,102;$\bar s$} \lbl[b]{128,101;$\alpha$} \lbl[rb]{57,102;$s$} \lbl[lt]{7,73;$o$} \end{lpic} \end{wrapfigure}
$$\<f(x),s\>\ge \cos\mangle(p,x)\eqlbl{eq:<,>=<}$$ for any $x\in Q$.
Since $\spc{U}\in\cCat{}{0}$, the geodesics $\geod_{[s\ t\cdot z]}$ converge as $t\to\infty$ to a ray, say $\alpha\:[0,\infty)\to \mathring{\spc{U}}$. From \ref{eq:=<pi/2}, we have that the function $t\mapsto\<f(x),\alpha(t)\>$ is non-decreasing.
Therefore, from \ref{eq:<,>=<}, for the necessarily unique point $\bar s$ on the ray $\alpha$ such that $|\bar s|=1$ we also have $$\<f(x),\bar s\>\ge \cos\mangle(p,x)$$ or $$\mangle(\bar s,f(x)) \le \mangle(p,f(x))$$ for any $x\in Q$. The latter contradicts \ref{eq:cap-of-balls}. \qeds
\section{(2\textit{n}+2)-point comparison}\label{sec:2n+2}
Here we give a generalization of the (2+2)-point comparison to (2\textit{n}+2) points. It follows from the generalized Kirszbraun's theorem.
First let us give a reformulation of (2+2)-point comparison.
\begin{thm}{Reformulation of (2+2)-point comparison} Let $\spc{X}$ be a metric space. A quadruple $p,q,x,y\in \spc{X}$ satisfies (2+2)-point comparison if one of the following holds: \begin{subthm}{} One of the triples $(p,q,x)$ or $(p, q, y)$ has perimeter $>2\cdot\varpi\kappa$. \end{subthm}
\begin{subthm}{} If $\trig{\~p}{\~q}{\~x} = \modtrig\kappa(p q x)$ and $\trig{\~p}{\~q}{\~y} = \modtrig\kappa(p q y)$, then $$\dist{\~x}{\~z}{}+\dist{\~z}{\~y}{}\ge \dist{x}{y}{},$$ for any $\~z\in[\~p\~q]$.
\end{subthm}
\end{thm}
\begin{thm}{(2\textit{n}+2)-point comparison}\label{CBA-n-point} Let $\spc{U}\in\cCat{}{\kappa}$. Consider $x,y\in \spc{U}$ and an array of pairs of points $(p^1,q^1)$, $(p^2,q^2),\dots,(p^n,q^n)$ in $\spc{U}$, such that there is a model configuration $\~x$, $\~y$ and array of pairs $(\~p^1,\~q^1)$, $(\~p^2,\~q^2),\dots,(\~p^n,\~q^n)$ in $\Lob{3}\kappa$ with the following properties: \begin{subthm}{} $\trig{\~x}{\~p^1}{\~q^1}=\modtrig\kappa x p^1q^1$ and $\trig{\~y}{\~p^n}{\~q^n}=\modtrig\kappa y p^n q^n$; \end{subthm}
\begin{subthm}{} The simplex $\~p^i\~p^{i+1}\~q^i\~q^{i+1}$ is a model simplex \footnote{i.e., the perimeter of each triple of points among $p^i,p^{i+1},q^i,q^{i+1}$ is $<2\cdot\varpi\kappa$ and $\dist{\~p^i}{\~q^i}{} =\dist{p^i}{q^i}{}$, $\dist{\~p^i}{\~p^{i+1}}{} =\dist{p^i}{p^{i+1}}{}$, $\dist{\~q^i}{\~q^{i+1}}{} =\dist{q^i}{q^{i+1}}{}$, $\dist{\~p^i}{\~q^{i+1}}{} = \dist{p^i}{q^{i+1}}{}$ and $\dist{\~p^{i+1}}{\~q^{i}}{}=\dist{p^{i+1}}{q^{i}}{}$.}
of $p^ip^{i+1}q^iq^{i+1}$ for all $i$. \end{subthm}
Then for any choice of $n$ points $\~z^i\in [\~p^i\~q^i]$, we have $$\dist{\~x}{\~z^1}{}+\dist{\~z^1}{\~z^2}{}+\dots+\dist{\~z^{n-1}}{\~z^n}{}+\dist{\~z^n}{\~y}{} \ge \dist{x}{y}{}.$$ \begin{center} \begin{lpic}[t(0mm),b(0mm),r(0mm),l(0mm)]{pics/chain(0.27)} \lbl[r]{4,33;$\~x$} \lbl[tr]{87,12;$\~p^1$} \lbl[t]{147,20;$\~p^2$} \lbl[t]{175,3;$\~p^3$} \lbl[lt]{275,18;$\~p^4$} \lbl[br]{40,104;$\~q^1$} \lbl[br]{138,127;$\~q^2$} \lbl[bl]{192,105;$\~q^3$} \lbl[bl]{266,100;$\~q^4$} \lbl[bl]{70,49;$\~z^1$} \lbl[br]{143,60;$\~z^2$} \lbl[bl]{184,51;$\~z^3$} \lbl[bl]{272,54;$\~z^4$} \lbl[l]{369,51;$\~y$} \end{lpic} \end{center} \end{thm}
To prove (2\textit{n}+2)-point comparison, we need the following lemma, which is an easy corollary of Kirszbraun's theorem (\ref{thm:kirsz+}).
\begin{thm}{Lemma}\label{cor:kir-from-hemisphere} Let $\spc{L}\in\CBB{}{\kappa}$, $\spc{U}\in\cCat{}{\kappa}$, and $Q\subset \oBall(p,\tfrac{\varpi\kappa}2)\subset \spc{L}$. Then any short map $f\:Q\to \spc{U}$ can be extended to a short map $F\:\spc{L}\to \spc{U}$. \end{thm}
\parit{Proof.} Directly from Kirszbraun's theorem (\ref{thm:kirsz} or \ref{thm:kirsz+}), we obtain the case $\kappa\le 0$. Thus it remains to prove the theorem only in case $\kappa>0$. After rescaling we may assume that $\kappa=1$ and therefore $\varpi\kappa=\pi$.
It is enough to prove that there is a point $z\in \spc{U}$ such that $\dist{z}{f(x)}{}\le \tfrac\pi2$ for all $x\in Q$; once it is proved, the statement follows from Kirszbraun's theorem (\ref{thm:kirsz+}).
Further we use the same notation as in the proof of \ref{thm:kirsz+}.
Apply Kirszbraun's theorem (\ref{thm:kirsz} or \ref{thm:kirsz+}) for $\mathring f\:\mathring Q\to\mathring{\spc{U}}$ and set $q\z={\mathring F}(p)$. Clearly, $$\<f(x),q\>\ge \cos\mangle(p,x)>0$$
for any $x\in Q$. In particular, $|q|>0$.
Thus, for $z=\tfrac{1}{|q|}\cdot q\in\spc{U}$, we get $\dist{z}{f(x)}{\spc{U}}=\mangle(z,f(x))\le \tfrac{\pi}{2}$ for all $x\in Q$. \qeds
\parit{Proof of (2\textit{n}+2)-point comparison.} Direct application of \ref{cor:kir-from-hemisphere} gives an array of short maps $f^0,f^1,\dots,f^n\:\Lob{3}\kappa\to \spc{U}$ such that \begin{enumerate}[(i)]
\item $\~x\stackrel{f^0}{\longmapsto} x$, $\~p^1\stackrel{f^0}{\longmapsto} p^1$ and $\~q^1\stackrel{f^0}{\longmapsto}q^1$;
\item $\~p^i \stackrel{f^i}{\longmapsto} p^i$, $\~q^{i} \stackrel{f^i}{\longmapsto} q^i$ and $\~p^{i+1} \stackrel{f^i}{\longmapsto} p^{i+1}$, $\~q^{i+1} \stackrel{f^i}{\longmapsto} q^{i+1}$\\ for $1\le i\le n-1$; \item $\~p^n\stackrel{f^n}{\longmapsto} p^n$, $\~q^n\stackrel{f^n}{\longmapsto}q^n$ and $\~y\stackrel{f^n}{\longmapsto} y$. \end{enumerate}
For each $i>0$, we have that $f^{i-1}|_{[\~p^i\~q^i]}=f^{i}|_{[\~p^i\~q^i]}$, since both $f^{i-1}$ and $f^{i}$ send $[\~p^i\~q^i]$ isometrically to a geodesic $[p^i q^i]$ in $\spc{U}$, which has to be unique. Thus the curves $$f^0([\~x\~z^1]),\ f^1([\~z^1\~z^2]),\dots,\ f^{n-1}([\~z^{n-1}\~z^n]),\ f^n([\~z^n\~y])$$ can be joined in $\spc{U}$ into a curve connecting $x$ to $y$ with length at most $$\dist{\~x}{\~z^1}{}+\dist{\~z^1}{\~z^2}{}+\dots+\dist{\~z^{n-1}}{\~z^n}{}+\dist{\~z^n}{\~y}{}.\eqno\qed$$
\section{Comments and open problems}\label{sec:kirszbraun:open}
\begin{thm}{Open problem}\label{open:n-point-CBB} Find a necessary and sufficient condition for a finite metric space to be isometrically embeddable into some $\CBB{}{\kappa}$ space. \end{thm}
A metric on a finite set $\{a^1,a^2,\dots,a^n\}$ can be described by the matrix with components $$s^{ij}=\dist[2]{a^i}{a^j}{},$$ which we will call the \emph{decrypting matrix}\index{decrypting matrix}. The set of decrypting matrices of all metrics that admit an isometric embedding into a $\CBB{}{0}$ space forms a convex cone; this follows from the fact that the product of $\CBB{}{0}$ spaces is a $\CBB{}{0}$ space. This convexity gives hope that the cone admits an explicit description.
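To sketch the cone property (with notation introduced here only for illustration): if metrics with decrypting matrices $s_1$ and $s_2$ embed into $\CBB{}{0}$ spaces $\spc{X}_1$, $\spc{X}_2$ via $a^i\mapsto a^i_1$ and $a^i\mapsto a^i_2$, and $\lambda,\mu\ge0$, then the map $a^i\mapsto(a^i_1,a^i_2)$ into the rescaled product $\sqrt{\lambda}\cdot\spc{X}_1\times\sqrt{\mu}\cdot\spc{X}_2$, which is again a $\CBB{}{0}$ space, realizes $\lambda\cdot s_1+\mu\cdot s_2$:

```latex
% Squared distances add in a product and scale quadratically under rescaling:
$$\dist[2]{(a^i_1,a^i_2)}{(a^j_1,a^j_2)}{}
=\lambda\cdot\dist[2]{a^i_1}{a^j_1}{}+\mu\cdot\dist[2]{a^i_2}{a^j_2}{}
=\lambda\cdot s_1^{ij}+\mu\cdot s_2^{ij}.$$
```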
The set of metrics on $\{a^1,a^2,\dots,a^n\}$ that can be embedded into a product of spheres with different radii admits a simple description. Obviously, this gives a sufficient condition for \ref{open:n-point-CBB}. This condition is not necessary. For instance, as follows from a result of Vilms \cite[2.2]{vilms}, a sufficiently dense finite subset of a generic closed positively curved manifold cannot be embedded into a product of spheres.
Theorem \ref{thm:pos-config} gives a necessary condition for \ref{open:n-point-CBB}, but the condition is not sufficient. One sees this in the following example constructed by Sergei Ivanov. A generalization of this example is given in \cite[1.1]{LPZ}.
\begin{wrapfigure}{r}{40mm} \begin{lpic}[t(0mm),b(0mm),r(0mm),l(0mm)]{pics/ivanov-example(0.7)} \lbl[rb]{1,16;$a$} \lbl[lb]{55,16;$b$} \lbl[rb]{28,31;$x$} \lbl[rb]{28,7;$y$} \lbl[rt]{28,1;$z$} \lbl[lb]{44,26;$q$} \end{lpic} \end{wrapfigure}
\parbf{Example.} Consider the finite set $\spc{F}\z=\{a,b,x,y,z,q\}$ with distances defined as follows: \begin{enumerate} \item $\dist{a}{b}{}=4$; \item $\dist{a}{x}{}\z=\dist{a}{y}{}\z=\dist{a}{z}{}\z=\dist{b}{x}{}\z=\dist{b}{y}{}\z=\dist{b}{z}{}\z=2$; \item $\dist{x}{y}{}=2$, $\dist{y}{z}{}=1$, $\dist{x}{z}{}=3$; \item $\dist{x}{q}{}=\dist{q}{b}{}=1$ and thus $\dist{a}{q}{}=3$; \item $\angk{0}{x}{q}{y}=\angk{0}{x}{q}{z}=\tfrac\pi3$; i.e. $\dist{q}{y}{}=\sqrt{3}$ and $\dist{q}{z}{}=\sqrt{7}$. \end{enumerate}
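As a check, the values in (5) follow from the Euclidean cosine rule applied at $x$, using $\dist{x}{q}{}=1$, $\dist{x}{y}{}=2$ and $\dist{x}{z}{}=3$:

```latex
$$\dist[2]{q}{y}{}=1^2+2^2-2\cdot 1\cdot 2\cdot\cos\tfrac{\pi}{3}=3,
\qquad
\dist[2]{q}{z}{}=1^2+3^2-2\cdot 1\cdot 3\cdot\cos\tfrac{\pi}{3}=7.$$
```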
On the diagram the degenerate triangles are marked by solid lines. Note that if one removes from $\spc{F}$ the point $q$ then the remaining part can be embedded in a sphere of intrinsic diameter $4$ with poles at $a$ and $b$ and the points $x,y,z$ on the equator. On the other hand, if one removes the point $a$ from the space and changes the distance $\dist{z}{b}{}$ then it can be isometrically embedded into the plane.
It is straightforward to check that this finite set satisfies the conclusion of Theorem \ref{thm:pos-config} for $\kappa=0$. However, if such a metric appeared as an inherited metric on a subset $\{a,b,x,y,z,q\}\subset \spc{L}\in\CBB{}{0}$ then clearly $$\mangle\hinge x a y\z=\mangle\hinge y a z\z=\mangle\hinge y b z\z= \tfrac{\pi}{3},$$ contradicting $\dist{b}{z}{}=2$.
The following problem was mentioned by Gromov in \cite[15(b)]{gromov-CAT}.
\begin{thm}{Open problem}\label{open:n-point-CBA} Describe metrics on an $n$-point set which are embeddable into $\Cat{}{\kappa}$ spaces. \end{thm}
The set of metrics on $\{a^1,a^2,\dots,a^n\}$ which can be embedded into a product of trees and hyperbolic spaces admits a simple description using decrypting matrices defined above. Obviously, this gives a sufficient condition for problem \ref{open:n-point-CBA}. This condition is not necessary. The existence of a counterexample follows from the same result of Vilms \cite[2.2]{vilms}; it is sufficient to take a sufficiently dense finite subset in a ball in a generic Hadamard space.
The (2$n$+2)-point comparison (\ref{CBA-n-point}) gives a necessary condition for \ref{open:n-point-CBA} which is not sufficient. One can see this in the following example constructed by Nina Lebedeva:
Consider a square $[\~x^1\~y^1\~x^2\~y^2]$ in $\EE^3$ with two more points $\~z^1$, $\~z^2$ in general position on opposite sides of the plane spanned by $[\~x^1\~y^1\~x^2\~y^2]$, so that the convex hull of $\~x^1,\~x^2,\~y^1,\~y^2,\~z^1,\~z^2$ forms a nonregular octahedron with faces formed by the triangles $[\~x^i \~y^j \~z^\kay]$. Consider the induced metric on the 6-point set $\~x^1,\~x^2,\~y^1,\~y^2,\~z^1,\~z^2$. Note that if we increase the distance $\dist{\~z^1}{\~z^2}{}$ slightly, then in the obtained 6-point metric space $\spc{F}_6$ all the (2+2)- and (4+2)-point comparisons continue to hold.
Now assume we embed the points $x^1,x^2,y^1,y^2,z^1,z^2$ into a $\cCat{}{0}$ space $\spc{U}$ in such a way that all the distances except $\dist{z^1}{z^2}{}$ are the same as between the corresponding points in $\spc{F}_6$. Since $[\~x^1\~y^1\~x^2\~y^2]$ is a square, $\spc{U}$ contains an isometric copy of it; that is, $\Conv(x^1,y^1,x^2,y^2)_{\spc{U}}\iso\Conv(\~x^1,\~y^1,\~x^2,\~y^2)_{\EE^3}$. Let \[\~w\in \Conv(\~x^1,\~y^1,\~x^2,\~y^2)_{\EE^3}\] and let $w$ be the corresponding point in $\Conv(x^1,y^1,x^2,y^2)_{\spc{U}}$. By point-on-side comparison (\ref{cat-monoton}), we have $\dist{z^i}{w}{\spc{U}}\le \dist{\~z^i}{\~w}{\EE^3}$. It follows that \[\dist{z^1}{z^2}{\spc{U}} \le \dist{\~z^1}{\~z^2}{\EE^3},\] a contradiction.
The next conjecture (if true) would give the right generality for Kirszbraun's theorem (\ref{thm:kirsz+}). Roughly it states that the example \ref{example:SS_+} is the only obstacle for extending a short map.
\begin{thm}{Conjecture}\label{conj:kirsz} Assume $\spc{L}\in\CBB{}{1}$, $\spc{U}\in\cCat{}{1}$, $Q\subset \spc{L}$ is a proper subset, and $f\: Q\to\spc{U}$ is a short map that does not admit a short extension to any bigger set $Q'\supset Q$. Then:
\begin{subthm}{} $Q$ is isometric to a sphere in a Hilbert space (of finite or cardinal dimension). Moreover, there is a point $p\in \spc{L}$ such that $\dist{p}{q}{}=\tfrac{\pi}{2}$ for any $q\in Q$. \end{subthm}
\begin{subthm}{} The map $f\:Q\to\spc{U}$ is a global isometric embedding and there is no point $p'\in \spc{U}$ such that $\dist{p'}{q'}{}=\tfrac{\pi}{2}$ for all $q'\in f(Q)$. \end{subthm} \end{thm} \appendix \def\claim#1{ \par
\noindent \refstepcounter{thm} \hbox{\bf\boldmath \Alph{section}.\arabic{thm}. #1.} \it\ } \section{Barycentric simplex}\label{sec:baricentric}
The barycentric simplex was introduced by Kleiner in \cite{kleiner}; it is a construction that works in a general metric space. Roughly, it gives a $\kay$-\nospace dimensional submanifold for a given ``nondegenerate'' array of $\kay+1$ strongly convex functions.
Let us denote by $\Delta^\kay\subset \RR^{\kay+1}$\index{$\Delta^\kay$} the \emph{standard $\kay$-simplex}\index{standard simplex}; i.e., $\bm{x}=(x_0,x_1,\dots,x_\kay)\in\Delta^\kay$ if $\sum_{i=0}^\kay x_i=1$ and $x_i\ge0$ for all $i$.
Let $\spc{X}$ be a metric space and $\bm{f}=(f^0,f^1,\dots,f^\kay)\:\spc{X}\to \RR^{\kay+1}$ be a function array. Consider the map $\spx{\bm{f}}\:\Delta^\kay\to \spc{X}$,\index{$\spx{\bm{f}}$} defined by $$\spx{\bm{f}}(\bm{x})=\argmin\sum_{i=0}^\kay x_i\cdot f^i,$$ where $\argmin f$\index{$\argmin$} denotes a point of minimum of $f$. The map $\spx{\bm{f}}$ will be called a \emph{barycentric simplex}\index{barycentric simplex of function array} of $\bm{f}$. In general, a barycentric simplex of a function array might be undefined and need not be unique.
The name comes from the fact that if $\spc{X}$ is a Euclidean space and $f^i(x)\z=\tfrac{1}{2}\cdot\dist[2]{p^i}{x}{}$ for some array of points $\bm{p}=(p^0,p^1,\dots,p^\kay)$, then $\spx{\bm{f}}(\bm{x})$ is the barycenter of points $p^i$ with weights $x_i$.
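Indeed, in the Euclidean case the minimum can be found by differentiation; writing $q$ for the variable point, here is the one-line check:

```latex
% Gradient of the weighted average of halved squared distances:
$$\nabla_q \sum_{i=0}^\kay x_i\cdot\tfrac{1}{2}\cdot\dist[2]{p^i}{q}{}
=\sum_{i=0}^\kay x_i\cdot(q-p^i)
=q-\sum_{i=0}^\kay x_i\cdot p^i,$$
% which vanishes exactly at the barycenter of the p^i with weights x_i
% (here we used that \sum_i x_i = 1).
```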
A barycentric simplex $\spx{\bm{f}}$ for the function array $f^i(x)=\tfrac{1}{2}\cdot\dist[2]{p^i}{x}{}$ will also be called a \emph{barycentric simplex with vertices at} $\{p^i\}$\index{barycentric simplex with vertices at point array}.
It is clear from the definition that if $\bm{\hat f}$ is a subarray of $\bm{f}$, then $\spx{\bm{\hat f}}$ coincides with the restriction of $\spx{\bm{f}}$ to the corresponding face of $\Delta^\kay$.
The following theorem shows that the barycentric simplex is defined for an array of strongly convex functions on a complete geodesic space. In order to formulate the theorem, we need to introduce a partial order $\succcurlyeq$ on $\RR^{\kay+1}$.
\begin{thm}{Definition}\label{def:supset+succcurlyeq} For two real arrays $\bm{v}$, $\bm{w}\in \RR^{\kay+1}$, $\bm{v}=(v^0,v^1,\dots,v^\kay)$ and $\bm{w}=(w^0,w^1,\dots,w^\kay)$, we write $\bm{v}\succcurlyeq\bm{w}$ if $v^i\ge w^i$ for each $i$.
Given a subset $Q\subset \RR^{\kay+1}$, define its \emph{superset}\index{superset} \index{$\SupSet$} $$\SupSet Q =\{\bm{v}\in\RR^{\kay+1}\mid\exists\, \bm{w}\in Q\ \t{such that}\ \bm{v}\succcurlyeq\bm{w}\}.$$
\end{thm}
\begin{thm}{Theorem on barycentric simplex}\label{thm:bary} Assume $\spc{X}$ is a complete geodesic space and $\bm{f}\z=(f^0,f^1,\dots,f^\kay)\:\spc{X}\to\RR^{\kay+1}$ is an array of strongly convex and locally Lipschitz functions.
Then the barycentric simplex $\spx{\bm{f}}\:\Delta^\kay\to \spc{X}$ is uniquely defined and moreover:
\begin{subthm}{bary-Lip} $\spx{\bm{f}}$ is Lipschitz. \end{subthm}
\begin{subthm}{bary-iff} The set $\SupSet{\bm{f}(\spc{X})}\subset\RR^{\kay+1}$ is convex, and $p\in \spx{\bm{f}}(\Delta^\kay)$ if and only if $\bm{f}(p)\in\Fr\l[\SupSet{\bm{f}(\spc{X})}\r]$. In particular, $\bm{f}\circ\spx{\bm{f}}(\Delta^\kay)$ lies on a convex hypersurface in $\RR^{\kay+1}$. \end{subthm}
\begin{subthm}{bary-embed} The restriction $\bm{f}|_{\spx{\bm{f}}(\Delta^\kay)}$ has $C^{\frac{1}{2}}$-inverse. \end{subthm}
\begin{subthm}{bary-R^n} The set $\mathfrak{S}=\spx{\bm{f}}(\Delta^\kay)\backslash\spx{\bm{f}}(\partial\Delta^\kay)$ is $C^{\frac{1}{2}}$-homeomorphic to an open domain in $\RR^\kay$. \end{subthm} \end{thm}
The set $\mathfrak{S}$ described above will be called \emph{Kleiner's spine}\index{Kleiner's spine} of $\bm{f}$. If $\mathfrak{S}$ is nonempty, we say the barycentric simplex $\spx{\bm{f}}$ is \emph{nondegenerate}\index{nondegenerate}.
We precede the proof of the theorem with the following lemma.
\begin{thm}{Lemma}\label{lem:argmin(convex)} Assume $\spc{X}$ is a complete geodesic metric space and let $f\:\spc{X}\to\RR$ be a locally Lipschitz, strongly convex function. Then the minimum point $p=\argmin f$ is uniquely defined. \end{thm}
\parit{Proof.} Assume that $x$ and $y$ are distinct minimum points of $f$. Then for the midpoint $z$ of a geodesic $[x y]$ we have $$f(z)<f(x)=f(y),$$ a contradiction. It only remains to show existence.
Fix a point $p\in \spc{X}$; let $\Lip\in\RR$ be a Lipschitz constant of $f$ in a neighborhood of $p$. Without loss of generality, we can assume that $f$ is $1$-convex. Consider the function $\phi(t)=f\circ\geod_{[px]}(t)$. Clearly $\phi$ is $1$-convex and $\phi^+(0)\ge -\Lip$. Setting $\ell=\dist{p}{x}{}$, we get \begin{align*} f(x) &= \phi(\ell) \ge \\ &\ge f(p)-\Lip\cdot\ell+\tfrac{1}{2}\cdot\ell^2 \ge \\ &\ge f(p)-\tfrac{1}{2}\cdot{\Lip^2}. \end{align*}
In particular, $$s \df \inf\set{f(x)}{x\in \spc{X}} \ge f(p)-\tfrac{1}{2}\cdot{\Lip^2}.$$ If $z$ is a midpoint of $[x y]$ then $$s\le f(z) \le \tfrac{1}{2}\cdot f(x)+\tfrac{1}{2}\cdot f(y)-\tfrac{1}{8}\cdot\dist[2]{x}{y}{}. \eqlbl{mid-point}$$ Choose a sequence of points $p_n\in \spc{X}$ such that $f(p_n)\to s$. Applying \ref{mid-point}, for $x\z=p_n$, $y\z=p_m$, we get that $(p_n)$ is a Cauchy sequence. Clearly, $p_n\to \argmin f$. \qeds
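To make the Cauchy estimate in the last step explicit: applying \ref{mid-point} with $x=p_n$, $y=p_m$, together with $s\le f(z)$, and rearranging gives

```latex
$$\dist[2]{p_n}{p_m}{}
\le 4\cdot\bigl(f(p_n)-s\bigr)+4\cdot\bigl(f(p_m)-s\bigr)\to 0
\quad\text{as}\quad n,m\to\infty.$$
```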
\parit{Proof of theorem \ref{thm:bary}.} Without loss of generality, we can assume that each $f^i$ is $1$-convex. Thus, for any $\bm{x}\in\Delta^\kay$, the convex combination $\sum x_i\cdot f^i\:\spc{X}\to\RR$ is also $1$-convex. Therefore, according to Lemma~\ref{lem:argmin(convex)}, $\spx{\bm{f}}(\bm{x})$ is defined.
\parit{(\ref{SHORT.bary-Lip}).} Since $\Delta^\kay$ is compact, it is sufficient to show that $\spx{\bm{f}}$ is locally Lipschitz.
For $\bm{x},\bm{y}\in\Delta^\kay$, set \begin{align*} f_{\bm{x}} &=\sum x_i\cdot f^i, & f_{\bm{y}} &=\sum y_i\cdot f^i, \\ p &=\spx{\bm{f}}(\bm{x}), & q &=\spx{\bm{f}}(\bm{y}). \end{align*} Let $\ell=\dist{p}{q}{}$. Clearly $\phi(t)=f_{\bm{x}}\circ\geod_{[p q]}(t)$ takes its minimum at $0$ and $\psi(t)=f_{\bm{y}}\circ\geod_{[p q]}(t)$ takes its minimum at $\ell$. Thus $\phi^+(0)$, $\psi^-(\ell)\ge 0$ \footnote{Here $\phi^\pm$ denotes the ``signed one-sided derivative''; i.e.
$$\phi^\pm(t_0)=\lim_{t\to t_0\pm}\frac{\phi(t)-\phi(t_0)}{|t-t_0|}$$}. From $1$-convexity of $f_{\bm{y}}$, we have $\psi^+(0)+\psi^-(\ell)+\ell\le0$.
Let $\Lip$ be a Lipschitz constant for all $f^i$ in a neighborhood $\Omega\ni p$. Then $\psi^+(0)\le \phi^+(0)+\Lip\cdot\|\bm{x}-\bm{y}\|_{{}_1}$,
where $\|\bm{x}-\bm{y}\|_{{}_1}=\sum_{i=0}^\kay|x_i-y_i|$. That is, given $\bm{x}\in\Delta^\kay$, there is a constant $\Lip$ such that $$\dist{\spx{\bm{f}}(\bm{x})}{\spx{\bm{f}}(\bm{y})}{} =
\ell\le \Lip\cdot\|\bm{x}-\bm{y}\|_{{}_1}$$
for any $\bm{y}\in\Delta^\kay$. In particular, there is $\eps>0$ such that if $\|\bm{x}-\bm{y}\|_{{}_1},$ $\|\bm{x}-\bm{z}\|_{{}_1} <\eps$, then $\spx{\bm{f}}(\bm{y})$, $\spx{\bm{f}}(\bm{z})\in\Omega$. Thus, the same argument as above implies $$\dist{\spx{\bm{f}}(\bm{y})}{\spx{\bm{f}}(\bm{z})}{} =
\ell\le \Lip\cdot\|\bm{y}-\bm{z}\|_{{}_1}$$ for any $\bm{y}$ and $\bm{z}$ sufficiently close to $\bm{x}$; i.e. $\spx{\bm{f}}$ is locally Lipschitz.
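Unwinding the chain of estimates above (a spelled-out version of the argument, using $\phi^+(0)\ge0$, $\psi^-(\ell)\ge0$, the $1$-convexity bound $\psi^+(0)+\psi^-(\ell)+\ell\le0$, and $|\phi^+(0)-\psi^+(0)|\le\Lip\cdot\|\bm{x}-\bm{y}\|_{{}_1}$):

```latex
\begin{align*}
0\le\phi^+(0)
&\le\psi^+(0)+\Lip\cdot\|\bm{x}-\bm{y}\|_{{}_1}\le\\
&\le-\ell-\psi^-(\ell)+\Lip\cdot\|\bm{x}-\bm{y}\|_{{}_1}\le\\
&\le-\ell+\Lip\cdot\|\bm{x}-\bm{y}\|_{{}_1},
\end{align*}
% whence \ell \le \Lip\cdot\|\bm{x}-\bm{y}\|_{{}_1}.
```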
\parit{(\ref{SHORT.bary-iff}).} The ``only if'' part is trivial; let us prove the ``if'' part.
Note that convexity of $f^i$ implies that for any two points $p,q\in \spc{X}$ and $t\in[0,1]$ we have $$(1-t)\cdot\bm{f}(p)+t\cdot \bm{f}(q) \succcurlyeq \bm{f}\circ\geodpath_{[p q]}(t), \eqlbl{n-convex}$$ where $\geodpath_{[p q]}$ is a geodesic path from $p$ to $q$; i.e. $\geodpath_{[p q]}(t)=\geod_{[p q]}(\tfrac{t}{\dist{p}{q}{}})$.
From \ref{n-convex}, we have that $\SupSet[\bm{f}(\spc{X})]$ is a convex subset of $\RR^{\kay+1}$. If $$\max_{i}\{f^i(q)\z-f^i(p)\}\ge0$$ for any $q\in \spc{X}$, then $\bm{f}(p)$ lies in the boundary of $\SupSet[\bm{f}(\spc{X})]$. Take a supporting vector $\bm{x}\in\RR^{\kay+1}$ to $\SupSet[\bm{f}(\spc{X})]$ at $\bm{f}(p)$. Thus $\bm{x}\not=\bm{0}$ and $\sum_i x_i\cdot[w^i-f^i(p)]\ge0$ for any $\bm{w}\in \SupSet[\bm{f}(\spc{X})]$. In particular, $\sum_i x_i\cdot v_i \ge 0$ for any $\bm{v}=(v_0,\ldots, v_\kay)$ with all $v_i\ge 0$. Hence $x_i\ge 0$ for all $i$ and
$\bm{x}'=\frac{\bm{x}}{\|\bm{x}\|_{{}_1}}\in\Delta^\kay$. Thus $p=\spx{\bm{f}}(\bm{x}')$.
\parit{(\ref{SHORT.bary-embed}).}
The restriction $\bm{f}|_{\spx{\bm{f}}(\Delta^\kay)}$ is Lipschitz. Thus we only have to show that it has a $C^{\frac{1}{2}}$-inverse. Given $\bm{v}\in\RR^{\kay+1}$, consider the function $h_{\bm{v}}\: \spc{X}\to \RR$ given by $$h_{\bm{v}}(p)=\max_{i}\{f^i(p)-v^i\}.$$ Define a map $\map \:\RR^{\kay+1}\to \spc{X}$ by $\map (\bm{v})=\argmin h_{\bm{v}}$.
Clearly $h_{\bm{v}}$ is $1$-convex. Thus, according to \ref{lem:argmin(convex)}, $\map (\bm{v})$ is uniquely defined for any $\bm{v}\in\RR^{\kay+1}$. From (\ref{SHORT.bary-iff}), for any $p\in \spx{\bm{f}}(\Delta^\kay)$ we have $\map \circ\bm{f}(p)=p$.
It remains to show that $\map $ is $C^{\frac{1}{2}}$-continuous. Clearly,
$$|h_{\bm{v}}-h_{\bm{w}}|
\le\|\bm{v}-\bm{w}\|_\subinfty \df
\max_{i}\{|v^i-w^i|\},$$ for any $\bm{v},\bm{w}\in\RR^{\kay+1}$. Set $p=\map (\bm{v})$ and $q=\map (\bm{w})$. Since $h_{\bm{v}}$ and $h_{\bm{w}}$ are 1-convex, \begin{align*} h_{\bm{v}}(q) &\ge h_{\bm{v}}(p)+\tfrac{1}{2}\cdot\dist[2]{p}{q}{}, & h_{\bm{w}}(p) &\ge h_{\bm{w}}(q)+\tfrac{1}{2}\cdot\dist[2]{p}{q}{}. \end{align*} Therefore,
$$\dist[2]{p}{q}{}\le 2\cdot\|\bm{v}-\bm{w}\|_\subinfty.$$ Hence the result.
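In more detail, the displayed bound follows by adding the two $1$-convexity inequalities and regrouping:

```latex
\begin{align*}
\dist[2]{p}{q}{}
&\le\bigl[h_{\bm{v}}(q)-h_{\bm{v}}(p)\bigr]+\bigl[h_{\bm{w}}(p)-h_{\bm{w}}(q)\bigr]=\\
&=\bigl[h_{\bm{v}}(q)-h_{\bm{w}}(q)\bigr]+\bigl[h_{\bm{w}}(p)-h_{\bm{v}}(p)\bigr]\le\\
&\le 2\cdot\|\bm{v}-\bm{w}\|_\subinfty.
\end{align*}
```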
\parit{(\ref{SHORT.bary-R^n}).} Let $S=\Fr\SupSet(\bm{f}(\spc{X}))$. Note that orthogonal projection to the hyperplane $\WW^\kay$ in $\RR^{\kay+1}$ defined by the equation $x_0+x_1+\dots+x_\kay=0$ gives a bi-Lipschitz homeomorphism $S\to \WW^\kay$.
Clearly, $\bm{f}({\spx{\bm{f}}(\Delta^\kay)}\backslash\spx{\bm{f}}(\partial\Delta^\kay))$ is an open subset of $S$. Hence the result. \qeds \section{Helly's theorem}\label{sec:helly}
\begin{thm}{Helly's theorem}\label{thm:helly} Let $\spc{U}\in\cCat{}0$ and $\{K_\alpha\}_{\alpha\in \IndexSet}$ be an arbitrary collection of closed bounded convex subsets of $\spc{U}$.
If $$\bigcap_{\alpha\in \IndexSet}K_\alpha=\emptyset,$$ then there is an index array $\alpha_1,\alpha_2,\dots,\alpha_n\in \IndexSet$ such that $$\bigcap_{i=1}^nK_{\alpha_i}=\emptyset.$$
\end{thm}
\parbf{Remarks.} \begin{enumerate}[(i)] \item In general, none of the $K_\alpha$ need be compact. Thus the statement is not completely trivial. \item If $\spc{U}$ is a Hilbert space (not necessarily separable), then the above result is equivalent to the statement that a convex bounded set which is closed in the ordinary topology forms a compact set in the weak topology.
In fact, one can define the \emph{weak topology} on an arbitrary metric space by taking exteriors of closed balls as its prebase. Then the result above implies that for $\spc{U}\in\cCat{}0$, any closed bounded convex set in $\spc{U}$ is compact in the weak topology (this is very similar to the definition given by Monod in \cite{monod}). \end{enumerate}
We present the proof of Lang and Schroeder from \cite{lang-schroeder}.
\begin{thm}{Lemma}\label{lem:closest point} Let $\spc{U}\in\cCat{}0$. Given a closed convex set $K\subset \spc{U}$ and a point $p\in \spc{U}\backslash K$, there is a unique point $p^*\in K$ such that $\dist{p^*}{p}{}=\dist{K}{p}{}$. \end{thm}
\parit{Proof.} Let us first prove uniqueness. Assume there are two points $y',y''\in K$ so that $\dist{y'}{p}{}=\dist{y''}{p}{}=\dist{K}{p}{}$. Take $z$ to be the midpoint of $[y'y'']$. Since $K$ is convex, $z\in K$. From comparison, we have that $\dist{z}{p}{}<\dist{y'}{p}{}=\dist{K}{p}{}$, a contradiction.
The proof of existence is analogous. Take a sequence of points $y_n\in K$ such that $\dist{y_n}{p}{}\to \dist{K}{p}{}$. It is enough to show that $(y_n)$ is a Cauchy sequence; thus one could take $p^*=\lim_n y_n$.
Assume $(y_n)$ is not Cauchy; then for some fixed $\eps>0$, we can choose two subsequences $(y_n')$ and $(y_n'')$ of $(y_n)$ such that $\dist{y'_n}{y''_n}{}\ge\eps$ for each $n$. Set $z_n$ to be the midpoint of $[y'_ny''_n]$; from convexity we have $z_n\in K$. From point-on-side comparison (see page \pageref{POS-CAT}), there is $\delta>0$ such that $\dist{p}{z_n}{}\le \max\{\dist{p}{y'_n}{},\dist{p}{y''_n}{}\}-\delta$. Thus $$\limsup_{n\to\infty}\dist{p}{z_n}{}<\dist{K}{p}{},$$ a contradiction.\qeds
\parit{Proof of \ref{thm:helly}.} Assume the contrary. Then for any finite set $F\subset \IndexSet$, $$K_{F}\df \bigcap_{\alpha\in F}K_{\alpha}\not=\emptyset.$$ We construct a point $z$ such that $z\in K_\alpha$ for each $\alpha\in \IndexSet$. Thus we arrive at a contradiction since $$\bigcap_{\alpha\in \IndexSet}K_\alpha=\emptyset.$$
Choose a point $p\in \spc{U}$ and set $r=\sup\dist{K_{F}}{p}{}$ where $F$ runs over all finite subsets of $\IndexSet$. Let $p^*_F$ be the closest point on $K_{F}$ to $p$; according to Lemma \ref{lem:closest point}, $p^*_F$ exists and is unique.
Take a nested sequence of finite subsets $F_1\subset F_2\subset \dots$ of $\IndexSet$, such that $\dist{K_{F_n}}{p}{}\to r$.
Let us show that $(p^*_{F_n})$ is a Cauchy sequence. Indeed, if not, then for some fixed $\eps>0$, we can choose two subsequences $(y'_n)$ and $(y''_n)$ of $(p^*_{F_n})$ such that $\dist{y'_n}{y''_n}{}\ge\eps$. Set $z_n$ to be the midpoint of $[y'_ny''_n]$. From point-on-side comparison (see page \pageref{POS-CAT}), there is $\delta>0$ such that $\dist{p}{z_n}{}\le \max\{\dist{p}{y'_n}{},\dist{p}{y''_n}{}\}-\delta$. Thus $$\limsup_{n\to\infty}\dist{p}{z_n}{}<r.$$ On the other hand, from convexity, each $K_{F_n}$ contains all $z_\kay$ with sufficiently large $\kay$, a contradiction.
Thus, $p^*_{F_n}$ converges and we can set $z=\lim_n p^*_{F_n}$. Clearly $\dist{p}{z}{}=r$.
Fix $\alpha\in\IndexSet$ and repeat the above arguments for the sequence $F_n'=F_n\cup \{\alpha\}$. As a result, we get another point $z'$ such that $\dist{p}{z}{}=\dist{p}{z'}{}=r$ and $z,z'\in K_{F_n}$ for all $n$. Thus, if $z\not=z'$, the midpoint $\hat z$ of $[zz']$ would belong to all $K_{F_n}$, and from comparison we would have $\dist{p}{\hat z}{}<r$, a contradiction.
Thus, $z'=z$; in particular $z\in K_\alpha$ for each $\alpha\in\IndexSet$. \qeds
\end{document}
\begin{document}
\title[Mazur-Ulam theorem in non-Archimedean $n$-normed spaces]
{A Mazur-Ulam theorem for Mappings of conservative distance in non-Archimedean $n$-normed spaces}
\author[Hahng-Yun Chu and Se-Hyun Ku]{Hahng-Yun Chu$^\dagger$ and Se-Hyun Ku$^\ast$}
\address{Hahng-Yun Chu, Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, 335, Gwahak-ro, Yuseong-gu, Daejeon 305-701, Republic of Korea } \email{\rm hychu@kaist.ac.kr}
\address{Se-Hyun Ku, Department of Mathematics, Chungnam National University, 79, Daehangno, Yuseong-Gu, Daejeon 305-764, Republic of Korea} \email{\rm shku@cnu.ac.kr}
\thanks{ \newline\ \noindent 2010 {\it Mathematics Subject Classification}. primary 46B20, secondary 51M25, 46S10 \newline {\it Key words and Phrases}. Mazur-Ulam theorem, $n$-isometry, non-Archimedean $n$-normed space \newline{\it $\ast$ Corresponding author} \newline{\it $\dagger$ The author was supported by the second stage of the Brain Korea 21 Project, The Development Project of Human Resources in Mathematics, KAIST in 2008. \\}}
\maketitle
\begin{abstract} In this article, we study the notion of an $n$-isometry in non-Archimedean $n$-normed spaces over linear ordered non-Archimedean fields, and prove the Mazur-Ulam theorem in these spaces. Furthermore, we obtain some properties of $n$-isometries in non-Archimedean $n$-normed spaces. \end{abstract}
\baselineskip=18pt
\theoremstyle{definition}
\newtheorem{df}{Definition}[section]
\newtheorem{rk}[df]{Remark} \newtheorem{ma}[df]{Main Theorem}
\newtheorem{cj}[df]{Conjecture}
\newtheorem{pb}[df]{Problem} \theoremstyle{plain}
\newtheorem{lm}[df]{Lemma}
\newtheorem{eq}[df]{equation}
\newtheorem{thm}[df]{Theorem}
\newtheorem{cor}[df]{Corollary}
\newtheorem{pp}[df]{Proposition}
\setcounter{section}{0}
\section{Introduction}
Let $X$ and $Y$ be metric spaces with metrics $d_X$ and $d_Y,$ respectively. A map $f:X\rightarrow Y$ is called an isometry if $d_Y(f(x),f(y))=d_X(x,y)$ for every $x,y\in X.$ Mazur and Ulam\cite{mu32} were the first to treat the theory of isometries. They proved the following theorem:
\textbf{Mazur-Ulam Theorem}\,\,\,\textit{Let $f$ be an isometric transformation from a real normed vector space $X$ onto a real normed vector space $Y$ with $f(0)=0$. Then $f$ is linear.}
It is natural to ask whether the result holds without the onto assumption. Addressing this question, Baker\cite{b71} showed that every isometry of a real normed linear space into a strictly convex real normed linear space is affine. The Mazur-Ulam theorem has been widely studied; see \cite{j01,ms08,r01,rs93,rw03,v03,x01}.
Chu et al.\cite{cpp04} defined the notion of a $2$-isometry which is suitable to represent the concept of an area preserving mapping in linear $2$-normed spaces. In \cite{c07}, Chu proved that the Mazur-Ulam theorem holds in linear $2$-normed spaces under the condition that a $2$-isometry preserves collinearity. Chu et al.\cite{ckk08} discussed characteristics of $2$-isometries. In \cite{as}, Amyari and Sadeghi proved the Mazur-Ulam theorem in non-Archimedean $2$-normed spaces under the condition of strict convexity. Recently, Choy et al.\cite{cck} proved the theorem on non-Archimedean $2$-normed spaces over linear ordered non-Archimedean fields without the strict convexity assumption.
Misiak\cite{m89-1, m89-2} defined the concept of an $n$-normed space and investigated the space. Chu et al.\cite{clp04}, in linear $n$-normed spaces, defined the concept of an $n$-isometry that is suitable to represent the notion of a volume preserving mapping. In~\cite{cck09}, Chu et al. generalized the Mazur-Ulam theorem to $n$-normed spaces. In recent years, Chen and Song\cite{cs} characterized $n$-isometries in linear $n$-normed spaces.
In this paper, without the condition of strict convexity, we prove the (additive) Mazur-Ulam theorem on non-Archimedean $n$-normed spaces. Firstly, we assert that an $n$-isometry $f$ between non-Archimedean spaces preserves the midpoint of a segment under a condition on the set of all elements of a valued field whose valuations are 1. Using this result, we prove the Mazur-Ulam theorem on non-Archimedean $n$-normed spaces over linear ordered non-Archimedean fields. In addition, we prove that the barycenter of a triangle in non-Archimedean $n$-normed spaces is $f$-invariant under conditions different from those in the previous statements. We then also prove a second version of the Mazur-Ulam theorem in non-Archimedean $n$-normed spaces under these different conditions.
\section{The Mazur-Ulam theorem on non-Archimedean $n$-normed spaces}
In this section, we introduce a non-Archimedean $n$-normed space which is a kind of a generalization of a non-Archimedean $2$-normed space and we show the (additive) Mazur-Ulam theorem for an $n$-isometry $f$ defined on a non-Archimedean $n$-normed space, that is, $f(x)-f(0)$ is additive. Firstly, we consider some definitions and lemmas which are needed to prove the theorem.
Recall that a {\emph{non-Archimedean}} (or {\emph{ultrametric}}) {\emph{valuation}} is given by a map $|\cdot|$ from a field ${\mathcal{K}}$ into $[0,\infty)$ such that for all $r,s\in{\mathcal{K}}$:
(i) $|r|=0$ if and only if $r=0$;
(ii) $|rs|=|r||s|$;
(iii) $|r+s|\leq \max\{|r|, |s|\}.$
A field ${\mathcal{K}}$ equipped with such a valuation is called a {\emph{valued field}}; for convenience, we simply call it a field. It is obvious that $|1|=|-1|=1$ and $|n| \leq 1$ for all $n \in {\mathbb N}$. A trivial example of a non-Archimedean valuation is the map $|\cdot|$ taking everything but $0$ into $1$ and
$|0|=0$ (see~\cite{nb81}).
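A standard nontrivial example (recorded here only for illustration; it is not used below) is the $p$-adic valuation on $\mathbb{Q}$ for a fixed prime $p$:

```latex
% Write a nonzero rational r as r = p^{v}\cdot\frac{a}{b} with p \nmid ab; then set
|r|_p=p^{-v},\qquad |0|_p=0.
% For instance, with p = 2 one has |12|_2=|2^2\cdot 3|_2=2^{-2}=\tfrac{1}{4}, and
% |4+12|_2=|16|_2=2^{-4}\leq\max\{|4|_2,|12|_2\}=\tfrac{1}{4},
% illustrating the strong triangle inequality.
```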
A {\emph{non-Archimedean norm}} is a function $\| \cdot \| :{\mathcal X} \to [0, \infty)$ such that for all $r \in {\mathcal K}$ and $x,y \in {\mathcal X}$:
(i) $\|x\| = 0$ if and only if $x=0$;
(ii) $\|rx\| = |r| \|x\|$;
(iii) the strong triangle inequality
$$\| x+ y\| \leq \max \{\|x\|, \|y\|\}.$$
Then we say $({\mathcal X}, \|\cdot\|)$ is a {\emph{non-Archimedean space}}.
\begin{df}\label{df31} Let ${\mathcal X}$ be a vector space with the dimension greater than $n-1$ over a valued field $\mathcal{K}$
with a non-Archimedean valuation $|\cdot|.$
A function $\| \cdot, \cdots, \cdot \|:{\mathcal{X}}\times\cdots\times{\mathcal{X}}\rightarrow[0,\infty)$ is said to be a {\emph{non-Archimedean $n$-norm}} if
$(\textrm{i}) \ \ \|x_1, \cdots, x_n \|=0 \Leftrightarrow x_1, \cdots, x_n
\textrm{ are linearly dependent} ;$
$(\textrm{ii}) \ \ \|x_1, \cdots, x_n \| = \| x_{j_1}, \cdots, x_{j_n} \| $ for every permutation $(j_1, \cdots, j_n)$ of $(1, \cdots, n) ;$
$(\textrm{iii}) \ \ \| \alpha x_1, \cdots, x_n \| =| \alpha | \
\| x_1, \cdots, x_n \| ;$
$(\textrm{iv}) \ \ \|x+y, x_2, \cdots, x_n \| \le \max\{\|x, x_2,
\cdots, x_n\|, \|y, x_2, \cdots, x_n\|\} $
\noindent for all $\alpha \in \mathcal{K}$ and all $x, y, x_1,
\cdots, x_n \in \mathcal{X}$. Then $(\mathcal{X},\| \cdot, \cdots, \cdot \|)$ is called a {\it non-Archimedean $n$-normed space}.
\end{df}
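A model example (given as an illustration under the assumption $\mathcal{X}=\mathcal{K}^n$; nothing below depends on it) is the determinant $n$-norm:

```latex
% For x_i = (x_{i1},\cdots,x_{in}) \in \mathcal{K}^n set
\|x_1,\cdots,x_n\|=\bigl|\det(x_{ij})_{i,j=1}^{n}\bigr|.
% (i) holds since the determinant vanishes exactly when x_1,\cdots,x_n are
%     linearly dependent; (ii) and (iii) follow from |\pm1|=1 and multilinearity;
% (iv) follows from \det(x+y,x_2,\cdots)=\det(x,x_2,\cdots)+\det(y,x_2,\cdots)
%     together with the ultrametric inequality |r+s|\leq\max\{|r|,|s|\}.
```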
From now on, let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces over a linear ordered non-Archimedean field $\mathcal{K}.$
\begin{df} \cite{clp04}
Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces and $f : \mathcal{X} \rightarrow \mathcal{Y}$ a mapping. We call $f$ an {\it $n$-isometry} if
$$\|x_1 - x_0, \cdots, x_n - x_0\|=\|f(x_1) - f(x_0), \cdots, f(x_n)- f(x_0)\|$$ for all $x_0, x_1, \cdots, x_n \in \mathcal{X}$. \end{df}
\begin{df}~\cite{clp04}
The points $x_{0}, x_{1}, \cdots, x_{n}$ of a non-Archimedean $n$-normed space $\mathcal{X}$ are said to be {\it n-collinear} if for every $i$, $\{x_{j} - x_{i} \mid 0 \le j \neq i \le n \}$ is linearly dependent. \end{df}
The points $x_0,x_1$ and $x_2$ of a non-Archimedean $n$-normed space $\mathcal{X}$ are said to be $2$-collinear if and only if $x_2-x_0=t(x_1-x_0)$ for some element $t$ of the field $\mathcal{K}$. We denote the set of all elements of $\mathcal{K}$ whose valuations are $1$ by $\mathcal{C},$
that is, ${\mathcal{C}}=\{\gamma\in{\mathcal{K}}:|\gamma|=1\}.$
\begin{lm}{\label{lm31}} Let $x_{i}$ be an element of a non-Archimedean $n$-normed space $\mathcal{X}$ for every $i \in \{1, \cdots , n\}$ and $\gamma\in\mathcal{K}.$ Then $$
\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \| =
\|x_{1}, \cdots , x_{i}, \cdots , x_{j} + \gamma x_{i}, \cdots , x_{n} \| $$ for all $1 \leq i \ne j \leq n.$ \end{lm}
\begin{pf} By the condition $(\textrm{iv})$ of Definition~\ref{df31}, we have \begin{eqnarray*}
&&\|x_{1}, \cdots , x_{i}, \cdots , x_{j} + \gamma x_{i}, \cdots ,x_{n} \|\\
&\leq&\max\{\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \|,|\gamma|\,\|x_{1}, \cdots , x_{i}, \cdots , x_{i}, \cdots ,x_{n} \|\}\\
&=&\max\{\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \|,0\}\\
&=&\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \|. \end{eqnarray*} One can easily prove the converse inequality using similar methods. This completes the proof. \end{pf}
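The converse inequality can be spelled out: writing $x_j=(x_j+\gamma x_i)+(-\gamma)x_i$ and applying condition $(\textrm{iv})$ of Definition~\ref{df31} once more (the second term in the maximum vanishes because of the repeated entry $x_i$),

```latex
\begin{eqnarray*}
\|x_{1}, \cdots , x_{i}, \cdots , x_{j}, \cdots , x_{n} \|
&=&\|x_{1}, \cdots , x_{i}, \cdots , (x_{j}+\gamma x_{i})+(-\gamma)x_{i}, \cdots , x_{n} \|\\
&\leq&\max\{\|x_{1}, \cdots , x_{i}, \cdots , x_{j}+\gamma x_{i}, \cdots , x_{n} \|,
|\gamma|\,\|x_{1}, \cdots , x_{i}, \cdots , x_{i}, \cdots , x_{n} \|\}\\
&=&\|x_{1}, \cdots , x_{i}, \cdots , x_{j}+\gamma x_{i}, \cdots , x_{n} \|.
\end{eqnarray*}
```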
\begin{rk}\label{rk33} Let $\mathcal{X,Y}$ be non-Archimedean $n$-normed spaces over a linear ordered non-Archimedean field $\mathcal{K}$ and let $f:\mathcal{X}\rightarrow\mathcal{Y}$ be an $n$-isometry. One can show that the $n$-isometry $f$ from $\mathcal{X}$ to $\mathcal{Y}$ preserves the $2$-collinearity using a method similar to that in~\cite[Lemma 3.2]{cck09}. \end{rk}
The {\emph{midpoint}} of a segment with endpoints $x$ and $y$ in the non-Archimedean $n$-normed space $\mathcal{X}$ is defined by the point $\frac{x+y}{2}.$
Now we prove the Mazur-Ulam theorem on non-Archimedean $n$-normed spaces. In the first step, we prove the following lemma. Then, using the lemma, we show that an $n$-isometry $f$ from a non-Archimedean $n$-normed space $\mathcal{X}$ to a non-Archimedean $n$-normed space $\mathcal{Y}$ preserves the midpoint of a segment. That is, the $f$-image of the midpoint of a segment in $\mathcal{X}$ is also the midpoint of the corresponding segment in $\mathcal{Y}.$
\begin{lm}\label{lm39}
Let $\mathcal{X}$ be a non-Archimedean $n$-normed space over a linear ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{2^n|\,n\in{\mathbb{Z}}\}$ and let $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1.$ Then $u:=\frac{x_0+x_1}{2}$ is the unique member of $\mathcal{X}$ satisfying \begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-u,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-u,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\| \end{eqnarray*}
for some $x_2,\cdots,x_n\in \mathcal{X}$ with $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0$ and $u,x_0,x_1$ are $2$-collinear. \end{lm}
\begin{pf} Let $u:=\frac{x_0+x_1}{2}.$
From the assumption on the dimension of $\mathcal{X},$ there exist $n-1$ elements $x_2,\cdots,x_n$ in $\mathcal{X}$ such that $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0.$ One can easily prove that $u$ satisfies the above equations and conditions. It suffices to show the uniqueness of $u.$ Assume that there is another point $v$ satisfying \begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-v,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-v,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\| \end{eqnarray*}
for some elements $x_2,\cdots,x_n$ of $\mathcal{X}$ with $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0$ and $v,x_0,x_1$ are $2$-collinear. Since $v,x_0,x_1$ are $2$-collinear, $v=tx_0+(1-t)x_1$ for some $t\in\mathcal{K}.$ Then we have \begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-v,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-tx_0-(1-t)x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&|1-t|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|, \end{eqnarray*} \begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_1-v,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|\\
&=&\|x_1-tx_0-(1-t)x_1,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|\\
&=&|t|\,\|x_0-x_1,x_1-x_2,x_1-x_3,\cdots,x_1-x_n\|\\
&=&|t|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|. \end{eqnarray*}
Since $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0$, we have the two equations $|1-t|=1$ and $|t|=1.$ So there are two integers $k_1,k_2$ such that $1-t=2^{k_1},\,t=2^{k_2}.$ Since $2^{k_1}+2^{k_2}=1,$ $k_i<0$ for all $i=1,2.$ Thus we may assume that $1-t=2^{-n_1},\,t=2^{-n_2}$ and $n_1\geq n_2\in\mathbb{N}$ without loss of generality. If $n_1> n_2,$ then $1=2^{-n_1}+2^{-n_2}=2^{-n_1}(1+2^{n_1-n_2}),$ that is, $2^{n_1}=1+2^{n_1-n_2}.$ This is a contradiction because the left side of the equation is a multiple of $2$ but the right side is not. Thus $n_1=n_2,$ and then $2\cdot2^{-n_1}=1$ forces $n_1=n_2=1.$ Hence $v=\frac{1}{2}x_0+\frac{1}{2}x_1=u.$ \end{pf}
\begin{thm}\label{thm38}
Let $\mathcal{X},\mathcal{Y}$ be non-Archimedean $n$-normed spaces over a linear ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{2^n|\,n\in{\mathbb{Z}}\}$ and $f:\mathcal{X}\rightarrow \mathcal{Y}$ an $n$-isometry. Then the midpoint of a segment is $f$-invariant, i.e., for every $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1,$ $f(\frac{x_0+x_1}{2})$ is also the midpoint of a segment with endpoints $f(x_0)$ and $f(x_1)$ in $\mathcal{Y}.$ \end{thm}
\begin{pf} Let $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1.$ Since the dimension of $\mathcal{X}$ is greater than $n-1,$
there exist $n-1$ elements $x_2,\cdots,x_n$ of $\mathcal{X}$ satisfying $\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\neq0.$ Since $x_0,x_1$ and their midpoint $\frac{x_0+x_1}{2}$ are $2$-collinear in $\mathcal{X}$, $f(x_0),f(x_1),f(\frac{x_0+x_1}{2})$ are also $2$-collinear in $\mathcal{Y}$ by Remark~\ref{rk33}. Since $f$ is an $n$-isometry, we have the following: \begin{eqnarray*}
&&\|f(x_0)-f(\frac{x_0+x_1}{2}),f(x_0)-f(x_2),\cdots,f(x_0)-f(x_n)\|\\
&=&\|x_0-\frac{x_0+x_1}{2},x_0-x_2,\cdots,x_0-x_n\|\\
&=&|\frac{1}{2}|\,\|x_0-x_1,x_0-x_2,\cdots,x_0-x_n\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),\cdots,f(x_0)-f(x_n)\|\,,\\ \\
&&\|f(x_1)-f(\frac{x_0+x_1}{2}),f(x_1)-f(x_2),\cdots,f(x_1)-f(x_n)\|\\
&=&\|x_1-\frac{x_0+x_1}{2},x_1-x_2,\cdots,x_1-x_n\|\\
&=&|\frac{1}{2}|\,\|x_1-x_0,x_1-x_2,\cdots,x_1-x_n\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),\cdots,f(x_0)-f(x_n)\|. \end{eqnarray*} By Lemma ~\ref{lm39}, we obtain that $f(\frac{x_0+x_1}{2})=\frac{f(x_0)+f(x_1)}{2}$ for all $x_0,x_1\in\mathcal{X}$ with $x_0\neq x_1.$ This completes the proof. \end{pf}
\begin{lm}\label{lm36} Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces and $f:\mathcal{X} \rightarrow \mathcal{Y}$ an $n$-isometry. Then the following conditions are equivalent.
$(\textrm{i})$ The $n$-isometry $f$ is additive, i.e., $f(x_0+x_1)=f(x_0)+f(x_1)$ for all $x_0,x_1\in \mathcal{X};$
$(\textrm{ii})$ the $n$-isometry $f$ preserves the midpoint of a segment in $\mathcal{X},$ i.e., $f(\frac{x_0+x_1}{2})=\frac{f(x_0)+f(x_1)}{2}$ for all $x_0,x_1\in \mathcal{X}$ with $x_0 \neq x_1;$
$(\textrm{iii})$ the $n$-isometry $f$ preserves the barycenter of a triangle in $\mathcal{X}$, i.e., $f(\frac{x_0+x_1+x_2}{3})=\frac{f(x_0)+f(x_1)+f(x_2)}{3}$ for all $x_0,x_1,x_2\in \mathcal{X}$ satisfying that $x_0,x_1,x_2$ are not $2$-collinear. \end{lm}
\begin{pf} Firstly, the equivalence between $(\textrm{i})$ and $(\textrm{ii})$ is obvious. Thus it suffices to show that $(\textrm{ii})$ is equivalent to $(\textrm{iii}).$ Assume that the $n$-isometry $f$ preserves the barycenter of a triangle in $\mathcal{X}.$ Let $x_0,x_1$ be in $\mathcal{X}$ with $x_0 \neq x_1.$ Since the $n$-isometry $f$ preserves the $2$-collinearity, $f(x_0),f(\frac{x_0+x_1}{2}),f(x_1)$ are $2$-collinear. So \begin{equation}\label{eq31} f(\frac{x_0+x_1}{2})-f(x_0)=s\Big(f(x_1)-f(x_0)\Big) \end{equation} for some element $s$ of the field $\mathcal{K}$. By the hypothesis on the dimension of $\mathcal{X}$, we can choose an element $x_2$ of $\mathcal{X}$ satisfying that $x_0,x_1$ and $x_2$ are not $2$-collinear. Since $x_2,\frac{x_0+x_1+x_2}{3},\frac{x_0+x_1}{2}$ are $2$-collinear, we have that $f(x_2),f(\frac{x_0+x_1+x_2}{3}),f(\frac{x_0+x_1}{2})$ are also $2$-collinear by Remark~\ref{rk33}. So we obtain that \begin{equation}\label{eq32} f(\frac{x_0+x_1+x_2}{3})-f(x_2)=t\Big(f(\frac{x_0+x_1}{2})-f(x_2)\Big) \end{equation} for some element $t$ of the field $\mathcal{K}$. By the equations (\ref{eq31}), (\ref{eq32}) and the barycenter preserving property of the $n$-isometry $f$, we have $$\frac{f(x_0)+f(x_1)+f(x_2)}{3}-f(x_2)=t\Big(f(x_0)+sf(x_1)-sf(x_0)-f(x_2)\Big).$$ Thus we get $$\frac{f(x_0)+f(x_1)-2f(x_2)}{3}=t(1-s)f(x_0)+tsf(x_1)-tf(x_2).$$ So we have the following equation $$\frac{2}{3}\Big(f(x_0)-f(x_2)\Big)-\frac{1}{3}\Big(f(x_0)-f(x_1)\Big)=t\Big(f(x_0)-f(x_2)\Big)-ts\Big(f(x_0)-f(x_1)\Big).$$ By a calculation, we obtain \begin{equation}\label{eq33} (\frac{2}{3}-t)\Big(f(x_0)-f(x_2)\Big)+(-\frac{1}{3}+ts)\Big(f(x_0)-f(x_1)\Big)=0. \end{equation}
Since $x_0,x_1,x_2$ are not $2$-collinear, $x_0-x_1,x_0-x_2$ are linearly independent. Since $\dim{\mathcal{X}}\geq n,$ there are $x_3,\cdots,x_n\in \mathcal{X}$ such that $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$ Since $f$ is an $n$-isometry, \begin{eqnarray*}
&&\|f(x_0)-f(x_1),f(x_0)-f(x_2),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|\\
&&=\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0. \end{eqnarray*} So $f(x_0)-f(x_1)$ and $f(x_0)-f(x_2)$ are linearly independent. Hence, from the equation (\ref{eq33}), we have $\frac{2}{3}-t=0$ and $-\frac{1}{3}+ts=0.$ That is, we obtain $t=\frac{2}{3}$ and $s=\frac{1}{2},$ which imply the equation $$f(\frac{x_0+x_1}{2})=\frac{f(x_0)+f(x_1)}{2}$$ for all $x_0,x_1\in \mathcal{X}$ with $x_0\neq x_1.$
Conversely, $(\textrm{ii})$ trivially implies $(\textrm{iii})$. This completes the proof of this lemma. \end{pf}
\begin{rk}{\label{rk3-10}} One can prove that the above lemma also holds in the case of linear $n$-normed spaces. \end{rk}
\begin{thm}{\label{rk37}}
Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces over a linear ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{2^n|\,n\in{\mathbb{Z}}\}.$ If $f:\mathcal{X}\rightarrow \mathcal{Y}$ is an $n$-isometry, then $f(x)-f(0)$ is additive. \end{thm}
\begin{pf} Let $g(x):=f(x)-f(0).$ Then it is clear that $g(0)=0$ and $g$ is also an $n$-isometry.
From Theorem~\ref{thm38}, for $x_0,x_1\in{\mathcal{X}}(x_0\neq x_1),$ we have $$g\Big(\frac{x_0+x_1}{2}\Big)=\frac{g(x_0)+g(x_1)}{2}.$$ Hence, by Lemma~\ref{lm36}, we obtain that $g$ is additive which completes the proof. \end{pf}
In the remainder of this section, under conditions different from those of Theorem~\ref{rk37}, we also prove the Mazur-Ulam theorem on a non-Archimedean $n$-normed space. Firstly, we show that an $n$-isometry $f$ from a non-Archimedean $n$-normed space $\mathcal{X}$ to a non-Archimedean $n$-normed space $\mathcal{Y}$ preserves the barycenter of a triangle. That is, the $f$-image of the barycenter of a triangle is also the barycenter of the corresponding triangle. Then, using Lemma~\ref{lm36}, we prove the Mazur-Ulam theorem (non-Archimedean $n$-normed space version) under these different conditions.
\begin{lm}\label{lm35} Let $\mathcal{X}$ be a non-Archimedean $n$-normed space over a linear ordered non-Archimedean field $\mathcal{K}$
with ${\mathcal{C}}=\{3^n|\,n\in{\mathbb{Z}}\}$ and let $x_0,x_1,x_2$ be elements of $\mathcal{X}$ such that $x_0,x_1,x_2$ are not $2$-collinear. Then $u:=\frac{x_0+x_1+x_2}{3}$ is the unique member of $\mathcal{X}$ satisfying \begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-x_1,x_0-u,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-x_2,x_1-u,x_1-x_3,\cdots,x_1-x_n\|\\
=\|x_2-x_0,x_2-u,x_2-x_3,\cdots,x_2-x_n\| \end{eqnarray*}
for some $x_3,\cdots,x_n\in \mathcal{X}$ with $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0$ and $u$ is an interior point of $\triangle_{x_0x_1x_2}.$ \end{lm}
\begin{pf} Let $u:=\frac{x_0+x_1+x_2}{3}.$ Thus $u$ is an interior point of $\triangle_{x_0x_1x_2}.$ Since $\dim{\mathcal{X}}>n-1,$ there are $n-2$ elements $x_3,\cdots,x_n$ of $\mathcal{X}$ such that
$\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$ Applying Lemma~\ref{lm31}, we have that \begin{eqnarray*}
&&\|x_0-x_1,x_0-u,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-\frac{x_0+x_1+x_2}{3},x_0-x_3,\cdots,x_0-x_n\|\\
&=&|\frac{1}{3} |\,\|x_0-x_1,x_0-x_1+x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|. \end{eqnarray*} And we can also obtain that \begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_1-x_2,x_1-u,x_1-x_3,\cdots,x_1-x_n\|\\
&=&\|x_2-x_0,x_2-u,x_2-x_3,\cdots,x_2-x_n\|. \end{eqnarray*} For the proof of uniqueness, let $v$ be another interior point of $\triangle_{x_0x_1x_2}$ satisfying \begin{eqnarray*}
\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_0-x_1,x_0-v,x_0-x_3,\cdots,x_0-x_n\|\\
=\|x_1-x_2,x_1-v,x_1-x_3,\cdots,x_1-x_n\|\\
=\|x_2-x_0,x_2-v,x_2-x_3,\cdots,x_2-x_n\| \end{eqnarray*}
with $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$
Since $v$ is an element of the set $\{t_0x_0+t_1x_1+t_2x_2 |\,t_0+t_1+t_2=1,\,t_i\in{\mathcal{K}},\,t_i>0\,\,{\text{for all}}\,\,i\},$ there are elements $s_0,s_1,s_2$ of $\mathcal{K}$ with $s_0+s_1+s_2=1,\,s_i>0$ such that $v=s_0x_0+s_1x_1+s_2x_2.$ Then we have \begin{eqnarray*}
&&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-v,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-s_0x_0-s_1x_1-s_2x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,(s_0-1)x_0+s_1x_1+(1-s_0-s_1)x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,(s_0+s_1-1)x_0+(1-s_0-s_1)x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&|s_0+s_1-1|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&|s_2|\,\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\| \end{eqnarray*}
and hence $|s_2|=1$ since $\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\neq0.$
Similarly, we obtain $|s_0|=|s_1|=1.$ By the hypothesis on $\mathcal{C},$ there are integers $k_0,k_1,k_2$ such that $s_0=3^{k_0},s_1=3^{k_1},s_2=3^{k_2}.$ Since $s_0+s_1+s_2=1,$ every $k_i$ is less than $0.$ So, one may let $s_0=3^{-n_0},s_1=3^{-n_1},s_2=3^{-n_2}$ with $n_0\geq n_1\geq n_2\in\mathbb{N}.$ Assume that one of the above inequalities is strict. Then $1=s_0+s_1+s_2=3^{-n_0}(1+3^{n_0-n_1}+3^{n_0-n_2}),$ i.e.,\,$3^{n_0}=1+3^{n_0-n_1}+3^{n_0-n_2}.$ This is a contradiction, because the left side is a multiple of $3$ whereas the right side is not. Thus $n_0=n_1=n_2.$ Consequently, $s_0=s_1=s_2=\frac{1}{3}.$ This means that $u$ is unique. \end{pf}
\begin{thm}\label{thm36} Let $\mathcal{X},\mathcal{Y}$ be non-Archimedean $n$-normed spaces over a linear ordered non-Archimedean field $\mathcal{K}$
with ${\mathcal{C}}=\{3^n|\,n\in{\mathbb{Z}}\}$ and $f:\mathcal{X}\rightarrow\mathcal{Y}$ an interior preserving $n$-isometry. Then the barycenter of a triangle is $f$-invariant. \end{thm}
\begin{pf} Let $x_0,x_1$ and $x_2$ be elements of $\mathcal{X}$ satisfying that $x_0,x_1$ and $x_2$ are not $2$-collinear. It is obvious that the barycenter $\frac{x_0+x_1+x_2}{3}$ of a triangle $\triangle_{x_0x_1x_2}$ is an interior point of the triangle. By the assumption, $f(\frac{x_0+x_1+x_2}{3})$ is also the interior point of a triangle $\triangle_{f(x_0)f(x_1)f(x_2)}.$ Since $\dim{\mathcal{X}}>n-1,$ there exist $n-2$ elements $x_3,\cdots,x_n$ in $\mathcal{X}$ such that
$\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|$ is not zero. Since $f$ is an $n$-isometry, we have \begin{eqnarray*}
&&\|f(x_0)-f(x_1),f(x_0)-f\Big(\frac{x_0+x_1+x_2}{3}\Big),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|\\
&=&\|x_0-x_1,x_0-\frac{x_0+x_1+x_2}{3},x_0-x_3,\cdots,x_0-x_n\|\\
&=&|\frac{1}{3}|\,\|x_0-x_1,x_0-x_1+x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|x_0-x_1,x_0-x_2,x_0-x_3,\cdots,x_0-x_n\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|. \end{eqnarray*} Similarly, we obtain \begin{eqnarray*}
&&\|f(x_1)-f(x_2),f(x_1)-f\Big(\frac{x_0+x_1+x_2}{3}\Big),f(x_1)-f(x_3),\cdots,f(x_1)-f(x_n)\|\\
&=&\|f(x_2)-f(x_1),f(x_2)-f\Big(\frac{x_0+x_1+x_2}{3}\Big),f(x_2)-f(x_3),\cdots,f(x_2)-f(x_n)\|\\
&=&\|f(x_0)-f(x_1),f(x_0)-f(x_2),f(x_0)-f(x_3),\cdots,f(x_0)-f(x_n)\|. \end{eqnarray*} From Lemma~\ref{lm35}, we get $$f\Big(\frac{x_0+x_1+x_2}{3}\Big)=\frac{f(x_0)+f(x_1)+f(x_2)}{3}$$ for all $x_0,x_1,x_2\in\mathcal{X}$ satisfying that $x_0,x_1,x_2$ are not $2$-collinear. \end{pf}
\begin{thm} {\label{thm3.11}}
Let $\mathcal{X}$ and $\mathcal{Y}$ be non-Archimedean $n$-normed spaces over a linearly ordered non-Archimedean field $\mathcal{K}$ with ${\mathcal{C}}=\{3^n|\,n\in{\mathbb{Z}}\}.$ If $f:{\mathcal{X}}\rightarrow{\mathcal{Y}}$ is an interior preserving $n$-isometry, then $f(x)-f(0)$ is additive. \end{thm}
\begin{pf} Let $g(x):=f(x)-f(0).$ One can easily check that $g(0)=0$ and that $g$ is also an $n$-isometry. Using a method similar to that of~\cite[Theorem 2.4]{cck}, we can easily prove that $g$ is also an interior preserving mapping.
Now, let $x_0,x_1,x_2$ be elements of ${\mathcal{X}}$ that are not $2$-collinear. Since $g$ is an interior preserving $n$-isometry, by Theorem~\ref{thm36}, $$g\Big(\frac{x_0+x_1+x_2}{3}\Big)=\frac{g(x_0)+g(x_1)+g(x_2)}{3}$$ for any $x_0,x_1,x_2\in {\mathcal{X}}$ satisfying that $x_0,x_1,x_2$ are not $2$-collinear. From Lemma~\ref{lm36}, we obtain that the interior preserving $n$-isometry $g$ is additive, which completes the proof. \end{pf}
\end{document}
\begin{document}
\title{Identification and well-posedness in nonparametric models with independence conditions} \author{Victoria Zinde-Walsh\thanks{ The support of the Social Sciences and Humanities Research Council of Canada (SSHRC) and the \textit{Fonds\ qu\'{e}becois de la recherche sur la soci\'{e} t\'{e} et la culture} (FRQSC) is gratefully acknowledged. } \\
\\ McGill University and CIREQ\\ [email protected]} \maketitle \date{}
\pagebreak
\begin{center}
{\LARGE Abstract} \end{center}
This paper provides a nonparametric analysis for several classes of models, with cases such as classical measurement error, regression with errors in variables, and other models that may be represented in a form involving convolution equations. The focus here is on conditions for existence of solutions, nonparametric identification and well-posedness in the space $ S^{\ast }$ of generalized functions (tempered distributions). This space provides advantages over working in function spaces by relaxing assumptions and extending the results to include a wider variety of models, for example by not requiring existence of density. Classes of (generalized) functions for which solutions exist are defined; identification conditions, partial identification and its implications are discussed. Conditions for well-posedness are given and the related issues of plug-in estimation and regularization are examined.
\section{Introduction}
Many statistical and econometric models involve independence (or conditional independence) conditions that can be expressed via convolution. Examples are independent errors, classical measurement error and Berkson error, regressions involving data measured with these types of errors, common factor models and models that conditionally on some variables can be represented in similar forms, such as a nonparametric panel data model with errors conditionally on observables independent of the idiosyncratic component.
Although the convolution operator is well known, this paper is the first to provide explicit convolution equations for a wide range of models. In many cases the analysis in the literature takes Fourier transforms as the starting point, e.g. characteristic functions for distributions of random vectors (as in the famous Kotlyarski lemma, 1967). The emphasis here on convolution equations for the models provides the opportunity to state explicitly the nonparametric classes of functions defined by the model for which such equations hold, in particular, for densities, conditional densities and regression functions. The statistical model may give rise to different systems of convolution equations and may be over-identified in terms of convolution equations; some choices may be better suited to different situations. For example, in Section 2 two sets of convolution equations (4 and 4a in Table 1) are provided for the same classical measurement error model with two measurements; it turns out that one of those makes it possible to relax some independence conditions, while the other makes it possible to relax a support assumption in identification. Many of the convolution equations derived here are based on density-weighted conditional averages of the observables.
The main distinguishing feature is that here all the functions defined by the model are considered within the space of generalized functions $S^{\ast },$ the space of so-called tempered distributions (they will be referred to as generalized functions). This is the dual space, the space of linear continuous functionals, on the space $S$ of well-behaved functions: the functions in $S$ are infinitely differentiable and all the derivatives go to zero at infinity faster than any power. An important advantage of assuming the functions are in the space of generalized functions is that in that space any distribution function has a density (generalized function) that continuously depends on the distribution function, so that distributions with mass points and fractal measures have well-defined generalized densities.
Any regular function majorized by a polynomial belongs to $S^{\ast }$; this includes polynomially growing regression functions and binary choice regression as well as many conditional density functions. Another advantage is that Fourier transform is an isomorphism of this space, and thus the usual approaches in the literature that employ characteristic functions are also included. Details about the space $S^{\ast }$ are in Schwartz (1966) and are summarized in Zinde-Walsh (2012).
The model classes examined here lead to convolution equations that are similar to each other in form; the main focus of this paper is on conditions for existence, identification, partial identification and well-posedness. Existence and uniqueness of solutions to some systems of convolution equations in the space $S^{\ast }$ were established in Zinde-Walsh (2012). Those results are used here to state identification in each of the models. Identification requires examining the support of the functions and generalized functions that enter into the models; if the support excludes an open set, then identification fails at least for some unknown functions in the model; however, isolated points or lower-dimensional manifolds where, e.g., the characteristic function takes zero values (an example is the uniform distribution) do not preclude identification in some of the models. This point was made in e.g. Carrasco and Florens (2010) and Evdokimov and White (2011) and is expressed here in the context of operating in $S^{\ast }.$ Support restrictions on the solution may imply that only partial identification is available. However, even in partially identified models some features of interest (see, e.g., Matzkin, 2007) could be identified, so that some questions can be addressed even in the absence of full identification. A common example of incomplete identification which nevertheless provides important information is Gaussian deconvolution of a blurred image of a car obtained from a traffic camera; the filtered image is still not very good, but the licence plate number is visible for forensics.
Well-posedness conditions are emphasized here. The well-known definition by Hadamard (1923) defines well-posedness via three conditions: existence of a solution, uniqueness of the solution and continuity of the solution in some suitable topology. The first two are essentially identification. Since here the functions are defined in subclasses of $S^{\ast },$ continuity is considered in the topology of this generalized functions space. This topology is weaker than the topologies in function spaces, such as the uniform or $L_{p}$ topologies; thus differentiating the distribution function to obtain a density is a well-posed problem in $S^{\ast };$ by contrast, even in the class of absolutely continuous distributions with the uniform metric, where identification of the density in the space $L_{1}$ holds, well-posedness does not obtain (see the discussion in Zinde-Walsh, 2011). But even though well-posedness obtains more widely in the weaker topology of $S^{\ast },$ for the problems considered here some additional restrictions may be required for well-posedness.
Well-posedness is important for plug-in estimation since if the estimators are in a class where the problem is well-posed they are consistent, and conversely, if well-posedness does not hold consistency will fail for some cases. Lack of well-posedness can be remedied by regularization, but the price is often more extensive requirements on the model and slower convergence. For example, in deconvolution (see e.g. Fan, 1991, and most other papers cited here) spectral cut-off regularization is utilized; it crucially depends on knowing the rate of the decay at infinity of the density.
Often non-parametric identification is used to justify parametric or semi-parametric estimation; the claim here is that well-posedness should be an important part of this justification. The reason is that in estimating a possibly misspecified parametric model, the misspecified functions of the observables belong to a nonparametric neighborhood of the true functions; if the model is non-parametrically identified, the unique solution to the true model exists, but without well-posedness the solution to the parametric model and the solution to the true one may be far apart.
For deconvolution, An and Hu (2012) demonstrate well-posedness in spaces of integrable density functions when the measurement error has a mass point; this may happen in surveys when the probability of truthful reporting is non-zero. The conditions for well-posedness here are provided in $S^{\ast },$ which then additionally does not exclude mass points in the distribution of the mismeasured variable itself; there is some empirical evidence of mass points in earnings and income. The results here show that in $S^{\ast }$ well-posedness holds more generally: as long as the error distribution is not super-smooth.
The solutions for the systems of convolution equations can be used in plug-in estimation. Properties of nonparametric plug-in estimators are based on results on stochastic convergence in $S^{\ast }$ for the solutions that are stochastic functions expressed via the estimators of the known functions of the observables.
Section 2 of the paper enumerates the classes of models considered here. They are divided into three groups: 1. measurement error models with classical and Berkson errors and possibly an additional measurement, and common factor models that transform into those models; 2. nonparametric regression models with classical measurement and Berkson errors in variables; 3. measurement error and regression models with conditional independence. The corresponding convolution equations and systems of equations are provided and discussed. Section 3 is devoted to describing the solutions to the convolution equations of the models. The main mathematical aspect of the different models is that they require solving equations of a similar form. Section 4 provides a table of identified solutions and discusses partial identification and well-posedness. Section 5 examines plug-in estimation. A brief conclusion follows.
\section{Convolution equations in classes of models with independence or conditional independence}
This section derives systems of convolution equations for some important classes of models. The first class of model is measurement error models with some independence (classical or Berkson error) and possibly a second measurement; the second class is regression models with classical or Berkson type error; the third is models with conditional independence. For the first two classes the distributional assumptions for each model and the corresponding convolution equations are summarized in tables; it is indicated which of the functions are known and which unknown; a brief discussion of each model and derivation of the convolution equations follows. The last part of this section discusses convolution equations for two specific models with conditional independence; one is a panel data model studied by Evdokimov (2011), the other a regression model where independence of measurement error of some regressors obtains conditionally on a covariate.
The general assumption made here is that all the functions in the convolution equations belong to the space of generalized functions $S^{\ast }.$
\textbf{Assumption 1. }\textit{All the functions defined by the statistical model are in the space of generalized functions }$S^{\ast }.$
This space of generalized function includes functions from most of the function classes that are usually considered, but allows for some useful generalizations. The next subsection provides the necessary definitions and some of the implications of working in the space $S^{\ast }.$
\subsection{The space of generalized functions $S^{\ast }.$}
The space $S^{\ast }$ is the dual space, i.e. the space of continuous linear functionals on the space $S$ of functions. The theory of generalized functions is in Schwartz (1966); relevant details are summarized in Zinde-Walsh (2012). In this subsection the main definitions and properties are reproduced.
Recall the definition of $S.$
For any vector of non-negative integers $m=(m_{1},...m_{d})$ and vector $ t\in R^{d}$ denote by $t^{m}$ the product $t_{1}^{m_{1}}...t_{d}^{m_{d}}$ and by $\partial ^{m}$ the differentiation operator $\frac{\partial ^{m_{1}} }{\partial x_{1}^{m_{1}}}...\frac{\partial ^{m_{d}}}{\partial x_{d}^{m_{d}}} ; $ $C_{\infty }$ is the space of infinitely differentiable (real or complex-valued) functions on $R^{d}.$ The space $S\subset C_{\infty }$ of test functions is defined as: \begin{equation*}
S=\left\{ \psi \in C_{\infty }(R^{d}):|t^{l}\partial ^{k}\psi (t)|=o(1)\text{ as }t\rightarrow \infty \right\} , \end{equation*} for any $k=(k_{1},...k_{d}),l=(l_{1},...l_{d}),$ where $k=(0,...0)$ corresponds to the function itself, $t\rightarrow \infty $ coordinate-wise; thus the functions in $S$ go to zero at infinity faster than any power, as do their derivatives; they are rapidly decreasing functions. A sequence in $S$ converges if in every bounded region each $\left\vert t^{l}\partial ^{k}\psi (t)\right\vert $ converges uniformly.
Then in the dual space $S^{\ast }$ any $b\in S^{\ast }$ represents a linear functional on $S;$ the value of this functional for $\psi \in S$ is denoted by $\left( b,\psi \right) .$ When $b$ is an ordinary (point-wise defined) real-valued function, such as a density of an absolutely continuous distribution or a regression function, the value of the functional on real-valued $\psi $ defines it and is given by \begin{equation*} \left( b,\psi \right) =\int b(x)\psi (x)dx. \end{equation*} If $b$ is a characteristic function it may be complex-valued, then the value of the functional $b$ applied to $\psi \in S$ where $S$ is the space of complex-valued functions, is \begin{equation*} \left( b,\psi \right) =\int b(x)\overline{\psi (x)}dx, \end{equation*} where overbar denotes complex conjugate. The integrals are taken over the whole space $R^{d}.$
The generalized functions in the space $S^{\ast }$ are differentiable and the differentiation operator is continuous; Fourier transforms and their inverses are defined for all $b\in S^{\ast },$ and the Fourier transform operator is a (continuous) isomorphism of the space $S^{\ast }.$ However, convolutions and products are not defined for all pairs of elements of $S^{\ast },$ unlike, say, in the space $L_{1};$ on the other hand, in $L_{1}$ differentiation is not defined and not every distribution has a density that is an element of $L_{1}.$
Assumption 1 places no restrictions on the distributions, since in $S^{\ast } $ any distribution function is differentiable and the differentiation operator is continuous. The advantage of not restricting distributions to be absolutely continuous is that mass points need not be excluded; distributions representing fractal measures such as the Cantor distribution are also allowed. This means that mixtures of discrete and continuous distributions e.g. such as those examined by An and Hu (2012) for measurement error in survey responses, some of which may be error-contaminated, but some may be truthful leading to a mixture with a mass point distribution are included. Moreover, in $S^{\ast }$ the case of mass points in the distribution of the mismeasured variable is also easily handled; in the literature such mass points are documented for income or work hours distributions in the presence of rigidities such as unemployment compensation rules (e.g. Green and Riddell, 1997). Fractal distributions may arise in some situations, e.g. Karlin's (1958) example of the equilibrium price distribution in an oligopolistic game.
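A standard one-line illustration (textbook material, not specific to this paper) of a generalized density for a non-absolutely-continuous distribution: for the Heaviside distribution function $H$ of a unit mass at $0,$ the generalized derivative satisfies, for every $\psi \in S,$

```latex
\begin{equation*}
(H^{\prime },\psi )=-(H,\psi ^{\prime })=-\int_{0}^{\infty }\psi ^{\prime
}(x)dx=\psi (0)=(\delta ,\psi ),
\end{equation*}
```

so the mass-point distribution has the $\delta $-function as its generalized density, and this differentiation is a continuous operation in $S^{\ast }.$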
For regression functions the assumption $g\in S^{\ast }$ implies that growth at infinity is allowed but is somewhat restricted. In particular for any ordinary point-wise defined function $b\in S^{\ast }$ the condition \begin{equation} \int ...\int \Pi _{i=1}^{d}\left( (1+t_{i}^{2}\right) ^{-1})^{m_{i}}\left\vert b(t)\right\vert dt_{1}...dt_{d}<\infty , \label{condition} \end{equation} needs to be satisfied for some non-negative valued $m_{1},...,m_{d}.$ If a locally integrable function $g$ is such that its growth at infinity is majorized by a polynomial, then $b\equiv g$ satisfies this condition. While restrictive this still widens the applicability of many currently available approaches. For example in Berkson regression the common assumption is that the regression function be absolutely integrable (Meister, 2009); this excludes binary choice, linear and polynomial regression functions that belong to $S^{\ast }$ and satisfy Assumption 1. Also, it is advantageous to allow for functions that may not belong to any ordinary function classes, such as sums of $\delta -$functions ("sum of peaks") or (mixture) cases with sparse parts of support, such as isolated points; such functions are in $ S^{\ast }.$ Distributions with mass points can arise when the response to a survey questions may be only partially contaminated; regression "sum of peaks" functions arise e.g. in spectroscopy and astrophysics where isolated point supports are common.
\subsection{Measurement error and related models}
Current reviews for measurement error models are in Carrol et al, (2006), Chen et al (2011), Meister (2009).
Here and everywhere below the variables $x,z,x^{\ast },u,u_{x}$ are assumed to be in $R^{d};$ $y,v$ are in $R^{1};$ all the integrals are over the corresponding space; the density of any variable $v$ is denoted by $f_{v};$
independence is denoted by $\bot $; the expectation of $x$ conditional on $z$ is denoted by $E(x|z).$
\subsubsection{List of models and corresponding equations}
The table below lists various models and corresponding convolution equations. Many of the equations are derived from density weighted conditional expectations of the observables.
Recall that for two functions, $f$ and $g$ convolution $f\ast g$ is defined by \begin{equation*} (f\ast g)\left( x\right) =\int f(w)g(x-w)dw; \end{equation*} this expression is not always defined. A similar expression (with some abuse of notation since generalized functions are not defined pointwise) may hold for generalized functions in $S^{\ast };$ similarly, it is not always defined. With Assumption 1 for the models considered here we show that convolution equations given in the Tables below hold in $S^{\ast }.$
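As a finite-dimensional sketch (purely illustrative, with made-up probability vectors), the model-1 equation $f_{x^{\ast }}\ast f_{u}=f_{z}$ reduces for discrete distributions to the discrete convolution of probability mass functions:

```python
import numpy as np

# Discrete sketch: if z = x* + u with x* independent of u, the pmf of z
# is the convolution of the two pmfs (discrete analogue of f_{x*} * f_u = f_z).
p_xstar = np.array([0.2, 0.5, 0.3])   # illustrative pmf of x* on {0, 1, 2}
p_u = np.array([0.6, 0.4])            # illustrative pmf of u on {0, 1}

p_z = np.convolve(p_xstar, p_u)       # pmf of z on {0, 1, 2, 3}
print(p_z)                            # [0.12 0.38 0.38 0.12]
```

The continuous and generalized-function cases replace the finite sum by the convolution integral or its $S^{\ast }$ analogue.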
\begin{center} \textbf{Table 1.} Measurement error models: 1. Classical measurement error; 2. Berkson measurement error; 3. Classical measurement error with additional observation (with zero conditional mean error); 4., 4a. Classical error with additional observation (full independence).
\begin{tabular}{|c|c|c|c|c|} \hline Model & $ \begin{array}{c} \text{Distributional} \\ \text{assumptions} \end{array} $ & $ \begin{array}{c} \text{Convolution } \\ \text{equations} \end{array} $ & $ \begin{array}{c} \text{Known} \\ \text{ functions} \end{array} $ & $ \begin{array}{c} \text{Unknown} \\ \text{ functions} \end{array} $ \\ \hline
\multicolumn{1}{|l|}{$\ \ \ $1.} & \multicolumn{1}{|l|}{$\ \ \ \ \ \ \ \begin{array}{c} z=x^{\ast }+u \\ x^{\ast }\bot u \end{array}
$} & \multicolumn{1}{|l|}{$\ \ \ \ \ \ \ f_{x^{\ast }}\ast f_{u}=f_{z}$} &
\multicolumn{1}{|l|}{$\ \ \ \ \ \ \ f_{z},f_{u}$} & \multicolumn{1}{|l|}{$\ \ \ \ \ \ \ f_{x^{\ast }}$} \\ \hline 2. & $\ \begin{array}{c} z=x^{\ast }+u \\ z\bot u \end{array} $ & $f_{z}\ast f_{-u}=f_{x^{\ast }}$ & $f_{z},f_{u}$ & $f_{x^{\ast }}$ \\ \hline
\multicolumn{1}{|l|}{$\ $\ 3.} & \multicolumn{1}{|l|}{$\ \ \begin{array}{c} z=x^{\ast }+u; \\ x=x^{\ast }+u_{x} \\ x^{\ast }\bot u; \\
E(u_{x}|x^{\ast },u)=0; \\ E\left\Vert z\right\Vert <\infty ;E\left\Vert u\right\Vert <\infty . \end{array}
$} & \multicolumn{1}{|l|}{$ \begin{array}{c} f_{x^{\ast }}\ast f_{u}=f_{z}; \\ h_{k}\ast f_{u}=w_{k}, \\ \text{with }h_{k}(x)\equiv x_{k}f_{x^{\ast }}(x); \\ k=1,2...d \end{array}
$} & \multicolumn{1}{|l|}{$ \begin{array}{c} f_{z},w_{k}, \\ k=1,2...d \end{array}
$} & \multicolumn{1}{|l|}{$f_{x^{\ast }}$; $f_{u}$} \\ \hline 4. & $ \begin{array}{c} z=x^{\ast }+u; \\ x=x^{\ast }+u_{x};x^{\ast }\bot u; \\ x^{\ast }\bot u_{x};E(u_{x})=0; \\ u\bot u_{x}; \\ E\left\Vert z\right\Vert <\infty ;E\left\Vert u\right\Vert <\infty . \end{array} $ & $ \begin{array}{c} f_{x^{\ast }}\ast f_{u}=f_{z}; \\ h_{k}\ast f_{u}=w_{k}; \\ f_{x^{\ast }}\ast f_{u_{x}}=f_{x}; \\ \text{with }h_{k}(x)\equiv x_{k}f_{x^{\ast }}(x); \\ k=1,2...d \end{array} $ & $ \begin{array}{c} f_{z}\text{, }f_{x};w;w_{k} \\ k=1,2...d \end{array} $ & $f_{x^{\ast }};f_{u},$ $f_{u_{x}}$ \\ \hline 4a. & $ \begin{array}{c} \text{Same model as 4.,} \\ \text{alternative} \\ \text{equations:} \end{array} $ & $ \begin{array}{c} f_{x^{\ast }}\ast f_{u}=f_{z}; \\ f_{u_{x}}\ast f_{-u}=w; \\ h_{k}\ast f_{-u}=w_{k}, \\ \text{with }h_{k}(x)\equiv x_{k}f_{u_{x}}(x); \\ k=1,2...d \end{array} $ & --"-- & --"-- \\ \hline \end{tabular} \end{center}
Notation: $k=1,2,...,d;$ in 3. and 4., $w_{k}=E(x_{k}f_{z}(z)|z);$ in 4a, $w=f_{z-x}$ and $w_{k}=E(x_{k}w(z-x)|\left( z-x\right) ).$
\textbf{Theorem 1.} \textit{Under Assumption 1 for each of the models 1-4 the corresponding convolution equations of Table 1 hold in the generalized functions space }$S^{\ast }$\textit{.}
The proof is in the derivations of the following subsection.
Assumption 1 requires considering all the functions defined by the model as elements of the space $S^{\ast },$ but if the functions (e.g. densities, the conditional moments) exist as regular functions, the convolutions are just the usual convolutions of functions, on the other hand, the assumption allows to consider convolutions for cases where distributions are not absolutely continuous.
\subsubsection{Measurement error models and derivation of the corresponding equations.}
1. The classical measurement error model.
The case of the classical measurement error is well known in the literature. The concept of error independent of the variable of interest is applicable to many problems in seismology, image processing, where it may be assumed that the source of the error is unrelated to the signal. In e.g. Cunha et al. (2010) it is assumed that some constructed measurement of ability of a child derived from test scores fits into this framework. As is well-known in regression a measurement error in the regressor can result in a biased estimator (attenuation bias).
Typically the convolution equation \begin{equation*} f_{x^{\ast }}\ast f_{u}=f_{z} \end{equation*} is written for density functions when the distribution function is absolutely continuous. The usual approach to possible non-existence of density avoids considering the convolution and focuses on the characteristic functions. Since density always exists as a generalized function and convolution for such generalized functions is always defined it is possible to write convolution equations in $S^{\ast }$ for any distributions in model 1. The error distribution (and thus generalized density $f_{u})$ is assumed known thus the solution can be obtained by "deconvolution" (Carrol et al (2006), Meister (2009), the review of Chen et al (2011) and papers by Fan (1991), Carrasco and Florens(2010) among others).
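A minimal numerical sketch of deconvolution on a finite grid, assuming a known error distribution (the grid, the probability vectors and the cut-off level $10^{-8}$ are illustrative choices, not part of the model): Fourier coefficients of $f_{z}$ are divided by those of $f_{u}$ wherever the latter are not negligible, a crude form of the spectral cut-off regularization mentioned in the introduction.

```python
import numpy as np

# Known error pmf and observed pmf of z on a common circular grid of size 4.
p_xstar = np.array([0.2, 0.5, 0.3, 0.0])   # true pmf of x* (to be recovered)
p_u = np.array([0.6, 0.4, 0.0, 0.0])       # known pmf of the error u
p_z = np.real(np.fft.ifft(np.fft.fft(p_xstar) * np.fft.fft(p_u)))

# Deconvolution: divide Fourier coefficients, zeroing near-vanishing ones.
phi_z, phi_u = np.fft.fft(p_z), np.fft.fft(p_u)
keep = np.abs(phi_u) > 1e-8                # crude spectral cut-off
phi_x = np.where(keep, phi_z / np.where(keep, phi_u, 1.0), 0.0)
p_recovered = np.real(np.fft.ifft(phi_x))
print(np.round(p_recovered, 6))            # close to p_xstar
```

Here the characteristic (Fourier) coefficients of $u$ are bounded away from zero, so recovery is essentially exact; when they decay or vanish, the cut-off discards the corresponding frequencies and only a regularized solution is obtained.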
2. The Berkson error model.
For Berkson error the convolution equation is also well-known. Berkson error of measurement arises when the measurement is somehow controlled and the error is caused by independent factors, e.g. amount of fertilizer applied is given but the absorption into soil is partially determined by factors independent of that, or students' grade distribution in a course is given in advance, or distribution of categories for evaluation of grant proposals is determined by the granting agency. The properties of Berkson error are very different from that of classical error of measurement, e.g. it does not lead to attenuation bias in regression; also in the convolution equation the unknown function is directly expressed via the known ones when the distribution of Berkson error is known. For discussion see Carrol et al (2006), Meister (2009), and Wang (2004).
Models 3. and 4. The classical measurement error with another observation.
In 3. and 4. in the classical measurement error model the error distribution is not known, but another observation for the mis-measured variable is available; this case has been treated in the literature and is reviewed in Carrol et al (2006) and Chen et al (2011). In econometrics such models were examined by Li and Vuong (1998), Li (2002), Schennach (2004) and subsequently others (see e.g. the review by Chen et al, 2011). In case 3 the additional observation contains an error that is not necessarily independent, but has conditional mean zero.
Note that here the multivariate case is treated where arbitrary dependence for the components of vectors is allowed. For example, it may be of interest to consider the vector of not necessarily independent latent abilities or skills as measured by different sections of an IQ test, or the GRE scores.
Extra measurements provide additional equations. Consider for any $k=1,...,d$ the function of observables $w_{k}$ defined by the density-weighted expectation $E(x_{k}f_{z}(z)|z)$ as a generalized function; it is then determined by the values of the functional $\left( w_{k},\psi \right) $ for every $\psi \in S.$
Note that by assumption $E(x_{k}f_{z}(z)|z)=E(x_{k}^{\ast }f_{z}(z)|z);$ then for any $\psi \in S$ the value of the functional:
\begin{eqnarray*}
(E(x_{k}^{\ast }f_{z}(z)|z),\psi ) &=&\int \left[ \int x_{k}^{\ast
}f_{x^{\ast },z}(x^{\ast },z)dx^{\ast }\right] \psi (z)dz=\int \int
x_{k}^{\ast }f_{x^{\ast },z}(x^{\ast },z)\psi (z)dx^{\ast }dz \\
&=&\int \int x_{k}^{\ast }\psi (x^{\ast }+u)f_{x^{\ast },u}(x^{\ast
},u)dx^{\ast }du \\
&=&\int \int x_{k}^{\ast }f_{x^{\ast }}(x^{\ast })f_{u}(u)\psi (x^{\ast
}+u)dx^{\ast }du=(h_{k}\ast f_{u},\psi ). \end{eqnarray*}
The third expression is a double integral which always exists if $E\left\Vert x^{\ast }\right\Vert <\infty $; this is a consequence of boundedness of the expectations of $z$ and $u.$ The fourth is a result of the change of variables from $\left( x^{\ast },z\right) $ to $\left( x^{\ast },u\right) ,$ the fifth uses independence of $x^{\ast }$ and $u,$ and the sixth expression follows from the corresponding expression for the convolution of generalized functions (Schwartz, 1967, p.246). The conditions of model 3 are not sufficient to identify the distribution of $u_{x};$ it is treated as a nuisance part in model 3.
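The identity $h_{k}\ast f_{u}=w_{k}$ derived above can be illustrated by simulation in the scalar discrete case (all distributions below are made-up illustrative choices; $u_{x}$ is mean-zero noise as in model 3, so $E(xf_{z}(z)|z)=E(x^{\ast }f_{z}(z)|z)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
p_xstar, p_u = np.array([0.2, 0.5, 0.3]), np.array([0.6, 0.4])
x_star = rng.choice(3, size=n, p=p_xstar)   # x* on {0, 1, 2}
u = rng.choice(2, size=n, p=p_u)            # u on {0, 1}, independent of x*
z = x_star + u
x = x_star + rng.normal(0.0, 1.0, size=n)   # second measurement, E(u_x|.) = 0

# Left side: (h * f_u)(t) with h(a) = a * f_{x*}(a).
lhs = np.convolve(np.arange(3) * p_xstar, p_u)

# Right side: Monte Carlo estimate of w(t) = E(x | z = t) * f_z(t).
p_z = np.convolve(p_xstar, p_u)
rhs = np.array([x[z == t].mean() * p_z[t] for t in range(4)])
print(np.max(np.abs(lhs - rhs)))            # small (Monte Carlo error only)
```

The two sides agree up to simulation error, which is the content of the derivation above in this discrete special case.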
The model in 4 with all the errors and mis-measured variable independent of each other was investigated by Kotlyarski (1967) who worked with the joint characteristic function. In 4 consider in addition to the equations written for model 3 another that uses the independence between $x^{\ast }$ and $ u_{x} $ and involves $f_{u_{x}}.$
In representation 4a the convolution equations involving the density $ f_{u_{x}}$ are obtained by applying the derivations that were used here for the model in 3.: \begin{equation*} \begin{array}{c} z=x^{\ast }+u; \\ x=x^{\ast }+u_{x}, \end{array} \end{equation*} to the model in 4 with $x-z$ playing the role of $z,$ $u_{x}$ playing the role of $x^{\ast },$ $-u$ playing the role of $u,$ and $x^{\ast }$ playing the role of $u_{x}.$ The additional convolution equations arising from the extra independence conditions provide extra equations and involve the unknown density $f_{u_{x}}.$ This representation leads to a generalization of Kotlyarski's identification result similar to that obtained by Evdokimov (2011) who used the joint characteristic function. The equations in 4a make it possible to identify $f_{u},f_{u_{x}}$ ahead of $f_{x^{\ast }};$ for identification this will require less restrictive conditions on the support of the characteristic function for $x^{\ast }.$
\subsubsection{Some extensions}
\textbf{A. Common factor models.}
Consider a model $\tilde{z}=AU,$ with $A$ a matrix of known constants, $\tilde{z}$ a $m\times 1$ vector of observables, and $U$ a vector of unobservable variables. Usually, $A$ is a block matrix and $AU$ can be represented via a combination of mutually independent vectors. Then without loss of generality consider the model \begin{equation} \tilde{z}=\tilde{A}x^{\ast }+\tilde{u}, \label{factormod} \end{equation} where $\tilde{A}$ is a $m\times d$ known matrix of constants, $\tilde{z}$ is a $m\times 1$ vector of observables, unobserved $x^{\ast }$ is $d\times 1$ and unobserved $\tilde{u}$ is $m\times 1.$ If the model $\left( \ref{factormod}\right) $ can be transformed into model 3 considered above, then $x^{\ast }$ will be identified whenever identification holds for model 3. Once some components are identified, identification of other factors can be considered sequentially.
\textbf{Lemma 1. }\textit{If in }$\left( \ref{factormod}\right) $ \textit{ the vectors }$x^{\ast }$\textit{\ and }$\tilde{u}$\textit{\ are independent and all the components of the vector }$\tilde{u}$\textit{\ are mean independent of each other and are mean zero and the matrix }$A$ \textit{can be partitioned after possibly some permutation of rows as }$\left( \begin{array}{c} A_{1} \\ A_{2} \end{array} \right) $\textit{\ with }$rankA_{1}=rankA_{2}=d,$\textit{\ then the model }$ \left( \ref{factormod}\right) $\textit{\ implies model 3.}
Proof. Define $z=T_{1}\tilde{z},$ where conformably to the partition of $A$ the partitioned $T_{1}=\left( \begin{array}{c} \tilde{T}_{1} \\ 0 \end{array} \right) ,$ with $\tilde{T}_{1}A_{1}x^{\ast }=x^{\ast }$ (such a $\tilde{T} _{1}$ always exists by the rank condition); then $z=x^{\ast }+u,$ where $ u=T_{1}\tilde{u}$ is independent of $x^{\ast }.$ Next define $T_{2}=\left( \begin{array}{c} 0 \\ \tilde{T}_{2} \end{array} \right) $ similarly with $\tilde{T}_{2}A_{2}x^{\ast }=x^{\ast }$.
Then $x=T_{2}\tilde{z}$ is such that $x=x^{\ast }+u_{x},$ where $u_{x}=T_{2}\tilde{u}$ does not include any components from $u.$ This implies $E(u_{x}|x^{\ast },u)=0.$ Model 3 holds. $\blacksquare $
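A numerical sketch of the construction in the proof (the dimensions and random matrices below are arbitrary illustrative choices): left inverses of the two full-rank blocks of the matrix recover $x^{\ast }$ plus errors built from disjoint components of $\tilde{u}.$

```python
import numpy as np

rng = np.random.default_rng(1)
d, m1, m2 = 2, 3, 2
A1 = rng.standard_normal((m1, d))          # first block, rank d (a.s.)
A2 = rng.standard_normal((m2, d))          # second block, rank d (a.s.)
T1t = np.linalg.pinv(A1)                   # tilde{T}_1 with T1t @ A1 = I_d
T2t = np.linalg.pinv(A2)                   # tilde{T}_2 with T2t @ A2 = I_d

x_star = rng.standard_normal(d)
u_tilde = rng.standard_normal(m1 + m2)
z_tilde = np.vstack([A1, A2]) @ x_star + u_tilde

z = T1t @ z_tilde[:m1]                     # z = x* + u,   u   = T1t @ u_tilde[:m1]
x = T2t @ z_tilde[m1:]                     # x = x* + u_x, u_x = T2t @ u_tilde[m1:]
print(np.allclose(z, x_star + T1t @ u_tilde[:m1]),
      np.allclose(x, x_star + T2t @ u_tilde[m1:]))
```

Since $u$ and $u_{x}$ are built from disjoint components of $\tilde{u},$ the mean-independence conditions of model 3 carry over as stated in the lemma.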
Here dependence in components of $x^{\ast }$ is arbitrary. A general structure with subvectors of $U$ independent of each other but with components which may be only mean independent (as $\tilde{u}$ here) or arbitrarily dependent (as in $x^{\ast })$ is examined by Ben-Moshe (2012). Models of linear systems with full independence were examined by e.g. Li and Vuong (1998). These models lead to systems of first-order differential equations for the characteristic functions.
It may be that no independent components $x^{\ast }$ and $\tilde{u}$ exist for which the conditions of Lemma 1 are satisfied. Bonhomme and Robin (2010) proposed considering products of the observables to increase the number of equations in the system and analyzed conditions for identification; Ben-Moshe (2012) provided necessary and sufficient conditions under which this strategy leads to identification when there may be some dependence.
\textbf{B. Error correlations with more observables.}
The extension to non-zero $E(u_{x}|z)$ in model 3 is trivial if this expectation is a known function. A more interesting case results if the errors $u_{x}$ and $u$ are related, e.g. \begin{equation*} u_{x}=\rho u+\eta ;\eta \bot z. \end{equation*}
With an unknown parameter (or function of observables) $\rho ,$ if more observations are available, more convolution equations can be written to identify all the unknown functions. Suppose that additionally an observation $y$ is available with \begin{eqnarray*} y &=&x^{\ast }+u_{y}; \\ u_{y} &=&\rho u_{x}+\eta _{1};\eta _{1}\bot \eta ,z. \end{eqnarray*} Without loss of generality consider the univariate case and define $w_{x}=E(xf(z)|z);w_{y}=E(yf(z)|z).$ Then the system of convolution equations expands to
\begin{equation} \left\{ \begin{array}{ccc} f_{x^{\ast }}\ast f_{u} & & =w; \\ (1-\rho )h_{x^{\ast }}\ast f_{u} & +\rho zf(z) & =w_{x}; \\ (1-\rho ^{2})h_{x^{\ast }}\ast f_{u} & +\rho ^{2}zf(z) & =w_{y}. \end{array} \right. \label{ar(1)} \end{equation}
The three equations have three unknown functions, $f_{x^{\ast }},f_{u}$ and $\rho .$ Assuming that the support of $\rho $ does not include the point 1, $\rho $ can be expressed as the solution of a linear algebraic equation derived from the two equations in $\left( \ref{ar(1)}\right) $ that include $\rho :$ \begin{equation*} \rho =(w_{x}-zf(z))^{-1}\left( w_{y}-w_{x}\right) . \end{equation*}
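A pointwise algebraic check of this formula; the values taken below for $h_{x^{\ast }}\ast f_{u}$ and $zf(z)$ at one point are hypothetical placeholders.

```python
# Pointwise algebraic check of rho = (w_x - z f(z))^{-1} (w_y - w_x).
# The values of H = (h_{x*} * f_u) and z f(z) at one point are hypothetical.
H, zf, rho = 0.7, 1.3, 0.4           # rho in the admissible range (excludes 1)

w_x = (1 - rho) * H + rho * zf       # second equation of the system
w_y = (1 - rho**2) * H + rho**2 * zf # third equation

rho_hat = (w_x - zf) ** -1 * (w_y - w_x)
assert abs(rho_hat - rho) < 1e-12
```

Indeed $w_{x}-zf(z)=(1-\rho )(H-zf(z))$ and $w_{y}-w_{x}=\rho (1-\rho )(H-zf(z)),$ so the ratio recovers $\rho $ whenever $\rho \neq 1$ and $H\neq zf(z).$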
\subsection{Regression models with classical and Berkson errors and the convolution equations}
\subsubsection{The list of models}
The table below provides several regression models and the corresponding convolution equations involving density weighted conditional expectations.
\begin{center} Table 2. Regression models: 5. Regression with classical measurement error and an additional observation; 6. Regression with Berkson error ($x,y,z$ are observable); 7. Regression with zero mean measurement error and Berkson instruments. \end{center}
\begin{tabular}{|c|c|c|c|c|} \hline Model & $ \begin{array}{c} \text{Distributional} \\ \text{assumptions} \end{array} $ & $ \begin{array}{c} \text{Convolution } \\ \text{equations} \end{array} $ & $ \begin{array}{c} \text{Known} \\ \text{ functions} \end{array} $ & $ \begin{array}{c} \text{Unknown} \\ \text{ functions} \end{array} $ \\ \hline
\multicolumn{1}{|l|}{$\ \ $5.} & \multicolumn{1}{|l|}{$\ \ \begin{array}{c} y=g(x^{\ast })+v \\ z=x^{\ast }+u; \\ x=x^{\ast }+u_{x} \\ x^{\ast }\bot u;E(u)=0; \\
E(u_{x}|x^{\ast },u)=0; \\
E(v|x^{\ast },u,u_{x})=0. \end{array}
$} & \multicolumn{1}{|l|}{$ \begin{array}{c} f_{x^{\ast }}\ast f_{u}=f_{z}; \\ \left( gf_{x^{\ast }}\right) \ast f_{u}=w, \\ h_{k}\ast f_{u}=w_{k}; \\ \text{with }h_{k}(x)\equiv x_{k}g(x)f_{x^{\ast }}(x); \\ k=1,2...d \end{array}
$} & \multicolumn{1}{|l|}{$f_{z};$ $w;w_{k}$} & \multicolumn{1}{|l|}{$ f_{x^{\ast }}$; $f_{u}$; $g.$} \\ \hline
\multicolumn{1}{|l|}{$\ \ $6.} & \multicolumn{1}{|l|}{$ \begin{array}{c} y=g(x)+v \\
z=x+u;E(v|z)=0; \\ z\bot u;E(u)=0. \end{array}
$} & \multicolumn{1}{|l|}{$\ \ \ \ \ \begin{array}{c} f_{x}=f_{-u}\ast f_{z}; \\ g\ast f_{-u}=w \end{array}
$} & \multicolumn{1}{|l|}{$\ f_{z};f_{x},w$} & \multicolumn{1}{|l|}{$f_{u}$; $g.$} \\ \hline
\multicolumn{1}{|l|}{$\ \ $7.} & \multicolumn{1}{|l|}{$\ \ \begin{array}{c} y=g(x^{\ast })+v; \\ x=x^{\ast }+u_{x}; \\ z=x^{\ast }+u;z\bot u; \\
E(v|z,u,u_{x})=0; \\
E(u_{x}|z,v)=0. \end{array}
$} & \multicolumn{1}{|l|}{$ \begin{array}{c} g\ast f_{u}=w; \\ h_{k}\ast f_{u}=w_{k}, \\ \text{with }h_{k}(x)\equiv x_{k}g(x); \\ k=1,2...d \end{array}
$} & \multicolumn{1}{|l|}{$w,w_{k}$} & \multicolumn{1}{|l|}{$f_{u}$; $g.$} \\ \hline \end{tabular}
Notes. Notation: $k=1,2,\ldots ,d;$ in model 5, $w=E(yf_{z}(z)|z);w_{k}=E(x_{k}f_{z}(z)|z);$ in model 6, $w=E(y|z);$ in model 7, $w=E(y|z);w_{k}=E(x_{k}y|z).$
\textbf{Theorem 2.} \textit{Under Assumption 1 for each of the models 5-7 the corresponding convolution equations hold.}
The proof is in the derivations of the next subsection.
\subsubsection{\protect
Discussion of the regression models and derivation of the convolution equations.}
5. The nonparametric regression model with classical measurement error and an additional observation.
This type of model was examined by Li (2002) and Li and Hsiao (2004); the convolution equations derived here provide a convenient representation. Models of this type were often considered in semiparametric settings. Butucea and Taupin (2008) (extending the earlier approach of Taupin, 2001) consider a regression function known up to a finite-dimensional parameter, with the mismeasured variable observed with independent error whose distribution is known. Under the latter condition model 5 here reduces to the first two equations \begin{equation*} f_{x^{\ast }}\ast f_{u}=f_{z};\text{ }\left( gf_{x^{\ast }}\right) \ast f_{u}=w, \end{equation*} where $f_{u}$ is known and the two unknown functions are $g$ (here nonparametric) and $f_{x^{\ast }}.$
The model 5 incorporates model 3 for the regressor and thus the convolution equations from that model apply. An additional convolution equation is derived here; it is obtained from considering the value of the density weighted conditional expectation in the dual space of generalized functions, $S^{\ast },$ applied to arbitrary $\psi \in S,$ \begin{equation*}
(w,\psi )=(E(f(z)y|z),\psi )=(E(f(z)g(x^{\ast })|z),\psi ); \end{equation*} this equals \begin{eqnarray*} &&\int \int g(x^{\ast })f_{x^{\ast },z}(x^{\ast },z)\psi (z)dx^{\ast }dz \\ &=&\int \int g(x^{\ast })f_{x^{\ast },u}(x^{\ast },u)\psi (x^{\ast }+u)dx^{\ast }du \\ &=&\int \int g(x^{\ast })f_{x^{\ast }}(x^{\ast })f_{u}(u)\psi (x^{\ast }+u)dx^{\ast }du=((gf_{x^{\ast }})\ast f_{u},\psi ). \end{eqnarray*}
Conditional moments for the regression function need not be integrable or bounded functions of $z$; we require them to be in the space of generalized functions $S^{\ast }.$
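The equation $(gf_{x^{\ast }})\ast f_{u}=w$ can be checked numerically under an illustrative Gaussian design (all distributional choices below are assumptions made for the sketch, not part of the model): $x^{\ast }\sim N(0,1)$, $u\sim N(0,1)$ independent, $g(x)=x$, so that $w(z)=E(x^{\ast }|z)f_{z}(z)=(z/2)f_{z}(z)$ with $z\sim N(0,2)$.

```python
import numpy as np

# Check of (g f_{x*}) * f_u = w under an illustrative Gaussian design:
# x* ~ N(0,1), u ~ N(0,1) independent, g(x) = x, z = x* + u ~ N(0,2),
# for which w(z) = E(x*|z) f_z(z) = (z/2) f_z(z).
def npdf(t, s=1.0):
    return np.exp(-t**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

dx = 0.01
x = np.linspace(-12.0, 12.0, 2401)           # symmetric grid containing 0
lhs = np.convolve(x * npdf(x), npdf(x), mode="same") * dx  # (g f_{x*}) * f_u
rhs = (x / 2) * npdf(x, s=np.sqrt(2))                      # w on the same grid
assert np.max(np.abs(lhs - rhs)) < 1e-6
```

The odd-length symmetric grid keeps the discrete convolution aligned with the continuous one; the tails beyond $\pm 12$ are negligible for these Gaussian factors.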
6. Regression with Berkson error.
This model may represent the situation where the (observed) regressor $x$ is correlated with the error $v,$ while $z$ is a vector, possibly of instruments, uncorrelated with the regression error.
Then, as is known, in addition to the Berkson error convolution equation the equation \begin{equation*}
w=E(y|z)=E(g(x)|z)=\int g(x)\frac{f_{x,z}(x,z)}{f_{z}(z)}dx=\int g(z-u)f_{u}(u)du=g\ast f_{u} \end{equation*} holds. This is stated in Meister (2008); however, the approach there is to consider $g$ to be absolutely integrable so that the convolution can be defined in the $L_{1}$ space. Here, by working in the space of generalized functions $S^{\ast },$ a much wider nonparametric class of functions that includes regression functions with polynomial growth is allowed.
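The Berkson equation $w=g\ast f_{u}$ admits a quick Monte Carlo check under hypothetical choices ($g(x)=x^{2}$, $u\sim N(0,s^{2})$, both assumptions of the sketch), for which $(g\ast f_{u})(z)=E\,g(z-u)=z^{2}+s^{2}$; note that $g$ here grows polynomially and is not in $L_{1}$.

```python
import numpy as np

# Monte Carlo check of w = g * f_u for the Berkson model, with illustrative
# choices g(x) = x^2 and u ~ N(0, s^2): w(z) = E g(z - u) = z^2 + s^2.
rng = np.random.default_rng(1)
s, z = 0.5, 1.7                    # hypothetical error scale and design point
u = rng.normal(scale=s, size=1_000_000)
w_mc = np.mean((z - u) ** 2)       # sample analogue of (g * f_u)(z)
w_exact = z**2 + s**2
assert abs(w_mc - w_exact) < 0.01
```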
7. Nonparametric regression with error in the regressor, where Berkson type instruments are assumed available.
This model was proposed by Newey (2001), examined in the univariate case by Schennach (2007) and Zinde-Walsh (2009), and in the multivariate case in Zinde-Walsh (2012), where the convolution equations given here in Table 2 were derived.
\subsection{\textbf{Convolution equations in models with conditional independence conditions.}}
All the models 1-7 can be extended to include some additional variables where conditionally on those variables, the functions in the model (e.g. conditional distributions) are defined and all the model assumptions hold conditionally.
Evdokimov (2011) derived the conditional version of model 4 from a very general nonparametric panel data model. Model 8 below describes the panel data set-up and how it transforms into the conditional models 4 and 4a, and possibly into model 3 with a relaxed independence condition (if the focus is on identifying the regression function).
Model 8. Panel data model with conditional independence.
Consider a two-period panel data model with an unknown regression function $ m $ and an idiosyncratic (unobserved) $\alpha :$ \begin{eqnarray*} Y_{i1} &=&m(X_{i1},\alpha _{i})+U_{i1}; \\ Y_{i2} &=&m(X_{i2},\alpha _{i})+U_{i2}. \end{eqnarray*}
To be able to work with various conditional characteristic functions, corresponding assumptions ensuring the existence of the conditional distributions need to be made; in what follows we assume that all the conditional density functions and moments exist as generalized functions in $S^{\ast }$.
In Evdokimov (2011) independence (conditional on the corresponding period $X$'s) of the regression error from $\alpha ,$ and from the $X$'s and the error of the other period, is assumed: \begin{equation*}
f_{t}=f_{U_{it}|X_{it},\alpha _{i},X_{i(-t)},U_{i(-t)}}(u_{t}|x,...)=f_{U_{it}|X_{it}}(u_{t}|x),\;t=1,2, \end{equation*}
with $f_{\cdot |\cdot }$ denoting corresponding conditional densities. Conditionally on $X_{i2}=X_{i1}=x$ the model takes the form 4 \begin{equation*} \begin{array}{c} z=x^{\ast }+u; \\ x=x^{\ast }+u_{x} \end{array} \end{equation*} with $z$ representing $Y_{1},x$ representing $Y_{2},$ $x^{\ast }$ standing in for $m(x,\alpha ),$ $u$ for $U_{1}$ and $u_{x}$ for $U_{2}.$ The convolution equations derived here for 4 or 4a now apply to conditional densities.
The convolution equations in 4a are similar to those in Evdokimov (2011); they provide equations for $f_{u},$ $f_{u_{x}}$ that do not rely on $f_{x^{\ast }}.$ Their advantage lies in the possibility of identifying the conditional error distributions without placing the usual non-vanishing restriction on the characteristic function of $x^{\ast }$ (which represents the function $m$ in the panel model).
The panel model can be considered with relaxed independence assumptions. Here in the two-period model we look at forms of dependence that assume zero conditional mean of the second-period error, rather than full independence of the first-period error: \begin{eqnarray*}
f_{U_{i1}|X_{i1},\alpha _{i},X_{i2},U_{i2}}(u_{t}|x,...) &=&f_{U_{i1}|X_{i1}}(u_{t}|x); \\
E(U_{i2}|X_{i1},\alpha _{i},X_{i2},U_{i1}) &=&0; \\
f_{U_{i2}|\alpha _{i},X_{i2}=X_{i1}=x}(u_{t}|x,...) &=&f_{U_{i2}|X_{i2}}(u_{t}|x). \end{eqnarray*} Then the model maps into model 3, with the functions in the convolution equations representing conditional densities, and allows one to identify the distribution of $x^{\ast }$ (the function $m$ in the model). But the conditional distribution of the second-period error in this set-up is not identified.
Evdokimov introduced parametric AR(1) or MA(1) dependence in the errors $U$ and, to accommodate it, extended the model to three periods. Here, in the AR case, this leads to the equations in $\left( \ref{ar(1)}\right) .$
Model 9. Errors in variables regression with classical measurement error conditionally on covariates.
Consider the regression model \begin{equation*} y=g(x^{\ast },t)+v, \end{equation*} with a measurement of the unobserved $x^{\ast }$ given by $\tilde{z}=x^{\ast }+\tilde{u},$ with $x^{\ast }\bot \tilde{u}$ conditionally on $t$. Assume that $E(\tilde{u}|t)=0$ and that $E(v|x^{\ast },t)=0.$ Then, redefining all the densities and conditional expectations to be conditional on $t,$ we get the same system of convolution equations as in Table 2 for model 5, with the unknown functions now being conditional densities and the regression function, $g.$
Conditioning requires assumptions that provide for existence of conditional distribution functions in $S^{\ast }$.
\section{\textbf{Solutions for the models.}}
\subsection{Existence of solutions}
To state results for nonparametric models it is important first to indicate clearly the classes of functions in which the solution is sought. Assumption 1 requires that all the (generalized) functions considered are elements of the space of generalized functions $S^{\ast }.$ This implies that in the equations the operation of convolution applied to two functions from $S^{\ast }$ provides an element of the space $S^{\ast }.$ This subsection gives high-level assumptions on the nonparametric classes of the unknown functions in which the solutions can be sought: any functions from these classes that enter into a convolution provide a result in $S^{\ast }.$
No assumptions are needed for existence of convolution and full generality of identification conditions in models 1,2 where the model assumptions imply that the functions represent generalized densities. For the other models including regression models convolution is not always defined in $S^{\ast }.$ Zinde-Walsh (2012) defines the concept of convolution pairs of classes of functions in $S^{\ast }$ where convolution can be applied.
To solve the convolution equations the Fourier transform is usually employed, so that, e.g., one transforms generalized density functions into characteristic functions. The Fourier transform is an isomorphism of the space $S^{\ast }.$ The Fourier transform of a generalized function $a\in S^{\ast }$, $Ft(a),$ is defined as follows. For any $\psi \in S,$ as usual, $Ft(\psi )(s)=\int \psi (x)e^{isx}dx;$ then the functional $Ft(a)$ is defined by \begin{equation*} (Ft(a),\psi )\equiv (a,Ft(\psi )). \end{equation*} The advantage of applying the Fourier transform is that integral convolution equations transform into algebraic equations when the ``exchange formula'' applies: \begin{equation} a\ast b=c\Longleftrightarrow Ft(a)\cdot Ft(b)=Ft(c). \label{exchange} \end{equation} In the space of generalized functions $S^{\ast },$ the Fourier transform and the inverse Fourier transform always exist. As shown in Zinde-Walsh (2012) there is a dichotomy between convolution pairs of subspaces in $S^{\ast }$ and the corresponding product pairs of subspaces of their Fourier transforms.
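A discrete sketch of the exchange formula, using the FFT as the Fourier transform on a finite grid (where the natural analogue of convolution is circular convolution; the input vectors are arbitrary and purely illustrative):

```python
import numpy as np

# Discrete analogue of the exchange formula: the DFT turns circular
# convolution into pointwise multiplication. Inputs are arbitrary vectors.
rng = np.random.default_rng(2)
n = 256
a = rng.normal(size=n)
b = rng.normal(size=n)

conv = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))  # Ft^{-1}(Ft(a)Ft(b))
direct = np.array([sum(a[j] * b[(k - j) % n] for j in range(n))
                   for k in range(n)])                       # a * b directly
assert np.max(np.abs(conv - direct)) < 1e-8
```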
The classical pairs of spaces (Schwartz, 1966) are the convolution pair $ \left( S^{\ast },O_{C}^{\ast }\right) $ and the corresponding product pair $ \left( S^{\ast },O_{M}\right) ,$ where $O_{C}^{\ast }$ is the subspace of $ S^{\ast }$ that contains rapidly decreasing (faster than any polynomial) generalized functions and $\mathit{O}_{M}$ is the space of infinitely differentiable functions with every derivative growing no faster than a polynomial at infinity. These pairs are important in that no restriction is placed on one of the generalized functions that could be any element of space $S^{\ast }$; the other belongs to a space that needs to be correspondingly restricted. A disadvantage of the classical pairs is that the restriction is fairly severe, for example, the requirement that a characteristic function be in $O_{M}\,\ $implies existence of all moments for the random variable. Relaxing this restriction would require placing constraints on the other space in the pair; Zinde-Walsh (2012) introduces some pairs that incorporate such trade-offs.
In some models the product of a function with a component of the vector of arguments is involved, such as $d(x)=x_{k}a(x);$ then for Fourier transforms $Ft(d)\left( s\right) =-i\frac{\partial }{\partial s_{k}}Ft(a)(s):$ the multiplication by a variable is transformed into $\left( -i\right) $ times the corresponding partial derivative. Since the differentiation operators are continuous in $S^{\ast }$ this transformation does not present a problem.
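A one-dimensional worked instance of this rule, with the standard normal density chosen purely for illustration (under the convention $Ft(\psi )(s)=\int \psi (x)e^{isx}dx$ used above):

```latex
\begin{align*}
f(x) &= \tfrac{1}{\sqrt{2\pi }}e^{-x^{2}/2}, & Ft(f)(s) &= e^{-s^{2}/2};\\
Ft(xf)(s) &= -i\frac{d}{ds}Ft(f)(s) = -i\frac{d}{ds}e^{-s^{2}/2} = i\,s\,e^{-s^{2}/2}.
\end{align*}
```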
\textbf{Assumption 2.} \textit{The functions }$a\in A,b\in B,$\textit{\ are such that }$\left( A,B\right) $\textit{\ form a convolution pair in }$ S^{\ast }$\textit{.}
Equivalently, $Ft(a),$ $Ft(b)$ are in the corresponding product pair of spaces.
Assumption 2 is applied to model 1 for $a=f_{x^{\ast }},b=f_{u};$ to model 2 with $a=f_{z},b=f_{u};$ to model 3 with $a=f_{x^{\ast }},b=f_{u}$ and with $ a=h_{k},b=f_{u},$ for all $k=1,...,d;$ to model 4a for $a=f_{x^{\ast }},$ or $f_{u_{x}},$ or $h_{k}$ for all $k$ and $b=f_{u};$ to model 5 with $ a=f_{x^{\ast }},$ or $gf_{x^{\ast }},$ or $h_{k}f_{x^{\ast }}$ and $b=f_{u};$ to model 6 with $a=f_{z},$ or $g$ and $b=f_{u};$ to model 7 with $a=g$ or $ h_{k}$ and $b=f_{u}.$
Assumption 2 is a high-level assumption that is a sufficient condition for a solution to the models 1-4 and 6-7 to exist. Some additional conditions are needed for model 5 and are provided below.
Assumption 2 is automatically satisfied for generalized density functions, so it is not needed for models 1 and 2. Denote by $\bar{D}\subset S^{\ast }$ the subset of generalized derivatives of distribution functions (corresponding to Borel probability measures in $R^{d}$); then in models 1 and 2, $A=B=\bar{D},$ and for the characteristic functions there are correspondingly no restrictions; denote the set of all characteristic functions, $Ft\left( \bar{D}\right) \subset S^{\ast },$ by $\bar{C}.$
Below a (non-exhaustive) list is given of nonparametric classes of generalized functions that provide sufficient conditions for existence of solutions to the models considered here. The classes are such that they place minimal or often no restrictions on one of the functions and restrict the class of the other so that the assumptions are satisfied.
In models 3 and 4 the functions $h_{k}$ are transformed into derivatives of continuous characteristic functions. An assumption that either the characteristic function of $x^{\ast }$ or the characteristic function of $u$ be continuously differentiable is sufficient, without any restrictions on the other to ensure that Assumption 2 holds. Define the subset of all continuously differentiable characteristic functions by $\bar{C}^{(1)}.$
In model 5 equations involve a product of the regression function $g$ with $f_{x^{\ast }}.$ Products of generalized functions in $S^{\ast }$ do not always exist, and so additional restrictions are needed in that model. If $g$ is an arbitrary element of $S^{\ast },$ then for the product to exist $f_{x^{\ast }}$ should be in $\mathit{O}_{M}$. On the other hand, if $f_{x^{\ast }}$ is an arbitrary generalized density it is sufficient that $g$ and $h_{k}$ belong to the space of $d$ times continuously differentiable functions with derivatives that are majorized by polynomial functions for $gf_{x^{\ast }},h_{k}f_{x^{\ast }}$ to be elements of $S^{\ast }.$ Indeed, the value of the functional $h_{k}f_{x^{\ast }}$ for an arbitrary $\psi \in S$ is defined by \begin{equation*} (h_{k}f_{x^{\ast }},\psi )=\left( -1\right) ^{d}\int F_{x^{\ast }}(x)\partial ^{(1,...,1)}(h_{k}(x)\psi (x))dx; \end{equation*} here $F_{x^{\ast }}$ is the (ordinary bounded) distribution function and this integral exists because $\psi $ and all its derivatives go to zero at infinity faster than any polynomial function. Denote by $\bar{S}^{B,1}$ the space of continuously differentiable functions $g\in S^{\ast }$ such that the functions $h_{k}(x)=x_{k}g(x)$ are also continuously differentiable with all derivatives majorized by polynomial functions. Since the products are in $S^{\ast },$ the Fourier transforms of the products are defined in $S^{\ast }.$ Further restrictions requiring the Fourier transforms of the products $gf_{x^{\ast }}$ and $h_{k}f_{x^{\ast }}$ to be continuously differentiable functions in $S^{\ast }$ would remove any restrictions on $f_{u}$ for the convolution to exist. Denote the space of all continuously differentiable functions in $S^{\ast }$ by $\bar{S}^{(1)}.$
If $g$ is an ordinary function that represents a regular element of $S^{\ast },$ the infinite differentiability condition on $f_{x^{\ast }}$ can be relaxed to simply requiring continuous first derivatives.
In models 6 and 7 if the generalized density function for the error, $f_{u},$ decreases faster than any polynomial (all moments need to exist for that), so that $f_{u}\in \mathit{O}_{C}^{\ast },$ \ then $g$ could be any generalized function in $S^{\ast };$ this will of course hold if $f_{u}$ has bounded support. Generally, the more moments the error is assumed to have, the fewer restrictions on the regression function $g$ are needed to satisfy the convolution equations of the model and the exchange formula. The models 6, 7 satisfy the assumptions for any error $u$ when support of generalized function $g$ is compact (as for the "sum of peaks"), then $g\in E^{\ast }\subset S^{\ast },$ where $E^{\ast }$ is the space of generalized functions with compact support. More generally the functions $g$ and all the $h_{k}$ could belong to the space $\mathit{O}_{C}^{\ast }$ of generalized functions that decrease at infinity faster than any polynomial, and still no restrictions need to be placed on $u.$
Denote for any generalized density function $f_{\cdot }$ the corresponding characteristic function, $Ft(f_{\cdot }),$ by $\phi _{\cdot }.$ Denote Fourier transform of the (generalized) regression function $g,$ $Ft(g),$ by $ \gamma .$
The following table summarizes some fairly general sufficient conditions on the models that place restrictions on the functions themselves or on the characteristic functions of distributions in the models that will ensure that Assumption 2 is satisfied and a solution exists. The nature of these assumptions is to provide restrictions on some of the functions that allow the others to be completely unrestricted for the corresponding model.
\textbf{Table 3.} Some nonparametric classes of generalized functions for which the convolution equations of the models are defined in $S^{\ast }$.
\begin{tabular}{|c|c|c|} \hline Model & Sufficient & assumptions \\ \hline 1 & no restrictions: & $\phi _{x^{\ast }}\in \bar{C};\phi _{u}\in \bar{C}$ \\ \hline 2 & no restrictions: & $\phi _{x^{\ast }}\in \bar{C};\phi _{u}\in \bar{C}$ \\ \hline & Assumptions A & Assumptions B \\ \hline
\multicolumn{1}{|l|}{$\ \ \ \ $3} & \multicolumn{1}{|l|}{any$\ \ \phi _{x^{\ast }}\in \bar{C};\phi _{u}\in \bar{C}^{(1)}$} & \multicolumn{1}{|l|}{ any $\phi _{u}\in \bar{C};\phi _{x^{\ast }}\in \bar{C}^{(1)}$} \\ \hline 4 & any $\phi _{u_{x}},\phi _{x^{\ast }}\in \bar{C};\phi _{u}\in \bar{C} ^{(1)}$ & any $\phi _{u},\phi _{x^{\ast }}\in \bar{C};\phi _{u_{x}}\in \bar{C }^{(1)}$ \\ \hline 4a & any $\phi _{u_{x}},\phi _{x^{\ast }}\in \bar{C};\phi _{u}\in \bar{C} ^{(1)}$ & any $\phi _{u},\phi _{u_{x}}\in \bar{C};\phi _{x^{\ast }}\in \bar{C }^{(1)}$ \\ \hline
\multicolumn{1}{|l|}{$\ \ \ \ $5} & \multicolumn{1}{|l|}{any $g\in S^{\ast
};f_{x^{\ast }}\in O_{M};f_{u}\in O_{C}^{\ast }$} & \multicolumn{1}{|l|}{$\ $ any $\ f_{x^{\ast }}\in \bar{D};\ g,h_{k}\in \bar{S}^{B,1};f_{u}\in O_{C}^{\ast }$} \\ \hline
\multicolumn{1}{|l|}{$\ \ \ \ $6} & \multicolumn{1}{|l|}{any$\ g\in S^{\ast
};f_{u}\in O_{C}^{\ast }$} & \multicolumn{1}{|l|}{$\ g\in O_{C}^{\ast };$ any $f_{u}:\phi _{u}\in \bar{C}$} \\ \hline 7 & any $g\in S^{\ast };f_{u}\in O_{C}^{\ast }$ & $g\in O_{C}^{\ast };$ any $ f_{u}:\phi _{u}\in \bar{C}$ \\ \hline \end{tabular}
The next table states the equations and systems of equations for Fourier transforms that follow from the convolution equations.
\textbf{Table 4.} The form of the equations for the Fourier transforms:
\begin{tabular}{|c|c|c|} \hline Model & Eq's for Fourier transforms & Unknown functions \\ \hline 1 & $\phi _{x^{\ast }}\phi _{u}=\phi _{z};$ & $\phi _{x^{\ast }}$ \\ \hline 2 & $\phi _{x^{\ast }}=\phi _{z}\phi _{-u};$ & $\phi _{x^{\ast }}$ \\ \hline 3 & $\left\{ \begin{array}{c} \phi _{x^{\ast }}\phi _{u}=\phi _{z}; \\ \left( \phi _{x^{\ast }}\right) _{k}^{\prime }\phi _{u}=\varepsilon _{k},k=1,...,d. \end{array} \right. $ & $\phi _{x^{\ast }},\phi _{u}$ \\ \hline 4 & $\left\{ \begin{array}{c} \phi _{x^{\ast }}\phi _{u}=\phi _{z}; \\ \left( \phi _{x^{\ast }}\right) _{k}^{\prime }\phi _{u}=\varepsilon _{k},k=1,...,d; \\ \phi _{x^{\ast }}\phi _{u_{x}}=\phi _{x}. \end{array} \right. $ & $\phi _{x^{\ast }},\phi _{u},\phi _{u_{x}}$ \\ \hline 4a & $\left\{ \begin{array}{c} \phi _{u_{x}}\phi _{u}=\phi _{z-x}; \\ \left( \phi _{u_{x}}\right) _{k}^{\prime }\phi _{u}=\varepsilon _{k},k=1,...,d. \\ \phi _{x^{\ast }}\phi _{u_{x}}=\phi _{x}. \end{array} \right. $ & --"-- \\ \hline 5 & $\left\{ \begin{array}{c} \phi _{x^{\ast }}\phi _{u}=\phi _{z}; \\ Ft\left( gf_{x^{\ast }}\right) \phi _{u}=\varepsilon \\ \left( Ft\left( gf_{x^{\ast }}\right) \right) _{k}^{\prime }\phi _{u}=\varepsilon _{k},k=1,...,d. \end{array} \right. $ & $\phi _{x^{\ast }},\phi _{u},g$ \\ \hline 6 & $\left\{ \begin{array}{c} \phi _{x}=\phi _{-u}\phi _{z}; \\ Ft(g)\phi _{-u}=\varepsilon . \end{array} \right. $ & $\phi _{u},g$ \\ \hline 7 & $\left\{ \begin{array}{c} Ft(g)\phi _{u}=\varepsilon ; \\ \left( Ft\left( g\right) \right) _{k}^{\prime }\phi _{u}=\varepsilon _{k},k=1,...,d. \end{array} \right. $ & $\phi _{u},g$ \\ \hline \end{tabular}
Notes. Notation: $\left( \cdot \right) _{k}^{\prime }$ denotes the $k$-th partial derivative of the function. The functions $\varepsilon $ are the Fourier transforms of the corresponding $w,$ and $\varepsilon _{k}=-iFt(w_{k}),$ defined for the models in Tables 1 and 2.
Assumption 2 (fulfilled, e.g., by the classes of generalized functions in Table 3) ensures existence of solutions to the convolution equations for models 1-7; this does not exclude multiple solutions, and the next subsection provides a discussion of the solutions of the equations in Table 4.
\subsection{Classes of solutions; support and multiplicity of solutions}
Typically, support assumptions are required to restrict multiplicity of solutions; here we examine the dependence of the solutions on the supports of the functions. The results here also give conditions under which some zeros, e.g. in the characteristic functions, are allowed. Thus, in common with, e.g., Carrasco and Florens (2010) and Evdokimov and White (2011), distributions such as the uniform or the triangular, for which the characteristic function has isolated zeros, are not excluded. The difference here is the extension of the consideration of the solutions to $S^{\ast }$ and to models, such as the regression model, where this approach to relaxing support assumptions was not previously considered.
Recall that for a continuous function $\psi (x)$ on $R^{d}$ the support is defined as the set $W=$supp($\psi )$ such that \begin{equation*} \psi (x)\left\{ \begin{array}{cc} \neq 0 & \text{for }x\in W, \\ =0 & \text{for }x\in R^{d}\backslash W. \end{array} \right. \end{equation*} The support of a continuous function is an open set.
Generalized functions are functionals on the space $S$, and the support of a generalized function $b\in S^{\ast }$ is defined as follows (Schwartz, 1967, p. 28). Denote by $\left( b,\psi \right) $ the value of the functional $b$ for $\psi \in S.$ Define the null set for $b\in S^{\ast }$ as the union of the supports of all functions in $S$ for which the value of the functional is zero: $\Omega =\cup \{$supp$\left( \psi \right) :$ $\psi \in S,$ $\left( b,\psi \right) =0\}.$ Then supp$\left( b\right) =R^{d}\backslash \Omega .$ Note that a generalized function has support in a closed set; for example, the support of the $\delta $-function is the single point $0.$
Note that for model 2 Table 4 gives the solution for $\phi _{x^{\ast }}$ directly and the inverse Fourier transform can provide the (generalized) density function, $f_{x^{\ast }}.$
In Zinde-Walsh (2012) identification conditions in $S^{\ast }$ were given for models 1 and 7 under assumptions that include the ones in Table 3 but could also be more flexible.
The equations in Table 4 for models 1, 3, 4, 4a, 5, 6 and 7 are of two types, similar to those solved in Zinde-Walsh (2012). One is a convolution with one unknown function; the other is a system of equations with two unknown functions, each leading to the corresponding equations for their Fourier transforms.
\subsubsection{Solutions to the equation $\protect\alpha \protect\beta = \protect\gamma .$}
Consider the equation \begin{equation} \alpha \beta =\gamma , \label{product} \end{equation} with one unknown function $\alpha ;$ $\beta $ is a given continuous function. By Assumption 2 the nonparametric class for $\alpha $ is such that the equation holds in $S^{\ast }$ on $R^{d}$; it is also possible to consider a nonparametric class for $\alpha $ with restricted support, $\bar{W}.$ Of course without any restrictions $\bar{W}=R^{d}.$ Recall the differentiation operator, $\partial ^{m},$ for $m=(m_{1},\ldots ,m_{d})$ and denote by $supp(\beta ,\partial )$ the set $\cup _{\Sigma m_{i}=0}^{\infty }supp(\partial ^{m}\beta ),$ where $supp(\partial ^{m}\beta )$ is an open set where a continuous derivative $\partial ^{m}\beta $ exists. Any point where $\beta $ is zero belongs to this set if some finite-order partial continuous derivative of $\beta $ is not zero at that point (and in some open neighborhood); for $\beta $ itself $supp(\beta )\equiv supp(\beta ,0).$
Define the functions \begin{equation} \alpha _{1}=\beta ^{-1}\gamma I\left( supp(\beta ,\partial )\right) ;\alpha _{2}(x)=\left\{ \begin{array}{cc} 1 & \text{for }x\in supp(\beta ,\partial ); \\ \tilde{\alpha} & \text{for }x\in \bar{W}\backslash (supp(\beta ,\partial )) \end{array} \right. \label{division} \end{equation} with any $\tilde{\alpha}$ such that $\alpha _{1}\alpha _{2}\in Ft\left( A\right) .$
Consider the case when $\alpha ,\beta $ and thus $\gamma $ are continuous. For any point $x_{0},$ if $\beta (x_{0})\neq 0$ there is a neighborhood $N(x_{0})$ where $\beta \neq 0,$ and division by $\beta $ is possible. If $\beta (x_{0})$ has a zero, it could only be of finite order, and in some neighborhood $N(x_{0})\in supp(\partial ^{m}\beta )$ a representation \begin{equation} \beta =\eta (x)\Pi _{i=1}^{d}\left( x_{i}-x_{0i}\right) ^{m_{i}} \label{finitezero} \end{equation} holds for some continuous function $\eta $ in $S^{\ast }$ such that $\eta >c_{\eta }>0$ on $supp(\eta ).$ Then $\eta ^{-1}\gamma $ in $N(x_{0})$ is a non-zero continuous function; division of such a function by $\Pi _{i=1}^{d}\left( x_{i}-x_{0i}\right) ^{m_{i}}$ in $S^{\ast }$ is defined (Schwartz, 1967, pp. 125-126), thus division by $\beta $ is defined in this neighborhood $N(x_{0})$. For the set $supp(\beta ,\partial )$ consider a covering of every point by such neighborhoods; the possibility of division in each neighborhood leads to the possibility of division globally on the whole of $supp(\beta ,\partial ).$ Then $\alpha _{1}$ as defined in $\left( \ref{division}\right) $ exists in $S^{\ast }.$
In the case where $\gamma $ is an arbitrary generalized function, if $\beta $ is infinitely differentiable then by (Schwartz, 1967, pp. 126-127) division by $\beta $ is defined on $supp(\beta ,\partial )$ and the solution is given by $\left( \ref{division}\right) .$
For the cases where $\gamma $ is not continuous and $\beta $ is not infinitely differentiable the solution is provided by \begin{equation*} \alpha _{1}=\beta ^{-1}\gamma I\left( supp(\beta ,0)\right) ;\alpha _{2}(x)=\left\{ \begin{array}{cc} 1 & \text{for }x\in supp(\beta ,0); \\ \tilde{\alpha} & \text{for }x\in \bar{W}\backslash (supp(\beta ,0)) \end{array} \right. \end{equation*} with any $\tilde{\alpha}$ such that $\alpha _{1}\alpha _{2}\in Ft\left( A\right) .$
Theorem 2 in Zinde-Walsh (2012) implies that the solution to $\left( \ref {product}\right) $ is $a=Ft^{-1}(\alpha _{1}\alpha _{2});$ the sufficient condition for the solution to be unique is $supp(\beta ,0)\supset \bar{W};$ if additionally either $\gamma $ is a continuous function or $\beta $ is an infinitely continuously differentiable function it is sufficient for uniqueness that $supp(\beta ,\partial )\supset \bar{W}.$
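A numerical sketch of this division solution for the equation $\phi _{x^{\ast }}\phi _{u}=\phi _{z}$ of model 1 (Table 4). The distributional choices are illustrative assumptions: $x^{\ast }$ standard normal and $u$ Laplace, whose characteristic function $1/(1+b^{2}\omega ^{2})$ has no zeros, so that $supp(\beta ,0)=R$ and the division is defined on the whole grid.

```python
import numpy as np

# Solving phi_{x*} phi_u = phi_z (Table 4, model 1) by division on a grid.
# Illustrative choices: x* ~ N(0,1); u ~ Laplace(0,b), whose characteristic
# function 1/(1 + b^2 w^2) has no zeros, so division is defined everywhere.
N, dx, b = 2048, 0.05, 0.5
x = dx * (np.arange(N) - N // 2)
f_x = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f_u = np.exp(-np.abs(x) / b) / (2 * b)

Fu = np.fft.fft(np.fft.ifftshift(f_u)) * dx      # ~ phi_u on the frequency grid
# the "observed" phi_z is generated by the convolution equation itself:
Fz = np.fft.fft(np.fft.ifftshift(f_x)) * dx * Fu
alpha = Fz / Fu                                  # phi_{x*} = phi_z / phi_u
f_rec = np.fft.fftshift(np.real(np.fft.ifft(alpha))) / dx
assert np.max(np.abs(f_rec - f_x)) < 1e-10
```

The polynomial decay of the Laplace characteristic function keeps the division numerically stable; for rapidly vanishing $\phi _{u}$ the same division amplifies rounding noise, which is the discrete face of the ill-posedness of deconvolution.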
This provides solutions for models 1 and 6 where only equations of this type appear.
\subsubsection{Solutions to the system of equations}
For models 3,4,5 and 7 a system of equations of the form \begin{eqnarray} && \begin{array}{cc} \alpha \beta & =\gamma ; \\ \alpha \beta _{k}^{\prime } & =\gamma _{k}, \end{array} \label{twoeq} \\ k &=&1,...,d. \notag \end{eqnarray} (with $\beta $ continuously differentiable) arises. Theorem 3 in Zinde-Walsh (2012) provides the solution and uniqueness conditions for this system of equations. It is first established that a set of continuous functions $ \varkappa _{k},k=1,...,d,$ that solves the equation \begin{equation} \varkappa _{k}\gamma -\gamma _{k}=0 \label{difeq} \end{equation} in the space $S^{\ast }$ exists and is unique on $W=supp(\gamma )$ as long as $supp(\beta )\supset W.$ Then $\beta _{k}^{\prime }\beta ^{-1}=\varkappa _{k}$ and substitution into $\left( \ref{difeq}\right) $ leads to a system of first-order differential equations in $\beta .$
Case 1. Continuous functions; $W$ is an open set.
For models 3 and 4 the system $\left( \ref{twoeq}\right) $ involves continuous characteristic functions, thus $W$ is an open set. In some cases $W$ can be an open set under the conditions of models 5 and 7, e.g. if the regression function is integrable in model 7.
For this case represent the open set $W$ as a union of (maximal) connected components $\cup _{v}W_{v}.$
Then by the same arguments as in the proof of Theorem 3 in Zinde-Walsh (2012) the solution can be given uniquely on $W$ as long as at some point $\zeta _{0\nu }\in (W_{\nu }\cap W)$ the value $\beta \left( \zeta _{0\nu }\right) $ is known for each of the connected components. Consider then $\beta _{1}(\zeta )=\Sigma _{\nu }[\beta \left( \zeta _{0\nu }\right) \exp \int_{\zeta _{0\nu }}^{\zeta }\tsum\limits_{k=1}^{d}\varkappa _{k}(\xi )d\xi ]I(W_{\nu }),$ where integration is along any arc within the component that connects $\zeta $ to $\zeta _{0\nu }.$ Then $\alpha _{1}=\beta _{1}^{-1}\gamma ,$ and $\alpha _{2},\beta _{2}$ are defined as above by being $1$ on $\cup _{\nu }W_{\nu }$ and arbitrary outside of this set.
When $\beta (0)=1$ as is the case for the characteristic function, the function is uniquely determined on the connected component that includes 0.
Evdokimov and White (2012) provide a construction that permits, in the univariate case, extending the solution $\beta \left( \zeta _{0\nu }\right) [\exp \int_{\zeta _{0\nu }}^{\zeta }\tsum\limits_{k=1}^{d}\varkappa _{k}(\xi )d\xi ]I(W_{\nu })$ from a connected component of the support where $\beta \left( \zeta _{0\nu }\right) $ is known (e.g. at 0 for a characteristic function) to a contiguous connected component, provided that on the border between the two, where $\beta =0,$ at least some finite-order derivative of $\beta $ is not zero. In the multivariate case this approach can be extended to the same construction along a one-dimensional arc from one connected component to the other. Thus identification is possible on a connected component of $supp(\beta ,\partial ).$
Case 2. $W$ is a closed set.
Generally for models 5 and 7, $W$ is the support of a generalized function and is a closed set. It may intersect with several connected components of the support of $\beta .$ Denote by $W_{\nu }$ here the intersection of a connected component of the support of $\beta $ with $W.$ Then similarly $\beta _{1}(\zeta )=\tsum\limits_{\nu }[\beta \left( \zeta _{0\nu }\right) \exp \int_{\zeta _{0\nu }}^{\zeta }\tsum\limits_{k=1}^{d}\varkappa _{k}(\xi )d\xi ]I(W_{\nu }),$ where integration is along any arc within the component that connects $\zeta $ to $\zeta _{0\nu }.$ Then $\alpha _{1}=\beta _{1}^{-1}\varepsilon ,$ and $\alpha _{2},\beta _{2}$ are defined as above by being $1$ on $\cup _{\nu }W_{\nu }$ and arbitrary outside of this set. The issue of the value of $\beta $ at some point within each connected component arises. In the case where $\beta $ is a characteristic function, if $W$ consists of a single connected component and $0\in W,$ the solution is unique, since then $\beta (0)=1.$
Note that for model 5 the solution to equations of the type $\left( \ref {twoeq}\right) $ would only provide $Ft(gf_{x^{\ast }})$ and $\phi _{u};$ then from the first equation for this model in Table 4 $\phi _{x^{\ast }}$ can be obtained; it is unique if supp$\phi _{x^{\ast }}=$supp$\phi _{z}$. To solve for $g$ find $g=Ft^{-1}\left( Ft\left( gf_{x^{\ast }}\right) \right) \cdot \left( f_{x^{\ast }}\right) ^{-1}.$
\section{Identification, partial identification and well-posedness}
\subsection{Identified solutions for the models 1-7}
As follows from the discussion of the solutions, uniqueness in models 1, 2, 3, 4, 4a, 5 and 6 holds (in a few cases up to the value of a function at a point) if all the Fourier transforms are supported over the whole $R^{d};$ in many cases it is sufficient that $supp(\beta ,\partial )=R^{d}.$
The classes of functions could be defined with Fourier transforms supported on some known subset $\bar{W}$ of $R^{d},$ rather than on the whole space; if all the functions considered have $\bar{W}$ as their support, and the support consists of one connected component that includes 0 as an interior point, then identification for the solutions holds. For the next table assume that $\bar{W}$ is a single connected component with $0$ as an interior point; again $\bar{W}$ could coincide with $supp(\beta ,\partial )$. For model 5 under Assumption B assume additionally that the value at zero, $Ft(gf_{x^{\ast }})(0),$ is known; similarly, for model 7 under Assumption B, additionally assume that $Ft(g)(0)$ is known.
Table 5. The solutions for identified models on $\bar{W}.$
\begin{tabular}{|c|c|} \hline Model & $ \begin{array}{c} \text{Solution to } \\ \text{equations} \end{array} $ \\ \hline
\multicolumn{1}{|l|}{$\ \ \ $1.} & \multicolumn{1}{|l|}{$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ f_{x^{\ast }}=Ft^{-1}\left( \phi _{u}^{-1}\phi _{z}\right) .$} \\ \hline 2. & $f_{x^{\ast }}=Ft^{-1}\left( \phi _{-u}\phi _{z}\right) .$ \\ \hline
\multicolumn{1}{|l|}{$\ $\ 3.} & \multicolumn{1}{|l|}{$ \begin{array}{c} \text{Under Assumption A} \\ f_{x^{\ast }}=Ft^{-1}(\exp \int_{\zeta _{0}}^{\zeta }\tsum\limits_{k=1}^{d}\varkappa _{k}(\xi )d\xi ), \\ \text{where }\varkappa _{k}\text{ solves }\varkappa _{k}\phi _{z}-[\left( \phi _{z}\right) _{k}^{\prime }-\varepsilon _{k}]=0; \\ f_{u}=Ft^{-1}(\phi _{x^{\ast }}^{-1}\varepsilon ). \\ \text{Under Assumption B} \\ f_{u}=Ft^{-1}(\exp \int_{\zeta _{0}}^{\zeta }\tsum\limits_{k=1}^{d}\varkappa _{k}(\xi )d\xi ); \\ \varkappa _{k}\text{ solves }\varkappa _{k}\phi _{z}-\varepsilon _{k}=0; \\ f_{x^{\ast }}=Ft^{-1}(\phi _{u}^{-1}\varepsilon ). \end{array} $} \\ \hline 4 & $ \begin{array}{c} f_{x^{\ast }},f_{u}\text{ obtained similarly to those in 3.;} \\ \phi _{u_{x}}=\phi _{x^{\ast }}^{-1}\phi _{x}. \end{array} $ \\ \hline 4a. & $ \begin{array}{c} f_{u_{x}},f_{u}\text{ obtained similarly to }\phi _{x^{\ast }},\phi _{u} \text{ in 3.;} \\ \phi _{x^{\ast }}=\phi _{u_{x}}^{-1}\phi _{x}. \end{array} $ \\ \hline 5. & $ \begin{array}{c} \text{Three steps:} \\ \text{1. (a) Get }Ft(gf_{x^{\ast }}),\phi _{u}\text{ similarly to }\phi _{x^{\ast }},\phi _{u}\text{ in model 3} \\ \text{(under Assumption A use }Ft(gf_{x^{\ast }})(0))\text{;} \\ \text{2. Obtain }\phi _{x^{\ast }}=\phi _{u}^{-1}\phi _{z}; \\ \text{3. Get }g=\left[ Ft^{-1}\left( \phi _{x^{\ast }}\right) \right] ^{-1}Ft^{-1}(Ft(gf_{x^{\ast }})). \end{array} $ \\ \hline 6. & $\phi _{-u}=\phi _{z}^{-1}\phi _{x}$ and $g=Ft^{-1}(\phi _{x}^{-1}\phi _{z}\varepsilon ).$ \\ \hline 7. & $ \begin{array}{c} \phi _{x^{\ast }},Ft(g)\text{obtained similarly to }\phi _{x^{\ast }},\phi _{u}\text{in }3 \\ \text{(under Assumption A use }Ft(g)(0)). \end{array} $ \\ \hline \end{tabular}
\subsection{Implications of partial identification.}
Consider the case of Model 1. Lack of identification, say when the error distribution has a characteristic function supported on a convex domain $W_{u}$ around zero, results in the solution $\phi _{x^{\ast }}=\phi _{1}\phi _{2},$ with $\phi _{1}$ non-zero and unique on $W_{u},$ thus capturing the lower-frequency components of $x^{\ast },$ and with $\phi _{2}$ a characteristic function of a distribution with arbitrary high-frequency components. Transforming back to densities provides a corresponding model with independent components \begin{equation*} z=x_{1}^{\ast }+x_{2}^{\ast }+u, \end{equation*} where $x_{1}^{\ast }$ uniquely captures the lower-frequency part of the observed $z.$ The more important the contribution of $x_{1}^{\ast }$ to $x^{\ast },$ the less important is the lack of identification.
If the feature of interest, as discussed e.g. by Matzkin (2007), involves only low frequency components of $x^{\ast },$ it may still be fully identified even when the distribution of $x^{\ast }$ itself is not. An example of that is deconvolution applied to an image of a car captured by a traffic camera: although even after deconvolution the image may still appear blurry, the licence plate number may be clearly visible. In nonparametric regression the polynomial growth of the regression or the expectation of the response function may be identifiable even if the regression function is not fully identified.
Features that are identified include any functional, $\Phi ,$ linear or non-linear on a class of functions of interest, such that in the frequency domain $\Phi $ is supported on $W_{u}.$
\subsection{Well-posedness in $S^{\ast }$}
Conditions for well-posedness in $S^{\ast }$ for solutions of the equations entering in models 1-7 were established in Zinde-Walsh (2012). Well-posedness is needed to ensure that if a sequence of functions converges (in the topology of $S^{\ast })$ to the known functions of the equations characterizing the models 1-7 in tables 1 and 2, then the corresponding sequence of solutions will converge to the solution for the limit functions. A feature of well-posedness in $S^{\ast }$ is that the solutions are considered in a class of functions that is a bounded set in $S^{\ast }.$
The properties that differentiation is a continuous operation, and that the Fourier transform is an isomorphism of the topological space $S^{\ast },$ make conditions for convergence in this space much weaker than those in function spaces, say, $L_{1},$ $L_{2}.$ Thus for a density that is given by the generalized derivative of the distribution function, well-posedness holds in spaces of generalized functions by the continuity of the differentiation operator.
For the problems here, however, well-posedness does not always obtain. The main sufficient condition is that the inverse of the characteristic function of the measurement error satisfies condition $\left( \ref{condition}\right) $ with $b=\phi _{u}^{-1}$ on the corresponding support. This holds if either the support is bounded or the distribution is not super-smooth. If $\phi _{u}$ has some zeros but satisfies the identification conditions, so that it has the local representation $\left( \ref{finitezero}\right) $ where $\left( \ref{condition}\right) $ is satisfied for $b=\eta ^{-1},$ well-posedness will hold.
An example in Zinde-Walsh (2012) demonstrates that well-posedness of deconvolution does not hold even in the weak topology of $S^{\ast }$ for super-smooth (e.g. Gaussian) error distributions on unbounded support. On the other hand, well-posedness of deconvolution in $S^{\ast }$ obtains for ordinary smooth distributions, and thus under less restrictive conditions than in the function spaces, such as $L_{1}$ or $L_{2},$ usually considered.
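The contrast between ordinary smooth and super-smooth errors can be illustrated numerically. A minimal sketch (all names are ours): the inverse characteristic function of a Laplace error grows polynomially in the frequency, while for a Gaussian error it grows exponentially, so deconvolution amplifies high-frequency perturbations of the observed characteristic function without bound.

```python
import numpy as np

# Characteristic functions of the error distribution:
# Laplace(0,1) is ordinary smooth (polynomial decay),
# N(0,1) is super-smooth (exponential decay).
phi_laplace = lambda t: 1.0 / (1.0 + t ** 2)
phi_gauss = lambda t: np.exp(-t ** 2 / 2.0)

def amplification(phi_u, t):
    """Factor by which b = phi_u^{-1} inflates a perturbation of the
    observed characteristic function at frequency t."""
    return 1.0 / abs(phi_u(t))

amp_laplace = amplification(phi_laplace, 10.0)   # = 101: stays controlled
amp_gauss = amplification(phi_gauss, 10.0)       # = e^50: blows up
```

The Laplace amplification at $t=10$ is only $1+t^{2}=101$, while the Gaussian one is $e^{50}$, which is why condition $\left( \ref{condition}\right) $ fails for super-smooth errors on unbounded support.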
In the models 3-7 with several unknown functions, more conditions are required to ensure that all the operations by which the solutions are obtained are continuous in the topology of $S^{\ast }.$ It may not be sufficient to assume $\left( \ref{condition}\right) $ for the inverses of unknown functions where the solution requires division; for continuity of the solution the condition may need to apply uniformly.
Define a class of ordinary functions on $R^{d},$ $\Phi (m,V)$ (with $m$ a vector of integers, $V$ a positive constant) where $b\in \Phi (m,V)$ if \begin{equation} \int \Pi \left( (1+t_{i}^{2})^{-1}\right) ^{m_{i}}\left\vert b(t)\right\vert dt<V<\infty .\text{ } \label{condb} \end{equation}
Then in Zinde-Walsh (2012) well-posedness is proved for model 7 as long as in addition to Assumption A or B, for some $\Phi (m,V)$ both $\phi _{u}$ and $\phi _{u}^{-1}$ belong to the class $\Phi (m,V)$. This condition is fulfilled by non-supersmooth $\phi _{u};$ this could be an ordinary smooth distribution or a mixture with some mass point.
A convenient way of imposing well-posedness is to restrict the support of functions considered to a bounded $\bar{W}.$ If the features of interest are associated with low-frequency components only, then if the functions are restricted to a bounded space the low-frequency part can be identified and is well-posed.
\section{Implications for estimation}
\subsection{Plug-in non-parametric estimation}
Solutions in Table 5 for the equations that express the unknown functions via known functions of observables give scope for plug-in estimation. As seen e.g. in the example of Model 4, the rows 4 and 4a are different expressions that will provide different plug-in estimators for the same functions.
The functions of the observables here are characteristic functions and Fourier transforms of density-weighted conditional expectations, and in some cases their derivatives, that can be estimated by non-parametric methods. There are some direct estimators, e.g. for characteristic functions. In the space $S^{\ast }$ the Fourier transform and inverse Fourier transform are continuous operations, thus using standard estimators of density-weighted expectations and applying the Fourier transform would provide consistency in $S^{\ast }$; the details are provided in Zinde-Walsh (2012). Then the solutions can be expressed via those estimators by the operations from Table 5 and, as long as the problem is well-posed, the estimators will be consistent and the convergence will obtain at the appropriate rate. As in An and Hu (2012), the convergence rate may be even faster for well-posed problems in $S^{\ast }$ than the usual nonparametric rate in (ordinary) function spaces. For example, as demonstrated in Zinde-Walsh (2008), kernel estimators of density, which may diverge if the distribution function is not absolutely continuous, are always (under the usual assumptions on kernel/bandwidth) consistent in the weak topology of the space of generalized functions, where the density problem is well-posed. Here, well-posedness holds for deconvolution as long as the error density is not super-smooth.
\subsection{Regularization in plug-in estimation}
When well-posedness cannot be ensured, plug-in estimation will not provide consistent results and some regularization is required; usually spectral cut-off is employed for the problems considered here. In the context of these non-parametric models regularization requires extra information: the knowledge of the rate of decay of the Fourier transform of some of the functions.
For model 1 this is not a problem since $\phi _{u}$ is assumed known; the regularization uses the information about the decay of this characteristic function to construct a sequence of compactly supported solutions with support increasing at a corresponding rate. In $S^{\ast }$ no regularization is required for plug-in estimation unless the error distribution is super-smooth. Exponential growth in $\phi _{u}^{-1}$ provides a logarithmic rate of convergence in function classes for the estimator (Fan, 1991). Below we examine spectral cut-off regularization for the deconvolution in $S^{\ast }$ when the error density is super-smooth.
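A minimal numerical sketch of such spectral cut-off regularization under a Gaussian (super-smooth) error; the tuning choices (rate $r_{n}=\sqrt{n}$, $k=2$, the Laplace distribution for $x^{\ast }$ and all helper names) are ours and purely illustrative.

```python
import numpy as np

# Sketch of spectral cut-off deconvolution in model 1:
# z = x* + u with super-smooth u ~ N(0,1), so Lambda(t) = t^2/2 and k = 2.
rng = np.random.default_rng(0)
n = 100_000
x = rng.laplace(size=n)                  # unobserved x*, phi_x(t) = 1/(1+t^2)
z = x + rng.normal(size=n)               # observed sample

phi_u = lambda t: np.exp(-t ** 2 / 2.0)
phi_x_true = lambda t: 1.0 / (1.0 + t ** 2)

def ecf(z, t):
    """Empirical characteristic function of z at the frequencies t."""
    return np.array([np.exp(1j * ti * z).mean() for ti in np.atleast_1d(t)])

# With rate r_n = sqrt(n), the requirement exp(B_n^k) < r_n gives the
# cut-off B_n < (0.5 * log n)^(1/2).
B_n = np.sqrt(0.5 * np.log(n))           # about 2.4 here
t_in = np.linspace(-B_n, B_n, 41)
err_reg = np.max(np.abs(ecf(z, t_in) / phi_u(t_in) - phi_x_true(t_in)))

t_out = 6.0                              # outside the cut-off: noise explodes
err_unreg = abs(ecf(z, t_out)[0] / phi_u(t_out) - phi_x_true(t_out))
```

Inside the cut-off the plug-in estimator of $\phi _{x^{\ast }}$ stays close to the truth, while at a frequency beyond $B_{n}$ the sampling noise of the empirical characteristic function is inflated by $\phi _{u}^{-1}$ by many orders of magnitude.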
With super-smooth error in $S^{\ast }$ define a class of generalized functions $\Phi (\Lambda ,m,V)$ for some non-negative-valued function $ \Lambda $; a generalized function $b\in \Phi (\Lambda ,m,V)$ if there exists a function $\bar{b}(\zeta )\in \Phi (m,V)$ such that also $\bar{b}(\zeta )^{-1}\in \Phi (m,V)$ and $b=\bar{b}(\zeta )\exp \left( -\Lambda (\zeta )\right) .$ Note that a linear combination of functions in $\Phi (\Lambda ,m,V)$ belongs to the same class. Define convergence: a sequence of $ b_{n}\in \Phi (\Lambda ,m,V)$ converges to zero if the corresponding sequence $\bar{b}_{n}$ converges to zero in $S^{\ast }.$
Convergence in probability for a sequence of random functions, $\varepsilon _{n},$ in $S^{\ast }$ is defined as follows: $(\varepsilon _{n}-\varepsilon )\rightarrow _{p}0$ in $S^{\ast }$ if for any set $\psi _{1},...,\psi _{v}\in S$ the random vector of the values of the functionals converges: $ \left( (\varepsilon _{n}-\varepsilon ,\psi _{1}),...,(\varepsilon _{n}-\varepsilon ,\psi _{v})\right) \rightarrow _{p}0.$
\textbf{Lemma 2.} \textit{If in model 1 }$\phi _{u}=b\in \Phi (\Lambda ,m,V), $\textit{\ where }$\Lambda $\textit{\ is a polynomial function of order no more than }$k,$\textit{\ and }$\varepsilon _{n}$\textit{\ is a sequence of estimators of }$\varepsilon $\textit{\ that are consistent in }$S^{\ast }:r_{n}(\varepsilon _{n}-\varepsilon )\rightarrow _{p}0$\textit{\ in }$ S^{\ast }$\textit{\ at some rate }$r_{n}\rightarrow \infty ,$\textit{\ then for any sequence of constants }$\bar{B}_{n}:$\textit{\ }$0<\bar{B} _{n}<\left( \ln r_{n}\right) ^{\frac{1}{k}}$\textit{\ and the corresponding set }$B_{n}=\left\{ \zeta :\left\Vert \zeta \right\Vert <\bar{B}_{n}\right\} $\textit{\ the sequence of regularized estimators }$\phi _{u}^{-1}(\varepsilon _{n}-\varepsilon )I(B_{n})$\textit{\ converges to zero in probability in }$S^{\ast }.$\textit{\ }
Proof. For $n$ the value of the random functional \begin{equation*} (\phi _{u}^{-1}(\varepsilon _{n}-\varepsilon )I(B_{n}),\psi )=\int \bar{b} ^{-1}(\zeta )r_{n}(\varepsilon _{n}-\varepsilon )r_{n}^{-1}I(B_{n})\exp \left( \Lambda (\zeta )\right) \psi (\zeta )d\zeta . \end{equation*} Multiplication by $\bar{b}^{-1}\in \Phi (m,V),$ that corresponds to $\phi _{u}=b$ does not affect convergence thus $\bar{b}^{-1}(\zeta )r_{n}(\varepsilon _{n}-\varepsilon )$ converges to zero in probability in $ S^{\ast }.$ To show that $(\phi _{u}^{-1}(\varepsilon _{n}-\varepsilon )I(B_{n}),\psi )$ converges to zero it is sufficient to show that the function $r_{n}^{-1}I(B_{n})\exp \left( \Lambda (\zeta )\right) \psi (\zeta ) $ is bounded$.$ It is then sufficient to find $B_{n}$ such that $ r_{n}^{-1}I(B_{n})\exp \left( \Lambda (\zeta )\right) $ is bounded (by possibly a polynomial), thus it is sufficient that $\underset{B_{n}}{\sup } \left\vert \exp \left( \Lambda (\zeta )\right) r_{n}^{-1}\right\vert $ be bounded. This will hold if $\exp \left( \bar{B}_{n}^{k}\right) <r_{n},$ $ \bar{B}_{n}^{k}<\ln r_{n}.\blacksquare $
Of course an even slower growth for spectral cut-off would result from $ \Lambda $ that grows faster than a polynomial. The consequence of the slow growth of the support is usually a correspondingly slow rate of convergence for $\phi _{u}^{-1}\varepsilon _{n}I(B_{n}).$ Additional conditions (as in function spaces) are needed for the regularized estimators to converge to the true $\gamma $.
It may be advantageous to focus on lower frequency components and ignore the contribution from high frequencies when the features of interest depend on the contribution at low frequency.
\section{Concluding remarks}
Working in spaces of generalized functions extends the results on nonparametric identification and well-posedness for a wide class of models. Here identification in deconvolution is extended from the usually considered classes of integrable density functions to generalized densities in the class of all distributions. In regression with Berkson error, nonparametric identification in $S^{\ast }$ holds for functions of polynomial growth, extending the usual results obtained in $L_{1};$ a similar extension applies to regression with measurement error and Berkson type measurement; this allows one to consider binary choice and polynomial regression models. Also, identification in models with a sum-of-peaks regression function that cannot be represented in function spaces is included. Well-posedness results in $S^{\ast }$ also extend the results in the literature provided in function spaces; well-posedness of deconvolution holds as long as the characteristic function of the error distribution does not go to zero at infinity too fast (as e.g. super-smooth distributions do) and a similar condition provides well-posedness in the other models considered here.
Further investigation of the properties of estimators in spaces of generalized functions requires deriving the generalized limit process for the function being estimated and investigating when it can be described as a generalized Gaussian process. A generalized Gaussian limit process holds for kernel estimator of the generalized density function (Zinde-Walsh, 2008). Determining the properties of inference based on the limit process for generalized random functions requires both further theoretical development and simulations evidence.
\end{document} |
\begin{document}
\title{{\LARGE\bf
Open and other kinds of extensions over zero-dimensional local compactifications}} \begin{abstract} {\footnotesize
\noindent Generalizing a theorem of Ph. Dwinger \cite{Dw}, we describe the partially ordered set of all (up to equivalence) zero-dimensional locally compact Hausdorff extensions of a zero-dimensional Hausdorff space. Using this description, we find the necessary and sufficient conditions which a map between two zero-dimensional Hausdorff spaces has to satisfy in order to have some kind of extension over arbitrary Hausdorff zero-dimensional local compactifications of these spaces given in advance; we regard the following kinds of extensions: continuous, open, quasi-open, skeletal, perfect, injective, surjective. In this way we generalize some classical results of B. Banaschewski \cite{Ba} about the maximal zero-dimensional Hausdorff compactification. Extending a recent theorem of G. Bezhanishvili \cite{B}, we describe the local proximities corresponding to the zero-dimensional Hausdorff local compactifications.} \end{abstract}
{\footnotesize {\em MSC:} primary 54C20, 54D35; secondary 54C10, 54D45, 54E05.
{\em Keywords:} Locally compact (compact) Hausdorff zero-dimensional extensions; Banaschewski compactification; Zero-dimensi\-o\-nal local proximities; Local Boolean algebra; Admissible ZLB-algebra; (Quasi-)Open extensions; Perfect extensions; Skeletal extensions.}
\footnotetext[1]{{\footnotesize {\em E-mail address:} [email protected]}}
\baselineskip = \normalbaselineskip
\section*{Introduction}
In \cite{Ba}, B. Banaschewski proved that every zero-dimensional Hausdorff space $X$ has a zero-dimensional Hausdorff compactification $\beta_0X$ with the following remarkable property: every continuous map $f:X\longrightarrow Y$, where $Y$ is a zero-dimensional Hausdorff compact space, can be extended to a continuous map $\beta_0f:\beta_0X\longrightarrow Y$; in particular, $\beta_0X$ is the maximal zero-dimensional Hausdorff compactification of $X$. As far as I know, there are no descriptions of the maps $f$ for which the extension $\beta_0f$ is open or quasi-open. In this paper we solve the following more general problem: let $f:X\longrightarrow Y$ be a map between two zero-dimensional Hausdorff spaces and $(lX,l_X)$, $(lY,l_Y)$ be Hausdorff zero-dimensional locally compact extensions of $X$ and $Y$, respectively; find the necessary and sufficient conditions which the map $f$ has to satisfy in order to have an $``$extension" $g:lX\longrightarrow lY$ (i.e. $g\circ l_X=l_Y\circ f$) which is a map with some special properties (we regard the following properties: continuous, open, perfect, quasi-open, skeletal, injective, surjective). In \cite{LE2}, S. Leader solved such a problem for continuous extensions over Hausdorff local compactifications (= locally compact extensions)
using the language of {\em local proximities} (the latter, as he showed, are in a bijective correspondence (preserving the order) with the Hausdorff local compactifications regarded up to equivalence). Hence, if one can describe the local proximities which correspond to zero-dimensional Hausdorff local compactifications, then the above problem will be solved for continuous extensions. Recently, G. Bezhanishvili \cite{B}, solving an old problem of L. Esakia, described the {\em Efremovi\v{c} proximities} which correspond (in the sense of the famous Smirnov Compactification Theorem \cite{Sm2}) to the zero-dimensional Hausdorff compactifications
(and called them {\em zero-dimensional Efremovi\v{c} proximities}).
We extend here his result to Leader's local proximities, i.e. we describe the local proximities which correspond to the Hausdorff zero-dimensional local compactifications and call them {\em zero-dimensional local proximities} (see Theorem \ref{zdlpth}). We do not use, however, these zero-dimensional local proximities for solving our problem. We introduce a simpler notion (namely, the {\em admissible ZLB-algebra}) for doing this. Ph. Dwinger \cite{Dw} proved, using the Stone Duality Theorem \cite{ST}, that the ordered set of all, up to equivalence, zero-dimensional Hausdorff compactifications of a zero-dimensional Hausdorff space $X$ is isomorphic to the ordered by inclusion set of all {\em Boolean bases} of $X$ (i.e. of those bases of $X$ which are Boolean subalgebras of the Boolean algebra $CO(X)$ of all clopen (= closed and open) subsets of $X$). This description is much simpler than that by Efremovi\v{c} proximities. It was rediscovered by K. D. Magill Jr. and J. A. Glasenapp \cite{MG} and applied very successfully to the study of the poset of all, up to equivalence, zero-dimensional Hausdorff compactifications of a zero-dimensional Hausdorff space. We extend the above-cited Dwinger Theorem \cite{Dw} to the zero-dimensional Hausdorff {\em local compactifications} (see Theorem \ref{dwingerlc} below) with the help of our generalization of the Stone Duality Theorem proved in \cite{Di4} and the notion of an $``$admissible ZLB-algebra" which we introduce here. We obtain the solution of the problem formulated above in the language of the admissible ZLB-algebras (see Theorem \ref{zdextcmain}). As a corollary, we characterize the maps $f:X\longrightarrow Y$ between two Hausdorff zero-dimensional spaces $X$ and $Y$ for which the extension $\beta_0f:\beta_0X\longrightarrow\beta_0Y$ is open or quasi-open (see Corollary \ref{zdextcmaincb}).
Of course, one can pass from admissible ZLB-algebras to zero-dimensional local proximities and conversely (see Theorem \ref{ailp} below; it generalizes
an analogous result about the connection between Boolean bases and
zero-dimensional Efremovi\v{c} proximities
obtained in \cite{B}).
We now fix the notations.
If ${\cal C}$ denotes a category, we write $X\in \card{\cal C}$ if $X$ is
an object of ${\cal C}$, and $f\in {\cal C}(X,Y)$ if $f$ is a morphism of
${\cal C}$ with domain $X$ and codomain $Y$. By $Id_{{\cal C}}$ we denote the identity functor on the category ${\cal C}$.
All lattices are with top (= unit) and bottom (= zero) elements, denoted respectively by 1 and 0. We do not require the elements $0$ and $1$ to be distinct. Since we follow Johnstone's terminology from \cite{J}, we will use the term {\em pseudolattice} for a poset having all finite non-empty meets and joins; the pseudolattices with a bottom will be called {\em $\{0\}$-pseudolattices}. If $B$ is a Boolean algebra then we denote by $Ult(B)$ the set of all ultrafilters in $B$.
If $X$ is a set then we denote the power set of $X$ by $P(X)$; the identity function on $X$ is denoted by $id_X$.
If $(X,\tau)$ is a topological space and $M$ is a subset of $X$, we denote by $\mbox{{\rm cl}}_{(X,\tau)}(M)$ (or simply by $\mbox{{\rm cl}}(M)$ or $\mbox{{\rm cl}}_X(M)$) the closure of $M$ in $(X,\tau)$ and by $\mbox{{\rm int}}_{(X,\tau)}(M)$ (or briefly by $\mbox{{\rm int}}(M)$ or $\mbox{{\rm int}}_X(M)$) the interior of $M$ in $(X,\tau)$.
The closed maps and the open maps between topological spaces are assumed to be continuous but are not assumed to be onto. Recall that a map is {\em perfect}\/ if it is closed and compact (i.e. point inverses are compact sets).
For all notions and notations not defined here see \cite{Dw, E2, J, NW}.
\section{Preliminaries}
We will need some of our results from \cite{Di4} concerning the extension of the Stone Duality Theorem to the category ${\bf ZLC}$ of all locally compact zero-dimensio\-nal Hausdorff spaces and all continuous maps between them.
Recall that if $(A,\le)$ is a poset and $B\subseteq A$ then $B$ is said to be a {\em dense subset of} $A$ if for any $a\in A\setminus\{0\}$ there exists $b\in B\setminus\{0\}$ such that $b\le a$.
\begin{defi}\label{deflba}{\rm (\cite{Di4})} \rm A pair $(A,I)$, where $A$ is a Boolean algebra and $I$ is an ideal of $A$ (possibly non proper) which is dense in $A$, is called a {\em local Boolean algebra} (abbreviated as LBA). Two LBAs $(A,I)$ and $(B,J)$ are said to be {\em LBA-isomorphic} (or, simply, {\em isomorphic}) if there exists a Boolean isomorphism $\varphi:A\longrightarrow B$ such that $\varphi(I)=J$. \end{defi}
Let $A$ be a distributive $\{0\}$-pseudolattice and $Idl(A)$ be the frame of all ideals of $A$. If $J\in Idl(A)$ then we will write $\neg_A J$ (or simply $\neg J$) for the pseudocomplement of $J$ in $Idl(A)$ (i.e. $\neg J=\bigvee\{I\in Idl(A)\ |\ I\wedge J=\{0\}\}$). Recall that an ideal $J$ of $A$ is called {\em simple} (Stone \cite{ST}) if $J\vee\neg J= A$ (i.e. $J$ has a complement in $Idl(A)$). As it is proved in \cite{ST}, the set $Si(A)$ of all simple ideals of $A$ is a Boolean algebra with respect to the lattice operations in $Idl(A)$.
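In a finite Boolean algebra every ideal is principal and hence simple; a brute-force sanity check of this (all names and the encoding are ours, for illustration only) over the four-element algebra $A=P(\{0,1\})$, where the pseudocomplement is realized as $\neg J=\{x\ |\ x\wedge j=0 \text{ for all } j\in J\}$:

```python
from itertools import combinations

# Finite check: in A = P({0,1}) every ideal J satisfies J v (not J) = A.
S = frozenset({0, 1})
A = [frozenset(c) for r in range(3) for c in combinations(sorted(S), r)]
BOT = frozenset()

def is_ideal(J):
    J = set(J)
    return (BOT in J
            and all(y in J for x in J for y in A if y <= x)   # downward closed
            and all(x | y in J for x in J for y in J))        # closed under joins

ideals = [frozenset(J) for r in range(len(A) + 1)
          for J in combinations(A, r) if is_ideal(J)]

def pseudocomplement(J):
    return frozenset(x for x in A if all(x & j == BOT for j in J))

def ideal_join(J1, J2):
    # smallest ideal containing both: downward closure of pairwise joins
    tops = {x | y for x in J1 for y in J2}
    return frozenset(z for z in A if any(z <= t for t in tops))

all_simple = all(ideal_join(J, pseudocomplement(J)) == frozenset(A)
                 for J in ideals)
```

The four ideals found are the principal ideals, and each has its pseudocomplement as a complement in $Idl(A)$; of course, in the infinite case simple ideals form a proper subfamily.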
\begin{defi}\label{defzlba}{\rm (\cite{Di4})} \rm An LBA $(B, I)$ is called a {\em ZLB-algebra} (briefly, {\em ZLBA}) if, for every $J\in Si(I)$, the join $\bigvee_B J$($=\bigvee_B \{a\ |\ a\in J\}$) exists.
Let ${\bf ZLBA}$ be the category whose objects are all ZLBAs and whose morphisms are all functions $\varphi:(B, I)\longrightarrow(B_1, I_1)$ between the objects of ${\bf ZLBA}$ such that $\varphi:B\longrightarrow B_1$ is a Boolean homomorphism satisfying the following condition:
\noindent(ZLBA) For every $b\in I_1$ there exists $a\in I$ such that $b\le \varphi(a)$;
\noindent let the composition between the morphisms of ${\bf ZLBA}$ be the usual composition between functions, and the ${\bf ZLBA}$-identities be the identity functions. \end{defi}
\begin{exa}\label{zlbaexa}{\rm (\cite{Di4})} \rm Let $B$ be a Boolean algebra. Then the pair $(B,B)$ is a ZLBA. \end{exa}
\begin{notas}\label{kxckx} \rm Let $X$ be a topological space. We will denote by $CO(X)$ the set of all clopen (= closed and open) subsets of $X$,
and by $CK(X)$ the set of all clopen compact subsets of $X$. For every $x\in X$, we set
$u_x^{CO(X)}=\{F\in CO(X)\ |\ x\in F\}.$
When there is no ambiguity, we will write $``u_x^C$" instead of $``u_x^{CO(X)}$". \end{notas}
The next assertion follows from the results obtained in \cite{R2,Di4}.
\begin{pro}\label{psiult} Let $(A,I)$ be a ZLBA. Set $X=\{u\in Ult(A)\ |\ u\cap I\neq\emptyset\}$. Set, for every $a\in A$, $\lambda_A^C(a)=\{u\in X\ |\ a\in u\}$. Let $\tau$ be the topology on $X$ having as an open base the family $\{\lambda_A^C(a)\ |\ a\in I\}$. Then $(X,\tau)$ is a zero-dimensional locally compact Hausdorff space, $\lambda_A^C(A)= CO(X)$, $\lambda_A^C(I)=CK(X)$ and $\lambda_A^C:A\longrightarrow CO(X)$ is a Boolean isomorphism; hence, $\lambda_A^C:(A,I)\longrightarrow (CO(X),CK(X))$ is a ${\bf ZLBA}$-isomorphism. We set $\Theta^a(A,I)=(X,\tau)$. \end{pro}
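A finite sanity check of this construction (the encoding and all helper names are ours): take the ZLBA $(A,I)$ with $A=I=P(\{0,1,2\})$, as in Example \ref{zlbaexa}. Brute force recovers exactly the three principal ultrafilters $u_x$, and $\lambda_A^C$ acts as a Boolean isomorphism onto the power set of the three-point space.

```python
from itertools import combinations

# Finite illustration of Proposition: for A = I = P({0,1,2}) the space X
# consists of the principal ultrafilters u_x = {a : x in a}, and
# lam(a) = {u : a in u} preserves the Boolean operations.
S = frozenset({0, 1, 2})
A = [frozenset(c) for r in range(4) for c in combinations(sorted(S), r)]

def is_ultrafilter(u):
    u = set(u)
    return (frozenset() not in u and len(u) > 0
            and all(x & y in u for x in u for y in u)            # closed under meets
            and all(y in u for x in u for y in A if x <= y)      # upward closed
            and all((a in u) != ((S - a) in u) for a in A))      # a xor complement

ultrafilters = [frozenset(u) for r in range(len(A) + 1)
                for u in combinations(A, r) if is_ultrafilter(u)]

lam = lambda a: frozenset(u for u in ultrafilters if a in u)

# lam preserves meets and complements, hence is a Boolean homomorphism
preserves = (all(lam(a & b) == lam(a) & lam(b) for a in A for b in A)
             and all(lam(S - a) == frozenset(ultrafilters) - lam(a) for a in A))
```

Since here $I=A$, the condition $u\cap I\neq\emptyset$ is vacuous and the resulting space is the discrete (compact) three-point space, matching the fact that $(A,A)$ corresponds to a compact zero-dimensional space.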
\begin{theorem}\label{genstonec}{\rm (\cite{Di4})} The category\/ ${\bf ZLC}$ is dually equivalent to the category\/ ${\bf ZLBA}$. In more details, let $\Theta^a:{\bf ZLBA}\longrightarrow{\bf ZLC}$ and $\Theta^t:{\bf ZLC}\longrightarrow{\bf ZLBA}$ be two contravariant functors defined as follows: for every $X\in\card{{\bf ZLC}}$, we set $\Theta^t(X)=(CO(X), CK(X))$, and for every $f\in{\bf ZLC}(X,Y)$, $\Theta^t(f):\Theta^t(Y)\longrightarrow\Theta^t(X)$ is defined by the formula $\Theta^t(f)(G)=f^{-1}(G)$, where $G\in CO(Y)$;
for the definition of $\Theta^a(B, I)$, where $(B, I)$ is a ZLBA, see
\ref{psiult};
for every $\varphi\in{\bf ZLBA}((B, I),(B_1, J))$,
$\Theta^a(\varphi):\Theta^a(B_1, J)\longrightarrow\Theta^a(B, I)$ is given by the formula $\Theta^a(\varphi)(u^\prime)=\varphi^{-1}(u^\prime), \ \forall u^\prime\in\Theta^a(B_1,J)$; then $t^{C}:Id_{{\bf ZLC}}\longrightarrow \Theta^a\circ \Theta^t$, where $t^{C}(X)=t_X^C, \ \forall X\in\card{\bf ZLC}$ and $t_X^C(x)=u_x^C$, for every $x\in X$, is a natural isomorphism (hence, in particular, $t_X^C:X\longrightarrow\Theta^a(\Theta^t(X))$ is a homeomorphism for every $X\in\card{{\bf ZLC}}$); also, $\lambda^C: Id_{{\bf ZLBA}}\longrightarrow \Theta^t\circ \Theta^a$, where $ \lambda^C(B, I)=\lambda_B^C, \ \forall (B, I)\in\card{\bf ZLBA}$, is a natural isomorphism. \end{theorem}
Finally, we will recall some definitions and facts from the theory of extensions of topological spaces, as well as Leader's fundamental Local Compactification Theorem \cite{LE2}.
Let $X$ be a Tychonoff space. We will denote by ${\cal L}(X)$ the set of all, up to equivalence, locally compact Hausdorff extensions of $X$ (recall that two (locally compact Hausdorff) extensions $(Y_1,f_1)$ and $(Y_2,f_2)$ of $X$ are said to be {\em equivalent}\/ iff there exists a homeomorphism $h:Y_1\longrightarrow Y_2$ such that $h\circ f_1=f_2$). Let $[(Y_i,f_i)]\in{\cal L}(X)$, where $i=1,2$. We set $[(Y_1,f_1)]\le [(Y_2,f_2)]$ if there exists a continuous mapping $h:Y_2\longrightarrow Y_1$ such that $f_1=h\circ f_2$. Then $({\cal L}(X),\le)$ is a poset (=partially ordered set).
Let $X$ be a Tychonoff space. We will denote by ${\cal K}(X)$ the set of all, up to equivalence, Hausdorff compactifications of $X$.
\begin{nist}\label{nilea} \rm Recall that if $X$ is a set and $P(X)$ is the power set of $X$ ordered by inclusion, then a triple $(X,\delta,{\cal B})$ is called a {\em local proximity space} (see \cite{LE2}) if ${\cal B}$ is an ideal (possibly non-proper) of $P(X)$ and $\delta$ is a symmetric binary relation on $P(X)$ satisfying the following conditions:
\noindent(P1) $\emptyset(-\delta) A$ for every $A\subseteq X$ ($``-\delta$" means $``$not $\delta$");
\noindent(P2) $A\delta A$ for every $A\not =\emptyset$;
\noindent(P3) $A\delta(B\cup C)$ iff $A\delta B$ or $A\delta C$;
\noindent(BC1) If $A\in {\cal B}$, $C\subseteq X$ and $A\ll C$ (where, for $D,E\subseteq X$, $D\ll E$ iff $D(-\delta) (X\setminus E)$) then there exists a $B\in{\cal B}$ such that $A\ll B\ll C$;
\noindent(BC2) If $A\delta C$, then there is a $B\in{\cal B}$ such that $B\subseteq C$ and $A\delta B$.
\noindent A local proximity space $(X,\delta,{\cal B})$ is said to be {\em separated} if $\delta$ is the identity relation on singletons. Recall that every separated local proximity space $(X,\delta,{\cal B})$ induces a Tychonoff topology $\tau_{(X,\delta,{\cal B})}$ in $X$ by defining $\mbox{{\rm cl}}(M)=\{x\in X\ |\ x\delta M\}$ for every $M\subseteq X$ (\cite{LE2}). If $(X,\tau)$ is a topological space then we say that $(X,\delta,{\cal B})$ is a {\em local proximity space on} $(X,\tau)$ if $\tau_{(X,\delta,{\cal B})}=\tau$.
The set of all separated local proximity spaces on a Tychonoff space $(X,\tau)$ will be denoted by ${\cal L}{\cal P}(X,\tau)$. An order in ${\cal L}{\cal P}(X,\tau)$ is defined by $(X,\beta_1,{\cal B}_1)\preceq (X,\beta_2,{\cal B}_2)$ if $\beta_2\subseteq\beta_1$ and ${\cal B}_2\subseteq{\cal B}_1$ (see \cite{LE2}).
A function $f:X_1\longrightarrow X_2$ between two local proximity spaces $(X_1,\beta_1,{\cal B}_1)$ and $(X_2,\beta_2,{\cal B}_2)$ is said to be an {\em equicontinuous mapping\/} (see \cite{LE2}) if the following two conditions are fulfilled:
\noindent(EQ1) $A\beta_1 B$ implies $f(A)\beta_2 f(B)$, for $A,B\subseteq X_1$, and
\noindent(EQ2) $B\in{\cal B}_1$ implies $f(B)\in{\cal B}_2$. \end{nist}
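The motivating example is the following one: if $(X,\tau)$ is a locally compact Hausdorff space, ${\cal B}=\{B\subseteq X\ |\ \mbox{{\rm cl}}(B)$ is compact$\}$ and, for $A,B\subseteq X$, $A\delta B$ iff $\mbox{{\rm cl}}(A)\cap\mbox{{\rm cl}}(B)\neq\emptyset$, then $(X,\delta,{\cal B})$ is a separated local proximity space on $(X,\tau)$; by Theorem \ref{Leader} below, it corresponds to the trivial extension $(X,id_X)$ of $X$.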
\begin{theorem}\label{Leader} {\rm (S. Leader \cite{LE2})} Let $(X,\tau)$ be a Tychonoff space. Then there exists an isomorphism $\Lambda_X$ between the ordered sets $({\cal L}(X,\tau),\le)$ and $({\cal L}{\cal P}(X,\tau),\preceq)$. In more detail, for every $(X, \rho, {\cal B})\in{\cal L}{\cal P}(X,\tau)$ there exists a locally compact Hausdorff extension $(Y,f)$ of $X$ satisfying the following two conditions:
\noindent(a) $A \rho B$ iff $\mbox{{\rm cl}}_Y(f(A))\cap \mbox{{\rm cl}}_Y(f(B))\neq\emptyset$;
\noindent(b) $B\in{\cal B}$ iff $\mbox{{\rm cl}}_Y(f(B))$ is compact.
\noindent Such a local compactification is unique up to equivalence; we set $(Y,f)=L(X,\rho,{\cal B})$ and $(\Lambda_X)^{-1}(X,\rho,{\cal B})=[(Y,f)]$. The space $Y$ is compact iff $X\in{\cal B}$. Conversely, if $(Y,f)$ is a locally compact Hausdorff extension of $X$ and $\rho$ and ${\cal B}$ are defined by (a) and (b), then $(X, \rho, {\cal B})$ is a separated local proximity space, and we set $\Lambda_X([(Y,f)])=(X,\rho,{\cal B})$.
Let $(X_i,\beta_i,{\cal B}_i)$, $i=1,2$, be two separated local proximity spaces and $f:X_1\longrightarrow X_2$ be a function. Let $(Y_i,f_i)=L(X_i,\beta_i,{\cal B}_i)$, where $i=1,2$. Then there exists a continuous map $L(f):Y_1\longrightarrow Y_2$ such that $f_2\circ f= L(f)\circ f_1$ iff $f$ is an equicontinuous map between $(X_1,\beta_1,{\cal B}_1)$ and $(X_2,\beta_2,{\cal B}_2)$. \end{theorem}
Recall that a subset $F$ of a topological space $(X,\tau)$ is called {\em regular closed}\/ if $F=\mbox{{\rm cl}}(\mbox{{\rm int}} (F))$. Clearly, $F$ is regular closed iff it is the closure of an open set. For any topological space $(X,\tau)$, the collection $RC(X,\tau)$ (we will often write simply $RC(X)$) of all regular closed subsets of $(X,\tau)$ becomes a complete Boolean algebra $(RC(X,\tau),0,1,\wedge,\vee,{}^*)$ under the following operations: $ 1 = X, 0 = \emptyset, F^* = \mbox{{\rm cl}}(X\setminus F), F\vee G=F\cup G, F\wedge G =\mbox{{\rm cl}}(\mbox{{\rm int}}(F\cap G)). $ The infinite operations are given by the following formulas: $\bigvee\{F_\gamma\ |\ \gamma\in\Gamma\}=\mbox{{\rm cl}}(\bigcup\{F_\gamma\ |\ \gamma\in\Gamma\})$ and $\bigwedge\{F_\gamma\ |\ \gamma\in\Gamma\}=\mbox{{\rm cl}}(\mbox{{\rm int}}(\bigcap\{F_\gamma\ |\ \gamma\in\Gamma\})).$ We denote by $CR(X,\tau)$ the family of all compact regular closed subsets of $(X,\tau)$. We will often write $CR(X)$ instead of $CR(X,\tau)$.
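For example, in $RC(\mathbb{R})$ we have, for $F=[0,1]$ and $G=[1,2]$, that $F\wedge G=\mbox{{\rm cl}}(\mbox{{\rm int}}(\{1\}))=\emptyset$, while $F\cap G=\{1\}$; also $F^*=\mbox{{\rm cl}}(\mathbb{R}\setminus[0,1])=(-\infty,0]\cup[1,\infty)$, so that $F\wedge F^*=0$ and $F\vee F^*=1$. This shows why the meet in $RC(X)$ is, in general, not the set-theoretic intersection.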
We will need a lemma from \cite{CNG}:
\begin{lm}\label{isombool} Let $X$ be a dense subspace of a topological space $Y$. Then the functions $r:RC(Y)\longrightarrow RC(X)$, $F\mapsto F\cap X$, and $e:RC(X)\longrightarrow RC(Y)$, $G\mapsto \mbox{{\rm cl}}_Y(G)$, are Boolean isomorphisms between Boolean algebras $RC(X)$ and $RC(Y)$, and $e\circ r=id_{RC(Y)}$, $r\circ e=id_{RC(X)}$. \end{lm}
\section{A Generalization of Dwinger Theorem}
\begin{defi}\label{admzlba} \rm Let $X$ be a zero-dimensional Hausdorff space. Then:
\noindent(a) A ZLBA $(A,I)$ is called {\em admissible for} $X$ if $A$ is a Boolean subalgebra of the Boolean algebra $CO(X)$ and $I$ is an open base of $X$.
\noindent(b) The set of all ZLBAs which are admissible for $X$ is denoted by ${\cal Z}{\cal A}(X)$.
\noindent(c) If $(A_1,I_1),(A_2,I_2)\in {\cal Z}{\cal A}(X)$ then we set $(A_1,I_1)\preceq_0(A_2,I_2)$ if $A_1$ is a Boolean subalgebra of $A_2$ and for every $V\in I_2$ there exists $U\in I_1$ such that $V\subseteq U$. \end{defi}
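For example, if $X$ is an infinite discrete space and $I_{fin}$ denotes the ideal of all finite subsets of $X$, then $(P(X),I_{fin})$ and $(P(X),P(X))$ are both admissible for $X$; as we shall see (Theorem \ref{dwingerlc}), the first one corresponds to the trivial extension $(X,id_X)$ and the second one to the greatest zero-dimensional Hausdorff compactification of $X$ (the Banaschewski compactification $\beta_0X$ considered below).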
\begin{nota}\label{lz} \rm The set of all (up to equivalence) zero-dimensional locally compact Hausdorff extensions of a zero-dimensional Hausdorff space $X$ will be denoted by ${\cal L}_0(X)$. \end{nota}
\begin{theorem}\label{dwingerlc} Let $X$ be a zero-dimensional Hausdorff space. Then the ordered sets $({\cal L}_0(X),\le)$ and $({\cal Z}{\cal A}(X),\preceq_0)$ are isomorphic; moreover, the zero-dimensional compact Hausdorff extensions of $X$ correspond to ZLBAs of the form $(A,A)$. \end{theorem}
\hspace{-1cm}{\em Proof.}~~ Let $(Y,f)$ be a locally compact Hausdorff zero-dimensional extension of $X$. Set \begin{equation}\label{01} A_{(Y,f)}=f^{-1}(CO(Y)) \mbox{ and } I_{(Y,f)}=f^{-1}(CK(Y)). \end{equation}
Note that $A_{(Y,f)}=\{F\in CO(X)\ |\ \mbox{{\rm cl}}_Y(f(F))$ is open in $Y\}$ and $I_{(Y,f)}=\{F\in A_{(Y,f)}\ |\ \mbox{{\rm cl}}_Y(f(F))$ is compact$\}$. We will show that $(A_{(Y,f)},I_{(Y,f)})\in{\cal Z}{\cal A}(X)$. Obviously, the map
$r_{(Y,f)}^0:(CO(Y),CK(Y))\longrightarrow(A_{(Y,f)},I_{(Y,f)}), \ G\mapsto f^{-1}(G),$
is a Boolean isomorphism such that $r_{(Y,f)}^0(CK(Y))=I_{(Y,f)}$. Hence $(A_{(Y,f)},I_{(Y,f)})$ is a ZLBA and $r_{(Y,f)}^0$ is an LBA-isomorphism. It is easy to see that $I_{(Y,f)}$ is a base of $X$ (because $Y$ is locally compact). Hence $(A_{(Y,f)},I_{(Y,f)})\in{\cal Z}{\cal A}(X)$.
It is clear that if $(Y_1,f_1)$ is a locally compact Hausdorff zero-dimensional extension of $X$ equivalent to the extension $(Y,f)$, then $(A_{(Y,f)},I_{(Y,f)})=(A_{(Y_1,f_1)},I_{(Y_1,f_1)})$. Therefore, the map \begin{equation}\label{dw1} \alpha_X^0:{\cal L}_0(X)\longrightarrow{\cal Z}{\cal A}(X), \ [(Y,f)]\mapsto(A_{(Y,f)},I_{(Y,f)}), \end{equation}
is well-defined.
Let $(A,I)\in{\cal Z}{\cal A}(X)$ and $Y=\Theta^a(A,I)$. Then $Y$ is a locally compact Hausdorff zero-dimensional space. For every $x\in X$, set \begin{equation}\label{02} u_{x,A}=\{F\in A\ |\ x\in F\}. \end{equation}
Since $I$ is a base of $X$, we get that $u_{x,A}$ is an ultrafilter in $A$ and $u_{x,A}\cap I\neq\emptyset$, i.e. $u_{x,A}\in Y$. Define \begin{equation}\label{0f} f_{(A,I)}:X\longrightarrow Y, \ x\mapsto u_{x,A}. \end{equation}
Set, for short, $f=f_{(A,I)}$. Obviously, $\mbox{{\rm cl}}_Y(f(X))=Y$. It is easy to see that $f$ is a homeomorphic embedding. Hence $(Y,f)$ is a locally compact Hausdorff zero-dimensional extension of $X$. We now set: \begin{equation}\label{dw2} \beta_X^0:{\cal Z}{\cal A}(X)\longrightarrow {\cal L}_0(X), \ (A,I)\mapsto [(\Theta^a(A,I),f_{(A,I)})]. \end{equation}
We will show that $\alpha_X^0\circ\beta_X^0=id_{{\cal Z}{\cal A}(X)}$ and $\beta_X^0\circ\alpha_X^0=id_{{\cal L}_0(X)}$.
Let $[(Y,f)]\in{\cal L}_0(X)$. Set, for short, $A=A_{(Y,f)}$, $I=I_{(Y,f)}$, $g=f_{(A,I)}$, $Z=\Theta^a(A,I)$ and $\varphi=r_{(Y,f)}^0$. Then $\beta_X^0(\alpha_X^0([(Y,f)]))=\beta_X^0(A,I)= [(Z,g)]$. We have to show that $[(Y,f)]=[(Z,g)]$. Since $\varphi$ is an LBA-isomorphism, we get that $h=\Theta^a(\varphi):Z\longrightarrow\Theta^a(\Theta^t(Y))$ is a homeomorphism. Set $Y^\prime=\Theta^a(\Theta^t(Y))$. By Theorem \ref{genstonec}, the map $t_Y^C:Y\longrightarrow Y^\prime, \ y\mapsto u_y^{CO(Y)}$ is a homeomorphism. Let $h^\prime=(t_Y^C)^{-1}\circ h$. Then $h^\prime:Z\longrightarrow Y$ is a homeomorphism. We will prove that $h^\prime\circ g=f$ and this will imply that $[(Y,f)]=[(Z,g)]$. Let $x\in X$. Then $h^\prime(g(x))=h^\prime(u_{x,A})=(t_Y^C)^{-1}(h(u_{x,A}))=(t_Y^C)^{-1}(\varphi^{-1}(u_{x,A}))$. We have that $u_{x,A}=\{f^{-1}(F)\ |\ F\in CO(Y), x\in f^{-1}(F)\}=\{\varphi(F)\ |\ F\in CO(Y), x\in f^{-1}(F)\}$. Thus $\varphi^{-1}(u_{x,A})=\{F\in CO(Y)\ |\ f(x)\in F\}=u_{f(x)}^{CO(Y)}$. Hence $(t_Y^C)^{-1}(\varphi^{-1}(u_{x,A}))=f(x)$. So, $h^\prime\circ g=f$. Therefore, $\beta_X^0\circ\alpha_X^0=id_{{\cal L}_0(X)}$.
Let $(A,I)\in{\cal Z}{\cal A}(X)$ and $Y=\Theta^a(A,I)$. Set $f=f_{(A,I)}$, $B=A_{(Y,f)}$ and $J=I_{(Y,f)}$. Then
$\alpha_X^0(\beta_X^0(A,I))=(B,J)$.
By Theorem \ref{genstonec}, we have that $\lambda_A^C:(A,I)\longrightarrow(CO(Y),CK(Y))$ is an LBA-isomorphism. Hence $\lambda_A^C(A)=CO(Y)$ and $\lambda_A^C(I)=CK(Y)$. We will show that $f^{-1}(\lambda_A^C(F))=F$, for every $F\in A$. Recall that $\lambda_A^C(F)=\{u\in Y\ |\ F\in u\}$. Now we have that if $F\in A$ then $f^{-1}(\lambda_A^C(F))=\{x\in X\ |\ f(x)\in\lambda_A^C(F)\}=\{x\in X\ |\ u_{x,A}\in\lambda_A^C(F)\}=\{x\in X\ |\ F\in u_{x,A}\}=\{x\in X\ |\ x\in F\}=F$. Thus \begin{equation}\label{05} B=f^{-1}(CO(Y))=A \mbox{ and } J=f^{-1}(CK(Y))=I. \end{equation}
Therefore, $\alpha_X^0\circ\beta_X^0=id_{{\cal Z}{\cal A}(X)}$.
We will now prove that $\alpha_X^0$ and $\beta_X^0$ are monotone maps.
Let $[(Y_i,f_i)]\in{\cal L}_0(X)$, where $i=1,2$, and $[(Y_1,f_1)]\le[(Y_2,f_2)]$. Then there exists a continuous map $g:Y_2\longrightarrow Y_1$ such that $g\circ f_2=f_1$. Set $A_i=A_{(Y_i,f_i)}$ and $I_i=I_{(Y_i,f_i)}$, $i=1,2$. Then $\alpha_X^0([(Y_i,f_i)])=(A_i,I_i)$, where $i=1,2$. We have to show that $A_1\subseteq A_2$ and for every $V\in I_2$ there exists $U\in I_1$ such that $V\subseteq U$. Let $F\in A_1$. Then $F^\prime=\mbox{{\rm cl}}_{Y_1}(f_1(F))\in CO(Y_1)$ and, hence, $G^\prime=g^{-1}(F^\prime)\in CO(Y_2)$. Thus $(f_2)^{-1}(G^\prime)\in A_2$. Since $(f_2)^{-1}(G^\prime)=(f_2)^{-1}(g^{-1}(F^\prime))=(f_2)^{-1}(g^{-1}(\mbox{{\rm cl}}_{Y_1}(f_1(F))))=(f_1)^{-1}(\mbox{{\rm cl}}_{Y_1}(f_1(F)))=F$, we get that $F\in A_2$. Therefore, $A_1\subseteq A_2$. Further, let $V\in I_2$. Then $V^\prime=\mbox{{\rm cl}}_{Y_2}(f_2(V))\in CK(Y_2)$. Thus $g(V^\prime)$ is a compact subset of $Y_1$. Hence there exists $U\in I_1$ such that $g(V^\prime)\subseteq \mbox{{\rm cl}}_{Y_1}(f_1(U))$. Then $V\subseteq (f_2)^{-1}(g^{-1}(g(\mbox{{\rm cl}}_{Y_2}(f_2(V)))))=(f_1)^{-1}(g(V^\prime))\subseteq (f_1)^{-1}(\mbox{{\rm cl}}_{Y_1}(f_1(U)))=U$. So, $\alpha_X^0([(Y_1,f_1)])\preceq_0\alpha_X^0([(Y_2,f_2)])$. Hence, $\alpha_X^0$ is a monotone function.
Let now $(A_i,I_i)\in{\cal Z}{\cal A}(X)$, where $i=1,2$, and $(A_1,I_1)\preceq_0(A_2,I_2)$. Set, for short, $Y_i=\Theta^a(A_i,I_i)$ and $f_i=f_{(A_i,I_i)}$, $i=1,2$. Then $\beta_X^0(A_i,I_i)=[(Y_i,f_i)]$, $i=1,2$. We will show that $[(Y_1,f_1)]\le[(Y_2,f_2)]$. We have that, for $i=1,2$, $f_i:X\longrightarrow Y_i$ is defined by $f_i(x)=u_{x,A_i}$, for every $x\in X$. We also have that $A_1\subseteq A_2$ and for every $V\in I_2$ there exists $U\in I_1$ such that $V\subseteq U$. Let us regard the function
$\varphi:(A_1,I_1)\longrightarrow(A_2,I_2), \ F\mapsto F.$
Obviously, $\varphi$ is a ${\bf ZLBA}$-morphism. Then $g=\Theta^a(\varphi):Y_2\longrightarrow Y_1$ is a continuous map. We will prove that $g\circ f_2=f_1$, i.e. that for every $x\in X$, $g(u_{x,A_2})=u_{x,A_1}$. So, let $x\in X$. We have that $u_{x,A_2}=\{F\in A_2\ |\ x\in F\}$ and $g(u_{x,A_2})=\varphi^{-1}(u_{x,A_2})$. Clearly, $\varphi^{-1}(u_{x,A_2})=\{F\in A_1\cap A_2\ |\ x\in F\}$. Since $A_1\subseteq A_2$, we get that $\varphi^{-1}(u_{x,A_2})=\{F\in A_1\ |\ x\in F\}=u_{x,A_1}$. So, $g\circ f_2=f_1$. Thus $[(Y_1,f_1)]\le[(Y_2,f_2)]$. Therefore, $\beta_X^0$ is also a monotone function. Since $\beta_X^0=(\alpha_X^0)^{-1}$, we get that $\alpha_X^0$ (as well as $\beta_X^0$) is an isomorphism. \sq
\begin{defi}\label{admba} \rm Let $X$ be a zero-dimensional Hausdorff space.
A Boolean algebra $A$ is called {\em admissible for} $X$ (or, a {\em Boolean base of} $X$) if $A$ is a Boolean subalgebra of the Boolean algebra $CO(X)$ and $A$ is an open base of $X$.
The set of all admissible Boolean algebras for $X$ is denoted by ${\cal B}{\cal A}(X)$. \end{defi}
\begin{nota}\label{cz} \rm The set of all (up to equivalence) zero-dimensional compact Hausdorff
extensions of a zero-dimensional Hausdorff space $X$ will be denoted by ${\cal K}_0(X)$. \end{nota}
\begin{cor}\label{dwinger}{\rm (Ph. Dwinger \cite{Dw})} Let $X$ be a zero-dimensional Hausdorff space. Then the ordered sets $({\cal K}_0(X),\le)$ and $({\cal B}{\cal A}(X),\subseteq)$ are isomorphic. \end{cor}
\hspace{-1cm}{\em Proof.}~~ Clearly, a Boolean algebra $A$ is admissible for $X$ iff the ZLBA $(A,A)$ is admissible for $X$. Also, if $A_1,A_2$ are two Boolean algebras admissible for $X$, then $A_1\subseteq A_2$ iff $(A_1,A_1)\preceq_0(A_2,A_2)$. Since the admissible ZLBAs of the form $(A,A)$, and only these, correspond to the zero-dimensional compact Hausdorff extensions of $X$, our assertion follows from Theorem \ref{dwingerlc}. \sq
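To illustrate Corollary \ref{dwinger}, let $X$ be an infinite discrete space and let $A$ be the Boolean algebra of all finite and cofinite subsets of $X$. Then $A$ is admissible for $X$, and the corresponding zero-dimensional compactification of $X$ is (equivalent to) the one-point (Alexandroff) compactification of $X$: the Stone space of $A$ consists of the principal ultrafilters, which form a copy of $X$, together with the single free ultrafilter consisting of all cofinite subsets of $X$.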
\section{Zero-dimensional Local Proximities}
\begin{defi}\label{zdlpdef} \rm A local proximity space $(X,\delta,{\cal B})$ is called {\em zero-dimensional} if for every $A,B\in{\cal B}$ with $A\ll B$ there exists $C\subseteq X$ such that $A\subseteq C\subseteq B$ and $C\ll C$.
The set of all separated zero-dimensional local proximity spaces on a Tychonoff space $(X,\tau)$ will be denoted by ${\cal L}{\cal P}_0(X,\tau)$. The restriction of the order relation $\preceq$ in ${\cal L}{\cal P}(X,\tau)$ (see \ref{nilea}) to the set ${\cal L}{\cal P}_0(X,\tau)$ will be denoted again by $\preceq$. \end{defi}
\begin{theorem}\label{zdlpth} Let $(X,\tau)$ be a zero-dimensional Hausdorff space. Then the ordered sets $({\cal L}_0(X),\le)$ and $({\cal L}{\cal P}_0(X,\tau),\preceq)$ are isomorphic (see \ref{zdlpdef} and \ref{dwingerlc} for the notations). \end{theorem}
\hspace{-1cm}{\em Proof.}~~ In view of Leader's Theorem \ref{Leader}, we need only show that if $[(Y,f)]\in {\cal L}(X)$ and $\Lambda_X([(Y,f)])=(X,\delta,{\cal B})$ then $Y$ is a zero-dimensional space iff $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X)$.
So, let $Y$ be a zero-dimensional space. Then, by Theorem \ref{Leader}, ${\cal B}=\{B\subseteq X\ |\ \mbox{{\rm cl}}_Y(f(B))$ is compact$\}$, and for every $A,B\subseteq X$, $A\delta B$ iff $\mbox{{\rm cl}}_Y(f(A))\cap\mbox{{\rm cl}}_Y(f(B))\neq\emptyset$. Let $A,B\in{\cal B}$ and $A\ll B$. Then $\mbox{{\rm cl}}_Y(f(A))\cap\mbox{{\rm cl}}_Y(f(X\setminus B))=\emptyset$. Since $\mbox{{\rm cl}}_Y(f(A))$ is compact and $Y$ is zero-dimensional, there exists $U\in CO(Y)$ such that $\mbox{{\rm cl}}_Y(f(A))\subseteq U\subseteq Y\setminus \mbox{{\rm cl}}_Y(f(X\setminus B))$. Set $V=f^{-1}(U)$. Then $A\subseteq V\subseteq\mbox{{\rm int}}_X(B)$, $\mbox{{\rm cl}}_Y(f(V))=U$ and $\mbox{{\rm cl}}_Y(f(X\setminus V))=Y\setminus U$. Thus $V\ll V$ and $A\subseteq V\subseteq B$.
Therefore, $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X)$.
Conversely, let $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X)$ and $(Y,f)=L(X,\delta,{\cal B})$ (see \ref{Leader} for the notations). We will prove that $Y$ is a zero-dimensional space. By Theorem \ref{Leader}, the formulas for ${\cal B}$ and $\delta$ written in the preceding paragraph hold again.
Let $y\in Y$ and $U$ be an open neighborhood of $y$. Since $Y$ is locally compact and Hausdorff, there exist $F_1, F_2\in CR(Y)$ such that $y\in F_1\subseteq\mbox{{\rm int}}_Y(F_2)\subseteq F_2\subseteq U$. Let $A_i=f^{-1}(F_i)$, $i=1,2$. Then $\mbox{{\rm cl}}_Y(f(A_i))=F_i$, and hence $A_i\in{\cal B}$, for $i=1,2$. Also, $A_1\ll A_2$. Thus there exists $C\in{\cal B}$ such that $A_1\subseteq C\subseteq A_2$ and $C\ll C$. It is easy to see that $F_1\subseteq\mbox{{\rm cl}}_Y(f(C))\subseteq F_2$ and that $\mbox{{\rm cl}}_Y(f(C))\in CO(Y)$. Therefore, $Y$ is a zero-dimensional space. \sq
By Theorem \ref{Leader}, for every Tychonoff space $(X,\tau)$, the local proximities of the form $(X,\delta,P(X))$ on $(X,\tau)$ and only they correspond to the Hausdorff compactifications of $(X,\tau)$. The pairs $(X,\delta)$ for which the triple $(X,\delta,P(X))$ is a local proximity are called {\em Efremovi\v{c} proximities}. Hence, Leader's Theorem \ref{Leader} implies the famous Smirnov Compactification Theorem \cite{Sm2}. The notion of a zero-dimensional proximity was introduced recently by G. Bezhanishvili \cite{B}. Our notion of a zero-dimensional local proximity is a generalization of it. We will denote by ${\cal P}_0(X)$ the set of all zero-dimensional proximities on a zero-dimensional Hausdorff space $X$. Now it becomes clear that our Theorem \ref{zdlpth} implies immediately the following theorem of G. Bezhanishvili \cite{B}:
\begin{cor}\label{zdlpcor}{\rm (G. Bezhanishvili \cite{B})} Let $(X,\tau)$ be a zero-dimensional Hausdorff space. Then there exists an isomorphism between the ordered sets $({\cal K}_0(X),\le)$ and $({\cal P}_0(X,\tau),\preceq)$ (see \ref{zdlpdef} and \ref{dwingerlc} for the notations). \end{cor}
The connection between the zero-dimensional local proximities on a zero-dimensional Hausdorff space $X$ and the ZLBAs admissible for $X$ is clarified in the next result:
\begin{theorem}\label{ailp} Let $(X,\tau)$ be a zero-dimensional Hausdorff space. Then:
\noindent(a) Let $(A,I)\in{\cal Z}{\cal A}(X,\tau)$. Set ${\cal B}=\{M\subseteq X\ |\ \exists B\in I$ such that $M\subseteq B\}$, and for every $M,N\in{\cal B}$, let $M\delta N\iff (\forall F\in I)[(M\subseteq F)\rightarrow(F\cap N\neq\emptyset)]$; further, for every $K,L\subseteq X$, let $K\delta L\iff[\exists M,N\in{\cal B}$ such that $M\subseteq K, N\subseteq L$ and $M\delta N]$. Then $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X,\tau)$. Set $(X,\delta,{\cal B})=L_X(A,I)$.
\noindent(b) Let $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X,\tau)$. Set $A=\{F\subseteq X\ |\ F\ll F\}$ and $I=A\cap{\cal B}$. Then $(A,I)\in{\cal Z}{\cal A}(X,\tau)$. Set $(A,I)=l_X(X,\delta,{\cal B})$.
\noindent(c) $\beta_X^0=(\Lambda_X)^{-1}\circ L_X$ and, for every $(X,\delta,{\cal B})\in{\cal L}{\cal P}_0(X,\tau)$, $(\beta_X^0\circ l_X)(X,\delta,{\cal B})=(\Lambda_X)^{-1}(X,\delta,{\cal B})$ (see \ref{Leader}, (\ref{dw2}), as well as (a) and (b) here for the notations);
\noindent(d) The correspondence $L_X:({\cal Z}{\cal A}(X,\tau),\preceq_0)\longrightarrow({\cal L}{\cal P}_0(X,\tau),\preceq)$ is an isomorphism (between posets) and $L_X^{-1}=l_X$. \end{theorem}
\hspace{-1cm}{\em Proof.}~~ It follows from Theorems \ref{dwingerlc}, \ref{zdlpth} and \ref{Leader}. \sq
The above assertion is a generalization of the analogous result of G. Bezhanishvili \cite{B} concerning the connection between the zero-dimensional proximities on a zero-dimensional Hausdorff space $X$ and the Boolean algebras admissible for $X$.
\section{Extensions over Zero-dimensional Local Compactifications}
\begin{theorem}\label{zdextc} Let $(X_i,\tau_i)$, where $i=1,2$, be zero-dimensional Hausdorff spaces, $(Y_i,f_i)$ be a zero-dimensional Hausdorff local compactification of $(X_i,\tau_i)$, $(A_i,I_i)=\alpha_{X_i}^0([(Y_i,f_i)])$
(see (\ref{dw1}) and (\ref{01}) for $\alpha_{X_i}^0$), where $i=1,2$,
and $f:X_1\longrightarrow X_2$ be a function. Then there exists a continuous function
$g=L_0(f):Y_1\longrightarrow Y_2$ such that $g\circ f_1=f_2\circ f$ iff $f$ satisfies the
following conditions:
\noindent{\rm (ZEQ1)} For every $G\in A_2$, $f^{-1}(G)\in A_1$ holds;
\noindent{\rm (ZEQ2)} For every $F\in I_1$ there exists $G\in I_2$ such that $f(F)\subseteq G$. \end{theorem}
\hspace{-1cm}{\em Proof.}~~ ($\Rightarrow$) Suppose that there exists a continuous function $g:Y_1\longrightarrow Y_2$ such that $g\circ f_1=f_2\circ f$. By Lemma \ref{isombool} and (\ref{05}), we have that the maps \begin{equation}\label{0i} r_i^c:CO(Y_i)\longrightarrow A_i, \ G\mapsto (f_i)^{-1}(G), \ e_i^c:A_i\longrightarrow CO(Y_i), \ F\mapsto\mbox{{\rm cl}}_{Y_i}(f_i(F)), \end{equation} where $i=1,2$, are Boolean isomorphisms; moreover, since $r_i^c(CK(Y_i))=I_i$ and $e_i^c(I_i)=CK(Y_i)$, we get that \begin{equation}\label{0il} r_i^c:(CO(Y_i),CK(Y_i))\longrightarrow (A_i,I_i) \mbox{ and } e_i^c:(A_i,I_i)\longrightarrow (CO(Y_i),CK(Y_i)), \end{equation}
where $i=1,2$, are LBA-isomorphisms. Set \begin{equation}\label{0ic} \psi_g:CO(Y_2)\longrightarrow CO(Y_1), \ G\mapsto g^{-1}(G), \mbox{ and } \psi_f=r_1^c\circ\psi_g\circ e_2^c. \end{equation} Then $\psi_f:A_2\longrightarrow A_1$. We will prove that \begin{equation}\label{psif} \psi_f(G)=f^{-1}(G), \mbox{ for every } G\in A_2. \end{equation} Indeed, let $G\in A_2$. Then $\psi_f(G)=(r_1^c\circ\psi_g\circ e_2^c)(G)=(f_1)^{-1}(g^{-1}(\mbox{{\rm cl}}_{Y_2}(f_2(G))))= \{x\in X_1\ |\ (g\circ f_1)(x)\in\mbox{{\rm cl}}_{Y_2}(f_2(G))\}=\{x\in X_1\ |\ f_2(f(x))\in\mbox{{\rm cl}}_{Y_2}(f_2(G))\} =\{x\in X_1\ |\ f(x)\in (f_2)^{-1}(\mbox{{\rm cl}}_{Y_2}(f_2(G)))\}= \{x\in X_1\ |\ f(x)\in G\}=f^{-1}(G)$. This shows that condition (ZEQ1) is fulfilled. Since, by Theorem \ref{genstonec}, $\psi_g=\Theta^t(g)$, we get that $\psi_g$ is a ${\bf ZLBA}$-morphism. Thus $\psi_f$ is a ${\bf ZLBA}$-morphism. Therefore, for every $F\in I_1$ there exists $G\in I_2$ such that $f^{-1}(G)\supseteq F$. Hence, condition (ZEQ2) is also checked.
\noindent($\Leftarrow$) Let $f$ be a function satisfying conditions (ZEQ1) and (ZEQ2). Set $\psi_f:A_2\longrightarrow A_1$, $G\mapsto f^{-1}(G)$. Then $\psi_f:(A_2,I_2)\longrightarrow (A_1,I_1)$ is a ${\bf ZLBA}$-morphism. Put $g=\Theta^a(\psi_f)$. Then $g:\Theta^a(A_1,I_1)\longrightarrow\Theta^a(A_2,I_2)$, i.e. $g:Y_1\longrightarrow Y_2$ and $g$ is a continuous function (see Theorem \ref{genstonec} and (\ref{dw2})). We will show that $g\circ f_1=f_2\circ f$. Let $x\in X_1$. Then, by (\ref{0f}) and Theorem \ref{genstonec}, $g(f_1(x))= g(u_{x,A_1})=(\psi_f)^{-1}(u_{x,A_1})=\{G\in A_2\ |\ \psi_f(G)\in u_{x,A_1}\}=\{G\in A_2\ |\ x\in f^{-1}(G)\}=\{G\in A_2\ |\ f(x)\in G\}=u_{f(x),A_2}=f_2(f(x))$. Thus, $g\circ f_1=f_2\circ f$. \sq
It is natural to write $f:(X_1,A_1,I_1)\longrightarrow (X_2,A_2,I_2)$ when we have a situation like the one described in Theorem \ref{zdextc}. Then, by analogy with Leader's equicontinuous functions (see Leader's Theorem \ref{Leader}), the functions $f:(X_1,A_1,I_1)\longrightarrow (X_2,A_2,I_2)$ which satisfy conditions (ZEQ1) and (ZEQ2) will be called {\em 0-equicontinuous functions}. Since $I_2$ is a base of $X_2$, every 0-equicontinuous function is continuous.
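For example, let $X_1=X_2=X$ be an infinite discrete space, $A_1=A_2=P(X)$, $I_1$ the ideal of all finite subsets of $X$ and $I_2=P(X)$. Then the identity $id_X:(X,A_1,I_1)\longrightarrow(X,A_2,I_2)$ is 0-equicontinuous (in (ZEQ2) one can take $G=F$), and the corresponding map $L_0(id_X):X\longrightarrow\beta_0X$ is just the canonical embedding of $X$ in its Banaschewski (i.e., greatest zero-dimensional Hausdorff) compactification $\beta_0X$, considered below. In the opposite direction, $id_X:(X,A_2,I_2)\longrightarrow(X,A_1,I_1)$ fails to satisfy (ZEQ2) (take $F=X$), in accordance with the fact that no continuous map $\beta_0X\longrightarrow X$ extends the identity of $X$ (such a map would be a retraction of a compact space onto the non-compact space $X$).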
\begin{cor}\label{zdextcc} Let $(X_i,\tau_i)$, $i=1,2$, be two zero-dimensional Hausdorff spaces, $A_i\in{\cal B}{\cal A}(X_i)$, $(Y_i,f_i)=\beta_{X_i}^0(A_i,A_i)$ (see (\ref{dw2}) for $\beta_{X_i}^0$), where $i=1,2$,
and $f:X_1\longrightarrow X_2$ be a function. Then there exists a continuous function
$g=L_0(f):Y_1\longrightarrow Y_2$ such that $g\circ f_1=f_2\circ f$ iff $f$ satisfies condition {\rm (ZEQ1)}. \end{cor}
\hspace{-1cm}{\em Proof.}~~ It follows from Theorem \ref{zdextc} because for ZLBAs of the form $(A_i,A_i)$, where $i=1,2$, condition (ZEQ2) is always fulfilled. \sq
Clearly, Corollary \ref{dwinger} implies (and this is noted in \cite{Dw}) that every zero-dimensional Hausdorff space $X$ has a greatest zero-dimensional Hausdorff compactification, which corresponds to the Boolean algebra $CO(X)$, admissible for $X$. This compactification was discovered by B. Banaschewski \cite{Ba}; it is denoted by $(\beta_0X,\beta_0)$ and is called the {\em Banaschewski compactification} of $X$. Its main property follows immediately from our Corollary \ref{zdextcc}:
\begin{cor}\label{zdextcb}{\rm (B. Banaschewski \cite{Ba})} Let $(X_i,\tau_i)$, $i=1,2$, be two zero-dimensional Hausdorff spaces and $(cX_2,c)$ be a zero-dimensional Hausdorff compactification of $X_2$. Then for every continuous function $f:X_1\longrightarrow X_2$ there exists a continuous function $g:\beta_0X_1\longrightarrow cX_2$ such that $g\circ\beta_0=c\circ f$. \end{cor}
\hspace{-1cm}{\em Proof.}~~ Since $\beta_0X_1$ corresponds to the Boolean algebra $CO(X_1)$, which is admissible for $X_1$, condition (ZEQ1) is clearly fulfilled when $f$ is a continuous function. Now apply Corollary \ref{zdextcc}. \sq
If $cX_2=\beta_0X_2$ in Corollary \ref{zdextcb} above, then the map $g$ will be denoted by $\beta_0f$.
Recall that a function $f:X\longrightarrow Y$ is called {\em skeletal}\/ (\cite{MR}) if
\begin{equation}\label{ske} \mbox{{\rm int}}(f^{-1}(\mbox{{\rm cl}} (V)))\subseteq\mbox{{\rm cl}}(f^{-1}(V)) \end{equation}
for every open subset $V$ of $Y$. Recall also the following result:
\begin{lm}\label{skel}{\rm (\cite{D1})} A function $f:X\longrightarrow Y$ is skeletal iff\/ $\mbox{{\rm int}}(\mbox{{\rm cl}}(f(U)))\neq\emptyset$, for every non-empty open subset $U$ of $X$. \end{lm}
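For example, every open map is skeletal (if $U$ is open and non-empty, then so is $f(U)$, and Lemma \ref{skel} applies), whereas the continuous injection $f:\mathbb{R}\longrightarrow\mathbb{R}^2$, $t\mapsto(t,0)$, is not skeletal: taking $U=\mathbb{R}$, the set $\mbox{{\rm cl}}_{\mathbb{R}^2}(f(U))$ is a line and hence has empty interior in $\mathbb{R}^2$.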
\begin{lm}\label{skelnew} A continuous map $f:X\longrightarrow Y$, where $X$ and $Y$ are topological spaces, is skeletal iff for every open subset $V$ of $Y$ such that $\mbox{{\rm cl}}_Y(V)$ is open, $\mbox{{\rm cl}}_X(f^{-1}(V))= f^{-1}(\mbox{{\rm cl}}_Y(V))$ holds. \end{lm}
\hspace{-1cm}{\em Proof.}~~ ($\Rightarrow$) Let $f$ be a skeletal continuous map and $V$ be an open subset of $Y$ such that $\mbox{{\rm cl}}_Y(V)$ is open. Let $x\in f^{-1}(\mbox{{\rm cl}}_Y(V))$. Then $f(x)\in \mbox{{\rm cl}}_Y(V)$. Since $f$ is continuous, there exists an open neighborhood $U$ of $x$ in $X$ such that $f(U)\subseteq \mbox{{\rm cl}}_Y(V)$. Suppose that $x\not\in\mbox{{\rm cl}}_X(f^{-1}(V))$. Then there exists an open neighborhood $W$ of $x$ in $X$ such that $W\subseteq U$ and $W\cap f^{-1}(V)=\emptyset$. We obtain that
$\mbox{{\rm cl}}_Y(f(W))\cap V=\emptyset$ and $\mbox{{\rm cl}}_Y(f(W))\subseteq \mbox{{\rm cl}}_Y(f(U))\subseteq \mbox{{\rm cl}}_Y(V)$. Since, by Lemma \ref{skel}, $\mbox{{\rm int}}_Y(\mbox{{\rm cl}}_Y(f(W)))\neq\emptyset$, we get a contradiction. Thus $f^{-1}(\mbox{{\rm cl}}_Y(V))\subseteq\mbox{{\rm cl}}_X(f^{-1}(V))$. The converse inclusion follows from the continuity of $f$. Hence $f^{-1}(\mbox{{\rm cl}}_Y(V))=\mbox{{\rm cl}}_X(f^{-1}(V))$.
\noindent($\Leftarrow$) Suppose that there exists an open subset $U$ of $X$ such that $\mbox{{\rm int}}_Y(\mbox{{\rm cl}}_Y(f(U)))=\emptyset$ and $U\neq\emptyset$. Then, clearly, $V=Y\setminus\mbox{{\rm cl}}_Y(f(U))$ is an open dense subset of $Y$. Hence $\mbox{{\rm cl}}_Y(V)$ is open in $Y$. Thus $\mbox{{\rm cl}}_X(f^{-1}(V))= f^{-1}(\mbox{{\rm cl}}_Y(V))=f^{-1}(Y)=X$ holds. Therefore $X=\mbox{{\rm cl}}_X(f^{-1}(V))=\mbox{{\rm cl}}_X(f^{-1}(Y\setminus \mbox{{\rm cl}}_Y(f(U))))=\mbox{{\rm cl}}_X(X\setminus f^{-1}(\mbox{{\rm cl}}_Y(f(U))))$. Since $U\subseteq f^{-1}(\mbox{{\rm cl}}_Y(f(U)))$, we get that $X\setminus U\supseteq\mbox{{\rm cl}}_X(X\setminus f^{-1}(\mbox{{\rm cl}}_Y(f(U))))=X$, a contradiction. Hence, $f$ is a skeletal map. \sq
Note that the proof of Lemma \ref{skelnew} shows that the following assertion is also true:
\begin{lm}\label{skelnewcor} A continuous map $f:X\longrightarrow Y$, where $X$ and $Y$ are topological spaces, is skeletal iff for every open dense subset $V$ of $Y$, $\mbox{{\rm cl}}_X(f^{-1}(V))= X$ holds. \end{lm}
\begin{lm}\label{skelnewnew} Let $(X_i,\tau_i)$, $i=1,2$, be two topological spaces, $(Y_i,f_i)$ be some extensions of $(X_i,\tau_i)$, $i=1,2$, $f:X_1\longrightarrow X_2$ and $g:Y_1\longrightarrow Y_2$ be two continuous functions such that $g\circ f_1=f_2\circ f$. Then $g$ is skeletal iff $f$ is skeletal. \end{lm}
\hspace{-1cm}{\em Proof.}~~ ($\Rightarrow$) Let $g$ be skeletal and $V$ be an open dense subset of $X_2$. Set $U=Ex_{Y_2}(V)$, i.e. $U=Y_2\setminus \mbox{{\rm cl}}_{Y_2}(f_2(X_2\setminus V))$. Then $U$ is an open dense subset of $Y_2$ and $f_2^{-1}(U)=V$. Hence, by Lemma \ref{skelnewcor}, $g^{-1}(U)$ is a dense open subset of $Y_1$. We will prove that $f_1^{-1}(g^{-1}(U))\subseteq f^{-1}(V)$. Indeed, let $x\in f_1^{-1}(g^{-1}(U))$. Then $g(f_1(x))\in U$, i.e. $f_2(f(x))\in U$. Thus $f(x)\in f_2^{-1}(U)=V$. So, $f_1^{-1}(g^{-1}(U))\subseteq f^{-1}(V)$. This shows that $f^{-1}(V)$ is dense in $X_1$. Therefore, by Lemma \ref{skelnewcor}, $f$ is a skeletal map.
\noindent($\Leftarrow$) Let $f$ be a skeletal map and $U$ be a dense open subset of $Y_2$. Set $V=f_2^{-1}(U)$. Then $V$ is an open dense subset of $X_2$. Thus, by Lemma \ref{skelnewcor}, $f^{-1}(V)$ is a dense subset of $X_1$. We will prove that $f^{-1}(V)\subseteq f_1^{-1}(g^{-1}(U))$. Indeed, let $x\in f^{-1}(V)$. Then $f(x)\in V=f_2^{-1}(U)$. Thus $f_2(f(x))\in U$, i.e. $g(f_1(x))\in U$. So, $f^{-1}(V)\subseteq f_1^{-1}(g^{-1}(U))$. This implies that $g^{-1}(U)$ is dense in $Y_1$. Now, Lemma \ref{skelnewcor} shows that $g$ is a skeletal map. \sq
We are now ready to prove the following result:
\begin{theorem}\label{zdextcmain} Let $(X_i,\tau_i)$, where $i=1,2$, be zero-dimensional Hausdorff spaces. Let, for $i=1,2$, $(Y_i,f_i)$ be a zero-dimensional Hausdorff local compactification of $(X_i,\tau_i)$, $(A_i,I_i)=\alpha_{X_i}^0([(Y_i,f_i)])$
(see (\ref{dw1}) and (\ref{01}) for $\alpha_{X_i}^0$),
$f:(X_1,A_1,I_1)\longrightarrow (X_2,A_2,I_2)$
be a 0-equicontinuous function and $g=L_0(f):Y_1\longrightarrow Y_2$ be the continuous
function such that $g\circ f_1=f_2\circ f$ (its existence is guaranteed by
Theorem \ref{zdextc}). Then:
\noindent(a) $g$ is skeletal iff $f$ is skeletal;
\noindent(b) $g$ is an open map iff $f$ satisfies the following condition:
\noindent{\rm(ZO)} For every $F\in I_1$, $\mbox{{\rm cl}}_{X_2}(f(F))\in I_2$ holds;
\noindent(c) $g$ is a perfect map iff $f$ satisfies the following condition:
\noindent{\rm(ZP)} For every $G\in I_2$, $f^{-1}(G)\in I_1$ holds (i.e., briefly, $f^{-1}(I_2)\subseteq I_1$);
\noindent(d) $\mbox{{\rm cl}}_{Y_2}(g(Y_1))=Y_2$ iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\noindent(e) $g$ is an injection iff $f$ satisfies the following condition:
\noindent{\rm(ZI)} For every $F_1,F_2\in I_1$ such that $F_1\cap F_2=\emptyset$ there exist $G_1,G_2\in I_2$ with $G_1\cap G_2=\emptyset$ and $f(F_i)\subseteq G_i$, $i=1,2$;
\noindent(f) $g$ is an open injection iff $I_1\subseteq f^{-1}(I_2)$ and $f$ satisfies condition {\rm (ZO)};
\noindent(g) $g$ is a closed injection iff $f^{-1}(I_2)=I_1$;
\noindent(h) $g$ is a perfect surjection iff $f$ satisfies condition {\rm (ZP)} and $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\noindent(i) $g$ is a dense embedding iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$ and $I_1\subseteq f^{-1}(I_2)$. \end{theorem}
\hspace{-1cm}{\em Proof.}~~ Set $\psi_g=\Theta^t(g)$ (see Theorem \ref{genstonec}). Then $\psi_g:CO(Y_2)\longrightarrow CO(Y_1)$, $G\mapsto g^{-1}(G)$. Set also $\psi_f: A_2\longrightarrow A_1$, $G\mapsto f^{-1}(G)$. Then, (\ref{0ic}), (\ref{0i}) and (\ref{psif}) imply that $\psi_f=r_1^c\circ\psi_g\circ e_2^c$.
\noindent(a) It follows from Lemma \ref{skelnewnew}.
\noindent(b) {\em First Proof.}~ Using \cite[Theorem 2.8(a)]{Di5} and (\ref{0il}), we get that the map $g$ is open iff there exists a map $\psi^f:I_1\longrightarrow I_2$ satisfying the following conditions:
\noindent(OZL1) For every $F\in I_1$ and every $G\in I_2$, $(F\cap f^{-1}(G)=\emptyset)\rightarrow (\psi^f(F)\cap G=\emptyset)$;
\noindent(OZL2) For every $F\in I_1$, $f^{-1}(\psi^f(F))\supseteq F$.
Obviously, condition (OZL2) is equivalent to the following one: for every $F\in I_1$, $f(F)\subseteq\psi^f(F)$. We will show that for every $F\in I_1$, $\psi^f(F)\subseteq\mbox{{\rm cl}}_{X_2}(f(F))$. Indeed, let $y\in\psi^f(F)$ and suppose that $y\not\in \mbox{{\rm cl}}_{X_2}(f(F))$. Since $I_2$ is a base of $X_2$, there exists a $G\in I_2$ such that $y\in G$ and $G\cap f(F)=\emptyset$. Then $F\cap f^{-1}(G)=\emptyset$ and condition (OZL1) implies that $\psi^f(F)\cap G=\emptyset$. We get that $y\not\in\psi^f(F)$, a contradiction. Thus $f(F)\subseteq\psi^f(F)\subseteq\mbox{{\rm cl}}_{X_2}(f(F))$. Since $\psi^f(F)$ is a closed set, we obtain that $\psi^f(F)=\mbox{{\rm cl}}_{X_2}(f(F))$. Obviously, conditions (OZL1) and (OZL2) are satisfied when $\psi^f(F)=\mbox{{\rm cl}}_{X_2}(f(F))$. This implies that $g$ is an open map iff for every $F\in I_1$, $\mbox{{\rm cl}}_{X_2}(f(F))\in I_2$.
\noindent{\em Second Proof.}~ We have, by (\ref{01}), that $I_i=(f_i)^{-1}(CK(Y_i))$, for $i=1,2$. Thus, for every $F\in I_i$, where $i\in\{1,2\}$, we have that $\mbox{{\rm cl}}_{Y_i}(f_i(F))\in CK(Y_i)$.
Let $g$ be an open map and $F\in I_1$. Then, $G=\mbox{{\rm cl}}_{Y_1}(f_1(F))\in CK(Y_1)$. Thus $g(G)\in CK(Y_2)$. Since $G$ is compact, we have that $g(G)=\mbox{{\rm cl}}_{Y_2}(g(f_1(F)))=\mbox{{\rm cl}}_{Y_2}(f_2(f(F)))=\mbox{{\rm cl}}_{Y_2}(f_2(\mbox{{\rm cl}}_{X_2}(f(F))))$. Therefore, $\mbox{{\rm cl}}_{X_2}(f(F))=(f_2)^{-1}(g(G))$, i.e. $\mbox{{\rm cl}}_{X_2}(f(F))\in I_2$.
Conversely, let $f$ satisfy condition (ZO). Since $CK(Y_1)$ is an open base of $Y_1$, to show that $g$ is an open map it suffices to prove that for every $G\in CK(Y_1)$, $g(G)=\mbox{{\rm cl}}_{Y_2}(f_2(\mbox{{\rm cl}}_{X_2}(f(F))))$ holds, where $F=(f_1)^{-1}(G)$ and thus $F\in I_1$. Obviously, $G=\mbox{{\rm cl}}_{Y_1}(f_1(F))$. Using again the fact that $G$ is compact, we get that $g(G)=g(\mbox{{\rm cl}}_{Y_1}(f_1(F)))=\mbox{{\rm cl}}_{Y_2}(g(f_1(F)))= \mbox{{\rm cl}}_{Y_2}(f_2(f(F)))=\mbox{{\rm cl}}_{Y_2}(f_2(\mbox{{\rm cl}}_{X_2}(f(F))))$. So, $g$ is an open map.
\noindent(c) Since $Y_2$ is a locally compact Hausdorff space and $CK(Y_2)$ is a base of $Y_2$, we get, using the well-known \cite[Theorem 3.7.18]{E2}, that $g$ is a perfect map iff $g^{-1}(G)\in CK(Y_1)$ for every $G\in CK(Y_2)$. Thus $g$ is a perfect map iff $\psi_g(G)\in CK(Y_1)$ for every $G\in CK(Y_2)$. Now, (\ref{0il}) and (\ref{0ic}) imply that $g$ is a perfect map $\iff$ $\psi_f(G)\in I_1$ for every $G\in I_2$ $\iff$ $f$ satisfies condition (ZP).
\noindent(d) This is obvious.
\noindent(e) Having in mind (\ref{0il}) and (\ref{0ic}), our assertion follows from \cite[Theorem 3.5]{Di5}.
\noindent(f) It follows from (b), (\ref{0il}), (\ref{0ic}), and \cite[Theorem 3.12]{Di5}.
\noindent(g) It follows from (c), (\ref{0il}), (\ref{0ic}), and \cite[Theorem 3.14]{Di5}.
\noindent(h) It follows from (c) and (d).
\noindent(i) It follows from (d) and \cite[Theorem 3.28 and Proposition 3.3]{Di5}. We will also give a {\em second proof}\/ of this fact. Obviously, if $g$ is a dense embedding then $g(Y_1)$ is an open subset of $Y_2$ (because $Y_1$ is locally compact); thus $g$ is an open mapping and we can apply (f) and (d). Conversely, if $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$ and $I_1\subseteq f^{-1}(I_2)$, then, by (d), $g(Y_1)$ is a dense subset of $Y_2$. We will show that $f$ satisfies condition (ZO). Let $F_1\in I_1$. Then there exists $F_2\in I_2$ such that $F_1=f^{-1}(F_2)$. Then, obviously, $\mbox{{\rm cl}}_{X_2}(f(F_1))\subseteq F_2$. Suppose that $G_2=F_2\setminus\mbox{{\rm cl}}_{X_2}(f(F_1))\neq\emptyset$. Since $G_2$ is open, there exists $x_2\in G_2\cap f(X_1)$. Then there exists $x_1\in X_1$ such that $f(x_1)=x_2\in F_2$. Thus $x_1\in F_1$, a contradiction. Therefore, $\mbox{{\rm cl}}_{X_2}(f(F_1))= F_2$. Thus, $\mbox{{\rm cl}}_{X_2}(f(F_1))\in I_2$. So, condition (ZO) is fulfilled. Hence, by (b), $g$ is an open map. Now, using (f), we get that $g$ is also an injection. All this shows that $g$ is a dense embedding. \sq
Recall that a continuous map $f:X\longrightarrow Y$ is called {\em quasi-open\/} (\cite{MP}) if for every non-empty open subset $U$ of $X$, $\mbox{{\rm int}}(f(U))\neq\emptyset$ holds. As shown in \cite{D1}, if $X$ is regular and Hausdorff, and $f:X\longrightarrow Y$ is a closed map, then $f$ is quasi-open iff $f$ is skeletal. This fact and Theorem \ref{zdextcmain} imply the following two corollaries:
\begin{cor}\label{zdextcmaincb} Let $X_1$, $X_2$ be two zero-dimensional Hausdorff spaces and
$f:X_1\longrightarrow X_2$ be a continuous function. Then:
\noindent(a) $\beta_0f$ is quasi-open iff $f$ is skeletal;
\noindent(b) $\beta_0f$ is an open map iff $f$ satisfies the following condition:
\noindent{\rm(ZOB)} For every $F\in CO(X_1)$, $\mbox{{\rm cl}}_{X_2}(f(F))\in CO(X_2)$ holds;
\noindent(c) $\beta_0f$ is a surjection iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\noindent(d) $\beta_0f$ is an injection iff $f^{-1}(CO(X_2))=CO(X_1)$. \end{cor}
\begin{cor}\label{zdextcmaincc} Let $X_1$, $X_2$ be two zero-dimensional Hausdorff spaces,
$f:X_1\longrightarrow X_2$ be a continuous function, ${\cal B}$ be a Boolean algebra admissible for $X_2$, $(cX_2,c)$ be the Hausdorff zero-dimensional
compactification of $X_2$ corresponding to ${\cal B}$ (see Theorems \ref{dwingerlc} and \ref{dwinger}) and $g:\beta_0X_1\longrightarrow cX_2$ be the continuous function such
that $g\circ \beta_0=c\circ f$ (its existence is guaranteed by
Theorem \ref{zdextcb}). Then:
\noindent(a) $g$ is quasi-open iff $f$ is skeletal;
\noindent(b) $g$ is an open map iff $f$ satisfies the following condition:
\noindent{\rm(ZOC)} For every $F\in CO(X_1)$, $\mbox{{\rm cl}}_{X_2}(f(F))\in {\cal B}$ holds;
\noindent(c) $g$ is a surjection iff\/ $\mbox{{\rm cl}}_{X_2}(f(X_1))=X_2$;
\noindent(d) $g$ is an injection iff $f^{-1}({\cal B})=CO(X_1)$. \end{cor}
\end{document} |
\begin{document}
\title{Nonuniform average sampling in multiply generated shift-invariant subspaces of mixed Lebesgue spaces}
\author{Qingyue Zhang\\ \footnotesize\it College of Science, Tianjin University of Technology,
Tianjin~300384, China\\
\footnotesize\it {e-mails: {[email protected]}}\\ }
\maketitle
\textbf{Abstract.}\,\, In this paper, we study the nonuniform average sampling problem in multiply generated shift-invariant subspaces of mixed Lebesgue spaces. We discuss two types of average sampled values: the values $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ generated by a single averaging function and the values $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$ generated by multiple averaging functions. Fast reconstruction algorithms are provided for these two types of average sampled values.
\textbf{Key words.}\,\, mixed Lebesgue spaces; nonuniform average sampling; shift-invariant subspaces.
\textbf{2010 MR Subject Classification}\,\, 94A20, 94A12, 42C15, 41A58
\section{Introduction and motivation} \ \ \ \ In 1961, Benedek first introduced mixed Lebesgue spaces \cite{Benedek,Benedek2}. In the 1980s, Fernandez, and Rubio de Francia, Ruiz and Torrea developed the theory of mixed Lebesgue spaces for integral operators and Calder\'on--Zygmund operators, respectively \cite{Fernandez,Rubio}. Recently, Torres and Ward, and Li, Liu and Zhang studied sampling problems in shift-invariant subspaces of mixed Lebesgue spaces \cite{Torres,Ward,LiLiu,zhangqingyue}. Mixed Lebesgue spaces generalize Lebesgue spaces. They were introduced to handle functions that depend on independent quantities with different properties: for a function in a mixed Lebesgue space, one can consider the integrability of each variable independently, in contrast with traditional Lebesgue spaces. This flexibility gives mixed Lebesgue spaces a crucial role in the study of time-dependent partial differential equations. In this context, we study the nonuniform average sampling problem in shift-invariant subspaces of mixed Lebesgue spaces.
The sampling theorem is the theoretical basis of modern pulse-code-modulation communication systems, and it is also one of the most powerful basic tools in signal processing and image processing. In 1948, Shannon formally stated his sampling theorem \cite{S,S1}. Shannon's sampling theorem shows that for any $f\in L^{2}(\mathbb{R})$ with $\mathrm{supp}\hat{f}\subseteq[-T,T],$ $$f(x)=\sum_{n\in \mathbb{Z}}f\left(\frac{n}{2T}\right)\frac{\sin\pi(2Tx-n)}{\pi(2Tx-n)},$$ where the series converges uniformly on compact sets and in $L^{2}(\mathbb{R})$, and $$\hat{f}(\xi)=\int_{\mathbb{R}}f(x)e^{-2\pi i x\xi }dx,\ \ \ \ \ \ \xi\in\mathbb{R}.$$ However, in many realistic situations the sampling set is nonuniform. For example, transmission from satellites through the internet can only be viewed as a nonuniform sampling problem, because data packets are lost during transmission. In recent years, many results on the nonuniform sampling problem have appeared \cite{SQ,BF1,M,Venkataramani,Christopher,Sun,Zhou}. Uniform and nonuniform sampling problems have also been generalized to more general shift-invariant spaces \cite{Aldroubi1,Aldroubi2,Aldroubi3,Aldroubi4,Vaidyanathan,Zhang1} of the form $$V(\phi)=\left\{\sum_{k\in\mathbb{Z}}c(k)\phi(x-k):\{c(k):k\in\mathbb{Z}\}\in \ell^{2}(\mathbb{Z})\right\}.$$
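As a small numerical sketch (not part of the theory above; the function names are ours), Shannon's series can be checked for a signal that is itself a finite combination of shifted sinc functions. Taking $T=1/2$, the sampling points $n/(2T)$ are the integers and the series is finite, so the reconstruction is exact up to rounding:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Band-limited test signal with supp(f^) in [-1/2, 1/2] (T = 1/2):
# a finite combination of integer shifts of sinc.
def f(x):
    return 0.5 * sinc(x - 1.0) + 2.0 * sinc(x + 2.0)

def shannon_reconstruct(x, n_max=50):
    # f(x) = sum_n f(n) sinc(x - n); here only f(1) and f(-2) are nonzero.
    return sum(f(n) * sinc(x - n) for n in range(-n_max, n_max + 1))

print(abs(shannon_reconstruct(0.3) - f(0.3)))  # essentially machine precision
```

For a general band-limited $f$ the series is infinite and only converges, as stated above, uniformly on compact sets and in $L^{2}(\mathbb{R})$.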
In classical sampling theory, the sampled values are the values of the signal at the sampling points. In practice, however, due to the limited precision of physical devices, it is impossible to measure the exact value of a signal at a point; the actual measurement is a local average of the signal near that point. Therefore, average sampling has attracted increasing attention from researchers \cite{SunZhou,Portal,Ponnaian,Kang,Atreas,Aldroubi10,Aldroubisun,Xianli,sunqiyu}.
For the sampling problem in shift-invariant subspaces of mixed Lebesgue spaces, Torres and Ward studied the uniform sampling problem for band-limited functions in mixed Lebesgue spaces \cite{Torres,Ward}. Li, Liu and Zhang discussed the nonuniform sampling problem in principal shift-invariant subspaces of mixed Lebesgue spaces \cite{LiLiu,zhangqingyue}. In this paper, we discuss the nonuniform average sampling problem in multiply generated shift-invariant subspaces of mixed Lebesgue spaces. We discuss two types of average sampled values: the values $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ generated by a single averaging function and the values $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$ generated by multiple averaging functions. Fast reconstruction algorithms are provided for these two types of average sampled values.
The paper is organized as follows. In the next section, we give the definitions and preliminary results needed and define multiply generated shift-invariant subspaces of mixed Lebesgue spaces $L^{p,q}\left(\mathbb{R}^{d+1}\right)$. In Section 3, we state the main results of this paper. Section 4 collects some useful propositions and lemmas. In Section 5, we give the proofs of the main results. Finally, concluding remarks are presented in Section 6.
\section{Definitions and preliminary results} \ \ \ \ In this section, we give some definitions and preliminary results needed in this paper. First of all, we give the definition of the mixed Lebesgue spaces $L^{p,q}(\Bbb R^{d+1})$. \begin{definition} For $1 \leq p,q <+\infty$, the mixed Lebesgue space $L^{p,q}=L^{p,q}(\Bbb R^{d+1})$ consists of all measurable functions $f=f(x,y)$ defined on $\Bbb R\times\Bbb R^{d}$ satisfying
$$\|f\|_{L^{p,q}}=\left[\int_{\Bbb R}\left(\int_{\Bbb R^d}|f(x,y)|^{q}dy\right)^{\frac{p}{q}}dx\right]^{\frac{1}{p}}<+\infty.$$ \end{definition}
The corresponding sequence spaces are defined by $$\ell^{p,q}=\ell^{p,q}(\mathbb{Z}^{d+1})=\left\{c: \|c\|^{p}_{\ell^{p,q}}=\sum_{k_{1} \in \Bbb Z}\left(\sum_{k_{2} \in \Bbb Z^d }
|c(k_{1},k_{2})|^{q}\right)^{\frac{p}{q}} <+\infty\right\}.$$
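The mixed sequence norm is simply an inner $\ell^{q}$ norm in $k_{2}$ followed by an outer $\ell^{p}$ norm in $k_{1}$. A small numerical sketch for a finitely supported sequence in the case $d=1$ (the helper below is our own, not from the paper); for $p=q$ it reduces to the ordinary $\ell^{p}$ norm:

```python
def mixed_norm(c, p, q):
    """l^{p,q} norm of a finitely supported sequence c[k1][k2] (d = 1):
    inner l^q norm over k2, then outer l^p norm over k1."""
    inner = [sum(abs(v) ** q for v in row) ** (1.0 / q) for row in c]
    return sum(s ** p for s in inner) ** (1.0 / p)

c = [[3.0, 4.0], [0.0, 2.0]]      # sequence supported on a 2 x 2 block
print(mixed_norm(c, p=1, q=2))    # inner norms 5 and 2, outer l^1 gives 7
print(mixed_norm(c, p=2, q=2))    # sqrt(9 + 16 + 0 + 4), the plain l^2 norm
```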
In order to control the local behavior of functions, we introduce mixed Wiener amalgam spaces $W(L^{p,q})(\Bbb R^{d+1})$. \begin{definition} For $1\leq p,q<\infty$, if a measurable function $f$ satisfies
$$\|f\|^{p}_{W(L^{p,q})}:=\sum_{n\in \Bbb Z}\sup_{x\in[0,1]}\left[\sum_{l\in \Bbb Z^d}\sup_{y\in [0,1]^d}|f(x+n,y+l)|^{q}\right]^{p/q}<\infty,$$ then we say that $f$ belongs to the mixed Wiener amalgam space $W(L^{p,q})=W(L^{p,q})(\Bbb R^{d+1})$. \end{definition} For $1\leq p,q<\infty$, let $W_{0}\left ( L^{p,q} \right )$ denote the space of all continuous functions in $W(L^{p,q})$.
For $1\leq p<\infty,$ if a function $f$ satisfies
$$\|f\|^{p}_{W(L^{p})}:=\sum_{k\in \Bbb Z^{d+1}}\mathrm{ess\sup}_{x\in[0,1]^{d+1}} |f(x+k)|^{p}<\infty,$$ then we say that $f$ belongs to the Wiener amalgam space $W(L^{p})=W(L^{p})(\Bbb R^{d+1}).$ Obviously, $W(L^{p})\subset W(L^{p,p}).$
For $1\leq p< \infty $, let $W_{0}\left ( L^{p} \right )$ denote the space of all continuous functions in $W(L^{p})$. Let $B$ be a Banach space. $(B)^{(r)}$ denotes the $r$-fold product $B\times\cdots\times B$.
For any $f,g\in L^{2}(\mathbb{R}^{d+1})$, define their convolution $$(f*g)(x)=\int_{\mathbb{R}^{d+1}} f(y)g(x-y)dy.$$
The following is a preliminary result.
\begin{lemma}\label{convolution relation} If $f\in L^{1}(\mathbb{R}^{d+1})$ and $g\in W(L^{1,1})$, then $f*g\in W(L^{1,1})$ and \[
\|f*g\|_{W(L^{1,1})}\leq\|g\|_{W(L^{1,1})}\|f\|_{L^{1}}. \] \end{lemma}
\begin{proof} Since \begin{eqnarray*}
&&\|f*g\|_{W(L^{1,1})}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\left|\int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
f(y_{1},y_{2})g(x_{1}+k_{1}-y_{1},x_{2}+k_{2}-y_{2})dy_{2}dy_{1}\right|\\ &&\quad\leq \sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\ &&\quad\quad\int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
|f(y_{1},y_{2})||g(x_{1}+k_{1}-y_{1},x_{2}+k_{2}-y_{2})|dy_{2}dy_{1}\\ &&\quad\leq \int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
|f(y_{1},y_{2})|\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}|g(x_{1}+k_{1}-y_{1},x_{2}+k_{2}-y_{2})|dy_{2}dy_{1}\\ &&\quad\leq \int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}
|f(y_{1},y_{2})|\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}|g(x_{1}+k_{1},x_{2}+k_{2})|dy_{2}dy_{1}\\
&&\quad= \int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}|f(y_{1},y_{2})|\|g\|_{W(L^{1,1})}dy_{2}dy_{1}\\
&&\quad= \|g\|_{W(L^{1,1})}\int\limits_{y_{1}\in\mathbb{R}}\int\limits_{y_{2}\in\mathbb{R}^{d}}|f(y_{1},y_{2})|dy_{2}dy_{1}\\
&&\quad=\|g\|_{W(L^{1,1})}\|f\|_{L^{1}}, \end{eqnarray*} the desired result of Lemma \ref{convolution relation} follows. \end{proof}
\subsection{Shift-invariant spaces}
\ \ \ \ For $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W(L^{1,1})^{(r)}$, the multiply generated shift-invariant space in the mixed Lebesgue spaces $L^{p,q}$ is defined by \begin{eqnarray*} &&V_{p,q}(\Phi)=\left\{\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2}):\right.\\ && \quad\quad\quad\quad\quad\quad\quad\quad\left.c_{i}=\left\{c_{i}(k_{1},k_{2}):k_{1}\in \Bbb Z,k_{2}\in\Bbb Z^{d}\right\}\in \ell^{p,q},\,1\leq i\leq r\right\}. \end{eqnarray*} It is easy to see that the triple sum converges pointwise almost everywhere. In fact, for any $1\leq i\leq r$, $c_{i}=\left\{c_{i}(k_{1},k_{2}):k_{1}\in \Bbb Z,k_{2}\in\Bbb Z^{d}\right\}\in\ell^{p,q}$ implies $c_{i}\in \ell^{\infty}$. Combining this with $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W(L^{1,1})^{(r)}$ gives \begin{eqnarray*}
\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}\left|c_{i}(k_{1},k_{2})\phi_{i}(x-k_{1},y-k_{2})\right|&\leq& \sum_{i=1}^{r}\|c_{i}\|_\infty\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|\phi_{i}(x-k_{1},y-k_{2})|\\
&\leq&\sum_{i=1}^{r}\|c_{i}\|_\infty\| \phi_{i} \|_{W(L^{1,1})}<\infty\,(a.e.).
\end{eqnarray*}
The following proposition shows that multiply generated shift-invariant spaces are well defined in $L^{p,q}$.
\begin{proposition}\cite[Theorem 2.8]{zhangqingyue}\label{thm:stableup} Assume that $1\leq p,q<\infty $ and $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W(L^{1,1})^{(r)}$. Then for any $C=(c_{1},c_{2},\cdots,c_{r})^{T}\in (\ell^{p,q})^{(r)}$, the function \[ f=\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^d}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2}) \]
belongs to $L^{p,q}$ and there exist $D_{1}, D_{2}>0$ such that $$
D_{1}\|f\|_{L^{p,q}}\leq\left(\sum_{i=1}^{r}\|c_{i}\|^{2}_{\ell^{p,q}}\right)^{1/2}\leq D_{2}\|f\|_{L^{p,q}}. $$ \end{proposition}
\section{Main results} \ \ \ \ In this section, we mainly discuss nonuniform average sampling in multiply generated shift-invariant spaces. The main results of this section are two fast reconstruction algorithms which allow one to reconstruct exactly the signals $f$ in multiply generated shift-invariant subspaces from the average sampled values of $f$.
\subsection{The case of a single averaging function} \ \ \ \ In this subsection, we give a fast reconstruction algorithm which allows one to reconstruct exactly the signals $f$ in multiply generated shift-invariant subspaces from the average sampled values $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ of $f$. Before giving the main result of this subsection, we first give some definitions.
\begin{definition} A bounded uniform partition of unity $\{\beta_{j,k}:j,k\in\mathbb{J}\}$ associated to $\{B_{\gamma}(x_{j},y_{k}):j,k\in\mathbb{J}\}$ is a set of functions satisfying \begin{enumerate}
\item $0\leq\beta_{j,k}\leq1, \forall\,j,k\in\mathbb{J},$
\item $\mathrm{supp}\beta_{j,k}\subset B_{\gamma}(x_{j},y_{k}),$
\item $\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\beta_{j,k}=1$.
\end{enumerate} Here $B_{\gamma}(x_{j},y_{k})$ is the open ball with center $(x_{j},y_{k})$ and radius $\gamma$. \end{definition}
If $f\in W_{0}(L^{1,1})$, we define \[ A_{X,a}f=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle\beta_{j,k}=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}(f*\psi^{*}_{a})(x_{j},y_{k})\beta_{j,k}, \] and define \[ Q_{X}f=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}f(x_{j},y_{k})\beta_{j,k} \] for the quasi-interpolant of the sequence $c(j,k)=f(x_{j},y_{k})$. Here $\psi_{a}(\cdot)=1/a^{d+1}\psi(\cdot/a)$ and $\psi^{*}_{a}(x)=\overline{\psi_{a}(-x)}$. Obviously, one has $A_{X,a}f=Q_{X}(f*\psi^{*}_{a})$.
In order to describe the structure of the sampling set $X$, we give the following definition.
\begin{definition} If a set $X=\{(x_{j},y_{k}):j,k\in \mathbb{J},x_{j}\in\mathbb{R},y_{k}\in\mathbb{R}^{d}\}$ satisfies \[ \mathbb{R}^{d+1}=\cup_{j,k}B_{\gamma}(x_{j},y_{k})\quad\mbox{for every}\,\gamma>\gamma_{0}, \] then we say that the set $X$ is $\gamma_{0}$-dense in $\mathbb{R}^{d+1}$. Here $B_{\gamma}(x_{j},y_{k})$ is the open ball with center $(x_{j},y_{k})$ and radius $\gamma$, and $\mathbb{J}$ is a countable index set. \end{definition}
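To make $Q_{X}$ and $\gamma$-density concrete, here is a one-dimensional toy sketch (all names and the particular choice of partition are ours): the indicator functions of the Voronoi cells of a sampling set form a bounded uniform partition of unity, and the resulting quasi-interpolant differs from a Lipschitz $f$ by at most its oscillation over the cell radius.

```python
import math

# A nonuniform sampling set on [0, 10] (1-D toy analogue of X).
X = [0.0, 0.4, 1.1, 2.0, 2.3, 3.5, 4.1, 5.0, 5.8, 6.6, 7.9, 8.5, 9.3, 10.0]
f = math.sin

def quasi_interpolant(x):
    # With beta_j the indicator of the Voronoi cell of x_j (a bounded
    # uniform partition of unity), Q_X f(x) = f(nearest sampling point).
    xj = min(X, key=lambda t: abs(t - x))
    return f(xj)

# X is gamma-dense on [0, 10] with gamma = half the largest gap.
gamma = max(b - a for a, b in zip(X, X[1:])) / 2

# sin is 1-Lipschitz, so |f - Q_X f| <= osc_gamma(f) <= gamma on [0, 10].
grid = [i / 100.0 for i in range(1001)]
err = max(abs(f(x) - quasi_interpolant(x)) for x in grid)
print(gamma, err)  # err stays below gamma
```

The reconstruction theorems below quantify exactly this effect: for $\gamma$ small enough, $Q_{X}$ (and its averaged variant $A_{X,a}$) is close enough to the identity on $V_{p,q}(\Phi)$ for an iteration to converge.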
The following is the main result of this subsection.
\begin{theorem}\label{th:suanfa} Assume that $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W_{0}(L^{1,1})^{(r)}$ has compact support and that $P$ is a bounded projection from $L^{p,q}$ onto $V_{p,q}(\Phi)$. Let $\psi\in W_{0}(L^{1,1})$ with $\int_{\mathbb{R}^{d+1}}\psi=1$. Then there exist a density $\gamma_{0}=\gamma_{0}(\Phi,\psi)>0$ and $a_{0}=a_{0}(\Phi,\psi)>0$ such that any $f\in V_{p,q}(\Phi)$ can be reconstructed from its average samples $\{\left \langle f,\psi_{a}(\cdot-x_{j},\cdot-y_{k}) \right \rangle:j,k\in\mathbb{J}\}$ on any $\gamma\,(\gamma\leq\gamma_{0})$-dense set $X=\{(x_{j},y_{k}):j,k\in\mathbb{J}\}$ and for any $0<a\leq a_{0}$, by the following iterative algorithm: \begin{eqnarray}\label{eq:iterative algorithm} \left\{ \begin{array}{rl}f_{1}=&PA_{X,a}f \\
f_{n+1}=&PA_{X,a}(f-f_{n})+f_{n}.\\ \end{array}\right. \end{eqnarray} The iterate $f_{n}$ converges to $f$ in the $L^{p,q}$ norm. Furthermore, the convergence is geometric, namely,
\[
\|f-f_{n}\|_{L^{p,q}}\leq M\alpha^{n} \] for some $\alpha=\alpha(\gamma,a,P,\Phi,\psi)<1$ and $M<\infty.$
\end{theorem}
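The iteration (\ref{eq:iterative algorithm}) is a standard perturbation scheme: writing $T=PA_{X,a}$, the error satisfies $f-f_{n+1}=(I-T)(f-f_{n})$, so geometric convergence follows once $\|I-T\|<1$. A finite-dimensional toy sketch of this mechanism (the matrix below is our own illustration, not the operators of the theorem):

```python
# Toy model on R^2: T = I - E with ||E||_inf = 0.3 < 1, so the iteration
# f_{n+1} = f_n + T(f - f_n) has error e_{n+1} = E e_n (geometric decay).
E = [[0.2, 0.1],
     [0.0, 0.3]]

def apply_T(v):
    # T v = v - E v
    Ev = [sum(E[i][j] * v[j] for j in range(2)) for i in range(2)]
    return [v[i] - Ev[i] for i in range(2)]

f = [1.0, -2.0]          # the (unknown) signal; P is the identity here
fn = apply_T(f)          # f_1 = T f
for _ in range(19):      # f_{n+1} = f_n + T(f - f_n)
    r = [f[i] - fn[i] for i in range(2)]
    Tr = apply_T(r)
    fn = [fn[i] + Tr[i] for i in range(2)]

err = max(abs(f[i] - fn[i]) for i in range(2))
print(err)  # bounded by 0.3**20 * ||e_1||, essentially zero
```

In the theorem, the role of $\|E\|<1$ is played by making $\gamma$ and $a$ small enough that $PA_{X,a}$ is a contraction perturbation of the identity on $V_{p,q}(\Phi)$.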
\subsection{The case of multiple averaging functions} \ \ \ \ In the above subsection, we treated the case of a single averaging function. In practice, however, we often encounter multiple averaging functions, so the average sampled values take the form $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$. For this case, we recover the function $f$ exactly by the following fast algorithm. Before stating it, we first define \[ A_{X}f=\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left \langle f,\psi_{x_{j},y_{k}} \right \rangle\beta_{j,k}. \]
\begin{theorem}\label{th:suanfa-m}
Assume that $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W_{0}(L^{1,1})^{(r)}$ has compact support and that $P$ is a bounded projection from $L^{p,q}$ onto $V_{p,q}(\Phi)$. Let the averaging functions $\psi_{x_{j},y_{k}}\in W(L^{1,1})$ satisfy $\int_{\mathbb{R}^{d+1}}\psi_{x_{j},y_{k}}=1$ and $\int_{\mathbb{R}^{d+1}}|\psi_{x_{j},y_{k}}|\leq M$, where $M>0$ is independent of $(x_{j},y_{k})$. Then there exist a density $\gamma_{0}=\gamma_{0}(\Phi,M)>0$ and $a_{0}=a_{0}(\Phi,M)>0$ such that if $X=\{(x_{j},y_{k}): j,k\in \mathbb{J}\}$ is
$\gamma\,(\gamma\leq\gamma_{0})$-dense in $\mathbb{R}^{d+1}$, and if the average sampling functions $\psi_{x_{j},y_{k}}$ satisfy $\textup{supp}\,\psi_{x_{j},y_{k}}\subseteq (x_{j},y_{k})+[-a,a]^{d+1}$ for some $0<a\leq a_{0}$, then any $f\in V_{p,q}(\Phi)$ can be recovered from its average samples $\left\{\left \langle f,\psi_{x_{j},y_{k}}\right \rangle:j,k\in\mathbb{J}\right\}$ by the following iterative algorithm: \begin{eqnarray}\label{eq:iterative algorithm-m} \left\{ \begin{array}{rl}f_{1}=&PA_{X}f \\
f_{n+1}=&PA_{X}(f-f_{n})+f_{n}.\\ \end{array}\right. \end{eqnarray} In this case, the iterate $f_{n}$ converges to $f$ in the $L^{p,q}$-norm. Moreover, the convergence is geometric, that is, \[
\|f-f_{n}\|_{L^{p,q}}\leq C\alpha^{n} \] for some $\alpha=\alpha(\gamma,a,P,\Phi,M)<1$ and $C<\infty.$ \end{theorem}
\section{Useful propositions and lemmas}
\ \ \ \ In this section, we introduce some useful propositions and lemmas.
Let $f$ be a continuous function. We define the oscillation (or modulus of continuity) of $f$
by $\hbox{osc}_{\delta}(f)(x_{1},x_{2})=\sup_{|y_{1}|\leq\delta,|y_{2}|\leq\delta}|f(x_{1}+y_{1},x_{2}+y_{2})-f(x_{1},x_{2})|$.
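For intuition, the oscillation can be approximated on a grid: for a Lipschitz function it is bounded by the Lipschitz constant times $\delta$ and vanishes as $\delta\rightarrow0$, which is the mechanism behind Proposition \ref{pro:oscillation} below. A one-variable sketch (the discretization is our own):

```python
import math

def osc(f, delta, x, n=200):
    """Discretized oscillation sup_{|y| <= delta} |f(x + y) - f(x)| (1-D)."""
    ys = [-delta + 2 * delta * k / n for k in range(n + 1)]
    return max(abs(f(x + y) - f(x)) for y in ys)

# sin is 1-Lipschitz, so osc_delta(sin)(x) <= delta for every x,
# and the oscillation shrinks to 0 with delta.
for delta in (0.5, 0.1, 0.01):
    print(delta, osc(math.sin, delta, 1.0))
```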
The following two propositions are needed in the proofs of the two lemmas in this section.
\begin{proposition}\cite[Lemma 3.4]{zhangqingyue}\label{pro:in Wiener space} If $\phi\in W_{0}(L^{1,1})$ has compact support, then there exists $\delta_{0}>0$ such that
$\hbox{osc}_{\delta}(\phi)\in W_{0}(L^{1,1})$ for any $\delta\leq\delta_{0}$. \end{proposition}
\begin{proposition}\cite[Lemma 3.5]{zhangqingyue}\label{pro:oscillation}
If $\phi\in W_{0}(L^{1,1})$ has compact support, then $$\lim_{\delta\rightarrow0}\|\hbox{osc}_{\delta}(\phi)\|_{W(L^{1,1})}=0.$$ \end{proposition}
To prove our main results, we need the following lemma.
\begin{lemma}\label{lem:average function} Let $\psi\in L^{1}(\mathbb{R}^{d+1})$ satisfy $\int_{\mathbb{R}^{d+1}}\psi(x)dx=1$ and set $\psi_{a}=(1/a^{d+1})\psi(\cdot/a)$. Then for every $\phi\in W_{0}(L^{1,1})$ with compact support, \[
\|\phi-\phi*\psi^{*}_{a}\|_{W(L^{1,1})}\rightarrow0\quad \mbox{as}\quad a\rightarrow0^{+}. \] Here $a$ is any positive real number. \end{lemma}
\begin{proof} Since $\int_{\mathbb{R}^{d+1}}\psi(x)dx=1$ and $\psi^{*}_{a}(x)=\overline{\psi_{a}(-x)}$, one has \begin{eqnarray*} (\phi-\phi*\psi^{*}_{a})(x)=\int_{\mathbb{R}^{d+1}}(\phi(x)-\phi(x+t))\overline{\psi_{a}(t)}dt. \end{eqnarray*} By Proposition \ref{pro:in Wiener space}, there exists $\delta_{0}>0$ such that
$\hbox{osc}_{\delta}(\phi)\in W_{0}(L^{1,1})$ for any $\delta\leq\delta_{0}$. Thus \begin{eqnarray*}
&&\|\phi-\phi*\psi^{*}_{a}\|_{W(L^{1,1})}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\\ &&\quad=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\left|\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}(\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2}))
\overline{\psi_{a}(t_{1},t_{2})}dt_{2}dt_{1}\right|\\ &&\quad\leq\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\ &&\quad\leq\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\ &&\quad\quad+\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\ &&\quad\quad+\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{\mathbb{R}}\int\limits_{|t_{2}|>\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\ &&\quad=I_{1}+I_{2}+I_{3}, \end{eqnarray*} where \begin{eqnarray*} &&I_{1}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}, \end{eqnarray*} \begin{eqnarray*} &&I_{2}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1} \end{eqnarray*} and \begin{eqnarray*} &&I_{3}=\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\\
&&\quad\quad\int\limits_{\mathbb{R}}\int\limits_{|t_{2}|>\delta_{0}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})-\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}. \end{eqnarray*}
First of all, we treat $I_{1}$: let $|t|=\max\{|t_{1}|,|t_{2}|\}$. Then \begin{eqnarray*}
I_{1}&\leq& \sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}
\hbox{osc}_{|t|}(\phi)(x_{1}+k_{1},x_{2}+k_{2})\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq& \int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}
\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\hbox{osc}_{|t|}(\phi)(x_{1}+k_{1},x_{2}+k_{2})\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&=&\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}. \end{eqnarray*} By Proposition \ref{pro:oscillation}, for any $\epsilon>0$, there exists $\delta_{1}>0\,(\delta_{1}<\delta_{0})$ such that \[
\|\hbox{osc}_{\delta}(\phi)\|_{W(L^{1,1})}<\epsilon, \quad \mbox{for any}\,\delta\leq\delta_{1}. \] Write \begin{eqnarray*}
&&\int\limits_{|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad=\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{|t_{2}|\leq\delta_{1}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\quad+\int\limits_{\delta_{1}<|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad \quad\quad +\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{\delta_{1}<|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\ &&\quad=I_{4}+I_{5}+I_{6}. \end{eqnarray*} Then \begin{eqnarray*}
I_{4}&\leq& \int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{|t_{2}|\leq\delta_{1}}\|\hbox{osc}_{\delta_{1}}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq& \epsilon\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{|t_{2}|\leq\delta_{1}}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\leq\epsilon\|\psi\|_{L^{1}}, \end{eqnarray*} \begin{eqnarray*}
I_{5}&\leq&\int\limits_{\delta_{1}<|t_{1}|\leq\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{\delta_{0}}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq&\|\hbox{osc}_{\delta_{0}}(\phi)\|_{W(L^{1,1})}\int\limits_{\delta_{1}/a<|s_{1}|}\int\limits_{s_{2}\in\mathbb{R}^{d}}\left|\psi(s_{1},s_{2})\right|ds_{2}ds_{1}\\ &\rightarrow& 0\quad \mbox{as}\,a\rightarrow0^{+} \end{eqnarray*} and \begin{eqnarray*}
I_{6}&\leq&\int\limits_{|t_{1}|\leq\delta_{1}}\int\limits_{\delta_{1}<|t_{2}|\leq\delta_{0}}\|\hbox{osc}_{|t|}(\phi)\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&\leq&\|\hbox{osc}_{\delta_{0}}(\phi)\|_{W(L^{1,1})}\int\limits_{s_{1}\in\mathbb{R}}\int\limits_{\delta_{1}/a<|s_{2}|}\left|\psi(s_{1},s_{2})\right|ds_{2}ds_{1}\\ &\rightarrow& 0\quad \mbox{as}\,a\rightarrow0^{+}. \end{eqnarray*}
Next, we treat $I_{2}$: \begin{eqnarray*}
&&I_{2}\leq\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\left|\phi(x_{1}+k_{1},x_{2}+k_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\quad+\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\\
&&\quad\quad\quad\sum_{k_{1}\in\mathbb{Z}}\sup_{x_{1}\in[0,1]}\sum_{k_{2}\in\mathbb{Z}^{d}}\sup_{x_{2}\in[0,1]^{d}}\left|\phi(x_{1}+k_{1}+t_{1},x_{2}+k_{2}+t_{2})\right|
\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\leq2\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\|\phi\|_{W(L^{1,1})}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\leq2\|\phi\|_{W(L^{1,1})}\int\limits_{|t_{1}|>\delta_{0}}\int\limits_{|t_{2}|\leq\delta_{0}}\left|\psi_{a}(t_{1},t_{2})\right|dt_{2}dt_{1}\\
&&\quad\leq2\|\phi\|_{W(L^{1,1})}\int\limits_{|s_{1}|>\delta_{0}/a}\int\limits_{s_{2}\in\mathbb{R}^{d}}\left|\psi(s_{1},s_{2})\right|ds_{2}ds_{1}\\ &&\quad\rightarrow 0\quad \mbox{as}\,a\rightarrow0^{+}. \end{eqnarray*} Similarly, we can prove $I_{3}\rightarrow 0\quad \mbox{as}\,a\rightarrow0^{+}$. This completes the proof of Lemma \ref{lem:average function}. \end{proof}
The following lemma is a generalization of \cite[Lemma 4.1]{Aldroubisun}.
\begin{lemma}\label{Qoperator} Let $X$ be any sampling set which is $\gamma$-dense in $\mathbb{R}^{d+1}$, let $\{\beta_{j,k}:j,k\in\mathbb{J}\}$ be a bounded uniform partition of unity associated with $X$, and let $\phi\in W_{0}(L^{1,1})$ have compact support.
Then there exist constants $C$ and $\gamma_{0}$ such that for any $f=\sum_{k\in\mathbb{Z}^{d+1}}c_{k}\phi(\cdot-k)$ and $\gamma\leq\gamma_{0}$, one has \[
\|Q_{X}f\|_{L^{p,q}}\leq C\|c\|_{\ell^{p,q}}\|\phi\|_{W(L^{1,1})} \quad \mbox{for any}\, c=\{c_{k}:k\in\mathbb{Z}^{d+1}\}\in\ell^{p,q}. \] \end{lemma}
To prove Lemma \ref{Qoperator}, we need the following proposition.
\begin{proposition}\cite[Theorem 3.1]{LiLiu}\label{pro:stableup2} Assume that $1\leq p,q<\infty $ and $\phi\in W(L^{1,1})$. Then for any $c\in \ell^{p,q}$, the function $f=\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^d}c(k_{1},k_{2})\phi(\cdot-k_{1},\cdot-k_{2})$
belongs to $L^{p,q}$ and $$
\|f\|_{L^{p,q}}\leq\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})}. $$ \end{proposition}
\begin{proof}[\textbf{Proof of Lemma \ref{Qoperator}}] By Proposition \ref{pro:stableup2} and Proposition \ref{pro:in Wiener space}, there exists $\gamma_{0}>0$ such that for any $\gamma\leq\gamma_{0}$ \begin{eqnarray*}
\|f-Q_{X}f\|_{L^{p,q}}\leq\|\hbox{osc}_{\gamma}(f)\|_{L^{p,q}}\leq\|c\|_{\ell^{p,q}}\|\hbox{osc}_{\gamma}(\phi)\|_{W(L^{1,1})}. \end{eqnarray*} Using Proposition \ref{pro:stableup2} and the proof of \cite[Lemma 3.4]{zhangqingyue}, one obtains that there exists $C'$ such that \begin{eqnarray*}
\|Q_{X}f\|_{L^{p,q}}&=&\|Q_{X}f-f+f\|_{L^{p,q}}\leq\|f-Q_{X}f\|_{L^{p,q}}+\|f\|_{L^{p,q}}\\
&\leq&\|c\|_{\ell^{p,q}}\|\hbox{osc}_{\gamma}(\phi)\|_{W(L^{1,1})}+\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})}\\
&\leq&\|c\|_{\ell^{p,q}}C'\|\phi\|_{W(L^{1,1})}+\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})}\\
&\leq&C\|c\|_{\ell^{p,q}}\left \| \phi \right \|_{W(L^{1,1})}, \end{eqnarray*} where $C=1+C'$. \end{proof}
The following lemma plays an important role in the proof of Theorem \ref{th:suanfa}.
\begin{lemma}\label{lem:co} Let $\Phi=(\phi_{1},\phi_{2},\cdots,\phi_{r})^{T}\in W_{0}(L^{1,1})^{(r)}$ have compact support, and let $P$ be a bounded projection from $L^{p,q}(\mathbb{R}^{d+1})$ onto $V_{p,q}(\Phi)$. Then there exist $\gamma_{0}> 0$ and $a_{0}> 0$ such that for every $\gamma$-dense set $X$ with $\gamma\leq\gamma_{0}$ and every positive real number $a\leq a_{0}$, the operator $I-PA_{X,a}$ is a contraction operator on $V_{p,q}(\Phi)$. \end{lemma}
\begin{proof} Let $f=\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2})\in V_{p,q}(\Phi)$. Then one has \begin{eqnarray*}
\|f-PA_{X,a}f\|_{L^{p,q}}&=&\|f-PQ_{X}f+PQ_{X}f-PA_{X,a}f\|_{L^{p,q}}\\
&\leq&\|f-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X,a}f\|_{L^{p,q}}\\
&=&\|Pf-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X,a}f\|_{L^{p,q}}\\
&\leq&\|P\|_{op}(\|f-Q_{X}f\|_{L^{p,q}}+\|Q_{X}f-A_{X,a}f\|_{L^{p,q}})\\
&=&\|P\|_{op}(\|f-Q_{X}f\|_{L^{p,q}}+\|Q_{X}f-Q_{X}(f*\psi^{*}_{a})\|_{L^{p,q}}). \end{eqnarray*}
First, we estimate the first term in the last inequality. By Proposition \ref{pro:stableup2}, Proposition \ref{thm:stableup} and the Cauchy--Schwarz inequality, there exists $\gamma_{1}>0$ such that for any $\gamma\leq\gamma_{1}$ \begin{eqnarray}\label{eq:1}
\|f-Q_{X}f\|_{L^{p,q}}&\leq&\left\|\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|c_{i}(k_{1},k_{2})|\hbox{osc}_{\gamma}(\phi_{i})(\cdot-k_{1},\cdot-k_{2})\right\|_{L^{p,q}}\nonumber\\
&\leq&\sum_{i=1}^{r}\left\|\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|c_{i}(k_{1},k_{2})|\hbox{osc}_{\gamma}(\phi_{i})(\cdot-k_{1},\cdot-k_{2})\right\|_{L^{p,q}}\nonumber\\
&\leq&\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\nonumber\\
&\leq&\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\nonumber\\
&\leq&\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\sqrt{r}\left(\sum_{i=1}^{r}\|c_{i}\|^{2}_{\ell^{p,q}}\right)^{1/2}\nonumber\\
&\leq&D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})} \|f\|_{L^{p,q}}. \end{eqnarray}
Next, we estimate the second term of the last inequality. Let $\phi^{a}_{i}=\phi_{i}-\phi_{i}*\psi^{*}_{a}\,(i=1,\cdots,r)$. Using $\phi_{i}\in W_{0}(L^{1,1})\,(i=1,\cdots,r)$, $\psi\in L^{1}$ and Lemma \ref{convolution relation}, it follows that $\phi^{a}_{i}\in W_{0}(L^{1,1})$. Note that $f-f*\psi^{*}_{a}=\sum_{i=1}^{r}\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)$. By Lemma \ref{Qoperator}, there exist constants $C,\gamma_{2}>0$ such that for any $\gamma\leq\gamma_{2}$ \begin{eqnarray*}
\|Q_{X}f-Q_{X}(f*\psi^{*}_{a})\|_{L^{p,q}}&\leq&\left\|Q_{X}\left(\sum_{i=1}^{r}\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)\right)\right\|_{L^{p,q}}\\
&=&\left\|\sum_{i=1}^{r}Q_{X}\left(\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)\right)\right\|_{L^{p,q}}\\
&\leq&\sum_{i=1}^{r}\left\|Q_{X}\left(\sum_{k\in \mathbb{Z}^{d+1}}c_{i}(k)\phi^{a}_{i}(\cdot-k)\right)\right\|_{L^{p,q}}\\
&\leq&C\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\|\phi^{a}_{i}\|_{W(L^{1,1})}. \end{eqnarray*} Using Proposition \ref{thm:stableup} and the Cauchy--Schwarz inequality again, we obtain \begin{eqnarray*}
\|Q_{X}f-Q_{X}(f*\psi^{*}_{a})\|_{L^{p,q}}&\leq& C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})}\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\\
&\leq&C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})}\sqrt{r}\left(\sum_{i=1}^{r}\|c_{i}\|^{2}_{\ell^{p,q}}\right)^{1/2}\\
&\leq&D_{2}\sqrt{r}C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})} \|f\|_{L^{p,q}}. \end{eqnarray*}
Let $\epsilon>0$ be arbitrary. By Proposition \ref{pro:oscillation}, there exists $\gamma_{3}>0$ such that
$D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\leq \epsilon/2$ for any $\gamma\leq\gamma_{3}$. Using Lemma \ref{lem:average function}, there exists $a_{0}>0$ such that for any $a\leq a_{0}$ \[
D_{2}\sqrt{r}C\max_{1\leq i\leq r}\|\phi^{a}_{i}\|_{W(L^{1,1})} \leq \epsilon/2. \] Hence, choosing $\gamma_{0}=\min\{\gamma_{1},\gamma_{2},\gamma_{3}\}$, one has \[
\|f-PA_{X,a}f\|_{L^{p,q}}\leq\epsilon\|P\|_{op}\|f\|_{L^{p,q}} \quad\mbox{for any} \, f\in V_{p,q}(\Phi),\,\gamma\leq\gamma_{0},\,a\leq a_{0}. \]
Finally, choosing $\epsilon$ small enough that $\epsilon\|P\|_{op}<1$ makes $I-PA_{X,a}$ a contraction operator. \end{proof}
\section{Proofs of main results} \ \ \ \ In this section, we give proofs of Theorem \ref{th:suanfa} and Theorem \ref{th:suanfa-m}.
\subsection{Proof of Theorem \ref{th:suanfa}} \ \ \ \ For convenience, let $e_{n}=f-f_{n}$ be the error after $n$ iterations. Using (\ref{eq:iterative algorithm}),
\begin{eqnarray*} e_{n+1}&=&f-f_{n+1}\\ &=&f-f_{n}-PA_{X,a}(f-f_{n})\\ &=&(I-PA_{X,a})e_{n}. \end{eqnarray*} Using Lemma \ref{lem:co}, we may choose suitable $\gamma_{0}$ and $a_{0}$ such that for any $\gamma\leq\gamma_{0}$ and $a\leq a_{0}$ \[
\|I-PA_{X,a}\|_{op}=\alpha<1. \] Then we obtain \[
\|e_{n+1}\|_{L^{p,q}}\leq \alpha\|e_{n}\|_{L^{p,q}}\leq\cdots\leq \alpha^{n}\|e_{1}\|_{L^{p,q}}. \]
Therefore $\|e_{n}\|_{L^{p,q}}\rightarrow0$ as $n\rightarrow\infty$. This completes the proof.
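The geometric decay of the error can also be seen numerically. The following is a toy finite-dimensional sketch (a random matrix stands in for the operator $PA_{X,a}$; none of this is the paper's actual operator): whenever $\alpha=\|I-T\|<1$, the iteration $f_{n+1}=f_n+T(f-f_n)$ has error $e_{n+1}=(I-T)e_n$, exactly as in the proof above.

```python
import numpy as np

# Toy illustration (not the paper's operator A_{X,a}): iterate
# f_{n+1} = f_n + T(f - f_n), so e_{n+1} = (I - T) e_n, which
# contracts whenever alpha = ||I - T|| < 1.
rng = np.random.default_rng(0)
d = 5
E = 0.1 * rng.standard_normal((d, d))   # plays the role of I - PA_{X,a}
T = np.eye(d) - E
alpha = np.linalg.norm(E, 2)            # spectral norm of I - T
assert alpha < 1

f = rng.standard_normal(d)              # target to reconstruct
f_n = np.zeros(d)                       # initial guess f_0 = 0
errors = []
for _ in range(30):
    f_n = f_n + T @ (f - f_n)
    errors.append(np.linalg.norm(f - f_n))

# Geometric decay ||e_{n+1}|| <= alpha ||e_n||, hence ||e_n|| -> 0.
for e_prev, e_next in zip(errors, errors[1:]):
    assert e_next <= alpha * e_prev + 1e-12
```

The scaling factor $0.1$ is chosen only so that $\alpha<1$ holds for this random instance.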
\subsection{Proof of Theorem \ref{th:suanfa-m}} \ \ \ \ We only need to prove that $I-PA_{X}$ is a contraction operator on $V_{p,q}(\Phi)$.
Let $f=\sum_{i=1}^{r}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2})\in V_{p,q}(\Phi)$. One has \begin{eqnarray}\label{eq:2}
\|f-PA_{X}f\|_{L^{p,q}}&=&\|f-PQ_{X}f+PQ_{X}f-PA_{X}f\|_{L^{p,q}}\nonumber\\
&\leq&\|f-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X}f\|_{L^{p,q}}\nonumber\\
&=&\|Pf-PQ_{X}f\|_{L^{p,q}}+\|PQ_{X}f-PA_{X}f\|_{L^{p,q}}\nonumber\\
&\leq&\|P\|_{op}(\|f-Q_{X}f\|_{L^{p,q}}+\|Q_{X}f-A_{X}f\|_{L^{p,q}}). \end{eqnarray}
Put $f_{i}=\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}c_{i}(k_{1},k_{2})\phi_{i}(\cdot-k_{1},\cdot-k_{2})$ for $1\leq i\leq r$. Then $f_{i}\in V_{p,q}(\phi_{i})$ for $1\leq i\leq r$ and $f=\sum_{i=1}^{r}f_{i}$. For each $f_{i}$, one has \begin{eqnarray*}
|Q_{X}f_{i}-A_{X}f_{i}|&=&\left|\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left(f_{i}(x_{j},y_{k})-\left \langle f_{i},\psi_{x_{j},y_{k}} \right \rangle\right)\beta_{j,k}\right|\\
&=&\left|\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\left(\int_{\mathbb{R}^{d+1}}(f_{i}(x_{j},y_{k})-f_{i}(t))\overline{\psi_{x_{j},y_{k}}(t)}dt\right)\beta_{j,k}\right|\\
&\leq&\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\int_{\mathbb{R}^{d+1}}|f_{i}(x_{j},y_{k})-f_{i}(t)||\psi_{x_{j},y_{k}}(t)|dt\beta_{j,k}\\
&\leq&\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\hbox{osc}_{a}(f_{i})(x_{j},y_{k})\int_{\mathbb{R}^{d+1}}|\psi_{x_{j},y_{k}}(t)|dt\beta_{j,k}\\ &\leq&M\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\hbox{osc}_{a}(f_{i})(x_{j},y_{k})\beta_{j,k}\\
&\leq&M\sum_{j\in\mathbb{J}}\sum_{k\in\mathbb{J}}\sum_{k_{1}\in \Bbb Z}\sum_{k_{2}\in \Bbb Z^{d}}|c_{i}(k_{1},k_{2})|\hbox{osc}_{a}(\phi_{i})(x_{j}-k_{1},y_{k}-k_{2})\beta_{j,k}. \end{eqnarray*} By Proposition \ref{pro:stableup2} and Proposition \ref{pro:in Wiener space}, there exists $a_{1}$ such that for any $a\leq a_{1}$ \begin{eqnarray*}
\|Q_{X}f_{i}-A_{X}f_{i}\|_{L^{p,q}}\leq MC\|c_{i}\|_{\ell^{p,q}}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})}. \end{eqnarray*} Therefore \[
\|Q_{X}f-A_{X}f\|_{L^{p,q}}\leq MC\sum_{i=1}^{r}\|c_{i}\|_{\ell^{p,q}}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})}. \] Thus by the proof of Lemma \ref{lem:co} \begin{eqnarray}\label{eq:3}
\|Q_{X}f-A_{X}f\|_{L^{p,q}}\leq MCD_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})} \|f\|_{L^{p,q}}. \end{eqnarray} Using (\ref{eq:1}), (\ref{eq:2}) and (\ref{eq:3}), one has that there exists $\gamma_{1}$ such that for any $\gamma\leq\gamma_{1}$ and $a\leq a_{1}$ \begin{eqnarray*}
\|f-PA_{X}f\|_{L^{p,q}}&\leq&\|P\|_{op}(D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\\
&&\quad+MCD_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})} )\|f\|_{L^{p,q}}. \end{eqnarray*}
Let $\epsilon>0$ be arbitrary. By Proposition \ref{pro:oscillation}, there exists $\gamma_{2}$ such that for any $\gamma\leq\gamma_{2}$
$$D_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{\gamma}(\phi_{i})\|_{W(L^{1,1})}\leq \epsilon/2.$$ Using Lemma \ref{lem:average function}, there exists $a_{2}>0$ such that for any $a\leq a_{2}$ \[
MCD_{2}\sqrt{r}\max_{1\leq i\leq r}\|\hbox{osc}_{a}(\phi_{i})\|_{W(L^{1,1})} \leq \epsilon/2. \] Hence, choosing $\gamma_{0}=\min\{\gamma_{1},\gamma_{2}\}$ and $a_{0}=\min\{a_{1},a_{2}\}$, one has \[
\|f-PA_{X}f\|_{L^{p,q}}\leq\epsilon\|P\|_{op}\|f\|_{L^{p,q}} \quad\mbox{for any} \, f\in V_{p,q}(\Phi),\,\gamma\leq \gamma_{0},\,a\leq a_{0}. \]
Finally, choosing $\epsilon$ small enough that $\epsilon\|P\|_{op}<1$ makes $I-PA_{X}$ a contraction operator.
\section{Conclusion} \ \ \ \ In this paper, we study the nonuniform average sampling problem in multiply generated shift-invariant subspaces of mixed Lebesgue spaces. We discuss two types of average sampled values, and two fast reconstruction algorithms for these two types of average sampled values are provided. Studying $L^{p,q}$-frames in multiply generated shift-invariant subspaces of mixed Lebesgue spaces is the goal of future work.
\\ \textbf{Acknowledgements}
This work was partially supported by the National Natural Science Foundation of China (11326094 and 11401435).
\end{document} |
\begin{document}
\title[Hybrid Euler-Hadamard product]{Hybrid Euler-Hadamard product for quadratic Dirichlet L-functions in function fields} \author{H. M. Bui and Alexandra Florea} \address{School of Mathematics, University of Manchester, Manchester M13 9PL, UK} \email{[email protected]} \address{Department of Mathematics, Stanford University, Stanford CA 94305, USA} \email{[email protected]}
\allowdisplaybreaks
\begin{abstract} We develop a hybrid Euler-Hadamard product model for quadratic Dirichlet $L$-functions over function fields (following the model introduced by Gonek, Hughes and Keating for the Riemann zeta-function). After computing the first three twisted moments in this family of $L$-functions, we provide further evidence for the conjectural asymptotic formulas for the moments of the family.
\end{abstract} \maketitle
\section{Introduction}
An important and fascinating theme in number theory is the study of moments of the Riemann zeta-function and of families of $L$-functions. In this paper, we consider the moments of quadratic Dirichlet $L$-functions in the function field setting. Denote by $\mathcal{H}_{2g+1}$ the space of monic, square-free polynomials of degree $2g+1$ over $\mathbb{F}_q[x]$. We are interested in the asymptotic formula for the $k$-th moment, \[
I_k(g)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{D\in\mathcal{H}_{2g+1}}L(\tfrac{1}{2},\chi_D)^k, \] as $g\rightarrow\infty$.
In the corresponding problem over number fields, the first and second moments have been evaluated by Jutila [\textbf{\ref{J}}], with subsequent improvements on the error terms by Goldfeld and Hoffstein [\textbf{\ref{GH}}], Soundararajan [\textbf{\ref{S}}] and Young [\textbf{\ref{Y}}], and the third moment has been computed by Soundararajan [\textbf{\ref{S}}]. Conjectural asymptotic formulas for higher moments have also been given, being based on either random matrix theory [\textbf{\ref{KS}}] or the ``recipe'' [\textbf{\ref{CFKRS}}].
Using the idea of Jutila [\textbf{\ref{J}}], Andrade and Keating [\textbf{\ref{AK}}] obtained the asymptotic formula for $I_1(g)$ when $q$ is fixed and $q\equiv1(\textrm{mod}\ 4)$. They explicitly computed the main term, which is of size $g$, and bounded the error term by $O\big(q^{(-1/4+\log_q2)(2g+1)}\big)$. This result was recently improved by Florea [\textbf{\ref{F1}}] with a secondary main term and an error term of size $O_\varepsilon\big(q^{-3g/2+\varepsilon g}\big)$. Florea's approach is similar to Young's [\textbf{\ref{Y}}], but in the function field setting, it is striking that one can surpass the square-root cancellation. Florea [\textbf{\ref{F23}}, \textbf{\ref{F4}}] later also provided the asymptotic formulas for $I_k(g)$ when $k=2,3,4$.
For other values of $k$, by extending the Ratios Conjecture to the function field setting, Andrade and Keating [\textbf{\ref{AK2}}] proposed a general formula for the integral moments of quadratic Dirichlet $L$-functions over function fields. Concerning the leading terms, their conjecture reads
\begin{conjecture} For any $k\in\mathbb{N}$ we have \[ I_k(g)\sim 2^{-k/2}\mathcal{A}_k\frac{G(k+1)\sqrt{\Gamma(k+1)}}{\sqrt{G(2k+1)\Gamma(2k+1)}}(2g)^{k(k+1)/2} \] as $g\rightarrow\infty$, where \[
\mathcal{A}_k=\prod_{P\in\mathcal{P}}\bigg[\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\bigg(1+\frac{1}{|P|}\bigg)^{-1}\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)\bigg] \] with $\tau_k(f)$ being the $k$-th divisor function, and $G(k)$ is the Barnes $G$-function. \label{ak} \end{conjecture}
\begin{remark}\emph{An equivalent form of $\mathcal{A}_k$ is} \begin{equation*}
\mathcal{A}_{k}=\prod_{P\in\mathcal{P}}\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\bigg(\frac12\bigg(1-\frac{1}{|P|^{1/2}}\bigg)^{-k}+\frac12\bigg(1+\frac{1}{|P|^{1/2}}\bigg)^{-k}+\frac{1}{|P|}\bigg). \end{equation*} \end{remark}
Besides random matrix theory and the recipe, another method to predict asymptotic formulas for moments comes from the hybrid Euler-Hadamard product for the Riemann zeta-function developed by Gonek, Hughes and Keating [\textbf{\ref{GHK}}]. Using a smoothed form of the explicit formula of Bombieri and Hejhal [\textbf{\ref{BH}}], the value of the Riemann zeta-function at a height $t$ on the critical line can be approximated as a partial Euler product multiplied by a partial Hadamard product over the nontrivial zeros close to $1/2+it$. The partial Hadamard product is expected to be modelled by the characteristic polynomial of a large random unitary matrix as it involves only local information about the zeros. Calculating the moments of the partial Euler product rigorously and making an assumption (which can be proved in certain cases) about the independence of the two products, Gonek, Hughes and Keating then reproduced the conjecture for the moments of the Riemann zeta-function first put forward by Keating and Snaith [\textbf{\ref{KS2}}]. The hybrid Euler-Hadamard product model has been extended to various cases [\textbf{\ref{BK}}, \textbf{\ref{BK2}}, \textbf{\ref{D}}, \textbf{\ref{H}}, \textbf{\ref{BGM}}].
In this paper, we give further support for Conjecture \ref{ak} using the idea of Gonek, Hughes and Keating. Along the way, we also derive the first three twisted moments of quadratic Dirichlet $L$-functions over function fields.
\section{Statements of results}
Throughout the paper we assume $q$ is fixed and $q\equiv 1(\textrm{mod}\ 4)$. All theorems still hold for all odd $q$ by using the modified auxiliary lemmas in function fields as in [\textbf{\ref{BF}}], but we shall keep the assumption for simplicity. Let $\mathcal{M}$ be the set of monic polynomials in $\mathbb{F}_q[x]$, and let $\mathcal{M}_n$ and $\mathcal{M}_{\leq n}$ be the sets of those of degree $n$ and of degree at most $n$, respectively. The letter $P$ will always denote a monic irreducible polynomial over $\mathbb{F}_q[x]$. The set of monic irreducible polynomials is denoted by $\mathcal{P}$. For a polynomial $f\in \mathbb{F}_q[x]$, we denote its degree by $d(f)$, its norm $|f|$ is defined to be $q^{d(f)}$, and the von Mangoldt function is defined by $$ \Lambda(f) = \begin{cases} d(P) & \mbox{ if } f=cP^j \text{ for some }c \in \mathbb{F}_q^{\times}\ \text{and}\ j\geq 1, \\ 0 & \mbox{ otherwise. } \end{cases} $$
Note that
$$|\mathcal{H}_d| = \begin{cases} q & \mbox{ if } d=1, \\ q^{d-1}(q-1) & \mbox{ if } d \geq 2. \end{cases} $$ For any function $F$ on $\mathcal{H}_{2g+1}$, the expected value of $F$ is defined by
$$ \Big\langle F \Big\rangle_{\mathcal{H}_{2g+1}} := \frac{1}{| \mathcal{H}_{2g+1} |} \sum_{D \in \mathcal{H}_{2g+1}} F(D).$$
The Euler-Hadamard product we use, which is proved in Section 4, takes the following form.
\begin{theorem}\label{HEH} Let $u(x)$ be a real, non-negative, $C^\infty$-function with mass $1$ and compactly supported on $[q,q^{1+1/X}]$. Let \begin{equation*} U(z)=\int_{0}^{\infty}u(x)E_{1}(z\log x)dx, \end{equation*} where $E_{1}(z)$ is the exponential integral, $E_{1}(z)=\int_{z}^{\infty}e^{-x}/xdx$. Then for $\emph{Re}(s)\geq0$ we have \begin{equation*} L(s,\chi_D)=P_{X}(s,\chi_D)Z_{X}(s,\chi_D), \end{equation*} where \begin{equation*}
P_{X}(s,\chi_D)=\exp\bigg( \sum_{\substack{f\in\mathcal{M}\\d(f)\leq X}}\frac{\Lambda(f)\chi_D(f)}{|f|^{s}d(f)}\bigg) \end{equation*} and \begin{equation*} Z_{X}(s,\chi_D)=\exp\Big(-\sum_{\rho}U\big((s-\rho)X\big)\Big), \end{equation*} where the sum is over all the zeros $\rho$ of $L(s,\chi_D)$. \end{theorem}
As remarked in [\textbf{\ref{GHK}}], $P_X(s,\chi_D)$ can be thought of as the Euler product for $L(s,\chi_D)$ truncated to include polynomials of degree $\leq X$, and $Z_X(s,\chi_D)$ can be thought of as the Hadamard product for $L(s,\chi_D)$ truncated to include zeros within a distance $\lesssim 1/X$ from the point $s$. The parameter $X$ thus controls the relative contributions of the Euler and Hadamard products. Note that a similar hybrid product formula was developed independently by Andrade, Keating and Gonek in [\textbf{\ref{AKG}}].
In Section 5 we evaluate the moments of $P_X(\chi_D):=P_X(1/2,\chi_D)$ rigorously and prove the following theorem.
\begin{theorem}\label{theoremP} Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$. Then for any $k\in\mathbb{R}$ we have \[ \Big\langle P_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}=2^{-k/2}\mathcal{A}_k\big(e^\gamma X\big)^{k(k+1)/2}+O\big( X^{k(k+1)/2-1}\big). \] \end{theorem}
For the partial Hadamard product, $Z_X(\chi_D):=Z_X(1/2,\chi_D)$, we conjecture that
\begin{conjecture}\label{conjectureZ} Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$ and $X,g\rightarrow\infty$. Then for any $k\geq0$ we have \[ \Big\langle Z_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}\sim \frac{G(k+1)\sqrt{\Gamma(k+1)}}{\sqrt{G(2k+1)\Gamma(2k+1)}}\Big(\frac{2g}{e^\gamma X}\Big)^{k(k+1)/2}. \] \end{conjecture}
In Section 7 we shall provide some support for Conjecture \ref{conjectureZ} using the random matrix theory model as follows. The zeros of quadratic Dirichlet $L$-functions are believed to have the same statistical distribution as the eigenangles $\theta_n$ of $2N\times 2N$ random symplectic unitary matrices with respect to the Haar measure for some $N$. Equating the density of the zeros and the density of the eigenangles suggests that $N=g$. Hence the $k$-th moment of $Z_X(\chi_D)$ is expected to be asymptotically the same as the average of $Z_X(\chi_D)^k$, with the zeros $\rho$ replaced by the eigenangles $\theta_n$, over all $2g\times 2g$ symplectic unitary matrices. This random matrix calculation is carried out in Section 7.
We also manage to verify Conjecture \ref{conjectureZ} in the cases $k=1,2,3$. Since, by Theorem \ref{HEH}, $Z_X(\chi_D)=L(\tfrac{1}{2},\chi_D)P_X(\chi_D)^{-1}$, this is equivalent to establishing the following theorem.
\begin{theorem}\label{k123} Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$. Then we have \begin{align*} &\Big\langle L(\tfrac{1}{2},\chi_D)P_X(\chi_D)^{-1} \Big\rangle_{\mathcal{H}_{2g+1}}= \frac{1}{\sqrt{2}}\frac{2g}{e^\gamma X} + O\big(gX^{-2}\big),\\ &\Big\langle L(\tfrac{1}{2},\chi_D)^2P_X(\chi_D)^{-2} \Big\rangle_{\mathcal{H}_{2g+1}}= \frac{1}{12}\Big(\frac{2g}{e^\gamma X}\Big)^3+O\big(g^3X^{-4}\big) \end{align*} and \[ \Big\langle L(\tfrac{1}{2},\chi_D)^3P_X(\chi_D)^{-3} \Big\rangle_{\mathcal{H}_{2g+1}}= \frac{1}{720\sqrt{2}}\Big(\frac{2g}{e^\gamma X}\Big)^6+O\big(g^6X^{-7}\big). \] \end{theorem}
Our Theorem \ref{theoremP} and Theorem \ref{k123} suggest that at least when $X$ is not too large relative to $q^g$, the $k$-th moment of $L(1/2,\chi_D)$ is asymptotic to the product of the moments of $P_X(\chi_D)$ and $Z_X(\chi_D)$ for $k=1,2,3$. We believe that this is true in general and we make the following conjecture.
\begin{conjecture}[Splitting Conjecture] Let $0<c<2$. Suppose that $X\leq (2-c)\log g/\log q$ and $X,g\rightarrow\infty$. Then for any $k\geq0$ we have \[ \Big\langle L(\tfrac12,\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}\sim \Big\langle P_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}\Big\langle Z_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}. \] \end{conjecture}
Theorem \ref{theoremP}, Conjecture \ref{conjectureZ} and the Splitting Conjecture imply Conjecture \ref{ak}.
To prove Theorem \ref{k123} requires an understanding of the twisted moments of quadratic Dirichlet $L$-functions over function fields, \[ I_k(\ell;g)=\Big\langle L(\tfrac12,\chi_D)^k\chi_D(\ell) \Big\rangle_{\mathcal{H}_{2g+1}}. \] To that end, we shall compute the first three twisted moments in Section 6 and show that the following theorems hold.
\begin{theorem}[Twisted first moment]\label{tfm} Let $\ell=\ell_1\ell_2^2$ with $\ell_1$ square-free. Then we have \begin{align*}
I_1(\ell;g)=&\,\frac{\eta_1(\ell;1)}{|\ell_1|^{1/2}}\bigg(g-d(\ell_1)+1-\frac{\partial_u\eta_1}{\eta_1}(\ell;1)\bigg)+|\ell_1|^{1/6} q^{-4g/3} P\big(g+d(\ell_1)\big)\\
&\qquad\qquad+O_\varepsilon\big(|\ell|^{1/2}q^{-3g/2+ \varepsilon g}\big), \end{align*} where the function $\eta_1(\ell,u)$ is defined in \eqref{eta} and $P(x)$ is a linear polynomial whose coefficients can be written down explicitly. \end{theorem}
\begin{theorem}[Twisted second moment]\label{tsm} Let $\ell=\ell_1\ell_2^2$ with $\ell_1$ square-free. Then we have \begin{align*}
I_2(\ell;g)=&\,\frac{\eta_2(\ell;1)}{24|\ell_1|^{1/2}}\bigg(\sum_{j=0}^{3}\frac{\partial_u^j\eta_2}{\eta_2}(\ell;1)P_{2,j}\big(2g-d(\ell_1)\big)- 6g\sum_{j=0}^{2}\frac{\partial_u^j\kappa_2}{\kappa_2}(\ell;1,1)Q_{2,j}\big(d(\ell_1)\big)\\
&\qquad\qquad\qquad+2\sum_{i=0}^{1}\sum_{j=0}^{3-i}\frac{\partial_u^j\partial_w^i\kappa_2}{\kappa_2}(\ell;1,1)R_{2,i,j}\big(d(\ell_1)\big)\bigg)+O_\varepsilon\big(|\ell|^{1/2}q^{-g+ \varepsilon g}\big), \end{align*} where the functions $\eta_2(\ell,u)$ and $\kappa_2(\ell;u,v)$ are defined in \eqref{eta} and \eqref{kappa2}. Here $P_{2,j}(x)$'s are some explicit polynomials of degrees $3-j$ for all $0\leq j\leq 3$. Also, $Q_{2,j}(x)$'s and $R_{2,i,j}(x)$'s are some explicit polynomials of degrees $2-j$ and $3-i-j$, respectively.
As for the leading term we have \begin{align*}
I_2(\ell;g)=&\,\frac{\eta_2(\ell;1)}{24|\ell_1|^{1/2}}\Big(8g^3-12g^2d(\ell_1)+d(\ell_1)^3\Big)+O_\varepsilon\big(g^2d(\ell)^\varepsilon\big)+O_\varepsilon\big(|\ell|^{1/2}q^{-g+ \varepsilon g}\big). \end{align*} \end{theorem}
\begin{theorem}[Twisted third moment]\label{ttm} Let $\ell=\ell_1\ell_2^2$ with $\ell_1$ square-free. Then we have \begin{align*}
I_3(\ell;g)=&\,\frac{\eta_3(\ell;1)}{2^56!|\ell_1|^{1/2}}\sum_{j=0}^{6}\frac{\partial_u^j\eta_3}{\eta_3}(\ell;1)P_{3,j}\big(3g-d(\ell_1)\big)\\
&\qquad+\frac{\kappa_3(\ell;1,1)q^4}{(q-1) |\ell_1|^{1/2}}\sum_{N=3g-1}^{3g}\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\sum_{j=0}^{6-i_1-i_2}\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)R_{3,i_1,i_2,j}(\mathfrak{a},g+d)N^{i_2}\\
&\qquad\qquad+O_\varepsilon(|\ell_1|^{-3/4}q^{-g/4+\varepsilon g})+ O_\varepsilon\big(|\ell|^{1/2}q^{-g/2+\varepsilon g}\big), \end{align*} where the functions $\eta_3(\ell,u)$ and $\kappa_3(\ell;u,v)$ are defined in \eqref{eta} and \eqref{kappa3}. Here $P_{3,j}(x)$'s are some explicit polynomials of degrees $6-j$ for all $0\leq j\leq 6$. Also, $\mathfrak{a} \in \{0,1\}$ according to whether $N-d(\ell)$ is even or odd, and $R_{3,i_1,i_2,j}(\mathfrak{a},x)$ are some explicit polynomials in $x$ with degree $6-i_1-i_2-j$.
As for the leading term we have \begin{align*}
I_3(\ell;g)=\frac{\eta_3(\ell;1)}{2^56! |\ell_1|^{\frac{1}{2}}}&\Big(\big(3g-d(\ell_1)\big)^6-73\big(g+d(\ell_1)\big)^6+396g\big(g+d(\ell_1)\big)^5\\
&\qquad\qquad-540g^2\big(g+d(\ell_1)\big)^4\Big)+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big)+ O_\varepsilon\big(|\ell|^{1/2}q^{-g/4+\varepsilon g}\big). \end{align*} \end{theorem}
\section{Background in function fields}
We first give some background information on $L$-functions over function fields and their connection to zeta functions of curves.
Let $\pi_q(n)$ denote the number of monic, irreducible polynomials of degree $n$ over $\mathbb{F}_q[x]$. The Prime Polynomial Theorem states that \begin{equation*}
\pi_q(n) = \frac{1}{n} \sum_{d|n} \mu(d) q^{n/d}. \label{pnt} \end{equation*} We can rewrite the Prime Polynomial Theorem in the form \begin{equation*} \sum_{f \in \mathcal{M}_n} \Lambda(f) = q^n. \end{equation*}
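The two displays above can be checked by brute force for a small field. The following toy Python sketch (not part of the paper; $q=3$ is an arbitrary choice) counts monic irreducibles directly by sieving out products, and compares with the M\"obius-inversion formula:

```python
import itertools

# Brute-force check of the Prime Polynomial Theorem over F_q (toy
# verification): pi_q(n) = (1/n) sum_{d|n} mu(d) q^{n/d}.
q = 3  # work over F_3

def polymul(a, b):
    # multiply coefficient lists (lowest degree first) over F_q
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % q
    return out

def monics(n):
    # all monic polynomials of degree n, as coefficient tuples
    for lower in itertools.product(range(q), repeat=n):
        yield tuple(lower) + (1,)

def reducibles(n):
    # every reducible monic of degree n is g*h with 1 <= d(g) <= n/2
    red = set()
    for d in range(1, n // 2 + 1):
        for g in monics(d):
            for h in monics(n - d):
                red.add(tuple(polymul(list(g), list(h))))
    return red

def pi_count(n):
    return sum(1 for f in monics(n) if f not in reducibles(n))

def mu(n):
    # Moebius function, for the small n needed here
    return {1: 1, 2: -1, 3: -1}[n]

for n in range(1, 4):
    formula = sum(mu(d) * q ** (n // d)
                  for d in range(1, n + 1) if n % d == 0) // n
    assert pi_count(n) == formula
print("Prime Polynomial Theorem verified for q=3, n<=3")
```

For instance $\pi_3(3)=(3^3-3)/3=8$, matching the direct count of irreducible monic cubics over $\mathbb{F}_3$.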
\subsection{Quadratic Dirichlet $L$-functions over function fields}
For $\textrm{Re}(s)>1$, the zeta function of $\mathbb{F}_q[x]$ is defined by \[
\zeta_q(s):=\sum_{f\in\mathcal{M}}\frac{1}{|f|^s}=\prod_{P\in \mathcal{P}}\bigg(1-\frac{1}{|P|^s}\bigg)^{-1}. \] Since there are $q^n$ monic polynomials of degree $n$, we see that \[ \zeta_q(s)=\frac{1}{1-q^{1-s}}. \] It is sometimes convenient to make the change of variable $u=q^{-s}$, and then write $\mathcal{Z}(u)=\zeta_q(s)$, so that $$\mathcal{Z}(u)=\frac{1}{1-qu}.$$
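Explicitly, grouping monic polynomials by degree gives a geometric series:
\[
\zeta_q(s)=\sum_{n=0}^{\infty}\sum_{f\in\mathcal{M}_n}|f|^{-s}=\sum_{n=0}^{\infty}q^{n(1-s)}=\frac{1}{1-q^{1-s}},\qquad \textrm{Re}(s)>1,
\]
and substituting $u=q^{-s}$ gives $\mathcal{Z}(u)=\sum_{n\geq0}(qu)^n=1/(1-qu)$ for $|u|<1/q$.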
For $P$ a monic irreducible polynomial, the quadratic residue symbol $\big(\frac{f}{P}\big)\in\{0,\pm1\}$ is defined by \[
\Big(\frac{f}{P}\Big)\equiv f^{(|P|-1)/2}(\textrm{mod}\ P). \] If $Q=P_{1}^{\alpha_1}P_{2}^{\alpha_2}\ldots P_{r}^{\alpha_r}$, then the Jacobi symbol is defined by \[ \Big(\frac{f}{Q}\Big)=\prod_{j=1}^{r}\Big(\frac{f}{P_j}\Big)^{\alpha_j}. \] The Jacobi symbol satisfies the quadratic reciprocity law. That is to say if $A,B\in \mathbb{F}_q[x]$ are relatively prime, monic polynomials, then \[ \Big(\frac{A}{B}\Big)=(-1)^{(q-1)d(A)d(B)/2}\Big(\frac{B}{A}\Big). \] As we are assuming $q\equiv 1(\textrm{mod}\ 4)$, the quadratic reciprocity law gives $\big(\frac{A}{B}\big)=\big(\frac{B}{A}\big)$, a fact we will use throughout the paper.
For $D$ monic, we define the character \[ \chi_D(g)=\Big(\frac{D}{g}\Big), \] and consider the $L$-function attached to $\chi_D$, \[
L(s,\chi_D):=\sum_{f\in\mathcal{M}}\frac{\chi_D(f)}{|f|^s}. \] With the change of variable $u=q^{-s}$ we have \begin{equation*} \mathcal{L}(u,\chi_D):=L(s,\chi_D)=\sum_{f\in\mathcal{M}}\chi_D(f)u^{d(f)}=\prod_{P\in \mathcal{P}}\big(1-\chi_D(P)u^{d(P)}\big)^{-1}. \end{equation*} For $D\in\mathcal{H}_{2g+1}$, $\mathcal{L}(u,\chi_D)$ is a polynomial in $u$ of degree $2g$ and it satisfies a functional equation \begin{equation*} \mathcal{L}(u,\chi_D)=(qu^2)^g\mathcal{L}\Big(\frac{1}{qu},\chi_D\Big). \end{equation*}
There is a connection between $\mathcal{L}(u,\chi_D)$ and the zeta functions of curves. For $D\in\mathcal{H}_{2g+1}$, the affine equation $y^2=D(x)$ defines a projective and connected hyperelliptic curve $C_D$ of genus $g$ over $\mathbb{F}_q$. The zeta function of the curve $C_D$ is defined by \[ Z_{C_D}(u)=\exp\bigg(\sum_{j=1}^{\infty}N_j(C_D)\frac{u^j}{j}\bigg), \] where $N_j(C_D)$ is the number of points on $C_D$ over $\mathbb{F}_{q^j}$, including the point at infinity. Weil [\textbf{\ref{W}}] showed that \[ Z_{C_D}(u)=\frac{P_{C_D}(u)}{(1-u)(1-qu)}, \]
where $P_{C_D}(u)$ is a polynomial of degree $2g$. It is known that $P_{C_D}(u)=\mathcal{L}(u,\chi_D)$ (this was proved in Artin's thesis). The Riemann Hypothesis for curves over finite fields was proved by Weil [\textbf{\ref{W}}], so all the zeros of $\mathcal{L}(u,\chi_D)$ lie on the circle $|u|=q^{-1/2}$.
\subsection{Preliminary lemmas}
The first three lemmas are in [\textbf{\ref{F1}}; Lemma 2.2, Proposition 3.1 and Lemma 3.2].
\begin{lemma}\label{L1} For $f\in\mathcal{M}$ we have \[
\sum_{D\in\mathcal{H}_{2g+1}}\chi_D(f)=\sum_{C|f^\infty}\sum_{h\in\mathcal{M}_{2g+1-2d(C)}}\chi_f(h)-q\sum_{C|f^\infty}\sum_{h\in\mathcal{M}_{2g-1-2d(C)}}\chi_f(h), \] where the summations over $C$ are over monic polynomials $C$ whose prime factors are among the prime factors of $f$. \end{lemma}
We define the generalized Gauss sum as \[ G(V,\chi):= \sum_{u (\textrm{mod}\ f)}\chi(u)e\Big(\frac{uV}{f}\Big), \] where the exponential was defined in [\textbf{\ref{Hayes}}] as follows. For $a \in \mathbb{F}_q\big((\frac 1x)\big) $, $$ e(a) = e^{ 2 \pi i \text{Tr}_{\mathbb{F}_q / \mathbb{F}_p} (a_1)/p},$$ where $a_1$ is the coefficient of $1/x$ in the Laurent expansion of $a$.
\begin{lemma}\label{L3} Let $f\in\mathcal{M}_n$. If $n$ is even then \[
\sum_{h\in\mathcal{M}_m}\chi_f(h)=\frac{q^m}{|f|}\bigg(G(0,\chi_f)+q\sum_{V\in\mathcal{M}_{\leq n-m-2}}G(V,\chi_f)-\sum_{V\in\mathcal{M}_{\leq n-m-1}}G(V,\chi_f)\bigg), \] otherwise \[
\sum_{h\in\mathcal{M}_m}\chi_f(h)= \frac{q^{m+1/2}} {|f|}\sum_{V\in\mathcal{M}_{n-m-1}}G(V,\chi_f). \] \end{lemma}
\begin{lemma}\label{L2}
\begin{enumerate} \item If $(f,h)=1$, then $G(V, \chi_{fh})= G(V, \chi_f) G(V,\chi_h)$. \item Write $V= V_1 P^{\alpha}$ where $P \nmid V_1$. Then
$$G(V , \chi_{P^j})= \begin{cases} 0 & \mbox{if } j \leq \alpha \text{ and } j \text{ odd,} \\ \varphi(P^j) & \mbox{if } j \leq \alpha \text{ and } j \text{ even,} \\
-|P|^{j-1} & \mbox{if } j= \alpha+1 \text{ and } j \text{ even,} \\
\chi_P(V_1) |P|^{j-1/2} & \mbox{if } j = \alpha+1 \text{ and } j \text{ odd, } \\ 0 & \mbox{if } j \geq 2+ \alpha . \end{cases}$$ \end{enumerate} \end{lemma}
\begin{lemma}\label{L5} For $\ell\in\mathcal{M}$ a square polynomial we have \[
\frac{1}{| \mathcal{H}_{2g+1}|}\sum_{D \in \mathcal{H}_{2g+1}} \chi_D(\ell)=\prod_{\substack{P\in\mathcal{P}\\P|\ell}}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O(q^{-2g}). \] \end{lemma} \begin{proof} See [\textbf{\ref{BF}}; Lemma 3.7]. \end{proof}
We also have the following estimate. \begin{lemma}[P\'olya--Vinogradov inequality] \label{nonsq} For $\ell\in\mathcal{M}$ not a square polynomial, let $\ell= \ell_1 \ell_2^2$ with $\ell_1$ square-free. Then we have
$$\bigg| \sum_{D \in \mathcal{H}_{2g+1}} \chi_D(\ell) \bigg| \ll_\varepsilon q^{g} | \ell_1|^{\varepsilon}.$$ \end{lemma} \begin{proof} As in the proof of Lemma $2.2$ in [\textbf{\ref{F1}}], using Perron's formula we have
$$ \sum_{D \in \mathcal{H}_{2g+1}} \chi_D( \ell) = \frac{1}{2 \pi i} \oint_{|u|=r}\mathcal{L} (u,\chi_{\ell})\prod_{P | \ell} \Big(1-u^{2d(P)}\Big)^{-1} \frac{(1-qu^2) du}{ u^{2g+2}},$$ where we pick $r = q^{-1/2}$. If we write $\ell= \ell_1 \ell_2^2$ with $\ell_1$ square-free, then
$$ \mathcal{L}(u,\chi_{\ell})= \mathcal{L}(u,\chi_{\ell_1}) \prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \Big(1-u^{d(P)}\chi_{\ell_1}(P) \Big).$$ Now we use the Lindel\"{o}f bound for $\mathcal{L}(u,\chi_{\ell_1})$ (see Theorem $3.4$ in [\textbf{\ref{altug}}]),
$$ \mathcal{L}(u,\chi_{\ell_1}) \ll |\ell_1|^{\varepsilon},$$ in the integral above and the conclusion follows. \end{proof}
\begin{lemma}[Mertens' theorem] \label{mertens} We have
$$ \prod_{d(P) \leq X} \bigg( 1-\frac{1}{|P|} \bigg)^{-1} = e^{\gamma} X + O(1),$$ where $\gamma$ is the Euler constant. \end{lemma} \begin{proof} A more general version of Mertens' estimate was proved in [\textbf{\ref{R}}; Theorem 3]. Here we give a simpler proof in the above form for completeness.
Using the Prime Polynomial Theorem,
$$ \sum_{d(P) \leq X} \frac{d(P)}{|P|} = X+O(1),$$ and hence by partial summation, we get that
$$ \sum_{d(P) \leq X} \frac{1}{|P|} = \log X + c + O \Big( \frac{1}{X} \Big)$$ for some constant $c$. Then \begin{align*}
\sum_{d(P) \leq X} \log &\bigg( 1- \frac{1}{|P|} \bigg)^{-1} = \sum_{d(P) \leq X} \frac{1}{|P|} + \sum_{d(P) \leq X} \sum_{j=2}^{\infty} \frac{1}{j|P|^j}\\
&= \log X +c + \sum_{P\in\mathcal{P}} \sum_{j=2}^{\infty} \frac{1}{j|P|^j} - \sum_{d(P) >X} \sum_{j=2}^{\infty} \frac{1}{j|P|^j} + O \Big( \frac{1}{X} \Big) \\ &= \log X + C +O \Big( \frac{1}{X} \Big),
\end{align*} where $C = c + \sum_{P\in\mathcal{P}} \sum_{j=2}^{\infty} \frac{1}{j|P|^j}$. Exponentiating and using the fact that for $x<1$, $e^x=1+O(x)$, we get that
$$ \prod_{d(P) \leq X} \bigg( 1-\frac{1}{|P|} \bigg)^{-1} = e^{C} X + O(1),$$ and it remains to show that $C=\gamma$.
Now by the Prime Polynomial Theorem,
$$ \sum_{\substack{f\in\mathcal{M}\\ d(f) \leq X}} \frac{\Lambda(f)}{|f| d(f)} = \sum_{n \leq X} \frac{1}{n} = \log X + \gamma + O \Big( \frac{1}{X} \Big).$$ Combining the formulas above, we also have that \begin{align*}
\sum_{\substack{f\in\mathcal{M}\\ d(f) \leq X}} \frac{\Lambda(f)}{|f| d(f)} &= \sum_{d(P) \leq X} \frac{1}{|P|} + \sum_{P\in\mathcal{P}}\sum_{2\leq j\leq X/d(P)} \frac{1}{j |P|^j} \\
&= \log X + c+ \sum_{P\in\mathcal{P}} \sum_{j=2}^{\infty} \frac{1}{j|P|^j} - \sum_{j=2}^{\infty} \sum_{d(P) > X/j} \frac{1}{j|P|^j} + O \Big( \frac{1}{X} \Big) \\ &= \log X + C + O \Big( \frac{1}{X}\Big). \end{align*} Using the previous two identities, it follows that $C = \gamma$, which finishes the proof. \end{proof}
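The Mertens asymptotic above can be illustrated numerically, counting the monic irreducibles of degree $n$ over $\mathbb{F}_q$ by the usual formula $\pi_q(n)=\frac1n\sum_{d\mid n}\mu(d)q^{n/d}$. The sketch below (a sanity check, not part of the proof; names are ours) confirms that the ratio of the partial product to $e^\gamma X$ drifts toward $1$ with an $O(1/X)$ error, independently of $q$.

```python
from math import exp, log

GAMMA = 0.5772156649015329  # Euler's constant

def mobius(n):
    """Moebius function by trial division (n small)."""
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            mu = -mu
        d += 1
    return -mu if n > 1 else mu

def pi_q(n, q):
    """Number of monic irreducibles of degree n over F_q."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def mertens_product(X, q):
    """prod_{d(P) <= X} (1 - 1/|P|)^{-1}, grouping primes by degree."""
    s = 0.0
    for n in range(1, X + 1):
        s += pi_q(n, q) * (-log(1.0 - q ** (-n)))
    return exp(s)

q = 5
ratios = [mertens_product(X, q) / (exp(GAMMA) * X) for X in (5, 10, 20)]
assert ratios[0] > ratios[1] > ratios[2]  # the O(1/X) relative error shrinks
assert abs(ratios[2] - 1) < 0.05
```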
\section{Hybrid Euler-Hadamard product}
We start with an explicit formula.
\begin{lemma}\label{explicit} Let $u(x)$ be a real, non-negative, $C^{\infty}$ function with mass $1$ and compactly supported on $[q,q^{1+1/X}]$. Let $v(t)=\int_{t}^{\infty}u(x)dx$ and let $\widetilde{u}$ be the Mellin transform of $u$. Then for $s$ not a zero of $L(s,\chi_D)$ we have \begin{eqnarray*}
&&-\frac{L'}{L}(s,\chi_D)=\sum_{f\in\mathcal{M}}\frac{(\log q)\Lambda(f)\chi_D(f)}{|f|^s}v\big(q^{d(f)/ X}\big)-\sum_{\rho}\frac{\widetilde{u}\big(1-(s-\rho) X\big)}{s-\rho}, \end{eqnarray*} where the sum over $\rho$ runs over all the zeros of $L(s,\chi_D)$. \end{lemma}
This lemma can be proved in a familiar way [\textbf{\ref{BH}}], beginning with the integral \begin{equation*} -\frac{1}{2\pi i}\int_{(c)}\frac{L'}{L}(s+z,\chi_D)\widetilde{u}(1+zX)\frac{dz}{z}, \end{equation*} where $c=\max\{2,2-\textrm{Re}(s)\}$.
Following the arguments in [\textbf{\ref{GHK}}], we can integrate the formula in Lemma \ref{explicit} to give a formula for $L(s,\chi_D)$: for $s$ not equal to one of the zeros and $\textrm{Re}(s)\geq0$ we have \begin{eqnarray}\label{explicitintegrate}
L(s,\chi_D)=\exp\bigg(\sum_{f\in\mathcal{M}}\frac{\Lambda(f)\chi_D(f)}{|f|^sd(f)}v\big(q^{d(f)/ X}\big)\bigg)Z_{X}(s,\chi_D). \end{eqnarray} To remove this restriction on $s$, we note that $\exp\big(-U(z)\big)$ may be interpreted as asymptotic to $Cz$ for some constant $C$ as $z\rightarrow0$, so both sides of \eqref{explicitintegrate} vanish at the zeros. Thus \eqref{explicitintegrate} holds for all $\textrm{Re}(s)\geq0$. Furthermore, since $v(q^{d(f)/X})=1$ for $d(f)\leq X$ and $v(q^{d(f)/X})=0$ for $d(f)\geq X+1$, the first factor in \eqref{explicitintegrate} is precisely $P_{X}(s,\chi_D)$, and that completes the proof of Theorem \ref{HEH}.
\section{Moments of the partial Euler product}\label{PXchi}
Recall that \begin{equation*}
P_{X}(s,\chi_D)=\exp\bigg( \sum_{\substack{f\in\mathcal{M}\\d(f)\leq X}}\frac{\Lambda(f)\chi_D(f)}{|f|^{s}d(f)}\bigg). \end{equation*} We first show that we can approximate $P_X(s,\chi_D)^k$ by \begin{equation*}
P_{k,X}^{*}(s,\chi_D)=\prod_{d(P)\leq X/2}\bigg( 1-\frac{\chi_D(P)}{|P|^{s}}\bigg) ^{-k}\prod_{X/2<d(P)\leq X}\bigg( 1+\frac{k\chi_{D}(P)}{|P|^s}+\frac{k^2\chi_{D}(P)^2}{2|P|^{2s}}\bigg) \end{equation*} for any $k\in\mathbb{R}$.
\begin{lemma}\label{PP*} For any $k\in\mathbb{R}$ we have \begin{equation*} P_{X}(s,\chi_D)^{k}=\Big(1+O_{k}\big(q^{-X/6}/X\big)\Big)P_{k,X}^{*}(s,\chi_D) \end{equation*} uniformly for $\emph{Re}(s)=\sigma\geq1/2$. \end{lemma} \begin{proof} For any $P\in\mathcal{P}$ we let $N_{P}=\lfloor X/d(P)\rfloor$, the integer part of $X/d(P)$. Then we have \begin{equation*}
P_{X}(s,\chi_D)^{k}=\exp\bigg(k\sum_{d(P)\leq X}\sum_{1\leq j\leq N_{P}}\frac{\chi_D(P^j)}{j|P|^{js}}\bigg) \end{equation*} and \begin{align*}
P_{k,X}^{*}(s,\chi_D)=&\exp\bigg(k\sum_{d(P)\leq X/2}\sum_{j=1}^{\infty}\frac{\chi_D(P)^j}{j|P|^{js}}+\sum_{X/2<d(P)\leq X}\frac{k\chi_D(P)}{|P|^{s}}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+O_k\bigg(\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3\sigma}}\bigg)\bigg). \end{align*} We note that $N_{P}=1$ for $X/2<d(P)\leq X$, so \begin{equation*}
P_{X}(s,\chi_D)^{k}P_{k,X}^{*}(s,\chi_D)^{-1}=\exp\bigg(-k\sum_{d(P)\leq X/2}\sum_{j>N_{P}}\frac{\chi_D(P)^j}{j|P|^{js}}+O_k\bigg(\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3\sigma}}\bigg)\bigg). \end{equation*} The expression in the exponent is \begin{eqnarray*}
&\ll_k&\sum_{d(P)\leq X/2}\frac{1}{|P|^{\sigma(N_{P}+1)}}+\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3\sigma}}\nonumber\\
&\ll_k&\sum_{j=2}^{X}\sum_{X/(j+1)<d(P)\leq X/j}\frac{1}{|P|^{(j+1)/2}}+\sum_{X/2<d(P)\leq X}\frac{1}{|P|^{3/2}}\\ &\ll_k&\sum_{j=2}^{X}\frac{jq^{-(j-1)X/(2(j+1))}}{X}+\frac{q^{-X/4}}{X}\ll_k\frac{q^{-X/6}}{X}. \end{eqnarray*} Hence $P_{X}(s,\chi_D)^{k}P_{k,X}^{*}(s,\chi_D)^{-1}=1+O_{k}\big(q^{-X/6}/X\big)$ as claimed. \end{proof}
Next we write $P_{k,X}^{*}(s,\chi_{D})$ as a Dirichlet series \begin{equation*}
\sum_{\ell\in\mathcal{M}}\frac{\alpha_{k}(\ell)\chi_{D}(\ell)}{|\ell|^s}=\prod_{d(P)\leq X/2}\bigg( 1-\frac{\chi_D(P)}{|P|^s}\bigg) ^{-k}\prod_{X/2<d(P)\leq X}\bigg(1+\frac{k\chi_{D}(P)}{|P|^s}+\frac{k^2\chi_{D}(P)^2}{2|P|^{2s}}\bigg) . \end{equation*} We note that $\alpha_{k}(\ell)\in\mathbb{R}$, and if we denote by $S(X)$ the set of $X$-smooth polynomials, i.e. \begin{displaymath}
S(X)=\{\ell\in\mathcal{M}:P|\ell\rightarrow d(P)\leq X\}, \end{displaymath}
then $\alpha_{k}(\ell)$ is multiplicative, and $\alpha_{k}(\ell)=0$ if $\ell\notin S(X)$. We also have $|\alpha_{k}(\ell)|\leq\tau_{|k|}(\ell)$ for all $\ell\in\mathcal{M}$, with $\alpha_k(\ell)\geq0$ when $k\geq0$. Moreover, $\alpha_{k}(\ell)=\tau_{k}(\ell)$ if $\ell\in S(X/2)$, and $\alpha_{k}(P)=k$ and $\alpha_{k}(P^2)=k^2/2$ for all $P\in\mathcal{P}$ with $X/2<d(P)\leq X$.
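The local relations between $\alpha_k$ and $\tau_{k}$ recorded above are easy to confirm directly: on prime powers, $\tau_k(P^j)=\binom{k+j-1}{j}$ is the $j$-th Taylor coefficient of $(1-t)^{-k}$, while the truncated Euler factor contributes the coefficients $1,k,k^2/2,0,\dots$. The following sketch (a numerical sanity check, not part of the argument; the function name is ours) verifies the coefficient-by-coefficient domination $|\alpha_k|\leq\tau_{|k|}$ for a positive and a negative non-integer $k$.

```python
def tau_coeffs(k, J):
    """Taylor coefficients of (1-t)^{-k}: tau_k(P^j) = binom(k+j-1, j), 0 <= j <= J."""
    c = [1.0]
    for j in range(1, J + 1):
        c.append(c[-1] * (k + j - 1) / j)
    return c

for k in (1.5, -0.8):
    tau_k = tau_coeffs(k, 6)
    tau_abs = tau_coeffs(abs(k), 6)
    # local alpha_k at a prime with X/2 < d(P) <= X: coefficients 1, k, k^2/2, 0, 0, ...
    alpha = [1.0, k, k * k / 2, 0.0, 0.0, 0.0, 0.0]
    # |alpha_k| <= tau_{|k|} and |tau_k| <= tau_{|k|}, coefficient by coefficient
    assert all(abs(a) <= t + 1e-12 for a, t in zip(alpha, tau_abs))
    assert all(abs(t) <= ta + 1e-12 for t, ta in zip(tau_k, tau_abs))
    assert alpha[1] == tau_k[1]  # alpha_k(P) = k = tau_k(P) for X/2 < d(P) <= X
```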
We now truncate the series, for $s=1/2$, at $d(\ell)\leq \vartheta g$. From the Prime Polynomial Theorem we have \begin{align}\label{truncation}
\bigg|\sum_{\substack{\ell\in S(X)\\ d(\ell)> \vartheta g}}\frac{\alpha_{k}(\ell)\chi_{D}(\ell)}{|\ell|^{1/2}}\bigg|&\leq \sum_{\ell\in S(X)}\frac{\tau_{|k|}(\ell)}{|\ell|^{1/2}}\Big(\frac{|\ell|}{q^{\vartheta g}}\Big)^{c/4}=q^{-c\vartheta g/4}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|^{(2-c)/4}}\bigg)^{-|k|}\nonumber\\
&\ll q^{-c\vartheta g/4}\exp\bigg(O_k\Big(\sum_{d(P)\leq X}\frac{1}{|P|^{(2-c)/4}}\Big)\bigg)\nonumber\\ &\ll q^{-c\vartheta g/4}\exp\bigg(O_k\Big(\frac{q^{(2+c)X/4}}{X}\Big)\bigg)\ll_\varepsilon q^{-c\vartheta g/4+\varepsilon g}, \end{align} as $X\leq (2-c)\log g/\log q$. Hence \begin{equation}\label{P*}
P_{k,X}^{*}(\chi_{D}):=P_{k,X}^{*}(\tfrac12,\chi_{D})=\sum_{\substack{\ell\in S(X)\\ d(\ell)\leq\vartheta g}}\frac{\alpha_{k}(\ell)\chi_{D}(\ell)}{|\ell|^{1/2}}+O_\varepsilon(q^{-c\vartheta g/4+\varepsilon g}) \end{equation} for all $k\in\mathbb{R}$ and $\vartheta>0$, and it follows that \begin{equation*}
\Big\langle P_{k,X}^{*}(\chi_{D}) \Big\rangle_{\mathcal{H}_{2g+1}}=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{\substack{\ell\in S(X)\\ d(\ell)\leq\vartheta g}}\frac{\alpha_{k}(\ell)}{|\ell|^{1/2}}\sum_{D\in\mathcal{H}_{2g+1}} \chi_{D}(\ell)+O_\varepsilon\big(q^{-c\vartheta g/4+\varepsilon g}\big). \end{equation*}
We first consider the contribution of the terms with $\ell=\square$. Denote this by $I(\ell=\square)$. By Lemma \ref{L5}, \begin{displaymath}
I\big(\ell=\square\big)=\sum_{\substack{\ell\in S(X)\\ d(\ell)\leq\vartheta g/2}}\frac{\alpha_{k}(\ell^2)}{|\ell|}\prod_{\substack{P\in\mathcal{P}\\P|\ell}}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O_\varepsilon\big(q^{-2g+\varepsilon g}\big). \end{displaymath} The sum can be extended to all $\ell\in S(X)$ since, as in \eqref{truncation}, \begin{displaymath}
\sum_{\substack{\ell\in S(X)\\ d(\ell)>\vartheta g/2}}\frac{\alpha_{k}(\ell^2)}{|\ell|}\prod_{\substack{P\in\mathcal{P}\\P|\ell}}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\ll_\varepsilon q^{-\vartheta g/4+\varepsilon g}. \end{displaymath} So, using the multiplicativity of $\alpha_k(\ell)$ and Lemma \ref{mertens}, \begin{align*}
I\big(&\ell=\square\big)=\prod_{d(P)\leq X}\bigg(1+\sum_{j=1}^{\infty}\frac{\alpha_{k}(P^{2j})}{|P|^{j-1}(1+|P|)}\bigg)+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big)\\
&=\prod_{d(P)\leq X/2}\bigg(1+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^{j-1}(1+|P|)}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1+\frac{k^2}{2(1+|P|)}+O_\varepsilon\big(|P|^{-2+\varepsilon}\big)\bigg)\\ &\qquad\qquad+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big)\prod_{d(P)\leq X/2}\bigg[\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^{j-1}(1+|P|)}\bigg)\bigg]\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)^{-k(k+1)/2}\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{-k^2/2}+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big)\\ &=\bigg(1+O\Big(\frac 1X\Big)\bigg)2^{-k/2}\mathcal{A}_k\big(e^\gamma X\big)^{k(k+1)/2}+O_\varepsilon\big(q^{-\vartheta g/4+\varepsilon g}\big). \end{align*}
Now we consider the contribution from $\ell \neq \square$, which we denote by $I ( \ell \neq \square)$. Using Lemma \ref{nonsq} we have that
$$I(\ell \neq \square) \ll q^{-g} \sum_{\ell \in S(X)} \frac{ \tau_{|k|}(\ell)}{|\ell|^{1/2-\varepsilon}}.$$ As in \eqref{truncation}, \begin{align*}
\sum_{\ell \in S(X)} \frac{ \tau_{|k|}(\ell)}{|\ell|^{1/2-\varepsilon}}& \ll \prod_{d(P) \leq X} \bigg( 1- \frac{1}{|P|^{1/2-\varepsilon}} \bigg)^{-|k|}\ll \exp \bigg( O_k\Big( \sum_{d(P) \leq X} \frac{1}{|P|^{1/2 -\varepsilon}}\Big) \bigg)\\ &\ll \exp \bigg( O_k\Big(\frac{q^{(1/2+\varepsilon)X}}{X}\Big) \bigg)\ll_\varepsilon q^{\varepsilon g}. \end{align*} Hence $$I( \ell \neq \square) \ll_\varepsilon q^{-g+\varepsilon g},$$ and we obtain the theorem.
\section{Twisted moments of $L(\frac12,\chi_D)$} \label{twist} In this section, we are interested in the $k$-th twisted moment \[
I_k(\ell;g)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{D\in\mathcal{H}_{2g+1}}L(\tfrac12,\chi_D)^k\chi_D(\ell). \]
We first recall the approximate functional equation, \[
L(\tfrac12,\chi_D)^k=\sum_{f\in\mathcal{M}_{\leq kg}}\frac{\tau_k(f)\chi_D(f)}{|f|^{1/2}}+\sum_{f\in\mathcal{M}_{\leq kg-1}}\frac{\tau_k(f)\chi_D(f)}{|f|^{1/2}} \] for any $k\in\mathbb{N}$. So \[ I_k(\ell;g)=S_{k}(\ell;kg)+S_{k}(\ell;kg-1), \] where \[
S_k(\ell;N)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{f\in\mathcal{M}_{\leq N}}\frac{\tau_k(f)}{|f|^{1/2}}\sum_{D\in\mathcal{H}_{2g+1}}\chi_D(f\ell) \] for $N\in\{kg,kg-1\}$. Define \begin{align*}
S_{k,1}(\ell;N)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{f\in\mathcal{M}_{\leq N}}\frac{\tau_k(f)}{|f|^{1/2}}\sum_{C|(f\ell)^\infty}\sum_{h\in\mathcal{M}_{2g+1-2d(C)}}\chi_{f\ell}(h) \end{align*} and \[
S_{k,2}(\ell;N)=\frac{1}{|\mathcal{H}_{2g+1}|}\sum_{f\in\mathcal{M}_{\leq N}}\frac{\tau_k(f)}{|f|^{1/2}}\sum_{C|(f\ell)^\infty}\sum_{h\in\mathcal{M}_{2g-1-2d(C)}}\chi_{f\ell}(h) \] so that, in view of Lemma \ref{L1}, \[ S_k(\ell;N)=S_{k,1}(\ell;N)-qS_{k,2}(\ell;N). \]
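The divisor function $\tau_k$ appearing in the approximate functional equation counts ordered factorizations of $f$ into $k$ monic factors, so $\sum_{f\in\mathcal{M}_n}\tau_k(f)=\binom{n+k-1}{k-1}q^n$, the coefficient of $u^n$ in $\mathcal{Z}(u)^k=(1-qu)^{-k}$. The sketch below (a brute-force sanity check over $\mathbb{F}_2$, not part of the argument; all names are ours) confirms this by explicit enumeration, together with two individual values of $\tau_k$.

```python
from itertools import product
from math import comb

q = 2  # enumerate over F_2; polynomials are coefficient tuples, index = degree

def monics(n):
    """All monic polynomials of degree n over F_q."""
    return [c + (1,) for c in product(range(q), repeat=n)]

def polmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return tuple(out)

def tau_table(k, n):
    """tau_k(f) for all monic f of degree n: number of ordered k-tuples of
    monic polynomials with product f (k = 1 is the indicator of monic)."""
    if k == 1:
        return {f: 1 for f in monics(n)}
    table = {}
    for d in range(n + 1):
        for g_poly, t in tau_table(k - 1, d).items():
            for h_poly in monics(n - d):
                f = polmul(g_poly, h_poly)
                table[f] = table.get(f, 0) + t
    return table

t2 = tau_table(2, 4)
assert sum(t2.values()) == comb(4 + 1, 1) * q ** 4  # coeff of u^4 in (1-qu)^{-2}
assert t2[(0, 0, 0, 0, 1)] == 5                     # tau_2(x^4): divisors 1, x, ..., x^4
assert t2[(1, 1, 0, 0, 1)] == 2                     # x^4 + x + 1 is irreducible over F_2

t3 = tau_table(3, 3)
assert sum(t3.values()) == comb(3 + 2, 2) * q ** 3  # coeff of u^3 in (1-qu)^{-3}
assert t3[(0, 0, 0, 1)] == 10                       # tau_3(x^3) = #{a+b+c=3} = C(5,2)
```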
We further write \[ S_{k,1}(\ell;N)=S_{k,1}^{\textrm{o}}(\ell;N)+S_{k,1}^{\textrm{e}}(\ell;N) \] according to whether the degree of the product $f\ell$ is odd or even, respectively. Lemma \ref{L3} and Lemma \ref{L2} then lead to \begin{align}
S_{k,1}^{\textrm{o}}(\ell;N)=\frac{q^{3/2}}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\d(f\ell)\ \textrm{odd}}}\frac{\tau_k(f)}{|f|^{3/2}}\sum_{\substack{C|(f\ell)^\infty\\d(C)\leq g}}\frac{1}{|C|^2}\sum_{V\in\mathcal{M}_{d(f\ell)-2g-2+2d(C)}}G(V,\chi_{f\ell}) \label{s1odd} \end{align} and \[ S_{k,1}^{\textrm{e}}(\ell;N)=M_{k,1}(\ell;N)+S_{k,1}^{\textrm{e}}(\ell;N;V\ne0), \] where \begin{align}\label{M}
M_{k,1}(\ell;N)=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\f\ell=\square}}\frac{\tau_k(f)\varphi(f\ell)}{|f|^{3/2}}\sum_{\substack{C|(f\ell)^\infty\\d(C)\leq g}}\frac{1}{|C|^2} \end{align} and \begin{align}\label{Ske}
S_{k,1}^{\textrm{e}}(\ell;N;V\ne0)&=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\d(f\ell)\ \textrm{even}}}\frac{\tau_k(f)}{|f|^{3/2}}\sum_{\substack{C|(f\ell)^\infty\\d(C)\leq g}}\frac{1}{|C|^2}\\ &\bigg(q\sum_{V\in\mathcal{M}_{\leq d(f\ell)-2g-3+2d(C)}}G(V,\chi_{f\ell})-\sum_{V\in\mathcal{M}_{\leq d(f\ell)-2g-2+2d(C)}}G(V,\chi_{f\ell})\bigg).\nonumber \end{align} We also decompose \[ S_{k,1}^{\textrm{e}}(\ell;N;V\ne0)=S_{k,1}^{\textrm{e}}(\ell;N;V=\square)+S_{k,1}^{\textrm{e}}(\ell;N;V\ne\square) \] correspondingly to whether $V$ is a square or not.
We treat $S_{k,2}(\ell;N)$ similarly and define the functions $S_{k,2}^{\textrm{o}}(\ell;N)$, $M_{k,2}(\ell;N)$, $S_{k,2}^{\textrm{e}}(\ell;N;V=\square)$ and $S_{k,2}^{\textrm{e}}(\ell;N;V\ne\square)$ in the same way. Further denote \[ M_{k}(\ell;N)=M_{k,1}(\ell;N)-qM_{k,2}(\ell;N),\qquad M_{k}(\ell)=M_{k}(\ell;kg)+M_{k}(\ell;kg-1) \] and $$S_{k}^{\textrm{e}}(\ell;V=\square)=S_{k}^{\textrm{e}}(\ell;kg;V=\square)+S_{k}^{\textrm{e}}(\ell;kg-1;V=\square),$$ where $$S_{k}^{\textrm{e}}(\ell;N;V=\square)=S_{k,1}^{\textrm{e}}(\ell;N;V=\square)-qS_{k,2}^{\textrm{e}}(\ell;N;V=\square).$$
We shall next consider $M_{k}(\ell)$. The term $S_{k}^{\textrm{e}}(\ell;V=\square)$ also contributes to the main term and will be evaluated in Section \ref{Vsquare}. We will see that, for $k=1$, it combines nicely with the contribution from $M_{k}(\ell)$. For the terms $S_{k,1}^{\textrm{o}}(\ell;N)$ and $S_{k,2}^{\textrm{o}}(\ell;N)$, we note that the summations over $V$ are over odd degree polynomials, so $V\ne\square$ in these cases. Let $S_{k}^{\textrm{o}}(\ell;N) = S_{k,1}^{\textrm{o}}(\ell;N)-qS_{k,2}^{\textrm{o}}(\ell;N)$, $S_{k}^{\textrm{e}}(\ell;N;V\ne\square)=S_{k,1}^{\textrm{e}}(\ell;N;V\ne\square)-qS_{k,2}^{\textrm{e}}(\ell;N;V\ne\square)$ and \begin{equation}\label{Sknonsquare} S_k(\ell;N;V \neq \square) = S_{k}^{\textrm{e}}(\ell;N;V\ne\square) +S_{k}^{\textrm{o}}(\ell;N) \end{equation} be the total contribution from $V \neq \square$. We will bound $S_k(\ell;N;V \neq \square)$ in Section \ref{Vnonsquare}.
\subsection{Evaluate $M_{k}(\ell)$} \label{main}
We first note that the sum over $C$ in \eqref{M} can be extended to all $C|(f\ell)^\infty$ with the cost of an error of size $O_\varepsilon(q^{N/2-2g+\varepsilon g})=O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big)$, as \begin{equation}\label{Cextend}
\sum_{\substack{C|(f\ell)^\infty\\d(C)> g}}\frac{1}{|C|^2}\ll \sum_{C|(f\ell)^\infty}\frac{1}{|C|^2}\Big(\frac{|C|}{q^{g}}\Big)^{2-\varepsilon}=q^{-2g+\varepsilon g}\prod_{P|f\ell}\bigg(1-\frac{1}{|P|^\varepsilon}\bigg)^{-1}. \end{equation} So \begin{align*}
M_{k,1}(\ell;N)&=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\f\ell=\square}}\frac{\tau_k(f)\varphi(f\ell)}{|f|^{3/2}}\prod_{P|f\ell}\bigg(1-\frac{1}{|P|^2}\bigg)^{-1}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big)\\
&=\frac{q}{(q-1)}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\f\ell=\square}}\frac{\tau_k(f)}{|f|^{1/2}}\prod_{P|f\ell}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big). \end{align*} The condition $f\ell=\square$ implies that $f=f_1^2\ell_1$ for some $f_1\in\mathcal{M}$. Hence \begin{align*}
M_{k,1}(\ell;N)=&\frac{q}{(q-1)|\ell_1|^{1/2}}\sum_{2d(f)\leq N-d(\ell_1)}\frac{\tau_k(f^2\ell_1)}{|f|}\prod_{P|f\ell_1\ell_2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big). \end{align*}
We are going to use an analogue of the Perron formula in the following form \[
\sum_{2n\leq N}a(n)=\frac{1}{2\pi i}\oint_{|u|=r}\Big(\sum_{n=0}^{\infty}a(n)u^{2n}\Big)\frac{du}{u^{N+1}(1-u)}, \]
provided that the power series $\sum_{n=0}^{\infty}a(n)u^n$ is absolutely convergent in $|u|\leq r<1$. Hence \begin{align*}
M_{k,1}(\ell;N)=\frac{q}{(q-1)|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\frac{\mathcal{F}_k(u)du}{u^{N-d(\ell_1)+1}(1-u)}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big) \end{align*} for any $r<1$, where \begin{align*}
\mathcal{F}_k(u)&=\sum_{f\in\mathcal{M}}\frac{\tau_k(f^2\ell_1)}{|f|}\prod_{P|f\ell_1\ell_2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}u^{2d(f)}. \end{align*} Now by multiplicativity we have \[ \mathcal{F}_k(u)=\eta_k(\ell;u)\mathcal{Z}\Big(\frac{u^2}{q}\Big)^{k(k+1)/2}, \] where \begin{equation}\label{eta}
\eta_k(\ell;u)=\prod_{P\in\mathcal{P}}\mathcal{A}_{k,P}(u)\prod_{P|\ell_1}\mathcal{B}_{k,P}(u)\prod_{\substack{P\nmid\ell_1\\P|\ell_2}}\mathcal{C}_{k,P}(u) \end{equation} with \begin{align*}
&\mathcal{A}_{k,P}(u)=\bigg(1-\frac{u^{2d(P)}}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\bigg(1+\frac{1}{|P|}\bigg)^{-1}\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}u^{2jd(P)}\bigg),\\
&\mathcal{B}_{k,P}(u)=\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j+1})}{|P|^j}u^{2jd(P)}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}u^{2jd(P)}\bigg)^{-1} \end{align*} and \begin{align*}
\mathcal{C}_{k,P}(u)=\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j})}{|P|^j}u^{2jd(P)}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}u^{2jd(P)}\bigg)^{-1}. \end{align*} Thus, \begin{align*}
M_{k,1}(\ell;N)=&\frac{q}{(q-1)|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\\ &\qquad\qquad\frac{\eta_k(\ell;u)du}{u^{N-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2}}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big). \end{align*}
Similarly, \begin{align*}
M_{k,2}(\ell;N)=&\frac{1}{q(q-1)|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\\ &\qquad\qquad\frac{\eta_k(\ell;u)du}{u^{N-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2}}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big), \end{align*} and hence we obtain that \begin{align*}
M_{k}(\ell)=&\frac{1}{|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\frac{\eta_k(\ell;u)du}{u^{kg-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2-1}}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big). \end{align*}
As discussed in [\textbf{\ref{F23}}, \textbf{\ref{F1}}], $\eta_k(\ell;u)$ has an analytic continuation to the region $|u|\leq R_k=q^{\vartheta_k}$ for $1\leq k\leq 3$, where $\vartheta_1=1-\varepsilon$, $\vartheta_2=1/2-\varepsilon$ and $\vartheta_3=1/3-\varepsilon$. We then move the contour of integration to $|u|=R_k$, encountering a pole of order $k(k+1)/2+1$ at $u=1$ and a pole of order $k(k+1)/2-1$ at $u=-1$. In doing so we get \begin{align*}
M_{k}(\ell)=&\frac{1}{|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=R_k}\frac{\eta_k(\ell;u)du}{u^{kg-d(\ell_1)+1}(1-u)^{k(k+1)/2+1}(1+u)^{k(k+1)/2-1}}\\
&\qquad\qquad-\frac{1}{|\ell_1|^{1/2}}\textrm{Res}(u=1) -\frac{1}{|\ell_1|^{1/2}}\textrm{Res}(u=-1)+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big). \end{align*}
We now evaluate the residue of the pole at $u=1$ and $u=-1$. We have \begin{align*} &\eta_k(\ell;u)=\eta_k(\ell;1)\sum_{j\geq0}\frac{1}{j!}\frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)(u-1)^j,\\ &u^{-(kg-d(\ell_1)+1)}=1-\big(kg-d(\ell_1)+1\big)(u-1)+\ldots,\\ &\frac{1}{(1+u)^{k(k+1)/2-1}}=\frac{1}{2^{k(k+1)/2-1}}-\frac{k(k+1)/2-1}{2^{k(k+1)/2}}(u-1)+\ldots. \end{align*} Similar expressions hold for the Taylor expansions around $u=-1$. So, using the fact that $\eta_k(\ell;u)$ is even, \begin{align*} \textrm{Res}(u=1)+\textrm{Res}&(u=-1)\\ &=\,- \frac{\eta_k(\ell;1)}{2^{k(k+1)/2-1}\big(k(k+1)/2\big)!}\sum_{j=0}^{k(k+1)/2}\frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)P_{k,j}\big(kg-d(\ell_1)\big), \end{align*} where $P_{k,j}$'s are some explicit polynomials of degrees $k(k+1)/2-j$ for all $0\leq j\leq k(k+1)/2$, and the leading coefficient of $P_{k,0}$ is $1$.
We note that \begin{align*}
&\eta_1(\ell;\pm1)=\mathcal{A}_1\prod_{P|\ell}\bigg(1+\frac{1}{|P|}-\frac{1}{|P|^2}\bigg)^{-1},\\
&\eta_2(\ell;\pm1)=\frac{\mathcal{A}_2\tau(\ell_1)|\ell_1|}{\sigma(\ell_1)}\prod_{P|\ell}\bigg(1+\frac{1}{|P|}\bigg)\bigg(1+\frac{2}{|P|}-\frac{2}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1},\\
&\eta_3(\ell;\pm1)=\mathcal{A}_3\prod_{P|\ell}\bigg(1+\frac{3}{|P|}\bigg)\bigg(1+\frac{4}{|P|}-\frac{3}{|P|^2}+\frac{3}{|P|^3}-\frac{1}{|P|^4}\bigg)^{-1}\prod_{P|\ell_1}\frac{1+3|P|}{3+|P|}, \end{align*}
where $\mathcal{A}_k$'s are as in Conjecture \ref{ak}, and $\sigma(\ell_1)= \sum_{d | \ell_1} |d| = \prod_{P | \ell_1} (1+|P|)$ is the sum of divisors function. Moreover, differentiating \eqref{eta} $j$ times we see that \[
\frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)=\sum_{j_1,j_2=0}^{j}c_{j,j_1,j_2}\bigg(\sum_{P|\ell_1}\frac{D_{1,j,j_1}(P)d(P)^{j_1}}{|P|}\bigg)\bigg(\sum_{\substack{P\nmid\ell_1\\P|\ell_2}}\frac{D_{2,j,j_2}(P)d(P)^{j_2}}{|P|}\bigg) \] for some absolute constants $c_{j,j_1,j_2}$ and $D_{1,j,j_1}(P)\ll_{j,j_1}1$, $D_{2,j,j_2}(P)\ll_{j,j_2}1$. Hence, \[ \frac{\partial_u^j\eta_k}{\eta_k}(\ell;1)\ll_{j,\varepsilon} d(\ell)^\varepsilon, \] and in particular we have \[
M_{k}(\ell)=\frac{\eta_k(\ell;1)}{2^{k(k+1)/2-1}\big(k(k+1)/2\big)!|\ell_1|^{1/2}}\big(kg-d(\ell_1)\big)^{k(k+1)/2} +O_\varepsilon\big(g^{k(k+1)/2-1}d(\ell)^\varepsilon\big). \]
For future purposes (see Section \ref{Vsquare}), we explicitly write down the main term for $k=1$: \begin{equation} \label{mainfirst}
M_{1}(\ell)=\frac{1}{|\ell_1|^{1/2}}\frac{1}{2\pi i}\oint_{|u|=r}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}+O_\varepsilon\big(q^{-3g/2+\varepsilon g}\big) \end{equation} with
$$ \eta_1(\ell;u) = \prod_{P \in \mathcal{P}} \bigg(1- \frac{u^{2d(P)}}{|P|(1+|P|)} \bigg) \prod_{P | \ell}\bigg(1+\frac{1}{|P|}-\frac{u^{2d(P)}}{|P|^2}\bigg)^{-1}.$$
\subsection{Evaluate $S_{k}^{\textrm{e}}(\ell;V=\square)$}\label{Vsquare}
We proceed as in [\textbf{\ref{F1}}] and [\textbf{\ref{F23}}]. First we note that as in \eqref{Cextend} we can extend the sum over $C$ in \eqref{Ske} to infinity, at the expense of an error of size $O_\varepsilon(q^{(k-4)g/2+\varepsilon g})$. So \begin{align*}
&S_{k}^{\textrm{e}}(\ell;N;V=\square)=\frac{q}{(q-1)|\ell|}\sum_{\substack{f\in\mathcal{M}_{\leq N}\\d(f\ell)\ \textrm{even}}}\frac{\tau_k(f)}{|f|^{3/2}}\sum_{C|(f\ell)^\infty}\frac{1}{|C|^2}\\ &\qquad\qquad\qquad\bigg(q\sum_{V\in\mathcal{M}_{\leq d(f\ell)/2-g-2+d(C)}}G(V^2,\chi_{f\ell})-2\sum_{V\in\mathcal{M}_{\leq d(f\ell)/2-g-1+d(C)}}G(V^2,\chi_{f\ell})\\ &\qquad\qquad\qquad\qquad\qquad\qquad+\frac1q\sum_{V\in\mathcal{M}_{\leq d(f\ell)/2-g+d(C)}}G(V^2,\chi_{f\ell})\bigg)+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big). \end{align*}
Applying the Perron formula in the form \[
\sum_{n\leq N}a(n)=\frac{1}{2\pi i}\oint_{|u|=r}\Big(\sum_{n=0}^{\infty}a(n)u^{n}\Big)\frac{du}{u^{N+1}(1-u)} \] for the sums over $V$ we get \begin{align*}
S_{k}^{\textrm{e}}(\ell;N;V=\square)& = \frac{1}{(q-1)|\ell|} \sum_{\substack{f \in \mathcal{M}_{\leq N} \\ d(f \ell) \text{ even}}} \frac{\tau_k(f) }{|f|^{3/2}} \sum_{C | (f\ell)^{\infty}}\frac{1}{|C|^2} \frac{1}{2 \pi i} \oint_{|u|=r_1}u^{-d(f)/2-d(C)}\\ & \qquad\qquad \Big( \sum_{V \in \mathcal{M}} G(V^2,\chi_{f \ell})\, u^{d(V)}\Big) \frac{(1-qu)^2du}{ u^{d(\ell)/2-g+1}(1-u) }+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big), \end{align*} where $r_1=q^{-1-\varepsilon}$. Another application of the Perron formula, this time in the form \[
\sum_{\substack{n\leq N\\n+l\ \textrm{even}}}a(n)=\frac{1}{2\pi i}\oint_{|w|=r}\Big(\sum_{n=0}^{\infty}a(n)w^{n}\Big)\delta(l,N;w)\frac{dw}{w^{N+1}}, \] where \[ \delta(l,N;w)=\frac12\bigg(\frac{1}{1-w}+\frac{(-1)^{N-l}}{1+w}\bigg)=\begin{cases} \frac{1}{1-w^2}&N-l\ \textrm{even},\\ \frac{w}{1-w^2}&N-l\ \textrm{odd}, \end{cases} \]for the sum over $f$ yields \begin{align*}
&S_{k}^{\textrm{e}}(\ell;N;V=\square) = \frac{1}{(q-1)|\ell|} \\
&\qquad\quad \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2}\frac{\mathcal{N}_k (\ell;u,w) (1-qu)^2 dwdu }{u^{(N+d(\ell)-\mathfrak{a})/2-g+1}w^{N+1-\mathfrak{a}}(1-u) (1-uw^2)}+O_\varepsilon\big(q^{(k-4)g/2+\varepsilon g}\big), \end{align*} where $r_2<1$,
$$ \mathcal{N}_k (\ell;u,w) = \sum_{f,V \in \mathcal{M}} \frac{\tau_k(f) G(V^2,\chi_{f \ell})}{|f|^{3/2}} \prod_{P | f \ell} \bigg(1- \frac{1}{|P|^2u^{d(P)}} \bigg)^{-1} u^{d(V)} w^{d(f)} $$ and $\mathfrak{a} \in \{0,1\}$ according to whether $N-d(\ell)$ is even or odd.
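Both Perron-type formulas used above, the even-index version and the parity-restricted version with $\delta(l,N;w)$, are coefficient-extraction identities and can be checked numerically. The sketch below (an illustration only, with a toy sequence $a(n)=3^{-n}$ of our choosing, so that the generating function is $1/(1-w/3)$ in closed form) approximates the contour integrals by trapezoidal sums on $|w|=1/2$, inside the poles of $\delta$ at $w=\pm1$.

```python
import cmath

def contour(F, N, r=0.5, M=4096):
    """(1/(2 pi i)) * integral over |w|=r of F(w) dw / w^{N+1}, trapezoidal rule."""
    total = 0j
    for m in range(M):
        w = r * cmath.exp(2j * cmath.pi * m / M)
        total += F(w) / w ** N  # dw/w = i*dtheta absorbs one power of w
    return total / M

A = lambda w: 1 / (1 - w / 3)  # sum_n a(n) w^n with a(n) = 3^{-n}

def delta(l, N, w):
    """Parity factor delta(l, N; w) from the parity-restricted Perron formula."""
    return 0.5 * (1 / (1 - w) + (-1) ** (N - l) / (1 + w))

# parity-restricted Perron: sum_{n <= N, n + l even} a(n)
N, l = 7, 1
lhs = sum(3.0 ** (-n) for n in range(N + 1) if (n + l) % 2 == 0)
rhs = contour(lambda w: A(w) * delta(l, N, w), N)
assert abs(rhs - lhs) < 1e-9

# even-index Perron: sum_{2n <= N} a(n), with generating series in u^{2n}
lhs2 = sum(3.0 ** (-n) for n in range(N // 2 + 1))
rhs2 = contour(lambda u: A(u * u) / (1 - u), N)
assert abs(rhs2 - lhs2) < 1e-9
```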
We next write $ \mathcal{N}_k (\ell;u,w)$ as an Euler product. From Lemma \ref{L2} we have \begin{align*}
&\sum_{f \in \mathcal{M}} \frac{\tau_k(f) G(V^2,\chi_{f\ell})}{|f|^{3/2}} \prod_{P|f \ell} \bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1}w^{d(f)} \\
&\quad = \prod_{P |\ell} \bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1} \prod_{P \nmid \ell V} \bigg( 1+\frac{k w^{d(P)}}{|P|}\bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1}\bigg) \\
& \qquad\ \ \prod_{\substack{P\nmid \ell \\P|V}} \bigg( 1+ \sum_{j=1}^{\infty} \frac{ \tau_k(P^j) G(V^2,\chi_{P^j}) w^{j d(P)}}{|P|^{3j/2} }\bigg(1-\frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1} \bigg) \\
&\qquad\quad\ \prod_{\substack{ P | \ell\\P \nmid V }} G(V^2,\chi_{P^{\text{ord}_P(\ell)}})\prod_{\substack{ P | \ell\\P | V }} \bigg( G(V^2,\chi_{P^{\text{ord}_P(\ell)}}) + \sum_{j=1}^{\infty} \frac{\tau_k(P^j) G(V^2,\chi_{P^{j+\text{ord}_P(\ell)}}) w^{jd(P)}}{|P|^{3j/2}} \bigg) .
\end{align*} Note that if $P| \ell_2$ and $P \nmid V$, then the above expression is $0$. Hence we must have that $\text{rad}(\ell_2) | V$. Moreover, from the last Euler factor above, note that we must have $\ell_2 | V$, so write $V= \ell_2 V_1$. Using Lemma \ref{L2}, we rewrite
$$ \prod_{\substack{P | \ell \\ P \nmid V}} G(V^2,\chi_{P^{\text{ord}_P(\ell)}}) =\prod_{\substack{P | \ell_1 \\ P \nmid \ell_2}} |P|^{1/2} \prod_{\substack{P | \ell_1 \\ P \nmid \ell_2 \\ P |V_1}} |P|^{-1/2}.$$ By multiplicativity we then obtain \begin{align*}
& \mathcal{N}_k (\ell;u,w) = u^{d(\ell_2)}\prod_{P\in\mathcal{P}} \bigg( 1- \frac{1}{|P|^2 u^{d(P)}} \bigg)^{-1} \\
&\ \prod_{P \nmid \ell} \bigg(1-\frac{1}{|P|^2u^{d(P)}}+\frac{k w^{d(P)}}{|P|}\\
&\qquad\qquad\qquad\qquad\qquad+ \sum_{i=1}^{\infty} u^{i d(P)} \bigg( 1-\frac{1}{|P|^2u^{d(P)}}+ \sum_{j=1}^{\infty} \frac{\tau_k(P^j) G(P^{2i},\chi_{P^j}) w^{j d(P)}}{|P|^{3j/2}}\bigg) \bigg) \\
&\ \ \prod_{P | \ell_1} \Bigg(|P|^{1/2+2 \text{ord}_P(\ell_2)}+\sum_{i=1}^{\infty}u^{i d(P)} \sum_{j=0}^{\infty} \frac{\tau_k(P^j) G( P^{2i+ 2 \text{ord}_P(\ell_2)} , \chi_{P^{j+1+2 \text{ord}_P(\ell_2)}}) w^{jd(P)}}{|P|^{3j/2}} \Bigg) \\
&\ \ \ \prod_{\substack{P \nmid \ell_1 \\ P | \ell_2 }} \bigg( \varphi\big(P^{2 \text{ord}_P(\ell_2)}\big) + \frac{k |P|^{2 \text{ord}_P(\ell_2)} w^{d(P)}}{|P|}\\
&\qquad\qquad\qquad\qquad\qquad\qquad+ \sum_{i=1}^{\infty} u^{i d(P)} \sum_{j=0}^{\infty} \frac{ \tau_k(P^j) G( P^{2i+ 2 \text{ord}_P(\ell_2)} , \chi_{P^{j+2 \text{ord}_P(\ell_2)}}) w^{jd(P)}}{|P|^{3j/2}} \bigg). \end{align*}
\subsubsection{The case $k=1$} We have \begin{align*}
\mathcal{N}_1(\ell;u,w) = \frac{|\ell|u^{d(\ell_2)}}{| \ell_1 |^{1/2}}\kappa_{1}(\ell;u,w) \mathcal{Z}(u) \mathcal{Z}\Big(\frac wq\Big) \mathcal{Z}\Big(\frac {uw^2}{q}\Big), \end{align*} where \begin{equation*}
\kappa_1(\ell;u,w)=\prod_{P\in\mathcal{P}} \mathcal{D}_{1,P}(u,w)\prod_{P | \ell_1} \mathcal{H}_{1,P}(u,w)\prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{1,P}(u,w) \end{equation*} with \begin{align*}
\mathcal{D}_{1,P}(u,w)=& \bigg(u^{d(P)}-\frac{1}{|P|^2}\bigg)^{-1}\bigg(1-\frac{w^{d(P)}}{|P|}\bigg)\\
&\qquad\qquad \bigg(u^{d(P)}+\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|}-\frac{1+(uw)^{2d(P)}}{|P|^2}+\frac{(uw^2)^{d(P)}}{|P|^3}\bigg), \end{align*} \begin{align*}
\mathcal{H}_{1,P}(u,w)=& u^{d(P)}\bigg(1-u^{d(P)}+(uw)^{d(P)}-\frac{(uw)^{d(P)}}{|P|}\bigg)\\
&\qquad\qquad \bigg(u^{d(P)}+\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|}-\frac{1+(uw)^{2d(P)}}{|P|^2}+\frac{(uw^2)^{d(P)}}{|P|^3}\bigg)^{-1} \end{align*} and \begin{align*}
\mathcal{J}_{1,P}(u,w)=& u^{d(P)}\bigg(1-\frac{1-w^{d(P)}+(uw)^{d(P)}}{|P|}\bigg)\\
&\qquad\qquad \bigg(u^{d(P)}+\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|}-\frac{1+(uw)^{2d(P)}}{|P|^2}+\frac{(uw^2)^{d(P)}}{|P|^3}\bigg)^{-1}. \end{align*} So \begin{align*}
S_{1}^{\textrm{e}}(\ell;N;V&=\square) =\ \frac{1}{(q-1)|\ell_1|^{1/2}} \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2}\\ &\ \frac{ \kappa_1(\ell;u,w)(1-qu) dwdu }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g+1}w^{N+1-\mathfrak{a}}(1-u)(1-w) (1-uw^2)^2}+O_\varepsilon\big(q^{-3g/2+\varepsilon g}\big). \end{align*} Note that $ \kappa_1(1;u,w)$ is the same as $\prod_{P}\mathcal{B}_P(u,w/q)$ in [\textbf{\ref{F1}}].
As in [\textbf{\ref{F1}}], we take $r_1 = q^{-3/2}$ and $r_2<1$ in the double integral above. Recall from Lemma $6.3$ in [\textbf{\ref{F1}}] that $$ \kappa_1(1;u,w) = \mathcal{Z} \Big( \frac{w}{q^3u} \Big) \mathcal{Z} \Big( \frac{w^2}{q^2} \Big)^{-1}\prod_{P\in\mathcal{P}} \mathcal{R}_P(u,w),$$ where \begin{align*}
\mathcal{R}_{P}(u,w)=& 1-\bigg(u^{d(P)}-\frac{1}{|P|^2}\bigg)^{-1}\bigg(1+\frac{w^{d(P)}}{|P|}\bigg)^{-1}\frac{w^{d(P)}}{|P|u^{d(P)}}\bigg(u^{3d(P)} + \frac{(u^3w)^{d(P)}}{|P|}\\
&\qquad\qquad - \frac{ (u^2w)^{d(P)}}{|P|^2} +\frac{(uw)^{d(P)}(1-u^{d(P)})}{|P|^3} - \frac{1+(uw)^{2d(P)}}{|P|^4} + \frac{(uw^2)^{d(P)}}{|P|^5} \bigg), \end{align*}
and $\prod_{P\in\mathcal{P}} \mathcal{R}_P(u,w)$ converges absolutely for $|w|^2<q^3 |u|, |w| < q^4 |u|^2 , |w|<q$ and $|wu| <1$. In the double integral, we enlarge the contour of integration over $w$ to $|w| = q^{3/4-\varepsilon}$ and encounter two poles at $w=1$ and $w=q^2u$. Let $A(\ell;N)$ be the residue of the pole at $w=1$ and $B(\ell;N)$ be the residue of the pole at $w=q^2u$. By bounding the integral on the new contour, we can write $$S_{1}^{\textrm{e}}(\ell;N;V=\square) = A(\ell;N)+B(\ell;N)+O_\varepsilon\big(|\ell_1|^{1/4}q^{-3g/2+\varepsilon g }\big).$$
For the residue at $w=1$ we have
$$A(\ell;N) = \frac{1}{(q-1)|\ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u|=r_1} \frac{\kappa_1(\ell;u,1) (1-qu)du}{u^{(N+d(\ell_1)-\mathfrak{a})/2-g+1} (1-u)^3} .$$ We make the change of variables $u \mapsto 1/u^2$ and use the fact that $$ 1- \frac{q}{u^2} = - \frac{q}{u^2 \mathcal{Z} ( \frac{u^2}{q^2} )}\qquad \text{ and }\qquad \Big(1-\frac{1}{u^2} \Big)^{-1} = \mathcal{Z}\Big( \frac{1}{qu^2} \Big).$$ A direct computation with the Euler product shows that
\begin{align*} \zeta_q(2) \kappa_1\Big( \ell;\frac{1}{u^2},1\Big) \mathcal{Z} \Big( \frac{1}{qu^2} \Big) \mathcal{Z} \Big( \frac{u^2}{q^2} \Big)^{-1} &= \prod_{P \in \mathcal{P}} \bigg(1- \frac{u^{2d(P)}}{|P|(1+|P|)} \bigg) \prod_{P | \ell}\bigg(1+\frac{1}{|P|}-\frac{u^{2d(P)}}{|P|^2}\bigg)^{-1}\\ & = \eta_1(\ell;u).\end{align*} So after the change of variables, we have
$$A(\ell;N) = -\frac{1}{|\ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u| = r_1^{-1/2}} \frac{ \eta_1(\ell;u) du}{u^{2g-N-d(\ell_1)+\mathfrak{a}-1} (1-u^2)^2},$$ and hence
$$A(\ell):=A(\ell;g-1)+A(\ell;g) = -\frac{1}{|\ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u| = r_1^{-1/2}} \frac{ \eta_1(\ell;u)\big(u^{\mathfrak{a}(g)}+ u^{2-\mathfrak{a}(g)}\big)du}{u^{g-d(\ell_1)+1} (1-u^2)^2}.$$
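The Euler-product identity behind the change of variables $u\mapsto1/u^2$ above is local, so it can be sanity-checked one prime at a time: each generic Euler factor $(1-|P|^{-2})^{-1}\mathcal{D}_{1,P}(1/u^2,1)\big(1-1/(|P|u^{2d(P)})\big)^{-1}(1-u^{2d(P)}/|P|^2)$ should equal the $\eta_1$-factor $1-u^{2d(P)}/(|P|(1+|P|))$, and the extra factor $\mathcal{H}_{1,P}(1/u^2,1)$ for $P\mid\ell_1$ should match the extra $\eta_1$-factor $(1+1/|P|-u^{2d(P)}/|P|^2)^{-1}$. The spot check below (not part of the argument) treats $|P|$ and $t=u^{2d(P)}$ as free numerical parameters.

```python
def inner(U, W, nP):
    """Common bracket in D_{1,P} and H_{1,P}; U = u^{d(P)}, W = w^{d(P)}, nP = |P|."""
    return U + U * W * (1 - U) / nP - (1 + (U * W) ** 2) / nP ** 2 + U * W ** 2 / nP ** 3

def D1(U, W, nP):
    return (1 - W / nP) * inner(U, W, nP) / (U - 1 / nP ** 2)

def H1(U, W, nP):
    return U * (1 - U + U * W - U * W / nP) / inner(U, W, nP)

def eta1_P(t, nP):      # generic eta_1 Euler factor, with t = u^{2 d(P)}
    return 1 - t / (nP * (1 + nP))

def eta1_ellP(t, nP):   # extra eta_1 Euler factor for P | ell
    return 1 / (1 + 1 / nP - t / nP ** 2)

errs = []
for nP in (5.0, 7.0, 25.0):      # |P| = q^{d(P)}: numerical placeholders
    for t in (0.05, 0.3, 0.9):   # t = u^{2 d(P)}; u -> 1/u^2 sends u^{d(P)} -> 1/t
        lhs = D1(1 / t, 1.0, nP) * (1 - t / nP ** 2) / ((1 - 1 / nP ** 2) * (1 - 1 / (nP * t)))
        errs.append(abs(lhs - eta1_P(t, nP)))
        errs.append(abs(lhs * H1(1 / t, 1.0, nP) - eta1_P(t, nP) * eta1_ellP(t, nP)))
assert max(errs) < 1e-9
```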
Consider the integral \[
\frac{1}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}. \] Making the change of variables $u \mapsto -u$ and using the facts that $\eta_1(\ell;u)$ is an even function and that $(-1)^{g-d(\ell_1)}=(-1)^{\mathfrak{a}(g)}$ (which follows from the definition of $\mathfrak{a}(g)$), this is equal to \[
\frac{1}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{(-1)^{\mathfrak{a}(g)}\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1+u)^{2}}. \] Hence we get \begin{align*}
\frac{2}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}=\frac{1}{2\pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)\big((1+u)^2+(-1)^{\mathfrak{a}(g)}(1-u)^2\big)du}{u^{g-d(\ell_1)+1}(1-u^2)^{2}}. \end{align*} Note that $(1+u)^2+(-1)^{\mathfrak{a}(g)}(1-u)^2=2\big(u^{\mathfrak{a}(g)}+ u^{2-\mathfrak{a}(g)}\big)$, and so \[
A(\ell)=-\frac{1}{|\ell_1|^{1/2}} \frac{1}{2 \pi i}\oint_{|u| = r_1^{-1/2}}\frac{\eta_1(\ell;u)du}{u^{g-d(\ell_1)+1}(1-u)^{2}}. \]
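For completeness, the identity $(1+u)^2+(-1)^{\mathfrak{a}(g)}(1-u)^2=2\big(u^{\mathfrak{a}(g)}+u^{2-\mathfrak{a}(g)}\big)$ used above can be checked case by case, since $\mathfrak{a}(g)\in\{0,1\}$:
\[
(1+u)^2+(1-u)^2=2(1+u^2)=2\big(u^{0}+u^{2-0}\big)\qquad\text{and}\qquad (1+u)^2-(1-u)^2=4u=2\big(u^{1}+u^{2-1}\big).
\]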
Now recall the expression \eqref{mainfirst} for the main term $M_1(\ell)$. Since the integrand above has no poles other than at $u=1$ between the circles of radius $r$ and $r_1^{-1/2}$ (recall that $r<1$ and $r_1^{-1/2}= q^{3/4}$), it follows that
$$A(\ell)+M_1(\ell) = - \frac{1}{| \ell_1|^{1/2}} \text{Res} (u=1)+ O_{\varepsilon}(q^{-3g/2+\varepsilon g}).$$ Note that the residue computation was done in Section \ref{main}.
Next we compute the residue at $w=q^2u$. We have \begin{align*}
B(\ell;N) = \frac{1}{(q-1)| \ell_1|^{1/2}} \frac{1}{2 \pi i} \oint_{|u|=r_1} & \frac{(1-qu) (1-q^3u^2) }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g+1}(q^2u)^{N-\mathfrak{a}}(1-u) (1-q^2u) (1-q^4u^3)^2}\\
& \prod_{P\in\mathcal{P}} \mathcal{R}_P(u,q^2u) \prod_{P | \ell_1} \mathcal{H}_{1,P}(u,q^2u) \prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{1,P}(u,q^2u) \, du. \end{align*}
We shift the contour of integration to $|u| =q^{-1-\varepsilon}$ and encounter a double pole at $u=q^{-4/3}$. The integral over the new contour is bounded by $O_\varepsilon(q^{-3g/2+\varepsilon g})$, and after computing the residue at $u= q^{-4/3}$, it follows that
$$B(\ell;g)+B(\ell;g-1) =|\ell_1|^{1/6} q^{-4g/3} P\big(g+d(\ell_1)\big) + O \big( q^{-3g/2 + \varepsilon g} \big),$$ where $P(x)$ is a linear polynomial whose coefficients can be written down explicitly.
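The shape of this main term can be read off from the exponents in the integrand (a heuristic check, not needed for the proof): at the pole $u=q^{-4/3}$, with $N=g+O(1)$, the factors $|\ell_1|^{-1/2}$, $u^{g-1-(N+d(\ell_1)-\mathfrak{a})/2}$ and $(q^2u)^{-(N-\mathfrak{a})}$ contribute, up to powers of $q$ bounded independently of $g$ and $d(\ell_1)$,
\[
q^{-d(\ell_1)/2}\cdot q^{\frac{4}{3}\big(\frac{N+d(\ell_1)}{2}-g\big)}\cdot q^{-\frac{2N}{3}}\asymp q^{d(\ell_1)/6-4g/3}=|\ell_1|^{1/6}q^{-4g/3}.
\]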
\subsubsection{The case $k=2$} We have \begin{align*}
\mathcal{N}_2(\ell;u,w) &= \frac{|\ell|u^{d(\ell_2)}}{| \ell_1 |^{1/2}}\kappa_{2}(\ell;u,w) \mathcal{Z}(u)\mathcal{Z}\Big(\frac wq\Big)^2 \mathcal{Z}\Big(\frac{uw^2}{q}\Big) \mathcal{Z} \Big( \frac{1}{q^2u} \Big) , \end{align*}
where
\begin{equation}\label{kappa2}
\kappa_2(\ell;u,w)=\prod_{P\in\mathcal{P}} \mathcal{D}_{2,P}(u,w)\prod_{P | \ell_1} \mathcal{H}_{2,P}(u,w)\prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{2,P}(u,w) \end{equation} with \begin{align*}
\mathcal{D}_{2,P}(u,w) =&\bigg( 1-\frac{ w^{d(P)}}{|P|}\bigg)^2\bigg(1-\frac{(uw^2)^{d(P)}}{|P|}\bigg)^{-1}\bigg(1 +\frac{w^{d(P)}\big(2-2u^{d(P)}+(uw)^{d(P)}\big)}{|P|}\\
&\qquad\quad -\frac{\big(u^{-d(P)}+3(uw^2)^{d(P)}\big)}{|P|^2}+\frac{w^{ 2d(P)}\big(2+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}}{|P|^4}\bigg),
\end{align*} \begin{align*}
& \mathcal{H}_{2,P}(u,w) = \bigg(1-u^{d(P)}+2(uw)^{d(P)}-\frac{(uw)^{d(P)}\big(2-w^{d(P)}+(uw)^{d(P)}\big)}{|P|}\bigg)\\
&\qquad\qquad \bigg(1 +\frac{w^{d(P)}\big(2-2u^{d(P)}+(uw)^{d(P)}\big)}{|P|}\\
&\qquad\qquad\qquad\qquad-\frac{\big(u^{-d(P)}+3(uw^2)^{d(P)}\big)}{|P|^2} +\frac{w^{ 2d(P)}\big(2+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}}{|P|^4}\bigg)^{-1} \end{align*} and \begin{align*}
& \mathcal{J}_{2,P}(u,w) = \Big(1- \frac{1-2w^{d(P)}+2(uw)^{d(P)}- (uw^2)^{d(P)}}{|P|}-\frac{(uw^2)^{d(P)}}{|P|^2}\Big)\\
&\qquad\qquad \bigg(1 +\frac{w^{d(P)}\big(2-2u^{d(P)}+(uw)^{d(P)}\big)}{|P|}\\
&\qquad\qquad\qquad\qquad -\frac{\big(u^{-d(P)}+3(uw^2)^{d(P)}\big)}{|P|^2}+\frac{w^{ 2d(P)}\big(2+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}}{|P|^4}\bigg)^{-1}.\end{align*} Hence \begin{align}\label{S2integral}
S_{2}^{\textrm{e}}(\ell;&N;V=\square) =\ - \frac{q}{(q-1)|\ell_1|^{1/2}} \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2}\nonumber \\ &\qquad \frac{\kappa_2(\ell;u,w) dwdu }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}w^{N+1-\mathfrak{a}}(1-u)(1-w)^2 (1-uw^2)^2}+O_\varepsilon\big(q^{-g+\varepsilon g}\big). \end{align}
Note that $\kappa_2(1;u,w) $ is the same as $\mathcal{F}(u,w/q)$ in Lemma 4.3 of [\textbf{\ref{F23}}], and, hence, $\kappa_2(\ell;u,w) $ is absolutely convergent for $|u| > 1/q$, $|w| <
q^{1/2}$, $|uw| < 1$ and $|uw^2| < 1$.
We first shift the contour $|u|=r_1$ to $|u|=r_1'=q^{-1+\varepsilon}$, and then the contour $|w|=r_2$ to $|w|=r_2'=q^{1/2-\varepsilon}$ in the expression \eqref{S2integral}. In doing so, we encounter a double pole at $w=1$. Moreover, the new integral is bounded by $O_\varepsilon(q^{-g+\varepsilon g})$. Hence \begin{align*}
S_{2}^{\textrm{e}}(\ell;N;V=\square) =\ & \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = r_1'}\bigg( \frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) + \frac{5u-1}{1-u}-N+\mathfrak{a} \bigg)\\ &\qquad\qquad \frac{\kappa_2(\ell;u,1) du }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}(1-u)^3} + O_\varepsilon(q^{-g+\varepsilon g}), \end{align*} and so letting $N=2g$ and $N=2g-1$ we obtain \begin{align}\label{S2small}
S_{2}^{\textrm{e}}(\ell;V=\square) =\ & \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = r_1'}\bigg( \frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) -2g+ \frac{5u-1}{1-u}+\frac{2u}{u+u^{\mathfrak{a}(\ell)}} \bigg)\nonumber\\ &\qquad\qquad \frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)}) du }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2}(1-u)^3} + O_\varepsilon(q^{-g+\varepsilon g}), \end{align} where $\mathfrak{a}(\ell) \in \{0,1\}$ according to whether $d(\ell)$ is even or odd.
It is a straightforward exercise to verify that \begin{equation}\label{D2u}
\mathcal{D}_{2,P} (u,1)=\bigg(1-\frac{1}{|P|}\bigg)^2\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)=\mathcal{D}_{2,P}\Big (\frac 1u,1\Big), \end{equation}
$$ \mathcal{H}_{2,P} \Big( u, 1 \Big) =\big(1+u^{d(P)}\big)\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1} = u^{d(P)}\mathcal{H}_{2,P} \Big( \frac{1}{u}, 1 \Big),$$
$$ \mathcal{J}_{2,P} \Big(u, 1 \Big) =\bigg(1+\frac{1}{|P|}\bigg)\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1}= \mathcal{J}_{2,P} \Big( \frac{1}{u},1\Big) , $$ and hence \begin{equation} \kappa_2(\ell;u,1)=u^{d(\ell_1)}\kappa_2\Big(\ell;\frac 1u,1\Big). \label{simf2} \end{equation} Let \[
\alpha_P(u)=\frac{\partial_ w \mathcal{D}_{2,P}}{\mathcal{D}_{2,P}}(u,1),\qquad\beta_P(u) = \frac{ \partial_ w \mathcal{H}_{2,P}}{ \mathcal{H}_{2,P}}(u,1)\qquad\textrm{and}\qquad \gamma_P(u) = \frac{\partial_ w \mathcal{J}_{2,P}}{ \mathcal{J}_{2,P}}(u,1).\]
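(For completeness, the functional equation \eqref{simf2} follows by multiplying the three symmetry relations above over the relevant primes: since $\ell_1$ is square-free, so that $\sum_{P|\ell_1}d(P)=d(\ell_1)$,
\[
\kappa_2(\ell;u,1)=\prod_{P\in\mathcal{P}} \mathcal{D}_{2,P}(u,1)\prod_{P | \ell_1} \mathcal{H}_{2,P}(u,1)\prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{2,P}(u,1)=\Big(\prod_{P|\ell_1}u^{d(P)}\Big)\kappa_2\Big(\ell;\frac 1u,1\Big)=u^{d(\ell_1)}\kappa_2\Big(\ell;\frac 1u,1\Big).)
\]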
By direct computation we obtain \begin{align*}
&\alpha_P(u)=\ 2 d(P)\bigg(1-\frac{1}{|P|}\bigg)^{-1}\bigg(1-\frac{u^{d(P)}}{|P|}\bigg)^{-1}\bigg(1+\frac{2}{|P|}-\frac{u^{-d(P)}+u^{d(P)}}{|P|^2}+\frac{1}{|P|^3}\bigg)^{-1}\\
&\qquad\bigg(\frac{u^{d(P)}}{|P|}-\frac{3+u^{d(P)}}{|P|^2}+\frac{u^{-d(P)}+1+4u^{d(P)}+u^{2d(P)}}{|P|^3}-\frac{3+u^{d(P)}+2u^{2d(P)}}{|P|^4}+\frac{2u^{d(P)}}{|P|^5}\bigg), \end{align*} \begin{align*}
\beta_P(u)=\ &2d(P)\big(1+u^{d(P)}\big)^{-2} \bigg(u^{d(P)}-\frac{1-u^{d(P)}}{|P|}+\frac{u^{2 d(P)}+2 u^{d(P)}-1}{|P|^2}-\frac{u^{d(P)}+2}{|P|^3}\bigg) \end{align*} and \[
\gamma_P(u)=2d(P)\bigg(1+\frac{1}{|P|}\bigg)^{-2}\bigg(u^{d(P)}-\frac{1}{|P|}\bigg)\bigg(\frac{u^{-d(P)}+2}{|P|^2}+\frac{1}{|P|^3}\bigg) . \] From Lemma 4.4 of [\textbf{\ref{F23}}] we have \begin{equation*} \sum_{P\in\mathcal{P}}\alpha_P(u)=\sum_{P\in\mathcal{P}}\alpha_P\Big(\frac 1u\Big)+4u\sum_{P\in\mathcal{P}}\frac{\partial_u \mathcal{D}_{2,P}}{\mathcal{D}_{2,P}}(u,1)+\frac{2(1+u)}{1-u}. \end{equation*} Also note that $$\beta_P(u)-\beta_P\Big(\frac 1u\Big) = 4 u \frac{\partial_u \mathcal{H}_{2,P} }{\mathcal{H}_{2,P}}(u, 1)-2d(P)$$ and $$\gamma_P(u)-\gamma_P\Big(\frac1u\Big) = 4 u \frac{\partial_u \mathcal{J}_{2,P}}{\mathcal{J}_{2,P}}(u, 1).$$ Combining the above equations we get that \begin{equation}
\frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) = \frac{ \partial_ w \kappa_2}{\kappa_2}\Big(\ell;\frac1u,1\Big) + 4u \frac{ \partial_u \kappa_2}{\kappa_2} (\ell;u,1)+\frac{2(1+u)}{1-u}- 2 d(\ell_1). \label{simbeta} \end{equation}
We remark from \eqref{D2u} that $\kappa_2 (\ell;u,1)$ is analytic for $1/q<|u|<q$. Making a change of variables $u \mapsto 1/u$ in the integral \eqref{S2small} and using equations \eqref{simf2} and \eqref{simbeta} we get \begin{align*}
&S_{2}^{\textrm{e}}(\ell;V=\square)= - \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}}\frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)}) }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2} (1-u)^3}\\ &\quad \bigg(\frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) -4u \frac{ \partial_u \kappa_2}{\kappa_2} (\ell;u,1)+ 2 d( \ell_1)-2g- \frac{7+u}{1-u}+\frac{2u^{\mathfrak{a}(\ell)}}{u+u^{\mathfrak{a}(\ell)}} \bigg) du+O_\varepsilon(q^{-g+ \varepsilon g}). \end{align*} We further use the fact that \begin{align*}
&\frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}}\frac{ \partial_u \kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)})du }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2-1} (1-u)^3}\\
&\quad= - \frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}} \frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)})}{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2-1}(1-u)^3}\bigg(\frac{u}{u+u^{\mathfrak{a}(\ell)}} -\frac{d(\ell_1)}{2}+\frac{1+2u}{1-u}\bigg)\, du, \end{align*} as $\mathfrak{a}(\ell)(u^{\mathfrak{a}(\ell)}-u)=0$. Combining the two equations above, it follows that \begin{align}\label{S2large}
S_{2}^{\textrm{e}}(\ell;V=\square)= & - \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = q^{1-\varepsilon}}\frac{\kappa_2(\ell;u,1)(u+u^{\mathfrak{a}(\ell)}) }{u^{(d(\ell_1)+\mathfrak{a}(\ell))/2} (1-u)^3}\\ &\qquad\qquad \bigg( \frac{ \partial_ w \kappa_2}{\kappa_2}(\ell;u,1) -2g+ \frac{5u-1}{1-u}+\frac{2u}{u+u^{\mathfrak{a}(\ell)}} \bigg) du+O_\varepsilon(q^{-g+ \varepsilon g}).\nonumber \end{align}
As there is only one pole of the integrand at $u=1$ in the annulus between $|u|=q^{-1+\varepsilon}$ and $|u|=q^{1-\varepsilon}$, in view of \eqref{S2small} and \eqref{S2large} we conclude that
\begin{equation*}
S_{2}^{\textrm{e}}(\ell;V=\square) = - \frac{q }{2(q-1)|\ell_1|^{\frac{1}{2}}}\text{Res}(u=1) +O_\varepsilon(q^{-g+ \varepsilon g}). \end{equation*}
To compute the residue at $u=1$, we proceed as in calculating the residue of $M_k(\ell)$ in the previous subsection. In doing so we have \begin{align*} \textrm{Res}(u=1)=&\ \kappa_2(\ell;1,1)\bigg(\frac{g}{2}\sum_{j=0}^{2}\frac{\partial_u^j\kappa_2}{\kappa_2}(\ell;1,1)Q_{2,j}\big(d(\ell_1)\big)\\ &\qquad\qquad\qquad\qquad\qquad\qquad-\frac{1}{6}\sum_{i=0}^{1}\sum_{j=0}^{3-i}\frac{\partial_u^j\partial_w^i\kappa_2}{\kappa_2}(\ell;1,1)R_{2,i,j}\big(d(\ell_1)\big)\bigg), \end{align*} where $Q_{2,j}(x)$'s and $R_{2,i,j}(x)$'s are explicit polynomials of degrees $2-j$ and $3-i-j$, respectively, and the leading coefficients of $Q_{2,0}(x)$ and $R_{2,0,0}(x)$ are $1$. We also note that \[ \kappa_2(\ell;1,1)=\frac{\eta_2(\ell;1)}{\zeta_q(2)}=\frac{(q-1)\eta_2(\ell;1)}{q}, \] and as before we have the estimates \[ \frac{\partial_u^j\kappa_2}{\kappa_2}(\ell;1,1),\, \frac{\partial_u^j\partial_w\kappa_2}{\kappa_2}(\ell;1,1)\ll_{j,\varepsilon}d(\ell)^\varepsilon. \] Hence, in particular, we get \[
S_{2}^{\textrm{e}}(\ell;V=\square)=-\frac{\eta_2(\ell;1)}{12|\ell_1|^{1/2}}\Big(3gd(\ell_1)^2-d(\ell_1)^3\Big)+O\big(gd(\ell_1)d(\ell)^\varepsilon\big). \]
\subsubsection{The case $k=3$}
We have
\begin{align*}
\mathcal{N}_3(\ell;u,w) =& \frac{|\ell| u^{d(\ell_2)}}{| \ell_1 |^{1/2}}\kappa_3(\ell;u,w)\mathcal{Z}(u) \mathcal{Z}\Big(\frac wq\Big)^3 \mathcal{Z}\Big(\frac{uw^2}{q}\Big)^6 \mathcal{Z} \Big( \frac{1}{q^2u} \Big) \mathcal{Z}\Big(\frac{uw}{q}\Big)^{-3} , \end{align*}
where
\begin{equation}\label{kappa3}
\kappa_3(\ell;u,w)=\prod_{P\in\mathcal{P}} \mathcal{D}_{3,P}(u,w)\prod_{P | \ell_1} \mathcal{H}_{3,P}(u,w)\prod_{\substack{P \nmid \ell_1 \\ P | \ell_2}} \mathcal{J}_{3,P}(u,w) \end{equation} with \begin{align*}
&\mathcal{D}_{3,P}(u,w) =\bigg(1-\frac{w^{d(P)}}{|P|}\bigg)^3 \bigg(1-\frac{(uw)^{d(P)}}{|P|}\bigg)^{-3}\bigg(1-\frac{(uw^2)^{d(P)}}{|P|}\bigg)^3 \\
&\qquad\bigg(1+\frac{3w^{d(P)}\big(1-u^{d(P)}+(uw)^{d(P)}\big)}{|P|}-\frac{u^{-d(P)}+(uw^2)^{d(P)}\big(6-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\\
&\qquad\qquad+\frac{3w^{2d(P)}\big(1+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}\big(3+(uw)^{2d(P)}\big)}{|P|^4}+\frac{(uw^3)^{2d(P)}}{|P|^5}\bigg), \end{align*} $\mathcal{H}_{3,P} (u,w) =$ \begin{align*}
& \bigg(1-u^{d(P)}+3(uw)^{d(P)}-\frac{(uw)^{d(P)}\big(3-3w^{d(P)}+3(uw)^{d(P)}-(uw^2)^{d(P)}\big)}{|P|}-\frac{(u^2w^3)^{d(P)}}{|P|^2}\bigg)\\
&\qquad\bigg(1+\frac{3w^{d(P)}\big(1-u^{d(P)}+(uw)^{d(P)}\big)}{|P|}-\frac{u^{-d(P)}+(uw^2)^{d(P)}\big(6-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\\
&\qquad\qquad+\frac{3w^{2d(P)}\big(1+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}\big(3+(uw)^{2d(P)}\big)}{|P|^4}+\frac{(uw^3)^{2d(P)}}{|P|^5}\bigg)^{-1} \end{align*} and $\mathcal{J}_{3,P} (u,w) =$ \begin{align*}
& \bigg(1-\frac{\big(1-3w^{d(P)}+3(uw)^{d(P)}-3(uw^2)^{d(P)}\big)}{|P|}-\frac{(uw^2)^{d(P)}\big(3-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\bigg)\\
&\qquad\bigg(1+\frac{3w^{d(P)}\big(1-u^{d(P)}+(uw)^{d(P)}\big)}{|P|}-\frac{u^{-d(P)}+(uw^2)^{d(P)}\big(6-w^{d(P)}+(uw)^{d(P)}\big)}{|P|^2}\\
&\qquad\qquad+\frac{3w^{2d(P)}\big(1+(uw)^{2d(P)}\big)}{|P|^3}-\frac{(uw^4)^{d(P)}\big(3+(uw)^{2d(P)}\big)}{|P|^4}+\frac{(uw^3)^{2d(P)}}{|P|^5}\bigg)^{-1}. \end{align*}
Using the above, we obtain \begin{align}\label{S3integral}
&S_{3}^{\textrm{e}}(\ell;N;V=\square) =- \frac{q}{(q-1)|\ell_1|^{1/2}} \frac{1}{(2 \pi i)^2} \oint_{|u|=r_1} \oint_{|w|=r_2} \\ &\qquad\qquad \frac{(1-uw)^3\kappa_3(\ell;u,w) dwdu }{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}w^{N+1-\mathfrak{a}}(1-u)(1-w)^3 (1-uw^2)^7}+O\big(q^{-g/2+\varepsilon g}\big).\nonumber \end{align}
Note that $\kappa_3(1;u,w)$ is the same as $\mathcal{T}(u,w/q)$ in Lemma 7.4 of [\textbf{\ref{F23}}]. As a result of [\textbf{\ref{F23}}], $\kappa_3(\ell;u,w)$ converges absolutely for $|u| > 1/q$, $|w| < q^{1/2}$, $|uw| <q^{1/2}$ and $|uw^2| <q^{1/2}$. Moreover, $\kappa_3(\ell;u,1)$ continues analytically to the annulus $1/q<|u|<q$.
We proceed as in the case $k=2$. First we move the contour $|u|=r_1$ to $|u|=r_1'=q^{-1+\varepsilon}$, and then the contour $|w|=r_2$ to $|w|=r_2'=q^{1/2-\varepsilon}$ in the equation \eqref{S3integral}. In doing so, we cross a triple pole at $w=1$. On the new contours, the integral is bounded by $O_\varepsilon(q^{-g+\varepsilon g})$. Hence, by expanding the terms in their Laurent series, \begin{align*}
S_{3}^{\textrm{e}}(\ell,N;V=\square) &=\ \frac{q}{(q-1) |\ell_1|^{\frac{1}{2}}} \frac{1}{2 \pi i} \oint_{|u| = r_1'} \frac{\kappa_3(\ell;u,1)}{u^{(N+d(\ell_1)-\mathfrak{a})/2-g}(1-u)^7}\nonumber \\ &\quad\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\frac{\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;u,1)Q_{3,i_1,i_2}(\mathfrak{a},u)(1-u)^{i_1+i_2}N^{i_2} du + O_\varepsilon(q^{-g/2+\varepsilon g}), \end{align*} where $Q_{3,i,j}(\mathfrak{a},u)$'s are some explicit functions and are analytic as functions of $u$.
Next we move the $u$-contour to $|u|=q^{1-\varepsilon}$. We encounter a pole at $u=1$ and we bound the new integral by $O_\varepsilon(|\ell_1|^{-1}q^{-g/2+\varepsilon g})$. For the residue at $u=1$, we calculate the Taylor series of the terms in the integrand and get \begin{align*} \textrm{Res}(u=1)=&\kappa_3(\ell;1,1)\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\sum_{j=0}^{6-i_1-i_2}\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)R_{3,i_1,i_2,j}(\mathfrak{a},g+d(\ell_1))N^{i_2}, \end{align*} where $R_{3,i_1,i_2,j}(\mathfrak{a},x)$'s are explicit polynomials in $x$ with degree $6-i_1-i_2-j$. Thus, \begin{align*}
S_{3}^{\textrm{e}}(\ell;V=\square)=&\frac{\kappa_3(\ell;1,1)q}{(q-1) |\ell_1|^{1/2}}\sum_{N=3g-1}^{3g}\sum_{i_1=0}^{2}\sum_{i_2=0}^{2-i_1}\sum_{j=0}^{6-i_1-i_2}\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)R_{3,i_1,i_2,j}(\mathfrak{a},g+d(\ell_1))N^{i_2}\\
&\qquad\qquad+ O_\varepsilon(q^{-g/2+\varepsilon g})+O_\varepsilon(|\ell_1|^{-3/4}q^{-g/4+\varepsilon g}). \end{align*}
As for the leading term, as before we can show that \[ \kappa_3(\ell;1,1)=\frac{\eta_3(\ell;1)}{\zeta_q(2)} ,\qquad\frac{\partial_u^j\partial_w^{i_1}\kappa_3}{\kappa_3}(\ell;1,1)\ll_{i_1,j,\varepsilon}d(\ell)^\varepsilon, \] and so \begin{align*}
S_{3}^{\textrm{e}}(\ell;V=\square)&=\frac{\eta_3(\ell;1)}{ |\ell_1|^{1/2}}\sum_{N=3g-1}^{3g}\sum_{i_2=0}^{2}R_{3,0,i_2,0}(\mathfrak{a},g+d)N^{i_2}+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big)\\
&= \frac{\eta_3(\ell;1)}{ |\ell_1|^{\frac{1}{2}}}\sum_{N=3g-1}^{3g}\sum_{i_2=0}^{2} \frac{1}{2 \pi i} \oint_{|u| = r_1'} \frac{Q_{3,0,i_2}(\mathfrak{a},1)(1-u)^{i_2}N^{i_2} du}{u^{(g+d(\ell_1))/2}(1-u)^7}\nonumber \\ &\qquad\qquad +O_\varepsilon\big(g^5d(\ell)^\varepsilon\big). \end{align*} Expanding the terms in \eqref{S3integral} in their Taylor series about $w=1$, \begin{align*} &(1-uw)^3=(1-u)^3-3u(1-u)^2(w-1)+3u^2(1-u)(w-1)^2+\ldots,\\ &w^{-N}=1-N(w-1)+\frac{N(N+1)}{2}(w-1)^2+\ldots,\\ &(1-uw^2)^{-7}=(1-u)^{-7}+14u(1-u)^{-8}(w-1)+7u(1-u)^{-9}(1+15u)(w-1)^2+\ldots, \end{align*} we see that \begin{align*}
S_{3}^{\textrm{e}}(\ell;V=\square)&= \frac{\eta_3(\ell;1)}{ |\ell_1|^{\frac{1}{2}}}\sum_{N=3g-1}^{3g} \frac{1}{2 \pi i} \oint_{|u| = r_1'} \frac{1}{u^{(g+d(\ell_1))/2}(1-u)^7}\nonumber \\ &\qquad\qquad \bigg(73-11(1-u)N+\frac{(1-u)^2N^2}{2}\bigg)du+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big)\\
&=-\frac{\eta_3(\ell;1)}{2^5\cdot 6!\, |\ell_1|^{\frac{1}{2}}}(g+d)^4\Big(73(g+d)^2-396g(g+d)+540g^2\Big)+O_\varepsilon\big(g^5d(\ell)^\varepsilon\big). \end{align*}
\subsection{Bounding $S_k(\ell;N;V \neq \square)$}\label{Vnonsquare} Recall from \eqref{Sknonsquare} that $$ S_k(\ell;N;V \neq \square) = S_{k}^{\textrm{e}}(\ell;N;V\ne\square) +S_{k}^{\textrm{o}}(\ell;N) , $$ with $S_{k}^{\textrm{e}}(\ell;N;V\ne\square) = S_{k,1}^{\textrm{e}}(\ell;N;V\ne\square)-qS_{k,2}^{\textrm{e}}(\ell;N;V\ne\square)$ and $S_{k}^{\textrm{o}}(\ell;N) = S_{k,1}^{\textrm{o}}(\ell;N)-qS_{k,2}^{\textrm{o}}(\ell;N)$, and $S_{k,1}^{\textrm{o}}(\ell;N)$ is given by equation \eqref{s1odd}. We will focus on bounding $S_{k,1}^{\textrm{o}}(\ell;N)$, since the other terms can be bounded similarly.
Using the fact that for $r_1<1$,
$$\sum_{\substack{C \in \mathcal{M}_j \\ C| (f \ell)^{\infty}}} \frac{1}{|C|^2} = \frac{1}{2 \pi i} \oint_{|u|=r_1} q^{-2j} \prod_{P | f \ell} \big(1-u^{d(P)}\big)^{-1}\, \frac{du}{u^{j+1}},$$ and writing $V=V_1V_2^2$ with $V_1$ a square-free polynomial, we have \begin{align*}
S_{k,1}^{\textrm{o}}(\ell;N) &= \frac{q^{3/2}}{(q-1)|\ell|} \frac{1}{2 \pi i} \oint_{|u|=r_1} \sum_{\substack{n\leq N \\ n +d(\ell) \text{ odd}}}\sum_{j=0}^g \sum_{\substack{r\leq n+d(\ell)-2g+2j-2 \\ r \text{ odd}}}q^{-2j} \\
& \qquad \sum_{V_1 \in \mathcal{H}_r} \sum_{V_2 \in \mathcal{M}_{(n+d(\ell)-r)/2-g+j-1}} \sum_{f \in \mathcal{M}_n} \frac{\tau_k(f) G(V_1V_2^2, \chi_{f \ell})}{|f|^{3/2} }\prod_{P | f \ell} \big(1-u^{d(P)}\big)^{-1} \, \frac{du}{u^{j+1}}. \end{align*} Now \begin{align*}
\sum_{f \in \mathcal{M}} \frac{\tau_k(f) G(V_1V_2^2, \chi_{f \ell})}{|f|^{3/2}} \prod_{P | f \ell} \big(1-u^{d(P)}\big)^{-1}w^{d(f)}= \mathcal{H}(V,\ell;u,w) \mathcal{J}(V,\ell;u,w)\mathcal{K}(V_1 ; u,w), \end{align*} where
$$ \mathcal{H}(V,\ell;u,w) = \prod_{P | \ell} \bigg( \sum_{j=0}^{\infty} \frac{\tau_k(P^j) G(V,\chi_{P^{j+\text{ord}_P(\ell)}}) w^{j d(P)}}{|P|^{3j/2}} \bigg) \big(1-u^{d(P)}\big)^{-1},$$ \begin{align*}
\mathcal{J}(V,\ell;u,w) &= \prod_{\substack{P \nmid \ell\\P |V }} \bigg( 1+ \sum_{j=1}^{\infty} \frac{\tau_k(P^j) G(V, \chi_{P^j}) w^{j d(P)}}{|P|^{3j/2}} \big(1-u^{d(P)}\big)^{-1} \bigg) \\
& \qquad\qquad \prod_{\substack{P \nmid V_1\\P | \ell V_2 }}\Big( 1+ \frac{k \chi_{V_1}(P) w^{d(P)}}{|P|} \big(1-u^{d(P)}\big)^{-1}\Big)^{-1} \end{align*}
and
$$ \mathcal{K}(V_1;u,w) = \prod_{P \nmid V_1} \bigg( 1+ \frac{k \chi_{V_1}(P) w^{d(P)}}{|P|}\big(1-u^{d(P)}\big)^{-1} \bigg). $$ We use the Perron formula for the sum over $f$ and obtain \begin{align*}
S_{k,1}^{\textrm{o}}(\ell;N) &= \frac{q^{3/2}}{(q-1)|\ell|} \frac{1}{(2 \pi i)^2} \oint_{|u|=q^{-\varepsilon}} \oint_{|w| = q^{1/2-\varepsilon}} \sum_{\substack{n\leq N \\ n +d(\ell) \text{ odd}}} \sum_{j=0}^g \sum_{\substack{r\leq n+d(\ell)-2g+2j-2 \\ r \text{ odd}}} q^{-2j}\\ & \qquad \sum_{V_1 \in \mathcal{H}_r} \sum_{V_2 \in \mathcal{M}_{(n+d(\ell)-r)/2-g+j-1}} \mathcal{H}(V,\ell;u,w) \mathcal{J}(V,\ell;u,w)\mathcal{K}(V_1;u,w) \, \frac{du}{u^{j+1}} \, \frac{dw}{w^{n+1}}. \end{align*}
Let $j_0$ be minimal such that $|wu^{j_0}| < 1$. Then we write \begin{equation} \mathcal{K}(V_1;u,w) = \mathcal{L}\Big(\frac wq,\chi_{V_1}\Big)^k \mathcal{L}\Big(\frac{uw}{q},\chi_{V_1}\Big)^k \cdot \ldots \cdot \mathcal{L}\Big(\frac{u^{j_0-1}w}{q}, \chi_{V_1}\Big)^k \mathcal{T}(V_1;u,w), \label{expr} \end{equation} where $\mathcal{T}(V_1;u,w)$ is absolutely convergent in the selected region. We also have $$ \mathcal{J}(V,\ell;u,w) \ll 1$$ and similarly as in the proof of Lemma $5.3$ in [\textbf{\ref{S}}],
$$ \mathcal{H} (V,\ell;u,w) \ll_\varepsilon |\ell|^{1/2+\varepsilon} \big| (\ell, V_2^2)\big|^{1/2}|V|^{\varepsilon}.$$ We trivially bound the sum over $V_2$. Then we use \eqref{expr} and upper bounds for moments of $L$--functions (see Theorem $2.7$ in [\textbf{\ref{F4}}]) to get that
$$ \sum_{V_1 \in \mathcal{H}_r} \bigg| \mathcal{L}\Big(\frac wq,\chi_{V_1}\Big) \mathcal{L}\Big(\frac{uw}{q},\chi_{V_1}\Big) \cdot \ldots \cdot \mathcal{L}\Big(\frac{u^{j_0-1}w}{q}, \chi_{V_1}\Big) \bigg|^k \ll_\varepsilon q^{r} r^{k(k+1)/2+\varepsilon} .$$ Alternatively, one can use a Lindel\"{o}f type bound for each $L$--function to get the weaker upper bound of $q^{r+\varepsilon r}$ for the expression above. Trivially bounding the rest of the expression, we obtain that
$$ S_{k,1}^{\textrm{o}}(\ell;N) \ll_\varepsilon |\ell|^{1/2} q^{N/2 -2g+ \varepsilon g}.$$
Hence, since $N\leq kg$ in our application, $$S_k(\ell;N;V \neq \square) \ll_\varepsilon |\ell|^{1/2}q^{(k-4)g/2+ \varepsilon g}.$$
\section{Moments of the partial Hadamard product}\label{momentsZ}
\subsection{Random matrix theory model}
Recall that \begin{equation*} Z_{X}(s,\chi_D)=\exp\Big(-\sum_{\rho}U\big((s-\rho)\ X\big)\Big), \end{equation*} where \begin{equation*} U(z)=\int_{0}^{\infty}u(x)E_{1}(z\log x)dx. \end{equation*}
Denote the zeros by $\rho=1/2+i\gamma$. Since $E_1(-ix)+E_1(ix)=-2\textrm{Ci}(|x|)$ for $x\in\mathbb{R}$, where $\textrm{Ci}(z)$ is the cosine integral, \[ \textrm{Ci}(z)=-\int_{z}^{\infty}\frac{\cos(x)}{x}dx, \] we have \begin{equation}\label{rmt} \Big\langle Z_X(\chi_D)^k \Big\rangle_{\mathcal{H}_{2g+1}}=\bigg\langle \prod_{\gamma>0}\exp\Big(2k\int_{0}^{\infty}u(x)\textrm{Ci}\big(\gamma X(\log x) \big)dx\Big) \bigg\rangle_{\mathcal{H}_{2g+1}}. \end{equation}
We model the right-hand side of \eqref{rmt} by replacing the ordinates $\gamma$ by the eigenangles of a $2g\times 2g$ symplectic unitary matrix and averaging over all such matrices with respect to the Haar measure. The $k$-th moment of $Z_X(\chi_D)$ is thus expected to be asymptotic to \[ \mathbb{E}_{2g}\bigg[\prod_{n=1}^{g}\exp\Big(2k\int_{0}^{\infty}u(x)\textrm{Ci}\big(\theta_nX(\log x) \big)dx\Big)\bigg], \] where $\pm\,\theta_n$ with $0\leq\theta_1\leq\ldots\leq\theta_{g}\leq\pi$ are the $2g$ eigenangles of the random matrix and $\mathbb{E}_{2g}[\cdot]$ denotes the expectation with respect to the Haar measure. It is convenient to have our function periodic, so we instead consider \begin{equation}\label{rmt1} \mathbb{E}_{2g}\bigg[\prod_{n=1}^{g}\phi(\theta_n)\bigg], \end{equation} where \begin{align*}
\phi(\theta)&=\exp\bigg(2k\int_{0}^{\infty}u(x)\Big(\sum_{j=-\infty}^{\infty}\textrm{Ci}\big(|\theta+2\pi j|X(\log x) \big)\Big)dx\bigg)\\
&=\Big|2\sin\frac \theta 2\Big|^{2k}\exp\bigg(2k\int_{0}^{\infty}u(x)\Big(\sum_{j=-\infty}^{\infty}\textrm{Ci}\big(|\theta+2\pi j|X(\log x) \big)\Big)dx-2k\log\Big|2\sin\frac \theta 2\Big| \bigg). \end{align*} The average \eqref{rmt1} over the symplectic group has been asymptotically evaluated in [\textbf{\ref{DIK}}] and we have \[ \mathbb{E}_{2g}\bigg[\prod_{n=1}^{g}\phi(\theta_n)\bigg]\sim \frac{G(k+1)\sqrt{\Gamma(k+1)}}{\sqrt{G(2k+1)\Gamma(2k+1)}}\Big(\frac{2g}{e^\gamma X}\Big)^{k(k+1)/2}. \]
\subsection{Proof of Theorem \ref{k123}}
What is most important to us in evaluating the moments of $Z_X(\chi_D)$ is the leading term coming from the twisted moments. Theorems \ref{tfm}, \ref{tsm} and \ref{ttm}, together with \eqref{eta}, show that \begin{align*}
\Big\langle L(\tfrac12,\chi_D)^k\chi_D(\ell) \Big\rangle_{\mathcal{H}_{2g+1}}=&\ \frac{c_k\mathcal{A}_k\mathcal{B}_k(\ell_1)\mathcal{C}_k(\ell_1,\ell_2)}{2^{k(k+1)/2-1}(k(k+1)/2)!|\ell_1|^{1/2}}\big(kg\big)^{k(k+1)/2}\\
&\qquad\qquad+O\Big(\frac{\tau_k(\ell_1)}{|\ell_1|^{1/2}}g^{k(k+1)/2-1}d(\ell)\Big)+O_\varepsilon\big(|\ell|^{1/2}q^{(k-4)g/2+\varepsilon g}\big) \end{align*} with $c_1=c_2=1$ and $c_3=2^9/3^6$, where \begin{align*}
&\mathcal{A}_{k}=\prod_P\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\bigg(1+\frac{1}{|P|}\bigg)^{-1}\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg),\\
&\mathcal{B}_{k}(\ell_1)=\prod_{P|\ell_1}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j+1})}{|P|^j}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)^{-1},\\
&\mathcal{C}_{k}(\ell_1,\ell_2)=\prod_{\substack{P\nmid \ell_1\\P|\ell_2}}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j})}{|P|^j}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)^{-1}. \end{align*} Combining with \eqref{P*} we get \begin{align*} \Big\langle L(\tfrac12,\chi_D)^kP_{-k,X}^{*}(\chi_{D}) \Big\rangle_{\mathcal{H}_{2g+1}}&=\, J_{k,1}+J_{k,2}+O_\varepsilon\big(q^{\vartheta g+(k-4)g/2+\varepsilon g}\big)+O_\varepsilon(q^{-c\vartheta g/4+\varepsilon g}) \end{align*} for any $\vartheta>0$, where \begin{equation}\label{Jk1}
J_{k,1}=\frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\sum_{\substack{\ell_1,\ell_2\in S(X)\\ \ell_1\ \textrm{square-free}\\ d(\ell_1)+2d(\ell_2)\leq\vartheta g}}\frac{\mathcal{B}_k(\ell_1)\mathcal{C}_k(\ell_1,\ell_2)\alpha_{-k}(\ell_1\ell_2^2)}{|\ell_1||\ell_2|} \end{equation} and \begin{align}\label{Jk2}
J_{k,2}&\ll g^{k(k+1)/2-1}\sum_{\ell_1,\ell_2\in S(X)}\frac{\tau_k(\ell_1)\tau_{k}(\ell_1\ell_2^2)\big(d(\ell_1)+2d(\ell_2)\big)}{|\ell_1||\ell_2|}\nonumber\\
&\ll g^{k(k+1)/2-1}\sum_{\ell_2\in S(X)}\frac{\tau_{k}(\ell_2^2)d(\ell_2)}{|\ell_2|}\sum_{\ell_1\in S(X)}\frac{\tau_k(\ell_1)^2d(\ell_1)}{|\ell_1|}, \end{align} as $\alpha_{-k}(\ell_1\ell_2^2)\ll\tau_k(\ell_1\ell_2^2)\ll\tau_k(\ell_1)\tau_{k}(\ell_2^2)$.
We first consider the error term $J_{k,2}$. Let \begin{displaymath}
F(\sigma)=\sum_{\ell\in S(X)}\frac{\tau_k(\ell)^2}{|\ell|^\sigma}=\prod_{d(P)\leq X}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^j)^2}{|P|^{j\sigma}}\bigg). \end{displaymath}
Then $F(1)\asymp\prod_{d(P)\leq X}(1-1/|P|)^{-k^2}\asymp X^{k^2}$. We note that the sum over $\ell_1$ in \eqref{Jk2} is \[
-\frac{F'(1)}{\log q}=F(1)\sum_{d(P)\leq X}\frac{\sum_{j=0}^{\infty}j\tau_k(P^j)^2d(P)/|P|^{j}}{\sum_{j=0}^{\infty}\tau_k(P^j)^2/|P|^{j}}, \] and hence it is \[
\ll X^{k^2}\sum_{d(P)\leq X}\frac{d(P)}{|P|}\ll X^{k^2+1}. \]
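For instance, the last bound follows from the Prime Polynomial Theorem, $\pi_q(n):=\#\{P\in\mathcal{P}:d(P)=n\}=\frac{q^n}{n}+O\big(\frac{q^{n/2}}{n}\big)$, which gives
\[
\sum_{d(P)\leq X}\frac{d(P)}{|P|}=\sum_{n\leq X}\frac{n\,\pi_q(n)}{q^{n}}=\sum_{n\leq X}\Big(1+O\big(q^{-n/2}\big)\Big)=X+O(1).
\]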
Similarly we have $$\sum_{\ell_2\in S(X)}\frac{\tau_{k}(\ell_2^2)d(\ell_2)}{|\ell_2|}\ll X^{k(k+1)/2+1},$$ and hence $$J_{k,2}\ll g^{k(k+1)/2-1}X^{k(3k+1)/2+2}.$$
For the main term $J_{k,1}$, recall that the function $\alpha_{-k}(\ell)$ is given by \begin{equation}\label{alpha}
\sum_{\ell\in\mathcal{M}}\frac{\alpha_{-k}(\ell)\chi_{D}(\ell)}{|\ell|^s}=\prod_{d(P)\leq X/2}\bigg( 1-\frac{\chi_D(P)}{|P|^s}\bigg) ^{k}\prod_{X/2<d(P)\leq X}\bigg(1-\frac{k\chi_{D}(P)}{|P|^s}+\frac{k^2\chi_{D}(P)^2}{2|P|^{2s}}\bigg) . \end{equation} So for $1\leq k\leq 3$, the function $\alpha_{-k}(\ell)$ is supported on quad-free polynomials. As a result, if we let \[ \mathcal{P}_X=\prod_{d(P)\leq X}P, \]
then for the sum over $\ell_1,\ell_2$ in \eqref{Jk1} we can write $\ell_1=\ell_1'\ell_3$ and $\ell_2=\ell_2'\ell_3$, where $\ell_1',\ell_2',\ell_3$ are all square-free, i.e. $\ell_1',\ell_2',\ell_3|\mathcal{P}_X$, and $\ell_1',\ell_2',\ell_3$ are pairwise co-prime. Hence \begin{align*}
J_{k,1}=&\frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\sum_{\substack{\ell_3|\mathcal{P}_X\\ d(\ell_3)\leq\vartheta g/3}}\frac{\mathcal{B}_k(\ell_3)\alpha_{-k}(\ell_3^3)}{|\ell_3|^2}\\
&\qquad\qquad\sum_{\substack{\ell_2|(\mathcal{P}_X/\ell_3)\\ d(\ell_2)\leq(\vartheta g-3d(\ell_3))/2}}\frac{\mathcal{C}_k(\ell_2)\alpha_{-k}(\ell_2^2)}{|\ell_2|}\sum_{\substack{\ell_1|(\mathcal{P}_X/\ell_2\ell_3)\\ d(\ell_1)\leq\vartheta g-2d(\ell_2)-3d(\ell_3)}}\frac{\mathcal{B}_k(\ell_1)\alpha_{-k}(\ell_1)}{|\ell_1|}, \end{align*} where \[
\mathcal{C}_k(\ell_2)=\mathcal{C}_k(1,\ell_2)=\prod_{P|\ell_2}\bigg(\sum_{j=0}^{\infty}\frac{\tau_k(P^{2j})}{|P|^j}\bigg)\bigg(1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}\bigg)^{-1}. \]
As in \eqref{truncation}, we can remove the condition $d(\ell_1)+2d(\ell_2)+3d(\ell_3)\leq\vartheta g$ at the cost of an error of size $O_\varepsilon\big(q^{-\vartheta g/2+\varepsilon g}\big)$. Define the following multiplicative functions \begin{align*}
T_1(f)=\sum_{\ell|f}\frac{\mathcal{B}_k(\ell)\alpha_{-k}(\ell)}{|\ell|},\qquad T_2(f)=\sum_{\ell|f}\frac{\mathcal{C}_k(\ell)\alpha_{-k}(\ell^2)}{|\ell|T_1(\ell)} \end{align*} and \[
T_3(f)=\sum_{\ell|f}\frac{\mathcal{B}_k(\ell)\alpha_{-k}(\ell^3)}{|\ell|^2T_1(\ell)T_2(\ell)}. \] Then \begin{align*} J_{k,1}=&\ \frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}T_1\big(\mathcal{P}_X\big)T_2\big(\mathcal{P}_X\big)T_3\big(\mathcal{P}_X\big)\\ =&\ \frac{2c_k\mathcal{A}_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\\
&\qquad\prod_{d(P)\leq X}\bigg(1+\frac{\mathcal{B}_k(P)\alpha_{-k}(P)}{|P|}+\frac{\mathcal{C}_k(P)\alpha_{-k}(P^2)}{|P|}+\frac{\mathcal{B}_k(P)\alpha_{-k}(P^3)}{|P|^2}\bigg). \end{align*} We remark that \[
\mathcal{A}_k=\Big(1+O\big(q^{-X}/X\big)\Big)\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\mathcal{A}_{k}(P), \] where \[
\mathcal{A}_{k}(P)=1+\frac{1}{|P|}+\sum_{j=1}^{\infty}\frac{\tau_{k}(P^{2j})}{|P|^j}. \] So \begin{align*}
&J_{k,1}=\Big(1+O\big(q^{-X}/X\big)\Big) \frac{2c_k}{(k(k+1)/2)!}\Big(\frac{kg}{2}\Big)^{k(k+1)/2}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{k(k+1)/2}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\ \prod_{d(P)\leq X}\bigg(\mathcal{A}_{k}(P)+\frac{\mathcal{A}_{k}(P)\mathcal{B}_k(P)\alpha_{-k}(P)}{|P|}+\frac{\mathcal{A}_{k}(P)\mathcal{C}_k(P)\alpha_{-k}(P^2)}{|P|}+\frac{\mathcal{A}_{k}(P)\mathcal{B}_k(P)\alpha_{-k}(P^3)}{|P|^2}\bigg). \end{align*} We note from \eqref{alpha} that $\alpha_{-k}(P)=-k$. Also if $P\in\mathcal{P}$ with $d(P)\leq X/2$, then $$\alpha_{-k}(P^2)=\frac{k(k-1)}{2}\qquad \textrm{and}\qquad\alpha_{-k}(P^3)=-\frac{k(k-1)(k-2)}{6},$$ and if $P\in\mathcal{P}$ with $X/2<d(P)\leq X$, then $$\alpha_{-k}(P^2)=\frac{k^2}{2}\qquad \textrm{and}\qquad\alpha_{-k}(P^3)=0.$$ Standard calculations also give \[ \mathcal{A}_{k}(P)=\begin{cases}
\big(1-\frac{1}{|P|}\big)^{-1}\big(1+\frac{1}{|P|}-\frac{1}{|P|^2}\big)& k=1,\\
\big(1-\frac{1}{|P|}\big)^{-2}\big(1+\frac{2}{|P|}-\frac{2}{|P|^2}+\frac{1}{|P|^3}\big) & k=2,\\
\big(1-\frac{1}{|P|}\big)^{-3}\big(1+\frac{4}{|P|}-\frac{3}{|P|^2}+\frac{3}{|P|^3}-\frac{1}{|P|^4}\big) & k=3, \end{cases} \] \[ \mathcal{A}_{k}(P)\mathcal{B}_k(P)=\begin{cases}
\big(1-\frac{1}{|P|}\big)^{-1} & k=1,\\
2\big(1-\frac{1}{|P|}\big)^{-2} & k=2,\\
\big(1-\frac{1}{|P|}\big)^{-3}\big(3+\frac{1}{|P|}\big) & k=3 \end{cases} \] and \[ \mathcal{A}_{k}(P)\mathcal{C }_k(P)=\begin{cases}
\big(1-\frac{1}{|P|}\big)^{-1} & k=1,\\
\big(1-\frac{1}{|P|}\big)^{-2}\big(1+\frac{1}{|P|}\big) & k=2,\\
\big(1-\frac{1}{|P|}\big)^{-3}\big(1+\frac{3}{|P|}\big) & k=3. \end{cases} \] Hence, using Lemma \ref{mertens} we get \begin{align*}
J_{1,1}&=\Big(1+O\big(q^{-X}/X\big)\Big)g\prod_{d(P)\leq X}\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|^2}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1+\frac{1}{2|P|}-\frac{1}{|P|^2}\bigg)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big) g\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{1/2}\\ &= \frac{1}{\sqrt{2}}\frac{g}{e^\gamma X/2}+O\big(gX^{-2}\big) \end{align*} and \begin{align*}
J_{2,1}&=\Big(1+O\big(q^{-X}/X\big)\Big)\frac{g^3}{3}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}-\frac{1}{|P|^2}+\frac{1}{|P|^3}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1+\frac{1}{|P|^3}\bigg)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big) \frac{g^3}{3}\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)^3\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{2}\\ &=\frac{1}{12} \Big(\frac{g}{e^\gamma X/2}\Big)^3+O\big(g^3X^{-4}\big). \end{align*} Lastly, \begin{align*}
J_{3,1}&=\Big(1+O\big(q^{-X}/X\big)\Big)\frac{g^6}{45}\prod_{d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^3\bigg(1+\frac{1}{|P|}\bigg)^{-1}\\
&\qquad\qquad\prod_{d(P)\leq X/2}\bigg(1-\frac{2}{|P|}+\frac{2}{|P|^3}-\frac{1}{|P|^4}\bigg)\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{2|P|}+O\bigg(\frac{1}{|P|^2}\bigg)\bigg)\\
&=\Big(1+O\big(q^{-X/2}/X\big)\Big) \frac{g^6}{45}\prod_{d(P)\leq X/2}\bigg(1-\frac{1}{|P|}\bigg)^6\prod_{X/2<d(P)\leq X}\bigg(1-\frac{1}{|P|}\bigg)^{9/2}\\ &=\frac{1}{720\sqrt{2}} \Big(\frac{g}{e^\gamma X/2}\Big)^6+O\big(g^6X^{-7}\big). \end{align*} The theorem follows by choosing any $0<\vartheta <(4-k)/2$.
\end{document} |
\begin{document}
\date{}
\title{ Linear Quaternion Differential Equations: Basic Theory and Fundamental Results } \author{ Kit Ian Kou$^{1,2}$\footnote{ Kit Ian Kou acknowledges financial support from the National Natural Science Foundation of China under Grant (No. 11401606), University of Macau (No. MYRG2015-00058-FST and No. MYRG099(Y1-L2)-FST13-KKI) and the Macao Science and Technology Development Fund (No. FDCT/094/2011/A and No. FDCT/099/2012/A3). }
\,\,\,\,\, Yong-Hui Xia$^{1,3}$
\footnote{ Corresponding author. Email: [email protected]. Yonghui Xia was supported by the National Natural Science Foundation of China under Grant (No. 11671176 and No. 11271333), Natural Science Foundation of Zhejiang Province under Grant (No. Y15A010022), Marie Curie Individual Fellowship within the European Community Framework Programme(MSCA-IF-2014-EF, ILDS - DLV-655209), the Scientific Research Funds of Huaqiao University and China Postdoctoral Science Foundation (No. 2014M562320). }
\\ {\small 1. School of Mathematical Sciences, Huaqiao University, 362021, Quanzhou, Fujian, China.}\\
{\small\em [email protected]; [email protected] (Y.H.Xia)}\\ {\small 2.Department of Mathematics, Faculty of Science and Technology, University of Macau, Macau}\\ {\small \em [email protected] (K. I. Kou) }
\\
{\small 3. Department of Mathematics, Zhejiang Normal University, Jinhua, 321004, China}\\
}
\maketitle
\begin{center} \begin{minipage}{140mm} \begin{abstract}
Quaternion-valued differential equations (QDEs) are a class of differential equations with many applications in physics and the life sciences. The most significant difference between QDEs and ODEs is the algebraic structure: owing to the non-commutativity of the quaternion algebra, the algebraic structure of the solutions to a QDE is completely different from that of an ODE. {\em It is actually a right free module, not a linear vector space.}
This paper establishes {\bf a systematic framework for the theory of linear QDEs}, which can be applied to quantum mechanics, fluid mechanics, the Frenet frame in differential geometry, kinematic modelling, attitude dynamics, Kalman filter design, spatial rigid body dynamics, etc. We prove that the algebraic structure of the solutions to QDEs is actually a right free module, not a linear vector space. Owing to the non-commutativity of the quaternion algebra, many concepts and properties of ordinary differential equations (ODEs) cannot be carried over to QDEs; they must be redefined accordingly. A definition of the {\em Wronskian} is introduced under the framework of quaternions, which differs from the standard one for ODEs, and a Liouville formula for QDEs is given. It is also necessary to treat left and right eigenvalue problems separately. Building on these results, we study the solutions to linear QDEs and present two algorithms to evaluate the fundamental matrix. Concrete examples are given to show the feasibility of the obtained algorithms. Finally, a conclusion and discussion end the paper.
\end{abstract}
{\bf Keywords:}\ quaternion; quantum; solution; noncommutativity; eigenvalue; differential equation; fundamental matrix
{\bf 2000 Mathematics Subject Classification:} 34K23; 34D30; 37C60; 37C55;39A12
\end{minipage} \end{center} \section{\bf Introduction and Motivation}
Ordinary differential equations (ODEs) have been well studied; their theory is systematic and rather complete (see the monographs, e.g. \cite{Arnold, Chicone,Chow,DingLi,Hale,Hartman,ZWN}). Quaternion-valued differential equations (QDEs) are a class of differential equations with many applications in physics and the life sciences. Recently, a few papers have begun to study QDEs \cite{Mawhin,Zhang1,Leo3,W,Zhang2}. Up till now, however, the theory of QDEs is not systematic, and many questions remain open. For example, even the structure of the solutions to linear QDEs is unknown. What kind of vector space do the solutions to a linear QDE form? What is the difference between QDEs and ODEs? First of all, why should we study QDEs at all? In the following, we introduce the background of quaternion-valued differential equations.
\subsection{Background for QDEs}
Quaternions are 4-vectors whose multiplication rules are governed by a simple non-commutative division algebra. We denote the quaternion $q=(q_0, q_1, q_2, q_3)^T\in \mathbb{R}^4$ by \[ q= q_0 + q_1 \mathbf{i} + q_2 \mathbf{j} + q_3 \mathbf{k}, \] where $q_0, q_1, q_2, q_3$ are real numbers and $\mathbf{i},\mathbf{j},\mathbf{k}$ satisfy the multiplication table formed by \[ \mathbf{i}^2=\mathbf{j}^2=\mathbf{k}^2=\mathbf{ijk}=-1,\,\, \mathbf{ij}=-\mathbf{ji}=\mathbf{k},\,\, \mathbf{ki}=-\mathbf{ik}=\mathbf{j},\,\,\,\mathbf{jk}=-\mathbf{kj}=\mathbf{i}. \] The concept was originally invented by Hamilton in 1843, extending the complex numbers to four-dimensional space.
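The multiplication table above determines the Hamilton product completely and is easy to encode; a minimal Python sketch (the 4-tuple layout and the helper name \texttt{qmul} are our own conventions, not from the text):

```python
# Hamilton product of two quaternions q = q0 + q1*i + q2*j + q3*k,
# encoded as 4-tuples (q0, q1, q2, q3), following the table
# i^2 = j^2 = k^2 = ijk = -1, ij = k, jk = i, ki = j.
def qmul(q, h):
    q0, q1, q2, q3 = q
    h0, h1, h2, h3 = h
    return (q0*h0 - q1*h1 - q2*h2 - q3*h3,
            q0*h1 + q1*h0 + q2*h3 - q3*h2,
            q0*h2 - q1*h3 + q2*h0 + q3*h1,
            q0*h3 + q1*h2 - q2*h1 + q3*h0)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k               # ij = k
assert qmul(j, i) == (0, 0, 0, -1)   # ji = -k: the product is non-commutative
```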
Quaternions have shown advantages over real-valued vectors in physics and engineering applications for their powerful modeling of rotation and orientation. Orientation can be defined as a set of parameters that relates the angular position of a frame to another reference frame. There are numerous methods for describing this relationship; some are easier to visualize than others, and each has its own limitations. Among them, Euler angles and quaternions are commonly used. We give an example from \cite{AHS, AHS1} for illustration. Attitude and Heading Sensors (AHS) from CH Robotics can provide orientation information using both Euler angles and quaternions. Compared to quaternions, Euler angles are simple and intuitive (see Figures \ref{fig1},\ref{fig2},\ref{fig3} and \ref{fig4}). The complete rotation matrix for moving from the inertial frame to the body frame is given by the product of three matrices, \[ R_{I}^{B}(\phi,\theta,\psi)=R_{V_2}^{B}(\phi)R_{V_1}^{V_2}(\theta)R_{I}^{V_1}(\psi), \] where $R_{V_2}^{B}(\phi),R_{V_1}^{V_2}(\theta),R_{I}^{V_1}(\psi)$ are the rotation matrices moving from the vehicle-2 frame to the body frame,
vehicle-1 frame to vehicle-2 frame, inertial frame to vehicle-1 frame, respectively. \begin{figure}
\caption{Inertial Frame}
\label{fig1}
\end{figure} \begin{figure}
\caption{ Vehicle-1 Frame.}
\label{fig2}
\end{figure} \begin{figure}
\caption{ Vehicle-2 Frame.}
\label{fig3}
\end{figure} \begin{figure}
\caption{ Body Frame.}
\label{fig4}
\end{figure} \begin{figure}
\caption{Gimbal Lock.}
\label{fig5}
\end{figure}
On the other hand, Euler angles are limited by a phenomenon called ``gimbal lock'', which occurs when the pitch angle reaches 90 degrees (see Figure \ref{fig5}). An orientation sensor that uses Euler angles will fail to produce reliable estimates whenever the pitch angle approaches 90 degrees. This is a serious shortcoming of Euler angles and can only be avoided by switching to a different representation method. Quaternions provide an alternative measurement technique that does not suffer from gimbal lock. Therefore, all CH Robotics attitude sensors use quaternions, so that the output is always valid even when Euler angles are not.
The attitude quaternion estimated by CH Robotics orientation sensors encodes rotation from the ``inertial frame'' to the sensor ``body frame''. The inertial frame is an Earth-fixed coordinate frame defined so that the x-axis points north, the y-axis points east, and the z-axis points down, as shown in Figure \ref{fig1}. The sensor body frame is a coordinate frame that remains aligned with the sensor at all times. Unlike Euler angle estimation, only the body frame and the inertial frame are needed when quaternions are used for estimation.
Let the vector $q=(q_0, q_1, q_2, q_3)^{T}$ be defined as the unit-vector quaternion encoding rotation from the inertial frame to the body frame of the sensor, where $T$ is the vector transpose operator. The elements $q_1, q_2$, and $q_3$ are the ``vector part" of the quaternion, and can be thought of as a vector about which rotation should be performed. The element $q_0$ is the ``scalar part" that specifies the amount of rotation that should be performed about the vector part. Specifically, if $\theta$ is the angle of rotation and the vector $e=(e_x,e_y,e_z)$ is a unit vector representing the axis of rotation, then the quaternion elements are defined as \[ q = \left ( \begin{array}{c} q_0 \\ q_1 \\ q_2 \\ q_3 \end{array} \right ) =\left ( \begin{array}{c} \cos (\theta/2) \\ e_x \sin (\theta/2) \\ e_y \sin (\theta/2) \\ e_z \sin (\theta/2) \end{array} \right ). \]
In fact, Euler's theorem states that given two coordinate systems, there is one invariant axis (namely, Euler axis), along which measurements are the same in both coordinate systems. Euler's theorem also shows that it is possible to move from one coordinate system to the other through one rotation $\theta$ about that invariant axis. Quaternionic representation of the attitude is based on Euler's theorem. Given a unit vector $e=(e_x,e_y,e_z)$ along the Euler axis, the quaternion is defined to be \[ q=\left ( \begin{array}{c} \cos (\theta/2) \\ e \sin (\theta/2) \end{array} \right ). \]
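The axis-angle formula above translates directly into code; a minimal sketch (the function name is our own, not from the text):

```python
import math

# Axis-angle to unit quaternion, following q = (cos(t/2), e*sin(t/2))
# for a rotation by angle t about the unit axis e = (ex, ey, ez).
def axis_angle_to_quat(e, theta):
    ex, ey, ez = e
    s = math.sin(theta / 2)
    return (math.cos(theta / 2), ex * s, ey * s, ez * s)

# A 90-degree rotation about the z-axis:
q = axis_angle_to_quat((0.0, 0.0, 1.0), math.pi / 2)
# Any rotation quaternion built this way has unit modulus.
assert abs(sum(c * c for c in q) - 1.0) < 1e-12
```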
Besides attitude orientation, quaternions have been widely applied in the life sciences, physics and engineering. For instance,
an interesting application is the description of protein structure (see Figure \ref{fig6}) \cite{Protein}, neural networks \cite{NN}, spatial rigid body transformation \cite{Body}, Frenet frames in differential geometry, fluid mechanics, quantum mechanics, and so on.
\begin{figure}
\caption{The transformation that sends one peptide plane to the next is a screw motion.}
\label{fig6}
\end{figure}
\subsubsection { Quaternion Frenet frames in differential geometry}
The differential geometry of curves \cite{DG1,DG2,DG3} traditionally begins with a vector $\vec{x}(s)$ that describes the curve parametrically as a function of $s$ that is at least three times differentiable. Then the tangent vector $\vec{T}(s)$ is well-defined at every point $\vec{x}(s)$ and one may choose two additional orthogonal vectors in the plane perpendicular to $\vec{T}(s)$ to form a complete local orientation frame. Provided the curvature of $\vec{x}(s)$ vanishes nowhere, we can choose this local coordinate system to be the Frenet frame consisting of the tangent $\vec{T}(s)$, the binormal $\vec{B}(s)$, and the principal normal $\vec{N}(s)$ (see Figure \ref{fig7}), which are given in terms of the curve itself by these expressions: \[ \begin{array}{lll}
\vec{T}(s)&=& \displaystyle\frac{\vec{x}'(s)}{\|\vec{x}'(s)\|}, \\
\vec{B}(s)&=& \displaystyle\frac{\vec{x}'(s)\times \vec{x}''(s)}{\|\vec{x}'(s)\times \vec{x}''(s)\|}, \\ \vec{N}(s)&=& \displaystyle\vec{B}(s) \times \vec{T}(s), \end{array} \] \begin{figure}
\caption{Frenet frame for a curve.}
\label{fig7}
\end{figure} The standard frame configuration is illustrated in Figure \ref{fig7}.
The Frenet frame obeys the following differential equations (with the rows ordered as tangent, principal normal, binormal). \begin{equation} \left ( \begin{array}{c} \vec{T}'(s) \\ \vec{N}'(s)
\\ \vec{B}'(s) \end{array} \right ) = v(s)\left ( \begin{array}{ccc} 0 & \kappa(s) & 0 \\ - \kappa(s) & 0 & \tau(s)
\\ 0 & -\tau(s) & 0 \end{array} \right ) \left ( \begin{array}{c} \vec{T}(s) \\ \vec{N}(s)
\\ \vec{B}(s) \end{array} \right ) \label{F1} \end{equation}
Here $v(s) =\|\vec{x}'(s)\|$ is the scalar magnitude of the curve derivative, and the intrinsic geometry of the curve is embodied in the curvature $\kappa(s)$ and the torsion $\tau(s)$, which may be written in terms of the curve itself as \[ \begin{array}{lll}
\kappa(s)&=& \displaystyle\frac{\|\vec{x}'(s)\times \vec{x}''(s)\|}{\|\vec{x}'(s)\|^3}, \\
\tau(s)&=& \displaystyle\frac{(\vec{x}'(s)\times \vec{x}''(s))\cdot \vec{x}'''(s)}{\|\vec{x}'(s)\times \vec{x}''(s)\|^2}. \end{array} \]
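These formulas for curvature and torsion can be sanity-checked on a circular helix $\vec{x}(t)=(a\cos t, a\sin t, bt)$, whose invariants are known in closed form, $\kappa=a/(a^2+b^2)$ and $\tau=b/(a^2+b^2)$; a sketch with the derivatives written out analytically (helper names are ours):

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# Helix x(t) = (a cos t, a sin t, b t) and its first three derivatives.
a, b, t = 2.0, 1.0, 0.3
x1 = (-a*math.sin(t),  a*math.cos(t), b)    # x'
x2 = (-a*math.cos(t), -a*math.sin(t), 0.0)  # x''
x3 = ( a*math.sin(t), -a*math.cos(t), 0.0)  # x'''

c = cross(x1, x2)
kappa = norm(c) / norm(x1)**3               # ||x' x x''|| / ||x'||^3
tau = dot(c, x3) / norm(c)**2               # (x' x x'').x''' / ||x' x x''||^2

assert abs(kappa - a / (a**2 + b**2)) < 1e-12
assert abs(tau - b / (a**2 + b**2)) < 1e-12
```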
Hanson and Ma \cite{Handson} pointed out that any $3D$ coordinate frame can be expressed in quaternion form. The frame evolution $q(t)=(q_0(t),q_1(t),q_2(t),q_3(t))^T$ then satisfies a first-order quaternion differential equation \[ q'(t)=\frac{1}{2}vK q(t) \] with \[ K=\left ( \begin{array}{cccc} 0 & -\tau & 0 & -\kappa \\ \tau &0 & \kappa &0
\\ 0 & -\kappa & 0 & \tau
\\ \kappa & 0 & -\tau & 0 \end{array} \right ). \]
\subsubsection { QDEs appear in kinematic modelling and attitude dynamics} The Global Positioning System (GPS) has developed rapidly in recent years and is indispensable for navigation and guidance. GPS relies on attitude determination, which measures the orientation of a vehicle with respect to a reference frame.
Wertz \cite{Wertz} derived the general attitude motion equations for the quaternions assuming a constant rotation rate over an infinitesimal time $\Delta t$ such that \[ q_{k+1}=q_{k}+\dot{q}\Delta t \] where $q =(q_1,q_2,q_3,q_4)^T$, and the quaternion differentiated with respect to time is approximated by \[ \dot {q}=\frac{1}{2}\Omega \cdot q_k \] with the skew symmetric form of the body rotations about the reference frame as \[ \Omega=\left ( \begin{array}{cccc} 0 & \omega_z & -\omega_y & \omega_x \\ - \omega_z &0 & \omega_x & \omega_y
\\ \omega_y & -\omega_x & 0 & \omega_z
\\ -\omega_x & -\omega_y & -\omega_z & 0 \end{array} \right ) \] and $ \omega=(\omega_x, \omega_y, \omega_z)^T$ is the angular velocity of body rotation.
Another attitude-dynamics application is orientation tracking for humans and robots using inertial sensors. In \cite{Marins}, the authors designed a Kalman filter for estimating orientation. They developed a process model of a rigid body under rotational motion; the state vector of the process model consists of the angular rate $\omega$ and the parameters $n$ characterizing orientation. For human body motions, it is shown that quaternion rates are related to angular rates through the following first order quaternion differential equation: \[ \dot {n}=\frac{1}{2}n \, \omega, \] where $\omega$ is treated as a quaternion with zero scalar part.
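As an illustration of the kinematic equation $\dot{n}=\frac{1}{2}n\,\omega$, the sketch below integrates it with a simple forward-Euler scheme for a constant spin about the body z-axis and compares against the exact solution $n(t)=\cos(\omega_z t/2)+\mathbf{k}\sin(\omega_z t/2)$; the step size and helper names are our own choices, not from \cite{Marins}:

```python
import math

# Hamilton product of quaternions stored as 4-tuples (q0, q1, q2, q3).
def qmul(q, h):
    q0, q1, q2, q3 = q
    h0, h1, h2, h3 = h
    return (q0*h0 - q1*h1 - q2*h2 - q3*h3,
            q0*h1 + q1*h0 + q2*h3 - q3*h2,
            q0*h2 - q1*h3 + q2*h0 + q3*h1,
            q0*h3 + q1*h2 - q2*h1 + q3*h0)

# Integrate n' = (1/2) n * omega, where omega is a pure quaternion
# (zero scalar part) encoding a constant angular rate wz about the z-axis.
wz = 1.0
omega = (0.0, 0.0, 0.0, wz)
n = (1.0, 0.0, 0.0, 0.0)               # identity orientation at t = 0
dt, steps = 1.0e-4, 10000              # forward Euler up to t = 1

for _ in range(steps):
    dn = qmul(n, omega)
    n = tuple(c + 0.5 * dt * d for c, d in zip(n, dn))

# Exact solution at t = 1: n = cos(wz/2) + k sin(wz/2).
assert abs(n[0] - math.cos(0.5)) < 1e-3
assert abs(n[3] - math.sin(0.5)) < 1e-3
```

In practice the result is renormalized after each step, since Euler integration lets the modulus drift slowly away from one.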
Moreover, a general formulation for rigid body rotational dynamics has been developed using quaternions. Udwadia et al. \cite{Udwadia} provided a simple and direct route for obtaining Lagrange's equation describing rigid body rotational motion in terms of quaternions. They presented a second order quaternionic differential equation as follows: \[ 4E^TJE\ddot{u} + 8 \dot{E}^TJE\dot{u} +4 J_0N(\dot{u})u = 2E^T \Gamma_B- 2(\omega^T J \omega)u, \] where $u=(u_0,u_1,u_2,u_3)^T$ is the unit quaternion with $u^Tu=1$ and $E$ is the orthogonal matrix given by \[ E=\left ( \begin{array}{cccc} u_0 & u_1 & u_2 & u_3 \\ - u_1 &u_0 & u_3 & -u_2
\\ -u_2 & -u_3 & u_0 & u_1
\\ -u_3 & u_2 & -u_1 & u_0 \end{array} \right ), \] $\Gamma_B$ is the four-vector containing the components of the body torque about the body-fixed axes, and $\omega=(0,\omega_1,\omega_2,\omega_3)^T$, where $\omega_1,\omega_2,\omega_3$ are the components of the angular velocity. The inertia matrix of the rigid body is represented by $\hat{J} =\mathrm{diag}(J_1,J_2,J_3)$, and $J=\mathrm{diag}(J_0,J_1, J_2,J_3)$ is a $4\times 4$ diagonal matrix, where $J_0$ is any positive number.
\subsubsection { QDEs appear in fluid mechanics} Recently, Gibbon et al. \cite{Gibbon1,Gibbon2} used quaternion algebra to study fluid mechanics. They proved that the three-dimensional incompressible Euler equations have a natural {\em quaternionic Riccati structure} in the dependent variable: the 4-vector $\zeta$ evolves according to the {\em quaternionic Riccati equation} \[ \frac{D \zeta}{D t}+ \zeta^2 +\zeta_p=0, \] where $\zeta=(\alpha,\chi)^T$ and $\zeta_p=(\alpha_p,\chi_p)^T$ are 4-vectors, $\alpha$ is a scalar and $\chi$ is a 3-vector, and $\alpha_p$, $\chi_p$ are defined in terms of the Hessian matrix $P$ \cite{Gibbon1}.
To convince the reader that this structure is robust and no accident, they showed that the more complicated equations of ideal magneto-hydrodynamics (MHD) can also be written in an equivalent quaternionic form. Roubtsov and Roulstone \cite{RR1,RR2} also expressed nearly geostrophic flow in terms of quaternions.
\subsubsection { QDEs appear in quantum mechanics} Quaternions have been used to study quantum mechanics for a long time, and quaternionic quantum mechanics is well developed (see, for instance, Refs. \cite{Alder1,Alder2,Leo1,Leo2} and the references therein). In the standard formulation of nonrelativistic quantum mechanics, the complex wave function $\Phi(x, t)$, describing a particle without spin subjected to the influence of a real potential $V(x, t)$, satisfies the Schr\"{o}dinger equation \[
\partial_{t}\Phi(x, t)= \frac{\mathbf{i}}{{\hbar} } \Big[ \frac{ {\hbar}^2}{2m}\nabla^2 - V(x,t)\Big]\Phi(x, t). \] Alder generalized the classical mechanics to quaternionic quantum mechanics \cite{Alder2}, the anti-self-adjoint operator \[
{\cal{A}}^{V}(x, t)= \frac{\mathbf{i}}{{\hbar} } \Big[ \frac{ {\hbar}^2}{2m}\nabla^2 - V(x,t)\Big], \]
can be generalized by introducing the complex potential $W(x, t) = |W(x, t)| \exp[\mathbf{i}\theta(x, t)]$, \[
{\cal{A}}^{V,W}(x, t)= \frac{\mathbf{i}}{{\hbar} } \Big[ \frac{ {\hbar}^2}{2m}\nabla^2 - V(x,t)\Big]+ \frac{\mathbf{j}}{{\hbar} } W(x, t) , \] where the quaternionic imaginary units $\mathbf{i}, \mathbf{j}$ and $\mathbf{k}$ satisfy the following associative but noncommutative algebra: \[ \mathbf{i}^2=\mathbf{j}^2=\mathbf{k}^2=\mathbf{ijk}=-1. \] As pointed out in \cite{Leo1}, this generalization for the anti-self-adjoint Hamiltonian operator, the quaternionic wave function $\Phi(x, t)$ satisfies the following equation: \[
\partial_{t}\Phi(x, t)= \Big\{\frac{\mathbf{i}}{{\hbar} } \Big[ \frac{ {\hbar}^2}{2m}\nabla^2 - V(x,t)\Big]+ \frac{\mathbf{j}}{{\hbar} } W(x, t)\Big\} \Phi(x, t). \] Later, \cite{Leo2} studied quaternionic quantum mechanics further. The quaternionic wave function $\Phi(x, t)$, describing a particle without spin subjected to the influence of a potential $V(x, t)$, satisfies the Schr\"{o}dinger equation \[ \mathbf{i} \frac{ {\hbar}^2}{2m}\Phi_{xx}(x, t)-(\mathbf{i} V_1 + \mathbf{j} V_2 + \mathbf{k} V_3)\Phi(x, t)=\hbar \Phi_t(x, t). \] This partial differential equation can be reduced, by the well-known separation of variables \[ \Phi(x, t)=\phi(x)\exp\{-\mathbf{i} Et/{\hbar}\}, \] to the following second order quaternion differential equation \[ \mathbf{i} \frac{ {\hbar}^2}{2m}\ddot\phi(x)-(\mathbf{i} V_1 + \mathbf{j} V_2 + \mathbf{k} V_3)\phi(x)=- \phi(x)E \mathbf{i}. \]
\subsection{History and motivation}
Although the works mentioned above show that quaternionic differential equations arise in differential geometry, fluid mechanics, attitude dynamics, quantum mechanics, etc., to the authors' knowledge there has been no systematic study of the theory of quaternion-valued differential equations. Owing to the non-commutativity of the quaternion algebra, the study of quaternion differential equations is challenging and involved, and results in this area are rare. However, to understand the behavior of a complicated dynamical system more fully, further mathematical analysis is necessary. By using the real matrix representation of left/right acting quaternionic operators, Leo and Ducati \cite{Leo3} solved some simple second order quaternionic differential equations. Later, Campos and Mawhin \cite{Mawhin} studied the existence of periodic solutions of one-dimensional first order periodic {\em quaternion differential equations}. Wilczynski \cite{W} continued this line of work, paying particular attention to the existence of two periodic solutions of quaternion Riccati equations. Gasull et al. \cite{Zhang1} studied the one-dimensional quaternion autonomous homogeneous differential equation \[ \dot{q} = aq^n. \] Their results describe the phase portraits of this equation. For $n=2,3$, the existence of
periodic orbits, homoclinic loops, invariant tori and the first integrals are discussed in \cite{Zhang1}. Zhang \cite{Zhang2} studied the global structure of the one-dimensional quaternion Bernoulli equations \[ \dot{q} = aq + aq^n. \]
By using the Liouvillian theorem of integrability and the topological characterization of 2-dimensional torus: orientable compact connected surface of genus one, Zhang proved that the quaternion Bernoulli equations may have invariant tori, which possesses a full Lebesgue measure subset of $\mathbb{H}$. Moreover, if $n = 2$ all the invariant tori are full of periodic orbits; if $n = 3$ there are infinitely many invariant tori fulfilling periodic orbits and also infinitely many invariant ones fulfilling dense orbits.
As is well known, a second order quaternionic differential equation can easily be transformed into a two-dimensional system of QDEs. In fact, for a second order quaternionic differential equation \[ \frac{{\rm d}^2 x}{{\rm d} t^2} + q_1(t) \frac{{\rm d} x}{{\rm d} t } +q_2(t) x=0, \] the transformation \[ \left\{ \begin{array}{ccc} x_1& =& x \\ x_2 &= &\dot{x} \end{array} \right. \] yields \[ \left\{ \begin{array}{lll} \dot{x}_1& =& x_2 \\ \dot{x}_2 &= & - q_2(t) x_1 - q_1(t) x_2 . \end{array} \right. \] Most of the above mentioned works focused on one-dimensional QDEs and mainly discussed their qualitative properties (e.g. the existence of periodic orbits, homoclinic loops and invariant tori, integrability, the existence of periodic solutions, and so on). They did not provide an algorithm to compute the exact solutions of linear QDEs. The most basic problem is to solve the QDEs. Since quaternions are a non-commutative extension of the complex numbers, many concepts and results of ODE theory are not valid for QDEs. Up till now, there is no systematic theory or algorithm for solving QDEs. What is the superposition principle? Do the solutions of a QDE span a linear vector space? What is the structure of the general solution? How can one determine whether two solutions are independent? How should the Wronskian determinant be defined? How can one construct a fundamental matrix of a linear QDE? In particular, there is as yet no algorithm to compute the fundamental matrix of linear QDEs by means of eigenvalue and eigenvector theory.
In this paper, we establish a basic framework for nonautonomous linear homogeneous two-dimensional QDEs of the form \[ \left\{ \begin{array}{ccc} \dot{x}_1& =& a_{11}(t) x_1 +a_{12}(t) x_2 \\ \dot{x}_2 &= & a_{21}(t) x_1 +a_{22}(t) x_2 , \end{array} \right. \] where all the coefficients $a_{11},a_{12},a_{21},a_{22}$ are quaternion-valued functions. A systematic basic theory of the solutions to linear QDEs is given. We establish the superposition principle and the structure of the general solution, and we provide two algorithms to compute the fundamental matrix of linear QDEs. For ease of exposition we focus on the 2-dimensional system; however, our basic results extend easily to arbitrary $n$-dimensional QDEs.
In the following, we highlight four profound differences between QDEs and ODEs.
1. Owing to the non-commutativity of the quaternion algebra, the algebraic structure of the solutions to QDEs is {\bf not a linear vector space}; it is actually a {\bf right free module}. Consequently, to determine whether two solutions of a QDE are linearly independent or dependent, we have to distinguish left dependence (independence) from right dependence (independence), whereas for classical ODEs the two notions coincide.
2. The {\em Wronskian} of ODEs is defined by the Cayley determinant. However, since the Cayley determinant depends on the choice of the $i$-th row and $j$-th column along which a quaternionic matrix is expanded, different expansions can lead to different results, so the {\em Wronskian} of ODEs cannot be extended to QDEs directly. It is necessary to define the {\em Wronskian} of QDEs by a novel method (see Section 4): we use the double determinant $\rm {ddet}M=\displaystyle\frac{1}{2}\rm{rddet}(MM^{+})$ or $\displaystyle\frac{1}{2}\rm{rddet}(M^{+}M)$ to define the {\em Wronskian} of QDEs.
3. The Liouville formulas for QDEs and ODEs are different.
4. For QDEs, it is necessary to treat the left and right eigenvalue problems separately. This is a major difference between QDEs and ODEs.
\subsection{Outline of the paper}
In the next section, the quaternion algebra is introduced. In Section 3, existence and uniqueness of the solutions to QDEs are proved. Section 4 is devoted to defining the Wronskian determinant for QDEs, proving the Liouville formula, and establishing the superposition principle and the structure of the general solution. In Section 5, the fundamental matrix and the solutions of linear QDEs are given. Section 6 presents two algorithms for computing the fundamental matrix, together with examples showing their feasibility. Finally, a conclusion and discussion end the paper.
\section{\bf Quaternion algebra}
The quaternions were discovered in 1843 by Sir William Rowan Hamilton \cite{Hamilton}. We adopt the standard notation in \cite{Chen1,Hamilton}. We denote as usual a quaternion $q=(q_0, q_1, q_2, q_3)^T\in \mathbb{R}^4$ by \[ q= q_0 + q_1 \mathbf{i} + q_2 \mathbf{j} + q_3 \mathbf{k}, \] where $q_0, q_1, q_2, q_3$ are real numbers and $\mathbf{i},\mathbf{j},\mathbf{k}$ are symbols satisfying the multiplication table formed by \[ \mathbf{i}^2=\mathbf{j}^2=\mathbf{k}^2=\mathbf{ijk}=-1. \] We denote the set of quaternions by $ \mathbb{H}$.
For quaternions $q$ and $h=h_0 + h_1 \mathbf{i} + h_2 \mathbf{j} + h_3 \mathbf{k}$, we define the {\em inner product} and the {\em modulus}, respectively, by \[ < q, h>= q_0 h_0+ q_1 h_1 + q_2 h_2 +q_3 h_3, \] \[
|q|= \sqrt {< q, q>}=\sqrt {q_0^2 + q_1^2 + q_2^2 + q_3^2 }, \] and the real and imaginary parts by \[ \Re q= q_0,\,\,\,\, \Im q= q_1 \mathbf{i} + q_2 \mathbf{j} + q_3 \mathbf{k}. \]
The conjugate is \[ \overline {q}= q_0 - q_1 \mathbf{i} - q_2 \mathbf{j} - q_3 \mathbf{k}. \] Thus, we have \[ \overline {q h}= \overline{h}\, \overline {q}
\,\,\,\,\, \mbox{and} \,\,\,\, q^{-1}=\frac{\overline{q}} {|q|^2},\,\,\,\,q\neq0. \] For any $q,p\in \mathbb{H}$, we have \[
|q+p|\leq|q|+|p|,\,\,\,|q\cdot p|=|q|\cdot|p|. \] Let $\psi: \mathbb{R} \rightarrow \mathbb{H}$ be a quaternion-valued function defined on $\mathbb{R}$ ($t\in \mathbb{R}$ is a real variable). We denote the set of such quaternion-valued functions by $\mathbb{H}\otimes \mathbb{R}$. Denote $
\mathbb{H}^2= \{\Psi \big| \Psi = \left ( \begin{array}{l} \psi_1 \\ \psi_2 \end{array} \right ) \}$.
A two-dimensional quaternion-valued function of the real variable $t$ is written as $\Psi (t)= \left ( \begin{array}{l} \psi_1(t) \\ \psi_2(t) \end{array} \right ) \in \mathbb{H}^2 \otimes \mathbb{R},$ where $\psi_i (t)= \psi_{i0} (t) + \psi_{i1} (t) \mathbf{i}
+ \psi_{i2}(t) \mathbf{j} + \psi_{i3} (t) \mathbf{k} $. The first derivative of a two-dimensional quaternionic function with respect to the real variable $t$ is denoted by \[ \dot \Psi (t) = \left ( \begin{array}{l} \dot {\psi}_1 (t) \\ \dot {\psi}_2 (t) \end{array} \right ),\,\,\,
\dot {\psi}_i:=\frac { {\rm d} \psi_i } { {\rm d} t } =\frac { {\rm d} \psi_{i0} } { {\rm d} t } +\frac { {\rm d} \psi_{i1} } { {\rm d} t } \mathbf{i}
+\frac { {\rm d} \psi_{i2} } { {\rm d} t } \mathbf{j} +\frac { {\rm d} \psi_{i3} } { {\rm d} t } \mathbf{k} . \]
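The conjugation and norm identities above can be sanity-checked numerically; a minimal sketch (helper names are our own), which also confirms the multiplicativity $|qh|=|q|\,|h|$ underlying the product bound:

```python
import math

# Quaternions as 4-tuples (q0, q1, q2, q3); Hamilton product as before.
def qmul(q, h):
    q0, q1, q2, q3 = q
    h0, h1, h2, h3 = h
    return (q0*h0 - q1*h1 - q2*h2 - q3*h3,
            q0*h1 + q1*h0 + q2*h3 - q3*h2,
            q0*h2 - q1*h3 + q2*h0 + q3*h1,
            q0*h3 + q1*h2 - q2*h1 + q3*h0)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qnorm(q):
    return math.sqrt(sum(c * c for c in q))

def qinv(q):
    n2 = sum(c * c for c in q)          # |q|^2
    return tuple(c / n2 for c in qconj(q))

q = (1.0, 2.0, -1.0, 0.5)
h = (0.5, -1.0, 3.0, 2.0)

# Conjugation reverses products: conj(q h) = conj(h) conj(q).
lhs, rhs = qconj(qmul(q, h)), qmul(qconj(h), qconj(q))
assert all(abs(u - v) < 1e-9 for u, v in zip(lhs, rhs))

# q q^{-1} = 1 for q != 0.
e = qmul(q, qinv(q))
assert all(abs(u - v) < 1e-9 for u, v in zip(e, (1.0, 0.0, 0.0, 0.0)))

# The norm is multiplicative: |q h| = |q| |h|.
assert abs(qnorm(qmul(q, h)) - qnorm(q) * qnorm(h)) < 1e-9
```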
For any $A(t)=(a_{ij}(t))_{n\times n}\in \mathbb{H}^{n\times n} \otimes \mathbb{R}$,
\[
\frac{ {\rm d} A(t)} {{\rm d} t}=\left(\frac{ {\rm d} a_{ij}(t)}{{\rm d} t} \right).
\]
For a quaternion-valued function $f(t)=f_{0}(t)+f_{1}(t)\mathbf{i}+f_{2}(t)\mathbf{j}+f_{3}(t)\mathbf{k}$ defined on an interval $[a,b]\subset \mathbb{R}$, if each $f_{l}(t)$ $(l=0,1,2,3)$ is integrable, then $f(t)$ is integrable and \[ \int^{b}_{a}f(t)dt=\int^{b}_{a}f_{0}(t)dt+\int^{b}_{a}f_{1}(t)dt\,\mathbf{i}+\int^{b}_{a}f_{2}(t)dt\,\mathbf{j}+\int^{b}_{a}f_{3}(t)dt\,\mathbf{k}. \] Moreover,
\[
\int_{a}^{b} A(t)dt=\left( \int_{a}^{b} a_{ij}(t)dt \right).
\] We define the norms for the quaternionic vector and matrix.
For any $A=(a_{ij})_{n\times n}\in \mathbb{H}^{n\times n}$ and ${x}=(x_{1},x_{2},\cdots,x_{n})^{T}\in \mathbb{H}^{n}$, we define
\[
\|A\|=\sum\limits^{n}_{i,j=1}|a_{ij}|,\,\,\,\| {x}\|=\sum\limits^{n}_{i=1}|x_{i}|.
\] For $A,B\in \mathbb{H}^{2\times 2}$ and $ {x}, {y}\in \mathbb{H}^{2}$, we see that \[ \begin{array}{ccc}
\|AB\|&\leq&\|A\|\cdot\|B\|, \\
\|A {x}\|&\leq&\|A\| \cdot\| {x}\|, \\
\|A+B\|&\leq&\|A\|+\|B\|, \\
\| {x}+ {y}\|&\leq&\| {x}\|+ \| {y}\|. \end{array} \]
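The entrywise norms just defined are easy to exercise in code; the sketch below (our own representation: quaternions as 4-tuples, matrices as nested lists) checks the bound $\|Ax\|\leq\|A\|\,\|x\|$ on a small example:

```python
import math

# Quaternions as 4-tuples (q0, q1, q2, q3); Hamilton product as before.
def qmul(q, h):
    q0, q1, q2, q3 = q
    h0, h1, h2, h3 = h
    return (q0*h0 - q1*h1 - q2*h2 - q3*h3,
            q0*h1 + q1*h0 + q2*h3 - q3*h2,
            q0*h2 - q1*h3 + q2*h0 + q3*h1,
            q0*h3 + q1*h2 - q2*h1 + q3*h0)

def qabs(q):
    return math.sqrt(sum(c * c for c in q))

def matvec(A, x):
    # (A x)_i = sum_j a_ij * x_j, with the quaternionic entries on the left.
    out = []
    for row in A:
        acc = (0.0, 0.0, 0.0, 0.0)
        for a, xi in zip(row, x):
            p = qmul(a, xi)
            acc = tuple(u + v for u, v in zip(acc, p))
        out.append(acc)
    return out

A = [[(1.0, 1.0, 0.0, 0.0), (0.0, 0.0, 2.0, 0.0)],
     [(0.0, 0.0, 0.0, 1.0), (1.0, -1.0, 1.0, -1.0)]]
x = [(0.0, 1.0, 0.0, 0.0), (2.0, 0.0, -1.0, 0.0)]

# ||A|| = sum |a_ij| and ||x|| = sum |x_i|, as defined in the text.
norm_A = sum(qabs(a) for row in A for a in row)
norm_x = sum(qabs(xi) for xi in x)
norm_Ax = sum(qabs(yi) for yi in matvec(A, x))
assert norm_Ax <= norm_A * norm_x + 1e-12
```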
\section{Existence and Uniqueness of Solution to QDEs}
Consider the linear system \begin{equation} \dot \Psi(t) = A(t)\Psi(t)\,\,\,\, or \,\,\,\,\,\, \left ( \begin{array}{l} \dot {\psi}_1 (t) \\ \dot {\psi}_2 (t) \end{array} \right )= \left ( \begin{array}{ll} {a} _{11}(t) & {a} _{12}(t)
\\
{a} _{21}(t) & {a} _{22}(t) \end{array} \right ) \left ( \begin{array}{l} {\psi}_1 (t) \\
{\psi}_2 (t) \end{array} \right ), \label{2.1} \end{equation}
where $ \Psi (t) \in \mathbb{H}^2 \otimes \mathbb{R}$ and $A(t) \in \mathbb{H}^{2\times 2} \otimes \mathbb{R}$ is a continuous $2 \times 2$ quaternion-valued matrix function on the interval $I=[a,b]\subset \mathbb{R}$.
System (\ref{2.1}) is associated with the following initial value problem \begin{equation} \Psi (t_0) = \xi, \,\,\,\,\,\, \xi \in \mathbb{H}^2 . \label{2.3}
\end{equation} \indent To discuss the existence and uniqueness of the solutions to QDEs (\ref{2.1}), we need to introduce the limit and convergence of quaternion-valued sequences. Let $\{x^{k}\}\in \mathbb{H}^n$ be a sequence of quaternionic vectors, $x^k= (x^k_1,x^k_2,\cdots,x^k_n)^T$, where $x^k_{i}=x^k_{i0}+ x^k_{i1}\mathbf{i} + x^k_{i2}\mathbf{j} + x^k_{i3}\mathbf{k}$ and the $x^k_{il}$ $(i = 1,2,\cdots,n;\ l = 0,1,2,3)$ are real numbers. If the sequences of real numbers $\{x^k_{il}\}$ are convergent, then the sequence of quaternionic vectors $\{x^{k}\}$ is said to be convergent, and we define
\[
\lim\limits_{k\rightarrow \infty}x^k_{i}=\lim\limits_{k\rightarrow \infty}x_{i0}^{k}+\lim\limits_{k\rightarrow \infty}x_{i1}^{k}\mathbf{i}+\lim\limits_{k\rightarrow \infty}x_{i2}^{k}\mathbf{j}+\lim\limits_{k\rightarrow \infty}x_{i3}^{k}\mathbf{k}.
\] Let $\{x^{k}(t)\}\in \mathbb{H}^n \otimes \mathbb{R}$ be a sequence of quaternion-valued vector functions defined on $\mathbb{R}$, $x^k(t)= (x^k_1(t),x^k_2(t),\cdots,x^k_n(t))^T$, where $x^k_{i}(t)=x^k_{i0}(t)+ x^k_{i1}(t)\mathbf{i} + x^k_{i2}(t)\mathbf{j} + x^k_{i3}(t)\mathbf{k}$ and the $x^k_{il}(t)$ $(i = 1,2,\cdots,n;\ l = 0,1,2,3)$ are real-valued functions. If the sequences of real-valued functions $\{x^k_{il}(t)\}$ are convergent (uniformly convergent), then the sequence of quaternion-valued vector functions $\{x^{k}(t)\}$ is said to be convergent (uniformly convergent). Moreover,
\[
\lim\limits_{k\rightarrow \infty}x^k_{i}(t)=\lim\limits_{k\rightarrow \infty}x_{i0}^{k}(t)+\lim\limits_{k\rightarrow \infty}x_{i1}^{k}(t)\mathbf{i}+\lim\limits_{k\rightarrow \infty}x_{i2}^{k}(t)\mathbf{j}+\lim\limits_{k\rightarrow \infty}x_{i3}^{k}(t)\mathbf{k}.
\] If the sequence of partial sums $S_n(t) = \sum\limits ^{n}_{k=1} {x}^{k}(t)$ is convergent (uniformly convergent), then the series $\sum\limits ^{\infty}_{k=1} {x}^{k}(t)$ is said to be convergent (uniformly convergent). It is not difficult to prove that if there exist positive real constants $\delta_k$ such that
\[
\| {x}^{k}(t)\|\leq \delta_{k}, \,\,(a\leq t \leq b)
\] and the real series $\sum\limits ^{\infty}_{k=1}\delta_{k}$ is convergent, then the quaternion-valued series
$\sum\limits ^{\infty}_{k=1} {x}^{k}(t)$ is absolutely convergent on the interval $[a,b]$ (a quaternionic analogue of the Weierstrass $M$-test); in particular, it is uniformly convergent on $[a,b]$. Moreover, if the sequence of quaternion-valued vector functions $\{x^k(t)\}$ is uniformly convergent on the interval $[a,b]$, then \[
\lim_{k\rightarrow \infty}\int^{b}_{a} {x}^{k}(t)dt=\int^{b}_{a}\lim_{k\rightarrow \infty} {x}^{k}(t)dt.
\] Based on the above concepts of convergence for quaternion-valued series, by arguments similar to those for Theorem 3.1 in Zhang et al.~\cite{ZWN}, we have \begin{theorem}\label{Th3.1}
The initial value problem (\ref{2.1})-(\ref{2.3}) has a unique solution. \end{theorem}
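For the constant-coefficient scalar QDE $\dot{x}=ax$, the Picard iterates underlying Theorem \ref{Th3.1} telescope into the partial sums of $\exp(at)\xi$. The following Python sketch is purely illustrative (not part of the formal development): quaternions are represented as $(w,x,y,z)$ 4-tuples under the Hamilton product, and the iterate is compared with the known solution $e^{\mathbf{i}t}=\cos t+\mathbf{i}\sin t$.

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def qscale(q, s):
    return tuple(s * c for c in q)

def picard(a, xi, t, n=25):
    # n Picard iterates for x' = a x, x(0) = xi collapse into the
    # partial sum  sum_{k<n} (a t)^k / k! * xi  (left multiplication by a)
    at = qscale(a, t)
    term, total = xi, xi
    for k in range(1, n):
        term = qscale(qmul(at, term), 1.0 / k)
        total = qadd(total, term)
    return total

QI = (0.0, 1.0, 0.0, 0.0)                     # the quaternion unit i
x1 = picard(QI, (1.0, 0.0, 0.0, 0.0), 1.0)    # should approximate e^{i*1}
```

With 25 terms the truncation error at $t=1$ is far below double precision, and the iterate agrees with $(\cos 1, \sin 1, 0, 0)$.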
\section{Wronskian and structure of general solution to QDEs}
To study the superposition principle and the structure of the general solution to QDEs, we need to introduce the concept of a right module. Owing to the non-commutativity of the quaternion algebra, we will see that the algebraic structure of the solutions to QDEs is {\bf not a linear vector space}. It is actually a right module. This is the most significant difference between QDEs and ODEs.
Now we introduce some definitions and known results on groups, rings and modules from abstract algebra (e.g. \cite{abstract}). An {\em abelian group} is a set $\mathbb{A}$ together with an operation $\cdot$ that combines any two elements $a$ and $b$ to form another element, denoted $a \cdot b$. The symbol $\cdot$ is a general placeholder for a concretely given operation. The set and operation, $(\mathbb{A}, \cdot)$, must satisfy five requirements known as the abelian group axioms: \begin{description}
\item[(i)] {\em Closure:} For all $a, b$ in $\mathbb{A}$, the result of the operation $a \cdot b$ is also in $\mathbb{A}$.
\item[(ii)] {\em Associativity:} For all $a, b$ and $c$ in $\mathbb{A}$, the equation $(a \cdot b) \cdot c = a\cdot (b \cdot c)$ holds.
\item[(iii)] { \em Identity element:} There exists an element $e$ in $\mathbb{A}$ such that for all elements $a$ in $\mathbb{A}$, the equation $e \cdot a = a \cdot e = a$ holds.
\item[(iv)] { \em Inverse element:} For each $a$ in $\mathbb{A}$, there exists an element $b$ in $\mathbb{A}$ such that $a \cdot b = b \cdot a = e$, where $e$ is the identity element.
\item[(v)] { \em Commutativity:} For all $a, b$ in $\mathbb{A}$, $a \cdot b = b \cdot a$. \end{description}
More compactly, an abelian group is a commutative group.
A {\em ring} is a set $(\cal{R},+, \cdot)$ equipped with binary operations $+$ and $\cdot$ satisfying the following three sets of axioms, called the ring axioms: \begin{description}
\item[(i)] $\cal{R}$ is an abelian group under addition, meaning that
$\bullet $ $(a + b) + c = a + (b + c)$ for all $a, b, c$ in $\cal{R}$ ($+$ is associative).
$\bullet $ $a + b = b + a$ for all $a, b$ in $\cal{R}$ ($+$ is commutative).
$\bullet $ There is an element $0$ in $\cal{R}$ such that $a + 0 = a$ for all a in $\cal{R}$ ($0$ is the additive identity).
$\bullet $ For each $a$ in $\cal{R}$ there exists $-a$ in $\cal{R}$ such that $a + (-a) = 0$ ($-a$ is the additive inverse of $a$).
\item[(ii)] $\cal{R}$ is a monoid under multiplication, meaning that:
$\bullet $ $(a \cdot b) \cdot c = a \cdot (b \cdot c)$ for all $a, b, c$ in $\cal{R}$ ($\cdot$ is associative).
$\bullet $ There is an element $1$ in $\cal{R}$ such that $a \cdot 1 = a$ and $1 \cdot a = a$ for all $a$ in $\cal{R}$ ($1$ is the multiplicative identity).
\item[(iii)] Multiplication is distributive with respect to addition:
$\bullet $ $a \cdot (b + c) = (a \cdot b) + (a \cdot c)$ for all $a, b, c$ in $\cal{R}$ (left distributivity).
$\bullet $ $(b + c) \cdot a = (b \cdot a) + (c \cdot a)$ for all $a, b, c$ in $\cal{R}$ (right distributivity). \end{description}
$(\cal{F},+, \cdot)$ is said to be a field if $(\cal{F},+, \cdot)$ is a ring and ${\cal{F}}^{*}={\cal{F}}\setminus\{0\}$ is a commutative group with respect to the multiplication $\cdot$. The most commonly used fields are the field of real numbers and the field of complex numbers.
A ring in which division is possible but commutativity is not assumed (such as the quaternions) is called a division ring. In fact, a field is a nonzero commutative division ring.
\begin{remark}\label{Rem4.1} Different from the real or complex numbers, the quaternions form a division ring, not a field, due to the non-commutativity. Thus, $\mathbb{H}^2$ over the quaternions is not a linear vector space. We need to introduce the concept of a {\bf right module}. \end{remark}
A module over a ring $\cal{R}$ is a generalization of an abelian group. One may view an $\cal{R}$-module as a ``vector space over $\cal{R}$''. A {\em right $\cal{R}$-module} is an additive abelian group $\mathbb{A}$ together with a function $\mathbb{A}\times {\cal{R}} \rightarrow \mathbb{A}$ (written $(a, r) \mapsto a\cdot r$) such that for all $a, b \in \mathbb{A}$ and $r,s \in \cal{R}$: \begin{description} \item[(i)] $(a + b)\cdot r = a \cdot r + b \cdot r$. \item[(ii)] $a \cdot (r + s) = a \cdot r + a \cdot s$. \item[(iii)] $(a\cdot r)\cdot s = a\cdot (rs)$. \end{description}
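For the concrete case ${\cal R}=\mathbb{H}$ and $\mathbb{A}=\mathbb{H}^2$ with the componentwise right action, axioms (i)-(iii) can be spot-checked numerically. The following Python sketch is purely illustrative (quaternions stored as $(w,x,y,z)$ 4-tuples; `qmul` is the Hamilton product):

```python
import random

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def vadd(u, v):          # addition in H^2
    return tuple(qadd(a, b) for a, b in zip(u, v))

def vact(u, r):          # right action of H on H^2: (u1, u2) -> (u1 r, u2 r)
    return tuple(qmul(a, r) for a in u)

def close(u, v, tol=1e-9):
    return all(abs(x - y) < tol for a, b in zip(u, v) for x, y in zip(a, b))

random.seed(0)
rq = lambda: tuple(random.uniform(-1, 1) for _ in range(4))
u, v = (rq(), rq()), (rq(), rq())
r, s = rq(), rq()

ax1 = close(vact(vadd(u, v), r), vadd(vact(u, r), vact(v, r)))  # (u+v)r = ur+vr
ax2 = close(vact(u, qadd(r, s)), vadd(vact(u, r), vact(u, s)))  # u(r+s) = ur+us
ax3 = close(vact(vact(u, r), s), vact(u, qmul(r, s)))           # (ur)s = u(rs)
```

Axiom (iii) holds exactly because quaternion multiplication is associative; the tolerance only absorbs floating-point rounding.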
The multiplication $xr$, where $x\in\mathbb{A}$ and $r\in\cal{R}$, is called the right action of $\cal{R}$ on the abelian group $\mathbb{A}$. We write $\mathbb{A}_{\cal{R}}^R$ to indicate that $\mathbb{A}$ is a right $\cal{R}$-module (over the ring $\cal{R}$), and similarly we denote a left $\cal{R}$-module by $\mathbb{A}_{\cal{R}}^L$.
\textbf{Right Submodules}. Let $\mathbb{A}_{\cal{R}}^R$ be a right $\cal{R}$-module. A subset $\mathbb{N}^{R}\subset\mathbb{A}_{\cal{R}}^R$ is called a right $\cal{R}$-submodule if the following conditions are satisfied: \begin{description} \item[(i)] $\mathbb{N}^{R}$ is a subgroup of the (additive, abelian) group $\mathbb{A}^{R}$. \item[(ii)] $xr$ is in $\mathbb{N}^{R}$ for all $r\in \cal{R}$ and $x\in\mathbb{N}^{R}$. \end{description} Similarly, we can define the concept of left submodules.
\textbf{Direct sums}. If $\mathbb{N}_{1}, \cdots, \mathbb{N}_{m}$ are submodules of a module $\mathbb{A}_{\cal{R}}$ over a ring $\cal{R}$, their sum $\mathbb{N}_{1}+\cdots+\mathbb{N}_{m}$ is defined to be the set of all sums of elements of the $\mathbb{N}_{i}$, namely, \[
\mathbb{N}_{1}+\cdots+\mathbb{N}_{m}=\{x_{1}+\cdots+x_{m}\mid x_{i}\in\mathbb{N}_{i}\ \text{for each}\ i\}; \] this is a submodule of $\mathbb{A}$. We say $\mathbb{N}_{1}+\cdots+\mathbb{N}_{m}$ is a direct sum if this representation is unique for each $x\in\mathbb{N}_{1}+\cdots+\mathbb{N}_{m}$; that is, given two representations \[ x_{1}+\cdots+x_{m}=x=y_{1}+\cdots+y_{m}, \] where $x_{i}\in\mathbb{N}_{i}$ and $y_{i}\in\mathbb{N}_{i}$ for each $i$, necessarily $x_{i}=y_{i}$ for each $i$.
\begin{lemma}\label{Lemma4.0}
The following conditions are equivalent for a set $\mathbb{N}_{1}, \cdots, \mathbb{N}_{m}$ of submodules of a module $\mathbb{A}$: \begin{description} \item[(i)] The sum $\mathbb{N}_{1}+\cdots+\mathbb{N}_{m}$ is direct. \item[(ii)] $(\sum\limits_{i\neq k}\mathbb{N}_{i})\cap \mathbb{N}_{k}=0$ for each $k$. \item[(iii)] $(\mathbb{N}_{1}+\cdots+\mathbb{N}_{k-1})\cap \mathbb{N}_{k}=0$ for each $k\geq2$. \item[(iv)] If $x_{1}+\cdots+x_{m}=0$, where $x_{i}\in\mathbb{N}_{i}$ for each $i$, then each $x_{i}=0$. \end{description} If $\mathbb{N}_{1}+\cdots+\mathbb{N}_{m}$ is a direct sum, we denote it by $\mathbb{N}_{1}\oplus\cdots\oplus\mathbb{N}_{m}$. \end{lemma}
Given elements $x_{1}, \cdots, x_{k}$ in a right module $\mathbb{A}_{\cal{R}}^{R}$ over a ring $\cal{R}$, a sum $x_{1}r_{1}+ \cdots+x_{k}r_{k}$, $r_{i}\in{\cal{R}}$, is called a right linear combination of the $x_{i}$ with coefficients $r_{i}$. The sum of all the right submodules $x_{i}{\cal{R}},\,(i=1,2,\cdots,k)$ is a right submodule \[
x_1\mathcal R+\cdots+ x_k\mathcal R=\{x_1 r_1+\cdots+x_k r_k \mid r_i \in \mathcal R\} \] consisting of all such right linear combinations. If $\mathbb{A}_{\cal{R}}^{R}=x_1\mathcal R+\cdots+x_k\mathcal R$, we say that $\{x_1, \cdots, x_k\}$ is a generating set for the right module $\mathbb{A}_{\cal{R}}^{R}$. Similarly, the sum of all the left submodules ${\cal{R}}x_{i},\,(i=1,2,\cdots,k)$ is a left submodule \[
{\mathcal R} x_1 + \cdots + {\mathcal R} x_k =\{ r_1 x_1+\cdots+r_k x_k \mid r_i \in \mathcal R\} \] consisting of all such left linear combinations. If $\mathbb{A}_{\cal{R}}^{L}={\mathcal R} x_1 + \cdots+{\mathcal R} x_k $, we say that $\{x_1, \cdots, x_k\}$ is a generating set for the left module $\mathbb{A}_{\cal{R}}^{L}$.
The elements $x_{1}, \cdots, x_{k}\in \mathbb{A}_{\cal{R}}^{R}$ are called independent if \[ x_1 r_1+ \cdots+x_k r_k=0,\ r_i\in\mathcal R~\text{implies that}~r_1=\cdots=r_k=0. \]
For the sake of convenience, we call this {\em right} independence.
A subset $\{x_{1}, \cdots, x_{k}\}$ is called a basis of a right module $\mathbb{A}_{\cal{R}}^{R}$ if it is right independent and generates $\mathbb{A}_{\cal{R}}^{R}$. Similarly, the elements $x_{1}, \cdots, x_{k}\in \mathbb{A}_{\cal{R}}^{L}$ are called {\em left independent} if \[ r_1 x_1+ \cdots+r_k x_k=0,r_i\in\mathcal R~\text{implies that}~r_1=\cdots=r_k=0. \]
A subset $\{x_{1}, \cdots, x_{k}\}$ is called a basis of a left module $\mathbb{A}_{\cal{R}}^{L}$ if it is left independent and generates $\mathbb{A}_{\cal{R}}^{L}$. A module that has a finite basis is called a {\bf free module}.
From the above discussion, we know that the quaternionic vector space $\mathbb{H}^{n}$ over the division ring $\mathbb{H}$ is a right or left module.
Thus, the quaternionic vectors $x_1,x_2,\cdots,x_n$, $x_i\in \mathbb{H}^{n}$, are right independent if \[ x_1 r_1+ \cdots+x_n r_n=0,\ r_i\in\mathbb{H}~\text{implies that}~r_1=\cdots=r_n=0. \] On the contrary, if there is a nonzero $r_i$, $i=1,2,\cdots,n$, such that $x_1 r_1+ \cdots+x_n r_n=0$, then $x_1,x_2,\cdots,x_n$ are right dependent. Similarly, we can define left independence and dependence for quaternionic vectors in $\mathbb{H}^{n}$ over the division ring $\mathbb{H}$.
For ease of understanding, we focus on the two-dimensional system (\ref{2.1}).
\begin{definition}\label{Def4.1}
For two quaternion-valued functions $x_1(t)$ and $x_2(t)$ defined on the real interval $I$, if there are two quaternionic constants $q_1,q_2\in \mathbb{H}$ (not both zero) such that \[ x_1(t) q_1 + x_2(t) q_2 =0 \quad \text{for any } t\in I, \] then $x_1(t)$ and $x_2(t)$ are said to be right linearly {\em dependent} on $I$. On the contrary, $x_1(t)$ and $x_2(t)$ are said to be right linearly {\em independent} on $I$ if the algebraic equation \[ x_1(t) q_1 + x_2(t) q_2 =0 \quad \text{for any } t\in I, \] can only be satisfied by \[ q_1=q_2=0. \] If there are two quaternionic constants $q_1,q_2\in \mathbb{H}$ (not both zero) such that \[
q_1 x_1(t) + q_2 x_2(t) =0 \quad \text{for any } t\in I, \] then $x_1(t)$ and $x_2(t)$ are said to be left linearly {\em dependent} on $I$. $x_1(t)$ and $x_2(t)$ are said to be left linearly {\em independent} on $I$ if the algebraic equation \[
q_1 x_1(t) + q_2 x_2(t) =0 \quad \text{for any } t\in I, \] can only be satisfied by $ q_1=q_2=0$. \end{definition}
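Right and left dependence genuinely differ. For instance, $x_1(t)=e^{\mathbf{i}t}$ and $x_2(t)=e^{\mathbf{i}t}\mathbf{j}$ are right linearly dependent, since $x_1(t)\mathbf{j}-x_2(t)=0$, yet no nonzero left relation exists: evaluating at $t=0$ forces $q_1=-q_2\mathbf{j}$, and that candidate fails at $t=\pi/2$. The following Python sketch (purely illustrative; quaternions as $(w,x,y,z)$ 4-tuples) checks both facts numerically:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

QJ = (0.0, 0.0, 1.0, 0.0)

def e_it(t):
    # e^{it} = cos t + i sin t
    return (math.cos(t), math.sin(t), 0.0, 0.0)

def x1(t): return e_it(t)
def x2(t): return qmul(e_it(t), QJ)     # x2 = x1 * j

def right_combo(t):
    # x1(t)*j + x2(t)*(-1): identically zero, so x1, x2 are right dependent
    return tuple(a - b for a, b in zip(qmul(x1(t), QJ), x2(t)))

def left_combo(t):
    # q1 x1 + q2 x2 with q1 = -j, q2 = 1: the only left relation allowed by t = 0
    q1 = (0.0, 0.0, -1.0, 0.0)
    return tuple(a + b for a, b in zip(qmul(q1, x1(t)), x2(t)))
```

At $t=\pi/2$ the left combination evaluates to $2\mathbf{k}\neq 0$, ruling out left dependence.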
For the sake of convenience, in this paper we adopt the Cayley determinant over the quaternions \cite{Cayley}. For a $2\times 2$ determinant one could use ${a}_{11} {a} _{22} - {a} _{12} {a} _{21}$ (expanding along the first row). That is, \[ {\rm {rdet}}
\left ( \begin{array}{ll}
{a} _{11} & {a} _{12}
\\
{a} _{21} & {a} _{22} \end{array} \right ) =
{a}_{11} {a} _{22} - {a} _{12} {a} _{21}.
\] Certainly, one can also use ${a}_{11} {a} _{22} - {a} _{21} {a} _{12}$ (expanding along the first column) as follows. \[ {\rm {cdet}}
\left ( \begin{array}{ll}
{a} _{11} & {a} _{12}
\\
{a} _{21} & {a} _{22} \end{array} \right ) =
{a}_{11} {a} _{22} - {a} _{21} {a} _{12}.
\]
There are also other definitions (expanding along the second column or the second row).
Without loss of generality, we employ ${\rm rdet}$ throughout this paper. In fact, the main results remain valid with the Cayley determinant replaced by any of the other definitions.
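To see concretely that the row and column expansions differ over $\mathbb{H}$, take for instance the matrix with rows $(\mathbf{i}, \mathbf{j})$ and $(\mathbf{k}, 1)$: since $\mathbf{j}\mathbf{k}=\mathbf{i}$ while $\mathbf{k}\mathbf{j}=-\mathbf{i}$, one gets $\mathrm{rdet}=0$ but $\mathrm{cdet}=2\mathbf{i}$. A Python sketch (illustrative only; quaternions as integer 4-tuples so the arithmetic is exact):

```python
def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qsub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def rdet(m):
    # first-row expansion: a11 a22 - a12 a21
    return qsub(qmul(m[0][0], m[1][1]), qmul(m[0][1], m[1][0]))

def cdet(m):
    # first-column expansion: a11 a22 - a21 a12
    return qsub(qmul(m[0][0], m[1][1]), qmul(m[1][0], m[0][1]))

QI, QJ, QK, ONE = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1), (1, 0, 0, 0)
M = [[QI, QJ], [QK, ONE]]
```

Over a commutative field the two expansions would of course coincide.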
To study the solutions of Eq. (\ref{2.1}), we first define the concept of the {\em Wronskian} for QDEs. Consider any two solutions of (\ref{2.1}), \[ x_1(t) = \left ( \begin{array}{l} {x} _{11}(t) \\
{x}_{21} (t) \end{array} \right ),\,\,\,\, x_2(t) = \left ( \begin{array}{l} {x} _{12}(t) \\
{x}_{22} (t) \end{array} \right ) \in \mathbb{H}^2 \otimes \mathbb{R}. \] If we define the {\em Wronskian} as in the standard theory of ODEs, \begin{equation}
W_{ODE}(t) = {{\rm {rdet}}}
\left ( \begin{array}{ll}
{x} _{11}(t) & {x} _{12}(t)
\\
{x} _{21}(t) & {x} _{22}(t) \end{array} \right ) =
{x} _{11}(t) {x} _{22}(t) - {x} _{12}(t)
{x} _{21}(t).
\label{W-ODE}
\end{equation} This definition of the Wronskian cannot be extended to QDEs. In fact, let us consider two right linearly dependent solutions of (\ref{2.1}), \begin{equation} x_1(t) = x_2(t) \eta, \quad \text{or} \quad \left ( \begin{array}{l} {x} _{11}(t) \\
{x}_{21} (t) \end{array} \right ) = \left ( \begin{array}{l} {x} _{12}(t) \\
{x}_{22} (t) \end{array} \right )\eta,\,\,\,\, x_1(t) ,x_2(t) \in \mathbb{H}^2 \otimes \mathbb{R},\,\,\, \eta \in \mathbb{H}, \label{LI}
\end{equation} which implies \[ x_{12}(t)= x_{11}(t) \eta^{-1} \quad \text{and} \quad x_{21}(t)= x_{22}(t) \eta. \] Substituting these equalities into (\ref{W-ODE}), we have \[
W_{ODE}(t) = {x} _{11}(t) {x} _{22}(t) - {x} _{12}(t)
{x} _{21}(t)= {x} _{11}(t) {x} _{22}(t) - {x} _{11}(t)\eta^{-1}
{x} _{22}(t)\eta \neq 0 \quad \text{in general}. \] The reason is that $\eta$ is a quaternion, which need not commute with the other factors. Therefore, we should change the standard definition of the {\em Wronskian}. We now give the concept of the {\em Wronskian} for QDEs as follows.
\begin{definition}\label{Def4.2}
Let $x_1(t),x_2(t)$ be two solutions of Eq. (\ref{2.1}). Denote \[ M(t) = \left ( \begin{array}{ll}
{x} _{11}(t) & {x} _{12}(t)
\\
{x} _{21}(t) & {x} _{22}(t) \end{array} \right ). \] The {\em Wronskian} of QDEs is defined by \begin{equation}
W_{QDE}(t) =\mathrm{ddet}\, M(t) := {\rm {rdet}} \big(M(t) M^+(t) \big)
\label{W}
\end{equation} where $\mathrm{ddet}\, M:={\rm {rdet}} \big(M M^+ \big)$ is called the double determinant and $M^{+}$ is the conjugate transpose of $M$, that is, \[ M^{+}(t) = \left ( \begin{array}{ll}
\overline {x} _{11}(t) & \overline {x} _{21}(t)
\\ \overline {x} _{12}(t) & \overline {x} _{22}(t) \end{array} \right ). \] \end{definition}
\begin{remark}\label{Rem4.2}
From Definition \ref{Def4.2}, we have \begin{equation} \begin{array}{lll}
W_{QDE}(t)
&=&
\displaystyle \{ |x_{11}(t)|^2|x_{22}(t)|^2 + |x_{12}(t)|^2|x_{21}(t)|^2
\\&&
\hspace{0.5cm}
- x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t)
- x_{11}(t) \overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t) \}.
\end{array} \label{W2}
\end{equation} We see that \[ \overline{ x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) } = x_{11}(t) \overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t), \] which, combined with (\ref{W2}), implies that $W_{QDE}(t)$ is a real number. \end{remark}
For quaternionic matrices, it should be pointed out that in general \[ {\rm {rdet}} \big(M(t) M^+(t) \big)\neq {\rm {rdet}}\, M(t) \cdot {\rm {rdet}}\, M^+(t) . \]
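The contrast between (\ref{W-ODE}) and (\ref{W}) can be checked on a concrete right linearly dependent pair: take $x_2=(\mathbf{j},\mathbf{k})^T$ and $x_1=x_2\,\mathbf{i}=(-\mathbf{k},\mathbf{j})^T$. The classical Wronskian evaluates to $2\neq 0$ despite the dependence, while $\mathrm{rdet}(MM^+)=0$; the same data also exhibit $\mathrm{rdet}(MM^+)\neq \mathrm{rdet}\,M\cdot\mathrm{rdet}\,M^+$. A Python sketch (illustrative only; quaternions as integer 4-tuples, so all results are exact):

```python
def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qadd(p, q): return tuple(a + b for a, b in zip(p, q))
def qsub(p, q): return tuple(a - b for a, b in zip(p, q))
def qconj(q): return (q[0], -q[1], -q[2], -q[3])

def rdet(m):
    # Cayley determinant, first-row expansion
    return qsub(qmul(m[0][0], m[1][1]), qmul(m[0][1], m[1][0]))

def mmul(A, B):
    # product of 2x2 quaternionic matrices
    return [[qadd(qmul(A[r][0], B[0][c]), qmul(A[r][1], B[1][c]))
             for c in range(2)] for r in range(2)]

def dagger(A):
    # conjugate transpose M^+
    return [[qconj(A[0][0]), qconj(A[1][0])],
            [qconj(A[0][1]), qconj(A[1][1])]]

QI, QJ, QK = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
x2 = (QJ, QK)
x1 = (qmul(x2[0], QI), qmul(x2[1], QI))       # x1 = x2 * i, right dependent
M = [[x1[0], x2[0]], [x1[1], x2[1]]]

w_ode = rdet(M)                   # classical Wronskian: 2, not 0
w_qde = rdet(mmul(M, dagger(M)))  # quaternionic Wronskian: 0
prod = qmul(rdet(M), rdet(dagger(M)))         # rdet M * rdet M^+ = 4
```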
\begin{proposition}\label{Pro4.1}
If $x_1(t)$ and $x_2(t)$ are right
linearly dependent on $I$ then $W_{QDE}(t)=0$.
\end{proposition}
{\bf PROOF.} If $x_1(t)$ and $x_2(t)$ are right linearly dependent on $I$, then Eq. (\ref{LI}) holds. Then, by (\ref{W2}), \[ \begin{array}{lll}
W_{QDE}(t)
&=&
\displaystyle
\{
|x_{12}(t)|^2|\eta|^2 |x_{22}(t)|^2+ |x_{12}(t)|^2|\eta|^2|x_{22}(t)|^2
\\
&&
\displaystyle
- x_{12}(t) \overline{x}_{22} (t) x_{22}(t) |\eta|^2 \overline{x}_{12}(t)
- x_{12}(t) |\eta|^2 \overline{x}_{22}(t) x_{22}(t) \overline{x}_{12} (t) \} =0. \end{array} \] That is, $ W_{QDE}(t)=0$.
\begin{theorem}\label{Th4.1}
(Liouville formula) The Wronskian $W_{QDE}(t)$ of Eq. (\ref{2.1}) satisfies the following quaternionic Liouville formula.
\begin{equation} W_{QDE}(t)= \exp\left( \int_{t_0}^{t}{\rm tr} [A(s)+A^{+}(s)] ds\right) W_{QDE}(t_0), \label{Liou}
\end{equation} where ${\rm tr} A(t)$ is the trace of the coefficient matrix $A(t)$, i.e. ${\rm tr} A(t)=a_{11}(t)+a_{22}(t)$.
Moreover, if $W_{QDE}(t_0)=0$ for some $t_0$ in $I$, then $W_{QDE}(t)=0$ on $I$.
\end{theorem}
{\bf PROOF.} Let us consider Eq. (\ref{W}). Differentiating both sides and using (\ref{W2}), we obtain \[ \begin{array}{lll} && \displaystyle\frac{d}{dt} W_{QDE}(t) \\ &=&
\displaystyle \frac{d}{dt} {\rm {rdet}} \left( M(t) M^{+}(t) \right)
\\
&=&
\displaystyle \frac{d}{dt}\Big[ x_{11}(t) \overline{x}_{11}(t) x_{22}(t) \overline{x}_{22}(t) + x_{12}(t) \overline{x}_{12}(t) x_{21}(t) \overline{x}_{21}(t)
\\
&&
\displaystyle
\hspace{0.6cm}
- x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t)
- x_{11}(t) \overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t) \Big]
\\
&=&
\displaystyle \frac{d}{dt} x_{11}(t) \overline{x}_{11}(t) x_{22}(t) \overline{x}_{22}(t)
+ x_{11}(t) \displaystyle\frac{d}{dt} \overline{x}_{11}(t) x_{22}(t) \overline{x}_{22}(t)
\\
&&
+ x_{11}(t) \overline{x}_{11}(t) \displaystyle\frac{d}{dt} x_{22}(t) \overline{x}_{22}(t)
+ x_{11}(t) \overline{x}_{11}(t) x_{22}(t) \displaystyle\frac{d}{dt} \overline{x}_{22}(t)
\\ &&
+\displaystyle\frac{d}{dt}
x_{12}(t) \overline{x}_{12}(t) x_{21}(t) \overline{x}_{21}(t)
+
x_{12}(t) \displaystyle\frac{d}{dt} \overline{x}_{12}(t) x_{21}(t) \overline{x}_{21}(t)
\\
&&
+
x_{12}(t) \overline{x}_{12}(t) \displaystyle\frac{d}{dt} x_{21}(t) \overline{x}_{21}(t)
+
x_{12}(t) \overline{x}_{12}(t) x_{21}(t) \displaystyle\frac{d}{dt} \overline{x}_{21}(t)
\\
&&
- \displaystyle\frac{d}{dt} x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t)
- x_{12}(t) \displaystyle\frac{d}{dt} \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t)
\\
&&
- x_{12}(t) \overline{x}_{22}(t) \displaystyle\frac{d}{dt}x_{21}(t) \overline{x}_{11}(t)
- x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \displaystyle\frac{d}{dt}\overline{x}_{11}(t)
\\
&&
- \displaystyle\frac{d}{dt} x_{11}(t) \overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)
- x_{11}(t) \displaystyle\frac{d}{dt}\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)
\\
&&
- x_{11}(t) \overline{x}_{21} (t) \displaystyle\frac{d}{dt} x_{22}(t) \overline{x}_{12} (t)
- x_{11}(t) \overline{x}_{21} (t)x_{22}(t) \displaystyle\frac{d}{dt} \overline{x}_{12} (t) \\
&=&
\displaystyle [a_{11}(t) x_{11}(t)+ a_{12}(t)x_{21}(t)] \overline{x}_{11}(t) x_{22}(t) \overline{x}_{22}(t)
\\
&&
+ x_{11}(t)\displaystyle [\overline{x}_{11}(t) \overline{a}_{11}(t)+ \overline{x}_{21}(t) \overline{a}_{12}(t)] x_{22}(t) \overline{x}_{22}(t)
\\
&&
+ x_{11}(t) \overline{x}_{11}(t)\displaystyle [a_{21}(t) x_{12}(t)+ a_{22}(t)x_{22}(t)] \overline{x}_{22}(t)
\\
&&
+ x_{11}(t) \overline{x}_{11}(t) x_{22}(t) \displaystyle [ \overline{x}_{12}(t) \overline{a}_{21}(t)+ \overline{x}_{22}(t) \overline{a}_{22}(t)]
\\ &&
+\displaystyle[a_{11}(t) x_{12}(t)+ a_{12}(t)x_{22}(t)] \overline{x}_{12}(t) x_{21}(t) \overline{x}_{21}(t)
\\
&&
+
x_{12}(t) \displaystyle[\overline{x}_{12}(t) \overline{a}_{11}(t)+ \overline{x}_{22}(t) \overline{a}_{12}(t)] x_{21}(t) \overline{x}_{21}(t)
\\
&&
+
x_{12}(t) \overline{x}_{12}(t) \displaystyle [a_{21}(t) x_{11}(t)+ a_{22}(t)x_{21}(t)] \overline{x}_{21}(t)
\\
&&
+
x_{12}(t) \overline{x}_{12}(t) x_{21}(t) \displaystyle [ \overline{x}_{11}(t) \overline{a}_{21}(t)+ \overline{x}_{21}(t) \overline{a}_{22}(t)]
\\
&&
- \displaystyle[a_{11}(t) x_{12}(t)+ a_{12}(t)x_{22}(t)] \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t)
\\
&&
- x_{12}(t) \displaystyle [ \overline{x}_{12}(t) \overline{a}_{21}(t)+ \overline{x}_{22}(t) \overline{a}_{22}(t)] x_{21}(t) \overline{x}_{11}(t)
\\
&&
- x_{12}(t) \overline{x}_{22}(t) \displaystyle [a_{21}(t) x_{11}(t)+ a_{22}(t)x_{21}(t)] \overline{x}_{11}(t)
\\
&&
- x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \displaystyle [\overline{x}_{11}(t) \overline{a}_{11}(t)+ \overline{x}_{21}(t) \overline{a}_{12}(t)]
\\
&&
- \displaystyle[a_{11}(t) x_{11}(t)+ a_{12}(t)x_{21}(t)] \overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)
\\
&&
- x_{11}(t) \displaystyle [ \overline{x}_{11}(t) \overline{a}_{21}(t)+ \overline{x}_{21}(t) \overline{a}_{22}(t)] x_{22}(t) \overline{x}_{12} (t)
\\
&&
- x_{11}(t) \overline{x}_{21} (t) \displaystyle [a_{21}(t) x_{12}(t)+ a_{22}(t)x_{22}(t)] \overline{x}_{12} (t)
\\
&&
- x_{11}(t) \overline{x}_{21} (t)x_{22}(t) \displaystyle[\overline{x}_{12}(t) \overline{a}_{11}(t)+ \overline{x}_{22}(t) \overline{a}_{12}(t)] \end{array} \] \[ \begin{array}{lll}
&=&
\displaystyle \Big\{ {a}_{11}(t) |x_{11}(t)|^2|x_{22}(t)|^2 + a_{12}(t)x_{21}(t) \overline{x}_{11}(t) |x_{22}(t)|^2
\\
&&
+ |x_{11}(t)|^2 \overline{a}_{11}(t)|x_{22}(t)|^2 +x_{11}(t) \overline{x}_{21}(t) \overline{a}_{12}(t) x_{22}(t) \overline{x}_{22}(t)
\\
&&
+ |x_{11}(t)|^2 a_{21}(t) x_{12}(t)\overline{x}_{22}(t) + |x_{11}(t)|^2 a_{22}(t)|x_{22}(t)|^2
\\
&&
+ |x_{11}(t)|^2 x_{22}(t) \overline{x}_{12}(t) \overline{a}_{21}(t) + |x_{11}(t)|^2 |{x}_{22}(t)|^2 \overline{a}_{22}(t)
\\ &&
+ a_{11}(t) |x_{12}(t)|^2 | x_{21}(t)|^2 + a_{12}(t)x_{22}(t) \overline{x}_{12}(t) |x_{21}(t)|^2
\\
&&
+
|x_{12}(t)|^2 \overline{a}_{11}(t) |x_{21}(t)|^2+ x_{12}(t) \overline{x}_{22}(t) \overline{a}_{12}(t) |x_{21}(t)|^2
\\
&&
+
|x_{12}(t)|^2 a_{21}(t) x_{11}(t)\overline{x}_{21}(t) + |x_{12}(t)|^2 a_{22}(t)|x_{21}(t)|^2
\\
&&
+
|x_{12}(t)|^2 x_{21}(t) \overline{x}_{11}(t) \overline{a}_{21}(t) + |x_{12}(t)|^2 |{x}_{21}(t)|^2 \overline{a}_{22}(t)
\\
&&
- a_{11}(t) x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) - a_{12}(t)|x_{22}(t)|^2 x_{21}(t) \overline{x}_{11}(t)
\\
&&
- |x_{12}(t)|^2\overline{a}_{21}(t)x_{21}(t) \overline{x}_{11}(t) - x_{12}(t) \overline{x}_{22}(t) \overline{a}_{22}(t) x_{21}(t) \overline{x}_{11}(t)
\\
&&
- x_{12}(t) \overline{x}_{22}(t) a_{21}(t) |x_{11}(t)|^2 - x_{12}(t) \overline{x}_{22}(t) a_{22}(t)x_{21}(t) \overline{x}_{11}(t)
\\
&&
- x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) \overline{a}_{11}(t) - x_{12}(t) \overline{x}_{22}(t) |{x}_{21}(t)|^2 \overline{a}_{12}(t)
\\
&&
- a_{11}(t) x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t) - a_{12}(t)|x_{12}(t)|^2 x_{22}(t) \overline{x}_{12} (t)
\\
&&
- |x_{11}(t)|^2 \overline{a}_{21}(t) x_{22}(t) \overline{x}_{12} (t) - x_{11}(t) \overline{x}_{21}(t) \overline{a}_{22}(t) x_{22}(t) \overline{x}_{12} (t)
\\
&&
- x_{11}(t) \overline{x}_{21} (t) a_{21}(t) |x_{12}(t)|^2 - x_{11}(t) \overline{x}_{21} (t) a_{22}(t)x_{22}(t) \overline{x}_{12} (t)
\\
&&
- x_{11}(t) \overline{x}_{21} (t)x_{22}(t) \overline{x}_{12}(t) \overline{a}_{11}(t) - x_{11}(t) \overline{x}_{21} (t) |{x}_{22}(t)|^2 \overline{a}_{12}(t)
\Big\}
\end{array}
\]
\begin{equation} \begin{array}{lll}
&=&
\displaystyle
\Big\{
\big[ {a}_{11}(t) + {a}_{22}(t) +\overline{a}_{11}(t) + \overline{a}_{22}(t) \big] |x_{11}(t)|^2|x_{22}(t)|^2
\\
&&
+
\big[ {a}_{11}(t) + {a}_{22}(t) +\overline{a}_{11}(t) + \overline{a}_{22}(t) \big] |x_{12}(t)|^2|x_{21}(t)|^2 \\ && -{a}_{11}(t)\big[ x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) + x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)\big] \\ && -\big[ x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) + x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)\big]\overline{a}_{11}(t) \\ && - x_{12}(t) \overline{x}_{22}(t)\big[ {a}_{22}(t) +\overline{a}_{22}(t) \big] x_{21}(t) \overline{x}_{11}(t) \\ && - x_{11}(t)\overline{x}_{21} (t) \big[ {a}_{22}(t) +\overline{a}_{22}(t) \big] x_{22}(t) \overline{x}_{12} (t)
\\
&&
+|x_{11}(t)|^2 a_{21}(t) x_{12}(t)\overline{x}_{22}(t) +|x_{11}(t)|^2 {x}_{22}(t) \overline {x}_{12}(t) \overline{a}_{21}(t)
\\
&&
-|x_{11}(t)|^2 \overline{a}_{21}(t) {x}_{22}(t) \overline {x}_{12}(t) - a_{21}(t) x_{12}(t)\overline{x}_{22}(t) |x_{11}(t)|^2
\\
&&
+|x_{12}(t)|^2 a_{21}(t) x_{11}(t)\overline{x}_{21}(t) +|x_{12}(t)|^2 {x}_{21}(t) \overline {x}_{11}(t) \overline{a}_{21}(t)
\\
&&
-|x_{12}(t)|^2 \overline{a}_{21}(t) {x}_{21}(t) \overline {x}_{11}(t) - a_{21}(t) x_{11}(t)\overline{x}_{21}(t) |x_{12}(t)|^2
\Big\}
.
\end{array}
\label{Deri} \end{equation} Set \[ Q_1=x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) + x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t) \] It is easy to see that \[ \overline{Q}_1=Q_1, \] which implies that $Q_1$ is a real number. Thus, \[ \begin{array}{lll} && -{a}_{11}(t)\big[ x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) + x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)\big] \\ && -\big[ x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) + x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)\big]\overline{a}_{11}(t) \\ &=& -\big[ {a}_{11}(t)+\overline{a}_{11}(t)\big] \big[ x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) + x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)\big].
\end{array}
\] It is obvious that \[
|x_{11}(t)|^2 a_{21}(t) x_{12}(t)\overline{x}_{22}(t) +|x_{11}(t)|^2 {x}_{22}(t) \overline {x}_{12}(t) \overline{a}_{21}(t)
=2|x_{11}(t)|^2 \Re \{a_{21}(t) x_{12}(t)\overline{x}_{22}(t)\},
\] and \[
|x_{11}(t)|^2 \overline{a}_{21}(t) {x}_{22}(t) \overline {x}_{12}(t) + x_{12}(t)\overline{x}_{22}(t)a_{21}(t) |x_{11}(t)|^2
=2|x_{11}(t)|^2 \Re \{\overline {a}_{21}(t) {x}_{22}(t) \overline{x}_{12}(t)\}. \] Now we claim that \begin{equation} \Re \{a_{21}(t) x_{12}(t)\overline{x}_{22}(t)\}=\Re \{\overline {a}_{21}(t) \overline{ x_{12}(t)\overline{x}_{22}(t)}\}. \label{ReRe} \end{equation} In fact, for any $a,\, b\in \mathbb{H}$, it is easy to check that \[
\Re\{ a \overline{b} \}= \Re \{ \overline{a} b\}.
\] Thus, it follows from (\ref{ReRe}) that \[ \begin{array}{lll} &&
|x_{11}(t)|^2 a_{21}(t) x_{12}(t)\overline{x}_{22}(t) +|x_{11}(t)|^2 {x}_{22}(t) \overline {x}_{12}(t) \overline{a}_{21}(t)
\\
&&
-|x_{11}(t)|^2 \overline{a}_{21}(t) {x}_{22}(t) \overline {x}_{12}(t) - x_{12}(t)\overline{x}_{22}(t)a_{21}(t) |x_{11}(t)|^2=0.
\end{array}
\] Similarly, we have \[ \begin{array}{lll} &&
|x_{12}(t)|^2 a_{21}(t) x_{11}(t)\overline{x}_{21}(t) +|x_{12}(t)|^2 {x}_{21}(t) \overline {x}_{11}(t) \overline{a}_{21}(t)
\\
&&
-|x_{12}(t)|^2 \overline{a}_{21}(t) {x}_{21}(t) \overline {x}_{11}(t) - a_{21}(t) x_{11}(t)\overline{x}_{21}(t) |x_{12}(t)|^2=0.
\end{array}
\] Substituting above equalities into (\ref{Deri}), we have \begin{equation} \begin{array}{lll} \displaystyle\frac{d}{dt} W_{QDE}(t) &=&
\displaystyle \Big\{\big[ {a}_{11}(t) + {a}_{22}(t) +\overline{a}_{11}(t) + \overline{a}_{22}(t) \big] |x_{11}(t)|^2|x_{22}(t)|^2
\\
&&
+
\big[ {a}_{11}(t) + {a}_{22}(t) +\overline{a}_{11}(t) + \overline{a}_{22}(t) \big] |x_{12}(t)|^2|x_{21}(t)|^2 \\ && - \big[ {a}_{11}(t) + {a}_{22}(t) +\overline{a}_{11}(t) + \overline{a}_{22}(t) \big] x_{12}(t) \overline{x}_{22}(t) x_{21}(t) \overline{x}_{11}(t) \\ &&
- \big[ {a}_{11}(t) + {a}_{22}(t) +\overline{a}_{11}(t) + \overline{a}_{22}(t) \big] x_{11}(t)\overline{x}_{21} (t)x_{22}(t) \overline{x}_{12} (t)
\Big\}
\\
&=&
\displaystyle \big[ {a}_{11}(t) + {a}_{22}(t) +\overline{a}_{11}(t) + \overline{a}_{22}(t) \big] W_{QDE}(t)
\\
&=&
\displaystyle \big[ {\rm tr} A(t) + {\rm tr} A^{+}(t) \big] W_{QDE}(t)
. \end{array} \label{2.9} \end{equation} Integrating (\ref{2.9}) over $[t_0,t]$, the Liouville formula (\ref{Liou}) follows. Consequently, if $W_{QDE}(t_0)=0$ for some $t_0$ in $I$, then $W_{QDE}(t)=0$ on $I$.
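As a concrete check of (\ref{Liou}): for the autonomous system with $A=\mathrm{diag}(\mathbf{i},\mathbf{j})$ one has $\mathrm{tr}[A+A^{+}]=0$, so the formula predicts that $W_{QDE}(t)$ is constant. The fundamental solutions $x_1=(e^{\mathbf{i}t},0)^T$, $x_2=(0,e^{\mathbf{j}t})^T$ give $W_{QDE}(t)\equiv 1$. A Python sketch (illustrative only; quaternions as $(w,x,y,z)$ 4-tuples):

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qadd(p, q): return tuple(a + b for a, b in zip(p, q))
def qsub(p, q): return tuple(a - b for a, b in zip(p, q))
def qconj(q): return (q[0], -q[1], -q[2], -q[3])

def rdet(m):
    # Cayley determinant, first-row expansion
    return qsub(qmul(m[0][0], m[1][1]), qmul(m[0][1], m[1][0]))

def mmul(A, B):
    # product of 2x2 quaternionic matrices
    return [[qadd(qmul(A[r][0], B[0][c]), qmul(A[r][1], B[1][c]))
             for c in range(2)] for r in range(2)]

def dagger(A):
    # conjugate transpose M^+
    return [[qconj(A[0][0]), qconj(A[1][0])],
            [qconj(A[0][1]), qconj(A[1][1])]]

def e_q(u, t):
    # exp(u t) = cos t + u sin t for a unit pure-imaginary quaternion u
    s = math.sin(t)
    return (math.cos(t), u[1]*s, u[2]*s, u[3]*s)

QI, QJ = (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)
ZERO = (0.0, 0.0, 0.0, 0.0)

def w_qde(t):
    # W_QDE(t) = rdet(M(t) M(t)^+) for M(t) = diag(e^{it}, e^{jt})
    M = [[e_q(QI, t), ZERO], [ZERO, e_q(QJ, t)]]
    return rdet(mmul(M, dagger(M)))

w0, w1 = w_qde(0.0), w_qde(0.7)   # both should equal 1
```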
\begin{remark}\label{Rem4.3} It should be noted that the {\em Wronskian} of QDEs can also be defined by \[
W_{QDE}(t) := {\rm {rdet}} \big( M^+(t) M(t)\big).
\]
By this definition, Proposition \ref{Pro4.1} and Theorem \ref{Th4.1} can be similarly proved. \end{remark}
We need the following lemma (Theorem 2.10 in \cite{Ky-AMC}).
\begin{lemma}\label{Lem4.1}
A quaternionic matrix $M$ is invertible if and only if $\mathrm{ddet}\, M\neq0$ (equivalently, $W_{QDE}(t)\neq 0$).
\end{lemma}
\begin{proposition}\label{Pro4.2}
If $W_{QDE}(t)=0$ at some $t_0$ in $I$ then $x_1(t)$ and $x_2(t)$ are right linearly dependent on $I$.
\end{proposition}
{\bf PROOF.} From the Liouville formula (Theorem \ref{Th4.1}), we have
\[
W_{QDE}(t_0)=0 \quad \mbox{implies} \quad W_{QDE}(t)=0 \quad \mbox{for any}\ t\in I,
\] which implies that the quaternionic matrix $M$ is not invertible on $I$, in view of Lemma \ref{Lem4.1}. Hence, the
linear system
\[
M q=0, \quad \text{or} \quad (x_1(t),x_2(t))q=0, \quad q=(q_1,q_2)^T\in \mathbb{H}^2,
\]
has a non-zero solution. Consequently, the two solutions $x_1(t)$ and $x_2(t)$ are right linearly dependent on $I$.
From Proposition \ref{Pro4.1}, Proposition \ref{Pro4.2} and Theorem \ref{Th4.1}, we immediately have
\begin{theorem}\label{Th4.2}
Let $A(t)$ in Eq. (\ref{2.1}) be continuous in $t$ on an interval $I$. Two solutions $x_1(t),x_2(t)$ of Eq. (\ref{2.1}) on $I$ are right linearly dependent on $I$ if and only if the Wronskian $W_{QDE}(t)$ vanishes at some $t_0$ in $I$.
\end{theorem}
Now, we present an important result on the superposition principle and structure of the general solution.
\begin{theorem}\label{Th4.3} (Structure of the general solution) If $x_1(t)$ and $x_2(t)$ are two solutions of Eq. (\ref{2.1}), then \[ x(t)= x_1(t) q_1 + x_2(t) q_2,
\] is also a solution of (\ref{2.1}), where $q_1, \, q_2$ are quaternionic constants. Let ${\cal {S}}_{\mathbb{H}}$ denote the set of all solutions of (\ref{2.1}). Then the set ${\cal {S}}_{\mathbb{H}}$ is a {\bf right
free module} (the quaternionic vector space $\mathbb{H}^2$ over a division ring $\mathbb{H}$). \end{theorem}
\begin{remark}\label{Rem-1D} We consider the one-dimensional nonautonomous system \[ \dot{x}=a(t) x. \]
We see that it has a fundamental solution denoted by $\bar{x}(t)$. The set of solutions ${\cal {S}}=\{\bar{x}(t) q \mid q\in \mathbb{H}\}$ is a 1-dimensional right free module. In fact, $q \bar{x}(t)$ need not satisfy the equation, that is, $q \bar{x}(t)\not\in {\cal {S}}$ in general. A simple computation shows: \[ \frac{d}{dt} (q \bar{x}(t))=q \dot{\bar{x}}(t)= q a(t) \bar{x}(t)\neq a(t) q\bar{x}(t). \] Consequently, $q \bar{x}(t)$ {\bf is not a solution of $\dot{x}=a(t) x$}, i.e. $q \bar{x}(t)\not\in {\cal {S}}$.
In particular, we consider the one-dimensional autonomous system \[ \dot{x}=a x. \]
We see that it has a fundamental solution denoted by $\bar{x}(t)=\exp\{a t\}$. The set of solutions ${\cal {S}}=\{\exp\{a t\} q| q\in \mathbb{H}\} $ is a 1-dimensional right free module.
\end{remark}
Suppose that $x_1(t),\,x_2(t)$ are two fundamental solutions of (\ref{2.1}) (independent in the classical sense, i.e., over the field $\mathbb{C}$). It is easy to see that the solution set $ {\cal{S}_\mathbb{C}}=\{x(t) \,\,\,\mbox{is a solution of (\ref{2.1})}\,\,\, \big|\, x(t)=c_1 x_1(t)+c_2x_2(t),\,c_1,c_2\in\mathbb{C}\}$ is a linear vector space over $\mathbb{C}$. However, we claim that the set ${\cal{S}_\mathbb{C}}$ is not complete. The following counterexample shows this fact.\\ \noindent {\bf Counterexample:} Consider the following QDE \begin{equation} \dot {x}= \left ( \begin{array}{ll} \mathbf{i} & 0
\\
0 & \mathbf{j} \end{array} \right )x,\,\,\,\, x=(x_1,x_2)^T. \label{CounterEX} \end{equation}
It is easy to see that $x_1(t)=(e^{\mathbf{i}t},0)^{T}$, $x_2(t)=(0,e^{\mathbf{j}t})^{T}$ are two fundamental solutions of (\ref{CounterEX}). Also, it is easy to verify that $x(t)=(e^{\mathbf{i}t}\mathbf{k},0)^{T}$ is a solution of (\ref{CounterEX}). We claim that $x(t)\not\in{\cal{S}_\mathbb{C}}$. If this were not the case, there would exist complex numbers $a,b$ such that $x(t)=x_1(t) a+x_2(t) b$, i.e., $(e^{\mathbf{i}t}\mathbf{k},0)^{T}=(e^{\mathbf{i}t},0)^{T} a+ (0,e^{\mathbf{j}t})^{T}b$. Then, \begin{equation}
\left\{ \begin{array}{ccc}
e^{\mathbf{i}t}\mathbf{k} & =& e^{\mathbf{i}t}a,
\\ 0 &=&e^{\mathbf{j}t}b. \end{array} \right. \label{EV1.1111} \end{equation} Consequently, $a=\mathbf{k}$ and $b=0$, which contradicts the assumption that $a$ is a complex number.
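This counterexample can also be checked numerically. The following Python sketch (quaternions represented as $(w,x,y,z)$ 4-tuples with the Hamilton product; all helper names are ours, not from the paper) verifies that the first component of $x(t)=(e^{\mathbf{i}t}\mathbf{k},0)^{T}$ satisfies $\dot{x}_1=\mathbf{i}x_1$ while lying outside the complex right multiples of $e^{\mathbf{i}t}$:

```python
import math

# Quaternions as (w, x, y, z) = w + x i + y j + z k, with the Hamilton product.
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

I = (0.0, 1.0, 0.0, 0.0)
K = (0.0, 0.0, 0.0, 1.0)

def x1(t):
    # first component of the claimed extra solution: e^{it} * k
    return qmul((math.cos(t), math.sin(t), 0.0, 0.0), K)

t, h = 0.7, 1e-6
# central finite difference of x1 against the right-hand side i * x1
num_deriv = tuple((a - b) / (2*h) for a, b in zip(x1(t + h), x1(t - h)))
rhs = qmul(I, x1(t))
err = max(abs(a - b) for a, b in zip(num_deriv, rhs))

# e^{it} k = (0, 0, -sin t, cos t): its 1- and i-parts vanish, while
# e^{it} a for a complex a has vanishing j- and k-parts, so x1 is not
# a complex right multiple of e^{it}.
print(err)
```

The finite-difference check confirms the ODE, and the component pattern makes the "not complete over $\mathbb{C}$" claim concrete.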
{\bf PROOF of THEOREM \ref{Th4.3}.} The first assertion is easily obtained by direct differentiation. Now we prove the second assertion. We proceed in two steps. First, we prove the existence of two right linearly independent solutions of Eq. (\ref{2.1}). We can choose two right linearly independent quaternionic vectors $x_1^0=(x_{11}^0, x_{21}^0)^T\in \mathbb{H}^2$ and $x_2^0=(x_{12}^0, x_{22}^0)^T\in \mathbb{H}^2$ (e.g. the natural basis $e_1=(1, 0)^T, e_2=(0, 1)^T$). According to Theorem 3.1, for any $k=1,2$ and $t_0\in I$, Eq. (\ref{2.1}) has a unique solution $x_k(t)$ satisfying $x_k(t_0)=x_k^0$. Suppose that there are two quaternionic constants $q_1,q_2\in \mathbb{H}$ such that \[ x_1(t) q_1 + x_2(t) q_2 =0,\,\,\,\,for\,\,\, any \,\,\,\, t\in I. \] In particular, taking $t=t_0$, we have \[ x_1(t_0) q_1 + x_2(t_0) q_2 =0,\,\,\,\,i.e.,\,\,\,\, x_1^0 q_1 + x_2^0 q_2 =0. \] Noting that $x_1^0$ and $x_2^0$ are two right linearly independent quaternionic vectors, it follows that $q_1=q_2=0$. Thus, the solutions $x_1(t),x_2(t)$ of Eq. (\ref{2.1}) are right linearly independent on $I$.
Secondly, we prove that any solution $x(t)$ of Eq. (\ref{2.1}) can be represented as a combination of the above two right linearly independent solutions $x_1(t),x_2(t)$: \begin{equation} x(t)= x_1(t) q_1 + x_2(t) q_2,\,\,\,\,for \,\,\, any \,\,\,\, t\in I, \label{Combination} \end{equation} where $q_1,q_2\in \mathbb{H}$. On one hand, $x_1^0$ and $x_2^0$ are two right linearly independent quaternionic vectors, which form a basis of the two-dimensional vector space $ \mathbb{H}^2$. Thus, there are two quaternionic constants $q_1,q_2\in \mathbb{H}$ such that \[ x(t_0)= x_1^0 q_1 + x_2^0 q_2 = x_1(t_0) q_1 + x_2(t_0) q_2. \] On the other hand, from the first assertion of this theorem, we know that $
x_1(t) q_1 + x_2(t) q_2 $ is also a solution of (\ref{2.1}) with the initial value $x(t_0)$. Therefore, $x(t)$ and $
x_1(t) q_1 + x_2(t) q_2 $ are two solutions of Eq. (\ref{2.1}) with the same initial value $x(t_0)$. By the uniqueness theorem (Theorem \ref{Th3.1}), the equality (\ref{Combination}) holds.
\section{Fundamental matrix and solution to QDEs}
To study the solutions of Eq. (\ref{2.1}), it is necessary to
introduce the quaternion-valued exponential function. If $q\in \mathbb{H}$, the {\em exponential} of $q$ is defined by \[ \exp q= \sum_{n=0}^{\infty} \frac{ q^n} {n!} \] (the series converges absolutely and uniformly on compact subsets). If $r\in \mathbb{H}$ is such that $qr = rq$, the exponential satisfies the addition theorem \[ \exp\{ q\} \exp \{ r \} = \exp \{ q + r\}. \] Moreover, the quaternionic Euler formula holds: \[
\exp\{ q\} = (\exp \Re q)\Big[ (\cos \|\Im q\|) e + (\sin \| \Im q\|) \frac{\Im q} {\|\Im q\|}\Big]. \] Thus, $\exp q=1$ if and only if \[
\Re q =0 \,\,\,\mbox{and}\,\,\, \|\Im q\|\equiv 0 \pmod{2\pi}. \] Since $[q(t)r(t)]'=q'(t)r(t) + q(t) r'(t)$ (here $'=\frac{\rm d}{{\rm d}t}$), \begin{equation} [q^{n}(t)]'=\displaystyle\sum_{j=0}^{n-1} q^{j}(t) q'(t)q^{n-1-j}(t). \label{nD} \end{equation}
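As a quick numerical illustration (a Python sketch; the quaternion 4-tuple representation and helper names are ours), the closed Euler form of $\exp q$ agrees with the defining power series, and $\exp q=1$ when $\Re q=0$ and $\|\Im q\|=2\pi$:

```python
import math

def qmul(p, q):  # Hamilton product on (w, x, y, z) 4-tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qexp(q):
    # Euler form: e^{Re q} (cos|Im q| + (Im q / |Im q|) sin|Im q|)
    w, v = q[0], q[1:]
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    ew = math.exp(w)
    if n == 0.0:
        return (ew, 0.0, 0.0, 0.0)
    s = ew * math.sin(n) / n
    return (ew * math.cos(n), s*v[0], s*v[1], s*v[2])

def qexp_series(q, terms=40):
    # defining power series: sum of q^n / n!
    acc, term = (1.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)
    for n in range(1, terms):
        term = tuple(c / n for c in qmul(term, q))
        acc = tuple(a + b for a, b in zip(acc, term))
    return acc

q = (0.3, 0.4, 1.2, -0.5)
diff = max(abs(a - b) for a, b in zip(qexp(q), qexp_series(q)))
one = qexp((0.0, 2*math.pi, 0.0, 0.0))   # Re q = 0, |Im q| = 2*pi
print(diff, one)
```

The truncated series converges rapidly here since the power series for $\exp$ is entire.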
\begin{remark}\label{Rem5.1}
It is known that if $q(t)$ reduces to a real-valued function, then \begin{equation} [q^{n}(t)]'=nq^{n-1}(t)q'(t), \label{rnD} \end{equation} which is in general different from (\ref{nD}). This is a difference between quaternion-valued functions and real (complex) functions. But if $q(t)$ is a quaternion-valued function with $q(t)q'(t) = q'(t)q(t)$, then (\ref{rnD}) still holds.
\begin{proposition}\label{Pro5.1}
If $q(t)$ is differentiable and $q(t)q'(t) = q'(t)q(t)$, then \begin{equation} [\exp q(t) ]' = [\exp q(t) ] q'(t). \label{EXD} \end{equation} \end{proposition}
\begin{remark}\label{Rem5.2}
If $q(t)\equiv q$ is a constant quaternion, then $q(t)q'(t) = q'(t)q(t)$ is always satisfied. Here we see the difference between the derivative of the quaternionic exponential function and that of the complex (real) exponential function: for quaternion-valued functions, the condition $q(t)q'(t) = q'(t)q(t)$ is needed to guarantee that the equality (\ref{EXD}) holds, while for real or complex functions it is always satisfied.
We prove Proposition \ref{Pro5.1} by two methods.
{\bf PROOF 1}: The way of Taylor expansion. By using (\ref{nD}) and the commutativity assumption $q(t)q'(t)=q'(t)q(t)$, \[
\begin{array}{lll} [\exp q(t) ]'&=&\Big[\displaystyle\sum_{n=0}^{\infty} \frac{ q^n(t)} {n!}\Big]' =\displaystyle\sum_{n=1}^{\infty}\sum_{j=0}^{n-1} \frac{q^{j}(t) q'(t)q^{n-1-j}(t)}{n!} \\ &=& \displaystyle\sum_{n=1}^{\infty}\frac{ nq^{n-1}(t)} {n!}q'(t) =\displaystyle\sum_{n=1}^{\infty}\frac{ q^{n-1}(t)} {(n-1)!}q'(t)=[\exp q(t) ] q'(t).
\end{array} \]
{\bf PROOF 2}: The way of Euler formula. We rewrite $q(t)$ as \[ q(t)=q_0(t) + q_1(t) \mathbf{i} + q_2 (t) \mathbf{j} + q_3 (t) \mathbf{k} = q_0(t) + \Im {q}(t), \] where \[
\Im {q}(t)=q_1(t) \mathbf{i} + q_2 (t) \mathbf{j} + q_3 (t) \mathbf{k} = \frac{\Im {q}(t)}{|\Im {q}(t)|}|\Im {q}(t)|. \] Then we can rewrite $\exp \{ q(t)\}$ as \[ \begin{array}{lll}
e^{q(t)}
&=&
e^{q_0(t)+\Im {q}(t)}
\\
&=&
e^{q_0(t)}e^{\Im {q}(t)}
\\
&=&
e^{q_0(t)}e^{\frac{\Im {q}(t)}{|\Im {q}(t)|}|\Im {q}(t)|}
\\
&=&
e^{q_0(t)}[\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|)]
\end{array}
\] If $\Im q(t)=0$, then $q(t)$ is real-valued and the equality (\ref{EXD}) obviously holds. If $\Im q(t)\neq 0$, differentiating $\exp \{ q(t)\}$ with respect to $t$, we have \[ \begin{array}{lll} && \displaystyle\frac{\rm d}{\rm dt} e^{q(t)} \\ &=&
\displaystyle\frac{\rm d}{\rm dt}\Big\{ e^{q_0(t)}[\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|)]\Big\} \\ &=&
\displaystyle q_0'(t)e^{q_0(t)}[\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|)] \\ && +
\displaystyle e^{q_0(t)}\frac{\rm d}{\rm dt}[ \cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|)] \\ &=&
\displaystyle q_0'(t)e^{q_0(t)} \Big [\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|) \Big] \\ && +
\displaystyle e^{q_0(t)}\Big[- \sin (|\Im {q}(t)|) \frac{\rm d}{\rm dt} |\Im {q}(t)|
+ \frac{\Im {q}'(t)|\Im {q}(t)|- \Im {q}(t) \frac{\rm d}{\rm dt} |\Im {q}(t)|}{|\Im {q}(t)|^2}\sin( |\Im {q}(t)|) \\ && + \hspace{1cm}
\displaystyle \frac{\Im {q}(t)}{|\Im {q}(t)|}\cos( |\Im {q}(t)|) \frac{\rm d}{\rm dt} |\Im {q}(t)|
\Big] \end{array} \]
Since $q(t)q'(t) = q'(t)q(t)$, we have $ \frac{\Im {q}'(t)|\Im {q}(t)|- \Im {q}(t) \frac{\rm d}{\rm dt} |\Im {q}(t)|}{|\Im {q}(t)|^2}=0$, and it follows that \[ \begin{array}{lll} && \displaystyle\frac{\rm d}{\rm dt} e^{q(t)} \\ &=&
\displaystyle q_0'(t)e^{q_0(t)} \Big [\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|) \Big ] \\ && +
\displaystyle e^{q_0(t)}\Big[- \sin (|\Im {q}(t)|) \frac{\rm d}{\rm dt} |\Im {q}(t)|
+ \frac{\Im {q}(t)}{|\Im {q}(t)|}\cos( |\Im {q}(t)|) \frac{\rm d}{\rm dt} |\Im {q}(t)|
\Big]
\\ &=&
\displaystyle q_0'(t)e^{q_0(t)} \Big [\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|) \Big] \\ && +
\displaystyle e^{q_0(t)}\frac{\Im {q}(t)}{|\Im {q}(t)|} \frac{\rm d}{\rm dt} |\Im {q}(t)|
\Big[- \Im {q}^{-1}(t) |\Im {q}(t)| \sin (|\Im {q}(t)|)
+ \cos( |\Im {q}(t)|)
\Big]. \end{array} \] Note that \[
\Im {q}^{-1}(t) =\frac{\overline{\Im {q}(t)}}{|\Im {q}(t)|^2}, \] and $-\overline{\Im {q}(t)}=\Im {q}(t)$. Thus, we have \begin{equation} \begin{array}{lll} && \displaystyle\frac{\rm d}{\rm dt} e^{q(t)}
\\ &=&
\displaystyle q_0'(t)e^{q_0(t)} \Big [\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|) \Big] \\ && +
\displaystyle \frac{\rm d}{\rm dt} |\Im {q}(t)|\frac{\Im {q}(t)}{|\Im {q}(t)|} e^{q_0(t)} \Big[
\cos( |\Im {q}(t)|)+ \frac{\Im {q}(t)}{|\Im {q}(t)|} \sin (|\Im {q}(t)|)
\Big]. \end{array} \label{De} \end{equation} From $q(t)q'(t) = q'(t)q(t)$, we conclude that $\Im q(t) \Im q'(t) = \Im q'(t) \Im q(t)$. Writing $\Im {q}(t)=q_1(t) \mathbf{i} + q_2 (t) \mathbf{j} + q_3 (t) \mathbf{k}$, we obtain \[ \begin{array}{lll} q_1(t)q_2'(t) &=& q_1'(t)q_2(t), \\ q_1(t)q_3'(t) &=& q_1'(t)q_3(t), \\ q_2(t)q_3'(t) &=& q_2'(t)q_3(t). \end{array} \] By these equalities, we have \[ \begin{array}{lll}
\displaystyle \frac{\rm d}{\rm dt} |\Im {q}(t)|\frac{\Im {q}(t)}{|\Im {q}(t)|} &=& \displaystyle \frac{q_1(t)q_1'(t)+q_2(t)q_2'(t)+q_3(t)q_3'(t)}{\sqrt{q_1^2(t)+q_2^2(t)+q_3^2(t)}}\displaystyle \frac{q_1(t) \mathbf{i}+q_2(t)\mathbf{j}+q_3(t)\mathbf{k}}{\sqrt{q_1^2(t)+q_2^2(t)+q_3^2(t)}} \\ &=& \displaystyle \frac{[q_1^2(t)+q_2^2(t)+q_3^2(t)][q_1'(t) \mathbf{i}+q_2'(t)\mathbf{j}+q_3'(t)\mathbf{k}]}{ q_1^2(t)+q_2^2(t)+q_3^2(t) } \\ &=& \Im q'(t). \end{array} \] Consequently, it follows from (\ref{De}) that \[ \begin{array}{lll} && \displaystyle\frac{\rm d}{\rm dt} e^{q(t)}
\\ &=&
\displaystyle q_0'(t)e^{q_0(t)} \Big [\cos (|\Im {q}(t)|) + \frac{\Im {q}(t)}{|\Im {q}(t)|}\sin( |\Im {q}(t)|) \Big] \\ && + \displaystyle \Im q'(t) e^{q_0(t)} \Big[
\cos( |\Im {q}(t)|)+ \frac{\Im {q}(t)}{|\Im {q}(t)|} \sin (|\Im {q}(t)|)
\Big]
\\
&=&(q_0'(t)+\Im q'(t))e^{q(t)}
\\
&=&q'(t)e^{q(t)}. \end{array} \] This completes the proof of Proposition \ref{Pro5.1}.
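Proposition \ref{Pro5.1} can be sanity-checked numerically. In the Python sketch below (helper names are ours) we take $q(t)=q_0t$ for a fixed quaternion $q_0$, so that $q(t)q'(t)=q'(t)q(t)$ holds automatically, and compare a central finite difference of $\exp q(t)$ with $[\exp q(t)]\,q'(t)$:

```python
import math

def qmul(p, q):  # Hamilton product on (w, x, y, z) 4-tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qexp(q):
    # quaternionic Euler form of exp
    w, v = q[0], q[1:]
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    ew = math.exp(w)
    if n == 0.0:
        return (ew, 0.0, 0.0, 0.0)
    s = ew * math.sin(n) / n
    return (ew * math.cos(n), s*v[0], s*v[1], s*v[2])

q0 = (0.2, 0.5, -0.3, 0.7)       # q(t) = q0 t, so q(t) q'(t) = q'(t) q(t)
def q_of(t):
    return tuple(t * c for c in q0)

t, h = 1.3, 1e-5
num = tuple((a - b) / (2*h)
            for a, b in zip(qexp(q_of(t + h)), qexp(q_of(t - h))))
rhs = qmul(qexp(q_of(t)), q0)    # [exp q(t)] q'(t)
err = max(abs(a - b) for a, b in zip(num, rhs))
print(err)
```

Replacing $q_0t$ by a path whose direction rotates would violate the commutativity condition, and the same check then fails, which matches the discussion in Remark \ref{Rem5.2}.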
If a matrix $A\in \mathbb{H}^{2\times 2}$, the {\em exponential} of $A$ is defined by \[ \exp A= \sum_{n=0}^{\infty} \frac{ A^n} {n!}=E+\frac{ A} {1!}+\frac{ A^2} {2!}+\cdots. \] Moreover, for $t\in \mathbb{R}$, \[ \exp (A t)= \sum_{n=0}^{\infty} \frac{ A^nt^n} {n!}=E+\frac{ A t} {1!}+\frac{ A^2 t^2} {2!}+\cdots. \]
\begin{proposition}\label{Pro5.2}
The series $\exp A= \sum_{n=0}^{\infty} \frac{ A^n} {n!}$ is absolutely convergent. The series $\exp (A t)= \sum_{n=0}^{\infty} \frac{ A^nt^n} {n!}$ is uniformly convergent on any finite interval $I$.
\end{proposition}
{\bf PROOF.} First, we show that the series $\exp A= \sum_{n=0}^{\infty} \frac{ A^n} {n!}$ is absolutely convergent. In fact, for any positive integer $k$, we have \[
\|\frac{A^k}{k!}\|\leq \frac{\|A\|^k}{k!}. \]
For any quaternionic matrix $A$, $\|A\|$ is a real number, and the real series $\sum_{n=0}^{\infty} \frac{ \|A\|^n} {n!}$ is convergent. Thus, the series $\exp A= \sum_{n=0}^{\infty} \frac{ A^n} {n!}$ is absolutely convergent.
Now we prove the second conclusion. In fact, for any finite interval $I$, without loss of generality, we assume that $|t|\leq \mu$ for all $t\in I$. Then we have
\[
\|\frac{(At)^k}{k!}\|\leq \frac{\|A\|^k|t|^k}{k!}\leq \frac{\|A\|^k\mu^k}{k!}. \]
Note that the real series $\sum_{k=0}^{\infty}\frac{\|A\|^k\mu^k}{k!}$ is convergent. Thus, by the Weierstrass test, the series $ \sum_{n=0}^{\infty} \frac{ A^nt^n} {n!}$ is uniformly convergent on any finite interval $I$.
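The bound $\|A^k/k!\|\le\|A\|^k/k!$ behind this proof can be observed numerically with the Frobenius norm, which is submultiplicative over $\mathbb{H}$ since $|pq|=|p||q|$. A small Python sketch (helper names and the concrete test matrix are ours):

```python
import math

def qmul(p, q):  # Hamilton product on (w, x, y, z) 4-tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

ZERO, ONE = (0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)

def matmul(A, B):  # 2x2 quaternionic matrix product
    return [[qadd(qmul(A[i][0], B[0][j]), qmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def frob(A):  # Frobenius norm: sqrt of the sum of squared quaternion moduli
    return math.sqrt(sum(c*c for i in range(2) for j in range(2)
                         for c in A[i][j]))

# A = [[i, j], [0, i+j]]
A = [[(0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)],
     [ZERO, (0.0, 1.0, 1.0, 0.0)]]

P, fact, ok = [[ONE, ZERO], [ZERO, ONE]], 1.0, True
for k in range(1, 12):
    P = matmul(P, A)                     # P = A^k
    fact *= k
    ok = ok and frob(P) / fact <= frob(A)**k / fact + 1e-9
print(ok)
```

The terms $\|A\|^k\mu^k/k!$ dominate those of $\exp(At)$ on $|t|\le\mu$, which is exactly the Weierstrass-test comparison used above.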
\begin{proposition}\label{Pro5.3}
For two matrices $A,B\in \mathbb{H}^{2\times2}$, if $AB=BA$, then \[ \exp(A+B)=\exp A\exp B. \] \end{proposition}
From Proposition \ref{Pro5.1}, we have
\begin{proposition}\label{Pro5.4} If $A(t)$ is differentiable and if $A(t)A'(t) = A'(t)A(t)$, then \[ [\exp A(t) ]' = [\exp A(t) ] A'(t).
\] \end{proposition}
\begin{definition}\label{Def5.1}
Let $x_1(t),x_2(t)$ be any two solutions of Eq. (\ref{2.1}) on $I$. Then we call
\[ M(t)=(x_1(t),x_2(t))^T = \left ( \begin{array}{ll}
{x} _{11}(t) & {x} _{12}(t)
\\
{x} _{21}(t) & {x} _{22}(t) \end{array} \right ) \] a {\em solution matrix} of Eq. (\ref{2.1}). Moreover, if $x_1(t),x_2(t)$ are two right linearly independent solutions of Eq. (\ref{2.1}) on $I$, the solution matrix is said to be a {\em fundamental matrix} of Eq. (\ref{2.1}). Further, if $M(t_0)=E$ (the identity matrix), $M(t)$ is said to be a {\em normal fundamental matrix}. \end{definition}
\begin{remark}\label{Rem5.3}
Note that the fundamental matrix is not unique. From Definition \ref{Def5.1}, it is easy to see that if $M(t)$ is a {\em solution matrix} of Eq. (\ref{2.1}), then $M(t)$ also satisfies Eq. (\ref{2.1}), that is, \[ \dot{M}(t)=A(t) M(t). \] \end{remark}
From Theorem \ref{Th4.2}, we know that if $M(t)$ is a fundamental matrix of Eq. (\ref{2.1}), the Wronskian determinant $W_{QDE}(t)\neq0$. By Theorem \ref{Th4.3}, we have
\begin{theorem}\label{Th5.1}
Let $M(t)$ be a fundamental matrix of Eq. (\ref{2.1}). Any solution $x(t)$ of Eq. (\ref{2.1}) can be represented by \[ x(t)=M(t) q, \] where $q$ is a constant quaternionic vector. Moreover, for given IVP $x(t_0)=x^0$, \[ x(t)=M(t)M^{-1}(t_0)x^0. \] \end{theorem}
By Theorem \ref{Th4.2}, we also have
\begin{theorem}\label{Th5.2}
A solution matrix $M(t)$ of Eq. (\ref{2.1}) on $I$ is a fundamental matrix if and only if $\rm{ddet} M(t)\neq0$ (or $W_{QDE}(t)\neq 0$) on $I$. Moreover, if $\rm{ddet} M(t_0)\neq0$ (or $W_{QDE}(t_0)\neq0$) for some $t_0\in I$, then $\rm{ddet} M(t)\neq0$ (or $W_{QDE}(t)\neq0$) for all $t\in I$.
\end{theorem}
\noindent {\bf Example 1} Show that \[ M(t)=(\phi_1(t),\phi_2(t))^T = \left ( \begin{array}{ll} e^{\boldsymbol{i} t} & e^{\boldsymbol{i} t}\int_{t_{0}}^{t}e^{-\boldsymbol{i} s}\cdot e^{\boldsymbol{j} s}ds \\ 0 & \,\,\,\,\,\,\,\,\,\,\,\,\,\, e^{\boldsymbol{j} t} \end{array} \right ) \] is a fundamental matrix of the QDEs \begin{equation}
\left ( \begin{array}{ll}
\dot {x}_1
\\ \dot {x}_2 \end{array} \right ) = \left ( \begin{array}{ll}
\mathbf{i} & 1
\\
0 & \mathbf{j} \end{array} \right ) \left ( \begin{array}{ll}
{x}_1
\\
{x}_2 \end{array} \right ) . \label{Ex5.1} \end{equation}
{\bf Proof.} First, we show that $M(t)$ is a solution matrix of Eq.(\ref{Ex5.1}). In fact, let $\phi_1(t)=\left ( \begin{array}{ll}
e^{\boldsymbol{i} t}
\\
0 \end{array} \right )$, in view of $ (\boldsymbol{i} t)' (\boldsymbol{i}t)= (\boldsymbol{i}t) (\boldsymbol{i}t )'$, then \[ \dot{\phi}_1(t)=\left ( \begin{array}{ll} \boldsymbol{i} e^{\boldsymbol{i} t}
\\
0 \end{array} \right ) = \left ( \begin{array}{ll}
\boldsymbol{i} & 1
\\
0 & \boldsymbol{j} \end{array} \right ) \left ( \begin{array}{ll} e^{\boldsymbol{i} t}
\\
0 \end{array} \right ) = \left ( \begin{array}{ll}
\boldsymbol{i} & 1
\\
0 & \boldsymbol{j} \end{array} \right ) \phi_1(t), \] which implies that $\phi_1(t)$ is a solution of Eq.(\ref{Ex5.1}). Similarly, let $\phi_2(t)=\left ( \begin{array}{ll}
e^{\boldsymbol{i} t}\int_{t_{0}}^{t}e^{-\boldsymbol{i} s}\cdot e^{\boldsymbol{j} s}ds
\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\, e^{\boldsymbol{j} t} \end{array} \right )$, then \[ \dot{\phi}_2(t)=\left ( \begin{array}{ll} \boldsymbol{i}e^{\boldsymbol{i} t}\int_{t_{0}}^{t}e^{-\boldsymbol{i} s}\cdot e^{\boldsymbol{j} s}ds+e^{\boldsymbol{j} t}
\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \boldsymbol{j} e^{\boldsymbol{j} t} \end{array} \right ) = \left ( \begin{array}{ll}
\boldsymbol{i} & 1
\\
0 & \boldsymbol{j} \end{array} \right ) \left ( \begin{array}{ll}
e^{\boldsymbol{i} t}\int_{t_{0}}^{t}e^{-\boldsymbol{i} s}\cdot e^{\boldsymbol{j} s}ds
\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,e^{\boldsymbol{j} t} \end{array} \right ) = \left ( \begin{array}{ll}
\boldsymbol{i} & 1
\\
0 & \boldsymbol{j} \end{array} \right ) \phi_2(t), \]
which implies that $\phi_2(t)$ is another solution of Eq.(\ref{Ex5.1}). Therefore, $M(t)=(\phi_1(t),\phi_2(t))^\top$ is a solution matrix of Eq.(\ref{Ex5.1}).
Secondly, by Theorem \ref{Th5.2}, taking $t_0=0$, we have \[ \rm{ddet} M(t_0)= \rm{rdet} (M(t_0) M^{+}(t_0)) \neq0, \] and hence $\rm{ddet} M(t)\neq0$ for all $t$. Therefore, $M(t)$ is a fundamental matrix of Eq. (\ref{Ex5.1}).
From Theorem \ref{Th5.1} and Theorem \ref{Th5.2}, the following corollaries follow immediately.
\begin{corollary}\label{Cor5.1} Let $M(t)$ be a fundamental matrix of Eq. (\ref{2.1}) on $I$, and let $Q\in \mathbb{H}^{2\times 2}$ be an invertible constant quaternionic matrix. Then $M(t)Q$ is also a fundamental matrix of Eq. (\ref{2.1}) on $I$. \end{corollary}
{\bf PROOF.} As pointed out in Remark \ref{Rem5.3}, any solution matrix $M(t)$ satisfies Eq. (\ref{2.1}), that is, \[ \dot{M}(t)=A(t) M(t). \] The converse also holds. Taking $\Psi(t)=M(t)Q$, differentiating it, and noticing that $M(t)$ is a fundamental matrix satisfying Eq. (\ref{2.1}), we have \[ \Psi'(t)= M'(t)Q= A(t)M(t)Q=A(t)\Psi(t), \] which implies that $\Psi(t)$ is a solution matrix of Eq. (\ref{2.1}). Moreover, since $Q$ is invertible, we have \[ \rm{ddet} \Psi(t)=\rm{ddet} M(t)Q= \rm{ddet} M(t)\cdot \rm{ddet}Q \neq0. \] Therefore, $\Psi(t)=M(t)Q$ is also a fundamental matrix of Eq. (\ref{2.1}).
\begin{corollary}\label{Cor5.2}
If $M(t),\Psi(t)$ are two fundamental matrices of Eq. (\ref{2.1}) on $I$, there exists an invertible constant quaternionic matrix $Q\in \mathbb{H}^{2\times 2}$ such that \[ \Psi(t)=M(t)Q,\,\,\,t \in I. \] \end{corollary} \indent {\bf PROOF.} Since $M(t)$ is a fundamental matrix, $M(t)$ is invertible; denote the inverse by $M^{-1}(t)$. Set \begin{equation} \Psi(t)=M(t) X(t),\,\,t\in I. \label{X} \end{equation} Since $M(t),\Psi(t)$ are fundamental matrices, ${\rm ddet}M(t)\neq0$ and ${\rm ddet}\Psi(t)\neq0$; thus, ${\rm ddet}X(t)\neq0$. Differentiating (\ref{X}) gives \[ \Psi'(t)=M'(t)X(t)+M(t)X'(t)=A(t)M(t)X(t)+M(t)X'(t)=A(t)\Psi(t)+M(t)X'(t). \] On the other hand, $\Psi(t)$ is a fundamental matrix, which implies that \[ \Psi'(t)=A(t)\Psi(t). \] Comparing the above two equalities, we have \[ M(t)X'(t)=0. \] Noticing that $M(t)$ is invertible, we obtain $X'(t)=0$, which implies that $X(t)$ is a constant matrix. Setting $Q=X(t)$ completes the proof.
Now consider the following quaternion-valued equations with constant quaternionic coefficients \begin{equation} \dot{x}(t) = A x(t), \label{Constant} \end{equation} where $A\in \mathbb{H}^{2\times 2} $ is a constant quaternion matrix. Then we have
\begin{theorem}\label{Th5.3}
$M(t)= \exp\{A t\}$ is a fundamental matrix of (\ref{Constant}). Moreover, any solution $x(t)$ of (\ref{Constant}) can be represented by \[ x(t)=\exp\{A t\} q, \] where $q\in \mathbb{H}^{2} $ is an arbitrary constant quaternionic vector. For the IVP $x(t_0)=x^0$, the solution $x(t)$ of (\ref{Constant}) can be represented by \[ x(t)=\exp\{A( t-t_0)\} x^0. \] \end{theorem}
{\bf Proof.} Note that $A\in \mathbb{H}^{2\times 2} $ is a constant quaternion matrix. By using
$ A t \cdot (A t)'= (A t)' \cdot A t $, we have
\[ M'(t)= [\exp\{A t\}]'=A \exp\{A t\}=A M(t),
\]
which implies $ \exp\{A t\}$ is a solution matrix of (\ref{Constant}). Moreover, $\rm{ddet} M(0)=\rm{ddet} \exp(A\cdot 0)=\rm{ddet} E=1\neq 0$. Consequently,
$ \rm{ddet} M(t)\neq 0$. Therefore, $ \exp\{A t\}$ is a fundamental matrix of (\ref{Constant}).
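Theorem \ref{Th5.3} can be verified numerically by truncating the series for $\exp\{At\}$ and comparing a central finite difference of $M(t)$ with $AM(t)$. A Python sketch (quaternion 4-tuples; all helper names and the concrete upper triangular test matrix are ours):

```python
import math

def qmul(p, q):  # Hamilton product on (w, x, y, z) 4-tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qadd(p, q): return tuple(a + b for a, b in zip(p, q))
def smul(s, q): return tuple(s * c for c in q)

ZERO, ONE = (0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)
EYE = [[ONE, ZERO], [ZERO, ONE]]

def matmul(A, B):
    return [[qadd(qmul(A[i][0], B[0][j]), qmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def matexp(A, t, terms=40):
    # truncated series E + At/1! + (At)^2/2! + ...
    At = [[smul(t, A[i][j]) for j in range(2)] for i in range(2)]
    acc, term = [row[:] for row in EYE], [row[:] for row in EYE]
    for n in range(1, terms):
        term = matmul(term, At)
        term = [[smul(1.0 / n, term[i][j]) for j in range(2)] for i in range(2)]
        acc = [[qadd(acc[i][j], term[i][j]) for j in range(2)] for i in range(2)]
    return acc

A = [[(0.0, 0.0, 0.0, 1.0), ONE],
     [ZERO, (0.0, 0.0, 0.0, 1.0)]]       # [[k, 1], [0, k]]
t, h = 0.8, 1e-5
Mp, Mm, M = matexp(A, t + h), matexp(A, t - h), matexp(A, t)
AM = matmul(A, M)
err = max(abs((Mp[i][j][c] - Mm[i][j][c]) / (2*h) - AM[i][j][c])
          for i in range(2) for j in range(2) for c in range(4))
print(err)
```

The truncation at 40 terms is far below floating-point precision for this $A$ and $t$, so the residual is dominated by the finite-difference step.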
Now consider the diagonal homogeneous system \begin{equation} \left ( \begin{array}{ll}
\dot{x} _{1}(t)
\\ \dot{x} _{2}(t) \end{array} \right ) = \left ( \begin{array}{ccc}
{a} _{1}(t) & 0
\\ 0 & {a} _{2}(t) \end{array} \right ) \left ( \begin{array}{ll}
{x} _{1}(t)
\\
{x} _{2}(t) \end{array} \right ) \label{diag} \end{equation}
\begin{theorem}\label{Th5.4}
Assume that \begin{equation} a_i(t) \int_{t_0}^{t} a_i(s) ds = \int_{t_0}^{t} a_i(s) ds \,\, a_i(t),\,\,\,i=1,2, \label{Condition1} \end{equation} holds. The fundamental matrix can be chosen as \[ M(t)= \left ( \begin{array}{ccc}
\exp\{ \int_{t_0}^{t} a_1(s) ds\} & 0
\\ 0 & \exp\{ \int_{t_0}^{t} a_2(s) ds\} \end{array} \right ). \] Then the solution of the diagonal system (\ref{diag}) with the initial value $x(t_0)=x^0$ is given by \[ x(t)=M(t)x^0. \] \end{theorem}
\section{\bf Algorithm for computing fundamental matrix }
Since the fundamental matrix plays a central role in solving QDEs, in this section we provide an algorithm for computing the fundamental matrix of linear QDEs with constant coefficients.
\subsection{ Method 1: using expansion of $\exp \{At\}$}
From Theorem \ref{Th5.3}, we know that $\exp\{A t\}$ is a fundamental matrix of (\ref{Constant}). So if the coefficient matrix is not very complicated, we can use the definition of $\exp (At)$ to compute the fundamental matrix of linear QDEs with constant coefficients.
\begin{theorem}\label{Th6.1}
If $A={\rm diag}( \lambda_1,\lambda_2)\in \mathbb{H}^{2\times2}$ is a diagonal matrix, then \[ \exp \{At \}= \left ( \begin{array}{ccc}
\exp\{ \lambda_1t\} &0
\\
0& \exp\{ \lambda_2 t\} \end{array} \right ). \] \end{theorem}
{\bf Proof.} By the expansion, \[ \exp \{ At \}=E+ \left ( \begin{array}{ccc}
\lambda_1 &0
\\
0& \lambda_2 \end{array} \right )\frac{t}{1!} +
\left ( \begin{array}{ccc}
\lambda_1 &0
\\
0& \lambda_2 \end{array} \right )^2\frac{t^2}{2!} +\cdots = \left ( \begin{array}{ccc}
\exp\{ \lambda_1t\} &0
\\
0& \exp\{ \lambda_2 t\} \end{array} \right ). \]
By Proposition \ref{Pro5.3}, we can decompose the matrix into simpler parts and use the expansion to compute the fundamental matrix: \[ A={\rm diag}\, A+ N, \] where ${\rm diag}\, A$ denotes the diagonal part of $A$ and $N$ is a nilpotent matrix, that is, $N^n=0$ for some finite $n$. Note that Proposition \ref{Pro5.3} applies only when the two parts commute.
\noindent {\bf Example 2} Find a fundamental matrix of the following QDEs \[ \dot {x}= \left ( \begin{array}{ll}
\mathbf{k} & 1
\\
0 & \mathbf{k} \end{array} \right )x,\,\,\,\, x=(x_1,x_2)^T. \]
{\bf Answer.} We see that \[ \left ( \begin{array}{ll}
\mathbf{k} & 1
\\
0 & \mathbf{k} \end{array} \right ) =\left ( \begin{array}{ll}
\mathbf{k} & 0
\\
0 & \mathbf{k} \end{array} \right )+\left ( \begin{array}{ll}
0 & 1
\\
0 & 0 \end{array} \right ). \] Noticing that $\left ( \begin{array}{ll}
\mathbf{k} & 0
\\
0 & \mathbf{k} \end{array} \right ) \left ( \begin{array}{ll}
0 & 1
\\
0 & 0 \end{array} \right ) = \left ( \begin{array}{ll}
0 & 1
\\
0 & 0 \end{array} \right )\left ( \begin{array}{ll}
\mathbf{k} & 0
\\
0 & \mathbf{k} \end{array} \right ) $, by Proposition \ref{Pro5.3} and Theorem \ref{Th6.1}, we have \[ \begin{array}{lll} \exp(At) &=& \exp\left\{\left ( \begin{array}{ll}
\mathbf{k} & 0
\\
0 & \mathbf{k} \end{array} \right )t\right\} \exp\left\{\left ( \begin{array}{ll}
0 & 1
\\
0 & 0 \end{array} \right )t\right\} \\ &=& \left ( \begin{array}{ll}
e^{\mathbf{k}t} & 0
\\
0 & e^{\mathbf{k}t} \end{array} \right ) \Big( E+ \left ( \begin{array}{ccc} 0 & 1
\\
0 & 0 \end{array} \right )\frac{t}{1!} +
\left ( \begin{array}{ccc}
0 & 1
\\
0 & 0 \end{array} \right )^2\frac{t^2}{2!} +\cdots \Big) . \end{array} \] Note that \[
\left ( \begin{array}{ccc}
0 & 1
\\
0 & 0 \end{array} \right )^2 = \left ( \begin{array}{ccc}
0 & 0
\\
0 & 0 \end{array} \right ),\,\, \left ( \begin{array}{ccc}
0 & 1
\\
0 & 0 \end{array} \right )^3 = \left ( \begin{array}{ccc}
0 & 0
\\
0& 0 \end{array} \right ) ,\cdots. \] Then the fundamental matrix is \[ \exp \{ At \} = \left ( \begin{array}{ll}
e^{\mathbf{k}t} & 0
\\
0 & e^{\mathbf{k}t} \end{array} \right ) \left ( \begin{array}{ll} 1 & t
\\ 0 & 1 \end{array} \right )
= \left ( \begin{array}{ll}
e^{\mathbf{k}t} & t e^{\mathbf{k}t}
\\
0 & e^{\mathbf{k}t} \end{array} \right ). \] For the matrix $\left ( \begin{array}{ll} \lambda & 1
\\
0 & \lambda \end{array} \right )$, $\lambda\in\mathbb{H}$, a similar computation yields the following result.
\begin{theorem}\label{Th6.1.1}
For any $\lambda\in\mathbb{H}$, we have \[ \exp \left\{ \left ( \begin{array}{ll} \lambda & 1
\\
0 & \lambda \end{array} \right )t \right\} = \left ( \begin{array}{ll}
e^{\lambda t} & 0
\\
0 & e^{\lambda t} \end{array} \right ) \left ( \begin{array}{ll} 1 & t
\\ 0 & 1 \end{array} \right ). \] \end{theorem} If we generalize this result to $n$-dimensional case, we have the following formula.
\begin{theorem}\label{Th6.1.2}
For any $\lambda\in\mathbb{H}$ and any $n\times n$ Jordan block, we have \[ \exp \left\{ \left ( \begin{array}{ccccc} \lambda & 1 &0 \cdots 0 & 0 \\ 0 & \lambda &1 \cdots 0 & 0 \\ \,\,& \,\,& \,\,\,\,\,\,\, \ddots \,\,\ddots & \\ 0 & 0 &0\cdots \lambda & 1 \\ 0 & 0 &0\cdots 0& \lambda \end{array} \right )t \right\} = {\rm diag}( e^{\lambda t}, \cdots, e^{\lambda t}) \left ( \begin{array}{ccccc} 1 & t &\frac{t^{2}}{2!} \cdots \frac{t^{n-2}}{(n-2)!} & \frac{t^{n-1}}{(n-1)!} \\ 0 & 1 &t \cdots \frac{t^{n-3}}{(n-3)!} & \frac{t^{n-2}}{(n-2)!} \\ \,\,& \,\,& \,\,\,\,\, \ddots \,\,\ddots & \\ 0 & 0 &0 \cdots 1 & t \\ 0 & 0 &0 \cdots 0 & 1 \end{array} \right ). \] \end{theorem}
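The $2\times 2$ case of this formula can be checked against the differential equation directly: with $\lambda=\mathbf{k}$, the matrix $M(t)={\rm diag}(e^{\lambda t},e^{\lambda t})\left(\begin{array}{ll}1&t\\0&1\end{array}\right)$ should satisfy $\dot M=\left(\begin{array}{ll}\lambda&1\\0&\lambda\end{array}\right)M$. A Python finite-difference sketch (helper names are ours):

```python
import math

def qmul(p, q):  # Hamilton product on (w, x, y, z) 4-tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qadd(p, q): return tuple(a + b for a, b in zip(p, q))
def smul(s, q): return tuple(s * c for c in q)

def qexp(q):
    w, v = q[0], q[1:]
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    ew = math.exp(w)
    if n == 0.0:
        return (ew, 0.0, 0.0, 0.0)
    s = ew * math.sin(n) / n
    return (ew * math.cos(n), s*v[0], s*v[1], s*v[2])

LAM = (0.0, 0.0, 0.0, 1.0)                 # lambda = k
ZERO, ONE = (0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)
A = [[LAM, ONE], [ZERO, LAM]]

def M(t):
    e = qexp(smul(t, LAM))                 # e^{lambda t}
    return [[e, smul(t, e)], [ZERO, e]]    # diag(e, e) * [[1, t], [0, 1]]

def matmul(A, B):
    return [[qadd(qmul(A[i][0], B[0][j]), qmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

t, h = 0.6, 1e-5
Mp, Mm, AM = M(t + h), M(t - h), matmul(A, M(t))
err = max(abs((Mp[i][j][c] - Mm[i][j][c]) / (2*h) - AM[i][j][c])
          for i in range(2) for j in range(2) for c in range(4))
print(err)
```

The check works for quaternionic $\lambda$ because $\lambda E$ always commutes with the nilpotent part.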
We remark that this method cannot be applied in general when the coefficient matrix $A$ is complicated. Usually, the two parts of the decomposition of $A$ do not commute. For example, \[ \left ( \begin{array}{ll}
\mathbf{i} & \mathbf{k}
\\
0 & \mathbf{j} \end{array} \right ) =\left ( \begin{array}{ll}
\mathbf{i} & 0
\\
0 & \mathbf{j} \end{array} \right )+\left ( \begin{array}{ll}
0 & \mathbf{k}
\\
0 & 0 \end{array} \right ). \] We see that \[ \left ( \begin{array}{ll}
\mathbf{i} & 0
\\
0 & \mathbf{j} \end{array} \right ) \left ( \begin{array}{ll}
0 & \mathbf{k}
\\
0 & 0 \end{array} \right ) \neq \left ( \begin{array}{ll}
0 & \mathbf{k}
\\
0 & 0 \end{array} \right )\left ( \begin{array}{ll}
\mathbf{i} & 0
\\
0 & \mathbf{j} \end{array} \right ). \] In this case, this method cannot be used. Even for the simple matrix $\left ( \begin{array}{ll}
\mathbf{i} & 1
\\
0 & \mathbf{j} \end{array} \right )$, this method does not apply. So we need a more effective method to compute the fundamental matrix. In what follows, we use the eigenvalue and eigenvector theory to compute it.
\subsection {Method 2: eigenvalue and eigenvector theory}
Since the quaternions form a noncommutative algebra, eigenvalues of a matrix $A\in \mathbb{H}^{2\times 2}$ must be defined separately as left eigenvalues and right eigenvalues. A quaternion $\lambda$ is said to be a right eigenvalue of $A$ if \[ Ax=x \lambda \]
for some nonzero (column) vector $x$ with quaternion components. Similarly, a quaternion $\lambda$ is a left eigenvalue of $A$ if \[ Ax= \lambda x \]
for some nonzero (column) vector $x$ with quaternion components. Right and left eigenvalues are in general unrelated and usually different. It should be noted that there are many differences between the eigenvalue problems for complex (real) and quaternion matrices. Eigenvalues of complex matrices satisfy Brauer's theorem \cite{Br} for the inclusion of the eigenvalues, while right eigenvalues of quaternion matrices do not have this property. Another contrast in the eigenvalue problems for complex and quaternion matrices is that a complex $n\times n$ matrix cannot have more than $n$ complex eigenvalues, while it can have infinitely many quaternion left eigenvalues (see Theorem 2.1 in \cite{ZhangFZ}). Finally, it is well known that any eigenvalue of an $n\times n$ complex matrix must be a root of its characteristic polynomial; left and right eigenvalues of quaternion matrices do not share this property.
In this paper, we focus on right eigenvalues and eigenvectors, because we emphasize finding solutions of the form \[ x=q e^{\lambda t},\,\,\,q=(q_1,q_2)^T, \] where $\lambda$ is a quaternionic constant and $q$ is a constant quaternionic vector. Substituting it into Eq. (\ref{Constant}), we have \[ q \lambda e^{\lambda t} = A q e^{\lambda t}. \] Because $ e^{\lambda t}\neq 0$, it follows that \[ q \lambda = A q, \] or \begin{equation}
A q=q \lambda. \label{CQ} \end{equation} So if we find such eigenvalue $\lambda$ and eigenvector $q$ in (\ref{CQ}), we will find the solution of Eq. (\ref{Constant}). We also say that the eigenvalue $\lambda$ and eigenvector $q$ are the right eigenvalue and eigenvector of the QDEs (\ref{Constant}).
Notice also that if $0\neq \alpha\in \mathbb{H}$, then \[ A q = q \lambda \Rightarrow A q \alpha = q \alpha (\alpha^{-1} \lambda \alpha), \] so we can sensibly talk about the eigenline spanned by an eigenvector $q$, even though there may be many associated eigenvalues! In what follows, if \[ \theta=\alpha^{-1} \lambda \alpha, \] we say that $\theta$ is similar to $\lambda$. If there exists a nonsingular matrix $T$ such that \[ A=T^{-1}BT, \] we say that $A$ is similar to $B$, denoted by $A\sim B$.
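The eigenline observation can be illustrated numerically: if $Aq=q\lambda$ and $0\neq\alpha\in\mathbb{H}$, then $q\alpha$ is again an eigenvector, now with right eigenvalue $\alpha^{-1}\lambda\alpha$. A Python sketch (quaternion 4-tuples; all names and the concrete choices $A={\rm diag}(\mathbf{i},\mathbf{j})$, $q=e_1$, $\lambda=\mathbf{i}$ are ours):

```python
def qmul(p, q):  # Hamilton product on (w, x, y, z) 4-tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qadd(p, q): return tuple(a + b for a, b in zip(p, q))
def qconj(p): return (p[0], -p[1], -p[2], -p[3])
def qinv(p):
    n2 = sum(c * c for c in p)
    return tuple(c / n2 for c in qconj(p))

I, J = (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)
ZERO, ONE = (0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)

A = [[I, ZERO], [ZERO, J]]       # diag(i, j)
q = [ONE, ZERO]                  # eigenvector e1: A e1 = e1 * i
lam = I
alpha = (0.5, -1.0, 2.0, 0.25)   # arbitrary nonzero quaternion

def matvec(A, v):
    return [qadd(qmul(A[i][0], v[0]), qmul(A[i][1], v[1])) for i in range(2)]

qa = [qmul(q[0], alpha), qmul(q[1], alpha)]    # q * alpha
lam2 = qmul(qmul(qinv(alpha), lam), alpha)     # alpha^{-1} lam alpha
lhs = matvec(A, qa)
rhs = [qmul(qa[0], lam2), qmul(qa[1], lam2)]
err = max(abs(a - b) for u, v in zip(lhs, rhs) for a, b in zip(u, v))
print(err)
```

Since $\alpha^{-1}\lambda\alpha$ is generally a different quaternion from $\lambda$, the whole similarity class of $\lambda$ is attached to one eigenline, as remarked above.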
\begin{remark}\label{Rem6.1}
If $\lambda$ is a characteristic root of $A$, then so is $\alpha^{-1} \lambda \alpha$.
\end{remark}
From the definition of fundamental matrix, we have
\begin{theorem}\label{Th6.2}
If the matrix $A$ has two right linearly independent eigenvectors $q_1$ and $q_2$, corresponding to the eigenvalues $\lambda_1$ and $\lambda_2$ ($\lambda_1$ and $\lambda_2$ can be conjugate), then \[ M(t)=(q_1e^{\lambda_1 t},q_2e^{\lambda_2 t}) \] is a fundamental matrix of Eq. (\ref{Constant}). \end{theorem}
{\bf Proof.} From the above discussion, we know that $q_1e^{\lambda_1 t},q_2e^{\lambda_2 t}$ are two solutions of Eq. (\ref{Constant}). Thus, $M(t)$ is a solution matrix of Eq. (\ref{Constant}). Moreover, by the right linear independence of $q_1$ and $q_2$, we have \[ {\rm ddet}M(0)={\rm ddet}(q_1,q_2) \neq 0. \] Therefore, $M(t)$ is a fundamental matrix of Eq. (\ref{Constant}).
Now we need a lemma from \cite{Baker} (Proposition 2.4).
\begin{lemma}\label{Lem6.1}
Suppose that $\lambda_1,\lambda_2,\cdots,\lambda_r$ are distinct eigenvalues for $A$, no two of which are similar, and let $q_1,\cdots,q_r$ be corresponding eigenvectors. Then $q_1,\cdots,q_r$ are linearly independent. \end{lemma}
Then, by Lemma \ref{Lem6.1}, we obtain the following corollary.
\begin{corollary}
If the matrix $A$ has two eigenvalues $\lambda_1$ and $\lambda_2$ that are not similar to each other, with corresponding eigenvectors $q_1$ and $q_2$, then \[ M(t)=(q_1e^{\lambda_1 t},q_2e^{\lambda_2 t}) \] is a fundamental matrix of Eq. (\ref{Constant}). \end{corollary}
Now we need some results from \cite{Bren} (Theorems 11 and 12).
\begin{lemma}\label{Lem6.2} If $A$ is in triangular form, then every diagonal element is a characteristic root. \end{lemma}
\begin{lemma}\label{Lem6.3} Let a matrix of quaternion be in triangular form. Then the only characteristic roots are the diagonal elements (and the numbers similar to them). \end{lemma}
\begin{lemma}\label{Lem6.4}
Similar matrices have the same characteristic roots.
\end{lemma}
\begin{lemma}\label{Lem6.5} (\cite{Bren} Theorem 2) Every matrix of quaternion can be transformed into triangular form by a unitary matrix. \end{lemma}
\noindent {\bf Example 3} Find a fundamental matrix of the following QDEs \begin{equation} \dot {x}= \left ( \begin{array}{ll}
\mathbf{i} & \mathbf{j}
\\
0 & \mathbf{i+j} \end{array} \right )x,\,\,\,\, x=(x_1,x_2)^T. \label{Ex6.2} \end{equation}
{\bf Answer:} From Lemma \ref{Lem6.3} and Lemma \ref{Lem6.4}, we see that $\lambda_1=\mathbf{i}$ and $\lambda_2=\mathbf{i}+\mathbf{j}$ are eigenvalues. To find an eigenvector for $\lambda_1=\mathbf{i}$, we consider the following equation \[ Aq=q\lambda_1, \] that is \begin{equation}
\left\{ \begin{array}{ccc} \mathbf{ i} q_1 + \mathbf{j}q_2 & =& q_1 \mathbf{i}
\\ (\mathbf{i}+\mathbf{j}) q_2 &=& q_2 \mathbf{i} \end{array} \right. \label{EV1} \end{equation} From the second equation of (\ref{EV1}), we can take $q_2=0$. Then the first equation of (\ref{EV1}) reduces to $\mathbf{i}q_1=q_1\mathbf{i}$, and we may take $q_1=1$. So we can take one eigenvector as \[ \nu_1=\left ( \begin{array}{ll}
q_1
\\ q_2 \end{array} \right ) =\left ( \begin{array}{ll} 1
\\ 0 \end{array} \right ) \] To find the eigenvector of $\lambda_2=\mathbf{i}+\mathbf{j}$, we consider the following equation \[ Aq=q\lambda_2, \] that is \begin{equation}
\left\{ \begin{array}{ccc}
\mathbf{i} q_1 + \mathbf{j} q_2 & =& q_1 (\mathbf{i}+\mathbf{j})
\\ (\mathbf{i}+\mathbf{j}) q_2 &=& q_2 (\mathbf{i} + \mathbf{j}) \end{array} \right. \label{EV2} \end{equation} From the second equation of (\ref{EV2}), we may take $q_2=1$; then the first equation is satisfied by $q_1=1$. So we can take one eigenvector as \[ \nu_2=\left ( \begin{array}{ll}
q_1
\\ q_2 \end{array} \right ) =\left ( \begin{array}{ll} 1
\\ 1 \end{array} \right ) \] Since \[ \begin{array}{lll} {\rm ddet}(\nu_1,\nu_2) &=& {\rm ddet}\left ( \begin{array}{ll}
1 & 1
\\ 0 & 1 \end{array} \right ) = \displaystyle {\rm rdet}\Big[\left ( \begin{array}{ll}
1 & 1
\\ 0 & 1 \end{array} \right )\left ( \begin{array}{ll}
1 & 0
\\ 1 & 1 \end{array} \right ) \Big] \neq0, \end{array} \]
the eigenvectors $\nu_1$ and $\nu_2$ are linearly independent. Taking
\[
M(t)=(\nu_1 e^{\lambda_1 t},\nu_2 e^{\lambda_2 t})=\left ( \begin{array}{ll}
e^{\mathbf{i} t} & e^{(\mathbf{i}+\mathbf{j}) t}
\\ 0 & e^{(\mathbf{i}+\mathbf{j}) t} \end{array} \right ),
\]
from Theorem \ref{Th6.1}, $M(t)$ is a fundamental matrix. We can also verify this directly from the definition of a fundamental matrix, which is consistent with Theorem \ref{Th6.1}. First, we show that $M(t)$ is a solution matrix of Eq. (\ref{Ex6.2}).
Let $\phi_1(t)=\nu_1 e^{\lambda_1 t}$ and $\phi_2(t)=\nu_2 e^{\lambda_2 t}$, then
\[ \dot{\phi}_1(t) =\left ( \begin{array}{ll} 1
\\ 0 \end{array} \right )\mathbf{i} e^{\mathbf{i} t} = \left ( \begin{array}{ll}
\mathbf{i} & \mathbf{j}
\\
0 & \mathbf{i}+\mathbf{j} \end{array} \right ) \left ( \begin{array}{ll}
1
\\ 0 \end{array} \right ) e^{\mathbf{i} t} = \left ( \begin{array}{ll}
\mathbf{i} & \mathbf{j}
\\
0 & \mathbf{i}+\mathbf{j} \end{array} \right ) \phi_1(t), \] which implies that $\phi_1(t)$ is a solution of Eq. (\ref{Ex6.2}). Similarly, let $\phi_2(t)=\left ( \begin{array}{ll} e^{(\mathbf{i}+\mathbf{j}) t}
\\ e^{(\mathbf{i}+\mathbf{j}) t} \end{array} \right )$, then \[ \dot{\phi}_2(t)=\left ( \begin{array}{ll} 1
\\ 1 \end{array} \right )(\mathbf{i}+\mathbf{j})e^{(\mathbf{i}+\mathbf{j}) t} = \left ( \begin{array}{ll}
\mathbf{i} & \mathbf{j}
\\
0 & \mathbf{i}+\mathbf{j} \end{array} \right ) \left ( \begin{array}{ll} 1
\\ 1 \end{array} \right )e^{(\mathbf{i}+\mathbf{j}) t} = \left ( \begin{array}{ll}
\mathbf{i} & \mathbf{j}
\\
0 & \mathbf{i}+\mathbf{j} \end{array} \right ) \phi_2(t), \]
which implies that $\phi_2(t)$ is another solution of Eq. (\ref{Ex6.2}). Therefore, $M(t)=(\phi_1(t),\phi_2(t))$ is a solution matrix of Eq. (\ref{Ex6.2}).
Secondly, by Theorem \ref{Th5.2}, taking $t_0=0$, \[ {\rm ddet}\, M(t_0)={\rm ddet}\, M(0) ={\rm ddet}(\nu_1,\nu_2) \neq0, \] so ${\rm ddet}\, M(t)\neq0$ for all $t$. Therefore, $M(t)$ is a fundamental matrix of Eq. (\ref{Ex6.2}).
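The eigen-relations of Example 3 can also be checked numerically. The sketch below is our illustration (names such as qmul are ad hoc, not from the paper): quaternions are encoded as 4-tuples $(w,x,y,z)$ standing for $w+x\mathbf{i}+y\mathbf{j}+z\mathbf{k}$, the Hamilton product is implemented directly, and the right-eigenvalue relations $A\nu=\nu\lambda$ are verified componentwise.

```python
# Illustrative check of Example 3: quaternions as 4-tuples (w, x, y, z).

def qmul(a, b):
    """Hamilton product of two quaternions (non-commutative)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

I = (0, 1, 0, 0); J = (0, 0, 1, 0)
ZERO = (0, 0, 0, 0); ONE = (1, 0, 0, 0)
IJ = qadd(I, J)                       # i + j

A = [[I, J], [ZERO, IJ]]              # coefficient matrix of Eq. (Ex6.2)

def matvec(M, v):
    return [qadd(qmul(M[r][0], v[0]), qmul(M[r][1], v[1])) for r in range(2)]

def vecscale_right(v, s):
    # solutions form a RIGHT module: the eigenvalue multiplies from the right
    return [qmul(q, s) for q in v]

v1, lam1 = [ONE, ZERO], I             # eigenpair for lambda_1 = i
v2, lam2 = [ONE, ONE], IJ             # eigenpair for lambda_2 = i + j
assert matvec(A, v1) == vecscale_right(v1, lam1)
assert matvec(A, v2) == vecscale_right(v2, lam2)
assert qmul(I, J) != qmul(J, I)       # ij = k, ji = -k: non-commutativity
```

The order $q\lambda$ (not $\lambda q$) matters here, which is exactly the right-module structure discussed in the Conclusion.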
\section{Conclusion and Discussion}
\noindent {\bf Conclusion}
This paper established a systematic basic theory for two-dimensional linear QDEs, which have many applications in the real world, such as quantum mechanics, differential geometry, Kalman filter design, attitude dynamics, and fluid mechanics. For clarity of presentation, we focus on the two-dimensional system; in fact, our basic results extend readily to arbitrary $n$-dimensional QDEs. Due to the non-commutativity of the quaternion algebra, there are four profound differences between QDEs and ODEs.
1. Due to the non-commutativity of the quaternion algebra, the algebraic structure of the solutions to QDEs is not a linear vector space. It is actually a right-free module.
2. The {\em Wronskian} of ODEs is defined by the Cayley determinant. However, since the Cayley determinant depends on the choice of the $i$-th row and $j$-th column used for the expansion, different expansions of a quaternion matrix can lead to different results, so the {\em Wronskian} of ODEs cannot be extended directly to QDEs. It is necessary to define the {\em Wronskian} of QDEs by a novel method (see Section 4). We use the double determinant ${\rm ddet}\,M=\displaystyle {\rm rdet}(MM^{+})$ to define the {\em Wronskian} of QDEs. We remark that the {\em Wronskian} of QDEs can also be defined by ${\rm ddet}\,M=\displaystyle {\rm rdet}(M^{+}M)$.
3. Liouville formula for QDEs and ODEs are different.
4. For QDEs, it is necessary to treat the left and right eigenvalue problems separately. This is a major difference between QDEs and ODEs.
\section{Conflict of Interests}
The authors declare that there is no conflict of interests regarding the publication of this article.
\end{document}
\begin{document}
\title{\Large\bf
Triplets and Symmetries of Arithmetic mod $p^k$}
\begin{abstract} The finite ring $Z_k=Z$(+,~.) mod $p^k$ of residue arithmetic with odd prime power modulus is analysed. The cyclic group of units $G_k$ in $Z_k$(.) has order $(p-1).p^{k-1}$, implying product structure
$G_k\equiv A_k.B_k$ with $|A_k|=p-1$ and $|B_k |=p^{k-1}$, the "core" and "extension subgroup" of $G_k$ respectively. It is shown that each subgroup $S \supset 1$ of core $A_k$ has zero sum, and $p$+1 generates subgroup $B_k$ of all $n\equiv 1$ mod $p$ in $G_k$. The $p$-th power residues $n^p$ mod $p^k$ in $G_k$ form an order
$|G_k|/p$ subgroup $F_k$, with $|F_k|/|A_k|=p^{k-2}$, so $F_k$ properly contains core $A_k$ for $k \geq 3$. By quadratic analysis (mod $p^3$) rather than linear analysis (mod $p^2$, re: Hensel's lemma [5]), the additive structure of subgroups $G_k$ and $F_k$ is derived. Successor function $n$+1 combines with the two arithmetic $symmetries$ $-n$ and $n^{-1}$ to yield a $triplet$ structure in $G_k$ of three inverse pairs ($n_i,~n_i^{-1}$) with $n_i +1 \equiv -(n_{i+1})^{-1}$ (indices mod 3) and $n_0.n_1.n_2 \equiv 1$ mod $p^k$. In case $n_0 \equiv n_1 \equiv n_2 \equiv n$ this reduces to the cubic root solution $n+1 \equiv - n^{-1} \equiv - n^2$ mod $p^k$ ~($p \equiv 1$ mod 6). The property of exponent $p$ distributing over a sum of core residues, $(x+y)^p \equiv x+y \equiv x^p + y^p$ mod $p^k$, is employed to derive the known $FLT$ inequality for integers. In other words, to an $FLT$ mod $p^k$ equivalence for $k$ digits correspond $p$-th power integers of $pk$ digits, and the $(p-1)k$ carries make the difference, representing the sum of mixed terms in the binomial expansion. \end{abstract}
{\bf Keywords}: Residue arithmetic, ring, group of units,
multiplicative semigroup, \\additive structure, triplet,
cubic roots of unity,~ $carry$, ~Hensel, Fermat, $FST, FLT$.
{\bf MSC-class}: ~11D41
\section*{Introduction}
The commutative semigroup $Z_k$(.) of multiplication mod $p^k$ (prime $p>$2) has for all $k>$0 ~just two idempotents: $1^2 \equiv 1$ and $0^2 \equiv 0$, and is the disjoint union of the corresponding maximal subsemigroups (~Archimedean components~[3], [4]~). Namely the group $G_k$ of units ($n^i\equiv 1$ mod $p^k$ for some $i>$0), which are all relatively prime to $p$, and the maximal ideal $N_k$ as nilpotent subsemigroup of all $p^{k-1}$ multiples of $p$ ($n^i\equiv 0$
mod $p^k$ for some $i>$0). Order $|G_k|=(p-1)p^{k-1}$ has two coprime factors, so that $G_k\equiv A_kB_k$, with 'core' $|A_k|=p-1$
and 'extension group' $|B_k|=p^{k-1}$. Residues of $n^p$ form a subgroup $F_k \subset G_k$ of order
$|F_k|=|G_k|/p$, to be analysed for its additive structure. Each $n$ in core $A_k$ satisfies $n^p\equiv n$ mod $p^k$, a generalization of Fermat's Small Theorem ($FST$) for $k>1$, denoted as $FST_k$.
{\bf Base $p$} number representation is used, a notation that is useful for the computer experiments reported in tables 1 and 2. This models residue arithmetic mod $p^k$ by considering only the $k$ less significant digits, and ignoring the more significant digits. Congruence class [$n$] mod $p^k$ is represented by the natural number $n<p^k$, encoded in $k$ digits (base $p$). Class [$n$] consists of all integers with the same least significant $k$ digits as $n$.
{\bf Define} the {\bf 0-extension} of residue $n$ mod $p^k$ as the natural number $n < p^k$ with the same $k$-digit representation (base $p$), and all more significant digits (at $p^m, ~m \geq k)$ set to 0.
Signed residue $-n$ is only a convenient notation for the complement $p^k-n$ of $n$, both of which are positive. $C[n]$ or $C_n$ is a cyclic group of order $n$, such as $Z_k(+) \cong C[p^k]$. The units mod $p$ form a cyclic group $G_1=C_{p-1}$, and $G_k$ of order $(p-1).p^{k-1}$ is also cyclic for $k>$1 ~[1]. ~~Finite {\it semigroup structure} is applied, and {\it digit analysis} of prime-base residue arithmetic, to study the combination of (+) and (.) mod $p^k$, especially the additive properties of multiplicative subgroups of ring $Z_k(+, .)~$.
Only elementary residue arithmetic, cyclic groups, and (associative) function composition are used (thm3.2), to begin with the known cyclic (one generator) nature of group $G_k$ of units mod $p^k$ [1]. Lemma 1.1 on the direct product structure of $G_k$, and cor1.2 on all $p$-th power residues mod $p^k$ as all extensions of those mod $p^2$, are known in some form but are derived for completeness. Lemma 1.4 on $B_k=(p+1)^*$, and further results are believed to be new.
The {\bf two symmetries} of residue arithmetic mod $p^k$, defined as automorphisms of order 2, are {\bf complement} $-n$ under (+) with factor $-1$, and {\bf inverse} $n^{-1}$ under (.) with exponent $-1$.\\ Their essential role in the triplet structure (thm3.1) of this finite ring is emphasized throughout. \\The main emphasis is on additive analysis of multiplicative semigroup $Z$(.) mod $p^k$.\\ Concatenation will be used to indicate multiplication.
\begin{tabular}{ll} {\bf Symbols} & and {\bf Definitions}~~~~~~(~odd prime $p$~)\\ \hline $Z_k$(+,~.) & the finite ring of residue arithmetic mod $p^k$\\ $M_k$ & multiplication $Z_k$(.) mod $p^k$, semigroup ($k$-digit
arithmetic base $p$)\\ $N_k$ & maximal ideal of $M_k:~n^i\equiv 0$ mod $p^k$~(some $i>$0),
$|N_k|=p^{k-1}$\\ $n \in M_k$ & unique product ~$n = g^i.p^{k-j}$ mod $p^k$ ~($g^i
\in G_j$ coprime to $p$)\\ 0-extension X & of residue $x$ mod $p^k$:
the smallest non-negative integer $X \equiv x$ mod $p^k$\\ (finite) extension $U$ & of $x$ mod $p^k$:
any integer $U \equiv x$ mod $p^k$\\ $C_m$ or $C[m]$ & cyclic group of order $m$:
~~e.g. ~$Z_k(+) \cong C[p^k]$\\ $G_k\equiv A_k.B_k$ & group of units: all ~$n^i \equiv 1$ mod $p^k$
~(some $i>$0), ~~$|G_k|\equiv (p-1)p^{k-1}$\\
$A_k$ & ~~~~~{\bf core} of $G_k$, ~~$|A_k|=p-1~~~(n^p$=$n$
mod $p^k$ for $n \in A_k$)\\ $B_k\equiv (p+1)^*$ & extension group of all $n\equiv 1$ mod $p$ ,
~~$|B_k|=p^{k-1}$\\ $F_k$ & subgroup of all $p$-th power residues in $G_k$ ,
~~$|F_k|=|G_k|/p$ \\ $A_k \subset F_k \subset G_k$ &
proper inclusions only for $k \geq 3~~(A_2\equiv F_2 \subset G_2)$\\ $d(n)$ & core increment $A(n+1)-A(n)$ of core func'n
$A(n)\equiv n^q,~q=|B_k|$\\ $FST_k$ & core $A_k$ extends $FST ~(n^p \equiv n$ mod $p$) to
mod $p^{k>1}$ for $p-1$ residues\\ solution in core & $x^p+y^p \equiv z^p$ mod $p^k$ ~with~ $x,y,z$
~in core $A_k$.\\
period of $n \in G_k$ & order $|n^*|$ of subgroup generated by
$n$ in $G_k(.)$\\ normation & divide $x^p+y^p \equiv z^p$ mod $p^k$ by one term (in $F_k$),
yielding one term $\pm 1$\\ complement $-n$ & unique in $Z_k$(+) : ~$-n+n\equiv 0$ mod $p^k$\\ inverse $n^{-1}$ & unique in ~$G_k$(.) : ~$n^{-1}.~n\equiv 1$ mod $p^k$\\ 1-complement $n"$ & unique in $Z_k$(+) : ~$n"+n\equiv -1$ mod $p^k$\\ inverse-pair & pair ($a, ~a^{-1}$) of inverses in $G_k$ \\ {\bf triplet} & 3 inv.pairs: ~$a+b^{-1}\equiv b+c^{-1}\equiv c+a^{-1}\equiv -1,
~(abc\equiv 1$ mod $p^k$)\\ triplet$^p$ & a triplet of three $p$-th power residues in subgroup $F_k$ (thm3.1)\\ triplet$^p$ equiv'ce & one of the three equivalences of a triplet$^p$\\ symmetry mod $p^k$ & $-n$ and $n^{-1}$: ~order 2 automorphism of
$Z_k(+)$ resp. $G_k(.)$\\ $EDS$ property & Exponent Distributes over a Sum:
$(a+b)^p\equiv a^p+b^p$ mod $p^k$ \end{tabular}
\section{ Structure of the group $G_k$ of units }
\begin{lem} ~~~~$G_k ~\cong ~A'_k \times B'_k ~\cong ~C[p-1]~.~C[p^{k-1}]$
~~~~~~~~~~~~ and $M_k$ (mod $p^k$) has a sub-semigroup isomorphic to
$M_1$ (mod $p$). \end{lem} \begin{proof} ~Cyclic group $G_k$ of {\it units} $n$ ($n^i\equiv 1$ for some $i>0$) has order $(p-1)p^{k-1}$, namely $p^{k}$ minus $p^{k-1}$ multiples of $p$. Then $G_k=A'_k \times B'_k$, the direct product of two relatively prime cycles, with corresponding subgroups $A_k$ and $B_k$, so that $G_k\equiv A_k.B_k$ where\\ {\bf extension group} $B_k=C[~p^{k-1}~]$ consists of all $p^{k-1}$ residues mod $p^k$ that are 1 mod $p$, \\ and ~{\bf core} $A_k=C[p-1]$, ~so $M_k$ contains sub-semigroup $A_k \cup 0 \cong M_1$. \end{proof}
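As a quick computational check of this decomposition (our illustration, not part of the original argument; the choice $p=5$, $k=3$ is arbitrary), one can enumerate the units mod $p^k$ and confirm $G_k\equiv A_k.B_k$ with the stated orders:

```python
# Illustrative verification of lemma 1.1 for p = 5, k = 3.
p, k = 5, 3
m = p**k
units = [n for n in range(1, m) if n % p != 0]          # G_k
assert len(units) == (p - 1) * p**(k - 1)

core = {pow(n, p**(k - 1), m) for n in units}           # A_k: p-1 elements
ext  = {n for n in units if n % p == 1}                 # B_k: all 1 mod p
assert len(core) == p - 1
assert len(ext) == p**(k - 1)

# every unit factors (uniquely) as a core element times an extension element
prods = {a * b % m for a in core for b in ext}
assert len(prods) == len(units)                          # G_k = A_k . B_k
```

Uniqueness of the factorisation follows from the coprime orders $p-1$ and $p^{k-1}$; the count check above confirms it for this case.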
{\bf Core $A_k$}, as a $p-1$ cycle mod $p^k$, extends Fermat's Small Theorem $n^{p}\equiv n$ mod $p$ to $k>$1 for $p$ residues (including 0), which is denoted as $FST_k$.\\ Recall that $n^{p-1} \equiv 1$ mod $p$ for $n \equiv \!\!\!\!\!\!\//~~$0 mod $p$ ($FST$), then lem1.1 implies:
\begin{cor}~~With $|B|=p^{k-1}=q$ and $|A|=p-1$:\\ \hspace*{1cm} Core $A_k=\{~n^q~\}$ mod $p^k~~(n=1.. p$-1) ~extends $FST$ for $k>$1,
~and: \\ \hspace*{1cm} ~~~~$B_k= \{n^{p-1}\}$ mod $p^k$ ~consists of all $p^{k-1}$
residues 1 mod $p$ in $G_k$. \end{cor}
Subgroup $F_k \equiv \{n^p\}$ mod $p^k$ of all $p$-th power residues in $G_k$, with $F_k \supseteq A_k$ (only $F_2 \equiv A_2$) and order
$|F_k|=|G_k|/p=(p-1)p^{k-2}$, consists of {\bf all} $p^{k-2}$ extensions mod $p^k$ of the $p-1$ ~~$p$-th power residues in $G_2$, which has order $(p-1)p$. Consequently we have:
\begin{cor}
Each extension of $n^p$ mod $p^2$ ~(in $F_2$)
is a $p$-th power residue (in $F_k$) \end{cor}
{\bf Core generation}: ~The $p-1$ residues $n^q$ mod $p^k ~(q=p^{k-1})$ define core $A_k$ for 0$<n<p$.\\Cores $A_k$ for successive $k$ are produced as the $p$-th power of each $n_0<p$ recursively:\\ ~$(n_0)^p \equiv n_1, ~(n_1)^p \equiv n_2,~(n_2)^p \equiv n_3$, etc., where $n_i$ has $i$+1 digits. In more detail:
\begin{lem}.\\ \hspace*{.5cm}
The $p-1$ values $a_0<p$ define core $A_k$ by~
$(a_0)^{p^{k-1}}=a_0+\sum_{i=1}^{k-1}a_ip^i$ ~(digits $a_i<p$). \end{lem} \begin{proof} Let $a=a_0+mp<p^2$ be in core $A_2$, so $a^p\equiv a$ mod $p^2$. Then $a^p= (a_0+mp)^p=a_0^p+p.a_0^{p-1}.mp\equiv a_0^p+mp^2$ mod $p^3$, using $FST$. Clearly the second core digit, of weight $p$, is not found this way as function of $a_0$, but requires actual computation (unless $a\equiv p \pm 1$ as in lem1.3-4). It depends on the $carries$ produced in computing the $p$-th power of $a_0$. Recursively, each next core digit can be found by computing the $p$-th power of a core $A_k$ residue with $k$+1 digit precision; here core $A_k$ remains fixed since $a^p\equiv a$ mod $p^k$. \end{proof}
Notice $(p^2 \pm 1)^p\equiv p^3 \pm 1$ mod $p^5$. Moreover, initial $(p+1)^p\equiv p^2+1$ mod $p^3$ yields in general for $(p \pm 1)^{p^m}$ the next property:
\begin{lem} ~~$(p+1)^{p^m}\equiv p^{m+1}+1$ {\rm ~~mod} $p^{m+2}$\\ \hspace*{1cm}
and: ~~~$(p-1)^{p^m}\equiv p^{m+1}-1$ {\rm ~~mod} $p^{m+2}$ \end{lem}
\begin{lem} ~Extension group $B_k$ is generated by $p$+1 (mod $p^k$),
with $|B_k|=p^{k-1}$, \\ \hspace*{1cm} and each subgroup
$S \subseteq B_k$, ~$|S|=|B_k|/p^s$ has
sum $\sum S \equiv |S|$ {\rm ~mod} $p^k$. \end{lem} \begin{proof}
The period of $p+1$ is the smallest $x$ with $(p+1)^x\equiv 1$ mod $p^k$; by lemma 1.3, $(p+1)^{p^m} \equiv p^{m+1}+1$ mod $p^{m+2}$, so this requires $m+1=k$, that is $m=k-1$, yielding period $p^{k-1}$. No smaller exponent generates 1 mod $p^k$, since
$|B_k|$ has only divisors $p^s$.
$B_k$ consists of all $p^{k-1}$ residues which are 1 mod $p$. The order of each subgroup $S \subset B_k$ must divide $|B_k|$, so that $|S|=|B_k|/p^s$ ~($0 \leq s < k$) and~ $S=\{1+m.p^{s+1}\}
~(m=0~..~|S|-1)$.
Then ~$\sum S= |S|+p^{s+1}.|S|(|S|-1)/2$ ~mod $p^k$, ~where~
$p^{s+1}.|S|=p.|B_k|=p^k$,
so that~ $\sum S = |S|= p^{k-1-s}$ mod $p^k$. ~~Hence no subgroup of $B_k$ ~sums to 0 mod $p^k$. \end{proof}
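Both claims of lemma 1.4 can be verified for, say, $p=5$, $k=4$ (an arbitrary illustrative choice, not part of the proof):

```python
# Illustrative check of lemma 1.4 for p = 5, k = 4.
p, k = 5, 4
m = p**k

# B_k is generated by p+1 and has order p^(k-1)
B, x = set(), 1
for _ in range(p**(k - 1)):
    B.add(x)
    x = x * (p + 1) % m
assert len(B) == p**(k - 1) and x == 1          # period exactly p^(k-1)
assert B == {n for n in range(1, m) if n % p == 1}

# each subgroup S = {1 + n*p^(s+1)} sums to |S| mod p^k, never to 0
for s in range(k):
    S = [1 + n * p**(s + 1) for n in range(p**(k - 1 - s))]
    assert sum(S) % m == len(S)
```

In particular no subgroup of $B_k$ sums to $0$ mod $p^k$, in contrast with the core subgroups of thm1.1 below.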
\begin{cor} ~~For core $A_k \equiv g^*$: each unit $n \in G_k \equiv A_kB_k$ has the form: \\ \hspace*{1cm} $n \equiv g^i(p+1)^j$ mod $p^k$ for a unique pair of
non-neg. exponents $i<|A_k|$ and $j<|B_k|$. \end{cor}
Pair $(i,j)$ are the exponents in the core- and extension- component of unit $n$.
\begin{thm}
Each subgroup $S \supset 1$ of core $A_k$ sums to ~0
{\rm ~mod} $p^k~~(k>$0). \end{thm} \begin{proof}
~For {\it even} $|S|$: $-1$ in $S$ implies pairwise zero-sums. In general: $c.S=S$ for all $c$ in $S$, so $c \sum S =\sum S$, that is~ $S.x=x$, writing $x$ for $\sum S$. Now for any $g$ in $G_k$:
$|S.g|=|S|$, so that $|S.x|=1$ implies $x$ is not in $G_k$; hence~ $x=g.p^e$ ~for some $g$ in $G_k$ and $0<e<k$, or $x=0 ~(e=k)$. Then:
~~$S.x=S(g.p^e)=(S.g)p^e$ with $|S.g|=|S|$ if $e<k$.
~So~ $|S.x|=1$ ~yields~ $e=k$ ~and~ $x=\sum S=0$. \end{proof}
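For instance, for $p=7$, $k=3$ (our illustrative choice), the cyclic subgroups generated by core elements exhaust all subgroups of the cyclic group $A_k$, and their sums can be checked directly:

```python
# Illustrative check of thm1.1 for p = 7, k = 3: every subgroup S of the
# core A_k with |S| > 1 sums to 0 mod p^k.
p, k = 7, 3
m = p**k
core = sorted(pow(n, p**(k - 1), m) for n in range(1, p))   # A_k, p-1 elements

def subgroup(gen):
    """Cyclic subgroup of G_k generated by gen (mod m)."""
    S, x = set(), 1
    while x not in S:
        S.add(x)
        x = x * gen % m
    return S

for a in core:
    S = subgroup(a)
    if len(S) > 1:
        assert sum(S) % m == 0
```

Since $A_k$ is cyclic, every subgroup arises as such a generated subgroup, so the loop covers all of them.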
Consider the {\bf normation} of an additive equivalence $a+b \equiv c$ mod $p^k$ in units group $G_k$, by multiplying all terms with the inverse of one of these terms, for instance to yield rhs $-1$:
{\bf (1)} ~~~1-complement form: ~~$a+b \equiv -1$ mod $p^k$ in $G_k$
~~~(digitwise sum $p-1$, no carry).
For instance the well known $p$-th power residue equivalence:~ ~$x^p+y^p \equiv z^p$ ~in $F_k$ yields: \\[1ex] {\bf (2)} ~~~~normal form:~~~~~~~ $a^p+b^p \equiv -1$ mod $p^k$ ~in $G_k$,
with a special case (in core $A_k$) considered next.
\begin{figure}
\caption{{ ~~G = A.B = $g^*$ ~(mod $7^2$), ~~~~~Cycle in the plane}}
\label{fig.1}
\end{figure}
\section{The cubic root solution in core, ~and core symmetries}
\begin{lem} The cubic roots of 1 mod $p^k$ ($p \equiv 1$ mod 6) ~are $p$-th
power residues in core $A_k$, \\ \hspace*{1cm}
and for $a^3 \equiv 1 ~(a \equiv \!\!\!\!\!\!\//~~ 1):~ a+a^{-1} \equiv -1$ mod $p^{k>1}$
has no 0-extension to integers. \end{lem} \begin{proof}
~If $p \equiv 1$ mod 6 then $3|(p-1)$ implies a core-subgroup $S=\{a^2,a,1\}$ of three $p$-th powers: the cubic roots of 1 ($a^3 \equiv 1$) in $G_k$ that sum to 0 mod $p^k$ (thm1.1). Now $a^3-1=(a-1)(a^2+a+1)$, so if $a \equiv \!\!\!\!\!\!\//~~ 1$ then $a^2+a+1 \equiv 0$, hence $a+a^{-1} \equiv -1$ solves (1): a {\bf root-pair} of inverses, with $a^2 \equiv a^{-1}$. ~$S$ in core consists of $p$-th power residues with $n^{p}\equiv n$ mod $p^{k}$. Write $b$ for $a^{-1}$, then $a^p+b^p \equiv -1$ {\bf and} $a+b \equiv -1$, so that $a^p+b^p\equiv (a+b)^p$ mod $p^k$. ~Notice the {\bf \it Exponent Distributes over a Sum ($EDS$)}, implying inequality $A^p+B^p<(A+B)^p$ for the corresponding 0-extensions $A,~B,~A+B$ of core residues $a,~b,~a+b$ mod $p^k$. \end{proof}
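For example, for $p=7$, $k=3$ (our illustrative choice), the two non-trivial cubic roots of 1 mod $343$ can be found by brute force and the claimed identities confirmed directly:

```python
# Illustrative check of lemma 2.1 for p = 7, k = 3 (p = 1 mod 6).
p, k = 7, 3
m = p**k
roots = [a for a in range(2, m) if pow(a, 3, m) == 1]   # non-trivial cubic roots
assert len(roots) == 2
for a in roots:
    inv = pow(a, -1, m)
    assert (a + inv) % m == m - 1        # a + a^{-1} = -1 mod p^k
    assert a * a % m == inv              # a^2 = a^{-1}
    assert pow(a, p, m) == a             # a lies in core A_k (FST_k)
```

Here the roots are $18$ and $324$ mod $343$ (i.e. $024$ and $642$ in base 7, matching the cubic roots quoted later for $p=7$).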
\begin{enumerate} {\small \item
{\bf ~Display} $G_k\equiv g^*$ by equidistant points on a {\bf unit circle}
in the plane, with 1 and $-1$ on the horizontal axis (fig1,~2).
The successive powers $g^i$ of generator $g$ produce $|G_k|$ points
($k$-digit residues) counterclockwise. In this circle each inverse pair
$(a,a^{-1})$ is connected $vertically$, complements $(a,-a)~diagonally$,
and pairs $(a,-a^{-1})~horizontally$, representing functions $I, C$ and
$IC=CI$ resp. (thm3.2). Figures 1, 2 depict for $p$=7, 5 these symmetries
of residue arithmetic. \item
{\bf Scaling} any equation, such as $a+1\equiv -b^{-1}$, by a factor
$s\equiv g^i \in G_k\equiv g^*$, yields $s(a+1)\equiv -s/b$ mod $p^k$, represented by a {\bf rotation} counterclockwise over $i$ positions.
} \end{enumerate}
\subsection{Core increment symmetry at double precision, and asymmetry beyond}
Consider {\bf core function} $A_k(n)=n^{|B_k|} ~(~|B_k|=p^{k-1} ~cor1.1)$ as an integer polynomial of odd degree, and {\bf core increment} function $d_k(n)=A_k(n+1)-A_k(n)$ of even degree one less than $A_k(n)$. Computing $A_k(n)$ up to precision $2k+1$ (base $p$) shows $d_k(n)$ mod $p^{2k+1}$ to have a 'double precision' symmetry for 1-complements $m+n=p-1$. Only $n<p$ need be considered due to periodicity $p$.\\ This is naturally reflected in the additive properties of core $A_k$, as in table 1 for $p$=7 and $k$=1, with $n<p$ in $A_1$ by $FST$: symmetry of core increment $d_1(n)$ mod $p^3$ but not so mod $p^4$. Due to $A_k(n) \equiv n$ mod $p ~(FST)$ we have $d_k(n) \equiv 1$ mod $p$, so $d_k(n)$ is referred to as core 'increment', although in general $d_k(n) \equiv \!\!\!\!\!\!\//~~ 1$ mod $p^{k>1}$.
\begin{lem} ( Core increment at {\bf double precision} )~~
For $q=|B_k|=p^{k-1}$ and $k>0$:\\ (a)~~ Core function $A_k(n) \equiv n^q$ mod $p^k$ and increment
$d_k(n) \equiv A_k(n+1)-A_k(n)$ have {\bf period $p$}\\ (b)~~ for $m+n=p ~~~~:~ A_k(m) \equiv -A_k(n)$ mod $p^k$ {\rm ~~~(odd symmetry)}\\ (c)~~ for $m+n=p-1:~ d_k(m) \equiv d_k(n)$ mod $p^{2k+1}$
and $\equiv \!\!\!\!\!\!\//~~$ mod $p^{2(k+1)}$ \\ \hspace*{1cm} {\rm ('double precision' ~even symmetry and -inequivalence respectively).} \end{lem} \begin{proof} {\bf(a)} ~Core function $A_k(n)\equiv n^q$ mod $p^k ~(q=p^{k-1},~n \equiv \!\!\!\!\!\!\//~~$ 0 mod $p$) has just $p-1$ distinct residues with $(n^q)^p \equiv n^q$ mod $p^k$, and $A_k(n) \equiv n$ mod $p$ ($FST$). Including (non-core) $A_k(0) \equiv 0$ makes $A_k(n)$ mod $p^k$ periodic in $n$ with {\bf period $p$} :
~$A_k(n+p) \equiv A_k(n)$ mod $p^k$, so $n<p$ suffices for core analysis. Increment $d_k(n)$, as difference of two functions of period $p$, also has period $p$.
{\bf(b)} ~$A_k(n)$ is a polynomial of odd degree with
{\bf odd symmetry}~ $A_k(-n) \equiv (-n)^q \equiv -n^q \equiv -A_k(n)$.
{\bf(c)}~ Difference polynomial $d_k(n)$ is of even degree $q-1$ with leading term $q.n^{q-1}$, and residues 1 mod $p$ in extension group $B_k$. The even degree of $d_k(n)$ results in {\bf even symmetry}, because \\[1ex] \hspace*{1cm}
$d_k(n-1) = n^q-(n-1)^q = -(-n)^q+(-n+1)^q = d_k(-n)$.
Denote $q=p^{k-1}$, then for ~$m+n=p-1$ follows:~
$d_k(m)=A_k(m+1)-A_k(m)=(p-n)^q-m^q$ and $d_k(n)=A_k(n+1)-A_k(n)=(p-m)^q-n^q$, yielding:~ $d_k(m)-d_k(n) = [~(p-n)^q+n^q~] -[~(p-m)^q+m^q~]$. By binomial expansion and $n^{q-1} \equiv m^{q-1} \equiv 1$ mod $p^k$ in core $A_k$:~ $d_k(m)-d_k(n) \equiv 0$ mod $p^{2k+1}$. ~With~ $n \equiv \!\!\!\!\!\!\//~~ m$ mod $p$:~ $m^{q-2} \equiv m^{-1} \equiv \!\!\!\!\!\!\//~~ n^{-1} \equiv n^{q-2}$ mod $p$, ~causing~ $d_k(m) \equiv \!\!\!\!\!\!\//~~ d_k(n)$ mod $p^{2k+2}$. \end{proof}
Table 1 ~($p$=7) shows, for $\{n,m\}$ in core $A_1 ~(FST)$ with $n+m \equiv -1$ mod $p$, the core increment symmetry mod $p^3$ and the difference mod $p^4 ~(k$=1). Meanwhile $024^7 \equiv 024$ in $A_3 ~(k$=3) has core increment 1 mod $p^7$, but not 1 mod $p^8$, and similarly at the 1-complementary cubic root $642^7 \equiv 642$.
\subsection{ Another derivation of the cubic root of 1 mod $p^k$ }
The cubic root solution was derived, for 3 dividing $p-1$, via subgroup $S \subset A_k$ of order 3 (thm1.1). For completeness a derivation using elementary arithmetic follows.
Notice that ~$a+b \equiv -1$ ~yields ~~~$a^2+b^2 \equiv (a+b)^2-2ab \equiv 1-2ab$, ~and:\\ \hspace*{1cm} $a^3+b^3 \equiv (a+b)^3-3(a+b)ab \equiv -1+3ab$. ~~The combined sum is $ab-1$: \\[1ex] \hspace*{.5cm} $\sum_{i=1}^3(a^i+b^i) \equiv \sum_{i=1}^3 a^i + \sum_{i=1}^3 b^i \equiv ab-1$ ~mod $p^k$. ~~Find $a,b$ for $ab \equiv 1$ mod $p^k$.
Since $n^2+n+1=(n^3-1)/(n-1)=0$ for $n^3 \equiv 1 ~(n \neq 1$), we have $ab \equiv 1$ mod $p^{k>0}$ if $a^3 \equiv b^3 \equiv 1$ mod $p^k$, with 3 dividing $p-1~(p \equiv 1$ mod 6). Cubic roots $a^3 \equiv 1$ mod $p^k$ exist for any prime $p \equiv 1$ mod 6 at any precision $k>0$.
In the next section other solutions of $\sum_{i=1}^3 a^i + \sum_{i=1}^3 b^i \equiv 0$ mod $p^k$ will be shown, depending not only on $p$ but also on $k$, with $ab \equiv 1$ mod $p^2$ but $ab \equiv \!\!\!\!\!\!\//~~ 1$ mod $p^3$, for some primes $p \geq 59$.
\section{Triplets, and the Core}
Any solution of (2): ~$a^p+b^p=-1$ mod $p^k$ has at least one term ($-1$) in core, and at most all three terms in core $A_k$. To characterize such solution by the number of terms in core $A_k$, quadratic analysis (mod $p^3$) is essential since proper inclusion $A_k \subset F_k$ requires $k \geq 3$. The cubic root solution, with one inverse pair (lem2.1), has all three terms in core $A_{k>1}$. However, a computer search (table 2) does reveal another type of solution of (2) mod $p^2$ for some $p \geq 59$: three inverse pairs of $p$-th power residues, denoted triplet$^p$, ~in core $A_2$.
\begin{thm} A {\bf triplet$^p$} of three inverse-pairs of $p$-th power residues in $F_k$ satisfies: \\ \hspace*{1in} {\bf (3a)}~~~~~~$a+b^{-1} \equiv -1$ ~(mod $p^k$) \\ \hspace*{1in} {\bf (3b)}~~~~~~$b+c^{-1} \equiv -1$ ~~~,, \\ \hspace*{1in} {\bf (3c)}~~~~~~$c+a^{-1} \equiv -1$ ~~~,, ~~~with $abc \equiv 1$ mod $p^k$. \end{thm} \begin{proof} ~Multiplying by $b,~c,~a$ resp. maps (3a) to (3b) if $ab \equiv c^{-1}$, and (3b) to (3c) if $bc \equiv a^{-1}$, and (3c) to (3a) if $ac \equiv b^{-1}$. ~All three conditions imply
$abc \equiv 1$ mod $p^k$. \end{proof}
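Table 2's entry at $p=59$, the first prime with a triplet$^p$, can be confirmed by a brute-force search (our illustration; the chaining follows the equivalences (3a-c)):

```python
# Illustrative search for triplets^p mod p^2 at p = 59 (cf. table 2).
p = 59
m = p * p
F = {pow(n, p, m) for n in range(1, m) if n % p != 0}   # p-th power residues

def step(a):
    """From a + b^{-1} = -1 mod p^2 return b, or None if -1-a is not a unit."""
    t = (-1 - a) % m
    return pow(t, -1, m) if t % p != 0 else None

triplets = set()
for a in F:
    if a == 1:
        continue
    b = step(a)
    if b is None or b not in F:
        continue
    c = step(b)
    if c is None or c not in F:
        continue
    if (c + pow(a, -1, m)) % m == m - 1 and a * b * c % m == 1:
        triplets.add(frozenset((a, b, c)))

assert len(triplets) >= 1        # triplets^p do occur at p = 59
```

Each triplet is recorded as a set $\{a,b,c\}$, so the three rotations of one solution collapse to one entry; the inverse triplet $\{a^{-1},b^{-1},c^{-1}\}$ appears separately.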
Table 2 shows all normed solutions of (2) mod $p^2$ for $p<200$, with triplets at $p$= 59, 79, 83, 179, 193. The cubic roots, indicated by $C_3$, occur only at $p \equiv 1$ mod 6, while a triplet$^p$ can occur for either prime type $\pm 1$ mod 6. More than one triplet$^p$ can occur per prime (two at $p$=59, three at 1093, four at 36847: each first occurrance of such multiple triplet$^p$). There are primes for which both rootforms occur, e.g. $p=79$ has a cubic root solution as well as a triplet$^p$.
The question is whether such a {\bf loop structure} of inverse-pairs can have a length beyond 3. Consider the successor $S(n)=n$+1 and the two arithmetic symmetries, complement $C(n)=-n$ and inverse $I(n)=n^{-1}$, as {\bf functions}, which compose associatively.\\ Then looplength $>$3 is impossible in arithmetic ring $Z_k(+,~.)$ mod $p^k$, seen as follows.
\begin{thm} .. (two basic solution types)\\ \hspace*{1cm}
Each normed solution of (2) is (an extension of) a triplet$^p$
or an inverse- pair. \end{thm} \begin{proof} Assume $r$ equations $1-n_i^{-1}\equiv n_{i+1}$ form a loop of length $r$ (indices mod $r$). Consider function $ICS(n)\equiv 1-n^{-1}$, composed of the three elementary functions: Inverse, Complement and Successor, in that sequence.~ Let $E(n)\equiv n$ be the identity function, and $n \neq 0,1,-1$ to prevent division by zero, then under function composition the {\bf third iteration} $[ICS]^3=E$, since $[ICS]^2(n)\equiv -1/(n-1) ~\rightarrow ~[ICS]^3(n)\equiv n$ (repeat substituting $1-n^{-1}$ for $n$). Since $C$ and $I$ commute, $IC$=$CI$, the 3! = 6 permutations of \{$I,C,S$\} yield only four distinct dual-folded-successor {\it "dfs"} functions:
\hspace*{.5cm} $ICS(n)=1-n^{-1},~SCI(n)=-(1+n)^{-1},
~CSI(n)=(1-n)^{-1}, ~ISC(n)=-(1+n^{-1})$.
By inspection each of these has $[dfs]^3=E$, referred to as {\bf loop length} 3. For a cubic root pair {\it dfs=E}, and 2-loops do not occur since there are no duplets (next note 3.2). Hence solutions of (2) have only {\it dfs} function loops of length 1 and 3: inverse pair and triplet. \end{proof}
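The loop-length-3 property of all four {\it dfs} functions can be exercised numerically (our illustration; $p=7$, $k=2$ is an arbitrary choice, and residues $n \equiv 0, \pm 1$ mod $p$ are skipped so that every iterate stays invertible):

```python
# Illustrative check that each dfs function has [dfs]^3 = E on Z mod p^k.
p, k = 7, 2
m = p**k
inv = lambda n: pow(n, -1, m)
dfs = {
    'ICS': lambda n: (1 - inv(n)) % m,
    'SCI': lambda n: (-inv(n + 1)) % m,
    'CSI': lambda n: inv((1 - n) % m),
    'ISC': lambda n: (-(1 + inv(n))) % m,
}
for name, f in dfs.items():
    for n in range(2, m):
        if n % p in (0, 1, p - 1):      # avoid 0, +1, -1 mod p
            continue
        assert f(f(f(n))) == n, name    # third iterate is the identity
```

Avoiding $0,\pm 1$ mod $p$ (not merely mod $p^k$) is what keeps every intermediate denominator a unit.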
A special triplet occurs if one of $a,b,c$ equals 1, say $a \equiv 1$. Then $bc \equiv 1$ since $abc \equiv 1$, while (3a) and (3c) yield $b^{-1} \equiv c \equiv -2$, so $b \equiv c^{-1} \equiv -2^{-1}$. Although triplet $(a,b,c) \equiv (1,-2,-2^{-1})$ satisfies conditions (3), 2 is not in core $A_{k>2}$, and by symmetry $a,b,c \equiv \!\!\!\!\!\!\//~~ 1$ for any triplet$^p$ of form (3).~ If $2^p \equiv \!\!\!\!\!\!\//~~ 2$ mod $p^2$ then 2 is not a $p$-th power residue, so triplet $(1,-2,-2^{-1})$ is not a triplet$^p$ for such primes (all up to at least $10^9$ except 1093, 3511).
\begin{figure}
\caption{{ ~~G = A.B = $g^*$ ~(mod $5^2$), ~~~~~Cycle in the plane}}
\label{fig.2}
\end{figure}
\subsection{ A triplet for each $n$ in $G_k$ }
Notice the proof of thm3.2 does not require $p$-th power residues. So {\bf any} $n \in G_k$ generates a triplet by iteration of one of the four {\it dfs} functions (thm3.2), yielding the main triplet structure of $G_k$ :
\begin{cor}
~{\rm Each} $n$ in $G_k ~(k>$0) generates a triplet of ~{\rm three}
inverse pairs,\\ \hspace*{1cm} except if ~$n^3 \equiv 1$ and $n \equiv \!\!\!\!\!\!\//~~ 1$ mod $p^k
~(p \equiv 1$ mod 6), which involves {\rm one} inverse pair. \end{cor}
Starting at $n_0 \in G_k$ six triplet residues are generated upon iteration of e.g. $SCI(n)$: $n_{i+1}\equiv -(n_i+1)^{-1}$ (indices mod 3), or another {\it dfs} function to prevent a non-invertible residue. Fewer than 6 residues are involved if 3 or 4 divides $p-1$:
If $3|(p-1)$ then a cubic root of 1 ($a^3 \equiv 1, ~a \equiv \!\!\!\!\!\!\//~~ 1$) generates just 3 residues: ~$a+1\equiv -a^{-1}$; \\ --- together with its complement this yields a subgroup
$(a+1)^*\equiv C_6$ ~(fig.1, $p$=7)\\ If 4 divides $p-1$ then an $x$ on the vertical axis has $x^2 \equiv -1$ so $x \equiv -x^{-1}$,\\ --- so the 3 inverse pairs involve then only five residues ~(fig.2: $p$=5).
\begin{enumerate} {\small \item
It is no coincidence that the period 3 of each {\it dfs} composition [~of $-n,~n^{-1},~n$+1; \\ ~~~e.g: $CIS(n) \equiv 1-n^{-1}$~] exceeds the number of symmetries of finite ring $Z_k(+,~.)$ by one. \item
{\bf No duplet} occurs: multiply $a+b^{-1} \equiv -1,~b+a^{-1} \equiv -1$ by $b$ resp. $a$ then $ab+1 \equiv -b$ and $ab+1 \equiv -a$, ~so that ~$-b \equiv -a$ and $a \equiv b$. \item
{\bf Basic triplet} mod $3^2: G_2 \equiv 2^* \equiv \{2,4,8,7,5,1\}$ is a 6-cycle of residues mod 9. ~Iteration: ~$SCI(1)^*: -(1+1)^{-1} \equiv 4,~ -(4+1)^{-1} \equiv 7,~ -(7+1)^{-1} \equiv 1$, and $abc \equiv 1.4.7 \equiv 1$ mod 9. } \end{enumerate}
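The basic triplet mod $3^2$ of note 3 can be replayed directly (our illustration):

```python
# Illustrative replay of the basic triplet mod 9 via SCI(n) = -(n+1)^(-1).
m = 9
sci = lambda n: (-pow(n + 1, -1, m)) % m
a = 1
b = sci(a)          # -(1+1)^(-1) = -5 = 4 mod 9
c = sci(b)          # -(4+1)^(-1) = -2 = 7 mod 9
assert (b, c) == (4, 7)
assert sci(c) == a                  # loop closes after three steps
assert a * b * c % m == 1           # abc = 1.4.7 = 28 = 1 mod 9
```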
\subsection{ The $EDS$ argument extended to non-core triplets }
The $EDS$ argument for the cubic root solution $CR$ (lem2.1), with all three terms in core, also holds for any triplet$^p$ mod $p^2$, because $A_2 \equiv F_2$ mod $p^2$, so all three terms are in core after some linear transform (5). Then for each of the three equivalences (3a-c) the $EDS$ property $(x+y)^p \equiv x^p+y^p$ holds, and thus no finite (equality preserving) extension exists, yielding inequality for the corresponding integers for all $k>$1, to be shown next. A cubic root solution is a special triplet$^p$ for $p \equiv 1$ mod 6, with $a \equiv b \equiv c$ in (3a-c).
Denote the $p-1$ core elements as residues of integer function
~$A(n)=n^{|B|}, ~(0<n<p)$, then by freedom of $p$-th power extension beyond mod $p^2$ (cor1.4) choose, for any $k>2$ :
{\bf (4)} ~Core increment form:~ $A(n+1)-A(n) \equiv (r_n)^p$ mod $p^k$,
~~~~~~~~ with $(r_n)^p \equiv r_n.p^2+1 ~(r_n>$0), ~hence $(r_n)^p \equiv 1$ mod $p^2$, ~but ~$\equiv \!\!\!\!\!\!\//~~ 1$ mod $p^3$ ~in general.
This rootform of triplets, with two terms in core, is useful for the additive analysis of subgroup $F_k$ of $p$-th power residues mod $p^k$ (re: the known Fermat's Last Theorem $FLT$ case1: residues coprime to $p$ - to be detailed in the next section).
Any assumed $FLT~case_1$ solution (5) can be transformed into form (4) in two steps that preserve the assumed $FLT$ equality for integers $<p^{kp}$ in full $p$-th power precision $kp$ where $x,y<p^k$, or $(k+1)p$ ~in case $p^k<x+y<p^{k+1}$ (one carry).\\ Namely, first a $scaling$ by an integer $p$-th power factor $s^p$ that is 1 mod $p^2$ (so $s \equiv 1$ mod $p$), to yield as one left-hand term the core residue $A(n+1)$ mod $p^k$; and secondly a $translation$ by an additive integer term $t$ which is 0 mod $p^2$ applied to both sides, resulting in the other left-hand term $-A(n)$ mod $p^k$, preserving the assumed integer equality (unit $x^p$ has inverse $x^{-p}$ in $G_k$). Without loss assume the normed form with $z^p \equiv 1$ mod $p^2$, then such {\bf linear transformation} ($s,t$) yields:
{\bf (5)}~~~~~~~~~~~
$x^p+y^p=z^p ~~\longleftrightarrow~~ (sx)^p+(sy)^p+t=(sz)^p+t$ ~~[ integers ],
~~~~~~~~ with~ $s^p \equiv A(n+1)/x^p, ~~~(sy)^p+t \equiv -A(n)$ ~mod $p^k$, ~so:
{\bf (5')} \hspace{3cm} $A(n+1)-A(n) \equiv (sz)^p+t$ ~mod $p^k$.
With $s^p \equiv z^p \equiv 1$ and $t \equiv 0$ mod $p^2$ this yields an equivalence which is 1 mod $p^2$, hence a $p$-th power residue, with two of the three terms in core. Such core increment form (4),(5') will be shown to have no (equality preserving) finite extension, of all residues involved, to $p$-th power integers, so the assumed integer $FLT~case_1$ equality cannot exist.
\begin{lem}
$p$-th powers of a 0-extended triplet$^p$ equivalence (mod $p^{k>1}$)
yield integer inequality. \end{lem} \begin{proof} In a triplet for some prime $p>2$ the core increment form (4) holds for three distinct values of $n<p$, where scaling by respective factors $-(r_n)^{-p}$ in $G_k$ mod $p^k$ returns 1-complement form (2). Consider each triplet equivalence separately, and for a simple notation let $r$ be any of the three $r_n$, with successive core residues $A(n+1) \equiv x^p \equiv x, ~-A(n) \equiv y^p \equiv y$ mod $p^k$. Then~ $x^p+y^p \equiv x+y \equiv r^p$ mod $p^k$, where $r^p \equiv 1$ mod $p^2$, has both summands in core, but right hand side $r^p \equiv \!\!\!\!\!\!\//~~ 1$ mod $p^{k>2}$ is not in core, with deviation $d \equiv r-r^p \equiv \!\!\!\!\!\!\//~~ 0$ mod $p^k$.\\ Hence~ $r \equiv r^p+d \equiv (x+y)+d$ mod $p^k$ ~(with $d \equiv 0$ mod $p^k$ in the cubic root case), and~ $x^p+y^p \equiv (x+y+d)^p$ mod $p^k$. This equivalence has no finite (equality preserving) 0-extension to integer $p$-th powers since $X^p+Y^p<(X+Y+D)^p$, ~so the assumed $FLT$ case$_1$ solution cannot exist. \end{proof}
For $p=7$ the cubic roots are $\{42,24,01\}$ mod $7^2$ (base 7). In full precision (14 base-7 digits): $42^7+24^7=01424062500666$ while $66^7=60262046400666$; these are equivalent mod $7^5$ but differ mod $7^6$.
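The base-7 arithmetic above can be checked directly; a minimal sketch (the base-7 numerals $42$, $24$, $66$ denote the integers $30$, $18$, $48$):

```python
# Check the p=7 cubic-root example: 42, 24, 66 are base-7 numerals,
# i.e. the integers 30, 18 and 48. Their 7th powers agree mod 7^5
# but differ mod 7^6, as claimed in the text.
p = 7
x, y, z = int("42", 7), int("24", 7), int("66", 7)  # 30, 18, 48
lhs, rhs = x**p + y**p, z**p

assert (lhs - rhs) % p**5 == 0      # equivalent mod 7^5
assert (lhs - rhs) % p**6 != 0      # but not mod 7^6

def to_base7(n, width=14):
    digits = []
    for _ in range(width):
        digits.append(str(n % 7))
        n //= 7
    return "".join(reversed(digits))

print(to_base7(lhs))  # 01424062500666
print(to_base7(rhs))  # 60262046400666
```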
More specifically, linear transform (5) adjoins to an $FLT$ case$_1$ solution mod $p^2$ a solution with two adjacent core residues mod $p^k$ ~(5') for any precision $k>1$, while preserving the assumed integer $FLT$ case$_1$ equality. Without loss of generality one can assume scale factor $s<p^k$ and shift term $t<p^{2k}$, yielding double precision integer operands $\{sx,sy,sz\} < p^{2k}$, with an (assumed) $p$-th power equality of terms $<p^{2kp}$.
Although equivalence mod $p^{2k+1}$ can hold by proper choice of the linear transform $(s,t)$, \emph{inequivalence} at base-$p$ triple precision $3k+1$ follows by:
\begin{lem} ~(triple precision inequality): \\ \hspace*{5mm} Any extension $(X,Y,Z)$ of $(x,y,z)$ in~ $x^p+y^p \equiv z^p$ mod $p^{k>1}$ ($FLT_k$ case$_1$) yields an integer $p$-th power inequality (of terms $<p^{pk}$), with in fact inequivalence $X^p+Y^p \not\equiv Z^p$
mod $p^{3k+1}$. \end{lem} \begin{proof} Let $X=up^k+x,~ Y=vp^k+y,~ Z=wp^k+z$ extend the residues $x,y,z<p^k$, such that $X,Y$ mod $p^{k+1}$ are not both in core $A_{k+1}$. So the extensions do not extend core precision $k$, and without loss of generality take $u,v,w<p^k$, due to a scale factor $s<p^k$ in (5). Write $h=(p-1)/2$; then binomial expansion up to quadratic terms yields:
~~~~~~~~ $X^p \equiv u^2x^{p-2}h~p^{2k+1}+ux^{p-1}p^{k+1}+x^p$ ~~mod $p^{3k+1}$, ~and similarly:
~~~~~~~~ $Y^p \equiv v^2y^{p-2}h~p^{2k+1}+vy^{p-1}p^{k+1}+y^p$ ~~mod $p^{3k+1}$,
~and:
~~~~~~~~ $Z^p \equiv w^2z^{p-2}h~p^{2k+1}+wz^{p-1}p^{k+1}+z^p$ ~~mod $p^{3k+1}$,
where: $x^p+y^p \equiv x+y \equiv z^p$ ~mod $p^k$, ~and~ $x^{p-1} \equiv y^{p-1} \equiv 1$ ~mod $p^k$, but not so mod $p^{k+1}$: \\ $u,v$ are such that $X,Y$ are not both in core $A_{k+1}$, hence core precision $k$ is not increased.
By lemma 3.1 the 0-extension of $x,y,z$ (so $u=v=w=0$) does not yield the required equality $X^p+Y^p=Z^p$. To find for which maximum precision equivalence \emph{can} hold, choose $u,v,w$ so that:
~~$(u+v)p^{k+1}+x^p+y^p \equiv wz^{p-1}p^{k+1}+z^p$ mod $p^{2k+1}$ ~[*], ~yielding~ $X^p+Y^p \equiv Z^p$ mod $p^{2k+1}$.
A cubic root solution also has $z^p \equiv z$ in core $A_k$, so $z^{p-1} \equiv 1$ mod $p^k$; then $w=u+v$ with $w^2>u^2+v^2$ would require $x^p+y^p \equiv z^p$ mod $p^{2k+1}$, readily verified for $k=2$ and any prime $p>2$. \\[1ex] Such an extension [*] implies inequivalence $X^p+Y^p \not\equiv Z^p$ mod $p^{3k+1}$ for non-zero extensions $u,v,w$, because $u+v=w$ together with $u^2+v^2=w^2=(u+v)^2$ yields $uv=0$. So any (zero or nonzero) extension yields inequivalence mod $p^{3k+1}$. \end{proof}
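The truncated binomial expansion used in the proof can be spot-checked numerically; a small sketch with illustrative sample values (the choices $p=5$, $k=2$ are arbitrary):

```python
# Verify X^p = u^2 x^(p-2) h p^(2k+1) + u x^(p-1) p^(k+1) + x^p  (mod p^(3k+1))
# for X = u p^k + x and h = (p-1)/2. The binomial terms with j >= 3 vanish
# mod p^(3k+1) because C(p,j) is divisible by p for 0 < j < p.
from math import comb

p, k = 5, 2
h = (p - 1) // 2
mod = p**(3*k + 1)

assert all(comb(p, j) % p == 0 for j in range(1, p))  # C(p,j) divisible by p

for u in range(1, p**k):
    for x in range(1, p**k):
        if x % p == 0:
            continue  # case_1: x coprime to p
        X = u * p**k + x
        approx = (u*u * x**(p-2) * h * p**(2*k+1)
                  + u * x**(p-1) * p**(k+1) + x**p) % mod
        assert pow(X, p, mod) == approx
print("expansion verified for p=5, k=2")
```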
\section{Residue triplets and Fermat's integer powersum inequality}
Core $A_k$ as $FST$ extension, the additive zero-sum property of its subgroups (thm1.1), and the triplet structure of the units group $G_k$ allow a direct approach to Fermat's Last Theorem:
{\bf (6)}~~~~~
$x^p+y^p = z^p$ (prime $p>2$) ~has no solution for positive
integers $x,~y,~z$\\ \hspace*{2cm}
with ~~case$_1$ : ~$xyz \not\equiv 0$ mod $p$,
and ~~case$_2$ : ~$p$ divides one of $x,y,z$.
Usually (6) is stated with exponent $n>2$, but it suffices to show the inequality for primes $p>2$, because for a composite exponent $m=pq$ we have $a^{pq}=(a^p)^q= (a^q)^p$. If $p$ divides two terms then it also divides the third, and all terms can be divided by $p^p$. So in case$_2$: $p$ divides just one term.
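The exponent reduction is a one-line identity; a toy check (the values $a=3$, $p=5$, $q=7$ are arbitrary):

```python
# Composite exponent m = p*q reduces to prime exponents: a^(pq) = (a^p)^q = (a^q)^p,
# so an FLT inequality for all odd prime exponents covers all composite odd-prime
# multiples as well.
a, p, q = 3, 5, 7
assert a**(p*q) == (a**p)**q == (a**q)**p
print("exponent reduction holds for a=3, p=5, q=7")
```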
A finite integer $FLT$ solution of (6) has three $p$-th powers $<p^k$ for some finite fixed $k$, so it occurs in $Z_k$, yet with no \emph{carry} beyond $p^{k-1}$, and (6) is the 0-extension of this solution mod $p^k$. Each residue $n$ mod $p^k$ is represented uniquely by $k$ digits, and is the product of a $j$-digit number as 'mantissa' relatively prime to $p$, and $p^{k-j}$, represented by $k-j$ trailing zeros (cor1.3).
Normation (2) to $rhs=-1$ simplifies the analysis, and maps residues ($k$ digits) to residues, keeping the problem finite. Inverse normation back to (6) mod $p^k$ is always possible in case$_1$, using an inverse scale factor in group $F_k$. So normation does {\bf not} map to the reals or rationals.
The present approach needs only a simple form of Hensel's lemma [5] (from general $p$-adic number theory), which is a direct consequence of cor1.2: extend the 1-complement form digit-wise such that the $i$-th digits of weight $p^i$ in $a^p$ and $b^p$ sum to $p-1$ (all $i \geq 0$), with $p$ choices per extra digit. Thus to each normed solution of (2) mod $p^2$ correspond $p^{k-2}$ solutions mod $p^k$:
\begin{cor} ~(1-cmpl extension)~~~~
A normed $FLT_k$ root is an extended $FLT_2$ root. \end{cor}
\subsection{Proof of the FLT inequality}
Regarding $FLT$ case$_1$, an inverse-pair and triplet$^p$ are the only (normed) $FLT_k$ roots (thm3.2). As shown (lem3.1), any assumed integer case$_1$ solution has a corresponding equivalent core increment form (4) with two terms in core, which has no integer extension, against the assumption.
\begin{thm} ($FLT$ Case 1).
~For prime $p>$2 and integers $x,y,z>0$ coprime to $p$ :\\ \hspace*{1in}
~$x^p+y^p=z^p$ has no solution. \end{thm} \begin{proof} ~An $FLT_{k>1}$ solution is a linear transformed extension of an $FLT_2$ root in core $A_2=F_2$ (cor4.1). By lemmas 2.2c and 3.1 it has no finite $p$-th power extension, yielding the theorem. \end{proof}
In $FLT$ case$_2$ just one of $x,y,z$ is a multiple of $p$, hence $p^p$ divides one of the three $p$-th powers in $x^p+y^p=z^p$. Again, any assumed case$_2$ equality can be scaled and translated to yield an equivalence mod $p^p$ with two terms in core $A_p$, which has no integer extension, contradicting the assumption.
\begin{thm} ($FLT$ case$_2$) ~~~For prime $p>$2 and positive integers $x,y,z$ :\\ \hspace*{1in}
if~ $p$ ~divides just one of~ $x,y,z$ ~then ~$x^p+y^p=z^p$
~has no solution. \end{thm} \begin{proof} In a case$_2$ solution $p$ divides a left-hand term, $x=cp$ or $y=cp~(c>0)$, or the right-hand side $z=cp$. Bring the multiple of $p$ to the right-hand side: if $y=cp$ we have~ $z^p-x^p=(cp)^p$ (similarly for $x=cp$), while otherwise $x^p+y^p=(cp)^p$. So the sum or difference of two $p$-th powers coprime to $p$ must be shown not to yield a $p$-th power $(cp)^p$ for any $c>0$:
{\bf (7)}~~~~
$x^p \pm y^p = (cp)^p$ ~has no solution for integers $x,y,c>0$.
Notice that the core increment form (4) does not apply here. However, by $FST$ the two left-hand terms, coprime to $p$, are either complementary or equivalent mod $p$, depending on whether their sum or difference is $(cp)^p$. Scaling by $s^p$ for some $s \equiv 1$ mod $p$, so $s^p \equiv 1$ mod $p^2$, transforms one left-hand term into a core residue $A(n)$ mod $p^p$, with $n \equiv x$ mod $p$.
And translation by adding $t \equiv 0$ mod $p^2$ yields the other term $A(n)$ or $-A(n)$ mod $p^p$ respectively. The right-hand side then becomes $s^p(cp)^p+t$, equivalent to $t$ mod $p^p$. So an assumed equality (7) yields, by two equality preserving transformations, the next equivalence (8), where $A(n) \equiv u \equiv u^p$ mod $p^p$ ~($u$ in core $A=A_p$ for $0<n<p$ with $n \equiv x \equiv u$ mod $p$) and $s \equiv 1,~ t \equiv 0$ mod $p^2$:
{\bf (8)}~~~~~ $u^p \pm u^p \equiv u \pm u \equiv t$ mod $p^p ~(u \in A_p)$,~
~where~ $u \equiv (sx)^p$ ,~ $\pm u \equiv \pm (sy)^p+t$ mod $p^p$.
Equivalence (8) does not extend to integers, because $U^p+U^p>U+U$ and $U^p-U^p=0 \neq T$, where $U,T$ are the 0-extensions of $u,t$ mod $p^p$ respectively. This contradicts the assumed equality (7), which consequently must be false. \end{proof}
{\bf Remark}: {\small ~~From a practical point of view the $FLT$ integer inequality of a 0-extended $FLT_k$ root (case$_1$) is caused by the \emph{carries} beyond $p^{k-1}$, amounting to a multiple of the modulus, produced in the base-$p$ arithmetic. In the expansion of $(a+b)^p$, the mixed terms \emph{can} vanish mod $p^k$ for some $a,b,p$. Ignoring the carries yields $(a+b)^p \equiv a^p+b^p$ mod $p^k$, and the $EDS$ property is, as it were, the \emph{syntactic} expression of ignoring the carry (\emph{overflow}) in residue arithmetic. In other words, in terms of $p$-adic number theory, this means 'breaking the Hensel lift': the residue equivalence of an $FLT_k$ root mod $p^k$, although it holds for all $k>0$, \emph{does} imply inequality for integers due to its special triplet structure, where exponent $p$ distributes over a sum.}
\section*{ Conclusions }
\begin{enumerate} \item
Symmetries $-n,~n^{-1}$ determine $FLT_k$ roots but do not exist
for positive integers. \item
Another proof of $FLT$ case$_1$ might use the product 1 mod $p^{k}$ of $FLT_k$ root terms: $ab \equiv 1$ or $abc \equiv 1$, which is impossible for integers $>1$. The product of $m$ (= 2, 3, $p$) ~$k$-digit integers has $mk$ digits. Arithmetic mod $p^k$ {\bf ignores carries} of weight $p^k$ and beyond. Removing the mod $p^k$ condition from a particular $FLT_k$ root equivalence 0-extends its terms, and the ignored carries imply inequality for integers. \item {\bf Core} $A_k \subset G_k$ as extension of $FST$ to mod $p^{k>1}$, and the zero-sum of its subgroups (thm1.1) yielding the cubic $FLT$ root (lem2.1), started this work. The triplets were found by analysing a computer listing (tab.2) of the $FLT$ roots mod $p^2$ for $p<200$. \item Linear analysis (mod $p^2$) suffices for root existence (Hensel, cor4.1), but {\bf quadratic} analysis (mod $p^3$) is necessary to derive the triplet$^p$ core-increment form {\bf (4), (5,5')} with maximally two terms in core $A_3$. \item "$FLT$ eqn(1) has no finite solution" and "$[ICS]^3$ has no finite fixed point" \\ are equivalent (thm3.2), yet each $n \in G_k$ is a fixed point of $[ICS]^3$ mod $p^k$ \\ (re: $FLT_2$ roots imply all roots for $k>2$, yet no 0-extension to integers). \item Crucial in finding the arithmetic triplet structure, and the double precision core-increment symmetry and inequivalence (lem2.2c), were extensive computer experiments, and the application of {\it associative function composition}, the essence of semigroups, to the three elementary functions (thm3.2): \\ \hspace*{1cm}
successor $S(n)=n$+1, complement $C(n)=-n$ and inverse $I(n)=n^{-1}$, \\ with period 3 for $SCI(n)=-(n+1)^{-1}$ and the other three such compositions. In this sense $FLT$ is not a purely arithmetic problem, but essentially requires non-commutative, associative function composition for its proof. \end{enumerate}
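The period-3 claim for the composition $SCI$ is easy to verify numerically; a sketch over the units mod $p^k$ (the choice $p=7$, $k=3$ is an arbitrary example):

```python
# SCI(n) = -(n+1)^(-1) mod p^k has period 3: applying it three times returns n.
# Algebraically: SCI^2(n) = -(n+1)/n and SCI^3(n) = n, wherever the needed
# inverses exist, i.e. for n and n+1 both coprime to p.
p, k = 7, 3
m = p**k

def sci(n):
    return (-pow(n + 1, -1, m)) % m  # -(n+1)^(-1) mod p^k

for n in range(m):
    if n % p == 0 or (n + 1) % p == 0:
        continue  # n and n+1 must be units along the 3-cycle
    assert sci(sci(sci(n))) == n
print("SCI has period 3 on the units mod", m)
```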
\section*{ Acknowledgements }
The opportunity given to me by the program committee in Prague [2] to present this simple application of finite semigroup structure to arithmetic is remarkable and greatly appreciated. The feedback from several correspondents is also gratefully acknowledged.
\section*{ References }
\begin{enumerate} {\small \item T.~Apostol: {\it Introduction to Analytic Number Theory} (thm 10.4--6), Springer Verlag, 1976. \item N.~F.~Benschop: ``The semigroup of multiplication mod $p^k$, an extension of Fermat's Small Theorem, and its additive structure'', International conference {\it Semigroups and their Applications} (Digest p.~7), Prague, July 1996. \item A.~Clifford, G.~Preston: {\it The Algebraic Theory of Semigroups}, Vol.~1 (pp.~130--135), AMS Survey \#7, 1961. \item S.~Schwarz: ``The Role of Semigroups in the Elementary Theory of Numbers'', Math.~Slovaca, V31, N4, pp.~369--395, 1981. \item G.~Hardy, E.~Wright: {\it An Introduction to the Theory of Numbers} (Chap.~8.3, Thm 123), Oxford Univ.~Press, 1979.
} \end{enumerate}
\begin{center} -----///----- \end{center}
\begin{verbatim}
n. n F= n^7 F'= PDo PD1 PD2 p=7
0. 0000 000000000 000000001 010000000 000000000 7-ary code
1. 0001 000000001 000000241 023553100 050301000 9 digits
2. 0002 000000242 000006001 < 055440100 446621000
3. 0003 000006243 000056251 150660100 401161000 '<' :
4. 0004 000065524 000345001 < 324333100 302541000 Cubic roots
5. 0005 000443525 001500241 612621100 545561000 (n+1)^p - n^p
6. 0006 002244066 004422601 355655100 233411000 = 1 mod p^3
x xx ^^^sym
7. 0010 010000000 013553101 410000000 000000000
8. 0011 023553101 031554241 116312100 062461000
9. 0012 055440342 062226001 < 351003100 534051000
10. 0013 150666343 143432251 630552100 600521000
11. 0014 324431624 255633001 < 455101100 160521000
12. 0015 613364625 444534241 156135100 242641000
13. 0016 361232166 025434501 110316100 223621000
14. 0020 420000000 423165201 010000000 000000000
15. 0021 143165201 263245241 402261100 313151000
16. 0022 436443442 342105001 < 502606100 060611000
17. 0023 111551443 000651251 326354100 541031000
18. .024 112533024 ! 660000001 < 036146100 035011000 (n+1)^p - n^p
19. .025 102533025 ! 366015241 612322100 531201000 = 1 mod p^7
20. 0026 501551266 625115401 332500100 600441000 --------&c

Table 1: Periodic Difference of i-th digit: PDi(n) = F(n+p^i) - F(n) \end{verbatim}
\label{lastpage} \begin{verbatim}
Find a+b = -1 mod p^2 (in A=F < G): Core A={n^p=n}, F={n^p} =A if k=2.
G(p^2)=g*, log-code: log(a)=i, log(b)=j; a.b=1 --> i+j=0 (mod p-1)
TRIPLET^p: a+ 1/b= b+ 1/c= c+ 1/a=-1; a.b.c=1; (p= 59 79 83 179 193 ...
Root-Pair: a+ 1/a=-1; a^3=1 ('C3') <--> p=6m+1 (Cubic rootpair of 1)
p:6m+-1 g=generator; p < 2000: two triplets at p= 59, 701, 1811
5:- 2 three triplets at p= 1093
7:+ 3 C3 11:- 2
13:+ 2 C3 17:- 3
19:+ 2 C3 23:- 5 29:- 2
31:+ 3 C3
37:+ 2 C3 41:- 6
43:+ 3 C3 47:- 5
53:- 2 log lin mod p^2
59:- 2 ------ ------------
-2,-25( 40 15, 18 43) 25, 23( 35 11, 23 47) -23, 2( 53 54, 5 4)
-- -- -- -- -- --
27, 19( 18 44, 40 14) -19, 8( 13 38, 45 20) -8,-27( 5 3, 53 55)
61:+ 2 C3
67:+ 2 C3 71:- 7
73:+ 5 C3
79:+ 3 C3
30, 20( 40 46, 38 32) -20, 10( 36 42, 42 36) -10,-30( 77 11, 1 67)
83:- 2
21, 3( 9 74, 73 8) -3, 18( 54 52, 28 30) -18,-21( 13 36, 69 46)
89:- 3
97:+ 5 C3 101:- 2 103:+ 5 C3 107:- 2 109:+ 6 C3 113:- 3 127:+ 3 C3 131:- 2 137:- 3 139:+ 2 C3 149:- 2 151:+ 6 C3 157:+ 5 C3 163:+ 2 C3 167:- 5 173:- 2 179:- 2
19, 1( 78 176,100 2) -1, 18( 64 90,114 88) -18,-19( 88 59, 90 119) 181:+ 2 C3 191:- 19 193:+ 5 C3
-81, 58( 64 106,128 86) -58, 53( 4 101,188 91) -53, 81(188 70, 4 122) 197:- 2 199:+ 3 C3
-------------------------------------
Table 2: FLT_2 root: inv-pair (C3) & triplet^p (for p < 200) \end{verbatim}
\end{document} |
\begin{document}
\title{Heralded Control of Mechanical Motion by Single Spins} \author{D. D. Bhaktavatsala Rao} \author{S. Ali Momenzadeh} \author{J\"{o}rg Wrachtrup}
\affiliation{3. Physikalisches Institut, Research Center SCOPE, and MPI for Solid State Research, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany } \date{\today} \begin{abstract} We propose a method to achieve a high degree of control of nanomechanical oscillators by coupling their mechanical motion to single spins. Manipulating the spin alone and measuring its quantum state heralds the cooling or squeezing of the oscillator even for weak spin-oscillator couplings. We show analytically that the asymptotic behavior of the oscillator is determined by a spin-induced thermal filter function whose overlap with the initial thermal distribution of the oscillator determines its cooling, heating or squeezing. Counterintuitively, the dependence of the cooling rate on the instantaneous thermal occupancy of the oscillator renders the cooling or squeezing robust even for high initial temperatures and damping rates. We further estimate how the proposed scheme can be used to control the motion of a thin diamond cantilever by coupling it to its defect centers at low temperature.
\end{abstract}
\maketitle
Over the past decade, ground state cooling of mechanical oscillators has paralleled laser cooling methods for atoms, allowing quantum mechanical effects to be realized even at microscopic length scales \cite{ref1}. With this ability these devices can be used in a variety of applications ranging from quantum metrology to quantum information processing \cite{hybrid}. Cooling (micro) nanomechanical oscillators (NMO) by the laser induced radiation pressure force has received considerable attention both theoretically and experimentally, which led to the observation of ground state cooling of mechanical motion \cite{ref2}, quantum back-action \cite{ref3} and other nonclassical effects like squeezing \cite{ref4}. Recent advances in solid-state materials hosting single localized spins (emitters) have also shown them to be promising candidates both for controlling the dynamics of the oscillator and for inducing interaction (entanglement) between distant spins via the dynamical back-action of the NMO \cite{ref5}.
Solid state spins in diamond are robust in terms of their good spin coherence properties, a high degree of spin control \cite{ref6} and a well-resolved optical spectrum at low temperatures \cite{ref7} that allows optical excitation to various levels which can either initialize or read out the electronic spin state with fidelities exceeding $98\%$ \cite{ref8}. On the other hand, they suffer from weak coupling to external spins, incident photons \cite{ref9} and phonons \cite{exp2}. Even with such weak couplings, projective readout techniques can be used to achieve deterministic nuclear spin state preparation \cite{ref12}, entanglement between solid state qubits mediated by photons \cite{ref13}, and transfer and storage of single photon states in nuclear spins \cite{ref14}. In this work we employ these post-selection techniques to achieve robust control of the mechanical modes of an NMO that are weakly coupled to the host spins.
Coupling single spins to the mechanical motion of an NMO has already been demonstrated \cite{exp1, exp2, exp3, exp4}. Due to the large thermal occupancy of these modes even at low temperatures, realizing gates between distant spins coupled to a common mechanical mode is a nontrivial task \cite{ref5}. Hence one needs to cool or squeeze these modes to a high degree to observe any coherent effects or to perform gates between distant spins coupled to a common mode of the NMO. However, with an extremely small spin-phonon coupling such tasks can be challenging: to achieve coherent coupling between a spin and a phonon the spin cooperativity $C_S =g^2T_2/\gamma$ should be greater than unity \cite{lukin}. This is possible only if the inverse spin coherence time, $1/T_2$, and the dissipation rate of the oscillator, $\gamma$, are smaller than the spin-phonon coupling $g$. In this work we show that even when this condition is not met, repetitive projections of the spin onto a specific state allow a filtering of unwanted thermal occupancies of the oscillator, thereby allowing us to cool, heat or squeeze its motional degrees of freedom.
The proposed scheme is based on the conditional evolution of the NMO under repetitive post-selection of the state of the spin to which it is coupled. When coupling quantum systems of different Hilbert space dimensions, i.e., a two-level spin with an $N$-level oscillator, a short time evolution that looks like a perturbation on the spin observables can lead to a dramatic change of the oscillator dynamics. Although repeatedly finding the spin in a given state appears to effectively freeze its dynamics (Zeno effect), it can on the contrary drive the other system quite far from its equilibrium state \cite{naka, klaus}. In this work we exactly solve the dynamics of the NMO conditioned on repetitive measurement of a solid state spin to which it is coupled, and extract a measurement induced nonlinear cooling rate that leads to rapid near ground state cooling of the oscillator modes.
We shall consider the geometry shown in Fig.1 (a), i.e., a one-side clamped microcantilever with single spins implanted in it. In the presence of a large magnetic field gradient these spins (two-level systems) couple to a position-dependent magnetic field, i.e., the spins experience a phase shift of their energy states that depends on the position of the cantilever. The coupling between a single spin and a given mechanical mode of frequency $\omega_m$ is determined by the zero point motion of the oscillator and the magnetic field gradient \cite{Rabl}. The Hamiltonian describing the dispersive interaction between the spin and the NMO (setting $\hbar = 1$) is given by \begin{equation} H=\sum_m [\omega_m a^\dagger_m a_m + g_mS^z (a_m + a^\dagger_m)] + \Omega(t)S^x, \end{equation} where $g_m$ is the coupling between the spin ($S$) and the $m^{\rm{th}}$ mode of the oscillator ($a_m$) of frequency $\omega_m$. The stroboscopic control of the spin is determined by the time-dependent function $\Omega(t)$. In addition to this unitary coupling, the spin and the oscillator modes suffer nonunitary decay processes which we consider in the later part of this paper. Under evolution governed by the above Hamiltonian with stroboscopic spin control, i.e., $\Omega(t) =\pi \sum_k \delta(t-\tau_k)$, the time-evolution operator takes the simple form \begin{equation} U(t) = \mathcal{D}_+(t)\ket{1}\bra{1}+\mathcal{D}_-(t)\ket{0}\bra{0}, \end{equation} where $\ket{1(0)}$ are the eigenstates of the spin operator $S^z$ and the multimode displacement operators $\mathcal{D}_\pm = \exp({\pm i\sum_m g_m(F_t (\omega_m)a_m + F^*_t (\omega_m)a^\dagger_m)/\omega_m})$ \cite{uhrig}. The (mode) filter function $F_m(t)$ is induced by the applied control, and for equally spaced $\pi$-pulse intervals (i.e., $\tau_k=\tau \equiv \pi/\omega$) takes the well-known form $F_\tau(n_c,\omega) =4\tan^2[\omega t/(2n_c+2)]\cos^2(\omega t/2)$.
In the limit of a large number of $\pi$-pulses, $n_c$, the filter function $F_m(t) \approx 2n_c\delta(\omega-\pi/\tau)$, i.e., only the coupling to a single mode (or to modes whose frequencies are integer multiples of $\omega$) remains relevant. The total time is now expressed in units of $\tau$ and the dynamics is determined by a single dimensionless parameter $\lambda = 2g n_c/\omega$.
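As a rough numerical illustration of this mode selection (assuming the quoted closed form with total time $t=n_c\tau$; the parameter values are arbitrary), the filter is sharply peaked at the resonant mode:

```python
import math

def filter_F(omega, n_c, tau):
    """Evaluate F_tau(n_c, omega) = 4 tan^2[omega t/(2n_c+2)] cos^2(omega t/2),
    with total evolution time t = n_c * tau, as quoted in the text."""
    t = n_c * tau
    return 4 * math.tan(omega * t / (2 * n_c + 2))**2 * math.cos(omega * t / 2)**2

omega0 = 1.0              # mode frequency (arbitrary units)
tau = math.pi / omega0    # resonant pulse spacing tau = pi/omega
n_c = 100                 # even number of pi-pulses

on_res = filter_F(omega0, n_c, tau)
off_res = filter_F(0.8 * omega0, n_c, tau)
print(on_res, off_res)    # filter is orders of magnitude larger on resonance
```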
The pulse sequence for implementing the proposed scheme is shown in Fig. 1 (c). Starting from the initial spin state $\ket{0}$, we prepare a superposition state using the first $\pi/2$ pulse and let it evolve for a time $t = n_c\tau$ with stroboscopic interruptions at intervals $\tau$. The spin is then projected back onto the energy basis ($\ket{1(0)}$) with the final $\pi/2$ pulse. We then read out the state of the spin optically and note the measurement result. Upon a successful measurement result, i.e., finding the spin in the state $\ket{0}$, the procedure is repeated $M$ times, as shown in Fig.1. Since the time $t$ is arbitrary, the probability of obtaining a successful measurement is always below unity. Given this conditional spin state dynamics, the evolution of the NMO, initially in thermal equilibrium, is greatly affected, as we analyze below.
\begin{figure}
\caption{(a) Schematic illustration of the control procedure detailed in the paper. A single-side clamped NMO hosting single spins (black) in a gradient magnetic field $B$ (blue) is shown. The heralding mechanism is depicted: a laser optically reads out the solid-state spin and the emitted light is collected on a photo-detector. Also shown is the readout sequence, with '$1$' signifying a successful readout (projection) of the spin state and '$0$' an unsuccessful event. (b) Schematic illustration of the initial thermal distribution (grey solid line) of the oscillator and the spin-controlled filter functions that lead to either cooling (blue solid line) or heating (red dashed line) of the oscillator. (c) The full pulse sequence for the heralded control of the oscillator. }
\label{level}
\end{figure} For an oscillator in its thermal state $\rho_B = \frac{1}{\sum_n e^{-\beta n}}\sum_n e^{-\beta n}\ket{n}\bra{n}$, with an initial thermal occupancy $n_\omega = Tr[\rho_B a^\dagger a]$, the effect of the spin-phonon coupling is to generate random displacements governed by the operators $\mathcal{D}$ shown in Eq. (2). As we are dealing with displacement operators, it is instructive to work in the coherent basis, where the above-mentioned thermal state can be rewritten as \begin{equation}
\rho_B(0) = \int \mathcal{P}_0(\alpha) \ket{\alpha}\bra{\alpha}d^2 \alpha, ~\mathcal{P}_0(\alpha) = \frac{1}{\pi n_\omega}e^{-|\alpha|^2/n_\omega}. \end{equation} Upon obtaining a successful spin state measurement, the oscillator is projected onto the state \begin{equation} \rho_B(t) = \frac{V\rho_B(0)V^\dagger}{{\rm Tr}[V\rho_B(0)V^\dagger]}, \end{equation} where $V(t)= \frac{1}{2}(\mathcal{D}_+(t) +\mathcal{D}_-(t))$. The success probability for this projection is then simply ${{\rm Tr}[V\rho_B(0)V^\dagger]}$. Now $\rho_B(t)$ is the initial state of the oscillator for the next repetition: evolving again for a time $t$, a successful measurement of the spin projects the oscillator onto the state $\rho_B(2t) = \frac{V\rho_B(t)V^\dagger}{{\rm Tr}[V\rho_B(t)V^\dagger]}$. After $M$ successful projections of the spin onto its desired state, the oscillator can again be expressed in the coherent basis as in Eq. (3), where the $\mathcal{P}$-function is now modified to \begin{eqnarray} \mathcal{P}_M(\alpha) = {\mathcal{P}_0(\alpha) }G^\alpha_M(\epsilon,\lambda), \end{eqnarray} where \begin{equation} G^\alpha_M(\epsilon,\lambda) =C_M \prod_{k=1}^M{\rm Re}\left[\exp\left(\frac{\lambda (e^{i\epsilon t}-1)}{\epsilon}\left\lbrace\alpha e^{i(k-1)\epsilon t}+\alpha^*e^{-ik\epsilon t}\right\rbrace\right)\right] \end{equation} acts as a spin-induced thermal filter function (see Suppl. Info), suppressing unwanted excitations in the phase space of the oscillator. In the above equation $\epsilon$ quantifies the slight off-resonant driving of the spin, i.e., the evolution of the spin is interrupted stroboscopically at intervals of $\tau = \pi/(\omega - \epsilon)$, and $C_M$ is determined from the normalization condition of the oscillator state, i.e., $Tr[\rho_B(Mt)] =1$. In Fig.
2 (a)-(b) we plot the $\mathcal{P}$-function in the complex plane of $\alpha$, showing the initial thermal distribution, the squeezed distribution for resonant driving and a narrowed thermal distribution for off-resonant driving of the spin. These confirm the above described picture of spin-induced filtering in the oscillator's Hilbert space to achieve a given task. We also confirm the above analysis by performing exact numerical diagonalization of the dynamics generated by the Hamiltonian in Eq. (1). These results are shown in Fig. 2 (d)-(f).
The key aspect of this measurement based control is to obtain a high probability for finding the spin in a given state ($\ket{0}$). To obtain such a high probability, the effective spin-oscillator coupling should satisfy the condition $\lambda^2n_\omega < 1$. This condition guarantees that the probability of finding the spin close to its initial state is much higher than $50\%$ during the first evolution cycle, i.e., over a time $t$. This in turn indicates that for higher temperatures it is better to have smaller effective couplings $\lambda$ (possible by adjusting the number of control pulses $n_c$). After the first successful projection of the spin, if the oscillator has also been projected onto a state with smaller thermal occupancy, then the above condition guarantees that the success probability for the next heralding event is higher than for the previous one. As the oscillator gets gradually cooled, each heralding event increases the probability for the next one, resulting in the cascading behavior shown in Fig. 2d, where an exponential convergence to unity is observed with either cooling or squeezing of the oscillator. As the cooling procedure is not deterministic, one can only quote the event rate of achieving a ground state cooled oscillator. For example, the net success probability (event rate) to achieve the cooling shown in Fig. 2 (f) is $0.125$, i.e., if a single experiment is described by $M$ repetitive spin-state measurements (see Fig. 1), then for every $8$ experiments one finds success in all $M$ measurements, thereby driving the oscillator towards a maximally cooled/squeezed state.
\begin{figure}
\caption{The $\mathcal{P}_M$-function is plotted in the complex plane of $\alpha$ for (a) the initial thermal state, (b) the squeezed state obtained under resonant driving of the spin ($\epsilon = 0$) and (c) the thermally cooled state for off-resonant driving of the spin, $\epsilon=0.1\lambda\omega$. We have used $\lambda = 0.25$ and $M=10$. (d) The probability of successful heralding is plotted as a function of the number of heralding events $M$. (e) The squeezing of the oscillator position and (f) the thermal occupancy of the oscillator are plotted as functions of $M$. In plots (d)-(f) we compare different periodicities of the applied $\pi$-pulses: resonant driving $\tau = \pi/\omega$ (red solid line) and slight off-resonant driving $\tau = \pi/(\omega + \epsilon)$ (blue circles). The parameters used in the simulation are $g/\omega = 2.5 \times 10^{-4}$, the $M$-dependent control pulses $n_c (M) = 10^2M^{0.25}$ and the offset $\epsilon = 20g$. The exact numerical diagonalization is performed for an oscillator with $10^3$ basis states. }
\label{level}
\end{figure}
For cooling the oscillator all the way to its ground state we have shown that the spin should be driven at a frequency slightly off-resonant by $\epsilon \sim g$ from the mode frequency $\omega$. The physical reason for this comes from the preferential basis chosen by the conditional dynamics in the phase space of the oscillator. For resonant driving the displacement operator, $V(t) = \cos [\lambda (a+a^\dagger)]$, is diagonal in the position basis of the oscillator, and hence repetitive projections of the oscillator onto this basis stabilize the system in one of the eigenstates of the position operator, squeezing it maximally (see Fig. 2 (b), 2 (e)). On the other hand, with slight off-resonant driving the displacement operator $V(t) \approx {\rm Re}(\exp [\lambda (c_0 a+ c_0^*a^\dagger)])$ (where $c_0$ is some complex number) has both position and momentum components, which does not allow the oscillator to stabilize in either basis. Therefore the system remains diagonal in the energy basis, with thermal occupancy decreasing for increasing $M$ (see Fig. 2 (c), 2 (f)).
As the initial decay of the thermal occupancy is similar for both resonant and off-resonant driving (see Fig. 2 (f)), we obtain the ground state cooling rate by analyzing the resonant case, as it has the comparatively simpler form $V(t) =\cos [\lambda (a+a^\dagger)]$. Using this, and for $\lambda^2 n_\omega < 1$, one can evaluate $n_\omega(t) \equiv {\rm Tr}[V(t)a^\dagger a V(t) \rho_B(0)] =n_\omega(0)[1-\lambda^2(2n_\omega(0)+1)]$. The cooling behavior for $M$ measurements can then be approximated by (see Suppl. Info) \begin{equation} n_\omega((M+1)t) \approx {n_\omega(Mt)}e^{-2\lambda^2n_\omega(Mt)}. \end{equation} From the above equation one can extract the effective cooling rate $\gamma_M \approx \left(\frac{4g^2}{\omega}\right) \braket{a^\dagger a}_M$. Surprisingly, the cooling rate itself depends on the instantaneous thermal occupancy of the oscillator, i.e., the higher the instantaneous thermal occupancy, the higher the cooling rate. This cooling rate decreases with increasing $M$, eventually reaching the single-phonon limit $\frac{4g^2}{\omega}$. We further find that the mode occupancy can be reduced on average by approximately $70\%$ in each repetition cycle; hence, at the quantum speed limit, cooling the oscillator to its ground state is possible with $M \approx 2\log_2 (n_\omega)$ successful spin projections. The resolved sideband cooling methods generally employed for cooling an NMO \cite{Rabl} reduce the oscillator's thermal occupancy only in unit steps, i.e., $\ket{0}\ket{n}\rightarrow \ket{1}\ket{n-1}$ over a time-scale of $\sim 1/g$, indicating that many more optical cooling cycles for the spin have to be employed to reduce the oscillator's thermal occupancy to half its initial value, and that $g > \Gamma$ is required to ensure cooling. On the contrary, the present method shows that one successful projection of the spin can reduce the thermal occupancy drastically, with a success probability no smaller than $0.5$.
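The nonlinear recurrence above is easy to iterate; a minimal sketch (the values of $\lambda$ and the initial occupancy are illustrative, chosen so that $\lambda^2 n_\omega < 1$):

```python
import math

# Iterate n_{M+1} = n_M * exp(-2 lambda^2 n_M): the cooling rate is
# proportional to the instantaneous occupancy, so a hot oscillator cools
# fastest, and the decay slows as the ground state is approached.
lam = 0.05          # effective coupling lambda = 2 g n_c / omega (illustrative)
n = 100.0           # initial thermal occupancy; lambda^2 * n = 0.25 < 1

occupancies = [n]
for M in range(60):
    n = n * math.exp(-2 * lam**2 * n)
    occupancies.append(n)

# Each heralded projection strictly reduces the occupancy.
assert all(b < a for a, b in zip(occupancies, occupancies[1:]))
print(occupancies[0], occupancies[-1])
```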
\begin{figure}
\caption{We show the cooling of the oscillator as a function of heralding round $M$ for various damping rates of the oscillator. In the inset we show the final steady state population as a function of $\Gamma$ and the spin coherence time $T_2$ for a fixed $M = 50$.}
\label{level}
\end{figure}
As the spin is measured frequently at time intervals of $t$, the spin dephasing/decay processes are only relevant for a time $t$. At low ($\sim 4$K) temperatures and with the periodic $\pi$-pulses used in our scheme (Fig. 1c), the non-unitary spin processes become less important. On the other hand, the dissipation of the oscillator plays a prominent role in the cooling procedure. To include the damping of the oscillator, $\Gamma \ll \omega$, in the analysis, we need only replace $\epsilon$ in Eq. (5) with $\epsilon + i\Gamma$ (see Suppl. Info). Under this assumption there are two competing effects: (i) the spin-induced cooling and (ii) the thermalization of the oscillator. These two processes allow the system to reach a steady state where the final thermal occupancy of the oscillator depends on the ratio between the spin-oscillator coupling $g$ and the dissipation rate $\Gamma$. We show this behavior in Fig. 3. For finite $\Gamma$, the thermal occupancy of the NMO stabilizes with increasing $M$, reaching a quasi-steady state where the cooling and heating rates are balanced. In the inset we show the dependence of this steady state value both on the spin coherence time $T_2$ and on $\Gamma$. As described earlier, $T_2$ has almost no net effect on the final thermal occupancy; it only influences the probability of the spin state measurements. For $T_2 \rightarrow 0$, the success probability in each round becomes $0.5$, which means that in a total of $M$ repetitions the probability to observe a ground state cooling event is $\le 1/2^M$. Hence, the steady state value is determined only by the parameters $\lambda$ and $\Gamma$, and has only a weak dependence on $T_2$, as shown in the inset of Fig. 3. Comparing the cooling rate $\gamma_M$ with the damping rate $\Gamma$, one can see that the total number of measurements $M$ that contribute to cooling is determined by $\gamma_M/\Gamma > 1$.
To experimentally implement the above described heralded control of mechanical motion, we consider a singly clamped diamond cantilever of dimensions $(l, w, t) = (20, 1, 0.2)\,\mu$m, with a fundamental frequency of $\sim 10$ MHz \cite{book} and a zero-point fluctuation of $10^{-14}$ m. A magnetic (AFM) tip held at a distance of $10$ nm from the surface of the diamond, producing a field gradient of $2$ G/nm, results in a spin-phonon coupling of $g \sim 100$ Hz \cite{Rabl}. A nitrogen-vacancy (NV) center in diamond has a spin-one (three-level) ground state, of which the two energetically lowest spin states form the two-level spin $S$. These centers are found at depths $> 10$ nm and can have long $T^*_2 \sim \mu$s. With dynamical decoupling sequences, for example the one shown in Fig. 1(c), the spin coherence time can be extended to $T_2 > 10$ ms at cryogenic temperatures ($4$ K). At such low temperatures, and in the high-vacuum environment where the experiments are performed, the cantilever can have quality factors of $\sim10^5$ \cite{exp3}. In addition to this finite damping rate of the fundamental mode of the NMO, the readout laser and the applied microwave control can also heat up the diamond, and hence contribute an additional heating rate $\Gamma_h$. With an optimal design of high-$Q$ microwave resonators, and using resonant excitation at $637$ nm with very low laser powers of $\sim100$ nW for spin state readout, $\Gamma_h$ can also be drastically reduced. One can also employ the cooling procedure for other geometries suggested in \cite{Rabl}, where a cantilever carrying a magnetic tip is cooled by spins embedded in a large diamond membrane close to the tip, avoiding the additional heating rate $\Gamma_h$.
In conclusion, we have shown that by post-selecting the spin dynamics, the evolution of the NMO coupled to it can be steered far from equilibrium, either towards a cooled or a squeezed state. For ultra-weak couplings, i.e., $g < T_2, \Gamma$, where the spin cooperativity is close to zero, the heralded spin control method proposed here offers an alternative for cooling the NMO via a single spin. The successive spin state measurements herald the cooling/squeezing of the NMO. Though probabilistic, the event rate to achieve near ground state cooling is quite high due to the dependence of the cooling rate on the instantaneous thermal occupancy. These methods can further be used to achieve heralded entanglement between distant spins (or to perform gates) that are weakly coupled to a common mode of NMOs.
\end{document}
\begin{document}
\begin{abstract}
We present a Markov chain on the $n$-dimensional hypercube
$\{0,1\}^n$ which satisfies $t_{{\rm mix}}^{(n)}(\varepsilon) = n[1 + o(1)]$. This
Markov chain alternates between random and deterministic moves and
we prove that the chain has cutoff with a window of size at most
$O(n^{0.5+\delta})$ for any $\delta>0$. The deterministic moves
correspond to a linear shift register.
\end{abstract} \maketitle
\section{Introduction} \label{section:1}
Developing Markov chain Monte Carlo (MCMC) algorithms with fast mixing times remains a problem of practical importance. One would like to make computationally tractable modifications to existing chains which decrease the time required to obtain near equilibrium samples.
The \emph{mixing time} of an ergodic finite Markov chain $(X_t)$ with stationary distribution $\pi$ is defined as \begin{equation}
t_{{\rm mix}}(\varepsilon) = \min\Bigl\{ t \geq 0 \,:\, \max_{x} \|
{\mathbb P}_x(X_t \in \cdot ) - \pi \|_{{\rm TV}} < \varepsilon \Bigr\} \,, \end{equation} and we write $t_{{\rm mix}} = t_{{\rm mix}}(1/4)$.
A theoretical algorithm for chains with uniform stationary distribution is analyzed in Chatterjee and Diaconis \cite{chatterjee2020speeding}. They proposed chains that alternate between random steps made according to a probability transition matrix and deterministic steps defined by a bijection $f$ on the state space. Supposing the state-space has size $n$, the transition matrix satisfies a one-step reversibility condition, and
$f$ obeys an \emph{expansion condition},
they proved that $t_{{\rm mix}} = O(\log n)$. However, they note that finding an explicit bijection $f$ satisfying the expansion condition can be difficult even for simple state spaces like ${\mathbb Z}_n$.
In this paper, we analyze a Markov chain on the hypercube $\{0,1\}^n$ of the form $P\Pi$ for an explicit $\Pi$, where $P$ corresponds to the usual lazy random walk on $\{0,1\}^n$. This chain may be of independent interest, as the deterministic transformation $f$ on the state space is a ``shift register'' operator. Such shift registers have many applications in cryptography, pseudo-random number generation, coding, and other fields. See, for example, \textcite{G:SRS} for background on shift registers.
The \emph{lazy random walk} on $\{0,1\}^n$ makes transitions as follows: when the current state is $x$, a coordinate $i \in \{1,2,\ldots,n\}$ is generated uniformly at random, and an independent random bit $R$ is added (mod $2$) to the bit $x_i$ at coordinate $i$. The new state obtained is thus \begin{equation} \label{eq:P} x \mapsto x' = (x_1, \ldots, x_i \oplus R, \ldots, x_n) \,. \end{equation} We will denote the transition matrix of this chain by $P$. For a chain with transition probabilities $Q$ on $S$ and stationary distribution $\pi$, let \[
d(t) = d_n(t) = \max_{x \in S} \| Q^t(x,\cdot) - \pi
\|_{{\rm TV}} \,. \]
A sequence of chains indexed by $n$ has a \emph{cutoff} if, for $t_n := t_{{\rm mix}}^{(n)}$, there exists a \emph{window sequence} $\{w_n\}$ with $w_n = o(t_n)$ such that \begin{align*}
\limsup_{n \to \infty} d(t_n + w_n) & = 0 \\
\liminf_{n \to \infty} d(t_n - w_n) & = 1 \,. \end{align*} For background on mixing times, cutoff, and related material, see, for example, \textcite{levin2017markov}.
It is well-known that for the lazy random walk on $\{0,1\}^n$, \[
t_{{\rm mix}}(\varepsilon) = \frac{1}{2}n \log n[1 + o(1)] \,, \] with a cutoff. (See \textcite{DGM} where precise information on the total variation distance is calculated. The difference of a factor of $2$ above comes from the laziness in our version.)
A natural deterministic ``mixing'' transformation on $\{0,1\}^n$ is the ``linear shift register'', which takes the xor sum of the bits in the current word $x = (x_1,\ldots,x_n)$ and appends it on the right-hand side, dropping the left-most bit: \begin{equation} \label{eq:fdef} x \mapsto f(x) = \Bigl(x_2, \ldots,
  x_{n}, \oplus_{i=1}^n x_{i}\Bigr) \,. \end{equation}
Let $\Pi$ denote the permutation matrix corresponding to this transformation, so that \[
\Pi_{i,j} =
\begin{cases}
1 & \text{if } j = f(i) \\
0 & \text{else}.
\end{cases} \]
The chain studied in the sequel has the transition matrix $Q_1 = P\Pi$, whose dynamics are simply described by combining the stochastic operation \eqref{eq:P} with the deterministic $f$ in \eqref{eq:fdef}: \[
x \mapsto x' \mapsto f(x') \,. \]
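For concreteness, one step of $Q_1$ can be simulated as follows (a Python sketch; the function names are ours, and states are encoded as bit-tuples):

```python
import random

def shift_register(x):
    """f(x): drop the left-most bit and append the XOR of all bits."""
    return x[1:] + (sum(x) % 2,)

def q1_step(x, rng=random):
    """One step of Q1 = P*Pi: lazy coordinate update, then f."""
    i = rng.randrange(len(x))            # uniform coordinate
    r = rng.randrange(2)                 # independent fair bit
    x = x[:i] + (x[i] ^ r,) + x[i + 1:]  # the stochastic P-move
    return shift_register(x)             # the deterministic Pi-move

x = (0,) * 8
for _ in range(20):
    x = q1_step(x)
```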
Let $\log $ stand for the natural logarithm. The main result here is: \begin{theorem} \label{thm:main} For the chain $Q_1$, \hspace{1.2in}
\begin{itemize}
\item[(i)] For $n \geq 5$,
\[
d_n(n+1) \leq \frac{2}{n} \,.
\]
\item[(ii)] For any $1/2 < \alpha < 1$, if $t_n = n - n^{\alpha}$, then
\[
d_n(t_n) \geq \| Q_1^{t_n}(0, \cdot) - \pi \|_{{\rm TV}} \geq 1 -
o(1) \,.
\]
\end{itemize}
Thus, the sequence of chains has a cutoff at time $n$ with a window of size at most $n^{1/2 + \delta}$ for any $\delta > 0$. \end{theorem}
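For small $n$, part (i) can be checked by brute force, evolving the full distribution of $Q_1$ exactly; the following Python sketch (not part of the proof; names are ours) encodes the chain via \eqref{eq:P} and \eqref{eq:fdef}:

```python
from itertools import product

def f(x):
    """Shift register: drop the left-most bit, append the XOR of all bits."""
    return x[1:] + (sum(x) % 2,)

def tv_from_uniform(n, t):
    """Exact total-variation distance of Q1^t(0, .) from uniform on
    {0,1}^n, by evolving the full distribution (feasible for small n).
    The walk is a translate of the walk from 0 for any start, so the
    start 0 already realizes the maximum over starting states."""
    dist = {(0,) * n: 1.0}
    for _ in range(t):
        new = {}
        for s, p in dist.items():
            moves = [(s, 0.5)]  # lazy: with prob 1/2 nothing flips
            for i in range(n):  # flip coordinate i with prob 1/(2n)
                moves.append((s[:i] + (1 - s[i],) + s[i + 1:], 0.5 / n))
            for s2, q in moves:
                y = f(s2)
                new[y] = new.get(y, 0.0) + p * q
        dist = new
    u = 0.5 ** n
    return 0.5 * sum(abs(dist.get(s, 0.0) - u)
                     for s in product((0, 1), repeat=n))

n = 6
assert tv_from_uniform(n, n + 1) <= 2 / n  # the bound of part (i)
```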
\begin{rmk}
If the transformation $f$ obeys the expansion condition of
\textcite{chatterjee2020speeding}, then the results therein yield a mixing time of order $n$. We were unable to directly verify that $f$ does obey this condition. Moreover, the result in Theorem \ref{thm:main} establishes the stronger cutoff property. \end{rmk}
\begin{rmk}
Obviously, a simple way to exactly randomize $n$ bits in exactly $n$ steps is to randomize each bit in sequence, say from left to right. This is called \emph{systematic scan}; systematic scans avoid an extra factor of $\log n$ needed for \emph{random updates}
to touch a sufficient number of bits. (A ``coupon-collector'' argument shows that to touch all but $O(\sqrt{n})$ bits using random updates -- enough to achieve small total-variation distance from uniform -- order $n\log n$ steps are required.) Thus, our interest in analyzing this chain is clearly not for direct simulation of $n$
independent bits! Rather, we are motivated both by the potential for explicit deterministic moves to speed up Markov chains, and by this particular chain, which randomizes the well-known shift-register dynamical system.
\end{rmk}
This paper is organised as follows. In Section \ref{section:2} we review some related results. The upper bound in Theorem \ref{thm:main} is proved in Section \ref{section:5}, and the lower bound is established in Section \ref{section:4}. In Section \ref{section:6}, a chain is analyzed that is similar to the chain of Theorem \ref{thm:main}, but always updates the same location.
\section{Related previous work} \label{section:2}
\subsection{Markov chains on the hypercube}
Previous work combining a deterministic transformation with random moves on the hypercube is \textcite{diaconis1992affine}. They study the walk $\{X_t\}$ described by $X_{t+1}=AX_t+\epsilon_{t+1}$, where $A$ is an $n\times n$ lower triangular matrix and the $\epsilon_t$ are i.i.d.\ vectors with the following distribution: $\epsilon_{t}=\mathbf{0}$ with probability $\theta\neq \frac{1}{2}$, while $\epsilon_{t}=\mathbf{e_1}$ with probability $1-\theta$. Here, $\mathbf{0}$ is the vector of zeros and $\mathbf{e_1}$ is the vector with a one in the first coordinate and zeros elsewhere. Fourier analysis is used to show that $O(n\log n)$ steps are necessary and sufficient for mixing, and they prove a sharp result in both directions. This line of work is a specific case of a random walk on a finite group $G$ described as $X_{t+1}=A(X_t)\epsilon_{t+1}$, where $A$ is an automorphism of $G$ and $\epsilon_1,\epsilon_2,\cdots$ are i.i.d.\ with some distribution $\mu$ on $G$. In the case of \textcite{diaconis1992affine}, $G={\mathbb Z}_2^n$ and the automorphism $A$ is a matrix. By comparison, the chain studied here mixes in only $n(1 + o(1))$ steps.
Another relevant (random) chain on $\{0,1\}^n$ is analyzed by \textcite{wilson1997random}. A subset $S$ of size $p$ from ${\mathbb Z}_2^n$ is chosen uniformly at random, and the graph $G$ with vertex set ${\mathbb Z}_2^n$ is formed which contains an edge between two vertices if and only if their difference is in $S$. \textcite{wilson1997random} considered the random walk on the random graph $G$. It is shown that if $p=cn$, where $c>1$ is a constant, then the mixing time is linear in $n$ with high probability (over the choice of $S$) as $n \to \infty$.
This Markov chain depends on the random environment to produce the speedup.
Finally, another example of cutoff for a Markov chain on a hypercube is \textcite{ben2018cutoff}. This random walk moves by picking an ordered pair $(i, j)$ of distinct coordinates uniformly at random and adding the bit at location $i$ to the bit at location $j$, modulo $2$. They proved that this Markov chain has cutoff at time $\frac{3}{2}n\log n$ with window of size $n$, so the mixing time is the same order as that of the ordinary random walk.
\subsection{Related approaches to speeding up mixing}
Ben-Hamou and Peres \cite{ben2021cutoff} refined the results of \textcite{chatterjee2020speeding}, proving further that under mild assumptions on $P$ ``typical'' $f$ yield a mixing time of order $\log n$ with a cutoff.
In particular, they show that if a permutation matrix $\Pi$ is selected uniformly at random, then the (random) chain $Q=P\Pi$ has
cutoff at $\frac{\log n}{\textbf{h}}$ with high probability (with respect to the selection of $\Pi$). Here $\textbf{h}$ is the entropy rate of $P$ and $n$ is the size of the state space. Like the chain in \textcite{wilson1997random}, the random environment is critical to the analysis. However, in specific applications, one would like to know an explicit deterministic permutation $\Pi$ that mixes in $O(\log n)$ steps and does not require storage of the matrix $\Pi$, particularly when the state space increases exponentially with $n$.
A method for speeding up mixing called \emph{lifting} was introduced by \textcite{diaconis2000analysisnonrev}. The idea behind this technique is to create ``long cycles'' and introduce non-reversibility. For example, for the simple random walk on the $n$-path, the mixing time of the lifted chain is $O(n)$, whereas the mixing time on the path is $\Theta(n^2)$. Thus this method can provide a speed-up on the order of the square root of the mixing time of the original chain. \textcite{chen1999lifting} give an explicit lower bound on the mixing time of the lifted chain in terms of the original chain. The chain we study has a similar flavor in that the transformation $f$ creates non-reversibility and long cycles.
Another related speed-up technique is \emph{hit and run}, which introduces non-local moves in a chosen direction. See the survey by \textcite{andersen2007hit}. A recent application to a top-to-random shuffle is \textcite{boardman2020hit}, where it is shown that a constant-factor speedup in mixing can be obtained for the $L^2$ and sup-norms. Jensen and Foster \cite{jensen2014level} have used this method to sample from high-dimensional and multi-modal posterior distributions in Bayesian models, and compared it with Gibbs and Hamiltonian Monte Carlo algorithms. In the physics literature, non-reversible chains are constructed from reversible chains without augmenting the state space (in contrast to lifting) by introducing \emph{vorticity}, which is similar in spirit to the long cycles generated by lifting; see, for example, Bierkens \cite{bierkens2016non}, which analyzes a non-reversible version of Metropolis-Hastings.
As mentioned above, there are other obvious methods to obtain a fast uniform sample from $\{0,1\}^n$, in particular systematic scan, which generates an exact sample in precisely $n$ steps! See \textcite{diaconis2000analysis} for a comparison of systematic and random scans on different finite groups.
\section{Upper bound of Theorem \ref{thm:main}} \label{section:5}
The proof is based on Fourier analysis on ${\mathbb Z}_2^n$.
Let $A$ be the $n\times n$ matrix defined as follows: \begin{equation} \label{eq:Adefn}
A:=\begin{bmatrix}
0&1&0&\cdots&0\\
0&0&1&\cdots&0\\
\vdots& & &\ddots&\vdots\\
0&0&0&\cdots&1\\
1&1&1&\cdots&1
\end{bmatrix}_{n\times n} \,. \end{equation}
Let $\{\epsilon_i\}$ be i.i.d.\ random vectors with the following distribution: \begin{align}
\label{eq:5.1}
\epsilon_i = \begin{cases}
\textbf{0} \quad \text{with probability } \frac{1}{2} \\
e_1 \quad \text{with probability } \frac{1}{2n} \\
\vdots \\
e_n \quad \text{with probability } \frac{1}{2n} \,,\\
\end{cases} \end{align} where $e_i = (0,\ldots,\underbrace{1}_{\text{$i$-th
place}},\ldots,0)$.
The random walk $X_t$ with transition matrix $Q_1$ and $X_0=x$ can be described as \[
X_t = A(X_{t-1}\oplus \epsilon_{t}) \,. \] The matrix arithmetic above is all modulo $2$. Induction shows that \begin{equation}
X_t = \left(\sum_{j=1}^t A^{t-j+1}\epsilon_j\right) \oplus A^tx \label{eq:5.2} \,, \end{equation} again where the matrix multiplication and vector sums are over the field $\mathbb{Z}_2$.
\begin{lem}
The matrix $A$ in \eqref{eq:Adefn} satisfies $A^{n+1}=I_{n\times
n}$.
\label{lemma:5.1} \end{lem}
\begin{proof}
Note that
\begin{align*}
Ae_1 &= e_n \\
A^2e_1 = A(Ae_1)&= e_{n}+e_{n-1} \\
A^3e_1 = A(A^2e_1) &= e_{n-1}+e_{n-2} \\
&\vdots \\
A^ne_1 &= e_2+e_1
\end{align*}
This implies that
\[
A^{n+1}e_1=A(A^ne_1)=A(e_2+e_1)=e_1 \,.
\]
The reader can check similarly that $A^{n+1}e_j=e_j$ for
$2\leq j \leq n$. \end{proof}
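Lemma \ref{lemma:5.1} can also be confirmed by direct computation over ${\mathbb Z}_2$; a Python sketch (function names are ours):

```python
def matA(n):
    """The matrix A defined above: a coordinate shift plus a final parity row."""
    A = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1
    A[n - 1] = [1] * n
    return A

def matmul2(A, B):
    """Matrix product over the field Z_2."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def is_identity(M):
    n = len(M)
    return all(M[i][j] == (i == j) for i in range(n) for j in range(n))

for n in range(2, 9):
    A = matA(n)
    M = [row[:] for row in A]
    for _ in range(n - 1):
        M = matmul2(M, A)              # now M = A^n
    assert not is_identity(M)          # A^n is not the identity
    assert is_identity(matmul2(M, A))  # but A^{n+1} = I
```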
For $x,y\in {\mathbb Z}_2^n$, the Fourier transform of $Q_1^t(x,\cdot)$ at $y$ is defined as \begin{align}
\widehat{Q_1^t}(x,y)&:=\sum_{z\in \mathbb{Z}_2^n} (-1)^{y \cdot z}Q_1^t(x,z) \nonumber \\
&= {\mathbb E}[(-1)^{y \cdot X_{t}}] \label{eq:5.3} \\
&= (-1)^{y \cdot A^tx}\prod_{j=1}^{t} {\mathbb E}\left[ (-1)^{y \cdot A^{t-j+1}\epsilon_j}\right]\,. \label{eq:5.4} \end{align} The product $x \cdot y$ is the inner product $\sum_{i=1}^n x_i y_i$. The equality \eqref{eq:5.4} follows from plugging \eqref{eq:5.2} into \eqref{eq:5.3} and observing that $\epsilon_j$ are independent. The following Lemma bounds the total variation distance; this is proved in \textcite[Lemma 2.3]{diaconis1992affine}.
\begin{lem}
\label{lemma:5.2}
$\|Q_1^t(x,\cdot)-\pi\|_{{\rm TV}}^2 \leq \frac{1}{4}\sum_{y\neq 0}
\left( \widehat{Q_1^t}(x,y)\right) ^2$. \end{lem}
We will need the following Lemma to prove Theorem \ref{thm:main}(i):
\begin{lem}
\label{lemma:5.6}
Let
\begin{equation} \label{eq:hnk} h(n,k) = \binom{n}{k} \left(1 -
\frac{k}{n}\right)^{2n-2k} \left( \frac{k}{n} \right)^{2k}
\end{equation}
If $2 \leq k \leq n-2$ and $n > 5$, then $h(n,k) \leq 1/n^2$ and $h(n,n-1)\leq 1/n$. \end{lem} \begin{proof}
We first prove this for $2 \leq k \leq n-2$. For
$x,y \in \mathbb{Z}^+$, define
\begin{align*}
K(x,y) & :=\log\frac{\Gamma(x + 1)}{\Gamma(y + 1)\Gamma(x - y + 1)}+(2x-2y)\log\left( 1-\frac{y}{x}\right)\\
& \quad +2y\log\left( \frac{y}{x}\right)+2\log(x) \,,
\end{align*}
where $\Gamma$ is the Gamma function. We prove that, if
$2 \leq y \leq x-2$ and $x > 5$, then $K(x,y) < 0$. Since
$K(n,k) = \log \left(h(n,k) n^2\right)$, this establishes the lemma.
Let $\psi(x):=\frac{d\log \Gamma(x)}{dx}$. Then
\begin{align}
\frac{\partial^2 K}{\partial y^2} &= -\psi'(y+1)-\psi'(x-y+1)+\frac{2}{x-y}+\frac{2}{y} \nonumber \\
&> -\frac{1}{y+1}-\frac{1}{(y+1)^2}-\frac{1}{(x-y+1)}-\frac{1}{(x-y+1)^2}+\frac{2}{y}+\frac{2}{x-y} \label{eq:5.7} \\
&> 0 \,.\label{eq:5.8}
\end{align}
The inequality \eqref{eq:5.7} follows from Guo and Qi \cite[Lemma
1]{guo2013refinements}, which says that
$\psi'(x)<\frac{1}{x}+\frac{1}{x^2}$ for all $x>0$.
The second inequality follows since
$2/y - (y+1)^{-1} - (y+1)^{-2} > 0$, and we can apply this again
substituting in $x-y$ for $y$. Thus $K(x,\cdot)$ is a convex
function for all $x$. Also
\begin{align}
K(x,2)=K(x,x-2)&=\log\left(\frac{x(x-1)}{2} \right)+2(x-2)\log\left(1-\frac{2}{x} \right)+4\log\left(\frac{2}{x}\right)+2\log x \nonumber \\
&= \log \left( \frac{8x(x-1)}{x^2}\right)+2(x-2)\log\left( 1-\frac{2}{x}\right) \nonumber \\
&< \log (8)-\frac{4(x-2)}{x} \label{eq:5.9} \\
&<0 \,,\label{eq:5.10}
\end{align}
for $x > 5$. The inequality \eqref{eq:5.9} follows from
$\log(1-u)<-u$. Equations \eqref{eq:5.10} and \eqref{eq:5.8} prove this lemma for $2\leq k\leq n-2$. Finally, $h(n,n-1) \leq 1/n \iff nh(n,n-1)\leq 1$ which is true because one can verify that $\frac{d}{dn}nh(n,n-1)<0$ for $n\geq 5$ and $nh(n,n-1)<1$ for $n=5$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}(i)]
Let $y=(y_1,y_2,\cdots,y_n)\in {\mathbb Z}_2^n$. First,
\begin{align*}
\widehat{Q_1^{n+1}}(x,y) & =(-1)^{y \cdot A^{n+1}x}\prod_{j=1}^{n+1} {\mathbb E}\left[ (-1)^{y\cdot A^{n-j+2}\epsilon_j}\right]\\
& =(-1)^{y \cdot Ix}\prod_{j=1}^{n+1} {\mathbb E}\left[ (-1)^{y \cdot A^{n-j+2}\epsilon_j}\right] \,,
\end{align*}
which follows from \eqref{eq:5.4} and Lemma \ref{lemma:5.1}. Note
that the first factor in this product is
\[
\left( \frac{1}{2} + \frac{1}{2n}\left[
(-1)^{y_1}+(-1)^{y_2}+(-1)^{y_3}+\cdots+(-1)^{y_n}\right]\right)
\,,
\]
which follows from \eqref{eq:5.1} and Lemma \ref{lemma:5.1}. We can
similarly find other factors in the product, which gives
\begin{align}
\widehat{Q_1^{n+1}}& (x,y) = (-1)^{x \cdot y}\nonumber \\
& \quad \times \left( \frac{1}{2} + \frac{1}{2n}\left[ (-1)^{y_1}+(-1)^{y_2}+(-1)^{y_3}+\cdots+(-1)^{y_n}\right]\right)
\label{eq:secondcase} \\
& \quad \times \left( \frac{1}{2} + \frac{1}{2n}\left[ (-1)^{y_1}+(-1)^{y_1+y_2}+(-1)^{y_1+y_3}+\cdots+(-1)^{y_1+y_n}\right]\right)
\label{eq:fac1}\\
& \quad \times \left(\frac{1}{2} + \frac{1}{2n}\left[ (-1)^{y_2}+(-1)^{y_2+y_1}+(-1)^{y_2+y_3}+\cdots+(-1)^{y_2+y_n}\right]\right) \\
&\quad \vdots \\
&\quad \times \left(\frac{1}{2} + \frac{1}{2n}\left[ (-1)^{y_n}+(-1)^{y_n+y_1}+(-1)^{y_n+y_2}+\cdots+(-1)^{y_n+y_{n-1}}\right]\right)\label{eq:5.5}
\end{align}
Observe that $\widehat{Q_1^{n+1}}(x,y)=0$ for all $y\in {\mathbb Z}_2^n$ such that $W(y)\in \{1,n\}$ where $W(y)$ is the Hamming weight of $y\in {\mathbb Z}_2^n$. If $W(y)=1$, then one of the factors
displayed on line \eqref{eq:fac1} through line \eqref{eq:5.5} is
zero. If $W(y)=n$, then the factor on line \eqref{eq:secondcase} is zero. If we fix $2\leq j\leq n-1$ and consider
all $y\in {\mathbb Z}_2^n$ with $W(y)=j$, then $[\widehat{Q_1^{n+1}}(x,y)]^2$
is the same for all such $y$, since the expression above is
invariant under permutations of the coordinates, once the first factor is
squared.
If $y=(\underbrace{1,1,\ldots,1}_{\text{$k$ ones}},0,\cdots,0)$
where $2\leq k\leq n-1$, then
\[
\widehat{Q_1^{n+1}}(x,y)=(-1)^{\left(\sum_{i=1}^kx_i\right)}\left(1-\frac{k}{n}\right)^{n-k+1}\left(\frac{k-1}{n}\right)^k
\,.
\]
This holds because factors $3$ through $(k+2)$ are equal to
$\frac{k-1}{n}$, while the second factor and factors $(k+3)$ through
$(n+2)$ are equal to $\frac{n-k}{n}$. To see this, note that factor $2$ is equal
to
\begin{align*}
&\frac{1}{2} + \frac{1}{2n}\left[ (\underbrace{-1-1\ldots-1}_{\text{$k$ negative ones}}) +(\underbrace{1+1\ldots+1}_{\text{$n-k$ positive ones}}) \right] \\
& = \frac{1}{2} + \frac{1}{2n}\left[ -k + (n-k) \right]
= \frac{n-k}{n} \end{align*}
Factors $3$ through $(k+2)$ are equal to \begin{align*}
&\frac{1}{2} + \frac{1}{2n}\left[ -1+ (\underbrace{1+1\ldots+1}_{\text{$k-1$ positive ones}}) + (\underbrace{-1-1\ldots-1}_{\text{$n-k$ negative ones}})\right] \\
& = \frac{1}{2} + \frac{1}{2n}\left[-1+k-1-(n-k)\right] = \frac{k-1}{n} \,. \end{align*}
Factors $(k+3)$ through $(n+2)$ are equal to \begin{align*}
&\frac{1}{2} + \frac{1}{2n}\left[ 1+ (\underbrace{-1-1\ldots-1}_{\text{$k$ negative ones}}) + (\underbrace{1+1\ldots+1}_{\text{$n-k-1$ positive ones}}) \right] \\
& = \frac{1}{2} + \frac{1}{2n}\left[1-k+n-k-1\right] = \frac{n-k}{n} \end{align*}
Thus, \begin{align}
\sum_{y\neq 0}\left(\widehat{Q_1^{n+1}}(x,y)\right)^2 &= \sum_{k=2}^{n-1} \binom{n}{k}\left(1-\frac{k}{n}\right)^{2n-2k+2} \left(\frac{k-1}{n}\right)^{2k} \nonumber \\
&\leq \sum_{k=2}^{n-1} \binom{n}{k}\left(1-\frac{k}{n}\right)^{2n-2k} \left(\frac{k}{n}\right)^{2k} \label{eq:5.6} \,. \end{align}
We finally analyze the terms in the sum of \eqref{eq:5.6}. Note that \[
\binom{n}{k} \left(1-\frac{k}{n}\right)^{2n-2k}
\left(\frac{k}{n}\right)^{2k}\leq \frac{1}{n^2} \] for $2\leq k\leq n-2$ and $n>5$ and $h(n,n-1)\leq 1/n$, by Lemma \ref{lemma:5.6}.
Thus,
\begin{equation} \label{eq:finbound} \sum_{k=2}^{n-1}
\binom{n}{k}\left(1-\frac{k}{n}\right)^{2n-2k}
\left(\frac{k}{n}\right)^{2k} \leq \frac{n-3}{n^2} + \frac{1}{n} \leq \frac{2}{n}
\,.
\end{equation} Lemma \ref{lemma:5.2}, with \eqref{eq:5.6} and the bound \eqref{eq:finbound}, establishes the upper bound in Theorem \ref{thm:main}. \end{proof}
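The inequalities of Lemma \ref{lemma:5.6} and the final bound \eqref{eq:finbound} are also easy to spot-check numerically; a Python sketch (function name is ours):

```python
from math import comb

def h(n, k):
    """h(n,k) as defined in Lemma 5.6 above."""
    return comb(n, k) * (1 - k / n) ** (2 * n - 2 * k) * (k / n) ** (2 * k)

for n in range(6, 60):
    # h(n,k) <= 1/n^2 for 2 <= k <= n-2, and h(n,n-1) <= 1/n
    assert all(h(n, k) <= 1 / n ** 2 for k in range(2, n - 1))
    assert h(n, n - 1) <= 1 / n
    # the resulting bound on the full sum
    assert sum(h(n, k) for k in range(2, n)) <= 2 / n
```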
\section{Lower bound of Theorem \ref{thm:main}} \label{section:4}
Let $\{U_t\}$ be the sequence of coordinates used to update the chain, and let $\{R_t\}$ be the sequence of random bits used to update. Thus, at time $t$, coordinate $U_t$ is updated using bit $R_t$. Both sequences are i.i.d. Let ${\mathcal F}_t$ be the $\sigma$-algebra generated by $(U_1,\ldots,U_t)$ and $(R_1,\ldots,R_t)$. Let $X_t = (X_t^{(1)}, \ldots, X_t^{(n)})$ be the state of the chain with transition matrix $Q_1$ at time $t$. The proof is based on the distinguishing statistic $W_t = \sum_{i=1}^n X^{(i)}_t$, the Hamming weight at time $t$.
First, observe that
\[
  {\mathbb P}\left(\left. X_{t+1}^{(n)} = 1 \right| {\mathcal F}_t \right) = \frac{1}{2}
  \,, \] because if the state at time $t$ is $x=(x_1,x_2,\cdots,x_n)$, then $R_{t+1}$ is added at a uniformly chosen coordinate of $x$, so that $X_{t+1}^{(n)}=\left(\sum_{i=1}^n x_i\right) \oplus R_{t+1}$ takes each value in $\{0,1\}$ with probability $1/2$, conditioned on ${\mathcal F}_t$. We now describe a recursive relation for $W_t$, the Hamming weight of $X_t$:
\begin{equation}\label{eq:4.1}
W_{t+1} = \sum_{j=2}^n \left( X_{t}^{(j)}\cdot 1_{(U_{t+1}\neq j)} +
\left( X_{t}^{(j)}\oplus R_{t+1}\right) \cdot 1_{(U_{t+1} = j)}
\right)+ 1_{\left( X_{t+1}^{(n)}=1\right)} \,. \end{equation}
The first terms in \eqref{eq:4.1} follow from the fact that for $1\leq j\leq n-1$, $$ X_{t+1}^{(j)}=\begin{cases}
X_t^{(j+1)} \hspace{1cm}\;\;\;\,\text{ if $U_{t+1}\neq j+1$} \\
X_t^{(j+1)}\oplus R_{t+1} \;\;\text{ if $U_{t+1}=j+1$} \end{cases} $$
Taking conditional expectation in \eqref{eq:4.1}, we get
\begin{align}
{\mathbb E}[W_{t+1} \mid {\mathcal F}_t] & = \sum_{j=2}^n\left( X_{t}^{(j)} \left( \frac{n-1}{n}\right)
+ \frac{1}{2}\left[1 - X_{t}^{(j)}+X_t^{(j)}\right] \frac{1}{n} \right) + \frac{1}{2} \nonumber \\
&= \left(1 - \frac{1}{n}\right)\sum_{j=2}^n X_t^{(j)}
+ \left(\frac{n-1}{2n}\right) + \frac{1}{2} \nonumber \\
&= \left(1 - \frac{1}{n} \right)\sum_{j=1}^n X_t^{(j)}
- \left( 1 - \frac{1}{n}\right) X_t^{(1)}+ \left(\frac{2n-1}{2n}\right) \nonumber \\
&= \left( 1 - \frac{1}{n}\right) W_t - \left( 1-\frac{1}{n}\right) X_t^{(1)}+\frac{2n-1}{2n} \label{eq:4.2} \end{align}
Let $\mu_t:={\mathbb E}(W_t)$. Taking total expectation in \eqref{eq:4.2}, we get
\begin{equation}\label{eq:4.3}
  \mu_{t+1}=\left( 1 - \frac{1}{n}\right)\mu_t-\left( 1 - \frac{1}{n}\right){\mathbb P}\left( X_t^{(1)}=1 \right) + \frac{2n-1}{2n} \end{equation} We now estimate the probability in \eqref{eq:4.3}. Since $X_0 = 0$, for $0 \leq t \leq n-1$, \begin{equation}\label{eq:4.4}
{\mathbb P}\left( X_t^{(1)} = 1 \right) = \left[ 1 - \left( 1-\frac{1}{n}\right) ^t \right]\frac{1}{2} \,. \end{equation} To obtain \eqref{eq:4.4}, follow the bit at coordinate $1$ at time $t$ backwards in time: at time $t-1$ it was at coordinate $2$, at time $t-2$ it was at coordinate $3$, etc. At time $t$ it is a $0$ unless it was updated at least once along this progression to the left, and the last time that it was updated, it was updated to a $1$.
The probability it was never updated along its trajectory is $\left(1-\frac{1}{n}\right)^t$, as we require that coordinates $2,3,\cdots,t,t+1$ at times $t,t-1,\cdots,2,1$ respectively have not been chosen for updates. The probability is thus $\left[ 1 - \left( 1-\frac{1}{n}\right) ^t \right]$ that at least one of these coordinates is chosen; the factor of $1/2$ appears because we need the last update to be to $1$. Each update is independent of the chosen coordinate and the previous updates.
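Equation \eqref{eq:4.4} can be verified exactly for small $n$ by evolving the full distribution of the chain; a Python sketch (names are ours), checking the range of $t$ where the backward-tracing argument applies:

```python
def f(x):
    """Shift register: drop the left-most bit, append the XOR of all bits."""
    return x[1:] + (sum(x) % 2,)

def p_first_bit_one(n, t):
    """Exact P(X_t^(1) = 1) for the chain started at 0, by evolving
    the full distribution on {0,1}^n (small n only)."""
    dist = {(0,) * n: 1.0}
    for _ in range(t):
        new = {}
        for s, p in dist.items():
            moves = [(s, 0.5)]  # lazy: no flip with prob 1/2
            for i in range(n):  # flip coordinate i with prob 1/(2n)
                moves.append((s[:i] + (1 - s[i],) + s[i + 1:], 0.5 / n))
            for s2, q in moves:
                y = f(s2)
                new[y] = new.get(y, 0.0) + p * q
        dist = new
    return sum(p for s, p in dist.items() if s[0] == 1)

n = 5
for t in range(n):  # t = 0, 1, ..., n-1
    predicted = (1 - (1 - 1 / n) ** t) / 2
    assert abs(p_first_bit_one(n, t) - predicted) < 1e-12
```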
We now look at a recursive relation for $\mu_t$.
\begin{equation}\label{eq:4.5}
  \mu_{t+1}=C_1\mu_t-\frac{C_1}{2}\left[ 1-C_1^t\right]+C_2 \,, \end{equation} where \eqref{eq:4.5} is obtained by plugging \eqref{eq:4.4} into \eqref{eq:4.3}, and the constants are $C_1:=\left( 1-\frac{1}{n}\right), C_2:=\left(\frac{2n-1}{2n}\right)$. Note that $\mu_0 = 0$. The following lemma gives the solution of the recursive relation \eqref{eq:4.5}.
\begin{lem}\label{lemma:4.1}
$\mu_t=\left(\frac{t-n}{2}\right)\cdot C_1^{t}+\frac{n}{2}$ is the
solution of the recursive relation \eqref{eq:4.5}.
\end{lem} \begin{proof}
Clearly $\mu_0 = 0$, and the reader can check that $\mu_t$ obeys the
recursion.
\end{proof}
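Lemma \ref{lemma:4.1} is easily checked numerically; a Python sketch (names are ours):

```python
def mu_closed(t, n):
    """Closed form from the lemma above: mu_t = ((t - n)/2) C1^t + n/2."""
    c1 = 1 - 1 / n
    return (t - n) / 2 * c1 ** t + n / 2

def mu_recursive(t, n):
    """Iterate mu_{s+1} = C1 mu_s - (C1/2)(1 - C1^s) + C2 from mu_0 = 0."""
    c1, c2 = 1 - 1 / n, (2 * n - 1) / (2 * n)
    mu = 0.0
    for s in range(t):
        mu = c1 * mu - (c1 / 2) * (1 - c1 ** s) + c2
    return mu

# The two definitions agree (up to floating-point error).
for n in (5, 17, 40):
    for t in range(2 * n):
        assert abs(mu_closed(t, n) - mu_recursive(t, n)) < 1e-9
```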
Note that we can write $W_t=g_t(R_1,R_2,\ldots R_t,U_1,U_2,\ldots U_t)$ for some function $g_t$, and that the variables $R_1, \ldots, R_t, U_1, \ldots, U_t$ are independent. The following Lemma is used in proving that $W_t$ has variance of order $n$.
\begin{lem}
\label{lemma:4.2}
$$\max_{\substack{r_1,\ldots,r_t\\u_1,\ldots, u_t}} \Big|g_t(r_1,\ldots,r_i,\ldots,r_t,u_1,\ldots, u_t)-g_t(r_1,\ldots,r_i\oplus 1,\ldots,r_t,u_1,\ldots u_t)\Big|\leq 2 \,.$$
for $1\leq i\leq t$ and $1\leq t\leq n$. \end{lem}
\begin{proof}
Any sequence of coordinates $\{u_s\}_{s=1}^t$ in
$\{1,\ldots,n\}$ and bits
$\{r_s\}_{s=1}^t$ in $\{0,1\}$ determine inductively a sequence $\{x_s\}_{s=0}^t$ in
$\{0,1\}^n$ by updating at time $s$ the configuration $x_s$
by adding the bit $r_s$ at coordinate $u_s$ followed by an application of
the transformation $f$. We call a sequence of
pairs $\{(u_s,r_s)\}_{s=1}^t$ a \emph{driving sequence}, and the
resulting walk in the hypercube $\{x_s\}_{s=0}^t$ the
\emph{configuration sequence}.
We write
\[
g_t = g_t(r_1,\ldots,r_t, u_1,\ldots, u_t), \qquad
    g_t' = g_t(r_1',\ldots,r_t', u'_1,\ldots,u_t') \,.
\]
  Consider a specific driving sequence of locations and update bits,
  $\{(u_s,r_s)\}_{s=1}^t$, and a second such
  driving sequence $\{(u_s',r_s')\}_{s=1}^t$ which satisfies \begin{itemize} \item $u_s'=u_s$ for $1 \leq s \leq t$, \item $r_s' = r_s$ for $1 \leq s \leq t$ and $s \neq s_0$, \item $r_{s_0}' = r_{s_0} \oplus 1$. \end{itemize} Thus, the two driving sequences agree everywhere except at time $s_0$, where the update bits differ.
We want to show that $|g_t - g_t'| \leq 2$, for any $t \leq n$.
Let $\{x_s\}_{1 \leq s \leq t}$ and $\{y_s\}_{1 \leq s \leq t}$ be the two configuration sequences in $\{0,1\}^n$ obtained, respectively, from the two driving sequences. We will show inductively that the Hamming distance
\[
d_s := d_s(x_s,y_s) := \sum_{j=1}^n |x_s^{(j)} -
y_s^{(j)}|
\]
satisfies $d_s \leq 2$ for $s \leq t$, and hence the
maximum weight difference $|g_t - g_t'|$ is bounded by $2$.
Clearly $x_s = y_s$ for $s < s_0$, since the two
driving sequences agree prior to time $s_0$, whence
$d_s = 0$ for $s < s_0$.
We now consider $d_{s_0}$. Let
$\ell = u_{s_0} = u'_{s_0}$ be the coordinate
updated in both $x_{s_0}$ and $y_{s_0}$, and
as before let
\begin{align*}
x_{s_0-1}' & =
(x_{s_0-1}^{(1)}, \ldots, x_{s_0-1}^{(\ell)}\oplus r_{s_0}, \ldots, x_{s_0-1}^{(n)}) \,,\\
y_{s_0-1}' & =
(y_{s_0-1}^{(1)}, \ldots, y_{s_0-1}^{(\ell)}\oplus r'_{s_0}, \ldots, y_{s_0-1}^{(n)}) \,.
\end{align*}
Since $r_{s_0} \neq r'_{s_0}$ but
$x_{s_0-1} = y_{s_0-1}$, the configurations
$x_{s_0-1}'$ and $y_{s_0-1}'$ have different
parities. Recalling that
$x_{s_0} = f(x_{s_0-1}')$ and $y_{s_0} = f(y_{s_0-1}')$, we consequently have that
$x_{s_0}^{(n)} \neq y_{s_0}^{(n)}$.
Apart from coordinate $n$, the configurations $x_{s_0}$ and $y_{s_0}$
can disagree only at coordinate $\ell-1$ (and only when $\ell \neq 1$),
so
\[
d_{s_0} \leq I\{ \ell \neq 1\} + 1 \leq 2 \,.
\]
Next suppose that $d_s = 1$ for some time $s \geq s_0$, so that for some
$\ell \in \{1,\ldots,n\}$, we have
$x_s^{(j)} = y_s^{(j)}$ for $j \neq \ell$ and
$x_s^{(\ell)} \neq y_s^{(\ell)}$. Since $r_{s+1} = r'_{s+1}$
and $u_{s+1} = u'_{s+1}$, after adding the same update bit
at the same coordinate in the configurations $x_s$ and $y_s$,
but before applying $f$,
the resulting configurations will still have a single disagreement
at $\ell$. Thus, after applying $f$ to obtain the configurations at
time $s+1$, we have
$x_{s+1}^{(n)} \neq y_{s+1}^{(n)}$, but
\[
d_{s+1} = \sum_{j=1}^{n-1}|x_{s+1}^{(j)} - y_{s+1}^{(j)}|
+ |x_{s+1}^{(n)} - y_{s+1}^{(n)}|
= \sum_{j=2}^n |x_{s}^{(j)} - y_{s}^{(j)}| + 1
\leq d_s + 1 \leq 2 \,.
\]
(If $\ell = 1$, then $d_{s+1} = 1$.) Thus, $d_{s+1} \leq 2$.
Finally consider the case that $d_s = 2$
for $s \geq s_0$. Again, $u_{s+1} = u'_{s+1}$ and $r_{s+1} =
r'_{s+1}$. After updating $x_s$ and $y_s$ with the same
bit at the same coordinate, but before
applying $f$, the two configurations still differ at
exactly these two coordinates.
Thus, $x_{s+1}^{(n)} = y_{s+1}^{(n)}$, and
\[
d_{s+1} = \sum_{j=1}^{n-1}|x_{s+1}^{(j)} - y_{s+1}^{(j)}|
+ 0 = \sum_{j=2}^{n} |x_s^{(j)} - y_s^{(j)}|
\leq d_s \leq 2 \,.
\]
(Again, the sum is $1$ if one of the two disagreements at
time $s$ is at coordinate $1$.)
We now have that $d_s \leq 2$ for all $s \leq t$: For $s < s_0$,
we have $d_s = 0$, and $d_{s_0} \leq 2$. For $s \geq s_0$, if
$d_{s} \leq 2$, then $d_{s+1} \leq 2$.
It then follows in particular that $d_t \leq 2$ and that
$|g_t - g_t'| \leq 2$. \end{proof} \begin{lem}
\label{lemma:4.3} $$\max_{\substack{r_1,\ldots,r_t\\
u_1,\ldots,u_t, u_i'}}
\Big|g_t(r_1,\ldots,r_t,u_1,\ldots,u_i,\ldots, u_t)-g_t(r_1,\ldots,r_t,u_1,\ldots,u_i',\ldots, u_t)\Big|\leq 2 \,.$$ \end{lem}
\begin{proof}
Again, if two trajectories differ only in the coordinate selected at
time $i$, then the weight at time $t$ can differ by at most
$2$. Fix a time $1\leq t\leq n$ and consider the dynamics of the number
of coordinates at which the two trajectories differ at each time $k<t$.
The two trajectories agree with each other until time $i$ because
the same random bits and locations are used to define these
trajectories. At time $i$, we add the same random bit $r_i$ to
update both trajectories, but use coordinate $u_i$ for the first
trajectory and coordinate $u_i'$ in the second trajectory. If
$r_i=0$, then clearly the two trajectories continue to agree at all
times $k\geq i$.
Now suppose that $r_i=1$. Let $b_1,b_2$ be the bits at coordinates
$u_i,u_i'$ in the \emph{first} trajectory at time $i-1$ and
$b_3,b_4$ be the bits at coordinates $u_i,u_i'$ in the \emph{second}
trajectory at time $i-1$. Note that since the trajectories are
identical for times less than $i$, $b_1 = b_3$ and $b_2=b_4$. For
all values of $(b_1,b_2,b_3,b_4)$ satisfying $b_1 = b_3$ and
$b_2 = b_4$, there are two disagreements between the trajectories at
coordinates $u_i-1,u_i'-1$ at time $i$. (If $u_i = 1$ or
$u_i' = 1$, the corresponding disagreement is shifted out and only a
single disagreement remains.) The appended
bit agrees, since
\[
(b_1\oplus 1) \oplus b_2 = b_1 \oplus (b_2 \oplus 1) = b_3 \oplus
(b_4 \oplus 1) \,.
\]
This takes care of what happens at time $i$ when the single
disagreement between update coordinates occurs; at time $i$ the
Hamming distance is bounded by $2$.
Now we consider an induction on the Hamming distance, showing that
at all times the Hamming distance is bounded by two.
{\em Case A}. Suppose that the trajectories differ at two
coordinates, say $\ell_1,\ell_2$ at time $k>i$. Since the two
trajectories only differ in the updated coordinate at time $i$ with
$i < k$, the chosen update coordinate and the chosen update bit are
the same for both trajectories at time $k$. Let $b_1,b_2$ be the
bits at coordinates $\ell_1,\ell_2$ in the \emph{first} trajectory
at time $k$ and $b_3,b_4$ be the bits at coordinates $\ell_1,\ell_2$
in the \emph{second} trajectory at time $k$. Necessarily
$b_1 \neq b_3$ and $b_2 \neq b_4$. Consider the following subcases:
{\em Subcase 1.} $(b_1,b_2,b_3,b_4)=(0,1,1,0)$ or
$(b_1,b_2,b_3,b_4)=(1,0,0,1)$. If $u_k\notin\{\ell_1,\ell_2\}$, then
the trajectories continue to have the same two disagreements at
these coordinates shifted by one at time $k+1$, since the updated
coordinate and the update bit is the same for both trajectories.
Also, the new bit which is appended agrees, since
$b_1 \oplus b_2 = 1 = b_3 \oplus b_4$, and all other bits in the
mod-$2$ sum agree. So the Hamming distance remains bounded by two,
allowing the possibility that the Hamming distance decreases if
$\min\{\ell_1,\ell_2\} = 1$.
Supposing that $u_k \in \{\ell_1,\ell_2\}$, without loss of
generality, assume that $u_k = \ell_1$. These disagreements
propagate to time $k+1$, allowing for the possibility for one to be
eliminated if it occurs at coordinate $1$. For $r_k = 0$, the
appended bit will agree since
\[
(b_1\oplus 0) \oplus b_2 = 1 = (b_3 \oplus 0) \oplus b_4
\]
and all other bits in the mod-$2$ sum agree. For $r_k = 1$, the
appended bit will still agree since
\[
(b_1 \oplus 1) \oplus b_2 = 1 = (b_3 \oplus 1) \oplus b_4 \,.
\]
This means at time $k+1$, the Hamming distance is bounded by $2$.
{\em Subcase 2.} $(b_1,b_2,b_3,b_4)=(1,1,0,0)$ or
$(b_1,b_2,b_3,b_4)=(0,0,1,1)$. If $u_k\notin\{\ell_1,\ell_2\}$, then
the trajectories continue to have the same two disagreements (unless
one of the disagreements is at coordinate $1$), the appended bit
agrees (since $1\oplus 1 = 0 \oplus 0$), and the Hamming distance
remains bounded by $2$.
Suppose that $u_k = \ell_1$. If $r_k = 0$, then the two
disagreements persist (or one is eliminated because it occurred at
coordinate $1$) and the appended bit agrees. If $r_k=1$, then the
two disagreements persist, and again the appended bit agrees,
because now $0 \oplus 1 = 1 \oplus 0$.
Therefore the Hamming distance remains bounded by $2$ at time $k+1$.
{\em Case B}. Suppose that the trajectories differ at one
coordinate, say $\ell$, at time $k>i$. Consider the following
subcases:
{\em Subcase 1.} $u_k\neq \ell$. The disagreement persists unless
$u_k = 1$, and the appended bit now disagrees. Thus the Hamming
distance is bounded by $2$ at time $k+1$.
{\em Subcase 2.} $u_k=\ell$. The disagreement persists at $u_k$
(unless $u_k = 1$), and the appended bit now disagrees. Again, the
Hamming distance is bounded by $2$ at time $k+1$.
Thus, by induction, the Hamming distance between the two
trajectories always remains bounded by $2$. As a consequence, the
difference in the Hamming {\em weight} never exceeds $2$. \end{proof}
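Both Lipschitz bounds can be checked by brute force on small instances, again assuming the shift-and-append-parity form of $f$ used throughout this section; this is only a sanity check of Lemmas \ref{lemma:4.2} and \ref{lemma:4.3} on a small case, not a substitute for the proofs:

```python
from itertools import product

def f(x):
    # Assumed shift-register map: shift down, append the parity.
    return x[1:] + [sum(x) % 2]

def g(x0, driving):
    # Hamming weight after applying the full driving sequence {(u_s, r_s)}.
    x = list(x0)
    for u, r in driving:
        x[u - 1] ^= r
        x = f(x)
    return sum(x)

n = t = 4
for us in product(range(1, n + 1), repeat=t):
    for rs in product((0, 1), repeat=t):
        base = g([0] * n, list(zip(us, rs)))
        for s0 in range(t):
            # Flipping one update bit changes g_t by at most 2.
            rs2 = list(rs); rs2[s0] ^= 1
            assert abs(base - g([0] * n, list(zip(us, rs2)))) <= 2
            # Changing one update coordinate changes g_t by at most 2.
            for u_new in range(1, n + 1):
                us2 = list(us); us2[s0] = u_new
                assert abs(base - g([0] * n, list(zip(us2, rs)))) <= 2
```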
\begin{lem} \label{lem:BDI} If $W_0=0$, then $\Var(W_t) \leq 4t$. \end{lem} \begin{proof}
We use the following consequence of the Efron-Stein inequality:
Suppose that $g:{\mathcal X}^n \to {\mathbb R}$ has the property that for
constants $c_1,\ldots,c_n > 0$,
\[
\sup_{x_1,\ldots,x_n, x_i'} |g(x_1, \ldots, x_n) -
g(x_1,\ldots,x_{i-1},x_i',x_{i+1},\ldots, x_n)| \leq c_i \,.
\]
If $X_1,\ldots,X_n$ are independent random variables and
$Z=g(X_1,\ldots,X_n)$ is square-integrable, then
$\Var(Z) \leq 4^{-1}\sum_i c_i^2$. (See, for example,
\textcite[Corollary 3.2]{CI}.)
This inequality together with Lemmas \ref{lemma:4.2} and
\ref{lemma:4.3} show that
\begin{equation}
\Var(W_t)\leq \frac{1}{4}\sum_{i=1}^{2t} 2^2 = 2t \leq 4t \quad\text{for $t\leq n$.}
\end{equation} \end{proof}
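The variance bound of the lemma is easy to probe by Monte Carlo, with the shift-and-append-parity form of $f$, a uniformly random coordinate, and an independent uniform bit at each step; the parameter values below are chosen arbitrarily for illustration:

```python
import random

def f(x):
    # Assumed shift-register map: shift down, append the parity.
    return x[1:] + [sum(x) % 2]

def sample_weight(n, t, rng):
    # One realization of W_t started from the all-zeros configuration.
    x = [0] * n
    for _ in range(t):
        u = rng.randrange(n)        # uniformly random coordinate
        x[u] ^= rng.randrange(2)    # add an independent uniform bit
        x = f(x)
    return sum(x)

rng = random.Random(0)
n, t, trials = 16, 8, 4000
ws = [sample_weight(n, t, rng) for _ in range(trials)]
mean = sum(ws) / trials
var = sum((w - mean) ** 2 for w in ws) / (trials - 1)
assert var <= 4 * t  # the bound of the lemma, with room to spare in practice
```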
\begin{proof}[Proof of Theorem \ref{thm:main}(ii)]
Plugging $t=n-n^\alpha$ in Lemma \ref{lemma:4.1}, where
$\frac{1}{2}<\alpha<1$, we get
\begin{equation}
{\mathbb E} W_t=\mu_t=\frac{n}{2}-\left( 1-\frac{1}{n}\right)^{n-n^{\alpha}}\frac{n^\alpha}{2}\leq \frac{n}{2}-\frac{1}{2e}n^\alpha \label{eq:4.18}
\end{equation}
For any real-valued function $h$ on $S$ and probability $\mu$ on
$S$, write $E_{\mu}h:=\sum_{x\in S}h(x)\mu (x)$. Similarly,
$\Var_\mu(h)$ is the variance of $h$ with respect to $\mu$. As
stated earlier, $W(x)$ is the Hamming weight of $x\in S$. The
distribution of the random variable $W$ under the stationary
distribution $\pi$ (uniform on $\{0,1\}^n$), is binomial with
parameters $n$ and $1/2$, whence
\begin{equation} \label{eq:statmom} E_\pi(W)=\frac{n}{2}, \quad
\Var_{\pi}(W)=\frac{n}{4} \,.
\end{equation}
Let $c>0$ be a constant and
$A_c:=\left(\frac{n}{2}-c\sqrt{n},\infty\right)$. Chebyshev's
inequality yields that
\[
\pi\{ W \in A_c\} \geq 1 - \frac{1}{4 c^2} \,.
\]
Thus we can pick $c$ so that this is at least $1 - \eta$ for any $\eta>0$.
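Since $W$ is exactly $\mathrm{Bin}(n,1/2)$ under $\pi$, this Chebyshev estimate can be confirmed by exact computation for moderate $n$; the values of $n$ and $c$ below are arbitrary:

```python
from math import comb, sqrt

n, c = 100, 1.5
threshold = n / 2 - c * sqrt(n)
# Exact stationary probability pi{ W in A_c } = pi{ W > n/2 - c*sqrt(n) }
# for W ~ Bin(n, 1/2).
p = sum(comb(n, k) for k in range(n + 1) if k > threshold) / 2 ** n
assert p >= 1 - 1 / (4 * c ** 2)  # the Chebyshev lower bound
```

The exact tail probability is in fact much closer to $1$ than the Chebyshev bound $1-1/(4c^2)$ requires.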
Fix $\frac{1}{2}< \alpha< 1$. For $t_n=n-n^\alpha$, by
\eqref{eq:4.18},
\begin{align*}
{\mathbb P}_0\left(W_{t_n}\in A_c \right) & =
{\mathbb P}_0\left( W_{t_n} > \frac{n}{2} - c\sqrt{n}\right) \\
& \leq {\mathbb P}_0\left(W_{t_n}-{\mathbb E} W_{t_n}\geq \frac{n^\alpha}{2e}-c\sqrt{n}\right) \,.
\end{align*}
Since
\[
\frac{n^\alpha}{2e} - c n^{1/2} = n^{1/2}\underbrace{\left(
\frac{n^{\alpha-1/2}}{2e} - c \right)}_{\delta_n(c)}\,,
\]
we have again by Chebyshev's inequality,
\[
{\mathbb P}_0(W_{t_n} \in A_c) \leq \frac{ \Var(W_{t_n})}{n\delta_n(c)^2}
\leq \frac{4 t_n}{n\delta_n(c)^2} \leq \frac{4}{\delta_n(c)^2} \,.
\]
The last inequality follows from Lemma \ref{lem:BDI}, since
$t_n \leq n$.
Finally,
\[
\|Q_1^{t_n}(\textbf{0},\cdot)-\pi\|_{\text{TV}}\geq \big|\pi(W\in A_c)-
{\mathbb P}_0\left(W_{t_n}\in A_c \right)\big| \geq 1 - \frac{1}{4c^2} -
\frac{4}{\delta_n(c)^2} \,.
\]
One can take, for example, $c_n = \log n$ so that
$\delta_n(c_n) \to \infty$, in which case the bound above is
$1 - o(1)$.
\end{proof}
\section{A related chain} \label{section:6}
We now consider a Markov chain $Q_2$ on $\{0,1\}^{2m}$ related to the chain $Q_1$. One step of the $Q_2$ chain again consists of combining a stochastic move with $f$. Instead of updating a random coordinate, the chain now always updates the ``middle'' coordinate $m$.
Thus, when at $x$, first the random move \[
x \mapsto x' = (x_1,x_2,\ldots,x_{m}\oplus R,x_{m+1},\ldots,x_{2m}) \] is made, where $R$ is an independent random bit. Afterwards, the transformation $f$ is applied to yield the new state $f(x')$.
\begin{theorem}\label{t2}
For every even $n=2m$ with $m\geq 1$ and all $x$, $\|Q_2^{(n)}(x,\cdot)-\pi\|_{{\rm TV}}= 0$. \end{theorem}
\begin{rmk}
Note that if the transformation is the circular shift instead of
$f$, then this would be equivalent to systematic scan, which
trivially yields an exact uniform sample in exactly $n=2m$ steps.
Thus this chain can be viewed as a small perturbation of systematic
scan which is Markovian and still yields an exact sample in $n=2m$
steps. \end{rmk}
\begin{proof} We denote by $(R_1, R_2, \ldots)$ the sequence of bits used to update the chain.
To demonstrate by means of an example how the walk evolves with time, Table \ref{table:1} shows the coordinates of $Y_t$ at different times $t$ for $2m=6$, when starting from $\textbf{0}$.
\begin{table}[h!]
\centering
\begin{tabular}{||c c c c c c c||}
\hline
Coordinate & 1 & 2 & 3 & 4 & 5 & 6 \\ [1ex]
\hline\hline
t=0 & 0 & 0 & 0 & 0 & 0 & 0 \\
t=1 & 0 & $R_1$ & 0 & 0 & 0 & $R_1$ \\
t=2 & $R_1$ & $R_2$ & 0 & 0 & $R_1$ & $R_2$ \\
t=3 & $R_2$ & $R_3$ & 0 & $R_1$ & $R_2$ & $R_3$ \\
t=4 & $R_3$ & $R_4$ & $R_1$ & $R_2$ & $R_3$ & $R_1\oplus R_4$ \\
t=5 & $R_4$ & $R_1\oplus R_5$ & $R_2$ & $R_3$ & $R_1\oplus R_4$ & $R_2\oplus R_5$\\
t=6 & $R_1\oplus R_5$ & $R_2\oplus R_6$ & $R_3$ & $R_1\oplus R_4$ & $R_2\oplus R_5$ & $R_3\oplus R_6$ \\ [1ex]
\hline
\end{tabular}
\caption{Evolution of coordinates with time for $2m=6$}
\label{table:1} \end{table}
Let $n = 2m$ be an even integer, and let $Z_1,Z_2,\ldots ,Z_{n}$ be the random variables occupying the $n$ coordinates at time $t=n$. The following relationships hold for any starting state $x=(x_1,x_2,\ldots,x_{n})$:
\begin{align*}
Z_1 &= R_1\oplus R_{m+2}\oplus \bigoplus_{i=1}^{2m} x_i \\
Z_2 &= R_2\oplus R_{m+3}\oplus x_1 \\
&\vdots \\
Z_{m-1} &= R_{m-1}\oplus R_{2m}\oplus x_{m-2} \\
Z_m &= R_m\oplus x_{m-1} \\
Z_{m+1} &= R_1\oplus R_{m+1}\oplus x_m \\
Z_{m+2} &= R_2\oplus R_{m+2}\oplus x_{m+1} \\
&\vdots \\
Z_{n} &= R_m\oplus R_{n}\oplus x_{n-1} \end{align*}
This is because at $t=1$, the random variable at coordinate $n$ is $R_1\oplus\bigoplus_{i=1}^{n} x_i$. At time $t=m+1$, this random variable moves to coordinate $m$ because of successive shift register operations. Because the coordinate updated at any time along this chain is $m$, we have that at time $t=m+2$, the random variable at coordinate $m-1$ is $R_1\oplus R_{m+2}\oplus \bigoplus_{i=1}^{n} x_i$. Again, because of successive shift register operations, the random variable $R_1\oplus R_{m+2}\oplus \bigoplus_{i=1}^{n} x_i$ moves to coordinate $1$ at time $n$. The random variables at the other coordinates can be worked out similarly. Thus the above system of equations can be written in matrix form as $Z=BR+\Vec{x}$, where:
\[
Z = (Z_1,\ldots,Z_n)^T, \quad R = (R_1,\ldots,R_n)^T \,, \] and
\begin{align*}
B_{n\times n} & =\begin{bmatrix} I_{m\times m} & C_{m\times m} \\
I_{m\times m} & I_{m\times m} \end{bmatrix}, &
C_{m\times m} & = \begin{bmatrix} 0_{(m-1)\times 1} & I_{(m-1)\times (m-1)} \\ 0_{1\times 1} & 0_{1\times (m-1)} \end{bmatrix} \,. \end{align*}
and \[
\Vec{x}=\begin{bmatrix}\bigoplus_{i=1}^{n} x_i\\x_1\\\vdots
\\x_{n-1} \end{bmatrix} \,. \]
Note that \[
\det (B)=\det (I) \times \det (I-II^{-1}C) = \det (I-C)=1\neq 0 \,. \] The last equality follows since $\det (I-C)=1$ because $I-C$ is an upper triangular matrix with ones along the main diagonal. Hence $B$ is an invertible matrix and if $z\in \{0,1\}^{n}$, then $$ {\mathbb P}(Z=z)={\mathbb P}\big(R=B^{-1}(z-\Vec{x})\big)=\frac{1}{2^{n}} \,, $$ where the last equality follows from the fact that $R$ is uniform over $S=\{0,1\}^{n}$. Thus the state along $Q_2$ chain at $t=2m=n$ is uniform over $S$ and \[
\|Q_2^{(n)}(\textbf{0},\cdot)-\pi\|_{\text{TV}}=0, \quad \text{$n$ is even}. \]
\end{proof}
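For small $m$ the theorem can also be verified directly by enumerating all update-bit sequences and checking that the map $R\mapsto Z$ is a bijection; the sketch below assumes, as before, that $f$ shifts down and appends the parity:

```python
from itertools import product

def f(x):
    # Assumed shift-register map: shift down, append the parity.
    return x[1:] + [sum(x) % 2]

def q2_state(x0, bits):
    # Q2: always add the update bit at the middle coordinate m, then apply f.
    n, m = len(x0), len(x0) // 2
    x = list(x0)
    for r in bits:
        x[m - 1] ^= r
        x = f(x)
    return tuple(x)

n = 6
states = {q2_state([0] * n, bits) for bits in product((0, 1), repeat=n)}
# R -> Z is a bijection, so the state at time n is exactly uniform.
assert len(states) == 2 ** n
```

With bits $(1,0,0,0,0,0)$ this reproduces the $t=6$ row of Table \ref{table:1}.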
\section{Conclusion and Open Questions}
Here we have shown that the ``shift-register'' transformation speeds up the mixing of the walk on the hypercube, for which the stationary distribution is uniform. The shift-register transformation is a good candidate for a deterministic mixing function, as shift-registers were used for early pseudo-random number generation.
One of our original motivations for analyzing this chain was in part that the uniform distribution corresponds to the infinite temperature Ising measure. Indeed, of great interest are chains on product spaces having non-uniform stationary distributions, such as Gibbs measures. Finding deterministic transformations which speed up mixing for non-uniform distributions remains a challenging and intriguing open problem.
\end{document} |
\begin{document}
\title{Strong extinction of a laser beam by a single molecule}
\author{I. Gerhardt} \author{G. Wrigge} \author{P. Bushev} \author{G. Zumofen} \author{R. Pfab} \author{V. Sandoghdar}\email{[email protected]}
\affiliation{Laboratory of Physical Chemistry, ETH Zurich, CH-8093 Zurich, Switzerland}
\begin{abstract} We present an experiment where a single molecule strongly affects the amplitude and phase of a laser field emerging from a subwavelength aperture. We achieve a visibility of $-6\%$ in direct and $+10\%$ in cross-polarized detection schemes. Our analysis shows that a close to full extinction should be possible using near-field excitation. \end{abstract}
\pacs{42.50.Gy, 68.37.Uv, 42.50.Nn, 42.62.Fi}
\date{April 13, 2006}
\maketitle
The strength of the interaction between radiation and matter is often formulated in terms of a cross section, modelling the object as a disk of a certain area exposed to the radiation. For an ideal oriented quantum mechanical two-level system with a transition wavelength $\lambda$, the absorption cross section amounts to $\sigma_{abs}=3\lambda^2/2\pi$~\cite{Loudon}. This formulation suggests that a single emitter would fully extinguish the incident light if one could only focus it to an area smaller than $\sigma_{abs}$. In fact, it is possible to confine a laser beam to an area of about $(\lambda/2 N.A.)^2$, using immersion optics with a high numerical aperture ($N.A.$) or to confine it to a subwavelength area using Scanning Near-field Optical Microscopy (SNOM). However, it turns out that it is a nontrivial task to explore these ideas experimentally because systems in the gas phase are not easily compatible with strongly confined laser fields~\cite{Wineland:87}. Moreover, solid-state emitters suffer from broad homogeneous linewidths ($\gamma$) at room temperature, reducing $\sigma_{abs}$ by the factor $\gamma_0/\gamma$ where $\gamma_0$ is the radiative linewidth~\cite{Loudon}. Considering these constraints, we have chosen to apply SNOM to excite molecules at $T=1.4$~K and have succeeded in detecting single molecules in transmission with a visibility of up to $10\%$, which we present in this Letter.
Optical detection of single solid-state emitters was pioneered by applying a double modulation technique in absorption spectroscopy~\cite{Moerner:89}. In that work molecules doped in an organic crystal were excited via the narrow zero-phonon lines (transition $1 \rightarrow 2$, see Fig.~1b) at liquid helium temperature, and a laser focus of $3~\mu \rm m$ yielded an absorption effect of about $10^{-4}$. Soon after that it was shown that the detection signal-to-noise ratio (SNR) would be much more favorable if one performed fluorescence excitation spectroscopy to collect the Stokes shifted emission of the molecule (transition $2 \rightarrow 3$) while blocking the excitation beam with filters~\cite{Orrit:90,SMbook}. Unfortunately, however, this method sacrifices the information about coherent processes at the frequency $\nu_{21}$. A few recent works have used the interference between the coherent radiation of a single emitter and a part of the excitation beam in reflection to get around this issue~\cite{Plakhotnik:01,Alen:05}. Here we extend the latter idea to direct transmission measurements.
\begin{figure}
\caption{(a) Schematics of the experimental setup. The polarizer was inserted for the data in Fig.~3; for details see text. (b) The molecular energy level scheme. (c) An electron microscope image of a typical SNOM probe. }
\end{figure}
Figure 1a shows the schematics of the experimental arrangement based on a combined cryogenic confocal/near-field microscope operating at $T=1.4$~K~\cite{Michaelis:00}. Light from a tunable dye laser ($\lambda \sim 615$~nm, linewidth $\sim$1~MHz) was coupled into a single mode fiber leading to the SNOM tip in the cryostat. The SNOM probes were fabricated by evaporating aluminum onto heat-pulled fiber tips and subsequent focused ion beam milling of their ends to produce apertures of about 100~nm (see Fig.~1c). The tip could be positioned in front of the sample using home-built piezo-electric sliders. The shear force signal using a quartz tuning fork was used to determine the point of contact between the tip and the sample to within 10~nm. The sample could be scanned in 3 dimensions using a commercial piezo-electric stage. The light transmitted through the tip or emitted by the molecules was collected by a microscope objective ($N.A.$=0.8) which could also be translated by a slider. Filters and beam splitters were used to direct the radiation of the $2 \rightarrow 1$ transition (of the order of $40\%$ of the total emission) to the avalanche photodiode $PD_{21}$ and the Stokes shifted fluorescence to $PD_{23}$. In the experiments discussed in this work, an iris with an opening of about 1~mm selected the central $10\%$ of the forward emission in the $PD_{21}$ path.
Crystalline films of \emph{p}-terphenyl doped with dibenzanthanthrene (DBATT) molecules~\cite{Boiron:96} were fabricated on a glass coverslip by spin coating~\cite{pfab:04}. After an annealing step, the resulting sample contained large regions of \emph{p}-terphenyl with typical heights of 50-100~nm, thin enough to allow near-field studies. By performing room temperature experiments, we determined the transition dipole moments of the DBATT molecules to lie at an angle of about $25\pm 5^\circ$ with respect to the tip axis and measured the fluorescence lifetime of level $\left\vert 2\right\rangle$ to be $20\pm3$~ns, corresponding to a full width at half-maximum linewidth of $\gamma_0=8 \pm 1 $~MHz.
\begin{figure}
\caption{Symbols show simultaneously recorded near-field fluorescence excitation (a) and transmission (b) spectra. The solid red curves display fits. cps stands for counts per second.}
\end{figure}
In a given experimental run, we first used fluorescence excitation spectroscopy to identify the zero-phonon transition frequencies of the molecules located under the tip aperture. Next we adjusted the tip-sample separation to about 60~nm and optimized the fluorescence signal $I_{23}$ from the molecule by laterally positioning the sample in the $x$ and $y$ directions. Fig.~2a shows a near-field fluorescence excitation spectrum measured on $PD_{23}$. Fig.~2b displays the signal on $PD_{21}$ recorded simultaneously with the first spectrum, revealing an impressive drop of $6\%$ in the incident beam power due to the presence of a single molecule. The SNR of 20 was obtained with an integration time per pixel of 10~ms and averaging of 20 scans. The laser intensity in the tip was stabilized to $0.3\%$ by monitoring a slight light leakage around a fiber bend in the cryostat.
The transmission spectrum recorded on $PD_{21}$ suggests that, as expected intuitively, the incident light is ``absorbed" by the molecule on resonance. However, scattering theory~\cite{Jackson-book,Loudon} tells us that the signal $I_{\rm d}$ on the detector $PD_{21}$ is due to the interference of the electric fields of the excitation light $E_{\rm e}$ and the radiation emitted by the molecule $E_{\rm m}$. Using the operator notation common in quantum optics~\cite{Loudon}, we can write \begin{equation} I_{\rm d}=\left\langle \widehat{\mathbf{E}}_{\rm e}^{-}\cdot\widehat{\mathbf{E}}_{\rm e}^{+}\right \rangle +\left\langle \widehat{\mathbf{E}}_{\rm m}^{-}\cdot\widehat{\mathbf{E}}_{\rm m}^{+}\right\rangle +2 \operatorname{Re}{\left\{ \left\langle \widehat{\mathbf{E}}_{\rm e}^{-}\cdot\widehat{\mathbf{E}} _{\rm m}^{+}\right\rangle \right\}.} \end{equation} The first term represents the intensity $I_{\rm e}$ of the excitation light on the detector. The part of the electric field $E_{\rm m}$ on $PD_{21}$ is composed of two contributions, $E_{\rm m}=E_{21}^{\rm c}+E_{21}^{\rm ic}$. The first component represents the coherent elastic (Rayleigh) scattering process where the phase of the incident laser beam is preserved. The second term takes into account an incoherent part which is produced by the inelastic interactions between the excitation light and the upper state of the molecule~\cite{Loudon} as well as the emission in the phonon wing of the $2 \rightarrow 1$ transition due to its phonon coupling with the matrix~\cite{orrit:93}. So the second component of Eq.~(1) represents the molecular emission intensity $I^{\rm c}_{21}+I^{\rm ic}_{21}$. The last term of Eq.~(1) is responsible for the interference between the excitation field and the coherent part of the molecular emission $E_{21}^{\rm c}$ and is often called the ``extinction" term~\cite{Jackson-book}. 
It follows that if $E_{\rm m}$ is much weaker than $E_{\rm e}$, the extinction term dominates the direct molecular emission intensity. Thus, although the Stokes shifted fluorescence removes some energy out of the $PD_{21}$ detection path, the dip in Fig.~2b is predominantly due to interference.
We now develop a quantitative description of the transmission signal on $PD_{21}$, keeping in mind the vectorial nature and polarization properties of the light fields. The excitation electric field along the unit vector $\mathbf{u}_{\rm d}$ at position $\mathbf{r}$ of the detector can be written as \begin{equation} \left\langle \widehat{\mathbf{E}}_{\rm e}^{+}(\mathbf{r})\right\rangle =\left[\left\langle \widehat{\mathbf{E}}^{+}_{\rm e}(\mathbf{r}_{\rm m})\right\rangle \cdot \mathbf{u}_{\rm m}\right]~g~\mathbf{u}_{\rm d}. \end{equation} Here $\mathbf{r}_{\rm m}$ is the position of the molecule, and $\mathbf{u}_{\rm m}$ is a unit vector along its transition dipole moment. We have introduced $g=\left\vert g\right\vert e^{i\phi _g}$ as a complex modal factor that accounts for the amplitude, phase and polarization of the tip emission at $\mathbf{u}_{\rm d}$, starting from the projection of $\mathbf{E}_{\rm e}$ on $\mathbf{u}_{\rm m}$. The electric field of the coherently scattered light is given by~\cite{Loudon} \begin{equation} \left\langle \widehat{\mathbf{E}}_{21}^{c+}(\mathbf{r}) \right\rangle =(\sqrt{\alpha}d_{21})~\rho _{21}~f~\mathbf{u}_{\rm d}\,.
\end{equation} The quantity $d_{21}=\langle 2|\hat{D}|1\rangle$ is the matrix element of the dipole operator. In a solid-state system, the intensity of the zero-phonon line is typically reduced by the Debye-Waller factor $\alpha$ due to emission into the phonon wing~\cite{orrit:93}. The dipole moment corresponding to the zero-phonon transition therefore, is given by $\sqrt{\alpha}d_{21}$. Our preliminary measurements for DBATT in \emph{p}-terphenyl thin films let us estimate $\alpha=0.25\pm 0.05$. Furthermore, adapting the standard textbook treatment of resonance fluorescence to a three level system, we find for the steady-state density matrix element $\rho _{21}$, \begin{equation} \rho _{21}=\frac{\Omega \left( -\Delta +i\gamma /2\right) }{2}\mathcal{L}(\nu) \end{equation} with \begin{equation} \mathcal{L}(\nu)=\frac{1}{\Delta ^{2}+\gamma ^{2}/4+\Omega ^{2}(\gamma /2\gamma _{0})K} \simeq \frac{1}{ \Delta ^{2}+\gamma ^{2}/4} \end{equation} where $\Delta=\nu-\nu_{21}$ is the detuning between the laser frequency $\nu$ and the zero-phonon frequency $\nu_{21}$. The quantity $\Omega =(\sqrt{\alpha}d_{21})\left[ \left\langle \widehat{\mathbf{E}}^{+}_{\rm e}(\mathbf{r}_{\rm m})\right\rangle \cdot \mathbf{u}_{\rm m}\right]/h$ stands for the Rabi frequency. We have verified that all our measurements are in the regime well below saturation so that $\Omega \ll\gamma$. The parameter $K=1+k_{23}/2k_{31}$ accounts for the competition of various decay rates $k_{21}$, $k_{23}$, and $k_{31}$ (see Fig.~1b). In our case $k_{31}\gg k_{23}$ so that $K\simeq 1$. These considerations result in the last form of the expression in Eq.~(5). Finally, in Eq.~(3) we have introduced $f=\left\vert f\right\vert e^{i\phi _{f}}$ similar to $g$, as a complex modal factor that determines the angular dependence, the phase and polarization of the molecular field at the detector. However, contrary to $g$, we have included some dimensional parameters in $f$ to simplify the presentation of our formulae.
Using Eqs.~(1-5), we can now write \begin{eqnarray} I_{\rm d}&=&I_{\rm e}\left[1+\alpha~d_{21}^{4}~\frac{\gamma }{\gamma _{0}}\left\vert \frac{f}{g} \right\vert ^{2}\mathcal{L}\left( \nu \right)\right. \nonumber\\ &-& \left. 2~\alpha~d^2_{21}\left\vert \frac{f}{g} \right\vert \mathcal{L}\left( \nu \right) \left( \Delta \cos \psi + \frac{\gamma }{2}\sin \psi\right)\right] \end{eqnarray} for the forward direction. The phase angle $\psi=\phi_f-\phi_g$ denotes the accumulated phase difference between the radiation of the molecule and of the SNOM tip on the detector after propagation through the whole optical system. By determining $I_{\rm e}$ from the off-resonant part of the spectra and $\gamma$ from the simultaneously recorded $I_{23}(\nu)$, we have fitted our frequency scans using Eq.~(6), whereby we have used $\psi$ and the multiplicative factor $d^2_{21}\left \vert f/g\right \vert$ as free parameters. An example is shown by the red solid curve in Fig.~2b where we obtained $\psi=\pi/2$. It is also possible to vary $\psi$ to obtain dispersive shapes or even peaks. One way of achieving this is to scan the tip in the \emph{x, y}, and \emph{z} directions; our results on such studies will be published separately. In what follows, we show how the observed signal could be changed by selecting different polarizations in the detection path. Furthermore, we define the visibility $V(\nu)=(I_{\rm d}-I_{\rm e})/I_{\rm e}$ of the detected signal and will discuss how the change in the ratio $\left\vert f/g \right\vert$ of the modal factors influences the observed visibility.
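The lineshapes predicted by Eq.~(6) can be made concrete with a small numerical sketch; here $\kappa$ is a hypothetical stand-in for the combination $\alpha\,d_{21}^2\left\vert f/g\right\vert$, and all parameter values are illustrative rather than fitted:

```python
import math

# Illustrative parameters (not fitted values from the experiment):
gamma, gamma0, alpha = 35.0, 8.0, 0.25  # linewidths in MHz, Debye-Waller factor
kappa = 0.05                            # stand-in for alpha * d21^2 * |f/g|

def visibility(delta, psi):
    # V = (I_d - I_e)/I_e evaluated from Eq. (6), with delta = nu - nu_21.
    L = 1.0 / (delta ** 2 + gamma ** 2 / 4)
    direct = (kappa ** 2 / alpha) * (gamma / gamma0) * L          # |E_m|^2 term
    interference = -2 * kappa * L * (delta * math.cos(psi)
                                     + (gamma / 2) * math.sin(psi))
    return direct + interference

# psi = pi/2 gives a symmetric absorptive dip on resonance ...
assert visibility(0.0, math.pi / 2) < 0
assert abs(visibility(10.0, math.pi / 2) - visibility(-10.0, math.pi / 2)) < 1e-9
# ... while psi = 0 gives a dispersive (antisymmetric) interference term.
assert visibility(10.0, 0.0) != visibility(-10.0, 0.0)
```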
\begin{figure}
\caption{Tip emission and Stokes shifted molecular fluorescence for different orientations of a polarizer in the detection path.}
\end{figure}
Figure 3 shows the polarization dependence of the SNOM emission and the Stokes shifted fluorescence on the two detectors when a linear polarizer was placed after the cryostat (see Fig.~1a). We find that the main polarization axes of $E_{\rm e}$ and $E_{23}$ are offset by about $20^\circ$. The weak polarization extinction ratio (2:1) of $I_{23}$ stems partly from the component of the molecular dipole along the optical axis which gives rise to radially polarized fields at the outer part of the detected beam~\cite{Lieb:04}. Furthermore, the \emph{p}-terphenyl film and dielectric mirrors introduce a non-negligible degree of ellipticity to the polarization. Note that since the polarization properties of the fields $E_{23}$ and $E^{\rm c}_{21}$ of the molecule are both determined by the orientation of its dipole moment, the dashed lines in Fig.~3 also provide information on the polarization properties of $E^{\rm c}_{21}$.
In Fig.~4a we present $V(\nu)$ recorded on $PD_{21}$ for a series of polarizer angles $ \theta$ whereby $\theta=0$ marks the highest $I_{\rm e}$. Here we have adapted the integration times for each $\theta$ to keep roughly the same SNR. As $\theta$ changes, the spectrum evolves from an absorptive to a dispersive shape, revealing a variation in $\psi=\phi_f-\phi_g$. The points in Fig.~4b display an example for $\theta=75^\circ$. The rotation of the polarizer results in the projection of $E_{\rm e}$ and $E^{\rm c}_{21}$ onto different detection polarizations $\mathbf{u }_{\rm d}$, which in turn implies variations of $g$ and $f$. In particular, $\psi$ is changed if the fields possess some ellipticity. Indeed, by extracting the degree of ellipticity from the polarization data in Fig.~3, we have been able to fit the recorded spectra in Fig.~4a simultaneously for all $\theta$ using Eq.~(6).
As displayed in Fig.~4c, a remarkable situation results for the cross-polarized detection ($\theta=90^\circ$) where the visibility reaches $+10\%$ on resonance, yielding $\psi\simeq 3\pi/2$. This surplus of light on resonance is a clear signature of interference and redistribution of light. In fact, we have verified that the sum of the spectra recorded at $\theta$ and $\theta+\pi/2$ remains constant and absorptive for all $\theta$. The fact that $V$ is larger here than in Fig.~2b although $I_{\rm d}$ is diminished to $1/20$ of its original value, can be readily explained by Eq.~(6) where reducing $g$ causes an increase in $V$. It is furthermore interesting to note that if $g$ approaches zero, the second term in Eq.~(6) dominates, giving access to the direct emission on the $2 \rightarrow 1$ transition. The physical origin and the optimization of $I_{\rm d}$ and $V$ therefore, depend on the ratio $\left\vert f/g \right\vert$.
We now discuss various aspects of our results and their prospects for future studies. The strong near-field coupling between the laser field and the molecule makes it possible to detect the coherent emission of weakly fluorescing emitters. The data in Fig.~2 show that a very low coherent emission of $I^{\rm c}_{21}\sim25$ counts per second (cps) could be detected at a comfortably measurable contrast of $1\%$ with an incident power of $I_{\rm e}=2.5\times 10^5$ cps. On the other hand, an ideal two-level system would deliver an even larger effect than that reported here. Since the extinction term in Eq.~(6) is proportional to $\alpha/\gamma$ on resonance, the values $\gamma_0=8 \pm 1$~MHz and $\gamma=35$~MHz (see Fig.~2) allow us to estimate that $V$ could have been $\gamma/(\alpha\gamma_0)\simeq16$ times larger, approaching $100\%$.
\begin{figure}
\caption{(a) Frequency spectra recorded on $PD_{21}$ for 30 polarizer angles spanning $180^\circ$. The visibility is color coded. (b, c) Cross sections from (a) for $\theta=75^\circ$ and $\theta=90^\circ$ with fit functions according to Eq.~(6).}
\end{figure}
We emphasize that the visibility $V$ defined above is not a direct measure of the absorption cross section that is conventionally defined for a plane wave excitation. A rigorous treatment of the absorption cross section for near-field excitation should take into account the inhomogeneous distribution of the vectorial excitation and molecular fields, as well as the tip and sample geometries, and it remains a topic of future study. Here it suffices to point out that the key advantage in near-field coupling is that the field lines of the SNOM tip or a nanoscopic dipolar antenna~\cite{Kuehn:06} are well matched to those of a dipolar emitter, resulting in a better mode overlap than that achievable in far-field optics~\cite{vanEnk:01}. In this work, we have considered the forward propagation through an iris (see Fig.~1a). However, by replacing the avalanche photodiode with a sensitive camera, it is possible to map the complex mode overlap between $E_{\rm m}$ and $E_{\rm e}$. We are currently pursuing such measurements.
The experiments presented here were performed on many samples, molecules, and tips, and visibilities above $2\%$ were consistently obtained. To the best of our knowledge, an extinction of $6\%$ and a surplus signal of $10\%$ are the largest effects yet reported for a single emitter acting directly on a light beam. The interferometric nature of our detection scheme provides access to the coherent optical phenomena in a single solid-state emitter. Furthermore, the efficient near-field absorption spectroscopy technique presented here can be used to study weakly fluorescing nano-objects or single emitters that do not have large Stokes shifts. Moreover, our work might find applications in single atom detection on atom chips~\cite{Horak:03}. Finally, a direct and strong coupling of light to a single emitter allows an efficient manipulation of its phase and amplitude, which are considered to be key elements for the execution of various quantum optical tasks~\cite{Turchette:95}.
We thank M. Agio for theoretical support and A. Renn, C. Hettich and S. K\"{u}hn for experimental support. This work was financed by the Schweizerischer Nationalfonds (SNF) and the ETH Zurich initiative for Quantum Systems for Information Technology (QSIT).
\end{document}
\begin{document}
\title{Extremal geometry of a Brownian porous medium}
\author{ Jesse Goodman \and Frank den Hollander }
\institute{J. Goodman \and F. den Hollander \at Mathematical Institute, Leiden University, P.O.\ Box 9512, 2300 RA Leiden, The Netherlands.\\ \email{[email protected], [email protected]}}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract} The path $W[0,t]$ of a Brownian motion on a $d$-dimensional torus $\mathbb{T}^d$ run for time $t$ is a random compact subset of $\mathbb{T}^d$. We study the geometric properties of the complement $\mathbb{T}^d\setminus W[0,t]$ as $t\to\infty$ for $d\geq 3$. In particular, we show that the largest regions in $\mathbb{T}^d\setminus W[0,t]$ have a linear scale $\varphi_d(t)=[(d\log t)/((d-2)\kappa_d t)]^{1/(d-2)}$, where $\kappa_d$ is the capacity of the unit ball. More specifically, we identify the sets $E$ for which $\mathbb{T}^d\setminus W[0,t]$ contains a translate of $\varphi_d(t)E$, and we count the number of disjoint such translates. Furthermore, we derive large deviation principles for the largest inradius of $\mathbb{T}^d\setminus W[0,t]$ as $t\to\infty$ and the $\epsilon$-cover time of $\mathbb{T}^d$ as $\epsilon \downarrow 0$. Our results, which generalise laws of large numbers proved by Dembo, Peres and Rosen \cite{DPR2003}, are based on a large deviation estimate for the shape of the component with largest capacity in $\mathbb{T}^d \setminus W_{\rho(t)}[0,t]$, where $W_{\rho(t)}[0,t]$ is the Wiener sausage of radius $\rho(t)$, with $\rho(t)$ chosen much smaller than $\varphi_d(t)$ but not too small. The idea behind this choice is that $\mathbb{T}^d \setminus W[0,t]$ consists of ``lakes'', whose linear size is of order $\varphi_d(t)$, connected by narrow ``channels''. We also derive large deviation principles for the principal Dirichlet eigenvalue and for the maximal volume of the components of $\mathbb{T}^d \setminus W_{\rho(t)}[0,t]$ as $t\to\infty$. Our results give a complete picture of the extremal geometry of $\mathbb{T}^d\setminus W[0,t]$ and of the optimal strategy for $W[0,t]$ to realise the extremes.
\keywords{Brownian motion \and random set \and capacity \and largest inradius \and cover time \and principal Dirichlet eigenvalue \and large deviation principle} \subclass{60D05 \and 60F10 \and 60J65} \end{abstract}
\section{Introduction} \lbsect{Intro}
\subsection{Five key questions} \lbsubsect{Mot}
$\bullet$ Our basic object of study is the complement of a random path:
\begin{varquestion} \lbquestion{Geometry} Run a Brownian motion $W=(W(t))_{t \geq 0}$ on a $d$-dimensional torus $\mathbb{T}^d$, $d\geq 3$. What is the geometry of the random set $\mathbb{T}^d \setminus W[0,t]$ for large $t$? \end{varquestion}
\noindent \reffig{d=2sim} shows a simulation in $d=2$.
\begin{figure}
\caption{Simulation of $W[0,t]$ (shown in black) for $t=15$ in $d=2$. The holes in $\mathbb{T}^2\setminus W[0,t]$ (shown in white) have an irregular shape. The goal is to understand the geometry of the largest holes. The present paper only deals with $d \geq 3$. In \refsubsubsect{2d} below we will reflect on what happens in $d=2$. }
\end{figure}
Regions with a random boundary have been studied intensively in the literature, and questions such as \refquestion{Geometry} have been approached from a variety of perspectives. Sznitman~\cite{S1998} studies the principal Dirichlet eigenvalue when a Poisson cloud of obstacles is removed from Euclidean space $\mathbb{R}^d$, $d\geq 1$. Van den Berg, Bolthausen and den Hollander~\cite{vdBBdH2001} consider the large deviation properties of the volume of a Wiener sausage on $\mathbb{R}^d$, $d \geq 2$, and identify the geometric strategies for achieving these large deviations. Probabilistic techniques also play a role in the analysis of deterministic shapes, such as strong circularity in rotor-router and sandpile models shown by Levine and Peres~\cite{LP2009}, and heat flow in the von Koch snowflake and its relatives analysed by van den Berg and den Hollander~\cite{vdBdH1999}, van den Berg~\cite{vdB2000}, and van den Berg and Bolthausen~\cite{vdBB2004}. The discrete analogue to \refquestion{Geometry}, random walk on a large discrete torus, is connected to the random interlacements model of Sznitman~\cite{S2010} (to which we will return in \refsubsubsect{RandomInterlacements} below).
\refquestion{Geometry} is studied by Dembo, Peres and Rosen~\cite{DPR2003} for $d\geq 3$ and Dembo, Peres, Rosen and Zeitouni~\cite{DPRZ2004} for $d=2$. In both cases, a law of large numbers is established for the \emph{$\epsilon$-cover time} (the time for the Brownian motion to come within distance $\epsilon$ of every point) as $\epsilon\downarrow 0$. For $d\geq 3$, Dembo, Peres and Rosen also obtain the multifractal spectrum of \emph{late points} (those points that are approached within distance $\epsilon$ on a time scale that is a positive fraction of the $\epsilon$-cover time). In the present paper we will consider a large but fixed time $t$, and we will use a key lemma from \cite{DPR2003} to obtain global information about $\mathbb{T}^d \setminus W[0,t]$. Throughout the paper we fix a dimension $d \geq 3$. The behaviour in $d=2$ is expected to be quite different (see the discussion in \refsubsubsect{2d} below).
A random set is an infinite-dimensional object, hence issues of measurability require care. In general, events are defined in terms of whether a random closed set intersects a given closed set, or whether a random open set contains a given closed set: see Matheron~\cite{M1975} or Molchanov~\cite{M2005} for a general theory of random sets and questions related to their geometry. On the torus we will parametrize these basic events as \begin{equation} \label{BasicEvent} \set{(x+\varphi E)\cap W[0,t]=\varnothing} = \shortset{x+\varphi E\subset \mathbb{T}^d\setminus W[0,t]}, \qquad x\in\mathbb{T}^d,\,E\subset\mathbb{R}^d \text{ compact} \end{equation} (see \eqref{AdditionOfSet} below), where $\varphi>0$ acts as a scaling factor. The set $E$ in \eqref{BasicEvent} plays a role similar to that of a test function, and we will restrict our attention to suitably regular sets $E$, for instance, compact sets with non-empty interior.
\noindent $\bullet$ In giving an answer to \refquestion{Geometry}, we must distinguish between global properties, such as the size of the largest inradius or the principal Dirichlet eigenvalue of the random set, and local properties, such as whether or not the random set is locally connected. In the present paper we focus on the \emph{global properties} of $\mathbb{T}^d \setminus W[0,t]$. We will therefore be interested in the existence of subsets of $\mathbb{T}^d\setminus W[0,t]$ of a given form:
\begin{varquestion} \lbquestion{ExistsTranslate} For a given compact set $E\subset\mathbb{R}^d$, what is the probability of the event \begin{equation} \label{ExistsTranslateEvent} \set{\exists x\in\mathbb{T}^d\colon\, x+\varphi E\subset \mathbb{T}^d\setminus W[0,t]} = \bigcup_{x\in\mathbb{T}^d}\set{x+\varphi E\subset \mathbb{T}^d\setminus W[0,t]} \end{equation} formed as the uncountable union of events from \eqref{BasicEvent}? \end{varquestion}
\noindent For instance, questions about the inradius can be formulated in terms of \refquestion{ExistsTranslate} by setting $E$ to be a ball.
The answer to \refquestion{ExistsTranslate} depends on the scaling factor $\varphi$. To obtain a non-trivial result we are led to choose $\varphi=\varphi_d(t)$ depending on time, where \begin{equation} \label{phidtDefinition} \varphi_d(t) = \left(\frac{d}{(d-2)\kappa_d}\,\frac{\log t}{t}\right)^{1/(d-2)}, \qquad t>1, \end{equation} and $\kappa_d$ is the constant \begin{equation} \label{kappadDefinition} \kappa_d=\frac{2\pi^{d/2}}{\Gamma(d/2-1)}. \end{equation} We will see that $\varphi_d(t)$ represents the \emph{linear size of the largest subsets of} $\mathbb{T}^d\setminus W[0,t]$, in the sense that the limiting probability of the event in \eqref{ExistsTranslateEvent} decreases from $1$ to $0$ as the set $E$ increases from small to large, in the sense of small or large capacity (see \refsubsubsect{CapacityDefinition} below).
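For concreteness, the constants in \eqref{phidtDefinition}--\eqref{kappadDefinition} are easy to evaluate numerically. The following sketch is an illustration of ours, not part of the paper (the function names are our choice); it computes $\kappa_d$ and $\varphi_d(t)$ and checks that $\kappa_3 = 2\pi^{3/2}/\Gamma(1/2) = 2\pi$:

```python
from math import pi, gamma, log

def kappa(d):
    """kappa_d = 2 * pi^(d/2) / Gamma(d/2 - 1), the capacity of the unit ball in R^d."""
    return 2 * pi ** (d / 2) / gamma(d / 2 - 1)

def phi(d, t):
    """phi_d(t) = [ d / ((d-2) * kappa_d) * (log t / t) ]^(1/(d-2)),
    the linear scale of the largest holes in the complement of W[0,t]."""
    return (d / ((d - 2) * kappa(d)) * log(t) / t) ** (1 / (d - 2))

# In d = 3: kappa_3 = 2 * pi^(3/2) / Gamma(1/2) = 2 * pi, since Gamma(1/2) = sqrt(pi).
print(kappa(3))                  # ~6.2832
# phi_d(t) tends to 0 as t grows, for every d >= 3.
print(phi(3, 1e6), phi(5, 1e6))
```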
In what follows we will see that $\mathbb{T}^d\setminus W[0,t]$ is controlled by two spatial scales: \begin{equation} \label{twoscales} \varphi_\mathrm{local}(t) = \left(\frac{1}{t}\right)^{1/(d-2)}, \qquad \varphi_\mathrm{global}(t) = \left(\frac{\log t}{t}\right)^{1/(d-2)}. \end{equation} The linear size of the \emph{typical} holes in $\mathbb{T}^d\setminus W[0,t]$ is of order $\varphi_\mathrm{local}(t)$, while the linear size of the \emph{largest} holes is of order $\varphi_\mathrm{global}(t)$. The choice \eqref{phidtDefinition} of $\varphi_d(t)$ is a fine tuning of the latter.
\noindent $\bullet$ For a typical point $x\in\mathbb{T}^d$, the event $\set{x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t]}$ in \eqref{BasicEvent} is unlikely to occur even when $E$ is small. However, given a compact set $E\subset\mathbb{R}^d$, the points $x\in\mathbb{T}^d$ for which $x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t]$ (i.e., the points that realize the event in \eqref{ExistsTranslateEvent}) are atypical, and we can ask whether the subset $x+\varphi_d(t)E$ is likely to form part of a considerably larger subset:
\begin{varquestion} \lbquestion{LargerSubset} Are the points $x\in\mathbb{T}^d$ for which $x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t]$ likely to satisfy $x+\varphi_d(t)E'\subset\mathbb{T}^d\setminus W[0,t]$ for some substantially larger set $E'\supset E$? \end{varquestion}
\noindent \refquestion{LargerSubset} aims to distinguish between the two qualitative pictures shown in \reffig{SparseVsDense}, which we call \emph{sparse} and \emph{dense}, respectively. We will show that in $d \geq 3$ the answer to \refquestion{LargerSubset} is no, i.e., the picture is dense as in part (b) of \reffig{SparseVsDense}. In \refsubsubsect{2d} below we will argue that in $d=2$ the answer to \refquestion{LargerSubset} is yes, i.e., the picture is sparse as in part (a) of \reffig{SparseVsDense}. This can already be seen from \reffig{d=2sim}.
\begin{figure}
\caption{The vicinity of $x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t]$. The circular region in part (a) can be enlarged substantially while remaining a subset of $\mathbb{T}^d\setminus W[0,t]$; the circular region in part (b) cannot.}
\end{figure}
In a similar spirit, we can ask about \emph{temporal} versus \emph{spatial} avoidance strategies:
\begin{varquestion} \lbquestion{Avoidance} For a given $x\in\mathbb{T}^d$, does the unlikely event $\set{x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t]}$ arise primarily because the Brownian motion spends an unusually small amount of time near $x$, or because the Brownian motion spends a typical amount of time near $x$ and simply happens to avoid the set $x+\varphi_d(t) E$? \end{varquestion}
Questions \ref{q:LargerSubset} and \ref{q:Avoidance}, though not equivalent, are interrelated: if the Brownian motion spends an unusually small amount of time near $x$, then it may be plausibly expected to fill the vicinity of $x$ less densely, and vice versa. We will show that in $d \geq 3$ the Brownian motion follows a spatial avoidance strategy (the second alternative in \refquestion{Avoidance}) and that, indeed, the Brownian motion is very likely to spend approximately the same amount of time around all points of $\mathbb{T}^d$. In \refsubsubsect{2d} below we will argue that in $d=2$ the first alternative in \refquestion{Avoidance} applies.
\noindent $\bullet$ The negative answer to \refquestion{LargerSubset} and the heuristic picture in \reffig{SparseVsDense}(b) suggest that regions of $\mathbb{T}^d$ where $W[0,t]$ is relatively dense nearly separate the large subsets $x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t]$ into disjoint components. Making sense of this heuristic is complicated by the fact that $\mathbb{T}^d\setminus W[0,t]$ is connected almost surely (see \refprop{Connected} below), so that all large subsets belong to the same connected component in $\mathbb{T}^d\setminus W[0,t]$.
\begin{varquestion} \lbquestion{Components} Can the approximate component structure of the large subsets of $\mathbb{T}^d\setminus W[0,t]$ be captured in a well-defined way? \end{varquestion}
\noindent We will provide a positive answer to \refquestion{Components} by enlarging the Brownian path $W[0,t]$ to a Wiener sausage $W_{\rho(t)}[0,t]$ of radius $\rho(t)=o(\varphi_d(t))$. Under suitable hypotheses on the \emph{enlargement radius} $\rho(t)$ (see \eqref{RadiusBound} below) we are able to control certain properties of all the connected components of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ simultaneously: for instance, we compute the asymptotics of their maximal possible volume and capacity, and of their minimal possible Dirichlet eigenvalue. The well-definedness of the approximate component structure lies in the fact that (subject to the hypothesis in \eqref{RadiusBound} below) these properties do not depend on the precise choice of $\rho(t)$.
The existence of a connected component of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ having a given property, for instance, having at least a specified volume, involves an uncountable union of the events in \eqref{ExistsTranslateEvent} as $E$ runs over a suitable class of connected sets. Central to our arguments is a \emph{discretization procedure} that reduces such an uncountable union to a suitably controlled finite union (see \refsect{LatticeAnimals} below).
\subsection{Outline} \lbsubsect{Outl}
Our main results concern the \emph{extremal geometry} of the set $\mathbb{T}^d\setminus W[0,t]$ as $t\to\infty$. Our key theorem is a large deviation estimate for the \emph{shape of the component with largest capacity} in $\mathbb{T}^d \setminus W_{\rho(t)}[0,t]$ as $t\to\infty$, where $W_{\rho(t)}[0,t]$ is the Wiener sausage of radius $\rho(t)$. From this we derive large deviation principles for the maximal volume and the principal Dirichlet eigenvalue of the components of $\mathbb{T}^d \setminus W_{\rho(t)}[0,t]$ as $t\to\infty$, and identify the number of disjoint translates of $\varphi_d(t)E$ in $\mathbb{T}^d \setminus W[0,t]$ as $t\to\infty$ for suitable sets $E$. We further derive large deviation principles for the largest inradius as $t\to\infty$ and the $\epsilon$-cover time as $\epsilon \downarrow 0$, extending laws of large numbers that were derived in Dembo, Peres and Rosen~\cite{DPR2003}. Along the way we settle the five questions raised in \refsubsect{Mot}.
It turns out that the costs of the various large deviations are \emph{asymmetric}: polynomial in one direction and stretched exponential in the other direction. Our main results are linked by the heuristic that sets of the form $x+\varphi_d(t)E$ appear according to a \emph{Poisson point process} with total intensity $t^{J_d(\Capa E)+o(1)}$, where $J_d$ is given by \eqref{JdkappaDefinition} below (see \reffig{JandI} below).
The remainder of the paper is organised as follows. In \refsubsect{DefNot} we give definitions and introduce notations. In Sections~\ref{ss:CompStruct} and \ref{ss:GeomStruct} we state our main results: four theorems, five corollaries and two propositions. In \refsubsect{Discussion} we discuss these results, state some conjectures, make the link with random interlacements, and reflect on what happens in $d=2$. \refsect{Preparations} contains various estimates on hitting times, hitting numbers and hitting probabilities for Brownian excursions between the boundaries of concentric balls, which serve as key ingredients in the proofs of the main results. \refsect{LatticeAnimals} looks at non-intersection probabilities for lattice animals, which serve as discrete approximations to continuum sets. The proofs of the main results are given in Sections~\ref{s:ProofTheorems}--\ref{s:CoroProofs}. \refappendix{ExcursionProofs} contains the proofs of two lemmas that are used along the way.
\subsection{Definitions and notations} \lbsubsect{DefNot}
\subsubsection{Torus}
The $d$-dimensional \emph{unit torus} $\mathbb{T}^d$ is the quotient space $\mathbb{R}^d/\mathbb{Z}^d$, with the canonical projection map $\pi_0\colon\,\mathbb{R}^d\to\mathbb{T}^d$. We consider $\mathbb{T}^d$ as a Riemannian manifold in such a way that $\pi_0$ is a local isometry. The space $\mathbb{R}^d$ acts on $\mathbb{T}^d$ by translation: given $x=\pi_0(y_0)\in\mathbb{T}^d, y_0,y\in\mathbb{R}^d$, we define $x+y=\pi_0(y_0+y)\in \mathbb{T}^d$. (Having made this definition, we will no longer need to refer to the projection map $\pi_0$, nor to the particular representation of the torus $\mathbb{T}^d$.) Given a set $E\subset\mathbb{R}^d$, a scale factor $\varphi>0$, and a point $x\in\mathbb{T}^d$ or $x\in\mathbb{R}^d$, we can now define \begin{equation} \label{AdditionOfSet} x+\varphi E=\set{x+\varphi y\colon\,y\in E}. \end{equation}
Euclidean distance in $\mathbb{R}^d$ and the induced distance in $\mathbb{T}^d$ are both denoted by $d(\cdot,\cdot)$. The distance from a point $x$ to a set $E$ is $d(x,E)=\inf\set{d(x,y) \colon\,y\in E}$. The closed ball of radius $r$ around a point $x$ is denoted by $B(x,r)$, for $x\in\mathbb{T}^d$ or $x\in\mathbb{R}^d$. We will only be concerned with the case $0<r<\tfrac{1}{2}$, so that $B(x,r)=x+B(0,r)$ for $x\in\mathbb{T}^d$ and the local isometry $B(0,r)\to B(x,r)$, $y\mapsto x+y$, is one-to-one.
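To make these conventions concrete, here is a minimal sketch of ours (not from the paper) of the translation action \eqref{AdditionOfSet} and the induced distance, representing a point of $\mathbb{T}^d$ by coordinates in $[0,1)^d$:

```python
def translate(x, phi, y):
    """The point x + phi*y on T^d = R^d / Z^d, with x in [0,1)^d and y in R^d."""
    return tuple((xi + phi * yi) % 1.0 for xi, yi in zip(x, y))

def torus_dist(x, y):
    """Induced distance on T^d: each coordinate difference is taken modulo 1,
    choosing the representative of smallest absolute value."""
    sq = 0.0
    for a, b in zip(x, y):
        delta = abs(a - b) % 1.0
        sq += min(delta, 1.0 - delta) ** 2
    return sq ** 0.5

# Points near opposite edges of a fundamental domain are close on the torus:
print(torus_dist((0.95, 0.5), (0.05, 0.5)))    # ~0.1, not 0.9
print(translate((0.9, 0.9), 0.5, (1.0, 1.0)))  # ~(0.4, 0.4): the translation wraps around
```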
\subsubsection{Brownian motion and Wiener sausage}
We write $\mathbb{P}_{x_0}$ for the law of the Brownian motion $W=(W(t))_{t\geq 0}$ on $\mathbb{T}^d$ started at $x_0\in\mathbb{T}^d$, i.e., the Markov process with generator $-\tfrac{1}{2}\Delta_{\mathbb{T}^d}$, where $\Delta_{\mathbb{T}^d}$ is the Laplace operator for $\mathbb{T}^d$. We can always take $W(t)=x_0 +\tilde{W}(t)$, where $\tilde{W}=(\tilde{W}(t))_{t \geq 0}$ is the standard Brownian motion on $\mathbb{R}^d$ started at $0$, so that $W$ is the projection onto $\mathbb{T}^d$ (via $\pi_0$) of a Brownian motion in $\mathbb{R}^d$. When $x_0\in\mathbb{R}^d$ we will also use $\mathbb{P}_{x_0}$ for the law of the Brownian motion on $\mathbb{R}^d$. When the initial point $x_0$ is irrelevant we will write $\mathbb{P}$ instead of $\mathbb{P}_{x_0}$. The image of the Brownian motion over the time interval $[a,b]$ is denoted by $W[a,b]=\set{W(s)\colon\, a\leq s\leq b}$.
For $r>0$ and $E\subset\mathbb{R}^d$ or $E\subset\mathbb{T}^d$, we write $E_r=\cup_{x\in E} B(x,r)$ and $E_{-r}=[\cup_{x\in E^c} B(x,r)]^c$. The \emph{Wiener sausage} of radius $r$ run for time $t$ is the $r$-enlargement of $W[0,t]$, i.e., $W_r[0,t]=\cup_{s \in [0,t]} B(W(s),r)$.
\subsubsection{Capacity} \lbsubsubsect{CapacityDefinition}
The (Newtonian) {\em capacity} of a Borel set $E\subset\mathbb{R}^d$, denoted by $\Capa E$, can be defined as \begin{equation} \label{CapacityDoubleInt} \Capa E = \left( \inf_{\mu\in\mathcal{P}(E)} \iint_{E\times E} G(x,y) \, d\mu(x) \, d\mu(y)\right)^{-1}, \end{equation} where the infimum runs over the set of probability measures $\mu$ on $E$, and \begin{equation} \label{Green} G(x,y)=\frac{\Gamma(d/2 -1)}{2 \pi^{d/2} d(x,y)^{d-2}} \end{equation} is the Green function associated with Brownian motion on $\mathbb{R}^d$ (throughout the paper we restrict to $d \geq 3$). In terms of the constant $\kappa_d$ from \eqref{kappadDefinition}, we can write $G(x,y)=1/\bigl(\kappa_d\,d(x,y)^{d-2}\bigr)$, and it emerges that $\kappa_d=\Capa B(0,1)$ is the capacity of the unit ball.\footnote{See Port and Stone~\cite[Section 3.1]{PS1978}. The alternative normalization $\Capa B(0,1)=1$ is also used, for instance, in Doob~\cite[Chapter 1.XIII]{D1984}. This corresponds to replacing $G(x,y)$ by $1/d(x,y)^{d-2}$ in (\ref{CapacityDoubleInt}--\ref{Green}).}
The function $E\mapsto\Capa E$ is non-decreasing in $E$ and satisfies the scaling relation \begin{equation} \label{CapacityScaling} \Capa (\varphi E) = \varphi^{d-2} \Capa E, \qquad \varphi>0, \end{equation} and the union bound \begin{equation} \label{CapacityOfUnion} \Capa (E\cup E') + \Capa (E\cap E') \leq \Capa E+\Capa E'. \end{equation}
Capacity has an interpretation in terms of Brownian hitting probabilities: \begin{equation} \label{CapacityAndHittingSimple} \lim_{d(x,0)\to\infty} d(x,0)^{d-2}\,\mathbb{P}_x\big(W\cointerval{0,\infty} \cap E\neq\varnothing\big) = \frac{\Capa E}{\kappa_d}, \qquad E\subset\mathbb{R}^d\text{ bounded Borel.} \end{equation} Thus, capacity measures how likely it is for a set to be hit by a Brownian motion that starts far away. We will make extensive use of asymptotic properties similar to \eqref{CapacityAndHittingSimple}.
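For balls the limit in \eqref{CapacityAndHittingSimple} is exact at every finite distance: a standard computation gives $\mathbb{P}_x\big(W\cointerval{0,\infty}\cap B(0,r)\neq\varnothing\big)=(r/d(x,0))^{d-2}$ whenever $d(x,0)\geq r$. The following sketch (ours, for illustration) checks this against the scaling relation \eqref{CapacityScaling}:

```python
from math import pi, gamma

def kappa(d):
    """kappa_d = cap B(0,1), the normalization constant of the Green function."""
    return 2 * pi ** (d / 2) / gamma(d / 2 - 1)

def cap_ball(d, r):
    """cap B(0,r) = kappa_d * r^(d-2), by the scaling relation cap(phi E) = phi^(d-2) cap E."""
    return kappa(d) * r ** (d - 2)

def hit_prob_ball(d, r, R):
    """Probability that Brownian motion in R^d (d >= 3), started at distance
    R >= r from the origin, ever hits B(0,r): the classical value (r/R)^(d-2)."""
    return (r / R) ** (d - 2)

# R^(d-2) * hitting probability equals cap B(0,r) / kappa_d at every R, not just in the limit.
for d in (3, 4, 5):
    for R in (2.0, 10.0, 1000.0):
        lhs = R ** (d - 2) * hit_prob_ball(d, 0.5, R)
        rhs = cap_ball(d, 0.5) / kappa(d)
        print(d, R, lhs, rhs)
```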
If a set $E$ is \emph{polar} -- i.e., with probability 1, $E$ is not hit by a Brownian motion started away from $E$ -- then $\Capa E=0$. For instance, any finite or countable union of $(d-2)$-dimensional subspaces has capacity zero.
\subsubsection{Sets}
The boundary of a set $E$ is denoted by $\partial E$, the interior by $\interior{E}$, and the closure by $\closure{E}$. We define \begin{equation} \mathcal{E}_c=\set{\text{$E\subset\mathbb{R}^d$ compact: $E$ and $\mathbb{R}^d \setminus E$ connected}} \end{equation} We will use these sets to describe the possible components of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$. We further define \begin{equation} \mathcal{E}^* = \set{\text{$E\subset\mathbb{R}^d$ compact: $\Capa E=\Capa(\interior{E})$}} \cup \set{\text{$E\subset\mathbb{R}^d$ bounded open: $\Capa E=\Capa(\closure{E})$}}. \end{equation} The condition $\Capa(\interior{E})=\Capa(\closure{E})$ in the definition of $\mathcal{E}^*$ is satisfied when every point of $\partial E$ is a regular point for $\interior{E}$, which in turn is satisfied when $E$ satisfies a cone condition at every point (see Port and Stone~\cite[Chapter~2, Proposition~3.3]{PS1978}). In particular, any finite union of cubes, or any $r$-enlargement $E_r$ with $r>0$ of a compact set $E$, belongs to $\mathcal{E}^*$.
\subsubsection{Maximal capacity of a component}
A central role will be played by the largest capacity $\kappa^*(t,\rho)$ for a component of $\mathbb{T}^d\setminus W_\rho[0,t]$, defined by \begin{equation} \kappa^*(t,\rho)=\sup\set{\Capa E\colon\, \text{$E\subset\mathbb{R}^d$ connected, $x+E\subset\mathbb{T}^d\setminus W_\rho[0,t]$ for some $x\in\mathbb{T}^d$}}. \end{equation} Note that by rescaling we have \begin{equation} \frac{\kappa^*(t,\rho)}{\varphi_d(t)^{d-2}} = \sup\set{\Capa E\colon\,\text{$E\subset\mathbb{R}^d$ connected, $x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W_\rho[0,t]$ for some $x\in\mathbb{T}^d$}}. \end{equation}
\subsection{Component structure} \lbsubsect{CompStruct}
We begin by describing the component structure of $\mathbb{T}^d\setminus W_\rho[0,t]$. In formulating the results below we will use the abbreviation (see \reffig{JandI}(a)) \begin{equation} \label{JdkappaDefinition} J_d(\kappa)=\frac{d}{d-2}\left(1-\frac{\kappa}{\kappa_d}\right), \qquad \kappa \geq 0. \end{equation}
\begin{figure}
\caption{(a) The function $\kappa \mapsto J_d(\kappa)$ in \eqref{JdkappaDefinition}. (b) The rate function $\kappa \mapsto I_d(\kappa)$ in \eqref{Idrf}.}
\end{figure}
Our first theorem quantifies the likelihood of finding sets of large capacity that do not intersect $W_\rho[0,t]$, for $\rho$ in a certain window between the local and the global spatial scales defined in \eqref{twoscales}.
\begin{theorem} \lbthm{CapacitiesInWc} Fix a positive function $t \mapsto \rho(t)$ satisfying \begin{equation} \label{RadiusBound} \lim_{t\to\infty} \frac{\rho(t)}{\varphi_d(t)} = 0, \qquad \lim_{t\to\infty} \frac{(\log t)^{1/d}\rho(t)}{\varphi_d(t)} =\infty. \end{equation} Then the family $\mathbb{P}(\kappa^*(t,\rho)/\varphi_d(t)^{d-2}\in\cdot)$, $t>1$, satisfies the LDP on $[0,\infty]$ with rate $\log t$ and rate function (see \reffig{JandI}(b)) \begin{equation} \label{Idrf} I_d(\kappa) = \begin{cases} -J_d(\kappa), & \kappa\geq \kappa_d,\\ \infty, & \kappa < \kappa_d, \end{cases} \end{equation} with the convention that $I_d(\infty)=\infty$. \end{theorem}
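The window \eqref{RadiusBound} for the enlargement radius is non-empty. For instance, the trial choice $\rho(t)=\varphi_d(t)\,(\log t)^{-1/(2d)}$ (our example, not one prescribed by the paper) satisfies both conditions, since $\rho(t)/\varphi_d(t)=(\log t)^{-1/(2d)}\to 0$ while $(\log t)^{1/d}\rho(t)/\varphi_d(t)=(\log t)^{1/(2d)}\to\infty$. Numerically:

```python
from math import log

def rho_over_phi(t, d):
    """rho(t)/phi_d(t) for the trial radius rho(t) = phi_d(t) * (log t)^(-1/(2d))."""
    return log(t) ** (-1.0 / (2 * d))

d = 3
for t in (1e3, 1e6, 1e12, 1e24):
    first = rho_over_phi(t, d)             # first condition: must tend to 0
    second = log(t) ** (1.0 / d) * first   # second condition: must tend to infinity
    print(t, first, second)
```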
The counterpart of \refthm{CapacitiesInWc} for small capacities is contained in the following two theorems, which show that components of small capacity are likely to exist and to be numerous. Let $\chi_\rho(t,\kappa)$ denote the number of components $C$ of $\mathbb{T}^d\setminus W_\rho[0,t]$ such that $C$ contains some ball of radius $\rho$ and has the form $C=x+\varphi_d(t)E$ for a connected open set $E$ with $\Capa E\geq\kappa$.
\begin{theorem} \lbthm{ComponentCounts} Fix a positive function $t\mapsto\rho(t)$ satisfying \eqref{RadiusBound}, and let $\kappa<\kappa_d$. Then \begin{equation} \lim_{t\to\infty}\frac{\log\chi_{\rho(t)}(t,\kappa)}{\log t}=J_d(\kappa) \qquad\text{in $\mathbb{P}$-probability.} \end{equation} \end{theorem}
\begin{theorem} \lbthm{NoTranslates} Fix a non-negative function $t\mapsto\rho(t)$ satisfying $\rho(t)=o(\varphi_d(t))$, and let $E\subset\mathbb{R}^d$ be compact with $\Capa E <\kappa_d$. Then \begin{equation} \log\mathbb{P}\left( \nexists\, x\in\mathbb{T}^d\colon\, x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W_{\rho(t)}[0,t] \right) \leq -t^{J_d(\Capa E)+o(1)}, \qquad t\to\infty. \end{equation} \end{theorem}
The next theorem identifies the shape of the components of $\mathbb{T}^d \setminus W_{\rho(t)}[0,t]$. For $E\subset E'$ a pair of nested compact connected subsets of $\mathbb{R}^d$, we say that a component $C$ of $\mathbb{T}^d\setminus W_\rho[0,t]$ satisfies condition \eqref{CtrhoEEprime} when \begin{equation*} \tag{$\mathcal{C}(t,\rho,E,E')$} \label{CtrhoEEprime} C=x+\varphi_d(t)U, \qquad x\in\mathbb{T}^d,\,E\subset U\subset E'. \end{equation*} Define $\chi_\rho(t,E,E')$ to be the number of components of $\mathbb{T}^d\setminus W_\rho[0,t]$ satisfying condition \eqref{CtrhoEEprime}, and define $F_\rho(t,E,E')$ to be the event \begin{equation} F_\rho(t,E,E')= \set{\parbox{275pt}{There exists a component $C=x+\varphi_d(t) U$ of $\mathbb{T}^d\setminus W_\rho[0,t]$ satisfying condition \eqref{CtrhoEEprime}, and any other component $C'=x'+\varphi_d(t) U'$ has $\Capa U'<\Capa U$.}} \end{equation} In words, $F_\rho(t,E,E')$ is the event that $\mathbb{T}^d\setminus W_\rho[0,t]$ contains a component sandwiched between $x+\varphi_d(t) E$ and $x+\varphi_d(t)E'$, and any other component has smaller capacity (when viewed as a subset of $\mathbb{R}^d$).
\begin{theorem} \lbthm{ShapeOfComponents} Fix a positive function $t\mapsto\rho(t)$ satisfying \eqref{RadiusBound}, let $E\in\mathcal{E}_c$, and let $\delta>0$. If $\Capa E\geq\kappa_d$, then \begin{equation} \lim_{t\to\infty}\frac{\log\mathbb{P}\bigl(F_{\rho(t)}(t,E,E_\delta)\bigr)}{\log t} = J_d(\Capa E) \quad (=-I_d(\Capa E)), \end{equation} while if $\Capa E<\kappa_d$, then \begin{equation} \lim_{t\to\infty}\frac{\log \chi_{\rho(t)}(t,E,E_\delta)}{\log t} = J_d(\Capa E) \qquad\text{in $\mathbb{P}$-probability.} \end{equation} \end{theorem}
Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents} yield the following corollary. For $E\subset\mathbb{R}^d$, let $\chi(t,E)$ denote the maximal number of disjoint translates $x+\varphi_d(t)E$ in $\mathbb{T}^d\setminus W[0,t]$.
\begin{corollary} \lbcoro{UnhitSet} Suppose that $E\in\mathcal{E}^*$. Then \begin{equation} \label{HittingSetProbSimple} \lim_{t\to\infty} \mathbb{P}\left(\exists\, x\in\mathbb{T}^d\colon\, x+\varphi_d(t) E\subset\mathbb{T}^d\setminus W[0,t]\right) = \begin{cases} 1,&\Capa E<\kappa_d,\\ 0,&\Capa E>\kappa_d. \end{cases} \end{equation} Furthermore, \begin{equation} \label{HittingLargeSetProb} \lim_{t\to\infty} \frac{\log\mathbb{P}(\exists\, x\in\mathbb{T}^d\colon\, x+\varphi_d(t) E\subset\mathbb{T}^d\setminus W[0,t])}{\log t}= J_d(\Capa E) \vee 0, \end{equation} and if $\Capa E<\kappa_d$, then \begin{equation} \label{DisjointTranslates} \lim_{t\to\infty} \frac{\log\chi(t,E)}{\log t} = J_d(\Capa E) \quad \text{in $\mathbb{P}$-probability.} \end{equation} \end{corollary}
\subsection{Geometric structure} \lbsubsect{GeomStruct}
Having described the components in terms of their capacities in \refsubsect{CompStruct}, we are ready to look at the geometric structure of our random set. Our first corollary concerns the maximal volume of a component of $\mathbb{T}^d\setminus W_\rho[0,t]$, which we denote by $V(t,\rho)$. Volume is taken w.r.t.\ the $d$-dimensional Lebesgue measure, and we write $V_d=\Vol B(0,1)$ for the volume of the $d$-dimensional unit ball.
\begin{corollary} \lbcoro{VolumeLDP} Subject to \eqref{RadiusBound}, the family $\mathbb{P}(V(t,\rho(t))/\varphi_d(t)^d\in\cdot)$, $t>1$, satisfies the LDP on $(0,\infty)$ with rate $\log t$ and rate function \begin{equation} I^{\rm volume}_d(v) = I_d\left( \kappa_d (v/V_d)^{(d-2)/d} \right) \! . \end{equation} Moreover, for $v<V_d$, \begin{equation} \label{SmallVolume} \log \mathbb{P}\big(V(t,\rho(t))/\varphi_d(t)^d < v\big) \leq -t^{J_d(\kappa_d(v/V_d)^{(d-2)/d})+o(1)}, \qquad t\to\infty. \end{equation} \end{corollary}
Our second corollary concerns $\lambda(t,\rho)=\lambda(\mathbb{T}^d\setminus W_\rho[0,t])$, the principal Dirichlet eigenvalue of $\mathbb{T}^d\setminus W_\rho[0,t]$, where by $\lambda(E)$ (for $E\subset\mathbb{T}^d$ or $E\subset\mathbb{R}^d$) we mean the principal eigenvalue of the operator $-\tfrac{1}{2}\Delta_E$ with Dirichlet boundary conditions on $\partial E$. We write $\lambda_d=\lambda(B(0,1))$ for the principal Dirichlet eigenvalue of the $d$-dimensional unit ball.
\begin{corollary} \lbcoro{EvalLDP} Subject to \eqref{RadiusBound}, the family $\mathbb{P}(\varphi_d(t)^2 \lambda(t,\rho(t)) \in\cdot)$, $t>1$, satisfies the LDP on $(0,\infty)$ with rate $\log t$ and rate function \begin{equation} I^{\rm Dirichlet}_d(\lambda) = I_d\left( \kappa_d (\lambda_d/\lambda)^{(d-2)/2} \right) \! . \end{equation} Moreover, for $\lambda> \lambda_d$, \begin{equation} \label{LargeEval} \log\mathbb{P}\big(\varphi_d(t)^2 \lambda(t,\rho(t)) \geq \lambda\big) \leq -t^{J_d(\kappa_d(\lambda_d/\lambda)^{(d-2)/2})+o(1)}, \qquad t\to\infty. \end{equation} \end{corollary}
Our last two corollaries concern the largest inradius of $\mathbb{T}^d\setminus W[0,t]$, \begin{equation} \rho_{\rm in}(t) = \sup_{x\in\mathbb{T}^d} d(x,W[0,t]) = \sup\set{\rho\geq 0\colon\, \mathbb{T}^d\setminus W_\rho[0,t] \neq \varnothing}, \qquad t>0, \end{equation} and the $\epsilon$-cover time, \begin{equation} \mathcal{C}_\epsilon =\sup_{x\in\mathbb{T}^d} \inf\set{\big. t\geq 0\colon\, d(x,W[0,t])\leq\epsilon} = \inf\set{\big. t\geq 0\colon\,\rho_{\rm in}(t)\leq\epsilon}, \qquad 0<\epsilon<1. \end{equation} For the latter we need the scaling function \begin{equation} \label{psidepsilonDefinition} \psi_d(\epsilon) = \frac{\epsilon^{-(d-2)} \log (1/\epsilon)}{\kappa_d}, \qquad 0<\epsilon<1. \end{equation}
\begin{corollary} \lbcoro{InradiusLDP} The family $\mathbb{P}(\rho_{\rm in}(t)/\varphi_d(t)\in\cdot\,)$, $t>1$, satisfies the LDP on $(0,\infty)$ with rate $\log t$ and rate function \begin{equation} I^{\rm inradius}_d(r) = I_d(\kappa_d\,r^{d-2}). \end{equation} Moreover, for $0<r<1$, \begin{equation} \label{SmallInradius} \log \mathbb{P}\big(\rho_{\rm in}(t)/\varphi_d(t) < r\big) \leq - t^{J_d(\kappa_d\,r^{d-2})+o(1)}, \qquad t\to\infty. \end{equation} \end{corollary}
\begin{corollary} \lbcoro{CoverTimeLDP} The family $\mathbb{P}(\mathcal{C}_\epsilon/\psi_d(\epsilon)\in\cdot\,)$, $0<\epsilon<1$, satisfies the LDP on $(0,\infty)$ with rate $\log(1/\epsilon)$ and rate function \begin{equation} I_d^{\rm cover}(u) = \begin{cases} u-d, &u\geq d,\\ \infty, &0<u< d. \end{cases} \end{equation} Moreover, for $0<u<d$, \begin{equation} \label{SmallCoverTime} \log \mathbb{P}\big(\mathcal{C}_\epsilon/\psi_d(\epsilon)<u\big) \leq -\epsilon^{-(d-u)+o(1)}, \qquad \epsilon \downarrow 0. \end{equation} \end{corollary}
\refcoro{CoverTimeLDP} is equivalent to \refcoro{InradiusLDP} because of the relation $\set{\rho_{\rm in}(t) > \epsilon} = \set{\mathcal{C}_\epsilon > t}$ and the asymptotics \begin{equation} \label{phipsiAsymptotics} \varphi_d(u\psi_d(\epsilon)) \sim \left( \frac{d}{u} \right)^{1/(d-2)} \epsilon, \quad \epsilon \downarrow 0,\,u>0, \qquad \psi_d(r\varphi_d(t)) \sim \frac{t}{d r^{d-2}}, \quad t\to\infty,\,r>0. \end{equation}
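As a sanity check, the second relation in \eqref{phipsiAsymptotics} can be verified by direct substitution into \eqref{psidepsilonDefinition}, if we assume for concreteness that $\varphi_d(t)^{d-2} \sim \frac{d}{d-2}\,\frac{\log t}{\kappa_d\, t}$ as $t\to\infty$ (the normalization consistent with the laws of large numbers $\lim_{t\to\infty}\rho_{\rm in}(t)/\varphi_d(t)=1$ and $\lim_{\epsilon\downarrow 0}\mathcal{C}_\epsilon/\psi_d(\epsilon)=d$ recalled below). Then $\log(1/(r\varphi_d(t)))\sim\frac{1}{d-2}\log t$ for fixed $r>0$, and \begin{equation} \psi_d(r\varphi_d(t)) = \frac{(r\varphi_d(t))^{-(d-2)}\,\log(1/(r\varphi_d(t)))}{\kappa_d} \sim \frac{1}{\kappa_d\, r^{d-2}}\cdot\frac{(d-2)\,\kappa_d\, t}{d\log t}\cdot\frac{\log t}{d-2} = \frac{t}{d\, r^{d-2}}. \end{equation}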
\subsection{Discussion} \lbsubsect{Discussion}
\subsubsection{Upward versus downward deviations and the role of \texorpdfstring{$J_d(\kappa)$}{J\_d(kappa)}} \lbsubsubsect{UpDownDiscussion}
\refthm{CapacitiesInWc} says that the region with largest capacity not intersecting the Wiener sausage of radius $\rho(t)$ lives on scale $\varphi_d(t)$, and that upward large deviations on this scale have a cost that decays {\em polynomially} in $t$. \refthm{ComponentCounts} identifies how many components there are with small capacity. This number grows {\em polynomially} in $t$. \refthm{NoTranslates} says that this number is extremely unlikely to be zero: the cost is {\em stretched exponential} in $t$. Theorem~\ref{t:ShapeOfComponents} completes the picture obtained from Theorems~\ref{t:CapacitiesInWc}--\ref{t:NoTranslates} by showing that components can approximate any shape in $\mathcal{E}_c$.
Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents} are linked by the heuristic that components of the form $x+\varphi_d(t)E$ appear according to a Poisson point process with total intensity $t^{J_d(\Capa E)+o(1)}$. When $\Capa E > \kappa_d$ we have $J_d(\Capa E)<0$, and the likelihood of even a single such component is $t^{-\abs{J_d(\Capa E)} +o(1)}$, as in \refcoro{UnhitSet}. When $\Capa E<\kappa_d$ we have $J_d(\Capa E)>0$, and a Poisson random variable $X$ of mean $t^{J_d(\Capa E)+o(1)}$ satisfies $X=t^{J_d(\Capa E)+o(1)}$ with high probability and $\mathbb{P}(X=0)= \exp[ -t^{J_d(\Capa E)+o(1)}]$. Based on this heuristic, we conjecture that the inequalities in \eqref{SmallVolume}, \eqref{LargeEval}, \eqref{SmallInradius} and \eqref{SmallCoverTime} are all equalities asymptotically.
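The elementary Poisson computations behind this heuristic are worth recording. If $X$ is Poisson with mean $\mu$, then \begin{equation} \mathbb{P}(X=0) = e^{-\mu}, \qquad \mathbb{P}(X\geq 1) = 1-e^{-\mu} \sim \mu \text{ as } \mu\downarrow 0, \qquad \mathbb{P}(X\geq a) \leq e^{-\mu}\left(\frac{e\mu}{a}\right)^{a}, \quad a>\mu, \end{equation} where the last estimate is the standard Chernoff bound for the Poisson distribution. Substituting $\mu = t^{J_d(\Capa E)+o(1)}$ reproduces the two regimes just described.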
\subsubsection{Components and the role of \texorpdfstring{$\rho(t)$}{rho(t)}} \lbsubsubsect{ComponentsDiscussion}
Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents} concern components of the form $x+\varphi_d(t)E$. We begin by remarking that, with high probability, all components have this form:
\begin{proposition} \lbprop{NoWrapping} Assume \eqref{RadiusBound}. Let ${\rm Wrap}(t,\rho)$ be the event that $\mathbb{T}^d\setminus W_\rho[0,t]$ has a component $C$ that, when considered as a Riemannian manifold with its intrinsic metric, is not the isometric image $x+E$ of some bounded subset $E$ of $\mathbb{R}^d$. Then \begin{equation} \lim_{t\to\infty}\frac{\log\mathbb{P}\left( {\rm Wrap}(t,\rho(t)) \right)}{\log t}
= -\infty. \end{equation} \end{proposition} \noindent Informally, such a component must ``wrap around'' the torus, so that the local isometry from $\mathbb{R}^d$ to $\mathbb{T}^d$ is not a global isometry. \refprop{NoWrapping} means that, apart from a negligible event, we may sensibly consider the components as subsets of $\mathbb{R}^d$ and discuss their capacities as defined in \eqref{CapacityDoubleInt}.
Collectively, Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents}, Corollaries~\ref{c:VolumeLDP}--\ref{c:EvalLDP} and \refprop{NoWrapping} show that $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ has a \emph{component structure}, with well-defined bounds on the capacities, volumes and principal Dirichlet eigenvalues of these components. By contrast, the choice $\rho(t)=0$ does not give a component structure at all: \begin{proposition} \lbprop{Connected} With probability $1$, the set $\mathbb{T}^d\setminus W[0,t]$ is path-connected, open and dense for every $t$, and the set $\mathbb{T}^d\setminus W\cointerval{0,\infty}$ is path-connected, locally path-connected and dense. \end{proposition}
The picture behind Propositions~\ref{p:NoWrapping}--\ref{p:Connected} is that the set $\mathbb{T}^d \setminus W[0,t]$ consists of ``lakes'' whose linear size is of order $\varphi_d(t)$, connected by narrow ``channels'' whose linear size is at most $\varphi_d(t)/(\log t)^{1/d}$. By inflating the Brownian motion to a Wiener sausage of radius $\rho(t)$ with (recall \eqref{twoscales} and \eqref{RadiusBound}) \begin{equation} \label{regime} \varphi_\mathrm{local}(t) \ll \varphi_d(t)/(\log t)^{1/d} \ll \rho(t) \ll \varphi_d(t) \asymp \varphi_\mathrm{global}(t), \end{equation} we effectively block off these channels, so that $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ consists of disjoint lakes.
\refprop{Connected} shows that some lower bound on $\rho(t)$ is necessary for the results of Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents}, Corollaries~\ref{c:VolumeLDP}--\ref{c:EvalLDP} and \refprop{NoWrapping} to hold.\footnote{The choice $\rho(t)=0$ makes the eigenvalue result in \refcoro{EvalLDP} false for $d\geq 4$, since the path of the Brownian motion itself is a polar set for $d\geq 4$. However, for $d = 3$ the eigenvalue $\lambda(t,\rho(t))$ is non-trivial even when $\rho(t)=0$, and we conjecture that \refcoro{EvalLDP} remains valid, i.e., the eigenvalue is determined primarily by the large lakes in $\mathbb{T}^d\setminus W[0,t]$, and not by the narrow channels connecting them. See the rough estimates in van den Berg, Bolthausen and den Hollander~\cite{vdBBdHpr}.} It would be of interest to know whether the condition $\rho(t)\gg \varphi_d(t)/(\log t)^{1/d}$ can be relaxed, i.e., whether the true size of the channels is of smaller order than $\varphi_d(t)/(\log t)^{1/d}$. By analogy with the random interlacements model (see \refsubsubsect{RandomInterlacements} below), the relevant regime to study would be $\varphi_\mathrm{local}(t) \asymp \varphi_d(t)/(\log t)^{1/(d-2)} \ll \rho(t) \ll \varphi_d(t)/(\log t)^{1/d}$, i.e., the missing part of \eqref{regime}.
\subsubsection{A comparison with random interlacements} \lbsubsubsect{RandomInterlacements}
The discrete analogue of $\mathbb{T}^d\setminus W[0,t]$ is the complement $\mathbb{T}_N^d\setminus S[0,n]$ of the path of a random walk $S=(S(n))_{n\in\mathbb{N}_0}$ on a large discrete torus $\mathbb{T}_N^d=(\mathbb{Z}/N\mathbb{Z})^d$. The spatial scale being fixed by discretization, it is necessary to take $N\to\infty$ and $n\to\infty$ simultaneously, and the choice $n=uN^d$ for $u\in(0,\infty)$ has been extensively studied: see for instance Benjamini and Sznitman~\cite{BenjSznit2008}, Sznitman~\cite{S2010} and Sidoravicius and Sznitman~\cite{SS2009}. Teixeira and Windisch~\cite{TeixWind2011} prove that $S[0,uN^d]$, seen \emph{locally} from a \emph{typical} point, converges in law as $N\to\infty$, namely, \begin{equation} \label{DiscreteTorusLocal} \lim_{N\to\infty} \mathbb{P}\left( (X_N+E) \cap S[0,uN^d] = \varnothing \right) = e^{-u\Capa_{\mathbb{Z}^d} E}, \qquad E\subset\mathbb{Z}^d\text{ finite}, \end{equation} where $X_N$ is drawn uniformly from $\mathbb{T}_N^d$, and $\Capa_{\mathbb{Z}^d} E$ is the discrete capacity. The right-hand side of \eqref{DiscreteTorusLocal} is the non-intersection probability \begin{equation} \mathbb{P}(E \cap \mathcal{I}^u = \varnothing) = e^{-u\Capa_{\mathbb{Z}^d} E} \end{equation} for the \emph{random interlacements} model with parameter $u$ introduced by Sznitman~\cite{S2010}. The set $\mathcal{I}^u\subset\mathbb{Z}^d$ can be constructed as the union of a certain Poisson point process of random walk paths, with an intensity measure proportional to the parameter $u$. The random interlacements model has a critical value $u_*\in(0,\infty)$ such that $\mathbb{Z}^d\setminus\mathcal{I}^u$ has an unbounded component a.s.\ when $u<u_*$ and has only bounded components a.s.\ when $u>u_*$.
The continuous analogue of \eqref{DiscreteTorusLocal} is the probability of the event in \eqref{BasicEvent} with the scaling factor $\varphi=\varphi_\mathrm{local}(t) = t^{-1/(d-2)}$ instead of $\varphi=\varphi_d(t) \asymp \varphi_\mathrm{global}(t)$. Our methods (see Propositions~\ref{p:ExcursionNumbers} and \ref{p:NSuccessfulProb} below) yield \begin{equation} \label{CtsTorusLocal} \lim_{t\to\infty} \mathbb{P}\left( (X+t^{-1/(d-2)}E) \cap W[0,t] = \varnothing \right) = e^{-\Capa E},\qquad E\subset\mathbb{R}^d\text{ compact}, \end{equation} for $X$ drawn uniformly from $\mathbb{T}^d$, which implies that the random set $\mathbb{T}^d\setminus W[0,t]$, seen locally from a typical point, converges in law (see Molchanov~\cite[Theorem 6.5]{M2005} for a discussion of convergence in law for random sets) to a random closed set $\mathcal{I}$ uniquely characterized by its non-intersection probabilities \begin{equation} \label{CtsInterlacements} \mathbb{P}(E \cap \mathcal{I} = \varnothing) = e^{-\Capa E}, \qquad E\subset\mathbb{R}^d\text{ compact}. \end{equation} As with the discrete random interlacements $\mathcal{I}^u$, the limiting random set $\mathcal{I}$ can be constructed from a Poisson point process of Brownian motion paths (see Sznitman~\cite[Section 2]{SznitmanBrInt}).
Because of scale invariance, no parameter is needed in \eqref{CtsTorusLocal}--\eqref{CtsInterlacements}. Indeed, the continuous model corresponds to a rescaled limit of the discrete model when $(N,u)$ is replaced by $(kN,u/k^{d-2})$ and $k\to\infty$. In this rescaling the parameter $u$ tends to zero, and $\mathbb{Z}^d\setminus\mathcal{I}^u$ loses its finite component structure, which is in accordance with the connectedness result \refprop{Connected}.
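The choice of rescaling can be traced back to the scaling of the discrete capacity. Assuming, consistently with the asymptotics of the discrete Green function, that for a fixed compact set $E$ and a lattice-dependent normalization constant $c_d$ one has, to leading order, \begin{equation} \Capa_{\mathbb{Z}^d}\big( (kE)\cap\mathbb{Z}^d \big) \sim c_d\, k^{d-2}\, \Capa E, \qquad k\to\infty, \end{equation} the exponent in \eqref{DiscreteTorusLocal} satisfies $\tfrac{u}{k^{d-2}}\,\Capa_{\mathbb{Z}^d}((kE)\cap\mathbb{Z}^d) \to c_d\, u\, \Capa E$: rescaling space by $k$ while dividing $u$ by $k^{d-2}$ leaves all non-intersection probabilities asymptotically unchanged, in accordance with \eqref{CtsInterlacements}.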
Inflating the Brownian motion to a Wiener sausage can be interpreted as reintroducing a kind of discretization. However, because of \eqref{RadiusBound}, the spatial scale $\rho(t)$ of this discretization is much larger than the spatial scale $\varphi_\mathrm{local}(t) = t^{-1/(d-2)}$ corresponding to \eqref{CtsTorusLocal} (cf.\ \refsubsubsect{ComponentsDiscussion}).
In the random interlacements model no sharp bound is currently known for the tail behaviour of the capacity of the component containing the origin. Recently, Popov and Teixeira~\cite{PopovTeixeiraSoftLocal} showed that for $d \geq 3$ the \emph{diameter} of the component containing $0$ in $\mathbb{Z}^d\setminus\mathcal{I}^u$ has an exponential tail for $u$ sufficiently large (with a logarithmic correction in $d=3$). In particular, the largest diameter of a component in a box of volume $N^d$, $d\geq 4$, can grow at most as $\log N$, and therefore the largest capacity of a component can grow at most as $(\log N)^{d-2}$.
When this last bound is translated heuristically to our context, the corresponding assertion is that the maximal capacity of a component is at most of order $(\log t)^{d-2}/t$. By \refthm{CapacitiesInWc}, this bound is very far from sharp for $d\geq 4$. It is tempting to conjecture that the \emph{capacity} of the component containing $0$ in $\mathbb{Z}^d\setminus\mathcal{I}^u$ also has an exponential tail for $u$ sufficiently large. The reasonableness of this conjecture is related to whether or not the condition on $\rho(t)$ in \eqref{RadiusBound} can be weakened to $\rho(t) \geq u\varphi_\mathrm{local}(t)$ for $u$ sufficiently large. Possibly the scaling behaviour of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ with $\rho(t) = u\varphi_\mathrm{local}(t)$ undergoes some sort of percolation transition at a critical value $\bar u_* \in (0,\infty)$.
\subsubsection{Corollaries of the capacity bounds}
\refcoro{UnhitSet} summarizes for which sets $E$ a subset $x+\varphi_d(t)E \subset \mathbb{T}^d\setminus W[0,t]$ can be expected to exist: according to Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents}, subsets of large capacity are unlikely to exist, whereas subsets of small capacity are numerous.
Corollaries~\ref{c:VolumeLDP}--\ref{c:EvalLDP} follow from Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents} with the help of the isoperimetric inequalities \begin{equation} \label{CapaVolEval} \frac{\Capa E}{\kappa_d} \geq \left( \frac{\Vol E}{V_d} \right)^{(d-2)/d} \geq \left( \frac{\lambda_d}{\lambda(E)} \right)^{(d-2)/2}, \qquad E\subset\mathbb{R}^d\text{ bounded open,} \end{equation} where we recall that $\kappa_d,V_d,\lambda_d$ are the capacity, volume and principal Dirichlet eigenvalue of $B(0,1)$. The first inequality is the Poincar\'e-Faber-Szeg\"o theorem, which says that among all sets with a given volume the ball has the smallest capacity. The second inequality is the Faber-Krahn theorem, which says that among all sets of a given volume the ball has the smallest Dirichlet eigenvalue.\footnote{See e.g.\ Bandle~\cite[Theorems II.2.3 and III.3.8]{B1980} or P\'olya and Szeg\"o~\cite[Section I.1.12]{PS1951}. These references consider the capacity only when $d=3$, but their methods apply for all $d\geq 3$.} Comparing with \refthm{CapacitiesInWc}, we see that the most efficient way to produce a component of a given large volume (or small principal Dirichlet eigenvalue) is for that component to be a ball.
Equality holds throughout \eqref{CapaVolEval} when $E$ is a ball, and the lower bounds in Corollaries~\ref{c:VolumeLDP}--\ref{c:EvalLDP}, together with Corollaries~\ref{c:InradiusLDP}--\ref{c:CoverTimeLDP}, follow by specializing Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents} to that case.
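The equality for balls can be checked directly from the elementary scaling relations $\Capa B(0,r)=\kappa_d\, r^{d-2}$, $\Vol B(0,r)=V_d\, r^{d}$ and $\lambda(B(0,r))=\lambda_d/r^{2}$: for $E=B(0,r)$ all three quantities in \eqref{CapaVolEval} coincide, \begin{equation} \frac{\Capa B(0,r)}{\kappa_d} = r^{d-2} = \left(\frac{\Vol B(0,r)}{V_d}\right)^{(d-2)/d} = \left(\frac{\lambda_d}{\lambda(B(0,r))}\right)^{(d-2)/2}. \end{equation}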
The large deviation principles in \refthm{CapacitiesInWc} and Corollaries~\ref{c:VolumeLDP}--\ref{c:CoverTimeLDP} each imply a weak law of large numbers, e.g.\ $\lim_{t\to\infty} \kappa^*(t,\rho(t))/\varphi_d(t)^{d-2}=1$ in $\mathbb{P}$-probability. The weak laws of large numbers implied by Corollaries~\ref{c:InradiusLDP}--\ref{c:CoverTimeLDP} were proved in Dembo, Peres and Rosen~\cite{DPR2003} in the stronger form $\lim_{t\to\infty} \rho_{\rm in}(t)/\varphi_d(t)=1$ and $\lim_{\epsilon\downarrow 0} \mathcal{C}_\epsilon/\psi_d(\epsilon)=d$ $\mathbb{P}$-a.s. The $L^1$-version of this convergence is proved in van den Berg, Bolthausen and den Hollander~\cite{vdBBdHpr}. Note that these forms of convergence are not equivalent: for instance, a.s.\ convergence does not follow from Corollaries~\ref{c:InradiusLDP}--\ref{c:CoverTimeLDP}, since the sum $\sum_{t\in\mathbb{N}} \exp[-I_d(\kappa)\log t]$ fails to converge when $I_d(\kappa)$ is small.
\subsubsection{The maximal diameter of a component} \lbsubsubsect{DiameterDiscussion}
There is no analogue of \refcoro{VolumeLDP} for the maximal diameter instead of the maximal volume. The capacity and the diameter are related by $\Capa E \leq \kappa_d (\diam E)^{d-2}$. However, there is no inequality in the reverse direction: a set of fixed capacity can have an arbitrarily large diameter. It turns out that the maximal diameter of the components of $\mathbb{T}^d \setminus W_{\rho(t)}[0,t]$ is of larger order than $\varphi_d(t)$. More precisely, suppose that $\rho(t)=o(\varphi_d(t))$, and let $D(t,\rho(t))$ denote the largest diameter of a component of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$. Then $\lim_{t\to\infty} D(t,\rho(t))/\varphi_d(t)=\infty$ in $\mathbb{P}$-probability. Indeed, choose a compact connected set $E$ of zero capacity and large diameter, say $E=[0,L]\times \set{0}^{d-1}$ with $L$ large. Then, by \refthm{NoTranslates}, $\mathbb{T}^d \setminus W_{\rho(t)}[0,t]$ has a component containing $x+\varphi_d(t) E$ for some $x$ with high probability. See also the discussion at the end of \refsubsubsect{RandomInterlacements} above.
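The capacity--diameter inequality quoted above is a direct consequence of the monotonicity and translation invariance of capacity: picking any $x\in E$, every point of $E$ lies within distance $\diam E$ of $x$, so \begin{equation} E \subset \overline{B(x,\diam E)} \quad\Longrightarrow\quad \Capa E \leq \Capa\,\overline{B(x,\diam E)} = \kappa_d\,(\diam E)^{d-2}. \end{equation}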
\subsubsection{The second-largest component}
The component of second-largest capacity (or second-largest volume, principal Dirichlet eigenvalue, or inradius) has a different large deviation behaviour, due to the fact that $E\mapsto\Capa E$ is not additive. Indeed, typically $\Capa(E^{(1)}\cup E^{(2)}) <\Capa(E^{(1)})+\Capa(E^{(2)})$, even for disjoint sets $E^{(1)}, E^{(2)}$. In the case of concentric spheres, $\Capa(\partial B(0,r_1)\cup\partial B(0,r_2))=\max\set{\Capa (\partial B(0,r_1)), \Capa(\partial B(0,r_2))}$. It follows that the most efficient way to produce two large but disjoint components is to have them almost touching.
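The identity for concentric spheres can be seen as follows. Write $R=\max\set{r_1,r_2}$. By path continuity, a Brownian motion started outside $\overline{B(0,R)}$ hits $\overline{B(0,R)}$ if and only if it hits $\partial B(0,R)$, so $\Capa\,\overline{B(0,R)} = \Capa\,\partial B(0,R) = \kappa_d R^{d-2}$. Monotonicity of capacity then gives \begin{equation} \kappa_d R^{d-2} = \Capa\,\partial B(0,R) \leq \Capa\big( \partial B(0,r_1)\cup\partial B(0,r_2) \big) \leq \Capa\,\overline{B(0,R)} = \kappa_d R^{d-2}. \end{equation}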
\subsubsection{Answers to Questions \ref{q:Geometry}--\ref{q:Components}}
The results in this paper give a partial answer to \refquestion{Geometry}. \refquestion{ExistsTranslate} is answered by \refcoro{UnhitSet} subject to $E\in\mathcal{E}^*$, $\Capa E\neq\kappa_d$ (see also \refsect{LatticeAnimals} for results that are simultaneous over a certain class of sets $E$). The resolution to \refquestion{LargerSubset}, namely, the fact that the dense picture in \reffig{SparseVsDense}(b) applies, is provided by \refcoro{UnhitSet}. If $E\subset E'$ with $\Capa E'\geq\Capa E+\delta$, $\delta>0$, and $E,E'\in\mathcal{E}^*$, then, compared to subsets of the form $x+\varphi_d(t)E$, subsets of the form $x+\varphi_d(t)E'$ are much less numerous (when $\Capa E<\kappa_d$) or much less probable (when $\Capa E\geq\kappa_d$). Moreover, if \eqref{RadiusBound} holds, then Theorems~\ref{t:CapacitiesInWc}--\ref{t:ComponentCounts} answer \refquestion{LargerSubset} \emph{simultaneously} over all possible sets $E'$. The answer to \refquestion{Avoidance}, namely, that the Brownian motion follows a spatial avoidance strategy, will follow from \refprop{ExcursionNumbers} below. Finally, Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents}, Corollaries~\ref{c:VolumeLDP}--\ref{c:EvalLDP} and \refprop{NoWrapping} provide the answer to \refquestion{Components}.
\subsubsection{Two dimensions} \lbsubsubsect{2d}
It remains a challenge to extend the results in the present paper to $d=2$ (see \reffig{d=2sim}). A law of large numbers for the $\epsilon$-cover time is derived in Dembo, Peres, Rosen and Zeitouni~\cite{DPRZ2004}: \begin{equation}\label{CoverTimeLLN2d} \lim_{\epsilon\downarrow 0} \frac{\mathcal{C}_\epsilon}{\psi_2(\epsilon)}=2 \quad \text{ a.s.}, \qquad \psi_2(\epsilon)=\frac{[\log(1/\epsilon)]^2}{\pi}. \end{equation} However, the relation $\psi_2(\epsilon(t))\sim\psi_2(\tilde{\epsilon}(t))$, where $\epsilon(t),\tilde{\epsilon}(t)\downarrow 0$, no longer implies $\epsilon(t)\sim\tilde{\epsilon}(t)$: cf.\ \eqref{phipsiAsymptotics}. Hence the identity $\set{\rho_{\rm in}(t) > \epsilon} = \set{\mathcal{C}_\epsilon > t}$ does not lead to a law of large numbers for the largest inradius $\rho_{\rm in}(t)$ itself, but only for its logarithm $\log \rho_{\rm in}(t)$: \begin{equation}\label{Inradius2d} \rho_{\rm in}(t) = e^{-\sqrt{\pi t/2} + o(\sqrt{t})}, \quad \frac{\log \rho_{\rm in}(t)}{\sqrt{t}} \to -\sqrt{\pi/2}, \qquad t\to\infty. \end{equation} In order to give a detailed geometric description, the error term $o(\sqrt{t})$ in \eqref{Inradius2d} would need to be controlled up to order $O(1)$. Rough asymptotics for the logarithm of the average principal Dirichlet eigenvalue are conjectured in van den Berg, Bolthausen and den Hollander~\cite{vdBBdHpr}.
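To illustrate why $\psi_2$ determines $\rho_{\rm in}(t)$ only on a logarithmic scale: for instance, $\epsilon(t)=e^{-\sqrt{\pi t/2}}$ and $\tilde{\epsilon}(t)=2e^{-\sqrt{\pi t/2}}$ satisfy $\psi_2(\epsilon(t))\sim\psi_2(\tilde{\epsilon}(t))$ although $\tilde{\epsilon}(t)/\epsilon(t)=2$. The first-order asymptotics in \eqref{Inradius2d} follow by inverting \eqref{CoverTimeLLN2d} at $\epsilon=\rho_{\rm in}(t)$: \begin{equation} t \sim 2\,\psi_2(\rho_{\rm in}(t)) = \frac{2\,[\log(1/\rho_{\rm in}(t))]^{2}}{\pi} \quad\Longrightarrow\quad \log(1/\rho_{\rm in}(t)) \sim -\log\rho_{\rm in}(t) \sim \sqrt{\pi t/2}. \end{equation}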
In contrast to $d\geq3$, the large subsets of $\mathbb{T}^2 \setminus W[0,t]$ are expected to arise because of a \emph{temporal} avoidance strategy and to resemble the \emph{sparse} picture of \reffig{SparseVsDense}(a) (see Questions~\ref{q:LargerSubset}--\ref{q:Avoidance}). Furthermore, the Poisson point process heuristic, valid for $d \geq 3$ as explained in \refsubsubsect{UpDownDiscussion}, fails in $d=2$. The components of $\mathbb{T}^2 \setminus W[0,t]$ are expected to have a hierarchical structure, with long-range spatial correlations.
\section{Brownian excursions} \lbsect{Preparations}
In this section we list a few properties of Brownian excursions that will be needed as we go along. \refsubsect{Excursions} looks at the times and the numbers of excursions between the boundaries of two concentric balls, \refsubsect{BMCapa} estimates the hitting probabilities of these excursions in terms of capacity, while \refsubsect{ContCapa} collects a few elementary properties of capacity.
\subsection{Counting excursions between balls} \lbsubsect{Excursions}
$\bullet$ Excursion times. Let $x\in\mathbb{T}^d$ and $0<r<R<\tfrac{1}{2}$. Regard these values as fixed for the moment. Set $T_0=\inf\set{t\geq 0\colon\,W(t)\in\partial B(x,R)}$ and, for $i\in\mathbb{N}$, define recursively the hitting times (see \reffig{ExcTim}) \begin{equation} \label{hittimedef} \begin{aligned} T'_i &= \inf\set{t\geq T_{i-1}\colon\,W(t)\in\partial B(x,r)},\\ T_i &= \inf\set{t\geq T'_i\colon\,W(t)\in\partial B(x,R)}. \end{aligned} \end{equation} We call $W[T'_i,T_i]$ the \emph{$i^{\rm th}$ excursion from $\partial B(x,r)$ to $\partial B(x,R)$}, and write $\xi'_i(x)=W(T'_i)$, $\xi_i(x)=W(T_i)$ for its starting and ending points.\footnote{If the starting point $x_0$ lies inside $B(x,R)$, then the Brownian motion may travel from $\partial B(x,r)$ to $\partial B(x,R)$ before time $T_0$. To simplify the application of Dembo, Peres and Rosen~\cite[Lemma 2.4]{DPR2003}, we do not call this an excursion from $\partial B(x,r)$ to $\partial B(x,R)$.}
Set \begin{equation} \begin{aligned} &\tau_0(x,r,R)=\tau'_0(x,r,R)=T_0(x),\\ &\tau_i(x,r,R)=T_i-T_{i-1},\,\tau'_i(x,r,R)=T'_i-T_{i-1}, \quad i\in\mathbb{N}. \end{aligned} \end{equation} Thus, $\tau_i(x,r,R)$ is the duration of the $i^{\rm th}$ excursion from $\partial B(x,R)$ to itself via $\partial B(x,r)$, while $\tau'_i(x,r,R)<\tau_i(x,r,R)$ is the duration of the $i^{\rm th}$ excursion from $\partial B(x,R)$ to $\partial B(x,r)$.
\begin{figure}
\caption{Hittings that define the times $T_i$, $i\in\mathbb{N}_0$, and $T'_i$, $i\in\mathbb{N}$. The open circles indicate the locations of the starting and ending points $\xi'_i(x)=W(T'_i)$, $\xi_i(x)=W(T_i)$ of the excursions.}
\end{figure}
\noindent (All the variables $T_i,T'_i,\xi_i,\xi'_i,\tau_i,\tau'_i$ depend on all the parameters $x,r,R$. Nevertheless, in our notation we only indicate some of these dependencies.)
\noindent $\bullet$ Excursion numbers. Define \begin{align} N(x,t,r,R) &= \max\set{i\in\mathbb{N}_0\colon\,T_i\leq t} = \max\set{j\in\mathbb{N}_0\colon\,\sum_{i=0}^j \tau_i(x,r,R) \leq t},\\ N'(x,t,r,R) &= \max\set{j\in\mathbb{N}_0\colon\,\sum_{i=0}^j \tau'_i(x,r,R) \leq t}. \end{align} Thus, $N(x,t,r,R)$ is the number of completed excursions from $\partial B(x,r)$ to $\partial B(x,R)$ by time $t$, while $N'(x,t,r,R)$ is the number of (necessarily completed) excursions when the total time spent \emph{not} making an excursion reaches $t$.
As we will see in \refprop{ExcursionNumbers} below, $N(x,t,r,R)$ and $N'(x,t,r,R)$ have very similar scaling behaviour for $t\to\infty$ and $r\ll R\ll 1$. Indeed, the times $\tau_i(x,r,R)$ and $\tau'_i(x,r,R)$ are typically large (since the Brownian motion typically visits the bulk of $\mathbb{T}^d$ many times before travelling from $\partial B(x,R)$ to $\partial B(x,r)$), whereas $\tau_i(x,r,R)-\tau'_i(x,r,R)=T_i(x)-T'_i(x)$ scales as $R^2$. The advantage of $N'(x,t,r,R)$ is that it is independent of non-intersection events within $B(x,r)$ given the starting and ending points $\xi'_i(x), \xi_i(x)$ of the excursions.
Define \begin{equation} \label{NdDefinition} N_d(t,r,R)= \frac{\kappa_d t}{r^{-(d-2)}-R^{-(d-2)}}. \end{equation} The following proposition shows that $N_d(t,r,R)$ represents the typical size for the random variables $N(x,t,r,R)$ and $N'(x,t,r,R)$.
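The form of \eqref{NdDefinition} has a simple interpretation: by \reflemma{ExcursionTimes} below, the asymptotic mean duration of a single excursion from $\partial B(x,R)$ to itself via $\partial B(x,r)$ is $(r^{-(d-2)}-R^{-(d-2)})/\kappa_d$, and $N_d(t,r,R)$ is simply $t$ divided by this mean duration. The $(r,R)$-dependence reflects the classical hitting probability for Brownian motion in $\mathbb{R}^d$, $d\geq 3$: for $r\leq\abs{y}\leq R$, \begin{equation} \mathbb{P}_y\big( W \text{ hits } \partial B(0,r) \text{ before } \partial B(0,R) \big) = \frac{\abs{y}^{-(d-2)}-R^{-(d-2)}}{r^{-(d-2)}-R^{-(d-2)}}, \end{equation} which follows from the fact that $y\mapsto\abs{y}^{-(d-2)}$ is harmonic away from the origin.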
\begin{proposition} \lbprop{ExcursionNumbers} For any $\delta \in (0,1)$ there is a $c=c(\delta)>0$ such that, uniformly in $x,x_0\in\mathbb{T}^d$, $t>1$ and $0<r^{1-\delta}\leq R\leq c$, \begin{align} \mathbb{P}_{x_0} \! \left( \big. N(x,t,r,R) \geq (1+\delta) N_d(t,r,R) \right) &\leq e^{-c N_d(t,r,R)}, \label{NUpwardBound} \\ \mathbb{P}_{x_0} \! \left( \big. N'(x,t,r,R) \geq (1+\delta) N_d(t,r,R) \right) &\leq e^{-c N_d(t,r,R)}, \label{NprimeUpwardBound} \\ \mathbb{P}_{x_0} \! \left( \big. N(x,t,r,R) \leq (1-\delta) N_d(t,r,R) \right) &\leq e^{-c N_d(t,r,R)}. \label{NDownwardBound} \end{align} \end{proposition}
\begin{proof} The result follows from a lemma in Dembo, Peres and Rosen~\cite{DPR2003}, which we reformulate in our notation. (Note that the constant $\kappa_d$ defined by \eqref{kappadDefinition} corresponds to the quantity $1/\kappa_{\mathbb{T}^d}$ from~\cite[page 2]{DPR2003} rather than $\kappa_{\mathbb{T}^d}$.)
\begin{lemma}[{\rm \cite[Lemma 2.4]{DPR2003}}] \lblemma{ExcursionTimes} There is a constant $\eta>0$ such that if $N\geq \eta^{-1}$, $0<\delta<\delta_0<\eta$ and $0<2r\leq R<R_0(\delta)$, then for some $c=c(r,R)>0$ and uniformly in $x,x_0\in\mathbb{T}^d$, \begin{equation} \mathbb{P}_{x_0} \! \left( 1-\delta \leq \frac{\kappa_d}{N(r^{-(d-2)}-R^{-(d-2)})} \sum_{i=0}^N \tau_i(x,r,R) \leq 1+\delta \right) \geq 1-e^{-c\delta^2 N}. \end{equation} Moreover, $c$ can be chosen to depend only on $\delta_0$ as soon as $R>r^{1-\delta_0}$. The same result holds when $\tau_i(x,r,R)$ is replaced by $\tau'_i(x,r,R)$. \end{lemma} (The same result for $\tau'_i$ is not included in \cite{DPR2003}, but follows from the estimates in that paper. Indeed, $\tau_i-\tau'_i$ is shown to be an error term.)
To prove \refprop{ExcursionNumbers}, we begin with \eqref{NDownwardBound}. Fix $\delta>0$. We may assume without loss of generality that $\delta<\tfrac{1}{2}$ and $1/(1-\tfrac{1}{2}\delta)<1+\tfrac{2}{3}\delta<1+\eta$. Set $N=\floor{(1-\delta)N_d(t,r,R)} +1$. Since $N/N_d(t,r,R) \to 1-\delta$ as $N_d(t,r,R)\to\infty$, we can choose $r$ small enough so that $\tfrac{1}{2}N_d(t,r,R)\leq N\leq (1-\tfrac{1}{2}\delta) N_d(t,r,R)$ and $N\geq\eta^{-1}$, uniformly in $R$ and $t>1$. We have \begin{equation} \big\{N(x,t,r,R) \leq (1-\delta) N_d(t,r,R)\big\} = \set{N(x,t,r,R)<N} = \set{T_N\geq t}. \end{equation} Since $T_N=\sum_{i=0}^N \tau_i(x,r,R)$, it follows that \begin{align} \mathbb{P}_{x_0} \! \big( N(x,t,r,R)\leq (1-\delta) N_d(t,r,R) \big) &= \mathbb{P}_{x_0} \! \left( \sum_{i=0}^N \tau_i(x,r,R) \geq t \right) = \mathbb{P}_{x_0} \! \left( \frac{\kappa_d\sum_{i=0}^N \tau_i(x,r,R)}{N(r^{-(d-2)}-R^{-(d-2)})} \geq \frac{N_d(t,r,R)}{N} \right) \notag\\ &\leq \mathbb{P}_{x_0} \! \left( \frac{\kappa_d\sum_{i=0}^N \tau_i(x,r,R)}{N(r^{-(d-2)}-R^{-(d-2)})} \geq \frac{1}{1-\tfrac{1}{2}\delta} \right) \! . \end{align} Hence \eqref{NDownwardBound} follows from \reflemma{ExcursionTimes} with $\delta$ and $\delta_0$ replaced by $\tfrac{1}{2}\delta/(1-\tfrac{1}{2}\delta)$ and $\tfrac{2}{3}\delta$, respectively, with the constant $c$ in \refprop{ExcursionNumbers} chosen small enough so that $2r\leq R<R_0 [\tfrac{1}{2}\delta/(1-\tfrac{1}{2}\delta)]$.
The proof of \eqref{NprimeUpwardBound} is similar. Let $\delta>0$ be such that $\tfrac{1}{2}\delta/(1+\tfrac{1}{2}\delta)<\eta$ and set $N'=\ceiling{(1+\delta)N_d(t,r,R)}$. As before, we have \begin{equation} \mathbb{P}_{x_0}\bigl(N'(x,t,r,R)\geq (1+\delta) N_d(t,r,R)\bigr) \leq \mathbb{P}_{x_0}\Bigl(\frac{\kappa_d\sum_{i=0}^{N'}\tau'_i(x,r,R)}{N'(r^{-(d-2)}-R^{-(d-2)})} \leq \frac{1}{1+\tfrac12\delta} \Bigr) \end{equation} and we can apply the version of \reflemma{ExcursionTimes} with $\tau'_i(x,r,R)$ instead of $\tau_i(x,r,R)$ and $\delta$ replaced by $\tfrac{1}{2}\delta/(1+\tfrac{1}{2}\delta)$.
Finally, because $N(x,t,r,R) \leq N'(x,t,r,R)$, \eqref{NUpwardBound} follows from \eqref{NprimeUpwardBound}. \qed\end{proof}
\refprop{ExcursionNumbers} forms the link between the global structure of $\mathbb{T}^d$, notably the fact that a Brownian motion on $\mathbb{T}^d$ has a finite mean return time to a small ball, and the excursions of $W$ within small balls, during which $W$ cannot be distinguished from a Brownian motion on all of $\mathbb{R}^d$.
\subsection{Hitting sets by excursions} \lbsubsect{BMCapa}
The concentration inequalities in \refprop{ExcursionNumbers} will allow us to treat the number of excursions as deterministic. This observation motivates the following definition.
\begin{definition} \lbdefn{NSuccessful} Let $0<r<R<\tfrac{1}{2}$, $\varphi>0$ and $N\in\mathbb{N}$. A pair $(x,E)$ with $x\in\mathbb{T}^d$, $E\subset \mathbb{R}^d$ Borel, will be called \emph{$(N,\varphi,r,R)$-successful} if none of the first $N$ excursions of $W$ from $\partial B(x,r)$ to $\partial B(x,R)$ hit $x+\varphi E$. \end{definition}
\begin{proposition} \lbprop{NSuccessfulProb} Let $0<\epsilon<r<R<\tfrac{1}{2}$. Then, uniformly in $\varphi>0$, $x_0,x\in\mathbb{T}^d$ and $E\subset\mathbb{R}^d$ a Borel set with $\varphi E\subset B(0,\epsilon)$, and uniformly in $(\xi'_i(x),\xi_i(x))_{i=1}^N$, \begin{equation} \label{NSuccessfulProbFormula} \begin{aligned} &\mathbb{P}_{x_0} \! \condparenthesesreversed{ (x,E) \text{\rm{ is $(N,\varphi,r,R)$-successful}} }{(\xi'_i(x),\xi_i(x))_{i=1}^N} \\ &\quad = \exp\left[ -N \left( \frac{\varphi}{r} \right)^{d-2} \frac{\Capa E }{\kappa_d} [1+o(1)] \right], \qquad r/\epsilon, R/r \to\infty. \end{aligned} \end{equation} \end{proposition}
\noindent Since the error term is uniform in $(\xi'_i(x),\xi_i(x))_{i=1}^N$, \refprop{NSuccessfulProb} also applies to the unconditional probability $\mathbb{P}_{x_0}( (x,E)\text{\rm{ is $(N,\varphi,r,R)$-successful}} )$.
To prove \refprop{NSuccessfulProb} we need the following lemma for the hitting probability of a single excursion given its starting and ending points. For $\xi'\in\partial B(x,r)$, $\xi\in\partial B(x,R)$, write $\mathbb{P}_{\xi',\xi}$ for the law of an excursion $W[0,\zeta_R]$, $\zeta_R=\inf\set{t\geq 0\colon\, d(x,W(t))\geq R}$, from $\partial B(x,r)$ to $\partial B(x,R)$, started at $\xi'$ and conditioned to end at $\xi$.
\begin{lemma} \lblemma{CapacityAndHittingDistant} Let $0<\epsilon<r<R<\tfrac{1}{2}$. Then, uniformly in $x\in\mathbb{T}^d$, $\xi'\in\partial B(x,r), \xi\in\partial B(x,R)$ and $E$ a Borel set with $E\subset B(0,\epsilon)$, \begin{equation} \mathbb{P}_{\xi',\xi}((x+E)\cap W[0,\zeta_R]\neq\varnothing) = \frac{\Capa E}{\kappa_d \, r^{d-2}}\,[1+o(1)], \qquad r/\epsilon, R/r\to\infty. \end{equation} \end{lemma}
\reflemma{CapacityAndHittingDistant} is a more elaborate version of \eqref{CapacityAndHittingSimple}: it states that the asymptotics of \eqref{CapacityAndHittingSimple} remain valid when we stop the Brownian motion upon exiting a sufficiently distant ball, and hold conditionally and uniformly, provided the balls and the set are well separated. In the proof we use the relation \begin{equation} \label{CapacityAndHittingUniform} \int_{\partial B(0,r)} \mathbb{P}_x(E\cap W\cointerval{0,\infty}\neq\varnothing) \,d\sigma_r(x) = \frac{\Capa E}{\kappa_d \, r^{d-2}}, \qquad \text{$E$ a Borel subset of $B(0,r)$}, \end{equation} where $\sigma_r$ denotes the uniform measure on $\partial B(0,r)$. Equation \eqref{CapacityAndHittingUniform} becomes an identity as soon as $B(0,r)$ contains $E$, and as such it is a more precise version of \eqref{CapacityAndHittingSimple}: see Port and Stone~\cite[Chapter 3, Theorem 1.10]{PS1978} and surrounding material.
We defer the proof of \reflemma{CapacityAndHittingDistant} to \refsubsect{CapaHittingDistantProof}. We can now prove \refprop{NSuccessfulProb}.
\begin{proof} Conditional on their starting and ending points $(\xi'_i(x),\xi_i(x))_{i=1}^N$, the successive excursions from $\partial B(x,r)$ to $\partial B(x,R)$ are independent with laws $\mathbb{P}_{\xi'_i(x),\xi_i(x)}$. Applying \reflemma{CapacityAndHittingDistant}, we have \begin{align} &\mathbb{P}_{x_0} \! \condparenthesesreversed{(x,E)\text{ is $(N,\varphi,r,R)$-successful}} {(\xi'_i(x),\xi_i(x))_{i=1}^N}\notag\\ &\quad = \prod_{i=1}^N \mathbb{P}_{\xi'_i(x),\xi_i(x)}((x+\varphi E)\cap W[0,\zeta_R] = \varnothing) = \left( 1-\frac{\Capa(\varphi E)}{\kappa_d \, r^{d-2}}\,[1+o(1)] \right)^N \! . \label{NSuccessfulProduct} \end{align} Since $\Capa(\varphi E)\leq\kappa_d\,\epsilon^{d-2} = o(r^{d-2})$ as $r/\epsilon\to\infty$, we can rewrite the right-hand side of \eqref{NSuccessfulProduct} as \begin{equation} \exp\left[-N\frac{\Capa(\varphi E)}{\kappa_d\,r^{d-2}}\,[1+o(1)] \right], \end{equation} so that the scaling relation in \eqref{CapacityScaling} implies the claim. \qed\end{proof}
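For concreteness, the final step can be spelled out; here we assume that the scaling relation \eqref{CapacityScaling} takes the standard form $\Capa(\varphi E)=\varphi^{d-2}\Capa E$:

```latex
\begin{equation*}
\exp\left[-N\frac{\Capa(\varphi E)}{\kappa_d\,r^{d-2}}\,[1+o(1)]\right]
= \exp\left[-N\frac{\varphi^{d-2}\Capa E}{\kappa_d\,r^{d-2}}\,[1+o(1)]\right]
= \exp\left[-N\left(\frac{\varphi}{r}\right)^{d-2}\frac{\Capa E}{\kappa_d}\,[1+o(1)]\right],
\end{equation*}
```

which is the right-hand side of \eqref{NSuccessfulProbFormula}.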
\subsection{Properties of capacity} \lbsubsect{ContCapa}
In this section we collect a few elementary properties of capacity.
\subsubsection{Continuity}
\begin{proposition} \lbprop{CapacityContinuity} Let $E$ denote a Borel subset of $\mathbb{R}^d$. \begin{enumerate} \item \lbitem{CapacityCompact} If $E$ is compact, then $\Capa E_r\downarrow\Capa E$ as $r\downarrow 0$. \item \lbitem{CapacityOpen} If $E$ is open, then $\Capa E_{-r}\uparrow\Capa E$ as $r\downarrow 0$. \item \lbitem{CapacityCtyPoint} If $E$ is bounded with $\Capa(\closure{E})=\Capa(\interior{E})$, then $\Capa E_r\downarrow\Capa E$ and $\Capa E_{-r}\uparrow\Capa E$ as $r\downarrow 0$. \end{enumerate} \end{proposition}
\begin{proof} For $r\downarrow 0$ we have $E_r\downarrow\closure{E}$ and $E_{-r}\uparrow\interior{E}$ for any set $E$. By Port and Stone~\cite[Chapter 3, Proposition 1.13]{PS1978}, it follows that $\Capa E_{-r} \uparrow \Capa(\interior{E})$ and, if $E$ is bounded, $\Capa E_r\downarrow \Capa(\closure{E})$. The statements about $E$ follow depending on which inequalities in $\Capa(\interior{E})\leq\Capa E\leq\Capa(\closure{E})$ are equalities. \qed\end{proof}
\refprop{CapacityContinuity} is a statement about the continuity of $E\mapsto\Capa E$ with respect to enlargement and shrinking. The assumptions on $E$ are necessary, since there are sets $E$ with $\Capa(\closure{E})>\Capa(\interior{E})$. Note that $E\mapsto\Capa E$ is \emph{not} continuous with respect to the Hausdorff metric, even when restricted to reasonable classes of sets. For instance, the finite sets $B(0,1)\cap\tfrac{1}{n}\mathbb{Z}^d$ converge to $B(0,1)$ in the Hausdorff metric, but have zero capacity for all $n$.
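To see why the sets in this example have zero capacity, here is a sketch for $d\geq 3$, assuming monotonicity and finite subadditivity of $\Capa$ together with the ball capacity $\Capa B(x,\epsilon)=\kappa_d\,\epsilon^{d-2}$ used throughout: each singleton satisfies $\Capa\{x\}\leq\Capa B(x,\epsilon)=\kappa_d\,\epsilon^{d-2}$ for every $\epsilon>0$, hence $\Capa\{x\}=0$, and therefore

```latex
\begin{equation*}
\Capa\big(B(0,1)\cap\tfrac{1}{n}\mathbb{Z}^d\big)
\;\leq \sum_{x\in B(0,1)\cap\frac{1}{n}\mathbb{Z}^d}\Capa\{x\} \;=\; 0,
\end{equation*}
```

since the sum has finitely many terms, all equal to zero.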
\subsubsection{Asymptotic additivity}
\begin{lemma} \lblemma{SeparatedCapacity} Let $0<\epsilon<r$. Then, uniformly in $x_1,x_2\in\mathbb{R}^d$ with $d(x_1,x_2)\geq r$ and $E^{(1)},E^{(2)}$ Borel subsets of $\mathbb{R}^d$ with $E^{(1)},E^{(2)}\subset B(0,\epsilon)$, \begin{equation} \Capa\big((x_1+E^{(1)})\cup (x_2+E^{(2)})\big) = \big( \! \Capa E^{(1)} + \Capa E^{(2)}\big)\,[1-o(1)], \qquad r/\epsilon\to\infty. \end{equation} \end{lemma}
\begin{proof} Fix $\tilde{r}$ large enough so that $(x_1+E^{(1)})\cup (x_2+E^{(2)})\subset B(0, \tilde{r})$. On the event $\set{W\text{ hits }x_j+E^{(j)}}$, write $Y_j$ for the first point of $x_j+E^{(j)}$ hit by $W$. Applying \eqref{CapacityOfUnion}, \eqref{CapacityAndHittingUniform}, and the Markov property, we get \begin{align} 0 &\leq \Capa(x_1+E^{(1)})+\Capa(x_2+E^{(2)}) -\Capa\big((x_1+E^{(1)})\cup (x_2+E^{(2)})\big) \notag\\ &= \kappa_d \, \tilde{r}^{d-2} \int_{\partial B(0,\tilde{r})} \mathbb{P}_x \! \left( W\text{ hits $x_1+E^{(1)}$ and }x_2+E^{(2)} \right) d\sigma_{\tilde{r}}(x) \notag\\ &\leq \sum_{\set{j,j'}=\set{1,2}} \kappa_d \, \tilde{r}^{d-2} \int_{\partial B(0,\tilde{r})} \mathbb{E}_x \! \left( \indicator{W \text{ hits $x_j+E^{(j)}$}} \mathbb{P}_{Y_j}\big(W\text{ hits } B(x_{j'},\epsilon)\big) \right) d\sigma_{\tilde{r}}(x) \notag\\ &\leq \sum_{\set{j,j'}=\set{1,2}} \kappa_d \, \tilde{r}^{d-2} \int_{\partial B(0,\tilde{r})} \mathbb{P}_x \! \left( W\text{ hits } x_j+E^{(j)} \right) \frac{\epsilon^{d-2}}{(r-\epsilon)^{d-2}}\, d\sigma_{\tilde{r}}(x) \notag\\ &= \frac{\epsilon^{d-2}}{(r-\epsilon)^{d-2}} \left( \Capa E^{(1)} +\Capa E^{(2)} \right), \end{align} where the second inequality uses that every $Y_j\in x_j+E^{(j)}$ is at least a distance $r-\epsilon$ from $x_{j'}$. But $(\epsilon/(r-\epsilon))^{d-2}=o(1)$ as $r/\epsilon \to\infty$, and so the claim follows. \qed\end{proof}
\section{Non-intersection probabilities for lattice animals} \lbsect{LatticeAnimals}
An event such as \begin{equation} \set{\exists x\in\mathbb{T}^d\colon\, (x+\varphi_d(t)E)\cap W[0,t]=\varnothing} \end{equation} is a simultaneous statement about an infinite collection $(x+\varphi_d(t) E)_{x\in\mathbb{T}^d}$ of sets. In this section, we apply the results of \refsect{Preparations} to prove simultaneous statements for a finite collection of discretized sets, the lattice animals defined below. \refSubSect{HitLaLaAn} proves a bound for sets of large capacity that forms the basis for \refthm{CapacitiesInWc}, while \refsubsect{HitSmLaAn} proves bounds for sets of small capacity that form the basis for Theorems~\ref{t:ComponentCounts}--\ref{t:NoTranslates}.
\begin{definition} A \emph{lattice animal} is a connected set $A\subset\mathbb{R}^d$ that is the union of a finite number of closed unit cubes with centres in $\mathbb{Z}^d$. We write $\mathcal{A}^\boxempty$ for the collection of all lattice animals, and $\mathcal{A}^\boxempty_Q$ for the collection of lattice animals $A\in\mathcal{A}^\boxempty$ that contain $0$ and consist of at most $Q$ unit cubes. \end{definition}
It is readily verified that, for any $d\geq 2$, there is a constant $C<\infty$ such that \begin{equation} \label{LatticeAnimalGrowth} \shortabs{\mathcal{A}^\boxempty_Q} \leq e^{CQ}, \qquad Q\in\mathbb{N}. \end{equation}
In fact, subadditivity arguments show that $|\mathcal{A}^\boxempty_Q|$ grows exponentially,
in the sense that $\lim_{Q\to\infty}|\mathcal{A}^\boxempty_Q|^{1/Q}$ exists in $(1,\infty)$ for any $d\geq 2$. See, for instance, Klarner~\cite{K1967} for the case $d=2$, or Mejia Miranda and Slade~\cite[Lemma 2]{MejMirSlade2011} for a general upper bound that implies \eqref{LatticeAnimalGrowth}.
Lattice animals are commonly considered as discrete combinatorial objects. In our context, we can identify $A\in\mathcal{A}^\boxempty$ with the collection $A\cap\mathbb{Z}^d$ of lattice points in $A$. Requiring $A$ to be a connected subset of $\mathbb{R}^d$ is then equivalent to requiring the vertices $A\cap\mathbb{Z}^d$ to form a connected subgraph of the lattice $\mathbb{Z}^d$. (Because of the details of our definition, the relevant choice of lattice structure is that vertices $x,y\in\mathbb{Z}^d$ are adjacent when their $\ell_\infty$-distance is $1$.)
For $n\in\mathbb{N}$, set $G_n=x+\tfrac{1}{n}\mathbb{Z}^d$ to be a \emph{grid} of $n^d$ points in $\mathbb{T}^d$, for some $x\in\mathbb{T}^d$. The choice of $x$ (i.e., the alignment of the grid) will generally not be relevant to our purposes.
\subsection{Large lattice animals} \lbsubsect{HitLaLaAn}
\begin{proposition} \lbprop{HitLatticeAnimal} Fix an integer-valued function $t \mapsto n(t)$ such that \begin{equation} \label{BoundOnCubeNumber} \lim_{t\to\infty} \frac{n(t)\varphi_d(t)}{(\log t)^{1/d}} = 0. \end{equation} Given $A\in\mathcal{A}^\boxempty$, write $E(A)=n(t)^{-1}\varphi_d(t)^{-1}A$. Then, for each $\kappa$, \begin{equation} \label{HitLatticeAnimalProb} \limsup_{t\to\infty} \frac{\log\mathbb{P}_{x_0} \! \left( \exists x\in G_{n(t)}, A\in\mathcal{A}^\boxempty\colon\, \Capa E(A) \geq \kappa,(x+\varphi_d(t)E(A))\cap W[0,t]=\varnothing \right)}{\log t} \leq J_d(\kappa). \end{equation} \end{proposition}
\refprop{HitLatticeAnimal} gives an upper bound on the probability of finding unhit sets of large capacity, simultaneously over all sets of the form $E(A)$, $A\in\mathcal{A}^\boxempty$. Note that $x+\varphi_d(t) E(A)$ is a finite union of cubes of side length $1/n(t)$ centred at points of $G_{n(t)}$. In \refsect{ProofTheorems} we will use $x+\varphi_d(t) E(A)$ as a lattice approximation to a generic set $x+\varphi_d(t) E$. The fineness of this lattice approximation is determined by the relation between the lengths $1/n(t)$ and $\varphi_d(t)$. The hypothesis in \eqref{BoundOnCubeNumber} means that the lattice scale $1/n(t)$ is smaller than the scale $\varphi_d(t)$ by a factor $n(t)\varphi_d(t)=o((\log t)^{1/d})$. This order is chosen so that the number of lattice animals does not grow too quickly.
Before proving \refprop{HitLatticeAnimal}, we give some definitions and make some remarks that we will use throughout \refsect{LatticeAnimals}. We abbreviate \begin{equation} \label{phint} \varphi=\varphi_d(t), \qquad n=n(t), \qquad E(A)=n^{-1}\varphi^{-1} A. \end{equation} For $x\in\mathbb{T}^d$, we introduce the nested balls $B(x,r)$ and $B(x,R)$, where \begin{equation} \label{rRDefinition} r=\varphi^{1-\delta}, \qquad R=\varphi^{1-2\delta}, \end{equation} and $\delta\in(0,\frac{1}{2})$ is fixed. We have $\varphi\ll r\ll R\to 0$ as $t\to\infty$, and we will always take $t$ large enough so that $\varphi<1$ and $R<\frac{1}{2}$.
Suppose $\kappa\in(0,\infty)$ is given and consider the collection of lattice animals $A\in\mathcal{A}^\boxempty$ such that $\Capa E(A)\leq \kappa$. By \eqref{CapaVolEval}, it follows that $\Vol E(A)$ is uniformly bounded. Consequently, we may assume that such a lattice animal $A$ consists of at most $Q=Q(t)$ unit cubes, where $Q$ is suitably chosen with \begin{equation} \label{QBound} Q=O(n^d\varphi^d). \end{equation} Suppose, instead, that $A\in\mathcal{A}^\boxempty$ is minimal subject to the condition $\Capa E(A) \geq\kappa$, and suppose that $n\varphi\to\infty$. By \eqref{CapacityOfUnion}, upon removing a single unit cube from $A$ the capacity $\Capa E(A)$ decreases by at most $O((n\varphi)^{-(d-2)})$, and so it follows that $\kappa\leq\Capa E(A)\leq \kappa + O((n\varphi)^{-(d-2)})$. In particular, $\Capa E(A)$ is uniformly bounded for $t$ sufficiently large, and we may again assume \eqref{QBound}.
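The one-cube estimate can be sketched as follows, assuming subadditivity and translation invariance of $\Capa$ together with the scaling relation $\Capa(\lambda E)=\lambda^{d-2}\Capa E$ from \eqref{CapacityScaling}. If $C$ is a unit cube of $A$, then $E(A)=E(A\setminus C)\cup E(C)$ and

```latex
\begin{equation*}
\Capa E(A\setminus C) \;\leq\; \Capa E(A)
\;\leq\; \Capa E(A\setminus C) + \Capa\big((n\varphi)^{-1}C\big)
= \Capa E(A\setminus C) + \frac{\Capa([0,1]^d)}{(n\varphi)^{d-2}},
\end{equation*}
```

so removing $C$ decreases the capacity by at most $O((n\varphi)^{-(d-2)})$.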
In what follows, we will always work in a context where one of these two assumptions applies. We will therefore always assume that $A$ consists of at most $Q$ cubes, where $Q$ satisfies \eqref{QBound}.
Given $x\in G_n$ and $A\in\mathcal{A}^\boxempty$, the translate $x+\varphi E(A)$ can be written as $x'+\varphi E(A')$, where $x'\in G_n$ and $0\in A'$. By the above, we have $A'\in \mathcal{A}^\boxempty_Q$. Since $A'$ is connected and $0\in A'$, it follows that $\varphi E(A')\subset B(0,\varphi Q\sqrt{d})$. If $Q=t^{o(1)}$ (in particular, if \eqref{BoundOnCubeNumber} is assumed, or the weaker hypothesis in \eqref{WeakBoundOnCubeNumber}), then $r/\varphi Q\to\infty$ as $t\to\infty$. We may therefore always take $t$ large enough so that $B(0,\varphi Q\sqrt{d}) \subset B(0,r)$, and we may apply \refprop{NSuccessfulProb} to $\varphi E(A)$, uniformly over $A\in \mathcal{A}_Q^\boxempty$.
\begin{proof} Note that if we replace $n$ by a suitable multiple $kn=k(t)n(t)$ for $k(t)\in\mathbb{N}$, we can only increase the probability in \eqref{HitLatticeAnimalProb}. Thus it is no loss of generality to assume that $n\varphi\to\infty$.
The event that $W$ hits $x+\varphi E(A)$ is decreasing in $A$. Therefore we may restrict our attention to lattice animals $A$ that are minimal subject to $\Capa E(A)\geq\kappa$. By the remarks above, we may assume that $A\in\mathcal{A}^\boxempty_Q$. Combining \eqref{BoundOnCubeNumber} and \eqref{QBound}, we have $Q=o(\log t)$.
Set $N=(1-\delta)N_d(t,r,R)$. Recalling \eqref{NdDefinition} and \eqref{rRDefinition}, we have $N_d(t,r,R)= t^{\delta+o(1)}$ as $t\to\infty$. If the event in \eqref{HitLatticeAnimalProb} occurs, then there must exist a point $x\in G_n$ with $N(x,t,r,R) < N$ or a pair $(x,A)\in G_n\times \mathcal{A}^\boxempty_Q$ such that $\Capa E(A)\geq \kappa$ and $(x,E(A))$ is $(\floor{N}, \varphi,r,R)$-successful. Write $\tilde{\chi}^\boxempty$ for the number of such pairs. Then \begin{align} &\mathbb{P}_{x_0} \! \left( \exists x\in G_n, A\in\mathcal{A}^\boxempty\colon\,\Capa E(A) \geq \kappa, (x+\varphi E(A))\cap W[0,t]=\varnothing \right)\notag\\ &\quad\leq \abs{G_n} \max_{x\in G_n} \mathbb{P}_{x_0}(N(x,t,r,R)<N) + \mathbb{P}_{x_0}(\tilde{\chi}^\boxempty\geq 1)\notag\\ &\quad\leq t^{d/(d-2)+o(1)} e^{-c t^{\delta+o(1)}} + \mathbb{P}_{x_0}(\tilde{\chi}^\boxempty\geq 1) \end{align} by \refprop{ExcursionNumbers}. The first term in the right-hand side is negligible. For the
second term, $Q=o(\log t)$ implies that $|\mathcal{A}_Q^\boxempty|\leq e^{O(Q)}=t^{o(1)}$ by \eqref{LatticeAnimalGrowth}, and so \refprop{NSuccessfulProb} gives \begin{align} \mathbb{E}(\tilde{\chi}^\boxempty) &\leq \abs{G_n}\abs{\mathcal{A}_Q^\boxempty} \max_{x\in G_n, A\in\mathcal{A}_Q^\boxempty} \mathbb{P}_{x_0}((x,E(A))\text{ is $(\floor{N} \! ,\varphi,r,R)$-successful})\notag\\ &\leq (t^{d/(d-2) +o(1)}) (t^{o(1)}) (t^{-d\kappa/[(d-2)\kappa_d]+O(\delta)}) \leq t^{-d(\kappa/\kappa_d-1)/(d-2)+O(\delta)}, \label{ExpectedSuccessfulAnimals} \end{align} and the Markov inequality completes the proof. \qed\end{proof}
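The exponent in \eqref{ExpectedSuccessfulAnimals} comes from the following arithmetic, which uses the relation $(\varphi/r)^{d-2}N_d(t,r,R)\sim\tfrac{d}{d-2}\log t$ (recorded again in the proof of \refprop{HitManyLatticeAnimals}). Since $\Capa E(A)\geq\kappa$ and $N=(1-\delta)N_d(t,r,R)$, \refprop{NSuccessfulProb} gives

```latex
\begin{align*}
\mathbb{P}_{x_0}\big((x,E(A))\text{ is $(\floor{N} \! ,\varphi,r,R)$-successful}\big)
&= \exp\left[ -\floor{N} \left(\frac{\varphi}{r}\right)^{d-2}
\frac{\Capa E(A)}{\kappa_d}\,[1+o(1)] \right] \\
&\leq \exp\left[ -(1-\delta)\,\frac{d}{d-2}\,\frac{\kappa}{\kappa_d}\,(\log t)\,[1+o(1)] \right]
= t^{-d\kappa/[(d-2)\kappa_d]+O(\delta)}.
\end{align*}
```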
\refprop{HitLatticeAnimal} bounds the probability that a single rescaled lattice animal $x+\varphi_d(t)E(A)$ is not hit. We will also need the following bounds, for finite unions of lattice animals that are relatively close, and for pairs of lattice animals that are relatively distant.
\begin{lemma} \lblemma{HitFiniteUnionOfLAs} Assume \eqref{BoundOnCubeNumber}. Fix a capacity $\kappa\geq\kappa_d$, a positive integer $k\in\mathbb{N}$ and a positive function $t\mapsto h(t)>0$ satisfying \begin{equation} \label{BoundOnAnimalSeparation} \lim_{t\to\infty}\frac{\log(h(t)/\varphi_d(t))}{\log t}=0. \end{equation} Then the probability that there exist a point $x\in G_{n(t)}$ and lattice animals $A^{(1)}, \dotsc,A^{(k)}\in\mathcal{A}^\boxempty$, such that the union $E=\cup_{j=1}^k E(A^{(j)})$ satisfies $\Capa E\geq\kappa$, $\varphi_d(t)E\subset B(0,h(t))$, and $(x+\varphi_d(t)E) \cap W[0,t] =\varnothing$, is at most $t^{-I_d(\kappa)+o(1)}$. \end{lemma}
\begin{proof} The proof is the same as for \refprop{HitLatticeAnimal}. Abbreviate $h=h(t)$. Since $h=t^{o(1)} \varphi$, it follows that $r/h\to\infty$ as $t\to\infty$, so that \refprop{NSuccessfulProb} applies to $\varphi E$. Similarly, writing $A^{(j)}=y_j+\tilde{A}^{(j)}$ with $\tilde{A}^{(j)} \in\mathcal{A}_Q^\boxempty$ and $y_j\in B(0,nh)\cap\mathbb{Z}^d$, we have that there are at most
$O((nh)^{dk})|\mathcal{A}_Q^\boxempty|^k$ possible choices for $A^{(1)},\dotsc,A^{(k)}$. This number is $t^{o(1)}$ by \eqref{BoundOnCubeNumber} and \eqref{BoundOnAnimalSeparation}, so that a counting argument applies as before. \qed\end{proof}
\begin{lemma} \lblemma{LargeDistantComponents} Assume \eqref{BoundOnCubeNumber}. Fix a positive function $t \mapsto h(t)>0$ satisfying \begin{equation} \label{BoundOnAnimalPairSeparation} \liminf_{t\to\infty} \frac{h(t)}{\varphi_d(t) \log t} > 0, \end{equation} and let $\kappa^{(1)},\kappa^{(2)}>\kappa_d$, $x_1\in\mathbb{T}^d$. Then the probability that there exist a point $x_2\in G_{n(t)}$ with $d(x_1,x_2)\geq h(t)$ and lattice animals $A^{(1)},A^{(2)} \in\mathcal{A}^\boxempty$ with $\Capa E(A^{(j)})\geq \kappa^{(j)}$ such that $(x_j+\varphi_d(t)E(A^{(j)})) \cap W[0,t]=\varnothing$, $j=1,2$, is at most $t^{-[d \kappa^{(1)}/(d-2)\kappa_d] -I_d(\kappa^{(2)})+o(1)}$. \end{lemma}
\begin{proof} We resume the notation and assumptions from the proof of \refprop{HitLatticeAnimal}, this time taking $\delta<\tfrac{1}{4}$. Abbreviate $h=h(t)$.
For $x_2\in G_n$ such that $d(x_1,x_2)\geq 2R$, the events of $(x_j,E(A^{(j)}))$ being $(\floor{N},\varphi,r,R)$-successful, $j=1,2$, are conditionally independent given $(\xi'_i(x_j),\xi_i(x_j))_{i,j}$. The required bound for the case $d(x_1,x_2)\geq 2R$ therefore follows by the same argument as in the proof of \refprop{HitLatticeAnimal}.
For $x_2\in G_n$ such that $d(x_1,x_2)\leq 2R$, set $\tilde{r}=\varphi^{1-3\delta}$, $\tilde{R}=\varphi^{1-4\delta}$ and $\tilde{N}=(1-\delta)N_d(t,\tilde{r},\tilde{R})$. We have $\varphi E(A^{(j)}) \subset B(0,\varphi Q\sqrt{d})$ for $j=1,2$, with $Q=o(\log t)$ (without loss of generality, as in the proof of \refprop{HitLatticeAnimal}). Write $x_2=x_1+\varphi y$, where $y\in\mathbb{R}^d$ with $h/\varphi \leq d(0,y) \leq 2R/\varphi$. The hypothesis \eqref{BoundOnAnimalPairSeparation} implies that $h/\varphi Q\to\infty$. Hence we can apply \reflemma{SeparatedCapacity} (with $\epsilon =\varphi Q\sqrt{d}$ and $h$ playing the role of $r$), to conclude that \begin{equation} \label{CapacityTwoAnimals} \Capa\big( E(A^{(1)})\cup (y+E(A^{(2)})) \big) = \big(\Capa E(A^{(1)})+\Capa E(A^{(2)})\big)[1-o(1)]. \end{equation} We also have $\varphi\big(E(A^{(1)}) \cup (y+E(A^{(2)}))\big)\subset B(0,2R+\varphi Q\sqrt{d})$ with $\tilde{r}/R,\tilde{r} /\varphi Q\to\infty$. In particular, $x_1+\varphi(E(A^{(1)})\cup (y+E(A^{(2)})))\subset B(x_1,\tilde{r})$ for $t$ large enough. As in the proof of \refprop{HitLatticeAnimal}, $(x_j+\varphi E(A^{(j)})) \cap W[0,t]=\varnothing$ for $j=1,2$ implies that $N(x_1,t,\tilde{r},\tilde{R})<\tilde{N}$ or $(x_1,E(A^{(1)}) \cup(y+E(A^{(2)})))$ is $(\floor{\smash{\tilde{N}}\big.},\varphi,\tilde{r},\tilde{R})$-successful. By \eqref{CapacityTwoAnimals} and \refprop{NSuccessfulProb}, \begin{align} & \mathbb{P}_{x_0} \! \left( \big(x_1,E(A^{(1)})\cup(y+E(A^{(2)}))\big)\text{ is $(\floor{\smash{\tilde{N}}\big.},\varphi,\tilde{r},\tilde{R})$-successful} \right) \\&\quad \leq \exp\left[ -\tilde{N}(\varphi/\tilde{r})^{d-2} (\kappa^{(1)}+\kappa^{(2)}-o(1))/\kappa_d \right], \end{align} and the rest of the proof is the same as for \refprop{HitLatticeAnimal}. \qed\end{proof}
\subsection{Small lattice animals} \lbsubsect{HitSmLaAn}
The bound in \refprop{HitLatticeAnimal} is only meaningful when $\kappa>\kappa_d$. For $\kappa<\kappa_d$, there are likely to be many unhit sets of capacity $\kappa$, and the two propositions that follow will quantify this statement.
For $E\subset\mathbb{R}^d$, write $\chi(t,n(t),E)$ for the number of points $x\in G_{n(t)}$ such that $(x+\varphi_d(t) E)\cap W[0,t]=\varnothing$, and write $\chi^{\rm disjoint}(t,n(t),E)$ for the maximal number of disjoint translates $x+\varphi_d(t) E$ such that $x\in G_{n(t)}$ and $(x+\varphi_d(t) E)\cap W[0,t]=\varnothing$. For $\kappa>0$, define \begin{equation} \begin{aligned} \chi_+^\boxempty(t,n(t),\kappa) &= \sum_{\substack{A\in\mathcal{A}^\boxempty\colon\, 0\in A, \\ \Capa E(A)\geq\kappa}} \chi(t,n(t),E(A)),\\ \chi_-^\boxempty(t,n(t),\kappa) &= \min_{\substack{A\in\mathcal{A}^\boxempty\colon\, \\ \Capa E(A)\leq\kappa}} \chi^{\rm disjoint}(t,n(t),E(A)). \end{aligned} \end{equation}
\begin{proposition} \lbprop{AnimalCounts} Fix an integer-valued function $t \mapsto n(t)$ satisfying condition \eqref{BoundOnCubeNumber} such that $\lim_{t\to\infty} n(t)\varphi_d(t)=\infty$. Then, for $0<\kappa<\kappa_d$, \begin{equation} \lim_{t\to\infty} \frac{\log\chi_+^\boxempty(t,n(t),\kappa)}{\log t} = J_d(\kappa), \quad \lim_{t\to\infty} \frac{\log\chi_-^\boxempty(t,n(t),\kappa)}{\log t} = J_d(\kappa), \quad \mbox{in $\mathbb{P}_{x_0}$-probability}. \end{equation} \end{proposition}
\begin{proposition} \lbprop{HitManyLatticeAnimals} Fix an integer-valued function $t \mapsto n(t)$ and a non-negative function $t \mapsto h(t)$ satisfying \begin{equation} \label{WeakBoundOnCubeNumber} \lim_{t\to\infty} \frac{\log [n(t)\varphi_d(t)]}{\log t} = 0, \qquad \limsup_{t\to\infty} \frac{\log[h(t)/\varphi_d(t)]}{\log t} \leq 0, \end{equation} and collections of points $(S(t))_{t>1}$ in $\mathbb{T}^d$ such that $\max_{x\in\mathbb{T}^d} d(x,S(t)) \leq h(t)$ for all $t>1$. Given $A\in\mathcal{A}^\boxempty$, write $E(A)=n(t)^{-1}\varphi_d(t)^{-1}A$. Then, for each $\kappa \in(0,\kappa_d)$, \begin{equation} \mathbb{P}_{x_0} \! \left( \exists A\in\mathcal{A}^\boxempty\colon\,\Capa E(A)\leq \kappa \text{ and } (x+\varphi_d(t)E(A))\cap W[0,t] \neq \varnothing \; \forall x\in S(t) \right) \leq \exp\left[ -t^{J_d(\kappa)-o(1)} \right] \! . \end{equation} \end{proposition}
Compared to \refthm{NoTranslates}, \refprop{HitManyLatticeAnimals} requires $(x+\varphi_d(t)E(A))\cap W[0,t] \neq \varnothing$ only for $x$ in some subset $S(t)$ of the torus, subject to the requirement that $S(t)$ should be within distance $h(t)$ of every point in $\mathbb{T}^d$. The reader may assume that $S(t)=\mathbb{T}^d$, $h(t)=0$ for simplicity.
In \refprop{HitManyLatticeAnimals}, the scale $n(t)$ of the lattice need only satisfy \eqref{WeakBoundOnCubeNumber} instead of the stronger condition \eqref{BoundOnCubeNumber}. This reflects the difference in scaling between the probabilities in \refprop{HitManyLatticeAnimals} compared to \refprop{HitLatticeAnimal}.
\subsubsection{Proof of \refprop{AnimalCounts}} \lbsubsubsect{AnimalCountsProof}
\begin{proof} Let $\delta\in(0,\tfrac{1}{2})$ be given. It suffices to show that $t^{J_d(\kappa)-O(\delta)} \leq \chi_-^\boxempty(t,n,\kappa)$ and $\chi_+^\boxempty(t,n,\kappa)\leq t^{J_d(\kappa) +O(\delta)}$ with high probability. (Given $\kappa<\kappa'$, the assumption $n\varphi\to\infty$ implies the existence of some $A$ with $\kappa\leq\Capa E(A)\leq\kappa'$, and therefore $\chi_-^\boxempty(t,n,\kappa')\leq \chi_+^\boxempty(t,n,\kappa)$.)
For the upper bound, recall $N$ and $\tilde{\chi}^\boxempty$ from the proof of \refprop{HitLatticeAnimal}. On the event $\{N(x,t,r,R)\geq N \;\forall\,x\in G_n\}$ (whose probability tends to $1$) we have $\chi_+^\boxempty(t,n,\kappa) \leq \tilde{\chi}^\boxempty$. From \eqref{ExpectedSuccessfulAnimals} it follows that $\tilde{\chi}^\boxempty \leq t^{J_d(\kappa)+O(\delta)}$ with high probability.
For the lower bound, let $\set{x_1,\dotsc,x_K}$ denote a maximal collection of points in $G_n$ satisfying $d(x_j,x_k)>2R$ for $j\neq k$, so that $K=R^{-d+o(1)}=t^{d/(d-2)-O(\delta)}$. Write $N_-=(1+\delta)N_d(t,r,R)$. By \refprop{ExcursionNumbers}, in the same way as in the proof of \refprop{HitLatticeAnimal}, $N(x_j,t,r,R)\leq N_-$ for each $j=1,\dotsc,K$, with high probability. Moreover we may take $t$ large enough so that $\varphi E(A)\subset B(0,R)$, so that the translates $x_j+\varphi E(A)$ are disjoint. Let $\tilde{\chi}_-^\boxempty(A)$ denote the number of points $x_j$, $j\in \set{1,\dotsc,K}$, such that $(x_j,E(A))$ is $(\ceiling{N_-}\!,\varphi,r,R)$-successful. We have $\chi^{\rm disjoint}(t,n(t),E(A)) \geq \tilde{\chi}_-^\boxempty(A) - 1$ on the event $\set{N(x_j,t,r,R)\leq N_- \: \forall j}$, since at most one translate $x_j+\varphi E(A)$ may have been hit before the start of the first excursion, in the case $x_0\in B(x_j,R)$. On the other hand, since the balls $B(x_j,R)$ are disjoint, the excursions are conditionally independent given the starting and ending points $(\xi'_i(x_j),\xi_i(x_j))_{i,j}$. It follows that, for each $A$ with $\Capa E(A)\leq\kappa$, $\tilde{\chi}_-^\boxempty(A)$ is stochastically larger than a Binomial$(K,p)$ random variable, where $p\geq t^{-d\kappa/[(d-2)\kappa_d]-O(\delta)}$ by \refprop{NSuccessfulProb}. A straightforward calculation shows that $\mathbb{P}(\text{Binomial} (K,p)<\frac{1}{2}Kp)\leq e^{-cKp}$ for some $c>0$, so that \begin{equation} \mathbb{P}_{x_0}(\tilde{\chi}_-^\boxempty(A) \leq t^{J_d(\kappa)-O(\delta)}) \leq \exp\left[ -ct^{J_d(\kappa)-O(\delta)} \right]. \end{equation} As in the proof of \refprop{HitLatticeAnimal}, there are at most $t^{o(1)}$ animals $A$ to consider, so a union bound completes the proof. \qed\end{proof}
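The binomial tail estimate invoked above is a standard multiplicative Chernoff bound: for $X\sim\text{Binomial}(K,p)$ and any $\theta<0$, Markov's inequality applied to $e^{\theta X}$ gives

```latex
\begin{equation*}
\mathbb{P}\big(X \leq \tfrac{1}{2}Kp\big)
\leq e^{-\theta Kp/2}\,\mathbb{E}\big[e^{\theta X}\big]
= e^{-\theta Kp/2}\,\big(1-p+p\,e^{\theta}\big)^{K}
\leq \exp\left[-Kp\left(\tfrac{\theta}{2}+1-e^{\theta}\right)\right] \! ,
\end{equation*}
```

using $1+y\leq e^y$ in the last step; taking $\theta=-1$ yields the stated bound with $c=\tfrac{1}{2}-e^{-1}>0$.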
As with \reflemma{HitFiniteUnionOfLAs}, we may modify \refprop{AnimalCounts} to deal with a finite union of lattice animals.
\begin{lemma} \lblemma{FiniteUnionOfLACounts} Assume the hypotheses of \refprop{AnimalCounts}, let $k\in\mathbb{N}$, and let $t\mapsto h(t)>0$ be a positive function satisfying \eqref{BoundOnAnimalSeparation}. Define \begin{equation} \begin{aligned} \chi^\boxempty_+(t,n(t),\kappa,k,h(t)) &= \sum \chi(t,n(t),E), \\ \chi^\boxempty_-(t,n(t),\kappa,k,h(t)) &= \min \chi^{\rm disjoint}(t,n(t),E), \end{aligned} \end{equation} where the sum and minimum are over sets $E=\cup_{j=1}^k E(A^{(j)})$ such that $\varphi_d(t) E\subset B(0,h(t))$; $(x+\varphi_d(t)E)\cap W[0,t]=\varnothing$; and $\Capa E\geq\kappa$ (for $\chi^\boxempty_+$) or $\Capa E\leq\kappa$ (for $\chi^\boxempty_-$), respectively. Then $(\log \chi^\boxempty_+(t,n(t),\kappa,k,h(t)))/\log t$ and $(\log \chi^\boxempty_-(t,n(t), \kappa,k,h(t)))/\log t$ converge in $\mathbb{P}_{x_0}$-probability to $J_d(\kappa)$ as $t\to\infty$. \end{lemma}
\subsubsection{Proof of \refprop{HitManyLatticeAnimals}}
The proof of \refprop{AnimalCounts} compares $\chi_-^\boxempty(t,n(t),\kappa)$ to a random variable that is approximately Binomial$(t^{d/(d-2)},t^{-d\kappa/[(d-2)\kappa_d]})$. If this identification were exact, then the asymptotics in \refprop{HitManyLatticeAnimals} would follow in a similar way. However, the bound for each individual probability $\mathbb{P}_{x_0} (N(x_j,t,r,R)\geq (1+\delta)N_d(t,r,R))$, $j=1,\dotsc,K$, although relatively small, is still much larger than the probability in \refprop{HitManyLatticeAnimals}. Therefore an additional argument is needed.
\begin{proof} Abbreviate $h=h(t),S=S(t)$.
Recall that the condition $\Capa E(A)\leq\kappa$ implies that $A$ consists of at most $Q$ cubes, where because of \eqref{QBound} and \eqref{WeakBoundOnCubeNumber} we have $Q=t^{o(1)}$. Fix such an $A$, and write $A=p+A'$, where $p\in\mathbb{Z}^d$ and $A'\in\mathcal{A}^\boxempty_Q$. In particular, $E(A')\subset B(0,Q\sqrt{d})$. Since $x+\varphi E(A) =x+\tfrac{1}{n}p+\varphi E(A')$, we can assume by periodicity that $p\in\set{0,\dotsc,n-1}^d$.
Let $\delta\in(0,\tfrac{1}{3})$, take $r,R$ as in \eqref{rRDefinition}, and choose $\tilde{n} =\tilde{n}(t)\in\mathbb{N}$ such that $1/\tilde{n}= \varphi^{1-3\delta+o(1)}$ and $1/\tilde{n}\geq 2R$. Let $\set{\tilde{x}_1,\dotsc,\tilde{x}_{\tilde{n}^d}}$ denote a grid of points in $\mathbb{T}^d$ with spacing $1/\tilde{n}$ (i.e., a translate of $G_{\tilde{n}}$), chosen in such a way that $d(x_0,\tilde{x}_j)>R$. To each grid point $\tilde{x}_j$, $j=1,\dotsc,\tilde{n}^d$, associate in some deterministic way a point $x_j\in S$ with $d(x_j+\tfrac{1}{n}p,\tilde{x}_j) =d(x_j,\tilde{x}_j-\tfrac{1}{n}p)\leq h$ (this is always possible by the hypothesis on $S$). The choice of $\tilde{x}_j,x_j$ depends on $t$, but we suppress this dependence in our notation.
Since $h/\varphi\leq t^{o(1)}$, we have $r/h\geq \varphi^{-\delta+o(1)}\to\infty$. Since also $r/\varphi Q\to\infty$, we may take $t$ large enough so that $h+\varphi Q\sqrt{d}<r<R<1/\tilde{n}$, implying that $x_j+\varphi E(A)=x_j+\tfrac{1}{n}p+\varphi E(A')\subset B(\tilde{x}_j,r)$ for $j=1,\dotsc,\tilde{n}^d$, and so we can apply \reflemma{CapacityAndHittingDistant} to the sets $x_j+\varphi E(A)$, uniformly in the choice of $A$ and $j$.
Let $\sigma(s)$ be the total amount of time, up to time $s$, during which the Brownian motion is \emph{not} making an excursion from $\partial B(\tilde{x}_j,r)$ to $\partial B(\tilde{x}_j,R)$ for any $j=1,\dotsc,\tilde{n}^d$. In other words, $\sigma(s)$ is the Lebesgue measure of $[0,s] \setminus ( \cup_{j=1}^{\tilde{n}^d} \cup_{i=1}^\infty [T'_i(\tilde{x}_j),T_i(\tilde{x}_j)])$. Define the stopping time $T''=\inf\set{s\colon\, \sigma(s)\geq t}$. Clearly, $T''\geq t$. Define $N''_j$ to be the number of excursions from $\partial B(\tilde{x}_j,r)$ to $\partial B(\tilde{x}_j,R)$ by time $T''$, and write $(\xi'_i(\tilde{x}_j),\xi_i(\tilde{x}_j))_{i=1,\dotsc,N''_j}$ for the starting and ending points of these excursions.
If $(x+\varphi E(A)) \cap W[0,t]\neq\varnothing$ for each $x\in S$, then necessarily, for each $j=1,\dotsc,\tilde{n}^d$, at least one of the $N''_j$ excursions from $\partial B(\tilde{x}_j,r)$ to $\partial B(\tilde{x}_j,R)$ must hit $x_j+\varphi E(A)$. (Here we use that $d(x_0,\tilde{x}_j)>R$, which implies that the Brownian motion cannot hit $x_j+\varphi E(A)$ before the start of the first excursion.) These excursions are conditionally independent given $(\xi'_i(\tilde{x}_j),\xi_i(\tilde{x}_j))$ for $i=1,\dotsc,N''_j, j=1,\dotsc,\tilde{n}^d$. Applying \reflemma{CapacityAndHittingDistant} and \eqref{CapacityScaling}, we get \begin{align} &\mathbb{P}_{x_0} \! \condparenthesesreversed{ (x+\varphi E(A))\cap W[0,t]\neq\varnothing \; \forall x\in S}{(N''_j)_j, (\xi'_i(\tilde{x}_j),\xi_i(\tilde{x}_j))_{i,j}}\notag\\ &\quad\leq \mathbb{P}_{x_0} \! \condparenthesesreversed{ (x_j+\varphi E(A))\cap W[0,T'']\neq\varnothing \; \forall j }{ (N''_j)_j, (\xi'_i(\tilde{x}_j),\xi_i(\tilde{x}_j))_{i,j} }\notag\\ &\quad= \prod_{j=1}^{\tilde{n}^d} \left( 1-\prod_{i=1}^{N''_j} \left( 1-\frac{\varphi^{d-2}\Capa E(A)}{\kappa_d \, r^{d-2}}(1+o(1)) \right) \right)\notag\\ &\quad\leq \exp \left[ \sum_{j=1}^{\tilde{n}^d} \log\left( 1-(1-(\varphi/r)^{d-2} (\kappa/\kappa_d+o(1)))^{N''_j} \right) \right]. \end{align} In this upper bound, which no longer depends on $(\xi'_i(\tilde{x}_j),\xi_i(\tilde{x}_j))_{i,j}$, the function $y \mapsto \log(1-e^{cy})$ is concave, and hence we can replace each $N''_j$ by the empirical mean $\bar{N}''=\tilde{n}^{-d} \sum_{j=1}^{\tilde{n}^d} N''_j$: \begin{align} &\mathbb{P}_{x_0} \! \condparenthesesreversed{ (x+\varphi E(A))\cap W[0,t]\neq\varnothing \; \forall x\in S }{ (N''_j)_j }\notag\\ &\quad\leq \exp \left( \tilde{n}^d \log\left( 1-(1-(\varphi/r)^{d-2} (\kappa/\kappa_d+o(1)))^{\bar{N}''} \right) \right)\notag\\ &\quad\leq \exp \left[ -\tilde{n}^d (1-(\varphi/r)^{d-2} (\kappa/\kappa_d+o(1)))^{\bar{N}''} \right]. \end{align} Write $M=(1+\delta)N_d(t,r,R)$. 
On the event $\set{\bar{N}''\leq M}$, the relations $(\varphi/r)^{d-2} M\sim (1+\delta)d(d-2)^{-1}\log t$ and $\tilde{n}^d=t^{d/(d-2)-O(\delta)}$ imply that \begin{align} &\indicator{\bar{N}''\leq M}\mathbb{P}_{x_0} \! \condparenthesesreversed{ (x+\varphi E(A))\cap W[0,t]\neq\varnothing \; \forall x\in S }{ (N''_j)_j }\notag\\ &\quad\leq \exp\left[ -t^{d/(d-2)-O(\delta)} \exp\left[ -(\varphi/r)^{d-2} M(\kappa/\kappa_d +o(1)) \right] \right]\notag\\ &\quad= \exp\left[ -t^{J_d(\kappa)-O(\delta)} \right]. \label{HitAllSetsBarNSmall} \end{align}
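The Jensen step above uses the concavity of $y\mapsto\log(1-e^{cy})$ for $c<0$ (here $e^{c}=1-(\varphi/r)^{d-2}(\kappa/\kappa_d+o(1))\in(0,1)$), which can be verified directly:

```latex
\begin{equation*}
\frac{d}{dy}\log(1-e^{cy}) = \frac{-c\,e^{cy}}{1-e^{cy}},
\qquad
\frac{d^2}{dy^2}\log(1-e^{cy}) = \frac{-c^2\,e^{cy}}{(1-e^{cy})^2} < 0,
\end{equation*}
```

so that $\tilde{n}^{-d}\sum_{j=1}^{\tilde{n}^d}\log(1-e^{cN''_j}) \leq \log(1-e^{c\bar{N}''})$, as used above.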
Next, we will show that $\mathbb{P}_{x_0}(\bar{N}''\geq M)\leq \exp[-ct^{d/(d-2)-O(\delta)}]$. To that end, let $\pi^{(\tilde{n})}$ denote the projection map from the unit torus $\mathbb{T}^d$ to a torus of side length $1/\tilde{n}$. Under $\pi^{(\tilde{n})}$, every grid point $\tilde{x}_j$ maps to the same point $\pi^{(\tilde{n})}(\tilde{x}_j)$, and $\sigma(s)$ is the total amount of time the projected Brownian motion $\pi^{(\tilde{n})}(W)$ in $\pi^{(\tilde{n})}(\mathbb{T}^d)$ spends \emph{not} making an excursion from $\partial B(\pi^{(\tilde{n})}(\tilde{x}_j),r)$ to $\partial B(\pi^{(\tilde{n})}(\tilde{x}_j),R)$, by time $s$. Moreover, $\tilde{n}^d \bar{N}'' = \sum_{j=1}^{\tilde{n}^d} N''_j$ can be interpreted as the number of such excursions in $\pi^{(\tilde{n})}(\mathbb{T}^d)$ completed by time $T''$.
Write $x\mapsto \tilde{n}x$ for the dilation that maps the torus $\pi^{(\tilde{n})}(\mathbb{T}^d)$ of side length $1/\tilde{n}$ to the unit torus $\mathbb{T}^d$. By Brownian scaling, $(\tilde{W}(u) )_{u\geq 0} = (\tilde{n}\pi^{(\tilde{n})}(W(\tilde{n}^{-2}u)))_{u\geq 0}$ has the law of a Brownian motion in $\mathbb{T}^d$. Moreover, $\tilde{n}^d\bar{N}''$ can be interpreted as the number of excursions of $\tilde{W}(u)$ from $\partial B(\tilde{n}\pi^{(\tilde{n})}(\tilde{x_j}), \tilde{n} r)$ to $\partial B(\tilde{n}\pi^{(\tilde{n})}(\tilde{x_j}),\tilde{n} R)$ until the time spent not making such excursions first exceeds $\tilde{n}^2 t$, i.e., precisely the quantity $N'(\tilde{n}\pi^{(\tilde{n})}(\tilde{x}_j),\tilde{n}^2 t,\tilde{n}r,\tilde{n}R)$ from \refsubsect{Excursions}. We have $N_d(\tilde{n}^2 t,\tilde{n}r,\tilde{n}R)=\tilde{n}^d N_d(t,r,R)$, so \refprop{ExcursionNumbers} gives \begin{align} \mathbb{P}_{x_0}(\bar{N}''\geq M) &= \mathbb{P}_{x_0}(\tilde{n}^d \bar{N}'' \geq \tilde{n}^d M) = \mathbb{P}_{\tilde{n}\pi^{(\tilde{n})}(x_0)} \! \left( N'(\tilde{n} \pi^{(\tilde{n})}(\tilde{x}_j),\tilde{n}^2 t,\tilde{n}r,\tilde{n}R) \geq \tilde{n}^d M \right)\notag\\ &= \mathbb{P}_{\tilde{n}\pi^{(\tilde{n})}(x_0)} \! \left( N'(\tilde{n} \pi^{(\tilde{n})}(\tilde{x}_j),\tilde{n}^2 t,\tilde{n}r,\tilde{n}R) \geq (1+\delta)N_d(\tilde{n}^2 t,\tilde{n}r,\tilde{n}R) \right)\notag\\ &\leq \exp\left[ -c N_d(\tilde{n}^2 t,\tilde{n}r,\tilde{n}R) \right] = \exp\left[ -c t^{d/(d-2)-O(\delta)} \right] \! . \label{BarNNotLarge} \end{align}
Equations \eqref{HitAllSetsBarNSmall}--\eqref{BarNNotLarge} imply that, for each fixed $A=p+A'$ with $\Capa E(A)\leq \kappa$, we have \begin{equation} \mathbb{P}_{x_0} \! \left( (x+\varphi E(A))\cap W[0,t]\neq \varnothing \; \forall x\in S \right) \leq \exp\left[ -t^{d(1-\kappa/\kappa_d)/(d-2)-O(\delta)} \right]. \end{equation}
But the number of pairs $(p,A')$ is at most $n^d |\mathcal{A}^\boxempty_Q|= t^{d/(d-2)+o(1)} e^{O(Q)}$, by \eqref{LatticeAnimalGrowth} and \eqref{WeakBoundOnCubeNumber}. Since $Q=t^{o(1)}$, a union bound completes the proof. \qed\end{proof}
\section{Proofs of Theorems \ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents} and Propositions \ref{p:NoWrapping}--\ref{p:Connected}} \lbsect{ProofTheorems}
In proving Theorems~\ref{t:CapacitiesInWc}--\ref{t:ShapeOfComponents}, we bound non-intersection probabilities for Wiener sausages, e.g. \begin{equation} \mathbb{P}\left( \exists x\in\mathbb{T}^d\colon\, (x+\varphi_d(t) E)\cap W_{\rho(t)}[0,t] =\varnothing \right), \qquad E\subset\mathbb{R}^d, \end{equation} in terms of the Brownian non-intersection probabilities estimated in Propositions~\ref{p:HitLatticeAnimal} and \ref{p:AnimalCounts}--\ref{p:HitManyLatticeAnimals}, in which $E$ is a rescaled lattice animal. In \refsubsect{ApproxByLA} we prove an approximation lemma for lattice animals, which leads directly to the proofs of Theorems~\ref{t:CapacitiesInWc}--\ref{t:NoTranslates} and \refprop{NoWrapping}. Proving \refthm{ShapeOfComponents} requires an additional argument to show that a component containing a given set is likely to be not much larger, and we prove this in \refsubsect{ShapeProof}. Finally, in \refsubsect{ConnectedProof} we give the proof of \refprop{Connected}.
\subsection{Approximation by lattice animals} \lbsubsect{ApproxByLA}
\begin{lemma} \lblemma{SetsAndAnimals} Let $\rho>0$ and $n\in\mathbb{N}$ satisfy $\rho n \geq 2\sqrt{d}$, and let $\varphi>0$. Then, given a bounded connected set $E\subset\mathbb{R}^d$, there is an $A\in\mathcal{A}^\boxempty$ such that $E(A)=n^{-1}\varphi^{-1} A$ satisfies $E\subset E(A)\subset E_{\rho/\varphi}$ and, for any $x\in\mathbb{T}^d$, $0\leq\tilde{\rho}\leq\tfrac{1}{4}\rho$, \begin{align} \label{SetInclusion} x+\varphi E\subset x'+ \varphi & E(A) \subset x+(\varphi E)_\rho \qquad\text{for some }x'\in G_n,\\ \label{MissSetAndAnimal} \big\{(x+\varphi E)\cap W_\rho[0,t]=\varnothing\big\} \subset& \set{\exists x'\in G_n\colon\,(x'+\varphi E(A))\cap W[0,t]=\varnothing},\\ \label{HitSetAndAnimal} \big\{(x+\varphi E)\cap W_{\tilde\rho}[0,t]\neq\varnothing\big\} \subset& \set{(x+\varphi E(A))\cap W[0,t]\neq\varnothing}. \end{align} \end{lemma}
\begin{proof} Let $A$ be the union of all the closed unit cubes with centres in $\mathbb{Z}^d$ that intersect $n\varphi E_{\rho/4\varphi}$. This set is connected because $E$ is connected, and therefore $A\in\mathcal{A}^\boxempty$. Every cube in $A$ is within distance $\sqrt{d}$ of some point of $n\varphi E_{\rho/4\varphi}$, so that $E\subset E_{\rho/4\varphi}\subset E(A)\subset E_{\rho/4\varphi+\sqrt{d}/n\varphi}$. By assumption, $\sqrt{d}/n\leq\rho/2$, so that $E(A)\subset E_{3\rho/4\varphi}\subset E_{\rho/\varphi}$ (see \reffig{SetAndAnimal}(a)).
\begin{figure}
\caption{(a) From inside to outside: an F-shaped set $E$; the enlargement $E_{\rho/4\varphi}$; $E(A)$, the union of the rescaled cubes intersecting $E_{\rho/4\varphi}$; the bounding set $E_{3\rho/4\varphi}$. The grid shows the cubes in the definition of $E(A)$, rescaled to have side length $1/n\varphi$. The parameters $\rho,n$ satisfy $\rho n =2\sqrt{d}$. (b) From inside to outside (scaled by $\varphi$ compared to part (a)): the prospective subset $x+\varphi E$ of $\mathbb{T}^d\setminus W_\rho[0,t]$; the approximating grid-aligned set $x'+\varphi E(A)$; the taboo set $x+(\varphi E)_\rho$ that the Brownian motion must not visit.}
\end{figure}
Given $x\in\mathbb{T}^d$, let $x'\in G_n$ satisfy $d(x,x')\leq\sqrt{d}/2n$. Then $x+\varphi E\subset x'+(\varphi E)_{\sqrt{d}/2n}\subset x'+\varphi E(A)\subset x+(\varphi E(A))_{\sqrt{d}/2n}\subset x +(\varphi E)_\rho$ since $\sqrt{d}/2n\leq \rho/4$ and $\varphi E(A)\subset (\varphi E)_{3\rho/4}$. See \reffig{SetAndAnimal}(b). This proves \eqref{SetInclusion}; \eqref{MissSetAndAnimal} follows immediately because $(x+\varphi E)\cap W_\rho[0,t]=\varnothing$ is equivalent to $(x+(\varphi E)_\rho) \cap W[0,t]=\varnothing$.
Similarly, since $(\varphi E)_{\rho/4}\subset \varphi E(A)$ and since $(x+\varphi E)\cap W_{\tilde\rho}[0,t]\neq\varnothing$ is equivalent to $(x+(\varphi E)_{\tilde\rho})\cap W[0,t]\neq\varnothing$, the inclusion in \eqref{HitSetAndAnimal} follows. \qed\end{proof}
\subsubsection{Proof of \refthm{NoTranslates}}
In this section we prove the following theorem, of which \refthm{NoTranslates} is the special case with $S(t)=\mathbb{T}^d$.
\begin{theorem} \lbthm{HitManySets} Fix non-negative functions $t \mapsto \rho(t)$ and $t \mapsto h(t)$ satisfying \begin{equation} \label{phihcond} \lim_{t\to\infty} \frac{\rho(t)}{\varphi_d(t)} = 0, \qquad \lim_{t\to\infty} \frac{\log[h(t)/\varphi_d(t)]}{\log t} \leq 0, \end{equation} and collections of points $(S(t))_{t>1}$ in $\mathbb{T}^d$ such that $\max_{x\in\mathbb{T}^d} d(x,S(t)) \leq h(t)$ for all $t>1$. Then, for any $E\subset\mathbb{R}^d$ compact with $\Capa E <\kappa_d$, \begin{equation} \label{AvoidAll} \log \mathbb{P}\left( (x+\varphi_d(t) E)\cap W_{\rho(t)}[0,t]\neq\varnothing \; \forall x\in S(t) \right) \leq -t^{J_d(\Capa E)+o(1)}, \qquad t\to\infty. \end{equation} \end{theorem}
\begin{proof} Fix $E\subset\mathbb{R}^d$ compact with $\Capa E<\kappa_d$, and let $\delta>0$ be arbitrary with $\Capa E +\delta <\kappa_d$. By \refprop{CapacityContinuity}\refitem{CapacityCompact}, we can choose $r>0$ so that $\Capa(E_r)\leq \Capa E+\tfrac{1}{2}\delta$. If $E_r$ is not already connected, then enlarge it to a connected set $E'\supset E_r$ by adjoining a finite number of line segments (this is possible because $E_r$ is the $r$-enlargement of a compact set). Doing so does not change the capacity, so we may apply \refprop{CapacityContinuity}\refitem{CapacityCompact} again to find $r'>0$ so that $\Capa((E')_{r'})\leq\Capa E+\delta$.
Define $\rho_0(t)=r'\varphi_d(t)$ and $n(t)=\ceiling{2\big.\smash{\sqrt{d}} /\rho_0(t)}$, so that $\rho_0(t)n(t)\geq 2\sqrt{d}$ and the condition \eqref{WeakBoundOnCubeNumber} from \refprop{HitManyLatticeAnimals} holds. Since $\rho(t)/\varphi_d(t)\to 0$, we may choose $t$ sufficiently large so that $\rho(t)\leq \tfrac{1}{4}\rho_0(t)$.
Apply \reflemma{SetsAndAnimals} to $E'$ with $\rho=\rho_0(t)$, $\tilde\rho=\rho(t)$, $n=n(t)$, and $\varphi=\varphi_d(t)$. Note that if $(x+\varphi_d(t)E)\cap W_{\rho(t)}[0,t]\neq\varnothing$ for all $x\in S(t)$, then $(x+\varphi_d(t)E(A))\cap W[0,t]\neq\varnothing$ for all $x\in S(t)$, where $\Capa E(A)\leq\Capa((E')_{\rho_0/\varphi})=\Capa((E')_{r'})\leq\Capa E+\delta$. By \refprop{HitManyLatticeAnimals} with $\kappa=\Capa E +\delta$, the probability of this event is at most $\exp[-t^{J_d(\Capa E)-O(\delta)}]$, and taking $\delta\downarrow 0$ we get the desired result. \qed\end{proof}
\subsubsection{Proof of \refthm{CapacitiesInWc}}
\begin{proof} First consider $\kappa<\kappa_d$. Since $I_d(\kappa)$ is infinite for such $\kappa$, it suffices to show that $\lim_{t\to\infty} \log\mathbb{P}(\kappa^*(t,\rho(t))\leq \kappa\varphi^{d-2}) /\log t=-\infty$. Let $\kappa<\kappa'<\kappa_d$, and take $E$ to be a ball of capacity $\kappa'$. If $\kappa^*(t,\rho(t)) \leq \kappa\varphi^{d-2}$, then no translate $x+\varphi_d(t)E$, $x\in\mathbb{T}^d$, can be a subset of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$. Applying \refthm{NoTranslates}, we conclude that $\mathbb{P}(\kappa^*(t,\rho(t))\leq \kappa\varphi^{d-2})\leq \exp[-t^{J_d(\kappa)+o(1)}]$, which implies the desired result.
Next consider the LDP upper bound for $\kappa\geq\kappa_d$. Since $\kappa \mapsto I_d(\kappa)$ is increasing and continuous on $[\kappa_d,\infty]$, it suffices to show that $\mathbb{P}(\kappa^* (t,\rho(t))\geq\kappa\varphi^{d-2})\leq t^{-I_d(\kappa)+o(1)}$ for $\kappa>\kappa_d$. To this end, suppose that $x+\varphi_d(t) E\subset\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ for some $x\in\mathbb{T}^d$ and $E\subset\mathbb{R}^d$ compact with $\Capa E\geq\kappa$. As in the proof of \refthm{HitManySets}, define $n(t)=\ceiling{2\big.\smash{\sqrt{d}}/\rho(t)}$. \reflemma{SetsAndAnimals} gives $(x'+\varphi_d(t) E(A))\cap W[0,t]=\varnothing$ for some $x'\in G_{n(t)}$ and $\Capa E(A) \geq\Capa E\geq\kappa$. The condition in \eqref{RadiusBound} on $\rho(t)$ implies the condition in \eqref{BoundOnCubeNumber} on $n(t)$, and therefore we may apply \refprop{HitLatticeAnimal} to conclude that $\mathbb{P}(\kappa^*(t,\rho(t))\geq\kappa\varphi^{d-2}) \leq t^{-I_d(\kappa)+o(1)}$.
Finally, the LDP lower bound for $\kappa\geq\kappa_d$ will follow (with $E$ the ball of capacity $\kappa$, say) from the lower bound proved for \refthm{ShapeOfComponents} (see \refsubsect{ShapeProof}). \qed\end{proof}
\subsubsection{Proof of \refthm{ComponentCounts}}
\begin{proof} As in the proof of \refthm{CapacitiesInWc}, the lower bound will follow from the more specific lower bound proved for \refthm{ShapeOfComponents} (see \refsubsect{ShapeProof}).
Choose $n(t)$ such that $n(t)\geq 2\sqrt{d}/\rho(t)$ and the hypotheses of \refprop{AnimalCounts} hold. (The conditions on $n(t)$ are mutually consistent because $2\sqrt{d}/\rho(t)=O(1/\varphi_d(t))$.) Given any component $C$ containing a ball of radius $\rho(t)$ and having the form $C=x+\varphi_d(t)E$ for $\Capa E\geq\kappa$, apply \reflemma{SetsAndAnimals} to find $x'_C\in G_{n(t)}$ and $A_C\in\mathcal{A}^\boxempty$ such that $C\subset x'_C+\varphi_d(t)E(A_C)\subset C_{\rho(t)}\subset \mathbb{T}^d\setminus W[0,t]$. The pairs $(x'_C,E(A_C))$ so constructed must be distinct: for $C'\neq C$, we have $x'_{C'}+\varphi_d(t)E(A_{C'})\subset C'_{\rho(t)} \subset (\mathbb{T}^d\setminus C)_{\rho(t)}=\mathbb{T}^d\setminus C_{-\rho(t)}$, and since $C_{-\rho(t)}$ is non-empty by assumption, it follows that $C\nsubseteq x'_{C'}+\varphi_d(t)E(A_{C'})$. We therefore conclude that $\chi_{\rho(t)} (t,\kappa)\leq \chi_+^\boxempty(t,n(t),\kappa)$, so the required upper bound follows from \refprop{AnimalCounts}. \qed\end{proof}
\subsubsection{Proof of \refprop{NoWrapping}}
\begin{proof} Abbreviate $\varphi=\varphi_d(t),\rho=\rho(t)$. It suffices to bound the probability that $\mathbb{T}^d\setminus W_\rho[0,t]$ has a component of diameter at least $\tfrac{1}{2}$, since the mapping $x+y\mapsto y$ from $B(x,r)\subset\mathbb{T}^d$ to $B(0,r)\subset\mathbb{R}^d$ is a well-defined local isometry if $r<\tfrac{1}{2}$.
Suppose that $x\in\mathbb{T}^d\setminus W_\rho[0,t]$ belongs to a connected component intersecting $\partial B(x,\tfrac{1}{2})$. Then there is a bounded connected set $E\subset\mathbb{R}^d$ such that $(x+\varphi E)\cap W_\rho[0,t]$ and $E\cap\partial B(0,\tfrac{1}{2}\varphi^{-1}) \neq\varnothing$ (see \reffig{Diameter1}). Define $n=n(t)=\ceiling{2\big.\smash{\sqrt{d}/\rho}}$ and apply \reflemma{SetsAndAnimals} to conclude that $(x'+\varphi E(A))\cap W[0,t]=\varnothing$ with $E\subset E(A)$, $A\in\mathcal{A}^\boxempty$, $x'\in G_n$. Since $E(A)$ contains $E$, it has diameter at least $\tfrac{1}{2}\varphi^{-1}$, so $A$ has diameter at least $\tfrac{1}{2}n$ and must consist of at least $n/(2\sqrt{d})$ unit cubes. Since $\rho=o(\varphi)$ and $\varphi=t^{-d/(d-2) +o(1)}$, we have $n\geq t^{d/(d-2)+o(1)}$. The hypothesis in \eqref{RadiusBound} implies that $n\varphi=o((\log t)^{1/d})$, as in condition \eqref{BoundOnCubeNumber} from \refprop{HitLatticeAnimal}. Therefore $\Vol E(A) \geq (n\varphi)^{-d}n/(2\sqrt{d})\geq t^{d/(d-2)+o(1)}$, and in particular $\Vol E(A)\to\infty$. By \eqref{CapaVolEval}, $\Capa E(A)\to\infty$ also. Thus, if $\mathbb{T}^d\setminus W_\rho[0,t]$ has a component of diameter at least $\tfrac{1}{2}$, then the event in \refprop{HitLatticeAnimal} occurs, with $\kappa$ arbitrarily large for $t\to\infty$. By \refprop{HitLatticeAnimal}, the probability of this occuring is negligible, as claimed. \qed\end{proof}
\begin{figure}
\caption{A large connected component of $\mathbb{T}^d\setminus W_\rho[0,t]$ that is not isometric to a subset of $\mathbb{R}^d$ (shading) and a possible choice of the set $x+\varphi E$ (dark shading).}
\end{figure}
This proof is unchanged if the radius $\tfrac{1}{2}$ is replaced by any $\delta\in(0,\tfrac{1}{2})$, which shows that the maximal diameter $D(t,\rho(t))$ satisfies $D(t,\rho(t))\to 0$ in $\mathbb{P}$-probability when \eqref{RadiusBound} holds (see \refsubsubsect{DiameterDiscussion}).
\subsection{Proof of \refthm{ShapeOfComponents}} \lbsubsect{ShapeProof}
In Theorems~\ref{t:CapacitiesInWc}--\ref{t:NoTranslates} we deal with components that contain a subset $x+\varphi_d(t)E$ of a given form. \refthm{ShapeOfComponents} adds the requirement that the component containing such a subset should not extend further than distance $\delta\varphi_d(t)$ from $x+\varphi_d(t)E$. In the proof, we will bound the probability that the component extends no further than distance $\rho(t)$ from $x+\varphi_d(t)E$, but only for sets $E\in\mathcal{E}^\boxempty_c$ of the following kind: define \begin{equation} \label{cEboxcDefn} \mathcal{E}^\boxempty_c = \set{\text{$E\in\mathcal{E}_c$: $E=\tfrac{1}{n}A$ for some $A\in\mathcal{A}^\boxempty$}} \end{equation} to be the collection of sets in $\mathcal{E}_c$ that are rescalings of lattice animals.
Note that, unlike in \refsect{LatticeAnimals}, the scaling factor $\tfrac{1}{n}$ in \eqref{cEboxcDefn} is fixed and does not depend on $t$. We begin by showing that the collection $\mathcal{E}^\boxempty_c$ is dense in $\mathcal{E}_c$.
\begin{lemma} \lblemma{cEboxcDense} Given $E\in\mathcal{E}_c$ and $\delta>0$, there exists $E^\boxempty\in\mathcal{E}^\boxempty_c$ with $E\subset E^\boxempty\subset E_\delta$. \end{lemma}
\reflemma{cEboxcDense} will allow us to prove \refthm{ShapeOfComponents} only for $E\in\mathcal{E}^\boxempty_c$.
\begin{proof} For $y\notin E$, define \begin{equation} b(y)=\sup\set{r>0\colon\, y\text{ belongs to the unbounded component of }\mathbb{R}^d\setminus E_r}. \end{equation} Since $\mathbb{R}^d\setminus E$ is open and connected, $b(y)$ is continuous and positive on $\mathbb{R}^d\setminus E$. By compactness, we may choose $\eta\in(0,\delta)$ such that $b(y)>\eta$ for $y\notin E_\delta$. Apply \reflemma{SetsAndAnimals} (with $\rho$ and $\varphi$ replaced by $\eta$ and 1, and $n$ sufficiently large) to find $E'=\tfrac{1}{n}A$ with $E\subset E'\subset E_\eta$. The set $E'$ is a rescaled lattice animal, but $\mathbb{R}^d\setminus E'$ might not be connected. However, if $y$ belongs to a bounded component of $\mathbb{R}^d\setminus E'$, then $b(y)\leq\eta$ by construction: since $E'\subset E_\eta$, $y$ cannot belong to the unbounded component of $\mathbb{R}^d\setminus E_\eta$. By choice of $\eta$, it follows that every bounded component of $\mathbb{R}^d\setminus E'$ is contained in $E_\delta$. Thus, if we define $E^\boxempty$ to be $E'$ together with these bounded components (see \reffig{LAwConnCompl}), then $E^\boxempty\in\mathcal{E}_c^\boxempty$ and $E^\boxempty\subset E_\delta$, as claimed. \qed\end{proof}
\begin{figure}
\caption{A set $E$ (white) and its enlargement $E_\delta$ (dark shading). Every bounded component of $\mathbb{R}^d\setminus E_\delta$ can reach infinity without touching $E_\eta$ (medium shading). A set $E'$ (light shading) with $E\subset E'\subset E_\eta$ may disconnect a region from infinity (diagonal lines), but this region must belong to $E_\delta$.}
\end{figure}
In the proof of \refthm{ShapeOfComponents}, we adapt the concept of $(N,\varphi,r,R)$-successful from \refdefn{NSuccessful} to formulate the desired event in terms of excursions. To this end we next introduce the sets and events that we will use. In the remainder of this section, we abbreviate $\varphi=\varphi_d(t)$, $\rho=\rho(t)$, $I_d(\kappa)=I(\kappa)$ and $J_d(\kappa)=J(\kappa)$.
Fix $E\in\mathcal{E}_c^\boxempty$ and $\delta>0$. We may assume that $E\subset B(0,a)$ with $a>\delta$. Let $\eta\in(0,\tfrac{1}{2})$ be small enough that $\kappa_d \eta^{d-2}<\Capa E$. Set $r=\varphi^{1-\eta}$, $R=\varphi^{1-2\eta}$, and let $\set{x_0,\dotsc,x_K}\subset\mathbb{T}^d$ denote a maximal collection of points in $\mathbb{T}^d$ satisfying $d(x_0,x_j)>R$ and $d(x_j,x_k)>2R$ for $j\neq k$, so that \begin{equation} \label{KAsymptotics} K= R^{-d-o(1)} =t^{d/(d-2)+O(\eta)}. \end{equation} Take $t$ large enough that $\rho<\tfrac{1}{2}\delta\varphi$ and $R<\tfrac{1}{2}$. Set $N=(1+\eta) N_d(t,r,R)$ (see \eqref{NdDefinition}).
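The requirement $\kappa_d\eta^{d-2}<\Capa E$ on $\eta$ can be read as a capacity comparison. Since $\kappa_d$ is the capacity of the unit ball, the scaling relation \eqref{CapacityScaling} gives, for any $z$, \begin{equation} \Capa B(z,\eta\varphi) = \kappa_d(\eta\varphi)^{d-2} = \kappa_d\eta^{d-2}\,\varphi^{d-2} < \varphi^{d-2}\Capa E, \end{equation} so any component trapped in a ball of radius $\eta\varphi$ has capacity strictly smaller than that of $x_j+\varphi E$; this is how $\eta$ enters the argument below.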
Choose $q=q(t)$ with $q>2a+\delta$, $q\geq\log t$, and $q=(\log t)^{O(1)}$. Let $\set{y_1, \dotsc,y_L}\subset B(0,2q)\setminus E_\delta$ denote a maximal collection of points in $B(0,2q)\setminus E_\delta$ satisfying $d(y_\ell,E)\geq \delta$, $d(y_\ell,y_m)\geq \tfrac{1}{2}\rho/\varphi$ for $\ell\neq m$, so that $L=O((q\varphi/\rho)^d)=(\log t)^{O(1)}$ by \eqref{RadiusBound}.
(The collection $\set{y_1,\dotsc,y_L}$ will be used to ensure that a component containing $x_j+\varphi E$ is contained in $x_j+\varphi E_\delta$; see the event $F_3(j)$ below. The requirements on $q$ are chosen so that $L$ is suitably bounded, while also allowing us to apply \reflemma{LargeDistantComponents} to deal with components that are relatively far from $x_j$.)
Let $Z=\partial(E_{\rho/\varphi}) \cup ( \cup_{z\in B(0,2a)\cap\eta\mathbb{Z}^d} \partial B(z,\eta) \setminus E_{\rho/\varphi} )$ (see \reffig{SetZ}: $Z$ consists of a $(d-1)$-dimensional shell around $E$ together with a finite number of $(d-1)$-dimensional spheres). Let $\set{z_1,\dotsc,z_M}\subset Z$ denote a maximal collection of points in $Z$ with $d(z_m,z_p)\geq\tfrac{1}{2}\rho/\varphi$ for $m\neq p$. Since $Z$ is $(d-1)$-dimensional, we have $M=O((\varphi/\rho)^{d-1})$.
\begin{figure}
\caption{The set $E$ (shaded) and part of the $(d-1)$-dimensional set $Z$.}
\end{figure}
For $j=1,\dotsc,K$, define the following events.
\begin{itemize} \item[$\bullet$] $F_1(j)=\set{\frac{1}{2}N\leq N(x_j,t,r,R)\leq N}$ is the event that $W$ makes between $\tfrac{1}{2}N$ and $N$ excursions from $\partial B(x_j,r)$ to $\partial B(x_j,R)$ by time $t$. \item[$\bullet$] $F_2(j)$ is the event that $(x_j,E_{\rho/\varphi})$ is $(\floor{N},\varphi,r,R)$-successful. \item[$\bullet$] $F_3(j)$ is the event that, for each $\ell=1,\dotsc,L$, the $i^{\rm th}$ excursion from $\partial B(x_j,r)$ to $\partial B(x_j,R)$ hits $x_j+B(\varphi y_\ell, \tfrac{1}{2}\rho)$ for some $i=i(\ell)\in\set{1,\dotsc,\floor{N/4}}$. \item[$\bullet$] $F_4(j)$ is the event that, for each $m=1,\dotsc,M$, the $i^{\rm th}$ excursion from $\partial B(x_j,r)$ to $\partial B(x_j,R)$ hits $x_j+B(\varphi z_m, \tfrac{1}{2}\rho)$ for some $i=i(m) \in\set{\floor{N/4}+1,\dotsc,\floor{N/2}}$. \item[$\bullet$] $F_5(j)$ is the event that $\mathbb{T}^d\setminus W_\rho[0,t]$ contains no component of capacity at least $\varphi^{d-2}\Capa E $ disjoint from $B(x_j,2q\varphi)$. \item[$\bullet$] $F(j)=F_1(j)\cap F_2(j)\cap F_3(j)$. \item[$\bullet$] $F_{\rm max}(j)=F_1(j)\cap F_2(j)\cap F_3(j)\cap F_4(j)\cap F_5(j)$. \end{itemize}
\begin{lemma} \lblemma{FImplies} On $F(j)$, the component of $\mathbb{T}^d\setminus W_\rho[0,t]$ containing $x_j+\varphi E$ satisfies condition \eqref{CtrhoEEprime} with $E'=E_\delta$. Furthermore, $F_{\rm max}(j)\subset F_\rho(t,E,E_\delta)$ for $t$ sufficiently large. \end{lemma}
\begin{proof} Note that if $F_1(j)\cap F_2(j)$ occurs, then $x_j+\varphi E\subset \mathbb{T}^d\setminus W_\rho[0,t]$. If $F_1(j)\cap F_3(j)$ occurs, then the set $x_j+\cup_{\ell=1}^L B(\varphi y_\ell, \tfrac{1}{2}\rho)$ is entirely covered by the Wiener sausage. By choice of $\set{y_1,\dotsc, y_L}$, this set contains $x_j+\left( B(0,2q\varphi)\setminus \varphi E_\delta \right)$, and consequently $\left( \mathbb{T}^d\setminus W_\rho[0,t] \right) \cap B(x_j,2q\varphi) \subset x_j +\varphi E_\delta$.
We have therefore shown that, on $F(j)$, $\mathbb{T}^d\setminus W_\rho[0,t]$ has a component containing $x_j+\varphi E$ and satisfying condition \hyperref[CtrhoEEprime]{$\mathcal{C}(t,\rho,E,E_\delta)$}. To show further that $F_{\rm max}(j)\subset F_\rho(t,E,E_\delta)$, we will show that any other component must have capacity smaller than $\varphi^{d-2}\Capa E$.
If $F_1(j)\cap F_4(j)$ occurs, then $x_j+\varphi Z$ is entirely covered by the Wiener sausage, by choice of $\set{z_1,\dotsc,z_M}$. By choice of $Z$, all components of $B(x_j,a\varphi)\setminus (x_j+\varphi Z)$, other than any components that are subsets of $x_j+\varphi E_{\rho/\varphi}=x_j+(\varphi E)_\rho$, must be contained in a ball of radius $\eta\varphi$, and in particular have capacity at most $\kappa_d(\eta\varphi)^{d-2}<\varphi^{d-2}\Capa E$.
Finally, if $F_5(j)$ occurs, then the component of largest capacity cannot occur outside $B(x_j,2q\varphi)$, and therefore must be the component of largest capacity contained in $x_j+(\varphi E)_\rho$.
It therefore remains to show that the component of largest capacity in $x_j+(\varphi E)_\rho$ is in fact the component containing $x_j+\varphi E$. Suppose that $a\in x_j+\varphi E$ is the centre of a $(d-1)$-dimensional ball of radius $\rho$ that is completely contained in some face of $x_j+\varphi E$, and let $b$ be a point at distance at most $\rho$ from $a$ along the line perpendicular to the face (see \reffig{NearComponent}). If both $x_j+\varphi E$ and $b$ are contained in $\mathbb{T}^d\setminus W_\rho[0,t]$, then so is the line segment from $a$ to $b$, so that $b$ belongs to the same component as $x_j+\varphi E$.
\begin{figure}
\caption{A point $b$ near the centre $a$ of a ball (thicker line) on a face of $x_j+\varphi E$, and a point $c$ near the boundary of a face. The Brownian path must not touch the dotted lines, but the Wiener sausage can fill the shaded circles by visiting the crossed points. The point $c$ can belong to a different component than $x_j+\varphi E$, but $b$ cannot.}
\end{figure}
We therefore conclude that, on $F_{\rm max}(j)$, any point of $x_j+(\varphi E)_\rho$ that is not in the same component as $x_j+\varphi E$ must lie within distance $2\rho$ of the boundary of some face of $x_j+\varphi E$. Write $H$ for the set of boundaries of faces of $E$. Since $H$ is \mbox{$(d-2)$}-dimensional, its capacity is $0$, and therefore $\Capa((\varphi H)_{2\rho})=\varphi^{d-2} \Capa(H_{2\rho/\varphi})=o(\varphi^{d-2})$ by \refprop{CapacityContinuity}\refitem{CapacityCompact}, since $\rho/\varphi\to 0$. In particular, for $t$ sufficiently large the component of largest capacity in $x_j+(\varphi E)_\rho$ must be the component containing $x_j+\varphi E$, which completes the proof of \reflemma{FImplies}. \qed\end{proof}
\begin{proof}[Proof of \refthm{ShapeOfComponents}] Because of the upper bound proved for Theorems~\ref{t:CapacitiesInWc}--\ref{t:ComponentCounts}, we need only prove the lower bounds \begin{equation} \label{BasicLowerBoundF} \mathbb{P}\left( F_\rho(t,E,E_\delta) \right) \geq t^{-I(\Capa E)-o(1)}, \qquad \Capa E\geq\kappa_d, \delta>0, \end{equation} and \begin{equation} \label{BasicLowerBoundchi} \chi_\rho(t,E,E_\delta)\geq t^{J(\Capa E)-o(1)} \text{ with high probability,} \qquad\Capa E<\kappa_d,\delta>0. \end{equation} Moreover, it suffices to prove \eqref{BasicLowerBoundF}--\eqref{BasicLowerBoundchi} under the assumption that $E\in\mathcal{E}_c^\boxempty$ and, in \eqref{BasicLowerBoundF}, that $\Capa E>\kappa_d$. Indeed, given any $\delta'\in(0,\tfrac{1}{2}\delta)$, apply \reflemma{cEboxcDense} to find $E^\boxempty\in\mathcal{E}_c^\boxempty$ with $E\subset E^\boxempty\subset E_{\delta'}$. By adjoining, if necessary, a sufficiently small cube to $E^\boxempty$, we may assume that $\Capa E^\boxempty> \Capa E$. Apply \eqref{BasicLowerBoundF}--\eqref{BasicLowerBoundchi} with $E$ and $\delta$ replaced by $E^\boxempty$ and $\delta'$, respectively. \refprop{CapacityContinuity}\refitem{CapacityCompact} implies that $\Capa E^\boxempty\downarrow\Capa E$ as $\delta'\downarrow 0$. Since $\kappa\mapsto I(\kappa)$ and $\kappa\mapsto J(\kappa)$ are continuous, we conclude that the bounds for $E\in\mathcal{E}_c$ follow from those for $E\in\mathcal{E}^\boxempty_c$.
We next relate the left-hand side of \eqref{BasicLowerBoundF} to the events $F_1(j),\dotsc,F_5(j)$. Noting that $F_1(j)\cap F_2(j)\cap F_1(k)\cap F_2(k) \subset F_5(j)^c$ for $j\neq k$ (so that the events $F_{\rm max}(j)$, $j=1,\dotsc,K$, are pairwise disjoint), \reflemma{FImplies} implies that \begin{align} \mathbb{P}\left( F_\rho(t,E,E_\delta) \right) &\geq \sum_{j=1}^K \mathbb{P}_{x_0}(F_{\rm max}(j)) \notag\\& \geq \sum_{j=1}^K \mathbb{P}_{x_0}(F_2(j)\cap F_3(j)\cap F_4(j)) -\sum_{j=1}^K \mathbb{P}_{x_0}(F_1(j)^c) -\sum_{j=1}^K \mathbb{P}_{x_0}(F_1(j)\cap F_2(j)\cap F_5(j)^c). \label{BoundByF} \end{align} We will bound each of the sums on the right-hand side of \eqref{BoundByF}.
Applying \refprop{ExcursionNumbers} and \eqref{KAsymptotics} (and noting that $N_d(t,r,R) =t^{\eta+o(1)}$ and that $\tfrac{1}{2}N/N_d(t,r,R)=\tfrac{1}{2}(1+\eta)<\tfrac{3}{4}$), we see that the second sum in the right-hand side of \eqref{BoundByF} is at most $t^{d/(d-2)+O(\eta)} \exp[-c t^{\eta+o(1)}]$. This term will be negligible compared to the scale of \eqref{BasicLowerBoundF}.
For the last sum in \eqref{BoundByF}, we assume that $\Capa E>\kappa_d$ and use \reflemma{LargeDistantComponents}. Set $h(t)=2q\varphi$, and note that $h(t)/(\varphi \log t)\geq 1$ by assumption on $q$. If $F_1(j)\cap F_2(j)\cap F_5(j)^c$ occurs, then, by \reflemma{SetsAndAnimals}, there are lattice animals $A,A'\in\mathcal{A}^\boxempty$ with $\Capa E(A), \Capa E(A')\geq \Capa E$ and a point $x'\in\mathbb{T}^d\setminus B(x_j,2q\varphi)$ with $(x_j+\varphi E(A)) \cap W[0,t]=(x'+\varphi E(A'))\cap W[0,t]=\varnothing$. By \reflemma{LargeDistantComponents} with $\kappa^{(1)}=\kappa^{(2)}=\Capa E$, we have \begin{equation} \mathbb{P}_{x_0}(F_1(j)\cap F_2(j)\cap F_5(j)^c) \leq t^{-d\Capa(E)/[(d-2)\kappa_d]-I(\Capa E)+o(1)}. \end{equation} Hence the last sum in \eqref{BoundByF} is at most $t^{-2I(\Capa E)+O(\eta)}$. Since $I(\Capa E)>0$, this term is also negligible, for $\eta$ sufficiently small, compared to the scale of \eqref{BasicLowerBoundF}. (This is the only part of the proof where $\Capa E>\kappa_d$ is used.)
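The exponent bookkeeping behind the bound $t^{-2I(\Capa E)+O(\eta)}$ is worth recording; it combines \eqref{KAsymptotics} with the explicit form $I(\kappa)=\tfrac{d}{d-2}(\kappa/\kappa_d-1)$ of the rate function (the counterpart of the exponent $J_d(\kappa)=\tfrac{d}{d-2}(1-\kappa/\kappa_d)$ obtained in \refprop{HitManyLatticeAnimals}): \begin{equation} K\, t^{-d\Capa E/[(d-2)\kappa_d]-I(\Capa E)+o(1)} = t^{\frac{d}{d-2}\left( 1-\frac{\Capa E}{\kappa_d} \right)-I(\Capa E)+O(\eta)} = t^{-2I(\Capa E)+O(\eta)}. \end{equation}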
We have therefore proved that \eqref{BasicLowerBoundF} will follow if we can give a suitable lower bound for the first sum on the right-hand side of \eqref{BoundByF}. Using again the asymptotics \eqref{KAsymptotics} for $K$, \eqref{BasicLowerBoundF} will follow from \begin{equation} \label{F234Bound} \mathbb{P}_{x_0}\condparenthesesreversed{F_2(j)\cap F_3(j) \cap F_4(j)}{((\xi'_i(x_j),\xi_i(x_j))_{i=1}^N)_{j=1}^K} \geq t^{-d\Capa E/[(d-2)\kappa_d]-O(\eta)}. \end{equation} In fact, \eqref{F234Bound} also implies \eqref{BasicLowerBoundchi}. On the event $\cap_{j=1}^K F_1(j)$ (which occurs with high probability, by \refprop{ExcursionNumbers}), \reflemma{FImplies} implies that $\chi_\rho(t,E,E_\delta)$ is at least as large as the number of $j\in\set{1,\dotsc,K}$ for which $F_2(j)\cap F_3(j)$ occurs. Since the events $F_2(j)\cap F_3(j)$ are conditionally independent for different $j$ given the starting and ending points $((\xi'_i(x_j), \xi_i(x_j))_{i=1}^N)_{j=1}^K$, \eqref{F234Bound} and \eqref{KAsymptotics} immediately imply that $\chi_\rho(t,E,E_\delta)\geq t^{J(\Capa E)-O(\eta)}$ with high probability (cf.\ the proof of \refprop{AnimalCounts} in \refsubsubsect{AnimalCountsProof}).
It therefore remains to prove \eqref{F234Bound}. To do so, we will condition on not hitting $x_j+(\varphi E)_\rho$ and use the following lemma to estimate the conditional probability of hitting small nearby balls. Note that, conditional on the occurrence of $F_2(j)$ and the starting and ending points $(\xi'_i(x_j),\xi_i(x_j))_{i=1}^N$, the events $F_3(j)$ and $F_4(j)$ are independent.
\begin{lemma} \lblemma{HitWhileAvoiding} Fix $E\in\mathcal{E}_c^\boxempty$ and $\delta>0$, and let $0<\rho<\varphi<r<R<\tfrac{1}{2}$. Then there is an $\epsilon>0$ such that if $\rho/\varphi<\epsilon$, $\varphi/r<\epsilon$ and $r/R\leq\tfrac{1}{2}$, then, uniformly in $x\in\mathbb{T}^d$, $\xi'\in \partial B(x,r)$, and $\xi\in\partial B(x,R)$, \begin{multline} \mathbb{P}_{\xi',\xi} \! \condparentheses{(x+B(\varphi y,\tfrac{1}{2}\rho))\cap W[0,\zeta_R] \neq\varnothing}{(x+(\varphi E)_\rho)\cap W[0,\zeta_R] =\varnothing } \\ \geq \begin{cases} \epsilon (\varphi/r)^{d-2} (\rho/\varphi)^{d-2}, & \text{if }y\in B(0,r/\varphi)\setminus E_\delta,\\ \epsilon (\varphi/r)^{d-2} (\rho/\varphi)^\alpha, & \text{if }y\in E_\delta\setminus E_{\rho/\varphi}, \end{cases} \end{multline} where $\alpha>d-2$ is some constant depending only on $d$. \end{lemma}
We give the proof of \reflemma{HitWhileAvoiding} in \refsubsect{HitWhileAvoidingProof}.
The event $F_3(j)$ says that all $(x_j,B(y_\ell,\tfrac{1}{2}\rho/\varphi))$, $\ell=1,\dotsc,L$, are \emph{not} $(\floor{N/4},\varphi,r,R)$-successful. \reflemma{HitWhileAvoiding} implies (as in the proof of \refprop{NSuccessfulProb}) that, uniformly in $\ell$, \begin{align} &\mathbb{P}_{x_0} \! \condparentheses{(x_j,B(y_\ell,\tfrac{1}{2}\rho/\varphi)) \text{ is $(\floor{N/4},\varphi,r,R)$-successful}}{F_2(j)}\notag\\ &\quad\leq \left(1-\epsilon (\rho/r)^{d-2} \right)^{\floor{N/4}} = \exp\left[ -\epsilon \floor{N/4} (\varphi/r)^{d-2} (\rho/\varphi)^{d-2}(1+o(1))\right] \! . \end{align} Recalling \eqref{phidtDefinition} and \eqref{NdDefinition}, we have $N(\varphi/r)^{d-2}\geq (d/(d-2)+O(\eta))\log t$, so that \begin{align} &\mathbb{P}_{x_0} \! \condparentheses{\text{some $(x_j,B(y_\ell,\tfrac{1}{2}\rho/\varphi))$ is $(\floor{N/4},\varphi,r,R)$-successful}}{F_2(j)}\notag\\ &\quad\leq L\exp\left[ -\epsilon\left( \tfrac{1}{4}d/(d-2)+O(\eta) \right) \left( \frac{(\log t)^{1/d} \rho}{\varphi} \right)^{d-2} (\log t)^{2/d} \right] \! . \label{CondProbyl} \end{align} By \eqref{RadiusBound}, $(\log t)^{1/d}\rho/\varphi \to\infty$, whereas $L=(\log t)^{O(1)}$. Hence, the conditional probability in \eqref{CondProbyl} is $o(1)$ and $\condP{F_3(j)} {F_2(j)}=1-o(1)$.
For $F_4(j)$, write $k=\floor{N/2}-\floor{N/4}$ and $p=\epsilon (\varphi/r)^{d-2} (\rho/ \varphi)^\alpha$. \reflemma{HitWhileAvoiding} states that, conditional on $F_2(j)$, each ball $x_j+B(\varphi z_m,\tfrac{1}{2}\rho)$ has a probability at least $p$ of being hit during each of the $k$ excursions from $\partial B(x_j,r)$ to $\partial B(x_j,R)$ in the definition of $F_4(j)$. It follows that $\condP{F_4(j)}{F_2(j)}$ is at least the probability that a Binomial$(k,p)$ random variable has value $M$ or larger. We have $p\to 0$ and $k-M\to\infty$ as $t\to\infty$, so using Stirling's approximation, we get \begin{multline} \mathbb{P}_{x_0} \! \condparentheses{F_4(j)}{F_2(j)} \geq \binom{k}{M} p^M(1-p)^{k-M} = \frac{k^k p^M(1-p)^{k-M}}{M^M (k-M)^{k-M}(\sqrt{2\pi}+o(1))}\\ \geq \left( \frac{kp}{M} \right)^M \frac{(1-p)^k}{\sqrt{2\pi}+o(1)} = \exp\left[ -M\log(M/kp) -O(kp)-O(1)\right] \! . \end{multline}
Observe that $kp=e^{O(1)}N_d(t,r,R)(\varphi/r)^{d-2}(\rho/\varphi)^\alpha=e^{O(1)}(\rho/\varphi)^\alpha \log t$. The assumption $\rho/\varphi\to 0$ implies that $kp=o(\log t)$. On the other hand, recall that $M=O((\varphi/\rho)^{d-1})$, so that $M/kp=e^{O(1)}(\varphi/\rho)^{\alpha+d-1}/\log t$. The hypothesis \eqref{RadiusBound} means that $\varphi/\rho=o((\log t)^{1/d})$. Consequently, $M=o((\log t)^{(d-1)/d})$ and $\log(M/kp)=O(\log\log t)$. In particular, $M\log(M/kp)=o(\log t)$, and we conclude that \begin{equation} \label{CondProbzm} \mathbb{P}_{x_0} \! \condparentheses{F_4(j)}{F_2(j)}=\exp\left( -o(\log t) \right)=t^{o(1)}. \end{equation}
Combining \eqref{CondProbyl}, \eqref{CondProbzm}, and \refprop{NSuccessfulProb}, we obtain \begin{align} \mathbb{P}_{x_0}(F_2(j)\cap F_3(j)\cap F_4(j)) &= \mathbb{P}_{x_0}(F_2(j)) \mathbb{P}_{x_0}\condparentheses{F_3(j)}{F_2(j)} \mathbb{P}_{x_0}\condparentheses{F_4(j)}{F_2(j)} \notag\\&= t^{-d\Capa(E_{\rho/\varphi})/[(d-2)\kappa_d]+O(\eta)}\,[1-o(1)]\,t^{o(1)} =t^{-d\Capa(E)/[(d-2)\kappa_d]+O(\eta)}. \end{align} We have therefore verified \eqref{F234Bound}, and this completes the proof. \qed\end{proof}
\subsection{Proof of \refprop{Connected}} \lbsubsect{ConnectedProof}
\begin{proof} $\mathbb{T}^d\setminus W[0,t]$ is open since $W[0,t]$, being the (almost surely) continuous image of a compact set, is compact and hence closed.
Consider first a Brownian motion $\tilde{W}$ in $\mathbb{R}^d$. Define \begin{equation} \tilde{Z}=\bigcup_{q,q'\in\mathbb{Q}}\bigcup_{1\leq i<j\leq d} \set{(x_1,\dotsc,x_d)\in\mathbb{R}^d: x_i=q,x_j=q'} \end{equation} and note that $\tilde{Z}$ is the inverse image $\pi_0^{-1}(Z)$ of a path-connected, locally path-connected, dense subset $Z=\pi_0(\tilde{Z})\subset\mathbb{T}^d$ (where $\pi_0\colon\,\mathbb{R}^d\to\mathbb{T}^d$ is the canonical projection map). Since $\tilde{Z}$ is the countable union of $(d-2)$-dimensional affine subspaces, $\tilde{W}\cointerval{0,\infty}$ does not intersect $\tilde{Z}$, except perhaps at the starting point, with probability $1$. Projecting onto $\mathbb{T}^d$, it follows that $W\cointerval{0,\infty}$ intersects $Z$ in at most one point, and in particular $\mathbb{T}^d \setminus W\cointerval{0,\infty}$ contains a path-connected, locally path-connected, dense subset. This implies the remaining statements in \refprop{Connected}. \qed\end{proof}
\section{Proofs of Corollaries \ref{c:UnhitSet}--\ref{c:CoverTimeLDP}} \lbsect{CoroProofs}
\subsection{Proof of \refcoro{UnhitSet}}
\begin{proof} Equation \eqref{HittingSetProbSimple} follows immediately from the more precise statements in \eqref{HittingLargeSetProb}--\eqref{DisjointTranslates}. By monotonicity and continuity, it suffices to show \eqref{HittingLargeSetProb} for $\Capa E>\kappa_d$.
Consider first the lower bounds in \eqref{HittingLargeSetProb}--\eqref{DisjointTranslates}. Replace $E$ by the compact set $\closure{E}$ (by hypothesis, this does not change the value of $\Capa E$). Let $\kappa>\Capa E$ be arbitrary and use \refprop{CapacityContinuity}\refitem{CapacityCompact} to find $r>0$ such that $\Capa(E_r)\leq\kappa$. Adjoin finitely many lines to $E_r$ to make it into a connected set $E'$ (as in the proof of \refthm{HitManySets}) and then adjoin any bounded components of $\mathbb{R}^d\setminus E'$ to form a set $E''\in\mathcal{E}_c$ that satisfies the conditions of \refthm{ShapeOfComponents}. For $\Capa E\geq\kappa_d$, \refthm{ShapeOfComponents} implies that $x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t]$ for some $x\in\mathbb{T}^d$, with probability at least $t^{J_d(\kappa)-o(1)}$. If instead $\Capa E<\kappa_d$, then it is no loss of generality to assume that $\kappa<\kappa_d$ also. Then \refthm{ShapeOfComponents} shows that there are at least $t^{J_d(\kappa)-o(1)}$ components containing translates $x+\varphi_d(t) E$; these translates are necessarily disjoint. In both cases we conclude by taking $\kappa\downarrow\Capa E$.
For the upper bounds, we will shrink the set $E$. The results nearly follow from Theorems~\ref{t:CapacitiesInWc}--\ref{t:ComponentCounts}, since the existence of $x+\varphi_d(t) E\subset\mathbb{T}^d\setminus W[0,t]$ implies the existence of $x+(\varphi_d(t) E)_{-\rho(t)}\subset \mathbb{T}^d\setminus W_{\rho(t)}[0,t]$. However, the set $E$ might not be connected. To handle this possibility, we will appeal directly to Lemmas~\ref{l:HitFiniteUnionOfLAs} and \ref{l:FiniteUnionOfLACounts}.
Let $\kappa\in(\kappa_d,\Capa E)$ (for \eqref{HittingLargeSetProb}) or $\kappa\in(0,\Capa E)$ (for \eqref{DisjointTranslates}) be arbitrary. Apply \refprop{CapacityContinuity}\refitem{CapacityCtyPoint} to find an $r>0$ such that $\Capa(E_{-2r})>\kappa$. The enlargement $(E_{-2r})_r$ has a finite number $k$ of components, by boundedness. Set $\rho=\rho(t)=r\varphi_d(t)$ and choose $n=n(t)$ such that $n(t) \geq 2\sqrt{d}/\rho(t)$ and the hypotheses of \refprop{AnimalCounts} hold. (As in the proof of \refthm{ComponentCounts}, these conditions on $n(t)$ are mutually consistent.) Apply \reflemma{SetsAndAnimals} to each of the $k$ components of $(E_{-2r})_r$ to obtain a set $E^\boxempty=\cup_{j=1}^k E(A^{(j)})$ satisfying $(E_{-2r})_r\subset E^\boxempty\subset (E_{-2r})_{2r}\subset E$. Thus, $\Capa E^\boxempty\geq\kappa$. Furthermore, given $x\in\mathbb{T}^d$ there is $x'\in G_{n(t)}$ such that $x'+\varphi_d(t)E^\boxempty\subset x+\varphi_d(t)((E_{-2r})_{2r}) \subset x+\varphi_d(t) E$. Define $h(t)=C\varphi_d(t)$, where $C$ is a constant large enough so that $E\subset B(0,C)$. For $\Capa E>\kappa_d$, we can then apply \reflemma{HitFiniteUnionOfLAs} to conclude that $\mathbb{P}(\exists x\in\mathbb{T}^d\colon\, x+\varphi_d(t)E\subset\mathbb{T}^d\setminus W[0,t])\leq t^{J_d(\kappa)+o(1)}$. For $\Capa E<\kappa_d$, \reflemma{FiniteUnionOfLACounts} implies that $\chi(t,E)\leq\chi_+^\boxempty(t,n(t),\kappa,h(t))\leq t^{J_d(\kappa)+o(1)}$ with high probability. In both cases take $\kappa\uparrow\Capa E$. \qed\end{proof}
\subsection{Proof of Corollaries \ref{c:VolumeLDP}--\ref{c:EvalLDP}}
\begin{proof} Note the scaling relation \begin{equation} \label{EvalScaling} \lambda(\varphi D)=\varphi^{-2}\lambda(D). \end{equation} Corollaries~\ref{c:VolumeLDP}--\ref{c:EvalLDP} follow from Theorems~\ref{t:CapacitiesInWc}, \ref{t:NoTranslates} and \ref{t:ShapeOfComponents} because of the inequalities \eqref{CapaVolEval}. Indeed, apart from the fact that the principal Dirichlet eigenvalue $\lambda(E)$ is decreasing in $E$ rather than increasing, the proofs are identical and we will prove only \refcoro{EvalLDP}.
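The scaling relation \eqref{EvalScaling} is the standard parabolic rescaling of the Dirichlet Laplacian; a one-line verification for completeness:

```latex
% If -\Delta u = \lambda(D)\,u on D with u|_{\partial D} = 0, set
% v(x) := u(x/\varphi) on \varphi D; then v|_{\partial(\varphi D)} = 0 and
\begin{align*}
-\Delta v(x) = -\varphi^{-2}\,(\Delta u)(x/\varphi)
 = \varphi^{-2}\,\lambda(D)\,v(x) ,
\end{align*}
% so the principal Dirichlet eigenvalues satisfy
% \lambda(\varphi D) = \varphi^{-2}\lambda(D).
```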
Since $\lambda \mapsto I^{\rm Dirichlet}_d(\lambda)$ is continuous and decreasing on $\ocinterval{0,\lambda_d}$, it suffices to prove \eqref{LargeEval} and to show that $\mathbb{P}(\varphi_d(t)^2\lambda(t,\rho(t)) \leq \lambda)=t^{-I_d^{\rm Dirichlet}(\lambda)+o(1)}$ for $\lambda<\lambda_d$.
For \eqref{LargeEval}, note that $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ cannot contain a ball of capacity $>\kappa_d(\lambda_d/\lambda(t,\rho(t)))^{(d-2)/2}$: by \eqref{CapacityScaling} and \eqref{EvalScaling}, the component of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ containing such a ball would have an eigenvalue strictly smaller than $\lambda(t,\rho(t))$. In particular, if $\lambda>\lambda_d$ and $\lambda(t,\rho(t)) \geq \lambda \varphi_d(t)^{-2}$, then $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ cannot contain a ball of capacity $\kappa_d\, \varphi_d(t)^{d-2}((\lambda_d/\lambda)^{(d-2)/2}+\delta)$ for any $\delta>0$. Taking $\delta$ small enough so that $(\lambda_d/\lambda)^{(d-2)/2}+\delta<1$, applying \refthm{NoTranslates} with $E$ the ball of capacity $\kappa_d((\lambda_d/\lambda)^{(d-2)/2} +\delta)$, and letting $\delta\downarrow 0$, we obtain \eqref{LargeEval}.
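The ball comparison in the previous paragraph can be spelled out. Writing $\lambda$ for $\lambda(t,\rho(t))$ and assuming the normalisations suggested by the notation, namely $\Capa B(0,s)=\kappa_d\, s^{d-2}$ and $\lambda(B(0,s))=\lambda_d/s^{2}$ (the latter from \eqref{EvalScaling} with $\lambda_d=\lambda(B(0,1))$):

```latex
\begin{align*}
\Capa B(0,s) > \kappa_d\Big(\frac{\lambda_d}{\lambda}\Big)^{(d-2)/2}
\;\Longleftrightarrow\; s > \Big(\frac{\lambda_d}{\lambda}\Big)^{1/2}
\;\Longleftrightarrow\; \lambda(B(0,s)) = \frac{\lambda_d}{s^{2}} < \lambda .
\end{align*}
% By domain monotonicity of the principal Dirichlet eigenvalue, any
% component containing B(0,s) then has eigenvalue at most
% \lambda(B(0,s)) < \lambda, as claimed.
```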
Now take $\lambda<\lambda_d$. By \refprop{NoWrapping}, apart from an event of negligible probability, every component $C$ of $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ can be isometrically identified (under its intrinsic metric) with a bounded open subset $E$ of $\mathbb{R}^d$, via $C=x+E$ for some $x\in\mathbb{T}^d$. In particular, $\lambda(C)=\lambda(E)$, and we can apply \eqref{CapaVolEval} to conclude that $\kappa^*(t,\rho(t))\geq\Capa E\geq \kappa_d(\lambda_d/ \lambda(C))^{(d-2)/2}$. Applying \refthm{CapacitiesInWc}, \begin{equation} \mathbb{P}(\varphi_d(t)^2\lambda(t,\rho(t))\leq\lambda) \leq \mathbb{P}(\kappa^*(t,\rho(t))\geq \kappa_d (\lambda_d/\lambda)^{(d-2)/2}\varphi_d(t)^{d-2}) \leq t^{-I_d^{\rm Dirichlet}(\lambda)+o(1)}. \end{equation}
For the reverse inequality, note that \refthm{ShapeOfComponents} implies that $\mathbb{T}^d\setminus W_{\rho(t)}[0,t]$ contains a ball of capacity $\kappa_d\,\varphi_d(t)^{d-2}(\lambda_d/ \lambda)^{(d-2)/2}$ with probability at least $t^{-I_d^{\rm Dirichlet}(\lambda)-o(1)}$. \qed\end{proof}
\subsection{Proof of \refcoro{InradiusLDP}}
\begin{proof} Since $r\mapsto I_d^{\rm inradius}(r)$ is continuous and strictly increasing on $\cointerval{1,\infty}$ and is infinite elsewhere, it suffices to verify \eqref{SmallInradius} and show $\mathbb{P}(\rho_{\rm in}(t)> r\varphi_d(t))= t^{-I_d^{\rm inradius}(r) +o(1)}$ for $r\geq 1$. But the events $\set{\rho_{\rm in}(t)\leq r\varphi_d(t)}$ and $\set{\rho_{\rm in}(t)> r\varphi_d(t)}$ are precisely the event \begin{equation} \set{(x+\varphi_d(t) B(0,r))\cap W[0,t]\neq \varnothing \; \forall x\in\mathbb{T}^d} \end{equation} and its complement \begin{equation} \set{\exists x\in\mathbb{T}^d\colon\,(x+\varphi_d(t) B(0,r))\cap W[0,t]=\varnothing} \end{equation} from \refthm{NoTranslates}, with $\rho(t)=0$, and equation \eqref{HittingLargeSetProb} from \refcoro{UnhitSet}, with $E=B(0,r)$. \qed\end{proof}
\subsection{Proof of \refcoro{CoverTimeLDP}}
\begin{proof} Recall that $\set{\rho_{\rm in}(t) > \epsilon} = \set{\mathcal{C}_\epsilon > t}$, so that setting $t=u\psi_d(\epsilon)$, $r=\epsilon/\varphi_d(u\psi_d(\epsilon))$ rewrites the event $\set{\mathcal{C}_\epsilon>u\psi_d(\epsilon)}$ as $\set{\rho_{\rm in}(t)>r\varphi_d(t)}$. By \eqref{phipsiAsymptotics}, $r\to (u/d)^{1/(d-2)}$ as $\epsilon\downarrow 0$. It follows that $\mathbb{P}(\mathcal{C}_\epsilon>u\psi_d(\epsilon))=t^{-I_d^{\rm inradius}((u/d)^{1/(d-2)})+o(1)}$ for $u>d$, since $r\mapsto I_d^{\rm inradius}(r)$ is continuous on $(1,\infty)$. Noting that $t=\epsilon^{-(d-2)+o(1)}$, this last expression is $\epsilon^{I_d^{\rm cover}(u)+o(1)}$. A similar argument proves \eqref{SmallCoverTime}. Since $u\mapsto I_d^{\rm cover}(u)$ is continuous and strictly increasing on $\cointerval{d,\infty}$, with $I_d^{\rm cover}(v)=\infty$ otherwise, these two estimates complete the proof. \qed\end{proof}
\appendix
\section{Hitting probabilities for excursions} \lbappendix{ExcursionProofs}
\subsection{Unconditioned excursions: proof of \reflemma{CapacityAndHittingDistant}} \lbsubsect{CapaHittingDistantProof}
\begin{proof} Since $R<\tfrac{1}{2}$, we may consider $x,\xi',\xi,W(t)$ to have values in $\mathbb{R}^d$ instead of $\mathbb{T}^d$. Furthermore, w.l.o.g.\ we may assume that $x=0$.
We first remove the effect of conditioning on the exit point $\xi\in\partial B(0,R)$. Define $T=\sup\set{t<\zeta\colon\,d(0,W(t))\leq r}$ to be the last exit time from $B(0,r)$ before time $\zeta$; note that $E\cap W[0,\zeta_R]=E\cap W[0,T]$. Let $\tilde{r}\in(r,R)$ and define $\tilde{\tau}=\inf\set{t>T\colon\,d(0,W(t))=\tilde{r}}$ to be the first hitting time of $\partial B(0,\tilde{r})$ after time $T$.
Under $\mathbb{P}_{\xi'}$ (i.e., without conditioning on the exit point $W(\zeta_R)$) we can express $(W(t))_{0\leq t\leq\zeta_R}$ as the initial segment $(W(t))_{0\leq t\leq\tilde{\tau}}$ followed by a Brownian motion, conditionally independent given $W(\tilde{\tau})$, started at $\tilde{\xi}=W(\tilde{\tau})$ and conditioned to exit $B(0,R)$ before hitting $B(0,r)$. Let $f_{\tilde{r},R}(\tilde{\xi},\cdot)$ denote the density, with respect to the uniform measure $\sigma_R$ on $\partial B(0,R)$, of the first hitting point $W(\zeta_R)$ on $\partial B(0,R)$ for this conditioned Brownian motion. Then for $S\subset\partial B(0,R)$ measurable, we have \begin{equation} \label{PxiprimetautildeFormula} \mathbb{P}_{\xi'} \! \left( E\cap W[0,\zeta_R]\neq\varnothing, W(\zeta_R)\in S \right) =\mathbb{E}_{\xi'} \! \left( \indicator{E\cap W[0,T]\neq\varnothing} \int_S f_{\tilde{r},R}(W(\tilde{\tau}),\xi) d\sigma_R(\xi) \right). \end{equation} From \eqref{PxiprimetautildeFormula} it follows that the conditioned measure $\mathbb{P}_{\xi',\xi}$ satisfies \begin{equation} \label{PxiprimexiFormula} \mathbb{P}_{\xi',\xi}(E\cap W[0,\zeta_R]\neq\varnothing) = \frac{\mathbb{E}_{\xi'} \! \left( \indicator{E\cap W[0,T]\neq\varnothing} f_{\tilde{r},R}(W({\tilde{\tau}}),\xi) \right)}{\mathbb{E}_{\xi'} \! \left( f_{\tilde{r},R}(W({\tilde{\tau}}),\xi) \right)}. \end{equation} (More precisely, we would conclude \eqref{PxiprimexiFormula} for $\sigma_R$-a.e.\ $\xi$, but by a continuity argument we can take \eqref{PxiprimexiFormula} to hold for all $\xi$.)
Now choose $\tilde{r}$ in such a way that $R/\tilde{r}\to\infty, \tilde{r}/r\to\infty$, for instance, $\tilde{r}=\sqrt{rR}$. Since $R/\tilde{r}\to\infty$, we have $f_{\tilde{r},R} (\tilde{\xi},\xi)=1+o(1)$, uniformly in $\tilde{\xi},\xi$. Therefore \begin{align} \label{DifferenceOfHitting} \mathbb{P}_{\xi',\xi}(E\cap W[0,\zeta_R]\neq\varnothing) &= [1+o(1)]\,\mathbb{P}_{\xi'}(E\cap W[0,\zeta_R]\neq\varnothing) \notag\\ &= [1+o(1)]\left( \big. \mathbb{P}_{\xi'}(E\cap W\cointerval{0,\infty}\neq\varnothing) -\mathbb{P}_{\xi'}(E\cap W\cointerval{\zeta_R,\infty}\neq\varnothing) \right) \! . \end{align} By the Markov property, the last term in \eqref{DifferenceOfHitting} is the probability of hitting $E$ when starting from some point $W(\zeta_R)\in\partial B(0,R)$ (averaged over the value of $W(\zeta_R)$). Since $R/r\to\infty$, this will be shown to be an error term, and the proof will have been completed, once we show that \begin{equation} \label{HittingFromPoint} \mathbb{P}_{\xi'} \! \left( W\cointerval{0,\infty}\cap E\neq\varnothing \right) =\frac{\Capa E }{\kappa_d \, r^{d-2}}\,[1+o(1)] \qquad\text{as }r/\epsilon\to\infty. \end{equation} Note that \eqref{HittingFromPoint} is essentially the limit in \eqref{CapacityAndHittingSimple}, taken uniformly over the choice of $E\subset B(0,\epsilon)$.
To show \eqref{HittingFromPoint}, let $g_\epsilon(\xi',\cdot)$ denote the density, with respect to the uniform measure $\sigma_\epsilon$ on $\partial B(0,\epsilon)$, of the first hitting point of $\partial B(0,\epsilon)$ for a Brownian motion started at $\xi'$ and conditioned to hit $B(0,\epsilon)$. Then \begin{equation} \label{HittingFromPointIntegral} \mathbb{P}_{\xi'} \! \left( W\cointerval{0,\infty}\cap E\neq\varnothing \right) = \frac{\epsilon^{d-2}}{r^{d-2}}\int_{\partial B(0,\epsilon)} \mathbb{P}_y \! \left( W\cointerval{0,\infty}\cap E\neq\varnothing \right) g_\epsilon(\xi',y) d\sigma_\epsilon(y). \end{equation} Since $r/\epsilon\to\infty$, we have $g_\epsilon(\xi',y)\to 1$ uniformly in $\xi',y$. Hence \eqref{HittingFromPoint} follows from \eqref{HittingFromPointIntegral} and \eqref{CapacityAndHittingUniform}. \qed\end{proof}
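The prefactor $\epsilon^{d-2}/r^{d-2}$ in \eqref{HittingFromPointIntegral} is the classical small-ball hitting probability for transient Brownian motion; for completeness, a sketch of the standard derivation for $d\geq 3$:

```latex
% h(z) := (\epsilon/|z|)^{d-2} is harmonic on \mathbb{R}^d \setminus \{0\},
% equals 1 on \partial B(0,\epsilon), and tends to 0 at infinity. Optional
% stopping applied to the bounded martingale h(W(t)) gives, for |\xi'| = r,
\begin{align*}
\mathbb{P}_{\xi'}\big( W\cointerval{0,\infty}\cap B(0,\epsilon) \neq \varnothing \big)
= h(\xi') = \frac{\epsilon^{d-2}}{r^{d-2}} .
\end{align*}
```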
\subsection{Excursions avoiding an obstacle: proof of \reflemma{HitWhileAvoiding}} \lbsubsect{HitWhileAvoidingProof}
\begin{proof} It suffices to bound from below \begin{equation} \label{UnconditionalHitWhileAvoiding} \mathbb{P}_{\xi',\xi} \! \left( W[0,\zeta_R]\text{ intersects $x+B(\varphi y,\tfrac{1}{2}\rho)$ but not }x+(\varphi E)_\rho \right), \end{equation} since conditioning on $(x+(\varphi E)_\rho)\cap W[0,\zeta_R]=\varnothing$ can only increase the probability in \eqref{UnconditionalHitWhileAvoiding}. Moreover, as in the proof of \reflemma{CapacityAndHittingDistant}, we may replace $\mathbb{P}_{\xi',\xi}$ by $\mathbb{P}_{\xi'}$, using now that the densities $f_{\tilde{r},R}$ and $g_\epsilon$ are bounded away from $0$ and $\infty$ when $r\leq\tfrac{1}{2}R$.
Fix $E\in\mathcal{E}_c^\boxempty$, so that $E=\tfrac{1}{n}A$ for some $A\in\mathcal{A}^\boxempty\cap \mathcal{E}_c$ and $n\in\mathbb{N}$, and fix $\delta>0$ (we may assume that $\delta<1/(2n)$). By assumption, $E$ is bounded, say $E\subset B(0,a)$. By adjusting $\epsilon$, we may assume that $\rho/\varphi<a$ (so that $(\varphi E)_\rho\subset B(0,2a\varphi)$) and $r>4a\varphi$. We distinguish between three cases:
\noindent $\bullet$ $y\in B(0,3a) \setminus E_\delta$. Consider $w\in\mathbb{Z}^d\setminus A$. Because $A\in\mathcal{E}_c$, there is a finite path of open cubes with centres $w_0,w_1,\dots,w_k\in\mathbb{Z}^d$ such that $w_0\in\mathbb{Z}^d\setminus B(0,3an)$, $w_k=w$, $d(w_{j-1},w_j)=1$ and $\interior{ \cup_{j=0}^k (w_j+[-\tfrac{1}{2},\tfrac{1}{2}]^d)}\cap A=\varnothing$. By compactness, the length $k$ of such paths may be taken to be uniformly bounded. Hence, if $\rho/\varphi <\delta/2$, then, given $\xi''\in\partial B(x,3a\varphi)$, there is a path $\Gamma\subset B(x,3a\varphi)$ from $\xi''$ to $x+\varphi y$ consisting of a finite number of line segments, each of length at most $\varphi$, such that $\Gamma_{\delta\varphi/2}\cap (x+(\varphi E)_\rho)=\Gamma_{\delta\varphi/2} \cap (x+\varphi (E_{\rho/\varphi}))=\varnothing$. Moreover, the number of line segments can be taken to be bounded uniformly in $y$ and $\xi''$. In fact, $\Gamma$ can be chosen as the union of line segments between points $x+\varphi w_0/n,\dotsc, x+\varphi w_k/n$ as above, together with a bounded number of line segments to join $\xi''$ to $x+\varphi w_0/n$ in $B(x,3a\varphi) \setminus B(x,2a\varphi)$ and to join $x+\varphi w_k/n$ to $x+\varphi y$ in the cube $x+(\varphi/n)(w+[-\tfrac{1}{2}, \tfrac{1}{2}]^d)$ containing $x+\varphi y$ (see \reffig{PathGamma}).
\begin{figure}
\caption{The path $\Gamma$ reaching $x+\varphi y$. The sets $\Gamma_{\delta\varphi/2}$ and $x+(\varphi E)_\rho=x+\varphi(E_{\rho/\varphi})$ are depicted for the worst-case scenario where the parameters $\rho/\varphi<\delta/2<1/4n$ are equal.}
\end{figure}
From $\xi'\in\partial B(x,r)$, the Brownian path reaches $\partial B(x,3a\varphi)$ before $\partial B(x,R)$ with probability $(r^{-(d-2)}-R^{-(d-2)})/((3a\varphi)^{-(d-2)}-R^{-(d-2)})$. By our assumptions, this is at least $c_1 (\varphi/r)^{d-2}$ for some $c_1>0$. Uniformly in the first hitting point $\xi''$ of $\partial B(x,3a\varphi)$, there is a positive probability of hitting $\partial B(x+\varphi y,\tfrac{1}{4}\delta\varphi)$ via $\Gamma_{\delta\varphi/2}$ before hitting $\partial B(x,4a\varphi)$. The probability of next hitting $\partial B(x+\varphi y, \tfrac{1}{2}\rho)$ before $\partial B(x+\varphi y,\tfrac{1}{2}\delta\varphi)$ is \begin{equation} [(\tfrac{1}{4}\delta\varphi)^{-(d-2)}-(\tfrac{1}{2}\delta\varphi)^{-(d-2)}] /[(\tfrac{1}{2}\rho)^{-(d-2)}-(\tfrac{1}{2}\delta\varphi)^{-(d-2)}], \end{equation} which is at least $c_2(\rho/\varphi)^{d-2}$ for some $c_2>0$. Thereafter there is a positive probability of returning to $\partial B(x,r)$ without hitting $x+(\varphi E)_\rho$, via $\Gamma_{\delta\varphi/2}$. Combining these probabilities gives the required bound.
\noindent $\bullet$ $y\in E_\delta\setminus E_{\rho/\varphi}$. We have $y\in\tfrac{1}{n}(w+[-\tfrac{1}{2}, \tfrac{1}{2}]^d)$ for some $w\in\mathbb{Z}^d$. Write $C_\theta(y,\tfrac{1}{n}w)$ for the cone with vertex $y$, central angle $\theta$, and axis the ray from $y$ to $\tfrac{1}{n}w$. We can choose the angle $\theta$ and a constant $c_3>0$ small enough (in a manner depending only on $d$) so that $C_\theta(y,\tfrac{1}{n}w) \cap E_{\rho/\varphi} \cap B(y,(1+c_3) d(y,\tfrac{1}{n}w))=\varnothing$. With $\theta$ and $c_3$ fixed, we can choose $c_4>0$ so that every point of $B(\tfrac{1}{n}w,c_4)$ is at distance at least $c_5>0$ from $\partial C_\theta(y, \tfrac{1}{n}w)$ and from $\partial B(y,(1+c_3)d(y,\tfrac{1}{n}w))$ (see \reffig{ConeInCube}).
\begin{figure}
\caption{Cones $C_\theta(y,\tfrac{1}{n}w)$ and parts of balls $B(y,\rho/(2\varphi))$ and $B(y,(1+c_3)d(y,\tfrac{1}{n}w))$ for three choices of $y$. The outer square is the cube $\tfrac{1}{n}(w+[-\tfrac{1}{2},\tfrac{1}{2}]^d)$ containing $y$. The dashed line shows the greatest possible extent of $E_{\rho/\varphi}$. At least one face of the cube is not contained in $E$, resulting in a conduit to the outside of the cube (dotted lines). The ball $B(\tfrac{1}{n}w,c_4)$ in the centre is uniformly bounded away from the sides of the cones and from the other balls. On the left the parameters $\rho/\varphi <1/4n$ are depicted as equal. On the right is the more relevant situation $\rho/\varphi \ll 1/(4n)$.}
\end{figure}
Under these conditions, there is a probability at least $c_6(\rho/\varphi)^\alpha$ for a Brownian path started from a point of $\partial B(x+\varphi w/n,c_4 \varphi)$ to reach $\partial B(x+\varphi y,\tfrac{1}{2}\rho)$ before hitting $\partial B(x+\varphi y,\varphi (1+c_3) d(y,w))\cup \partial(x+\varphi C_\theta(y,w))$, and then to reach $\partial B(x+\varphi y, \varphi d(y,w))$ before hitting $\partial(x+\varphi C_\theta(y,w))$.\footnote{This follows from hitting estimates for Brownian motion in a cone. For instance, via the notation of Burkholder~\cite[pp.\ 192--193]{Burk1977}, the harmonic functions on $C(0,z_0)$ given by $u_1(z)=r_0^{p+d-2}(\abs{z}^{-(p+d-2)}-\abs{z}^p) h(\vartheta)$ and $u_2(z)=\abs{z}^p h(\vartheta)$ (with $\vartheta$ the angle between $z$ and $z_0$ and the value $p>0$ chosen so that $u_1(z)=u_2(z)=0$ on $\partial C(0,z_0)$) are lower bounds for the probabilities, starting from $z\in C(0,z_0)$, of hitting $\partial B(0,r_0)$ before $\partial B(0,1) \cup\partial C(0,z_0)$ and of hitting $\partial B(0,1)$ before $\partial C(0,z_0)$, respectively.} The rest of the proof proceeds as in the previous case.
\noindent $\bullet$ $y\in B(0,r/\varphi)\setminus B(0,3a)$. Let $b=d(0,y)\in[3a,r/\varphi]$. The probability that a Brownian path started from $\xi'$ first hits $\partial B(x,b\varphi)$ without hitting $\partial B(x,R)$, then hits $\partial B(x+\varphi y,\tfrac{1}{12}b\varphi)$ without hitting $\partial B(x,\tfrac{2}{3}b\varphi)$, then hits $\partial B(x+\varphi y, \tfrac{1}{2}\rho)$ before hitting $\partial B(x+\varphi y,\tfrac{1}{6}b\varphi)$, and finally exits $B(x,R)$ without hitting $\partial B(x,\tfrac{2}{3}b\varphi)$, is at least $[c_7 (b\varphi/r)^{d-2}] [c_8][c_9(\rho/(b\varphi))^{d-2}][c_{10}]$. Since $x+(\varphi E)_\rho \subset B(x,2a\varphi)\subset B(x,\tfrac{2}{3}b\varphi)$, this is the required bound. \qed\end{proof}
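The radial hitting probabilities used in all three cases above derive from one annulus formula, obtained by optional stopping from the harmonic function $z\mapsto |z|^{-(d-2)}$: for $s<b<R$ and a Brownian motion started at distance $b$ from a centre $x$,

```latex
\begin{align*}
\mathbb{P}\big( W \text{ hits } \partial B(x,s) \text{ before } \partial B(x,R) \big)
= \frac{b^{-(d-2)} - R^{-(d-2)}}{s^{-(d-2)} - R^{-(d-2)}} .
\end{align*}
% For example, b = r and s = 3a\varphi give the first factor in the first
% case, and the formula applied around x + \varphi y produces the factors
% of order (\rho/\varphi)^{d-2}.
```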
\end{document} |
\begin{document}
\begin{center}{\bf Non-collision and collision properties of \\ Dyson's model in infinite dimension and other stochastic dynamics
whose equilibrium states are determinantal random point fields } \end{center}
\begin{center}{\sf Hirofumi Osada}\\ Dedicated to Professor Tokuzo Shiga on his 60th birthday \end{center}
\begin{center}Published in \\ Advanced Studies in Pure Mathematics 39, 2004\\ Stochastic Analysis on Large Scale Interacting Systems\\ pp.~325--343 \end{center}
\begin{abstract} Dyson's model on interacting Brownian particles is a stochastic dynamics consisting of an infinite number of particles moving in $ \R $ with a logarithmic pair interaction potential. For this model we will prove that no pair of particles ever collides.
The equilibrium state of this dynamics is a determinantal random point field with the sine kernel. We prove that, for stochastic dynamics given by Dirichlet forms whose equilibrium states are determinantal random point fields, the particles never collide if the kernel of the determinantal random point field is locally Lipschitz continuous, and we give examples of collision when the kernel is only H\"{o}lder continuous.
In addition we construct infinite volume dynamics (a kind of infinite-dimensional diffusion) whose equilibrium states are determinantal random point fields. The last result is partial in the sense that we merely construct a diffusion associated with the {\em maximal closable part} of the {\em canonical} pre-Dirichlet forms for given determinantal random point fields as equilibrium states. Proving the closability of the canonical pre-Dirichlet forms themselves is still an open problem. We prove that these dynamics are the strong resolvent limits of finite volume dynamics. \end{abstract}
\numberwithin{equation}{section} \newcounter{Const} \setcounter{Const}{0}
\def\refstepcounter{Const}c_{\theConst}{\refstepcounter{Const}c_{\theConst}}
\def\cref#1{c_{\ref{#1}}}
\theoremstyle{theorem} \newtheorem{thm}{Theorem}[section]
\theoremstyle{plain} \newtheorem{lem}{Lemma}[section] \newtheorem{cor}[lem]{Corollary} \newtheorem{prop}[lem]{Proposition}
\theoremstyle{definition} \newtheorem{dfn}[lem]{Definition} \newtheorem{ntt}[lem]{Notation} \newtheorem{cdn}[lem]{Condition} \newtheorem{ex}[lem]{Example} \newtheorem{prob}[lem]{Problem}
\theoremstyle{remark} \newtheorem{rem}[lem]{Remark}
\section{Introduction} \label{s:1} Dyson's model on interacting Brownian particles in infinite dimension is an infinite-dimensional diffusion process $ \{ (X_t ^{i} )_{i \in \N}\} $ formally given by the following stochastic differential equation (SDE): \begin{align}\label{:11}& d X_t^i = dB_t^i + \sum_{j=1,\ j\not= i} ^{\infty}\frac{1}{X_t^i - X_t^j} \, dt \quad (i = 1,2,3,\ldots)
, \end{align} where $ \{ B_t ^{i} \} $ is an infinite family of independent one-dimensional Brownian motions. The corresponding unlabeled dynamics is \begin{align}\label{:12}& \mathbb{X}_t = \sum _{i=1}^{\infty } \delta _{X_t^i} . \end{align} Here $ \delta _{\cdot } $ denotes the point mass at $ \cdot $. By definition $ \mathbb{X}_t $ is a $ \Theta $-valued diffusion, where $ \Theta $ is the set consisting of configurations on $ \R $; that is, \begin{align}&\label{:13}
\Theta = \{ \theta = \sum _{i} \delta _{x_i}\, ;\, x_i \in \R,\, \theta (\{ | x | \le r\}) < \infty \text{ for all } r \in \R\} .\end{align} We regard $ \Theta $ as a complete, separable metric space with the vague topology.
In \cite{sp.2} Spohn constructed an unlabeled dynamics \eqref{:12} in the sense of a Markovian semigroup on $ \Lm $. Here $ \mu $ is a probability measure on $ (\Theta , \mathfrak{B}(\Theta ) ) $ whose correlation functions are generated by the sine kernel \begin{align}\label{:14}& \mathsf{K} _{\sin} (x)= \frac{\sin (\pi \bar{\rho} x )}{\pi x } . \end{align} (See \sref{s:2}.) Here $ 0 < \bar{\rho} \le 1 $ is a constant related to the {\em density} of the particles. Spohn indeed proved the closability of the non-negative bilinear form $ (\mathcal{E}, \di ) $ on $ \Lm $ \begin{align} \label{:15} & \mathcal{E}(\mathfrak{f},\mathfrak{g}) = \int_\Theta \mathbb{D}[\mathfrak{f},\mathfrak{g}](\theta )d\mu, \\ & \notag \di=\{\ff \in \dii \cap \Lm ; \mathcal{E}(\mathfrak{f},\mathfrak{f})
<\infty \}.
\end{align} Here $ \mathbb{D} $ is the square field given by \eqref{:25} and $ \dii $ is the set of the local smooth functions on $ \Theta $ (see \sref{s:3} for the definition). The Markovian semi-group is given by the Dirichlet form that is the closure $ (\mathcal{E},\dom ) $ of this closable form on $ \Lm $.
The measure $ \mu $ is an equilibrium state of \eqref{:12}, whose formal Hamiltonian $ \mathcal{H} = \mathcal{H} (\theta ) $ is given by $ ( \theta = \sum _{i} \delta _{x_i}) $ \begin{align}& \label{:16}
\mathcal{H}(\theta ) = \sum_{i\not=j} - 2 \log |x_i - x_j| ,\end{align} which is the reason we regard Spohn's Markovian semigroup as corresponding to the dynamics formally given by the SDEs \eqref{:11} and \eqref{:12}.
We remark that the existence of an $ L^2 $-Markovian semigroup does not, in general, imply the existence of an associated {\em diffusion}. Here a diffusion means (a family of distributions of) a strong Markov process with continuous sample paths starting from each $ \theta \in \Theta $.
In \cite{o.dfa} it was proved that there exists a diffusion $ \diffusion $ with state space $ \Theta $ associated with the Markovian semigroup above. This construction allows us to investigate the {\em trajectory-wise} properties of the dynamics. In the present paper we concentrate on the collision property of the diffusion. The problem we are interested in is the following:
{\sf Does there exist a pair of particles $ (X_t^i,X_t^j) $ that collide with each other at some time $ 0 < t < \infty $?}
We say that for a diffusion on $ \Theta $ {\em the non-collision occurs} if the above property does {\em not} hold, and that {\em the collision occurs} otherwise.
If the number of particles is finite, then the non-collision should occur, at least at an intuitive level, because the drifts $ \frac{1}{x_i-x_j} $ have a strong repulsive effect. When the number of particles is infinite, the non-collision property is non-trivial because the interaction potential is long range and non-integrable. We will prove that the non-collision property holds for Dyson's model in infinite dimension.
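To see why the finite-particle intuition is plausible, consider just two particles and ignore the drift coming from all the others (so this is only a heuristic, not part of the proofs below); the difference process then reduces to a three-dimensional Bessel process, which never hits the origin:

```latex
% From \eqref{:11} with two particles, Y_t := X_t^1 - X_t^2 formally satisfies
\begin{align*}
dY_t = dB_t^1 - dB_t^2 + \frac{2}{Y_t}\, dt ,
\qquad\text{so that } Z_t := Y_t/\sqrt{2} \text{ obeys }\quad
dZ_t = d\beta_t + \frac{1}{Z_t}\, dt
\end{align*}
% for a standard one-dimensional Brownian motion \beta. Comparing with the
% Bessel SDE dZ = d\beta + ((\delta-1)/(2Z)) dt identifies Z as a Bessel
% process of dimension \delta = 3, which almost surely never reaches 0
% from a positive starting point.
```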
Since the sine kernel measure is the prototype of determinantal random point fields, it is natural to ask whether such a non-collision property is universal for stochastic dynamics given by the Dirichlet forms \eqref{:15} with the measure $ \mu $ replaced by a general determinantal random point field. We will prove that, if the kernel of the determinantal random point field (see \eqref{:23}) is locally Lipschitz continuous, then the non-collision always occurs. In addition, we give an example of a determinantal random point field with H\"{o}lder continuous kernel for which the collision occurs.
The second problem we address in this paper is the following:
\textsf{Do there exist $ \Theta $-valued diffusions associated with the Dirichlet forms $ (\mathcal{E},\dom ) $ on $ \Lm $ when $ \mu $ is a determinantal random point field?}
We give a partial answer for this in \tref{l:28}.
The organization of the paper is as follows: In \sref{s:2} we state the main theorems. In \sref{s:3} we prepare some notions on configuration spaces. In \sref{s:4} we prove \tref{l:23} and \tref{l:25}. In \sref{s:5} we prove \pref{l:27} and \tref{l:26}. In \sref{s:6} we prove \tref{l:28}. Our method of proving \tref{l:20} can be applied to Gibbs measures, so we prove the non-collision property for Gibbs measures in \sref{s:7}.
\section{Set up and the main result} \label{s:2}
Let $ \mathsf{E} \subset \Rd $ be a closed set which is the closure of a connected open set in $ \Rd $ with smooth boundary. Although we will mainly treat the case $ \mathsf{E}= \R $, we give a general framework here by following the line of \cite{so-}. Let $ \Theta $ denote the set of configurations on $ \mathsf{E} $, which is defined similarly as \eqref{:13} by replacing $ \R $ with $ \mathsf{E} $.
A probability measure on $ (\Theta ,\mathcal{B}(\Theta ) ) $ is called a random point field on $ \mathsf{E} $. Let $ \mu $ be a random point field on $ \mathsf{E} $. A non-negative, permutation invariant function $ \map{\rho _n}{\mathsf{E}^n }{\R} $ is called an $ n $-correlation function of $ \mu $ if for any measurable sets $ \{ A_1,\ldots,A_m \} $ and natural numbers $ \{ k_1,\ldots,k_m \} $ such that $ k_1+\cdots + k_m = n $ the following holds: \begin{align*}& \int _{A_1^{k_1}\ts \cdots \ts A_{m}^{k_m}} \rho _n (x_1,\ldots,x_n) dx_1\cdots dx_n = \int _{\Theta } \prod _{i=1}^m \frac{\theta (A_i) ! }{(\theta (A_i) - k_i ) ! } d\mu .\end{align*} It is known (\cite{so-}, \cite{le-1}, \cite{le-2}) that, if a family of non-negative, permutation invariant functions $ \{ \rho _n \} $ satisfies \begin{align}\label{:21}& \sum_{k=1}^{\infty} \left\{ \frac{1}{(k+j)!} \int _{A^{k+j}} \rho _{k+j} \ dx_1\cdots dx_{k+j} \right\} ^{-1/k} = \infty , \end{align} then there exists a unique probability measure (random point field) $ \mu $ on $ \mathsf{E} $ whose correlation functions equal $\{ \rho _n \} $.
Let $ \map{K}{L^2(\mathsf{E}, dx )}{L^2(\mathsf{E}, dx )} $ be a non-negative definite operator which is locally trace class; namely \begin{align}\label{:22}& 0 \le (K f , f ) _{L^2(\mathsf{E}, dx )},\quad \\ \notag & \mathrm{Tr} (1_B K 1_B ) < \infty \quad \text{ for every bounded Borel set } B .\end{align} We assume that $ K $ has a continuous kernel denoted by $ \mathsf{K} = \mathsf{K} (x,y) $. Without this assumption one can develop a theory of determinantal random point fields (see \cite{so-}, \cite{shirai-takahashi}); we assume continuity for the sake of simplicity.
\begin{dfn}\label{dfn:1} A probability measure $ \mu $ on $ \Theta $ is said to be a determinantal (or fermion) random point field with kernel $ \mathsf{K} $ if its correlation functions $ \rho _n $ are given by \begin{align}\label{:23}& \rho _n (x_1,\ldots,x_n) = \det (\mathsf{K} (x_i,x_j)_{1\le i,j \le n }) \end{align} for all $ n \in \N $. \end{dfn}
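As a numerical sketch of the definition above (an illustration, not taken from the argument): with the sine kernel of \eqref{:14} and $ \bar{\rho} = 1 $, the determinant formula \eqref{:23} gives $ \rho_1 \equiv \bar{\rho} $ and exhibits the characteristic repulsion $ \rho_2(x,y) \to 0 $ as $ y \to x $. The test points are arbitrary choices.

```python
import numpy as np

# rho_n(x_1, ..., x_n) = det( K_sin(x_i - x_j) )_{1 <= i,j <= n}
# for the sine kernel with density rho_bar = 1 (illustrative values).

def sine_kernel(s, t, rho_bar=1.0):
    """K_sin(s - t) = sin(pi rho_bar (s - t)) / (pi (s - t)); rho_bar on the diagonal."""
    d = np.asarray(s - t, dtype=float)
    near0 = np.isclose(d, 0.0)
    safe = np.where(near0, 1.0, d)  # avoid 0/0 on the diagonal
    return np.where(near0, rho_bar, np.sin(np.pi * rho_bar * safe) / (np.pi * safe))

def correlation(points):
    """n-correlation function as the determinant of the kernel Gram matrix."""
    x = np.asarray(points, dtype=float)
    return float(np.linalg.det(sine_kernel(x[:, None], x[None, :])))

rho1 = correlation([0.3])                 # = rho_bar
rho2_far = correlation([0.0, 10.5])       # distant points: close to rho_bar**2
rho2_coincide = correlation([0.0, 1e-4])  # repulsion: ~ 0 near a collision
```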
We quote: \begin{lem}[Theorem 3 in \cite{so-}] \label{l:21} Assume $ \mathsf{K}(x,y) = \overline{\mathsf{K}(y,x) } $ and $ 0 \le K \le 1 $. Then $ K $ determines a unique determinantal random point field $ \mu $.
\end{lem} We give examples of determinantal random point fields. The first example is the stationary measure of Dyson's model in infinite dimension, and the first three examples are related to the semicircle law for the empirical distribution of eigenvalues of random matrices. We refer to \cite{so-} for details. \begin{ex}[sine kernel]\label{d:21} Let $ \mathsf{K} _{\sin} $ and $ \bar{\rho} $ be as in \eqref{:14}. Then \begin{align}\label{:2m1}&
\mathsf{K} _{\sin} (t) = \frac{1}{2\pi } \int _{|k| \le \pi \bar{\rho} } e ^{\sqrt{-1} k t} \, dk . \end{align} Hence $ \mathsf{K} _{\sin} $ is a function of positive type and satisfies the assumptions in \lref{l:21}. Let $ \hat{\mu }^{ N } $ denote the probability measure on $ \R^{ N } $ defined by \begin{align}\label{:2m4}&
\hat{\mu }^N = \frac{1}{Z^{N}}
e ^{ - \sum _{1 \le i < j \le N } ( - 2 \log |x_i-x_j| ) } e^{ - \lambda ^2 _{N}\sum_{i=1}^{ N } x _i ^2 }
\, dx_1 \cdots dx_{ N }
, \end{align} where $ \lambda _{N} = 2 (\pi \bar{\rho })^3 / 3 N ^2 $ and $ Z^{N} $ is the normalization constant. Set $ \mu ^{ N } = \hat{\mu }^{ N } \circ (\xi ^{ N })^{-1}$, where $ \map{\xi ^{ N }}{\R^{ N }}{\Theta } $ is the map defined by $ \xi ^{ N }(x_1,\ldots,x_N) = \sum _{i=1}^{ N } \delta _{x_i}$. Let $ \rho _{ n } ^{ N } $ denote the $ n $-correlation function of $ \mu ^{ N } $, and let $ \rho _n $ denote the $ n $-correlation function of the determinantal random point field $ \mu $ given by $ \mathsf{K} _{\sin} $. Then it is known (\cite[Proposition 1]{sp.2}, \cite{so-}) that for all $ n = 1,2,\ldots $ \begin{align}\label{:2m5}& \lim_{ N \to\infty} \rho _{ n } ^{ N } (x_1,\ldots,x_n) = \rho _n (x_1,\ldots,x_n) \quad \text{ for all }(x_1,\ldots,x_n ) . \end{align} In this sense the measure $ \mu $ is associated with the Hamiltonian $ \mathcal{H} $
in \eqref{:16} coming from the log potential $ -2\log |x| $.
\end{ex} \begin{ex}[Airy kernel]\label{d:22} Let $ \mathsf{E}=\R $ and $$ \mathsf{K}(x,y)= \frac{\mathcal{A}_i(x) \cdot \mathcal{A}_i'(y) - \mathcal{A}_i(y) \cdot \mathcal{A}_i'(x) }{x-y} . $$ Here $ \mathcal{A}_i $ denotes the Airy function. \end{ex} \begin{ex}[Bessel kernel]\label{d:23} Let $ \mathsf{E} = [0, \infty) $ and \begin{align*}& \mathsf{K} (x,y) = \frac{ J_{\alpha}(\sqrt{x})\cdot \sqrt{y}\cdot J_{\alpha}'(\sqrt{y}) - J_{\alpha}(\sqrt{y})\cdot \sqrt{x}\cdot J_{\alpha}'(\sqrt{x}) } {2(x-y)} .\end{align*} Here $ J_{\alpha } $ is the Bessel function of order $ \alpha $.
\end{ex}
\begin{ex}\label{d:24} Let $ \mathsf{E} = \R $ and $ \mathsf{K}(x,y) = \mathsf{m}(x) \mathsf{k}(x-y) \mathsf{m}(y) $, where $ \map{\mathsf{k}}{\R }{\R } $ is a non-negative, continuous, {\em even} function that is convex on $ [0, \infty) $ with $ \mathsf{k} (0) \le 1 $, and $ \map{\mathsf{m}}{\R }{\R } $ is a non-negative continuous function such that $ \int _{\R } \mathsf{m}(t) dt < \infty $, $ \mathsf{m} (x) \le 1 $ for all $ x $, and $ \mathsf{m} (x) > 0 $ for some $ x $. Then $ \mathsf{K} $ satisfies the assumptions in \lref{l:21}.
Indeed, it is well known that $ \mathsf{k} $ is a function of positive type (see p.~187 of \cite{don}, for example), that is, the Fourier transform of a finite positive measure. By assumption $ 0 \le \mathsf{K}(x,y) \le 1 $, which implies $ 0 \le K \le 1 $. Since $ \int \mathsf{K} (x,x) dx < \infty $, $ K $ is of trace class. \end{ex}
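A discretized sanity check of this example (the concrete choices $ \mathsf{k}(t) = e^{-|t|} $ and $ \mathsf{m}(x) = e^{-x^2}/\sqrt{2} $ are hypothetical instances of the hypotheses, not from the text; the factor $ 1/\sqrt{2} $ is inserted so that the operator bound $ K \le 1 $ is guaranteed, since the convolution operator by $ e^{-|t|} $ has norm $ \sup_\omega 2/(1+\omega^2) = 2 $): the Nystr\"om eigenvalues of $ K $ should lie in $ [0,1] $ and the trace should approximate $ \int \mathsf{m}(x)^2 dx $.

```python
import math
import numpy as np

# Discretized check of the example above with the hypothetical choices
#   k(t) = exp(-|t|)            (even, convex on [0, inf), k(0) = 1),
#   m(x) = exp(-x**2)/sqrt(2)   (continuous, 0 < m <= 1, integrable).
L, n = 12.0, 500
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
m = np.exp(-x ** 2) / math.sqrt(2.0)
K = m[:, None] * np.exp(-np.abs(x[:, None] - x[None, :])) * m[None, :]

eig = np.linalg.eigvalsh(dx * K)        # Nystrom approximation of spec(K)
trace = float(np.sum(np.diag(K)) * dx)  # approximates Tr K = int m(x)^2 dx
```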
Let $ \mathsf{A}$ denote the subset of $ \Theta $ defined by \begin{align}\label{:24}& \mathsf{A}= \{ \theta \in \Theta ;\ \theta (\{ x \} ) \ge 2 \quad \text{ for some } x \in \mathsf{E} \}
. \end{align} Thus $ \mathsf{A}$ is the set of configurations with collisions (multiple points). We are interested in how large the set $ \mathsf{A} $ is. Of course $ \mu ( \mathsf{A}) = 0 $, because the 2-correlation function is locally integrable. We study $ \mathsf{A}$ more closely from the point of view of stochastic dynamics; namely, we measure $ \mathsf{A}$ by using a capacity.
To introduce the capacity we next consider a bilinear form related to the given probability measure $ \mu $. Let $\dii $ be the set of all local, smooth functions on $\Theta $ defined in \sref{s:3}. For $\ff ,\mathfrak{g} \in\dii $ we define $\map{\mathbb{D}[\mathfrak{f},\mathfrak{g}]}{\Theta}{\R}$ by \begin{align} \label{:25} \mathbb{D}[\mathfrak{f},\mathfrak{g}](\theta )&= \frac{1}{2}\sum_{i} \frac{\partial f (\mathbf{x} )}{\partial x_{i}}\frac{\partial g (\mathbf{x} )}{\partial x_{i}} .\end{align} Here $ \theta = \sum _{i} \delta _{x_i}$, $ \mathbf{x} = (x_1,\ldots ) $, and $ f (\mathbf{x} )= f(x_1,\ldots )$ is the permutation invariant function such that $\ff (\theta )=f (x_1,x_2,\ldots )$ for all $\theta \in \Theta $; the function $ g $ is obtained from $ \mathfrak{g} $ similarly. Note that the right-hand side of \eqref{:25} is again permutation invariant; hence it can be regarded as a function of $ \theta = \sum _{i} \delta _{x_i}$. Such $ f $ and $ g $ are unique, so the function $\map{\mathbb{D}[\mathfrak{f},\mathfrak{g}]}{\Theta }{\R }$ is well defined.
For a probability measure $ \mu $ on $ \Theta $ we set, as before, \begin{align} \notag & \mathcal{E}(\mathfrak{f},\mathfrak{g}) = \int_\Theta \mathbb{D}[\mathfrak{f},\mathfrak{g}](\theta )d\mu, \\ & \notag \di=\{\ff \in \dii \cap \Lm ; \mathcal{E}(\mathfrak{f},\mathfrak{f})
<\infty \}.
\end{align} When $ (\mathcal{E}, \di ) $ is closable on $ \Lm $, we denote its closure by $ (\mathcal{E}, \dom ) $.
We are now ready to introduce a notion of capacity for a {\em pre}-Dirichlet space $ (\mathcal{E}, \di , \Lm ) $. Let $ \mathcal{O} $ denote the set consisting of all open sets in $ \Theta $. For $ O \in \mathcal{O} $ we set $ \mathcal{L}_{O} =\{ \mathfrak{f} \in \di \, ;\, \mathfrak{f} \ge 1 \ \mu \text{-a.e.\ on } {O} \} $ and \begin{align*}& \0 ({O}) = \begin{cases} \inf_{\mathfrak{f} \in \mathcal{L}_{O} } \left\{ \mathcal{E}(\mathfrak{f}, \mathfrak{f} ) + (\mathfrak{f}, \mathfrak{f} )_{\Lm } \right\} &\mathcal{L}_{O}\not= \emptyset \\ \infty &\mathcal{L}_{O}=\emptyset \end{cases} . \end{align*} For an arbitrary subset $ {A} \subset \Theta $ we set $ \0 ({A}) = \inf _{{A} \subset {O} \in \mathcal{O} } \0 ({O})$. The quantity $ \0 $ is called the 1-capacity of the pre-Dirichlet space $ (\mathcal{E}, \di , \Lm ) $.
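A one-dimensional caricature of this definition may be helpful (an illustrative computation on $ \R $, not the configuration-space capacity used below): for the pre-Dirichlet form $ \mathcal{E}(f,f) = \frac12 \int f'^2 dx $ on $ L^2(\R, dx) $, minimizing $ \mathcal{E}(f,f) + (f,f) $ over the trial family $ f_a(x) = e^{-a|x|} $ (which satisfies $ f_a(0) = 1 $) gives the value $ a/2 + 1/a $, minimized at $ a = \sqrt{2} $ with value $ \sqrt{2} $.

```python
import math

# One-dimensional caricature of the 1-capacity (an illustration on R, not the
# configuration-space capacity of the text): minimize
#   E(f, f) + (f, f) = int ( (1/2) f'^2 + f^2 ) dx
# over the trial family f_a(x) = exp(-a |x|), which equals a/2 + 1/a,
# minimized at a = sqrt(2) with value sqrt(2).

def functional(a, n=5000, L=20.0):
    """Midpoint approximation of E(f_a, f_a) + (f_a, f_a)_{L^2}."""
    h = L / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        f = math.exp(-a * x)
        df = -a * f
        total += (0.5 * df * df + f * f) * h
    return 2.0 * total  # f_a is even, so double the integral over [0, L]

best_val, best_a = min((functional(0.5 + 0.01 * i), 0.5 + 0.01 * i)
                       for i in range(300))
```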
We state the main theorem: \begin{thm}\label{l:20} Let $ \mu $ be a determinantal random point field with kernel $ \mathsf{K} $. Assume $ \mathsf{K} $ is locally Lipschitz continuous. Then \begin{align}\label{:26}& \0 ( \mathsf{A})=0 , \end{align} where $ \mathsf{A} $ is given by \eqref{:24}. \end{thm}
In \cite{o.dfa} the following was proved: \begin{lem}[Corollary 1 in \cite{o.dfa}]\label{l:22} Let $ \mu $ be a probability measure on $ \Theta $. Assume $ \mu $ has locally bounded correlation functions. Assume $ (\mathcal{E}, \di ) $ is closable on $ \Lm $. Then there exists a diffusion $ \diffusion $ associated with the Dirichlet space $ (\mathcal{E}, \dom , \Lm ) $. \end{lem}
Combining this with \tref{l:20} we have \begin{thm} \label{l:23} Assume $ \mu $ satisfies the assumptions in \tref{l:20}. Assume $ (\mathcal{E}, \di ) $ is closable on $ \Lm $. Then a diffusion $ \diffusion $ associated with the Dirichlet space $ (\mathcal{E}, \dom , \Lm ) $ exists and satisfies \begin{align}\label{:27}& \mathsf{P}_{\theta}( \sigma _{\mathsf{A}} = \infty ) = 1 \quad \text{ for q.e.\ }\theta , \end{align} where $ \sigma _{\mathsf{A}} = \inf \{ t > 0\, ;\, \X _t \in \mathsf{A}\} $. \end{thm} We refer to \cite{fot} for q.e.\ (quasi everywhere) and related notions of Dirichlet form theory. We remark that, by definition, the capacity of a pre-Dirichlet form is greater than or equal to that of its closure. So \eqref{:27} is an immediate consequence of \tref{l:20} and the general theory of Dirichlet forms once $ (\mathcal{E}, \di ) $ is closable on $ \Lm $ and the resulting (quasi-)regular Dirichlet space $ (\mathcal{E}, \dom , \Lm ) $ exists.
To apply \tref{l:23} to Dyson's model we recall a result of Spohn. \begin{lem}[Proposition 4 in \cite{sp.2}]\label{l:24} Let $ \mu $ be the determinantal random point field with the sine kernel in \eref{d:21}. Then $ \ed $ is closable on $ \Lm $. \end{lem}
We say a diffusion $ \diffusion $ is Dyson's model in infinite dimension if it is associated with the Dirichlet space $ (\mathcal{E}, \dom , \Lm ) $ in \lref{l:24}. Collecting these results, we conclude: \begin{thm} \label{l:25} Collisions never occur in Dyson's model in infinite dimension; that is, \eqref{:27} holds. \end{thm}
The assumption of the local Lipschitz continuity of the kernel $ \mathsf{K} $ is crucial; we next give an example with collisions in which $ \mathsf{K} $ is merely H\"{o}lder continuous. We first prepare: \begin{prop} \label{l:27} Assume $ K $ is of trace class. Then $ (\mathcal{E}, \di ) $ is closable on $ \Lm $. \end{prop}
\begin{thm} \label{l:26} Let $ \mathsf{K} (x,y) = \mathsf{m}(x) \mathsf{k}(x-y) \mathsf{m}(y) $ be as in \eref{d:24}. Let $ \alpha $ be a constant such that \begin{align}\label{:27a}& 0 < \alpha < 1 .\end{align} Assume $ \mathsf{m} $ and $ \mathsf{k} $ are continuous and there exist positive constants $ \refstepcounter{Const}c_{\theConst} \label{;21} $ and $ \refstepcounter{Const}c_{\theConst} \label{;22} $ such that \begin{align}\label{:27b}& \cref{;21}t ^{\alpha} \le \mathsf{k} (0) - \mathsf{k}(t) \le \cref{;22}t ^{\alpha} \quad \text{ for } 0 \le t \le 1. \end{align} Then $ (\mathcal{E}, \di , \Lm ) $ is closable and the associated diffusion satisfies \begin{align}\label{:27c}& \mathsf{P}_{\theta}(\sigma _{\mathsf{A}} < \infty ) = 1 \quad \text{ for q.e.\ } \theta .\end{align} \end{thm}
Unfortunately the closability of the pre-Dirichlet form $ (\mathcal{E} , \di ) $ on $ \Lm $ has not yet been proved for determinantal random point fields of locally trace class, except for the sine kernel. So we propose a problem: \begin{prob}\label{p:22} \thetag{1} Is the pre-Dirichlet form $ (\mathcal{E} , \di ) $ on $ \Lm $ closable when $ \mu $ is a determinantal random point field with a continuous kernel? \\ \thetag{2} Can one construct stochastic dynamics (diffusion processes) associated with the pre-Dirichlet forms $ (\mathcal{E} , \di ) $ on $ \Lm $? \end{prob} We remark that the second problem can be deduced from the first (see \cite[Theorem 1]{o.dfa}). We conjecture that $ (\mathcal{E} , \di , \Lm ) $ is always closable. As we saw above, in the case of a trace class kernel this problem is solved by \pref{l:27}. But it is important to prove this for determinantal random point fields of {\em locally} trace class. This class contains the Airy kernel, the Bessel kernel, and other substantial examples. We also remark that for interacting Brownian motions with Gibbsian equilibrium measures this problem was settled successfully (\cite{o.dfa}).
In the next theorem we give a partial answer to \thetag{2} of Problem~\ref{p:22}. We will show that one can construct a stochastic dynamics in infinite volume which is canonical in the sense that \thetag{1} it is the strong resolvent limit of a sequence of finite volume dynamics, and \thetag{2} it coincides with $ (\mathcal{E}, \dom ) $ whenever $ (\mathcal{E}, \di ) $ is closable on $ \Lm $.
For two symmetric, nonnegative forms $ (\mathcal{E}_1,\dom _1 ) $ and $ (\mathcal{E}_2 ,\dom _2 ) $, we write $ (\mathcal{E}_1,\dom _1 ) \le (\mathcal{E}_2,\dom _2 ) $ if $ \dom _1 \supset \dom _2 $ and $ \mathcal{E}_1 (\mathfrak{f}, \mathfrak{f} ) \le \mathcal{E}_2 (\mathfrak{f},\mathfrak{f} )$ for all $ \mathfrak{f} \in \dom _2 $.
Let $ (\mathcal{E}^{\text{reg}} , \dom ^{\text{reg}}) $ denote the regular part of $ (\mathcal{E} , \di ) $ on $ \Lm $, that is, $ (\Ereg , \Dreg ) $ is closable on $ \Lm $ and in addition satisfies the following: \begin{align*}& (\mathcal{E}^{\text{reg}} , \dom ^{\text{reg}}) \le (\mathcal{E} ,\di ) ,\\ \intertext{and for all closable forms such that $ (\mathcal{E}', \mathcal{D}' ) \le (\mathcal{E} ,\di ) $}
& (\mathcal{E}', \mathcal{D}' ) \le (\Ereg , \Dreg ) .\end{align*} It is well known that such a form $ (\Ereg , \Dreg ) $ exists uniquely; it is called the maximal regular part of $ (\mathcal{E} , \di ) $. We denote its closure by the same symbol $ (\Ereg , \Dreg ) $.
Let $ \map{\pi _r}{\Theta }{\Theta } $ be such that
$ \pi _r (\theta ) = \theta (\cdot \cap \{ x \in \mathsf{E} ; |x| < r \} )$. We set \begin{align*}& \dom _{\infty , r} = \{ \mathfrak{f} \in \di \, ;\, \mathfrak{f} \text{ is } \sigma[\pi _r] \text{-measurable}\} .\end{align*} We will prove that the forms $ (\mathcal{E}, \dom _{\infty , r} ) $ are closable on $ \Lm $; their closures give the finite volume dynamics we are considering.
Let $ \mathbb{G}_{\alpha } $ (resp.\ $ \mathbb{G}_{r , \alpha } $) $ (\alpha >0) $ denote the $ \alpha $-resolvent of the semigroup associated with the closure of $ (\Ereg ,\Dreg ) $ (resp.\ $ (\mathcal{E}, \dom _{\infty , r} ) $) on $ \Lm $. \begin{thm} \label{l:28} \thetag{1} $ (\Ereg , \Dreg ) $ on $ \Lm $ is a quasi-regular Dirichlet form. So the associated diffusion exists. \\ \thetag{2} $ \mathbb{G}_{r , \alpha } $ converges to $ \mathbb{G}_{\alpha} $ strongly in $ \Lm $ for every $ \alpha >0 $. \end{thm}
\begin{rem}\label{r:222} We think the diffusion constructed in \tref{l:28} is a reasonable one for the following reasons. \thetag{1} By definition the closure of $ (\Ereg , \Dreg ) $ equals $ (\mathcal{E}, \dom ) $ when $ (\mathcal{E}, \di ) $ is closable. \thetag{2} The form $ (\mathcal{E}, \dom _{\infty , r} ) $ is naturally associated with a Markov process on $ \Theta _r $, where $ \Theta _r $ is the set of configurations on $ \mathsf{E} \cap \{ |x| < r \} $. So \thetag{2} of \tref{l:28} implies that the diffusion is the strong resolvent limit of finite volume dynamics. \end{rem} \begin{rem}\label{r:223} If one replaces $ \mu $ by the Poisson random measure $ \lambda $ whose intensity measure is the Lebesgue measure and considers the Dirichlet space $ (\mathcal{E}^{\lambda},\dom ) $ on $ L^2(\Theta , \lambda ) $, then the associated $ \Theta $-valued diffusion is the $ \Theta $-valued Brownian motion $ \mathbb{B} $, that is, \begin{align*}& \mathbb{B}_t = \sum _{i=1}^{\infty} \delta _{B^i_t} ,\end{align*} where the $ \{ B^i_t \} $ ($ i \in \N $) are infinitely many independent Brownian motions. In this sense we say in the Abstract that the Dirichlet form given by \eqref{:15} is {\em canonical} for Radon measures on $ \Theta $. We also remark that diffusions given by this type of local Dirichlet form are often called distorted Brownian motions. \end{rem}
\section{Preliminary} \label{s:3} Let $ \1 = (-\7,\7)^d \cap \mathsf{E} $ and $ \ya =\{\theta \in\Theta ;\theta(\1 )= \6 \} $. We note $\Theta =\sum _{\6 = 0} ^{\infty}\ya $. Let $\yaQ $ be the $ \6 $-fold product of $ \1 $. We define $ \map{\pi_r}{\Theta }{\Theta } $ by $\pi _{\7}(\theta )=\theta(\cdot\cap \1 )$. A function $\map{\mathbf{x}}{\ya }{\yaQ } $ is called a $\yaQ $-coordinate of $\theta $ if \begin{align}\label{:31}& \pi _{\7}(\theta )=\sum _{k=1} ^{\6 } \delta_{x_{k}(\theta )}, \qquad \mathbf{x}(\theta )=(x_{1}(\theta ),\ldots,x_{\6 }(\theta )).
\end{align} Suppose $ \map{\ff }{\Theta }{\R }$ is $ \sigma [\pi _{\7}]$-measurable. Then for each $ n = 1,2,\ldots $ there exists a unique permutation invariant function $ \map{\frn }{\yaQ }{\R } $ such that \begin{align}\label{:32}&
\ff (\theta ) = \frn (\mathbf{x}(\theta ) ) \quad \text{ for all } \theta \in \ya . \end{align}
We next introduce a mollifier. Let $\map{j}{\R ^d}{\R } $ be a non-negative, smooth function such that
$ j (x)= j (|x|)$,
$\int_{\R^d} j dx=1$ and $ j (x)=0$ for $ |x| \geq \frac{1}{2}$. Let $ j _\e =\e ^{-d} j (\cdot/\e )$ and $ j ^{\6}_{\e } (x_1,\ldots,x_{\6})=\prod _{i=1}^{\6} j _\e(x_i)$. For a $ \sigma [\pi _{\7}]$-measurable function $\ff $ we set $\map{\mathfrak{J} _{\7,\e }\ff }{\Theta }{\R }$ by \begin{align} \label{:33} \mathfrak{J} _{\7,\e }\ff (\theta ) &= \begin{cases} j _{\e }^{\6} * \hat{f}^{\6}_r (\mathbf{x}(\theta )) & \text{ for } \theta \in \ya , \6 \ge 1 \\ \ff (\theta ) & \text{ for } \theta \in \Theta ^0_r, \end{cases} \end{align} where $ \frn $ is given by \eqref{:32} for $\ff $, and $\map{\hat{f}^{\6}_{\7}}{\R^{d\6}}{\R}$ is the function defined by $\hat{f}^{\6} _{\7}(x)=f^{\6} _{\7}(x)$ for $ x \in \yaQ $ and $\hat{f}^{\6} _{\7}(x)=0$ for $ x \not\in \yaQ $. Moreover $\mathbf{x}(\theta ) $ is an $ \1^{\6} $-coordinate of $\theta \in \ya $, and $\ast $ denotes the convolution in $ \R ^{d\6} $. It is clear that $ \mathfrak{J} _{\7,\e }\ff $ is $ \sigma [\pi _{\7}]$-measurable.
We say a function $ \map{\ff }{\Theta }{\R } $ is local if $ \ff $ is $ \sigma [\pi _{\7}]$-measurable for some $ \7 < \infty $. For $ \map{\ff }{\Theta }{\R } $ and $ \6 \in \N \cup \{ \infty \} $ there exists a unique permutation invariant function $ f^{\6} $ such that $ \ff (\theta ) = f^{\6}(x_1,\ldots) $ for all $ \theta \in \Theta ^{\6 } $. Here $ \Theta ^{\6} = \{ \theta \in \Theta \, ;\, \theta (\mathsf{E} ) = \6 \} $ and $ \theta = \sum _{i} \delta _{x_i}$. A function $ \ff $ is called smooth if $ f^{\6 } $ is smooth for all $ \6 \in \N \cup \{ \infty \} $. Note that a $ \sigma [\pi _{\7}]$-measurable function $ \ff $ is smooth if and only if $ \frn $ is smooth for all $ n \in \N $.
\section{Proof of \tref{l:20}} \label{s:4} We give a sequence of reductions of \eqref{:26}. Let $ \mathbf{A} $ denote the set consisting of the sequences $ \mathbf{a} = (a_r)_{r \in \N} $ satisfying the following: \begin{align}\label{:41a}& a_r \in \Q \quad \text{ for all }r \in \N , \\ & \label{:41b} a_r = 2r + r_0 \quad \text{ for all sufficiently large }r \in \N ,\\ & \label{:41c} 2 \le a_1 ,\ 1 \le a_{r+1} - a_r \le 2 \quad \text{ for all }r \in \N . \end{align} Note that $ \mathbf{A} $ is countable by \eqref{:41a} and \eqref{:41b}.
Let $ \mathbb{I}= \{ 2,3,\ldots, \}^3 $. For $ (\7,\6,\8) \in \mathbb{I}$ and $ \mathbf{a} = (a_{\7}) \in \mathbf{A} $ we set \begin{align*}&
\za = \{ \theta \in \Theta \, ;\, \theta ( \Ia ) = \6 \}
\\&
\zb = \{ \theta \in \Theta \, ;\, \theta ( \Ia ) = \6 ,\
\theta ( \bar {\2} _{a_{\7} + \frac{1}{\8} } \backslash \Ia ) = 0 \} .\end{align*} Here $ \bar {\2} _{a_{\7} + \frac{1}{\8} } $ is the closure of $ {\2} _{a_{\7} + \frac{1}{\8} } $, where $ \1 = (-\7,\7)^d \cap \mathsf{E} $ as before. We remark $ \zb $ is an open set in $ \Theta $. We set \begin{align}& \label{:42}
\Ae = \{ \theta = \sum _{i} \delta _{x_i} \, ;\, \theta \in \zb \text{ and $ \theta $ satisfies } \\ &\quad \quad \notag
| x_i - x_j | < \e \text{ and } x_i, x_j \in \2 _{a_{\7} - 1} \text{ for some }i \not= j \} .\end{align} It is clear that $ \Ae $ is an open set in $ \Theta $. \begin{lem} \label{l:43} Assume that for all $ \mathbf{a} \in \mathbf{A} $ and $ (\7,\6,\8) \in \mathbb{I}$ \begin{align}\label{:43}& \inf _{ 0 < \e < 1/2m} \0 ( \Ae ) = 0 . \end{align} Then \eqref{:26} holds. \end{lem}
\begin{proof} Let \begin{align}\notag \mathsf{A}^{\mathbf{a} } (\7,\6,\8)= &\{ \theta = \sum _{i} \delta _{x_i} \, ;\, \theta \in \zb \text{ and $ \theta $ satisfies } \\ \quad \quad &\notag \ x_i = x_j \text{ and } x_i, x_j \in \2 _{a_{\7} - 1} \text{ for some }i \not= j \} .\end{align} Then $ \mathsf{A}= \bigcup _{\mathbf{a}\in \mathbf{A} }\bigcup _{ (\7,\6,\8) \in \mathbb{I}} \, \mathsf{A}^{\mathbf{a} } (\7,\6,\8)
$. Since $ \mathbf{A} $ and $ \mathbb{I}$ are countable sets and the capacity is subadditive, \eqref{:26} follows from \begin{align}\label{:44}& \0 (\mathsf{A}^{\mathbf{a} } (\7,\6,\8) )= 0 \quad \text{ for all } \mathbf{a} \in \mathbf{A}, \ (\7,\6,\8) \in \mathbb{I} . \end{align} Note that $ \mathsf{A}^{\mathbf{a} } (\7,\6,\8) \subset \Ae $. So \eqref{:43} implies \eqref{:44} by the monotonicity of the capacity, and hence \eqref{:26} holds. \end{proof}
Now fix $ \mathbf{a} \in \mathbf{A} $ and $ (\7,\6,\8) \in \mathbb{I}$ and suppress them from the notation. Set \begin{align}\label{:45}& \Aa = \mathsf{A}_{\e /2 }^{\mathbf{a} } (\7,\6,\8) ,\quad \A = \Ae ,\quad \Ab = \mathsf{A}_{1 + \e }^{\mathbf{a} } (\7,\6,\8) ,\end{align} and let $ \map{\4 }{\R}{\R} $ ($ 0 < \e < 1/m < 1 $) be the function such that \begin{align}\label{:46}& \4 (t) = \begin{cases}
2 & ( |t| \le \e )
\\ 2 \log |t| / \log \e &( \e \le |t| \le 1 )
\\0 & ( 1 \le |t| ) .\end{cases} \end{align} We define $ \map{\5}{\Theta }{\R } $ by $ \5 (\theta )= 0 $ for $ \theta \not\in \zb $ and \begin{align*}& \5 (\theta ) = \sum _{ x _i , \, x _j \in \2 _{a_{\7} - 1}, \ j\ne i} \4 (x _i - x _j ) \quad \text{ for }\theta \in \zb .\end{align*} Here we set $ \5 (\theta ) = 0 $ if the sum is empty. Let $ \mathfrak{g}_{\e } = \mathfrak{J}_{a_{\7} + \frac{1}{m}, \e / 4} \5 $, where $ \mathfrak{J}_{a_{\7} + \frac{1}{m}, \e / 4} $ is the mollifier introduced in \eqref{:33}.
\begin{lem} \label{l:44} For $ 0 < \e < 1/2m $, $ \mathfrak{g}_{\e } $ satisfies the following: \begin{align} \label{:47}& \mathfrak{g}_{\e } \in \di && \\ & \label{:48b} \mathfrak{g}_{\e } (\theta ) \ge 1 && \text{ for all } \theta \in \A \\ & \label{:48a} 0 \le \mathfrak{g}_{\e } (\theta ) \le n(n+1) && \text{ for all } \theta \in \Theta \\ & \label{:48c}
\mathfrak{g}_{\e } (\theta ) =0 && \text{ for all } \theta \not\in \Ab \\ & \label{:49b} \mathbb{D}[\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ] (\theta ) = 0 && \text{ for all } \theta \not\in \Aee \\ \label{:49a}& \mathbb{D}[\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ] (\theta ) \le
\frac{ \cref{;41} }{( \log \e \ \min |x_i - x_j | ) ^2 } && \text{ for all } \theta \in \Aee . \end{align} Here $ \theta = \sum \delta _{x_k} $ and the minimum in \eqref{:49a} is taken over $ x_i , x_j $ such that
$$ x _i , \, x _j \in \2 _{a_{\7} - 1}, \ \e / 2 \le |x_i - x_j | \le 1 + \e ,$$ and $\refstepcounter{Const}c_{\theConst} \label{;41} \ge 0 $ is a constant independent of $ \e $ ($ \cref{;41} $ depends on $ (\7,\6,\8) $ ). \end{lem} \begin{proof} \eqref{:47} follows from \cite[Lemma 2.4 (1)]{o.dfa}. The other statements follow from a direct calculation. \end{proof}
Permutation invariant functions $\map{\sigma^{\6 }_{\7 }}{\yaQ }{\R^+}$ are called density functions of $\mu $ if, for all bounded $\sigma [\pi_r]$-measurable functions $ \ff $, \begin{align}\label{:3.5}& \int_{\ya } \ff \, d\mu = \frac{1}{\6 !} \int_{\yaQ }f^{\6 }_{\7 }\sigma^{\6 }_{\7 }dx . \end{align} Here $\map{f^{\6 }_{\7 }}{\yaQ }{\R}$ is the permutation invariant function such that $ f^{\6 }_{\7 }(\mathbf{x}(\theta ))=\ff (\theta ) $ for $\theta\in\ya $, where $\mathbf{x}$ is an $\yaQ $-coordinate. We recall relations between a correlation function and a density function (\cite{so-}): \begin{align}\label{:4t}& \rho _{\6} = \sum_{k=0}^{\infty} \frac{1}{k !} \int _{I_r^{k }} \sigma ^{\6 + k}_{\7} (x_1,\ldots , x_{\6+k}) dx_{\6+1}\cdots dx_{\6+k} \\ \label{:4u} & \sigma ^{\6 }_{\7} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k !} \int _{I_r^{k }} \rho _{\6 + k} (x_1,\ldots , x_{\6+k}) dx_{\6+1}\cdots dx_{\6+k} \end{align} The first summand in the right hand side of \eqref{:4t} is taken to be $ \sigma ^{\6 }_{\7 } $. It is clear that \begin{align}\label{:4v}& 0 \le \sigma ^{\6 }_{\7 } ( x_1,\ldots,x_n ) \le \rho _{\6 } ( x_1,\ldots,x_n ) \end{align}
\begin{lem} \label{l:42} There exists a constant $ \refstepcounter{Const}c_{\theConst} \label{;4x} $ depending on $ \7, \6 $ such that \begin{align}\label{:4x}& \sigma^{\6 }_{\7 } (x_1,\ldots,x_n) \le \cref{;4x}
\min _{ i \not= j} |x_i - x_j | \quad \text{ for all } (x_1,\ldots,x_n) \in \yaQ . \end{align} \end{lem} \begin{proof} By \eqref{:23} and the local Lipschitz continuity of the kernel $ \mathsf{K} $, we see that $ \rho _{\6} $ is bounded and Lipschitz continuous on $ \yaQ $. In addition, by using \eqref{:23} we see that $ \rho _{\6} = 0 $ if $ x_i=x_j $ for some $ i \not= j $. Hence, using \eqref{:23} again, there exists a constant $ \refstepcounter{Const}c_{\theConst} \label{;4y} $ depending on $ \6 , \7 $ such that \begin{align}\label{:4y}&
\rho _{\6} (x_1,\ldots,x_n) \le \cref{;4y} \min _{ i \not= j} |x_i - x_j | \quad \text{ for all } (x_1,\ldots,x_n) \in \yaQ .\end{align} \eqref{:4x} follows from this and \eqref{:4v} immediately. \end{proof}
\begin{lem} \label{l:45} \eqref{:43} holds true. \end{lem} \begin{proof} By the definition of the capacity together with \eqref{:47} and \eqref{:48b}, we obtain \begin{align}\label{:4a}& \0 (\A ) \le \mathcal{E} (\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ) + (\mathfrak{g}_{\e } ,\mathfrak{g}_{\e } )_{\Lm } .
\end{align} So we will estimate the right hand side. We now see by \eqref{:49b} \begin{align} \label{:4b} \mathcal{E}(\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ) & = \int _{ \Aee } \mathbb{D}[\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ] d \mu \\& \notag = \frac{1}{n !} \int _{ B _{\e }} \{ \frac{1}{2}\sum_{i=1} ^{\6 } \frac{\partial g _{\e }^{\6}}{\partial x_{i}} \frac{\partial g _{\e }^{\6}}{\partial x_{i}} \} \sigma _{\3}^{\6} dx_1 \cdots dx_n \\ \notag & = : \mathbf{I}_{\e } .\end{align} Here $ g _{\e }^{\6} $ is defined by \eqref{:32} for $ \mathfrak{g}_{\e } $, and $ B _{\e }= \varpi _{\3} ^{-1} ( \pi _{\3} (\Aee ) ) $, where $ \map{\varpi }{ I_{\3}^{\6} }{ \Theta }$ is the map such that $ \varpi ((x_1,\ldots,x_n)) = \sum \delta _{x_i}$.
By using \eqref{:49a} and \lref{l:42} for $ \3 $, it is not difficult to see that there exists a constant $ \refstepcounter{Const}c_{\theConst} \label{;44} $ independent of $ \e $ satisfying the following: \begin{align*}&
\mathbf{I}_{\e } \le \frac{ \cref{;44} }{|\log \e |} .\end{align*} This implies $ \lim_{\e \to 0} \mathcal{E}(\mathfrak{g}_{\e } , \mathfrak{g}_{\e } ) = 0 $. By \eqref{:48a} and \eqref{:48c} we have \begin{align}\notag & (\mathfrak{g}_{\e } ,\mathfrak{g}_{\e } )_{\Lm } = \int _{\Ab } \mathfrak{g}_{\e } ^2 d \mu \le n^2(n+1)^2 \, \mu (\Ab ) \to 0 \quad \text{ as }\e \downarrow 0 . \end{align} Combining these with \eqref{:4a} we complete the proof of \lref{l:45}. \end{proof}
\noindent {\em Proof of \tref{l:20}}. \tref{l:20} follows from \lref{l:43} and \lref{l:45} immediately. \qed
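The quantitative heart of the proof above is the bound $ \mathbf{I}_{\e } \le c/|\log \e | $: the cut-off \eqref{:46} has squared gradient of order $ 1/( t \log \e )^2 $ on scales $ t \in [\e , 1] $, and the density bound of \lref{l:42} contributes a factor vanishing linearly at a collision. A one-dimensional caricature (a sketch under these simplifying assumptions, not the configuration-space computation):

```python
import math

# Caricature of the estimate I_eps <= c / |log eps|:
# phi_eps(t) = 2 log|t| / log(eps) on [eps, 1] mimics the cut-off, and the
# weight t mimics a density vanishing linearly at a collision. Exactly,
#   int_eps^1 phi_eps'(t)^2 * t dt = 4 / |log eps|  ->  0  as eps -> 0.

def dirichlet_energy(eps, n=100000):
    """Midpoint approximation of int_eps^1 phi_eps'(t)^2 * t dt."""
    log_eps = math.log(eps)
    h = (1.0 - eps) / n
    total = 0.0
    for i in range(n):
        t = eps + (i + 0.5) * h
        dphi = 2.0 / (t * log_eps)  # derivative of 2 log t / log eps
        total += dphi * dphi * t * h
    return total

e3 = dirichlet_energy(1e-3)  # ~ 4 / |log 1e-3|
e6 = dirichlet_energy(1e-6)  # smaller still: the energy vanishes as eps -> 0
```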
\section{Proof of \pref{l:27}}\label{s:5}
\begin{lem} \label{l:51} Let $ \mu $ be a probability measure on $ (\Theta , \mathcal{B}(\Theta ) ) $ such that $ \mu (\{ \theta (\mathsf{E}) < \infty \} ) = 1 $ and that the density functions $ \{ \sigma ^n _{\mathsf{E} } \} $ of $ \mu $ on $ \mathsf{E} $ are continuous. Then $ (\mathcal{E}, \di ) $ is closable on $ \Lm $. \end{lem} \begin{proof} Let $ \Theta ^n = \{ \theta \in \Theta\, ;\, \theta (\mathsf{E} ) = n \} $ and set $$ \mathcal{E}^n (\mathfrak{f}, \mathfrak{g} ) = \sum _{k=1}^n \int _{\Theta ^k } \mathbb{D} [\mathfrak{f},\mathfrak{g}] d\mu .$$ By assumption $ \sum_{n=0}^{\infty } \mu (\Theta ^n) = 1$, from which we deduce that $ (\mathcal{E}, \di ) $ is the increasing limit of $ \{ (\mathcal{E}^n, \di ) \} $. Since the density functions are continuous, each $ (\mathcal{E}^n, \di ) $ is closable on $ \Lm $. So its increasing limit $ (\mathcal{E}, \di ) $ is also closable on $ \Lm $. \end{proof}
\begin{lem} \label{l:52} Let $ \mu $ be a determinantal random point field on $ \mathsf{E} $ with a continuous kernel $ \mathsf{K} $. Assume the associated operator $ K $ is of trace class. Then the density functions $ \sigma ^n $ of $ \mu $ on $ \mathsf{E} $ are continuous. \end{lem} \begin{proof} For the sake of simplicity we only prove the case $ K < 1 $, where $ K $ is the operator generated by the integral kernel $ \mathsf{K} $. The general case is proved similarly by using a device in \cite[p.~935]{so-}.
Let $ \lambda _i $ denote the $ i $-th eigenvalue of $ K $ and $ \varphi _i $ its normalized eigenfunction. Then since $ K $ is of trace class we have \begin{align} \label{:51}& \mathsf{K} (x,y) = \sum _{i=1}^{\infty} \lambda _i \varphi _i (x) \overline{\varphi _i (y)} .\end{align}
It is known (see \cite[p.~934]{so-}) that \begin{align}\label{:53}& \sigma ^n (x_1,\ldots,x_n) = \det (\text{Id} - K ) \cdot \det (L (x_i, x_j))_{1\le i,j \le n} ,\end{align} where $ \det (\text{Id} - K ) = \prod_{i=1}^{\infty} (1 - \lambda _i ) $ and \begin{align} \label{:54}& L (x,y) = \sum_{i=1}^{\infty} \frac{\lambda _i}{1- \lambda _i} \varphi _i (x) \overline{\varphi _i (y)} . \end{align}
Since $ \mathsf{K} (x,y) $ is continuous, the eigenfunctions $ \varphi _i (x) $ are also continuous. It is well known that the right-hand side of \eqref{:51} converges uniformly. By $ 0 \le K < 1 $ we have $ 0 \le \lambda _i \le \lambda _1 <1 $. Collecting these facts, we see that the right-hand side of \eqref{:54} converges uniformly. Hence $ L(x,y) $ is continuous in $ (x,y) $. This combined with \eqref{:53} completes the proof. \end{proof}
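A toy numerical check of \eqref{:53} and \eqref{:54} (the rank-one kernel below is an illustrative choice, not from the text): for $ \mathsf{K}(x,y) = \lambda \varphi(x)\varphi(y) $ with $ \varphi $ normalized in $ L^2([0,1]) $, the formulas give $ \sigma^0 = 1-\lambda $, $ \sigma^1(x) = \lambda \varphi(x)^2 $ and $ \sigma^n = 0 $ for $ n \ge 2 $, so the total mass $ \sigma^0 + \int \sigma^1 \, dx $ equals $ 1 $.

```python
import math
import numpy as np

# Rank-one toy kernel K(x, y) = lam * phi(x) * phi(y) on E = [0, 1] with the
# orthonormal phi(x) = sqrt(2) sin(pi x) (an illustrative choice).
lam = 0.3
phi = lambda x: math.sqrt(2.0) * np.sin(np.pi * np.asarray(x, dtype=float))

def sigma(points):
    """Density sigma^n = det(Id - K) * det(L(x_i, x_j)), L = lam/(1-lam) phi phi^*."""
    x = np.asarray(points, dtype=float)
    Lmat = (lam / (1.0 - lam)) * phi(x)[:, None] * phi(x)[None, :]
    return (1.0 - lam) * float(np.linalg.det(Lmat))

grid = np.linspace(0.0, 1.0, 10001)
sig1 = np.array([sigma([t]) for t in grid])
mass = (1.0 - lam) + float(np.sum(sig1)) * (grid[1] - grid[0])  # P(0 pts) + P(1 pt)
sig2 = sigma([0.25, 0.75])  # vanishes: a rank-one field never shows two points
```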
{\em Proof of \pref{l:27}}. Since $ K $ is of trace class, the associated determinantal random point field $ \mu $ satisfies $ \mu (\{ \theta (\mathsf{E}) < \infty \} ) = 1 $. By \lref{l:52} the density functions $ \sigma _{\mathsf{E} }^n $ are continuous. So \pref{l:27} follows from \lref{l:51}. \qed
We now turn to the proof of \tref{l:26}. As in the statement of \tref{l:26}, let $ \mathsf{E} = \R $ and $ \mathsf{K}(x,y) = \mathsf{m}(x) \mathsf{k}(x-y) \mathsf{m}(y) $, where $ \map{\mathsf{k}}{\R }{\R } $ is a non-negative, continuous, {\em even} function that is convex on $ [0, \infty) $ with $ \mathsf{k} (0) \le 1 $, and $ \map{\mathsf{m}}{\R }{\R } $ is a non-negative continuous function such that $ \int _{\R } \mathsf{m}(t) dt < \infty $, $ \mathsf{m} (x) \le 1 $ for all $ x $, and $ \mathsf{m} (x) > 0 $ for some $ x $. We assume
$ \mathsf{k} $ satisfies \eqref{:27b}.
\begin{lem} \label{l:53} There exists an interval $ I $ in $ \mathsf{E} $ such that \begin{align}\label{:55} & \sigma _{I }^2 (x,x+t) \ge \cref{;51} |t| ^{\alpha }
\quad \text{ for all } |t| \le 1 \text{ and } x, x+t \in I ,\end{align} where $ \refstepcounter{Const}c_{\theConst} \label{;51} $ is a positive constant and $ \sigma _{I }^2 $ is the 2-density function of $ \mu $ on $ I $. \end{lem} \begin{proof} By assumption we see that $ \inf _{ x \in I} \mathsf{m}(x) > 0 $ for some open, bounded, nonempty interval $ I $ in $ \mathsf{E} $. By \eqref{:4u} we have \begin{align}\label{:56}& \sigma _{I }^2 (x,x+t) \ge \rho _2 (x, x+t) - \int _I \rho_3 (x, x+t, z) dz . \end{align} By \eqref{:23} and \eqref{:27b} there exist positive constants $ \refstepcounter{Const}c_{\theConst} \label{;52} $ and $ \refstepcounter{Const}c_{\theConst} \label{;53} $ such that \begin{align}\label{:57}& \cref{;52} |t| ^{\alpha } \le \rho _2 (x,x+t)
&&\quad \text{ for all } |t| \le 1 \text{ and } x, x+t \in I \\ &\notag \rho _3 (x,x+t,z ) \le \cref{;53} |t| ^{\alpha }
&&\quad \text{ for all } |t| \le 1 \text{ and } x, x+t , z \in I .\end{align} Hence, by taking $ I $ sufficiently small, we deduce \eqref{:55} from \eqref{:56} and \eqref{:57}. \end{proof}
{\em Proof of \tref{l:26}}. The closability follows from \pref{l:27}. So it only remains to prove \eqref{:27c}.
Let $ (\mathcal{E}^2 , \dom ^2 ) $ and $ (\mathcal{E}, \dom ) $ denote the closures of $ (\mathcal{E} ^2, \di ) $ and $ (\mathcal{E} , \di ) $ on $ \Lm $, respectively. Then \begin{align}\label{:5a}&
(\mathcal{E} ^2 , \dom ^2 ) \le (\mathcal{E} , \dom ) . \end{align} Let $ I $ be as in \lref{l:53}. Let $ \{ I_r \} _{r=1,\ldots}$ be an increasing sequence of
open intervals in $ \mathsf{E} $ such that $ I_1 = I $ and $ \cup_r I _r = \mathsf{E} $. Let \begin{align}\label{:58}& \mathcal{E}_r^2 (\mathfrak{f},\mathfrak{g} )= \int _{\Theta^2 } \sum _{x_i \in I_r} \frac{1}{2} \frac{\partial f (\mathbf{x} )}{\partial x_i } \cdot \frac{\partial g (\mathbf{x} )}{\partial x_i } d \mu . \end{align} Here $ \mathbf{x}=(x_1,\ldots ) $, and $ f $ and $ g $ are related to $ \mathfrak{f} $ and $ \mathfrak{g} $ as in \eqref{:25}. Then, since the density functions on $ I_r $ are continuous, we see that $ (\mathcal{E}_r^2 , \di ) $ is closable on $ \Lm $. We denote its closure by $ (\mathcal{E}_r^2 , \dom _r^2 ) $. It is clear that $ \{ (\mathcal{E}_r^2 , \dom _r^2 ) \} $ is increasing in the sense that $ \dom _r^2 \supset \dom _{r+1}^2 $ and $ \mathcal{E}_r^2 (\mathfrak{f}, \mathfrak{f} ) \le \mathcal{E}_{r+1}^2 (\mathfrak{f}, \mathfrak{f} ) $ for all $ \mathfrak{f} \in \dom _{r+1}^2 $. So we denote its limit by $ (\check{\mathcal{E} }^2 , \check{\dom}^2 ) $. It is known (\cite[Remark \thetag{3} after Theorem 3]{o.dfa}) that \begin{align}\label{:59}& (\check{\mathcal{E}}^2 , \check{\mathcal{\dom} }^2 ) \le (\mathcal{E}^2, \dom ^2) .\end{align}
By \eqref{:5a}, \eqref{:59} and the definition of $ \{ (\mathcal{E}_r^2 , \dom _r^2 ) \} $ we conclude $ (\mathcal{E}_1^2 , \dom _1^2 ) \le (\mathcal{E} , \dom ) $, which implies \begin{align}\label{:5b}& \0 _1^2 \le \0 ,\end{align} where $ \0 _1^2 $ and $ \0 $ denote capacities of $ (\mathcal{E}_1^2 , \dom _1^2 ) $ and $ (\mathcal{E} , \dom ) $, respectively. Let $ \mathsf{B}=\Theta ^2 \cap \{ \theta (\{ x \} ) =2 \text{ for some } x \in I \} $. Then by \eqref{:27a} and \eqref{:55} together with
a standard argument (see \cite[Example 2.2.4]{fot} for example) we obtain \begin{align}\label{:5c}& 0 < \0 _1^2 (\mathsf{B} ) . \end{align}Since $ \mathsf{B} \subset \mathsf{A} $, we deduce $ 0 < \0 (\mathsf{A} ) $ from \eqref{:5b} and \eqref{:5c}, which implies \eqref{:27c}. \qed
\section{A construction of infinite volume dynamics}\label{s:6} In this section we prove \tref{l:28}. We first prove the closability of pre-Dirichlet forms in finite volume. \begin{lem} \label{l:61} Let $ I _r = (-r, r) \cap \mathsf{E} $ and $ \sigma _r^n $ denote the $ n $-density function on $ I _r $. Then $ \sigma _r^n $ is continuous. \end{lem} \begin{proof}
Let $ M = \sup _{x,y \in I _r} | \mathsf{K}(x,y)| $. Then $ M < \infty $ because $ \mathsf{K} $ is continuous. Let $ \mathbf{x}_i = (\mathsf{K} (x_i, x_1), \mathsf{K} (x_i, x_2),\ldots, \mathsf{K} (x_i, x_n)) $ and
$ \| \mathbf{x}_i \| $ denote its Euclidean norm. Then by \eqref{:23} we see \begin{align}\label{:61}&
|\rho _n| \le \prod _{i=1}^n \| \mathbf{x}_i \| \le \{ \sqrt{n} M \}^n .\end{align} By Stirling's formula and \eqref{:61}, there exists a positive constant $ \refstepcounter{Const}c_{\theConst} \label{;61} $, independent of $ k $ and $ M $, such that \begin{align}\label{:62}&
| \frac{(-1)^k}{k !} \int _{I_r^{k }} \rho _{\6 + k} (x_1,\ldots , x_{\6+k})
dx_{\6+1}\cdots dx_{\6+k} | \\\notag & \quad \quad \quad \quad \quad \le \cref{;61}^k k^{- k + 1/2} (n+k)^{(n+k)/2} M ^{n+k} .\end{align} This implies that, for each $ n $, the series on the right-hand side of \eqref{:4u} converges uniformly in $ (x_1,\ldots,x_n) $. So $ \sigma _r^n $ is a uniform limit of continuous functions, which completes the proof. \end{proof}
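The first inequality in \eqref{:61} is Hadamard's inequality for the determinant $ \rho_n = \det(\mathsf{K}(x_i,x_j)) $: the modulus of a determinant is bounded by the product of the Euclidean norms of its rows. A quick numerical sanity check of this bound (my own sketch, not from the paper; the function name is hypothetical):

```python
# Sanity check of Hadamard's inequality |det M| <= prod_i ||row_i||,
# the bound behind |rho_n| <= prod ||x_i|| in (:61).
import numpy as np

rng = np.random.default_rng(0)

def hadamard_holds(n):
    M = rng.standard_normal((n, n))
    det = abs(np.linalg.det(M))
    bound = np.prod(np.linalg.norm(M, axis=1))  # product of Euclidean row norms
    return det <= bound + 1e-12

print(all(hadamard_holds(n) for n in range(1, 8)))  # True for every sample
```

The second inequality in \eqref{:61} then follows since each row norm is at most $ \sqrt{n}\,M $ when all entries are bounded by $ M $.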
\begin{lem} \label{l:62} $ (\mathcal{E}, \dom _{\infty , r} ) $ are closable on $ \Lm $. \end{lem} \begin{proof}
Let $ I_r = \{ x \in \mathsf{E} ; |x| < r \} $ and $ \Theta _r^n = \{ \theta (I_r) = n \} $. Let $ \mathcal{E} _r^n (\mathfrak{f},\mathfrak{g}) = \int _{\Theta _r^n } \mathbb{D} [\mathfrak{f}, \mathfrak{g} ] d \mu $. Then it is enough to show that $ (\mathcal{E} _r^n , \dom _{\infty , r}) $ are closable on $ \Lm $ for all $ n $.
Since $ \mathfrak{f} $ is $ \sigma[\pi _r] $-measurable, we have ($ \mathbf{x}= (x_1,\ldots,x_n) $) \begin{align*}& \mathcal{E} _r^n (\mathfrak{f},\mathfrak{g}) = \frac{1}{n!} \int _{I_r^n } \sum _{i=1}^n \frac{1}{2} \frac{\partial f _r^n (\mathbf{x} )}{\partial x_i } \cdot \frac{\partial g _r^n (\mathbf{x} )}{\partial x_i }
\sigma _r^n (\mathbf{x} )d \mathbf{x} ,\end{align*} where $ f _r^n $ and $ g _r^n $ are defined similarly as after \eqref{:3.5}. Then since $ \sigma _r^n $ is continuous, we see $ (\mathcal{E} _r^n , \dom _{\infty , r} ) $ is closable. \end{proof}
\begin{proof}[Proof of \tref{l:28}] By \lref{l:62} we see that assumption \thetag{A.1$ ^* $} in \cite{o.dfa} is satisfied. Assumption \thetag{A.2} in \cite{o.dfa} is also satisfied by the construction of determinantal random point fields. So one can apply the results in \cite{o.dfa} (Theorem 1, Corollary 1, Lemma 2.1 \thetag{3} in \cite{o.dfa}) to the present situation. Although Theorem 1 in \cite{o.dfa} treats $ (\mathcal{E}, \dom ) $, it is not difficult to see that the same conclusion also holds for $ (\Ereg , \Dreg ) $, which completes the proof. \end{proof}
\section{Gibbsian case}\label{s:7} In this section we consider the case where $ \mu $ is a canonical Gibbs measure with interaction potential $ \Phi $, whose $ n $-density functions for bounded sets are bounded, and whose 1-correlation function is locally integrable. If $ \Phi $ is superstable and regular in the sense of Ruelle, then probability measures satisfying these conditions exist. In addition, it is known from \cite{o.dfa} that, if $ \Phi $ is upper semi-continuous (or, more generally, if $ \Phi $ is a measurable function dominated from above by an upper semi-continuous potential satisfying certain integrability conditions (see \cite{o.m})), then the form $ (\mathcal{E}, \dom ) $ on $ \Lm $ is closable. We remark that these assumptions are quite mild. In \cite{o.dfa} and \cite{o.m} only {\em grand} canonical Gibbs measures with {\em pair} interaction potential are treated; it is easy to generalize the results in \cite{o.dfa} and \cite{o.m} to the present situation.
\begin{prop}\label{l:71} Let $ \mu $ be as above. Assume $ d \ge 2 $. Then $ \0 (\mathsf{A} ) = 0 $ and no collision \eqref{:27} occurs. \end{prop}
\begin{proof} The proof is quite similar to the one of \tref{l:20}. Let $ \mathbf{I}_{\e } $ be as in \eqref{:4b}. It only remains to show $ \lim_{\e \to 0} \mathbf{I}_{\e } = 0 $.
We divide the proof into two cases: \thetag{1} $ d = 2 $ and \thetag{2} $ 3 \le d $. Assume \thetag{1}. We can prove $ \lim \mathbf{I}_{\e } =0 $ similarly as before. In case \thetag{2} the proof is simpler. Indeed, we change the definitions of $ \Ab $ in \eqref{:45} and $ h_{\e } $ in \eqref{:46} as follows: $ \Ab = \mathsf{A}_{4 \e }^{\mathbf{a} } (\7,\6,\8) $ \begin{align}\label{:71}& \4 (t) = \begin{cases}
2 & ( |t| \le \e )
\\ -(2/\e) |t| + 4 &( \e \le |t| \le 2\e )
\\0 & ( 2\e \le |t| ) .\end{cases} \end{align} Then we can easily see $ \lim \mathbf{I}_{\e } =0 $. \end{proof}
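The cutoff in \eqref{:71} is a continuous plateau function: it equals $2$ on $ |t| \le \e $, decreases linearly on $ \e \le |t| \le 2\e $, and vanishes beyond $ 2\e $. A small sketch checking these properties (my own code, not from the paper; `h` is a stand-in name for the function written $ \4 $ above):

```python
# Piecewise cutoff from (:71); `h` is a hypothetical name, eps > 0 fixed.
def h(t, eps):
    t = abs(t)
    if t <= eps:
        return 2.0
    if t <= 2 * eps:
        return -(2.0 / eps) * t + 4.0
    return 0.0

eps = 0.1
assert h(0.05, eps) == 2.0               # plateau on |t| <= eps
assert h(0.3, eps) == 0.0                # vanishes for |t| >= 2*eps
assert abs(h(eps, eps) - 2.0) < 1e-12    # the two branches agree at |t| = eps
assert abs(h(2 * eps, eps)) < 1e-12      # and at |t| = 2*eps, so h is continuous
```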
\begin{rem}\label{r:71} \thetag{1} This result was announced and used in \cite[Lemma 1.4]{o.inv2}. Since this result was rather different in nature from the other parts of \cite{o.inv2}, we did not give details of the proof there. \\ \thetag{2} In \cite{r-s} a related result was obtained. In their framework the choice of the domain of the Dirichlet forms may not be the same as ours. Indeed, their domains are smaller than or equal to ours (we do not know whether they coincide). So one may deduce \pref{l:71} from their result. \end{rem}
\noindent Address: \\ Graduate School of Mathematics\\ Nagoya University \\ Chikusa-ku, Nagoya, 464-8602\\
\noindent Current Address (2015) \\
Hirofumi Osada\\
Tel 0081-92-802-4489 (voice)\\
Graduate School of Mathematics,\\
Kyushu University\\
Fukuoka 819-0395, JAPAN\\ {\em \texttt{[email protected]}}
\noindent Submitted: February 9, 2003; revised: March 31, 2003
\noindent {\em Partially supported by Grant-in-Aid for Scientific Research (B) 11440029}
\end{document} |
\begin{document}
\title{Finite presentability of universal central extensions of ${\mathfrak{sl}_n}$, II } \author{Zezhou Zhang}
\begin{abstract}
In this note we connect finite presentability of a Jordan algebra to finite presentability of its Tits-Kantor-Koecher algebra. Through this we complete our discussion of finite presentability of universal central extensions of ${\mathfrak{sl}_n(A)}$, $A$ a $k$-algebra, initiated in \cite{ZeZel}, and answer a question raised by Shestakov-Zelmanov \cite{ShestZelFPjordan} in the affirmative. \end{abstract}
\renewcommand{\thesubsection}{\Alph{subsection}}
\maketitle
Throughout this note all algebras are considered over a field $k$ containing $\frac{1}{2}$.
\section{Introduction}
Let $\mathcal{V}$ be a variety (of universal algebras) in the sense of \cite{JacobsonJordanbook}, \cite{RingsNearlyAssociative}. An algebra $A\in \mathcal{V}$ is said to be \textit{finitely presented (f.p.)} if it can be presented in $\mathcal{V}$ by finitely many generators and finitely many relations.
\begin{definition}
A $k$-algebra $J$ satisfying the identities \begin{enumerate}
\item $xy=yx$,
\item $(x^2y)x=x^2(yx)$
\end{enumerate}for all $x, y \in J$ is called a \textit{Jordan} algebra. \end{definition}
\begin{remark}
$J$ as above is sometimes referred to as a linear Jordan algebra, in contrast to the concept of quadratic Jordan algebras. These two concepts are equivalent when $\tfrac{1}{2} \in k$. \end{remark}
\begin{example}
An associative $k$-algebra $A$ admits a canonical Jordan product given by $x\circ y= \frac{1}{2}(xy+yx)$. This new product on $A$ makes it a Jordan algebra, denoted by $A^{(+)}$. \end{example}
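That $A^{(+)}$ really satisfies both defining identities of a Jordan algebra is a classical fact; a numerical sanity check on random matrices (my own sketch, not from the paper):

```python
# Check that x∘y = (xy+yx)/2 on matrices satisfies the two Jordan axioms:
# commutativity and ((x∘x)∘y)∘x = (x∘x)∘(y∘x).
import numpy as np

rng = np.random.default_rng(1)

def jordan(x, y):
    return (x @ y + y @ x) / 2

for _ in range(100):
    x = rng.standard_normal((3, 3))
    y = rng.standard_normal((3, 3))
    x2 = jordan(x, x)  # equals the associative square x @ x
    assert np.allclose(jordan(x, y), jordan(y, x))
    assert np.allclose(jordan(jordan(x2, y), x), jordan(x2, jordan(y, x)))
```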
J.M. Osborn (see \cite{HersteinRingswithinvolution}) showed that for a finitely generated associative algebra $A$ the Jordan algebra $A^{(+)}$ is finitely generated.
In \cite{ShestZelFPjordan}, Shestakov and Zelmanov considered the question whether for a finitely presented associative algebra $A$ the Jordan algebra $A^{(+)}$ is finitely presented. They proved (among other things) that \begin{enumerate}
\item the Jordan algebra $k\langle x, y \rangle ^{(+)}$, where $k\langle x, y \rangle$ is the free associative algebra of rank $2$, is not finitely presented;
\item let $A$ be a finitely presented associative algebra, let
$M_n(A)$ be the algebra of $n\times n$ matrices over $A$, and let $n \geq 3$; then the Jordan algebra $M_n(A)^{(+)}$ is finitely presented. \end{enumerate} For the borderline case of $M_2(A)$ they asked if the Jordan algebra $M_2(A)^{(+)}$ is finitely presented.
In this paper we give a positive answer to this question.
\begin{theorem} Let $A$ be a finitely presented associative $k$-algebra. Then the Jordan algebra $ M_2(A)^{(+)}$ is finitely presented. \end{theorem}
The proof of this theorem is based on the paper \cite{ZeZel} on universal central extensions of Lie algebras $\mathfrak{sl}_n(A)$.
\begin{definition} Let $A$ be an associative algebra. $\mathfrak{sl}_n(A)$ is the Lie algebra generated by the off-diagonal matrix units among $n \times n$ matrices over $A$; it forms a subalgebra of $\mathfrak{gl}_n(A)$. Equivalently, $\mathfrak{sl}_n(A)=\{X\in \mathfrak{gl}_n(A) \mid \mbox{tr} (X) \in [A,A] \}$. \end{definition}
\begin{definition} Let $\mathcal{L}, \mathfrak{g}$ be $k$-Lie algebras where $\mathfrak{g}$ is perfect. A surjective Lie homomorphism $\pi: \mathcal{L} \rightarrow \mathfrak{g} $ is called a central extension of $\mathfrak{g}$ if $\ker(\pi)$ is central in $\mathcal{L}$. The central extension $\pi: \mathcal{L} \rightarrow \mathfrak{g} $ is called universal if for any other central extension $p: \mathcal{M}\rightarrow \mathfrak{g}$ of $\mathfrak{g}$ there exists a unique homomorphism $\phi: \mathcal{L} \rightarrow \mathcal{M}$ such that $\pi=p\circ\phi$. The universal central extension of $\mathfrak{g}$ is customarily denoted by $\widehat{\mathfrak{g}}$. Perfectness of $\mathfrak{g}$ guarantees the existence of $\widehat{\mathfrak{g}}$, which is necessarily perfect (see \cite{Neher}). \end{definition}
In \cite{ZeZel} it was shown that for a finitely presented associative algebra $A$ and $n\geq 3$, the Lie algebra $\widehat{\mathfrak{sl}_n(A)}$ is finitely presented. As a consequence of the Lie-Jordan correspondence results of this paper, we may sharpen this result:
\begin{theorem}
The universal central extension of ${\mathfrak{sl}_2(k\langle x, y \rangle)}$ is not finitely presented as a Lie algebra. \end{theorem}
\begin{remark}
The result in \cite{ZeZel} holds without the restriction $\text{char}(k) \neq 2$. \end{remark}
\section{Finite Presentation of Jordan systems} \noindent\textit{\underline{Jordan triple systems}}
A Jordan algebra $J$ is equipped with a Jordan triple product $\{a,b,c\}=(ab)c+a(bc)-b(ac)$. If $J=A^{(+)}$, where $A$ is an associative algebra, then $\{a,b,c\}=\tfrac{1}{2}(abc+cba)$.
\begin{definition}[See \cite{MeybergLectures}] A vector space $V$ over a field $k$ containing $\tfrac{1}{2}$ is called a \textit{Jordan triple system} if it admits a trilinear product $\{ \ , \ , \}: V^3 \rightarrow V$ that is symmetric in the outer variables, while satisfying the identities
\begin{enumerate}
\item $\{a,b,\{a,c,a\}\}=\{a,\{b,a,c\},a\}$,
\item $\{\{a,b,a\},b,c\}=\{a,\{b,a,b\},c\}$,
\item $\{a,\{b,\{a,c,a\},b\},a\}=\{ \{a,b,a\}, c, \{a,b,a\} \}$ \end{enumerate} for any $a,b,c \in V$. \end{definition}
\begin{remark} When $\tfrac{1}{6} \in k$, the three defining identities of a Jordan triple system may be merged into the single identity $\{a,b,\{c,d,e\}\}=\{ \{a,b,c\} ,d,e\}-\{c,\{b,a,d\},e\}+\{c,d,\{a,b,e\}\}$. \end{remark}
It is easy to see that every Jordan algebra is a Jordan triple system with respect to the Jordan triple product $\{ \ ,\ ,\ \}$.
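For the special case $J=A^{(+)}$ the triple product is $\{a,b,c\}=\tfrac{1}{2}(abc+cba)$, and the three defining identities can be checked numerically on random matrices (my own sketch, not from the paper):

```python
# Check that {a,b,c} = (abc+cba)/2 on matrices satisfies outer symmetry
# and the three Jordan-triple-system identities.
import numpy as np

rng = np.random.default_rng(2)

def tri(a, b, c):
    return (a @ b @ c + c @ b @ a) / 2

for _ in range(50):
    a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))
    assert np.allclose(tri(a, b, c), tri(c, b, a))                        # outer symmetry
    assert np.allclose(tri(a, b, tri(a, c, a)), tri(a, tri(b, a, c), a))  # identity (1)
    assert np.allclose(tri(tri(a, b, a), b, c), tri(a, tri(b, a, b), c))  # identity (2)
    assert np.allclose(tri(a, tri(b, tri(a, c, a), b), a),
                       tri(tri(a, b, a), c, tri(a, b, a)))                # identity (3)
```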
\noindent\textit{\underline{TKK constuction for a Jordan triple system.}}
For a Jordan algebra $J$ we let $K(J)$ be the Tits-Kantor-Koecher (abrreviated TKK) Lie algebra of $J$ viewed as a Jordan triple system.
\begin{definition}[See \cite{ShestZelFPjordan}. See also {\cite[Section 5]{BenkartSmirnovBC1}}, \cite{AllisonGaoUnitary}] \label{tkkdef}
Let $T$ be a Jordan triple system. Let $\{e_u, u\in I\}$ be a basis of the vector space $T$. Let \[\{e_u,e_v, e_w\}=\sum_t\gamma^t_{uvw}e_t, \] where $\ t,u,v,w \in I$; $\gamma_{uvw}^t\in k$.
The Lie algebra $K(T)$ is presented by generators $x_u^{\pm}, u \in I $ and relations
\begin{gather}
[x_u^\sigma, x_v^\sigma]=0, \tag{K1}\label{K1}\\
\quad \quad \left[[x_u^\sigma, x_v^{-\sigma}], x_w^\sigma\right]-\sum_t\gamma^t_{uvw}x_t^\sigma=0, \tag{K2}\label{K2}
\end{gather}where $\ t,u,v,w \in I$; $\sigma=\pm$. This Lie algebra is called the \textit{(Universal) Tits-Kantor-Koecher Lie algebra associated to $T$}. It is obvious that this construction is basis-independent.
\end{definition}
\begin{remark}
The above (universal) TKK construction $T \rightarrow K(T)$ is a functor: see for example \cite{CavenySmirnovJordanLieCategories}. In other words, any homomorphism of Jordan triple systems $T_1 \stackrel{\phi}{\rightarrow} T_2$ gives rise to a homomorphism
$K(T_1) \stackrel{\phi}{\rightarrow} K(T_2)$ of Lie algebras. A quick proof follows from the definition of $K(T)$ above: choose a basis $B_1 \cup B_2$ of $T_1$ such that $\phi(B_1)$ is linearly independent while $\phi(B_2)=0$. Extend $\phi(B_1)$ to a basis of $T_2$ and it is clear that we may extend $\phi$. \end{remark}
\begin{lemma}\label{lemma1}
Let $T$ be a Jordan triple system. If the Lie algebra $K(T)$ is finitely presented then the Jordan triple system is finitely presented as well. \end{lemma}
\begin{proof}
Let $\{e_u, u\in I\}$ be a basis of the vector space $T$ and let $ \{e_u,e_v, e_w\}=\sum_t\gamma^t_{uvw}e_t$, where $\ t,u,v,w \in I$; $\gamma_{uvw}^t\in k$. Define $K(T)$ as in Definition \ref{tkkdef}. For a subset $S\subset I$, let $R(S)$ be the set of those relations from (\ref{K1}),(\ref{K2}) that have all indices $t,u,v,w$ lying in $S$.
By our assumption there exists a finite subset $S\subset I$ such that the algebra $K(T)$ is generated by $x_u^\pm, \ u\in S$, and presented by the set of relations $R(S)$. Let $\widetilde{T}$ be the Jordan triple system presented by generators $y_u, \ u \in S$ and the set of relations \[R_J(S): \{y_u, y_v, y_w\}-\sum\gamma^t_{uvw}y_t=0,\] where $u,v,w, t \in S$. We claim that the mapping $y_u \mapsto e_u, \ u \in S$, extends to an isomorphism $\widetilde{T} \cong T$. Since the elements $e_u, u \in S$ satisfy the relations from $R_J(S)$ it follows that the mapping $y_u \stackrel{\varphi}{\rightarrow} e_u, \ u \in S$, extends to a homomorphism $\widetilde{T} \stackrel{\varphi}{\rightarrow}T$. This homomorphism gives rise to a homomorphism $K(\widetilde{T}) \stackrel{\varphi}{\rightarrow} K(T)$.
Consider the Lie algebra $K(\widetilde{T})$. Since the elements $y_u^\pm \in K(\widetilde{T}), \ u \in S$ satisfy the relations $R(S)$, the mapping $x_u^\pm \rightarrow y_u^\pm, u \in S$ extends to a homomorphism $K(T) \rightarrow K(\widetilde{T})$. Hence the homomorphism $K(\widetilde{T}) \stackrel{\varphi}{\rightarrow}K(T)$ is an isomorphism. This implies that the homomorphism $\widetilde{T} \stackrel{\varphi}{\rightarrow} T$ is bijective, hence an isomorphism. \end{proof}
\begin{lemma}\label{lemma2}
Let $J$ be a unital Jordan algebra. If $J$ is finitely presented as a Jordan triple system, then $J$ is finitely presented as a Jordan algebra. \end{lemma}
\begin{proof}
Let elements $a_1, \ldots, a_n \in J$ generate $J$ as a Jordan triple system. Then they clearly generate $J$ as a Jordan algebra, according to the formula $\{a,b,c\}=(ab)c+a(bc)-b(ac)$.
Let $\mathfrak{T}$ and $\mathfrak{J}$ be the free Jordan triple system and the free Jordan algebra on the set of free generators $x_u, \ u \geq 1$, respectively. Since $\mathfrak{J}$ is a Jordan triple system with respect to the Jordan triple product there exists a natural homomorphism $\mathfrak{T} \rightarrow \mathfrak{J} , \ a \mapsto \widetilde{a}$, that extends the identity mapping $x_u \mapsto x_u, \ u \geq 1$.
Let $R \subset \mathfrak{T}$ be a finite subset such that all elements of $R$ become zero when evaluated at $a_u, \ 1 \leq u \leq n$, and $R$ defines $J$ as a Jordan triple system. Since the elements $a_1, \ldots, a_n$ generate $J$ as a Jordan triple system, there exists an element $\omega(x_1, \ldots, x_n) \in \mathfrak{T}$ such that $\omega(a_1, \ldots, a_n)=1$.
Let $P=\widetilde{R} \cup \{\omega(x_1, \ldots, x_n)^2- \omega(x_1, \ldots, x_n) \} \subset \mathfrak{J}$. Consider the Jordan algebra $\widetilde{J}=\langle x_1, \ldots, x_n \mid P=(0) \rangle$. Our aim is to show that $\widetilde{J} \cong J$.
Since the generators $a_1, \ldots, a_n$ satisfy the relations $P$ it follows that there exists a surjective homomorphism $\widetilde{J} \stackrel{\varphi}{\rightarrow}J$, $\varphi(x_i)=a_i, \ 1\leq i \leq n$. We claim that the elements $x_1, \ldots , x_n$ generate the Jordan algebra $\widetilde{J}$ as a Jordan triple system: Indeed, let $\widetilde{J}' $ be the Jordan triple system generated by $x_1, \ldots, x_n$ in $\widetilde{J}$. The relations $P$ imply that the element $\omega(x_1, \ldots, x_n)$ is an identity element of the algebra $\widetilde{J}$ and $\omega(x_1, \ldots, x_n) \in \widetilde{J}'$. If $a, b \in \widetilde{J}'$ then $ab=\{a,\omega(x_1, \ldots, x_n), b\} \in \widetilde{J}'$, which implies $\widetilde{J}' = \widetilde{J}$.
Since the generators $x_1, \ldots, x_n$ of the Jordan triple system $\widetilde{J}$ satisfy the relations $R$ it follows that there exists a homomorphism of Jordan triple systems $J \stackrel{\psi}{\rightarrow} \widetilde{J}$ with $\psi(a_u)=x_u, \ 1 \leq u \leq n $. This implies that $\varphi, \psi$ are mutually inverse isomorphisms of Jordan triple systems. In particular, $\varphi$ is a bijection. Hence $\varphi$ is an isomorphism of Jordan algebras.\end{proof}
Lemmas \ref{lemma1} and \ref{lemma2} imply the following proposition:
\begin{prop}\label{Tkk Jor presentation}
Let $J$ be a Jordan algebra with $1$. If its TKK Lie algebra $K(J)$ is finitely presented, then $J$ is finitely presented as a Jordan algebra. \end{prop}
\section{Non-finite presentation of $\widehat{\mathfrak{sl}_2(A)}$}
\begin{theorem}\label{sl2 non fp theorem} Let $k$ be a field where $char(k)\neq 2$. Then $\widehat{\mathfrak{sl}_2(k\langle x, y \rangle)}$ is not finitely presented. \end{theorem}
The universal central extension of $\mathfrak{sl}_2(A)$ was discussed in Kassel-Loday \cite{Kassellodaycentral}; see also \cite{GaoSt2}. To elaborate:
\begin{theorem}[See \cite{GaoSt2}]\label{st2pres}
$\widehat{\mathfrak{sl}_2(A)}$ admits a presentation where the Lie algebra is generated by $\{X_{12}(a), X_{21}(a), T(a,b) \mid a,b \in A \}$, subject to the relations
\begin{align*}
X_{ij}(\alpha a + \beta b)&= \alpha X_{ij}(a) + \beta X_{ij}(b),\\
T(a,b)&=[X_{12}(a),X_{21}(b)],\\
[T(a,b),X_{12}(c)]&=X_{12}(abc+cba),\\
[T(a,b),X_{21}(c)]&=-X_{21}(bac+cab),\\
[X_{ij}(A),X_{ij}(A)]&=0.
\end{align*}
for all $1\leq i\neq j \leq 2$, $a,b,c \in A$, $\alpha, \beta \in k$. \end{theorem} \begin{proof}[Proof of Theorem \ref{sl2 non fp theorem}]
Define $x_+(a):=X_{12}(a)$, and $x_-(a):=X_{21}(\tfrac{1}{2}a)$. These new expressions give a new presentation of $\widehat{\mathfrak{sl}_2(A)}$, in generators $\{x_+(a),x_-(a)|a\in A\}$:\begin{align*}
a \mapsto x_+(a) \mbox{ and } a \mapsto x_-(a) \mbox{ are } k\mbox{-linear maps,}\\
[x_+(A),x_+(A)]=[x_-(A),x_-(A)]=0,\\
[[x_+(a),x_-(b)],x_+(c)]=x_+(\tfrac{1}{2}(abc+cba)),\\
[[x_-(b),x_+(a)],x_-(c)]=x_-(\tfrac{1}{2} (bac+cab)),
\end{align*}for all $a,b,c \in A$. This gives $\widehat{\mathfrak{sl}_2(A)} \cong K(A^{(+)})$, as this presentation defines $K(A^{(+)})$.
Now set $A=k\langle x, y \rangle$. According to \cite{ShestZelFPjordan}, $k\langle x, y\rangle ^{(+)}$ is not finitely presented as a Jordan algebra. Theorem \ref{Tkk Jor presentation} then implies that $\widehat{\mathfrak{sl}_2(k\langle x, y \rangle)}$, being isomorphic to $K(k\langle x, y \rangle^{(+)})$, is not finitely presented.\end{proof}
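The relations of Theorem \ref{st2pres} already hold in $\mathfrak{sl}_2(A)$ itself (the universal central extension only adds a central kernel), so they can be sanity-checked numerically with $A=M_3(\mathbb{R})$, realizing $X_{12}(a)$ and $X_{21}(b)$ as $2\times 2$ block matrices. This is my own sketch, not part of the paper, and the function names are hypothetical:

```python
# Check the relations of the presentation of sl_2(A) for A = 3x3 real
# matrices: X12(a) = [[0, a], [0, 0]], X21(a) = [[0, 0], [a, 0]].
import numpy as np

rng = np.random.default_rng(3)
Z = np.zeros((3, 3))

def X12(a): return np.block([[Z, a], [Z, Z]])
def X21(a): return np.block([[Z, Z], [a, Z]])
def comm(x, y): return x @ y - y @ x

a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))
T = comm(X12(a), X21(b))  # T(a, b) = [X12(a), X21(b)]

assert np.allclose(comm(T, X12(c)), X12(a @ b @ c + c @ b @ a))
assert np.allclose(comm(T, X21(c)), -X21(b @ a @ c + c @ a @ b))
assert np.allclose(comm(X12(a), X12(c)), np.zeros((6, 6)))
```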
\section{Finite presentation of $M_2(A)^{(+)}$}
\begin{lemma}
Let $k$ be a field where $char(k)\neq 2$, $A$ a unital associative $k$-algebra. Then $\widehat{\mathfrak{sl}_4(A)} \cong K(M_2(A)^{(+)})$. \end{lemma}
\begin{proof}
According to \cite{Kassellodaycentral,GaoShang}, $\widehat{\mathfrak{sl}_4(A)}$ admits the presentation with generating set $\{X_{ij}(s) \mid s\in A, 1\leq i \neq j \leq 4\}$ and set of relations
\begin{align*} &\alpha\mapsto X_{ij}(\alpha) \text{ is a $k$-linear map,}\\
&[X_{ij}(\alpha), X_{jk}(\beta)] = X_{ik}(\alpha\beta), \text{ for distinct } i, j, k, \\
&[X_{ij}(\alpha), X_{kl}(\beta)] = 0, \text{ for } j\neq k, i\neq l,
\end{align*}for all $\alpha, \beta \in A$. According to \cite{ZeZel}, $\widehat{\mathfrak{sl}_4(A)}$ is finitely presented.
As in Theorem \ref{sl2 non fp theorem}, we reorganize the presentation: the $2 \times 2$ block partition of $\mathfrak{sl}_4$ allows us to identify $X_{13}(A) \oplus X_{14}(A) \oplus X_{23}(A) \oplus X_{24}(A)$ with a copy of $M_2(A)^{(+)}$ (`` $x_+$''), and $X_{31}(A) \oplus X_{32}(A) \oplus X_{41}(A) \oplus X_{42}(A)$ with another copy of $M_2(A)^{(+)}$ (`` $x_-$''). Now identify through the ($k$-linear) assignment
\[\begin{tabular}{c c c c}
$X_{13}(a) \rightarrow x_+(e_{11}(a)),$ & $X_{14}(a) \rightarrow x_+(e_{12}(a)),$ & $X_{23}(a) \rightarrow x_+(e_{21}(a)),$ & $X_{24}(a) \rightarrow x_+(e_{22}(a))$
\end{tabular}\] and
\[\begin{tabular}{c c c c}
$\tfrac{1}{2} X_{31}(a) \rightarrow x_-(e_{11}(a)),$ & $\tfrac{1}{2} X_{32}(a) \rightarrow x_-(e_{12}(a)),$ & $\tfrac{1}{2} X_{41}(a) \rightarrow x_-(e_{21}(a)),$ & $ \tfrac{1}{2} X_{42}(a) \rightarrow x_-(e_{22}(a))$,
\end{tabular}\]where $\oplus e_{ij}(A)$ is the Peirce decomposition of $M_2(A)$, namely decomposing $2 \times 2$ matrices into subspaces corresponding to the four entries.
Under these new expressions our defining presentation becomes \begin{align*}
U \mapsto x_+(U) \mbox{ and } U \mapsto x_-(U) \mbox{ are } k\mbox{-linear maps,}\\
[x_+(M_2(A)),x_+(M_2(A))]=[x_-(M_2(A)),x_-(M_2(A))]=0,\\
[[x_+(U),x_-(V)],x_+(W)]=x_+(\tfrac{1}{2}(UVW+WVU)),\\
[[x_-(V),x_+(U)],x_-(W)]=x_-(\tfrac{1}{2} (VUW+WUV)),
\end{align*}for all $U,V,W\in M_2(A)$. As in Theorem \ref{sl2 non fp theorem}, this gives $\widehat{\mathfrak{sl}_4(A)} \cong K(M_2(A)^{(+)})$. \end{proof}
\begin{theorem}
Notation as before. If $A$ is finitely presented as a $k$-algebra, then the special Jordan algebra $M_2(A)^{(+)}$ is finitely presented. \end{theorem}
\begin{proof}
Apply Proposition \ref{Tkk Jor presentation} to $ K(M_2(A)^{(+)})$ (which is isomorphic to $\widehat{\mathfrak{sl}_4(A)}$).
\end{proof}
\end{document} |
\begin{document}
\title{\Large\textbf{Geometrical structures of multipartite quantum systems}}
\begin{abstract}
In this paper I will investigate geometrical structures of multipartite quantum systems based on complex projective varieties. These varieties are important in the characterization of quantum entangled states. In particular, I will establish a relation between multi-projective Segre varieties and multi-qubit quantum states. I will also discuss other geometrical approaches, such as toric varieties, to visualize complex multipartite quantum systems. \end{abstract}
\section{Introduction} Characterization of multipartite quantum systems is a very interesting research topic in the foundations of quantum theory and has many applications in the fields of quantum information and quantum computing. In particular, geometrical structures of multipartite quantum entangled pure states are of special importance.
In this paper we will review the construction of
Segre variety for multi-qubit states. We will also show a
construction of geometrical measure of entanglement based on
the Segre variety for multi-qubit systems. Finally we will
establish a relation between the Segre variety, toric variety,
and multi-qubit quantum systems. The relation could be used as a
tool to visualize entanglement properties of multi-qubit states.
Let $\mathcal{Q}_{j},~j=1,2,\ldots,m$ be quantum systems with underlying Hilbert spaces $\mathcal{H}_{\mathcal{Q}_{j}}$. Then the Hilbert space of a multi-qubit system $\mathcal{Q}$ is given by $\mathcal{H}_{\mathcal{Q}}=\mathcal{H}_{\mathcal{Q}_{m}}\otimes \mathcal{H}_{\mathcal{Q}_{m-1}}\otimes\cdots\otimes \mathcal{H}_{\mathcal{Q}_{1}}$, where $\mathcal{H}_{\mathcal{Q}_{j}}=\mathbf{C}^{2}$ and $\dim \mathcal{H}_{\mathcal{Q}}=2^{m}$. Now, let \begin{equation}\ket{\Psi}=\sum^{1}_{x_{m}=0}\sum^{1}_{x_{m-1}=0}\cdots \sum^{1}_{ x_{1}=0}\alpha_{x_{m}x_{m-1}\cdots x_{1}}\ket{x_{m}x_{m-1}\cdots x_{1}}, \end{equation} be a vector in $\mathcal{H}_{\mathcal{Q}}$, where the $\ket{x_{m}x_{m-1}\cdots~x_{1}}= \ket{x_{m}}\otimes\ket{x_{m-1}}\otimes\cdots\otimes\ket{x_{1}}$ form an orthonormal basis of $\mathcal{H}_{\mathcal{Q}}$
and $\alpha_{x_{m}x_{m-1}\cdots x_{1}}\in \mathbf{C}$. Then the pure quantum states correspond to points of $\mathcal{P}(\mathcal{H}_{\mathcal{Q}})= \mathcal{H}_{\mathcal{Q}}/\sim$. Moreover, let $\rho_{\mathcal{Q}}=\sum^{\mathrm{N}}_{i=1}p_{i}\ket{\Psi_{i}}\bra{\Psi_{i}}$, with $0\leq p_{i}\leq 1$ and $\sum^{\mathrm{N}}_{i=1}p_{i}=1$, denote a density operator acting on the Hilbert space $\mathcal{H}_{\mathcal{Q}}$.
The density operator $\rho_{\mathcal{Q}}$ is said to be fully separable, which we will denote by $\rho^{sep}_{\mathcal{Q}}$, with respect to the Hilbert space decomposition, if it can be written as $ \rho^{sep}_{\mathcal{Q}}=\sum^\mathrm{N}_{i=1}p_i \bigotimes^m_{j=1}\rho^i_{\mathcal{Q}_{j}}, $ where $\rho^i_{\mathcal{Q}_{j}}$ denotes a density operator on Hilbert space $\mathcal{H}_{\mathcal{Q}_{j}}$. If $\rho^{p}_{\mathcal{Q}}$ represents a pure state, then the quantum system is fully separable if $\rho^{p}_{\mathcal{Q}}$ can be written as $\rho^{sep}_{\mathcal{Q}}=\bigotimes^m_{j=1}\rho_{\mathcal{Q}_{j}}$, where $\rho_{\mathcal{Q}_{j}}$ is the density operator on $\mathcal{H}_{\mathcal{Q}_{j}}$. If a state is not separable, then it is said to be an entangled state.
\section{Projective geometry}
In this section we give a short introduction to affine and projective varieties. Let $\mathbf{C}[z]=\mathbf{C}[z_{1},z_{2}, \ldots,z_{n}]$ denote the polynomial algebra in $n$ variables with complex coefficients. Then, given a set of $r$ polynomials $\{g_{1},g_{2},\ldots,g_{r}\}$ with $g_{i}\in \mathbf{C}[z]$, we define a complex affine variety as \begin{eqnarray} &&\mathcal{V}_{\mathbf{C}}(g_{1},g_{2},\ldots,g_{r})=\{P\in\mathbf{C}^{n}: g_{i}(P)=0~\forall~1\leq i\leq r\}, \end{eqnarray} where $P\in\mathbf{C}^{n}$ is called a point of $\mathbf{C}^{n}$ and if $P=(a_{1},a_{2},\ldots,a_{n})$ with $a_{j}\in\mathbf{C}$, then the $a_{j}$ are called the coordinates of $P$. A complex projective space $\mathbf{CP}^{n}$ is defined to be the set of lines through the origin in $\mathbf{C}^{n+1}$, that is, \begin{equation} \mathbf{CP}^{n}=\frac{\mathbf{C}^{n+1}-\{0\}}{ u\sim v},~\lambda\in \mathbf{C}-\{0\},~v_{i}=\lambda u_{i} ~\forall ~1\leq i\leq n+1, \end{equation} where $u=(u_{1},\ldots,u_{n+1})$ and $v=(v_{1},\ldots,v_{n+1})$. Given a set of homogeneous polynomials $\{g_{1},g_{2},\ldots,g_{r}\}$ with $g_{i}\in \mathbf{C}[z]$, we define a complex projective variety as \begin{eqnarray} &&\mathcal{V}(g_{1},\ldots,g_{r})=\{O\in\mathbf{CP}^{n}: g_{i}(O)=0~\forall~1\leq i\leq r\}, \end{eqnarray} where $O=[a_{1},a_{2},\ldots,a_{n+1}]$ denotes the equivalence class of the point $(a_{1},a_{2},\ldots,a_{n+1})\in\mathbf{C}^{n+1}-\{0\}$. We can view the affine complex variety $\mathcal{V}_{\mathbf{C}}(g_{1},g_{2},\ldots,g_{r})\subset\mathbf{C}^{n+1}$ as a complex cone over the complex projective variety $\mathcal{V}(g_{1},g_{2},\ldots,g_{r})$.
We can map the product of spaces $\underbrace{\mathbf{CP}^{1}\times\mathbf{CP}^{1} \times\cdots\times\mathbf{CP}^{1}}_{m~\mathrm{times }}$ into a projective space by its Segre embedding as follows. The Segre map \begin{equation} \begin{array}{ccc}
\mathcal{S}_{2,\ldots,2}:\mathbf{CP}^{1}\times\mathbf{CP}^{1} \times\cdots\times\mathbf{CP}^{1}&\longrightarrow& \mathbf{CP}^{2^{m}-1},\\ \end{array} \end{equation} is defined by $ ((\alpha^{1}_{0},\alpha^{1}_{1}),\ldots,
(\alpha^{m}_{0},\alpha^{m}_{1})) \longmapsto
(\alpha^{m}_{i_{m}}\alpha^{m-1}_{i_{m-1}}\cdots\alpha^{1}_{i_{1}})$, where $(\alpha^{i}_{0},\alpha^{i}_{1})$ are homogeneous coordinates of a point on the $i$th complex projective space $\mathbf{CP}^{1}$ and the $\alpha_{i_{m}i_{m-1}\cdots i_{1}}$, $0\leq i_{s}\leq 1$, are homogeneous coordinates on $\mathbf{CP}^{2^{m}-1}$. Moreover, let us consider a multi-qubit quantum system
and let $ \mathcal{A}=\left(\alpha_{i_{m}i_{m-1}\ldots i_{1}}\right)_{0\leq i_{s}\leq 1}$, for $s=1,2,\ldots,m$. $\mathcal{A}$ can be realized as the set $\{(i_{m},i_{m-1},\ldots,i_{1}):0\leq i_{s}\leq 1,\forall~s\}$, in which each point $(i_{m},i_{m-1},\ldots,i_{1})$ is assigned the value $\alpha_{i_{m}i_{m-1}\ldots i_{1}}$. For each $s=1,2,\ldots,m$, a two-by-two minor about the $s$-th coordinate of $\mathcal{A}$ is given by \begin{eqnarray}\label{segreply1} &&\mathcal{I}^{m}_{\mathcal{A}}= \alpha_{x_{m}x_{m-1}\ldots x_{1}}\alpha_{y_{m}y_{m-1}\ldots y_{1}} - \alpha_{x_{m}x_{m-1}\ldots x_{s+1}y_{s}x_{s-1}\ldots x_{1}}\alpha_{y_{m}y_{m-1} \ldots y_{s+1} x_{s}y_{s-1}\ldots y_{1}}. \end{eqnarray} The ideal $\mathcal{I}^{m}_{\mathcal{A}}$ is generated by these minors. The image of the Segre embedding $\mathrm{Im}(\mathcal{S}_{2,2,\ldots,2})$, which is an intersection of families of quadric hypersurfaces in $\mathbf{CP}^{2^{m}-1}$, is called the Segre variety and it is given by \begin{eqnarray}\label{eq: submeasure} \mathrm{Im}(\mathcal{S}_{2,2,\ldots,2})&=&\bigcap_{\forall s}\mathcal{V}\left(\mathcal{I}^{m}_{\mathcal{A}}\right). \end{eqnarray} This is the space of separable multi-qubit states. Moreover, we propose a measure of entanglement for general pure multipartite states based on a modified Segre variety as follows \begin{eqnarray}\label{EntangSeg2}\nonumber &&\mathcal{F}(\mathcal{Q}^{p}_{m}(2,2\ldots,2)) =(\mathcal{N}\sum_{\forall \sigma\in\mathrm{Perm}(u)}\sum_{
k_{j},l_{j}, j=1,2,\ldots,m}\\&&|\alpha_{k_{1}k_{2}\ldots k_{m}}\alpha_{l_{1}l_{2}\ldots l_{m}} - \alpha_{\sigma(k_{1})\sigma(k_{2})\ldots\sigma(k_{m})}\alpha_{\sigma(l_{1})\sigma(l_{2})
\ldots\sigma(l_{m})}|^{2})^{\frac{1}{2}}, \end{eqnarray} where $\sigma\in\mathrm{Perm}(u)$ denotes all possible sets of permutations of indices for which $k_{1}k_{2}\ldots k_{m}$ are replaced by $l_{1}l_{2}\ldots l_{m}$, and $u$ is the number of indices to permute.
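For $m=2$ the Segre variety is cut out by the single quadric $\alpha_{00}\alpha_{11}-\alpha_{01}\alpha_{10}=0$: a two-qubit state is separable exactly when this minor vanishes. A small numerical illustration (my own sketch, not the author's code; names are hypothetical):

```python
# Two-qubit illustration: a product state lies on the Segre quadric
# a00*a11 - a01*a10 = 0, while the Bell state does not.
import numpy as np

def minor(state):
    a00, a01, a10, a11 = state
    return a00 * a11 - a01 * a10

# Segre embedding of the product state (1, 0) x (1, 1)/sqrt(2)
q1 = np.array([1.0, 0.0])
q2 = np.array([1.0, 1.0]) / np.sqrt(2)
product = np.kron(q1, q2)
assert abs(minor(product)) < 1e-12          # separable: on the Segre variety

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
assert abs(abs(minor(bell)) - 0.5) < 1e-12  # entangled: minor = 1/2, off the variety
```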
As an example we will discuss the four-qubit state in which we first encounter these new varieties. For this quantum system we can partition the Segre embedding as follows: $$\xymatrix{ \mathbf{P}^{1}\times\mathbf{P}^{1}\times\mathbf{P}^{1}\times\mathbf{P}^{1} \ar[d]_{\mathcal{S}_{2,\ldots,2}}\ar[r]_{\mathcal{S}_{2,2}\otimes I\otimes I}&\mathbf{P}^{3} \times\mathbf{P}^{1}\times\mathbf{P}^{1}\ar[d]_{I\otimes\mathcal{S}_{2,2}}\\
\mathbf{P}^{2^{4}-1}&\mathbf{P}^{3}
\times\mathbf{P}^{3}\ar[l]_{\mathcal{S}_{4,4}}}.$$
For the Segre variety, which is represented by completely decomposable tensors, we have a commuting diagram and $\mathcal{S}_{2,\ldots,2}=(\mathcal{S}_{4,4}) \circ(I\otimes\mathcal{S}_{2,2})\circ(\mathcal{S}_{2,2}\otimes I\otimes I)$. \section{Toric variety and multi-qubit quantum systems}
Let $S\subset \mathbf{R}^{n}$ be a finite subset. Then a convex polyhedral cone is defined by $
\sigma=\mathrm{Cone}(S
)=\left\{\sum_{v\in S}\lambda_{v}v|\lambda_{v}\geq0\right\}.$ In this case $\sigma$ is generated by $S$. In a similar way we define a polytope by $
P=\mathrm{Conv}(S)=\left\{\sum_{v\in S}\lambda_{v}v|\lambda_{v}\geq0, \sum_{v\in S}\lambda_{v}=1\right\}. $ We could also say that $P$ is the convex hull of $S$. A convex polyhedral cone is called simplicial if it is generated by a linearly independent set. Now, let $\sigma\subset \mathbf{R}^{n}$ be a convex polyhedral cone and let $\langle u,v\rangle$ be the natural pairing between $u\in \mathbf{R}^{n*}$ and $v\in\mathbf{R}^{n}$. Then the dual cone of $\sigma$ is defined by $$
\sigma^{\wedge}=\left\{u\in \mathbf{R}^{n*}|\langle u,v\rangle\geq0~\forall~v\in\sigma\right\}, $$ where $\mathbf{R}^{n*}$ is the dual of $\mathbf{R}^{n}$. We call a convex polyhedral cone strongly convex if $\sigma\cap(-\sigma)=\{0\}$.
The algebra of Laurent polynomials is defined by $ \mathbf{C}[z,z^{-1}]=\mathbf{C}[z_{1},z^{-1}_{1},\ldots,z_{n},z^{-1}_{n}], $ where $z_{i}=\chi^{e^{*}_{i}}$. The terms of the form $\lambda \cdot z^{\beta}=\lambda z^{\beta_{1}}_{1}z^{\beta_{2}}_{2}\cdots z^{\beta_{n}}_{n}$ for $\beta=(\beta_{1},\beta_{2},\ldots,\beta_{n})\in \mathbf{Z}^{n}$ and $\lambda\in \mathbf{C}^{*}$ are called Laurent monomials. A ring $R$ of Laurent polynomials is called a monomial algebra if it is a $\mathbf{C}$-algebra generated by Laurent monomials. Moreover, for a lattice cone $\sigma$, the ring $$R_{\sigma}=\{f\in \mathbf{C}[z,z^{-1}]:\mathrm{supp}(f)\subset \sigma\} $$ is a finitely generated monomial algebra, where the support of a Laurent polynomial $f=\sum_{i}\lambda_{i}z^{i}$ is defined by $$\mathrm{supp}(f)=\{i\in \mathbf {Z}^{n}:\lambda_{i}\neq0\}.$$
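For instance, taking $\sigma$ to be the first quadrant, so that $R_{\sigma}=\mathbf{C}[z_{1},z_{2}]$, membership of a Laurent polynomial in $R_{\sigma}$ amounts to its support lying in $\sigma$. The following sketch (helper names and sample polynomials are our own, and the cone test assumes a two-dimensional simplicial cone) makes this concrete:

```python
from fractions import Fraction

# Laurent polynomials in two variables as {exponent tuple: coefficient}.
f = {(2, 0): 3, (1, 1): -1, (0, 3): 5}   # 3 z1^2 - z1 z2 + 5 z2^3
g = {(-1, 2): 1, (0, 0): 2}              # z1^{-1} z2^2 + 2

def supp(poly):
    """Support: the exponent vectors with nonzero coefficient."""
    return {e for e, c in poly.items() if c != 0}

def in_cone(point, generators):
    """Membership in Cone(v1, v2) for a 2-d simplicial cone: solve
    point = lam1*v1 + lam2*v2 exactly and check lam1, lam2 >= 0
    (v1 and v2 must be linearly independent)."""
    (a, c), (b, d) = generators
    det = a * d - b * c
    x, y = point
    lam1 = Fraction(x * d - y * b, det)
    lam2 = Fraction(a * y - c * x, det)
    return lam1 >= 0 and lam2 >= 0

quadrant = [(1, 0), (0, 1)]   # sigma = first quadrant, so R_sigma = C[z1, z2]
f_in = all(in_cone(e, quadrant) for e in supp(f))   # f lies in R_sigma
g_in = all(in_cone(e, quadrant) for e in supp(g))   # g does not (negative exponent)
```

Here `f` belongs to the monomial algebra $R_{\sigma}$ while `g` does not, since $(-1,2)\notin\sigma$.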
Now, for a lattice cone $\sigma$ we can define an affine toric variety to be the maximal spectrum $$\mathbf{X}_{\sigma}=\mathrm{Spec}R_{\sigma}.$$ A toric variety $\mathbf{X}_{\Sigma}$ associated to a fan $\Sigma$ is the result of gluing affine varieties $\mathbf{X}_{\sigma}=\mathrm{Spec}R_{\sigma}$ for all $\sigma\in \Sigma$ by identifying $\mathbf{X}_{\sigma}$ with the corresponding Zariski open subset in $\mathbf{X}_{\sigma^{'}}$ if $\sigma$ is a face of $\sigma^{'}$. That is, first we take the disjoint union of all affine toric varieties $\mathbf{X}_{\sigma}$ corresponding to the cones of $\Sigma$. Then by gluing all these affine toric varieties together we get $\mathbf{X}_{\Sigma}$.
A compact toric variety $\mathcal{X}_{\Sigma}$ is called projective if there exists an injective morphism $$\Phi:\mathcal{X}_{\Sigma}\longrightarrow\mathbf{P}^{r}$$ of $\mathcal{X}_{\Sigma}$
into some projective space such that $\Phi(\mathcal{X}_{\Sigma})$ is Zariski
closed in $\mathbf{P}^{r}$. A toric variety $\mathcal{X}_{\Sigma}$ is equivariantly projective if and only if $\Sigma$ is strongly polytopal. Now, let $\mathcal{X}_{\Sigma}$ be equivariantly projective and let the morphism $\Phi$ be the embedding induced by the rational map $\phi:\mathcal{X}_{\Sigma} \longrightarrow \mathbf{P}^{r}$ defined by $p \mapsto[z^{m_{0}},z^{m_{1}},\ldots,z^{m_{r}}],$ where $z^{m_{l}}(p)=p^{m_{l}}$ in case $p=(p_{1},p_{2},\ldots p_{n})$. Then the image $\Phi(\mathcal{X}_{\Sigma})$ is the set of common solutions of finitely many monomial equations \begin{equation} z^{\beta_{0}}_{i_{0}}z^{\beta_{1}}_{i_{1}}\cdots z^{\beta_{s}}_{i_{s}}=z^{\beta_{s+1}}_{i_{s+1}}z^{\beta_{s+2}}_{i_{s+2}}\cdots z^{\beta_{r}}_{i_{r}} \end{equation} which satisfy the following relationships \begin{equation}
\beta_{0}m_{0}+\beta_{1}m_{1}+\cdots +\beta_{s}m_{s}=\beta_{s+1}m_{s+1}+\beta_{s+2}m_{s+2}+\cdots +\beta_{r}m_{r} \end{equation} and \begin{equation}
\beta_{0}+\beta_{1}+\cdots +\beta_{s}=\beta_{s+1}+\beta_{s+2}+\cdots +\beta_{r} , \end{equation} for all $\beta_{l}\in \mathbf{Z}_{\geq 0}$ and $l=0,1,\ldots, r$ \cite{Ewald}. As we have seen, for multi-qubit systems the separable states are given by the Segre embedding of $\mathbf{CP}^{1}\times\mathbf{CP}^{1}\times\cdots\times\mathbf{CP}^{1}$. Now, for example, let $z_{1}=\alpha^{1}_{1}/\alpha^{1}_{0}, z_{2}=\alpha^{2}_{1}/\alpha^{2}_{0},\ldots, z_{m}=\alpha^{m}_{1}/\alpha^{m}_{0}$.
Then we can cover $\mathbf{CP}^{1}\times\mathbf{CP}^{1}
\times\cdots\times\mathbf{CP}^{1}$ by $2^{m}$ charts \begin{eqnarray} \nonumber && \mathbf{X}_{\check{\Delta}_{1}}=\{(z_{1},z_{2},\ldots,z_{m})\},\\\nonumber&& ~\mathbf{X}_{\check{\Delta}_{2}}=\{(z^{-1}_{1},z_{2},\ldots,z_{m})\},\\\nonumber&& ~~~~~~~~~~\vdots \\\nonumber&& \mathbf{X}_{\check{\Delta}_{2^{m}-1}}=\{(z_{1},z^{-1}_{2},\ldots,z^{-1}_{m})\}, \\\nonumber&& \mathbf{X}_{\check{\Delta}_{2^{m}}}=\{(z^{-1}_{1},z^{-1}_{2},\ldots,z^{-1}_{m})\}. \end{eqnarray} Let us consider the $m$-hypercube $\Sigma$ centered at the origin with vertices $(\pm1,\ldots,\pm1)$. This gives the toric variety $\mathcal{X}_{\Sigma}= \mathbf{CP}^{1}\times\mathbf{CP}^{1}\times\cdots\times\mathbf{CP}^{1}$.
Now, the map $\Phi(\mathcal{X}_{\Sigma})$ is a set of the common solutions of the following monomial equations \begin{equation} x^{\beta_{0}}_{i_{0}}x^{\beta_{1}}_{i_{1}}\cdots x^{\beta_{2^{m-1}-1}}_{i_{2^{m-1}-1}}=x^{\beta_{2^{m-1}}}_{i_{2^{m-1}}} \cdots x^{\beta_{2^{m}-1}}_{i_{2^{m}-1}} \end{equation} that gives quadratic polynomials $\alpha_{k_{1}k_{2}\ldots k_{m}}\alpha_{l_{1}l_{2}\ldots l_{m}} = \alpha_{k_{1}k_{2}\ldots l_{j}\ldots k_{m}}\alpha_{l_{1}l_{2} \ldots k_{j}\ldots l_{m}}$ for all $j=1,2,\ldots,m$ which coincides with the Segre ideals.
Moreover, we have \begin{equation} \Phi(\mathcal{X}_{\Sigma})=\mathrm{Specm}\, \mathbf{C}[\alpha_{00\ldots 0},\alpha_{00\ldots 1},\ldots,\alpha_{11\ldots 1}]/\mathcal{I}(\mathcal{A}), \end{equation} where $\mathcal{I}(\mathcal{A})=\langle \alpha_{k_{1}k_{2}\ldots k_{m}}\alpha_{l_{1}l_{2}\ldots l_{m}} - \alpha_{k_{1}k_{2}\ldots l_{j}\ldots k_{m}}\alpha_{l_{1}l_{2} \ldots k_{j}\ldots l_{m}}\rangle_{\forall j;k_{j},l_{j}=0,1}$. This toric variety describes the space of separable states in multi-qubit quantum systems. In summary, we have investigated the geometrical structures of multi-qubit quantum states based on the Segre variety and toric varieties. We showed that multi-qubit states can be characterized and visualized by the embedding of a toric variety in a complex projective space. These results are a step in our voyage into the realm of quantum theory and toward a better understanding of the nature of multipartite quantum systems. \begin{flushleft} \textbf{Acknowledgments:} The work was supported by the Swedish Research Council (VR). \end{flushleft}
\end{document}
\begin{document}
\begin{abstract} Let $R$ be a commutative ring of characteristic zero and $G$ an arbitrary group. In the present paper we classify the groups $G$ for which the set of symmetric elements with respect to the classical involution of the group ring $RG$ is Lie metabelian. \end{abstract}
\title{Group rings with Lie metabelian set of symmetric elements}
\section{Introduction}
If $*$ is an involution on a ring $R$ then the symmetric (respectively, anti-symmetric) elements of $R$ with respect to $*$ are the elements of
$R^+=\{r\in R|\; r^*=r\}$ (respectively, $R^-=\{r\in R|\; r^*=-r\}$). It is well known that crucial information about the algebraic structure of $R$ can be determined from that of $R^+$. An important result of this nature is due to Amitsur, who proved that for an arbitrary algebra $A$ with an involution $*$, if $A^+$ satisfies a polynomial identity then $A$ satisfies a polynomial identity \cite{A}.
Let $R$ be a commutative ring, let $G$ be a group and let $*$ be a group involution extended by linearity to the group ring $RG$. More precisely, if $r=\sum_{g\in G} r_g g \in RG$ with $r_g\in R$ for each $g\in G$ then $r^*=\sum_{g\in G} r_g g^*$. In recent years many authors have paid attention to the algebraic properties of the symmetric and anti-symmetric elements of $RG$ and in particular to their Lie properties. The characterization of the Lie nilpotence of $RG^+$ and $RG^-$, for the case of the classical involution, was given in three different papers when $R$ is a field of characteristic different from 2. In the first paper \cite{GS} the case in which the group $G$ has no 2-elements was considered. The case in which the group $G$ has $2$-elements was solved in \cite{L} for $RG^+$ and in \cite{GS1} for $RG^-$. For group involutions extended by linearity to the whole group ring, the Lie nilpotence of $RG^+$ was studied in \cite{GPS1} when the group $G$ has no 2-elements, and completed for an arbitrary group $G$ in \cite{LSS2009JPAAA}. The case of oriented involutions was studied in \cite{JHP}.
A particular case of Lie nilpotence, the commutativity, has been studied for $RG^+$ and $RG^-$. For the classical involution, this was studied in \cite{Broche2006} for $RG^+$ and in \cite{BrochePolcino2007} for $RG^-$. For the case of an arbitrary group involution extended by linearity to the whole group ring, the commutativity of $RG^+$ was studied in \cite{JR} and that of $RG^-$ in \cite{JR2,BJPR}. This was extended to oriented involutions in \cite{BP2} for $RG^+$ and in \cite{BJR} for $RG^-$. Finally, for nonlinear involutions the commutativity of $RG^+$ was studied in \cite{R} and that of $RG^-$ in \cite{R2}.
The Lie solvability of $RG^+$ and $RG^-$ was studied in \cite{LSS2009Forum} for the classical involution when the group $G$ has no $2$-elements. The question of when $RG^+$ is Lie metabelian has been studied under some conditions, namely for $G$ a finite group of odd order without elements of order 3 and the classical involution in \cite{LR}; and for $R$ a field of characteristic different from $2$ and $3$ and $G$ a periodic group without $2$-elements and an arbitrary group involution extended by linearity to $RG$ in \cite{CLS2013}. In all these cases $RG^+$ is Lie metabelian if and only if $G$ is abelian. Finally, in \cite{CLS2013}, it is also shown that for $G$ a finite group of odd order and $R$ a field of characteristic different from 2, if $RG^+$ is Lie metabelian then $G$ is nilpotent.
The goal of this paper is to characterize the group rings $RG$, with $R$ a commutative ring of characteristic zero and $G$ an arbitrary group, for which the set $RG^+$ of symmetric elements with respect to the classical involution is Lie metabelian. More precisely we prove the following result.
\begin{trm}\label{Main} Let $R$ be a commutative ring of characteristic 0 and $G$ a group. Denote by $RG^+$ the set of symmetric elements of $RG$ for the classical involution. Then $RG^+$ is Lie metabelian if and only if one of the following conditions holds: \begin{enumerate}
\item $\GEN{g\in G:g^2\ne 1}$ is abelian.
\item $G$ contains an elementary abelian subgroup of index 2.
\item $G$ has an abelian subgroup $B$ of index 2 and an element $x$ of order 4 such that $b^x=b^{-1}$ for every $b\in B$.
\item The center of $G$ is $\{g\in G:g^2=1\}$ and it has index 4 in $G$. \end{enumerate} \end{trm}
\section{Preliminaries and notation} In this section we introduce the basic notation and definitions. The centre of $G$ is denoted $\ensuremath{\mathcal{Z}}(G)$ and its exponent is denoted by $\textnormal{Exp}(G)$. If $g$ is a group element of finite order we will denote by $\circ (g)$ its order. If $g,h\in G$ then $g^h=h^{-1} g h$ and $(g,h)=g^{-1}h^{-1}gh$.
For elements $a$ and $b$ in an arbitrary ring we use the standard notation for the additive commutator: $[a,b]=ab-ba$, also known as Lie bracket. Recall that a ring $R$ is called Lie metabelian if $[[a,b],[c,d]]=0$ for all $a,b,c,d\in R$. More generally we say that a subset $X$ of $R$ is Lie metabelian if the same identity holds for all the elements of $X$. We also say that $X$ is commutative if the elements of $X$ commute.
More generally, if $X$ and $Y$ are subsets of a ring then $[X,Y]$ denotes the additive subgroup generated by the Lie brackets $[x,y]$ with $x\in X$ and $y\in Y$. Observe that $X$ is Lie metabelian if and only if $[X,X]$ is commutative.
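As a toy illustration of the definition (not used in the sequel), the ring of upper-triangular $2\times 2$ matrices is Lie metabelian, since every Lie bracket there is strictly upper triangular and strictly upper triangular $2\times 2$ matrices commute, while the full matrix ring is not. Both facts are easy to confirm by brute force; the sample matrices below are arbitrary.

```python
import itertools
import numpy as np

def br(a, b):
    """Additive commutator (Lie bracket) [a, b] = ab - ba."""
    return a @ b - b @ a

rng = np.random.default_rng(0)
# Random upper-triangular 2x2 integer matrices.
mats = [np.triu(rng.integers(-5, 6, (2, 2))) for _ in range(6)]
# [[a,b],[c,d]] vanishes for every choice: the ring is Lie metabelian.
ok = all(np.count_nonzero(br(br(a, b), br(c, d))) == 0
         for a, b, c, d in itertools.product(mats, repeat=4))

# The full 2x2 matrix ring is not Lie metabelian:
e11 = np.array([[1, 0], [0, 0]])
e12 = np.array([[0, 1], [0, 0]])
e21 = np.array([[0, 0], [1, 0]])
full_fails = np.count_nonzero(br(br(e12, e21), br(e11, e12))) != 0
```

Here `[e12, e21]` is the diagonal matrix $\mathrm{diag}(1,-1)$ and `[e11, e12] = e12`, whose bracket is $2e_{12}\neq 0$.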
The following subsets of $RG$ play an important role: \begin{eqnarray*} X^+&=&\{g+g^{-1}:g \in G, g^2\neq 1\}\cup\{g:g\in G, g^2=1\} \\ \breve{G}&=&\{g-g^{-1}:g\in G\} \end{eqnarray*} Note that $RG^+$ is generated as an $R$-module by $X^+$ and
therefore $RG^+$ is Lie metabelian if and only if so is $X^+$. In particular, if $RG^+$ is commutative then obviously $RG^+$ is Lie metabelian. The nonabelian groups $G$ satisfying that $RG^+$ is commutative have been classified in \cite{Broche2006}. These groups are precisely the Hamiltonian 2-groups, and are included in (2) and (3) of Theorem~\ref{Main}. Therefore in the rest of the paper we will assume that $RG^+$ is not commutative.
Also, \begin{equation}\label{RGRGX}
[RG^+,RG^+]\subseteq R\breve{G}
\end{equation} where $R\breve{G}$ denotes the $R$-submodule of $RG$ generated by $\breve{G}$. In fact, to see this it is enough to consider $g,h,x,y\in G$ with $x^2=y^2=1$ and write the Lie brackets of generators of $RG^+$ in the following form: \begin{eqnarray*}
{}[g+g^{-1},h+h^{-1}]&=&gh-(gh)^{-1}+gh^{-1}-(gh^{-1})^{-1}+g^{-1} h - (g^{-1} h)^{-1} + (hg)^{-1} - hg, \\
{}[g+g^{-1},x] &=& gx-(gx)^{-1}+g^{-1} x - (g^{-1} x)^{-1}, \\
{}[x,y]&=&xy-(xy)^{-1} \end{eqnarray*}
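The expansions above are identities valid in any group ring. As a sanity check (not part of the argument), the first one can be verified computationally; the choice of $S_4$ and of the two non-commuting 3-cycles below is ours, and the dict-based group-ring arithmetic is an ad-hoc helper.

```python
def mul(p, q):
    """Compose permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def amul(u, v):
    """Multiply group-ring elements stored as {group element: coefficient}."""
    out = {}
    for g, a in u.items():
        for h, c in v.items():
            gh = mul(g, h)
            out[gh] = out.get(gh, 0) + a * c
    return {k: c for k, c in out.items() if c != 0}

def bracket(u, v):
    """Lie bracket [u, v] = uv - vu in the group ring."""
    out = dict(amul(u, v))
    for k, c in amul(v, u).items():
        out[k] = out.get(k, 0) - c
    return {k: c for k, c in out.items() if c != 0}

def add_term(d, g, c):
    d[g] = d.get(g, 0) + c
    if d[g] == 0:
        del d[g]

# Two non-commuting 3-cycles in S4, so g^2 != 1 != h^2.
g = (1, 2, 0, 3)
h = (0, 2, 3, 1)
lhs = bracket({g: 1, inv(g): 1}, {h: 1, inv(h): 1})

# Right-hand side of the displayed identity, accumulated term by term.
rhs = {}
for w, c in [(mul(g, h), 1), (inv(mul(g, h)), -1),
             (mul(g, inv(h)), 1), (inv(mul(g, inv(h))), -1),
             (mul(inv(g), h), 1), (inv(mul(inv(g), h)), -1),
             (inv(mul(h, g)), 1), (mul(h, g), -1)]:
    add_term(rhs, w, c)
```

Both sides agree as elements of $\mathbb{Z}S_4$, and the bracket is nonzero for this pair.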
We will need the following result.
\begin{trm}\label{CasiAntisimetricos}\cite{BrochePolcino2007} Let $R$ be a commutative ring with unity with characteristic 0 and let $G$ be any group. Then $\breve{G}$ is commutative if and only if one of the following conditions holds: \begin{enumerate} \item $K=\GEN{g\in G: g^2\ne 1}$ is abelian (and thus $G=K\rtimes \GEN{x}$ with $x^2=1$ and $k^x=k^{-1}$ for all $k\in K$). \item $G$ contains an elementary abelian subgroup of index $2$. \end{enumerate} \end{trm}
\section{Sufficiency condition} In this section we prove the sufficiency part of Theorem~\ref{Main}. Assume first that $G$ satisfies either condition (1) or condition (2) of Theorem~\ref{Main}. Then $\breve{G}$ is commutative by Theorem~\ref{CasiAntisimetricos} and therefore $[RG^+,RG^+]$ is commutative by (\ref{RGRGX}). Thus $RG^+$ is Lie metabelian.
Secondly suppose that $G$, $B$ and $x$ satisfy condition (3) of Theorem~\ref{Main}. Then clearly all the elements of $G\setminus B$ have order 4 and the elements of $G$ of order 2 are central. Therefore in order to prove that $RG^+$ is Lie metabelian it is enough to show that the following set is commutative
$$C= \{ [ g+g^{-1}, h+h^{-1}] :g,h\in G,\; g^2\neq 1\neq h^2 \}.$$ As $B$ is abelian, $[ g+g^{-1}, h+h^{-1}]=0$ for elements $g,h\in B$. If $g\in B$ and $h\in G\setminus B$ then $g^h=g^{-1}$, so $(g+g^{-1} )h=h(g+g^{-1})$. Therefore, $[g+g^{-1},h+h^{-1}]=0$. Finally, if $g,h\in G\setminus B$ then $h=bg$ for some $b\in B$. Since $b^g=b^{-1}$ and $g^2=g^{-2}$, we have that $$[ g+g^{-1}, h+h^{-1}]=(g+g^{-1})b(g+g^{-1})-b(g+g^{-1})^2=2(b^{-1}-b)(1+g^2).$$ Hence, $C \subseteq \{ 2(b^{-1}-b)(1+g^2):\ b\in B,\; g\in G\setminus B \}\cup \{0\} \subseteq RB$ and thus $C$ is commutative as desired.
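The key computation $[g+g^{-1},h+h^{-1}]=2(b^{-1}-b)(1+g^2)$ can be verified in a concrete group satisfying condition (3). Our choice of example (not taken from the paper) is the generalized quaternion group $Q_{16}=\langle a,x \mid a^8=1,\ x^2=a^4,\ a^x=a^{-1}\rangle$, with $B=\langle a\rangle$ of index 2 and $x$ of order 4; the encoding of elements as pairs is ad hoc.

```python
# Elements of Q16 encoded as (i, j) ~ a^i x^j with i mod 8 and j mod 2.
def mul(p, q):
    (i1, j1), (i2, j2) = p, q
    sign = -1 if j1 else 1                   # conjugation by x inverts a
    return ((i1 + sign * i2 + (4 if j1 and j2 else 0)) % 8, (j1 + j2) % 2)

def inv(p):
    i, j = p
    return ((-i) % 8, 0) if j == 0 else ((i + 4) % 8, 1)

def amul(u, v):
    """Multiply group-ring elements stored as {group element: coefficient}."""
    out = {}
    for g, a in u.items():
        for h, c in v.items():
            gh = mul(g, h)
            out[gh] = out.get(gh, 0) + a * c
    return {k: c for k, c in out.items() if c != 0}

def asub(u, v):
    out = dict(u)
    for k, c in v.items():
        out[k] = out.get(k, 0) - c
    return {k: c for k, c in out.items() if c != 0}

b = (1, 0)                # a generator of B, of order 8
g = (0, 1)                # x, of order 4, with b^g = b^{-1}
h = mul(b, g)             # h = bg, an element of G \ B

sg = {g: 1, inv(g): 1}
sh = {h: 1, inv(h): 1}
lhs = asub(amul(sg, sh), amul(sh, sg))            # [g+g^{-1}, h+h^{-1}]

one, gsq = (0, 0), mul(g, g)
rhs = amul({inv(b): 2, b: -2}, {one: 1, gsq: 1})  # 2(b^{-1} - b)(1 + g^2)
```

Both sides give the same nonzero element of the integral group ring, supported on $B$ as claimed.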
Finally, assume that $G$ satisfies condition (4) of Theorem~\ref{Main}. As in the previous case the elements of order 2 are central and hence it is enough to prove that the elements of $C$ commute. Notice that $G/\ensuremath{\mathcal{Z}}(G)$ has exactly three non-trivial cyclic subgroups, say $\GEN{x\ensuremath{\mathcal{Z}}(G)}$, $\GEN{y\ensuremath{\mathcal{Z}}(G)}$ and $\GEN{z\ensuremath{\mathcal{Z}}(G)}$, and $z=uxy$ for some $u\in \ensuremath{\mathcal{Z}}(G)$. Moreover $G'=\{1,t=(x,y)\}$ and
$$(x+x^{-1})(y+y^{-1})(z+z^{-1}) = (x^2y^2+t)(1+x^2)(1+y^2)u$$ and therefore
\begin{equation}\label{Suficiency31}
(1-t)(x+x^{-1})(y+y^{-1})(z+z^{-1}) = 0.
\end{equation}
Consider $x_1,y_1,x_2,y_2\in G$ with $[x_1+x_1^{-1},y_1+y_1^{-1}]\ne 0$ and $[x_2+x_2^{-1},y_2+y_2^{-1}]\ne 0$.
We have to show that $[[x_1+x_1^{-1},y_1+y_1^{-1}],[x_2+x_2^{-1},y_2+y_2^{-1}]]=0$. Observe that $y_1\not \in \GEN{\ensuremath{\mathcal{Z}}(G),x_1}$ and $x_1\not \in \GEN{\ensuremath{\mathcal{Z}}(G),y_1}$ and therefore $G=\GEN{\ensuremath{\mathcal{Z}}(G),x_1,y_1}$. Similarly $G=\GEN{\ensuremath{\mathcal{Z}}(G),x_2,y_2}$. Moreover $t=(x_1,y_1)=(x_2,y_2)$ and $(y_i+y_i^{-1})(x_i+x_i^{-1})=t(x_i+x_i^{-1})(y_i+y_i^{-1})$ for $i=1,2$, so that
\begin{equation}\label{Suficiency32}
[x_i+x_i^{-1},y_i+y_i^{-1}] = (1-t)(x_i+x_i^{-1})(y_i+y_i^{-1}).
\end{equation}
As $G/\ensuremath{\mathcal{Z}}(G)$ has exactly 3 non-trivial cyclic subgroups then either $x_2\ensuremath{\mathcal{Z}}(G)\in \{x_1\ensuremath{\mathcal{Z}}(G),y_1\ensuremath{\mathcal{Z}}(G)\}$ or $y_2\ensuremath{\mathcal{Z}}(G)\in \{x_1\ensuremath{\mathcal{Z}}(G),y_1\ensuremath{\mathcal{Z}}(G)\}$. By symmetry, one may assume that $x_2\ensuremath{\mathcal{Z}}(G)=x_1\ensuremath{\mathcal{Z}}(G)$. So $x_2=ux_1$ with $u\in \ensuremath{\mathcal{Z}}(G)$. Moreover either $y_2\in y_1\ensuremath{\mathcal{Z}}(G)$ or $y_2\in x_1y_1\ensuremath{\mathcal{Z}}(G)$. In the first case $y_2=vy_1$ with $v\in \ensuremath{\mathcal{Z}}(G)$ and
\begin{eqnarray*}
[x_2+x_2^{-1},y_2+y_2^{-1}]&=&(1-t)(ux_1+ux_1^{-1})(vy_1+vy_1^{-1})=(1-t)uv(x_1+x_1^{-1})(y_1+y_1^{-1})\\&=&uv[x_1+x_1^{-1},y_1+y_1^{-1}].
\end{eqnarray*} In the second case $\GEN{x_1\ensuremath{\mathcal{Z}}(G)}=\GEN{x_2\ensuremath{\mathcal{Z}}(G)}$, $\GEN{y_1\ensuremath{\mathcal{Z}}(G)}$ and $\GEN{y_2\ensuremath{\mathcal{Z}}(G)}$ are the three different cyclic subgroups of $G/\ensuremath{\mathcal{Z}}(G)$. Then using (\ref{Suficiency31}) and (\ref{Suficiency32}) and the fact that $t$ is central we have
\begin{eqnarray*}
&&[x_1+x_1^{-1},y_1+y_1^{-1}][x_2+x_2^{-1},y_2+y_2^{-1}] = \\
&&(1-t)(x_1+x_1^{-1})(y_1+y_1^{-1})(1-t)(x_2+x_2^{-1})(y_2+y_2^{-1}) = 0\\
\end{eqnarray*} and
\begin{eqnarray*}
&&[x_2+x_2^{-1},y_2+y_2^{-1}][x_1+x_1^{-1},y_1+y_1^{-1}] = \\
&&(1-t)(x_2+x_2^{-1})(y_2+y_2^{-1}) (1-t)(x_1+x_1^{-1})(y_1+y_1^{-1})= 0\\
\end{eqnarray*} In both cases $[[x_1+x_1^{-1},y_1+y_1^{-1}],[x_2+x_2^{-1},y_2+y_2^{-1}]]=0$, as desired. \section{Necessary condition} Now we assume that $RG^+$ is Lie metabelian and we will prove that $G$ satisfies one of the conditions (1)-(4) of Theorem~\ref{Main}. This is easy if $\breve{G}$ is commutative, by Theorem~\ref{CasiAntisimetricos}. Thus unless otherwise stated we assume that $RG^+$ is Lie metabelian and $\breve{G}=\{g-g^{-1}:g\in G\}$ is not commutative.
A relevant role in the proof is played by the following normal subgroups of $G$:
\begin{eqnarray*}
A&=&\GEN{g\in G : g^2 = 1}, \\
B&=&\GEN{g\in G : \circ(g)\ne 4 }
\end{eqnarray*}
\noindent {\bf 4.1} \underline{Properties of $A$}
The first lemmas address the properties of $A$. (In fact the first lemma does not use the assumption that $\breve{G}$ is not commutative.)
\begin{lem}\label{PropiedadesA} \begin{enumerate} \item\label{ElementsA} Every element of $A$ is of the form $ab$ with $a^2=b^2=1$. \item\label{ACasiAntisimetrico} $\breve{A}=\{a-a^{-1}:a\in A\}$ is commutative. \item\label{(x,A)=1ox^2inA} For every $x\in G$ either $(x,A)=1$ or $x^2\in A$. \end{enumerate} \end{lem}
\begin{proof} (\ref{ElementsA}) We have to prove that a product of elements of order 2 of $G$ is also a product of at most two elements of order 2. By induction it is enough to show that if $x_1,x_2,x_3$ are elements of order 2 in $G$ then $x_1x_2x_3$ is the product of at most two elements of order 2. This is clear if $(x_1,x_2)=1$ or $(x_2,x_3)=1$. So assume that $(x_1,x_2)\ne 1\ne (x_2,x_3)$. If $(x_1,x_3)=1$ then $x_2x_3x_1$ is the product of two elements of order 2 and it is a conjugate of $x_1x_2x_3$. Thus we may also assume that $(x_1,x_3)\ne 1$. As, by hypothesis, $RG^+$ is Lie metabelian and $x_1^2=x_2^2=x_3^2=1$, we have \begin{eqnarray*} 0=[[x_1,x_2],[x_2,x_3]]&=&x_1x_3+x_2x_1x_3x_2+x_2x_3x_2x_1+x_3x_2x_1x_2\\&&-(x_3x_1+x_1x_2x_3x_2+x_2x_1x_2x_3+x_2x_3x_1x_2) \end{eqnarray*} Then $x_1x_3$ is one of the elements of the negative part and by assumption it is not any of the first three summands. Therefore $x_1x_3=x_2x_3x_1x_2$ and hence $(x_1x_2x_3)^2 = x_1 (x_2x_3x_1x_2) x_3 = x_1^2 x_3^2=1$, as desired.
(\ref{ACasiAntisimetrico}) Let $a\in A$ with $a^2\ne 1$. By (\ref{ElementsA}), $a=xy$ with $x^2=y^2=1$. Then $a-a^{-1}=xy-yx=[x,y]$. Now, using that $RG^+$ is Lie metabelian we deduce that $\{a-a^{-1}:a\in A\}$ is commutative.
(\ref{(x,A)=1ox^2inA}) Assume that $(x,A)\ne 1$. Then $(x,a)\ne 1$ for some $a\in G$ of order 2. By assumption \begin{eqnarray*} 0&=&[[x+x^{-1},a],[xa+ax^{-1},a]] \\&=& 2(ax^{-2}+x^{-2}a+xax+axaxa-x^2a-ax^2-x^{-1}ax^{-1}-ax^{-1}ax^{-1}a). \end{eqnarray*} Then $x^2a\in \{ax^{-2},x^{-2}a,xax,axaxa\}$. However $x^2a$ is not one of the last two elements because $(x,a)\ne 1$. Therefore $x^2a=ax^{-2}$ or $x^2=x^{-2}$. In the first case $x^2a$ has order 2 and therefore it belongs to $A$. In the second case $x^4=1$. In both cases $x^2\in A$, as desired. \end{proof}
\begin{lem}\label{Aabelian} $A$ is abelian. \end{lem}
\begin{proof} Recall that we are assuming that $\breve{G}$ is not commutative, and suppose by way of contradiction that $A$ is not abelian. By statement (\ref{ACasiAntisimetrico}) of Lemma~\ref{PropiedadesA}, $\breve{A}$ is commutative. Then $G\ne A$ and $A$ satisfies one of the two conditions of Theorem~\ref{CasiAntisimetricos}. In both cases $A$ contains elements $b$ and $c$ with $b^2\ne 1=c^2$, $(b,c)\ne 1$ and $(b^2,c)=1$. Moreover, if $x\in G\setminus A$ then one of $x$, $xb$, $xc$ or $xbc$ commutes with neither $b$ nor $c$. Thus we may assume that $(x,b)\ne 1 \ne (x,c)$.
As in the proof of statement (\ref{ACasiAntisimetrico}) of Lemma~\ref{PropiedadesA}, $b-b^{-1}$ is the additive commutator of two elements of order 2. Then \begin{eqnarray*} 0&=&[b-b^{-1},[x+x^{-1},c]] \\
&=& bxc+b^{-1}cx^{-1}+b^{-1}cx+bx^{-1}c+cx^{-1}b+cxb+x^{-1}cb^{-1}+xcb^{-1} \\
& & -bcx-bcx^{-1}-b^{-1}xc-b^{-1}x^{-1}c -xcb-x^{-1}cb -cxb^{-1} -cx^{-1}b^{-1} \end{eqnarray*} and therefore $$
bxc\in \{bcx,bcx^{-1}, b^{-1}xc, b^{-1}x^{-1}c, xcb, x^{-1}cb, cxb^{-1}, cx^{-1}b^{-1}\} $$ If $bxc=bcx$ then $(c,x)=1$, a contradiction. If $bxc=bcx^{-1}$ it follows that $(xc)^2=1$ and thus $x\in A$, a contradiction. If $bxc=b^{-1}xc$ then $b^2=1$, a contradiction. If $bxc=cx^{-1}b^{-1}$ then $(bxc)^2=1$ and hence $x\in A$, a contradiction. Therefore only four possibilities remain: \begin{itemize} \item[(a.1)] $bxc=b^{-1}x^{-1}c$ and thus $b^2x^2=1$. \item[(a.2)] $bxc=xcb$. \item[(a.3)] $bxc=cxb^{-1}$. \item[(a.4)] $bxc=x^{-1}cb$. \end{itemize}
Using similar arguments for $[b-b^{-1},[xc+cx^{-1},c]]=0$ we get that $$
bx\in \{bcxc,bx^{-1}, b^{-1}x, cx^{-1}cb, xb, x^{-1}b^{-1}, b^{-1}cx^{-1}c, cxcb^{-1}\} $$ If $bx=bcxc$ then $(x,c)=1$, a contradiction. If $bx=bx^{-1}$ then $x^2=1$ and thus $x\in A$, again a contradiction. If $bx=b^{-1}x$ then $b^2=1$, a contradiction. If $bx=x^{-1}b^{-1}$, then $(bx)^2=1$ and thus $x\in A$, again a contradiction. Therefore, since $(b,x)\neq 1$ only three possibilities remain, namely
\begin{itemize}
\item[(b.1)] $bxb^{-1}=cx^{-1}c$
\item[(b.2)] $b^2(xc)^2=1 $
\item[(b.3)] $bxb=cxc$
\end{itemize}
We now consider the two cases for $A$ mentioned at the beginning of the proof.
\emph{Case 1}: $K=\GEN{g\in A\;:g^2\ne 1}$ is abelian and $A=K\rtimes \GEN{\alpha}_2$ with $k^\alpha=k^{-1}$ for every $k\in K$. In particular $b^c=b^{-1}$.
If (b.1) holds $bxb^{-1}=cx^{-1}c$ then $(cbx)^2=(cb)^2=1$ and thus $x\in A$, a contradiction.
Assume now that (b.2) $b^2(xc)^2=1 $ holds. Then $bxc=b^{-1}cx^{-1}$. If case (a.1) holds then $bx=b^{-1}x^{-1}$ and therefore it follows that $b^{-1}cx^{-1}=bxc=b^{-1}x^{-1}c$ and thus $(c,x^{-1})=1$, a contradiction. If case (a.2) holds then $b^{-1}cx^{-1}=bxc=xcb$ and $(xcb)^2=xcbb^{-1}cx^{-1}=1$ and thus $x\in A$ a contradiction. If case (a.3) holds then $cbx^{-1}=b^{-1}cx^{-1}=bxc=cxb^{-1}$ and so $bx^{-1}=xb^{-1}$ which implies that $(xb^{-1})^2=1$ and hence $x\in A$, a contradiction. If (a.4) $bxc=x^{-1}cb=x^{-1}b^{-1}c$ holds then $bx=x^{-1}b^{-1}$ and therefore $bx\in A$ again a contradiction.
Finally assume that (b.3) $bxb=cxc$ holds. Then $bx=cxcb^{-1}=cxbc$ and thus $bxc=cxb$. If case (a.1) holds then $bx=b^{-1}x^{-1}$ and therefore it follows that $(xcb)^2=xcbbxc=xcbb^{-1}x^{-1}c=1$ and thus $x\in A$, a contradiction. If case (a.2) holds then $cxb=bxc=xcb$ and thus $(x,c)=1$, a contradiction. If case (a.3) holds then $cxb=bxc=cxb^{-1}$ and hence $b=b^{-1}$, a contradiction. If (a.4) holds using the same argument as in case (b.2) we get a contradiction, that finishes the proof in this case.
\emph{Case 2}. $A$ contains an elementary abelian subgroup $C$ of index 2 in $G$. We can assume that $c\in C$, $b^2\in C$ and $b$ has order 4.
Assume first that (b.1) $bxb^{-1}=cx^{-1}c$ holds. Then $(cbx)^2=(cb)^2=(bc)^2$ and thus $(bc)^2\neq 1\neq (cb)^2$, because $cbx\not\in A$. If case (a.1) holds then
\begin{eqnarray*} [[b+b^{-1},c],[c,x+x^{-1}]] &=& 4(bx+b^{-1}x+cbxc+cb^{-1}xc-b^{-1}cxc-bcxc-cbcx-cb^{-1}cx)\\ &=& 4(bx+b^{-1}x+cbxc+cb^{-1}xc-xb-x^{-1}b-cbcx-cb^{-1}cx)\\ &=&0. \end{eqnarray*} Then $bx\in \{xb,x^{-1}b,cbcx, cb^{-1}cx\}$. Since $(b,x)\ne 1$ and $(b,c)\ne 1$ it follows that $bx=cb^{-1}cx$. Then $cb=b^{-1}c$ and hence $(cb)^2=1$ a contradiction.
If case (a.2) holds then \begin{eqnarray*} [[b+b^{-1},c],[c,x+x^{-1}]] &=& 4(cb^{-1}xc+cb^{-1}x^{-1}c+cbxc+cbx^{-1}c\\ &&-b^{-1}cxc-bcxc-b^{-1}cx^{-1}c-bcx^{-1}c)\\ &=& 4(cb^{-1}xc+b^{-1}x+cbxc+bx\\ &&-x^{-1}b^{-1}-bcxc-xb^{-1}-bcx^{-1}c)\\ &=&0. \end{eqnarray*} Thus $bx\in \{x^{-1}b^{-1},bcxc,xb^{-1}, bcx^{-1}c\}$. Since $(bx)^2\ne 1\ne (c,x)$ and $(cx)^2\ne 1$ it follows that $bx=xb^{-1}$. Then using (a.2) we get that $b^{-1}c=cb$ and therefore $(cb)^2=1$ a contradiction.
If case (a.3) holds then $x^2=(bc)^2=(cb)^2$ and thus $(b,x^2)=1=(x^2,c)$. Hence we have that
\begin{eqnarray*} &&[[b+b^{-1},c],[b+b^{-1},cx+(cx)^{-1}]] =\\ && 2(3x^{-1} + 2b^2x^{-1} + 2x^{-1}b^2 + 2bx^{-1}b + 2b^{-1}x^{-1}b^{-1} + 3bx^{-1}b^{-1} + b^{-1}x^{-1}b + b^2x^{-1}b^2\\
&&-3x - 2b^2x - 2xb^2 - 2bxb - 2b^{-1}xb^{-1} - 3bxb^{-1} - b^{-1}xb - b^2xb^2)= 0, \end{eqnarray*} and hence $x^{-1}\in\{x, b^2x, xb^2, bxb, b^{-1}xb^{-1}, bxb^{-1}, b^{-1}xb, b^2xb^2\}$. Notice that $b^2\neq x^2$ because in this case $b^2=(bc)^2$ and therefore $(b,c)=1$, a contradiction. Thus, since $x^2\neq 1$, $b^2\neq x^2$, $(bx)^2\neq 1 \neq (xb^{-1})^2$ it follows that $x^{-1}=b^2xb^2$. Then $(b^2x)^2=1$ and hence $b^2x\in A$ a contradiction.
Finally, if case (a.4) holds then $bxc=x^{-1}cb=cbx$ and $(bx)^2=(bc)^2=(cb)^2=(xb)^2$. Thus the following commutator \begin{eqnarray*}
0&=& [[c,b+b^{-1}],[c,x+x^{-1}]] \\ &=& xb+xb^{-1}+x^{-1}b+x^{-1}b^{-1}+cbcx+cbcx^{-1}+cb^{-1}cx+cb^{-1}cx^{-1}+bcxc\\ &&+bcx^{-1}c+b^{-1}cxc+b^{-1}cx^{-1}c +cxbc+cxb^{-1}c+cx^{-1}bc+cx^{-1}b^{-1}c\\ && -bx-bx^{-1}-b^{-1}x-b^{-1}x^{-1}-cbxc-cbx^{-1}c-cb^{-1}xc -cb^{-1}x^{-1}c\\ &&-cxcb-cxcb^{-1}-cx^{-1}cb-cx^{-1}cb^{-1}-xcbc-xcb^{-1}c-x^{-1}cbc-x^{-1}cb^{-1}c\\ &=& 2(xb+xb^{-1}+x^{-1}b+2x^{-1}b^{-1}+b^2xb+b^2xb^{-1}+b^2x^{-1}b^{-1}\\ && -2bx-b^{-1}x-bx^{-1}-b^{-1}x^{-1}-bxb^2-bx^{-1}b^2-b^{-1}x^{-1}b^2). \end{eqnarray*} Moreover $xb\not\in \{bx,b^{-1}x^{-1},bx^{-1}b^2\}$ because $(b,x)\ne 1$, $xb\not\in A$ and $xb^{-1}\not\in A$. Therefore $xb \in \{bx^{-1},b^{-1}x,bxb^2,b^{-1}x^{-1}b^2\}.$ Hence either $x^b=x^{-1}$ or $b^x=b^{-1}$. But in the first case $x^{-1}bc=bxc=x^{-1}cb$, in contradiction with $(b,c)\ne 1$. Thus $b^x=b^{-1}$ and taking the following commutator we have
\begin{eqnarray*}
0&=& [[b+b^{-1},c],[b+b^{-1},cx+cx^{-1}]] =16(cb+cb^{-1}-bc+b^{-1}c)x^{-1}cb^{-1}. \end{eqnarray*} Hence $cb=b^{-1}c$ and thus $(bx)^2=(bc)^2=1$ in contradiction with $bx\not\in A$.
Secondly assume that $(b.2)$ $b^2(xc)^2=1$ holds. In case (a.1) we have $x^2=b^2=(xc)^2$ and therefore $x=cxc$ in contradiction with $(x,c)\ne 1$. In case (a.2), $(bxc)^2=b(xc)^2b=1$, in contradiction with $bxc\not\in A$. For cases (a.3) and (a.4) we consider the following double commutator \begin{eqnarray*}
0 &=& [[c,b+b^{-1}],[c,cx+(cx)^{-1}]] \\
&=& 4(bxc +b^{-1}xc+cbcx^{-1}c+cb^{-1}cx^{-1}c -bx^{-1}c-b^{-1}x^{-1}c-cbcxc-cb^{-1}cxc) \end{eqnarray*} Then $bxc\in \{bx^{-1}c,b^{-1}x^{-1}c,cbcxc,cb^{-1}cxc\}$. However $bxc \not\in \{bx^{-1} c,cbcxc\}$, since $x^2\ne 1\ne (b,c)$. Thus either $b^2=x^2$ or $b^c=b^{-1}$. If $b^2=x^2$ then by (b.2) we have that $x^2=(xc)^2$ and hence $(x,c)=1$, a contradiction. Thus $b^c=b^{-1}$. In case (a.3), since $b^2=(xc)^2$ and $b^c=b^{-1}$, we get that $cx^{-1}=(xc)^3=bcxb^{-1}$ and thus $x^{-1}=b^{-1}xb^{-1}$. Thus $(bx^{-1})^2=1$, so that $x\in A$, a contradiction. In case (a.4) we get that $bx=x^{-1}cbc=x^{-1}b^{-1}$ and hence $(bx)^2=1$, again a contradiction.
Finally assume that (b.3) $bxb=cxc$ holds.
In case (a.1) $x^2=b^2$. Now consider the commutator \begin{eqnarray*} [[b+b^{-1}, c],[b+b^{-1},x+x^{-1}]] &=& 8(cbxb^{-1}+cb^{-1}xb^{-1}+bcbx^{-1}+b^{-1}cbx^{-1}\\&&-cx^{-1}-cb^2x^{-1}-bcxb^{-1}-b^{-1}cxb^{-1}) =0. \end{eqnarray*}
Then it follows that $cbxb^{-1}\in\{-cx^{-1},cb^2x^{-1},bcxb^{-1},b^{-1}cxb^{-1}\}$. If $cbxb^{-1}=cx^{-1}$ then $bxb^{-1}=x^{-1}$ and since $x^2=b^2$ we get that $bxb=x$. Therefore by (b.3) we obtain that $cxc=bxb=x$ and hence $(c,x)=1$, a contradiction. If $cbxb^{-1}=cb^2x^{-1}=cx$ it follows that $(b,x)=1$, a contradiction. If $cbxb^{-1}=bcxb^{-1}$ then $(b,c)=1$, a contradiction. Finally if $cbxb^{-1}=b^{-1}cxb^{-1}$ then $cb=b^{-1}c$ and hence $(bcx)^2=bcxccbcx=bbxbcbcx=b^2x^2=1$, again a contradiction.
In case (a.2) notice that we have $(b^2,x)=1$. In fact $b^2xc=bxcb=xcb^2=xb^2c$. Now we consider \begin{eqnarray*} 0&=& [[c,b+b^{-1}],[c,x+x^{-1}]] \\ &=& 4(bcx+b^{-1}cx+bcx^{-1}+b^{-1}cx^{-1} -cbx-cb^{-1}x-cbx^{-1}-cb^{-1}x^{-1})c \end{eqnarray*} Then $bc \in \{cb^{-1}, cbx^{-2}, cb^{-1} x^{-2}\}$, because $bc\ne cb$. But if $bc=cb^{-1}$ then $bxc=xb^{-1}c$ and so $bx=xb^{-1}$. Therefore $cxc=bxb=x$ and thus $(c,x)=1$, a contradiction. Hence $x^{-2} \in \{(b,c),(bc)^2\}\subseteq C$ and thus $x$ has order 4. But if $x^2=(b,c)$, since $(b^2,x)=1$ then $(bcx)^2= bcx(cbx^2)x=bcxcbx^3=b^2xb^2x^3=1$ and so $bcx\in A$, a contradiction. On the other hand, if $x^2=(bc)^2$, by (a.2) we have that $bx=xcbc$ and then $cxc=bxb=x(cb)^2=x^{-1}$, so that $(cx)^2=1$, a contradiction.
In case (a.3) we have that $bxc=cxb^{-1}$. Recall that by (b.3) we have that $cx=bxbc$ and therefore $bxc=cxb^{-1}=bxbcb^{-1}$, in contradiction with $(b,c)\ne 1$.
Finally assume that (a.4) $bxc=x^{-1}cb$ holds. Furthermore, from the relations $c^2=b^4=(c,b^2)=1$, $cxc=bxb$ we get the following computation
\begin{eqnarray*}
0&=& [[c,b+b^{-1}],[c,x+x^{-1}]] \\ &=& 4xb+2xb^{-1}+2x^{-1}b+4x^{-1}b^{-1}+2b^2x^{-1}b^{-1}+2b^2xb\\ && -4bx-2bx^{-1}-2b^{-1}x-4b^{-1}x^{-1}-2bxb^2-2b^{-1}x^{-1}b^2 \end{eqnarray*}
Moreover $xb\not\in \{bx,b^{-1}x^{-1},bxb^2\}$ because $(b,x)\ne 1$, $bx\not\in A$ and $xb\ne cxcb=bxb^2$. Thus $xb \in \{bxb^2,b^{-1}x^{-1}b^2\}.$ If $xb=bxb^2$ it follows that $x=bxb=cxc$, in contradiction with $(x,c)\ne 1$. Thus $xb=b^{-1}x^{-1}b^2$ and hence $bx=x^{-1}b$. Therefore by (a.4) we get that $x^{-1}bc=bxc=x^{-1}cb$, contradicting $(b,c)\neq 1$. This finishes the proof of the lemma. \end{proof}
\begin{lem}\label{AAbelConsecuencias} Let $a\in A$ and $x,y\in G$ be such that $(a,x)\ne 1 \ne (a,y)$. Then \begin{enumerate} \item\label{Orden4} $x$ and $y$ have order 4. \item\label{CuadradoCentral} $(x^2,y)=(x,y^2)=1$. \item\label{Traza1} $aa^xa^ya^{xy}=1$. \item\label{GModuloAAbeliano} $(x,y)\in A$. \end{enumerate} \end{lem}
\begin{proof} (\ref{Orden4}) is a consequence of statement~(\ref{(x,A)=1ox^2inA}) of Lemma~\ref{PropiedadesA} and the fact that $A$ is abelian.
(\ref{CuadradoCentral}) Consider $$\matriz{{l} [[x+x^{-1},y+y^{-1}],[(xy)+(xy)^{-1},y+y^{-1}]] = \\ 2 y^{-1} + 3xy^{-1}x + xyxy^2 + y^{-1}xy^{-1}xy + y^{-1}xyxy^{-1} + 2yxyxy + 2y^{-1}xy^2x \\ +y^{-1}xyx^{-1}y + x^{-1}y^{-1}xy^2 + yx^{-1}y^{-1}xy + yxyx^{-1}y^{-1} + 2x^2y^{-1} \\
+ x^{-1}yxy^2 + y^{-1}xyx^{-1}y^{-1} + yx^{-1}yxy + yxyx^{-1}y \\% xyx^{-1} = y^{-1}
+ 3xyx + xy^{-1}xy^2 + 2y^{-1}xyxy + yxy^{-1}xy + yxyxy^{-1}
\\ + 2y^{-1}xy^2x^{-1}\\ - 2y - 3x^{-1}y^{-1}x^{-1} - 3x^{-1}yx^{-1} - 2yx^2 - 2xy^2xy - 2x^{-1}y^2xy \\ - 2y^{-1}x^{-1}y^{-1}x^{-1}y^{-1} - 2y^{-1}x^{-1}y^{-1}x^{-1}y - y^{-1}x^{-1}y^{-1}xy^{-1} - y^{-1}x^{-1}yx^{-1}y^{-1} \\ - y^{-1}x^{-1}yx^{-1}y - y^{-1}x^{-1}yxy^{-1} - y^{-1}xy^{-1}x^{-1}y^{-1} - y^{-1}xy^{-1}x^{-1}y - y^2x^{-1}y^{-1}x^{-1} - y^2x^{-1}y^{-1}x \\ - y^2x^{-1}yx^{-1} - y^2x^{-1}yx - yx^{-1}y^{-1}x^{-1}y^{-1} - yx^{-1}y^{-1}x^{-1}y - yxy^{-1}x^{-1}y^{-1} - yxy^{-1}x^{-1}y }$$
By assumption the result of the previous calculation should be 0. Notice that if $xy$ and $xy^{-1}$ belong to $A$ then, since $A$ is abelian, $(x^2,y)=(y^2,x)=1$ as desired. So assume that $xy$ and $xy^{-1}$ do not belong to $A$. Moreover, $y$ must appear in the support of the positive part. It cannot be one of the elements of the first line (after the equality) because none of $y$, $xy^{-1}$, $xy$ and $xy^2$ has order 2, as they do not belong to $A$. If $y$ belongs to the support of the second line then $(x,y)=1$ or $x^2=y^2$, as desired. If $y$ belongs to the support of the third line then $y^x=y^{-1}$ and if $y$ belongs to the fourth line then $x^y=x^{-1}$. In both cases $(x,y)=1$, as desired. Finally if $y=y^{-1}xy^2x^{-1}$ then $(x,y^2)=1$. Since $x$ and $y$ play symmetric roles it also follows that $(x^2,y)=1$.
(\ref{Traza1}) By (\ref{CuadradoCentral}), both $x^2$ and $(ax)^2=aa^xx^2$ commute with $y$ and hence $a^ya^{xy}=(aa^x)^y=aa^x$. Thus $aa^xa^ya^{xy}=1$, as desired.
(\ref{GModuloAAbeliano}) By means of contradiction assume that $(x,y)\not\in A$. Using (\ref{CuadradoCentral}) we have $1\ne (x,y)^2 = (x^2y^2 (xy)^2)^2 = (xy)^4$.
\textbf{Claim}: $x^2=y^2$. By means of contradiction assume $x^2\ne y^2$ and consider the following double commutator. $$\matriz{{l} [[x+x^{-1},xy+(xy)^{-1}],[x+x^{-1},(xy)^2x+((xy)^2x)^{-1}]] = \\ 8(y^{-1}+x^2y^{-1}+(xy)^2y+xyx^{-1}y^2+(yx)^2y^{-1}+y(xy)^2x^2y^2+(xy)^4y+(xy)^{3}x^{-1}y^2 \\ -y-yx^2-(yx)^2y-xyx-xyx^{-1}-(yx)^{3}x-(xy)^{3}x-(xy)^{3}x^{-1}).}$$ By assumption this is 0. Having in mind that $y\ne y^{-1}$, $x^2\ne y^2$ and $(xy)^4\ne 1$ (in particular $(xy)^2\ne 1$, $(yx)^2y^2\ne 1\ne(xy)^2x^2y^2$), and comparing $y$ with the elements with positive coefficient, we deduce that either $xyx^{-1}y=1$ or $(xy)^4=x^2$. In the first case $(x,y)=y^2\in A$, yielding a contradiction. We conclude that $(xy)^4=x^2$ and hence $(xy)^4=(yx)^4$. By symmetry we deduce that $(yx)^4 = y^2$. Then $y^2=(yx)^4=(xy)^4 =x^2$, again a contradiction. This finishes the proof of the claim.
Let $z=xy$. Then $z^2=(x,y)\not\in A$. Consider the following double commutator $$\matriz{{l} [[x+x^{-1},a],[x+x^{-1},xz+(xz)^{-1}]] = \\
8(azx+azx^{-1}+a^xz^{-1}x^{-1}+a^xz^{-1}x-az^{-1}x^{-1}-az^{-1}x-a^xzx^{-1}-a^xzx)=0 }$$ Then $azx\in\{az^{-1}x^{-1}, az^{-1}x, a^xzx^{-1}, a^xzx\}$. If $azx=az^{-1}x^{-1}$ then $z^2=x^{-2}=x^2$ and thus $z^4=1$, a contradiction. If $azx=az^{-1}x$ then $z^2=1$, a contradiction. If $azx=a^xzx^{-1}$ then $a=a^xx^2=x^{-1}ax^{-1}$ and therefore $(ax)^2=1$, a contradiction because $x\not\in A$. Finally, if $azx=a^xzx$ then $a=a^x$, again a contradiction because $(a,x)\neq 1$.
\end{proof}
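The calculations above all follow the same pattern: expand a double Lie commutator of symmetric elements $g+g^{-1}$ in the group algebra and compare the supports of the positive and negative parts. This bookkeeping is easy to mechanize. The following Python sketch (our own, hypothetical helper names; integer coefficients stand in for a commutative ring $R$) implements the group algebra of the quaternion group $Q_8$ and checks that there the symmetric elements already commute, so $Q_8$ satisfies the Lie metabelian law trivially and falls outside the standing hypothesis that $\breve{G}$ is not commutative.

```python
from itertools import product

# Q8 realised inside the integer quaternions: (a, b, c, d) = a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def inv(g):
    a, b, c, d = g
    return (a, -b, -c, -d)          # for unit quaternions the inverse is the conjugate

# An element of the group algebra is a dict {group element: coefficient}.
def mul(u, v):
    w = {}
    for (g, a), (h, b) in product(u.items(), v.items()):
        gh = qmul(g, h)
        w[gh] = w.get(gh, 0) + a*b
    return {g: c for g, c in w.items() if c}

def bracket(u, v):                   # Lie bracket [u, v] = uv - vu
    uv, vu = mul(u, v), mul(v, u)
    w = {g: uv.get(g, 0) - vu.get(g, 0) for g in set(uv) | set(vu)}
    return {g: c for g, c in w.items() if c}

def sym(g):                          # the symmetric element g + g^{-1}
    u = {g: 1}
    gi = inv(g)
    u[gi] = u.get(gi, 0) + 1
    return u

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(bracket(sym(i), sym(j)))       # {} : the symmetric elements of Q8 commute
```

Replacing the multiplication table by that of another small 2-group makes the same `bracket` and `sym` helpers usable for the groups appearing in the later lemmas.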
\begin{lem}\label{ACentral} $A\subseteq \ensuremath{\mathcal{Z}}(G)$. \end{lem}
\begin{proof}Recall that we are assuming that $\breve{G}$ is not commutative and $A$ is not contained in the center of $G$. Let $x\in G$ and $a\in A$ be such that $(x,a)\ne 1$. By Lemma~\ref{Aabelian}, $A$ is an elementary abelian 2-group. We claim that $G/A$ is an elementary abelian 2-group too. Indeed, if $y\in G$ then either $y$ or $yx$ does not commute with $a$, so, replacing $y$ by $yx$ if necessary, we may assume that $x$ and $y$ satisfy the conditions of Lemma~\ref{AAbelConsecuencias}. Then $x^2,y^2,(x,y)\in A$ and therefore $\GEN{A,x,y}/A$ is elementary abelian. Thus $y^2\in A$. This proves the claim.
By Theorem~\ref{CasiAntisimetricos}, $[G:A]>2$, for otherwise $\breve{G}$ is commutative. Hence $G\ne \GEN{A,x}$. Let $y\in G\setminus \GEN{A,x}$. By replacing $y$ by $xy$ if needed one may assume that $(a,y)\ne 1$. By replacing $y$ by $ay$ we may also assume that $t=(x,y)\ne 1$. By Lemma~\ref{AAbelConsecuencias}, we deduce that $x$ and $y$ have order 4, $(x^2,y)=(x,y^2)=1=aa^xa^ya^{xy}$ and $t$ has order 2. Using this we deduce that $(t,x)=(t,y)=1$ and $(y+y^{-1})(x+x^{-1})=t(x+x^{-1})(y+y^{-1})$.
Thus \begin{eqnarray}\label{axay} \nonumber &&[[a,x+x^{-1}],[a,y+y^{-1}]] \\
&&\hspace{3cm} = (aa^x+aa^y+aa^{xy}t+t-aa^{xy}-1-aa^yt-aa^xt)(1+x^2+y^2+x^2y^2)xy
\end{eqnarray}
By assumption this is 0 and therefore one of the following conditions hold: $$aa^x\in \GEN{x^2,y^2}, \; aa^{y}\in \GEN{x^2,y^2}, \; t\in \GEN{x^2,y^2}, \; aa^{xy}t\in \GEN{x^2,y^2}.$$ This implies that one out of 16 equalities holds. However seven of them yield a contradiction with the fact that $aa^x$, $aa^y$, $(ax)^2$, $(ay)^2$, $(axy)^2$, $(xy)^2$ and $t$ are all different from 1. We classify the remaining nine equalities as follows:
$$\matriz{{lrclllrcl}
\text{(a)} & aa^{xy}t &=&1. \\
\text{(b)} & t &=&x^2. & & \text{(b')} & t&=&y^2. \\
\text{(c)} & aa^x&=&y^2. & & \text{(c')} & aa^y&=&x^2. \\
\text{(d)} & aa^x &=& x^2y^2.& & \text{(d')} & aa^y&=&x^2y^2.\\
\text{(e)} & aa^{xy}t &=&x^2. & & \text{(e')} & aa^{xy}t&=&y^2. \\
}$$ By symmetry we only have to consider the five cases in the left column. On the other hand one can pass from case (e) to case (b) by replacing $x$ by $x_1=ax$. Indeed, if case (e) holds then $(x_1,y)= a^x a^{xy}t = aa^xx^2 = x_1^2\ne 1$. Therefore, we only have to consider cases (a)-(d).
Replacing in (\ref{axay}), $y$ by $ay$ and therefore $t$ by $(x,ay)=aa^xt$, and $y^2$ by $(ay)^2 = aa^y y^2$ we obtain \begin{eqnarray}\label{axaay} \nonumber &&[[a,x+x^{-1}],[a,ay+(ay)^{-1}]] \\ &&\hspace{4cm} = (aa^x+aa^y-aa^{xy}-1)(1+t)(1+x^2+aa^yy^2+aa^yx^2y^2)xay
\end{eqnarray}
In cases (a) or (b), equation (\ref{axaay}) takes the form $$[[a,x+x^{-1}],[a,ay+(ay)^{-1}]] = 2(aa^x+aa^y-aa^{xy}-1)(1+x^2+aa^yy^2+aa^yx^2y^2)xay$$ Thus, in these cases $\GEN{x^2,aa^yy^2}$ contains either $aa^x$ or $aa^y$.
This yields eight possible equalities, but again we can exclude four of them because neither $y^2$, $aa^x$, $aa^y$ nor $(ax)^2$ is 1. Thus \begin{equation}\label{abc3}
\text{In cases (a) or (b) either } x^2=aa^y, y^2=a^xa^y, x^2y^2=a^xa^y \text{ or } x^2=y^2. \end{equation}
On the other hand, in cases (c) and (d) equation (\ref{axaay}) takes the form \begin{equation}\label{axaaycd} [[a,x+x^{-1}],[a,(ay)+(ay)^{-1}]]
= 2(aa^x-1)(1+t)(1+x^2+aa^{xy}+aa^{xy}x^2)xay. \end{equation} and this is 0 if and only if $\GEN{aa^{xy},x^2}$ contains either $aa^x$ or $aa^xt$. However $aa^x\ne 1, a^xa^{xy}\neq 1$ and $aa^xx^2\neq1$ (for otherwise $a=a^x$, $a=a^y$ or $(ax)^2=1$). Hence \begin{equation}\label{cd5}
\text{In cases (c) and (d) either } aa^y=x^2, t=aa^x, t=aa^xx^2, t=aa^y \text{ or } t=aa^yx^2. \end{equation}
Now we deal separately with the four cases (a)-(d).
{\bf Case (a)} Suppose $aa^{xy}t=1$. Then the last option of (\ref{abc3}) can be excluded because $(xy)^2=t=aa^{xy}$ and hence $(ayx)^2=1$, a contradiction. If $x^2y^2=a^xa^y$ then since $a^xa^y=aa^{xy}=t=x^2y^2(xy)^2$, it follows that $(xy)^2=1$, a contradiction. It remains to consider the cases $x^2=aa^y$ and $y^2=a^xa^y$. In the first case $$[[a,x+x^{-1}],[a,xy+(xy)^{-1}]] =
4(y^2-1)(1+aa^y+aa^x y^2 + a^x a^y y^2)y = 0,$$
and thus $y^2\in\{1, aa^y, aa^xy^2,a^xa^yy^2\}$, yielding a contradiction in all cases. Finally, if $y^2=a^xa^y=aa^{xy}$ then $$ [[a,x+x^{-1}],[a,xy+(xy)^{-1}]] =
4(1-a^x a^y)(1+aa^y +aa^y x^2 + x^2)y^{-1}= 0,$$ and therefore $t=y^2=a^x a^y \in \{1,aa^y,aa^y x^2,x^2\}$, a contradiction because $t\ne 1$, $a\ne a^x$, $aa^xx^2 = (ax)^2 \ne 1$ and $x^2\ne y^2$.
{\bf Case (b)} Assume $t=x^2$, or equivalently $x^y=x^{-1}$.
We consider separately the four cases of (\ref{abc3}). If $aa^y=x^2$ then $$ [[a,x+x^{-1}],[a,axy+(axy)^{-1}]] = 4(a+a^y+a^yy^2+a^{xy}y^2-a^x -ay^2-aa^{x}- a^y y^2)y=0 $$ and thus $a\in \{a^x,ay^2, aa^x,a^yy^2\}$, a contradiction because $a\ne a^y,a^{xy}$, $y^2\ne 1$ and $1\ne (ay)^2 = aa^y y^2$.
If $x^2=y^2$ then $$\matriz{{l} [[a,ax+x^{-1}a],[a,ay+y^{-1}a]] = 4(1+aa^{xy}+aa^xt+a a^y-aa^x-aa^y-t-aa^{xy}t)xy=0 , }$$
and therefore $1\in \{aa^x,aa^y,t, aa^{xy}t\}$, a contradiction because $aa^x,aa^y,t\ne 1$ and $1\ne (axy)^2 = aa^{xy}t$. If $y^2 = aa^{xy}x^2$ then $$\matriz{{l} [[a,x+x^{-1}],[a,xy+y^{-1}x^{-1}]] = 4(y^2+x^2y^2+aa^{x}+aa^xx^2-1-x^2-aa^y-aa^yx^2)y =0, }$$ and hence $1\in \{y^2,x^2y^2,aa^{x}, aa^xx^2\}$. Notice that since $y^2,x^2y^2,aa^x \ne 1$, it follows that $aa^xx^2=1$. Then $y^2=aa^{xy}x^2=aa^{xy}aa^x=aa^y$ and therefore $(ay)^2=1$, a contradiction.
Finally if $y^2=a^xa^y=aa^{xy}=ay^{-1}x^{-1}axy=ayxayxy^2$, then $1=(ayx)^2$ a contradiction.
{\bf Case (c)} Suppose $aa^x=y^2$. In this case the third and fifth options of (\ref{cd5}) take the forms $t=y^2x^2$ and $t=aa^yx^2=a^xy^2a^yx^2=aa^{xy}x^2y^2$, respectively, which implies $(xy)^2=1$ and $(axy)^2=1$, respectively. This contradicts our hypotheses on $x$ and $y$. The second and fourth options take the forms $t=y^2$ and $t=aa^y=aa^{xy}y^2$ which are cases (b') and (e') respectively. Since these cases have been excluded we are left with only one case: $aa^y=x^2$. Then
$$ [[a,(ax)+(ax)^{-1}],[a,(ay)+(ay)^{-1}]] =
4 (1+aa^{xy}+aa^xt+aa^yt-aa^x-aa^y-t-aa^{xy}t)xy=0 $$ and thus $1\in \{aa^x, aa^y, t, aa^{xy}t\}$. Recall that $aa^x,aa^y,t\ne 1$ therefore $1=aa^{xy}t$ a contradiction because $1 \ne (xy)^2 = tx^2y^2=aa^{xy}t$.
{\bf Case (d)} Finally suppose $aa^x=x^2y^2$. Then the second and fourth option of (\ref{cd5}) take the forms $t=x^2y^2$ and $t=aa^{xy}x^2y^2$, respectively, and this implies $(xy)^2=1$ and $(axy)^2=1$, respectively, again a contradiction. The third and fifth options take the form $t=y^2$ and $t=aa^{xy}y^2$ which are cases (b') and (e'), already excluded. Thus the only remaining case is $aa^y=x^2$, which is case (c'), already excluded. This finishes the proof of the lemma. \end{proof}
\noindent \textbf{4.2} \underline{Properties of $B$}
We now address the properties of $B=\GEN{g\in G: \circ (g)\ne 4}$. For that we start with the following lemma.
\begin{lem}\label{Orden4NoConmutan} Let $x,y\in G$ with $x^4\ne 1 \ne y^4$ and $(x,y)\ne 1$. Then \begin{enumerate}
\item\label{x4NoDihedral} $x^y\ne x^{-1}$ and $y^x\ne y^{-1}$.
\item\label{x4x2y2} $x^2y^2\ne 1$ and $x^2\ne y^2$.
\item\label{x4NoDihedraly^2} $(y^2)^x \ne y^{-2}$ and $(x^2)^y\ne x^{-2}$.
\end{enumerate} \end{lem}
\begin{proof} The assumption implies that $xy^i,y^ix,x^iy,yx^i\not\in \ensuremath{\mathcal{Z}}(G)$ for every $i\in \ensuremath{\mathbb{Z}}$. Hence $(xy^i)^2$, $(y^ix)^2$, $(x^iy)^2$ and $(yx^i)^2$ are all different from 1, by Lemma~\ref{ACentral}.
(\ref{x4NoDihedral}) By symmetry it is enough to show that $x^y\ne x^{-1}$. Otherwise \begin{eqnarray}\label{eq} 0&=& [[y+y^{-1},xy+(xy)^{-1}],[y+y^{-1},xy^2+(xy^2)^{-1}]] \nonumber\\ &=&2(4y^{-1}+2x^{-2}y+2x^2y+6y^{-3}+3x^{-2}y^3+3x^2y^3+2y^{-5}+x^{-2}y^5+x^2y^5 \\ && -4y-2x^{-2}y^{-1}-2x^2y^{-1}-6y^3-3x^{-2}y^{-3}-3x^2y^{-3}-2y^5-x^{-2}y^{-5}-x^2y^{-5})\nonumber \end{eqnarray}
Then $y\in\{y^{-1}, x^{-2}y, x^2y, y^{-3}, x^{-2}y^3, x^2y^3, y^{-5}, x^{-2}y^5, x^2y^5\}$. Having in mind that $y^4\ne 1 \ne x^4$ we deduce that $y^6=1$ or $x^2\in \{y^2,y^{-2},y^4,y^{-4}\}$. If $y^6=1$, then introducing this relation in (\ref{eq}) we get that either $x^2\in \{y^2,y^{-2},y^4,y^{-4}\}$ or a contradiction with $y^4\neq 1\neq x^4$. Thus in all cases $x^2$ is a power of $y$, so that $x^2=(x^2)^y = x^{-2}$, a contradiction.
(\ref{x4x2y2}) Observe that the inequalities $x^2y^2\ne 1$ and $x^2 \ne y^2$ transfer to each other by replacing $y$ by $y^{-1}$. Thus it is enough to prove the first inequality. So assume $x^2y^2=1$. Then $(x,y^2)=(x^2,y)=1$ and \begin{equation}\label{corch1} \begin{array}{l} [[x+x^{-1},y+y^{-1}],[x+x^{-1},x^{-1}y+(x^{-1}y)^{-1}]] = \\ 2(4x^{-1}+6x^{-1}y^2+2y^{-1}xy+2xyx^{-1}y^{-1}x+y^{-1}xy^{-1}x^2+xy^{-1}xy^{-1}x\\ +3xyxy^{-1}x+3yxy^{-1}x^2+2x^{-3}y^2 \\ -4x-6y^{-2}x-2yx^{-1}y^{-1}-3yx^{-1}y-yx^{-3}y-2xy^{-1}x^{-1}yx^{-1}\\ -3x^{-1}yx^{-1}yx-x^{-1}yx^{-1}yx^{-1}-2y^{-1}x^2y^{-1}x)=0 \end{array} \end{equation}
Then $x\in\{x^{-1}, x^{-1}y^2, y^{-1}xy, xyx^{-1}y^{-1}x, y^{-1}xy^{-1}x^2, xy^{-1}xy^{-1}x, xyxy^{-1}x, yxy^{-1}x^2, x^{-3}y^2\}$. Having in mind that $x^4\ne 1$, $y^2=x^{-2}\ne x^2$, $(x,y)\ne 1$, $(xy^{-1})^2\ne 1$ and $x^y\ne x^{-1}$, it follows that the only possibility is $x=x^{-3}y^2$.
However $x$ has coefficient $-4$ while $x^{-3}y^2$ has coefficient $2$, thus expression (\ref{corch1}) is non-zero, yielding a contradiction.
(\ref{x4NoDihedraly^2}) By symmetry it is enough to prove that $(y^2)^x \neq y^{-2}$. By means of contradiction assume that $(y^2)^x = y^{-2}$; in particular $(x,y^2)\neq 1$. Let $y_1=xy^2$. Then $(x,y_1)\ne 1$ and $y_1^2=x^2(y^2)^xy^2=x^2$. Hence $y_1^4=x^4\ne 1$, which contradicts (\ref{x4x2y2}) applied to $x$ and $y_1$.
\end{proof}
\begin{lem}\label{x4No1Conmutan} $B$ is abelian. \end{lem}
\begin{proof} By means of contradiction, let $x,y\in G$ with $x^4\ne 1 \ne y^4$ and $(x,y)\ne 1$. Once more recall that we are assuming that $RG^+$ is Lie metabelian and $\breve{G}$ is not commutative. In particular, $xy^i,y^ix,x^iy,yx^i\not\in \ensuremath{\mathcal{Z}}(G)$ for every $i\in \ensuremath{\mathbb{Z}}$. Hence $(xy^i)^2,(y^ix)^2,(x^iy)^2,(yx^i)^2\ne 1$, by Lemma~\ref{ACentral}. We consider the following equality where the right column should not be read for the moment.
\begin{equation}\label{xyxyy} \matriz{{ll} [[x+x^{-1},y+y^{-1}],[xy+(xy)^{-1},y+y^{-1}]] = &\\ y^{-3} & (y^4=1 ) \\ +3y^{-1}+xy^{-2}x^{-1}y+yxy^{-2}x^{-1} & (y^2=1 ) \\ +2x^{-2}y+yx^{-2} & (x^2=1 ) \\ +xyx^{-1}+x^{-1}y^{-1}xy^2+y^{-1}xyx^{-1}y+yx^{-1}y^{-1}xy+yxyx^{-1}y^{-1} & ((x,y)=1 ) \\ +xyxy^2+2yxyxy & ((xy)^2=1 ) \\ +2xy^{-1}x+y^{-1}xy^{-1}xy & ((xy^{-1})^2=1 ) \\ +x^{-1}y^{-2}x^{-1}y^{-1}+2y^{-1}x^{-1}y^{-2}x^{-1} & ((xy^2)^2=1 ) \\ +xy^{-1}xy^2+2y^{-1}xyxy+yxy^{-1}xy+3xyx+yxyxy^{-1} & (x^y=x^{-1} ) \\ +x^{-1}yxy^2+yx^{-1}yxy+yxyx^{-1}y & (y^x=y^{-1} ) \\ +2x^{-2}y^{-1}+3y^{-1}x^{-2}+x^{-1}y^{-2}x^{-1}y+yx^{-1}y^{-2}x^{-1}+y^{-2}x^{-2}y & (x^2y^2=1 ) \\ +xy^{-2}x^{-1}y^{-1}+y^{-1}x^{-1}y^{-2}x+y^{-1}xy^{-2}x^{-1} & ((y^2)^x=y^{-2} ) \\ +y^{-1}x^{-1}y^2x & ((x,y^2)=1 ) \\ +y^{-2}x^{-2}y^{-1} & (x^2y^4=1) \\ +y^{-1}xyx^{-1}y^{-1} & (y^{x^{-1}}=y^3 ) \\ +xy^3x^{-1} & (y^x=y^3 ) \\ +y^{-1}xyxy^{-1} & (xyx=y^3) \\ +xy^3x & (xy^3x=y ) \\ +y^{-1}x^{-1}y^2x^{-1} & (x^{y^2}=x^{-1}) \\ - 3y-3x^{-1}y^{-1}x^{-1}-2x^{-1}yx^{-1}-x^2y^{-1}-3x^2y-xy^{-1}x^{-1}\\ -2y^{-1}x^2-2yx^2-y^3-x^{-1}y^{-2}xy -x^{-1}y^{-3}x^{-1}-x^{-1}y^2xy\\ -xy^{-2}xy-xy^{-3}x^{-1}-xy^2x^{-1}y^{-1} -xy^2x^{-1}y -xy^2xy^{-1}\\ -2xy^2xy -2y^{-1}x^{-1}y^{-1}x^{-1}y^{-1}-2y^{-1}x^{-1}y^{-1}x^{-1}y\\ -y^{-1}x^{-1}y^{-1}xy^{-1} -y^{-1}x^{-1}yx^{-1}y^{-1}-y^{-1}x^{-1}yx^{-1}y\\ -y^{-1}x^{-1}yxy^{-1}-y^{-1}x^2y^2-y^{-1}xy^{-1}x^{-1}y^{-1}-y^{-1}xy^{-1}x^{-1}y\\ -y^{-1}xy^2x^{-1} -y^{-1}xy^2x -y^{-2}x^{-1}y^{-1}x^{-1}-y^{-2}x^{-1}y^{-1}x\\ -y^{-2}x^{-1}yx^{-1}-y^{-2}x^{-1}yx-yx^{-1}y^{-1}x^{-1}y^{-1}-yx^{-1}y^{-1}x^{-1}y \\ -yx^2y^2-yxy^{-1}x^{-1}y^{-1}-yxy^{-1}x^{-1}y -yxy^2x^{-1}-yxy^2x } \end{equation} As we are assuming that $RG^+$ is Lie metabelian the expression in (\ref{xyxyy}) should be 0, hence as $y$ appears with coefficient $-3$, one of the elements with positive coefficient should be equal to $y$. 
Each relation in the right column is equivalent to any of the summands in the same line being equal to $y$. Thus one of the relations in the right column of (\ref{xyxyy}) holds. We will prove that each of these relations yields a contradiction. This is clear for the first seven relations by the first paragraph of the proof. For the next five relations, it is a consequence of Lemma~\ref{Orden4NoConmutan}. Before continuing with the remaining relations we prove the following claim, which will exclude the next two relations.
{\bf Claim}. $(x,y^2)\ne 1$ and $(x^2,y)\ne 1$. By symmetry it is enough to deduce a contradiction from the assumption $(x,y^2)=1$. In this case (\ref{xyxyy}) reduces to \begin{equation}\label{xyxyyCond} \matriz{{ll} [[x+x^{-1},y+y^{-1}],[xy+(xy)^{-1},y+y^{-1}]] = &\\ 4y^{-3} & (y^4=1 ) \\ +4y^{-1} & (y^2=1 ) \\ +2x^{-2}y+2yx^{-2} & (x^2=1 ) \\ +xyx^{-1}+x^{-1}yx+y^{-1}xyx^{-1}y+yx^{-1}y^{-1}xy & ((x,y)=1 ) \\ +2xy^3x+2yxyxy & ((xy)^2=1 ) \\ +2xy^{-1}x+2y^{-1}xy^{-1}xy & ((xy^{-1})^2=1 ) \\ +2x^{-2}y^{-3}+2y^{-3}x^{-2} & ((xy^2)^2=1 ) \\ +4xyx+4y^{-1}xyxy & (x^y=x^{-1} ) \\ +x^{-1}y^3x+yx^{-1}yxy+yxyx^{-1}y & (y^x=y^{-1} ) \\ +4x^{-2}y^{-1}+4y^{-1}x^{-2} & (x^2y^2=1 ) \\
+xy^3x^{-1} & (y^x=y^3 ) \\
- 4y-4y^3-2x^2y^{-1}-4x^2y-2y^{-1}x^2-4yx^2 -2x^2y^3-2y^3x^2\\ -4x^{-1}y^{-1}x^{-1}-2x^{-1}yx^{-1}-xy^{-1}x^{-1} -x^{-1}y^{-1}x \\ - 2 x^{-1}y^{-3}x^{-1}-xy^{-3}x^{-1} -x^{-1}y^{-3}x \\
-2y^{-1}x^{-1}y^{-1}x^{-1}y^{-1}-y^{-1}x^{-1}y^{-1}xy^{-1} -y^{-1}xy^{-1}x^{-1}y^{-1} \\ -4y^{-1}x^{-1}y^{-1}x^{-1}y -y^{-1}x^{-1}yxy^{-1}-y^{-1}xy^{-1}x^{-1}y\\ -2y^{-1}x^{-1}yx^{-1}y
} \end{equation} This is 0 and hence one of the conditions on the right column of (\ref{xyxyyCond}) holds. The first seven relations are excluded by the first paragraph of the proof. The following three relations are excluded by Lemma \ref{Orden4NoConmutan}. Moreover, if $y^x=y^3$ then $y^2=(y^2)^x = y^6$, a contradiction that finishes the proof of the claim.
So only the last five positive summands of (\ref{xyxyy}) can cancel the $-3y$ and hence at least three of the following conditions hold: $y^{x^{-1}}=y^3$, $y^x=y^3$, $xyx=y^3$, $xy^3x=y$ and $x^{y^2}=x^{-1}$. No two of the first three equalities can hold simultaneously, because otherwise $(x^2,y)=1$, in contradiction with the Claim. Hence the last two equalities hold. Then $y=xy^3x=y^2x^{-1}yx$ and hence $y^x=y^{-1}$, in contradiction with Lemma~\ref{Orden4NoConmutan}. \end{proof}
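The contrapositive of the last lemma can be watched in action on the modular group of order 16, $M_{16}=\GEN{x,y \mid x^8=y^2=1,\ y^{-1}xy=x^5}$: there $x$ has order 8, $y$ has order 2 and $(x,y)\ne 1$, so $B=M_{16}$ is not abelian, and accordingly some double commutator of symmetric elements must fail to vanish. The following Python sketch (our own naming; integer coefficients stand in for $R$) exhibits one.

```python
# Modular group of order 16: elements are pairs (i, j) = x^i y^j with i mod 8,
# j mod 2, multiplied via the relation y^{-1} x y = x^5.
def mul_g(g, h):
    i1, j1 = g
    i2, j2 = h
    return ((i1 + pow(5, j1) * i2) % 8, (j1 + j2) % 2)

def inv_g(g):
    i, j = g
    return ((-i * pow(5, j)) % 8, j)

# Group algebra elements are dicts {group element: coefficient}.
def alg_mul(u, v):
    w = {}
    for g, a in u.items():
        for h, b in v.items():
            gh = mul_g(g, h)
            w[gh] = w.get(gh, 0) + a * b
    return {g: c for g, c in w.items() if c}

def bracket(u, v):
    uv, vu = alg_mul(u, v), alg_mul(v, u)
    w = {g: uv.get(g, 0) - vu.get(g, 0) for g in set(uv) | set(vu)}
    return {g: c for g, c in w.items() if c}

def sym(g):
    u = {g: 1}
    gi = inv_g(g)
    u[gi] = u.get(gi, 0) + 1
    return u

x, y = (1, 0), (0, 1)
xy = mul_g(x, y)
inner = bracket(sym(x), sym(y))      # nonzero: the symmetric part is noncommutative
outer = bracket(inner, bracket(sym(x), sym(xy)))
print(inner != {}, outer != {})      # True True
```

The computation returns $[[x+x^{-1},y+y^{-1}],[x+x^{-1},xy+(xy)^{-1}]]=16(x+x^3-x^5-x^7)$, so over a ring in which $16\ne 0$ the algebra $RM_{16}^+$ is not Lie metabelian, consistent with Lemma~\ref{x4No1Conmutan}.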
\noindent\textbf{4.3} \underline{The exponent of $G$}
We will consider separately the cases where $G$ has exponent 4 and where the exponent of $G$ is different from 4.
\begin{lem}\label{abelianindex2} If $\textnormal{Exp}(G)\ne 4$ then $[G:B]=2$ and for every $x\in G\setminus B$ and $b \in B$ we have $b^x=b^{-1}$. \end{lem}
\begin{proof} Recall that we are assuming that $RG^+$ is Lie metabelian and $\breve G$ is not commutative. Assume that $\textnormal{Exp}(G)\ne 4$. First we prove that if $b^x=b^{-1}$ for every $x\in G\setminus B$ and every $b \in B$, then the index of $B$ in $G$ is equal to 2, or equivalently $xy\in B$ for all $x,y\in G\setminus B$. Indeed, otherwise take $b\in B$ with $b^2\ne 1$; then $b^{-1}=b^{xy}=(b^x)^y=(b^{-1})^y=b$, a contradiction.
Therefore it remains to prove that for every $x\in G\setminus B$ and $b \in B$ we have $b^x=b^{-1}$. By means of contradiction assume that $b^x\neq b^{-1}$. As $B$ is abelian, it is enough to prove the result for $b\in B$ with $b^4\ne 1$. Note that $x^4=(bx)^4=(b^{-1}x)^4=1$. Therefore $x^2,(bx)^2\in \ensuremath{\mathcal{Z}}(G)$, by Lemma~\ref{ACentral}, and $(xb)^2 = ((bx)^2)^x=(bx)^2=(bx)^{-2}$. Hence $1=(bx)^2(xb)^2=bx^{-1}b^2xb$ and thus $(b^2)^x=b^{-2}$. By assumption $$\matriz{{lll} 0&=&[[b+b^{-1},x+x^{-1}],[b+b^{-1},x^b+(x^{-1})^b]] \\ &=& 8b^2+8b^{-1}xbx+8b^{-1}x^{-1}bx+8b^2x^2+4b^4+4b^4x^2+4b^{-1}xb^3x+4b^{-1}x^{-1}b^3x \\ &&-8b^{-2}-8bxb^{-1}x-8bx^{-1}b^{-1}x-8b^{-2}x^2-4b^{-4}-4b^{-4}x^2-4xb^{-3}xb-4xb^{-3}x^{-1}b }$$ Having in mind that $b^4\ne 1$, $(b^{-1}x)^2\ne 1$ and $b^x\ne b^{-1}$, for the $b^2$ to be canceled by the summands with negative coefficient either $x^2=b^4$ or at least two of the following conditions hold: $b^6=1$, $b^6=x^2$, $b^x=b^{-3}x^2$ or $b^x=b^{-3}$. However the first two equalities are not compatible and the last two are also not compatible. Therefore $b^6\in \{1,x^2\}$ and $b^x \in \{b^{-3},b^{-3}x^2\}$. Thus $b=b^{x^2}=b^{-9}$ and therefore $b^{10}=1$. Then $b^{-4}=b^6\in \{1,x^2\}$. Since $b^4\neq 1$, we conclude that $x^2=b^4$. Then $$[[b+b^{-1},bx+(bx)^{-1}],[b+b^{-1},xb+(xb)^{-1}]] =
16(b^{2} +xb^{-1}xb -b^6-bxb^{-1}x^{-1}) =0.$$ Thus $b^2=b^{6}$ or $b^{x}=b^{-1}$, yielding a contradiction in both cases, which finishes the proof of the lemma. \end{proof}
\begin{lem}\label{Exp4CentroA} If $\textnormal{Exp}(G)=4$ then $\ensuremath{\mathcal{Z}}(G)=A$. \end{lem}
\begin{proof} By Lemma~\ref{ACentral}, $A\subseteq \ensuremath{\mathcal{Z}}(G)$ and we have to prove that the equality holds. By means of contradiction assume that $z\in \ensuremath{\mathcal{Z}}(G)\setminus A$. As $\breve{G}$ is not commutative and the elements of $G$ of order $2$ are central, there are $x,y \in G$ such that $[x+x^{-1},y+y^{-1}]\ne 0$. In particular, $t=(x,y)\ne 1$, $x^y\ne x^{-1}$ and $y^x\ne y^{-1}$ and hence $t\not\in \{x^2,y^2\}$. As $G/A$ has exponent 2, we have $t\in A$ and, in particular, $t$ has order 2. Moreover $z,x,y,xy\not\in A$ and therefore they all have order 4. Then $$\matriz{{lll} 0&=&[[x+x^{-1},y+y^{-1}],[x+x^{-1},xyz+(xyz)^{-1}]] \\
&=& 8(xz+x^3z+xy^2z+txz^3+x^3y^2z+tx^3z^3+txy^2z^3+tx^3y^2z^3 \\ &&-txz-xz^3-tx^3z-txy^2z-x^3z^3-xy^2z^3-tx^3y^2z-x^3y^2z^3) }$$ Comparing $xz$ with the terms with negative coefficient, and having in mind that $x,y,xy$ and $z$ have order 4 and $t\not\in \{x^2,y^2\}$, we have that $z^2\in \{x^2, y^2, x^2y^2\}$. By symmetry it is enough to consider the cases $x^2=z^2$ and $x^2y^2=z^2$.
Case 1. Assume that $z^2=x^2$. Then $$\matriz{{lll} 0&=&[[y+y^{-1},xy+(xy)^{-1}],[y+y^{-1},xz]]
=8(t-1)(1+y^2+tx^2+tx^2y^2)yz }$$ and thus $t\in \GEN{y^2,tx^2}$. However $t\not\in \{1,y^2,tx^2\}$ and therefore $x^2=y^2=z^2$. Then $$\matriz{{rcl} [[xz,yz],[xz,xyz+(xyz)^{-1}]]
=4(1+tx^2-t-x^2)x=0}$$ and hence either $t= 1$ or $x^2=1$, a contradiction.
Case 2. Assume that $z^2=x^2y^2$. Then $$
[[x+x^{-1},xy+(xy)^{-1}],[x+x^{-1},yz+(yz)^{-1}]]
=16(1+x^2+ty^2+tz^2-t-y^2-tx^2-z^2)xz= 0,
$$ and hence $1\in\{t, y^2, tx^2, z^2\}$, yielding a contradiction in all cases.
\end{proof}
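For exponent 4 the expand-and-compare technique can also be validated by sheer brute force on a small group. The sketch below (our own naming; integer coefficients, so it is only a sanity check of the machinery, not of the lemmas themselves) runs over all Lie brackets and all double brackets of symmetric elements of the dihedral group $D_4$ of order 8: the symmetric elements do not commute, since two reflections multiply to the two distinct rotations $x$ and $x^{-1}$, yet every double bracket vanishes because all the inner brackets land in the commutative span of $x-x^{-1}$.

```python
from itertools import product

# Dihedral group D4 of order 8: (i, j) = x^i y^j with x^4 = y^2 = 1, y x y = x^{-1}.
def mul_g(g, h):
    i1, j1 = g
    i2, j2 = h
    return ((i1 + (-1) ** j1 * i2) % 4, (j1 + j2) % 2)

def inv_g(g):
    i, j = g
    return ((-i * (-1) ** j) % 4, j)

# Group algebra elements are dicts {group element: coefficient}.
def alg_mul(u, v):
    w = {}
    for g, a in u.items():
        for h, b in v.items():
            gh = mul_g(g, h)
            w[gh] = w.get(gh, 0) + a * b
    return {g: c for g, c in w.items() if c}

def bracket(u, v):
    uv, vu = alg_mul(u, v), alg_mul(v, u)
    w = {g: uv.get(g, 0) - vu.get(g, 0) for g in set(uv) | set(vu)}
    return {g: c for g, c in w.items() if c}

def sym(g):
    u = {g: 1}
    gi = inv_g(g)
    u[gi] = u.get(gi, 0) + 1
    return u

G = [(i, j) for i in range(4) for j in range(2)]
inner = [bracket(sym(g), sym(h)) for g, h in product(G, repeat=2)]
noncommutative = any(b != {} for b in inner)
metabelian = all(bracket(b1, b2) == {} for b1, b2 in product(inner, repeat=2))
print(noncommutative, metabelian)    # True True
```

The check confirms that a noncommutative symmetric part and the vanishing of all double brackets can occur together; which exponent-4 groups survive the full case analysis is decided by the lemmas of this section.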
\begin{lem}\label{Babelianindex2} Assume $G$ has exponent 4 and let $x,y,h\in G\setminus A$ with $\GEN{x,y,h}$ non-abelian, $x^2\neq y^2$ and $(x,y)=1$. Then $x^h=x^{-1}$ and $y^h=y^{-1}$. \end{lem}
\begin{proof} Let $H=\GEN{x,y,h}$. If $(xh)^2=1$ then $xh\in \ensuremath{\mathcal{Z}}(G)$, by Lemma~\ref{ACentral}, and thus $(x,h)=1=(y,h)$. Then $H$ is abelian in contradiction with the hypothesis. Thus $(xh)^2\ne 1$ and similarly $(yh)^2\neq 1$ and $(x^{\pm 1} y^{\pm 1} h)^2\neq 1$. As $H$ is not abelian either $(x,h)\ne 1$ or $(y,h)\ne 1$ and by symmetry one may assume that $(x,h)\ne 1$. Then $[x-x^{-1},h-h^{-1}]=xh+x^{-1}h^{-1}+hx^{-1}+h^{-1}x-xh^{-1}-x^{-1}h-hx-h^{-1}x^{-1}\ne 0$, since $xh\not \in \{xh^{-1},x^{-1}h,hx,h^{-1}x^{-1}\}$. This proves that $\breve{H}$ is not commutative. As by assumption $RG^+$ is Lie metabelian, so is $RH^+$ and hence $\ensuremath{\mathcal{Z}}(H)=\GEN{g\in H : g^2=1}$, by Lemma~\ref{Exp4CentroA}. In particular, $y\not\in \ensuremath{\mathcal{Z}}(H)$ and therefore $(y,h)\ne 1$. Moreover $(xy)^2=x^2y^2\ne 1$ and therefore $xy\not\in \ensuremath{\mathcal{Z}}(H)$. Thus $h^x\ne h^y$. For future use we display the information gathered in this paragraph:
\begin{equation}\label{Babelianindex21}
(xh)^2\ne 1, (yh)^2 \ne 1, (x^{\pm 1} y^{\pm 1}h)^2 \ne 1, (x,h)\ne 1, (y,h) \ne 1, h^x\ne h^y.
\end{equation}
By means of contradiction we assume that either $x^h\ne x^{-1}$ or $y^h\ne y^{-1}$.
\textbf{Claim 1}. $h^x \ne h^{-1}$ and $h^y \ne h^{-1}$.
By symmetry we only prove the second inequality. By means of contradiction assume that $h^y=h^{-1}$. Then $h^x\ne h^{-1}$, by (\ref{Babelianindex21}). Consider \begin{eqnarray*} 0&=&[[x+x^{-1},h+h^{-1}],[x+x^{-1},xyh+(xyh)^{-1}]] \\ &=& 8(xy+xyh^2+yx^{-1}+hxh^{-1}y^{-1}+yx^{-1}h^2+hxhy^{-1}+y^{-1}x^2hxh^{-1}+y^{-1}hx^{-1}h \\ &&-yh^{-1}xh-xy^{-1}-yhxh-yx^2h^{-1}xh-xy^{-1}h^2-y^{-1}x^{-1}-yx^2hxh-y^{-1}x^{-1}h^2) \end{eqnarray*} As $(x,h)\ne 1$, $y^2\ne 1$, $h^x\ne h^{-1}$, $x^2\ne y^2$ and $(xh)^2\ne 1$, we deduce that either $x^h=x^{-1}$, $h^2=y^2$ or $h^2=x^2y^2$. We consider these three cases separately.
If $x^h=x^{-1}$ then by the initial assumption $y^h\ne y^{-1}$. Thus $h^2=(y,h)\ne y^2$ and therefore $$\matriz{{l} [[y+y^{-1},yh+(yh)^{-1}],[y+y^{-1},xyh+(xyh)^{-1}]] = \\
\hspace{1cm} 16(x+ x y^2+ x^{-1} h^2+ x^{-1} y^2 h^2 -x^{-1}- x h^2- x^{-1} y^2- x y^2 h^2)= 0, }$$ and thus $x\in\{x^{-1}, x h^2, x^{-1} y^2, x y^2 h^2\}$, yielding a contradiction in all cases.
If $y^2=h^2$ then \begin{eqnarray*} 0&=&[[x+x^{-1},xh+(xh)^{-1}],[x+x^{-1},xyh+(xyh)^{-1}]] \\
&=&16(y+x^2y+x^{-1}h^{-1}xyh+hx^{-1}h^{-1}y^{-1}x^{-1}-y^{-1}x^{-1}hxh-y^3-y^{-1}xhxh-y^{-1}x^2) \end{eqnarray*} and thus $y\in\{y^{-1}x^{-1}hxh, y^3, y^{-1}xhxh, y^{-1}x^2\}$. As $x^2\ne y^2\ne 1$ and $(x,h)\ne 1$, we deduce that $y$ can only be canceled with $y^{-1}(xh)^2$. Thus $h^2=y^2=(xh)^2$ or equivalently $x^h=x^{-1}$, a case which has been excluded in the previous paragraph.
Finally, assume that $h^2=x^2y^2$. Then \begin{eqnarray*} 0&=&[[x+x^{-1},xh+(xh)^{-1}],[x+x^{-1},xyh+(xyh)^{-1}]] \\
&=&16(y+y^{-1}h^2+yx^{-1}hxh+yhxhx -y^{-1}xhxh-yh^2-y^{-1}xhx^{-1}h-y^{-1}) \end{eqnarray*} Thus $y\in\{y^{-1}xhxh, yh^2, y^{-1}xhx^{-1}h, y^{-1}\}$ and therefore either $y^2=(xh)^2$ or $y^2=xhx^{-1}h$. In the first case $(x,h)=x^2h^2(xh)^2=y^2(xh)^2=1$, contradicting (\ref{Babelianindex21}). In the second case $x^h = h^{-1} y^2 h^{-1} x = x^{-1}$, a case excluded above. This finishes the proof of Claim 1.
\textbf{Claim 2}. $h^2\not \in \{x^2,y^2,x^2y^2\}$.
Observe that $x,y$ and $h_1=xh$ satisfy the assumptions of the lemma and therefore $h^{-1}x^{-1}=h_1^{-1}\ne h_1^x = hx$, by Claim 1. Hence $h^2\ne x^2$. Similarly $h^2\ne y^2$ and applying this to $x,xy$ and $h$ we deduce that $h^2\ne(xy)^2= x^2y^2$. This proves Claim 2.
\textbf{Claim 3}. $x^h\ne x^{-1}$ and $y^h\ne y^{-1}$.
By symmetry it is enough to prove one of the two conditions and by means of contradiction assume that $x^h=x^{-1}$. Then $y^h\ne y^{-1}$ by the initial assumption. Therefore
$$\matriz{{l}
\hspace{-1cm}[[y+y^{-1},h+h^{-1}],[y+y^{-1},xh+(xh)^{-1}]] = \\
=8( x +xy^2 +xh^2 +x^{-1}(yh)^2y^2h^2 +xy^2h^2 +x^{-1}(yh)^2h^2 +x^{-1}(yh)^2y^2 +x^{-1}(yh)^2\\ -x^{-1} -x(yh)^2y^2h^2 -x^{-1}y^2 -x^{-1}h^2 -xyhyh^{-1} -xy^{-1}hyh
-x^{-1}y^2h^2 -x(yh)^2) \ne 0 }$$
by (\ref{Babelianindex21}) and Claims 1 and 2, a contradiction.
\textbf{Claim 4}. $y^h\ne x^2y$ and $x^h\ne y^2x$.
Again, by symmetry it is enough to prove that the first inequality holds and by means of contradiction we assume that $y^h=x^2y$. Then
$$\matriz{{l} \hspace{-1cm}[[x+x^{-1},h+h^{-1}],[x+x^{-1},xyh+(xyh)^{-1}]] =\\
8(xy+x^{-1}y+xyh^2+y^{-1}hxh^{-1}+x^{-1}yh^2+y^{-1}hx^{-1}h^{-1}+y^{-1}hx^{-1}h +y^{-1}hxh \\
-yh^{-1}xh-xy^{-1}-x^2yh^{-1}xh-yhxh-y^{-1}x^{-1}-xy^{-1}h^2-xy(xh)^2-y^{-1}x^{-1}h^2) \ne 0
}$$ by (\ref{Babelianindex21}) and Claims 1, 2 and 3, a contradiction.
Finally we consider \begin{eqnarray*} 0&=&[[x+x^{-1},h+h^{-1}],[x+x^{-1},yh+(yh)^{-1}]] \\ &=&4(y +x^2y +yh^2 +hxyh^{-1}x^{-1} +hxh^{-1}y^{-1}x^{-1} +hy^{-1}h^{-1} +yx^2h^2 +xhxyh^{-1} +xhx^{-1}yh\\ && +xhxh^{-1}y^{-1} +xhx^{-1}hy^{-1} +hy^{-1}h^{-1}x^2 +xhy^{-1}hx^{-1} +xhxyh +hxhy^{-1}x +hy^{-1}hx^2\\ && -yxhx^{-1}h^{-1} -hyh^{-1} -y^{-1} -yhxh^{-1}x -xyhx^{-1}h -hyh^{-1}x^2 -hyh -y^{-1}x^{2} -y^{-1}h^{2}\\ && -hy^{-1}xh^{-1}x^{-1} -yhxhx -hyhx^2 -y^{-1}x^{2}h^{2} -xhy^{-1}xh^{-1} -hy^{-1}xhx^{-1} -hy^{-1}xhx). \end{eqnarray*} Taking into account the inequalities in (\ref{Babelianindex21}) and Claims 1-4, in order to cancel $y$ we deduce that $y\in \{hy^{-1}xh^{-1}x^{-1},hyhx^2,xhy^{-1}xh^{-1},hy^{-1}xhx^{-1}\}$. However, applying Claim 4 to $y,y^{-1}x$ and $h$ we deduce that $(y^{-1}x)^h \ne yx$; applying Claim 2 to $x,y$ and $yh$ we deduce that $(yh)^2\ne x^2y^2$; applying Claim 3 to $x,y^{-1}x$ and $h$ we have $(y^{-1}x)^h\ne x^{-1}y$; and applying Claim 1 to $x,y$ and $yh$ we deduce that $(yh)^{x}\ne h^{-1} y^{-1}$. This yields a contradiction and finishes the proof of the lemma. \end{proof}
We now introduce a third subgroup of $G$:
$$C=\GEN{xy : x^2\ne 1 \ne y^2, (x,y)=1}$$
\begin{lem}\label{Expo4Final} If $\textnormal{Exp}(G)=4$ then \begin{enumerate}
\item $\ensuremath{\mathcal{Z}}(G)\subseteq C$,
\item $C$ is abelian,
\item $c^t=c^{-1}$ for every $c\in C$ and every $t\in G\setminus C$, and
\item either $[G:C]=2$ or $C=\ensuremath{\mathcal{Z}}(G)$ and $[G:C]=4$. \end{enumerate}
\end{lem}
\begin{proof} (1) As $\textnormal{Exp}(G)=4$ there is an element $x\in G$ of order 4. If $y\in \ensuremath{\mathcal{Z}}(G)$ then $y^2=1$, by Lemma~\ref{Exp4CentroA} and therefore $(xy)^2\ne 1$ and $(x,y)=1$. Therefore $y=x(xy)^{-1} \in C$.
(2) By means of contradiction we assume that $C$ is not abelian. Then there are $x,y,t,u\in G$ such that $1\not\in \{x^2,y^2,t^2,u^2\}$, $(x,y)=(t,u)=1$ and $(xy,tu)\ne 1$. In particular $xy,tu\not\in \ensuremath{\mathcal{Z}}(G)$ and hence $(xy)^2\ne 1 \ne (tu)^2$. Thus $x^2\ne y^2$ and $t^2\ne u^2$. Moreover either $\GEN{x,y,t}$ or $\GEN{x,y,u}$ is not abelian. If both are non-abelian then $x^t=x^{-1}=x^u$, $y^t=y^u=y^{-1}$, by Lemma~\ref{Babelianindex2}. Then $(x,tu)=(y,tu)=1$, contradicting $(xy,tu)\ne 1$. Thus, by symmetry one may assume that $\GEN{x,y,t}$ is non-abelian and $\GEN{x,y,u}$ is abelian. Then $x^t=x^{-1}$ and $y^t=y^{-1}$. Applying Lemma~\ref{Babelianindex2} to $\GEN{t,u,x}$ we deduce that $u=u^x=u^{-1}$, a contradiction.
(3) Let $t\in G\setminus C$. It is enough to show that if $x^2\ne 1 \ne y^2$ and $(x,y)=1$ then $(xy)^t=(xy)^{-1}$. If $x^2=y^2$ then $xy\in \ensuremath{\mathcal{Z}}(G)$ and hence $(xy)^t=xy=(xy)^{-1}$. Otherwise $x^t=x^{-1}$ and $y^t=y^{-1}$, by Lemma~\ref{Babelianindex2}. Therefore $(xy)^t=x^{-1}y^{-1}=(xy)^{-1}$, as desired.
(4) Suppose that $C\ne \ensuremath{\mathcal{Z}}(G)$ and let $c\in C\setminus \ensuremath{\mathcal{Z}}(G)$. If $x,y\in G\setminus C$ and $xy\not\in C$ then, by Lemma~\ref{Exp4CentroA} and (3), $c^{-1}=c^{xy}=(c^x)^y=c$, contradicting $c\not\in\ensuremath{\mathcal{Z}}(G)$. Thus, in this case, $[G:C]=2$.
Finally suppose that $C=\ensuremath{\mathcal{Z}}(G)$. This implies that if $[\GEN{x,y,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]>2$ then $(x,y)\ne 1$ because otherwise $xy\in C=\ensuremath{\mathcal{Z}}(G)$. By Lemma~\ref{Exp4CentroA}, $G/\ensuremath{\mathcal{Z}}(G)$ is elementary abelian of order $\ge 4$. We have to prove that the order is exactly 4. Otherwise there are $x,y,u\in G$ such that $[\GEN{x,y,u,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=8$. These $x,y$ and $u$ will be fixed for the rest of the proof. Then $1\not\in \{(x,y),(x,u),(y,u),(x,yu)=(x,y)(x,u),(xy,u)=(x,u)(y,u),(xu,y)=(x,y)(u,y),(xy,xu)=(x,u)(y,x)(y,u)\}$ and therefore $\GEN{(x,y),(x,u),(y,u)}$ has order 8.
\noindent $\bullet$ \textbf{Claim 1}. If $[\GEN{g,h,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=4$ then $(g,h)\ne g^2h^2 $.
For otherwise $(gh)^2=g^2h^2(g,h)=1$ and hence $gh\in \ensuremath{\mathcal{Z}}(G)$, a contradiction.
\noindent $\bullet$ \textbf{Claim 2}. If $[\GEN{g,h,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=4$ then $(g,h)\ne g^2$.
By symmetry it is enough to prove the claim for $g=y$ and $h=u$. So assume that $(y,u)=y^2$, or equivalently $y^u=y^{-1}$.
Then $1\not\in \{(xu,y)=(x,y)y^2,(xy,u)=(x,u)y^2,(xy,yu)=(x,yu)y^2,(xyu)^2=xuy^{-1}xyu=xux(x,y)u=(x,yu)x^2u^2\}$ and thus
\begin{equation}\label{y2NoConmutador}
y^2\not\in \{(x,y),(x,u),(x,yu)\} \quad \text{and} \quad (x,yu)\ne x^2u^2
\end{equation}
Before proving Claim 2 we prove some intermediate claims.
\textbf{Claim 2.1}. $(x,y)\ne x^2$.
By means of contradiction assume $x^2=(x,y)$. Then by (\ref{y2NoConmutador}) $x^2\ne y^2$, $1\ne (x,yu)=x^2(x,u)$ and $1\ne (xyu)^2=u^2(x,u)$. Having these relations in mind we get that \begin{eqnarray*} 0&=&[[x+x^{-1},u+u^{-1}],[x+x^{-1},yu+(yu)^{-1}]] \\ &=& 8y(1+x^2+u^2+x^2u^2+y^2(x,u)+x^2y^2(x,u)+y^2u^2(x,u)+x^2y^2u^2(x,u) \\ && -(x,u)-y^2-x^2(x,u)-x^2y^2-u^2(x,u)-y^2u^2-x^2u^2(x,u)-x^2y^2u^2), \end{eqnarray*} and therefore, we obtain that $u^2\in \{y^2,x^2y^2\}$. However, if $u^2=y^2$ then
$$\matriz{{l} [[x+x^{-1}, xu+u^{-1}x^{-1}], [x+x^{-1}, xyu+(xyu)^{-1}]] \\ \hspace{1cm}=16y(1+x^2+y^2(x,u)+x^2y^2(x,u)-(x,u)-y^2-x^2(x,u)-x^2y^2)\ne 0 }$$ because $1\not\in \{(x,u), y^2, x^2(x,u), x^2y^2\}$, a contradiction. If $y^2=x^2u^2$ then $$\matriz{{l} [[x+x^{-1}, xu+u^{-1}x^{-1}], [x+x^{-1}, xyu+(xyu)^{-1}]] \\
\hspace{1cm}=16y(1+x^2+(x,u)y^2+u^2(x,u)-(x,u)-y^2-x^2(x,u)-u^2)\ne 0}$$ because $1\not\in\{(x,u),y^2, x^2(x,u), u^2\}$, a contradiction.
\textbf{Claim 2.2}. $(x,u)\ne x^2$.
By means of contradiction assume that $(x,u)= x^2$. Then $1\ne (xy,u)=x^2y^2$, $1\ne (x,yu)=(x,y)x^2$, $1\ne (y,xu)=(x,y)y^2$. Having these relations in mind we have that \begin{eqnarray*} 0&=&[[x+x^{-1},y+y^{-1}],[x+x^{-1},xyu+(xyu)^{-1}]] \\ &=& 8xu(1+x^2+y^2+(x,y)u^2+x^2y^2+x^2u^2(x,y)+y^2u^2(x,y)+x^2y^2u^2(x,y) \\ &&-(x,y)-u^2-x^2(x,y)-y^2(x,y)-x^2u^2-y^2u^2-x^2y^2(x,y)-x^2y^2u^2) \end{eqnarray*} Therefore $u^2\in \{x^2,y^2,x^2y^2\}$. If $u^2=y^2$, then taking $x_1=x$, $y_1=u$ and $u_1=y$, they satisfy $(x_1,y_1)=x_1^2$ and $(y_1,u_1)=y_1^2$, which contradicts Claim 2.1. By symmetry we may also exclude the case $u^2=x^2$. Therefore $u^2=x^2y^2=(xy,u)$. Taking now $x_1=x$, $y_1=u$ and $u_1=xy$, we have $(y_1,u_1)=(u,xy)=x^2y^2=u^2=y_1^2$ and $(x_1,y_1)=(x,u)=x^2$, again in contradiction with Claim 2.1. This finishes the proof of the claim.
\textbf{Claim 2.3}. $(x,yu)\not\in \{u^2,x^2y^2u^2,y^2u^2\}$ and $x^2\ne u^2$.
By means of contradiction assume first that $(x,yu)=u^2$. Let $x_1=y$, $y_1=yu$ and $u_1=x$. Then $(y_1,u_1)=(yu,x)=u^2=u^2y^2(u,y)=(yu)^2=y_1^2$ and $(x_1,y_1)=y^2=x_1^2$, in contradiction with Claim 2.1.
Secondly assume that $(x,yu)=x^2y^2u^2$. Then $(x,y)(x,u)=(x,yu)=x^2y^2u^2$ and taking $x_1=xu$, $y$ and $u$ we have that $x^2u^2(x,u)=x_1^2= (x_1,y)=(x,y)y^2$, contradicting Claim 2.1.
Thirdly, assume $(x,yu)=u^2y^2$ and consider $x_1=y$, $y_1=yu$ and $u_1=xy$. Then $(y_1,u_1)=(x,y)(x,u)y^2=(x,yu)y^2=u^2=y_1^2$ and $(x_1,y_1)=(y,u)=y^2=x_1^2$, contradicting Claim 2.1.
Finally assume that $x^2=u^2$ and consider $x_1=y$, $y_1=xu$ and $u_1=u$. Then $(y_1,u_1)=(x,u)=y_1^2$ and $(x_1,u_1)=x_1^2$, contradicting Claim 2.2. This finishes the proof of Claim 2.3.
\textbf{Claim 2.4}. $y^2\ne u^2$.
By means of contradiction suppose that $y^2=u^2=(y,u)$. Then we have that $$\matriz{{l} [[x+x^{-1},xy+(xy)^{-1}],[x+x^{-1},xu+(xu)^{-1}]] \\
\hspace{1cm}= 8yu(1+x^2+(x,yu)+(x,y)y^2+(x,yu)x^2+(x,y)x^2u^2+(x,u)x^2u^2+(x,u)y^2 \\ \hspace{1.3cm}-(x,y)-(x,u)-y^2-(x,y)x^2-(x,u)x^2-x^2u^2-(x,yu)y^2-(x,yu)x^2u^2) =0}$$
and hence $1\in \{(x,y), (x,u), y^2, (x,y)x^2, (x,u)x^2, x^2u^2, (x,yu)y^2, (x,yu)x^2u^2\}$ which contradicts (\ref{y2NoConmutador}) and Claims 2.1, 2.2 and 2.3.
\textbf{Claim 2.5}. $u^2\ne x^2y^2$.
By means of contradiction assume that $(y,u)=y^2=u^2x^2$. Hence
$$\matriz{{l} [[x+x^{-1},xy+(xy)^{-1}],[x+x^{-1},xu+(xu)^{-1}]] \\
\hspace{1cm} = 8yu(1+x^2+(x,yu)+(x,y)y^2+(x,u)y^2+(x,yu)x^2+(x,y)u^2+(x,u)u^2 \\ \hspace{1.3cm} -(x,y)-(x,u)-y^2-(x,y)x^2 -(x,u)x^2-u^2-(x,yu)x^2u^2-(x,yu)u^2)=0.}$$
Then $1\in \{(x,y), (x,u), y^2, (x,y)x^2, (x,u)x^2, u^2, (x,yu)x^2u^2, (x,yu)u^2\}$ in contradiction with (\ref{y2NoConmutador}) and Claims 2.1, 2.2 and 2.3.
\textbf{Claim 2.6}. $(x,u)\ne x^2y^2$.
Let $x_1=xu$. If $(x,u)=x^2y^2$ then $x_1^2=(x,u)x^2u^2=y^2u^2$, contradicting Claim 2.5. This finishes the proof of Claim 2.6.
We are ready to prove Claim 2. Consider the following double commutator: $$\matriz{{l} [[x+x^{-1},y+y^{-1}],[x+x^{-1},xu+(xu)^{-1}]] \\ = 4xyu( 1 +y^2 +x^2 +(x,yu) +(x,y)u^2 +(u,x)u^2 +(x,yu)x^2y^2 +(x,yu)y^2 +(x,yu)x^2\\ +(x,y)u^2y^2 +(x,y)u^2x^2 + (x,u)y^2 u^2 + (x,u)x^2 u^2 +x^2 y^2 +(x,y)x^2 y^2 u^2 + (x,u)x^2y^2u^2 \\
-(x,y) -(x,u) -u^2 -(x,y)y^2 -(x,y)x^2 - (x,u)y^2 -(x,u)x^2 -u^2y^2 -x^2u^2\\ -(x,yu)u^2 -(x,y)x^2 y^2 - (x,u)x^2y^2 -x^2y^2u^2 - (x,yu)y^2 u^2 - (x,yu)x^2u^2 -(x,yu)x^2 y^2 u^2 ) = 0. }$$ Then 1 is in the support of the negative part which contradicts (\ref{y2NoConmutador}) and Claims 2.1-2.6. This finishes the proof of Claim 2.
\noindent $\bullet$ \textbf{Claim 3}. If $[\GEN{g,h,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=4$ then $g^2\ne h^2 \ne (gh)^2$.
By Claim 2, $(gh)^2=g^2h^2(g,h)\ne h^2$. Then applying this relation to $g_1=gh$ and $h_1=h$ we have $g^2=(g_1h_1)^2\ne h_1^2 = h^2$.
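The identity $(gh)^2=g^2h^2(g,h)$ used here, with $(g,h)=g^{-1}h^{-1}gh$, holds whenever commutators are central. As an illustrative sanity check, not part of the proof, one can verify it exhaustively in the quaternion group $Q_8$, a group of nilpotency class 2; the quaternion encoding below is ours.

```python
def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

def inv(p):
    """Inverse of a unit quaternion is its conjugate."""
    a, b, c, d = p
    return (a, -b, -c, -d)

def comm(g, h):
    """Commutator (g, h) = g^{-1} h^{-1} g h."""
    return qmul(qmul(qmul(inv(g), inv(h)), g), h)

def sq(g):
    return qmul(g, g)

# Q8 = {±1, ±i, ±j, ±k} as unit quaternions.
units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
Q8 = [tuple(s * c for c in u) for u in units for s in (1, -1)]

# Check (gh)^2 = g^2 h^2 (g, h) for every pair in Q8.
for g in Q8:
    for h in Q8:
        assert sq(qmul(g, h)) == qmul(qmul(sq(g), sq(h)), comm(g, h))
```

In $Q_8$ every commutator is $\pm 1$, hence central, so the check exercises exactly the class-2 situation in which the identity is applied.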
\noindent $\bullet$ \textbf{Claim 4}. If $[\GEN{g,h,k,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=8$ then $(h,k)\ne g^2h^2$.
By symmetry one may assume that $g=x$, $h=y$ and $k=u$ and by means of contradiction we assume that $(y,u)=x^2y^2$. Then, since $(x,u)(y,u)=(xy,u)\neq 1$, it follows that $(x,u)\ne (y,u)=x^2y^2$ and $(x,yu)\neq (y,u)=x^2y^2$.
Moreover, by Claim 2, $u^2\ne (y,u)=x^2y^2$. We collect this information for future use: \begin{equation}\label{Claim4Inicial}
x^2y^2 \not\in \{u^2, (x,u), (x,yu)\} \end{equation}
\textbf{Claim 4.1}. $(x,u)\ne y^2u^2$.
If $(x,u)=y^2u^2$ then $$\matriz{{l} [[x+x^{-1},y+y^{-1}],[x+x^{-1},xu+(xu)^{-1}]] \\ \hspace{1cm} = 8xyu(1+x^2y^2+x^2+(x,y)y^2u^2+y^2+(x,y)x^2u^2+(x,y)x^2y^2u^2+(x,y)u^2\\ \hspace{1.3cm} -(x,y)-y^2u^2-(x,y)x^2y^2-(x,y)x^2-x^2u^2-x^2y^2u^2-(x,y)y^2-u^2)= 0 }$$ and thus $1\in\{(x,y), y^2u^2, (x,y)x^2y^2, (x,y)x^2, x^2u^2, x^2y^2u^2, (x,y)y^2, u^2\}$ which contradicts
Claims 1-3. This finishes the proof of Claim 4.1.
\textbf{Claim 4.2}. $(x,yu)\ne y^2$.
Assume that $(x,yu)=y^2$. Since $y^2=(x,yu)=(x,y)(x,u)$ it follows that $(x,u)\neq y^2$. Combining this with Claims 2 and 3 and (\ref{Claim4Inicial}) we have $$\matriz{{l} [[x+x^{-1},y+y^{-1}],[x+x^{-1},xu+(xu)^{-1}]] \\
\hspace{1cm} = 8xyu(1+x^2y^2+y^2+(x,y)u^2+(x,y)y^2u^2+x^2+(x,u)x^2u^2+(x,y)x^2u^2 \\ \hspace{1.3cm} -(x,u)y^2-(x,u)-u^2-(x,u)x^2-(x,u)x^2y^2-x^2y^2u^2-y^2u^2-x^2u^2) \ne 0, }$$ a contradiction. This finishes the proof of Claim 4.2.
We are ready to prove Claim 4. Applying Claims 1-3, 4.1 and 4.2 and (\ref{Claim4Inicial}) we deduce that $$\matriz{{l} [[x+x^{-1},y+y^{-1}],[x+x^{-1},xyu+(xyu)^{-1}]] \\ \hspace{0.3cm} = 4xu(1+(x,u)+x^2y^2+x^2+(xu,y)x^2y^2u^2+(xy,u)+x^2(x,u)+y^2+(x,yu)u^2\\ \hspace{0.5cm} +(xu,y)u^2+(x,y)x^2u^2+y^2(x,u)+(x,yu)x^2y^2u^2+(x,yu)x^2u^2+(x,y)y^2u^2+(x,yu)y^2u^2\\ \hspace{0.5cm}-(x,y)-u^2-(x,yu)-(x,y)x^2y^2-(x,y)x^2-(x,u)u^2-(y,u)u^2-(y,u)y^2u^2-(x,yu)x^2y^2\\ \hspace{0.5cm}-(x,yu)x^2-(x,y)y^2-(xy,u)u^2-(x,u)x^2u^2-y^2u^2-(x,yu)y^2-(x,u)y^2u^2) \ne 0, }$$
a contradiction that finishes the proof of Claim 4.
\noindent $\bullet$ \textbf{Claim 5}. If $[\GEN{g,h,k,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=8$ then $(g,h)\ne k^2$.
Let $g_1=kg^{-1}$, $h_1=g$ and $k_1=gk^{-1}h=g_1^{-1}h$. Then $[\GEN{g_1,h_1,k_1,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=8$. Thus $(g,h)=(h_1,g_1k_1)=(h_1,g_1)(h_1,k_1)=k^2g_1^2h_1^2(h_1,k_1)\ne k^2$, by Claim 4. This proves Claim 5.
\noindent $\bullet$ \textbf{Claim 6}. If $[\GEN{g,h,k,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=8$ then $(ghk)^2 \not\in \{ g^2h^2, g^2h^2k^2\}$. In fact $(ghk)^2g^2h^2k^2=(g,h)(g,k)(h,k)\ne 1$, because $[\GEN{g,h,k,\ensuremath{\mathcal{Z}}(G)}:\ensuremath{\mathcal{Z}}(G)]=8$, and thus $(ghk)^2 \ne g^2h^2k^2$. Now assume that $(ghk)^2 = g^2h^2$. Then $(g,hk)=h^2(hk)^2$, which contradicts Claim 4.
Finally, using Claims 1-6 we deduce that $$\matriz{{l} \hspace{-0.3cm}[[x+x^{-1},y+y^{-1}],[x+x^{-1},u+u^{-1}]] = \\
2yu((x,y)+(x,u)+(y,u)+(x,y)(x,u)(y,u)+(xyu)^2+\\ (x,y)x^2+(x,y)y^2+(x,y)u^2+(x,u)x^2+(x,u)y^2+(x,u)u^2+(y,u)x^2+(y,u)y^2+(y,u)u^2+\\ (x,y)x^2y^2+(x,y)x^2u^2+(x,y)y^2u^2+(x,u)x^2y^2+\\ (x,u)x^2u^2+(x,u)y^2u^2+(y,u)x^2y^2+(y,u)x^2u^2+(y,u)y^2u^2+\\ (xyu)^2x^2y^2+(xyu)^2x^2u^2+(xyu)^2y^2u^2+\\ (xy)^2u^2+(xu)^2y^2+(yu)^2x^2+\\ (xyu)^2x^2+(xyu)^2y^2+(xyu)^2u^2\\ -1-x^2-y^2-u^2-(x,y)(x,u)-(xu,y)-(xy,u)-x^2y^2-x^2u^2-y^2u^2-\\ (x,y)(x,u)x^2-(x,y)(x,u)y^2-(x,y)(x,u)u^2-(xu,y)x^2-(xu,y)y^2-(xu,y)u^2-(xy,u)x^2-\\ (xy,u)y^2-(xu,y)u^2-x^2y^2u^2-(x,y)(x,u)(y,u)x^2y^2-(x,y)(x,u)x^2u^2-(x,y)(x,u)y^2u^2-\\ (x,y)(x,u)x^2y^2-(xu,y)x^2u^2-(xu,y)u^2y^2-(xy,u)x^2y^2-\\ (xy,u)x^2u^2-(xy,u)y^2u^2-(x,y)(x,u)x^2y^2u^2-(xu,y)x^2y^2u^2-(xy,u)x^2y^2u^2) \ne 0, }$$ yielding the final contradiction. \end{proof}
We are ready to finish the proof of the necessary condition in Theorem~\ref{Main}. At the beginning of the section we proved that if $\breve{G}$ is commutative then $RG^+$ is Lie metabelian and $G$ satisfies either condition (1) or (2) of Theorem~\ref{Main}. Assume that $RG^+$ is Lie metabelian but $\breve{G}$ is not commutative as it has been assumed throughout this section. If the exponent of $G$ is different from 4 then, by Lemmas~\ref{x4No1Conmutan} and \ref{abelianindex2}, $B=\GEN{g:\circ(g)\ne 4}$ is an abelian subgroup of $G$ of index 2 and if $x\in G\setminus B$ then $x$ has order 4 and $b^x=b^{-1}$ for every $b\in B$. Thus $B$ satisfies condition (3) of Theorem~\ref{Main}. Assume that $G$ has exponent 4 and let $C=\GEN{xy : x^2\ne 1 \ne y^2, (x,y)=1}$. By Lemmas~\ref{Exp4CentroA} and \ref{Expo4Final}, $C$ is an abelian subgroup of $G$ containing $Z(G)=\{g:g^2=1\}$ and either $C$ has index 2 in $G$ or $C=Z(G)$ and $[G:Z(G)]=4$. In the latter case $G$ satisfies condition (4) of Theorem~\ref{Main}. In the former case, if $t\in G\setminus C$ then $t$ has order 4
and $c^t=c^{-1}$ for every $c\in C$. Thus $G$ satisfies condition (3) of Theorem~\ref{Main}, and the proof is complete.
\end{document}
\begin{document}
\title{Constructing arcs from paths using Zorn's Lemma}
\author[J. Brazas]{Jeremy Brazas} \address{West Chester University\\ Department of Mathematics\\ West Chester, PA 19383, USA} \email{[email protected]}
\subjclass{Primary 54C10,54C25 ; Secondary 54B30 } \keywords{path connected space, arcwise connected space, $\Delta$-Hausdorff space}
\date{\today}
\begin{abstract} It is a well-known fact that every path-connected Hausdorff space is arcwise connected. Typically, this result is viewed as a consequence of a sequence of fairly technical results from continuum theory. In this note, we exhibit a direct and simple proof of this statement, which makes explicit use of Zorn's Lemma. Additionally, by carefully breaking down the proof, we identify a modest improvement to a class of spaces relevant to algebraic topology. \end{abstract}
\maketitle
The following theorem is a well-established and commonly used result from general topology, which can be found in many introductory and reference textbooks on the subject, e.g. \cite{Eng89,HS,Moore,Nadler,willard}.
\begin{theorem}\label{hausdorfftheorem} Every path-connected Hausdorff space is arcwise connected. \end{theorem}
This result is typically proved using a combination of basic general topology, metrization theory, and the ``Arcwise Connectedness Theorem," which asserts that any locally connected continuum (connected, compact metric space) is arcwise connected. To prove the Arcwise Connectedness Theorem, one is required to construct an arc between two points without prior knowledge that even one non-constant path in the space exists. Consequently, proofs of this theorem are quite delicate. In fact, several authors of well-known topology textbooks have made critical oversights in the proof \cite{Ball}. It is possible that the popularity of treating Theorem \ref{hausdorfftheorem} as a consequence of the Arcwise Connectedness Theorem has overshadowed elementary proofs and potential generalizations of Theorem \ref{hausdorfftheorem}. The author believes that having a variety of proofs for one result can often be beneficial. In this expository note, we give a direct, elementary proof of Theorem \ref{hausdorfftheorem} using Zorn's Lemma. We find this proof to be conceptually simpler than other known proofs and a nice example of how the axiom of choice can sometimes ease technicalities one might face if intentionally trying to avoid the axiom of choice.
The primary difficulty in proving Theorem \ref{hausdorfftheorem} directly is ensuring that one can always replace a path with an injective path. We must begin with an arbitrary non-constant path $\alpha:[0,1]\to X$ (which may be space-filling) and search for a collection of pairwise-disjoint intervals $(a,b)\subseteq [0,1]$ such that $\alpha|_{[a,b]}$ is a loop. We refer to such a collection as a \textit{loop-cancellation of }$\alpha$. Provided that a loop-cancellation is maximal in the partial order of all loop-cancellations of $\alpha$, we may then ``pinch off" or ``delete" the corresponding subloops to obtain an injective path. To verify the existence of a maximal loop-cancellation, we employ the axiom of choice in the form of Zorn's Lemma.
The author knows of two other published proofs of Theorem \ref{hausdorfftheorem}, which also take the approach of deleting loops from paths. We take a moment to mention a few things about them.
First is S.B. Nadler's proof in \cite[Theorem 8.23]{Nadler}. Here, Theorem \ref{hausdorfftheorem} is proved for locally path-connected continua and then extended to all Hausdorff spaces using a metrization theorem. However, much like other textbook proofs, Nadler's line of argument is indirect, relying on a variety of far more general results, such as the ``Maximum-Minimum Theorem" \cite[Exercise 4.34]{Nadler}. The Maximum-Minimum Theorem uses the compactness of the hyperspace of compact subsets of $[0,1]$ to allow one to find what we are calling a ``maximal loop-cancellation" without appealing to the axiom of choice.
Second is a beautiful and direct proof by R. B\"{o}rger in the mostly overlooked note \cite{Borger}. This proof, as far as the author can tell, is the only published direct proof of Theorem \ref{hausdorfftheorem}. Rather than focusing on open sets as we do, B\"{o}rger takes the dual approach of constructing a nested sequence of closed sets $A_1\supseteq A_2\supseteq \cdots $ and, essentially, works to show the components of the complement of $\bigcap_{n}A_n$ form a maximal loop-cancellation. B\"{o}rger mentions in the introduction ``I am indebted to K.P. Hart for some simplifications, particularly for avoiding use of Zorn's lemma." Hence, we do not doubt that B\"{o}rger knew of a proof, which is similar in spirit to the one in this note.
Certainly, one could argue that our proof is logically redundant, because it unnecessarily uses the axiom of choice. However, there are often many benefits to exploring different proofs of well-known results. Moreover, there is a certain conceptual simplicity to our proof that the author does not find in any of the other methods of proof. Indeed, some of the technical ``weight" of other proofs appears to be absorbed by Zorn's Lemma. Those who freely use the axiom of choice may find it preferable.
Finally, in Section \ref{sectionfinal}, we note that by closely breaking down our ``from scratch" proof, it is possible to prove a modest generalization of Theorem \ref{hausdorfftheorem} that replaces the ``Hausdorff" hypothesis with a strictly weaker property that is relevant to categories commonly used in algebraic topology (see Theorem \ref{mainthm}).
\section{A proof of Theorem \ref{hausdorfftheorem}}\label{sectionproof}
First, we establish some notation and terminology. A \textit{path} is a continuous function $\alpha:[a,b]\to X$ from a closed real interval $[a,b]$. If $\alpha(a)=x$ and $\alpha(b)=y$, we say that $\alpha$ is a \textit{path from $x$ to} $y$. If $\alpha(a)=\alpha(b)$, we will refer to $\alpha$ as a \textit{loop}. Often, we will use the closed unit interval $[0,1]$ to be the domain of a path. Given paths $\alpha:[a,b]\to X$ and $\beta:[c,d]\to X$, we write $\alpha\equiv\beta$ if $\alpha=\beta\circ \phi$ for some increasing homeomorphism $\phi: [a,b]\to [c,d]$ and we say that $\alpha$ \textit{is a reparameterization of} $\beta$. If $\alpha:[0,1]\to X$ is a path, then $\alpha^{-}(t)=\alpha(1-t)$ is the reverse path. If $\alpha,\beta:[0,1]\to X$ are paths with $\alpha(1)=\beta(0)$, then $\alpha\cdot\beta:[0,1]\to X$ denotes the usual concatenation of the two paths.
\begin{definition} A space $X$ is \begin{enumerate} \item \textit{path connected} if whenever $x,y\in X$, there exists a path from $x$ to $y$, \item \textit{arcwise connected} if whenever $x\neq y$ in $X$, there exists a path $\alpha:[0,1]\to X$ from $x$ to $y$, which is a topological embedding, i.e. a homeomorphism onto its image. \end{enumerate} \end{definition}
Certainly, arcwise connected $\Rightarrow$ path connected. We do not consider ``monotone" paths (those which have connected fibers) as being of separate interest from injective paths, since a quotient map argument shows that every monotone path in a $T_1$ space may be replaced by an injective path with the same image.
Toward our proposed generalization of Theorem \ref{hausdorfftheorem}, we give the following definition.
\begin{definition} We say that a space $X$ \textit{permits loop deletion} if whenever $\alpha:[0,1]\to X$ is a path and there exist $0\leq \cdots \leq a_3\leq a_2\leq a_1<b_1\leq b_2\leq b_3\leq \cdots \leq 1$ such that $\{a_n\}\to 0$, $\{b_n\}\to 1$, and $\alpha(a_n)=\alpha(b_n)$ for all $n\in\mathbb{N}$, then $\alpha(0)=\alpha(1)$. \end{definition}
Intuitively, if $X$ permits loop deletion, then the scenario illustrated in Figure \ref{tfig} cannot occur, that is, there cannot exist paths $\alpha,\beta:[0,1]\to X$ such that $\alpha(1/n)=\beta(1/n)$ for all $n\in\mathbb{N}$ and $\alpha(0)\neq \beta(0)$.
\begin{figure}
\caption{In a space that permits loop deletion, there cannot be two paths $\alpha,\beta$ which agree on a sequence converging to $0$ but for which $\alpha(0)\neq\beta(0)$.}
\label{tfig}
\end{figure}
\begin{remark} If a space $X$ is Hausdorff, then convergent sequences in $X$ have unique limits. Hence, every Hausdorff space permits loop deletion. \end{remark}
Given a path $\alpha:[0,1]\to X$, a \textit{loop-cancellation of }$\alpha$ is a set $\mathscr{U}$ of pairwise-disjoint open intervals in $[0,1]$ such that for each $(a,b)\in \mathscr{U}$, we have $\alpha(a)=\alpha(b)$, i.e. such that $\alpha|_{[a,b]}$ is a loop (see Figure \ref{fig2}). We endow each loop-cancellation with the linear ordering inherited from the ordering of $[0,1]$. Let $\mathscr{L}(\alpha)$ denote the set of all loop-cancellations of $\alpha$. We give $\mathscr{L}(\alpha)$ the following partial order: $\mathscr{V}\geq \mathscr{U}$ in $\mathscr{L}(\alpha)$ if for each $U\in \mathscr{U}$, there exists $V\in\mathscr{V}$ such that $U\subseteq V$. Clearly, the empty set is minimal in $\mathscr{L}(\alpha)$. We say a loop-cancellation is \textit{maximal in $\mathscr{L}(\alpha)$} if it is maximal with respect to this partial order on $\mathscr{L}(\alpha)$. If $\alpha$ is itself a loop, then $\{(0,1)\}$ is a maximal loop-cancellation of $\alpha$.
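As a discrete toy analogue of these definitions (ours, purely illustrative, and not needed for the arguments below), one can compute a loop-cancellation of a finite sequence by greedily taking, at each index, the widest ``loop" that returns to the same value; deleting the covered indices then leaves a sequence of pairwise-distinct values, mirroring the injectivity of a reduction along a maximal loop-cancellation.

```python
def loop_cancellation(seq):
    """Greedy discrete analogue of a loop-cancellation: disjoint index
    intervals (i, j) with i < j and seq[i] == seq[j], scanning left to
    right and always taking the widest loop at each starting index."""
    out, i, n = [], 0, len(seq)
    while i < n:
        # Last occurrence of seq[i]; everything in between is a "loop".
        j = max(k for k in range(i, n) if seq[k] == seq[i])
        if j > i:
            out.append((i, j))
        i = j + 1
    return out

def reduce_path(seq):
    """Delete the loops: collapse each interval (i, j] onto its start i."""
    covered = set()
    for a, b in loop_cancellation(seq):
        covered.update(range(a + 1, b + 1))
    return [v for k, v in enumerate(seq) if k not in covered]
```

For example, `reduce_path([0, 1, 2, 1, 0, 3, 4, 3, 5])` deletes the two loops and returns `[0, 3, 5]`, an injective subsequence from the first value to the last.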
To construct injective paths, we wish to ``delete" loops occurring as subpaths of $\alpha$. Formally, this must be done within the domain by collapsing the closure of each element of a loop-cancellation to a single point so that the resulting quotient space is homeomorphic to $[0,1]$. The next definition identifies when such an operation is possible.
\begin{definition} We say that a loop-cancellation $\mathscr{U}\in\mathscr{L}(\alpha)$ is \textit{collapsible} if $\mathscr{U}\neq \{(0,1)\}$ and if the elements of $\mathscr{U}$ have pairwise disjoint closures. \end{definition}
\begin{remark}\label{collapsableremark} If $\alpha$ is not a loop and $\mathscr{U}$ is maximal in $\mathscr{L}(\alpha)$, then $\mathscr{U}$ is necessarily collapsible. Otherwise, we would have $(a,b),(b,c)\in\mathscr{U}$, which implies $\alpha(a)=\alpha(b)=\alpha(c)$. We could then replace these two with $(a,c)$ to form a loop-cancellation that is greater in $\mathscr{L}(\alpha)$. \end{remark}
Let $\mathscr{U}$ be a collection of open intervals in $(0,1)$ with pairwise-disjoint closures. Basic constructions from real analysis give the existence of non-decreasing, continuous surjections $\Gamma_{\mathscr{U}}:[0,1]\to [0,1]$ that are constant on the closure of each set $U\in\mathscr{U}$ and which are strictly increasing on $[0,1]\backslash \bigcup\{\overline{I}\mid I\in\mathscr{U}\}$. We refer to such a function $\Gamma_{\mathscr{U}}$ as a \textit{collapsing function for }$\mathscr{U}$. For example, if $\mathscr{U}$ is the set of components of the complement of the ternary Cantor set, then the ternary Cantor function is a collapsing function for $\mathscr{U}$. Note that $\Gamma_{\mathscr{U}}$ is not unique to $\mathscr{U}$ but if $\Gamma_{\mathscr{U}}$ and $\Gamma_{\mathscr{U}}'$ are two collapsing functions for $\mathscr{U}$, then we have $h\circ \Gamma_{\mathscr{U}}=\Gamma_{\mathscr{U}}'$ for some increasing homeomorphism $h:[0,1]\to [0,1]$.
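For a finite collection of intervals, one explicit collapsing function sends $x$ to the normalized Lebesgue measure of the part of the complement lying below $x$; this is the same recipe that yields the Cantor function in the example above. A minimal Python sketch (the function name and sample intervals are ours, for illustration only):

```python
def collapsing_function(intervals):
    """Return Gamma: [0,1] -> [0,1], a non-decreasing continuous surjection
    that is constant on the closure of each given open interval and
    strictly increasing off their union.  `intervals` is a list of (a, b)
    with 0 < a < b < 1 and pairwise-disjoint closures."""
    ivs = sorted(intervals)
    free = 1.0 - sum(b - a for a, b in ivs)  # total length of the complement

    def gamma(x):
        # Lebesgue measure of [0, x] minus the covered part, normalized.
        covered = sum(min(x, b) - min(x, a) for a, b in ivs)
        return (x - covered) / free

    return gamma
```

For instance, with `intervals = [(0.25, 0.5)]` the resulting map is constant (equal to $1/3$) on $[0.25, 0.5]$ and linear of slope $4/3$ elsewhere.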
\begin{definition}
Suppose $\mathscr{U}\in\mathscr{L}(\alpha)$ is collapsible and let $\Gamma_{\mathscr{U}}$ be a collapsing function for $\mathscr{U}$. We write $\alpha_{\mathscr{U}}:[0,1]\to X$ for the path that agrees with $\alpha$ on $[0,1]\backslash \bigcup\mathscr{U}$ and is constant on each $I\in\mathscr{U}$, that is, $\alpha_{\mathscr{U}}(\overline{I})=\alpha(\partial I)$ for each $I\in\mathscr{U}$. By the universal property of the quotient map $\Gamma_{\mathscr{U}}$, there exists a unique path $\beta:[0,1]\to X$ satisfying $\beta\circ \Gamma_{\mathscr{U}}=\alpha_{\mathscr{U}}$, which we refer to as a \textit{$\mathscr{U}$-reduction of $\alpha$.} \end{definition}
Since any two collapsing functions for $\mathscr{U}$ differ by a homeomorphism, it follows that if $\beta$ and $\beta '$ are two $\mathscr{U}$-reductions of $\alpha$ (constructed using different collapsing functions), then $\beta\equiv \beta '$.
\begin{figure}
\caption{A loop-cancellation $\mathscr{U}$ of a path $\alpha$ consisting of three intervals each of which is mapped to a loop. In this case, the loop-cancellation is maximal and a parameterization of the black arc is a $\mathscr{U}$-reduction of $\alpha$.}
\label{fig2}
\end{figure}
\begin{proposition}\label{loopreductionisinjectiveprop} If $\mathscr{U}\in\mathscr{L}(\alpha)$ is maximal and $\beta$ is a $\mathscr{U}$-reduction of $\alpha$, then $\beta$ is injective. \end{proposition}
\begin{proof} We prove the contrapositive. Suppose $0\leq a<b\leq 1$ with $\beta(a)=\beta(b)$. Fix a collapsing function $\Gamma_{\mathscr{U}}$ for $\mathscr{U}$ such that $\beta\circ\Gamma_{\mathscr{U}}=\alpha_{\mathscr{U}}$. Since $\Gamma_{\mathscr{U}}$ is non-decreasing, $\Gamma_{\mathscr{U}}^{-1}([a,b])$ is a closed interval, call it $[c,d]$. Notice that for each $I\in\mathscr{U}$, either $I\subseteq (c,d)$ or $I\cap [c,d]=\emptyset$. Now $c,d\notin\bigcup\mathscr{U}$ and $\alpha_{\mathscr{U}}(c)=\beta(a)=\beta(b)=\alpha_{\mathscr{U}}(d)$. Since $\alpha$ agrees with $\alpha_{\mathscr{U}}$ on $[0,1]\backslash\bigcup\mathscr{U}$, we have $\alpha(c)=\alpha(d)$. Now $\mathscr{V}=\{I\in\mathscr{U}\mid I\cap [c,d]=\emptyset\}\cup\{(c,d)\}$ is a loop-cancellation for $\alpha$ that is greater than $\mathscr{U}$ in $\mathscr{L}(\alpha)$. Hence, $\mathscr{U}$ is not maximal. \end{proof}
It is not entirely obvious that a maximal loop-cancellation must exist for an arbitrary path. Indeed, it is possible for distinct loop-cancellations to nest and overlap in complicated ways. Compounding the issue is the fact that different maximal loop-cancellations may result in different reductions. For example, suppose $\beta,\gamma:[0,1]\to X$ are injective paths for which $\beta((0,1))\cap \gamma((0,1))=\emptyset$, $\beta(0)=\gamma(0)$, and $\beta(1)=\gamma(1)$. If $\alpha=(\beta\cdot \beta^{-})\cdot \gamma$, then $\mathscr{U}=\{(0,1/2)\}$ is maximal in $\mathscr{L}(\alpha)$ with $\gamma$ as a $\mathscr{U}$-reduction and $\mathscr{V}=\{(1/4,1)\}$ is maximal with $\beta$ as a $\mathscr{V}$-reduction. Based on these observations, it is natural to attempt an application of Zorn's Lemma.
\begin{lemma}\label{maximalloopcancellationlemma} If $X$ permits loop deletion then for every path $\alpha:[0,1]\to X$, there exists a maximal loop-cancellation $\mathscr{V}\in\mathscr{L}(\alpha)$. \end{lemma}
\begin{proof} The conclusion is clear if $\alpha$ is a loop so we assume $\alpha(0)\neq \alpha(1)$. The lemma will follow from Zorn's Lemma once we show that every linearly ordered subset of $\mathscr{L}(\alpha)$ has an upper bound. Suppose $S\subseteq\mathscr{L}(\alpha)$ is linearly ordered when given the order inherited from $\mathscr{L}(\alpha)$. Now $V=\bigcup_{\mathscr{U}\in S}(\bigcup \mathscr{U})$ is an open subset of $(0,1)$. Let $\mathscr{V}$ denote the set of connected components of $V$. To show that $S$ has an upper bound in $\mathscr{L}(\alpha)$, it suffices to show that $\mathscr{V}\in \mathscr{L}(\alpha)$. Let $(c,d)\in\mathscr{V}$. If $(c,d)\in\mathscr{U}$ for some $\mathscr{U}\in S$, it is clear that $\alpha(c)=\alpha(d)$. Suppose that $(c,d)\notin \mathscr{U}$ for any $\mathscr{U}\in S$.
Pick $c<\cdots <a_3<a_2<a_1<b_1<b_2<b_3<\cdots <d$ where $\{a_{n}\}\to c$ and $\{b_n\}\to d$. Fixing $n\in\mathbb{N}$, $\bigcup S$ is an open cover of the closed interval $[a_n,b_n]$ and so we may find finitely many $I_{n,1},I_{n,2},\dots, I_{n,k_n}\in \bigcup S$, which cover $[a_n,b_n]$. Find $\mathscr{W}_{n,j}\in S$ with $I_{n,j}\in \mathscr{W}_{n,j}$. Since $S$ is linearly ordered, we may define $\mathscr{U}_n$ to be the maximum of $\{\mathscr{W}_{n,1},\mathscr{W}_{n,2},\dots , \mathscr{W}_{n,k_n}\}$ in $S$. Since $I_{n,j}\subseteq\bigcup\mathscr{W}_{n,j}\subseteq \bigcup\mathscr{U}_n$ for all $j$, it follows that $[a_n,b_n]\subseteq \bigcup\mathscr{U}_n$. For some interval $(c_n,d_n)\in \mathscr{U}_n$ we have $[a_n,b_n]\subseteq (c_n,d_n)$. Moreover, since $(c_n,d_n)$ meets the connected component $(c,d)$ of $V$, we have $(c_n,d_n)\subseteq (c,d)$.
We now have $c\leq \cdots\leq c_3\leq c_2\leq c_1<d_1\leq d_2\leq d_3\leq \cdots \leq d$ where $\{c_n\}\to c$ and $\{d_n\}\to d$. Moreover, since $(c_n,d_n)\in \mathscr{U}_n\in S$, we have $\alpha(c_n)=\alpha(d_n)$ for all $n\in\mathbb{N}$. Finally, since we are assuming that $X$ permits loop deletion, it follows that $\alpha(c)=\alpha(d)$. Therefore, $\mathscr{V}$ is a loop-cancellation, i.e. $\mathscr{V}\in\mathscr{L}(\alpha)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{hausdorfftheorem}] Suppose $X$ is a path connected Hausdorff space and fix $x,y\in X$ with $x\neq y$. Find a path $\alpha:[0,1]\to X$ from $x$ to $y$. According to Lemma \ref{maximalloopcancellationlemma}, there exists a maximal loop-cancellation $\mathscr{V}\in\mathscr{L}(\alpha)$. Since $\mathscr{V}$ must be collapsible (Remark \ref{collapsableremark}), we may choose a collapsing function $\Gamma_{\mathscr{V}}$ for $\mathscr{V}$. Now the $\mathscr{V}$-reduction $\beta:[0,1]\to X$ satisfying $\beta\circ\Gamma_{\mathscr{V}}=\alpha_{\mathscr{V}}$ is injective by Proposition \ref{loopreductionisinjectiveprop}. Since $\Gamma_{\mathscr{V}}:[0,1]\to[0,1]$ is a non-decreasing surjection, we have $\beta(0)=x$ and $\beta(1)=y$. Moreover, since $[0,1]$ is compact and $X$ is Hausdorff, the continuous injection $\beta$ is a homeomorphism onto its image. This proves $X$ is arcwise connected. \end{proof}
\section{What other spaces permit loop deletion?}\label{sectionfinal}
There are, in fact, some commonly used spaces that permit loop deletion but which are not necessarily Hausdorff. Such spaces become particularly relevant when general constructions fail to be closed under the Hausdorff property.
\begin{definition} We say a space $X$ is \begin{enumerate} \item \textit{weakly Hausdorff} if for every map $f:K\to X$ from a compact Hausdorff space $K$, the image $f(K)$ is closed in $X$, \item \textit{$\Delta$-Hausdorff} if for every path $\alpha:[0,1]\to X$, the image $\alpha([0,1])$ is closed in $X$. \end{enumerate} \end{definition}
The following implications hold: Hausdorff $\Rightarrow$ weakly Hausdorff $\Rightarrow$ $\Delta$-Hausdorff $\Rightarrow$ $T_1$. The weakly Hausdorff property is particularly relevant to algebraic topology, where it provides a suitable ``separation axiom" within the category of compactly generated spaces \cite{Strickland}. As noted in \cite{McCord}, if one is performing ``gluing" constructions involving quotient topologies in algebraic topology, the weakly Hausdorff property is often preferable over the Hausdorff property since many such constructions preserve the former property but not the latter. The $\Delta$-Hausdorff property is the analogue in the category of ``$\Delta$-generated spaces" \cite{CSWdiff,FRdirected} and offers the same kind of conveniences.
\begin{example} The one-point compactification $X^{\ast}$ of any non-locally compact Hausdorff space $X$ is weakly Hausdorff but not Hausdorff. This occurs, for example, if $X$ is $\mathbb{Q}$, $\mathbb{R}^{\omega}$ with the product topology, or $[0,1]^{\omega}$ in the uniform topology. One can attach arcs or other spaces to $X^{\ast}$ to obtain path-connected examples. Hence, there are many $\Delta$-Hausdorff spaces that are not Hausdorff. \end{example}
To extend Theorem \ref{hausdorfftheorem}, we check that all of the ingredients of the proof work for $\Delta$-Hausdorff spaces.
\begin{lemma}\label{deltaclosedlemma} If $X$ is $\Delta$-Hausdorff, then $X$ permits loop deletion. \end{lemma}
\begin{proof} Suppose, to obtain a contradiction, that $\alpha(0)=x\neq y=\alpha(1)$. Set $x_n=\alpha(a_n)=\alpha(b_n)$ and note that $\{x_n\}$ converges to both $x$ and $y$ in $X$. If the sequences $\{a_n\}$ and $\{b_n\}$ stabilize to $0$ and $1$ respectively, then $\alpha(b_n)=x$ for sufficiently large $n$. Therefore, the constant sequence at $x$ converges to $y$. Since every $\Delta$-Hausdorff space is $T_1$, we obtain a contradiction.
We now assume that one of the sequences $\{a_n\}$ or $\{b_n\}$ is not eventually constant. Without loss of generality, we may assume $\{a_n\}$ is not eventually constant. Thus $0<a_n$ for all $n\in\mathbb{N}$. Since $X$ is $\Delta$-Hausdorff, the sets $\alpha([0,a_n])$, $n\in\mathbb{N}$, are closed in $X$ and contain $\{x_m\mid m\geq n\}$. Since $\{x_m\}_{m\geq n}\to y$, we have $y\in \alpha([0,a_n])$. Since $\{a_n\}$ is non-increasing and converges to $0$, we may find a decreasing sequence $\{t_j\}$ in $(0,a_1]$ that converges to $0$ and for which $\alpha(t_j)=y$ for all $j\in\mathbb{N}$. However, since $\{t_j\}\to 0$, it follows that the constant sequence at $y$ converges to $x$, so $x\in\overline{\{y\}}$. Since $X$ is $T_1$ and $x\neq y$, this is a contradiction. \end{proof}
Since we are no longer assuming $X$ is Hausdorff, the usual Closed Mapping Theorem (the continuous image of a compact space in a Hausdorff space is closed) does not apply. Hence, we must also make sure that an injective path in a $\Delta$-Hausdorff space is an embedding.
\begin{lemma}\label{injtoembeddinglemma} If $X$ is $\Delta$-Hausdorff, then every injective-path in $X$ is a closed embedding. \end{lemma}
\begin{proof} Let $\alpha:[0,1]\to X$ be an injective-path and let $C\subseteq [0,1]$ be closed. Write $C=\bigcap_{n\in\mathbb{N}}F_n$ where $F_n$ is a finite, disjoint union of closed intervals. Since $X$ is $\Delta$-Hausdorff, if $[a,b]$ is a component of $F_n$, then $\alpha([a,b])$ is closed in $X$. Therefore, $\alpha(F_n)$ is closed in $X$ for all $n\in\mathbb{N}$. Since $\alpha$ is injective, we have $\alpha(C)=\bigcap_{n\in\mathbb{N}}\alpha(F_n)$ and we conclude that $\alpha(C)$ is closed in $X$. \end{proof}
With Lemmas \ref{deltaclosedlemma} and \ref{injtoembeddinglemma} established, the same proof used in Section \ref{sectionproof} gives the following generalization of Theorem \ref{hausdorfftheorem}.
\begin{theorem}\label{mainthm} Every path-connected, $\Delta$-Hausdorff topological space is arcwise connected. \end{theorem}
Upon inspection, one can see that all parts of B\"{o}rger's proof of Theorem \ref{hausdorfftheorem} also go through for $\Delta$-Hausdorff spaces. Hence, Theorem \ref{mainthm} can be proven without appealing to the axiom of choice.
\begin{example} Even with the weakened hypothesis, the converse of Theorem \ref{mainthm} is certainly not true. For a counterexample, let $X$ be the quotient space $[-1,1]/\mathord{\sim}$ where $-\frac{n}{n+1}\sim \frac{n}{n+1}$ for $n\in\mathbb{N}$ (this space is precisely illustrated in Figure \ref{tfig}). As a quotient of a closed interval, $X$ is $\Delta$-generated. However, one can show that $X$ is not arcwise connected and therefore is not $\Delta$-Hausdorff. Let $a,b$ be the images of $-1,1$ in $X$ respectively and let $Y$ be the space obtained by attaching a copy of $[0,1]$ to $X$ by identifying $0\sim a$ and $1\sim b$. Now $Y$ is arcwise connected but it is not $\Delta$-Hausdorff. \end{example}
Indeed, it is unrealistic to hope that there is some simple topological property $P$ that gives ``path connected $+$ $P$ $\Leftrightarrow$ arcwise connected" and whose definition does not involve quantifying over all paths in the space. However, it is possible to show that $X$ is $\Delta$-Hausdorff if and only if every non-loop path in $X$ has an injective $\mathscr{U}$-reduction. Since we have already proven the ``hard" direction in this note, we'll leave the converse as an exercise.
\end{document}
\begin{document}
\title{Continuous time `true' self-avoiding random walk on $\Z$}
\begin{center} \vspace*{-3ex} Institute of Mathematics\\ Budapest University of Technology (BME) \end{center}
\vspace*{3ex}
\begin{abstract}
We consider the continuous time version of the `true' or `myopic' self-avoiding random walk with site repulsion in $1d$. The Ray\,--\,Knight-type method, which was applied in \citep{toth_95} to the discrete time and edge repulsion case, is applicable to this model with some modifications. We present a limit theorem for the local time of the walk and a local limit theorem for the displacement.
\end{abstract}
\section{Introduction}
\subsection{Historical background}
Let $X(t)$, $t\in\Z_+:=\{0,1,2,\dots\}$ be a nearest neighbour walk on the integer lattice $\Z$ starting from $X(0)=0$ and denote by $\ell(t,x)$, $(t,x)\in\Z_+\times\Z$, its local time (that is: its occupation time measure) on sites: \[\ell(t,x):=\#\{0\le s\le t: X(s)=x\}\] where $\#\{\dots\}$ denotes the cardinality of the set. The true self-avoiding random walk with site repulsion (STSAW) was introduced in \citep{amit_parisi_peliti_83} as an example of a non-trivial random walk with long memory which behaves qualitatively differently from the usual diffusive behaviour of random walks. It is governed by the evolution rules \begin{align} \condprob{X(t+1)=x\pm1} {{\mathcal{F}}_t, \ X(t)=x} &= \frac {e^{-\beta\ell(t,x\pm1)}} {e^{-\beta\ell(t,x+1)}+e^{-\beta\ell(t,x-1)}}\notag\\ &=\frac{e^{-\beta(\ell(t,x\pm1)-\ell(t,x))}} {e^{-\beta(\ell(t,x+1)-\ell(t,x))}+e^{-\beta(\ell(t,x-1)-\ell(t,x))}}, \label{transprobstsaw}\\ \ell(t+1,x) &= \ell(t,x) + \ind{X(t+1)=x}.\notag \end{align}
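The evolution rules above are straightforward to simulate. The following sketch is ours and purely illustrative; the function names and the convention of counting the initial visit in the local time are our choices, not part of the model's analysis.

```python
import math
import random

def stsaw_step(pos, ltimes, beta, rng):
    """One step of the site-repelling walk, following the rule
    (transprobstsaw): jump left/right with probabilities proportional to
    exp(-beta * local time of the neighbouring site), then update the
    local time at the new site."""
    w_right = math.exp(-beta * ltimes.get(pos + 1, 0))
    w_left = math.exp(-beta * ltimes.get(pos - 1, 0))
    p_right = w_right / (w_right + w_left)
    pos += 1 if rng.random() < p_right else -1
    ltimes[pos] = ltimes.get(pos, 0) + 1
    return pos

def simulate_stsaw(t_max, beta=1.0, seed=0):
    """Run t_max steps from X(0) = 0; the initial visit is counted, so the
    local times always sum to t_max + 1."""
    rng = random.Random(seed)
    pos, ltimes = 0, {0: 1}
    for _ in range(t_max):
        pos = stsaw_step(pos, ltimes, beta, rng)
    return pos, ltimes
```

As a quick sanity check, $\beta=0$ reduces the dynamics to the simple symmetric random walk.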
The extension of this definition to arbitrary dimensions is straightforward. In \citep{amit_parisi_peliti_83}, actually, the multidimensional version of the walk was defined. Non-rigorous -- nevertheless rather convincing -- scaling and renormalization group arguments suggested that: \begin{enumerate} \item In three and more dimensions, the walk behaves diffusively with a Gaussian scaling limit of $t^{-1/2}X(t)$ as $t\to\infty$. See e.g.\ \citep{amit_parisi_peliti_83}, \citep{obukhov_peliti_83} and \citep{horvath_toth_veto_10}. \item In one dimension (that is: the case formally defined above), the walk is superdiffusive with a non-degenerate scaling limit of $t^{-2/3}X(t)$ as $t\to\infty$, but with no hint about the limiting distribution. See \citep{peliti_pietronero_87}, \citep{toth_99} and \citep{toth_veto_08}. \item The critical dimension is $d=2$ where the Gaussian scaling limit is obtained with logarithmic multiplicative corrections added to the diffusive scaling. See \citep{amit_parisi_peliti_83} and \citep{obukhov_peliti_83}. \end{enumerate}
These questions are still open. However, the scaling limit in one dimension of a closely related object was clarified in \citep{toth_95}. The true self-avoiding walk whose self-repulsion is defined in terms of the local times on edges rather than sites is the following:
Let $\wX(t)$, $t\in\Z_+:=\{0,1,2,\dots\}$ be yet again a nearest neighbour walk on the integer lattice $\Z$ starting from $\wX(0)=0$ and denote now by $\well_{\pm}(t,x)$, $(t,x)\in\Z_+\times\Z$, its local time (that is: occupation time measure) on unoriented edges: \begin{align*} \well_+(t,x) &:= \#\{0\le s < t: \{\wX(s),\wX(s+1)\}=\{x,x+1\} \}, \\ \well_-(t,x) &:= \#\{0\le s < t: \{\wX(s),\wX(s+1)\}=\{x,x-1\} \}. \end{align*} Note that $\well_+(t,x)=\well_-(t,x+1)$. The true self-avoiding random walk with edge repulsion (ETSAW) is governed by the evolution rules \begin{align*} \condprob{\wX(t+1)=x\pm1} {{\mathcal{F}}_t,\ \wX(t)=x} &= \frac {e^{-2\beta\well_\pm(t,x)}} {e^{-2\beta\well_+(t,x)}+e^{-2\beta\well_-(t,x)}}\\ &=\frac{e^{-\beta(\well_\pm(t,x)-\well_\mp(t,x))}} {e^{-\beta(\well_+(t,x)-\well_-(t,x))}+ e^{-\beta(\well_-(t,x)-\well_+(t,x))}}\\ \well_\pm(t+1,x) &= \well_\pm(t,x) + \ind{\{\wX(t),\wX(t+1)\}=\{x,x\pm1\}}. \end{align*}
In \citep{toth_95}, a limit theorem was proved for $t^{-2/3}\wX(t)$, as $t\to\infty$. Later, in \citep{toth_werner_98}, a space-time continuous process $\R_+\ni t\mapsto \mathcal{X}(t)\in\R$ was constructed -- called the true self-repelling motion (TSRM) -- which possesses all the analytic and stochastic properties of the assumed scaling limit of $\R_+\ni t\mapsto \mathcal{X}^{(A)}(t):= A^{-2/3}\wX([At])\in\R$. The invariance principle for this model was established in \citep{newman_ravishankar_06}.
A key point in the proof of \citep{toth_95} is a kind of Ray\,--\,Knight-type argument which works for the ETSAW but not for the STSAW. (For the original idea of Ray\,--\,Knight theory, see \citep{knight_63} and \citep{ray_63}.) Let \[\wT_{\pm,x,h}:=\min\{t\ge0: \well_\pm(t,x)\ge h\},\qquad x\in\Z, \quad h\in\Z_+\] be the so-called inverse local times and \[\wLambda_{\pm,x,h}(y):=\well_\pm(\wT_{\pm,x,h},y),\qquad x,y\in\Z, \quad h\in\Z_+\] the local time sequence of the walk stopped at the inverse local times. It turns out that, in the ETSAW case, for any fixed $(x,h)\in\Z\times\Z_+$, the process $\Z\ni y\mapsto\wLambda_{\pm,x,h}(y)\in\Z_+$ is Markovian and can be thoroughly analyzed.
In fact, a similar reduction does not hold for the STSAW. Here, the natural objects are actually slightly simpler to define: \begin{align*} T_{x,h} &:= \min\{t\ge0: \ell(t,x)\ge h\},&& x\in\Z, & h\in\Z_+, \\[1ex] \Lambda_{x,h}(y) &:= \ell(T_{x,h},y),&& x,y\in\Z, & h\in\Z_+. \end{align*} The process $\Z\ni y\mapsto\Lambda_{x,h}(y)\in\Z_+$ (with fixed $(x,h)\in\Z\times\Z_+$) is not Markovian, and thus the Ray\,--\,Knight-type approach fails. Nevertheless, this method does work for the model treated in the present paper.
The main ideas of this paper are similar to those of \citep{toth_95}, but there are essential differences, too. Those parts of the proofs which are the same as in \citep{toth_95} will not be spelled out explicitly. E.g.\ the full proof of Theorem \ref{thmtoth} is omitted altogether. We put the emphasis on those arguments which differ genuinely from \citep{toth_95}. In particular, we present some new coupling arguments.
This paper is organised as follows. First, we describe the model which we will study and present our theorems. In Section \ref{RK}, we give the proof of Theorem \ref{thmlimLambda} in three steps: first, we introduce the main technical tools, i.e.\ some auxiliary Markov processes; then we state technical lemmas, which are all devoted to checking the conditions of Theorem \ref{thmtoth} cited from \citep{toth_95}; finally, we complete the proof using the lemmas. The proofs of these lemmas are postponed until Section \ref{proofs}. The proof of Theorem \ref{thmXconv} is in Section \ref{Xconv}.
\subsection{The random walk considered and the main results}
Now, we define a version of the true self-avoiding random walk in continuous time for which the Ray\,--\,Knight-type method sketched in the previous section is applicable. Let $X(t)$, $t\in\R_+$ be a \emph{continuous time} random walk on $\Z$ starting from $X(0)=0$ and having right continuous paths. Denote by $\ell(t,x)$, $(t,x)\in\R_+\times\Z$ its local time (occupation time measure) on sites: \[\ell(t,x):=\abs{\{s\in[0,t)\,:\, X(s)=x\}}\]
where $|\{\dots\}|$ now denotes the Lebesgue measure of the set indicated. Let $w:\R\to(0,\infty)$ be an almost arbitrary rate function: we only assume that it is non-decreasing and not constant.
The law of the random walk is governed by the following jump rates and differential equations (for the local time increase): \begin{align} \condprob{X(t+\mathrm d t)=x\pm1} {{\mathcal{F}}_t, \ X(t)=x} &= w(\ell(t,x)-\ell(t,x\pm1))\,\mathrm d t + o(\mathrm d t),\label{Xtrans}\\ \dot{\ell}(t,x) &= \ind{X(t)=x}\label{ltrans} \end{align} with initial conditions \[X(0)=0, \qquad \ell(0,x)=0.\] The dot in \eqref{ltrans} denotes time derivative. Note that for the choice of exponential weight function $w(u)=\exp\{\beta u\}$, conditionally on a jump occurring at the instant $t$, the random walker jumps to the right or to the left from its actual position with probabilities $e^{-\beta\ell(t,x\pm1)}/(e^{-\beta\ell(t,x+1)}+e^{-\beta\ell(t,x-1)})$, just like in \eqref{transprobstsaw}. It will turn out that in the long run the holding times remain of order one.
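To spell out the last remark, we include the short computation (for the reader's convenience). For $w(u)=e^{\beta u}$, the rates in \eqref{Xtrans} factorize as \begin{align*} w\bigl(\ell(t,x)-\ell(t,x\pm1)\bigr) &= e^{\beta\ell(t,x)}\,e^{-\beta\ell(t,x\pm1)}, \end{align*} so, conditionally on a jump occurring at the instant $t$, the probability of jumping to $x\pm1$ is \begin{align*} \frac{w(\ell(t,x)-\ell(t,x\pm1))}{w(\ell(t,x)-\ell(t,x+1))+w(\ell(t,x)-\ell(t,x-1))} =\frac{e^{-\beta\ell(t,x\pm1)}}{e^{-\beta\ell(t,x+1)}+e^{-\beta\ell(t,x-1)}}, \end{align*} since the common factor $e^{\beta\ell(t,x)}$ cancels; these are exactly the one-step probabilities \eqref{transprobstsaw} of the STSAW.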
Fix $j\in\Z$ and $r\in\R_+$. We consider the random walk $X(t)$ running from $t=0$ up to the stopping time \begin{equation} T_{j,r}=\inf\{t\ge0:\ell(t,j)\ge r\},\label{defT} \end{equation} which is the inverse local time for our model. Define \begin{equation} \Lambda_{j,r}(k):=\ell(T_{j,r},k)\qquad k\in\Z\label{Lambdadef} \end{equation} the local time process of $X$ stopped at the inverse local time.
Let \begin{align*} \lambda_{j,r}&:=\inf\{k\in\Z:\Lambda_{j,r}(k)>0\},\\ \rho_{j,r}&:=\sup\{k\in\Z:\Lambda_{j,r}(k)>0\}. \end{align*}
Fix $x\in\R$ and $h\in\R_+$. Consider the two-sided reflected Brownian motion $W_{x,h}(y)$, $y\in\R$ with starting point $W_{x,h}(x)=h$. Define the times of the first hitting of $0$ outside the interval $[0,x]$ or $[x,0]$ with \begin{align*} \mathfrak l_{x,h}&:=\sup\{y<0\wedge x:W_{x,h}(y)=0\},\\ \mathfrak r_{x,h}&:=\inf\{y>0\vee x:W_{x,h}(y)=0\} \end{align*} where $a\wedge b=\min(a,b)$, $a\vee b=\max(a,b)$, and let \begin{equation} \mathcal{T}_{x,h}:=\int_{\mathfrak l_{x,h}}^{\mathfrak r_{x,h}} W_{x,h}(y)\,\mathrm d y.\label{defcT} \end{equation}
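For readers who wish to experiment, the triple $(\mathfrak l_{x,h},\mathfrak r_{x,h},\mathcal{T}_{x,h})$ of \eqref{defcT} can be approximated by a crude Euler discretization of the reflected Brownian motion. The following sketch is ours and purely illustrative; the step size and all names are arbitrary choices, and the hitting of $0$ is only detected on the grid.

```python
import random

def reflected_bm_area(x, h, dy=1e-2, seed=1):
    """Crude Euler sketch of (l, r, T_{x,h}): W is two-sided reflected
    Brownian motion with W(x) = h.  We run two independent discretized
    paths from x to the right and to the left, reflect at 0 inside
    [0 ^ x, 0 v x], and stop each path at the first grid point outside
    that interval where the unreflected value drops below 0.  The
    accumulated area approximates the integral T_{x,h}."""
    rng = random.Random(seed)
    lo, hi = min(0.0, x), max(0.0, x)

    def run(direction):
        w, y, area = h, x, 0.0
        while True:
            w += rng.gauss(0.0, dy ** 0.5)   # Brownian increment over dy
            y += direction * dy
            if w <= 0.0 and not (lo <= y <= hi):
                return y, area               # first zero outside [lo, hi]
            w = abs(w)                       # reflection at 0
            area += w * dy

    r, area_right = run(+1.0)
    l, area_left = run(-1.0)
    return l, r, area_left + area_right
```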
The main result of this paper is \begin{theorem}\label{thmlimLambda} Let $x\in\R$ and $h\in\R_+$ be fixed. Then \begin{align} A^{-1}\lambda_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}&\Longrightarrow\mathfrak l_{0\wedge x,h},\\ A^{-1}\rho_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}&\Longrightarrow\mathfrak r_{0\vee x,h}, \end{align} and \begin{equation} \begin{split} \left(\frac{\Lambda_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}(\lfloor Ay\rfloor)}{\sigma\sqrt A}, \frac{\lambda_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}}A\le y\le\frac{\rho_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}}A\right)\hspace*{8em}\\ \Longrightarrow\left(W_{x,h}(y), \mathfrak l_{0\wedge x,h}\le y\le\mathfrak r_{0\vee x,h}\right) \end{split} \end{equation} as $A\to\infty$ where $\sigma^2=\int_{-\infty}^\infty u^2\rho(\mathrm d u)\in(0,\infty)$ with $\rho$ defined by \eqref{defrho} and \eqref{defW} later. \end{theorem}
\begin{corollary}\label{corTlim} For any $x\in\R$ and $h\ge0$, \begin{equation} \frac{T_{\lfloor Ax\rfloor,\lfloor\sqrt A\sigma h\rfloor}} {\sigma A^{3/2}}\Longrightarrow\mathcal{T}_{x,h}. \end{equation} \end{corollary}
For stating Theorem \ref{thmXconv}, we need some more definitions. It follows from \eqref{defcT} that $\mathcal{T}_{x,h}$ has an absolutely continuous distribution. Let \begin{equation} \omega(t,x,h):=\frac\partial{\partial t}\,\prob{\mathcal{T}_{x,h}<t}\label{defomega} \end{equation} be the density of the distribution of $\mathcal{T}_{x,h}$. Define \[\varphi(t,x):=\int_0^\infty \omega(t,x,h)\,\mathrm d h.\]
Theorem 2 of \citep{toth_95} gives that, for fixed $t>0$, $\varphi(t,\cdot)$ is a density function, i.e. \begin{equation} \int_{-\infty}^\infty \varphi(t,x)\,\mathrm d x=1.\label{intfi} \end{equation} One could expect that $\varphi(t,\cdot)$ is the density of the limit distribution of $X(At)/A^{2/3}$ as $A\to\infty$; instead, we prove the corresponding statement for Laplace transforms. We denote by $\hat\varphi$ the Laplace transform of $\varphi$: \begin{equation} \hat\varphi(s,x):=s\int_0^\infty e^{-st}\varphi(t,x)\,\mathrm d t.\label{deffihat} \end{equation}
\begin{theorem}\label{thmXconv} Let $s\in\R_+$ be fixed and $\theta_{s/A}$ a random variable of exponential distribution with mean $A/s$ which is independent of the random walk $X(t)$. Then, for almost all $x\in\R$, \begin{equation} A^{2/3}\prob{X(\theta_{s/A})=\lfloor A^{2/3}x\rfloor}\to\hat\varphi(s,x) \end{equation} as $A\to\infty$. \end{theorem}
From this local limit theorem, the integral limit theorem follows immediately: \[\lim_{A\to\infty}\prob{A^{-2/3}X(\theta_{s/A})<x}=\int_{-\infty}^x \hat\varphi(s,y)\,\mathrm d y.\]
\section{Ray\,--\,Knight construction}\label{RK}
The aim of this section is to give a random walk representation of the local time sequence $\Lambda_{j,r}$. To this end, we introduce an auxiliary Markov process corresponding to each edge of $\Z$. The process corresponding to the edge $e$ is defined in such a way that its value is the difference of the local times, at the two vertices adjacent to $e$, of the process $X(t)$ stopped at the inverse local time $T_{j,r}$. It turns out that the auxiliary Markov processes are independent. Hence, by induction, the sequence of local times can be given as partial sums of independent auxiliary Markov processes. The proof of Theorem \ref{thmlimLambda} relies exactly on this observation.
\subsection{The basic construction}\label{basic}
Let \begin{equation} \tau(t,k):=\ell(t,k)+\ell(t,k+1)\label{deftau} \end{equation} be the local time spent on (the endpoints of) the edge $\langle k,k+1\rangle$, $k\in\Z$, and \begin{equation} \theta(s,k):=\inf\{t\ge0\,:\,\tau(t,k)>s\} \end{equation} its inverse. Further on, define \begin{align} \xi_k(s)&:=\ell(\theta(s,k),k+1)-\ell(\theta(s,k),k),\\ \alpha_k(s)&:=\ind{X(\theta(s,k))=k+1}-\ind{X(\theta(s,k))=k}. \end{align}
A crucial observation is that, for each $k\in\Z$, $s\mapsto(\alpha_k(s),\xi_k(s))$ is a Markov process on the state space $\{-1,+1\}\times\R$. The transition rules are \begin{align} \condprob{\alpha_k(t+\mathrm d t)=-\alpha_k(t)}{\mathcal{F}_t} &=w(\alpha_k(t)\xi_k(t))\,\mathrm d t + o(\mathrm d t),\label{lawalpha}\\ \dot{\xi_k}(t)&=\alpha_k(t),\label{lawxi} \end{align} with some initial state $(\alpha_k(0),\xi_k(0))$. Furthermore, these processes are independent. In plain words: \begin{enumerate} \item $\xi_k(t)$ is the difference of time spent by $\alpha_k$ in the states $+1$ and $-1$, alternatively, the difference of time spent by the walker on the sites $k+1$ and $k$; \item $\alpha_k(t)$ changes sign with rate $w(\alpha_k(t)\xi_k(t))$ since the walker jumps between $k$ and $k+1$ with these rates. \end{enumerate}
The common infinitesimal generator of these processes is \[(Gf)(\pm1, u)=\pm f'(\pm1,u) + w(\pm u)\big(f(\mp1,u)-f(\pm1,u)\big)\] where $f'(\pm1,u)$ is the derivative with respect to the second variable. It is an easy computation to check that these Markov processes are ergodic and their common unique stationary measure is \begin{equation} \mu(\pm1,\mathrm d u)=\frac{1}{2Z}e^{-W(u)}\,\mathrm d u\label{defmu} \end{equation} where \begin{equation} W(u):=\int_0^u \left(w(v)-w(-v)\right)\mathrm d v\quad\text{and}\quad Z:=\int_{-\infty}^\infty e^{-W(v)}\,\mathrm d v.\label{defW} \end{equation} Mind that, due to the condition imposed on $w$ (non-decreasing and non-constant), \begin{equation} \lim_{\abs{u}\to\infty}\frac{W(u)}{\abs{u}}=\lim_{v\to\infty}(w(v)-w(-v))>0, \label{Z<infty} \end{equation} and thus $Z<\infty$ and $\mu(\pm1,\mathrm d u)$ is indeed a probability measure on $\{-1,+1\}\times\R$.
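For completeness, here is the short computation behind the last claim, which we include for the reader's convenience. For smooth, compactly supported test functions $f$, \begin{align*} 2Z\int Gf\,\mathrm d\mu &=\int_\R \Bigl[f'(+1,u)-f'(-1,u) +\bigl(w(u)-w(-u)\bigr)\bigl(f(-1,u)-f(+1,u)\bigr)\Bigr] e^{-W(u)}\,\mathrm d u, \end{align*} and integrating by parts, using $W'(u)=w(u)-w(-u)$, \begin{align*} \int_\R f'(\pm1,u)\,e^{-W(u)}\,\mathrm d u =\int_\R f(\pm1,u)\bigl(w(u)-w(-u)\bigr)e^{-W(u)}\,\mathrm d u, \end{align*} so the coefficients of $f(+1,u)$ and of $f(-1,u)$ cancel and $\int Gf\,\mathrm d\mu=0$, i.e.\ $\mu$ is indeed stationary.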
Let \begin{equation} \beta_\pm(t,k):=\inf\left\{s\ge0:\int_0^s \ind{\alpha_k(u)=\pm1}\,\mathrm d u\ge t\right\} \end{equation} be the inverse local times of $(\alpha_k(t),\xi_k(t))$. Using these, we can define the processes \begin{equation}\label{defetak} \eta_{k,-}(t):=\xi_k(\beta_-(t,k)),\qquad\eta_{k,+}(t):=-\xi_k(\beta_+(t,k)), \end{equation} which are also Markovian. By symmetry, the processes with different signs have the same law. The infinitesimal generator of $\eta_{k,\pm}$ is \[(Hf)(u)=-f'(u)+w(u)\int_u^\infty e^{-\int_u^v w(s)\,\mathrm d s}w(v)(f(v)-f(u))\,\mathrm d v.\]
It is easy to see that the Markov processes $\eta_{k,\pm}$ are ergodic and their common unique stationary distribution is \begin{equation} \rho(\mathrm d u):=\frac1Z e^{-W(u)}\,\mathrm d u\label{defrho} \end{equation} with the notations \eqref{defW}. The stationarity of $\rho$ is not surprising in view of \eqref{defmu}, but it also follows from a straightforward calculation.
The main point is the following \begin{proposition} \begin{enumerate} \item The processes $s\mapsto(\alpha_k(s),\xi_k(s))$, $k\in\Z$ are independent Markov processes with the same law given in \eqref{lawalpha}--\eqref{lawxi}. They start from the initial states $\xi_k(0)=0$ and \[\alpha_k(0)=\left\{ \begin{array}{rl} +1\quad & \mbox{if}\quad k<0,\\ -1\quad & \mbox{if}\quad k\ge0. \end{array}\right.\]
\item The processes $s\mapsto\eta_{k,\pm}(s)$, $k\in\Z$ are independent Markov processes if we consider exactly one of $\eta_{k,+}$ and $\eta_{k,-}$ for each $k$. The initial distributions are \begin{align} \prob{\eta_{k,+}(0)\in A}&=\left\{ \begin{array}{ll} Q(0,A)\quad & \mbox{if}\quad k\ge0,\\ \ind{0\in A}\quad & \mbox{if}\quad k<0, \end{array}\right.\label{initial+}\\[2ex] \prob{\eta_{k,-}(0)\in A}&=\left\{ \begin{array}{ll} \ind{0\in A}\quad & \mbox{if}\quad k\ge0,\\ Q(0,A)\quad & \mbox{if}\quad k<0. \end{array}\right.\label{initial-} \end{align} \end{enumerate} \end{proposition}
\subsection{Technical lemmas}
The lemmas of this subsection describe the behaviour of the auxiliary Markov processes $\eta_{k,\pm}$. Since they all have the same law, we denote them by $\eta$ to keep the notation simple; each statement is then understood to hold for all of the $\eta_{k,\pm}$.
Fix $b\in\R$. Define the stopping times \begin{align} \theta_+&:=\inf\{t>0:\eta(t)\ge b\},\\ \theta_-&:=\inf\{t>0:\eta(t)\le b\}. \end{align}
In our lemmas, $\gamma$ always denotes a positive constant, thought of as a small exponent, and $C$ a finite constant, thought of as large. To simplify the notation, we use the same letters for different constants at different points of the proof. The notation does not emphasize it, but their values depend on $b$.
First, we estimate the exponential moments of $\theta_-$ and $\theta_+$.
\begin{lemma}\label{lemma-momgenf} There are $\gamma>0$ and $C<\infty$ such that, for all $y\ge b$, \begin{equation} \condexpect{\exp(\gamma\theta_-)}{\eta(0)=y}\le\exp(C(y-b)).\label{esttheta-} \end{equation} \end{lemma}
\begin{lemma}\label{lemma+momgenf} There exists $\gamma>0$ such that \begin{equation} \condexpect{\exp(\gamma\theta_+)}{\eta(0)=b}<\infty.\label{esttheta+} \end{equation} \end{lemma}
Denote by $P^t=e^{tH}$ the transition kernel of $\eta$. For any $x\in\R$, define the probability measure \[Q(x,\mathrm d y):=\left\{\begin{array}{lcl}\exp(-\int_x^y w(u)\,\mathrm d u) w(y)\,\mathrm d y & \mbox{if} & y\ge x,\\ 0 & \mbox{if} & y<x,\end{array}\right.\] which is the conditional distribution of the endpoint of a jump of $\eta$ provided that $\eta$ jumps from $x$. We show that the Markov process $\eta$ converges exponentially fast to its stationary distribution $\rho$ defined by \eqref{defrho} if the initial distribution is either the point mass at $0$ or $Q(0,\cdot)$.
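Note that $Q(x,\cdot)$ has total mass $1$ whenever $\int_x^\infty w=\infty$, since its density is the derivative of $y\mapsto-e^{-\int_x^y w}$. The following numerical sketch (ours, with hypothetical names) checks this for concrete rate functions by a trapezoidal quadrature:

```python
import math

def q_total_mass(x, w, y_max, n=4000):
    """Trapezoidal approximation of the mass of Q(x, dy) on [x, y_max],
    where Q(x, dy) = exp(-int_x^y w(u) du) w(y) dy; for rate functions w
    with divergent integral this mass should be close to 1."""
    h = (y_max - x) / n
    inner = 0.0          # running value of int_x^y w
    prev = w(x)          # density at the left endpoint: exp(0) * w(x)
    total = 0.0
    for i in range(1, n + 1):
        y = x + i * h
        inner += (w(y - h) + w(y)) / 2 * h   # extend the inner integral
        cur = math.exp(-inner) * w(y)
        total += (prev + cur) / 2 * h        # outer trapezoid step
        prev = cur
    return total
```

For $w=\exp$, the mass on $[0,6]$ already differs from $1$ only by an exponentially small tail.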
\begin{lemma}\label{lemmaexpconv} There are $C<\infty$ and $\gamma>0$ such that \begin{equation}
\left\|P^t(0,\cdot)-\rho\right\|<C\exp(-\gamma t)\label{expconv} \end{equation} and \begin{equation}
\left\|Q(0,\cdot)P^t-\rho\right\|<C\exp(-\gamma t).\label{nuexpconv} \end{equation} \end{lemma}
We give a bound on the decay of the tails of $P^t(0,\cdot)$ and $Q(0,\cdot)P^t$ uniformly in $t$.
\begin{lemma}\label{lemmaunifexpbound} There are constants $C<\infty$ and $\gamma>0$ such that \begin{equation} P^t(0,(x,\infty))\le Ce^{-\gamma x} \end{equation} and \begin{equation} \left(Q(0,\cdot)P^t\right)\left((x,\infty)\right)\le Ce^{-\gamma x} \end{equation} for all $x\ge0$ and for any $t>0$ uniformly, i.e.\ the values of $C$ and $\gamma$ do not depend on $x$ and $t$. \end{lemma}
We introduce some notation from \citep{toth_95} and cite a theorem, which will be the main ingredient of our proof. Let $A>0$ be the scaling parameter, and let \[S_A(l)=S_A(0)+\sum_{j=1}^l \xi_A(j)\qquad l\in\N\] be a discrete time random walk on $\R_+$ with the law \[\condprob{\xi_A(l)\in\mathrm d x}{S_A(l-1)=y}=\pi_A(\mathrm d x,y,l)\] for each $l\in\N$ with \[\int_{-y}^\infty \pi_A(\mathrm d x,y,l)=1.\] Define the following stopping time of the random walk $S_A(\cdot)$: \[\omega_{[Ar]}=\inf\{l\ge[Ar]:S_A(l)=0\}.\]
We give the following theorem without proof, because this is the continuous analog of Theorem 4 in \citep{toth_95} and its proof is essentially identical to that of the corresponding statement in \citep{toth_95}. \begin{theorem}\label{thmtoth} Suppose that the following conditions hold: \begin{enumerate} \item The step distributions $\pi_A(\cdot,y,l)$ converge exponentially fast as $y\to\infty$ to a common asymptotic distribution $\pi$. That is, for each $l\in\Z$,
\[\int_\R |\pi_A(\mathrm d x,y,l)-\pi(\mathrm d x)|<Ce^{-\gamma y}.\]
\item The asymptotic distribution is symmetric: $\pi(-\mathrm d x)=\pi(\mathrm d x)$, and its moments are finite, in particular, denote \begin{equation} \sigma^2:=\int_\R x^2\pi(\mathrm d x).\label{defsigma} \end{equation}
\item Uniform decay of the step distributions: for each $l\in\Z$, \[\pi_A((x,\infty),y,l)\le Ce^{-\gamma x}.\]
\item Uniform non-trapping condition: The random walk is not trapped in a bounded domain or in a domain away from the origin. That is, there is $\delta>0$ such that \begin{equation} \int_\delta^\infty \pi_A(\mathrm d x,y,l)>\delta\quad\mbox{or}\quad \int_{x=\delta}^\infty \int_{z=-\infty}^\infty \pi_A(\mathrm d x-z,y+z,l+1)\pi_A(\mathrm d z,y,l)>\delta \label{nontrapping} \end{equation} and \[\int_{-\infty}^{-(\delta\wedge y)} \pi_A(\mathrm d x,y,l)>\delta.\] \end{enumerate} Under these conditions, if \[\frac{S_A(0)}{\sigma\sqrt A}\to h,\] then \begin{equation} \left(\frac{\omega_{[Ar]}}A,\frac{S_A([Ay])}{\sigma\sqrt A}:0\le y\le
\frac{\omega_{[Ar]}}A\right)\Longrightarrow\left(\omega_r^W,|W_y|:0\le y\le\omega_r^W\bigm||W_0|=h\right) \end{equation} in $\R_+\times D[0,\infty)$ as $A\to\infty$ where \[\omega_r^W=\inf\{s>0:W_s=0\}\] with a standard Brownian motion $W$ and $\sigma$ is given by \eqref{defsigma}. \end{theorem}
\subsection{Proof of Theorem \ref{thmlimLambda}}
Using the auxiliary Markov processes introduced in Subsection \ref{basic}, we can build up the local time sequence as a random walk. This Ray\,--\,Knight-type construction is the main idea of the following proof.
\begin{proof}[Proof of Theorem \ref{thmlimLambda}] Fix $j\in\Z$ and $r\in\R_+$. Using the definition \eqref{Lambdadef} and the construction of $\eta_{k,\pm}$ \eqref{deftau}--\eqref{defetak}, we can formulate the following recursion for $\Lambda_{j,r}$: \begin{equation}\begin{aligned} &\Lambda_{j,r}(j)=r,\\ &\Lambda_{j,r}(k+1)=\Lambda_{j,r}(k)+\eta_{k,-}(\Lambda_{j,r}(k)) \qquad & \mbox{if}\quad k\ge j,\\ &\Lambda_{j,r}(k-1)=\Lambda_{j,r}(k)+\eta_{k-1,+}(\Lambda_{j,r}(k)) \qquad & \mbox{if}\quad k\le j. \end{aligned}\label{Lambdawalk}\end{equation}
It means that the processes $(\Lambda_{j,r}(j-k))_{k=0}^\infty$ and $(\Lambda_{j,r}(j+k))_{k=0}^\infty$ are random walks on $\R_+$: they start from $\Lambda_{j,r}(j)=r$, and the distribution of the next step always depends on the actual position of the walker. In order to apply Theorem \ref{thmtoth}, we rewrite \eqref{Lambdawalk}: \begin{align*} \Lambda_{j,r}(j+k)&=r+\sum_{i=0}^{k-1} \eta_{j+i,-}(\Lambda_{j,r}(j+i)) & k&=0,1,2,\dots,\\ \Lambda_{j,r}(j-k)&=r+\sum_{i=0}^{k-1} \eta_{j-i-1,+}(\Lambda_{j,r}(j-i)) & k&=0,1,2,\dots. \end{align*} The step distributions of these random walks are \[\pi_A(\mathrm d x,y,l)=\left\{\begin{array}{l} P^y(0,\mathrm d x)\\[1ex] Q(0,\cdot)P^y(\mathrm d x)\end{array}\right.\] according to \eqref{initial+}--\eqref{initial-}.
The exponential closeness of the step distribution to the stationary distribution is shown by Lemma \ref{lemmaexpconv}. One can see from \eqref{defrho} and \eqref{defW} that the distribution $\rho$ is symmetric and it has a non-zero finite variance. Lemma \ref{lemmaunifexpbound} gives a uniform exponential bound on the tail of the distributions $P^t(0,\cdot)$ and $Q(0,\cdot)P^t$.
Since we only consider $[\lambda_{j,r},\rho_{j,r}]$, that is, the time interval until $\Lambda_{j,r}$ hits $0$, we can force the walk to jump to $1$ in the next step after hitting $0$, which does not influence our investigations. It means that $\pi_A(\{1\},0,l)=1$ for $l\in\Z$, and with this, the non-trapping condition \eqref{nontrapping} is fulfilled. Therefore, Theorem \ref{thmtoth} is applicable to the forward and the backward walks, and Theorem \ref{thmlimLambda} is proved. \end{proof}
\section{The position of the random walker}\label{Xconv}
We turn to the proof of Theorem \ref{thmXconv}. First, we introduce the rescaled distribution \[\varphi_A(t,x):=A^{2/3}\prob{X(At)=\lfloor A^{2/3}x\rfloor}\] where $t\in\R_+$ and $x\in\R$. We define the Laplace transform of $\varphi_A$ with \begin{equation} \hat\varphi_A(s,x)=s\int_0^\infty e^{-st}\varphi_A(t,x)\,\mathrm d t,\label{deffiAhat} \end{equation} which is exactly $A^{2/3}\prob{X(\theta_{s/A})=\lfloor A^{2/3}x\rfloor}$, the rescaled distribution of the position of the random walker at an independent exponential random time $\theta_{s/A}$ with mean $A/s$.
We denote by $\hat\omega$ the Laplace transform of $\omega$ defined in \eqref{defomega} and rewrite \eqref{deffihat}: \begin{gather*} \hat\omega(s,x,h):=s\int_0^\infty e^{-st}\omega(t,x,h)\,\mathrm d t =s\,\expect{e^{-s\mathcal{T}_{x,h}}},\\ \hat\varphi(s,x)=s\int_0^\infty e^{-st}\varphi(t,x)\,\mathrm d t =\int_0^\infty\hat\omega(s,x,h)\,\mathrm d h. \end{gather*} Note that the scaling relations \begin{align} \alpha\omega(\alpha t,\alpha^{2/3}x,\alpha^{1/3}h)&=\omega(t,x,h),\notag\\ \alpha^{2/3}\hat\varphi(\alpha^{-1}s,\alpha^{2/3}x)&=\hat\varphi(s,x)\label{fiscaling} \end{align} hold because of the scaling property of the Brownian motion.
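We sketch how the scaling relations follow from Brownian scaling. Rescaling space by $\alpha^{2/3}$ rescales the reflected Brownian motion $W$ by $\alpha^{1/3}$, hence \begin{align*} \mathcal{T}_{\alpha^{2/3}x,\,\alpha^{1/3}h} =\int_{\mathfrak l_{\alpha^{2/3}x,\,\alpha^{1/3}h}}^{\mathfrak r_{\alpha^{2/3}x,\,\alpha^{1/3}h}} W_{\alpha^{2/3}x,\,\alpha^{1/3}h}(y)\,\mathrm d y \;\stackrel{\mathrm d}{=}\; \alpha^{2/3}\cdot\alpha^{1/3}\int_{\mathfrak l_{x,h}}^{\mathfrak r_{x,h}} W_{x,h}(y)\,\mathrm d y =\alpha\,\mathcal{T}_{x,h}. \end{align*} Differentiating $\prob{\mathcal{T}_{\alpha^{2/3}x,\alpha^{1/3}h}<\alpha t}=\prob{\mathcal{T}_{x,h}<t}$ with respect to $t$ gives the first relation; the second one follows by integrating over $h$ and taking Laplace transforms.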
\begin{proof}[Proof of Theorem \ref{thmXconv}] The first observation for the proof is the identity \begin{equation} \prob{X(t)=k}=\int_{h=0}^\infty \prob{T_{k,h}\in(t,t+\mathrm d h)},\label{Ptransf} \end{equation} which follows from \eqref{defT}. If we insert it into the definition of $\hat\varphi_A$ \eqref{deffiAhat}, then we get \begin{equation}\label{fihatcomp}\begin{split} \hat\varphi_A(s,x)&=sA^{-1/3}\int_0^\infty e^{-st/A}\prob{X(t)=\lfloor A^{2/3}x\rfloor}\mathrm d t\\ &=sA^{-1/3}\int_0^\infty e^{-st/A}\int_{h=0}^\infty\prob{T_{\lfloor A^{2/3}x\rfloor,h}\in(t,t+\mathrm d h)}\mathrm d t\\ &=sA^{-1/3}\int_0^\infty \expect{e^{-sT_{\lfloor A^{2/3}x\rfloor,h}/A}}\mathrm d h \end{split}\end{equation} using \eqref{Ptransf}. Defining \[\hat\omega_A(s,x,h)=s\expect{\exp(-sT_{\lfloor A^{2/3}x\rfloor,\lfloor A^{1/3}\sigma h\rfloor}/(\sigma A))}\] gives us \begin{equation}\label{fihatfinal} \hat\varphi_A(s,x)=\int_0^\infty \hat\omega_A(\sigma s,x,h)\,\mathrm d h \end{equation} from \eqref{fihatcomp}. From Corollary \ref{corTlim}, it follows that, for any $s>0$, $x\ge0$ and $h>0$, \[\hat\omega_A(s,x,h)\to\hat\omega(s,x,h).\]
Applying Fatou's lemma in \eqref{fihatfinal}, one gets \begin{equation} \liminf_{A\to\infty}\hat\varphi_A(s,x) \ge\int_0^\infty \hat\omega(\sigma s,x,h)\,\mathrm d h =\sigma^{2/3}\hat\varphi(s,\sigma^{2/3}x),\label{liminffihat} \end{equation} where we used \eqref{fiscaling} in the last equation. A consequence of \eqref{intfi}, \eqref{liminffihat} integrated and a second application of Fatou's lemma yield \[1=\int_{-\infty}^\infty \hat\varphi(s,x)\,\mathrm d x \le \int_{-\infty}^\infty \liminf_{A\to\infty}\hat\varphi_A(s,x)\,\mathrm d x \le \liminf_{A\to\infty} \int_{-\infty}^\infty \hat\varphi_A(s,x)\,\mathrm d x=1,\] which gives that, for fixed $s\in\R_+$, $\hat\varphi_A(s,x)\to\hat\varphi(s,x)$ holds for almost all $x\in\R$, indeed. \end{proof}
\section{Proof of lemmas}\label{proofs}
\subsection{Exponential moments of the return times}
\begin{proof}[Proof of Lemma \ref{lemma-momgenf}] Consider the Markov process $\zeta(t)$ which decreases at constant speed $1$, has upward jumps with homogeneous rate $w(-b)$, and whose jump sizes are distributed as those of a jump of $\eta$ starting from $b$. In other words, the infinitesimal generator of $\zeta$ is \[(Zf)(u)=-f'(u)+w(-b)\int_0^\infty e^{-\int_0^v w(b+s)\,\mathrm d s}w(b+v) (f(u+v)-f(u))\,\mathrm d v.\]
Note that, by the monotonicity of $w$, $\eta$ and $\zeta$ can be coupled in such a way that they start from the same position and, as long as $\eta\ge b$ holds, $\zeta\ge\eta$ is true almost surely. It means that it suffices to prove \eqref{esttheta-} with \begin{equation} \theta'_-:=\inf\{t>0:\zeta(t)\le b\}\label{theta'-def} \end{equation} instead of $\theta_-$. But the transitions of $\zeta$ are homogeneous in space, which yields that \eqref{esttheta-} follows from the finiteness of \begin{equation} \condexpect{\exp(\gamma\theta'_-)}{\zeta(0)=b+1}.\label{theta'-} \end{equation}
In addition to this, $\zeta$ is a supermartingale with stationary increments, which gives us \[\expect{\zeta(t)}=b+1-ct\] with some $c>0$, if the initial condition is $\zeta(0)=b+1$. For $\alpha\in\left(-\infty,\lim_{u\to\infty}\frac{W(u)}u\right)$ (cf.\ \eqref{Z<infty}), the quantity \[\log\expect{e^{\alpha(\zeta(t)-\zeta(0))}}\] is finite, and negative for some $\alpha>0$. Hence, the martingale \begin{equation} M(t)=\exp\left(\alpha(\zeta(t)-\zeta(0))-t\log\expect{e^{\alpha(\zeta(1)-\zeta(0))}}\right) \end{equation} stopped at $\theta'_-$ gives that the expectation in \eqref{theta'-} is finite with $\gamma=-\log\expect{e^{\alpha(\zeta(1)-\zeta(0))}}$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lemma+momgenf}] First, we prove the statement for negative $b$; more precisely, for $b$ such that $w(-b)>w(b)$. In this case, define the homogeneous process $\kappa$ with $\kappa(0)=b$ and generator \[Kf(u)=-f'(u)+w(-b)\int_0^\infty e^{-w(b)s}w(b)(f(u+s)-f(u))\,\mathrm d s.\] It is easy to see that there is a coupling of $\eta$ and $\kappa$ for which $\eta\ge\kappa$ holds as long as $\eta\le b$. Therefore, it is enough to show \eqref{esttheta+} with \[\theta'_+:=\inf\{t>0:\kappa(t)\ge b\}\] instead of $\theta_+$.
But $\kappa$ is a submartingale with stationary increments, for which \[\log\expect{e^{\alpha(\kappa(t)-\kappa(0))}}\] is finite if $\alpha\in(-\infty,w(b))$, and negative for some $\alpha<0$. The statement follows from the same idea as in the proof of Lemma \ref{lemma-momgenf}.
Now, we prove the lemma for the remaining case. Fix $b$ for which we already know \eqref{esttheta+}, and choose $b_1>b$ arbitrarily. We start $\eta$ from $\eta(0)=b_1$, and we decompose its trajectory into independent excursions above and below $b$, alternatingly. Let \begin{equation} Y_0:=\inf\{t\ge0:\eta(t)\le b\}, \end{equation} and by induction, define \begin{align} X_k&:=\inf\left\{t>0:\eta\left(\sum_{j=1}^{k-1} X_j+\sum_{j=0}^{k-1} Y_j+t\right)\ge b\right\}, \label{defX}\\ Y_k&:=\inf\left\{t\ge0:\eta\left(\sum_{j=1}^k X_j+\sum_{j=0}^{k-1} Y_j+t \right)\le b\right\} \label{defY} \end{align} if $k=1,2,\dots$. Note that $(X_k,Y_k)_{k=1,2,\dots}$ is an i.i.d.\ sequence of pairs of random variables. Finally, let \begin{equation} Z_k:=X_k+Y_k\qquad k=1,2,\dots.\label{defZ} \end{equation}
With this definition, the $Z_k$'s are the lengths of the epochs in a renewal process. Lemma \ref{lemma-momgenf} tells us that $Y_0$ has finite exponential moment. The same holds for $X_1,X_2,\dots$ because of the first part of this proof for the case of small $b$. Note that the distribution of the upper endpoint of a jump of $\eta$ conditionally given that $\eta$ jumps above $b$ is exactly $Q(b,\cdot)$. Since $Q(b,\cdot)$ decays exponentially fast, we can use Lemma \ref{lemma-momgenf} again to conclude that $\expect{\exp(\gamma Y_k)}<\infty$ for $\gamma>0$ small enough. Define \begin{equation} \nu_t:=\max\left\{n\ge0:\sum_{k=1}^n Z_k\le t\right\}\label{defnut} \end{equation} in the usual way. The following decomposition is true: \begin{equation}\begin{split} &\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}t<\varepsilon}\\ &\qquad\qquad\le\prob{\frac{\nu_t+1}t<\frac12\frac1{\expect{Z_1}}} +\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}t<\varepsilon, \frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}. \end{split}\label{X/tdecomp}\end{equation}
Lemma 4.1 of \citep{vandenberg_toth_91} gives a large deviation principle for the renewal process $\nu_t$, hence \begin{equation} \prob{\frac{\nu_t+1}t<\frac12\frac1{\expect{Z_1}}} \le\prob{\frac{\nu_t}t<\frac12\frac1{\expect{Z_1}}}<e^{-\gamma t} \end{equation} with some $\gamma>0$. For the second term on the right-hand side in \eqref{X/tdecomp}, \begin{equation}\begin{split} &\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}t<\varepsilon, \frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}\\ &\hspace*{7em} =\prob{\frac{\sum_{k=1}^{\nu_t+1} Y_k}{\nu_t+1} <\varepsilon\frac t{\nu_t+1}, \frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}\\ &\hspace*{7em}\le\prob{\frac{\sum_{k=1}^{\nu_t+1}Y_k}{\nu_t+1} <2\varepsilon\expect{Z_1}, \frac{\nu_t+1}t\ge\frac12\frac1{\expect{Z_1}}}\\ &\hspace*{7em}\le\max_{n\ge\frac12\frac1{\expect{Z_1}}t} \prob{\frac{\sum_{k=1}^n Y_k}n<2\varepsilon\expect{Z_1}}, \end{split}\label{X/test}\end{equation} which is exponentially small for some $\varepsilon>0$ by standard large deviation theory; the same holds for the probability estimated in \eqref{X/tdecomp}, which means that $\eta$ spends at least $\varepsilon t$ time above $b$ with overwhelming probability.
The inequality \[\condprob{\theta_+>t}{\eta(0)=b_1} \le\prob{\sum_{k=1}^{\nu_t+1}Y_k<\varepsilon t} +\Condprob{\theta_+>t}{\eta(0)=b_1,\sum_{k=1}^{\nu_t+1}Y_k>\varepsilon t}\] is obvious. The first term on the right-hand side is exponentially small by \eqref{X/tdecomp}--\eqref{X/test}. In order to bound the second term, denote by $J(t)$ the number of jumps when $\eta(s)\ge b$. The condition $\sum_{k=1}^{\nu_t+1}Y_k>\varepsilon t$ means that this is the case in an at least $\varepsilon$ portion of $[0,t]$. The rate of these jumps is at least $w(-b)$ by the monotonicity of $w$. Note that $J(t)$ stochastically dominates a Poisson random variable $L(t)$ with mean $w(-b)\varepsilon t$. Hence, \begin{equation} \prob{J(t)<\frac12w(-b)\varepsilon t}\le\prob{L(t)<\frac12w(-b)\varepsilon t}<e^{-\gamma t}\label{theta+tail} \end{equation} for $t$ large enough with some $\gamma>0$ by a standard large deviation estimate.
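The large deviation estimate used here is the standard Chernoff bound for the Poisson distribution, which we record for completeness: if $L(t)$ is Poisson with mean $m_t$ growing linearly in $t$, then for any fixed $\theta>0$, \[\prob{L(t)<\tfrac12 m_t}\le e^{\theta m_t/2}\,\expect{e^{-\theta L(t)}} =\exp\left(m_t\left(\tfrac\theta2+e^{-\theta}-1\right)\right),\] and the exponent is negative, e.g.\ for $\theta=1$, since $\frac12+e^{-1}-1<0$.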
Note that $Q$ is also monotone in the sense that \[\int_{b_1}^\infty Q(x_1,\mathrm d y)<\int_{b_1}^\infty Q(x_2,\mathrm d y)\] if $x_1<x_2$. Therefore, a jump of $\eta$, which starts above $b$, exits $(-\infty,b_1]$ with probability at least \[r=\int_{b_1}^\infty Q(b,\mathrm d y)>0.\] Finally, \[\begin{split} &\Condprob{\theta_+>t}{\eta(0)=b_1,\sum_{k=1}^{\nu_t+1} Y_k>\varepsilon t}\\ &\quad\le\prob{J(t)<\frac12w(-b)\varepsilon t} +\Condprob{\theta_+>t} {J(t)\ge\frac12w(-b)\varepsilon t,\eta(0)=b_1,\sum_{k=1}^{\nu_t+1} Y_k>\varepsilon t}\\ &\quad\le e^{-\gamma t}+(1-r)^{\frac12w(-b)\varepsilon t} \end{split}\] by \eqref{theta+tail}, which is an exponential decay, as required. \end{proof}
\subsection{Exponential convergence to stationarity}
\begin{proof}[Proof of Lemma \ref{lemmaexpconv}] First, we prove \eqref{expconv}. We couple two copies of $\eta$, say $\eta_1$ and $\eta_2$. Suppose that \[\eta_1(0)=0\qquad\mbox{and}\qquad\prob{\eta_2(0)\in A}=\rho(A).\] Their distributions after time $t$ are obviously $P^t(0,\cdot)$ and $\rho$, respectively. We use the standard coupling lemma to estimate their total variation distance:
\[\left\|P^t(0,\cdot)-\rho\right\|\le\prob{T>t}\] where $T$ is the random time when the two processes merge.
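The coupling lemma here is elementary: for any measurable set $A$, \[\left|P^t(0,A)-\rho(A)\right| =\left|\prob{\eta_1(t)\in A}-\prob{\eta_2(t)\in A}\right| \le\prob{\eta_1(t)\ne\eta_2(t)}\le\prob{T>t},\] because the two copies move together from the merging time $T$ on.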
Assume that $\eta_1=x_1$ and $\eta_2=x_2$ with fixed numbers $x_1,x_2\in\R$. Then there is a coupling where the rate of merge is \[c(x_1,x_2):=w(-x_1\vee x_2)\exp\left(-\int_{x_1\wedge x_2}^{x_1\vee x_2} w(z)\,\mathrm d z\right).\] Consider the interval $I_b=(-b,b)$ where $b$ will be chosen later appropriately. If $\eta_1=x_1$ and $\eta_2=x_2$ where $x_1,x_2\in I_b$, then for the rate of merge \begin{equation} c(x_1,x_2)\ge w(-b)\exp\left(-\int_{-b}^b w(z)\,\mathrm d z\right)=:\beta(b)>0 \label{rateofmerge} \end{equation} holds if $w(x)>0$ for all $x\in\R$.
Let $\vartheta$ be the time spent in $I_b$, more precisely, \begin{align*}
\vartheta_i(t)&:=|\{0\le s\le t:\eta_i(s)\in I_b\}|\qquad i=1,2,\\
\vartheta_{12}(t)&:=|\{0\le s\le t:\eta_1(s)\in I_b,\eta_2(s)\in I_b\}|. \end{align*} The estimate \[\prob{T>t}\le\prob{\vartheta_{12}(t)<\frac t2} +\condprob{T>t}{\vartheta_{12}(t)\ge\frac t2}\] is clearly true. Note that \[\Condprob{T>t}{\vartheta_{12}(t)\ge\frac t2}\le\exp\left(-\frac12\beta(b)t\right)\] follows from \eqref{rateofmerge}.
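Indeed, conditionally on the pair of trajectories up to time $t$, the probability that the merging clock has not rung is \[\exp\left(-\int_0^t c(\eta_1(s),\eta_2(s))\,\mathrm d s\right) \le\exp\left(-\beta(b)\vartheta_{12}(t)\right)\] by \eqref{rateofmerge}, which is at most $\exp(-\frac12\beta(b)t)$ on the event $\{\vartheta_{12}(t)\ge\frac t2\}$.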
By the inclusion relation \begin{equation} \left\{\vartheta_{12}(t)<\frac t2\right\}\subset \left\{\vartheta_1(t)<\frac34 t\right\}\cup\left\{\vartheta_2(t)<\frac34 t\right\},\label{inclusion} \end{equation} it suffices to prove that the tails of $\prob{\vartheta_i(t)<\frac34 t}$ decay exponentially for $i=1,2$, if $b$ is large enough.
We will show that \begin{equation}
\prob{\frac{|\{0\le s\le t:\eta(s)<b\}|}t<\frac78}\le e^{-\gamma t}.\label{7/8} \end{equation} A similar statement can be proved for the time spent above $-b$; therefore another inclusion relation like \eqref{inclusion} gives the lemma.
First, we verify that the first hitting time of level $b$, \[\inf\{s>0:\eta_i(s)=b\},\] has a finite exponential moment; hence it is negligible with overwhelming probability, and we can suppose that $\eta_i(0)=b$. Indeed, for any fixed $\varepsilon>0$, the measures $\rho$ and $Q(b,\cdot)$ assign exponentially small weight to the complement of the interval $[-\varepsilon t,\varepsilon t]$ as $t\to\infty$. From now on, we suppress the subscript of $\eta_i$, forget about the initial values, and assume only that $\eta(0)\in[-\varepsilon t,\varepsilon t]$.
If $\eta(0)\in[b,\varepsilon t]$, then recall the proof of Lemma \ref{lemma-momgenf}. There, we could majorize $\eta$ by a homogeneous process $\zeta$. If we define \[a:=\condexpect{\theta'_-}{\zeta(0)=b+1}\] with the notation \eqref{theta'-def}, which is finite by Lemma \ref{lemma-momgenf}, then from a large deviation principle, \begin{equation} \condprob{\theta_-(t)>2a\varepsilon t}{\eta(0)\in[b,\varepsilon t]} \le\condprob{\theta'_-(t)>2a\varepsilon t}{\eta(0)\in[b,\varepsilon t]} <e^{-\gamma t}\label{theta-largedev} \end{equation} with some $\gamma>0$.
If $\eta(0)\in[-\varepsilon t,b]$, then we can neglect that piece of the trajectory of $\eta$ which falls into the interval $[0,\theta_+]$, because without this, $\vartheta(t)$ decreases and the bound on \eqref{7/8} becomes stronger. Since $\eta$ jumps at $\theta_+$ a.s.\ and the distribution of $\eta(\theta_+)$ is $Q(b,\cdot)$, we can use the previous observations concerning the case $\eta(0)\in[b,\varepsilon t]$.
Using \eqref{theta-largedev}, it is enough to prove that
\[\prob{\frac{|\{0\le s\le t:\eta(s)<b\}|}t<\frac78+2a\varepsilon}\le e^{-\gamma t}\] with the initial condition $\eta(0)=b$ where the value of $b$ is not specified yet. We introduce $X_k,Y_k,Z_k$ and $\nu_t$ as in \eqref{defX}--\eqref{defnut} with $Y_0\equiv0$. The only difference is that here we want to ensure a given portion of time spent below $b$ with high probability with the appropriate choice of $b$. With the same idea as in the proof of Lemma \ref{lemma+momgenf} in \eqref{X/tdecomp}--\eqref{X/test}, we can show that \[\prob{\frac{\sum_{k=1}^{\nu_t+1} X_k}t\le\frac78+2a\varepsilon}\] is exponentially small by large deviation theory if we choose $b$ large enough to set $\expect{X_1}/\expect{Z_1}$ (the expected portion of time spent below $b$) sufficiently close to $1$. With this, the proof of \eqref{expconv} is complete, that of \eqref{nuexpconv} is similar. \end{proof}
\subsection{Decay of the transition kernel}
\begin{proof}[Proof of Lemma \ref{lemmaunifexpbound}] We return to the idea that the partial sums of $Z_k$'s form a renewal process. Remember the definitions \eqref{defX}--\eqref{defnut}. This proof relies on the estimate
\[|\eta(t)|\le Z_{\nu_t+1},\] which is true, because the process $\eta$ can decrease with speed at most $1$. Therefore, it suffices to prove the exponential decay of the tail of $Z_{\nu_t+1}$.
Define the \emph{renewal measure} by \[U(A):=\sum_{n=0}^\infty \prob{\sum_{k=1}^n Z_k\in A}\] for any $A\subset\R$. We consider the \emph{age} and the \emph{residual waiting time} \begin{align*} A_t&:=t-\sum_{k=1}^{\nu_t} Z_k,\\ R_t&:=\sum_{k=1}^{\nu_t+1} Z_k-t \end{align*} separately. For the distribution of the former, $H(t,x):=\prob{A_t>x}$, the renewal equation \begin{equation} H(t,x)=(1-F(t))\ind{t>x}+\int_0^t H(t-s,x)\,\mathrm d F(s)\label{reneq} \end{equation} holds, where $F(x)=\prob{Z_1<x}$. Equation \eqref{reneq} can be deduced by conditioning on the time of the first renewal, $Z_1$. From Theorem (4.8) in \citep{durrett_95}, it follows that \begin{equation} H(t,x)=\int_0^t(1-F(t-s))\ind{t-s>x}U(\mathrm d s).\label{formH} \end{equation}
As explained after \eqref{defZ}, Lemma \ref{lemma-momgenf} and Lemma \ref{lemma+momgenf} with $b=0$ together imply that $1-F(x)\le C e^{-\gamma x}$ with some $C<\infty$ and $\gamma>0$. On the other hand, \[U([k,k+1])\le U([0,1])\] is true, because, in the worst case, there is a renewal at time $k$. Otherwise, the distribution of renewals in $[k,k+1]$ can be obtained by shifting the renewals in $[0,1]$ by $R_k$. We can see from \eqref{formH} by splitting the integral into segments of unit length that \[H(t,x)\le U([0,1])\sum_{k=\lfloor x\rfloor}^\infty C e^{-\gamma k},\] which is uniform in $t>0$.
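Spelling out the splitting: if $t-s\in[k,k+1]$, then $1-F(t-s)\le Ce^{-\gamma k}$, while the indicator $\ind{t-s>x}$ restricts the sum to $k\ge\lfloor x\rfloor$, so \[H(t,x)\le\sum_{k=\lfloor x\rfloor}^\infty Ce^{-\gamma k}\,U([t-k-1,t-k]) \le U([0,1])\sum_{k=\lfloor x\rfloor}^\infty Ce^{-\gamma k},\] where the last step uses $U([a,a+1])\le U([0,1])$ for every $a\ge0$, which holds by the same argument as above.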
With the equation \[\{R_t>x\}=\{A_{t+x}\ge x\}=\{\mbox{no renewal in } (t,t+x]\},\] a similar uniform exponential bound can be deduced for the tail $\prob{R_t>x}$. Since $Z_{\nu_t+1}=A_t+R_t$, the proof is complete. \end{proof}
\hbox{ \phantom{M} \hskip7cm \vbox{\hsize8cm {\noindent Address of authors:\\ {\sc Institute of Mathematics\\ Budapest University of Technology \\ Egry J\'ozsef u.\ 1\\ H-1111 Budapest, Hungary}\\[10pt] e-mail:\\ {\tt balint{@}math.bme.hu}\\ {\tt vetob{@}math.bme.hu} }}}
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
Let $(\Omega, g)$ be a real analytic Riemannian manifold with real analytic boundary $\partial \Omega$. Let $\psi_{\lambda}$ be an eigenfunction of the Dirichlet-to-Neumann operator $\Lambda$ of $(\Omega, g, \partial \Omega)$ of eigenvalue $\lambda$. Let $\mathcal{N}_{\lambda}$ be its nodal set. Then, there exists a constant $C > 0$ depending only on
$(\Omega, g, \partial \Omega)$ so that $$\mathcal{H}^{n-2} (\mathcal{N}_{\lambda}) \leq C \lambda.$$ This proves a conjecture of F. H. Lin and K. Bellova. \end{abstract}
This article is concerned with the Hausdorff $\mathcal{H}^{n-2}$ (surface) measure of the nodal sets $$\mathcal{N}_{\lambda} = \{x \in \partial \Omega: \psi_{\lambda} (x) = 0\} \subset \partial \Omega$$ of Steklov eigenfunctions of eigenvalue $\lambda$ of a domain $\Omega \subset {\mathbb R}^n$ in the real analytic case. The Steklov eigenvalue problem on a domain $\Omega$ is to find solutions of \begin{equation} \label{SP} \left\{ \begin{array}{l} \Delta u(x) = 0, \;\; x \in \Omega, \\ \\ \frac{\partial u}{\partial \nu}(x) = - \lambda u(x), \;\; x \in \partial \Omega. \end{array} \right. \end{equation} It is often assumed that $\Omega \subset {\mathbb R}^n$ is a bounded $C^2$ domain with Euclidean metric, but the problem may be posed on a bounded domain in any Riemannian manifold. The eigenvalue problem may be reduced to the boundary, and $\psi_{\lambda}$ is an eigenfunction
\begin{equation} \Lambda \psi_{\lambda} = \lambda \psi_{\lambda} \end{equation} of the Dirichlet-to-Neumann operator $$\Lambda f = \frac{\partial u}{\partial \nu}(x) |_{\partial \Omega}. \;\;$$ Here, $u$ is the harmonic extension of $f$, $$\left\{ \begin{array}{l} \Delta u(x) = 0, \;\; x \in \Omega, \\ \\ u(x)=f(x), \;\; x \in \partial \Omega. \end{array} \right..$$
$\Lambda$ is self-adjoint on $L^2(\partial \Omega, d S)$ and there exists an orthonormal basis $\{\psi_j\}$ of eigenfunctions $$\Lambda \psi_j = \lambda_j \psi_j, \;\;\; \psi_j \in C^{\infty}(\partial \Omega), \;\; \int_{\partial \Omega} \psi_j \psi_k d S = \delta_{jk},$$
where $d S$ is the surface measure. We order the eigenvalues in ascending order $0=\lambda_0<\lambda_1\le \lambda_2\le\cdots$, counted with multiplicity.
In a recent article, Bellova-Lin \cite{BL} proved that $$\mathcal{H}^{n-2} (\mathcal{N}_{\lambda}) \leq C \lambda^{6}$$ when $\Omega\subset {\mathbb R}^n$ is a bounded {\it real analytic} Euclidean domain. They suggest that the optimal result is $\mathcal{H}^{n-2} (\mathcal{N}_{\lambda}) \leq C \lambda.$ The purpose of this article is to prove this upper bound for bounded real analytic domains in general real analytic Riemannian manifolds.
\begin{theo} \label{NODALBOUND} Let $(\Omega, g)$ be a real analytic Riemannian manifold with real analytic boundary $\partial \Omega$. Let $\psi_{\lambda}$ be an eigenfunction of the Dirichlet-to-Neumann operator $\Lambda$ of $(\Omega, g, \partial \Omega)$ of eigenvalue $\lambda$, and
$\mathcal{N}_{\lambda}$ be its nodal set as above. Then, there exists a constant $C > 0$ depending only on
$(\Omega, g, \partial \Omega)$ so that $$\mathcal{H}^{n-2} (\mathcal{N}_{\lambda}) \leq C \lambda.$$
\end{theo} It is not hard to find examples of $(\Omega, g, \partial \Omega)$ and $\psi_{\lambda}$ where the upper bound is achieved, for instance on a
hemisphere of a round sphere. But it is not clear that it is attained by a sequence of Steklov eigenfunctions on every $(\Omega, g, \partial \Omega)$, or more stringently that it is attained by every sequence of eigenfunctions. In the setting of real analytic Riemannian manifolds $(M,g)$, it is proved in \cite{DF} that there exists $ C> 0$ depending only on the metric $g$ so that $\mathcal{H}^{n-1}(\mathcal{N}_{\lambda}) \geq C \lambda$. Since $\dim \partial \Omega = n-1$, the analogous lower bound for the real analytic Steklov problem would be $\mathcal{H}^{n-2} (\mathcal{N}_{\lambda}) \geq C \lambda$, where $C$ depends only on $(\Omega, g, \partial \Omega)$. However, the key existence result for $\Delta$-eigenfunctions of eigenvalue $\lambda^2$, that every ball of radius $\frac{C}{\lambda}$ contains a zero of $\varphi_{\lambda}$, does not seem to be known for the Steklov problem \eqref{SP}. We believe it is possible to prove good lower bounds by the methods of this article, and plan to investigate lower bounds in a subsequent article.
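To illustrate, on the unit disc $\Omega=\{x\in{\mathbb R}^2: |x|<1\}$ the Steklov eigenfunctions are classical: the harmonic extensions of $\cos k\theta$ and $\sin k\theta$ are $r^k\cos k\theta$ and $r^k \sin k \theta$, and $$\frac{\partial}{\partial r} r^k\cos k\theta \Big|_{r=1} = k \cos k\theta,$$ so that (with the outward normal convention) $\lambda_k = k$ with multiplicity two for $k \geq 1$. The nodal set of $\cos k\theta$ on $S^1$ consists of $2k$ points, so $\mathcal{H}^{0}(\mathcal{N}_{\lambda_k}) = 2 \lambda_k$, saturating the linear upper bound in dimension $n=2$.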
\subsection{Outline of the proof of Theorem \ref{NODALBOUND}}
The key to proving the sharp upper bound in the generality of Theorem \ref{NODALBOUND} is to use the wave group \begin{equation} \label{WG} U(t ) = e^{it \Lambda} : L^2(\partial \Omega) \to L^2(\partial \Omega) \end{equation} generated by $\Lambda$. $\Lambda$ is a positive elliptic self-adjoint pseudo-differential operator of order one, and its wave group has been constructed as a Fourier integral operator in \cite{Hor,DG}. As in \cite{Z} we study nodal sets by analytically continuing the Schwartz kernel of the wave group to imaginary time $t + i \tau$ with $ \tau > 0$, and to the complexification $(\partial \Omega)_{{\mathbb C}} $ of $\partial \Omega$. The analytic continuation in time and in the first space variable defines the Poisson wave kernel\begin{equation} \label{PWG} U_{{\mathbb C}}(t + i \tau, \zeta, y) = e^{i (t + i \tau) \Lambda} (\zeta, y) : L^2(\partial \Omega) \to L^2((\partial \Omega)_{{\mathbb C}}). \end{equation} As discussed below,
$\Lambda$ is an analytic pseudo-differential operator on $\partial \Omega$ when $(\Omega, \partial \Omega, g)$ is real analytic, and \eqref{PWG} is a Fourier integral operator with complex phase. (See \cite{Bou2,Sj} for background on analytic pseudo-differential operators).
In the real analytic case, the Steklov eigenfunctions are real analytic on $\partial \Omega$ and have complex analytic extensions to $(\partial \Omega)_{{\mathbb C}}$. We then study their complex nodal sets \begin{equation} \label{CXN} \mathcal{N}_{\lambda}^{{\mathbb C}} = \{ \zeta \in (\partial \Omega)_{{\mathbb C}}: \psi^{{\mathbb C}}_{\lambda}(\zeta) = 0\}. \end{equation} To prove Theorem \ref{NODALBOUND}, we use Crofton's formula and a multi-dimensional Jensen's formula to give an upper bound for $\mathcal{H}^{n-2}(\mathcal{N}_{\lambda})$ in terms of the integral geometry of $\mathcal{N}_{\lambda}^{{\mathbb C}}$. The integral geometric approach to the upper bound is inspired by the classic paper of Donnelly-Fefferman \cite{DF} (see also \cite{Lin}). But, instead of doubling estimates or frequency function estimates, we use the Poisson wave kernel to obtain growth estimates on eigenfunctions, and then use results on pluri-subharmonic functions rather than functions of one complex variable to relate growth of zeros to growth of eigenfunctions. This approach was used in \cite{Z} to prove equidistribution theorems for complex nodal sets when the geodesic flow is ergodic. The Poisson wave kernel approach works for Steklov eigenfunctions as well as Laplace eigenfunctions, and in fact for eigenfunctions of any positive elliptic analytic pseudo-differential operator.
We first use the Poisson wave group \eqref{PWG} to analytically continue eigenfunctions in the form \begin{equation} \label{UAC} U_{{\mathbb C}}(i \tau) \psi_j (\zeta) = e^{- \tau \lambda_j} \psi_j^{{\mathbb C}} (\zeta). \end{equation} We then use \eqref{UAC} to determine the growth properties of $\psi_j^{{\mathbb C}}(\zeta)$ in Grauert tubes of the complexification of $\partial \Omega$. The relevant notion of Grauert tube is the standard Grauert tube for $\partial \Omega$ with the metric $g_{\partial \Omega}$ induced by the ambient metric $g$ on $M$. This is because the principal symbol of $\Lambda$ is the same as the principal symbol of $\sqrt{\Delta_{\partial \Omega}}$.
\begin{rem} A remark on notation: In \cite{Z} we use $M$ to denote a Riemannian manifold, $M_{\epsilon}$ its Grauert tube of radius $\epsilon$ and $\partial M_{\epsilon}$ to denote the boundary of the Grauert tube of radius $\epsilon$. Since $\partial \Omega$ is the Riemannian manifold of interest here, we denote it by $M$, \begin{equation} \label{MOM} (M, g): = (\partial \Omega, g_{\partial \Omega}). \end{equation} Thus the Grauert tube of radius $\tau$ of $(\partial \Omega)_{{\mathbb C}}$ is denoted $M_{\tau}$ and its boundary by $\partial M_{\tau}$, not to be confused with $\partial \Omega$. We also denote $m = \dim M = n -1$.
\end{rem} Because $U_{{\mathbb C}}(i \tau) $ is a Fourier integral operator with complex phase, it can only magnify the $L^2$ norm of $\psi_j$ by a power of $\lambda_j$. Hence the exponential $e^{\tau \lambda_j}$ dominates the $L^2$ norm on the boundary of the Grauert tube of radius $\tau$. We prove: \begin{prop} \label{PW} Suppose $(\Omega, g, \partial \Omega)$ is real analytic. Let $\psi_{\lambda}$ be an eigenfunction of $\Lambda$ on $M = \partial \Omega$ of eigenvalue $\lambda$. Then
$$ \sup_{\zeta \in M_{\tau}} |\psi^{{\mathbb C}}_{\lambda}(\zeta)| \leq C
\lambda^{\frac{m+1}{2}} e^{\tau \lambda}.
$$ \end{prop} The proof follows from a standard cosine Tauberian result and the fact that the complexified Poisson kernel is a complex Fourier integral operator of finite order. This simple growth estimate replaces the doubling estimates of \cite{DF} and \cite{BL}. It is closely related to growth estimates of $\Delta$-eigenfunctions in \cite{Z,Z2,Z3}.
For the precise statement that $U_{{\mathbb C}}(t + i \tau)$ is indeed a Fourier integral operator with complex phase, we refer to Theorem \ref{BOUFIO}. It is in some sense a known result for elliptic analytic pseudo-differential operators, and we therefore postpone the detailed proof of Theorem \ref{BOUFIO} for $\Lambda$ to a later article.
We thank Boris Hanin, Peng Zhou, Iosif Polterovich, Chris Sogge and particularly Y. Canzani for comments/corrections on earlier versions. We also thank G. Lebeau for confirming that Theorem \ref{BOUFIO} should be true, with not too different a proof than in the Riemannian wave case.
\section{Geometry and analysis of Grauert tubes}
We briefly review the geometry and analysis on Grauert tubes of real analytic Riemannian manifolds. We refer to \cite{Z,Z2,GS1,GS2} for more detailed discussions.
\subsection{\label{AC} Analytic continuation to a Grauert tube}
A real analytic manifold $M$ always possesses a complexification $M_{{\mathbb C}}$, i.e. a complex manifold of which $M$ is a totally real submanifold. A real analytic Riemannian metric $g$ on $M$ determines a canonical plurisubharmonic function $\rho_g$ on $M_{{\mathbb C}}$; since the metric is fixed throughout, we denote it simply by $\rho$. Its square-root $\sqrt{\rho}$ is known as the Grauert tube function; it
equals $\sqrt{- r^2_{{\mathbb C}}(z, \bar{z})}/2$ where $r_{{\mathbb C}}$ is the holomorphic extension of the distance function. The $(1,1)$ form $\omega = \omega_{\rho}: = i \partial\dbar \rho$ defines a K\"ahler metric on $M_{{\mathbb C}}$. The Grauert tubes $M_{\epsilon}: = \{ z \in M_{{\mathbb C}}: \sqrt{\rho}(z) < \epsilon\} $ are strictly pseudo-convex domains in $M_{{\mathbb C}}$, whose boundaries are strictly pseudo-convex CR manifolds. We also denote the contact form of $\partial M_{\tau}$ by
\begin{equation} \label{alpha} \alpha = \frac{1}{i} \partial \rho|_{\partial M_{\tau} } = d^c \sqrt{\rho}.\end{equation}
The complexified exponential map \begin{equation} \label{E} (x, \xi) \in B_{\epsilon}^*M \to E(x, \xi): = \exp_x^{{\mathbb C}} \sqrt{-1} \xi \in M_{\epsilon} \end{equation} defines a symplectic diffeomorphism, where $B^*_{\epsilon} M
\subset T^*M$ is the co-ball bundle of radius $\epsilon$, equipped with the standard symplectic structure, and where $M_{\epsilon}$ is equipped with $\omega_{\rho}$. The Grauert tube function $\sqrt{\rho}$ pulls back under $E$ to the metric norm function $|\xi|_g$. We emphasize the setting $M_{{\mathbb C}}$ but it is equivalent to using $E$ to endow $B^*_{\epsilon} M$ with an adapted complex structure. We refer to \cite{GS1, GS2, LS, GLS} for further discussion.
\subsection{Geodesic and Hamiltonian flows}
The microlocal analysis of the kernels \eqref{PWG} involves the complexification of the geodesic flow. We denote by $g^t$
the (real) homogeneous geodesic flow of $(M, g)$. It is the real analytic Hamiltonian flow on $T^*M \backslash 0_M$ generated by the Hamiltonian $|\xi|_g$ with respect to the standard symplectic form $\omega$. We also consider
the Hamiltonian flow of
$|\xi|_g^2$, which is real analytic on all of $T^*M$ and denote its Hamiltonian flow by $G^t$. In general, we denote by $\Xi_H$ the Hamiltonian vector field of a Hamiltonian $H$ and its flow by $\exp t \Xi_H$. Thus, we consider the Hamiltonian flows \begin{equation} \label{gtdef}
g^t = \exp t \Xi_{|\xi|_g}, \;\;\; \mbox{resp.}\;\;\; G^t = \exp t \Xi_{|\xi|_g^2}. \end{equation}
The exponential map is the map $\exp_x: T^*_xM \to M$ defined by $\exp_x \xi = \pi G^1(x, \xi)$, where $\pi$ is the standard projection. Since $E^* \sqrt{\rho} = |\xi|_g$, $E^*$ conjugates the geodesic flow on $B^*M$ to the Hamiltonian flow $\exp t \Xi_{\sqrt{\rho}}$ of $\sqrt{\rho}$ with respect to $\omega$, i.e. \begin{equation} \label{gt} E(g^t(x, \xi)) = \exp t \Xi_{\sqrt{\rho}} (\exp_x i \xi). \end{equation}
\subsection{Szeg\"o kernel and analytic continuation of the Poisson kernel}
We denote by $\mathcal{O}^{s + \frac{m-1}{4}}(\partial M _{\tau})$ the Sobolev spaces of CR holomorphic functions on the boundaries of the strictly pseudo-convex domains $M_{\tau}$, i.e. \begin{equation} \label{SOBSP} {\mathcal O}^{s + \frac{m-1}{4}}(\partial M_{\tau}) = W^{s + \frac{m-1}{4}}(\partial M_{\tau}) \cap \mathcal{O} (\partial M_{\tau}), \end{equation} where $W^s$ is the $s$th Sobolev space and where $ \mathcal{O} (\partial M_{\tau})$ is the space of boundary values of holomorphic functions. The inner product on $\mathcal{O}^0 (\partial M _{\tau} )$ is with respect to the Liouville measure or contact volume form
\begin{equation} \label{CONTACTVOL} d\mu_{\tau} : = \alpha \wedge \omega^{m-1}, \end{equation} on $\partial M_{\tau}$.
The study of norms of complexified eigenfunctions is related to the study of the Szeg\"o\ kernels $\Pi_{\tau}$ of $M_{\tau}$, namely the orthogonal projections
\begin{equation} \Pi_{\tau}: L^2(\partial M_{\tau}, d\mu_{\tau}) \to \mathcal{O}^0(\partial M_{\tau}, d\mu_{\tau}) \end{equation} onto the Hardy space of boundary values of holomorphic functions in $M_{\tau}$ which belong to $ L^2(\partial M_{\tau}, d\mu_{\tau})$. The Szeg\"o\ projector $\Pi_{\tau}$ is a complex Fourier integral operator with a positive complex canonical relation. The real points of its canonical relation form the graph $\Delta_{\Sigma}$ of the identity map on the symplectic cone
$\Sigma_{\tau} \subset T^* \partial M_{\tau}$ defined by the spray \begin{equation} \label{SIGMATAU} \Sigma_{\tau} = \{(\zeta, r d^c \sqrt{\rho}(\zeta)): r \in {\mathbb R}_+\} \subset T^* (\partial M_{\tau}) \end{equation} of the contact form $d^c \sqrt{\rho}$. There exists a symplectic equivalence (cf. \cite{GS2}) \begin{equation} \iota_{\tau} : T^*M - 0 \to \Sigma_{\tau},\;\; \iota_{\tau} (x, \xi) = (E(x, \tau
\frac{\xi}{|\xi|}), |\xi|d^c \sqrt{\rho}_{E(x, \tau
\frac{\xi}{|\xi|})} ). \end{equation}
\subsection{Analytic continuation of the Poisson wave kernel}
The wave group generated by $\Lambda$ on $M = \partial \Omega$ is the unitary group $U(t) = e^{ i
t \Lambda}$. Its kernel $U(t, x, y)$ solves the `half-wave equation', \begin{equation} \label{HALFWE} \left(\frac{1}{i} \frac{\partial }{\partial t} -\Lambda_x \right) U(t, x, y) = 0, \;\; U(0, x, y) = \delta_y(x). \end{equation} Here, $\Lambda_x$ means that $\Lambda$ is applied in the $x$ variable. In the real domain it is well known \cite{Hor,DG} that $U(t, x, y)$ is the Schwartz kernel of a Fourier integral operator, $$U(t, x, y) \in I^{-1/4}({\mathbb R} \times M \times M, \Gamma)$$
with underlying canonical relation $$\Gamma = \{(t, \tau, x, \xi, y, \eta): \tau + |\xi| = 0, g^t(x, \xi) = (y, \eta) \} \subset T^* {\mathbb R} \times T^*M \times T^*M. $$
The Poisson-wave kernel is the analytic continuation $U(t + i \tau, x, y)$ of the wave kernel with respect to time, $ t \to t + i \tau\in {\mathbb R} \times {\mathbb R}_+$. For $t = 0$ and for $\tau > 0$, we obtain the Poisson semi-group $U(i \tau) = e^{- \tau \Lambda}$. For general $t + i \tau$, the Poisson-wave kernel has the eigenfunction expansion, \begin{equation}\label{POISEIGEXP} U (t + i \tau, x, y) = \sum_j e^{i (t + i \tau) \lambda_j} \psi_{\lambda_j}(x) \psi_{\lambda_j}(y). \end{equation}
As stated in Theorem \ref{BOUFIO} in the introduction, the Poisson-wave kernel $U(t + i \tau, x, y)$ admits an analytic continuation $U_{{\mathbb C}}(t + i \tau, \zeta, y)$ in the first variable to $M_{\tau} \times M$.
\begin{theo}\label{BOUFIO} Let $U(t)$ be the wave group of the Dirichlet-to-Neumann operator $\Lambda$ on $M = \partial \Omega$ as above. Then $\Pi_{\epsilon} \circ U (i \epsilon): L^2(M) \to \mathcal{O}(\partial M_{\epsilon})$ is a complex Fourier integral operator of order $- \frac{m-1}{4}$ associated to the canonical relation $$\Gamma = \{(y, \eta, \iota_{\epsilon} (y, \eta)) \} \subset T^* M \times \Sigma_{\epsilon}.$$ Moreover, for any $s$, $$\Pi_{\epsilon} \circ U (i \epsilon): W^s(M) \to {\mathcal O}^{s + \frac{m-1}{4}}(\partial M_{\epsilon})$$ is a continuous isomorphism. \end{theo} This statement is asserted by Boutet de Monvel in \cite{Bou, Bou2} for any real analytic positive elliptic pseudo-differential operator, and has been accepted since then as an established fact (see for instance \cite{GS1,GS2}). The proof was only sketched in \cite{Bou,Bou2},
and the first complete proofs appeared only recently in \cite{Z2,L,St} for the special case of the wave group of a Riemannian manifold without boundary. Roughly the same proof applies to the Steklov problem as well because $\sqrt{\Delta_{\partial M}}$ and $\Lambda$ are the same to leading order and in fact differ by an analytic pseudo-differential operator of order zero. This is because the principal symbol of $\Lambda,$
\begin{equation} \label{ps} \sigma_{\Lambda} : T^* \partial \Omega \to {\mathbb R}, \;\;\; \sigma_{\Lambda} (x, \xi) = |\xi|_{g_{\partial}}, \end{equation} is the same as for the Laplacian $\Delta_{\partial}$ of the boundary $(\partial \Omega, g_{\partial})$. In fact, the complete symbol of $\Lambda$ is calculated in \cite{LU} (see also \cite{PS}). It would be desirable to have a detailed exposition of the proof, but we postpone that to a future article.
\section{Growth of complexified eigenfunctions: proof of Proposition \ref{PW}}
We further need to generalize sup norm estimates of complexified eigenfunctions in \cite{Z2} to the $\Lambda$-eigenfunctions.
As in \cite{Z2,Z3} we prove Proposition \ref{PW} by introducing the `tempered' spectral projections \begin{equation}\label{TCXSPM} P_{ I_{\lambda}}^{\tau}(\zeta, \bar{\zeta}) = \sum_{j: \lambda_j \in I_{\lambda}} e^{-2 \tau \lambda_j}
|\psi_{\lambda_j}^{{\mathbb C}}(\zeta)|^2, \;\; (\sqrt{\rho}(\zeta) \leq \tau), \end{equation} where $I_{\lambda} $ could be a short interval $[\lambda, \lambda + 1]$ of frequencies or a long window $[0, \lambda]$. Exactly as in \cite{Z2} but with the wave group of $\Lambda$ replacing the wave group of $\sqrt{\Delta}$, we prove \begin{equation} \label{PTAU} P^{\tau}_{[0, \lambda]}(\zeta, \bar{\zeta}) = (2\pi)^{-m} \left(\frac{\lambda}{\sqrt{\rho}} \right)^{\frac{m-1}{2}}
\left( \frac{\lambda}{(m-1)/2 + 1} + O (1) \right), \;\; \zeta \in \partial M_{\tau}. \end{equation} We then obtain
\begin{cor} \label{PWa} Let $\psi_{\lambda}$ be an eigenfunction of $\Lambda$ as above. Then there exists $C > 0$ so that for all $\sqrt{\rho}(\zeta) = \tau$, $$ C
\lambda^{-\frac{m-1}{2}} e^{ \tau \lambda} \leq \sup_{\zeta \in M_{\tau}} |\psi^{{\mathbb C}}_{\lambda}(\zeta)| \leq C
\lambda^{\frac{m-1}{4} + \half} e^{\tau \lambda}. $$
\end{cor}
The lower bound is not used in the nodal analysis.
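For completeness, we note how the upper bound of Corollary \ref{PWa} follows from \eqref{PTAU}: all terms in \eqref{TCXSPM} are non-negative, and $|\psi^{{\mathbb C}}_{\lambda}|$ attains its maximum over $\overline{M_{\tau}}$ on $\partial M_{\tau}$ by the maximum principle, so \[e^{-2\tau\lambda}\sup_{\zeta \in M_{\tau}}|\psi^{{\mathbb C}}_{\lambda}(\zeta)|^2 \leq \sup_{\zeta \in \partial M_{\tau}} P^{\tau}_{[0,\lambda]}(\zeta, \bar{\zeta}) \leq C_{\tau} \lambda^{\frac{m-1}{2} + 1},\] which gives $|\psi^{{\mathbb C}}_{\lambda}(\zeta)| \leq C_{\tau} \lambda^{\frac{m-1}{4} + \half} e^{\tau \lambda}$ on $M_{\tau}$.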
\subsection{Proof of the local Weyl law}
We only sketch the proof for the sake of completeness, since it is essentially the same as in \cite{Z,Z2,Z3} and closely follows \cite{DG}. The novelty is that we apply the argument of \cite{DG} to the analytically continued parametrix.
By \cite{Hor, DG} the positive elliptic first order pseudo-differential operator $\Lambda$ generates a wave group which has a parametrix of the form,
\begin{equation} \label{PARAONE} U(t, x, y) = \int_{T^*_y M} e^{ i t |\xi|_{g_y} } e^{i \langle \xi, \exp_y^{-1} (x) \rangle} A(t, x, y, \xi) d\xi \end{equation} similar to that of the wave kernel of $M = \partial \Omega$, since $\Lambda = \sqrt{\Delta_M} + Q$ where $Q $ is an analytic pseudo-differential operator of order zero. Here,
$|\xi|_{g_y} $ is the metric norm function at $y$, and $A(t, x, y, \xi)$ is a polyhomogeneous amplitude of order $0$ which is supported near the diagonal. The amplitude is different from that of the wave kernel since the transport equations involve $Q$.
By Theorem \ref{BOUFIO}, the wave group and parametrix may be analytically continued. To obtain uniform asymptotics, we use the analytic continuation of the H\"ormander parametrix (\ref{PARAONE}). We choose local coordinates near $x$
and write $\exp_x^{-1}(y) = \Phi(x, y)$ in these local coordinates for $y$ near $x$, and write the integral over $T^*_yM$ as an integral over ${\mathbb R}^m$ in these coordinates. The holomorphic extension of the parametrix to the Grauert tube $\sqrt{\rho}(\zeta) < \tau$ at time $t + 2 i \tau$ has the form \begin{equation} \label{CXPARAONE} U_{{\mathbb C}}(t + 2 i \tau,
\zeta, \bar{\zeta}) = \int_{{\mathbb R}^m} e^{(i t - 2\tau ) |\xi|_{g_y} } e^{i \langle \xi, \Phi (\zeta, \bar{\zeta}) \rangle} A(t, \zeta, \bar{\zeta}, \xi) d\xi, \end{equation} where $A$ is the analytic extension of the real analytic amplitude $A$ and $\Phi(\zeta, \bar{\zeta})$ is the analytic extension of $\exp_y^{-1} (x)$.
We introduce a cutoff function $\chi \in \mathcal{S}({\mathbb R})$ with $\hat{\chi} \in C_0^{\infty}$ supported in a sufficiently small neighborhood of $0$ so that no other singularities of $U_{{\mathbb C}}(t + 2 i \tau, \zeta, \bar{\zeta})$ lie in its support. We also assume $\hat{\chi} \equiv 1$ in a smaller neighborhood of $0$. We then change variables $\xi \to \lambda \xi$ and apply the complex stationary phase method to the integral, \begin{equation} \label{CXPARAONEc}\begin{array}{l} \int_{{\mathbb R}} \hat{\chi}(t) e^{-i \lambda t} U_{{\mathbb C}} (t + 2 i \tau, \zeta, \bar{\zeta})dt \\ \\ = \lambda^m \int_{0}^{\infty} \int_{{\mathbb R}} \hat{\chi}(t) e^{-i \lambda t} \int_{S^{m-1}} e^{(i t - 2\tau ) \lambda r} e^{i r \lambda \langle \omega, \Phi (\zeta, \bar{\zeta}) \rangle} A(t, \zeta, \bar{\zeta}, \lambda r \omega ) r^{m-1} dr dt d\omega.\end{array} \end{equation}
The resulting integral \eqref{CXPARAONEc} is a semi-classical Fourier integral distribution with a complex phase, the same phase as in the pure Riemannian case treated in \cite{Z2}. Hence the stationary phase calculation is essentially the same as in section 9.1 of \cite{Z2}. We first integrate over $d \omega$ and find that there are two stationary phase points, one giving an exponentially decaying amplitude of order $e^{- 2 \lambda \tau r}$ and one for which the critical value is $2 \lambda \tau r$. It cancels the term $- 2 \tau \lambda r$ coming from the factor $e^{(i t - 2\tau ) \lambda r} $. We then apply stationary phase to the resulting integral over $(t, r)$ with phase $ t (r - 1)$. The critical set consists of $r = 1, t = 0$. The phase is clearly non-degenerate with Hessian determinant one and inverse Hessian operator $D^2_{r, t}$. Taking into account the factor of $\lambda^{-1}$ from the change of variables, the stationary phase expansion gives \begin{equation}\label{EXPANSIONCaa} \sum_j \chi(\lambda - \lambda_j) e^{- 2 \tau
\lambda_j} |\psi_j^{{\mathbb C}}(\zeta)|^2 \sim \sum_{k = 0}^{\infty}\lambda^{\frac{m-1}{2} - k} \omega_k(\tau, \zeta), \end{equation} where the coefficients $\omega_k(\tau, \zeta)$ are smooth for $\zeta \in \partial M_{\tau}$. The Weyl asymptotics then follows from the standard cosine Tauberian theorem, as in \cite{DG} or \cite{Z2} (loc. cit.).
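To make the bookkeeping in \eqref{EXPANSIONCaa} explicit, the model stationary phase expansion in the $(t,r)$ variables can be sketched as follows. This is schematic: the amplitude $a$ stands for the full symbol remaining after the $d\omega$ integration, so only the power count is being illustrated.

```latex
% Phase t(r-1): critical point (t, r) = (0, 1);
% Hessian \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, |det| = 1.
\int_{{\mathbb R}} \int_0^{\infty} e^{i \lambda t (r - 1)}\, a(t, r)\, dr\, dt
 \;\sim\; \frac{2\pi}{\lambda} \sum_{k \ge 0} \frac{1}{k!}
 \Bigl( \frac{i}{\lambda}\, \partial_t \partial_r \Bigr)^{k} a(t, r)
 \Big|_{t = 0,\, r = 1}.
% Power count: lambda^m (from the dilation xi -> lambda xi), times
% lambda^{-(m-1)/2} (from the d omega stationary phase), times
% lambda^{-1} (from this expansion) = lambda^{(m-1)/2},
% matching the leading power in the expansion above.
```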
\section{Proof of Theorem \ref{NODALBOUND}}
We start with the integral geometric approach of \cite{DF} (Lemma 6.3) (see also \cite{Lin} (3.21)). There exists a ``Crofton formula" in the real domain which bounds the local nodal hypersurface volume above, \begin{equation} \label{INTGEOM} \mathcal{H}^{m-1}(\mathcal{N}_{\varphi_{\lambda}} \cap U) \leq C_L \int_{\mathcal{L}} \#\{ \mathcal{N}_{\varphi_{\lambda}}\cap \ell\} d\mu(\ell). \end{equation}
Thus, $ \mathcal{H}^{m-1}(\mathcal{N}_{\varphi_{\lambda}} \cap U) $ is bounded above by a constant $C_L$ times the average over all line segments of length $L$ in a local coordinate patch $U$ of the number of intersection points of the line with the nodal hypersurface. The measure $d\mu(\ell)$ is known as the `kinematic measure' in the Euclidean setting \cite{F} (Chapter 3); see also Theorem 5.5 of \cite{AP}. We will be using geodesic segments of fixed length $L$ rather than line segments, and parametrize them by $S^*M \times [0, L]$, i.e. by their initial data and time. Then $d\mu(\ell)$ is essentially Liouville measure $d\mu_L$ on $S^* M$ times $dt$.
The complexification of a real line $\ell = x + {\mathbb R} v$ with $x, v \in {\mathbb R}^m$ is $\ell_{{\mathbb C}} = x + {\mathbb C} v$. Since the number of intersection points (or zeros) only increases if we count complex intersections, we have \begin{equation} \label{INEQ1} \int_{\mathcal{L}} \# (\mathcal{N}_{\varphi_{\lambda}} \cap \ell) d\mu(\ell) \leq \int_{\mathcal{L}} \# (\mathcal{N}_{\varphi_{\lambda}}^{{\mathbb C}} \cap \ell_{{\mathbb C}}) d\mu(\ell).
\end{equation} Note that this complexification is quite different from using intersections with all complex lines to measure complex nodal volumes. If we did that, we would obtain a similar upper bound on the complex hypersurface volume of the complex nodal set. But it would not give an upper bound on the real nodal volume; indeed, the complex volume tends to zero as one shrinks the Grauert tube radius to zero, while \eqref{INEQ1} stays bounded below.
Hence to prove Theorem \ref{NODALBOUND} it suffices to show \begin{lem} \label{DF2} We have, $$\mathcal{H}^{m-1}(\mathcal{N}_{\varphi_{\lambda}}) \leq C_L \int_{\mathcal{L}} \# (\mathcal{N}_{\varphi_{\lambda}}^{{\mathbb C}} \cap \ell_{{\mathbb C}} ) d\mu(\ell) \leq C \lambda. $$
\end{lem}
We now sketch the proofs of these results using a somewhat novel approach to the integral geometry and complex analysis.
\subsection{\label{GEOS} Background on hypersurfaces and geodesics}
The proof of the Crofton formula given below in Lemma \ref{CROFTONEST} involves the geometry of geodesics and hypersurfaces. To prepare for it we provide the relevant background.
As above, we denote by $d\mu_L$ the Liouville measure on $S^* M$. We also denote by $\omega$ the standard symplectic form on $T^* M$ and by $\alpha$ the canonical one form. Then $d\mu_L = \omega^{m-1} \wedge \alpha$ on $S^* M$. Indeed, $d\mu_L$
is characterized by the formula $d\mu_L \wedge d H = \omega^{m}$, where $H(x, \xi) = |\xi|_g$. So it suffices to verify that $\alpha \wedge dH = \omega$ on $S^*M$. We take the interior product $\iota_{\Xi_H}$ with the Hamilton vector field $\Xi_H$ on both sides, and the identity follows from the fact that $\alpha(\Xi_H) = \sum_j \xi_j \frac{\partial H}{\partial \xi_j} = H = 1$ on $S^*M$, since $H$ is homogeneous of degree one. Henceforth we denote by $\Xi = \Xi_H$ the generator of the geodesic flow.
Let $N \subset M$ be a smooth hypersurface in a Riemannian manifold $(M, g)$. We denote by $T^*_N M$ the set of covectors with footpoint on $N$ and by $S^*_N M$ the unit covectors along $N$. We introduce Fermi normal coordinates $(s, y_m)$ along $N$, where $s$ are coordinates on $N$ and $y_m$ is the normal coordinate, so that $y_m = 0$ is a local defining function for $N$. We also let $\sigma, \xi_m$ be
the dual symplectic Darboux coordinates. Thus the canonical
symplectic form is $\omega_{T^* M } = ds \wedge d \sigma + dy_m
\wedge d \xi_m. $ Let $\pi: T^* M \to M$ be the natural projection. For notational simplicity we denote $\pi^*y_m$ by $ y_m$ as functions on $T^* M$. Then $y_m$ is a defining function of $T^*_N M$.
The hypersurface $S^*_N M \subset S^* M$ is a kind of Poincar\'e section or symplectic transversal to the orbits of $G^t$, i.e. is a symplectic transversal away from the (at most codimension one) set of $(y, \eta) \in S_N^* M$ for which $\Xi_{y, \eta} \in T_{y, \eta} S^*_N M$, where as above $\Xi$ is the generator of the geodesic flow. More precisely,
\begin{lem} \label{NSYMP} The restriction $\omega |_{S_N^* M}$ is symplectic on $S^*_N M \backslash S^* N$. \end{lem}
Indeed, $\omega |_{S_N^* M}$ is symplectic on $T_{y, \eta} S^* N$ as long as $T_{y, \eta} S^*_N M$
is transverse to $\Xi_{y, \eta}$, since $\ker (\omega|_{S^*M}) = {\mathbb R} \Xi. $ But $S^* N$ is the set of points of $S^*_N M$ where $\Xi \in T S^*_N M$, i.e. where $S^*_N M$ fails to be transverse to $G^t$. Indeed, transversality fails when $\Xi(y_m) =dy_m (\Xi) = 0$, and $\ker d y_m \cap \ker d H = T S^*_N M$. One may also see it in Riemannian terms as follows: the generator $\Xi_{y, \eta}$ is the horizontal lift $\eta^h$ of $\eta$ to $(y, \eta)$ with respect to the Riemannian connection on $S^* M$, where we freely identify covectors and vectors by the metric. Lack of transversality occurs when $\eta^h $ is tangent to $T_{(y, \eta)} (S^*_N M)$. The latter is the kernel of $d y_m$. But $d y_m (\eta^h) = d y_m (\eta)= 0 $ if and only if $\eta \in T N$.
It follows from Lemma \ref{NSYMP} that the symplectic volume form of $S^*_N M \backslash S^* N$
is $\omega^{m-1} |_{S_N^* M}$. The following Lemma gives a useful alternative formula:
\begin{lem} \label{dmuLN}
Define $$d\mu_{L, N} = \iota_{\Xi} d\mu_L \;|_{S^*_N M}, $$
where as above, $d\mu_L$ is Liouville measure on $S^* M$. Then $$d \mu_{L, N}= \omega^{m-1} |_{S_N^* M}. $$ \end{lem}
Indeed, $d \mu_L = \omega^{m-1} \wedge \alpha$, and $ \iota_{\Xi} d\mu_L = \omega^{m-1}$.
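In more detail, the one-line computation behind Lemma \ref{dmuLN} can be spelled out as follows (a routine verification; sign conventions for $\iota_{\Xi}\omega$ vary, but only the vanishing of $dH$ on $T S^*M$ is used):

```latex
% iota_Xi is an anti-derivation; deg(omega^{m-1}) = 2m - 2 is even, so
\iota_{\Xi}\, d\mu_L
  = \iota_{\Xi}\bigl(\omega^{m-1} \wedge \alpha\bigr)
  = \bigl(\iota_{\Xi}\, \omega^{m-1}\bigr) \wedge \alpha
    + \omega^{m-1}\, \alpha(\Xi).
% First term: iota_Xi omega^{m-1} = (m-1)(iota_Xi omega) wedge omega^{m-2}
%   = +/- (m-1) dH wedge omega^{m-2}, and dH = 0 on T S^*M since H = 1 there.
% Second term: alpha(Xi) = H = 1 on S^*M by Euler's identity.
% Hence iota_Xi dmu_L = omega^{m-1} on S^*M; restricting to S^*_N M
% gives dmu_{L,N} = omega^{m-1}|_{S^*_N M}, as claimed.
```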
\begin{cor} \label{COR} $\mathcal{H}^{m-1} (N) = \frac{1}{\beta_m} \int_{S^*_N M} |\omega^{m-1}|$. \end{cor}
\subsection{Hausdorff measure and Crofton formula for real geodesic arcs}
First we sketch a proof of the integral geometry estimate using geodesic arcs rather than local coordinate line segments. For background on integral geometry and Crofton type formulae we refer to \cite{AB,AP}. As explained there, a Crofton formula arises from a double fibration
$$\begin{array}{lllll} && \mathcal{I} && \\ &&&&\\ & \pi_1 \;\swarrow & & \searrow \;\pi_2 & \\ &&&& \\ \Gamma &&&& B, \end{array}$$ where $\Gamma$ parametrizes a family of submanifolds $B_{\gamma}$ of $B$. The points $b \in B$ then parametrize a family of submanifolds $\Gamma_b = \{\gamma \in \Gamma: b \in B_{\gamma}\}$ and the top space is the incidence relation in $B \times \Gamma$ that $b \in B_{\gamma}.$
We would like to define $\Gamma$ as the space of geodesics of $(M, g)$, i.e. the space of orbits of the geodesic flow on $S^* M$. Heuristically, the space of geodesics is the quotient space $S^* M/{\mathbb R}$ where ${\mathbb R}$ acts by the geodesic flow $G^t$ (i.e. the Hamiltonian flow of
$H$). Of course, for a general (i.e. non-Zoll) $(M, g)$ the `space of geodesics' is not a Hausdorff space and so we do not have a simple analogue of the space of lines in ${\mathbb R}^n$. Instead we consider the space $\mathcal{G}_T$ of geodesic arcs of length $T$.
If we only use partial orbits of length $T$, no two partial orbits are equivalent and
the space of geodesic arcs $\gamma_{x, \xi}^T$ of length $T$ is simply parametrized by $S^* M$. Hence we let $B = S^* M$ and also $\mathcal{G}_T \simeq S^* M$. The fact that different arcs of length $T$ of the same geodesic are distinguished leads to some redundancy.
In the following, let $L_1$ denote the length of the shortest closed geodesic of $(M, g)$.
\begin{prop}\label{CROFTONEST} Let $N \subset M$ be any smooth hypersurface\footnote{The same formula is true if $N$ has a singular set $\Sigma$ with $\mathcal{H}^{m-2}(\Sigma) < \infty$}, and let $S^*_N M$ denote the unit covectors to $M$ with footpoint on $N$. Then for $0 < T < L_1,$ $$\mathcal{H}^{m-1}(N) = \frac{ 1}{\beta_mT}
\int_{S^* M} \# \{t \in [- T, T]: G^t(x, \omega) \in S^*_N M\} d\mu_L(x, \omega),$$ where $\beta_m $ is $2 (m-1)!$ times the volume of the unit ball in ${\mathbb R}^{m-2}$. \end{prop}
\begin{proof}
By Corollary \ref{COR}, the Hausdorff measure of $N$ is given by \begin{equation} \label{HNN}\begin{array}{lll} \mathcal{H}^{m-1}(N) & = & \frac{1}{\beta_m}
\int_{S^*_N M} |\omega^{m-1}|. \end{array} \end{equation}
We use the Lagrange (or more accurately, Legendre) immersion,
$$\iota: S^* M \times {\mathbb R} \to S^*M \times S^* M, \;\; \iota(x,
\omega, t) = (x, \omega, G^t(x, \omega)), $$
where as above, $G^t$ is the geodesic flow \eqref{gtdef}.
We also let $\pi: T^* M \to M$ be the standard projection. We restrict $\iota$ to $S^* M \times [-T, T]$ and define the incidence relation $$ \mathcal{I}_T = \{((y, \eta), (x, \omega), t) \subset S^*M \times S^*M \times [- T, T]: (y, \eta) = G^t(x, \omega)\}, $$ which is isomorphic to $[- T, T ] \times S^*M$ under $\iota$. We form the diagram $$\begin{array}{lllll} && \mathcal{I}_T \simeq S^* M \times [-T, T] && \\ &&&&\\ & \pi_1 \;\swarrow & & \searrow \;\pi_2 & \\ &&&& \\ S^* M \simeq \mathcal{G}_T &&&& S^* M, \end{array}$$ using the two natural projections, which in the local parametrization take the form $$\pi_1(t, x, \xi) = G^t(x, \xi), \;\;\; \pi_2(t, x, \xi) = (x, \xi). $$ As noted above, the bottom left $S^*M$ should be thought of as the space of geodesic arcs. The fiber $$\pi_1^{-1}(y, \eta) = \{(t, x, \xi) \in [-T, T ] \times S^* M: G^t(x, \xi) = (y, \eta)\} \simeq \gamma_{(y, \eta)}^T$$ may be identified with the geodesic segment through $(y, \eta)$ and the fiber $\pi_2^{-1} (x, \omega) \simeq [- T, T]$.
We `restrict' the diagram above to $S^*_N M$: \begin{equation} \label{DIAGRAM} \begin{array}{lllll} && \mathcal{I}_T \simeq S_N^* M \times [-T, T] && \\ &&&&\\ & \pi_1 \;\swarrow & & \searrow \;\pi_2 & \\ &&&& \\ (S^*_N M)_T &&&& S_N^* M, \end{array} \end{equation} where
$$(S^*_N M)_{T} = \pi_1 \pi_2^{-1} (S_N^* M) = \bigcup_{|t| < T} G^t(S^*_N M).$$
We define the Crofton density $\varphi_T$ on $S_N^* M$ corresponding to the diagram \eqref{DIAGRAM} \cite{AP} (section 4) by \begin{equation} \label{CROFDEN} \varphi_T = (\pi_2)_* \pi_1^* d\mu_L. \end{equation} Since the fibers of $\pi_2$ are 1-dimensional, $\varphi_T$ is a differential form of degree $2 \dim M - 2$ on $S^*M$. To make it smoother, we can introduce a smooth cutoff $\chi$ supported in $(-1,1)$, equal to $1$ on $(- \half, \half)$, and use $\chi_T(t) = \chi(\frac{t}{T}). $ Then $\pi_1^* (d\mu_L \otimes \chi_T dt)$ is a smooth density on $\mathcal{I}_T$.
\begin{lem} \label{phiT} The Crofton density \eqref{CROFDEN} is given by, $\varphi_T = T d\mu_{L, N} $ \end{lem}
\begin{proof}
In \eqref{DIAGRAM} we defined the map $\pi_1: (y, \eta, t) \in S^*_N M \times [-T,T] \to G^t(y, \eta) \in (S^*_N M)_{T}$. We first claim that $\pi_1^* d\mu_L = d\mu_{L, N} \otimes dt. $ This is essentially the same as Lemma \ref{dmuLN}. Indeed,
$d \pi_1 (\frac{\partial}{\partial t} )= \Xi$, hence $\iota_{\frac{\partial}{\partial t}} \pi_1^* d\mu_L |_{(t, y, \eta)}
= (G^t)^* \omega^{m-1} = \omega^{m-1} |_{T_{y, \eta} S^*_N M}$.
Combining Lemma \ref{phiT} with \eqref{HNN} gives \begin{equation} \label{HDPHIT} \int_{S^*_N M} \varphi_T = \int_{\pi_2^{-1} (S^*_N M)} d\mu_L = T\beta_m\mathcal{H}^{m-1}(N). \end{equation}
\end{proof}
We then relate the integral on the left side to numbers of intersections of geodesic arcs with $N$. The relation is given by the co-area formula: if $f: X \to Y$ is a smooth map of manifolds of the same dimension, if $\Phi$ is a smooth density on $Y$, and if $\# \{f^{-1}(y)\} < \infty$ for every regular value $y$, then $$ \int_X f^* \Phi = \int_Y \# \{f^{-1}(y)\}\; \Phi. $$ If we set $X = \pi_2^{-1}(S^*_N M), \; Y = S^* M, $ and $f =
\pi_1|_{\pi_2^{-1}(S^*_N M)}$ then the co-area formula gives, \begin{equation} \label{COAREA} \int_{\pi_2^{-1}(S^*_N M)} \pi_1^* d\mu_L = \int_{S^* M} \# \{t \in [- T, T]: G^t(x, \omega) \in S^*_N M\} d\mu_L(x, \omega). \end{equation}
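The counting form of the co-area formula used here can be illustrated in one dimension, where it reduces to $\int_X |f'|\,dx = \int_Y \#\{f^{-1}(y)\}\,dy$. The following is a toy numerical check with the hypothetical choice $f(x) = \sin x$ on $[0, 2\pi]$ (both sides equal $4$); it is not part of the proof.

```python
import numpy as np

# 1-D toy check of the counting form of the co-area formula,
#   int_X |f'| dx  =  int_Y #{f^{-1}(y)} dy,
# for f(x) = sin(x) on X = [0, 2*pi], so Y = [-1, 1].
x = np.linspace(0.0, 2.0 * np.pi, 20_001)
fx = np.sin(x)

def trapezoid(vals, grid):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))

# Left side: int_0^{2 pi} |cos x| dx = 4.
lhs = trapezoid(np.abs(np.cos(x)), x)

# Right side: almost every y in (-1, 1) has exactly 2 preimages, so the
# counting integral is int_{-1}^{1} 2 dy = 4.  Count sign changes of f - y.
ys = np.linspace(-0.999, 0.999, 401)
counts = np.array(
    [np.count_nonzero(np.diff(np.sign(fx - y)) != 0) for y in ys],
    dtype=float,
)
rhs = trapezoid(counts, ys)
```

Both quadratures agree with the exact value $4$ up to discretization error.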
Combining \eqref{HDPHIT} and \eqref{COAREA} gives the result stated in Proposition \ref{CROFTONEST},
\begin{equation} \label{CONCLUSION} T \beta_m \mathcal{H}^{m-1}(N) =
\int_{S^* M} \# \{t \in [- T, T]: G^t(x, \omega) \in S^*_N M\} d\mu_L(x, \omega). \end{equation}
\end{proof}
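As a sanity check on Proposition \ref{CROFTONEST}, its flat analogue, the classical Cauchy--Crofton formula $\mathrm{length}(N) = \tfrac12 \int_0^\pi \int_{\mathbb R} \#(N \cap \ell_{t,p})\, dp\, dt$ for curves in ${\mathbb R}^2$, can be verified numerically for the unit circle. This is a toy Euclidean computation, not the Riemannian statement above.

```python
import numpy as np

# Cauchy-Crofton in R^2 for N = unit circle:
#   length(N) = (1/2) * int_0^pi int_R #(N ∩ l_{t,p}) dp dt,
# where l_{t,p} = {x cos t + y sin t = p}.  The line l_{t,p} meets the
# unit circle in 2 points iff |p| < 1, for every angle t.
ps = np.linspace(-2.0, 2.0, 4001)
dp = ps[1] - ps[0]

counts = np.where(np.abs(ps) < 1.0, 2.0, 0.0)
inner = counts.sum() * dp        # int_R #(N ∩ l_{t,p}) dp, approx 4
total = np.pi * inner            # the integrand is independent of t
length_estimate = 0.5 * total    # approx 2*pi, the circumference
```

The estimate recovers the circumference $2\pi$ up to the $p$-grid resolution.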
\subsection{Proof of Lemma \ref{DF2}}
The next step is to complexify.
\begin{proof}
We complexify the Lagrange immersion $\iota$ from a line (segment) to a strip in ${\mathbb C}$: Define $$F: S_{\epsilon} \times S^*M \to M_{{\mathbb C}}, \;\;\; F(t + i
\tau, x, v) = \exp_x (t + i \tau) v, \;\;\; (|\tau| \leq \epsilon) $$ By definition of the Grauert tube, $F$ is surjective onto $M_{\epsilon}$. For each $(x, v) \in S^* M$, $$F_{x, v}(t + i \tau) = \exp_x (t + i \tau) v $$ is a holomorphic strip. Here, $S_{\epsilon} = \{t + i \tau \in {\mathbb C}:
|\tau| \leq \epsilon\}. $ We also denote by
$S_{\epsilon, L} = \{t + i \tau \in {\mathbb C}:
|\tau| \leq \epsilon, |t| \leq L \}. $
Since $F_{x, v}$ is a holomorphic strip,
$$F_{x, v}^*(\frac{1}{\lambda} dd^c \log |\psi_j^{{\mathbb C}}|^2) =
\frac{1}{\lambda} dd^c_{t + i \tau} \log |\psi_j^{{\mathbb C}}|^2 (\exp_x (t + i \tau)
v) = \frac{1}{\lambda} \sum_{t + i \tau: \psi_j^{{\mathbb C}}(\exp_x (t + i
\tau) v) = 0} \delta_{t + i \tau}. $$ Put:
\begin{equation} \label{acal} \mathcal{A}_{L, \epsilon} (\frac{1}{\lambda} dd^c \log |\psi_j^{{\mathbb C}}|^2) = \frac{1}{\lambda} \int_{S^* M} \int_{S_{\epsilon, L}}
dd^c_{t + i \tau} \log |\psi_j^{{\mathbb C}}|^2 (\exp_x (t + i \tau) v) d\mu_L(x, v). \end{equation} A key observation of \cite{DF,Lin} is that \begin{equation} \label{MORE} \#\{\mathcal{N}_{\lambda}^{{\mathbb C}} \cap F_{x,v}(S_{\epsilon, L}) \} \geq \#\{\mathcal{N}_{\lambda}^{{\mathbb R}} \cap F_{x,v}(S_{0, L}) \}, \end{equation} since every real zero is a complex zero. It follows then from Proposition \ref{CROFTONEST} (with $N = \mathcal{N}_{\lambda}$) that $$\begin{array}{lll} \mathcal{A}_{L, \epsilon} (\frac{1}{\lambda} dd^c \log
|\psi_j^{{\mathbb C}}|^2) &= & \frac{1}{\lambda} \int_{S^* M} \#\{\mathcal{N}_{\lambda}^{{\mathbb C}} \cap F_{x,v}(S_{\epsilon, L}) \} d \mu(x,v) \\ && \\ &\geq & \frac{1}{\lambda} \mathcal{H}^{m-1}(\mathcal{N}_{\psi_{\lambda}}).\end{array} $$ Hence to obtain an upper bound on $\frac{1}{\lambda} \mathcal{H}^{m-1}(\mathcal{N}_{\psi_{\lambda}})$ it suffices to prove that there exists $M < \infty$ so that \begin{equation} \label{acalest} \mathcal{A}_{L, \epsilon} (\frac{1}{\lambda} dd^c \log
|\psi_j^{{\mathbb C}}|^2) \leq M. \end{equation}
To prove \eqref{acalest}, we observe that since $dd^c_{t + i \tau} \log |\psi_j^{{\mathbb C}}|^2 (\exp_x (t + i \tau) v)$ is a positive $(1,1)$ form on the strip, the integral over $S_{\epsilon}$ is only increased if we integrate against a positive smooth test function $\chi_{\epsilon} \in C_c^{\infty}({\mathbb C})$ which equals one on $S_{\epsilon, L}$ and vanishes off $S_{2 \epsilon, L} $. Integrating by parts the $dd^c$ onto $\chi_{\epsilon}$, we have
$$\begin{array}{lll} \mathcal{A}_{L, \epsilon} (\frac{1}{\lambda} dd^c \log |\psi_j^{{\mathbb C}}|^2) &\leq & \frac{1}{\lambda} \int_{S^* M} \int_{{\mathbb C}}
dd^c_{t + i \tau} \log |\psi_j^{{\mathbb C}}|^2 (\exp_x (t + i \tau) v) \chi_{\epsilon} (t + i \tau) d\mu_L(x, v) \\ && \\ &= & \frac{1}{\lambda} \int_{S^* M} \int_{{\mathbb C}}
\log |\psi_j^{{\mathbb C}}|^2 (\exp_x (t + i \tau) v) dd^c_{t + i \tau} \chi_{\epsilon} (t + i \tau) d\mu_L(x, v) .
\end{array}$$
Now write $\log |x| = \log_+ |x| - \log_- |x|$. Here $\log_+ |x| = \max\{0, \log |x|\}$ and
$\log_- |x| = \max\{0, - \log |x| \}. $ Then we need upper bounds for $$ \frac{1}{\lambda} \int_{S^* M} \int_{{\mathbb C}}
\log_{\pm} |\psi_j^{{\mathbb C}}|^2 (\exp_x (t + i \tau) v) dd^c_{t + i \tau} \chi_{\epsilon} (t + i \tau) d\mu_L(x, v) .$$
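The splitting into $\log_{\pm}$ is elementary but worth fixing signs on: both pieces are nonnegative and their difference recovers $\log$. A quick numerical confirmation on hypothetical sample values:

```python
import numpy as np

# log|x| = log_+|x| - log_-|x|, with
#   log_+|x| = max(0, log|x|)  and  log_-|x| = max(0, -log|x|);
# both pieces are nonnegative by construction.
def log_plus(t):
    return np.maximum(0.0, np.log(np.abs(t)))

def log_minus(t):
    return np.maximum(0.0, -np.log(np.abs(t)))

xs = np.array([1e-3, 0.5, 1.0, 3.0, 1e2])   # hypothetical sample values
recombined = log_plus(xs) - log_minus(xs)   # equals log(xs) entrywise
```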
For $\log_+$ the upper bound is an immediate consequence of Proposition \ref{PW}. For $\log_-$ the bound is subtler: we need to show that $|\varphi_{\lambda}(z)| $ cannot be too small on too large a set.
As we know from Gaussian beams, it is possible that $|\varphi_{\lambda}(x) | \leq C e^{- \delta \lambda} $ on sets of almost full measure in the real domain; we need to show that nothing worse can happen.
The map \eqref{E} is a diffeomorphism and since $B_{\epsilon}^* M = \bigcup_{0 \leq \tau \leq \epsilon} S^*_{\tau} M$ we also have that $$E: S_{\epsilon, L} \times S^* M \to M_{\tau}, \;\;\; E(t + i \tau, x, v) = \exp_x (t + i \tau) v $$ is a diffeomorphism for each fixed $t$. Hence by letting $t$ vary, $E$ is a smooth fibration with fibers given by geodesic arcs. Over a point $\zeta \in M_{\tau}$ the fiber of the map is a geodesic arc $$\{ (t + i \tau, x, v): \exp_x (t + i \tau) v = \zeta, \;\; \tau = \sqrt{\rho}(\zeta)\}. $$ Pushing forward the measure $ dd^c_{t + i \tau} \chi_{\epsilon} (t + i \tau) d\mu_L(x, v) $ under $E$ gives a positive measure $d\mu$ on $M_{\tau}$. We claim that \begin{equation}\label{PUSH} \mu: = E_* \; dd^c_{t + i \tau} \chi_{\epsilon} (t + i \tau) d\mu_L(x, v) =\left (\int_{\gamma_{x, v}} \Delta_{t + i \tau} \chi_{\epsilon} ds \right) dV_{\omega}, \end{equation} where $dV_{\omega}$ is the K\"ahler volume form $\frac{\omega^m}{m!} $ (see \S \ref{AC}.)
In fact, $d\mu_{L}$ is equivalent under $E$ to the contact volume form $\alpha \wedge \omega_{\rho}^{m-1}$ where $\alpha = d^c \sqrt{\rho}$. Hence the claim amounts to saying that the K\"ahler volume form is $d \tau$ times the contact volume form.
In particular it is a smooth (and of course signed) multiple $J$ of the K\"ahler volume form $dV_{\omega}$, and we do not need to know the coefficient function $J$ beyond that it is bounded above and below by constants independent of $\lambda$. We then have \begin{equation} \label{JEN} \int_{S^* M} \int_{{\mathbb C}}
\log |\psi_j^{{\mathbb C}}|^2 (\exp_x (t + i \tau) v) dd^c_{t + i \tau} \chi_{\epsilon} (t + i \tau) d\mu_L(x, v) = \int_{M_{\tau}}
\log |\psi_j^{{\mathbb C}}|^2 J d V. \end{equation} To complete the proof of \eqref{acalest} it suffices to prove that the right side is $\geq - C \lambda$ for some $ C> 0$.
We use the well-known
\begin{lem} \label{HARTOGS} (Hartogs' Lemma; see \cite[Theorem~4.1.9]{HoI-IV}): Let $\{v_j\}$ be a sequence of subharmonic functions in an open set $X \subset {\mathbb R}^m$ which have a uniform upper bound on any compact set. Then either $v_j \to -\infty$ uniformly on every compact set, or else there exists a subsequence $v_{j_k}$ which converges to some $u \in L^1_{loc}(X)$. Furthermore, $\limsup_k v_{j_k}(x) \leq u(x)$, with equality almost everywhere. For every compact subset $K \subset X$ and every continuous function $f$, $$\limsup_{k \to \infty} \sup_K (v_{j_k} - f) \leq \sup_K (u - f). $$ In particular, if $f \geq u$ and $\epsilon > 0$, then $v_{j_k} \leq f + \epsilon$ on $K$ for $k$ large enough. \end{lem}
This Lemma implies the desired lower bound on \eqref{JEN}:
there exists $C > 0$ so that \begin{equation}
\label{LOGINT} \frac{1}{\lambda} \int_{M_{\tau} } \log |\psi_{\lambda}| J d V \geq - C. \end{equation} For if not, there exists a subsequence of eigenvalues $\lambda_{j_k}$
so that $\frac{1}{\lambda_{j_k}}\int_{M_{\tau}} \log |\psi_{\lambda_{j_k}}| J d V \to - \infty. $ By Proposition \ref{PW}, $\{\frac{1}{\lambda_{j_k}} \log |\psi_{\lambda_{j_k}}|\}$ has a uniform upper bound. Moreover the sequence does not tend uniformly to $-\infty$ since $||\psi_{\lambda}||_{L^2(M)} = 1$. It follows that a further subsequence tends in $L^1$ to a limit $u$ and by the dominated convergence theorem the limit of \eqref{LOGINT} along the sequence equals $\int_{M_{\tau}} u J dV \not= - \infty.$ This contradiction concludes the proof of \eqref{LOGINT}, hence
\eqref{acalest}, and thus
the theorem.
\end{proof}
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{remarks}{Remarks}
\newcommand{\textstyle}{\textstyle}
\numberwithin{equation}{section} \numberwithin{theorem}{section} \numberwithin{proposition}{section} \numberwithin{lemma}{section} \numberwithin{corollary}{section} \numberwithin{definition}{section} \numberwithin{remark}{section}
\newcommand{\mathbb{R}^N}{\mathbb{R}^N} \newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\displaystyle}{\displaystyle} \newcommand{\nabla}{\nabla} \newcommand{\partial}{\partial}
\newcommand{\infty}{\infty} \newcommand{\partial}{\partial} \newcommand{\noindent}{\noindent} \newcommand{
\vskip-.1cm}{
\vskip-.1cm} \newcommand{
}{
}
\newcommand{{\bf A}}{{\bf A}} \newcommand{{\bf B}}{{\bf B}} \newcommand{{\bf C}}{{\bf C}} \newcommand{{\bf D}}{{\bf D}} \newcommand{{\bf E}}{{\bf E}} \newcommand{{\bf F}}{{\bf F}} \newcommand{{\bf G}}{{\bf G}} \newcommand{{\mathbf \omega}}{{\mathbf \omega}} \newcommand{{\bf A}_{2m}}{{\bf A}_{2m}} \newcommand{{\mathbf C}}{{\mathbf C}} \newcommand{{\mathrm{Im}}\,}{{\mathrm{Im}}\,} \newcommand{{\mathrm{Re}}\,}{{\mathrm{Re}}\,} \newcommand{{\mathrm e}}{{\mathrm e}}
\newcommand{{\mathcal{N}}}{{\mathcal{N}}}
\newcommand{L^2_\rho(\ren)}{L^2_\rho(\mathbb{R}^N)} \newcommand{L^2_{\rho^*}(\ren)}{L^2_{\rho^*}(\mathbb{R}^N)}
\renewcommand{\alpha}{\alpha} \renewcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{\Gamma}{\Gamma} \renewcommand{\delta}{\delta} \newcommand{\Delta}{\Delta} \newcommand{\varepsilon}{\varepsilon} \newcommand{\varphi}{\varphi} \renewcommand{\lambda}{\lambda} \renewcommand{\omega}{\omega} \renewcommand{\Omega}{\Omega} \newcommand{\sigma}{\sigma} \renewcommand{\tau}{\tau} \renewcommand{\theta}{\theta} \newcommand{\zeta}{\zeta} \newcommand{\widetilde x}{\widetilde x} \newcommand{\widetilde t}{\widetilde t} \newcommand{\noindent}{\noindent}
\newcommand{{\bf u}}{{\bf u}} \newcommand{{\bf x}}{{\bf x}} \newcommand{{\bf y}}{{\bf y}} \newcommand{{\bf z}}{{\bf z}} \newcommand{{\bf a}}{{\bf a}} \newcommand{{\bf c}}{{\bf c}} \newcommand{{\bf j}}{{\bf j}} \newcommand{{\bf U}}{{\bf U}} \newcommand{{\bf Y}}{{\bf Y}} \newcommand{{\bf H}}{{\bf H}} \newcommand{{\bf G}}{{\bf G}} \newcommand{{\bf V}}{{\bf V}} \newcommand{{\bf w}}{{\bf w}} \newcommand{{\bf v}}{{\bf v}} \newcommand{{\bf h}}{{\bf h}} \newcommand{{\rm div}\,}{{\rm div}\,} \newcommand{{\rm i}\,}{{\rm i}\,} \def{\rm Id}{{\rm Id}}
\newcommand{\quad \mbox{in} \quad \ren \times \re_+}{\quad \mbox{in} \quad \mathbb{R}^N \times \mathbb{R}_+} \newcommand{\quad \mbox{in} \quad}{\quad \mbox{in} \quad} \newcommand{\quad \mbox{in} \quad \re \times \re_+}{\quad \mbox{in} \quad \mathbb{R} \times \mathbb{R}_+} \newcommand{\quad \mbox{in} \quad \re}{\quad \mbox{in} \quad \mathbb{R}} \newcommand{\quad \mbox{for} \quad}{\quad \mbox{for} \quad} \newcommand{,\quad \mbox{where} \quad}{,\quad \mbox{where} \quad} \newcommand{\quad \mbox{as} \quad}{\quad \mbox{as} \quad} \newcommand{\quad \mbox{and} \quad}{\quad \mbox{and} \quad} \newcommand{,\quad \mbox{with} \quad}{,\quad \mbox{with} \quad} \newcommand{,\quad \mbox{or} \quad}{,\quad \mbox{or} \quad} \newcommand{\quad \mbox{at} \quad}{\quad \mbox{at} \quad} \newcommand{\quad \mbox{on} \quad}{\quad \mbox{on} \quad}
\newcommand{\eqref}{\eqref} \newcommand{\mathcal}{\mathcal} \newcommand{\mathfrak}{\mathfrak}
\newcommand{\G_\e}{\Gamma_\varepsilon} \newcommand{ H^{1}(\Rn)}{ H^{1}(\Rn)} \newcommand{W^{1,2}(\Rn)}{W^{1,2}(\Rn)} \newcommand{\Wan}{W^{\frac{\alpha}{2},2}(\Rn)} \newcommand{\Wa}{W^{\frac{\alpha}{2},2}(\R)} \newcommand{\int_{\Rn}}{\int_{\Rn}} \newcommand{\int_\R}{\int_\R} \newcommand{I_\e}{I_\varepsilon} \newcommand{\n \ie}{\nabla I_\e} \newcommand{I_\e'}{I_\varepsilon'} \newcommand{I_\e''}{I_\varepsilon''} \newcommand{I_0''}{I_0''} \newcommand{I'_0}{I'_0}
\newcommand{z_{\e,\rho}}{z_{\varepsilon,\rho}} \newcommand{w_{\e,\xi}}{w_{\varepsilon,\xi}} \newcommand{z_{\e,\rho}}{z_{\varepsilon,\rho}} \newcommand{w_{\e,\rho}}{w_{\varepsilon,\rho}} \newcommand{{\dot{z}}_{\e,\rho}}{{\dot{z}}_{\varepsilon,\rho}} \newcommand{{\bf E}}{{\bf E}} \newcommand{{\bf u}}{{\bf u}} \newcommand{{\bf v}}{{\bf v}} \newcommand{{\bf z}}{{\bf z}} \newcommand{{\bf w}}{{\bf w}} \newcommand{{\bf 0}}{{\bf 0}} \newcommand{{\bf \phi}}{{\bf \phi}} \newcommand{\underline{\phi}}{\underline{\phi}} \newcommand{{\bf h}}{{\bf h}}
\newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathbb{X}}{\mathbb{X}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathbb{Y}}{\mathbb{Y}} \newcommand{\mathbb{M}}{\mathbb{M}} \newcommand{\mathbb{H}}{\mathbb{H}}
\newcommand{
}{
} \newcommand{\quad \Longrightarrow \quad}{\quad \Longrightarrow \quad}
\def\com#1{\fbox{\parbox{6in}{\texttt{#1}}}}
\def{\mathbb N}{{\mathbb N}} \def{\cal A}{{\cal A}} \newcommand{\,d}{\,d} \newcommand{\varepsilon}{\varepsilon} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{{\mbox spt}}{{\mbox spt}} \newcommand{{\mbox ind}}{{\mbox ind}} \newcommand{{\mbox supp}}{{\mbox supp}} \newcommand{\displaystyle}{\displaystyle} \newcommand{\partial}{\partial} \renewcommand{\thesection.\arabic{equation}}{\thesection.\arabic{equation}} \renewcommand{1.1}{1.1}
\newcommand{(-\D)^m}{(-\Delta)^m}
\newenvironment{pf}{\noindent{\it Proof}.\enspace}{\rule{2mm}{2mm}
}
\newcommand{(-\Delta)^{\alpha/2}}{(-\Delta)^{\alpha/2}}
\title
{\bf Positive solutions for semilinear fractional elliptic problems involving an inverse fractional operator}
\author{P.~\'Alvarez-Caudevilla, E.~Colorado and Alejandro Ortega}
\address{Departamento de Matem\'aticas, Universidad Carlos III de Madrid, Av. Universidad 30, 28911 Legan\'es (Madrid), Spain} \email{[email protected]}
\address{Departamento de Matem\'aticas, Universidad Carlos III de Madrid, Av. Universidad 30, 28911 Legan\'es (Madrid), Spain} \email{[email protected]}
\address{Departamento de Matem\'aticas, Universidad Carlos III de Madrid, Av. Universidad 30, 28911 Legan\'es (Madrid), Spain} \email{[email protected]}
\thanks{This paper has been partially supported by the Ministry of Economy and Competitiveness of Spain and FEDER, under research project MTM2016-80618-P}
\thanks{The first author was also partially supported by the Ministry of Economy and Competitiveness of Spain under research project RYC-2014-15284}
\date{\today}
\begin{abstract} This paper is devoted to the study of the existence of positive solutions for a problem related to a higher order fractional differential equation involving a nonlinear term depending on a fractional differential operator, \begin{equation*} \left\{ \begin{tabular}{lcl}
$(-\Delta)^{\alpha} u=\lambda u+ (-\Delta)^{\beta}|u|^{p-1}u$ & &in $\Omega$, \\
$\mkern+3mu(-\Delta)^{j}u=0$ & &on $\partial\Omega$, for $j\in\mathbb{Z}$, $0\leq j< [\alpha]$, \end{tabular} \right. \end{equation*} where $\Omega$ is a bounded domain in $\mathbb{R}^{N}$, $0<\beta<1$, $\beta<\alpha<\beta+1$ and $\lambda>0$. In particular, we study the fractional elliptic problem, \begin{equation*}
\left\{
\begin{array}{ll}
(-\Delta)^{\alpha-\beta} u= \lambda(-\Delta)^{-\beta}u+ |u|^{p-1}u & \hbox{in} \quad \Omega, \\
\mkern+72.2mu u=0 & \hbox{on} \quad \partial\Omega,
\end{array}
\right. \end{equation*} and we prove existence or nonexistence of positive solutions depending on the parameter $\lambda>0$, up to the critical value of the exponent $p$, i.e., for $1<p\leq 2_{\mu}^*-1$ where $\mu:=\alpha-\beta$ and $2_{\mu}^*=\frac{2N}{N-2\mu}$ is the critical exponent of the Sobolev embedding. \end{abstract} \maketitle \noindent {\it \footnotesize 2010 Mathematics Subject Classification}. {\scriptsize 35A15, 35G20, 35J61, 49J35.}\\ {\it \footnotesize Key words}. {\scriptsize Fractional Laplacian, Critical Problem, Concentration-Compactness Principle, Mountain Pass Theorem}
\section{Introduction}\label{sec:intro}
\noindent Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^N$ with $N>2\mu$ and $$\mu:=\alpha-\beta\quad \hbox{with} \quad 0<\beta<1\quad \hbox{and}\quad \beta<\alpha<\beta+1.$$ We analyze the existence of positive solutions for the following fractional elliptic problem, \begin{equation}\label{ecuacion}
\left\{
\begin{array}{ll}
(-\Delta)^{\alpha-\beta} u= \gamma(-\Delta)^{-\beta}u+ |u|^{p-1}u & \hbox{in} \quad \Omega, \\
u= 0 & \hbox{on} \quad \partial\Omega,
\end{array}
\right.
\tag{$P_\gamma$} \end{equation} depending on the real parameter $\gamma>0$. To this end, we consider, \begin{equation*} 1<p\leq2_{\mu}^*-1=\frac{N+2\mu}{N-2\mu}, \end{equation*} where $2_{\mu}^*=\frac{2N}{N-2\mu}$ is the critical exponent of the Sobolev embedding. Associated with \eqref{ecuacion} we have the following Euler--Lagrange functional: \begin{equation} \label{funcional_ecuacion}
\mathcal{F}_\gamma(u)=\frac{1}{2}\int_\Omega|(-\Delta)^{\frac{\mu}{2}} u|^2dx-\frac{\gamma}{2} \int_\Omega |(-\Delta)^{-\frac{\beta}{2}}u|^2\, dx-\frac{1}{p+1} \int_\Omega |u|^{p+1}dx, \end{equation}
so that the solutions of \eqref{ecuacion} correspond to critical points of the $C^1$ functional \eqref{funcional_ecuacion}, and vice versa.
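The fractional powers $(-\Delta)^{\frac{\mu}{2}}$ and $(-\Delta)^{-\frac{\beta}{2}}$ entering $\mathcal{F}_\gamma$ are the spectral ones. As a concrete illustration, here is a minimal numerical sketch of the spectral definition in one dimension, on the hypothetical domain $\Omega = (0,1)$ with Dirichlet eigenpairs $\varphi_j(x) = \sqrt{2}\sin(j\pi x)$, $\lambda_j = (j\pi)^2$; it is a toy example, not part of the paper's argument.

```python
import numpy as np

# Spectral fractional Laplacian on (0,1) with Dirichlet conditions:
#   (-Delta)^s u = sum_j a_j * lambda_j^s * phi_j,
# with phi_j(x) = sqrt(2) sin(j pi x), lambda_j = (j pi)^2, and
# a_j = <u, phi_j>_{L^2(0,1)}.  Negative s (e.g. s = -beta) gives the
# inverse operator (-Delta)^{-beta} on the same basis.
def spectral_fractional_laplacian(u_vals, x, s, n_modes=50):
    dx = x[1] - x[0]
    out = np.zeros_like(u_vals)
    for j in range(1, n_modes + 1):
        phi = np.sqrt(2.0) * np.sin(j * np.pi * x)
        a_j = np.sum(u_vals * phi) * dx        # L^2(0,1) coefficient
        out += a_j * (j * np.pi) ** (2.0 * s) * phi
    return out

x = np.linspace(0.0, 1.0, 4001)
u = np.sin(2.0 * np.pi * x)                    # proportional to phi_2
v = spectral_fractional_laplacian(u, x, s=0.5)
# Eigenfunction relation: (-Delta)^{1/2} u = ((2 pi)^2)^{1/2} u = 2*pi*u.
```

The computed $v$ matches $2\pi u$ up to quadrature error, confirming the eigenfunction relation $(-\Delta)^{s}\varphi_j = \lambda_j^{s}\varphi_j$ in this discretization.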
Note that $(-\Delta)^{-\beta}$ is a compact, positive, linear integral operator from $L^2(\Omega)$ into itself, well defined thanks to the Spectral Theorem. The definition of the fractional powers of the positive Laplace operator $(-\Delta)$, in a bounded domain $\Omega$ with homogeneous Dirichlet boundary data, can be carried out through the spectral decomposition using the powers of the eigenvalues of $(-\Delta)$ with the same boundary conditions. Indeed, let $(\varphi_i,\lambda_i)$ be the eigenfunctions (normalized with respect to the $L^2(\Omega)$-norm) and eigenvalues of $(-\Delta)$ under homogeneous Dirichlet boundary data. Then, $(\varphi_i,\lambda_i^{\mu})$ stand for the eigenpairs of $(-\Delta)^{\mu}$ under homogeneous Dirichlet boundary conditions as well. Thus, the fractional operator $(-\Delta)^{\mu}$ is well defined in the space of functions that vanish on the boundary, \begin{equation*}
H_0^{\mu}(\Omega)=\left\{u=\sum_{j=1}^{\infty} a_j\varphi_j\in L^2(\Omega):\ \|u\|_{H_0^{\mu}(\Omega)}=\left(\sum_{j=1}^{\infty} a_j^2\lambda_j^{\mu} \right)^{\frac{1}{2}}<\infty\right\}. \end{equation*} As a result of this definition it follows that, \begin{equation}\label{eqnorma}
\|u\|_{H_0^{\mu}(\Omega)}=\|(-\Delta)^{\frac{\mu}{2}}u\|_{L^2(\Omega)}. \end{equation} In particular, \begin{equation*} (-\Delta)^{-\beta}u=\sum_{j=1}^{\infty} a_j\lambda_j^{-\beta}\varphi_j. \end{equation*} Since the above definition allows us to integrate by parts, we say that $u\in H_0^{\mu}(\Omega)$ is an energy or weak solution for problem \eqref{ecuacion} if, \begin{equation*}
\int_{\Omega}(-\Delta)^{\frac{\mu}{2}}u(-\Delta)^{\frac{\mu}{2}}\phi dx=\gamma\int_{\Omega}(-\Delta)^{-\frac{\beta}{2}}u(-\Delta)^{-\frac{\beta}{2}}\phi dx+\int_{\Omega}|u|^{p-1}u\phi dx,\quad \forall\phi\in H_0^{\mu}(\Omega). \end{equation*} In other words, $u\in H_0^{\mu}(\Omega)$ is a critical point of the functional defined by \eqref{funcional_ecuacion}. We also observe that the functional embedding features for the equation in \eqref{ecuacion} are governed by the Sobolev embedding theorem. Let us recall the compact inclusion, \begin{equation} \label{compact_emb} H_0^{\mu}(\Omega) \hookrightarrow L^{p+1}(\Omega),\quad 2\leq p+1<2_{\mu}^*, \end{equation} which remains a continuous inclusion up to the critical exponent $p=2_{\mu}^*-1$.\newline To define non-integer higher-order powers for the Laplace operator, let us recall that the homogeneous Navier boundary conditions are defined as \begin{equation*} u=\Delta u=\Delta^2 u=\ldots=\Delta^{k-1} u=0,\quad\mbox{on }\partial\Omega. \end{equation*} Given $\alpha>1$, the $\alpha$-th power of the classical Dirichlet Laplacian in the sense of the spectral theory can be defined as the operator whose action on a smooth function $u$ satisfying the homogeneous Navier boundary conditions for $0\leq k<[\alpha]$ (where $[\cdot]$ means the integer part), is given by \begin{equation*} \langle (-\Delta)^{\alpha} u, u \rangle=\sum_{j\ge 1}
\lambda_j^{\alpha}|\langle u,\varphi_j\rangle|^2. \end{equation*} We refer to \cite{MusNa2,MusNa3} for a study of this higher-order fractional Laplace operator, referred to as the Navier fractional Laplacian, as well as useful properties of the fractional Sobolev space $H_0^{\alpha}(\Omega)$.\newline On the other hand, we have a connection between problem \eqref{ecuacion} and a fractional order elliptic system which turns out to be very useful in the sequel. In particular, taking $\textstyle{\psi:=(-\Delta)^{-\beta}u}$, problem \eqref{ecuacion} provides us with the fractional elliptic cooperative system, \begin{equation} \label{cosys}
\left\{\begin{array}{l}
(-\Delta)^{\mu}u = \gamma \psi+|u|^{p-1}u,\\
(-\Delta)^{\beta}\psi=u,
\end{array}\right.\quad \hbox{in}\quad \Omega,\quad (u,\psi)=(0,0)\quad\hbox{on}\quad \partial\Omega. \end{equation} Nevertheless, system \eqref{cosys} is not a variational system. In order to obtain a variational system from problem \eqref{ecuacion} we follow an idea similar to the one above, distinguishing whether $\alpha=2\beta$ or $\alpha\neq2\beta$. In the first case we take $\textstyle{v:=\sqrt{\gamma}\psi}$ and, recalling that $\mu:=\alpha-\beta$ (so that $\mu=\beta$ when $\alpha=2\beta$), we obtain the following fractional elliptic cooperative system, \begin{equation} \label{sistemabb} \left\{\begin{array}{l}
(-\Delta)^{\beta}u=\sqrt{\gamma}v+|u|^{p-1}u,\\ (-\Delta)^{\beta}v=\sqrt{\gamma}u, \end{array} \right. \tag{$S_{\gamma}^{\beta}$} \quad \hbox{in}\quad \Omega,\quad (u,v)=(0,0)\quad\hbox{on}\quad \partial\Omega, \end{equation} whose associated energy functional is \begin{equation*}
\mathcal{J}_{\gamma}^{\beta}(u,v)=\frac{1}{2} \int_\Omega |(-\Delta)^{\frac{\beta}{2}} u|^2dx + \frac{1}{2} \int_\Omega |(-\Delta)^{\frac{\beta}{2}} v|^2dx -\sqrt{\gamma}\int_\Omega uvdx -\frac{1}{p+1} \int_\Omega |u|^{p+1}dx. \end{equation*} In the second case, $\alpha\neq2\beta$, taking $v=\gamma^{\beta/\alpha}\psi$ we obtain the system, \begin{equation*} \left\{\begin{array}{rl}
(-\Delta)^{\mu}u=&\!\!\!\gamma^{1-\beta/\alpha}v+|u|^{p-1}u,\\ (-\Delta)^{\beta}v=&\!\!\!\gamma^{\beta/\alpha}u, \end{array} \right. \quad \hbox{in}\quad \Omega,\quad (u,v)=(0,0)\quad\hbox{on}\quad \partial\Omega. \end{equation*} Since the former system is still not variational, we transform it into the following variational system, \begin{equation} \label{sistemaab} \left\{\begin{array}{rl}
\frac{1}{\gamma^{1-\beta/\alpha}}(-\Delta)^{\mu} u =&\!\!\! v+\frac{1}{\gamma^{1-\beta/\alpha}}|u|^{p-1}u,\\ \frac{1}{\gamma^{\beta/\alpha}}(-\Delta)^{\beta}v=&\!\!\! u, \end{array} \right. \tag{$S_{\gamma}^{\alpha,\beta}$} \quad \hbox{in}\quad \Omega,\quad (u,v)=(0,0)\quad\hbox{on}\quad \partial\Omega, \end{equation} whose associated functional is \begin{align*}
\mathcal{J}_{\gamma}^{\alpha,\beta}(u,v)=&\frac{1}{2\gamma^{1-\beta/\alpha}}\int_\Omega|(-\Delta)^{\frac{\mu}{2}}u|^2dx +\frac{1}{2\gamma^{\beta/\alpha}}\int_\Omega|(-\Delta)^{\frac{\beta}{2}}v|^2dx-\int_\Omega uv dx\\
&-\frac{1}{(p+1)\gamma^{1-\beta/\alpha}} \int_\Omega |u|^{p+1}dx. \end{align*} We will use the equivalence between problem \eqref{ecuacion} and systems \eqref{sistemabb} and \eqref{sistemaab} to overcome the difficulties that arise when working with the inverse fractional Laplace operator $(-\Delta)^{-\beta}$. In particular, this approach will help us avoid deriving explicit estimates for this inverse term. On the other hand, to overcome the usual difficulties that appear when dealing with fractional Laplace operators we will use the ideas of Caffarelli and Silvestre \cite{CS}, together with those developed in \cite{BrCdPS}, giving an equivalent definition of the fractional operator $(-\Delta)^{\mu}$ in a bounded domain $\Omega$ by means of an auxiliary problem that we will introduce below. Associated with the domain $\Omega$, let us consider the cylinder $\mathcal{C}_{\Omega}=\Omega\times(0,\infty)\subset\mathbb{R}_+^{N+1}$, called the extension cylinder. Moreover, we denote by $(x,y)$ the points belonging to $\mathcal{C}_{\Omega}$ and by $\partial_L\mathcal{C}_{\Omega}=\partial\Omega\times(0,\infty)$ the lateral boundary of the extension cylinder. Thus, given a function $u\in H_{0}^{\mu}(\Omega)$, define the $\mu$-harmonic extension function $w$, denoted by $w:=E_{\mu}[u]$, as the solution to the problem, \begin{equation*}
\left\{
\begin{array}{ll}
-{\rm div}(y^{1-2\mu}\nabla w)=0 & \hbox{in} \quad \mathcal{C}_{\Omega}, \\
w=0 & \hbox{on}\quad \partial_L\mathcal{C}_{\Omega}, \\
w(x,0)=u(x) & \hbox{in} \quad \Omega\times\{y=0\}.
\end{array}
\right. \end{equation*} This extension function $w$ belongs to the space \begin{equation*}
\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})=\overline{\mathcal{C}_0^{\infty}(\Omega\times[0,\infty))}^{\|\cdot\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}},\ \text{with}\ \|w\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}^2=\kappa_{\mu}\int_{\mathcal{C}_{\Omega}}y^{1-2\mu}|\nabla w(x,y)|^2dxdy. \end{equation*} With that constant $\kappa_{\mu}$, whose precise value can be seen in \cite{BrCdPS}, the extension operator is an isometry between $H_0^{\mu}(\Omega)$ and $\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ in the sense \begin{equation}\label{isometry}
\|E_{\mu}[\varphi]\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}=\|\varphi\|_{H_0^{\mu}(\Omega)},\ \text{for all}\ \varphi\in H_0^{\mu}(\Omega). \end{equation} The relevance of the extension function $w$ is that it is related to the fractional Laplacian of the original function through the formula \begin{equation*} \frac{\partial w}{\partial \nu^{\mu}}:= -\kappa_{\mu} \lim_{y\to 0^+} y^{1-2\mu}\frac{\partial w}{\partial y}=(-\Delta)^{\mu}u(x). \end{equation*} In the case $\Omega=\mathbb{R}^N$ this formulation provides us with explicit expressions for both the fractional Laplacian and the $\mu$-extension in terms of the Riesz and the Poisson kernels respectively. Precisely, \begin{equation*} \begin{split}
& (-\Delta)^{\mu}u(x)=\ d_{N,\mu}P.V.\int_{\mathbb{R}^N}\frac{u(x)-u(y)}{|x-y|^{N+2\mu}}dy\\
& w(x,y)=\ P_y^{\mu}\ast u(x)=c_{N,\mu}y^{2\mu}\int_{\mathbb{R}^N}\frac{u(z)}{(|x-z|^2+y^2)^{\frac{N+2\mu}{2}}}dz. \end{split} \end{equation*} For exact values of the constants $c_{N,\mu}$ and $d_{N,\mu}$ we refer to \cite{BrCdPS}. Thanks to the arguments shown above, we can reformulate problem \eqref{ecuacion} in terms of the extension problem as follows, \begin{equation}\label{extension_problem}
\left\{
\begin{array}{ll}
-{\rm div}(y^{1-2\mu}\nabla w)=0 & \hbox{in}\quad \mathcal{C}_{\Omega}, \\
w=0 & \hbox{on}\quad \partial_L\mathcal{C}_{\Omega}, \\
\frac{\partial w}{\partial \nu^\mu}=\gamma (-\Delta)^{-\beta}w+|w|^{p-1}w & \hbox{in}\quad \Omega\times\{y=0\}.
\end{array}
\right.
\tag{$\tilde{P}_{\gamma}$} \end{equation} Therefore, an energy or weak solution of this problem is a function $w\in \mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ satisfying \begin{equation*}
\kappa_{\mu}\int_{\mathcal{C}_{\Omega}} y^{1-2\mu}\langle\nabla w,\nabla\varphi \rangle dxdy=\int_{\Omega} \left(\gamma (-\Delta)^{-\beta}w+|w|^{p-1}w\right)\varphi(x,0)dx,\quad \forall\varphi\in\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega}). \end{equation*} For any energy solution $w\in \mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ to problem \eqref{extension_problem}, the corresponding trace function $u=Tr[w]=w(\cdot,0)$ belongs to the space $H_0^{\mu}(\Omega)$ and is an energy solution for the problem \eqref{ecuacion} and vice versa. If $u\in H_0^{\mu}(\Omega)$ is an energy solution of \eqref{ecuacion}, then $w:=E_\mu[u]\in \mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ is an energy solution for \eqref{extension_problem} and, as a consequence, both formulations are equivalent. Finally, the energy functional associated with problem \eqref{extension_problem} is \begin{equation*}
\widetilde{\mathcal{F}}_{\gamma}(w)=\frac{\kappa_{\mu}}{2}\int_{\mathcal{C}_{\Omega}}y^{1-2\mu}|\nabla w|^2dxdy-\frac{\gamma}{2}\int_{\Omega}|(-\Delta)^{-\frac{\beta}{2}}w|^2dx-\frac{1}{p+1}\int_{\Omega}|w|^{p+1}dx. \end{equation*} Since the extension operator is an isometry, critical points of $\widetilde{\mathcal{F}}_{\gamma}$ in $\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ correspond to critical points of the functional $\mathcal{F}_{\gamma}$ in $H_0^{\mu}(\Omega)$. Indeed, arguing as in \cite[Proposition 3.1]{BCdPS}, the minima of $\widetilde{\mathcal{F}}_{\gamma}$ also correspond to the minima of the functional $\mathcal{F}_{\gamma}$.
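This correspondence can be checked directly (a short verification, using only the isometry \eqref{isometry}, the identity \eqref{eqnorma} and the fact that $E_{\mu}[u](\cdot,0)=u$): for $u\in H_0^{\mu}(\Omega)$,
\begin{align*}
\widetilde{\mathcal{F}}_{\gamma}(E_{\mu}[u])&=\frac{1}{2}\|E_{\mu}[u]\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}^2-\frac{\gamma}{2}\int_{\Omega}|(-\Delta)^{-\frac{\beta}{2}}u|^2dx-\frac{1}{p+1}\int_{\Omega}|u|^{p+1}dx\\
&=\frac{1}{2}\int_{\Omega}|(-\Delta)^{\frac{\mu}{2}}u|^2dx-\frac{\gamma}{2}\int_{\Omega}|(-\Delta)^{-\frac{\beta}{2}}u|^2dx-\frac{1}{p+1}\int_{\Omega}|u|^{p+1}dx=\mathcal{F}_{\gamma}(u).
\end{align*}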
Another useful tool to be applied throughout this work will be the following trace inequality, \begin{equation}\label{sobext}
\int_{\mathcal{C}_{\Omega}}y^{1-2\mu}|\nabla \phi(x,y)|^2dxdy\geq C\left(\int_{\Omega}|\phi(x,0)|^rdx\right)^{\frac{2}{r}},\quad\forall\phi\in \mathcal{X}_{0}^{\mu}(\mathcal{C}_{\Omega}), \end{equation} with $1\leq r\leq\frac{2N}{N-2\mu},\ N>2\mu$. Let us notice that, since the extension operator is an isometry, inequality \eqref{sobext} is equivalent to the fractional Sobolev inequality, \begin{equation}\label{sobolev}
\int_{\Omega}|(-\Delta)^{\mu/2}\varphi|^2dx\geq C\left(\int_{\Omega}|\varphi|^rdx\right)^{\frac{2}{r}},\quad\forall\varphi\in H_{0}^{\mu}(\Omega), \end{equation} with $1\leq r\leq\frac{2N}{N-2\mu}$, $N>2\mu$. \begin{remark} When $r=2_{\mu}^*$, the best constant in \eqref{sobext} will be denoted by $S(\mu,N)$. This constant is explicit and independent of the domain $\Omega$. Indeed, its exact value is given by the expression \begin{equation*} S(\mu,N)=\frac{2\pi^\mu\Gamma(1-\mu)\Gamma(\frac{N+2\mu}{2})(\Gamma(\frac{N}{2}))^{\frac{2\mu}{N}}}{\Gamma(\mu)\Gamma(\frac{N-2\mu}{2})(\Gamma(N))^\mu}, \end{equation*} and it is never achieved when $\Omega$ is a bounded domain. Thus, we have, \begin{equation*}
\int_{\mathbb{R}_{+}^{N+1}}\!\!y^{1-2\mu}|\nabla \phi(x,y)|^2dxdy\geq S(\mu,N)\left(\int_{\mathbb{R}^{N}}|\phi(x,0)|^{\frac{2N}{N-2\mu}}dx\right)^{\frac{N-2\mu}{N}}\ \forall \phi\in \mathcal{X}_0^\mu(\mathbb{R}_{+}^{N+1}). \end{equation*} If $\Omega=\mathbb{R}^N$, the constant $S(\mu,N)$ is achieved for the family of extremal functions $w_{\varepsilon}^{\mu}= E_\mu[v_{\varepsilon}^{\mu}]$ with \begin{equation}\label{u_eps}
v_{\varepsilon}^{\mu}(x)=\frac{\varepsilon^{\frac{N-2\mu}{2}}}{(\varepsilon^2+|x|^2)^{\frac{N-2\mu}{2}}}, \end{equation} for arbitrary $\varepsilon>0$; see \cite{BrCdPS} for further details. Finally, combining the previous comments, the best constant in \eqref{sobolev} with $\Omega=\mathbb{R}^N$ is then given by $\kappa_\mu S(\mu,N)$. \end{remark} Although systems \eqref{sistemabb} and \eqref{sistemaab} no longer contain an inverse term such as $(-\Delta)^{-\beta}$, they are still non-local systems, with all the complications that this entails. However, we use the extension technique shown above to reformulate the non-local systems \eqref{sistemabb} and \eqref{sistemaab} in terms of the following local systems. Taking $w:=E_{\mu}[u]$ and $z:=E_{\beta}[v]$, the extension system corresponding to \eqref{sistemabb} reads as \begin{equation}\label{extension_systembb}
\left\{
\begin{array}{ll}
-{\rm div}(y^{1-2\beta}\nabla w)= 0 & \hbox{in}\quad \mathcal{C}_{\Omega}, \\
-{\rm div}(y^{1-2\beta}\nabla z)=0 & \hbox{in}\quad \mathcal{C}_{\Omega}, \\
\displaystyle\frac{\partial w}{\partial \nu^{\beta}}= \sqrt{\gamma} z+|w|^{p-1}w & \hbox{in}\quad \Omega\times\{y=0\},\\
\displaystyle\frac{\partial z}{\partial \nu^{\beta}}= \sqrt{\gamma} w & \hbox{in}\quad \Omega\times\{y=0\},\\
w=z= 0 & \hbox{on}\quad \partial_L\mathcal{C}_{\Omega},
\end{array}
\right.
\tag{$\widetilde{S}_{\gamma}^{\beta}$} \end{equation} whose associated functional is \begin{equation*} \begin{split}
\Phi_{\gamma}^{\beta}(w,z)&=\frac{\kappa_{\beta}}{2}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}|\nabla w|^2dxdy+\frac{\kappa_{\beta}}{2}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}|\nabla z|^2dxdy-\sqrt{\gamma}\int_{\Omega}w(x,0)z(x,0)dx\\&-\frac{1}{p+1}\int_{\Omega}|w(x,0)|^{p+1}dx.
\end{split} \end{equation*} Since the extension function is an isometry, critical points for the functional $\Phi_{\gamma}^{\beta}$ in $\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$ correspond to critical points of $\mathcal{J}_{\gamma}^{\beta}$ in $H_0^{\beta}(\Omega)\times H_0^{\beta}(\Omega)$. Moreover, arguing as in \cite[Proposition 3.1]{BCdPS}, the minima of $\Phi_{\gamma}^{\beta}$ also correspond to the minima of $\mathcal{J}_{\gamma}^{\beta}$. Similarly, the extension system of system \eqref{sistemaab} reads as \begin{equation}\label{extension_systemab}
\left\{
\begin{array}{ll}
-{\rm div}(y^{1-2\mu}\nabla w)= 0 & \hbox{in}\quad \mathcal{C}_{\Omega}, \\
-{\rm div}(y^{1-2\beta}\nabla z)= 0 & \hbox{in}\quad \mathcal{C}_{\Omega}, \\
\displaystyle\frac{1}{\gamma^{1-\beta/\alpha}}\frac{\partial w}{\partial \nu^{\mu}}= z+\frac{1}{\gamma^{1-\beta/\alpha}}|w|^{p-1}w & \hbox{in}\quad \Omega\times\{y=0\},\\
\displaystyle\frac{1}{\gamma^{\beta/\alpha}}\frac{\partial z}{\partial \nu^{\beta}}=w & \hbox{in}\quad \Omega\times\{y=0\},\\
w=z=0 & \hbox{on}\quad \partial_L\mathcal{C}_{\Omega},
\end{array}
\right.
\tag{$\widetilde{S}_{\gamma}^{\alpha,\beta}$} \end{equation} whose associated functional is \begin{equation*} \begin{split}
\Phi_{\gamma}^{\alpha,\beta}(w,z)=&\frac{\kappa_{\mu}}{2\gamma^{1-\beta/\alpha}}\int_{\mathcal{C}_{\Omega}}y^{1-2\mu}|\nabla w|^2dxdy+\frac{\kappa_{\beta}}{2\gamma^{\beta/\alpha}}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}|\nabla z|^2dxdy-\int_{\Omega}w(x,0)z(x,0)dx\\&-\frac{1}{(p+1)\gamma^{1-\beta/\alpha}}\int_{\Omega}|w(x,0)|^{p+1}dx.
\end{split} \end{equation*} Once again, since the extension function is an isometry, critical points of $\Phi_{\gamma}^{\alpha,\beta}$ in $\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$ correspond to critical points of $\mathcal{J}_{\gamma}^{\alpha,\beta}$ in $H_0^{\mu}(\Omega)\times H_0^{\beta}(\Omega)$, and also, minima of $\Phi_{\gamma}^{\alpha,\beta}$ correspond to minima of $\mathcal{J}_{\gamma}^{\alpha,\beta}$.
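Returning to the system level, a short formal computation (assuming enough regularity to integrate by parts) confirms that critical points of $\mathcal{J}_{\gamma}^{\alpha,\beta}$ are precisely the weak solutions of \eqref{sistemaab}: for $(\phi,\varphi)\in H_0^{\mu}(\Omega)\times H_0^{\beta}(\Omega)$,
\begin{align*}
\partial_u\mathcal{J}_{\gamma}^{\alpha,\beta}(u,v)[\phi]&=\int_{\Omega}\left(\frac{1}{\gamma^{1-\beta/\alpha}}(-\Delta)^{\frac{\mu}{2}}u(-\Delta)^{\frac{\mu}{2}}\phi-v\phi-\frac{1}{\gamma^{1-\beta/\alpha}}|u|^{p-1}u\phi\right)dx,\\
\partial_v\mathcal{J}_{\gamma}^{\alpha,\beta}(u,v)[\varphi]&=\int_{\Omega}\left(\frac{1}{\gamma^{\beta/\alpha}}(-\Delta)^{\frac{\beta}{2}}v(-\Delta)^{\frac{\beta}{2}}\varphi-u\varphi\right)dx,
\end{align*}
so that $(\mathcal{J}_{\gamma}^{\alpha,\beta})'(u,v)=(0,0)$ is exactly the weak formulation of \eqref{sistemaab}. Note that the coupling term $-\int_{\Omega}uv\,dx$ contributes symmetrically to both equations, which is why the rescaling $v=\gamma^{\beta/\alpha}\psi$ renders the system variational.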
Before finishing this introductory section, let us observe that problem \eqref{ecuacion} can be seen as a linear perturbation of the critical problem, \begin{equation} \label{crBC}
\left\{
\begin{array}{ll}
(-\Delta)^{\mu}u=|u|^{2_{\mu}^*-2}u & \hbox{in} \quad\Omega, \\
u=0 & \hbox{on}\quad \partial\Omega,
\end{array}
\right. \end{equation} for which, after applying a Pohozaev-type result \cite[Proposition 5.5]{BrCdPS}, one can prove the non-existence of positive solutions under the star-shapeness assumption on the domain $\Omega$. Moreover, the limit case $\beta\to0$ in problem \eqref{ecuacion} corresponds to \begin{equation}\label{bezero}
\left\{
\begin{array}{ll}
(-\Delta)^{\alpha}u=\gamma u+|u|^{2_{\alpha}^*-2}u & \hbox{in}\quad\Omega, \\
u=0 & \hbox{on}\quad\partial\Omega,
\end{array}
\right.\quad\hbox{with}\quad 0<\alpha<1, \end{equation} which was studied in \cite{BCdPS}, where the existence of positive solutions is proved for $N\geq4\alpha$ if and only if $0<\gamma<\lambda_1^*$, with $\lambda_1^*$ being the first eigenvalue of the $(-\Delta)^{\alpha}$ operator under homogeneous Dirichlet boundary conditions. Note that in our situation the non-local term $\gamma(-\Delta)^{-\beta}u$ actually plays the role of $\gamma u$ in \cite{BCdPS}.
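The identification of this limit case can be seen, at least formally, from the spectral definition of the inverse operator: writing $u=\sum_{j=1}^{\infty}a_j\varphi_j$,
\begin{equation*}
(-\Delta)^{-\beta}u=\sum_{j=1}^{\infty}a_j\lambda_j^{-\beta}\varphi_j\ \longrightarrow\ \sum_{j=1}^{\infty}a_j\varphi_j=u\quad\hbox{as}\quad\beta\to0,
\end{equation*}
so the non-local term $\gamma(-\Delta)^{-\beta}u$ in \eqref{ecuacion} formally reduces to $\gamma u$, while $\mu=\alpha-\beta\to\alpha$ and hence $2_{\mu}^*\to2_{\alpha}^*$, which yields \eqref{bezero}.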
\underline{\bf Main results.}
We ascertain the existence of positive solutions for the problem \eqref{ecuacion} depending on the positive real parameter $\gamma$. To do so, we first determine the range of the parameter $\gamma$ for which positive solutions may exist. Then, we use the equivalence between \eqref{ecuacion} and the systems \eqref{sistemabb} and \eqref{sistemaab}, together with the extension technique, to prove the main results of this work. Indeed, using the well-known Mountain Pass Theorem (MPT) \cite{AR}, we will prove that there exists a positive solution for \eqref{ecuacion} for any $$0<\gamma<\lambda_1^*,$$ where $\lambda_1^*$ is the first eigenvalue of the operator $(-\Delta)^{\alpha}$ under homogeneous Dirichlet boundary conditions. If $1<p+1<2_{\mu}^*$, one may apply the MPT directly since, as we will show, our problem possesses the mountain pass geometry and, thanks to the compact embedding \eqref{compact_emb}, the Palais-Smale condition is satisfied for the functionals $\mathcal{F}_\gamma$, $\mathcal{J}_{\gamma}^{\beta}$ and $\mathcal{J}_{\gamma}^{\alpha,\beta}$ (see details below in Section \ref{Sec:ProofTh0}). However, at the critical exponent $p=2_{\mu}^*-1$, the compactness of the Sobolev embedding is lost and the problem becomes very delicate. To overcome this lack of compactness we apply a concentration-compactness argument relying on \cite[Theorem 5.1]{BCdPS}, which is an adaptation to the fractional setting of the classical result of P.-L. Lions, \cite{Lions}. We are then able to prove that, under certain conditions, the Palais-Smale condition is satisfied for the functionals $\Phi_{\gamma}^{\beta}$ and $\Phi_{\gamma}^{\alpha,\beta}$. Thus, by the arguments above, the result will also follow for the functionals $\mathcal{F}_\gamma$, $\mathcal{J}_{\gamma}^{\beta}$ and $\mathcal{J}_{\gamma}^{\alpha,\beta}$. We now state the main results of this paper.
\begin{theorem} \label{Th0} Assume $1<p<2_{\mu}^*-1$. Then, for every $\gamma\in (0,\lambda_1^*)$, where $\lambda_1^*$ is the first eigenvalue of $(-\Delta)^{\alpha}$ under homogeneous Dirichlet boundary conditions, there exists a positive solution for the problem \eqref{ecuacion}. \end{theorem}
\begin{theorem} \label{Th1} Assume $p=2_{\mu}^*-1$. Then, for every $\gamma\in (0,\lambda_1^*)$, where $\lambda_1^*$ is the first eigenvalue of $(-\Delta)^{\alpha}$ under homogeneous Dirichlet boundary conditions, there exists a positive solution for the problem \eqref{ecuacion} provided that $N>4\alpha-2\beta$. \end{theorem}
Let us observe that, even though problem \eqref{ecuacion} is a non-local, yet still linear, perturbation of the critical problem \eqref{crBC}, Theorem \ref{Th1} addresses dimensions $N>4\alpha-2\beta$, in contrast to the existence result \cite[Theorem 1.2]{BCdPS} for the linear perturbation \eqref{bezero}, which covers the range $N\geq4\alpha$. In other words, the non-local term $(-\Delta)^{-\beta}u$, despite being just a linear perturbation, has an important effect on the dimensions for which the classical Brezis--Nirenberg technique (see \cite{BN}), based on the minimizers of the Sobolev constant, still works. See details in Section \ref{Subsec:concentracion_compacidad}.
\section{Subcritical case. Proof of Theorem \ref{Th0}}\label{Sec:ProofTh0}
\noindent In this section we carry out the proof of Theorem \ref{Th0}. This is done through the equivalence between problem \eqref{ecuacion} and systems \eqref{sistemabb} and \eqref{sistemaab}. We note that the results proved in the sequel for the functionals $\mathcal{F}_\gamma$, $\mathcal{J}_{\gamma}^{\beta}$ and $\mathcal{J}_{\gamma}^{\alpha,\beta}$ translate immediately into analogous results for the functionals $\Phi_{\gamma}^{\beta}$ and $\Phi_{\gamma}^{\alpha,\beta}$. First, we characterize the existence of positive solutions for problem \eqref{ecuacion} in terms of the parameter $\gamma$. For such a characterization, the following eigenvalue problem will be considered \begin{equation}\label{eiglin1}
\left\{
\begin{array}{ll}
(-\Delta)^{\mu} u = \lambda (-\Delta)^{-\beta} u& \hbox{in} \quad\Omega, \\
u=0 & \hbox{on}\quad \partial\Omega.
\end{array}
\right. \end{equation} Then, for the first eigenfunction $\phi_1$ of \eqref{eiglin1}, associated with the first eigenvalue $\lambda_1^*$, we find
$$\int_\Omega |(-\Delta)^{\frac{\mu}{2}} \phi_1|^2dx =\lambda_1^* \int_\Omega |(-\Delta)^{-\frac{\beta}{2}} \phi_1|^2dx,$$ and, therefore, \begin{equation}\label{bieigen}
\lambda_1^*=\inf_{\substack{u\in H_0^{\mu}(\Omega)\\ u\not\equiv0}} \frac{\int_\Omega |(-\Delta)^{\frac{\mu}{2}} u|^2dx}{ \int_\Omega |(-\Delta)^{-\frac{\beta}{2}} u|^2dx}. \end{equation} On the other hand, thanks to the definition of the fractional operator $(-\Delta)^{\mu}$, we have that $\phi_1\equiv\varphi_1$, with $\varphi_1$ the first eigenfunction of the Laplace operator under homogeneous Dirichlet boundary conditions. Then, $$(-\Delta)^{\mu}\phi_1=(-\Delta)^{\mu}\varphi_1=\lambda_1^{\mu}\varphi_1 \quad\hbox{and}\quad (-\Delta)^{-\beta}\phi_1=(-\Delta)^{-\beta}\varphi_1=\lambda_1^{-\beta}\varphi_1,$$ with $\lambda_1$ the first eigenvalue of the Laplace operator under homogeneous Dirichlet boundary conditions. Hence, due to \eqref{eiglin1}, we conclude that $\lambda_1^*=\lambda_1^{\mu+\beta}=\lambda_1^{\alpha}$. Thus, $\lambda_1^*$ coincides with the first eigenvalue of the operator $(-\Delta)^{\alpha}$ under homogeneous Dirichlet or Navier boundary conditions, depending on whether $\alpha\leq1$ or $1<\alpha<\beta+1$, respectively. As a consequence, we have the following. \begin{lemma}\label{cota} Problem \eqref{ecuacion} does not possess a positive solution when $$\gamma \geq \lambda_1^*.$$ \end{lemma} \begin{proof} Assume that $u$ is a positive solution of \eqref{ecuacion} and let $\varphi_1$ be a positive first eigenfunction of the Laplace operator in $\Omega$ under homogeneous Dirichlet boundary conditions. Taking $\varphi_1$ as a test function for equation \eqref{ecuacion} we obtain \begin{align*}
\lambda_1^{\mu}\int_{\Omega}u\varphi_1dx=\int_{\Omega}\varphi_1(-\Delta)^{\mu}udx &=\gamma\int_{\Omega}\varphi_1(-\Delta)^{-\beta}udx+\int_{\Omega}|u|^{p-1}u\varphi_1dx\\ &> \gamma\int_{\Omega}\varphi_1(-\Delta)^{-\beta}udx=\gamma\int_{\Omega}u(-\Delta)^{-\beta}\varphi_1dx\\ &=\frac{\gamma}{\lambda_1^{\beta}}\int_{\Omega}u\varphi_1dx. \end{align*} Hence, $\lambda_1^{\mu}>\frac{\gamma}{\lambda_1^{\beta}}$, and we conclude that $\gamma<\lambda_1^{\mu+\beta}=\lambda_1^{\alpha}=\lambda_1^*$, proving the lemma. \end{proof} Next we check that $\mathcal{F}_\gamma$, $\mathcal{J}_{\gamma}^{\beta}$ and $\mathcal{J}_{\gamma}^{\alpha,\beta}$ satisfy the MP (mountain pass) geometry. \begin{lemma} \label{lezero} The functionals $\mathcal{F}_\gamma$, $\mathcal{J}_{\gamma}^{\beta}$ and $\mathcal{J}_{\gamma}^{\alpha,\beta}$ have the MP geometry. \end{lemma} \begin{proof}
For brevity, we prove the result only for $\mathcal{F}_\gamma$; for the remaining functionals the result follows in a similar way. Without loss of generality, we consider a function $g\in H_0^{\mu}(\Omega)$ such that $\|g\|_{p+1}=1$. Because of \eqref{bieigen}, the fractional Sobolev inequality \eqref{sobolev} and \eqref{eqnorma}, we find that, for $t>0$, \begin{align*}
\mathcal{F}_\gamma(tg)&=\frac{t^2}{2}\int_{\Omega}|(-\Delta)^{\frac{\mu}{2}}g|^2dx-\frac{\gamma t^2}{2}\int_{\Omega}|(-\Delta)^{-\frac{\beta}{2}}g|^2dx-\frac{t^{p+1}}{p+1}\\
&\geq\frac{t^2}{2}\int_{\Omega}|(-\Delta)^{\frac{\mu}{2}}g|^2dx-\frac{\gamma t^2}{2\lambda_1^{*}}\int_{\Omega}|(-\Delta)^{\frac{\mu}{2}}g|^2dx-\frac{t^{p+1}}{p+1}\\
&\geq\frac{t^2}{2}\left(1-\frac{\gamma}{\lambda_1^*}\right)\int_{\Omega}|(-\Delta)^{\frac{\mu}{2}}g|^2dx-\frac{t^{p+1}}{C(p+1)}\int_{\Omega}|(-\Delta)^{\frac{\mu}{2}}g|^2dx\\
&=\|g\|_{H_0^{\mu}(\Omega)}^2\left(\frac{1}{2}\left(1-\frac{\gamma}{\lambda_1^*}\right)t^2-\frac{1}{C(p+1)}t^{p+1}\right)>0, \end{align*} for $t>0$ sufficiently small, where $C>0$ is the constant coming from inequality \eqref{sobolev}; more precisely, for $$0<t^{p-1}<\frac{C(p+1)}{2}\left(1-\frac{\gamma}{\lambda_1^*}\right).$$ Thus, the functional $\mathcal{F}_\gamma$ has a local minimum at $u=0$, i.e., $\mathcal{F}_\gamma(tg)>\mathcal{F}_\gamma(0)=0$ for any such $g\in H_0^{\mu}(\Omega)$ provided $t>0$ is small enough. Furthermore, it is clear that \begin{align*}
\mathcal{F}_\gamma(tg)&=\frac{t^2}{2} \int_\Omega |(-\Delta)^{\frac{\mu}{2}} g|^2dx - \frac{\gamma t^2}{2} \int_\Omega |(-\Delta)^{-\frac{\beta}{2}} g|^2dx-\frac{t^{p+1}}{p+1}\\
&\leq \frac{t^2}{2}\|g\|_{H_0^{\mu}(\Omega)}^2-\frac{t^{p+1}}{p+1}. \end{align*} Then, $\mathcal{F}_\gamma(tg) \rightarrow -\infty$ as $t\to \infty$ and, thus, there exists $\hat u \in H_0^{\mu}(\Omega)$ such that $\mathcal{F}_\gamma(\hat u)<0$. Hence, the functional $\mathcal{F}_\gamma$ has the mountain pass geometry. \end{proof} Similarly we have the MP geometry for the extended functionals. \begin{lemma}\label{lezeroextension} The functionals $\Phi_{\gamma}^{\beta}$ and $\Phi_{\gamma}^{\alpha,\beta}$ have the MP geometry. \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{lezero}; we only need to note that, thanks to the isometry \eqref{isometry} and the trace inequality
\eqref{sobext}, the extension function minimizes the norm $\|\cdot\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}$ among all the functions with the same trace on $\{y=0\}$, i.e.,
$$\|E_{\mu}[\varphi(\cdot,0)]\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}\leq\|\varphi\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}\quad \hbox{for all}\quad \varphi\in\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega}).$$ Therefore, \begin{equation}\label{min_eig} \lambda_1^{\mu}=\inf_{\substack{u\in H_0^{\mu}(\Omega)\\ u\not\equiv0}}
\frac{\|u\|_{H_0^{\mu}(\Omega)}^2}{\|u\|_{L^{2}(\Omega)}^2} =\inf_{\substack{w\in \mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})\\
w\not\equiv0}}\frac{\|w\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}^2}{\|w(\cdot,0)\|_{L^{2}(\Omega)}^2}. \end{equation} Thus, following the arguments in the proof of Lemma \ref{lezero}, the result follows. \end{proof}
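For completeness, let us spell out why the two infima in \eqref{min_eig} coincide. Given $u\in H_0^{\mu}(\Omega)$ with $u\not\equiv0$, the extension $w=E_{\mu}[u]$ is admissible in the right-hand infimum and, by \eqref{isometry},
\begin{equation*}
\frac{\|E_{\mu}[u]\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}^2}{\|E_{\mu}[u](\cdot,0)\|_{L^{2}(\Omega)}^2}=\frac{\|u\|_{H_0^{\mu}(\Omega)}^2}{\|u\|_{L^{2}(\Omega)}^2},
\end{equation*}
so the infimum over $\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ is at most $\lambda_1^{\mu}$. Conversely, for any $w\in\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ with trace $u=w(\cdot,0)\not\equiv0$, the minimality of the extension gives $\|w\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}\geq\|E_{\mu}[u]\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}=\|u\|_{H_0^{\mu}(\Omega)}$, so the infimum over $\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ is at least $\lambda_1^{\mu}$.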
\begin{definition}\label{def_PS} Let $V$ be a Banach space. We say that $\{u_n\} \subset V$ is a Palais-Smale (PS) sequence for a functional $\mathfrak{F}$ if \begin{equation}\label{convergencia} \mathfrak{F}(u_n)\quad\hbox{is bounded and}\quad \mathfrak{F}'(u_n) \to 0\quad\mbox{in}\ V'\quad \hbox{as}\quad n\to \infty, \end{equation} where $V'$ is the dual space of $V$. Moreover, we say that $\{u_n\}$ satisfies the PS condition if \begin{equation}\label{conPS} \{u_n\}\quad \mbox{has a strongly convergent subsequence.} \end{equation} \end{definition} In particular, we say that the functional $\mathfrak{F}$ satisfies the PS condition at level $c$ if every PS sequence at level $c$ for $\mathfrak{F}$ satisfies the PS condition. In the subcritical range, $1\le p<2_\mu^*-1$, the PS condition is satisfied at any level $c$ due to the compact embedding \eqref{compact_emb}. However, at the critical exponent $2_{\mu}^*$ the compactness in the Sobolev embedding is lost and, as we will see, the PS condition will be satisfied only for levels below a certain critical level $c^*$. \begin{lemma}\label{acotacion_ecuacion} Let $\{u_n\}\subset H_0^\mu(\Omega)$ be a PS sequence at level $c$ for the functional $\mathcal{F}_\gamma$, i.e. $$\mathcal{F}_\gamma(u_n) \rightarrow c,\quad \mathcal{F}_\gamma'(u_n) \rightarrow 0,\quad \hbox{as}\quad n\to \infty.$$ Then, $\{u_n\}$ is bounded in $H_0^{\mu}(\Omega)$. \end{lemma} \begin{proof} Since $\mathcal{F}_\gamma'(u_n) \rightarrow 0$ in $\left(H_0^{\mu}(\Omega)\right)'$ and
$\mathcal{F}_\gamma(u_n) \to c$, we find that
$$\mathcal{F}_\gamma(u_n)-\frac{1}{p+1} \langle \mathcal{F}_\gamma'(u_n)|u_n\rangle=c+o(1)\cdot\|u_n\|_{H_0^{\mu}(\Omega)}.$$ That is, \begin{align*}
\left(\frac{1}{2}-\frac{1}{p+1}\right)\!\int_\Omega |(-\Delta)^{\frac{\mu}{2}} u_n|^2dx-\gamma\left(\frac{1}{2}-\frac{1}{p+1}\right)\!\int_\Omega
|(-\Delta)^{-\frac{\beta}{2}}u_n|^2dx =c+o(1)\cdot\|u_n\|_{H_0^{\mu}(\Omega)}. \end{align*} Therefore, by \eqref{bieigen}, since $\gamma<\lambda_1^*$, using \eqref{eqnorma} we conclude that
$$0<\left(\frac{1}{2}-\frac{1}{p+1}\right)\left(1-\frac{\gamma}{\lambda_1^*}\right)\|u_n\|_{H_0^{\mu}(\Omega)}^2\leq c+o(1)\cdot\|u_n\|_{H_0^{\mu}(\Omega)}.$$ Since the left-hand side grows quadratically in $\|u_n\|_{H_0^{\mu}(\Omega)}$ while the right-hand side grows at most linearly, the sequence $\{u_n\}$ is bounded in $H_0^{\mu}(\Omega)$. \end{proof} Following ideas similar to those in the above proof, we obtain the following two results.
\begin{lemma}\label{acotacion_sistemabb} Let $\{(u_n,v_n)\}$ be a PS sequence at level $c$ for the functional $\mathcal{J}_{\gamma}^{\beta}$, i.e. $$\mathcal{J}_{\gamma}^{\beta}(u_n,v_n) \rightarrow c,\quad \left(\mathcal{J}_{\gamma}^{\beta}\right)'(u_n,v_n) \rightarrow 0,\quad \hbox{as}\quad n\to \infty.$$ Then, $\{(u_n,v_n)\}$ is bounded in $H_0^{\beta}(\Omega)\times H_0^{\beta}(\Omega)$. \end{lemma} \begin{lemma}\label{rem} Let $\{(w_n,z_n)\}$ be a PS sequence at level $c$ for the functional $\Phi_{\gamma}^{\beta}\ ($resp. for the functional $\Phi_{\gamma}^{\alpha,\beta})$. Then, $\{(w_n,z_n)\}$ is bounded in $\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\ ($resp. in $\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega}))$. \end{lemma} Now, we are able to prove one of the main results of this paper. \begin{proof}[Proof of Theorem \ref{Th0}.]
Since we are dealing with the subcritical case $1<p<2_{\mu}^*-1$, given a PS sequence $\{u_n\}\subset H_0^{\mu}(\Omega)$ for the functional $\mathcal{F}_\gamma$, thanks to Lemma \ref{acotacion_ecuacion} and the compact inclusion \eqref{compact_emb}, the PS condition is satisfied. Moreover, by Lemma \ref{lezero}, the functional $\mathcal{F}_\gamma$ has the MP geometry. Then, due to the MPT \cite{AR} and the PS condition, the functional $\mathcal{F}_\gamma$ possesses a critical point $u\in H_0^{\mu}(\Omega)$. Moreover, if we define the set of paths between the origin and $\hat u$, $$\Gamma:=\{g\in C([0,1],H_0^{\mu}(\Omega))\,;\, g(0)=0,\; g(1)=\hat u\},$$ with $\hat u$ given as in Lemma \ref{lezero}, i.e. $\mathcal{F}_\gamma(\hat u)<0$, then, $$\mathcal{F}_\gamma(u)=\inf_{g\in\Gamma} \max_{\theta \in [0,1]} \mathcal{F}_\gamma(g(\theta))= c.$$ To show that $u>0$, let us consider the functional, \begin{equation*} \mathcal{F}_\gamma^+(u)=\mathcal{F}_\gamma(u^+), \end{equation*} where $u^+=\max\{u,0\}$. Repeating, with minor changes, the arguments carried out above, one readily shows that what was proved for the functional $\mathcal{F}_\gamma$ still holds for the functional $\mathcal{F}_\gamma^+$. Hence, it follows that $u\geq 0$ and, by the Maximum Principle (see \cite{CaSi}), $u>0$. \end{proof} \begin{remark} Once we have proved the existence of a positive solution to problem \eqref{ecuacion}, due to the equivalence between \eqref{ecuacion} and systems \eqref{sistemabb} and \eqref{sistemaab}, we also obtain the existence of a positive solution to both systems. \end{remark}
\section{Concentration-Compactness at the critical exponent}\label{Subsec:concentracion_compacidad}
In this section we focus on the critical exponent case, $p=2_{\mu}^*-1$, proving Theorem\;\ref{Th1}. Our aim is to prove the PS condition for the functional $\mathcal{F}_{\gamma}$, since the rest of the proof runs as in the subcritical case treated in the previous section.
First, by means of a concentration-compactness argument, we will prove that the PS condition is satisfied at levels below a certain critical level $c^*$ (to be determined). Next, we construct an appropriate path whose energy is below that critical level $c^*$ and, finally, we find a corresponding sequence satisfying the PS condition. Both steps are strongly based on the use of particular test functions. Hence, throughout this section we will work with the extended functionals $\Phi_{\gamma}^{\beta}$ and $\Phi_{\gamma}^{\alpha,\beta}$. Once we have completed this task, since the $\beta$-harmonic extension is an isometry, given a PS sequence $\{(w_n,z_n)\}\subset\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$ at level $c<c^*$ for the functional $\Phi_{\gamma}^{\beta}$, satisfying the PS condition, it is clear that the trace sequence $\{(u_n,v_n)\}=\{Tr[w_n],Tr[z_n]\}$ belongs to $H_0^{\beta}(\Omega)\times H_0^{\beta}(\Omega)$ and is a PS sequence at the same level $c$ for the functional $\mathcal{J}_{\gamma}^{\beta}$, satisfying the PS condition. Thus, the functional $\mathcal{J}_{\gamma}^{\beta}$ satisfies the PS condition at every level $c$ below the critical level $c^*$. In a similar way we can infer that the functional $\mathcal{J}_{\gamma}^{\alpha,\beta}$ satisfies the corresponding PS condition.
More specifically, by means of a concentration-compactness argument we first prove that the PS condition is satisfied for any level $c$ with \begin{equation}\label{levelbeta} c<\left(\frac{1}{2}-\frac{1}{2_{\beta}^*}\right)\left(\kappa_{\beta}S(\beta,N)\right)^{\frac{2_{\beta}^*}{2_{\beta}^*-2}}=\frac{\beta}{N} \left(\kappa_{\beta}S(\beta,N)\right)^{\frac{N}{2\beta}}, \tag{$c_{\beta}^*$} \end{equation} when dealing with the functional $\Phi_{\gamma}^{\beta}$, and for any level \begin{equation}\label{levelmu} c<\frac{1}{\gamma^{1-\beta/\alpha}}\left(\frac{1}{2}-\frac{1}{2_{\mu}^*}\right)\left(\kappa_{\mu}S(\mu,N)\right)^{\frac{2_{\mu}^*}{2_{\mu}^*-2}}=\frac{1}{\gamma^{1-\beta/\alpha}}\frac{\mu}{N} \left(\kappa_{\mu}S(\mu,N)\right)^{\frac{N}{2\mu}}, \tag{$c_{\mu}^*$} \end{equation} when dealing with the functional $\Phi_{\gamma}^{\alpha,\beta}$. Next, using an appropriate cut-off version of the extremal functions \eqref{u_eps} we will obtain a path below the critical levels $c_{\beta}^*$ and $c_{\mu}^*$.\newline
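The explicit values in \eqref{levelbeta} and \eqref{levelmu} follow from elementary algebra with the critical exponent: since $2_{\beta}^*=\frac{2N}{N-2\beta}$,
\begin{equation*}
\frac{1}{2}-\frac{1}{2_{\beta}^*}=\frac{1}{2}-\frac{N-2\beta}{2N}=\frac{\beta}{N}\qquad\hbox{and}\qquad\frac{2_{\beta}^*}{2_{\beta}^*-2}=\frac{\frac{2N}{N-2\beta}}{\frac{4\beta}{N-2\beta}}=\frac{N}{2\beta},
\end{equation*}
and analogously with $\mu$ in place of $\beta$ for \eqref{levelmu}.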
\subsection{PS condition under a critical level}
To accomplish the first step, let us start by recalling the following definition. \begin{definition}
We say that a sequence $\{y^{1-2\mu}|\nabla w_n|^2\}_{n\in\mathbb{N}}$ is tight if for any $\eta>0$ there exists $\rho_0>0$ such that \begin{equation*}
\int_{\{y>\rho_0\}}\int_{\Omega}y^{1-2\mu}|\nabla w_n|^2dxdy\leq\eta,\quad\forall n\in\mathbb{N}. \end{equation*} \end{definition} \noindent In particular, since we are dealing with a system, we say that the sequence
$$\{(y^{1-2\mu}|\nabla w_n|^2,y^{1-2\beta}|\nabla z_n|^2)\}_{n\in\mathbb{N}},$$ is tight if for any $\eta>0$ there exists $\rho_0>0$ such that \begin{equation*}
\int_{\{y>\rho_0\}}\int_{\Omega}y^{1-2\mu}|\nabla w_n|^2dxdy+\int_{\{y>\rho_0\}}\int_{\Omega}y^{1-2\beta}|\nabla z_n|^2dxdy\leq\eta,\quad\forall n\in\mathbb{N}. \end{equation*} Now we state the Concentration-Compactness Theorem \cite[Theorem 5.1]{BCdPS} that will be useful in the proof of the PS condition. \begin{theorem}\label{th:concentracion}
Let $\{w_n\}$ be a weakly convergent sequence to $w$ in $\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})$ such that the sequence $\{y^{1-2\mu}|\nabla w_n|^2\}_{n\in\mathbb{N}}$ is tight. Let $u_n=w_n(x,0)$ and $u=w(x,0)$. Let $\nu,\ \zeta$ be two nonnegative measures such that \begin{equation*}
y^{1-2\mu}|\nabla w_n|^2\to\zeta\quad\mbox{and}\quad|u_n|^{2_{\mu}^*}\to\nu,\quad\mbox{as}\ n\to\infty \end{equation*} in the sense of measures. Then there exists an index set $I$, at most countable, points $\{x_i\}_{i\in I}\subset\Omega$ and positive numbers $\nu_i$, $\zeta_i$, with $i\in I$, such that, \begin{itemize}
\item $\nu=|u|^{2_{\mu}^*}+\sum\limits_{i\in I}\nu_i\delta_{x_i},\ \nu_i>0,$
\item $\zeta=y^{1-2\mu}|\nabla w|^2+\sum\limits_{i\in I}\zeta_i\delta_{x_i},\ \zeta_i>0,$ \end{itemize} where $\delta_{x_{i}}$ stands for the Dirac delta centered at $x_i$, and the condition \begin{equation*} \zeta_i\geq S(\mu,N)\nu_i^{2/2_{\mu}^*} \end{equation*} holds for every $i\in I$. \end{theorem} With respect to the PS condition we have the following. \begin{lemma}\label{PScondition_extensionsistemabb} If $p=2_{\beta}^*-1$, the functional $\Phi_{\gamma}^{\beta}$ satisfies the PS condition for any level $c$ below the critical level defined by \eqref{levelbeta}. \end{lemma}
\begin{proof}\renewcommand{\qedsymbol}{} Let $\{(w_n,z_n)\}_{n\in\mathbb{N}}\subset \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$ be a PS sequence at level $c$ for the functional $\Phi_{\gamma}^{\beta}$, i.e., \begin{equation}\label{critic} \Phi_{\gamma}^{\beta}(w_n,z_n)\to c<c_{\beta}^*\quad\mbox{and}\quad \left(\Phi_{\gamma}^{\beta}\right)'(w_n,z_n)\to 0. \end{equation} From \eqref{critic} and Lemma \ref{rem} we get that the sequence $\{(w_n,z_n)\}_{n\in\mathbb{N}}$ is uniformly bounded in $\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$; that is, there exists a constant $M>0$ such that \begin{equation} \label{Mfrac}
||w_n||_{\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})}^2+||z_n||_{\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})}^2\leq M, \end{equation} and, as a consequence, we can assume that, up to a subsequence, \begin{align}\label{conver} & w_n\rightharpoonup w\quad\mbox{weakly in}\ \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega}),\nonumber\\ & w_n(x,0) \rightarrow w(x,0)\quad\mbox{strongly in}\ L^r(\Omega)\ \mbox{for}\ 1\leq r<2_{\beta}^*,\nonumber\\ & w_n(x,0) \rightarrow w(x,0)\quad\mbox{a.e. in}\ \Omega, \end{align} and \begin{align}\label{conver2} & z_n\rightharpoonup z\quad\mbox{weakly in}\ \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega}),\nonumber\\ & z_n(x,0) \rightarrow z(x,0)\quad\mbox{strongly in}\ L^r(\Omega)\ \mbox{for}\ 1\leq r<2_{\beta}^*,\nonumber\\ & z_n(x,0) \rightarrow z(x,0)\quad\mbox{a.e. in}\ \Omega. \end{align} Before applying Theorem \ref{th:concentracion}, we first need to check that the PS sequence $\{(w_n,z_n)\}_{n\in\mathbb{N}}$ is tight. To avoid any unnecessary technical details, and since the functional $\Phi_{\gamma}^{\beta}$ is obtained as a particular case (up to a multiplication by $\sqrt{\gamma}$) of the functional $\Phi_{\gamma}^{\alpha,\beta}$ when $\alpha=2\beta$, we prove the following. \end{proof} \begin{lemma} A PS sequence $\{(w_n,z_n)\}_{n\in\mathbb{N}}\subset \mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})\times \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$ at level $c$ for the functional $\Phi_{\gamma}^{\alpha,\beta}$ is tight. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma 3.6 in \cite{BCdPS}, which follows some arguments contained in \cite{AAP}, and we include it for the reader's convenience. By contradiction, suppose that there exist $\eta_0>0$ and $n_0\in\mathbb{N}$ such that for any $\rho>0$ we have, up to a subsequence, \begin{equation}\label{contradiction}
\int_{\{y>\rho\}}\int_{\Omega}y^{1-2\mu}|\nabla w_n|^2dxdy+\int_{\{y>\rho\}}\int_{\Omega}y^{1-2\beta}|\nabla z_n|^2dxdy>\eta_0,\quad\forall n\geq n_0. \end{equation} Let $\varepsilon>0$ be fixed (to be determined later), and let $\rho_0>0$ be such that
\begin{equation*}
\int_{\{y>\rho_0\}}\int_{\Omega}y^{1-2\mu}|\nabla w|^2dxdy+\int_{\{y>\rho_0\}}\int_{\Omega}y^{1-2\beta}|\nabla z|^2dxdy<\varepsilon. \end{equation*} Let $j=\left[\frac{M}{\varepsilon\kappa}\right]$ denote the integer part of $\frac{M}{\varepsilon\kappa}$, with $\kappa=\min\left\{\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}},\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}\right\}$, and let $I_k=\{y\in\mathbb{R}^+:\rho_0+k\leq y\leq \rho_0+k+1\}$, $k=0,1,\ldots,j$. Then, using \eqref{Mfrac}, \begin{align*}
\sum_{k=0}^{j}\Big(\int_{I_k}\int_{\Omega}y^{1-2\mu}|\nabla w_n|^2dxdy&+\int_{I_k}\int_{\Omega}y^{1-2\beta}|\nabla z_n|^2dxdy\Big)\\
&\leq\int_{\mathcal{C}_{\Omega}}y^{1-2\mu}|\nabla w_n|^2dxdy+\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}|\nabla z_n|^2dxdy\\ &\leq\frac{M}{\kappa}<\varepsilon(j+1). \end{align*} Hence, there exists $k_0\in\{0,1,\ldots,j\}$ such that \begin{equation}\label{menor}
\int_{I_{k_0}}\int_{\Omega}y^{1-2\mu}|\nabla w_n|^2dxdy+\int_{I_{k_0}}\int_{\Omega}y^{1-2\beta}|\nabla z_n|^2dxdy\leq\varepsilon. \end{equation} Take now a smooth cut-off function \begin{equation*} X(y)=\left\{ \begin{tabular}{lcl} $0$&&if $y\leq \rho_0+k_0$,\\ $1$&&if $y\geq \rho_0+k_0+1$, \end{tabular} \right. \end{equation*} and define $(t_n,s_n)=(X(y)w_n,X(y)z_n)$. Then \begin{align*}
&\left|\left\langle \left(\Phi_{\gamma}^{\alpha,\beta}\right)'(w_n,z_n)-\left(\Phi_{\gamma}^{\alpha,\beta}\right)'(t_n,s_n)\Big|(t_n,s_n)\right\rangle\right|\\ &=\Big|\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}}\int_{\mathcal{C}_{\Omega}}y^{1-2\mu}\langle\nabla(w_n-t_n),\nabla t_n\rangle dxdy+\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\langle\nabla(z_n-s_n),\nabla s_n\rangle dxdy\Big|\\ &=\Big|\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}}\int_{I_{k_0}}\int_{\Omega}y^{1-2\mu}\langle\nabla(w_n-t_n),\nabla t_n\rangle dxdy+\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}\int_{I_{k_0}}\int_{\Omega}y^{1-2\beta}\langle\nabla(z_n-s_n),\nabla s_n\rangle dxdy\Big|. \end{align*} Now, because of the Cauchy-Schwarz inequality, inequality \eqref{menor} and the compact inclusion\footnote{Let us recall that $\beta\in(0,1)$ and $\mu:=\alpha-\beta\in(0,1)$; thus, the weights $w_1(x,y)=y^{1-2\mu}$ and $w_2(x,y)=y^{1-2\beta}$ belong to the Muckenhoupt class $A_2$. We refer to \cite{FKS} for the precise definition as well as some useful properties of the weights belonging to the Muckenhoupt classes $A_p$.}, \small{\begin{equation*} H^1(I_{k_0}\times\Omega,y^{1-2\mu}dxdy)\times H^1(I_{k_0}\times\Omega,y^{1-2\beta}dxdy)\hookrightarrow L^2(I_{k_0}\times\Omega,y^{1-2\mu}dxdy)\times L^2(I_{k_0}\times\Omega,y^{1-2\beta}dxdy), \end{equation*}} it follows that, \begin{align*}
&\left|\left\langle \left(\Phi_{\gamma}^{\alpha,\beta}\right)'(w_n,z_n)-\left(\Phi_{\gamma}^{\alpha,\beta}\right)'(t_n,s_n)\Big|(t_n,s_n) \right\rangle\right|\\
&\leq\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}}\left(\int_{I_{k_0}}\int_{\Omega}y^{1-2\mu}|\nabla(w_n-t_n)|^2dxdy\right)^{1/2}\left(\int_{I_{k_0}}\int_{\Omega}y^{1-2\mu}|\nabla t_n|^2dxdy\right)^{1/2}\\
&+\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}\left(\int_{I_{k_0}}\int_{\Omega}y^{1-2\beta}|\nabla(z_n-s_n)|^2dxdy\right)^{1/2}\left(\int_{I_{k_0}}\int_{\Omega}y^{1-2\beta}|\nabla s_n|^2dxdy\right)^{1/2}\\ &\leq\max\left\{\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}},\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}\right\}c\varepsilon \leq C\varepsilon, \end{align*} where $C:=c\max\left\{\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}},\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}\right\}>0$. On the other hand, by \eqref{critic}, \begin{equation*}
\left|\left\langle
\left(\Phi_{\gamma}^{\alpha,\beta}\right)'(t_n,s_n)\Big|(t_n,s_n)
\right\rangle\right|\leq c_1\varepsilon+o(1), \end{equation*} with $c_1$ a positive constant. Thus, we conclude \begin{align*}
\int_{\{y>\rho_0+k_0+1\}}\int_{\Omega}y^{1-2\mu}|\nabla w_n|^2dxdy&+\int_{\{y>\rho_0+k_0+1\}}\int_{\Omega}y^{1-2\beta}|\nabla z_n|^2dxdy\\
&\leq\int_{\mathcal{C}_{\Omega}}y^{1-2\mu}|\nabla t_n|^2dxdy+\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}|\nabla s_n|^2dxdy\\ &\leq\frac{1}{\kappa}\left\langle
\left(\Phi_{\gamma}^{\alpha,\beta}\right)'(t_n,s_n)\Big|(t_n,s_n) \right\rangle \leq C \varepsilon, \end{align*} which, choosing $\varepsilon<\eta_0/C$, is in contradiction with \eqref{contradiction}. Hence, the sequence is tight. \end{proof} \begin{proof}[Continuation of the proof of Lemma \ref{PScondition_extensionsistemabb}] Once we have proved that the PS sequence $$\{(w_n,z_n)\}_{n\in\mathbb{N}}\subset \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega}),$$ is tight, we can apply Theorem \ref{th:concentracion}. Consequently, up to a subsequence, there exist an at most countable set $I$, a sequence of points $\{x_i\}_{i\in I}\subset\Omega$ and non-negative real numbers $\nu_i$, $\zeta_i$ and $\widetilde{\zeta}_i$ such that
\begin{itemize}
\item $|u_n|^{2_{\beta}^*}\to \nu=|u|^{2_{\beta}^*}+\sum\limits_{i\in I}\nu_i\delta_{x_i},$
\item $y^{1-2\beta}|\nabla w_n|^2\to\zeta=y^{1-2\beta}|\nabla w|^2+\sum\limits_{i\in I}\zeta_i\delta_{x_i},$
\item $y^{1-2\beta}|\nabla z_n|^2\to\widetilde{\zeta}=y^{1-2\beta}|\nabla z|^2+\sum\limits_{i\in I}\widetilde{\zeta}_i\delta_{x_i},$ \end{itemize} where $\delta_{x_i}$ is the Dirac delta centered at $x_i$, and \begin{equation}\label{in:concentracion} \zeta_i\geq S(\beta,N)\nu_i^{2/2_{\beta}^*}. \end{equation}
We fix $j\in I$ and let $\phi\in\mathcal{C}_0^{\infty}(\mathbb{R}_+^{N+1})$ be a non-increasing smooth cut-off function satisfying $\phi=1$ in $B_1^+(x_{j})$ and $\phi=0$ in $B_2^+(x_{j})^c$, with $B_r^+(x_j)\subset\mathbb{R}^{N}\times\{y\geq0\}$ the $(N+1)$-dimensional semi-ball of radius $r>0$ centered at $x_j$. Let now $\phi_{\varepsilon}(x,y)=\phi(x/\varepsilon,y/\varepsilon)$, so that $|\nabla\phi_{\varepsilon}|\leq\frac{C}{\varepsilon}$, and denote $\Gamma_{2\varepsilon}=B_{2\varepsilon}^+(x_{j})\cap\{y=0\}$. Since, by \eqref{critic}, \begin{equation}\label{tocero} \left(\Phi_{\gamma}^{\beta}\right)'(w_n,z_n)\to 0\quad \hbox{in the dual space}\quad \left(\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\right)', \end{equation} taking the dual product in \eqref{tocero} with $(\phi_{\varepsilon}w_n,\phi_{\varepsilon}z_n)$ we obtain \begin{equation*} \begin{split} \lim_{n\to\infty}&\left(\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\nabla w_n\nabla(\phi_{\varepsilon}w_n)dxdy+\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\nabla z_n\nabla(\phi_{\varepsilon}z_n)dxdy\right.\\
&\ \ \left.-2\sqrt{\gamma}\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}w_n(x,0)z_n(x,0)dx-\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}|w_n|^{2_{\beta}^*}(x,0)dx\right)=0. \end{split} \end{equation*} Hence, \begin{equation*} \begin{split} &\lim_{n\to\infty}\left(\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\langle\nabla w_n,\nabla\phi_{\varepsilon} \rangle w_ndxdy+\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\langle\nabla z_n,\nabla\phi_{\varepsilon} \rangle z_ndxdy\right)\\
&=\lim_{n\to\infty}\left(2\sqrt{\gamma}\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}w_n(x,0)z_n(x,0)dx+\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}|w_n|^{2_{\beta}^*}(x,0)dx\right.\\
&\ \ \ \ \ \ \ \ \ \ \ \ \left.-\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\phi_{\varepsilon}|\nabla w_n|^2dxdy-\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\phi_{\varepsilon}|\nabla z_n|^2dxdy\right). \end{split} \end{equation*} Moreover, thanks to \eqref{conver}, \eqref{conver2} and Theorem \ref{th:concentracion}, we find, \begin{equation}\label{eq:tozero} \begin{split} &\lim_{n\to\infty}\left(\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\langle\nabla w_n,\nabla\phi_{\varepsilon} \rangle w_ndxdy+\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\langle\nabla z_n,\nabla\phi_{\varepsilon} \rangle z_ndxdy\right)\\ &\ \ \ \ \ \ \ =2\sqrt{\gamma}\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}w(x,0)z(x,0)dx+\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}d\nu-\kappa_{\beta}\int_{B_{2\varepsilon}^+(x_{j})}\phi_{\varepsilon}d\zeta-\kappa_{\beta}\int_{B_{2\varepsilon}^+(x_{j})}\phi_{\varepsilon}d\widetilde{\zeta}. \end{split} \end{equation} Assume for the moment that the left hand side of \eqref{eq:tozero} vanishes as $\varepsilon\to0$. Then, it follows that, \begin{equation*} \begin{split} 0&=\lim_{\varepsilon\to0}2\sqrt{\gamma}\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}w(x,0)z(x,0)dx+\int_{\Gamma_{2\varepsilon}}\phi_{\varepsilon}d\nu-\kappa_{\beta}\int_{B_{2\varepsilon}^+(x_{j})}\phi_{\varepsilon}d\zeta-\kappa_{\beta}\int_{B_{2\varepsilon}^+(x_{j})}\phi_{\varepsilon}d\widetilde{\zeta}\\ &=\nu_{j}-\kappa_{\beta}\zeta_{j}-\kappa_{\beta}\widetilde{\zeta}_{j}, \end{split} \end{equation*} and we conclude, \begin{equation}\label{eq:compacidad} \nu_{j}=\kappa_{\beta}\left(\zeta_{j}+\widetilde{\zeta}_{j}\right). \end{equation} Finally, we have two options, either the compactness of the PS sequence or concentration around those points $x_j$. 
In other words, either $\nu_{j}=0$, so that $\zeta_{j}=\widetilde{\zeta}_{j}=0$, or, combining \eqref{eq:compacidad} with \eqref{in:concentracion}, $\nu_{j}=\kappa_{\beta}(\zeta_{j}+\widetilde{\zeta}_{j})\geq\kappa_{\beta}\zeta_{j}\geq\kappa_{\beta}S(\beta,N)\nu_{j}^{2/2_{\beta}^*}$, and hence $\nu_{j}\geq\left(\kappa_{\beta}S(\beta,N)\right)^{\frac{2_{\beta}^*}{2_{\beta}^*-2}}$. In case of concentration, we find, \begin{equation*} \begin{split}
c=&\lim_{n\to\infty}\Phi_{\gamma}^{\beta}(w_n,z_n)=\lim_{n\to\infty}\Phi_{\gamma}^{\beta}(w_n,z_n)-\frac{1}{2}\left\langle\left(\Phi_{\gamma}^{\beta}\right)'(w_n,z_n)\Big|(w_n,z_n)\right\rangle\\
\geq& \left(\frac{1}{2}-\frac{1}{2_{\beta}^*}\right)\int_{\Omega}|w(x,0)|^{2_{\beta}^*}dx+\left(\frac{1}{2}-\frac{1}{2_{\beta}^*}\right)\nu_{j}\\ \geq&\left(\frac{1}{2}-\frac{1}{2_{\beta}^*}\right)\left(\kappa_{\beta}S(\beta,N)\right)^{\frac{2_{\beta}^*}{2_{\beta}^*-2}}=c_{\beta}^*, \end{split} \end{equation*} in contradiction with the hypothesis $c<c_{\beta}^*$. It only remains to prove that the left hand side of \eqref{eq:tozero} vanishes as $\varepsilon\to0$. Due to \eqref{critic} and Lemma \ref{rem}, the PS sequence $\{(w_n,z_n)\}_{n\in\mathbb{N}}$ is bounded in $\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$, so that, up to a subsequence, \begin{equation*} \begin{split} (w_n,z_n)&\rightharpoonup(w,z)\in \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega}),\\ (w_n,z_n)&\rightarrow(w,z)\quad\mbox{a.e. in}\quad \mathcal{C}_{\Omega}. \end{split} \end{equation*} Moreover, for $r<2^*=\frac{2(N+1)}{N-1}$ we have the compact inclusion, \begin{equation*} \mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\hookrightarrow L^{r}(\mathcal{C}_{\Omega},y^{1-2\beta}dxdy)\times L^{r}(\mathcal{C}_{\Omega},y^{1-2\beta}dxdy). \end{equation*} Applying H\"older's inequality with $p=\frac{N+1}{N-1}$ and $q=\frac{N+1}{2}$, we find, \begin{equation*} \begin{split}
\int_{B_{2\varepsilon}^+(x_{j})}& y^{1-2\beta}|\nabla\phi_{\varepsilon}|^2|w_n|^2dxdy\\
\leq& \left(\int_{B_{2\varepsilon}^+(x_{j})}\!\!\!\!\!\!\!\!\! y^{1-2\beta}|\nabla\phi_{\varepsilon}|^{N+1}dxdy\right)^{\frac{2}{N+1}}\left(\int_{B_{2\varepsilon}^+(x_{j})}\!\!\!\!\!\!\!\!\! y^{1-2\beta}|w_n|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{(N+1)}}\\
\leq&\frac{C^2}{\varepsilon^2}\left(\int_{B_{2\varepsilon}(x_{j})}\int_0^{2\varepsilon} y^{1-2\beta}dxdy\right)^{\frac{2}{N+1}}\left(\int_{B_{2\varepsilon}^+(x_{j})}\!\!\!\!\!\!\!\!\! y^{1-2\beta}|w_n|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{(N+1)}}\\
\leq& c_0\varepsilon^{\frac{2(1-2\beta)}{N+1}}\left(\int_{B_{2\varepsilon}^+(x_{j})}\!\!\!\!\!\!\!\!\! y^{1-2\beta}|w_n|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{(N+1)}}\\
\leq& c_0 \varepsilon^{\frac{2(1-2\beta)}{N+1}}\varepsilon^{\frac{(2+N-2\beta)(N-1)}{(N+1)}}\left(\int_{B_{2}^+(x_{j})}y^{1-2\beta}|w_n(\varepsilon x,\varepsilon y)|^{2\frac{N+1}{N-1}}dxdy\right)^{\frac{N-1}{(N+1)}}\\ \leq& c_1 \varepsilon^{N-2\beta}, \end{split} \end{equation*} for appropriate positive constants $c_0$ and $c_1$; note that $\frac{2(1-2\beta)}{N+1}+\frac{(2+N-2\beta)(N-1)}{N+1}=N-2\beta$. In a similar way, \begin{equation*}
\int_{B_{2\varepsilon}^+(x_{j})}\!\!\!\!\!\!\!\!\! y^{1-2\beta}|\nabla\phi_{\varepsilon}|^2|z_n|^2dxdy\leq c_2 \varepsilon^{N-2\beta}. \end{equation*} Thus, we find that, \begin{equation*} \begin{split}
0\leq&\lim_{n\to\infty}\left|\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\langle\nabla w_n,\nabla\phi_{\varepsilon} \rangle w_ndxdy+\kappa_{\beta}\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}\langle\nabla z_n,\nabla\phi_{\varepsilon} \rangle z_ndxdy\right|\\
\leq&\kappa_{\beta}\lim_{n\to\infty}\left(\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}|\nabla w_n|^2dxdy\right)^{1/2}\left(\int_{B_{2\varepsilon}^+(x_{j})}y^{1-2\beta}|\nabla\phi_{\varepsilon}|^2|w_n|^2dxdy\right)^{1/2}\\
+&\kappa_{\beta}\lim_{n\to\infty}\left(\int_{\mathcal{C}_{\Omega}}y^{1-2\beta}|\nabla z_n|^2dxdy\right)^{1/2}\left(\int_{B_{2\varepsilon}^+(x_{j})}y^{1-2\beta}|\nabla\phi_{\varepsilon}|^2|z_n|^2dxdy\right)^{1/2}\\ \leq&C \varepsilon^{\frac{N-2\beta}{2}}\to0, \end{split} \end{equation*} as $\varepsilon\to 0$, and the proof of Lemma \ref{PScondition_extensionsistemabb} is complete. \end{proof} Next we show the corresponding result for the functional $\Phi_{\gamma}^{\alpha,\beta}$. \begin{lemma}\label{PScondition_extensionsistemaab} If $p=2_{\mu}^*-1$, the functional $\Phi_{\gamma}^{\alpha,\beta}$ satisfies the PS condition for any level $c$ below the critical level defined by \eqref{levelmu}. \end{lemma} The proof of this result is similar to the one of Lemma \ref{PScondition_extensionsistemabb}, so we omit the details for brevity.
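\begin{remark} Although the detailed proof of Lemma \ref{PScondition_extensionsistemaab} is omitted, let us sketch why the critical level \eqref{levelmu} carries the factor $\gamma^{-(1-\beta/\alpha)}$; the weights $\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}}$ and $\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}$ below are those appearing in the functional $\Phi_{\gamma}^{\alpha,\beta}$. Arguing as in the proof of Lemma \ref{PScondition_extensionsistemabb}, in the concentration alternative the analogue of \eqref{eq:compacidad} reads \begin{equation*} \frac{1}{\gamma^{1-\beta/\alpha}}\nu_{j}=\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}}\zeta_{j}+\frac{\kappa_{\beta}}{\gamma^{\beta/\alpha}}\widetilde{\zeta}_{j}\geq\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}}S(\mu,N)\nu_{j}^{2/2_{\mu}^*}, \end{equation*} so that $\nu_{j}\geq\left(\kappa_{\mu}S(\mu,N)\right)^{\frac{2_{\mu}^*}{2_{\mu}^*-2}}$ and, consequently, \begin{equation*} c\geq\frac{1}{\gamma^{1-\beta/\alpha}}\left(\frac{1}{2}-\frac{1}{2_{\mu}^*}\right)\nu_{j}\geq\frac{1}{\gamma^{1-\beta/\alpha}}\left(\frac{1}{2}-\frac{1}{2_{\mu}^*}\right)\left(\kappa_{\mu}S(\mu,N)\right)^{\frac{2_{\mu}^*}{2_{\mu}^*-2}}=c_{\mu}^*, \end{equation*} contradicting $c<c_{\mu}^*$. \end{remark}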
\subsection{PS sequences under a critical level}
At this point, it remains to show that we can obtain PS sequences for the functionals $\Phi_{\gamma}^{\beta}$ and $\Phi_{\gamma}^{\alpha,\beta}$ at levels below the critical levels defined by \eqref{levelbeta} and \eqref{levelmu}, respectively. To do so, we consider the extremal functions of the fractional Sobolev inequality \eqref{sobolev}, namely, given $\theta\in(0,1)$, we set \begin{equation*}
u_{\varepsilon}^{\theta}(x)=\frac{\varepsilon^{\frac{N-2\theta}{2}}}{(\varepsilon^2+|x|^2)^{\frac{N-2\theta}{2}}}, \end{equation*} and $w_{\varepsilon}^{\theta}=E_{\theta}[u_{\varepsilon}^{\theta}]$ its $\theta$-harmonic extension. Then, since $w_{\varepsilon}^{\theta}$ is a minimizer of the Sobolev inequality, it holds \begin{equation*}
S(\theta,N)=\frac{\displaystyle\int_{\mathbb{R}_+^{N+1}}y^{1-2\theta}|\nabla w_{\varepsilon}^{\theta} |^2dxdy}{\displaystyle\|u_{\varepsilon}^{\theta}\|_{L^{2_{\theta}^*}(\mathbb{R}^N)}^2}. \end{equation*} We take a non-increasing smooth cut-off function $\phi_0(t)\in\mathcal{C}_0^{\infty}(\mathbb{R}_+)$ such that $$\phi_0(t)=1\quad \hbox{if}\quad 0\leq t\leq1/2\quad \hbox{and}\quad \phi_0(t)=0\quad \hbox{if}\quad t\geq 1.$$
Assume without loss of generality that $0\in\Omega$, let $r>0$ be small enough so that $\overline{B}_r^+\subseteq\overline{\mathcal{C}}_{\Omega}$, and define the function $\phi_r(x,y)=\phi_0(\frac{r_{x,y}}{r})$, where $r_{x,y}=|(x,y)|=\left(|x|^2+y^2\right)^{1/2}$. Note that $\phi_r w_{\varepsilon}^{\theta}\in\mathcal{X}_0^{\theta}(\mathcal{C}_{\Omega})$. We recall now the following lemma proved in \cite{BCdPS}. \begin{lemma}\label{estcol} The family $\{\phi_r w_{\varepsilon}^{\theta}\}$ and its trace on $\{y=0\}$, denoted by $\{\phi_r u_{\varepsilon}^{\theta}\}$, satisfy \begin{equation*} \begin{split}
\|\phi_r w_{\varepsilon}^{\theta}\|_{\mathcal{X}_0^{\theta}(\mathcal{C}_{\Omega})}^{2}&=\|w_{\varepsilon}^{\theta}\|_{\mathcal{X}_0^{\theta}(\mathcal{C}_{\Omega})}^{2}+O(\varepsilon^{N-2\theta}),\\
\|\phi_r u_{\varepsilon}^{\theta}\|_{L^2(\Omega)}^{2}&= \left\{
\begin{tabular}{lc}
$C \varepsilon^{2\theta}+O(\varepsilon^{N-2\theta})$ & if $N>4\theta$, \\
$C \varepsilon^{2\theta}|\log(\varepsilon)|$ & if $N=4\theta$.
\end{tabular}
\right. \end{split} \end{equation*} \end{lemma} \begin{remark}
Since $\|u_{\varepsilon}^{\theta}\|_{L^{2_{\theta}^*}(\mathbb{R}^N)}\sim C$ does not depend on $\varepsilon$ it follows that \begin{equation*}
\|\phi_r u_{\varepsilon}^{\theta}\|_{L^{2_{\theta}^*}(\Omega)}=\|u_{\varepsilon}^{\theta}\|_{L^{2_{\theta}^*}(\mathbb{R}^N)}+O(\varepsilon^N)=C+O(\varepsilon^N). \end{equation*} \end{remark} \noindent Next, we define the normalized functions, \begin{equation*}
\eta_{\varepsilon}^{\theta}=\frac{\phi_r w_{\varepsilon}^{\theta}}{\|\phi_r u_{\varepsilon}^{\theta}\|_{2_{\theta}^*}}\quad \mbox{and} \quad \sigma_{\varepsilon}^{\theta}=\frac{\phi_r u_{\varepsilon}^{\theta}}{\|\phi_r u_{\varepsilon}^{\theta}\|_{2_{\theta}^*}}, \end{equation*} then, because of Lemma \ref{estcol} the following estimates hold, \begin{equation}\label{estimaciones} \begin{split}
\|\eta_{\varepsilon}^{\theta}\|_{\mathcal{X}_0^{\theta}(\mathcal{C}_{\Omega})}^{2}&=S(\theta,N)+O(\varepsilon^{N-2\theta}),\\
\|\sigma_{\varepsilon}^{\theta}\|_{L^2(\Omega)}^{2}&= \left\{
\begin{tabular}{lc}
$C \varepsilon^{2\theta}+O(\varepsilon^{N-2\theta})$ & if $N>4\theta$, \\
$C \varepsilon^{2\theta}|\log(\varepsilon)|$ & if $N=4\theta$,
\end{tabular}
\right.\\
\|\sigma_{\varepsilon}^{\theta}\|_{L^{2_{\theta}^*}(\Omega)}&=1. \end{split} \end{equation} To continue, we consider \begin{equation}\label{test} (\overline{w}_{\varepsilon}^{\beta},\overline{z}_{\varepsilon}^{\beta})=(M\eta_{\varepsilon}^{\beta},M\rho\eta_{\varepsilon}^{\beta}), \end{equation} with $\rho>0$ to be determined and $M\gg 1$ a constant such that $\Phi_{\gamma}^{\beta}(\overline{w}_{\varepsilon}^{\beta},\overline{z}_{\varepsilon}^{\beta})<0$. Then, under this construction, we define the set of paths $$\Gamma_\varepsilon:=\{g\in C([0,1],\mathcal{X}_{0}^{\beta}(\mathcal{C}_{\Omega})\times \mathcal{X}_{0}^{\beta}(\mathcal{C}_{\Omega}))\,;\, g(0)=(0,0),\ g(1)=(\overline{w}_{\varepsilon}^{\beta},\overline{z}_{\varepsilon}^{\beta})\},$$ and we consider the minimax values \begin{equation*} c_\varepsilon=\inf_{g\in\Gamma_\varepsilon} \max_{t \in [0,1]} \Phi_{\gamma}^{\beta}(g(t)). \end{equation*} Next we prove that, in fact, $c_{\varepsilon}<c_{\beta}^*$ for $\varepsilon$ small enough. \begin{lemma}\label{levelb} Assume $p=2_{\beta}^*-1$. Then, there exists $\varepsilon>0$ small enough such that, \begin{equation}\label{cotfunctional} \sup_{t\geq0}\Phi_{\gamma}^{\beta}(t\overline{w}_{\varepsilon}^{\beta},t\overline{z}_{\varepsilon}^{\beta})<c_{\beta}^*, \end{equation} provided that $N>6\beta$. \end{lemma} \begin{proof} Because of \eqref{estimaciones} with $\theta=\beta$, it follows that \begin{align*} g(t):=&\Phi_{\gamma}^{\beta}(t\overline{w}_{\varepsilon}^{\beta},t\overline{z}_{\varepsilon}^{\beta})\\
=&\frac{M^2t^2}{2}\left(\kappa_{\beta}\|\eta_{\varepsilon}^{\beta}\|_{\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})}^{2}
+\rho^2 \kappa_{\beta}\|\eta_{\varepsilon}^{\beta}\|_{\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})}^{2}
-2\sqrt{\gamma}\rho\|\sigma_{\varepsilon}^{\beta}\|_{L^2(\Omega)}^{2}\right)-\frac{M^{2_{\beta}^*}t^{2_{\beta}^*}}{2_{\beta}^*}\\ =&\frac{M^2t^2}{2}\left([\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]+\rho^2[\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]
-2\sqrt{\gamma}\rho\|\sigma_{\varepsilon}^{\beta}\|_{L^2(\Omega)}^{2}\right)\\ & -\frac{M^{2_{\beta}^*}t^{2_{\beta}^*}}{2_{\beta}^*}. \end{align*} Since $\displaystyle \lim_{t\to \infty} g(t)=-\infty$, the function $g(t)$ attains its maximum at the point \begin{equation*}
t_{\gamma,\varepsilon}:=\left(\frac{M^2\left([\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]+\rho^2[\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]-2\sqrt{\gamma}\rho\|\sigma_{\varepsilon}^{\beta}\|_{L^2(\Omega)}^{2}\right)}{M^{2_{\beta}^*}}\right)^{\frac{1}{2_{\beta}^*-2}}. \end{equation*} Moreover, at this point $t_{\gamma,\varepsilon}$, \begin{equation*} \begin{array}{rl} g(t_{\gamma,\varepsilon})=\left(\frac{1}{2}-\frac{1}{2_{\beta}^*}\right)\Big(&\!\!\!\![\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]+ \rho^2[\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]\\
&\!\!\!\!-2\sqrt{\gamma}\rho\|\sigma_{\varepsilon}^{\beta}\|_{L^2(\Omega)}^{2}\Big)^{\frac{2_{\beta}^*}{2_{\beta}^*-2}}. \end{array}\end{equation*} To finish, it is enough to show that \begin{equation}\label{ppte} g(t_{\gamma,\varepsilon})<\left(\frac{1}{2}-\frac{1}{2_{\beta}^*}\right)\left(\kappa_{\beta}S(\beta,N)\right)^{\frac{2_{\beta}^*}{2_{\beta}^*-2}}=c_{\beta}^*, \end{equation} holds true for $\varepsilon$ sufficiently small and an appropriate choice of $\rho>0$. Thus, sim\-pli\-fying \eqref{ppte}, we are left to choose $\rho>0$ such that \begin{equation*}
O(\varepsilon^{N-2\beta})+\kappa_{\beta}S(\beta,N)\rho^2+O(\varepsilon^{N-2\beta})\rho^2<2\sqrt{\gamma}\rho \|\sigma_{\varepsilon}^{\beta}\|_{L^2(\Omega)}^{2}, \end{equation*} holds true provided $\varepsilon$ is small enough. To this end, take $\rho=\varepsilon^{\delta}$ with $\delta>0$ to be determined, then, since \begin{equation*} O(\varepsilon^{N-2\beta})+\kappa_{\beta}S(\beta,N)\varepsilon^{2\delta}+O(\varepsilon^{N-2\beta+2\delta})=O(\varepsilon^{\tau}), \end{equation*} with $\tau=\min\{N-2\beta, 2\delta, N-2\beta+2\delta\}=\min\{N-2\beta, 2\delta\}$, the proof will be finished once $\delta>0$ has been chosen such that the inequality \begin{equation}\label{toprove}
O(\varepsilon^{\tau})<2\sqrt{\gamma}\rho\|\sigma_{\varepsilon}^{\beta}\|_{L^2(\Omega)}^{2}, \end{equation} holds true for $\varepsilon$ small enough. Now we use the estimates \eqref{estimaciones}. Then, if $N=4\beta$, inequality \eqref{toprove} reads \begin{equation}\label{i.1}
O(\varepsilon^{\tau})<2C\sqrt{\gamma}\varepsilon^{2\beta+\delta}|\log(\varepsilon)|. \end{equation} Since $0<\varepsilon\ll 1$, inequality \eqref{i.1} requires $\tau=\min\{2\beta,2\delta\}>2\beta+\delta$, which is impossible; thus, inequality \eqref{toprove} cannot hold for $N=4\beta$. On the other hand, if $N>4\beta$, inequality \eqref{toprove} has the form \begin{equation}\label{i.2} O(\varepsilon^{\tau})<2C\sqrt{\gamma}\varepsilon^{2\beta+\delta}. \end{equation}
Since $\varepsilon\ll 1$, inequality \eqref{i.2} holds for $\tau=\min\{N-2\beta,2\delta\}>2\beta+\delta$. Using the identity $\displaystyle\min\{a,b\}=\frac{1}{2}(a+b-|a-b|)$, we arrive at the condition \begin{equation}\label{i.3}
N-2\beta-|N-2\beta-2\delta|>4\beta. \end{equation} Finally, we have two options: \begin{enumerate} \item $N-2\beta>2\delta$ combined with \eqref{i.3} provides us with the range \begin{equation}\label{i.4} N-2\beta>2\delta>4\beta. \end{equation} Then $N>6\beta$ necessarily, so that we can choose a positive $\delta$ satisfying \eqref{i.4} and, hence, inequality \eqref{toprove} holds for $\varepsilon$ small enough. \item $N-2\beta<2\delta$ combined with \eqref{i.3} implies that $2(N-2\beta)-4\beta>2\delta$, and hence \begin{equation}\label{i.5} 2(N-2\beta)-4\beta>2\delta>N-2\beta. \end{equation} Once again $N>6\beta$ necessarily, so that we can choose a positive $\delta$ satisfying \eqref{i.5} and, hence, inequality \eqref{toprove} holds for $\varepsilon$ small enough. \end{enumerate} Thus, if $N>6\beta$, we can choose $\rho>0$ and $\varepsilon>0$ small enough such that \eqref{cotfunctional} is achieved. \end{proof}
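\begin{remark} For the reader's convenience, we isolate the elementary computation behind $t_{\gamma,\varepsilon}$ and $g(t_{\gamma,\varepsilon})$ above: for $a,b>0$ and $q>2$, the function $g(t)=\frac{a}{2}t^{2}-\frac{b}{q}t^{q}$ attains its maximum on $[0,\infty)$ at $t_{*}=\left(\frac{a}{b}\right)^{\frac{1}{q-2}}$, with \begin{equation*} g(t_{*})=\left(\frac{1}{2}-\frac{1}{q}\right)a^{\frac{q}{q-2}}\,b^{-\frac{2}{q-2}}; \end{equation*} in particular, taking $q=2_{\beta}^*$, $a=M^{2}A$ and $b=M^{2_{\beta}^*}$, the factors of $M$ cancel in $g(t_{*})$. Moreover, since the segment $t\mapsto t(\overline{w}_{\varepsilon}^{\beta},\overline{z}_{\varepsilon}^{\beta})$, $t\in[0,1]$, belongs to $\Gamma_\varepsilon$, Lemma \ref{levelb} indeed guarantees $c_{\varepsilon}<c_{\beta}^*$ for $\varepsilon$ small enough. \end{remark}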
Now we are in a position to conclude the proof of the second main result of the paper. First we focus on the particular case $\alpha=2\beta$; later on we follow a similar argument to prove the result when $\alpha\neq 2\beta$.
\begin{proof}[Proof of Theorem \ref{Th1}. Case $\alpha=2\beta$.]
\break By Lemma \ref{lezeroextension}, the functional $\Phi_{\gamma}^{\beta}$ satisfies the MP geometry, so the MPT provides a PS sequence whose energy level, by Lemma \ref{levelb}, lies below the critical one. Taking into account Lemma \ref{PScondition_extensionsistemabb}, this PS sequence satisfies the PS condition; hence we obtain a critical point $(w,z)\in\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$ of the functional $\Phi_{\gamma}^{\beta}$. The rest of the proof follows as in the subcritical case. \end{proof}
Now, we focus on the functional $\Phi_{\gamma}^{\alpha,\beta}$. For this case, we consider \begin{equation}\label{test2} (\overline{w}_{\varepsilon}^{\mu},\overline{z}_{\varepsilon}^{\beta})=(M\eta_{\varepsilon}^{\mu},M\rho\eta_{\varepsilon}^{\beta}), \end{equation} with $\rho>0$ to be determined and a constant $M\gg 1$ such that $\Phi_{\gamma}^{\alpha,\beta}(\overline{w}_{\varepsilon}^{\mu},\overline{z}_{\varepsilon}^{\beta})<0$. Let us notice that, by definition, \begin{equation*} \sigma_{\varepsilon}^{\mu}\sigma_{\varepsilon}^{\beta}
=\frac{\phi_ru_{\varepsilon}^{\mu}\phi_ru_{\varepsilon}^{\beta}}{\|\phi_ru_{\varepsilon}^{\mu}\|_{2_{\mu}^*}
\|\phi_ru_{\varepsilon}^{\beta}\|_{2_{\beta}^*}}, \end{equation*} and, since $\mu:=\alpha-\beta$, we find \begin{equation*}
u_{\varepsilon}^{\mu}u_{\varepsilon}^{\beta}=\frac{\varepsilon^{\frac{N-2\mu}{2}}}{(\varepsilon^2+|x|^2)^{\frac{N-2\mu}{2}}}
\frac{\varepsilon^{\frac{N-2\beta}{2}}}{(\varepsilon^2+|x|^2)^{\frac{N-2\beta}{2}}}=\frac{\varepsilon^{N-\alpha}}{(\varepsilon^2+|x|^2)^{N-\alpha}}
=\left(\frac{\varepsilon^{\frac{N-2(\alpha/2)}{2}}}{(\varepsilon^2+|x|^2)^{\frac{N-2(\alpha/2)}{2}}}\right)^2 \!=\!\left(u_{\varepsilon}^{\alpha/2}\right)^{2}. \end{equation*} Thus, applying \eqref{estimaciones} with $\theta=\frac{\alpha}{2}$, we conclude \begin{equation}\label{estab}
\int_{\Omega}\sigma_{\varepsilon}^{\mu}\sigma_{\varepsilon}^{\beta}dx=C\|\sigma_{\varepsilon}^{\alpha/2}\|_{L^2(\Omega)}^{2}=\left\{
\begin{tabular}{lc}
$C \varepsilon^{\alpha}+O(\varepsilon^{N-\alpha})$ & if $N>2\alpha$, \\
$C \varepsilon^{\alpha}|\log(\varepsilon)|$ & if $N=2\alpha$.
\end{tabular}
\right.\\ \end{equation} Following the steps performed for the case $\alpha=2\beta$, we define the set of paths $$\Gamma_\varepsilon:=\{g\in C([0,1],\mathcal{X}_{0}^{\mu}(\mathcal{C}_{\Omega})\times \mathcal{X}_{0}^{\beta}(\mathcal{C}_{\Omega}))\,;\, g(0)=(0,0),\; g(1)=(M\eta_{\varepsilon}^{\mu},M\rho\eta_{\varepsilon}^{\beta})\},$$ and we consider the minimax values $$c_\varepsilon=\inf_{g\in\Gamma_\varepsilon} \max_{t \in [0,1]} \Phi_{\gamma}^{\alpha,\beta}(g(t)).$$ The final step of our scheme will be completed once we have shown that $c_{\varepsilon}<c_{\mu}^*$ for $\varepsilon$ small enough. \begin{lemma}\label{levelab} Assume $p=2_{\mu}^*-1$. Then, there exists $\varepsilon>0$ small enough such that \begin{equation}\label{cotfunctionalab} \sup_{t\geq0}\Phi_{\gamma}^{\alpha,\beta}(t\overline{w}_{\varepsilon}^{\mu},t\overline{z}_{\varepsilon}^{\beta})<c_{\mu}^*, \end{equation} provided that $N>4\alpha-2\beta$. \end{lemma} The proof is similar to the one performed for Lemma \ref{levelb}, but we include it for the reader's convenience. \begin{proof} Because of \eqref{estimaciones} and \eqref{estab}, it follows that \begin{align*} g(t):=&\,\Phi_{\gamma}^{\alpha,\beta}(t\overline{w}_{\varepsilon}^{\mu},t\overline{z}_{\varepsilon}^{\beta})\\
=&\frac{M^2t^2}{2}\left(\frac{\kappa_{\mu}}{\gamma^{1-\beta/\alpha}}\|\eta_{\varepsilon}^{\mu}\|_{\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})}^{2}+\frac{\rho^2\kappa_{\beta}}{\gamma^{\beta/\alpha}}\|\eta_{\varepsilon}^{\beta}\|_{\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})}^{2}-2\rho\|\sigma_{\varepsilon}^{\alpha/2}\|_{L^2(\Omega)}^{2}\right)-\frac{M^{2_{\mu}^*}t^{2_{\mu}^*}}{2_{\mu}^*\gamma^{1-\beta/\alpha}}\\
=&\frac{M^2t^2}{2}\left(\frac{1}{\gamma^{1-\beta/\alpha}}[\kappa_{\mu}S(\mu,N)+O(\varepsilon^{N-2\mu})]+\frac{\rho^2}{\gamma^{\beta/\alpha}}[\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]-2\rho\|\sigma_{\varepsilon}^{\alpha/2}\|_{L^2(\Omega)}^{2}\right)\\ &-\frac{M^{2_{\mu}^*}t^{2_{\mu}^*}}{2_{\mu}^*\gamma^{1-\beta/\alpha}}. \end{align*} Since $\displaystyle \lim_{t\to \infty} g(t)=-\infty$, the function $g(t)$ attains its maximum at the point, $$ \begin{array}{rl} t_{\gamma,\varepsilon}\!=\Big(\frac{\gamma^{1-\beta/\alpha}}{M^{2_{\mu}^*-2}}\Big( & \!\!\!\!\frac{1}{\gamma^{1-\beta/\alpha}}[\kappa_{\mu}S(\mu,N)\!+\!O(\varepsilon^{N-2\mu})]\\
& \!\! \!\! +
\frac{\rho^2}{\gamma^{\beta/\alpha}}[\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]-2\|
\sigma_{\varepsilon}^{\alpha/2}\|_{L^2(\Omega)}^{2}\Big)\Big)^{\frac{1}{2_{\mu}^*-2}}. \end{array}$$ Moreover, at this point $t_{\gamma,\varepsilon}$, $$\begin{array}{rl} g(t_{\gamma,\varepsilon})=\left(\frac{1}{2}-\frac{1}{2_{\mu}^*}\right)\Big(\Big( \gamma^{1-\beta/\alpha}\Big)^{\frac{2}{2_{\mu}^*}} \Big(&\!\!\!\!\frac{1}{\gamma^{1-\beta/\alpha}}[\kappa_{\mu}S(\mu,N)+O(\varepsilon^{N-2\mu})]\\ & \!\!\!\! +\frac{\rho^2}{\gamma^{\beta/\alpha}}[\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]-
2\|\sigma_{\varepsilon}^{\alpha/2}\|_{L^2(\Omega)}^{2}\Big)\Big)^{\frac{2_{\mu}^*}{2_{\mu}^*-2}}. \end{array}$$ To complete the proof we must show that the inequality \begin{equation}\label{lls} g(t_{\gamma,\varepsilon})<c_{\mu}^*:=\frac{1}{\gamma^{1-\beta/\alpha}}\left(\frac{1}{2}-\frac{1}{2_{\mu}^*}\right)\left(\kappa_{\mu}S(\mu,N)\right)^{\frac{2_{\mu}^*}{2_{\mu}^*-2}}, \end{equation} holds true for $\varepsilon$ small enough. Thus, simplifying \eqref{lls}, we are left to choose $\rho>0$ such that the inequality \begin{equation*}
O(\varepsilon^{N-2\mu})+\rho^2[\kappa_{\beta}S(\beta,N)+O(\varepsilon^{N-2\beta})]<2\gamma^{\beta/\alpha}\|\sigma_{\varepsilon}^{\alpha/2}\|_{L^2(\Omega)}^{2}, \end{equation*} holds true provided $\varepsilon$ is small enough. To this end, take $\rho=\varepsilon^{\delta}$ with $\delta>0$ to be determined. Then, since \begin{equation*} O(\varepsilon^{N-2\mu})+\kappa_{\beta}S(\beta,N)\varepsilon^{2\delta}+O(\varepsilon^{N-2\beta+2\delta})=O(\varepsilon^{\tau}), \end{equation*} with $\tau=\min\{N-2\mu,2\delta,N-2\beta+2\delta\}=\min\{N-2\mu,2\delta\}$, the proof will be completed once we choose $\delta>0$ such that the inequality \begin{equation}\label{toproveab}
O(\varepsilon^{\tau})<2\gamma^{\beta/\alpha}\|\sigma_{\varepsilon}^{\alpha/2}\|_{L^2(\Omega)}^{2}, \end{equation} holds true for $\varepsilon$ small enough. We use once again the estimates \eqref{estimaciones}. If $N=2\alpha$, because of \eqref{estab}, inequality \eqref{toproveab} reads, \begin{equation}\label{h.1}
O(\varepsilon^{\tau})<2\gamma^{\beta/\alpha}\varepsilon^{\alpha+\delta}|\log(\varepsilon)|. \end{equation}
Since $\varepsilon\ll 1$, inequality \eqref{h.1} holds for $\tau=\min\{2\alpha-2\mu,2\delta\}=\min\{2\beta,2\delta\}>\alpha+\delta$. Using the identity $\displaystyle\min\{a,b\}=\frac{1}{2}(a+b-|a-b|)$, we find that $\tau>\alpha+\delta$ implies $\beta+\delta-|\beta-\delta|>\alpha+\delta$, which is impossible because $\alpha>\beta$. Therefore, \eqref{toproveab} cannot hold if $N=2\alpha$. On the other hand, if $N>2\alpha$, inequality \eqref{toproveab} has the form \begin{equation}\label{h.2} O(\varepsilon^{\tau})<2\gamma^{\beta/\alpha}\varepsilon^{\alpha+\delta}. \end{equation} Since $\varepsilon\ll 1$, inequality \eqref{h.2} holds if and only if
$\tau=\min\{N-2\mu,2\delta\}>\alpha+\delta$. Keeping in mind the identity $\displaystyle\min\{a,b\}=\frac{1}{2}(a+b-|a-b|)$, if $\tau>\alpha+\delta$ we arrive at the condition \begin{equation}\label{h.3}
N-2\mu-|N-2\mu-2\delta|>2\alpha. \end{equation} Consequently, we have two options: \begin{enumerate} \item $N-2\mu>2\delta$ combined with \eqref{h.3} provides us with the range, \begin{equation}\label{h.4} N-2\mu>2\delta>2\alpha. \end{equation} Then $N>4\alpha-2\beta$ necessarily, so that we can choose a positive $\delta$ satisfying \eqref{h.4} and, hence, inequality \eqref{toproveab} holds for $\varepsilon$ small enough. \item $N-2\mu<2\delta$ combined with \eqref{h.3} implies that $2(N-2\mu)-2\alpha>2\delta$, and hence, \begin{equation}\label{h.5} 2(N-2\mu)-2\alpha>2\delta>N-2\mu. \end{equation} Once again $N>4\alpha-2\beta$ necessarily, so that we can choose a positive $\delta$ satisfying \eqref{h.5} and, hence, inequality \eqref{toproveab} holds for $\varepsilon$ small enough. \end{enumerate}
\end{proof} To conclude, we complete the proof of Theorem \ref{Th1}, by dealing with the remaining case $\alpha\neq2\beta$. \begin{proof}[Proof of Theorem \ref{Th1}. Case $\alpha\neq2\beta$.]
\break By Lemma \ref{lezeroextension}, the functional $\Phi_{\gamma}^{\alpha, \beta}$ satisfies the MP geometry. By the MPT we obtain a PS sequence whose energy level, by Lemma \ref{levelab}, lies below the critical one. Taking into account Lemma \ref{PScondition_extensionsistemaab}, this PS sequence satisfies the PS condition, hence we obtain a critical point $(w,z)\in\mathcal{X}_0^{\mu}(\mathcal{C}_{\Omega})\times\mathcal{X}_0^{\beta}(\mathcal{C}_{\Omega})$ for the functional $\Phi_{\gamma}^{\alpha,\beta}$. The rest of the proof follows as in the subcritical case. \end{proof}
\end{document} |
\begin{document}
\title[measurable and topological dynamics] {On the interplay between measurable and topological dynamics}
\author{E. Glasner and B. Weiss}
\address{Department of Mathematics\\
Tel Aviv University\\
Tel Aviv\\
Israel} \email{[email protected]} \address {Institute of Mathematics\\
Hebrew University of Jerusalem\\ Jerusalem\\
Israel} \email{[email protected]}
\date{December 7, 2003}
\maketitle
\tableofcontents \setcounter{secnumdepth}{2}
\setcounter{section}{0}
\section*{Introduction} Recurrent - wandering, conservative - dissipative, contracting - expanding, deterministic - chaotic, isometric - mixing, periodic - turbulent, distal - proximal, the list can go on and on. These (pairs of) words --- all of which can be found in the dictionary --- convey dynamical images and were therefore adopted by mathematicians to denote one or another mathematical aspect of a dynamical system.
The two sister branches of the theory of dynamical systems called {\em ergodic theory} (or {\em measurable dynamics}) and {\em topological dynamics} use these words to describe different but parallel notions in their respective theories and the surprising fact is that many of the corresponding results are rather similar. In the following article we have tried to demonstrate both the parallelism and the discord between ergodic theory and topological dynamics. We hope that the subjects we chose to deal with will successfully demonstrate this duality.
The table of contents gives a detailed listing of the topics covered. In the first part we have detailed the strong analogies between ergodic theory and topological dynamics as shown in the treatment of recurrence phenomena, equicontinuity and weak mixing, distality and entropy. In the case of distality the topological version came first and the theory of measurable distality was strongly influenced by the topological results. For entropy theory the influence clearly was in the opposite direction. The prototypical result of the second part is the statement that any abstract probability measure preserving system can be represented as a continuous transformation of a compact space, and thus in some sense ergodic theory embeds into topological dynamics.
We have not attempted in any way to be either systematic or comprehensive. Rather our choice of subjects was motivated by taste, interest and knowledge and to a great extent is random. We did try to make the survey accessible to non-specialists, and for this reason we deal throughout with the simplest case of actions of $\mathbb{Z}$. Most of the discussion carries over to noninvertible mappings and to $\mathbb{R}$ actions. Indeed much of what we describe can be carried over to general amenable groups. Similarly, we have for the most part given rather complete definitions. Nonetheless, we did take advantage of the fact that this article is part of a handbook and for some of the definitions, basic notions and well known results we refer the reader to the earlier introductory chapters of volume I. Finally, we should acknowledge the fact that we made use of parts of our previous expositions \cite{W4} and \cite{G}.
We made the writing of this survey more pleasurable for us by the introduction of a few original results. In particular the following results are entirely or partially new. Theorem \ref{periodic} (the equivalence of the existence of a Borel cross-section with the coincidence of recurrence and periodicity), most of the material in Section 4 (on topological mild-mixing), all of subsection 7.4 (the converse side of the local variational principle) and subsection 7.6 (on topological determinism).
{\large{\part{Analogies}}}
\section{Poincar\'e recurrence vs. Birkhoff's recurrence}\label{Sec-Poin}
\subsection{Poincar\'e recurrence theorem and topological recurrence} The simplest dynamical systems are the periodic ones. In the absence of periodicity the crudest approximation to this is approximate periodicity where instead of some iterate $T^nx$ returning exactly to $x$ it returns to a neighborhood of $x$. The first theorem in abstract measure dynamics is Poincar\'{e}'s recurrence theorem which asserts that for a finite measure preserving system $(X,\mathcal{B},\mu,T)$ and any measurable set $A$, $\mu$-a.e. point of $A$ returns to $A$ (see \cite[Theorem 4.3.1]{HKat}). The proof of this basic fact is rather simple and depends on identifying the set of points $W\subset A$ that never return to $A$. These are called the {\bf wandering points\/} and their measurability follows from the formula $$ W= A \cap \left( \bigcap_{k=1}^\infty T^{-k} (X\setminus A)\right). $$ Now for $n\ge 0$, the sets $T^{-n} W$ are pairwise disjoint since $x\in T^{-n}W$ means that the forward orbit of $x$ visits $A$ for the last time at moment $n$. Since $\mu(T^{-n}W)=\mu(W)$ it follows that $\mu(W)=0$ which is the assertion of Poincar\'{e}'s theorem. Noting that $A\cap T^{-n}W$ describes the points of $A$ which visit $A$ for the last time at moment $n$, and that $\mu(\cup_{n=0}^\infty T^{-n} W)=0$ we have established the following stronger formulation of Poincar\'{e}'s theorem.
\begin{thm} For a finite measure preserving system $(X,\mathcal{B},\mu,T)$ and any measurable set $A$, $\mu$-a.e. point of $A$ returns to $A$ infinitely often. \end{thm}
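To make the statement concrete, here is a small illustrative sketch (not part of the original text; all names are ours): on a finite set with counting measure a measure preserving map is simply a permutation, and every point of $A$ indeed returns to $A$ infinitely often.

```python
def return_times(T, x0, A, horizon):
    """Times 1 <= n <= horizon at which the orbit of x0 under T visits A."""
    times, x = [], x0
    for n in range(1, horizon + 1):
        x = T(x)
        if x in A:
            times.append(n)
    return times

# A permutation of {0,...,7} preserves counting measure, so, as the
# theorem predicts, every point of A returns to A again and again.
perm = {0: 3, 3: 7, 7: 0, 1: 5, 5: 1, 2: 4, 4: 6, 6: 2}
A = {0, 1, 2}
for a in sorted(A):
    print(a, return_times(perm.get, a, A, 12))
```

In the finite setting the return times are of course periodic; the content of the theorem is that the same recurrence persists for arbitrary finite measure preserving systems.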
Note that only sets of the form $T^{-n}B$ appeared in the above discussion so that the invertibility of $T$ is not needed for this result. In the situation of classical dynamics, which was Poincar\'{e}'s main interest, $X$ is also equipped with a separable metric topology. In such a situation we can apply the theorem to a refining sequence of partitions $\mathcal{P}_m$, where each $\mathcal{P}_m$ is a countable partition into sets of diameter at most $\frac{1}{m}$. Applying the theorem to a fixed $\mathcal{P}_m$ we see that $\mu$-a.e. point comes to within $\frac{1}{m}$ of itself, and since the intersection of a sequence of sets of full measure has full measure, we deduce the corollary that $\mu$-a.e. point of $X$ is recurrent.
This is the measure theoretical path to the recurrence phenomenon which depends on the presence of a finite invariant measure. The necessity of such a measure is clear from considering translation by one on the integers. The system is dissipative, in the sense that no recurrence takes place even though there is an infinite invariant measure.
{\begin{center}{$\divideontimes$}\end{center}}
There is also a topological path to recurrence which was developed in an abstract setting by G. D. Birkhoff. Here the above example is eliminated by requiring that the topological space $X$, on which our continuous transformation $T$ acts, be compact. It is possible to show that in this setting a finite $T$-invariant measure always exists, and so we can retrieve the measure theoretical picture, but a purely topological discussion will give us better insight.
A key notion here is that of minimality. A nonempty closed, $T$-invariant set $E\subset X$, is said to be {\bf minimal\/} if $F\subset E$, closed and $T$-invariant implies $F=\emptyset$ or $F=E$. If $X$ itself is a minimal set we say that the system $(X,T)$\ is a {\bf minimal system}.
Fix now a point $x_0\in X$ and consider $$ \omega(x_0)=\bigcap_{n=1}^\infty\overline{\{T^k x_0: k\ge n\}}. $$ The points of $\omega(x_0)$ are called {\bf $\omega$-limit points of $x_0$\/}, ($\omega =$ last letter of the Greek alphabet) and in the separable case $y\in \omega(x_0)$ if and only if there is some sequence $k_i\to \infty$ such that $T^{k_i}x_0 \to y$. If $x_0\in \omega(x_0)$ then $x_0$ is called a {\bf positively recurrent point}.
Clearly $\omega(x_0)$ is a closed and $T$-invariant set. Therefore, in any nonempty minimal set $E$, any point $x_0\in E$ satisfies $x_0\in \omega(x_0)$ and thus we see that minimal sets have recurrent points.
In order to see that compact systems $(X,T)$\ have recurrent points it remains to show that minimal sets always exist. This is an immediate consequence of Zorn's lemma applied to the family of nonempty closed $T$-invariant subsets of $X$. A slightly more constructive proof can be given when $X$ is a compact and separable metric space. One can then list a sequence of open sets $U_1, U_2, \dots$ which generate the topology, and perform the following algorithm:
\begin{enumerate} \item set $X_0=X$, \item for $i=1,2, \dots$, \newline if $\bigcup_{\,n=-\infty}^{\,\infty} T^{-n}U_i\supset X_{i-1}$ put $X_i= X_{i-1}$, else put $X_i= X_{i-1}\setminus \bigcup_{\,n=-\infty}^{\,\infty} T^{-n}U_i$. \end{enumerate}
Note that each $X_i$ is nonempty and closed, and thus $X_\infty=\bigcap_{\,i=0}^{\,\infty} X_i$ is nonempty. It is clearly $T$-invariant and for any $U_i$, if $U_i\cap X_\infty \ne\emptyset$ then $\bigcup_{\,-\infty}^{\,\infty} T^{-n}(U_i\cap X_\infty) =X_\infty$, which shows that $(X_\infty, T)$ is minimal.
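As an illustrative aside (hypothetical names, not part of the original argument): for a finite system the set $\omega(x_0)$ is simply the eventual cycle of the orbit of $x_0$, and that cycle is a minimal set.

```python
def omega_limit(T, x0):
    """Finite analogue of omega(x0): iterate until a state repeats; the
    states visited from the first repetition onward form the eventual
    cycle, which is a minimal set of the finite system."""
    seen = {}                # state -> time of first visit
    x, n = x0, 0
    while x not in seen:
        seen[x] = n
        x, n = T(x), n + 1
    first_repeat = seen[x]
    return {y for y, t in seen.items() if t >= first_repeat}

# A map on {0,...,5} with two cycles, {1,2,3} and {4,5}.
T = {0: 1, 1: 2, 2: 3, 3: 1, 4: 5, 5: 4}.get
print(omega_limit(T, 0))  # the eventual cycle of 0
print(omega_limit(T, 4))
```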
\subsection{The existence of Borel cross-sections}
There is a deep connection between recurrent points in the topological context and ergodic theory. To see this we must consider quasi-invariant measures. For these matters it is better to enlarge the scope and deal with continuous actions of $\mathbb{Z}$, generated by $T$, on a {\em complete separable metric space\/} $X$. A probability measure $\mu$ defined on the Borel subsets of $X$ is said to be {\bf quasi-invariant\/} if $T\cdot \mu \sim \mu$. Define such a system $(X,\mathcal{B},\mu,T)$ to be {\bf conservative\/} if for any measurable set $A$, $\ TA\subset A$ implies $\mu(A\setminus TA)=0$.
It is not hard to see that the conclusion of Poincar\'{e}'s recurrence theorem holds for such systems; i.e. if $\mu(A)>0$, then $\mu$-a.e. $x$ returns to $A$ infinitely often. Thus once again $\mu$-a.e. point is topologically recurrent. It turns out now that the existence of a single topologically recurrent point implies the existence of a non-atomic conservative quasi-invariant measure. A simple proof of this fact can be found in \cite{KW} for the case when $X$ is compact --- but the proof given there is equally valid for complete separable metric spaces. In this sense the phenomenon of Poincar\'{e} recurrence and topological recurrence are ``equivalent" with each implying the other.
A Borel set $B\subset X$ such that each orbit intersects $B$ in exactly one point is called {\bf a Borel cross-section\/} for the system $(X,T)$\ . If a Borel cross-section exists, then no non-atomic conservative quasi-invariant measure can exist. In \cite{Wei} it is shown that the converse is also valid --- namely if there are no conservative quasi-invariant measures then there is a Borel cross-section.
Note that the periodic points of $(X,T)$\ form a Borel subset for which a cross-section always exists, so that we can conclude from the above discussion the following statement in which no explicit mention is made of measures.
\begin{thm}\label{periodic} For a system $(X,T)$\ , with $X$ a completely metrizable separable space, there exists a Borel cross-section if and only if the only recurrent points are the periodic ones. \end{thm}
\begin{rem} Already in \cite{Glimm} as well as in \cite{Eff} one finds many equivalent conditions for the existence of a Borel section for a system $(X,T)$\ . However one doesn't find there explicit mention of conditions in terms of recurrence. Silvestrov and Tomiyama \cite{ST} established the theorem in this formulation for $X$ compact (using $C^*$-algebra methods). We thank A. Lazar for drawing our attention to their paper. \end{rem}
\subsection{Recurrence sequences and Poincar\'e sequences} We will conclude this section with a discussion of recurrence sequences and Poincar\'e sequences. First for some definitions. Let us say that $D$ is a {\bf recurrence set} if for any dynamical system $(Y, T)$ with compatible metric $\rho$ and any $\epsilon >0$ there is a point $y_0$ and a $d \in D$ with $$ \rho(T^dy_0,\ y_0)< \epsilon. $$ Since any system contains minimal sets it suffices to restrict attention here to minimal systems. For minimal systems the set of such $y$'s for a fixed $\epsilon$ is a dense open set.
To see this fact, let $U$ be an open set. By the minimality there is some $N$ such that for any $y \in Y$, and some $0 \leq n \leq N$, we have $T^ny \in U$. Using the uniform continuity of $T^n$, we find now a $\delta >0$ such that if $\rho(u,\ v) < \delta$ then for all $0 \leq n \leq N$ $$ \rho(T^nu,\ T^n v)< \epsilon. $$ Now let $z_0$ be a point in $Y$ and $d_0 \in D$ such that \begin{equation}\label{d} \rho(T^{d_0}z_0,\ z_0) < \delta. \end{equation} For some $0 \leq n_0 \leq N$ we have $T^{n_0}z_0 = y_0 \in U$ and from \eqref{d} we get $\rho(T^{d_0}y_0,\ y_0) < \epsilon$. Thus the points that return to within $\epsilon$ of themselves form an open dense set. Intersecting over $\epsilon\to 0$ gives a dense $G_\delta$ in $Y$ of points $y$ for which $$ \inf_{d\in D} \ \rho(T^dy,\ y)=0. $$ Thus there are points which actually recur along times drawn from the given recurrence set.
A nice example of a recurrence set is the set of squares.
To see this it is easier to prove a stronger property which is the analogue in ergodic theory of recurrence sets.
\begin{defn} A sequence $\{s_j\}$ is said to be a {\bf Poincar\'e sequence\/} if for any finite measure preserving system $(X,\ \mathcal{B} ,\ \mu,\ T)$ and any $B \in \mathcal{B}$ with positive measure we have $$ \mu ( T ^{s_j} B \cap B ) > 0 \qquad {\text{for some $ s_j$ in the sequence.}} $$ \end{defn}
Since any minimal topological system $(Y,T)$ carries a finite invariant measure $\mu$ with global support, any Poincar\'e sequence is a recurrence sequence. Indeed, for any presumptive constant $b>0$ witnessing the non-recurrence of $\{s_j\}$ for $(Y,T)$, there would have to be an open set $B$ of diameter less than $b$ and of positive $\mu$-measure such that $T ^{s_j} B \cap B$ is empty for all $s_j$.
Here is a sufficient condition for a sequence to be a Poincar\'e sequence:
\begin{lem} If for every $\alpha \in (0,\ 2 \pi)$ $$ \lim_{n \to \infty}\ \frac {1}{n}\ \sum^{n}_{k=1}\ e^{i \alpha s_k}\ =\ 0 $$ then $\{s_k\}_1^\infty$ is a Poincar\'e sequence. \end{lem}
\begin{proof} Let $(X,\ \mathcal{B} ,\ \mu,\ T)$ be a measure preserving system and let $U$ be the unitary operator defined on $L^2(X,\ \mathcal{B},\ \mu)$ by the action of $T$, i.e. $$ (Uf)(x)= f(Tx). $$ Let $H_0$ denote the subspace of invariant functions and for a set of positive measure $B$, let $f_0$ be the projection of $1_B$ on the invariant functions. Since this can also be seen as a conditional expectation with respect to the $\sigma$-algebra of invariant sets $f_0 \geq 0$ and is not zero. Now since $\mathbf{1}_B - f_0$ is orthogonal to the space of invariant functions its spectral measure with respect to $U$ doesn't have any atoms at $\{0\}$. Thus from the spectral representation we deduce that in $L^2$-norm $$
\left| \left| \frac {1}{n}\ \sum_1^n\ U^{s_k}
(1_B-f_0)\right|\right|_{L^2} \longrightarrow 0 $$ or $$
\left| \left| \left( \frac {1}{n} \sum^n_1 \ U ^{s_k}\ 1_B
\right)-f_0 \right|\right|_{L_2} \longrightarrow 0 $$ and integrating against $1_B$ and using the fact that $f_0$ is the projection of $1_B$ we see that $$
\lim_{n \to \infty}\ \frac {1}{n}\ \sum^n_1\ \mu(B \cap T^{-s_k}B) = \|f_0\|^2 >0 $$ which clearly implies that $\{s_k\}$ is a Poincar\'e sequence. \end{proof}
The proof we have just given is in fact von Neumann's original proof for the mean ergodic theorem. He used the fact that the sequence $s_k=k$ satisfies the assumptions of the lemma, which is Weyl's famous theorem on the equidistribution of $\{n \alpha\}$. Returning to the squares, Weyl also showed that $\{n^2 \alpha\}$ is equidistributed for all irrational $\alpha$. For rational $\alpha$ the exponential sum in the lemma need not vanish; however, the recurrence along squares for the rational part of the spectrum is easily verified directly, so that we can conclude that indeed the squares are a Poincar\'e sequence and hence a recurrence sequence.
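As a numerical sanity check of the lemma's hypothesis for the squares (an illustration only; the function name is ours), one can estimate the averages $\frac{1}{n}\sum_{k=1}^{n} e^{i \alpha k^2}$ for an irrational multiple $\alpha$ of $2\pi$ and watch them decay:

```python
import cmath
import math

def weyl_average(alpha, n, power=2):
    """(1/n) * |sum_{k=1}^{n} exp(i * alpha * k**power)|."""
    total = sum(cmath.exp(1j * alpha * k ** power) for k in range(1, n + 1))
    return abs(total) / n

alpha = 2 * math.pi * math.sqrt(2)   # alpha/(2*pi) irrational
for n in (100, 1000, 10000):
    # the averages shrink as n grows, consistent with Weyl's theorem
    print(n, weyl_average(alpha, n))
```

For $\alpha=0$ (the degenerate rational case) the average is identically $1$, which is why the rational part of the spectrum has to be handled separately.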
The converse is not always true, i.e. there are recurrence sequences that are not Poincar\'e sequences. This was first shown by I. Kriz \cite{Kr} in a beautiful example (see also \cite[Chapter 5]{W4}). Finally here is a simple problem.
{\bf Problem:}\ If $D$ is a recurrence sequence for all circle rotations, is it a recurrence set?
A little bit of evidence for a positive answer to that problem comes from looking at a slightly different characterization of recurrence sets. Let $\mathcal{N} $ denote the collection of sets of the form $$ N(U,\ U)=\{n:\ T^{-n}U \cap U \neq \emptyset\},\qquad (U\ {\text{open and nonempty}}), $$ where $T$ is a minimal transformation. Denote by $\mathcal{N}^*$ the subsets of $\mathbb {N}$ that have a non-empty intersection with every element of $\mathcal{N} $. Then $\mathcal{N}^*$ is exactly the class of recurrence sets. For minimal transformations, another description of $N(U,\ U)$ is obtained by fixing some $y_0$ and denoting $$ N(y_0,\ U)=\{n:\ T^ny_0 \in U\} $$ Then $N(U,\ U)=N(y_0,\ U)-N(y_0,\ U)$. Notice that the minimality of $T$ implies that $N(y_0,\ U)$ is a {\bf syndetic} set (a set with bounded gaps) and so any $N(U,\ U)$ is the set of differences of a syndetic set. Thus $\mathcal{N}$ consists essentially of all sets of the form $S\ -\ S$ where $S$ is a syndetic set.
Given a finite set of real numbers $\{\lambda_1,\lambda_2,\dots,\lambda_k\}$ and $\epsilon>0$ set $$ V(\lambda_1,\lambda_2,\dots,\lambda_k;\epsilon)=
\{n\in \mathbb{Z}: \max_{j} \|n\lambda_j \|<\epsilon\}, $$
where $\|\cdot\|$ denotes the distance to the closest integer. The collection of such sets forms a basis of neighborhoods at zero for a topology on $\mathbb{Z}$ which makes it a topological group. This topology is called the {\bf Bohr topology}. (The corresponding uniform structure is totally bounded and the completion of $\mathbb{Z}$ with respect to it is a compact topological group called the {\bf Bohr compactification} of $\mathbb{Z}$.)
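For illustration (a hypothetical helper, not from the text), one can list an initial segment of the Bohr neighborhood $V(\sqrt{2};0.1)$ and observe its bounded gaps:

```python
import math

def bohr_neighborhood(lambdas, eps, limit):
    """Positive n <= limit with max_j ||n * lambda_j|| < eps, where ||x||
    is the distance from x to the nearest integer."""
    dist = lambda x: abs(x - round(x))
    return [n for n in range(1, limit + 1)
            if max(dist(n * lam) for lam in lambdas) < eps]

# An initial segment of V(sqrt(2); 0.1); the gaps stay bounded (syndeticity).
print(bohr_neighborhood([math.sqrt(2)], 0.1, 60))
```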
Veech proved in \cite{Veech} that any set of the form $S\ -\ S$ with $S\subset \mathbb{Z}$ syndetic contains a neighborhood of zero in the Bohr topology {\em up to a set of zero density}. It is not known if in that statement the zero density set can be omitted. If it could then a positive answer to the above problem would follow (see also \cite{G26}).
\section{The equivalence of weak mixing and continuous spectrum}\label{Sec-WM}
In order to analyze the structure of a dynamical system $\mathbf{X}$ there are, a priori, two possible approaches. In the first approach one considers the collection of {\bf subsystems} $Y\subset X$ (i.e. closed $T$-invariant subsets) and tries to understand how $X$ is built up by these subsystems. In the other approach one is interested in the collection of {\bf factors} $X\overset{\pi}{\to}Y$ of the system $\mathbf{X}$. In the measure theoretical case the first approach leads to the ergodic decomposition and thereby to the study of the ``indecomposable" or ergodic components of the system. In the topological setup there is, unfortunately, no such convenient decomposition describing the system in terms of its indecomposable parts and one has to use some less satisfactory substitutes. Natural candidates for indecomposable components of a topological dynamical system are the ``orbit closures" (i.e. the topologically transitive subsystems) or the ``prolongation" cells (which often coincide with the orbit closures), see \cite{AG0}. The minimal subsystems are of particular importance here. Although we can not say, in any reasonable sense, that the study of the general system can be reduced to that of its minimal components, the analysis of the minimal systems is nevertheless an important step towards a better understanding of the general system.
This reasoning leads us to the study of the collection of indecomposable systems (ergodic systems in the measure category and transitive or minimal systems in the topological case) and their factors. The simplest and best understood indecomposable dynamical systems are the ergodic translations of a compact monothetic group (a cyclic permutation on $\mathbb{Z}_p$ for a prime number $p$, the ``adding machine" on $\prod_{n=0}^\infty \mathbb{Z}_2$, an irrational rotation $z\mapsto e^{2\pi i\alpha}z$ on
$S^1=\{z\in \mathbb{C}:|z|=1\}$ etc.). It is not hard to show that this class of ergodic actions is characterized as those dynamical systems which admit a model $(X,\mathcal{X},\mu,T)$ where $X$ is a compact metric space, $T:X\to X$ a surjective isometry and $\mu$ is $T$-ergodic. We call these systems {\bf Kronecker\/} or {\bf isometric\/} systems. Thus our first question concerning the existence of factors should be: given an ergodic dynamical system $\mathbf{X}$ which are its Kronecker factors? Recall that a measure dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is called {\bf weakly mixing\/} if the product system $(X\times X,\mathcal{X}\otimes\mathcal{X},\mu\times \mu,T\times T)$ is ergodic. The following classical theorem is due to von Neumann. The short and elegant proof we give was suggested by Y. Katznelson.
\begin{thm}\label{easy} An ergodic system $\mathbf{X}$ is weakly mixing iff it admits no nontrivial Kronecker factor. \end{thm} \begin{proof} Suppose $\mathbf{X}$ is weakly mixing and admits an isometric factor. Now a factor of a weakly mixing system is also weakly mixing and the only system which is both isometric and weakly mixing is the trivial system (an easy exercise). Thus a weakly mixing system does not admit a nontrivial Kronecker factor.
For the other direction, if $\mathbf{X}$ is non-weakly mixing then in the product space $X\times X$ there exists a $T$-invariant measurable subset $W$ such that $0<(\mu\times \mu)(W)<1$. For every $x\in X$ let $W(x)=\{x'\in X:(x,x')\in W\}$ and let $f_x={\mathbf{1}}_{W(x)}$, a function in $L^\infty (\mu)$. It is easy to check that $U_T f_x=f_{T^{-1}x}$ so that the map $\pi:X\to L^2(\mu)$ defined by $\pi(x)=f_x, x\in X$ is a Borel factor map. Denoting $$ \pi(X)=Y\subset L^2(\mu), \quad {\text{and}} \quad \nu=\pi_*(\mu), $$
we now have a factor map $\pi: \mathbf{X} \to (Y,\nu)$. Now the function $\|\pi(x)\|$ is clearly measurable and invariant and by ergodicity it is a constant
$\mu$-a.e.; say $\|\pi(x)\|=1$. The dynamical system $(Y,\nu)$ is thus a subsystem of the compact dynamical system $(B,U_T)$, where $B$ is the unit ball of the Hilbert space $L^2(\mu)$ and $U_T$ is the Koopman unitary operator induced by $T$ on $L^2(\mu)$. Now it is well known (see e.g. \cite{G}) that a compact topologically transitive subsystem which carries an invariant probability measure must be a Kronecker system and our proof is complete. \end{proof}
Concerning the terminology we used in the proof of Theorem \ref{easy}, B. O. Koopman, a student of G. D. Birkhoff and a co-author of both Birkhoff and von Neumann introduced the crucial idea of associating with a measure dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ the unitary operator $U_T$ on the Hilbert space $L^2(\mu)$. It is now an easy matter to see that Theorem \ref{easy} can be re-formulated as saying that the system $\mathbf{X}$ is weakly mixing iff the point spectrum of the Koopman operator $U_T$ comprises the single complex number $1$ with multiplicity $1$. Or, put otherwise, that the one dimensional space of constant functions is the eigenspace corresponding to the eigenvalue $1$ (this fact alone is equivalent to the ergodicity of the dynamical system) and that the restriction of $U_T$ to the orthogonal complement of the space of constant functions has a continuous spectrum.
{\begin{center}{$\divideontimes$}\end{center}}
We now consider a topological analogue of this theorem. Recall that a topological system $(X,T)$\ is {\bf topologically weakly mixing\/} when the product system $(X\times X,T\times T)$ is topologically transitive. It is {\bf equicontinuous\/} when the family $\{T^n:n\in \mathbb{Z}\}$ is an equicontinuous family of maps. Again an equivalent condition is the existence of a compatible metric with respect to which $T$ is an isometry. And, moreover, a minimal system is equicontinuous iff it is a minimal translation on a compact monothetic group. We will need the following lemma.
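The contrast between an isometric and a non-equicontinuous map can be seen numerically. In this illustrative sketch (names ours; the doubling map is our own choice of a non-equicontinuous example, not one used in the text) an irrational rotation keeps nearby orbits close forever, while the doubling map separates them:

```python
import math

def max_separation(T, x, y, steps):
    """Largest circle distance between the orbits of x and y over `steps` iterates."""
    d = lambda u, v: min(abs(u - v), 1.0 - abs(u - v))
    worst = d(x, y)
    for _ in range(steps):
        x, y = T(x), T(y)
        worst = max(worst, d(x, y))
    return worst

rotation = lambda x: (x + math.sqrt(2)) % 1.0   # an isometry of the circle
doubling = lambda x: (2.0 * x) % 1.0            # not equicontinuous

print(max_separation(rotation, 0.2, 0.2 + 1e-6, 50))  # stays of order 1e-6
print(max_separation(doubling, 0.2, 0.2 + 1e-6, 50))  # becomes macroscopic
```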
\begin{lem}\label{cont} Let $(X,T)$\ be a minimal system and $f:X\to \mathbb{R}$ a $T$-invariant function with at least one point of continuity (for example this is the case when $f$ is lower or upper semi-continuous or more generally when it is the pointwise limit of a sequence of continuous functions), then $f$ is a constant. \end{lem}
\begin{proof} Let $x_0$ be a continuity point and $x$ an arbitrary point in $X$. Since $\{T^n x:n \in \mathbb{Z}\}$ is dense and as the value $f(T^n x)$ does not depend on $n$ it follows that $f(x)=f(x_0)$. \end{proof}
\begin{thm}\label{eswdisp} Let $(X,T)$\ be a minimal system then $(X,T)$\ is topologically weakly mixing iff it has no non-trivial equicontinuous factor. \end{thm} \begin{proof} Suppose $(X,T)$\ is minimal and topologically weakly mixing and let $\pi:(X,T)\to(Y,T)$ be an equicontinuous factor. If $(x,x')$ is a point whose $T\times T$ orbit is dense in $X\times X$ then $(y,y')=(\pi(x),\pi(x'))$ has a dense orbit in $Y\times Y$. However, if $(Y,T)$\ is equicontinuous then $Y$ admits a compatible metric with respect to which $T$ is an isometry and the existence of a transitive point in $Y\times Y$ implies that $Y$ is a trivial one point space.
Conversely, assuming that $(X\times X,T\times T)$ is not transitive we will construct an equicontinuous factor $(Z,T)$ of $(X,T)$. As $(X,T)$ is a minimal system, there exists a $T$-invariant probability measure $\mu$ on $X$ with full support. By assumption there exists an open $T$-invariant subset $U$ of $X\times X$, such that ${\rm{cls\,}} U:= M \subsetneq X\times X$. By minimality the projections of $M$ to both $X$ coordinates are onto. For every $y\in X$ let $M(y)=\{x\in X:(x,y)\in M\}$, and let $f_y=\mathbf{1}_{M(y)}$ be the indicator function of the set $M(y)$, considered as an element of $L^1(X,\mu)$.
Denote by $\pi:X\to L^1(X,\mu)$ the map $y\mapsto f_y$. We will show that $\pi$ is a continuous homomorphism, where we consider $L^1(X,\mu)$ as a dynamical system with the isometric action of the group $\{U^n_T:n\in \mathbb{Z}\}$ and $U_Tf(x)=f(Tx)$. Fix $y_0\in X$ and $\epsilon>0$. There exists an open neighborhood $V$ of the closed set $M(y_0)$ with $\mu(V\setminus M(y_0))<\epsilon$. Since $M$ is closed the set map $y\mapsto M(y), X\to 2^X$ is upper semi-continuous and we can find a neighborhood $W$ of $y_0$ such that $M(y)\subset V$ for every $y\in W$. Thus for every $y\in W$ we have $\mu(M(y)\setminus M(y_0))<\epsilon$. In particular, $\mu(M(y))\le\mu(M(y_0))+\epsilon$ and it follows that the map $y\mapsto \mu(M(y))$ is upper semi-continuous. A simple computation shows that it is $T$-invariant, hence, by Lemma \ref{cont}, a constant.
With $y_0,\epsilon$ and $V, W$ as above, for every $y\in W$, $\mu(M(y)\setminus M(y_0))<\epsilon$ and $\mu(M(y))=\mu(M(y_0))$, thus
$\mu(M(y)\Delta M(y_0))<2\epsilon$, i.e., $\| f_y-f_{y_0} \|_1<2\epsilon$. This proves the claim that $\pi$ is continuous.
Let $Z=\pi(X)$ be the image of $X$ in $L^1(\mu)$. Since $\pi$ is continuous, $Z$ is compact. It is easy to see that the $T$-invariance of $M$ implies that for every $n\in \mathbb{Z}$ and $y\in X$,\ $f_{T^{-n}y}=f_y\circ T^n$ so that $Z$ is $U_T$-invariant and $\pi:(X,T)\to (Z,U_T)$ is a homomorphism. Clearly $(Z,U_T)$ is minimal and equicontinuous (in fact isometric). \end{proof}
Theorem \ref{eswdisp} is due to Keynes and Robertson \cite{KRo} who developed an idea of Furstenberg, \cite{Fur2}; and independently to K. Petersen \cite{Pe1} who utilized a previous work of W. A. Veech, \cite{Veech}. The proof we presented is an elaboration of a work of McMahon \cite{McM2} due to Blanchard, Host and Maass, \cite{BHM}. We take this opportunity to point out a curious phenomenon which recurs again and again. Some problems in topological dynamics --- like the one we just discussed --- whose formulation is purely topological, can be solved using the fact that a $\mathbb{Z}$ dynamical system always carries an invariant probability measure, and then employing a machinery provided by ergodic theory. In several cases this approach is the only one presently known for solving the problem. In the present case however purely topological proofs exist, e.g. the Petersen-Veech proof is one such.
\section{Disjointness: measure vs. topological}\label{Sec-disj}
In the ring of integers $\mathbb{Z}$ two integers $m$ and $n$ have no common factor if whenever $k|m$ and $k|n$ then $k=\pm 1$. They are disjoint if $m\cdot n$ is the least common multiple of $m$ and $n$. Of course in $\mathbb{Z}$ these two notions coincide. In his seminal paper of 1967 \cite{Fur3}, H. Furstenberg introduced the same notions in the context of dynamical systems, both measure-preserving transformations and homeomorphisms of compact spaces, and asked whether in these categories as well the two are equivalent. The notion of a factor in, say, the measure category, is the natural one: the dynamical system $\mathbf{Y}=(Y,\mathcal{Y},\nu,T)$ is a {\bf factor\/} of the dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ if there exists a measurable map $\pi:X \to Y$ with $\pi(\mu)=\nu$ such that $T\circ \pi= \pi \circ T$. A common factor of two systems $\mathbf{X}$ and $\mathbf{Y}$ is thus a third system $\mathbf{Z}$ which is a factor of both. A {\bf joining\/} of the two systems $\mathbf{X}$ and $\mathbf{Y}$ is any system $\mathbf{W}$ which admits both as factors and is in turn spanned by them. According to Furstenberg's definition the systems $\mathbf{X}$ and $\mathbf{Y}$ are {\bf disjoint\/} if the product system $\mathbf{X}\times \mathbf{Y}$ is the only joining they admit. In the topological category, a joining of $(X,T)$\ and $(Y,S)$ is any subsystem $W\subset X\times Y$ of the product system $(X\times Y, T\times S)$ whose projections on both coordinates are full; i.e. $\pi_X(W)=X$ and $\pi_Y(W)=Y$. $(X,T)$\ and $(Y,S)$ are {\bf disjoint\/} if $X\times Y$ is the unique joining of these two systems. It is easy to verify that if $(X,T)$\ and $(Y,S)$ are disjoint then at least one of them is minimal. Also, if both systems are minimal then they are disjoint iff the product system $(X\times Y, T\times S)$ is minimal.
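The arithmetic side of the analogy is easy to verify directly; here is a minimal Python sanity check (our illustration, not part of the original discussion) that in $\mathbb{Z}$ the two notions coincide:

```python
from math import gcd

def lcm(m, n):
    # least common multiple of positive integers, computed from the gcd
    return m * n // gcd(m, n)

# "no common factor" (gcd = 1) and "disjoint" (lcm = m*n) agree in Z
for m, n in [(4, 9), (6, 10), (7, 7), (12, 35), (15, 28)]:
    assert (gcd(m, n) == 1) == (lcm(m, n) == m * n)
```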
In 1979, D. Rudolph, using joining techniques, provided the first example of a pair of ergodic measure preserving transformations with no common factor which are not disjoint \cite{Ru1}. In this work Rudolph laid the foundation of joining theory. He introduced the class of dynamical systems having ``minimal self-joinings" (MSJ), and constructed a rank one mixing dynamical system having minimal self-joinings of all orders.
Given a dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ a probability measure $\lambda$ on the product of $k$ copies of $X$ denoted $X_1,X_2,\ldots, X_k$, invariant under the product transformation and projecting onto $\mu$ in each coordinate is a {\bf $k$-fold self-joining\/}. It is called an {\bf off-diagonal\/} \label{def-offdiag} if it is a ``graph" measure of the form $\lambda={{\rm{gr\,}}}(\mu,T^{n_1},\dots,T^{n_k})$, i.e. $\lambda$ is the image of $\mu$ under the map $x\mapsto\big(T^{n_1}x,T^{n_2}x,\ldots, T^{n_k}x\big)$ of $X$ into $\prod\limits^k_{i=1}X_i$. The joining $\lambda$ is a {\bf product of off-diagonals\/} if there exists a partition $(J_1,\ldots, J_m)$ of $\{1,\ldots, k\}$ such that (i)\ For each $l$, the projection of $\lambda$ on $\prod\limits_{i\in J_l}X_i$ is an off-diagonal, (ii)\ The systems $\prod\limits_{i\in J_l}X_i$, $1\le l\le m$, are independent. An ergodic system $\mathbf{X}$ has {\bf minimal self-joinings of order $k$\/} if every $k$-fold ergodic self-joining of $\mathbf{X}$ is a product of off-diagonals.
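To make the definition concrete (our addition): in the simplest nontrivial case $k=2$ with exponents $(0,n)$, the off-diagonal ${\rm{gr\,}}(\mu,{\rm{id}},T^{n})$ is the self-joining supported on the graph of $T^{n}$, and on measurable rectangles it is given by

```latex
\[
{\rm{gr\,}}(\mu,{\rm{id}},T^{n})(A\times B)\;=\;\mu\bigl(A\cap T^{-n}B\bigr),
\qquad A,B\in\mathcal{X}.
\]
```

Taking $n=0$ recovers the diagonal measure, while an independent pair of such graphs over the partition $\{1\},\{2\}$ gives the product joining $\mu\times\mu$.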
In \cite{Ru1} Rudolph shows how any dynamical system with MSJ can be used to construct a counterexample to Furstenberg's question as well as a wealth of other counterexamples to various questions in ergodic theory. In \cite{JRS} del Junco, Rahe and Swanson were able to show that the classical example of Chac{\'o}n \cite{Chacon} has MSJ, answering a question of Rudolph whether a weakly but not strongly mixing system with MSJ exists. In \cite{GW1} Glasner and Weiss provide a topological counterexample, which also serves as a natural counterexample in the measure category. The example consists of two horocycle flows which have no nontrivial common factor but are nevertheless not disjoint. It is based on deep results of Ratner \cite{Rat} which provide a complete description of the self-joinings of a horocycle flow. More recently an even more striking example was given in the topological category by E. Lindenstrauss, where two minimal dynamical systems with no nontrivial factor share a common almost 1-1 extension, \cite{Lis1}.
Beginning with the pioneering works of Furstenberg and Rudolph, the notion of joinings was exploited by many authors; Furstenberg 1977 \cite{Fur4}, Rudolph 1979 \cite{Ru1}, Veech 1982 \cite{V2}, Ratner 1983 \cite{Rat}, del Junco and Rudolph 1987 \cite{JR1}, Host 1991 \cite{Host}, King 1992 \cite{King2}, Glasner, Host and Rudolph 1992 \cite{GHR}, Thouvenot 1993 \cite{Thou2}, Ryzhikov 1994 \cite{Ry}, Kammeyer and Rudolph 1995 (2002) \cite{KR}, del Junco, Lema\'nczyk and Mentzen 1995 \cite{JLM}, and Lema\'nczyk, Parreau and Thouvenot 2000 \cite{LPT}, to mention a few. The negative answer to Furstenberg's question and the consequent works on joinings and disjointness show that in order to study the relationship between two dynamical systems it is necessary to know all the possible joinings of the two systems and to understand the nature of these joinings.
Some of the best known disjointness relations among families of dynamical systems are the following: \begin{itemize} \item ${{\rm{id}}} \ \bot \ $ ergodic, \item distal $\ \bot \ $ weakly mixing (\cite{Fur3}), \item rigid $\ \bot \ $ mild mixing (\cite{FW2}), \item zero entropy $\ \bot \ $ $K$-systems (\cite{Fur3}), \end{itemize} in the measure category and \begin{itemize} \item $F$-systems $ \ \bot \ $ minimal (\cite{Fur3}), \item minimal distal $\ \bot \ $ weakly mixing, \item minimal zero entropy $\ \bot \ $ minimal UPE-systems (\cite{Bla2}), \end{itemize} in the topological category.
\section{Mild mixing: measure vs. topological}\label{Sec-mm}
\begin{defn}\label{def-mild} Let $\mathbf{X}=(X,\mathcal{X},\mu,T)$ be a measure dynamical system. \begin{enumerate} \item The system $\mathbf{X}$ is {\bf rigid\/} if there exists a sequence $n_k\nearrow \infty$ such that $$ \lim \mu \left(T^{n_k}A\cap A\right)=\mu (A) $$ for every measurable subset $A$ of $X$. We say that $\mathbf{X}$ is $\{n_k\}$-{\bf rigid\/}. \item An ergodic system is {\bf mildly mixing\/} if it has no non-trivial rigid factor. \end{enumerate} \end{defn}
These notions were introduced in \cite{FW2}. The authors show that the mild mixing property is equivalent to the following multiplier property.
\begin{thm} An ergodic system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is mildly mixing iff for every ergodic (finite or infinite) measure preserving system $(Y,\mathcal{Y},\nu,T)$, the product system $$ (X\times Y, \mu \times \nu, T\times T), $$ is ergodic. \end{thm}
Since every Kronecker system is rigid it follows from Theorem \ref{easy} that mild mixing implies weak mixing. Clearly strong mixing implies mild mixing. It is not hard to construct rigid weakly mixing systems, so that the class of mildly mixing systems is properly contained in the class of weakly mixing systems. Finally, there are mildly but not strongly mixing systems; Chac{\'o}n's system is an example (see Aaronson and Weiss \cite{AW}).
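A standard example of a rigid system (our addition, not worked out in the text): let $T$ be the rotation by an irrational $\alpha$ on the circle with Lebesgue measure $\mu$, and let $q_k$ be the denominators of the continued fraction convergents of $\alpha$. Then $\|q_k\alpha\|\to 0$, so $T^{q_k}\to{\rm{id}}$ and

```latex
\[
\lim_{k\to\infty}\mu\left(T^{q_k}A\cap A\right)=\mu(A)
\qquad\text{for every measurable }A,
\]
```

i.e. the rotation is $\{q_k\}$-rigid; being a nontrivial Kronecker system it is in particular not mildly mixing.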
We also have the following analytic characterization of mild mixing.
\begin{prop}\label{ex-rigid-matrix} An ergodic system $\mathbf{X}$ is mildly mixing iff $$ \limsup_{n\to\infty}\phi_f(n)<1, $$
for every matrix coefficient $\phi_f$, where for $f\in L^2(X,\mu), \|f\|=1$,\ $\phi_f(n):= \langle U_{T^n} f,f\rangle$. \end{prop}
\begin{proof} If $\mathbf{X}\to \mathbf{Y}$ is a rigid factor, then there exists a sequence $ n_i\to \infty$ such that
$U_{T^{n_i}}\to {{\rm{id}}}$ strongly on $L^2(Y,\nu)$. For any function $f\in L^2_0(Y,\nu)$ with $\|f\|=1$, we have $\lim_{i\to\infty}\phi_f(n_i)=1$. Conversely, if $\lim_{i\to\infty}\phi_f(n_i)=1$ for some $n_i\nearrow \infty$ and $f\in L^2_0(X,\mu),
\|f\|=1$, then $\lim_{i\to\infty} U_{T^{n_i}}f=f$. Clearly $f$ can be replaced by a bounded function and we let $A$ be the sub-algebra of $L^\infty(X,\mu)$ generated by $\{U_{T^n}f:n \in \mathbb{Z} \}$. The algebra $A$ defines a non-trivial factor $\mathbf{X}\to \mathbf{Y}$ such that $U_{T^{n_i}}\to {{\rm{id}}}$ strongly on $L^2(Y,\nu)$. \end{proof}
We say that a collection $\mathcal{F}$ of nonempty subsets of $\mathbb{Z}$ is a {\bf family\/} if it is hereditary upward and {\bf proper\/} (i.e. $A\subset B$ and $A\in \mathcal{F}$ implies $B\in \mathcal{F}$, and $\mathcal{F}$ is neither empty nor all of $2^\mathbb{Z}$).
With a family $\mathcal{F}$ of nonempty subsets of $\mathbb{Z}$ we associate the {\bf dual family\/} $$ \mathcal{F}^{*}=\{E:E\cap F\ne\emptyset, \forall \ F\in \mathcal{F}\}. $$ It is easily verified that $\mathcal{F}^{*}$ is indeed a family. Also, for families, $\mathcal{F}_1\subset \mathcal{F}_2\ \Rightarrow\ \mathcal{F}^{*}_1\supset \mathcal{F}^{*}_2$, and $\mathcal{F}^{**}=\mathcal{F}$.
We say that a subset $J$ of $\mathbb{Z}$ has {\bf uniform density 1\/} if for every $0<\lambda<1$ there exists an $N$ such that for every interval $I\subset \mathbb{Z}$
of length $> N$ we have $|J \cap I| \ge \lambda |I|$. We denote by $\mathcal{D}$ the family of subsets of $\mathbb{Z}$ of uniform density 1. It is also easy to see that $\mathcal{D}$ has the finite intersection property.
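As a concrete illustration (ours, not from the text): the set $J$ of non-squares has uniform density 1, since an interval of length $N$ contains at most $\sqrt{N}+1$ squares. A quick numerical check of the definition on a finite window:

```python
# check |J ∩ I| / |I| for every interval I of length L inside [0, N),
# where J is the set of non-squares
N, L = 10_000, 100
squares = {k * k for k in range(101)}      # all squares below N
J = set(range(N)) - squares

worst = min(len(J & set(range(a, a + L))) / L for a in range(N - L))
assert worst >= 0.89   # the densest stretch of squares, [0, 100), holds ten
```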
Let $\mathcal{F}$ be a family of nonempty subsets of $\mathbb{Z}$ which is closed under finite intersections (i.e. $\mathcal{F}$ is a filter). Following \cite{Fur5} we say that a sequence $\{x_n: n\in \mathbb{Z}\}$ in a topological space $X$ {\bf $\mathcal{F}$-converges} to a point $x\in X$ if for every neighborhood $V$ of $x$ the set $\{n: x_n\in V\}$ is in $\mathcal{F}$. We denote this by $$ \mathcal{F}\,{\text -}\,\lim x_n =x. $$
We have the following characterization of weak mixing for measure preserving systems which explains more clearly its name. \begin{thm} The dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is weakly mixing iff for every $A, B \in \mathcal{X}$ we have $$ \mathcal{D}\,{\text -}\,\lim \mu(T^{-n}A\cap B) =\mu(A)\mu(B). $$ \end{thm}
An analogous characterization of measure theoretical mild mixing is obtained by considering the families of $I\!P$ and $I\!P^*$ sets. An $I\!P$-{\bf set\/} is any subset of $\mathbb{Z}$ containing a subset of the form $I\!P\{n_i\}= \{n_{i_1}+n_{i_2}+\cdots+n_{i_k}:i_1<i_2<\cdots<i_k\}$, for some infinite sequence $\{n_i\}_{i=1}^\infty$. We let $\mathcal{I}$ denote the family of $I\!P$-sets and call the elements of the dual family $\mathcal{I}^*$, {\bf $I\!P^*$-sets\/}. Again it is not hard to see that the family of $I\!P^*$-sets is closed under finite intersections. For a proof of the next theorem we refer to \cite{Fur5}.
\begin{thm} The dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is mildly mixing iff for every $A, B \in \mathcal{X}$ we have $$ \mathcal{I}^*\,{\text -}\,\lim\mu(T^{-n}A\cap B) =\mu(A)\mu(B). $$ \end{thm}
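To illustrate the family $\mathcal{I}^*$ (an example of ours, not from the text): the even integers $2\mathbb{Z}$ form an $I\!P^*$-set. Indeed, given any $I\!P\{n_i\}$, if some generator $n_i$ is even we are done, and otherwise any two generators are odd, whence

```latex
\[
n_i\equiv n_j\equiv 1 \pmod{2}
\quad\Longrightarrow\quad
n_i+n_j\ \in\ I\!P\{n_i\}\cap 2\mathbb{Z}.
\]
```

By contrast, the odd integers are not an $I\!P^*$-set, since they miss the $I\!P$-set generated by $\{2,4,8,\dots\}$.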
{\begin{center}{$\divideontimes$}\end{center}}
We now turn to the topological category. Let $(X,T)$\ be a topological dynamical system. For two non-empty open sets $U,V\subset X$ and a point $x\in X$ set \begin{gather*} N(U,V)=\{n\in \mathbb{Z}: T^n U\cap V\ne\emptyset\},\quad N_+(U,V)=N(U,V)\cap \mathbb{Z}_+\\ {\text{and}} \qquad N(x,V)=\{n\in \mathbb{Z}: T^n x\in V\}. \end{gather*} Notice that sets of the form $N(U,U)$ are symmetric.
We say that $(X,T)$\ is {\bf topologically transitive\/} (or just {\bf transitive\/}) if $N(U,V)$ is nonempty whenever $U,V \subset X$ are two non-empty open sets. Using Baire's category theorem it is easy to see that (for metrizable $X$) a system $(X,T)$\ is topologically transitive iff there exists a dense $G_\delta$ subset $X_0\subset X$ such that ${\bar{\mathcal{O}}}_T(x)=X$ for every $x\in X_0$.
We define the family $\mathcal{F}_{\rm thick}$ of {\bf thick sets\/} to be the collection of sets which contain arbitrary long intervals. The dual family $\mathcal{F}_{\rm synd}=\mathcal{F}_{\rm thick}^*$ is the collection of {\bf syndetic sets\/} --- those sets $A\subset \mathbb{Z}$ such that for some positive integer $N$ the intersection of $A$ with every interval of length $N$ is nonempty.
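To make the duality concrete (our illustration): $\bigcup_{n}[n!,\,n!+n]$ is thick, $3\mathbb{Z}$ is syndetic with gap $N=3$, and every syndetic set meets every thick set, since a thick set contains an interval of any prescribed length $N$. A finite sanity check:

```python
from math import factorial

# a thick set contains arbitrarily long intervals; here the blocks [n!, n!+n]
blocks = [range(factorial(n), factorial(n) + n + 1) for n in range(1, 8)]

# 3Z is syndetic with gap N = 3: it meets every interval of length >= 3,
# hence every sufficiently long block of the thick set
for block in blocks:
    if len(block) >= 3:
        assert any(n % 3 == 0 for n in block)
```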
Given a family $\mathcal{F}$ we say that a topological dynamical system $(X,T)$ is {\bf $\mathcal{F}$-recurrent\/} if $N(A,A)\in \mathcal{F}$ for every nonempty open set $A\subset X$. We say that a dynamical system is {\bf $\mathcal{F}$-transitive\/} if $N(A,B)\in \mathcal{F}$ for all nonempty open sets $A, B \subset X$. The class of $\mathcal{F}$-transitive systems is denoted by $\mathcal{E}_{\mathcal{F}}$. E.g. in this notation the class of {\bf topologically mixing systems\/} is $\mathcal{E}_{\rm cofinite}$, where we call a subset $A\subset \mathbb{Z}$ co-finite when $\mathbb{Z}\setminus A$ is a finite set. We write simply $\mathcal{E}=\mathcal{E}_{\rm{infinite}}$ for the class of {\bf recurrent transitive\/} dynamical systems. It is not hard to see that when $X$ has no isolated points $(X,T)$\ is topologically transitive iff it is recurrent transitive. From this we then deduce that a weakly mixing system is necessarily recurrent transitive.
In a dynamical system $(X,T)$\ a point $x\in X$ is a {\bf wandering point\/} if there exists an open neighborhood $U$ of $x$ such that the collection $\{T^n U: n\in \mathbb{Z}\}$ is pairwise disjoint.
\begin{prop}\label{rec-tra} Let $(X,T)$\ be a topologically transitive dynamical system; then the following conditions are equivalent: \begin{enumerate} \item $(X,T)\in \mathcal{E}_{\text{\rm infinite}}$. \item The recurrent points are dense in $X$. \item $(X,T)$\ has no wandering points. \item The dynamical system $(X_\infty,T)$, the one point compactification of the integers with translation and a fixed point at infinity, is not a factor of $(X,T)$. \end{enumerate} \end{prop}
\begin{proof} 1 $\Rightarrow$ 4\ If $\pi:X\to X_\infty$ is a factor map then, clearly $N(\pi^{-1}(0),\pi^{-1}(0))=\{0\}$.
4 $\Rightarrow$ 3\ If $U$ is a nonempty open wandering subset of $X$ then $\{T^jU : j\in \mathbb{Z}\}\cup (X\setminus \bigcup \{T^jU : j\in \mathbb{Z}\})$ is a partition of $X$. It is easy to see that this partition defines a factor map $\pi : X\to X_\infty$.
3 $\Rightarrow$ 2\ This implication is a consequence of the following:
\begin{lem} If the dynamical system $(X,T)$\ has no wandering points then the recurrent points are dense in $X$. \end{lem}
\begin{proof} For every $\delta>0$ put $$ A_\delta=\{x\in X:\exists j\not=0,\ d(T^jx,x)<\delta\}. $$ Clearly $A_\delta$ is an open set and we claim that it is dense. In fact given $x\in X$ and $\epsilon>0$ there exists $j\not=0$ with $$ T^jB_\epsilon(x)\cap B_\epsilon(x)\not=\emptyset. $$ If $y$ is a point in this intersection then $d(T^{-j}y,y)<2\epsilon$. Thus for $\epsilon<\delta/2$ we have $y\in A_\delta$ and $d(x,y)<\epsilon$. Now by Baire's theorem $$ A=\bigcap_{k=1}^\infty A_{1/k} $$ is a dense $G_\delta$ subset of $X$ and each point in $A$ is recurrent. \end{proof}
2 $\Rightarrow$ 1\ Given $U,V$ nonempty open subsets of $X$ and $k\in N(U,V)$ let $U_0$ be the nonempty open subset $U_0=U\cap T^{-k}V$. Check that $N(U_0,U_0)+k\subset N(U,V)$. By assumption $N(U_0,U_0)$ is infinite and a fortiori so is $N(U,V)$. This completes the proof of Proposition \ref{rec-tra}. \end{proof}
A well known characterization of the class $\mathbf{WM}$ of topologically weakly mixing systems is due to Furstenberg: \begin{thm} $\mathbf{WM}=\mathcal{E}_{\text{\rm thick}}$. \end{thm}
Following \cite{AG} we call the systems in $\mathcal{E}_{\text{\rm synd}}$ {\bf topologically ergodic\/} and write $\mathbf{TE}$ for this class. This is a rich class, as we can see from the following claim from \cite{GW}. Here $\mathbf{MIN}$ is the class of minimal systems and $\mathbf{E}$ the class of $E$-systems; i.e. those transitive dynamical systems $(X,T)$\ for which there exists an invariant probability measure with full support.
\begin{thm}\label{MIN<TE} $\mathbf{MIN}, \mathbf{E} \subset \mathbf{TE}$. \end{thm}
\begin{proof} 1.\ The claim for $\mathbf{MIN}$ is immediate by the well known characterization of minimal systems: $(X,T)$\ is minimal iff $N(x,U)$ is syndetic for every $x\in X$ and nonempty open $U\subset X$.
2.\ Given two non-empty open sets $U,V$ in $X$, choose $k\in \mathbb{Z}$ with $T^kU\cap V\not=\emptyset$. Next set $U_0=T^{-k}V\cap U$, and observe that $k+N(U_0,U_0) \subset N(U,V)$. Thus it is enough to show that $N(U,U)$ is syndetic for every non-empty open $U$. We have to show that $N(U,U)$ meets every thick subset $B\subset\mathbb{Z}$. By Poincar\'e's recurrence theorem, $N(U,U)$ meets every set of the form $A-A=\{n-m:n,m\in A\}$ with $A$ infinite. It is an easy exercise to show that every thick set $B$ contains some $D^+(A)=\{a_n-a_m:n>m\}$ for an infinite sequence $A=\{a_n\}$. Thus $\emptyset\not= N(U,U)\cap \pm D^+(A) \subset N(U,U)\cap \pm B$. Since $N(U,U)$ is symmetric, this completes the proof. \end{proof}
We recall (see the previous section) that two dynamical systems $(X,T)$\ and $(Y,T)$\ are disjoint if every closed $T\times T$-invariant subset of $X\times Y$ whose projections on $X$ and $Y$ are full, is necessarily the entire space $X\times Y$. It follows easily that when $(X,T)$\ and $(Y,T)$\ are disjoint, at least one of them must be minimal. If both $(X,T)$\ and $(Y,T)$\ are minimal then they are disjoint iff the product system is minimal. We say that $(X,T)$\ and $(Y,T)$\ are {\bf weakly disjoint\/} when the product system $(X\times Y, T\times T)$ is transitive. This is indeed a very weak sense of disjointness as there are systems which are weakly disjoint from themselves. In fact, by definition a dynamical system is topologically weakly mixing iff it is weakly disjoint from itself.
If $\mathbf{P}$ is a class of recurrent transitive dynamical systems we let $\mathbf{P}^{\curlywedge}$ be the class of recurrent transitive dynamical systems which are weakly disjoint from every member of $\mathbf{P}$: $$ \mathbf{P}^{\curlywedge}=\{(X,T) : X\times Y\in \mathcal{E} \ {\text {\rm for every \ }} (Y,T)\in \mathbf{P} \}. $$ We clearly have $\mathbf{P}\subset \mathbf{Q}\ \Rightarrow \mathbf{P}^{\curlywedge}\supset \mathbf{Q}^{\curlywedge}$ and $\mathbf{P}^{\curlywedge \curlywedge \curlywedge}= \mathbf{P}^{\curlywedge}$.
For the discussion of topologically mildly mixing systems it will be convenient to deal with families of subsets of $\mathbb{Z}_+$ rather than $\mathbb{Z}$. If $\mathcal{F}$ is such a family then $$ \mathcal{E}_\mathcal{F}= \{(X,T) : N_+(A,B) \in \mathcal{F}\ {\text {\rm for every nonempty open}}\ A,B \subset X\}. $$ Let us call a subset of $\mathbb{Z}_+$ a $SI\!P$-{\bf set\/} (symmetric $I\!P$-set), if it contains a subset of the form $$ SI\!P\{n_i\}=\{n_\alpha-n_\beta > 0: \ n_\alpha,n_\beta\in I\!P\{n_i\} \cup \{0\}\}, $$ for an $I\!P$ sequence $I\!P\{n_i\}\subset \mathbb{Z}_+$. Denote by $\mathcal{S}$ the family of $SI\!P$ sets. It is not hard to show that $$ \mathcal{F}_{\rm thick}\subset\mathcal{S}\subset\mathcal{I}, $$ (see \cite{Fur5}). Hence $\mathcal{F}_{\rm syndetic} \supset \mathcal{S}^* \supset \mathcal{I}^*$, hence $\mathcal{E}_{{\text {\rm synd}}}\supset \mathcal{E}_{\mathcal{S}^*}\supset \mathcal{E}_{\mathcal{I}^*}$, and finally $$ \mathcal{E}^{\curlywedge}_{{\text {\rm synd}}}\subset \mathcal{E}^{\curlywedge}_{\mathcal{S}^*}\subset \mathcal{E}^{\curlywedge}_{\mathcal{I}^*}. $$
\begin{defn} A topological dynamical system $(X,T)$\ is called {\bf topologically mildly mixing\/} if it is in $\mathcal{E}_{\mathcal{S}^*}$ and we denote the collection of topologically mildly mixing systems by $\mathbf{MM}=\mathcal{E}_{\mathcal{S}^*}$. \end{defn}
\begin{thm}\label{MM=E^} A dynamical system is in $\mathcal{E}$ iff it is weakly disjoint from every topologically mildly mixing system: $$ \mathcal{E}=\mathbf{MM}^{\curlywedge}. $$ And conversely it is topologically mildly mixing iff it is weakly disjoint from every recurrent transitive system: $$ \mathbf{MM}=\mathcal{E}^{\curlywedge}. $$ \end{thm}
\begin{proof} 1.\ Since $\mathcal{E}_{\mathcal{S}^*}$ is nonvacuous (for example every topologically mixing system is in $\mathcal{E}_{\mathcal{S}^*}$), it follows that every system in $\mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$ is in $\mathcal{E}$.
Conversely, assume that $(X,T)$\ is in $\mathcal{E}$ but $(X,T)\not\in \mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$, and we will arrive at a contradiction. By assumption there exists $(Y,T)\in \mathcal{E}_{\mathcal{S}^*}$ and a nondense nonempty open invariant subset $W\subset X\times Y$. Then $\pi_X(W)=O$ is a nonempty open invariant subset of $X$. By assumption $O$ is dense in $X$. Choose open nonempty sets $U_0\subset X$ and $V_0\subset Y$ with $U_0\times V_0\subset W$. By Proposition \ref{rec-tra} there exists a recurrent point $x_0$ in $U_0\subset O$. Then there is a sequence $n_i\to\infty$ such that for the $I\!P$-sequence $\{n_\alpha\}=I\!P\{n_i\}_{i=1}^\infty$,\ $I\!P{\text{-}}\lim T^{n_\alpha}x_0=x_0$ (see \cite{Fur5}). Choose $i_0$ such that $T^{n_\alpha}x_0\in U_0$ for $n_\alpha\in J=I\!P\{n_i\}_{i\ge i_0}$ and set $D=SI\!P(J)$. Given $V$ a nonempty open subset of $Y$ we have: $$ D\cap N(V_0,V) \not=\emptyset. $$ Thus for some $\alpha,\beta$ and $v_0\in V_0$, $$ T^{n_\alpha-n_\beta}(T^{n_\beta}x_0,v_0)= (T^{n_\alpha}x_0,T^{n_\alpha-n_\beta}v_0) \in (U_0\times V)\cap W. $$ We conclude that $$ \{x_0\}\times Y\subset {\rm{cls\,}} W. $$
The fact that in an $\mathcal{E}$ system the recurrent points are dense together with the observation that $\{x_0\}\times Y\subset {\rm{cls\,}} W$ for every recurrent point $x_0\in O$, imply that $W$ is dense in $X\times Y$, a contradiction.
2.\ From part 1 of the proof we have $\mathcal{E}=\mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$, hence $\mathcal{E}^{\curlywedge}= \mathcal{E}_{\mathcal{S}^*}^{\curlywedge\curlywedge} \supset\mathcal{E}_{\mathcal{S}^*}$.
Suppose $(X,T)\in \mathcal{E}$ but $(X,T)\not\in \mathcal{E}_{\mathcal{S}^*}$; we will show that $(X,T)\not\in \mathcal{E}^{\curlywedge}$. There exist $U,V\subset X$, nonempty open subsets, and an $I\!P$-set $I=I\!P\{n_i\}$ for a monotone increasing sequence $\{n_1<n_2< \cdots\}$ with $$ N(U,V)\cap D=\emptyset, $$ where $$ D=\{n_\alpha-n_\beta: n_\alpha,n_\beta\in I,\ n_\alpha>n_\beta\}. $$ If $(X,T)$\ is not topologically weakly mixing then $X\times X\not\in \mathcal{E}$, hence $(X,T)\not\in \mathcal{E}^{\curlywedge}$. So we can assume that $(X,T)$\ is topologically weakly mixing. Now in $X\times X$ $$ N(U\times V,V\times U)= N(U,V)\cap N(V,U)=N(U,V)\cap -N(U,V), $$ is disjoint from $D\cup -D$, and replacing $X$ by $X\times X$ we can assume that $N(U,V)\cap (D\cup -D)=\emptyset$. In fact, if $X\in \mathcal{E}^{\curlywedge}$ then $X\times Y \in \mathcal{E}$ for every $Y \in \mathcal{E}$, therefore $X\times (X\times Y) \in \mathcal{E}$ and we see that also $X\times X \in \mathcal{E}^{\curlywedge}$.
By going to a subsequence, we can assume that $$ \lim_{k\to\infty} n_{k+1}-\sum_{i=1}^kn_i =\infty. $$ in which case the representation of each $n\in I$ as $n=n_\alpha=n_{i_1}+n_{i_2}+\cdots +n_{i_k}; \ \alpha=\{i_1<i_2<\cdots<i_k\}$ is unique.
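For instance (our check, with the hypothetical choice $n_i=3^i$): this sequence satisfies the growth condition, since $n_{k+1}-\sum_{i\le k}n_i=(3^{k+1}+3)/2\to\infty$, and indeed all finite sums over distinct indices are pairwise distinct:

```python
from itertools import combinations

# generators with n_{k+1} - (n_1 + ... + n_k) -> infinity
gens = [3 ** i for i in range(1, 9)]           # 3, 9, 27, ..., 6561

sums = [sum(c) for r in range(1, len(gens) + 1)
               for c in combinations(gens, r)]

# the growth condition forces unique representations n = n_{i_1}+...+n_{i_k}
assert len(set(sums)) == len(sums) == 2 ** len(gens) - 1
```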
Next let $y_0\in\{0,1\}^\mathbb{Z}$ be the sequence $y_0=\mathbf{1}_I$. Let $Y$ be the orbit closure of $y_0$ in $\{0,1\}^\mathbb{Z}$ under the shift $T$, and let $[1]=\{y\in Y:y(0)=1\}$. Observe that $$ N(y_0,[1])=I. $$ It is easy to check that $$ I\!P{\text{-}}\lim T^{n_\alpha}y_0=y_0. $$ Thus the system $(Y,T)$ is topologically transitive with $y_0$ a recurrent point; i.e. $(Y,T)\in \mathcal{E}$.
We now observe that $$ N([1],[1])= N(y_0,[1])- N(y_0,[1])=I-I= D\cup -D\cup \{0\}. $$ If $X\times Y$ is topologically transitive then in particular \begin{gather*} N(U\times [1],V\times [1])=N(U,V)\cap N([1],[1]) =\\
N(U,V)\cap (D\cup -D\cup \{0\})=\ {\text{\rm infinite\ set}}. \end{gather*} But this contradicts our assumption. Thus $X\times Y\not\in \mathcal{E}$ and $(X,T)\not\in \mathcal{E}^{\curlywedge}$. This completes the proof. \end{proof}
We now have the following:
\begin{cor} Every topologically mildly mixing system is weakly mixing and topologically ergodic: $$ \mathbf{MM}\subset \mathbf{WM}\cap \mathbf{TE}. $$ \end{cor}
\begin{proof} We have $\mathcal{E}_{\mathcal{S}^*}\subset \mathcal{E} =\mathcal{E}_{\mathcal{S}^*}^{\curlywedge}$, hence for every $(X,T)\in \mathcal{E}_{\mathcal{S}^*}$, $X\times X \in \mathcal{E}$, i.e. $(X,T)$\ is topologically weakly mixing. And, as we have already observed, the inclusion $\mathcal{F}_{\rm syndetic} \supset \mathcal{S}^*$ entails $\mathbf{TE}=\mathcal{E}_{{\text {\rm synd}}}\supset \mathcal{E}_{\mathcal{S}^*} = \mathbf{MM}$. \end{proof}
To complete the analogy with the measure theoretical setup we next define a topological analogue of rigidity. This is just one of several possible definitions of topological rigidity and we refer to \cite{GM} for a treatment of these notions.
\begin{defn} A dynamical system $(X,T)$\ is called {\bf uniformly rigid\/} if there exists a sequence $n_k\nearrow \infty$ such that $$ \lim_{k\to\infty} \sup_{x\in X} d(T^{n_k}x,x)=0, $$ i.e. $\lim_{k\to\infty} T^{n_k}={\rm{id}}$ in the uniform topology on the group $H(X)$ of homeomorphisms of $X$. We denote by $\mathcal{R}$ the collection of topologically transitive uniformly rigid systems. \end{defn}
In \cite{GM} the existence of minimal weakly mixing but nonetheless uniformly rigid dynamical systems is demonstrated. However, we have the following:
\begin{lem} A system which is both topologically mildly mixing and uniformly rigid is trivial. \end{lem}
\begin{proof} Let $(X,T)$\ be both topologically mildly mixing and uniformly rigid. Then $$ \Lambda={\rm{cls\,}} \{T^n: n\in \mathbb{Z}\}\subset H(X), $$ is a Polish monothetic group.
Let $T^{n_i}$ be a sequence converging uniformly to ${\rm{id}}$, the identity element of $\Lambda$. For a subsequence we can ensure that $\{n_\alpha\}=I\!P\{n_i\}$ is an $I\!P$-sequence such that $I\!P{\text{-}}\lim T^{n_\alpha} = {\rm{id}}$ in $\Lambda$. If $X$ is nontrivial we can now find an open ball $B=B_\delta(x_0)\subset X$ with $TB\cap B=\emptyset$. Put $U=B_{\delta/2}(x_0)$ and $V=TU$; then by assumption $N(U,V)$ is an $SI\!P^*$-set and in particular: $$ \forall \alpha_0\ \exists \alpha,\beta>\alpha_0,\ n_\alpha-n_\beta\in N(U,V). $$ However, since $I\!P{\text{-}}\lim T^{n_\alpha} = {\rm{id}}$, we also have eventually, $T^{n_\alpha-n_\beta}U\subset B$; a contradiction. \end{proof}
\begin{cor} A topologically mildly mixing system has no nontrivial uniformly rigid factors. \end{cor}
We conclude this section with the following result which shows how these topological and measure theoretical notions are related.
\begin{thm} Let $(X,T)$\ be a topological dynamical system admitting an invariant probability measure $\mu$ with full support such that the associated measure preserving dynamical system $(X,\mathcal{X},\mu,T)$ is measure theoretically mildly mixing. Then $(X,T)$\ is topologically mildly mixing. \end{thm}
\begin{proof} Let $(Y,S)$ be any system in $\mathcal{E}$; by Theorem \ref{MM=E^} it suffices to show that $(X\times Y,T\times S)$ is topologically transitive. Suppose $W\subset X\times Y$ is a closed $T\times S$-invariant set with ${\rm{int\,}} W\ne\emptyset$.
Let $U\subset X, V\subset Y$ be two nonempty open subsets with $U\times V\subset W$. By transitivity of $(Y,S)$ there exists a transitive recurrent point $y_0\in V$. By theorems of Glimm and Effros \cite{Glimm}, \cite{Eff}, and Katznelson and Weiss \cite{KW} (see also Weiss \cite{Wei}), there exists a (possibly infinite) invariant ergodic measure $\nu$ on $Y$ with $\nu(V)>0$.
Let $\mu$ be the invariant probability measure with full support on $X$ with respect to which $(X,\mathcal{X},\mu,T)$ is measure theoretically mildly mixing. Then by \cite{FW2} the measure $\mu\times \nu$ is ergodic. Since $\mu\times\nu(W) \ge \mu\times\nu(U\times V) >0$ we conclude that $\mu\times\nu(W^c)=0$, which clearly implies $W= X\times Y$. \end{proof}
We note that the definition of topological mild mixing and the results described above concerning this notion are new. However independently of our work Huang and Ye in a recent work also define a similar notion and give it a comprehensive and systematic treatment, \cite{HY2}. The first named author would like to thank E. Akin for instructive conversations on this subject.
Regarding the classes $\mathbf{WM}$ and $\mathbf{TE}$ let us mention the following result from \cite{W}.
\begin{thm} $$ \mathbf{TE} = \mathbf{WM}^{\curlywedge}. $$ \end{thm}
For more on these topics we refer to \cite{Fur5}, \cite{A}, \cite{W}, \cite{AG}, \cite{HY1} and \cite{HY2}.
\section{Distal systems: topological vs. measure}\label{Sec-distal} As noted above the Kronecker or minimal equicontinuous dynamical systems can be considered as the most elementary type of systems. What is then the next stage? The clue in the topological case, which chronologically came first, is to be found in the notion of distality. A topological system $(X,T)$\ is called {\bf distal \/} if $$ \inf_{n\in \mathbb{Z}} d(T^nx,T^nx')>0 $$
for every $x\ne x'$ in $X$. It is easy to see that this property does not depend on the choice of a metric. And, of course, every equicontinuous system is distal. Is the converse true? Are these notions one and the same? The dynamical system given on the unit disc $D=\{z\in \mathbb{C}:|z|\le 1\}$ by the formula $Tz=z\exp(2\pi i |z|)$ is a counterexample: it is distal but not equicontinuous. However, it is not minimal. H. Furstenberg in 1963 noted that skew products over an equicontinuous basis with compact group translations as fiber maps are always distal, often minimal, but rarely equicontinuous, \cite{Fur2}. A typical example is the homeomorphism of the two torus $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$ given by $T(x,y)=(x+\alpha,y+x)$ where $\alpha\in \mathbb{R}/\mathbb{Z}$ is irrational. Independently and at about the same time, it was shown by L. Auslander, L. Green and F. Hahn that minimal nilflows are distal but not equicontinuous, \cite{AGH}. These examples led Furstenberg to his path breaking structure theorem, \cite{Fur2}.
Given a homomorphism $\pi:(X,T)\to (Y,T)$ let $R_\pi=\{(x,x'):\pi(x)=\pi(x')\}$. We say that the homomorphism $\pi$ is an {\bf isometric extension\/} if there exists a continuous function $d:R_\pi\to \mathbb{R}$ such that for each $y\in Y$ the restriction of $d$ to $\pi^{-1}(y)\times \pi^{-1}(y)$ is a metric and for every $x,x'\in \pi^{-1}(y)$ we have $d(Tx,Tx')=d(x,x')$.
If $K$ is a compact subgroup of ${{\rm{Aut\,}}}(X,T)$ (the group of homeomorphisms of $X$ commuting with $T$, endowed with the topology of uniform convergence) then the map $x\mapsto Kx$ defines a factor map $(X,T)\overset\pi\to(Y,T)$\ with $Y=X/K$ and $R_\pi=\{(x,kx):x\in X,\ k\in K\}$. Such an extension is called a {\bf group extension\/}\label{def-gr-ext-t}. It turns out, although this is not so easy to see, that when $(X,T)$\ is minimal then $\pi:(X,T)\to (Y,T)$ is an isometric extension iff there exists a commutative diagram: \begin{equation}\label{iso-gr-diag} \xymatrix { (\tilde X,T) \ar[dd]_{\tilde\pi} \ar[dr]^{\rho} & \\ & (X,T) \ar[dl]^{\pi}\\ (Y,T) & } \end{equation} where $(\tilde X,T)$ is minimal and $(\tilde X,T)\overset{\tilde\pi}\to(Y,T)$ is a group extension with some compact group $K\subset {\rm{Aut\,}}(\tilde X,T)$ and the map $\rho$ is the quotient map from $\tilde X$ onto $X$ defined by a closed subgroup $H$ of $K$. Thus $Y=\tilde X/K$ and $X=\tilde X/H$ and we can think of $\pi$ as a {\bf homogeneous space extension\/} with fiber $K/H$.
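In these terms the skew product $T(x,y)=(x+\alpha,y+x)$ considered above becomes a group extension: the maps $S_c(x,y)=(x,y+c)$, $c\in\mathbb{T}$, commute with $T$ and form a compact subgroup $K\cong\mathbb{T}$ of ${\rm{Aut\,}}(\mathbb{T}^2,T)$, and the factor map $(x,y)\mapsto Kx$, i.e. $(x,y)\mapsto x$, exhibits $(\mathbb{T}^2,T)$ as a group extension of the irrational rotation $x\mapsto x+\alpha$ on $\mathbb{T}=\mathbb{T}^2/K$.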
We say that a (metrizable) minimal system $(X,T)$\ is an {\bf $I$-system} if there is a (countable) ordinal $\eta$ and a family of systems $\{(X_\theta,x_\theta)\}_{\theta\le\eta}$ such that (i) $X_0$ is the trivial system, (ii) for every $\theta<\eta$ there exists an isometric homomorphism $\phi_\theta:X_{\theta+1}\to X_\theta$, (iii) for a limit ordinal $\lambda\le\eta$ the system $X_\lambda$ is the inverse limit of the systems $\{X_\theta\}_{\theta<\lambda}$ (i.e. $X_\lambda=\bigvee_{\theta<\lambda}(X_\theta,x_\theta)$), and (iv) $X_\eta=X$.
\begin{thm}[Furstenberg's structure theorem] \label{furst-structure-tm} A minimal system is distal iff it is an I-system. \end{thm}
{\begin{center}{$\divideontimes$}\end{center}}
W. Parry in his 1967 paper \cite{Pa1} suggested an intrinsic definition of measure distality. He defines in this paper a property of measure dynamical systems, called ``admitting a separating sieve'', which imitates the intrinsic definition of topological distality.
\begin{defn}\label{defn-ssieve} Let $\mathbf{X}$ be an ergodic dynamical system. A sequence $A_1\supset A_2\supset \cdots$ of sets in $\mathcal{X}$ with $\mu(A_n)>0$ and $\mu(A_n)\to 0$, is called a {\bf separating sieve\/} if there exists a subset $X_0\subset X$ with $\mu(X_0)=1$ such that for every $x,x'\in X_0$ the condition ``for every $n\in \mathbb{N}$ there exists $k\in \mathbb{Z}$ with $T^k x,T^k x'\in A_n$'' implies $x=x'$, or in symbols: $$ \bigcap_{n=1}^\infty\left(\bigcup_{k\in \mathbb{Z}} T^k(A_n\times A_n)\right)\cap (X_0 \times X_0)\subset \Delta. $$ We say that the ergodic system $\mathbf{X}$ is {\bf measure distal\/} if either $\mathbf{X}$ is finite or there exists a separating sieve. \end{defn}
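For instance, for an irrational rotation $Tx=x+\alpha$ of the circle with Lebesgue measure $\mu$, the arcs $A_n=(-\tfrac{1}{2n},\tfrac{1}{2n})$ form a separating sieve (with $X_0=X$): $\mu(A_n)=\tfrac1n\to 0$, and since $T$ is an isometry, $T^kx,T^kx'\in A_n$ forces $d(x,x')=d(T^kx,T^kx')\le \tfrac1n$; if this happens for every $n$ then $x=x'$. Thus the rotation is measure distal, as expected.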
Parry showed that every measure dynamical system admitting a separating sieve has zero entropy and that any $T$-invariant measure on a minimal topologically distal system gives rise to a measure dynamical system admitting a separating sieve.
If $\mathbf{X}=(X,\mathcal{X},\mu,T)$ is an ergodic dynamical system and $K\subset {\rm{Aut\,}}(\mathbf{X})$ is a compact subgroup (where ${\rm{Aut\,}}(\mathbf{X})$ is endowed with the weak topology) then the system $\mathbf{Y}=\mathbf{X}/K$ is well defined and we say that the extension $\pi:\mathbf{X} \to\mathbf{Y}$ is a {\bf group extension\/}. Using \eqref{iso-gr-diag} we can define the notion of isometric extension or homogeneous extension in the measure category. We will say that an ergodic system {\bf admits a Furstenberg tower\/} if it is obtained as a (necessarily countable) transfinite tower of measure isometric extensions. In 1976 in two outstanding papers \cite{Z1}, \cite{Z2} R. Zimmer developed the theory of distal systems (for a general locally compact acting group). He showed that, as in the topologically distal case, systems admitting Parry's separating sieve are exactly those systems which admit Furstenberg towers.
\begin{thm} \label{zimmer-structure-tm} An ergodic dynamical system is measure distal iff it admits a Furstenberg tower. \end{thm}
In \cite{Lis2} E. Lindenstrauss shows that every ergodic measure distal $\mathbb{Z}$-system can be represented as a minimal topologically distal system. For the exact result see Theorem \ref{distal-model} below.
\section{Furstenberg-Zimmer structure theorem vs. its topological PI version}\label{Sec-FZ}
Zimmer's theorem for distal systems leads directly to a structure theorem for the general ergodic system. Independently, and at about the same time, Furstenberg proved the same theorem, \cite{Fur4}, \cite{Fur5}. He used it as the main tool for his proof of Szemer\'edi's theorem on arithmetical progressions. Recall that an extension $\pi:(X,\mathcal{X},\mu,T)\to (Y,\mathcal{Y},\nu,T)$ is a {\bf weakly mixing extension\/} if the relative product system $\mathbf{X}\underset{\mathbf{Y}}{\times}\mathbf{X}$ is ergodic. (The system $\mathbf{X}\underset{\mathbf{Y}}{\times}\mathbf{X}$ is defined by the $T\times T$ invariant measure $$ \mu\underset{\nu}{\times}\mu=\int_Y\mu_y\times\mu_y\,d\nu(y), $$ on $X\times X$, where $\mu=\int_Y\mu_y\,d\nu(y)$ is the disintegration of $\mu$ over $\nu$.)
\begin{thm}[The Furstenberg-Zimmer structure theorem]\label{FZsth} Let $\mathbf{X}$ be an ergodic dynamical system. \begin{enumerate} \item There exists a maximal distal factor $\phi:\mathbf{X}\to\mathbf{Z}$ such that $\phi$ is a weakly mixing extension. \item This factorization is unique. \end{enumerate} \end{thm}
{\begin{center}{$\divideontimes$}\end{center}}
Is there a general structure theorem for minimal topological systems? Here, for the first time, we see a strong divergence between the measure and the topological theories. The culpability for this divergence is to be found in the notions of proximality and proximal extension, which arise naturally in the topological theory but do not appear at all in the measure theoretical context. In building towers for minimal systems we have to use two building blocks of extremely different nature (isometric and proximal) rather than one (isometric) as in the measure category. A pair of points $(x,x')\in X\times X$ is called {\bf proximal\/} if it is not distal, i.e. if $\inf_{n\in \mathbb{Z}} d(T^nx,T^nx')=0$. An extension $\pi:(X,T)\to (Y,T)$ is called {\bf proximal\/} if every pair in $R_\pi$ is proximal. The next theorem was developed gradually by several authors (Veech, Ellis-Glasner-Shapiro, and McMahon, \cite{V}, \cite{EGS}, \cite{McM1}, \cite{V1}). We first need to introduce some definitions. We say that a minimal dynamical system $(X,T)$\ is {\bf strictly PI\/} (proximal isometric) if it admits a tower consisting of proximal and isometric extensions. It is called a {\bf PI system\/} if there is a strictly PI minimal system $(\tilde{X},T)$ and a proximal extension $\theta:\tilde{X}\to X$. An extension $\pi:X \to Y$ is a {\bf RIC extension\/} (relatively incontractible) if for every $n\in \mathbb{N}$ and every $y\in Y$ the set of almost periodic points in $X_y^n=\pi^{-1}(y)\times\pi^{-1}(y)\times\dots\times \pi^{-1}(y)$ ($n$ times) is dense. (A point is called {\bf almost periodic\/} if its orbit closure is minimal.) It can be shown that every isometric (and more generally, distal) extension is RIC. Also every RIC extension is open. Finally a homomorphism $\pi:X\to Y$ is called {\bf topologically weakly mixing\/} if the dynamical system $(R_\pi,T\times T)$ is topologically transitive.
The philosophy in the next theorem is to regard proximal extensions as `negligible' and then the claim is, roughly (i.e. up to proximal extensions), that every minimal system is a weakly mixing extension of its maximal PI factor.
\begin{thm}[Structure theorem for minimal systems]\label{minsth} Given a metric minimal system $(X,T)$, there exists a countable ordinal $\eta$ and a canonically defined commutative diagram (the canonical PI-Tower) \begin{equation} \xymatrix
{X \ar[d]_{\pi} &
X_0 \ar[l]_{\tilde{\theta_0}}
\ar[d]_{\pi_0}
\ar[dr]^{\sigma_1} & &
X_1 \ar[ll]_{\tilde{\theta_1}}
\ar[d]_{\pi_1}
\ar@{}[r]|{\cdots} &
X_{\nu}
\ar[d]_{\pi_{\nu}}
\ar[dr]^{\sigma_{\nu+1}} & &
X_{\nu+1}
\ar[d]_{\pi_{\nu+1}}
\ar[ll]_{\tilde{\theta_{\nu+1}}}
\ar@{}[r]|{\cdots} &
X_{\eta}=X_{\infty}
\ar[d]_{\pi_{\infty}} \\
pt &
Y_0 \ar[l]^{\theta_0} &
Z_1 \ar[l]^{\rho_1} &
Y_1 \ar[l]^{\theta_1}
\ar@{}[r]|{\cdots} &
Y_{\nu} &
Z_{\nu+1}
\ar[l]^{\rho_{\nu+1}} &
Y_{\nu+1}
\ar[l]^{\theta_{\nu+1}}
\ar@{}[r]|{\cdots} &
Y_{\eta}=Y_{\infty}
} \nonumber \end{equation} where for each $\nu\le\eta, \pi_{\nu}$ is RIC, $\rho_{\nu}$ is isometric, $\theta_{\nu}, {\tilde\theta}_{\nu}$ are proximal extensions and $\pi_{\infty}$ is a RIC and topologically weakly mixing extension. For a limit ordinal $\nu ,\ X_{\nu}, Y_{\nu}, \pi_{\nu}$ etc. are the inverse limits (or joins) of $ X_{\iota}, Y_{\iota}, \pi_{\iota}$ etc. for $\iota < \nu$. Thus $X_\infty$ is a proximal extension of $X$ and a RIC topologically weakly mixing extension of the strictly PI-system $Y_\infty$. The homomorphism $\pi_\infty$ is an isomorphism (so that $X_\infty=Y_\infty$) iff $X$ is a PI-system. \end{thm}
We refer to \cite{Gl3} for a review on structure theory in topological dynamics.
\section{Entropy: measure and topological}\label{Sec-ent}
\subsection{The classical variational principle} For the definitions and the classical results concerning entropy theory we refer to \cite{HKat}, Section 3.7 for measure theory entropy and Section 4.4 for metric and topological entropy. The variational principle asserts that for a topological $\mathbb{Z}$-dynamical system $(X,T)$ the topological entropy equals the supremum of the measure entropies computed over all the invariant probability measures on $X$. It was already conjectured in the original paper of Adler, Konheim and McAndrew \cite{AKM}, where topological entropy was introduced, and then, after many stages (mainly by Goodwyn, Bowen and Dinaburg; see for example \cite{DGS}), matured into a theorem in Goodman's paper \cite{Goodm}.
\begin{thm}[The variational principle]\label{var-princ} Let $(X,T)$\ be a topological dynamical system, then $$ h_{{\rm top}}(X,T)={\sup}\{h_\mu:\mu\in M_T(X)\} ={\sup}\{h_\mu:\mu\in M^{{\rm{erg}}}_T(X)\}. $$ \end{thm}
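As a simple illustration, for the full shift $\sigma$ on $X=\{1,\dots,\ell\}^{\mathbb{Z}}$ one has $h_{{\rm top}}(X,\sigma)=\log\ell$; the Dirac measure at a fixed point has entropy $0$, while the supremum in the variational principle is attained by the uniform product (Bernoulli) measure, whose entropy is exactly $\log\ell$.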
This classical theorem has had a tremendous influence on the theory of dynamical systems and a vast amount of literature ensued, which we will not try to trace here (see \cite[Theorem 4.4.4]{HKat}). Instead we would like to present a more recent development.
\subsection{Entropy pairs and UPE systems}
As we have noted in the introduction, the theories of measurable dynamics (ergodic theory) and topological dynamics exhibit a remarkable parallelism. Usually one translates `ergodicity' as `topological transitivity', `weak mixing' as `topological weak mixing', `mixing' as `topological mixing' and `measure distal' as `topologically distal'. One often obtains in this way parallel theorems in both theories, though the methods of proof may be very different.
What is then the topological analogue of being a K-system? In \cite{Bla1} and \cite{Bla2} F. Blanchard introduced a notion of `topological $K$' for $\mathbb{Z}$-systems which he called UPE (uniformly positive entropy). This is defined as follows: a topological dynamical system $(X,T)$ is called a UPE system if every open cover of $X$ by two non-dense open sets $U$ and $V$ has positive topological entropy. A local version of this definition led to the concept of an entropy pair. A pair $(x,x') \in X \times X,\ x\not=x'$ is an entropy pair if for every open cover $\mathcal{U}=\{U,V\}$ of $X$, with $x \in {{\rm{int\,}}} (U^c)$ and $x' \in {{\rm{int\,}}} (V^c)$, the topological entropy $h(\mathcal{U})$ is positive. The set of entropy pairs is denoted by $E_X=E_{(X,T)}$ and it follows that the system $(X,T)$ is UPE iff $E_X= (X\times X)\setminus \Delta$. In general $E_X^*=E_X\cup \Delta$ is a $T\times T$-invariant closed symmetric and reflexive relation. Is it also transitive? When the answer to this question is affirmative then the quotient system $X/E_X^*$ is the topological analogue of the Pinsker factor. Unfortunately this need not always be true even when $(X,T)$ is a minimal system (see \cite{GW55} for a counterexample).
The following theorem was proved in Glasner and Weiss \cite{GW35}.
\begin{thm}\label{K>UPE} If the compact system $(X,T)$ supports an invariant measure $\mu$ for which the corresponding measure theoretical system $(X,\mathcal{X},\mu,T)$ is a $K$-system, then $(X,T)$ is UPE. \end{thm}
Applying this theorem together with the Jewett-Krieger theorem it is now possible to obtain a great variety of strictly ergodic UPE systems.
Given a $T$-invariant probability measure $\mu$ on $X$, a pair $(x,x') \in X \times X,\ x\not=x'$ is called a $\mu$-entropy pair if for every Borel partition $\alpha =\{Q,Q^c\}$ of $X$ with $x \in {{\rm{int\,}}}(Q)$ and $x' \in {{\rm{int\,}}}(Q^c)$ the measure entropy $h_\mu(\alpha)$ is positive. This definition was introduced by Blanchard, Host, Maass, Mart\'{\i}nez and Rudolph in \cite{B-R} as a local generalization of Theorem \ref{K>UPE}. It was shown in \cite{B-R} that for every invariant probability measure $\mu$ the set $E_\mu$ of $\mu$-entropy pairs is contained in $E_X$.
\begin{thm}\label{mu-subset-e} Every measure entropy pair is a topological entropy pair. \end{thm}
As in \cite{GW35} the main issue here is to understand the, sometimes intricate, relation between the combinatorial entropy $h_c(\mathcal{U})$ of a cover $\mathcal{U}$ and the measure theoretical entropy $h_\mu(\gamma)$ of a measurable partition $\gamma$ subordinate to $\mathcal{U}$.
\begin{prop}\label{pro-mu-subset-e} Let $\mathbf{X}=(X,\mathcal{X},\mu,T)$ be a measure dynamical system. Suppose $\mathcal{U}=\{U,V\}$ is a measurable cover such that every measurable two-set partition $\gamma=\{H,H^c\}$ which (as a cover) is finer than $\mathcal{U}$ satisfies $h_\mu(\gamma)>0$; then $h_c(\mathcal{U})>0$. \end{prop}
Since for a $K$-measure $\mu$ clearly every pair of distinct points is in $E_\mu$, Theorem \ref{K>UPE} follows from Theorem \ref{mu-subset-e}. It was shown in \cite{B-R} that when $(X,T)$ is uniquely ergodic the converse of Theorem \ref{mu-subset-e} is also true: $E_X=E_\mu$ for the unique invariant measure $\mu$ on $X$.
\subsection{A measure attaining the topological entropy of an open cover}
In order to gain a better understanding of the relationship between measure entropy pairs and topological entropy pairs one direction of a variational principle for open covers (Theorem \ref{mcvp} below) was proved in Blanchard, Glasner and Host \cite{BGH}. Two applications of this principle were given in \cite{BGH};\ (i) the construction, for a general system $(X,T)$, of a measure $\mu\in M_T(X)$ with $E_X=E_\mu$, and (ii) the proof that under a homomorphism $\pi:(X,\mu,T)\to (Y,\nu,T)$ every entropy pair in $E_\nu$ is the image of an entropy pair in $E_\mu$.
We now proceed with the statement and proof of this theorem which is of independent interest. The other direction of this variational principle will be proved in the following subsection.
\begin{thm}\label{mcvp} Let $(X,T)$ be a topological dynamical system, and $\mathcal{U}$ an open cover of $X$, then there exists a measure $\mu\in M_T(X)$ such that $h_\mu(\alpha)\ge h_{{\rm top}}(\mathcal{U})$ for all Borel partitions $\alpha$ finer than $\mathcal{U}$. \end{thm}
A crucial element of the proof of the variational principle is a combinatorial lemma which we present next. We let $\phi: [0,1]\to\mathbb{R}$ denote the function $$ \phi(t)=-t\log t\quad {\text{ for\ }} 0<t\le 1;\quad \phi(0)=0 \ . $$ Let $\mathfrak{L}=\{1,2,\dots,\ell\}$ be a finite set, called the {\bf alphabet\/}; sequences $\omega=\omega_1\ldots \omega_n\in \mathfrak{L}^n$, for $n\ge 1$, are called {\bf words of length $n$ on the alphabet $\mathfrak{L}$\/}. Let $n$ and $k$ be two integers with $1\leq k\le n$.
For every word $\omega$ of length $n$ and every word $\theta$ of length $k$
on the same alphabet, we denote by $p(\theta|\omega)$ the frequency of appearances of $\theta$ in $\omega$,\ i.e. $$
p(\theta|\omega)=\frac{1}{n-k+1} {{\rm{card\,}}}\big\{i :\ 1\leq i\le n-k+1,\; \omega_i\omega_{i+1}\ldots\omega_{i+k-1}= \theta_1\theta_2\ldots\theta_{k}\big\} \ . $$ For every word $\omega$ of length $n$ on the alphabet $\mathfrak{L}$, we let $$ H_k(\omega)=\sum_{\theta\in
\mathfrak{L}^k}\phi\big(p(\theta|\omega)\big) \ . $$
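To get a feeling for these quantities, note that for the constant word $\omega=11\ldots1$ each $p(\theta|\omega)$ is $0$ or $1$ and hence $H_k(\omega)=0$ for every $k$, whereas for a word in which every block $\theta\in\mathfrak{L}^k$ occurs with frequency $p(\theta|\omega)=\ell^{-k}$ one gets $$ H_k(\omega)=\sum_{\theta\in\mathfrak{L}^k}\phi(\ell^{-k}) =\ell^{k}\cdot \ell^{-k}\log \ell^{k}=k\log\ell \ . $$ Thus $\frac1k H_k(\omega)$ may be thought of as the empirical entropy of $\omega$ at block length $k$.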
\begin{lem}\label{combin} For every $h>0$, $\epsilon>0$, every integer $k\ge 1$ and every sufficiently large integer $n$, $$ {{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n : H_k(\omega) \le kh \big\} \le \exp\big( n(h+\epsilon)\big) \ . $$ \end{lem} {\bf Remark. }It is equally true that, if $h \le \log ({{\rm{card\,}}} \mathfrak{L})$, for sufficiently large $n$, $$ {{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n : H_k(\omega) \leq kh \big\} \geq \exp\big( n(h-\epsilon)\big) \ . $$ We do not prove this inequality here, since we have no use for it in the sequel.
\begin{proof} {\bf The case $ k=1$.}
We have \begin{equation}\label{H1} {{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n :\ H_1(\omega) \le h \big\} =\sum_{q \in K} \frac{n!}{q_1!\ldots q_{\ell}!} \end{equation} where $K$ is the set of $q=(q_1,\ldots ,q_{\ell})\in \mathbb{N}^{\ell}$ such that $$ \sum_{i=1}^{\ell} q_i=n\ {\text{ and }}\ \sum_{i=1}^{\ell}
\phi(\frac{q_i}{n})\le h \ . $$ By Stirling's formula, there exist two universal constants $c$ and $c'$ such that $$ c\big(\frac{m}{e}\big)^m\sqrt m \le m! \le c'\big(\frac{m}{e}\big)^m\sqrt m $$ for every $m>0$. From this we deduce the existence of a constant $C(\ell)$ such that for every $ q\in K$, $$ {\frac{n!}{q_1!\ldots q_{\ell}!}} \leq C(\ell) \exp\big( n\sum_{i=1}^{\ell} \phi(\frac{q_i}{n})\big) \leq C({\ell})\exp(nh)\ . $$ Now the sum \eqref{H1} contains at most $(n+1)^{\ell}$ terms; so that we have $$ {{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n :\ H_1(\omega) \le h \big\} \le (n+1)^{\ell}C(\ell)\exp(nh)\le \exp\big( n(h+\epsilon)\big) $$ for all sufficiently large $n$, as was to be proved.
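The multinomial estimate above rests, after Stirling, on the identity $$ \log\frac{n^n}{q_1^{q_1}\cdots q_{\ell}^{q_{\ell}}} = n\log n-\sum_{i=1}^{\ell}q_i\log q_i = -\sum_{i=1}^{\ell}q_i\log\frac{q_i}{n} = n\sum_{i=1}^{\ell}\phi\Big(\frac{q_i}{n}\Big) $$ (valid since $\sum_{i=1}^{\ell} q_i=n$, terms with $q_i=0$ being read as $0$), the remaining square-root factors being absorbed into the constant $C(\ell)$.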
{\bf The case $k>1$.}
For every word $\omega$ of length $n\ge 2k$ on the alphabet $\mathfrak{L}$, and for $0\le j<k$, we let $n_j$ be the integral part of $\frac{n-j}{k}$, and $\omega^{(j)}$ the word $$ (\omega_{j+1}\ldots\omega_{j+k})\; (\omega_{j+k+1}\ldots\omega_{j+2k})\; \ldots\; (\omega_{j+(n_j-1)k+1}\ldots\omega_{j+n_jk}) $$ of length $n_j$ on the alphabet $B=\mathfrak{L}^k$.
Let now $\theta$ be a word of length $k$ on the alphabet $\mathfrak{L}$; we also consider $\theta$ as an element of $B$. One easily verifies that, for every word $\omega$ of length $n$ on the alphabet $\mathfrak{L}$, $$
\big| p(\theta|\omega)-\frac{1}{k}\sum_{j=0}^{k-1}
p(\theta|\omega^{(j)})\big| \le \frac{k}{n-2k+1} \ . $$ The function $\phi$ being uniformly continuous, we see that for sufficiently large $n$, and for every word $\omega$ of length $n$ on $\mathfrak{L}$, $$
\sum_{\theta\in B} \left| \phi\big( p(\theta|\omega)\big) -\phi\left(
\frac{1}{k}\sum_{j=0}^{k-1}p(\theta|\omega^{(j)})\right) \right| < \frac{\epsilon}{2} $$ and by convexity of $\phi$, $$ {\frac{1}{k}}\sum_{j=0}^{k-1}H_1(\omega^{(j)})= \frac{1}{
k}\sum_{j=0}^{k-1}\sum_{\theta\in B} \phi\big(p(\theta|\omega^{(j)})\big) \leq \frac{\epsilon}{2} +
\sum_{\theta\in \mathfrak{L}^k}\phi\big(p(\theta|\omega)\big) =\frac{\epsilon}{2} + H_k(\omega). $$ Thus, if $H_k(\omega)\le kh$, there exists a $j$ such that $H_1(\omega^{(j)})\le \frac{\epsilon}{2} + kh$.
Now, given $j$ and a word $u$ of length $n_j$ on the alphabet $B$, there exist $\ell^{n-n_jk}\leq \ell^{2k-2}$ words $\omega$ of length $n$ on $\mathfrak{L}$ such that $\omega^{(j)}=u$. Thus for sufficiently large $n$, by the first part of the proof, \begin{align*} {{\rm{card\,}}}\big\{ \omega\in \mathfrak{L}^n : H_k(\omega) \le kh \big\} & \le {\ell}^{2k-2}\sum_{j=0}^{k-1}{{\rm{card\,}}}\big\{ u\in B^{n_j}:H_1(u) \le \frac{\epsilon}{2} + kh\big\}\\ & \le {\ell}^{2k-2}\sum_{j=0}^{k-1}\exp\big(n_j(\epsilon+kh)\big)\\ & \le {\ell}^{2k-2}k\exp\big(n(\frac{\epsilon}{k}+h)\big) \le \exp\big(n(h+\epsilon)\big)\ . \end{align*} \end{proof}
Let $(X,T)$ be a compact dynamical system. As usual we denote by $M_T(X)$ the set of $T$-invariant probability measures on $X$, and by $M_T^{{\rm{erg}}}(X)$ the subset of ergodic measures.
We say that a partition $\alpha$ is finer than a cover $\mathcal{U}$ when every atom of $\alpha$ is contained in an element of $\mathcal{U}$. If $\alpha=\{A_1,\ldots,A_{\ell}\}$ is a partition of $X$, $x\in X$ and $N\in\mathbb{N}$, we write $\omega(\alpha,N,x)$ for the word of length $N$ on the alphabet $\mathfrak{L}=\{1,\ldots,{\ell}\}$ defined by $$ \omega(\alpha,N,x)_n=i\quad {\text{if}}\quad T^{n-1}x\in A_i,\qquad 1\le n\le N\ . $$
\begin{lem}Let $\mathcal{U}$ be a cover of $X$, $h=h_{{\rm top}}(\mathcal{U})$, $K\ge 1$ an integer, and $\{\alpha_l:1\le l\le K\}$ a finite sequence of partitions of $X$, all finer than $\mathcal{U}$. For every $\epsilon>0$ and sufficiently large $N$, there exists an $x\in X$ such that $$ H_k\big(\omega(\alpha_l,N,x)\big) \ge k(h-\epsilon) \ {\text {for every} }\ k,l\ {\text { with} }\ 1\leq k,\;l\le K. $$ \end{lem} \begin{proof} One can assume that all the partitions $\alpha_l$ have the same number of elements $\ell$ and we let $\mathfrak{L}=\{1,\ldots,\ell\}$. For $1\le k \le K$ and $N\ge K$, denote $$ \Omega(N,k)=\{\omega \in \mathfrak{L}^N :\ H_k(\omega) < k(h-\epsilon)\} \ . $$ By Lemma \ref{combin}, for sufficiently large $N$ $$ {{\rm{card\,}}}(\Omega(N,k))\leq \exp(N(h-\epsilon/2))\ {\text{ for all}}\ k\le K. $$ Let us choose such an $N$ which moreover satisfies $K^2 <\exp(N\epsilon/2)$. For $1\le k,l\le K$, let $$ Z(k,l)= \{x\in X :\omega(\alpha_l, N,x)\in\Omega(N,k)\} \ . $$
The set $Z(k,l)$ is the union of ${{\rm{card\,}}}(\Omega(N,k))$ elements of $(\alpha_l)_0^{N-1}$. Now this partition is finer than the cover $\mathcal{U}_0^{N-1}$, hence $Z(k,l)$ is covered by $$ {{\rm{card\,}}}(\Omega(N,k))\leq\exp(N(h-\epsilon/2)) $$ elements of $\mathcal{U}_0^{N-1}$. Finally, $$ \bigcup_{1\leq k,l\leq K} Z(k,l) $$ is covered by $K^2\exp(N(h-\epsilon/2))<\exp(Nh)$ elements of $\mathcal{U}_0^{N-1}$. As every subcover of $\mathcal{U}_0^{N-1}$ has at least $\exp(Nh)$ elements, $$ \bigcup_{1\le k,l\le K} Z(k,l)\neq X. $$ This completes the proof of the lemma. \end{proof}
\begin{proof}[Proof of Theorem \ref{mcvp}] Let $\mathcal{U}=\{U_1,\ldots,U_\ell\}$ be an open cover of $X$. It is clearly sufficient to consider Borel partitions $\alpha$ of $X$ of the form \begin{equation}\label{one*} \alpha=\{ A_1,\ldots,A_\ell\}\ {\text{ with}}\ A_i\subset U_i \ {\text{ for every} }\ i. \end{equation}
{\bf Step 1:\ }Assume first that $X$ is $0$-dimensional.
The family of partitions finer than $\mathcal{U}$, consisting of clopen sets and satisfying \eqref{one*} is countable; let $\{\alpha_l:l\ge 1\}$ be an enumeration of this family. According to the previous lemma, there exists a sequence of integers $N_K$ tending to $+\infty$ and a sequence $x_K$ of elements of $X$ such that: \begin{equation}\label{doub*} H_k\big(\omega(\alpha_l, N_K,x_K)\big) \ge k(h-\frac{1}{K}) \ {\text{ for every} }\ 1\le k,\;l \le K. \end{equation} Write $$ \mu_K=\frac{1}{N_K}\sum_{i=0}^{N_K-1} \delta_{T^ix_K} \ . $$ Replacing the sequence $\mu_K$ by a subsequence (this means replacing the sequence $N_K$ by a subsequence, and the sequence $x_K$ by the corresponding subsequence preserving the property \eqref{doub*}), one can assume that the sequence of measures $\mu_K$ converges weak$^*$ to a probability measure $\mu$. This measure $\mu$ is clearly $T$-invariant. Fix $k,l\ge 1$, and let $F$ be an atom of the partition $(\alpha_l)_0^{k-1}$, with name $\theta\in\{1,\ldots, \ell\}^k$. For every $K$ one has $$
\big|\mu_K(F)-p\big(\theta| \omega(\alpha_l , N_K,x_K)\big)
\big| \le \frac{2k}{N_K}. $$ Now as $F$ is clopen, \begin{align*} \mu(F)& =\lim_{K\to\infty}\mu_K(F)=
\lim_{K\to\infty} p\big(\theta| \omega(\alpha_l, N_K,x_K)\big)\; \ {\text{hence}}\\ \phi(\mu(F) )
&=\lim_{K\to\infty} \phi\big(p\big(\theta| \omega(\alpha_l, N_K,x_K)\big) \big) \end{align*} and, summing over $\theta\in \{1,\ldots,\ell\}^k$, one gets $$ H_\mu\big( (\alpha_l)_0^{k-1}\big) =\lim_{K\to\infty} H_k\big( \omega(\alpha_l , N_K,x_K)\big) \ge kh. $$ Finally, by sending $k$ to infinity one obtains $h_\mu(\alpha_l)\ge h$.
Now, as $X$ is $0$-dimensional, the family of partitions $\{\alpha_l\}$ is dense in the collection of Borel partitions of $X$ satisfying \eqref{one*}, with respect to the distance associated with $L^1(\mu)$. Thus, $h_\mu(\alpha)\ge h$ for every partition of this kind.
{\bf Step 2:\ }The general case.
Let us recall a well-known fact: there exists a topological system $(Y,T)$, where $Y$ is $0$-dimensional, and a continuous surjective map $\pi: Y\to X$ with $\pi\circ T=T\circ \pi$.
(Proof : as $X$ is a compact metric space, it is easy to construct a Cantor set $K$ and a continuous surjective $f: K\to X$. Put $$ Y=\{ y \in K^{\mathbb{Z}}:\ f(y_{n+1})=T f(y_n)\ {\text{ for every }}\ n\in\mathbb{Z}\} $$ and let $\pi: Y\to X$ be defined by $\pi(y)=f(y_0)$.
$Y$ is a closed subset of $K{^\mathbb{Z}}$ --- where the latter is equipped with the product topology --- and is invariant under the shift $T$ on $K^{\mathbb{Z}}$. It is easy to check that $\pi$ satisfies the required conditions.)
Let $\mathcal{V}=\pi^{-1}(\mathcal{U})=\{\pi^{-1}(U_1), \ldots,\pi^{-1}(U_\ell)\}$ be the preimage of $\mathcal{U}$ under $\pi$; one has $h_{{\rm top}}(\mathcal{V})=h_{{\rm top}}(\mathcal{U})=h$. By Step 1, there exists $\nu\in M_T(Y)$ such that $h_\nu(\mathcal{Q})\ge h$ for every Borel partition $\mathcal{Q}$ of $Y$ finer than $\mathcal{V}$. Let $\mu=\nu\circ\pi^{-1}$ be the measure which is the image of $\nu$ under $\pi$. One has $\mu\in M_T(X)$ and, for every Borel partition $\alpha$ of $X$ finer than $\mathcal{U}$, $\pi^{-1}(\alpha)$ is a Borel partition of $Y$ which is finer than $\mathcal{V}$, with $$ h_\mu(\alpha)=h_\nu\big( \pi^{-1}(\alpha)\big)\ge h. $$ This completes the proof of the theorem. \end{proof}
\begin{cor}\label{corol1} Let $(X,T)$ be a topological system, $\mathcal{U}$ an open cover of $X$ and $\alpha$ a Borel partition finer than $\mathcal{U}$; then there exists a $T$-invariant ergodic measure $\mu$ on $X$ such that $h_\mu(\alpha)\ge h_{{\rm top}}(\mathcal{U})$. \end{cor} \begin{proof} By Theorem \ref{mcvp} there exists $\mu\in M_T(X)$ with $h_{\mu}(\alpha)\ge h_{{\rm top}}(\mathcal{U})$; let $\mu=\int_\Omega \mu_\omega\,dm(\omega)$ be its ergodic decomposition. The corollary follows from the formula $$ \int h_{\mu_\omega}(\alpha)\,dm(\omega) =h_\mu(\alpha). $$ \end{proof}
\subsection{The variational principle for open covers} Given an open cover $\mathcal{U}$ of the dynamical system $(X,T)$, the results of the previous subsection imply the inequality $$ \sup_{\mu\in M_T(X)}\inf_{\alpha\succ\mathcal{U}} h_\mu(\alpha) \ge h_{{\rm top}}(\mathcal{U}). $$ We will now present a result which establishes the reverse inequality, so that in fact $$ \sup_{\mu\in M_T(X)}\inf_{\alpha\succ\mathcal{U}} h_\mu(\alpha) = h_{{\rm top}}(\mathcal{U}), $$ thus completing the proof of a variational principle for $\mathcal{U}$.
We first need a universal version of the Rohlin lemma.
\begin{prop}\label{univ-roh} Let $(X,T)$ be a (Polish) dynamical system and assume that there exists on $X$ a $T$-invariant aperiodic probability measure. Given a positive integer $n$ and a real number $\delta>0$ there exists a Borel subset $B\subset X$ such that the sets $B, TB, \dots, T^{n-1}B$ are pairwise disjoint and for every aperiodic $T$-invariant probability measure $\mu\in M_T(X)$ we have $\mu(\bigcup_{j=0}^{n-1} T^j B) > 1 - \delta$. \end{prop}
\begin{proof} Fix $N$ (it should be larger than $n/\delta$ for the required height $n$ and error $\delta$). The set of points that are periodic with period $\le N$ is closed. Any point in the complement (which by our assumption is nonempty) has, by continuity, a neighborhood $U$ with $N$ disjoint forward iterates. There is a countable subcover $\{U_m\}$ of such sets since the space is Polish. Take $A_1=U_1$ as a base for a {\em Kakutani sky-scraper} \begin{gather*} \{T^j A_1^k: j=0,\dots, k-1;\ k=1,2,\dots\},\\ A_1^k=\{x\in A_1: r_{A_1}(x)=k\}, \end{gather*} where $r_{A_1}(x)$ is the first integer $j\ge 1$ with $T^jx\in A_1$. Next set $$ B_1= \bigcup_{k\ge 1}\bigcup_{j=0}^{[(k-n-1)/n]} T^{jn}A_1^k, $$ so that the sets $B_1, TB_1,\dots, T^{n-1}B_1$ are pairwise disjoint.
Remove the full forward $T$ orbit of $U_1$ from the space and repeat to find $B_2$ using as a base for the next Kakutani sky-scraper $A_2$ defined as $U_2$ intersected with the part of $X$ not removed earlier. Proceed by induction to define the sequence $B_i, \ i=1,2,\dots$ and set $B=\bigcup_{i=1}^\infty B_i$. By Poincar\'e recurrence for any aperiodic invariant measure we exhaust the whole space except for $n$ iterates of the union $A$ of the bases of the Kakutani sky-scrapers. By construction $A=\bigcup_{m=1}^\infty A_m$ has $N$ disjoint iterates so that $\mu(A) \le 1/N$ for every $\mu\in M_T(X)$. Thus $B, TB,\dots, T^{n-1}B$ fill all but $n/N < \delta$ of the space uniformly over the aperiodic measures $\mu\in M_T(X)$. \end{proof}
Let $(X,T)$\ be a dynamical system and $\mathcal{U}=\{U_1,U_2, \dots U_{\ell}\}$ a finite open cover. We denote by $\mathcal{A}$ the collection of all finite Borel partitions $\alpha$ which refine $\mathcal{U}$, i.e. for every $A\in \alpha$ there is some $U\in \mathcal{U}$ with $A\subset U$. We set $$ \check{ h}(\mathcal{U})= \sup_{\mu\in M_T(X)}\inf_{\alpha\in\mathcal{A}} h_\mu(\alpha) \qquad {\text{and}} \qquad \hat h(\mathcal{U})= \inf_{\alpha\in\mathcal{A}}\sup_{\mu\in M_T(X)} h_\mu(\alpha). $$
\begin{prop}\label{bar-hat} Let $(X,T)$ be a dynamical system, $\mathcal{U}=\{U_1,U_2, \dots U_{\ell}\}$ a finite open cover, then \begin{enumerate} \item $\check{ h}(\mathcal{U}) \le \hat h(\mathcal{U})$, \item $\hat h(\mathcal{U}) \le h_{{\rm top}}(\mathcal{U})$. \end{enumerate} \end{prop}
\begin{proof} 1.\ Given $\nu\in M_T(X)$ and $\alpha\in \mathcal{A}$ we obviously have $h_\nu(\alpha) \le \sup_{\mu\in M_T(X)} h_\mu(\alpha)$. Thus $$ \inf_{\alpha\in\mathcal{A}}h_\nu(\alpha) \le \inf_{\alpha\in\mathcal{A}} \sup_{\mu\in M_T(X)} h_\mu(\alpha) = \hat h(\mathcal{U}), $$ and therefore also $\check{ h}(\mathcal{U}) \le \hat h(\mathcal{U})$.
2.\ Choose for $\epsilon> 0$ an integer $N$ large enough so that there is a subcover $\mathcal{D} \subset \mathcal{U}_0^{N-1}= \bigvee_{j=0}^{N-1} T^{-j}\mathcal{U}$ of cardinality at most $2^{N(h_{{\rm top}}(\mathcal{U})+\epsilon)}$. Apply Proposition \ref{univ-roh} to find a set $B$ such that the sets $B, TB, \dots ,T^{N-1}B$ are pairwise disjoint and for every $T$-invariant Borel probability measure $\mu\in M_T(X)$ we have $\mu(\bigcup_{j=0}^{N-1} T^j B) > 1 - \delta$. Consider $\mathcal{D}_B=\{D\cap B: D\in \mathcal{D}\}$, the restriction of the cover $\mathcal{D}$ to $B$, and find a partition $\beta$ of $B$ which refines $\mathcal{D}_B$. Thus each element $P\in \beta$ has the form $$ P=P_{i_0,i_1,\dots,i_{N-1}} \subset \left(\bigcap_{j=0}^{N-1} T^{-j} U_{i_j}\right) \cap B, $$ where $\bigcap_{j=0}^{N-1} T^{-j} U_{i_j}$ represents a typical element of $\mathcal{D}$. Next use the partition $\beta$ of $B$ to define a partition $\alpha=\{A_i:i=1,\dots,\ell\}$ of $\bigcup_{j=0}^{N-1} T^{j}B$ by assigning to the set $A_i$ all sets of the form $T^jP_{i_0,i_1,\dots,i_j,\dots,i_{N-1}}$ where $i_j=i$ ($j$ can be any number in $[0,N-1]$). On the remainder of the space $\alpha$ can be taken to be any partition refining $\mathcal{U}$.
Now if $N$ is large and $\delta$ small enough then \begin{equation}\label{estim} h_\mu(\alpha) \le h_{{\rm top}}(\mathcal{U})+2\epsilon. \end{equation} Here is a sketch of how one establishes this inequality. For $n \gg N$ we will estimate $H_\mu(\alpha_0^{n-1})$ by counting how many $(n,\alpha)$-names are needed to cover most of the space. We take $\delta>0$ so that $\sqrt{\delta}\ll \epsilon$. Denote $E = B \cup TB \cup \dots \cup T^{N-1}B$ (so that $\mu(E) > 1 - \delta$). Define $$ f(x)=\frac{1}{n}\sum_{i=0}^{n-1} \mathbf{1}_E(T^ix), $$ and observe that $0 \le f \le 1$ and $$ \int_X f(x)\, d\mu(x) > 1 - \delta, $$ since $T$ is measure preserving. Therefore $\int (1-f) < \delta$ and (by Markov's inequality) $$ \mu \{x : (1-f) \ge \sqrt{\delta}\} \le \frac{1}{\sqrt{\delta}} \int(1-f) \le \sqrt{\delta}. $$ It follows that for points $x$ in $G =\{f > 1- \sqrt{\delta}\}$, we have the property that $T^i x \in E$ for most $i$ in $[0,n-1]$.
Partition $G$ according to the values of $i$ for which $T^i x\in B$. This partition has at most $$ \sum_{j\le\frac{n}{N}}\binom{n}{j} \le \frac{n}{N}\binom{n}{n/N} $$ sets, a number which is exponentially small in $n$ (if $N$ is sufficiently large).
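To see why this count is negligible, note that $\frac 1n\log_2\sum_{j\le n/N}\binom{n}{j}$ is roughly the binary entropy $H(1/N)$, which tends to $0$ as $N\to\infty$. A quick numerical sketch (the parameters $n$ and $N$ are arbitrary):

```python
from math import comb, log2

def position_count_rate(n, N):
    """Exponential growth rate (base 2, per coordinate) of the number of
    ways to choose at most n/N positions out of n -- the count of possible
    return-time patterns to B in the argument above."""
    total = sum(comb(n, j) for j in range(n // N + 1))
    return log2(total) / n

rate_coarse = position_count_rate(2000, 5)    # roughly H(1/5)
rate_fine = position_count_rate(2000, 50)     # roughly H(1/50), much smaller
```

Increasing $N$ drives the rate to zero, which is exactly what makes this factor harmless in the entropy estimate.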
For a fixed choice of these values, the times at which the orbit is not in $E$ number at most $n\sqrt{\delta}$, and there we have fewer than $\ell^{\,n\sqrt{\delta}}$ choices.
Finally, when $T^i x \in B$ we have at most $2^{N(h_{{\rm top}}(\mathcal{U}) + \epsilon)}$ names for the next $N$ coordinates, so that the total contribution is $< 2^{N(h_{{\rm top}}(\mathcal{U}) + \epsilon)\frac{n}{N}} = 2^{n(h_{{\rm top}}(\mathcal{U}) + \epsilon)}$.
Collecting these estimations we find that $$ H(\alpha_0^{n-1}) < n(h_{{\rm top}}(\mathcal{U}) + 2\epsilon), $$ whence \eqref{estim}. This completes the proof of the proposition. \end{proof}
We finally obtain:
\begin{thm}[The variational principle for open covers]\label{cvp} Let $(X,T)$ be a dynamical system, let $\mathcal{U}=\{U_1,U_2, \dots, U_k\}$ be a finite open cover, and denote by $\mathcal{A}$ the collection of all finite Borel partitions $\alpha$ which refine $\mathcal{U}$. Then \begin{enumerate} \item for every $\mu \in M_T(X)$, $\inf_{\alpha\in\mathcal{A}} h_\mu(\alpha) \le h_{{\rm top}}(\mathcal{U})$; \item there exists an ergodic measure $\mu_0\in M_T(X)$ with $h_{\mu_0}(\alpha) \ge h_{{\rm top}}(\mathcal{U})$ for every Borel partition $\alpha \in \mathcal{A}$; \item $$ \check{ h}(\mathcal{U}) = \hat h(\mathcal{U}) = h_{{\rm top}}(\mathcal{U}). $$ \end{enumerate} \end{thm}
\begin{proof} 1.\ This assertion can be formulated as the inequality $\check{ h}(\mathcal{U}) \le h_{{\rm top}}(\mathcal{U})$, and it follows by combining the two parts of Lemma \ref{bar-hat}.
2.\ This is the content of Theorem \ref{mcvp}.
3.\ Combine assertions 1 and 2. \end{proof}
\subsection{Further results connecting topological and measure entropy}
Given a topological dynamical system $(X,T)$\ and a measure $\mu\in M_T(X)$, let $\pi:(X,\mathcal{X},\mu,T)\to (Z,\mathcal{Z},\eta,T)$ be the {\bf measure-theoretical\/} Pinsker factor of $(X,\mathcal{X},\mu,T)$, and let $\mu=\int_Z \mu_z\,d\eta(z)$ be the disintegration of $\mu$ over $(Z,\eta)$. Set $$ \lambda=\int_Z (\mu_z \times \mu_z)\,d\eta(z), $$ the relatively independent joining of $\mu$ with itself over $\eta$. Finally let $\Lambda_\mu={{\rm{supp\,}}}(\lambda)$ be the topological support of $\lambda$ in $X\times X$. Although the Pinsker factor is, in general, only defined measure theoretically, the measure $\lambda$ is a well defined element of $M_{T\times T}(X\times X)$. It was shown in Glasner \cite{Gl25} that $E_\mu= \Lambda_\mu\setminus \Delta$.
\begin{thm}\label{char-mu-ep} Let $(X,T)$\ be a topological dynamical system and let $\mu\in\linebreak[0] M_T(X)$. \begin{enumerate} \item $E_\mu=\Lambda_\mu\setminus \Delta$ and $\Lambda_\mu=E_\mu\cup \{(x,x): x\in {{\rm{supp\,}}}(\mu)\}$. \item ${{\rm{cls\,}}} E_\mu\subset \Lambda_\mu$. \item If $\mu$ is ergodic with positive entropy then ${{\rm{cls\,}}} E_\mu=\Lambda_\mu$. \end{enumerate} \end{thm}
One consequence of this characterization of the set of $\mu$-entropy pairs is a description of the set of entropy pairs of a product system. Recall that an $E$-system is a system for which there exists a probability invariant measure with full support.
\begin{cor}\label{prod-ep} Let $(X_1,T)$ and $(X_2,T)$ be two topological $E$-systems then: \begin{enumerate} \item $E_{X_1\times X_2}= (E_{X_1}\times E_{X_2}) \cup (E_{X_1}\times \Delta_{X_2}) \cup (\Delta_{X_1}\times E_{X_2})$. \item The product of two UPE systems is UPE. \end{enumerate} \end{cor}
Another consequence is:
\begin{cor}\label{ep-proximal} Let $(X,T)$ be a topological dynamical system, $P$ the proximal relation on $X$. Then: \begin{enumerate} \item For every $T$-invariant ergodic measure $\mu$ of positive entropy the set $P\cap E_\mu$ is residual in the $G_\delta$ set $E_\mu$ of $\mu$ entropy pairs. \item When $E_X\ne\emptyset$ the set $P\cap E_X$ is residual in the $G_\delta$ set $E_X$ of topological entropy pairs. \end{enumerate} \end{cor}
Given a dynamical system $(X,T)$, a pair $(x,x')\in X\times X$ is called a {\bf Li--Yorke pair\/} if it is a proximal pair but not an asymptotic pair. A set $S\subseteq X$ is called {\bf scrambled\/} if any pair of distinct points $\{x,y\}\subseteq S$ is a Li--Yorke pair. A dynamical system $(X,T)$ is called {\bf chaotic in the sense of Li and Yorke\/} if there is an uncountable scrambled set. In \cite{BGKM} Theorem \ref{char-mu-ep} is applied to answer the question of whether positive topological entropy implies Li--Yorke chaos, as follows.
\begin{thm} Let $(X,T)$ be a topological dynamical system. \begin{enumerate} \item If $(X,T)$ admits a $T$-invariant ergodic measure $\mu$ with respect to which the measure preserving system $(X,\mathcal{X},\mu,T)$ is not measure distal then $(X,T)$ is Li--Yorke chaotic. \item If $(X,T)$ has positive topological entropy then it is Li--Yorke chaotic. \end{enumerate} \end{thm}
In \cite{BHR} Blanchard, Host and Ruette show that in positive entropy systems there are also many asymptotic pairs.
\begin{thm}\label{BHR} Let $(X,T)$ be a topological dynamical system with positive topological entropy. Then \begin{enumerate} \item The set of points $x\in X$ for which there is some $x'\ne x$ with $(x,x')$ an asymptotic pair, has measure $1$ for every invariant probability measure on $X$ with positive entropy. \item There exists a probability measure $\nu$ on $X\times X$ such that $\nu$-a.e.\ pair $(x,x')$ is Li--Yorke and positively asymptotic; more precisely, for some $\delta >0$ \begin{gather*} \lim_{n\to+\infty}d(T^nx,T^nx')=0, \qquad \text{and}\\ \liminf_{n\to+\infty}d(T^{-n}x,T^{-n}x')=0, \qquad \limsup_{n\to+\infty}d(T^{-n}x,T^{-n}x')\ge\delta. \end{gather*} \end{enumerate} \end{thm}
\subsection{Topological determinism and zero entropy}
Following \cite{KSS} call a dynamical system $(X,T)$\ {\bf deterministic\/} if every $T$-factor is also a $T^{-1}$-factor. In other words, every closed equivalence relation $R\subset X\times X$ which has the property $TR\subset R$ also satisfies $T^{-1}R\subset R$. It is not hard to see that an equivalent condition is as follows. For every continuous real valued function $f\in C(X)$ the function $f\circ T^{-1}$ is contained in the smallest closed subalgebra $\mathcal{A}\subset C(X)$ which contains the constant function $\mathbf{1}$ and the collection $\{f\circ T^n: n\ge 0\}$. The folklore question of whether the latter condition implies zero entropy was open for a while. Here we note that the affirmative answer is a direct consequence of Theorem \ref{BHR} (see also \cite{KSS}).
\begin{prop}\label{non-inv-f} Let $(X,T)$ be a topological dynamical system such that there exists a $\delta>0$ and a pair $(x,x')\in X\times X$ as in Theorem \ref{BHR}.2. Then $(X,T)$\ is not deterministic. \end{prop}
\begin{proof} Set $$ R=\{(T^nx,T^nx'): n\geq 0\} \cup \{(T^nx',T^nx): n\geq 0\} \cup \Delta. $$ Clearly $R$ is a closed equivalence relation which is $T$-invariant but not $T^{-1}$-invariant. \end{proof}
\begin{cor} A topologically deterministic dynamical system has zero entropy. \end{cor}
\begin{proof} Let $(X,T)$ be a topological dynamical system with positive topological entropy; by Theorem \ref{BHR}.2. and Proposition \ref{non-inv-f} it is not deterministic. \end{proof}
{\large{\part{Meeting grounds}}}
\section{Unique ergodicity}\label{Sec-JK}
The topological system $(X,T)$ is called {\bf uniquely ergodic\/} if $M_T(X)$ consists of a single element $\mu$. If in addition $\mu$ is a full measure (i.e. ${{\rm{supp\,}}} \mu=X$) then the system is called {\bf strictly ergodic\/} (see \cite[Section 4.3]{HKat}). Since the ergodic measures are characterized as the extreme points of the Choquet simplex $M_T(X)$, it follows immediately that a uniquely ergodic measure is ergodic. For a while it was believed that strict ergodicity --- which is known to imply some strong topological consequences (like in the case of $\mathbb{Z}$-systems, the fact that {\em every\/} point of $X$ is a generic point and moreover that the convergence of the ergodic sums $\mathbb{A}_n(f)$ to the integral $\int f\, d\mu, \ f\in C(X)$ is {\em uniform\/}) --- entails some severe restrictions on the measure-theoretical behavior of the system. For example, it was believed that unique ergodicity implies zero entropy. Then examples were produced to show that this need not be the case: Furstenberg in \cite{Fur3} and Hahn and Katznelson in \cite{HK} gave examples of uniquely ergodic systems with positive entropy. Later, in 1970, R. I. Jewett surprised everyone with his outstanding result: every weakly mixing measure preserving $\mathbb{Z}$-system has a strictly ergodic model, \cite{Jew}. This was strengthened by Krieger \cite{K} who showed that even the weak mixing assumption is redundant and that the result holds for every ergodic $\mathbb{Z}$-system.
We recall the following well known characterizations of unique ergodicity (see \cite[Theorem 4.9]{G}).
\begin{prop}\label{unique-erg} Let $(X,T)$\ be a topological system. The following conditions are equivalent. \begin{enumerate} \item $(X,T)$\ is uniquely ergodic. \item $C(X)=\mathbb{R}+\bar B$, where $B=\{g-g\circ T: g\in C(X)\}$. \item For every continuous function $f\in C(X)$ the sequence of functions $$ \mathbb{A}_nf(x)=\frac{1}{n}\sum_{j=0}^{n-1}f(T^jx). $$ converges uniformly to a constant function. \item For every continuous function $f\in C(X)$ the sequence of functions $\mathbb{A}_n(f)$ converges pointwise to a constant function. \item For every function $f\in A$, for a collection $A\subset C(X)$ which linearly spans a uniformly dense subspace of $C(X)$, the sequence of functions $\mathbb{A}_n(f)$ converges pointwise to a constant function. \end{enumerate} \end{prop}
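The uniform-convergence condition (3) can be watched numerically for the simplest nontrivial uniquely ergodic system, an irrational rotation of the circle. A minimal sketch, with the choices of $\alpha$ and $f$ arbitrary (any irrational rotation number and any continuous $f$ would do):

```python
import math

def ergodic_average(f, x, alpha, n):
    """A_n f(x) for the circle rotation Tx = x + alpha (mod 1)."""
    return sum(f((x + j * alpha) % 1.0) for j in range(n)) / n

alpha = math.sqrt(2) - 1                     # irrational rotation number
f = lambda x: math.cos(2 * math.pi * x)      # integral against Lebesgue measure is 0

# Uniformity: the averages are uniformly small over many starting points.
worst = max(abs(ergodic_average(f, x / 50, alpha, 20000)) for x in range(50))
```

For the rotation the averages of this $f$ are geometric sums, so `worst` decays like $1/n$, uniformly in the starting point, illustrating condition (3).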
Given an ergodic dynamical system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ we say that the system $\hat \mathbf{X}=(\hat X,\hat \mathcal{X},\hat \mu,T)$ is a {\bf topological model\/} (or just a model) for $\mathbf{X}$ if $(\hat X,T)$ is a topological system, $\hat \mu\in M_T(\hat X)$ and the systems $\mathbf{X}$ and $\hat \mathbf{X}$ are measure theoretically isomorphic. Similarly we say that $\hat \pi:\hat \mathbf{X} \to \hat \mathbf{Y}$ is a {\bf topological model\/} for $\pi:\mathbf{X}\rightarrow \mathbf{Y}$ when $\hat \pi$ is a topological factor map and there exist measure theoretical isomorphisms $\phi$ and $\psi$ such that the diagram \begin{equation*}\label{eq-model2} \xymatrix { \mathbf{X} \ar[d]_{\pi} \ar[r]^{\phi} &
\hat \mathbf{X} \ar[d]^{\hat \pi} \\ \mathbf{Y}\ar[r]_{\psi} & \hat\mathbf{Y} } \end{equation*} is commutative.
\section{The relative Jewett-Krieger theorem} In this section we will prove the following generalization of the Jewett-Krieger theorem (see \cite[Theorem 4.3.10]{HKat}).
\begin{thm}\label{rel-jk} If $\pi:\mathbf{X}=(X,\mathcal{X},\mu,T)\rightarrow \mathbf{Y}=(Y,\mathcal{Y},\nu,T)$ is a factor map with $\mathbf{X}$ ergodic and $\hat\mathbf{Y}$ is a uniquely ergodic model for $\mathbf{Y}$ then there is a uniquely ergodic model $\hat \mathbf{X}$ for $\mathbf{X}$ and a factor map $\hat \pi:\hat \mathbf{X} \to \hat \mathbf{Y}$ which is a model for $\pi:\mathbf{X}\rightarrow \mathbf{Y}$. \end{thm}
In particular, taking $\mathbf{Y}$ to be the trivial one point system we get:
\begin{thm}\label{jk} Every ergodic system has a uniquely ergodic model. \end{thm}
Several proofs have been given of this theorem, e.g. see \cite{DGS} and \cite{BF}. We will sketch a proof which will serve the relative case as well.
\begin{proof}[Proof of theorem \ref{rel-jk}]
A key notion for this proof is that of a {\bf uniform} partition whose importance in this context was emphasized by G. Hansel and J.-P. Raoult, \cite{HR}. \begin{defn} A set $B \in \mathcal{X}$ is uniform if $$ \lim_{N \to \infty}\ {\esssup}_x
\left|\ \frac 1N\ \sum^{N-1}_0\ 1_B(T^ix)- \mu(B)\right|=0. $$ A partition $\mathcal{P}$ is uniform if, for all $N$, every set in $\bigvee^N_{-N}\ T^{-i}\mathcal{P}$ is uniform. \end{defn}
The connection between uniform sets, partitions and unique ergodicity lies in Proposition \ref{unique-erg}. It follows easily from that proposition that if $\mathcal{P}$ is a uniform partition, say into the sets $\{P_1,\ P_2, \ldots, P_a\}$, and we denote by $\mathcal{P}$ also the mapping that assigns to $x \in X$ the index $1 \leq i \leq a$ such that $x \in P_i$, then we can map $X$ to $\{1,\ 2, \ldots, a\}^\mathbb Z = A^\mathbb Z$ by: $$ \pi(x)= (\ldots, \mathcal{P} (T^{-1}x),\ \mathcal{P}(x),\ \mathcal{P}(Tx), \ldots, \mathcal{P}(T^nx), \ldots). $$ Pushing forward the measure $\mu$ by $\pi$ gives $\pi_*\mu$, and the closed support of this measure will be a closed shift invariant subset, say $E \subset A^\mathbb Z$. Now the indicator functions of finite cylinder sets span the continuous functions on $E$, and the fact that $\mathcal{P}$ is a uniform partition and Proposition \ref{unique-erg} combine to establish that $(E,\ \mathrm{shift})$ is uniquely ergodic. This will not be a model for $(X,\ \mathcal{X},\ \mu,\ T)$ unless $\bigvee^\infty_{- \infty}\ T^{-i}\mathcal{P}= \mathcal{X}$ modulo null sets, but in any case this does give a model for a nontrivial factor of $X$.
Our strategy for proving Theorem \ref{jk} is to first construct a single nontrivial uniform partition. Then this partition will be refined more and more via uniform partitions until we generate the entire $\sigma$-algebra $\mathcal{X}$. Along the way we will be showing how one can prove a relative version of the basic Jewett--Krieger theorem. Our main tool is the use of Rohlin towers. These are sets $B \in \mathcal{X}$ such that for some $N$ the sets $B,\ TB, \ldots, T^{N-1}B$ are disjoint while $\bigcup^{N-1}_0\ T^iB$ fills up most of the space. Actually we need Kakutani--Rohlin towers, which are like Rohlin towers but fill up the whole space. If the transformation does not have rational numbers in its point spectrum this is not possible with a single height, but two heights that are relatively prime, like $N$ and $N+1$, are certainly possible. Here is one way of doing this. The ergodicity of $(X,\ \mathcal{X},\ \mu,\ T)$ with $\mu$ nonatomic easily yields, for any $n$, the existence of a positive measure set $B$, such that $$ T^i B \cap B = \emptyset, \qquad i=1,\ 2, \ldots, n. $$ With $N$ given, choose $n \geq 10 \cdot N^2$ and find $B$ that satisfies the above. It follows that the return time $$ r_B(x)= \inf\{i>0:T^ix \in B\} $$ is greater than $10 \cdot N^2$ on $B$. Let $$ B_\ell = \{x:r_B(x)=\ell\}. $$ Since $\ell$ is large (if $B_\ell$ is nonempty) one can write $\ell$ as a nonnegative combination of $N$ and $N+1$, say $$ \ell = Nu_\ell + (N+1) v_\ell. $$ Now divide the column of sets $\{T^iB_\ell : 0 \leq i < \ell\}$ into $u_\ell$ blocks of size $N$ and $v_\ell$ blocks of size $N+1$ and mark the first layer of each of these blocks as belonging to $C$. Taking the union of these marked levels ($T^iB_\ell$ for suitably chosen $i$) over the various columns gives us a set $C$ such that $r_C$ takes only two values -- either $N$ or $N+1$ -- as required.
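The arithmetic behind the marking step is the elementary fact that, since $N$ and $N+1$ are coprime, every integer $\ell \ge N(N-1)$ can be written as $\ell = Nu + (N+1)v$ with $u,v\ge 0$; in the construction $\ell > 10N^2$, so this always applies. A sketch of the bookkeeping (the function name and the sample column height are hypothetical):

```python
def split_into_blocks(ell, N):
    """Write ell = N*u + (N+1)*v with u, v >= 0 (valid whenever
    ell >= N*(N-1)): since N+1 is congruent to 1 mod N, taking
    v = ell mod N blocks of length N+1 absorbs the residue."""
    v = ell % N
    u = (ell - (N + 1) * v) // N
    if u < 0:
        raise ValueError("ell must be at least N*(N-1)")
    return u, v

# A hypothetical column of height ell = 493 > 10 * N^2 with N = 7.
u, v = split_into_blocks(493, 7)
```

The greedy choice `v = ell % N` is one of many valid splittings; any representation with nonnegative coefficients serves the construction equally well.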
It will be important for us to have at our disposal K-R towers like this such that the columns of say the second K-R tower are composed of entire subcolumns of the earlier one. More precisely we want the base $C_2$ to be a subset of $C_1$ -- the base of the first tower. Although we are not sure that this can be done with just two column heights we can guarantee a bound on the number of columns that depends only on the maximum height of the first tower. Let us define formally: \begin{defn} A set $C$ will be called the base of a {\bf bounded} K-R tower if for some $N,\ \bigcup^{N-1}_0\ T^iC=X$ up to a $\mu$-null set. The least $N$ that satisfies this will be called the {\bf height} of $C$, and partitioning $C$ into sets of constancy of $r_C$ and viewing the whole space $X$ as a tower over $C$ will be called the K-R tower with columns the sets $\{T^iC_\ell:\ 0 \leq i <\ell\}$ for $C_\ell=\{x \in C: r_C(x)=\ell\}$. \end{defn}
Our basic lemma for nesting these K-R towers is: \begin{lem}\label{4N} Given a bounded K-R tower with base $C$ and height $N$, for any $n$ sufficiently large there is a bounded K-R tower with base $D$ contained in $C$ whose column heights are all at least $n$ and at most $n+4N$. \end{lem}
\begin{proof} We take an auxiliary set $B$ such that $T^i B \cap B= \emptyset$ for all $0 <i<10(n+2N)^2$ and look at the (in general unbounded) K-R tower over $B$. Using the ergodicity it is easy to arrange that $B \subset C$. Now let us look at a single column over $B_m$, with $m \geq 10\ (n+2N)^2$. We try to put down blocks of size $n+2N$ and $n+2N+1$ to fill up the tower. This can certainly be done, but we want our levels to belong to $C$. We can refine the column over $B_m$ into a finite number of columns so that each level is either entirely within $C$ or in $X \backslash C$. This is done by partitioning the base $B_m$ according to the finite partition: $$ \bigvee^{m-1}_{i=0}\ T^{-i}\{C,\ X \backslash C\}. $$ Then we move the edge of each block to the nearest level that belongs to $C$. The fact that the height of $C$ is $N$ means that we do not have to move any level more than $N-1$ steps, so we lose or gain at most $2N-2$; thus our blocks, with bases now all in $C$, have sizes in the interval $[n,\ n+4N]$ as required. \end{proof}
It is clear that this procedure can be iterated to give an infinite sequence of nested K-R towers with a good control on the variation in the heights of the columns. These can be used to construct uniform partitions in a pretty straightforward way, but we need one more lemma which strengthens slightly the ergodic theorem. We will want to know that if we look at a bounded K-R tower with base $C$ whose minimum column height is sufficiently large, then for most of the fibers of the tower (that is, the sets $\{T^ix:\ 0 \leq i<r_C(x)\}$ for $x \in C$) the ergodic averages of some finite set of functions are close to the integrals of those functions. It would seem that there is a problem because the base of the tower is a set of very small measure (less than the reciprocal of the minimum column height) and it may be that the ergodic theorem is not valid there. However, a simple averaging argument using an intermediate size gets around this problem. Here is the result, which we formulate for simplicity for a single function $f$:
\begin{lem}\label{erg} Let $f$ be a bounded function and $(X,\ \mathcal{X},\ \mu,\ T)$ ergodic. Given $\epsilon >0$, there is an $n_0$, such that if a bounded K-R tower with base $C$ has minimum column height at least $n_0$, then those fibers over $x \in C:\ \{T^ix:\ 0 \leq i < r_C(x)\}$ that satisfy $$
\left| \frac {1}{r_C(x)}\ \sum^{r_C(x)-1}_{i=0}\
f(T^ix)-\int_X\ fd\mu \right| <\epsilon $$ fill up at least $1-\epsilon$ of the space. \end{lem}
\begin{proof}
Assume without loss of generality that $|f| \le 1$. For a $\delta$ to be specified later find an $N$ such that the set of $y \in X$ which satisfy \begin{equation}\label{*}
\left|\frac 1N\ \sum^{N-1}_0\ f(T^iy)-\int fd \mu\right| < \delta \end{equation} has measure at least $1-\delta$. Let us denote the set of $y$ that satisfy \eqref{*} by $E$. Suppose now that $n_0$ is large enough so that $N/n_0$ is negligible -- say at most $\delta$. Consider a bounded K-R tower with base $C$ and with minimum column height greater than $n_0$. For each fiber of this tower, let us ask what fraction of its points lies in $E$. Those fibers with at least a $\sqrt{\delta}$ fraction of their points not in $E$ cannot fill up more than a $\sqrt{\delta}$ fraction of the space, because $\mu(E)>1-\delta$.
Fibers with more than $1-\sqrt{\delta}$ of their points lying in $E$ can be divided into disjoint blocks of size $N$ that cover all the points that lie in $E$. This is done by starting at $x\in C$, and moving up the fiber, marking the first point in $E$, skipping $N$ steps and continuing to the next point in $E$ until we exhaust the fiber. On each of these $N$-blocks the average of $f$ is within $\delta$ of its integral, and since $|f|\le 1$, if $\sqrt{ \delta } < \epsilon/10$ this guarantees that the average of $f$ over the whole fiber is within $\epsilon$ of its integral. \end{proof}
We are now prepared to construct uniform partitions. Start with some fixed nontrivial partition $\mathcal{P}_0$. By Lemma \ref{erg}, for any tall enough bounded K-R tower at least 9/10 of the columns will have the 1-block distribution of each $\mathcal{P}_0$-name within $\frac {1}{10}$ of the actual distribution. We build a bounded K-R tower with base $C_1(1)$ and heights $N_1,\ N_1+1$ with $N_1$ large enough for this to be valid. It is clear that we can modify $\mathcal{P}_0$ to $\mathcal{P}_1$ on the bad fibers so that now all fibers have a distribution of 1-blocks within $\frac {1}{10}$ of a fixed distribution. We call this new partition $\mathcal{P}_1$. Our further changes in $\mathcal{P}_1$ will not change the $N_1,\ N_1+1$ blocks that we see on fibers of a tower over our ultimate $C_1$. Therefore, we will get a uniformity on all blocks of size $100N_1$. The 100 is to get rid of the edge effects since we only know the distribution across fibers over points in $C_1(1)$.
Next we apply Lemma \ref{erg} to the 2-blocks in $\mathcal{P}_1$ with 1/100. We choose $N_2$ so large that $N_1/N_2$ is negligible and so that any uniform K-R tower with height at least $N_2$ has for at least 99/100 of its fibers a distribution of 2-blocks within $1 / 100$ of the global $\mathcal{P}_1$ distribution. Apply Lemma \ref{4N} to find a uniform K-R tower with base $C_2(2) \subset C_1(1)$ such that its column heights are between $N_2$ and $N_2+4N_1$. For the fibers with good $\mathcal{P}_1$ distribution we make no change. For the others, we copy on most of the fiber (except for the top $10 \cdot N^2_1$ levels) the corresponding $\mathcal{P}_1$-name from one of the good columns. In this copying we also copy the $C_1(1)$-name so that we preserve the blocks. The final $10 \cdot N^2_1$ spaces are filled in with $N_1,\ N_1+1$ blocks. This gives us a new base for the first tower that we call $C_1(2)$, and a new partition $\mathcal{P}_2$. The features of $\mathcal{P}_2$ are that all its fibers over $C_1(2)$ have good (up to $1 /10$) 1-block distribution, and all its fibers over $C_2(2)$ have good (up to $1 /100$) 2-block distributions. These will not change in the subsequent steps of the construction.
Note too that the change from $C_1(1)$, to $C_1(2)$, could have been made arbitrarily small by choosing $N_2$ sufficiently large.
There is one problem in trying to carry out the next step, and that is the filling in of the top relatively small portion of the bad fibers after copying most of a good fiber. We cannot copy an exact good fiber because it is conceivable that no fiber with the precise height of the bad fiber is good. The filling in is possible if the column heights of the previous level are relatively prime. This was the case in step 2, because in step 1 we began with a K-R tower with heights $N_1,\ N_1+1$. However, Lemma \ref{4N} does not guarantee relatively prime heights. This is automatically the case if there is no rational spectrum. If there are only a finite number of rational points in the spectrum then we could have made our original columns with heights $LN_1,\ L(N_1+1)$, where $L$ is the largest integer such that $T^L$ is not ergodic, and then worked with multiples of $L$ all the time. If the rational spectrum is infinite then we get an infinite group rotation factor and this gives us the required uniform partition without any further work.
With this point understood it is now clear how one continues to build a sequence of partitions $\mathcal{P}_n$ that converge to $\mathcal{P}$ and $C_i(k) \rightarrow C_i$ such that the $\mathcal{P}$-names of all fibers over points in $C_i$ have a good (up to $1 / {10^i}$) distribution of $i$-blocks. This gives the uniformity of the partition $\mathcal{P}$ as required and establishes
\begin{prop} Given any $\mathcal{P}_0$ and any $\epsilon >0$ there is a uniform partition $\mathcal{P}$ such that $d(\mathcal{P}_0,\mathcal{P})<\epsilon$ in the $\ell_1$-metric on partitions. \end{prop}
As we have already remarked the uniform partition that we have constructed gives us a uniquely ergodic model for the factor system generated by this partition. We need now a relativized version of the construction we have just carried out. We formulate this as follows:
\begin{prop} Given a uniform partition $\mathcal{P}$ and an arbitrary partition $\mathcal{Q}_0$ that refines $\mathcal{P}$, for any $\epsilon >0$ there is a uniform partition $\mathcal{Q}$ that also refines $\mathcal{P}$ and satisfies $$
\|\mathcal{Q}_0 - \mathcal{Q}\|_1<\epsilon. $$ \end{prop}
Even though we write things for finite alphabets, everything makes good sense for countable partitions as well and the arguments need no adjusting. However, the metric used to compare partitions becomes important since not all metrics on $\ell_1$ are equivalent. We use always: $$
\|\mathcal{Q} - \overline {\mathcal{Q}}\|_1=\sum_j\ \int_X \
|1_{Q_j}-1_{\overline Q_j}| d \mu $$ where the partitions $\mathcal{Q}$ and $\overline {\mathcal{Q}}$ are ordered partitions into sets $\{Q_j\},\ \{\overline Q _j\}$ respectively. We also assume that the $\sigma$-algebra generated by the partition $\mathcal{P}$ is nonatomic -- otherwise there is no real difference between what we did before and what has to be done here.
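For a finite space the metric just defined reduces to summing the measures of the symmetric differences $Q_j\,\triangle\,\overline Q_j$, since $\int |1_{Q_j}-1_{\overline Q_j}|\,d\mu$ is exactly $\mu(Q_j\,\triangle\,\overline Q_j)$. A minimal sketch (the atoms and weights below are made up):

```python
def partition_distance(Q, Qbar, mu):
    """||Q - Qbar||_1 for ordered partitions of a finite space:
    the sum over j of mu(Q_j symmetric-difference Qbar_j)."""
    return sum(sum(mu[a] for a in Qj ^ Qbarj) for Qj, Qbarj in zip(Q, Qbar))

mu = {0: 0.25, 1: 0.25, 2: 0.5}   # hypothetical atom weights
Q = [{0}, {1, 2}]                 # ordered partition {Q_1, Q_2}
Qbar = [{0, 1}, {2}]              # move atom 1 from the second cell to the first
dist = partition_distance(Q, Qbar, mu)   # atom 1 counts in both symmetric differences
```

Note that the ordering of the cells matters: the same two partitions listed in different orders can be far apart in this metric, which is why the text insists on ordered partitions.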
We will try to follow the same proof as before. The problem is that when we redefine $\mathcal{Q}_0$ to $\mathcal{Q}$ we are not allowed to change the $\mathcal{P}$-part of the name of points. That greatly restricts us in the kind of names we are allowed to copy on columns of K-R towers and it is not clear how to proceed. The way to overcome the difficulty is to build the K-R towers inside the uniform algebra generated by $\mathcal{P}$. This being done we look, for example, at our first tower and the first change we wish to make in $\mathcal{Q}_0$. We divide the fibers into a finite number of columns according to the height and according to the $\mathcal{P}$-name.
Next each of these is divided into subcolumns, called $\mathcal{Q}_0$-columns, according to the $\mathcal{Q}_0$-names of points. If a $\mathcal{P}$-column has some good (i.e. good 1-block distribution of $\mathcal{Q}_0$-names) $\mathcal{Q}_0$-subcolumn it can be copied onto all the ones that are not good. Next notice that a $\mathcal{P}$-column that contains not even one good $\mathcal{Q}_0$-name is a set defined in the uniform algebra. Therefore if these sets have small measure then for some large enough $N$, uniformly over the whole space, we will not encounter these bad columns too many times.
In brief the solution is to change the nature of the uniformity. We do not make all of the columns of the K-R tower good -- but we make sure that the bad ones are seen infrequently, uniformly over the whole space. With this remark the proof of the proposition is easily accomplished using the same nested K-R towers as before -- {\em but inside the uniform algebra}.
Finally the J-K theorem is established by constructing a refining sequence of uniform partitions and looking at the inverse limit of the corresponding topological spaces. Notice that if $\mathcal{Q}$ refines $\mathcal{P}$, and both are uniform, then there is a natural homeomorphism from $X_{\mathcal{Q}}$ onto $X_{\mathcal{P}}$. The way in which the theorem is established also yields a proof of the relative J-K theorem, Theorem \ref{rel-jk}. \end{proof}
Using similar methods E. Lehrer \cite{Leh} shows that in the Jewett-Krieger theorem one can find, for any ergodic system, a strictly ergodic model which is topologically mixing.
\section{Models for other commutative diagrams} One can describe Theorem \ref{rel-jk} as asserting that every diagram of ergodic systems of the form $\mathbf{X}\to\mathbf{Y}$ has a strictly ergodic model. What can we say about more complicated commutative diagrams? A moment's reflection will show that a repeated application of Theorem \ref{rel-jk} proves the first assertion of the following theorem.
\begin{thm}\label{CD} Any commutative diagram in the category of ergodic $\mathbb{Z}$ dynamical systems with the structure of an inverted tree,\ i.e. no portion of it looks like \begin{equation}\label{Z>>XY} \xymatrix { & \mathbf{Z} \ar[dl]_\alpha \ar[dr]^\beta & \\ \mathbf{X} & & \mathbf{Y} } \end{equation} has a strictly ergodic model. On the other hand there exists a diagram of the form \eqref{Z>>XY} that does not admit a strictly ergodic model. \end{thm}
For the proof of the second assertion we need the following theorem.
\begin{thm}\label{top-dis>meas-dis} If $(Z,\eta,T)$ is a strictly ergodic system and $(Z,T)\overset{\alpha}{\to} (X,T)$ and $(Z,T)\overset{\beta}{\to}(Y,T)$ are topological factors such that $\alpha^{-1}(U)\cap\beta^{-1}(V)\ne\emptyset$ whenever $U\subset X$ and $V\subset Y$ are nonempty open sets, then the measure-preserving systems $\mathbf{X}=(X,\mathcal{X},\mu,T)$ and $\mathbf{Y}=(Y,\mathcal{Y},\nu,T)$ are measure-theoretically disjoint. In particular this is the case if the systems $(X,T)$\ {} and $(Y,T)$\ {} are topologically disjoint. \end{thm}
\begin{proof} It suffices to show that the map $\alpha\times \beta:Z\to X\times Y$ is onto since this will imply that the topological system $(X\times Y,T)$ is strictly ergodic. We establish this by showing that the measure $\lambda=(\alpha\times\beta)_*(\eta)$ (a joining of $\mu$ and $\nu$) is full;\ i.e. that it assigns positive measure to every set of the form $U\times V$ with $U$ and $V$ as in the statement of the theorem. In fact, since by assumption $\eta$ is full we have $$ \lambda(U\times V)= \eta((\alpha\times\beta)^{-1}(U\times V))= \eta(\alpha ^{-1}(U) \cap \beta^{-1}(V))>0. $$ This completes the proof of the first assertion. The second follows since topological disjointness of $(X,T)$\ {} and $(Y,T)$\ {} implies that $\alpha\times \beta:Z\to X\times Y$ is onto. \end{proof}
\begin{proof}[Proof of Theorem \ref{CD}] We only need to prove the last assertion. Take $\mathbf{X}=\mathbf{Y}$ to be any nontrivial weakly mixing system, then $\mathbf{X}\times\mathbf{X}$ is ergodic and the diagram \begin{equation} \xymatrix { & \mathbf{X}\times\mathbf{X} \ar[dl]_{p_1} \ar[dr]^{p_2} & \\ \mathbf{X} & & \mathbf{X} } \end{equation} is our counterexample. In fact, if \eqref{Z>>XY} admits a uniquely ergodic model in this situation then it is easy to establish that the condition in Theorem \ref{top-dis>meas-dis} is satisfied, and we apply this theorem to conclude that $\mathbf{X}$ is disjoint from itself. Since in a nontrivial system $\mu\times\mu$ and ${{\rm{gr\,}}}(\mu,{{\rm{id}}})$ are different ergodic joinings, this contradiction proves our assertion. \end{proof}
\section{The Furstenberg-Weiss almost 1-1 extension theorem}
It is well known that in a topological measure space one can have sets that are large topologically but small in the sense of the measure. In topological dynamics when $(X,T)$\ is a factor of $(Y,T)$\ and the projection $\pi: Y \to X$ is one to one on a topologically large set (i.e. the complement of a set of first category), one calls $(Y,T)$\ an {\bf almost 1-1 extension} of $(X,T)$\ and considers the two systems to be very closely related. Nonetheless, in view of the opening sentence, it is possible that the measure theory of $(Y,T)$\ will be quite different from the measure theory of $(X,T)$\ . The following theorem realizes this possibility in an extreme way (see \cite{FW15}).
\begin{thm} Let $(X,T)$\ be a non-periodic minimal dynamical system, and let $\pi: Y \to X$ be an extension of $(X,T)$\ with $(Y,T)$\ topologically transitive and $Y$ a compact metric space. Then there exists an almost 1-1 minimal extension, $\overline{\pi}:(\overline{Y},T)\to (X,T)$ and a Borel subset $Y_0\subset Y$ with a Borel measurable map $\theta: Y_0 \to \overline{Y}$ satisfying (1) $\theta T = T \theta$, (2) $\overline{\pi}\theta =\pi$, (3) $\theta$ is 1-1 on $Y_0$, (4) $\mu(Y_0)=1$ for any $T$-invariant measure $\mu$ on $Y$. \end{thm}
In words, one can find an almost 1-1 minimal extension of $X$ such that the measure theoretic structure is as rich as that of an arbitrary topologically transitive extension of $X$.
An almost 1-1 extension of a minimal equicontinuous system is called an {\bf almost automorphic} system. The next corollary demonstrates the usefulness of this modelling theorem. Other applications appeared e.g. in \cite{GW35} and \cite{DL}.
\begin{cor} Let $(X,\mathcal{X},\mu,T)$ be an ergodic measure preserving transformation with infinite point spectrum defined by $(G,\rho)$ where $G$ is a compact monothetic group $G=\overline{\{\rho^n\}}_{n\in \mathbb{Z}}$. Then there is an almost 1-1 minimal extension of $(G,\rho)$ (i.e. a minimal almost automorphic system), $(\tilde Z,\sigma)$, and an invariant measure $\nu$ on $\tilde Z$ such that $(\tilde Z,\sigma,\nu)$ is isomorphic to $(X,\mathcal{X},\mu,T)$. \end{cor}
\section{Cantor minimal representations}
A {\em Cantor minimal dynamical system} is a minimal topological system $(X,T)$\ where $X$ is the Cantor set. Two Cantor minimal systems $(X,T)$\ and $(Y,S)$ are called {\em orbit equivalent\/} (OE) if there exists a homeomorphism $F: X\to Y$ such that $F(\mathcal{O}_T(x))=\mathcal{O}_S(Fx)$ for every $x\in X$. Equivalently: there are functions $n:X\to \mathbb{Z}$ and $m: X \to \mathbb{Z}$ such that for every $x\in X$ $F(Tx)=S^{n(x)}(Fx)$ and $F(T^{m(x)}x)=S(Fx)$. An old result of M. Boyle implies that the requirement that, say, the function $n(x)$ be continuous already implies that the two systems are {\em flip conjugate\/}; i.e. $(Y,S)$ is isomorphic either to $(X,T)$\ or to $(X,T^{-1})$. However, if we require that both $n(x)$ and $m(x)$ have at most one point of discontinuity we get the new and, as it turns out, useful notion of {\em strong orbit equivalence\/} (SOE). A complete characterization of both OE and SOE of Cantor minimal systems was obtained by Giordano, Putnam and Skau \cite{GPS} in terms of an algebraic invariant of Cantor minimal systems called the {\em dimension group}. (See \cite{Glasner} for a review of these results.)
We conclude this section with the following remarkable theorem, due to N. Ormes \cite{Ormes}, which simultaneously generalizes the theorems of Jewett and Krieger and a theorem of Downarowicz \cite{Dow} which, given any Choquet simplex $Q$, provides a Cantor minimal system $(X,T)$\ with $M_T(X)$ affinely homeomorphic to $Q$. (See also Downarowicz and Serafin \cite{DS}, and Boyle and Downarowicz \cite{BD}.)
\begin{thm}\label{ort} \begin{enumerate} \item Let $(\Omega,\mathcal{B},\nu,S)$ be an ergodic, non-atomic, probability measure preserving, dynamical system. Let $(X,T)$ be a Cantor minimal system such that whenever $\exp(2\pi i/p)$ is a (topological) eigenvalue of $(X,T)$\ for some $p\in \mathbb{N}$ it is also a (measurable) eigenvalue of $(\Omega,\mathcal{B},\nu,S)$. Let $\mu$ be any element of the set of extreme points of $M_T(X)$. Then, there exists a homeomorphism $T':X \to X$ such that (i) $T$ and $T'$ are strong orbit equivalent, (ii) $(\Omega,\mathcal{B},\nu,S)$ and $(X,\mathcal{X},\mu,T')$ are isomorphic as measure preserving dynamical systems. \item Let $(\Omega,\mathcal{B},\nu,S)$ be an ergodic, non-atomic, probability measure preserving, dynamical system. Let $(X,T)$ be a Cantor minimal system and $\mu$ any element of the set of extreme points of $M_T(X)$. Then, there exists a homeomorphism $T':X \to X$ such that (i) $T$ and $T'$ are orbit equivalent, (ii) $(\Omega,\mathcal{B},\nu,S)$ and $(X,\mathcal{X},\mu,T')$ are isomorphic as measure preserving dynamical systems. \item Let $(\Omega,\mathcal{B},\nu,S)$ be an ergodic, non-atomic, probability measure preserving dynamical system. Let $Q$ be any Choquet simplex and $q$ an extreme point of $Q$. Then there exists a Cantor minimal system $(X,T)$\ and an affine homeomorphism $\phi: Q \to M_T(X)$ such that, with $\mu=\phi(q)$, $(\Omega,\mathcal{B},\nu,S)$ and $(X,\mathcal{X},\mu,T)$ are isomorphic as measure preserving dynamical systems. \end{enumerate} \end{thm}
\section{Other related theorems} Let us mention a few more striking representation results.
For the first one recall that a topological dynamical system $(X,T)$ is said to be \textbf{prime} if it has no non-trivial factors. A similar definition can be given for measure preserving systems. There it is easy to see that a prime system $(X,\mathcal{X},\mu,T)$ must have zero entropy. It follows from a construction in \cite{SW} that the same holds for topological entropy, namely any system $(X,T)$ with positive topological entropy has non-trivial factors. In \cite{Wei36} it is shown that any ergodic zero entropy dynamical system has a minimal model $(X,T)$ with the property that any pair of points $(u,v)$ not on the same orbit has a dense orbit in $X \times X$. Such minimal systems are necessarily prime, and thus we have the following result:
\begin{thm}\label{prime} An ergodic dynamical system has a topological, minimal, prime model iff it has zero entropy. \end{thm}
The second theorem, due to Glasner and Weiss \cite{GW35}, treats the positive entropy systems.
\begin{thm} An ergodic dynamical system has a strictly ergodic, UPE model iff it has positive entropy. \end{thm}
We also have the following surprising result which is due to Weiss \cite{Wei35}.
\begin{thm} There exists a minimal metric dynamical system $(X,T)$ with the property that for every ergodic probability measure preserving system $(\Omega,\mathcal{B},\mu,S)$ there exists a $T$-invariant Borel probability measure $\nu$ on $X$ such that the systems $(\Omega,\mathcal{B},\mu,S)$ and $(X,\mathcal{X},\nu,T)$ are isomorphic. \end{thm}
In \cite{Lis2} E. Lindenstrauss proves the following: \begin{thm}\label{distal-model} Every ergodic measure distal $\mathbb{Z}$-system $\mathbf{X}=(X,\mathcal{X},\mu,T)$ can be represented as a minimal topologically distal system $(X,T,\mu)$ with $\mu\in M_T^{{\rm{erg}}}(X)$. \end{thm} This topological model need not, in general, be uniquely ergodic. In other words there are measure distal systems for which no uniquely ergodic topologically distal model exists.
\begin{prop} \begin{enumerate} \item There exists an ergodic non-Kronecker measure distal system $(\Omega,\mathcal{F},m,T)$ with nontrivial maximal Kronecker factor $(\Omega_0,\mathcal{F}_0,m_0,T)$ such that (i) the extension $(\Omega,\mathcal{F},m,T)\to (\Omega_0,\mathcal{F}_0,m_0,T)$ is finite to one a.e. and (ii) every nontrivial factor map of $(\Omega_0,\mathcal{F}_0,m_0,T)$ is finite to one. \item A system $(\Omega,\mathcal{F},m,T)$ as in part 1 does not admit a topologically distal strictly ergodic model. \end{enumerate} \end{prop}
\begin{proof} 1.\ Irrational rotations of the circle as well as adding machines are examples of Kronecker systems satisfying condition (ii). There are several constructions in the literature of ergodic, non-Kronecker, measure distal, two point extensions of these Kronecker systems. A well known explicit example is the strictly ergodic Morse minimal system.
2.\ Assume to the contrary that $(X,\mu,T)$ is a distal strictly ergodic model for $(\Omega,\mathcal{F},m,T)$. Let $(Z,T)$ be the maximal equicontinuous factor of $(X,T)$ and let $\eta$ be the unique invariant probability measure on $Z$. Since by assumption $(X,\mu,T)$ is not Kronecker it follows that $\pi: X \to Z$ is not one to one. By Furstenberg's structure theorem for minimal distal systems $(Z,T)$ is nontrivial and moreover there exists an intermediate extension $X \to Y \overset{\sigma}{\to}Z$ such that $\sigma$ is an isometric extension. A well known construction implies the existence of a minimal group extension $\rho:(\tilde Y,T) \to (Z,T)$, with compact fiber group $K$, such that the following diagram is commutative (see Section \ref{Sec-distal} above). We denote by $\nu$ the unique invariant measure on $Y$ (the image of $\mu$) and let $\tilde \nu$ be an ergodic measure on $\tilde Y$ which projects onto $\nu$. The dotted arrows denote measure theoretic factor maps.
\begin{equation*} \xymatrix { & (X,\mu)\ar@{.>}[dl]\ar[dd]_{\pi} \ar[dr]& & &\\ (\Omega_0,m_0) \ar@{.>}[dr] & &(Y,\nu)\ar[dl]_{\sigma} & (\tilde Y,\tilde\nu)\ar[l]_{\phi} \ar[dll]^{\rho, K}\\ & (Z,\eta)& & & } \end{equation*}
Next form the measure $ \theta = \int_K R_k\tilde\nu\, dm_K,$ where $m_K$ is Haar measure on $K$ and for each $k\in K$, $R_k$ denotes right translation by $k$ on $\tilde Y$ (an automorphism of the system $(\tilde Y,T)$). We still have $\phi(\theta)=\nu$.
A well known theorem in topological dynamics (see \cite{SS}) implies that a minimal distal finite to one extension of a minimal equicontinuous system is again equicontinuous and since $(Z,T)$ is the maximal equicontinuous factor of $(X,T)$ we conclude that the extension $\sigma: Y \to Z$ is not finite to one. Now the fibers of the extension $\sigma$ are homeomorphic to a homogeneous space $K/H$, where $H$ is a closed subgroup of $K$. Considering the measure disintegration $\theta = \int_Z \theta_z\, d\eta(z)$ of $\theta$ over $\eta$ and its projection $\nu = \int_Z \nu_z\, d\eta(z)$, the disintegration of $\nu$ over $\eta$, we see that a.e. $\theta_z \equiv m_K$ and $\nu_z \equiv m_{K/H}$. Since $K/H$ is infinite we conclude that the {\em measure theoretical extension} $\sigma: (Y,\nu) \to (Z,\eta)$ is not finite to one. However considering the dotted part of the diagram we arrive at the opposite conclusion. This conflict concludes the proof of the proposition. \end{proof}
In \cite{OW} Ornstein and Weiss introduced the notion of tightness for measure preserving systems and the analogous notion of mean distality for topological systems.
\begin{defn} Let $(X,T)$\ be a topological system. \begin{enumerate}
\item
A pair $(x,y)$ in $X\times X$ is {\bf mean proximal\/}
if for some (hence any) compatible metric $d$
$$
\limsup_{n\to\infty}\frac{1}{2n+1}\sum_{i=-n}^{n}
d(T^i x, T^i y) = 0.
$$
If this $\limsup$ is positive the pair is called
{\bf mean distal\/}.
\item
The system $(X,T)$\ is {\bf mean distal\/} if every pair $(x,y)$ with $x\ne y$
is mean distal.
\item
Given a $T$-invariant probability measure $\mu$ on $X$,
the triple $(X,\mu,T)$ is called {\bf tight\/} if there is
a $\mu$-conull set $X_0\subset X$ such that every pair of
distinct points $(x,y)$ in $X_0\times X_0$ is mean distal. \end{enumerate} \end{defn}
Ornstein and Weiss show that tightness is in fact a property of the measure preserving system $(X,\mu,T)$ (i.e. if the measure system $(X,\mathcal{X},\mu,T)$ admits one tight model then every topological model is tight). They obtain the following results.
\begin{thm} \mbox{} \begin{enumerate}
\item
If the entropy of $(X,\mu,T)$ is positive and finite
then $(X,\mu,T)$ is not tight.
\item
There exist strictly ergodic non-tight systems
with zero entropy. \end{enumerate} \end{thm}
Surprisingly, the proof in \cite{OW} of the non-tightness of a positive entropy system does not work when the entropy is infinite; this case is still open.
J. King gave an example of a tight system with a non-tight factor. Following this, he and Weiss \cite{OW} established the following result. Note that this theorem implies that tightness and mean distality are not preserved by factors.
\begin{thm}
If $(X,\mathcal{X},\mu,T)$ is ergodic with zero entropy
then there exists a mean-distal system $(Y,\nu,S)$ which
admits $(X,\mathcal{X},\mu,T)$ as a factor. \end{thm}
\end{document} |
\begin{document}
\title{Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs}
\author{Wim van Dam} \email{[email protected]} \thanks{} \affiliation{ Department of Computer Science, University of California, Santa Barbara, CA 93106, USA\\ Department of Physics, University of California, Santa Barbara, CA 93106, USA } \author{Mark Howard} \email{[email protected]} \affiliation{ Department of Physics, University of California, Santa Barbara, CA 93106, USA } \date{\today}
\begin{abstract} We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamio\l kowski states, entanglement witnesses and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators. \end{abstract}
\pacs{03.65.Aa, 03.67.-a} \maketitle
\section{Introduction}
One of the most important and long-studied tools in quantum information theory is that of mutually unbiased bases (MUBs). Two orthonormal bases $\mathcal{A}=\{\ket{a}\}$ and $\mathcal{B}=\{\ket{b}\}$ in a Hilbert space of dimension $d$ are said to be mutually unbiased when $|\langle a \vert b\rangle|=1/\sqrt{d}$, i.e. certainty of a measurement outcome in one basis implies complete uncertainty of a measurement outcome in the other. This is the finite-dimensional analogue of the complementarity of position and momentum in continuous variable quantum mechanics. Typically, MUBs are most useful in Hilbert spaces, $\mathcal{H}_d$, of prime power dimension ($d=p^k$), for which \emph{complete} sets of MUBs are known to exist and a number of construction methods are available. Ignoring the trace component (which is often known or unimportant), decomposing a $d \times d$ Hermitian operator (e.g. a density matrix) requires $d^2-1$ parameters, which necessitates measuring $d+1$ different observables (since each observable yields $d-1$ independent probabilities). Complete sets of MUBs are sets of $d+1$ orthonormal bases, possessing the desirable properties of being both mutually unbiased with respect to one another and also being minimal in terms of the number of observables required (hence this is considered the optimal tomography set-up \cite{Adamson:2010,Wootters:1989}).
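As a quick illustration of the definition (our own sketch, not taken from this paper), the computational basis and the discrete Fourier basis are a standard pair of mutually unbiased bases in any dimension $d$; every cross overlap has modulus exactly $1/\sqrt{d}$:

```python
import numpy as np

d = 5
comp = np.eye(d)  # computational basis; column a is |a>
# discrete Fourier basis: column b has entries e^{2 pi i j b / d} / sqrt(d)
fourier = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)

# every overlap has modulus 1/sqrt(d), so the two bases are mutually unbiased
for a in range(d):
    for b in range(d):
        overlap = abs(np.vdot(comp[:, a], fourier[:, b]))
        assert np.isclose(overlap, 1 / np.sqrt(d))
```

The same check applied to two columns of the *same* basis would instead give overlap $\delta_{ab}$, which is the sense in which unbiasedness is the opposite extreme of orthogonality.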
Our work here concerns the construction of MUBs in Hilbert space of dimension $d=p^2$ that are deliberately incomplete in that they contain only $p^2-1$ observables -- insufficient for parameterizing all operators in $\mathcal{H}_{p^2}$, but sufficient and \emph{minimal} for the description of Hermitian operators that are local maximally mixed (LMM) \cite{Baumgartner:2007}. LMM operators, $W$, defined on a bipartite system $\mathcal{H}_{p^2}=\mathbb{C}^p\otimes \mathbb{C}^p$ are those for which $\text{Tr}_1 (W)=\text{Tr}_2 (W)\propto\mathbb{I}$. This class of operators is surprisingly broad. The Jamio\l kowski isomorphism, for example, tells us that any unital map $\mathcal{E}$ acting on $\mathcal{H}_p$ can be represented by an LMM operator, indicating that this result could potentially be useful for the characterization of noise processes, whilst reducing the number of measurements required (process tomography using a similar construction is discussed in detail in \cite{Scott:2008}). Other scenarios in which the non-local information is of paramount importance include investigation of bipartite entangled and non-local states, and the witnesses \cite{Guhne:2003} and Bell inequalities \cite{Masanes2003} that identify them. As a final example, the motivation for this work came in considering a convenient, minimal basis with which to decompose so-called Clifford witnesses for detecting stabilizer vs. nonstabilizer operations \cite{WvDMH:2010}.
The literature concerning MUBs, constructions and related structures is vast. This field of study seems to originate with Schwinger's construction for unitary operator bases \cite{Schwinger:1960} in 1960, and subsequently Ivonovic's 1981 construction \cite{Ivonovic:1981} for complete MUBs in prime dimensions. Wootters and Fields \cite{Wootters:1989} provided a MUB construction for power-of-prime dimensions $d=p^k$ and showed its optimality for state reconstruction (tomography). A more recent (2002) construction that also works for power-of-prime dimensions is given by Bandyopadhyay \emph{et al.} \cite{Bandyopadhyay2002} and this is framed explicitly in terms of Pauli operators and stabilizer states. Lawrence \emph{et al.} \cite{Lawrence:2002} found a similar construction for multi-qubit systems in the same year. Since then a large number of related results have been published e.g. \cite{Klimov:2007,Sulc:2007,Klimov:2009,Kalev:2009} (also see a recent review article \cite{Durt:2010} and references therein) and a number of interesting connections with combinatorics (e.g. mutually orthogonal Latin squares \cite{Paterek:2009}) and finite geometry \cite{Saniga:2004,Bengtsson:2005,Wootters:2006}. A prominent example of the usefulness of MUBs is their optimality for state or process reconstruction (a recent experimental result \cite{Adamson:2010} shows an improvement over standard techniques by using MUB state tomography). Quantum key distribution schemes \cite{Cerf:2002,Chau:2005} typically rely on MUBs for their security. Another important application of MUBs is their interpretation in terms of finite phase space, leading to a discrete Wigner function; for a particular choice of MUB using stabilizer states, the resulting Wigner function can shed light on the computational power of circuits in the so-called ``Clifford computer" model \cite{Gibbons:2004,Galvao:2005,Cormick:2006,WvDMH:2010}.
Inadvertently, we have rediscovered some results that were previously known in the context of quantum key distribution \cite{Chau:2005}, and in the context of unitary designs \cite{Gross:2007,Scott:2008} (i.e., the Cliffords that we use to create some of our BES-MUBs are known to create a minimal unitary design). Recent work by Planat \cite{Planat:2011} is somewhat related to our current investigation, insofar as it utilizes graph theoretical concepts and stabilizer (Pauli operator) observables to examine the construction of MUBs. Kalev \emph{et al.} \cite{Kalev:2009} investigated MUBs in bipartite systems using sets of commuting Pauli operators, but their work is more focused on complete sets of MUBs for density operators in $\mathcal{H}_{p^2}$.
This work provides an alternative graph-theoretic method (as opposed to unitary designs or finite field constructions) of analyzing MUBs and similar structures in quantum information theory. It is hoped that a combination of the alternative methods outlined here, in addition to those of \cite{Chau:2005,Gross:2007,Scott:2008,Planat:2011} and others, will prove fruitful for further analyses. We show how to create an orthonormal basis of $p^2$ stabilizer states in $\mathcal{H}_{p^2}$, given a matrix $F \in SL(2,\ensuremath{\mathbb{Z}_p})$. Furthermore, we show that the quantity $\text{Tr}(F_i^{-1} F_j)$ indicates whether the bases corresponding to $F_i$ and $F_j$ are mutually unbiased. This leads naturally to a Cayley graph structure wherein graph vertices are given by the elements of $SL(2,\ensuremath{\mathbb{Z}_p})$, and edges between vertices correspond to mutual unbiasedness of the corresponding bases. The BES MUBs that we seek are easily shown to be maximum cliques of the Cayley graphs, and for primes up to 11 we can partition $SL(2,\ensuremath{\mathbb{Z}_p})$ into $p$ distinct (non-overlapping) BES-MUBs. For primes 13 and higher, it is an interesting open question whether such BES-MUBs exist, as a deterministic search for the maximum clique is infeasible. For the related question of minimal unitary designs it has been noted by Chau that subgroups of $SL(2,\ensuremath{\mathbb{Z}_p})$ of a particular size only exist for primes up to 11, but it is not clear that complete BES-MUBs depend in any way on the existence of such subgroups. The family of Cayley graphs under consideration (defined for all primes $p$) is actually the graph complement of a family of Ramanujan graphs, and we are able to list some general graph-theoretic properties that hold for all values of $p$. In section \ref{sec:defns} we review the necessary background concerning the Clifford group and introduce some graph-theoretical concepts that will be useful in later sections. 
In Section~\ref{sec:MUBconstruction} we explicitly give the recipe for constructing BES-MUBs and relate our work to a well-known MUB construction that uses finite field methods. Section~\ref{graphprops} further explores the quantities and concepts from graph theory that can be applied to our family of Cayley graphs, and finally, Appendix~\ref{AppendixA} provides a description of the MUB observables in terms of stabilizer measurements as well as commuting sets of Pauli operators.
\section{Definitions and Useful Results}\label{sec:defns} \subsection{Relevant finite groups and their properties} The finite-dimensional analogues of position and momentum operators are denoted by $X$ and $Z$, arbitrary products of which are called displacement operators $D$, indexed by a vector $u=(u_1,u_2)\in \ensuremath{\mathbb{Z}_p}^2$: \begin{align} &X\ket{j}=\ket{j+1} \quad Z\ket{j}=\omega^j\ket{j} \quad \left(\omega=e^{2\pi\mathrm{i}/p}\right)\\ &D_u=\tau^{u_1 u_2} X^{u_1} Z^{u_2}\quad \tau=e^{(p+1)\pi\mathrm{i}/p}. \end{align} The Weyl-Heisenberg group (or generalized Pauli group) for a single qupit is given by \begin{align} \mathcal{G}_p=\left\{\tau^c D_u \vert u \in \ensuremath{\mathbb{Z}_p}^2, c \in \ensuremath{\mathbb{Z}_p}\right\}. \end{align} The set of unitary operators that map the Pauli group onto itself under conjugation is called the Clifford group (sometimes called the Jacobi group): \begin{align*}
\mathcal{C}_p=\{C\in U(p)\vert U \mathcal{G}_p U^\dag =\mathcal{G}_p\}. \end{align*}
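The defining relations above are easy to verify numerically. The following sketch (ours, using numpy) builds the shift and clock operators for $p=5$ and checks the Weyl commutation relation together with the unitarity of a displacement operator:

```python
import numpy as np

p = 5
omega = np.exp(2j * np.pi / p)
tau = np.exp((p + 1) * 1j * np.pi / p)

X = np.roll(np.eye(p), 1, axis=0)   # shift: X|j> = |j+1 mod p>
Z = np.diag(omega ** np.arange(p))  # clock: Z|j> = omega^j |j>

def D(u1, u2):
    """Displacement operator D_u = tau^{u1 u2} X^{u1} Z^{u2}."""
    return tau ** (u1 * u2) * (np.linalg.matrix_power(X, u1)
                               @ np.linalg.matrix_power(Z, u2))

# Weyl commutation relation: Z X = omega X Z
assert np.allclose(Z @ X, omega * (X @ Z))
# displacement operators are unitary
U = D(2, 3)
assert np.allclose(U @ U.conj().T, np.eye(p))
```

Since $X$ and $Z$ commute only up to the phase $\omega$, the group $\mathcal{G}_p$ closes exactly because the phases $\tau^c$ are included as separate group elements.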
The fact that every Clifford operation in dimension $p$ can be associated with a matrix $F \in SL(2,\ensuremath{\mathbb{Z}_p})$ in addition to a vector $u \in \ensuremath{\mathbb{Z}_p}^2$ results from the isomorphism \begin{align} \mathcal{C}_p \cong SL(2,\ensuremath{\mathbb{Z}_p}) \ltimes \ensuremath{\mathbb{Z}_p}^2, \end{align} established by Appleby \cite{Appleby:arxiv09}, where $\mathcal{C}$ is the Clifford group. If we specify the elements of $F$ and $u$ as \begin{align} F=\left(
\begin{array}{cc}
\alpha & \beta \\
\gamma & \delta \\
\end{array}
\right)\in SL(2,\ensuremath{\mathbb{Z}_p}) \qquad u=\left(
\begin{array}{c}
u_1 \\
u_2 \\
\end{array}
\right)\in \ensuremath{\mathbb{Z}_p}^2 \label{Fudef} \end{align} then Appleby provides an explicit description of the unitary matrix $\protect{C_{(F\vert u)} \in \mathcal{C}_p}$ in terms of these elements i.e., \begin{align} &C_{(F\vert u)}=D_u U_F \\ &U_F=\begin{cases} \frac{1}{\sqrt{p}}\sum_{j,k=0}^{p-1}\tau^{\beta^{-1}\left(\alpha k^2-2 j k +\delta j^2\right)}\ket{j}\bra{k}\quad &\beta\neq0\\ \sum_{k=0}^{p-1} \tau^{\alpha \gamma k^2} \ket{\alpha k}\bra{k}\quad &\beta=0. \end{cases} \end{align}
Note how composition and inverses can be represented in this notation \cite{Scott:042203} \begin{align} C_{(F\vert u)} C_{(K \vert v)}=C_{(FK\vert u+Fv)}\label{composition}\\ C_{(F\vert u)}^{-1}=C_{(F\vert u)}^\dag = C_{(F^{-1} \vert -F^{-1}u)}\label{inverse} \end{align}
We will need to relate the matrix trace $\text{Tr}\left(C_{(F\vert u)}\right)$ to the matrix trace $\text{Tr}\left(F\right)$ modulo $p$: \begin{align} &\lvert\text{Tr}\left(C_{(F\vert u)}\right)\rvert \begin{cases} \in\{0,\sqrt{p},p\} &\text{ if }\text{Tr}(F)= 2 \\ =1 &\text{ if }\text{Tr}(F)\neq 2 \end{cases}\label{tracerelations} \end{align}
To see why this is so we must define the Legendre Symbol \begin{align*} \ell_p(x) = \begin{cases} \;\;\,1\;\;\, \text{ if } &x \text{ is a quadratic residue} \pmod{p} \\ -1\;\, \text{ if } &x \text{ is a quadratic non-residue} \pmod{p}\\ \;\;\,0\;\;\, \text{ if } &x \equiv 0 \pmod{p}. \end{cases} \end{align*} and quote a result from Appleby \cite{Appleby:arxiv09} \begin{align} (\text{ Case 1: }\qquad\beta =0 \Rightarrow \alpha\neq 0\ )\hspace{1cm} \nonumber \\
\lvert \text{Tr}\left(C_{(F\vert u)}\right) \rvert = \begin{cases} |\ell_p(\alpha)|=1 \quad &(\text{Tr}(F)\neq 2) \\
|\ell_p(\gamma)| \sqrt{p} \delta_{u_1,0}\quad &(\text{Tr}(F)= 2, \gamma \neq 0)\\ p \delta_{u_1,0}\delta_{u_2,0}\quad &(\text{Tr}(F)= 2, \gamma =0) \end{cases}\label{tracerelations1} \\ (\text{ Case 2: }\qquad\beta \neq 0 \ )\hspace{2cm} \nonumber\\
\lvert \text{Tr}\left(C_{(F\vert u)}\right) \rvert = \begin{cases} |\ell_p(\text{Tr}(F)- 2)|=1 \quad &(\text{Tr}(F)\neq 2)\\
|\ell_p(-\beta)| \sqrt{p} \delta_{u_2,\beta^{-1}(1-\alpha)u_1 }\quad &(\text{Tr}(F)= 2) \end{cases}\label{tracerelations2}
\end{align}
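These trace relations can be checked directly from Appleby's explicit formula for $U_F$. The sketch below (ours, covering only the $\beta\neq 0$ case with $u=0$, so that $C_{(F\vert 0)}=U_F$) builds $U_F$ for $p=7$ and verifies both branches of Eq.~(\ref{tracerelations}):

```python
import numpy as np

def U_F(F, p):
    """Clifford unitary U_F for F = ((alpha,beta),(gamma,delta)) in SL(2,Z_p),
    using Appleby's beta != 0 formula."""
    (a, b), (c, d) = F
    assert (a * d - b * c) % p == 1 and b % p != 0
    tau = np.exp((p + 1) * 1j * np.pi / p)
    binv = pow(b, -1, p)  # modular inverse beta^{-1} in Z_p
    j, k = np.meshgrid(np.arange(p), np.arange(p), indexing='ij')
    return tau ** (binv * (a * k * k - 2 * j * k + d * j * j)) / np.sqrt(p)

p = 7
# Tr(F) = 0 != 2 (mod 7): the Clifford trace has modulus |l_p(Tr(F)-2)| = 1
U1 = U_F(((0, 1), (6, 0)), p)
assert np.allclose(U1 @ U1.conj().T, np.eye(p))  # U_F is indeed unitary
assert np.isclose(abs(np.trace(U1)), 1.0)
# Tr(F) = 2 with u = 0: the trace has modulus sqrt(p)
assert np.isclose(abs(np.trace(U_F(((1, 1), (0, 1)), p))), np.sqrt(p))
```

The second case is the delicate one for what follows: matrices with $\text{Tr}(F)=2$ are exactly those excluded from the connection set of the Cayley graph defined later.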
Finally, we note some important facts regarding the structure of the group $ SL(2,\ensuremath{\mathbb{Z}_p}) $. A minimal set of generators is e.g. \begin{align} SL(2,\ensuremath{\mathbb{Z}_p}) =\left\langle \left(
\begin{array}{cc}
1 & 1 \\
0 & 1 \\
\end{array}
\right),\left(
\begin{array}{cc}
1 & 0 \\
1 & 1 \\
\end{array}
\right) \right \rangle \end{align}
It has order $\protect{|SL(2,\ensuremath{\mathbb{Z}_p})|=p(p^2-1)}$ and can be partitioned into $p+4$ conjugacy classes \cite{appleby:012102}, each of which has constant trace. If we partition $ SL(2,\ensuremath{\mathbb{Z}_p}) $ by the matrix trace of its elements, then the number of elements with any fixed trace $\text{Tr}(F)=t$ depends only on the Legendre symbol $\ell_p(t^2-4)$: \begin{align}
\Big|\left\{F\,\big|\,\text{Tr}(F)=t\right\}\Big|&=p(p+1) &&\text{if }\ \ell_p\left(t^2-4\right)=1\\
\Big|\left\{F\,\big|\,\text{Tr}(F)=t\right\}\Big|&=p(p-1) &&\text{if }\ \ell_p\left(t^2-4\right)=-1\\
\Big|\left\{F\,\big|\,\text{Tr}(F)=t\right\}\Big|&=p^2 &&\text{if }\ \ell_p\left(t^2-4\right)=0 \end{align}
The sets $\left\{F|\text{Tr}(F)=2\right\}$ and $\left\{F|\text{Tr}(F)=-2\right\}$ are each the union of three conjugacy classes. Many of these facts will be used in subsequent sections, particularly Section~\ref{graphprops} concerning graph-theoretical properties of Cayley graphs that are relevant to the construction of BES MUBs.
\begin{figure}\label{GraphFig}
\end{figure}
\subsection{Graphs: Cayley Graphs and Maximum Cliques}
We review some relevant notation and properties of graphs that can be found in any standard reference (e.g., \cite{vanLint:1975}). An undirected Cayley graph $\Gamma(G,T)$ with an associated finite group $G$ and a set $T \subset G$ is the graph whose vertices are the elements of $G$ and whose set of edges is $\protect{\{ g_1\sim g_2 |g_1^{-1}g_2 \in T\}}$. We must have $I \not\in T$ and $T^{-1}=T$. The resulting graph $\Gamma(G,T)$ is regular, i.e. each vertex has degree $|T|$, and the number of (undirected) edges is given by $\frac{1}{2}|G||T|$.
A complete graph of order $n$, denoted $K_n$, is a graph with $n$ vertices, each of which is adjacent to every other vertex (see Fig.~\ref{GraphFig}(a) for the example $K_5$). A subgraph, $\Gamma^{\ \prime}$, of $\Gamma$, is a graph whose vertices form a subset of the vertices of $\Gamma$ and the adjacency relation is inherited from $\Gamma$. A clique of $\Gamma$ is a complete subgraph of $\Gamma$, where the size of the clique is given by the number of vertices in this subgraph. The largest possible clique (not necessarily unique) contained in $\Gamma$ is a maximum clique, the size of which is usually denoted $\omega(\Gamma)$. We discuss graph-theoretic properties, and what they say about the problem at hand, in more detail in Section~\ref{graphprops}.
\section{Construction of the restricted MUB}\label{sec:MUBconstruction}
The goal is to create a set of states, $\mathcal{S}$, of size $|\mathcal{S}|=p^2(p^2-1)$ that is partitioned into $p^2-1$ subsets, where each subset, containing $p^2$ states, forms an orthonormal basis. Labeling the basis with a superscript and the individual states within a basis using a subscript we have \begin{align*} \mathcal{S}=\{\ket{\psi_1^1}\ldots \ket{\psi_j^k}\ldots \ket{\psi_{p^2}^{p^2-1}}\}. \end{align*} These bases are mutually unbiased if \begin{eqnarray*}
|\langle \psi_j^k \vert \psi_m^n\rangle|=\frac{1}{p}(1-\delta_{k,n})+\delta_{k,n}\delta_{j,m}. \end{eqnarray*} The states $\ket{\psi_j^k}$ of the set $\mathcal{S}$ that comprise our bipartite entangled stabilizer MUB (BES MUB) will be maximally entangled stabilizer states -- formed by applying a Clifford operation, $C$, to one half of a maximally entangled state \begin{eqnarray*} \ket{J_C}=(I\otimes C) \sum_{j=0}^{p-1} \frac{\ket{jj}}{\sqrt{p}}. \end{eqnarray*}
The overlap $|\langle J_{C_m}\vert J_{C_n}\rangle|$ between any two such states is given by \begin{align*}
|\langle J_{C_m}\vert J_{C_n}\rangle|=\frac{1}{p}|\text{Tr}(C_m^\dag C_n)|. \end{align*}
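This overlap identity is a standard vectorization fact, $\langle J_A\vert J_B\rangle = \text{Tr}(A^\dag B)/p$, and holds for arbitrary unitaries, not just Cliffords. A numerical sketch (ours, using randomly generated unitaries via QR decomposition):

```python
import numpy as np

def J(C):
    """|J_C> = (I x C) sum_j |jj> / sqrt(p) for a p x p unitary C."""
    p = C.shape[0]
    bell = sum(np.kron(np.eye(p)[j], np.eye(p)[j]) for j in range(p))
    return np.kron(np.eye(p), C) @ bell.astype(complex) / np.sqrt(p)

p = 3
rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.normal(size=(p, p)) + 1j * rng.normal(size=(p, p)))
B, _ = np.linalg.qr(rng.normal(size=(p, p)) + 1j * rng.normal(size=(p, p)))

# |<J_A|J_B>| = |Tr(A^dag B)| / p
lhs = abs(np.vdot(J(A), J(B)))
rhs = abs(np.trace(A.conj().T @ B)) / p
assert np.isclose(lhs, rhs)
```

In particular, two maximally entangled states $\ket{J_{C_m}}$, $\ket{J_{C_n}}$ are orthogonal exactly when $\text{Tr}(C_m^\dag C_n)=0$, which is what makes the Clifford trace relations of the previous section the key technical ingredient.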
Using the notation we have previously described, it is easy to show using Eqs.~(\ref{composition})--(\ref{tracerelations}) that \begin{align} \text{if }\quad &\text{Tr}(F^{-1} K) \neq 2 \nonumber\\ \text{then }\quad \Big\lvert &\text{Tr}\left[\left(C_{(F\vert u)}\right)^\dag \left(C_{(K \vert v)}\right) \right]\Big\rvert=1 \quad \forall u,v \in \ensuremath{\mathbb{Z}_p}^2 \end{align} i.e., a pair of matrices $F,K \in SL(2,\ensuremath{\mathbb{Z}_p})$ satisfying $\protect{\text{Tr}(F^{-1} K)\neq 2}$ defines a pair of mutually unbiased bases. Since the subspace under consideration has dimension $\protect{(p^2-1)^2}$, and since each basis contains $p^2-1$ independent states, we require a total of $\protect{p^2-1}$ matrices $\protect{F_i \in SL(2,\ensuremath{\mathbb{Z}_p})}$, satisfying, pairwise, $\protect{\text{Tr}(F_i^{-1} F_j)\neq 2}$, in order to create the BES MUB.
Define \begin{align}
&G=SL(2,\ensuremath{\mathbb{Z}_p}) &|G|=p(p^2-1) \label{graphdef}\\
&T=\{F\in SL(2,\ensuremath{\mathbb{Z}_p})| \text{Tr}(F)\neq 2\} \quad &|T|=|G|-p^2 \nonumber \end{align} then the Cayley graph $\Gamma(G,T)$ has the property that two vertices $F_i$ and $F_j$ are adjacent if and only if $\protect{\text{Tr}(F_i^{-1} F_j)\neq 2}$. A clique of size $p^2-1$, if it exists, immediately gives the desired complete BES MUB by the preceding discussion. Furthermore, a clique of size $\protect{p^2-1}$ must be a maximum clique, since the space of traceless local maximally mixed operators has dimension $\protect{(p^2-1)^2}$.
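For $p=3$ the graph $\Gamma(G,T)$ is small enough to construct exhaustively. The following sketch (ours) verifies the counts $|G|=p(p^2-1)$ and $|T|=|G|-p^2$, that $T$ is a valid connection set, and that the graph is $|T|$-regular:

```python
from itertools import product

p = 3
# enumerate G = SL(2,Z_p) as tuples ((a,b),(c,d)) with ad - bc = 1 (mod p)
G = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]
assert len(G) == p * (p**2 - 1)          # |G| = 24 for p = 3

# connection set T = {F : Tr(F) != 2 (mod p)}
T = {F for F in G if (F[0][0] + F[1][1]) % p != 2}
assert len(T) == len(G) - p**2           # |{F : Tr(F) = 2}| = p^2

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def inv(A):
    """Inverse in SL(2,Z_p): ((a,b),(c,d))^-1 = ((d,-b),(-c,a)) since det = 1."""
    (a, b), (c, d) = A
    return ((d % p, -b % p), (-c % p, a % p))

# valid connection set: I has trace 2 so I is not in T, and T^{-1} = T
assert T == {inv(F) for F in T}
# the Cayley graph is |T|-regular: each vertex has exactly |T| neighbours
degree = sum(1 for K in G if mul(inv(G[0]), K) in T)
assert degree == len(T)
```

The symmetry $T^{-1}=T$ holds because a matrix and its inverse share the same trace in $SL(2,\ensuremath{\mathbb{Z}_p})$, so adjacency is indeed an undirected relation.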
\begin{theorem}\label{BEScliques}
A pair of matrices $F_1,F_2 \in SL(2,\ensuremath{\mathbb{Z}_p})$ satisfying $\protect{\text{Tr}(F_1^{-1} F_2)\neq 2}$ defines a pair of mutually unbiased bases in $\mathcal{H}_{p^2}=\mathbb{C}^p\otimes \mathbb{C}^p$ (via the relationship between $SL(2,\ensuremath{\mathbb{Z}_p})$ and the Clifford group). A set of matrices $\mathcal{F}=\{F_i\}$, of order $\protect{|\mathcal{F}|=p^2-1}$, such that pairwise $\protect{\text{Tr}(F_i^{-1} F_j)\neq 2 \pmod{p} }$, defines \textit{(i)} a complete bipartite entangled stabilizer MUB \textit{(ii)} a maximum clique of the Cayley graph defined in Eq.~(\ref{graphdef}). \end{theorem}
One can check using a computer algebra system \cite{GAP4,Hibbard:1998} that the following subgroups $\protect{H_p \leq SL(2,\ensuremath{\mathbb{Z}_p})}$ have order $\protect{|H_p|=p^2-1}$, and every pair of elements $\protect{F_i, F_j \in H_p}$ satisfies $\protect{\text{Tr}(F_i^{-1} F_j)\neq 2}$ (i.e. these subgroups provide complete BES MUBs). \begin{align} p=3: \quad &H_3=\left\langle \left(
\begin{array}{cc}
0 & 1 \\
2 & 0 \\
\end{array}
\right),\left(
\begin{array}{cc}
1 & 1 \\
1 & 2 \\
\end{array}
\right) \right \rangle \\
p=5: \quad &H_5=\left\langle \left(
\begin{array}{cc}
0 & 2 \\
2 & 0 \\
\end{array}
\right),\left(
\begin{array}{cc}
1 & 1 \\
2 & 3 \\
\end{array}
\right) \right \rangle \\
p=7: \quad &H_7=\left\langle \left(
\begin{array}{cc}
0 & 2 \\
3 & 0 \\
\end{array}
\right),\left(
\begin{array}{cc}
1 & 1 \\
4 & 5 \\
\end{array}
\right) \right \rangle \\
p=11: \quad &H_{11}=\left\langle \left(
\begin{array}{cc}
0 & 1 \\
10 & 0 \\
\end{array}
\right),\left(
\begin{array}{cc}
0 & 4 \\
8 & 10 \\
\end{array}
\right) \right \rangle \end{align}
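This check does not require a computer algebra system. The sketch below (ours, plain Python rather than GAP) generates $H_3$ from the two matrices given above and confirms both its order $p^2-1=8$ and the pairwise trace condition:

```python
from itertools import product

def mul(A, B, p):
    """2x2 matrix product over Z_p."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def inv(A, p):
    """Inverse in SL(2,Z_p): ((a,b),(c,d))^-1 = ((d,-b),(-c,a)) since det = 1."""
    (a, b), (c, d) = A
    return ((d % p, -b % p), (-c % p, a % p))

def generate(gens, p):
    """Closure of the generators under multiplication: the subgroup they generate."""
    I = ((1, 0), (0, 1))
    group, frontier = {I}, {I}
    while frontier:
        new = {mul(A, S, p) for A in frontier for S in gens} - group
        group |= new
        frontier = new
    return group

p = 3
H3 = generate([((0, 1), (2, 0)), ((1, 1), (1, 2))], p)
assert len(H3) == p**2 - 1   # |H_3| = 8
# pairwise Tr(F_i^{-1} F_j) != 2 (mod p): H_3 is a clique, i.e. a complete BES MUB
for F, K in product(H3, repeat=2):
    M = mul(inv(F, p), K, p)
    assert F == K or (M[0][0] + M[1][1]) % p != 2
```

Replacing the two generators with those listed for $p=5,7,11$ (and adjusting $p$) verifies the remaining subgroups in the same way; incidentally, $H_3$ is isomorphic to the quaternion group $Q_8$.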
In fact for every prime dimension $p\leq 11$ we can partition $SL(2,\ensuremath{\mathbb{Z}_p})$ by using $p$ distinct max-cliques of size $p^2-1$. For odd primes it suffices to consider the left cosets of $H_p$ in $SL(2,\ensuremath{\mathbb{Z}_p})$ where \begin{align} F_t=\left(
\begin{array}{cc}
1 & 0 \\
t & 1 \\
\end{array}
\right) \quad t \in \ensuremath{\mathbb{Z}_p} \end{align}
are the left coset representatives. For $p=13$ and higher, we were unable to find cliques saturating the upper bound of $\protect{p^2-1}$. It is known that, for any prime $p\geq 13$, there does not exist a subgroup $H_p$ of size $\protect{|H_p|=p^2-1}$ \cite{Dickson:1958}, but we are unaware of any proof that cliques of size $p^2-1$ (i.e. complete BES MUBs for $p$-dimensional systems) necessarily depend on this subgroup structure. A deterministic search for a clique of size 168 in the $\Gamma(G,T)$ graph for $p=13$ is infeasible, given the computational complexity of the max-clique problem. A heuristic search was, however, able to find a clique of size 158.
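The partition claim is easy to check numerically: since $(F_tF_i)^{-1}(F_tF_j)=F_i^{-1}F_j$, left multiplication preserves adjacency, so every left coset of a clique subgroup is again a clique. The sketch below (our own check, for $p=3$) confirms that the three cosets of $H_3$ partition $SL(2,\ensuremath{\mathbb{Z}_p})$.

```python
# Check (ours) of the coset-partition claim for p = 3.  Left multiplication
# preserves adjacency, since (F_t F_i)^{-1}(F_t F_j) = F_i^{-1} F_j, so each
# left coset F_t H_3 of the clique subgroup H_3 is again a clique of size 8.
import itertools

p = 3

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

# all of SL(2, Z_3): matrices over Z_3 with determinant 1
SL = {((a, b), (c, d))
      for a, b, c, d in itertools.product(range(p), repeat=4)
      if (a * d - b * c) % p == 1}

# the clique subgroup H_3, generated by closure under multiplication
H = {((0, 1), (2, 0)), ((1, 1), (1, 2))}
while True:
    new = {mul(A, B) for A in H for B in H} - H
    if not new:
        break
    H |= new

# left cosets F_t H_3 for F_t = [[1, 0], [t, 1]], t in Z_3
cosets = [{mul(((1, 0), (t, 1)), h) for h in H} for t in range(p)]
union = set().union(*cosets)
assert len(SL) == p * (p**2 - 1)                         # |SL(2,Z_3)| = 24
assert union == SL and sum(map(len, cosets)) == len(SL)  # a true partition
print(len(SL), sorted(map(len, cosets)))
```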
By adapting a well-known power-of-prime construction for complete MUBs (Bandyopadhyay et al. \cite{Bandyopadhyay2002}) we can show that the size of the largest clique satisfies \begin{align} \label{cliquelowerbound} \omega(\Gamma)\geq p(p-1) \quad \forall p \end{align} To be specific, Section 4.3.1 of \cite{Bandyopadhyay2002} describes the construction of a complete set of MUBs for dimensions $p^2$. In their notation, this amounts to finding a set of $p^2$ $2 \times 2$ symmetric matrices $\{A\}$ such that $\det(A_j-A_k)\neq 0$. Suitable sets of matrices are parameterized by two elements $s,t \in \ensuremath{\mathbb{Z}_p}$ via \begin{align} \{A\}=\Big\{\left(
\begin{array}{cc}
a & b \\
b & sa+tb \\
\end{array}
\right),\quad \forall\ a,b \ \in \ensuremath{\mathbb{Z}_p} \Big\} \label{cliquelb} \end{align} A little thought reveals that every $A$ with non-zero off diagonal element $b$ can be related one-to-one with a matrix $F \in SL(2,\ensuremath{\mathbb{Z}_p})$, where $F$ has a non-zero element $\beta$ ($F$ defined as per Eq.~\eqref{Fudef}), \begin{align*} F(a,b,s,t)=\left(
\begin{array}{cc}
-ab^{-1} & -b^{-1} \\
b-a^2b^{-1}s-at & -ab^{-1}s-t \\
\end{array}
\right). \end{align*} One can check that the $\det(A_j-A_k)\neq 0$ condition translates to $\protect{\text{Tr}(F_j^{-1} F_k)\neq 2}$, as one would expect. In this way we can create a set of $p(p-1)$ matrices $F \in SL(2,\ensuremath{\mathbb{Z}_p})$ that form a clique in our Cayley graph $\Gamma(G,T)$. In general, sets of matrices formed this way cannot be extended with an additional $p-1$ matrices $F_k$ (having $\beta_k=0$) to form a complete BES MUB i.e., they form (part of) a \emph{maximal}, but not maximum, clique in $\Gamma(G,T)$. However, we can often slightly improve upon the lower bound e.g., we can construct cliques of size $p(p-1)+2$ for primes up to 17. A consequence of Eq.~(\ref{cliquelowerbound}) is that the fraction of pairs $(F_i,F_j)$ that do not define mutually unbiased bases, out of the total number of such pairs $(F_i,F_j)$, vanishes as $p \rightarrow \infty$. In Appendix~\ref{AppendixA}, we explicitly give the observables involved in these BES MUBs in terms of tensor products of Pauli operators.
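The construction just described is easy to instantiate for a small prime. In the sketch below (ours), we take $p=3$ and the parameter choice $s=t=1$ (an assumption on our part, chosen so that $t^2+4s$ is a quadratic non-residue mod 3), and verify that the resulting $p(p-1)=6$ matrices lie in $SL(2,\ensuremath{\mathbb{Z}_p})$ and pairwise satisfy the trace condition.

```python
# Sketch (ours) of the clique construction for p = 3 with the assumed
# parameters s = t = 1: build the p(p-1) matrices F(a, b, s, t) and verify
# that they lie in SL(2, Z_3) and pairwise satisfy Tr(F_i^{-1} F_j) != 2.
p, s, t = 3, 1, 1

def inv_mod(x):
    return pow(x, p - 2, p)  # x^{-1} in Z_p for x != 0, p prime

def F(a, b):
    bi = inv_mod(b)
    return ((-a * bi % p, -bi % p),
            ((b - a * a * bi * s - a * t) % p, (-a * bi * s - t) % p))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p

def inv(A):  # adjugate = inverse when det = 1
    (x, y), (z, w) = A
    return ((w % p, -y % p), (-z % p, x % p))

clique = [F(a, b) for a in range(p) for b in range(1, p)]
assert all(det(A) == 1 for A in clique)
assert len(set(clique)) == p * (p - 1)      # 6 distinct matrices
ok = all(sum(mul(inv(A), B)[i][i] for i in range(2)) % p != 2
         for A in clique for B in clique if A != B)
assert ok
print(len(clique), ok)
```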
\section{Some graph-theoretic properties of these Cayley graphs}\label{graphprops}
In this section we further investigate the graph-theoretical properties of the family of Cayley graphs that were previously shown to be closely related to BES MUBs. Without loss of generality, the elements $F \in SL(2,\ensuremath{\mathbb{Z}_p})$ can be ordered lexicographically by the vectors constituting the rows of the matrix $F$ i.e. \begin{align*} \{F_i\}=\Big\{F_1=\left(
\begin{array}{cc}
0 & 1 \\
-1 & 0 \\
\end{array}
\right),F_2=\left(
\begin{array}{cc}
0 & 1 \\
-1 & 1 \\
\end{array}
\right),\ldots\\
\ldots F_{p(p^2-1)}=\left(
\begin{array}{cc}
-1 & -1 \\
-1 & -2 \\
\end{array}
\right)\Big\}=\Big\{\left(
\begin{array}{cc}
\alpha_i & \beta_i \\
\gamma_i & \delta_i \\
\end{array}
\right)\Big\}. \end{align*} It is easy to see that (i) there are $p^2-1$ possibilities for $(\alpha,\ \beta)$; (ii) each such $(\alpha,\ \beta)$ in turn allows for $p$ possible $(\gamma,\ \delta)$. Any two elements $F_i,\ F_j$, for which $(\alpha_i,\ \beta_i)=(\alpha_j,\ \beta_j)$, cannot be connected by an edge since \begin{align} \text{Tr}\Big(\left(
\begin{array}{cc}
\alpha & \beta \\
\gamma_i & \delta_i \\
\end{array}
\right)^{-1}\left(
\begin{array}{cc}
\alpha & \beta\\
\gamma_j & \delta_j \\
\end{array}
\right)\Big)=\det F_i+\det F_j=2. \label{colorings} \end{align} The so-called vertex coloring problem for graphs involves assigning a label (color) to every vertex of the graph, such that adjacent vertices cannot be assigned the same color. The minimum number of colors required to do this is the chromatic number, denoted $\chi(\Gamma)$. It is a basic fact \cite{vanLint:1975} that the chromatic number of a graph is bounded below by the clique number i.e. $\omega(\Gamma)\leq \chi(\Gamma)$. The discussion leading to Eq.~\ref{colorings} immediately implies that a $p^2-1$ coloring of the Cayley graph $\Gamma(G,T)$ is possible: assign the same color to two vertices $F_i,\ F_j$ if and only if $(\alpha_i,\ \beta_i)=(\alpha_j,\ \beta_j)$. Since the chromatic number $\chi$ is bounded below by the clique number $\omega(\Gamma)$, we know that this coloring is minimal for primes 2 to 11. Hence \begin{align*} &\omega(\Gamma) = \chi(\Gamma)= p^2-1& \quad &p\in \{ 2,3,5,7,11\}
\\ &\omega(\Gamma) \leq \chi(\Gamma)\leq p^2-1& \quad \forall &p
\end{align*} Note that the upper bound $\omega(\Gamma) \leq p^2-1$ is a graph-theoretical inequality that confirms the geometrical argument preceding Theorem \ref{BEScliques} i.e., the number of BES mutually unbiased bases that can fit in a Hilbert space $\mathcal{H}_{p^2}=\mathbb{C}^p\otimes \mathbb{C}^p$ is at most $p^2-1$.
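The coloring argument can be checked mechanically for the smallest odd prime. The following sketch (ours) colors each element of $SL(2,\ensuremath{\mathbb{Z}_p})$ by its first row for $p=3$ and confirms that this is a proper $(p^2-1)$-coloring of $\Gamma(G,T)$.

```python
# Check (ours) for p = 3: color each F in SL(2, Z_3) by its first row
# (alpha, beta).  This uses p^2 - 1 = 8 colors, and no two adjacent vertices
# of Gamma(G, T) share a color, i.e. the coloring is proper.
import itertools

p = 3
SL = [((a, b), (c, d))
      for a, b, c, d in itertools.product(range(p), repeat=4)
      if (a * d - b * c) % p == 1]

def adjacent(A, B):
    (x, y), (z, w) = A            # inverse of A via the adjugate (det = 1)
    tr = (w * B[0][0] - y * B[1][0] - z * B[0][1] + x * B[1][1]) % p
    return A != B and tr != 2     # edge iff Tr(A^{-1} B) != 2

colors = {A: A[0] for A in SL}    # color = first row (alpha, beta)
n_colors = len(set(colors.values()))
proper = all(not (adjacent(A, B) and colors[A] == colors[B])
             for A in SL for B in SL)
assert n_colors == p**2 - 1 and proper
print(n_colors, proper)
```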
A concept closely related to cliques and colorings is that of independence. An independent set of a graph is a set of vertices, no two of which are adjacent. A maximum independent set is the largest such set (not necessarily unique) that can be found in the graph, and the independence number, $\alpha(\Gamma)$, of a graph is the size of this maximum independent set. The discussion preceding Eq.~(\ref{colorings}) can equally well be interpreted as providing a lower bound on the independence number of $\Gamma(G,T)$; there are $p^2-1$ independent sets of size $p$, wherein two elements $F_i,\ F_j$ that satisfy $(\alpha_i,\ \beta_i)=(\alpha_j,\ \beta_j)$ are pairwise non-adjacent, hence \begin{align} \alpha(\Gamma)\geq p \qquad \forall p \label{alphalb} \end{align} The physical interpretation of this is that we can always find a set of $p$ bases such that, pairwise, no two are mutually unbiased with respect to each other.
The adjacency matrix of a graph $\Gamma$ with $n$ vertices is an $n \times n$ matrix $A[\Gamma]$ with elements $A_{i,j}=1$ if vertices $i$ and $j$ are adjacent, and $A_{i,j}=0$ otherwise. Knowledge of the spectrum of an adjacency matrix often allows us to find, or bound, many quantities of interest. We denote the spectrum of the $p(p^2-1) \times p(p^2-1)$ adjacency matrices $A[\Gamma(G,T)]$ as $\{\lambda^{m_0}_0,\lambda^{m_1}_1,\lambda^{m_2}_2,\lambda^{m_3}_3\}$, where $m_i$ denotes the multiplicity of $\lambda_i$. The complement of a graph $\Gamma$, denoted $\overline{\Gamma}$, is the graph with the same vertex set as $\Gamma$, but where two vertices are adjacent in $\overline{\Gamma}$ if and only if they are not adjacent in $\Gamma$. The spectrum of a graph and its complement can be related in a simple way for the case of regular graphs (the case we deal with in this work), as the following theorem demonstrates.
\begin{theorem}\label{Brouwer} \emph{(Brouwer and Haemers \cite{Brouwer:})} Suppose $\Gamma$ is a $k$-regular graph on $n$ vertices with 4 distinct (adjacency) eigenvalues $\{k=\lambda_0>\lambda_1>\lambda_2>\lambda_3\}$. If, in addition, both $\Gamma$ and its complement, $\overline{\Gamma}$, are connected, then $\overline{\Gamma}$ also has 4 distinct eigenvalues, $\{n-k-1>-\lambda_3-1>-\lambda_2-1>-\lambda_1-1\}$. \end{theorem}
The Cayley graphs we studied, defined in Eq.~\eqref{graphdef}, are actually the graph complement of a well known family of graphs (that form a family of Ramanujan graphs, amongst other interesting properties), whose spectrum is known exactly. \begin{theorem}\label{Lubotzky} \emph{(Lubotzky \cite{Lubotzky:1994})} Let $G=SL(2,\ensuremath{\mathbb{Z}_p})$, and let $T$ (i.e., the connection set for the Cayley graph) be the union of the conjugacy classes $c_1$ and $c_{\nu}$ of the elements \begin{align*} \left(
\begin{array}{cc}
1 & 0 \\
1 & 1 \\
\end{array}
\right), \quad \left(
\begin{array}{cc}
1 & 0 \\
\nu & 1 \\
\end{array}
\right), \end{align*}
where $\nu$ is a generator of the cyclic group $\ensuremath{\mathbb{Z}_p}^*=\ensuremath{\mathbb{Z}_p}\setminus\{0\}$. Then $T=\protect{\{F\in SL(2,\ensuremath{\mathbb{Z}_p})|\text{Tr}(F)=2,F\neq I\}}$, $|T|=p^2-1$ and the spectrum of the corresponding Cayley graph $A[\Gamma(G,T)]$, denoted $\{\lambda^{m_0}_0,\lambda^{m_1}_1,\lambda^{m_2}_2,\lambda^{m_3}_3\}$, is \begin{align*} &\lambda_0 &=&&\ & p^2-1,&\quad& &\ & m_0 &=&&\ & 1 \\ &\lambda_1 &=&&\ & p-1 , &\quad& &\ & m_1 &=&&\ & (p-2)(p+1)^2/2\\ &\lambda_2 &=&&\ & 0 , &\quad& &\ & m_2 &=&&\ & p^2\\ &\lambda_3 &=&&\ & -(p+1),&\quad& &\ & m_3 &=&&\ & p(p-1)^2/2\\ \end{align*} \end{theorem}
Combining the two preceding theorems (connectedness is obviously satisfied by our Cayley graphs) allows us to completely characterize the spectrum of the canonical Cayley graph Eq.~\eqref{graphdef} that we used to search for BES MUBs. \begin{theorem}\label{MUBgraphspectrum}
(Spectrum of graphs defined in Eq.~(\ref{graphdef})) Let $G=SL(2,\ensuremath{\mathbb{Z}_p})$ and $T=\protect{\{F\in SL(2,\ensuremath{\mathbb{Z}_p})| \text{Tr}(F)\neq 2\}}$. Then $|T|=|G|-p^2$ and the spectrum of $A[\Gamma(G,T)]$ denoted $\{\lambda^{m_0}_0,\lambda^{m_1}_1,\lambda^{m_2}_2,\lambda^{m_3}_3\}$ is \begin{align*} &\lambda_0&=&&\ &p(p^2-1)-p^2 &\ & &\ & m_0 &=&&\ & 1 \\ &\lambda_1&=&&\ &p &\ & &\ & m_1 &=&&\ & p(p-1)^2/2 \\ &\lambda_2&=&&\ &-1 &\ & &\ & m_2 &=&&\ & p^2\\ &\lambda_3&=&&\ &-p &\ & &\ & m_3 &=&&\ & (p-2)(p+1)^2/2 \end{align*} \end{theorem}
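For $p=3$ the theorem can be confirmed numerically; the sketch below (ours) builds the $24\times 24$ adjacency matrix and compares its eigenvalues and multiplicities with the stated spectrum.

```python
# Numerical confirmation (ours) of the stated spectrum for p = 3: build the
# 24 x 24 adjacency matrix of Gamma(G, T) and compare its eigenvalues with
# {15^1, 3^6, (-1)^9, (-3)^8}.
import itertools
from collections import Counter
import numpy as np

p = 3
SL = [((a, b), (c, d))
      for a, b, c, d in itertools.product(range(p), repeat=4)
      if (a * d - b * c) % p == 1]
n = len(SL)                                  # p(p^2 - 1) = 24

def tr_inv_mul(A, B):                        # Tr(A^{-1} B) mod p, det A = 1
    (x, y), (z, w) = A
    return (w * B[0][0] - y * B[1][0] - z * B[0][1] + x * B[1][1]) % p

adj = np.zeros((n, n))
for i, Fi in enumerate(SL):
    for j, Fj in enumerate(SL):
        if i != j and tr_inv_mul(Fi, Fj) != 2:
            adj[i, j] = 1

eig = np.round(np.linalg.eigvalsh(adj)).astype(int)
spec = Counter(eig.tolist())
expected = {p * (p**2 - 1) - p**2: 1,        # lambda_0 = 15, m_0 = 1
            p: p * (p - 1)**2 // 2,          # lambda_1 = 3,  m_1 = 6
            -1: p**2,                        # lambda_2 = -1, m_2 = 9
            -p: (p - 2) * (p + 1)**2 // 2}   # lambda_3 = -3, m_3 = 8
assert dict(spec) == expected
print(dict(sorted(spec.items())))
```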
At this point we note that the problem of finding BES MUBs, framed as finding maximum cliques of size $p^2-1$ in the Cayley graph $\Gamma$ defined by Eq.~\eqref{graphdef}, is completely equivalent to finding maximum independent sets of size $p^2-1$ in the complement, $\overline{\Gamma}$, of that graph i.e., \begin{align*} \exists \text{ complete BES MUB } \iff \omega(\Gamma)=p^2-1=\alpha(\overline{\Gamma}). \end{align*}
Unfortunately, it seems that existing spectral lower bounds on the clique number are of little help for the task of proving existence of BES MUBs. Nonetheless, using some well-known spectral bounds we list some implications for the graphs $\Gamma(G,T)$ under consideration. A lower bound on the chromatic number is given by \begin{align*} \chi(\Gamma)\geq 1-\frac{\lambda_0}{\lambda_3} = p(p-1), \end{align*} which, in conjunction with Eq.~(\ref{colorings}), shows that $\protect{p(p-1) \leq \chi(\Gamma) \leq p^2-1}$. In fact, this lower bound was already implied by Eq.~(\ref{cliquelowerbound}).
For a regular graph, $\Gamma$, on $n$ vertices, Hoffman (unpublished) and Lov{\'a}sz \cite{Lovasz:1979} proved the formula \begin{align*} \alpha(\Gamma)\leq \frac{-n \lambda_{\min}}{\lambda_{\max}-\lambda_{\min}}= \frac{-n \lambda_3}{\lambda_0-\lambda_3}=p+1, \end{align*} which, in conjunction with Eq.~(\ref{alphalb}) gives us $\protect{p \leq \alpha(\Gamma) \leq p+1}$. As a final remark on spectral implications, we note that the spectrum exhibited in Thm.~\ref{MUBgraphspectrum} classifies $\Gamma(G,T)$ as a so-called walk-regular graph \cite{Godsil:1980}.
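Both spectral bounds follow from the closed-form spectrum by simple arithmetic; the sketch below (ours) evaluates them exactly for several primes.

```python
# Arithmetic check (ours): with the closed-form spectrum of the previous
# theorem, the two bounds quoted here reduce to p(p - 1) and p + 1 exactly,
# for every prime p tested.
from fractions import Fraction

for p in (3, 5, 7, 11, 13):
    n = p * (p**2 - 1)
    lam0 = p * (p**2 - 1) - p**2                # degree = largest eigenvalue
    lam3 = -p                                   # smallest eigenvalue
    chi_lb = 1 - Fraction(lam0, lam3)           # chromatic-number lower bound
    hoffman = Fraction(-n * lam3, lam0 - lam3)  # Hoffman-Lovasz ratio bound
    assert chi_lb == p * (p - 1) and hoffman == p + 1
print("bounds verified")
```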
\section{Conclusion} We have shown how the set of bipartite entangled stabilizer (BES) states can be partitioned into sets of mutually unbiased bases (MUBs), whose span is sufficient and minimal to describe an interesting class of operators that includes (mixtures of) Jamio\l kowski states, Clifford witnesses \cite{WvDMH:2010} and more. Mutual unbiasedness of two stabilizer orthonormal bases is easily shown to be equivalent to a simple relation on pairs of matrices from $SL(2,\ensuremath{\mathbb{Z}_p})$. Pairs of matrices satisfying this relation are adjacent vertices on a naturally defined Cayley graph, and the problem of finding complete (optimal) BES MUBs is transformed into that of finding maximum cliques in the Cayley graph. In a different mathematical context, the graph complements of our Cayley graphs are well-studied, and so we can quote, for example, the exact spectrum of the adjacency matrix for all prime values $p$. The most interesting open question is whether such BES-MUBs exist for all primes, or indeed for any primes greater than 11. For the closely related task of finding minimal unitary designs, a discussion by Chau \cite{Chau:2005} (invoking Dickson's theorem on the existence of certain subgroups of $SL(2,\ensuremath{\mathbb{Z}_p})$) suggests that minimal unitary designs only exist for primes up to 11. It remains to be seen whether the latitude afforded by seeking BES-MUBs, as opposed to subgroups of $SL(2,\ensuremath{\mathbb{Z}_p})$, allows for construction of optimal BES-MUBs when $p\geq 13$.
\appendix \section{Measurement Operators for BES MUBs}\label{AppendixA} Given a matrix $F \in SL(2,\ensuremath{\mathbb{Z}_p})$, this defines an orthonormal basis $\mathcal{F}$ in the bipartite Hilbert space $\mathcal{H}_{p^2}$ via \begin{align}
&\mathcal{F}=\{\ket{J_u^F}, \forall u \in \ensuremath{\mathbb{Z}_p}^2 \},\quad |\braket{J_u^F}{J_v^F}|=\delta_{u,v}\label{Fbasis}\\ &\text{where }\ \ket{J_u^F}=\left(I\otimes C_{(F\vert u)}\right) \sum_{j=0}^{p-1} \frac{\ket{jj}}{\sqrt{p}} \nonumber \end{align} We will show how the basis $\mathcal{F}$ can be rewritten in terms of stabilizer measurements, and subsequently how $\mathcal{F}$ can be identified as the simultaneous eigenbasis of a set of $p^2-1$ commuting Pauli operators.
Using so-called symplectic notation, the general form for multi-particle stabilizer operators with vectors $x=(x_1,x_2,\dots)$ and $z = (z_1,z_2,\dots)$, where $x_i, z_i \in \ensuremath{\mathbb{Z}_p}$, is \begin{align}
P_{(x|z)}=\left(X^{x_1}\otimes X^{x_2}\dots\right) \left(Z^{z_1}\otimes Z^{z_2}\dots\right). \end{align} Measuring a two-qupit Pauli operator corresponds to projecting with a rank-$p$ projector, $\Pi$, \begin{align}
\Pi:=\Pi_{(x_1,x_2|z_1,z_2)[k]}=\frac{1}{p}\big(&I+\ensuremath{\mathrm{\omega}}^{-k}P_{(x_1,x_2|z_1,z_2)}+\ldots \nonumber \\
&+\ensuremath{\mathrm{\omega}}^{-(p-1)k}(P_{(x_1,x_2|z_1,z_2)})^{p-1}\big) \end{align} The product of two appropriately chosen such projectors, $\Pi, \Pi^\prime$, defines a rank-1 operator, a stabilizer state:
\begin{align*} &\ketbra{\psi}{\psi}=\frac{1}{p^2}\sum_{s\in \mathcal{G}_s} s=\Pi\ \Pi^\prime, \end{align*}
where $\mathcal{G}_s=\langle g,g^\prime \rangle$ is a subgroup, generated by two commuting Pauli operators $g$ and $g^\prime $, of the group $\mathcal{G}_2=\left\{\ensuremath{\mathrm{\omega}}^c P_{(x|z)} \vert x,z \in \ensuremath{\mathbb{Z}_p}^2, c \in \ensuremath{\mathbb{Z}_p}\right\}$. In symplectic notation $g = \omega^{-k}P_{(x_1,x_2|z_1,z_2)}$ and
$g^\prime = \omega^{-k^\prime} P_{(x^\prime_1,x^\prime_2|z^\prime_1,z^\prime_2)}$ and commutativity of $g$ and $g^\prime$ reduces to the symplectic condition \begin{align*} \sum_{i=1,2} x_iz^\prime_i-x^\prime_iz_i \equiv 0 \!\mod{p}. \end{align*}
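For $p=3$ this commutativity criterion can be verified exhaustively; the sketch below (ours) constructs the clock and shift matrices and compares exact operator commutation against the symplectic condition $\sum_{i}(x_iz^\prime_i-x^\prime_iz_i)\equiv 0 \pmod p$.

```python
# Exhaustive check (ours) for p = 3: two-qupit operators P_(x|z) commute
# exactly when sum_i (x_i z'_i - x'_i z_i) = 0 mod p.
import itertools
import numpy as np

p = 3
w = np.exp(2j * np.pi / p)
X = np.roll(np.eye(p), 1, axis=0)            # shift: X|j> = |j+1 mod p>
Z = np.diag(w ** np.arange(p))               # clock: Z|j> = w^j |j>

def P(x1, x2, z1, z2):
    op1 = np.linalg.matrix_power(X, x1) @ np.linalg.matrix_power(Z, z1)
    op2 = np.linalg.matrix_power(X, x2) @ np.linalg.matrix_power(Z, z2)
    return np.kron(op1, op2)

# cache all 81 two-qupit operators, then test all ordered pairs;
# u = (x1, x2, z1, z2), v = (x'1, x'2, z'1, z'2)
ops = {v: P(*v) for v in itertools.product(range(p), repeat=4)}
ok = all(np.allclose(ops[u] @ ops[v], ops[v] @ ops[u])
         == ((u[0] * v[2] - v[0] * u[2] + u[1] * v[3] - v[1] * u[3]) % p == 0)
         for u, v in itertools.product(ops, repeat=2))
assert ok
print(ok)
```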
Given $u=(u_1,u_2) \in \ensuremath{\mathbb{Z}_p}^2$ and $\beta\neq 0$, the following two sets of projectors are equal, up to re-ordering \begin{align*}
&\forall u:\quad \left\{\ket{J_u^F}\bra{J_u^F}\right\}=\left\{\Pi_{(1,0|\alpha \beta^{-1},-\beta^{-1})[u_1]}\Pi_{(0,1|-\beta^{-1},\beta^{-1}\delta)[u_2]}\right\}. \end{align*} When $\beta= 0$, the following two sets of projectors are equal, up to re-ordering \begin{align*}
&\forall u:\quad \left\{\ket{J_u^F}\bra{J_u^F}\right\}=\left\{\Pi_{(1,\alpha| 0 ,\gamma)[u_1]}\Pi_{(0,0|1,-\delta)[u_2]}\right\}. \end{align*}
Many existing constructions for complete MUBs in $\mathcal{H}_{d}$ (with power-of-prime dimension $d$) are based around the partitioning of $d^2-1$ non-identity Pauli operators into $d+1$ classes, each of which contains $d-1$ mutually commuting operators. Each basis within the MUB is then given by the simultaneous eigenbasis of the $d-1$ mutually commuting operators (i.e., each class is associated with exactly one orthonormal basis, for a given partitioning). We can frame the construction of BES MUBs in this language too, with the modification that we are partitioning the set of all weight-two Pauli operators i.e. the subset $\big\{P_{(x_1,x_2|z_1,z_2)}\big\}\setminus\big\{P_{(x_1,0|z_1,0)},P_{(0,x_2|0,z_2)}\big\}$ of size $(p^2-1)^2$. With individual classes containing $p^2-1$ operators, there can only be at most $p^2-1$ such classes. It should be clear that a set of $n$ matrices $F\in SL(2,\ensuremath{\mathbb{Z}_p})$ (satisfying $\protect{\text{Tr}(F_i^{-1} F_j)\neq 2}$) is equivalent to $n$ non-overlapping classes of weight-two Pauli operators, each class containing $p^2-1$ non-identity elements. Recalling Eq.~(\ref{Fbasis}) for the definition of the basis associated with $F$, then the associated class of unitary operators is the subgroup $\mathcal{G}_s=\langle g,g^\prime \rangle$ of $\mathcal{G}_2$. The simultaneous eigenbasis of all $p^2$ Pauli operators in $\mathcal{G}_s$ forms an orthonormal basis. When $\beta \neq 0$ the class of Pauli operators corresponding to $F$ is given by \begin{align*}
\mathcal{G}_s(F)=\langle g,g^\prime \rangle:=\langle P_{(1,0|\alpha \beta^{-1},-\beta^{-1})},P_{(0,1|-\beta^{-1},\beta^{-1}\delta)} \rangle. \end{align*} When $\beta = 0$ the class of Pauli operators corresponding to $F$ is given by \begin{align*}
\mathcal{G}_s(F)=\langle g,g^\prime \rangle:=\langle P_{(1,\alpha | 0,\gamma )},P_{(0,0 | 1,-\delta)} \rangle. \end{align*}
\end{document} |
\begin{document}
\title[Solvable maximal subgroups]{A note on solvable maximal subgroups in subnormal subgroups of ${\rm GL}_n(D)$} \author[Hu\`{y}nh Vi\d{\^{e}}t Kh\'{a}nh]{Hu\`{y}nh Vi\d{\^{e}}t Kh\'{a}nh}\thanks{The second author was funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 101.04-2016.18} \address{Faculty of Mathematics and Computer Science, VNUHCM - University of Science, 227 Nguyen Van Cu Str., Dist. 5, Ho Chi Minh City, Vietnam.} \email{[email protected]; [email protected]} \author[B\`{u}i Xu\^{a}n H\h{a}i]{B\`{u}i Xu\^{a}n H\h{a}i} \keywords{division ring; maximal subgroup; solvable group; polycyclic-by-finite group.\\ \protect \indent 2010 {\it Mathematics Subject Classification.} 12E15, 16K20, 16K40, 20E25.} \maketitle \selectlanguage{english}
\begin{abstract} Let $D$ be a non-commutative division ring, and $G$ be a subnormal subgroup of ${\rm GL}_n(D)$. Assume additionally that the center of $D$ contains at least five elements if $n>1$. In this note, we show that if $G$ contains a non-abelian solvable maximal subgroup, then $n=1$ and $D$ is a cyclic algebra of prime degree over the center. \end{abstract}
\section{Introduction}
In the theory of skew linear groups, one difficult unsolved problem is whether the general skew linear group over a division ring contains maximal subgroups. In \cite{akbri}, the authors conjectured that for $n\geq 2$ and a division ring $D$, the group ${\rm GL}_n(D)$ contains no solvable maximal subgroups. In \cite{dorbidi2011}, this conjecture was shown to be true for non-abelian solvable maximal subgroups. In this paper, we consider the following more general conjecture.
\begin{conjecture}\label{conj:1} Let $D$ be a division ring, and $G$ be a non-central subnormal subgroup of ${\rm GL}_n(D)$. If $n\geq 2$, then $G$ contains no solvable maximal subgroups. \end{conjecture}
We note that this conjecture fails for $n=1$. Indeed, it was proved in \cite{akbri} that $\mathbb{C}^*\cup \mathbb{C}^* j$ is a solvable maximal subgroup of the multiplicative group $\mathbb{H}^*$ of the division ring of real quaternions $\mathbb{H}$. In this note, we show that Conjecture \ref{conj:1} is true for non-abelian solvable maximal subgroups, that is, we prove that $G$ contains no non-abelian solvable maximal subgroups. This fact generalizes the main result of \cite{dorbidi2011} and is a consequence of Theorem \ref{theorem_3.7} in the text.
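The quaternion example can be made concrete by modelling $\mathbb{H}$ as pairs of complex numbers, $a+bj\leftrightarrow(a,b)$ with $(a+bj)(c+dj)=(ac-b\bar{d})+(ad+b\bar{c})j$. The Python sketch below (our illustration, not part of the paper) checks that $M=\mathbb{C}^*\cup\mathbb{C}^*j$ is closed under multiplication, is non-abelian, and has all its commutators in the abelian group $\mathbb{C}^*$, so $M$ is metabelian and in particular solvable.

```python
# Illustration (ours): model a quaternion a + bj by the complex pair (a, b),
# with multiplication (a + bj)(c + dj) = (ac - b*conj(d)) + (ad + b*conj(c))j.
# We check that M = C* U C*j is closed under products, is non-abelian, and
# that all its commutators lie in the abelian group C*, so M is metabelian.
import random

def qmul(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d.conjugate(), a * d + b * c.conjugate())

def qinv(x):
    a, b = x
    n = abs(a)**2 + abs(b)**2                # quaternion norm
    return (a.conjugate() / n, -b / n)

def in_M(x):                                 # lies in C* U C*j
    return abs(x[0]) < 1e-9 or abs(x[1]) < 1e-9

def rand_elem():                             # random element of C* or C*j
    z = complex(random.uniform(1, 2), random.uniform(1, 2))
    return (z, 0) if random.random() < 0.5 else (0, z)

random.seed(1)
for _ in range(200):
    x, y = rand_elem(), rand_elem()
    assert in_M(qmul(x, y))                  # closure under multiplication
    comm = qmul(qmul(x, y), qmul(qinv(x), qinv(y)))
    assert abs(comm[1]) < 1e-9               # commutator lies in C*
i_, j_ = (1j, 0), (0, 1)                     # the quaternions i and j
print(qmul(i_, j_) != qmul(j_, i_))          # non-abelian: ij != ji
```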
Throughout this note, we denote by $D$ a division ring with center $F$ and by $D^*$ the multiplicative group of $D$. For a positive integer $n$, the symbol ${\rm M}_n(D)$ stands for the matrix ring of degree $n$ over $D$. We identify $F$ with $F{\rm I}_n$ via the ring isomorphism $a\mapsto a{\rm I}_n$, where ${\rm I}_n$ is the identity matrix of degree $n$. If $S$ is a subset of ${\rm M}_n(D)$, then $F[S]$ denotes the subring of ${\rm M}_n(D)$ generated by the set $S\cup F$. Also, if $n=1$, i.e., if $S\subseteq D$, then $F(S)$ is the division subring of $D$ generated by $S\cup F$. Recall that a division ring $D$ is \textit{locally finite} if for every finite subset $S$ of $D$, the division subring $F(S)$ is a finite dimensional vector space over $F$. If $H$ and $K$ are two subgroups in a group $G$, then $N_K(H)$ denotes the set of all elements $k\in K$ such that $k^{-1}Hk\leq H$, i.e., $N_K(H)=K\cap N_G(H)$. If $A$ is a ring or a group, then $Z(A)$ denotes the center of $A$.
Let $V =D^n= \left\{ {\left( {{d_1},{d_2}, \ldots ,{d_n}} \right)\left| {{d_i} \in D} \right.} \right\}$. If $G$ is a subgroup of ${\rm GL}_n(D)$, then $V$ may be viewed as a $D$-$G$ bimodule. Recall that a subgroup $G$ of ${\rm GL}_n(D)$ is \textit{irreducible} (resp. \textit{reducible, completely reducible}) if $V$ is irreducible (resp. reducible, completely reducible) as a $D$-$G$ bimodule. If $F[G]={\rm M}_n(D)$, then $G$ is \textit{absolutely irreducible} over $D$. An irreducible subgroup $G$ is \textit{imprimitive} if there exists an integer $ m \geq 2$ such that $V = \oplus _{i = 1}^m{V_i}$ as left $D$-modules and for any $g \in G$ the mapping $V_i \to V_ig$ is a permutation of the set $\{V_1, \cdots, V_m\}$. If $G$ is irreducible and not imprimitive, then $G$ is \textit{primitive}.
\section{Auxiliary lemmas}
\begin{lemma}\label{lemm_2.1}
Let $D$ be a division ring with center $F$, and $M$ be a subgroup of ${\rm GL}_n(D)$. If $M/M\cap F^*$ is a locally finite group, then $F[M]$ is a locally finite dimensional vector space over $F$. \end{lemma}
\begin{Proof} Take any finite subset $\{x_1,x_2,\dots,x_k\}\subseteq F[M]$ and write
$$x_i=f_{i_1}m_{i_1}+f_{i_2}m_{i_2}+\cdots+f_{i_s}m_{i_s}.$$
Let $G=\left\langle m_{i_j}:1\leq i\leq k,1\leq j\leq s\right\rangle$ be the subgroup of $M$ generated by all $m_{i_j}$.
Since $G$ is finitely generated and $M/M\cap F^*\cong MF^*/F^*$ is locally finite, the group $GF^*/F^*$ is finite. Let $\{y_1,y_2,\dots,y_t\}$ be a transversal of $F^*$ in $GF^*$ and set
$$R=Fy_1+Fy_2+\cdots+Fy_t.$$
Then, $R$ is a finite dimensional vector space over $F$ containing $\{x_1,x_2,\dots,x_k\}$. \end{Proof}
\begin{lemma}\label{lemma_2.2}
Every locally solvable periodic group is locally finite. \end{lemma}
\begin{Proof}
Let $G$ be a locally solvable periodic group, and $H$ be a finitely generated subgroup of $G$. Then, $H$ is solvable with derived series of length $n\geq1$, say,
$$1=H^{(n)}\unlhd H^{(n-1)}\unlhd\cdots\unlhd H'\unlhd H.$$
We shall prove that $H$ is finite by induction on $n$. If $n=1$, then $H$ is a finitely generated periodic abelian group, so it is finite. Suppose $n>1$. It is clear that $H/H'$ is a finitely generated periodic abelian group, so it is finite. Hence, $H'$ is finitely generated. By the induction hypothesis, $H'$ is finite, and as a consequence, $H$ is finite. \end{Proof}
\begin{lemma}\label{lemma_2.3}
Let $D$ be a division ring with center $F$, and $G$ be a subnormal subgroup of $D^*$. If $G$ is solvable-by-finite, then $G\subseteq F$. \end{lemma}
\begin{Proof}
Let $A$ be a solvable normal subgroup of finite index in $G$. Since $A$ is normal in $G$ and $G$ is subnormal in $D^*$, the subgroup $A$ is subnormal in $D^*$. By \cite[14.4.4]{scott}, we have $A\subseteq F$. This implies that $G/Z(G)$ is finite, so $G'$ is finite too \cite[Lemma 1.4, p. 115]{passman_77}. Therefore, $G'$ is a finite subnormal subgroup of $D^*$. In view of \cite[Theorem 8]{her}, it follows that $G'\subseteq F$, hence $G$ is solvable. Again by \cite[14.4.4]{scott}, we conclude that $G\subseteq F$. \end{Proof}
For our further use, we also need a result of Wehrfritz, which we restate in the following lemma for the reader's convenience.
\begin{lemma}\cite[Proposition 4.1]{wehrfritz_07}\label{lemma_2.4}
Let $D=E(A)$ be a division ring generated as such by its metabelian subgroup $A$ and its division subring $E$ such that $E\leq C_D(A)$. Set $H=N_{D^*}(A)$, $B=C_A(A')$, $K=E(Z(B))$, $H_1=N_{K^*}(A)=H\cap K^*$, and let $T$ be the maximal periodic normal subgroup of $B$.
\begin{enumerate}[(1)]
\item If $T$ has a quaternion subgroup $Q=\left\langle i,j\right\rangle $ of order $8$ with $A=QC_A(Q)$, then $H=Q^+AH_1$, where $Q^+=\left\langle Q,1+j,-(1+i+j+ij)/2\right\rangle$. Also, $Q$ is normal in $Q^+$ and $Q^+/{\left\langle -1,2\right\rangle}\cong\mbox{\rm Aut} Q\cong Sym(4)$.
\item If $T$ is abelian and contains an element $x$ of order $4$ not in the center of $B$, then $H=\left\langle x+1\right\rangle AH_1$.
\item In all other cases, $H=AH_1$.
\end{enumerate} \end{lemma}
\section{Maximal subgroups in subnormal subgroups of ${\rm GL}_n(D)$}
\begin{proposition}\label{proposition_3.1}
Let $D$ be a division ring with center $F$, and $G$ be a subnormal subgroup of $D^*$. If $M$ is a non-abelian solvable-by-finite maximal subgroup of $G$, then $M$ is abelian-by-finite and $[D:F]<\infty$. \end{proposition}
\begin{Proof}
Since $M$ is maximal in $G$ and $M\subseteq F(M)^*\cap G\subseteq G$, either $M = F(M)^*\cap G$ or $G \subseteq F(M)^*$. The first case implies that $M$ is a solvable-by-finite subnormal subgroup of $F(M)^*$, which yields that $M$ is abelian by Lemma \ref{lemma_2.3}, a contradiction. Therefore, the second case must occur, i.e., $G \subseteq F(M)^*$. By Stuth's theorem (see e.g. \cite[14.3.8]{scott}), we conclude that $F(M)=D$. Let $N$ be a solvable normal subgroup of finite index in $M$. First, we assume that $N$ is abelian, so $M$ is abelian-by-finite. In view of \cite[Corollary 24]{wehrfritz_89}, the ring $F[N]$ is a Goldie ring, and hence it is an Ore domain whose skew field of fractions coincides with $F(N)$. Consequently, any $\alpha\in F(N)$ may be written in the form $\alpha=pq^{-1},$ where $p, q\in F[N]$ and $q\ne0$. The normality of $N$ in $M$ implies that $F[N]$ is normalized by $M$. Thus, for any $m\in M$, we have
$$m\alpha m^{-1}=mpq^{-1}m^{-1}=(mpm^{-1})(mqm^{-1})^{-1}\in F(N).$$
In other words, $L:=F(N)$ is a subfield of $D$ normalized by $M$. Let $\{x_1,x_2,\ldots,x_k\}$ be a transversal of $N$ in $M$ and set
$$\Delta=Lx_1+Lx_2+\cdots+Lx_k.$$
Then, $\Delta$ is a domain with $\dim_L\Delta\leq k$, so $\Delta$ is a division ring that is finite dimensional over its center. It is clear that $\Delta$ contains $F$ and $M$, so $D=\Delta$ and $[D:F]<\infty$.
Next, we suppose that $N$ is a non-abelian solvable group with derived series of length $s\geq2$, say,
$$1=N^{(s)}\unlhd N^{(s-1)}\unlhd N^{(s-2)}\unlhd\cdots\unlhd N'\unlhd N \unlhd M.$$
If we set $A=N^{(s-2)}$, then $A$ is a non-abelian metabelian normal subgroup of $M$. By the same arguments as above, we conclude that $F(A)$ is normalized by $M$ and we have $M\subseteq N_G(F(A)^*)\subseteq G$. By the maximality of $M$ in $G$, either $N_G(F(A)^*)=M$ or $N_G(F(A)^*)=G$. If the first case occurs, then $G\cap F(A)^*$ is a subnormal subgroup of $F(A)^*$ contained in $M$. Since $M$ is solvable-by-finite, so is $G\cap F(A)^*$. By Lemma \ref{lemma_2.3}, $A\subseteq G\cap F(A)^*$ is abelian, a contradiction. We may therefore assume that $N_G(F(A)^*)=G$, which says that $F(A)$ is normalized by $G$. In view of Stuth's theorem, we have $F(A)=D$. From this we conclude that $Z(A)=F^*\cap A$ and $F=C_D(A)$. Set $H=N_{D^*}(A)$, $B=C_A(A')$, $K=F(Z(B))$, $H_1=H\cap K^*$, and let $T$ be the maximal periodic normal subgroup of $B$. Then $H_1$ is an abelian group, and $T$ is a characteristic subgroup of $B$ and hence of $A$. In view of Lemma \ref{lemma_2.4}, we have three possible cases:
\textit{Case 1:} $T$ is not abelian.
Since $T$ is normal in $M$, we conclude that $M\subseteq N_G(F(T)^*)\subseteq G$. By the maximality of $M$ in $G$, either $M = N_G(F(T)^*)$ or $G= N_G(F(T)^*)$. The first case implies that $F(T)^*\cap G$ is a subnormal subgroup of $F(T)^*$ contained in $M$. Again by Lemma~\ref{lemma_2.3}, it follows that $T\subseteq F(T)\cap G$ is abelian, a contradiction. Thus, we may assume that $G= N_G(F(T)^*)$, which implies that $F(T)=D$ by Stuth's theorem. By Lemma~\ref{lemma_2.2}, $T$ is locally finite. In view of Lemma \ref{lemm_2.1}, we conclude that $D=F(T)=F[T]$ is a locally finite division ring. Since $M$ is solvable-by-finite, it contains no non-cyclic free subgroups. In view of \cite[Theorem 3.1]{hai-khanh}, it follows that $[D:F]<\infty$ and $M$ is abelian-by-finite.
\textit{Case 2:} $T$ is abelian and contains an element $x$ of order $4$ not in the center of $B=C_A(A')$.
It is clear that $x$ is not contained in $F$. Because $x$ is of finite order, the field $F(x)$ is algebraic over $F$. Since $\left\langle x\right\rangle$ is a $2$-primary component of $T$, it is a characteristic subgroup of $T$ (see the proof of \cite[Theorem 1.1, p. 132]{wehrfritz_07}). Consequently, $\left\langle x\right\rangle$ is a normal subgroup of $M$. Thus, all elements of the set $x^M:=\{m^{-1}xm\vert m\in M\}\subseteq F(x)$ have the same minimal polynomial over $F$. This implies $|x^M|<\infty$, so $x$ is an $FC$-element, and consequently, $[M:C_M(x)]<\infty$. Set $C={\rm Core}_M(C_M(x))$. Then $C\unlhd M$ and $[M:C]$ is finite. Since $M$ normalizes $F(C)$, we have $M\subseteq N_G(F(C)^*) \subseteq G$. By the maximality of $M$ in $G$, either $N_G(F(C)^*)=M$ or $N_G(F(C)^*)=G$. The last case implies that $F(C)=D$, and consequently, $x\in F$, a contradiction. Thus, we may assume that $N_G(F(C)^*)=M$. From this, we conclude that $G\cap F(C)^*$ is a subnormal subgroup of $F(C)^*$ which is contained in $M$. Thus, $C\subseteq G\cap F(C)^*$ is contained in the center of $F(C)$ by \cite[14.4.4]{scott}. Therefore, $C$ is an abelian normal subgroup of finite index in $M$. By the same arguments used in the first paragraph we conclude that $[D:F]<\infty$.
\textit{Case 3:} $H=AH_1$.
Since $A'\subseteq H_1\cap A$, the quotient $H/H_1\cong A/A\cap H_1$ is abelian, and hence $H'\subseteq H_1$. Since $H_1$ is abelian, $H'$ is abelian too. Moreover, since $M\subseteq H$, it follows that $M'$ is also abelian. In other words, $M$ is a metabelian group, and the conclusions follow from \cite[Theorem 3.3]{hai-tu}. \end{Proof}
Let $D$ be a division ring, and $G$ be a subnormal subgroup of $D^*$. It was shown in \cite[Theorem 3.3]{hai-tu} that if $G$ contains a non-abelian metabelian maximal subgroup, then $D$ is a cyclic algebra of prime degree over its center. The following theorem generalizes this phenomenon.
\begin{theorem} \label{theorem_3.2}
Let $D$ be a division ring with center $F$, and $G$ be a subnormal subgroup of $D^*$. If $M$ is a non-abelian solvable maximal subgroup of $G$, then the following conditions hold:
\begin{enumerate}[(i)]
\item There exists a maximal subfield $K$ of $D$ such that $K/F$ is a finite Galois extension with $\mathrm{Gal}(K/F)\cong M/K^*\cap G\cong \mathbb{Z}_p$ for some prime $p$, and $[D:F]=p^2$.
\item The subgroup $K^*\cap G$ is the $FC$-center of $M$. Also, $K^*\cap G$ is the Fitting subgroup of $M$. Furthermore, for any $x\in M\setminus K$, we have $x^p\in F$ and $D=F[M]=\bigoplus_{i=1}^pKx^i$.
\end{enumerate} \end{theorem}
\begin{Proof}
By Proposition \ref{proposition_3.1}, it follows that $[D:F]<\infty$. Since $M$ is solvable, it contains no non-cyclic free subgroups. In view of \cite[Theorem 3.4]{hai-tu}, we have $F[M]=D$, and there exists a maximal subfield $K$ of $D$ containing $F$ such that $K/F$ is a Galois extension, $N_G(K^*)=M$, $K^*\cap G$ is the Fitting subgroup of $M$ as well as its $FC$-center, and $M/K^*\cap G\cong{\rm Gal}(K/F)$ is a finite simple group of order $[K:F]$. Since $M/K^*\cap G$ is solvable and simple, one has $M/K^*\cap G\cong{\rm Gal}(K/F)\cong \mathbb{Z}_p$ for some prime number $p$. Therefore, $[K:F]=p$ and $[D:F]=p^2$. For any $x\in M\backslash K$, if $x^p\not\in F$, then by the fact that $F[M]=D$, we conclude that $C_M(x^p)\ne M$. Moreover, since $x^p\in K^*\cap G$, it follows that $\left\langle x,K^*\cap G\right\rangle \leq C_M(x^p)$. In other words, $C_M(x^p)$ is a subgroup of $M$ strictly containing $K^*\cap G$. Because $M/K^*\cap G$ is simple, we have $C_M(x^p)= M$, a contradiction. Therefore $x^p\in F$. Furthermore, since $x^p\in K$ and $[D:K]_r=p$, we conclude $D=\bigoplus_{i=1}^{p}Kx^i$. \end{Proof}
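The case $p=2$ of this structure (our illustration) is realized by the division ring $\mathbb{H}$ of real quaternions mentioned in the introduction: taking $D=\mathbb{H}$, $F=\mathbb{R}$, $K=\mathbb{C}$, $G=\mathbb{H}^*$, $M=\mathbb{C}^*\cup\mathbb{C}^*j$ and $x=j\in M\setminus K$, one has
$$x^2=-1\in F,\qquad M/K^*\cap G\cong{\rm Gal}(\mathbb{C}/\mathbb{R})\cong\mathbb{Z}_2,\qquad \mathbb{H}=K\oplus Kx,\qquad [\mathbb{H}:\mathbb{R}]=2^2.$$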
Also, the authors in \cite{nassab14} showed that if $D$ is an infinite division ring, then $D^*$ contains no polycyclic-by-finite maximal subgroups. In the following corollary, we will see that every subnormal subgroup of $D^*$ does not contain non-abelian polycyclic-by-finite maximal subgroups.
\begin{corollary}
Let $D$ be a division ring with center $F$, $G$ be a subnormal subgroup of $D^*$, and $M$ be a non-abelian maximal subgroup of $G$. Then $M$ cannot be finitely generated solvable-by-finite. In particular, $M$ cannot be polycyclic-by-finite. \end{corollary}
\begin{Proof}
Suppose that $M$ is solvable-by-finite. Then by Proposition \ref{proposition_3.1} , we conclude that $[D:F]<\infty$. In view of \cite[Corollary 3]{mah2000}, it follows that $M$ is not finitely generated. The rest of the corollary is clear. \end{Proof}
\begin{theorem} \label{theorem_3.4}
Let $D$ be a non-commutative locally finite division ring with center $F$, and $G$ be a subnormal subgroup of ${\rm GL}_n(D)$, $n\geq 1$. If $M$ is a non-abelian solvable maximal subgroup of $G$, then $n=1$ and all conclusions of Theorem \ref{theorem_3.2} hold. \end{theorem}
\begin{Proof}
By \cite[Theorem 3.1]{hai-khanh}, there exists a maximal subfield $K$ of ${\rm M}_n(D)$ containing $F$ such that $K^*\cap G$ is a normal subgroup of $M$ and $M/K^*\cap G$ is a finite simple group of order $[K:F]$. Since $M/K^*\cap G$ is solvable and simple, we conclude $M/K^*\cap G\cong \mathbb{Z}_p$, for some prime number $p$. It follows that $[K:F]=p$ and $[M_n(D):F]=p^2$, from which we have $n=1$. Finally, all conclusions follow from Theorem \ref{theorem_3.2}. \end{Proof}
\begin{lemma}\label{lemma_3.5}
Let $R$ be a ring, and $G$ be a subgroup of $R^*$. Assume that $F$ is a central subfield of $R$ and that $A$ is a maximal abelian subgroup of $G$ such that $K=F[A]$ is normalized by $G$. Then $F[G]=\oplus _{g \in T}{Kg}$ for every transversal $T$ of $A$ in $G$. \end{lemma}
\begin{Proof}
For the proof of this lemma, we use the similar techniques as in the proof of \cite[Lemma 3.1]{dorbidi2011}.
Since $K$ is normalized by $G$, it follows that $F[G]=\sum\nolimits_{g \in T}{Kg}$ for every transversal $T$ of $A$ in $G$. Therefore, it suffices to prove that every finite subset $\oplus{g_1,g_2,\dots,g_n\oplus}\subseteq T$ is linearly independent over $K$. Assume by contrary that there exists such a non-trivial relation
$$k_1g_1+k_2g_2+\cdots+k_ng_n=0.$$
Clearly, we can suppose that all $k_i$ are non-zero, and that $n$ is minimal. If $n=1$, then there is nothing to prove, so we can suppose $n>1$. Since the cosets $Ag_1$ and $Ag_2$ are disjoint, we have $g_1^{-1}g_2\not\in A=C_G(A)$. So, there exists an element $x\in A$ such that $g_1^{-1}g_2x\ne xg_1^{-1}g_2$. For each $1\leq i\leq n$, if we set $x_i=g_ixg_i^{-1}$, then $x_1\ne x_2$. Since $G$ normalizes $K$, it follows $x_i\in K$ for all $1\leq i\leq n$. Now, we have
$$(k_1g_1+\cdots+k_ng_n)x-x_1(k_1g_1+\cdots+k_ng_n)= 0.$$
By definition, we have $x_ig_i=g_ix$, and $x$, $x_i\in K$ for all $i$. Recall that $K=F[A]$ is commutative, so from the last equality
$$\left( {{k_1}{g_1}x + {k_2}{g_2}x + \cdots + {k_n}{g_n}x} \right) - \left( {{x_1}{k_1}{g_1} + {x_1}{k_2}{g_2} + \cdots + {x_1}{k_n}{g_n}} \right) = 0,$$
it follows
$$\left( {{k_1}{x_1}{g_1} + {k_2}{x_2}{g_2} + \cdots + {k_n}{x_n}{g_n}} \right) - \left( {{k_1}{x_1}{g_1} + {k_2}{x_1}{g_2} + \cdots + {k_n}{x_1}{g_n}} \right) = 0.$$
Consequently, we get
$$\left( {{x_2} - {x_1}} \right){k_2}{g_2} + \cdots + \left( {{x_n} - {x_1}} \right){k_n}{g_n} = 0,$$
which is a non-trivial relation with less than $n$ summands because $x_1\ne x_2$, a contradiction. Therefore, $T$ is linearly independent over $K$. \end{Proof}
\begin{remark}\label{rem.subnormal}
In view of \cite[Theorem 11]{mah98}, if $D$ is a division ring with at least five elements and $n\geq 2$, then any non-central subnormal subgroup of ${\rm GL}_n(D)$ contains ${\rm SL}_n(D)$ and hence is normal. \end{remark}
\begin{theorem} \label{theorem_3.6}
Let $D$ be non-commutative division ring with center $F$, and $G$ be a subnormal subgroup of ${\rm GL}_n(D)$, $n\geq2$. Assume additionally that $F$ contains at least five elements. If $M$ is a solvable maximal subgroup of $G$, then $M$ is abelian. \end{theorem}
\begin{Proof}
If $G\subseteq F$, then there is nothing to prove. Thus, we may assume that $G$ is non-central, hence ${\rm SL}_n(D)\subseteq G$ and $G$ is normal in ${\rm GL}_n(D)$ by Remark \ref{rem.subnormal}. Setting $R=F[M]$, then $M\subseteq R^*\cap G \subseteq G$. By the maximality of $M$ in $G$, either $R^*\cap G=M$ or $G\subseteq R^*$. We need to consider two possible cases:
\textit{Case 1:} $R^*\cap G=M$.
The normality of $G$ implies that $M$ is a normal subgroup of $R^*$. If $M$ is reducible, then by \cite[Lemma 1]{kiani_07}, it contains a copy of $D^*$. It follows that $D^*$ is solvable, and hence it is commutative, a contradiction. We may therefore assume that $M$ is irreducible. Then $R$ is a prime ring by \cite[1.1.14]{shirvani-wehrfritz}. So, in view of \cite[Theorem 2]{lanski_81}, either $M\subseteq Z(R)$ or $R$ is a domain. If the first case occurs, then we are done. Now, suppose that $R$ is a domain. By \cite[Corollary 24]{wehrfritz_89}, we conclude that $R$ is a Goldie ring, and thus $R$ is an Ore domain. Let $\Delta_1$ be the skew field of fractions of $R$, which is contained in ${\rm M}_n(D)$ by \cite[5.7.8]{shirvani-wehrfritz}. Since $M\subseteq \Delta_1^*\cap G\subseteq G$, either $G\subseteq\Delta_1^*$ or $M=\Delta_1^*\cap G$. The first case occurs implies that $\Delta_1$ contains $F[{\rm SL}_n(D)]$. Thus, if $G\subseteq\Delta_1^*$, then by the Cartan-Brauer-Hua Theorem for the matrix ring (see e.g. \cite[Theorem D]{dorbidi2011}), one has $\Delta_1={\rm M}_n(D)$, which is impossible since $n\geq 2$. Thus the second case must occur, i.e., $M=\Delta_1\cap G$, which yields $M$ is normal in $\Delta_1^*$. Since $M$ is solvable, it is contained in $Z(\Delta_1)$ by \cite[14.4.4]{scott}, so $M$ is abelian.
\textit{Case 2:} $G\subseteq R^*$.
In this case, Remark \ref{rem.subnormal} yields ${\rm SL}_n(D)\subseteq R^*$. Thus, by the Cartan-Brauer-Hua Theorem for the matrix ring, one has $R=F[M]={\rm M}_n(D)$. It follows by \cite[Theorem A]{wehrfritz86} that $M$ is abelian-by-locally finite. Let $A$ be a maximal abelian normal subgroup of $M$ such that $M/A$ is locally finite. Then \cite[1.2.12]{shirvani-wehrfritz} says that $F[A]$ is a semisimple artinian ring. The Wedderburn-Artin Theorem implies that
$$ F[A] \cong {\rm M}_{n_1}(D_1)\times {\rm M}_{n_2}(D_2)\cdots\times {\rm M}_{n_s}(D_s),$$
where $D_i$ are division $F$-algebras, $1\leq i\leq s$. Since $F[A]$ is abelian, $n_i = 1$ and $K_i:=D_i=Z(D_i)$ are fields that contain $F$ for all $i$. Therefore,
$$F[A]\cong K_1\times K_2\cdots\times K_s.$$
If $M$ is imprimitive, then by \cite[Lemma 2.6]{hai-khanh}, we conclude that $M$ contains a copy of ${\rm SL}_r(D)$ for some $r\geq 1$. This fact cannot occur: if $r>1$, then ${\rm SL}_r(D)$ is unsolvable; if $r=1$, then $D'$ is solvable and hence $D$ is commutative, a contradiction. Thus, $M$ is primitive, and \cite[Proposition 3.3]{dorbidi2011} implies that $F[A]$ is an integral domain, so $s=1$. It follows that $K:=F[A]$ is a subfield of ${\rm M}_n(D)$ containing $F$. Again by \cite[Proposition 3.3]{dorbidi2011}, we conclude that $$L:=C_{{\rm M}_n(D)}(K)=C_{{\rm M}_n(D)}(A)\cong {\rm M}_m(\Delta_2)$$ for some division $F$-algebra $\Delta_2$. Since $M$ normalizes $K$, it also normalizes $L$. Therefore, we have $M\subseteq N_G(L^*)\subseteq G$. By the maximality of $M$ in $G$, either $M= N_G(L^*)$ or $G=N_G(L^*)$. The last case implies that $L^*\cap G$ is normal in ${\rm GL}_n(D)$. By Remark \ref{rem.subnormal}, either $L^*\cap G\subseteq F$ or ${\rm SL}_n(D)\subseteq L^*\cap G$. If the first case occurs, then $A\subseteq F$ because $A$ is contained in $L^*\cap G$. If the second case occurs, then by the Cartan-Brauer-Hua Theorem for the matrix, one has $L={\rm M}_n(D)$. It follows that $K=F[A]\subseteq F$, which also implies that $A\subseteq F$. Thus, in both case we have $A\subseteq F$. Consequently, $M/M\cap F^*$ is locally finite, and hence $D$ is a locally finite division ring by Lemma \ref{lemm_2.1}. If $M$ is non-abelian, then by Theorem \ref{theorem_3.4}, we conclude that $n=1$, a contradiction. Therefore $M$ is abelian in this case. Now, we consider the case $M= N_G(L^*)$, from which we have $L^*\cap G \subseteq M$. In other words, $L^*\cap G$ is a solvable normal subgroup of ${\rm GL}_m(\Delta_2)$. 
From this, we may conclude that $L^*\cap G$ is abelian: if $m>1$, then in view of Remark \ref{rem.subnormal}, one has $L^*\cap G\subseteq Z(\Delta_2)$ or ${\rm SL}_m(\Delta_2)\subseteq L^*\cap G$, but the latter cannot happen since ${\rm SL}_m(\Delta_2)$ is unsolvable; if $m=1$ then $L=\Delta_2$, and according to \cite[14.4.4]{scott}, we conclude that $L^*\cap G\subseteq Z(\Delta_2)$. In short, we have $L^*\cap G$ is an abelian normal subgroup of $M$ and $M/L^*\cap G$ is locally finite. By the maximality of $A$ in $M$, one has $A=L^*\cap G$. Because we are in the case $L^*\cap G \subseteq M$, it follows that $L^*\cap G=L^*\cap M$. Consequently, $$A=L^*\cap M=C_{{\rm GL}_n(D)}(A)\cap M =C_M(A),$$ which means $A$ is a maximal abelian subgroup of $M$.
By Lemma \ref{lemma_3.5}, $F[M]=\oplus _{m \in T}{Km}$ for some transversal $T$ of $A$ in $M$. Thus, for any $x\in L$, there exist $k_1,k_2,\dots,k_t\in K$ and $m_1,m_2,\dots,m_t\in T$ such that $x=k_1m_1+k_2m_2+\cdots+k_tm_t$. Take an arbitrary element $a\in A$, since $xa=ax$, it follows that
$$k_1m_1a+k_2m_2a+\cdots+k_tm_ta=ak_1m_1+ak_2m_2+\cdots+ak_tm_t.$$
By the normality of $A$ in $M$, there exist $a_i\in A$ such that $m_ia=a_im_i$ for all $1\leq i\leq t$. Moreover, we have $a$ and $a_i$'s are in $K$ which is a field, the equality implies
$$k_1a_1m_1+k_2a_2m_2+\cdots+k_ta_tm_t=k_1am_1+k_2am_2+\cdots+k_tam_t,$$ from which it follows that
$$ (k_1a_1-k_1a)m_1+(k_2a_2-k_2a)m_2+\cdots+(k_ta_t-k_ta)m_t=0.$$
Since $\oplus{m_1,m_2,\dots,m_t\oplus}$ is linearly independent over $K$, one has $a=a_1=\cdots=a_t$. Consequently, $m_ia=am_i$ for all $a\in A$, and thus $m_i\in C_M(A)=A$ for all $1\leq i\leq t$. This means $x\in K$, and hence $L=K$ and $K$ is a maximal subfield of ${\rm M}_n(D)$.
Next, we prove that $M/A$ is simple. Suppose that $N$ is an arbitrary normal subgroup of $M$ properly containing $A$. Note that by the maximality of $A$ in $M$, we conclude that $N$ is non-abelian. We claim that $Q:=F[N]={\rm M}_n(D)$. Indeed, since $N$ is normal in $M$, we have $M\subseteq N_G(Q^*)\subseteq G$, and hence either $N_G(Q^*)=M$ or $N_G(Q^*)=G$. First, we suppose the former case occurs. Then $Q^*\cap G\subseteq M$, hence $Q^*\cap G$ is a solvable normal subgroup of $Q^*$. In view of \cite[Proposition 3.3]{dorbidi2011}, $Q$ is a prime ring. It follows by \cite[Theorem 2]{lanski_81} that either $Q^*\cap G\subseteq Z(Q)$ or $Q$ is a domain. If the first case occurs, then $N\subseteq Q^*\cap G$ is abelian, which contradicts to the choice of $N$. If $Q$ is a domain, then by Goldie's theorem, it is an Ore domain. Let $\Delta_3$ be the skew field of fractions of $Q$, which is contained in ${\rm M}_n(D)$ by \cite[5.7.8]{shirvani-wehrfritz}. Because $M$ normalizes $Q$, it also normalizes $\Delta_3$, from which we have $M\subseteq N_G(\Delta_3^*)\subseteq G$. Again by the maximality of $M$ in $G$, either $N_G(\Delta_3^*)=M$ or $N_G(\Delta_3^*)=G$. The first case implies that $\Delta_3^*\cap G$ is a solvable normal subgroup of $\Delta_3^*$. Consequently, $N\subseteq\Delta_3^*\cap G$ is abelian by \cite[14.4.4]{scott}, a contradiction. If $N_G(\Delta_3^*)=G$, then $\Delta_3={\rm M}_n(D)$ by the Cartan-Brauer-Hua Theorem for the matrix ring, which is impossible since $n\geq2$. Therefore, the case $N_G(Q^*)=M$ cannot occur. Next, we consider the case $N_G(Q^*)=G$. In this case we have $Q^*\cap G \unlhd {\rm GL}_n(D)$, and hence either $Q^*\cap G\subseteq F$ or ${\rm SL}_n(D)\subseteq Q^*\cap G$ by Remark \ref{rem.subnormal}. The first case cannot occur since $Q^*\cap G$ contains $N$, which is non-abelian. Therefore, we have ${\rm SL}_n(D)\subseteq Q^*$. By the Cartan-Brauer-Hua theorem for the matrix ring, we conclude $Q={\rm M}_n(D)$ as claimed. 
In other words, we have $F[N]=F[M]={\rm M}_n(D)$.
For any $m\in M\subseteq F[N]$, there exist $f_1,f_2,\dots,f_s\in F$ and $n_1,n_2,\dots,n_s\in N$ such that
$$m=f_1n_1+f_2n_2+\cdots+f_sn_s.$$
Let $H=\left\langle n_1,n_2,\dots,n_s \right\rangle $ be the subgroup of $N$ generated by $n_1,n_2,\dots,n_s$. Set $B=AH$ and $S=F[B]$. Recall that $A$ is a maximal abelian subgroup of $M$. Thus, if $B$ is abelian, then $A=B$ and hence $H\subseteq A$. Consequently, $m\in F[A]=K$, from which it follows that $m\in K^*\cap M=A\subseteq N$. Now, assume that $B$ is non-abelian, and we will prove that $m$ also belongs to $N$ in this case. Since $M/A$ is locally finite, $B/A$ is finite. Let $\oplus{x_1,\ldots,x_k\oplus}$ be a transversal of $A$ in $B$. The maximality of $A$ in $M$ implies that $A$ is a maximal abelian subgroup of $B$, and that $A$ is also normal in $B$. By Lemma \ref{lemma_3.5},
$$S=Kx_1\oplus Kx_2\oplus\cdots\oplus Kx_k,$$
which says that $S$ is an artinian ring. Since $C_{{\rm M}_n(D)}(A)=C_{{\rm M}_n(D)}(K)=L$ is a field, in view of \cite[Proposition 3.3]{dorbidi2011}, we conclude that $A$ is irreducible. Because $B$ contains $A$, by definition, it is irreducible too. It follows by \cite[1.1.14]{shirvani-wehrfritz} that $S$ is a prime ring. Now, $S$ is both prime and artinian, so it is simple and $S\cong {\rm M}_{n_0}(\Delta_0)$ for some division $F$-algebra $\Delta_0$. If we set $F_0=Z(\Delta_0)$, then $Z(S)=F_0$. Since $B$ is abelian-by-finite, the group ring $FB$ is a PI-ring by \cite[Lemma 11, p. 176]{passman_77}. Thus, as a hommomorphic image of $FB$, the ring $S=F[B]$ is also a PI-ring. By Kaplansky's theorem (\cite[Theorem 3.4, p. 193]{passman_77}), we conclude that $[S:F_0]<\infty$. Since $K$ is a maximal subfield of ${\rm M}_n(D)$, it is also maximal in $S$. From this, we conclude that $F_0\subseteq C_S(K)= K$, and that $K$ is a finite extension field over $F_0$.
Recall that $A$ is normal in $B$, so for any $b\in B$, the mapping $\theta_b:K\to K$ given by $\theta_b(x)=bxb^{-1}$ is well defined. It is clear that $\theta_b$ is an $F_0$-automorphism of $K$. Thus, the mapping $$\psi:B\to {\rm Gal}(K/F_0)$$ defined by $\psi(b)=\theta_b$ is a group homomorphism with $$\mathrm{ker}\psi=C_{S^*}(K^*)\cap B=C_{S^*}(A)\cap B=C_B(A)=A.$$ Since $F_0[B]=S$, it follows that $C_S(B)=F_0$. Therefore, the fixed field of $\psi(B)$ is $F_0$, and hence $K/F_0$ is a Galois extension. By the fundamental theorem of Galois theory, one has $\psi$ is a surjective homomorphism. Hence, $ B/A\cong {\rm Gal}(K/F_0)$.
Setting $M_0=M\cap S^*$, then $m\in M_0$, $B\subseteq M_0$, and $F_0[M_0]=F_0[B]=S$. The conditions $F_0\subseteq K$ and $F\subseteq F_0$ implies that $K=F[A]=F_0[A]$. It is clear that $A$ is a maximal abelian subgroup of $M_0$, and that $A$ is also normal in $M_0$. If $M_0/A$ is infinite, then there exists an infinite transversal $T$ of $A$ in $M_0$ such that $S=F_0[M_0]=\oplus_{m\in T}Km$ by Lemma \ref{lemma_3.5}. It follows that $[S:K]=\infty$, a contradiction. Therefore, $M_0/A$ must be finite. Replacing $B$ by $M_0$ in the preceding paragraph, we also conclude that $ M_0/A\cong {\rm Gal}(K/F_0)$. Consequently, $B/A\cong{\rm Gal}(K/F_0)\cong M_0/A$. The conditions $B\subseteq M_0$ and $B/A \cong M_0/A$ imply $B=M_0$. Hence, $m\in M_0= B\subseteq N$. Since $m$ was chosen arbitrarily, it follows that $M=N$, which implies the simplicity of $M/A$. Since $M/A$ is simple and solvable, one has $M/A\cong \mathbb{Z}_p$, for some prime number $p$. By Lemma \ref{lemma_3.5}, it follows $\dim_K{\rm M}_n(D)=|M/A|=p$, which forces $n=1$, a contradiction. \end{Proof}
Now, we are ready to get the main result of this note which gives in particular, the positive answer to Conjecture \ref{conj:1} for non-abelian case. \begin{theorem} \label{theorem_3.7}
Let $D$ be a non-commutative division ring with center $F$, $G$ a subnormal subgroup of ${\rm GL}_n(D)$. Assume additionally that $F$ contains at least five elements if $n>1$. If $M$ is a non-abelian solvable maximal subgroup of $G$, then $n=1$ and the following conditions hold:
\begin{enumerate}[(i)]
\item There exists a maximal subfield $K$ of $D$ such that $K/F$ is a finite Galois extension with $\mathrm{Gal}(K/F)\cong M/K^*\cap G\cong \mathbb{Z}_p$ for some prime $p$, and $[D:F]=p^2$.
\item The subgroup $K^*\cap G$ is the $FC$-center. Also, $K^*\cap G$ is the Fitting subgroup of $M$. Furthermore, for any $x\in M\setminus K$, we have $x^p\in F$ and $D=F[M]=\bigoplus_{i=1}^pKx^i$.
\end{enumerate} \end{theorem}
\begin{Proof}
Combining Theorem \ref{theorem_3.2} and Theorem \ref{theorem_3.6}, we get the result. \end{Proof}
\end{document} |
\begin{document}
\title{Subgroup decomposition in $\Out(F_n)$\\ Part III: Weak Attraction Theory}
\begin{abstract} This is the third in a series of four papers, announced in \cite{HandelMosher:SubgroupsIntro}, that develop a decomposition theory for subgroups of $\mathsf{Out}(F_n)$.
In this paper, given $\phi \in \mathsf{Out}(F_n)$ and an attracting-repelling lamination pair for $\phi$, we study which lines and conjugacy classes in $F_n$ are weakly attracted to that lamination pair under forward and backward iteration of $\phi$ respectively. For conjugacy classes, we prove Theorem F from the research annoucement, which exhibits a unique vertex group system called the ``nonattracting subgroup system'' having the property that the conjugacy classes it carries are characterized as those which are not weakly attracted to the attracting lamination under forward iteration, and also as those which are not weakly attracted to the repelling lamination under backward iteration. For lines in general, we prove Theorem G which characterizes exactly which lines are weakly attracted to the attracting lamination under forward iteration and which to the repelling lamination under backward iteration. We also prove Theorem H which gives a uniform version of weak attraction of lines.
\end{abstract}
\newcommand\arXiv{arXiv}
\subsection*{Introduction}
Many results about the groups $\MCG(S)$ and $\mathsf{Out}(F_n)$ are based on dynamical systems. The Tits alternative (\BookOne, \BookTwo\ for $\mathsf{Out}(F_n)$; \cite{McCarthy:Tits} and \cite{Ivanov:subgroups} independently for $\MCG(S)$) says that for any subgroup $H$, either $H$ is virtually abelian or $H$ contains a free subgroup of rank~$\ge 2$, and these free subgroups are constructed by analogues of the classical ``ping-pong argument'' for group actions on topological spaces. Dynamical ping-pong arguments were also important in Ivanov's classification of subgroups of $\MCG(S)$ \cite{Ivanov:subgroups}. And they will be important in \PartFour\ where we prove our main theorem about subgroups of $\mathsf{Out}(F_n)$, Theorem~C stated in the Introduction~\Intro.
Ping-pong arguments are themselves based on understanding the dynamics of an individual group element~$\phi$, particularly an analysis of attracting and repelling fixed sets of~$\phi$, of their associated basins of attraction and repulsion, and of neutral sets which are neither attracted nor repelled. The proofs in \cite{McCarthy:Tits,Ivanov:subgroups} use the action of $\MCG(S)$ on Thurston's space $\mathcal{PML}(S)$ of projective measured laminations on $S$.
The proof of Theorem C in \PartFour\ will employ ping-pong arguments for the action of $\mathsf{Out}(F_n)$ on the \emph{space of lines} $\mathcal B = \mathcal B(F_n)$, which is just the quotient of the action of $F_n$ on the space $\widetilde \mathcal B$ of two point subsets of $\bdy F_n$. The basis of those ping-pong arguments will be \emph{Weak Attraction Theory} which, given $\phi \in \mathsf{Out}(F_n)$ and a dual lamination pair $\Lambda^\pm \in \L^\pm(\phi)$, addresses the following dynamical question regarding the action of $\phi$ on~$\mathcal B$: \begin{description} \item[General weak attraction question:] Which lines $\ell \in \mathcal B$ are weakly attracted\index{weakly attracted} to $\Lambda_+$ under iteration of $\phi$? Which are weakly attracted to $\Lambda_-$ under iteration of $\phi^\inv$? And which are weakly attracted to neither $\Lambda_+$ nor $\Lambda_-$? \end{description} To say that $\ell$ is weakly attracted to $\Lambda_+$ (under iteration of~$\phi$) means that $\ell$ is weakly attracted to a generic leaf $\lambda \in \Lambda_+$, that is, the sequence $\phi^k(\ell)$ converges in the weak topology to $\ell$ as $k \to +\infinity$. Note that this is independent of the choice of generic leaf of $\Lambda_+$, since all of them have the same weak closure, namely~$\Lambda_+$.
Our answers to the above question are an adaptation and generalization of many of the ideas and constructions found in The Weak Attraction Theorem 6.0.1 of \BookOne, which answered a narrower version of the question above, obtained by restricting to a lamination $\Lambda_+$ which is topmost in $\L(\phi)$ and to birecurrent lines. That answer was expressed in terms of the structure of an ``improved relative train track representative'' of~$\phi$.
In this paper we develop weak attraction theory to completely answer the general weak attraction question. Our theorems are expressed both in terms of the structure of a CT\ representative of $\phi$, and in more invariant terms. The theory is summarized in Theorems F, G and~H, versions of which were stated earlier in \Intro; the versions stated here are more expansive and precise. Theorem~F focusses on periodic lines and on the nonattracting subgroup system; Theorems~G and~H are concerned with arbitrary lines. Each of these theorems has applications in \PartFour.
\paragraph{The nonattracting subgroup system: Theorem F.} We first answer the general weak attraction question restricted to ``periodic'' lines in $\mathcal B$, equivalently circuits in marked graphs, equivalently conjugacy classes in $F_n$. The statement uses two concepts from \hbox{\PartOne}: \emph{geometricity} of general EG\ strata (which was in turn based on geometricity of top EG\ strata as developed in \BookOne), and \emph{vertex group systems}.
\begin{TheoremF}[Properties of the nonattracting subgroup system] For each rotationless $\phi \in \mathsf{Out}(F_n)$ and each $\Lambda^\pm \in \L^\pm(\phi)$ there exists a subgroup system $\mathcal A_\na(\Lambda^\pm)$, called the \emph{nonattracting subgroup system}, with the following properties: \begin{enumerate} \item\label{ItemThmCAnaVGS} (Proposition \ref{PropVerySmallTree} \pref{ItemA_naNP}) $\mathcal A_\na(\Lambda^\pm)$ is a vertex group system. \item\label{ItemThmCAnaGeom} (Proposition~\ref{PropVerySmallTree}~\pref{ItemA_naNoNP}) $\Lambda^\pm$ is geometric if and only if
$\mathcal A_\na(\Lambda^\pm)$ is not a free factor system. \item\label{ItemThmCAnaIsAna} (Corollary~\ref{CorNAWellDefined}) For each conjugacy class $c$ in $F_n$ the following are equivalent: \\ $\bullet$\quad $c$ is not weakly attracted to $\Lambda^+_\phi$ under iteration of $\phi$; \\ $\bullet$\quad $c$ is carried by $\mathcal A_\na(\Lambda^\pm)$. \item\label{ItemThmFAnaWellDef} (Corollary~\ref{CorPMna}) $\mathcal A_\na(\Lambda^\pm)$ is uniquely determined by items~\pref{ItemThmCAnaVGS} and~\pref{ItemThmCAnaIsAna}. \item\label{ItemThmCAnaPM} (Corollaries \ref{CorNAWellDefined} and \ref{CorNAIndOfPM}) For each conjugacy class $c$ in $F_n$, $c$ is not weakly attracted to $\Lambda^+$ by iteration of $\phi$ if and only if $c$ is not weakly attracted to $\Lambda^-$ by iteration of~$\phi^\inv$. \end{enumerate} \end{TheoremF} \noindent Furthermore (Definition~\ref{defn:Z}), choosing any CT\ $f \from G \to G$ representing $\phi$ with EG-stratum $H_r$ corresponding to $\Lambda^+_\phi$, the nonattracting subgroup system $\mathcal A_\na(\Lambda^\pm)$ has a concrete description in terms of $f$ and the indivisible Nielsen paths of height~$r$ (the latter are described in (\recognition\ Corollary~4.19) or Fact~\refGM{FactEGNPUniqueness}). The description given in Definition~\ref{defn:Z} is our first definition of $\mathcal A_\na(\Lambda^\pm)$, and it is not until Corollary~\ref{CorPMna} that we prove $\mathcal A_\na(\Lambda^\pm)$ is well-defined independent of the choice of CT\ (item~\pref{ItemThmFAnaWellDef} above). Corollary~\ref{CorNAIndOfPM} (item~\pref{ItemThmCAnaPM} above) shows moreover that $\mathcal A_\na(\Lambda^\pm)$ is indeed well-defined independent of the choice of nonzero power of $\phi$, depending only on the cyclic subgroup $\<\phi\>$ and the lamination pair~$\Lambda^\pm$.
\subparagraph{Notation:} The nonattracting subgroup system $\mathcal A_\na(\Lambda^\pm)$ depends not only on the lamination pair $\Lambda^\pm$ but also on the outer automorphism $\phi$ (up to nonzero powers). Often we emphasize this dependence by building $\phi$ into the notation for the lamination itself, writing $\Lambda^\pm_\phi$ and $\mathcal A_\na(\Lambda^\pm_\phi)$.
\paragraph{The set of nonattracted lines: Theorems G and~H.} Theorem~G, a vague statement of which was given in the introduction, is a detailed description of the set $\mathcal B_\na(\Lambda^+_\phi;\phi)$ of all lines $\gamma \in \mathcal B$ that are not attracted to $\Lambda^+_\phi$ under iteration by $\phi$. Theorem~H is a less technical and more easily applied distillation of Theorem~G, and is applied several times in \PartFour.
As stated in Lemma~\ref{LemmaThreeNASets}, there are three somewhat obvious subsets of $\mathcal B_\na(\Lambda^+_\phi;\phi)$. One is the subset $\mathcal B(\mathcal A_\na(\Lambda^{\pm}_\phi))$ of all lines supported by the nonattracting subgroup system $\mathcal A_\na(\Lambda^{\pm}_\phi)$. Another is the subset $\mathcal B_\gen(\phi^\inv)$ of all generic leaves of attracting laminations for $\phi^{-1}$. The third is the subset $\mathcal B_\sing(\phi^\inv)$ of all singular lines for $\phi^{-1}$: by definition these lines are the images under the quotient map $\widetilde\mathcal B \mapsto \mathcal B$ of those endpoint pairs $\{\xi,\eta\} \in \widetilde \mathcal B$ such that $\xi,\eta$ are each nonrepelling fixed points for the action of some automorphism representing~$\phi^\inv$.
In Definition~\ref{DefnConcatenation} we shall define an operation of ``ideal concatenation'' of lines: given a pair of lines which are asymptotic in one direction, they define a third line by concatenating at their common ideal point and straightening, or what is the same thing by connecting their opposite ideal points by a unique line.
Theorem~G should be thought of as stating that $\mathcal B_\na(\Lambda^+_\phi;\phi)$ is the smallest set of lines that contains $\mathcal B(\mathcal A_\na(\Lambda^{\pm}_\phi)) \union \mathcal B_\gen(\phi^\inv) \union \mathcal B_\sing(\phi^\inv)$ and is closed under this operation of ideal concatenation. It turns out that only a limited amount of such concatenation is possible, namely, extending a line of $\mathcal B(\mathcal A_\na(\Lambda^{\pm}_\phi))$ by concatenating on one or both ends with a line of $\mathcal B_\sing(\phi^\inv)$, producing a set of lines we denote $\mathcal B_\ext(\Lambda^\pm_\phi;\phi^\inv)$ (see Section~\ref{SectionNAConcatenation}).
\begin{TheoremG}[\textbf{Theorem \ref{ThmRevisedWAT}}] If $\phi, \phi^{-1} \in \mathsf{Out}(F_n)$ are rotationless and if $\Lambda^\pm_\phi \in \L^\pm(\phi)$ then $$\mathcal B_\na(\Lambda^+_\phi) = \mathcal B_\ext(\Lambda^\pm_\phi;\phi^\inv) \union \mathcal B_\gen(\phi^\inv) \union \mathcal B_\sing(\phi^\inv) $$ \end{TheoremG} Note that the first of the three terms in the union is the only one that depends on the lamination pair $\Lambda^\pm_\phi$; the other two depend only on $\phi^\inv$.
For certain purposes in \PartFour\ the following corollary to Theorem~G is useful in being easier to directly apply. In particular item~\pref{ThmHUnifWA} provides a topologically uniform version of weak attraction:
\begin{TheoremH}[\textbf{Corollary~\ref{CorOneWayOrTheOther}}] Given rotationless $\phi,\phi^\inv \in \mathsf{Out}(F_n)$ and a dual lamination pair $\Lambda^\pm \in \L^\pm(\phi)$, the following hold: \begin{enumerate} \item Any line $\ell \in \mathcal B$ that is not carried by $\mathcal A_\na(\Lambda^\pm)$ is weakly attracted either to $\Lambda^+$ by iteration of $\phi$ or to $\Lambda^-$ by iteration by $\phi^\inv$. \item \label{ThmHUnifWA} For any neighborhoods $V^+,V^- \subset \mathcal B$ of $\Lambda^+, \Lambda^-$, respectively, there exists an integer $m \ge 1$ such that for any line $\ell \in \mathcal B$ at least one of the following holds: \ $\gamma \in V^-$; \ $\phi^m(\gamma) \in V^+$; \ or $\gamma$ is carried by $\mathcal A_\na(\Lambda^\pm)$. \end{enumerate} \end{TheoremH}
\setcounter{tocdepth}{2} \tableofcontents
\section{The nonattracting subgroup system} \label{SectionWeakAttraction}
Consider a rotationless $\phi \in \mathsf{Out}(F_n)$ and a dual lamination pair $\Lambda^\pm_\phi \in \L^\pm(\phi)$. Since $\phi$ is rotationless its action on $\L(\phi)$ is the identity and therefore so is its action on $\L(\phi^\inv)$; the laminations $\Lambda^+_\phi$ and $\Lambda^-_\phi$ are therefore fixed by~$\phi$ and by~$\phi^\inv$. In this setting we shall define the \emph{nonattracting subgroup system} $\mathcal A_\na(\Lambda^\pm_\phi)$, an invariant of $\phi$ and $\Lambda^\pm_\phi$.
One can view the definition of $\mathcal A_\na(\Lambda^\pm_\phi)$ in two ways. First, in Definition~\ref{defn:Z}, we define $\mathcal A_\na(\Lambda^\pm_\phi)$ with respect to a choice of a CT\ representing $\phi$; this CT\ acts as a choice of ``coordinate system'' for $\phi$, and with this choice the description of $\mathcal A_\na(\Lambda^\pm_\phi)$ is very concrete. We derive properties of this definition in results to follow, from Proposition~\ref{PropVerySmallTree} to Corollary~\ref{CorNASubgroups}, including most importantly the proofs of items~\pref{ItemThmCAnaVGS}, \pref{ItemThmCAnaGeom} and~\pref{ItemThmCAnaIsAna} of Theorem~F. Then, in Corollaries~\ref{CorPMna} and~\ref{CorNAIndOfPM}, we prove that $\mathcal A_\na(\Lambda^\pm_\phi)$ is invariantly defined, independent of the choice of CT\, and furthermore independent of the choice of a positive or negative power of~$\phi$, in particular proving items~\pref{ItemThmFAnaWellDef} and~\pref{ItemThmCAnaPM} of Theorem~F. The independence result is what allows us to regard the nonattracting subgroup system as an invariant of a dual lamination pair rather than of each lamination individually (but still with implicit dependence on $\phi$ up to nonzero power).
\noindent \textbf{Weak attraction.} Recall (Section~\refGM{SectionLineDefs})\footnote{Cross references such as ``Section I.X.Y.Z'' refer to Section X.Y.Z of \PartOne. Cross references to the Introduction \Intro, to \PartOne, and to \PartTwo\ are to the June 2013 versions.} the notation $\mathcal B$ for the space of lines of~$F_n$ on which $\mathsf{Out}(F_n)$ acts naturally, and recall that that a line $\ell \in \mathcal B$ is said to be \emph{weakly attracted} to a generic leaf $\lambda \in \Lambda^+_\phi \subset \mathcal B$ under iteration by $\phi$ if the sequence $\phi^n(\ell)$ weakly converges to $\lambda$ as $n \to +\infinity$, that is, for each neighborhood $U \subset \mathcal B$ of $\lambda$ there exists an integer $N > 0$ such that if $n \ge N$ then $\phi^n(\ell) \in U$. Note that since any two generic leaves of $\Lambda^+_\phi$ have the same weak closure, namely $\Lambda^+_\phi$, this property is independent of the choice of~$\lambda$; for that reason we often speak of $\ell$ being weakly attracted to $\Lambda^+_\phi$ by iteration of $\phi$.
This definition of weak attraction applies to $\phi^\inv$ as well, and so we may speak of $\ell$ being weakly attracted to $\Lambda^-_\phi$ under iteration by $\phi^\inv$. This definition also applies to iteration of a CT\ $f \from G \to G$ representing $\phi$ on elements of the space $\wh\mathcal B(G)$ (Section~\refGM{def:DoubleSharp}), which contains the subspace $\mathcal B(G)$ identified with $\mathcal B$ by letting lines be realized in~$G$, and which also contains all finite paths and rays in~$G$. We may speak of such paths being weakly attracted to $\Lambda^+_\phi$ under iteration by $f$. Whenever $\phi$ and the $\pm$ sign are understood, as they are in the notations $\Lambda^+_\phi$ and $\Lambda^-_\phi$, we tend to drop the phrase ``under iteration by \ldots''.
\begin{remark} Suppose that $\phi$ is a rotationless iterate of some possibly nonrotationless $\eta \in \mathsf{Out}(F_n)$ and that $\Lambda^+_\phi$ is $\eta$-invariant. Then $\gamma$ is weakly attracted to $\Lambda^+_\phi$ under iteration by $\eta$ if and only if $\gamma$ is weakly attracted to $\Lambda^+_\phi $ under iteration by $\phi$. Our results therefore apply to $\eta$ as well as $\phi$. \end{remark}
\subsection{The nonattracting subgroup system $\mathcal A_{\na}(\Lambda^+_\phi)$} \label{SectionAsubNA}
The Weak Attraction Theorem 6.0.1 of \BookOne\ answers the ``general weak attraction question'' posed above in the restricted setting of a lamination pair $\Lambda^\pm_\phi$ which is topmost with respect to inclusion, and under restriction to birecurrent lines only. The answer is expressed in terms of an ``improved relative train track representative'' $g \from G \to G$, a ``nonattracting subgraph'' $Z \subset G$, a (possibly trivial) Nielsen path $\hat \rho_r$, and an associated set of paths denoted $\<Z,\hat \rho_r\>$. The construction and properties of $Z$ and $\<Z,\hat \rho_r\>$ are given in \cite[Proposition~6.0.4]{BFH:TitsOne}.
In Definition~\ref{defn:Z} and the lemmas that follow, we generalize the subgraph $Z$, the path set $\<Z,\hat \rho_r\>$, and the nonattracting subgroup system beyond the topmost setting.
\textbf{Notation:} Throughout Subsections~\ref{SectionAsubNA} and~\ref{SectionAppsAndPropsANA} we fix a rotationless $\phi \in \mathsf{Out}(F_n)$, a lamination pair~$\Lambda^\pm_\phi \in \L^\pm(\phi)$, and a CT\ representative $f \from G \to G$ with EG\ stratum $H_r$ corresponding to $\Lambda^+_\phi$. For a review of CTs, completely split paths, and the terms of a complete splitting, we refer the reader to Section~\refGM{SectionRTTDefs}, particularly Definition~\refGM{DefCompleteSplitting}.
For the nonattracting subgroup system we shall use various notations in various contexts. The notation $\mathcal A_\na(\Lambda^+_\phi)$ is used from the start, presuming immediately what we shall eventually show in Corollary~\ref{CorNAWellDefined} regarding its independence from the choice of a CT\ representative, but leaving open for a while the issue of whether it depends on the choice of $\pm$ sign. After the latter independence is established in Corollary~\ref{CorNAIndOfPM} we will switch over to the notation $\mathcal A_\na(\Lambda^\pm_\phi)$. When we wish to emphasize dependence on $\phi$ we sometimes use the notation $\mathcal A_\na(\Lambda^\pm;\phi)$ or $\mathcal A_\na(\Lambda^+;\phi)$; and when we wish to de-emphasize this dependence we sometimes use $\mathcal A_\na(\Lambda^\pm)$ or $\mathcal A_\na(\Lambda^+)$.
\begin{defns} \label{defn:Z} \textbf{The graph $Z$, the path $\hat \rho_r$, the path set $\<Z,\hat \rho_r\>$, and the subgroup system $\mathcal A_\na(\Lambda^+_\phi)$.} \quad
\noindent We shall define the \emph{nonattracting subgraph} $Z$ of $G$, and a path $\hat\rho_r$, either a trivial path or a height~$r$ indivisible Nielsen path if one exists. Using these we shall define a graph $K$ and an immersion $K \to G$ by consistently gluing together the graph $Z$ and the domain of $\hat\rho_r$. We then define $\mathcal A_\na(\Lambda^+_\phi)$ in terms of the induced \hbox{$\pi_1$-injection} on each component of~$K$. We also define a groupoid of paths $\<Z,\hat\rho_r\>$ in~$G$, consisting of all concatenations whose terms are edges of $Z$ and copies of the path~$\hat\rho_r$ or its inverse, equivalently all paths in $G$ that are images under the immersion $K \to G$ of paths in~$K$.
\textbf{Definition of the graph $Z$.} The \emph{nonattracting subgraph} $Z$ of $G$ is defined as a union of certain strata $H_i \ne H_r$ of $G$, as follows. If $H_i$ is an irreducible stratum then $H_i \subset Z$ if and only if no edge of $H_i$ is weakly attracted to $\Lambda$; equivalently, using Fact~\trefGM{FactAttractingLeaves}{ItemSplitAnEdge}, we have $H_i \subset G \setminus Z$ if and only if for some (every) edge $E_i$ of $H_i$ there exists $k \ge 0$ so that some term in the complete splitting of $f^k_\#(E_i)$ is an edge in~$H_r$. If $H_i$ is a zero stratum enveloped by an EG\ stratum $H_s$ then $H_i \subset Z$ if and only if $H_s \subset Z$.
\textbf{Remark.} $Z$ automatically contains every stratum $H_i$ which is a fixed edge, an \neg-linear edge, or an EG\ stratum distinct from $H_r$ for which there exists an indivisible Nielsen path of height $i$. For a fixed edge this is obvious. If $H_i$ is an \neg-linear edge $E_i$ then this follows from (Linear Edges) which says that $f(E_i) = E_i \cdot u$ where $u$ is a closed Nielsen path, because for all $k \ge 1$ it follows that the path $f^k_\#(E_i)$ completely splits as $E_i$ followed by Nielsen paths of height $<i$, and no edges of $H_r$ occur in this splitting. For an EG\ stratum $H_i$ with an indivisible Nielsen path of height $i$ this follows from Fact~\trefGM{FactNielsenBottommost}{ItemBottommostEdges} which says that for each edge $E \subset H_i$ and each $k \ge 1$, the path $f^k_\#(E)$ completely splits into edges of $H_i$ and Nielsen paths of height $< i$; again no edges of $H_r$ occur in this splitting.
\textbf{Remark.} Suppose that $H_i$ is a zero stratum enveloped by the EG\ stratum $H_s$ and that $H_i \subset Z$. Applying the definition of $Z$ to $H_i$ it follows that $H_s \subset Z$. Applying the definition of $Z$ to $H_s$ it follows that no $s$-taken connecting path in $H_i$ is weakly attracted to $\Lambda^+_\phi$. Applying (Zero Strata) it follows that no edge in $H_i$ is weakly attracted to $\Lambda$.
\textbf{Definition of the path $\hat \rho_r$.} If there is an \iNp\ $\rho_r$ of height $r$ then it is unique up to reversal by Fact~\refGM{FactEGNPUniqueness} and we define $\hat \rho_r = \rho_r$. Otherwise, by convention we choose a vertex of~$H_r$ and define $\hat \rho_r$ to be the trivial path at that vertex.
\textbf{Definition of the path set $\<Z,\hat \rho_r\>$.} Consider $\wh\mathcal B(G)$, the set of lines, rays, circuits, and finite paths in $G$ (Definition~\refGM{SectionLineDefs}). Define the subset $\<Z,\hat \rho_r\> \subset \wh\mathcal B(G)$ to consist of all elements which decompose into a concatenation of subpaths each of which is either an edge in $Z$, the path $\hat \rho_r$, or its inverse $\hat \rho_r^{-1}$. Given $\ell \in \wh\mathcal B(G)$, if $\ell \in \<Z,\hat\rho_r\>$ we will also say that $\ell$ is \emph{carried by} $\<Z,\hat\rho_r\>$; see e.g.\ Lemma~\ref{LemmaZPClosed}~\pref{item:ZP=NA}.
\textbf{Definition of the subgroup system $\mathcal A_\na(\Lambda^+_\phi)$.} If $\hat\rho_r$ is the trivial path, let $K = Z$ and let $h \from K \inject G$ be the inclusion. Otherwise, define $K$ to be the graph obtained from the disjoint union of $Z$ and an edge $E_\rho$ representing the domain of the Nielsen path $\rho_r \from E_\rho \to G_r$, with identifications as follows: given an endpoint $x \in E_\rho$, if $\rho_r(x) \in Z$ then identify $x \sim \rho_r(x)$; also, given distinct endpoints $x,y \in E_\rho$, if $\rho_r(x)=\rho_r(y)$ then identify $x \sim y$ (these points have already been identified if $\rho_r(x)=\rho_r(y) \in Z$). Define $h \from K \to G$ to be the map induced by the inclusion $Z \inject G$ and by the map $\rho_r \from E_\rho \to G$. By Fact~\refGM{FactEGNPUniqueness} the initial oriented edges of $\rho_r$ and $\bar\rho_r$ are distinct in $H_r$, and since no edge of $H_r$ is in $Z$ it follows that the map $h$ is an immersion. The restriction of $h$ to each component of $K$ therefore induces an injection on the level of fundamental groups. Define $\mathcal A_\na(\Lambda^+_\phi)$, the \emph{nonattracting subgroup system}, to be the subgroup system determined by the images of the fundamental group injections induced by the immersion $h \from K \to G$, over all noncontractible components of $K$.
\textbf{Remark: The case of a top stratum.} In the special case that $H_r$ is the top stratum of $G$, there is a useful formula for $\mathcal A_\na(\Lambda^+_\phi)$ which is obtained by considering three subcases. First, when $\hat\rho_r$ is trivial we have $K=Z=G_{r-1}$. Second is the geometric case, where $\hat\rho_r$ is a closed Nielsen path whose endpoint is an interior point of $H_r$ (Fact~\trefGM{FactEGNielsenCrossings}{ItemEGNielsenPointInterior}), and so the graph $K$ is the disjoint union of $Z=G_{r-1}$ with a loop mapping to $\rho_r$. Third is the ``parageometric'' case, where $\hat\rho_r$ is a nonclosed Nielsen path having at least one endpoint which is an interior point of $H_r$ (Fact~\trefGM{FactEGNielsenCrossings}{ItemEGParageometricOneInterior}), and so $K$ is obtained by attaching an arc to $Z=G_{r-1}$ by identifying at most one endpoint of the arc to $G_{r-1}$; note in this case that the union of noncontractible components of $K$ deformation retracts to the union of noncontractible components of $G_{r-1}$. From this we obtain the following formula: $$\mathcal A_\na(\Lambda^+_\phi) = \begin{cases} [\pi_1 G_{r-1}] & \quad\text{if $\Lambda^+_\phi$ and $H_r$ are nongeometric} \\ [\pi_1 G_{r-1}] \union \{[\<\rho_r\>]\} &\quad\text{if $\Lambda^+_\phi$ and $H_r$ are geometric} \end{cases} $$ where in the geometric case $[\<\rho_r\>]$ denotes the conjugacy class of the infinite cyclic subgroup generated by an element of $F_n$ represented by the closed Nielsen path $\rho_r$.
This completes Definitions~\ref{defn:Z}. \end{defns}
\begin{remark} \label{RemarkGeometricK} In the special case that the stratum $H_r$ is geometric, the 1-complex $K$ lives naturally as an embedded subcomplex of the geometric model $X$ for $H_r$ (Definition~\refGM{DefGeomModel}), as follows. By item~\prefGM{ItemInteriorBasePoint} of that definition, we may identify $K$ with the subcomplex $Z \union j(\bdy_0 S) \subset G \union j(\bdy_0 S) \subset X$ in such a way that the immersion $K \to G$ is identified with the restriction to $K$ of the deformation retraction $d \from X \to G$. The subgroup system $\mathcal A_\na(\Lambda^+_\phi) = [\pi_1 K]$ may therefore be described as the conjugacy classes of the images of the inclusion induced injections $\pi_1 K_i \to \pi_1 X \approx F_n$, over all noncontractible components $K_i \subset X$. Noting that $j \from S \to X$ maps each boundary component $\bdy_1 S,\ldots,\bdy_m S$ to $G_{r-1} \subset Z \subset K$ and maps $\bdy_0 S$ to $j(\bdy_0 S) \subset K$, we have $j(\bdy S) \subset K$. It follows in the geometric case that Proposition~\refGM{PropGeomVertGrSys} applies to $[\pi_1 K]$, the conclusion of which will be used in the proof of the following proposition. \end{remark}
Recall the characterization of geometricity of $\Lambda^+_\phi$ given in Proposition \refGM{PropGeomEquiv}, expressed in terms of the free factor support of the boundary components of $S$. Our next result, among other things, gives a different characterization of geometricity of a lamination $\Lambda^+_\phi \in \L(\phi)$, expressed in terms of the nonattracting subgroup system $\mathcal A_\na(\Lambda^+_\phi)$.
\begin{proposition}[Properties of the nonattracting subgroup system] \label{PropVerySmallTree} Given a CT\ $f \from G \to G$ representing $\phi$ with EG\ stratum $H_r$ corresponding to $\Lambda^+_\phi$, the subgroup system $\mathcal A_\na(\Lambda^+_\phi)$ satisfies the following: \begin{enumerate} \item \label{ItemA_naNP} $\mathcal A_\na(\Lambda^+_\phi)$ is a vertex group system. \item \label{ItemA_naNoNP} $\mathcal A_\na(\Lambda^+_\phi)$ is a free factor system if and only if the stratum $H_r$ is not geometric. \item \label{ItemA_naMalnormal} $\mathcal A_\na(\Lambda^+_\phi)$ is malnormal, with one component for each noncontractible component of~$K$. \end{enumerate} \end{proposition}
\begin{proof} First we show that any subgroup $A$ for which $[A] \in \mathcal A_\na(\Lambda^+_\phi)$ is nontrivial and proper, as required for a vertex group system. Nontriviality follows because only noncontractible components of $K$ are used. To prove properness: if $\hat\rho_r$ is trivial then any circuit containing an edge of $H_r$ is not carried by $\mathcal A_\na(\Lambda^+_\phi)$; if $\hat\rho_r = \rho_r$ is nontrivial then any circuit containing an edge of $H_r$ but not containing $\rho_r$ is not carried by $\mathcal A_\na(\Lambda^+_\phi)$.
We adopt the notation of Definition~\ref{defn:Z}. By applying Fact~\refGM{FactEGNielsenCrossings} and Proposition~\refGM{PropGeomEquiv}, when $\Lambda^+_\phi$ is not geometric then $\hat\rho_r$ is either trivial or a nonclosed Nielsen path, and when $\Lambda^+_\phi$ is geometric then $\hat\rho_r$ is a closed Nielsen path. We prove \pref{ItemA_naNP}--\pref{ItemA_naMalnormal} by considering these three cases of $\hat\rho_r$ separately.
\textbf{Case 1: $\hat\rho_r$ is trivial.} In this case $K=Z$, and $\mathcal A_\na(\Lambda^+_\phi)$ is the free factor system associated to the subgraph $Z \subset G$. Item~\pref{ItemA_naMalnormal} follows immediately.
\textbf{Case 2: $\hat\rho_r = \rho_r$ is a nonclosed Nielsen path.} We prove that $\mathcal A_\na(\Lambda^+_\phi)$ is a free factor system following an argument of \BookOne\ Lemma~5.1.7. By Fact~\trefGM{FactEGNielsenCrossings}{ItemEGNielsenNotClosed} there is an edge $E \subset H_r$ that is crossed exactly once by $\rho_r$. We may decompose $\rho_r$ into a concatenation of subpaths $\rho_r = \sigma E \tau$ where $\sigma,\tau$ are paths in $G_r \setminus \Int(E)$. Let $\wh G$ be the graph obtained from $G \setminus \Int(E)$ by attaching an edge $J$, letting the initial and terminal endpoints of $J$ be equal to the initial and terminal endpoints of~$\rho_r$, respectively. The identity map on $G \setminus \Int(E)$ extends to a map $h \from \wh G \to G$ that takes the edge $J$ to the path $\rho_r$, and to a homotopy inverse $\bar h \from G \to \wh G$ that takes the edge $E$ to the path~$\bar\sigma J \bar\tau$. We may therefore view $\wh G$ as a marked graph, pulling the marking on $G$ back via $h$. Notice that $K$ may be identified with the subgraph $Z \union J \subset \wh G$, in such a way that the map $h \from \wh G \to G$ is an extension of the map $h \from K \to G$ as originally defined. It follows that $\mathcal A_\na(\Lambda^+_\phi)$ is the free factor system associated to the subgraph $Z \union J$.
In this case, as in Case~1, item~\pref{ItemA_naMalnormal} follows immediately because of the identification of $K$ with a subgraph of the marked graph $\wh G$.
\textbf{Case 3: $\hat\rho_r = \rho_r$ is a closed Nielsen path.} In this case $H_r$ is geometric. Adopting the notation of the geometric model $X$ for $H_r$, Definition~\refGM{DefGeomModel}, by Remark~\ref{RemarkGeometricK} we have $\mathcal A_\na(\Lambda^+_\phi)=[\pi_1 K]$ for a subgraph $K \subset L$ containing $j(\bdy S)$. Applying Proposition~\refGM{PropGeomVertGrSys} it follows that $[\pi_1 K]$ is a vertex group system.
If $\mathcal A_\na(\Lambda^+_\phi) = [\pi_1 K]$ were a free factor system then, since each of the conjugacy classes $[\bdy_0 S],\ldots,[\bdy_m S]$ is supported by $[\pi_1 K]$, it would follow by Proposition~\trefGM{PropGeomEquiv}{ItemBoundarySupportCarriesLambda} that $[\pi_1 S] \sqsubset [\pi_1 K]$. However, since $S$ supports a pseudo-Anosov mapping class, it follows that $S$ contains a simple closed curve $c$ not homotopic to a curve in $\bdy S$. By Lemma~\trefGM{LemmaLImmersed}{ItemSeparationOfSAndL} we have $[j(c)] \not\in [\pi_1 K]$ while $[j(c)] \in [\pi_1 S]$. This is a contradiction and so $\mathcal A_\na(\Lambda^+_\phi)$ is not a free factor system.
In this case, item~\pref{ItemA_naMalnormal} is a consequence of Lemma~\trefGM{LemmaLImmersed}{ItemComplementMalnormal}. \end{proof}
Item~\pref{Item:Groupoid} in the next lemma states that $\<Z,\hat \rho_r\>$ is a groupoid, by which we mean that the tightened concatenation of any two paths in $\<Z,\hat \rho_r\>$ is also a path in $\<Z,\hat \rho_r\>$ as long as that concatenation is defined. For example, the concatenation of two distinct rays in $\<Z,\hat \rho_r\>$ with the same base point tightens to a line in $\<Z,\hat \rho_r\>$.
\begin{lemma} \label{LemmaZPClosed} Assuming the notation of Definitions~\ref{defn:Z}, \begin{enumerate} \item \label{Item:BEQuivZP} The map $h$ induces a bijection between $\wh\mathcal B(K)$ and $\<Z,\hat \rho_r\>$. \item \label{Item:Groupoid} $\<Z,\hat \rho_r\>$ is a groupoid. \item \label{item:ZP=NA} The set of lines in $G$ carried by $\<Z,\hat \rho_r\>$ is the same as the set of lines carried by $\mathcal A_{\na}(\Lambda^+_\phi)$. The set of rays in $G$ asymptotically equivalent to a ray carried by $\<Z,\hat\rho_r\>$ is the same as the set of rays carried by $\mathcal A_\na(\Lambda^+_\phi)$. \item\label{Item:circuits} The set of circuits in $G$ carried by $\<Z,\hat \rho_r\>$ is the same as the set of circuits carried by $\mathcal A_{\na}(\Lambda^+_\phi)$. \item \label{item:closed} The set of lines carried by $\<Z,\hat \rho_r\>$ is closed in the weak topology.
\item\label{item:UniqueLift} If $[A_1], [A_2] \in \mathcal A_{\na}(\Lambda^+_\phi)$ and if $A_1 \ne A_2$ then $A_1 \cap A_2 = \{1\}$ and $\partial A_1 \cap \partial A_2 = \emptyset$. \end{enumerate} \end{lemma}
\begin{proof} We make use of four evident properties of the immersion $h \from K \to G$. The first is that every path in $K$ with endpoints, if any, at vertices is mapped by $h$ to an element of $\<Z,\hat \rho_r\>$. The second is that $h$ induces a bijection between the vertex sets of $K$ and of $Z \cup \partial \hat \rho_r$. The third is that for each edge $E$ of $Z$ there is a unique edge of $K$ that projects to $E$, and no other path in $K$ with at least one endpoint at a vertex projects to~$E$. The last is that if $\hat \rho_r$ is non-trivial then it has a unique lift to $K$ (because its unique illegal turn of height $r$ does). Together these imply~\pref{Item:BEQuivZP} which immediately implies \pref{Item:Groupoid}.
Let $K_1,\ldots,K_J$ be the cores of the noncontractible components of $K$. Let $\ti h_j \from \widetilde K_j \inject \widetilde G$ be a lift of $h \restrict K_j$ to an embedding of universal covers. By Definition~\ref{defn:Z} we have $\mathcal A_\na(\Lambda^+_\phi) = \{[A_1],\ldots,[A_J]\}$ where $\widetilde K_j$ is the minimal subtree of the action of $A_j$ on $\widetilde G$, and where $\bdy \widetilde K_j = \bdy A_j \subset \bdy\widetilde G = \bdy F_n$. Given a line $\ell \in \mathcal B(G)$, the first sentence of item~\pref{item:ZP=NA} follows from the fact that a line is carried by $\mathcal A_\na(\Lambda^+_\phi)$ if and only if $\ell$ lifts to a line with both endpoints in some $\bdy A_j$, if and only if $\ell$ lifts via some $\ti h_j$ to $\widetilde K_j$, if and only if $\ell$ lifts via $h$ to some $K_j$, if and only if $\ell \in \<Z,\hat\rho_r\>$. Given a ray $\rho \in \wh\mathcal B(G)$, the second sentence of item~\pref{item:ZP=NA} follows from the fact that the ray is carried by $\mathcal A_\na(\Lambda^+_\phi)$ if and only if $\rho$ lifts to a ray with end in some $\bdy A_j$, if and only if some subray of $\rho$ lifts via some $\ti h_j$ to $\widetilde K_j$, if and only if some subray of $\rho$ lifts via $h$ to some $K_j$, if and only if some subray of $\rho$ is in $\<Z,\hat\rho_r\>$.
Item~\pref{Item:circuits} follows from \pref{item:ZP=NA} using the natural bijection between periodic lines and circuits. Item~\pref{item:closed} follows from~\pref{Item:BEQuivZP} and Fact~\refGM{FactLinesClosed}. Item \pref{item:UniqueLift} follows from Proposition~\ref{PropVerySmallTree}~\pref{ItemA_naMalnormal} and Fact~\refGM{FactBoundaries}.
\end{proof}
The following lemma is based on Proposition~6.0.4 and Corollary~6.0.7 of \BookOne.
\begin{lemma} \label{defining Z} Assuming the notation of Definitions~\ref{defn:Z}, we have: \begin{enumerate} \item\label{ItemZPEdgesInv} If $E$ is an edge of $Z$ then $f_\#(E) \in \langle Z,\hat \rho_r \rangle $. \item\label{ItemZPPathsInv} $\<Z,\hat \rho_r\>$ is $f_\#$-invariant. \item\label{ItemZPAnyPaths} If $\sigma \in\<Z,\hat \rho_r\>$ then $\sigma$ is not weakly attracted to $\Lambda^+_\phi$. \item\label{ItemZPFinitePaths} For any finite path $\sigma$ in $G$ with endpoints at fixed vertices, the converse to \pref{ItemZPAnyPaths} holds: if $\sigma$ is not weakly attracted to $\Lambda$ then $\sigma \in \<Z,\hat \rho_r\>$. \item \label{ItemBijection} $f_\#$ restricts to bijections of the following sets: lines in $\<Z,\hat \rho_r\>$; finite paths in $\<Z,\hat \rho_r\>$ whose endpoints are fixed by $f$; and circuits in $\<Z, \hat\rho_r\>$. \end{enumerate} \end{lemma}
\begin{proof} In this proof we shall freely use that $\<Z,\hat\rho_r\>$ is a groupoid, Lemma~\ref{LemmaZPClosed}~\pref{Item:Groupoid}.
$\<Z,\hat \rho_r\>$ contains each fixed or linear edge by construction. Given an indivisible Nielsen path $\rho_i$ of height $i$, we prove by induction on $i$ that $\rho_i$ is in $\<Z,\hat\rho_r\>$. If $H_i$ is \neg\ this follows from (\neg\ Nielsen Paths) and the induction hypothesis. If $H_i$ is EG\ then Fact~\trefGM{FactNielsenBottommost}{ItemBottommostEdges} applies to show that $H_i \subset Z$; combining this with Fact~\trefGM{FactNielsenBottommost}{ItemBottommostEdges} again and with the induction hypothesis we conclude that $\rho_i \in \<Z,\hat\rho_r\>$.
Since all indivisible Nielsen paths and all fixed edges are contained in $\<Z,\hat\rho_r\>$, it follows that all Nielsen paths are contained in $\<Z,\hat\rho_r\>$, which immediately implies that $\<Z,\hat \rho_r\>$ contains all exceptional paths.
Suppose that $\tau = \tau_1 \cdot \ldots \cdot \tau_m$ is a complete splitting of a finite path that is not contained in a zero stratum. Each $\tau_i$ is either an edge in an irreducible stratum, a taken connecting path in a zero stratum, or, by the previous paragraph, a term which is not weakly attracted to $\Lambda$ and which is contained in $\<Z,\hat\rho_r\>$. If $\tau_i$ is a taken connecting path in a zero stratum $H_t$ that is enveloped by an EG\ stratum $H_s$ then, by definition of complete splitting, $\tau_i$ is a maximal subpath of $\tau$ in $H_t$; since $\tau \not\subset H_t$ it follows that $m \ge 2$, and by applying (Zero Strata) it follows that at least one other term $\tau_j$ is an edge in $H_s$. In conjunction with the second Remark in Definitions~\ref{defn:Z}, this proves that $\tau$ is contained in $\<Z,\hat \rho_r\>$ if and only if each $\tau_i$ that is an edge in an irreducible stratum is contained in $Z$ if and only if $\tau$ is not weakly attracted to $\Lambda^+_\phi$.
We apply this in two ways. First, this proves item \pref{ItemZPFinitePaths} in the case that $\sigma$ is completely split. Second, applying this to $\tau = f_\#(E)$ where $E$ is an edge in $Z$, item \pref{ItemZPEdgesInv} follows in the case that $f_\#(E)$ is not contained in any zero stratum. Consider the remaining case that $\tau=f_\#(E)$ is contained in a zero stratum $H_t$ enveloped by the EG\ stratum $H_s$. By definition of complete splitting, $\tau=\tau_1$ is a taken connecting path. By Fact~\refGM{FactEdgeToZeroConnector} the edge $E$ is contained in some zero stratum $H_{t'}$ enveloped by the same EG\ stratum $H_s$. Since $E \subset Z$, it follows that $H_s \subset Z$, and so $H^z_s \subset Z$, and so $\tau \subset Z$, proving \pref{ItemZPEdgesInv}.
Item \pref{ItemZPPathsInv} follows from item \pref{ItemZPEdgesInv}, the fact that $f_\#(\hat \rho_r) = \hat \rho_r$ and the fact that $\<Z,\hat \rho_r\>$ is a groupoid.
Every generic leaf of $\Lambda^+_\phi$ contains subpaths in $H_r$ that are not subpaths of $\hat \rho_r$ or $\hat \rho_r^{-1}$ and hence not subpaths in any element of $\<Z,\hat \rho_r\>$. Item \pref{ItemZPAnyPaths} therefore follows from item \pref{ItemZPPathsInv}.
To prove \pref{ItemBijection}, for lines and finite paths the implication $\pref{ItemZPPathsInv} \Rightarrow \pref{ItemBijection}$ follows from Corollary~6.0.7 of \BookOne. For circuits, use the natural bijection between circuits and periodic lines, noting that this bijection preserves membership in $\<Z,\hat\rho_r\>$.
It remains to prove \pref{ItemZPFinitePaths}. By \pref{ItemBijection}, there is no loss of generality in replacing $\sigma$ with $f^k_\#(\sigma)$ for any $k \ge 1$. By Fact~\refGM{FactEvComplSplit} this reduces \pref{ItemZPFinitePaths} to the case that $\sigma$ is completely split which we have already proved. \end{proof}
\subsection{Applications and properties of the nonattracting subgroup system.} \label{SectionAppsAndPropsANA} We now show that the nonattracting subgroup system $\mathcal A_\na(\Lambda^+_\phi)$ deserves its name.
\begin{corollary}\label{CorNAWellDefined} For any rotationless $\phi \in \mathsf{Out}(F_n)$ and $\Lambda^+_\phi \in \L(\phi)$, a conjugacy class $[a]$ in $F_n$ is not weakly attracted to $\Lambda^+_\phi$ if and only if it is carried by $\mathcal A_{\na}(\Lambda^+_\phi)$. \end{corollary}
\begin{proof} Let $f \from G \to G$ be a CT\ representing a rotationless $\phi \in \mathsf{Out}(F_n)$ and assume the notation of Definitions~\ref{defn:Z}. By Lemma~\ref{LemmaZPClosed}~\pref{Item:circuits}, it suffices to show that a circuit $\sigma$ in $G$ is not weakly attracted to $\Lambda^+_\phi$ under iteration by $f_\#$ if and only if it is carried by $\<Z,\hat\rho_r\>$. Both the set of circuits in $\<Z,\hat\rho_r\>$ and the set of circuits that are not weakly attracted to $\Lambda^+_\phi$ are $f_\#$-invariant. We may therefore replace $\sigma$ with any $f^k_\#(\sigma)$ and hence may assume that $\sigma$ is completely split. After taking a further iterate, we may assume that some coarsening of the complete splitting of $\sigma$ is a splitting into subpaths whose endpoints are fixed by~$f$. Lemma~\ref{defining Z}~(\ref{ItemZPFinitePaths}) completes the proof. \end{proof}
\begin{corollary}\label{CorNASubgroups} For any rotationless $\phi \in \mathsf{Out}(F_n)$ and $\Lambda^+_\phi \in \L(\phi)$, and for any finite rank subgroup $B \subgroup F_n$, if each conjugacy class in $B$ is not weakly attracted to $\Lambda^+_\phi$ then there exists a subgroup $A \subgroup F_n$ such that $B \subgroup A$ and $[A] \in \mathcal A_\na(\Lambda^+_\phi)$. \end{corollary}
\begin{proof} By Corollary~\ref{CorNAWellDefined} the conjugacy class of every nontrivial element of $B$ is carried by the subgroup system $\mathcal A_\na(\Lambda^+_\phi)$ which, by Proposition~\ref{PropVerySmallTree}~\pref{ItemA_naNP}, is a vertex group system. Applying Lemma~\refGM{LemmaVSElliptics}, the conclusion follows. \end{proof}
Using Corollary~\ref{CorNASubgroups} we can now prove some useful invariance properties of $\mathcal A_\na(\Lambda^+_\phi)$, for instance that $\mathcal A_\na(\Lambda^+_\phi)$ is an invariant of $\phi$ and~$\Lambda^+_\phi$ alone, independent of the choice of CT\ representing~$\phi$.
\begin{corollary} \label{CorPMna} For any rotationless $\phi \in \mathsf{Out}(F_n)$ and any lamination $\Lambda^+_\phi \in \L(\phi)$ we have: \begin{enumerate} \item \label{ItemAnaCharacterization} The nonattracting subgroup system $\mathcal A_\na(\Lambda^+_\phi)$ is the unique vertex group system such that the conjugacy classes it carries are precisely those which are not weakly attracted to $\Lambda^+_\phi$ under iteration of $\phi$. \item \label{ItemAnaDependence} $\mathcal A_\na(\Lambda^+_\phi)$ depends only on $\phi$ and $\Lambda^+_\phi$, not on the choice of a CT\ representing $\phi$. \item \label{ItemAnaNaturality} The dependence in \pref{ItemAnaDependence} is natural in the sense that if $\theta \in \mathsf{Out}(F_n)$ then $\theta(\mathcal A_\na(\Lambda^+_\phi)) = \mathcal A_\na(\Lambda^+_{\theta\phi\theta^\inv})$ where $\Lambda^+_{\theta\phi\theta^\inv}$ is the image of $\Lambda^+_\phi$ under the bijection $\L(\phi) \mapsto \L(\theta\phi\theta^\inv)$ induced by $\theta$. \end{enumerate} \end{corollary}
\begin{proof}
By Proposition~\ref{PropVerySmallTree}~\pref{ItemA_naNP}, $\mathcal A_\na(\Lambda^+_\phi)$ is a vertex group system. By Lemma~\refGM{LemmaVSElliptics}, $\mathcal A_\na(\Lambda^+_\phi)$ is determined by the set of conjugacy classes of elements of $F_n$ that are carried by $\mathcal A_\na(\Lambda^+_\phi)$, and by Corollary~\ref{CorNAWellDefined} these conjugacy classes are determined by $\phi$ and $\Lambda^+_\phi$ alone, independent of choice of a CT\ representing $\phi$, namely they are the conjugacy classes not weakly attracted to $\Lambda^+_\phi$ under iteration of~$\phi$. This proves~\pref{ItemAnaCharacterization} and~\pref{ItemAnaDependence}. Item~\pref{ItemAnaNaturality} follows by choosing any CT\ $f \from G \to G$ representing $\phi$ and changing the marking on $G$ by the conjugator $\theta$ to get a CT\ representing $\theta\phi\theta^\inv$. \end{proof}
The following shows that $\mathcal A_\na(\Lambda^+_\phi)$ is invariant not only under change of CT, but also under inversion of $\phi$ and replacement of $\Lambda^+_\phi$ with its dual lamination.
\begin{corollary}\label{CorNAIndOfPM} Given $\phi, \phi^\inv \in \mathsf{Out}(F_n)$ both rotationless, and given a dual lamination pair $\Lambda^+ \in \L(\phi)$, $\Lambda^-\in \L(\phi^\inv)$, we have $\mathcal A_\na(\Lambda^+;\phi) = \mathcal A_\na(\Lambda^-;\phi^\inv)$. \end{corollary}
\subparagraph{Notational remark.} Based on Corollary~\ref{CorNAIndOfPM}, we introduce the notation $\mathcal A_\na(\Lambda_\phi^\pm)$ for the vertex group system $\mathcal A_\na(\Lambda^+;\phi) = \mathcal A_\na(\Lambda^-;\phi^\inv)$.
\begin{proof} For each nontrivial conjugacy class $[a]$ in $F_n$, we must prove that $[a]$ is weakly attracted to $\Lambda^+$ under iteration by $\phi$ if and only if $[a]$ is weakly attracted to $\Lambda^-$ under iteration by~$\phi^\inv$. Replacing $\phi$ with $\phi^\inv$ it suffices to prove the ``if'' direction. Applying Theorem~\refGM{TheoremCTExistence}, choose a CT\ $\fG$ representing $\phi$ having a core filtration element $G_r$ such that $[G_r] = \mathcal F_\supp(\Lambda^+)$, and so $H_r \subset G$ is the EG\ stratum corresponding to $\Lambda^+$. We adopt the notation of Definitions~\ref{defn:Z}.
Suppose that $[a]$ is not weakly attracted to $\Lambda^+$ under iteration by $\phi$. Then the same is true for all $\phi^{-k}([a])$ and so $\phi^{-k}([a]) \in \<Z,\hat\rho_r\>$ for all $k \ge 0$ by Corollary~\ref{CorNAWellDefined}.
Arguing by contradiction, suppose in addition that $[a]$ is weakly attracted to $\Lambda^-$ under iteration by $\phi^\inv$. Applying Lemma~\ref{LemmaZPClosed}~\pref{item:closed} it follows that a generic line $\gamma$ of $\Lambda^-$ is contained in $\<Z,\hat\rho_r\>$. However, since $\mathcal F_\supp(\gamma) = \mathcal F_\supp(\Lambda^-) = [G_r]$, it follows that $\gamma$ has height $r$. If $\hat\rho_r$ is trivial then $\gamma$ is a concatenation of edges of $Z$ none of which has height~$r$, a contradiction. If $\hat\rho_r = \rho_r$ is nontrivial then all occurrences of edges of $H_r$ in $\gamma$ are contained in a pairwise disjoint collection of subpaths each of which is an iterate of $\rho_r$ or its inverse. By Fact~\refGM{FactEGNielsenCrossings}, at least one endpoint of $\rho_r$ is not contained in $G_{r-1}$. If $\rho_r$ is not closed then we obtain an immediate contradiction. If $\rho_r$ is closed then $\gamma$ is a bi-infinite iterate of $\rho_r$, but this contradicts \BookOne\ Lemma~3.1.16 which says that no generic leaf of $\Lambda^+_\psi$ is periodic. \end{proof}
\subsection{Weak convergence and malnormal subgroup systems.} We conclude Section~\ref{SectionWeakAttraction} with a result which generalizes Fact~\refGM{FactWeakLimitLines} from free factor systems to malnormal subgroup systems. This lemma could have been proved back in Part~1, but the issue first arises here because of applications to $\mathcal A_\na(\Lambda^+_\phi)$ (see the proof of Corollary~\ref{CorOneWayOrTheOther}~\pref{ItemUniformOWOTO}), malnormality of which is proved in Proposition~\ref{PropVerySmallTree}.
In this final subsection of Section~\ref{SectionWeakAttraction}, $G$ denotes an arbitrary marked graph. Given a subgroup system $\mathcal A=\{[A_1],\ldots,[A_J]\}$, the \emph{Stallings graph} of $\mathcal A$ with respect to $G$ is an immersion $f \from \Gamma \to G$ of a finite graph $\Gamma$ whose components are core graphs $\Gamma=\Gamma_1 \union\cdots\union\Gamma_J$ such that the subgroup $A_j<F_n$ is conjugate to the image of $f_* \from \pi_1(\Gamma_j) \to \pi_1(G)\approx F_n$. The Stallings graph is unique up to homeomorphism of the domain.
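Although the constructions of this paper are carried out with CTs, the Stallings graph itself is effectively computable by Stallings' classical folding algorithm: wedge together loops spelling the generating words, then repeatedly identify the endpoints of two equally-labeled edges sharing their other endpoint. The following sketch (a minimal illustration, not part of the paper's machinery; the function names are our own, lowercase letters denote generators and uppercase letters their inverses) folds such a wedge and then decides membership of a reduced word in the subgroup by reading it as a path at the base vertex.

```python
def stallings_graph(words):
    """Fold the wedge of loops spelling `words` into the Stallings graph of
    the subgroup of a free group that the words generate."""
    base, nxt, edges = 0, 1, set()   # edge (u, c, v): a c-labeled edge u -> v
    for w in words:                  # attach one loop at the base per word
        u = base
        for i, c in enumerate(w):
            v = base if i == len(w) - 1 else nxt
            if v != base:
                nxt += 1
            # an inverse letter 'A' is stored as an a-edge traversed backwards
            edges.add((u, c, v) if c.islower() else (v, c.lower(), u))
            u = v
    while True:                      # folding: merge targets (resp. sources)
        out, inn, merge = {}, {}, None   # of equally-labeled edges at a vertex
        for (u, c, v) in edges:
            if out.setdefault((u, c), v) != v:
                merge = (out[(u, c)], v); break
            if inn.setdefault((v, c), u) != u:
                merge = (inn[(v, c)], u); break
        if merge is None:
            return base, edges       # no fold available: h is an immersion
        keep, drop = min(merge), max(merge)   # base = 0 is always kept
        edges = {(keep if u == drop else u, c, keep if v == drop else v)
                 for (u, c, v) in edges}

def member(word, base, edges):
    """A reduced word lies in the subgroup iff it reads as a loop at base."""
    out = {(u, c): v for (u, c, v) in edges}
    inn = {(v, c): u for (u, c, v) in edges}
    cur = base
    for c in word:
        cur = out.get((cur, c)) if c.islower() else inn.get((cur, c.lower()))
        if cur is None:
            return False
    return cur == base
```

For instance, for the subgroup generated by $aa$ and $ab$ the fold identifies the two $a$-edges leaving the base, after which $abAA = (ab)(aa)^{-1}$ reads as a loop while $a$ alone does not. Folding terminates because each fold strictly decreases the number of vertices, and the folded graph is an immersion into the rose, mirroring the role of the immersion $h \from K \to G$ above.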
Recall from Section~\refGM{SectionLineDefs} that a ``ray of $F_n$'', i.e.\ an element of the set $\bdy F_n / F_n$, is realized in $G$ as a ray $\gamma = E_0 E_1 E_2 \cdots \in \wh\mathcal B(G)$ well defined up to asymptotic equivalence. Recall also from Section~\refGM{SectionLineDefs} the weak accumulation set of the ray~$\gamma$, a subset of $\mathcal B(G)$. Given a subgroup system $\mathcal A$, we say that $\gamma$ is \emph{carried by~$\mathcal A$} if there exists a lift $\ti \gamma = \widetilde E_0 \widetilde E_1 \widetilde E_2 \cdots \subset \widetilde G$ and a subgroup $A \subgroup F_n$ with $[A] \in \mathcal A$ such that the end of $\ti \gamma$ is in $\bdy A$. If $\mathcal A$ is the free factor system corresponding to a subgraph $H \subset G$ then $\gamma$ is carried by $\mathcal A$ if and only if some subray of $\gamma$ is contained in $H$. In the earlier context of Section~\ref{SectionWeakAttraction}, $\gamma$ is carried by $\mathcal A_\na(\Lambda^+_\phi)$ if and only if some subray of $\gamma$ is an element of $\<Z,\hat\rho_r\>$.
\begin{lemma} \label{ItemNotZPLimit} For any marked graph $G$ and any malnormal subgroup system $\mathcal A$ we have: \begin{enumerate} \item\label{ItemNotMalLimitLine} Every sequence of lines $\gamma_i \in \mathcal B(G)$ not carried by~$\mathcal A$ has a subsequence that weakly converges to a line not carried by $\mathcal A$. \item\label{ItemNotMalLimitRay} The weak accumulation set of every ray not carried by $\mathcal A$ contains a line not carried by~$\mathcal A$. \item\label{ItemNotMalFinitePathSet} For each sufficiently large constant $L$, letting $\Sigma \subset \wh\mathcal B(G)$ be the set of all finite paths of length $\le L$ that do not lift to the Stallings graph of $\mathcal A$, the following hold: \begin{enumerate} \item\label{ItemNotMalLimitPath} For each sequence $\gamma_i \in \wh\mathcal B(G)$ and each decomposition $\gamma_i = \alpha_i * \beta_i * \omega_i$, if $\beta_i \in \Sigma$ for each $i$, and if $\Length(\alpha_i) \to +\infinity$ and $\Length(\omega_i) \to +\infinity$ as $i \to +\infinity$, then some weak limit of some subsequence of $\gamma_i$ is a line not carried by~$\mathcal A$. \item\label{ItemNotMalCarriedLine} A line $\gamma \in \mathcal B(G)$ is not carried by $\mathcal A$ if and only if some subpath of $\gamma$ is in $\Sigma$. \item\label{ItemNotMalCarriedRay} A ray $\gamma \in \wh\mathcal B(G)$ is not carried by $\mathcal A$ if and only if $\gamma$ has infinitely many distinct subpaths in $\Sigma$. \end{enumerate} \end{enumerate} \end{lemma}
\subparagraph{Remark.} There is also a converse to~\pref{ItemNotMalFinitePathSet}, namely that if $\mathcal A$ is not malnormal then no such finite set $\Sigma$ exists.
\begin{proof} First we show that \pref{ItemNotMalFinitePathSet}$\implies$\pref{ItemNotMalLimitLine} and~\pref{ItemNotMalLimitRay}.
For any sequence of lines $\gamma_i \in \mathcal B(G)$ not carried by $\mathcal A$, apply~\pref{ItemNotMalCarriedLine} to choose a subpath $\beta_i$ of $\gamma_i$ in $\Sigma$. We obtain a decomposition $\gamma_i = \alpha_i \beta_i \omega_i$ with $\Length(\alpha_i)=\Length(\omega_i)=\infinity$, and so \pref{ItemNotMalLimitLine} follows from~\pref{ItemNotMalLimitPath}. For any ray $\gamma \in \wh\mathcal B(G)$ not carried by $\mathcal A$, apply~\pref{ItemNotMalCarriedRay} to choose infinitely many distinct subpaths $\beta_i$ of $\gamma$ in $\Sigma$. We obtain for each $i$ a decomposition $\gamma = \alpha_i \beta_i \omega_i$ with $\Length(\alpha_i) \to \infinity$ as $i \to \infinity$, and with $\Length(\omega_i) = \infinity$, and so \pref{ItemNotMalLimitRay} follows from~\pref{ItemNotMalLimitPath} with $\gamma_i=\gamma$.
We turn to the proof of~\pref{ItemNotMalFinitePathSet}. Let $f \from \Gamma=\Gamma_1 \union\cdots\union\Gamma_J \to G$ be the Stallings graph of $\mathcal A=\{[A_1],\ldots,[A_J]\}$.
Let ${\mathbb T}$ be the set of all minimal subtrees $T \subset \widetilde G$ with respect to the actions of all subgroups $A \subgroup F_n$ representing elements of $\mathcal A$, so the trees $T$ are precisely the images of all the lifts of all the maps $f \from \Gamma_j \to G$ to universal covers, $j=1,\ldots,J$. Since $\mathcal A$ is malnormal, there exists a constant $D$ such that for each $T \ne T' \in {\mathbb T}$ the diameter of $T \intersect T'$ is $\le D$ (see Section~\refGM{SectionSSAndFFS}). Consider any $L \ge 2D+2$ and let~$\widetilde\Sigma$ be the set of all paths $\ti\beta$ in $\widetilde G$ such that $\Length(\ti\beta) \le L$ and $\ti\beta \not\subset T$ for each $T \in {\mathbb T}$. The set $\Sigma$ of projections to $G$ of all paths in $\widetilde\Sigma$ is precisely the set of all paths in $G$ of length $\le L$ that do not lift to the Stallings graph of $\mathcal A$.
The $\Leftarrow$ direction of~\pref{ItemNotMalCarriedLine} follows by observing that if a line is carried by~$\mathcal A$ then it lifts to some tree $T \in {\mathbb T}$ and so every subpath lifts to~$T$, hence no subpath is in $\Sigma$. For the~$\Rightarrow$ direction, suppose that no subpath of $\gamma$ is in~$\Sigma$, and so every subpath of length $\le L$ lifts to a path in one of the trees in ${\mathbb T}$. Decompose $\gamma$ as a bi-infinite concatenation of subpaths of length $D+1$ $$\gamma = \cdots \beta_{-1} \beta_0 \beta_1 \beta_2 \cdots $$ Each of the subpaths $\beta_{i-1} \beta_i$ has length $2D+2 \le L$ and so lifts to one of the trees in ${\mathbb T}$. Choose a tree $T \in {\mathbb T}$ and a lift $\ti\beta_0 \ti\beta_1 \subset T$ of the subpath $\beta_0 \beta_1$. Proceeding by induction, suppose that $i > 0$ and that we have extended $\ti\beta_0 \ti\beta_1$ to a lift $$\ti\beta[-i+1,i] = \ti\beta_{-i+1} \cdots \ti\beta_0 \ti\beta_1 \cdots \ti\beta_i \subset T $$ of the subpath $\beta[-i+1,i] = \beta_{-i+1} \cdots \beta_0 \beta_1 \cdots \beta_i$. Choose a tree $T' \in {\mathbb T}$ and a lift $\ti\beta'_i \ti\beta'_{i+1}$ of the subpath $\beta_i\beta_{i+1}$. Since $\ti\beta_i,\ti\beta'_i$ are two lifts of the same path, there exists $g \in F_n$ with corresponding covering transformation $\tau_g \from \widetilde G \to \widetilde G$ such that $\tau_g(\ti\beta'_i) = \ti\beta_i$. It follows that $\tau_g(T') \in {\mathbb T}$ and that the diameter of $T \intersect \tau_g(T')$ is greater than or equal to $\Length(\ti\beta_i) = D+1$ and therefore $\tau_g(T')=T$. The path $\ti\beta_{-i+1} \cdots \ti\beta_0 \ti\beta_1 \cdots \ti\beta_i \tau_g(\ti\beta'_{i+1})$ is therefore a lift of $\beta[-i+1,i+1]$ extending $\ti\beta_0\ti\beta_1$. In a similar fashion, by choosing a tree in ${\mathbb T}$ containing a lift of $\beta_{-i} \beta_{-i+1}$, we obtain a lift of $\beta[-i,i+1]$ extending $\ti\beta_0 \ti\beta_1$, completing the induction.
Taking the union as $i \to \infinity$ we obtain a lift of $\gamma$ to $T$, and so $\gamma$ is carried by~$\mathcal A$.
The proof of~\pref{ItemNotMalCarriedRay} is almost the same as~\pref{ItemNotMalCarriedLine}, the primary difference being that if a ray $\gamma \in \wh\mathcal B(G)$ has only finitely many distinct subpaths in $\Sigma$ then after truncating some initial segment we may assume that $\gamma$ has no subpaths in $\Sigma$, and so after that truncation we may write $\gamma$ as a singly infinite concatenation $\gamma = \beta_0 \beta_1 \beta_2 \cdots$ of paths of length $D+1$; the proof then proceeds inductively as above.
To prove~\pref{ItemNotMalLimitPath}, since $\Sigma$ is finite we may pass to a subsequence of $\gamma_i$ so that $\beta_i=\beta$ is constant. By a diagonalization argument as in the proof of Fact~\refGM{FactWeakLimitLines} one obtains a subsequence of $\gamma_i$ that weakly converges to a line containing $\beta$ as a subpath, and so by~\pref{ItemNotMalCarriedLine} that line is not carried by~$\mathcal A$. \end{proof}
\section{Nonattracted lines} \label{SectionNALines} In the previous section, given a rotationless $\phi \in \mathsf{Out}(F_n)$ and $\Lambda^+_\phi \in \L(\phi)$, we described the set of conjugacy classes that are not weakly attracted to $\Lambda^+_\phi$ under iteration by $\phi$ --- they are precisely the conjugacy classes carried by the nonattracting subgroup system $\mathcal A_\na(\Lambda^\pm_\phi)$.
In this section we state and prove Theorem~\ref{ThmRevisedWAT}, the full-fledged version of Theorem~G from the Introduction, which characterizes those lines that are not weakly attracted to $\Lambda^+_\phi$ under iteration by $\phi$. Our characterization starts with Lemma~\ref{LemmaThreeNASets}, which lays out three particular types of such lines: lines carried by $\mathcal A_\na(\Lambda^\pm_\phi)$; singular lines of $\phi^\inv$; and generic leaves of laminations in $\L(\phi^\inv)$. Theorem~\ref{ThmRevisedWAT} will say that, in addition to these three subsets, by concatenating elements of these subsets in a very particular manner one obtains the entire set of lines not weakly attracted to $\Lambda^+_\phi$. The proof of this theorem will occupy the remaining subsections of Section~\ref{SectionNALines}.
The proof of Theorem~\ref{ThmRevisedWAT} requires a re-examination of the weak attraction theory of mapping classes of surfaces, based on Nielsen-Thurston theory, and carried out in Section~\ref{SectionNFHGeometric}. One ``folk theorem'' in this context is that for any finite type compact surface $S$ with exactly one boundary component, and for any mapping class $\phi \in \MCG(S) \subgroup \mathsf{Out}(\pi_1 S)$, if $\phi$ is a pseudo-Anosov element of $\MCG(S)$ then $\phi$ is a fully irreducible element of $\mathsf{Out}(\pi_1 S)$. In Proposition~\ref{PropWeakGeomRelFullIrr} we will prove this folk theorem in a very general context that is expressed in terms of geometric models. In \PartFour, the conclusions of Proposition~\ref{PropWeakGeomRelFullIrr} will be incorporated into Theorem~J, which is the full-fledged version of Theorem~I from the Introduction.
\subsection{Theorem G --- Characterizing nonattracted lines} \label{SectionTheoremGStatement} From here up through Section~\ref{SectionNonattrFullHeight} we adopt the following:
\textbf{Notational conventions:} Let $\phi, \psi=\phi^{-1} \in \mathsf{Out}(F_n)$ be rotationless, let $\Lambda^\pm_\phi \in \L^\pm(\phi)$ be a lamination pair, and denote $\Lambda^+_\psi= \Lambda^-_\phi \in \L(\psi)$ and $\Lambda^-_\psi = \Lambda^+_\phi \in \L(\phi)$. Applying \recognition\ Theorem~4.28 (or see Theorem~\refGM{TheoremCTExistence}), choose $f \from G \to G$ and $f' \from G' \to G'$ to be CTs\ representing $\phi$ and $\psi$, respectively, the first with EG\ stratum $H_r \subset G$ associated to $\Lambda^+_\phi$, and the second with EG\ stratum $H'_u \subset G'$ associated to $\Lambda^+_\psi$, so that \begin{align*} [G_r] = \mathcal F_\supp(\Lambda^+_\phi) &= \mathcal F_\supp(\Lambda^-_\psi) = [G'_u] \\ [G_{r-1}] &= [G'_{u-1}] \end{align*} To check that this is possible, after choosing $f \from G \to G$ to satisfy the one condition $[G_r]=\mathcal F_\supp(\Lambda^\pm_\phi)$ we may then choose $f'$ to satisfy the two conditions $[G'_u]=[G_r]$ and $[G'_{t}] = [G_{r-1}]$ for some $t < u$, but then by (Filtration) in Definition~\refGM{DefCT} it follows that $[G'_{t}] = [G'_{u-1}]$. For other laminations in the set $\L(\psi)$, or strata or filtration elements of $G'$ that occur in the course of our presentation, we use notation like $\Lambda^-_t$, or $H'_t$ or $G'_t$ with the subscript~$t$, as in the previous paragraph.
The reader may refer to Section~\refGM{SectionPrincipalRotationless} for a refresher on basic concepts regarding the set $P(\phi)$ of principal automorphisms representing $\phi \in \mathsf{Out}(F_n)$, and on the set $\Fix(\wh\Phi) \subset \bdy F_n$ of points at infinity fixed by the continuous extension $\wh\Phi \from \bdy F_n \to \bdy F_n$ of an automorphism $\Phi \in \mathsf{Aut}(F_n)$.
We also recall/introduce some notations and definitions related to a rotationless outer automorphism $\psi \in \mathsf{Out}(F_n)$. \begin{itemize} \item $\mathcal B_\na(\Lambda^+_\phi) = \mathcal B_\na(\Lambda^+_\phi;\phi)$ denotes the set of all lines in $\mathcal B$ that are not weakly attracted to $\Lambda^+_\phi$ under iteration by~$\phi$. \item $\mathcal B_\sing(\psi)$ denotes the set of singular lines of $\psi$: by definition, $\ell \in \mathcal B$ is a singular line for $\psi$ if there exists $\Psi \in P(\psi)$ ($=$ the set of principal automorphisms representing $\psi$) such that $\bdy\ell \subset \Fix_N(\wh\Psi)$. \item $\mathcal B_\gen(\psi)$ denotes the set of all generic leaves of all elements of $\L(\psi)$. \end{itemize}
\begin{lemma} \label{LemmaThreeNASets} Given rotationless $\phi,\psi=\phi^{-1} \in \mathsf{Out}(F_n)$ and $\Lambda^+_\phi \in \L(\phi)$, if $\gamma \in \mathcal B$ satisfies any of the following three conditions then $\gamma \in \mathcal B_\na(\Lambda^+_\phi)$: \begin{enumerate} \item\label{ItemNANA}
$\gamma$ is carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$. \item\label{ItemSingNA} $\gamma \in \mathcal B_\sing(\psi)$. \item\label{ItemGenNA} $\gamma \in \mathcal B_\gen(\psi)$. \end{enumerate}
\end{lemma}
\begin{proof} Case~\pref{ItemNANA} is a consequence of the following: no conjugacy class carried by $\mathcal A_\na(\Lambda^{\pm}_\phi)$ is weakly attracted to $\Lambda^+_\phi$ (Corollary~\ref{CorNAWellDefined}); axes of conjugacy classes are dense in the set of all lines carried by $\mathcal A_\na(\Lambda^{\pm}_\phi)$ (as is true for any subgroup system); and $\mathcal B_\na(\Lambda^+_\phi)$ is a weakly closed subset of $\mathcal B$, which follows from the fact that being weakly attracted to $\Lambda^+_\phi$ is a weakly open condition on $\mathcal B$, an evident consequence of the definition of an attracting lamination.
For Case~\pref{ItemGenNA}, suppose that $\gamma$, and hence each $\phi^i_\#(\gamma)$, is a generic leaf of some $\Lambda^-_t \in \L(\psi)$. Choose $[a]$ to be a conjugacy class represented by a completely split circuit in $G'$ such that some term of its complete splitting is an edge of $H'_t$. By Fact~\trefGM{FactAttractingLeaves}{ItemSplitAnEdge}, $[a]$ is weakly attracted to $\gamma$ under iteration by~$\psi$. If $\gamma$ were weakly attracted to $\Lambda^+_\phi$ under iteration by $\phi$ then, since the $\phi^i_\#(\gamma)$'s all have the same neighborhoods in~$\mathcal B$, the lamination $\Lambda^+_\phi$ would be in the weak closure of $\gamma$, and so $[a]$ would be weakly attracted to $\Lambda^+_\phi$ under iteration by $\psi=\phi^\inv$, contradicting Fact~\trefGM{FactAttractingLeaves}{ItemAllCircuitsRepelled}.
For Case~\pref{ItemSingNA}, choose $\Psi \in P(\psi)$ and a lift $\ti \gamma$ of $\gamma$ with endpoints in $\Fix_N(\wh\Psi) = \bdy\Fix(\Psi) \union \Fix_+(\wh\Psi)$. Assuming that $\gamma$ is weakly attracted to $\Lambda^+_\phi$ under iteration by $\phi$, we argue to a contradiction. Since $\gamma$ is $\phi_\#$-invariant, $\Lambda^+_\phi$ is contained in the weak closure of $\gamma$. Let $\ell$ be a generic leaf of~$\Lambda^+_\phi$. Since $\ell$ is birecurrent, $\ell$ is contained in the weak accumulation set of at least one of the endpoints, say~$P$, of $\ti \gamma$. If $P \in \partial \Fix(\Psi)$ then $\ell$ is carried by ${\mathcal A}_{\na}(\Lambda^{\pm}_\phi)$ in contradiction to Case~\pref{ItemNANA} and the obvious fact that $\ell$ is weakly attracted to~$\Lambda^+_\phi$. Thus $P \in \Fix_+(\wh\Psi)$. By Lemma~\refGM{LemmaFixPlusAccumulation}, for every line in the weak accumulation set of $P$, in particular for the generic leaf $\ell$ of $\Lambda^+_\phi$, there exists a conjugacy class $[a]$ whose iterates $\psi^k[a]$ weakly accumulate on that line. It follows that $[a]$ is weakly attracted to $\Lambda^+_\phi$ under iteration by $\psi$. As in Case~\pref{ItemGenNA}, this contradicts Fact~\trefGM{FactAttractingLeaves}{ItemAllCircuitsRepelled}. \end{proof}
As shown in Example~\ref{ExNotClosedUnderConcat} below, by using concepts of concatenation one can sometimes construct lines in $\mathcal B_\na(\Lambda)$ not accounted for in the statement of Lemma~\ref{LemmaThreeNASets}. In the next definition we extend the usual concept of concatenation points to allow points at infinity.
\begin{definition} \label{DefnConcatenation} Given any marked graph $K$ and oriented paths $\gamma_1,\gamma_2 \in \wh\mathcal B(K)$, we say that $\gamma_1,\gamma_2$ are \emph{concatenable} if there exist lifts $\ti\gamma_i \subset \widetilde K$ with initial endpoints $P^-_i \in \widetilde K \union \bdy F_n$ and terminal endpoints $P^+_i \in \widetilde K \union \bdy F_n$ satisfying $P^+_1 = P^-_2$ and $P^-_1 \ne P^+_2$. The \emph{concatenation} of $\ti\gamma_1,\ti\gamma_2$ is the oriented path with endpoints $P^-_1, P^+_2$, denoted $\ti\gamma_1 \diamond \ti\gamma_2$. Its projection to~$K$, denoted $\gamma_1 \diamond \gamma_2$, is called \emph{a concatenation of $\gamma_1,\gamma_2$}. This operation is clearly associative and so we can define multiple concatenations. This operation is also invertible; in particular any concatenation of the form $\gamma = \alpha \diamond \nu \diamond \beta$ can be rewritten as $\nu = \alpha^\inv \diamond \gamma \diamond \beta^\inv$.
Notice the use of the definite article upstairs, versus the indefinite article downstairs. ``The'' upstairs concatenation $\ti\gamma_1 \diamond \ti\gamma_2$ is well-defined once the lifts $\ti\gamma_1,\ti\gamma_2$ have been chosen. But because of the freedom of choice of those lifts, ``a'' downstairs concatenation $\gamma_1 \diamond \gamma_2$ is not generally well-defined: this fails precisely when $P^+_1=P^-_2$ is an endpoint of the axis of some element $c$ of $F_n$ and neither $P^-_1$ nor $P^+_2$ is the opposite endpoint, in which case one can replace either of $\ti\gamma_1,\ti\gamma_2$ by a translate under $c$ to get a different concatenation downstairs. This is a mild failure, however, and it is usually safe to ignore.
A subset $S \subset \wh\mathcal B(K)$ is \emph{closed under concatenation} if for any concatenable oriented paths $\gamma_1,\gamma_2 \in S$, every concatenation $\gamma_1 \diamond \gamma_2$ is also an element of $S$. \end{definition}
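The downstairs non-uniqueness admits a minimal illustration with our own hypothetical choices (not from the source). Take $K$ to be the rose marked by $F_2 = \langle a,b \rangle$, let $\ti\gamma_1 \subset \widetilde K$ be the ray labeled $aaa\cdots$ based at a lift $\ti v$ of the vertex, with terminal endpoint the attracting fixed point $a^{+\infty} \in \bdy F_2$ of $a$, and let $\ti\gamma_2$ be the line from $a^{+\infty}$ to $b^{+\infty}$. Then

```latex
% Hypothetical example (ours): K = rose for F_2 = <a,b>.
\ti\gamma_1 \diamond \ti\gamma_2
  = \text{the oriented path from } \ti v \text{ to } b^{+\infty},
\qquad
\gamma_1 \diamond \gamma_2 = b\,b\,b\cdots
```

whereas replacing $\ti\gamma_2$ by its translate under $a^k$, which fixes $P^+_1 = P^-_2 = a^{+\infty}$ but moves $b^{+\infty}$, yields the different downstairs concatenation $a^k\, b\, b\, b \cdots$. This is exactly the failure just described: $a^{+\infty}$ is an endpoint of the axis of $a$, and neither $P^-_1 = \ti v$ nor $P^+_2 = b^{+\infty}$ is the opposite endpoint $a^{-\infty}$.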
\begin{lemma} \label{LemmaConcatenation} Continuing with the Notational Convention above, the set of elements of $\wh\mathcal B(G)$ that are not weakly attracted to $\Lambda^+_\phi$ under iteration by $\phi$ is closed under concatenation. In particular, $\mathcal B_\na(\Lambda^+_\phi)$ is closed under concatenation. \end{lemma}
\begin{proof} Consider a concatenation $\gamma_1 \diamond \gamma_2$ with accompanying notation as in Definition~\ref{DefnConcatenation}. For each $m \ge 0$, the path $f^m_\#(\gamma_1 \diamond \gamma_2)$ is the concatenation of a subpath of $f^m_\#(\gamma_1)$ and a subpath of $f^m_\#(\gamma_2)$. Letting $\ell$ be a generic leaf of $\Lambda^+_\phi$, by Fact~\refGM{FactTiles} we may write $\ell$ as an increasing union of nested tiles $\alpha_1 \subset \alpha_2 \subset \cdots$ so that each $\alpha_j$ contains at least two disjoint copies of $\alpha_{j-1}$. By assumption $\gamma_1$ has the property that there exists an integer $J$ so that if $\alpha_j$ occurs in $f^m_\#(\gamma_1)$ for arbitrarily large $m$ then $j \le J$, and $\gamma_2$ satisfies the same property. This property is therefore also satisfied by $\gamma_1 \diamond \gamma_2$ (with a possibly larger bound $J$) and so $\gamma_1 \diamond \gamma_2$ is not weakly attracted to $\Lambda^+_\phi$.
\end{proof}
\begin{ex} \label{ExNotClosedUnderConcat} The set of lines satisfying \pref{ItemNANA}, \pref{ItemSingNA} and~\pref{ItemGenNA} of Lemma~\ref{LemmaThreeNASets} is generally not closed under concatenation. For example, suppose that for $i=1,2$ we have singular lines for $\psi$ of the form $\gamma'_i = \bar \alpha'_i \beta'_i \subset G'$ where $\alpha'_i\subset G'_u$ is a principal ray of $\Lambda^+_\psi$ (Definition~\refGM{DefSingularRay}) and $\beta'_i \subset G'_{u-1}$. Let $\mu' \subset G'_{u-1}$ be any line that is asymptotic in the backward direction to $\beta'_1$ and in the forward direction to $\beta'_2$. Then $\mu'$ is carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$ and $\gamma'_3 =\gamma'_1 \diamond \mu' \diamond \bar \gamma'_2$ is not weakly attracted to $\Lambda^+_\phi$. However, $\gamma'_3$ does not in general satisfy any of \pref{ItemNANA}, \pref{ItemSingNA} and~\pref{ItemGenNA} of Lemma~\ref{LemmaThreeNASets}. \end{ex}
We account for these kinds of examples as follows. (See also Propositions~\ref{nonGeometricFullHeightCase} and \ref{geometricFullHeightCase}.)
\begin{definition} \label{defn: extended na} Given a subgroup $A \subgroup F_n$ such that $[A] \in \mathcal A_{\na}(\Lambda^{\pm}_\phi)$ and given $\Psi \in P(\psi)$, we say that $\Psi$ is \emph{$A$-related} if $\Fix_N(\wh\Psi) \cap \partial A \ne \emptyset$. Define the \emph{extended boundary} of $A$ to be $$\bdy_\ext(A,\psi) = \partial A \cup \left( \bigcup_\Psi \Fix_N(\wh\Psi)\right) $$ where the union is taken over all $A$-related $\Psi \in P(\psi)$. Let $\mathcal B_\ext(A,\psi)$ denote the set of lines that have lifts with endpoints in $\bdy_\ext(A,\psi)$; this set is independent of the choice of $A$ in its conjugacy class. Define $$\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)= \bigcup_{A \in \mathcal A_{\na}(\Lambda^{\pm}_\phi)} \mathcal B_\ext(A,\psi) $$ \end{definition} \noindent Basic properties of $\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$ are established in the next section.
We conclude this section with the statement of our main weak attraction result. The proof is given in Section~\ref{SectionNALinesGeneral}.
\begin{theorem}[\textbf{Theorem G}] \label{ThmRevisedWAT} If $\phi, \psi=\phi^{-1} \in \mathsf{Out}(F_n)$ are rotationless and $\Lambda^{\pm}_\phi \in \L^\pm(\phi)$ then $$\mathcal B_\na(\Lambda^+_\phi;\phi) = \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi) \union \mathcal B_\sing(\psi) \union \mathcal B_\gen(\psi) $$ \end{theorem}
\begin{remark} The sets $\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$, $\mathcal B_\sing(\psi)$, and $\mathcal B_\gen(\psi)$ need not be pairwise disjoint. For example, every line carried by $G'_{u-1}$ is in $\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$ and some of these can be in $\mathcal B_\sing(\psi)$ or in $\mathcal B_\gen(\psi)$. \end{remark}
\begin{remark} It is not hard to show that if $\gamma \in \mathcal B_\na(\Lambda^+_\phi)$ is birecurrent then $\gamma$ is either carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$ or is a generic leaf of some element of $\L(\psi)$. This shows that Theorem~\ref{ThmRevisedWAT} contains the Weak Attraction Theorem (Theorem~6.0.1 of \BookOne) as a special case. \end{remark}
\subsection{$\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi) \union \mathcal B_\sing(\psi) \union \mathcal B_\gen(\psi)$ is closed under concatenation} \label{SectionNAConcatenation}
We continue with the notation for an inverse pair of rotationless outer automorphisms $\phi, \, \psi=\phi^\inv \in \mathsf{Out}(F_n)$ established at the beginning of Section~\ref{SectionTheoremGStatement}.
Much of the work in this section is devoted to revealing details of the structure of $\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$. After a few such lemmas/corollaries, the main result of this section is that the union of the three subsets of $\mathcal B_\na(\Lambda^+_\phi)$ occurring in Theorem~\ref{ThmRevisedWAT} is closed under concatenation; see Proposition~\ref{PropStillClosed}.
We shall abuse notation for elements of the set $\mathcal A_\na(\Lambda^{\pm}_\phi)$ as described in Section~\refGM{SectionSSAndFFS}, writing $A \in \mathcal A_\na(\Lambda^{\pm}_\phi)$ to mean that $A$ is a subgroup of $F_n$ whose conjugacy class $[A]$ is an element of the set $\mathcal A_\na(\Lambda^{\pm}_\phi)$. Since $\mathcal A_\na(\Lambda^+_\phi)$ is a malnormal subgroup system (Proposition~\ref{PropVerySmallTree}), this notational abuse should not cause any confusion.
\begin{lemma} \label{LemmaTwoLifts} If $\Psi_1 \ne \Psi_2 \in \mathsf{Aut}(F_n)$ are representatives of $\psi$ and $P \in \Fix(\wh\Psi_1) \cap \Fix(\wh\Psi_2)$ then there exists a nontrivial $a \in \Fix(\Psi_1) \intersect \Fix(\Psi_2)$ determining an inner automorphism~$i_a$, and there exists $A \in \mathcal A_{\na}(\Lambda^{\pm}_\phi)$, such that $a \in A$ and $P \in \Fix(\hat i_a) \subset \bdy A$. \end{lemma}
\begin{proof} Choosing $a$ so that $\Psi_1 = i_a \Psi_2$ it follows that $i_a = \Psi_1 \Psi_2^\inv$ fixes $P$, and so $a \in \Fix(\Psi_1) \intersect \Fix(\Psi_2)$ (Fact~\refGM{FactTwoLifts}), implying that $[a]$ is not weakly attracted to $\Lambda^+_\phi$. Applying Corollary~\ref{CorNAWellDefined}, the conjugacy class $[a]$ is carried by $\mathcal A_\na(\Lambda^{\pm}_\phi)$, and so there exists $A \in \mathcal A_\na(\Lambda^{\pm}_\phi)$ such that $a \in A$, which implies that $\Fix(\hat i_a) \subset \bdy A$. \end{proof}
\begin{lemma}\label{LemmaAtMostOneA} For each $\Psi \in P(\psi)$ there exists at most one $A \in \mathcal A_\na(\Lambda^\pm_\phi)$ such that $\Psi$ is $A$-related. \end{lemma}
\begin{proof} Suppose that for $j=1,2$ there exist $A_j \in \mathcal A_{\na}(\Lambda^{\pm}_\phi)$ and $P_j \in \Fix_N(\wh\Psi) \cap \partial A_j$. The line $\ti\gamma$ connecting $P_1$ to $P_2$ projects to a line $\gamma \in \mathcal B_\sing(\psi)$ that by Lemma~\ref{LemmaThreeNASets} is not weakly attracted to $\Lambda^+_\phi$. Since $P_j \in \partial A_j$, and since by Lemma~\ref{LemmaZPClosed}~\pref{item:ZP=NA} each line that is carried by $A_j$ is contained in $\<Z,\hat \rho_r\>$, the ends of $\gamma$ are contained in $\<Z,\hat \rho_r\>$, and so we may assume that $\gamma = \overline\rho_- \diamond \gamma_0 \diamond \rho_+$ where the rays $\rho_-,\rho_+$ are in $\<Z,\hat \rho_r\>$. After replacing $\gamma$ with a $\phi_\#$-iterate we may also assume that the central subpath $\gamma_0$ has endpoints at fixed vertices. Since none of $\gamma$, $\rho_-$, $\rho_+$ are weakly attracted to $\Lambda^+_\phi$, by Lemma~\ref{LemmaConcatenation} neither is $\gamma_0 = \rho_- \diamond \gamma \diamond \overline \rho_+$. Lemma~\ref{defining Z}~\pref{ItemZPFinitePaths} implies that $\gamma_0$ is contained in $\<Z,\hat \rho_r\>$ and Lemma~\ref{LemmaZPClosed}~\pref{Item:Groupoid} then shows that $\gamma$ is contained in $\<Z,\hat\rho_r\>$. By Lemma~\ref{LemmaZPClosed}~\pref{item:ZP=NA} it follows that $\gamma$ is carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$, which means that $\bdy\ti\gamma = \{P_1,P_2\} \subset \partial A_3$ for some $A_3 \in \mathcal A_\na(\Lambda^\pm_\phi)$. By Proposition~\ref{PropVerySmallTree}~\pref{ItemA_naMalnormal} and Fact~\refGM{FactBoundaries} it follows that $A_1=A_3=A_2$. \end{proof}
\begin{corollary}\label{CorRelatedBdy} If $\Psi \in P(\psi)$, $A \in \mathcal A_\na(\Lambda^{\pm}_\phi)$, and $\Psi$ is $A$-related, then $\Fix(\Psi) \subgroup A$ and each point of $\Fix_N(\wh\Psi) \setminus \bdy A$ is an isolated attractor for $\wh\Psi$. \end{corollary}
\begin{proof} If $\Fix(\Psi)$ is trivial then by Lemma~\refGM{LemmaFixPhiFacts} each point of $\Fix_N(\wh\Psi)$ is an isolated attractor and we are done. Otherwise, noting that the conjugacy class of each nontrivial element of $\Fix(\Psi)$ is carried by $\mathcal A_\na(\Lambda^{\pm}_\phi)$, applying Corollary~\ref{CorNASubgroups} we have $\Fix(\Psi) \subgroup A'$ for some $A' \in \mathcal A_\na(\Lambda^{\pm}_\phi)$, and so $\bdy\Fix(\Psi) \subset \bdy A'$. It follows that $\Psi$ is $A'$-related. By Lemma~\ref{LemmaAtMostOneA} we have $A'=A$, and applying Lemma~\refGM{LemmaFixPhiFacts} completes the proof. \end{proof}
\begin{corollary}\label{CorDisjBdyA} If $A_1 \ne A_2 \in \mathcal A_\na(\Lambda^\pm_\phi)$ then $\bdy_\ext (A_1,\psi) \cap \bdy_\ext(A_2,\psi) = \emptyset$. \end{corollary}
\begin{proof} We assume that $Q \in \bdy_\ext (A_1,\psi)\cap \bdy_\ext (A_2,\psi)$ and argue to a contradiction. After interchanging $A_1$ and $A_2$ if necessary, we may assume by Proposition~\ref{PropVerySmallTree}~\pref{ItemA_naMalnormal} and Fact~\refGM{FactBoundaries} that $Q \not \in \partial A_1$ and hence that $Q \in \Fix_N(\wh\Psi_1)$ for some $A_1$-related $\Psi_1 \in P(\psi)$. Lemma~\ref{LemmaAtMostOneA} implies that $\Psi_1$ is not $A_2$-related and so $Q \not \in \partial A_2$. The only remaining possibility is that $Q \in \Fix_N(\wh\Psi_2)$ for some $A_2$-related $\Psi_2 \in P(\psi)$. But then Lemma~\ref{LemmaTwoLifts} implies that $Q \in \partial A_3$ for some $A_3 \in \mathcal A_\na(\Lambda^\pm_\phi)$, and then Lemma~\ref{LemmaAtMostOneA} implies that $A_1 = A_3 = A_2$. \end{proof}
\begin{corollary} \label{psi contained in partial A} If $\Psi \in P(\psi)$, $A \in \mathcal A_\na(\Lambda^\pm_\phi)$, and $\Fix_N(\wh\Psi) \cap \bdy_\ext (A,\psi) \ne \emptyset$ then $\Psi$ is $A$-related; in particular, $\Fix_N(\wh\Psi) \subset \bdy_\ext (A,\psi)$. \end{corollary}
\begin{proof} Choose $Q \in \Fix_N(\wh\Psi) \cap \bdy_\ext (A,\psi)$. If $Q \in \partial A$ we are done, so we may assume that $Q \in \Fix_N(\wh\Psi')$ for some $A$-related $\Psi'$. If $\Psi = \Psi'$ we are done. Otherwise, Lemma~\ref{LemmaTwoLifts} implies that $Q \in \partial A'$ for some $A' \in \mathcal A_\na(\Lambda^\pm_\phi)$ and Corollary~\ref{CorDisjBdyA} implies that $A' = A$, so again we are done. \end{proof}
\begin{proposition} \label{PropStillClosed} If the oriented lines $\gamma_1, \gamma_2$ are in the set $\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi) \union \mathcal B_\sing(\psi) \union \mathcal B_\gen(\psi)$ and are concatenable then any concatenation $\gamma_1 \diamond \gamma_2$ is also in that set.
More precisely, given lifts $\ti \gamma_j$ with initial and terminal endpoints $P^-_j$ and $P^+_j$ respectively, if $P^+_1 = P^-_2$ and $P^-_1 \ne P^+_2$ then either there exists $\Psi \in P(\psi)$ such that the three points $P^-_1,P^+_1=P^-_2, P^+_2$ are in $\Fix_N(\wh\Psi)$ or there exists $A \in \mathcal A_\na(\Lambda^\pm_\phi)$ such that those three points are in $\bdy_\ext(A,\psi)$. \end{proposition}
\begin{proof} The first sentence is an immediate consequence of the second, to whose proof we now turn.
\textbf{Case 1:} $\gamma_1 \in \mathcal B_\sing(\psi)$. \quad We have $P^-_1,P^+_1 \in \Fix_N(\wh\Psi)$ for some $\Psi \in P(\psi)$. There are three subcases. First, if $P^-_2,P^+_2 \in \bdy_\ext(A,\psi)$ for some $A \in \mathcal A_\na(\Lambda^\pm_\phi)$ then $P^-_1,P^+_1 \in \bdy_\ext(A,\psi)$ by Corollary~\ref{psi contained in partial A} and we are done. The second subcase is that $P^-_2,P^+_2 \in \Fix_N(\wh\Psi')$ for some $\Psi' \in P(\psi)$; if $\Psi' = \Psi$ then we are done; if $\Psi' \ne \Psi$ then $P^+_1=P^-_2 \in \bdy_\ext(A,\psi)$ for some $A \in \mathcal A_\na(\Lambda^\pm_\phi)$ by Lemma~\ref{LemmaTwoLifts}, and so $P^-_2,P^+_2 \in \bdy_\ext(A,\psi)$ by Corollary~\ref{psi contained in partial A} so we are reduced to the first subcase. The final subcase is that $\gamma_2$ is a generic leaf of some $\Lambda^-_t \in \L(\psi)$. Assuming without loss that $P^+_2 \not \in \Fix_N(\wh\Psi)$, the projection of $\Psi_\#(\ti \gamma_2)$ is a generic leaf of $\Lambda^-_t$ that is asymptotic to $\gamma_2$ but not equal to~$\gamma_2$. The next lemma says that this puts us in the second subcase and so we are done.
\begin{lemma} \label{LemmaGenericAsymptotic} Assume that $\theta \in \mathsf{Out}(F_n)$ is rotationless. If $\ell',\ell''$ are each generic leaves of elements of $\L(\theta)$, and if some end of $\ell'$ is asymptotic to some end of $\ell''$, then $\ell',\ell'' \in \mathcal B_\sing(\theta)$. \end{lemma}
\noindent This lemma extends Lemma 3.3 of \Axes\ in which it is assumed that $\phi$ is irreducible.
Putting off the proof of Lemma~\ref{LemmaGenericAsymptotic} for a bit, we continue with the proof of Proposition~\ref{PropStillClosed}. Having finished Case~1, by symmetry we may now assume that $\gamma_j \not \in \mathcal B_\sing(\psi)$ for $j=1,2$.
\textbf{Case 2:} $P^-_1,P^+_1 \in \bdy_\ext(A,\psi)$ \textbf{for some} $A \in \mathcal A_\na(\Lambda^\pm_\phi)$.\quad If $P^+_2 \in \bdy_\ext(A,\psi)$ we are done. By Corollary~\ref{CorDisjBdyA}, the only remaining possibility is that $\gamma_2$ is a generic leaf of an element of $\L(\psi)$. If $P^+_1 \in \Fix_N(\wh\Psi)$ for some $\Psi \in P(\psi)$ then, as shown above, Lemma~\ref{LemmaGenericAsymptotic} implies that $P^+_2 \in \Fix_N(\wh\Psi)$ and hence that $\gamma_2 \in \mathcal B_\sing(\psi)$, which is a contradiction. Thus $P^+_1 \in \partial A$. Since $\gamma_2$ is birecurrent, Fact~\refGM{FactLinesClosed} implies that $\gamma_2$ is carried by~$\mathcal A_\na(\Lambda^{\pm}_\phi)$. Applying Lemma~\ref{LemmaZPClosed}~\pref{item:UniqueLift} it follows that $P^+_2 \in \partial A$ and we are done.
By symmetry of $\gamma_1$ and $\gamma_2$, the only remaining case is:
\textbf{Case 3: $\gamma_1$ and $\gamma_2$ are generic leaves of elements of $\L(\psi)$.} Since they have asymptotic ends, they are leaves of the same element of $\L(\psi)$ and so are singular lines by Lemma~\ref{LemmaGenericAsymptotic}. As we have already considered this case, the proof is complete. \end{proof}
It remains to prove Lemma~\ref{LemmaGenericAsymptotic}, but we first prove the following, which is similar to Lemma 5.11 of \BH, Lemma 4.2.6 of \BookOne\ and Lemma 2.7 of \Axes.
\begin{lemma} \label{one illegal turn} Suppose that $\fG$ is a CT, that $H_r$ is an EG\ stratum, that $\gamma \subset G$ is a line of height $r$ with exactly one illegal turn of height $r$ and that $f^{K_\gamma}_\#(\gamma)$ is $r$-legal for some minimal $K_{\gamma}$. Then $K_\gamma \le K $ for some $K$ that is independent of $\gamma$. \end{lemma}
\begin{proof} If the lemma fails there exists a sequence $\gamma_i$ such that $K_i = K_{\gamma_i} \to \infty$. Write $\gamma_i = \bar \sigma_i \tau_i$ where the turn $(\sigma_i, \tau_i)$ is the illegal turn of height $r$. After passing to a subsequence we may assume that $\sigma_i \to \sigma$ and $\tau_i \to \tau$ for some rays $\sigma$ and $\tau$. The line $\gamma = \bar \sigma \tau$ has height $r$ and $f^k_\#(\gamma)$ has exactly one illegal turn of height $r$ for all $k \ge 0$. Lemma~4.2.6 of \BookOne\ implies that there exists $m > 0$ and a splitting $f^m_\#(\gamma) = \bar R^- \cdot \rho\cdot R^+$ where $\rho$ is the unique \iNp\ of height $r$. It follows that for all sufficiently large $i$, $f^m_\#(\gamma_i)$ has a decomposition into subpaths $f^m_\#(\gamma_i)= \bar R_i^- \rho R^+_i$ where the height $r$ illegal turn in $\rho$ is the only height $r$ illegal turn in $f^m_\#(\gamma_i)$. Since any such decomposition is a splitting, $f^k_\#( \gamma_i)$ has an illegal turn of height $r$ for all $k$, in contradiction to our choice of $\gamma_i$. \end{proof}
\paragraph{Proof of Lemma \ref{LemmaGenericAsymptotic}.} By symmetry we need prove only that $\ell' \in \mathcal B_\sing(\theta)$. By Fact~\refGM{FactTwoEndsGeneric}, each end of each generic leaf of an element of $\L(\theta)$ has the same free factor support as the whole leaf, and so $\ell'$ and $\ell''$ must be generic leaves of the same $\Lambda \in \L(\theta)$.
Let $\fG$ be a CT\ representing $\theta$ and let $H_r$ be the EG\ stratum corresponding to~$\Lambda$. For each $j \ge 0$, there are generic leaves $\ell'_{j}$ and $\ell''_{j}$ of $\Lambda$ such that $f_\#^j(\ell'_{j}) =\ell'$ and $f_\#^j(\ell''_{j}) = \ell''$. Fixing a common end of $\ell'$ and $\ell''$, the corresponding common ends of $\ell'_j$ and $\ell''_j$ determine a maximal common subray $R_j$ of $\ell_j'$ and $\ell_j''$. Denote the rays in $\ell_j'$ and $\ell_j''$ that are complementary to $R_j$ by $R_j'$ and $R_j''$ respectively. Let $\gamma_j = \bar R_j' R_j''$.
Suppose at first that each $\gamma_j$ is $r$-legal. Lemma 5.8 of \BH\ implies that no height $r$ edges of $\gamma_j$ are cancelled when $f^j(\gamma_j)$ is tightened to $\gamma_0$. Let $E_j,E_j'$ and $E_j''$ be the first height $r$ edges of $R_j, R_j'$ and $R_j''$ respectively, let $w_j,w'_j$ and $w''_j$ be their initial vertices and let $d_j,d_j'$ and $d_j''$ be their initial directions. Let $\mu'_j$ be the finite subpath of $\ell'_j $ connecting $w_j'$ to $w_j$. To complete the proof in this case we will show that $\mu'_0$ is a Nielsen path, that $R_0$ is the principal ray determined by iterating $d_0$ (see Definition~\refGM{DefSingularRay}) and that $R_0'$ is the principal ray determined by iterating $d_0'$.
If $w_j=w'_j$ then $d_j,d'_j$ determine distinct gates, and otherwise $w_j,w'_j$ are each incident to an edge of height~$<r$. A similar statement holds for $w_j$, $w''_j$. In all cases it follows that $w_j,w'_j$ and $w''_j$ are principal vertices of~$f$.
Moreover, the following hold for all $i,j\ge 0$: $$ f^{i}(w_{j+i}) = w_j \qquad f^{i}(w'_{j+i}) = w'_j \qquad
f^{i}(d_{j+i}) = d_j \qquad f^{i}(d'_{j+i}) = d'_j \qquad
f^{i}_\#(\mu'_{j+i}) = \mu'_j$$ The first two of these equalities imply that $w = w_j, w'=w'_j \in \Fix(f)$ are independent of $j$; the third and fourth imply that $E = E_j$ and $E' = E'_j$ are independent of $j$; in conjunction with Lemma~\trefGM{FactTiles}{ItemUndergraphPieces}, the last equality implies that $\mu = \mu'_j$ is a Nielsen path that is independent of $j$. It follows that $\ell'$ is the increasing union of the subpaths $f^j_\#(\bar E')\mu f^j_\#(E)$ and so $\ell'$ is a pair of principal rays connected by a Nielsen path. Applying Fact~\refGM{FactPrincipalLift} completes the proof that $\ell' \in \mathcal B_\sing(\theta)$ when each $\gamma_j$ is $r$-legal.
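In other words (merely restating the conclusion of the $r$-legal case just reached), $\ell'$ can be recorded as the increasing union
$$\ell' \ =\ \bigcup_{j \ge 0} f^j_\#(\bar E')\,\mu\, f^j_\#(E),$$
exhibiting $\ell'$ as two principal rays connected by the Nielsen path $\mu$.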
It remains to consider the case that some $\gamma_l$ is not $r$-legal. Assuming without loss that $\gamma_0$ is not $r$-legal, each $\gamma_j$ is not $r$-legal. Lemma~\ref{one illegal turn} implies that $f^k_\#(\gamma_j)$ has an illegal turn of height $r$ for all $k \ge 0$ and Lemma 4.2.6 of \BookOne\ implies that there is a splitting $\gamma_j = \tau_j' \cdot \rho_j \cdot \tau_j''$ where some $f_\#$-iterate of $\rho_j$ is the unique indivisible Nielsen path $\rho$ with height $r$. Since $f^{i}_\#(\rho_{j+i}) = \rho_j$ for all $i,j \ge 0$, $\rho_j = \rho$ for all $j$. Let $E'$ be the first edge of height $r$ in the ray $\bar \tau_0'$ and let $E$ be the initial edge of $\rho_0$. Both of these edges are contained in $\ell'$ and we let $\mu$ be the subpath of $\ell'$ that connects their initial vertices. Arguing as in the previous case, $\mu$ is a Nielsen path and $\ell'$ is the increasing union of the subpaths $f^j_\#(\bar E')\mu f^j_\#(E)$ which proves that $\ell' \in \mathcal B_\sing(\theta)$.
\qed
\subsection{Application --- Proof of Theorem H} \label{SectionAttractionRepulsion}
Before turning in later sections to the proof of Theorem~\ref{ThmRevisedWAT} (Theorem~G), we use it to prove the following:
\begin{corollary}[\textbf{Theorem H}] \label{CorOneWayOrTheOther} Given rotationless $\phi,\psi = \phi^\inv \in \mathsf{Out}(F_n)$, a dual lamination pair $\Lambda^{\pm}_\phi \in \L^\pm(\phi)$, and a line $\gamma \in \mathcal B$, the following hold: \begin{enumerate} \item\label{ItemNonuniformOWOTO} If $\gamma$ is not carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$ then it is either weakly attracted to $\Lambda^+_\phi$ under iteration by $\phi$ or to $\Lambda^-_\phi$ under iteration by $\psi$. \item\label{ItemUniformOWOTO} For any weak neighborhoods $V^+$ and $V^-$ of generic leaves of $\Lambda^+_\phi$ and $\Lambda^-_\phi$, respectively, there exists an integer $m \ge 1$ (independent of $\gamma$) such that at least one of the following holds: \ $\gamma \in V^-$; \ $\phi^m(\gamma) \in V^+$; \ or $\gamma$ is carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$. \end{enumerate} \end{corollary}
\begin{proof} For (1) we assume that $\gamma$ is not weakly attracted to $\Lambda^+_\phi$ under iteration of $\phi$ and that $\gamma$ is not weakly attracted to $\Lambda^+_\psi=\Lambda^-_\phi$ under iteration of $\psi=\phi^\inv$, and we prove that $\gamma$ is carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$. Applying Theorem~\ref{ThmRevisedWAT} to both $\psi$ and $\phi$, we have \begin{align*} \gamma &\in \mathcal B_\na(\Lambda^+_\phi,\phi) \intersect \mathcal B_\na(\Lambda^+_\psi,\psi) \\
&= \left( \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi) \union \mathcal B_\sing(\psi) \union \mathcal B_\gen(\psi) \right) \intersect \left( \mathcal B_\ext(\Lambda^{\pm}_\phi;\phi) \union \mathcal B_\sing(\phi) \union \mathcal B_\gen(\phi) \right) \qquad(*) \end{align*}
and using this we proceed by cases.
\textbf{Case 1:} $\gamma \in \mathcal B_\gen(\phi) \union \mathcal B_\gen(\psi)$. By symmetry we may assume $\gamma \in \mathcal B_\gen(\phi)$ and so $\gamma$ is a generic leaf of some $\Lambda^+_t \in \L(\phi)$. Since $\gamma \in \mathcal B_\na(\Lambda^+_\phi,\phi)$ it follows that $\Lambda^+_\phi \not\subset \Lambda^+_t$, and so the stratum $H_t$ associated to $\Lambda^+_t$ is contained in $Z$, by Fact~\refGM{FactLeafAsLimit} and the definition of $Z$. Since $\<Z,\hat \rho_r\>$ is $f_\#$-invariant (Lemma~\ref{defining Z}\pref{ItemZPPathsInv}) and the set of lines that it carries is closed (Lemma~\ref{LemmaZPClosed}\pref{item:closed}), $\<Z,\hat \rho_r\>$ carries $\Lambda^+_t$ by Fact~\refGM{FactLeafAsLimit}. Lemma~\ref{LemmaZPClosed}\pref{Item:circuits} then implies that $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$ carries $\Lambda^+_t$ and hence $\gamma$.
Having settled Case~1 we may assume that $$\gamma \in \left( \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi) \union \mathcal B_\sing(\psi) \right) \intersect \left( \mathcal B_\ext(\Lambda^{\pm}_\phi;\phi) \union \mathcal B_\sing(\phi)\right) $$
\textbf{Case 2:} $\gamma \in \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi) \union \mathcal B_\ext(\Lambda^{\pm}_\phi;\phi)$. By symmetry we may assume $\gamma \in \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$, so there is a lift $\ti \gamma$ with endpoints $P,Q \in \bdy_\ext(A,\psi)$ for some $A \in \mathcal A_{\na}(\Lambda^{\pm}_\phi)$.
\textbf{Case 2a:} At least one of $P$ or $Q$ is not in $\bdy A$, say $P \not\in \bdy A$. It follows that $P \in \Fix_N(\wh\Psi) \setminus \bdy A$ for some $A$-related $\Psi \in P(\psi)$. Applying Corollary~\ref{CorRelatedBdy} it follows that $P$ is an isolated attracting point of $\Fix_N(\wh\Psi)$. Since $P \in \bdy_\ext(A,\psi)$, by Corollary~\ref{CorDisjBdyA} and Lemma~\ref{LemmaTwoLifts} it follows that $\Phi = \Psi^{-1}$ is the only automorphism representing $\phi$ with $P \in \Fix(\wh\Phi) = \Fix(\wh\Psi)$. Since $P$ is a repeller for the action of $\wh\Phi$, we have $P \not \in \Fix_N(\wh\Phi)$, and so $\gamma \not \in \mathcal B_\sing(\phi)$ and $\gamma \not \in\mathcal B_\ext(\Lambda^\pm_\phi;\phi)$. By $(*)$ it follows that $\gamma \in \mathcal B_\gen(\phi)$, reducing to Case~1.
\textbf{Case 2b:} $P,Q \in \bdy A$, and so $\gamma$ is carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$ and we are done.
Having settled Cases~1 and~2, we are reduced to the following:
\textbf{Case 3:} $\gamma \in \mathcal B_\sing(\phi) \union \mathcal B_\sing(\psi)$. By symmetry we may assume $\gamma \in \mathcal B_\sing(\phi)$, and so there exists $\Phi \in P(\phi)$ and a lift $\ti \gamma$ with endpoints $P,Q \in \Fix_N(\wh\Phi)$.
\textbf{Case 3a: $\Fix(\Phi)$ is nontrivial.} By Corollary~\ref{CorRelatedBdy} there exists $A \in \mathcal A_\na(\Lambda^{\pm}_\phi)$ such that $\Fix(\Phi) \subgroup A$, and so $\Phi$ is $A$-related and $\gamma \in \mathcal B_\ext(A,\phi) \subset \mathcal B_\ext(\Lambda^{\pm}_\phi;\phi)$, reducing to Case~2.
\textbf{Case 3b: $\Fix(\Phi)$ is trivial.} It follows that $P,Q$ are isolated attractors in $\Fix_N(\wh\Phi)$. Lemma~\ref{LemmaTwoLifts} combined with the assumption of Case 3b implies that $\Psi=\Phi^\inv$ is the only automorphism representing $\psi$ with $P \in \Fix(\wh\Psi)$. As in Case 2a, using $(*)$ we conclude that $\gamma \in \mathcal B_\gen(\psi)$, reducing to Case~1.
This completes the proof of (1).
We prove (2) by contradiction. If (2) fails then there are neighborhoods $V^+,V^-$ of generic leaves of $\Lambda^+_\phi,\Lambda^-_\phi$ respectively, a sequence of lines $\gamma_i \in \mathcal B$ and a sequence of positive integers $m_i \to \infinity$, such that for all $i$ we have: $\gamma_i \not\in V^-$; $\phi^{2m_i}(\gamma_i) \not\in V^+$; and $\gamma_i$ is not carried by $\mathcal A_\na(\Lambda^\pm_\phi)$. We may assume that $V^+$ has the property $\phi(V^+) \subset V^+$, because generic leaves of $\Lambda^+_\phi$ have a neighborhood basis of such sets. Similarly, we may assume that $V^- \subset \phi(V^-)$. Since $\mathcal A_\na(\Lambda^\pm_\phi)$ is $\phi$-invariant, none of the lines $\phi^{m_i}(\gamma_i)$ are carried by $\mathcal A_\na(\Lambda^\pm_\phi)$. Since $\mathcal A_\na(\Lambda^\pm_\phi)$ is malnormal we may apply Lemma~\ref{ItemNotZPLimit}, with the conclusion that after passing to a subsequence of $\phi^{m_i}(\gamma_i)$, some weak limit $\gamma$ is not carried by $\mathcal A_\na(\Lambda^\pm_\phi)$.
To contradict (1) we show that $\gamma$ is weakly attracted to neither $\Lambda^+_\phi$ nor $\Lambda^-_\phi$. By symmetry we need show only that the sequence $\phi^m(\gamma)$ does not weakly converge to $\Lambda^+_\phi$. If it does then $\phi^M(\gamma) \in V^+$ for some~$M$. Since $V^+$ is open there exists $I$ such that $\phi^{m_i +M}(\gamma_i) \in V^+$ for all $i \ge I$. Since $\phi(V^+) \subset V^+$, it follows that $\phi^m(\gamma_i) \in V^+$ for all $m \ge m_i + M$ and $i \ge I$. We can choose $i \ge I$ so that $m_i \ge M$, and it follows that $\phi^{2m_i}(\gamma_i) \in V^+$, a contradiction. \end{proof}
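For emphasis, the contradiction concluding the proof of part~(2) can be recorded in one chain of implications (this display merely restates the argument above): choosing $i \ge I$ with $m_i \ge M$,
$$\phi^{M}(\gamma) \in V^+ \ \Longrightarrow\ \phi^{m_i+M}(\gamma_i) \in V^+ \ \Longrightarrow\ \phi^{m}(\gamma_i) \in V^+ \ \text{for all } m \ge m_i+M \ \Longrightarrow\ \phi^{2m_i}(\gamma_i) \in V^+,$$
where the first implication uses that $V^+$ is weakly open, the second uses $\phi(V^+) \subset V^+$, and the third uses $2m_i \ge m_i + M$; this contradicts the choice of $\gamma_i$.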
\subsection{Nonattracted lines of EG\ height: the nongeometric case.} \label{SectionNonattrFullHeight}
We continue the \emph{Notational Conventions} established at the beginning of Section~\ref{SectionTheoremGStatement}. By combining Proposition~\refGM{PropGeomEquiv} with the following equations of free factor systems $$[G_r]=\mathcal F_\supp(\Lambda^\pm_\phi)=[G'_u], \qquad [G_{r-1}]=[G'_{u-1}] $$ we may conclude that the stratum $H_r$ is geometric if and only if the stratum $H'_u$ is geometric. The realizations of a line $\gamma \in \mathcal B$ in the marked graphs $G,G'$ will be denoted $\gamma_G,\gamma_{G'}$ respectively, or just $\gamma,\gamma'$ when we wish to abbreviate the notation, or even both simply as $\gamma$ when further abbreviation is convenient.
The heart of Theorem~\ref{ThmRevisedWAT} (Theorem~G) is the special case concerned with those lines $\gamma$ such that $\gamma_G$ has height~$r$, equivalently $\gamma_{G'}$ has height~$u$, and in this section we focus on that case. We give necessary and sufficient conditions for $\gamma$ to be weakly attracted to $\Lambda^+_\phi$ under iteration of $\phi$, expressed in terms of the form of $\gamma_{G'}$. This is the analog of Proposition~6.0.8 of \BookOne, which has the additional hypothesis that $\gamma$ is birecurrent, and the proof of which is separated into geometric and nongeometric cases. In our present setting we drop the birecurrence hypothesis, and we also separate the proof into the nongeometric case in Lemma~\ref{nonGeometricFullHeightCase} and the geometric case in Lemma~\ref{geometricFullHeightCase}. The conclusions of these two lemmas describe $\gamma_{G'}$ in explicit detail which, while more than we need for our applications, is included because it is needed for the proof and it helps clarify the picture. Although in this section we do not yet derive the conclusions of Theorem~\ref{ThmRevisedWAT} (Theorem~G) for the height~$r$ case, that will be done as part of the derivation of those conclusions for the general case, carried out in Section~\ref{SectionNALinesGeneral}.
The nongeometric case, in which the strata $H_r,H'_u$ are not geometric, is entirely handled in the following Lemma~\ref{nonGeometricFullHeightCase}. The geometric case will take considerably more work and is handled in Section~\ref{SectionNFHGeometric}.
\begin{lemma}
\label{nonGeometricFullHeightCase} Assuming that the strata $H_r$, $H'_u$ are not geometric, and with the notation above, if $\gamma \in \mathcal B$ has realization $\gamma_G$ in $G$ of height $r$ and $\gamma_{G'}$ in $G'$ of height~$u$, and if $\gamma$ is not weakly attracted to $\Lambda^+_\phi$, then its realization $\gamma_{G'}$ satisfies at least one of the following: \begin{enumerate} \item \label{ItemNGFHCgeneric} $\gamma_{G'}$ is a generic leaf of $\Lambda^-_\phi$. \item \label{ItemIntermediatePath} $\gamma_{G'}$ decomposes as $\overline R_1 \mu R_2$ where $R_1$ and $R_2$ are principal rays for $\Lambda^-_\phi$ and $\mu$ is either the trivial path or a nontrivial path of one of the forms $\alpha$, $\beta$, $\alpha\beta$, $\alpha\beta\bar\alpha$, such that $\beta$ is a nontrivial path of height $<u$ and $\alpha$ is a height $u$ indivisible Nielsen path. \item \label{ItemNGFHCOneRay} $\gamma_{G'}$ or $\gamma_{G'}{}^\inv$ decomposes as $\overline R_1 \mu R_2$ where $R_1$ is a principal ray for $\Lambda^-_\phi$, $R_2$ is a ray of height $<u$, and $\mu$ is either trivial or a height $u$ Nielsen path. \end{enumerate} \end{lemma}
\begin{proof} By Proposition~\refGM{PropGeomEquiv}, since $H_r$ is not a geometric stratum, neither is $H'_u$. Let $\alpha$ denote the unique (up to reversal) indivisible Nielsen path of height $u$ in $G'_u$, if it exists; by Fact~\refGM{FactGeometricCharacterization} and Fact~\trefGM{FactEGNielsenCrossings}{ItemEGNielsenNotClosed} it follows that $\alpha$ is not closed and that we may orient $\alpha$ so that its initial endpoint $v$ is an interior point of $H'_u$.
We adopt the abbreviated notation $\gamma$ for $\gamma_G$ and $\gamma'$ for $\gamma_{G'}$.
We first show that $\gamma$ has infinitely many edges in $H_r$ by proving that if a line $\gamma$ has height $r$ and only finitely many edges in $H_r$ then $\gamma$ is weakly attracted to $\Lambda^+$. To prove this, write $\gamma = \gamma_- \gamma_0 \gamma_+$ where $\gamma_-,\gamma_+ \subset G_{r-1}$ and $\gamma_0$ is a finite path whose first and last edges are in $H_r$. Since $f_\#$ restricts to a bijection on lines carried by $G_{r-1}$, it follows that $f^k_\#(\gamma)$ has height $r$ for all $k$, and so $f^k_\#(\gamma_0)$ is a nontrivial path of height $r$ for all $k$. By Fact~\refGM{FactEvComplSplit} there exists $K \ge 0$ such that $f^K_\#(\gamma_0)$ completely splits into terms each of which is either an edge or Nielsen path of height $r$ or a path in $G_{r-1}$, with at least one term of height $r$. Let $f^K_\#(\gamma_0) = \gamma'_- \gamma'_0 \gamma'_+$ where $\gamma'_-,\gamma'_+$ are in $G_{r-1}$ and $\gamma'_0$ is the maximal subpath of $f^K_\#(\gamma_0)$ whose first and last edges are in $H_r$, so $\gamma'_0$ is nontrivial and completely split. Since $f^K_\#(\gamma) = [f^K_\#(\gamma_-) \, \gamma'_- \, \gamma'_0 \, \gamma'_+ \, f^K_\#(\gamma_+)]$, it follows using RTT-(i) that there is a splitting $f^K_\#(\gamma) = \gamma''_- \cdot \gamma'_0 \cdot \gamma''_+$ where $\gamma''_- = [f^K_\#(\gamma_-) \, \gamma'_-]$ and $\gamma''_+= [\gamma'_+ \, f^K_\#(\gamma_+)]$ are in $G_{r-1}$. By Fact~\trefGM{FactEGNielsenCrossings}{ItemEGNielsenNotClosed}, each term in the complete splitting of $\gamma'_0$ which is an indivisible Nielsen path of height $r$ is adjacent to a term that is an edge in $H_r$. It follows that at least one term in the complete splitting of $\gamma'_0$ is an edge in $H_r$, implying that $\gamma$ is weakly attracted to $\Lambda^+$.
Since $\gamma$ contains infinitely many edges in $H_r$, and since $[G_r]=[G'_u]$ and the graphs $G_r, G'_u$ are both core subgraphs, the line $\gamma_{G'}$ contains infinitely many edges in $H'_u$.
In the part of the proof of the nongeometric case of Proposition~6.0.8 of \BookOne\ that does not use birecurrence and so is valid in our context, it is shown that there exists $M' > 0$ so that for every finite subpath $\gamma_i'$ of $\gamma'$ there exists a line or circuit $\tau_i'$ in $G'$ that contains at most $M'$ edges of $H'_{u}$ such that $\gamma_i'$ is a subpath of $g_\#^{k_i}(\tau_i')$ for some $k_i\ge 0$. If $G_{r-1} = \emptyset$ then $\tau_i'$ is a circuit; otherwise $\tau_i'$ is a line. (This is proved in two parts. First, in what is called step 2 of that proof, an analogous result is proved in $G$. Then the bounded cancellation lemma is used to transfer this result to $G'$; the case that $G_{r-1} = \emptyset$ is considered after the case that $G_{r-1} \ne \emptyset$.)
Choose a sequence of finite subpaths $\gamma'_i$ of $\gamma'$ that exhaust $\gamma'$ and let $\tau'_i$ and $k_i$ be as above so that $\gamma'_i$ is a subpath of $g^{k_i}_\#(\tau'_i)$ and so that $\tau'_i$ contains at most $M'$ edges of~$H'_u$. Since $\gamma'$ contains infinitely many $H'_u$ edges, we have $k_i \to +\infinity$ as $i \to +\infinity$.
By Lemma~\refGM{LemmaEGUnifPathSplitting} there exists $d>0$ depending only on the bound $M'$ such that $g^d_\#(\tau'_i)$ has a splitting into terms each of which is either an edge or indivisible Nielsen path of height $u$ or a path in $G'_{u-1}$. By taking $i$ so large that $k_i \ge d$ we may replace each $\tau'_i$ by $g^d_\#(\tau'_i)$ and each $k_i$ by $k_i - d$, and hence we may assume that $\tau_i'$ has a splitting $$ \tau_i' = \tau'_{i,1} \cdot \ldots \cdot \tau'_{i,l_i} \qquad\qquad (*) $$ each of whose terms is an edge or Nielsen path of height $u$ or a path in $G'_{u-1}$. The number of edges that $\tau_i'$ has in $H'_u$ is still uniformly bounded, and so $l_i$ is uniformly bounded. Passing to a subsequence, we may assume that $l_i$ and the ordered sequence of height $u$ terms in $\tau'_i$ are independent of~$i$. We may also assume that $l = l_i$ is minimal among all such choices of $\gamma_i'$ and $\tau_i'$.
\subparagraph{Case A: $l = 1$.} In this case $\tau'_i = E$ is a single edge of $H'_u$ and so, by Fact~\refGM{FactLeafAsLimit}, $\gamma'$ is a leaf of $\Lambda^-_\phi$. If both ends of $\gamma'$ have height $u$ then $\gamma'$ is generic by Fact~\refGM{FactTwoEndsGeneric} and conclusion (1) is satisfied.
Suppose one end of $\gamma'$, say the positive end, has height $\le u-1$. We have a concatenation $\gamma' = \overline R_1 R_2$ where the ray $R_1$ starts with an edge of $H'_u$ and the ray $R_2$ is contained in $G'_{u-1}$. By Fact~\refGM{FactPrincipalVertices}, the concatenation point is a principal vertex. By Lemma~\trefGM{FactTiles}{ItemUndergraphPieces}, for each $m$ there is an $m$-tile in $\gamma'$ which is an initial segment of $R_1$. By Corollary~\refGM{CorTilesExhaustRay} it follows that $R_1$ is a principal ray. This shows that (3) is satisfied with trivial $\mu$.
\subparagraph{Case B: $l \ge 2$.} Choose a subpath $\nu_i' \subset \tau_i'$, with endpoints not necessarily at vertices, such that $g^{k_i}_\#(\nu_i') = \gamma_i'$. Let $\tau''_i$ be the subpath obtained from $\tau'_i$ by removing the initial segment $\tau'_{i,1}$ and the terminal segment $\tau'_{i,l}$ of the splitting $(*)$, so either $\tau''_i = \tau'_{i,2} \cdot \ldots \cdot \tau'_{i,l-1}$ or, when $l=2$, $\tau''_i$ is the trivial path at the common vertex along which $\tau'_{i,1}$ and $\tau'_{i,2}$ are concatenated. After passing to a subsequence, we may assume that $\tau''_i \subset \nu_i'$; if no such subsequence existed then we could reduce~$l$ by removing either $\tau'_{i,1}$ or $\tau'_{i,l}$ from $\tau'_i$. For the same reason, we may assume that $\gamma'$ has a finite subpath that contains $g^{k_i}_\#(\tau''_i)$ for all $i$. After passing to a subsequence, we may assume that $\mu = g^{k_i}_\#(\tau''_i)$ is independent of $i$. Since the sequence of height $u$ terms in $\tau''_i$ is independent of $i$, it follows that $\mu$ is either trivial or has a splitting into terms each of which is either $\alpha$, or $\bar\alpha$, or a path in $G'_{u-1}$. Since the endpoints of $\alpha$ are distinct, no two adjacent terms in this splitting can both be $\alpha$ or $\bar\alpha$, and so each subdivision point of the splitting is in $G'_{u-1}$. Since $v$ is an interior point of $H'_u$, for any occurrence of $\alpha$ or $\bar\alpha$ as a term of $\mu$ the endpoint $v$ must be an endpoint of $\mu$. It follows that $\mu$ can be written in one of the forms given in item~\pref{ItemIntermediatePath}, after possibly inverting~$\gamma'$.
Write $\gamma'$ as $\overline R_1 \mu R_2$. If $\tau'_{1,1}$ is an edge $E$ in $H'_u$ then $E = \tau'_{i,1}$ for all $i$, and the ray $R_1$ is the increasing union of $g^{k_1}_\#(\bar E) \subset g^{k_2}_\#(\bar E) \subset \cdots$, so $R_1$ is a principal ray for $\Lambda^-_\phi$. Otherwise $\tau'_{i,1}$ is a path in $G'_{u-1}$ for all $i$ and $R_1$ is a ray in $G'_{u-1}$. Using $\tau'_{i,l}$ similarly in place of $\tau'_{i,1}$, $R_2$ is either a principal ray for $\Lambda^-_\phi$ or a ray in $G'_{u-1}$. At least one of $R_1$ and $R_2$ is a principal ray. If they are both principal rays then item~(2) holds, otherwise (3)~holds. \end{proof}
\subsection{Nonattracted lines of EG\ height: the geometric case.} \label{SectionNFHGeometric}
We continue to analyze the special case of Theorem~\ref{ThmRevisedWAT} (Theorem~G) concerned with lines of maximal height and a top EG-stratum. We state and prove Lemma~\ref{geometricFullHeightCase}, which covers the case of a geometric top stratum. Our proof applies also to Proposition~\ref{PropWeakGeomRelFullIrr}, which in \PartFour\ will be incorporated into the conclusions of Theorem~J.
The reader may wish to review the notations established in the beginning of Sections~\ref{SectionTheoremGStatement} and~\ref{SectionNonattrFullHeight}. We assume the strata $H_r$, $H'_u$ are geometric---equivalently the lamination pairs $\Lambda^\pm_\phi = \Lambda^\pm_\psi$ are geometric (Proposition~\refGM{PropGeomEquiv} and Definition~\refGM{DefGeometricLamination})---we let $\rho_r$, $\rho'_u$ be the closed indivisible Nielsen paths in $G_r$, $G'_u$ of heights $r,u$, respectively. By applying Proposition~\refGM{PropGeomEquiv}, up to reorienting these Nielsen paths we have $[\rho_r]=[\rho'_u]$.
\begin{lemma}[Height $r$ lines in the geometric case] \label{geometricFullHeightCase} Assuming that $H_r$, $H'_u$ are geometric, and with notation as above, if $\gamma \in \mathcal B$ has height~$r$ in $G$ and is not weakly attracted to $\Lambda^+_\phi$ then its realization $\gamma_{G'}$ in $G'$ has at least one of the following forms: \begin{enumerate} \item \label{ItemGFHCIterate} $\gamma_{G'}$ or $\bar\gamma_{G'}$ is the bi-infinite iterate of $\rho'_u$. \item \label{ItemGFHCLeaf} $\gamma_{G'}$ is a generic leaf of $\Lambda^-_\phi$. \item \label{ItemGFHCTwoRays} $\gamma_{G'}$ decomposes as $\overline R_1 \mu R_2$ where $R_1$ and $R_2$ are principal rays for $\Lambda^-_\phi$ and $\mu$ is either a trivial path, a finite iterate of $\rho'_u$ or its inverse, or a nontrivial path of height $<u$. \item \label{ItemGFHCOneRay} $\gamma_{G'}$ or $\bar\gamma_{G'}$ decomposes as $\overline R_1 R_2$ where $R_1$ is a principal ray for $\Lambda^-_\phi$ and the ray $R_2$ either has height $<u$ or is the singly infinite iterate of $\rho'_u$ or its inverse. \end{enumerate} \end{lemma}
Until further notice in this section, we adopt the notation and hypotheses of Lemma~\ref{geometricFullHeightCase}. For purposes of the proof, by replacing $\phi$ and $\psi=\phi^\inv$ with their restrictions to $\mathsf{Out}(\pi_1 G) = \mathsf{Out}(\pi_1 G')$ (Fact~\refGM{FactMalnormalRestriction}), we may replace $f \from G \to G$ with its restriction to $G_r$, and replace $f' \from G' \to G'$ with its restriction to $G'_u$. Hence we have reduced to the assumption that $H^{\vphantom{\prime}}_r,H'_u$ are the top strata. Under that assumption, a geometric model for $H'_u$ (Definition~\refGM{DefGeomModel}) is the same thing as a weak geometric model (Definition~\refGM{DefWeakGeomModel}). We fix such a model, and we recall its static data; later we review its dynamic data. We adopt the shorthand notation $Q = G'_{u-1}$, which has no contractible components (Fact~\refGM{FactGeometricCharacterization}).
\noindent \textbf{Geometric model: Static data.} The static data consists of the following. First we have the finite subgraph $Q \subset G'$. We also have a compact surface $S$ with $m+1$ boundary components $\bdy S = \bdy_0 S \union \cdots \union \bdy_m S$, $m \ge 0$. The \emph{upper boundary} of $S$ is $\bdy_0 S$. The \emph{lower boundary} is $\bdy S - \bdy_0 S = \union_{i=1}^m \bdy_i S$ and is denoted $\bdy_\ell S$. We have a map $\alpha \from \bdy_\ell S \to Q$ such that for each $i=1,\ldots,m$ its restriction $\alpha_i \from \bdy_i S \to Q$ is a homotopically nontrivial closed edge path. We have a quotient 2-complex $Y$ obtained by gluing $S$ and $Q$ using the attaching map $\alpha \from \bdy_\ell S \to Q$. Let $j \from Q \disjunion S \to Y$ denote the quotient map. We also have an embedding $G' \inject Y$ extending the embedding $G'_{u-1} = Q \inject Y$. Finally, we have a deformation retraction $d \from Y \to G'$ which takes $\bdy_0 S$, regarded as a closed curve based at the unique point $p'_u = G' \intersect \bdy_0 S$, to the closed indivisible height~$u$ Nielsen path $\rho'_u$. Note that $Y$ may be regarded as a ``marked 2-complex'' for $F_n$, by composing the marking of $G'$ with the homotopy equivalence $G' \inject Y$, and so up to inner automorphism we have an identification $\pi_1(Y) \approx F_n$. Altogether the static data will be denoted $Q \disjunion S \xrightarrow{j} Y \xrightarrow{d} G'$.
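In symbols, and suppressing basepoints, the quotient construction just described may be written as
$$Y \ =\ (S \disjunion Q) \,\big/\, \bigl( x \sim \alpha(x) \ \text{for all } x \in \bdy_\ell S \bigr),$$
with $j \from S \disjunion Q \to Y$ the associated quotient map; this display is only a restatement of the gluing description above.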
\noindent \textbf{Strategy of the proof of Lemma~\ref{geometricFullHeightCase}.} For each abstract line $\gamma$ realized in $G'$ with full height~$u$, we shall define a canonical decomposition of~$\gamma$ as an alternating concatenation of ``overpaths'' and ``underpaths''. Very roughly speaking an overpath of $\gamma$ is a subpath that begins and ends with edges of $H'_u$, that pulls back to the surface~$S$, and that is maximal with respect to these properties. An overpath of $\gamma$ can be a finite subpath, a subray, or the whole line~$\gamma$. Distinct overpaths have disjoint interiors, and the underpaths of $\gamma$ are the maximal subpaths disjoint from the interiors of the overpaths. When an overpath of $\gamma$ is pulled back to $S$ and straightened with respect to a hyperbolic structure on $S$, the result is called a ``geodesic overpath'', and this can be a finite geodesic path or geodesic ray, in either case intersecting $\bdy S$ precisely in its finite endpoints, or a bi-infinite geodesic line. From the hypothesis that $\gamma$ is not weakly attracted to $\Lambda^+_\phi$ under iteration of $\phi$, one shows that none of the geodesic overpaths of $\gamma$ are weakly attracted to the unstable geodesic lamination of the pseudo-Anosov homeomorphism of the geometric model. By applying Nielsen--Thurston theory (Proposition~\refGM{PropNTWA}) one shows that these geodesic overpaths all have a certain form, the key feature being that they do not cross the stable geodesic lamination, from which the desired form of $\gamma$ given in Lemma~\ref{geometricFullHeightCase} is deduced.
Formalizing this strategy requires a lot of descriptive work, leading up to the statement of Lemma~\ref{LemmaOverpathsAttraction} which gives the form of the ``geodesic overpaths'' alluded to above. After Lemma~\ref{LemmaOverpathsAttraction} is stated and proved, we will apply it to complete the proof of Lemma~\ref{geometricFullHeightCase}.
In order to describe overpath--underpath decompositions we use the peripheral Bass-Serre tree $F_n \act T$ associated to a geometric model as a bookkeeping device. We turn next to a review of these topics from \PartOne.
\noindent \textbf{Vertex spaces and edge spaces.} The peripheral Bass-Serre tree will be described in terms of the vertex space---edge space decomposition of the universal cover of the geometric model $Y$. Our description roughly follows Definition~\refGM{DefPeripheralSplitting}, with similar justifications using the ``graph of spaces'' approach to Bass-Serre theory in \cite{ScottWall}.
In addition to the notation $Q = G'_{u-1}$, we adopt the shorthand notation $B = \bdy_\ell S$.
Consider the following pushout diagram (with the square on the right added for convenience): $$\xymatrix{ \breve S \disjunion \breve Q \ar@{=}[r] \ar[dr]_{\breve q} & \breve Y \ar[r]^{\breve j}
& \widetilde Y \ar[d]_{q} \ar[r]^{\tilde d}_{\supset} & \widetilde G' \ar[d] \\ & S \disjunion Q \ar[r]_{j} & Y \ar[r]^{d} & G' }$$ The map $q$ is the universal covering map, $\breve S \disjunion \breve Q = \breve Y$ is the subspace of the Cartesian product $(S \disjunion Q) \cross \widetilde Y$ consisting of all pairs $(x,\ti y)$ such that $j(x)=q(\ti y)$, and the maps $\breve q$ and $\breve j$ are restrictions of the two projection maps of the Cartesian product. The sets $\breve S$ and $\breve Q$ defining the disjoint union $\breve S \disjunion \breve Q$ are respectively characterized by requiring $\breve q(x) \in S$ and $\breve q(x) \in Q$. The group $F_n$ acts on $\widetilde Y$ by deck transformations and on $S \disjunion Q$ trivially, inducing a diagonal action on $\breve S \disjunion \breve Q=\breve Y$, such that $\breve j$ is $F_n$-equivariant and $\breve q$ is a covering map with deck transformation group $F_n$. We also define $\breve B \subset \breve S \subset \breve Y$ to be the total lift of $B=\bdy_\ell S$ via the map $\breve q$. The components of $\breve S$, $\breve Q$, and $\breve B$ are indexed as follows, together with their respective images in $\widetilde Y$ and stabilizer groups under the action $F_n = \pi_1(Y) \act \widetilde Y$: \begin{align*} \breve S = \union_s \breve S_s, & \qquad \wh S_s = \breve j(\breve S_s) \subset \widetilde Y, \qquad \Gamma_s = \mathsf{Stab}(\breve S_s) = \mathsf{Stab}(\wh S_s) \subgroup F_n \\ \breve Q = \union_q \breve Q_q, & \qquad \wh Q_q = \breve j(\breve Q_q) \subset \widetilde Y, \qquad \Gamma_q = \mathsf{Stab}(\breve Q_q) = \mathsf{Stab}(\wh Q_q) \subgroup F_n \\ \breve B = \union_b \breve B_b, & \qquad \wh B_b = \breve j(\breve B_b) \subset \widetilde Y, \qquad \Gamma_b = \mathsf{Stab}(\breve B_b) = \mathsf{Stab}(\wh B_b) \subgroup F_n
\end{align*} The sets $\wh S_s$ are called the \emph{$S$-vertex spaces} of $\widetilde Y$, the sets $\wh Q_q$ are the \emph{$Q$-vertex spaces}, and the sets $\wh B_b$ are the \emph{edge spaces}.
By restricting $\breve q$ we get universal covering maps $\breve S_s \to S$, and an isomorphism of the deck group $\Gamma_s \approx \pi_1(S)$ (well-defined up to inner automorphism). Similarly we have universal covering maps of each $\breve Q_q$ over some component $Q_q$ of $Q$ with deck group $\Gamma_q \approx \pi_1(Q_q)$; and of each $\breve B_b$ over some component $B_b$ of $B$ with infinite cyclic deck group $\Gamma_b \approx \pi_1(B_b)$. Since $\breve S_s$ is connected and is cocompact under the action of $\Gamma_s$, the same is true of $\wh S_s$; similar statements hold for the actions of $\Gamma_q$ and $\Gamma_b$.
The domain and range restrictions of $\breve j$ are also denoted with subscripts. The restriction $\breve j_q \from \breve Q_q \to \wh Q_q$ is a homeomorphism. The union $\wh Q = \union_q \wh Q_q$ is a disjoint union; it is the component decomposition of $\wh Q$, and $\wh Q$ is equal to the total lift $\widetilde G'_{u-1}$ of $Q=G'_{u-1}$ under the universal covering map $q \from \widetilde Y \to Y$. On the other hand the restrictions $\breve j_s \from \breve S_s \to \wh S_s$ and $\breve j_b \from \breve B_b \to \wh B_b$ may fail to be injective or even locally injective, and the unions $\union_s \wh S_s$ and $\union_b \wh B_b$ need not be disjoint unions. This failure stems from the failure of local injectivity of the attaching map $\alpha \from B \to Q$. Nonetheless, the map $\alpha$ factors on each component of $B$ as a finite sequence of Stallings folds followed by a local injection, which lifts to equivariant factorizations of each $\breve j_s$ and each $\breve j_b$, each term of which is a homotopy equivalence, and so each $\breve j_s$ and $\breve j_b$ is a homotopy equivalence. In particular each $\wh B_b$ and each $\wh S_s$ is contractible.
For each $s$ the full lift of the lower boundary $\bdy_\ell S = B$ to $\breve S_s$ is denoted $\bdy_\ell \breve S_s$, and we have a component decomposition $\bdy_\ell \breve S_s = \union^s \breve B_b$, where the symbol $\union^s$ means that the union is taken over all $b$ such that $\breve B_b \subset \bdy_\ell \breve S_s$. We denote $\bdy_\ell \wh S_s = \breve j_s(\bdy_\ell \breve S_s) \subset \wh S_s$. We also denote $\bdy_0 \breve S_s = \bdy \breve S_s - \bdy_\ell \breve S_s$, which is the total lift to $\breve S_s$ of the upper boundary $\bdy_0 S$, and we denote $\bdy_0 \wh S_s = \breve j_s(\bdy_0 \breve S_s)$. Since the map $j \from S \to Y$ embeds $S - \bdy_\ell S$ as an open connected subset of $Y$, the map $\breve j_s$ similarly embeds $\breve S_s - \bdy_\ell \breve S_s$ as an open connected subset of $\widetilde Y$; it follows that $\breve j_s(\breve S_s - \bdy_\ell \breve S_s) \subset \wh S_s$, and indeed $\breve j_s(\breve S_s - \bdy_\ell \breve S_s) = \wh S_s - \bdy_\ell \wh S_s$. Furthermore, for any point pair $x \ne y \in \breve S_s$ such that $\breve j_s(x)=\breve j_s(y)$ there exists a component $\breve B_b$ of $\bdy_\ell \breve S_s$ such that $x,y \in \breve B_b$. We therefore have a component decomposition $\bdy_\ell \wh S_s = \union^s \wh B_b = \union^s \breve j_s(\breve B_b)$.
The embedding $G' \subset Y$ and deformation retraction $d \from Y \to G'$ lift to an $F_n$-equivariant embedding $\widetilde G' \subset \widetilde Y$ and deformation retraction $\ti d \from \widetilde Y \to \widetilde G'$, commuting with universal covering maps $q \from \widetilde Y \to Y$ and $\widetilde G' \to G'$, as shown in the right square of the above diagram. Denoting $$\widetilde G'_s = \widetilde G' \intersect \wh S_s = \ti d(\wh S_s) $$ we may restrict $\ti d$ to obtain a $\Gamma_s$-equivariant deformation retraction $$\ti d_s \from \wh S_s \to \widetilde G'_s $$ Note that $\widetilde G'_s$ is connected since $\wh S_s$ is connected, and $\Gamma_s$ acts cocompactly on the tree $\widetilde G'_s$ since it acts cocompactly on $\wh S_s$. It follows that we may naturally identify $$\bdy\Gamma_s = \bdy\wh S_s = \bdy\widetilde G'_s \subset \bdy F_n $$ from which it follows in turn that each line in $\widetilde G'$ with ideal endpoints in $\bdy \Gamma_s$ is contained in the subgraph $\widetilde G'_s$. Denote $$\widetilde H'_u = \widetilde G' \setminus \wh Q = \bigl(\text{the full lift of $H'_u = G' \setminus Q$}\bigr) $$ Let~$\widetilde H'_{u,s} \subset \widetilde H'_u$ be the subgraph of all edges of $\widetilde H'_u \intersect \widetilde G'_s$ (the latter intersection may contain some isolated vertices which we avoid by defining $\widetilde H'_{u,s}$ in this manner). Note that the components of the subgraph $\widetilde G'_s \setminus \widetilde H'_{u,s}$ are precisely the components $\wh B_b$ of $\bdy_\ell \wh S_s$, one for each component $\breve B_b$ of~$\bdy_\ell \breve S_s$.
\noindent \textbf{The Bass-Serre tree $F_n \act T$.} The Bass-Serre tree $T$ is a bipartite tree with vertices and edges as follows. First, $T$ has one $S$-vertex denoted $V_s$ for each $S$-vertex space $\wh S_s$. Also, $T$ has one $Q$-vertex denoted $V_q$ for each $Q$-vertex space $\wh Q_q$. Finally, $T$ has one edge denoted $E_b$ for each edge space $\wh B_b$, and the endpoints of $E_b$ are the unique $S$-vertex $V_s$ and the unique $Q$-vertex $V_q$ having the properties $\wh B_b \subset \wh S_s$ and $\wh B_b \subset \wh Q_q$. The action $F_n \act \widetilde Y$ induces the action $F_n \act T$.
Note that $T$ can be characterized algebraically. The conjugacy class $[\pi_1 S]$ equals the set $\{\Gamma_s\}$ of $S$-vertex stabilizers, and the latter corresponds bijectively to $\{V_s\}$ since $\pi_1 S$ is its own normalizer in $F_n$ (Lemma~\trefGM{LemmaLImmersed}{ItemSeparationOfSAndL}). Also, the union of the conjugacy classes constituting the subgroup system $[\pi_1 Q]$ equals the set $\{\Gamma_q\}$ which corresponds bijectively to $\{V_q\}$ since $[\pi_1 Q]$ is a malnormal subgroup system (Lemma~\trefGM{LemmaLImmersed}{ItemComplementMalnormal}). The tree $T$ thus has one $S$-vertex $V_s$ for each $\Gamma_s$, one $Q$-vertex $V_q$ for each $\Gamma_q$, with an edge $E_b$ connecting $V_s$ to $V_q$ if and only if $\Gamma_s \intersect \Gamma_q$ is a nontrivial subgroup of~$F_n$, that subgroup being the infinite cyclic subgroup $\Gamma_b$.
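Summarizing the algebraic characterization just given in a single display (this restates the prose above; $1$ denotes the trivial subgroup): $$V_s \longleftrightarrow \Gamma_s \in [\pi_1 S], \qquad V_q \longleftrightarrow \Gamma_q \in [\pi_1 Q], \qquad E_b = \{V_s, V_q\} \iff \Gamma_s \intersect \Gamma_q = \Gamma_b \ne 1.$$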
\textbf{Remark.} Our notation here may be compared with the notation of Definition~\refGM{DefPeripheralSplitting} by setting $L = Q \union \bdy_0 S$. In effect, in forming $T$ we have stripped away the valence~1 vertices and incident edges of the Bass-Serre tree of Definition~\refGM{DefPeripheralSplitting} that are associated to the components of the top boundaries $\bdy_0 \breve S_s$ (see Remark~\refGM{RemarkNotMinimalReFree}). Other valence~$1$ vertices may remain in $T$, namely those associated to ``free lower boundary circles'' of $S$ (Section~\refGM{SectionFreeBoundaryInvariant}).
\noindent \textbf{Lines realized in $T$, and over--under decompositions.} Given a line $\ti\gamma \subset \widetilde G'$, we define its realization $\ti\gamma_T \subset T$, and in parallel we define the over--under decomposition of $\ti\gamma$ in~$\widetilde G'$.
In the degenerate case that $\ti\gamma \subset \wh Q$, we have $\ti\gamma \subset \wh Q_q$ for some $q$, in which case the line $\ti\gamma_T$ degenerates to the $Q$-vertex $V_q$, and the entire line $\ti\gamma$ consists of a single underpath $\ti\gamma_q = \ti\gamma$.
Henceforth we may assume $\ti\gamma \not\subset \wh Q$, equivalently $\ti\gamma$ contains an edge of $\widetilde H'_{u,s}$ for some $s$.
Next we define the $S$-vertices in $\ti\gamma_T$ and their associated overpaths in $\ti\gamma$. We put the $S$-vertex $V_s \in T$ in the line $\ti\gamma_T$ if and only if $\ti\gamma \intersect \widetilde H'_{u,s}$ contains an edge, equivalently $\ti\gamma \intersect \interior(\wh S_s) \ne \emptyset$. If $V_s \in \ti\gamma_T$ then the associated overpath denoted $\ti\gamma_s \subset \ti\gamma$ is defined to be the longest subpath of $\ti\gamma$ having the property that each ideal endpoint of $\ti\gamma_s$ is in $\bdy\Gamma_s$ and the edge of $\ti\gamma_s$ incident to each finite endpoint of $\ti\gamma_s$ is in $\widetilde H'_{u,s}$. Note that $\ti\gamma_s \subset \widetilde G'_s$. Note also that distinct overpaths have disjoint interiors: for any overpaths $\ti\gamma_s, \ti\gamma_{s'} \subset \ti\gamma$ with $s \ne s'$, their intersection $\ti\gamma_s \intersect \ti\gamma_{s'}$ is clearly a path in $\widetilde G'_s \intersect \widetilde G'_{s'} \subset \wh Q$, and this path is either empty or a single common endpoint. Indeed, suppose this path were neither empty nor a common endpoint. If $\ti\gamma_s = \ti\gamma_{s'}$ then this path has an edge in $\widetilde H'_u$, contradicting that $\wh Q$ contains no edges of $\widetilde H'_u$; whereas if $\ti\gamma_s \ne \ti\gamma_{s'}$ then the intersection $\ti\gamma_s \intersect \ti\gamma_{s'}$ has a finite endpoint $x$ with incident edge $E$ such that $x$ is also a finite endpoint of one of $\ti\gamma_s$ or $\ti\gamma_{s'}$ with incident edge $E$, and hence $E \subset \widetilde H'_u$, leading to the same contradiction.
Next we define the underpaths of $\ti\gamma$ and their associated $Q$-vertices in $\ti\gamma_T$. The underpaths are the components of $\ti\gamma - \union_s \interior(\ti\gamma_s)$, a disjoint union of possibly degenerate subintervals of $\ti\gamma$, each contained in $\wh Q$. We put the $Q$-vertex $V_q \in T$ in the line $\ti\gamma_T$ if and only if one of the underpaths, denoted $\ti\gamma_q$, is contained in the $Q$-vertex space $\wh Q_q$.
Finally we define the edges of $\ti\gamma_T$. We put the edge $E_b \subset T$ in $\ti\gamma_T$ if and only if its endpoints $V_s,V_q$ are in $\ti\gamma_T$ and the intersection $\ti\gamma_s \intersect \ti\gamma_q$ is nonempty, in which case that intersection is a point that we denote $p_b = \ti\gamma_s \intersect \ti \gamma_q \in \wh B_b$.
This completes the definition of $\ti\gamma_T$, although we must still check that it is indeed a path in the tree~$T$, i.e.\ a locally injective edge path. Choosing an orientation of $\ti\gamma$, by construction we have decomposed $\ti\gamma$ into an alternating concatenation of overpaths and underpaths, what we call the \emph{over--under decomposition} of $\ti\gamma$. Associated to this decomposition we have an expression of $\ti\gamma_T$ as a concatenation of edges of $T$. We must check that this concatenation has no backtracking. Supposing that in $\ti\gamma_T$ the $S$-vertex $V_s$ is preceded by an edge $E_b$ and followed by an edge $E_{b'}$, it follows that the overpath $\ti\gamma_s$ is a finite path with endpoints $p_b \in \wh B_b$ and $p_{b'} \in \wh B_{b'}$; the desired inequality $E_b \ne E_{b'}$ follows from the inequality $\wh B_b \ne \wh B_{b'}$, which is true because otherwise it would follow that $\ti\gamma_s \subset \wh B_b = \wh B_{b'}$, contradicting that $\ti\gamma_s$ contains an edge of~$\widetilde H'_u$. And supposing that in $\ti\gamma_T$ the $Q$-vertex $V_q$ is preceded by an edge $E_b$ with opposite $S$-vertex $V_s$ and followed by an edge $E_{b'}$ with opposite $S$-vertex $V_{s'}$, by construction the over--under decomposition has three successive terms $\ti\gamma_s \ti\gamma_q \ti\gamma_{s'}$, and so by construction $\ti\gamma_s$ and $\ti\gamma_{s'}$ have disjoint interiors; but $\ti\gamma_s$ contains every $\widetilde H'_{u,s}$ edge in $\ti\gamma$ including at least one such edge, and $\ti\gamma_{s'}$ contains every $\widetilde H'_{u,s'}$ edge in $\ti\gamma$ including at least one such edge, and it follows that $V_s \ne V_{s'}$ and so $E_b \ne E_{b'}$.
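For illustration, with hypothetical indices chosen along an orientation of $\ti\gamma$, the over--under decomposition and the corresponding edge path in $T$ take the schematic alternating forms $$\ti\gamma \;=\; \cdots \,\ti\gamma_{q_i}\, \ti\gamma_{s_i}\, \ti\gamma_{q_{i+1}}\, \ti\gamma_{s_{i+1}} \cdots, \qquad \ti\gamma_T \;=\; \cdots\, V_{q_i}\, V_{s_i}\, V_{q_{i+1}}\, V_{s_{i+1}} \cdots,$$ where each consecutive overpath--underpath pair $\ti\gamma_{s_i}, \ti\gamma_{q_i}$ meets in a single point $p_b \in \wh B_b$ of the edge space of the edge $E_b$ joining $V_{s_i}$ to $V_{q_i}$, and the no-backtracking check just carried out shows that the displayed vertex sequence is a locally injective path in the bipartite tree $T$.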
\noindent \textbf{Endpoint behavior of overpaths.} For each full height line $\ti\gamma$ in $\widetilde G'$, we analyze the endpoint structure of each overpath $\ti\gamma_s \subset \ti\gamma$. First, $\ti\gamma_s$ is either a finite nondegenerate path with two finite endpoints, a ray with one finite endpoint and one ideal endpoint, or a line with two ideal endpoints. All finite endpoints of $\ti\gamma_s$ are in $\bdy_\ell \wh S_s$, and all ideal endpoints are in $\bdy\Gamma_s$. These endpoints satisfy the following:
\noindent \textbf{Properness of endpoints:} \quad \begin{itemize} \item[]\textbf{Finite--finite:} If $\ti\gamma_s$ is finite then its two endpoints are in distinct components of~$\bdy_\ell \wh S_s$. \item[]\textbf{Finite--infinite:} If $\ti\gamma_s$ is a ray and if $\wh B_b$ is the component of $\bdy_\ell \wh S_s$ containing its finite endpoint then its ideal endpoint is not in $\bdy \Gamma_b$. \item[]\textbf{Infinite--infinite:} If $\ti\gamma_s$ is a line then for any component $\wh B_b$ of $\bdy_\ell \wh S_s$, the two ideal endpoints of $\ti\gamma_s$ are not both in $\bdy\Gamma_b$. \end{itemize} To see why these hold, note that a finite path in $\widetilde G'$ having both endpoints in some $\wh B_b$ is entirely contained in $\wh B_b \subset \wh Q$. Similarly, any ray having finite endpoint in $\wh B_b$ and ideal endpoint in $\bdy \Gamma_b = \bdy \wh B_b$ is contained in $\wh B_b$, as is any line having both ideal endpoints in $\bdy \Gamma_b$. But no overpath is entirely contained in $\wh Q$.
\noindent \textbf{Geodesic overpaths.} Henceforth in the proof of Lemma~\ref{geometricFullHeightCase} we fix a hyperbolic structure on $S$ with totally geodesic boundary. This lifts to a complete hyperbolic metric on each $\breve S_s$ with totally geodesic boundary. Denote $$dj \from \breve Q \union \breve S \xrightarrow{\breve j} \widetilde Y \xrightarrow{\ti d} \widetilde G' $$ By restricting $dj$ we obtain the following composition of quasi-isometries with uniform constants independent of $s$, and the associated composition of continuous extensions to Gromov compactifications: $$dj_s \from \begin{cases} \hphantom{\union \bdy \Gamma_s} \breve S_s \,\,\,\, \xrightarrow{\breve j_s} &\wh S_s \,\, \xrightarrow{\ti d_s} \,\, \widetilde G'_s \\ \breve S_s \union \bdy \Gamma_s \, \xrightarrow{\breve j_s} &\wh S_s \union \bdy \Gamma_s \,\, \xrightarrow{\ti d_s} \,\, \widetilde G'_s \union \bdy\Gamma_s \end{cases} $$ This allows us to identify $\bdy\Gamma_s$ with the space of asymptotic equivalence classes of geodesic rays in $\breve S_s$ and with the space of asymptotic equivalence classes of geodesic rays in $\widetilde G'_s$.
Recall from Definition~\refGM{DefProperGeodesic} the concept of a proper geodesic in $\breve S_s$, namely a geodesic which is not contained in $\bdy\breve S_s$ and whose two endpoints (finite and/or ideal) are in $\bdy\breve S_s \union \bdy\Gamma_s$. Recall also that two proper geodesics $\ti\gamma_1,\ti\gamma_2 \subset \breve S_s$ are \emph{properly equivalent} if they have the same ideal endpoints and if the set of components of $\bdy\breve S_s$ containing a finite endpoint of $\ti\gamma_i$ is independent of $i=1,2$.
Consider a line $\ti\gamma$ in $\widetilde G'$ with realization $\ti\gamma_T$ in~$T$. For each $S$-vertex $V_s \in \ti\gamma_T$ with corresponding overpath $\ti\gamma_s \subset \ti\gamma$ we associate a \emph{geodesic overpath} $\breve\gamma_s \subset \breve S_s$, by choosing $\breve \gamma_s$ to be a geodesic whose endpoints in $\bdy \breve S_s \union \bdy\Gamma_s$ map to the endpoints of $\ti\gamma_s$ under the map $dj_s$. It follows that $\ti\gamma_s$ is obtained from $dj_s(\breve \gamma_s)$ by straightening. \begin{description} \item[Properness of geodesic overpaths:] For each line $\ti\gamma \subset \widetilde G'$ with projection $\gamma$ in $G'$, exactly one of the following holds: \begin{description} \item[The line $\ti\gamma$ is a top boundary line:] There exists $s$ such that $\ti\gamma=\ti\gamma_s$ and $\breve \gamma_s$ is a component of $\bdy_0 \breve S_s$; equivalently, $\gamma$ is the line that winds bi-infinitely around $\rho$. \item[Each geodesic overpath is proper:] \quad For each overpath $\ti\gamma_s$ of $\ti\gamma$, its associated geodesic overpath $\breve \gamma_s \subset \breve S_s$ is a proper geodesic. Furthermore, the finite endpoints of $\breve \gamma_s$ are in $\bdy_\ell \breve S_s$, and the choice of $\breve\gamma_s$ is unique up to proper equivalence. \end{description} \end{description} To see why this holds, the statement on finite endpoints holds by construction of geodesic overpaths, and the statement on uniqueness holds because for each $s$ the map $\breve j_s$ induces a bijection between the components of $\bdy_\ell\breve S_s$ and the components of $\breve j_s(\bdy_\ell\breve S_s)$. For the rest, we need only rule out the possibility that $\breve \gamma_s$ is a component of $\bdy_\ell \breve S_s$, but then $\ti\gamma_s \subset \wh Q$, contradicting that $\ti\gamma_s$ contains an edge of $\widetilde H'_{u,s}$.
\noindent \textbf{Geometric model: Dynamic data.} Given the static data $Q \union S \xrightarrow{j} Y \xrightarrow{d} G'$ of a geometric model for $f'$ as specified earlier, the dynamic data of the geometric model consists of a pseudo-Anosov homeomorphism $\Theta \from S \to S$ such that the following dynamic relation holds: \begin{description} \item[$\Theta$ semiconjugates to $f'$:] The maps $dj \circ \Theta$, $f' \circ dj \from S \to G'$ are homotopic. \end{description}
Lemma~\ref{LemmaOverpathsAttraction} below will serve a second purpose, applying to the proof of Theorem~J via Proposition~\ref{PropWeakGeomRelFullIrr}. For this purpose we consider also dynamic data that is ``parasitically'' built upon the given static data, and is dynamically related not to the topological representative $f'$ of $\psi$ but instead to a topological representative of some other outer automorphism. Consider $\omega \in \mathsf{Out}(F_n)$ which preserves the free factor system~$\mathcal F$. Choose any topological representative $f_\omega \from (G',Q) \to (G',Q)$. Consider $\Theta \from S \to S$ a pseudo-Anosov homeomorphism satisfying the following: \begin{description} \item[$\Theta$ semiconjugates to $f_\omega$:] The maps $dj \circ \Theta$, $f_\omega \circ dj \from S \to G'$ are homotopic. \end{description} Note that this property depends only on $\omega$ and $\Theta$, and is independent of the choice of $f_\omega$. Also, this property implies that $\omega$ preserves the conjugacy class of the subgroup $j_*(\pi_1 S)$.
Having chosen $\Theta$ which semiconjugates to $f_\omega$, let $\Lambda^\un, \Lambda^\st \subset S$ denote the unstable/stable geodesic lamination pair for $\Theta$ with respect to some fixed hyperbolic structure on $S$ with totally geodesic boundary. Proposition~\refGM{PropGeomLams} applies, with the conclusion that there exists a dual lamination pair $\Lambda^\pm_\omega \in \L^\pm(\omega)$ such that $\Lambda^\un,\Lambda^\st$ are taken to $\Lambda^+_\omega,\Lambda^-_\omega$, respectively, by the map $dj_\# \from \mathcal B(\pi_1 S) \to \mathcal B(G') \approx \mathcal B(F_n)$ (Proposition~\refGM{PropGeomLams} is currently written only for the case $f_\omega=f'$, but the proof clearly extends to the current situation).
For each~$s$, the total lifts to $\breve S_s$ of $\Lambda^\un,\Lambda^\st$ will be denoted $\breve\Lambda^\un_s, \breve\Lambda^\st_s \subset \breve S_s$.
\begin{lemma}\label{LemmaOverpathsAttraction} Consider the various objects specified above: static data $Q \union S \xrightarrow{j} Y \xrightarrow{d} G'$ of a geometric model for $f'$; an outer automorphism $\omega \in \mathsf{Out}(F_n)$ such that $\omega(\mathcal F)=\mathcal F$; a topological representative $f_\omega \from (G',Q) \to (G',Q)$; and a pseudo-Anosov homeomorphism $\Theta \from S \to S$ that semiconjugates to $f_\omega$, with unstable/stable lamination pair $\Lambda^\un,\Lambda^\st$ and corresponding dual lamination pair $\Lambda^+_\omega,\Lambda^-_\omega$. For each line $\ti\gamma \in \widetilde{\mathcal B}$ with image $\gamma \in \mathcal B$, if $\gamma$ is not weakly attracted to $\Lambda^+_\omega$ under iteration of $\omega$ then one of the following holds (up to reversal of orientation of~$\ti\gamma$):
\begin{description} \item[(i) Degenerate $Q$-point:] There exists $q$ such that $\ti\gamma_T = V_q$, the underpath $\ti\gamma_q=\ti\gamma$ is a line in $\wh Q_q$, and $\gamma$ is in $Q$ and so is carried by~$\mathcal F$. \item[(ii) Degenerate $S$-point:] There exists $s$ such that $\ti\gamma_T = V_s$, the overpath $\ti\gamma_s = \ti\gamma$ is a bi-infinite line in $\widetilde G'_s$, and the corresponding geodesic overpath $\breve\gamma_s \subset \breve S_s$ is either a leaf of $\breve\Lambda^\st_s$, or is contained in the interior of some principal region of $\breve \Lambda^\st_s$, or is equal to a component of $\bdy_0 \breve S_s$. \item[(iii) One edge:] \quad $\ti\gamma_T$ has the form $\xymatrix{V_s \ar@{-}[r]^{E_b} & V_q}$, and $\ti\gamma = \ti\gamma_s \ti\gamma_q$, and the corresponding proper geodesic ray $\breve\gamma_s \subset \breve S_s$ is contained in some crown principal region of~$\breve\Lambda^\st_s$. \item[(iv) Two edges, two $S$-endpoints:] \quad $\ti\gamma_T$ has the form $\xymatrix{V_{s} \ar@{-}[r]^{E_{b}} & V_q \ar@{-}[r]^{E_{b'}} & V_{s'}}$, and $\ti\gamma = \ti\gamma_s \ti\gamma_q \ti\gamma_{s'}$, and the corresponding proper geodesic rays $\breve\gamma_s \subset \breve S_s$, $\breve\gamma_{s'} \subset \breve S_{s'}$ are contained in crown principal regions of $\breve\Lambda^\st_s$, $\breve\Lambda^\st_{s'}$ respectively. \end{description} \end{lemma}
\begin{proof} Observe that for any overpath $\ti\gamma_s \subset \ti\gamma$ with corresponding geodesic overpath $\breve\gamma_s$, the geodesic $\breve\gamma_s$ does not cross $\breve\Lambda^\st_s$ transversely if and only if it has one of the forms that occur in one of conclusions (ii, iii, iv) of the lemma, namely: a leaf of $\breve\Lambda^\st_s$ as in (ii); a proper geodesic line as in (ii) or a proper geodesic ray as in (iii) or (iv), contained in the interior of a principal region of $\breve \Lambda^\st_s$; or a component of $\bdy_0 \breve S_s$ as in (ii). Although there is one other possibility that may occur for a general geodesic line in $\breve S_s$ that does not cross~$\breve\Lambda^\st_s$ transversely, namely a component of $\bdy_\ell \breve S_s$, such a line cannot be a geodesic overpath of anything because its straightened image under $dj_s$ is a line in $\wh Q \subset \widetilde G'$, and such a line is one big underpath with no overpaths. Note also that \emph{no} finite proper geodesic path in $\breve S_s$ is contained in a principal region of $\breve\Lambda^\st_s$ and \emph{every} finite proper geodesic path crosses $\breve\Lambda^\st_s$ transversely.
To prove the lemma we argue by contradiction: assuming that none of conclusions (i,~ii,~iii,~iv) holds, we prove that $\gamma$ is weakly attracted to $\Lambda^+_\omega$ under iteration of~$\omega$. It follows from the assumption that $\gamma$ is not carried by~$\mathcal F$, for if it were carried then the first conclusion (i)~``Degenerate $Q$-point'' would hold. The over--under decomposition of $\ti\gamma$ therefore has at least one overpath $\ti\gamma_s$. Since none of conclusions (ii,~iii,~iv) hold, by the observation in the previous paragraph it follows that there exists some overpath $\ti\gamma_s$ of $\ti\gamma$ whose corresponding geodesic overpath $\breve\gamma_s$ crosses $\breve\Lambda^\st_s$ transversely. In particular $\breve\gamma_s$ is a proper geodesic in $\breve S_s$ (the only possibility for a nonproper geodesic overpath, that $\breve\gamma_s$ is a component of $\bdy_0 \breve S_s$, does not cross $\breve\Lambda^\st_s$ transversely).
From the assumption that $\Theta \from S \to S$ semiconjugates (via the map $dj$) to $f_\omega \from G' \to G'$ which topologically represents $\omega$, it follows that $\omega$ preserves the conjugacy class of the subgroup $(d \composed j)_*(\pi_1 S)$ in~$F_n$, which equals the conjugacy class of $\Gamma_s$. We may therefore choose $\Omega \in \mathsf{Aut}(F_n)$ representing $\omega$ such that $\Omega(\Gamma_s)=\Gamma_s$. Let $\ti f_\omega \from \widetilde G' \to \widetilde G'$ be the unique lift of $f_\omega$ that satisfies twisted equivariance with respect to the automorphism~$\Omega$, meaning that $\ti f_\omega(\gamma \cdot x) = \Omega(\gamma) \ti f_\omega(x)$ for all $\gamma \in F_n$, $x \in \widetilde G'$; equivalently, $\ti f_\omega$ and $\Omega$ induce the same homeomorphism of $\bdy \widetilde G' = \bdy F_n$, and in particular $\ti f_\omega$ preserves~$\bdy\Gamma_s$. It also follows that there is a unique lift $\breve\Theta \from \breve S_s \to \breve S_s$ of $\Theta$ whose action on $\bdy\Gamma_s$ equals the action of~$\ti f_\omega$, and therefore $\breve\Theta$ satisfies twisted equivariance with respect to the restricted automorphism $\Omega \restrict \Gamma_s$, that is, $\breve\Theta(\gamma \cdot x) = \Omega(\gamma) \breve\Theta(x)$ for all $x \in \breve S_s$, $\gamma \in \Gamma_s$. There is a homotopy between $dj \composed \Theta$ and $f_\omega \composed dj$, since $\Theta$ semiconjugates to $f_\omega$, and it lifts to a $\Gamma_s$-equivariant homotopy between the maps $dj_s \composed \breve\Theta,\ti f_\omega \composed dj_s \from \breve S_s \to \widetilde G'_s$.
Consider the proper geodesic path $\gamma_s$ in $S$ that is obtained by projecting $\breve\gamma_s$, and so this path crosses $\Lambda^\st$ transversely. Applying Proposition~\refGM{PropNTWA} --- a version of Nielsen-Thurston Theory --- it follows that $\gamma_s$ is geodesically weakly attracted to $\Lambda^\un$ by iteration of $\Theta$. Unwinding the meaning of this statement in our current setting yields the following. A sequence $\delta^i_s$ ($i \ge 0$) of proper geodesics in $\breve S_s$ is said to be a \emph{proper geodesic iteration} of $\breve \gamma_s$ if this sequence is properly equivalent, term by term, to the proper geodesics obtained by straightening the sequence $\breve\Theta^i(\breve\gamma_s)$. In other words, $\delta^i_s$ has the same ideal endpoints in $\bdy\Gamma_s$ as $\breve\Theta^i(\breve\gamma_s)$, and the finite endpoints of $\delta^i_s$ are in the same components of $\bdy_\ell \breve S_s$ as the finite endpoints of $\breve\Theta^i(\breve\gamma_s)$.
\begin{description} \item[Nielsen-Thurston Theory conclusion:] For any $\epsilon>0$ and $M>0$ there exists $K$ such that for any proper geodesic iteration $\delta^i_s$ of $\breve\gamma_s$, and for any $i \ge K$, there exists a subpath of $\delta^i_s$ and a subpath of a leaf of $\breve\Lambda^\un_s$, each of length $\ge M$, and having Hausdorff distance~$\le \epsilon$ from each other. \end{description} We also need: \begin{description} \item[Iteration Claim:] There exists a proper geodesic iteration $\delta^i_s$ of $\breve\gamma_s$ such that $\delta^i_s$ is a proper geodesic overpath of $(\ti f^i_\omega)_\#(\ti\gamma)$ for each $i \ge 0$. \end{description} Before proving this claim, we use it to finish the proof of Lemma~\ref{LemmaOverpathsAttraction}. The map $dj_s \from \breve S_s \to \widetilde G'_s$ is a quasi-isometry. It follows that if $\alpha,\beta$ are geodesic paths in $\breve S_s$, and if $\alpha$, $\beta$ have long subpaths that are Hausdorff close to each other, then their straightened images $(dj_s)_\#(\alpha)$, $(dj_s)_\#(\beta)$ in $\widetilde G'$ have long subpaths that are Hausdorff close to each other; furthermore, since $\widetilde G'$ is a tree, those straightened images have long subpaths that coincide. To be precise, for each $L>0$ there exists $\epsilon > 0$ and $M > 0$ such that if $\alpha,\beta$ have subpaths of length $\ge M$ having Hausdorff distance $\le \epsilon$ from each other, then $(dj_s)_\#(\alpha)$, $(dj_s)_\#(\beta)$ have coinciding subpaths of length $\ge L$. 
By combining the Iteration Claim and the Nielsen--Thurston Theory conclusion, for any $L$ we may choose $i$ sufficiently large so that $\alpha = \delta^i_s$ is a proper geodesic overpath corresponding to an overpath $(dj_s)_\#(\alpha)$ of the geodesic $(\ti f^i_\omega)_\#(\ti\gamma)$, and $\beta$ is a leaf of $\breve\Lambda^\un_s$ whose image $(dj_s)_\#(\beta)$ is a leaf of $\widetilde\Lambda^+_\omega$, and the overpath $(dj_s)_\#(\alpha)$ and leaf $(dj_s)_\#(\beta)$ have coinciding subpaths of length $\ge L$. This proves that the sequence $(f^i_\omega)_\#(\gamma)$ converges weakly to a generic leaf of $\Lambda^+_\omega$; that is, $\gamma$ is weakly attracted to $\Lambda^+_\omega$ under iteration of $\omega$.
We turn to the proof of the Iteration Claim. There is a natural subgroup $\mathsf{Aut}(F_n;T) \subgroup \mathsf{Aut}(F_n)$ that acts on the tree $T$, namely those automorphisms of $F_n$ that permute the subgroups $\Gamma_s$ and the subgroups $\Gamma_q$; this follows from the algebraic description of $T$ given earlier. \begin{description} \item[Naturality Claim:] For each line $\ti\gamma \in \widetilde{\mathcal B}$ and each $\mathcal A \in \mathsf{Aut}(F_n;T)$, we have $\mathcal A(\ti\gamma)_T = \mathcal A(\ti\gamma_T)$. \end{description} Before proving this claim, we apply it to finish the proof of the Iteration Claim. By hypothesis we have $\omega(\mathcal F)=\mathcal F$ and therefore $\Omega$ permutes the subgroups $\Gamma_q$. We also have that $\Theta$ semiconjugates to $f_\omega$, which implies that $\omega$ fixes the conjugacy class of $j_*(\pi_1 S)$, which implies that $\Omega$ permutes the subgroups $\Gamma_s$. This shows that $\Omega^i \in \mathsf{Aut}(F_n;T)$ for all integers~$i$, and so the Naturality Claim applies to each $\Omega^i$. Let $\delta^0_s = \breve \gamma_s = \breve\Theta^0(\breve\gamma_s)$, and so $V_s \in \ti\gamma_T$. Since $\Omega(\Gamma_s)=\Gamma_s$ (under the action of $\mathsf{Aut}(F_n)$ on subgroups), and since $T$ is determined by its vertex and edge stabilizers, it follows that $\Omega(V_s)=V_s$ (under the action of $\mathsf{Aut}(F_n;T)$ on~$T$). Applying the Naturality Claim by induction it follows (for all $i \ge 0$) that $V_s \in \Omega^i(\ti\gamma)_T$, which implies that $(\ti f^i_\omega)_\#(\ti\gamma)$ has an overpath associated to $V_s$; let $\delta^i_s$ denote the corresponding geodesic overpath in $\breve S_s$. We must still show that $\delta^i_s$ is a proper geodesic iteration of $\breve\gamma_s = \delta^0_s$. Suppose that $\breve\gamma_s$ has a finite endpoint on the component $\breve B_b$ of $\bdy_\ell\breve S_s$, and so $E_b \subset \ti\gamma_T$. 
It follows that $\Omega^i(E_b) \subset \Omega^i(\ti\gamma_T) = \Omega^i(\ti\gamma)_T$, from which it follows in turn that $\delta^i_s$ has a finite endpoint on $\breve\Theta^i(\breve B_b)$, which also contains a finite endpoint of $\breve\Theta^i(\breve\gamma_s)$. Thus $\delta^i_s$ and $\breve\Theta^i(\breve \gamma_s)$ have finite endpoints in the same components of $\bdy_\ell \breve S_s$. Suppose that $\breve\gamma_s$ has an infinite endpoint $x \in \bdy\Gamma_s$, and let $y$ be its opposite endpoint, and so one of two cases holds: \begin{description} \item[Case (a):] For some component $\breve B_b$ of $\bdy_\ell\breve S_s$ we have $y \in \breve B_b$. \item[Case (b):] $y \in \bdy\Gamma_s$. \end{description} In Case~(a), we already know that $\delta^i_s$ has a finite endpoint $y_i \in \breve\Theta^i(\breve B_b)$, and so the overpath of $(\ti f^i_\omega)_\#(\ti\gamma)$ associated to $V_s$ has finite endpoint $dj_s(y_i) \in dj_s(\breve\Theta^i(\breve B_b))$. We also know that $\Omega^i(x)$ is an ideal endpoint of $(\ti f^i_\omega)_\#(\ti\gamma)$, and that $\Omega^i(x) \in \bdy\Gamma_s$. From the definition of overpaths it follows that the subray of $(\ti f^i_\omega)_\#(\ti\gamma)$ with finite endpoint $dj_s(y_i)$ and infinite endpoint $\Omega^i(x)$ is the overpath of $(\ti f^i_\omega)_\#(\ti\gamma)$ associated to $V_s$, and therefore the corresponding geodesic overpath $\delta^i_s$ is a ray with infinite endpoint $\Omega^i(x)$, completing Case (a). In Case~(b) the line $\ti\gamma$ has ideal endpoints $x,y \in \bdy \Gamma_s$, and $\ti\gamma_T = \{V_s\}$, and the over--under decomposition of $\ti\gamma$ is just a single overpath line whose corresponding geodesic overpath is the line $\delta^0_s$ with ideal endpoints $x,y$. 
It follows that $(\ti f^i_\omega)_\#(\ti\gamma)$ has ideal endpoints $\Omega^i(x),\Omega^i(y) \in \bdy\Gamma_s$ and $\Omega^i(\ti\gamma)_T = \Omega^i(\ti\gamma_T) = \{V_s\}$, and that the over--under decomposition of $\Omega^i(\ti\gamma)$ is a single overpath line whose corresponding geodesic overpath is the line $\delta^i_s$ with ideal endpoints $\Omega^i(x),\Omega^i(y)$, which equal the ideal endpoints of $\breve\Theta^i(\delta^0_s)$. This proves the Iteration Claim, subject to the Naturality Claim.
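The repeated use of the Naturality Claim in the argument above can be recorded in a single displayed induction step; this is only a summary of the reasoning already given, in the same notation:

```latex
% Base case: V_s \in \ti\gamma_T.  Induction step: assuming
% V_s \in \Omega^i(\ti\gamma)_T, apply the Naturality Claim with
% \mathcal A = \Omega to the line \Omega^i(\ti\gamma), then use \Omega(V_s)=V_s:
\[
\Omega^{i+1}(\ti\gamma)_T
  \;=\; \bigl( \Omega(\Omega^i(\ti\gamma)) \bigr)_T
  \;=\; \Omega\bigl( \Omega^i(\ti\gamma)_T \bigr)
  \;\ni\; \Omega(V_s) \;=\; V_s .
\]
```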
To complete the proof of Lemma~\ref{LemmaOverpathsAttraction} it remains to prove the Naturality Claim. Denote the action of $\mathcal A$ on $T$ as $\mathcal A(V_s)=V_{s'}$, $\mathcal A(V_q)=V_{q'}$, $\mathcal A(E_b)=E_{b'}$. It follows that $\mathcal A(\bdy\Gamma_s)=\bdy\Gamma_{s'}$, $\mathcal A(\bdy\Gamma_q)=\bdy\Gamma_{q'}$, and $\mathcal A(\bdy\Gamma_b)=\bdy\Gamma_{b'}$. We consider several cases separately. First, $\ti\gamma_T$ degenerates to the $Q$-vertex $V_q$ if and only if $\bdy\ti\gamma \subset \bdy\Gamma_q$ if and only if $\mathcal A(\bdy\ti\gamma) \subset \bdy\Gamma_{q'}$ if and only if $\mathcal A(\ti\gamma_T)$ degenerates to $V_{q'}=\mathcal A(V_q)$. Next, $\ti\gamma_T$ degenerates to the $S$-vertex $V_s$ if and only if $\bdy\ti\gamma \subset \bdy\Gamma_s$ and for all components $\breve B_b$ of $\bdy_\ell \breve S_s$ we have $\bdy\ti\gamma \not\subset \bdy\Gamma_b$ if and only if $\mathcal A(\bdy\ti\gamma) \subset \bdy\Gamma_{s'}$ and for all components $\breve B_{b'}$ of $\bdy_\ell \breve S_{s'}$ we have $\mathcal A(\bdy\ti\gamma) \not\subset \bdy\Gamma_{b'}$ if and only if $\mathcal A(\ti\gamma_T)$ degenerates to $V_{s'}=\mathcal A(V_s)$.
For the nondegenerate cases it suffices to prove for each edge $E_b \subset T$ that $E_b \subset \ti\gamma_T$ if and only if $E_{b'}=\mathcal A(E_b) \subset \mathcal A(\ti\gamma_T)$, and we do this by a separation argument taking place in the Gromov compactification $\widetilde Y \union \bdy F_n$. To set up the argument we need further notation. Let $N \subset S$ be a regular neighborhood of $\bdy_\ell S$ and let $S^+ = \closure(S-N)$. We have $\bdy S^+ = \bdy_0 S^+ \union \bdy_\ell S^+$ where $\bdy_0 S^+ = \bdy_0 S$ and $\bdy_\ell S^+ = \bdy N - \bdy_\ell S$. The inclusions $\bdy_\ell S \rightarrow N \leftarrow \bdy_\ell S^+$ induce bijections of components. Let $\breve N \subset \breve S$ be the total lift of $N$ to the covering space $\breve S$, so $\breve N$ is an $F_n$-equivariant regular neighborhood of $\breve B = \bdy_\ell\breve S$. The inclusion $\breve B \subset \breve N$ induces a component bijection denoted $\breve B_b \leftrightarrow \breve N_b$. Let $\breve B^+_b = \bdy \breve N_b - \breve B_b$, and let $\breve B^+ = \union_b \breve B^+_b$, so the inclusion $\breve B^+ \subset \breve N$ induces the component bijection $\breve B^+_b \leftrightarrow \breve N_b$. The map $\breve j$ embeds $\breve B^+$ and $\breve S^+$ in $\widetilde Y$ with images $\wh B^+$ and $\wh S^+$, respectively, and with components denoted $\wh B^+_b = \breve j(\breve B^+_b)$, and $\wh S^+_s = \breve j(\breve S^+_s)$. Also let $\wh N_b = \breve j(\breve N_b)$ and let $\wh N = \union_b \wh N_b$.
Since $\mathcal A$ preserves the subgroup systems $[\pi_1 Q]$ and $[\pi_1 S]$, and since the outer automorphism of $\pi_1 S$ obtained by restricting $\mathcal A$ preserves the lower boundary subgroup system $[\bdy_\ell(\pi_1 S)]$, it follows that $\mathcal A$ is represented by a homotopy equivalence $\alpha \from Y \to Y$ restricting to maps $S^+ \mapsto S^+$, \, $Q \mapsto Q$, \, $B^+ \mapsto B^+$ and $j(N) \mapsto j(N) \union Q$. From this it follows that there is an $\mathcal A$-twisted equivariant lift $\ti \alpha \from \widetilde Y \to \widetilde Y$ restricting to $\wh S^+_s \mapsto \wh S^+_{s'}$, \, $\wh Q_q \mapsto \wh Q_{q'}$, \, $\wh B^+_b \mapsto \wh B^+_{b'}$ and $\wh N_b \mapsto \wh N_{b'} \union \wh Q_{q'}$, where $V_{q'} \in T$ is the $Q$-vertex incident to $E_{b'} \subset T$.
Consider an edge $E_b \subset T$, with incident vertices $V_q$ and $V_s$. In $\widetilde Y$, the subset $\widetilde Y - \wh B^+_b$ has two components $\wh Y_{bq}$ and $\wh Y_{bs}$, where $\wh Y_{bq} \supset \wh Q_q$ and $\wh Y_{bs} \supset \wh S^+_s - \wh B^+_b$. In the Gromov compactification $\widetilde Y \union \bdy F_n$, the closure of the line $\wh B^+_b$ is the compact arc $\wh B^+_b \union \bdy \Gamma_b$, and its complement $(\widetilde Y \union \bdy F_n) - (\wh B^+_b \union \bdy \Gamma_b)$ has two components $\wh Y_{bq} \union \bdy \wh Y_{bq}$ and $\wh Y_{bs} \union \bdy \wh Y_{bs}$ where $\bdy \wh Y_{bq}$ is the set of accumulation points of $\wh Y_{bq}$ in $\bdy F_n - \bdy \Gamma_b$, and similarly for $\bdy \wh Y_{bs}$.
From the description of $\ti\alpha$ above (and the definition of realization of lines in $T$) it follows that $E_b \subset \ti\gamma_T$ if and only if $\bdy\ti\gamma$ has one point in $\bdy\wh Y_{bq}$ and one point in $\bdy\wh Y_{bs}$ if and only if $\mathcal A(\bdy\ti\gamma)$ has one point in $\bdy\wh Y_{b'q'}$ and one point in $\bdy\wh Y_{b's'}$ if and only if $E_{b'} \subset \mathcal A(\ti\gamma_T)$, completing the proof. \end{proof}
\paragraph{Application of Lemma~\ref{LemmaOverpathsAttraction}: Proof of Lemma~\ref{geometricFullHeightCase}.} We continue with the notation that was reviewed and established in the paragraphs surrounding the statement of Lemma~\ref{geometricFullHeightCase}.
As in the hypothesis of Lemma~\ref{geometricFullHeightCase}, let $\gamma \in \mathcal B$ be a line which is not weakly attracted to $\Lambda^+_\phi$, and whose realization in $G$ has height $r$ and so is not contained in $G_{r-1}$. It follows that $\gamma$ is not supported by $\mathcal F$ and so its realization in $G'$ (also denoted $\gamma$) is not contained in $G'_{u-1}$. Choose a lift $\ti\gamma \subset \widetilde G'$ with realization $\ti\gamma_T$ in~$T$. We apply Lemma~\ref{LemmaOverpathsAttraction} with $\omega=\psi$, noting that $\Lambda^\un = \Lambda^+_\psi = \Lambda^-_\phi$. It follows that the line $\ti\gamma_T$ and the over--under decomposition of $\ti\gamma$ as related to $\Lambda^\un$ must match one of the forms in the conclusion of Lemma~\ref{LemmaOverpathsAttraction}. We go through the four conclusions one at a time, ruling out the first and using the rest to show that $\gamma$, in relation to $\Lambda^-_\phi$, matches one of the forms in the conclusion of Lemma~\ref{geometricFullHeightCase}. In some parts of the proof we assert that certain rays in $G'$ are principal rays of height~$u$, and these assertions are justified by application of Fact~\refGM{FactSingularRay}.
\textbf{First Case: Degenerate $Q$-vertex.} This case is ruled out since $\gamma$ is not in $Q=G'_{u-1}$.
\textbf{Second Case: Degenerate $S$-vertex.} In this case we have $\ti\gamma_T = V_s$ and $\ti\gamma = \ti\gamma_s \subset \widetilde G'_s$ with corresponding geodesic overpath $\breve\gamma_s \subset \breve S_s$ being either a leaf of $\breve\Lambda^\un_s$, or a geodesic line contained in the interior of a principal region of $\breve\Lambda^\un_s$, or a component of $\bdy_0 \breve S_s$.
If $\breve\gamma_s$ is a leaf of $\breve\Lambda^\un_s$ then, by Proposition~\refGM{PropGeomLams}, $\gamma$ is a generic leaf of $\Lambda^-_\phi$, which matches conclusion~\pref{ItemGFHCLeaf} of Lemma~\ref{geometricFullHeightCase}.
If $\breve\gamma_s$ is a component of $\bdy_0 \breve S_s$ then $\ti\gamma \subset \widetilde G'_s$ projects to the line $\gamma$ in $G'$, which winds bi-infinitely around the circuit $\rho'_u$ or its inverse $\bar\rho'_u$; this matches conclusion~\pref{ItemGFHCIterate} of Lemma~\ref{geometricFullHeightCase}.
Suppose now that $\breve\gamma_s$ is a geodesic line contained in the interior of an upstairs principal region $\breve P$ of $\breve\Lambda^\un_s \subset \breve S_s$, and let $P \subset S$ be the downstairs principal region of $\Lambda^\un$ obtained by projecting~$\breve P$. Let $\bdy_\infinity \breve P \subset \bdy\Gamma_s \subset \bdy F_n$ denote the ideal points of $\breve P$. If $P$ is an ideal polygon then $\bdy_\infinity \breve P$ is a finite cyclically ordered set. Otherwise $P$ is a crown, there is a unique component $L$ of $\bdy \breve S_s$ which is also a component of $\bdy \breve P$, and $\bdy_\infinity \breve P$ is the union of the two points $\bdy_\infinity L$ with a countably infinite, linearly ordered, discrete subset of cusps, accumulating in opposite directions on the two points of $\bdy_\infinity L$. Since $\psi$ is rotationless, we may choose $\Psi \in \mathsf{Aut}(F_n)$ representing $\psi$ so that $\Psi(\bdy\Gamma_s)=\bdy\Gamma_s$ and so that $\Psi$ fixes each point of $\bdy_\infinity \breve P$. Corresponding to $\Psi$ there is a lift $\ti f'_\Psi \from \widetilde G' \to \widetilde G'$ of $f'$ (see Section~\refGM{SectionLiftFacts}), and $\ti f'_\Psi$ preserves~$\widetilde G'_s$. There is also a unique corresponding lift $\breve\Theta \from \breve S_s \to \breve S_s$ of $\Theta$, using the correspondence under which $\ti d \composed \breve\Theta$ and $\ti f'_\Psi \composed \ti d \from \breve S_s \to \widetilde G'_s$ are $\Gamma_s$-equivariantly homotopic. This map $\breve\Theta$ preserves the principal region $\breve P$ and fixes each point of $\bdy_\infinity \breve P$, and each cusp of $\bdy_\infinity \breve P$ is an attracting point for the action of $\breve\Theta$ on $\bdy\Gamma_s$ (by Proposition~\refGM{PropNielsenThurstonTheory}).
Consider the pair of ideal endpoints $\bdy_\infinity \ti\gamma = \bdy_\infinity \breve\gamma_s = \{\xi_1,\xi_2\} \subset \bdy_\infinity\breve P \subset \bdy F_n$. If $P$ is a crown and $\{\xi_1,\xi_2\} = \bdy_\infinity L$ as above, then $\breve\gamma_s=L$, contradicting that $\breve\gamma_s$ is contained in the interior of~$\breve P$. It follows that at least one of $\xi_1,\xi_2$ is a cusp of $\breve P$, say $\xi_1$. We orient $\ti\gamma$ and $\breve\gamma_s$ so that $\xi_1$ is the initial ideal endpoint of each.
If $\xi_i$ is a cusp of $\breve P$ ($i=1,2$) then, since $\xi_i$ is an attracting point for the action of $\breve\Theta$ on $\bdy\Gamma_s$, it follows that: $\xi_i$~is~an attracting point for the action of $\Psi$ on all of $\bdy F_n$ (Fact~\refGM{LemmaFixPhiFacts}); $\xi_i$~is represented by a principal ray $\widetilde R_i = [\ti v_i,\xi_i) \subset \widetilde G'$ generated by an oriented edge $\widetilde E_i \subset \widetilde G'$ whose initial direction and initial vertex $\ti v_i$ are fixed by $\ti f'_\Psi$ (Fact~\refGM{FactSingularRay}). Also we have $\widetilde E_i \subset \widetilde H'_s$, for otherwise $\xi_i \not\in \bdy\Gamma_s$.
Knowing that $\xi_1$ is a cusp of $\breve P$, we consider two cases depending on whether $\xi_2$ is also a cusp of $\breve P$.
Suppose that $\xi_2$ is a cusp of $\breve P$. Choose corresponding principal rays $\widetilde R_1,\widetilde R_2$ as above (the choice need not be unique, see Lemma~\refGM{LemmaPrincipalRayUniqueness}). Note that $\ti\mu = [\ti v_1,\ti v_2] \subset \widetilde G'$ is either a trivial path or a Nielsen path of $\ti f'_\Psi$. Let $\mu$ be the path in $G'$ to which $\ti\mu$ projects. The path~$\ti\mu$, if not trivial, decomposes uniquely into fixed edges and indivisible Nielsen paths of $\ti f'_\Psi$. Choose $\widetilde R_1,\widetilde R_2$ so as to minimize the number of terms of this decomposition of~$\ti\mu$. We claim that the interior of $\ti\mu$ is disjoint from the interiors of $\widetilde R_1$ and $\widetilde R_2$, implying that $\ti\gamma = \widetilde R_1^\inv \, \ti\mu \, \widetilde R_2$ (which we show matches conclusion~\pref{ItemGFHCTwoRays} of Lemma~\ref{geometricFullHeightCase}, after verifying the form of~$\mu$). If the claim fails, say if $\interior(\ti\mu) \intersect \interior(\widetilde R_1) \ne \emptyset$, then the first term of the decomposition of $\ti\mu$ contains an edge of $\widetilde H'_u$. By Fact~\refGM{FactEGNPUniqueness} that term is a lift of $\rho'_u$ or $\bar\rho'_u$ that we denote $\alpha\bar\beta$, and so $\ti\mu = \alpha\bar\beta \ti\mu'$. Applying Lemma~\refGM{LemmaPrincipalRayUniqueness} it follows that $(\widetilde R_1 - \alpha) \union \beta$ is also a principal ray representing $\xi_1$, whose base point is connected to the base point of $\widetilde R_2$ by the path $\ti\mu'$, contradicting minimality. It remains to verify that the form of $\mu$ matches conclusion~\pref{ItemGFHCTwoRays} of Lemma~\ref{geometricFullHeightCase}. If $\ti\mu$ is trivial we are done. Otherwise $\ti\mu$ is a Nielsen path for $\ti f'_\Psi$ and it projects to a Nielsen path $\mu$ for $f'$. If $\mu$ has height~$u$ then it is an iterate of $\rho'_u$ or $\bar\rho'_u$ (by Fact~\refGM{FactEGNielsenCrossings}) and we are done. 
Otherwise $\mu$ has height~$\le u-1$ and we are also done.
Suppose that $\xi_2$ is not a cusp of $\breve P$, so $P$ is a crown and $\xi_2 \in \bdy_\infinity L$. We consider separately the cases $L \subset \bdy_\ell \breve S_s$ and $L \subset \bdy_0 \breve S_s$. If $L \subset \bdy_\ell \breve S_s$ then $L = \breve B_b$ for some edge $E_b \subset T$ incident to $V_s$; let $V_q$ be the opposite $Q$-vertex of $E_b$. It follows that the straightened image of $\ti d(L)$ in $\widetilde G'_s$ is the unique line in $\wh B_b = \wh Q_q \intersect \widetilde G'_s = \wh Q_q \intersect \wh\Sigma_s$, and that $\xi_2 \in \bdy\Gamma_b$ is an ideal endpoint of that line. We may therefore write $\ti\gamma$ as a back-to-back concatenation of rays $\ti\gamma = \widetilde R_1^\inv \widetilde R^{\vphantom{\inv}}_2$ where $\widetilde R_2$ is the maximal subray of $\ti\gamma$ contained in $\wh Q_q \intersect \widetilde G'_s$, and $\widetilde R_1 = \ti\gamma \setminus \widetilde R_2$ is the minimal subray of $\ti\gamma$ containing every edge of $\widetilde H'_u \intersect \ti\gamma$. Since $\ti f'_\Psi$ is a principal lift fixing $\xi_1,\xi_2$ it follows that $\ti f'_\Psi$ fixes the common base point of $\widetilde R_1$ and $\widetilde R_2$ and fixes the initial direction of $\widetilde R_1$, and therefore $\widetilde R_1$ is a principal ray representing $\xi_1$, matching conclusion~\pref{ItemGFHCOneRay} of Lemma~\ref{geometricFullHeightCase}. If $L \subset \bdy_0 \breve S_s$ then the same analysis works except that the straightened image of $\ti d(L)$ is a lift of the bi-infinite iterate of $\rho'_u$ or $\bar\rho'_u$, and the subray $\widetilde R_2 \subset \ti\gamma$ is the maximal subray of $\ti\gamma$ that is a lift of a singly infinite iterate of $\rho'_u$ or $\bar\rho'_u$. It still holds that $\ti f'_\Psi$ fixes the common base point of $\widetilde R_1$ and $\widetilde R_2$ and the initial direction of~$\widetilde R_1$, and that $\widetilde R_1$ is a principal ray representing $\xi_1$, also matching conclusion~\pref{ItemGFHCOneRay}.
\textbf{Third Case: One edge.} Up to reversal of orientation we have an over--under decomposition $\ti\gamma = \ti\gamma_s \ti\gamma_q$, and $\ti\gamma_T = E_b$ with $Q$-endpoint $V_q$ and $S$-endpoint $V_s$. Note that the proper geodesic overpath $\breve\gamma_s$ corresponding to $\ti\gamma_s$ is contained in a principal region $\breve P$ of $\breve\Lambda^\un_s$ covering a crown principal region~$P \subset S$. We may choose the principal automorphism $\Psi \in \mathsf{Aut}(F_n)$ representing $\psi$, and the lift $\breve\Theta \from \breve S_s \to \breve S_s$ of $\Theta$, as in the case ``Degenerate $S$-vertex'', fixing each point of $\bdy_\infinity \breve P$, preserving $\breve B_b$, and preserving $\bdy\Gamma_s$, $\bdy\Gamma_q$, and $\bdy\Gamma_b$. Let $\ti f'_\Psi$ be the principal lift of $f'$ that corresponds to $\Psi$. The rays $\widetilde R_1$ and $\widetilde R_2 = \ti\gamma_q \subset \wh Q_q$ have a common finite endpoint $x \in \wh Q_q \intersect \widetilde G'_s$ fixed by $\ti f'_\Psi$, and the initial direction of $\ti\gamma_s$ is in $\widetilde H'_{u,s}$ and is fixed by $\ti f'_\Psi$. It follows that $\widetilde R_1$ is a principal ray representing $\xi_1$, matching conclusion~\pref{ItemGFHCOneRay} of Lemma~\ref{geometricFullHeightCase}.
\textbf{Fourth Case: Two edges and two $S$-endpoints.} We have over--under decomposition $\ti\gamma = \ti\gamma_{s-} \union \ti\gamma_q \union \ti\gamma_{s+}$. By an analysis very similar to that of the previous case, carried out on each of the rays $\widetilde R_1 = \ti\gamma_{s-}$ and $\widetilde R_2 = \ti\gamma_{s+}$, one proves that these are both principal rays, and setting $\mu = \ti\gamma_q$ (which is either trivial or in $\wh Q_q$), we have matched conclusion~\pref{ItemGFHCTwoRays} of~Lemma~\ref{geometricFullHeightCase}, completing the proof of the lemma. \qed
\paragraph{A further application of Lemma~\ref{LemmaOverpathsAttraction}.} Proposition~\ref{PropWeakGeomRelFullIrr} to follow will, in \PartFour, be incorporated as one of the conclusions of Theorem J, the relative, general version of Theorem I which was stated in the Introduction.
As an example of Proposition~\ref{PropWeakGeomRelFullIrr}, consider a compact surface $S$ with nonempty boundary, and a subset $\bdy_\ell S \subset \bdy S$ consisting of all but one component of $\bdy S$. From elementary topology one knows that $S$ deformation retracts to an embedded finite graph containing $\bdy_\ell S$, and so the inclusion $\bdy_\ell S \subset S$ determines a free factor system $[\bdy_\ell S]$ in the free group $\pi_1 S$ having one rank~$1$ component for each component of $\bdy_\ell S$. For any $\phi \in \MCG(S) \subgroup \mathsf{Out}(\pi_1 S)$, if $\phi$ is a pseudo-Anosov element in $\MCG(S)$ then, regarded in $\mathsf{Out}(\pi_1 S)$, $\phi$ is fully irreducible relative to $[\bdy_\ell S]$. In the very special case that $\bdy S$ has just one component, one obtains the ``well known'' theorem saying that if $\phi$ is pseudo-Anosov then $\phi$ is fully irreducible in the absolute sense.
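With hypothetical labels $c_1,\dots,c_k$ for the components of $\bdy_\ell S$ (notation introduced here only for illustration), the free factor system of the example can be written out as

```latex
\[
[\bdy_\ell S] \;=\; \bigl\{\, [\langle c_1 \rangle], \dots, [\langle c_k \rangle] \,\bigr\},
\qquad \langle c_i \rangle \cong \mathbb{Z},
\]
% one rank-1 (infinite cyclic) component for each component c_i of
% \bdy_\ell S, each regarded up to conjugacy in the free group \pi_1 S.
```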
\begin{proposition}\label{PropWeakGeomRelFullIrr} Let $\mathcal F$ be a free factor system, let $Q \disjunion S \xrightarrow{j} Y \xrightarrow{d} G$ be the static data of a geometric model for some CT\ whose top stratum is EG-geometric, and suppose that $Q \subset G$ represents $\mathcal F$. Given $\omega \in \mathsf{Out}(F_n)$, if $\omega(\mathcal F)=\mathcal F$, and if there exists a topological representative $f_\omega \from G \to G$ of $\omega$ and a pseudo-Anosov homeomorphism $\Theta \from S \to S$ which semiconjugates to $f_\omega$, then $\omega$ is fully irreducible rel~$\mathcal F$. \end{proposition}
\begin{proof} We may assume that $f_\omega(Q)=Q$. Let $\Lambda^\un,\Lambda^\st$ be the unstable/stable lamination pair of $\Theta$. As was done just prior to the statement of Lemma~\ref{LemmaOverpathsAttraction}, we may apply the proof of Proposition~\refGM{PropGeomLams} to our present situation, with the conclusion that the induced map $dj_\# \from \mathcal B(\pi_1 S) \to \mathcal B(G)=\mathcal B(F_n)$ takes the laminations $\Lambda^\un,\Lambda^\st \subset \mathcal B(\pi_1 S)$ to a lamination pair $\Lambda^+_\omega$, $\Lambda^-_\omega$ for~$\omega$. Applying Proposition~\refGM{PropGeomLams}~\prefGM{ItemLamSurfSupport}, $\mathcal F_\supp(\Lambda^\pm_\omega) = \mathcal F_\supp[\pi_1 S]$. It follows that $\mathcal F_\supp(\mathcal F,\Lambda^\pm_\omega) = \mathcal F_\supp([\pi_1 Q],[\pi_1 S]) = \{[F_n]\}$, the last equation following from Lemma~\trefGM{LemmaScaffoldFFS}{ItemRelFFS}. The lamination pair $\Lambda^\pm_\omega$ therefore fills rel~$\mathcal F$. Note also by Lemma~\trefGM{LemmaScaffoldFFS}{ItemRelFFS} that every conjugacy class carried by the rank~$1$ subgroup system $[\bdy_0 S]$ fills rel~$\mathcal F$.
We next prove that for every nontrivial $\tau \in F_n$, if its conjugacy class $[\tau]$ is not weakly attracted to~$\Lambda^+_\omega$ under the action of $\omega$ on $\mathcal B$ then $[\tau]$ is carried by~$\mathcal F \union \{[\bdy_0 S]\}$. Let $\gamma$ be the line in $G$ which wraps bi-infinitely around the circuit in $G$ representing~$[\tau]$. Let $\ti\gamma \subset \widetilde G$ be the lift of $\gamma$ which is the axis of $\tau$ in~$\widetilde G$. Since the over--under decomposition of $\ti\gamma$ is evidently $\tau$-invariant, three cases can occur: (1) $\ti\gamma$ is a bi-infinite concatenation alternating between overpaths and underpaths; (2) all of $\ti\gamma$ is an underpath; (3) all of $\ti\gamma$ is an overpath. But Lemma~\ref{LemmaOverpathsAttraction} rules out case~(1), since $[\tau]$ is not weakly attracted to $\Lambda^+_\omega$. In case~(2) $[\tau]$ is carried by $\mathcal F$. In case~(3) it follows that $\tau \in \Gamma_s$ for some $s$ and that the corresponding geodesic overpath $\breve\gamma_s$ is the axis of the action of $\tau$ on $\breve S_s$. Lemma~\ref{LemmaOverpathsAttraction} implies that one of three possibilities holds: (3a) $\breve\gamma_s$ is a leaf of $\Lambda^\un$; (3b) $\breve\gamma_s$ is contained in the interior of a principal region of $\Lambda^\un$; (3c) $\breve\gamma_s$ is a component of $\bdy_0\breve S_s$. Cases (3a) and (3b) contradict that $\breve\gamma_s$ is the axis of $\tau$, and Case (3c) implies that $[\tau]$ is carried by $\{[\bdy_0 S]\}$.
The conclusion of the proof is a general argument. Arguing by contradiction, assume that $\omega$ is not fully irreducible rel~$\mathcal F$; after passing to a rotationless power there is an $\omega$-invariant free factor system~$\mathcal F'$ with proper inclusions $\mathcal F \sqsubset \mathcal F' \sqsubset \{[F_n]\}$, and there is a CT\ representative $f \from G \to G$ of $\omega$ with properly included core filtration elements $G_s \subset G_{s'} \subset G_t=G$ representing $\mathcal F \sqsubset \mathcal F' \sqsubset \{[F_n]\}$ respectively. Since $\Lambda^\pm_\omega$ fills rel~$\mathcal F$, the stratum of $G$ corresponding to $\Lambda^+_\omega$ is the top stratum~$H_t$ and is an EG-stratum. By Definition~\ref{defn:Z} and Corollary~\ref{CorPMna}~\pref{ItemAnaDependence}, the nonattracting subgroup system $\mathcal A_\na(\Lambda^+_\omega)$ has the form $[G_{t-1}]$ if $\Lambda^+_\omega$ is nongeometric or $[G_{t-1}] \union \{[\<\rho\>]\}$ if $\Lambda^+_\omega$ is geometric (the second case holds in our situation, but we do not make use of that). In either case there exists a circuit that is not carried by the core graph $[G_s]$ but is carried by the core graph $[G_{s'}]$. It follows that there exists a conjugacy class $[\tau]$ that is not carried by $\mathcal F$ but is carried by $\mathcal F'$, and is therefore not weakly attracted to~$\Lambda^+_\omega$. Applying the previous paragraph, and since $[\tau]$ is not carried by $\mathcal F$, the class $[\tau]$ is carried by $[\bdy_0 S]$, implying that $[\tau]$ fills rel~$\mathcal F$ and is therefore not carried by~$\mathcal F'$, a contradiction. \end{proof}
\subsection{General nonattracted lines and the Proof of Theorem G} \label{SectionNALinesGeneral}
We are given rotationless $\phi, \, \psi=\phi^\inv \in \mathsf{Out}(F_n)$, a lamination pair $\Lambda^\pm_\phi \in \L^\pm(\phi)$, and a CT\ $\fG$ representing $\phi$ with EG\ stratum $H_r$ corresponding to $\Lambda^+_\phi$ (note that we are abandoning the notational conventions of Section~\ref{SectionTheoremGStatement}).
From Definition~\ref{defn:Z} we have the path set $\<Z,\hat\rho_r\> \subset \wh\mathcal B$. From Lemma~\ref{LemmaZPClosed} this path set is a groupoid and each line $\gamma \in \<Z,\hat\rho_r\>$ is carried by $\mathcal A_{\na}(\Lambda^{\pm}_\phi)$. Recall also Lemma~\ref{LemmaThreeNASets} which says that $\gamma\in\mathcal B_\na(\Lambda^+_\phi)$ as long as it satisfies at least one of the following conditions. \begin{enumerate} \item\label{ItemGammaCarriedNA}
$\gamma$ is carried by $\mathcal A_{\na}(\Lambda^+_\phi)$. \item $\gamma \in \mathcal B_\sing(\psi)$. \item $\gamma \in \mathcal B_\gen(\psi)$. \end{enumerate} Since each line carried by $\mathcal A_\na(\Lambda^+_\phi)$ is in $\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$ it follows that for each line $\gamma \in \<Z,\hat\rho_r\>$ we have $\gamma \in \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$; we use this repeatedly in this section.
To simplify the notation of the proof we define the set of \emph{good lines} in $\mathcal B$ to be $$\mathcal B_\good(\Lambda^{\pm}_\phi;\psi) = \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi) \union \mathcal B_\sing(\psi) \union \mathcal B_\gen(\psi) $$ and we repeatedly use Proposition~\ref{PropStillClosed} which with this notation says that $\mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$ is closed under concatenation.
The conclusion of Theorem~G says that $\mathcal B_\na(\Lambda^+_\phi) = \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$. One direction of inclusion, namely $\mathcal B_\na(\Lambda^+_\phi) \supset \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$, follows from Lemma~\ref{LemmaThreeNASets} and the fact that each line in $\mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$ is a concatenation of lines in $\mathcal B_\sing(\psi)$ and lines carried by $\mathcal A_\na(\Lambda^{\pm}_\phi)$.
We turn now to the proof of the opposite inclusion $\mathcal B_\na(\Lambda^+_\phi) \subset \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$. Given $\gamma \in \mathcal B_\na(\Lambda^+_\phi)$, if the height of $\gamma$ is less than $r$ then $\gamma \in \<Z,\hat \rho_r\>$ and we are done. Henceforth we proceed by induction on height. Define an \emph{inductive concatenation} of $\gamma$ to be an expression of $\gamma$ as a concatenation of finitely many lines in $\mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$ and at most one line $\nu$ of height lower than $\gamma$. If we can show that $\gamma$ has an inductive concatenation, we prove that $\gamma$ is good as follows. In some cases $\nu$ does not occur in the concatenation and so $\gamma$ is good by Proposition~\ref{PropStillClosed}. Otherwise, using invertibility of concatenation, it follows that $\nu$ is expressed as a concatenation of good lines plus the line $\gamma$, all of which are known to be in $\mathcal B_\na(\Lambda^+_\phi)$. Applying Lemma~\ref{LemmaConcatenation} we therefore have $\nu \in \mathcal B_\na(\Lambda^+_\phi)$. Applying induction on height it follows that $\nu$ is good, and so again $\gamma$ is good by Proposition~\ref{PropStillClosed}.
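Schematically, with $\beta_1, \dots, \beta_k$ standing for hypothetical lines in $\mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$ (labels introduced here only for illustration), an inductive concatenation of $\gamma$ has the form

```latex
\[
\gamma \;=\; \beta_1 \diamond \cdots \diamond \beta_j
       \;\diamond\; \nu \;\diamond\;
       \beta_{j+1} \diamond \cdots \diamond \beta_k,
\]
% where the single term \nu of lower height may be absent; when it is
% present, inverting the concatenation expresses \nu in terms of \gamma
% and the good lines \beta_i, which is what drives the induction on height.
```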
The induction step breaks into two major cases, depending on whether the stratum of the same height as $\gamma$ is NEG\ or EG. For the case of an NEG\ stratum we will use the following:
\begin{lemma}\label{LemmaInverseIsPrincipal} Suppose that $\phi, \psi=\phi^{-1} \in \mathsf{Out}(F_n)$ are rotationless, that $\fG$ is a CT\ representing $\phi$, that $E_s$ is the unique edge in an NEG\ stratum $H_s$, and that both endpoints of $E_s$ are contained in $G_{s-1}$. Let $\widetilde E_s$ be a lift of $E_s$, let $\ti f \from \widetilde G \to \widetilde G$ be the lift of $f$ that fixes the initial endpoint of $\widetilde E_s$ and let $\Phi$ be the automorphism corresponding to $\ti f$. Then $\Psi = \Phi^{-1}$ is principal. Moreover there is a line $\sigma \in \mathcal B_\sing(\psi)$ that has height $s$, that crosses $E_s$ exactly once and that lifts to a line with endpoints in $\Fix_N(\Psi)$. \end{lemma}
\begin{proof} By Fact~\refGM{FactContrComp} and Definition~\trefGM{DefCT}{ItemZeroStrata}, no component of $G_{s-1}$ is contractible. Letting $\widetilde C_1, \widetilde C_2 \subset \widetilde G$ be the components of the full pre-image of $G_{s-1}$ that contain the initial and terminal endpoints of $\widetilde E_s$ respectively, there are nontrivial free factors $B_1,B_2$ that satisfy $\bdy B_j = \bdy \widetilde C_j$. Each of $\widetilde C_1, \widetilde C_2$ is preserved by $\ti f$ and so each of $B_1,B_2$ is $\Psi$-invariant. By Fact~\refGM{FactPeriodicNonempty} applied to $\Psi \restrict B_j$, there exists $m > 0$ and points $P_j \in \Fix_N(\wh\Psi^m) \cap \partial \widetilde C_j$ for $j=1,2$. Since the line $\ti \sigma$ connecting $P_1$ to $P_2$ is not birecurrent it does not project to either an axis or a generic leaf of some element of $\L(\phi^{-1})$. Thus $\Psi^m \in P(\psi)$. Since $\psi$ is rotationless, $\Psi \in P(\psi)$ and $\sigma \in \mathcal B_\sing(\psi)$. \end{proof}
Fix now $s \ge r$ and assume as an induction hypothesis that all lines in $\mathcal B_\na(\Lambda^+_\phi)$ of height $<s$ are in $\mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$. Fix a line $\gamma \in \mathcal B_\na(\Lambda^+_\phi)$ of height $\le s$. Let $\ti \gamma$ be a lift of $\gamma$ and let $P$ and $Q$ be its initial and terminal endpoints respectively.
\paragraph{Case 1: $H_s$ is NEG.} Let $E_s$ be the unique edge in $H_s$. If $E_s$ is closed and $\gamma$ is a bi-infinite iterate of $E_s$ then $E_s \subset Z$ and $\gamma \in \<Z,\hat\rho_r\>$ so $\gamma \in \mathcal B_\ext(\Lambda^{\pm}_\phi;\psi)$. We may therefore assume that both endpoints of $E_s$ belong to $G_{s-1}$.
Orient $E_s$ so that its initial direction is fixed. Recall (Lemma 4.1.4 of \BookOne) that for each occurrence of $E_s$ or $\overline E_s$ in the representation of $\gamma$ as an edge path, the line $\gamma$ splits at the initial vertex of $E_s$, and we refer to this as the \emph{highest edge splitting vertex} determined by the occurrence of $E_s$. We also use this terminology for lifts of $E_s$ in the universal cover. By Fact~\refGM{FactPrincipalVertices}, highest edge splitting vertices are principal.
\paragraph{Case 1A: Both ends of $\gamma$ have height $s$.} In this case $\gamma$ has a splitting in which each term is finite. Since $\gamma$ is not weakly attracted to $\Lambda^+_\phi$, neither is any of the terms in the splitting. Lemma~\ref{defining Z}~\pref{ItemZPFinitePaths} implies that each term is contained in $\<Z,\hat \rho_r\>$ and so $\gamma$ is contained in $\<Z,\hat \rho_r\>$ and we are done.
\paragraph{Case 1B: Exactly one end of $\gamma$ has height $s$.} We assume without loss that the initial end of $\gamma$ has height $s$. Pick a lift $\ti\gamma$, let $\widetilde E_s$ be the last lift of $E_s$ crossed by $\ti\gamma$, let $\ti x \in \ti\gamma$ be the highest edge splitting vertex determined by $\widetilde E_s$, and let $\ti \gamma = \widetilde R_-^\inv \cdot \widetilde R_+$ be the splitting at~$\ti x$. The ray $\widetilde R_-$ has height $s$ and crosses lifts of $E_s$ infinitely often, and as in Case~1A the projected ray $R_-$ is contained in $\<Z,\hat\rho_r\>$. It follows that there exists a subgroup $A \in \mathcal A_\na(\Lambda^+_\phi)$ such that $P \in \bdy A$. Let $\ti f$ be the lift of $f$ that fixes $\ti x$ and let $\Phi$ be the corresponding element of $P(\phi)$. Lemma~\ref{LemmaInverseIsPrincipal} implies that $\Psi =\Phi^{-1} \in P(\psi)$.
We claim that $A$ is $\Phi$-invariant. By Lemma~\ref{LemmaZPClosed}~\pref{item:UniqueLift} it suffices to show that $\wh\Phi(\partial A) \cap \partial A \ne \emptyset$. This is obvious if $P \in \Fix(\wh\Phi)$ so assume otherwise. The ray $f_\#(R_-)$ is contained in $\<Z,\hat \rho_r\>$ by Lemma~\ref{defining Z}~\pref{ItemZPPathsInv} so $P$ and $\wh\Phi(P)$ bound a line that projects into $\<Z,\hat \rho_r\>$ and so is carried by $\mathcal A_{\na}(\Lambda^+_\phi)$. Another application of Lemma~\ref{LemmaZPClosed}~\pref{item:UniqueLift} implies that $\wh\Phi(P) \in \partial A$ as desired.
By Fact~\refGM{FactPeriodicNonempty} applied to $\Psi \restrict A$, the set $\Fix_N(\wh\Psi^m) \cap \partial A$ is nonempty for some $m > 0$. Since $\psi$ is rotationless and $\Psi \in P(\psi)$, we may take $m=1$, from which it follows that $\Psi$ is $A$-related. By Lemma~\ref{LemmaInverseIsPrincipal}, there exist $P',Q' \in \Fix_N(\Psi)$ so that the line $\ti \sigma = \overline{P' Q'}$ crosses $\widetilde E_s$ in the same direction as $\ti \gamma$ and crosses no other edge of height $\ge s$, and $\ti\sigma$ projects to $\sigma \in \mathcal B_\sing(\psi)$. Let $\ti\sigma = \widetilde R'_-{}^\inv \cdot \widetilde R'_+$ be the highest edge splitting determined by $\ti x$. Assuming that $P \ne P'$, the line $\ti\mu = \overline{P P'}$ has endpoints in $\bdy A \union \Fix_N(\Psi)$ and so projects to $\mu \in \mathcal B_\ext(\Lambda^+_\phi)$. If $\ti\gamma$ crosses $\widetilde E_s$ in the backwards direction then $\widetilde E_s$ is the last edge of both $\widetilde R_-$ and $\widetilde R'_-$ and each of $\widetilde R_+$ and $\widetilde R'_+$ has height $\le s-1$; otherwise each of $\widetilde R_+$ and $\widetilde R'_+$ is a concatenation of $\widetilde E_s$ followed by a ray of height $\le s-1$. In either case, assuming that $Q \ne Q'$, it follows that the line $\ti\nu = \overline{Q'Q}$ has height $\le s-1$. We therefore have an inductive concatenation $\gamma = \mu \diamond \sigma \diamond \nu$, with $\mu$ omitted when $P=P'$ and $\nu$ omitted when $Q=Q'$, and Case 1B is completed.
\paragraph{Case 1C: Neither end of $\gamma$ has height $s$.} We induct on the number $m$ of height $s$ edges in $\gamma$. The base case, where $m=0$, follows from induction on $s$. Let $\ti \gamma = \widetilde R_-^\inv \cdot \widetilde R_+$\ be the splitting determined by the last highest edge splitting vertex $\ti x$ in~$\ti \gamma$, let $\ti f$ be the lift of $f$ that fixes $\ti x$, and let $\Phi \in P(\phi)$ correspond to $\ti f$. As in Case~1B, from Lemma~\ref{LemmaInverseIsPrincipal} it follows that $\Psi =\Phi^{-1} \in P(\psi)$ and that there exist $P',Q' \in \Fix_N(\Psi)$ so that the line $\ti \sigma$ connecting $P'$ to $Q'$ crosses the last height $s$ edge of $\ti\gamma$ in the same direction as $\ti \gamma$ and crosses no other edge of height $\ge s$. Let $\ti \sigma = \widetilde R'_-{}^\inv \cdot \widetilde R'_+$ be the highest edge splitting determined by $\ti x$. The line $\mu_1 = \overline{P P'}$ is obtained by tightening $\widetilde R_-^\inv \widetilde R'_-$, and the line $\mu_2 = \overline{Q' Q}$ is obtained by tightening $\widetilde R'_+{}^\inv \widetilde R_+$. These lines have height $\le s$, cross fewer than $m$ edges of height $s$, and are not weakly attracted to~$\Lambda^+_\phi$ by Lemma \ref{LemmaConcatenation}, because the rays $R_- ,R_+, R'_-$ and $R'_+$ are not weakly attracted to~$\Lambda^+_\phi$. By induction on $m$ we have $\mu_1,\mu_2 \in \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$. Since $\sigma \in \mathcal B_\sing(\psi)$, it follows that $\gamma = \mu_1 \diamond \sigma \diamond \mu_2 \in \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$, completing Case 1C.
\paragraph{Case 2: $H_s$ is EG.} Let $\Lambda^+_s \in \L(\phi)$ be the lamination associated to $H_s$ with dual lamination denoted $\Lambda^-_s \in \L(\psi)$. Applying Theorem~\refGM{TheoremCTExistence} with ${\mathcal C}$ being $[G_r] \sqsubset [G_s]$, let $f' \from G' \to G'$ be a CT\ representing $\psi$ with EG\ stratum $H'_{r'}$ associated to $\Lambda^-_\phi$ and EG\ stratum $H'_{s'}$ associated to $\Lambda_s^-$ so that $[G_r] = [G'_{r'}]$ and $[G_s] = [G'_{s'}]$. Let $\gamma'$ be the realization of $\gamma$ in~$G'$, a line of height~$s'$. Using the $F_n$-equivariant identification $\bdy\widetilde G_s \approx \bdy \widetilde G'_{s'}$, there is a lift $\ti\gamma'$ of $\gamma'$ with endpoints $P,Q$.
\paragraph{Case 2A: $\gamma$ is not weakly attracted to $\Lambda^+_s$.} This is the case where we apply Lemmas~\ref{nonGeometricFullHeightCase} and \ref{geometricFullHeightCase}. In the situation where $\gamma'$ is a singular line of $\psi$ or a generic leaf of $\Lambda^-_s$, or in the geometric situation where $\gamma'$ is a bi-infinite iterate of the height $s'$ closed indivisible Nielsen path $\rho'_{s'}$, we have $\gamma' \in \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$ and we are done. The situation where $\gamma'$ is a singular line of $\psi$ includes all cases of Lemmas~\ref{nonGeometricFullHeightCase} and \ref{geometricFullHeightCase} where $\gamma' = R_-^\inv \mu R_+$, each of $R_-,R_+$ is either a height $s'$ principal ray or a singly infinite iterate of a height $s'$ closed indivisible Nielsen path, and $\mu$ is either trivial or a height $s'$ Nielsen path. We may therefore assume that none of these situations occurs. In all remaining situations, we divide into two subcases depending on whether one or two ends of $\gamma'$ have height $s'$.
Consider first the subcase where only one end of $\gamma'$, say the initial end, has height $s'$. By applying Lemma~\ref{nonGeometricFullHeightCase}~\pref{ItemNGFHCOneRay} or Lemma~\ref{geometricFullHeightCase}~\pref{ItemGFHCOneRay} we obtain a decomposition $\gamma' = R_-^\inv \mu R_+$ where $R_-$ is a height $s'$ principal ray, $\mu$ is either a trivial path or a height $s'$ Nielsen path, and the ray $R_+$ has height $<s'$. Lifting the decomposition of $\gamma'$ we obtain a decomposition $\ti\gamma' = \widetilde R_-^\inv \ti\mu \widetilde R_+$ where $\widetilde R_-$, $\widetilde R_+$ have endpoints $P,Q$. Let $x$ be the initial point of $R_+$, lifting to the initial point $\ti x$ of $\widetilde R_+$. The component $\Gamma$ of the full pre-image of $G'_{s'-1}$ that contains $\ti x$ is $\ti f'$-invariant and infinite and so there is a free factor $B$ such that $Q \in \partial B = \partial \Gamma$. Since $\Psi$ is principal, Fact~\refGM{FactPeriodicNonempty} implies the existence of $Q' \in \Fix_N(\wh\Psi) \cap \partial \Gamma$. The line $\ti \tau'$ connecting $P$ to $Q'$ projects to $\tau' \in \mathcal B_\sing(\psi)$; let $\tau$ be the realization of $\tau'$ in $G$. The line $\ti\nu' = \ti\tau'{}^\inv \diamond \ti\gamma'$ is contained in $\Gamma$ and so projects to a line $\nu' = \tau'{}^\inv \diamond \gamma'$ of height $<s'$ whose realization in $G$ is a line $\nu$ of height $< s$. We obtain an inductive concatenation $\gamma = \tau \diamond \nu$, completing the first subcase of Case 2A.
Consider next the subcase where both ends of $\gamma'$ have height $s'$. Applying Lemma~\ref{nonGeometricFullHeightCase}~\pref{ItemIntermediatePath} or Lemma~\ref{geometricFullHeightCase}~\pref{ItemGFHCTwoRays}, and keeping in mind the situations that we have assumed not to occur, there is a decomposition $\gamma' = R_1^\inv \mu R_2$ where $R_1$, $R_2$ are both height $s'$ principal rays, and $\mu$ has one of the forms $\beta$, $\alpha\beta$, $\beta\bar\alpha$, $\alpha\beta\bar\alpha$ where $\beta$ is a nontrivial path of height $<s'$ and $\alpha$ (if it occurs) is a height $s'$ nonclosed indivisible Nielsen path oriented to have initial vertex in the interior of $H'_{s'}$ and terminal vertex in $G'_{s'-1}$. Absorbing occurrences of $\alpha$ into the incident principal rays $R_1,R_2$, we obtain rays $R_-,R_+$ containing $R_1,R_2$ respectively, and a decomposition $\gamma' = R_-^\inv \beta R_+$ which lifts to a decomposition $\ti\gamma' = \widetilde R_-^\inv \ti\beta \widetilde R_+$ where $\widetilde R_-$ has endpoint $P$ and $\widetilde R_+$ has endpoint $Q$. Let $\ti x$ be the initial point of $\widetilde R_-$. There is a principal lift $\ti f' \from \widetilde G' \to \widetilde G'$ with associated $\Psi \in P(\psi)$ such that $\widetilde R_1$ is a principal ray for $\ti f'$ fixing the initial point $\ti y$ of $\widetilde R_1$. Since either $\ti x=\ti y$ or the segment $[\ti x,\ti y]$ is a lift of $\alpha$, it follows that $\ti f'$ fixes $\ti x$ and that $\ti f'_\#(\widetilde R_-) = \widetilde R_-$. As in the previous subcase there is a ray based at $\ti x$ with height $< s'$ and terminating at some $Q' \in \Fix_N(\wh\Psi)$. The line $\ti \tau'$ connecting $P$ to $Q'$ projects to $\tau' \in \mathcal B_\sing(\psi)$ which is good, and the line $\ti\sigma' = \ti \tau'{}^\inv \diamond \gamma' $ has only one end with height $s'$. 
By the previous subcase, the realization $\sigma$ of $\bar \tau\diamond \gamma$ in $G$ is good and hence $\gamma = \tau \diamond\sigma$ is good.
\paragraph{Case 2B: $\gamma$ is weakly attracted to $\Lambda^+_s$.} In this case $H_s \subset Z$, for otherwise $\gamma$ is weakly attracted to $\Lambda^+_\phi$ as well, contrary to hypothesis.
\subparagraph{Special case:} We first consider the special case that $\gamma$ decomposes at a fixed vertex $v$ into two rays $\gamma = \gamma_1 \gamma_2$ so that $\gamma_1$ has height $<s$ and $\gamma_2 \in \<Z,\hat\rho_r\>$. In $\widetilde G$ there is a corresponding decomposition $\ti\gamma = \ti\gamma_1 \ti\gamma_2$ at a vertex~$\ti v$, and there is a lift $\ti f$ fixing $\ti v$ with corresponding $\Phi \in \mathsf{Aut}(F_n)$ representing $\phi$. Let $\Psi = \Phi^\inv$.
Recall the notation established in Definition~\ref{defn:Z} of the graph immersion $h \from K \to G$ used to define $\mathcal A_\na(\Lambda^+_\phi)$. Since the ray $\gamma_2$ is an element of the path set $\<Z,\hat\rho_r\>$, it follows from Definition~\ref{defn:Z} that $\gamma_2$ lifts via the immersion $h \from K \to G$ to a ray in the finite graph $K$. The image of this lifted ray must therefore be contained in a noncontractible component $K_0$ of~$K$. There is a lift of universal covers $\ti h \from \widetilde K_0 \to \widetilde G$ such that $\ti h(\widetilde K_0)$ contains $\ti\gamma_2$ and such that the stabilizer of $\ti h(\widetilde K_0)$ is a subgroup $A \in \mathcal A_\na(\Lambda^{\pm}_\phi)$ whose conjugacy class is the one determined by the immersion $h \from K_0 \to G$. By construction we have $Q \in \bdy A$. If $\wh\Phi(Q) \ne Q$ then $Q$ and $\wh\Phi(Q)$ bound a line that projects into $\<Z,\hat \rho_r\>$ and so is carried by $ \mathcal A_{\na}(\Lambda^{\pm}_\phi)$, and by applying Lemma~\ref{LemmaZPClosed}~\pref{item:UniqueLift} it follows that $\wh\Phi(Q) \in \bdy A$; this is also true if $\wh\Phi(Q) = Q$. In particular $\Phi$, and therefore also $\Psi$, preserves~$A$. By Fact~\refGM{FactPeriodicNonempty} applied to $\Psi \restrict A$ there exists an integer $q \ge 1$ so that $\Fix_N(\wh\Psi^q) \cap \partial A \ne \emptyset$; we choose $q$ to be the minimal such integer and then we choose $Q' \in \Fix_N(\wh\Psi^q) \cap \partial A$. If $Q \ne Q'$ then the line $\beta$ connecting $Q$ to $Q'$ is carried by $A$ and so $\beta \in \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$.
The component $C$ of $G_{s-1}$ that contains the ray $\gamma_1$ is noncontractible, and letting $\widetilde C$ be the component of the full pre-image of $C$ that contains $\ti\gamma_1$, the stabilizer of $\widetilde C$ is a nontrivial free factor $B$ such that $\partial B = \partial \widetilde C$. By construction we have $P \in \bdy B$. Also $\widetilde C$ is invariant under $\ti f$ and so $B$ is invariant under $\Psi$. By Fact~\refGM{FactPeriodicNonempty} applied to $\Psi \restrict B$ there exists an integer $p \ge 1$ so that $\Fix_N(\wh\Psi^p) \intersect \bdy B \ne \emptyset$; we choose $p$ to be the minimal such integer and then we choose $P' \in \Fix_N(\wh\Psi^p) \cap \partial B$. If $P \ne P'$ then the line $\nu$ connecting $P$ to $P'$ has height~$<s$.
For some least integer $m > 0$ we have $P',Q' \in \Fix_N(\Psi^m)$. If $P' \ne Q'$, consider the line $\mu$ connecting $P'$ to $Q'$. By hypothesis $\psi$ is rotationless and so $\Psi$ is principal if and only if $\Psi^m$ is principal. It follows that if $\Psi$ is principal then $m=1$ and $\mu \in \mathcal B_\sing(\psi)$, whereas if $\Psi$ is not principal then $\Fix_N(\Psi^m) =\{P',Q'\}$ so $m = p = q = 1$ or $2$ and either $\mu \in \mathcal B_\gen(\psi)$ or $\mu$ is a periodic line corresponding to a conjugacy class that is invariant under $\phi^2$. In all cases, $\mu \in \mathcal B_\good(\Lambda^{\pm}_\phi;\psi)$.
We therefore have an inductive concatenation of the form $\gamma = \nu \diamond \mu \diamond \bar\beta$, where $\nu$ is omitted if $P=P'$, $\mu$ is omitted if $P'=Q'$, and $\bar\beta$ is omitted if $Q'=Q$, but at least one of them is not omitted because $P \ne Q$. This completes the proof in the special case.
\subparagraph{General case.} First we reduce to the subcase that $\gamma$ has a subray of height $s$ in~$\<Z,\hat\rho_r\>$. To carry out this reduction, after replacing $\gamma$ with some $\phi^k_\#(\gamma)$ we may assume that $\gamma$ contains a long piece of $\Lambda^+_s$ and so has a splitting $\gamma = R_- \cdot E \cdot R_+$ where $E$ is an edge of $H_s$ whose initial vertex and initial direction are principal. Lifting this splitting we have $\ti \gamma = \widetilde R_- \cdot \widetilde E \cdot \widetilde R_+$. Let $\ti f$ be the principal lift that fixes the initial vertex of $\widetilde E$ and let $\widetilde R'$ be the principal ray determined by the initial direction of $\widetilde E$. Neither the line $R_- R'$ nor the line obtained by tightening $\bar R_+ R'$ is weakly attracted to $\Lambda^+_r$, because $\widetilde R_-$ and $\widetilde R_+$ are not weakly attracted and the ray $R'$ is contained in $\<Z,\hat \rho_r\>$. Each of these lines contains a subray of~$R'$, and any subray of $R'$ contains a further subray of height $s$ in $\<Z,\hat\rho_r\>$, and so it suffices to show that each of these lines is contained in $\mathcal B_\good(\phi)$, which completes the reduction.
Let $t$ be the highest integer in $\{r,\ldots,s-1\}$ for which $H_t$ is not contained in $Z$. Using that $\gamma$ has a subray of height $s$ in $\<Z,\hat\rho_r\>$, after making it a terminal subray by possibly inverting $\gamma$, there is a decomposition $\gamma = \ldots \nu_2\mu_1\nu_1 \mu_0$ into an alternating concatenation where the $\mu_l$'s are the maximal subpaths of $\gamma$ of height $>t$ that are in $\<Z,\hat \rho_r\>$, and the $\nu_l$'s are the subpaths of $\gamma$ that are complementary to the $\mu_l$'s. Each subpath $\nu_l$ has fixed endpoints, is contained in $G_t$, and is not an element of $\<Z,\hat \rho_r\>$. Further, $\nu_l$ is finite unless the decomposition of $\gamma$ is finite and $\nu_l$ is the leftmost term of the decomposition. Since $H_t$ is not a zero stratum, each component of $G_t$ is non-contractible and hence $f$-invariant. We prove that the above decomposition of $\gamma$ is finite by assuming that it is not and arguing to a contradiction.
We claim that for all $l$ and all $m \ge 1$ the following hold: \begin{enumerate} \item If $\nu_l$ is finite, not all of $f^m_\#(\nu_l)$ is cancelled when $f^m_\#(\mu_{l})f^m_\#(\nu_l)f^m_\#( \mu_{l-1})$ is tightened to $f^m_\#(\mu_{l}\nu_l \mu_{l-1})$. Moreover, as $m \to \infty$ the part of $f^m_\#(\nu_l)$ that is not cancelled contains subpaths of $\Lambda^+_\phi$ which cross arbitrarily many edges of $H_r$. \item Not all of $f^m_\#(\mu_l)$ is cancelled when $f^m_\#(\nu_{l+1})f^m_\#(\mu_l)f^m_\#(\nu_l)$ is tightened to $f^m_\#(\nu_{l+1}\mu_l \nu_l)$. \end{enumerate} Assuming without loss of generality that $m$ is large, (1) follows from finiteness of the path~$\nu_l$ by applying Lemma~\ref{defining Z}~\pref{ItemZPFinitePaths} which implies that the path $f^m_\#(\nu_l)$ contains subpaths of~$\Lambda^+_\phi$ that cross arbitrarily many edges of $H_r$, whereas $f^m_\#(\mu_l)$ and $f^m_\#(\mu_{l-1})$ contain no such subpaths. Item (2) follows from the fact that each component of $G_t$ is $f$-invariant which implies that $f^m_\#(\nu_{l+1}\mu_l \nu_l) \not \subset G_t$.
Items (1) and (2) together imply that if $\nu_l$ is finite, the only cancellation that occurs to $f_\#^m(\nu_{l})$ when the concatenation $\ldots f_\#^m(\nu_2) f_\#^m(\mu_1) f_\#^m(\nu_1) f_\#^m( \mu_0)$ is tightened to $f^m_\#(\gamma)$ is that which occurs when the subpath $f_\#^m(\mu_l) f_\#^m(\nu_{l}) f_\#^m(\mu_{l-1})$ is tightened to $f^m_\#(\mu_{l}\nu_l \mu_{l-1})$. But then $f^m_\#(\gamma)$ contains subpaths of a generic leaf of $\Lambda^+_\phi$ that cross arbitrarily many edges of $H_r$, in contradiction to the assumption that $\gamma$ is not weakly attracted to $\Lambda^+_\phi$.
Not only have we shown that the decomposition of $\gamma$ is finite, we have shown that no $\nu_l$ term of the decomposition can be finite, and so either $\gamma = \mu_0$ or $\gamma = \nu_1\mu_0$. If $\gamma=\mu_0$ then $\gamma \in \<Z,\hat \rho_r\>$ and we are done. If $\gamma = \nu_1\mu_0$ then $\gamma$ falls into the special case and we are also done. This completes the proof of Theorem~\ref{ThmRevisedWAT}.
\printindex
\end{document} |
\begin{document}
\title[SUBALGEBRAS IN POSITIVE CHARACTERISTIC] {SUBALGEBRAS OF THE POLYNOMIAL ALGEBRA IN POSITIVE CHARACTERISTIC AND THE JACOBIAN}
\author{Alexey~V.~Gavrilov} \email{[email protected]} \maketitle
{\small Let $\ks$ be a field of characteristic $p>0$ and $R$ be a subalgebra of $\ks[X]=\ks[x_1,\dots,x_n]$. Let $J(R)$ be the ideal in $\ks[X]$ defined by $J(R)\Omega_{\ks[X]/\ks}^n=\ks[X]\Omega_{R/\ks}^n$. It is shown that if $J(R)$ is a principal ideal then $J(R)^q\subset R[x_1^p,\dots,x_n^p]$, where $q=\frac{p^n(p-1)}{2}$.} \newline {\bf Key words:} Polynomial ring; Jacobian; generalized Wronskian. \newline {\bf 2000 Mathematics Subject Classification:} 13F20; 13N15.
\section{INTRODUCTION}
Let $\ks$ be a field and $\ks[X]=\ks[x_1,\dots,x_n]$ be the polynomial ring. For most of the paper, the number $n\ge 1$ and the basis $X=\{x_1,\dots,x_n\}$ are fixed. Let $R$ be a subalgebra of $\ks[X]$. Denote by $J(R)$ the ideal in $\ks[X]$ generated by the Jacobians of sets of elements of $R$ (a formal definition will be given below). The main result is the following theorem.
{\bf Theorem} {\it Let $\ks$ be a field of characteristic $p>0$ and $R$ be a subalgebra of $\ks[X]$. If $J(R)$ is a principal ideal then $$J(R)^{q}\subset R[X^p],$$ where $R[X^p]=R[x_1^p,\dots,x_n^p]$ and $q=\frac{p^n(p-1)}{2}$. }
Presumably the statement holds for a nonprincipal ideal as well. However, a proof of this conjecture probably requires another technique.
{\bf Corollary } {\it Let $\ks$ be a field of characteristic $p>0$ and $R$ be a subalgebra of $\ks[X]$. If $J(R)=\ks[X]$ then $$R[X^p]=\ks[X].$$ }
Nousiainen [2] proved this in the case $R=\ks[F]=\ks[f_1,\dots,f_n]$ (see also [1]). In this case the condition $J(R)=\ks[X]$ is equivalent to $j(F)\in\ks^{\times}$, where $j(F)$ is the Jacobian of the polynomials $f_1,\dots,f_n$. Thus, Nousiainen's result is a positive characteristic analogue of the famous Jacobian conjecture: if $\cha(\ks)=0$ and $j(F)\in\ks^{\times}$ then $\ks[F]=\ks[X]$. The zero characteristic analogue of the Corollary is obviously false. (Consider, for example, the subalgebra $R=\ks[t-t^2,t-t^3]$ of $\ks[t]$. Then $R\neq\ks[t]$ but $J(R)=\ks[t]$.)
Nousiainen's method is based on properties of the derivations $\frac{\partial}{\partial f_i}\in\Der(\ks[X])$, which are the natural derivations of $\ks[F]$ extended to $\ks[X]$. His method can probably be applied to a more general case as well. However, our approach is based on the calculation of generalized Wronskians of a special kind. This calculation may be of independent interest.
\section{PRELIMINARIES AND NOTATION}
An element $\alpha=(\alpha_1,\dots,\alpha_n)$ of $\Ns^n$, where $\Ns$ denotes the set of non-negative integers, is called a multiindex. For $F\in\ks[X]^n$ and two multiindices $\alpha,\beta\in\Ns^n$ we will use the following notation
$$|\alpha|=\sum_{i=1}^n\alpha_i,\,\alpha!=\prod_{i=1}^n\alpha_i!,\, {\alpha\choose\beta}=\prod_{i=1}^n{\alpha_i\choose\beta_i},$$
$$X^{\alpha}=\prod_{i=1}^n x_i^{\alpha_i},\, F^{\alpha}=\prod_{i=1}^n f_i^{\alpha_i},\, \partial^{\alpha}=\prod_{i=1}^n \partial_i^{\alpha_i},$$ where $\partial_i=\frac{\partial}{\partial x_i}\in\Der(\ks[X]).$
In several places multinomial coefficients will appear, denoted by ${\alpha\choose\beta^1\dots\beta^k}$, where $\alpha,\beta^1,\dots,\beta^k\in\Ns^n$ and $\alpha=\beta^1+\dots+\beta^k$. They are defined in exactly the same way as the binomial ones. The set of multiindices possesses a partial order: by definition, $\alpha\le\beta$ iff $\alpha_i\le\beta_i$ for all $1\le i\le n$. For the sake of convenience we introduce the ``diagonal'' intervals $[k,m]=\{\alpha\in\Ns^n:k\le\alpha_i\le m, 1\le i\le n\}$, where $k,m\in\Ns$.
Let $R$ be a subalgebra of $\ks[X]$. Since $\Omega_{\ks[X]/\ks}^n$ is a free cyclic module, we can make the following definition.
{\bf Definition 1} {\it Let $R$ be a subalgebra of $\ks[X]$, where $\ks$ is a field. The Jacobian ideal of $R$ is the ideal $J(R)$ in $\ks[X]$ defined by the equality $$J(R)\Omega_{\ks[X]/\ks}^n=\ks[X]\Omega_{R/\ks}^n,$$ where $\Omega_{R/\ks}^n$ is considered a submodule of $\Omega_{\ks[X]/\ks}^n$ over $R$. }
The exact meaning of the words ``is considered a submodule'' is that we write $\ks[X]\Omega_{R/\ks}^n$ instead of $\ks[X]{\rm Im}(\Omega_{R/\ks}^n\to\Omega_{\ks[X]/\ks}^n)$. This is a slight abuse of notation, because the natural $R$-module homomorphism $\Omega_{R/\ks}^n\to\Omega_{\ks[X]/\ks}^n$ is not injective in general.
There is also a more explicit definition. For $F\in\ks[X]^n$, the Jacobian matrix and the Jacobian are defined by
$$JF=\left\|\frac{\partial f_i}{\partial x_j}\right\|_{1\le i,j\le n}\in M(n,\ks[X]),\, j(F)=\det JF\in\ks[X].$$
The module $\ks[X]\Omega_{R/\ks}^n$ is generated by $df_1\wedge\dots\wedge df_n=j(F)dx_1\wedge\dots\wedge dx_n$, where $F=(f_1,\dots,f_n)\in R^n$. Thus $$J(R)=\langle\{j(F):F\in R^n\}\rangle,$$ where $\langle S\rangle$ denotes the ideal in $\ks[X]$ generated by a set $S$. It is an easy consequence of the chain rule that the Jacobian ideal of a subalgebra generated by $n$ polynomials is a principal ideal: $$J(\ks[F])=\ks[X]j(F),\,F\in\ks[X]^n.$$
Clearly, the ring $\ks[X]$ is a free module over $\ks[X^p]$ of rank $p^n$: the set of monomials $\{X^{\alpha}:\alpha\in[0,p-1]\}$ is a natural basis of this module. This construction becomes important when $\cha(\ks)=p$ (note that in this case $\ks[X^p]$ does not depend on the choice of generators of $\ks[X]$, i.e.\ it is an invariant).
{\bf Definition 2} {\it Let $\ks$ be a field of characteristic $p>0$ and $F\in\ks[X]^n$. The matrix $U(F)\in M(p^n,\ks[X^p])$ is defined by $$F^{\alpha}=\sum_{\beta\in[0,p-1]}U(F)_{\alpha\beta}X^{\beta}, \,\alpha\in[0,p-1].$$ }
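For small cases the matrix $U(F)$ can be computed mechanically, by splitting each monomial $x^m$ of $F^{\alpha}$ as $x^{m-(m\bmod p)}\cdot x^{m\bmod p}$. The following SymPy sketch (a verification aid of ours; the helper name \texttt{U\_matrix} is not from the text, and only the case $n=1$ is treated) computes $U(F)$ for $p=2$ and $f=x^2+x$:

```python
import sympy as sp

x = sp.symbols('x')

def U_matrix(f, p):
    # U[a, b] is the k[x^p]-coefficient of the basis monomial x^b in f^a,
    # for a, b in [0, p-1] (Definition 2 in the case n = 1)
    U = sp.zeros(p, p)
    for a in range(p):
        poly = sp.Poly(sp.expand(f**a), x, modulus=p)
        for (m,), c in poly.terms():
            U[a, m % p] += c * x**(m - m % p)
    return U

U = U_matrix(x**2 + x, 2)
print(U)  # Matrix([[1, 0], [x**2, 1]])
```

For this $f$ one gets $U(F)=\left(\begin{smallmatrix}1&0\\ x^2&1\end{smallmatrix}\right)$, whose determinant is $1$, in agreement with $j(F)=2x+1\equiv 1 \pmod 2$ and $q=1$.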
\section{GENERALIZED WRONSKIANS}
In this section we compute generalized Wronskians of a special form. The key tool for this computation is the following simple lemma.
{\bf Lemma 1} {\it Let $R$ be a ring and $f\in R$. If $D_1,\dots,D_l\in\Der(R)$ and if $m\ge l\ge 0$ then $$\sum_{k=0}^m{m\choose k}(-f)^{m-k}D_1\dots D_l f^k =\begin{cases} 0 & {\rm if}\, m>l \\ l!\prod_{k=1}^lD_kf & {\rm if}\, m=l \end{cases} \eqno{(1)}$$ }
In the case $l=0$ there are no derivations and the formula becomes $$\sum_{k=0}^m{m\choose k}(-f)^{m-k} f^k =\begin{cases} 0 & {\rm if}\, m>0 \\ 1 & {\rm if}\, m=0 \end{cases} $$ which is obviously true. \newline {\it Proof}
Denote $$S_{m,l}=\sum_{k=0}^m{m\choose k}(-f)^{m-k}D_l\dots D_1 f^k.$$ This sum coincides with the left hand side of (1), except for the reverse order of the derivations. We have $S_{0,0}=1$ and $S_{m,0}=0,\,m>0$. The following equality can easily be verified $$S_{m+1,l+1}=D_{l+1}S_{m+1,l}+(m+1)(D_{l+1}f)S_{m,l}.$$ By induction on $l$, for $m>l$ we have $S_{m,l}=0$, and $$S_{l,l}=l(D_{l}f)S_{l-1,l-1}= l!\prod_{k=1}^lD_kf.$$ \qed
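The identity (1) can also be tested symbolically for small $m$ and $l$. The sketch below (our own check, not part of the formal development) takes an arbitrarily chosen polynomial $f$ in two variables with $D_1=\partial/\partial x$, $D_2=\partial/\partial y$, and verifies both cases of the formula for $l=2$:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2*y + 3*x + y**3  # an arbitrary test polynomial

def S(m, derivs):
    # the left-hand side of (1): sum_k C(m, k) (-f)^(m-k) D_1 ... D_l f^k
    total = sp.Integer(0)
    for k in range(m + 1):
        term = f**k
        for v in derivs:
            term = sp.diff(term, v)
        total += sp.binomial(m, k) * (-f)**(m - k) * term
    return sp.expand(total)

assert S(3, [x, y]) == 0  # the case m > l: the sum vanishes
assert S(2, [x, y]) == sp.expand(sp.factorial(2)*sp.diff(f, x)*sp.diff(f, y))  # m = l
```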
Let $R$ be a ring. Denote by $R[Z_{ij}]$ the polynomial ring in the $n^2$ indeterminates $Z_{ij},\,1\le i,j\le n$. If $h\in R[Z_{ij}]$ and $A\in M(n, R)$ then $h(A)$ denotes the result of the substitution $Z_{ij}\mapsto A_{ij}$.
{\bf Proposition 1} {\it For any $r\ge 1$ there exists a homogeneous polynomial $H_r\in \Zs[Z_{ij}]$ of degree $\frac{nr^n(r-1)}{2}$ with the following property. Let $R$ be a ring with derivations $D_1,\dots,D_n\in\Der(R)$. If $[D_i,D_j]=0$ for all $i,j$, then for any $F=(f_1,\dots,f_n)\in R^n$ the following equality holds $$\det W=H_r(JF),\eqno{(2)}$$ where $W\in M(r^n,R)$ and $JF\in M(n,R)$ are defined by $$W_{\alpha\beta}=D^{\alpha}F^{\beta},\,\alpha,\beta\in[0,r-1];\, (JF)_{ij}=D_if_j,\,1\le i,j\le n.$$ }
Here $JF$ is an obvious generalization of the Jacobian matrix. The determinant $\det W$ is a generalized Wronskian of the polynomials $F^{\beta}$. \newline {\it Proof}
By the Leibniz formula, $$W_{\alpha\beta}=\sum {\alpha\choose\theta^1\dots\theta^n}\prod_{i=1}^n(D^{\theta^i} f_i^{\beta_i}),$$ where $\theta^1,\dots,\theta^n\in\Ns^n$ and the sum is taken over the multiindices satisfying the equality $\sum_{i=1}^n\theta^i=\alpha$.
Let $T\in M(r^n,R)$ be determined by $$T_{\alpha\beta}={\beta\choose\alpha}(-F)^{\beta-\alpha},\, \alpha,\beta\in[0,r-1].$$ Let $W^{\prime}=WT\in M(r^n,R)$. Then $$W_{\alpha\beta}^{\prime}=\sum_{\gamma}W_{\alpha\gamma}T_{\gamma\beta}= \sum{\alpha\choose\theta^1\dots\theta^n}\prod_{i=1}^n S_i(\beta_i,\theta^i),\eqno{(3)}$$ where $$S_i(m,\theta)=\sum_{k=0}^m{m\choose k}(-f_i)^{m-k}D^{\theta}f_i^k.$$
By Lemma 1, if $m\ge|\theta|$ then
$$S_i(m,\theta) =\begin{cases}
0 & {\rm if}\, m>|\theta| \\
m!\prod_{j=1}^n(D_j f_i)^{\theta_j} & {\rm if}\, m=|\theta| \end{cases} $$
Thus if the product in the right hand side of (3) is not zero then $\beta_i\le|\theta^i|$ for all $1\le i\le n$. The latter implies
$|\beta|\le|\alpha|$. It follows that if $|\alpha|<|\beta|$ then
$W_{\alpha\beta}^{\prime}=0$. In the case $|\alpha|=|\beta|$ the product is zero unless $\beta_i=|\theta^i|,\,1\le i\le n$. Thus, if
$|\alpha|=|\beta|$ then $$W_{\alpha\beta}^{\prime}=\beta!
\sum_{|\theta^1|=\beta_1}\dots\sum_{|\theta^n|=\beta_n} {\alpha\choose\theta^1\dots\theta^n}\prod_{i=1}^n\prod_{j=1}^n (D_jf_i)^{\theta^i_j}.\eqno{(4)}$$
Put the multiindices in a total order compatible with the partial order and with the total degree $|\cdot|$ (e.g.\ the degree-lexicographic order). Then the matrix $T$ becomes upper triangular with the unit diagonal, hence $\det T=1$. The matrix $W^{\prime}$ becomes block lower triangular, hence $\det W^{\prime}$ is equal to the product of the $nr-n+1$ determinants of the blocks. Each block determinant
$\det\left\|W_{\alpha\beta}^{\prime}\right\|_{|\alpha|=|\beta|=l}$ (where $0\le l\le nr-n$) is by (4) a homogeneous polynomial in the variables $D_if_j$ of degree $ls_l$, where $s_l$ is the size of the block. Clearly $s_l=\#\{\alpha\in\Ns^n:\alpha\in[0,r-1],
|\alpha|=l\}$. The determinant $\det W=\det W^{\prime}$ is then a homogeneous polynomial of degree
$$\sum_{l=0}^{nr-n}ls_l=\sum_{\alpha\in[0,r-1]}|\alpha|=\frac{nr^n(r-1)}{2}.$$ One can see from (4) that all the coefficients of this polynomial are integers. \qed
When $n=1$, the determinant in the left hand side of (2) is an ordinary Wronskian $W(1,f,\dots,f^{r-1})$. In this case $H_r$ is a polynomial in one variable, which can be easily computed. The matrix $W^{\prime}\in M(r,R)$ is a triangular matrix with the diagonal elements $W_{kk}^{\prime}=k!(Df)^k,\,0\le k\le r-1$, hence $\det W=\det W^{\prime}$ is equal to the product of these elements. So, we have the following equality
$$\det\left\|D^kf^l\right\|_{0\le k,l\le r-1}=(Df)^{\frac{r(r-1)}{2}} \prod_{k=1}^{r-1} k!\eqno{(5)}$$ where $f\in R$ and $D\in\Der(R)$.
The formula (5) may be considered a consequence of the known Wronskian chain rule [3, Part Seven, Ex.~56].
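The special case (5) is easy to test symbolically. The sketch below (our own check, for $r=3$ and an arbitrarily chosen test polynomial) confirms that the Wronskian determinant equals $(Df)^{r(r-1)/2}\prod_{k=1}^{r-1}k!$:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 + x  # an arbitrary test polynomial
r = 3

# W[k, l] = D^k(f^l) for 0 <= k, l <= r-1, with D = d/dx
W = sp.Matrix(r, r, lambda k, l: sp.diff(f**l, x, k))

c = sp.Integer(1)
for k in range(1, r):
    c *= sp.factorial(k)  # the constant prod_{k=1}^{r-1} k!

assert sp.expand(W.det()) == sp.expand(sp.diff(f, x)**(r*(r - 1)//2) * c)
```

For $r=3$ the determinant is $2(Df)^3$, matching the right hand side of (5).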
\section{ THE DETERMINANT}
For the rest of the paper $\ks$ is a field of fixed characteristic $p>0$, and $q=\frac{p^n(p-1)}{2}$. For $F\in\ks[X]^n$, denote $$\Delta(F)=\det U(F)\in\ks[X^p].$$ Our aim is to compute this determinant.
Denote by $\phi_F$ the algebra endomorphism $$\phi_F\in\End(\ks[X]),\,\phi_F:x_i\mapsto f_i,\,1\le i\le n.$$
{\bf Lemma 2} {\it Let $F,G\in\ks[X]^n$. Let $\phi_F G=(\phi_FG_1,\dots,\phi_FG_n)\in\ks[X]^n$. Then $$\Delta(\phi_FG)=(\phi_F\Delta(G))\cdot\Delta(F).\eqno{(6)}$$ } \newline {\it Proof}
By definition, $$G^{\alpha}=\sum_{\gamma}U(G)_{\alpha\gamma}X^{\gamma},\,\alpha\in[0,p-1].$$ Applying the endomorphism $\phi_F$ to both sides of this equality, we get $$(\phi_FG)^{\alpha}=\sum_{\gamma}(\phi_FU(G)_{\alpha\gamma})F^{\gamma}= \sum_{\beta\gamma}(\phi_FU(G)_{\alpha\gamma})U(F)_{\gamma\beta}X^{\beta}.$$ On the other hand, $$(\phi_FG)^{\alpha}=\sum_{\beta}U(\phi_FG)_{\alpha\beta}X^{\beta}.$$ As the monomials $X^{\beta}$ form a basis, the coefficients are the same: $$U(\phi_FG)=(\phi_FU(G))U(F).$$ Taking the determinant, we have (6). \qed
If $F\in\ks[X]^n$ consists of linear forms, then $$F_i=\sum_{j=1}^n A_{ij}x_j,\,1\le i\le n$$ for some matrix $A\in M(n,\ks)$. We write this as $$F=AX;$$ in this notation $X$ and $F$ are considered column vectors.
{\bf Lemma 3} {\it Let $A\in M(n,\ks)$. If $F=AX$, then $$\Delta(F)=(\det A)^{q}.\eqno{(7)}$$ } \newline {\it Proof}
Let $A,B\in M(n,\ks)$. Let $G=BX$ and $F=AX$. Then $\phi_FG=BAX$. One can see that the elements of the matrix $U(BX)$ belong to the field $\ks$. It follows that $\phi_F\Delta(BX)=\Delta(BX)$, and by (6) we have $$\Delta(BAX)=\Delta(BX)\Delta(AX).$$ It is well known that any matrix over a field is a product of diagonal matrices and elementary ones. Thus, because of the latter equality, it is sufficient to prove (7) for diagonal and elementary matrices. If $A$ is elementary, then for some $j\neq k$ we have $f_j=x_j+\lambda x_k$, where $\lambda\in\ks$, and $f_i=x_i,\,i\neq j$. Then $U(F)$ is an (upper or lower) triangular matrix with the unit diagonal, hence $\Delta(F)=\det A=1$, and (7) holds. If $A$ is diagonal then $f_i=A_{ii}x_i,\,1\le i\le n$. In this case $U(F)$ is diagonal as well and $U(F)_{\alpha\alpha}=\prod_{i=1}^nA_{ii}^{\alpha_i}$. The equality (7) can be easily checked. \qed
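Lemma 3 can be tested directly on small examples. The sketch below (a verification aid of ours; the helper \texttt{U\_matrix} implementing Definition 2 for general $n$ is not from the text) computes $U(F)$ for $n=p=2$ and an elementary matrix $A$, and checks (7):

```python
import sympy as sp
from itertools import product
from functools import reduce
from operator import mul

x1, x2 = xs = sp.symbols('x1 x2')
p, n = 2, 2
q = p**n * (p - 1) // 2  # here q = 2

def U_matrix(F, xs, p):
    # rows and columns are indexed by multiindices in [0, p-1]^n (Definition 2)
    idx = list(product(range(p), repeat=len(xs)))
    U = sp.zeros(len(idx), len(idx))
    for row, alpha in enumerate(idx):
        Fa = reduce(mul, (f**e for f, e in zip(F, alpha)), sp.Integer(1))
        poly = sp.Poly(sp.expand(Fa), *xs, modulus=p)
        for mono, c in poly.terms():
            beta = tuple(m % p for m in mono)
            # split each monomial X^mono as X^(mono - beta) * X^beta
            U[row, idx.index(beta)] += reduce(mul, (v**(m - m % p) for v, m in zip(xs, mono)), c)
    return U

A = sp.Matrix([[1, 1], [0, 1]])  # an elementary matrix over GF(2)
F = [sum(A[i, j]*xs[j] for j in range(n)) for i in range(n)]
Delta = sp.Poly(U_matrix(F, xs, p).det(), *xs, modulus=p)
assert Delta == sp.Poly(A.det()**q, *xs, modulus=p)
```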
{\bf Lemma 4} {\it Let $F\in\ks[X]^n$ and
$W=\left\|\partial^{\alpha}F^{\beta}\right\|_{\alpha,\beta\in[0,p-1]}$. Then $$\det W= c_p^n\Delta(F),$$ where $c_p=\prod_{k=1}^{p-1}k!\in\ks^{\times}$. } \newline {\it Proof}
Denote $Q=\left\|\partial^{\alpha}X^{\beta}\right\| _{\alpha,\beta\in[0,p-1]}\in M(p^n,\ks[X])$. This is a Kronecker product
$$Q=Q_1\otimes\dots\otimes Q_n;\, Q_i=\left\|\partial_i^{a}x^{b}_i\right\| _{0\le a,b\le p-1}\in M(p,\ks[x_i]),1\le i\le n.$$ Applying the equality (5) to the rings $\ks[x_i]$, we have $\det Q_i=c_p, 1\le i\le n$. Since $c_p$ is a product of integers, it belongs to the prime field, hence $c_p^{p^{n-1}}=c_p$ and $$\det Q=\prod_{i=1}^n(\det Q_i)^{p^{n-1}}=c_p^{np^{n-1}}=c_p^n.$$
The elements of $U(F)$ belong to the kernels of the derivations $\partial_i$, hence $$\partial^{\alpha}F^{\beta}=\sum_{\gamma}U(F)_{\beta\gamma} \partial^{\alpha}X^{\gamma}.$$ This can be written in the matrix notation as $$W=QU(F)^{\rm T}.$$ The formula now follows by taking determinants. \qed
\section{PROOF OF THE THEOREM}
{\bf Proposition 2} {\it Let $F\in\ks[X]^n$. Then $$\Delta(F)=j(F)^{q}.$$ } \newline {\it Proof}
Let $W=\left\|\partial^{\alpha}F^{\beta}\right\| _{\alpha,\beta\in[0,p-1]}\in M(p^n,\ks[X])$. By Lemma 4 and Proposition 1, $$\det W=c_p^{n}\Delta(F)=H_{p}(JF).$$ If $A\in M(n,\ks)$ and $F=AX$ then $JF=A$, hence $$H_{p}(A)=c_p^{n}(\det A)^{q}$$ by Lemma 3. Without loss of generality, $\ks$ is infinite. The latter equality holds for any matrix, hence it is valid as a formal equality in the ring $\ks[Z_{ij}]$. Thus $$H_{p}(JF)=c_p^{n}(\det JF)^{q}=c_p^{n}j(F)^{q}.$$ \qed
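Proposition 2 can be confirmed symbolically in small cases. The following sketch (our own check, for $n=1$, where $j(F)$ is just the derivative, using the monomial-splitting computation of $U(F)$ from Definition 2; the helper name \texttt{U\_matrix} is not from the text) verifies $\Delta(F)=j(F)^q$ for $p=3$:

```python
import sympy as sp

x = sp.symbols('x')
p, f = 3, x**2 + 1  # a sample prime and test polynomial (n = 1)

def U_matrix(f, p):
    # U[a, b] is the k[x^p]-coefficient of x^b in f^a (Definition 2, n = 1)
    U = sp.zeros(p, p)
    for a in range(p):
        poly = sp.Poly(sp.expand(f**a), x, modulus=p)
        for (m,), c in poly.terms():
            U[a, m % p] += c * x**(m - m % p)
    return U

q = p*(p - 1)//2         # q = p^n (p-1)/2 with n = 1
Delta = sp.Poly(U_matrix(f, p).det(), x, modulus=p)
j = sp.diff(f, x)        # for n = 1 the Jacobian is just the derivative
assert Delta == sp.Poly(j**q, x, modulus=p)
```

Here $\Delta(F)=2x^3$ and $j(F)^q=(2x)^3\equiv 2x^3 \pmod 3$, as the proposition predicts.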
We have the following corollary, proved first by Nousiainen [2].
{\bf Corollary} {\it Let $\ks$ be a field of characteristic $p>0$ and $F\in\ks[X]^n$. Then the set $\{F^{\alpha}:\alpha\in[0,p-1]\}$ is a basis of $\ks[X]$ over $\ks[X^p]$ if and only if $j(F)\in\ks^{\times}$. }
{\bf Proposition 3} {\it Let $F\in\ks[X]^n$. Then $$\ks[X]j(F)^{q} \subset\ks[X^p][F].$$ } \newline {\it Proof}
From linear algebra we have $$\Delta(F)U(F)^{-1}={\rm adj}\,U(F)\in M(p^n,\ks[X^p]).$$ Thus $$j(F)^qX^{\alpha}= \Delta(F)X^{\alpha}=\Delta(F)\sum_{\beta}U(F)^{-1}_{\alpha\beta}F^{\beta} \in\ks[X^p][F]$$ for any multiindex $\alpha\in[0,p-1]$. The set $\{X^{\alpha}\}$ is a basis of $\ks[X]$ over $\ks[X^p]$, hence the inclusion follows. \qed \newline {\it Proof of Theorem}
We have $$J(R)=\langle\{j(F):F\in R^n\}\rangle=\langle P\rangle,\,P\in\ks[X].$$ If $P=0$, the statement is trivial. Suppose $P\neq 0$. Since $P\in J(R)$, there exists a number $m\ge 1$ such that $$P=\sum_{i=1}^mg_ij(F_i),$$ where $g_i\in\ks[X],\,F_i\in R^n,1\le i\le m$. On the other hand, $J(R)\subset\langle P\rangle$, hence $$\mu_i=j(F_i)/P\in\ks[X],\,1\le i\le m.$$
Consider the following two ideals: $$J=\{f\in \ks[X]: \ks[X]f\subset R[X^p]\};\, I=\langle \mu_1^q,\dots,\mu_m^q\rangle.$$ By Proposition 3, $\mu_i^qP^q=j(F_i)^q\in J$ for all $1\le i\le m$, hence $$IP^q\subset J.$$ Since $\ks[X]$ is a domain and $P\neq 0$, dividing the equality $P=\sum_{i=1}^mg_ij(F_i)$ by $P$ gives $\sum_{i=1}^mg_i\mu_i=1$. Raising this equality to the power $qm$, every term of the resulting expansion contains a factor $\mu_i^q$ for some $i$, so $1\in I$, whence $P^q\in J$. \qed
\section*{ACKNOWLEDGMENT}
The author thanks Prof.\ Arno van den Essen for informing him about Nousiainen's preprint.
\section*{REFERENCES}
[1] H. Bass, E.H. Connell, D. Wright. The Jacobian Conjecture: reduction of degree and formal expansion of the inverse. Bull. Amer. Math. Soc. 7(2) (1982), 287--330.

[2] P. Nousiainen. On the Jacobian problem in positive characteristic. Pennsylvania State Univ. preprint, 1981.

[3] G. P\'olya, G. Szeg\"o. Problems and Theorems in Analysis, Vol.~II. Springer-Verlag, 1976.
\end{document} |
\begin{document}
\title{On presheaf submonads of quantale-enriched categories} \author{Maria Manuel Clementino}
\author{Carlos Fitas} \address{University of Coimbra, CMUC, Department of Mathematics, 3001-501 Coimbra, Portugal} \email{[email protected], [email protected]} \thanks{}
\begin{abstract} This paper focuses on the presheaf monad and its submonads in the realm of $V$-categories, for a quantale $V$. First we present two characterisations of presheaf submonads, both using $V$-distributors: one based on admissible classes of $V$-distributors, and the other using Beck-Chevalley conditions on $V$-distributors. Then we focus on the study of the corresponding Eilenberg-Moore categories of algebras, having as main examples the formal ball monad and the so-called Lawvere monad.
\end{abstract} \subjclass[2020]{18D20, 18C15, 18D60, 18A22, 18B35, 18F75} \keywords{Quantale, $V$-category, distributor, lax idempotent monad, Presheaf monad, Ball monad, Lawvere monad} \maketitle
\section*{Introduction} Taking as a guideline Lawvere's point of view that it is worthwhile to regard metric spaces as categories enriched in the extended real half-line $[0,\infty]_+$ (see \cite{Law73}), we regard both the formal ball monad and the monad that identifies Cauchy complete spaces as its algebras -- which we call here the \emph{Lawvere monad} -- as submonads of the presheaf monad on the category $\Met$ of $[0,\infty]_+$-enriched categories. This leads us to the study of general presheaf submonads on the category of $V$-enriched categories, for a given quantale $V$.
Here we expand on known general characterisations of presheaf submonads and their algebras, and introduce a new ingredient -- conditions of Beck-Chevalley type -- which allows us to identify properties of functors and natural transformations, and, most importantly, contribute to a new facet of the behaviour of presheaf submonads.
In order to do that, after introducing the basic concepts needed for the study of $V$-categories in Section 1, Section 2 presents the presheaf monad and a characterisation of its submonads using admissible classes of $V$-distributors, which is based on \cite{CH08}. Next we introduce the already mentioned Beck-Chevalley conditions (BC*), which resemble those discussed in \cite{CHJ14}, with $V$-distributors playing the role of $V$-relations. In particular we show that lax idempotency of a monad $\mathbb{T}$ on $\Cats{V}$ can be identified via a BC* condition, and that the presheaf monad satisfies fully BC*. This leads to the use of BC* to present a new characterisation of presheaf submonads in Section 4.
The remaining sections are devoted to the study of the Eilenberg-Moore category induced by presheaf submonads. In Section 5, based on \cite{CH08}, we detail the relationship between the algebras, (weighted) cocompleteness, and injectivity. Next we focus on the algebras and their morphisms, first for the formal ball monad, and later for a general presheaf submonad. We end by presenting the relevant example of the presheaf submonad whose algebras are the so-called Lawvere complete $V$-categories \cite{CH09}, which, when $V=[0,\infty]_+$, are exactly the Cauchy complete (generalised) metric spaces, while their morphisms are the $V$-functors which preserve the limits for Cauchy sequences.
\section{Preliminaries}
Our work focuses on $V$-categories (or $V$-enriched categories, cf. \cite{EK66, Law73, Kel82}) in the special case where $V$ is a quantale.
Throughout $V$ is a \emph{commutative and unital quantale}; that is, $V$ is a complete lattice endowed with a symmetric tensor product $\otimes$, with unit $k\neq\bot$, commuting with joins, so that it has a right adjoint $\hom$; this means that, for $u,v,w\in V$, \[u\otimes v\leq w\;\Leftrightarrow\; v\leq\hom(u,w).\] As a category, $V$ is a complete and cocomplete (thin) symmetric monoidal closed category.
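As a small numeric sketch (ours, not part of the paper), the adjunction $u\otimes v\leq w\;\Leftrightarrow\;v\leq\hom(u,w)$ can be checked mechanically for the quantale $[0,\infty]_+$ introduced below, where the lattice order is "greater or equal", the tensor is (truncated) addition, and $\hom(u,w)=\max\{w-u,0\}$:

```python
# A minimal numeric sketch of the quantale V = [0, infinity]_+ :
# the lattice order is "greater or equal", the tensor is addition,
# and hom(u, w) = max(w - u, 0).
INF = float("inf")

def leq(a, b):          # a <= b in the quantale means a >= b as reals
    return a >= b

def tensor(u, v):       # u (+) v, with inf + x = inf
    return u + v

def hom(u, w):          # right adjoint to u (+) (-)
    if u == INF:
        return 0.0      # hom(inf, w) is the top element 0
    return max(w - u, 0.0)

# Check the adjunction  u (+) v <= w  <=>  v <= hom(u, w)
# on a small grid of test values.
values = [0.0, 0.5, 1.0, 2.0, INF]
assert all(
    leq(tensor(u, v), w) == leq(v, hom(u, w))
    for u in values for v in values for w in values
)
print("adjunction holds on the sample grid")
```

The reversed order is what makes $\hom$ a truncated difference rather than an implication, as in the metric examples below.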
\begin{definition} A \emph{$V$-category} is a pair $(X,a)$ where $X$ is a set and $a\colon X\times X\to V$ is a map such that: \begin{itemize} \item[(R)] for each $x\in X$, $k\leq a(x,x)$; \item[(T)] for each $x,x',x''\in X$, $a(x,x')\otimes a(x',x'')\leq a(x,x'')$. \end{itemize} If $(X,a)$, $(Y,b)$ are $V$-categories, a \emph{$V$-functor} $f\colon (X,a)\to(Y,b)$ is a map $f\colon X\to Y$ such that \begin{itemize} \item[(C)] for each $x,x'\in X$, $a(x,x')\leq b(f(x),f(x'))$. \end{itemize} The category of $V$-categories and $V$-functors will be denoted by $\Cats{V}$. Sometimes we will use the notation $X(x,y)=a(x,y)$ for a $V$-category $(X,a)$ and $x,y\in X$. \end{definition}
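For $V=[0,\infty]_+$ the axioms (R) and (T) unfold, in the reversed order, to $d(x,x)=0$ and the triangle inequality; the following toy example (ours) makes this concrete for a three-point, non-symmetric distance:

```python
# A small sketch: checking the V-category axioms for V = [0, infinity]_+ ,
# where (R) becomes d(x, x) = 0 and (T) becomes the triangle inequality
# d(x, z) <= d(x, y) + d(y, z).  The distance need not be symmetric.
d = {
    ("a", "a"): 0.0, ("a", "b"): 1.0, ("a", "c"): 2.0,
    ("b", "a"): 2.0, ("b", "b"): 0.0, ("b", "c"): 1.5,
    ("c", "a"): 4.0, ("c", "b"): 2.0, ("c", "c"): 0.0,
}
X = ["a", "b", "c"]

# (R): k <= a(x, x) in the quantale order means d(x, x) = 0.
assert all(d[x, x] == 0.0 for x in X)
# (T): a(x,y) (+) a(y,z) <= a(x,z) means d(x,z) <= d(x,y) + d(y,z).
assert all(d[x, z] <= d[x, y] + d[y, z] for x in X for y in X for z in X)
print("d is a (generalised, non-symmetric) metric: a [0,inf]_+-category")
```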
We point out that $V$ has itself a $V$-categorical structure, given by the right adjoint to $\otimes$, $\hom$; indeed, $u\otimes k\leq u\;\Rightarrow\;k\leq\hom(u,u)$, and $u\otimes\hom(u,u')\otimes\hom(u',u'')\leq u'\otimes\hom(u',u'')\leq u''$ gives that $\hom(u,u')\otimes\hom(u',u'')\leq\hom(u,u'')$. Moreover, for every $V$-category $(X,a)$, one can define its \emph{opposite $V$-category} $(X,a)^\op=(X,a^\circ)$, with $a^\circ(x,x')=a(x',x)$ for all $x,x'\in X$.
\begin{examples}\label{ex:VCat} \begin{enumerate} \item For $V=\mbox{\bf 2}=(\{0<1\},\wedge,1)$, a $\mbox{\bf 2}$-category is an
\emph{ordered set} (not necessarily antisymmetric) and a
$\mbox{\bf 2}$-functor is a \emph{monotone map}. We denote $\Cats{\mbox{\bf 2}}$ by $\Ord$.
\item The lattice $V=[0,\infty]$ ordered by the ``greater
or equal'' relation $\geq$ (so that $r\wedge s=\max\{r,s\}$, and the supremum of $S\subseteq[0,\infty]$ is given
by $\inf S$) with tensor $\otimes=+$ will be denoted by $[0,\infty]_+$. A $[0,\infty]_+$-category is a
\emph{(generalised) metric space} and a
$[0,\infty]_+$-functor is a \emph{non-expansive map} (see \cite{Law73}). We denote $\Cats{[0,\infty]_+}$ by $\Met$. We note that
\[
\hom(u,v)=v\ominus u:=\max\{v-u,0\},
\]
for all $u,v\in[0,\infty]$.
If instead of $+$ one considers the tensor product $\wedge$, then $\Cats{[0,\infty]_\wedge}$ is the category $\UMet$ of \emph{ultrametric spaces} and \emph{non-expansive maps}.
\item\label{ex.zero_um} The complete lattice $[0,1]$ with the usual
``less or equal'' relation $\le$ is isomorphic to $[0,\infty]$ via
the map $[0,1]\to[0,\infty],\,u\mapsto -\ln(u)$ where
$-\ln(0)=\infty$. Under this isomorphism, the operation $+$ on
$[0,\infty]$ corresponds to the multiplication $*$ on $[0,1]$. Denoting this quantale by $[0,1]_*$, one has $\Cats{[0,1]_*}$
isomorphic to the category $\Met=\Cats{[0,\infty]_+}$ of (generalised) metric spaces and
non-expansive maps.
Since $[0,1]$ is a frame, so that finite meets commute with infinite joins, we can also consider it as a quantale
with $\otimes=\wedge$. The category $\Cats{[0,1]_{\wedge}}$ is isomorphic
to the category $\UMet$.
Another interesting tensor product in $[0,1]$ is given by the
\emph{\L{}ukasiewicz tensor} $\odot$ where $u\odot v=\max(0,u+v-1)$; here
$\hom(u,v)=\min(1,1-u+v)$. Then $\Cats{[0,1]_\odot}$ is the category of \emph{bounded-by-1 (generalised) metric spaces} and \emph{non-expansive maps}.
\item We consider now the set
\[
\Delta=\{\varphi\colon[0,\infty]\to [0,1]\mid \text{for all
$\alpha\in[0,\infty]$: }\varphi(\alpha)=\bigvee_{\beta<\alpha}
\varphi(\beta) \},
\]
of \emph{distribution functions}. With the
pointwise order, it is a complete lattice. For $\varphi,\psi\in\Delta$ and
$\alpha\in[0,\infty]$, define $\varphi\otimes\psi\in\Delta$ by
\[
(\varphi\otimes \psi)(\alpha)=\bigvee_{\beta+\gamma\le
\alpha}\varphi(\beta)*\psi(\gamma).
\]
Then
$\otimes:\Delta\times\Delta\to\Delta$ is associative and
commutative, and
\[
\kappa:[0,\infty]\to [0,1],\, \alpha\mapsto
\begin{cases}
0 & \text{if }\alpha=0,\\
1 & \text{else}
\end{cases}
\]
is a unit for $\otimes$. Finally,
$\psi\otimes-:\Delta\to\Delta$ preserves suprema since, for all $u\in [0,1]$,
$u*-\colon[0,1]\to[0,1]$ preserves suprema. A $\Delta$-category is a
\emph{(generalised) probabilistic metric space} and a
$\Delta$-functor is a \emph{probabilistic non-expansive
map} (see \cite{HR13} and references there).
\end{enumerate} \end{examples}
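The \L{}ukasiewicz example admits a quick mechanical sanity check (ours, not from the paper): the stated $\hom$ is indeed right adjoint to $\odot$ on $[0,1]$ with the usual order:

```python
# Sanity check of the Lukasiewicz adjunction on [0,1]:
# u (.) v = max(0, u + v - 1)  and  hom(u, w) = min(1, 1 - u + w).
def luk_tensor(u, v):
    return max(0.0, u + v - 1.0)

def luk_hom(u, w):
    return min(1.0, 1.0 - u + w)

grid = [i / 4 for i in range(5)]       # 0, 0.25, ..., 1
assert all(
    (luk_tensor(u, v) <= w) == (v <= luk_hom(u, w))
    for u in grid for v in grid for w in grid
)
print("u (.) v <= w  <=>  v <= hom(u, w) on the grid")
```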
We will also make use of two additional categories, which we describe next: the category $\Rels{V}$, of sets and $V$-relations, and the category $\Dists{V}$, of $V$-categories and $V$-distributors.\\
Objects of $\Rels{V}$ are sets, while morphisms are $V$-relations, i.e., if $X$ and $Y$ are sets, a \emph{$V$-relation} $r\colon X{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} Y$ is a map $r\colon X\times Y\to V$. Composition of $V$-relations is given by \emph{relational composition}, so that the composite of $r\colon X{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} Y$ and $s\colon Y{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} Z$ is given by \[(s\cdot r)(x,z)=\bigvee_{y\in Y}r(x,y)\otimes s(y,z),\] for every $x\in X$, $z\in Z$. Identities in $\Rels{V}$ are simply identity relations, with $1_X(x,x')=k$ if $x=x'$ and $1_X(x,x')=\bot$ otherwise. The category $\Rels{V}$ has an involution $(\;)^\circ$, assigning to each $V$-relation $r\colon X{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} Y$ the $V$-relation $r^\circ\colon Y{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} X$ defined by $r^\circ(y,x)=r(x,y)$, for every $x\in X$, $y\in Y$.
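The composition formula above is a matrix product with $\bigvee$ in place of sums and $\otimes$ in place of products; a minimal sketch (ours), instantiated for $V=\mbox{\bf 2}$ where it reduces to ordinary relational composition:

```python
# Relational composition of V-relations over a finite quantale, written
# as a "matrix product" with join in place of sum and tensor in place of
# product.  Instantiated here for V = 2 = ({0,1}, min, 1).
def compose(r, s, X, Y, Z, join=max, tensor=min, bottom=0):
    # (s . r)(x, z) = join over y of r(x, y) (x) s(y, z)
    out = {}
    for x in X:
        for z in Z:
            val = bottom
            for y in Y:
                val = join(val, tensor(r[x, y], s[y, z]))
            out[x, z] = val
    return out

X, Y, Z = [0, 1], ["u", "v"], ["p"]
r = {(0, "u"): 1, (0, "v"): 0, (1, "u"): 0, (1, "v"): 1}
s = {("u", "p"): 0, ("v", "p"): 1}
sr = compose(r, s, X, Y, Z)
# In Rel, (s . r)(x, z) holds iff some middle y has r(x, y) and s(y, z).
assert sr == {(0, "p"): 0, (1, "p"): 1}
```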
Since every map $f\colon X\to Y$ can be thought of as a $V$-relation through its graph $f_\circ\colon X\times Y\to V$, with $f_\circ(x,y)=k$ if $f(x)=y$ and $f_\circ(x,y)=\bot$ otherwise, there is an injective-on-objects and faithful functor $\Set\to\Rels{V}$. When no confusion may arise, we also use $f$ to denote the $V$-relation $f_\circ$.
The category $\Rels{V}$ is a 2-category, when equipped with the 2-cells given by the pointwise order; that is, for $r,r'\colon X{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} Y$, one defines $r\leq r'$ if, for all $x\in X$, $y\in Y$, $r(x,y)\leq r'(x,y)$. This gives us the possibility of studying adjointness between $V$-relations. We note in particular that, if $f\colon X\to Y$ is a map, then $f_\circ\cdot f^\circ\leq 1_Y$ and $1_X\leq f^\circ\cdot f_\circ$, so that $f_\circ\dashv f^\circ$.\\
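The two inequalities witnessing $f_\circ\dashv f^\circ$ can be verified concretely in $\Rel$ (i.e. $V=\mbox{\bf 2}$), viewing relations as sets of pairs; this small independent sketch (ours) does so for a non-injective map:

```python
# Checking f_o -| f^o in Rel (V = 2): relations are sets of pairs,
# and comp(s, r) is the composite s . r (apply r first, then s).
def comp(s, r):
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

X, Y = {1, 2, 3}, {"a", "b"}
f = {1: "a", 2: "a", 3: "b"}
f_graph = {(x, f[x]) for x in X}         # the graph f_o
f_op = {(y, x) for (x, y) in f_graph}    # its opposite f^o
id_X = {(x, x) for x in X}
id_Y = {(y, y) for y in Y}

# counit: f_o . f^o <= 1_Y   and   unit: 1_X <= f^o . f_o
assert comp(f_graph, f_op) <= id_Y
assert id_X <= comp(f_op, f_graph)
```

Note that $f^\circ\cdot f_\circ$ is strictly larger than $1_X$ here: it also relates $1$ and $2$, which share the same image.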
Objects of $\Dists{V}$ are $V$-categories, while morphisms are $V$-distributors (also called $V$-bimodules, or $V$-profunctors); i.e., if $(X,a)$ and $(Y,b)$ are $V$-categories, a \emph{$V$-distributor} -- or, simply, a \emph{distributor} -- $\varphi\colon (X,a){\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} (Y,b)$ is a $V$-relation $\varphi\colon X{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} Y$ such that $\varphi\cdot a\leq\varphi$ and $b\cdot\varphi\leq\varphi$ (in fact $\varphi\cdot a=\varphi$ and $b\cdot\varphi=\varphi$ since the other inequalities follow from (R)). Composition of distributors is again given by relational composition, while the identities are given by the $V$-categorical structures, i.e. $1_{(X,a)}=a$. Moreover, $\Dists{V}$ inherits the 2-categorical structure from $\Rels{V}$.\\
Each $V$-functor $f\colon(X,a)\to(Y,b)$ induces two distributors, $f_*\colon(X,a){\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}}(Y,b)$ and \linebreak $f^*\colon(Y,b){\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}}(X,a)$, defined by $f_*(x,y)=Y(f(x),y)$ and $f^*(y,x)=Y(y,f(x))$, that is, $f_*=b\cdot f_\circ$ and $f^*=f^\circ\cdot b$. These assignments are functorial, as we explain below.\\
First we define 2-cells in $\Cats{V}$: for $f,f'\colon(X,a)\to(Y,b)$ $V$-functors, $f\leq f'$ when $f^*\leq (f')^*$ as distributors, so that \[f\leq f' \;\;\Leftrightarrow\;\; \forall x\in X, \,y\in Y, \; Y(y,f(x))\leq Y(y,f'(x)).\] $\Cats{V}$ is then a 2-category, and we can define two 2-functors \[\begin{array}{rclcrcl} (\;)_*\colon \Cats{V}^\co&\longrightarrow&\Dists{V}&\mbox{ and }&(\;)^*\colon\Cats{V}^\op&\longrightarrow&\Dists{V}\\ X&\longmapsto&X&&X&\longmapsto&X\\ f&\longmapsto&f_*&&f&\longmapsto&f^* \end{array}\] Note that, for any $V$-functor $f\colon(X,a)\to(Y,b)$, \[f_*\cdot f^*=b\cdot f_\circ\cdot f^\circ\cdot b\leq b\cdot b=b\mbox{ and }f^*\cdot f_*=f^\circ\cdot b\cdot b\cdot f_\circ\geq f^\circ\cdot f_\circ\cdot a\geq a;\] hence every $V$-functor induces a pair of adjoint distributors, $f_*\dashv f^*$. A $V$-functor $f\colon X\to Y$ is said to be \emph{fully faithful} if $f^*\cdot f_*=a$, i.e. $X(x,x')=Y(f(x),f(x'))$ for all $x,x'\in X$, while it is \emph{fully dense} if $f_*\cdot f^*=b$, i.e. $Y(y,y')=\bigvee_{x\in X}Y(y,f(x))\otimes Y(f(x),y')$, for all $y,y'\in Y$. A fully faithful $V$-functor $f\colon X\to Y$ does not need to be an injective map; it is so in case $X$ and $Y$ are separated $V$-categories (as defined below).\\
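In $\Met$ the adjunction $f_*\dashv f^*$ amounts to two real inequalities, since the quantale order on $[0,\infty]_+$ is reversed; a numeric sketch (ours) for an isometric inclusion of finite metric spaces:

```python
# In Met = [0,inf]_+-Cat: for a non-expansive map f, the distributors
# f_*(x,y) = b(f(x),y) and f^*(y,x) = b(y,f(x)) satisfy f_* . f^* <= b
# and a <= f^* . f_* in the quantale order, i.e. the two (reversed)
# real inequalities checked below.  Joins in [0,inf]_+ are real infima.
X = [0.0, 1.0, 4.0]                  # metric a(x,x') = |x - x'|
Y = [0.0, 0.5, 1.0, 2.0, 4.0]        # metric b(y,y') = |y - y'|
f = lambda x: x                      # the (non-expansive) inclusion

a = lambda x, x2: abs(x - x2)
b = lambda y, y2: abs(y - y2)
f_low = lambda x, y: b(f(x), y)      # f_*
f_up = lambda y, x: b(y, f(x))       # f^*

# f_* . f^* <= b :  b(y,y') <= inf_x [ b(y,f(x)) + b(f(x),y') ]
assert all(
    b(y, y2) <= min(f_up(y, x) + f_low(x, y2) for x in X)
    for y in Y for y2 in Y
)
# a <= f^* . f_* :  inf_y [ b(f(x),y) + b(y,f(x')) ] <= a(x,x')
assert all(
    min(f_low(x, y) + f_up(y, x2) for y in Y) <= a(x, x2)
    for x in X for x2 in X
)
```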
\begin{remark}\label{rem:adjcond} In $\Cats{V}$ adjointness between $V$-functors \[Y\adjunct{f}{g}X\]
can be equivalently expressed as: \[f\dashv g\;\Leftrightarrow\;f_*=g^*\;\Leftrightarrow\; g^*\dashv f^* \;\Leftrightarrow\;(\forall x\in X)\;(\forall y\in Y)\;X(x,g(y))=Y(f(x),y).\] In fact the latter condition encodes also $V$-functoriality of $f$ and $g$; that is, if $f\colon X\to Y$ and $g\colon Y\to X$ are maps satisfying the condition \[(\forall x\in X)\;(\forall y\in Y)\;\; X(x,g(y))=Y(f(x),y),\] then $f$ and $g$ are $V$-functors, with $f\dashv g$.
Furthermore, it is easy to check that, given $V$-categories $X$ and $Y$, a map $f\colon X\to Y$ is a $V$-functor whenever $f_*$ is a distributor (or whenever $f^*$ is a distributor). \end{remark}
The order defined on $\Cats{V}$ is in general not antisymmetric. For $V$-functors $f,g\colon X\to Y$, one says that $f\simeq g$ if $f\leq g$ and $g\leq f$ (or, equivalently, $f^*=g^*$). For elements $x,y$ of a $V$-category $X$, one says that $x\leq y$ if, considering the $V$-functors $x,y\colon E=(\{*\},k)\to X$ (where $k(*,*)=k$) defined by $x(*)=x$ and $y(*)=y$, one has $x\leq y$; or, equivalently, $X(x,y)\geq k$. Then, for any $V$-functors $f,g\colon X\to Y$, $f\leq g$ if, and only if, $f(x)\leq g(x)$ for every $x\in X$.
\begin{definition} A $V$-category $Y$ is said to be \emph{separated} if, for $f,g\colon X\to Y$, $f=g$ whenever $f\simeq g$; equivalently, if, for all $x,y\in Y$, $x\simeq y$ implies $x=y$. \end{definition}
The tensor product $\otimes$ on $V$ induces a tensor product on $\Cats{V}$, with $(X,a)\otimes(Y,b)=(X\times Y,a\otimes b)=X\otimes Y$, where $(X\otimes Y)((x,y),(x',y'))=X(x,x')\otimes Y(y,y')$. The $V$-category $E$ is a $\otimes$-neutral element. With this tensor product, $\Cats{V}$ becomes a monoidal closed category. Indeed, for each $V$-category $X$, the functor $X\otimes (\;)\colon \Cats{V}\to\Cats{V}$ has a right adjoint $(\;)^X$ defined by $Y^X=(\Cats{V}(X,Y), \fspstr{\;}{\;} )$, with $\fspstr{f}{g}=\bigwedge_{x\in X}Y(f(x),g(x))$ (see \cite{EK66, Law73, Kel82} for details).
It is interesting to note the following well-known result (see, for instance, \cite[Theorem 2.5]{CH09}).
\begin{theorem}\label{th:fct_dist} For $V$-categories $(X,a)$ and $(Y,b)$, and a $V$-relation $\varphi\colon X{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} Y$, the following conditions are equivalent: \begin{tfae} \item $\varphi\colon(X,a){\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}}(Y,b)$ is a distributor; \item $\varphi\colon(X,a)^\op\otimes(Y,b)\to(V,\hom)$ is a $V$-functor. \end{tfae} \end{theorem}
In particular, the $V$-categorical structure $a$ of $(X,a)$ is a $V$-distributor $a\colon(X,a){\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}}(X,a)$, and therefore a $V$-functor $a\colon(X,a)^\op\otimes (X,a)\to(V,\hom)$, which induces, via the closed monoidal structure of $\Cats{V}$, the \emph{Yoneda $V$-functor} $\mathpzc{y}_X\colon(X,a)\to (V,\hom)^{(X,a)^\op}$. Thanks to the theorem above, $V^{X^\op}$ can be equivalently described as
\[PX:=\{\varphi\colon X {\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E\,|\,\varphi \mbox{ $V$-distributor}\}.\] Then the structure $\widetilde{a}$ on $PX$ is given by \[\widetilde{a}(\varphi,\psi)=\fspstr{\varphi}{\psi}=\bigwedge_{x\in X}\hom(\varphi(x),\psi(x)),\] for every $\varphi, \psi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$, where by $\varphi(x)$ we mean $\varphi(x,*)$, or, equivalently, we consider the associated $V$-functor $\varphi\colon X\to V$. The Yoneda functor $\mathpzc{y}_X\colon X\to PX$ assigns to each $x\in X$ the distributor $x^*\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$, where we identify again $x\in X$ with the $V$-functor $x\colon E\to X$ assigning $x$ to the (unique) element of $E$. Then, for every $\varphi\in PX$ and $x\in X$, we have that \[\fspstr{\mathpzc{y}_X(x)}{\varphi}=\varphi(x),\] as expected. In particular $\mathpzc{y}_X$ is a fully faithful $V$-functor, being injective on objects (i.e. an injective map) when $X$ is a separated $V$-category. We point out that $(V,\hom)$ is separated, and so is $PX$ for every $V$-category $X$.
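The identity $\fspstr{\mathpzc{y}_X(x)}{\varphi}=\varphi(x)$ can be tested numerically in $\Met$, where meets in $[0,\infty]_+$ are real suprema; a small sketch (ours), with $\varphi$ the distance to a point outside the space:

```python
# A numeric check of the Yoneda identity <y_X(x), phi> = phi(x) in Met:
# X is a three-point metric space, phi the distance to an "ideal point"
# 2.0 not in X (a 1-Lipschitz map, hence a distributor X -o-> E).
X = [0.0, 1.0, 3.0]
a = lambda z, x: abs(z - x)
hom = lambda u, v: max(v - u, 0.0)

phi = lambda z: abs(z - 2.0)
yoneda = lambda x: (lambda z: a(z, x))   # y_X(x) = X(-, x) = x^*

# The structure of PX: <psi, phi> = sup_z hom(psi(z), phi(z))
# (the meet in the quantale [0,inf]_+ is the real supremum).
def dist(psi, ph):
    return max(hom(psi(z), ph(z)) for z in X)

assert all(dist(yoneda(x), phi) == phi(x) for x in X)
print("Yoneda: <y_X(x), phi> = phi(x) for every x")
```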
For more information on $\Cats{V}$ we refer to \cite[Appendix]{HN20}.
\section{The presheaf monad and its submonads}
The assignment $X\mapsto PX$ defines a functor $P\colon\Cats{V}\to\Cats{V}$: for each $V$-functor $f\colon X\to Y$, $Pf\colon PX\to PY$ assigns to each distributor $\xymatrix{X\ar[r]|{\circ}^{\varphi}&E}$ the distributor $\xymatrix{Y\ar[r]|{\circ}^{f^*}&X\ar[r]|{\circ}^{\varphi}&E}$. It is easily checked that the Yoneda functors $(\mathpzc{y}_X\colon X\to PX)_X$ define a natural transformation $\mathpzc{y}\colon 1\to P$. Moreover, since, for every $V$-functor $f$, the adjunction $f_*\dashv f^*$ yields an adjunction $Pf=(\;)\cdot f^*\dashv (\;)\cdot f_*=:Qf$, $P\mathpzc{y}_X$ has a right adjoint, which we denote by $\mathpzc{m}_X\colon PPX\to PX$. It is straightforward to check that $\mathbb{P}=(P,\mathpzc{m},\mathpzc{y})$ is a 2-monad on $\Cats{V}$ -- the so-called \emph{presheaf monad} --, which, by construction of $\mathpzc{m}_X$ as the right adjoint to $P\mathpzc{y}_X$, is lax idempotent (see \cite{Ho11} for details).\\
Next we present a characterisation of the submonads of $\mathbb{P}$ which partially appears in \cite{CH08}. We recall that, given two monads $\mathbb{T}=(T,\mu,\eta)$, $\mathbb{T}'=(T',\mu',\eta')$ on a category $\C$, a monad morphism $\sigma\colon\mathbb{T}\to\mathbb{T}'$ is a natural transformation $\sigma\colon T\to T'$ making the diagrams \begin{equation}\label{eq:monadmorphism} \xymatrix{1\ar[r]^{\eta}\ar[rd]_{\eta'}&T\ar[d]^{\sigma}&&TT\ar[r]^-{\sigma_T}\ar[d]_{\mu}&T'T\ar[r]^-{T'\sigma}&T'T'\ar[d]^{\mu'}\\ &T'&&T\ar[rr]_{\sigma}&&T'} \end{equation} commute. By \emph{submonad of $\mathbb{P}$} we mean a 2-monad $\mathbb{T}=(T,\mu,\eta)$ on $\Cats{V}$ with a monad morphism $\sigma:\mathbb{T}\to\mathbb{P}$ such that $\sigma_X$ is an embedding (i.e. both fully faithful and injective on objects) for every $V$-category $X$.
\begin{definition}\label{def:admi}
Given a class $\Phi$ of $V$-distributors, for every $V$-category $X$ let \[\Phi X=\{\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E\,|\,\varphi\in\Phi\}\] have the $V$-category structure inherited from the one of $PX$. We say that $\Phi$ is \emph{admissible} if, for every $V$-functor $f\colon X\to Y$ and $V$-distributors $\varphi\colon Z{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Y$ and $\psi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Z$ in $\Phi$, \begin{itemize} \item[(1)] $f^*\in\Phi$; \item[(2)] $\psi\cdot f^*\in\Phi$ and $f^*\cdot \varphi\in\Phi$; \item[(3)] $\varphi\in\Phi\;\Leftrightarrow\;(\forall y\in Y)\;y^*\cdot\varphi\in\Phi$; \item[(4)] for every $V$-distributor $\gamma\colon PX{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$, if the restriction of $\gamma$ to $\Phi X$ belongs to $\Phi$, then $\gamma\cdot(\mathpzc{y}_X)_*\in\Phi$. \end{itemize} \end{definition}
\begin{lemma} Every admissible class $\Phi$ of $V$-distributors induces a submonad $\Phi=(\Phi,\mathpzc{m}^\Phi,\mathpzc{y}^\Phi)$ of $\mathbb{P}$. \end{lemma} \begin{proof} For each $V$-category $X$, equip $\Phi X$ with the initial structure induced by the inclusion $\sigma_X\colon \Phi X\to PX$, that is, for every $\varphi,\psi\in \Phi X$, $\Phi X(\varphi,\psi)=PX(\varphi,\psi)$. For each $V$-functor $f\colon X\to Y$ and $\varphi\in\Phi X$, by condition (2), $\varphi\cdot f^*\in\Phi$, and so $Pf$ (co)restricts to $\Phi f\colon\Phi X\to\Phi Y$.
Condition (1) guarantees that $\mathpzc{y}_X\colon X\to PX$ corestricts to $\mathpzc{y}^\Phi_X\colon X\to \Phi X$.
Finally, condition (4) guarantees that $\mathpzc{m}_X\colon PPX\to PX$ also (co)restricts to $\mathpzc{m}^\Phi_X:\Phi\Phi X\to\Phi X$: if $\gamma\colon\Phi X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ belongs to $\Phi$, then $\widetilde{\gamma}:=\gamma\cdot (\sigma_X)^*\colon PX{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ belongs to $\Phi$ by (2), and then, since $\gamma$ is the restriction of $\widetilde{\gamma}$ to $\Phi X$, by (4) $\mathpzc{m}_X(\widetilde{\gamma})=\gamma\cdot(\sigma_X)^*\cdot(\mathpzc{y}_X)_* =\gamma\cdot(\sigma_X)^*\cdot(\sigma_X)_*\cdot(\mathpzc{y}^\Phi_X)_*=\gamma\cdot(\mathpzc{y}^\Phi_X)_*\in\Phi$.
By construction, $(\sigma_X)_X$ is a natural transformation, each $\sigma_X$ is an embedding, and $\sigma$ makes diagrams \eqref{eq:monadmorphism} commute. \end{proof}
\begin{theorem}\label{th:Phi} For a 2-monad $\mathbb{T}=(T,\mu,\eta)$ on $\Cats{V}$, the following assertions are equivalent: \begin{tfae} \item $\mathbb{T}$ is isomorphic to $\Phi$, for some admissible class of $V$-distributors $\Phi$. \item $\mathbb{T}$ is a submonad of $\mathbb{P}$. \end{tfae} \end{theorem}
\begin{proof} (i) $\Rightarrow$ (ii) follows from the lemma above.\\
(ii) $\Rightarrow$ (i): Let $\sigma\colon\mathbb{T}\to\mathbb{P}$ be a monad morphism, with $\sigma_X$ an embedding for every $V$-category $X$, which, for simplicity, we assume to be an inclusion. First we show that \begin{equation}\label{eq:fai}
\Phi=\{\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Y\,|\,\forall y\in Y\;y^*\cdot\varphi\in TX\} \end{equation} is admissible. In the sequel $f\colon X\to Y$ is a $V$-functor.\\
(1) For each $x\in X$, $x^*\cdot f^*=f(x)^*\in TY$, and so $f^*\in\Phi$.\\
(2) If $\psi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Z$ is a $V$-distributor in $\Phi$, and $z\in Z$, since $z^*\cdot\psi\in TX$, $T f(z^*\cdot\psi)=z^*\cdot\psi\cdot f^*\in TY$, and therefore $\psi\cdot f^*\in \Phi$ by definition of $\Phi$. Now, if $\varphi\colon Z{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Y\in\Phi$, then, for each $x\in X$, $x^*\cdot f^*\cdot\varphi=f(x)^*\cdot\varphi\in TZ$ because $\varphi\in\Phi$, and so $f^*\cdot\varphi\in\Phi$.\\
(3) follows from the definition of $\Phi$.\\
(4) If the restriction of $\gamma\colon PX{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ to $TX$, i.e., $\gamma\cdot(\sigma_X)_*$, belongs to $\Phi$, then $\mu_X(\gamma\cdot(\sigma_X)_*)=\gamma\cdot(\sigma_X)_*\cdot(\eta_X)_*=\gamma\cdot(\mathpzc{y}_X)_*$ belongs to $TX$. \end{proof}
We point out that, like $\mathbb{P}$, the monad $\mathbb{T}$ is also lax idempotent. This assertion is shown at the end of the next section, making use of the Beck-Chevalley conditions we study there. (We note that the arguments of \cite[Prop. 16.2]{CLF20}, which states conditions under which a submonad of a lax idempotent monad is still lax idempotent, cannot be used directly here.)
\section{The presheaf monad and Beck-Chevalley conditions}
In this section our aim is to show that $\mathbb{P}$ verifies some interesting conditions of Beck-Chevalley type, that resemble the BC conditions studied in \cite{CHJ14}. We recall from \cite{CHJ14} that a commutative square in $\Set$ \[\xymatrix{W\ar[r]^l\ar[d]_g&Z\ar[d]^h\\ X\ar[r]_f&Y}\] is said to be a \emph{BC-square} if the following diagram commutes in $\Rel$
\[\xymatrix{W\ar[r]|-{\object@{|}}^{l_\circ}&Z\\
X\ar[u]|-{\object@{|}}^{g^\circ}\ar[r]|-{\object@{|}}_{f_\circ}&Y,\ar[u]|-{\object@{|}}_{h^\circ}}\] where, given a map $t\colon A \to B$, $t_\circ\colon A{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} B$ denotes the relation defined by $t$ and $t^\circ\colon B{\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex}} A$ its opposite. Since $t_\circ\dashv t^\circ$ in $\Rel$, this is in fact a kind of Beck-Chevalley condition. A $\Set$-endofunctor $T$ is said to satisfy BC if it preserves BC-squares, while a natural transformation $\alpha\colon T\to T'$ between two $\Set$-endofunctors satisfies BC if, for each map $f\colon X\to Y$, its naturality square \[\xymatrix{TX\ar[r]^{\alpha_X}\ar[d]_{Tf}&T'X\ar[d]^{T'f}\\ TY\ar[r]_{\alpha_Y}&T'Y}\] is a BC-square.
In our situation, for endofunctors and natural transformations in $\Cats{V}$, the role of $\Rel$ is played by $\Dists{V}$.
\begin{definition} A commutative square in $\Cats{V}$ \[\xymatrix{(W,d)\ar[r]^l\ar[d]_g&(Z,c)\ar[d]^h\\ (X,a)\ar[r]_f&(Y,b)}\] is said to be a \emph{BC*-square} if the following diagram commutes in $\Dists{V}$ \begin{equation}\label{diag:BC*}
\xymatrix{(W,d)\ar[r]|-{\circ}^{l_*}&(Z,c)\\
(X,a)\ar[u]|-{\circ}^{g^*}\ar[r]|-{\circ}_{f_*}&(Y,b)\ar[u]|-{\circ}_{h^*}}\end{equation} (or, equivalently, $h^*\cdot f_*\leq l_*\cdot g^*$). \end{definition}
\begin{remarks}\label{rem:BC*} \begin{enumerate} \item For a $V$-functor $f\colon(X,a)\to(Y,b)$, to be fully faithful is equivalent to \[\xymatrix{(X,a)\ar[r]^1\ar[d]_1&(X,a)\ar[d]^f\\ (X,a)\ar[r]_f&(Y,b)}\] being a BC*-square (exactly in parallel with the characterisation of monomorphisms via BC-squares). \item We point out that, contrarily to the case of BC-squares, in BC*-squares the horizontal and the vertical arrows play different roles; that is, the fact that diagram \eqref{diag:BC*} is a BC*-square is not equivalent to \[\xymatrix{(W,d)\ar[r]^g\ar[d]_l&(X,a)\ar[d]^f\\ (Z,c)\ar[r]_h&(Y,b)}\] being a BC*-square; it is indeed equivalent to its \emph{dual} \[\xymatrix{(W,d^\circ)\ar[r]^g\ar[d]_l&(X,a^\circ)\ar[d]^f\\ (Z,c^\circ)\ar[r]_h&(Y,b^\circ)}\] being a BC*-square. \end{enumerate} \end{remarks}
\begin{definitions} \begin{enumerate} \item A \emph{functor $T\colon \Cats{V}\to\Cats{V}$ satisfies BC*} if it preserves BC*-squares. \item Given two endofunctors $T,T'$ on $\Cats{V}$, a \emph{natural transformation $\alpha\colon T\to T'$ satisfies BC*} if the naturality diagram
\[\xymatrix{TX\ar[r]^{\alpha_X}\ar[d]_{Tf}&T'X\ar[d]^{T'f}\\ TY\ar[r]_{\alpha_Y}&T'Y}\] is a BC*-square for every morphism $f$ in $\Cats{V}$. \item \emph{A 2-monad $\mathbb{T}=(T,\mu,\eta)$} on $\Cats{V}$ is said to satisfy \emph{fully BC*} if $T$, $\mu$, and $\eta$ satisfy BC*. \end{enumerate} \end{definitions}
\begin{remark} In the case of $\Set$ and $\Rel$, since the condition of being a BC-square is equivalent, under the Axiom of Choice (AC), to being a weak pullback, a $\Set$-monad $\mathbb{T}$ \emph{satisfies fully BC} if, and only if, it is \emph{weakly cartesian} (again, under (AC)). This, together with the fact that there are relevant $\Set$-monads -- like for instance the ultrafilter monad -- whose functor and multiplication satisfy BC but the unit does not, led the authors of \cite{CHJ14} to name such monads as \emph{BC-monads}. This is the reason why we use \emph{fully BC*} instead of BC* to identify these $\Cats{V}$-monads.
As a side remark we recall that, still in the $\Set$-context, a partial BC-condition was studied by Manes in \cite{Ma02}: for a $\Set$-monad $\mathbb{T}=(T,\mu,\eta)$ to be \emph{taut} requires that $T$, $\mu$, $\eta$ satisfy BC for commutative squares where $f$ is monic. \end{remark}
Our first use of BC* is the following characterisation of lax idempotency for a 2-monad $\mathbb{T}$ on $\Cats{V}$.
\begin{prop}\label{prop:laxidpt} Let $\mathbb{T}=(T,\mu,\eta)$ be a 2-monad on $\Cats{V}$. \begin{enumerate} \item The following assertions are equivalent: \begin{tfae} \item[\em (i)] $\mathbb{T}$ is lax idempotent. \item[\em (ii)] For each $V$-category $X$, the diagram \begin{equation}\label{eq:laxidpt} \xymatrix{TX\ar[r]^-{T\eta_X}\ar[d]_{\eta_{TX}}&TTX\ar[d]^{\mu_X}\\ TTX\ar[r]_-{\mu_X}&TX} \end{equation} is a BC*-square. \end{tfae} \item If $\mathbb{T}$ is lax idempotent, then $\mu$ satisfies BC*. \end{enumerate} \end{prop} \begin{proof} (1) (i) $\Rightarrow$ (ii): The monad $\mathbb{T}$ is lax idempotent if, and only if, for every $V$-category $X$, $T\eta_X\dashv \mu_X$, or, equivalently, $\mu_X\dashv \eta_{TX}$. These two conditions are equivalent to $(T\eta_X)_*=(\mu_X)^*$ and $(\mu_X)_*=(\eta_{TX})^*$. Hence $(\mu_X)^*(\mu_X)_*=(T\eta_X)_* (\eta_{TX})^*$ as claimed.
(ii) $\Rightarrow$ (i): From $(\mu_X)^* (\mu_X)_*=(T\eta_X)_* (\eta_{TX})^*$ it follows that \[(\mu_X)_*=(\mu_X)_* (\mu_X)^* (\mu_X)_*=(\mu_X\cdot T\eta_X)_* (\eta_{TX})^*=(\eta_{TX})^*,\] that is, $\mu_X\dashv \eta_{TX}$.\\
(2) BC* for $\mu$ follows directly from lax idempotency of $\mathbb{T}$, since
\[\xymatrix{TTX\ar[r]^-{(\mu_X)_*}|-{\circ}&TX\ar@{}[rrd]|{=}&&TTX\ar[r]^-{(\eta_{TX})^*}|-{\circ}&TX\\
TTY\ar[u]^{(TTf)^*}|-{\circ}\ar[r]_-{(\mu_Y)_*}|-{\circ}&TY\ar[u]_{(Tf)^*}|-{\circ}&&
TTY\ar[u]^{(TTf)^*}|-{\circ}\ar[r]_-{(\eta_{TY})^*}|-{\circ}&TY\ar[u]_{(Tf)^*}|-{\circ}}\] and the latter diagram commutes trivially. \end{proof}
\begin{remark} Thanks to Remarks \ref{rem:BC*} we know that, if we invert the role of $\eta_{TX}$ and $T\eta_X$ in \eqref{eq:laxidpt}, we get a characterisation of oplax idempotent 2-monad: $\mathbb{T}$ is oplax idempotent if, and only if, the diagram \[\xymatrix{TX\ar[r]^-{\eta_{TX}}\ar[d]_{T\eta_{X}}&TTX\ar[d]^{\mu_X}\\ TTX\ar[r]_-{\mu_X}&TX}\] is a BC*-square. \end{remark}
\begin{theorem} The presheaf monad $\mathbb{P}=(P,\mathpzc{m},\mathpzc{y})$ satisfies fully BC*. \end{theorem}
\begin{proof} (1) \emph{$P$ satisfies BC*}: Given a BC*-square \[\xymatrix{(W,d)\ar[r]^l\ar[d]_g&(Z,c)\ar[d]^h\\ (X,a)\ar[r]_f&(Y,b)}\] in $\Cats{V}$, we want to show that \begin{equation}\label{eq:BC}
\xymatrix{PW\ar[r]|-{\circ}^{(Pl)_*}&PZ\\
PX\ar@{}[ru]|{\geq}\ar[u]|-{\circ}^{(Pg)^*}\ar[r]|-{\circ}_{(Pf)_*}&PY.\ar[u]|-{\circ}_{(Ph)^*}} \end{equation} For each $\varphi\in PX$ and $\psi\in PZ$, we have \begin{align*} (Ph)^*(Pf)_*(\varphi,\psi)&= (Ph)^\circ\cdot\widetilde{b}\cdot Pf(\varphi,\psi)\\ &=\widetilde{b}(Pf(\varphi),Ph(\psi))\\ &= \bigwedge_{y\in Y}\hom(\varphi\cdot f^*(y),\psi\cdot h^*(y))\\ &\leq \displaystyle\bigwedge_{x\in X} \hom(\varphi\cdot f^*\cdot f_*(x),\psi\cdot h^*\cdot f_*(x))\\ &\leq \displaystyle\bigwedge_{x\in X}\hom(\varphi(x),\psi\cdot l_*\cdot g^*(x))&\mbox{($\varphi\leq \varphi\cdot f^*\cdot f_*$, the given square is a BC*-square)}\\ &= \widetilde{a}(\varphi,\psi\cdot l_*\cdot g^*)\\ &\leq \widetilde{a}(\varphi,\psi\cdot l_*\cdot g^*)\otimes \widetilde{c}(\psi\cdot l_*\cdot l^*,\psi)&\mbox{(because $\psi\cdot l_*\cdot l^*\leq\psi$)}\\ &=\widetilde{a}(\varphi,Pg(\psi\cdot l_*))\otimes\widetilde{c}(Pl(\psi\cdot l_*),\psi)\\ &\leq \displaystyle\bigvee_{\gamma\in PW}\widetilde{a}(\varphi,Pg(\gamma))\otimes\widetilde{c}(Pl(\gamma),\psi)\\ &=(Pl)_*(Pg)^*(\varphi,\psi). \end{align*}
(2) \emph{$\mu$ satisfies BC*}: For each $V$-functor $f\colon X\to Y$, from the naturality of $\mathpzc{y}$ it follows that the following diagram
\[\xymatrix{PPX\ar[r]|-{\circ}^-{(\mathpzc{y}_{PX})^*}&PX\\
PPY\ar[u]|-{\circ}^{(PPf)^*}\ar[r]|-{\circ}_-{(\mathpzc{y}_{PY})^*}&PY\ar[u]|-{\circ}_{(Pf)^*}}\] commutes. Lax idempotency of $\mathbb{P}$ means in particular that $\mathpzc{m}_X\dashv \mathpzc{y}_{PX}$, or, equivalently, $(\mathpzc{m}_X)_*=(\mathpzc{y}_{PX})^*$, and therefore the commutativity of this diagram shows BC* for $\mathpzc{m}$.
(3) \emph{$\mathpzc{y}$ satisfies BC*}: Once again, for each $V$-functor $f\colon(X,a)\to(Y,b)$, we want to show that the diagram
\[\xymatrix{X\ar[r]|-{\circ}^-{(\mathpzc{y}_X)_*}&PX\\
Y\ar[u]|-{\circ}^{f^*}\ar[r]|-{\circ}_-{(\mathpzc{y}_Y)_*}&PY\ar[u]|-{\circ}_{(Pf)^*}}\] commutes. Let $y\in Y$ and $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ belong to $PX$. Then \begin{align*}((Pf)^*(\mathpzc{y}_Y)_*)(y,\varphi)&=((Pf)^\circ\cdot \widetilde{b}\cdot\mathpzc{y}_Y)(y,\varphi)=\widetilde{b}(\mathpzc{y}_Y(y),Pf(\varphi))=Pf(\varphi)(y)=\bigvee_{x\in X}b(y,f(x))\otimes\varphi(x)\\ &=\bigvee_{x\in X}b(y,f(x))\otimes\widetilde{a}(\mathpzc{y}_X(x),\varphi)=(\widetilde{a}\cdot\mathpzc{y}_X\cdot f^\circ\cdot b)(y,\varphi)=(\mathpzc{y}_X)_*\cdot f^*(y,\varphi),\\ \end{align*} as claimed. \end{proof}
\begin{corollary}\label{cor:laxidpt} Let $\mathbb{T}=(T,\mu,\eta)$ on $\Cats{V}$ be a 2-monad on $\Cats{V}$, and $\sigma\colon\mathbb{T}\to\mathbb{P}$ be a monad morphism, pointwise fully faithful. Then $\mathbb{T}$ is lax idempotent. \end{corollary}
\begin{proof} We know that $\mathbb{P}$ is lax idempotent, and so, for every $V$-category $X$, $(\mathpzc{m}_X)_*=(\mathpzc{y}_{PX})^*$. Consider diagram \eqref{eq:monadmorphism}. The commutativity of the diagram on the right gives that $(\mu_X)_*=(\sigma_X)^*(\sigma_X)_*(\mu_X)_*=(\sigma_X)^*(\mathpzc{m}_X)_*(P\sigma_X)_*(\sigma_{TX})_*$; using the equality above, and preservation of fully faithful $V$-functors by $\mathbb{P}$ -- which follows from BC* -- we obtain: \begin{align*} (\mu_X)_*&=(\sigma_X)^*(\mathpzc{y}_{PX})^*(P\sigma_X)_*(\sigma_{TX})_*=(\sigma_X)^*(\eta_{PX})^*(\sigma_{PX})^*(P\sigma_X)_*(\sigma_{TX})_* =\\ &=(\eta_{TX})^*\cdot (\sigma_{TX})^*(P\sigma_X)^*(P\sigma_X)_*(\sigma_{TX})_*=(\eta_{TX})^*.\end{align*} \end{proof}
\section{Presheaf submonads and Beck-Chevalley conditions}
In this section, for a general 2-monad $\mathbb{T}=(T,\mu,\eta)$ on $\Cats{V}$, we relate its BC* properties with the existence of a (sub)monad morphism $\mathbb{T}\to\mathbb{P}$. We remark that a necessary condition for $\mathbb{T}$ to be a submonad of $\mathbb{P}$ is that $TX$ is separated for every $V$-category $X$, since $PX$ is separated and separated $V$-categories are stable under monomorphisms.
\begin{theorem}\label{th:submonad} For a 2-monad $\mathbb{T}=(T,\mu,\eta)$ on $\Cats{V}$ with $TX$ separated for every $V$-category $X$, the following assertions are equivalent: \begin{tfae} \item $\mathbb{T}$ is a submonad of $\mathbb{P}$. \item $\mathbb{T}$ is lax idempotent and satisfies BC*, and both $\eta_X$ and $Q\eta_X\cdot\mathpzc{y}_{TX}$ are fully faithful, for each $V$-category $X$. \item $\mathbb{T}$ is lax idempotent, $\mu$ and $\eta$ satisfy BC*, and both $\eta_X$ and $Q\eta_X\cdot\mathpzc{y}_{TX}$ are fully faithful, for each $V$-category $X$. \item $\mathbb{T}$ is lax idempotent, $\eta$ satisfies BC*, and both $\eta_X$ and $Q\eta_X\cdot\mathpzc{y}_{TX}$ are fully faithful, for each $V$-category $X$. \end{tfae} \end{theorem}
\begin{proof} (i) $\Rightarrow$ (ii): By (i) there exists a monad morphism $\sigma\colon \mathbb{T}\to \mathbb{P}$ with $\sigma_X$ an embedding for every $V$-category $X$. Since $\mathbb{P}$ is lax idempotent, Corollary \ref{cor:laxidpt} gives that $\mathbb{T}$ is lax idempotent as well. Moreover, from $\sigma_X\cdot \eta_X=\mathpzc{y}_X$ and full faithfulness of $\mathpzc{y}_X$ it follows that $\eta_X$ is fully faithful. (In fact this is valid for any monad with a monad morphism into $\mathbb{P}$.)
To show that $\mathbb{T}$ satisfies BC* we use the characterisation of Theorem \ref{th:Phi}; that is, we know that there is an admissible class $\Phi$ of distributors so that $\mathbb{T}=\Phi$. Then BC* for $T$ follows directly from the fact that $\Phi f$ is a (co)restriction of $Pf$, for every $V$-functor $f$.
BC* for $\eta$ follows from BC* for $\mathpzc{y}$ and full faithfulness of $\sigma$ since, for any commutative diagram in $\Cats{V}$ \[\xymatrix{\cdot\ar[r]\ar[d]&\cdot\ar[r]^f\ar[d]&\cdot\ar[d]\\
\cdot\ar@{}[ru]|{\fbox{1}}\ar[r]&\ar@{}[ru]|{\fbox{2}}\cdot\ar[r]_g&\cdot}\] in which the outer rectangle $\fbox{1}\fbox{2}$ satisfies BC* and $f$ and $g$ are fully faithful, the square $\fbox{1}$ satisfies BC* as well.
Thanks to Proposition \ref{prop:laxidpt}, BC* for $\mu$ follows directly from lax idempotency of $\mathbb{T}$.\\
The implications (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv) are obvious.\\
(iv) $\Rightarrow$ (i): For each $V$-category $(X,a)$, we denote by $\widehat{a}$ the $V$-category structure on $TX$, and define the $V$-functor $(\xymatrix{TX\ar[r]^-{\sigma_X}&PX})=(\xymatrix{TX\ar[r]^-{\mathpzc{y}_{TX}}&PTX\ar[r]^-{Q\eta_X}&PX})$; that is, $\sigma_X(\mathfrak{x})=(\xymatrix{X\ar[r]^{\eta_X}&TX\ar[r]|-{\object@{|}}^{\widehat{a}}&TX\ar[r]|-{\object@{|}}^-{\mathfrak{x}^\circ}&E})=\widehat{a}(\eta_X(\;),\mathfrak{x})$. As a composite of fully faithful $V$-functors, $\sigma_X$ is fully faithful; moreover, it is an embedding because, by hypothesis, $TX$ and $PX$ are separated $V$-categories.\\
To show that \emph{$\sigma=(\sigma_X)_X\colon T\to P$ is a natural transformation}, that is, for each $V$-functor $f\colon X\to Y$, the outer diagram
\[\xymatrix{TX\ar[r]^{\mathpzc{y}_{TX}}\ar[d]_{Tf}&PTX\ar[r]^{Q\eta_X}\ar[d]|{PTf}&PX\ar[d]^{Pf}\\
TY\ar@{}[ru]|{\fbox{1}}\ar[r]_{\mathpzc{y}_{TY}}&PTY\ar@{}[ru]|{\fbox{2}}\ar[r]_{Q\eta_Y}&PY}\] commutes, we only need to observe that $\fbox{1}$ is commutative and BC* for $\eta$ implies that $\fbox{2}$ is commutative.\\
It remains to show that \emph{$\sigma$ is a monad morphism}: for each $V$-category $(X,a)$ and $x\in X$, \[(\sigma_X\cdot\eta_X)(x)=\widehat{a}(\eta_X(\;),\eta_X(x))=a(-,x)=x^*=\mathpzc{y}_X(x),\] and so $\sigma\cdot \eta=\mathpzc{y}$. To check that, for every $V$-category $(X,a)$, the following diagram commutes \[\xymatrix{TTX\ar[r]^{\sigma_{TX}}\ar[d]_{\mu_X}&PTX\ar[r]^{P\sigma_X}&PPX\ar[d]^{\mathpzc{m}_X}\\ TX\ar[rr]_{\sigma_X}&&PX,}\] let $\mathfrak{X}\in TTX$. We have \begin{align*}
\mathpzc{m}_X\cdot P\sigma_X\cdot \sigma_{TX}(\mathfrak{X})&=(\xymatrix{X\ar[r]^{\mathpzc{y}_X}&PX\ar[r]|-{\object@{|}}^{\widetilde{a}}&PX\ar[r]|-{\object@{|}}^{\sigma_X^\circ}&
TX\ar[r]^{\eta_{TX}}&TTX\ar[r]|-{\object@{|}}^{\widehat{\widehat{a}}}&TTX\ar[r]|-{\object@{|}}^-{\mathfrak{X}^\circ}&E})\\
&=(\xymatrix{X\ar[r]^{\eta_X}&TX\ar[r]|-{\object@{|}}^{\widehat{a}}&TX\ar[r]^{\eta_{TX}}&TTX\ar[r]|-{\object@{|}}^{\widehat{\widehat{a}}}&TTX\ar[r]|-{\object@{|}}^-{\mathfrak{X}^\circ}&E}), \end{align*} since $\sigma_X^\circ\cdot\widetilde{a}\cdot\mathpzc{y}_X(x,\mathfrak{x})=\widetilde{a}(\mathpzc{y}_X(x),\sigma_X(\mathfrak{x}))=\sigma_X(\mathfrak{x})(x)=\widehat{a}\cdot\eta_X(x,\mathfrak{x})$, and
\[\sigma_X\cdot \mu_X(\mathfrak{X})=(\xymatrix{X\ar[r]^{\eta_X}&TX\ar[r]|-{\object@{|}}^{\widehat{a}}&TX\ar[r]|-{\object@{|}}^{\mu_X^\circ}&TTX\ar[r]|-{\object@{|}}^-{\mathfrak{X}^\circ}&E}).\] Hence the commutativity of the diagram follows from the equality $\widehat{\widehat{a}}\cdot \eta_{TX}\cdot\widehat{a}\cdot\eta_X=\mu_X^\circ\cdot \widehat{a}\cdot\eta_X$, which we show next. Indeed, \[\widehat{\widehat{a}}\cdot\eta_{TX}\cdot\widehat{a}\cdot\eta_X=(\eta_{TX})_*(\eta_X)_*=(\eta_{TX}\cdot\eta_X)_*=(T\eta_X\cdot\eta_X)_*=(T\eta_X)_*(\eta_X)_*= \mu_X^* (\eta_X)_*=\mu_X^\circ\cdot\widehat{a}\cdot\eta_X.\] \end{proof}
The proof of the theorem allows us to conclude immediately the following result. \begin{corollary}\label{cor:morphism} Given a 2-monad $\mathbb{T}=(T,\mu,\eta)$ on $\Cats{V}$ such that $\eta$ satisfies BC*, there is a monad morphism $\mathbb{T}\to\mathbb{P}$ if, and only if, $\eta$ is pointwise fully faithful. \end{corollary}
\section{On algebras for submonads of $\mathbb{P}$: a survey}
In the remainder of this paper we will study, given a submonad $\mathbb{T}$ of $\mathbb{P}$, the category $(\Cats{V})^\mathbb{T}$ of (Eilenberg-Moore) $\mathbb{T}$-algebras. Here we collect some known results which will be useful in the following sections. We will denote by $\Phi(\mathbb{T})$ the admissible class of distributors that induces the monad $\mathbb{T}$ (defined in \eqref{eq:fai}).
The following result, which is valid for any lax-idempotent monad $\mathbb{T}$, asserts that, for any $V$-category, to be a $\mathbb{T}$-algebra is a property (see, for instance, \cite{EF99} and \cite{CLF20}).
\begin{theorem}\label{th:KZ} Let $\mathbb{T}$ be a lax idempotent monad on $\Cats{V}$. \begin{enumerate} \item For a $V$-category $X$, the following assertions are equivalent: \begin{tfae} \item[\em (i)] $\alpha\colon TX\to X$ is a $\mathbb{T}$-algebra structure on $X$; \item[\em (ii)] there is a $V$-functor $\alpha\colon TX\to X$ such that $\alpha\dashv\eta_X$ with $\alpha\cdot\eta_X=1_X$; \item[\em (iii)] there is a $V$-functor $\alpha\colon TX\to X$ such that $\alpha\cdot\eta_X=1_X$; \item[\em (iv)] $\alpha\colon TX\to X$ is a split epimorphism in $\Cats{V}$. \end{tfae} \item If $(X,\alpha)$ and $(Y,\beta)$ are $\mathbb{T}$-algebra structures, then every $V$-functor $f\colon X\to Y$ satisfies $\beta\cdot Tf\leq f\cdot\alpha$. \end{enumerate} \end{theorem}
Next we formulate characterisations of $\mathbb{T}$-algebras that can be found in \cite{Ho11, CH08}, using \emph{injectivity} with respect to certain \emph{embeddings}, and using the existence of certain \emph{weighted colimits}, notions that we recall very briefly in the sequel.
\begin{definition}\cite{Es98} A $V$-functor $f\colon X\to Y$ is a \emph{$T$-embedding} if $Tf$ is a left adjoint right inverse; that is, there exists a $V$-functor $Tf_\sharp$ such that $Tf\dashv Tf_\sharp$ and $Tf_\sharp\cdot Tf=1_{TX}$. \end{definition}
For each submonad $\mathbb{T}$ of $\mathbb{P}$, the class $\Phi(\mathbb{T})$ allows us to identify easily the $T$-embeddings.
\begin{prop}\label{prop:emb} For a $V$-functor $h\colon X\to Y$, the following assertions are equivalent: \begin{tfae} \item $h$ is a $T$-embedding; \item $h$ is fully faithful and $h_*$ belongs to $\Phi(\mathbb{T})$. \end{tfae} In particular, $P$-embeddings are exactly the fully faithful $V$-functors. \end{prop}
\begin{proof} (ii) $\Rightarrow$ (i): Let $h$ be fully faithful with $h_*\in\Phi(\mathbb{T})$. As in the case of the presheaf monad, whenever $h_*\in\Phi(\mathbb{T})$ the $V$-functor $\Phi h\colon\Phi X\to\Phi Y$ has a right adjoint $\Phi^\dashv h:=(-)\cdot h_*\colon \Phi Y\to\Phi X$; that is, for each distributor $\psi\colon Y{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ in $\Phi Y$, $\Phi^\dashv h(\psi)=\psi\cdot h_*$, which is well defined because by hypothesis $h_*\in\Phi(\mathbb{T})$. Since $h$ is fully faithful, that is, $h^*\cdot h_*=(1_X)^*$, we get $(\Phi^\dashv h\cdot \Phi h)(\varphi)=\varphi\cdot h^*\cdot h_*=\varphi$.
(i) $\Rightarrow$ (ii): If $\Phi^\dashv h$ is well defined, then $y^*\cdot h_*$ belongs to $\Phi(\mathbb{T})$ for every $y\in Y$, and hence $h_*\in \Phi(\mathbb{T})$ by \ref{def:admi}(3). Moreover, if $\Phi^\dashv h\cdot \Phi h=1_{\Phi X}$, then in particular $x^*\cdot h^*\cdot h_*=x^*$, for every $x\in X$, which is easily seen to be equivalent to $h^*\cdot h_*=(1_X)^*$. \end{proof}
In $\Dists{V}$, given a $V$-distributor $\varphi\colon (X,a){\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} (Y,b)$, the functor $(\;)\cdot\varphi$ preserves suprema, and therefore it has a right adjoint $[\varphi,-]$ (since the hom-sets in $\Dists{V}$ are complete ordered sets): \[\Dist(X,Z)\adjunct{(\;)\cdot\varphi}{[\varphi,-]}\Dist(Y,Z).\] For each distributor $\psi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Z$,
\[\xymatrix{X\ar[r]|-{\circ}^{\psi}\ar[d]|-{\circ}_{\varphi}&Z\\
Y\ar@{}[ru]^{\leq}\ar[ru]|-{\circ}_{[\varphi,\psi]}}\] $[\varphi,\psi]\colon Y{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Z$ is defined by \[ [\varphi,\psi](y,z)=\bigwedge_{x\in X}\,\hom(\varphi(x,y),\psi(x,z)).\]
\begin{definitions}\begin{enumerate} \item Given a $V$-functor $f\colon X\to Z$ and a distributor (here called \emph{weight}) $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Y$, a \emph{$\varphi$-weighted colimit} of $f$ (or simply a \emph{$\varphi$-colimit} of $f$), whenever it exists, is a $V$-functor $g\colon Y\to Z$ such that $g_*=[\varphi,f_*]$. One says then that \emph{$g$ represents $[\varphi,f_*]$}. \item A $V$-category $Z$ is called \emph{$\varphi$-cocomplete} if it has a colimit for each weighted diagram with weight $\varphi\colon(X,a){\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}}(Y,b)$; i.e. for each $V$-functor $f\colon X\to Z$, the $\varphi$-colimit of $f$ exists. \item Given a class $\Phi$ of $V$-distributors, a $V$-category $Z$ is called \emph{$\Phi$-cocomplete} if it is $\varphi$-cocomplete for every $\varphi\in \Phi$. When $\Phi=\Dists{V}$, then $Z$ is said to be \emph{cocomplete}. \end{enumerate} \end{definitions}
The proof of the following result can be found in \cite{Ho11, CH08}. \begin{theorem}\label{th:ch} Given a submonad $\mathbb{T}$ of $\mathbb{P}$, for a $V$-category $X$ the following assertions are equivalent: \begin{tfae} \item $X$ is a $\mathbb{T}$-algebra. \item $X$ is injective with respect to $T$-embeddings. \item $X$ is $\Phi(\mathbb{T})$-cocomplete. \end{tfae} \end{theorem}
$\Phi(\mathbb{T})$-cocompleteness of a $V$-category $X$ is guaranteed by the existence of some special weighted colimits, as we explain next. (Here we present very briefly the properties needed. For more information on this topic see \cite{St04}.)
\begin{lemma} For a distributor $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Y$ and a $V$-functor $f\colon X\to Z$, the following assertions are equivalent: \begin{tfae} \item there exists the $\varphi$-colimit of $f$; \item there exists the $(\varphi\cdot f^*)$-colimit of $1_Z$; \item for each $y\in Y$, there exists the $(y^*\cdot\varphi)$-colimit of $f$. \end{tfae} \end{lemma} \begin{proof} (i) $\Leftrightarrow$ (ii): It is straightforward to check that \[ [\varphi,f_*]=[\varphi\cdot f^*,(1_Z)_*].\]
(i) $\Leftrightarrow$ (iii): Since $[\varphi,f_*]$ is defined pointwise, it is easily checked that, if $g$ represents $[\varphi,f_*]$, then, for each $y\in Y$, the $V$-functor $\xymatrix{E\ar[r]^y&Y\ar[r]^g&Z}$ represents $[y^*\cdot \varphi,f_*]$.
Conversely, if, for each $y\colon E\to Y$, $g_y\colon E\to Z$ represents $[y^*\cdot\varphi,f_*]$, then the map $g\colon Y\to Z$ defined by $g(y)=g_y(*)$ is such that $g_*=[\varphi,f_*]$; hence, as stated in Remark \ref{rem:adjcond}, $g$ is automatically a $V$-functor. \end{proof}
\begin{corollary} Given a submonad $\mathbb{T}$ of $\mathbb{P}$, a $V$-category $X$ is a $\mathbb{T}$-algebra if, and only if, $[\varphi, (1_X)_*]$ has a colimit for every $\varphi\in TX$. \end{corollary}
\begin{remark} Given $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ in $TX$, in the diagram
\[\xymatrix{X\ar[r]|-{\circ}^{a}\ar[d]|-{\circ}_{\varphi}&X\\
Y\ar@{}[ru]^{\leq}\ar[ru]|-{\circ}_{[\varphi,a]}}\] \[[\varphi,a](*,x)=\bigwedge_{x'\in X}\hom(\varphi(x',*),a(x',x))=TX(\varphi,x^*).\] Therefore, if $\alpha\colon TX\to X$ is a $\mathbb{T}$-algebra structure, then \[ [\varphi,a](*,x)=TX(\varphi,x^*)=X(\alpha(\varphi),x),\] that is, $[\varphi,a]=\alpha(\varphi)_*$; this means that $\alpha$ assigns to each distributor $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ the representative of $[\varphi,(1_X)_*]$. \end{remark}
Hence, we may describe the category of $\mathbb{T}$-algebras as follows.
\begin{theorem}\label{thm:charact} \begin{enumerate} \item A map $\alpha\colon TX\to X$ is a $\mathbb{T}$-algebra structure if, and only if, for each distributor $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ in $TX$, $\alpha(\varphi)_*=[\varphi,(1_X)_*]$. \item If $X$ and $Y$ are $\mathbb{T}$-algebras, then a $V$-functor $f\colon X\to Y$ is a $\mathbb{T}$-homomorphism if, and only if, $f$ preserves $\varphi$-weighted colimits for any $\varphi\in TX$, i.e., if $x\in X$ represents $[\varphi,(1_X)_*]$, then $f(x)$ represents $[\varphi\cdot f^*,(1_Y)_*]$. \end{enumerate} \end{theorem}
\section{On algebras for submonads of $\mathbb{P}$: the special case of the formal ball monad}
From now on we will study $(\Cats{V})^\mathbb{T}$ in more detail for special submonads $\mathbb{T}$ of $\mathbb{P}$. In our first example, the formal ball monad $\mathbb{B}$, we will need to consider the (co)restriction of $\mathbb{B}$ and $\mathbb{P}$ to $\Cats{V}_\sep$. We point out that the characterisations of $\mathbb{T}$-algebras of Theorem \ref{th:ch} remain valid for these (co)restrictions.
The space of formal balls is an important tool in the study of (quasi-)metric spaces. Given a metric space $(X,d)$, its \emph{space of formal balls} is simply the collection of all pairs $(x,r)$, where $x \in X$ and $r \in [0,\infty[$. This space can itself be equipped with a (quasi-)metric. Moreover, this construction can naturally be made into a monad on the category of (quasi-)metric spaces (cf. \cite{GL19, KW11} and references there).
This monad can readily be generalised to $V$-categories, using a $V$-categorical structure in place of the (quasi-)metric. We will start by considering an extended version of the formal ball monad, the \emph{extended formal ball monad} $\mathbb{B}_\bullet,$ which we define below.
\begin{definitions} The \emph{extended formal ball monad} $\mathbb{B}_\bullet=(B_\bullet ,\eta, \mu)$ is given by the following data: \begin{enumerate} \item[--] a functor $B_\bullet\colon\Cats{V}\to\Cats{V}$ which maps each $V$-category $X$ to $B_\bullet X$ with underlying set $X\times V$ and \[B_\bullet X((x,r),(y,s))= \hm{r}{ X(x,y) \otimes s }\] and every $V$-functor $f\colon X \to Y$ to the $V$-functor $B_\bullet f\colon B_\bullet X\to B_\bullet Y$ with $B_\bullet f(x,r)=(f(x),r)$; \item[--] natural transformations $\eta\colon 1 \to B_\bullet$ and $\mu\colon B_\bullet B_\bullet \to B_\bullet$ with $\eta_X(x)=(x,k)$ and $\mu_X((x,r),s)=(x,r\otimes s)$, for every $V$-category $X$, $x\in X$, $r,s\in V$. \end{enumerate} The \emph{formal ball monad} $\mathbb{B}$ is the submonad of $\mathbb{B}_\bullet$ obtained by considering only balls with radius different from $\bot$. \end{definitions}
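For intuition, in the Lawvere quantale $V=([0,\infty],\geq,+,0)$, where $\hom(r,s)=\max(s-r,0)$ is truncated subtraction and $V$-categories are generalised metric spaces, the formula above recovers the classical formal ball metric $B_\bullet X((x,r),(y,s))=\max(d(x,y)+s-r,0)$. A small numerical sketch (the names `hom`, `ball` and the sample metric `d` are illustrative, not from the text):

```python
# Numerical sketch of the B_bullet structure for the Lawvere quantale
# V = ([0, oo], >=, +, 0): hom(r, s) = max(s - r, 0) (truncated minus),
# so B_bullet X((x, r), (y, s)) = hom(r, d(x, y) + s).
INF = float("inf")

def hom(r, s):
    """Internal hom of the Lawvere quantale (the order on V is reversed)."""
    return 0.0 if s <= r else s - r  # in particular hom(INF, INF) = 0

def d(x, y):
    """A sample metric on the real line."""
    return abs(x - y)

def ball(b1, b2):
    """B_bullet X-structure on formal balls (centre, radius)."""
    (x, r), (y, s) = b1, b2
    return hom(r, d(x, y) + s)

# Unit law: each ball is at distance k = 0 from itself.
assert ball((1.0, 0.5), (1.0, 0.5)) == 0.0

# The smaller ball (0.5, 0.25) sits "inside" (0, 1): distance 0.
assert ball((0.0, 1.0), (0.5, 0.25)) == 0.0

# Composition law a(x, z) <= a(x, y) + a(y, z) (numerically, since the
# quantale order is reversed) on sample triples of balls.
balls = [(0.0, 1.0), (0.5, 0.25), (2.0, 0.0), (1.5, 3.0)]
for b1 in balls:
    for b2 in balls:
        for b3 in balls:
            assert ball(b1, b3) <= ball(b1, b2) + ball(b2, b3) + 1e-9
```

The assertions check the two $V$-category axioms for $B_\bullet X$ on a handful of balls; they hold for every metric $d$, since $d(x,z)+t-r\leq(d(x,y)+s-r)+(d(y,z)+t-s)$.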
\begin{remark} Note that $B_\bullet X$ is not separated if $X$ has more than one element (for any $x,y \in X$, $(x,\bot)\simeq (y,\bot)$), while, as shown in Proposition \ref{prop:canc}, for $X$ separated, separation of $BX$ depends on an extra property of the quantale $V$. \end{remark}
Using Corollaries \ref{cor:morphism} and \ref{cor:laxidpt}, it is easy to check that
\begin{prop}\label{prop:Bbmonadmorphismff} There is a pointwise fully faithful monad morphism $\sigma \colon \mathbb{B}_\bullet \to \mathbb{P}$. In particular, both $\mathbb{B}_\bullet$ and $\mathbb{B}$ are lax-idempotent. \end{prop}
\begin{proof} First of all let us check that $\eta$ satisfies BC*, i.e., for any $V$-functor $f\colon X\to Y$,
\[\xymatrix{X\ar[r]|-{\circ}^{(\eta_X)_*}&B_\bullet X\\
Y\ar@{}[ru]|{\geq}\ar[u]|-{\circ}^{f^*}\ar[r]|-{\circ}_{(\eta_Y)_*}&B_\bullet Y\ar[u]|-{\circ}_{(B_\bullet f)^*}}\] For $y\in Y$, $(x,r)\in B_\bullet X$, \begin{align*} ((B_\bullet f)^*(\eta_Y)_*)(y,(x,r))&=B_\bullet Y((y,k),(f(x),r))=Y(y,f(x))\otimes r\\ &\leq \bigvee_{z\in X}Y(y,f(z))\otimes X(z,x)\otimes r=\bigvee_{z\in X} Y(y,f(z))\otimes B_\bullet X((z,k),(x,r))\\ &=((\eta_X)_*f^*)(y,(x,r)). \end{align*}
Then, by Corollary \ref{cor:morphism}, for each $V$-category $X$, $\sigma_X$ is defined as in the proof of Theorem \ref{th:submonad}, i.e. for each $(x,r)\in B_\bullet X$, $\sigma_X(x,r)=B_\bullet X((-,k),(x,r))\colon X\to V$; more precisely, for each $y\in X$, $\sigma_X(x,r)(y)=X(y,x)\otimes r$.
Moreover, $\sigma_X$ is fully faithful: for each $(x,r), (y,s)\in B_\bullet X$, \begin{align*} B_\bullet X((x,r),(y,s))&=\hom(r,X(x,y)\otimes s)\geq \hom(X(x,x)\otimes r, X(x,y)\otimes s)\\ &\geq \bigwedge_{z\in X}\hom(X(z,x)\otimes r,X(z,y)\otimes s)=PX(\sigma(x,r),\sigma(y,s)). \end{align*} \end{proof}
It is clear that $\sigma\colon\mathbb{B}_\bullet\to\mathbb{P}$ is not pointwise monic; indeed, if $r=\bot$, then $\sigma_X(x,\bot)\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$ is the distributor that is constantly $\bot$, for any $x\in X$. Still, it is interesting to identify the $\mathbb{B}_\bullet$-algebras via the existence of special weighted colimits.
\begin{prop}\label{prop:Balg} For a $V$-category $X$, the following conditions are equivalent: \begin{tfae} \item $X$ has a $\mathbb{B}_\bullet$-algebra structure $\alpha\colon B_\bullet X\to X$; \item $(\forall x\in X)\;(\forall r\in V)\;(\exists x\oplus r\in X)\;(\forall y\in X)\;\; X(x\oplus r,y)=\hom(r,X(x,y))$; \item for all $(x,r)\in B_\bullet X$, every diagram of the form
\[\xymatrix{X\ar[r]|-{\circ}^{(1_X)_*}\ar[d]|-{\circ}_{\sigma_X(x,r)}&X\\
E\ar@{}[ru]^{\leq}\ar[ru]|-{\circ}_{[\sigma_X(x,r),(1_X)_*]}}\] has a (weighted) colimit. \end{tfae} \end{prop} \begin{proof} (i) $\Rightarrow$ (ii): The adjunction $\alpha\dashv\eta_X$ gives, via Remark \ref{rem:adjcond}, \[X(\alpha(x,r),y)=B_\bullet X((x,r),(y,k))=\hom(r,X(x,y)).\] For $x\oplus r:=\alpha(x,r)$, condition (ii) follows.\\
(ii) $\Rightarrow$ (iii): The calculus of the distributor $[\sigma_X(x,r),(1_X)_*]$ shows that it is represented by $x\oplus r$: \[ [\sigma_X(x,r),(1_X)_*](*,y)=\hom(r,X(x,y)).\]
(iii) $\Rightarrow$ (i): For each $(x,r)\in B_\bullet X$, let $x\oplus r$ represent $[\sigma_X(x,r),(1_X)_*]$. In case $r=k$, we choose $x\oplus k=x$ to represent the corresponding distributor (any $x'\simeq x$ would fit here, but $x$ is the right choice for our purpose). Then $\alpha\colon B_\bullet X\to X$ defined by $\alpha(x,r)=x\oplus r$ is, by construction, left adjoint to $\eta_X$, and $\alpha\cdot\eta_X=1_X$. \end{proof} The $V$-categories $X$ satisfying (iii), and therefore the above (equivalent) conditions, are called \emph{tensored}.
This notion was originally introduced by Borceux and Kelly \cite{BK75} for general $V$-categories (for our special $V$-categories we suggest consulting \cite{St04}).\\
Note that, thanks to condition (ii), we get the following characterisation of tensored categories.
\begin{corollary}\label{cor:oplus}
A $V$-category $X$ is tensored if, and only if, for every $x\in X$,
\[X\adjunct{x\oplus -}{X(x,-)}V\]
is an adjunction in $\Cats{V}$.
\end{corollary}
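As an illustration, in the Lawvere-quantale reading ($V=[0,\infty]$ with $\hom(r,s)=\max(s-r,0)$), the $V$-category $X=[0,\infty]$ itself is tensored with $x\oplus r=x+r$; the identity $X(x\oplus r,y)=\hom(r,X(x,y))$ of condition (ii) can be checked on a grid. A minimal sketch, with illustrative names `X` and `oplus`:

```python
# Checking condition (ii) of the tensor characterisation for X = [0, oo]
# with X(x, y) = max(y - x, 0) and x (+) r := x + r, on a finite grid.
def hom(r, s):
    return 0.0 if s <= r else s - r

def X(x, y):  # the V-category structure on [0, oo] inherited from hom
    return hom(x, y)  # = max(y - x, 0)

def oplus(x, r):
    return x + r

grid = [i / 4 for i in range(13)]  # 0, 0.25, ..., 3
for x in grid:
    for r in grid:
        for y in grid:
            # X(x (+) r, y) = hom(r, X(x, y))
            assert abs(X(oplus(x, r), y) - hom(r, X(x, y))) < 1e-9
```

Both sides equal $\max(y-x-r,0)$ when $y\geq x$ and vanish otherwise, which is exactly the adjunction $x\oplus-\dashv X(x,-)$ of the corollary.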
We now shift our attention to the formal ball monad $\mathbb{B}$. The characterisation of $\mathbb{B}_\bullet$-algebras given in Proposition \ref{prop:Balg} may be adapted to obtain a characterisation of $\mathbb{B}$-algebras. Indeed, the only difference is that a $\mathbb{B}$-algebra structure $BX\to X$ does not include the existence of $x\oplus\bot$ for $x\in X$, which, when it exists, is the top element with respect to the order in $X$. Moreover, the characterisation of $\mathbb{B}$-algebras given in \cite[Proposition 3.4]{GL19} can readily be generalised to $\Cats{V}$ as follows.
\begin{prop} For a $V$-functor $\alpha\colon BX\to X$ the following conditions are equivalent. \begin{tfae} \item $\alpha$ is a $\mathbb{B}$-algebra structure. \item For every $x\in X$, $r,s\in V\setminus\{\bot\}$, $\alpha(x,k)=x$ and $\alpha(x,r\otimes s)=\alpha(\alpha(x,r),s)$. \item For every $x\in X$, $r\in V\setminus\{\bot\}$, $\alpha(x,k)=x$ and $X(x,\alpha(x,r))\geq r$. \item For every $x\in X$, $\alpha(x,k)=x$. \end{tfae} \end{prop}
\begin{proof} By definition of $\mathbb{B}$-algebra, (i) $\Leftrightarrow$ (ii), while (i) $\Leftrightarrow$ (iv) follows from Theorem \ref{th:KZ}, since $\mathbb{B}$ is lax-idempotent. (iii) $\Rightarrow$ (iv) is obvious, and so it remains to prove that, if $\alpha$ is a $\mathbb{B}$-algebra structure, then $X(x,\alpha(x,r))\geq r$, for $r\neq\bot$. But \[X(x,\alpha(x,r))\geq r\;\Leftrightarrow\; k\leq\hom(r,X(x,\alpha(x,r)))=X(\alpha(x,r),\alpha(x,r)),\] because $\alpha(x,-)\dashv X(x,-)$ by Corollary \ref{cor:oplus}. \end{proof}
Since we know that, if $X$ has a $\mathbb{B}$-algebra structure $\alpha$, then $\alpha(x,r)=x\oplus r$, we may state the conditions above as follows.
\begin{corollary}\label{cor:condition} If $\xymatrix{BX\ar[r]^{-\oplus-}&X}$ is a $\mathbb{B}$-algebra structure, then, for $x\in X$, $r,s\in V\setminus\{\bot\}$: \begin{enumerate} \item $x\oplus k=x$; \item $x\oplus(r\otimes s)=(x\oplus r)\oplus s$; \item $X(x,x\oplus r)\geq r$. \end{enumerate} \end{corollary}
\begin{lemma} Let $X$ and $Y$ be $V$-categories equipped with $\mathbb{B}$-algebra structures $\xymatrix{BX\ar[r]^{-\oplus-}&X}$ and $\xymatrix{BY\ar[r]^{-\oplus-}&Y}$. Then a map $f\colon X \to Y$ is a $V$-functor if and only if \[f \textrm{ is monotone and } f(x) \oplus r \leq f(x \oplus r),\] for all $(x,r) \in BX$. \end{lemma}
\begin{proof} Assume that $f$ is a $V$-functor. Then it is, in particular, monotone, and, from Theorem \ref{th:KZ} we know that $f(x)\oplus r\leq f(x\oplus r)$.
Conversely, assume that $f$ is monotone and that $f(x) \oplus r \leq f(x \oplus r)$, for all $(x,r) \in BX$. Let $x,x' \in X$. Then $x\oplus X(x,x')\leq x'$ since $(x\oplus -)\dashv X(x,-)$ by Corollary \ref{cor:oplus}, and then \begin{align*} f(x)\oplus X(x,x')&\leq f(x\oplus X(x,x'))&\mbox{(by hypothesis)}\\ &\leq f(x')&\mbox{(by monotonicity of $f$).} \end{align*} Now, using the adjunction $f(x)\oplus - \dashv Y(f(x),-)$, we conclude that \[X(x,x') \leq Y(f(x),f(x')).\] \end{proof}
The following results are now immediate:
\begin{corollary} \begin{enumerate} \item Let $(X,\oplus), (Y,\oplus)$ be $\mathbb{B}$-algebras. Then a map $f\colon X \rightarrow Y$ is a $\mathbb{B}$-algebra morphism if and only if, for all $(x,r) \in BX$, \[f \textrm{ is monotone and } f(x \oplus r)= f(x) \oplus r.\] \item Let $(X,\oplus), (Y,\oplus)$ be $\mathbb{B}$-algebras. Then a $V$-functor $f\colon X \rightarrow Y$ is a $\mathbb{B}$-algebra morphism if and only if, for all $(x,r) \in BX$, \[f(x \oplus r)\leq f(x) \oplus r.\] \end{enumerate} \end{corollary}
\begin{example} If $X\subseteq\,[0,\infty]$, with the $V$-category structure inherited from $\hom$, then \begin{enumerate} \item $X$ is a $\mathbb{B}_\bullet$-algebra if, and only if, $X=[a,b]$ for some $a,b\in\,[0,\infty]$. \item $X$ is a $\mathbb{B}$-algebra if, and only if, $X=\,]a,b]$ or $X=[a,b]$ for some $a,b\in\,[0,\infty]$. \end{enumerate} Let $X$ be a $\mathbb{B}_\bullet$-algebra. From Proposition \ref{prop:Balg} one has \[(\forall x\in X)\;(\forall r\in\,[0,\infty])\;(\exists x\oplus r\in X)\;(\forall y\in X)\;\;y\ominus (x\oplus r)=(y\ominus x)\ominus r=y\ominus (x+r).\] This implies that, if $y\in X$, then $y>x\oplus r\;\Leftrightarrow\;y>x+r$. Therefore, if $x+r\in X$, then $x\oplus r=x+r$, and, moreover, $X$ is an interval: given $x,y,z\in\,[0,\infty]$ with $x<y<z$ and $x,z\in X$, then, with $r=y-x\in\,[0,\infty]$, $x+r=y$ must belong to $X$: \[z\ominus(x\oplus r)=z-(x+r)=z-y>0\;\Rightarrow\;z\ominus(x\oplus r)=z-(x\oplus r)=z-y\;\Leftrightarrow\; y=x\oplus r\in X.\] In addition, $X$ must have a bottom element (that is, a maximum with respect to the classical order of the real half-line): for any $x\in X$ and $b=\sup X$, $x\oplus(b-x)=\sup\{z\in X\,;\,z\leq b\}=b\in X$. For $r=\infty$ and any $x\in X$, $x\oplus\infty$ must be the top element of $X$, so $X=[a,b]$ for $a,b\in\,[0,\infty]$.
Conversely, if $X=]a,b]$, for $x\in X$ and $r\in\,[0,\infty[$, define $x\oplus r=x+r$ if $x+r\in X$ and $x\oplus r=b$ elsewhere. It is easy to check that condition (ii) of Proposition \ref{prop:Balg} is satisfied for $r\neq\infty$.
Analogously, if $X=[a,b]$, for $x\in X$ and $r\in\,[0,\infty]$, we define $x\oplus r$ as before in case $r\neq\infty$ and $x\oplus\infty=a$. \end{example}
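The identity of condition (ii) of Proposition \ref{prop:Balg} for the $\oplus$ just defined can be verified mechanically on a grid, here for $X=\,]0,1]$ in the Lawvere reading of $\Met$ (a numerical sketch; recall that in $[0,\infty]$ with the reversed order $\bot=\infty$, so radii are finite):

```python
# Checking X(x (+) r, y) = hom(r, X(x, y)) for X = ]0, 1] inside [0, oo],
# with x (+) r = x + r if x + r <= 1 and x (+) r = 1 otherwise (r finite).
def hom(r, s):
    return 0.0 if s <= r else s - r

def X(x, y):  # V-category structure inherited from hom
    return hom(x, y)

def oplus(x, r, b=1.0):
    # the (+) of the example: add, truncating at the endpoint b
    return min(x + r, b)

pts = [i / 8 for i in range(1, 9)]   # a grid inside ]0, 1]
radii = [i / 8 for i in range(17)]   # finite radii 0, 0.125, ..., 2
for x in pts:
    for r in radii:
        for y in pts:
            assert abs(X(oplus(x, r), y) - hom(r, X(x, y))) < 1e-9
```

When $x+r>1$ both sides vanish (since $y\leq 1<x+r$), and when $x+r\leq 1$ the check reduces to the tensored structure of $[0,\infty]$ itself.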
As we will see, (co)restricting $\mathbb{B}$ to $\Cats{V}_\sep$ will allow us to obtain some interesting results. Unfortunately, $X$ being separated does not entail $BX$ being so. Because of this we will need to restrict our attention to the \textit{cancellative} quantales, which we define and characterise next.
\begin{definition} A quantale $V$ is said to be \emph{cancellative} if \begin{equation}\label{eq:canc} \forall r,s \in V,\, r\neq \bot :\ r=s \otimes r \ \Rightarrow \ s=k. \end{equation} \end{definition} \begin{remark} We point out that this notion of cancellative quantale does not coincide with the notion of cancellable ccd quantale introduced in \cite{CH17}. On the one hand, cancellative quantales are quite special: for instance, a locale $V$ (a quantale with $\otimes=\wedge$) is not cancellative as soon as there are $r,s$ with $\bot<r\leq s<\top$, since condition \eqref{eq:canc} would force, for $r\neq\bot$, $r=s\wedge r\;\Rightarrow\;s=\top$. On the other hand, $[0,1]_\odot$, that is, $[0,1]$ with the usual order and the \L{}ukasiewicz sum as tensor product, is cancellative but not cancellable. In addition, we remark that every \emph{value quantale} \cite{KW11} is cancellative. \end{remark}
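Both examples in the remark can be checked by brute force on a finite grid of $[0,1]$: the sketch below tests condition \eqref{eq:canc} for the Łukasiewicz tensor and exhibits a counterexample for the locale $([0,1],\wedge)$ (grid sizes and names are illustrative):

```python
# Brute-force check of cancellativity (r != bot and r = s (x) r implies
# s = k) on a grid of [0, 1], for two tensors:
#   Lukasiewicz: s (x) r = max(s + r - 1, 0), with k = 1 and bot = 0;
#   locale:      s (x) r = min(s, r),          with k = 1 and bot = 0.
grid = [i / 10 for i in range(11)]

def counterexamples(tensor, k=1.0, bot=0.0):
    """Pairs (r, s) with r != bot and r = s (x) r, but s != k."""
    return [(r, s) for r in grid for s in grid
            if r != bot and abs(tensor(s, r) - r) < 1e-9 and s != k]

lukasiewicz = lambda s, r: max(s + r - 1, 0.0)
locale = lambda s, r: min(s, r)

assert counterexamples(lukasiewicz) == []      # cancellative on the grid
assert (0.5, 0.7) in counterexamples(locale)   # min(0.7, 0.5) = 0.5, yet 0.7 != 1
```

For the Łukasiewicz tensor, $r=s+r-1$ with $r>0$ forces $s=1=k$; for the locale, any $s\geq r$ with $r\neq\bot$, $s\neq\top$ violates \eqref{eq:canc}.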
\begin{prop}\label{prop:canc} Let $V$ be an integral quantale. The following assertions are equivalent: \begin{tfae} \item $BV$ is separated; \item $V$ is cancellative; \item If $X$ is separated then $BX$ is separated. \end{tfae} \end{prop} \begin{proof} (i) $\Rightarrow$ (ii): Let $ r,s \in V,\, r\neq \bot $ and $ r=s \otimes r$. Note that \[ BV((k,r),(s,r))=\hm{r}{\hm{k}{s}\otimes r}=\hm{r}{s \otimes r}=\hm{r}{r}=k\] and \[BV((s,r),(k,r))=\hm{r}{\hm{s}{k}\otimes r}=\hm{r}{\hm{s}{k}\otimes s \otimes r} =\hm{s \otimes r }{ s \otimes r}=k.\] Therefore, since $BV$ is separated, $(s,r)=(k,r)$ and it follows that $s=k.$\\
(ii) $\Rightarrow $ (iii): If $(x,r)\simeq (y,s)$ in $BX$, then \[BX((x,r),(y,s))=k \Leftrightarrow r \leq X(x,y) \otimes s, \mbox{ and }\] \[BX((y,s),(x,r))=k \Leftrightarrow s \leq X(y,x) \otimes r.\] Therefore $r\leq s$ and $s \leq r$, that is $r=s.$ Moreover, since $r \leq X(x,y) \otimes r \leq r$ we have that $X(x,y)=k$. Analogously, $X(y,x)=k$ and we conclude that $x=y$.\\
(iii) $\Rightarrow$ (i): Since $V$ is separated it follows immediately from (iii) that $BV$ is separated.
\end{proof}
We can now show that $\mathbb{B}$ is a submonad of $\mathbb{P}$ in the adequate setting. \emph{From now on we will be working with a cancellative and integral quantale $V$, and $\mathbb{B}$ will be the (co)restriction of the formal ball monad to $\Cats{V}_\sep$.} \begin{prop}
Let $V$ be a cancellative and integral quantale. Then $\mathbb{B}$ is a submonad of $\mathbb{P}$ in $\Cats{V}_\sep$. \end{prop} \begin{proof} Thanks to Proposition \ref{prop:Bbmonadmorphismff}, all that remains is to show that $\sigma_X $ is injective on objects, for any $V$-category $X$. Let $\sigma(x,r)=\sigma(y,s)$, or, equivalently, $X(-,x)\otimes r =X(-,y)\otimes s$. Then, in particular, \[r = X(x,x)\otimes r = X(x,y) \otimes s \leq s= X(y,y)\otimes s = X(y,x)\otimes r \leq r.\]
Therefore $r=s$ and $X(y,x)=X(x,y)=k$. We conclude that $(x,r)=(y,s)$. \end{proof}
Thanks to Theorem \ref{th:ch}, $\mathbb{B}$-algebras are characterised via an injectivity property with respect to special embeddings. We end this section by studying these embeddings in more detail. Since we are working in $\Cats{V}_\sep$, a $B$-embedding $h\colon X\to Y$, being fully faithful, is injective on objects. Therefore, for simplicity, we may think of it as an inclusion. With $Bh_\sharp\colon BY\to BX$ the right adjoint and left inverse of $Bh\colon BX\to BY$, we denote $Bh_\sharp(y,r)$ by $(y_r, r_y)$.
\begin{lemma}\label{prop:h} Let $h\colon X\to Y$ be a $B$-embedding. Then: \begin{enumerate} \item $(\forall y\in Y)\;(\forall x\in X)\;(\forall r\in V)\; BY((x,r),(y,r))=BY((x,r),(y_r,r_y))$; \item $(\forall \, y \in Y) \colon k_y=Y(y_k,y)$; \item $(\forall\, y \in Y)\;(\forall x \in X)\colon \enskip Y(x,y)= Y(x,y_k)\otimes Y(y_k,y)$. \end{enumerate} \end{lemma}
\begin{proof} (1) From $Bh_\sharp\cdot Bh=1_{BX}$ and $Bh\cdot Bh_\sharp\leq 1_{BY}$ one gets, for any $(y,r)\in BY$, $(y_r,r_y)\leq (y,r)$, i.e. $BY((y_r,r_y),(y,r))=\hom(r_y,Y(y_r,y)\otimes r)=k$. Therefore, for all $x\in X$, $y\in Y$, $r\in V$, \begin{align*} BY((x,r),(y,r))&\leq BX((x,r),(y_r,r_y))=BY((x,r),(y_r,r_y))\\ &=BY((x,r),(y_r,r_y))\otimes BY((y_r,r_y),(y,r))\leq BY((x,r),(y,r)), \end{align*} that is \[BY((x,r),(y,r))=BY((x,r),(y_r,r_y)).\]
(2) Let $y \in Y$. Then \[Y(y_k,y)=BY((y_k,k),(y,k))=BY((y_k,k),(y_k,k_y))=k_y.\]
(3) Let $y\in Y$ and $x\in X$. Then \[Y(x,y)=BY((x,k),(y,k))=BY((x,k),(y_k,k_y))=Y(x,y_k)\otimes k_y=Y(x,y_k)\otimes Y(y_k,y).\]
\end{proof}
\begin{prop} Let $X$ and $Y$ be $V$-categories. A $V$-functor $h\colon X\to Y$ is a $B$-embedding if and only if $h$ is fully faithful and \begin{equation}\label{eq:fff} (\forall y \in Y)\;(\exists ! z\in X)\; (\forall x\in X)\;\;\; Y(x,y)=Y(x,z)\otimes Y(z,y). \end{equation} \end{prop}
\begin{proof} If $h$ is a $B$-embedding, then it is fully faithful by Proposition \ref{prop:emb} and, for each $y\in Y$, $z=y_k\in X$ fulfils the required condition. To show that such $z$ is unique, assume that $z,z'\in X$ verify the equality of condition \eqref{eq:fff}. Then \[Y(z,y)=Y(z,z')\otimes Y(z',y)\leq Y(z',y)=Y(z',z)\otimes Y(z,y)\leq Y(z,y),\] and therefore, because $V$ is cancellative, $Y(z',z)=k$; analogously one proves that $Y(z,z')=k$, and so $z=z'$ because $Y$ is separated.\\
To prove the converse, for each $y\in Y$ we denote by $\overline{y}$ the only $z\in X$ satisfying \eqref{eq:fff}, and define \[Bh_\sharp(y,r)=(\overline{y},Y(\overline{y},y)\otimes r).\] When $x\in X$, it is immediate that $\overline{x}=x$, and so $Bh_\sharp\cdot Bh=1_{BX}$. Using Remark \ref{rem:adjcond}, to prove that $Bh_\sharp$ is a $V$-functor and $Bh\dashv Bh_\sharp$ it is enough to show that \[BX((x,r),Bh_\sharp(y,s))=BY(Bh(x,r),(y,s)),\] for every $x\in X$, $y\in Y$, $r,s\in V$. By definition of $Bh_\sharp$ this means \[BX((x,r),(\overline{y},Y(\overline{y},y)\otimes s))=BY((x,r),(y,s)),\] that is, \[\hom(r,Y(x,\overline{y})\otimes Y(\overline{y},y)\otimes s)=\hom(r,Y(x,y)\otimes s),\] which follows directly from \eqref{eq:fff}. \end{proof}
\begin{corollary} In $\Met$, if $X\subseteq [0,\infty]$, then its inclusion $h\colon X\to[0,\infty]$ is a $B$-embedding if, and only if, $X$ is a closed interval. \end{corollary} \begin{proof} If $X=[x_0,x_1]$, with $x_0,x_1\in\,[0,\infty]$, $x_0\leq x_1$, then it is easy to check that, defining $\overline{y}=x_0$ if $y\leq x_0$, $\overline{y}=y$ if $y\in X$, and $\overline{y}=x_1$ if $y\geq x_1$, for every $y\in\,[0,\infty]$, condition \eqref{eq:fff} is fulfilled.\\
We divide the proof of the converse into two cases:
(1) If $X$ is not an interval, i.e. if there exist $x,x'\in X$, $y\in [0,\infty]\setminus X$ with $x<y<x'$, then either $\overline{y}<y$, and then \[0=y\ominus x'\neq (\overline{y}\ominus x')+(y\ominus\overline{y})=y-\overline{y},\] or $\overline{y}>y$, and then \[y-x=y\ominus x\neq (\overline{y}\ominus x)+(y\ominus\overline{y})=\overline{y}-x.\]\\
(2) If $X=[x_0,x_1[$ and $y> x_1$, then there exists $x\in X$ with $\overline{y}<x<y$, and so \[y-x=y\ominus x\neq (\overline{y}\ominus x)+(y\ominus\overline{y})=y-\overline{y}.\] An analogous argument works for $X=]x_0,x_1]$. \end{proof}
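The clamping construction used above can be checked numerically. The following Python sketch (our addition, not part of the proof) models $[0,\infty]$ with the truncated-minus distance $d(x,y)=\max(y-x,0)$ and verifies condition \eqref{eq:fff} for a closed interval, together with its failure for a non-interval subset:

```python
# Numerical sanity check (not part of the proof): model [0, inf] with the
# truncated-minus distance d(x, y) = max(y - x, 0).  For a closed interval
# X = [x0, x1], clamping y into X produces the unique z of condition (fff):
# d(x, y) = d(x, z) + d(z, y) for every x in X.

def d(x, y):
    """Truncated-minus distance on [0, inf]."""
    return max(y - x, 0.0)

def clamp(y, x0, x1):
    """Candidate z = y-bar: project y onto the closed interval [x0, x1]."""
    return min(max(y, x0), x1)

def fff_holds(x0, x1, ys, xs):
    """Check d(x, y) == d(x, clamp(y)) + d(clamp(y), y) on sample points."""
    return all(
        abs(d(x, y) - (d(x, clamp(y, x0, x1)) + d(clamp(y, x0, x1), y))) < 1e-12
        for y in ys
        for x in xs
        if x0 <= x <= x1
    )

grid = [i / 10 for i in range(51)]            # sample points in [0, 5]

# Closed interval: condition (fff) holds everywhere on the grid.
assert fff_holds(1.0, 3.0, grid, grid)

# Non-interval X = {1, 3}: no single z in X works for y = 2 (cf. case (1)).
X = [1.0, 3.0]
assert not any(
    all(abs(d(x, 2.0) - (d(x, z) + d(z, 2.0))) < 1e-12 for x in X) for z in X
)
```

The last assertion mirrors case (1) of the proof: for $X=\{1,3\}$ and $y=2$, neither candidate point decomposes all distances through itself.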
\section{On algebras for submonads of $\mathbb{P}$ and their morphisms}
In the following $\mathbb{T}=(T,\mu,\eta)$ is a submonad of the presheaf monad $\mathbb{P}=(P,\mathpzc{m},\mathpzc{y})$ in $\Cats{V}_\sep$. For simplicity we will assume that the injective and fully faithful components of the monad morphism $\sigma:T \rightarrow P$ are inclusions. Theorem \ref{th:KZ} immediately gives that:
\begin{prop} Let $(X,a)$ be a $V$-category and $\alpha: TX \rightarrow X$ be a $V$-functor. The following are equivalent: \begin{enumerate}
\item $(X,\alpha)$ is a $\mathbb{T}$-algebra;
\item $\forall\, x \in X:$ $ \alpha (x^*)=x $.
\end{enumerate} \end{prop}
We would like to identify the $\mathbb{T}$-algebras directly, as we did for $\mathbb{B}_\bullet$ or $\mathbb{B}$ in Proposition \ref{prop:Balg}. First of all, we point out that a $\mathbb{T}$-algebra structure $\alpha\colon TX\to X$ must satisfy, for every $\varphi\in TX$ and $x\in X$, \[X(\alpha(\varphi), x)=TX(\varphi,x^*),\] and so, in particular, \[\alpha(\varphi)\leq x \;\Leftrightarrow\;\varphi\leq x^*;\] hence $\alpha$ must assign to each $\varphi\in TX$ an $x_\varphi\in X$ so that \[x_\varphi=\min\{x\in X\,;\,\varphi\leq x^*\}.\] Moreover, for such a map $\alpha\colon TX\to X$, $\alpha$ is a $V$-functor if, and only if, \begin{align*} &\;(\forall \varphi,\rho\in TX)\;\;TX(\varphi,\rho)\leq X(x_\varphi,x_\rho)=TX(X(-,x_\varphi),X(-,x_\rho))\\ \Leftrightarrow&\;(\forall \varphi,\rho\in TX)\;\;TX(\varphi,\rho)\leq \bigwedge_{x\in X}\hom(X(x,x_\varphi),X(x,x_\rho))\\ \Leftrightarrow&\;(\forall x\in X)\;(\forall \varphi,\rho\in TX)\;\;X(x,x_\varphi)\otimes TX(\varphi,\rho)\leq X(x,x_\rho). \end{align*}
\begin{prop} A $V$-category $X$ is a $\mathbb{T}$-algebra if, and only if: \begin{enumerate} \item for all $\varphi\in TX$ there exists $\min\{x\in X\,;\,\varphi\leq x^*\}$; \item for all $\varphi, \rho\in TX$ and for all $x\in X$, $X(x,x_\varphi)\otimes TX(\varphi,\rho)\leq X(x,x_\rho)$. \end{enumerate} \end{prop} We remark that condition (2) can be equivalently stated as: \begin{enumerate} \item[\emph{(2')}] for each $\rho\in TX$, the distributor $\rho_1=\displaystyle\bigvee_{\varphi\in TX} X(-,x_\varphi)\otimes TX(\varphi,\rho)$ satisfies $x_{\rho_1}=x_\rho$, \end{enumerate} which is the condition corresponding to condition (2) of Corollary \ref{cor:condition}.\\
Finally, as for the formal ball monad, Theorem \ref{th:KZ} gives the following characterisation of $\mathbb{T}$-algebra morphisms.
\begin{corollary} Let $(X,\alpha), (Y,\beta)$ be $\mathbb{T}$-algebras. Then a $V$-functor $f: X \rightarrow Y$ is a $\mathbb{T}$-algebra morphism if and only if \[(\forall \varphi \in TX)\;\;\beta(\varphi \cdot f^*) \geq f(\alpha(\varphi)).\] \end{corollary}
\begin{example} \textbf{The Lawvere monad.} Among the examples presented in \cite{CH08} there is a special submonad of $\mathbb{P}$ which is inspired by the crucial remark of Lawvere in \cite{Law73} that Cauchy completeness for metric spaces is a kind of cocompleteness for $V$-categories. Indeed, the submonad $\mathbb{L}$ of $\mathbb{P}$ induced by \[\Phi=\{\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} Y\,;\,\varphi\mbox{ is a right adjoint $V$-distributor}\}\] has as $\mathbb{L}$-algebras the \emph{Lawvere complete $V$-categories}. These were studied also in \cite{CH09}, and in \cite{HT10} under the name $L$-complete $V$-categories. When $V=[0,\infty]_+$, using the usual order in $[0,\infty]$, for distributors $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$, $\psi\colon E{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} X$ to be adjoint
\[\xymatrix@=8ex{X\ar@{}[r]|{\top}\ar@<1mm>@/^2mm/[r]|\circ^{{\varphi}} & \ar@<1mm>@/^2mm/[l]|\circ^{{\psi}}E}\]means that \begin{align*} (\forall x,x'\in X)\;\;&X(x,x')\leq \varphi(x)+\psi(x'),\\ &0\geq \inf_{x\in X} (\psi(x)+\varphi(x)). \end{align*} This means in particular that \[(\forall n\in\mathbb{N})\;(\exists x_n\in X)\;\;\psi(x_n)+\varphi(x_n)\leq\frac{1}{n},\] and, moreover, \[X(x_n,x_m)\leq\varphi(x_n)+\psi(x_m)\leq \frac{1}{n}+\frac{1}{m}.\] This defines a \emph{Cauchy sequence} $(x_n)_n$, so that \[(\forall\varepsilon>0)\;(\exists p\in\mathbb{N})\;(\forall n,m\in\mathbb{N})\;n\geq p\;\wedge\;m\geq p\;\Rightarrow\;\;X(x_n,x_m)+X(x_m,x_n)<\varepsilon.\] Hence, any such pair induces a (equivalence class of) Cauchy sequence(s) $(x_n)_n$, and a representative for
\[\xymatrix{X\ar[r]|-{\circ}^{(1_X)_*}\ar[d]|-{\circ}_{\varphi}&X\\
E\ar@{}[ru]^{\leq}\ar[ru]|-{\circ}_{[\varphi,(1_X)_*]}}\] is nothing but a limit point for $(x_n)_n$. Conversely, it is easily checked that every Cauchy sequence $(x_n)_n$ in $X$ gives rise to a pair of adjoint distributors \[\varphi=\lim_n\,X(-,x_n)\mbox{ and }\psi=\lim_n\,X(x_n,-).\] We point out that the $\mathbb{L}$-embeddings, i.e. the fully faithful and fully dense $V$-functors $f\colon X\to Y$ do not coincide with the $\mathbb{L}$-dense ones (so that $f_*$ is a right adjoint). For instance, assuming for simplicity that $V$ is integral, a $V$-functor $y\colon E\to X$ ($y\in X$) is fully dense if and only if $y\simeq x$ for all $x\in X$, while it is an $\mathbb{L}$-embedding if and only if $y\leq x$ for all $x\in X$. Indeed, $y\colon E\to X$ is $\mathbb{L}$-dense if, and only if, \begin{enumerate} \item[--] there is a distributor $\varphi\colon X{\longrightarrow\hspace*{-3.1ex}{\circ}\hspace*{1.5ex}} E$, i.e. \begin{equation}\label{eq:distr} (\forall x,x'\in X)\;\;X(x,x')\otimes\varphi(x')\leq\varphi(x), \end{equation} such that \item[--] $k\geq \varphi\cdot y_*$ , which is trivially true, and $a\leq y_*\cdot\varphi$, i.e. \begin{equation}\label{eq:adjoint} (\forall x,x'\in X)\;\;X(x,x')\leq \varphi(x)\otimes X(y,x'). \end{equation} \end{enumerate} Since \eqref{eq:distr} follows from \eqref{eq:adjoint}, \[y\mbox{ is $\mathbb{L}$-dense }\;\Leftrightarrow\;\;(\forall x,x'\in X)\;\;X(x,x')\leq \varphi(x)\otimes X(y,x').\] In particular, when $x=x'$, this gives $k\leq \varphi(x)\otimes X(y,x)$, and so we can conclude that, for all $x\in X$, $y\leq x$ and $\varphi(x)=k$. The converse is also true; that is \[y\mbox{ is $\mathbb{L}$-dense }\;\Leftrightarrow\;\;(\forall x\in X)\;\;y\leq x.\]
Still, it was shown in \cite{HT10} that injectivity with respect to fully dense and fully faithful $V$-functors (called $L$-dense in \cite{HT10}) also characterizes the $\mathbb{L}$-algebras.
\end{example}
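The correspondence between Cauchy sequences and adjoint pairs of distributors in the example above can be illustrated numerically. The Python sketch below (our addition; the names \texttt{phi}, \texttt{psi} and the sequence $x_n=1-1/n$ are illustrative choices) checks both adjointness inequalities for the induced pair $\varphi=\lim_n X(-,x_n)$, $\psi=\lim_n X(x_n,-)$ in the Lawvere metric space $[0,1]$:

```python
# Illustrative check (our addition): in the Lawvere metric space [0, 1] with
# d(x, y) = max(y - x, 0), the Cauchy sequence x_n = 1 - 1/n induces the
# distributors phi(x) ~ lim_n d(x, x_n) and psi(x) ~ lim_n d(x_n, x), and
# the pair satisfies both adjointness inequalities from the example.

def d(x, y):
    return max(y - x, 0.0)

N = 10_000                         # large index approximating the limit
x_N = 1.0 - 1.0 / N                # stands in for the limit point 1

phi = lambda x: d(x, x_N)          # ~ lim_n d(x, x_n) = max(1 - x, 0)
psi = lambda x: d(x_N, x)          # ~ lim_n d(x_n, x) = max(x - 1, 0)

xs = [i / 100 for i in range(101)]

# Unit inequality: d(x, x') <= phi(x) + psi(x') for all x, x'.
assert all(d(x, xp) <= phi(x) + psi(xp) + 1e-9 for x in xs for xp in xs)

# Counit inequality: inf_x (psi(x) + phi(x)) = 0 (up to the 1/N error).
assert min(psi(x) + phi(x) for x in xs) <= 1e-3
```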
\end{document} |
\begin{document}
\title{Analysing the role of entanglement in the three-qubit Vaidman's game}
\author{\IEEEauthorblockN{Hargeet Kaur} \IEEEauthorblockA{Department of Chemistry\\ Indian Institute of Technology\\ Jodhpur, Rajasthan\\ Email: [email protected]} \and \IEEEauthorblockN{Atul Kumar} \IEEEauthorblockA{Department of Chemistry\\ Indian Institute of Technology\\ Jodhpur, Rajasthan\\ Email: [email protected]}}
\maketitle
\IEEEpeerreviewmaketitle
\begin{abstract} We analyse the role of the degree of entanglement in Vaidman's game in a setting where the players share a set of partially entangled three-qubit states. Our results show that entangled states combined with quantum strategies may not always be helpful in winning a game, as opposed to classical strategies. We further find the conditions under which quantum strategies are always helpful in achieving a higher winning probability in the game in comparison to classical strategies. Moreover, we show that a special class of W states can always be used to win the game using quantum strategies, irrespective of the degree of entanglement between the three qubits. Our analysis also helps us in comparing Vaidman's game with the secret sharing protocol. Furthermore, we propose a new Vaidman-type game where the rule-maker itself is entangled with the other two players and acts as a facilitator to share a secret key with them. \end{abstract}
\begin{IEEEkeywords} Vaidman's game, secret sharing, entanglement, GHZ, W, measurement, rule-maker, winning probability \end{IEEEkeywords}
\section{Introduction} Game theory is an eminently interesting and flourishing field of study, wherein many situations of conflict can be efficiently examined and resolved \cite{GameTheory}. With the advent of quantum information and computation, game theory has generated a lot of interest in analysing quantum communication protocols from the perspective of a game \cite{BB84, BB84Game}. The analysis not only allows one to study the fundamentals of quantum mechanics but also provides a much better insight into the communication protocol in terms of security, payoffs of different players, and the complex nature of multi-qubit entanglement. The aim is to study and compare the payoffs of different users and the security of a protocol using classical and quantum strategies. In general, quantum strategies are found to be preferable in comparison to classical strategies. For example, Meyer demonstrated how quantum strategies can be utilized by a player to defeat his classical opponent in a classical penny flip game \cite{Meyer}. He further explained the relation of the penny flip game setting to efficient quantum algorithms. Similarly, Eisert \cite{Eisert} suggested a quantum solution for avoiding the Prisoners' Dilemma. Moreover, the quantum version of the Prisoners' Dilemma game was also experimentally realized using an NMR quantum computer \cite{Du}. On the other hand, Anand and Benjamin \cite{Anand} found that for a scenario in the penny flip game where two players share an entangled state, a player opting for a mixed strategy can still win against a player opting for a quantum strategy. Therefore, it becomes important to analyse the role of quantum entanglement in game theory. Furthermore, one must also understand and study the importance of using different entangled systems under different game scenarios, so as to take advantage of the usefulness of such entangled systems in different situations. 
\par In this article, we analyse a game proposed by Vaidman \cite{Vaidman} in which a team of three players always wins the game when they share a three-qubit maximally entangled state. The team, however, does not win the game when the players opt for pure classical strategies; in fact, the maximum winning probability that can be achieved using classical strategies is $3/4$. Our analysis of the Vaidman game includes two different classes of three-qubit entangled states, namely the GHZ class \cite{GHZpaper} and the W class of states \cite{Dur}. We attempt to establish a relation between the winning probability of Vaidman's game \cite{Vaidman} and the degree of entanglement of the various three-qubit entangled states used as resources in the game. Interestingly, our results show that for the GHZ class, there is a set of states for which classical strategies give a better winning probability than the quantum strategies. In comparison to the GHZ class, for a special class of W states, quantum strategies prove to be always better than the classical strategies. We further establish a direct correspondence between Vaidman's game and Quantum Secret Sharing (QSS) \cite{Hillery}. In addition, we also propose a Vaidman-type game where one of the players sharing the three-qubit entanglement defines the rule of a game to be played between him/her and the other two players. A detailed examination of the proposed game shows that the rule-maker finds himself in an advantageous situation whenever the players share a partially entangled state, because this enables the rule-maker to modify the rules in such a way that the team of the other two players loses the game. Moreover, we further suggest an application of such a game in facilitated secret sharing between three parties, where one of the players is a facilitator and also controls the secret sharing protocol. \par The organization of the article is as follows. 
In Sections II and III, we briefly describe three-party entanglement (to identify the GHZ and W classes of states) and QSS, respectively. In Section IV, we establish a correspondence of Vaidman's game with QSS; in the corresponding subsections we further demonstrate the outcomes of using the GHZ and W classes of states for Vaidman's game. A new Vaidman-type game is proposed in Section V, followed by its application to QSS in the subsection. We conclude the article in Section VI.
\section{Three-qubit Entanglement} D\"{u}r et al. \cite{Dur} classified the pure states of three-qubit entangled systems into two inequivalent classes, namely the GHZ class and the W class, represented as
\begin{equation} \label{GHZgeneral} \vert{\psi_{GHZ}}\rangle= \sin\theta \vert{000}\rangle + \cos\theta \vert{111}\rangle \end{equation}
and
\begin{equation} \label{Wgeneral} \vert{\psi_{W}}\rangle= a\vert{100}\rangle + b\vert{010}\rangle + c\vert{001}\rangle, \end{equation}
respectively, where $\theta\in\left( 0,\pi/4\right)$ and $\left| a \right|^2 + \left| b \right|^2 + \left| c \right|^2 = 1$. The two classes are termed inequivalent because a state belonging to one class cannot be converted into a state of the other class by any number of Local Operations and Classical Communication (LOCC) steps. The degree of entanglement for a pure three-qubit system can be defined using a measure called the three-tangle $(\tau)$ \cite{Coffman}, given by
\begin{equation} \label{tau_formula} \tau= C^2_{P(QR)}-C^2_{PQ}-C^2_{PR} \end{equation}
where $C_{P(QR)}$ represents the concurrence of the qubit P, with qubits Q and R taken together as one entity \cite{Hill, Wootters1, Wootters2}. The terms $C_{PQ}$ and $C_{PR}$ can be defined in a similar fashion such that,
\begin{equation} \label{conc_formula} C(\vert{\psi}\rangle)=\vert\langle\psi\vert\sigma_y\otimes\sigma_y\vert\psi^*\rangle\vert \end{equation}
Here, $\psi^*$ denotes the complex conjugate of the wave function representing the two-qubit entangled state. The value of the three-tangle ranges from 0 for product states to 1 for maximally entangled states. For example, the three-tangle for the maximally entangled GHZ state represented as
\begin{equation} \label{GHZ} \vert{GHZ}\rangle= \frac{1}{\sqrt{2}}(\vert{000}\rangle+\vert{111}\rangle) \end{equation}
is 1. Similarly, the standard state of the W class is represented by
\begin{equation} \label{W} \vert{W}\rangle= \frac{1}{\sqrt{3}}(\vert{100}\rangle+\vert{010}\rangle+\vert{001}\rangle) \end{equation}
Although the standard $W$ state possesses genuine three-qubit entanglement, this entanglement cannot be detected using the three-tangle as an entanglement measure, since the three-tangle of the standard $W$ state is 0. Nevertheless, one can be assured that states of the W class exhibit genuine tripartite entanglement using other entanglement measures, such as the sum of the concurrences of all three bipartite entities, i.e., $C_{AB}+C_{BC}+C_{CA}$. The maximum value of the sum of the three concurrences is 2, which is attained for the standard $W$ state, as shown in (\ref{W}). \section{Quantum Secret Sharing} Secret sharing is the process of splitting a secret message into parts, such that no part of it is sufficient to retrieve the original message \cite{Hillery}. The original idea was to split the information between two recipients, one of whom may be a cheat (unknown to the sender). Only when the two recipients cooperate with each other can they retrieve the original message. The protocol, therefore, assumes that the honest recipient will not allow the dishonest recipient to cheat, hence the splitting of the information between the two. \par The original protocol can be implemented using the maximally entangled three-qubit GHZ state, as given in (\ref{GHZ}), shared between the three users Alice, Bob, and Charlie. Alice splits the original information between Bob and Charlie in a way that the complete message cannot be recovered unless they cooperate with each other. For sharing a joint key with Bob and Charlie, Alice asks all of them to measure their qubits at random either in the X or the Y basis, where the eigenstates in the X and Y bases are defined as
\begin{equation} \label{basis} \vert{\pm x}\rangle=\frac{1}{\sqrt{2}}(\vert{0}\rangle \pm \vert{1}\rangle), \ \ \ \vert{\pm y}\rangle=\frac{1}{\sqrt{2}}(\vert{0}\rangle \pm i\vert{1}\rangle) \end{equation}
\begin{table}[!t !h] \renewcommand{\arraystretch}{1.3} \caption{Effect of Bob's and Charlie's measurement on Alice's state in a GHZ state} \label{QSS} \centering
\begin{tabular}{|c|cccc|} \hline \backslashbox{Bob}{Charlie} & $\vert{+x}\rangle$ & $\vert{-x}\rangle$ & $\vert{+y}\rangle$ & $\vert{-y}\rangle$ \\ \hline $\vert{+x}\rangle$ & $\vert{+x}\rangle$ & $\vert{-x}\rangle$ & $\vert{-y}\rangle$ & $\vert{+y}\rangle$ \\ $\vert{-x}\rangle$ & $\vert{-x}\rangle$ & $\vert{+x}\rangle$ & $\vert{+y}\rangle$ & $\vert{-y}\rangle$ \\ $\vert{+y}\rangle$ & $\vert{-y}\rangle$ & $\vert{+y}\rangle$ & $\vert{-x}\rangle$ & $\vert{+x}\rangle$ \\ $\vert{-y}\rangle$ & $\vert{+y}\rangle$ & $\vert{-y}\rangle$ & $\vert{+x}\rangle$ & $\vert{-x}\rangle$ \\ \hline \end{tabular} \end{table} The effects of Bob's and Charlie's measurements on the state of Alice's qubit are shown in Table \ref{QSS}. After performing their measurements at random, Bob and Charlie announce their choices of measurement basis (but not the measurement outcomes) to Alice. This is followed by Alice telling her choice of measurement basis to Bob and Charlie. Only the bases XXX, XYY, YXY, and YYX (for Alice, Bob, and Charlie, respectively) are accepted for sharing the secret key. The results from all other random choices of bases are discarded.
Bob and Charlie must meet and tell each other their measurement outcomes so as to collectively know the measurement outcome of Alice. For instance, if both Bob and Charlie measure in the X basis and their measurement outcomes are $+1$ and $+1$, respectively, or $-1$ and $-1$, respectively, then the corresponding outcome of Alice will be $+1$ when measured in the X basis. On the other hand, if the measurement outcomes of Bob and Charlie are $+1$ and $-1$, respectively, or vice-versa, then the corresponding outcome of Alice will be $-1$ when measured in the X basis.
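The inference described above can be verified by enumerating the joint X-basis outcome distribution of the GHZ state. The following Python sketch (assumed helper code, not part of the protocol itself) confirms that only outcome triples with product $+1$ occur, so Alice's X outcome is always the product of Bob's and Charlie's X outcomes, in agreement with Table \ref{QSS}:

```python
# Sketch (assumed helper code): enumerate the joint X-basis measurement
# distribution on the GHZ state and confirm that the only outcome triples
# (s_A, s_B, s_C) with non-zero probability satisfy s_A * s_B * s_C = +1,
# i.e. Alice's X outcome equals the product of Bob's and Charlie's.
import itertools
import math

s = 1 / math.sqrt(2)
ghz = [s, 0, 0, 0, 0, 0, 0, s]            # (|000> + |111>)/sqrt(2)

def x_eigvec(sign):
    """The X eigenstate |+x> (sign=+1) or |-x> (sign=-1)."""
    return [1 / math.sqrt(2), sign / math.sqrt(2)]

def amp(state, vecs):
    """Amplitude <v1 v2 v3|state> for a product of single-qubit vectors."""
    total = 0.0
    for i in range(8):
        b = [(i >> k) & 1 for k in (2, 1, 0)]     # bits of |b0 b1 b2>
        total += state[i] * vecs[0][b[0]] * vecs[1][b[1]] * vecs[2][b[2]]
    return total

for sa, sb, sc in itertools.product((+1, -1), repeat=3):
    p = amp(ghz, [x_eigvec(sa), x_eigvec(sb), x_eigvec(sc)]) ** 2
    if sa * sb * sc == +1:
        assert abs(p - 0.25) < 1e-12      # the four allowed outcomes
    else:
        assert p < 1e-12                  # product -1 never occurs
```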
\section{Vaidman's Game representing Quantum Secret Sharing} In this section, we show a correspondence between the QSS protocol \cite{Hillery} and Vaidman's game \cite{Vaidman}. We, therefore, first briefly describe Vaidman's game. In this game, three players, namely Alice, Bob, and Charlie, are taken to arbitrary remote locations: A, B, and C, respectively. Now each player is asked one of two possible questions: either \textquotedblleft{What is X?}\textquotedblright \ or \textquotedblleft{What is Y?}\textquotedblright . The players can give only two possible answers, either -1 or +1. The rules of the game state that either each player is asked the X question, or two of the players are asked the Y question and the remaining one is asked the X question. The team of three players wins the game if the product of their answers is +1 (when all are asked the X question) and -1 (when one is asked the X question and two are asked the Y question). Clearly, if the players adopt a classical strategy, then at best they can achieve a winning probability of $3/4$. On the other hand, if the three players share a three-qubit maximally entangled GHZ state, as shown in (\ref{GHZ}), then they always win the game by using a simple quantum strategy, i.e., whenever a player is asked the X(Y) question, she/he measures her/his qubit in the X(Y) basis and uses the measurement outcome as her/his answer. \subsection{Use of GHZ class states} The three players always win the game using the above strategy because of the strong correlations between the three qubits of the GHZ state. For example, the three qubits in the GHZ state are related as
\begin{eqnarray} \label{Vaidman Game} \lbrace{M^X_A}\rbrace\lbrace{M^X_B}\rbrace\lbrace{M^X_C}\rbrace &=&1 \IEEEnonumber \\ \lbrace{M^X_A}\rbrace\lbrace{M^Y_B}\rbrace\lbrace{M^Y_C}\rbrace &=&-1 \IEEEnonumber \\ \lbrace{M^Y_A}\rbrace\lbrace{M^X_B}\rbrace\lbrace{M^Y_C}\rbrace &=&-1 \IEEEnonumber \\ \lbrace{M^Y_A}\rbrace\lbrace{M^Y_B}\rbrace\lbrace{M^X_C}\rbrace &=&-1 \end{eqnarray}
where $\lbrace{M^X_i}\rbrace$ and $\lbrace{M^Y_i}\rbrace$ are the measurement outcomes of the $i$-th player measuring her/his qubit in the X and Y basis, respectively. A clear correspondence between Vaidman's game and the QSS protocol is shown in (\ref{Vaidman Game}). \par We now proceed to analyze Vaidman's game in a more general setting where the three players share a general GHZ state represented in (\ref{GHZgeneral}), instead of the maximally entangled GHZ state of the original game. Clearly, for a general GHZ state, the success probability of winning the above-defined game varies from $50\%$ to $100\%$, as shown in Figure \ref{fig_genGHZ_VaidmanGame}. Here, we have assumed that the four question sets ($XXX$, $XYY$, $YXY$, $YYX$) are asked with equal probability. \begin{figure}
\caption{Success probability of winning Vaidman's game using GHZ-type states}
\label{fig_genGHZ_VaidmanGame}
\end{figure} In Figure \ref{fig_genGHZ_VaidmanGame}, the winning probability of the game, i.e., $\frac{1}{2}(1+\sin 2\theta)$, is plotted against the degree of entanglement, the three-tangle ($\tau$). It is clear that only for the maximally entangled state, i.e., when $\tau$ attains its maximum value (at $\theta=\pi/4$), do the players have a $100\%$ chance of winning the game. For all other values of $\tau$ the success probability is always less than the one obtained with a maximally entangled state. Interestingly, only the set of states with $\tau >0.25$ achieves a better success probability in comparison to the situation where all three players opt for classical strategies. Therefore, for the set of states with $\tau <0.25$, classical strategies prove to be better than quantum strategies. Hence, entanglement may not always be useful for winning games using quantum strategies in comparison to classical strategies.
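The quoted winning probability can be reproduced numerically. The sketch below (our own code; the helper names are illustrative) builds the state $\sin\theta\vert{000}\rangle+\cos\theta\vert{111}\rangle$, averages over the four equally likely question sets, and recovers $\frac{1}{2}(1+\sin 2\theta)$:

```python
# Numerical check of the quoted curve (assumed code, not the authors'):
# for sin(t)|000> + cos(t)|111>, the quantum strategy wins with probability
# (1 + sin 2t)/2, averaged over the four question sets XXX, XYY, YXY, YYX.
import math

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]

def expect(psi, O):
    """Real part of <psi|O|psi> for an 8-dimensional state vector."""
    return sum(psi[i].conjugate() * O[i][j] * psi[j]
               for i in range(8) for j in range(8)).real

def win_prob(t):
    psi = [0.0] * 8
    psi[0], psi[7] = math.sin(t), math.cos(t)     # sin t |000> + cos t |111>
    # (operators, required product of answers) for the four question sets
    rounds = [((X, X, X), +1), ((X, Y, Y), -1), ((Y, X, Y), -1), ((Y, Y, X), -1)]
    total = 0.0
    for (A, B, C), target in rounds:
        O = kron(kron(A, B), C)
        total += (1 + target * expect(psi, O)) / 2    # P(product == target)
    return total / 4

assert abs(win_prob(math.pi / 4) - 1.0) < 1e-9        # maximally entangled GHZ
t = math.pi / 8
assert abs(win_prob(t) - (1 + math.sin(2 * t)) / 2) < 1e-9
```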
\subsection{Use of W class states} Although W-type states belong to a different class of states, they can also be used as a resource in winning Vaidman's game with a different set of questions. In this case, the players may be asked either \textquotedblleft{What is Z?}\textquotedblright \ or \textquotedblleft{What is Y?}\textquotedblright. The answers to these questions can again be either +1 or -1. For this, either all players are asked the Z question; or one of the players is asked the Z question and the remaining two are asked the Y question. The players win the game if the product of their answers is -1 when all are asked the Z question, and +1 in all other cases. If the players share the standard $W$ state, given in (\ref{W}), before the start of the play, then they can win this game with a success probability of $87.5\%$. On similar grounds, we can use the standard $W$ state for probabilistic QSS, as QSS holds a direct correspondence with Vaidman's game. \par Similar to the case of the GHZ class, here we analyze the success probability of Vaidman's game when the three players share a general W-type state as shown in (\ref{Wgeneral}). In such a scenario, the team wins the game with a success probability given by $\frac{1}{4}(\frac{5}{2}+bc+ab+ac)$. This value holds under the assumption that the team is asked the four question sets ($ZZZ$, $ZYY$, $YZY$, $YYZ$) with equal likelihood. The plot of the winning probability of Vaidman's game versus the sum of the three concurrences is shown in Figure \ref{fig_genW_VaidmanGame1}. The figure shows that the winning probability of Vaidman's game increases linearly with the sum of concurrences for W-type states. Furthermore, the plot also indicates that for W-type states shown in (\ref{Wgeneral}) with the sum of two-qubit concurrences exceeding 1, the winning probability of Vaidman's game is always greater than the classical winning probability. 
Also, the highest success probability of $87.5\%$ can be achieved for values $a=b=c=\frac{1}{\sqrt{3}}$. \par \begin{figure}
\caption{Success probability of winning Vaidman's game using W-type states}
\label{fig_genW_VaidmanGame1}
\end{figure} Although the use of partially entangled systems, in general, leads to probabilistic information transfer \cite{Karlsson, Shi}, Pati and Agrawal \cite{Agrawal} have shown that there exists a special class of W-type states which can be used for perfect teleportation and dense coding. The class of states can be represented as
\begin{equation} \label{Wn} \vert{W_n}\rangle= \frac{1}{\sqrt{2(1+n)}}(\vert{100}\rangle+\sqrt{n}e^{i\gamma}\vert{010}\rangle+\sqrt{n+1}e^{i\delta}\vert{001}\rangle) \end{equation}
\begin{figure}
\caption{Success probability of winning Vaidman's game using $W_n$ states as a function of the sum of concurrences}
\label{fig_Wn_VaidmanGame1}
\end{figure} \begin{figure}
\caption{Success probability of winning Vaidman's game using $W_n$ states as a function of the parameter $n$}
\label{fig_Wn_VaidmanGame2}
\end{figure} where $n$ is a positive integer and $\delta$ and $\gamma$ are relative phases. This motivates us to analyse the usefulness of these states for Vaidman's game. The success probability obtained by sharing $W_n$ states among the three players is given by $\frac{1}{8(n+1)}(5+5n+\sqrt{n+1}+\sqrt{n}(\sqrt{n+1}+1))$. Figure \ref{fig_Wn_VaidmanGame1} clearly demonstrates that if the three players share $W_n$ states, then the success probability using quantum strategies is always greater than the success probability using classical strategies, independent of the value of the sum of concurrences. Moreover, Figure \ref{fig_Wn_VaidmanGame2} depicts the dependence of the winning probability of Vaidman's game on the parameter $n$. The highest success probability of approximately $0.8643$ is achieved for $n=1$, when the sum of the three concurrences is $1.914$. Nevertheless, the winning probability is always greater than the one obtained using classical strategies.
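Both the W-class formula $\frac{1}{4}(\frac{5}{2}+ab+bc+ac)$ and the $W_n$ expression above can be cross-checked numerically. The Python sketch below (our addition) assumes real coefficients, i.e., $\gamma=\delta=0$, and uses the expectation values $\langle ZYY\rangle=2bc$, $\langle YZY\rangle=2ac$, $\langle YYZ\rangle=2ab$ obtained by direct computation on $a\vert{100}\rangle+b\vert{010}\rangle+c\vert{001}\rangle$:

```python
# Numerical cross-check (our addition, assuming real coefficients, i.e.
# gamma = delta = 0): for a|100> + b|010> + c|001> under the Z/Y questions,
# the ZZZ round is won surely and <ZYY> = 2bc, <YZY> = 2ac, <YYZ> = 2ab.
import math

def win_prob_W(a, b, c):
    p_zzz = 1.0                        # every term has Z-product -1
    p_zyy = (1 + 2 * b * c) / 2        # P(product = +1) = (1 + <O>)/2
    p_yzy = (1 + 2 * a * c) / 2
    p_yyz = (1 + 2 * a * b) / 2
    return (p_zzz + p_zyy + p_yzy + p_yyz) / 4   # = (1/4)(5/2 + ab + bc + ac)

# Standard W state: 87.5%.
w = 1 / math.sqrt(3)
assert abs(win_prob_W(w, w, w) - 0.875) < 1e-12

# W_n states: agree with the closed-form expression for several n.
for n in range(1, 6):
    norm = math.sqrt(2 * (1 + n))
    a, b, c = 1 / norm, math.sqrt(n) / norm, math.sqrt(n + 1) / norm
    closed = (5 + 5 * n + math.sqrt(n + 1)
              + math.sqrt(n) * (math.sqrt(n + 1) + 1)) / (8 * (n + 1))
    assert abs(win_prob_W(a, b, c) - closed) < 1e-12
```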
\subsection{Comparison of the use of GHZ and W states} The above calculations suggest that although a standard GHZ state achieves a $100\%$ success probability in winning Vaidman's game, which is more than the winning probability achieved by the standard $W$ state, only the set of GHZ-type states with $\tau>0.25$ is useful for obtaining a success probability greater than the one obtained using classical strategies. Moreover, all the W-type states with the sum of three concurrences greater than one can be useful in winning the game. In addition, a special class of W-type states, i.e., the $W_n$ states, gives better prospects of winning Vaidman's game than any classical means, for all values of $n$.
\section{A game where the rule-maker is entangled with the players} The essence of Vaidman's game can be efficiently employed in an interesting scenario where the rule-maker itself is entangled with the players playing the Vaidman-type game. In our proposed game, Alice, Bob and Charlie share a three-qubit entangled state. We assume that Charlie prepares a three-qubit state and gives one qubit each to Alice ($A$) and Bob ($B$), keeping one qubit ($C$) with himself. Charlie agrees to help Alice and Bob if they win the game as per the rules defined by him. For this, Charlie measures his qubit in a general basis as shown in (\ref{Parametrized_basis}). Charlie then asks the question \textquotedblleft{What is X?}\textquotedblright \ or \textquotedblleft{What is Z?}\textquotedblright \ to the team. Alice and Bob are not allowed to discuss and each has to give an individual answer, which can be +1 or -1. If the team is asked the X (Z) question, both Alice and Bob measure their qubits in the X (Z) basis and give their measurement results as answers to the asked question.
\begin{equation} \label{Parametrized_basis} \vert{b_0}\rangle= \sin\lambda \vert{0}\rangle - \cos\lambda \vert{1}\rangle; \ \ \ \ \ \vert{b_1}\rangle= \cos\lambda \vert{0}\rangle + \sin\lambda \vert{1}\rangle \end{equation}
If Charlie's measurement outcome is $\vert{b_0}\rangle$, he declares the winning condition to be as shown in (\ref{rule1}), and if his measurement outcome is $\vert{b_1}\rangle$, he declares the winning condition to be as shown in (\ref{rule2}). Here, $\lbrace{M^X_i}\rbrace$ is the measurement outcome when the $i$-th player measures her/his qubit in the X basis, and $\lbrace{M^Z_i}\rbrace$ is the measurement outcome when the $i$-th player measures her/his qubit in the Z basis.
\begin{equation} \label{rule1} \lbrace{M^X_A}\rbrace\lbrace{M^X_B}\rbrace=1 \ \ \ \ \ \lbrace{M^Z_A}\rbrace\lbrace{M^Z_B}\rbrace=-1 \end{equation}
\begin{equation} \label{rule2} \lbrace{M^X_A}\rbrace\lbrace{M^X_B}\rbrace=-1 \ \ \ \ \ \lbrace{M^Z_A}\rbrace\lbrace{M^Z_B}\rbrace=1 \end{equation}
\begin{figure}
\caption{Success probability of winning the proposed game where the rule-maker is entangled with the players, using a standard $W$ state}
\label{fig_ProposedGame}
\end{figure}
If Charlie prepares a three-qubit GHZ state as shown in (\ref{GHZ}), then the team has a $50\%$ success probability irrespective of the measurement basis used by Charlie. However, if Charlie prepares a three-qubit $W$ state as shown in (\ref{W}), then the success probability of the team depends on the parameter $\lambda$ governing the basis in which Charlie makes a measurement. A plot of the success probability achieved with respect to the parameter $\lambda$ is shown in Figure \ref{fig_ProposedGame}. The maximum winning probability of the team is $0.9167$, attained when $\lambda=90^\circ$, i.e., when Charlie measures in the computational basis ($\vert{b_0}\rangle=\vert{0}\rangle$ and $\vert{b_1}\rangle=\vert{1}\rangle$). On the other hand, Charlie can also measure in the computational basis with $\vert{b_0}\rangle=\vert{1}\rangle$ and $\vert{b_1}\rangle=\vert{0}\rangle$ when $\lambda=0^\circ$. In such a parameter setting, the team mostly loses the game, as the winning probability is only $0.0833$. Thus, if Charlie wants to help Alice and Bob, he prefers to prepare a standard $W$ state and to perform his measurement in the computational basis ($\vert{b_0}\rangle=\vert{0}\rangle$ and $\vert{b_1}\rangle=\vert{1}\rangle$), so that the team can win the game with a success rate of $91.667\%$. In this situation, the use of a quantum strategy is always preferable for the team of Alice and Bob.
For all the above winning probabilities, we assume that Charlie asks the X and Z questions with equal probability. Alice and Bob may choose not to measure their qubits and answer randomly with +1 or -1 (the classical approach). In that case, the team wins the game half the time.
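The probabilities quoted above can be reproduced by conditioning the standard $W$ state on Charlie's outcome. The Python sketch below (our own helper code, with the qubit ordering $A$, $B$, $C$ assumed) applies the rules (\ref{rule1}) and (\ref{rule2}) to the post-measurement states and recovers $11/12\approx 0.9167$ at $\lambda=90^\circ$ and $1/12\approx 0.0833$ at $\lambda=0^\circ$:

```python
# Sketch (our own helper code; qubit ordering A, B, C assumed): condition
# the standard W state on Charlie's outcome in the parametrized basis and
# score the XX / ZZ rounds against the rule he announces.
import math

def team_win_prob(lam):
    s, c = math.sin(lam), math.cos(lam)
    # W = [(|10> + |01>)|0>_C + |00>|1>_C]/sqrt(3); projecting qubit C onto
    # b0 = sin|0> - cos|1> or b1 = cos|0> + sin|1> leaves the unnormalised
    # state alpha(|10> + |01>) + beta|00> on Alice and Bob.
    total = 0.0
    for rule, (a0, a1) in (("rule1", (s, -c)), ("rule2", (c, s))):
        alpha, beta = a0 / math.sqrt(3), a1 / math.sqrt(3)
        p_outcome = 2 * alpha ** 2 + beta ** 2
        if p_outcome < 1e-15:
            continue
        p_xx_plus = (1 + 2 * alpha ** 2 / p_outcome) / 2   # P(XX product = +1)
        p_zz_minus = 2 * alpha ** 2 / p_outcome            # P(ZZ product = -1)
        if rule == "rule1":      # outcome b0: require XX = +1 and ZZ = -1
            p_win = 0.5 * p_xx_plus + 0.5 * p_zz_minus
        else:                    # outcome b1: require XX = -1 and ZZ = +1
            p_win = 0.5 * (1 - p_xx_plus) + 0.5 * (1 - p_zz_minus)
        total += p_outcome * p_win
    return total

assert abs(team_win_prob(math.pi / 2) - 11 / 12) < 1e-12   # lambda = 90 deg
assert abs(team_win_prob(0.0) - 1 / 12) < 1e-12            # lambda = 0 deg
```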
\subsection{An application of the above game in secret sharing} For establishing a relation between the proposed game and secret sharing, we consider that Alice and Bob are kept in two different cells and are only partially allowed to communicate. By partially, we mean that they can communicate only in the presence of a facilitator or a controller (Charlie in our case), who listens and allows secure communication between the two. To accomplish this task, we prefer to exploit the properties of a standard $W$ state over the use of a $W_1$ state, because the success rate of winning Vaidman's game is $87.5\%$ when a standard $W$ state is shared, as opposed to approximately $86.43\%$ when a $W_1$ state is shared among the team members. Also, we further consider that Charlie performs his measurement in the basis shown in (\ref{Parametrized_basis}) at $\lambda=90^\circ$, i.e., Charlie measures his qubit in the computational basis ($\vert{b_0}\rangle=\vert{0}\rangle$ and $\vert{b_1}\rangle=\vert{1}\rangle$). \par
\begin{table}[t] \renewcommand{\arraystretch}{1.3} \caption{Control mode of facilitated information sharing} \label{table_CM} \centering
\begin{tabular}{|c|cccccc|} \hline Charlie's measurement outcome & $\vert{1}\rangle$ & $\vert{1}\rangle$ & $\vert{1}\rangle$ & $\vert{1}\rangle$ & $\vert{1}\rangle$ & $\vert{1}\rangle$ \\ \hline Alice's basis & Z & Z & X & X & X & X \\ Bob's basis & Z & X & Z & X & X & X \\ Is the choice of basis accepted? & yes & no & no & yes & yes & yes \\ Alice's measurement outcome & $+1$ & - & - & $+1$ & $-1$ & $+1$ \\ Bob's measurement outcome & $+1$ & - & - & $+1$ & $+1$ & $-1$ \\ \hline Correlation as expected? & $\checkmark$ & - & - & $\times$ & $\checkmark$ & $\checkmark$ \\ \hline
\multicolumn{7}{|c|}{Alice and Bob are asked to announce their outcome and it is checked if} \\
\multicolumn{7}{|c|}{their results comply with (\ref{rule2}) in at least $75\%$ of the cases} \\ \hline \end{tabular} \end{table}
In order to share a key, Charlie chooses to operate in two different modes, namely the control mode and the message mode. The control mode corresponds to Charlie's measurement outcome $\vert{1}\rangle$ and is used to check whether Alice and Bob are honest, as shown in Table \ref{table_CM}. Similarly, the message mode corresponds to Charlie's measurement outcome $\vert{0}\rangle$ and is used to share a secret key with Alice and Bob (Table \ref{table_MM}). For this, Charlie prepares $m$ standard $W$ states as shown in (\ref{W}) and distributes qubits 1 and 2 of each state to Alice and Bob, respectively, keeping the third qubit with himself. Charlie then performs a measurement on his qubit in the computational ($\vert{0}\rangle$, $\vert{1}\rangle$) basis. Meanwhile, Alice and Bob randomly choose their basis of measurement (either X or Z) and announce their choice of basis to Charlie. If they choose two different bases, then their choices are discarded. Alternately, Charlie randomly chooses a basis of measurement and announces his choice to Alice and Bob; this ensures that both Alice and Bob perform measurements in the same basis. This step is repeated for the $m$ qubits, and Alice and Bob note down their measurement results each time. \par
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Message mode of facilitated information sharing} \label{table_MM} \centering
\begin{tabular}{|c|cccccc|} \hline Charlie's measurement outcome & $\vert{0}\rangle$ & $\vert{0}\rangle$ & $\vert{0}\rangle$ & $\vert{0}\rangle$ & $\vert{0}\rangle$ & $\vert{0}\rangle$ \\ \hline Alice's basis choice & X & X & X & Z & Z & Z \\ Bob's basis choice & X & X & Z & X & Z & Z \\ Basis choice accepted? & yes & yes & no & no & yes & yes \\ Alice's measurement outcome & $\vert{+}\rangle$ & $\vert{-}\rangle$ & - & - & $\vert{0}\rangle$ & $\vert{1}\rangle$ \\ Bob's measurement outcome & $\vert{+}\rangle$ & $\vert{-}\rangle$ & - & - & $\vert{1}\rangle$ & $\vert{0}\rangle$ \\ \hline
\multicolumn{7}{|c|}{$\vert{0}\rangle$ and $\vert{+}\rangle$ correspond to secret bit: 0} \\
\multicolumn{7}{|c|}{$\vert{1}\rangle$ and $\vert{-}\rangle$ correspond to secret bit: 1} \\ \hline
\multicolumn{7}{|c|}{Let Charlie announce that Bob should flip his outcome whenever he} \\
\multicolumn{7}{|c|}{chooses Z basis for measurement} \\ \hline Shared secret bit & 0 & 1 & - & - & 0 & 1 \\ \hline \end{tabular} \end{table}
If Charlie gets $\vert{0}\rangle$ as his measurement outcome, then he knows with certainty that the measurement results of Alice and Bob are related as in (\ref{rule1}). As explained above, this is the message mode of the proposed secret sharing scheme, wherein Alice's and Bob's outcomes will either be the same or different. The relation between their outcomes is known only to Charlie, who announces it at the end of the protocol. On the other hand, if Charlie gets $\vert{1}\rangle$ as his measurement outcome, then the measurement results of Alice and Bob are related as in (\ref{rule2}) in $75\%$ of the cases. Since this is the control mode, Charlie secretly asks both Alice and Bob to announce their measurement outcomes, which he verifies in order to check whether anyone (Alice or Bob) is cheating. If the results announced by Alice and Bob comply with (\ref{rule2}) in fewer than $75\%$ of the cases, then cheating is suspected. Moreover, as Alice and Bob are not allowed to discuss, they cannot distinguish between the message and the control mode: if both Alice and Bob are asked to announce their measurement outcomes, then the control mode of secret sharing is taking place, while if neither of them is asked to announce her/his results, then the message mode occurs. If Charlie suspects cheating in the control mode, he disallows communication and does not announce the relation between the outcomes of Alice and Bob for the message runs. However, if Charlie does not find anything suspicious, he announces at the end which results correspond to the message and control modes, and also the relation between Alice's and Bob's outcomes in the message mode. This protocol, therefore, enables the controller to check a pair of agents for their honesty and, if they prove honest, to simultaneously share a secret key with them.
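The correlations underlying the two modes can be checked directly on the standard $W$ state. The following script is our own illustrative sketch (the qubit ordering and function names are ours, not from the protocol): it collapses Charlie's qubit and computes the probability that Alice and Bob obtain equal outcomes in each basis.

```python
import numpy as np

# Sketch: verify the message/control-mode correlations on the standard W state
# |W> = (|100> + |010> + |001>)/sqrt(3), qubit order (Alice, Bob, Charlie).

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

W = (kron3(ket1, ket0, ket0) + kron3(ket0, ket1, ket0)
     + kron3(ket0, ket0, ket1)) / np.sqrt(3)

def charlie_collapse(state, outcome):
    """Fix Charlie's qubit (the last factor) to |outcome> and return
    (outcome probability, normalized Alice-Bob post-measurement state)."""
    ab = state.reshape(4, 2)[:, outcome]   # amplitudes with Charlie's qubit fixed
    p = float(ab @ ab)
    return p, ab / np.sqrt(p)

def prob_equal(ab_state, basis):
    """Probability that Alice and Bob obtain the same outcome when both
    measure in the given basis ('Z' or 'X')."""
    if basis == 'Z':
        vecs = [ket0, ket1]
    else:
        vecs = [(ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)]
    return float(sum((np.kron(v, v) @ ab_state) ** 2 for v in vecs))
```

One finds that Charlie obtains $\vert{0}\rangle$ with probability $2/3$, after which Alice and Bob are perfectly anticorrelated in the Z basis and perfectly correlated in the X basis, matching Table \ref{table_MM}; after outcome $\vert{1}\rangle$ the post-measurement state is $\vert{00}\rangle$, so the Z-basis relation always holds while the X-basis one holds half the time, which is the $75\%$ figure of the control mode.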
\section{Conclusion and Future scope} In this article, we addressed the role of the degree of entanglement in Vaidman's game. We analysed the relation between the success probability of Vaidman's game and three-qubit entanglement measures, considering both quantum and classical strategies. The results obtained here indicate that entanglement and quantum strategies may not always be useful in winning the game. For example, we found that there are sets of GHZ-class and W-class states for which classical strategies prove better than quantum ones. On the other hand, for the special class of W-type states, i.e., $W_{n}$ states, quantum strategies are always better than classical strategies in winning Vaidman's game. We further explored a correspondence between Vaidman's game using general three-qubit pure states and the QSS protocol. In addition, we have proposed an efficient game in which the player deciding the rules of the game is itself entangled with the other two players. The proposed game may find an application in facilitated secret sharing, where a facilitator checks the players involved for their honesty and simultaneously controls the process of sharing information between them. \par It will be interesting to analyse these games under realistic conditions, i.e., considering the success probability of the game in the presence of noise. For future study, we also wish to extend our analysis to similar games between four players and then to generalise it to $N$ players.
\end{document}
\begin{document}
\title{Extremal Betti numbers of some Cohen-Macaulay binomial edge ideals}
\begin{abstract} We provide the regularity and the Cohen-Macaulay type of binomial edge ideals of Cohen-Macaulay cones, and we determine the extremal Betti numbers of some classes of Cohen-Macaulay binomial edge ideals: those of Cohen-Macaulay bipartite and fan graphs. In addition, we compute the Hilbert-Poincaré series of the binomial edge ideals of some Cohen-Macaulay bipartite graphs. \end{abstract}
\section*{Introduction}
Binomial edge ideals were introduced in 2010 independently by Herzog et al. in \cite{HHHKR} and by Ohtani in \cite{MO}. They are a natural generalization of the ideals of 2-minors of a generic $2\times n$ matrix: their generators are those 2-minors whose column indices correspond to the edges of a graph. More precisely, given a simple graph $G$ on $[n]$ and the polynomial ring $S=K[x_1,\dots,x_n,y_1, \dots,y_n]$ in $2n$ variables over a field $K$, the \textit{binomial edge ideal} associated to $G$ is the ideal $J_G$ of $S$ generated by all the binomials $\{x_iy_j -x_jy_i \mid i,j \in V(G) \text{ and } \{i,j\} \in E(G)\}$, where $V(G)$ denotes the vertex set and $E(G)$ the edge set of $G$. Many algebraic and homological properties of these ideals have been investigated, such as the Castelnuovo-Mumford regularity and the projective dimension; see for instance \cite{HHHKR}, \cite{EHH}, \cite{MK}, \cite{KM}, and \cite{RR}. Important invariants provided by the graded finite free resolution are the extremal Betti numbers of $J_G$.
Let $M$ be a finitely generated graded $S$-module. Recall that a non-zero Betti number $\beta_{i,i+j}(M)$ is called \textit{extremal} if $\beta_{k,k+\ell}(M) =0$ for all pairs $(k,\ell) \neq (i,j)$ with $k \geq i$ and $\ell \geq j$. A nice property of the extremal Betti numbers is that $M$ has a unique extremal Betti number if and only if $\beta_{p,p+r}(M) \neq 0$, where $p = \projdim M $ and $r = \reg M$. In recent years, extremal Betti numbers have been studied by several authors, also motivated by the conjecture of Ene, Hibi, and Herzog (\cite{EHH}, \cite{HR}) on the equality of the extremal Betti numbers of $J_G$ and $\mathrm{in}_<(J_G)$. Some works in this direction are \cite{B}, \cite{DH}, and \cite{D}, but the question has been completely and positively solved by Conca and Varbaro in \cite{CV}. The extremal Betti numbers of $J_G$ are explicitly provided by Dokuyucu, in \cite{D}, when $G$ is a cycle or a complete bipartite graph, by Hoang, in \cite{H}, for some closed graphs, and by Herzog and Rinaldo, in \cite{HR}, and Mascia and Rinaldo, in \cite{MR}, when $G$ is a block graph. In this paper, we determine the extremal Betti numbers for binomial edge ideals of some classes of Cohen-Macaulay graphs: cones, bipartite and fan graphs. The former were introduced and investigated by Rauf and the second author in \cite{RR}. They construct Cohen-Macaulay graphs by means of the formation of cones: connecting all the vertices of two disjoint Cohen-Macaulay graphs to a new vertex, the resulting graph is Cohen-Macaulay. For these graphs, we give the regularity and also the Cohen-Macaulay type (see Section \ref{Sec: cones}). The latter two are studied by Bolognini, Macchia and Strazzanti in \cite{BMS}. They classify the bipartite graphs whose binomial edge ideal is Cohen-Macaulay.
In particular, they present a family of bipartite graphs $F_m$ whose binomial edge ideal is Cohen-Macaulay, and they prove that, if $G$ is connected and bipartite, then $J_G$ is Cohen-Macaulay if and only if $G$ can be obtained recursively by gluing a finite number of graphs of the form $F_m$ via two operations. In the same article, they describe a new family of Cohen-Macaulay binomial edge ideals associated with non-bipartite graphs, the fan graphs. For both these families, in \cite{JK}, Jayanthan and Kumar compute a precise expression for the regularity, whereas in this work we provide the unique extremal Betti number of the binomial edge ideal of these graphs (see Section \ref{Sec: bipartite and fan graphs} and Section \ref{Sec: Cohen-Macaulay bipartite graphs}). In addition, we exploit the unique extremal Betti number of $J_{F_m}$ to describe completely its Hilbert-Poincaré series (see Section \ref{Sec: bipartite and fan graphs}).
\section{Betti numbers of binomial edge ideals of disjoint graphs}\label{Sec: preliminaries}
In this section we recall some concepts and notation on graphs that we will use in the article.
Let $G$ be a simple graph with vertex set $V(G)$ and edge set $E(G)$. A subset $C$ of $V(G)$ is called a \textit{clique} of $G$ if $\{i, j\} \in E(G)$ for all $i, j \in C$ with $i \neq j$. The \textit{clique complex} $\Delta(G)$ of $G$ is the simplicial complex of all its cliques. A clique $C$ of $G$ is called a \textit{face} of $\Delta(G)$, and its \textit{dimension} is $|C| -1$. A vertex of $G$ is called a {\em free vertex} of $G$ if it belongs to exactly one maximal clique of $G$. A vertex of $G$ of degree 1 is called a \textit{leaf} of $G$. A vertex of $G$ is called a \textit{cutpoint} if its removal increases the number of connected components. A graph $G$ is {\em decomposable} if there exist two subgraphs $G_1$ and $G_2$ of $G$ and a decomposition $G=G_1\cup G_2$ with $\{v\}=V(G_1)\cap V(G_2)$, where $v$ is a free vertex of both $G_1$ and $G_2$.
\begin{setup}\label{setup} Let $G$ be a graph on $[n]$ and $u \in V(G)$ a cutpoint of $G$. We denote by \begin{align*} & G' \text{ the graph obtained from } G \text{ by connecting all the vertices adjacent to } u, \\ & G'' \text{ the graph obtained from } G \text{ by removing } u, \\ & H \text{ the graph obtained from } G' \text{ by removing } u. \end{align*}
\end{setup} Using the notation introduced in Set-up \ref{setup}, we consider the following short exact sequence \begin{equation}\label{Exact}
0\To S/J_G \To S/J_{G'}\oplus S/((x_u, y_u)+J_{G''})\To S/((x_u,y_u)+J_{H}) \To 0 \end{equation} For more details, see Proposition 1.4, Corollary 1.5 and Example 1.6 of \cite{R}. From (\ref{Exact}), we get the following long exact sequence of Tor modules \begin{equation}\label{longexact} \begin{aligned} &\cdots\rightarrow T_{i+1,i+1+(j-1)}(S/((x_u,y_u)+J_H)) \rightarrow T_{i,i+j}(S/J_G) \rightarrow \\ & \hspace{-0.4cm}T_{i,i+j}(S/J_{G'}) \oplus T_{i,i+j}(S/((x_u, y_u)+J_{G''})) \rightarrow T_{i,i+j}(S/((x_u,y_u)+J_H)) \rightarrow \cdots \end{aligned} \end{equation} where $T_{i,i+j}^S(M)$ stands for $\mathrm{Tor}_{i,i+j}^S(M,K)$ for any $S$-module $M$, and $S$ is omitted if it is clear from the context.
\
\begin{lemma}\label{Lemma: beta_p,p+1 e beta_p,p+2} Let $G$ be a graph on $[n]$. Suppose that $J_G$ is Cohen-Macaulay, and let $p=\projdim S/J_G$. Then \begin{enumerate} \item[(i)] If $G$ is connected, then $\beta_{p,p+1} (S/J_G) \neq 0$ if and only if $G$ is the complete graph on $[n]$. \item[(ii)] If $G= H_1 \sqcup H_2$, where $H_1$ and $H_2$ are connected graphs on disjoint vertex sets, then $\beta_{p,p+2} (S/J_{G}) \neq 0$ if and only if $H_1$ and $H_2$ are complete graphs. \end{enumerate} \end{lemma}
\begin{proof} (i) In \cite{HMK}, the authors prove that for any simple graph $G$ on $[n]$ it holds that \begin{equation}\label{eq: linear strand} \beta_{i,i+1} (S/J_G) = i f_{i}(\Delta(G)), \end{equation} where $\Delta(G)$ is the clique complex of $G$ and $f_{i}(\Delta(G))$ is the number of faces of $\Delta(G)$ of dimension $i$. Since $G$ is connected and $J_G$ is Cohen-Macaulay, we have $p=n-1$, and the statement is an immediate consequence of Equation (\ref{eq: linear strand}) with $i=p$.\\
\noindent (ii) Since $J_G$ is generated by homogeneous binomials of degree 2, $\beta_{1,1}(S/J_G) = 0$, which implies that $\beta_{i,i}(S/J_G) = 0$ for all $i \geq 1$. Let $S_i = K[\{x_j,y_j\}_{j \in V(H_i)}]$ and $p_i = \projdim S_i/J_{H_i}$ for $i=1,2$, so that $p = p_1 + p_2$. Since the minimal graded free resolution of $S/J_G$ is the tensor product of those of $S_1/J_{H_1}$ and $S_2/J_{H_2}$, for all $j\geq 1$ we have
\begin{equation*} \beta_{p,p+j} (S/J_G) = \sum_{\substack{1 \leq j_1, j_2 \leq r \\ j_1+j_2 = j}} \beta_{p_1, p_1+j_1}(S_1/J_{H_1})\beta_{p_2, p_2+j_2}(S_2/J_{H_2}). \end{equation*} For $j=2$, we get \begin{equation}\label{eq on beta_p,p+2 disjoint graphs} \beta_{p,p+2} (S/J_G) = \beta_{p_1, p_1+1}(S_1/J_{H_1})\beta_{p_2, p_2+1}(S_2/J_{H_2}). \end{equation} By part (i), both Betti numbers on the right-hand side are non-zero if and only if $H_1$ and $H_2$ are complete graphs, and the thesis follows. \end{proof}
\ \\
Let $M$ be a finitely generated graded $S$-module. Recall that the Cohen-Macaulay type of $M$, which we denote by $\text{CM-type}(M)$, is $\beta_p(M)$, that is, the sum of all $\beta_{p,p+i}(M)$ for $i=0,\dots,r$, where $p= \projdim M$ and $r=\reg M$. When $S/J_G$ has a unique extremal Betti number, we denote it by $\widehat{\beta}(S/J_G)$.
\begin{lemma}\label{Lemma: Betti numbers of disjoint graphs} Let $H_1$ and $H_2$ be connected graphs on disjoint vertex sets and $G=H_1\sqcup H_2$. Suppose that $J_{H_1}$ and $J_{H_2}$ are Cohen-Macaulay binomial edge ideals. Let $S_i = K[\{x_j,y_j\}_{j \in V(H_i)}]$ for $i=1,2$. Then \begin{enumerate} \item[(i)] $\text{CM-type} (S/J_{G}) = \text{CM-type}(S_1/J_{H_1}) \text{CM-type}(S_2/J_{H_2})$. \item[(ii)] $\widehat{\beta}(S/J_G) = \widehat{\beta}(S_1/J_{H_1})\widehat{\beta}(S_2/J_{H_2})$. \end{enumerate}
\end{lemma}
\begin{proof} (i) The equality $J_{G} = J_{H_1} + J_{H_2}$ implies that the minimal graded free resolution of $S/J_G$ is the tensor product of the minimal graded free resolutions of $S_1/J_{H_1}$ and $S_2/J_{H_2}$, where $S_i = K[\{x_j,y_j\}_{j \in V(H_i)}]$ for $i=1,2$. Then \[ \beta_t(S/J_G) = \sum_{k=0}^t \beta_k(S_1/J_{H_1})\beta_{t-k}(S_2/J_{H_2}). \] Let $p = \projdim S/J_G$, that is $p=p_1+p_2$, where $p_i = \projdim S_i/J_{H_i}$ for $i=1,2$. Since $\beta_k (S_1/J_{H_1}) = 0$ for all $k > p_1$ and $\beta_{p-k}(S_2/J_{H_2})=0$ for all $k < p_1$, it follows \[ \beta_p(S/J_G) = \beta_{p_1}(S_1/J_{H_1})\beta_{p_2}(S_2/J_{H_2}). \]
\noindent (ii) Let $r= \reg S/J_G$. Consider \begin{equation*} \beta_{p,p+r} (S/J_G) = \sum_{\substack{1 \leq j_1, j_2 \leq r \\ j_1+j_2 = r}} \beta_{p_1, p_1+j_1}(S_1/J_{H_1})\beta_{p_2, p_2+j_2}(S_2/J_{H_2}). \end{equation*} Since $\beta_{p_i, p_i+j_i}(S_i/J_{H_i}) =0$ for all $j_i > r_i$, where $r_i = \reg S_i/J_{H_i}$ for $i=1,2$, and $r = r_1 + r_2$, it follows \begin{equation*} \beta_{p,p+r}(S/J_G) = \beta_{p_1, p_1+r_1}(S_1/J_{H_1})\beta_{p_2, p_2+r_2}(S_2/J_{H_2}). \end{equation*} \end{proof}
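As an illustration of Lemma \ref{Lemma: Betti numbers of disjoint graphs}, the convolution of Betti tables can be carried out explicitly for complete graphs, whose resolutions are linear with $\beta_{i,i+1}(S/J_{K_n}) = i\binom{n}{i+1}$ for $i\geq 1$. The following script is our own sanity check (function names are ours), not part of the paper's arguments:

```python
from math import comb

def betti_complete(n):
    """Graded Betti table {(i, j): beta_{i,j}(S/J_{K_n})} of a complete graph."""
    table = {(0, 0): 1}
    for i in range(1, n):
        table[(i, i + 1)] = i * comb(n, i + 1)   # linear resolution
    return table

def betti_disjoint(t1, t2):
    """Betti table of the disjoint union H_1 and H_2: convolution of the two
    tables, coming from the tensor product of the minimal free resolutions."""
    table = {}
    for (i1, j1), b1 in t1.items():
        for (i2, j2), b2 in t2.items():
            key = (i1 + i2, j1 + j2)
            table[key] = table.get(key, 0) + b1 * b2
    return table
```

For $G = K_2 \sqcup K_3$ one gets $p = 3$ and $\beta_{3,5}(S/J_G) = \beta_{1,2}(S/J_{K_2})\,\beta_{2,3}(S/J_{K_3}) = 2 \neq 0$, as predicted by part (ii) of Lemma \ref{Lemma: beta_p,p+1 e beta_p,p+2}.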
Let $G$ be a simple connected graph on $[n]$. We recall that if $J_G$ is Cohen-Macaulay, then $p=\projdim S/J_G = n-1$, and $S/J_G$ admits a unique extremal Betti number, namely $\widehat{\beta}(S/J_G) = \beta_{p,p+r} (S/J_G)$, where $r = \reg S/J_G$.
\section{Regularity and Cohen-Macaulay type of cones}\label{Sec: cones}
Given a graph $H$ and a new vertex $v \notin V(H)$, the \textit{cone} of $v$ on $H$, namely $\mathrm{cone}(v,H)$, is the graph with vertex set $V(H) \cup \{v\}$ and edge set $E(H) \cup \{\{v,w\} \mid w \in V(H)\}$.
\begin{lemma}\label{sum reg} Let $G=\mathrm{cone}(v, H_1 \sqcup \dots \sqcup H_s)$, with $s\geq 2$. Then
\[ \reg S/J_G = \max \left\lbrace \sum_{i=1}^s \reg S/J_{H_i}, 2\right\rbrace. \]
\end{lemma} \begin{proof}
Consider the short exact sequence (\ref{Exact}), with $G=\mathrm{cone}(v, H_1 \sqcup \dots \sqcup H_s)$ and $u=v$, then $G' = K_n$, the complete graph on $[n]$, $G'' = H_1 \sqcup \dots \sqcup H_s$, and $H=K_{n-1}$, where $n = |V(G)|$. Since $G'$ and $H$ are complete graphs, the regularity of $S/J_{G'}$ and $ S/((x_u,y_u)+J_{H})$ is 1. Whereas the regularity of $S/((x_u,y_u)+J_{G''})$ is given by $\reg S/J_{H_1} + \cdots +\reg S/J_{H_s}$. We get the following bound on the regularity of $S/J_G$ \begin{eqnarray*}
\reg S/J_G\hspace{-0.2cm}&\leq &\hspace{-0.2cm}\max\left\lbrace\reg \frac{S}{J_{G'}},\reg \frac{S}{((x_u, y_u)+J_{G''})}, \reg \frac{S}{((x_u,y_u)+J_{H})} +1\right\rbrace\\
&= &\hspace{-0.2cm}\max\left\lbrace 1, \sum_{i=1}^s \reg S/J_{H_i}, 2\right\rbrace. \end{eqnarray*} Suppose first that $\sum_{i=1}^s \reg S/J_{H_i} \geq 2$; then $\reg S/J_G \leq \sum_{i=1}^s \reg S/J_{H_i}$. Since $H_1 \sqcup \dots \sqcup H_s$ is an induced subgraph of $G$, by \cite[Corollary 2.2]{MM} of Matsuda and Murai we have \[ \reg S/J_{G} \geq \reg S/J_{H_1\sqcup \cdots \sqcup H_s} = \sum_{i=1}^s \reg S/J_{H_i}. \] Suppose now that $\sum_{i=1}^s \reg S/J_{H_i} < 2$; then $\reg S/J_G \leq 2$. Since $G$ is not a complete graph, $\reg S/J_G \geq 2$, and the statement follows.
\end{proof}
Observe that $\reg S/J_G = 2$ for $G=\mathrm{cone}(v, H_1 \sqcup \dots \sqcup H_s)$, with $s \geq 2$, if and only if all the $H_i$ are isolated vertices except for at most two, which are complete graphs. \ \\
We are going to give a description of the Cohen-Macaulay type and some Betti numbers of $S/J_G$ when $S/J_G$ is Cohen-Macaulay, and $G$ is a cone, namely $G=\mathrm{cone}(v,H)$. By \cite[Lemma 3.4]{RR}, to have Cohen-Macaulayness it is necessary that $H$ has exactly two connected components and both are Cohen-Macaulay (see also Corollaries 3.6 and 3.7 and Theorem 3.8 in \cite{RR}).
\begin{proposition}\label{prop: cm-type cone} Let $G=\mathrm{cone}(v, H_1 \sqcup H_2)$ on $[n]$, with $J_{H_1}$ and $J_{H_2}$ Cohen-Macaulay binomial edge ideals. Then \[ \text{CM-type}(S/J_G) = n-2 + \text{CM-type}(S/J_{H_1})\text{CM-type}(S/J_{H_2}). \] In particular, the unique extremal Betti number of $S/J_G$ is given by \begin{equation*} \widehat{\beta}(S/J_G) = \begin{cases} \widehat{\beta}(S_1/J_{H_1}) \widehat{\beta} (S_2/J_{H_2}) & \text{ if } r >2 \\ n-2 +\widehat{\beta}(S_1/J_{H_1}) \widehat{\beta}(S_2/J_{H_2}) & \text{ if } r =2 \end{cases} \end{equation*} where $r= \reg S/J_G$. In addition, if $r >2$, it holds \begin{equation*} \beta_{p,p+2} (S/J_G) = n-2. \end{equation*}
\end{proposition}
\begin{proof}
Consider the short exact sequence (\ref{Exact}) with $u=v$; then we have $G' = K_{n}$, $G'' = H_1 \sqcup H_2$, and $H= K_{n-1}$. It holds \begin{align} r=\reg S/J_G & \ = \ \max \{\reg S/J_{H_1} + \reg S/J_{H_2}, 2\}, \notag\\ \reg S/((x_u,y_u)+J_{G''}) & \ = \ \reg S/J_{H_1} + \reg S/J_{H_2},\notag \\ \reg S/J_{G'} & \ = \ \reg S/((x_u,y_u)+J_{H}) = 1, \label{reg1} \end{align} and \[ p=\projdim S/J_G = \projdim S/J_{G'} = \projdim S/((x_u,y_u)+J_{G''}) = n-1, \] \[ \projdim S/((x_u,y_u)+J_{H}) = n. \] Consider the long exact sequence (\ref{longexact}) with $i=p$. By \eqref{reg1}, we have \[ \beta_{p,p+j}(S/J_{G'}) = \beta_{p,p+j}(S/((x_u, y_u)+J_{H})) = 0 \text{ for all } j \geq 2 \] and \[\beta_{p+1,p+1+(j-1)}(S/((x_u,y_u)+J_H)) \neq 0 \text{ only for } j=2.\]
\noindent By Lemma \ref{Lemma: beta_p,p+1 e beta_p,p+2} and Lemma \ref{Lemma: Betti numbers of disjoint graphs} (i), it follows that \begin{eqnarray*}
\text{CM-type}(S/J_G) &=& \sum_{j=0}^r \beta_{p,p+j} (S/J_G) = \sum_{j=2}^r \beta_{p,p+j} (S/J_G) \\
&=& \beta_{p-1,p-2+2}(S/J_H) + \sum_{j=2}^r \beta_{p-2,p-2+j} (S/J_{G''}) \\
&=& n-2 + \text{CM-type} (S/J_{G''}) \\
&=& n-2 + \text{CM-type}(S/J_{H_1})\text{CM-type}(S/J_{H_2}). \end{eqnarray*}
If $r=2$, \begin{eqnarray*}
\text{CM-type}(S/J_G) &=& \beta_{p,p+2} (S/J_G) \\
&=& \beta_{p-1,p-2+2}(S/J_H) + \beta_{p-2,p-2+2} (S/J_{G''}) \\
&=& n-2 + \widehat{\beta} (S_1/J_{H_1}) \widehat{\beta} (S_2/J_{H_2}), \end{eqnarray*} where the last equality follows from Equation (\ref{eq on beta_p,p+2 disjoint graphs}).
If $r >2$, then $H_1$ and $H_2$ are not both complete graphs; hence, by Lemma \ref{Lemma: beta_p,p+1 e beta_p,p+2} (ii), $\beta_{p-2,p-2+2} (S/J_{G''}) =0$, so that $\beta_{p,p+2} (S/J_G) =n-2$ and $\widehat{\beta}(S/J_G) = \widehat{\beta}(S_1/J_{H_1}) \widehat{\beta}(S_2/J_{H_2})$.
\end{proof}
\section{Extremal Betti numbers of some classes of Cohen-Macaulay binomial edge ideals}\label{Sec: bipartite and fan graphs} We are going to introduce the notation for the family of fan graphs first introduced in \cite{BMS}.
Let $K_m$ be the complete graph on $[m]$ and $W=\{v_1,\dots,v_s\} \subseteq [m]$. Let $F_{m}^W$ be the graph obtained from $K_m$ by attaching, for every $i=1, \dots, s$, a complete graph $K_{h_i}$ to $K_m$ in such a way that $V(K_m) \cap V(K_{h_i}) = \{v_1, \dots, v_i\}$, for some $h_i >i$. We say that the graph $F_m^W$ is obtained by adding a \textit{fan} to $K_m$ on the set $W$. If $h_i = i+1$ for all $i=1, \dots, s$, we say that $F_m^W$ is obtained by adding a \textit{pure fan} to $K_m$ on the set $W$.
Let $W = W_1 \sqcup \cdots \sqcup W_k$ be a non-trivial partition of a subset $W \subseteq [m]$. Let $F_m^{W,k}$ be the graph obtained from $K_m$ by adding a fan to $K_m$ on each set $W_i$, for $i=1, \dots, k$. The graph $F_m^{W,k}$ is called a \textit{$k$-fan} of $K_m$ on the set $W$. If all the fans are pure, we call it a \textit{$k$-pure} fan graph of $K_m$ on $W$.
When $k=1$, we write $F_m^W$ instead of $F_m^{W,1}$. Consider the pure fan graph $F_m^W$ on $W=\{v_1, \dots, v_s\}$. We observe that $F_m^W = \mathrm{cone}(v_1, F_{m-1}^{W'} \sqcup \{w\})$, where $W' = W \setminus \{v_1\}$, $w$ is the leaf of $F_m^W$ with $\{w,v_1\} \in E(F_m^W)$, and $F_{m-1}^{W'}$ is the pure fan graph of $K_{m-1}$ on $W'$. \\
Now, we recall the notation used in \cite{BMS} for a family of bipartite graphs.
For every $m \geq 1$, let $F_m$ be the graph on the vertex set $[2m]$ and with edge set $E(F_m)= \{\{2i, 2j-1\} \mid i=1, \dots, m, j=i, \dots, m\}$. \\
In \cite{BMS}, the authors prove that if either $G = F_m$ or $G= F_m^{W,k}$, with $m \geq 2$, then $J_{G}$ is Cohen-Macaulay. The regularity of $S/J_G$ has been studied in \cite{JK}, where the following results are proved. \begin{proposition}[\cite{JK}]\label{prop: reg pure fan graph} Let $G = F_m^{W,k}$ be the $k$-pure fan graph of $K_m$ on $W$, with $m \geq 2$. Then \[ \reg S/J_G = k+1. \] \end{proposition}
\begin{proposition}[\cite{JK}]\label{prop: reg bip} For every $m\geq 2$, $\reg S/J_{F_m} = 3$. \end{proposition}
Observe that if $G = F_m^W$ is a pure fan graph, then the regularity of $J_G$ is equal to 3 for any $m$ and any $W\subseteq [m]$; hence all of these graphs belong to the class of graphs studied by Madani and Kiani in \cite{MK1}.
Exploiting Proposition \ref{prop: cm-type cone}, we obtain a formula for the CM-type of any pure fan graph $G = F_m^W$.
\begin{proposition}\label{Prop: CM-type pure fan graph}
Let $m \geq 2$ and let $G = F_m^W$ be a pure fan graph with $|W| \geq 1$. Then \begin{equation}\label{eq: CM-type pure fan graph}
\mathrm{CM-type}(S/J_G) = \widehat{\beta}(S/J_G)=(m-1)|W|. \end{equation} \end{proposition}
\begin{proof}
We use induction on $m$. If $m=2$, then $G$ is decomposable into $K_2$ and $K_3$, and it is straightforward to check that (\ref{eq: CM-type pure fan graph}) holds. If $m >2$, suppose the thesis true for all pure fan graphs of $K_{m-1}$. We have $G = \mathrm{cone}(v_1, H_1 \sqcup H_2)$, where $W=\{v_1, \dots, v_s\}$, $H_1=F_{m-1}^{W'}$ is the pure fan graph of $K_{m-1}$ on $W'$, with $W' = W \setminus \{v_1\}$, $w$ is the leaf of $G$ with $\{w,v_1\} \in E(G)$, and $H_2=\{w\}$. By the induction hypothesis, $\text{CM-type}(S/J_{H_1}) = (m-2)(|W|-1)$ and $\text{CM-type}(S/J_{H_2})=1$; then, using Proposition \ref{prop: cm-type cone}, it follows that \begin{equation*} \begin{aligned}
\text{CM-type}(S/J_G) &= |V(G)|-2 + \text{CM-type}(S/J_{H_1})\text{CM-type}(S/J_{H_2}) \\
&= (m+ |W|-2) + (m-2)(|W|-1) = (m-1)|W|. \end{aligned} \end{equation*}
Since $|W| \geq 1$, the graph $F_m^W$ is not a complete graph, so $\beta_{p,p+1}(S/J_G) =0$, where $p = \projdim S/J_{G}$. Since $\reg S/J_G=2$, the $\text{CM-type}(S/J_G)$ coincides with the unique extremal Betti number of $S/J_{G}$, namely $\beta_{p,p+2}(S/J_G)$. \end{proof}
In the following result we provide the unique extremal Betti number of any $k$-pure fan graph.
\begin{proposition}\label{CM-type Fan} Let $G = F_m^{W,k}$ be a $k$-pure fan graph, where $m \geq 2$ and $W = W_1 \sqcup \cdots \sqcup W_k \subseteq [m]$ is a non-trivial partition of $W$. Then \begin{equation} \label{eq: CM-type k-pure fan graph}
\widehat{\beta}(S/J_G) = (m-1) \prod_{i=1}^k |W_i|. \end{equation} \end{proposition}
\begin{proof}
Let $|W_i| = \ell_i$, for $i=1, \dots, k$. First of all, we observe that if $\ell_i = 1$ for all $i=1,\dots,k$, that is, $W_i = \{v_i\}$, then $G$ is decomposable into $G_1 \cup \cdots \cup G_{k+1}$, where $G_1=K_m$, $G_j=K_2$, and $G_1 \cap G_j = \{v_j\}$ for all $j=2, \dots, k+1$. This implies \[ \widehat{\beta}(S/J_G) = \prod_{j=1}^{k+1} \widehat{\beta}(S/J_{G_j}) = m-1, \] where the last equality is due to the fact that $\widehat{\beta}(S/J_{K_m}) = m-1$ for any complete graph $K_m$ with $m \geq 2$. Without loss of generality, we may therefore suppose $\ell_1 \geq 2$. \\
We are ready to prove the statement by induction on $n$, the number of vertices of $G=F_m^{W,k}$, that is, $n=m+\sum_{i=1}^k \ell_i$. Let $n=4$; then $G$ is a pure fan graph $F_2^W$ with $|W|=2$, which satisfies Proposition \ref{Prop: CM-type pure fan graph}, and \eqref{eq: CM-type pure fan graph} holds. Let $n>4$. Pick $v \in W_1$ such that $\{v,w\} \in E(G)$, with $w$ a leaf of $G$. Consider the short exact sequence (\ref{Exact}), with $u=v$, $G' = F_{m+\ell_1}^{W',k-1}$ the $(k-1)$-pure fan graph of $K_{m+\ell_1}$ on $W' = W_2 \sqcup \cdots \sqcup W_k$, $G'' = F_{m-1}^{W'',k} \sqcup \{w\}$ the disjoint union of the isolated vertex $w$ and the $k$-pure fan graph of $K_{m-1}$ on $W'' = W \setminus \{v\}$, and $H= F_{m+\ell_1-1}^{W',k-1}$. For the quotient rings involved in (\ref{Exact}), from Proposition \ref{prop: reg pure fan graph}, we have \begin{align*} r = &\reg S/J_G \ = \reg S/((x_u,y_u)+J_{G''}) = 1 + k, \\ &\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = k. \end{align*} As regards the projective dimensions, we have \begin{align*} p &= \projdim S/J_G = \projdim S/J_{G'} = \projdim S/((x_u,y_u)+J_{G''}) \\ &= \projdim S/((x_u,y_u)+J_{H})-1 = m + \sum_{i=1}^k \ell_i -1. \end{align*} Fix $i=p$ and $j=r$ in the long exact sequence (\ref{longexact}). The Tor modules $T_{p+1,p+1+(r-1)}(S/((x_u,y_u)+J_H))$ and $T_{p,p+r}(S/((x_u, y_u)+J_{G''}))$ are the only non-zero ones. It follows that \begin{align*} \beta_{p,p+r}(S/J_G) &= \beta_{p-1,p+r-2}(S/J_{H}) + \beta_{p-2,p+r-2}(S/J_{G''})\\ &= \widehat{\beta}(S/J_H) + \widehat{\beta}(S/J_{F_{m-1}^{W'',k}}). \end{align*} Both $F_{m-1}^{W'',k}$ and $H$ fulfil the hypotheses of the proposition and have fewer than $n$ vertices, so by the induction hypothesis \begin{align*} \widehat{\beta}(S/J_{H}) &= (m+\ell_1-2) \prod_{s=2}^k \ell_s, \\ \widehat{\beta}(S/J_{F_{m-1}^{W'',k}}) &= (m-2) (\ell_1-1) \prod_{s=2}^k \ell_s. \end{align*} Adding these extremal Betti numbers, we obtain \[ \widehat{\beta}(S/J_G) = \big[ (m+\ell_1-2) + (m-2)(\ell_1-1) \big] \prod_{s=2}^k \ell_s = (m-1)\prod_{i=1}^k \ell_i, \] and the thesis is proved. \end{proof}
\begin{proposition}\label{Prop: CM-type bip} Let $m \geq 2$. The unique extremal Betti number of the bipartite graph $F_m$ is given by \[ \widehat{\beta}(S/J_{F_m}) = \sum_{k=1}^{m-1} k^2. \] \end{proposition} \begin{proof} We use induction on $m$. If $m=2$, then $F_2 = K_2$ and it is well known that $\widehat{\beta}(S/J_{F_m})=1$.
Suppose $m >2$. Consider the short exact sequence (\ref{Exact}), with $G=F_m$ and $u= 2m-1$, with respect to the labelling introduced at the beginning of this section. The graphs involved in (\ref{Exact}) are $G' = F_{m+1}^W$, that is, the pure fan graph of $K_{m+1}$, with $V(K_{m+1}) = \{u\} \cup \{2i | i=1,\dots,m\}$, on $W=\{2i-1 | i= 1, \dots, m-1\}$, $G'' = F_{m-1} \sqcup \{2m\}$, and the pure fan graph $H=F_m^W$. By Proposition \ref{prop: reg pure fan graph} and Proposition \ref{prop: reg bip}, we have \begin{align*} r = &\reg S/J_G = \reg S/((x_u,y_u)+J_{G''}) = 3 \\ &\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = 2. \end{align*} As regards the projective dimension of the quotient rings involved in (\ref{Exact}), it is equal to $p = 2m-1$ for all of them, except for $S/((x_u,y_u)+J_{H})$, whose projective dimension is $2m$. Consider the long exact sequence (\ref{longexact}), with $i=p$ and $j=r$. In view of the above, $T_{p,p+r}(S/J_{G'})$, $T_{p,p+r}(S/((x_u,y_u)+J_H))$, and all the Tor modules on the left of $T_{p+1,p+1+(r-1)}(S/((x_u,y_u)+J_H))$ in (\ref{longexact}) are zero. It follows that \[ T_{p,p+r}(S/J_G) \cong T_{p+1,p+1+(r-1)}(S/((x_u,y_u)+J_H)) \oplus T_{p,p+r}(S/((x_u, y_u)+J_{G''})). \] Then, using Proposition \ref{CM-type Fan} and the induction hypothesis, we obtain \begin{eqnarray*} \beta_{p,p+r}(S/J_G) &=& \beta_{p-1,p+r-2}(S/J_H) + \beta_{p-2,p+r-2}(S/J_{G''})\\ &=& \widehat{\beta}(S/J_H) + \widehat{\beta}(S/J_{G''})\\ &=&(m-1)^2 + \sum_{k=1}^{m-2} k^2 = \sum_{k=1}^{m-1} k^2. \end{eqnarray*}
\end{proof}
\begin{question}\label{Rem: extremal betti = CM-type} Based on explicit calculations, we believe that for all bipartite graphs $F_m$ and pure fan graphs $F_m^{W,k}$ the unique extremal Betti number coincides with the CM-type, that is, $\beta_{p,p+j}(S/J_G) = 0$ for all $j=0, \dots, r-1$, when either $G= F_m$ or $G= F_m^{W,k}$, for $m \geq 2$, where $p= \projdim S/J_G$ and $r= \reg S/J_G$. \end{question} \ \\
In the last part of this section, we completely describe the Hilbert-Poincaré series $\mathrm{HS}$ of $S/J_G$, when $G$ is a bipartite graph $F_m$. In particular, we are interested in computing the $h$-vector of $S/J_G$.
For any graph $G$ on $[n]$, it is well known that
\[
\mathrm{HS}_{S/J_G} (t) = \frac{p(t)}{(1-t)^{2n}} = \frac{h(t)}{(1-t)^d}
\]
where $p(t), h(t) \in \mathbb{Z}[t]$ and $d$ is the Krull dimension of $S/J_G$. The polynomial $p(t)$ is related to the graded Betti numbers of $S/J_G$ in the following way
\begin{equation}\label{eq: p(t) with Betti number}
p(t) = \sum_{i,j}(-1)^i\beta_{i,j}(S/J_G)t^j.
\end{equation}
\begin{lemma}\label{lemma: the last non negative entry} Let $G$ be a graph on $[n]$ and suppose that $S/J_G$ has a unique extremal Betti number. Then the last non-zero entry of the $h$-vector is $(-1)^{p+d}\beta_{p,p+r}(S/J_G)$, where $p= \projdim S/J_G$ and $r= \reg S/J_G$. \end{lemma}
\begin{proof} If $S/J_G$ has a unique extremal Betti number, then it equals $\beta_{p,p+r}(S/J_G)$. Since $p(t)=h(t)(1-t)^{2n-d}$, we have $\mathrm{lc} (p(t)) = (-1)^d \mathrm{lc}(h(t))$, where $\mathrm{lc}$ denotes the leading coefficient of a polynomial. By Equation (\ref{eq: p(t) with Betti number}), the leading coefficient of $p(t)$ is the coefficient of $t^j$ with $j = p+r$. Since $\beta_{i,p+r} = 0$ for all $i < p$, we get $\mathrm{lc} (p(t)) = (-1)^p \beta_{p,p+r}$, and the thesis follows. \end{proof}
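The sign computation in the proof can be illustrated numerically; the following sketch (our own, with arbitrary sample values for $n$, $d$ and the $h$-vector) multiplies a polynomial by $(1-t)^{2n-d}$ and checks that the leading coefficient picks up the sign $(-1)^{2n-d}=(-1)^d$.

```python
# Illustrative check (not part of the proof): multiplying h(t) by (1-t)^(2n-d)
# multiplies the leading coefficient by (-1)^(2n-d) = (-1)^d, since 2n is even.

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def lc(p):
    """Leading coefficient: last nonzero entry of the coefficient list."""
    return next(c for c in reversed(p) if c != 0)

n, d = 5, 3            # sample values with 2n - d > 0 and d odd
h = [1, 4, 6, 2]       # a sample h-vector, lc(h) = 2
p = h
for _ in range(2 * n - d):
    p = poly_mul(p, [1, -1])   # multiply by (1 - t)
assert lc(p) == (-1) ** d * lc(h)
```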
The degree of $\mathrm{HS}_{S/J_G}(t)$ as a rational function is called the \textit{$a$-invariant}, denoted by $a(S/J_G)$, and it holds that
\[
a(S/J_G) \leq \reg S/J_G - \depth S/J_G.
\] The equality holds if $J_G$ is Cohen-Macaulay. In this case, $\dim S/J_G = \depth S/J_G$, and hence $\deg h(t) = \reg S/J_G$.
\begin{proposition} Let $G=F_m$, with $m \geq 2$. Then the Hilbert-Poincaré series of $S/J_G$ is given by \[ \mathrm{HS}_{S/J_G}(t) = \frac{h_0+h_1t+h_2t^2+ h_3t^3}{(1-t)^{2m+1}} \] where \[ h_0 = 1, \qquad h_1= 2m-1, \qquad h_2 = \frac{3m^2-3m}{2}, \; \text{ and } \; h_3 = \sum_{k=1}^{m-1}k^2. \] \end{proposition}
\begin{proof} By Proposition \ref{prop: reg bip}, $\deg h(t) = \reg S/J_G = 3$. Let $\mathrm{in}(J_G) = I_{\Delta}$, for some simplicial complex $\Delta$, where $I_\Delta$ denotes the Stanley-Reisner ideal of $\Delta$. Let $f_i$ be the number of faces of $\Delta$ of dimension $i$, with the convention that $f_{-1} = 1$. Then \begin{equation}\label{eq: h_k} h_k = \sum_{i=0}^k (-1)^{k-i} \binom{d-i}{k-i} f_{i-1}. \end{equation} Exploiting Equation (\ref{eq: h_k}), we get \[ h_1 = f_0 -d = 4m - (2m+1) = 2m-1. \] To obtain $h_2$, we first need to compute $f_1$, the number of edges of $\Delta$: these are all the possible edges except those appearing in $(I_{\Delta})_2$, which are as many as the edges of $G$. So \[ f_1 = \binom{4m}{2} - \frac{m(m+1)}{2} = \frac{15m^2-5m}{2}. \] Then we have \begin{eqnarray*} h_2 = \binom{2m+1}{2} f_{-1} - \binom{2m}{1} f_{0} + \binom{2m-1}{0} f_{1} = \frac{3m^2-3m}{2}. \end{eqnarray*} By Lemma \ref{lemma: the last non negative entry}, and since $p=2m-1$ and $d=2m+1$, \[ h_3 = (-1)^{4m} \beta_{p,p+r}(S/J_G) = \sum_{k=1}^{m-1}k^2 \] where the last equality follows from Proposition \ref{Prop: CM-type bip}. \end{proof}
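The closed formulas in the proof can be double-checked numerically; the following sketch (our own, not part of the proof) recomputes $h_1$ and $h_2$ from the $f$-vector of $\Delta$ for small $m$, together with the closed form of $h_3$.

```python
# Sanity check of the h-vector formulas for S/J_{F_m} (illustrative only).
from math import comb

def h_entries(m):
    """h_1 and h_2 of S/J_{F_m}, computed from the f-vector as in the proof."""
    d = 2 * m + 1                 # Krull dimension of S/J_{F_m}
    f0 = 4 * m                    # number of vertices of Delta
    edges_G = m * (m + 1) // 2    # number of edges of F_m (as in the proof)
    f1 = comb(4 * m, 2) - edges_G
    h1 = f0 - d
    h2 = comb(d, 2) - (d - 1) * f0 + f1
    return h1, h2

for m in range(2, 12):
    h1, h2 = h_entries(m)
    assert h1 == 2 * m - 1
    assert 2 * h2 == 3 * m * m - 3 * m
    # h3 is the extremal Betti number: 1^2 + ... + (m-1)^2
    assert sum(k * k for k in range(1, m)) == (m - 1) * m * (2 * m - 1) // 6
```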
\section{Extremal Betti numbers of Cohen-Macaulay bipartite graphs}\label{Sec: Cohen-Macaulay bipartite graphs}
In \cite{BMS}, the authors prove that, if $G$ is connected and bipartite, then $J_G$ is Cohen-Macaulay if and only if $G$ can be obtained recursively by gluing a finite number of graphs of the form $F_m$ via two operations. Here, we recall the notation introduced in \cite{BMS} for the sake of completeness. \\
\noindent Operation $*$: For $i = 1, 2$, let $G_i$ be a graph with at least one leaf $f_i$. We denote by $G = (G_1, f_1) * (G_2, f_2)$ the graph $G$ obtained by identifying $f_1$ and $f_2$. \\
\noindent Operation $\circ$: For $i = 1,2$, let $G_i$ be a graph with at least one leaf $f_i$, $v_i$ its neighbour, and assume $\deg_{G_i}(v_i) \geq 3$. We denote by $G = (G_1, f_1) \circ (G_2, f_2)$ the graph $G$ obtained by removing the leaves $f_1, f_2$ from $G_1$ and $G_2$ and by identifying $v_1$ and $v_2$. \\
In $G = (G_1, f_1) \circ (G_2, f_2)$, to refer to the vertex $v$ resulting from the identification of $v_1$ and $v_2$ we write $\{v\} = V(G_1) \cap V(G_2)$. For both operations, if it is not important to specify the vertices $f_i$ or it is clear from the context, we simply write $G_1 * G_2$ or $G_1 \circ G_2$.
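As an informal illustration (our own sketch, not from \cite{BMS}; the function name and the representation of graphs as vertex and edge sets are our conventions), the operation $*$ amounts to relabelling one leaf and taking a union:

```python
# Hypothetical sketch of operation *: identify leaf f2 of G2 with leaf f1 of G1.
def glue_star(G1, f1, G2, f2):
    """Return (G1, f1) * (G2, f2) for graphs given as (vertex set, edge set)."""
    V1, E1 = G1
    V2, E2 = G2
    relabel = lambda v: f1 if v == f2 else v
    V = set(V1) | {relabel(v) for v in V2}
    E = {frozenset(e) for e in E1} | {frozenset(map(relabel, e)) for e in E2}
    return V, E

# Gluing two paths on 3 vertices at endpoint leaves yields a path on 5 vertices.
P1 = ({1, 2, 3}, {(1, 2), (2, 3)})
P2 = ({"a", "b", "c"}, {("a", "b"), ("b", "c")})
V, E = glue_star(P1, 3, P2, "a")
```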
\begin{theorem}[\cite{BMS}] Let $G=F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or a $k$-pure fan graph $F_{m}^{W,k}$, with $t \geq 0$, $m \geq 3$, and $m_i \geq 3$ for all $i=1, \dots, t$. Then $J_G$ is Cohen-Macaulay. \end{theorem}
\begin{theorem}[\cite{BMS}, \cite{RR}]\label{Theo: bipartite CM} Let $G$ be a connected bipartite graph. The following properties are equivalent: \begin{enumerate} \item[(i)] $J_G$ is Cohen-Macaulay; \item[(ii)] $G = A_1 *A_2 * \cdots * A_k$, where, for $i=1, \dots, k$, either $A_i = F_m$ or $A_i = F_{m_1} \circ \cdots \circ F_{m_t}$, for some $m \geq 1$ and $m_j \geq 3$. \end{enumerate} \end{theorem}
Let $G=G_1* \cdots * G_t$, for $t \geq 1$. Observe that $G$ decomposes as $G_1 \cup \cdots \cup G_t$, with $G_i \cap G_{i+1} = \{f_i\}$, for $i=1, \dots, t-1$, where $f_i$ is the vertex resulting from the identification of the leaves of $G_i$ and $G_{i+1}$ in $G_i *G_{i+1}$, and $G_i \cap G_j = \emptyset$ for $j > i+1$. If $G$ is a Cohen-Macaulay bipartite graph, then it admits only one extremal Betti number, and by \cite[Corollary 1.4]{HR}, it holds that \[ \widehat{\beta}(S/J_G) = \prod_{i=1}^t \widehat{\beta}(S/J_{G_i}). \] In light of the above, we will focus on graphs of the form $G= F_{m_1} \circ \cdots \circ F_{m_t}$, with $m_i \geq 3$, $i=1, \dots, t$. Before stating the unique extremal Betti number of $S/J_G$, we recall the results on regularity shown in \cite{JK}.
\begin{proposition}[{\cite{JK}}] For $m_1, m_2 \geq 3$, let $G = F_{m_1} \circ F$, where either $F = F_{m_2}$ or $F$ is a $k$-pure fan graph $F_{m_2}^{W,k}$, with $W = W_1 \sqcup \cdots \sqcup W_k$ and $\{v\}= V(F_{m_1}) \cap V(F)$. Then \begin{equation*} \reg S/J_G = \begin{cases} 6 \qquad \; \; \; \quad \text{ if } F = F_{m_2}\\
k+3 \qquad \text{ if } F= F_{m_2}^{W,k} \text{ and } |W_i| = 1 \text{ for all } i\\
k+4 \qquad \text{ if } F= F_{m_2}^{W,k} \text{ and } |W_i| \geq 2 \text{ for some } i \text{ and } v \in W_i\\ \end{cases} \end{equation*} \end{proposition}
\begin{proposition}[\cite{JK}]
Let $m_1, \dots, m_t , m\geq 3$ and $t \geq 2$. Consider $G= F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or the $k$-pure fan graph $F_m^{W,k}$ with $W = W_1 \sqcup \cdots \sqcup W_k$ and $|W_i| \geq 2$ for some $i$. Then \[ \reg S/J_G = \reg S/J_{F_{m_1-1}} + \reg S/J_{F_{m_2-2}} + \cdots + \reg S/J_{F_{m_t -2}} + \reg S/J_{F \setminus \{v,f\}} \] where $\{v\} = V( F_{m_1} \circ \cdots \circ F_{m_t}) \cap V(F)$, $v \in W_i$ and $f$ is a leaf such that $\{v,f\} \in E(F)$. \end{proposition}
\begin{lemma}\label{Lemma: CM-type 2 pallini}
Let $m_1, m_2 \geq 3$ and $G = F_{m_1} \circ F$, where $F$ is either $F_{m_2}$ or a $k$-pure fan graph $F_{m_2}^{W,k}$, with $W = W_1 \sqcup \cdots \sqcup W_k$ and $|W_i| \geq 2$ for some $i$. Let $\{v\} = V(F_{m_1}) \cap V(F)$ and suppose $v \in W_i$. Let $G''$ be as in Set-up \ref{setup}, with $u=v$. Then the unique extremal Betti number of $S/J_G$ is given by \[ \widehat{\beta} (S/J_G) = \widehat{\beta} (S/J_{G''}). \] In particular, \[ \widehat{\beta} (S/J_G) = \begin{cases} \widehat{\beta}(S/J_{F_{m_1-1}})\widehat{\beta}(S/J_{F_{m_2-1}}) \qquad \text{ if } F = F_{m_2} \\ \widehat{\beta}(S/J_{F_{m_1-1}})\widehat{\beta}(S/J_{F_{m_2-1}^{W',k}}) \qquad \text{ if } F = F_{m_2}^{W,k} \end{cases} \] where $W' = W \setminus \{v\}$. \end{lemma}
\begin{proof} Consider the short exact sequence (\ref{Exact}), with $G = F_{m_1} \circ F$ and $u=v$.\\
If $F = F_{m_2}$, then the graphs involved in (\ref{Exact}) are: $G' = F_{m}^{W,2}$, $G'' = F_{m_1 -1} \sqcup F_{m_2-1}$, and $H=F_{m-1}^{W,2}$, where $m=m_1+m_2-1$, $W = W_1 \sqcup W_2$ with $|W_i| = m_i -1$ for $i=1,2$, and $G'$ and $H$ are $2$-pure fan graphs. By Proposition \ref{prop: reg pure fan graph} and Proposition \ref{prop: reg bip}, we have the following values for the regularity \begin{align*} r = &\reg S/J_G = \reg S/((x_u,y_u)+J_{G''}) = 6\\ &\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = 3. \end{align*} As for the projective dimension, it equals $p=n-1$ for all the quotient rings involved in (\ref{Exact}), except for $S/((x_u,y_u)+J_{H})$, for which it is $n$. Considering the long exact sequence (\ref{longexact}) with $i=p$ and $j=r$, it holds that \[ \beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''})) \] and by Lemma \ref{Lemma: Betti numbers of disjoint graphs} (ii) the second part of the thesis follows. \\
The case $F = F_{m_2}^{W,k}$ follows by similar arguments. Indeed, suppose $|W_1| \geq 2$ and $v \in W_1$. The graphs involved in (\ref{Exact}) are: $G' = F_{m}^{W',k}$, $G'' = F_{m_1 -1} \sqcup F_{m_2-1}^{W'',k}$, and $H=F_{m-1}^{W',k}$, where $m=m_1+m_2+|W_1| -2$, all the fan graphs are $k$-pure, $W' = W'_1 \sqcup W_2 \sqcup \cdots \sqcup W_k$, with $|W'_1| = m_1 -1$, whereas $ W'' = W \setminus \{v\}$. Fixing $r = \reg S/J_G = \reg S/((x_u,y_u)+J_{G''}) = k+4$, since $\reg S/J_{G'} = \reg S/((x_u,y_u)+J_{H}) = k+1$, and the projective dimension of all the quotient rings involved in (\ref{Exact}) is $p=n-1$, except for $S/((x_u,y_u)+J_{H})$, for which it is $n$, it follows \[ \beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''})) \] and by Lemma \ref{Lemma: Betti numbers of disjoint graphs} (ii) the second part of the thesis follows. \end{proof}
\begin{theorem}\label{Theo: betti number t pallini}
Let $t \geq 2$, $m \geq 3$, and $m_i \geq 3$ for all $i=1, \dots, t$. Let $G=F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or a $k$-pure fan graph $F_{m}^{W,k}$ with $W = W_1 \sqcup \cdots \sqcup W_k$. Let $\{v\} = V(F_{m_1} \circ \cdots \circ F_{m_t} ) \cap V(F)$ and, if $F=F_m^{W,k}$, assume $|W_1| \geq 2$ and $v \in W_1$. Let $G''$ and $H$ be as in Set-up \ref{setup}, with $u=v$. Then the unique extremal Betti number of $S/J_G$ is given by \[ \widehat{\beta}(S/J_G)=\widehat{\beta}(S/J_{G''}) + \begin{cases} \widehat{\beta}(S/J_H) &\text{if } m_t=3\\ 0 &\text{if } m_t>3 \end{cases} \] In particular, if $F = F_{m}$, it is given by \[ \widehat{\beta} ( S/J_G) = \widehat{\beta}(S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1}}) \widehat{\beta}(S/J_{F_{m-1}}) + \begin{cases} \widehat{\beta}(S/J_{H}) &\text{if } m_t=3 \\ 0 &\text{if } m_t>3 \end{cases} \]
where $H = F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ F_{m+m_t-2}^{W',2}$, and $F_{m+m_t-2}^{W',2}$ is a $2$-pure fan graph of $K_{m+m_t-2}$ on $W'=W_1' \sqcup W_2'$, with $|W_1'|=m_t-1$ and $|W_2'|=m-1$.\\ If $F = F_{m}^{W,k}$, it is given by \[ \widehat{\beta} ( S/J_G) = \widehat{\beta}(S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1}}) \widehat{\beta}(S/J_{F_{m-1}^{W'',k}}) + \begin{cases} \widehat{\beta}(S/J_H ) &\text{if } m_t=3 \\ 0 &\text{if } m_t>3 \end{cases} \]
where $W'' = W \setminus \{v\}$, $H=F_{m_1}\circ \cdots \circ F_{m_{t-1}} \circ F_{m'}^{W''',k}$, with $m'=m+m_t+|W_1|-2$, $W''' = W_1'' \sqcup W_2 \sqcup \cdots \sqcup W_k$, and $|W_1''| = m_t -1$. \end{theorem}
\begin{proof}
If $F=F_m$, we have $G'= F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ F_{m+m_t-1}^{W',2}$, $G'' = F_{m_1} \circ \cdots \circ F_{m_t-1} \sqcup F_{m-1}$, and $H=F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ F_{m+m_t-2}^{W',2}$, where $W'=W_1'\sqcup W_2'$, with $|W_1'|=m_t-1$ and $|W_2'|=m-1$. As regards the regularity of these quotient rings, we have \begin{align*} r &= \reg S/J_G = \reg S/((x_u,y_u)+J_{G''})\\ &= \reg S/J_{F_{m_1-1}} + \reg S/J_{F_{m_2-2}} + \cdots + \reg S/J_{F_{m_t -2}} + \reg S/J_{F_{m-1}} \end{align*} and both $\reg S/J_{G'}$ and $\reg S/((x_u,y_u)+J_H)$ are equal to \[ \reg S/J_{F_{m_1-1}} + \reg S/J_{F_{m_2-2}} + \cdots + \reg S/J_{F_{m_{t-1} -2}} + \reg S/J_{F_{m+m_t-1}^{W',2}}. \] Since $\reg S/J_{F_{m-1}} = \reg S/J_{F_{m+m_t-1}^{W',2}}=3$, whereas if $m_t=3$, $\reg S/J_{F_{m_t -2}}=1$, otherwise $\reg S/J_{F_{m_t -2}}=3$, it follows that \[ \reg S/J_{G'} = \reg S/((x_u,y_u)+J_H) = \begin{cases} r-1 &\text{ if } m_t=3\\ r-3 &\text{ if } m_t>3\\ \end{cases} \] For the projective dimensions, we have \begin{eqnarray*} p &=& \projdim S/J_G = \projdim S/((x_u,y_u)+J_{G''}) \\ &=& \projdim S/J_{G'} = \projdim S/((x_u,y_u)+J_{H}) -1= n-1. \end{eqnarray*} Passing through the long exact sequence (\ref{longexact}) of Tor modules, we obtain, if $m_t =3$, \[ \beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''})) + \beta_{p+1,(p+1)+(r-1)}(S/((x_u,y_u)+J_H)) \] and, if $m_t>3$, \[ \beta_{p,p+r} (S/J_G) = \beta_{p,p+r} (S/((x_u,y_u)+J_{G''})). \]
The case $F=F_{m}^{W,k}$ follows by similar arguments. Indeed, the involved graphs are: $G'=F_{m_1}\circ \cdots \circ F_{m_{t-1}} \circ F_{m'}^{W''',k}$, $G''= F_{m_1}\circ \cdots \circ F_{m_t-1} \sqcup F_{m-1}^{W'',k}$, and $H=F_{m_1}\circ \cdots \circ F_{m_{t-1}} \circ F_{m'-1}^{W''',k}$, where all the fan graphs are $k$-pure, $W'' = W \setminus \{v\}$, $m'=m+m_t+|W_1|-1$, $W''' = W_1'' \sqcup W_2 \sqcup \cdots \sqcup W_k$, and $|W_1''| = m_t -1$. Fixing $r= \reg S/J_G$, we get $\reg S/((x_u,y_u)+J_{G''}) =r$, whereas \[ \reg S/J_{G'} = \reg S/((x_u,y_u)+J_H) = \begin{cases} r-1 &\text{ if } m_t=3\\ r-3 &\text{ if } m_t>3\\ \end{cases} \] The projective dimension of all the quotient rings involved is $p=n-1$, except for $S/((x_u,y_u)+J_H)$, for which it is $n$. Passing through the long exact sequence (\ref{longexact}) of Tor modules, the thesis follows. \end{proof}
\begin{corollary}
Let $t \geq 2$, $m,m_1 \geq 3$, and $m_i \geq 4$ for all $i=2, \dots, t$. Let $G=F_{m_1} \circ \cdots \circ F_{m_t} \circ F$, where $F$ denotes either $F_{m}$ or a $k$-pure fan graph $F_{m}^{W,k}$ with $W = W_1 \sqcup \cdots \sqcup W_k$. Let $\{v\} = V(F_{m_1} \circ \cdots \circ F_{m_t} ) \cap V(F)$ and, when $F=F_m^{W,k}$, assume $|W_1| \geq 2$ and $v \in W_1$. Then the unique extremal Betti number of $S/J_G$ is given by \[ \widehat{\beta}(S/J_G)= \begin{cases} \widehat{\beta}(S/J_{F_{m_1-1}})\prod_{i=2}^t \widehat{\beta}(S/J_{F_{m_i-2}})\widehat{\beta}(S/J_{F_{m-1}})&\text{ if } F=F_m\\ \widehat{\beta}(S/J_{F_{m_1-1}})\prod_{i=2}^t \widehat{\beta}(S/J_{F_{m_i-2}})\widehat{\beta}(S/J_{F_{m-1}^{W',k}}) &\text{ if } F=F_m^{W,k} \end{cases} \] where $W' = W\setminus \{v\}$.
\end{corollary}
\begin{proof} By Theorem \ref{Theo: betti number t pallini} and by the hypothesis on the $m_i$'s, we get \[ \widehat{\beta}(S/J_G) = \widehat{\beta}(S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1}}) \widehat{\beta}(S/J_{F_{m-1}}). \] Repeating the same argument to compute the extremal Betti number of $S/J_{F_{m_1} \circ \cdots \circ F_{m_t-1} }$, and by Lemma \ref{Lemma: CM-type 2 pallini}, we are done. \end{proof}
\begin{remark} Contrary to what we believe for bipartite graphs $F_m$ and $k$-pure fan graphs $F_m^{W,k}$ (see Question \ref{Rem: extremal betti = CM-type}), in general for a Cohen-Macaulay bipartite graph $G=F_{m_1} \circ \cdots \circ F_{m_t}$, with $t \geq 2$, the unique extremal Betti number of $S/J_G$ does not coincide with the Cohen-Macaulay type of $S/J_G$. For example, for $G=F_4 \circ F_3$, we have $5 = \widehat{\beta}(S/J_G) \neq \text{CM-type}(S/J_G) = 29$. \end{remark}
\end{document} |
\begin{document}
\title{Pure-state quantum trajectories for general non-Markovian systems do not exist }
\author{Howard M. Wiseman} \affiliation{Centre for Quantum Dynamics, School of Science, Griffith University, Nathan 4111, Australia}
\author{J. M. Gambetta} \affiliation{Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1}
\begin{abstract} Since the first derivation of non-Markovian stochastic Schr\"odinger equations, their interpretation has been contentious. In a recent Letter [Phys. Rev. Lett. {\bf 100}, 080401 (2008)], Di\'osi claimed to prove that they generate ``true single system trajectories [conditioned on] continuous measurement''. In this Letter we show that his proof is fundamentally flawed: the solution to his non-Markovian stochastic Schr\"odinger equation at any particular time can be interpreted as a conditioned state, but joining up these solutions as a trajectory creates a fiction. \end{abstract}
\pacs{03.65.Yz, 42.50.Lc, 03.65.Ta} \maketitle
It is well recognized that the continuous measurement of an open quantum system $S$ with Markovian dynamics can be described by a stochastic Schr\"odinger equation (SSE). The pure-state solution to such an equation over some time interval, a ``quantum trajectory'' \cite{Car93b}, can be interpreted as the state of $S$ evolving while its environment is under continuous observation (monitoring). This fact is of great importance for designing and experimentally implementing feedback control on open quantum systems \cite{adaptph,QED,spin-sqz}. If this interpretation could also be applied to {\em non-Markovian} SSEs \cite{StrDioGis,GamWis02}, then this would be very significant for quantum technologies, especially in condensed matter environments, which are typically non-Markovian \cite{BrePet02}.
Previously we have argued that non-Markovian SSEs (NMSSEs) {\em cannot} be interpreted in this way \cite{GamWis02,GamWis03}. The solution at any particular time can be interpreted as the system state conditioned upon some measurement of the environment \cite{GamWis02}. But connecting up those solutions to make a trajectory is a fiction akin to trajectories in Bohmian mechanics \cite{GamWis03}.
Restricting to standard quantum mechanics, the basic problem is that for the state of $S$ to remain pure, the bath field must be continuously observed to disentangle it from the system. For Markovian dynamics, this is not a problem, because the moving field interacts with $S$ and, having interacted, moves on. But for non-Markovian dynamics, the field comes back and interacts again with $S$. Thus monitoring the field will feed disturbance back into the system, changing the {\em average} evolution of the state of $S$. That is contrary to the derivation of the NMSSE, which is constructed so as to reproduce on average the no-measurement evolution of $S$.
Recently, Di\'osi\ rederived one form of NMSSE from a different starting point, and claimed that, contrary to the conclusions of Ref.~\cite{GamWis03}, this allows an interpretation of the solutions as ``true single system trajectories [conditioned on] continuous measurement'' \cite{Dio08}. Here we show by general argument, and an explicit calculation, that this claim is incorrect, and that the reformulation does not alter our earlier conclusion.
\section{The non-Markovian system} Di\'osi\ considers a bath comprising an infinite sequence of von Neumann apparatuses $A_n$, each described by position and momentum operators $\hat x_n$, $\hat p_n$, $n\in\cu{1,2,\ldots \infty}$. (For clarity, we are using slightly different notation from Ref.~\cite{Dio08}.) The system interacts with the bath via the coupling Hamiltonian \begin{equation} \label{defV} \hat V = \sum_n \delta(t-\tau_n) \hat X \hat p_n, \;\; \tau_n=\epsilon n, \end{equation} where $\hat X$ is an Hermitian system operator. Here the explicit time-dependence plays the role of the free propagation of a bath field. This would seem to be a recipe for generating Markovian evolution, since $S$ interacts only once with each $A_n$, which thus plays a role analogous to a small segment of a Markovian bath field. The novelty of Di\'osi's approach is to generate non-Markovian evolution by having the $\cu{A_k}_{k=1}^\infty$ prepared in an entangled state $\ket{\phi_0}$. In the position representation it is given by \begin{equation} \label{phi0}
\ip{ \cu{x_k}_{k=1}^\infty}{\phi_0} \propto \exp\left[{ - \epsilon^2\sum_{l,m} x_l x_m \alpha(\tau_l-\tau_m) }\right]. \end{equation} The continuum-time limit is $\epsilon\to0$, where the system is subjected to infinitely frequent, but infinitesimally strong, interactions with the apparatuses. In this limit, $\alpha(t)$ plays the role of the correlation function for the bath.
It is a real and symmetric function \cite{GamWis02,Bassi:2002a}, and equals $g^2\delta(t)$ in the Markovian case. Assuming the system is initially in a pure state also, the Hamiltonian (\ref{defV}) produces an entangled system--bath state $\ket{\Psi(\tau_n^+)}$ immediately after the $n^{\mathrm{th}}$ interaction.
Di\'osi\ first considers the case where, immediately after each time $\tau_n$, the observable $\hat x_n$ is measured, yielding result $x_n$. This gives an unnormalized state for the conditioned quantum system, $\tilde \rho(\tau_n^+;\cu{x_l}_{l=1}^{n})$, given by \begin{equation}
{\rm Tr}_{\cu {A_m}_{m=n+1}^{\infty}}\left[\ip{\cu{x_l}_{l=1}^{n}}{\Psi(\tau_n^+)}\ip{\Psi(\tau_n^+)}{\cu{x_l}_{l=1}^{n}}\right], \end{equation}
with $\tr{\tilde \rho(\tau_n^+;\cu{x_l}_{l=1}^{n})}$ being the probability for the record $\cu{x_l}_{l=1}^{n}$. In the limit $\epsilon \to 0$, this state (if appropriately scaled) will have a continuous but stochastic evolution through time. The measurement of observable $\hat x_n$ does not disturb the future evolution of $S$ because $A_n$ never interacts with $S$ again. Thus, there is no difficulty with interpreting this stochastic evolution as the trajectory of an individual system, with the average state at time $t$ \begin{equation} \rho(t) = \int_{-\infty}^\infty \! dx_1 \cdots \int_{-\infty}^\infty \! dx_{\lfloor t/\epsilon \rfloor}\ \tilde \rho(t;\cu{x_l}_{l=1}^{\lfloor t/\epsilon \rfloor}) \end{equation}
being identical with that obtained simply by tracing over the bath (the apparatuses), \begin{equation} \rho(t) = {\rm Tr}_{\cu {A_k}_{k=1}^{\infty}}\left[\ket{\Psi(t)}\bra{\Psi(t)}\right]. \end{equation}
It is obvious, however, that $\tilde \rho(t;\cu{x_l}_{l=1}^{\lfloor t/\epsilon \rfloor})$ is {\em not} the solution of a SSE, for the simple reason that the state is mixed, not pure, even if it begins pure \footnote{It is not clear whether this mixed state trajectory is the solution of a well-defined non-Markovian stochastic master equation.}. The mixedness arises because the interaction of $S$ with $A_n$ entangles $S$ with $A_m$ for $m > n$, because initially $A_n$ and $A_m$ are entangled. That is, the system becomes entangled with apparatuses that are not yet measured. A mixed conditional state is not unexpected for non-Markovian systems. It has previously been shown in Refs. \cite{Imamoglu:1994a} and \cite{Breuer:2004b} that it is possible to derive a mixed state quantum trajectory equation that reproduces the non-Markovian evolution on average by adding to $S$ a fictitious system $F$, with the latter coupled to a monitored (Markovian) bath. A mixed state for $S$ arises when the partial trace over $F$ is performed. See Ref.~\cite{GamWis02b} for a comparison of this method with that of the NMSSE.
\section{The non-Markovian SSE and its interpretation} The only way to obtain a pure state for $S$ at time $t$ is by measuring all the apparatuses with which the system is entangled. Specifically, Di\'osi\ shows that it is necessary to measure the set of bath observables $\cu{\hat z(s):s\in[0,t]}$, where $\hat z(s)$ is the ``retarded observable'' \cite{Dio08} \begin{equation} \label{defz} \hat z(s) = 2\epsilon \sum_{k=1}^{\infty } \hat x_k \alpha(s-\tau_k) . \end{equation} This is of course a different observable at different times $s$. The state conditioned on the result $Z_t \equiv \cu{z(s):s\in[0,t]}$ of this measurement at time $t$ is a {\em functional} of $z(s)$ for $0\leq s \leq t$, which we will write as $\ket{\bar \psi_t[Z_t]}$. Di\'osi\ shows that this state is pure, and that it is the solution of the NMSSE \begin{equation} \frac{d\ket{\bar \psi_t[Z_t]}}{dt} = \hat X_t\left(z(t) {-} 2 \int_0^t \alpha(t-s) \frac{\delta}{\delta z(s)} ds\right)\ket{\bar \psi_t[Z_t]}. \label{NMSSE} \end{equation} Here, Di\'osi\ is working in the interaction picture with respect to the system Hamiltonian $\hat H$; hence, the time dependence of $\hat X_t \equiv e^{i\hat H t}\hat X e^{-i\hat H t}$. Equation (\ref{NMSSE}) was first derived in Refs.~\cite{GamWis02,Bassi:2002a}, but is very similar to that derived earlier in Refs.~\cite{StrDioGis}. The ensemble average of solutions of this NMSSE reproduces the reduced state of the system: \begin{equation} \rho(t) = {\rm E} \sq{\op{\bar \psi_t[Z_t]}{\bar \psi_t[Z_t]}} . \end{equation} Here in taking the expectation value, $z(t)$ must be treated as a Gaussian noise process with correlation function ${\rm E}[z(t)z(s)] = \alpha(t-s)$, as appropriate for $\ket{\phi_0}$. This convention is indicated by the notation $\bar\psi$ (as opposed to $\tilde\psi$) for the state.
The contentious issue is not whether the solution $\ket{\bar \psi_t[Z_t]}$ has an interpretation in standard quantum mechanics. As just explained, this state is the conditioned state of $S$ at time $t$ if an {\em all-at-once measurement} of the set of bath observables $\cu{\hat z(s):s\in[0,t]}$ were made at that time, yielding the result $Z_t$. The contentious issue is: can the {\em family} of states $\ket{\bar \psi_t[Z_t]}$ for $0\leq t \leq \infty$ be interpreted as a trajectory for the state of a single system, conditioned on monitoring of its bath? Di\'osi\ claims that it can be so interpreted, and that the required monitoring is simply to measure $\hat z(\tau_0)$ at time $\tau_0^+$, $\hat z(\tau_1)$ at time $\tau_1^+$ and so on. At first sight this {\em monitoring} may seem equivalent to the all-at-once measurement described above. But in fact it is not, as we will now explain.
A measurement of $\hat z(t)$ at time $t^+$ involves measuring apparatuses that have not yet interacted with $S$. This is necessarily so because the symmetry of $\alpha(\tau)$ means that $\hat z(t)$ contains contributions from $\hat x_m$ for some $\tau_m>t$ (except for the Markovian case of course). Consequently, $\hat z(t)$ does not commute with $\hat p_m$ for some $\tau_m>t$, and the measurement will therefore disturb these momentum observables. But these are precisely the observables that will couple to the system via (\ref{defV}), and thereby disturb it. Thus, as soon as the first measurement is performed, of $\hat z(\tau_0)$ at time $\tau_0$, $S$ ceases to obey the NMSSE. Whatever stochastic evolution it does undergo, it will not reproduce the reduced state of the unmeasured system $\rho(t)$.
It might be thought that it would be possible to avoid this alteration of the future evolution of the system by repreparing the apparatuses $A_m$ for $\tau_m>t$ in their pre-measurement states. However, this is not possible; before the measurement, these $A_m$ were {\em entangled} with the system $S$ and the other apparatuses. The correlation of these $A_m$ with $S$ is why the system state $\tilde \rho(\tau_n;\cu{x_l}_{l=1}^{n})$, conditioned on measuring the apparatuses after they have interacted with the system, is {\em mixed}. The evolution of this state over time is the only true quantum trajectory for a single system, and its mixedness is an inevitable consequence of the non-Markovian dynamics. In fact, we now show by explicit calculation that the monitoring Di\'osi\ suggests does not even produce pure conditioned states of $S$ --- it also leads to mixed states.
\section{A simple example}
We consider the case where the bath consists of two apparatuses and $\epsilon=1$. Thus there are just three relevant times, $\tau_0=0$ (the initial time), $\tau_1^+=1$ (just after the interaction with $A_1$) and $\tau_2^+=2$ (just after the interaction with $A_2$). Without loss of generality, we can write the initial Gaussian entangled state of the bath, analogous to \erf{phi0}, as \begin{equation} \phi_0(x_1,x_2)=c\exp[-(x_1^2+x_2^2 + 2 ax_1x_2)], \end{equation} where $c^2 ={2 \sqrt{1-a^2}}/\pi$. Here $0\leq a<1$ parametrizes the initial entanglement between the apparatuses. The analogue of \erf{defz} defines two operators, \begin{equation}
\begin{split}
\hat z_1 = 2(\hat x_1+ a\hat x_2) , ~\hat z_2 = 2(\hat x_2+ a\hat x_1) .
\end{split} \label{defdisz} \end{equation}
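As a numerical aside (our own sketch, with an arbitrary choice of $a$), one can check the stated normalization $c^2 = 2\sqrt{1-a^2}/\pi$ by direct integration, and verify the inversion of the linear relations defining the scaled variables $y_n = z_n/2$ that appears in the conditioned states below:

```python
import math

a = 0.5                                    # sample entanglement parameter, 0 <= a < 1
c2 = 2 * math.sqrt(1 - a * a) / math.pi    # the stated value of c^2

# Normalization: integrate |phi_0|^2 = c^2 exp[-2(x1^2 + x2^2 + 2 a x1 x2)]
# by a midpoint Riemann sum; the integrand decays fast, so [-6, 6] suffices.
step, lim = 0.02, 6.0
n = int(2 * lim / step)
total = 0.0
for i in range(n):
    x1 = -lim + (i + 0.5) * step
    for j in range(n):
        x2 = -lim + (j + 0.5) * step
        total += c2 * math.exp(-2 * (x1 * x1 + x2 * x2 + 2 * a * x1 * x2)) * step * step
assert abs(total - 1.0) < 1e-3

# Inverting y1 = x1 + a x2, y2 = x2 + a x1 (Jacobian determinant 1 - a^2)
# reproduces the apparatus positions (y1 - a y2)/(1 - a^2), (y2 - a y1)/(1 - a^2).
y1, y2 = 0.7, -0.3
x1 = (y1 - a * y2) / (1 - a * a)
x2 = (y2 - a * y1) / (1 - a * a)
assert abs((x1 + a * x2) - y1) < 1e-12 and abs((x2 + a * x1) - y2) < 1e-12
```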
Let us consider the unconditioned evolution of the system. At the initial time $\tau_0$ the total state is \begin{equation} \begin{split}
\ket{\Psi_0}=& \int_2 \phi_0(x_1,x_2) \ket{x_1}_1\ket{x_2}_2\ket{\psi_0} dx_1 dx_2, \end{split} \end{equation} where the final ket (with no subscript) denotes a state of $S$, and the subscript on the integral sign indicates it is a double integral. This evolves to the following state immediately after the interaction with the first apparatus: \begin{eqnarray}
\ket{\Psi_1}&=& \int_3 \phi_0(x_1,x_2)\ket{x_1+ X_1}_1\ket{x_2}_2 \nl{\times}\ket{X_1} \langle X_1\ket{\psi_0} dx_1dx_2dX_1. \label{Psi1} \end{eqnarray} Here $\ket{X_1}$ denote eigenstates of $\hat X_1\equiv\hat X(\tau_1)$, which we have taken to have a continuous spectrum for simplicity.
Finally, after the second interaction, the total state is \begin{eqnarray}
\ket{\Psi_2}&=& \int_4 \phi_0(x_1,x_2) \ket{x_1+ X_1}_1\ket{x_2+ X_2}_2 \ket{X_2} \nl{\times}
\bra{X_2}X_1\rangle\langle{X_1}\ket{\psi_0} dx_1dx_2dX_1 dX_2. \label{eq:total} \end{eqnarray} From Eq. \eqref{eq:total}, the reduced state for the system at time $\tau_2^+$ is simply \begin{eqnarray}
\rho_2&=& \int_4 \phi_0^2\left(\frac{X_1-X_1'}{2},\frac{X_2-X_2'}{2}\right) \ket{X_2}\ip{X_2}{X_1}\ip{X_1}{\psi_0}\nl{\times} \ip{\psi_0}{ X_1'} \ip{X_1'}{ X_2'} \bra{X_2'} dX_1 dX_2 dX_1'dX_2'. \end{eqnarray}
\section{All-at-once measurement at time $\tau_2^+$} It is convenient to use, rather than the observables $\hat z_n$ (\ref{defdisz}), the scaled observables
\begin{equation}
\begin{split}
\hat y_1 &= \hat z_1/2 = \hat x_1+ a\hat x_2 \equiv \zeta_1(\hat x_1,\hat x_2),\\
\hat y_2 &= \hat z_2/2 =\hat x_2+ a\hat x_1 \equiv \zeta_2(\hat x_1,\hat x_2).
\end{split} \label{defdisy} \end{equation} A measurement of $\hat z_n$, or $\hat y_n$, is described by the projector-density $\hat{\Pi}_n(y_n)$, defined by \begin{equation}\label{eq:projectors}
\hat\Pi_n(y_n) =\int dx_1 \int dx_2 \ \hat\pi_1(x_1)\otimes \hat\pi_2(x_2) \delta(y_n-\zeta_n(x_1,x_2)), \end{equation} where $\hat\pi_n(x)=\ket{x}_n\bra{x}_n$. Note that, unlike $\hat\pi_n(x)$, $\hat \Pi_n(y)$ is not a rank-one projector; it is in fact a rank-infinity projector. It satisfies $\int dy\hat \Pi_n(y)=1$ and $\hat \Pi_n(y)\hat \Pi_n(y')=\delta(y-y')\hat \Pi_n(y)$ (no sum over $n$ implied). It is obvious from the definition (\ref{defdisz}) that the two measurements commute.
Consider first the case where at time $\tau_2^+$ projective measurements of $\hat y_1$ and $\hat y_2$ are performed. This yields \begin{eqnarray} \ket{\tilde\Psi_{2}(y_1,y_2)}&=& \hat \Pi_2(y_2)\hat \Pi_1(y_1) \ket{\Psi_2} \nonumber \\ &=& \ket{\frac{y_1-ay_2}{1-a^2}}_1\ket{\frac{y_2-ay_1}{1-a^2}}_2\ket{\tilde\psi_2(y_1,y_2)}, \nonumber\\ \label{eq:conditionstate} \end{eqnarray} where the conditional system state $\ket{\tilde\psi_{2}(y_1,y_2)}$ is \begin{eqnarray} &&c\int_2 \exp[-(X_1-y_1)^2-(X_2-y_2)^2] \nl{\times}\exp\ro{-2aX_1X_2-\frac{a^2(y_1^2+y_2^2)-2ay_1y_2}{1-a^2} }\ket{X_2} \nl{\times}\bra{X_2}X_1\rangle\langle{X_1}\ket{\psi_0} dX_1 dX_2. \label{eq:conditionsysstate} \end{eqnarray}
Obviously $S$ is no longer entangled with $\cu{A_1,A_2}$. This is as expected since the operators $\hat y_1$ and $\hat y_2$ are linearly independent, and jointly measuring these is equivalent to jointly measuring $\hat x_1$ and $\hat x_2$. That is, the measurement at time $\tau_2^+$ effects a rank-one projective measurement on the bath, disentangling it from the system. Moreover, it is easy to verify that, as expected, \begin{equation} \frac{1}{1-a^2}\int_2\ket{\tilde\psi_{2}(y_1,y_2)}\bra{\tilde\psi_{2}(y_1,y_2)} dy_1dy_2 = \rho_2. \end{equation} This establishes that \erf{eq:conditionsysstate} is indeed the discrete-time analogue of the solution of the NMSSE (\ref{NMSSE}) at the relevant time (here $\tau_2^+$).
\section{Monitoring (measurements at $\tau_1^+$ and $\tau_2^+$)} Now consider the case that Di\'osi\ claims is equivalent to the above, namely measuring $\hat y_1$ at time $\tau_1^+$ and $\hat y_2$ at $\tau_2^+$. From \erf{Psi1}, the conditional total state at time $\tau_1^+$ is \begin{eqnarray}
\label{Psi1alt}
\ket{\tilde\Psi_{1}(y_1)}&=&\hat \Pi_1(y_1)\ket{\Psi_1}\\
&=&\int e^{-\left(1-a^2\right) x^2}\ket{ {y_1} -ax}_1\ket{x}_2dx \ket{\tilde\psi_{1}(y_1)} , \nonumber \end{eqnarray} where the conditional system state is \begin{equation} \ket{\tilde\psi_{1}(y_1)} = c \exp\left[- (y_1-\hat X_1)^2\right]\ket{\psi_0}. \end{equation} So far we have a pure state for the system, as expected from Di\'osi's argument. However, at the very next step it breaks down. Because the measurement of the bath has disturbed it, we cannot use the state (\ref{eq:total}) to calculate the next conditioned state. Rather, we must calculate the effect of the interaction between $S$ and $A_2$ on state (\ref{Psi1alt}). The new entangled system-bath state at $\tau_2^+$ is \begin{eqnarray}
\ket{\tilde\Psi_{2|1}(y_1)} &=& \int_2 e^{-\left(1-a^2\right) x^2} \ket{y_1-ax}_1\ket{x+X_2}_2 dx \nl{\times}\ket{X_2}\left\langle{X_2} \ket {\tilde\psi_{1}(y_1)}\right. dX_2 \label{Psi21}
\end{eqnarray} Here the ${2|1}$ subscript indicates that the state is at time $\tau_2^+$ but the measurement it is conditioned upon was performed at time $\tau_1^+$.
After the second measurement we have \begin{equation}
\ket{\tilde\Psi_{2|1,2}(y_1,y_2)}=\hat \Pi_2(y_2)\ket{\tilde\Psi_{2|1}(y_1)}, \end{equation} which evaluates to \begin{eqnarray} &&c\int \ket{\frac{aX_2+y_1-ay_2}{1-a^2}}_1 \ket{\frac{y_2-a^2 X_2-ay_1 }{1-a^2}}_2\nl{\times} \exp\left[-\frac{( X_2+ay_1-y_2)^2}{1-a^2}\right]
\nl{\times}\ket{X_2}\bra{X_2} \exp[-(\hat X_1-y_1)^2]\ket{\psi_{0}} dX_2 \label{Froghorn} \end{eqnarray} Note that this is an entangled state between $S$ and the bath --- it is not possible to define a pure conditional state for the system. The reason is that, as noted above, the projector $\hat\Pi_2(y_2)$ is not rank-one, so there is no guarantee that it will disentangle the system from the bath. So the monitoring procedure Di\'osi\ describes cannot possibly correspond to the solution of the NMSSE (\ref{NMSSE}). Moreover, it is easy to verify that, as expected, \begin{equation}
\int_2 \mathrm{Tr}_{12}\sq{\ket{\tilde\Psi_{2|1,2}(y_1,y_2)}\bra{\tilde\Psi_{2|1,2}(y_1,y_2)}} dy_1dy_2 \neq \rho_2. \end{equation} That is, the measurements described by Di\'osi\ disturb the evolution of the system so that it no longer obeys the original non-Markovian dynamics.
\section{Markovian limit} There is one case where Di\'osi's monitoring procedure does give a pure-state solution at all times which is identical to that obtained by an all-at-once measurement at that time. This is the case $a\rightarrow 0$, where $\hat y_n=\hat x_n$. That is to say, the initial bath state is unentangled, and the apparatuses are measured locally. In this Markovian limit we find \begin{equation}
\ket{\tilde\Psi_{2}(y_1,y_2)}= \ket{y_1}_1\ket{y_2}_2 \ket{\tilde\psi_{2}(y_1,y_2)}, \end{equation} where the conditional state $\ket{\tilde\psi_{2}(y_1,y_2)}$ is given by \begin{equation} c\exp\left[- (\hat X_2-y_2)^2\right]\exp\left[- (\hat X_1-y_1)^2\right]\ket{\psi_0}. \end{equation} This sequence of exponentials can obviously be continued indefinitely. The correspondence between the all-at-once measurement and Di\'osi's monitoring here is not surprising: in the Markovian limit the interpretation of a SSE in terms of continuous monitoring of the bath is well known.
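Spelling out the indefinite continuation mentioned above (our rewriting of the pattern from the two-step formula, not an equation from the original), the conditional state after $n$ steps in this Markovian limit reads

```latex
\ket{\tilde\psi_{n}(y_1,\ldots,y_n)}
  = c\,\exp\left[-(\hat X_n-y_n)^2\right]\cdots\exp\left[-(\hat X_1-y_1)^2\right]\ket{\psi_0}.
```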
To conclude, Di\'osi\ has introduced an elegant formulation of non-Markovian evolution using a local (Markovian) coupling to the bath but an initially non-local (entangled) bath state. In this formulation, it is simple to monitor the bath without affecting the future evolution of the system, because each apparatus only interacts with the system once. However, to make the conditioned system state pure, it is necessary to measure not only the apparatuses which have already interacted with the system, but also some of those which are yet to interact. Measuring the latter necessarily introduces noise that will disturb the future evolution of the system, so that it will not reproduce the original non-Markovian evolution on average. We show by explicit calculation that the monitoring scheme suggested by Di\'osi\ does disturb the evolution in this manner, and moreover it even fails to produce pure conditional system states.
While it is certainly possible to derive a non-Markovian stochastic Schr\"odinger equation, its solution can only be interpreted as a conditioned system state at some particular (but arbitrary) time $t$ \cite{GamWis02,GamWis03}. Connecting the solutions at different times creates the illusion of a ``quantum trajectory'', but is not part of standard quantum mechanics. Rather, it is related to Bohmian mechanics and its generalizations \cite{GamWis04}, which also allow one to derive discontinuous (jumplike) trajectories \cite{GamAskWis04}. Whether the jumplike non-Markovian trajectories recently introduced in Ref.~\cite{Piilo08} can be interpreted in a similar manner remains to be determined. But from the arguments in this Letter we know that
non-Markovian pure-state trajectories cannot be interpreted as true quantum trajectories.
{\em Acknowledgements:} HMW was supported by the Australian Research Council grant FF0458313.
JMG was partially supported by MITACS and ORDCF.
\end{document} |
\begin{document}
\title{On The Generalisation of Henstock-Kurzweil Fourier Transform}
\author{S. Mahanta$^{1}$,\quad S. Ray$^{2}$
\\
\em\small ${}^{1,2}$Department of Mathematics, Visva-Bharati, 731235, India.\\
\em\small ${}^{1}$e-mail: [email protected]\\
\em\small ${}^{2}$e-mail: [email protected]}
\date{}
\maketitle
\begin{abstract}
In this paper, a generalised integral called the Laplace integral is defined on unbounded intervals, and some of its properties, including a necessary and sufficient condition for differentiating under the integral sign, are discussed. It is also shown that this integral is more general than the Henstock-Kurzweil integral. Finally, the Fourier transform is defined using the Laplace integral, and its well-known properties are established.
\end{abstract}
\thanks{\bf Keywords:} Fourier transform, Henstock-Kurzweil integral, Denjoy integral, Laplace integral, Laplace derivative.\\\\
\thanks{\bf Mathematics Subject Classification(2020):} 42A38, 26A39.
\section{Introduction}
If $f\colon\mathbb{R}\to\mathbb{R}$ is Lebesgue integrable, its Fourier transform is defined by $$\widehat{f}(y)=\int_{-\infty}^{\infty}f(x)e^{-2\pi iyx}\,dx,$$ and its theory is well established. Now the obvious question from the viewpoint of ``generalised integrals'' is ``{\em can we replace the Lebesgue integral with a generalised integral in the definition of the Fourier transform?}'' It was Erik Talvila who first gave an affirmative answer to this question in \cite{HKFT}. He used the Henstock-Kurzweil integral to define the Fourier transform and proved its important properties. Furthermore, he pointed out that some beautiful results, e.g., the Riemann-Lebesgue lemma, are not satisfied by the Henstock-Kurzweil Fourier transform. However, in \cite{mendoza2009some}, it is proved that the Riemann-Lebesgue lemma is satisfied in an appropriate subspace of the space of all Henstock-Kurzweil integrable functions on $\mathbb{R}$. Further results concerning the Henstock-Kurzweil Fourier transform can be found in \cite{arredondo2020fourier, arredondo2018norm, mendoza2016note, morales2016extension, torres2015convolution, torres2010pointwise}.
\par In \cite{mahanta2021generalised}, a new generalised integral on bounded intervals, called the Laplace integral, was defined by the authors of this paper; it has continuous primitives and is more general than the Henstock-Kurzweil integral. In this paper, the concept of the Laplace integral is extended to $\mathbb{R}$ and then applied to define the Fourier transform. Moreover, the essential properties of the Fourier transform are studied in this general setting.
\noindent{\bf Notations:} Let $I$ be an interval, bounded or unbounded. We use the following notations throughout this paper.
\begin{align*}
&L^{1}(I)=\left\{ f\mid f\,\, \text{is Lebesgue integrable on $I$} \right\},\\
&\mathcal{HK}(I)=\left\{ f\mid f\,\, \text{is Henstock-Kurzweil integrable on $I$} \right\},\\
&\mathcal{BV}(I)=\left\{ f\mid f\,\, \text{is of bounded variation on $I$} \right\},\\
&\mathcal{BV}(\pm\infty)=\left\{ f\mid f\,\, \text{is of bounded variation on $\mathbb{R}\setminus (-a,a)$ for some $a\in\mathbb{R}$}\right\},\\
&V_{I}[f]= \text{Total variation of $f$ on $I$,}\\
&V_{I}[f(\cdot, y)]= \text{Total variation of $f$ with respect to $x$ on $I$,}\\
&\|f\|_{1}= \text{The $L^{1}$ norm of $f$}.
\end{align*}
\section{Preliminaries}
\begin{definition}[\cite{mahanta2021generalised}]
Let $f$ be Laplace integrable (see Definition 3.2 of \cite{mahanta2021generalised}) on a neighbourhood of $x$. If $\exists\, \delta>0$ such that the following limits
\begin{equation*}
\lim\limits_{s\rightarrow \infty}s\int_{0}^{\delta}e^{-st}f(x+t)\,dt\quad\text{and}\quad\lim\limits_{s\rightarrow \infty}s\int_{0}^{\delta}e^{-st}f(x-t)\,dt
\end{equation*}
exist and are equal, then the common value is denoted by $LD_{0}f(x)$. And we say $f$ is Laplace continuous at $x$ if $LD_{0}f(x)=f(x)$.
\end{definition}
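As an informal numerical check (not part of the formal development), the following Python sketch approximates the right-hand average $s\int_{0}^{\delta}e^{-st}f(x+t)\,dt$ after the substitution $u=st$; the left-hand average is handled analogously. The helper name \texttt{ld0\_approx} and all numerical parameters are ours. For $f$ continuous at $x$, the averages should approach $f(x)$, illustrating $LD_{0}f(x)=f(x)$.

```python
import math

def ld0_approx(f, x, s, umax=40.0, n=100_000):
    # Midpoint-rule approximation of s * \int_0^delta e^{-s t} f(x + t) dt
    # after the substitution u = s t, which turns it into
    # \int_0^{s delta} e^{-u} f(x + u/s) du (the tail beyond umax is negligible).
    h = umax / n
    return sum(math.exp(-(i + 0.5) * h) * f(x + (i + 0.5) * h / s)
               for i in range(n)) * h

# As s grows, the exponential weight concentrates near t = 0, so only the
# local behaviour of f at x matters; the averages approach f(x) = cos(0.3).
approximations = [ld0_approx(math.cos, 0.3, s) for s in (10.0, 100.0, 1000.0)]
```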
\begin{definition}
Let $f$ be Laplace integrable on a neighbourhood of $x$. If $\exists\, \delta>0$ such that the following limits
\begin{equation*}
\lim\limits_{s\rightarrow \infty}s^{2}\int_{0}^{\delta}e^{-st}[f(x+t)-f(x)]\,dt\quad\text{and}\quad\lim\limits_{s\rightarrow \infty}(-s^{2})\int_{0}^{\delta}e^{-st}[f(x-t)-f(x)]\,dt
\end{equation*}
exist and are equal, then we say $f$ is Laplace differentiable at $x$ and the common value is denoted by $LD_{1}f(x)$.
\end{definition}
\noindent If $f$ is a function of two variables, say $x$ and $y$, then we define the Laplace derivative of $f$ with respect to $x$ as the Laplace derivative of $f_{y}(x)=f(x,y)$ ($y$ is assumed to be constant), and we denote it by $LD_{1x}f$. The definition of $LD_{1y}f$ is similar. For further properties of the Laplace derivative and Laplace continuity, see \cite{OLD, TLD1, TLD2, OLC}.
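Similarly, the Laplace derivative can be sketched numerically (helper name \texttt{ld1\_approx} and all parameters are ours): for a smooth $f$ the quantity $s^{2}\int_{0}^{\delta}e^{-st}[f(x+t)-f(x)]\,dt$ recovers the ordinary derivative $f'(x)$ as $s\rightarrow\infty$, since $s^{2}\int_{0}^{\infty}e^{-st}t\,dt=1$.

```python
import math

def ld1_approx(f, x, s, umax=40.0, n=100_000):
    # Midpoint-rule approximation of s^2 * \int_0^delta e^{-s t} [f(x+t) - f(x)] dt
    # after u = s t, i.e. s * \int_0^{s delta} e^{-u} [f(x + u/s) - f(x)] du.
    h = umax / n
    return s * sum(math.exp(-(i + 0.5) * h) * (f(x + (i + 0.5) * h / s) - f(x))
                   for i in range(n)) * h

# For f = exp the Laplace derivative agrees with the ordinary derivative,
# so the estimates should approach exp(0.5) as s increases.
estimates = [ld1_approx(math.exp, 0.5, s) for s in (10.0, 100.0, 1000.0)]
```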
\section{The Laplace integral on unbounded intervals}
In Definition 3.4 of \cite{borkowski2018applications}, Denjoy-Perron type integrals on unbounded intervals are defined, and it is then proved that they are equivalent to the Henstock-Kurzweil integral on unbounded intervals (Theorem 3.2 of \cite{borkowski2018applications}). Similarly, we shall define the Laplace integral on unbounded intervals and establish its properties. Due to this similarity, most of the results will be given without proof.
\begin{definition}\label{Laplace integral}
Let $I=[a,\infty]$ and let $f:I\to\mathbb{R}$. Then we say $f$ is Laplace integrable on $[a,\infty)$ or on $[a,\infty]$ if
\begin{enumerate}[\upshape (a)]
\item $f$ is Laplace integrable on $[a,c]$ for all $c\geqslant a$ and
\item $\lim\limits_{c\rightarrow\infty}\int_{a}^{c}f$ exists.
\end{enumerate}
In this case we write $\int_{a}^{\infty}f=\lim\limits_{c\rightarrow\infty}\int_{a}^{c}f$. The set of all Laplace integrable functions on $[a,\infty)$ will be denoted by $\mathcal{LP}[a,\infty)$ or by $\mathcal{LP}[a,\infty]$.
\end{definition}
\noindent Integrability on $(-\infty,b]$ or on $[-\infty,b]$ can be defined analogously. We shall say that $f$ is integrable on $\mathbb{R}$ or on $\overline{\mathbb{R}}$ (\,$=[-\infty,\infty]$\,) if there is some $a\in \mathbb{R}$ such that $f$ is integrable on both $(-\infty,a]$ and $[a,\infty)$, and we write $\int_{-\infty}^{\infty}f=\int_{-\infty}^{a}f+\int_{a}^{\infty}f$. From now on, we assume all integrals are Laplace integrals unless otherwise stated.
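Definition \ref{Laplace integral} reduces integrability on $[a,\infty]$ to a Cauchy limit of integrals over $[a,c]$. As an informal numerical illustration (helper name \texttt{partial\_integral} and all parameters are ours), the classical conditionally convergent example $\int_{0}^{\infty}\frac{\sin x}{x}\,dx=\frac{\pi}{2}$, which is not absolutely integrable, can be approximated this way:

```python
import math

def partial_integral(c, n_per_unit=1000):
    # Midpoint rule for \int_0^c sin(x)/x dx; the integrand extends
    # continuously to 1 at x = 0, and the midpoint rule never samples 0.
    n = int(c * n_per_unit)
    h = c / n
    return sum(math.sin((i + 0.5) * h) / ((i + 0.5) * h) for i in range(n)) * h

# F(c) = \int_0^c sin(x)/x dx oscillates but converges to pi/2 as c -> infinity,
# so sin(x)/x is integrable on [0, infinity] in the Cauchy-limit sense.
tail_values = [partial_integral(c) for c in (50.0, 100.0, 200.0)]
```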
\par From Section 3 of \cite{mahanta2021generalised} and Definition 3.4, Theorem 3.2 of \cite{borkowski2018applications}, it is evident that the Denjoy-Perron (equivalently, Henstock-Kurzweil) integral on unbounded intervals is a particular case of the Laplace integral; however, the following example shows that $\mathcal{LP}(\mathbb{R})\setminus\mathcal{HK}(\mathbb{R})$ is non-empty.
\begin{example}\label{TheOnleExample}
Let $f:[0,1]\to\mathbb{R}$ be the function defined in Equation (4.12) of \cite{mahanta2021generalised} and let $\phi=LD_{1}f$ on $[0,1]$ (see Theorem 4.2 of \cite{mahanta2021generalised}). Now define
\[
F(x)=
\begin{cases}
f(x)\quad& \text{if $x\in [0,1]$}\\
\,\,\,\,0\quad& \text{if $x\in\mathbb{R}\setminus [0,1]$}.
\end{cases}
\]
Then $F$ is continuous and Laplace differentiable on $\mathbb{R}$. If we denote $LD_{1}F$ by $\Phi$, then we get
\[
\Phi(x)=
\begin{cases}
\phi(x)\quad& \text{if $x\in [0,1]$}\\
\,\,\,\,0\quad& \text{if $x\in\mathbb{R}\setminus [0,1]$}.
\end{cases}
\]
By Theorem 4.3 of \cite{mahanta2021generalised} and Definition \ref{Laplace integral}, we get that $\Phi\in \mathcal{LP}(\mathbb{R})\setminus\mathcal{HK}(\mathbb{R})$.
\end{example}
\begin{theorem}[{\bf Cauchy criterion}]\label{Cauchy criterion}
Let $I=[a,\infty]$ and let $f:I\to\mathbb{R}$ be such that $f\in\mathcal{LP}[a,c]$ for all $c\geqslant a$. Then $f\in\mathcal{LP}(I)$ if and only if for any $\epsilon>0$ there is a $K(\epsilon)\geqslant a$ such that $q>p\geqslant K(\epsilon)$ implies $|\int_{p}^{q}f|\leqslant \epsilon$.
\end{theorem}
\begin{theorem}
Let $I=[a,\infty]$. Then
\begin{enumerate}[\upshape (a)]
\item if $f,g \in \mathcal{LP}(I)$ and $c\in\mathbb{R}$, then $cf+g\in \mathcal{LP}(I)$ and $\int_{I}(cf + g)=c\int_{I}f + \int_{I}g$.
\item if $f,g \in \mathcal{LP}(I)$ and $f\leqslant g$, then $\int_{I}f\leqslant\int_{I}g$.
\item if $c\in I$ and $f \in \mathcal{LP}(I)$, then $\int_{a}^{\infty}f=\int_{a}^{c}f+\int_{c}^{\infty}f$.
\end{enumerate}
\end{theorem}
\begin{theorem}[{\bf Fundamental theorem of calculus}]\label{Fundmental}
Let $I=[a,\infty]$ and let $f:I\to\mathbb{R}$. If $f\in \mathcal{LP}(I)$ and $F(x)=\int_{a}^{x}f$, then $LD_{1}F=f$ a.e. on $I$.
\end{theorem}
\begin{proof}
Let $I_{0}=[a,a+1)$ and $I_{n}=[a+n, a+n+1)$, $n\in \mathbb{N}$. Then $I=\bigcup_{n=0}^{\infty}I_{n}$. Now apply Theorem 5.5 of \cite{mahanta2021generalised} on each $I_{n}$.
\end{proof}
\begin{theorem}[{\bf Du Bois-Reymond's}]
Let $I=[a,\infty]$, let $f:I\to\mathbb{R}$ and let $g:I\to\mathbb{R}$. If
\begin{enumerate}[\upshape (a)]
\item $f\in \mathcal{LP}[a,c]$ for all $c\geqslant a$ and $F(x)=\int_{a}^{x}f$ is bounded on $[a,\infty]$,
\item $g\in L^{1}(I)\cap\mathcal{BV}(I)$ and $G(x)=\int_{a}^{x}g$ for all $x\in I$,
\item $\lim\limits_{x\rightarrow \infty}F(x)G(x)$ exists,
\end{enumerate}
then $fG\in \mathcal{LP}(I)$.
\end{theorem}
\begin{proof}
As $F$ is bounded, $Fg\in L^{1}(I)$, which implies that $\lim_{x\rightarrow\infty}\int_{a}^{x}Fg$ exists and equals $\int_{a}^{\infty}Fg$. Let $x\in[a,\infty)$; then by Theorem 6.1 of \cite{mahanta2021generalised}, we have
\begin{equation}\label{Eq Du Bois-Reymond's}
\int_{a}^{x}fG=F(x)G(x)-\int_{a}^{x}Fg.
\end{equation}
Now applying the last assumption to \eqref{Eq Du Bois-Reymond's}, we get $fG\in \mathcal{LP}(I)$.
\end{proof}
\begin{corollary}[\bf Integration by parts]\label{Int by parts}
Let $I=[a,\infty]$, let $f\in \mathcal{LP}(I)$, let $g\in L^{1}(I)\cap\mathcal{BV}(I)$ and let $G(x)=\int_{a}^{x}g$, $x\in I$. Then $fG\in \mathcal{LP}(I)$ and
\begin{equation*}
\int_{a}^{\infty}fG=\lim\limits_{x\rightarrow\infty}[F(x)G(x)]-\int_{a}^{\infty}Fg,
\end{equation*}
where $F(x)=\int_{a}^{x}f$.
\end{corollary}
\begin{lemma}\label{Holder}
Let $[a,b]\subseteq \mathbb{R}$, let $f\in \mathcal{LP}(\mathbb{R})$ and let $g\in \mathcal{BV}(\mathbb{R})$. If $G(x)=\int_{a}^{x}g$, then
\begin{equation*}
\left|\int_{a}^{b}fG\right|\leqslant \left|\int_{a}^{b}f\right|\inf\limits_{x\in[a,b]} |G(x)| + \|f\|_{[a,b]}V_{[a,b]}[G]\leqslant \left[\inf\limits_{x\in[a,b]} |G(x)| + V_{[a,b]}[G]\right]\|f\|_{[a,b]}.
\end{equation*}
\end{lemma}
\noindent The proof is similar to that of Lemma 24 of \cite{HKFT}.
\begin{theorem}\label{sequence of integrals}
Let $I=[a,\infty]$, let $f\in\mathcal{LP}(I)$, let $(g_{n})$ be a sequence in $L^{1}(I)\cap\mathcal{BV}(I)$ and let $G_{n}(x)=\int_{a}^{x}g_{n}$. If $(g_{n})$ is of uniform bounded variation, then
\[
\lim\limits_{n\rightarrow \infty}\int_{I} fG_{n}=\int_{I} f\left(\lim\limits_{n\rightarrow \infty} G_{n}\right).
\]
\end{theorem}
\noindent For the proof see Theorem 7.2 of \cite{mahanta2021generalised}.
\begin{lemma}\label{HK-LP}
Let $I=[a,\infty]$, where $a\in\mathbb{R}$. Then $\mathcal{LP}(I)\cap\mathcal{BV}(I)=\mathcal{HK}(I)\cap\mathcal{BV}(I)$.
\end{lemma}
\begin{proof}
It is enough to prove that $f\in \mathcal{LP}(I)\cap\mathcal{BV}(I)$ implies $f\in\mathcal{HK}(I)$. Note that $f\in \mathcal{LP}(I)\cap\mathcal{BV}(I)$ implies $f\in \mathcal{LP}([a,b])\cap\mathcal{BV}([a,b])$ for $a<b<\infty$; in particular, $f$ is bounded on $[a,b]$. As bounded Laplace integrable functions on finite intervals are Lebesgue integrable (see Corollary 5.2 of \cite{mahanta2021generalised}), $f\in \mathcal{HK}([a,b])$. Now, as
\begin{equation*}
\lim\limits_{b\rightarrow\infty}\,\,\,(\mathcal{HK})\int_{a}^{b}f=\lim\limits_{b\rightarrow\infty}\,\int_{a}^{b}f=\int_{a}^{\infty}f,
\end{equation*}
Hake's theorem (Theorem 12.8 of \cite[p.~291]{AMTOIntegration}) implies $f\in\mathcal{HK}(I)$.
\end{proof}
\section{A necessary and sufficient condition of Laplace differentiation under the integral sign}
One may notice that almost the whole paper \cite{HKFT} depends on Lemma 25 of \cite{HKFT}. Furthermore, Theorem 4 of \cite{talvila2001necessary} has a crucial role in proving that lemma. In this section, we also establish results similar to Theorem 4 of \cite{talvila2001necessary}. However, before that, we need to define the concept of a $\mathcal{LP}$-primitive.
\begin{definition}[{\bf $\mathcal{LP}$-primitive}]
Let $I=[a,b]\subseteq \overline{\mathbb{R}}$ and let $F\colon I\to \mathbb{R}$ be continuous. We say $F$ is a $\mathcal{LP}$-primitive on $I$ if $LD_{1}F$ exists a.e. on $I$, $LD_{1}F\in \mathcal{LP}(I)$ and $\int_{\alpha}^{\beta}LD_{1}F=F(\beta)-F(\alpha)$ for $\alpha, \beta\in I$.
\end{definition}
\begin{theorem}\label{Nec-Suff Prop}
Let $I=[\alpha,\beta]\times[a,b]\subseteq \overline{\mathbb{R}}\times\overline{\mathbb{R}}$ and let $f\colon I\to \mathbb{R}$. Suppose $f(\cdot\,,y)$ is a $\mathcal{LP}$-primitive on $[\alpha,\beta]$ for a.e. $y\in (a,b)$. Then $F(x)=\int_{a}^{b}f(x,y)\,dy$ is a $\mathcal{LP}$-primitive and $LD_{1}F(x)=\int_{a}^{b}LD_{1x}f(x,y)\,dy$ for almost every $x\in (\alpha, \beta)$ if and only if
\begin{equation}\label{interchange-integral}
\int_{s}^{t}\int_{a}^{b}LD_{1x}f(x,y)\,dydx=\int_{a}^{b}\int_{s}^{t}LD_{1x}f(x,y)\,dxdy \qquad \text{for all $s,t \in [\alpha, \beta]$.}
\end{equation}
\end{theorem}
\begin{proof}
Let $F$ be a $\mathcal{LP}$-primitive and $LD_{1}F(x)=\int_{a}^{b}LD_{1x}f(x,y)\,dy$ for almost every $x\in (\alpha, \beta)$. Let $s,t\in [\alpha, \beta]$. Then
\begin{align*}
\int_{s}^{t}\int_{a}^{b}LD_{1x}f(x,y)\,dydx&= F(t)-F(s)\\
&=\int_{a}^{b}[f(t,y)-f(s,y)]\,dy=\int_{a}^{b}\int_{s}^{t}LD_{1x}f(x,y)\,dxdy.
\end{align*}
Conversely, let us assume \eqref{interchange-integral} holds. Let $x_{0}\in (\alpha, \beta)$ be fixed. Then for $x\in (\alpha, \beta)$, we get
\begin{align*}
\int_{x_{0}}^{x}\int_{a}^{b}LD_{1t}f(t,y)\,dydt&=\int_{a}^{b}\int_{x_{0}}^{x}LD_{1t}f(t,y)\,dtdy\\
&=\int_{a}^{b}[f(x,y)-f(x_{0},y)]\,dy=F(x)-F(x_{0}).
\end{align*}
Hence by Theorem \ref{Fundmental}, we get $F$ is a $\mathcal{LP}$-primitive and $LD_{1}F(x)=\int_{a}^{b}LD_{1x}f(x,y)\,dy$ for almost every $x\in (\alpha, \beta)$.
\end{proof}
\begin{corollary}\label{Nec-Suff Cor}
Let $I=[\alpha,\beta]\times[a,b]\subseteq \overline{\mathbb{R}}\times\overline{\mathbb{R}}$ and let $g\colon I\to \mathbb{R}$. Suppose $g(\cdot\,,y)\in \mathcal{LP}[\alpha,\beta]$ for a.e. $y\in (a,b)$. Define $G(x)=\int_{a}^{b}\int_{\alpha}^{x}g(t,y)\,dtdy$. Then $G$ is a $\mathcal{LP}$-primitive and $LD_{1}G(x)=\int_{a}^{b}g(x,y)\,dy$ for a.e. $x\in (\alpha,\beta)$ if and only if $\int_{s}^{t}\int_{a}^{b}g(x,y)\,dydx=\int_{a}^{b}\int_{s}^{t}g(x,y)\,dxdy$ for $[s,t]\subseteq[\alpha,\beta]$.
\end{corollary}
\section{Convolution}
Let $f\colon\mathbb{R}\to\mathbb{R}$ and $g\colon\mathbb{R}\to\mathbb{R}$. Then the convolution $f*g$ of $f$ and $g$ is defined by
\begin{equation}\label{convolution}
f*g(x)=\int\limits_{\mathbb{R}}f(x-y)g(y)\,dy\qquad\text{for all $x\in\mathbb{R}$,}
\end{equation}
provided the integral exists. The basic properties of convolution are well known when the integral is the Lebesgue integral or the Henstock-Kurzweil integral (see Section 3 of \cite{HKFT}). Here, we discuss similar properties when the integral is the Laplace integral. However, before that, we need to prove the following lemmas.
\begin{lemma}\label{transformation}
Let $f\in\mathcal{LP}[a,\infty]$. Then $\int_{a}^{\infty}f(x)\,dx=\int_{a-y}^{\infty}f(x+y)\,dx$.
\end{lemma}
\begin{proof}
Let $\epsilon>0$ be given. Let $U$ and $V$ be respectively major and minor functions of $f$ on $[a,b]$ with $V(a)=U(a)=0$ and $0\leqslant U(b)-V(b)\leqslant \epsilon$, where $a<b<\infty$. Then it is quite obvious that $U\circ\phi$ and $V\circ\phi$ are respectively major and minor functions of $f\circ\phi$ on $[a-y,b-y]$ with $V\circ\phi(a-y)=U\circ\phi(a-y)=0$ and $0\leqslant U\circ\phi(b-y)-V\circ\phi(b-y)\leqslant \epsilon$, where $\phi(x)=x+y$ for all $x\in [a-y,b-y]$. Thus if $F(x)=\int_{a}^{x}f(t)\,dt$ for $x\in[a,b]$, then $F\circ\phi(x)=\int_{a-y}^{x}f\circ\phi(t)\,dt$ for $x\in[a-y,b-y]$ which implies
\begin{equation*}
\int_{a}^{b}f(t)\,dt=F(b)=F\circ\phi(b-y)=\int_{a-y}^{b-y}f(t+y)\,dt.
\end{equation*}
Now letting $b\rightarrow \infty$, we get the desired result.
\end{proof}
\begin{lemma}\label{most useful lemma}
Let $f\in \mathcal{LP}(\mathbb{R})$, let $G\colon\mathbb{R}^{2}\to\mathbb{R}$ and let $\mathfrak{I}$ be the collection of all open intervals of $\mathbb{R}$. Moreover, let $\partial_{x}G(x,y)=g(x,y)$. Now define the iterated integrals
\begin{align*}
&\mathrm{I}_{1}(A,B)=\int\limits_{y\in B}\int\limits_{x\in A}f(x)G(x,y)\,dxdy,\\
&\mathrm{I}_{2}(A,B)=\int\limits_{x\in A}\int\limits_{y\in B}f(x)G(x,y)\,dydx,
\end{align*}
where $(A,B)\in\mathfrak{I}\times\mathfrak{I}$. If $A\in\mathfrak{I}$ is bounded and if
\begin{enumerate}[\upshape (a)]
\item $V_{O}[g(\cdot\,,y)]\in L^{1}(B)$, for any interval $O\subseteq A$,
\item $\left|g(x,y)\right|\leqslant \eta_{J}(y)$ and $\left|G(x,y)\right|\leqslant \kappa_{J}(y)$ for a.e. $(x,y)\in J\times B$, where $J$ is any interval in $A$ and $\eta_{J},\,\kappa_{J}\in L^{1}(B)$,
\end{enumerate}
then $\mathrm{I}_{1}(A,B)$ exists, and $\mathrm{I}_{1}(A,B)=\mathrm{I}_{2}(A,B)$. In addition, if $\mathrm{I}_{2}(\mathbb{R},B)$ exists, then $\mathrm{I}_{1}(\mathbb{R},B)=\mathrm{I}_{2}(\mathbb{R},B)$.
\end{lemma}
\begin{proof}
Let $J$ be any bounded interval and let $F(x)=\int_{-\infty}^{x}f$. Then by the second condition of this Lemma, we get $\int_{J}\int_{-\infty}^{\infty}\left|F(x)g(x,y)\right|\,dydx<\infty$, proving that $Fg\in L^{1}(J\times\mathbb{R})$ for any bounded interval $J$. So for $-\infty<a< t< b<\infty$ and $(\alpha,\beta)\subseteq\mathbb{R}$, we get
\begin{equation}\label{Fubini's theorem}
\int_{\alpha}^{\beta}\int_{a}^{t}F(x)g(x,y)\,dxdy=\int_{a}^{t}\int_{\alpha}^{\beta}F(x)g(x,y)\,dydx.
\end{equation}
Let $((a,b),(\alpha,\beta))\in \mathfrak{I}\times\mathfrak{I}$, where $(a,b)$ is bounded. For $t\in (a,b)$, define
\[
H_{a}(t)=\int_{\alpha}^{\beta}\int_{a}^{t}f(x)G(x,y)\,dxdy.
\]
Then applying integration by parts and \eqref{Fubini's theorem}, we get
\begin{equation}\label{finiteness of H_{a}}
H_{a}(t)=F(t)\int_{\alpha}^{\beta}G(t,y)\,dy-\int_{a}^{t}\int_{\alpha}^{\beta}F(x)g(x,y)\,dydx.
\end{equation}
Applying the second condition on \eqref{finiteness of H_{a}}, it can be proved that $H_{a}(t)$ is continuous on $(a,b)$. Let $\phi(x)=\int_{\alpha}^{\beta}G(x,y)\,dy$. Then on any bounded interval $J$, $\phi$ is absolutely continuous. Furthermore, as $\left|\partial_{x}G(x,y)\right|=\left|g(x,y)\right|\leqslant \eta_{J}(y)\in L^{1}((\alpha, \beta))$, we get $\phi^{'}(x)=\int_{\alpha}^{\beta}g(x,y)\,dy$. Thus by Corollary 6.1 of \cite{mahanta2021generalised}, we have $LD_{1}H_{a}(t)=f(t)\int_{\alpha}^{\beta}G(t,y)\,dy=f(t)\phi(t)$ for a.e. $t\in(a,b)$. Let $P:=\{a=x_{0},x_{1},...,x_{n}=b\}$ be any partition of $[a,b]$. Then by the first condition, we get
\[
\sum_{i=0}^{n-1}\left| \phi^{'}(x_{i+1})-\phi^{'}(x_{i})\right|\leqslant \int_{\alpha}^{\beta}\sum_{i=0}^{n-1}\left| g(x_{i+1},y)-g(x_{i},y)\right|\,dy\leqslant \int_{\alpha}^{\beta}V_{[a,b]}[g(\cdot\,,y)]\,dy<\infty,
\]
proving that $\phi^{'}\in \mathcal{BV}[a,b]$. Thus $f\phi\in \mathcal{LP}[a,b]$. Moreover, integrating $f\phi$, we can prove that $H_{a}$ is one of its primitives. Now Theorem \ref{Nec-Suff Prop} implies that $\mathrm{I}_{1}(J, (\alpha,\beta))=\mathrm{I}_{2}(J, (\alpha,\beta))$, where $(J, (\alpha,\beta))\in\mathfrak{I}\times\mathfrak{I}$ and $J$ is bounded.
\par As it is assumed that $I_{2}(\mathbb{R},B)$ exists, for $a\in \mathbb{R}$, we have
\begin{align}\label{H(t)}
\begin{split}
\int\limits_{[a,\infty]}\int\limits_{B}f(x)G(x,y)\,dydx&=\lim\limits_{t\rightarrow \infty}\int\limits_{[a,t]}\int\limits_{B}f(x)G(x,y)\,dydx\\
&=\lim\limits_{t\rightarrow \infty}\int\limits_{B}\int\limits_{[a,t]}f(x)G(x,y)\,dxdy.
\end{split}
\end{align}
Thus $\lim\limits_{t\rightarrow \infty}H_{a}(t)$ exists. Define
\begin{align}\label{H*(t)}
H^{*}_{a}(t)=
\begin{cases}
H_{a}(t)&\text{if $a\leqslant t<\infty$,}\\
\lim\limits_{x\rightarrow \infty}H_{a}(x)&\text{if $t=\infty$}.
\end{cases}
\end{align}
Then $H^{*}_{a}$ is continuous on $[a,\infty]$ and
\begin{align*}
LD_{1}H^{*}_{a}(t)=LD_{1}H_{a}(t)=\int_{B}f(t)G(t,y)\,dy\qquad\text{for a.e. $t\in\mathbb{R}$.}
\end{align*}
Moreover, the existence of $I_{2}(\mathbb{R},B)$ implies that $\int_{a}^{\infty}LD_{1}H^{*}_{a}(t)\,dt$ exists. Now, as $H_{a}$ is a $\mathcal{LP}$-primitive on every bounded interval in $\mathbb{R}$, applying \eqref{H(t)} and \eqref{H*(t)} we have
\[
\int_{\alpha}^{\beta}LD_{1}H^{*}_{a}(t)\,dt= H^{*}_{a}(\beta) - H^{*}_{a}(\alpha)\qquad\text{for all $\alpha, \beta\in [a,\infty]$}
\]
which implies that $H^{*}_{a}$ is a $\mathcal{LP}$-primitive. Therefore, Corollary \ref{Nec-Suff Cor} implies that
\[
I_{1}((a,\infty),B)=I_{2}((a,\infty),B).
\]
Similarly, we can prove that $I_{1}((-\infty,a),B)=I_{2}((-\infty,a),B)$, and this completes the proof.
\end{proof}
\begin{theorem}
Let $f\colon\mathbb{R}\to\mathbb{R}$ and $g\colon\mathbb{R}\to\mathbb{R}$. Then
\begin{enumerate}[\upshape (a)]
\item $f*g=g*f$, provided \eqref{convolution} exists.
\item if $f\in\mathcal{LP}(\mathbb{R})$, $h\in L^{1}(\mathbb{R})$ and $g''\in L^{1}(\mathbb{R})$, then $(f*g)*h=f*(g*h)$.
\item for $z\in\mathbb{R}$, $\tau_{z}(f*g)=(\tau_{z}f)*g=f*(\tau_{z}g)$, where $\tau_{z}f(x)=f(x-z)$.
\item if $A=\{ x+y\mid x\in supp(f),\,\, y\in supp(g) \}$, then $supp(f*g)\subseteq\overline{A}$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof of (a) is a straightforward consequence of Lemma \ref{transformation}, and the proofs of (c) and (d) are the same as in the case of the Lebesgue Fourier transform. So we give the proof of (b) only.
\par Applying (a), we have
\begin{equation*}
(f*g)*h(x)=\int\int f(y)g(x-y-z)h(z)\,dydz.
\end{equation*}
Let $G^{x}(y,z)=g(x-y-z)h(z)$. As $g'$ is bounded and $g''$, $h$ are Lebesgue integrable on $\mathbb{R}$, $G^{x}(y,z)$ satisfies all conditions of Lemma \ref{most useful lemma}. Thus
\begin{align*}
(f*g)*h(x)&=\int\int f(y)g(x-y-z)h(z)\,dzdy\\
&=\int f(y)\,g*h(x-y)\,dy= f*(g*h)(x). \qedhere
\end{align*}
\end{proof}
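Part (a) can also be checked informally by numerical quadrature (helper names \texttt{convolve\_at}, \texttt{gauss} and all parameters are ours): for Gaussian densities the computed values of $f*g$ and $g*f$ agree, and both match the classical fact that normal densities convolve to a normal density with added variances.

```python
import math

def convolve_at(f, g, x, lo=-10.0, hi=10.0, n=100_000):
    # Midpoint-rule approximation of (f*g)(x) = \int f(x-y) g(y) dy; the
    # window [lo, hi] is wide enough that the Gaussian tails are negligible.
    h = (hi - lo) / n
    return sum(f(x - (lo + (i + 0.5) * h)) * g(lo + (i + 0.5) * h)
               for i in range(n)) * h

def gauss(x, var):
    # Density of the normal distribution N(0, var) at x.
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

f = lambda t: gauss(t, 1.0)          # density of N(0, 1)
g = lambda t: gauss(t - 0.4, 2.0)    # density of N(0.4, 2), so f and g differ

fg = convolve_at(f, g, 0.5)
gf = convolve_at(g, f, 0.5)
# N(0,1) * N(0.4,2) = N(0.4,3), so both values should equal gauss(0.1, 3).
expected = gauss(0.5 - 0.4, 3.0)
```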
\begin{theorem}\label{Four-Conv-Lem}
Let $f\in\mathcal{LP}(\mathbb{R})$, let $g\in L^{1}(\mathbb{R})\cap\mathcal{BV}(\mathbb{R})$, let $F(x)=\int_{-\infty}^{x}f$ and let $G(x)=\int_{-\infty}^{x}g$. If $F$ and $G$ are Lebesgue integrable and $G(\infty)=\lim_{x\rightarrow\infty}G(x)=0$, then
\[
\int_{-\infty}^{t}\int_{-\infty}^{\infty}f(x-y)G(y)\,dydx=\int_{-\infty}^{\infty}\int_{-\infty}^{t}f(x-y)G(y)\,dxdy.
\]
Moreover, we have
\begin{align}\label{conv-norm}
\begin{split}
&\|f*G\|_{1}\leqslant \|F\|_{1}\|g\|_{1},\\
&\|f*G\|_{\mathcal{A}}\leqslant \|f\|_{\mathcal{A}}\|G\|_{1},
\end{split}
\end{align}
where $\|f\|_{\mathcal{A}}=\sup_{x\in \mathbb{R}}\left|\int_{-\infty}^{x}f\right|$ is the Alexiewicz norm (see \cite{LFDIF}) of $f$.
\end{theorem}
\begin{proof}
Let $h(x)=\int_{-\infty}^{\infty}f(x-y)g(y)\,dy$ for $x\in\mathbb{R}$. Then using integration by parts, we have
\begin{equation}\label{conv-eq1}
h(x)=\int_{-\infty}^{\infty}F(x-y)g(y)\,dy=F*g(x).
\end{equation}
Now as both $F$ and $g$ are Lebesgue integrable, using Fubini's theorem and integration by parts, we have
\begin{align}\label{conv-eq2}
\begin{split}
\int_{-\infty}^{t}h(x)\,dx&=\int_{-\infty}^{\infty}\left(\int_{-\infty}^{t}F(x-y)\,dx\right)g(y)\,dy\\
&=\int_{-\infty}^{\infty}F(t-y)G(y)\,dy=\int_{-\infty}^{\infty}\int_{-\infty}^{t}f(x-y)G(y)\,dxdy
\end{split}
\end{align}
which implies $\int_{-\infty}^{t}\int_{-\infty}^{\infty}f(x-y)G(y)\,dydx=\int_{-\infty}^{\infty}\int_{-\infty}^{t}f(x-y)G(y)\,dxdy$. From \eqref{conv-eq1} and \eqref{conv-eq2}, we shall get \eqref{conv-norm}.
\end{proof}
\section{Fourier transform}
Let $f\colon\mathbb{R}\to \mathbb{R}$. Then the Fourier transform of $f$ is defined by
\begin{equation}\label{Fourier Transform}
\widehat{f}(y)=\int_{-\infty}^{\infty}f(x)e^{-2\pi iyx}\,dx,
\end{equation}
provided the integral exists for $y\in \mathbb{R}$.
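As an informal numerical check of \eqref{Fourier Transform} (helper name \texttt{fourier\_transform} and all parameters are ours): with this $2\pi$ convention the Gaussian $e^{-\pi x^{2}}$ is its own Fourier transform, so the computed transform at $y$ should equal $e^{-\pi y^{2}}$ with vanishing imaginary part.

```python
import math

def fourier_transform(f, y, lo=-8.0, hi=8.0, n=200_000):
    # Midpoint-rule approximation of \hat f(y) = \int f(x) e^{-2 pi i y x} dx,
    # returned as (real, imaginary) parts; the tails beyond [lo, hi] are
    # negligible for the rapidly decaying f used below.
    h = (hi - lo) / n
    re = im = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        phase = 2.0 * math.pi * y * x
        re += f(x) * math.cos(phase)
        im -= f(x) * math.sin(phase)
    return re * h, im * h

# exp(-pi x^2) is self-dual under this convention: \hat f(y) = exp(-pi y^2).
f = lambda x: math.exp(-math.pi * x * x)
re, im = fourier_transform(f, 0.7)
```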
\par Now, if we denote $G_{y}(x)=e^{-2\pi iyx}$, then it is easy to verify that $G'_{y}$ is of bounded variation on finite intervals. Hence, if the integral in \eqref{Fourier Transform} exists, then $f\in \mathcal{LP}_{loc}(\mathbb{R})$, where by $\mathcal{LP}_{loc}(\mathbb{R})$ we mean the set of all locally Laplace integrable functions. Now we establish some basic properties of the Fourier transform in this general setting.
\begin{theorem}[\bf Existence theorems]\noindent
\begin{enumerate}[\upshape (a)]
\item If $f\in\mathcal{LP}_{loc}(\mathbb{R})$ and $f$ is Lebesgue integrable in a neighbourhood of infinity, then $\widehat{f}$ exists.
\item If $f\in \mathcal{LP}(\mathbb{R})\cap\mathcal{BV}(\pm\infty)$, then $\widehat{f}$ exists on $\mathbb{R}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[\upshape (a)]
\item Let $a\in (0,\infty)$ be such that $f\in L^{1}(\mathbb{R}\setminus(-a,a))$. Then $\int_{|x|\geqslant a}f(x)e^{-2\pi iyx}\,dx$ exists. Now, as $g_{y}(x)=(-2\pi iy)e^{-2\pi iyx}$ is of bounded variation on $[-a,a]$,
\[
\int_{-a}^{a}f(x)e^{-2\pi iyx}\,dx
\]
exists. Thus $\widehat{f}$ exists on $\mathbb{R}$.
\item It is a straightforward consequence of the previous part, Lemma \ref{HK-LP}, and Theorem 3.1 of \cite{mendoza2009some}.\qedhere
\end{enumerate}
\end{proof}
\noindent The above theorem implies that $\widehat{\Phi}$ (see Example \ref{TheOnleExample}) exists in our setting; however, $\widehat{\Phi}$ does not exist in the sense of the Henstock-Kurzweil Fourier transform, since $\Phi$ is not locally Henstock-Kurzweil integrable.
\begin{theorem}
Suppose $f,g\in \mathcal{LP}(\mathbb{R})$.
\begin{enumerate}[\upshape (a)]
\item If $\widehat{f}$ exists, then $\widehat{\tau_{\zeta}f}(y)=e^{-2\pi i\zeta y}\widehat{f}(y)$ and $\tau_{\eta}\widehat{f}=\widehat{h}$, where $h(x)=e^{2\pi i\eta x}f(x)$.
\item Let $\widehat{f}$ exist at $y\in\mathbb{R}$, let $g\in L^{1}(\mathbb{R})\cap\mathcal{BV}(\mathbb{R})$, let $G(x)=\int_{-\infty}^{x}|g|$ and let $F_{y}(x)=\int_{-\infty}^{x}e^{-2\pi iyt}f(t)\,dt$. If $F_{y}$ and $G$ are Lebesgue integrable and $G(\infty)=\lim_{x\rightarrow\infty}G(x)=0$, then
\[
\widehat{f*G}(y)=\widehat{f}(y)\widehat{G}(y).
\]
\item Let $x^{n}f\in \mathcal{LP}(\mathbb{R})$ and let $\widehat{x^{n}f}$ exist for $n=0,1$. Then $\dfrac{d\widehat{f}}{dy}=\widehat{h}$ a.e. on $\mathbb{R}$, where $h(x)=(-2\pi ix)f(x)$.
\item Let $f,f'\in \mathcal{LP}(\mathbb{R})$. If $\widehat{f}$ exists, then $\widehat{f'}$ exists and $\widehat{f'}(y)=(2\pi iy)\widehat{f}(y)$.
\item Let $f\in \mathcal{LP}(\mathbb{R})$. If
\begin{enumerate}[\upshape (i)]
\item $f$ has compact support, then $\widehat{f}$ is continuous on $\mathbb{R}$.
\item $f\in \mathcal{BV}(\pm\infty)$, then $\widehat{f}$ is continuous on $\mathbb{R}\setminus\{0\}$.
\end{enumerate}
\item {\upshape({\bf Riemann-Lebesgue Lemma})} If $f\in \mathcal{LP}(\mathbb{R})\cap\mathcal{BV}(\mathbb{R})$, then $\lim_{|t|\rightarrow\infty} \widehat{f}(t)=0$.
\end{enumerate}
\end{theorem}
\begin{proof} The proofs of (a) and (b) follow from Lemma \ref{transformation} and Theorem \ref{Four-Conv-Lem}, respectively, and the proof of (f) follows from Theorem 5.2 of \cite{mendoza2009some} and Lemma \ref{HK-LP}. So we shall prove the rest.
\begin{enumerate}[\upshape (a)]
\setcounter{enumi}{2}
\item Let $A=\mathbb{R}$ and $B=[n,n+1]$, $n\in\mathbb{Z}$. Let $h(x)=(-2\pi ix)f(x)$ and let $G(x,y)=e^{-2\pi ixy}$. Then
\begin{enumerate}[\upshape (i)]
\item $g(x,y)=\partial_{x}G(x,y)=(-2\pi iy)e^{-2\pi ixy}$,
\item $V_{[a,b]}[g(\cdot\,,y)]=2\pi (b-a)y\in L^{1}(B)$, for any compact interval $[a,b]\subseteq A$,
\item $\left|g(x,y)\right|=2\pi |y| \in L^{1}(B)$ and $\left|G(x,y)\right|=1 \in L^{1}(B)$ for $(x,y)\in A\times B$, and
\item $\mathrm{I}_{2}(A,B)=\int_{x\in A}\int_{y\in B}h(x)G(x,y)\,dydx=[\widehat{f}(n+1)-\widehat{f}(n)]\in \mathbb{C}$
\end{enumerate}
which implies $\mathrm{I}_{1}(A,B)=\int_{y\in B}\int_{x\in A}h(x)G(x,y)\,dxdy=\mathrm{I}_{2}(A,B)$ (see Lemma \ref{most useful lemma}). Hence, we get
\begin{equation}\label{derFourTr}
\int_{s}^{t}\int_{-\infty}^{\infty}h(x)G(x,y)\,dxdy=\int_{-\infty}^{\infty}\int_{s}^{t}h(x)G(x,y)\,dxdy,
\end{equation}
where $s,t\in [n,n+1]$. Now, as $\widehat{f}(y)=\int_{-\infty}^{\infty}H(x,y)\,dx$, where $H(x,y)=f(x)G(x,y)$, and $\partial_{y} H(x,y)=h(x)G(x,y)$, by \eqref{derFourTr} and Theorem \ref{Nec-Suff Prop}, we have $\dfrac{d\widehat{f}}{dy}=\widehat{h}$ a.e. on $[n,n+1]$ for all $n\in \mathbb{Z}$ and hence a.e. on $\mathbb{R}$.
\item As $f, f'$ are integrable, $\lim_{|x|\rightarrow \infty}f(x)=0$. For $[u,v]\subseteq \mathbb{R}$, we have
\begin{equation*}
\int_{u}^{v}f'(x)e^{-2\pi ixy}\,dx= \left[e^{-2\pi ixy}f(x)\right]_{u}^{v}+(2\pi iy)\int_{u}^{v}f(x)e^{-2\pi ixy}\,dx.
\end{equation*}
Now taking $u\rightarrow -\infty$ and $v\rightarrow \infty$, we arrive at our conclusion.
\item
\begin{enumerate}[\upshape (i)]
\item Let $a\in(0,\infty)$ be such that $f(x)=0$ for $|x|>a$. Let $y_{0}\in \mathbb{R}$ be arbitrary and let $I_{y_{0}}=[y_{0}-1,y_{0}+1]$. If we write $g_{y}(x)=e^{-2\pi iyx}$, then it is easy to verify that the set $\{g'_{y}\mid y\in I_{y_{0}}\}$ is of uniform variation. Hence, by Theorem 7.2 of \cite{mahanta2021generalised}, we get
\begin{align*}
\lim\limits_{y\rightarrow y_{0}}\widehat{f}(y)&=\lim\limits_{y\rightarrow y_{0}}\int_{-\infty}^{\infty}f(x)e^{-2\pi iyx}\,dx\\
&=\lim\limits_{y\rightarrow y_{0}}\int_{-a}^{a}f(x)e^{-2\pi iyx}\,dx=\widehat{f}(y_{0}).
\end{align*}
\item Let $a\in (0,\infty)$ be such that $f\in \mathcal{BV}(\mathbb{R}\setminus(-a,a))$. Note that $f=f_{1}+f_{2}+f_{3}$, where
\[
f_{1}=f\chi_{[-\infty,-a]},\qquad f_{2}=f\chi_{[-a,a]},\qquad f_{3}=f\chi_{[a,\infty]}.
\]
To prove $\widehat{f}$ is continuous on $\mathbb{R}\setminus\{0\}$ it is enough to show that $\widehat{f_{1}},\widehat{f_{3}}$ are continuous on $\mathbb{R}\setminus\{0\}$. Now Lemma \ref{HK-LP} implies $f_{1},f_{3}\in \mathcal{HK}(\mathbb{R})\cap\mathcal{BV}(\mathbb{R})$ and hence Theorem 4.2 of \cite{mendoza2009some} completes the proof.\qedhere
\end{enumerate}
\end{enumerate}
\end{proof}
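As an illustrative numerical sketch (not part of the proof), one can test property (c) on the Gaussian $f(x)=e^{-\pi x^{2}}$, whose classical Fourier transform is $e^{-\pi y^{2}}$; the plain Riemann-sum quadrature and the grid below are ad hoc choices.

```python
import numpy as np

# Sanity check of property (c) for f(x) = exp(-pi x^2), with
# Fourier transform exp(-pi y^2) and h(x) = (-2 pi i x) f(x).
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)
h = -2j * np.pi * x * f

def ft(values, y):
    # naive quadrature approximation of the Fourier transform at frequency y
    return (values * np.exp(-2j * np.pi * x * y)).sum() * dx

for y in (0.0, 0.5, 1.0):
    assert abs(ft(f, y) - np.exp(-np.pi * y**2)) < 1e-8
    # central difference approximation of (d/dy) f-hat at y
    eps = 1e-5
    deriv = (ft(f, y + eps) - ft(f, y - eps)) / (2 * eps)
    assert abs(deriv - ft(h, y)) < 1e-6
```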
\begin{lemma}\label{interchanging fourier cap}
Let $\psi$ and $\phi$ be real-valued functions on $\mathbb{R}$ and let $\widehat{\psi}$ exist a.e. on $\mathbb{R}$. If $\phi(y)$, $y\phi(y)$ and $y^{2}\phi(y)$ are Lebesgue integrable on $\mathbb{R}$ and if $\int_{-\infty}^{\infty}\psi\widehat{\phi}$ exists, then $\int_{-\infty}^{\infty}\psi\widehat{\phi}=\int_{-\infty}^{\infty}\widehat{\psi}\phi$.
\end{lemma}
\begin{proof}
Let $G(x,y)=\phi(y)e^{-2\pi iyx}$ and $g(x,y)=\partial_{x}G(x,y)$. Then
\begin{enumerate}[\upshape (a)]
\item $V_{[a,b]}[g(\cdot\,,y)]=\int_{a}^{b}\left|\partial_{x}g(x,y)\right|\,dx=\int_{a}^{b}4\pi^{2}y^{2}\left|\phi(y)\right|\,dx=4\pi^{2}(b-a)y^{2}\left|\phi(y)\right|$;
\item $\int_{-\infty}^{\infty}V_{[a,b]}[g(\cdot\,,y)]\,dy=4\pi^{2}(b-a)\int_{-\infty}^{\infty}y^{2}\left|\phi(y)\right|\,dy=O_{[a,b]}<\infty$;
\item $\left|g(x,y)\right|=2\pi\left|y\phi(y)\right|=\eta(y)\in L^{1}(\mathbb{R})$;
\item $\left|G(x,y)\right|=\left|\phi(y)\right|=\kappa(y)\in L^{1}(\mathbb{R})$.
\end{enumerate}
Therefore, by Lemma \ref{most useful lemma}, we get $\int_{-\infty}^{\infty}\psi\widehat{\phi}=\int_{-\infty}^{\infty}\widehat{\psi}\phi$.
\end{proof}
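The conclusion of the lemma can be illustrated numerically with two Gaussians, for which both sides evaluate in closed form to $2/\sqrt{5}$ (using the standard fact that the Fourier transform of $e^{-\pi a y^{2}}$ is $a^{-1/2}e^{-\pi x^{2}/a}$); the grid is an ad hoc choice.

```python
import numpy as np

t = np.linspace(-12, 12, 24001)
dt = t[1] - t[0]

psi     = np.exp(-np.pi * t**2)          # psi = Gaussian, psi-hat = psi
psi_hat = np.exp(-np.pi * t**2)
phi     = np.exp(-np.pi * t**2 / 4)      # phi(y) = exp(-pi y^2 / 4)
phi_hat = 2 * np.exp(-4 * np.pi * t**2)  # its classical Fourier transform

lhs = (psi * phi_hat).sum() * dt         # int psi * phi-hat
rhs = (psi_hat * phi).sum() * dt         # int psi-hat * phi
assert abs(lhs - rhs) < 1e-10
assert abs(lhs - 2 / np.sqrt(5)) < 1e-8  # closed-form value of both sides
```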
\noindent Let us denote
\begin{equation}
\widetilde{f}(x)=\int_{-\infty}^{\infty}\widehat{f}(t)e^{2\pi ixt}\,dt
\end{equation}
\begin{theorem}[{\bf The Inversion Theorem}]
Let $f\colon\mathbb{R}\to\mathbb{R}$ be such that $\widehat{f}$ exists almost everywhere on $\mathbb{R}$. Let $x_{0}\in\mathbb{R}$ be such that $\widetilde{f}$ exists at $x_{0}$. Then $f(x_{0})=\widetilde{f}(x_{0})$ provided
\begin{equation}\label{inversion condition 1}
\sup\limits_{x\in[-\delta,\delta]}\left|s\int_{-\delta}^{x}e^{-s|t|}\left(f(x_{0}+t)-f(x_{0})\right)\,dt\right|\rightarrow 0\qquad\text{as $s\rightarrow \infty$}
\end{equation}
for some $\delta>0$.
\end{theorem}
\begin{proof}
For simplicity, we assume $x_{0}=0$. As $e^{-t^{2}}$, $te^{-t^{2}}$ and $t^{2}e^{-t^{2}}$ belong to $L^{1}(\mathbb{R})$, by Lemma \ref{interchanging fourier cap}, we get
\[
\int_{-\infty}^{\infty}e^{-\lambda^{2}\pi t^{2}}\widehat{f}(t)\,dt=\int_{-\infty}^{\infty}\widehat{e^{-\lambda^{2}\pi t^{2}}}f(t)\,dt.
\]
Now as $G_{\lambda}(t)=\frac{d}{dt}(e^{-\lambda^{2}\pi t^{2}})\in L^{1}(\mathbb{R})\cap\mathcal{BV}(\mathbb{R})$ and the set $\{V_{\mathbb{R}}[G_{\lambda}]\mid 0\leqslant \lambda \leqslant 1\}$ is uniformly bounded on $\mathbb{R}$,
Theorem \ref{sequence of integrals} implies
\begin{equation}\label{Inversion eqn 1}
\lim\limits_{\lambda\rightarrow 0^{+}}\int_{-\infty}^{\infty}e^{-\lambda^{2}\pi t^{2}}\widehat{f}(t)\,dt=\int_{-\infty}^{\infty}\widehat{f}(t)\,dt=\widetilde{f}(0).
\end{equation}
Let $\delta>0$ be such that \eqref{inversion condition 1} holds.
Then
\begin{align*}
\int_{-\infty}^{\infty}\widehat{e^{-\lambda^{2}\pi t^{2}}}f(t)\,dt&=\int_{-\infty}^{\infty}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}f(t)\,dt\\
&=\int\limits_{\left|t\right|<\delta}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}f(t)\,dt + \int\limits_{\left|t\right|\geqslant\delta}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}f(t)\,dt.
\end{align*}
Let $I_{1}=\int_{\left|t\right|<\delta}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}f(t)\,dt$, $I_{2}=\int_{\left|t\right|\geqslant\delta}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}f(t)\,dt$ and $J_{\delta}=\mathbb{R}\setminus(-\delta, \delta)$. Then by simple calculation, we can prove that
\[
H_{\lambda}(t)=\frac{d}{dt}(\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}})\in L^{1}(\mathbb{R})\cap\mathcal{BV}(\mathbb{R})
\]
and the set $\{V_{J_{\delta}}[H_{\lambda}]\mid 0\leqslant \lambda \leqslant \epsilon\}$ is uniformly bounded, where $\epsilon\in (0,1)$ is sufficiently small. Thus again,
Theorem \ref{sequence of integrals} implies that $I_{2}\rightarrow 0$ as $\lambda\rightarrow 0^{+}$. Now
\begin{align*}
I_{1}&=f(0)\int_{-\delta}^{\delta}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}\,dt + \int_{-\delta}^{\delta}\lambda^{-1}e^{-t/\lambda}(f(t)-f(0))e^{-\pi t^{2}/\lambda^{2}\, +\, t/\lambda}\,dt\\
&=f(0)\int_{-\delta}^{\delta}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}\,dt + \int_{-\delta}^{\delta}g_{\lambda}(t)h_{\lambda}(t)\,dt,
\end{align*}
where $g_{\lambda}(t)=\lambda^{-1}e^{-t/\lambda}(f(t)-f(0))$ and $h_{\lambda}(t)=e^{-\pi t^{2}/\lambda^{2}\, +\, t/\lambda}$. By Lemma \ref{Holder}, we get
\begin{equation*}
\left| \int_{-\delta}^{\delta}g_{\lambda}(t)h_{\lambda}(t)\,dt \right|\leqslant\left[ \inf\limits_{[-\delta,\delta]}\left|h_{\lambda}(t)\right| + V_{[-\delta,\delta]}[h_{\lambda}] \right]\|g_{\lambda}\|_{[-\delta,\delta]}.
\end{equation*}
For sufficiently small $\lambda$, it can be proved that $V_{[-\delta,\delta]}[h_{\lambda}]$ is bounded independently of $\lambda$. Thus by \eqref{inversion condition 1}, we get
\begin{align}\label{Inversion eqn 2}
\begin{split}
\lim\limits_{\lambda\rightarrow 0^{+}}\int_{-\infty}^{\infty}&\widehat{e^{-\lambda^{2}\pi t^{2}}}f(t)\,dt=\lim\limits_{\lambda\rightarrow 0^{+}} I_{1}=\lim\limits_{\lambda\rightarrow 0^{+}}f(0)\int_{-\delta}^{\delta}\lambda^{-1}e^{-\pi t^{2}/\lambda^{2}}\,dt\\
&=f(0)\pi^{-1/2}\lim\limits_{\lambda\rightarrow 0^{+}}\int_{0}^{\frac{\pi\delta^{2}}{\lambda^{2}}}t^{-1/2}e^{-t}\,dt=f(0)\pi^{-1/2}\Gamma(1/2)=f(0).
\end{split}
\end{align}
Equating \eqref{Inversion eqn 1} and \eqref{Inversion eqn 2}, we get $f(0)=\int_{-\infty}^{\infty}\widehat{f}(t)\,dt=\widetilde{f}(0)$.
\end{proof}
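As a numerical sketch of the inversion formula (outside the proof), one can check $f(x_{0})=\widetilde{f}(x_{0})$ for the Gaussian $f(x)=e^{-\pi x^{2}}$, for which $\widehat{f}(t)=e^{-\pi t^{2}}$; quadrature grid and sample points are arbitrary choices.

```python
import numpy as np

t = np.linspace(-10, 10, 20001)
dt = t[1] - t[0]
f_hat = np.exp(-np.pi * t**2)   # Fourier transform of f(x) = exp(-pi x^2)

def f_tilde(x0):
    # naive quadrature for the inverse transform at x0
    return (f_hat * np.exp(2j * np.pi * x0 * t)).sum() * dt

for x0 in (0.0, 0.3, 1.0):
    assert abs(f_tilde(x0) - np.exp(-np.pi * x0**2)) < 1e-8
```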
\begin{corollary}
Let $f\colon\mathbb{R}\to\mathbb{R}$ be such that $\widehat{f}=0$ a.e. on $\mathbb{R}$. Then $f=0$ a.e. on $\mathbb{R}$.
\end{corollary}
\begin{proof}
Let $I_{n}=[n,n+1]$ for $n\in \mathbb{Z}$. Then $f\in \mathcal{LP}(I_{n})$ for all $n\in \mathbb{Z}$. Now, Corollary 6.2 of \cite{mahanta2021generalised} implies that $f$ is Laplace continuous a.e. on $I_{n}$ and hence \eqref{inversion condition 1} is satisfied a.e. on $I_{n}$ for all $n\in \mathbb{Z}$. Since $\widehat{f}=0$ a.e. on $\mathbb{R}$, we obtain $f=0$ a.e. on $I_{n}$ for all $n\in \mathbb{Z}$ which completes the proof.
\end{proof}
\section{Conclusions}
The definition of the Laplace integral depends on a generalised derivative called the Laplace derivative. If it is possible to define the total Laplace derivative on $\mathbb{R}^{n}$ ($n\geqslant 2$) and to establish its interrelations with the partial Laplace derivatives, then the Fourier transform in our setting may find applications to generalised PDEs, i.e., PDEs formulated with partial Laplace derivatives, which, we hope, will be an interesting problem to pursue.
\noindent {\bf Acknowledgements}. This research work of the first author is supported by a UGC fellowship of India (Serial no.- 2061641179, Ref. no.- 19/06/2016(i) EU-V and Roll no.- 424175 under the UGC scheme).
\end{document} |
\begin{document}
\title{A Gentle Introduction to a Beautiful Theorem of Molien}
\begin{abstract} The purpose of this note is to give an accessible proof of Molien's Theorem in Invariant Theory, in the language of today's Linear Algebra and Group Theory, in order to prevent this beautiful theorem from being forgotten. \end{abstract}
\tableofcontents
\section*{Introduction}\label{sintro}
We present some memories of a visit to the ring zoo in 2004. This time we met an animal looking like a unicorn, known by the name of invariant theory. It is rare, old, and very beautiful. The purpose of this note is to give an almost self-contained introduction to, and clarify the proof of, the amazing theorem of Molien, as presented in \cite{Sloane1}. An introduction into this area, and much more, is contained in \cite{Sturmfels}. There are many very short proofs of this theorem, for instance in \cite{Stanley}, \cite{Hu2}, and \cite{Tambour}. \par
Informally, Molien's Theorem is a power series generating function formula for the dimensions of the spaces of homogeneous polynomials of a given degree which are invariant under the action of a finite group acting on the variables. As an appetizer, we display this stunning formula: $$
\Phi_G(\lambda) := \frac{1}{|G|} \sum_{g\in G} \frac{1}{\det(\mathrm{id} - \lambda T_g)} $$ We can immediately see elements of linear algebra, representation theory, and enumerative combinatorics in it, all linked together. The paper \cite{Sloane1} nicely shows how this method can be applied in Coding Theory. For Coding Theory in general, see \cite{jubi}.\par
Before we can formulate the Theorem, we need to set the stage by looking at some Linear Algebra (see \cite{Roman}), Group Theory (see \cite{Hu}), and Representation Theory (see \cite{Sagan} and \cite{Tambour}).
\section{Preliminaries}\label{spreliminaries}
Let $V\cong {\mathbf{C}}^n$ be a finite dimensional complex inner product space with orthonormal basis $\mathcal{B} = (\mathbf{e}_1,\dots,\mathbf{e}_n)$ and let $\mathbf{x} = (x_1,\dots,x_n)$ be the orthonormal basis of the algebraic dual space $V^\ast$ satisfying $\forall 1\le i,j \le n : x_i(\mathbf{e}_j) = \delta_{ij}$. Let $G$ be a finite group acting unitarily and linearly on $V$ from the left, that is, for every $g\in G$ the mapping $V \to V, \mathbf{v} \mapsto g. \mathbf{v}$ is a unitary bijective linear transformation. Using coordinates, this can be expressed as $[g. \mathbf{v}]_\mathcal{B} = [g]_{\mathcal{B},\mathcal{B}} [\mathbf{v}]_\mathcal{B}$, where $[g]_{\mathcal{B},\mathcal{B}}$ is unitary. Thus, the action is a unitary representation of $G$, or in other words, a $G$--module. Note that we are using \inx{left composition} and column vectors, i.e. $\mathbf{v} = (v_1, \dots , v_n) \istmit{convention} [v_1 \, v_2 \, \dots \, v_n]^\top$, cf.~\cite{Anton}. \par
The elements of $V^\ast$ are \inx{linear forms} (linear functionals), and the elements $x_1,\dots, x_n$, looking like variables, are also linear forms; this will be important later. \par Thinking of $x_1,\dots, x_n$ as variables, we may view (see \cite{Tambour}) $S(V^\ast)$, the \kwd{symmetric algebra} on $V^\ast$ as the algebra $R := {\mathbf{C}}[\mathbf{x}] := {\mathbf{C}}[x_1,\dots, x_n] $ of polynomial functions $V\to {\mathbf{C}}$ or polynomials in these variables (linear forms). It is naturally graded by degree as $R = \bigoplus_{d \in {\mathbf{N}}} R_d$, where $R_d$ is the vector space spanned by the polynomials of (total) degree $d$, in particular, $R_0 = {\mathbf{C}}$, and $R_1 = V^\ast$.\par
The action of $G$ on $V$ can be lifted to an action on $R$.
\begin{proposition}\label{pindact} Let $V$, $G$, $R$ as above. Then the mapping $. : G \times R \to R, (g,f) \mapsto g. f$ defined by $(g. f) (\mathbf{v}) := f(g^{-1} . \mathbf{v})$ for $\mathbf{v} \in V$ is a left action. \end{proposition} \begin{proof} For $\mathbf{v} \in V$, $g,h \in G$, and $f \in R$ we check \begin{enumerate}
\item $(1. f)(\mathbf{v}) = f(1^{-1}. \mathbf{v}) = f(1. \mathbf{v}) = f(\mathbf{v})$
\item \begin{multline*}
((hg). f)(\mathbf{v}) = f((hg)^{-1} . \mathbf{v}) =
f((g^{-1} h^{-1}). \mathbf{v}) =\\ f (g^{-1} . (h^{-1} . \mathbf{v}))
= (g. f)(h^{-1} . \mathbf{v}) = (h. (g. f))(\mathbf{v})
\end{multline*} \end{enumerate} \end{proof}
In fact, we know more. \begin{proposition}\label{pindact2} Let $V$, $G$, $R$ as above. For every $g \in G$, the mapping $T_g: R \to R, f \mapsto g. f$ is an algebra automorphism preserving the grading, i.e. $g.R_d \subset R_d$ (here we do not bother about surjectivity). \end{proposition} \begin{proof} For $\mathbf{v} \in V$, $g\in G$, $c\in {\mathbf{C}}$, and $f,f' \in R$ we check \begin{enumerate} \item \begin{multline*}
(g. (f+f'))(\mathbf{v}) =
(f+f')(g^{-1} . \mathbf{v}) =
f(g^{-1} . \mathbf{v}) + f'(g^{-1} . \mathbf{v}) =\\
(g. f)(\mathbf{v}) + (g. f')(\mathbf{v}) =
(g. f + g. f')(\mathbf{v})
\textrm {, thus } g. (f+f') =g. f + g. f'
\end{multline*} \item \begin{multline*}
(g. (f\cdot f'))(\mathbf{v}) =
(f\cdot f')(g^{-1} . \mathbf{v}) =
f(g^{-1} . \mathbf{v}) \cdot f'(g^{-1} . \mathbf{v}) =\\
(g. f)(\mathbf{v}) \cdot (g. f')(\mathbf{v}) =
(g. f \cdot g. f')(\mathbf{v})
\textrm {, thus } g. (f\cdot f') =g. f \cdot g. f'
\end{multline*} \item $
(g. (cf))(\mathbf{v}) = (cf)(g^{-1} . \mathbf{v}) = c (f(g^{-1} . \mathbf{v})) =
c ((g. f)(\mathbf{v})) = (c (g. f))(\mathbf{v})
$ \item By part $2.$ it is clear that the grading is preserved. \item To show that $f \mapsto g. f$ is bijective it is enough to
show that this mapping is injective on the finite dimensional
homogeneous components
$R_d$. Let us introduce a name for this mapping, say
$T_g^d : R_d \to R_d, f \mapsto g. f$. Now
$f \in \ker(T_g^d)$ implies that
$g. f = 0 \in R_d$, i.e. $g. f$ is a polynomial mapping from
$V$ to ${\mathbf{C}}$ of degree $d$ vanishing identically,
$\forall \mathbf{v} \in V: (g. f)(\mathbf{v}) = 0$. By definition of the
extended action we have
$\forall \mathbf{v} \in V: f(g^{-1} . \mathbf{v}) = 0$.
Since $G$ acts on $V$ this implies that
$\forall \mathbf{v} \in V : f(\mathbf{v}) = 0$, so $f$ is the zero mapping.
Since our ground field has characteristic $0$, this implies that
$f$ is the zero polynomial, which we may view as an element of every $R_d$.
See for instance \cite{Cox}, proposition 5 in section 1.1. \item Note that every $T_g^d$ is also surjective, since all group elements have their inverse in $G$. \end{enumerate} \end{proof}
Both propositions together give us a homomorphism from $G$ into $\ensuremath{\mathrm{Aut}}(R)$. They also clarify the r\^ole of the \emph{induced} matrices, which are classical in this area, as mentioned in \cite{Sloane1}. Since the monomials $x_1,\dots,x_n$ of degree one form a basis for $R_1$, it follows from the proposition that their products $\mathbf{x}_2 := (x_1^2,x_1 x_2,x_1 x_3,\dots,x_1 x_n, x_2^2,x_2 x_3,\dots)$ form a basis for $R_2$, and, in general, the monomials of degree $d$ in the linear forms (!) $x_1,\dots,x_n$ form a basis $\mathbf{x}_d$ of $R_d$. They certainly span $R_d$, and by the last observation in the last proof they are linearly independent.\par
\begin{definition}\label{dinduced} In the context from above, that is $g \in G$, $f \in R_d$, and $\mathbf{v} \in V$, we define $$ T_g^d : R_d \to R_d, f \mapsto g. f : V \to {\mathbf{C}}, \mathbf{v} \mapsto f(g^{-1} . \mathbf{v}) = f(T_{g^{-1}} (\mathbf{v})) .$$ \end{definition}
\begin{remark}\label{rinduced} In particular, we have $(T_g^1 (f))(\mathbf{v}) = f(T_{g^{-1}}(\mathbf{v}) ),$ see proposition \ref{pprop0} below. \end{remark}
Keep in mind that a function $f \in R_d$ maps to $T_g^d (f) = g. f$. Setting $A_g := [T_g^1]_{\mathbf{x},\mathbf{x}}$, then $A_g^{[d]} := [T_g^d]_{\mathbf{x}_d,\mathbf{x}_d}$ is the $d$--th induced matrix in \cite{Sloane1}, because $g. (f\cdot f') = (g. f)\cdot (g. f')$. Also, if $f,f'$ are eigenvectors of $T_g^1$ corresponding to the eigenvalues $\lambda,\lambda'$, then $f\cdot f'$ is an eigenvector of $T_g^2$ with eigenvalue $\lambda \cdot \lambda'$, because $T_g^2(f\cdot f') = T_g^1(f) \cdot T_g^1(f') = (\lambda f) \cdot (\lambda' f') = (\lambda \lambda')(f \cdot f')$. All this generalizes to $d>2$; we will get back to that later. \par
We end this section by verifying two little facts needed in the next section. \begin{proposition}\label{ppropd} The \kwd{first induced operator} of the inverse of a group element $g\in G$ is given by $T_{g^{-1}}^1 = (T_g^1)^{-1}$. \end{proposition} \begin{proof} Since $\dim(V^\ast) < \infty$, it is sufficient to prove that $T_{g^{-1}}^1 \circ T_{g}^1 = \mathrm{id}_{V^\ast}$. Keep in mind that $(T_g^1 (f))(\mathbf{v}) = f (T_{g^{-1}} (\mathbf{v}))$. For arbitrary $f \in V^\ast$ we see that \begin{align*} (T_{g^{-1}}^1 \circ T_{g}^1 )(f) = T_{g^{-1} }^1 ( T_{g}^1 (f)) = T_{g^{-1}}^1 ( g. f)
= g^{-1} . ( g. f) = (g^{-1} g). f = f. \end{align*} \end{proof}
We will be mixing group action notation and composition freely, depending on the context. The following observation is a translation device.
\begin{proposition}\label{pprop0} For $g \in G$ and $f \in V^\ast$ the following holds: $$ T_g^1(f) = g. f = f \circ T_{g^{-1}}. $$ \end{proposition} \begin{proof} For $\mathbf{v} \in V$ we see $(T_g^1(f))(\mathbf{v}) = (g. f)(\mathbf{v}) \istmit{def} f(g^{-1} . \mathbf{v} ) = f( T_{g^{-1}}(\mathbf{v}) ).$ \end{proof}
\section{The Magic Square}\label{ssquare} Remember that we require a unitary representation of $G$, that is, the operators $T_g : V \to V$ need to be unitary, i.e. $\forall g \in G : (T_g)^{-1} = (T_g)^\ast$. The first goal of this section is to show that this implies that the induced operators $T_g^d : R_d \to R_d, f \mapsto g. f$ are also unitary. We saw that $R_1 = V^\ast$, the algebraic dual of $V$. In order to understand the operator duals of $V$ and $V^\ast$ we need to look at their inner products first. We may assume that the operators $T_g$ are unitary with respect to the standard inner product $\scp{\mathbf{u}}{\mathbf{v}} = [\mathbf{u}]_{\mathcal{B}, \mathcal{B}} \bullet \overline{[\mathbf{v}]_{\mathcal{B}, \mathcal{B}}}$, where $\bullet$ denotes the dot product.\par
Before we can speak of unitarity of the induced operators $T_g^d$ we have to make clear which inner product applies on $R_1 = V^\ast$. Quite naively, for $f,g \in V^\ast$ we are tempted to define $\scp{f}{g} = [f]_{\mathbf{x}, \mathbf{x}} \bullet \overline{[g]_{\mathbf{x}, \mathbf{x}}}$. \par
We will motivate this in a while, but first we take a look at the diagram in \cite{Roman}, chapter 10, with our objects:
$$ \begin{CD} \quad @<T_g^\times<< \quad\\ R_1 = V^\ast @>T_g^1>>V^\ast = R_1 \\ @VVPV@VVPV\\ V@>T_g>>V\\ \quad @<T_g^\ast<< \quad\\ \end{CD} $$
Here $P$ (\lq \lq Rho\rq\rq\ ) denotes the \inx{Riesz map}, see \cite{Roman}, Theorem 9.18, where it is called $R$, but $R$ denotes already our big ring. We started by looking at the operator $T_g$, which is unitary, so its inverse is the Hilbert space adjoint $T_g^\ast$. Omitting the names of the bases we have $[T_g^\ast] = [T_g]^\ast $. We also see the operator adjoint $T_g^\times$ with matrix $[T_g^\times] = [T_g]^\top$, the transpose. However, the arrow for $T_g^1$ is not in the original diagram, but soon we will see it there, too. \par
Fortunately, the Riesz map $P$ turns a linear form into a vector and its inverse $\tau : V \to V^\ast$ maps a vector to a linear form; both are conjugate isomorphisms. This is essentially all we need in order to show that $T_g^1$ is unitary. In the following three propositions we use that $V$ has the orthonormal basis $\mathcal{B}$ and that $V^\ast$ has the orthonormal basis $\mathbf{x}$.
\begin{proposition}\label{ppropa} For every $f \in V^\ast$ the coordinates of its Riesz vector are given by $$[P(f)]_\mathbf{e} = (\overline{f(\mathbf{e}_1)}, \dots , \overline{f(\mathbf{e}_n)}).$$ \end{proposition} \begin{proof} Writing $\tau$ for the inverse of $P$, we need to show that $$ P(f) = \sumn{i}{\overline{f(\mathbf{e}_i)}}\mathbf{e}_i $$ which is equivalent to $$ f = \tau \left ( \sumn{i}{\overline{f(\mathbf{e}_i)}}\mathbf{e}_i \right ). $$ It is sufficient to show the latter for values of $f$ on the basis vectors $\mathbf{e}_j$, $1 \le j \le n$. We obtain \begin{align*} \left (\tau \left ( \sumn{i}{\overline{f(\mathbf{e}_i)}}\mathbf{e}_i \right )\right ) (\mathbf{e}_j) &= \scp{\mathbf{e}_j}{ \left ( \sumn{i}{\overline{f(\mathbf{e}_i)}}\mathbf{e}_i \right )} = \sumn{i}{\scp{\mathbf{e}_j}{ \left ( {\overline{f(\mathbf{e}_i)}}\mathbf{e}_i \right )} } \\ &= \sumn{i}{f(\mathbf{e}_i)\scp{\mathbf{e}_j}{ \mathbf{e}_i } }
= f(\mathbf{e}_j). \end{align*} \end{proof}
In particular, this implies that $P(x_i) = \mathbf{e}_i$.
\begin{proposition}\label{ppropb} Our makeshift inner product on $V^\ast$ satisfies $$ \scp{f}{g} = \scp{P(f)}{P(g)} ,$$ where $f,g \in V^\ast$. \end{proposition} \begin{proof} By our vague definition we have $\scp{f}{g} = [f]_{\mathbf{x}, \mathbf{x}} \bullet \overline{[g]_{\mathbf{x}, \mathbf{x}}}$. It is enough to show that $\scp{x_i}{x_j} = \scp{P(x_i)}{P(x_j)}$. From the comment after the proof of Proposition \ref{ppropa} we obtain $$\scp{P(x_i)}{P(x_j)} = \scp{\mathbf{e}_i}{\mathbf{e}_j} = \delta_{ij} = \mathbf{e}_i \bullet \mathbf{e}_j = [x_i]_{\mathbf{x}, \mathbf{x}} \bullet \overline{[x_j]_{\mathbf{x}, \mathbf{x}}} .$$ \end{proof} Hence, our guess for the inner product on $V^\ast$ was correct. We will now relate the Riesz vector of $f \in V^\ast$ to the Riesz vector of $f \circ T_g^{-1}$. Recall that the Riesz vector of $f \in V^\ast$ is the unique vector $\mathbf{w} = P(f)$ such that $f(\mathbf{v}) = \scp{\mathbf{v}}{\mathbf{w}}$ for all $\mathbf{v} \in V$. If $f \ne 0$ it can be found by scaling any nonzero vector in the orthogonal complement of $\ker(f)$, which is one--dimensional, see \cite{Roman}, in particular Theorem 9.18.
\begin{proposition}\label{pprope} Let $T_g : V \to V$ be unitary, $f \in V^\ast$, and $\mathbf{w} = P(f)$ the Riesz vector of $f$. Then $T_g(\mathbf{w})$ is the Riesz vector of $f \circ T_g^{-1}$, i.e. the Riesz vector of $T^1_g(f)$. \end{proposition} \begin{proof} We may assume that $f \ne 0$. Using the notation $\gen{\mathbf{w}}$ for the one--dimensional subspace spanned by $\mathbf{w}$, we start with a little diagram: $$ \gen{\mathbf{w}} \odot \ker(f) \overset{T_g}{\longrightarrow} \gen{T_g(\mathbf{w})} \odot \ker(f \circ T_g^{-1} ), $$ where $\odot$ denotes the orthogonal direct sum.\par
We need to show that $f \circ T_g^{-1} = \scp{\cdot}{T_g(\mathbf{w})}$, i.e. that $(f \circ T_g^{-1})(\mathbf{v}) = \scp{\mathbf{v}}{T_g(\mathbf{w})}$ for all $\mathbf{v} \in V$. Since $\mathbf{w} = P(f)$ is the Riesz vector of $f$, we have $f(\mathbf{v}) = \scp{\mathbf{v}}{\mathbf{w}}$ for all $\mathbf{v} \in V$. We obtain \begin{align*} (f \circ T_g^{-1})(\mathbf{v}) &= \scp{T_g^{-1}(\mathbf{v})}{\mathbf{w}} \istmit{T_g \,\,\mathrm{unitary} }
\scp{\mathbf{v}}{T_g(\mathbf{w})}. \end{align*} From remark \ref{rinduced} we conclude that $f \circ T_g^{-1} = T^1_g(f)$. \end{proof}
Observe that proposition \ref{pprope} implies the commutativity of the following two diagrams. $$ \begin{CD} V^\ast @>T_g^1>>V^\ast\\ @VVPV@VVPV\\ V@>T_g>>V\\ \end{CD}
\qquad \mathrm{and } \qquad
\begin{CD} V^\ast @>(T_g^1)^{-1}>>V^\ast\\ @VVPV@VVPV\\ V@>(T_g)^{-1}>>V\\ \end{CD} $$ Indeed, \ref{pprope} implies \begin{align} P \circ T_g^1 &= T_g \circ P \\ P \circ (T_g^1)^{-1} &= (T_g)^{-1} \circ P \end{align}
\begin{proposition}\label{pproplink} The first induced operator $T_g^1$ is unitary. \end{proposition} \begin{proof} We may use that $T_g$ is unitary, that is, $$ \scp{T_g(\mathbf{v})}{\mathbf{w}} = \scp{\mathbf{v}}{(T_g)^{-1}(\mathbf{w})} = \scp{\mathbf{v}}{(T_{g^{-1}})(\mathbf{w})} \qquad (\ast) .$$ Let $f,h \in V^\ast$ arbitrary, $\mathbf{w} := P(f)$, and $\mathbf{u} := P(h)$. We need to check that $\scp{(T_g^1)(f)}{h} = \scp{f}{(T_g^1)^{-1}(h)}$. We see that \begin{align*} \scp{(T_g^1)(f)}{h} &\istmit{\mathrm{proposition }\ref{ppropb}}
\scp{(P\circ T_g^1)(f)}{P(h)} \istmit{(1)} \scp{(T_g\circ P )(f)}{P(h)} \\
&= \scp{(T_g( P ))(f)}{P(h)} = \scp{T_g(\mathbf{w})}{\mathbf{u}} \istmit{\ast} \scp{\mathbf{w}}{T_g^{-1}(\mathbf{u})} \\
&= \scp{P(f)}{T_g^{-1}(P(h))} = \scp{P(f)}{(T_g^{-1} \circ P ) (h)}\\
&\istmit{(2)} \scp{P(f)}{( P \circ (T_g^1)^{-1}) (h)} = \scp{P(f)}{ P ((T_g^1)^{-1} (h))}\\
&= \scp{f}{(T_g^1)^{-1}(h)} \end{align*} \end{proof}
After having looked at eigenvalues we will see that this generalizes to higher degree, that $T_g^d$ is diagonalizable for all $d\in {\mathbf{Z}}^+$. But first let us look at the matrix version of proposition \ref{pproplink}.
\begin{proposition}\label{ppropf} $$ [T^1_g]_{\mathbf{x},\mathbf{x}} = \overline{[T_g]_{\mathbf{e},\mathbf{e}}} $$ \end{proposition} \begin{proof}
Let $A := [T_g]_{\mathcal{B}, \mathcal{B}} = [A_1| \cdots |A_i| \cdots | A_n] = [a_{i,j}]$ and
$B := [T_g^1]_{\mathbf{x},\mathbf{x}} = [B_1| \cdots |B_i| \cdots | B_n] = [b_{i,j}]$. We will use the commutativity of the diagram, i.e. $T_g^1 = P^{-1} \circ T_g \circ P$, which we will mark as $\square$. No, the proof is not finished here. We get $T_g(\mathbf{e}_i) = A_i = \sumn{k}{a_{k,i}} \mathbf{e}_k$ and \begin{align*} T_g^1 (x_i) &\istmit{\square} (P^{-1} \circ T_g \circ P)(x_i) = P^{-1} ( T_g ( P (x_i))) \\
&\istmit{\ref{ppropa}} P^{-1} ( T_g ( \mathbf{e}_i)) = P^{-1} \left ( \sumn{k}{a_{k,i}} \mathbf{e}_k \right)
\istmit{\textrm{conj.}} \sumn{k}{ \overline{a_{k,i}} P^{-1} \left ( \mathbf{e}_k \right) }\\
&\istmit{\ref{ppropa}} \sumn{k}{\overline{a_{k,i}} x_k} \end{align*} On the other hand, $[T^1_g(x_i)]_{\mathbf{x}} = [T^1_g]_{\mathbf{x},\mathbf{x}} \mathbf{e}_i = B_i$ implies $T^1_g(x_i) = \sumn{k}{b_{k,i}}x_k$. Together we obtain $b_{k,i} = \overline{a_{k,i}}$, and the proposition follows. \end{proof}
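Proposition \ref{ppropf} can be checked numerically: representing a linear form by its coefficient row vector with respect to $\mathbf{x}$ and letting $g$ act via $(g. f)(\mathbf{v}) = f(g^{-1}. \mathbf{v})$, the recovered matrix of $T_g^1$ is the entrywise conjugate of the matrix of $T_g$. The particular unitary below is an arbitrary illustrative choice.

```python
import numpy as np

theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
A = np.diag([np.exp(0.4j), np.exp(-0.9j)]) @ rot   # a unitary [T_g]

def act(coeffs, v):
    # (g.f)(v) = f(g^{-1}.v); for unitary A, A^{-1} = A^*
    return coeffs @ np.conj(A.T) @ v

# entry (k, i) of [T_g^1]_{x,x}: coordinate of g.x_i at the basis vector e_k
B = np.array([[act(np.eye(2)[i], np.eye(2)[k]) for i in range(2)]
              for k in range(2)])
assert np.allclose(B, A.conj())                    # b_{k,i} = conj(a_{k,i})
```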
\section{Averaging over the Group}\label{sreynolda} Now we apply averaging to obtain self-adjoint operators. \begin{definition}\label{dreynolds} We define the following operators: \begin{enumerate}
\item $\displaystyle \hat{T} : V\to V, \mathbf{v} \mapsto \hat{T}(\mathbf{v})
:= \sumg{g}{T_g(\mathbf{v})}$
\item $\displaystyle \hat{T^1} : V^\ast\to V^\ast, f \mapsto \hat{T^1}(f)
:= \sumg{g}{T^1_g(f)}$ \end{enumerate}
\end{definition}
These are sometimes called the \kwd{Reynolds} operator of $G$.
\begin{proposition}\label{preynolds} The operators $ \hat{T}$ and $\hat{T^1}$ are self-adjoint (Hermitian). \end{proposition} \begin{proof} The idea of the averaging trick is that if $g\in G$ runs through all group elements and $g' \in G$ is fixed, then the products $g'g$ also run through all group elements. We will make use of the facts that every $T_g$ and every $T^1_g$ is unitary. \begin{enumerate}
\item We need to show that $\scp{\hat{T}(\mathbf{v})}{\mathbf{w}} = \scp{\mathbf{v}}{\hat{T}(\mathbf{w})}$ for
arbitrary $\mathbf{v},\mathbf{w} \in V$. We obtain
\begin{align*}
\scp{\hat{T}(\mathbf{v})}{\mathbf{w}} &=\scp{\sumg{g}{T_g(\mathbf{v})}}{\mathbf{w}} = \sumg{g}{\scp{T_g(\mathbf{v})}{\mathbf{w}}}\\
&\istmit{unit.} \sumg{g}{\scp{\mathbf{v}}{(T_g)^{-1}(\mathbf{w})}} = \sumg{g}{\scp{\mathbf{v}}{(T_{g^{-1}})(\mathbf{w})}} \\
&= \sumg{g'}{\scp{\mathbf{v}}{(T_{g'})(\mathbf{w})}} = \scp{\mathbf{v}}{\hat{T}(\mathbf{w})}
\end{align*}
\item The same proof, \emph{mutatis mutandis}, replacing $\hat{T} \leftrightarrow \hat{T^1}$,
$T_g \leftrightarrow T_g^1$, $\mathbf{v} \leftrightarrow f$, and $\mathbf{w} \leftrightarrow h$ shows that
$\scp{\hat{T^1}(f)}{h} = \scp{f}{\hat{T^1}(h)}.$ \end{enumerate} \end{proof}
Consequently, $ \hat{T}$ and $\hat{T^1}$ are unitarily diagonalizable with real spectrum.
\begin{proposition}\label{pproph} The operators $ \hat{T}$ and $\hat{T^1}$ are \inx{idempotent}, i.e. \begin{enumerate}
\item $ \hat{T} \circ \hat{T} = \hat{T} $
\item $ \hat{T^1} \circ \hat{T^1} = \hat{T^1} $ . \end{enumerate} In particular, the eigenvalues of both operators are either $0$ or $1$. \end{proposition} \begin{proof} Again, we show only one part, the other part is analogous. To begin with, let $s \in G$ be fixed. Then
\begin{align*}
T_s \circ \hat{T} &= T_s \circ \sumg{g}{T_g} = \sumg{g}{T_s \circ T_g} \\
&= \sumg{g}{T_{sg} } = \sumg{g'}{T_{g'} } = \hat{T}.
\end{align*}
From this it follows that
\begin{align*}
\hat{T} \circ \hat{T} &= \left (\sumg{g}{T_g} \right) \circ \hat{T}
= \sumg{g}{T_g\circ \hat{T} } \istmit{above}
\sumg{g}{ \hat{T} } \\
&= \frac{1}{|G|} \cdot |G| \cdot\hat{T} = \hat{T}.
\end{align*}
From $ \hat{T} \circ \hat{T} = \hat{T} $ we conclude that $ \hat{T} \circ (\hat{T} - \mathrm{id}) = 0 $.
Thus the minimal polynomial of $\hat{T}$ divides the polynomial $\lambda (\lambda - 1)$, so
all eigenvalues are contained in $\set{0,1}$. \end{proof}
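A small numerical sketch of the two propositions (an illustrative example, not tied to any particular group in the text): averaging the representing matrices of $G = \{\mathrm{id}, \mathrm{diag}(1,-1)\}$ acting on $\mathbb{R}^2$ yields the orthogonal projection onto the fixed subspace, which is idempotent with spectrum contained in $\{0,1\}$.

```python
import numpy as np

# G = {identity, reflection diag(1,-1)} acting on R^2
mats = [np.eye(2), np.diag([1.0, -1.0])]
T_hat = sum(mats) / len(mats)       # the Reynolds (averaging) operator

assert np.allclose(T_hat @ T_hat, T_hat)   # idempotent
assert np.allclose(sorted(np.linalg.eigvals(T_hat).real), [0.0, 1.0])
```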
We will now look at the eigenvalues of $T_g$ and $T^1_g$ and their interrelation. Since both operators are unitary, their eigenvalues have absolute value $1$.
\begin{proposition}\label{pvictor}
\begin{enumerate}
\item If $\mathbf{v} \in V$ is an eigenvector of $T_g$ for the eigenvalue $\lambda$,
then $\mathbf{v}$ is an eigenvector of $T_{g^{-1}}$ for the eigenvalue $\overline{\lambda} = \frac{1}{\lambda}$.
\item If $f\in V^\ast$ is an eigenvector of $T^1_g$ for the eigenvalue $\lambda$,
then $f$ is an eigenvector of $T^1_{g^{-1}}$ for the eigenvalue $\frac{1}{\lambda}$.
\item If $f\in V^\ast$ is an eigenvector of $T^1_g$ for the eigenvalue $\lambda$,
then $P(f) \in V$ is an eigenvector of $T_g$ for the eigenvalue $\overline{\lambda} = \frac{1}{\lambda}$.
\item If $\mathbf{v} \in V$ is an eigenvector of $T_g$ for the eigenvalue $\lambda$,
then $P^{-1} (\mathbf{v}) \in V^\ast$ is an eigenvector of $T^1_g$ for the eigenvalue $\overline{\lambda}=\frac{1}{\lambda}$. \end{enumerate} \end{proposition} \begin{proof} We will make use of the commutativity of Proposition \ref{pprope}. Observe that $g.\mathbf{v} = T_g(\mathbf{v})$ and $g. f = f \circ T_g$. \begin{enumerate}
\item \quad
\begin{align*}
T_g(\mathbf{v}) &= g.\mathbf{v} = \lambda \mathbf{v} \implies g^{-1} . g.\mathbf{v} = g^{-1} . \lambda \mathbf{v}
\implies g^{-1} . g.\mathbf{v} = \lambda g^{-1} .\mathbf{v} \\
& \implies \mathbf{v} = \lambda g^{-1} .\mathbf{v} \implies T_{g^{-1}}(\mathbf{v}) = g^{-1} . \mathbf{v} = \frac{1}{\lambda} \mathbf{v}
\end{align*}
\item \quad
\begin{align*}
T^1_g(f) &= g. f = \lambda f \implies g^{-1} . g. f = g^{-1} . \lambda f
\implies g^{-1} . g. f = \lambda g^{-1} . f \\
& \implies f = \lambda g^{-1} . f \implies T^1_{g^{-1}}(f) = g^{-1} . f = \frac{1}{\lambda} f
\end{align*}
\item \quad
\begin{align*}
T^1_g(f) = \lambda f &\folgtmit{P\circ} P(T^1_g(f)) = P(\lambda f) \folgtmit{(1)} T_g(P(f)) = P(\lambda f) \\
&\implies T_g(P(f)) = \overline{\lambda} P( f) =\frac{1}{\lambda} P( f)
\end{align*}
\item \quad
\begin{align*}
T_g(\mathbf{v}) = \lambda \mathbf{v} &\folgtmit{P^{-1} \circ} P^{-1}(T_g(\mathbf{v})) = P^{-1} (\lambda \mathbf{v})
\folgtmit{\square} (T_g^1 \circ P^{-1})(\mathbf{v}) = \overline{\lambda} P^{-1} (\mathbf{v}) \\
&\implies T_g^1 ( P^{-1} (\mathbf{v})) = \frac{1}{\lambda} P^{-1} (\mathbf{v})
\end{align*}
\end{enumerate} \end{proof}
This implies that if we consider the union of the spectra over all $g\in G$, then we obtain the same (multi)set, whether we take $T_g$ or $T^1_g$. \par
\section{Eigenvectors and eigenvalues}\label{svictor}
Now we continue from where we left off at the end of section \ref{spreliminaries}, fixing one group element $g \in G$ and comparing $T_g^1$ with $T_g^d$ for $d > 1$. By the method known as \kwd{stars and bars} it is easy to see that $$\tilde{d} := \dim_{\mathbf{C}}(R_d) = \binom{n+d-1}{d} = \frac{(n+d-1)!}{(n-1)!\,d!} .$$
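The stars-and-bars count can be cross-checked by brute-force enumeration of monomials. The following Python sketch (the function names are ours, not from the text) compares the closed formula with a direct count:

```python
from itertools import combinations_with_replacement
from math import comb

def dim_Rd(n, d):
    # stars and bars: monomials of total degree d in n variables
    return comb(n + d - 1, d)

def count_monomials(n, d):
    # brute force: a monomial of degree d is a multiset of d variables
    return sum(1 for _ in combinations_with_replacement(range(n), d))

# the closed formula agrees with the direct count for small n and d
for n in range(1, 6):
    for d in range(5):
        assert dim_Rd(n, d) == count_monomials(n, d)
```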
Remember that every $T_g^1$ is unitarily diagonalizable with eigenvalues of absolute value $1$. If $\ensuremath \mathrm{spec}(T_g^1) = (\omega_1,\dots , \omega_n) \in U(1)^n $, then $V^\ast$ has an orthonormal basis $\mathbf{y}_g^1 := (y_{1}, \dots ,y_{n} )$,
such that $T_g^1 (y_{i}) = \omega_i \cdot y_{i} $ for all $1 \le i \le n$, and $[T_g^1]_{\mathbf{y}_g^1,\mathbf{y}_g^1} = \ensuremath \mathrm{diag}(\omega_1,\dots , \omega_n)$. Moreover, $$ [T_g^1]_{\mathbf{y}_g^1,\mathbf{y}_g^1} = [\mathrm{id}]_{\mathbf{y}_g^1, \mathbf{x}} \cdot [T_g^1]_{\mathbf{x},\mathbf{x}} \cdot [\mathrm{id}]_{ \mathbf{x}, \mathbf{y}_g^1}
= \ensuremath \mathrm{diag}(\omega_1,\dots , \omega_n) , $$ where $[\mathrm{id}]_{\mathbf{y}_g^1, \mathbf{x}} = [\mathrm{id}]_{ \mathbf{x}, \mathbf{y}_g^1}^\ast$ is unitary. \par
For $d>1$ put $$ \mathbf{x}^d := (x_1^d, x_2^d, \dots , x_n^d, x_1^{d-1}x_2 ,x_1^{d-1}x_3 , \dots ,x_1^{d-1}x_n, \dots )
=: (\tilde{x_1}, \dots, \tilde{x}_{\tilde{d}}) ,$$ all monomials in the $x_i$ of total degree $d$, numbered from $1$ to $\tilde{d}$.
These are certainly linearly independent, since there are no relations amongst the variables, and they span $R_d$, since every element of $R_d$ is a linear combination of monomials of total degree $d$. So they form a basis of $R_d$. We will not require that this can be made into an orthonormal basis; we do not even consider any inner product on $R_d$ for $d>1$.
We rather want to establish that $$ \mathbf{y}^d := (y_1^d, y_2^d, \dots , y_n^d, y_1^{d-1}y_2 ,y_1^{d-1}y_3 , \dots ,y_1^{d-1}y_n, \dots )
=: (\tilde{y_1}, \dots, \tilde{y}_{\tilde{d}}) $$ is a basis of eigenvectors of $T_g^d$ diagonalizing $T_g^d$, using the same numbering.
Arranging the eigenvalues of $T_g^1$ in the same way, we put $$ \mathbf{\omega}^d := (\omega_1^d, \omega_2^d, \dots , \omega_n^d, \omega_1^{d-1}\omega_2 ,\omega_1^{d-1}\omega_3 , \dots ,\omega_1^{d-1}\omega_n, \dots )
=: (\tilde{\omega_1}, \dots, \tilde{\omega}_{\tilde{d}}). $$
Now we establish that the $\tilde{y_i}$, $1\le i \le \tilde{d}$, are the eigenvectors for the eigenvalues $\tilde{\omega_i}$ of $T_g^d$.
\begin{proposition}\label{pinducedeigen} In the context above, $$ T_g^d (\tilde{y_i}) = \tilde{\omega_i} \cdot \tilde{y_i} $$ for all $1\le i \le \tilde{d}$. \end{proposition} \begin{proof} The key is proposition \ref{pindact2}, as in the preliminary observations at the end of section \ref {spreliminaries}. Let $$ \tilde{y_i} = \prod_{j=1}^{n} y_j^{\epsilon_j} $$ and $$ \tilde{\omega_i} = \prod_{j=1}^{n} \omega_j^{\epsilon_j} ,$$ where $\epsilon_j \in {\mathbf{N}}$ and the sum of these exponents is $d$. Then \begin{align*} T_g^d (\tilde{y_i}) &= T_g^d \left ( \prod_{j=1}^{n} y_j^{\epsilon_j} \right )
= \prod_{j=1}^{n} \left ( T_g^1 ( y_j ) \right )^{\epsilon_j}
= \prod_{j=1}^{n} \omega_j^{\epsilon_j} y_j^{\epsilon_j}
= \tilde{\omega_i} \cdot \tilde{y_i} \end{align*}
\end{proof}
As a consequence, $R_d$ has a basis of eigenvectors of $T_g^d$ and $T_g^d$ is similar to the \inx{diagonal matrix} $\ensuremath \mathrm{diag}(\tilde{\omega_1}, \dots, \tilde{\omega}_{\tilde{d}})$.
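For $n=2$ and $d=2$ the trace structure of this diagonalization can be verified directly: the substitution action $x \mapsto ax+by$, $y \mapsto cx+dy$ on the basis $(x^2, xy, y^2)$ has trace $a^2+(ad+bc)+d^2$, which equals $\tilde{\omega}_1+\tilde{\omega}_2+\tilde{\omega}_3 = \lambda_1^2+\lambda_1\lambda_2+\lambda_2^2 = \mathrm{tr}(A)^2-\det(A)$ for the two eigenvalues of the matrix. A quick randomized check (our code, not part of the text):

```python
import random

def trace_induced_deg2(a, b, c, d):
    # substitution x -> ax+by, y -> cx+dy maps x^2, xy, y^2 to
    # quadratics whose diagonal coefficients are a^2, ad+bc, d^2
    return a * a + (a * d + b * c) + d * d

random.seed(0)
for _ in range(100):
    a, b, c, d = (random.randint(-5, 5) for _ in range(4))
    # h_2 of the eigenvalues equals tr(A)^2 - det(A)
    assert trace_induced_deg2(a, b, c, d) == (a + d) ** 2 - (a * d - b * c)
```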
\section{Molien's Theorem}\label{sstart}
We will now make some final preparations and then present the proof of Molien's Theorem.\par
For $f \in R$ and $g \in G$ we say that $f$ is an \kwd{invariant} of $g$ if $g. f = f$ and that $f$ is a (simple) invariant of $G$ if $\forall g \in G : g. f = f$. The method of averaging from section \ref{sreynolda} can also be applied to create invariants:
\begin{proposition}\label{ppropg} For $f\in V^\ast$ put $\hat{f} := \hat{T^1} (f)$. Then $\hat{f}$ is an invariant of $G$. \end{proposition} \begin{proof} Let $g \in G$ be arbitrary. We will show that $g.\hat{f} = \hat{f}$. Clearly, from proposition \ref{pprop0} we get that \begin{align*} g.\hat{f} &= \hat{f} \circ T_{g^{-1}} = (\hat{T^1} (f)) \circ T_{g^{-1}} \\
&= \left( \sumg{s}{T_s^1(f) }\right )\circ T_{g^{-1}} = \left( \sumg{s}{f \circ T_{s^{-1}} }\right )\circ T_{g^{-1}}\\
&= \sumg{s}{f \circ T_{s^{-1}} \circ T_{g^{-1}}} = \sumg{t}{f \circ T_{t^{-1}}} = \hat{f}. \end{align*} \end{proof}
Now, we call $$ R^G := \genset{f\in R}{\forall g \in G : g. f = f} $$ the \kwd{algebra of invariants} of $G$.
\begin{proposition}\label{pinvalg} $R^G$ is a subalgebra of $R$. \end{proposition} \begin{proof} Since the mapping $f \mapsto g.f$ is linear for every $g\in G$, $R^G$ is an intersection of subspaces, and hence a subspace. Let us check the subring conditions in more detail. For arbitrary $g \in G$, $f,h \in R^G$, and $\mathbf{v} \in V$ we have $g. f = f$ and $g. h = h$. \begin{enumerate}
\item For the zero $0 \in R$ we obtain $(g. 0)(\mathbf{v}) = 0(g^{-1}. \mathbf{v} ) = 0(\mathbf{v})$,
so $0 \in R^G$.
\item We see
\begin{align*}
g. (f-h)(\mathbf{v}) &= (f-h)(g^{-1} . \mathbf{v}) = f(g^{-1} . \mathbf{v}) - h(g^{-1} . \mathbf{v}) \\
&= (g . f)(\mathbf{v}) - (g . h)(\mathbf{v}) = f(\mathbf{v}) - h(\mathbf{v}) = (f-h)(\mathbf{v})
\end{align*}
\item Likewise,
\begin{align*}
g. (f\cdot h)(\mathbf{v}) &= (f\cdot h)(g^{-1} . \mathbf{v}) = f(g^{-1} . \mathbf{v}) \cdot h(g^{-1} . \mathbf{v}) \\
&= (g . f)(\mathbf{v}) \cdot (g . h)(\mathbf{v}) = f(\mathbf{v}) \cdot h(\mathbf{v}) = (f\cdot h)(\mathbf{v}).
\end{align*} \end{enumerate}
\end{proof}
Our subalgebra $R^G$ is graded in the same way as $R$. \begin{proposition}\label{pinvalggraded} The algebra of invariants of $G$ is naturally graded as $$ R^G = \bigoplus_{d \in {\mathbf{N}}} R^G_d, $$ where $R^G_d = \genset{f\in R_d}{\forall g \in G : g. f = f}$, called the $d$--th \kwd{homogeneous component} of $R^G$. \end{proposition} \begin{proof} This follows directly from proposition \ref{pindact} and proposition \ref{pindact2}. \end{proof}
\begin{definition}[Molien series]\label{dmolien} Viewing $R^G_d$ as a vector space, we define $$ a_d := \dim_{\mathbf{C}} R^G_d, $$ the number of linearly independent homogeneous invariants of degree $d\in {\mathbf{N}}$, and $$ \Phi_G(\lambda) := \sum_{d\in{\mathbf{N}}} a_d \lambda^d, $$ the \kwd{Molien series} of $G$. \end{definition}
Thus, the Molien series of $G$ is an ordinary power series generating function whose coefficient of $\lambda^d$ is the number of linearly independent homogeneous invariants of degree $d$. The following beautiful formula gives these numbers; its proof is the aim of this paper.
\begin{theorem}[Molien, 1897]\label{tmolien} $$
\Phi_G(\lambda) = \frac{1}{|G|} \sum_{g\in G} \frac{1}{\det(\mathrm{id} - \lambda T_g)} $$ \end{theorem}
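The formula is easy to evaluate for small matrix groups. The sketch below is our code, not part of the text; it hardcodes $\det(\mathrm{id}-\lambda A) = 1 - \mathrm{tr}(A)\lambda + \det(A)\lambda^2$ for $2\times 2$ matrices, expands each summand as a power series, and averages. For $G = \{\mathrm{id}, -\mathrm{id}\}$ acting on $\mathbf{C}^2$, the invariants are spanned by the monomials of even degree, so $a_{2k} = 2k+1$ and the odd coefficients vanish:

```python
from fractions import Fraction

def inv_series(p, terms):
    # power-series inverse of a polynomial p (coefficient list, p[0] == 1)
    inv = [Fraction(1)]
    for k in range(1, terms):
        s = sum(p[j] * inv[k - j] for j in range(1, min(k, len(p) - 1) + 1))
        inv.append(-s)
    return inv

def molien_2x2(mats, terms):
    # for a 2x2 matrix A: det(id - l*A) = 1 - tr(A)*l + det(A)*l^2
    total = [Fraction(0)] * terms
    for (a, b), (c, d) in mats:
        p = [Fraction(1), Fraction(-(a + d)), Fraction(a * d - b * c)]
        for k, coef in enumerate(inv_series(p, terms)):
            total[k] += coef
    return [coef / len(mats) for coef in total]

# G = {id, -id} acting on C^2
G = [((1, 0), (0, 1)), ((-1, 0), (0, -1))]
assert molien_2x2(G, 6) == [1, 0, 3, 0, 5, 0]
```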
Following \cite{Sloane1} we first look at the number $a_1$ of linearly independent homogeneous invariants of degree $1$. \begin{theorem}[Theorem 13 in \cite{Sloane1}]\label{t13} $$ a_1 = \ensuremath{\mathrm{Tr}} (\hat{T}) = \ensuremath{\mathrm{Tr}} (\hat{T^1}) $$ \end{theorem} \begin{proof} First, we note that the equation $\ensuremath{\mathrm{Tr}} (\hat{T}) = \ensuremath{\mathrm{Tr}} (\hat{T^1}) $ follows from the remark at the end of section \ref{sreynolda}, since the sum for the trace runs over all group elements. Remember that the trace is independent of the choice of basis. From proposition \ref{pproph} we know that both operators are idempotent and hermitian, and that $V^\ast$ has an orthonormal basis $\mathbf{f} = (f_1,\dots, f_n)$ of eigenvectors of $\hat{T^1}$, corresponding to the eigenvalues $\lambda_1, \dots , \lambda_n \in \set{0,1}$, so $$ [\hat{T^1}]_{\mathbf{f},\mathbf{f}} = \ensuremath \mathrm{diag}(\lambda_1, \dots , \lambda_n). $$
Let us say that this matrix has $r$ entries $1$ and the remaining $n-r$ entries $0$. By rearranging the eigenvalues and eigenvectors we may assume that the first $r$ entries are $1$ and the remaining $n-r$ are $0$, i.e. $$ \left ([\hat{T^1}]_{\mathbf{f},\mathbf{f}}\right )_{i,i} = \begin{cases} 1 & : 1 \le i \le r\\ 0 & : r+1 \le i \le n. \end{cases} $$ Hence $\hat{T^1} (f_i) = f_i$ for $1 \le i \le r$ and $\hat{T^1} (f_i) = 0$ for $r+1 \le i \le n$. Any linear invariant of $G$ is certainly fixed by $\hat{T^1}$, so $a_1 \le r$. On the other hand, by proposition \ref{ppropg}, $\hat{f_i} := \hat{T^1} (f_i) = \lambda_i f_i = f_i$ is an invariant of $G$ for every $1\le i\le r$, so $a_1 \ge r$. Together, $a_1 = r$. \end{proof}
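As an illustration of Theorem \ref{t13} (our example, not from \cite{Sloane1}): for $S_2$ acting on $\mathbf{C}^2$ by permuting coordinates, the averaged operator has trace $1$, matching the single linear invariant $x+y$:

```python
from fractions import Fraction

# S_2 acts on C^2 by permuting coordinates; average the two matrices
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
S = [[Fraction(0), Fraction(1)], [Fraction(1), Fraction(0)]]
avg = [[(I[i][j] + S[i][j]) / 2 for j in range(2)] for i in range(2)]

# a_1 = Tr(average): one linear invariant, namely x + y
trace = avg[0][0] + avg[1][1]
assert trace == 1
```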
Before the final proof, let us introduce a handy notation. \begin{definition}\label{dcorfficient} Let $p(\lambda) \in {\mathbf{C}}[\lambda]$ or $p(\lambda) \in {\mathbf{C}}[[\lambda]]$. Then $[\lambda^i]:p(\lambda)$ denotes the \inx{coefficient} of $\lambda^i$ in $p(\lambda)$. \end{definition}
So, for example $[x^2]: 2x^3 + 42x^2 - 6 = 42$ and $[\lambda^d]:\Phi_G(\lambda) = a_d$. \begin{proof}[Proof of Molien's Theorem] We just established the case $d = 1$, so the reader is probably expecting a proof by induction over $d$. But this is \emph{not} the case. Rather, the argument for the case $d = 1$ applies to all $d > 1$. Note that $a_d$ is equal to the number of linearly independent common invariants of the operators $T_g^d$, $g \in G$. So Theorem \ref{t13} gives us \begin{align*} a_1 &= \ensuremath{\mathrm{Tr}} (\hat{T}) = \ensuremath{\mathrm{Tr}} (\hat{T^1}) \qquad \mathrm{ and} \qquad\\ a_d &= \ensuremath{\mathrm{Tr}} (\hat{T^d}), \end{align*} where the latter includes the former. From definition \ref{dreynolds} we also have $$
\hat{T^1} = \sumg{g}{T^1_g} \quad \textrm{and in general} \quad
\hat{T^d} = \sumg{g}{T^d_g} , $$ so we already know that $$ a_d = \sumg{g}{\ensuremath{\mathrm{Tr}}(T^d_g)}. $$ So all we need to show is $$
[\lambda^d]:\frac{1}{|G|} \sum_{g\in G} \frac{1}{\det(\mathrm{id} - \lambda T^1_g)} = \sumg{g}{\ensuremath{\mathrm{Tr}}(T^d_g)}. $$ We will show that for every summand (group element) the equation $$ [\lambda^d]: \frac{1}{\det(\mathrm{id} - \lambda T^1_g)} = \ensuremath{\mathrm{Tr}}(T^d_g) $$ holds. From proposition \ref{pinducedeigen} we get for every $g\in G$ that \begin{align*}\ensuremath{\mathrm{Tr}}(T^d_g) &= \ensuremath{\mathrm{Tr}}(\ensuremath \mathrm{diag}(\tilde{\omega_1}, \dots, \tilde{\omega}_{\tilde{d}})) \\&= \tilde{\omega_1} + \dots + \tilde{\omega}_{\tilde{d}}, \end{align*} the sum of all products of the $\omega_1, \omega_2, \dots ,\omega_n $, taken $d$ at a time with repetition allowed. On the other hand, for the same $g\in G$ we obtain from section \ref{svictor} that
$[T_g^1]_{\mathbf{y}_g^1,\mathbf{y}_g^1} = \ensuremath \mathrm{diag}(\omega_1,\dots , \omega_n)$ so that
\begin{align*} \det(\mathrm{id} - \lambda T^1_g) &= \det(\mathrm{id} - \lambda \cdot \ensuremath \mathrm{diag}(\omega_1,\dots , \omega_n) ) \\
&= (1 - \lambda \omega_1 )(1 - \lambda \omega_2 )\dots(1 - \lambda \omega_n ), \end{align*} so \begin{align*} \quad & \frac{1}{\det(\mathrm{id} - \lambda T^1_g)} = \frac{1}{(1 - \lambda \omega_1 )(1 - \lambda \omega_2 )\dots(1 - \lambda \omega_n )} \\
&= \frac{1}{(1 - \lambda \omega_1) } \cdot \frac{1}{(1 - \lambda \omega_2)} \cdot \dots \frac{1}{(1 - \lambda \omega_n)} \\
&= (1 + \lambda \omega_1 + \lambda^2 \omega_1^2 + \dots )(1 + \lambda \omega_2 + \lambda^2 \omega_2^2 + \dots ) \dots
(1 + \lambda \omega_n + \lambda^2 \omega_n^2 + \dots ) \end{align*} and here the coefficient of $\lambda^d$ is also the sum of all products of the $\omega_1, \omega_2, \dots ,\omega_n $, taken $d$ at a time with repetition allowed.
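For instance, with $n=2$ and $d=2$, multiplying out the two geometric series gives $$ \frac{1}{(1 - \lambda \omega_1 )(1 - \lambda \omega_2 )} = 1 + (\omega_1 + \omega_2)\lambda + (\omega_1^2 + \omega_1\omega_2 + \omega_2^2)\lambda^2 + \dotsb , $$ and the coefficient of $\lambda^2$ is exactly $\tilde{\omega}_1 + \tilde{\omega}_2 + \tilde{\omega}_3 = \ensuremath{\mathrm{Tr}}(T_g^2)$, with $\tilde{d}=3$.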
Again, the last claim $$
\frac{1}{|G|} \sum_{g\in G} \frac{1}{\det(\mathrm{id} - \lambda T_g)} =
\frac{1}{|G|} \sum_{g\in G} \frac{1}{\det(\mathrm{id} - \lambda T^1_g)} $$ follows from the remark at the end of section \ref{sreynolda}, since the sum runs over all group elements. \end{proof}
\section{Symbol table}\label{ssymbol}
\begin{multicols}{2} \begin{description}
\item[$a_d$] number of linearly independent homogeneous invariants of degree $d$
\item[$\tilde{d}$] Dimension of $R_d$
\item[$\mathcal{B}$] ON basis for $V$
\item[$G$] Finite group
\item[$\omega_i$] eigenvalue of $T_g^1$ (\cite{Sloane1} $= w_i$ )
\item[$P(f)$] \lq\lq Rho\rq\rq\ Riesz vector of $f$.
\item[$\rho$] Unitary representation $\rho : G \to U(V), g \mapsto T_g$
\item[$R$] Big algebra, direct sum of the $R_d$
\item[$R_d$] Direct summand of degree $d$
\item[$R^G$] Ring of invariants of $G$
\item[$R^G_d$] Degree $d$ summand
\item[$T_g$] representation of $g$ on $V$, (\cite{Sloane1} $ A_\alpha= [T_{g_\alpha}]_{\mathcal{B}, \mathcal{B}} $ )
\item[$V$] Complex inner product space
\item[$V^\ast$] Algebraic dual of $V$ \end{description} \end{multicols}
\section{Lost and found}\label{slostfound}
Some things to explore from here: \begin{itemize}
\item If we know the conjugacy classes of $G$, we may be able to say more, since every
unitary representation splits into irreducible components.
\item There seems to be a link to P\'olya enumeration.
\item We have GAP code, see \cite{GAP4}.
\item An example would be nice.
\item Relations on the generators in $S$ of the Cayley graph $\Gamma(G,S)$
should lead to conditions on the minimal polynomial of its adjacency operator $Q(\Gamma(G,S))$.
\item Also, Cayley graphs of some finite reflection groups \cite{Hu2} should become accessible.
\item Check some more applications, as mentioned in \cite{Sloane1}.
\item For finding invariants, check also \cite{Cox}, Gr\"obner bases. \end{itemize}
\addcontentsline{toc}{section}{References}
\addcontentsline{toc}{section}{Index} \printindex
\def\thefootnote{} \footnote{\texttt{\jobname .tex} Typeset: \today }
\end{document}
\begin{document}
\title{Dyck paths and pattern-avoiding matchings}
\begin{abstract} How many matchings on the vertex set $V=\{1,2,\dotsc,2n\}$ avoid a given configuration of three edges? Chen, Deng and Du have shown that the number of matchings that avoid three nesting edges is equal to the number of matchings avoiding three pairwise crossing edges. In this paper, we consider other forbidden configurations of size three. We present a bijection between matchings avoiding three crossing edges and matchings avoiding an edge nested below two crossing edges. This bijection uses non-crossing pairs of Dyck paths of length $2n$ as an intermediate step.
Apart from that, we give a bijection that maps matchings avoiding two nested edges crossed by a third edge onto the matchings avoiding all configurations from an infinite family $\mathcal{M}$, which contains the configuration consisting of three crossing edges. We use this bijection to show that for matchings of size $n>3$, it is easier to avoid three crossing edges than to avoid two nested edges crossed by a third edge.
In this updated version of this paper, we add new references to papers that have obtained analogous results in a different context. \end{abstract}
\section{Introduction and Basic Definitions}
This is an updated preprint of a paper whose journal version has already appeared in print \cite{jour}. The main reason for the update was to include references to the papers \cite{bwx,s,sw}, which have independently obtained equivalent results using different methods.
The enumeration of pattern-avoiding permutations has received a considerable amount of attention lately (see \cite{kima} for a survey). We say that a permutation $\pi$ of order $n$ \emph{contains} a permutation $\sigma$ of order $k$, if there is a sequence $1\le i_1<i_2<\dotso<i_k\le n$ such that for every $s,t\in[k]$ we have $\pi(i_s)<\pi(i_t)$ if and only if $\sigma(s)<\sigma(t)$. One of the central notions in the study of pattern-avoiding permutations is the \emph{Wilf equivalence}: we say that a permutation $\sigma_1$ is Wilf-equivalent to a permutation $\sigma_2$ if, for every $n\in\mathbb{N}$, the number of permutations of order $n$ that avoid $\sigma_1$ is equal to the number of permutations of order $n$ that avoid $\sigma_2$. In this paper, we consider pattern avoidance in matchings. This is a more general concept than pattern avoidance in permutations, since every permutation can be represented by a matching.
A \emph{matching} of size $m$ is a graph on the vertex set $[2m]=\{1,2,\dotsc,2m\}$ whose every vertex has degree one. We say that a matching $M=(V,E)$ \emph{contains} a matching $M'=(V',E')$ if there is a monotone edge-preserving injection from $V'$ to $V$; in other words, $M$ contains $M'$ if there is a function $f\colon V'\to V$ such that $u<v$ implies $f(u)<f(v)$ and $\{u,v\}\in E'$ implies $\{f(u),f(v)\}\in E$.
Let $M$ be a matching of size $m$, and let $e=\{i,j\}$ be an arbitrary edge of~$M$. If $i<j$, we say that $i$ is an \emph{l-vertex} and $j$ is an \emph{r-vertex} of $M$. Obviously, $M$ has $m$ l-vertices and $m$ r-vertices. Let $e_1=\{i_1,j_1\}$ and $e_2=\{i_2,j_2\}$ be two edges of $M$, with $i_1<j_1$ and $i_2<j_2$, and assume that $i_1<i_2$. We say that the two edges $e_1$ and $e_2$ \emph{cross} each other if $i_1<i_2<j_1<j_2$, and we say that $e_2$ is \emph{nested} below $e_1$ if $i_1<i_2<j_2<j_1$.
We say that a matching $M$ on the vertex set $[2m]$ is \emph{permutational}, if for every l-vertex $i$ and every r-vertex $j$ we have $i\le m<j$. There is a natural one-to-one correspondence between permutations of order $m$ and permutational matchings of size $m$: if $\pi$ is a permutation of $m$ elements, we let $M_\pi$ denote the permutational matching on the vertex set $[2m]$ whose edge set is the set $\bigl\{\{i,m+\pi(i)\},\ i\in[m]\bigr\}$. In this paper, we will often represent a permutation $\pi$ on $m$ elements by the ordered sequence $\pi(1)\pi(2)\dotsm\pi(m)$. Thus, for instance, $M_{132}$ refers to the matching on the vertex set $[6]$, with edge set $\bigl\{\{1,4\},\{2,6\},\{3,5\}\bigr\}$. Figure~\ref{fig-match} depicts most of the matchings relevant for this paper. Note that a permutational matching $M_\pi$ contains the permutational matching $M_\sigma$ if and only if $\pi$ contains~$\sigma$.
\begin{figure}\label{fig-match}
\end{figure}
Let $n=2m$ be an even number. A \emph{Dyck path} of length $n$ is a piecewise linear nonnegative walk in the plane, which starts at the point $(0,0)$, ends at the point $(n,0)$, and consists of $n$ linear segments (``steps''), of which there are two kinds: an \emph{up-step} connects $(x,y)$ with $(x+1,y+1)$, whereas a \emph{down-step} connects $(x,y)$ with $(x+1,y-1)$. The nonnegativity of the path implies that among the first $k$ steps of the path there are at least $k/2$ up-steps. Let $\mathcal{D}_m$ denote the set of all Dyck paths of length $2m$. It is well known that
$|\mathcal{D}_{m}|=c_m$, where $c_m=\frac{1}{m+1}\binom{2m}{m}$ is the $m$-th Catalan number.
Every Dyck path $D\in\mathcal{D}_m$ can be represented by a \emph{Dyck word} (denoted by $w(D)$), which is a binary word $w\in\{0,1\}^{2m}$ such that $w_i=0$ if the $i$-th step of $D$ is an up-step, and $w_i=1$ if the $i$-th step of $D$ is a down-step. It can be easily seen that a word $w\in\{0,1\}^n$ is a Dyck word of some Dyck path if and only if the following conditions are satisfied: \begin{itemize}
\item The length $n=|w|$ is even. \item The word $w$ has exactly $n/2$ terms equal to 1.
\item Every prefix $w'$ of $w$ has at most $|w'|/2$ terms equal to 1. \end{itemize} We will use the term Dyck word to refer to any binary word satisfying these conditions. The set of all Dyck words of length $2m$ will be denoted by~$\mathcal{D}'_m$.
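The three conditions above translate directly into code. A small Python sketch (the function names are ours) checks them and confirms by exhaustive enumeration that the number of Dyck words of length $2m$ is the Catalan number $c_m$:

```python
from itertools import product
from math import comb

def is_dyck_word(w):
    # check the three conditions for a 0/1 word w
    if len(w) % 2 != 0 or sum(w) != len(w) // 2:
        return False
    ones = 0
    for i, bit in enumerate(w, start=1):
        ones += bit
        if ones > i // 2:  # some prefix has more than |w'|/2 ones
            return False
    return True

def catalan(m):
    return comb(2 * m, m) // (m + 1)

# the number of Dyck words of length 2m is the m-th Catalan number
for m in range(1, 6):
    count = sum(1 for w in product((0, 1), repeat=2 * m) if is_dyck_word(w))
    assert count == catalan(m)
```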
Let $\mathcal{G}(m)$ denote the set of all matchings on the vertex set $[2m]$. For a matching $M\in\mathcal{G}(m)$, we define the \emph{base} of $M$ (denoted by $b(M)$) to be the binary word $w\in\{0,1\}^{2m}$ such that $w_i=0$ if $i$ is an l-vertex of $M$, and $w_i=1$ if $i$ is an r-vertex of $M$. The base $b(M)$ is clearly a Dyck word; conversely, every Dyck word is a base of some matching. If $w_i=0$ (or $w_i=1$) we say that $i$ is an \emph{l-vertex} (or an \emph{r-vertex}, respectively) \emph{with respect to the base $w$}. Let $m\in\mathbb{N}$, let $\mathcal{M}$ be an arbitrary set of matchings, and let $w\in\mathcal{D}'_m$; we define the sets $\mathcal{G}(m,\mathcal{M})$ and $\mathcal{G}(m,w,\mathcal{M})$ as follows: \begin{align*} \mathcal{G}(m,\mathcal{M})&=\{M\in\mathcal{G}(m);\ M\text{ avoids all the elements of }\mathcal{M}\}\\ \mathcal{G}(m,w,\mathcal{M})&=\{M\in\mathcal{G}(m,\mathcal{M});\ b(M)=w\} \end{align*}
Let $g(m), g(m,\mathcal{M})$ and $g(m,w,\mathcal{M})$ denote the cardinalities of the sets $\mathcal{G}(m)$, $\mathcal{G}(m,\mathcal{M})$ and $\mathcal{G}(m,w,\mathcal{M})$, respectively. The sets $\mathcal{G}(m,w,\mathcal{M})$ form a partition of $\mathcal{G}(m,\mathcal{M})$. In other words, we have \[ \mathcal{G}(m,\mathcal{M})=\bigcup_w \mathcal{G}(m,w,\mathcal{M})\quad \text{ and }\quad g(m,\mathcal{M})=\sum_w g(m,w,\mathcal{M}), \] where the union and the sum range over all Dyck words $w\in\mathcal{D}'_m$.
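The partition of $\mathcal{G}(m)$ by bases can be checked by brute force for small $m$. In this sketch (our code, with hypothetical names), we enumerate all $(2m-1)!!$ matchings and group them by base; the number of nonempty classes is the number of Dyck words:

```python
def matchings(m):
    # all perfect matchings on [2m], each a frozenset of edges (u, v), u < v
    def rec(verts):
        if not verts:
            yield frozenset()
            return
        v, rest = verts[0], verts[1:]
        for i, u in enumerate(rest):
            for M in rec(rest[:i] + rest[i + 1:]):
                yield M | {(v, u)}
    return list(rec(tuple(range(1, 2 * m + 1))))

def base(M, m):
    # b(M): 0 at l-vertices, 1 at r-vertices
    lverts = {e[0] for e in M}
    return tuple(0 if v in lverts else 1 for v in range(1, 2 * m + 1))

all_M = matchings(3)
assert len(all_M) == 15                       # g(3) = 5!! = 15
assert len({base(M, 3) for M in all_M}) == 5  # c_3 = 5 Dyck words
```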
If no confusion can arise, we will write $\mathcal{G}(m,M)$ instead of $\mathcal{G}(m,\{M\})$ and $\mathcal{G}(m,w,M)$ instead of $\mathcal{G}(m,w,\{M\})$.
There is an alternative way to encode ordered matchings, which uses transversals of Ferrers shapes. A \emph{Ferrers shape} is a left-justified array of cells, where the number of cells in a given row does not exceed the number of cells in the row directly above it. A \emph{transversal} of a Ferrers shape is a subset of its cells which intersects every row and every column exactly once. A matching $M$ on the vertex set $[2m]$ can be represented by a transversal of a Ferrers shape as follows: first, consider a Ferrers shape with $m$ rows and $m$ columns, in which the $i$-th row (counted from the bottom) has $k$ cells if and only if the $i$-th r-vertex of $M$ has $k$ l-vertices to the left of it. Next, we define a transversal of this shape: the transversal contains the cell in row $i$ and column $j$ if and only if the $i$-th r-vertex is connected to the $j$-th l-vertex by an edge of $M$. This correspondence establishes a bijection between transversals and matchings, where matchings of a given base correspond to the transversals of a given shape. In the context of pattern-avoiding transversals, the equivalence relation ${\,\cong\,}$ is known as shape-Wilf equivalence. Note that the permutational matchings correspond to transversals of square shapes, i.e., to permutation matrices.
The correspondence between matchings and transversals, as well as the (more general) correspondence between ordered graphs and nonnegative fillings of Ferrers shapes, has been pointed out by Krattenthaler~ \cite{kra} and de Mier~\cite{adm}. Using this correspondence, it becomes clear that some previous results on pattern-avoiding graphs \cite{chdd,stan,du,jour} are equivalent to results on pattern-avoiding fillings \cite{bwx,kra,adm,s,sw}.
The aim of this paper is to study the relative cardinalities of the sets $\mathcal{G}(m,F)$, with $F$ being a permutational matching with three edges. For this purpose, we introduce the following notation:
Let ${\,\preccurlyeq\,}$ be the quasiorder relation defined as follows: for two sets $\mathcal{M}$ and $\mathcal{M}'$ of matchings, we write $\mathcal{M}{\,\preccurlyeq\,}\mathcal{M}'$, if for each $m\in\mathbb{N}$ and each $w\in\mathcal{D}'_m$ we have $g(m,w,\mathcal{M})\le g(m,w,\mathcal{M}')$. Similarly, we write $\mathcal{M}{\,\cong\,} \mathcal{M}'$ if $\mathcal{M}{\,\preccurlyeq\,} \mathcal{M}'$ and $\mathcal{M}{\,\succcurlyeq\,} \mathcal{M}'$, and we write $\mathcal{M}{\,\prec\,} \mathcal{M}'$ if $\mathcal{M}{\,\preccurlyeq\,} \mathcal{M}'$ and $\mathcal{M}{\,\ncong\,} \mathcal{M}'$. As above, we omit the curly braces when the arguments of these relations are singleton sets.
Note that two permutations $\pi$ and $\sigma$ are Wilf-equivalent if and only if for every $m\in\mathbb{N}$ the equality $g(m,0^m1^m,M_\sigma)=g(m,0^m1^m,M_\pi)$ holds, where $0^m1^m$ is the Dyck word consisting of $m$ consecutive $0$-terms followed by $m$ consecutive $1$-terms. Thus, if $M_\pi{\,\cong\,} M_\sigma$, then $\pi$ and $\sigma$ are Wilf-equivalent; however, the converse does not hold in general: it is well known that all the permutations of order three are Wilf-equivalent, whereas the results of this paper imply that the permutational matchings of size three fall into three ${\,\cong\,}$-classes.
Combining the known results on Ferrers transversals and the known results on matchings, the full characterization of the ${\,\cong\,}$ and ${\,\prec\,}$ relations for patterns of size three has been obtained:
\[ M_{213}{\,\cong\,} M_{132} {\,\prec\,} M_{123}{\,\cong\,} M_{321} {\,\cong\,} M_{231}{\,\prec\,} M_{312}. \]
The equivalence $M_{213}{\,\cong\,} M_{132}$ follows from the results of Stankova and West~\cite{sw} obtained in the context of fillings of Ferrers shapes. In~\cite{jour}, the same equivalence is proved in the context of pattern-avoiding matchings as a corollary to the result presented in this paper as Theorem~\ref{thm-bij}. The equivalence of $M_{123}{\,\cong\,} M_{321}$ follows from the more general result $M_{12\dotsb k}{\,\cong\,} M_{k(k-1)\dotsb 1}$, which was proved by Chen et al.~\cite{chdd} in the context of matchings, and by Backelin et al.~\cite{bwx} in the context of transversals. Several generalizations of this result are obtained by Krattenthaler~\cite{kra} (see also~\cite{adm}). The equivalence $M_{321} {\,\cong\,} M_{231}$ follows from the general results of Backelin et al.~\cite{bwx}. In this paper, a different bijective argument is given (see Section~\ref{sec-231}). The relation $M_{132} {\,\prec\,} M_{123}$ is proved in Corollary~\ref{cor-123-132} of this paper, and a different argument is given in a forthcoming paper by Stankova~\cite{s}, which also contains the proof of $M_{231}{\,\prec\,} M_{312}$, completing the classification.
This paper is organized as follows: in Section~\ref{sec-132}, we prove Theorem~\ref{thm-bij}, which simultaneously implies $M_{213}{\,\cong\,} M_{132}$ and $M_{132} {\,\prec\,} M_{123}$. In Section~\ref{sec-231}, we present a bijective argument that implies $M_{321} {\,\cong\,} M_{231}$.
\section{The forbidden matchings $M_{132}$ and $M_{213}$}\label{sec-132}
Since the matching $M_{132}$ is the mirror image of the matching $M_{213}$, it is obvious that $g(m,M_{132})$ is equal to $g(m,M_{213})$ for each $m\in \mathbb{N}$. However, there seems to be no straightforward argument demonstrating the stronger fact that $M_{132}{\,\cong\,} M_{213}$.
For $k\ge3$, we define $C_k\in\mathcal{G}(k)$ to be the matching with edge set $E(C_k)=\bigl\{\{2i-1,2i+2\};\ 1\le i<k\bigr\}\cup\bigl\{\{2,2k-1\}\bigr\}$. Let $\mathcal{C}=\{C_k;\ k\ge 3\}$.
The goal of this section is to prove the following result: \begin{thm}\label{thm-bij} $\mathcal{C}{\,\cong\,} M_{132}$. \end{thm}
Since all the elements of $\mathcal{C}$ are symmetric upon mirror reflection, this also proves that $\mathcal{C}{\,\cong\,} M_{213}$ and $M_{132}{\,\cong\,} M_{213}$, see Corollaries~\ref{cor-calc} and~\ref{cor-sym} at the end of this section.
Throughout this section, we consider $m\in\mathbb{N}$ and $w\in\mathcal{D}'_m$ to be arbitrary but fixed, and we let $n=2m$. For the sake of brevity, we write $\mathcal{G}^M$ instead of $\mathcal{G}(m,w,M_{132})$ and $\mathcal{G}^C$ instead of $\mathcal{G}(m,w,\mathcal{C})$. For a matching $G\in\mathcal{G}(m)$ and an arbitrary integer $k\in[n]$, let $G[k]$ denote the subgraph of $G$ induced by the vertices in $[k]$. There are three types of vertices in $G[k]$: \begin{itemize} \item The r-vertices of $G$ belonging to $[k]$. Clearly, all these vertices have degree one in $G[k]$. \item The l-vertices of $G$ connected to some r-vertex belonging to $[k]$. These have degree one in $G[k]$ as well. \item The l-vertices of $G$ belonging to $[k]$ but not connected to an r-vertex belonging to $[k]$. These are the isolated vertices of $G[k]$, and we will refer to them as the \emph{stubs} of $G[k]$. \end{itemize}
Let $G$ be an arbitrary graph from $\mathcal{G}^{M}$. The sequence \[ G[1],\ G[2],\ G[3], \dotsc,\ G[n-1],\ G[n]=G \] will be called \emph{the construction} of $G$. It is convenient to view the construction of $G$ as a sequence of steps of an algorithm that produces the matching $G$ by adding one vertex in every step. Two graphs $G, G'$ from $\mathcal{G}^{M}$ may share an initial part of their construction; however, if $G[k]\neq G'[k]$ for some $k$, then obviously $G[j]\neq G'[j]$ for every $j\ge k$. It is natural to represent the set of all the constructions of graphs from $\mathcal{G}^{M}$ by \emph{the generating tree} of $\mathcal{G}^{M}$ (denoted by $\mathcal{T}^{M}$), defined by the following properties: \begin{itemize} \item The generating tree is a rooted tree with $n$ levels, where the root is the only node of level one, and all the leaves appear on level $n$. \item The nodes of the tree are exactly the elements of the following set: \[ \{G';\ \exists k\in[n], \exists G\in\mathcal{G}^{M}:\ G'=G[k]\} \] \item The children of a node $G'$ are exactly the elements of the following set: \[ \{G''; \exists k\in[n-1], \exists G\in\mathcal{G}^{M}:\ G'=G[k], G''=G[k+1]\} \] \end{itemize} It follows that the level of every node $G'$ of the tree $\mathcal{T}^M$ is equal to the number of vertices of $G'$. Also, the leaves of $\mathcal{T}^M$ are exactly the elements of $\mathcal{G}^{M}$, and the nodes of the path from the root to a leaf $G$ form the construction of $G$.
The generating tree of $\mathcal{G}^{C}$, denoted by $\mathcal{T}^{C}$, is defined in complete analogy with the tree $\mathcal{T}^M$. Our goal is to prove that the two trees are isomorphic, hence they have
the same number of leaves, i.e.,\ $|\mathcal{G}^{M}|=|\mathcal{G}^{C}|$.
We say that a graph $G'$ on the vertex set $[k]$ is \emph{consistent} with $w$, if $G'=G[k]$ for some matching $G\in \mathcal{G}(m)$ with base $w$.
\begin{lem}\label{lem-t1t2}\
\noindent 1. A graph $G'$ is a node of\/ $\mathcal{T}^{M}$ if and only if $G'$ satisfies these three conditions: \begin{itemize} \item[(a)] $G'$ is consistent with $w$. \item[(b)] $G'$ avoids $M_{132}$. \item[(c)] $G'$ does not contain a sequence of five vertices $x_1<x_2<\dotsb<x_5$ such that $x_2$ is a stub, while $\{x_1,x_4\}$ and $\{x_3,x_5\}$ are edges of $G'$. \end{itemize}
\noindent 2. A graph $H'$ is a node of\/ $\mathcal{T}^{C}$ if and only if $H'$ satisfies these three conditions: \begin{itemize} \item[(a)] $H'$ is consistent with $w$. \item[(b)] $H'$ avoids $\mathcal{C}$. \item[(c)] For every $p\ge 3$, $H'$ does not contain an induced subgraph isomorphic to $C_p[2p-1]$ (by an order preserving isomorphism). In other words, for every $p\ge 3$, $H'$ does not contain a sequence of\/ $2p-1$ vertices $x_1<x_2<\dotsb<x_{2p-1}$, where $x_{2p-3}$ is a stub, and the remaining $2p-2$ vertices induce the edges $\bigl\{\{x_{2i-1},x_{2i+2}\}, 1\le i\le p-2\bigr\}\cup\bigl\{\{x_2,x_{2p-1}\}\bigr\}$. \end{itemize} \end{lem} \begin{proof} We first prove the first part of the lemma. Let $G'$ be a node of $\mathcal{T}^{M}$. Clearly, $G'$ satisfies conditions (a) and (b) of the first part of the lemma. Assume that $G'$ fails to satisfy condition (c). Choose $G\in\mathcal{G}^{M}$ such that $G'=G[k]$ for some $k\in[n]$. Let $x_6$ denote the r-vertex of $G$ connected to $x_2$. Then $x_6>k$, because $x_2$ was a stub of $G'=G[k]$; this implies that $x_6>x_5$, so the six vertices $x_1<\dotsb<x_6$ induce a subgraph isomorphic to $M_{132}$, which is forbidden. This shows that the conditions (a), (b) and (c) are necessary.
To prove the converse, assume that $G'$ satisfies the three conditions, and let $V(G')=[k]$. We will extend $G'$ into a graph $G$ with base $w$, by adding the vertices $k+1, k+2,\dotsc, n$ one by one, and each time that we add a new r-vertex $i$, we connect $i$ with the smallest stub of the graph constructed in the previous steps. We claim that this algorithm yields a graph $G\in\mathcal{G}^{M}$. For contradiction, assume that this is not the case, and that there are six vertices $x_1<x_2<\dotsb<x_6$ inducing a copy of $M_{132}$. By condition (b), we know that these six vertices are not all contained in $G'$, which means that $x_6>k$. Also, by condition (c), we know that $x_5>k$. In the step of the above construction when we added the r-vertex $x_5$, both $x_2$ and $x_3$ were stubs. Since $x_5$ should have been connected to the smallest available stub, it could not have been connected to $x_3$, which contradicts the assumption that $x_1,\dotsc,x_6$ induce a copy of $M_{132}$. Thus $G\in\mathcal{G}^{M}$, as claimed.
The proof of the second part of the lemma follows along the same lines. To see that the conditions \textit{a}, \textit{b} and \textit{c} of the second part are sufficient, note that every graph satisfying these conditions can be extended into a graph $H\in\mathcal{G}^{C}$ by adding new vertices one by one, and connecting every new r-vertex to the biggest stub available when the r-vertex is added. We omit the details. \end{proof}
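The greedy completion used in both parts of this proof is easy to make concrete. The sketch below assumes an encoding the paper leaves implicit: the base is a 0/1 list `w` (0 for an l-vertex, 1 for an r-vertex, indexed from vertex 1), and a matching is a list of edges `(u, v)` with `u < v`. The flag `use_largest` switches between the smallest-stub rule of the first part and the biggest-stub rule of the second part.

```python
def complete_greedy(w, partial_edges, k, use_largest=False):
    """Extend a partial matching on the vertices 1..k (consistent with
    the base w) to a full matching with base w, joining every new
    r-vertex to the smallest (or, if use_largest, the biggest) stub."""
    n = len(w)
    edges = [tuple(sorted(e)) for e in partial_edges]
    matched = {v for e in edges for v in e}
    # stubs: l-vertices among 1..k that are not yet matched, kept sorted
    stubs = [v for v in range(1, k + 1) if w[v - 1] == 0 and v not in matched]
    for v in range(k + 1, n + 1):
        if w[v - 1] == 0:                     # l-vertex: a new stub
            stubs.append(v)
        else:                                 # r-vertex: pick a stub
            u = stubs.pop(-1 if use_largest else 0)
            edges.append((u, v))
    return sorted(edges)
```

For the base $001011$ with the single edge $\{1,3\}$ fixed, the smallest-stub rule yields the edges $\{1,3\},\{2,5\},\{4,6\}$, while the biggest-stub rule yields $\{1,3\},\{2,6\},\{4,5\}$.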
Let $G'$ be a node of $\mathcal{T}^{M}$. We define a binary relation ${\,\thicksim\,}$ on the set of stubs of $G'$ by the following rule: $u{\,\thicksim\,} v$ if and only if either $u=v$ or there is an edge $\{x,y\}\in E(G')$ such that $x<u<y$ and $x<v<y$.
\begin{figure}\label{fig-eqst}
\end{figure}
Let $H'$ be a node of $\mathcal{T}^{C}$. We define a binary relation ${\,\thickapprox\,}$ on the set of stubs of $H'$ by the following rule: $u{\,\thickapprox\,} v$ if and only if either $u=v$ or $H'$ contains a sequence of edges $e_1,e_2,\dotsc,e_p$, where $p\ge 1$, $e_i=\{x_i,y_i\}$, the edge $e_i$ crosses the edge $e_{i+1}$ for each $i<p$, and at the same time $x_1<u<y_1$ and $x_p<v<y_p$ (see Fig.~\ref{fig-eqst}; note that we may assume, without loss of generality, that the edge $e_i$ does not cross any other edge of the sequence except for $e_{i-1}$ and $e_{i+1}$, and that no two edges of the sequence are nested: indeed, a minimal sequence $(e_i)_{i=1}^p$ witnessing $u{\,\thickapprox\,} v$ clearly has these properties). We remark that the relation ${\,\thickapprox\,}$ has an intuitive geometric interpretation: assume that the vertices of $H'$ are represented by points on a horizontal line, ordered left-to-right according to the natural order, and assume that every edge of $H'$ is represented by a half-circle connecting the corresponding endpoints. Then $u{\,\thickapprox\,} v$ if and only if every vertical line separating $u$ from $v$ intersects at least one edge of $H'$.
Using condition \textit{c} of the first part of Lemma~\ref{lem-t1t2}, it can be easily verified that for every node $G'$ of the tree $\mathcal{T}^{M}$, the relation ${\,\thicksim\,}$ is an equivalence relation on the set of stubs of $G'$. Let $\bloks{x}{G'}$ denote the block of ${\,\thicksim\,}$ containing the stub $x$. Clearly, the blocks of ${\,\thicksim\,}$ are contiguous with respect to the ordering $<$ of the stubs of $G'$; i.e.,\ if $x<y<z$ are three stubs of $G'$, then $x{\,\thicksim\,} z$ implies $x{\,\thicksim\,} y{\,\thicksim\,} z$.
Similarly, ${\,\thickapprox\,}$ is an equivalence relation on the set of stubs of a node $H'$ of $\mathcal{T}^{C}$ (notice that, contrary to the case of ${\,\thicksim\,}$, the fact that ${\,\thickapprox\,}$ is an equivalence relation does not rely on the particular properties of the nodes of $\mathcal{T}^{C}$ described in Lemma~\ref{lem-t1t2}). The block of ${\,\thickapprox\,}$ containing $x$ will be denoted by $\blokt{x}{H'}$. These blocks are contiguous with respect to the ordering $<$ as well.
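Both partitions can be computed directly from their definitions. Since the blocks are contiguous, it suffices to decide, for each pair of consecutive stubs, whether they are related; the sketch below does this for ${\,\thicksim\,}$ (one covering edge) and for ${\,\thickapprox\,}$ (via the geometric reading: every vertical line between the two stubs must cross an edge). The edge-list encoding is an assumption of this sketch.

```python
def sim_blocks(stubs, edges):
    """Blocks of ~ : consecutive stubs t < s lie in one block iff some
    edge (x, y) satisfies x < t and s < y (then x < t, s < y covers both)."""
    blocks, prev = [], None
    for s in sorted(stubs):
        if prev is not None and any(x < prev and s < y for x, y in edges):
            blocks[-1].append(s)
        else:
            blocks.append([s])
        prev = s
    return blocks

def approx_blocks(stubs, edges):
    """Blocks of the coarser relation: consecutive stubs t < s lie in one
    block iff every vertical line between positions p and p+1 (t <= p < s)
    crosses an edge, i.e. some edge (x, y) has x <= p < y."""
    blocks, prev = [], None
    for s in sorted(stubs):
        if prev is not None and all(
                any(x <= p < y for x, y in edges) for p in range(prev, s)):
            blocks[-1].append(s)
        else:
            blocks.append([s])
        prev = s
    return blocks
```

For the crossing edges $\{1,4\},\{3,6\}$ with stubs $2,5,7$, the stubs $2$ and $5$ are ${\,\thickapprox\,}$-equivalent (through the crossing chain) but not ${\,\thicksim\,}$-equivalent.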
\begin{lem}\label{lem-child}\
\noindent 1. Let $G'$ be a node of level $k<n$ in the tree $\mathcal{T}^{M}$. The following holds: \begin{itemize} \item[(a)] Let $G''$ be an arbitrary child of\/ $G'$ in the tree $\mathcal{T}^{M}$. This implies that $V(G'')=[k+1]$. If the vertex $k+1$ is an l-vertex with respect to $w$, then $G''$ is the only child of\/ $G'$, and $k+1$ is a stub in $G''$. In this case, $\bloks{x}{G'}=\bloks{x}{G''}$ for every stub $x$ of $G'$, and $\bloks{k+1}{G''}=\{k+1\}$. On the other hand, if $k+1$ is an r-vertex, then in the graph $G''$ the vertex $k+1$ is connected to a vertex $x$ satisfying $x=\min\bloks{x}{G'}$. In this case, we have $\bloks{y}{G'}=\bloks{y}{G''}$ whenever $y<x$, and all the stubs $z>x$ of $G''$ form a single ${\,\thicksim\,}\!$-block in $G''$. \item[(b)] If $k+1$ is an r-vertex, then for every stub $x$ satisfying $x=\min\bloks{x}{G'}$, $G'$ has a child $G''$ which contains the edge $\{x,k+1\}$. This implies, together with part a, that if $k+1$ is an r-vertex, then the number of children of $G'$ in $\mathcal{T}^{M}$ is equal to the number of its ${\,\thicksim\,}\!$-blocks. \end{itemize}
\noindent 2. Let $H'$ be a node of level $k<n$ in the tree $\mathcal{T}^{C}$. The following holds: \begin{itemize} \item[(a)] Let $H''$ be an arbitrary child of $H'$ in the tree $\mathcal{T}^{C}$. This implies that $V(H'')=[k+1]$. If the vertex $k+1$ is an l-vertex with respect to $w$, then $H''$ is the only child of $H'$, and $k+1$ is a stub in $H''$. In this case, $\blokt{x}{H'}=\blokt{x}{H''}$ for every stub $x$ of $H'$, and $\blokt{k+1}{H''}=\{k+1\}$. On the other hand, if $k+1$ is an r-vertex, then in the graph $H''$ the vertex $k+1$ is connected to a vertex $x$ satisfying $x=\max\blokt{x}{H'}$. In this case, we have $\blokt{y}{H'}=\blokt{y}{H''}$ whenever $y<x$ and $y\not\in\blokt{x}{H'}$, and all the other stubs of $H''$ form a single ${\,\thickapprox\,}\!$-block in $H''$. \item[(b)] If $k+1$ is an r-vertex, then for every stub $x$ satisfying $x=\max\blokt{x}{H'}$, $H'$ has a child $H''$ which contains the edge $\{x,k+1\}$. This implies, together with part a, that if $k+1$ is an r-vertex, then the number of children of $H'$ in $\mathcal{T}^{C}$ is equal to the number of its ${\,\thickapprox\,}\!$-blocks. \end{itemize} \end{lem} \begin{proof} We first prove part $1a$. The case when $k+1$ is an l-vertex follows directly from the definition of ${\,\thicksim\,}$, so let us assume that $k+1$ is an r-vertex, and let $x$ be the vertex connected to $k+1$ in $G''$. Assume, for contradiction, that $x\not=\min\bloks{x}{G'}$, and choose $y\in\bloks{x}{G'}$ such that $y<x$. Since $y{\,\thicksim\,} x$, $G'$ must contain an edge $e=\{u,v\}$, with $u<y<x<v$. Then the five vertices $u, y, x, v, k+1$ form in $G''$ a configuration that was forbidden by Lemma~\ref{lem-t1t2}, part $1c$. This shows that $x=\min\bloks{x}{G'}$. The edge $\{x,k+1\}$ guarantees that all the stubs larger than $x$ are ${\,\thicksim\,}\!$-equivalent in $G''$, whereas the equivalence classes of the stubs smaller than $x$ are unaffected by this edge. This concludes the proof of part $1a$.
To prove part $1b$, it is sufficient to show that after choosing a vertex $x$ such that $x=\min\bloks{x}{G'}$ and adding the edge $\{x,k+1\}$ to $G'$, the resulting graph $G''$ satisfies the three conditions of the first part of Lemma~\ref{lem-t1t2}. Condition $1a$ of Lemma~\ref{lem-t1t2} is satisfied automatically. If $G''$ fails to satisfy condition $1b$, then $G'$ fails to satisfy one of the conditions $1b$ and $1c$ of that Lemma, which is impossible. Similarly, if $G''$ fails to satisfy condition $1c$, then either $G'$ fails to satisfy this condition as well, or $G'$ contains a stub $y$ with $y<x$ and $y{\,\thicksim\,} x$, contradicting our choice of $x$.
The proof of the second part of this lemma follows along the same lines as the proof of the first part, and we omit it. \end{proof}
We are now ready to state and prove the main theorem of this section.
\begin{thm}\label{thm-m132} The trees $\mathcal{T}^{M}$ and $\mathcal{T}^{C}$ are isomorphic. \end{thm} \begin{proof} Our aim is to construct a mapping $\phi$ with the following properties: \begin{itemize} \item The mapping $\phi$ maps the nodes of $\mathcal{T}^{M}$ to the nodes of $\mathcal{T}^{C}$, preserving their level. \item If $G'$ is a child of $G$ in $\mathcal{T}^M$, then $\phi(G')$ is a child of $\phi(G)$ in $\mathcal{T}^C$. Furthermore, if $G_1$ and $G_2$ are two distinct children of a node $G$ in $\mathcal{T}^{M}$, then $\phi(G_1)$ and $\phi(G_2)$ are two distinct children of $\phi(G)$ in $\mathcal{T}^{C}$. \item Let $G$ be an arbitrary node of $\mathcal{T}^{M}$, and let $H=\phi(G)$. Let $\bloks{x_1}{G}$, $\bloks{x_2}{G},\, \dotsc,$ $\bloks{x_s}{G}$ be the sequence of all the distinct blocks of ${\,\thicksim\,}$ in $G$, uniquely determined by the condition $x_1<x_2<\dotsb<x_s$. Similarly, let $\blokt{y_1}{H}$, $\blokt{y_2}{H},\, \dotsc,$ $\blokt{y_t}{H}$ be the sequence of all the distinct blocks of ${\,\thickapprox\,}$ in $H$, uniquely determined by the condition $y_1<y_2<\dotsb<y_t$. Then $s=t$ and
$|\bloks{x_i}{G}|=|\blokt{y_i}{H}|$ for each $i\in [s]$. \end{itemize} These conditions guarantee that $\phi$ is an isomorphism, because, thanks to Lemma~\ref{lem-child}, we know that the number of children of each node of $\mathcal{T}^{M}$ (or~$\mathcal{T}^{C}$) at level $k$ is either equal to one if $k+1$ is an l-vertex or equal to the number of blocks of its ${\,\thicksim\,}$ relation (or ${\,\thickapprox\,}$ relation, respectively) if $k+1$ is an r-vertex.
The mapping $\phi$ is defined recursively for nodes of increasing level. The root of $\mathcal{T}^{M}$ is mapped to the root of $\mathcal{T}^{C}$. Assume that the mapping $\phi$ has been determined for all the nodes of $\mathcal{T}^{M}$ of level at most $k$, for some $k\in[n-1]$, and that it does not violate the properties stated above. Let $G$ be a node of level $k$, let $H=\phi(G)$. If $k+1$ is an l-vertex, then $G$ has a unique child $G'$ and $H$ has a unique child $H'$. In this case, define $\phi(G')=H'$. Let us now assume that $k+1$ is an r-vertex. Let $\bloks{x_1}{G}$, $\bloks{x_2}{G},\, \dotsc,$ $\bloks{x_s}{G}$ be the sequence of all the distinct blocks of ${\,\thicksim\,}$ on $G$, with $x_1<x_2<\dotsb<x_s$. We may assume, without loss of generality, that $x_i=\min\bloks{x_i}{G}$ for $i\in [s]$. By assumption, ${\,\thickapprox\,}$ has $s$ blocks on $H$. Let $\blokt{y_1}{H}$, $\blokt{y_2}{H},\, \dotsc,$ $\blokt{y_s}{H}$ be the sequence of these blocks, where $y_1<y_2<\dotsb<y_s$ and $y_i=\max\blokt{y_i}{H}$ for every $i\in[s]$. By Lemma~\ref{lem-child}, the nodes $G$ and $H$ have $s$ children in $\mathcal{T}^{M}$ and $\mathcal{T}^{C}$. Let $G_i$ be the graph obtained from $G$ by addition of the edge $\{x_i,k+1\}$, let $H_i$ be the graph obtained from $H$ by addition of the edge $\{y_i,k+1\}$, for $i\in[s]$. By Lemma~\ref{lem-child}, the graphs $\{G_i;\ i\in[s]\}$ (or $\{H_i;\ i\in[s]\}$) are exactly the children of $G$ (or $H$, respectively). We define $\phi(G_i)=H_i$. The ${\,\thicksim\,}\!$-blocks of $G_i$ are exactly the sets $\bloks{x_1}{G}$, $\bloks{x_2}{G},\,\dotsc,$ $\bloks{x_{i-1}}{G}$ and $\left(\bigcup_{j\ge i} \bloks{x_j}{G}\right)\setminus \{x_i\}$, while the ${\,\thickapprox\,}\!$-blocks of $H_i$ are exactly the sets $\blokt{y_1}{H}$, $\blokt{y_2}{H},\,\dotsc,$ $\blokt{y_{i-1}}{H}$ and $\left(\bigcup_{j\ge i} \blokt{y_j}{H}\right)\setminus \{y_i\}$. 
This implies that the corresponding blocks of $G_i$ and $H_i$ have the same number and the same size, as required (note that if $i=s$ and $\bloks{x_s}{G}=\{x_s\}$, then the last block in the above list of ${\,\thicksim\,}\!$-blocks of $G_i$ is empty; however, this happens if and only if the last entry in the list of ${\,\thickapprox\,}\!$-blocks of $H_i$ is empty as well, so it does not violate the required properties of $\phi$).
This concludes the proof of the theorem. \end{proof}
\begin{cor}\label{cor-calc} $M_{132}{\,\cong\,}\mathcal{C}$. \end{cor} \begin{proof} Since $g(m,w,M_{132})$ is equal to the number of leaves of the tree $\mathcal{T}^{M}$, and $g(m,w,\mathcal{C})$ is equal to the number of leaves of the tree $\mathcal{T}^{C}$, this is a direct consequence of Theorem~\ref{thm-m132}. \end{proof}
\begin{cor}\label{cor-sym} $M_{132}{\,\cong\,} M_{213}$. \end{cor} \begin{proof} Let $\overline{w}$ denote the Dyck word defined by the relation $\overline{w}_i=0$ if and only if $w_{n-i+1}=1$. By inverting the linear order of the vertices of a matching $M$ with base $w$, we obtain a matching $\overline{M}$ with base $\overline{w}$. Since every matching $C_k\in\mathcal{C}$ satisfies $\overline{C_k}=C_k$, we know that a matching $M$ avoids $\mathcal{C}$ if and only if $\overline{M}$ avoids $\mathcal{C}$, and hence $g(m,w,\mathcal{C})=g(m,\overline{w}, \mathcal{C})$. Note that $\overline{M_{213}}=M_{132}$. This gives \[ g(m,w,M_{132})=g(m,w,\mathcal{C})=g(m,\overline{w},\mathcal{C})=g(m,\overline{w},M_{132})=g(m,w,M_{213}), \] as claimed. \end{proof}
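The order-reversing bijection in this proof is a one-liner once matchings are stored as edge lists. The concrete edge lists below follow the convention that $M_\sigma$ joins the $i$-th l-vertex to the $\sigma(i)$-th r-vertex (the convention fixed earlier in the paper); they are spelled out here only for illustration.

```python
def reverse_matching(edges, n):
    """The matching obtained by inverting the linear order of the n
    vertices: vertex i becomes n - i + 1."""
    return sorted(tuple(sorted((n - u + 1, n - v + 1))) for u, v in edges)

M_123 = [(1, 4), (2, 5), (3, 6)]   # = C_3, fixed by the reversal
M_132 = [(1, 4), (2, 6), (3, 5)]
M_213 = [(1, 5), (2, 4), (3, 6)]
```

Reversing $M_{213}$ indeed produces $M_{132}$, which is the symmetry used in the proof.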
\begin{cor}\label{cor-123-132} $M_{123}{\,\succ\,} M_{132}$. \end{cor} \begin{proof} Notice that $M_{123}=C_3\in \mathcal{C}$, and all the other graphs in $\mathcal{C}$ avoid $M_{123}$. This implies that $\mathcal{G}(m,w,\mathcal{C})\subseteq\mathcal{G}(m,w,M_{123})$, and for every $m\ge 4$ there is a $w\in\mathcal{D}'_m$ for which this is a proper inclusion, because $C_m$ clearly belongs to $\mathcal{G}(m,M_{123})\setminus\mathcal{G}(m,\mathcal{C})$. The claim follows, as a consequence of Corollary~\ref{cor-calc}. \end{proof}
\section{The matching $M_{231}$ and non-crossing pairs of Dyck paths}\label{sec-231}
In this section, we prove that $M_{231}{\,\cong\,} M_{321}$. This result is a special case of a more general theorem by Backelin et al.~\cite{bwx}. The new proof we present here provides a simple bijection between $M_{231}$-avoiding matchings of a fixed base $w$ and Dyck paths not exceeding the path representing the Dyck word $w$.
We first introduce some notation: recall that $\mathcal{D}(m)$ denotes the set of all Dyck paths of length $2m$. For two Dyck paths $P_1$ and $P_2$ of length $2m$, we say that $(P_1,P_2)$ is a \emph{non-crossing pair} if $P_2$ never reaches above $P_1$. Let $\mathcal{D}^2_m$ denote the set of all the non-crossing pairs of Dyck paths of length $2m$ and, for a Dyck word $w$ of length $2m$, let $\mathcal{D}^2_m(w)$ be the set of all the pairs $(P_1,P_2)\in\mathcal{D}^2_m$ whose first component $P_1$ is the path represented by the Dyck word $w$.
Recently, Chen, Deng and Du \cite{chdd} have proved that $M_{123}{\,\cong\,} M_{321}$ by a bijective construction involving Dyck paths. Their proof in fact shows that the cardinality of the set $\mathcal{D}^2_m(w)$ is equal to the number of matchings with base $w$ avoiding $M_{123}$, and at the same time equal to the number of matchings with base $w$ avoiding $M_{321}$. In our notation, this corresponds to the following claim: \[
\forall m\in\mathbb{N}\ \forall w\in\mathcal{D}'_m\ g(m,w,M_{123})=|\mathcal{D}^2_m(w)|=g(m,w,M_{321}). \]
In this section, we extend these equalities to the matching $M_{231}$ by
proving $|\mathcal{D}^2_m(w)|=g(m,w,M_{231})$. This shows that $M_{231}{\,\cong\,} M_{321}{\,\cong\,} M_{123}$.
We remark that the number of non-crossing pairs of Dyck paths of length $2m$ (and hence the number of $M$-avoiding matchings of size $m$, where $M$ is any of $M_{123},M_{321}$ or $M_{231}$) is equal to $c_{m+2}c_m-c_{m+1}^2$, where $c_m$ is the $m$-th Catalan number (see \cite{gobe}). The sequence $(c_{m+2}c_m-c_{m+1}^2;\ m\in\mathbb{N})$ is listed as the entry A005700 in the On-Line Encyclopedia of Integer Sequences \cite{oeis}. It is noteworthy that N. Bonichon \cite{boni} has given a completely different combinatorial interpretation of this sequence, in terms of realizers of plane triangulations.
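For small sizes this count is easy to confirm by brute force. The check below enumerates Dyck paths as height profiles and counts the pointwise-dominated pairs; the encoding is an assumption of this sketch.

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def dyck_profiles(m):
    """Height profiles (h_1, ..., h_{2m}) of all Dyck paths of length 2m."""
    out = []
    def grow(profile, h):
        i = len(profile)
        if i == 2 * m:
            out.append(tuple(profile))
            return
        for step in (1, -1):
            nh = h + step
            if 0 <= nh <= 2 * m - i - 1:   # can still return to height 0
                profile.append(nh)
                grow(profile, nh)
                profile.pop()
    grow([], 0)
    return out

def noncrossing_pairs(m):
    """Number of pairs (P1, P2) of Dyck paths with P2 never above P1."""
    ps = dyck_profiles(m)
    return sum(1 for p1 in ps for p2 in ps
               if all(b <= a for a, b in zip(p1, p2)))
```

For $m=1,\dotsc,4$ this returns $1, 3, 14, 84$, matching $c_{m+2}c_m-c_{m+1}^2$.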
Let us fix $m\in\mathbb{N}$ and $w\in\mathcal{D}'_m$. Let $M$ be a matching with base $w$. Let $1=x_1<\dotsb<x_m$ denote the sequence of all the l-vertices with respect to $w$, and $y_1<\dotsb<y_m=2m$ be the sequence of all the r-vertices with respect to $w$. Let $y_k$ be the neighbour of $x_1$ in $M$. An edge $\{x_i,y_j\}$ of $M$ is called \emph{short} if $y_j<y_k$, and it is called \emph{long} if $y_j>y_k$. Let $E_\text{S}(M)$ and $E_\text{L}(M)$ denote the set of the short edges and long edges, respectively, so that we have $E(M)=E_\text{S}(M)\cup E_\text{L}(M)\cup \bigl\{\{1,y_k\}\bigr\}$. An l-vertex $x_i$ is called \emph{short} (or \emph{long}) if it is incident with a short edge (or a long edge, respectively).
\begin{lem} Let $M$ be a matching with base $w$. $M$ avoids $M_{231}$ if and only if $M$ satisfies the following three conditions: \begin{itemize} \item The subgraph of $M$ induced by the short edges avoids $M_{231}$. \item The subgraph of $M$ induced by the long edges avoids $M_{231}$. \item Every short l-vertex precedes all the long l-vertices. \end{itemize} \end{lem} \begin{proof} The first two conditions are clearly necessary. The third condition is necessary as well, for if $M$ contained an edge $\{x_s,y_s\}\in E_\text{S}(M)$ and an edge $\{x_l,y_l\}\in E_\text{L}(M)$ with $x_s>x_l$, then the six vertices $1<x_l<x_s<y_s<y_k<y_l$ would induce a copy of $M_{231}$.
To see that the three conditions are sufficient, assume for contradiction that a graph $M$ satisfies these conditions but contains the forbidden configuration induced by some vertices $x_a<x_b<x_c<y_d<y_e<y_f$. We first note that $y_f>y_k$: indeed, it is impossible to have $y_f=y_k$, because $y_f$ is not connected to the leftmost vertex, and the inequality $y_f<y_k$ would imply that all the three edges of the forbidden configuration are short, which is ruled out by the first condition of the lemma. Thus, the edge $\{x_b,y_f\}$ is long, and hence $\{x_c,y_d\}$ is long as well, by the third condition. This implies that $y_d>y_k$, hence $y_e>y_k$ as well, and all the three edges of the configuration are long, contradicting the second condition of the lemma. \end{proof}
To construct the required bijection between $\mathcal{G}(m,w,M_{231})$ and $\mathcal{D}^2_m(w)$, we will use the intuitive notion of a ``tunnel'' in a Dyck path, which has been employed in bijective constructions involving permutations in, e.g., \cite{eliza1} or \cite{eliza2}. Let $P$ be a Dyck path. A \emph{tunnel} in $P$ is a horizontal segment $t$ whose left endpoint is the center of an up-step of $P$, its right endpoint is the center of a down-step of $P$, and no other point of $t$ belongs to $P$ (see Fig.~\ref{fig-tunnel}). A path $P\in\mathcal{D}_m$ has exactly $m$ tunnels. An up-step $u$ and a down-step $d$ of $P$ are called \emph{partners} if $P$ has a tunnel connecting $u$ and $d$. Let $u_1(P),\dotsc,u_m(P)$ denote the up-steps of $P$ and $d_1(P),\dotsc,d_m(P)$ denote the down-steps of $P$, in the left-to-right order in which they appear on $P$.
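The partner of an up-step is the first later down-step returning to the same height without the path dipping below it in between, so the tunnel pairing is exactly the usual parenthesis matching of up- and down-steps and can be computed with a stack. Paths are encoded here as strings of `u` and `d`, an assumption of this sketch.

```python
def partners(path):
    """Map the (1-based) index of each up-step of a Dyck path to the
    index of its partner down-step (the two ends of a tunnel)."""
    stack, pairs = [], {}
    ups = downs = 0
    for step in path:
        if step == 'u':
            ups += 1
            stack.append(ups)       # this up-step waits for its partner
        else:
            downs += 1
            pairs[stack.pop()] = downs
    return pairs
```

For instance, in the path `uududd` the first up-step is partnered with the third (last) down-step.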
\begin{figure}\label{fig-tunnel}
\end{figure}
Let $W\in\mathcal{D}_m$ be the Dyck path represented by the Dyck word $w$, let $(W,P)\in\mathcal{D}^2_m(w)$ be a non-crossing pair of Dyck paths. Let $M(W,P)$ be the unique matching with base $w$ satisfying the condition that $\{x_i,y_j\}$ is an edge of $M$ if and only if $u_i(P)$ is the partner of $d_j(P)$. To see that this definition is valid, we need to check that if $u_i(P)$ is partnered to $d_j(P)$ in a path $P$ not exceeding $W$, then $x_i<y_j$ in the matchings with base $w$. This is indeed the case, because the horizontal coordinate of $u_i(W)$ (which determines the position of $x_i$ in the matching) does not exceed the horizontal coordinate of $u_i(P)$, while the horizontal coordinate of $d_j(P)$ does not exceed the horizontal coordinate of $d_j(W)$ (note that a half-line starting in the center of $u_i(P)$ directed north-west intersects $W$ in the center of $u_i(W)$; similarly, a half-line starting in the center of $d_j(P)$ directed north-east hits the center of $d_j(W)$). See Fig.~\ref{fig-tuhr}.
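Under concrete encodings (bases as 0/1 strings, paths as strings of `u`/`d`, both assumptions of this sketch), the matching $M(W,P)$ can be read off with one stack pass over $P$:

```python
def matching_from_pair(w, p):
    """Build M(W, P): the edge {x_i, y_j} is present iff the i-th up-step
    of p is the partner of the j-th down-step of p; x_i (resp. y_j) is
    the position of the i-th l-vertex (resp. j-th r-vertex) of the base w."""
    lpos = [i + 1 for i, c in enumerate(w) if c == '0']
    rpos = [i + 1 for i, c in enumerate(w) if c == '1']
    stack, edges = [], []
    ups = downs = 0
    for step in p:
        if step == 'u':
            ups += 1
            stack.append(ups)
        else:
            downs += 1
            edges.append((lpos[stack.pop() - 1], rpos[downs - 1]))
    return sorted(edges)
```

For the base $0011$ the two admissible paths give the two matchings on four vertices: the path `udud` yields the crossing $\{1,3\},\{2,4\}$ and the path `uudd` the nesting $\{1,4\},\{2,3\}$.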
\begin{figure}\label{fig-tuhr}
\end{figure}
\begin{lem}\label{lem-gpm231} If $(W,P)\in\mathcal{D}^2_m(w)$, then $M(W,P)$ avoids $M_{231}$. \end{lem} \begin{proof} Choose an arbitrary $(W,P)\in\mathcal{D}^2_m(w)$ and assume, for contradiction, that there are six vertices $x_a<x_b<x_c<y_d<y_e<y_f$ in $M(W,P)$ which induce the forbidden configuration. Let $t_{cd}, t_{ae}$ and $t_{bf}$ be the tunnels corresponding to the three edges $x_cy_d$, $x_ay_e$ and $x_by_f$, respectively. Note that the projection of $t_{cd}$ onto some horizontal line $h$ is a subset of the projections of $t_{ae}$ and $t_{bf}$ onto $h$. Thus, the three tunnels lie on different horizontal lines and there is a vertical line intersecting all of them.
Since $a<b$, the tunnel $t_{ae}$ must lie below $t_{bf}$, otherwise the subpath of $P$ between $u_a(P)$ and $u_b(P)$ would intersect $t_{ae}$. On the other hand, $e<f$ implies that $t_{ae}$ lies above $t_{bf}$, a contradiction. \end{proof}
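Checking for a copy of $M_{231}$ amounts to scanning triples of edges for six vertices ordered as in this proof ($x_a<x_b<x_c<y_d<y_e<y_f$ carrying the edges $\{x_c,y_d\}$, $\{x_a,y_e\}$ and $\{x_b,y_f\}$). A brute-force sketch, with matchings encoded as lists of `(l, r)` pairs:

```python
from itertools import combinations

def contains_M231(edges):
    """Does a matching (list of (l, r) pairs, l < r) contain M_231?"""
    for trip in combinations(edges, 3):
        ls = sorted(l for l, r in trip)
        rs = sorted(r for l, r in trip)
        if ls[-1] < rs[0]:           # the six vertices split as l l l r r r
            match = dict(trip)
            # 0-based position of each l-vertex's partner among the r's;
            # (1, 2, 0) is the pattern 231
            if tuple(rs.index(match[l]) for l in ls) == (1, 2, 0):
                return True
    return False
```

The checker recognizes $M_{231}$ itself as well as embedded copies, and rejects, e.g., $M_{132}$.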
The aim of the next lemma is to show that the mapping $P\mapsto M(W,P)$ can be inverted.
\begin{lem}\label{lem-indm231} For every $M\in\mathcal{G}(m,w,M_{231})$ there is a unique Dyck path $P$ such that $(W,P)\in\mathcal{D}_m^2(w)$ and $M=M(W,P)$. \end{lem} \begin{proof} We proceed by induction on $m$. The case $m=1$ is clear, so let us assume that $m>1$ and that the lemma holds for every $m'<m$ and every $w'\in\mathcal{D}'_{m'}$. Let us choose an arbitrary $w\in\mathcal{D}'_m$, and an arbitrary $M\in\mathcal{G}(m,w,M_{231})$, and define $k$ such that $\{1,y_k\}$ is an edge of $M$. Let $M_\text{S}$ be the matching from $\mathcal{G}(k-1)$ that is isomorphic to the subgraph of $M$ induced by the short edges, let $M_\text{L}\in\mathcal{G}(m-k)$ be isomorphic to the subgraph induced by the long ones, let $w_\text{S}$ and $w_\text{L}$ be the respective bases of $M_\text{S}$ and $M_\text{L}$, and let $W_\text{S}$ and $W_\text{L}$ be the Dyck paths corresponding to $w_\text{S}$ and $w_\text{L}$. By induction, we know that $M_\text{S}=M(W_\text{S},P_\text{S})$ and $M_\text{L}=M(W_\text{L},P_\text{L})$ for some Dyck paths $P_\text{S}$ and $P_\text{L}$, where $P_\text{S}$ does not exceed $W_\text{S}$, and $P_\text{L}$ does not exceed $W_\text{L}$. Let $w_\text{X}$ be the Dyck word $0w_\text{S}1w_\text{L}$, and let $W_\text{X}$ be the corresponding Dyck path. Note that $W_\text{X}$ does not exceed $W$: assume that $W$ has $t$ up-steps occurring before the $k$-th down-step; then $W_\text{X}$ is obtained from $W$ by omitting the $t-k$ up-steps $u_{k+1}(W), u_{k+2}(W),\dotsc,u_t(W)$, and inserting $t-k$ new up-steps directly after the $k$-th down-step.
Let $P$ be the Dyck path obtained by concatenating the following pieces: \begin{itemize} \item An up-step from $(0,0)$ to $(1,1)$ \item A shifted copy of $P_\text{S}$ from $(1,1)$ to $(2k-1,1)$ \item A down-step from $(2k-1,1)$ to $(2k,0)$ \item A shifted copy of $P_\text{L}$ from $(2k,0)$ to $(2m,0)$ \end{itemize} Since $P$ clearly does not exceed $W_\text{X}$, it does not exceed $W$ either. Let us check that $M=M(W,P)$: \begin{itemize} \item The base of $M$ is equal to the base of $M(W,P)$. Thus, to see that $M$ is equal to $M(W,P)$, it suffices to check that $M_\text{S}$ and $M_\text{L}$ are isomorphic to the matchings induced by the short edges of $M(W,P)$ and the long edges of $M(W,P)$, respectively. \item The up-step $u_1(P)$ is clearly partnered to the down-step $d_k(P)$ (which connects $(2k-1,1)$ to $(2k,0)$). Thus, $M(W,P)$ contains the edge $\{x_1,y_k\}$. It follows that $M(W,P)$ has $k-1$ short edges, incident to the l-vertices $x_2,\dotsc,x_k$ and r-vertices $y_1,\dotsc,y_{k-1}$. \item The $k-1$ up-steps $u_2(P),\dotsc,u_k(P)$ as well as the $k-1$ down\nobreakdash-steps $d_1(P),\dotsc,d_{k-1}(P)$ all belong to the shifted copy of $P_\text{S}$. Since shifting does not affect the partnership relations, we see that the short edges of $M(W,P)$ form a matching isomorphic to $M_\text{S}=M(W_\text{S},P_\text{S})$. \item Similarly, the up-steps $u_{k+1}(P),$ $u_{k+2}(P),\dotsc,u_m(P)$ are partnered to the down-steps $d_{k+1}(P)$, $d_{k+2}(P),\dotsc,d_m(P)$ according to the tunnels of $P_\text{L}$. The corresponding long edges form a matching isomorphic to~$M_\text{L}$. \end{itemize} It follows that $M=M(W,P)$.
We now show that $P$ is determined uniquely: assume that $M=M(W,Q)$ for some $Q\in\mathcal{D}_m$. Since $\{1,y_k\}\in E(M)$, the path $Q$ must contain a down-step from $(2k-1,1)$ to $(2k,0)$, and this down-step must be the first down-step of $Q$ to reach the line $y=0$. This shows that the subpath of $Q$ between $(1,1)$ and $(2k-1,1)$ is a shifted copy of some Dyck path $Q_\text{S}\in\mathcal{D}_{k-1}$. The tunnels of this path must define a matching isomorphic to $M_\text{S}=M(W_\text{S},P_\text{S})$. By induction, we know that $P_\text{S}$ is determined uniquely, hence $P_\text{S}=Q_\text{S}$. By the same argument, we see that the subpath of $Q$ from $(2k,0)$ to $(2m,0)$ is a shifted copy of $P_\text{L}$. This shows that $P=Q$, and $P$ is unique, as claimed. \end{proof}
We are now ready to prove the following theorem:
\begin{thm}\label{thm-m231} For each $m\in\mathbb{N}$ and for each $w\in\mathcal{D}'_m$, $g(m,w,M_{231})$ is
equal to $|\mathcal{D}^2_m(w)|$. \end{thm} \begin{proof} Putting together Lemma~\ref{lem-gpm231} and Lemma~\ref{lem-indm231}, we infer that the function that maps a pair $(W,P)\in\mathcal{D}^2_m(w)$ to the matching $M(W,P)$ is a bijection between $\mathcal{D}^2_m(w)$ and $\mathcal{G}(m,w,M_{231})$. This gives the required result. \end{proof}
\begin{cor} $M_{231}{\,\cong\,} M_{321}$. \end{cor} \begin{proof} This is a direct consequence of Theorem~\ref{thm-m231} and the results of Chen, Deng and Du \cite{chdd}. \end{proof}
\section{Conclusion and Open Problems} We have introduced an equivalence relation ${\,\cong\,}$ on the set of permutational matchings, and we have determined how this equivalence partitions the set of permutational matchings of order 3. However, many natural questions remain unanswered. For instance, it would be nice to have an estimate of the number of blocks into which ${\,\cong\,}$ partitions the set $\mathcal{G}(m)$. Also, is it possible to characterize the minimal and the maximal elements of $(\mathcal{G}(m),{\,\preccurlyeq\,})$?
\end{document} |
\begin{document}
\title{Using the Landweber method to quantify source conditions -- a numerical study} \begin{abstract}Source conditions of the type $x^\dag \in\mathcal{R}((A^\ast A)^\mu)$ are an important tool in the theory of inverse problems to show convergence rates of regularized solutions as the noise in the data goes to zero. Unfortunately, it is rarely possible to verify these conditions in practice, rendering data-independent parameter choice rules infeasible. In this paper we show that such a source condition implies a Kurdyka-\L{}ojasiewicz inequality with certain parameters depending on $\mu$. While the converse implication is unclear from a theoretical point of view, we demonstrate how the Landweber method in combination with the Kurdyka-\L{}ojasiewicz inequality can be used to approximate $\mu$ and conduct several numerical experiments. We also show that the source condition implies a lower bound on the convergence rate which is of optimal order and observable without the knowledge of $\mu$.\end{abstract} \section{Introduction}\label{sec:intro} Let $A:X\rightarrow Y$ be a bounded linear operator between Hilbert spaces $X$ and $Y$. A vast class of mathematical problems boils down to the solution of an equation \begin{equation}\label{eq:problem} Ax=y. \end{equation}
In inverse problems, where $A$ is assumed to be ill-posed, i.e., $\mathcal{R}(A)\neq\overline{\mathcal{R}(A)}$, typically only noisy data $y^\delta$ with $\|y-y^\delta\|\leq\delta$, $\delta>0$ are available. In this case, one will not obtain the exact solution $x^\dag$ to \eqref{eq:problem}, but may only hope to find a reasonable approximation $x^\delta$ to $x^\dag$. One of the main questions in the theory of inverse problems is the one of rates of convergence, i.e., finding a function $\varphi:[0,\infty)\rightarrow[0,\infty)$, $\varphi(0)=0$, such that \begin{equation}\label{eq:crates}
\|x^\dag-x^\delta\|\leq \varphi(\delta)\quad \forall \,0<\delta\leq \delta_0. \end{equation} It is known that in general such a function $\varphi$ cannot be found as the convergence may be arbitrarily slow; see, e.g., \cite{EHN96}. Instead, the class of solutions has to be restricted in order to obtain a convergence rate. A classical but still widespread condition for this is to assume that \begin{equation}\label{eq:sc} x^\dag\in\mathcal{R}((A^\ast A)^\mu) \end{equation} for some $\mu>0$. While this condition is clear and applicable from the theoretical point of view, its practical usefulness is limited. For given $A$ and noisy data $y^\delta$ it is seldom possible to find the correct smoothness parameter $\mu$. Even when the exact solution $x^\dag$ is known it is often not an easy task to determine $\mu$.
We illustrate the importance of knowing $\mu$ with the example of classical Tikhonov regularization. Define \begin{equation}\label{eq:tikh}
T_\alpha^\delta(x):=\|Ax-y^\delta\|^2+\alpha\|x\|^2. \end{equation} The minimizer $x_\alpha^\delta:=\mathrm{argmin}_{x\in X} T_\alpha^\delta(x)$ is called the Tikhonov-regularized solution to \eqref{eq:problem} under the noisy data $y^\delta$. The convergence of Tikhonov regularization has been studied extensively; see, for example, the monographs \cite{EHN96,Louis,TikhonovArsenin1977}. All of them contain a version of the following proposition. \begin{proposition}\label{thm:tikh} Let $x^\dag\in\mathcal{R}((A^\ast A)^\mu)$ for some $0<\mu\leq1$. Then the minimizers $x_\alpha^\delta$ of \eqref{eq:tikh} satisfy \begin{equation}\label{eq:rate_tikh}
\|x_\alpha^\delta-x^\dag\|\leq C\delta^{\frac{2\mu}{2\mu+1}} \end{equation} with $0<C<\infty$ provided the regularization parameter $\alpha$ is chosen via \begin{equation}\label{eq:apriori} c_1 \delta^{\frac{2}{2\mu+1}}\leq \alpha\leq c_2\delta^{\frac{2}{2\mu+1}} \end{equation} where $c_1\leq c_2$ are positive constants. \end{proposition}
Thus, $\mu$ not only essentially determines the convergence rate but is also crucial to obtain it in practice. We mention that the rate \eqref{eq:rate_tikh} can also be obtained via the discrepancy principle for the choice of $\alpha$; see, e.g., \cite{EHN96,Morozov}. The idea behind this method is to choose $\alpha$ such that $\delta \leq \|Ax_\alpha^\delta-y^\delta\|\leq \tau \delta$ for some $\tau>1$. While the discrepancy principle is applicable independently of the knowledge of $\mu$, it only yields the rate \eqref{eq:rate_tikh} for $0<\mu\leq\frac{1}{2}$. Therefore, if $\mu$ is unknown, one might apply the discrepancy principle in instances where it is no longer justified.
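The role of the a priori choice \eqref{eq:apriori} is easy to see in a toy computation. The sketch below uses a diagonal model operator with $\sigma_k=1/k$ and a source element $x^\dag_k=(\sigma_k^2)^\mu w_k$; all concrete choices here are illustrative assumptions, not taken from the paper, and serve only to recover the exponent $\frac{2\mu}{2\mu+1}$ empirically.

```python
import math

N, mu = 200, 0.5
sigma = [1.0 / k for k in range(1, N + 1)]
# source condition: x_dag_k = (sigma_k^2)^mu * w_k with w_k = 1/k
x_dag = [s ** (2 * mu) / k for k, s in enumerate(sigma, start=1)]
y = [s * x for s, x in zip(sigma, x_dag)]

def tikhonov_error(delta):
    """Error of the Tikhonov minimizer for the a priori choice
    alpha = delta^(2/(2*mu+1)) and a perturbation of norm delta."""
    alpha = delta ** (2.0 / (2 * mu + 1))
    y_delta = [yk + delta / math.sqrt(N) for yk in y]
    x_alpha = [s * yk / (s * s + alpha) for s, yk in zip(sigma, y_delta)]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_alpha, x_dag)))

e1, e2 = tikhonov_error(1e-1), tikhonov_error(1e-5)
observed = math.log(e1 / e2) / math.log(1e4)   # empirical exponent
expected = 2 * mu / (2 * mu + 1)               # = 0.5 for mu = 0.5
```

In this run `observed` comes out close to $0.5$; with $\mu$ unknown one only ever sees the observed exponent, which is precisely the situation addressed in this paper.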
\section{The Kurdyka-\L{}ojasiewicz inequality} A \L{}ojasiewicz-type inequality will be the main tool used for our method. It can be formulated in complete metric spaces and we will do so for the moment. We temporarily consider the abstract problem \[ f(x)\rightarrow \min_{x\in X} \] where $X$ is a complete metric space with metric $d_X(x,y)$ and $f:X\rightarrow \mathbb{R}\cup \{\infty\}$ is lower semicontinuous. We need some definitions.
We denote by \begin{equation} [t_1\leq f\leq t_2]:=\{x\in X: t_1\leq f(x)\leq t_2\} \end{equation} the level set of $f$ corresponding to the levels $t_1\leq t_2$. For $t_1=t_2=:t$ we write $[f=t]$, and analogously for the relations $\leq$ and $\geq$. For any $x\in X$ the distance of $x$ to a set $S\subset X$ is denoted by \begin{equation} \mathrm{dist}(x,S):=\inf_{y\in S} d_X(x,y). \end{equation} With this we recall the Hausdorff distance between sets, \begin{equation} D(S_1,S_2):=\max\left\lbrace\sup_{x\in S_1}\mathrm{dist}(x,S_2),\sup_{x\in S_2} \mathrm{dist}(x,S_1) \right\rbrace. \end{equation} For $x\in X$ we define the strong slope as \begin{equation}
|\nabla f|(x):=\limsup_{y\rightarrow x}\frac{\max\{f(x)-f(y),0\}}{d_X(x,y)}. \end{equation}
The multi-valued mapping $F:X \rightrightarrows Y$ is called $k$-metrically regular if for all $(\bar x,\bar y)\in \mathrm{Graph}(F)$ there exist $\epsilon_1,\epsilon_2>0$ such that for all $(x,y)\in B(\bar x,\epsilon_1)\times B(\bar y,\epsilon_2)$ it holds that \[ \mathrm{dist}(x,F^{-1}(y))\leq k \,\mathrm{dist}(y,F(x)). \]
\begin{definition} We call the function $\varphi:[0,\bar r)\rightarrow \mathbb{R}$ a smooth index function if $\varphi\in C[0,\bar r)\cap C^1(0,\bar r)$, $\varphi(0)=0$ and $\varphi^\prime(x)>0$ for all $x\in(0,\bar r)$. We denote the set of all such $\varphi$ with $\mathcal{K}(0,\bar r)$. \end{definition} In the optimization literature such functions are called \textit{desingularizing functions}. We now present the main theorem our work is based on, taken from \cite{BDLM08}.
\begin{proposition}{\cite[Corollary 7]{BDLM08}}\label{thm:cor7}
Let $f:X\rightarrow \mathbb{R}$ be continuous, strongly slope-regular, i.e., $|\nabla f|(x)=|\nabla (-f)|(x)$, $(0,r_0)\subset f(X)$ and $\varphi\in \mathcal{K}(0, r_0)$. Then the following assertions are equivalent, and each of them implies that the level set $[f=0]$ is non-empty: \begin{itemize} \item $\varphi\circ f$ is $k$-metrically regular on $[0<f<r_0]\times (0,\varphi(r_0))$, \item for all $r_1,r_2\in (0,r_0)$ it holds that \begin{equation}\label{eq:rate_equation_c7}
D([f=r_1],[f=r_2])\leq k |\varphi(r_1)-\varphi(r_2)|, \end{equation} \item for all $x\in[0<f<r_0]$ it holds that \begin{equation}\label{eq:kl_c7}
|\nabla(\varphi\circ f)|(x)\geq \frac{1}{k}. \end{equation} \end{itemize} \end{proposition} While we added the first item for the sake of completeness, the second and third properties are the ones of interest to us. Eq.~\eqref{eq:rate_equation_c7} states that the (Hausdorff) distance between the level sets is governed by the index function $\varphi$ evaluated at the respective levels. Assume we have two values $x_1,x_2\in X$ at hand such that $f(x_1)=c_1\delta$, $f(x_2)=c_2\delta$, and that $\varphi$ is concave. Then, by definition, \[
d_X(x_1,x_2)\leq D([f=c_1\delta],[f=c_2\delta])\leq k|\varphi(c_1\delta)-\varphi(c_2\delta)| \] and assuming without loss of generality $c_1>c_2$ (and $c_1\geq 1$, so that concavity together with $\varphi(0)=0$ yields $\varphi(c_1\delta)\leq c_1\varphi(\delta)$) gives \[ d_X(x_1,x_2)\leq kc_1\varphi(\delta). \] This is a generalization of the convergence rates \eqref{eq:crates} and as such of high interest in the context of inverse problems.
The last item, \eqref{eq:kl_c7}, is called the Kurdyka-\L{}ojasiewicz inequality. The result by \L{}ojasiewicz \cite{loja,loja2}, who originally considered real analytic functions, states in principle that for a $C^1$-function $f$ there is an exponent $\theta$ such that the quantity \[
|f(x)-f(\bar x)|^\theta\|\nabla f\|^{-1} \] remains bounded around any point $\bar x$. Consider, for example, $f(x)=x^2$ and $\bar x=0$. Then one easily sees that \[
|f(x)-f(\bar x)|^\theta\|\nabla f\|^{-1}=\frac{x^{2\theta}}{2x}=\frac{1}{2} \] for $\theta=\frac{1}{2}$.
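This elementary computation is easy to check numerically; the following Python fragment (a plain sanity check, nothing more) confirms that the \L{}ojasiewicz quotient for $f(x)=x^2$, $\bar x=0$ equals $\frac{1}{2}$ for $\theta=\frac{1}{2}$, uniformly as $x$ approaches $\bar x$.

```python
theta = 0.5
for x in [1.0, 0.1, 1e-3, 1e-6]:
    # |f(x) - f(0)|^theta / |f'(x)| with f(x) = x^2, f'(x) = 2x
    quotient = abs(x ** 2) ** theta / abs(2 * x)
    assert abs(quotient - 0.5) < 1e-12
```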
Kurdyka generalized the inequality \eqref{eq:kl_c7} in \cite{Kurdyka}, and further variants appeared later; see for example \cite{Bolte,BolteEtAl07,BolteEtAl072}. A formulation useful in our context is the one from, e.g., \cite{Bolte}, \begin{equation*} \varphi^\prime(f(x)-f(\bar x))\mathrm{dist}(0,\partial f(x))\geq 1, \end{equation*} where $\varphi$ is as in Proposition \ref{thm:cor7}, $f$ is convex, $\bar x$ is a minimizer of $f$, and $x$ is sufficiently close to $\bar x$. The convexity of $f$ is not necessary, as one can pass to generalized subdifferentials, e.g., the Fr\'echet subdifferential or the limiting subdifferential. If $f$ is differentiable, then we can write \begin{equation}\label{eq:kl_grad}
\varphi^\prime(f(x)-f(\bar x))\|\nabla f(x)\|\geq 1. \end{equation} As the following one-dimensional example shows, the function $\varphi$ is, roughly speaking, such that $\varphi(f(x)-f(\bar x))$ is essentially a linear function.
Let $\varphi$ be a smooth index function and assume $f:\mathbb{R}\rightarrow\mathbb{R}$ to be positive, convex, with $f(0)=0$. If $\frac{d}{dx}\varphi(f(x))=\varphi^\prime(f(x))\frac{d}{dx}f(x)=\mathrm{const}$, then $\varphi(f(x))$ must be an affine linear function; compare the example of the quadratic function above. Note the similarity between this chain rule and the Kurdyka-\L{}ojasiewicz inequality \eqref{eq:kl_grad}. The \L{}ojasiewicz property generalizes this and essentially describes the flatness of the functional to be minimized around the solution: the flatter the functional, the slower the convergence.
\section{\L{}ojasiewicz inequality and source condition in Hilbert spaces} In Hilbert spaces we may rewrite Proposition \ref{thm:cor7} as follows. \begin{proposition}\label{thm:equivalentstuff} Let $f:X\rightarrow \mathbb{R}$ be continuous and $(0,r_0)\subset f(X)$. Let $\varphi$ be a smooth index function. Then the following statements are equivalent. \begin{itemize} \item for all $x_1,x_2\in [0<f<r_0]$ \begin{equation}\label{eq:rate_equation}
\|x_1-x_2\|\leq k |\varphi(f(x_1))-\varphi(f(x_2))| \end{equation} \item for all $x\in[0<f<r_0]$ \begin{equation}\label{eq:kl}
\varphi^\prime(f(x)-f(\bar x))\|\nabla f(x)\|\geq \frac{1}{k} \end{equation} where $\bar x$ minimizes $f$. \end{itemize} \end{proposition}
As mentioned before, setting $x_1:=x^\delta$ and $x_2:=x^\dag$, with $x^\delta$ and $x^\dag$ from Section \ref{sec:intro}, immediately yields convergence rates of the form \eqref{eq:crates}. It is therefore interesting how \eqref{eq:kl} relates to the source condition \eqref{eq:sc}. We will now show that the classical source condition $x^\dag\in\mathcal{R}((A^\ast A)^\mu)$ implies \eqref{eq:kl} for the squared residual $f(x)=\|Ax-y\|^2$.
Consider the noise-free least-squares minimization \begin{equation}\label{eq:leastsquares}
f(x):=\|Ax-y\|^2\rightarrow \min_x \end{equation} where $y=Ax^\dag$ and as before $x^\dag\in X$ is the exact solution to \eqref{eq:problem} in the Hilbert space $X$. Since \eqref{eq:leastsquares} is differentiable with $\nabla f=2A^\ast(Ax-y)$ we may verify the smooth KL-inequality \eqref{eq:kl}. In order to do so we assume, analogously to \eqref{eq:sc}, \begin{equation}\label{eq:sc2} x-x^\dag =(A^\ast A)^\mu w \end{equation} for some $\mu>0$ and $w$ in $X$. The only other ingredient is the well-known interpolation inequality \begin{equation}\label{eq:interpol}
\|(A^\ast A)^r x\|\leq \|(A^\ast A )^qx\|^{\frac{r}{q}}\|x\|^{1-\frac{r}{q}}, \end{equation} for all $q>r\geq 0$; see \cite{EHN96}. We are looking for a function $\varphi(t)=ct^{\kappa+1}$ with unknown exponent $-1<\kappa<0$. Then $\varphi^\prime(t)=c(1+\kappa)t^\kappa$. Thus \eqref{eq:kl} reads, with $\bar x=x^\dag$, \[
f(x)^\kappa\|\nabla f(x)\|\geq c \] or, equivalently (with a generic constant again denoted by $c$), \[
f(x)^{-\kappa}\leq c\|\nabla f(x)\|. \] With $f(x)$ as above and $[\nabla f](x)=2A^\ast(Ax-y)$ we have
\begin{align*}
\left(\|Ax-y\|^2\right)^{-\kappa}&=\left(\|A(x-x^\dag)\|\right)^{-2\kappa}\\
&=\left(\|A(A^\ast A)^\mu w\|\right)^{-2\kappa}\\
&=\left(\|(A^\ast A)^{\mu+\frac{1}{2}} w\|\right)^{-2\kappa}\\
&\leq\left( \|(A^\ast A)^{\mu+1} w\|^{\frac{\mu+\frac{1}{2}}{\mu+1}}\|w\|^{1-\frac{\mu+\frac{1}{2}}{\mu+1}}\right)^{-2\kappa}\\
&=\|(A^\ast A)^{\mu+1}w\|^{-2\kappa\frac{\mu+\frac{1}{2}}{\mu+1} }\|w\|^{-\frac{\kappa}{\mu+1}}\\
&=\|A^\ast(Ax-y) \|^{-2\kappa\frac{\mu+\frac{1}{2}}{\mu+1} }\|w\|^{-\frac{\kappa}{\mu+1}}. \end{align*} The correct $\kappa$ is the one satisfying \[ -2\kappa\frac{\mu+\frac{1}{2}}{\mu+1}=1, \] i.e., \[ \kappa=-\frac{\mu+1}{2\mu+1}. \] Thus our original function $\varphi(t)=c t^{\kappa+1}$ is given by \begin{equation}\label{eq:vsrphi_sc_mu} \varphi(t)=c t^{\frac{\mu}{2\mu+1}}. \end{equation}
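The exponent algebra above is easily checked in exact arithmetic. The following small Python verification (using rational numbers, purely as a sanity check of the derivation) confirms that $\kappa=-\frac{\mu+1}{2\mu+1}$ satisfies $-2\kappa\frac{\mu+\frac{1}{2}}{\mu+1}=1$ and that consequently $\kappa+1=\frac{\mu}{2\mu+1}$.

```python
from fractions import Fraction

for mu in [Fraction(1, 4), Fraction(1, 2), Fraction(1), Fraction(5)]:
    kappa = -(mu + 1) / (2 * mu + 1)
    # the condition fixing kappa: -2*kappa*(mu + 1/2)/(mu + 1) = 1
    assert -2 * kappa * (mu + Fraction(1, 2)) / (mu + 1) == 1
    # hence phi(t) = c*t^(kappa+1) = c*t^(mu/(2*mu+1))
    assert kappa + 1 == mu / (2 * mu + 1)
```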
In terms of \eqref{eq:rate_equation} this yields for $f(x)=\delta^2$ from \eqref{eq:leastsquares} the rate $\varphi(\delta^2)=c \delta^{\frac{2\mu}{2\mu+1}}$ which is well-known from the theory of optimal convergence rates. Note that we also obtain $\|w\|^{-\frac{\kappa}{\mu+1}}=\|w\|^{\frac{1}{2\mu+1}}$ which is the standard estimate for this term, see, e.g., \cite{EHN96,Louis}.
Unfortunately, at the moment it is not clear whether a \L{}ojasiewicz-inequality \eqref{eq:kl} implies that a source condition \eqref{eq:sc} holds. Nevertheless, we will see that often it is possible to find an approximation to $\mu$ in \eqref{eq:sc} by looking for a function fulfilling \eqref{eq:kl}.
\section{Observable lower bounds for the reconstruction error}
We have seen that the source condition \eqref{eq:sc} implies a \L{}ojasiewicz inequality with $\varphi$ from \eqref{eq:vsrphi_sc_mu}, which in turn implies a convergence rate \eqref{eq:rate_equation}, i.e., an upper bound on the distance between two elements in $X$ in terms of the distance of their images under $A$ in $Y$. In the following we show that the source condition also implies a lower rate, and that the upper and lower bounds on the convergence rate are of the same order, differing only by a constant factor. Moreover, this lower bound is easily computable in practice, and no information on $\mu$ is required to do so.
\begin{theorem}
Let $A:X\rightarrow Y$ be a compact linear operator between Hilbert spaces $X$ and $Y$. Let $x^\dag\in X$ fulfill a source condition \eqref{eq:sc} for some $\mu>0$ and let $\|Ax-Ax^\dag\|$ be sufficiently small. Then, whenever $\nabla (\|Ax-Ax^\dag\|^2)\neq 0$, it holds that \begin{equation}\label{eq:doublebound}
c_1 \varphi(\|Ax-Ax^\dag\|^2) \leq \|x-x^\dag\| \leq c_2 \varphi(\|Ax-Ax^\dag\|^2) \end{equation} with constants $0<c_1<c_2<\infty$ and $\varphi(t)=t^{\frac{\mu}{2\mu+1}}$ from \eqref{eq:vsrphi_sc_mu}. \end{theorem} \begin{proof} The upper bound follows from Proposition \ref{thm:equivalentstuff}. To obtain the lower bound, we note that \begin{equation}\label{eq:rate_uplow}
\|Ax-Ax^\dag\|^2=\langle x-x^\dag,A^\ast (Ax-Ax^\dag)\rangle\leq \|x-x^\dag\|\,\|A^\ast(Ax-Ax^\dag)\| \end{equation} and hence \begin{equation}\label{eq:lb}
\frac{\|Ax-Ax^\dag\|^2}{\|A^\ast(Ax-Ax^\dag)\|}\leq \|x-x^\dag\|. \end{equation} We now use the interpolation inequality \eqref{eq:interpol} in the denominator. Together with \eqref{eq:sc2} it follows that \begin{align*}
\|A^\ast(Ax-Ax^\dag)\|&=\|(A^\ast A)^{\mu+1} w\|\\
&\leq \|(A^\ast A)^{\frac{1}{2}+\mu}w\|^{\frac{\mu+1}{\mu+\frac{1}{2}}}\|w\|^{1-\frac{\mu+1}{\mu+\frac{1}{2}}}\\
&= \|Ax-Ax^\dag\|^{\frac{2\mu+2}{2\mu+1}}\|w\|^{-\frac{1}{2\mu+1}}. \end{align*} Inserting this into \eqref{eq:lb} yields \begin{align*}
\|x-x^\dag\|\geq \|Ax-Ax^\dag\|^{\frac{2\mu}{2\mu+1}}\|w\|^{\frac{1}{2\mu+1}}. \end{align*} \end{proof} Equation \eqref{eq:lb} is not limited to Hilbert spaces. In particular, when only $X$ is a Banach space with dual space $X^\ast$, one simply uses the duality product $\langle\cdot,\cdot\rangle_{X^\ast\times X}$ instead of the scalar product. Another advantage of \eqref{eq:lb} is that the lower bound is often easily observable, since it only contains the (squared) residual and the gradient. Since under the source condition \eqref{eq:sc2} also \[
c_1 \|Ax-Ax^\dag\|^{\frac{2\mu}{2\mu+1}}\leq \frac{\|Ax-Ax^\dag\|^2}{\|A^\ast(Ax-Ax^\dag)\|}\leq c_2\|Ax-Ax^\dag\|^{\frac{2\mu}{2\mu+1}}, \] it appears tempting to estimate $\mu$ also from the lower bound, but it is easily seen that this leads to the same regression problem as for the upper bound, which we discuss in the next section. However, we found the lower bound useful for checking the credibility of the estimated $\mu$. In particular, as we will show in the numerical examples, it allows one to detect when the noise takes over or when the discretization is insufficient.
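To illustrate how observable this lower bound is, here is a small Python sketch for a hypothetical diagonal operator $A=\mathrm{diag}(\sigma_i)$ with $x-x^\dag=(A^\ast A)^\mu w$ (all parameter values below are illustrative assumptions, not data from our experiments). By the Cauchy-Schwarz step \eqref{eq:rate_uplow}, the quotient of residual and gradient norms can never exceed the true error $\|x-x^\dag\|$.

```python
import math

n = 2000
beta, mu = 2.0, 0.375                       # illustrative choices
sigma = [(i + 1) ** (-beta) for i in range(n)]
w = [1.0 / (i + 1) for i in range(n)]       # some source element w
# error e = x - x† = (A^T A)^mu w, i.e. e_i = sigma_i^(2*mu) * w_i
e = [s ** (2 * mu) * wi for s, wi in zip(sigma, w)]

res  = math.sqrt(sum((s * ei) ** 2 for s, ei in zip(sigma, e)))      # ||A e||
grad = math.sqrt(sum((s * s * ei) ** 2 for s, ei in zip(sigma, e)))  # ||A^T A e||
err  = math.sqrt(sum(ei ** 2 for ei in e))                           # ||e||

lower = res ** 2 / grad   # observable: needs only residual and gradient norms
assert lower <= err       # the lower bound (eq:lb) holds
```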
\section{From inequality to algorithm}\label{sec:alg}
Many iterative methods for the minimization of a functional $f(x)$ use the gradient of $f$ at the current iterate to update the unknown. For $f(x)=\|Ax-y\|^2$, the gradient $A^\ast (Ax-y)$ includes the calculation of the residual $Ax-y$. Therefore, any such iterative method allows one to compute and store the values $\|Ax_k-y\|^2$ and $\|A^\ast(Ax_k-y)\|$ as the iteration proceeds. Since the \L{}ojasiewicz inequality relates both values through an unknown one-dimensional function, we may use the acquired values of gradients and residuals to estimate that function via a regression.
Our particular choice of iterative algorithm is the Landweber method, although the approach is not restricted to this choice. This is a well-known method to minimize \eqref{eq:leastsquares}; see, e.g., \cite{EHN96,Louis}. Starting from an initial guess $x_0$, it consists of the iteration \begin{equation}\label{eq:lw_free} x_{k+1}=x_k-\beta A^\ast (Ax_k-y) \end{equation}
for $k=0,1,\dots,K$ where $0<\beta< \frac{2}{\|A\|^2}$ and $K$ is the stopping index.
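For concreteness, a minimal Python sketch of the Landweber iteration \eqref{eq:lw_free} for a diagonal operator (a stand-in for a general $A$; the toy data at the end are our own illustrative assumptions), which also records the residual and gradient norms that the estimation procedure uses:

```python
import math

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def landweber_diag(sigma, y, beta, K):
    """x_{k+1} = x_k - beta * A^T (A x_k - y) for A = diag(sigma);
    returns the final iterate and the stored norms (||r_k||, ||A^T r_k||)."""
    x = [0.0] * len(sigma)
    history = []
    for _ in range(K):
        r = [s * xi - yi for s, xi, yi in zip(sigma, x, y)]  # A x - y
        g = [s * ri for s, ri in zip(sigma, r)]              # A^T (A x - y)
        history.append((norm(r), norm(g)))
        x = [xi - beta * gi for xi, gi in zip(x, g)]
    return x, history

# toy check: A = diag(1, 0.5), x† = (1, 1), y = A x†, beta < 2/||A||^2 = 2
x, hist = landweber_diag([1.0, 0.5], [1.0, 0.5], beta=1.0, K=200)
```

With exact data the iterates converge to $x^\dag$; the norms stored in \texttt{history} correspond to the vectors $R$ and $G$ introduced next.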
Let $\{x_k\}_{k=1,\dots,K}$ be the sequence of iterates obtained from the Landweber method. During the iterations we compute and store the norms of both residual and gradient, i.e., we acquire two vectors \[
R:=(\|Ax_1-y\|,\|Ax_2-y\|,\|Ax_3-y\|,\dots,\|Ax_K-y\|)^T \] and \[
G:=(\|A^\ast (Ax_1-y)\|,\|A^\ast (Ax_2-y)\|,\|A^\ast (Ax_3-y)\|,\dots,\|A^\ast(Ax_K-y)\|)^T. \] If the \L{}ojasiewicz property \eqref{eq:kl_grad} holds with $\bar x=x^\dag$, then there is a smooth index function $\varphi$ such that \begin{equation}\label{eq:kl_ls} \varphi^\prime(R_i^2)\cdot G_i\geq 1 \quad \forall i=1,\dots,K, \end{equation} where the constant $k$ appearing in \eqref{eq:kl_grad} has been moved into the function $\varphi^\prime$.
As we have seen previously, the source condition \eqref{eq:sc} implies $\varphi(t)=ct^{\frac{\mu}{2\mu+1}}$ and hence $\varphi^\prime(t)=ct^{-\frac{\mu+1}{2\mu+1}}$. Setting \[ \gamma:=\frac{2\mu+2}{2\mu+1}, \] we have \[ \varphi^\prime(R_i^2)=c(R_i^2)^{-\frac{\mu+1}{2\mu+1}}=cR_i^{-\gamma}\quad \forall i=1,\dots,K, \] and therefore we obtain \begin{equation}\label{eq:regression_pre} \frac{R_i^\gamma}{c}\leq G_i \quad \forall i=1,\dots,K, \end{equation} where all constants have been absorbed into the single constant $c$. In practice, of course, we know neither $\gamma$ nor the constant $c$, but we have many data pairs $(R_i,G_i)$, $i=1,\dots,K$. Therefore, we may use the data to estimate $\gamma$ and $c$ via a regression approach. To this end, we take the $\log$ of \eqref{eq:regression_pre}, which yields, replacing the inequality with an equality, \begin{equation}\label{eq:regression} \gamma\log(R_i)-\log(c)= \log(G_i) \quad \forall i=1,\dots,K. \end{equation} This is a linear regression problem for the variables $c_{l}:=\log(c)$ and $\gamma$. We write it in matrix form \begin{equation}\label{eq:reg_matrix} A_K\begin{pmatrix} \gamma \\ c_l\end{pmatrix}= b_K \end{equation} where \[ A_K:=\begin{pmatrix} \log(R_1) &-1\\
\log(R_2) &-1\\ \vdots & \vdots \\
\log(R_K) &-1 \end{pmatrix}, \quad b_K=\begin{pmatrix} \log(G_1)\\\log(G_2)\\ \vdots \\\log(G_K) \end{pmatrix}. \]
It is well known (see, e.g., \cite{shao}) that the best linear unbiased estimator, i.e., the one yielding the least error variance among linear unbiased estimators, for $[\gamma,c_l]^T$ in \eqref{eq:reg_matrix} is \[ \begin{pmatrix}
\hat\gamma\\\hat c_l \end{pmatrix}= (A_K^TA_K)^{-1}A_K^Tb_K. \] This immediately yields $\mu_K=\frac{2-\hat\gamma}{2\hat\gamma-2}$ and $c_K=\exp(\hat c_{l})$.
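The least-squares fit and the back-substitution $\mu_K=\frac{2-\hat\gamma}{2\hat\gamma-2}$ can be sketched in a few lines of Python, solving the $2\times 2$ normal equations directly (the synthetic data at the end are an illustrative assumption, not taken from any of our experiments):

```python
import math

def estimate_mu(R, G):
    """Fit log(G_i) = gamma*log(R_i) - c_l by least squares and
    return (mu, c) with mu = (2 - gamma)/(2*gamma - 2), c = exp(c_l)."""
    lr = [math.log(r) for r in R]
    lg = [math.log(g) for g in G]
    n = float(len(R))
    # normal equations A_K^T A_K [gamma, c_l]^T = A_K^T b_K
    # for the design matrix with rows [log(R_i), -1]
    s11, s12, s22 = sum(v * v for v in lr), -sum(lr), n
    b1, b2 = sum(u * v for u, v in zip(lr, lg)), -sum(lg)
    det = s11 * s22 - s12 * s12
    gamma = (s22 * b1 - s12 * b2) / det
    c_l = (s11 * b2 - s12 * b1) / det
    return (2 - gamma) / (2 * gamma - 2), math.exp(c_l)

# synthetic check: G_i = R_i^gamma / c with gamma = 1.5, c = 2, i.e. mu = 0.5
R = [0.5, 0.2, 0.1, 0.05]
G = [r ** 1.5 / 2.0 for r in R]
mu_hat, c_hat = estimate_mu(R, G)   # recovers mu = 0.5 and c = 2
```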
In our numerical experiments we found it more appropriate to perform a regression after each iterate instead of a single one at the end. We therefore define $\mu_k$ as the estimate for $\mu$ when only the first $k$ rows of $A_K$ and $b_K$ are used. Analogously, $c_k$ denotes the estimated constant after $k$ iterations. This allows one to track the smoothness parameter throughout the iteration. As shown in the next section, $\mu_k$ and $c_k$ are not constant but often vary as $k$ increases. However, we often find intervals $k_1,\dots,k_2$, $1\leq k_1<k_2\leq K$, where both $\mu_k$ and $c_k$ are approximately stable. In this region we observed that the parameters were estimated reasonably well. Since we do not have a fully automated system yet, we pick the final estimate for $\mu$ by hand.
\section{Numerical experiments}
Using the method suggested in Section \ref{sec:alg}, we now try to estimate the smoothness parameter $\mu$ for various linear ill-posed problems \eqref{eq:problem}. We give two types of plots. In the first type we plot the estimated $\mu$ against the Landweber iteration number; this should ideally yield a constant value for $\mu$. In the second type of plots, we give the lower bound from \eqref{eq:lb}. If available, we also plot the measured reconstruction error $\|x^\dag-x^\delta\|$, and for the diagonal examples we show the upper bound from \eqref{eq:rate_uplow}. All of these functions are plotted against the residual $\|Ax-y\|$ in a loglog-plot. This should yield a linear function whose constant slope is again a measure for $\mu$.
\subsection{Benchmark case: diagonal operators}\label{sec:diag} We start with the (mostly) academic example of a diagonal operator. While this is certainly not a particularly practical situation, it allows one to see the potential and limitations of our approach thanks to the direct control over the parameters and the low computational cost.
Let $A:\ell^2\rightarrow \ell^2$, $A: (x_1,x_2,x_3,\cdots)\mapsto (\sigma_1x_1,\sigma_2x_2,\sigma_3 x_3,\dots)$ for $\sigma_i\in \mathbb{R}$, $i\in\mathbb{N}$. To make the example even more academic, let $\sigma_i=i^{-\beta}$ for some $\beta>0$ and assume additionally that $x^\dag$ is given as $x_i^\dag=i^{-\eta}$, $i\in\mathbb{N}$, with $\eta>0$. Following \cite[Proposition 3.13]{EHN96} we have for a compact linear operator $A$ between Hilbert spaces $X$ and $Y$ with singular system $\{\sigma_i,u_i,v_i\}_{i=1}^\infty$ that \begin{equation}\label{eq:verif_smoothness}
x\in \mathcal{R}((A^\ast A)^\mu)\quad \Leftrightarrow \quad \sum_{i=1}^\infty \frac{|\langle Ax,u_i\rangle|^2}{\sigma_i^{2+4\mu}}<\infty. \end{equation}
One quickly verifies that $x\in\mathcal{R}((A^\ast A)^\mu)$ for $\mu\leq\frac{2\eta-1}{4\beta}-\epsilon$ and any small $\epsilon>0$. Numerically it appears infeasible to determine $\mu$ to such high precision. Therefore we report $\mu_{exact}=\frac{2\eta-1}{4\beta}$, although the source condition is just barely not satisfied for this value. Nevertheless, it gives an excellent fit for the diagonal operators, where we have full control over the smoothness properties.
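The borderline behavior at $\mu_{exact}$ is visible already from partial sums of the series in \eqref{eq:verif_smoothness}: with $\sigma_i=i^{-\beta}$ and $x_i^\dag=i^{-\eta}$, the $i$-th summand is $i^{4\beta\mu-2\eta}$. The following Python fragment (parameter values chosen purely for illustration) shows that the partial sums stay bounded slightly below $\mu_{exact}$ and grow like the harmonic series at $\mu_{exact}$.

```python
def partial_sum(eta, beta, mu, N):
    # partial sum of sum_i i^(4*beta*mu - 2*eta), cf. the smoothness criterion
    return sum(i ** (4 * beta * mu - 2 * eta) for i in range(1, N + 1))

eta, beta = 2.0, 2.0
mu_exact = (2 * eta - 1) / (4 * beta)                      # = 0.375
below = partial_sum(eta, beta, mu_exact - 0.1, 10 ** 5)    # exponent -1.8: converges
at = partial_sum(eta, beta, mu_exact, 10 ** 5)             # exponent -1: harmonic growth
assert below < 2.5 and at > 10.0
```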
We present figures for different combinations of $\eta$ and $\beta$. More precisely, we have $\eta=1$ and $\beta=2.5$ in Figure \ref{fig:diag1}, $\eta=2$ and $\beta=2$ in Figure \ref{fig:diag2}, $\eta=2$ and $\beta=1$ in Figure \ref{fig:diag3}, and $\beta=1.5$ and $\eta=3$ in Figure \ref{fig:diag4}. After a burn-in time of between 50 and 100 iterations, a fairly accurate estimate of $\mu$ is achieved in all the examples. Except for the case $\eta=2$ and $\beta=1$, $\mu$ even seems to converge to the exact value as the iterations increase. The ``exception'' is easily explained when looking at the reconstruction errors and the observed lower bound. We show the results for $\eta=2$ and $\beta=2$ in Figure \ref{fig:diag2LB}, where we have an almost perfect match between the reconstruction error, its lower bound, and its upper bound. For the seemingly critical case $\eta=2$ and $\beta=1$ the same plot, shown in Figure \ref{fig:diag3LB}, reveals the reason for the diverging estimate of $\mu$. In Figure \ref{fig:diag3LB}, the lower bound and the measured reconstruction error have two distinct phases with different slopes and a short transition phase. In the first phase, where the residuals are large, we have the slope corresponding to the good estimate of $\mu$ in Figure \ref{fig:diag3}. In the second phase, corresponding to the small residuals, the slope is approximately 1. What happens is that the discretization level is too low, so that at some point the algorithm sees the full matrix $A$ as a well-posed operator from $\mathbb{R}^n$ to $\mathbb{R}^n$ instead of as a discretization of the ill-posed problem $A$. Indeed, if we increase the discretization level, we obtain a plot with the correct asymptotics similar to Figures \ref{fig:diag1} and \ref{fig:diag3}.
Let us also check the algorithm when the source condition \eqref{eq:sc} is not strictly fulfilled or even violated. To this end we first let $\sigma_i=i^{-3/2}$ and $x_i=e^{-i}$, i.e., $x^\dag$ fulfills the source condition for all $\mu>0$. Over the iterations, $\mu$ starts large, approaches a value of about $\mu_{min}=0.5$ and then slowly starts to grow again; see Figure \ref{fig:expon}. The constant behaves similarly. The \L{}ojasiewicz inequality and therefore our algorithm seem to capture some kind of ``maximal'' smoothness which in this example is not adequately described with the source condition \eqref{eq:sc} for any $\mu>0$.
We now move the exponential to the operator, i.e., we consider the case $x_i=i^{-2}$ and $\sigma_i=e^{-i}$, such that $x^\dag$ fails to fulfill the source condition \eqref{eq:sc} for any $\mu>0$. The result, shown in Figure \ref{fig:expon_op}, reveals a new pattern. Both $\mu$ and $c$ describe a sinusoidal graph. In neither of the two examples did we find a section where $c$ and $\mu$ remain approximately stable. We conclude that the lack of such stability indicates the violation of the source condition \eqref{eq:sc}.
\begin{figure}
\caption{Demonstration of the method for $\eta=1$ and $\beta=2.5$. Dashed: true $\mu=0.1$, solid: estimated $\mu$, dash-dotted: estimated $c$; plotted over the iterations. From the stable section we estimate $\mu\approx0.1$.}
\label{fig:diag1}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for $\eta=2$ and $\beta=2$. Dashed: true $\mu=0.375$, solid: estimated $\mu$, dash-dotted: estimated $c$; plotted over the iterations. From the stable section we estimate $\mu\approx 0.375$.}
\label{fig:diag2}
\end{figure}
\begin{figure}
\caption{Reconstruction error for $\eta=2$ and $\beta=2$. Solid: measured error, dash-dotted: upper bound from the source condition, dashed: observed lower bound.}
\label{fig:diag2LB}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for $\eta=2$ and $\beta=1$. Dashed: true $\mu=0.75$, solid: estimated $\mu$, dash-dotted: estimated $c$; plotted over the iterations. From the stable section we estimate $\mu\approx0.7$. For higher iterations the estimate for $\mu$ goes up because our discretization level is insufficient.}
\label{fig:diag3}
\end{figure}
\begin{figure}
\caption{Reconstruction error for $\eta=2$ and $\beta=1$. Solid: measured error, dash-dotted: upper bound from the source condition, dashed: observed lower bound. The reconstruction error has two slopes. For larger residuals it corresponds to the correct $\mu$, for smaller residual the slope is approximately one due to insufficient discretization. }
\label{fig:diag3LB}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for $\eta=3$ and $\beta=1.5$. Dashed: true $\mu=0.833$, solid: estimated $\mu$, dash-dotted: estimated $c$; plotted over the iterations. From the stable section we estimate $\mu\approx 0.8$.}
\label{fig:diag4}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for $x_i=e^{-i}$ and $\beta=1.5$. Solid: estimated $\mu$, dash-dotted: estimated $c$; plotted over the iterations. Since both $\mu$ and $c$ are unstable we conclude that \eqref{eq:sc} is violated or not strictly fulfilled.}
\label{fig:expon}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for $\eta=2$ and $\sigma_i=e^{-i}$. Solid: estimated $\mu$, dash-dotted: estimated $c$; plotted over the iterations. Since both $\mu$ and $c$ are unstable we conclude that \eqref{eq:sc} is violated.}
\label{fig:expon_op}
\end{figure}
\begin{figure}
\caption{Reconstruction error for $\eta=2$ and $\sigma_i=e^{-i}$. Solid: measured error, dashed: observed lower bound. The wildly oscillating lower bound suggests that the source condition \eqref{eq:sc2} does not hold.}
\label{fig:expon_opLB}
\end{figure}
\subsection{Examples from Regularization Tools} In the previous section we saw that the estimation technique yields reasonable results for the ``easy'' diagonal problem. Now we move to more realistic examples. To this end we use the \textit{Regularization Tools} toolbox \cite{regtools} in MATLAB. This toolbox comes with 14 problems of type \eqref{eq:problem}. In order not to be biased by background information, we first apply our method to each of the problems. The algorithm yields seemingly good results for the problems \textit{deriv2} (see Figure \ref{fig:RT1}), \textit{gravity} (Figure \ref{fig:RT2}), and \textit{tomo} (Figure \ref{fig:RT3}). At this point we remark that the routine setting up the \textit{tomo} data has a stochastic component, so all our experiments on the \textit{tomo} problem have been conducted with the same data set, which we stored. Since we do not know $\mu$ exactly anymore, we must look for other ways of cross-checking it. We therefore try to verify the implication of Proposition \ref{thm:tikh}. Namely, if the estimate for $\mu$ is correct, the a priori parameter choice \eqref{eq:apriori} should yield the convergence rate \eqref{eq:rate_tikh}. In \eqref{eq:apriori} we simply set $c_1=c_2=1$; since we are only interested in the exponent of the convergence rate, this does not affect the result. We create noisy data contaminated with Gaussian error such that the relative error is between $10\%$ and $0.1\%$. Since we know the exact solution, we can compute the reconstruction errors, and a linear regression yields the observed convergence rate. The results are given in Table \ref{tab:res}. We achieve good agreement between the observed and the predicted convergence rate \eqref{eq:rate_tikh} for the \textit{deriv2} and \textit{tomo} problems. For \textit{gravity} we observe a mismatch between those rates.
We therefore compute the singular value decomposition of all problems in the \textit{Regularization Tools} and look for the largest $\mu$ that still satisfies \eqref{eq:verif_smoothness}. It turns out that only the problems \textit{tomo} and \textit{deriv2} yield a reasonable result, with $\mu \approx 0.1$ and $\mu\approx 0.2$, respectively. In particular, we did not find a $\mu$ for the gravity problem. Looking again at Figure \ref{fig:RT2} and comparing it to Figure \ref{fig:expon_op}, we see that in both figures $\mu$ and $c$ show sinusoidal graphs. This may hint at an exponentially ill-posed operator in the gravity problem.
In summary, we conclude that our method works and detects the two problems from the Regularization Tools for which a source condition \eqref{eq:sc} seems to hold.
\begin{figure}
\caption{Demonstration of the method for the problem \textit{deriv2}. Solid: estimated $\mu$, dotted: estimated $c$; plotted over the iterations. From the stable section we estimate $\mu\approx 0.13$.}
\label{fig:RT1}
\end{figure}
\begin{figure}
\caption{Reconstruction error for the problem \textit{deriv2}. Solid: measured error, dashed: observed lower bound. The part with constant slope corresponds to a good estimation of $\mu$ in Figure \ref{fig:RT1}.}
\label{fig:RT1LB}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for the problem \textit{gravity}. Solid: estimated $\mu$, dotted: estimated $c$; plotted over the iterations. From the stable section we estimate $\mu\approx 0.3$.}
\label{fig:RT2}
\end{figure}
\begin{figure}
\caption{Reconstruction error for the problem \textit{gravity}. Solid: measured error, dashed: observed lower bound. The lower bound is oscillating similar as in Figure \ref{fig:expon_opLB}, albeit with smaller amplitude.}
\label{fig:RT2LB}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for the problem \textit{tomo}. Solid: estimated $\mu$, dotted: estimated $c$; plotted over the iterations. From the stable section we estimate $\mu\approx 0.2$.}
\label{fig:RT3}
\end{figure}
\begin{figure}
\caption{Reconstruction error for the problem \textit{tomo}. Solid: measured error, dashed: observed lower bound. }
\label{fig:RT3LB}
\end{figure}
\begin{table}\centering \caption{Tikhonov-test for the problems \textit{deriv2}, \textit{tomo}, and \textit{gravity}. For the first two problems the predicted and observed convergence rates are close enough to suggest that $\mu$ is estimated reasonably well. The \textit{gravity} problem yields a misfit since it does not fulfill \eqref{eq:verif_smoothness} for any $\mu>0$.} \begin{tabular}{c c c c} problem & estimated $\mu$ & predicted rate & observed rate \\ \textit{deriv2} & $0.13$ & $\delta^{0.206}$ & $\delta^{0.193}$ \\ \textit{tomo} & $0.2$ & $\delta^{0.285}$ & $\delta^{0.3}$ \\ \textit{gravity} & $0.3$ & $\delta^{0.375}$ & $\delta^{0.527}$ \end{tabular}\label{tab:res} \end{table}
\subsection{Noisy data} So far our experiments for the estimation of $\mu$ involved only noise-free data. In practice, of course, noise is inevitable. We repeat the tests of Section \ref{sec:diag} for noisy data. Since the results are similar, we only discuss the case $\eta=2$ and $\beta=2$ as an example. We show the results for $1\%$ and $0.1\%$ relative noise in Figure \ref{fig:noise1} and Figure \ref{fig:noise2}, respectively. We observe that in both cases there is a part where $c$ and $\mu$ are approximately stable. The estimate for $\mu$ from this stable part is slightly smaller than the true value. As the number of iterations grows, the estimated $\mu$ drops even below zero, while at the same point $c$ increases strongly. If we increase the noise level further, we obtain a smaller and smaller stable region for $\mu$. Nevertheless, at least for smaller noise levels the method still works. The pattern also holds for the examples from Regularization Tools. We provide a test of the \textit{deriv2}-problem with $0.1\%$ relative noise in Figure \ref{fig:noise_deriv}. For the \textit{tomo} problem, we still obtain a reasonable estimate for $1\%$ noise; see Figure \ref{fig:noise_tomo}. As with all noisy data sets, the estimate for $\mu$ starts to decrease rapidly after some point. Looking at the lower bound, e.g., in Figure \ref{fig:noise_tomo_LB} for the \textit{tomo} problem, we see that for smaller residuals the lower bound starts to increase. We conclude that the noise then dominates the KL inequality, and the estimates obtained for $\mu$ in the corresponding later iterations of the Landweber method can be disregarded.
\subsection{Real measurements} Motivated by this, we now turn to the last test case, where we apply our algorithm to real data. This means we have an unknown amount of noise and no exact solution with which to repeat the Tikhonov experiment. We use the tomographic X-ray data of a carved cheese, a lotus root, and a walnut, which are freely available at \url{http://www.fips.fi/dataset.php}; see also the documentations \cite{cheese}, \cite{lotus}, and \cite{nuts}. We use the datasets \textit{DataFull128x15.mat}, \textit{LotusData128.mat}, and \textit{Data82.mat}, respectively. Since the matrices $A$ are far too large for a full SVD, we cannot verify \eqref{eq:verif_smoothness} and have to rely solely on our new approach with the Landweber method, which is still easily computable. The results are shown in Figures \ref{fig:cheese}, \ref{fig:lotus}, and \ref{fig:nuts}. The three results look similar but vary in detail. In all figures we have a maximum of $\mu$ after around 25 iterations. After that, $\mu$ decreases while the constant $c$ increases. This is likely due to the noise, which takes over at some point; see the explanation at the end of the last section. While in particular the lotus problem yields an almost stable section for $\mu$, the walnut hardly yields a trustworthy region. This suggests that the solution of the walnut problem is furthest from fulfilling the source condition \eqref{eq:sc}. This can also be concluded from Figure \ref{fig:real_lower}, where we plot the measured lower bounds from \eqref{eq:lb}.
\begin{figure}
\caption{Demonstration of the method for $\eta=2$ and $\beta=2$ with $1\%$ relative data noise. Solid: true $\mu=0.375$, dashed: estimated $\mu$, dotted: estimated $c$; plotted over the iterations. In the approximately stable region we estimate $\mu\approx 0.35$.}
\label{fig:noise1}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for $\eta=2$ and $\beta=2$ with $0.1\%$ relative data noise. Solid: true $\mu=0.375$, dashed: estimated $\mu$, dotted: estimated $c$; plotted over the iterations. In the approximately stable region we estimate $\mu\approx 0.4$.}
\label{fig:noise2}
\end{figure} \begin{figure}
\caption{Demonstration of the method for the problem \textit{deriv2} with $0.1\%$ relative data noise. Dashed: estimated $\mu$, dotted: estimated $c$; plotted over the iterations. In the approximately stable region we may take $\mu\approx 0.13$.}
\label{fig:noise_deriv}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for the problem \textit{tomo} with $1\%$ relative data noise. Dashed: estimated $\mu$, dotted: estimated $c$; plotted over the iterations. In the saddlepoint-like region we may take $\mu\approx 0.22$. Afterwards $\mu$ decreases because the noise becomes dominating, see Figure \ref{fig:noise_tomo_LB}.}
\label{fig:noise_tomo}
\end{figure}
\begin{figure}
\caption{Reconstruction error for the problem \textit{tomo} with $1\%$ relative data noise. Solid: measured error, dashed: observed lower bound. The lower bound increasing for lower residuals means that the noise becomes dominant, hence $\mu$ is not estimated correctly in those regions.}
\label{fig:noise_tomo_LB}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for the cheese problem. We estimate $\mu\approx 0.9$, but have no confirmation for this other than our method.}
\label{fig:cheese}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for the lotus problem. We estimate $\mu\approx 0.22$, but have no confirmation for this other than our method.}
\label{fig:lotus}
\end{figure}
\begin{figure}
\caption{Demonstration of the method for the walnut problem. We estimate $\mu\approx 0.4$, but have no confirmation for this other than our method.}
\label{fig:nuts}
\end{figure}
\begin{figure}
\caption{Observed lower bound from \eqref{eq:lb} for the three real data problems, from left to right: cheese, lotus, and walnut. For higher residuals, the cheese and lotus data have an approximately linear slope, hinting that the estimation of $\mu$ for the first iterations might be correct. The walnut data behaves completely irregularly; hence the graph for the estimated $\mu$ in Figure \ref{fig:nuts} is the worst among all three problems. For smaller residuals, the lower bound goes up, indicating that the noise takes over. This corresponds to the falling estimates of $\mu$ for the larger iterations in the corresponding Figures \ref{fig:cheese}, \ref{fig:lotus}, and \ref{fig:nuts}.}
\label{fig:real_lower}
\end{figure}
\section*{Conclusion} We have shown that the classical source condition for linear inverse problems implies a \L{}ojasiewicz inequality which is equivalent to convergence rates. We used the relation between residual and gradient that is implied by the \L{}ojasiewicz inequality to estimate the smoothness parameter in the source condition using Landweber iteration. This works reasonably well when the source condition is indeed fulfilled; the obtained estimates for the smoothness parameter agree with related knowledge. We have shown that even noisy and real-life data can be used as long as the noise is not too large. While there are many open problems, we believe that exploiting the \L{}ojasiewicz property may lead to an improvement in the numerical treatment of inverse problems and consider this paper as a first step in this direction.
\section*{Acknowledgments} Research supported by Deutsche Forschungsgemeinschaft (DFG-grant HO 1454/10-1). The author thanks the anonymous referees for their comments that helped to improve this paper. The helpful comments and discussions with Bernd Hofmann (TU Chemnitz) and Oliver Ernst (TU Chemnitz) are gratefully acknowledged.
\end{document} |
\begin{document}
\title{Derived equivalences for a class of PI algebras}
\author{Quanshui Wu} \address{School of Mathematical Sciences, Fudan University, Shanghai 200433, China} \email{[email protected]}
\author{Ruipeng Zhu} \address{Department of Mathematics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China} \email{[email protected]}
\begin{abstract}
A description of tilting complexes is given for a class of PI algebras whose prime spectrum is canonically homeomorphic to the prime spectrum of their center. Some Sklyanin algebras are among the algebras considered. As an application, it is proved that any algebra derived equivalent to such an algebra is Morita equivalent to it. \end{abstract} \subjclass[2020]{
16D90,
16E35,
18E30 }
\keywords{Tilting complexes, derived equivalences, Morita equivalences, derived Picard groups}
\maketitle
\section*{Introduction}
Let $A$ be a ring, $P$ be a progenerator (that is, a finitely generated projective generator) of Mod-$A$, and $\{ e_1, \cdots, e_s \}$ be a set of central complete orthogonal idempotents of $A$. For any integers $n_1 > \cdots > n_s$, it is clear that $T:=\bigoplus\limits_{i=1}^{s}Pe_i[-n_i]$ is a tilting complex over $A$ and $\End_{\D^\mathrm{b}(A)}(T)$ is Morita equivalent to $A$. If $A$ is commutative, then any tilting complex over $A$ has the above form \cite[Theorem 2.6]{Ye99} or \cite[Theorem 2.11]{RouZi03}.
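To illustrate the construction in the simplest nontrivial case (an example of ours, added for orientation), let $A = A_1 \times A_2$ be a product of two rings, with central complete orthogonal idempotents $e_1 = (1,0)$ and $e_2 = (0,1)$, and take $P = A$, $n_1 = 1 > n_2 = 0$. Then
$$T = Ae_1[-1] \oplus Ae_2$$
is a tilting complex: its summands $Ae_1$ and $Ae_2$ generate $\mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ up to shifts and cones, and $\Hom_{\D^\mathrm{b}(A)}(T, T[n]) = 0$ for $n \neq 0$ because $\Hom_A(Ae_1, Ae_2) = \Hom_A(Ae_2, Ae_1) = 0$ and both summands are projective. Its endomorphism ring is $\End_A(Ae_1) \times \End_A(Ae_2) \cong A_1 \times A_2 = A$.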
Recall that a ring $A$ is called an {\it Azumaya algebra}, if $A$ is separable over its center $Z(A)$, i.e., $A$ is a projective $ (A\otimes_{Z(A)}A^{op})$-module. Any two Azumaya algebras which are derived equivalent are Morita equivalent \cite{A17}. If $A$ is an Azumaya algebra, then there exists a bijection between the ideals of $A$ and those of $Z(A)$ via $I \mapsto I \cap Z(A)$ \cite[Corollary II 3.7]{MI}. In this case, the prime spectrum of $A$ is canonically homeomorphic to the prime spectrum of $Z(A)$.
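For a concrete instance of this correspondence (a standard example, recalled here for convenience), let $R$ be a commutative ring. The matrix algebra $M_n(R)$ is an Azumaya algebra with center $R$, and every ideal of $M_n(R)$ is of the form $M_n(\mathfrak{a})$ for a unique ideal $\mathfrak{a}$ of $R$, so that under the identification of $R$ with the scalar matrices
$$M_n(\mathfrak{a}) \cap R = \mathfrak{a},$$
and the bijection $I \mapsto I \cap Z(A)$ is visible directly.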
In this note, we prove a more general result.
\begin{thm}\label{tilting complex for finite over center case}
Let $A$ be a ring, $R$ be a central subalgebra of $A$. Suppose that $A$ is finitely presented as $R$-module, and $\Spec(A) \to \Spec(R), \; \mathfrak{P} \mapsto \mathfrak{P} \cap R$ is a homeomorphism.
Then for any tilting complex $T$ over $A$, there exists a progenerator $P$ of $A$ and a set of complete orthogonal idempotents $e_1, \cdots, e_s$ in $R$ such that in the derived category $\mathcal{D}(A)$
$$T \cong \bigoplus\limits_{i=1}^{s}Pe_i[-n_i]$$ for some integers $s>0$ and $n_1 > n_2 > \cdots > n_s$. \end{thm}
\begin{cor}\label{derived equiv is Morita equiv for finite over center case}
Let $B := \End_{\D^\mathrm{b}(A)}(T)$. Then $B \cong \End_{A}(P)$, i.e., $B$ is Morita equivalent to $A$. \end{cor}
Theorem \ref{tilting complex for finite over center case} and Corollary \ref{derived equiv is Morita equiv for finite over center case} apply to some class of Sklyanin algebras, see Corollary \ref{derived-equiv-Skl-alg}. The derived Picard group of $A$ is described in Proposition \ref{D-Pic-group}, which generalizes a result of Negron \cite{Negron17} about Azumaya algebras.
\section{Preliminaries}
Let $A$ be a ring, $\D^\mathrm{b}(A)$ be the bounded derived category of the (right) $A$-module category. Let $T$ be a complex of $A$-modules, $\add(T)$ be the full subcategory of $\D^\mathrm{b}(A)$ consisting of objects that are direct summands of finite direct sums of copies of $T$, and $\End_{\D^\mathrm{b}(A)}(T)$ be the endomorphism ring of $T$. A complex is called {\it perfect}, if it is quasi-isomorphic to a bounded complex of finitely generated projective $A$-modules. $\mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ is the full subcategory of $\D^{\mathrm{b}}(A)$ consisting of perfect complexes.
We first recall the definition of tilting complexes \cite{Rickard89}, which generalizes the notion of progenerators. For the theory of tilting complexes we refer the reader to \cite{Ye20}.
In the following, $A$ and $B$ are associative rings. \begin{defn}
A complex $T \in \mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ is called a {\it tilting complex} over $A$ if
\begin{enumerate}
\item $\add(T)$ generates $\mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ as a triangulated category, and
\item $\Hom_{\D^\mathrm{b}(A)}(T, T[n]) = 0$ for each $n \neq 0$.
\end{enumerate} \end{defn}
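The basic example (recorded here for orientation) is $T = A$ itself, viewed as a complex concentrated in degree $0$: every object of $\mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ is built from finitely generated projective modules by shifts and mapping cones, so condition (1) holds, and condition (2) reduces to
$$\Hom_{\D^\mathrm{b}(A)}(A, A[n]) \cong \mathrm{H}^n(A) = 0 \quad \text{for } n \neq 0.$$
More generally, any progenerator of Mod-$A$ is a tilting complex, which is the sense in which tilting complexes generalize progenerators.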
The following Morita theorem for derived categories is due to Rickard.
\begin{thm}\cite[Theorem 6.4]{Rickard89}\label{derived-equivalence}
The following are equivalent.
\begin{enumerate}
\item $\mathcal{D}^{\mathrm{b}}(A)$ and $\mathcal{D}^{\mathrm{b}}(B)$ are equivalent as triangulated categories.
\item There is a tilting complex $T$ over $A$ such that $\End_{\D^\mathrm{b}(A)}(T) \cong B$.
\end{enumerate} \end{thm}
If $A$ and $B$ satisfy the equivalent conditions in Theorem \ref{derived-equivalence}, then $A$ is said to be {\it derived equivalent} to $B$.
The following theorem is also due to Rickard \cite[Theorem 1.6]{Ye99}, where $A$ and $B$ are two flat $k$-algebras over a commutative ring $k$.
\begin{thm}(Rickard)\label{two-side-tilting-complex}
Let $T$ be a complex in $\D^\mathrm{b}(A \otimes_k B^{\mathrm{op}})$. The following are equivalent:
(1) There exists a complex $T^{\vee} \in \D^\mathrm{b}(B \otimes_k A^{\mathrm{op}})$ and isomorphisms
$$T \Lt_{A} T^{\vee} \cong B \text{ in } \D^\mathrm{b}(B^e)\, \text{ and }\, T^{\vee} \Lt_{B} T \cong A \text{ in } \D^\mathrm{b}(A^e)$$ where $B^e =B \otimes_k B^{\mathrm{op}}$ and $A^e =A \otimes_k A^{\mathrm{op}}$.
(2) $T$ is a tilting complex over $A$, and the canonical morphism $B \to \Hom_{\D^\mathrm{b}(A)}(T,T)$ is an isomorphism in $\D^\mathrm{b}(B^e)$.
In this case, $T$ is called a {\it two-sided tilting complex} over $A$-$B$ relative to $k$. \end{thm}
We record here some results about tilting complexes for convenience.
\begin{lem}\cite[Theorem 2.1]{Rickard91}\label{flat tensor products}
Let $A$ be an $R$-algebra and $S$ be a flat $R$-algebra where $R$ is a commutative ring. If $T$ is a tilting complex over $A$, then $T \otimes_R S$ is a tilting complex over $A \otimes_R S$. \end{lem} The following lemma follows directly from Lemma \ref{flat tensor products}. \begin{lem}\cite[Proposition 2.6]{RouZi03}\label{decomposition of tilting complex by central idempotent}
Let $T$ be a tilting complex over $A$. If $0 \neq e \in A$ is a central idempotent, then $Te$ is a tilting complex over $Ae$. \end{lem}
The following lemma is easy to verify.
\begin{lem}\label{til-comp-lem}
Let $T$ be a tilting complex over $A$. The following conditions are equivalent.
(1) $T \cong \bigoplus\limits_{i=1}^{s}P_i[-n_i]$ in $\D^\mathrm{b}(A)$, for some $A$-module $P_i$ such that $\bigoplus\limits_{i=1}^{s} P_i$ is a progenerator of $A$.
(2) $T$ is homotopy equivalent to $\bigoplus\limits_{i=1}^{s}P_i[-n_i]$ for some $A$-module $P_i$.
\end{lem}
By using a proof similar to that of \cite[Theorem 2.3]{Ye99} for two-sided tilting complexes (see also \cite{Ye10} or \cite{Ye20}), the following result for tilting complexes holds.
\begin{lem}\label{equivalent condition for tilting complex being a progenerator}
Let $T$ be a tilting complex over $A$, and $T^{\vee}:= \Hom_A(T, A)$. Let $n = \max\{ i \mid \mathrm{H}^i(T) \neq 0 \}$ and $ m = \max\{ i \mid \mathrm{H}^i(T^{\vee}) \neq 0 \}$. If $\mathrm{H}^n(T) \otimes_A \mathrm{H}^m(T^{\vee}) \neq 0$, then
$m=-n$ and $T \cong P[m]$ in $\D^\mathrm{b}(A)$ for some progenerator $P$ of $A$. \end{lem} \begin{proof}
Without loss of generality, we assume that
$$T:= \cdots \longrightarrow 0 \longrightarrow T^{-m} \longrightarrow \cdots \longrightarrow T^n \longrightarrow 0 \longrightarrow \cdots.$$
Since $T$ is a perfect complex of $A$-modules, $$\RHom_A(T,T) \cong T \Lt_{A} T^{\vee}.$$
Consider the bounded spectral sequence of the double complex $T \otimes_A T^{\vee}$ (see \cite[Lemma 14.5.1]{Ye20}). Then
$$\mathrm{H}^n(T) \otimes_A \mathrm{H}^{m}(T^{\vee}) \cong \mathrm{H}^{n+m}(T \Lt_{A} T^{\vee}) \cong \mathrm{H}^{n+m}(\RHom_A(T,T)).$$
Since $T$ is a tilting complex, $\Hom_{\D^\mathrm{b}(A)}(T,T[i]) = 0$ for all $i \neq 0$. It follows that $n+m = 0$.
So $T \cong T^n[-n]$.
By Lemma \ref{til-comp-lem}, the conclusion holds. \end{proof}
A ring $A$ is called {\it local} if $A/J_A$ is a simple Artinian ring, where $J_A$ is the Jacobson radical of $A$. If both $M_A$ and ${}_AN$ are nonzero modules over a local ring $A$, then $M \otimes_A N \neq 0$ by \cite[Lemma 14.5.6]{Ye20}.
\begin{prop}\cite[Theorem 2.11]{RouZi03}\label{derived equiv is Morita equiv for local rings}
Let $A$ be a local ring and $T$ be a tilting complex over $A$. Then $T \cong P[-n]$ in $\D^\mathrm{b}(A)$ for some progenerator $P$ of $A$ and some $n \in \mathbb{Z}$. \end{prop} \begin{proof} It follows from Lemma \ref{equivalent condition for tilting complex being a progenerator}. \end{proof}
Let $\Spec A$ (resp., $\Max A$) denote the prime (resp., maximal) spectrum of a ring $A$. Let $R$ be a central subalgebra of $A$. Since the center of any prime ring is a domain, the quotient ring $R/(\mathfrak{P} \cap R)$ is a domain for any $\mathfrak{P} \in \Spec(A)$. Then there is a well-defined map $\pi: \Spec A \to \Spec R, \; \mathfrak{P} \mapsto \mathfrak{P} \cap R$. The following facts about the map $\pi$ are well known, see \cite{Bla73} for instance. For the convenience of readers, we give their proofs here.
\begin{lem}\label{Phi-lem}
Let $A$ be a ring, $R$ be a central subring of $A$. Suppose that $A$ is finitely generated as $R$-module.
\begin{enumerate}
\item For any primitive ideal $\mathfrak{P}$ of $A$, $\pi(\mathfrak{P})$ is a maximal ideal of $R$. In particular, $$\pi(\Max A) \subseteq \Max R.$$
\item If $\mathfrak{P} \in \Spec(A)$ and $\pi(\mathfrak{P})$ is a maximal ideal of $R$, then $\mathfrak{P}$ is a maximal ideal of $A$.
\item For any multiplicatively closed subset $\mathcal{S}$ of $R$, the prime ideals of $\mathcal{S}^{-1}A$ are in one-to-one correspondence ($\mathcal{S}^{-1} \mathfrak{P} \leftrightarrow \mathfrak{P}$) with the prime ideals of $A$ which do not meet $\mathcal{S}$.
\item $\pi: \Spec A \to \Spec R$ is surjective.
\item The Jacobson radical $J_R$ of $R$ is equal to $J_A \cap R$.
\item Let $\p$ be a prime ideal of $R$. Then $A_{\p}$ is a local ring if and only if there exists only one prime ideal $\mathfrak{P}$ of $A$ such that $\pi(\mathfrak{P}) = \p$.
\item If $R$ is a Jacobson ring (that is, $\forall$ $\p \in \Spec R$, $J_{R/\p} = 0$), then so is $A$.
\item If $\pi$ is injective, then $\pi(\mathcal{V}(I)) = \mathcal{V}(I \cap R)$. In this case, $\pi$ is a homeomorphism.
\end{enumerate} \end{lem} \begin{proof}
(1) Without loss of generality, we assume that $A$ is a primitive ring. Let $V$ be a faithful simple $A$-module. For any $0 \neq x \in R$, since $Vx$ is also a non-zero $A$-module, $Vx = V$. Because $V$ is a finitely generated faithful $R$-module, $x$ is invertible in $R$. It follows that $R$ is a field.
(2) Since the prime ring $A/\mathfrak{P}$ is finite-dimensional over the field $R/(\mathfrak{P}\cap R)$, the quotient ring $A/\mathfrak{P}$ is a simple ring, that is, $\mathfrak{P}$ is a maximal ideal of $A$.
(3) The proof is similar to the commutative case.
(4) Suppose $\p \in \Spec R$. It follows from (3) that there exists a prime ideal $\mathfrak{P}$ of $A$ such that $\mathfrak{P}A_{\p}$ is a maximal ideal of $A_{\p}$. By (1), $\mathfrak{P}A_{\p} \cap R_{\p}$ is a maximal ideal of $R_{\p}$. Hence $\p R_{\p} \subseteq \mathfrak{P}A_{\p}$. It follows that $\mathfrak{P} \cap R = \p$.
(5) By (1), $J_R \subseteq J_A \cap R$. On the other hand, by (4) and (2), $J_A \cap R \subseteq J_R$.
(6) It follows from (3) and (4) that $\Max A_{\p} = \{ \mathfrak{P}A_{\p} \mid \mathfrak{P} \cap R = \p, \, \mathfrak{P} \in \Spec A \}$. Then the conclusion in (6) follows.
(7) Without loss of generality, we may assume that $A$ is a prime ring. Set $\mathcal{S} = R\setminus \{0\}$.
Then $\mathcal{S}^{-1}A$ is also a prime ring which is finite-dimensional over the field $\mathcal{S}^{-1}R$. Hence $\mathcal{S}^{-1}A$ is an Artinian simple ring. Since $R$ is a Jacobson ring, it follows from (5) that $J_A \cap R = J_R = 0$. Hence $\mathcal{S}^{-1}J_{A} \neq \mathcal{S}^{-1}A$, and so $\mathcal{S}^{-1}J_{A} = 0$. Hence $J_A = 0$, as every element in $\mathcal{S}$ is regular in the prime ring $A$. Therefore $A$ is a Jacobson ring.
(8) Suppose $I$ is an ideal of $A$. Obviously, $\pi(\mathcal{V}(I)) \subseteq \mathcal{V}(I \cap R)$. On the other hand, let $\p \in \mathcal{V}(I \cap R) := \{ \p \in \Spec R \mid I \cap R \subseteq \p \}$. It follows from the assumption that $\pi$ is injective and (6) that $A_{\p}$ is a local ring. Hence there exists a prime ideal $\mathfrak{P}$ of $A$ such that $\mathfrak{P}A_{\p}$ is the only maximal ideal of $A_{\p}$ and $\p = \mathfrak{P} \cap R$. Since $I \cap (R \setminus \p) = \emptyset$, $IA_{\p} \subseteq \mathfrak{P} A_{\p}$ and $I \subseteq \mathfrak{P}$. Then $\p = \pi(\mathfrak{P}) \in \pi(\mathcal{V}(I))$. Hence $\pi(\mathcal{V}(I)) \supseteq \mathcal{V}(I \cap R)$. So $\pi$ is a closed map. Since $\pi$ is bijective, it is a homeomorphism. \end{proof}
By Lemma \ref{Phi-lem}, we have the following two results, which describe the condition when $\pi$ is a homeomorphism. \begin{lem}\label{spec-homeo}
$\pi$ is a homeomorphism if and only if $A_{\p}$ is a local ring for any $\p \in \Spec R$. \end{lem}
\begin{prop}\label{Max-Prime}
Suppose that $R$ is a Jacobson ring. If the restriction map $\pi|_{\Max A}$ is injective, then $\pi$ is a homeomorphism. \end{prop} \begin{proof}
If $\pi$ is not injective, then there exist two different prime ideals $\mathfrak{P}$ and $\mathfrak{P}'$ of $A$ such that $\mathfrak{P} \cap R = \mathfrak{P}' \cap R$. By Lemma \ref{Phi-lem} (7), $A$ is a Jacobson ring. Hence, there exists a maximal ideal $\mathfrak{M}$ of $A$ such that $\mathfrak{P} \subseteq \mathfrak{M}$ and $\mathfrak{P}' \nsubseteq \mathfrak{M}$, or the other way round. Without loss of generality, assume
$\mathfrak{P} \subseteq \mathfrak{M}$ and $\mathfrak{P}' \nsubseteq \mathfrak{M}$. Then $\pi(\mathfrak{M}) = \mathfrak{M} \cap R \supseteq \mathfrak{P} \cap R = \mathfrak{P}' \cap R$.
By applying Lemma \ref{Phi-lem} (2) and (4) to the ring $A/{\mathfrak{P}'}$ with the central subalgebra $R/{\mathfrak{P}' \cap R}$, there exists a maximal ideal $\mathfrak{M}'$ of $A$ such that $\mathfrak{P}' \subseteq \mathfrak{M}'$ and $\mathfrak{M}' \cap R = \mathfrak{M} \cap R$. This contradicts the hypothesis that $\pi|_{\Max(A)}$ is injective. So $\pi$ is a homeomorphism. \end{proof}
\section{Some derived equivalences imply Morita equivalences}
If $A$ is a local ring such that $A/{J_A}$ is not a skew-field and $A$ is a domain, then $A$ is not semiperfect. The localizations of the Sklyanin algebra considered in Lemma \ref{S-domain} at maximal ideals of its center are examples of this kind. The following results are needed in the proof of Theorem \ref{tilting complex for finite over center case}. \begin{lem}\label{lemma1}
Let $A$ be a local ring. Then there exists an idempotent element $e$ in $A$, such that any finitely generated projective right $A$-module is a direct sum of finitely many copies of $eA$. \end{lem} \begin{proof}
For any finitely generated projective $A$-modules $P$, $Q$, and a surjective $A$-module morphism $f: P/PJ_A \twoheadrightarrow Q/QJ_A$, there exists a surjective $A$-module morphism $\widetilde{f}$ such that the following diagram commutes.
$$\xymatrix{
P \ar@{-->>}[d]^{\widetilde{f}} \ar@{->>}[r] & P/PJ_A \ar@{->>}[d]^{f}\\
Q \ar@{->>}[r] & Q/QJ_A
}$$
It follows that $Q$ is a direct summand of $P$.
There exists a finitely generated projective $A$-module $Q\neq 0$ such that the length of $Q/QJ_A$ is as small as possible.
By the above fact and the division algorithm, any finitely generated projective $A$-module is a direct sum of finitely many copies of $Q$. In particular, $Q$ is a direct summand of the $A$-module $A$. Hence, there exists an idempotent element $e$ in $A$ such that $Q \cong eA$ as $A$-modules.
\end{proof}
\begin{prop}\label{a open set of fp mod in spec} Let $A$ be a ring with a central subalgebra $R$ such that $A$ is finitely presented as an $R$-module. Suppose that $A_{\p}$ is a local ring for all $\p \in \Spec R$.
If $M$ is a finitely presented $A$-module, then
$$U:= \{ \p \in \Spec R \mid M_{\p} \text{ is projective over } A_{\p} \}$$
is an open subset in $\Spec R$. \end{prop} \begin{proof}
Suppose $\p \in U$. Then $M_{\p}$ is a finitely generated projective $A_{\p}$-module. Since $A_{\p}$ is a local ring, by Lemma \ref{lemma1}, there exist $x \in A$ and $s \in R\setminus\p$, such that
$$xs^{-1} \in A_{\p} \text{ is an idempotent element and } M_{\p} \cong (xs^{-1}A_{\p})^{\oplus l}$$ for some $l \in \mathbb{N}$.
Hence there exists $t \in R \setminus \p$ such that $(x^2-xs)st = 0$, and $xs^{-1}$ is an idempotent element in $A_{st} = A[(st)^{-1}]$. So $xs^{-1}A_{st}$ is a projective $A_{st}$-module. There is an $A_{st}$-module morphism $g: (xs^{-1}A_{st})^{\oplus l} \to M_{st}$ such that $g_{\p}: (xs^{-1}A_{\p})^{\oplus l} \to M_{\p}$ is the prescribed isomorphism.
Since $M_A$ and $A_R$ are finitely presented modules, $M$ is a finitely presented $R$-module.
So $M_{st}$ is a finitely presented $R_{st}$-module.
By \cite[Proposition II.5.1.2]{Bourbaki},
there exists $u \in R \setminus \p$ such that $gu^{-1}: (xs^{-1}A_{stu})^{\oplus l} \to M_{stu}$ is an isomorphism.
It follows that $M_{stu}$ is a projective $A_{stu}$-module.
Hence $X_{stu} := \{\mathfrak{q} \in \Spec R \mid stu \notin \mathfrak{q} \}$ is contained in $U$. Obviously, $\p \in X_{stu}$.
So $U$ is an open subset of $\Spec R$. \end{proof}
Now we are ready to prove Theorem \ref{tilting complex for finite over center case}. The following proof is nothing but an adaptation of the arguments of Yekutieli \cite[Theorem 1.9]{Ye10} and Negron \cite[Proposition 3.3]{Negron17} to this situation.
\begin{proof}[Proof of Theorem \ref{tilting complex for finite over center case}]
By assumption, $\Spec A \to \Spec R, \; \mathfrak{P} \mapsto \mathfrak{P} \cap R$ is a homeomorphism. It follows from Lemma \ref{spec-homeo} that $A_{\p}$ is a local ring for any prime ideal $\p$ of $R$.
There exist integers $s>0$ and $n_1 > n_2 > \cdots > n_s$ such that $\mathrm{H}^{i}(T) = 0$ for all $i \neq n_1, \cdots, n_s$, as the tilting complex $T$ is bounded.
Obviously, $\mathrm{H}^{n_1}(T)$ is a finitely generated $R$-module. In fact, it is a finitely presented $R$-module by assumption. Hence the support set $$\Supp(\mathrm{H}^{n_1}(T)) = \mathcal{V}(\Ann_R(\mathrm{H}^{n_1}(T)))$$
is closed in $\Spec R$.
By Lemma \ref{flat tensor products}, $T_{\p} := T \otimes_R R_{\p}$ is a tilting complex over $A_{\p}$. If $\mathrm{H}^{n_1}(T)_{\p} \cong \mathrm{H}^{n_1}(T_{\p})$ is non-zero, then $\mathrm{H}^{n_1}(T)_{\p}$ is $A_{\p}$-projective by Proposition \ref{derived equiv is Morita equiv for local rings}. Hence
$$\Supp(\mathrm{H}^{n_1}(T)) = \{ \p \in \Spec R \mid \mathrm{H}^{n_1}(T)_{\p} \cong \mathrm{H}^{n_1}(T_{\p}) \text{ is a non-zero projective } A_{\p}\text{-module} \}.$$
Since $\mathrm{H}^{n_1}(T)$ is a finitely presented $A$-module, $\mathrm{H}^{n_1}(T)$ is a projective $A$-module, and $\Supp(\mathrm{H}^{n_1}(T))$ is an open set by Proposition \ref{a open set of fp mod in spec}.
By \cite[page 406, Theorem 7.3]{Ja}, there exists an idempotent $e_1 \in R$ such that $\Supp(\mathrm{H}^{n_1}(T)) = X_{e_1}:= \{\mathfrak{q} \in \Spec R \mid e_1 \notin \mathfrak{q} \}$. It follows that $(\mathrm{H}^{n_1}(T)/\mathrm{H}^{n_1}(T)e_1)_{\p} = 0$ for all $\p \in \Spec R$. Hence
$\mathrm{H}^{n_1}(T)(1-e_1) = 0$. Then
$$\mathrm{H}^{j}(T(1-e_1)) = \begin{cases}
0, & j \neq n_2,\cdots,n_s \\
\mathrm{H}^{n_i}(T)(1-e_1), & j = n_2,\cdots,n_s
\end{cases}.$$
By Lemma \ref{decomposition of tilting complex by central idempotent}, $T(1-e_1)$ is a tilting complex over $A(1-e_1)$ and $Te_1$ is a tilting complex over $Ae_1$. It follows from Proposition \ref{derived equiv is Morita equiv for local rings} that $Te_1$ is homotopy equivalent to $\mathrm{H}^{n_1}(T)[-n_1]$.
By induction on $s$, there is a complete set of orthogonal idempotents $e_1, \cdots, e_s$ in $R$ such that $Te_i$ is homotopy equivalent to $\mathrm{H}^{n_i}(T)[-n_i]$ for each $i$. Then $T = \bigoplus\limits_{i=1}^{s} Te_i = \bigoplus\limits_{i=1}^{s} \mathrm{H}^{n_i}(T)[-n_i]$. It follows from Lemma \ref{til-comp-lem} that $P:= \bigoplus\limits_{i=1}^{s} \mathrm{H}^{n_i}(T)$ is a progenerator of $A$ and $T \cong \bigoplus\limits_{i=1}^{s}Pe_i[-n_i]$ in $\D^\mathrm{b}(A)$. \end{proof}
\begin{cor}
If $\Spec R$ is connected, then $T \cong P[-n]$ in $\D^\mathrm{b}(A)$. \end{cor}
\begin{cor}\cite{Negron17}\label{tilting complex over Azumaya algebra}
Let $A$ be an Azumaya algebra. Then any tilting complex over $A$ has the form $P[n]$, where $P$ is a progenerator of $A$. If there exists a ring $B$ which is derived equivalent to $A$, then $B$ is Morita equivalent to $A$. In particular, $B$ is also an Azumaya algebra. \end{cor}
In the following we provide some non-Azumaya algebras which satisfy the conditions in Theorem \ref{tilting complex for finite over center case}.
\begin{defn}\cite{ATV90}
Let $k$ be an algebraically closed field of characteristic $0$. The {\it three-dimensional Sklyanin algebras} $S = S(a, b, c)$ are $k$-algebras generated by three noncommutating variables $x,y,z$ of degree $1$, subject to relations
$$axy+byx+cz^2 = ayz+bzy+cx^2 = azx+bxz+cy^2 = 0,$$
for $[a:b:c] \in \mathbb{P}^2$ such that $(3abc)^3 \neq (a^3+b^3+c^3)^3$.
The point scheme of the three-dimensional Sklyanin algebras $S$ is given by the elliptic curve
$$E:= \mathcal{V}( abc(X^3+Y^3+Z^3) - (a^3+b^3+c^3)XYZ ) \subset \mathbb{P}^2.$$
Let us choose the point $[1:-1:0]$ on $E$ as origin, and the automorphism $\sigma$ denotes the translation by the point $[a:b:c]$ in the group law on the elliptic curve $E$ with
$$\sigma[x:y:z] = [acy^2-b^2xz:bcx^2-a^2yz:abz^2-c^2xy].$$ \end{defn}
\begin{lem} \label{S-domain}\cite{ATV90, ATV91}
(1) $S$ is a Noetherian domain.
(2) $S$ is a finite module over its center if and only if the automorphism $\sigma$ has finite order. \end{lem}
Recently, Walton, Wang and Yakimov (\cite{WWY19}) endowed the three-dimensional Sklyanin algebra $S$, which is a finite module over its center $Z$, with a Poisson $Z$-order structure (in the sense of Brown-Gordon \cite{BrownGordon03}). By using the Poisson geometry of $S$, they analyzed all the irreducible representations of $S$. In particular, they proved the following result.
\begin{thm}\cite[Theorem 1.3.(4)]{WWY19}\label{spec-Skl-alg}
If the order of $\sigma$ is finite and coprime with $3$, then $S/{\m S}$ is a local ring for any $\m \in \Max Z$. \end{thm}
Here is a corollary following Theorem \ref{spec-Skl-alg} and Theorem \ref{tilting complex for finite over center case}. \begin{cor}\label{derived-equiv-Skl-alg}
If the order of $\sigma$ is finite and coprime with $3$, then every tilting complex over $S$ has the form $P[n]$, where $P$ is a progenerator of $S$ and $n \in \mathbb{Z}$. Furthermore, any ring which is derived equivalent to $S$, is Morita equivalent to $S$. \end{cor} \begin{proof}
It follows from the Artin-Tate lemma that $Z$ is a finitely generated commutative $k$-algebra and so it is a Jacobson ring. Obviously, $S$ is a finitely presented $Z$-module. By Theorem \ref{spec-Skl-alg}, the restriction map $\pi|_{\Max(S)}$ is injective. It follows from Proposition \ref{Max-Prime} that $\pi: \Spec S \to \Spec Z$ is a homeomorphism. Then, the conclusions follow from Theorem \ref{tilting complex for finite over center case} and Corollary \ref{derived equiv is Morita equiv for finite over center case}. \end{proof}
\section{Derived Picard groups}
Let $A$ be a projective $k$-algebra over a commutative ring $k$. The derived Picard group $\DPic(A)$ of an algebra $A$ was introduced by Yekutieli \cite{Ye99} and Rouquier-Zimmermann \cite{RouZi03} independently. In fact, $A$ can be assumed to be flat over $k$; see the paragraph after Definition 1.1 in \cite{Ye10}.
\begin{defn}\label{defn of DPic}
The {\it derived Picard group} of $A$ relative to $k$ is
$$\DPic_k(A):= \frac{\{ \text{two-sided tilting complexes over } A \text{-} A \text{ relative to } k \}}{\text{ isomorphisms }},$$
where the isomorphism is in $\D^\mathrm{b}(A\otimes_kA^{\mathrm{op}})$. The class of a tilting complex $T$ in $\DPic_k(A)$ is denoted by $[T]$. The group multiplication is induced by $- \Lt_A -$, and $[A]$ is the unit element. \end{defn}
Let $T$ be a two-sided tilting complex over $A$-$A$ relative to $k$. For any $z \in Z(A)$, there is an endomorphism of $T$ induced by the multiplication by $z$ on each component of $T$ (see \cite[Proposition 9.2]{Rickard89} or \cite[Proposition 6.3.2]{KonZimm98}). This defines a $k$-algebra automorphism of $Z(A)$, which is denoted by $f_T$. The assignment $\Phi: \DPic_k(A) \to \Aut_k(Z(A)), [T] \mapsto f_T$ is a group morphism, see the paragraph in front of Definition 7 in \cite{Zim96} or \cite[Lemma 5.1]{Negron17}.
Let us first recall some definitions in \cite[Section 3.1]{Negron17}. Suppose $n \in \Gamma(\Spec Z(A), \underline{\mathbb{Z}})$, which consists of continuous functions from $\Spec Z(A)$ to the discrete space $\Z$. For any $i \in \mathbb{Z}$, $n^{-1}(i)$ is both an open and closed subset of $\Spec Z(A)$. Since $\Spec Z(A)$ is quasi-compact, there exists a set of complete orthogonal idempotents $e_{n_1}, \cdots, e_{n_s}$ of $Z(A)$ such that $$n^{-1}(i) = \begin{cases} X_{e_i}, & i = n_j, \, j = 1, \cdots, s \\ \emptyset, & i \neq n_1, \cdots, n_s. \end{cases}$$ Set $e_i = 0$ for any $i \neq n_1, \cdots, n_s$, and $X_{e_i} = \emptyset$.
For any complex $T$ of $Z(A)$-modules, the shift $\Sigma^n T$ is defined by $$\bigoplus_{i \in \Z, \, n^{-1}(i) = X_{e_i}} Te_{i} [-i].$$
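For example (an immediate special case of the definition), if $\Spec Z(A)$ is connected, then $\Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \cong \mathbb{Z}$, every section $n$ is a constant function with some value $m \in \mathbb{Z}$, the only nonzero idempotent is $e_m = 1$, and the twisted shift reduces to an ordinary shift:
$$\Sigma^n T = T[-m].$$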
Let $\Pic_{Z(A)}(A)$ be the Picard group of $A$ over $Z(A)$. Clearly $\Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$ can be viewed as a subgroup of $\DPic_{Z(A)}(A)$ via $(n, [P]) \mapsto [\Sigma^n P]$. The following result is proved in \cite{Negron17} under the assumption that $A$ is an Azumaya algebra.
\begin{prop}\label{D-Pic-group}
Let $A$ be an $k$-algebra which is a finitely generated projective module over its center $Z(A)$. Suppose that
$\Spec A$ is canonically homeomorphic to $\Spec Z(A)$.
Then
(1) there is an exact sequence of groups
\begin{equation}\label{ex-seq-DPic}
1 \longrightarrow \DPic_{Z(A)}(A) \longrightarrow \DPic_{k}(A) \stackrel{\Phi}{\longrightarrow} \Aut_k(Z(A)).
\end{equation}
(2) $\DPic_{Z(A)}(A) = \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$. \end{prop} \begin{proof}
It is obvious that $\Ker \Phi = \DPic_{Z(A)}(A)$. Hence \eqref{ex-seq-DPic} is an exact sequence of groups. Next we prove that $\DPic_{Z(A)}(A) = \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$.
Let $T$ be a two-sided tilting complex over $A$ relative to $k$. By Theorem \ref{tilting complex for finite over center case}, there exists a global section $n \in \Gamma(\Spec Z(A), \underline{\mathbb{Z}})$ and an $A$-progenerator $P$, such that $T \cong \Sigma^nP$ in $\D^\mathrm{b}(A)$.
It follows from the fact that
$$A \cong \End_{\D^\mathrm{b}(A)}(T) \cong \End_{\D^\mathrm{b}(A)}(\Sigma^nP) = \End_{\D^\mathrm{b}(A)}(P) = \End_A(P)$$
that $P$ is an invertible $A$-$A$-bimodule with central $Z(A)$-action. By \cite[Proposition 2.3]{RouZi03}, there exists an automorphism $\sigma \in \Aut_k(A)$ such that $T \cong {^{\sigma} (\Sigma^nP)}$ in $\D^\mathrm{b}(A^e)$. If $\Phi([T]) = \id_{Z(A)}$, then $\sigma \in \Aut_{Z(A)}(A)$. Hence $T \cong \Sigma^n({^{\sigma}P})$ in $\D^\mathrm{b}(A^e)$. It follows that $\DPic_{Z(A)}(A) = \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$. \end{proof}
Given any algebra automorphism $\sigma$ of $Z(A)$, there is a $k$-algebra $B$ with $Z(B)=Z(A)$ and a $k$-algebra isomorphism $\widetilde{\sigma}: A \to B$, such that $\sigma=\widetilde{\sigma}|_{Z(A)}$. This determines uniquely an isomorphism class of $B$ as $Z(A)$-algebra
(see \cite[page 9]{F} for the definition). Then it induces an $\Aut_k(Z(A))$-action on the $Z(A)$-algebras which are isomorphic to $A$ as $k$-algebras. The image of $\Phi$ is just the stabilizer $\Aut_k(Z(A))_{[A]}$ of the derived equivalence class (relative to $Z(A)$) of $A$.
\begin{rk}
Suppose that $A$ is an Azumaya algebra.
(1) For any $k$-algebra $B$, $B$ is derived equivalent to $A$ if and only if $B$ is Morita equivalent to $A$. So $\Aut_k(Z(A))_{[A]}$ is also the stabilizer of the Brauer class of $A$ just as in \cite{Negron17}.
(2) Notice that $\DPic_{Z(A)}(A) \cong \DPic_{Z(A)}(Z(A)) \cong \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$ is an abelian group. Hence $\DPic_{k}(A)$ is a group extension of $\Aut_k(Z(A))_{[A]}$ by $\DPic_{Z(A)}(A)$ \cite[Theorem 1.1]{Negron17}. If $A$ is not an Azumaya algebra, $\DPic_{Z(A)}(A)$ may not be abelian. \end{rk}
\end{document} |
\begin{document}
\title{$\mathbb{Z}_3\times \mathbb{Z}_3$ crossed products} \author{Eliyahu Matzri}
\thanks{This work was supported by the U.S.-Israel Binational Science Foundation (grant no. 2010149).} \thanks{The author also thanks Daniel Krashen and UGA for hosting him while this work was done.}
\maketitle
\begin{abstract} Let $A$ be the generic abelian crossed product with respect to $\mathbb{Z}_3\times \mathbb{Z}_3$. In this note we show that $A$ is similar to the tensor product of 4 symbol algebras (3 of degree 9 and one of degree 3), and if $A$ is of exponent $3$, it is similar to the product of 31 symbol algebras of degree $3$. We then use \cite{RS} to prove that if $A$ is any algebra of degree $9$, then $A$ is similar to the product of $35840$ symbol algebras ($8960$ of degree $3$ and $26880$ of degree $9$), and if $A$ is of exponent $3$, it is similar to the product of $277760$ symbol algebras of degree $3$. We then show that the essential $3$-dimension of the class of $A$ is at most $6$. \end{abstract}
\section{Introduction} Throughout this note we let $F$ be a field containing all necessary roots of unity; a primitive $n$-th root of unity is denoted $\rho_n$. The well-known Merkurjev-Suslin theorem says that, assuming $F$ contains a primitive $n$-th root of $1$, there is an isomorphism $\psi : K_2(F)/nK_2(F)\longrightarrow \Br(F)_n$ sending the symbol $\{a,b\}$ to the symbol algebra $(a,b)_{n,F}$. In particular the $n$-torsion part of the Brauer group is generated by symbol algebras of degree $n$. This means every $A\in \Br(F)_n$ is similar (denoted $\sim$) to a tensor product of symbol algebras of degree $n$. However, the proof is not constructive, which raises the following questions. Let $A$ be an algebra of degree $n$ and exponent $m$. Can one explicitly write $A$ as a tensor product of degree $m$ symbol algebras? And what is the smallest number of factors needed to do so? This number is sometimes called the Merkurjev-Suslin number. These questions turn out to be quite hard in general and not much is known. Here is a short summary of some known results. \begin{enumerate}
\item Every degree $2$ algebra is isomorphic to a quaternion algebra.
\item Every degree $3$ algebra is cyclic thus if $\rho_3\in F$ it is isomorphic to a symbol algebra (Wedderburn \cite{W}).
\item Every degree $4$ algebra of exponent $2$ is isomorphic to a product of two quaternion algebras (Albert \cite{Al}).
\item Every degree $p^n$ symbol algebra of exponent $p^m$ is similar to the product of $p^{n-m}$ symbol algebras of degree $p^m$ (Tignol \cite{T1}).
\item Every degree $8$ algebra of exponent $2$ is similar to the product of four quaternion algebras (Tignol \cite{T2}).
\item Every abelian crossed product with respect to $\mathbb{Z}_n\times \mathbb{Z}_2$ is similar to the product of a symbol algebra of degree $2n$ and a quaternion algebra; in particular, due to Albert \cite{Al}, every degree $4$ algebra is similar to the product of a degree $4$ symbol algebra and a quaternion algebra (Lorenz, Rowen, Reichstein, Saltman \cite{LRRS}).
\item Every abelian crossed product with respect to $(\mathbb{Z}_2)^4$ of exponent $2$ is similar to the product of $18$ quaternion algebras (Sivatski \cite{SV}).
\item Every $p$-algebra of degree $p^n$ and exponent $p^m$ is similar to the product of $p^n-1$ cyclic algebras of degree $p^m$ (Florence \cite{MF}). \end{enumerate}
In this paper we prove theorems \ref{MT2} and \ref{MT3} stating:
\textit{Let $A$ be an abelian crossed product with respect to $\mathbb{Z}_3\times \mathbb{Z}_3$. Then \begin{enumerate}
\item $A$ is similar to the product of $4$ symbol algebras ($3$ of degree $9$ and one of degree $3$).
\item If $A$ is of exponent $3$ then $A$ is similar to the product of $31$ symbol algebras of degree $3$. \end{enumerate} }
We then use \cite{RS} to deduce the general case of an algebra of degree $9$ to get theorem \ref{MT4} stating:
\textit{ Let $A$ be an $F$-central simple algebra of degree $9$. Then \begin{enumerate}
\item $A$ is similar to the product of $35840$ symbol algebras, ($8960$ of degree $3$ and $26880$ of degree $9$).
\item If $A$ is of exponent $3$ then $A$ is similar to the product of $277760$ symbol algebras of degree $3$. \end{enumerate}}
\section{$\mathbb{Z}_p\times \mathbb{Z}_p$ abelian crossed products} Let $A$ be the generic abelian crossed product with respect to $G=\mathbb{Z}_p\times \mathbb{Z}_p$ over $F$, where $p$ is an odd prime. In the notation of \cite{AS} this means:
$A=(E,G,b_1,b_2,u)=E[z_1,z_2| z_i e z_i^{-1}=\sigma_i(e); z_1^p=b_1; z_2^p=b_2; z_2z_1=uz_1z_2; b_i\in E_i= E^{<\sigma_i>}; u\in E^{\times} \ s.t. \N_{E/F}(u)=1]$ where ${\operatorname{Gal}}(E/F)=<\sigma_1,\sigma_2>\cong G$.
Let $A$ be as above. Write $E=E_1E_2$ where $E_1=F[t_1| \ t_1^{p}=f_1\in F^{\times}]$ and $E_2=F[t_2| \ t_2^{p}=f_2\in F^{\times}]$ thus we have $z_i t_i z_i^{-1}=\sigma_i(t_i)=t_i$ and $z_1 t_2=\rho_pt_2z_1; \ z_2 t_1=\rho_pt_1z_2$. Since $b_i\in E_i$ we can write $b_1=c_0+c_1t_1+...+c_{p-1}t_1^{p-1}; \ b_2=a_0+a_1t_2+...+a_{p-1}t_2^{p-1}$ where $a_i,c_i\in F^{\times}$.
\begin{prop}\label{1} Define $v=e_1z_1+e_2z_2$ for $e_i\in E$. If $v\neq 0$, then $[F[v^p]:F]=p$. \end{prop}
\begin{proof} First we compute $vt_1t_2=(e_1z_1+e_2z_2)t_1t_2=e_1z_1t_1t_2+ e_2z_2t_1t_2=\rho_pt_1t_2e_1z_1+ \rho_pt_1t_2e_2z_2=\rho_pt_1t_2(e_1z_1+e_2z_2)=\rho_pt_1t_2v$. Thus $v^p$ commutes with $t_1t_2$ where $v$ does not, implying $[F[v]:F[v^p]]=p$. By the definition of $v$ we have $v\notin F$. Thus $\deg(A)=p^2$ imply $[F[v]:F]\in \{p,p^2\}$. If $[F[v]:F]=p$ we get that $A$ contains the sub-algebra generated by $t_1t_2, v$ which is a degree $p$ symbol over $F$ and by the double centralizer this will imply that $A$ is decomposable which is not true in the generic case. Thus $[F[v]:F]=p^2$ implying $[F[v^p]:F]=p$ and we are done. \end{proof}
The first step we take is to find a $v$ satisfying $\Tr(v^p)=0$. In order to achieve that we will tensor $A$ with an $F$-symbol of degree $p$.
Define $B=(E_1,\sigma_2, \frac{-c_0}{a_0})\sim (E,G,1,\frac{-c_0}{a_0},1)$. Now by \cite{AS} $ A\otimes B$ is similar to $C=(E,G,b_1,\frac{-c_0}{a_0}b_2,u)$. Abusing notation we write $z_1,z_2$ for the new ones in $C$.
\begin{prop} Defining $v=z_1+z_2$ in $C$ we have $\Tr(v^p)=0$. \end{prop}
\begin{proof}
First notice that $C=\sum_{i,j=0}^{p-1} Ez_1^iz_2^j$. Thus $C_0=\{d\in C | \ \Tr(d)=0\}=E_0 +\sum_{i,j=0;(i,j)\neq(0,0)}^{p-1} Ez_1^iz_2^j$ where $E_0=\sum_{i,j=0;(i,j)\neq(0,0)}^{p-1} Ft_1^it_2^j$ is the set of trace zero elements of $E$. Now computing we see $v^p=z_1^p+e_{p-1,1}z_1^{p-1}z_2+....+e_{1,p-1}z_2^{p-1}z_1+z_2^p=b_1+e_{p-1,1}z_1^{p-1}z_2+....+e_{1,p-1}z_2^{p-1}z_1+b_2$ where $e_{i,j}\in E$. Define $r=v^p-(b_1+b_2)$. Clearly $\Tr(r)=0$, since the powers of $z_1,z_2$ in all monomial appearing in $r$ are less then $p$ and at least one is greater than zero. Thus, $v^p=b_1+b_2+r=c_0+c_1t_1+...+c_{p-1}t_1^{p-1}+(-c_0+\frac{-c_0a_1}{a_0}t_2+...+\frac{-c_0a_{p-1}}{a_0}t_2^{p-1})+r\in C_0$, and we are done. \end{proof}
\begin{prop} $K\doteqdot F[t_1t_2,v^p]$ is a maximal subfield of $C$. \end{prop} \begin{proof} First, notice $C$ is a division algebra of degree $p^2$. To see this assume it is not, then it is similar to a degree $p$ algebra, $D$. Thus $A\otimes B$ is similar to $D$, which implies $A$ is isomorphic to $D\otimes B^{op}$. But then $A$ has exponent $p$ which is false. In the proof of \ref{1} we saw that $[v^p,t_1t_2]=0$ so we are left with showing $[K:F]=p^2$. Assuming $[K:F]=p$, we have $v^p\in F[t_1t_2]$. Let $\sigma$ be a generator of ${\operatorname{Gal}}(F[t_1t_2]/F)=<\sigma>$. Clearly $z_ix=\sigma(x)z_i$ for $i=1,2$ and $x\in F[t_1t_2]$, hence $vx=\sigma(x)v$, that is $\sigma(x)=vxv^{-1}$. In particular, $\sigma(v^p)=vv^pv^{-1}=v^p$, implying $v^p\in F$. But then $C$ contains the sub algebra $F[t_1t_2,v]$ which is an F-csa of degree $p$, thus by the double centralizer $C$ would decompose into two degree $p$ algebras. This will imply that $A$ has exponent $p$, which is false. \end{proof}
The next step is to make $K$ Galois. Let $T$ be the Galois closure of $F[v^p]$. Its Galois group is a subgroup of $S_p$ so has a cyclic $p$-Sylow subgroup, define $L$ to be the fixed subfield. Clearly $F[v^p]\otimes L$ is Galois, with group $\mathbb{Z}_p$. Thus in $C_L$ we have $K_L$ as a maximal Galois subfield with group $\mathbb{Z}_p\times \mathbb{Z}_p$. Now writing $C_L$ as an abelian crossed product we have $C_L=(K,G,b_1,b_2,u)$ where this time we have $\Tr(b_2)=0$. Thus we can write $K_L=L[t_1t_2,t_3 | (t_1t_2)^p=f_1f_2; t_3^p=l\in L],$ \ $b_1\in L[t_1t_2]$ and $b_2=l_1t_3+...+l_{p-1}t_3^{p-1}$.
Now we change things even more. Define $D=(f_1f_2,(-\frac{f_1f_2}{l_1})^pl^{-1})_{p^2,L}=(K_L,G,t_1t_2,-\frac{f_1f_2}{l_1}(t_3)^{-1},\rho_{p^2})$ and again by \cite{AS} we have $R\doteqdot C_L\otimes D=(K_L,G,t_1t_2b_1,-f_1f_2-\frac{f_1f_2l_2}{l_1}t_3-...-\frac{f_1f_2l_{p-2}}{l_1}t_3^{p-2},\rho_{p^2} u)$.
\section{Generic $\mathbb{Z}_3\times \mathbb{Z}_3$ abelian crossed products}
From now we specialize to $p=3$.
\begin{prop} $R$ from the end of the previous section is a symbol algebra of degree $9$. \end{prop}
\begin{proof} This proof is just as in \cite{LRRS}. Since we assume $\rho_9\in F$ it is enough to find a $9$-central element. Notice that in $R$ we have $z_2t_1t_2=\rho_3 t_1t_2z_2$; $z_2^3=-f_1f_2-\frac{f_1f_2l_2}{l_1}t_3$ and $(t_1t_2)^3=f_1f_2$. Thus defining $x=t_1t_2+z_2$ we get $x^3=(t_1t_2+z_2)^3=(t_1t_2)^3+z_2^3=-\frac{f_1f_2l_2}{l_1}t_3$ implying $x^9=-(\frac{f_1f_2l_2}{l_1})^3 l\in L$. Thus $R=(l_3,-(\frac{f_1f_2l_2}{l_1})^3 l)_{9,L}$ for some $l_3\in L$ and we are done. \end{proof}
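The cancellation $(t_1t_2+z_2)^3=(t_1t_2)^3+z_2^3$ used above is an instance of the standard fact that $\rho_3$-commuting elements satisfy a ``freshman's dream'': if $zu=\rho_3 uz$, then collecting the mixed terms of $(u+z)^3$ gives
\[
(u+z)^3=u^3+(1+\rho_3+\rho_3^2)\,u^2z+(1+\rho_3+\rho_3^2)\,uz^2+z^3=u^3+z^3,
\]
since $1+\rho_3+\rho_3^2=0$.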
All of the above gives the following theorem:
\begin{thm}\label{MT1} Let $A$ be a generic abelian crossed product with respect to $\mathbb{Z}_3\times \mathbb{Z}_3$. Then, after a quadratic extension $L/F$, the algebra $A_L$ is similar to $R\otimes D^{-1}\otimes B^{-1}$, where $R,D,B$ are symbols as above. \end{thm}
In order to go down to $F$ we take corestriction. Using Rosset-Tate and the projection formula, (\cite{GT} 7.4.11 and 7.2.7), we get:
\begin{thm}\label{MT2} Let $A$ be a generic abelian crossed product with respect to $\mathbb{Z}_3\times \mathbb{Z}_3$. Then $A\sim \bigotimes_{i=1}^{4}C_i$, where $C_1,C_2,C_3$ are symbols of degree $9$ and $C_4$ is a symbol of degree $3$. \end{thm}
\begin{proof} One gets $C_1,C_2$ from the corestriction of $R$ using Rosset-Tate, $C_3$ from the corestriction of $D$ using the projection formula, and $C_4$ comes from $B$.
\end{proof}
\section{The exponent $3$ case} In this section we consider the case where $\exp(A)=3$. Notice that from \ref{MT1}, $A_L \sim R\otimes D^{-1}\otimes B^{-1} = (a,b)_{9,L}\otimes(\gamma,c)_{9,L} \otimes (\alpha,\beta)_{3,L}$ where $\alpha,\beta,\gamma \in F^{\times}$ and $a,b,c\in L^{\times}$.
\begin{thm}\label{MT3} Assume $A$ has exponent $3$. Then $A$ is similar to the product of $16$ degree $3$ symbols over a quadratic extension of $F$, and to the product of $31$ degree $3$ symbols over $F$.
\end{thm}
\begin{proof} The idea for this proof is credited to L.H. Rowen, U. Vishne and E. Matzri. Since $\exp(A)=3$ we have $F\sim A^3\sim R^3\otimes D^{-3}\otimes B^{-3}\sim R^3\otimes D^{-3}\sim (a,b)_{3,L}\otimes(\gamma,c)_{3,L}$. Thus we get $(a,b)_{3,L}=(\gamma,c^{-1})_{3,L}$. Now by the chain lemma for degree 3 symbols in \cite{Rost} or \cite{MV} we have $x_{1,2,3} \in L^{\times}$ such that: $$(a,b)_{3,L}=(a,x_1)_{3,L}= (x_2,x_1)_{3,L}= (x_2,x_3)_{3,L}= (\gamma,x_3)_{3,L}=(\gamma,c^{-1})_{3,L}$$
Now we write $$(a,\frac{b}{x_1})_{9,L}\otimes(\frac{a}{x_2},x_1)_{9,L}\otimes(x_2,\frac{x_1}{x_3})_{9,L}\otimes(\frac{x_2}{\gamma},x_3)_{9,L}\otimes(\gamma,x_3c)_{9,L}\sim (a,b)_{9,L}\otimes(\gamma,c)_{9,L}$$ Thus $A\sim (a,b)_{9,L}\otimes(\gamma,c)_{9,L} \otimes (\alpha,\beta)_{3,L}\sim (a,\frac{b}{x_1})_{9,L}\otimes(\frac{a}{x_2},x_1)_{9,L}\otimes(x_2,\frac{x_1}{x_3})_{9,L}\otimes(\frac{x_2}{\gamma},x_3)_{9,L}\otimes(\gamma,x_3c)_{9,L}\sim (a,b)_{9,L}\otimes(\gamma,c)_{9,L}\otimes (\alpha,\beta)_{3,L}$ where now all the degree $9$ symbols are of exponent $3$. But by a theorem of Tignol, \cite{T1}, each of these symbols is similar to the product of three degree $3$ symbols. Thus we have that $A_L$ is similar to the product of $16$ degree $3$ symbols and over $F$ to the product of $31$ symbols of degree $3$ and we are done.
\end{proof}
\section{The general case of a degree $9$ algebra}
In this section we combine the results of sections $2$ and $3$ with \cite{RS} to handle the general case of a degree $9$ algebra of exponent $9$ or $3$. Let $A$ be an $F$-central simple algebra of degree $9$.
The first step would be to follow \cite{RS} to find a field extension $P/F$ such that $A_P$ is an abelian crossed product with respect to $\mathbb{Z}_3\times \mathbb{Z}_3$ and $[P:F]$ is prime to $3$. The argument in \cite{RS} basically goes as follows: Let $K\subset A$ be a maximal subfield, i.e. $[K:F]=9$. Now let $F\subset K \subset E$ where $E$ is the normal closure of $K$ over $F$. Since we know nothing about $K$ we have to assume $G={\operatorname{Gal}}(E/F)=S_9$. Let $H<G$ be a $3$-Sylow subgroup and $L=E^H$; then $[L:F]=4480$. Now extend scalars to $L$; then $KL\subset A_L$ is a maximal subfield. By Galois correspondence $KL=E^{H_1}$ for some subgroup $H_1<H$ with $[H:H_1]=[KL:L]=9$. Since $H$ is a $3$-group we can find $H_1\triangleleft H_2\triangleleft H$ such that $[H:H_2]=3$; thus we have $L=E^H\subset E^{H_2} \subset KL=E^{H_1}\subset E$, and since $H_2 \triangleleft H$ the extension $E^{H_2}/L$ is Galois with group~${\operatorname{Gal}}(E^{H_2}/L)=<\sigma>\cong H/H_2\cong C_3$. Thus in $A_L$ we have the subfield $E^{H_2}$, which has a nontrivial $L$-automorphism $\sigma$. Now let $z\in A_L$ be an element inducing $\sigma$ (such $z$ exists by Skolem-Noether). Consider the subfield $L[z]/L$: since $z^3$ commutes with $E^{H_2}$ and $z$ does not, $[L[z]:L[z^3]]=3$. In the best case scenario we have $L[z^3]=L$, which would imply that $A_L$ decomposes into the tensor product of two symbols of degree $3$, and we are done. In the general case we will have $[L[z^3]:L]=3$. If $L[z^3]/L$ is Galois we are done, since $E^{H_2}[z^3]$ will be a maximal subfield Galois over $L$ with group isomorphic to $\mathbb{Z}_3\times \mathbb{Z}_3$; but again, in general this need not be the case. However, we can extend scalars to make $L[z^3]/L$ Galois: consider $P=L[\disc(L[z^3])]$; then $[P:L]=2$ and $P[z^3]/P$ is Galois, and we are done.
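The number $4480$ is simply the index of a $3$-Sylow subgroup of $S_9$: the $3$-adic valuation of $9!$ is $\lfloor 9/3\rfloor+\lfloor 9/9\rfloor=4$, so
\[
[S_9:H]=\frac{9!}{3^4}=\frac{362880}{81}=4480.
\]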
To summarize we have found an extension $P=L[\disc(L[z^3])]$ with $[P:F]=4480\cdot 2=8960$ such that $A_P$ contains a maximal subfield $PE^{H_2}[z^3]/P$ Galois over $P$ with group isomorphic to $\mathbb{Z}_3\times \mathbb{Z}_3$.
Combining the above with the results of sections $2,3$ and using Rosset-Tate we get the following theorem. \begin{thm}\label{MT4} Let $A$ be an $F$-central simple algebra of degree $9$. Then \begin{enumerate}
\item $A$ is similar to the product of $35840$ symbol algebras, ($8960$ of degree $3$ and $26880$ of degree $9$).
\item If $A$ is of exponent $3$ then $A$ is similar to the product of $ 277760$ symbol algebras of degree $3$. \end{enumerate}
\end{thm}
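For the record, the counts in Theorem \ref{MT4} arise from Rosset-Tate: the corestriction along $P/F$ of a symbol over $P$ is similar to a product of at most $[P:F]=8960$ symbols of the same degree. The four symbols of Theorem \ref{MT2}, taken over $P$, thus yield
\[
8960\times 3=26880 \ \text{symbols of degree } 9 \quad\text{and}\quad 8960\times 1=8960 \ \text{symbols of degree } 3,
\]
for a total of $35840$, while in the exponent $3$ case the $31$ symbols of Theorem \ref{MT3} yield $8960\times 31=277760$ symbols of degree $3$.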
\section{Application to essential dimension} In \cite{M} Merkurjev computes the essential $p$-dimension of $PGL_{p^2}$, relative to a fixed field $k$, to be $p^2+1$. One can interpret this result as follows: Let $F$ be a field of definition (relative to a base field $k$) for the generic division algebra of degree $p^2$. Let $E/F$ be the prime to $p$ closure of $F$. Let $l$, $k\subset l \subset E$, be a subfield of $E$ over which $A$ is defined. Then $l/k$ has transcendence degree at least $p^2+1$ (and such $l$ exists with transcendence degree exactly $p^2+1$). It makes sense to define the essential dimension and the essential $p$-dimension of the class of an algebra $A$ (with respect to a fixed base field $k$). \begin{defn} Let $A\in \Br(F)$. Define the essential dimension and the essential $p$-dimension of the class of $A$ (with respect to a fixed base field $k$) as:
$$\edc(A)=\min\{\ed(B) | B\sim A\}$$
$$\edc_p(A)=\min\{\ed_p(B) | B\sim A\}$$ \end{defn} Notice that \cite{M} for $p=2$ gives $\ed_2(PGL_{2^2})=5$ and for $p=3$ it gives $\ed_3(PGL_{3^2})=10$. Now assume $F$ is prime to $p$ closed. Then, as proved in \cite{RS}, every $F$-csa of degree $p^2$ is actually an abelian crossed product with respect to $\mathbb{Z}_p \times \mathbb{Z}_p$. Thus, in this language, in \cite{LRRS} they prove: \begin{thm} Let $A$ be a generic division algebra of degree $4$. Then $\edc(A)=\edc_2(A)=4$. \end{thm}
For $p=3$ Theorem \ref{MT2} says:
\begin{thm} Let $A$ be a generic division algebra of degree $9$. Then $\edc_3(A)\leq 6$. \end{thm}
\end{document}
\begin{document}
\begin{abstract} We present a construction of two infinite graphs $G_1$ and $G_2$, and of an infinite set $\mathscr{F}$ of graphs such that $\mathscr{F}$ is an antichain with respect to the immersion relation and, for each graph $G$ in $\mathscr{F}$, both $G_1$ and $G_2$ are subgraphs of $G$, but no graph properly immersed in $G$ admits an immersion of $G_1$ and of $G_2$. This shows that the class of infinite graphs ordered by the immersion relation does not have the finite intertwine property.
\end{abstract}
\date{\today}
\title{A Note On Immersion Intertwines of Infinite Graphs}
\begin{section}{Introduction}
A \emph{graph} $G$ is a pair $(V(G),E(G))$ where $V(G)$, the set of vertices, is an arbitrary and possibly infinite set, and $E(G)$, the set of edges, is a subset of the set of two-element subsets of $V(G)$. In particular, this definition implies that all graphs in this paper are simple, that is, with no loops or multiple edges. The class of finite graphs will be denoted $\mathscr{G}_{<\infty}$ and the class of graphs whose vertex set is infinite will be denoted by $\mathscr{G}_\infty$.
Let $G$ and $H$ be graphs, and let $\mathscr{P}(G)$ denote the set of all nontrivial, finite paths of $G$. We say $H$ is \emph{immersed} in $G$ if there is a map $\varphi : V(H) \cup E(H) \rightarrow V(G) \cup \mathscr{P}(G)$, sometimes abbreviated as $\varphi : H \rightarrow G$, such that:
\begin{enumerate}
\item if $v\in V(H)$, then $\varphi (v)\in V(G)$;
\item if $v$ and $v'$ are distinct vertices of $H$, then $\varphi(v) \neq \varphi (v') $;
\item if $e = \{v,v'\} \in E(H)$, then $\varphi(e) \in \mathscr{P}(G)$ and the path $\varphi (e)$ connects $\varphi(v)$ with $\varphi(v')$;
\item if $e$ and $e'$ are distinct edges of $H$, then the paths $\varphi(e)$ and $\varphi(e')$ are edge-disjoint; and
\item if $e=\{v,v'\}\in E(H)$ and $v''$ is a vertex of $H$ other than $v$ and $v'$, then $\varphi (v'') \notin V(\varphi(e))$. \end{enumerate}
We call $\varphi$ an \emph{immersion} and write $H \im G$. It is easy to prove (see~\cite{siamd}) that the relation $\im$ is transitive. If $C$ is a subgraph of $H$, then the restriction of $\varphi$ to $V(C)\cup E(C)$ will be abbreviated by $\varphi|_C$. If $\varphi |_{V(H)}$ is a bijection such that two vertices, $v$ and $v'$, of $H$ are adjacent if and only if their images, $\varphi(v)$ and $\varphi(v')$, are adjacent in $G$, then we say that $\varphi$ induces an isomorphism between $H$ and $G$; otherwise $\varphi$ is \emph{proper}. If $H = G$, then $\varphi$ is a \emph{self-immersion}, and, if additionally, it induces the identity map, then it is \emph{trivial}. It is worth noting that immersion, as defined above, is sometimes called \emph{strong immersion}.
Let $S$ be a possibly infinite set of pairwise edge-disjoint paths in a graph $G$. We say that $S$ is \emph{liftable} if no end-vertex of a path in $S$ is an internal vertex of another path in $S$. The operation of \emph{lifting} $S$ consists of deleting all internal vertices of all paths in $S$, and adding edges joining every pair of non-adjacent vertices of $G$ that are end-vertices of the same path in $S$. It is easy to see that a graph $H$ is immersed in $G$ if and only if $H$ is isomorphic to a graph obtained from $G$ by deleting a set $V$ of vertices, deleting a set $E$ of edges, and then lifting a liftable set $S$ of paths. Furthermore, a self-immersion of $G$ is proper if and only if at least one of the sets $V$, $E$, and $S$ is nonempty.
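For finite graphs the lifting operation is easy to carry out explicitly. The following sketch is our own illustration (the representation of a graph as an adjacency dictionary is an assumption, not taken from the sources above): it deletes the internal vertices of a liftable set of pairwise edge-disjoint paths and joins the end-vertices of each path.

```python
def lift(adj, paths):
    """Lift a liftable set of pairwise edge-disjoint paths in a finite
    simple graph.

    adj   -- dict mapping each vertex to the set of its neighbours
    paths -- list of paths, each a list of vertices [v0, v1, ..., vk]

    Internal vertices of every path are deleted, and the two end-vertices
    of each path are joined by an edge if they are not already adjacent.
    Liftability (no end-vertex of one path internal to another) is
    assumed, not checked.
    """
    internal = {v for p in paths for v in p[1:-1]}
    # drop internal vertices and all edges incident to them
    new_adj = {v: {u for u in nbrs if u not in internal}
               for v, nbrs in adj.items() if v not in internal}
    # join the end-vertices of each lifted path
    for p in paths:
        a, b = p[0], p[-1]
        new_adj[a].add(b)
        new_adj[b].add(a)
    return new_adj
```

For example, lifting the single path $0,1,2$ in the path $0$-$1$-$2$-$3$ deletes vertex $1$ and adds the edge $\{0,2\}$.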
Given a graph $G$, a {\em blob} is a maximal 2-edge-connected subgraph of $G$. Note that if a graph is 2-edge-connected, the graph itself is also a blob. An easy lemma about the immersion relation can be stated as follows. \begin{lemma} \label{immersionblob}
Let $H\im G$ via the immersion $\varphi$ and let $C$ be a blob of $H$. Then there is a blob $D$ of $G$ such that $C\im D$ via the immersion $\varphi|_C$. \end{lemma}
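Since a bridge (an edge whose deletion disconnects its component) can lie in no 2-edge-connected subgraph, for a finite graph the blobs are the connected components remaining after all bridges are deleted. The following sketch is our own illustration (not from the text), using a standard Tarjan-style depth-first search to find the bridges:

```python
import sys

def blobs(adj):
    """Blobs (maximal 2-edge-connected subgraphs) of a finite simple
    graph, computed as the connected components left after deleting all
    bridges.  adj maps each vertex to the set of its neighbours.
    Single-vertex components correspond to trivial blobs."""
    sys.setrecursionlimit(100000)
    disc, low, bridges, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:   # the edge {u, v} is a bridge
                    bridges.add(frozenset((u, v)))

    for s in adj:
        if s not in disc:
            dfs(s, None)

    # connected components of the graph minus its bridges
    comps, seen = [], set()
    for s in adj:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in adj[u]
                         if frozenset((u, v)) not in bridges)
        seen |= comp
        comps.append(comp)
    return comps
```

On two triangles joined by a single edge, for instance, the two triangles are returned as the blobs.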
A pair $(\mathscr{G}, \leq)$, where $\mathscr{G}$ is a class of graphs and $\leq$ is a binary relation on $\mathscr{G}$, is called a \emph{quasi-order} if the relation $\leq$ is both reflexive and transitive. A quasi-order $(\mathscr{G}, \leq)$ is a \emph{well-quasi-order} if it admits no infinite antichains and no infinite descending chains.
Suppose $(\mathscr{G}, \leq)$ is a quasi-order and $G_1$ and $G_2$ are two elements of $\mathscr{G}$. An \emph{intertwine} of $G_1$ and $G_2$ is an element $G$ of $\mathscr{G}$ satisfying the following conditions:
\begin{itemize}
\item $G_1 \leq G$ and $G_2 \leq G$, and
\item if $G'\leq G$ and $G \nleq G'$, then $G_1 \nleq G'$ or $G_2 \nleq G'$. \end{itemize}
The class of all intertwines of $G_1$ and $G_2$ is denoted by $\mathscr{I}_{\leq}(G_1,G_2)$. A quasi-order $(\mathscr{G},\leq)$ satisfies the \emph{finite intertwine property} if for every pair $G_1$ and $G_2$ of elements of $\mathscr{G}$, the class of intertwines $\mathscr{I}_{\leq}(G_1, G_2)$ has no infinite antichains. It is clear that if $(\mathscr{G},\leq)$ is a well-quasi-order, then it also satisfies the finite intertwine property. However, it is well known that the converse is not true; for example, see~\cite{anoioig}.
Nash-Williams conjectured, and Robertson and Seymour later proved~\cite{gm23nwic} that $(\mathscr{G}_{<\infty}, \im)$ is a well-quasi-order, and so it follows that $(\mathscr{G}_{<\infty}, \im)$ satisfies the finite intertwine property. In~\cite{anoioig}, the second author showed that $(\mathscr{G}_\infty , \m)$, where $\m$ denotes the minor relation, does not satisfy the finite intertwine property. Andreae showed \cite{oioug} that $(\mathscr{G}_\infty , \im)$ is not a well-quasi-order. In a result analogous to \cite{anoioig}, we strengthen Andreae's result by showing that $(\mathscr{G}_\infty , \im)$ does not satisfy the finite intertwine property. In particular, we construct two graphs $G_1$ and $G_2$, and an infinite class $\mathscr{F}$ in $\mathscr{G}_\infty $ such that:
\begin{enumerate} \item[(IT1)] $\mathscr{F}$ is an immersion antichain; \item[(IT2)] every graph in $\mathscr{F}$ is connected; \item[(IT3)] both $G_1$ and $G_2$ are subgraphs of each graph in $\mathscr{F}$; \item[(IT4)] if $G'$ is properly immersed in a graph $G$ in $\mathscr{F}$, then $G_1 \nim G'$ or $G_2 \nim G'$. \end{enumerate}
Note that~(IT3) implies that $G_1$ and $G_2$ are immersed in $G$. Hence, the existence of graphs $G_1$, $G_2$ and a class of graphs $\mathscr{F}$ satisfying~(IT1)--(IT4) implies the following statement, which is the main result of the paper.
\begin{theorem} \label{mainthm} The quasi-order $(\mathscr{G}_{\infty}, \im)$ does not satisfy the finite intertwine property. \end{theorem}
\end{section} \begin{section}{The Construction}
We will exhibit two graphs $G_1$ and $G_2$ in $\mathscr{G}_{\infty}$ such that $\mathscr{I}_{\im}(G_1, G_2)$ is infinite. The construction of $G_1$ and $G_2$ begins with the following results, which are immediate consequences of, respectively, Lemmas 3 and 4, and Lemmas 1 and 2 of \cite{osioig}.
\begin{theorem} \label{thm1}
There is an infinite set $\mathscr{H}$ of pairwise-disjoint infinite blobs such that $|H| \leq |\mathscr{H}|$ for all $H\in \mathscr{H}$, and $\mathscr{H}$ forms an immersion antichain. \end{theorem}
\begin{theorem} \label{thm2}
Given an immersion antichain $\mathscr{H}$ of pairwise-disjoint infinite blobs such that $|H| \leq |\mathscr{H}|$ for all $H\in \mathscr{H}$, there is a connected graph $G$ such that the set of blobs of $G$ is $\mathscr{H}$ and $G$ admits no self-immersion except for the trivial one. \end{theorem}
Let $\mathscr{H}$ be an antichain as described in Theorem~\ref{thm1}. Partition $\mathscr{H}$ into countably many sets $\{\mathscr{H}_i\}_{i\in \Z}$ with the cardinality of each $\mathscr{H}_i$ equal to $|\mathscr{H}|$. Then, by Theorem~\ref{thm2}, for each $i\in \Z$, there is a connected graph $B_i$ whose set of blobs is $\mathscr{H}_i$, and that admits no proper self-immersion. Furthermore, Lemma \ref{immersionblob} implies that if $i$ and $j$ are distinct integers, then $B_i \nim B_j$, as no blob of $B_i$ is immersed in a blob of $B_j$. Therefore, the set of graphs $\{B_i\}_{i\in \mathbb{Z}}$ is an immersion antichain.
For each graph $B_i$, label one vertex $u_i$. Let $P$ be a two-way infinite path with vertices labeled $\{v_i\}_{i\in \Z}$ such that, for each integer $i$, the vertex $v_i$ is adjacent to $v_{i+1}$ and $v_{i-1}$. We construct the graph $G_1$ by taking the disjoint union of $P$ and the graphs $B_i$ for which $i$ is odd, and then identifying the vertices $u_i$ and $v_j$ for $i=j$. Similarly, we construct the graph $G_2$ by taking the disjoint union of $P$ and the graphs $B_i$ for which $i$ is even, and then identifying the vertices $u_i$ and $v_j$ for $i=j$.
Now let $j$ be an integer. Take the disjoint union of $G_1$ and all the graphs $B_i$ for which $i$ is even. Then, for each even integer $i$, identify the vertex $v_i$ of $G_1$ with the vertex $u_{i+2j}$ of the graph $B_{i+2j}$. Let $F_j$ be the resulting graph (see Figure~\ref{f1}) and define $\mathscr{F}$ as the set $\{F_j\}_{j\in \mathbb{Z}}$.
\end{section}
\begin{figure}
\caption{The graph $F_j$}
\label{f1}
\end{figure}
The following lemma immediately implies our main result, Theorem~\ref{mainthm}.
\begin{lemma} The set of graphs $\mathscr{F}=\{F_j\}_{j\in \mathbb{Z}}$ is an immersion antichain. Furthermore, each $F_j \in \mathscr{F}$ is an immersion intertwine of the graphs $G_1$ and $G_2$. \end{lemma}
\begin{proof} Let $j$ be an integer. It is easy to see that $F_j$ satisfies (IT2) and (IT3). Therefore, in order to show that $F_j$ is an immersion intertwine of $G_1$ and $G_2$, it suffices to prove that it also satisfies (IT4).
Suppose, for contradiction, that $F'_j$ is a graph that is properly immersed in $F_j$ via a map $\varphi$, and both $G_1$ and $G_2$ are immersed in $F'_j$. Then we can obtain $F'_j$ from $F_j$ by deleting a set of vertices $V$, deleting a set of edges $E$, and then lifting a liftable set of paths $S$, with at least one of these sets being nonempty. We consider two cases depending on whether there is an integer $i$ for which $B_i$ meets $V\cup E \cup S$.
First, assume that no $B_i$ meets $V\cup E \cup S$. Then the sets $V$ and $S$ are empty, as all the vertices of $F_j$ are contained in the subgraphs $\{B_n\}_{n\in \Z}$, and $E$ consists of some edges of $P$.
Suppose the edge~$e=\{v_k, v_{k+1}\}$ is in $E$ where $k$ is odd; the argument is symmetric when $k$ is even. The graph $F_j \setminus e$ has exactly two components, with the subgraphs $B_k$ and $B_{k+2}$ in distinct components. Label the component containing $B_k$ as $C_1$ and the component containing $B_{k+2}$ as $C_2$.
Let $A$ be a blob of $B_k$. As $A$ and each blob of $C_2$ are members of the antichain $\mathscr{H}$, by Lemma~\ref{immersionblob}, we have $A\nim C_2$. Hence, by transitivity, $B_k\nim C_2$. It follows similarly that $B_{k+2}\nim C_1$. But as $G_1$ is connected and the only components of $F_j\setminus e$ are $C_1$ and $C_2$, we have that $G_1 \nim F_j \setminus e$. Furthermore, as $F'_j \im F_j \setminus e$, by transitivity, $G_1 \nim F'_j$; a contradiction.
Now suppose that, for some odd integer $i$, the graph $B_i$ meets $V\cup E \cup S$; again, the argument is symmetric if $i$ is even. As $G_1$ is immersed in $F'_j$, so is $B_i$. Let $T$ be the subgraph of $F'_j$ induced by $\varphi^{-1} (V(B_i)\cup \mathscr{P}(B_i))$, and let $\psi$ be the immersion of $B_i$ into $F'_j$. As $B_i$ admits no proper self-immersion, there must be some vertex $v$ of $B_i$ such that $\psi(v)$ is a vertex of $F'_j - T$.
Let $A_v$ be the blob of $B_i$ containing $v$. By Lemma~\ref{immersionblob}, the blob $A_v$ is immersed in some blob of $F'_j - T$. But, again by Lemma~\ref{immersionblob}, each blob of $F'_j - T$ is immersed in a graph of the antichain $\mathscr{H} \setminus \{A_v\}$. So $A_v$ cannot be immersed in $F'_j - T$. Therefore, $B_i$ is not immersed in $F'_j$, and neither is $G_1$; a contradiction.
Hence, $\mathscr{F}$ satisfies the condition (IT4).
To show that $\mathscr{F}$ is an antichain in $(\mathscr{G}_\infty, \im)$, suppose that $F_i$ is immersed in $F_j$ for some distinct integers $i$ and $j$. By construction, $F_i$ and $F_j$ are not isomorphic. Therefore, $F_i$ is properly immersed in the intertwine $F_j$ and so either $G_1 \nim F_i$ or $G_2 \nim F_i$. But both $G_1$ and $G_2$ are immersed in $F_i$ by construction; a contradiction. The conclusion follows. \end{proof}
The graphs $\{B_i\}_{i\in \Z}$ used in our construction, whose existence was proved in~\cite{oioug}, have vertex sets of very large cardinality. In fact, the cardinal in question is the first limit cardinal greater than the cardinality of the continuum. It is not known whether the class of graphs of smaller cardinality ordered by the strong immersion relation is a well-quasi-order, whether it has the finite intertwine property, or whether there exists an infinite graph of smaller cardinality that admits only the trivial self-immersion.
\end{document}
\begin{document}
\title{Numerical Methods for Stochastic Differential Equations} \author{Joshua Wilkie} \affiliation{Department of Chemistry, Simon Fraser University, Burnaby,
British Columbia V5A 1S6, Canada}
\begin{abstract} Stochastic differential equations (sdes) play an important role in physics but existing numerical methods for solving such equations are of low accuracy and poor stability. A general strategy for developing accurate and efficient schemes for solving stochastic equations is outlined here. High order numerical methods are developed for integration of stochastic differential equations with strong solutions. We demonstrate the accuracy of the resulting integration schemes by computing the errors in approximate solutions for sdes which have known exact solutions. \end{abstract}
\pacs{03.65.-w, 02.50.-r, 02.70.-c} \maketitle
Stochastic differential equations ({\bf sdes}) have a long history in physics\cite{CWG} and play an important role in many other areas of science, engineering and finance\cite{CWG,Hase,Platen}. Recently a number of computational techniques have been developed in which high dimensional deterministic equations are decomposed into lower dimensional stochastic equations. Gisin and Percival\cite{GP}, for example, reduced a deterministic master equation for the density matrix into stochastic equations for a wavefunction. Similar approaches are being used to solve the quantum many-body problem for bosons\cite{CCD}, fermions\cite{JC} and vibrations\cite{Wilk}. These latter methods give rise to large sets of coupled sdes which require fast and efficient numerical integration schemes. Unfortunately, and in spite of their widespread use, the available numerical techniques\cite{Platen} for solving such equations are far less accurate than comparable methods for solution of ordinary differential equations ({\bf odes}).
In this manuscript we show how classical methods for solving odes, such as Runge-Kutta, can be adapted for the solution of a class of sdes which should include many of the equations which arise in physical problems.
Consider a finite set of sdes, \begin{eqnarray} dX^j_t=a^j({\bf X}_t,t)~dt+\sum_{k=1}^mb^j_k({\bf X}_t,t)~dW^k_t, \label{sdes} \end{eqnarray} represented in It\^{o}\cite{CWG,Hase,Platen} form, where $j=1,\dots,n$. Here ${\bf X}_t=(X^1_t,\dots,X^n_t)$ and the $dW^k_t$ are independent and normally distributed stochastic differentials with zero mean and variance $dt$ (i.e. sampled $N(0,dt)$). The stochastic variables $W^k_t$ are Wiener processes. Now assume that the coefficients $a^j$ and $b^j_k$ have regularity properties which guarantee strong solutions, i.e. that $X^j_t$ are some fixed functions of the Wiener processes, and that they are differentiable to high order. [Sufficient conditions for strong solutions are discussed in Ref. \cite{Platen}.] We may then view the solutions of (\ref{sdes}) as functions $X^j_t=X_j(t,W^1_t,\dots, W^m_t)$ of time and the Wiener processes. The solutions can therefore be expanded in Taylor series. Keeping terms of order $dt$ or less then gives \begin{eqnarray} X^j_{t+dt}&=&X^j_t+\frac{\partial X^j_t}{\partial t}~dt+\sum_{k=1}^m\frac{\partial X^j_t}{\partial W^k_t}~dW^k_t\nonumber \\ &+&\frac{1}{2}\sum_{k,l=1}^m\frac{\partial^2 X^j_t}{\partial W^k_t\partial W^l_t}~dW^k_tdW^l_t.\label{sdes2} \end{eqnarray} In a mean square sense the product of {\em differentials} $dW^k_tdW^l_t$ is equivalent to $\delta_{k,l}dt$ in the It\^{o}\cite{CWG,Hase,Platen} formulation of stochastic calculus. 
Making this replacement then yields \begin{eqnarray} dX^j_{t+dt}&=&X^j_{t+dt}-X^j_t=[\frac{\partial X^j_t}{\partial t}+\frac{1}{2}\sum_{k=1}^m\frac{\partial^2 X^j_t}{\partial W^{k2}_t}]~dt\nonumber \\ &+&\sum_{k=1}^m\frac{\partial X^j_t}{\partial W^k_t}~dW^k_t \end{eqnarray} which when compared to (\ref{sdes}) allows us to identify the first derivatives \begin{eqnarray} \frac{\partial X^j_t}{\partial W^k_t}&=&b^j_k({\bf X}_t,t) \\ \frac{\partial X^j_t}{\partial t}&=&a^j({\bf X}_t,t)-\frac{1}{2}\sum_{k=1}^m\frac{\partial^2 X^j_t}{\partial W^{k2}_t}\nonumber \\ &=&a^j({\bf X}_t,t)-\frac{1}{2}\sum_{k=1}^m\sum_{i=1}^nb^i_k({\bf X}_t,t)\frac{\partial b^j_k({\bf X}_t,t)}{\partial X_t^i}. \end{eqnarray} Now that these first order derivatives are expressed in terms of $a^j$ and $b^j_k$, higher order derivatives can be computed. Thus a Taylor expansion of the solutions \begin{eqnarray} X^j_{t+\Delta t}&=&X^j_t+\frac{\partial X^j_t}{\partial t}\Delta t+\sum_{k=1}^m\frac{\partial X^j_t}{\partial W^k_t}~\Delta W^k_t\nonumber \\ &+&\frac{1}{2}\sum_{k,l=1}^m\frac{\partial^2 X^j_t}{\partial W^k_t\partial W^l_t}\Delta W^k_t\Delta W^l_t+\dots \end{eqnarray} can be obtained for finite displacements $\Delta t$ and $\Delta W^k_t$. This Taylor expansion can then be employed to develop Runge-Kutta algorithms and other integration schemes.
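As a quick sanity check (our own illustration, not part of the paper), these identifications can be verified on a concrete equation: for $a(x)=(1+x)(1+x^2)$ and $b(x)=1+x^2$ (the first test equation used below), the corrected drift $a-\frac{1}{2}b\,\partial_x b$ reduces to $1+x^2$, and indeed $X=\tan(t+W+\arctan X_0)$ satisfies $\partial X/\partial t=1+X^2$ and $\partial X/\partial W=b(X)$.

```python
import math

# Hypothetical check (ours): a(x) = (1+x)(1+x^2), b(x) = 1+x^2, so the
# identified time derivative is
#   dX/dt = a - (1/2) b db/dx = (1+x)(1+x^2) - x(1+x^2) = 1 + x^2,
# and dX/dW = b, consistent with X = tan(t + W + arctan(X0)).

def a(x):
    return (1.0 + x) * (1.0 + x * x)

def b(x):
    return 1.0 + x * x

def db_dx(x):
    return 2.0 * x

def corrected_drift(x):
    # dX/dt as identified from the Ito expansion
    return a(x) - 0.5 * b(x) * db_dx(x)

# X(t, W) = tan(t + W + arctan(X0)); compare partial derivatives numerically.
X0 = 0.3
def X(t, w):
    return math.tan(t + w + math.atan(X0))

h = 1e-6
t, w = 0.2, 0.1
x = X(t, w)
dX_dt = (X(t + h, w) - X(t - h, w)) / (2 * h)
dX_dW = (X(t, w + h) - X(t, w - h)) / (2 * h)

print(abs(dX_dt - corrected_drift(x)) < 1e-4)  # dX/dt matches a - (1/2) b b'
print(abs(dX_dW - b(x)) < 1e-4)                # dX/dW matches b
```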
We illustrate the use of this approach by developing a Runge-Kutta method for sdes which is closely related to the classical Runge-Kutta scheme for odes. For given displacements $\Delta t$ and $\Delta W^k_t$ define \begin{eqnarray} f_j({\bf X}_t,t)&=&\frac{\partial X^j_t}{\partial t}\Delta t+\sum_{k=1}^m \frac{\partial X^j_t}{\partial W^k_t}\Delta W^k_t\nonumber \\ &=&[a^j({\bf X}_t,t)-\frac{1}{2}\sum_{k=1}^m\sum_{i=1}^nb^i_k({\bf X}_t,t)\frac{\partial b^j_k({\bf X}_t,t)}{\partial X_t^i}]\Delta t\nonumber \\ &+&\sum_{k=1}^mb^j_k({\bf X}_t,t) \Delta W^k_t \end{eqnarray} and consider the following four stage approximation \begin{eqnarray} K_j^1&=&f_j({\bf X}_{t_i},t_i)\nonumber \\ K_j^2&=&f_j({\bf X}_{t_i}+\frac{1}{2}{\bf K}^1,t_i+\frac{1}{2}\Delta t)\nonumber \\ K_j^3&=&f_j({\bf X}_{t_i}+\frac{1}{2}{\bf K}^2,t_i+\frac{1}{2}\Delta t)\nonumber \\ K_j^4&=&f_j({\bf X}_{t_i}+{\bf K}^3,t_{i+1})\nonumber\\ {\bf X}_{t_{i+1}}&=&{\bf X}_{t_i}+\frac{1}{6}({\bf K}^1+2{\bf K}^2+2{\bf K}^3+{\bf K}^4) \label{rk4} \end{eqnarray} where $t_i$ is the initial time and $t_{i+1}=t_i+\Delta t$. Taylor expansion of this scheme shows that ${\bf X}_{t_{i+1}}$ differs from the exact solution by terms of order higher than $\Delta t^2$ (i.e. terms of higher order than $\Delta t^2$, $\Delta t (\Delta W^k_t)^2$, $(\Delta W^k_t)^4$, $(\Delta W^k_t)^2(\Delta W^l_t)^2$, and $(\Delta W^k_t)^2\Delta W^l_t\Delta W^i_t$). Thus, this stochastic Runge-Kutta algorithm plays a role very similar to its classical counterpart except that its order is reduced from four to two. Generalizations to higher order Runge-Kutta schemes are straightforward, and we will employ one such scheme in example calculations, but details will not be presented here.
While this approach is not completely general, since it will fail for sdes with weak solutions or non-differentiable $a^j$ and $b^j_k$, it should be applicable to a wide range of problems. It can for example be used to solve every one of the equations with known solutions tabulated in section 4.4 of Ref. \cite{Platen}. To illustrate the accuracy of the method and its improvement over other known techniques for solving sdes we now consider a number of these examples. We compare known exact solutions with numerical solutions obtained using the Euler-Maruyama scheme\cite{EM}, a derivative free version of the Milstein scheme due to Kloeden and Platen\cite{Mils}, the classical Runge-Kutta scheme (\ref{rk4}), and another Runge-Kutta scheme obtained in the manner outlined above from an eighth order twelve step method for odes due to Hairer and Wanner\cite{Hair} (this reproduces the stochastic Taylor expansion up to and including terms of order $\Delta t^4$). Stochastic differentials were sampled using the routines gasdev and ran2\cite{NR}. \begin{figure}
\caption{$\log_{10}|X_t-X_t^{approximate}|$ vs time $t$ for Eq. (\ref{EG1})}
\label{Fig1}
\end{figure}
As a first test of these methods consider an autonomous nonlinear scalar equation \begin{equation} dX_t = (1+X_t)(1+X_t^2) dt + (1+X_t^2) dW_t \label{EG1} \end{equation} with just one Wiener process. In this example and in all subsequent examples we assume all Wiener processes are initially zero. The exact solution to this equation is\cite{Platen} \begin{equation} X_t=\tan(t+W_t+\arctan(X_0)) \end{equation}
as can be readily verified using It\^{o}\cite{CWG,Hase,Platen} calculus. In Fig. 1 we plot the error $\log_{10}|X_t-X_t^{approximate}|$ vs time computed with a time step of $2.5\times 10^{-5}$ for a single stochastic trajectory with initial condition $X_0=1$ for the four different approximation schemes. The Milstein scheme (long-dashed curve) shows some improvement over the primitive Euler-Maruyama method (solid curve) but the order two Runge-Kutta scheme (short-dashed curve) and order four Runge-Kutta scheme (dotted curve) perform very much better. \begin{figure}
\caption{$\log_{10}|X_t-X_t^{approximate}|$ vs time $t$ for Eq. (\ref{EG2})}
\label{Fig2}
\end{figure}
The second example equation, also from Ref. \cite{Platen}, is an autonomous linear scalar equation in two Wiener processes \begin{equation} dX_t = a_0 X_t dt + b_1 X_t dW^1_t + b_2 X_t dW^2_t\label{EG2} \end{equation} which has the exact solution \begin{equation} X_t=X_0\exp\{[a_0-\frac{1}{2}(b_1^2+b_2^2)]t+b_1 W_t^1+b_2W_t^2\}. \end{equation} The logarithm base ten of the error for the different schemes, calculated for initial condition $X_0=1$ and time step .01, is plotted in Fig. 2. Here the Milstein scheme (long-dashed curve) performs no better than the Euler-Maruyama method (solid curve), but again the order two Runge-Kutta scheme (short-dashed curve) and order four Runge-Kutta scheme (dotted curve) show greatly improved accuracy. [Note that the apparent improvement in performance of all schemes at long time is a result of the fact that the solution decays to zero.] \begin{figure}
\caption{$\log_{10}|X_t^1-X_t^{1~approximate}|$ vs time $t$ for Eq. (\ref{EG3})}
\label{Fig3}
\end{figure}
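For a reader wishing to reproduce such comparisons, the simplest case can be sketched as follows (our illustration; the coefficient values are hypothetical, not those used for the figures): an Euler-Maruyama integration of Eq. (\ref{EG2}) checked against the exact solution evaluated on the same pair of Wiener paths.

```python
import math
import random

# Euler-Maruyama for dX = a0 X dt + b1 X dW1 + b2 X dW2, compared with the
# exact solution X_t = X0 exp{[a0 - (b1^2+b2^2)/2] t + b1 W1 + b2 W2}
# evaluated on the same Wiener paths. Coefficients below are illustrative.

random.seed(2)
a0, b1, b2 = 0.1, 0.2, 0.1
x, w1, w2, t, dt = 1.0, 0.0, 0.0, 0.0, 0.01

for _ in range(100):  # integrate to t = 1
    dW1 = random.gauss(0.0, math.sqrt(dt))
    dW2 = random.gauss(0.0, math.sqrt(dt))
    x += a0 * x * dt + b1 * x * dW1 + b2 * x * dW2  # Euler-Maruyama step
    w1 += dW1
    w2 += dW2
    t += dt

exact = math.exp((a0 - 0.5 * (b1 * b1 + b2 * b2)) * t + b1 * w1 + b2 * w2)
print("pathwise error:", abs(x - exact))
```

At this step size the strong order 1/2 of Euler-Maruyama gives only modest pathwise accuracy, which is the behaviour the figure comparisons illustrate.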
Example 3 is a set of two coupled linear autonomous sdes \begin{eqnarray} dX_t^1&=&-\frac{3}{2} X_t^1 dt +X_t^1 dW_t^1-X_t^1dW_t^2-X_t^2dW_t^3\nonumber \\ dX_t^2&=&-\frac{3}{2} X_t^2 dt +X_t^2 dW_t^1-X_t^2dW_t^2+X_t^1dW_t^3\label{EG3} \end{eqnarray} with three Wiener processes. Here the solutions are \begin{eqnarray} X_t^1&=&\exp\{-2t+W_t^1-W_t^2\}\cos W_t^3\nonumber\\ X_t^2&=&\exp\{-2t+W_t^1-W_t^2\}\sin W_t^3. \end{eqnarray} Numerical solutions were calculated with a time step of .01 and errors in $X_t^1$ are represented in Fig. 3. The order two Runge-Kutta scheme (long-dashed curve) and order four Runge-Kutta scheme (short-dashed curve) show improvement over the Milstein scheme (solid curve). Similar results were obtained for $X_t^2$. \begin{figure}
\caption{$\log_{10}|X_t-X_t^{approximate}|$ vs time $t$ for Eq. (\ref{EG4})}
\label{Fig4}
\end{figure}
The examples we have considered so far have not had explicitly time dependent $a^j$ and $b^j_k$. Example 4 is a scalar non-autonomous sde \begin{equation} dX_t=[\frac{2}{1+t}X_t+\frac{1}{2}(1+t)^2]dt+\frac{1}{2}(1+t)^2 dW_t\label{EG4} \end{equation} with known solution\cite{Platen} \begin{equation} X_t=\left(\frac{1+t}{1+t_0}\right)^2X_0+\frac{1}{2}(1+t)^2(W_t+t-t_0). \end{equation} Numerical solutions were calculated using the order two Runge-Kutta scheme and a time step of .001, $t_0=0$ and $X_0=1$. The error is represented in Fig. 4. As in previous examples a high accuracy is achieved in spite of the rapid growth of the solution. The comparative smoothness of the error curve reflects the fact that the deterministic part of the solution dominates. \begin{figure}
\caption{$\log_{10}|X_t-X_t^{approximate}|$ vs time $t$ for Eq. (\ref{EG5})}
\label{Fig5}
\end{figure}
We now consider an example for which an exact solution is known but which is expressed in terms of a stochastic integral. Consider the stochastic Ginzburg-Landau equation \begin{equation} dX_t=[-X_t^3+(\alpha+\frac{1}{2}\sigma^2)X_t]dt+\sigma X_t dW_t\label{EG5} \end{equation} with solution\cite{Platen} \begin{equation} X_t=X_0\frac{\exp\{\alpha t+\sigma W_t\}}{\sqrt{1+2X_0^2\int_0^t\exp\{2\alpha s+2\sigma W_s\}ds}}. \end{equation} We chose $\alpha=.01$, $\sigma=4$, $X_0=1$ and $dt=5\times 10^{-6}$. The stochastic integral was computed using a Riemann sum with the same time step. Error in the solution calculated with the order two Runge-Kutta scheme is plotted in Fig. 5. Good accuracy is again obtained. \begin{figure}
\caption{$\log_{10}|X_t-X_t^{approximate}|$ vs time $t$ for Eq. (\ref{EG6})}
\label{Fig6}
\end{figure}
Finally, we consider an example in which the exact solution is expressed in terms of an It\^{o}\cite{CWG,Hase,Platen} stochastic integral. Consider the sde \begin{equation} dX_t=-\tanh X_t (a+\frac{1}{2}b^2{\rm sech} ^2X_t)dt+b{\rm sech} X_t dW_t\label{EG6} \end{equation} with exact solution\cite{Platen} \begin{equation} X_t={\rm arcsinh} \left(e^{-at}\sinh X_0+e^{-at}\int_0^t e^{as}dW_s\right). \end{equation} \begin{figure}
\caption{$\log_{10}|n_t-n_t^{approximate}|$ vs time $t$ for Eq. (\ref{EG7})}
\label{Fig7}
\end{figure} We set $a=.02$, $b=1$, $X_0=1$ and $dt=1\times 10^{-5}$. The stochastic integral in the exact solution was calculated using the It\^{o}\cite{CWG,Hase,Platen} integral formula with the same time step. The error in the solution calculated with the order two Runge-Kutta scheme is plotted in Fig. 6. As in all previous cases considered the accuracy is very good.
Thus, the approach to solving sdes advocated here works very well for the wide range of examples we have considered. The order four Runge-Kutta method is clearly much more accurate than the order two Runge-Kutta scheme. It also has an embedded lower order Runge-Kutta scheme which can be employed to obtain an error estimate suitable for stepsize control\cite{Hair}. Hence it should be possible to use variable stepsizes to ensure the accuracy of the solution. This sort of implementation is essential for solving equations which do not have known exact solutions. The only subtlety in developing such a method is ensuring that the correct Wiener path is maintained even when a step must be rejected. This is achieved\cite{GL} by dividing the rejected differentials $dt$ and $dW^k_t$ into two segments: $dt/2$ and $dW^k_t/2-y$ followed by $dt/2$ and $dW^k_t/2+y$, where $y$ is sampled $N(0,dt/4)$. To illustrate the accuracy of the resulting variable stepsize algorithm we solve the Gisin-Percival\cite{GP} stochastic wave equation for the nonlinear absorber (Eq. 4.2 of Ref. \cite{GP}) \begin{eqnarray}
d|\psi\rangle &=&.1(a^{\dag}-a)|\psi\rangle dt+(2\overline{a^{\dag 2}}a^2-a^{\dag 2}a^2-\overline{a^{\dag 2}}~\overline{a^2})|\psi\rangle dt\nonumber \\
&+&\sqrt{2}(a^2-\overline{a^2}) |\psi\rangle dW_t\label{EG7} \end{eqnarray}
with initial state $|\psi(0)\rangle=|0\rangle$. In Fig. 7 we plot the error in mean occupation number $n_t=M[\langle \psi|a^{\dag}a|\psi\rangle ]$ vs time (Fig. 5 of Ref. \cite{GP}) where $M[\cdot ]$ denotes an average over stochastic realisations. The solid, dashed and dotted curves were calculated with 1000, 10000, and 20000 trajectories, respectively. Convergence to the exact result is good.
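The step-rejection bookkeeping can be sketched as follows (our own illustration, not the authors' code). For the two half-increments to be statistically indistinguishable from genuine successive Wiener increments, the perturbation $y$ must carry the Brownian-bridge midpoint variance $dt/4$; each half-increment is then distributed $N(0,dt/2)$ and the two halves sum exactly to the original increment.

```python
import math
import random

# Split a rejected increment (dt, dW) into two half-steps whose increments
# sum to dW.  Sampling y with variance dt/4 (the Brownian-bridge midpoint
# variance) makes each half marginally N(0, dt/2), so the refined path
# remains a valid Wiener path.
def split_increment(dW, dt, rng):
    y = rng.gauss(0.0, math.sqrt(dt / 4.0))
    return (dW / 2.0 - y, dW / 2.0 + y)

rng = random.Random(0)
dt = 0.01
dW = rng.gauss(0.0, math.sqrt(dt))
h1, h2 = split_increment(dW, dt, rng)
print(abs((h1 + h2) - dW) < 1e-12)  # the original Wiener increment is preserved

# Sample variance of the first half-increment should approach dt/2.
halves = [split_increment(rng.gauss(0.0, math.sqrt(dt)), dt, rng)[0]
          for _ in range(200000)]
var = sum(s * s for s in halves) / len(halves)
print(abs(var - dt / 2.0) < 5e-4)
```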
The author acknowledges the support of the Natural Sciences and Engineering Research Council of Canada.
\end{document}
\begin{document}
\title{On structure space of the ring $B_1(X)$}
\author{A. Deb Ray} \address{Department of Pure Mathematics, University of Calcutta, 35, Ballygunge Circular Road, Kolkata - 700019, INDIA} \email{[email protected]}
\author{Atanu Mondal} \address{Department of Commerce (E), St. Xavier's college, 30, Mother Teresa sarani, Kolkata - 700016, INDIA} \email{[email protected]}
\begin{abstract} In this article, we continue our study of the ring of Baire one functions on a topological space $(X,\tau)$, denoted by $B_1(X)$, and extend the well-known M. H. Stone's theorem from $C(X)$ to $B_1(X)$. Introducing the structure space of $B_1(X)$, an analogue of the Gelfand-Kolmogoroff theorem is established. It is observed that $(X,\tau)$ may not be embedded inside the structure space of $B_1(X)$. This observation inspired us to introduce a weaker form of embedding and show that in case $X$ is a $T_4$ space, $X$ is weakly embedded as a dense subspace in the structure space of $B_1(X)$. It is further established that the ring $B_1^{*}(X)$ of all bounded Baire one functions is a C-type ring and also that the structure space of $B_1^{*}(X)$ is homeomorphic to the structure space of $B_1(X)$. Introducing a finer topology $\sigma$ than the original $T_4$ topology $\tau$ on $X$, it is proved that $B_1(X)$ contains free (maximal) ideals if $\sigma$ is strictly finer than $\tau$. It is also proved that $\tau = \sigma$ if and only if $B_1(X) = C(X)$. Moreover, in the class of all perfectly normal $T_1$ spaces, $B_1(X) = C(X)$ is equivalent to the discreteness of the space $X$. \end{abstract} \keywords{$Z_B$- filter, $Z_B$-ultrafilter, free and fixed maximal ideals of $B_1(X)$, Structure space of a ring, Compactification} \subjclass[2010]{26A21, 54C30, 13A15, 54C50, 54D35}
\maketitle
\section{Introduction and Prerequisites} \noindent The collection $B_1(X)$ of all real valued Baire one functions defined on a topological space $X$ forms a commutative lattice ordered ring with unity. Initiating the study of $B_1(X)$ in \cite{AA}, we established a duality between the ideals of $B_1(X)$ and $Z_B$-filters (an analogue of $Z$-filters) on $X$ in a subsequent paper \cite{AA2}. \\ \noindent For rings of continuous functions, M. H. Stone's theorem states that for every topological space $X$ there exists a Tychonoff space $Y$ such that $C(X) \cong C(Y)$, a result that is extremely important and useful. Since $B_1(X)$ is a ring that contains $C(X)$ as a subring, it is natural to ask whether it is possible to extend the celebrated M. H. Stone's theorem \cite{GJ} to this bigger ring. In this paper, we begin by addressing this question and answer it in the affirmative. In view of this result, it is enough to deal with Tychonoff spaces as far as the study of the ring structure of $B_1(X)$ is concerned. \\\\ \noindent The collection of all maximal ideals of $C(X)$, denoted by $\MMM(C(X))$, equipped with the hull-kernel topology is known as the structure space of the ring $C(X)$. It is also very well known that the structure space of $C(X)$ is homeomorphic to the collection of all $Z$-ultrafilters on $X$ with the Stone topology \cite{GJ}. In Section 2, defining the structure space of $B_1(X)$ in a similar manner, we establish an analogue of this result in the context of the ring $B_1(X)$. The importance of the structure space of $C(X)$ for a Tychonoff space $X$ lies in the fact that a copy of $X$ is densely embedded in it, i.e., a Tychonoff space $X$ is embedded in the space $\MMM(C(X))$ with the hull-kernel topology. Moreover, the structure space $\MMM(C(X))$ becomes the Stone-\v{C}ech compactification of $X$. But for $\MMM(B_1(X))$, the same need not happen. 
The space $(X, \tau)$ may not be embedded in the structure space $\MMM(B_1(X))$ of the ring $B_1(X)$. We have shown that in case $X$ is a $T_4$-space, a weaker form of embedding from $X$ into the structure space $\MMM(B_1(X))$ exists. Inspired by this fact, we have introduced another topology $\sigma$ on $X$, generally finer than $\tau$, such that $(X, \sigma)$ is densely embedded inside $\MMM(B_1(X))$. This result leads to several important conclusions. It is proved that the ring $B_1^*(X)$ of bounded Baire one functions is a C-type ring and $\MMM(B_1(X)) \cong \MMM(B_1^*(X))$. Finally, for compact $T_2$ spaces, $\tau \subsetneq \sigma$ ensures the existence of free maximal ideals in $B_1(X)$ and, at least within the class of perfectly normal $T_1$ spaces, $B_1(X) = C(X)$ is equivalent to the space being discrete. \\\\ \noindent In what follows, we write $X$, $Y$ etc. to denote topological spaces without mentioning their topologies (unless required) explicitly. \section{Extension of M. H. Stone's Theorem} \noindent As proposed in the introduction, we construct an isomorphism $B_1(Y) \rightarrow B_1(X)$ using the existing isomorphism $C(Y) \rightarrow C(X)$, where $Y$ is the Tychonoff space constructed suitably from a given topological space $X$. It is also not very hard to observe that such an isomorphism is a lattice isomorphism. \begin{theorem}\label{MHS} For each topological space $X$, there exists a Tychonoff space $Y$ such that $B_1(X)$ is isomorphic to $B_1(Y)$ and $B_1^*(X)$ is isomorphic to $B_1^*(Y)$ under the same (restriction) map. \end{theorem} \begin{proof} Define a binary relation $``\sim"$ on $X$ by $x \sim y$ if and only if $f(x)=f(y)$, for all $f \in C(X)$. Then $\sim$ is an equivalence relation on $X$. 
Let $Y= X /\sim$ $\equiv \{[x]: x \in X\}$, where $[x]$ denotes the equivalence class of $x\in X$.\\ Define $\tau :X \rightarrow Y$ by $\tau(x)=[x]$, for all $x \in X$.\\ \noindent For each $f \in C(X)$, let $g_f \in C(Y)$ be defined by the rule $g_f([x])=f(x)$, for all $[x] \in Y$. Certainly $g_f$ is well defined and $g_f \circ \tau = f$. \\ \noindent Consider $C'=\{g_f: f \in C(X)\}$ and equip $Y$ with the weak topology induced by the family $C'$. Then $Y$ becomes a completely regular space \cite{GJ}. Also, $g_f \circ \tau $ is continuous for all $g_f \in C'$. Hence $\tau$ is continuous. \\ \noindent If $g \in C(Y)$ then $g\circ \tau \in C(X)$ and hence $g \circ \tau = f$, for some $f \in C(X)$. So, $\forall$ $[x] \in Y$, $g([x])=g(\tau(x))=f(x)=(g_f\circ \tau )(x)=g_f([x])$, i.e., $g = g_f$ and consequently, $C' = C(Y)$.\\ \noindent Let $[x] \neq [y]$ in $Y$. Then there exists $f \in C(X)$ such that $f(x) \neq f(y)$. So, $g_f[x] \neq g_f[y]$. This proves that $Y$ is Hausdorff and hence, a Tychonoff space.\\ \noindent Let $h \in B_1(Y)$ be any Baire one function on $Y$. There exists a sequence of continuous functions $\{h_n\} \subset C(Y)$ such that $\{h_n\}$ converges pointwise to $h$, i.e., $\lim\limits_{n\to\infty}h_n(y)=h(y)$, for all $y \in Y$. Clearly, $h_n \circ \tau \in C(X)$, $\forall n \in \mathbb{N}$ and also $\lim\limits_{n\to\infty}(h_n\circ \tau)(x)$ exists for all $x \in X$.\\ \noindent Define $\widehat{\psi}(h): X \rightarrow \mathbb{R}$ by $\widehat{\psi}(h)(x)=\lim\limits_{n\to\infty}(h_n\circ \tau)(x)$, for each $h \in B_1(Y)$. Then $\widehat{\psi}(h) \in B_1(X)$. Finally, define $\widehat{\psi}: B_1(Y) \rightarrow B_1(X)$ by $h \mapsto \widehat{\psi}(h)$. It is easy to check that $\widehat{\psi}$ is an isomorphism and in view of a result proved in \cite{AA}, the restriction of $\widehat{\psi}$ on $B_1^*(Y)$ to $B_1^*(X)$ is also an isomorphism. 
\end{proof} \noindent In \cite{AA}, we have established that every ring homomorphism $B_1(Y) \ra B_1(X)$ is a lattice homomorphism. As a consequence, we get \begin{corollary} The isomorphism $\widehat{\psi} : B_1(Y) \ra B_1(X)$ is a lattice isomorphism. \end{corollary} \noindent Theorem~\ref{MHS} ensures that it is enough to study the ring of Baire one functions defined on any Tychonoff space, instead of any arbitrary topological space. Therefore, in the rest of this paper, by a topological space we always mean a Tychonoff space, unless stated otherwise. \section{The structure space of $B_1(X)$} \noindent Let $X$ be a Tychonoff space. Consider $\MMM(B_1(X))$ as the collection of all maximal ideals of the ring $B_1(X)$. It is easy to observe that for each $f \in B_1(X)$, if $\widehat{\mathscr M_f} = \{\widehat{M} \in \MMM(B_1(X)) \ : \ f \in \widehat{M}\}$ then the collection $\{\widehat{\mathscr M_f} \ : \ f \in B_1(X)\}$ forms a \textbf{base for closed sets} for some topology $\zeta$ on $\MMM(B_1(X))$. This topological space $(\MMM(B_1(X)), \zeta)$ is called the \textbf{structure space} of $B_1(X)$ and the topology is known as the \textbf{hull-kernel topology}. It is well known that the structure space of any commutative ring with unity is always compact. Moreover, the structure space is Hausdorff if the ring is Gelfand (i.e., a ring where every prime ideal can be extended to a unique maximal ideal). Therefore, \begin{itemize}
\item $\MMM(B_1(X))$ is \textbf{compact}.
\item $\MMM(B_1(X))$ is \textbf{Hausdorff}, since $B_1(X)$ is a Gelfand ring~\cite{AA2}. \end{itemize} \noindent In \cite{AA2}, we have introduced $Z_B$-filter, $Z_B$-ultrafilter and studied their interplay with ideals and maximal ideals of $B_1(X)$. It has been observed that a bijective correspondence exists between the collection of all maximal ideals of $B_1(X)$ $\left(\equiv \MMM(B_1(X))\right)$ and the collection of all $Z_B$-ultrafilters on $X$. We now show that the structure space of $B_1(X)$, i.e., $\MMM(B_1(X))$ with \textbf{hull-kernel topology} is homeomorphic to the set of all $Z_B$-ultrafilters on X with Stone-topology.\\\\ \noindent We know that for each $p \in X$, $\mathscr U_p=\{Z \in Z(B_1(X)): p \in Z\}$ is a $Z_B$-ultrafilter on X. In fact $Z[\widehat{M_p}]=\mathscr U_p$. So, we can use the set $X$ as the index set for all $Z_B$-ultrafilters on $X$ of the form $\mathscr U_p$. We enlarge the set $X$ to a bigger set $\widetilde{X}$, which serves as an index set for the family of all $Z_B$-ultrafilters on $X$. For each $p \in \widetilde{X}$, let the corresponding $Z_B$-ultrafilter be denoted by $\mathscr U^p$ and whenever $p \in X$, we take $\mathscr U^p = \mathscr U_p = \{Z \in Z(B_1(X)): p \in Z\}$. So, $\{{\mathscr U^p: p \in \widetilde{X}}\}$ is the set of all $Z_B$-ultrafilters on $X$.\\ For each $Z \in Z(B_1(X))$, let $\overline{Z}= \{p \in \widetilde{X}: Z \in \mathscr U^p \}$. If $p \in Z$ then $Z \in \mathscr U^p = \mathscr U_p$ and hence, $p \in \overline{Z}$. i.e., $Z \subseteq \overline{Z}$. Also $\overline{X}= \widetilde{X}$. The collection $\mathscr B=\{\overline{Z}: Z \in Z(B_1(X))\}$ forms a base for closed sets for some topology on $\widetilde{X}$, as \begin{enumerate}
\item $\overline{\emptyset} = \{p \in \widetilde{X}:\emptyset \in \mathscr U^p\}= \emptyset \implies \emptyset \in \mathscr B$ .
\item For $Z_1, Z_2 \in Z(B_1(X))$, $\overline{Z_1 \cup Z_2}= \overline{Z_1} \cup \overline{Z_2}$, so the union of two sets in $\mathscr B$ belongs to $\mathscr B$. \end{enumerate} This topology is known as the \textbf{Stone-topology}. We simply write $\widetilde{X}$ to mean the space $\widetilde{X}$ with the Stone-topology. It is easy to check that $Z_1 \subseteq Z_2$ implies $\overline{Z_1} \subseteq \overline{Z_2}$, and also that $\overline{Z} \cap X= Z$. As a consequence, we get the following result: \begin{theorem}
For any $Z \in Z(B_1(X))$, $\overline{Z}= cl_{\widetilde{X}}Z$. \\
In particular, $cl_{\widetilde{X}}X=\widetilde{X}$. \end{theorem} \begin{proof}
Straightforward and hence omitted. \end{proof} \noindent For each maximal ideal $\widehat{M}$ in $B_1(X)$, $Z[\widehat{M}]$ is a unique $Z_B$-ultrafilter on $X$. Hence, $Z[\widehat{M}]= \mathscr U^p$, for some unique $p \in \widetilde{X}$. Therefore, define a map $\Phi : \MMM(B_1(X)) \rightarrow \widetilde{X}$ by $\Phi(\widehat{M})=p$, whenever $Z[\widehat{M}]= \mathscr U^p$. \begin{theorem} The structure space $\MMM(B_1(X))$ of the ring $B_1(X)$ is homeomorphic to $\widetilde{X}$ with the Stone-topology. \end{theorem} \begin{proof} The map $\Phi : \MMM(B_1(X)) \rightarrow \widetilde{X}$ defined by $\Phi(\widehat{M})=p$, whenever $Z[\widehat{M}]= \mathscr U^p$, is a bijection between $\MMM(B_1(X))$ and $\widetilde{X}$, because it was proved in \cite{AA2} that $\widehat{M},\widehat{N} \in \MMM(B_1(X))$ with $\widehat{M} \neq \widehat{N}$ implies $Z[\widehat{M}] \neq Z[\widehat{N}]$. Also, the collection $\{\widehat{\mathscr M_f}: f \in B_1(X)\}$ is a base for closed sets for the structure space of $B_1(X)$, i.e., $\MMM(B_1(X))$ with the hull-kernel topology, where $\widehat{\mathscr M_f}= \{\widehat{M}\in \MMM(B_1(X)): f \in \widehat{M}\}$. \\ For any $f \in B_1(X)$ and $\widehat{M} \in \MMM(B_1(X))$, $f \in \widehat{M} \iff Z(f) \in Z[\widehat{M}] \iff Z(f) \in \mathscr U^p \iff p \in cl_{\widetilde{X}}Z(f)$, where $\Phi(\widehat{M})=p$. Hence $\Phi(\widehat{\mathscr M_f})=cl_{\widetilde{X}}Z(f)=\overline{Z(f)}$, for any $f \in B_1(X)$. Clearly, $\Phi$ exchanges the basic closed sets between $\MMM(B_1(X))$ and $\widetilde{X}$. Therefore, $\Phi$ is a homeomorphism between the structure space of $B_1(X)$ and $\widetilde{X}$ with the Stone-topology. \end{proof} \begin{corollary} $\widetilde{X}$ with the Stone-topology is a compact Hausdorff space. \end{corollary} \begin{proof} Immediate, as $\MMM(B_1(X))$ is a compact Hausdorff space. \end{proof} \noindent The following result describes the collection of all maximal ideals of the ring $B_1(X)$ for a Tychonoff space $X$. 
\begin{theorem} A complete description of maximal ideals of the ring $B_1(X)$ is given by $\{\widehat{M^p}: p \in \widetilde{X}\}$, where $\widehat{M^p}= \{f \in B_1(X): p \in cl_{\widetilde{X}}Z(f)\}$. Further, if $p \neq q$ in $\widetilde{X}$ then $\widehat{M^p} \neq \widehat{M^q}$. \end{theorem} \begin{proof} A $Z_B$-ultrafilter $\mathscr U^p$ on $X$ corresponds to a unique point $p$ in $\widetilde{X}$ and for all $ Z \in Z(B_1(X))$, $Z \in \mathscr U^p$ if and only if $p \in \overline{Z} = cl_{\widetilde{X}}Z$.\\ Since $\{\mathscr U^p : p \in \widetilde{X}\}$ is the collection of all $Z_B$-ultrafilters on $X$, it follows that $\{Z_B^{-1}[\mathscr U^p]: p \in \widetilde{X}\}$ is the collection of all maximal ideals of $B_1(X)$. Let $Z_B^{-1}[\mathscr U^p]= \widehat{M^p}$. Then $\widehat{M^p}=\{f \in B_1(X): Z(f) \in \mathscr U^p\}$ $=\{f \in B_1(X):p \in cl_{\widetilde{X}}Z(f)\}$.\\ Again if $p \neq q$ in $\widetilde{X}$ then $\mathscr U^p \neq \mathscr U^q$, which implies $Z_B^{-1}[\mathscr U^p] \neq Z_B^{-1}[\mathscr U^q]$ and so, $\widehat{M^p} \neq \widehat{M^q}$. \end{proof}
\begin{theorem}
$\widehat{M^p}$ is a fixed maximal ideal in $B_1(X)$ if and only if $p \in X$.
\end{theorem} \begin{proof} Let $p \in X$. Then $\widehat{M^p}=\{f \in B_1(X)\ : \ p \in cl_{\widetilde{X}}Z(f)\}=\{f \in B_1(X) \ : \ p \in \overline{Z(f)}\}$. We know $\overline{Z(f)}= \{p \in \widetilde{X}:Z(f) \in \mathscr U^p\}$. So, $p \in X \cap \overline{Z(f)} \implies p \in Z(f) \implies f(p)=0$, i.e., $\widehat{M^p}= \{f \in B_1(X): f(p)=0\}= \widehat{M_p}=$ a fixed maximal ideal. Conversely, suppose $\widehat{M^q}$ is a fixed maximal ideal for some $q \in \widetilde{X}$. Since the collection of all fixed maximal ideals in the ring $B_1(X)$ is $\{\widehat{M_p}: p \in X\}$, where $\widehat{M_p}=\{f \in B_1(X)\ : f(p)=0\}$, we get $\widehat{M^q}=\widehat{M_p}$, for some $p \in X$. Hence, $\widehat{M^q}=\widehat{M_p}= \widehat{M^p}$ which implies $q =p \in X$. \end{proof} \section{Is $\MMM({B_1(X)})$ a compactification of $X$?} \noindent As proposed in the introduction of this paper, we introduce two weaker forms of embedding which we call $F_\sigma$-embedding and weak $F_\sigma$-embedding of $X$ in $\MMM(B_1(X))$. Before we define such weak embeddings, we recall a result from \cite{LV}: \begin{theorem}\cite{LV} \label{P1 thm_4.1} \textnormal{(i)} For any topological space $X$ and any metric space $Y$, $B_1(X,Y)$ $\subseteq \mathscr{F_\sigma}(X,Y)$, where $B_1(X,Y)$ denotes the collection of Baire one functions from $X$ to $Y$ and $\mathscr{F_\sigma}(X,Y)=\{f:X\rightarrow Y : f^{-1}(G)$ is an $F_\sigma$ set, for any open set $G \subseteq Y$\}.\\
\textnormal{(ii)} For a normal topological space $X$, $B_1(X,\mathbb{R})$ $= \mathscr{F_\sigma}(X,\mathbb{R})$.
\end{theorem}
\begin{definition} A function $f : X \rightarrow Y$ is called \\ (i) \textbf{$F_\sigma$-continuous} if for any open set $U$ of $Y$, $f^{-1}(U)$ is an $F_\sigma$ set in $X$.\\ (ii) \textbf{weak $F_\sigma$-continuous} if for any basic open set $U$ of $Y$, $f^{-1}(U)$ is an $F_\sigma$ set in $X$.\\ (iii) \textbf{$F_\sigma$-embedding} if $f$ is injective, $F_\sigma$-continuous and $f^{-1} : f(X) \rightarrow X$ is continuous.\\ (iv) \textbf{weak $F_\sigma$-embedding} if $f$ is injective, weak $F_\sigma$-continuous and $f^{-1} : f(X) \rightarrow X$ is continuous. \end{definition} \noindent It is quite easy to observe that a function $f : X \rightarrow Y$ is \textbf{$F_\sigma$-continuous} (respectively, \textbf{weak $F_\sigma$-continuous}) if and only if for any closed set (respectively, basic closed set) $C$ of $Y$, $f^{-1}(C)$ is a $G_\delta$ set in $X$. \begin{remark} In general, an $F_\sigma$-continuous function is always weak $F_\sigma$-continuous. If each open set of $Y$ is expressible as a countable union of basic open sets then weak $F_\sigma$-continuity coincides with $F_\sigma$-continuity of the function. Since every open set of $\RR$ is a countable union of disjoint open intervals, a function $f : X \rightarrow \RR$ is weak $F_\sigma$-continuous if and only if it is $F_\sigma$-continuous. \end{remark} \begin{theorem}\label{EMB} A $T_4$ space $X$ is densely weak $F_\sigma$-embedded in $\MMM(B_1(X))$. \end{theorem} \begin{proof} Define $\psi : X \ra \MMM(B_1(X))$ by $\psi(x) = \widehat{M}_x$, where $\widehat{M}_x = \{f \in B_1(X) \ : f(x) = 0\ \}$. Then $\psi$ is an injective function, as $x \neq y$ implies that $\widehat{M}_x \neq \widehat{M}_y$. Since $\{\widehat{\mathscr M_f} \ : \ f \in B_1(X)\}$ is a base for closed sets for the hull-kernel topology on $\MMM(B_1(X))$, $\psi^{-1}(\widehat{\mathscr M_f}) = \{x\in X \ : \ \widehat{M}_x \in \widehat{\mathscr M_f}\}$ $= Z(f)$, which is a $G_\delta$ set in $X$. 
Since $\psi$ pulls back all basic closed sets of $\MMM(B_1(X))$ to $G_\delta$ sets of $X$, $\psi$ is a weak $F_\sigma$-continuous function. \\\\ Since $\{Z(g) \ : \ g \in C(X)\}$ is a base for closed sets for the topology of $X$, and $(\psi^{-1})^{-1}(Z(g)) = \{\psi(x) \ : \ x \in Z(g)\}$ $= \{\widehat{M}_x \ : \ g \in \widehat{M}_x\}$ $= \widehat{\mathscr M_g} \cap \psi(X)$, a closed set in $\psi(X)$, $\psi^{-1} : \psi(X) \ra X$ is a continuous function. Therefore, $X$ is weak $F_\sigma$-embedded in $\MMM(B_1(X))$. \\\\ That $\psi(X)$ is dense in $\MMM(B_1(X))$ follows from the next observation: \begin{eqnarray*} \overline{\psi(X)} &=& \{\widehat{M} \in \MMM(B_1(X)) \ : \ \widehat{M} \supseteq \bigcap_{x\in X}\widehat{M}_x\}\\
&=& \{\widehat{M} \in \MMM(B_1(X)) \ : \ \widehat{M} \supseteq \{0\}\}\\
&=& \MMM(B_1(X)). \end{eqnarray*} \end{proof} \begin{corollary} If every closed set of $\MMM(B_1(X))$ is expressible as a countable intersection of $\{\widehat{\mathscr M_f} \ : \ f \in B_1(X)\}$ then the $T_4$-space $X$ is densely $F_\sigma$-embedded in $\MMM(B_1(X))$. \end{corollary} \noindent We shall show that every $F_\sigma$-continuous function from a $T_4$ space $X$ to a compact Hausdorff space $Y$ has a unique continuous extension on $\MMM(B_1(X))$. To establish our claim, we need a lemma: \begin{lemma}\label{Lem} Let $X$ be a normal space and $\psi : X \ra Y$ (where $Y$ is any topological space) be a $F_\sigma$-continuous function. For each $h \in C(Y)$, $h\circ \psi \in B_1(X)$. \end{lemma} \begin{proof} Let $C$ be a closed set in $\RR$. By continuity of $h$, $h^{-1}(C)$ is closed in $Y$. $F_\sigma$-continuity of $\psi$ implies that $\psi^{-1}(h^{-1}(C))$ is $G_\delta$. Hence, $(h\circ \psi)^{-1}(C)$ is a $G_\delta$-set in $X$. i.e., $h \circ \psi \in B_1(X)$. \end{proof} \begin{theorem}\label{EXT} If $f : X \rightarrow Y$ is a $F_\sigma$-continuous function from a $T_4$ space $X$ to a compact Hausdorff space $Y$ then there exists a unique continuous function $\widehat{f} : \MMM(B_1(X)) \ra Y$ such that $\widehat{f} \circ \psi = f$. \end{theorem} \begin{proof} Let $\widehat{M} \in \MMM(B_1(X))$. Construct $M_1 = \{g\in C(Y): g \circ f \in \widehat{M} \}$. By Lemma~\ref{Lem}, as $g\in C(Y)$ and $f$ is $F_\sigma$ continuous, $g\circ f \in B_1(X)$. Now it is easy to check that $M_1$ is a prime ideal in $C(Y)$ and therefore can be extended to a unique maximal ideal $M$ in $C(Y)$ (because, $C(Y)$ is a Gelfand ring). As $Y$ is compact, $M = M_y$, for some $y \in Y$. Hence, $g(y) = 0$, for all $g \in M_1$. In other words, $y \in \bigcap\limits_{g \in M_1} Z(g)$. It is also easy to observe that $\bigcap\limits_{g \in M_1} Z(g) = \{y\}$. 
So, define $\widehat{f} : \MMM(B_1(X)) \ra Y$ by $ \widehat{f}(\widehat{M}) = y $, where $\bigcap\limits_{g \in M_1} Z(g)= \{y\}$.\\ It is also clear that $\widehat{f} \circ \psi = f$. \\\\ \textbf{$\widehat{f}$ is continuous:} Let $\widehat{M} \in \MMM(B_1(X))$; we check the continuity of $\widehat{f}$ at $\widehat{M}$. Let $W$ be any neighbourhood of $\widehat{f}\big( \widehat{M}\big)$ in $Y$. Since $Y$ is a compact $T_2$ space, it is Tychonoff and hence $W$ contains a zero set neighbourhood of $\widehat{f}\big( \widehat{M}\big)$.\\
Let $\widehat{f}\big( \widehat{M}\big) \in Y \smallsetminus Z(g_1) \subseteq Z(g_2) \subseteq W$, for some $g_1,g_2 \in C(Y)$.\\
$\widehat{f}\big( \widehat{M}\big) \notin Z(g_1) \implies g_1 \notin M_1 \implies g_1 \circ f \notin \widehat{M}$. So, $\MMM(B_1(X)) \smallsetminus \widehat{\mathscr M _{g_1 \circ f}}$ is a basic open set of $\MMM(B_1(X))$ containing $\widehat{M}$.\\
Our claim is $\widehat{f}\bigg(\MMM(B_1(X)) \smallsetminus \widehat{\mathscr M _{g_1 \circ f}} \bigg) \subseteq W$.\\ Let $\widehat{N}\in \MMM(B_1(X)) \smallsetminus \widehat{\mathscr M _{g_1 \circ f}} \implies \widehat{N} \notin \widehat{\mathscr M _{g_1 \circ f}} \implies g_1 \circ f \notin \widehat{N} \implies g_1 \notin N_1$. Also, since $Y \smallsetminus Z(g_1) \subseteq Z(g_2)$, we have $g_1 g_2=0 \in N_1$. So, $g_2 \in N_1$ (since $N_1$ is a prime ideal), which implies $g_2 \circ f \in \widehat{N}$. Hence $g_2\big( \widehat{f}(\widehat{N})\big)=0$, i.e., $\widehat{f}\big( \widehat{N}\big) \in Z(g_2) \subseteq W$. So, $\widehat{f}\bigg(\MMM(B_1(X)) \smallsetminus \widehat{\mathscr M _{g_1 \circ f}} \bigg) \subseteq W$.\\ The uniqueness of $\widehat{f}$ follows from the fact that $\widehat{f}$ is continuous and $\psi(X)$ is dense in $\MMM(B_1(X))$.
\end{proof}
\begin{theorem}\label{ctype} For a $T_4$ space $X$, $B_1^*(X)$ is isomorphic to $C(\MMM(B_1(X)))$. In other words, for a $T_4$-space $X$, $B_1^*(X)$ is a C-type ring. \end{theorem} \begin{proof} Let $f \in B_1^*(X)$. Since $X$ is normal and $f$ is Baire one, it is $F_\sigma$-continuous. Using Theorem~\ref{EXT}, $f$ has a continuous extension $\widehat{f} : \MMM (B_1(X)) \rightarrow [r, s]$ where $f(X) \subseteq [r, s]\subseteq \RR$.\\ Define $\eta : B_1^*(X) \rightarrow C(\MMM (B_1(X)))$ by $\eta(f) = \widehat{f}$. \\ We claim that $\eta$ is a ring isomorphism. Clearly, for any $f, g \in B_1^*(X)$ and for any fixed maximal ideal $\widehat{M}_x$ of $\MMM (B_1(X))$, $$ (\widehat{f + g})(\widehat{M}_x) = \widehat{f}(\widehat{M}_x) + \widehat{g}(\widehat{M}_x) $$ and $$ (\widehat{fg})(\widehat{M}_x) = \widehat{f}\widehat{g}(\widehat{M}_x) $$ Using the notation of Theorem~\ref{EMB}, $\psi(X) = \{\widehat{M}_x \ : \ x\in X\}$ is dense in $\MMM(B_1(X))$ and hence, $\widehat{f + g}= \widehat{f}+ \widehat{g}$ and $\widehat{fg}= \widehat{f}\widehat{g}$. So, $\eta (f + g) = \eta(f) + \eta(g)$ and $\eta(fg) = \eta(f) \eta(g)$. i.e., $\eta$ is a ring homomorphism. That $\eta$ is one-one is clear from the definition of the function $\eta$. We finally show that $\eta$ is surjective.\\ For each $h \in C(\MMM(B_1(X)))$, $h\circ \psi \in B_1(X)$ and $(h\circ \psi)(X) \subseteq [a, b]$, for some $a< b$ in $\RR$. i.e., $h\circ \psi \in B_1^*(X)$. Now, $\eta(h\circ \psi) = \widehat{h\circ\psi}$ where $(\widehat{h\circ\psi})(\widehat{M}_x) = (h\circ\psi)(x) = h(\widehat{M}_x)$. Since $\widehat{h\circ\psi} = h$ on $\psi(X)$ and $\psi(X)$ is dense in $\MMM (B_1(X))$, we conclude that $\eta(h\circ \psi) = h$. \end{proof} \begin{corollary} If $X$ is a $T_4$ space then $\MMM(B_1(X)) \cong \MMM(B_1^*(X))$. \end{corollary} \begin{proof} If two rings are isomorphic then their structure spaces are homeomorphic. So, $\MMM(B_1^*(X)) \cong \MMM(C(\MMM(B_1(X))))$. 
Since $\MMM(B_1(X))$ is compact, $$\MMM(C(\MMM(B_1(X)))) \cong \MMM(B_1(X)).$$ Hence $\MMM(B_1(X)) \cong \MMM(B_1^*(X))$. \end{proof} So far we have seen that for a normal space $(X, \tau)$, we get a compact Hausdorff space $\MMM(B_1(X))$ such that $(X, \tau)$ is densely weak $F_\sigma$-embedded in $\MMM(B_1(X))$ and every Baire one function on $X$ has a unique continuous extension on it. For which class of spaces may we expect $\MMM(B_1(X))$ to be the $Stone$-$\check{C}ech$ compactification of $(X,\tau)$? In what follows, we partially resolve this matter.
Let $(X, \tau)$ be a $T_4$ space. Consider the collection $\BBB = \{Z(f) \ : \ f \in B_1(X)\}$. It is clear that $\emptyset = Z(1) \in \BBB$ and $Z(f) \cup Z(g) = Z(fg) \in \BBB$, for any $f, g \in B_1(X)$. So, $\BBB$ forms a base for closed sets for some topology $\sigma$ on $X$. Certainly, $\tau \subseteq \sigma$, as $\{Z(f) \ : \ f \in C(X)\}$ is a base for closed sets for the topology $\tau$. \\\\ If $B_1(X) = C(X)$ then of course $\tau = \sigma$. Does the converse hold? That is, if $\sigma = \tau$, does it follow that $B_1(X) = C(X)$? Before answering this question, we observe that $\MMM(B_1(X))$ is indeed the $Stone$-$\check{C}ech$ compactification of $(X, \sigma)$.\\ In the following theorem, to avoid any ambiguity, we use the notation $\MMM (B_1(X, \tau))$ to denote $\MMM(B_1(X))$ and $\MMM(C(X, \sigma))$ to denote the structure space of $C(X)$, where $X$ has $\sigma$ topology on it. We also use the notation $\beta X_\sigma$ to denote the $Stone$-$\check{C}ech$ compactification of $(X, \sigma)$. \begin{theorem}\label{SC} For a $T_4$ space $(X,\tau)$, $\MMM(B_1(X, \tau))$ is the $Stone$-$\check{C}ech$ compactification of $(X, \sigma)$. \end{theorem} \begin{proof} We first observe that $\psi^* : (X, \sigma) \rightarrow \MMM(B_1(X, \tau))$ given by $x \mapsto \widehat{M}_x$ is an embedding. That $\psi^*$ is a one-one map is already proved in Theorem~\ref{EMB}. It is easy to observe that $\psi^*$ carries a base for closed sets of $(X, \sigma)$ onto a base for closed sets of $\psi^*(X)$: For any $f\in B_1(X, \tau)$, \begin{eqnarray*} \psi^*(Z(f)) &=& \{\psi^*(x) \ : \ f(x) = 0\}\\
&=& \{\widehat{M}_x \ : \ f(x) = 0\}\\
&=& \{\widehat{M}_x \ : \ f\in \widehat{M}_x\}\\
&=& \widehat{\mathscr M}_f \cap \psi^*(X) \end{eqnarray*} Therefore, $(X, \sigma)$ is densely embedded in the compact Hausdorff space $\MMM(B_1(X, \tau))$. We now show that for any continuous function $f : (X, \sigma) \rightarrow Y$ (where $Y$ is any compact Hausdorff space), there exists $\widehat{f} : \MMM(B_1(X, \tau)) \rightarrow Y$ such that $\widehat{f}\circ \psi^* = f$.\\ Let $\widehat{M} \in \MMM(B_1(X))$. Construct $M_1 = \{g\in C(Y) \ : \ g\circ f \in \widehat{M}\}$. It is easy to check that $M_1$ is a prime ideal of $C(Y)$ and hence can be extended to a unique maximal ideal of $C(Y)$. $Y$ being a compact Hausdorff space, every maximal ideal of $C(Y)$ is fixed and hence, $M_1 \subseteq M_y$, for some $y\in Y$. So, for all $g \in M_1$, $g(y) = 0$ which implies that $y \in \bigcap\limits_{g\in M_1} Z(g)$. Clearly, $\bigcap\limits_{g\in M_1} Z(g) = \{y\}$. Define $\widehat{f} : \MMM(B_1(X, \tau)) \rightarrow Y$ by $\widehat{f}(\widehat{M}) = y$, whenever $\bigcap\limits_{g\in M_1} Z(g) = \{y\}$. Proceeding as in Theorem~\ref{EXT}, we observe that $\widehat{f}$ is a unique continuous function satisfying $\widehat{f}\circ \psi^* = f$. Hence, $\MMM(B_1(X, \tau))$ is the $Stone$-$\check{C}ech$ compactification of $(X, \sigma)$. \end{proof} \begin{corollary}\label{COR_SC} For any $T_4$ space $(X, \tau)$, $\MMM(B_1(X, \tau)) \cong \beta X_\sigma \cong \MMM(C(X, \sigma))$. \end{corollary} \begin{proof} Follows from the theorem and the fact that the structure space of $C(X, \sigma)$ is $\beta X_\sigma$. \end{proof} The following gives a complete description of the maximal ideals of ${B_1}^*(X)$ : \begin{theorem} For a $T_4$ space $X$, $\{\widehat{M}^{* p} \ : \ p \in \beta X_\sigma\}$ is the complete collection of maximal ideals of ${B_1}^*(X)$, where $\widehat{M}^{* p} = \{f \in B_1^*(X) \ : \ \widehat{f}(p) = 0\}$. Also, $p \neq q$ implies $\widehat{M}^{* p} \neq \widehat{M}^{* q}$. Moreover, $\widehat{M}^{*p}$ is a fixed maximal ideal if and only if $p \in X$. 
\end{theorem} \begin{proof} By Theorem~\ref{ctype}, the map $\eta : f \rightarrow \widehat{f}$ is an isomorphism from $B_1^*(X)$ onto $C(\mathcal{M}(B_1(X)))$. So, there is a one-one correspondence between the maximal ideals of $B_1^*(X)$ and those of $C(\mathcal{M}(B_1(X)))$. $\mathcal{M}(B_1(X))$ being compact, every maximal ideal of the ring $C(\mathcal{M}(B_1(X)))$ is of the form $\{h \in C(\mathcal{M}(B_1(X))) \ : \ h(p) = 0\}$, where $p \in \mathcal{M}(B_1(X)) \cong \beta X_\sigma$. So, the maximal ideals of $B_1^*(X)$ are given by \\ $\eta^{-1}\left( \{h \in C(\mathcal{M}(B_1(X))) \ : \ h(p) = 0\}\right) = \{f \in {B_1}^*(X) \ : \ \widehat{f}(p) = 0\} = \widehat{M}^{*p}$ (say), for each $p \in \beta X_\sigma$. \\ $p \neq q$ $\Rightarrow$ $\{h \in C(\mathcal{M}(B_1(X))) \ : \ h(p) = 0\} \neq $ $\{h \in C(\mathcal{M}(B_1(X))) \ : \ h(q) = 0\}$ and so, $\eta^{-1}\left(\{h \in C(\mathcal{M}(B_1(X))) \ : \ h(p) = 0\}\right) \neq $ $\eta^{-1}\left(\{h \in C(\mathcal{M}(B_1(X))) \ : \ h(q) = 0\}\right)$. i.e., $\widehat{M}^{* p} \neq \widehat{M}^{* q}$. \\ If $p \in X$ then clearly, $\widehat{M}^{* p} = \{f \in {B_1}^*(X) \ : \ f(p) = 0\} = \widehat{M}^*_p$, the fixed maximal ideal of ${B_1}^*(X)$. \\ If $q \in \beta X_\sigma \setminus X$, then we claim that $\widehat{M}^{* q}$ is not fixed. If possible, let it be a fixed maximal ideal of ${B_1}^*(X)$. Then $\widehat{M}^{* q} = {\widehat{M}^ *}_p$ for some $p \in X$. But in that case, $\widehat{M}^{* q} = \widehat{M}^{* p}$ and consequently, $p = q$, a contradiction. \end{proof} It is well known, by the Gelfand-Kolmogoroff theorem, that $\{M^p \ : \ p\in \beta X_\sigma\}$ is precisely the collection of all maximal ideals of $C(X, \sigma)$. So, $\{\widehat{M}^p \ : \ p\in \beta X_\sigma\}$ is the exact collection of all maximal ideals of $B_1(X)$, where $X$ has $\tau$ topology on it. Moreover, for each $p\in X$, $\widehat{M}^p = \widehat{M}_p$, a fixed maximal ideal of $B_1(X)$. 
So, under the isomorphism of Corollary~\ref{COR_SC}, $\widehat{M}_p$ of $\MMM(B_1(X, \tau))$ corresponds to $M_p$ of $\MMM(C(X, \sigma))$. As a result, we get the following: \begin{theorem}\label{COMP_FIX} $(X, \sigma)$ is compact if and only if each maximal ideal of $B_1(X, \tau)$ is fixed. \end{theorem} \begin{proof} Follows from Corollary~\ref{COR_SC}. \end{proof} \noindent It is easy to check that in $B_1(X)$, every maximal ideal is fixed if and only if every ideal is fixed. \\\\ Also, it is evident from the last theorem that for a $T_4$ space $(X, \tau)$, if each maximal ideal of $B_1(X, \tau)$ is fixed then $(X, \sigma)$ is compact and therefore $(X, \tau)$ is also compact. But $(X, \tau)$, being compact Hausdorff, is maximal compact and hence, $\tau = \sigma$. On the other hand, if $(X, \tau)$ is compact then it is maximal compact (since it is Hausdorff) and therefore, $\sigma$ is never compact if $\tau \neq \sigma$. As a consequence of this we get the following: \begin{theorem} If $(X, \tau)$ is a compact Hausdorff space and $\sigma$ is strictly finer than $\tau$ then there exists at least one free (maximal) ideal in $B_1(X, \tau)$. \end{theorem} \begin{proof} Follows from Theorem~\ref{COMP_FIX} and the fact that $(X, \tau)$ becomes maximal compact. \end{proof} It is now a relevant query for which spaces $\sigma$ remains strictly finer than $\tau$. We claim that for a $T_4$ space $(X, \tau)$, $\sigma = \tau$ if and only if $B_1(X) = C(X)$ and we establish our claim in what follows next. \begin{theorem}\label{sigma_tau} Let $(X, \tau)$ be a $T_4$ space and $\{Z(f) \ : \ f \in B_1(X, \tau)\}$ be a base for closed sets for the topology $\sigma$ on $X$. Then $\sigma = \tau$ if and only if $B_1(X,\tau) = C(X,\tau)$. \end{theorem} \begin{proof} If $B_1(X,\tau) = C(X,\tau)$ then certainly $\sigma = \tau$. For the converse, let $\sigma = \tau$. Consider any $f \in B_1(X,\tau)$ and any basic open set $(a, b)$ of $\RR$. 
Then $f^{-1}(a, b) = \{x\in X \ : \ a < f(x) < b\} = X \setminus \left(\{x\in X \ : f(x) \leq a\} \cup \{x\in X \ : \ f(x) \geq b\}\right)$ $= (X \setminus Z(g_1)) \cap (X \setminus Z(g_2)) = U$ (say), where one may take $g_1 = (f - a)\vee 0$ and $g_2 = (b - f)\vee 0$, both in $B_1(X,\tau)$. Since $Z(g_1)$ and $Z(g_2)$ are closed in $(X, \sigma)$, $U$ is open in $(X, \sigma)$. Hence, $f \in C(X, \sigma)$. By hypothesis, $\sigma = \tau$ and therefore, $C(X, \tau) \subseteq B_1(X, \tau) \subseteq C(X, \sigma) = C(X, \tau)$. i.e., $B_1(X, \tau) = C(X, \tau)$. \end{proof} Consequently, from Theorem~\ref{COMP_FIX} and Theorem~\ref{sigma_tau}, it follows that for a $T_4$ space $(X, \tau)$, if a non-continuous Baire one function exists then $B_1(X)$ has a free (maximal) ideal.\\ In general, $B_1(X) = C(X)$ does not always imply that $X$ is discrete. For example, if $X$ is a P-space then $B_1(X) = C(X)$ ~\cite{MW}. We shall now show that for a particular class of topological spaces, e.g., for perfectly normal $T_1$ spaces, $B_1(X) = C(X)$ is equivalent to the discreteness of the space. \begin{theorem}\label{SigmaIsDisc} If $(X, \tau)$ is a perfectly normal $T_1$-space then $\sigma$ is the discrete topology on $X$. \end{theorem} \begin{proof} Let $\{y\}$ be any singleton set in $(X, \sigma)$. Since $T_1$-ness is an expansive property and $(X, \tau)$ is $T_1$, it follows that $(X, \sigma)$ is also $T_1$. So, $\{y\}$ is closed in $(X, \sigma)$. Since $(X, \tau)$ is perfectly normal, $\{y\} = Z(g)$, for some $g \in C(X, \tau)$. Define $\chi_y : X \ra \RR$ as follows : $$ \chi_y(x) = \begin{cases}
1 & \text{if } x=y\\
0 & \text{otherwise}.
\end{cases} $$ It is easy to check that $\chi_y \in B_1(X,\tau)$. Also $Z(\chi_y) = X \setminus \{y\}$ which is an open set in $(X, \tau)$ and hence open in $(X, \sigma)$. By definition of $(X, \sigma)$, $Z(\chi_y)$ is a closed set in $(X, \sigma)$. Hence, $\{y\}$ is both open and closed in $(X, \sigma)$ which shows by arbitrariness of $\{y\}$, $\sigma$ is the discrete topology on $X$. \end{proof}
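\noindent The claim in the proof above that $\chi_y \in B_1(X, \tau)$ can be verified directly (a short check, included for convenience): with $\{y\} = Z(g)$ for $g \in C(X, \tau)$ as above, the continuous functions
$$ h_n(x) = \max\big(0,\, 1 - n\,|g(x)|\big), \qquad n = 1, 2, \ldots, $$
satisfy $h_n(y) = 1$ for every $n$, while for $x \neq y$ we have $g(x) \neq 0$ and hence $h_n(x) = 0$ as soon as $n > 1/|g(x)|$. Thus $h_n \ra \chi_y$ pointwise, i.e., $\chi_y \in B_1(X, \tau)$.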
\begin{theorem}\label{NotSC} For a perfectly normal $T_1$ space $(X, \tau)$ the following statements are equivalent: \begin{enumerate} \item $(X, \tau)$ is discrete. \item $B_1(X) = C(X)$. \item ${B_1}^*(X) = C^*(X)$. \item $\MMM({B_1}^*(X))$ is the $Stone$-$\check{C}ech$ compactification of $(X, \tau)$. \item $\MMM(B_1(X))$ is the $Stone$-$\check{C}ech$ compactification of $(X, \tau)$. \end{enumerate} \end{theorem} \begin{proof} $(1) \Rightarrow (2)$ : Immediate.\\\\ $(2) \Rightarrow (3)$ : $C^*(X) = {B_1}^*(X) \cap C(X) = {B_1}^*(X) \cap B_1(X) = {B_1}^*(X)$. \\\\ $(3) \Rightarrow (4)$ : ${B_1}^*(X) = C^*(X)$ implies that $\MMM({B_1}^*(X)) = \MMM(C^*(X))$ and it is well known that $\MMM(C^*(X))$ is the $Stone$-$\check{C}ech$ compactification of $(X, \tau)$. \\\\ $(4) \Rightarrow (5)$ : Follows from the fact that $\MMM({B_1}^*(X)) \cong \MMM(B_1(X))$.\\\\ $(5) \Rightarrow (1)$ : By Theorem~\ref{SC}, $\MMM(B_1(X, \tau))$ is the $Stone$-$\check{C}ech$ compactification of $(X, \sigma)$. By (5), $\MMM(B_1(X, \tau))$ is the $Stone$-$\check{C}ech$ compactification of $(X, \tau)$. Then the identity map $I : \MMM(B_1(X, \tau)) \ra \MMM(B_1(X, \tau))$ when restricted to $X$ becomes a homeomorphism between $(X, \tau)$ and $(X, \sigma)$. Hence, $\sigma = \tau$. Using Theorem~\ref{SigmaIsDisc}, $(X, \tau)$ is a discrete space. \end{proof} \begin{remark} There are plenty of non-discrete perfectly normal $T_1$ spaces (for example, any non-discrete metric space; in particular, $\RR$). By Theorem~\ref{NotSC}, we may certainly assert that at least for those spaces $\MMM(B_1(X))$ is never its $Stone$-$\check{C}ech$ compactification. Theorem~\ref{NotSC} also assures the existence of non-continuous Baire one functions in any non-discrete perfectly normal $T_1$ space. \end{remark}
\end{document} |
\begin{document}
\title{\bf Numerical Analysis of the Non-uniform Sampling Problem} \author{Thomas Strohmer\thanks{Department of Mathematics, University of California, Davis, CA-95616; [email protected]. \hspace*{5.4mm} The author was supported by NSF DMS grant 9973373.}}
\date{} \maketitle
\begin{abstract}
We give an overview of recent developments in the problem of reconstructing a band-limited signal from non-uniform sampling from a numerical analysis viewpoint. It is shown that the appropriate design of the finite-dimensional model plays a key role in the numerical solution of the non-uniform sampling problem. In one approach (often proposed in the literature) the finite-dimensional model leads to an ill-posed problem even in very simple situations. The other approach that we consider leads to a well-posed problem that preserves important structural properties of the original infinite-dimensional problem and gives rise to efficient numerical algorithms. Furthermore, a fast multilevel algorithm is presented that can reconstruct signals of unknown bandwidth from noisy non-uniformly spaced samples. We also discuss the design of efficient regularization methods for ill-conditioned reconstruction problems. Numerical examples from spectroscopy and exploration geophysics demonstrate the performance of the proposed methods. \end{abstract} Subject Classification: 65T40, 65F22, 42A10, 94A12 \\ \mbox{Key words:} non-uniform sampling, band-limited functions, frames, regularization, signal reconstruction, multi-level method.
\section{Introduction} \label{s:intro}
The problem of reconstructing a signal $f$ from non-uniformly spaced measurements $f(t_j)$ arises in areas as diverse as geophysics, medical imaging, communication engineering, and astronomy. A successful reconstruction of $f$ from its samples $f(t_j)$ requires a priori information about the signal, otherwise the reconstruction problem is ill-posed. This a priori information can often be obtained from physical properties of the process generating the signal. In many of the aforementioned applications the signal can be assumed to be (essentially) band-limited.
Recall that a signal (function) is band-limited with bandwidth $\Omega$ if it belongs to the space ${\Bsp_{\Omega}}$, given by \begin{equation} {\Bsp_{\Omega}} = \left\{ f \in {\Ltsp(\Rst)} : \hat{f}(\omega) = 0
\,\,\text{for}\,\,|\omega|>\Omega \right\}\,, \label{bandlim} \end{equation} where $\hat{f}$ is the Fourier transform of $f$ defined by $$\hat{f}(\omega) = \int \limits_{-\infty}^{+\infty} f(t) e^{-2 \pi i \omega t} \, dt\,.$$ For convenience and without loss of generality we restrict our attention to the case $\Omega = \frac{1}{2}$, since any other bandwidth can be reduced to this case by a simple dilation. Therefore we will henceforth use the symbol ${\Bsp}$ for the space of band-limited signals.
It is now more than 50 years since Shannon published his celebrated sampling theorem~\cite{Sha48}. His theorem implies that any signal $f \in {\Bsp}$ can be reconstructed from its regularly spaced samples $\{f(n)\}_{n \in {\mathbb Z}}$ by \begin{equation} f(t) = \sum_{n \in {\mathbb Z}} f(n)
\frac{\sin \pi(t - n)}{\pi(t-n)}\,. \label{shannon} \end{equation}
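As a quick numerical illustration (ours, not part of the original discussion; a sketch assuming NumPy, whose \texttt{sinc} uses exactly the kernel in~\eqref{shannon}), truncating the cardinal series already gives a good approximation for well-decaying band-limited signals:

```python
import numpy as np

# Truncated cardinal series: f(t) ~ sum_{|n| <= N} f(n) sinc(t - n),
# where np.sinc(x) = sin(pi x)/(pi x) matches the reconstruction kernel above.
def shannon_reconstruct(samples, grid, t):
    return np.sum(samples * np.sinc(t - grid))

f = lambda t: np.sinc(t - 0.3)     # band-limited test signal with bandwidth 1/2
grid = np.arange(-200, 201)        # integer sampling grid, truncated at N = 200
approx = shannon_reconstruct(f(grid), grid, 0.7)
err = abs(approx - f(0.7))         # small truncation error
```

The truncation error decays only slowly (like $1/N$ in the worst case), which is one reason the non-uniform problem below requires more care.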
In practice, however, we seldom enjoy the luxury of equally spaced samples. The solution of the nonuniform sampling problem poses many more difficulties, the crucial questions being: \begin{itemize} \vspace*{-1mm} \setlength{\itemsep}{-0.5ex} \setlength{\parsep}{-0.5ex} \item Under which conditions is a signal $f \in {\Bsp}$ uniquely defined
by its samples $\{f(t_j)\}_{j \in {\mathbb Z}}$? \item How can $f$ be stably reconstructed from its samples $f(t_j)$? \end{itemize}
These questions have led to a vast literature on nonuniform sampling theory with deep mathematical contributions; see~\cite{DS52,Lan67,BM67,BSS88,FG94}, to mention only a few. There is also no lack of methods claiming to efficiently reconstruct a function from its samples~\cite{Yen56,YT67,Ben92,FGS95,Win92,Mar93a,FG94}. These numerical methods naturally have to operate in a finite-dimensional model, whereas theoretical results are usually derived for the infinite-dimensional space ${\Bsp}$. From a numerical point of view the ``reconstruction'' of a bandlimited signal $f$ from a finite number of samples $\{f(t_j)\}_{j=1}^{r}$ amounts to computing an approximation to $f$ (or $\hat{f}$) at sufficiently dense (regularly) spaced grid points in an interval $(t_1, t_r)$.
Hence in order to obtain a ``complete'' solution of the sampling problem the following questions have to be answered: \begin{itemize} \vspace*{-1mm} \setlength{\itemsep}{-0.5ex} \setlength{\parsep}{-0.5ex} \item Does the approximation computed within the finite-dimensional model actually converge to the original signal $f$, when the dimension of the model approaches infinity? \item Does the finite-dimensional model give rise to fast and stable numerical algorithms? \end{itemize}
These are the questions that we have in mind when presenting an overview of recent advances and new results in the nonuniform sampling problem from a numerical analysis viewpoint.
In Section~\ref{ss:truncated} it is demonstrated that the celebrated frame approach leads to fast and stable numerical methods only when the finite-dimensional model is carefully designed. The approach usually proposed in the literature leads to an ill-posed problem even in very simple situations. We discuss several methods to stabilize the reconstruction algorithm in this case. In Section~\ref{ss:trigpol} we derive an alternative finite-dimensional model, based on trigonometric polynomials. This approach leads to a well-posed problem that preserves important structural properties of the original infinite-dimensional problem and gives rise to efficient numerical algorithms. Section~\ref{s:numeric} describes how this approach can be modified in order to reconstruct band-limited signals for the case, very important in practice, in which the bandwidth of the signal is not known. Furthermore, we present regularization techniques for ill-conditioned sampling problems. Finally, Section~\ref{s:applications} contains numerical experiments from spectroscopy and geophysics.
Before we proceed we introduce some notation that will be used throughout the paper. If not otherwise mentioned $\|h\|$ always denotes the ${\Ltsp(\Rst)}$-norm
(${\lsp^2(\Zst})$-norm) of a function (vector). For operators (matrices) $\|T\|$ is the standard operator (matrix) norm. The condition number of an invertible operator $T$ is defined by
$\kappa (T) = \|T\| \|T^{-1}\|$ and the spectrum of $T$ is $\sigma (T)$. $I$ denotes the identity operator.
\subsection{Nonuniform sampling, frames, and numerical algorithms} \label{s:theory}
The concept of frames is an excellent tool to study nonuniform sampling problems~\cite{Fei89,BH90,Ben92,Hig96,FG94,Zay93}. The frame approach has the advantage that it gives rise to deep theoretical results and also to the construction of efficient numerical algorithms -- {\em if} (and this point is often ignored in the literature) the finite-dimensional model is properly designed.
Following Duffin and Schaeffer~\cite{DS52}, a family $\{{f_j}\}_{j \in {\mathbb Z}}$ in a separable Hilbert space ${\boldsymbol H}$ is said to be a frame for ${\boldsymbol H}$, if there exist constants (the {\em frame bounds}) $A,B>0$ such that \begin{equation} \label{framedef}
A \|f\|^2 \le \sum_{j} |\langle f, {f_j} \rangle|^2 \le B \|f\|^2 \,, \qquad \forall f \in {\boldsymbol H}. \end{equation} We define the {\em analysis operator} $T$ by \begin{equation} T: f \in {\boldsymbol H} \rightarrow Tf = \{ \langle f, {f_j} \rangle\}_{j \in {\mathbb Z}}\,, \label{frameanal} \end{equation} and the {\em synthesis operator}, which is just the adjoint operator of $T$, by \begin{equation} T^{\ast}: c \in {\lsp^2(\Zst}) \rightarrow T^{\ast} c = \sum_{j} c_{j} {f_j}\,. \label{framesyn} \end{equation} The {\em frame operator} $S$ is defined by $S = T^{\ast} T$, hence $Sf = \sum_{j} \langle f, {f_j} \rangle {f_j}$. $S$ is bounded by $A I \le S \le B I$ and hence invertible on ${\boldsymbol H}$.
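For readers who prefer a concrete finite-dimensional example (ours, not the paper's): the three unit vectors at angles $90^\circ, 210^\circ, 330^\circ$ form a tight frame for $\mathbb{R}^2$ with $A = B = 3/2$, so the frame operator is $S = \tfrac{3}{2} I$ and the dual frame is simply $\gamma_j = \tfrac{2}{3} f_j$:

```python
import numpy as np

# Rows of F are the frame vectors f_j; the analysis operator is f -> F f.
angles = np.deg2rad([90.0, 210.0, 330.0])
F = np.column_stack([np.cos(angles), np.sin(angles)])

S = F.T @ F                 # frame operator S = T* T; here S = 1.5 * I (tight frame)

f = np.array([2.0, -1.0])
coeffs = F @ f                         # frame coefficients <f, f_j>
recon = (2.0 / 3.0) * (F.T @ coeffs)   # dual frame: gamma_j = S^{-1} f_j = (2/3) f_j
```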
We will also make use of the operator $T T^{\ast}$ in form of its Gram matrix representation $R: {\lsp^2(\Zst}) \rightarrow {\lsp^2(\Zst}) $ with entries $R_{j,l} = \langle f_j, f_l \rangle$. On $\operatorname{\mathcal R} (T) = \operatorname{\mathcal R} (R)$ the matrix $R$ is bounded by $A I \le R \le B I$ and invertible. On ${\lsp^2(\Zst})$ this inverse extends to the {\em Moore-Penrose inverse} or pseudo-inverse $R^{+}$ (cf.~\cite{EHN96}).
Given a frame $\{{f_j}\}_{j \in {\mathbb Z}}$ for ${\boldsymbol H}$, any $f \in {\boldsymbol H}$ can be expressed as \begin{equation} \label{frameexp} f = \sum_{j \in {\mathbb Z}} \langle f, {f_j} \rangle {\gamma_j}
= \sum_{j \in {\mathbb Z}} \langle f, {\gamma_j} \rangle {f_j} \,, \end{equation} where the elements ${\gamma_j} :=S^{-1} {f_j}$ form the so-called dual frame and the frame operator induced by ${\gamma_j}$ coincides with $S^{-1}$. Hence if a set $\{{f_j}\}_{j \in {\mathbb Z}}$ establishes a frame for ${\boldsymbol H}$, we can reconstruct any function $f \in {\boldsymbol H}$ from its moments $\langle f, {f_j} \rangle$.
One possibility to connect sampling theory to frame theory is by means of the {\em sinc}-function \begin{equation} {\sinc}(t) = \frac{\sin \pi t}{\pi t}\,. \label{sinc} \end{equation} Its translates give rise to a {\em reproducing kernel} for ${\Bsp}$ via \begin{equation} f(t) = \langle f, {\sinc}(\cdot - t) \rangle \quad \forall t, f \in {\Bsp}\,. \label{sincconv} \end{equation} Combining~\eqref{sincconv} with formulas~\eqref{framedef} and~\eqref{frameexp} we obtain the following well-known result~\cite{Fei89,BH90}. \begin{theorem} If the set ${\{\sinco(\cdot - t_j)\}_{j \in \Zst}}$ is a frame for ${\Bsp}$, then the function $f \in {\Bsp}$ is uniquely defined by the sampling set $\{f(t_j)\}_{j \in {\mathbb Z}}$. In this case we can recover $f$ from its samples by \begin{equation} \label{recon1} f(t) = \sum_{j \in {\mathbb Z}} f(t_j) \gamma_j \,, \qquad \text{where}\,\,\, \gamma_j = S^{-1} {\sinc}(\cdot - t_j)\,, \end{equation}
or equivalently by \begin{equation} \label{recon2} f(t) = \sum_{j \in {\mathbb Z}} c_j {\sinc}(t-t_j)\,, \qquad \text{where}\,\,\, Rc=b\,, \end{equation}
with $R$ being the frame Gram matrix with entries $R_{j,l}= {\sinc}(t_j - t_l)$ and $b=\{b_j\}=\{f(t_j)\}$. \end{theorem}
The challenge is now to find easy-to-verify conditions for the sampling points $t_j$ such that ${\{\sinco(\cdot - t_j)\}_{j \in \Zst}}$ (or equivalently the exponential system $\{e^{2 \pi i t_j \omega}\}_{j \in {\mathbb Z}}$) is a frame for ${\Bsp}$. This is a well-traversed area (at least for one-dimensional signals), and the reader should consult~\cite{Ben92,FG94,Hig96} for further details and references. If not otherwise mentioned from now on we will assume that ${\{\sinco(\cdot - t_j)\}_{j \in \Zst}}$ is a frame for ${\Bsp}$.
Of course, neither of the formulas~\eqref{recon1} and~\eqref{recon2} can be actually implemented on a computer, because both involve the solution of an infinite-dimensional operator equation, whereas in practice we can only compute a finite-dimensional approximation. Although the design of a valid finite-dimensional model poses severe mathematical challenges, this step is often neglected in theoretical as well as numerical treatments of the nonuniform sampling problem. We will see in the sequel that the way we design our finite-dimensional model is crucial for the stability and efficiency of the resulting numerical reconstruction algorithms.
In the next two sections we describe two different approaches for obtaining finite-dimensional approximations to the formulas~\eqref{recon1} and~\eqref{recon2}. The first and more traditional approach, discussed in Section~\ref{ss:truncated}, applies a finite section method to equation~\eqref{recon2}. This approach leads to an ill-posed problem involving the solution of a large unstructured linear system of equations. The second approach, outlined in Section~\ref{ss:trigpol}, constructs a finite model for the operator equation in~\eqref{recon1} by means of trigonometric polynomials. This technique leads to a well-posed problem that is tied to efficient numerical algorithms.
\section{Truncated frames lead to ill-posed problems} \label{ss:truncated}
According to equation~\eqref{recon2} we can reconstruct $f$ from its sampling values $f(t_j)$ via $f(t) = \sum_{j \in {\mathbb Z}} c_j\, {\sinc}(t - t_j)$, where $c={R^{+}} b$ with $b_j = f(t_j), j \in {\mathbb Z}$. In order to compute a finite-dimensional approximation to $c = \{c_j\}_{j \in {\mathbb Z}}$ we use the finite section method \cite{GF74}. For $x \in {\lsp^2(\Zst})$ and $n \in {\mathbb N}$ we define the orthogonal projection ${P_{n}}$ by \begin{equation} \label{defP} {P_{n}} x = (\dots, 0,0, x_{-n}, x_{-n+1},\dots , x_{n-1}, x_n, 0,0, \dots) \end{equation} and identify the image of ${P_{n}}$ with the space ${\mathbb C}^{2n+1}$. Setting ${R_{n}} = {P_{n}} R {P_{n}}$ and ${b^{(n)}} = {P_{n}} b$, we obtain the $n$-th approximation ${c^{(n)}}$ to $c$ by solving \begin{equation} {R_{n}} {c^{(n)}} = {b^{(n)}}\,. \label{finitesec} \end{equation}
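Concretely, the finite section method amounts to assembling the central $(2n+1)\times(2n+1)$ block of the Gram matrix and solving one linear system. The following sketch (ours, with a hypothetical mildly jittered sampling set, for which the truncated system is still well-conditioned) evaluates the reconstruction~\eqref{recon2} at a single point:

```python
import numpy as np

def finite_section_coeffs(t_pts, samples):
    # Truncated Gram matrix R_n with entries sinc(t_j - t_l)
    R_n = np.sinc(np.subtract.outer(t_pts, t_pts))
    return np.linalg.solve(R_n, samples)

rng = np.random.default_rng(1)
t_pts = np.arange(-20, 21) + 0.1 * rng.uniform(-1, 1, 41)  # jittered integer grid
f = lambda t: np.sinc(t - 0.25)                            # band-limited test signal
c = finite_section_coeffs(t_pts, f(t_pts))
f_half = np.sum(c * np.sinc(0.5 - t_pts))                  # reconstruction at t = 0.5
```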
It is clear that using the truncated frame $\{{\sinc}(\cdot - t_j)\}_{j=-n}^{n}$ in~\eqref{recon2} for an approximate reconstruction of $f$ leads to the same system of equations.
If ${\{\sinco(\cdot - t_j)\}_{j \in \Zst}}$ is an exact frame (i.e., a Riesz basis) for ${\Bsp}$ then we have the following well-known result. \begin{lemma} Let ${\{\sinco(\cdot - t_j)\}_{j \in \Zst}}$ be an exact frame for ${\Bsp}$ with frame bounds $A,B$, and let $Rc=b$ and ${R_{n}} {c^{(n)}} = {b^{(n)}}$ be as defined above. Then ${R_{n}^{-1}}$ converges strongly to ${R^{-1}}$ and hence ${c^{(n)}} \rightarrow c$ for ${n \rightarrow \infty }$. \end{lemma} Since the proof of this result given in~\cite{Chr96b} is somewhat lengthy, we include a rather short proof here.
\begin{proof} Note that $R$ is invertible on ${\lsp^2(\Zst})$ and $A \le R \le B$.
Let $x \in {\mathbb C}^{2n+1}$ with $\|x\| =1$, then $\langle {R_{n}} x, x \rangle = \langle {P_{n}} R {P_{n}} x, x \rangle =
\langle Rx,x \rangle \ge A$. In the same way we get $\|{R_{n}} \| \le B$, hence the matrices ${R_{n}}$ are invertible and uniformly bounded by $A \le {R_{n}} \le B$ and $$\frac{1}{B} \le {R_{n}^{-1}} \le \frac{1}{A} \qquad \text{for all} \,\,n \in {\mathbb N}.$$ The Lemma of Kantorovich~\cite{RM94} yields that ${R_{n}^{-1}} \rightarrow {R^{-1}}$ strongly. \end{proof}
If ${\{\sinco(\cdot - t_j)\}_{j \in \Zst}}$ is a non-exact frame for ${\Bsp}$ the situation is more delicate. Let us consider the following situation.
\noindent {\bf Example 1:} Let $f \in {\Bsp}$ and let the sampling points be given by $t_j = \frac{j}{m}, j \in {\mathbb Z}, 1 < m \in {\mathbb N}$, i.e., the signal is regularly oversampled at $m$ times the Nyquist rate. In this case the reconstruction of $f$ is trivial, since the set $\{{\sinc}(\cdot - t_j)\}_{j \in {\mathbb Z}}$ is a tight frame with frame bounds $A=B=m$. Shannon's Sampling Theorem implies that $f$ can be expressed as $f(t) = \sum_{j \in {\mathbb Z}} c_j \, {\sinc}(t-t_j)$ where $c_j = \frac{f(t_j)}{m}$ and the numerical approximation is obtained by truncating the summation, i.e., $$f_n(t) = \sum_{j =-n}^{n} \frac{f(t_j)}{m}\, {\sinc}(t-t_j)\,.$$
Using the truncated frame approach one finds that $R$ is a Toeplitz matrix with entries $$R_{j,l}=\frac{\sin\frac{\pi}{m} (j-l)}{\frac{\pi}{m}(j-l)} \,, \qquad j,l \in {\mathbb Z}\,,$$ in other words, ${R_{n}}$ coincides with the prolate matrix~\cite{sle78,Var93}. The unpleasant numerical properties of the prolate matrix are well-documented. In particular we know that the singular values $\lambda_n$ of ${R_{n}}$ cluster around $0$ and $1$ with $\log n$ singular values in the transition region. Since the singular values of ${R_{n}}$ decay exponentially to zero the finite-dimensional reconstruction problem has become {\em severely ill-posed}~\cite{EHN96}, although the infinite-dimensional problem is ``perfectly posed'' since the frame operator satisfies $S = mI$, where $I$ is the identity operator.
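The ill-conditioning described above is easy to reproduce numerically (our sketch, for $m = 2$-fold oversampling): the smallest eigenvalue of the truncated Gram matrix ${R_{n}}$ collapses towards zero as $n$ grows, even though the underlying frame operator is $S = 2I$.

```python
import numpy as np

def truncated_gram(n, m=2):
    # R_{j,l} = sinc((j - l)/m) for j, l = -n, ..., n (a scaled prolate matrix)
    idx = np.arange(-n, n + 1)
    return np.sinc(np.subtract.outer(idx, idx) / m)

mins = {}
for n in (5, 10, 20):
    eigs = np.linalg.eigvalsh(truncated_gram(n))
    mins[n] = eigs.min()   # decays rapidly towards 0 as n grows
```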
Of course the situation does not improve when we consider non-uniformly spaced samples. In this case it follows from standard linear algebra that $\sigma(R) \subseteq \{0\} \cup [A,B]$, or expressed in words, the nonzero singular values of $R$ are bounded away from zero. However for the truncated matrices ${R_{n}}$ we have $$\sigma ({R_{n}})\subseteq (0,B]$$ and the smallest singular values of ${R_{n}}$ tend to zero for ${n \rightarrow \infty }$, see~\cite{Har98}.
Let $A=U\Sigma V^{\ast}$ be the singular value decomposition of a matrix $A$ with $\Sigma = \mbox{diag\/}(\{\lambda_{k}\})$. Then the Moore-Penrose inverse of $A$ is $A^+ = V \Sigma^+ U^{\ast}$, where (e.g., see~\cite{GL96}) \begin{equation} \Sigma^{+} = \mbox{diag\/}(\{\lambda_{k}^{+}\})\,, \quad \lambda_{k}^{+} = \begin{cases} 1/\lambda_k & \text{if}\,\, \lambda_k \neq 0, \\ 0 & \text{otherwise.} \end{cases} \label{pinv} \end{equation}
For ${R_{n}} = U_n \Sigma_n V_n^{\ast}$ this means that the singular values close to zero will give rise to extremely large coefficients in ${R_{n}^{+}}$. In fact $\|{R_{n}^{+}}\| {\rightarrow \infty }$ for ${n \rightarrow \infty }$ and consequently ${c^{(n)}}$ does not converge to $c$.
In practice $\|{R_{n}^{+}}\|$ is always bounded due to finite precision arithmetic, but it is clear that it will lead to meaningless results for large $n$. If the sampling values are perturbed by round-off error or data error, then those error components which correspond to small singular values $\lambda_k$ are amplified by the (then large) factors $1/\lambda_k$. Although for a given ${R_{n}}$ these amplifications are theoretically bounded, they may be practically unacceptably large.
Such phenomena are well-known in regularization theory~\cite{EHN96}. A standard technique to compute a stable solution for an ill-conditioned system is to use a truncated singular value decomposition (TSVD)~\cite{EHN96}. This means in our case we compute a regularized pseudo-inverse ${R_{n}^{+,\thresh}} = V_n \Sigma_n^{+,{\tau}} U_n^{\ast}$ where \begin{equation} \Sigma^{+,{\tau}} = \mbox{diag\/}(\{d_{k}^{+}\})\,, \quad d_{k}^{+} = \begin{cases} 1/\lambda_k & \text{if} \,\,\lambda_k \ge {\tau}, \\ 0 & \text{otherwise.} \end{cases} \label{pinvtrunc} \end{equation} In~\cite{Har98} it is shown that for each $n$ we can choose an appropriate truncation level ${\tau}$ such that the regularized inverses ${R_{n}^{+,\thresh}}$ converge strongly to ${R^{+}}$ for ${n \rightarrow \infty }$ and
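A minimal implementation of the TSVD-regularized solve corresponding to~\eqref{pinvtrunc} reads as follows (the function name is ours):

```python
import numpy as np

def tsvd_solve(A, b, tau):
    """Regularized solution of A x = b via truncated SVD:
    contributions of singular values below the level tau are discarded."""
    U, s, Vh = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s >= tau                      # truncation level tau
    s_inv[keep] = 1.0 / s[keep]
    return Vh.conj().T @ (s_inv * (U.conj().T @ b))
```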
consequently $\underset{{n \rightarrow \infty }}{\lim} \|f - {f^{(n)}}\| = 0$, where \begin{equation} {f^{(n)}}(t) = \sum_{j=-n}^{n} c_{j}^{(n,{\tau})} {\sinc}(t-t_j) \notag \end{equation} with \begin{equation} c^{(n,{\tau})} = {R_{n}^{+,\thresh}} {b^{(n)}}\,. \notag \end{equation} The optimal truncation level ${\tau}$ depends on the dimension $n$, the sampling geometry, and the noise level. Thus it is not known a priori and has in principle to be determined for each $n$ independently.
Since ${\tau}$ is of vital importance for the quality of the reconstruction, but no theoretical explanations for the choice of ${\tau}$ are given in the sampling literature, we briefly discuss this issue. For this purpose we need some results from regularization theory.
\subsection{Estimation of regularization parameter} \label{ss:est}
Let $Ax = y^{\delta}$ be given where $A$ is ill-conditioned or singular and $y^{\delta}$ is a perturbed right-hand side with
$\|y - y^{\delta}\| \le \delta \|y\|$. Since in our sampling problem the matrix under consideration is symmetric, we assume for convenience that $A$ is symmetric. From a numerical point of view ill-conditioned systems behave like singular systems and additional information is needed to obtain a satisfactory solution to $Ax=y$. This information is usually stated in terms of ``smoothness'' of the solution $x$. A standard approach to qualitatively describe smoothness of $x$ is to require that $x$ can be represented in the form $x=Sz$ with some vector $z$ of reasonable norm, and a ``smoothing'' matrix $S$, cf.~\cite{EHN96,Neu98}. Often it is useful to construct $S$ directly from $A$ by setting \begin{equation} S = A^p\,, \qquad p \in {\mathbb N}_0 \,. \label{smoothness} \end{equation} Usually, $p$ is assumed to be fixed, typically at $p=1$ or $p=2$.
We compute a regularized solution to $Ax=y^{\delta}$ via a truncated SVD and want to determine the optimal regularization parameter (i.e., truncation level) $\tau$.
Under the assumption that \begin{equation}
x = Sz \, , \quad \|Ax-y^{\delta}\| \le \Delta \|z\| \end{equation} it follows from Theorem~4.1 in~\cite{Neu98} that the optimal regularization parameter $\tau$ for the TSVD is \begin{equation} \hat{\tau}=\left(\frac{\gamma_1 \delta}{\gamma_2 p}\right)^{\frac{1}{p+1}}\,, \label{optreg} \end{equation} where $\gamma_1=\gamma_2 =1$ (see Section~6 in~\cite{Neu98}).
However $z$ and $\Delta$ are in general not known. Using
$\|Ax-y^{\delta}\| \le \delta \|y\|$ and $\|y\|=\|Ax\|=\|A Sz\|=\|A^{p+1} z\|$
we obtain $ \|y\| \le \|A\|^{p+1}\|z\|$. Furthermore, setting
$\delta \|y\| = \Delta \|z\|$ implies \begin{equation}
\Delta \le \delta \|A\|^{p+1}\,. \label{Delta} \end{equation} Hence combining~\eqref{optreg} and~\eqref{Delta} we get \begin{equation}
\hat{\tau} \le \left( \frac{\delta \|A\|^{p+1}}{p}\right)^{\frac{1}{p+1}}
= \|A\| \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}}\,. \label{tauest} \end{equation}
Applying these results to solving ${R_{n}} {c^{(n)}} = {b^{(n)}}$ via TSVD as described in the previous section, we get \begin{equation}
\hat{\tau} \le \|{R_{n}}\| \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}}
\le \|R\| \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}} = B \left(\frac{\delta}{p}\right)^{\frac{1}{p+1}}\,, \label{threshopt} \end{equation} where $B$ is the upper frame bound. Fortunately estimates for the upper frame bound are much easier to obtain than estimates for the lower frame bound.
Thus using the standard setting $p=1$ or $p=2$ a good choice for the regularization parameter $\tau$ is \begin{equation} {\tau} \in [B\delta^{1/2},\,B(\delta/2)^{1/3}]\,. \label{thresh} \end{equation} Extensive numerical simulations confirm this choice, see also Section~\ref{s:applications}.
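Numerically, the bound~\eqref{threshopt} is a one-liner; for the values of Example~1 ($\delta = 10^{-16}$, and $B = 1$ for simplicity) it reproduces the range quoted in the text:

```python
def tsvd_threshold(B, delta, p):
    """Upper bound for the TSVD truncation level:
    tau <= B * (delta / p) ** (1 / (p + 1)),  cf. (threshopt)."""
    return B * (delta / p) ** (1.0 / (p + 1))
```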
For instance for the reconstruction problem of Example~1 with noise-free data and machine precision $\varepsilon = \delta = 10^{-16}$, formula~\eqref{thresh} implies ${\tau} \in [10^{-8},10^{-6}]$. This coincides very well with numerical experiments.
If the noise level $\delta$ is not known, it has to be estimated. This difficult problem will not be discussed here. The reader is referred to~\cite{Neu98} for more details.
Although we have arrived now at an implementable algorithm for the nonuniform sampling problem, the disadvantages of the approach described in the previous section are obvious. In general the matrix ${R_{n}}$ does not have any particular structure, thus the computational costs for the singular value decomposition are ${\cal O }(n^3)$, which is prohibitively large in many applications. It is definitely not a good approach to transform a well-posed infinite-dimensional problem into an ill-posed finite-dimensional problem for which a stable solution can only be computed by using a ``heavy regularization machinery''.
The methods in~\cite{Yen56,YT67,Win92,San94,BH90} coincide with or are essentially equivalent to the truncated frame approach, therefore they suffer from the same instability problems and the same numerical inefficiency.
\subsection{CG and regularization of the truncated frame method} \label{ss:cgtrunc}
As mentioned above one way to stabilize the solution of ${R_{n}} {c^{(n)}} = {b^{(n)}}$ is a truncated singular value decomposition, where the truncation level serves as regularization parameter. For large $n$ the costs of the singular value decomposition become prohibitive for practical purposes.
We propose the conjugate gradient method~\cite{GL96} to solve ${R_{n}} {c^{(n)}} = {b^{(n)}}$. It is in general much more efficient than a TSVD (or Tikhonov regularization as suggested in~\cite{Win92}), and at the same time it can also be used as a regularization method.
The standard error analysis for CG cannot be used in our case, since the matrix is ill-conditioned. Rather we have to resort to the error analysis developed in~\cite{NP84,Han95}.
When solving a linear system $Ax=y$ by CG for noisy data $y^{\delta}$, the following happens. The iterates $x_k$ of CG may diverge for $k \rightarrow \infty$; however, the error propagation remains limited in the beginning of the iteration. The quality of the approximation therefore depends on how many iterative steps can be performed until the iterates begin to diverge. The idea is to stop the iteration at about the point where divergence sets in. In other words, the iteration count is the regularization parameter, which remains to be controlled by an appropriate stopping rule~\cite{Nem86,Han95}.
In our case assume $\|{b^{(n,\delta)}}-{b^{(n)}}\| \le \delta \|{b^{(n)}}\|$, where $b_j^{(n,\delta)}$ denotes a noisy sample. We terminate the CG iterations when the iterates $({c^{(n,\delta)}})_k$ satisfy for the first time~\cite{Han95} \begin{equation}
\|{b^{(n,\delta)}} - {R_{n}}\,({c^{(n,\delta)}})_k\| \le \tau \delta \|{b^{(n)}}\| \label{stopcg} \end{equation} for some fixed $\tau >1$.
It should be noted that one can construct ``academic'' examples where this stopping rule does not prevent CG from diverging, see~\cite{Han95}; ``most of the time'', however, it gives satisfactory results. We refer the reader to~\cite{Nem86,Han95} for a detailed discussion of various stopping criteria.
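A sketch of CG with this discrepancy stopping rule, for a symmetric system in real arithmetic (names and default values are our choices):

```python
import numpy as np

def cg_discrepancy(A, b, delta, tau=1.5, max_iter=500):
    """Conjugate gradients for a symmetric system A x = b with noisy
    right-hand side b; stop as soon as the residual satisfies
    ||b - A x_k|| <= tau * delta * ||b||   with fixed tau > 1."""
    x = np.zeros_like(b)
    r = b.copy()                 # residual for the starting guess x = 0
    p = r.copy()
    rs = r @ r
    tol = tau * delta * np.linalg.norm(b)
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol:
            break                # discrepancy principle: stop here
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```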
There is a variety of reasons, besides the ones we have already mentioned, that make the conjugate gradient method and the nonuniform sampling problem a ``perfect couple''. See Sections~\ref{ss:trigpol},~\ref{ss:ml}, and~\ref{ss:regul} for more details.
By combining the truncated frame approach with the conjugate gradient method (with appropriate stopping rule) we finally arrive at a reconstruction method that is of some practical relevance. However the only existing method at the moment that can handle large scale reconstruction problems seems to be the one proposed in the next section.
\section{Trigonometric polynomials and efficient signal reconstruction} \label{ss:trigpol}
In the previous section we have seen that the naive finite-dimensional approach via truncated frames is not satisfactory, as it already leads to severe stability problems in the ideal case of regular oversampling. In this section we propose a different finite-dimensional model, which much better reflects the structural properties of the sampling problem, as will be seen below.
The idea is simple. In practice only a finite number of samples $\{f(t_j)\}_{j=1}^{r}$ is given, where without loss of generality we assume $-M \le t_1 < \dots < t_r \le M$ (otherwise we can always re-normalize the data). Since no data of $f$ are available from outside this region we focus on a local approximation of $f$ on $[-M,M]$. We extend the sampling set periodically across the boundaries, and identify this interval with the (properly normalized) torus ${\mathbb T}$. To avoid technical problems at the boundaries in the sequel we will choose the interval somewhat larger and consider either $[-M-1/2,M+1/2]$ or $[-N,N]$ with $N= M+\frac{M}{r-1}$. For theoretical considerations the choice $[-M-1/2,M+1/2]$ is more convenient.
Since the dual group of the torus ${\mathbb T}$ is ${\mathbb Z}$, periodic band-limited functions on ${\mathbb T}$ reduce to trigonometric polynomials (of course technically $f$ does then no longer belong to ${\Bsp}$ since it is no longer in ${\Ltsp(\Rst)}$). This suggests using trigonometric polynomials as a realistic finite-dimensional model for a numerical solution of the nonuniform sampling problem. We consider the space ${\Psp_{\! M}}$ of trigonometric polynomials of degree $M$ of the form \begin{equation} p(t) = (2M+1)^{-1/2} \sum_{k=-M}^{M} a_{k} e^{2\pi i kt/(2M+1)}\,. \label{pm} \end{equation} The norm of $p \in {\Psp_{\! M}}$ is
$$\|p\|^2 =\int \limits_{-N}^{N} |p(t)|^2\, dt =\sum_{k=-M}^{M} |a_k|^2 \,.$$ Since the distributional Fourier transform of $p$ is $\hat{p} = (2M+1)^{-1/2} \sum_{k=-M}^{M} a_k \delta_{k/(2M+1)}$ we have
$\mbox{supp\/} \hat{p} \subseteq \{ k/(2M+1) , |k| \le M\} \subseteq [-1/2, 1/2]$. Hence ${\Psp_{\! M}}$ is indeed a natural finite-dimensional model for ${\Bsp}$.
In general the $f(t_j)$ are not the samples of a trigonometric polynomial in ${\Psp_{\! M}}$, moreover the samples are usually perturbed by noise, hence we may not find a $p \in {\Psp_{\! M}}$ such that $p(t_j)=b_j = f(t_j)$. We therefore consider the least squares problem \begin{equation}
\underset{p \in {\Psp_{\! M}} }{ \min } \sum_{j=1}^{r} |p(t_j) -b_j|^2 w_j\,. \label{LSP} \end{equation} Here the $w_j >0$ are user-defined weights, which can be chosen for instance to compensate for irregularities in the sampling geometry~\cite{FGS95}.
By increasing $M$ so that $ r \le 2M+1$ we can certainly find a trigonometric polynomial that interpolates the given data exactly. However in the presence of noise, such a solution is usually rough and highly oscillating and may poorly resemble the original signal. We will discuss the question of the optimal choice of $M$ if the original bandwidth is not known and in presence of noisy data in Section~\ref{ss:regul}.
The following theorem provides an efficient numerical reconstruction algorithm. It is also the key for the analysis of the relation between the finite-dimensional approximation in ${\Psp_{\! M}}$ and the solution of the original infinite-dimensional sampling problem in ${\Bsp}$.
\begin{theorem}[{\bf and Algorithm}] \cite{Gro93a,FGS95} \label{th:act} Let sampling points $-M \le t_1 < \dots < t_{r} \le M$, samples $\{b_j\}_{j=1}^r$, and positive weights $\{w_j\}_{j=1}^{r}$ with $2M+1 \le r$ be given. \\ Step 1: Compute the $(2M+1)\times (2M+1)$ Toeplitz matrix $T_M$ with entries \begin{equation} (T_M)_{k,l}=\frac{1}{2M+1}\sum_{j=1}^{r} w_j e^{-2\pi i (k-l) t_j/(2M+1)}
\qquad \mbox{for $|k|,|l| \le M$} \label{toepmat} \end{equation} and $y_M \in {\mathbb C}^{(2M+1)}$ by \begin{equation} \label{rightside} (y_M)_k=\frac{1}{\sqrt{2M+1}} \sum_{j=1}^{r} b_j w_j e^{-2\pi i k t_j/(2M+1)}
\qquad \mbox{for $|k| \le M$} \,. \end{equation} \noindent Step 2: Solve the system \begin{equation} \label{toepsys} T_M a_M = y_M \,. \end{equation} \noindent Step 3: Then the polynomial ${p_{M}} \in {\Psp_{\! M}}$ that solves~\eqref{LSP} is given by \begin{equation} {p_{M}}(t)=\frac{1}{\sqrt{2M+1}} \sum_{k=-M}^M (a_M)_k e^{2 \pi i kt/(2M+1)} \,. \label{lsppol} \end{equation} \end{theorem}
\noindent {\bf Numerical Implementation of Theorem/Algorithm~\ref{th:act}:} \\ Step 1: The entries of $T_M$ and $y_M$ of equations~\eqref{toepmat} and~\eqref{rightside} can be computed in ${\cal O }(M \log M + r \log(1/\varepsilon))$ operations (where $\varepsilon$ is the required accuracy) using Beylkin's unequally spaced FFT algorithm~\cite{Bey95}.\\ Step 2: We solve $T_M a_M = y_M$ by the conjugate gradient (CG) algorithm~\cite{GL96}. The matrix-vector multiplication in each iteration of CG can be carried out in ${\cal O } (M \log M)$ operations via FFT~\cite{CN96}. Thus the solution of~\eqref{toepsys} takes ${\cal O } (k M \log M)$ operations, where $k$ is the number of iterations.\\ Step 3: Usually the signal is reconstructed on regularly spaced nodes $\{u_i\}_{i=1}^{N}$. In this case $p_M(u_i)$ in~\eqref{lsppol} can be computed by FFT. For non-uniformly spaced nodes $u_i$ we can again resort to Beylkin's USFFT algorithm.
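For moderate sizes, Steps 1-3 can also be carried out directly, without the USFFT and FFT accelerations; the following sketch builds $T_M$ and $y_M$ by explicit summation and evaluates the least-squares polynomial:

```python
import numpy as np

def act(t, b, w, M):
    """Direct O(r M^2) version of Theorem/Algorithm th:act; returns a
    callable evaluating the weighted least-squares polynomial p_M."""
    N = 2 * M + 1
    k = np.arange(-M, M + 1)
    E = np.exp(-2j * np.pi * np.outer(k, t) / N)   # E[k,j] = e^{-2 pi i k t_j/N}
    T = (E * w) @ E.conj().T / N                   # Toeplitz matrix (toepmat)
    y = E @ (w * b) / np.sqrt(N)                   # right-hand side (rightside)
    a = np.linalg.solve(T, y)                      # Step 2 (use CG for large M)
    def p(s):                                      # Step 3 (lsppol)
        s = np.atleast_1d(s)
        return np.exp(2j * np.pi * np.outer(s, k) / N) @ a / np.sqrt(N)
    return p
```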
There exists a large number of fast algorithms for the solution of Toeplitz systems. Probably the most efficient algorithm in our case is CG. We have already mentioned that the Toeplitz system~\eqref{toepsys} can be solved in ${\cal O } (kM \log M)$ via CG. The number of iterations $k$ depends essentially on the clustering of the eigenvalues of $T_M$, cf.~\cite{CN96}. It follows from equation~\eqref{kron} below and perturbation theory~\cite{Chr96a} that, if the sampling points stem from a perturbed regular sampling set, the eigenvalues of $T_M$ will be clustered around $\beta$, where $\beta$ is the oversampling rate. In such cases we can expect a very fast rate of convergence. The simple frame iteration~\cite{Mar93a,Ben92} is not able to take advantage of such a situation.
For the analysis of the relation between the solution ${p_{M}}$ of Theorem~\ref{th:act} and the solution $f$ of the original infinite-dimensional problem we follow Gr\"ochenig~\cite{Gro99}. Assume that the samples $\{f(t_j)\}_{j \in {\mathbb Z}}$ of $f \in {\Bsp}$ are given. For the finite-dimensional approximation we consider only those samples $f(t_j)$ for which $t_j$ is contained in the interval $[-M-\frac{1}{2}, M+\frac{1}{2}]$ and compute the least squares approximation ${p_{M}}$ with degree $M$ and period $2M+1$ as in Theorem~\ref{th:act}. It is shown in~\cite{Gro99} that if $\sigma (T_M) \subseteq [\alpha, \beta]$ for all $M$ with $\alpha >0$ then \begin{equation}
\underset{M {\rightarrow \infty }}{\lim} \int \limits_{[-M, M]} |f(t) - {p_{M}}(t)|^2 \, dt =0 , \label{trigconv} \end{equation} and also $\lim {p_{M}}(t) = f(t)$ uniformly on compact sets.
Under the Nyquist condition $\gamma := \sup_j\,(t_{j+1}-t_j) < 1$ and using weights $w_j = (t_{j+1}-t_{j-1})/2$ Gr\"ochenig has shown that \begin{equation} \sigma (T_M) \subseteq [(1-\gamma)^2, 6]\,, \label{condest} \end{equation} independently of $M$, see~\cite{Gro99}. These results validate the usage of trigonometric polynomials as a finite-dimensional model for nonuniform sampling.
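These spectral bounds are easy to check numerically. A sketch with a jittered sampling set (spacing $1/2$ plus jitter below $1/4$, so that $\gamma<1$ is guaranteed) and the weights $w_j = (t_{j+1}-t_{j-1})/2$:

```python
import numpy as np

M = 8
P = 2 * M + 1                                    # period of the polynomials
base = -M - 0.5 + 0.25 + 0.5 * np.arange(2 * P)  # regular grid, spacing 1/2
rng = np.random.default_rng(3)
t = np.sort(base + rng.uniform(-0.2, 0.2, base.size))
gaps = np.diff(np.append(t, t[0] + P))           # cyclic gaps
gamma = gaps.max()                               # maximal gap, < 1 here
tp = np.roll(t, -1); tp[-1] += P                 # t_{j+1} (cyclic)
tm = np.roll(t, 1);  tm[0] -= P                  # t_{j-1} (cyclic)
w = (tp - tm) / 2                                # Groechenig's weights
k = np.arange(-M, M + 1)
E = np.exp(-2j * np.pi * np.outer(k, t) / P)
T = (E * w) @ E.conj().T / P                     # weighted matrix T_M (toepmat)
ev = np.linalg.eigvalsh(T)
```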
\noindent {\bf Example 1 -- reconsidered:} Recall that in Example~1 of Section~\ref{ss:truncated} we have considered the reconstruction of a regularly oversampled signal $f \in {\Bsp}$. What does the reconstruction method of Theorem~\ref{th:act} yield in this case? Let us check the entries of the matrix $T_M$ when we take only those samples in the interval $[-n,n]$. The period of the polynomial becomes $2N$ with $N=n+\frac{n}{r-1}$ where $r$ is the number of given samples. Then \begin{equation} (T_M)_{k,l} = \frac{1}{2N} \sum_{j=1}^{r} e^{2\pi i (k-l)t_j/(2N)}
= \frac{m}{2nm+1}\sum_{j=-nm}^{nm} e^{2\pi i (k-l) \frac{j}{2nm+1}} = m\,\delta_{k,l} \label{kron} \end{equation} for $k,l = -M,\dots,M$, where $\delta_{k,l}$ is Kronecker's symbol, i.e., $\delta_{k,l}=1$ if $k=l$ and $0$ otherwise. Hence we get $$T_M = m I\,,$$ where $I$ is the identity matrix on ${\mathbb C}^{2M+1}$; thus $T_M$ resembles the structure of the infinite-dimensional frame operator $S$ in this case (including exact approximation of the frame bounds). Recall that the truncated frame approach leads to an ``artificial'' ill-posed problem even in such a simple situation.
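This computation can be replayed numerically (the parameters below are arbitrary); up to rounding, $T_M$ is exactly $m$ times the identity:

```python
import numpy as np

n, m, M = 6, 3, 4                       # half-range, oversampling rate, degree
j = np.arange(-n * m, n * m + 1)
t = j / m                               # regular samples, r = 2nm+1 of them
twoN = (2 * n * m + 1) / m              # period 2N with N = n + n/(r-1)
k = np.arange(-M, M + 1)
E = np.exp(-2j * np.pi * np.outer(k, t) / twoN)
T = E @ E.conj().T / twoN               # (T_M)_{k,l} as in (kron)
```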
The advantages of the trigonometric polynomial approach compared to the truncated frame approach are manifold. In the one case we have to deal with an ill-posed problem which has no specific structure, hence its solution is numerically very expensive. In the other case we have to solve a problem with rich mathematical structure, whose stability depends only on the sampling density, a situation that resembles the original infinite-dimensional sampling problem.
In principle the coefficients ${a_{M}}=\{{(\alsp)_{k}}\}_{k=-M}^{M}$ of the polynomial ${p_{M}}$ that minimizes~\eqref{LSP} could also be computed by directly solving the Vandermonde type system \begin{equation} WV {a_{M}} = W b\,, \label{vandermonde} \end{equation} where $V_{j,k}=\frac{1}{\sqrt{2M+1}} e^{2 \pi i k t_j/(2M+1)}$ for $j=1,\dots, r,\, k=-M,\dots, M$ and $W$ is a diagonal matrix with entries $W_{j,j}= \sqrt{w_j}$, cf.~\cite{RAG91}. Several algorithms are known for a relatively efficient solution of Vandermonde systems~\cite{BP70,RAG91}. However this is one of the rare cases where, instead of directly solving~\eqref{vandermonde}, it is advisable to explicitly establish the system of normal equations \begin{equation} T_M a_M = y_M\,, \label{normal} \end{equation} where $T_M=V^{\ast} W^2 V$ and $y_M = V^{\ast} W^2 b$.
The advantages of considering the system $T_M a_M = y_M$ instead of the Vandermonde system~\eqref{vandermonde} are manifold: \begin{itemize} \vspace*{-1mm} \setlength{\itemsep}{-0.5ex} \setlength{\parsep}{-0.5ex} \item The matrix $T_M$ plays a key role in the analysis of the relation of the solution of~\eqref{LSP} and the solution of the infinite-dimensional sampling problem~\eqref{recon1}, see~\eqref{trigconv} and ~\eqref{condest} above. \item $T_M$ is of size $(2M+1) \times (2M+1)$, independently of the number of sampling points. Moreover, since $(T_M)_{k,l}=\sum_{j=1}^{r} w_j e^{2\pi i (k-l) t_j}$, it is of Toeplitz type. These facts give rise to fast and robust reconstruction algorithms. \item The resulting reconstruction algorithms can be easily generalized to higher dimensions, see Section~\ref{ss:multi}. Such a generalization to higher dimensions seems not to be straightforward for fast solvers of Vandermonde systems such as the algorithm proposed in~\cite{RAG91}. \end{itemize}
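The identities $T_M=V^{\ast}W^2V$ and $y_M=V^{\ast}W^2b$ are easily verified numerically (here with $V_{j,k}=e^{2\pi i k t_j/(2M+1)}/\sqrt{2M+1}$, the convention consistent with~\eqref{pm}); solving the normal equations then reproduces the weighted least-squares solution of the Vandermonde system:

```python
import numpy as np

rng = np.random.default_rng(1)
M, r = 2, 12
N = 2 * M + 1
k = np.arange(-M, M + 1)
t = np.sort(rng.uniform(-M, M, r))
w = rng.uniform(0.5, 1.5, r)                     # arbitrary positive weights
b = rng.standard_normal(r) + 1j * rng.standard_normal(r)
V = np.exp(2j * np.pi * np.outer(t, k) / N) / np.sqrt(N)
T = V.conj().T @ (w[:, None] * V)                # T_M = V* W^2 V
y = V.conj().T @ (w * b)                         # y_M = V* W^2 b
a_normal = np.linalg.solve(T, y)                 # normal equations
W = np.diag(np.sqrt(w))
a_ls, *_ = np.linalg.lstsq(W @ V, W @ b, rcond=None)  # Vandermonde LS
```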
We point out that other finite-dimensional approaches are proposed in~\cite{FLS98,CC98}. These approaches may provide interesting alternatives in the few cases where the algorithm outlined in Section~\ref{ss:trigpol} does not lead to good results. These cases occur when only a few samples of the signal $f$ are given in an interval $[a,b]$ say, and at the same time we have
$|f(a) - f(b)| \gg 0$ and $|f'(a) - f'(b)| \gg 0$, i.e., if $f$ is ``strongly non-periodic'' on $[a,b]$. However the computational complexity of the methods in~\cite{FLS98,CC98} is significantly larger.
\subsection{Multi-dimensional nonuniform sampling} \label{ss:multi}
The approach presented above can be easily generalized to higher dimensions by a diligent book-keeping of the notation. We consider the space of $d$-dimensional trigonometric polynomials ${\Psp^{d}_{\! M}}$ as finite-dimensional model for ${\Bsp}^d$. For given samples $f(t_j)$ of $f \in {\Bsp}^d$, where $t_j \in {\Rst^d}$, we compute the least squares approximation ${p_{M}}$ similar to Theorem~\ref{th:act} by solving the corresponding system of equations $T_M a_M = y_M$.
In 2-D for instance the matrix $T_M$ becomes a block Toeplitz matrix with Toeplitz blocks~\cite{Str97}. For a fast computation of the entries of $T$ we can again make use of Beylkin's USFFT algorithm~\cite{Bey95}. And similar to 1-D, multiplication of a vector by $T_M$ can be carried out by 2-D FFT.
Also the relation between the finite-dimensional approximation in ${\Psp^{d}_{\! M}}$ and the infinite-dimensional solution in ${\Bsp}^{d}$ is similar as in 1-D. The only mathematical difficulty is to give conditions under which the matrix $T_M$ is invertible. Since the fundamental theorem of algebra does not hold in dimensions larger than one, the condition $(2M+1)^d \le r$ is necessary but no longer sufficient for the invertibility of $T_M$. Sufficient conditions for the invertibility, depending on the sampling density, are presented in~\cite{Gro99a}.
\section{Bandwidth estimation and regularization} \label{s:numeric}
In this section we discuss several numerical aspects of nonuniform sampling that are very important from a practical viewpoint, although only a few answers to these problems can be found in the literature.
\subsection{A multilevel signal reconstruction algorithm} \label{ss:ml}
In almost all theoretical results and numerical algorithms for reconstructing a band-limited signal from nonuniform samples it is assumed that the bandwidth is known a priori. This information however is often not available in practice.
A good choice of the bandwidth for the reconstruction algorithm becomes crucial in the case of noisy data. It is intuitively clear that choosing too large a bandwidth leads to over-fitting of the noise in the data, while too small a bandwidth yields a smooth solution that under-fits the data. And of course we want to avoid determining the ``correct'' $\Omega$ by trial-and-error methods. Hence the problem is to design a method that can reconstruct a signal from non-uniformly spaced, noisy samples without requiring a priori information about the bandwidth of the signal.
The multilevel approach derived in~\cite{SS97} provides an answer to this problem. The approach applies to an infinite-dimensional as well as to a finite-dimensional setting. We describe the method directly for the trigonometric polynomial model, where the determination of the bandwidth $\Omega$ translates into the determination of the polynomial degree $M$ of the reconstruction. The idea of the multilevel algorithm is as follows.
Let the noisy samples $\{b^{\delta}_j\}_{j=1}^{r}=\{f^{\delta}(t_j)\}_{j=1}^r$ of $f \in {\Bsp}$ be given with
$\sum_{j=1}^{r}|f(t_j)-b^{\delta}(t_j)|^2 \le \delta^2 \|b^{\delta}\|^2$ and let $Q_M$ denote the orthogonal projection from ${\Bsp}$ onto ${\Psp_{\! M}}$. We start with initial degree $M=1$ and run Algorithm~\ref{th:act} until the iterates $p_{1,k}$ satisfy for the first time the {\em inner} stopping criterion \begin{equation}
\sum_{j=1}^{r}|p_{1,k}(t_j) - b^{\delta}_j|^2 \le
2 \tau (\delta \|b^{\delta}\| + \|Q_1 f - f\|)\|b^{\delta}\| \notag
\end{equation} for some fixed $\tau >1$. Denote this approximation (at iteration $k_*$) by $p_{1,k_*}$. If $p_{1,k_*}$ satisfies the {\em outer} stopping criterion \begin{equation}
\sum_{j=1}^{r}|p_{1,k_*}(t_j) - b^{\delta}_j|^2 \le
2 \tau \delta \|b^{\delta}\|^2 \label{stopout} \end{equation} we take $p_{1,k_*}$ as final approximation. Otherwise we proceed to the next level $M=2$ and run Algorithm~\ref{th:act} again, using $p_{1,k_*}$ as initial approximation by setting $p_{2,0} =p_{1,k_*}$.
At level $M=N$ the inner level-dependent stopping criterion becomes \begin{equation}
\sum_{j=1}^{r}|p_{N,k}(t_j) - b^{\delta}_j|^2 \le
2 \tau (\delta \|b^{\delta}\| + \|Q_N f - f\|)\|b^{\delta}\|, \label{stopin} \end{equation} while the outer stopping criterion does not change since it is level-independent.
Stopping rule~\eqref{stopin} guarantees that the iterates of CG do not diverge. It also ensures that CG does not iterate too long at a certain level: if $M$ is too small, further iterations at this level will not lead to a significant improvement, so we switch to the next level. The outer stopping criterion~\eqref{stopout} controls over-fit and under-fit of the data, since in the presence of noisy data it does not make sense to ask for a solution $p_M$ that satisfies
$\sum_{j=1}^{r}|p_{M}(t_j) - b^{\delta}_j|^2=0$.
Since the original signal $f$ is not known, the expression
$\|f - Q_N f\|$ in~\eqref{stopin} cannot be computed. In~\cite{SS97} the
reader can find an approach to estimate $\|f - Q_N f\|$ recursively.
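The level-selection logic can be sketched as follows. This is a deliberate simplification of the multilevel method: each level is solved directly instead of by warm-started CG, and the uncomputable term $\|Q_Nf-f\|$ is dropped, so only the outer criterion~\eqref{stopout} is implemented:

```python
import numpy as np

def multilevel_degree(t, b, delta, tau=1.5, M_max=20):
    """Increase the polynomial degree M level by level until the outer
    criterion  sum_j |p_M(t_j)-b_j|^2 <= 2 tau delta ||b||^2  is met;
    returns the selected degree and the coefficient vector."""
    tol = 2 * tau * delta * np.linalg.norm(b) ** 2
    for M in range(1, M_max + 1):
        k = np.arange(-M, M + 1)
        V = np.exp(2j * np.pi * np.outer(t, k) / (2 * M + 1)) / np.sqrt(2 * M + 1)
        a, *_ = np.linalg.lstsq(V, b, rcond=None)   # level-M least squares
        if np.sum(np.abs(V @ a - b) ** 2) <= tol:   # outer criterion (stopout)
            return M, a
    return M_max, a
```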
\subsection{Solution of ill-conditioned sampling problems} \label{ss:regul}
A variety of conditions on the sampling points $\{t_j\}_{j \in {\mathbb Z}}$ are known under which the set ${\{\sinco(\cdot - t_j)\}_{j \in \Zst}}$ is a frame for ${\Bsp}$, which in turn implies (at least theoretically) perfect reconstruction of a signal $f$ from its samples $f(t_j)$. This does, however, not guarantee a stable reconstruction from a numerical viewpoint, since the ratio of the frame bounds $B/A$ can still be extremely large and therefore the frame operator $S$ can be ill-conditioned. This may happen for instance if $\gamma$ in~\eqref{condest} approaches 1, in which case $\mbox{cond\/}(T_M)$ may become large. The sampling problem may also become numerically unstable or even ill-posed if the sampling set has large gaps, which is very common in astronomy and geophysics. Note that in this case the instability of the system $T_M a_M = y_M$ does {\em not} result from an inadequate discretization of the infinite-dimensional problem.
There exists a large number of (circulant) Toeplitz preconditioners that could be applied to the system $T_M a_M = y_M$, however it turns out that they do not improve the stability of the problem in this case. The reason lies in the distribution of the eigenvalues of $T_M$, as we will see below.
Following~\cite{Tyr96}, we call two sequences of real numbers $\{\lambda^{(n)}_{k}\}_{k=1}^{n}$ and $\{\nu^{(n)}_{k}\}_{k=1}^{n}$ {\em equally distributed} if \begin{equation} \underset{{n \rightarrow \infty }}{\lim} \frac{1}{n} \sum_{k=1}^{n} [F(\lambda^{(n)}_{k}) - F(\nu^{(n)}_{k}) ] = 0 \label{defdist} \end{equation} for any continuous function $F$ with compact support\footnote{In H.~Weyl's definition $\lambda^{(n)}_{k}$ and $\nu^{(n)}_{k}$ are required to belong to a common interval.}.
Let $C$ be an $(n \times n)$ circulant matrix with first column $(c_0,\dots,c_{n-1})$; we write $C = \operatorname{circ} (c_0,\dots,c_{n-1})$. The eigenvalues of $C$ are given by $\lambda_k = \sum_{l=0}^{n-1} c_l e^{2\pi i kl/n}$ for $k=0,\dots,n-1$. Observe that the Toeplitz matrix $A_n$ with first column $(a_0,a_1,\dots, a_n)$ can be embedded in the circulant matrix \begin{equation} C_n =\operatorname{circ} (a_0,a_1,\dots, a_n, \bar{a_n},\dots, \bar{a_1})\,. \label{circembed} \end{equation} Theorems~4.1 and~4.2 in~\cite{Tyr96} state that the eigenvalues of $A_n$ and $C_n$ are equally distributed as samples of $f(x)$, where \begin{equation} f(x) = \sum_{k=-\infty}^{\infty} a_k e^{2 \pi i kx}\,. \label{fcirc} \end{equation} The partial sum of the series~\eqref{fcirc} is \begin{equation} f_n(x) = \sum_{k=-n}^{n} a_k e^{2 \pi i kx}\,. \label{fcircm} \end{equation}
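For a concrete symbol this can be checked directly. With $a_k = 2^{-|k|}$ the symbol is the geometric series $f(x) = (1-r^2)/(1-2r\cos 2\pi x + r^2)$ with $r=\tfrac12$, whose range is $[1/3,\,3]$, and both spectra must lie in this interval (a sketch with arbitrary size $n$):

```python
import numpy as np

n = 40
a = 0.5 ** np.arange(n + 1)                 # a_k = 2^{-k}; symmetric symbol
i, l = np.indices((n + 1, n + 1))
A = a[np.abs(i - l)]                        # (real) symmetric Toeplitz matrix
c = np.concatenate((a, a[:0:-1]))           # circ(a_0,...,a_n, a_n,...,a_1)
ev_T = np.linalg.eigvalsh(A)                # Toeplitz eigenvalues
ev_C = np.fft.fft(c).real                   # circulant eigenvalues = DFT of c
```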
To understand the clustering behavior of the eigenvalues of $T_M$ in case of sampling sets with large gaps, we consider a sampling set in $[-M,M)$ that consists of one large block of samples and one large gap, i.e., $t_j = \frac{j}{Lm}$ for $j=-mM,\dots,mM$ and $m,L \in {\mathbb N}$. (Recall that we identify the interval with the torus.) Then the entries $z_k$ of the Toeplitz matrix $T_M$ of~\eqref{toepmat} (with $w_j=1$) are $$z_k=\frac{1}{2M+1}\sum_{j=-mM}^{mM} e^{-2\pi i k \frac{j}{Lm}/(2M+1)}, \quad k=0,\dots,2M\,.$$ To investigate the clustering behavior of the eigenvalues of $T_M$ for $M {\rightarrow \infty }$, we embed $T_M$ in a circulant matrix $C_M$ as in~\eqref{circembed}. Then~\eqref{fcircm} becomes \begin{equation} f_{mM}(x) = \frac{1}{Lm(2M+1)}\sum_{l=-mM}^{mM} \sum_{j=-mM}^{mM} e^{2 \pi il [x - j/((2M+1)mL)]} \end{equation} whence $f_{mM} \rightarrow {\bf 1}_{[-1/(2L),1/(2L)]}$ for $M {\rightarrow \infty }$, where ${\bf 1}_{[-a,a]}(x) = 1$ if $-a < x < a$ and $0$ otherwise.
Thus the eigenvalues of $T_M$ are asymptotically clustered around zero and one. For general nonuniform sampling sets with large gaps the clustering at 1 will disappear, but of course the spectral cluster at 0 will remain. In this case it is known that the preconditioned problem will still have a spectral cluster at the origin~\cite{YC93} and preconditioning will not be efficient.
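The claimed 0--1 clustering can be checked numerically. The sketch below is our illustration (the parameter values $M=40$, $L=2$, $m=2$ are arbitrary choices): it builds the Toeplitz matrix for one block of samples and one large gap, using the $1/(Lm(2M+1))$ normalization under which the symbol tends to ${\bf 1}_{[-1/(2L),1/(2L)]}$, and counts the eigenvalues near $0$ and near $1$.

```python
import numpy as np

# One block of samples and one large gap on the torus:
# tau_j = j/(L*m*(2M+1)), j = -mM..mM, covering about a 1/L fraction of it.
M, L, m = 40, 2, 2
N = 2 * M + 1                          # size of T_M
j = np.arange(-m * M, m * M + 1)
tau = j / (L * m * N)

# Toeplitz entries z_k for k = -2M..2M, with the 1/(L*m*(2M+1))
# normalization of the limit symbol.
k = np.arange(-2 * M, 2 * M + 1)
z = np.exp(-2j * np.pi * np.outer(k, tau)).sum(axis=1) / (L * m * N)

p, q = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
T = z[(p - q) + 2 * M]                 # T[p, q] = z_{p-q}, Hermitian
ev = np.linalg.eigvalsh(T)

near0 = int(np.sum(np.abs(ev) < 0.1))
near1 = int(np.sum(np.abs(ev - 1) < 0.1))
# most of the N eigenvalues sit near 0 or near 1
assert near0 > 0 and near1 > 0 and (near0 + near1) / N > 0.6
```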
Fortunately there are other possibilities to obtain a stabilized solution of $T_M a_M = y_M$. The condition number of $T_M$ essentially depends on the ratio of the maximal gap in the sampling set to the Nyquist rate, which in turn depends on the bandwidth of the signal. We can improve the stability of the system by adapting the degree $M$ of the approximation accordingly. Thus the parameter $M$ serves as a regularization parameter that balances stability and accuracy of the solution. This technique can be seen as a specific realization of {\em regularization by projection}, see Chapter~3 in~\cite{EHN96}. In addition, as described in Section~\ref{ss:regul}, we can utilize CG as a regularization method for the solution of the Toeplitz system in order to balance approximation error and propagated error. The multilevel method introduced in Section~\ref{ss:ml} combines both features. By optimizing the level (bandwidth) and the number of iterations in each level it provides an efficient and robust regularization technique for ill-conditioned sampling problems. See Section~\ref{s:applications} for numerical examples.
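To illustrate CG as a regularization method, the following Python fragment is a generic sketch on a synthetic symmetric positive definite system (not the authors' specific algorithm): the CG iteration is stopped by the discrepancy principle $\|Tx - y^\delta\| \le \tau\delta$, so that the iteration index plays the role of the regularization parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def cg_discrepancy(A, y, delta, tau=1.1, maxiter=500):
    """Plain CG for an SPD matrix A, stopped by the discrepancy principle
    ||A x - y|| <= tau*delta; the iteration count acts as the
    regularization parameter."""
    x = np.zeros_like(y)
    r = y - A @ x
    p = r.copy()
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tau * delta:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# Synthetic ill-conditioned SPD system (condition number 1e8), noisy data.
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, -8, n)) @ Q.T
x_true = rng.standard_normal(n)
y = A @ x_true
noise = rng.standard_normal(n)
delta = 1e-3
y_delta = y + delta * noise / np.linalg.norm(noise)

x_reg = cg_discrepancy(A, y_delta, delta)
# early stopping fits the data to the noise level ...
assert np.linalg.norm(A @ x_reg - y_delta) <= 1.1 * delta
# ... and gives a much smaller error than naively inverting A
assert (np.linalg.norm(x_reg - x_true)
        < np.linalg.norm(np.linalg.solve(A, y_delta) - x_true))
```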
\if 0 In many applications the physical process that generates the signal implies not only that the signal is (essentially) band-limited but also that its spectrum has a certain rate of decay. For instance geophysical potential fields have exponentially decaying Fourier transforms. This a priori knowledge can be used to improve the accuracy of the approximation. Instead of the usual regularization methods, such as Tikhonov regularization, we propose a different, computationally much more efficient method.
Assume that the decay of the Fourier transform of $f$ can be bounded by $|\hat{f}(\omega)| \le \phi(\omega)$. Typical choices in practice are $\phi(\omega) = e^{-C|\omega|}$ or $\phi(\omega) = C(1+|\omega|^2)^{-1}$. For a given system $T_M a = y$ define the diagonal matrix $P$ by $P_{l,l} = \phi(l)$. Instead of solving $Ta=y$ we consider the ``weighted problem'' \begin{equation} P T a = Py \label{precond1} \end{equation} or \begin{equation}
TP c = y\,,\qquad a = Pc\,. \label{precond2} \end{equation} In the first case the solution is $$a_P = (PT)^+ Py$$ and in the second case we have $$a_P = P (TP)^+ y \,.$$ Of course, if $T$ is invertible both solutions coincide with the solution of $Ta=y$. However if $T$ is not invertible, then both equations lead to a weighted minimal norm least squares solution. Note that $P$ is not chosen to minimize the condition number of the problem since, as outlined above, standard preconditioning will not work in this case.
Systems~\eqref{precond1} and~\eqref{precond2} can be solved by conjugate gradient methods. Hence the computational effort of such an approach is of the same order as that of Algorithm~\ref{th:act}. A detailed numerical analysis of the convergence properties of this approach remains to be carried out. For a numerical example see Section~\ref{ss:geo}.
\fi
\section{Applications} \label{s:applications}
We present two numerical examples to demonstrate the performance of the described methods. The first one concerns a 1-D reconstruction problem arising in spectroscopy. In the second example we approximate the Earth's magnetic field from noisy scattered data.
\subsection{An example from spectroscopy} \label{ss:spectro}
The original spectroscopy signal $f$ is known at 1024 regularly spaced points $t_j$. This discrete sampling sequence will play the role of the original continuous signal. To simulate the situation of a typical experiment in spectroscopy we consider only 107 randomly chosen samples from the given sampling set.
Furthermore we add noise to the samples with noise level (normalized by division by $\sum_{j=1}^{1024}|f(t_j)|^2$) of $\delta=0.1$. Since the samples are contaminated by noise, we cannot expect to recover the (discrete) signal $f$ completely. The bandwidth is approximately $\Omega =5$, which translates into a polynomial degree of $M \approx 30$. Note that in general $\Omega$ (and hence $M$) may not be
available. We will also consider this situation, but in the first experiments we assume that we know $\Omega$. The error between the original signal $f$ and an approximation $f_n$ is measured by computing $\|f- f_n\|^2/\|f\|^2$.
First we apply the truncated frame method with regularized SVD as described in Section~\ref{ss:truncated}. We choose the truncation level for the SVD via formula~\eqref{thresh}. This is the optimal truncation level in this case, providing an approximation with least squares error $0.0944$. Figure~\ref{fig:spect}(a) shows the reconstructed signal together with the original signal and the noisy samples. Without regularization we get a much worse ``reconstruction'' (which is not displayed).
We apply CG to the truncated frame method, as proposed in Section~\ref{ss:cgtrunc}, with stopping criterion~\eqref{stopcg} (for $\tau =1$). The algorithm terminates after only 3 iterations. The reconstruction error of $0.1097$ is slightly higher than for the truncated SVD (see also Figure~\ref{fig:spect}(b)), but the computational effort is much smaller.
Algorithm~\ref{th:act} (with $M=30$) also terminates after 3 iterations. The reconstruction is shown in Figure~\ref{fig:spect}(c); the least squares error ($0.0876$) is slightly smaller than for the truncated frame method, while the computational effort is significantly smaller.
We also simulate the situation where the bandwidth is not known a priori and demonstrate the importance of a good estimate of the bandwidth. We apply Algorithm~\ref{th:act} using a degree that is too small ($M = 11$) and one that is too large ($M = 40$). (We obtain qualitatively the same results with the truncated frame method when the bandwidth is chosen too small or too large.) The approximations are shown in Figures~\ref{fig:spect}(d) and~(e); the approximation errors are $0.4648$ and $0.2805$, respectively. Now we apply the multilevel algorithm of Section~\ref{ss:ml}, which does not require any initial choice of the degree $M$. The algorithm terminates at ``level'' $M=22$; the approximation is displayed in Figure~\ref{fig:spect}(f), and the error is $0.0959$, thus within the error bound $\delta$, as desired. Hence, without requiring explicit information about the bandwidth, we are able to obtain the same accuracy as for the methods above.
\begin{figure}
\caption{Example from spectroscopy -- comparison of reconstruction methods.}
\label{fig:spect}
\end{figure}
\subsection{Approximation of geophysical potential fields} \label{ss:geo} Exploration geophysics relies on surveys of the Earth's magnetic field for the detection of anomalies which reveal underlying geological features. Geophysical potential-field data are generally observed at scattered sampling points. Geoscientists, used to looking at their measurements on maps or profiles and aiming at further processing, therefore need a representation of the originally irregularly spaced data on a regular grid.
The reconstruction of a 2-D signal from its scattered data is thus one of the first and crucial steps in geophysical data analysis, and a number of practical constraints such as measurement errors and the huge amount of data make the development of reliable reconstruction methods a difficult task.
It is known that the Fourier transform of a geophysical potential field $f$
has decay $|\hat{f}(\omega)| = {\cal O } (e^{-|\omega|})$. This rapid decay implies that $f$ can be very well approximated by band-limited functions~\cite{RS98}. Since in general we may not know the (essential) bandwidth of $f$, we can use the multilevel algorithm proposed in Section~\ref{ss:ml} to reconstruct $f$.
The multilevel algorithm also takes care of the following problem. Geophysical sampling sets are often highly anisotropic, and large gaps in the sampling geometry are very common. These large gaps can make the reconstruction problem ill-conditioned or even ill-posed. As outlined in Section~\ref{ss:regul}, the multilevel algorithm iteratively determines the optimal bandwidth that balances the stability and accuracy of the solution.
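A minimal sketch of the multilevel idea follows. It is our illustration, not the authors' implementation: the test signal, noise level and stopping rule are arbitrary choices. The level (bandwidth) $M$ is raised until the data are fitted to within the noise level.

```python
import numpy as np

rng = np.random.default_rng(2)

def trig_matrix(t, M):
    """Columns e^{2*pi*i*k*t}, k = -M..M (trig. polynomials of degree M)."""
    k = np.arange(-M, M + 1)
    return np.exp(2j * np.pi * np.outer(t, k))

# Band-limited test signal of (pretend unknown) degree 5 on the torus [0,1),
# sampled nonuniformly and corrupted by 5% noise.
deg, n_samp = 5, 200
c_true = rng.standard_normal(2 * deg + 1) + 1j * rng.standard_normal(2 * deg + 1)
t = np.sort(rng.random(n_samp))
f = trig_matrix(t, deg) @ c_true
delta = 0.05 * np.linalg.norm(f)
noise = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
y = f + delta * noise / np.linalg.norm(noise)

# Multilevel idea: raise the level (bandwidth) M until the data are fitted
# to within the noise level; M acts as the regularization parameter.
tau = 1.1
for M in range(1, 50):
    V = trig_matrix(t, M)
    c, *_ = np.linalg.lstsq(V, y, rcond=None)
    if np.linalg.norm(V @ c - y) <= tau * delta:
        break
# the algorithm stops close to the true degree instead of overfitting
assert M <= 10
```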
Figure~\ref{fig:geo}(a) shows a synthetic gravitational anomaly $f$. The spectrum of $f$ decays exponentially, so the anomaly can be well represented by a band-limited function, using a ``cut-off level'' of $|\hat{f}(\omega)| \le 0.01$ for the essential bandwidth of $f$.
We have sampled the signal at 1000 points $(u_j,v_j)$ and added 5\% random noise to the sampling values $f(u_j,v_j)$. The sampling geometry -- shown in Figure~\ref{fig:geo} as black dots -- exhibits several features one encounters frequently in exploration geophysics~\cite{RS98}. The essential bandwidth of $f$ would suggest choosing a polynomial degree of $M=12$ (i.e., $(2M+1)^2 = 625$ spectral coefficients). With this choice of $M$ the corresponding block Toeplitz matrix $T_M$ would become ill-conditioned, making the reconstruction problem unstable. As mentioned above, in practice we usually do not know the essential bandwidth of $f$; hence we will not make use of this knowledge in approximating $f$.
We apply the multilevel method to reconstruct the signal, using only the sampling points $\{(u_j,v_j)\}$, the samples $\{f^{\delta}(u_j,v_j)\}$ and the noise level $\delta=0.05$ as a priori information. The algorithm terminates at level $M=7$. The reconstruction is displayed in Figure~\ref{fig:geo}(c), the error between the true signal and the approximation is shown in Figure~\ref{fig:geo}(d). The reconstruction error is $0.0517$ (or $0.193$ mGal), thus of the same order as the data error, as desired.
\begin{figure}
\caption{Approximation of synthetic gravity anomaly from 1000
non-uniformly spaced noisy samples by the multilevel algorithm of
Section~\ref{ss:ml}. The algorithm iteratively determines the
optimal bandwidth (i.e.\ level) for the approximation.}
\label{fig:geo}
\label{fig:geo2}
\end{figure}
\end{document} |
\begin{document}
\setlength{\abovedisplayskip}{6pt plus3pt minus3pt} \setlength{\belowdisplayskip}{6pt plus3pt minus3pt}
\def\title#1{\def\thetitle{#1}} \def\authors#1{\def\theauthors{#1}} \def\address#1{\def\theaddress{#1}} \def\email#1{\def\theemail{#1}} \def\url#1{\def\theurl{#1}} \long\def\abstract#1\endabstract{\long\def\theabstract{#1}} \def\primaryclass#1{\def\theprimaryclass{#1}} \def\secondaryclass#1{\def\thesecondaryclass{#1}} \def\keywords#1{\def\thekeywords{#1}}
\input gtoutput
\volumenumber{1}\papernumber{5}\volumeyear{1997} \published{26 October 1997} \pagenumbers{51}{69}
\title{Alexander Duality, Gropes and Link Homotopy} \shorttitle{Alexander duality, gropes and link homotopy}
\authors{Vyacheslav S Krushkal\\Peter Teichner}
\address{Department of Mathematics, Michigan State University\\ East Lansing, MI 48824-1027, USA\\
\\ {\rm Current address:}\ \ Max-Planck-Institut f\"{u}r Mathematik\\ Gottfried-Claren-Strasse 26, D-53225 Bonn, Germany \\
\\\centerline{\rm and}\\
\\ Department of Mathematics\\University of California in San Diego\\ La Jolla, CA, 92093-0112, USA} \email{[email protected]\\[email protected]} \let\theurl\relax
\abstract
We prove a geometric refinement of Alexander duality for certain $2$--complexes, the so-called {\em gropes}, embedded into $4$--space. This refinement can be roughly formulated as saying that $4$--dimensional Alexander duality preserves the {\em disjoint Dwyer filtration.}
In addition, we give new proofs and extended versions of two lemmas of Freedman and Lin which are of central importance in the {\em A-B--slice problem}, the main open problem in the classification theory of topological $4$--manifolds. Our methods are group theoretical, rather than using Massey products and Milnor $\mu$--invariants as in the original proofs. \endabstract
\primaryclass{55M05, 57M25} \secondaryclass{57M05, 57N13, 57N70}
\keywords{Alexander duality, $4$--manifolds, gropes, link homotopy, Milnor group, Dwyer filtration}
\proposed{Robion Kirby}\received{17 June 1997} \seconded{Michael Freedman, Ronald Stern}\revised{17 October 1997}
\maketitlepage
\section{Introduction} \label{introduction} Consider a finite complex $X$ PL--embedded into the $n$--dimensional sphere $S^n$. Alexander duality identifies the (reduced integer) homology $H_i(S^n\smallsetminus X)$ with the cohomology $H^{n-1-i}(X)$. This implies that the homology (or even the stable homotopy type) of the complement cannot distinguish between possibly different embeddings of $X$ into $S^n$. Note that there cannot be a duality for homotopy groups as one can see by considering the fundamental group of classical knot complements, ie the case $X=S^1$ and $n=3$.
However, one can still ask whether additional information about $X$ does lead to additional information about $S^n \smallsetminus X$. For example, if $X$ is a smooth closed $(n-1-i)$--dimensional manifold then the cohomological fundamental class is dual to a {\em spherical} class in $H_i(S^n\smallsetminus X)$. Namely, it is represented by any {\em meridional} $i$--sphere which by definition is the boundary of a normal disk at a point in $X$. This geometric picture explains the dimension shift in the Alexander duality theorem.
By reversing the roles of $X$ and $S^n \smallsetminus X$ in this example we see that it is {\em not} true that $H_i(X)$ being spherical implies that $H_{n-1-i}(S^n \smallsetminus X)$ is spherical. However, the following result shows that there is some kind of improved duality if one does {\em not} consider linking dimensions. One should think of the Gropes in our theorem as means of measuring how spherical a homology class is.
\begin{thm} \label{duality}\rm (Grope Duality)\ \ \sl If $X \subset S^4$ is the disjoint union of closed embedded Gropes of class~$k$ then $H_2(S^4\smallsetminus X)$ is freely generated by $r$ disjointly embedded closed Gropes of class~$k$. Here $r$ is the rank of $H_1(X)$. Moreover, $H_2(S^4\smallsetminus X)$ cannot be generated by $r$ disjoint maps of closed gropes of class $k+1$. \end{thm} As a corollary to this result we show in \ref{grope concordance} that certain Milnor $\mu$--invariants of a link in $S^3$ are unchanged under a {\em Grope concordance}.
The {\em Gropes} above are framed thickenings of very simple 2--complexes, called {\em gropes}, which are inductively built out of surface stages, see Figure~\ref{gropes} and Section~\ref{facts}. For example, a grope of class~$2$ is just a surface with a single boundary component and gropes of bigger class contain information about the lower central series of the fundamental group. Moreover, every closed grope has a fundamental class in $H_2(X)$ and one obtains a geometric definition of the {\em Dwyer filtration} \[ \pi_2(X) \subseteq \dots \subseteq \phi_k(X) \subseteq \dots \subseteq\phi_3(X)\subseteq\phi_2(X)=H_2(X) \] by defining $\phi_k(X)$ to be the set of all homology classes represented by maps of closed gropes of class~$k$ into $X$. Theorem \ref{duality} can thus be roughly formulated as saying that $4$--dimensional Alexander duality preserves the {\em disjoint Dwyer filtration.}
\begin{figure}
\caption{A grope of class 2 is a surface -- two closed gropes of class 4}
\label{gropes}
\end{figure}
Figure~\ref{gropes} shows that each grope has a certain ``type'' which measures how the surface stages are attached. In Section~\ref{facts} this will be made precise using certain rooted trees, compare Figure~\ref{gropes and trees}. In Section~\ref{dualitysection} we give a simple algorithm for obtaining the trees corresponding to the dual Gropes constructed in Theorem~\ref{duality}.
The simplest application of Theorem~\ref{duality} (with class $k=2$) is as follows. Consider the standard embedding of the $2$--torus $T^2$ into $S^4$ (which factors through the usual unknotted picture of $T^2$ in $S^3$). Then the boundary of the normal bundle of $T^2$ restricted to the two essential circles gives two disjointly embedded tori representing generators of $H_2(S^4 \smallsetminus T^2) \cong {\mathbb{Z}}^2$. Since both of these tori may be surgered to (embedded) spheres, $H_2(S^4 \smallsetminus T^2)$ is in fact spherical. However, it cannot be generated by two maps of $2$--spheres with {\em disjoint} images, since a map of a sphere may be replaced by a map of a grope of arbitrarily big class.
This issue of disjointness leads us to study the relation of gropes to classical {\em link homotopy}. We use Milnor group techniques to give new proofs and improved versions of the two central results of \cite{FL}, namely the {\em Grope Lemma} and the {\em Link Composition Lemma}. Our generalization of the grope lemma reads as follows.
\begin{thm} \label{grope lemma} \sl Two $n$--component links in $S^3$ are link homotopic if and only if they cobound disjointly immersed annulus-like gropes of class~$n$ in $S^3 \times I$. \end{thm}
This result is stronger than the version given in \cite{FL} where the authors only make a comparison with the trivial link. Moreover, our new proof is considerably shorter than the original one.
Our generalization of the link composition lemma is formulated as Theorem~\ref{composition} in Section~\ref{compositionsection}. The reader should be cautious about the proof given in \cite{FL}. It turns out that our Milnor group approach contributes a beautiful feature to Milnor's algebraization of link homotopy: He proved in \cite{Milnor1} that by forgetting one component of the unlink one gets an abelian normal subgroup of the Milnor group which is the additive group of a certain ring $R$. We observe that the {\em Magnus expansion} of the free Milnor groups arises naturally from considering the conjugation action of the quotient group on this ring $R$. Moreover, we show in Lemma~\ref{multiplication} that ``composing'' one link into another corresponds to multiplication in that particular ring $R$. This fact is the key in our proof of the link composition lemma.
Our proofs completely avoid the use of Massey products and Milnor $\! \mu$--invariants and we feel that they are more geometric and elementary than the original proofs. This might be of some use in studying the still unsolved {\em A-B--slice problem} which is the main motivation behind trying to relate gropes, their duality and link homotopy. It is one form of the question whether topological surgery and s--cobordism theorems hold in dimension~$4$ without fundamental group restrictions. See \cite{FT1} for new developments in that area.
\noindent {\em Acknowledgements:} It is a pleasure to thank Mike Freedman for many important discussions and for providing an inspiring atmosphere in his seminars. In particular, we would like to point out that the main construction of Theorem~\ref{duality} is reminiscent of the methods used in the {\em linear grope height raising} procedure of \cite{FT2}. The second author would like to thank the Miller foundation at UC Berkeley for their support.
\section{Preliminary facts about gropes and the lower \hfil\break central series} \label{facts} The following definitions are taken from \cite{FT2}. \begin{defi} \rm
A {\it grope} is a special pair (2--complex, circle). A grope has a {\em class} $k=1, 2,\dots, \infty$. For $k=1$ a grope is defined to be the pair (circle, circle). For $k=2$ a grope is precisely a compact oriented surface $\Sigma$ with a single boundary component. For $k$ finite a $k$--{\it grope} is defined inductively as follows: Let $\{\alpha_i, \beta_i, i=1, \dots, {\rm genus}\}$ be a standard symplectic basis of circles for $\Sigma$. For any positive integers $p_i, q_i$ with $p_i+q_i\ge k$ and $p_{i_0} + q_{i_0} = k$ for at least one index $i_0$, a $k$--grope is formed by gluing $p_i$--gropes to each $\alpha_i$ and $q_i$--gropes to each $\beta_i$. \end{defi} The important information about the ``branching'' of a grope can be very well captured in a rooted tree as follows: For $k=1$ this tree consists of a single vertex $v_0$ which is called the {\em root}. For $k=2$ one adds $2\cdot$genus$(\Sigma)$ edges to $v_0$ and may label the new vertices by $\alpha_i, \beta_i$. Inductively, one gets the tree for a $k$--grope which is obtained by attaching $p_i$--gropes to $\alpha_i$ and $q_i$--gropes to $\beta_i$ by identifying the roots of the $p_i$--(respectively $q_i$--)gropes with the vertices labeled by $\alpha_i$ (respectively $\beta_i$). Figure~\ref{gropes and trees} below should explain the correspondence between gropes and trees.
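As a small worked example (ours, not from the original text), consider the simplest grope that is not a surface.

```latex
% Illustration (ours): the smallest grope of class 3.
% Take a genus-one surface \Sigma with symplectic basis \alpha_1, \beta_1,
% attach a genus-one surface (a 2--grope) to \alpha_1 and only the circle
% itself (a 1--grope) to \beta_1.  Then p_1 = 2 and q_1 = 1, so the
% defining condition gives
\[
  k \;=\; \min_i\,(p_i+q_i) \;=\; p_1+q_1 \;=\; 2+1 \;=\; 3,
\]
% ie the resulting 2--complex is a grope of class 3.
```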
\begin{figure}
\caption{A grope of class $5$ and the associated tree}
\label{gropes and trees}
\end{figure}
Note that the vertices of the tree which are above the root $v_0$ come in pairs corresponding to the symplectic pairs of circles in a surface stage and that such rooted paired trees correspond bijectively to gropes. Under this bijection, the {\it leaves} ($:=$ 1--valent vertices) of the tree correspond to circles on the grope which freely generate its fundamental group. We will sometimes refer to these circles as the {\it tips} of the grope. The boundary of the first stage surface $\Sigma$ will be referred to as the {\it bottom} of the grope.
Given a group $\Gamma$, we will denote by $\Gamma^k$ the $k$-th term in the lower central series of $\Gamma$, defined inductively by $\Gamma^1:=\Gamma$ and $\Gamma^k:=[\Gamma,\Gamma^{k-1}]$, the characteristic subgroup of $k$--fold commutators in $\Gamma$.
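For concreteness (our illustration), here are the first terms of the lower central series of the free group on two generators.

```latex
% Example (ours): for the free group F = F(x,y) on two generators,
% F^2 is the normal closure of the commutator [x,y], and
\[
  F/F^{2} \;\cong\; {\mathbb{Z}}^{2}, \qquad
  F^{2}/F^{3} \;\cong\; {\mathbb{Z}},
\]
% where the second quotient is generated by the image of [x,y].
```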
\begin{lem}\label{maximal k} \sl {\rm (Algebraic interpretation of gropes \cite[2.1]{FT2})}\ \ For a space X, a loop $\gamma$ lies in $\pi_1(X)^k, 1 \le k < \omega$, if and only if $\gamma$ bounds a map of some $k$--grope. Moreover, the class of a grope $(G,\gamma)$ is the maximal $k$ such that $\gamma \in \pi_1(G)^k$. \end{lem}
A {\it closed} $k$--grope is a 2--complex made by replacing a 2--cell in $S^2$ with a $k$--grope. A closed grope is sometimes also called a {\em sphere-like} grope. Similarly, one has {\em annulus-like} $k$--gropes which are obtained from an annulus by replacing a 2--cell with a $k$--grope. Given a space $X$, Dwyer's subgroup $\phi_k(X)$ of $H_2(X)$ is the set of all homology classes represented by maps of closed gropes of class~$k$ into $X$. Compare \cite[2.3]{FT2} for a translation to Dwyer's original definition.
\begin{thmnn} \rm(Dwyer's Theorem \cite{Dwyer})\ \ \sl Let $k$ be a positive integer and let $f\colon\thinspace$ $X\longrightarrow Y$ be a map inducing an isomorphism on $H_1$ and an epimorphism on $H_2/\phi_k$. Then $f$ induces an isomorphism on $\pi_1/(\pi_1)^k$. \end{thmnn}
A {\em Grope} is a special ``untwisted'' $4$--dimensional thickening of a grope $(G,\gamma)$; it has a preferred solid torus (around the base circle $\gamma$) in its boundary. This ``untwisted'' thickening is obtained by first embedding $G$ in ${\R}^3$ and taking its thickening there, and then crossing it with the interval $[0,1]$. The definition of a Grope is independent of the chosen embedding of $G$ in ${\R}^3$. One can alternatively define it by a thickening of $G$ such that all relevant relative Euler numbers vanish. Similarly, one defines sphere- and annulus-like Gropes, the capital letter indicating that one should take a $4$--dimensional untwisted thickening of the corresponding 2--complex.
\section{The Grope Lemma} \label{gropesection} We first recall some material from \cite{Milnor1}. Two $n$--component links $L$ and $L'$ in $S^3$ are said to be {\em link-homotopic} if they are connected by a 1--parameter family of immersions such that distinct components stay disjoint at all times. $L$ is said to be {\em homotopically trivial} if it is link-homotopic to the unlink. $L$ is {\em almost homotopically trivial} if each proper sublink of $L$ is homotopically trivial.
For a group $\pi$ normally generated by $g_1,\ldots,g_k$ its {\em Milnor group} $M\pi$ (with respect to $g_1,\ldots,g_k$) is defined to be the quotient of $\pi$ by the normal subgroup generated by the elements $[g_i,g_i^h]$, where $h\in\pi$ is arbitrary. Here we use the conventions \[ [g_1,g_2]:=g_1\cdot g_2 \cdot g_1^{-1}\cdot g_2^{-1} \; {\rm and } \; g^h:=h^{-1} \cdot g \cdot h. \] $M\pi$ is nilpotent of class~$\leq k+1$, ie it is a quotient of $\pi/(\pi)^{k+1}$, and is generated by the quotient images of $g_1,\ldots, g_k$, see \cite{FT1}. The Milnor group $M(L)$ of a link $L$ is defined to be $M\pi_1(S^3\smallsetminus L)$ with respect to its meridians $m_i$. It is the largest common quotient of the fundamental groups of all links link-homotopic to $L$, hence one obtains:
\begin{thmnn}\rm (Invariance under link homotopy \cite{Milnor1})\ \ \sl If $L$ and $L'$ are link homotopic then their Milnor groups are isomorphic. \end{thmnn} The track of a link homotopy in $S^3 \times I$ gives disjointly immersed annuli with the additional property of being mapped in a level preserving way. However, this is not really necessary for $L$ and $L'$ to be link homotopic, as the following result shows.
\begin{lem}\rm (Singular concordance implies homotopy \cite{Giffen}, \cite{Goldsmith}, \cite{Lin})\ \ \label{concordance} \sl \hfil\break If $L\subset S^3\times\{ 0\}$ and $L'\subset S^3\times\{ 1\}$ are connected in $S^3\times I$ by disjointly immersed annuli then $L$ and $L'$ are link-homotopic. \end{lem} \begin{rem} \rm This result was recently generalized to all dimensions, see \cite{Teichner}. \end{rem} Our Grope Lemma (Theorem~\ref{grope lemma} in the introduction) further weakens the conditions on the objects that connect $L$ and $L'$.
\rk{Proof of Theorem~\ref{grope lemma}} Let $G_1,\ldots,G_n$ be disjointly immersed annulus-like gropes of class $n$ connecting $L$ and $L'$ in $S^3 \times I$. To apply the above Lemma~\ref{concordance}, we want to replace one $G_i$ at a time by an immersed annulus $A_i$ in the complement of all gropes and annuli previously constructed.
Let's start with $G_1$. Consider the circle $c_1$, which consists of the union of the first component $l_1$ of $L$, then an arc in $G_1$ leading from $l_1$ to $l_1'$, then the first component $l_1'$ of $L'$ and finally the same arc back to the base point. Then $c_1$ bounds the $n$--grope $G_1$ and thus lies in the $n$-th term of the lower central series of the group $\pi_1(S^3 \times I \smallsetminus G)$, where $G$ denotes the union of $G_2,\ldots,G_n$. As first observed by Casson, one may do finitely many finger moves on the bottom stage surfaces of $G$ (keeping the components $G_i$ disjoint) such that the natural projection induces an isomorphism \[ \pi_1(S^3 \times I \smallsetminus G) \cong M\pi_1(S^3 \times I \smallsetminus G) \] (see \cite{FT1} for the precise argument, the key idea being that the relation $[m_i,m_i^h]$ can be achieved by a self finger move on $G_i$ which follows the loop $h$.) But the latter Milnor group is normally generated by $(n-1)$ meridians and is thus nilpotent of class~$\leq n$. In particular, $c_1$ bounds a disk in $S^3 \times I \smallsetminus G$, which is equivalent to saying that $l_1$ and $l_1'$ cobound an annulus $A_1$, disjoint from $G_2,\ldots,G_n$.
Since finger moves only change the immersions and not the type of a 2--complex, ie an immersed annulus stays an immersed annulus, the above argument can be repeated $n$~times to get disjointly immersed annuli $A_1,\ldots,A_n$ connecting $L$ and $L'$. \qed
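Returning to the Milnor group defined at the beginning of this section, a small example (ours, not from the original text) may be helpful.

```latex
% Example (ours): let F = F(x,y) be the free group on x and y, normally
% generated by x and y.  Killing the relations [x,x^h] and [y,y^h] for all
% h makes the normal closures of x and of y abelian.  Modulo these
% relations [x,[x,y]] = [y,[x,y]] = 1, so c = [x,y] becomes central and
\[
  MF \;\cong\; \bigl\langle\, x,\, y,\, c \;\bigm|\; c=[x,y],\
  [x,c]=[y,c]=1 \,\bigr\rangle,
\]
% the integral Heisenberg group; in particular (MF)^3 = 1, in accordance
% with the nilpotency statement above.
```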
\section{Grope Duality} \label{dualitysection} In this section we give the proof of Theorem~\ref{duality} and a refinement which explains what the trees corresponding to the dual Gropes look like. Since we now consider closed gropes, the following variation of the correspondence to trees turns out to be extremely useful. Let $G$ be a closed grope and let $G'$ denote $G$ with a small $2$--cell removed from its bottom stage. We define the tree $T_G$ to be the tree corresponding to $G'$ (as defined in Section~\ref{facts}) together with an edge added to the root vertex. This edge represents the deleted $2$--cell and it turns out to be useful to define the root of $T_G$ to be the $1$--valent vertex of this new edge. See Figure~\ref{dualtree} for an example of such a tree.
\rk{Proof of Theorem~\ref{duality}} Abusing notation, we denote by $X$ the core grope of the given $4$--dimensional Grope in $S^4$. Thus $X$ is a $2$--complex which has a particularly simple thickening in $S^4$ which we may use as a regular neighborhood. All constructions will take place in this regular neighborhood, so we may assume that $X$ has just one connected component. Let $\{\alpha_{i,j},\beta_{i,j}\}$ denote a standard symplectic basis of curves for the $i$-th stage $X_i$ of $X$; these curves correspond to vertices at a distance $i+1$ from the root in the associated tree. Here $X_1$ is the bottom stage and thus a closed connected surface. For $i>1$, the $X_i$ are disjoint unions of punctured surfaces. They are attached along some of the curves $\alpha_{i-1,j}$ or $\beta_{i-1,j}$.
Let $A_{i,j}$ denote the ${\epsilon}$--circle bundle of $X_i$ in $S^4$, restricted to a parallel displacement of $\alpha_{i,j}$ in $X_i$, see Figure \ref{torusfigure}. The corresponding ${\epsilon}$--disk bundle, for $\epsilon$ small enough, can be used to see that the $2$--torus $A_{i,j}$ has linking number $1$ with $\beta_{i,j}$ and does not link other curves in the collection $\{\alpha_{s,t},\beta_{s,t}\}$. Note that if there is a higher stage attached to $\beta_{i,j}$ then it intersects $A_{i,j}$ in a single point, while if there is no stage attached to $\beta_{i,j}$ then $A_{i,j}$ is disjoint from $X$, and the generator of $H_2(S^4\smallsetminus X)$ represented by $A_{i,j}$ is Alexander-dual to $\beta_{i,j}$. Similarly, let $B_{i,j}$ denote a $2$--torus representative of the class dual to $\alpha_{i,j}$. There are two inductive steps used in the construction of the stages of dual Gropes.
\begin{figure}
\caption{Steps 1 and 2}
\label{torusfigure}
\end{figure}
{\em Step 1}\ \ Let $\gamma$ be a curve in the collection $\{{\alpha}_{i,j}, {\beta}_{i,j}\}$, and let $X'$ denote the subgrope of $X$ which is attached to $\gamma$. Since $X$ is framed and embedded, a parallel copy of $\gamma$ in $S^4$ bounds a parallel copy of $X'$ in the complement of $X$. If there is no higher stage attached to $\gamma$ then the application of Step 1 to this curve is empty.
{\em Step 2}\ \ Let $\Sigma_i$ be a connected component of the $i$-th stage of $X$, and let $m_i$ denote a meridian of $\Sigma_i$ in $S^4$, that is, $m_i$ is the boundary of a small normal disk to $\Sigma_i$ at an interior point. Suppose $i>1$ and let $\Sigma_{i-1}$ denote the previous stage, so that $\Sigma_i$ is attached to $\Sigma_{i-1}$ along some curve, say $\alpha_{i-1,n}$. The torus $B_{i-1,n}$ meets $\Sigma_i$ in a point, but making a puncture into $B_{i-1,n}$ around this intersection point and connecting it by a tube with $m_i$ exhibits $m_i$ as the boundary of a punctured torus in the complement of $X$, see Figure~\ref{torusfigure}.
By construction, $H_1(X)$ is generated by those curves $\{\alpha_{i,j},\beta_{i,j}\}$ which do not have a higher stage attached to them. Fix one of these curves, say $\beta_{i,j}$. We will show that its dual torus $A_{i,j}$ is the first stage of an embedded Grope $G\subset S^4\smallsetminus X$ of class $k$. The meridian $m_i$ and a parallel copy of ${\alpha}_{i,j}$ form a symplectic basis of circles for $A_{i,j}$. Apply Step 1 to $\alpha_{i,j}$. If $i=1$, the result of Step 1 is a grope at least of class $k$ and we are done. If $i>1$, apply in addition Step 2 to $m_i$. The result of Step 2 is a grope with a new genus 1 surface stage, the tips of which are the meridian $m_{i-1}$ and a parallel copy of one of the standard curves in the previous stage, say $\beta_{i-1,n}$. The next Step 1 -- Step 2 cycle is applied to these tips. Altogether there are $i$ cycles, forming the grope $G$.
The trees corresponding to dual gropes constructed above may be read off the tree associated to $X$, as follows. Start with the tree $T_{X}$ for $X$, and pick the tip ($1$--valent vertex), corresponding to the curve ${\beta}_{i,j}$. The algorithm for drawing the tree $T_{G}$ of the grope $G$, Alexander-dual to ${\beta}_{i,j}$, reflects Steps 1 and 2 above. Consider the path $p$ from ${\beta}_{i,j}$ to the root of $T_{X}$, and start at the vertex ${\alpha}_{i-1,n}$, adjacent to ${\beta}_{i,j}$. Erase all branches in $T_{X}$, ``growing'' from ${\alpha}_{i-1,n}$, except for the edge $[{\beta}_{i,j}\ {\alpha}_{i-1,n}]$ which has been previously considered, and its ``partner'' branch $[{\alpha}_{i,j}\ {\alpha}_{i-1,n}]$, and then move one edge down along the path $p$. This step is repeated $i$ times, until the root of $T_{X}$ is reached. The tree $T_{G}$ is obtained by copying the part of $T_{X}$ which is not erased, with the tip ${\beta}_{i,j}$ drawn as the root, see Figure~\ref{dualtree}.
\begin{figure}
\caption{A dual tree: The branches in $T_{X}$ to be erased are drawn with dashed lines.}
\label{dualtree}
\end{figure}
Note the ``distinguished'' path in $T_{G}$, starting at the root and labelled by $m_i, m_{i-1},$ $\ldots, m_1$. Each of the vertices $m_i, m_{i-1}, \ldots, m_2$ is trivalent (this corresponds to the fact that all surfaces constructed by applications of Step 2 have genus $1$), see Figures \ref{dualtree} and \ref{genusone}. In particular, the class of $G$ may be computed as the sum of classes of the gropes attached to the ``partner'' vertices of $m_i,\ldots, m_1$, plus $1$.
We will now prove that the dual grope $G$ is at least of class~$k$. The proof is by induction on the class of $X$. For surfaces (class~$=2$) the construction gives tori in the collection $\{A_{i,j}, B_{i,j}\}$. Suppose the statement holds for Gropes of class less than $k$, and let $X$ be a Grope of class $k$. By definition, for each standard pair of dual circles ${\alpha}, {\beta}$ in the first stage $\Sigma$ of $X$ there is a $p$--grope $X_{\alpha}$ attached to $\alpha$ and a $q$--grope $X_{\beta}$ attached to $\beta$ with $p+q\geq k$. Let $\gamma$ be one of the tips of $X_{\alpha}$. By the induction hypothesis, the grope $G_{\alpha}$ dual to $\gamma$, given by the construction above for $X_{\alpha}$, is at least of class $p$. $G$ is obtained from $G_{\alpha}$ by first attaching a genus $1$ surface to $m_2$, with new tips $m_1$ and a parallel copy of $\beta$ (Step 2), and then attaching a parallel copy of $X_{\beta}$ (Step 1). According to the computation above of the class of $G$ in terms of its tree, it is equal to $p+q\geq k$.
It remains to show that the dual gropes can be made disjoint, and that they are $0$--framed. Each dual grope may be arranged to lie in the boundary of a regular $\epsilon$--neighborhood of $X$, for some small $\epsilon$. Figure \ref{epsilon} shows how Steps 1 and 2 are performed at a distance $\epsilon$ from $X$. Note that although tori $A_{i,j}$ and
$B_{i,j}$ intersect, at most one of them is used in the construction of a dual grope for each index $(i,j)$. Taking distinct values ${\epsilon}_1,\ldots, {\epsilon}_r$, the gropes are arranged to be pairwise disjoint. The same argument shows that each grope $G$ has a parallel copy $G'$ with $G\cap G'=\emptyset$, hence its thickening in $S^4$ is standard.
\begin{figure}
\caption{Steps 1 and 2 at a distance $\epsilon$ from $X$}
\label{epsilon}
\end{figure}
To prove the converse part of the theorem, suppose that $H_2(S^4\smallsetminus X)$ is generated by $r$ disjoint maps of closed gropes. Perturb the maps in the complement of $X$, so that they are immersions and their images have at most a finite number of transverse self-intersection points. The usual pushing down and twisting procedures from \cite{FQ} produce closed disjoint $0$--framed gropes $G_1,\dots,G_r$ whose only failure to be actual Gropes lies in possible self-intersections of the bottom stage surfaces. The $G_i$ still lie in the complement of $X$ and have class~$k+1$. The proof of the first part of Theorem~\ref{duality} shows that $H_2(Y)/{\phi}_{k+1}(Y)$ is generated by the ``Clifford tori'' in the neighborhoods of self-intersection points of the $G_i$, where $Y$ denotes the complement of all $G_i$ in $S^4$. Assume $X$ is connected (otherwise consider a connected component of $X$), and let $X'$ denote $X$ with a $2$--cell removed from its bottom stage. The relations given by the Clifford tori are among the defining relations of the Milnor group on meridians to the gropes, and Dwyer's theorem shows (as in \cite{FT2}, Lemma 2.6) that the inclusion map induces an isomorphism \[ M{\pi}_1(X')/M{\pi}_1(X')^{k+1}\cong M{\pi}_1(Y)/M{\pi}_1(Y)^{k+1}. \] \noindent Consider the boundary curve $\gamma$ of $X'$. Since $X$ is a grope of class $k$, by Lemma~\ref{Milnor maximal k} below we get ${\gamma}\notin M{\pi}_1(X')^{k+1}$. On the other hand, $\gamma$ bounds a disk in $Y$, hence $\gamma=1\in M{\pi}_1(Y)$. This contradiction concludes the proof of Theorem~\ref{duality}. \qed
\begin{lem}\label{Milnor maximal k} \sl Let $(G,\gamma)$ be a grope of class $k$. Then $\gamma\notin M\pi_1(G)^{k+1}$. \end{lem} \begin{prf} This is best proven by an induction on $k$, starting with the fact that $\pi_1(\Sigma)$ is freely generated by all $\alpha_i$ and $\beta_i$. Here $\Sigma$ is the bottom surface stage of the grope $(G,\gamma)$ with a standard symplectic basis of circles $\alpha_i, \beta_i$. The Magnus expansion for the free Milnor group (see \cite{Milnor1}, \cite{FT2} or the proof of Theorem~\ref{composition}) shows that $\gamma = \prod\big[ \alpha_i, \beta_i\big]$ does not lie in $M\pi_1(\Sigma)^3$. Similarly, for $k>2$, $\pi_1(G)$ is freely generated by those circles in a standard symplectic basis of a surface stage in $G$ to which nothing else is attached. Now assume that the $k$--grope $(G,\gamma)$ is obtained by attaching $p_i$--gropes $G_{\alpha_i}$ to $\alpha_i$ and $q_i$--gropes $G_{\beta_i}$ to $\beta_i,\ p_i+q_i \ge k$. By induction, $\alpha_i \notin M\pi_1(G_{\alpha_i})^{ p_i+1}$ and $\beta_i \notin M\pi_1(G_{\beta_i})^{q_i+1}$ since $p_i, q_i \ge 1$. But the free generators of $\pi_1(G_{\alpha_i})$ and $\pi_1(G_{\beta_i})$ are contained in the set of free generators of $\pi_1(G)$ and therefore $\gamma=\prod\big[\alpha_i, \beta_i\big] \notin M\pi_1(G)^{k+1}$. Again, this may be seen by applying the Magnus expansion to $M\pi_1(G)$. \end{prf}
\begin{rem} \rm In the case when all stages of a Grope $X$ are tori, the correspondence between its tree $T_{X}$ and the trees of the dual Gropes, given in the proof of Theorem~\ref{duality}, is particularly appealing and easy to describe. Let $\gamma$ be a tip of $T_{X}$. The tree for the Grope, Alexander-dual to $\gamma$, is obtained by redrawing $T_{X}$, only with $\gamma$ drawn as the root, see Figure~\ref{genusone}. \end{rem}
\begin{figure}
\caption{Tree duality in the genus $1$ case}
\label{genusone}
\end{figure}
As a corollary of Theorem~\ref{duality} we get the following result.
\begin{cor} \label{grope concordance} \sl Let $L=(l_1,\ldots,l_n)$ and $L'=(l'_1,\ldots,l'_n)$ be two links in $S^3\times\{0\}$ and $S^3\times\{1\}$ respectively. Suppose there are disjointly embedded annulus-like Gropes $A_1,\ldots,A_n$ of class $k$ in $S^3\times [0,1]$ with $\partial A_i=l_i\cup l'_i$, $i=1,\ldots,n$. Then there is an isomorphism of nilpotent quotients \[ \pi_1(S^3\smallsetminus L)/\pi_1(S^3\smallsetminus L)^k\cong \pi_1(S^3\smallsetminus L')/\pi_1(S^3\smallsetminus L')^k \] \end{cor}
\begin{rem} \rm
For those readers who are familiar with Milnor's $\bar\mu$--invariants we should mention that the above statement directly implies that for any multi-index $I$ of length $|I|\leq k$ one gets $\bar\mu_L(I)=\bar\mu_{L'}(I)$. For a different proof of this consequence see \cite{Krushkal}. \end{rem} \rk{Proof of Corollary~\ref{grope concordance}} The proof is a $\phi_k$--version of Stallings' proof of the concordance invariance of all nilpotent quotients of $\pi_1(S^3 \smallsetminus L)$, see \cite{Stallings}. Namely, Alexander duality and Theorem~\ref{duality} imply that the inclusion maps
\[(S^3\times\{0\}\smallsetminus L)\hookrightarrow (S^3\times[0,1]\smallsetminus (A_1\cup\ldots\cup A_n))\hookleftarrow (S^3\times\{1\}\smallsetminus L') \]
\noindent induce isomorphisms on $H_1(\, .\, )$ and on $H_2(\, .\, )/\phi_k$. So by Dwyer's Theorem they induce isomorphisms on $\pi_1/(\pi_1)^k$. \qed
\section{The Link Composition Lemma} \label{compositionsection}
The Link Composition Lemma was originally formulated in \cite{FL}. The reader should be cautious about the proof given there; it can be made precise using Milnor's $\bar\mu$--invariants with repeating coefficients. This section presents an alternative proof.
Given a link $\widehat L =(l_1,\ldots, l_{k+1})$ in $S^3$ and a link $Q=(q_1,\ldots, q_m)$ in the solid torus $S^1\times D^2$, their ``composition'' is obtained by replacing the last component of $\widehat L$ with $Q$. More precisely, it is defined as $L\cup{\phi}(Q)$ where $L=(l_1,\ldots, l_k)$ and ${\phi}\colon\thinspace S^1\times D^2\hookrightarrow S^3$ is a $0$--framed embedding whose image is a tubular neighborhood of $l_{k+1}$. The meridian $\{1\}\times\partial D^2$ of the solid torus will be denoted by $\wedge$ and we put $\widehat Q:=Q\cup\wedge$. We sometimes think of $Q$ or $\widehat Q$ as links in $S^3$ via the standard embedding $S^1 \times D^2 \hookrightarrow S^3$.
\begin{figure}
\caption{In this example $\widehat L$ is the Borromean rings, and $Q$ is the Bing double of the core circle of $S^1\times D^2$.}
\label{compositionfigure}
\end{figure}
\begin{thm}\rm (Link Composition Lemma) \label{composition} \sl \hfil\break {\rm(i)}\ \ If $\widehat L$ and $\widehat Q$ are both homotopically essential in $S^3$ then $L\cup{\phi}(Q)$ is also homotopically essential.\\ {\rm(ii)}\ \ Conversely, if $L\cup{\phi}(Q)$ is homotopically essential and if both $\widehat L$ and $\widehat Q$ are almost homotopically trivial, then both $\widehat L$ and $\widehat Q$ are homotopically essential in $S^3$. \end{thm}
\begin{rem} \rm Part (ii) does not hold without the almost triviality assumption on $\widehat L$ and $\widehat Q$. For example, let $\widehat L$ consist of just one component $l_1$, and let $Q$ be a Hopf link contained in a small ball in $S^1\times D^2$. Then $L\cup\phi(Q)={\phi}(Q)$ is homotopically essential, yet $\widehat L$ is trivial. \end{rem}
In part (i), if either $L$ or $Q$ is homotopically essential, then their composition $L\cup{\phi}(Q)$ is also essential. (Note that $\widehat Q$ and ${\phi}(\widehat Q)$ are homotopically equivalent, see Lemma 3.2 in \cite{FL}.) If neither $L$ nor $Q$ is homotopically essential, then by deleting some components of $L$ and $Q$ if necessary, one may assume that $\widehat L$ and $\widehat Q$ are almost homotopically trivial (and still homotopically essential). In the case when $L\cup{\phi}(Q)$ is not almost homotopically trivial part (i) follows immediately. Similarly, part (ii) can be proved in this case easily by induction on the number of components of $L$ and $Q$.
{\em Therefore, we will assume from now on that $\widehat L$, $\widehat Q$ and $L\cup{\phi}(Q)$ are almost homotopically trivial links in $S^3$.}
\begin{lem} \label{infgropes} \sl If $\widehat L$ and $\widehat Q$ are both homotopically trivial in $S^3$ then ${\phi}(\wedge)$ represents the trivial element in the Milnor group $M(L\cup{\phi}(Q))$. \end{lem}
\begin{prf} Let $\wedge'$ denote ${\phi}(S^1\times\{1\})$. The Milnor group $M(L\cup{\phi}(Q))$ is nilpotent of class $k+m+1$, so it suffices to show that ${\phi}(\wedge)$ represents an element in $\pi_1(S^3\smallsetminus (L\cup{\phi}(Q)))^{k+m+1}$. This will be achieved by constructing an $\infty$--grope $G$ bounded by ${\phi}(\wedge)$ in the complement of $L\cup{\phi}(Q)$. In fact, the construction also gives an $\infty$--grope $G'\subset S^3\smallsetminus (L\cup{\phi}(Q))$ bounded by $\wedge'$.
Consider $S^1\times D^2$ as a standard unknotted solid torus in $S^3$, and let $c$ denote the core of the complementary solid torus $D^2\times S^1$. Since $\widehat Q$ is homotopically trivial, after changing $Q$ by an appropriate link homotopy in $S^1\times D^2$, $\wedge$ bounds
an immersed disk $\Delta\subset S^3$ in the complement of the new link. Denote the new link by $Q$ again. Similarly $L$ can be changed so that the untwisted parallel copy $\wedge'$ of $l_{k+1}$ bounds a disk $\Delta'\subset S^3\smallsetminus L$. Recall that $M(L\cup{\phi}(Q))$ does not change if $L\cup{\phi}(Q)$ is modified by a link homotopy.
The intersection number of $\Delta$ with $c$ is trivial, since $\wedge$ and $c$ do not link. Replace the union of disks $\Delta\cap (D^2\times S^1)$ by annuli lying in $\partial (D^2\times S^1)$ to get $\Sigma\subset S^1\times D^2\smallsetminus Q$, an immersed surface bounded by $\wedge$. Similarly the intersection number of $\Delta'$ with the core circle of ${\phi}(S^1\times D^2)$ is trivial, and $\wedge'$ bounds $\Sigma'\subset S^3\smallsetminus (L\cup{\phi}(Q))$. The surfaces ${\phi}(\Sigma)$ and $\Sigma'$ are the first stages of the gropes $G$ and $G'$ respectively.
Notice that half of the basis for $H_1({\phi}(\Sigma))$ is represented by parallel copies of $\wedge'$. They bound the obvious surfaces: annuli connecting them with $\wedge'$ union with $\Sigma'$, which provide the second stage for $G$. Since this construction is symmetric, it provides all higher stages for both $G$ and $G'$. \end{prf}
\begin{lem} \label{welldefined} \sl Let $i\colon\thinspace S^3\smallsetminus$ neighborhood $(\widehat L\smallsetminus l_1)\longrightarrow S^3\smallsetminus (L\cup{\phi}(Q)\smallsetminus l_1)$ denote the inclusion map, and let $i_{\#}$ be the induced map on $\pi_1$. Then $i_{\#}$ induces a well defined map $i_{*}$ of Milnor groups. \end{lem}
\begin{rem} \rm Given two groups $G$ and $H$ normally generated by $g_i$ respectively $h_j$, let $MG$ and $MH$ be their Milnor groups defined with respect to the given sets of normal generators. If a homomorphism $\phi\colon\thinspace G\longrightarrow H$ maps each $g_i$ to one of the $h_j$ then it induces a homomorphism $M\phi\colon\thinspace MG\longrightarrow MH$. In general, $\phi\colon\thinspace G\longrightarrow H$ induces a homomorphism of the Milnor groups if and only if $\phi(g_i)$ commutes with $\phi(g_i)^{\phi(g)}$ in $MH$ for all $i$ and all $g\in G$. \end{rem}
\rk{Proof of Lemma~\ref{welldefined}} The Milnor groups $M(\widehat L\smallsetminus l_1)$ and $M(L\cup{\phi}(Q)\smallsetminus l_1)$ are generated by meridians. Moreover, $i_{\#}(m_i)=m_i$ for $i=2,\ldots,k$ and $i_{\#}(m_{k+1})={\phi}(\wedge)$ where $m_1,\ldots, m_{k+1}$ are meridians to the components of $\widehat L$. Hence to show that $i_{*}$ is well-defined it suffices to prove that all the commutators \[ [{\phi}(\wedge),({\phi}\wedge)^{i_{\#}(g)}], \quad g\in\pi_1(S^3\smallsetminus (\widehat L\smallsetminus l_1)), \]
\noindent are trivial in $M(L\cup{\phi}(Q)\smallsetminus l_1)$. Consider the following exact sequence, obtained by deleting the component $q_1$ of $Q$.
\[ ker(\psi)\longrightarrow M(L\cup{\phi}(Q)\smallsetminus l_1) \buildrel{\psi}\over\longrightarrow M(L\cup{\phi}(Q)\smallsetminus (l_1\cup \phi(q_1)))\longrightarrow 0 \]
An application of Lemma~\ref{infgropes} to $(\widehat L\smallsetminus l_1)$ and to $(\widehat Q\smallsetminus q_1)$ shows that $\psi({\phi}(\wedge))=1$ and hence ${\phi}(\wedge),{\phi}(\wedge)^g\in ker(\psi)$. The observation that $ker(\psi)$ is generated by the meridians to $\phi(q_1)$ and hence is commutative finishes the proof of Lemma~\ref{welldefined}. \qed
\rk{Proof of Theorem~\ref{composition}} Let $M(F_{m_1,\ldots,m_{s+1}})$ be the Milnor group of a free group, ie the Milnor group of the trivial link on $s+1$ components with meridians $m_i$. Let $R(y_1,\ldots, y_s)$ be the quotient of the free associative ring on generators $y_1,\ldots,y_s$ by the ideal generated by the monomials $y_{i_1}\cdots y_{i_r}$ with one index occurring at least twice. The additive group $(R(y_1,\ldots, y_s),+)$ of this ring is free abelian on generators $y_{i_1}\cdots y_{i_r}$ where all indices are distinct. Milnor \cite{Milnor1} showed that setting $m_{s+1}=1$ induces a short exact sequence of groups \[ 1\longrightarrow (R(y_1,\ldots, y_s),+) \buildrel{r}\over\longrightarrow M(F_{m_1,\ldots,m_{s+1}}) \buildrel{i}\over\longrightarrow M(F_{m_1,\ldots,m_{s}})\longrightarrow 1 \]
\noindent where $r$ is defined on the above free generators by left-iterated commutators with $m_{s+1}$: \[ r(y_{j_1}\cdots y_{j_k}):= [m_{j_1},[m_{j_2},\ldots,[m_{j_k},m_{s+1}]\ldots]] \]
\noindent In particular, $r(0)=1$ and $r(1)=m_{s+1}$. Obviously, the above extension of groups splits by sending $m_i$ to $m_i$. This splitting induces the following conjugation action of $M(F_{m_1,\ldots,m_s})$ on $R(y_1,\ldots, y_s)$. Let $Y:=y_{j_1}\cdots y_{j_k}$, then
\[ m_i \cdot r(Y)\cdot m_i^{-1} = [m_i,r(Y)] \cdot r(Y) = \] \[ [m_i,[m_{j_1},[m_{j_2},\ldots,[m_{j_k},m_{s+1}]\ldots]] \cdot r(Y) = r((y_i+1)\cdot Y) \]
\noindent which implies that $m_i$ acts on $R(y_1,\ldots,y_s)$ by ring multiplication with $y_i+1$ on the left. Since the $m_i$ generate the group $M(F_{m_1,\ldots,m_s})$, this defines a well defined homomorphism of $M(F_{m_1,\ldots,m_s})$ into the units of the ring $R(y_1,\ldots,y_s)$. In fact, this is the {\em Magnus expansion}, well known in the context of free groups (rather than free Milnor groups). We conclude, in particular, that the abelian group $(R(y_1,\ldots, y_s),+)$ is generated by the $y_i$ as a module over the group $M(F_{m_1,\ldots,m_s})$.
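The following short computation is not part of the original argument, but it illustrates the expansion in the smallest non-trivial case. Take $s=2$ and expand the commutator $[m_1,m_2]$ in the units of $R(y_1,y_2)$; since $y_i^2=0$ in $R(y_1,y_2)$, the inverse of $1+y_i$ is simply $1-y_i$, and we get
\begin{align*}
[m_1,m_2] \;\longmapsto\; & (1+y_1)(1+y_2)(1+y_1)^{-1}(1+y_2)^{-1} \\
=\; & (1+y_1+y_2+y_1y_2)(1-y_1-y_2+y_1y_2) \\
=\; & 1 + y_1y_2 - y_2y_1 .
\end{align*}
The surviving quadratic term $y_1y_2-y_2y_1$ is non-zero in $(R(y_1,y_2),+)$, and this is precisely the mechanism used in Lemma~\ref{Milnor maximal k}: the expansion of $\gamma=\prod[\alpha_i,\beta_i]$ has a non-vanishing degree-two part, so $\gamma\notin M\pi_1(\Sigma)^3$.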
Returning to the notation of Theorem~\ref{composition}, we have the following commutative diagram of group extensions. We use the fact that the links $L\cup {\phi}(Q) \smallsetminus l_1$ and $\widehat L \smallsetminus l_1$ are homotopically trivial. Here $y_i$ are the variables corresponding to the link $L$ and $z_j$ are the variables corresponding to ${\phi}(Q)$. We introduce short notations $R(\mathcal{Y}):=R(y_1,\ldots,y_k)$ and $R(\mathcal{Y},\mathcal{Z}):=R(y_1,\ldots,y_k,z_2,\ldots,z_m)$.
\[ \begin{CD} R(\mathcal{Y},\mathcal{Z}) @>{r}>> M(L\cup {\phi}(Q)\smallsetminus l_1) @>{i}>> M(L\cup {\phi}(Q)\smallsetminus (l_1\cup {\phi}(q_1))) \\
@AA{\sigma}A @AA{lc}A @AA{lc}A \\ R(\mathcal{Y}) @>{\bar r}>> M(\widehat L\smallsetminus l_1) @>{j}>> M(L\smallsetminus l_1) \\ \end{CD} \]
\noindent Recall that by definition $lc(m_i)=m_i$ for all meridians $m_2,\ldots,m_k$ of $L \smallsetminus l_1$. Moreover, the link composition map $lc$ sends the meridian $m_{k+1}$ to the $\wedge$--curve of ${\phi}(Q)$.
The existence of the homomorphism $lc$ on the Milnor group level already implies our claim~(ii) in Theorem~\ref{composition}: By assumption, $l_1$ represents the trivial element in $M(\widehat L\smallsetminus l_1)$ since $\widehat L$ is homotopically trivial. Consequently, $lc(l_1)=l_1$ is also trivial in $M(L\cup {\phi}(Q)\smallsetminus l_1)$ and hence by \cite{Milnor1} the link $L\cup {\phi}(Q)$ is homotopically trivial.
The key fact in our approach to part~(i) of Theorem~\ref{composition} is the following result which says that link composition corresponds to ring multiplication. \begin{lem} \label{multiplication} \sl The homomorphism $\sigma\colon\thinspace \! R(y_2,\ldots,y_{k})\longrightarrow R(y_2,\ldots,y_{k}, z_2,\ldots,z_m)$ is given by ring multiplication with $r^{-1}(\wedge)$ on the right. \end{lem} Note that by Lemma~\ref{infgropes} $\wedge$ is trivial in $M(L\cup {\phi}(Q)\smallsetminus (l_1\cup {\phi}(q_1)))$, so that it makes sense to consider $r^{-1}(\wedge)$. We will abbreviate this important element by $\wedge_R$.
\rk{Proof of Lemma~\ref{multiplication}} Since the above diagram commutes and $(R(y_2,\ldots,y_{k}),+)$ is generated by the $y_i$ as a module over the group $M(F_{m_2,\ldots,m_{k}})$, it suffices to check our claim for these generators $y_i$. We get by definition
\[ lc(\bar r(y_i)) = lc([m_i,m_{k+1}]) =[m_i,\wedge]= (m_i\cdot\wedge\cdot m_i^{-1})\cdot\wedge^{-1} \] \[ = r((y_i+1)\cdot\wedge_R)\cdot \wedge^{-1}
= r(y_i\cdot\wedge_R). \]
\noindent We are using the fact that conjugation by $m_i$ corresponds to left multiplication by $(y_i+1)$. \qed
Since $L$ is homotopically trivial and $\widehat L$ is homotopically essential, it follows that $0\neq l_1\in ker(j)$. After possibly reordering the $y_i$ this implies in addition that for some integer $a\neq 0$ we have
\[ \bar r^{-1}(l_1)=a\cdot (y_2\cdots y_k) + {\rm \; terms \; obtained \; by \; permutations \; from \; } y_2\cdots y_k. \]
\noindent Setting all the meridians $m_i$ of $L$ to $1$ (which implies setting the variables $y_i$ to $0$), we get a commutative diagram of group extensions
\[ \begin{CD}
R(\mathcal{Z}) @>r>> M({\phi}(Q)) @>{i}>> M({\phi}(Q)\smallsetminus {\phi}(q_1)) \\ @AA{p}A @AAA @AAA \\ R(\mathcal{Y},\mathcal{Z}) @>r>> M(L\cup {\phi}(Q)\smallsetminus l_1) @>{i}>> M(L\cup {\phi}(Q)\smallsetminus (l_1\cup {\phi}(q_1)))\\ \end{CD} \]
\noindent As before, $R(\mathcal{Z})$ and $R(\mathcal{Y},\mathcal{Z})$ are short notations for $R(z_2,\ldots,z_m)$ and $R(y_1,\ldots,y_k,z_2,\ldots,z_m)$ respectively. Since $\widehat Q$ (and equivalently $\phi(\widehat Q)$) is homotopically essential we have $0\neq \wedge\in ker(i)$. This shows that $p(\wedge_R)\neq 0$. The almost triviality of $\widehat Q$ implies in addition that after possibly reordering the $z_j$ we have for some integer $b\neq 0$
\[ p(\wedge_R)=b\cdot (z_2\cdots z_m) + {\rm \; terms \; obtained \; by \; permutations \; from \; } z_2\cdots z_m. \]
\noindent It follows from Lemma~\ref{multiplication} that $r^{-1}(l_1)=\bar r^{-1}(l_1)\cdot \wedge_R$. This product contains the term \[ a b \cdot (y_2\cdots y_{k}\cdot z_2\cdots z_m),
\]
\noindent the coefficient $ab$ of which is non-zero. This completes the proof of Theorem~\ref{composition}. \qed
\begin{rem} \rm Those readers who are familiar with Milnor's $\bar\mu$--invariants will have recognized that the above proof in fact shows that the first non-vanishing $\bar\mu$--invariants are multiplicative under link composition. \end{rem}
\end{document}
\begin{document}
\title{Augmented Homotopical Algebraic Geometry} \author{Scott Balchin} \address{Department of Pure Mathematics\\ The Hicks Building\\ University of Sheffield\\ Sheffield S3 7RH, England, UK} \email{[email protected]}
\begin{abstract} We develop the framework for augmented homotopical algebraic geometry. This is an extension of homotopical algebraic geometry, which itself is a homotopification of classical algebraic geometry. To do so, we define the notion of augmentation categories, which are a special class of generalised Reedy categories. For an augmentation category, we prove the existence of a closed Quillen model structure on the presheaf category which is compatible with the Kan-Quillen model structure on simplicial sets. Moreover, we use the concept of augmented hypercovers to define a local model structure on the category of augmented presheaves. We prove that crossed simplicial groups, and the planar rooted tree category are examples of augmentation categories. Finally, we introduce a method for generating new examples from old via a categorical pushout construction. \end{abstract}
\maketitle
\section{Introduction}
Classically, algebraic geometry is the study of the zeros of multivariate polynomials, known as varieties. Although this idea is still rooted in modern algebraic geometry, the tools used are now somewhat categorical and abstract.
The exodus towards categorical tools can, of course, be traced back to the school of Grothendieck in $20^\text{th}$ century France, with the seminal work of EGA and SGA~\cite{MR3075000,MR0476737}. Of particular importance is the notion of a scheme, which subsumes the theory of affine varieties~\cite{MR0463157,MR1730819}. For us, a scheme will be defined via its \emph{functor of points}, that is, a (1-)sheaf \begin{equation}\label{defscheme} X \colon \textbf{Aff}^{op} \to \textbf{Set}, \end{equation} where the topology on $\textbf{Aff}^{op}$ is usually taken to be the \'{e}tale topology~\cite{MR563524}. This construction has led to a flourishing field of research fuelled by the emerging theory of categories, particularly that of $(\infty,1)$-categories. We shall outline the natural modifications of the construction given by~(\ref{defscheme}) that have occurred throughout the years. In turn, this will motivate the constructions that appear in this paper.
The first thing to note is how restrictive the category $\textbf{Set}$ is. The study of moduli problems was marred by issues arising from non-trivial object isomorphisms (for example, isomorphisms of vector bundles over a scheme~\cite[\S 1.2]{MR2536079}). To remedy this problem, the category of sets is enlarged to the (2-)category of groupoids~\cite{MR0220789,MR1771927}. Therefore the definition of a \emph{stack} can be loosely worded as a (2-)sheaf \begin{equation}\label{defstack} X \colon \textbf{Aff}^{op} \to \textbf{Grpd}. \end{equation}
In some settings, the natural notion of equivalence for a geometric object is weaker than that of isomorphism. To encode such information, a homotopical viewpoint is required. The correct way to consider such objects is as an $(\infty,1)$-sheaf \begin{equation}\label{defhighstack} X\colon \textbf{Aff}^{op} \to \widehat{\Delta}, \end{equation} where $\widehat{\Delta}$ is the category of \emph{simplicial sets}. Such objects are referred to as \emph{higher stacks}~\cite{hirsch,nstacks}. A canonical example of a higher stack is that of the \emph{classifying stack} $K(G,n)$ for $G$ an abelian $k$-group scheme~\cite{whatis}.
The final stage of the current progression of construction~(\ref{defscheme}), unsurprisingly, is an enlargement of the source category of affine schemes. Just as the category of simplicial sets is a homotopic version of \textbf{Set}, we can consider a homotopic version of affine schemes. The resulting category is the $(\infty,1)$-category of \emph{derived affine schemes}. A \emph{derived stack} is then an $(\infty,1)$-sheaf \begin{equation}\label{defderstack} X\colon \textbf{dAff}^{op} \to \widehat{\Delta}. \end{equation}
The scope of this paper is to develop the tools necessary to take the above progression one step further by enlarging the simplex category $\Delta$ to an \emph{augmentation category} $\mathbb{A}$. By considering the category of presheaves over an augmentation category, we get an extension of the category of simplicial sets. Via the construction of specific Quillen model structures, we will be able to encode the correct ``sheaf condition''. Just as each modification of construction~(\ref{defscheme}) was undertaken to accommodate specific issues, the correct choice of an augmentation category can be used to encode necessary data. Consider the situation where there is a need to encode an $SO(2)$-action on a derived stack. Such an action cannot be inherently captured by the category of simplicial sets. However, this problem is perfectly suited to the category of cyclic sets $\widehat{{\Delta \mathfrak{C}}}$, which is formed as a presheaf category on Connes' cyclic category ${\Delta \mathfrak{C}}$~\cite{connes1}. Using the general framework of augmentation categories, we will show that it is possible to adjust construction~(\ref{defscheme}) to define \emph{$SO(2)$-equivariant derived stacks} as certain functors \begin{equation}\label{defderstack22} X\colon \textbf{dAff}^{op} \to \widehat{{\Delta \mathfrak{C}}}. \end{equation}
All of the above discussion can be assembled as in Figure~\ref{diagramrel}, which is an extension of the one appearing in~\cite{whatis}:
\begin{figure}
\caption{The diagram of relations for augmented derived algebraic geometry}
\label{diagramrel}
\end{figure}
So far we have only considered the \emph{algebraic} setting, in that we work over the algebraic category of (derived) affine schemes. We shall see that we can modify our source category to any (simplicial) site. Classically, of particular significance are the categories \textbf{Top} (resp., \textbf{Mfd}) which give rise to topological (resp., differentiable) stacks~\cite{MR2817778,MR2977576,noohifound}.
\subsection*{Outline of Paper}
\begin{itemize} \item In Section~\ref{sec:2} we recall the theory of homotopical algebraic geometry via the construction of certain closed Quillen model structures. Moreover, we recall the definition of the category of derived affine schemes, and give the construction of $n$-geometric derived stacks via $n$-hypergroupoids. \item In Section~\ref{sec:3} we give the required machinery for the main definition of the paper, that of an augmentation category (Definition~\ref{augdef}). \item In Section~\ref{sec:4} the homotopy theory of augmentation categories is discussed. The existence of a closed Quillen model structure is proven for any augmentation category. The model category relies on the construction of \emph{augmented Kan complexes} which mirror the simplicial analogue. \item In Section~\ref{sec:5} we develop the theory of augmented homotopical algebraic geometry via a Quillen model structure on the category of augmented presheaves. Moreover, we define $n$-geometric augmented derived stacks using a modified $n$-hypergroupoid construction. \item In Section~\ref{sec:6} we prove that crossed simplicial groups provide a class of examples of augmentation categories, and that the resulting geometric theory can be thought of as being equivariant. We briefly discuss two applications of equivariant stacks which the author has developed in \cite{balchin,balchin2}. \item In Section~\ref{sec:7} we prove that the category of planar rooted trees appearing in the theory of dendroidal sets is an augmentation category, with the resulting geometric theory being stable. We then discuss an amalgamation property for augmentation categories which allows us to form new examples from old. \end{itemize}
\section{Homotopical Algebraic Geometry}\label{sec:2}
We shall now utilise the theory of simplicial objects and Quillen model structures to discuss the theory of homotopical algebraic geometry. A nice overview of the constructions in this section can be found in~\cite{MR3285853}. We begin by recalling the notion of a \emph{site}~\cite{artin1962grothendieck}.
\begin{definition} Let $\mathcal{C}$ be a category with all fiber products. A \emph{Grothendieck topology} on $\mathcal{C}$ is given by a function $\tau$ which assigns to every object $U$ of $\mathcal{C}$ a collection $\tau(U)$, consisting of families of morphisms $\{\varphi_i \colon U_i \to U\}_{i \in I}$ with target $U$ such that the following axioms hold:
\begin{enumerate} \item (Isomorphisms) If $U' \to U$ is an isomorphism, then $\{U' \to U\}$ is in $\tau(U)$. \item (Transitivity) If the family $\{\varphi_i \colon U_i \to U\}$ is in $\tau(U)$ and if for each $i \in I$ one has a family $\{ \varphi_{ij} : U_{ij} \to U_i\}_{j \in J}$ in $\tau(U_i)$, then the family $\{\varphi_{i}\circ \varphi_{ij} \colon U_{ij} \to U\}_{i \in I, j \in J}$ is in $\tau(U)$. \item (Base change) If the family $\{\varphi_i \colon U_i \to U\}_{i \in I}$ is in $\tau(U)$ and if $V \to U$ is any morphism, then the family $\{V \times_U U_i \to V\}$ is in $\tau(V)$. \end{enumerate}
The families in $\tau(U)$ are called \emph{covering families} for $U$ in the $\tau$-topology. We call a category $\mathcal{C}$ with such a topology a \emph{site}, denoting it as $(\mathcal{C},\tau)$. \end{definition}
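A guiding example, standard and not specific to the constructions of this paper: take $\mathcal{C} = \textbf{Top}$, the category of topological spaces, with
$$\tau(U) = \left\{ \{\varphi_i \colon U_i \hookrightarrow U\}_{i \in I} \; : \; \text{each } U_i \subseteq U \text{ is open and } \bigcup_{i \in I} U_i = U \right\}.$$
Here isomorphisms are singleton covering families, transitivity says that covering each member of an open cover again by open sets yields an open cover of $U$, and base change holds because $V \times_U U_i \cong f^{-1}(U_i)$ for any continuous $f \colon V \to U$, so the preimages form an open cover of $V$.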
Note that we can consider the case where our sites are simplicially enriched, such that the topology is defined on $\pi_0\mathcal{C}$. We shall use this assumption throughout without ever making it explicit.
The first task is now to construct the category of $\infty$-stacks on a site $\mathcal{C}$ using model structures in the sense of~\cite{hovey1999model}. For a category $\mathcal{C}$, write $\widehat{\mathcal{C}}$ for the category of presheaves on $\mathcal{C}$. We will denote by $\widehat{\Delta}_{\text{Kan}}$ the category of simplicial sets equipped with the Kan-Quillen model structure. Let $\mathcal{C}$ be a site; the category of \emph{simplicial presheaves} on $\mathcal{C}$ is the functor category $\textbf{sPr}(\mathcal{C}) := \widehat{\Delta}^{\mathcal{C}^{op}}$. As $\widehat{\Delta}_\text{Kan}$ is left-proper and combinatorial, we can equip $\textbf{sPr}(\mathcal{C})$ with the projective model structure, $\textbf{sPr}_\text{proj}(\mathcal{C})$, which is also left-proper and combinatorial. The problem with this first model structure is that it does not see any of the topology $\tau$. To encode this extra data, we add more weak equivalences, in effect, enforcing an $\infty$-sheaf condition.
\begin{definition} Let $(\mathcal{C}, \tau)$ be a site. A map $f \colon \mathcal{F} \to \mathcal{F}'$ in $\textbf{sPr}(\mathcal{C})$ is a \textit{local weak equivalence} if: \begin{itemize} \item The induced map $\pi_0^\tau \mathcal{F} \to \pi_0^\tau\mathcal{F}'$ is an isomorphism of sheaves, where $\pi_0^\tau$ is the sheafification of $\pi_0$. \item Squares of the following form are pullbacks after sheafification: $$ \xymatrix{\pi_n \mathcal{F} \ar[r] \ar[d] &\ar[d] \pi_n \mathcal{F}' \\ \mathcal{F}_0 \ar[r] & \mathcal{F}'_0 \rlap{ .}} $$ \end{itemize} \end{definition}
\begin{theorem}[{\cite[\S 3]{jardinesimplicialpresheaves}}]\label{projlocmodel} Let $(\mathcal{C}, \tau)$ be a site. There exists a cofibrantly generated \textit{local model structure} on the category $\textbf{sPr}(\mathcal{C})$ where a map $f \colon \mathcal{F} \to \mathcal{F}'$ is a: \begin{itemize} \item Weak equivalence if it is a local weak equivalence. \item Fibration if it has the RLP with respect to the trivial cofibrations. \item Cofibration if it is a cofibration in $\textbf{sPr}_\text{proj}(\mathcal{C})$. \end{itemize} We will denote this model structure $\textbf{sPr}_\tau(\mathcal{C})$. \end{theorem}
Before continuing, we introduce a different description of the local model structure using left Bousfield localisation (as already noted, $\textbf{sPr}_\text{proj}(\mathcal{C})$ is left-proper and combinatorial so such a localisation exists).
\begin{definition} Let $(\mathcal{C},\tau)$ be a site. A morphism $f \colon \mathcal{F} \to \mathcal{F}'$ in $\textup{Ho}(\textbf{sPr}_\text{proj}(\mathcal{C}))$ is called a \emph{$\tau$-covering} if the induced map $\pi_0^\tau (\mathcal{F}) \to \pi_0^\tau (\mathcal{F}')$ is an epimorphism of sheaves. \end{definition}
\begin{definition}\label{hyperdef} Let $(\mathcal{C}, \tau)$ be a site. A map $f \colon X \to Y$ in $\textbf{sPr}(\mathcal{C})$ is a \textit{hypercovering} if for all $n \in \mathbb{N}$: $$X_n \to \text{Hom}_{\widehat{\Delta}}(\partial \Delta[n] , X) \times_{\text{Hom}_{\widehat{\Delta}}(\partial \Delta[n] , Y)} Y_n$$ is a $\tau$-covering in $\textup{Ho}(\textbf{sPr}_\text{proj}(\mathcal{C}))$. \end{definition}
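The motivating example is the \v{C}ech nerve of a covering: if $U \to X$ is a $\tau$-covering, the simplicial object $$C(U)_n = \underbrace{U \times_X \cdots \times_X U}_{n+1},$$ with the evident projection face maps and diagonal degeneracies, gives a hypercovering $C(U) \to X$; being a relative $0$-coskeleton, the map of Definition~\ref{hyperdef} is an isomorphism in every degree $n \geq 1$. General hypercoverings are obtained by iteratively refining each level of a \v{C}ech nerve by further coverings.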
\begin{theorem}[{\cite[Theorem 6.2]{MR2034012}}] There is a model structure on $\textbf{sPr}(\mathcal{C})$ which is the left Bousfield localisation of $\textbf{sPr}_\text{proj}(\mathcal{C})$ at the class of hypercovers. Moreover, this model is Quillen equivalent to $\textbf{sPr}_\tau(\mathcal{C})$. \end{theorem}
\begin{definition} For a site $(\mathcal{C}, \tau)$ the homotopy category $\textup{Ho}(\textbf{sPr}_\tau(\mathcal{C}))$ will be referred to as the category of \emph{$\infty$-stacks on $\mathcal{C}$}. \end{definition}
\subsection{Closed Monoidal Structure}
We now prove the existence of a closed monoidal structure on $\textup{Ho}(\textbf{sPr}_\tau(\mathcal{C}))$, where the monoidal structure is given by direct product. To do so, we show that $\textbf{sPr}_\tau(\mathcal{C})$ is a monoidal model category.
\begin{definition}\label{monmodcat} A (symmetric) monoidal model category is a model category $\mathcal{C}$ equipped with a closed (symmetric) monoidal structure $(\mathcal{C}, \otimes, I)$ such that the two following compatibility conditions are satisfied: \begin{enumerate} \item (Pushout-product axiom) For every pair of cofibrations $f \colon X \to Y$ and $f' \colon X' \to Y'$, their pushout-product $$(X \otimes Y') \bigsqcup_{X \otimes X'} (Y \otimes X')\to Y \otimes Y'$$ is also a cofibration. Moreover, it is a trivial cofibration if either $f$ or $f'$ is. \item (Unit axiom) For every cofibrant object $X$ and every cofibrant resolution $0 \hookrightarrow QI \to I$ of the tensor unit, the induced morphism $QI \otimes X \to I \otimes X$ is a weak equivalence. (Note that this holds automatically in the case that $I$ is cofibrant). \end{enumerate} \end{definition}
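The prototypical example is $\widehat{\Delta}_\text{Kan}$ itself, equipped with the cartesian product: the pushout-product of two monomorphisms of simplicial sets $$(X \times Y') \bigsqcup_{X \times X'} (Y \times X') \to Y \times Y'$$ is again a monomorphism, and is a trivial cofibration whenever one of the inputs is; the unit $\Delta[0]$ is cofibrant, so the unit axiom is automatic.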
The proof of this property for $\textup{Ho}(\textbf{sPr}_\tau(\mathcal{C}))$ is non-trivial unless one makes a small adjustment to the model under consideration. The problem we currently face is that the class of cofibrations in $\textbf{sPr}_\tau(\mathcal{C})$ is not easily described, because in the projective model they are defined only via a lifting property. Instead, we choose a model which is Quillen equivalent to the one that we are interested in, and prove the existence of the monoidal model category structure for this. The following proposition is proved in the same manner as Theorem~\ref{projlocmodel}.
\begin{proposition} There is a model structure on $\textbf{sPr}(\mathcal{C})$ called the \emph{injective local model} where a map $f \colon \mathcal{F} \to \mathcal{F}'$ is a: \begin{itemize} \item Weak equivalence if it is a local weak equivalence. \item Fibration if it has the RLP with respect to the trivial cofibrations. \item Cofibration if it is a cofibration in the injective model on $\textbf{sPr}(\mathcal{C})$ (i.e., a point-wise monomorphism). \end{itemize} We shall denote by $\textbf{sPr}_{\textup{inj},\tau}(\mathcal{C})$ this model structure. \end{proposition}
\begin{lemma} $\textbf{sPr}_{\textup{inj},\tau}(\mathcal{C})$ is a closed monoidal model category. \end{lemma}
\begin{proof} We need to show that the pushout-product and unit axiom of Definition~\ref{monmodcat} hold. However, as the cofibrations are monomorphisms, this is trivial. \end{proof}
\begin{corollary} $\textup{Ho}(\textbf{sPr}_{\textup{inj},\tau}(\mathcal{C})) \simeq \textup{Ho}(\textbf{sPr}_{\tau}(\mathcal{C}))$ is a closed monoidal category. The internal-hom in $\textup{Ho}(\textbf{sPr}_{\tau}(\mathcal{C}))$ will be denoted $\mathbb{R}\underline{\mathcal{H}\text{om}}(-,-)$. \end{corollary}
We will refer to $\mathbb{R}\underline{\mathcal{H}\text{om}}(X,Y)$ as the \emph{derived mapping stack} from $X$ to $Y$ for two $\infty$-stacks $X,Y$. One can compute this for each object $c \in \mathcal{C}$: $$\mathbb{R}\underline{\mathcal{H}\text{om}}(F,G)(c) := \underline{\text{hom}}^\Delta(c \times F,R_\textup{inj}G),$$ where $R_\textup{inj}$ is the fibrant replacement functor in $\textbf{sPr}_\textup{inj}(\mathcal{C})$ and $\underline{\text{hom}}^\Delta$ is the simplicial-hom of $\textbf{sPr}(\mathcal{C})$. We will denote by $\mathbb{R}\underline{\text{Hom}}(-,-)$ the derived simplicial-hom.
\subsection{Geometric Homotopical Stacks}
We begin with a warning. There is a slight discrepancy between definitions in the literature. The two main bodies of work in the literature are that of To\"{e}n-Vezzosi~\cite{MR2394633} and Lurie~\cite{MR2717174}, in which the notions of $n$-geometric stacks are slightly different. From now on we will follow the conventions of~\cite{MR2394633}.
In the study of algebraic stacks, one is not interested in all objects. Instead, one is interested in only those stacks which have a smooth or \'{e}tale atlas (this can also be worded in terms of internal groupoid objects). We say that stacks with such an atlas are \emph{geometric}. In this section we will introduce what we mean by a geometric $\infty$-stack. We do this in two ways: the first uses an atlas representation as in the classical setting. This method, however, is cumbersome for practical purposes. Therefore we introduce the second method which uses the theory of \emph{hypergroupoids}.
We shall be interested in the site of \emph{derived affine schemes}, which we will denote \textbf{dAff}. An introduction to this site can be found in~\cite{akhil}, while details of the \'{e}tale topology appear in~\cite[Chapter 2.2]{MR2394633}.
\subsubsection{Via Iterated Representability}\label{iterrep}
We fix our site $\mathcal{C} = \textbf{dAff}$, and a set of covering maps $\textbf{P}$ (usually we take $\textbf{P}$ to mean smooth or \'{e}tale).
\begin{definition}[{\cite[Definition 1.3.3.1]{MR2394633}}] \leavevmode \begin{enumerate} \item A stack is \emph{$(-1)$-geometric} if it is representable. \item A morphism of stacks $f \colon F \to G$ is \emph{$(-1)$-representable} if for any representable stack $X$ and any morphism $X \to G$ the homotopy pullback $F \times ^h_G X$ is a representable stack. \item A morphism of stacks $f \colon F \to G$ is \emph{in $(-1)$-\textbf{P}} if it is $(-1)$-representable and if for any representable stack $X$ and any morphism $X \to G$, the induced morphism $F \times_G^h X \to X$ is a $\textbf{P}$-morphism between representable stacks. \end{enumerate} We now let $n \geq 0$. \begin{enumerate} \item Let $F$ be any stack. An \emph{$n$-atlas} for $F$ is a small family of morphisms $\{U_i \to F\}_{i \in I}$ such that \begin{enumerate}
\item Each $U_i$ is representable.
\item Each morphism $U_i \to F$ is in $(n-1)$-$\textbf{P}$.
\item The total morphism $\coprod_{i \in I} U_i \to F$ is an epimorphism. \end{enumerate} \item A stack $F$ is \emph{$n$-geometric} if it satisfies the following conditions: \begin{enumerate}
\item The diagonal $F \to F \times^h F$ is $(n-1)$-representable.
\item The stack $F$ admits an $n$-atlas. \end{enumerate} \item A morphism of stacks $F \to G$ is \emph{$n$-representable} if for any representable stack $X$ and any morphism $X \to G$ the homotopy pullback $F \times^h_G X$ is $n$-geometric. \item A morphism of stacks $F \to G$ is \emph{in $n$-$\textbf{P}$} if it is $n$-representable and if for any representable stack $X$ and any morphism $X \to G$, there exists an $n$-atlas $\{U_i\}$ of $F \times^h_G X$ such that each composite morphism $U_i \to X$ is in $\textbf{P}$. \end{enumerate} We will say that a stack is \emph{geometric} if it is $n$-geometric for some $n$. \end{definition}
\begin{definition}\label{index:gest} The full subcategory of $n$-geometric stacks of $\textup{Ho}(\textbf{sPr}_\tau(\mathcal{C}))$ will be denoted $\textbf{GeSt}_n(\mathcal{C},\textbf{P})$. In particular: \begin{itemize} \item $\textbf{GeSt}_n(\textbf{dAff},\text{sm})$ is the category of \emph{derived $n$-Artin stacks}. \item $\textbf{GeSt}_n(\textbf{dAff},\text{\'{e}t})$ is the category of \emph{derived $n$-Deligne-Mumford stacks}. \end{itemize} \end{definition}
\subsubsection{Via $n$-Hypergroupoids}
In this section, we introduce a second representation of $n$-geometric stacks. This method, via hypergroupoids, is more intuitive than the method presented in Section~\ref{iterrep}, and is reportedly closer to that envisaged by Grothendieck in~\cite{pursuing}. An easy-to-read overview of the theory can be found in the paper~\cite{pridham2}, while the full results in the most general setting can be found in~\cite{MR3033634}. The idea is that an $n$-geometric stack on the site $\mathcal{C}$ can be resolved by some fibrant object in the category $\mathcal{C}^{\Delta^{op}}$. The impressive thing about this construction is that it completely avoids the need for the local model structure on simplicial presheaves, yet is intricately linked to the notion of hypercovers nonetheless. Note that an $n$-hypergroupoid captures the nerve construction of an $n$-groupoid~\cite{MR555549}.
\begin{definition}\label{hypergrp} An $n$-hypergroupoid is an object $X \in {\widehat{\Delta}}$ for which the maps $$X_m = \text{Hom}_{\widehat{\Delta}}(\Delta[m],X) \to \text{Hom}_{\widehat{\Delta}}(\Lambda^k[m],X)$$ are surjective for all $m$, $k$, and isomorphisms for all $m >n$ and all $k$. In particular, $X$ is a Kan complex, and therefore fibrant in ${\widehat{\Delta}}_\text{Kan}$. \end{definition}
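For orientation, consider the two lowest cases. A $0$-hypergroupoid is a constant simplicial set on a set: the conditions for $m = 1$ force $d_0, d_1 \colon X_1 \to X_0$ to be bijections, so every $1$-simplex (and, inductively, every simplex) is degenerate. A $1$-hypergroupoid is precisely the nerve of a groupoid: unique filling of inner horns in degrees $\geq 2$ encodes associative composition, while the outer horns encode inverses.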
It is possible to characterise $n$-hypergroupoids using the coskeleton construction. Recall that the \emph{$m$-coskeleton}, denoted $\text{cosk}_mX$, is defined to be $(\text{cosk}_mX)_i = \text{Hom}(\Delta[i]_{\leq m}, X_{\leq m})$ where $X_{\leq m}$ is the truncation at $m$.
\begin{lemma}[{\cite[Proposition 3.4]{MR2112899}}] An $n$-hypergroupoid $X$ is completely determined by its truncation $X_{\leq n+1}$. In fact, $X = \text{cosk}_{n+1}X$ (i.e., $X$ is $(n+1)$-coskeletal). Conversely, a simplicial set of the form $\text{cosk}_{n+1}X$ is an $n$-hypergroupoid if and only if it satisfies the conditions of Definition~\ref{hypergrp} up to level $n+2$. \end{lemma}
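As a concrete instance in $\widehat{\Delta}$: for any simplicial set $X$ one has $$(\text{cosk}_0 X)_n = X_0^{n+1},$$ with face maps omitting a coordinate and degeneracies repeating one. This is the nerve of the indiscrete groupoid on the vertex set $X_0$, hence a $1$-hypergroupoid, and it is indeed determined by its $2$-truncation.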
\begin{definition} A morphism $f \colon X \to Y$ in ${\widehat{\Delta}}$ is a \emph{trivial relative $n$-hypergroupoid} if the maps $$X_m \to \text{Hom}_{\widehat{\Delta}}(\partial \Delta[m] , X) \times_{\text{Hom}_{\widehat{\Delta}}(\partial \Delta[m] , Y)} Y_m$$ are surjections for all $m$, and isomorphisms for all $m \geq n$. In particular, $f$ is a trivial fibration in ${\widehat{\Delta}}_\text{Kan}$. \end{definition}
\begin{lemma}[{\cite[Lemma 2.9]{MR3033634}}]\label{cosktriv} Let $f \colon X \to Y$ be a trivial relative $n$-hypergroupoid. Then $X = Y \times_{\text{cosk}_{n-1} Y} \text{cosk}_{n-1}X$. \end{lemma}
We can compare the above definition and property of trivial $n$-hypergroupoids with the definition of a hypercover (Definition~\ref{hyperdef}), and observe that a trivial $n$-hypergroupoid can be seen as a \emph{truncated} or \emph{bounded} hypercover.
We can move to a geometric setting by considering objects in $\textbf{sdAff} := \textbf{dAff}^{\Delta^{op}}$. We also change the surjectivity condition to be surjectivity with respect to a class of covering maps (i.e., smooth or \'{e}tale).
\begin{definition} A \emph{derived Artin (resp., Deligne-Mumford) $n$-hypergroupoid} is a simplicial derived affine scheme $X \in \textbf{sdAff}$ such that the maps $$X_m = \text{Hom}_{\widehat{\Delta}}(\Delta[m],X) \to \text{Hom}_{\widehat{\Delta}}(\Lambda^k[m],X)$$ are smooth (resp., \'{e}tale) surjections for all $m$, $k$, and isomorphisms for all $m >n$ and all $k$. \end{definition}
Note that given an (Artin or Deligne-Mumford) $n$-hypergroupoid $X$, we can construct a simplicial presheaf $X$ on $\textbf{dAff}$ as follows: \begin{align*} X \colon \textbf{dAff}^{op} &\to {\widehat{\Delta}},\\ X(A)_n & := X_n(A). \end{align*}
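A sketch of the standard example: if $G$ is a smooth affine group scheme acting on $X_0 \in \textbf{dAff}$, the nerve of the action groupoid, $$X_n = X_0 \times G^n,$$ is a derived Artin $1$-hypergroupoid (the relevant matching maps are projections and action maps, hence smooth surjections, and the nerve of a groupoid has unique fillers above degree $1$), and the associated simplicial presheaf presents the quotient stack $[X_0/G]$.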
\begin{definition} A morphism $f \colon X \to Y$ in $\textbf{sdAff}$ is a \emph{trivial relative derived Artin (resp., Deligne-Mumford) $n$-hypergroupoid} if the maps $$X_m \to \text{Hom}_{\widehat{\Delta}}(\partial \Delta[m] , X) \times_{\text{Hom}_{\widehat{\Delta}}(\partial \Delta[m] , Y)} Y_m$$ are smooth (resp., \'{e}tale) surjections for all $m$, and isomorphisms for all $m \geq n$ (i.e., are $n$-truncated hypercovers). \end{definition}
Recall from~\cite{MR2877401} that a \emph{relative category} is a pair $(\mathcal{C},\mathcal{W})$ consisting of a category $\mathcal{C}$ and a wide subcategory $\mathcal{W}$ whose maps are called \emph{weak equivalences}. Such a category has a homotopy category by formally inverting all of the weak equivalences.
\begin{definition}\label{index:hyper} The \emph{category of derived $n$-Artin (resp., Deligne-Mumford) stacks} is the homotopy category of the relative category whose underlying category is the full subcategory of $\textbf{sdAff}$ spanned by the derived Artin (resp., Deligne-Mumford) $n$-hypergroupoids, and whose weak equivalences are the trivial relative derived Artin (resp., Deligne-Mumford) $n$-hypergroupoids. We will denote this category $\mathcal{G}_n^\text{sm}(\textbf{dAff})$ (resp., $\mathcal{G}_n^\text{\'{e}t}(\textbf{dAff})$). \end{definition}
The following theorem is the main result from~\cite{MR3033634}, and proves that we can move freely (up to homotopy) between the $n$-hypergroupoid and $n$-geometric stack constructions.
\begin{theorem}[{\cite[Theorem 4.15]{MR3033634}}]\label{equivcat} There is an equivalence of categories \begin{align*} \textbf{GeSt}_n(\textbf{dAff},\text{sm}) & \simeq \mathcal{G}_n^\text{sm}(\textbf{dAff}), \\ \textbf{GeSt}_n(\textbf{dAff},\text{\'{e}t}) & \simeq \mathcal{G}_n^\text{\'{e}t}(\textbf{dAff}). \end{align*} In fact, such an equivalence can be formulated for any homotopical algebraic geometric context \cite[Definition 1.3.2.13]{MR2394633}. \end{theorem}
\section{Augmentation Categories}\label{sec:3}
In this section we recall the theory of generalised Reedy categories from~\cite{genreedy}, and use them to formulate the framework of an \emph{augmentation category}.
\subsection{Generalised Reedy Categories}
A (strict) Reedy category $\mathbb{S}$ is a category such that we can equip $\mathcal{E}^{\mathbb{S}^{op}}$ with a model structure, for $\mathcal{E}$ a cofibrantly generated model category \cite{strictreedy}. The classes of maps in this model structure can be described explicitly using those of $\mathcal{E}$. An example of a strict Reedy category is the simplex category $\Delta$. One shortcoming of strict Reedy categories is that they do not allow for non-trivial automorphisms on the objects, which occur, for example, in the cyclic category of Connes, introduced in \cite{connes1}. A \emph{generalised Reedy category} allows us to capture this automorphism data. Recall that a subcategory $\mathcal{D} \subset \mathcal{C}$ is said to be \emph{wide in $\mathcal{C}$} if $\text{Ob}(\mathcal{C}) = \text{Ob}(\mathcal{D})$. The following definition appears in \cite{genreedy}.
\begin{definition}[{\cite[Definition 1.1]{genreedy}}]\label{def:genreedy} A \textit{generalised Reedy structure} on a small category $\mathbb{R}$ consists of wide subcategories $\mathbb{R}^+$, $\mathbb{R}^-$, and a degree function $d \colon \text{Ob}(\mathbb{R})\to \mathbb{N}$ satisfying the following four axioms: \begin{enumerate} \item[i)] Non-invertible morphisms in $\mathbb{R}^+$ (resp., $\mathbb{R}^-$) raise (resp., lower) the degree; isomorphisms in $\mathbb{R}$ preserve the degree. \item[ii)] $\mathbb{R}^+ \cap \mathbb{R}^- = \text{Iso}(\mathbb{R})$. \item[iii)] Every morphism $f$ of $\mathbb{R}$ factors uniquely (up to isomorphism) as $f=gh$ with $g \in \mathbb{R}^+$ and $h \in \mathbb{R}^-$. \item[iv)] If $\theta f = f$ for $\theta \in \text{Iso}(\mathbb{R})$ and $f \in \mathbb{R}^-$, then $\theta$ is an identity. Moreover, we say that the generalised Reedy structure is \emph{dualisable} if the following additional axiom holds: \item[iv)$'$] If $f \theta = f$ for $\theta \in \text{Iso}(\mathbb{R})$ and $f \in \mathbb{R}^+$, then $\theta$ is an identity. \end{enumerate} A \emph{morphism} of generalised Reedy categories is a functor $\mathbb{R} \to \mathbb{R}'$ which takes $\mathbb{R}^+$ (resp., $\mathbb{R}^-$) to $\mathbb{R}'^+$ (resp., $\mathbb{R}'^-$) and preserves the degree. \end{definition}
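The simplex category $\Delta$ is a generalised (indeed strict) Reedy category: take $d([n]) = n$, let $\Delta^+$ be the wide subcategory of injective order-preserving maps and $\Delta^-$ that of surjective ones. The unique epi-mono factorisation of a monotone map provides axiom (iii), and axioms (iv) and (iv)$'$ are vacuous since $\text{Iso}(\Delta)$ consists only of identities.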
It is possible to generate a large class of generalised Reedy categories using the theory of crossed groups on categories \cite{MR923136}.
\begin{definition}[{\cite[Proposition 2.5]{genreedy}}]\label{csgrpdeff} Let $\mathbb{R},\mathbb{S}$ be categories such that $\mathbb{R} \subseteq \mathbb{S}$ is a wide subcategory. Assume that for all $s \in \mathbb{S}$, there exist subgroups $\mathfrak{G}_s \subseteq \text{Aut}_\mathbb{S}(s)$ of special automorphisms such that each morphism in $\mathbb{S}$ factors uniquely as a special automorphism followed by a morphism in $\mathbb{R}$. Then $\mathbb{S}$ is a \emph{crossed $\mathbb{R}$-group}, which we denote $\mathbb{R}\mathfrak{G}$. What we call a crossed $\mathbb{R}$-group is sometimes referred to as the \emph{total category} of a crossed $\mathbb{R}$-group. \end{definition}
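The guiding example is Connes' cyclic category $\Lambda$: it contains $\Delta$ as a wide subcategory, the special automorphism groups are $\mathfrak{G}_{[n]} = \mathbb{Z}/(n+1)$ generated by the cyclic rotations, and every morphism of $\Lambda$ factors uniquely as a rotation followed by an order-preserving map. Thus $\Lambda$ is a crossed $\Delta$-group.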
There is a compatibility condition appearing in the following proposition which we do not spell out, as every category we consider satisfies it \cite[Remark 2.9]{genreedy}.
\begin{proposition}[{\cite[Proposition 2.10]{genreedy}}]\label{genreedycrossed} Let $\mathbb{R}$ be a strict Reedy category, and $\mathbb{R} \mathfrak{G}$ a compatible crossed $\mathbb{R}$-group. Then there is a unique dualisable generalised Reedy structure on $\mathbb{R} \mathfrak{G}$ for which the embedding $\mathbb{R} \hookrightarrow \mathbb{R} \mathfrak{G}$ is a morphism of generalised Reedy categories. \end{proposition}
We shall now use the degree function appearing in Definition~\ref{def:genreedy} to define the notion of (co)skeleton. Denote by $\mathbb{R}_{\leq n}$ the full subcategory of $\mathbb{R}$ consisting of objects of degree $\leq n$. Write $t_n \colon \mathbb{R}_{\leq n} \hookrightarrow \mathbb{R}$ for the corresponding full embedding.
\begin{definition}[{\cite[Definition 6.1]{genreedy}}] Let $\mathbb{R}$ be a generalised Reedy category. \begin{itemize} \item The \emph{$n$-skeleton} functor is the endofunctor $\textup{sk}_n := t_{n!}t^\ast_n$. \item The \emph{$n$-coskeleton} functor is the endofunctor $\textup{cosk}_n := t_{n\ast}t^\ast_n$. \end{itemize} \end{definition}
The class of EZ-categories is a subclass of the generalised Reedy categories for which the skeletal filtrations have a nice description. These skeletal filtrations can in turn be described by a corresponding \emph{boundary object}, using notions of \emph{face} and \emph{degeneracy} maps analogous to the simplicial case. These boundary objects will allow us to give an explicit description of when objects in the presheaf category are coskeletal.
\begin{definition}[{\cite[Definition 6.6]{genreedy}}] An \emph{EZ-category} (Eilenberg-Zilber category) is a small category $\mathbb{R}$, equipped with a degree function $d \colon \text{Ob}(\mathbb{R}) \to \mathbb{N}$, such that \begin{enumerate} \item Monomorphisms preserve (resp., raise) the degree if and only if they are invertible (resp., non-invertible). \item Every morphism factors as a split epimorphism followed by a monomorphism. \item Any pair of split epimorphisms with common domain gives rise to an absolute pushout (recall an \emph{absolute pushout} is a pushout preserved by the Yoneda embedding $\mathbb{R} \hookrightarrow \widehat{\mathbb{R}}$). \end{enumerate} An EZ-category is a dualisable generalised Reedy category with $\mathbb{R}^+$ (resp., $\mathbb{R}^-$) defined to be the wide subcategory containing all monomorphisms (resp., epimorphisms). Moreover, we will say that an EZ-category $\mathbb{R}$ is \emph{symmetric promagmoidal} if $\widehat{\mathbb{R}}$ has a symmetric tensor product $(\widehat{\mathbb{R}},\square,I_\square)$. Clearly any presheaf category carries the cartesian product, but often the tensor structure that we work with will be different to the cartesian product. \end{definition}
\begin{definition} Let $\mathbb{R}$ be an EZ-category. Denote by $\mathbb{R}[r]$ the representable presheaf of $r \in \mathbb{R}$ in the topos $\widehat{\mathbb{R}}$. The split-epimorphisms will be called the \emph{degeneracy operators} and the monomorphisms will be called the \emph{face operators}. \end{definition}
\begin{definition} Let $\mathbb{R}$ be an EZ-category and $r \in \mathbb{R}$. The \emph{boundary}, $\partial \mathbb{R}[r] \subset \mathbb{R}[r]$ is the subobject of those elements of $\mathbb{R}[r]$ which factor through a non-invertible face operator $s \to r$. Explicitly, $$\partial \mathbb{R}[r] = \bigcup_{f \colon s \to r} f(\mathbb{R}[s]).$$ \end{definition}
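In the case $\mathbb{R} = \Delta$ this recovers the usual simplicial boundary: the non-invertible face operators into $[n]$ are generated by the coface maps $d^i \colon [n-1] \to [n]$, so $$\partial \Delta[n] = \bigcup_{i=0}^{n} d^i(\Delta[n-1]),$$ the union of the $n+1$ codimension-one faces.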
\begin{lemma}[{\cite[Corollary 6.8]{genreedy}}]\label{isskele} Let $\mathbb{R}$ be an EZ-category and $r \in \mathbb{R}$, then $\partial \mathbb{R}[r]= \textup{sk}_{d(r)-1}\mathbb{R}[r]$. \end{lemma}
The above definition of boundary coincides exactly with the definition of boundary in the simplicial case. We can now say when an object $X \in \widehat{\mathbb{R}}$ is coskeletal.
\begin{lemma}\label{coskrep} Let $\mathbb{R}$ be an EZ-category, and $X \in \widehat{\mathbb{R}}$. Then the following are equivalent: \begin{enumerate} \item The unit of the adjunction $X \to \textup{cosk}_n(X)$ is an isomorphism. \item The map $X_r =\textup{Hom}(\mathbb{R}[r],X) \to \textup{Hom}(\mathbb{R}[r]_{\leq n},X_{\leq n})$ is a bijection for all $r$ with $d(r) >n$. \item For all $r$ with $d(r) > n$, and every morphism $\partial \mathbb{R}[r] \to X$, there exists a unique filler $\mathbb{R}[r] \to X$: $$\xymatrix{\partial \mathbb{R}[r] \ar[r] \ar[d]& X\ \\ \mathbb{R}[r] \ar@{-->}[ur]}$$ \end{enumerate} If $X$ satisfies any of these equivalent definitions we shall say that $X$ is \emph{$n$-coskeletal}. \end{lemma}
\begin{proof} The equivalence of the first two conditions follows from the definition. For the final condition, note that $\textup{cosk}_n(X)$ is given by the formula $r \mapsto \text{Hom}(\textup{sk}_n(\mathbb{R}[r]) , X)$ by adjointness; using Lemma \ref{isskele} we see that for $d(r) > n$ this is the same as $\text{Hom}(\partial \mathbb{R}[r],X)$. The unique filler condition is then equivalent to Condition 2. \end{proof}
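A classical illustration of this lemma in $\widehat{\Delta}$: the nerve $N\mathcal{C}$ of a small category is $2$-coskeletal. For $n \geq 3$, a map $\partial \Delta[n] \to N\mathcal{C}$ already records all the objects, arrows and the commutativity of every triangle involved, so it extends uniquely along $\partial \Delta[n] \to \Delta[n]$.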
\subsection{Augmentation Categories}
\begin{definition}[{\cite[Proposition 7.2]{genreedy}}]\label{thecofibrations} Let $\mathbb{R}$ be an EZ-category. A \emph{normal monomorphism} in $\widehat{\mathbb{R}}$ is a map $f\colon X \to Y$ such that $f$ is monic and for each object $r$ of $\mathbb{R}$ and each non-degenerate element $y \in Y_r \backslash f(X)_r$, the isotropy group $\{ g \in \text{Aut} (r) \mid g^\ast(y)= y \}$ is trivial. \end{definition}
We now have all of the necessary tools to introduce what we mean by an augmentation category. In the following sections we will be exploring the properties of these categories alongside developing a geometric framework for them.
\begin{definition}\label{augdef} An \emph{augmentation category} is a category $\mathbb{A}$ such that: \begin{itemize} \item[(AC1)] $\mathbb{A}$ is a symmetric promagmoidal EZ-category. \item[(AC2)] There is a faithful inclusion of EZ-categories $i \colon \Delta \hookrightarrow \mathbb{A}$ such that for any two simplicial sets $X$ and $Y$ we have $i_! (X) \square i_!(Y) \simeq i_!(X \times Y)$, where $\square$ is the tensor product of $\widehat{\mathbb{A}}$. \begin{definition} A normal monomorphism in $\widehat{\mathbb{A}}$ is said to be \emph{linear} if it is in the saturated class of boundary inclusions $\partial \mathbb{A}[a] \to \mathbb{A}[a]$ for $a = i_![n]$. \end{definition} \item[(AC3)] Let $f \colon A \to B$ and $g \colon K \to L$ be normal monomorphisms in $\widehat{\mathbb{A}}$, then the map $$A \square K \sqcup_{A \square L} B \square L \to B \square K$$ is again a normal monomorphism whenever one of them is linear. \end{itemize} We will usually use $a \in \mathbb{A}$ for a typical element of an augmentation category. \end{definition}
Clearly $\Delta$ itself is the prototypical example of an augmentation category, and it is minimal in the sense that every augmentation category contains it via the faithful inclusion $i$ of (AC2).
\section{Homotopy of Augmentation Categories}\label{sec:4}
From now on, we will assume that all categories in question are augmentation categories. This section will be devoted to proving the existence of a model structure on the presheaf category $\widehat{\mathbb{A}}$ for a given augmentation category $\mathbb{A}$. In this model, the fibrant objects are generalisations of Kan complexes. The existence of this model structure strongly hinges on the fact that $\widehat{\mathbb{A}}$ is \emph{weakly enriched in simplicial sets}. In particular, using the tensor product $\square$ and the compatibility condition of (AC2), we define for $K \in \widehat{\Delta}$ and $X,Y \in \widehat{\mathbb{A}}$: \begin{align*} \underline{\text{hom}}^\Delta_n(X,Y) &= \text{Hom}_{\widehat{\mathbb{A}}} (X \square i_! (\Delta[n]),Y),\\ X \square K &= X \square i_! (K), \\ (Y^K)_a &= \text{Hom}_{\widehat{\mathbb{A}}} (\mathbb{A}[a] \square i_! (K) , Y). \end{align*}
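When $\mathbb{A} = \Delta$ with $i = \text{id}$ and $\square$ the cartesian product, these formulas reduce to the familiar simplicial enrichment: $$\underline{\text{hom}}^\Delta_n(X,Y) = \text{Hom}_{\widehat{\Delta}}(X \times \Delta[n], Y), \qquad (Y^K)_n = \text{Hom}_{\widehat{\Delta}}(\Delta[n] \times K, Y),$$ that is, the usual mapping space and exponential of simplicial sets.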
The method of constructing the model structure will, in part, utilise the above simplicial compatibility and $\widehat{\Delta}_\text{Kan}$ to explicitly describe the weak equivalences.
\begin{remark} The material presented in this section draws heavily on the construction of the stable model structure for dendroidal sets \cite{basicthesis}, which, in turn, follows the presentation of \cite{MR2778589} and \cite{MR3545944}. In fact, one sees that the definition of the augmentation category is rigid enough that the arguments relating to the model structure developed in \cite{basicthesis} are simply altered in a consistent manner, replacing instances of $\Omega$ with $\mathbb{A}$. \end{remark}
\subsection{Normal Monomorphisms}
In this section we look at the properties of the normal monomorphisms introduced in Definition \ref{thecofibrations}; we recall the definition here in the language of augmentation categories.
\begin{definition} Let $\mathbb{A}$ be an augmentation category. A \emph{normal monomorphism} in $\widehat{\mathbb{A}}$ is a map $f\colon X \to Y$ such that $f$ is monic and for each object $a$ of $\mathbb{A}$ and each non-degenerate element $y \in Y_a \backslash f(X)_a$, the isotropy group $\{ g \in \text{Aut} (a) \mid g^\ast(y)= y \}$ is trivial. \end{definition}
\begin{remark}\label{ifstrictthenmomo} Note that if $\mathbb{A}$ has a strict Reedy structure, then the class of normal monomorphisms coincides with the class of monomorphisms. \end{remark}
Recall that a class of morphisms is \emph{saturated} if it is closed under retracts, transfinite compositions and pushouts. Similar to the simplicial case, the normal monomorphisms can be described as the saturated class of boundary inclusions. The following lemma holds for a wider class of categories than just augmentation categories, as proved in {\cite[Proposition 8.1.35]{MR2294028}}.
\begin{lemma}\label{normalmonocound} The class of normal monomorphisms is the smallest class of monomorphisms closed under pushouts and transfinite compositions that contains all boundary inclusions $\partial \mathbb{A}[a] \to \mathbb{A}[a]$ for $a \in \mathbb{A}$. \end{lemma}
Using the definition of normal monomorphisms, we will say that an object $A \in \widehat{\mathbb{A}}$ is \emph{normal} if $0 \to A$ is a normal monomorphism. The definition immediately yields the following characterisation, which in turn gives an observation about maps between normal objects.
\begin{lemma} A monomorphism $X \to Y$ of $\mathbb{A}$-sets is normal if and only if for any $a \in \mathbb{A}$, the action of $\text{Aut}(a)$ on $Y(a)-X(a)$ is free. \end{lemma}
\begin{corollary}\label{normtonorm} If $f \colon A \to B$ is any morphism of $\mathbb{A}$-sets, and $B$ is normal, then $A$ is also normal. If, moreover, $f$ is a monomorphism and $B$ is normal, then $f$ is a normal monomorphism. \end{corollary}
Quillen's small object argument applied to the saturated class of boundary inclusions yields the following.
\begin{corollary} Every morphism $f \colon X \to Y$ of $\mathbb{A}$-sets can be factored as $f = gh$, $g \colon X \to Z$, $h \colon Z \to Y$, where $g$ is a normal monomorphism and $h$ has the RLP with respect to all normal monomorphisms. \end{corollary}
\begin{definition} Let $X \in \widehat{\mathbb{A}}$. A \emph{normalisation} of $X$ is a morphism $X' \to X$ from a normal object $X'$ having the RLP with respect to all normal monomorphisms. Such a normalisation exists for any $X$, obtained by factoring the map $0 \to X$. \end{definition}
\subsection{Augmented Kan Complexes}
We shall now use the boundary objects, along with the face operators of a general EZ-category, to introduce the concept of a horn object.
\begin{definition} Let $f \colon b \to a$ be a face map of $a \in \mathbb{A}$. The \emph{$f$-horn} of $\mathbb{A}[a]$ is the subobject of $\partial \mathbb{A}[a]$ obtained by omitting the image of $f$. We denote this object $\Lambda^f\mathbb{A}[a]$. Explicitly, $$\Lambda^f\mathbb{A}[a] = \bigcup_{\substack{g \colon a' \to a \\ g \neq f}} g(\mathbb{A}[a']).$$ \end{definition}
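In $\widehat{\Delta}$ this recovers the classical horns: taking $f = d^k \colon [n-1] \to [n]$, the $f$-horn is $$\Lambda^{d^k}\Delta[n] = \bigcup_{i \neq k} d^i(\Delta[n-1]) = \Lambda^k[n],$$ the union of all codimension-one faces except the $k$-th.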
\begin{definition} Let $\mathbb{A}$ be an augmentation category. An \emph{$\mathbb{A}$-Kan complex} is an object $X \in \widehat{\mathbb{A}}$ which has fillers for all horns. That is, there is a lift for every face map $f$: $$\xymatrix{\Lambda^f\mathbb{A}[a] \ar[r] \ar[d]& X \\ \mathbb{A}[a] \ar@{-->}[ur]}$$ \end{definition}
\begin{remark} In the simplicial, and indeed the dendroidal settings, there is the concept of an \emph{inner horn}. In the general setting of augmentation categories, there seems to be no canonical way to define these objects. \end{remark}
\begin{definition} The smallest saturated class containing all horn extensions $\Lambda^f\mathbb{A}[a] \to \mathbb{A}[a]$ will be called the class of \emph{anodyne extensions}. Therefore an object is $\mathbb{A}$-Kan if and only if it has the RLP with respect to all anodyne extensions. \end{definition}
\begin{proposition} Let $Z$ be an $\mathbb{A}$-Kan complex and $f \colon A \to B$ a normal monomorphism, then $$f^\ast \colon \underline{\textup{hom}}^\Delta(B,Z) \to \underline{\textup{hom}}^\Delta(A,Z)$$ is a Kan fibration of simplicial sets. \end{proposition}
\begin{proof} We need only prove the result when $f$ is also a boundary inclusion $\partial \mathbb{A}[a] \to \mathbb{A}[a]$ (as the normal monomorphisms are the saturated class of boundary inclusions by Lemma \ref{normalmonocound}). The map $f^\ast$ has the RLP with respect to the horn inclusion $\Lambda^k[n] \to \Delta[n]$ if and only if $Z$ has the RLP with respect to the map $$\Lambda^k[n] \square \mathbb{A}[a] \sqcup \Delta[n] \square \partial \mathbb{A}[a] \to \Delta[n] \square \mathbb{A}[a].$$ By (AC3), we have that this map is an anodyne extension, so $Z$ has the RLP with respect to it. \end{proof}
The following corollary follows by considering the morphism $f \colon 0 \to B$.
\begin{corollary} If $Z$ is an $\mathbb{A}$-Kan complex, and $B$ a normal object, then $\underline{\text{hom}}^\Delta(B,Z)$ is a Kan complex. \end{corollary}
\subsection{Augmented Homotopy}\label{aughomot}
We will use the Kan objects to define a homotopy theory of $\mathbb{A}$-sets. Due to the compatibility of the tensor structures, we can take $X \square \mathbb{A}[1] \in \widehat{\mathbb{A}}$ to be a cylinder object of $X$. It comes from the factorisation of the fold map: $$\xymatrix{X \sqcup X \ar[rr]^{1_X \sqcup 1_X} \ar[dr]_{i_0 \sqcup i_1} && X \\ & X \square \mathbb{A}[1] \ar[ur]_\epsilon}$$ One can see that if $X$ is normal, then $X \square \mathbb{A}[1]$ is normal, and by Corollary \ref{normtonorm}, the map $i_0 \sqcup i_1$ is a normal monomorphism.
\begin{definition}\label{aughomoequiv} Two morphisms $f,g \colon X \to Y$ in $\widehat{\mathbb{A}}$ are \emph{homotopic} ($f \simeq g$) if there exists $H \colon X \square \mathbb{A}[1] \to Y$ such that $f = Hi_0$ and $g = Hi_1$. That is, the following diagram commutes: $$\xymatrix{X \ar[r]^-{i_0} \ar[dr]_f & X \square \mathbb{A}[1] \ar[d]^<<<<H & X \ar[l]_-{i_1} \ar[dl]^g \\ & Y}$$ We will say that $f \colon X \to Y$ is a \emph{homotopy equivalence} if there is a morphism $g \colon Y \to X$ such that $fg \simeq 1_Y$ and $gf \simeq 1_X$. \end{definition}
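When $\mathbb{A} = \Delta$ and $\square$ is the cartesian product, Definition \ref{aughomoequiv} specialises to the usual notion of simplicial homotopy through the cylinder $X \times \Delta[1]$:

```latex
% Classical case: the two endpoint inclusions of the cylinder and a homotopy H,
\[
  i_0, i_1 \colon X \cong X \times \Delta[0] \rightrightarrows X \times \Delta[1],
  \qquad
  H \colon X \times \Delta[1] \to Y,
  \qquad
  H i_0 = f, \quad H i_1 = g.
\]
```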
\begin{definition}\label{weakeqdeff} A map $f \colon X \to Y$ is a \emph{weak equivalence} if there exists a normalisation (i.e., cofibrant replacement) $f' \colon X' \to Y'$ which induces an equivalence of Kan complexes $$\underline{\text{hom}}^\Delta(Y',Z) \to \underline{\text{hom}}^\Delta(X',Z)$$ for every $\mathbb{A}$-Kan complex $Z$. Note that every homotopy equivalence between normal $\mathbb{A}$-sets is a weak equivalence. \end{definition}
\begin{remark} As one would expect, if we have an $\mathbb{A}$-set $X$, then the corresponding normalisation $f \colon X' \to X$ is a weak equivalence. \end{remark}
\begin{lemma}\label{525} A morphism of $\mathbb{A}$-sets which has the RLP with respect to all normal monomorphisms is a weak equivalence. \end{lemma}
\begin{proof} Let $f \colon X \to Y$ be a map of $\mathbb{A}$-sets with the RLP with respect to all normal monomorphisms. Denote by $Y' \to Y$ a normalisation of $Y$. Since $Y'$ is normal we may construct a lift $$ \xymatrix{0 \ar[r] \ar[d] & X \ar[d] \\
Y' \ar[r] \ar@{-->}[ur]|s & Y \rlap{ .}} $$ We then factor $s$ as a normal monomorphism $i \colon Y' \to X'$ followed by a normalisation $t \colon X' \to X$, and construct a lift $$ \xymatrix{Y' \ar@{=}[r] \ar[d]_{i} & Y' \ar[d] \\
X' \ar[r]_{ft} \ar@{-->}[ur]|{f'} & Y\rlap{ ,}} $$ which exists because $i$ is a normal monomorphism and $Y' \to Y$ is a normalisation. Then $f'i = 1_{Y'}$, and since $i \colon Y' \to X'$ is a normal monomorphism, so is $\partial I \square X' \cup I \square Y' \to I \square X'$. Therefore, there is a lift
$$
\xymatrixcolsep{12ex}\xymatrix{
\partial I \square X' \cup I \square Y' \ar[r]^-{(if',1_{X'})\cup i\epsilon} \ar[d] & X'\ar[d]^{ft} \\ I \square X' \ar@{-->}[ur] \ar[r]_{ft\epsilon} & Y \rlap{ ,} } $$ where $\epsilon$ denotes the cylinder projection; the lift exists because $ft$, being a composite of maps with the RLP with respect to all normal monomorphisms, itself has this RLP. We have therefore constructed a homotopy from $if'$ to $1_{X'}$. Hence the normalisation $f' \colon X' \to Y'$ of $f$ is a homotopy equivalence between normal objects, and therefore induces an equivalence of Kan complexes. \end{proof}
\subsection{Augmented Trivial Cofibrations}
\begin{definition} A \emph{trivial cofibration} of $\mathbb{A}$-sets is a cofibration which is also a weak equivalence. \end{definition}
\begin{lemma} A pushout of a trivial cofibration is a trivial cofibration. \end{lemma}
\begin{proof} Let $f \colon A \to B$ be a trivial cofibration and let $$\xymatrix{A \ar[r] \ar[d] & C \ar[d] \\ B \ar[r] & D}$$ be a pushout square. Normal monomorphisms are closed under pushouts, so $C \to D$ is a normal monomorphism; we need only show that it is also a weak equivalence. First assume that $A$ and $B$ are normal. For an $\mathbb{A}$-Kan complex $Z$, we have an induced pullback square $$\xymatrix{\underline{\text{hom}}^\Delta(D,Z) \ar[r] \ar[d] & \underline{\text{hom}}^\Delta(B,Z) \ar[d] \\ \underline{\text{hom}}^\Delta(C,Z) \ar[r] & \underline{\text{hom}}^\Delta(A,Z) \rlap{ .}}$$ The right-hand vertical map is a trivial fibration (of simplicial sets) because $A \to B$ is a trivial cofibration between normal objects. Trivial fibrations are closed under pullback, so the left-hand vertical map is also a trivial fibration. This proves that $C \to D$ is a trivial cofibration when $A$ and $B$ are normal. Now drop the assumption that $A$ and $B$ are normal. The following method is called the \emph{cube argument} (\cite[Lemma 5.3.2]{basicthesis}). Let $D' \to D$ be a normalisation of $D$ and consider the commutative diagram $$
\xymatrix{A' \ar[rr] \ar[dr] \ar[dd] & & C' \ar[dr] \ar[dd]|\hole \\
& B' \ar[rr] \ar[dd]& & D' \ar[dd]\\
A \ar[rr]|\hole \ar[dr] & & C \ar[dr] \\ & B \ar[rr] & & D} $$ such that the vertical squares are pullbacks. As we are working in a presheaf category, the pullback of a monomorphism is again a monomorphism. Since $D'$ is normal, Corollary \ref{normtonorm} shows that the maps $A' \to B'$ and $C' \to D'$ are normal monomorphisms, and hence all vertical maps are normalisations. The top square is a pushout, as pullbacks preserve pushouts in any presheaf category. Therefore $A' \to B'$ is a trivial cofibration between normal objects, and by the first part of the proof $C' \to D'$ is also a trivial cofibration. Hence $C \to D$ has a normalisation which is a weak equivalence, and is therefore itself a weak equivalence. \end{proof}
\begin{lemma}\label{anodaretriv} Anodyne extensions are trivial cofibrations. \end{lemma}
\begin{proof} We need only show that every horn inclusion is a weak equivalence. Let $\Lambda^f \mathbb{A}[a] \to \mathbb{A}[a]$ be such a horn inclusion and $\partial \Delta [n] \to \Delta[n]$ a simplicial boundary inclusion. We have by (AC3) that the map $$\partial \Delta[n] \square \mathbb{A}[a] \sqcup \Delta[n] \square \Lambda^f\mathbb{A}[a] \to \Delta[n] \square \mathbb{A}[a]$$ is an anodyne extension, so every $\mathbb{A}$-Kan complex $Z$ has the RLP with respect to it. Therefore the map $\underline{\text{hom}}^\Delta(\mathbb{A}[a],Z) \to \underline{\text{hom}}^\Delta(\Lambda^f\mathbb{A}[a],Z)$ is a trivial fibration of simplicial sets whenever $Z$ is Kan. Therefore $\Lambda^f \mathbb{A}[a] \to \mathbb{A}[a]$ is a weak equivalence, and the result is proven. \end{proof}
\begin{lemma}\label{lemma524} Every trivial cofibration is a retract of a pushout of a trivial cofibration between normal objects. \end{lemma}
\begin{proof} Let $u \colon A \to B$ be a trivial cofibration, and $A' \to A$ a normalisation of $A$. We consider the commutative diagram $$\xymatrix{A' \ar[r]^{u'} \ar[d] & B' \ar[d] \\ A \ar[r]_u & B}$$ constructed by factoring $A' \to B$ as a normal monomorphism $u' \colon A' \to B'$ followed by a normalisation of $B$. As normalisations are weak equivalences, and weak equivalences satisfy the two-out-of-three property (because the weak equivalences in $\widehat{\Delta}_\text{Kan}$ do), the map $u' \colon A' \to B'$ is a trivial cofibration between normal objects. We now consider the pushout $$\xymatrix{A' \ar[r]^{u'} \ar[d] & B' \ar[d] \\ A \ar[r]_v & P}$$ which provides a map $s \colon P \to B$. It suffices to show that $s$ has the RLP with respect to the normal monomorphisms, as this ensures that $u$ is a retract of $v$ via the lift in $$\xymatrix{A \ar[r]^v \ar[d]_u& P \ar[d]^s \\ B \ar@{=}[r] \ar@{-->}[ur]& B \rlap{ .}}$$ Therefore, we consider the lifting problem $$\xymatrix{\partial \mathbb{A}[a] \ar[r] \ar[d] & P \ar[d]^s \\ \mathbb{A}[a] \ar[r] & B\rlap{ .}}$$ Using the \emph{cube argument} again, we pull back along $\partial \mathbb{A} [a] \to P$ to form the cube $$
\xymatrix{E \ar[rr] \ar[dr] \ar[dd] & & D \ar[dr] \ar[dd]|\hole \\
& C \ar[rr] \ar[dd]& & \partial \mathbb{A}[a] \ar[dd]\\
A' \ar[rr]|\hole \ar[dr] & & B' \ar[dr] \\ & A \ar[rr] & & P \rlap{ ,}} $$ where the horizontal faces are pushouts and the vertical faces are pullbacks. The map $E \to C$ is a normalisation and all the objects in the top face are normal, so $E \to C$ has a section, and hence so does its pushout $D \to \partial \mathbb{A}[a]$. Using this section, we form a commutative diagram $$\xymatrix{\partial \mathbb{A}[a] \ar[d] \ar[r] & D \ar[r] & B' \ar[d] \\ \mathbb{A}[a] \ar@{-->}[urr] \ar[rr] && B}$$ in which the lift exists, as $B' \to B$ is a normalisation. This gives a solution to the required lifting problem. \end{proof}
We say that an $\mathbb{A}$-set $X$ is \emph{countable} if each $X(a)$ is a countable set.
\begin{lemma}[{\cite[Proposition 5.3.8]{basicthesis}}]\label{normalsuff} The class of trivial cofibrations is generated by the trivial cofibrations between countable and normal objects. \end{lemma}
\subsection{The Model Structure}
\begin{definition}\label{fibdef} A morphism in $\widehat{\mathbb{A}}$ is a \emph{fibration} if it has the RLP with respect to the trivial cofibrations. \end{definition}
\begin{theorem} There is a cofibrantly generated model structure on $\widehat{\mathbb{A}}$ with the defined class of weak equivalences (Definition \ref{weakeqdeff}), fibrations (Definition \ref{fibdef}) and cofibrations (Definition \ref{thecofibrations}). We will denote this model structure $\widehat{\mathbb{A}}_\text{Kan}$. \end{theorem}
\begin{proof} We will show that the axioms (CM1)-(CM5) hold. First of all, (CM1) holds automatically as $\widehat{\mathbb{A}}$ is a presheaf category. (CM2) holds from the definition of weak equivalences and the fact that the weak equivalences in the Quillen model on $\widehat{\Delta}$ satisfy this property. Axiom (CM3), closure of the three classes under retracts, also follows directly from the definitions. The first non-trivial axiom to show is (CM5). The fact that every map can be factored as a cofibration followed by a trivial fibration follows from the small object argument for the set of all boundary inclusions and Lemma \ref{525}. The factorisation as a trivial cofibration followed by a fibration follows from the small object argument for the set of all trivial cofibrations between normal countable $\mathbb{A}$-sets by Lemma \ref{normalsuff}.
One half of (CM4) holds from the definition of the fibrations (Definition \ref{fibdef}). Assume we have the following commutative diagram: $$\xymatrix{A \ar[r] \ar[d]_i & X \ar[d]^p \\ B \ar[r] & Y \rlap{ ,}}$$ for $i$ a cofibration and $p$ a trivial fibration. We need to produce a lift $B \to X$. We factor $p \colon X \to Y$ as a cofibration $X \to Z$ followed by a map $Z \to Y$ having the RLP with respect to all cofibrations. Then $Z \to Y$ is a weak equivalence, and by two out of three, so is $X \to Z$. We first find the lift in $$\xymatrix{A \ar[r] \ar[d]_i & X \ar[r] & Z \ar[d] \\ B \ar[rr] \ar@{-->}[urr] && Y \rlap{ ,}}$$ and then, as $X \to Z$ is a trivial cofibration and $p$ a fibration, in $$\xymatrix{X \ar@{=}[r] \ar[d] & X \ar[d]^p \\ Z \ar@{-->}[ur] \ar[r] & Y \rlap{ .}}$$ The composition of the two lifts gives us the necessary lift $B \to X$. The generating cofibrations are given by the boundary inclusions, and the generating trivial cofibrations are the trivial cofibrations between normal and countable $\mathbb{A}$-sets. \end{proof}
\begin{proposition} The fibrant objects in $\widehat{\mathbb{A}}_\text{Kan}$ are the $\mathbb{A}$-Kan complexes. \end{proposition}
\begin{proof} Let $Z$ be fibrant in $\widehat{\mathbb{A}}_\text{Kan}$. By Lemma \ref{anodaretriv} the anodyne extensions are trivial cofibrations, therefore $Z$ has the RLP with respect to the anodyne extensions and is therefore $\mathbb{A}$-Kan. Conversely, let $Z$ be $\mathbb{A}$-Kan, and $A \to B$ a trivial cofibration between normal objects; then the map $\underline{\text{hom}}^\Delta(B,Z) \to \underline{\text{hom}}^\Delta(A,Z)$ is a trivial fibration of simplicial sets. The trivial fibrations in $\widehat{\Delta}_\text{Kan}$ are surjective on vertices, so we can deduce that $Z$ has the RLP with respect to the map $A \to B$. Lemma \ref{lemma524} then implies that every $\mathbb{A}$-Kan complex has the RLP with respect to all trivial cofibrations. \end{proof}
\subsection{Properties of $\text{Ho}(\widehat{\mathbb{A}}_\text{Kan})$}
In this section we briefly list some of the required properties of $\widehat{\mathbb{A}}_\text{Kan}$. The first property that we prove is left-properness which will be essential when considering augmented presheaves.
\begin{proposition} $\widehat{\mathbb{A}}_\text{Kan}$ is left-proper. \end{proposition}
\begin{proof} Consider the following pushout $$\xymatrix{A \ar[r] \ar[d] & C \ar[d] \\ B \ar[r] & D \rlap{ ,}}$$ with $A \to B$ a weak equivalence and $A \to C$ a cofibration. We can reduce to the case where all objects are normal via the \emph{cube argument}. We have an induced diagram $$\xymatrix{\underline{\text{hom}}^\Delta(D,Z) \ar[r] \ar[d] & \underline{\text{hom}}^\Delta(B,Z) \ar[d] \\ \underline{\text{hom}}^\Delta(C,Z) \ar[r] & \underline{\text{hom}}^\Delta(A,Z) \rlap{ ,}}$$ which is a pullback for any $Z \in \widehat{\mathbb{A}}$. If $Z$ is an $\mathbb{A}$-Kan complex, then all simplicial sets in the above diagram are Kan complexes and the right vertical map is an equivalence of simplicial sets. The left vertical map is then also an equivalence: trivial fibrations are stable under pullback, so by Ken Brown's Lemma \cite{MR0341469} weak equivalences between fibrant objects are stable under pullback. \end{proof}
The following lemma holds from the construction of compatible Kan complex objects.
\begin{lemma}\label{quiladj1} There is a Quillen adjunction $$i_! : \widehat{\Delta}_\text{Kan} \rightleftarrows \widehat{\mathbb{A}}_\text{Kan} : i^\ast.$$ \end{lemma}
\begin{proof} The right adjoint $i^\ast$ sends fibrations (resp., trivial fibrations) of $\widehat{\mathbb{A}}_\text{Kan}$ to fibrations (resp., trivial fibrations) of $\widehat{\Delta}_\text{Kan}$ by construction, and therefore is part of a Quillen pair, for which $i_!$ is the left adjoint. \end{proof}
Recall that we do not always have the assumption that $\widehat{\mathbb{A}}_\text{Kan}$ satisfies the pushout-product axiom, therefore we cannot hope for a closed monoidal structure in full generality. However, what does hold is that $\text{Ho}(\widehat{\mathbb{A}}_\text{Kan})$ is a simplicial category. Using the tools presented in \cite[Proposition 5.4.4]{basicthesis} and \cite[Lemma 3.8.1]{MR3545944} we can formally prove the following, although the result is unsurprising due to the way we have constructed the classes of maps in our model structure.
\begin{lemma} The category $\textup{Ho}(\widehat{\mathbb{A}}_\text{Kan})$ is enriched over $\textup{Ho}(\widehat{\Delta}_\text{Kan})$. \end{lemma}
\begin{remark} If $\mathbb{A}$ is a strict EZ-category then the model $\widehat{\mathbb{A}}_\text{Kan}$ is a Cisinski type model structure defined in \cite{MR2294028}. This follows as for a strict EZ-category we have by Remark \ref{ifstrictthenmomo} that the normal monomorphisms are then just the monomorphisms. \end{remark}
\section{Augmented Homotopical Algebraic Geometry}\label{sec:5}
\subsection{Local Model Structure on Augmented Presheaves}\label{auglocmodels}
We will now build a local model structure on $\mathbb{A} \textbf{-Pr} (\mathcal{C}) := {\widehat{\mathbb{A}}}^{\mathcal{C}^{op}}$ which reflects the local model structure on simplicial presheaves. First of all, we consider the projective point-wise model structure on $\mathbb{A} \textbf{-Pr} (\mathcal{C})$, again denoted $\mathbb{A} \textbf{-Pr}_\text{proj} (\mathcal{C})$. Note that as $\widehat{\mathbb{A}}$ is left-proper and combinatorial (for it is accessible and cofibrantly generated), we have that the projective point-wise model is also left-proper and combinatorial.
We will now construct the analogue of hypercovers. In the simplicial case, hypercovers were introduced using the boundary construction; with that formulation we need only replace the simplicial boundaries by their augmented analogues to obtain a class of augmented hypercovers.
\begin{definition}\label{aughyper} Let $\mathbb{A}$ be an augmentation category and $(\mathcal{C},\tau)$ a site. A map $f \colon X \to Y$ in $\mathbb{A} \textbf{-Pr}(\mathcal{C})$ is a \emph{hypercovering} if for all $a \in \mathbb{A}$ the map $$X_a \to \text{Hom}_{\widehat{\mathbb{A}}}(\partial \mathbb{A}[a] , X) \times_{\text{Hom}_{\widehat{\mathbb{A}}}(\partial \mathbb{A}[a] , Y)} Y_a$$ is a $\tau$-covering in $\textup{Ho}(\mathbb{A} \textbf{-Pr}_\text{proj}(\mathcal{C}))$. Note that by Lemma \ref{coskrep}, this is equivalent to asking the same condition for the class of maps $$X_a \to (\textup{cosk}_{d(a)-1}X)_a \times_{(\textup{cosk}_{d(a)-1}Y)_a} Y_a.$$ \end{definition}
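To see that this is the expected notion, note that for $\mathbb{A} = \Delta$ the condition above specialises to the usual hypercover condition for simplicial presheaves: for every $n \geq 0$ the matching map

```latex
\[
  X_n \;\longrightarrow\;
  \mathrm{Hom}_{\widehat{\Delta}}\!\left(\partial\Delta[n], X\right)
  \times_{\mathrm{Hom}_{\widehat{\Delta}}\left(\partial\Delta[n], Y\right)} Y_n
\]
```

is required to be a $\tau$-covering.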
\begin{definition} The \emph{local model structure} on $\mathbb{A} \textbf{-Pr} (\mathcal{C})$ is the left Bousfield localisation of $\mathbb{A} \textbf{-Pr}_\text{proj} (\mathcal{C})$ at the class of hypercoverings. We shall denote this model structure $\mathbb{A} \textbf{-Pr}_\tau(\mathcal{C})$. \end{definition}
\begin{definition} For a site $(\mathcal{C}, \tau)$, we will call the homotopy category $\text{Ho}(\mathbb{A} \textbf{-Pr}_\tau(\mathcal{C}))$ the category of \emph{$\mathbb{A}$-augmented stacks on $\mathcal{C}$}. \end{definition}
\begin{lemma} For $(\mathcal{C},\tau)$ a site, there is a Quillen adjunction $$i_! : \textbf{sPr}_\tau(\mathcal{C}) \rightleftarrows \mathbb{A} \textbf{-Pr}_\tau(\mathcal{C}) : i^\ast .$$ \end{lemma}
\begin{proof} We will prove the statement for the respective injective local models and then we can compose with the identity functor to the projective local case, which will prove the result. In the local injective models, the cofibrations are the point-wise cofibrations. By Lemma \ref{quiladj1}, we know that $i_!$ sends cofibrations of $\widehat{\Delta}_\text{Kan}$ to cofibrations of $\widehat{\mathbb{A}}_\text{Kan}$. Therefore, we need only show that $i_!$ preserves the trivial cofibrations. This follows as if $f \colon X \to Y$ is a (non-augmented) hypercover, then $i_!f \colon i_!X \to i_!Y$ is an augmented hypercover. \end{proof}
\begin{remark} We have introduced the above theory for the case when $\mathcal{C}$ is a simplicial site. However for $\mathbb{A}$-presheaves, it would also make sense to allow \emph{$\mathbb{A}$-sites}. That is, categories $\mathcal{C}$ enriched over $\widehat{\mathbb{A}}$ such that there is a Grothendieck topology on $\pi_0(\mathcal{C})$. \end{remark}
\subsection{Local Weak Equivalences}
We now describe what the weak equivalences in the local model structure should look like. We introduce the concept of augmented homotopy groups, resembling the simplicial case, using the augmented homotopy of Section \ref{aughomot}.
\begin{definition}\label{homotopygrps} Let $X \in \widehat{\mathbb{A}}$ be fibrant in $\widehat{\mathbb{A}}_\text{Kan}$, and $a \in \mathbb{A}$. Denote by $\pi_a(X,x_0)$ the set of equivalence classes of morphisms $\alpha \colon \mathbb{A}[a] \to X$ which fit into the following commutative diagram in $\widehat{\mathbb{A}}$: $$\xymatrix{\partial \mathbb{A}[a] \ar[r] \ar[d] & \mathbb{A}[0] \ar[d]^{x_0} \\ \mathbb{A}[a] \ar[r]_\alpha & X \rlap{ ,}}$$ where the equivalence relation is given by homotopy equivalence of Definition \ref{aughomoequiv}. \end{definition}
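For $\mathbb{A} = \Delta$ and $X$ a Kan complex, this definition reduces to the standard combinatorial description of the simplicial homotopy groups:

```latex
% Classical case: n-simplices whose boundary is constant at the basepoint,
% modulo homotopy relative to the boundary,
\[
  \pi_{[n]}(X, x_0)
  \;=\; \left\{ \alpha \colon \Delta[n] \to X \;:\; \alpha|_{\partial\Delta[n]} = x_0 \right\}
  \big/ \simeq ,
\]
% which for Kan X agrees with the homotopy groups of the geometric realisation.
```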
\begin{remark} Although we will not pursue it here, one would hope that the object $\pi_a(X,x_0)$ is a group which is abelian for $d(a) \geq 2$. \end{remark}
\begin{definition} Let $(\mathcal{C}, \tau)$ be a site. A map $f \colon \mathcal{F} \to \mathcal{F}'$ in $\mathbb{A} \textbf{-Pr}(\mathcal{C})$ is a \textit{local weak equivalence} if: \begin{itemize} \item The induced map $\pi_0^\tau \mathcal{F} \to \pi_0^\tau\mathcal{F}'$ is an isomorphism of sheaves. \item Squares of following form are pullbacks after sheafification: $$ \xymatrix{\pi_a \mathcal{F} \ar[r] \ar[d] &\ar[d] \pi_a \mathcal{F}' \\ \mathcal{F}_0 \ar[r] & \mathcal{F}'_0 \rlap{ .}} $$ \end{itemize} \end{definition}
\begin{conjecture}\label{firstconjecture} A map $f \colon \mathcal{F} \to \mathcal{F}'$ is a local weak equivalence if and only if it is a hypercover. \end{conjecture}
If the above conjecture is true, which seems likely due to the combinatorial nature of augmentation categories in relation to the simplex category, then we would get the following result, mirroring Theorem \ref{projlocmodel}.
\begin{corollary} Let $(\mathcal{C}, \tau)$ be a site. There exists a cofibrantly generated model structure on the category $\mathbb{A} \textbf{-Pr}(\mathcal{C})$ where a map $f \colon \mathcal{F} \to \mathcal{F}'$ is a: \begin{enumerate} \item Weak equivalence if it is a local weak equivalence. \item Fibration if it has the RLP with respect to the trivial cofibrations. \item Cofibration if it is a cofibration in the point-wise projective model. \end{enumerate} Moreover this model is Quillen equivalent to the local model $\mathbb{A} \textbf{-Pr}_\tau(\mathcal{C})$. \end{corollary}
\subsection{Enriched Structure}
In this section we will show that the local model structure $\mathbb{A} \textbf{-Pr}_\tau(\mathcal{C})$ is enriched over the local model structure on simplicial presheaves, and is therefore a simplicial category. In the case that $\widehat{\mathbb{A}}$ is a closed monoidal category we can go further and show that $\mathbb{A} \textbf{-Pr}_\tau(\mathcal{C})$ is in fact a closed monoidal model category. To do this, we will use a trick used in \cite[\S 3.6]{MR2137288}, which considers instead the local injective model structure. Denote by $\mathbb{A} \textbf{-Pr}_\textup{inj}(\mathcal{C})$ the injective model structure. From the properties of $\widehat{\mathbb{A}}_\text{Kan}$ we have that this model is left-proper and cofibrantly generated, and therefore a left Bousfield localisation exists. We localise at the set of hypercovers once again and retrieve the model category $\mathbb{A} \textbf{-Pr}_{\text{inj,}\tau}(\mathcal{C})$. Clearly we have an equivalence of categories $\text{Ho}(\mathbb{A} \textbf{-Pr}_{\text{inj,}\tau}(\mathcal{C})) \simeq \text{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$ as the weak equivalences in each model are the same. As the cofibrations in the local injective model are simply the normal monomorphisms, and finite products preserve local weak equivalences, anything we have proved regarding the enriched structure of $\widehat{\mathbb{A}}_\text{Kan}$ can be carried over to this setting. Using this justification, we state the two following lemmas and corresponding definitions.
\begin{lemma} The category $\textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$ is enriched over $\textup{Ho}(\textbf{sPr}_\tau(\mathcal{C}))$, and subsequently over $\textup{Ho}(\widehat{\Delta}_\text{Kan})$. In particular, $\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})$ is a simplicial model category. \end{lemma}
\begin{definition} The simplicial presheaf enrichment in $\textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$ will be denoted $$\mathbb{A} \text{-} \mathbb{R}_\tau^\Delta \underline{\mathcal{H}\text{om}}(-,-) \colon \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \times \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \to \textup{Ho}(\textbf{sPr}_\tau(\mathcal{C})).$$ The corresponding simplicial enrichment will be denoted $$\mathbb{A} \text{-} \mathbb{R}_\tau^\Delta \underline{\text{Hom}}(-,-) \colon \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \times \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \to \textup{Ho}(\widehat{\Delta}_\text{Kan}).$$ \end{definition}
\begin{lemma} If the presheaf category $\widehat{\mathbb{A}}$ satisfies the pushout-product axiom, then the category $\textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$ is a closed monoidal category, and is subsequently enriched over $\text{Ho}(\widehat{\mathbb{A}}_\text{Kan})$. In particular, $\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})$ is a $\mathbb{A}$-model category.
\end{lemma}
\begin{definition} The internal-hom in $\textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$ will be denoted $$\mathbb{A} \text{-} \mathbb{R}_\tau \underline{\mathcal{H}\text{om}}(-,-) \colon \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \times \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \to \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})).$$ The corresponding augmented enrichment will be denoted $$\mathbb{A} \text{-} \mathbb{R}_\tau \underline{\text{Hom}}(-,-) \colon \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \times \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C})) \to \textup{Ho}(\widehat{\mathbb{A}}_\text{Kan}).$$ \end{definition}
\subsection{Augmented Derived Stacks}
We shall now use the model structures developed in Section \ref{auglocmodels} to discuss the theory of augmented stacks, which will be the main objects of interest in this article. Following the simplicial setting, we make the following definition.
\begin{definition} Let $(\mathcal{C},\tau)$ be a site, and $\mathbb{A}$ an augmentation category. \begin{itemize} \item The category $\textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$ will be called the \emph{category of $\mathbb{A}$-augmented $\infty$-stacks}. \item An object $F \in \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$ will be referred to as a \emph{$\mathbb{A}$-augmented $\infty$-stack}. \item For two augmented stacks $F,G \in \textup{Ho}(\mathbb{A} \textbf{-Pr}_{\tau}(\mathcal{C}))$, we will call the object $\mathbb{A} \text{-} \mathbb{R}_\tau \underline{\mathcal{H}\text{om}}(-,-)$ (resp., $\mathbb{A} \text{-} \mathbb{R}_\tau^\Delta \underline{\mathcal{H}\text{om}}(-,-)$) the \emph{$\mathbb{A}$-mapping stack} (resp., mapping stack) from $F$ to $G$. Note that the $\mathbb{A}$-mapping stack does not exist in full generality. \end{itemize} \end{definition}
\subsection{Augmented Geometric Derived Stacks}
We now introduce augmented derived geometric stacks through a modified $n$-hypergroupoid construction. Of course, all that we do in this setting can be reformulated for when the site in question is not $\textbf{dAff}$, but any homotopical algebraic geometric context.
\begin{definition} A derived Artin (resp., derived Deligne-Mumford) $(\mathbb{A},n)$-hypergroupoid is an object $X \in \textbf{dAff}^{\mathbb{A}^{op}}$ such that the maps $$X_a = \text{Hom}_{\widehat{\mathbb{A}}}(\mathbb{A}[a],X) \to \text{Hom}_{\widehat{\mathbb{A}}}(\Lambda^f\mathbb{A}[a],X)$$ are smooth (resp., \'{e}tale) surjections for all objects $a$ and face maps $f$, and are isomorphisms for all $a$ with $d(a) > n$. Note that a derived $(\mathbb{A},n)$-hypergroupoid is an augmented $\mathbb{A}$-Kan complex. Moreover, using Lemma \ref{coskrep} we see that a $(\mathbb{A},n)$-hypergroupoid is $(n+1)$-coskeletal. \end{definition}
\begin{definition} A derived Artin (resp., derived Deligne-Mumford) trivial $(\mathbb{A},n)$-hypergroupoid is a map $f \colon X \to Y$ in $\textbf{dAff}^{\mathbb{A}^{op}}$ such that the maps $$X_a \to \text{Hom}_{\widehat{\mathbb{A}}}(\partial \mathbb{A}[a] , X) \times_{\text{Hom}_{\widehat{\mathbb{A}}}(\partial \mathbb{A}[a] , Y)} Y_a$$ are smooth (resp., \'{e}tale) surjections for all $a,f$ and are isomorphisms for all $a$ with $d(a) > n$. In particular, a derived Artin trivial $(\mathbb{A},n)$-hypergroupoid is a trivial fibration in $\widehat{\mathbb{A}}_\text{Kan}$. \end{definition}
\begin{lemma} Let $f \colon X \to Y$ be a trivial $(\mathbb{A},n)$-hypergroupoid. Then $X = Y \times_{\textup{cosk}_{n-1} Y} \textup{cosk}_{n-1}X$. \end{lemma}
\begin{proof} This follows from comparing the definition to the general result about augmented coskeletal objects in Lemma \ref{coskrep}. Note that this shows that $(\mathbb{A},n)$-hypergroupoids can be seen as $n$-truncated $\mathbb{A}$-hypercovers. \end{proof}
\begin{definition} A model for the $\infty$-category of strongly quasi-compact $(\mathbb{A},n)$-geometric derived Artin (resp., Deligne-Mumford) stacks is given by the relative category consisting of the derived Artin (resp., Deligne-Mumford) $(\mathbb{A},n)$-hypergroupoids and the class of derived trivial Artin (resp., Deligne-Mumford) $(\mathbb{A},n)$-hypergroupoids. We will denote the homotopy category as $$\mathbb{A} \text{-} \mathcal{G}_n^\text{sm}(\textbf{dAff}) \qquad \left( \text{resp., } \mathbb{A} \text{-} \mathcal{G}_n^\text{\'{e}t}(\textbf{dAff}) \right)$$ \end{definition}
\begin{remark} We could have also (equivalently) formulated the theory of $(\mathbb{A},n)$-geometric derived Artin or Deligne-Mumford stacks by using a representability criterion. However, using the hypergroupoid construction highlights the beauty of the construction with the intertwining of the hypercovering conditions. \end{remark}
We have listed the following as a conjecture, as relative categories do not have the notion of Quillen equivalence, and therefore we can only talk about the derived adjunction that may exist between the homotopy categories. To prove the following conjecture, one would need to construct the full Quillen model structure, and prove that there is a Quillen adjunction. It could also be the case that this result does not hold in all cases, but only for a subclass of augmentation categories.
\begin{conjecture}\label{conj3} Let $\mathbb{A}$ be an augmentation category, then there is an adjunction \begin{align*} \mathbb{L} i_! : \mathcal{G}_n^\text{sm}(\textbf{dAff}) &\rightleftarrows \mathbb{A} \text{-} \mathcal{G}_n^\text{sm}(\textbf{dAff}) : \mathbb{R}i^\ast\\ \bigg( \text{resp., } \mathbb{L} i_! : \mathcal{G}_n^\text{\'{e}t}(\textbf{dAff}) &\rightleftarrows \mathbb{A} \text{-} \mathcal{G}_n^\text{\'{e}t}(\textbf{dAff}) : \mathbb{R}i^\ast \bigg) \end{align*} \end{conjecture}
We finish this section by outlining a potential direction for future research of $(\mathbb{A},n)$-hypergroupoids. One of the key tools used in derived algebraic geometry is that of quasi-coherent complexes as defined in \cite[\S 5.2]{MR2717174}; of particular interest is the cotangent complex, which controls infinitesimal deformations. In \cite{MR3033634} an alternative, homotopy equivalent definition of quasi-coherent complexes was given using $n$-hypergroupoids, giving a cosimplicial object. This definition would be adjustable to the augmented setting, giving rise to a particular coaugmented object. The question is then: what type of deformation would such an augmented cotangent complex control?
\section{Equivariant Stacks}\label{sec:6}
Our first non-trivial example of an augmentation category will be crossed simplicial groups in the sense of Definition~\ref{csgrpdeff}, introduced by Loday and Fiedorowicz~\cite{loday} (and independently by Krasauskas under the name of \textit{skew-simplicial sets}~\cite{krasauskas}). In particular a \textit{crossed simplicial group} is a category $\Delta \mathfrak{G}$ equipped with an embedding $i \colon \Delta \hookrightarrow \Delta \mathfrak{G}$ such that: \begin{enumerate} \item The functor $i$ is bijective on objects. \item Any morphism $u \colon i[m] \to i[n]$ in $\Delta \mathfrak{G}$ can be uniquely written as $i(\phi) \circ g$ where $\phi \colon [m] \to [n]$ is a morphism in $\Delta$ and $g$ is an automorphism of $i[m]$ in $\Delta \mathfrak{G}$. \end{enumerate}
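To make condition (2) concrete in the motivating example: for the cyclic category $\Delta \mathfrak{C}$ one has $\text{Aut}_{\Delta \mathfrak{C}}([n]) \cong \mathbb{Z}/(n+1)$, generated by a cyclic operator $\tau_n$. As a standard fact (not proved here), the operators induced on a presheaf over $\Delta \mathfrak{C}$ (a cyclic object) satisfy the following relations with the simplicial structure maps:

```latex
% Relations between the cyclic operator and the face and degeneracy maps
% of a cyclic object (a presheaf on \Delta\mathfrak{C}):
\begin{align*}
  \tau_n d_i &= d_{i-1}\tau_{n-1} \quad (1 \le i \le n), & \tau_n d_0 &= d_n,\\
  \tau_n s_i &= s_{i-1}\tau_{n+1} \quad (1 \le i \le n), & \tau_n s_0 &= s_n\tau_{n+1}^{2},\\
  \tau_n^{\,n+1} &= \mathrm{id}.
\end{align*}
```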
The canonical example of such a category is the cyclic category of Connes, which we will denote $\Delta \mathfrak{C}$~\cite{connes1}. The following lemma will be used to prove several results in the rest of this article, and follows from Proposition~\ref{genreedycrossed}.
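For orientation, we record the classical presentation of the cyclic case, which also illustrates the unique factorisation property above. The automorphism group of $[n]$ in $\Delta \mathfrak{C}$ is cyclic of order $n+1$, generated by an operator $\tau_n$ subject to the following relations with the simplicial generators:
\begin{align*}
\tau_n d_i &= d_{i-1} \tau_{n-1} \quad (1 \leq i \leq n), & \tau_n d_0 &= d_n,\\
\tau_n s_i &= s_{i-1} \tau_{n+1} \quad (1 \leq i \leq n), & \tau_n s_0 &= s_n \tau_{n+1}^2,\\
\tau_n^{n+1} &= \mathrm{id}_{[n]}. &&
\end{align*}
In particular, any morphism $i[m] \to i[n]$ in $\Delta \mathfrak{C}$ factors uniquely as a simplicial operator precomposed with a power of $\tau_m$.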
\begin{lemma}[{\cite[\S 6]{genreedy}}]\label{crossisez} Let $\mathbb{R}$ be a strict EZ-category. Then any crossed group $\mathbb{R} \mathfrak{G}$ on $\mathbb{R}$ is an EZ-category. \end{lemma}
\begin{proposition} The category $\Delta \mathfrak{G}$ is an augmentation category. \end{proposition}
\begin{proof} We check that $\Delta \mathfrak{G}$ satisfies the conditions (AC1) -- (AC3) appearing in Definition~\ref{augdef}: \begin{enumerate} \item[(AC1)] From Lemma~\ref{crossisez} we see that $\Delta \mathfrak{G}$ is an EZ-category. A tensor structure is given by simply taking the cartesian product. \item[(AC2)] The inclusion $\Delta \hookrightarrow \Delta \mathfrak{G}$ and the compatibility of the monoidal structures follow by definition. \item[(AC3)] In Lemma~\ref{ismonoidalclosed} below, we prove that the normal monomorphisms have the pushout-product property in full generality, which is a stronger result. We therefore forgo a separate proof of (AC3) and instead refer to the proof of Lemma~\ref{ismonoidalclosed}. \end{enumerate} \end{proof}
We describe the adjoint $i_!$, which appears for the cyclic case in \cite[\S 3.1]{connes2}. As the functor $i^\ast \colon \widehat{\Delta \mathfrak{G}} \to \widehat{\Delta}$ forgets the $\Delta \mathfrak{G}$-action, the left adjoint is naturally a free construction. In particular, for a simplicial set $X$ we have $i_!(X)_n = \text{Aut}_{\Delta \mathfrak{G}}([n]) \times X_n $, where the $\text{Aut}_{\Delta \mathfrak{G}}([n])$-action is given by left multiplication. The face and degeneracy maps are given by: \begin{align*} d_i(g,x)&=\left(d_i(g),d_{g^{-1}(i)}(x)\right),\\ s_i(g,x)&=\left(s_i(g),s_{g^{-1}(i)}(x)\right). \end{align*}
\begin{corollary}\label{equivariantcorr}\leavevmode \begin{itemize} \item There is a Quillen model structure on the category of ${\Delta \mathfrak{G}}$-sets, denoted $\widehat{{\Delta \mathfrak{G}}}_\text{Kan}$, where the fibrant objects are the ${\Delta \mathfrak{G}}$-Kan complexes, and the cofibrations are the normal monomorphisms. Moreover there is a Quillen adjunction $$i_! \colon \widehat{\Delta}_\text{Kan} \rightleftarrows \widehat{{\Delta \mathfrak{G}}}_\text{Kan} \colon i^\ast.$$ \item There is a local model structure on the category of ${\Delta \mathfrak{G}}$-presheaves, denoted ${\Delta \mathfrak{G}} \textbf{-Pr}_\tau(\mathcal{C})$ obtained as the left Bousfield localisation of the point-wise Kan model at the class of ${\Delta \mathfrak{G}}$-hypercovers. Moreover there is a Quillen adjunction $$i_! : \textbf{sPr}_\tau(\mathcal{C}) \rightleftarrows {\Delta \mathfrak{G}} \textbf{-Pr}_\tau(\mathcal{C}) : i^\ast.$$ \item The category of $({\Delta \mathfrak{G}},n)$-geometric derived Artin stacks, which we denote ${\Delta \mathfrak{G}} \text{-} \mathcal{G}_n^\text{sm}(\textbf{dAff})$, is given as the homotopy category of derived $({\Delta \mathfrak{G}},n)$-hypergroupoids with respect to the class of trivial $({\Delta \mathfrak{G}},n)$-hypergroupoids. \end{itemize} \end{corollary}
\begin{definition} The augmented homotopical algebraic geometry theory arising from ${\Delta \mathfrak{G}}$ will be referred to as \emph{$\mathfrak{G}$-equivariant}. For example, the category $\text{Ho}({\Delta \mathfrak{G}} \textbf{-Pr}_\tau(\mathcal{C}))$ will be referred to as the category of \emph{$\mathfrak{G}$-equivariant $\infty$-stacks}. \end{definition}
We will now explore the Kan model structure on $\widehat{{\Delta \mathfrak{G}}}$, which will lead to the justification of using the term $\mathfrak{G}$-equivariant to describe the augmentation. We shall fix our attention on the cyclic category ${\Delta \mathfrak{C}}$; however, the homotopy theory of arbitrary crossed simplicial groups can be discussed, see~\cite{balchin2} for details.
The Kan model structure on $\widehat{{\Delta \mathfrak{C}}}$ was in fact the first model structure to be developed for cyclic sets by Dwyer, Hopkins and Kan~\cite{homotopycc}.
\begin{proposition}[{\cite[Theorem 3.1]{homotopycc}}]\label{csweak} The category $\widehat{{\Delta \mathfrak{C}}}$ has a cofibrantly generated model structure where a map $f \colon X \to Y$ is a: \begin{itemize} \item Weak equivalence if $i^\ast(f) \colon i^\ast(X) \to i^\ast(Y)$ is a weak equivalence in $\widehat{\Delta}_\text{Kan}$. \item Fibration if $i^\ast(f) \colon i^\ast(X) \to i^\ast(Y)$ is a fibration in $\widehat{\Delta}_\text{Kan}$. \item Cofibration if it has the LLP with respect to the trivial fibrations. \end{itemize} We shall denote this model $\widehat{{\Delta \mathfrak{C}}}_{\text{DHK}}$. \end{proposition}
By comparison of the structure of the fibrations \cite[Proposition 3.2]{homotopycc} and cofibrations \cite[Proposition 3.5]{homotopycc}, we see that the model structure of Proposition~\ref{csweak} is exactly that of $\widehat{{\Delta \mathfrak{C}}}_\text{Kan}$.
\begin{corollary} There is an equivalence $\widehat{{\Delta \mathfrak{C}}}_\text{Kan} \rightleftarrows \widehat{{\Delta \mathfrak{C}}}_\text{DHK}$. \end{corollary}
Recall from~\cite[Proposition 2.8]{homotopycc} that there is a cyclic realisation functor $|-|_\mathfrak{C} \colon \widehat{{\Delta \mathfrak{C}}} \to \textbf{Top}^{SO(2)}$ along with its right adjoint $S_\mathfrak{C}(-)$. We can now use the theory of~\cite{homotopycc} to describe the homotopy type of $\widehat{{\Delta \mathfrak{C}}}_\text{Kan}$.
\begin{proposition}[{\cite[Theorem 2.2]{Dwyer1984147}}]\label{ctopweak} There is a model structure on $\textbf{Top}^{SO(2)}$ where a map $f \colon X \to Y$ is a: \begin{itemize} \item Weak equivalence if the underlying map of topological spaces is a weak equivalence in $\textbf{Top}$. \item Fibration if the underlying map of topological spaces is a fibration in $\textbf{Top}$. \item Cofibration if it has the LLP with respect to the trivial fibrations. \end{itemize} \end{proposition}
\begin{proposition}[{\cite[Corollary 4.3]{homotopycc}}]\label{cyclicequiv} There is a Quillen equivalence $$\widehat{{\Delta \mathfrak{C}}}_\text{Kan} \rightleftarrows \textbf{Top}^{SO(2)},$$ with the equivalence furnished by the cyclic realisation and singular functors. \end{proposition}
Using the fact that a cyclic object is the same as a simplicial object together with extra data, we see that a cyclic $\infty$-stack can be viewed as an $\infty$-stack with extra data. In light of Proposition~\ref{cyclicequiv}, we therefore see that objects of $\text{Ho}({\Delta \mathfrak{C}} \textbf{-Pr}_\tau(\mathcal{C}))$ can be viewed as $\infty$-stacks along with an $SO(2)$-action; hence the terminology of \emph{equivariant}.
One important property of the Kan model structure on crossed simplicial groups is that the pushout-product axiom holds; as a consequence, $ {\Delta \mathfrak{G}} \textbf{-Pr}_\tau(\mathcal{C})$ is a closed monoidal model category. We now provide a proof of this claim, adapted from \cite[Lemma 2.2.15]{seteklevmaster}.
\begin{lemma}\label{ismonoidalclosed} For ${\Delta \mathfrak{G}}$ a crossed simplicial group, $\widehat{{\Delta \mathfrak{G}}}_\text{Kan}$ is a monoidal model category. \end{lemma}
\begin{proof} We first show that the pushout-product axiom holds. We need to show that given any pair of cofibrations $f \colon X \to Y$ and $f' \colon X' \to Y'$, their pushout-product $$f \boxtimes f' \colon (X \times Y') \bigsqcup_{X \times X'} (Y \times X')\to Y \times Y'$$ is a cofibration that is trivial whenever $f$ or $f'$ is. The condition amounts to considering the following pushout diagram \begin{equation}\label{cyclicpop} \xymatrix{X \times X' \ar[r]^{f'_\ast} \ar[d]_{f_\ast} & X \times Y' \ar[d] \ar@/^/[ddr]^{f \times \mathrm{id}} \\
Y \times X' \ar[r] \ar@/_/[rrd]_{\mathrm{id} \times f'} & P \ar[dr]|-{f \boxtimes f'} \\ && Y \times Y'} \end{equation} By the universal property of pushouts, we have that $P$ is represented by pairs $(y,x') \in Y \times X'$ and $(x,y') \in X \times Y'$ subject to the relation $(f(x),x') \sim (x,f'(x'))$. We first show that $f_\ast$ and $f'_\ast$ are cofibrations. We already have that both of these maps are monomorphisms; furthermore, if $(y,x')$ is not in the image of $f_\ast$, then $y$ is not in the image of $f$, which gives the normality condition (this shows that $f_\ast$ is a cofibration; $f'_\ast$ follows similarly). Next we show that $f \boxtimes f'$ is a cofibration. It is clearly a monomorphism as both $f$ and $f'$ are monomorphisms. Let $p \in Y \times Y'$ be an element that is not in the image of $f \boxtimes f'$. Then $p$ is represented by an element not in the image of either $f_\ast$ or $f'_\ast$. However, since these maps are cofibrations, $\mathfrak{G}_n$ acts freely on $p$, and $f \boxtimes f'$ is therefore a cofibration.
All that is left to show is the condition about the trivial cofibrations. However, Diagram \ref{cyclicpop} induces a pushout diagram in $\widehat{\Delta}_\text{Kan}$ via the forgetful functor $i^\ast$. As $\widehat{\Delta}_\text{Kan}$ itself is a monoidal model category, we can conclude that the pushout-product is a weak equivalence if $f$ or $f'$ is such.
We now need to show that the unit axiom holds. Let $X$ be a normal object; we need to show that ${\Delta \mathfrak{G}}[0]' \times X \to {\Delta \mathfrak{G}}[0] \times X$ is a weak equivalence, where ${\Delta \mathfrak{G}}[0]'$ is the normalisation of the unit. This is true when $i^\ast({\Delta \mathfrak{G}}[0]' \times X) \to i^\ast({\Delta \mathfrak{G}}[0] \times X)$ is a weak equivalence of simplicial sets, and as right adjoints preserve products, this is equivalent to asking that $i^\ast({\Delta \mathfrak{G}}[0]') \times i^\ast(X) \to i^\ast({\Delta \mathfrak{G}}[0]) \times i^\ast(X)$ is a weak equivalence of simplicial sets. In particular, we only require that $i^\ast({\Delta \mathfrak{G}}[0]') \to i^\ast({\Delta \mathfrak{G}}[0])$ is a weak equivalence. Consider the composition $0 \to {\Delta \mathfrak{G}}[0]' \to {\Delta \mathfrak{G}}[0]$; this is a horn inclusion and therefore a trivial cofibration, in particular a weak equivalence. Consequently, by two-out-of-three, the map $i^\ast({\Delta \mathfrak{G}}[0]') \to i^\ast({\Delta \mathfrak{G}}[0])$ is a weak equivalence as required. Therefore the unit axiom holds. \end{proof}
\subsection{Examples}
\begin{example}[Lifting (1-)stacks to $SO(2)$-equivariant stacks]\leavevmode
This example gives a very brief overview of the general machinery for constructing $SO(2)$-equivariant stacks, taken from~\cite{balchin}. For a groupoid $\mathcal{G}$, one can construct its \emph{cyclic nerve} $N^\mathfrak{C}\mathcal{G}$, which is the cyclic object that in dimension $n$ has diagrams of the form: $$\xymatrix{x_0 \ar[r]^{a_1} & x_1 \ar[r]^{a_2} & \cdots \ar[r]^{a_{n}}& x_n \rlap{ ,}}$$ with the cyclic operator $\tau_{n}$ defined as follows: $$\xymatrixcolsep{3pc}\xymatrix{x_n \ar[r]^{(a_n \cdots a_1)^{-1}} & x_0 \ar[r]^{a_1} & \cdots \ar[r]^-{a_{n-1}} & x_{n-1} \rlap{ .}}$$ If we are given a (1-)stack $\mathcal{X} \colon \mathcal{C} \to \textbf{Grpd}$, we can apply the cyclic nerve component-wise to get a cyclic stack $N^\mathfrak{C}\mathcal{X} \colon \mathcal{C} \to \widehat{\Delta \mathfrak{C}}$. In~\cite[\S 5]{balchin}, such a construction was used to construct the $SO(2)$-equivariant derived stack of local systems of an $SO(2)$-space. Such a construction can be done for a handful of other crossed simplicial groups with the nerves defined in~\cite{surface}. \end{example}
\begin{example}[Equivariant cohomology theories] It is a well-known fact that one can encode cohomology theories as mapping spaces in categories of $\infty$-stacks. We can adjust this mantra to the equivariant setting. Let $A$ be a sheaf of abelian groups. We define the $\mathfrak{G}$-equivariant cohomology of a site $(\mathcal{C}, \tau)$ to be $$H^n_{\mathfrak{G}}(\mathcal{C};A) := \pi_0 \left( \Delta \mathfrak{G} \text{-} \mathbb{R}_\tau \underline{\textup{Hom}}(\ast,K^\mathfrak{G}(A,n)) \right),$$ where $K^\mathfrak{G}(A,n)$ is a $\Delta \mathfrak{G}$-version of the Eilenberg-Mac Lane space. A full description of this for the cyclic case is given in~\cite{balchin2}, where a chain of Quillen equivalences to a Borel-type cohomology theory is given. \end{example}
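As a sanity check on the terminology, assume the comparison with Borel-type $SO(2)$-equivariant cohomology from~\cite{balchin2}. Over the point with constant coefficients, the cyclic theory should then recover the cohomology of the classifying space $BSO(2) \simeq \mathbb{CP}^\infty$:
\begin{align*}
H^n_{\mathfrak{C}}(\ast;\mathbb{Z}) \cong H^n(BSO(2);\mathbb{Z}) \cong \begin{cases} \mathbb{Z} & n \text{ even},\\ 0 & n \text{ odd}, \end{cases}
\end{align*}
so that $H^\ast_{\mathfrak{C}}(\ast;\mathbb{Z})$ is a polynomial ring on a single generator of degree $2$.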
\begin{remark} It seems that for any augmentation category $\mathbb{A}$, we can discuss the theory of $\textup{Ho}(\mathbb{A}_\text{Kan})$-cohomology theory. The obstruction to being able to do this in generality is the construction of the correct Eilenberg-Mac Lane spaces. \end{remark}
\section{Stable Stacks}\label{sec:7}
Our second example will be that of dendroidal sets as introduced in~\cite{dendroidal}. Unfortunately, we do not currently have any sensible examples of stable derived stacks, but nonetheless, we prove that the theory does exist. A comprehensive and readable overview of the theory of dendroidal sets can be found in~\cite{MR2778589}. We will begin by recalling the necessary theory.
\begin{definition} The category $\Omega$ of trees has as objects finite rooted trees.
Any such tree $T$ generates a symmetric coloured operad $\Omega(T)$ whose set of colours is the edge set $E(T)$ of $T$. The morphisms $T \to T'$ in $\Omega$ are the maps of symmetric coloured operads $\Omega(T) \to \Omega(T')$. The category of \emph{dendroidal sets} is the presheaf category $\widehat{\Omega}$. \end{definition}
\begin{proposition} The category $\Omega$ is an augmentation category. \end{proposition}
\begin{proof} We check that $\Omega$ satisfies the conditions (AC1) -- (AC3) appearing in Definition~\ref{augdef}: \begin{enumerate} \item[(AC1)] First we require that $\Omega$ is an EZ-category. This is given in \cite[Example 7.6(c)]{genreedy}, but we will elaborate on some of the details here. The degree function $d \colon \Omega \to \mathbb{N}$ is given by $d(T) = \#\{\text{vertices of }T\}$. Every morphism in $\Omega$ can be decomposed as an automorphism (arising from considering different planar structures) followed by a series of face and degeneracy maps \cite[Lemma 3.1]{dendroidal}. There is a category $\Omega_\text{planar}$ which fixes a planar representation of the trees, and is a strict EZ-category. By the description of the morphisms in $\Omega$, we see that it is a crossed $\Omega_\text{planar}$-group, and by Lemma~\ref{crossisez} it is an EZ-category. The degeneracy and face operators that appear in the morphism structure of $\Omega$ are exactly those we use in the EZ-structure. We also require a monoidal product $\square$ which gives $\Omega$ the structure of a quasi-monoidal EZ-category. In this case $\square = \otimes$, the \emph{Boardman-Vogt tensor product} on dendroidal sets as described in \cite[\S5]{dendroidal}. We will not describe the construction of the tensor product here. \item[(AC2)] Next, we require an inclusion $i \colon \Delta \hookrightarrow \Omega$ which is compatible with the monoidal structure. The inclusion is given by considering the object $[n]$ as the linear tree $L_n$ which has $n$ vertices and $n+1$ edges. The compatibility of the tensor product with the monoidal structure on simplicial sets is given in \cite[Proposition 5.3]{dendroidal}. \item [(AC3)] The proof of (AC3) is highly technical, relying on shuffles of trees. However, the property is proved as the main result of~\cite{erratadendroidal}, and we will not concern ourselves with the details here. \end{enumerate} \end{proof}
Again, we explain the construction of the left adjoint $i_!$. In this case, it is a restriction functor, so the left adjoint is an extension by zero, which sends a simplicial set $X$ to the dendroidal set defined by $$i_!(X)_T = \begin{cases}X_n &\mbox{if } T \simeq i([n]) \\ \emptyset & \mbox{otherwise} \end{cases}.$$
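On representables, this extension by zero behaves as expected: a tree $T$ admits a map of operads $\Omega(T) \to \Omega(L_n)$ only if every vertex of $T$ is unary, i.e.\ only if $T$ is itself linear, so the formula above identifies the image of the standard simplex with the representable dendroidal set of the corresponding linear tree:
$$i_!(\Delta[n]) \cong \Omega[L_n] = \Omega[i([n])].$$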
\begin{corollary}\label{dendroidalcorr}\leavevmode \begin{itemize} \item There is a Quillen model structure on the category of dendroidal sets, denoted $\widehat{\Omega}_\text{Kan}$, where the fibrant objects are the dendroidal Kan complexes, and the cofibrations are the normal monomorphisms. Moreover there is a Quillen adjunction $$i_! \colon \widehat{\Delta}_\text{Kan} \rightleftarrows \widehat{\Omega}_\text{Kan} \colon i^\ast.$$ \item There is a local model structure on the category of dendroidal presheaves, denoted $\Omega \textbf{-Pr}_\tau(\mathcal{C})$ obtained as the left Bousfield localisation of the point-wise Kan model at the class of dendroidal hypercovers. Moreover there is a Quillen adjunction $$i_! : \textbf{sPr}_\tau(\mathcal{C}) \rightleftarrows \Omega \textbf{-Pr}_\tau(\mathcal{C}) : i^\ast.$$ \item The category of $(\Omega,n)$-geometric derived Artin stacks, denoted $\Omega \text{-} \mathcal{G}_n^\text{sm}(\textbf{dAff})$, is given as the homotopy category of derived $(\Omega,n)$-hypergroupoids with respect to the class of trivial $(\Omega,n)$-hypergroupoids. \end{itemize} \end{corollary}
\begin{remark} As $\Omega_\text{planar}$ is also an augmentation category, it is possible to replace all symbols $\Omega$ by $\Omega_\text{planar}$ in Corollary~\ref{dendroidalcorr}. Note that also because $\Omega_\text{planar}$ is a strict EZ-category, this model structure will be of Cisinski type. \end{remark}
\begin{definition} The augmented homotopical algebraic geometry theory arising from $\Omega$ will be referred to as \emph{stable}. For example, the category $\text{Ho}(\Omega \textbf{-Pr}_\tau(\mathcal{C}) )$ will be referred to as the category of \emph{stable $\infty$-stacks}. \end{definition}
\begin{remark} Unfortunately, unlike the simplicial and crossed simplicial cases, the model structure $\widehat{\Omega}_\text{Kan}$ is not a monoidal model category, which is the reason for asking that an augmentation category be only promagmoidal, and not promonoidal. This produced several errors in the existing literature, as it was assumed that the pushout-product axiom holds. This error was spotted and corrected in the errata of~\cite{cisinski}, with details appearing in~\cite{erratadendroidal}. However, it is possible to restrict to the category $\Omega_o$ of those trees which have no bald vertices. In this case, the pushout-product axiom does hold as shown in~\cite[Corollary 2.5]{erratadendroidal}. It is also true that $\Omega_o$ is an augmentation category. However, using this category for the purpose of augmented homotopical algebraic geometry is still unreasonable as there is no known description of the homotopy type of $\Omega_o$. \end{remark}
We finish this section by discussing the homotopy type of $\widehat{\Omega}_\text{Kan}$. This discussion will lead to the choice of the qualifier \emph{stable} to describe the $\Omega$-augmentation. The results here are built on~\cite{basicthesis}, where the model $\widehat{\Omega}_\text{Kan}$ was explicitly described. Note that in the aforementioned reference, the Kan model structure is called the \emph{stable model structure}.
We first highlight a difference to what one may find in the literature. The model structure on $\widehat{\Omega}$ that one usually encounters is the \emph{operadic} model structure (sometimes referred to as the \emph{Cisinski-Moerdijk} structure). The fibrant objects in this model structure are the \emph{inner Kan complexes}. This model structure is Quillen equivalent to a certain model structure on $\textbf{sOperad}$. It is shown in~\cite{basicthesis} that the Kan model structure is a localisation of this model structure at the collection of \emph{outer horns}. One can easily see the duality between this construction and the relationship between $\widehat{\Delta}_\text{Kan}$ and $\widehat{\Delta}_\text{Joyal}$.
Recall from~\cite{MR1361893} that a \textit{spectrum} $\underline{E} = \{E_i\}_{i \in \mathbb{Z}}$ is a sequence of based spaces $E_i$ and based homeomorphisms $E_i \simeq \Omega E_{i+1}$. We say that a spectrum is \textit{connective} if $E_i = 0$ for $i < 0$. We can view a connective spectrum as an infinite loop space via delooping machinery as in~\cite{MR0339152}, or equally as a $\Gamma$-space as in~\cite{MR0353298}. We will denote by $\textbf{ConSp(Top)}$ the category of connective spectra.
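A standard example to keep in mind is the Eilenberg-Mac Lane spectrum of an abelian group $A$: with suitable models one may take
\begin{align*}
HA = \{K(A,i)\}_{i \geq 0}, \qquad K(A,i) \simeq \Omega K(A,i+1),
\end{align*}
which is connective, with $\pi_0 HA = A$ and $\pi_j HA = 0$ for $j \neq 0$.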
\begin{proposition}[{\cite[Proposition 2.2]{MR513569}}] There is a model structure on the category \textbf{ConSp(Top)}, called the \textit{stable model structure}, where a map $f \colon X \to Y$ is a: \begin{itemize} \item Weak equivalence if $f_\ast \colon \pi_\ast X \simeq \pi_\ast Y$, where $\pi_\ast X = \displaystyle{\lim_{\to k}} \pi_{\ast +k} X_k$. \item Fibration if it has the RLP with respect to trivial cofibrations. \item Cofibration if $f_n \colon X_n \to Y_n$ is a cofibration of spaces for all $n \geq 0$. \end{itemize} \end{proposition}
\begin{proposition}[{\cite[Theorem 5.4]{MR3349322}}]\label{conspec} The Kan model structure on dendroidal sets is Quillen equivalent to the stable model structure on connective spectra. \end{proposition}
\subsection{Amalgamations} We now describe a way to combine the examples of ${\Delta \mathfrak{G}}$ and $\Omega$ into a new augmentation category using the theory of categorical pushouts. Usually, taking a categorical pushout leads to an unwieldy result; however, we will show that here it is of the form of an \emph{amalgamation}.
It is known that, for the free product with amalgamation of groups, the original groups embed into the resulting group \cite{MR3069472}. An amalgamation in an arbitrary category is a pushout along two monic maps.
\begin{definition} A category is said to have the \emph{amalgamation property} if amalgamations exist for all diagrams of monic maps $B \hookleftarrow A \hookrightarrow C$. \end{definition}
It can be shown that such diagrams share the analogous embedding property of free products with amalgamations of groups. Although the category of small categories, $\textbf{Cat}$, does not satisfy the amalgamation property for all pushouts, there are some sufficient conditions which ensure the amalgamation property; this will be our next topic of focus.
Recall that for any functors $F_X \colon \mathcal{W} \to \mathcal{X}$ and $F_Y \colon \mathcal{W} \to \mathcal{Y}$ we can form the pushout category $\mathcal{Z}$ which makes the following universal diagram of functors commutative: $$\xymatrix{\mathcal{W} \ar[r]^{F_Y} \ar[d]_{F_X}& \mathcal{Y} \ar[d]^{G_Y} \\ \mathcal{X} \ar[r]_{G_X} & \mathcal{Z} \rlap{ .}}$$ For the rest of this section we will assume that $\mathcal{W}$ is a subcategory of both $\mathcal{X}$ and $\mathcal{Y}$ and that the functors $F_X$ and $F_Y$ are embeddings.
\begin{definition} We will say that the pushout category $\mathcal{Z}$ is an \textit{amalgamation} if the functors $G_X$ and $G_Y$ are embeddings. \end{definition}
Not all pushouts of this form are amalgamations; some assumptions are needed on the embeddings $F_X$ and $F_Y$. This was first considered by Trnkov\'{a} in \cite{amalgamations2}, but a more general condition was proved by MacDonald and Scull in \cite{amalgamations}.
\begin{definition}[{\cite[Definition 3.1]{amalgamations}}] A class of morphisms $\mathcal{M}$ of $\mathcal{X}$ has the \textit{3-for-2 property} if, whenever $f$, $g$ and $h=g \circ f$ are morphisms in $\mathcal{X}$ and any two of $f$, $g$ and $h$ are in $\mathcal{M}$, then the third is also in $\mathcal{M}$. \end{definition}
\begin{definition}[{\cite[Definition 3.2]{amalgamations}}] A functor $F_Y \colon \mathcal{W} \to \mathcal{Y}$ has the \textit{3-for-2 property} if the set of image morphisms $\mathcal{F} := \{ F_Y(b) \mid b \text{ a morphism in } \mathcal{W} \}$ satisfies the 3-for-2 property. \end{definition}
\begin{theorem}[{\cite[Theorem 3.3]{amalgamations}}] If the functors $F_X \colon \mathcal{W} \to \mathcal{X}$ and $F_Y \colon \mathcal{W} \to \mathcal{Y}$ are embeddings which both satisfy the 3-for-2 property, then the induced functors $G_X \colon \mathcal{X} \to \mathcal{Z}$ and $G_Y \colon \mathcal{Y} \to \mathcal{Z}$ are also embeddings. Therefore $\mathcal{Z}$ is an amalgamation. \end{theorem}
Note that any full functor is automatically 3-for-2. The following lemma gives the structure of the category $\mathcal{Z}$.
\begin{lemma}[{\cite[Lemma 10.2]{Fiore08modelstructures}}]\label{paolistruct} Let $\mathcal{Z}$ be an amalgamation of $\mathcal{X} \leftarrow \mathcal{W} \rightarrow \mathcal{Y}$ such that the map $\mathcal{W} \to \mathcal{X}$ is full. Then: \begin{itemize} \item $\text{Ob}(\mathcal{Z}) = \text{Ob}(\mathcal{Y}) \coprod (\text{Ob}(\mathcal{X}) \backslash \text{Ob}(\mathcal{W})).$ \item The morphisms of $\mathcal{Z}$ have two forms:
\begin{enumerate}
\item A morphism $\xymatrix{X_0 \ar[r]^f & X_1}$ with $f \in \text{Mor}(\mathcal{X}) \backslash \text{Mor}(\mathcal{W})$.
\item A path $\xymatrix{X_0 \ar[r]^{f_1} & Y_1 \ar[r]^d & Y_2 \ar[r]^{f_2} & X_2}$ where $d$ is a morphism in $\mathcal{Y}$, and $f_1,f_2 \in \text{Mor}(\mathcal{X}) \backslash \text{Mor}(\mathcal{W}) \cup \{ \text{identities on } \text{Ob}(\mathcal{Z})\}$. If $f_1$ is non-trivial then $Y_1 \in \mathcal{W}$. If $f_2$ is non-trivial, then $Y_2 \in \mathcal{W}$.
\end{enumerate} \end{itemize} \end{lemma}
We shall now discuss when an augmentation category has the amalgamation property. We will then prove that for an amalgable augmentation category $\mathbb{A}$, the pushout ${\Delta \mathfrak{G}} \sqcup_\Delta \mathbb{A}$ is an augmentation category. First, recall that any morphism in $\Delta$ can be decomposed in the form \textbf{SD} where \textbf{S} is a composition of degeneracy maps, and \textbf{D} is a composition of face maps. Furthermore, any morphism in a crossed simplicial group ${\Delta \mathfrak{G}}$, by definition, can be written in the form \textbf{TSD} for \textbf{SD} as above and \textbf{T} a composition of morphisms of some $\mathfrak{G}_n$. We can then use the properties of crossed simplicial groups to rearrange this into the form $\textbf{S}\mathbf{'}\textbf{T}\mathbf{'}\textbf{D}\mathbf{'}$.
\begin{definition} Let $\mathcal{S}$ be a full subcategory of $\mathcal{C}$. We say that $\mathcal{S}$ is a \emph{sieve} in $\mathcal{C}$ if for every morphism $f \colon c \to s$ in $\mathcal{C}$ with $s \in \mathcal{S}$, both $c$ and $f$ are also in $\mathcal{S}$. \end{definition}
\begin{definition} An augmentation category $\mathbb{A}$ is \emph{amalgable} if $\Delta$ is a sieve in $\mathbb{A}$. \end{definition}
\begin{proposition} Let $\mathbb{A}$ be an amalgable augmentation category and ${\Delta \mathfrak{G}}$ a crossed simplicial group. Then the category $\mathbb{A} \mathfrak{G} := \Delta \mathfrak{G} \sqcup_\Delta \mathbb{A}$ is an amalgamation. \end{proposition}
\begin{proof} By assumption, the inclusion $\Delta \to \mathbb{A}$ is a sieve, and consequently full. Therefore we need only check that the map $j \colon \Delta \to {\Delta \mathfrak{G}}$ is 3-for-2. As this map is \emph{not} full, we must check it explicitly.
We are interested in the image set $\mathcal{F} := \{j(b) \mid b \text{ is a morphism in } \Delta\}$. Recall that $\mathcal{F}$ has the 3-for-2 property if, whenever two of $f,g,h=g \circ f$ are in $\mathcal{F}$, the third is also in $\mathcal{F}$. Assume that $f,g \in \mathcal{F}$; then $f=\textbf{S}_f\textbf{D}_f$, $g=\textbf{S}_g\textbf{D}_g$ and $h = g \circ f = \textbf{S}_g\textbf{D}_g \textbf{S}_f\textbf{D}_f \cong \textbf{S}_h \textbf{D}_h$ for some composition of face and degeneracy maps (we can always do the last step due to the relations between face and degeneracy maps in $\Delta$). Therefore $h \in \mathcal{F}$. Now, without loss of generality, assume that $f,h \in \mathcal{F}$; we must show that $g \in \mathcal{F}$. This follows again from the unique decomposition property: if $g \not\in \mathcal{F}$, then $g = \textbf{S}_g\textbf{T}_g\textbf{D}_g$ for some non-trivial composition $\textbf{T}_g$. Therefore $g \circ f = \textbf{S}_g \textbf{T}_g \textbf{D}_g \textbf{S}_f\textbf{D}_f \cong \textbf{S}_h \textbf{T}_h \textbf{D}_h$ for some composition of face, automorphism and degeneracy maps. Therefore $g \circ f = h \not\in \mathcal{F}$, which is a contradiction. \end{proof}
Using Lemma \ref{paolistruct}, and the fact that $\Delta$ is a sieve in $\mathbb{A}$, we can explicitly describe the structure of $\mathbb{A} \mathfrak{G}$.
\begin{lemma}\label{morstruct} The category $\mathbb{A} \mathfrak{G}$ has the following structure: \begin{itemize} \item $\text{Ob}(\mathbb{A} \mathfrak{G}) = \text{Ob}(\mathbb{A})$. \item The morphisms of $\mathbb{A} \mathfrak{G}$ have the forms:
\begin{enumerate}
\item A morphism $\xymatrix{S \ar[r]^f & T}$ with $f \in \text{Mor}(\mathbb{A}) \backslash \text{Mor}(\Delta)$.
\item A path $\xymatrix{[m] \ar[r]^d & [n] \ar[r]^{f_2} & T}$ with
\begin{itemize} \item $d \in \text{Mor}(\Delta \mathfrak{G})$.
\item $f_2 \in \text{Mor}(\mathbb{A}) \backslash \text{Mor}(\Delta) \cup \{ \text{identities on } \text{Ob}(\mathbb{A} \mathfrak{G})\}$.
\end{itemize}
\end{enumerate} \end{itemize} \end{lemma}
\begin{proof}\leavevmode \begin{itemize} \item We had for a general pushout that $\text{Ob}(\mathbb{A} \mathfrak{G}) = \text{Ob}(\Delta \mathfrak{G}) \coprod (\text{Ob}(\mathbb{A}) \backslash \text{Ob}(\Delta))$. Using the fact that $\text{Ob}(\Delta) = \text{Ob}(\Delta \mathfrak{G})$ (as $\Delta$ is wide in $\Delta \mathfrak{G}$), we get the result for the objects of $\mathbb{A} \mathfrak{G}$. \item The morphisms come directly from Lemma \ref{paolistruct}, with the only difference being that we do not have a map $\xymatrix{S \ar[r]^{f_1} & [m]}$ in the path. This is due to the fact that $\Delta$ is a sieve in $\mathbb{A}$, so no such map exists. \end{itemize} \end{proof}
\begin{definition}\label{def:amalg} Let $\mathcal{C}$ be a category and $\mathcal{D}$ a subcategory. We will say that $\mathcal{C}$ is \emph{$\mathcal{D}$-strict} if for all objects $d \in \mathcal{D}$ we have $\text{Aut}_\mathcal{C}(d)$ trivial. \end{definition}
\begin{corollary}\label{amalgcross} Let $\mathbb{A}$ be an amalgable augmentation category, then the pushout $\mathbb{A} \mathfrak{G} := \Delta \mathfrak{G} \sqcup_\Delta \mathbb{A}$ is a crossed $\mathbb{A}$-group. \end{corollary}
\begin{proof} Using Lemma \ref{morstruct}, we have that $\mathbb{A}$ is wide in $\mathbb{A} \mathfrak{G}$. Next, note that because the inclusion is full, $\mathbb{A}$ is $\Delta$-strict. Therefore we have that the morphisms $\xymatrix{[m] \ar[r]^d & [n] \ar[r]^{f_2} & T}$ can be decomposed as an automorphism of $[m]$ (namely $\text{Aut}_{\Delta \mathfrak{G}}([m])$) followed by a map in $\mathbb{A}$. \end{proof}
Finally, we show our desired result, which shows the compatibility between amalgamations and augmentation structures.
\begin{theorem} Let $\mathbb{A}$ be an amalgable augmentation category, then the pushout $\mathbb{A} \mathfrak{G}$ is an augmentation category. \end{theorem}
\begin{proof}\leavevmode \begin{enumerate} \item[(AC1)] We have shown in Corollary \ref{amalgcross} that $\mathbb{A} \mathfrak{G}$ has the structure of a crossed $\mathbb{A}$-group and is therefore an EZ-category. The tensor structure is provided by the tensor structure of $\mathbb{A}$. \item[(AC2)] We have an inclusion $i \colon \Delta \hookrightarrow \mathbb{A} \mathfrak{G}$ coming from the fact that the pushout has the amalgamation property. Moreover this inclusion is compatible with the tensor product by construction. \item[(AC3)] The required pushout-product property holds as we have proved that it does in $\mathbb{A}$ and ${\Delta \mathfrak{G}}$. \end{enumerate} \end{proof}
\begin{example} The category $\Omega$ is amalgable as $\Delta$ is exhibited as a sieve by the inclusion. Therefore, for any crossed simplicial group $\Delta \mathfrak{G}$, we can form the category $\Omega \mathfrak{G}$. The question is then what the homotopy type of $\widehat{\Omega \mathfrak{G}}$ is. For $\widehat{\Omega \mathfrak{C}}$ one could expect the Kan model structure to have the homotopy type of connective spectra with $SO(2)$-action. Note that this category should have some, even if weak, relation to the category $\Xi$ which is used for the theory of higher cyclic operads~\cite{cycoperad}.
\end{example}
\end{document}
\begin{document}
\allowdisplaybreaks
\title{Distribution dependent BSDEs driven by Gaussian processes}
\maketitle
\begin{abstract} In this paper we are concerned with distribution dependent backward stochastic differential equations (DDBSDEs) driven by Gaussian processes. We first show the existence and uniqueness of solutions to this type of equation by formulating a transfer principle that transfers the well-posedness problem to an auxiliary DDBSDE driven by Brownian motion. Then, we establish a comparison theorem under a Lipschitz condition and boundedness of the Lions derivative imposed on the generator. Furthermore, we obtain a new representation for DDBSDEs driven by Gaussian processes, which is new even for equations driven by Brownian motion. This representation enables us to prove a converse comparison theorem. Finally, we derive transportation inequalities and Logarithmic-Sobolev inequalities via the stability of the Wasserstein distance and the relative entropy of measures under the homeomorphism condition. \end{abstract}
AMS Subject Classification: 60H10, 60G15, 60G22
\par\noindent Keywords: Distribution dependent BSDEs; Gaussian processes; comparison theorem; converse comparison theorem; transportation inequality; Logarithmic-Sobolev inequality.
\section{Introduction}
Backward stochastic differential equations (BSDEs) were first introduced in their linear form by Bismut in \cite{Bismut73} to investigate stochastic control problems and their connections with a stochastic version of the Pontryagin maximum principle. Afterwards, BSDEs were formalised in full generality and developed in the seminal work \cite{PP90}. In the last decades, BSDEs have been the subject of growing interest in stochastic analysis, as these equations arise naturally in stochastic control problems in mathematical finance and provide Feynman-Kac type formulas for semi-linear PDEs (see, e.g., \cite{EPQ9705,MY99,PP92,ZJ17}).
On the other hand, distribution dependent stochastic differential equations (DDSDEs), also known as mean-field equations or McKean-Vlasov equations, are It\^{o} equations whose coefficients depend on the law of the solution. As DDSDEs provide a probabilistic representation for the solutions of a class of nonlinear PDEs, a typical example being the propagation of chaos, they are widely used as models in statistical physics and in the study of large scale social interactions within the framework of mean-field games, for which we refer to, e.g., \cite{BT97,CD15,HMC13} and references therein. Furthermore, nonlinear DDBSDEs were first introduced by Buckdahn, Djehiche, Li and Peng in \cite{BDLP09}. Since then, DDBSDEs have received increasing attention and have been investigated in a variety of settings. Let us just mention a few results here. Chassagneux, Crisan and Delarue \cite{CCD15} showed the existence and uniqueness of solutions to fully coupled DDBSDEs; Carmona and Delarue \cite{CD15} studied DDBSDEs via the stochastic maximum principle; Li \cite{Li18} considered the well-posedness problem of DDBSDEs driven by a Brownian motion and an independent Poisson random measure, and provided a probabilistic representation for a class of nonlocal PDEs of mean-field type; Li, Liang and Zhang \cite{LLZ18} obtained a comparison theorem for DDBSDEs.
In this paper, we want to study the following DDBSDEs driven by Gaussian processes \begin{equation}\label{Bsde-In} \left\{ \begin{array}{ll} \text{\rm{d}} Y_t=-f(t,X_t,Y_t,Z_t,\mathscr{L}_{(X_t,Y_t,Z_t)})\text{\rm{d}} V_t+Z_t\text{\rm{d}}^\diamond X_t,\\ Y_T=g(X_T,\mathscr{L}_{X_T}), \end{array} \right. \end{equation} where $X$ is a centered one-dimensional Gaussian process such that $V_t:=\mathrm{Var}X_t, t\in[0,T]$, is a strictly increasing, continuous function with $V(0)=0$ introduced in \cite{Bender14}, $\mathscr{L}_{(X_t,Y_t,Z_t)}$ and $\mathscr{L}_{X_T}$ denote respectively the laws of $(X_t,Y_t,Z_t)$ and $X_T$, and the stochastic integral is the Wick-It\^{o} integral defined by the $S$-transformation and the Wick product (see Section 2.1). Precise assumptions on the generator $f:[0,T]\times\mathbb R\times\mathbb R\times\mathbb R\times\mathscr{P}_\theta(\mathbb R\times\mathbb R\times\mathbb R)\rightarrow\mathbb R$ and the terminal value function $g:\mathbb R\times\mathscr{P}_\theta(\mathbb R)\rightarrow\mathbb R$ will be specified in later sections, where $\mathscr{P}_\theta(\mathbb R^m)$ stands for the totality of probability measures on $\mathbb R^m$ with finite $\theta$-th moment. We would like to mention that the driving noise $X$ in \eqref{Bsde-In} includes fractional Brownian motion $B^H$ with Hurst parameter $H\in(0,1)$ (while $B^{1/2}$ is the standard Brownian motion), and fractional Wiener integral (see Remark \ref{Re(Inte)}).
The main objectives of the present paper are to show the well-posedness and (converse) comparison theorems, and then to establish functional inequalities, including transportation inequalities and Logarithmic-Sobolev inequalities, for \eqref{Bsde-In}. Our strategy is as follows. Based on a new transfer principle that extends \cite[Theorem 3.1]{Bender14} to the distribution dependent setting, we first prove general existence and uniqueness results for \eqref{Bsde-In} (see Theorem \ref{Th1}), which are then applied to the case of a Lipschitz generator $f$. Second, with the help of a formula for the $L$-derivative, we derive a comparison theorem which generalises and improves the corresponding results in the existing literature (see Theorem \ref{Th(com)} and Remark \ref{Re-comp1}). It is worth stressing that, compared with the distribution-free case, here we need to impose an additional condition on $f$ involving its Lions derivative. Moreover, we obtain a converse comparison theorem, which is, roughly speaking, a converse to the comparison theorem above (see Theorem \ref{Th(Conve)} and Remark \ref{Re(Conve)}). To this end, we provide a representation theorem for the generator $f$, which is new even for DDBSDEs driven by Brownian motion and is of independent interest. Finally, by utilising the stability of the Wasserstein distance and the relative entropy of measures under the homeomorphism, we establish several functional inequalities, including transportation inequalities and Logarithmic-Sobolev inequalities (see Theorems \ref{Th(TrIn)} and \ref{Th(LS)}). Here, let us point out that the transportation inequality for the law of the control component $Z$ of \eqref{Bsde-In} stated in Theorem \ref{Th(TrIn)} is of the form \begin{align*}
\mathbb{W}_p(\mathscr{L}_{Z},\mu)\leq C\left(H(\mu|\mathscr{L}_{Z})\right)^{\frac 1 {2p}} \end{align*} with any $p\geq1$. In particular, when $p=2$, this inequality reduces to \begin{align*}
\mathbb{W}_2(\mathscr{L}_{Z},\mu)\leq C\left(H(\mu|\mathscr{L}_{Z})\right)^{\frac 1 {4}}, \end{align*}
which is clearly different from the usual quadratic transportation inequality (also called the Talagrand inequality). On the other hand, as shown in \cite{BT20}, this type of inequality allows one to derive deviation inequalities. However, it does not allow one to obtain other important inequalities such as the Poincar\'{e} inequality. Hence, an interesting problem is whether our results can be further improved to the form $\mathbb{W}_2(\mathscr{L}_{Z},\mu)\leq C\sqrt{H(\mu|\mathscr{L}_{Z})}$. Our techniques are currently not sufficient to give a full answer, since Lemma \ref{FI-Le2} below cannot be applied to the case of $Z$. We leave this topic for future work.
The remainder of the paper is organised as follows. Section 2 presents some basic facts on Gaussian processes and the Lions derivative, and introduces a transfer result which allows us to relate the DDBSDEs concerned to DDBSDEs driven by Brownian motion. In Section 3, we show the existence and uniqueness of a solution to the DDBSDE driven by a Gaussian process. In Section 4, we establish a comparison theorem, and also provide a converse comparison theorem via a new representation theorem. Section 5 is devoted to deriving functional inequalities, including transportation inequalities and Logarithmic-Sobolev inequalities. Section 6 is an appendix in which we prove an auxiliary result needed in Section 5 (cf. Proposition 5.1).
\section{Preliminaries}
\subsection{Wick-It\^{o} integral for Gaussian processes}
In this part, we recall some important definitions and facts concerning the Wick-It\^{o} integral for Gaussian processes. Further detailed discussions can be found in, e.g., \cite[Section 2]{Bender14} and the references therein.
Let $(\Omega,\mathscr{F},(\mathscr{F}_t^X)_{t\in[0,T]},\mathbb{P})$ be a filtered probability space with $(\mathscr{F}_t^X)_{t\in[0,T]}$ the natural completed and right continuous filtration generated by a centered Gaussian process $(X_t)_{0\leq t\leq T}$, whose variance function $V_t:=\mathrm{Var}X_t, t\in[0,T]$, is a strictly increasing and continuous function with $V(0)=0$.
The first chaos associated to $X$ is \begin{align*} \mathfrak{C}_X:=\overline{\mathrm{span}\{X_t:t\in[0,T]\}}, \end{align*} where the closure is taken in $L^2_X:=L^2(\Omega,\mathscr{F}_T^X,\mathbb{P})$. It is obvious that the elements in $\mathfrak{C}_X$ are centered Gaussian variables. We define the map $\mathcal{R}:\mathfrak{C}_X\rightarrow\mathbb R^{[0,T]}$ by \begin{align*} (\mathcal{R}f)(t)=\mathbb E(X_tf). \end{align*} It is readily checked that $\mathcal{R}$ is injective; its image $\mathcal{R}(\mathfrak{C}_X)$ is called the Cameron-Martin space of $X$ and is equipped with the inner product \begin{align*} \langle f,g\rangle_X:=\mathbb E\left[\mathcal{R}^{-1}(f)\mathcal{R}^{-1}(g)\right]. \end{align*} Now, we let $\mathfrak{H}_X$ be the set of all those $\mathfrak{h}\in\mathcal{R}(\mathfrak{C}_X)$ that are absolutely continuous with respect to $\text{\rm{d}} V$ with square integrable density, i.e. \begin{align*} \mathfrak{h}(t)=\int_0^t\dot{\mathfrak{h}}(s)\text{\rm{d}} V_s, \ \ \dot{\mathfrak{h}}\in L^2([0,T],\text{\rm{d}} V). \end{align*} Throughout the paper, we suppose that $\mathfrak{H}_X$ and $\{\dot{\mathfrak{h}}:\mathfrak{h}\in\mathfrak{H}_X\}$ are dense subsets of $\mathcal{R}(\mathfrak{C}_X)$ and $L^2([0,T],\text{\rm{d}} V)$, respectively.
\begin{rem}\label{Re(Inte)} As pointed out in \cite[Theorem 2.2]{Bender14}, the class of Gaussian processes under consideration is large and includes, e.g., fractional Brownian motion $B^H$ with $H\in(0,1)$ and fractional Wiener integrals of the form $\int_0^t\sigma(s)\text{\rm{d}} B^H_s$ with $H\in(1/2,1)$ and a deterministic function $\sigma$ satisfying $c^{-1}\leq\sigma\leq c$ for some $c>0$. \end{rem}
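For instance, for fractional Brownian motion $B^H$ with Hurst parameter $H\in(0,1)$ one has \begin{align*} V_t=\mathrm{Var}B^H_t=t^{2H},\ \ t\in[0,T], \end{align*} which is indeed strictly increasing and continuous with $V(0)=0$; for $H=1/2$ this reduces to $V_t=t$, i.e. the standard Brownian motion case.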
Next, we shall introduce the construction of the Wick-It\^{o} integral with respect to the Gaussian process $X$. Due to \cite[Corollary 3.40]{Janson97}, the random variables of the form \begin{align*} \text{\rm{e}}^{\diamond\mathfrak{h}}:=\exp\left\{\mathcal{R}^{-1}(\mathfrak{h})-\frac 1 2 \mathrm{Var}\mathcal{R}^{-1}(\mathfrak{h})\right\},\ \ \mathfrak{h}\in\mathfrak{H}_X, \end{align*} form a total subset of $L^2_X$, where $\text{\rm{e}}^{\diamond\mathfrak{h}}$ is called a Wick exponential. Then each random variable $\eta\in L^2_X$ is uniquely determined by its $\mathcal{S}$-transform \begin{align*} (\mathcal{S}\eta)(\mathfrak{h}):=\mathbb E(\eta\text{\rm{e}}^{\diamond\mathfrak{h}}), \ \ \mathfrak{h}\in\mathfrak{H}_X. \end{align*} That is, if $\eta,\zeta\in L^2_X$ satisfy $(\mathcal{S}\eta)(\mathfrak{h})=(\mathcal{S}\zeta)(\mathfrak{h})$ for every $\mathfrak{h}\in\mathfrak{H}_X$, then $\eta=\zeta$, $\mathbb{P}$-a.s. In addition, observe that for each $\mathfrak{h}\in\mathfrak{H}_X$, \begin{align*} (\mathcal{S}X_t)(\mathfrak{h})=\mathbb E(X_t\mathcal{R}^{-1}(\mathfrak{h}))=\mathfrak{h}(t)=\int_0^t\dot{\mathfrak{h}}(s)\text{\rm{d}} V_s, \ \ t\in[0,T] \end{align*} is a function of bounded variation and can thus be regarded as the integrator of a Lebesgue-Stieltjes integral, which allows us to introduce the following Wick-It\^{o} integral.
\begin{defn}\label{De-WI} A measurable map $Z:[0,T]\rightarrow L^2_X$ is said to have a Wick-It\^{o} integral with respect to $X$, if for any $\mathfrak{h}\in\mathfrak{H}_X$, \begin{align*} \int_0^T(\mathcal{S}Z_t)(\mathfrak{h})\text{\rm{d}} \mathfrak{h}(t) \end{align*} exists and there is a random variable $\xi\in L^2_X$ such that \begin{align*} (\mathcal{S}\xi)(\mathfrak{h})=\int_0^T(\mathcal{S}Z_t)(\mathfrak{h})\text{\rm{d}} \mathfrak{h}(t). \end{align*} In this case, we denote $\xi$ by $\int_0^TZ_t\text{\rm{d}}^\diamond X_t$ and call it the Wick-It\^{o} integral of $Z$ with respect to $X$. Besides, we often use $\int_a^bZ_t\text{\rm{d}}^\diamond X_t$ to denote $\int_0^T\mathrm{I}_{[a,b]}(t)Z_t\text{\rm{d}}^\diamond X_t$. \end{defn}
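As a basic illustration of the $\mathcal{S}$-transform, note that for Wick exponentials one can compute, for $\mathfrak{g},\mathfrak{h}\in\mathfrak{H}_X$, \begin{align*} (\mathcal{S}\text{\rm{e}}^{\diamond\mathfrak{g}})(\mathfrak{h})=\mathbb E\left(\text{\rm{e}}^{\diamond\mathfrak{g}}\text{\rm{e}}^{\diamond\mathfrak{h}}\right)=\exp\left\{\mathbb E\left[\mathcal{R}^{-1}(\mathfrak{g})\mathcal{R}^{-1}(\mathfrak{h})\right]\right\}=\text{\rm{e}}^{\langle\mathfrak{g},\mathfrak{h}\rangle_X}, \end{align*} which follows from the identity $\mathbb E\,\text{\rm{e}}^{\xi}=\text{\rm{e}}^{\frac 1 2\mathrm{Var}\xi}$ for centered Gaussian $\xi$, applied to $\xi=\mathcal{R}^{-1}(\mathfrak{g})+\mathcal{R}^{-1}(\mathfrak{h})$.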
\begin{rem}\label{Re(WI)} (i) Suppose that $Z:[0,T]\rightarrow L^2_X$ is continuous and $\{\pi^n\}_{n\geq1}$ is a sequence of partitions of $[0,T]$. Then, we have \begin{align}\label{1Re(WI)}
\int_0^TZ_t\text{\rm{d}}^\diamond X_t=\lim_{|\pi^n|\rightarrow 0}\sum_{t_i\in\pi^n}Z_{t_i}\diamond(X_{t_{i+1}}-X_{t_i}), \end{align} provided that the above limit exists in $L^2_X$. Here, $Z_{t_i}\diamond(X_{t_{i+1}}-X_{t_i})$ is a Wick product defined as follows: \begin{align*} (\mathcal{S}(Z_{t_i}\diamond(X_{t_{i+1}}-X_{t_i})))(\mathfrak{h})=(\mathcal{S}Z_{t_i})(\mathfrak{h})(\mathcal{S}(X_{t_{i+1}}-X_{t_i}))(\mathfrak{h}),\ \ \mathfrak{h}\in\mathfrak{H}_X. \end{align*} In view of \eqref{1Re(WI)}, one can see that the Wick-It\^{o} integral can be interpreted as a limit of Riemann sums in terms of the Wick product.
(ii) If $X=B^{1/2}$, i.e. $X$ is a Brownian motion, then we obtain that $V_t=t$ and \begin{align*} \mathcal{R}(\mathfrak{C}_{B^{1/2}})=\mathfrak{H}_{B^{1/2}}=\left\{\mathfrak{h}:\mathfrak{h}(t)=\int_0^t\dot{\mathfrak{h}}(s)\text{\rm{d}} s, \ \ \dot{\mathfrak{h}}\in L^2([0,T],\text{\rm{d}} t)\right\}. \end{align*} Suppose that $Z$ is progressively measurable satisfying $\mathbb E\int_0^TZ_t^2\text{\rm{d}} t<\infty$. Then, it is easy to verify that the Wick-It\^{o} integral $\int_0^TZ_t\text{\rm{d}}^\diamond B^{1/2}_t$ coincides with the usual It\^{o} integral $\int_0^TZ_t\text{\rm{d}} B^{1/2}_t$, and then \begin{align}\label{2Re(WI)} \left(\mathcal{S}\int_0^TZ_t\text{\rm{d}} B^{1/2}_t\right)(\mathfrak{h})=\int_0^T(\mathcal{S}Z_t)(\mathfrak{h})\text{\rm{d}} \mathfrak{h}(t),\ \ \mathfrak{h}\in\mathcal{R}(\mathfrak{C}_{B^{1/2}})=\mathfrak{H}_{B^{1/2}}. \end{align}
More details can be found in \cite[Remark 2.4]{Bender14}. \end{rem}
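As a simple worked example of Definition \ref{De-WI}, take $Z_t=X_t$. Since $(\mathcal{S}X_t)(\mathfrak{h})=\mathfrak{h}(t)$, we have for every $\mathfrak{h}\in\mathfrak{H}_X$, \begin{align*} \int_0^T(\mathcal{S}X_t)(\mathfrak{h})\text{\rm{d}}\mathfrak{h}(t)=\int_0^T\mathfrak{h}(t)\text{\rm{d}}\mathfrak{h}(t)=\frac 1 2\mathfrak{h}^2(T), \end{align*} while a direct Gaussian computation gives $\big(\mathcal{S}\frac 1 2(X_T^2-V_T)\big)(\mathfrak{h})=\frac 1 2\mathfrak{h}^2(T)$. Hence \begin{align*} \int_0^TX_t\text{\rm{d}}^\diamond X_t=\frac 1 2\left(X_T^2-V_T\right), \end{align*} which for $X=B^{1/2}$ (so that $V_T=T$) recovers the classical It\^{o} formula $\int_0^TB^{1/2}_t\text{\rm{d}} B^{1/2}_t=\frac 1 2\big((B^{1/2}_T)^2-T\big)$.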
\subsection{The transfer principle}
This part is devoted to establishing a transfer principle, which connects distribution dependent equations driven by Gaussian processes with those driven by Brownian motion, and which will play a crucial role in the proofs of our main results.
In order to state the principle, we let $U$ be the inverse of $V$ defined as \begin{align*} U_s:=\inf\{r\geq0: V_r\geq s\}, \ \ s\in[0,V_T], \end{align*} and introduce an auxiliary Brownian motion $(\tilde{W}_t)_{t\in[0,V_T]}$ on a filtered probability space $(\tilde\Omega,\tilde\mathscr{F},(\tilde\mathscr{F}_t^{\tilde W})_{t\in[0,V_T]},\tilde\mathbb{P})$, where $(\tilde\mathscr{F}_t^{\tilde W})_{t\in[0,V_T]}$ is the filtration generated by $\tilde{W}$. Similar to Section 2.1, we can define the $\mathcal{S}$-transform on this auxiliary probability space as follows: for each random variable $\tilde\eta\in L^2(\tilde\Omega,\tilde\mathscr{F},\tilde\mathbb{P})$, \begin{align*} (\widetilde{\mathcal{S}}\tilde\eta)(\mathfrak{h}):=\tilde\mathbb E\left(\tilde\eta\exp\left\{\int_0^{V_T}\dot{\mathfrak{h}}(s)\text{\rm{d}}\tilde{W}_s-\frac 1 2\int_0^{V_T}\dot{\mathfrak{h}}^2(s)\text{\rm{d}} s\right\}\right), \ \ \mathfrak{h}\in\mathcal{R}(\mathfrak{C}_{\tilde{W}}). \end{align*} Here, we recall that owing to Remark \ref{Re(WI)} (ii), $\mathcal{R}(\mathfrak{C}_{\tilde{W}})$ is of the form \begin{align*} \mathcal{R}(\mathfrak{C}_{\tilde{W}})=\mathfrak{H}_{\tilde{W}}=\left\{\mathfrak{h}:\mathfrak{h}(t)=\int_0^t\dot{\mathfrak{h}}(s)\text{\rm{d}} s, \ \ \dot{\mathfrak{h}}\in L^2([0,V_T],\text{\rm{d}} t)\right\}. \end{align*}
Now, we have the following transfer principle, which is a distribution dependent version of \cite[Theorem 3.1]{Bender14}.
\begin{prp}\label{Pr1} Assume that $\phi:\mathbb R\times\mathscr{P}_2(\mathbb R)\rightarrow\mathbb R, b:[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R^m)\rightarrow\mathbb R, \vartheta:[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R)\rightarrow\mathbb R^m$ and $\sigma:[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R)\rightarrow\mathbb R$ are measurable functions satisfying \begin{align}\label{Pr1-0} \mathbb E\left[\phi^2(X_t,\mathscr{L}_{X_t})+\int_0^t(b^2(s,X_s,\mathscr{L}_{\vartheta(s,X_s,\mathscr{L}_{X_s})})+\sigma^2(s,X_s,\mathscr{L}_{X_s}))\text{\rm{d}} V_s\right]<\infty \end{align} for some $t\in[0,T]$, and \begin{align}\label{Pr1-1} \phi(\tilde{W}_{V_t},\mathscr{L}_{\tilde{W}_{V_t}})=\int_0^{V_t}b(U_s,\tilde{W}_s,\mathscr{L}_{\vartheta(U_s,\tilde{W}_s,\mathscr{L}_{\tilde{W}_s})})\text{\rm{d}} s+\int_0^{V_t}\sigma(U_s,\tilde{W}_s,\mathscr{L}_{\tilde{W}_s})\text{\rm{d}}\tilde{W}_s, \ \ \widetilde{\mathbb{P}}\textit{-}a.s. \end{align} Then, $\int_0^t\sigma(s,X_s,\mathscr{L}_{X_s})\text{\rm{d}}^\diamond X_s$ is well-defined and there holds in $L^2(\Omega,\mathscr{F}_T^X,\mathbb{P})$ \begin{align}\label{Pr1-2} \phi(X_t,\mathscr{L}_{X_t})=\int_0^tb(s,X_s,\mathscr{L}_{\vartheta(s,X_s,\mathscr{L}_{X_s})})\text{\rm{d}} V_s+\int_0^t\sigma(s,X_s,\mathscr{L}_{X_s})\text{\rm{d}}^\diamond X_s. \end{align} \end{prp}
Before proving Proposition \ref{Pr1}, we first give a useful lemma, whose proof is identical to that of \cite[Lemma 3.2]{Bender14} and is therefore omitted here.
\begin{lem}\label{Le1} Assume that $\vartheta:[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R)\rightarrow\mathbb R^m$ and $\psi:[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R^m)\rightarrow\mathbb R$ are two measurable functions such that $\mathbb E\psi^2(t,X_t,\mathscr{L}_{\vartheta(t,X_t,\mathscr{L}_{X_t})})<\infty$ with some $t\in[0,T]$. Then for any $\hbar\in\mathfrak{H}_X$, \begin{align}\label{1Le1} (\mathcal{S}\psi(t,X_t,\mathscr{L}_{\vartheta(t,X_t,\mathscr{L}_{X_t})}))(\hbar)=(\widetilde{\mathcal{S}}\psi(t,\widetilde{W}_{V_t},\mathscr{L}_{\vartheta(t,\widetilde{W}_{V_t},\mathscr{L}_{\widetilde{W}_{V_t}})}))(\hbar\circ U). \end{align} \end{lem} Let us stress that if $\hbar\in\mathfrak{H}_X$, then $\hbar\circ U$ belongs to $\mathcal{R}(\mathfrak{C}_{\tilde{W}})$, and thus the right-hand side of \eqref{1Le1} is well-defined. Indeed, observe that \begin{align*} (\hbar\circ U)(t)=\int_0^{U_t}\dot{\hbar}(s)\text{\rm{d}} V_s=\int_0^t(\dot{\hbar}\circ U)(s)\text{\rm{d}} s, \ \ t\in[0,V_T] \end{align*} and \begin{align*} \int_0^{V_T}(\dot{\hbar}\circ U)^2(s)\text{\rm{d}} s=\int_0^T\dot{\hbar}^2(s)\text{\rm{d}} V_s<\infty. \end{align*}
\emph{Proof of Proposition \ref{Pr1}.} According to the linearity of $\mathcal{S}$, Lemma \ref{Le1} and the change of variables, we have for each $\hbar\in\mathfrak{H}_X$, \begin{align}\label{Pf(Pr1)-1} &\left(\mathcal{S}\left(\phi(X_t,\mathscr{L}_{X_t})-\int_0^tb(s,X_s,\mathscr{L}_{\vartheta(s,X_s,\mathscr{L}_{X_s})})\text{\rm{d}} V_s\right)\right)(\hbar)\cr &=(\mathcal{S}\phi(X_t,\mathscr{L}_{X_t}))(\hbar)-\int_0^t(\mathcal{S}b(s,X_s,\mathscr{L}_{\vartheta(s,X_s,\mathscr{L}_{X_s})}))(\hbar)\text{\rm{d}} V_s\cr &=(\widetilde{\mathcal{S}}\phi(\widetilde{W}_{V_t},\mathscr{L}_{\widetilde{W}_{V_t}}))(\hbar\circ U)- \int_0^t(\widetilde{\mathcal{S}}b(s,\widetilde{W}_{V_s},\mathscr{L}_{\vartheta(s,\widetilde{W}_{V_s},\mathscr{L}_{\widetilde{W}_{V_s}})}))(\hbar\circ U)\text{\rm{d}} V_s\cr &=(\widetilde{\mathcal{S}}\phi(\widetilde{W}_{V_t},\mathscr{L}_{\widetilde{W}_{V_t}}))(\hbar\circ U)- \int_0^{V_t}(\widetilde{\mathcal{S}}b(U_r,\widetilde{W}_{r},\mathscr{L}_{\vartheta(U_r,\widetilde{W}_{r},\mathscr{L}_{\widetilde{W}_{r}})}))(\hbar\circ U)\text{\rm{d}} r\cr &=\left(\widetilde{\mathcal{S}}\left(\phi(\widetilde{W}_{V_t},\mathscr{L}_{\widetilde{W}_{V_t}})- \int_0^{V_t}b(U_r,\widetilde{W}_{r},\mathscr{L}_{\vartheta(U_r,\widetilde{W}_{r},\mathscr{L}_{\widetilde{W}_{r}})})\text{\rm{d}} r\right)\right)(\hbar\circ U)\cr &=\left(\widetilde{\mathcal{S}}\int_0^{V_t}\sigma(U_r,\tilde{W}_r,\mathscr{L}_{\tilde{W}_r})\text{\rm{d}}\tilde{W}_r\right)(\hbar\circ U), \end{align} where the last equality is due to \eqref{Pr1-1}.\\ Since the classical It\^{o} integral coincides with the Wick-It\^{o} integral (see Remark \ref{Re(WI)} (ii)) and $\hbar\circ U$ belongs to $\mathfrak{H}_{\tilde W}=\mathcal{R}(\mathfrak{C}_{\tilde W})$, we get from \eqref{2Re(WI)} \begin{align*} \left(\widetilde{\mathcal{S}}\int_0^{V_t}\sigma(U_r,\tilde{W}_r,\mathscr{L}_{\tilde{W}_r})\text{\rm{d}}\tilde{W}_r\right)(\hbar\circ U) 
=\int_0^{V_t}\left(\widetilde{\mathcal{S}}\sigma(U_r,\tilde{W}_r,\mathscr{L}_{\tilde{W}_r})\right)(\hbar\circ U)\text{\rm{d}}(\hbar\circ U)(r). \end{align*} Plugging this into \eqref{Pf(Pr1)-1} and using the change of variables and Lemma \ref{Le1} again, we obtain \begin{align*} &\left(\mathcal{S}\left(\phi(X_t,\mathscr{L}_{X_t})-\int_0^tb(s,X_s,\mathscr{L}_{\vartheta(s,X_s,\mathscr{L}_{X_s})})\text{\rm{d}} V_s\right)\right)(\hbar)\cr &=\int_0^{V_t}\left(\widetilde{\mathcal{S}}\sigma(U_r,\tilde{W}_r,\mathscr{L}_{\tilde{W}_r})\right)(\hbar\circ U)\text{\rm{d}}(\hbar\circ U)(r)\cr &=\int_0^t\left(\widetilde{\mathcal{S}}\sigma(r,\tilde{W}_{V_r},\mathscr{L}_{\tilde{W}_{V_r}})\right)(\hbar\circ U)\text{\rm{d}}\hbar(r)\cr &=\int_0^t\left(\mathcal{S}\sigma(r,X_r,\mathscr{L}_{X_r})\right)(\hbar)\text{\rm{d}}\hbar(r), \end{align*} which yields that $\int_0^t\sigma(s,X_s,\mathscr{L}_{X_s})\text{\rm{d}}^\diamond X_s$ is well-defined and moreover the relation \eqref{Pr1-2} holds due to Definition \ref{De-WI}. The proof is now complete. \qed
\subsection{The Lions derivative}
For later use, we state some basic facts about the Lions derivative.
For any $\theta\in[1,\infty)$, $\mathscr{P}_\theta(\mathbb R^d)$ denotes the set of $\theta$-integrable probability measures on $\mathbb R^d$, and the $L^\theta$-Wasserstein distance on $\mathscr{P}_\theta(\mathbb R^d)$ is defined as follows: \begin{align*}
\mathbb{W}_\theta(\mu,\nu):=\inf_{\pi\in\mathscr {C}(\mu,\nu)}\left(\int_{\mathbb R^d\times\mathbb R^d}|x-y|^\theta\pi(\text{\rm{d}} x, \text{\rm{d}} y)\right)^\frac 1 \theta,\ \ \mu,\nu\in\mathscr{P}_\theta(\mathbb R^d), \end{align*}
where $\mathscr {C}(\mu,\nu)$ stands for the set of all probability measures on $\mathbb R^d\times\mathbb R^d$ with marginals $\mu$ and $\nu$. It is well known that $(\mathscr{P}_\theta(\mathbb R^d),\mathbb{W}_\theta)$ is a Polish space, usually referred to as the $\theta$-Wasserstein space on $\mathbb R^d$. We use $\langle\cdot,\cdot\rangle$ for the Euclidean inner product, and $\|\cdot\|_{L^2_\mu}$ for the $L^2(\mathbb R^d\rightarrow\mathbb R^d,\mu)$ norm. Let $\mathscr{L}_X$ be the distribution of random variable $X$.
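To illustrate, for Dirac measures the only coupling of $\delta_x$ and $\delta_y$ is $\delta_{(x,y)}$, so that $\mathbb{W}_\theta(\delta_x,\delta_y)=|x-y|$ for every $\theta\in[1,\infty)$; moreover, for one-dimensional Gaussian measures it is classical that \begin{align*} \mathbb{W}_2\big(N(m_1,\sigma_1^2),N(m_2,\sigma_2^2)\big)^2=(m_1-m_2)^2+(\sigma_1-\sigma_2)^2. \end{align*}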
\begin{defn} Let $f:\mathscr{P}_2(\mathbb R^d)\rightarrow\mathbb R$. \begin{enumerate} \item[(1)] $f$ is called $L$-differentiable at $\mu\in\mathscr{P}_2(\mathbb R^d)$, if the functional \begin{align*} L^2(\mathbb R^d\rightarrow\mathbb R^d,\mu)\ni\phi\mapsto f(\mu\circ(\mathrm{Id}+\phi)^{-1}) \end{align*} is Fr\'{e}chet differentiable at $0\in L^2(\mathbb R^d\rightarrow\mathbb R^d,\mu)$. That is, there exists a unique $\gamma\in L^2(\mathbb R^d\rightarrow\mathbb R^d,\mu)$ such that \begin{align*}
\lim_{\|\phi\|_{L^2_\mu}\rightarrow 0}\frac{f(\mu\circ(\mathrm{Id}+\phi)^{-1})-f(\mu)-\mu(\langle\gamma,\phi\rangle)}{\|\phi\|_{L^2_\mu}}=0. \end{align*} In this case, $\gamma$ is called the $L$-derivative of $f$ at $\mu$ and is denoted by $D^Lf(\mu)$.
\item[(2)] $f$ is called $L$-differentiable on $\mathscr{P}_2(\mathbb R^d)$, if the $L$-derivative $D^Lf(\mu)$ exists for all $\mu\in\mathscr{P}_2(\mathbb R^d)$. Furthermore, if for every $\mu\in\mathscr{P}_2(\mathbb R^d)$ there exists a $\mu$-version $D^Lf(\mu)(\cdot)$ such that $D^Lf(\mu)(x)$ is jointly continuous in $(\mu,x)\in\mathscr{P}_2(\mathbb R^d)\times\mathbb R^d$, we write $f\in C^{(1,0)}(\mathscr{P}_2(\mathbb R^d))$.
\end{enumerate} \end{defn}
In addition, according to \cite[Theorem 6.5]{Cardaliaguet13} and \cite[Proposition 3.1]{RW}, we get the following useful formula for the $L$-derivative.
\begin{lem}\label{FoLD} Let $(\Omega,\mathscr{F},\mathbb{P})$ be an atomless probability space and $\xi,\eta\in L^2(\Omega\rightarrow\mathbb R^d,\mathbb{P})$. If $f\in C^{(1,0)}(\mathscr{P}_2(\mathbb R^d))$, then \begin{align*} \lim_{\varepsilon\downarrow 0}\frac {f(\mathscr{L}_{\xi+\varepsilon\eta})-f(\mathscr{L}_\xi)} \varepsilon=\mathbb E\langle D^Lf(\mathscr{L}_\xi)(\xi),\eta\rangle. \end{align*} \end{lem}
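As a simple example, let $f(\mu)=\mu(|\cdot|^2)=\int_{\mathbb R^d}|x|^2\mu(\text{\rm{d}} x)$. Then for every $\phi\in L^2(\mathbb R^d\rightarrow\mathbb R^d,\mu)$, \begin{align*} f(\mu\circ(\mathrm{Id}+\phi)^{-1})-f(\mu)=2\mu(\langle\mathrm{Id},\phi\rangle)+\mu(|\phi|^2), \end{align*} so that $D^Lf(\mu)(x)=2x$ and $f\in C^{(1,0)}(\mathscr{P}_2(\mathbb R^d))$. In this case, Lemma \ref{FoLD} reduces to the elementary identity \begin{align*} \lim_{\varepsilon\downarrow 0}\frac{\mathbb E|\xi+\varepsilon\eta|^2-\mathbb E|\xi|^2}{\varepsilon}=2\mathbb E\langle\xi,\eta\rangle=\mathbb E\langle D^Lf(\mathscr{L}_\xi)(\xi),\eta\rangle. \end{align*}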
\section{Well-posedness of DDBSDEs driven by Gaussian processes}
In this section, we consider the following DDBSDE driven by a Gaussian process: \begin{equation}\label{Bsde} \left\{ \begin{array}{ll} \text{\rm{d}} Y_t=-f(t,X_t,Y_t,Z_t,\mathscr{L}_{(X_t,Y_t,Z_t)})\text{\rm{d}} V_t+Z_t\text{\rm{d}}^\diamond X_t,\\ Y_T=g(X_T,\mathscr{L}_{X_T}), \end{array} \right. \end{equation} where $f:[0,T]\times\mathbb R\times\mathbb R\times\mathbb R\times\mathscr{P}_2(\mathbb R\times\mathbb R\times\mathbb R)\rightarrow\mathbb R$ and $g:\mathbb R\times\mathscr{P}_2(\mathbb R)\rightarrow\mathbb R$ are measurable functions. Now, let $\Upsilon$ be the space of pairs of $(\mathscr{F}_t^X)_{t\in[0,T]}$-adapted processes $(Y,Z)$ on $[0,T]$ satisfying \begin{align*}
\mathbb E\left(\sup\limits_{0\leq t\leq T}|Y_t|^2+\int_0^T|Z_t|^2\text{\rm{d}} V_t\right)<\infty \end{align*} and \begin{align*} Y_t=u(t,X_t,\mathscr{L}_{X_t}), \ \ \mathbb{P}{\textit-}a.s., \ t\in[0,T],\ \ Z_t=v(t,X_t,\mathscr{L}_{X_t}), \ \text{\rm{d}}\mathbb{P}\otimes\text{\rm{d}} V{\textit-}a.e. \end{align*} with two deterministic functions $u,v:[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R)\rightarrow\mathbb R$.
\begin{defn}\label{De-sol} (1) A pair of stochastic processes $(Y,Z)=(Y_t,Z_t)_{0\leq t\leq T}$ is called a solution to \eqref{Bsde}, if $(Y,Z)\in\Upsilon$ satisfies \begin{align*} \mathbb E\left[g^2(X_T,\mathscr{L}_{X_T})+\int_0^Tf^2(s,X_s,Y_s,Z_s,\mathscr{L}_{(X_s,Y_s,Z_s)})\text{\rm{d}} V_s\right]<\infty, \end{align*} the Wick-It\^{o} integral $\int_0^tZ_s\text{\rm{d}}^\diamond X_s$ exists for any $t\in[0,T]$ and $\mathbb{P}$-a.s. \begin{align*} Y_t=g(X_T,\mathscr{L}_{X_T})+\int_t^Tf(s,X_s,Y_s,Z_s,\mathscr{L}_{(X_s,Y_s,Z_s)})\text{\rm{d}} V_s-\int_t^TZ_s\text{\rm{d}}^\diamond X_s,\ \ t\in[0,T]. \end{align*}
(2) Uniqueness is said to hold for \eqref{Bsde} if, for any two solutions $(Y^i,Z^i)\in\Upsilon, i=1,2$, \begin{align*}
\mathbb E\left(\sup\limits_{0\leq t\leq T}|Y^1_t-Y^2_t|^2+\int_0^T|Z^1_t-Z^2_t|^2\text{\rm{d}} V_t\right)=0. \end{align*}
\end{defn}
In order to study \eqref{Bsde}, we shall introduce the following auxiliary BSDE \begin{equation}\label{Aueq} \left\{ \begin{array}{ll} \text{\rm{d}}\widetilde{Y}_t=-f(U_t,\widetilde{W}_t,\widetilde{Y}_t,\widetilde{Z}_t,\mathscr{L}_{(\widetilde{W}_t,\widetilde{Y}_t,\widetilde{Z}_t)})\text{\rm{d}} t+\widetilde{Z}_t\text{\rm{d}}\widetilde{W}_t,\\ \widetilde{Y}_{V_T}=g(\widetilde{W}_{V_T},\mathscr{L}_{\widetilde{W}_{V_T}}), \end{array} \right. \end{equation} where we recall that $\{\widetilde{W}_t\}_{t\in[0,V_T]}$ is the standard Brownian motion introduced in Section 2.2. Let $\widetilde{\Upsilon}$ be the space of pairs of $(\mathscr{F}_t^{\tilde W})_{t\in[0,V_T]}$-adapted processes $(\tilde Y,\tilde Z)$ on $[0,V_T]$ satisfying \begin{align*}
\tilde\mathbb E\left(\sup\limits_{0\leq t\leq V_T}|\tilde Y_t|^2+\int_0^{V_T}|\tilde Z_t|^2\text{\rm{d}} t\right)<\infty \end{align*} and \begin{align*} \tilde Y_t=\tilde u(t,\tilde W_t,\mathscr{L}_{\tilde W_t}), \ \tilde\mathbb{P}{\textit-}a.s., \ t\in[0,V_T], \ \ \tilde Z_t=\tilde v(t,\tilde W_t,\mathscr{L}_{\tilde W_t}), \ \text{\rm{d}}\tilde\mathbb{P}\otimes\text{\rm{d}} t{\textit-}a.e. \end{align*} with two deterministic functions $\tilde u,\tilde v:[0,V_T]\times\mathbb R\times\mathscr{P}_2(\mathbb R)\rightarrow\mathbb R$. Similar to Definition \ref{De-sol}, we can give the notion of a solution $(\tilde Y,\tilde Z)=(\tilde Y_t,\tilde Z_t)_{0\leq t\leq V_T}$ for \eqref{Aueq} in the space $\widetilde{\Upsilon}$.
\begin{thm}\label{Th1} Suppose that $\mathbb E [g^2(X_T,\mathscr{L}_{X_T})]<\infty$ and \eqref{Aueq} has a unique solution $(\widetilde{Y},\widetilde{Z})\in\widetilde{\Upsilon}$. Then \eqref{Bsde} has a unique solution $(Y,Z)\in\Upsilon$. More precisely, if \begin{align*} \widetilde{Y}_t=\widetilde{u}(t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t}), \ \ \widetilde{Z}_t=\widetilde{v}(t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t}), \ \ t\in[0,V_T] \end{align*} is the unique solution of \eqref{Aueq}, then \begin{align*} Y_t=u(t,X_t,\mathscr{L}_{X_t}), \ \ Z_t=v(t,X_t,\mathscr{L}_{X_t}), \ \ t\in[0,T], \end{align*} is the unique solution of \eqref{Bsde}, where \begin{align*} u(t,x,\mu)=\widetilde{u}(V_t,x,\mu),\ \ v(t,x,\mu)=\widetilde{v}(V_t,x,\mu),\ \ (t,x,\mu)\in[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R). \end{align*} \end{thm}
\begin{proof} The proof is divided into two steps.
\textit{Step 1. Existence.} Since $(\widetilde{Y},\widetilde{Z})\in\widetilde{\Upsilon}$ is a solution to \eqref{Aueq}, we let $$\widetilde{Y}_t=\widetilde{u}(t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t}),\ \tilde\mathbb{P}{\textit-}a.s., \ t\in[0,V_T], \ \ \widetilde{Z}_t=\widetilde{v}(t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t}),\ \text{\rm{d}}\tilde\mathbb{P}\otimes\text{\rm{d}} t{\textit-}a.e., $$ and then we have for any $t\in[0,T]$, \begin{align*} &\widetilde{u}(V_t,\widetilde{W}_{V_t},\mathscr{L}_{\widetilde{W}_{V_t}})-\widetilde{u}(0,0,\delta_0)\cr &=-\int_0^{V_t}f(U_s,\widetilde{W}_s,\widetilde{u}(s,\widetilde{W}_{s},\mathscr{L}_{\widetilde{W}_{s}}),\widetilde{v}(s,\widetilde{W}_{s},\mathscr{L}_{\widetilde{W}_{s}}), \mathscr{L}_{(\widetilde{W}_s,\widetilde{u}(s,\widetilde{W}_{s},\mathscr{L}_{\widetilde{W}_{s}}),\widetilde{v}(s,\widetilde{W}_{s},\mathscr{L}_{\widetilde{W}_{s}}))})\text{\rm{d}} s\cr &\quad+\int_0^{V_t}\widetilde{v}(s,\widetilde{W}_{s},\mathscr{L}_{\widetilde{W}_{s}})\text{\rm{d}}\widetilde{W}_s, \end{align*} where $\delta_0$ is the Dirac measure.\\ Because of the definition of $\widetilde{\Upsilon}$, it is readily checked that \eqref{Pr1-0} with $(\phi,b,\sigma)$ replaced by $(\widetilde{u}(V_t,\cdot,\cdot),f,\widetilde{v})$ holds. So, by Proposition \ref{Pr1} we derive for any $t\in[0,T]$, \begin{align*} &\widetilde{u}(V_t,X_t,\mathscr{L}_{X_t})-\widetilde{u}(0,0,\delta_0)\cr &=-\int_0^tf(s,X_s,\widetilde{u}(V_s,X_{s},\mathscr{L}_{X_s}),\widetilde{v}(V_s,X_s,\mathscr{L}_{X_s}), \mathscr{L}_{(X_s,\widetilde{u}(V_s,X_{s},\mathscr{L}_{X_{s}}),\widetilde{v}(V_s,X_{s},\mathscr{L}_{X_{s}}))})\text{\rm{d}} V_s\cr &\quad+\int_0^t\widetilde{v}(V_s,X_s,\mathscr{L}_{X_s})\text{\rm{d}}^\diamond X_s. \end{align*} Let $u(t,x,\mu)=\widetilde{u}(V_t,x,\mu), v(t,x,\mu)=\widetilde{v}(V_t,x,\mu),(t,x,\mu)\in[0,T]\times\mathbb R\times\mathscr{P}_2(\mathbb R)$. 
Then $(Y_t,Z_t):=(u(t,X_t,\mathscr{L}_{X_t}),v(t,X_t,\mathscr{L}_{X_t}))$ solves the following BSDE: \begin{align}\label{Pf(Th1)-1} Y_t=Y_T+\int_t^Tf(s,X_s,Y_s,Z_s,\mathscr{L}_{(X_s,Y_s,Z_s)})\text{\rm{d}} V_s-\int_t^TZ_s\text{\rm{d}}^\diamond X_s. \end{align} On the other hand, noting that $g(\widetilde{W}_{V_T},\mathscr{L}_{\widetilde{W}_{V_T}})=\widetilde{u}(V_T,\widetilde{W}_{V_T},\mathscr{L}_{\widetilde{W}_{V_T}}),\ \tilde\mathbb{P}{\textit-}a.s.$ and that $\widetilde{W}_{V_T}$ has a Gaussian law, we get $$g(x,\mathscr{L}_{\widetilde{W}_{V_T}})=\widetilde{u}(V_T,x,\mathscr{L}_{\widetilde{W}_{V_T}}),\ \ \text{\rm{d}} x{\textit-}a.e.,$$ which implies \begin{align}\label{Pf(Th1)-2} &g(X_T,\mathscr{L}_{X_T})=g(X_T,\mathscr{L}_{\widetilde{W}_{V_T}})=\widetilde{u}(V_T,X_T,\mathscr{L}_{\widetilde{W}_{V_T}})\cr &=\widetilde{u}(V_T,X_T,\mathscr{L}_{X_T})=u(T,X_T,\mathscr{L}_{X_T})=Y_T,\ \ \mathbb{P}{\textit-}a.s. \end{align} In addition, we easily obtain \begin{align}\label{Pf(Th1)-3}
&\mathbb E\left(\sup\limits_{0\leq t\leq T}|Y_t|^2+\int_0^T|Z_t|^2\text{\rm{d}} V_t\right)\cr
&=\mathbb E\left(\sup\limits_{0\leq t\leq T}|u(t,X_t,\mathscr{L}_{X_t})|^2+\int_0^T|v(t,X_t,\mathscr{L}_{X_t})|^2\text{\rm{d}} V_t\right)\cr
&=\mathbb E\left(\sup\limits_{0\leq t\leq T}|\widetilde{u}(V_t,X_t,\mathscr{L}_{X_t})|^2+\int_0^T|\widetilde{v}(V_t,X_t,\mathscr{L}_{X_t})|^2\text{\rm{d}} V_t\right)\cr
&=\tilde\mathbb E\left(\sup\limits_{0\leq t\leq T}|\widetilde{u}(V_t,\widetilde{W}_{V_t},\mathscr{L}_{\widetilde{W}_{V_t}})|^2+\int_0^T|\widetilde{v}(V_t,\widetilde{W}_{V_t},\mathscr{L}_{\widetilde{W}_{V_t}})|^2\text{\rm{d}} V_t\right)\cr
&=\tilde\mathbb E\left(\sup\limits_{0\leq t\leq V_T}|\widetilde{u}(t,\widetilde{W}_{t},\mathscr{L}_{\widetilde{W}_{t}})|^2+\int_0^{V_T}|\widetilde{v}(t,\widetilde{W}_{t},\mathscr{L}_{\widetilde{W}_{t}})|^2\text{\rm{d}} t\right)\cr
&=\tilde\mathbb E\left(\sup\limits_{0\leq t\leq V_T}|\widetilde{Y}_t|^2+\int_0^{V_T}|\widetilde{Z}_t|^2\text{\rm{d}} t\right)<\infty, \end{align} which, together with \eqref{Pf(Th1)-1}-\eqref{Pf(Th1)-2}, yields that $(Y,Z)\in\Upsilon$ is a solution to \eqref{Bsde}.
\textit{Step 2. Uniqueness.} Let $(Y^i,Z^i)\in\Upsilon,i=1,2$ be two solutions of \eqref{Bsde}. Then there exist $u^i,v^i,i=1,2$ such that $(Y_t^i,Z_t^i)=(u^i(t,X_t,\mathscr{L}_{X_t}),v^i(t,X_t,\mathscr{L}_{X_t})),i=1,2$. Following the same arguments as in Step 1, we can conclude that $(\widetilde{Y}_t^i,\widetilde{Z}_t^i)=(u^i(U_t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t}),v^i(U_t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t}))\in\widetilde{\Upsilon},i=1,2$ are two solutions of \eqref{Aueq}. Similar to \eqref{Pf(Th1)-3} and by the uniqueness of \eqref{Aueq}, we arrive at \begin{align*}
&\mathbb E\left(\sup\limits_{0\leq t\leq T}|Y_t^1-Y_t^2|^2+\int_0^T|Z_t^1-Z_t^2|^2\text{\rm{d}} V_t\right)\cr
&=\mathbb E\left(\sup\limits_{0\leq t\leq T}|u^1(t,X_t,\mathscr{L}_{X_t})-u^2(t,X_t,\mathscr{L}_{X_t})|^2+\int_0^T|v^1(t,X_t,\mathscr{L}_{X_t})-v^2(t,X_t,\mathscr{L}_{X_t})|^2\text{\rm{d}} V_t\right)\cr
=&\tilde\mathbb E\left(\sup\limits_{0\leq t\leq V_T}|u^1(U_t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t})-u^2(U_t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t})|^2+\int_0^{V_T}|v^1(U_t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t})-v^2(U_t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t})|^2\text{\rm{d}} t\right)\cr
&=\tilde\mathbb E\left(\sup\limits_{0\leq t\leq V_T}|\widetilde{Y}_t^1-\widetilde{Y}_t^2|^2+\int_0^{V_T}|\widetilde{Z}_t^1-\widetilde{Z}_t^2|^2\text{\rm{d}} t\right)=0, \end{align*} which finishes the proof. \end{proof}
Next, we apply our general Theorem \ref{Th1} to the case of Lipschitz continuous coefficients $(g,f)$. More precisely, we assume the following conditions: \begin{enumerate}
\item[\textsc{\textbf{(H1)}}] (i) $g,f$ are Lipschitz continuous, i.e. there exist two constants $L_g, L_f>0$ such that for all $t\in[0,T],x_i,y_i,z_i\in\mathbb R,\mu,\widetilde{\mu}\in\mathscr{P}_2(\mathbb R),\mu_i\in\mathscr{P}_2(\mathbb R\times\mathbb R\times\mathbb R),i=1,2$, \begin{align*}
|g(x_1,\mu)-g(x_2,\widetilde{\mu})|\leq L_g(|x_1-x_2|+\mathbb{W}_2(\mu,\widetilde{\mu})) \end{align*} and \begin{align*}
|f(t,x_1,y_1,z_1,\mu_1)-f(t,x_2,y_2,z_2,\mu_2)|\leq L_f(|x_1-x_2|+|y_1-y_2|+|z_1-z_2|+\mathbb{W}_2(\mu_1,\mu_2)). \end{align*}
\item[(ii)] $|g(0,\delta_0)|+\sup_{t\in [0,T]}|f(t,0,0,0,\delta_0)|<\infty$. \end{enumerate}
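To illustrate \textsc{\textbf{(H1)}}, one may take, for instance (a purely illustrative pair, not taken from the references), \begin{align*} g(x,\mu)=x+\int_\mathbb R y\,\mu(\text{\rm{d}} y),\ \ f(t,x,y,z,\nu)=\sin x+y+z+\mathbb{W}_2(\nu,\delta_{(0,0,0)}). \end{align*} Indeed, $|\int_\mathbb R y\,\mu(\text{\rm{d}} y)-\int_\mathbb R y\,\widetilde{\mu}(\text{\rm{d}} y)|\leq\mathbb{W}_1(\mu,\widetilde{\mu})\leq\mathbb{W}_2(\mu,\widetilde{\mu})$ and, by the triangle inequality, $|\mathbb{W}_2(\nu_1,\delta_{(0,0,0)})-\mathbb{W}_2(\nu_2,\delta_{(0,0,0)})|\leq\mathbb{W}_2(\nu_1,\nu_2)$, so (i) holds with $L_g=L_f=1$, while $g(0,\delta_0)=0$ and $f(t,0,0,0,\delta_0)=0$ give (ii).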
Owing to \cite[Theorem A.1]{Li18}, where the driving noises are a Brownian motion and an independent Poisson random measure, the auxiliary equation \eqref{Aueq} has a unique solution $(\widetilde{Y},\widetilde{Z})\in\widetilde{\Upsilon}$. Hence, with the help of Theorem \ref{Th1}, \eqref{Bsde} admits a unique solution $(Y,Z)\in\Upsilon$.
\begin{rem}\label{Re1}
(i) (Stability estimate) Let $(Y^i,Z^i)\in\Upsilon$ be the unique solution of \eqref{Bsde} with the coefficients $(g^i,f^i),i=1,2$, which satisfy \textsc{\textbf{(H1)}}. Then, there exists a constant $C>0$ such that \begin{align*}
&\mathbb E\left(\sup_{s\in[0,T]}|Y^1_s-Y^2_s|^2+\int_0^{T}|Z^1_s-Z^2_s|^2\text{\rm{d}} V_s\right)\cr
\leq& C\mathbb E\left[\left|(g^1-g^2)(X_T,\mathscr{L}_{X_{T}})\right|^2+
\left(\int_0^{T}|(f^1-f^2)(s,X_s,{Y}^1_s,{Z}^1_s,\mathscr{L}_{(X_s,{Y}^1_s,{Z}^1_s)})|\text{\rm{d}} V_s\right)^2\right]. \end{align*} Indeed, we denote by $(\tilde Y^i,\tilde Z^i)\in\tilde\Upsilon$ the unique solution of \eqref{Aueq} with the coefficients $(g^i,f^i),i=1,2$. By \cite[Theorem A.2]{Li18}, we derive that there is a constant $C>0$ such that \begin{align*}
&\tilde\mathbb E\left(\sup_{s\in[0,V_T]}|\tilde Y^1_s-\tilde Y^2_s|^2+\int_0^{V_T}|\tilde Z^1_s-\tilde Z^2_s|^2\text{\rm{d}} s\right)\cr
\leq& C\tilde\mathbb E\left[\left|(g^1-g^2)(\widetilde{W}_{V_T},\mathscr{L}_{\widetilde{W}_{V_T}})\right|^2+
\left(\int_0^{V_T}|(f^1-f^2)(U_s,\widetilde{W}_s,\widetilde{Y}^1_s,\widetilde{Z}^1_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s,\widetilde{Z}^1_s)})|\text{\rm{d}} s\right)^2\right]. \end{align*} Then, the desired stability estimate follows from straightforward calculations via the transfer principle (Proposition \ref{Pr1}).
(ii) In the distribution-free case, namely when the coefficients $g$ and $f$ in \eqref{Bsde} are independent of the distributions, Bender \cite[Theorems 4.2 and 4.4]{Bender14} proved the existence and uniqueness result, while our recent work \cite[Theorem 5.1]{FW21} investigated the existence of densities for the marginal laws of the solution and derived Gaussian estimates for them. So, the present note can be regarded as a continuation and generalization of \cite{Bender14,FW21}. \end{rem}
\section{Comparison theorem and converse comparison theorem}
In this section, we first obtain a comparison theorem by imposing a condition on the Lions derivative of the generator. In the second part, we are concerned with the converse problem of the comparison theorem. More precisely, we shall establish a representation theorem for the generator and show how this result can be used to derive a converse comparison theorem.
\subsection{Comparison theorem}
In the Brownian motion case, i.e. $X=B^{1/2}$, the counterexamples given in \cite[Examples 3.1 and 3.2]{BLP09} or \cite[Example 2.1]{LLZ18} show that if the generator $f$ depends on the law of $Z$, or is not increasing with respect to the law of $Y$, comparison theorems generally do not hold for equation \eqref{Bsde}. We therefore consider the following special version of \eqref{Bsde}: \begin{align}\label{Bsde-com} Y_t=g(X_T,\mathscr{L}_{X_T})+\int_t^Tf(s,X_s,Y_s,Z_s,\mathscr{L}_{(X_s,Y_s)})\text{\rm{d}} V_s-\int_t^TZ_s\text{\rm{d}}^\diamond X_s,\ \ t\in[0,T]. \end{align} Correspondingly, the auxiliary equation of \eqref{Bsde-com} is of the form \begin{align}\label{AuBsde-com} \tilde Y_t=&g(\widetilde{W}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})+\int_t^{V_T}f(U_s,\tilde{W}_s,\tilde{Y}_s,\tilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s)})\text{\rm{d}} s -\int_t^{V_T}\widetilde{Z}_s\text{\rm{d}}\widetilde{W}_s,\ \ t\in[0,V_T]. \end{align}
\begin{thm}\label{Th(com)} Assume that the two generators $f^i,i=1,2$ satisfy \textsc{\textbf{(H1)}}, and that for any $(t,x,y,z)\in[0,T]\times\mathbb R\times\mathbb R\times\mathbb R$, $f^1(t,x,y,z,\cdot)$ belongs to $C^{(1,0)}(\mathscr{P}_2(\mathbb R\times\mathbb R))$ with $0\leq (D^Lf^1)^{(2)}\leq K$ for some constant $K>0$, where $(D^Lf^1)^{(2)}$ denotes the second component of $D^Lf^1$. Let $(Y^i,Z^i)\in\Upsilon, i=1,2$ be the solutions of \eqref{Bsde-com} with data $(g^i,f^i),i=1,2$, respectively. Then, if \begin{align}\label{1Th(com)} g^1(x,\mu)\leq g^2(x,\mu),\ \ x\in\mathbb R,\mu\in\mathscr{P}_2(\mathbb R) \end{align} and \begin{align}\label{2Th(com)} f^1(t,x,y,z,\nu)\leq f^2(t,x,y,z,\nu),\ \ t\in[0,T], x,y,z\in\mathbb R,\nu\in\mathscr{P}_2(\mathbb R\times\mathbb R), \end{align} we have $Y_t^1\leq Y_t^2$, $\mathbb{P}$-a.s., for every $t\in[0,T]$. \end{thm}
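A simple generator fulfilling the additional assumption on $f^1$ in Theorem \ref{Th(com)} (given merely for illustration) is \begin{align*} f^1(t,x,y,z,\nu)=h(t,x,y,z)+c\int_{\mathbb R\times\mathbb R}v\,\nu(\text{\rm{d}} u,\text{\rm{d}} v),\ \ 0\leq c\leq K, \end{align*} with $h$ Lipschitz continuous: for such a linear functional of the measure one has $D^Lf^1(t,x,y,z,\cdot)(\nu)(u,v)=(0,c)$, whence $(D^Lf^1)^{(2)}=c\in[0,K]$; in particular, $f^1$ is nondecreasing with respect to the second marginal of $\nu$.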
\begin{proof} Let $(\widetilde{Y}^i,\widetilde{Z}^i)\in\widetilde{\Upsilon}$ be the unique solutions of \eqref{AuBsde-com} associated with $(g^i,f^i), i=1,2$, and denote by $(\tilde u^i,\tilde v^i),i=1,2$ their representation functions, respectively.
We first give a comparison result for $(\widetilde{Y}^i,\widetilde{Z}^i), i=1,2$. Our strategy hinges on the It\^{o}-Tanaka formula applied to $[(\tilde Y_t^1-\tilde Y_t^2)^+]^2$ (see, e.g., \cite[Proposition 2.35 and Remark 2.36]{PR14}), which gives for any $t\in[0,V_T]$, \begin{align*}
&\tilde\mathbb E[(\tilde Y_t^1-\tilde Y_t^2)^+]^2+\tilde\mathbb E\int_t^{V_T}|\tilde Z_s^1-\tilde Z_s^2|^2\mathrm{I}_{\{\tilde Y_s^1-\tilde Y_s^2\geq0\}}\text{\rm{d}} s\nonumber\\ =&\tilde\mathbb E[(g^1-g^2)^+(\widetilde{W}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})]^2\nonumber\\ &+2\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+\left(f^1(U_s,\Theta_s^1,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s)})-f^2(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^2_s)})\right)\text{\rm{d}} s\nonumber\\ =&2\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+\left(f^1(U_s,\Theta_s^1,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s)})-f^2(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^2_s)})\right)\text{\rm{d}} s. \end{align*} Here, we have set $\Theta_s^i=(\widetilde{W}_s,\tilde{Y}^i_s,\tilde{Z}^i_s), i=1,2$, and used \eqref{1Th(com)}.\\ Using the Lipschitz continuity of $f^1$ and the fact that $f^1(U_t,\cdot,\cdot,\cdot,\cdot)\leq f^2(U_t,\cdot,\cdot,\cdot,\cdot)$ for each $t\in[0,V_T]$ implied by \eqref{2Th(com)}, we get \begin{align}\label{1PfTh(com)}
&\tilde\mathbb E[(\tilde Y_t^1-\tilde Y_t^2)^+]^2+\tilde\mathbb E\int_t^{V_T}|\tilde Z_s^1-\tilde Z_s^2|^2\mathrm{I}_{\{\tilde Y_s^1-\tilde Y_s^2\geq0\}}\text{\rm{d}} s\nonumber\\ =&2\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+ \left(f^1(U_s,\Theta_s^1,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s)})-f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s)})\right)\text{\rm{d}} s\cr &+2\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+ \left(f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s)})-f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^2_s)})\right)\text{\rm{d}} s\cr &+2\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+ \left(f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^2_s)})-f^2(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^2_s)})\right)\text{\rm{d}} s\cr
\leq&2L_{f^1}\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+\left(|\tilde Y_s^1-\tilde Y_s^2|+|\tilde Z_s^1-\tilde Z_s^2|\right)\text{\rm{d}} s \cr &+2\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+ \left(f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s)})-f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^2_s)})\right)\text{\rm{d}} s. \end{align} Observe that by Lemma \ref{FoLD}, we obtain \begin{align*} &f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^1_s)})-f^1(U_s,\Theta_s^2,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}^2_s)})\cr =&\int_0^1\frac{\text{\rm{d}}}{\text{\rm{d}} r}f^1(U_s,\Theta_s^2,\mathscr{L}_{\chi_s(r)})\text{\rm{d}} r\cr
=&\int_0^1\tilde\mathbb E\langle D^Lf^1(U_s,x,\cdot)(\mathscr{L}_{\chi_s(r)})(\chi_s(r)),(0,\widetilde{Y}^1_s-\widetilde{Y}^2_s)\rangle|_{x=\Theta_s^2}\text{\rm{d}} r\cr
=&\int_0^1\tilde\mathbb E\left[(D^Lf^1)^{(2)}(U_s,x,\cdot)(\mathscr{L}_{\chi_s(r)})(\chi_s(r))\cdot(\widetilde{Y}^1_s-\widetilde{Y}^2_s)\right]\big|_{x=\Theta_s^2}\text{\rm{d}} r\cr \leq& K\tilde\mathbb E(\widetilde{Y}^1_s-\widetilde{Y}^2_s)^+, \end{align*} where for any $r\in[0,1],\chi_s(r)=(\widetilde{W}_s,\widetilde{Y}^2_s)+r(0,\widetilde{Y}^1_s-\widetilde{Y}^2_s)$.\\ Therefore, plugging this into \eqref{1PfTh(com)} and applying the H\"{o}lder and the Young inequalities, we have \begin{align*}
&\tilde\mathbb E[(\tilde Y_t^1-\tilde Y_t^2)^+]^2+\tilde\mathbb E\int_t^{V_T}|\tilde Z_s^1-\tilde Z_s^2|^2\mathrm{I}_{\{\tilde Y_s^1-\tilde Y_s^2\geq0\}}\text{\rm{d}} s\nonumber\\
\leq&2(L_{f^1}\vee K)\tilde\mathbb E\int_t^{V_T}(\tilde Y_s^1-\tilde Y_s^2)^+\left(|\tilde Y_s^1-\tilde Y_s^2|+|\tilde Z_s^1-\tilde Z_s^2|+\tilde\mathbb E(\widetilde{Y}^1_s-\widetilde{Y}^2_s)^+\right)\text{\rm{d}} s\cr
\leq&C\int_t^{V_T}\tilde\mathbb E[(\tilde Y_s^1-\tilde Y_s^2)^+]^2\text{\rm{d}} s+\frac 1 2\tilde\mathbb E\int_t^{V_T}|\tilde Z_s^1-\tilde Z_s^2|^2\mathrm{I}_{\{\tilde Y_s^1-\tilde Y_s^2\geq0\}}\text{\rm{d}} s, \end{align*} where, here and in what follows, $C$ denotes a generic constant.\\ Then, the Gronwall inequality implies $\tilde\mathbb E[(\tilde Y_t^1-\tilde Y_t^2)^+]^2=0$ for every $t\in[0,V_T]$. Consequently, it holds that for any $t\in[0,V_T]$, \begin{align*} \widetilde{u}^1(t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t})=\tilde Y_t^1\leq \tilde Y_t^2=\widetilde{u}^2(t,\widetilde{W}_t,\mathscr{L}_{\widetilde{W}_t}),\ \ \tilde\mathbb{P}\textit{-}a.s. \end{align*} Since $\widetilde{W}_t$ has a Gaussian law, we derive that for any $t\in[0,V_T]$, \begin{align*} \widetilde{u}^1(t,x,\mathscr{L}_{\widetilde{W}_t})\leq\widetilde{u}^2(t,x,\mathscr{L}_{\widetilde{W}_t}),\ \ \text{\rm{d}} x\textit{-}a.e., \end{align*} which, together with the fact that $\mathscr{L}_{\widetilde{W}_{V_t}}=\mathscr{L}_{X_t}$, yields that for each $t\in[0,T]$, \begin{align*} \widetilde{u}^1(V_t,x,\mathscr{L}_{X_t})\leq\widetilde{u}^2(V_t,x,\mathscr{L}_{X_t}),\ \ \text{\rm{d}} x\textit{-}a.e. \end{align*} According to Theorem \ref{Th1}, we conclude that $Y_t^1\leq Y_t^2$, $\mathbb{P}$-a.s. for every $t\in[0,T]$. This completes the proof. \end{proof}
\begin{rem}\label{Re-comp1} (i) In light of the proof above, one can see that Theorem \ref{Th(com)} still holds if the conditions on $f^1$ are replaced by the corresponding ones on $f^2$.
(ii) Compared with the corresponding result on DDBSDEs driven by the standard Brownian motion $B^{1/2}$ in \cite[Theorem 2.2 and Remark 2.2]{LLZ18}, Theorem \ref{Th(com)} applies to more general BSDEs, since $B^{1/2}$ is replaced by a general Gaussian driving process $X$. In addition, in contrast to the distribution-free case (see, e.g., \cite[Theorem 4.6 (iii)]{Bender14}, \cite[Theorem 2.2]{EPQ9705} or \cite[Theorem 12.3]{HCS12}), we need to overcome the difficulty induced by the appearance of $\mathscr{L}_{(X_s,Y_s)}$ in the generator, which we do via a formula for the $L$-derivative (Lemma \ref{FoLD}). \end{rem}
\subsection{Converse comparison theorem}
In this part, we investigate a converse comparison problem: if for each $t\in[0,T],\epsilon\in(0,T-t)$ and $\xi\in L^2_X$, $Y_t^1(t+\epsilon,\xi)\leq Y_t^2(t+\epsilon,\xi)$ (see the notation introduced before Theorem \ref{Th(Conve)}), do we have $f^1(t,y,z,\nu)\leq f^2(t,y,z,\nu)$ for every $(t,y,z,\nu)\in[0,T]\times\mathbb R\times\mathbb R\times\mathscr{P}_2(\mathbb R\times\mathbb R\times\mathbb R)$? That is, if we can compare the solutions of two DDBSDEs with the same terminal condition, for all terminal conditions, can we compare the generators? To study this question, we first establish a representation theorem for the generator $f$, in the spirit of the approach initiated in \cite{BCHMP00}.
Now, given $(t,y,z)\in[0,T)\times\mathbb R\times\mathbb R$ and $0<\epsilon<T-t$, denote by $(Y^\epsilon,Z^\epsilon)$ the unique solution of the following DDBSDE on $[t,t+\epsilon]$: \begin{align}\label{1-Repr} Y^\epsilon_r=y+z(X_{t+\epsilon}-X_t)+\int_r^{t+\epsilon}f(s,Y^\epsilon_s,Z^\epsilon_s,\mathscr{L}_{(X_s,Y_s^\epsilon,Z_s^\epsilon)})\text{\rm{d}} V_s-\int_r^{t+\epsilon}Z^\epsilon_s\text{\rm{d}}^\diamond X_s. \end{align}
Our representation theorem is formulated as follows.
\begin{thm}\label{Th(Repre)} Assume that \textsc{\textbf{(H1)}} holds and $V_{t+\epsilon}-V_t=O(\epsilon)$ as $\epsilon\ra0$ for any $t\in[0,T)$. Then for any $(t,y,z)\in[0,T)\times\mathbb R\times\mathbb R$, the following two statements are equivalent: \begin{align}\label{1Th(Re)} (i) \lim\limits_{\epsilon\ra0^+}&\frac{Y^\epsilon_t-y}{\epsilon}=f(t,y,z,\mathscr{L}_{(X_t,y,z)});\\ (ii) \lim\limits_{\epsilon\ra0^+}&\frac 1 \epsilon\int_t^{t+\epsilon}f(r,y,z,\mathscr{L}_{(X_t,y,z)})\text{\rm{d}} r=f(t,y,z,\mathscr{L}_{(X_t,y,z)}).\label{2Th(Re)} \end{align} \end{thm}
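Let us point out that condition \eqref{2Th(Re)} holds, for instance, whenever $t$ is a right Lebesgue point of the map $r\mapsto f(r,y,z,\mathscr{L}_{(X_t,y,z)})$ (with the measure frozen at time $t$); in particular, if $f$ is right-continuous in its time variable, then \eqref{2Th(Re)}, and hence the representation \eqref{1Th(Re)}, holds at every $t\in[0,T)$.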
\begin{proof} We split the proof into two steps. First, we prove this theorem for the case of the auxiliary equation of \eqref{1-Repr}; then we extend this result to \eqref{1-Repr} by applying the transfer principle.
\textit{Step 1. The auxiliary equation case.} Observe first that the corresponding auxiliary equation of \eqref{1-Repr} is given by \begin{align}\label{1PfTh(Re)} \tilde{Y}^\epsilon_{V_r}=&y+z(\tilde{W}_{V_{t+\epsilon}}-\tilde{W}_{V_t})+\int_{V_r}^{V_{t+\epsilon}}f(U_s,\tilde{Y}^\epsilon_s,\tilde{Z}^\epsilon_s,\mathscr{L}_{(\tilde{W}_s,\tilde{Y}^\epsilon_s,\tilde{Z}_s^\epsilon)})\text{\rm{d}} s\cr &-\int_{V_r}^{V_{t+\epsilon}}\tilde{Z}^\epsilon_s\text{\rm{d}}\tilde{W}_s,\ \ r\in[t,t+\epsilon]. \end{align} We claim that for any $(t,y,z)\in[0,T)\times\mathbb R\times\mathbb R$, the following two statements are equivalent: \begin{align*} (I) \lim\limits_{\epsilon\ra0^+}&\frac{\tilde Y^\epsilon_{V_t}-y}{\epsilon}=f(t,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)});\\ (II) \lim\limits_{\epsilon\ra0^+}&\frac 1 \epsilon\int_{V_t}^{V_{t+\epsilon}}f(U_r,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\text{\rm{d}} r=f(t,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)}). \end{align*} Indeed, for $s\in[V_t,V_{t+\epsilon}]$, we put $\Gamma^\epsilon_s:=\tilde{Y}^\epsilon_s-(y+z(\tilde{W}_s-\tilde{W}_{V_t}))$. Then, applying the It\^{o} formula to $\Gamma^\epsilon_s$ on the interval $[V_t,V_{t+\epsilon}]$ yields \begin{align}\label{2PfTh(Re)} \Gamma^\epsilon_s=&\int_s^{V_{t+\epsilon}}f(U_r,\Gamma^\epsilon_r+y+z(\tilde{W}_r-\tilde{W}_{V_t}),\tilde{Z}^\epsilon_r,\mathscr{L}_{(\tilde{W}_r,\Gamma^\epsilon_r+y+z(\tilde{W}_r-\tilde{W}_{V_t}),\tilde{Z}_r^\epsilon)})\text{\rm{d}} r\cr &-\int_s^{V_{t+\epsilon}}(\tilde{Z}^\epsilon_r-z)\text{\rm{d}}\tilde{W}_r\cr =:&\int_s^{V_{t+\epsilon}}f^\epsilon(r)\text{\rm{d}} r-\int_s^{V_{t+\epsilon}}(\tilde{Z}^\epsilon_r-z)\text{\rm{d}}\tilde{W}_r.
\end{align} Using the facts that $\tilde{Y}^\epsilon_{V_t}-y=\Gamma^\epsilon_{V_t}$ and that $\Gamma^\epsilon_{V_t}$ is deterministic thanks to \cite[Proposition 4.2]{EPQ9705}, and taking the conditional expectation with respect to $\tilde\mathscr{F}_{V_t}$ in \eqref{2PfTh(Re)}, we arrive at \begin{align}\label{6PfTh(Re)} &\frac{\tilde Y^\epsilon_{V_t}-y}{\epsilon}-f(t,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\cr =&\frac{\Gamma^\epsilon_{V_t}}\epsilon-f(t,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\cr
=&\frac 1 \epsilon\tilde\mathbb E\left(\int_{V_t}^{V_{t+\epsilon}}f^\epsilon(r)\text{\rm{d}} r|\tilde\mathscr{F}_{V_t}\right)-f(t,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\cr
=&\frac 1 \epsilon\tilde\mathbb E\left[\int_{V_t}^{V_{t+\epsilon}}\left(f^\epsilon(r)-f(U_r,y+z(\tilde{W}_r-\tilde{W}_{V_t}),z,\mathscr{L}_{(\tilde{W}_r,y+z(\tilde{W}_r-\tilde{W}_{V_t}),z)})\right)\text{\rm{d}} r\big|\tilde\mathscr{F}_{V_t}\right]\cr &+\frac 1 \epsilon\tilde\mathbb E\bigg[\int_{V_t}^{V_{t+\epsilon}}\Big(f(U_r,y+z(\tilde{W}_r-\tilde{W}_{V_t}),z,\mathscr{L}_{(\tilde{W}_r,y+z(\tilde{W}_r-\tilde{W}_{V_t}),z)})\cr
&\qquad\qquad\qquad\quad -f(U_r,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\Big)\text{\rm{d}} r\big|\tilde\mathscr{F}_{V_t}\bigg]\cr &+\frac 1 \epsilon\int_{V_t}^{V_{t+\epsilon}}f(U_r,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\text{\rm{d}} r-f(t,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\cr =:&I_1+I_2+I_3. \end{align} By the H\"{o}lder inequality and \textsc{\textbf{(H1)}}, we have \begin{align}\label{3PfTh(Re)}
\tilde\mathbb E|I_1|^2\leq&3L_f^2\frac {V_{t+\epsilon}-V_t}{\epsilon^2}\int_{V_t}^{V_{t+\epsilon}}\tilde\mathbb E\left[|\Gamma^\epsilon_r|^2+|\tilde{Z}^\epsilon_r-z|^2+\tilde\mathbb E\left(|\Gamma^\epsilon_r|^2+|\tilde{Z}^\epsilon_r-z|^2\right)\right]\text{\rm{d}} r\cr
=&6L_f^2\frac {V_{t+\epsilon}-V_t}{\epsilon^2}\int_{V_t}^{V_{t+\epsilon}}\tilde\mathbb E\left(|\Gamma^\epsilon_r|^2+|\tilde{Z}^\epsilon_r-z|^2\right)\text{\rm{d}} r \end{align} and \begin{align}\label{4PfTh(Re)}
\tilde\mathbb E|I_2|^2\leq&2L_f^2\frac {V_{t+\epsilon}-V_t}{\epsilon^2}\int_{V_t}^{V_{t+\epsilon}}\tilde\mathbb E\left[z^2|\tilde{W}_r-\tilde{W}_{V_t}|^2+(1+z^2)\tilde\mathbb E|\tilde{W}_r-\tilde{W}_{V_t}|^2\right]\text{\rm{d}} r\cr =&2L_f^2(1+2z^2)\frac {V_{t+\epsilon}-V_t}{\epsilon^2}\int_{V_t}^{V_{t+\epsilon}}(r-V_t)\text{\rm{d}} r\cr =&L_f^2(1+2z^2)\frac {(V_{t+\epsilon}-V_t)^3}{\epsilon^2}. \end{align}
Similar to \cite[Proposition 2.2]{BCHMP00}, applying the It\^{o} formula to $\text{\rm{e}}^{\beta s}|\Gamma^\epsilon_s|^2$ with some constant $\beta>0$ depending only on $L_f$, we obtain the following a priori estimate for \eqref{2PfTh(Re)} (whose solution is regarded as $(\Gamma^\epsilon_\cdot,\tilde{Z}^\epsilon_\cdot-z)$): \begin{align*}
&\tilde\mathbb E\left(\sup_{V_t\leq s\leq V_{t+\epsilon}}|\Gamma^\epsilon_s|^2+\int_{V_t}^{V_{t+\epsilon}}|\tilde{Z}^\epsilon_r-z|^2\text{\rm{d}} r\big|\tilde\mathscr{F}_{V_t}\right)\cr
\leq&C\tilde\mathbb E\left[\left(\int_{V_t}^{V_{t+\epsilon}}|
f(U_r,y+z(\tilde{W}_r-\tilde{W}_{V_t}),z,\mathscr{L}_{(\tilde{W}_r,y+z(\tilde{W}_r-\tilde{W}_{V_t}),z)})|\text{\rm{d}} r\right)^2\big|\tilde\mathscr{F}_{V_t}\right]. \end{align*} Then, it follows from the H\"{o}lder inequality and \textsc{\textbf{(H1)}} that \begin{align*}
&\tilde\mathbb E\left(\sup_{V_t\leq s\leq V_{t+\epsilon}}|\Gamma^\epsilon_s|^2+\int_{V_t}^{V_{t+\epsilon}}|\tilde{Z}^\epsilon_r-z|^2\text{\rm{d}} r\right)\cr \leq&C(V_{t+\epsilon}-V_t)
\int_{V_t}^{V_{t+\epsilon}}\tilde\mathbb E\left[|f(U_r,y,z,\mathscr{L}_{(\tilde{W}_{V_t},y,z)})|^2+z^2|\tilde{W}_r-\tilde{W}_{V_t}|^2+(1+z^2)\tilde\mathbb E|\tilde{W}_r-\tilde{W}_{V_t}|^2\right]\text{\rm{d}} r\cr =&C(V_{t+\epsilon}-V_t)
\left[\int_{V_t}^{V_{t+\epsilon}}|f(U_r,y,z,\mathscr{L}_{(\tilde{W}_{V_t},y,z)})|^2\text{\rm{d}} r+(1+2z^2)(V_{t+\epsilon}-V_t)^2\right]. \end{align*} Substituting this into \eqref{3PfTh(Re)} yields \begin{align}\label{5PfTh(Re)}
\tilde\mathbb E|I_1|^2\leq& C\left(\frac{V_{t+\epsilon}-V_t}{\epsilon}\right)^2(1+V_{t+\epsilon}-V_t)\cr
&\times\left[\int_{V_t}^{V_{t+\epsilon}}|f(U_r,y,z,\mathscr{L}_{(\tilde{W}_{V_t},y,z)})|^2\text{\rm{d}} r+(1+2z^2)(V_{t+\epsilon}-V_t)^2\right]. \end{align} Note that by the absolute continuity of the integral, we have \begin{align*}
\lim\limits_{\epsilon\ra0^+}\int_{V_t}^{V_{t+\epsilon}}|f(U_r,y,z,\mathscr{L}_{(\tilde{W}_{V_t},y,z)})|^2\text{\rm{d}} r=0. \end{align*} Therefore, owing to the condition $V_{t+\epsilon}-V_t=O(\epsilon)$ as $\epsilon\ra0$ and \eqref{4PfTh(Re)}-\eqref{5PfTh(Re)}, we conclude that \begin{align*}
\lim\limits_{\epsilon\ra0^+}\left(\tilde\mathbb E|I_1|^2+\tilde\mathbb E|I_2|^2\right)=0, \end{align*} which, along with \eqref{6PfTh(Re)} and the fact that $\tilde Y^\epsilon_{V_t}$ is deterministic due to \cite[Proposition 4.2]{EPQ9705} again, implies the desired assertion.
\textit{Step 2. The case of the equation with Gaussian noise.}
With a slight modification of the proof of Theorem \ref{Th1}, we know that for every $\epsilon>0$, there exists a representation function $\tilde u^\epsilon$ such that the solutions of \eqref{1-Repr} and \eqref{1PfTh(Re)} can be written in the following form: for any $r\in[t,t+\epsilon]$, \begin{align*} Y^\epsilon_r=\tilde u^\epsilon(V_r,X_r-X_t,\mathscr{L}_{X_r-X_t}) \ \ \ \mathrm{and} \ \ \ \tilde Y^\epsilon_{V_r}=\tilde u^\epsilon(V_r,\tilde W_{V_r}-\tilde W_{V_t},\mathscr{L}_{\tilde W_{V_r}-\tilde W_{V_t}}). \end{align*} Then, we get \begin{align}\label{7PfTh(Re)} \frac {Y^\epsilon_t-y}\epsilon=\frac{\tilde u^\epsilon(V_t,0,\delta_0)-y}\epsilon=\frac{\tilde Y^\epsilon_{V_t}-y}{\epsilon}. \end{align} On the other hand, by a change of variables it is easy to see that \begin{align}\label{8PfTh(Re)} \int_{t}^{t+\epsilon}f(r,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\text{\rm{d}} r=\int_{V_t}^{V_{t+\epsilon}}f(U_r,y,z,\mathscr{L}_{(\tilde W_{V_t},y,z)})\text{\rm{d}} r. \end{align} Finally, taking into account the claim in Step 1 and using \eqref{7PfTh(Re)}-\eqref{8PfTh(Re)} and the fact that $\mathscr{L}_{(\tilde W_{V_t},y,z)}=\mathscr{L}_{(X_t,y,z)}$, we deduce that \eqref{1Th(Re)} is equivalent to \eqref{2Th(Re)}. Our proof is now finished. \end{proof}
\begin{rem}\label{Re(Rep)} Although the computations become much more involved, it is not hard to extend the result in Step 1 to \eqref{1PfTh(Re)} with $\tilde W$ replaced by a diffusion process; this generalizes \cite[Proposition 2.3]{BCHMP00} and \cite[Theorem 3.3]{Jiang05}, which handled distribution-free BSDEs driven by Brownian motion. \end{rem}
With the help of Theorem \ref{Th(Repre)}, we can establish a converse comparison theorem. To this end, we denote by $(Y_t^i(T,g(X_T,\mathscr{L}_{X_T})),Z_t^i(T,g(X_T,\mathscr{L}_{X_T})))_{t\in[0,T]}$ the solution of \begin{align}\label{3Th(Conve)} Y_t^i=&g(X_T,\mathscr{L}_{X_T})+\int_t^Tf^i(s,Y_s^i,Z_s^i,\mathscr{L}_{(X_s,Y^i_s,Z^i_s)})\text{\rm{d}} V_s\cr &-\int_t^TZ_s^i\text{\rm{d}}^\diamond X_s,\ \ t\in[0,T],\ \ i=1,2. \end{align}
Our converse comparison theorem reads as follows.
\begin{thm}\label{Th(Conve)} Let \textsc{\textbf{(H1)}} hold for $f^i,i=1,2$ and $V_{t+\epsilon}-V_t=O(\epsilon)$ as $\epsilon\ra0$ for any $t\in[0,T)$. Assume moreover that for each $(t,y,z)\in[0,T)\times\mathbb R\times\mathbb R$ and $\epsilon\in(0,T-t]$, \begin{align}\label{1Th(Conve)} Y_t^1(t+\epsilon,y+z(X_{t+\epsilon}-X_t))\leq Y_t^2(t+\epsilon,y+z(X_{t+\epsilon}-X_t)). \end{align} Then for each $(t,y,z)\in[0,T)\times\mathbb R\times\mathbb R$, we have \begin{align}\label{2Th(Conve)} f^1(t,y,z,\mathscr{L}_{(X_t,y,z)})\leq f^2(t,y,z,\mathscr{L}_{(X_t,y,z)}). \end{align} \end{thm}
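Concerning the assumption $V_{t+\epsilon}-V_t=O(\epsilon)$, recall that $\mathscr{L}_{\widetilde{W}_{V_t}}=\mathscr{L}_{X_t}$, so that $V_t$ is the variance of $X_t$. In the Brownian motion case $X=B^{1/2}$ one has $V_t=t$ and the condition holds trivially; for fractional Brownian motion $X=B^H$ (mentioned here only as an illustration) one has $V_t=t^{2H}$, and the condition holds for all $t\in[0,T)$ if and only if $H\geq1/2$, since at $t=0$ one gets $V_\epsilon=\epsilon^{2H}$, which is $O(\epsilon)$ only for $H\geq1/2$.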
\begin{proof} Since \textsc{\textbf{(H1)}} holds for $f^i,i=1,2$, one can see that for any $(t,y,z)\in[0,T)\times\mathbb R\times\mathbb R$, \begin{align*} \lim\limits_{\epsilon\ra0^+}&\frac 1 \epsilon\int_t^{t+\epsilon}f^i(r,y,z,\mathscr{L}_{(X_t,y,z)})\text{\rm{d}} r=f^i(t,y,z,\mathscr{L}_{(X_t,y,z)}),\ \ i=1,2. \end{align*} Then, in light of Theorem \ref{Th(Repre)}, we have \begin{align}\label{1PfTh(Conve)} \lim\limits_{\epsilon\ra0^+}&\frac{Y_t^i(t+\epsilon,y+z(X_{t+\epsilon}-X_t))-y}{\epsilon}=f^i(t,y,z,\mathscr{L}_{(X_t,y,z)}),\ \ i=1,2. \end{align} By the hypothesis \eqref{1Th(Conve)} we derive that for any $(t,y,z)\in[0,T)\times\mathbb R\times\mathbb R$ and $\epsilon\in(0,T-t]$, \begin{align*} Y_t^1(t+\epsilon,y+z(X_{t+\epsilon}-X_t))-y\leq Y_t^2(t+\epsilon,y+z(X_{t+\epsilon}-X_t))-y, \end{align*} which, along with \eqref{1PfTh(Conve)}, yields the desired relation. This then completes the proof. \end{proof}
We conclude this section with a remark.
\begin{rem}\label{Re(Conve)} By Theorems \ref{Th(Repre)} and \ref{Th(Conve)}, it is somewhat surprising to find that the relations \eqref{1Th(Re)}, \eqref{2Th(Re)} and \eqref{2Th(Conve)} all depend on $\mathscr{L}_{(X_t,y,z)}$, in marked contrast to the distribution-free case (see, e.g., \cite[Theorem 4.1]{BCHMP00}, \cite[Theorem]{Chen98a} or \cite[Theorem 5.1]{Jiang05}). This means that there is no freedom in the choice of the measure appearing in the generator when dealing with the representation theorem or the converse comparison theorem, which is due to the appearance of the distribution dependent terms $\mathscr{L}_{(X_s,Y_s^\epsilon,Z_s^\epsilon)}$ in \eqref{1-Repr} and $\mathscr{L}_{(X_s,Y^i_s,Z^i_s)}$ in \eqref{3Th(Conve)}, respectively. \end{rem}
\section{Functional inequalities}
In this section, we aim to establish functional inequalities for \eqref{Bsde}, mainly transportation inequalities and logarithmic Sobolev inequalities. Our arguments rely on the stability of the Wasserstein distance and of the relative entropy of measures under homeomorphisms, together with the transfer principle.
\subsection{Transportation inequalities}
Let $(E,d)$ be a metric space and let $\mathscr{P}(E)$ denote the set of all probability measures on $E$. For $p\in[1,\infty)$, we say that a probability measure $\mu\in\mathscr{P}(E)$ satisfies the $p$-transportation inequality on $(E,d)$ (written $\mu\in T_p(C)$) if there is a constant $C\geq0$ such that \begin{align*}
\mathbb{W}_p(\mu,\nu)\leq C\sqrt{H(\nu|\mu)}, \ \ \nu\in\mathscr{P}(E), \end{align*}
where $H(\nu|\mu)$ is the relative entropy (or Kullback-Leibler divergence) of $\nu$ with respect to $\mu$ defined as \begin{equation*}
H(\nu|\mu)=\left\{ \begin{array}{ll} \int\log\frac {\text{\rm{d}}\nu}{\text{\rm{d}}\mu}\text{\rm{d}}\nu,\ \ \mathrm{if}\ \nu\ll\mu,\\ +\infty,\ \ \ \ \ \ \ \ \ \ \mathrm{otherwise}. \end{array} \right. \end{equation*} The transportation inequality has found numerous applications, for instance, to quantitative finance, the concentration of measure phenomenon and various problems of probability in higher dimensions. We refer the reader e.g. to \cite{BT20,DGW04,Lacker18,Rid17,Saussereau12,SYZ22} and references therein.
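A classical benchmark in this notation is Talagrand's inequality: the Gaussian measure $N(0,\sigma^2)$ on $\mathbb R$ satisfies \begin{align*} \mathbb{W}_2(N(0,\sigma^2),\nu)\leq\sqrt{2\sigma^2H(\nu|N(0,\sigma^2))},\ \ \nu\in\mathscr{P}(\mathbb R), \end{align*} that is, $N(0,\sigma^2)\in T_2(\sqrt{2\sigma^2})$.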
Before moving to \eqref{Bsde}, we first show the transportation inequalities for the auxiliary equation \eqref{Aueq}, which is a distribution dependent version of \cite[Theorem 1.3 and Lemma 4.1]{BT20}.
\begin{prp}\label{Prp(Tr)} Assume that \textsc{\textbf{(H1)}} holds. Then we have \begin{itemize} \item[(i)] The law of $(\tilde Y_t)_{t\in[0,V_T]}$ satisfies $T_2(C_{Tr,\tilde Y})$ on $\tilde\Omega$ with $C_{Tr,\tilde Y}=2(L_g+L_fV_T)^2\text{\rm{e}}^{2L_fV_T}$. \item[(ii)] For any $p\geq1$, the law of $(\tilde Z_t)_{t\in[0,V_T]}$, denoted by $\mathscr{L}_{\tilde Z}$, satisfies \begin{align*}
\mathbb{W}_p(\mathscr{L}_{\tilde Z},\mu)\leq C_{Tr,\tilde Z}\left(H(\mu|\mathscr{L}_{\tilde Z})\right)^{\frac 1 {2p}},\ \ \mu\in\mathscr{P}(\tilde\Omega), \end{align*} where \begin{align*} C_{Tr,\tilde Z}=2\inf_{\alpha>0}\left\{\frac 1{2\alpha}\left[1+\alpha\text{\rm{e}}^{2pL_fV_T}(L_g+L_fV_T)^{2p}\right]\right\}^{\frac 1{2p}}. \end{align*} \end{itemize} \end{prp}
For the sake of conciseness, we defer the proof to the Appendix. With Proposition \ref{Prp(Tr)} in hand, along with the transfer principle, we now state and prove the transportation inequalities for \eqref{Bsde} as follows.
\begin{thm}\label{Th(TrIn)} Assume that \textsc{\textbf{(H1)}} holds. Then for every $t\in[0,T]$, we have \begin{itemize} \item[(i)] The law of $Y_t$ satisfies $T_2(C_{Tr,Y_t})$ on $\mathbb R$ with $C_{Tr,Y_t}=2(L_g+L_f(V_T-V_t))^2\text{\rm{e}}^{2L_f(V_T-V_t)}$. \item[(ii)] For any $p\geq1$, the law of $Z_t$ satisfies \begin{align*}
\mathbb{W}_p(\mathscr{L}_{Z_t},\mu)\leq C_{Tr,Z_t}\left(H(\mu|\mathscr{L}_{Z_t})\right)^{\frac 1 {2p}},\ \ \mu\in\mathscr{P}(\mathbb R), \end{align*} where \begin{align*} C_{Tr,Z_t}=2\inf_{\alpha>0}\left\{\frac 1{2\alpha}\left[1+\alpha\text{\rm{e}}^{2pL_f(V_T-V_t)}(L_g+L_f(V_T-V_t))^{2p}\right]\right\}^{\frac 1{2p}}. \end{align*} \end{itemize} \end{thm}
\begin{proof} By Proposition \ref{Prp(Tr)} and its proof (see \eqref{add4PfPrp(Tr)} and \eqref{6PfPrp(Tr)} in the Appendix), it is easy to see that for each $t\in[0,V_T]$, $\mathscr{L}_{\tilde Y_t}$ satisfies $T_2(C_{Tr,\tilde Y_t})$ on $\mathbb R$ with $C_{Tr,\tilde Y_t}=2(L_g+L_f(V_T-t))^2\text{\rm{e}}^{2L_f(V_T-t)}$, and $\mathscr{L}_{\tilde Z_t}$ satisfies \begin{align*}
\mathbb{W}_p(\mathscr{L}_{\tilde Z_t},\mu)\leq C_{Tr,\tilde Z_t}\left(H(\mu|\mathscr{L}_{\tilde Z_t})\right)^{\frac 1 {2p}},\ \ \mu\in\mathscr{P}(\mathbb R) \end{align*} with any $p\geq1$ and \begin{align*} C_{Tr,\tilde Z_t}=2\inf_{\alpha>0}\left\{\frac 1{2\alpha}\left[1+\alpha\text{\rm{e}}^{2pL_f(V_T-t)}(L_g+L_f(V_T-t))^{2p}\right]\right\}^{\frac 1{2p}}. \end{align*} Noting that for any $t\in[0,T]$, the laws of $Y_t$ and $Z_t$ are respectively the same as those of $\tilde Y_{V_t}$ and $\tilde Z_{V_t}$ due to Theorem \ref{Th1}, we obtain the desired assertions (i) and (ii). \end{proof}
\begin{rem}\label{Re(TrIn)} If $f=0$ and $g(x,\mu)=x$, then one can check that \eqref{Bsde} and \eqref{Aueq} have unique solutions $(Y,Z)=(X,1)$ and $(\tilde Y,\tilde Z)=(\tilde W,1)$, respectively. By Theorem \ref{Th(TrIn)} and Proposition \ref{Prp(Tr)}, we have $C_{Tr,Y_t}=2$ and $C_{Tr,\tilde Y}=2$, which are known to be optimal for Gaussian processes and Brownian motion. This means that the constants $C_{Tr,Y_t}$ and $C_{Tr,\tilde Y}$ above are sharp. Let us also mention that it is not clear whether the laws of the paths of $Y$ and $Z$ satisfy the transportation inequality. Indeed, the transfer principle may fail in this situation because of the difference between the spaces $\mathscr{P}(C([0,T]))$ and $\mathscr{P}(C([0,V_T]))$. \end{rem}
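The sharpness claim in the remark can be checked directly in the one-dimensional Gaussian case, where both sides of the $T_2(2)$ inequality are available in closed form. The following sketch is a numerical illustration only (not part of the proofs): for unit-variance Gaussians differing only in their means, $\mathbb{W}_2^2$ equals $2H$ exactly, so the constant $2$ cannot be improved.

```python
import math

def w2_gauss(m1, s1, m2, s2):
    # closed-form W_2 distance between one-dimensional Gaussians
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def kl_gauss(m1, s1, m2, s2):
    # relative entropy H(N(m1, s1^2) | N(m2, s2^2))
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2.0 * s2 ** 2) - 0.5

# mean shifts of the unit-variance Gaussian: T_2(2) holds with equality
for m in (0.5, 1.0, 3.0):
    lhs = w2_gauss(m, 1.0, 0.0, 1.0) ** 2   # = m^2
    rhs = 2.0 * kl_gauss(m, 1.0, 0.0, 1.0)  # = m^2
    assert abs(lhs - rhs) < 1e-12
```

Any strictly smaller constant in place of $2$ would therefore fail already for these measures.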
\subsection{Logarithmic-Sobolev inequality}
To introduce our result for \eqref{Bsde}, we first give the Logarithmic-Sobolev inequality for the auxiliary equation \eqref{Aueq}.
\begin{prp}\label{Prp(LS)} Assume that \textsc{\textbf{(H1)}} holds. Then for every $t\in[0,V_T]$, \begin{align}\label{1Prp(LS)}
\mathrm{Ent}_{\mathscr{L}_{\tilde Y_t}}(f^2)\leq C_{LS,\tilde Y_t}\int_\mathbb R|f'|^2\text{\rm{d}}\mathscr{L}_{\tilde Y_t} \end{align} holds for all $\mathscr{L}_{\tilde Y_t}$-integrable and differentiable function $f:\mathbb R\rightarrow\mathbb R$, where \begin{align*} C_{LS,\tilde Y_t}=2V_T(L_g+L_f(V_T-t))^2\text{\rm{e}}^{2L_f(V_T-t)}. \end{align*} \end{prp}
Here the entropy of $0\leq F\in L^1(\mu)$ with respect to the probability measure $\mu$ is defined as \begin{align*} \mathrm{Ent}_{\mu}(F)=\int_\mathbb R F\log F\text{\rm{d}}\mu-\int_\mathbb R F\text{\rm{d}}\mu\cdot\log\int_\mathbb R F\text{\rm{d}}\mu. \end{align*} When $\mu$ satisfies \eqref{1Prp(LS)} with $\mu$ in place of $\mathscr{L}_{\tilde Y_t}$, we shall say that $\mu$ satisfies the $LSI(C_{LS})$. This inequality, initiated by Gross \cite{Gross75}, has become a crucial tool in infinite dimensional stochastic analysis. It has been thoroughly investigated in the context of forward diffusions and is closely related to the $T_2$ transportation inequality (see for instance \cite{CGW10,GW06,Ledoux99,OV00,Wang01,Wang09}).
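For orientation, the classical one-dimensional Gaussian case can be checked numerically (an illustration only, not used in the proofs): the standard Gaussian measure satisfies the LSI with the classical Gross constant $2$. The sketch below evaluates both sides of the inequality for the test function $f(x)=x$ by midpoint quadrature.

```python
import math

def gauss_pdf(x):
    # density of the standard Gaussian N(0, 1)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def riemann(fn, a=-12.0, b=12.0, n=50000):
    # midpoint rule; Gaussian tails beyond |x| = 12 are negligible
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x       # test function
df = lambda x: 1.0    # its derivative

ef2 = riemann(lambda x: f(x) ** 2 * gauss_pdf(x))   # E[f^2] = 1
ent = riemann(lambda x: f(x) ** 2 * math.log(f(x) ** 2 + 1e-300) * gauss_pdf(x)) \
      - ef2 * math.log(ef2)
rhs = 2.0 * riemann(lambda x: df(x) ** 2 * gauss_pdf(x))  # LSI(2) right-hand side

assert ent <= rhs   # Ent(x^2) = 2 - gamma - log 2, roughly 0.7296, is below 2
```

The quadrature reproduces the closed-form value $\mathrm{Ent}_\gamma(x^2)=2-\gamma_{\rm Euler}-\log 2$ to within the grid error.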
To prove Proposition \ref{Prp(LS)}, we recall a result asserting that the Logarithmic-Sobolev inequality is stable under push-forward by Lipschitz maps (see \cite[Section 1]{CFJ09} or \cite[Lemma 6.1]{BT20}).
\begin{lem}\label{LS-Le}
Let $\psi:(\tilde\Omega,\|\cdot\|_\infty)\rightarrow\mathbb R^d$ be Lipschitz continuous, i.e. \begin{align*}
|\psi(\omega_1)-\psi(\omega_2)|\leq L_\psi\|\omega_1-\omega_2\|_\infty:=L_\psi\sup\limits_{r\in[0,V_T]}|\omega_1(r)-\omega_2(r)|,\ \ \omega_1, \omega_2\in\tilde\Omega, \end{align*} where $L_\psi>0$ is a constant. Then $\tilde\mathbb{P}\circ\psi^{-1}$ satisfies LSI($2V_TL_\psi^2$). \end{lem}
In the proof of Proposition \ref{Prp(Tr)} (see the Appendix), we show that $\tilde Y:\tilde\Omega\rightarrow C([0,V_T])$ is Lipschitz continuous with Lipschitzian constant $(L_g+L_fV_T)\text{\rm{e}}^{L_fV_T}$; by \eqref{add4PfPrp(Tr)}, this actually implies that for any $t\in[0,V_T]$, $\tilde Y_t:\tilde\Omega\rightarrow\mathbb R$ is also Lipschitz continuous with Lipschitzian constant $(L_g+L_f(V_T-t))\text{\rm{e}}^{L_f(V_T-t)}$. So, owing to Lemma \ref{LS-Le}, we get the desired assertion stated in Proposition \ref{Prp(LS)}.
Observe that by Theorem \ref{Th1}, the law of $Y_t$ is the same as that of $\tilde Y_{V_t}$ for every $t\in[0,T]$. Therefore, by Proposition \ref{Prp(LS)} we have the following Logarithmic-Sobolev inequality for \eqref{Bsde}.
\begin{thm}\label{Th(LS)} Assume that \textsc{\textbf{(H1)}} holds. Then for any $t\in[0,T]$, the law of $Y_t$ satisfies the LSI($C_{LS,Y_t}$), where \begin{align*} C_{LS,Y_t}=2V_T(L_g+L_f(V_T-V_t))^2\text{\rm{e}}^{2L_f(V_T-V_t)}. \end{align*} \end{thm}
\section{Appendix: proof of Proposition \ref{Prp(Tr)}}
In order to prove the proposition, we first present three lemmas that are needed later on. The first one concerns the stability of transportation inequalities under push-forward by Lipschitz maps, which is due to \cite[Lemma 2.1]{DGW04} (see, e.g., \cite[Lemma 4.1]{Rid17} and \cite[Corollary 2.2]{SYZ22} for a generalisation).
\begin{lem}\label{FI-Le2} Let $(E,d_E)$ and $(\overline{E},d_{\overline{E}})$ be two metric spaces. Assume that $\mu\in T_p(C)$ on $(E,d_E)$ and $\chi:(E,d_E)\rightarrow(\overline{E},d_{\overline{E}})$ is Lipschitz continuous with Lipschitzian constant $L_\chi$. Then $\mu\circ\chi^{-1}\in T_p(CL^2_\chi)$ on $(\overline{E},d_{\overline{E}})$. \end{lem}
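Lemma \ref{FI-Le2} can be illustrated with a one-dimensional Gaussian computation (an illustration only): push $N(0,1)$, which satisfies $T_2(2)$, through $\chi(x)=ax$ with Lipschitz constant $|a|$; the image $N(0,a^2)$ should then satisfy $T_2(2a^2)$, and for mean-shifted test measures this holds with equality.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    # relative entropy H(N(m1, s1^2) | N(m2, s2^2))
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2.0 * s2 ** 2) - 0.5

for a in (0.5, 2.0, 5.0):        # Lipschitz constant of chi(x) = a * x
    for m in (0.3, 1.0, 4.0):    # test measures: mean shifts of the image N(0, a^2)
        w2_sq = m ** 2                     # W_2(N(m, a^2), N(0, a^2))^2
        h = kl_gauss(m, a, 0.0, a)         # = m^2 / (2 a^2)
        assert w2_sq <= 2.0 * a ** 2 * h + 1e-12   # T_2(2 a^2), here with equality
```

The quadratic dependence on the Lipschitz constant in the lemma is thus not an artifact of the proof.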
Our second lemma below provides a sufficient condition, expressed in terms of exponential moments, for a probability measure to satisfy a transportation inequality of the form \eqref{1FI-Le3}.
\begin{lem}\label{FI-Le3} (\cite[Corollary 2.4]{BV05}) Let $(E,d_E)$ be a metric space, and let $\nu$ be a probability measure on $E$ and $p\geq 1$. Assume that there exist $x_0\in E$ and $\alpha>0$ such that \begin{align*} \int_E\exp\left\{\alpha d^{2p}_E(x_0,x)\right\}\text{\rm{d}}\nu(x)<\infty. \end{align*} Then \begin{align}\label{1FI-Le3}
\mathbb{W}_p(\mu,\nu)\leq C(H(\mu|\nu))^{\frac 1 {2p}},\ \ \mu\in\mathscr{P}(E) \end{align} holds with \begin{align*} C=2\inf_{x_0\in E,\alpha>0}\left[\frac 1{2\alpha}\left(1+\log\int_E\exp\left\{\alpha d^{2p}_E(x_0,x)\right\}\text{\rm{d}}\nu(x)\right)\right]^{\frac 1{2p}}<\infty. \end{align*} \end{lem}
Before stating the third lemma, we need some notation from \cite{BT20,EKTZ14}. For $t\in[0,V_T]$, let $\tilde\Omega^t$ be the shifted space of $\tilde\Omega$ given by \begin{align*} \tilde\Omega^t:=\{\gamma\in C([t,V_T]):\gamma(t)=0\}. \end{align*} Denote by $\tilde W^t$ and $\tilde\mathbb{P}^t$ the canonical process and the Wiener measure on $\tilde\Omega^t$, respectively, and by $(\tilde\mathscr{F}^t_s)_{s\in[t,V_T]}$ the filtration generated by $\tilde W^t$. For $\omega\in\tilde\Omega,t\in[0,V_T]$ and $\gamma\in\tilde\Omega^t$, define the concatenation $\omega\otimes_t\gamma\in\tilde\Omega$ by \begin{equation*} (\omega\otimes_t\gamma)(s):=\left\{ \begin{array}{ll} \omega(s),\ \ \ \ \ \ \ \ \ \ \ s\in[0,t),\\ \omega(t)+\gamma(s),\ \ s\in[t,V_T], \end{array} \right. \end{equation*} and for $\zeta:\tilde\Omega\times[0,V_T]\rightarrow\mathbb R$, define its shift $\zeta^{t,\omega}$ as follows: \begin{align*} \zeta^{t,\omega}:&\ \tilde\Omega^t\times[t,V_T]\rightarrow\mathbb R,\cr &(\gamma,s)\mapsto \zeta_s(\omega\otimes_t\gamma)=:\zeta^{t,\omega}_s(\gamma). \end{align*} As pointed out in \cite{BT20}, we have \begin{align*}
\tilde\mathbb E(\zeta|\tilde\mathscr{F}_t)(\omega)=\int_{\tilde\Omega^t}\zeta^{t,\omega}(\gamma)\tilde\mathbb{P}^t(\text{\rm{d}}\gamma)=:\tilde\mathbb E_{\tilde\mathbb{P}^t}\zeta^{t,\omega}. \end{align*}
The following lemma gives a result which is a distribution dependent version of \cite[Lemma 2.2]{BT20}. The proof is pretty similar to that of \cite[Lemma 2.2]{BT20} and we omit it here.
\begin{lem}\label{FI-Le1} Let $(\tilde{Y},\tilde{Z})\in\widetilde{\Upsilon}$ be the solution to \eqref{Aueq}. Then for any $t\in[0,V_T]$, there exists a $\tilde\mathbb{P}$-zero set $N\subset\tilde\Omega$ such that for $\omega\in N^c$, \begin{align*} \tilde{Y}_s^{t,\omega}=&g(\tilde{W}^{t,\omega}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}}) +\int_s^{V_T}f(U_r,\tilde{W}^{t,\omega}_r,\tilde{Y}^{t,\omega}_r,\tilde{Z}^{t,\omega}_r,\mathscr{L}_{(\tilde{W}_r,\tilde{Y}_r,\tilde{Z}_r)})\text{\rm{d}} r\cr &-\int_s^{V_T}\tilde{Z}^{t,\omega}_r\text{\rm{d}}\tilde{W}^t_r, \ \ \tilde\mathbb{P}^t \textit{-}a.s., \ s\in[t,V_T] \end{align*} and $\tilde{Y}_t^{t,\omega}=\tilde{Y}_t(\omega), \tilde\mathbb{P}^t$-a.s. \end{lem}
We are now ready to prove Proposition \ref{Prp(Tr)}.
\emph{Proof of Proposition \ref{Prp(Tr)}.} Owing to \textsc{\textbf{(H1)}}, we know that there exists a unique solution $(\tilde{Y},\tilde{Z})\in\tilde{\Upsilon}$ to \eqref{Aueq}. Moreover, it easily follows that $\tilde{Y}$ has $\tilde\mathbb{P}$-almost surely continuous paths and $\tilde{Z}$ is square integrable. The rest of the proof is divided into two steps.
\textsl{Step 1. Transportation inequality for $\tilde Y$}. Using arguments from the proofs of \cite[Proposition 5.4]{EKTZ14} and \cite[Theorem 1.3]{BT20}, we intend to show that $\tilde Y:\tilde\Omega\rightarrow\mathbb R$ is Lipschitz continuous.
Let $t\in[0,V_T]$. According to Lemma \ref{FI-Le1}, there exists a $\tilde\mathbb{P}$-zero set $N\subset\tilde\Omega$ such that for $\omega\in N^c$, \begin{align}\label{1PfPrp(Tr)} \tilde{Y}_t^{t,\omega}=\tilde{Y}_t(\omega),\ \ \tilde\mathbb{P}^t\textit{-}a.s. \end{align} and \begin{align}\label{2PfPrp(Tr)} \tilde{Y}_s^{t,\omega}=&g(\tilde{W}^{t,\omega}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}}) +\int_s^{V_T}f(U_r,\tilde{W}^{t,\omega}_r,\tilde{Y}^{t,\omega}_r,\tilde{Z}^{t,\omega}_r,\mathscr{L}_{(\tilde{W}_r,\tilde{Y}_r,\tilde{Z}_r)})\text{\rm{d}} r\cr &-\int_s^{V_T}\tilde{Z}^{t,\omega}_r\text{\rm{d}}\tilde{W}^t_r, \ \ \tilde\mathbb{P}^t\textit{-}a.s., \ s\in[t,V_T]. \end{align} Then for any $\omega_1,\omega_2\in N^c$, by \eqref{2PfPrp(Tr)} and \textsc{\textbf{(H1)}} we derive \begin{align*} &\tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2}\cr =&g(\tilde{W}^{t,\omega_1}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})-g(\tilde{W}^{t,\omega_2}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})\cr &+\int_s^{V_T}\left(f(U_r,\tilde{W}^{t,\omega_1}_r,\tilde{Y}^{t,\omega_1}_r,\tilde{Z}^{t,\omega_1}_r,\mathscr{L}_{(\tilde{W}_r,\tilde{Y}_r,\tilde{Z}_r)}) -f(U_r,\tilde{W}^{t,\omega_2}_r,\tilde{Y}^{t,\omega_2}_r,\tilde{Z}^{t,\omega_2}_r,\mathscr{L}_{(\tilde{W}_r,\tilde{Y}_r,\tilde{Z}_r)})\right) \text{\rm{d}} r\cr &-\int_s^{V_T}\left(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r\right)\text{\rm{d}}\tilde{W}^t_r\cr =&g(\tilde{W}^{t,\omega_1}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})-g(\tilde{W}^{t,\omega_2}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})\cr &+\int_s^{V_T}\left[\alpha_r(\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r)+\beta_r(\tilde{Y}^{t,\omega_1}_r-\tilde{Y}^{t,\omega_2}_r)+\rho_r(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r) \right]\text{\rm{d}} r\cr &-\int_s^{V_T}\left(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r\right)\text{\rm{d}}\tilde{W}^t_r,\ \ \tilde\mathbb{P}^t\textit{-}a.s., \ s\in[t,V_T], \end{align*} where \begin{align*} 
\alpha_r&:=\int_0^1\partial_xf(U_r,\tilde{W}^{t,\omega_2}_r+\theta(\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r),\tilde{Y}^{t,\omega_1}_r,\tilde{Z}^{t,\omega_1}_r,\mathscr{L}_{(\tilde{W}_r,\tilde{Y}_r,\tilde{Z}_r)})\text{\rm{d}}\theta,\cr \beta_r&:=\int_0^1\partial_yf(U_r,\tilde{W}^{t,\omega_2}_r,\tilde{Y}^{t,\omega_2}_r+\theta(\tilde{Y}^{t,\omega_1}_r-\tilde{Y}^{t,\omega_2}_r),\tilde{Z}^{t,\omega_1}_r,\mathscr{L}_{(\tilde{W}_r,\tilde{Y}_r,\tilde{Z}_r)})\text{\rm{d}}\theta,\cr \rho_r&:=\int_0^1\partial_zf(U_r,\tilde{W}^{t,\omega_2}_r,\tilde{Y}^{t,\omega_2}_r,\tilde{Z}^{t,\omega_2}_r+\theta(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r),\mathscr{L}_{(\tilde{W}_r,\tilde{Y}_r,\tilde{Z}_r)})\text{\rm{d}}\theta. \end{align*}
Applying the product rule to $\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r}(\tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2})$ yields a more convenient representation of $\tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2}$. Indeed, for $s\in(t,V_T]$, \begin{align*} &\text{\rm{d}}\left[\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r}(\tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2})\right]\cr =&\bigg[(\tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2})\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r}\beta_s\cr &-\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r} \left(\alpha_s(\tilde{W}^{t,\omega_1}_s-\tilde{W}^{t,\omega_2}_s)+\beta_s(\tilde{Y}^{t,\omega_1}_s-\tilde{Y}^{t,\omega_2}_s)+\rho_s(\tilde{Z}^{t,\omega_1}_s-\tilde{Z}^{t,\omega_2}_s)\right)\bigg]\text{\rm{d}} s\cr &+\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r}\left(\tilde{Z}^{t,\omega_1}_s-\tilde{Z}^{t,\omega_2}_s\right)\text{\rm{d}}\tilde{W}^t_s\cr =&\left[ -\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r} \left(\alpha_s(\tilde{W}^{t,\omega_1}_s-\tilde{W}^{t,\omega_2}_s)+\rho_s(\tilde{Z}^{t,\omega_1}_s-\tilde{Z}^{t,\omega_2}_s)\right)\right]\text{\rm{d}} s +\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r}\left(\tilde{Z}^{t,\omega_1}_s-\tilde{Z}^{t,\omega_2}_s\right)\text{\rm{d}}\tilde{W}^t_s. \end{align*} Then, integrating from $s$ to $V_T$ we get \begin{align*} &\text{\rm{e}}^{\int_t^{V_T}\beta_r\text{\rm{d}} r}(\tilde{Y}_{V_T}^{t,\omega_1}-\tilde{Y}_{V_T}^{t,\omega_2})-\text{\rm{e}}^{\int_t^s\beta_r\text{\rm{d}} r}(\tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2})\cr =&-\int_s^{V_T}\text{\rm{e}}^{\int_t^r\beta_\theta\text{\rm{d}} \theta}\left(\alpha_r(\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r)+\rho_r(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r)\right)\text{\rm{d}} r\cr &+\int_s^{V_T}\text{\rm{e}}^{\int_t^r\beta_\theta\text{\rm{d}}\theta}\left(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r\right)\text{\rm{d}}\tilde{W}^t_r,\ \ \tilde\mathbb{P}^t\textit{-}a.s.. 
\end{align*} Observing that $\tilde{Y}_{V_T}^{t,\omega_i}=g(\tilde{W}^{t,\omega_i}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}}),i=1,2$, we thus obtain \begin{align}\label{3PfPrp(Tr)} \tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2} =&\text{\rm{e}}^{\int_s^{V_T}\beta_r\text{\rm{d}} r}\left(g(\tilde{W}^{t,\omega_1}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})-g(\tilde{W}^{t,\omega_2}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})\right)\cr &+\int_s^{V_T}\text{\rm{e}}^{\int_s^r\beta_\theta\text{\rm{d}} \theta}\left(\alpha_r(\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r)+\rho_r(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r)\right)\text{\rm{d}} r\cr &-\int_s^{V_T}\text{\rm{e}}^{\int_s^r\beta_\theta\text{\rm{d}}\theta}\left(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r\right)\text{\rm{d}}\tilde{W}^t_r\cr =&\text{\rm{e}}^{\int_s^{V_T}\beta_r\text{\rm{d}} r}\left(g(\tilde{W}^{t,\omega_1}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})-g(\tilde{W}^{t,\omega_2}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})\right)\nonumber\\ &+\int_s^{V_T}\text{\rm{e}}^{\int_s^r\beta_\theta\text{\rm{d}} \theta}\alpha_r(\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r)\text{\rm{d}} r -\int_s^{V_T}\text{\rm{e}}^{\int_s^r\beta_\theta\text{\rm{d}}\theta}\left(\tilde{Z}^{t,\omega_1}_r-\tilde{Z}^{t,\omega_2}_r\right)\text{\rm{d}}\overline{W}^t_r, \end{align} where $\overline{W}^t_r:=\tilde W^t_r-\int_t^r\rho_\theta\text{\rm{d}}\theta$.\\ Now, for $s\in[t,V_T]$, we set \begin{align*}
R_s=\exp\left\{\int_t^s\rho_r\text{\rm{d}}\tilde W^t_r-\frac 1 2 \int_t^s|\rho_r|^2\text{\rm{d}} r\right\}. \end{align*} By \textsc{\textbf{(H1)}}, it is easy to verify that the Novikov condition holds, which implies that $\overline{W}^t_\cdot$ is a Brownian motion under the probability measure $R_{V_T}\tilde\mathbb{P}^t$ by the Girsanov theorem. Then, conditioning on $\tilde\mathscr{F}^t_s$ under $R_{V_T}\tilde\mathbb{P}^t$ on both sides of \eqref{3PfPrp(Tr)} yields \begin{align}\label{4PfPrp(Tr)}
&\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}\left(\tilde{Y}_s^{t,\omega_1}-\tilde{Y}_s^{t,\omega_2}|\tilde\mathscr{F}^t_s\right)\cr
=&\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}\left[\text{\rm{e}}^{\int_s^{V_T}\beta_r\text{\rm{d}} r}\left(g(\tilde{W}^{t,\omega_1}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})-g(\tilde{W}^{t,\omega_2}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})\right)|\tilde\mathscr{F}^t_s\right]\nonumber\\
&+\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}\left[\int_s^{V_T}\text{\rm{e}}^{\int_s^r\beta_\theta\text{\rm{d}} \theta}\alpha_r(\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r)\text{\rm{d}} r|\tilde\mathscr{F}^t_s\right],\ \ \tilde\mathbb{P}^t\textit{-}a.s., \ s\in[t,V_T]. \end{align} Consequently, we have \begin{align*} \tilde{Y}_t^{t,\omega_1}-\tilde{Y}_t^{t,\omega_2} =&\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}\left[\text{\rm{e}}^{\int_t^{V_T}\beta_r\text{\rm{d}} r}\left(g(\tilde{W}^{t,\omega_1}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})-g(\tilde{W}^{t,\omega_2}_{V_T},\mathscr{L}_{\tilde{W}_{V_T}})\right)\right]\cr &+\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}\left[\int_t^{V_T}\text{\rm{e}}^{\int_t^r\beta_\theta\text{\rm{d}} \theta}\alpha_r(\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r)\text{\rm{d}} r\right],\ \ \tilde\mathbb{P}^t\textit{-}a.s.. \end{align*} This allows us to deduce from \eqref{1PfPrp(Tr)} and \textsc{\textbf{(H1)}} that \begin{align}\label{add4PfPrp(Tr)}
&|\tilde{Y}_t(\omega_1)-\tilde{Y}_t(\omega_2)|=|\tilde{Y}_t^{t,\omega_1}-\tilde{Y}_t^{t,\omega_2}|\cr
\leq&\text{\rm{e}}^{L_f(V_T-t)}\left(L_g\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}|\tilde{W}^{t,\omega_1}_{V_T}-\tilde{W}^{t,\omega_2}_{V_T}|
+L_f\int_t^{V_T}\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}|\tilde{W}^{t,\omega_1}_r-\tilde{W}^{t,\omega_2}_r|\text{\rm{d}} r\right)\cr
=&\text{\rm{e}}^{L_f(V_T-t)}\bigg(L_g\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}|(\omega_1\otimes_t\cdot)(V_T)-(\omega_2\otimes_t\cdot)(V_T)|\cr
&\ \ \ \ \ \ \ \ \ \ \ \ \ +L_f\int_t^{V_T}\tilde\mathbb E_{R_{V_T}\tilde\mathbb{P}^t}|(\omega_1\otimes_t\cdot)(r)-(\omega_2\otimes_t\cdot)(r)|\text{\rm{d}} r\bigg)\cr
\leq&\left(L_g+L_f(V_T-t)\right)\text{\rm{e}}^{L_f(V_T-t)}\sup\limits_{0\leq r\leq t}|\omega_1(r)-\omega_2(r)|, \end{align} where the last inequality follows from the definition of the concatenations $\omega_i\otimes_t\cdot, i=1,2$. Noting that $t\in[0,V_T]$ and $\omega_1,\omega_2\in N^c$ are arbitrary and $\tilde\mathbb{P}(N)=0$, we conclude that $\tilde Y:\tilde\Omega\rightarrow C([0,V_T])$ is Lipschitz continuous with Lipschitzian constant $(L_g+L_fV_T)\text{\rm{e}}^{L_fV_T}$. Therefore, taking into account the fact that the law of the Wiener process satisfies $T_2(2)$ (see \cite[Theorem 3.1]{FU04}), we obtain the first assertion due to Lemma \ref{FI-Le2}.
\textsl{Step 2. Transportation inequality for $\tilde Z$}. We first suppose that $g(x,\mu)$ and $f(t,x,y,z,\nu)$ are differentiable with respect to $x,y$ and $z$. Then $(\tilde Y,\tilde Z)$ is differentiable, and moreover $(\nabla\tilde{Y},\nabla\tilde{Z})$ solves the following linear DDBSDE \begin{align*} \nabla\tilde{Y}_t=&\nabla_xg(\widetilde{W}_{V_T},\mathscr{L}_{\widetilde{W}_{V_T}})-\int_t^{V_T}\nabla\widetilde{Z}_s\text{\rm{d}}\widetilde{W}_s\cr &+\int_t^{V_T}\Big[\nabla_xf(U_s,\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s)}) +\nabla_yf(U_s,\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s)})\nabla\tilde{Y}_s\cr &\qquad\qquad+\nabla_zf(U_s,\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s)})\nabla\tilde{Z}_s\Big]\text{\rm{d}} s. \end{align*} Along the same lines as in \eqref{4PfPrp(Tr)}, applying the product $\text{\rm{e}}^{\int_0^t\nabla_yf(U_s,\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s)})\text{\rm{d}} s}\nabla\tilde{Y}_t$ and the Girsanov theorem we deduce that there exists some probability $\tilde\mathbb Q$ under which \begin{align*} \widetilde{W}_\cdot-\int_0^\cdot\nabla_zf(U_s,\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s)})\text{\rm{d}} s \end{align*} is a Brownian motion, and $\nabla\tilde{Y}_t$ has the representation \begin{align*} \nabla\tilde{Y}_t=&\tilde\mathbb E_{\tilde\mathbb Q}\bigg[e^{\int_t^{V_T}\nabla_yf(U_s,\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s)})\text{\rm{d}} s}\nabla_xg(\widetilde{W}_{V_T},\mathscr{L}_{\widetilde{W}_{V_T}})\cr
&+\int_t^{V_T}e^{\int_t^s\nabla_yf(U_r,\widetilde{W}_r,\widetilde{Y}_r,\widetilde{Z}_r,\mathscr{L}_{(\widetilde{W}_r,\widetilde{Y}_r,\widetilde{Z}_r)})\text{\rm{d}} r}\nabla_xf(U_s,\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s,\mathscr{L}_{(\widetilde{W}_s,\widetilde{Y}_s,\widetilde{Z}_s)})\text{\rm{d}} s\big|\tilde\mathscr{F}_t\bigg]. \end{align*} Consequently, it is easy to see that \begin{align}\label{5PfPrp(Tr)}
|\nabla\tilde{Y}_t|\leq\text{\rm{e}}^{L_f(V_T-t)}(L_g+L_f(V_T-t)). \end{align} On the other hand, it is classical to show that there exists a version of $(\tilde Z_t)_{t\in[0,V_T]}$ given by $(\nabla\tilde{Y}_t)_{t\in[0,V_T]}$ (see, e.g., \cite[Remark 9.1]{Li18}). Hence, combining this with \eqref{5PfPrp(Tr)} leads to \begin{align}\label{6PfPrp(Tr)}
|\tilde Z_t|\leq\text{\rm{e}}^{L_f(V_T-t)}(L_g+L_f(V_T-t)). \end{align} When $g$ and $f$ are not differentiable, we can still obtain \eqref{6PfPrp(Tr)} via a standard approximation argument together with stability results (see Remark \ref{Re1} (i)). Therefore, with the help of Lemma \ref{FI-Le3}, we derive the second assertion. \qed
\textbf{Acknowledgement}
X. Fan is partially supported by the Natural Science Foundation of Anhui Province (No. 2008085MA10) and the National Natural Science Foundation of China (No. 11871076, 12071003).
\end{document} |
\begin{document}
\begin{center} {\usefont{OT1}{phv}{b}{n}\selectfont\Large{Algorithmically-Consistent Deep Learning Frameworks for
Structural Topology Optimization}}
{\usefont{OT1}{phv}{}{}\selectfont\normalsize {Jaydeep Rade$^1$, Aditya Balu$^1$, Ethan Herron$^1$, Jay Pathak$^2$, Rishikesh Ranade$^2$, \\ Soumik Sarkar$^1$, Adarsh Krishnamurthy$^1$* \blfootnote{Paper accepted in Engineering Applications of AI, 2021 \url{https://doi.org/10.1016/j.engappai.2021.104483}.\\ Correspondence to \url{[email protected]}}}}
{\usefont{OT1}{phv}{}{}\selectfont\normalsize {$^1$ Iowa State University\\ $^2$ Ansys Corporation\\ }} \end{center}
\section*{Abstract} Topology optimization has emerged as a popular approach to refine a component's design and increase its performance. However, current state-of-the-art topology optimization frameworks are compute-intensive, mainly due to multiple finite element analysis iterations required to evaluate the component's performance during the optimization process. Recently, machine learning (ML)-based topology optimization methods have been explored by researchers to alleviate this issue. However, previous ML approaches have mainly been demonstrated on simple two-dimensional applications with low-resolution geometry. Further, current methods are based on a single ML model for \emph{end-to-end} prediction, which requires a large dataset for training. These challenges make it non-trivial to extend current approaches to higher resolutions. In this paper, we develop deep learning-based frameworks consistent with traditional topology optimization algorithms for 3D topology optimization with a reasonably fine (high) resolution. We achieve this by training multiple networks, each learning a different step of the overall topology optimization methodology, making the framework more consistent with the topology optimization algorithm. We demonstrate the application of our framework on both 2D and 3D geometries. The results show that our approach predicts the final optimized design better (5.76$\times$ reduction in total compliance MSE in 2D; 2.03$\times$ reduction in total compliance MSE in 3D) than current ML-based topology optimization methods.
\subsection*{Keywords}
Topology Optimization $|$ Deep Learning $|$ Sequence Models $|$ Physics-Consistent Learning
\section{Introduction} Over the past few decades, there has been an increased emphasis on designing components with optimal performance, especially using topology optimization~\citep{orme2017designing,liu2016survey}. Topology optimization (a subset of design optimization methods), initially developed by \citet{bendsoe1988generating}, refers to a set of numerical design optimization methods developed to find appropriate material distribution in a prescribed design domain to obtain geometric shapes with optimal performances. Here, the performance could be any physical phenomenon such as structural strength (or mechanical design), heat transfer, fluid flow, acoustic properties, electromagnetic properties, optical properties, etc.~\citep{sigmund2013topology}. The domain refers to a 2D or 3D volumetric mesh representation of the CAD geometry, typically used for finite element analysis. Among the different topology optimization methods, some of the most prominent approaches are solid isotropic material with penalization (SIMP)~\citep{bendsoe1989optimal}, level-sets~\citep{wang2003level}, and evolutionary optimization~\citep{das2011optimal,xie1993simple}. These approaches are used for several topological design problems where structural, acoustic, or optical performance needs to be optimal~\citep{eschenauer2001topology,sigmund2013topology} while removing the material to satisfy a total material (or volume) constraint.
\begin{figure*}
\caption{\textbf{Overview:} The proposed deep learning-based topology optimization framework. The input to this framework is the compliance of the initial geometry along with the target volume fraction. Unlike SIMP, the DLTO framework predicts the optimal density of the geometry without any requirement of iterative finite element evaluations. The predicted optimal density of the geometry is then converted into triangular surface mesh representation using the marching cubes algorithm.}
\label{fig:overview}
\end{figure*}
One of the main challenges in performing topology optimization is the high computational cost associated with it. The performance measure that is being optimized needs to be computed after each iteration of the optimization process. These performance measures are usually obtained from physics simulations (often using numerical solution approaches, such as finite element analysis) that are typically compute-intensive. Due to this computational challenge, performing topology optimization for a fine (high resolution) topological mesh could take a few hours to even days. This computational challenge has inspired several researchers to develop deep learning-based topology optimization to reduce or eliminate the need for numerical simulations.
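The iteration structure behind this cost can be made concrete with a sketch of an optimality-criteria density update of the kind used inside SIMP-style loops: each pass would follow a finite element solve (the expensive step, elided here), compute per-element compliance sensitivities, and rescale the densities through a bisection on the Lagrange multiplier so that the volume constraint holds. This is a generic illustration, not the implementation used in this paper; the function name, parameters, and the toy sensitivity values are all hypothetical.

```python
def oc_update(rho, dc, volfrac, move=0.2, eta=0.5):
    """One optimality-criteria density update (generic sketch).
    rho: current element densities; dc: compliance sensitivities (negative)."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-6:   # bisection on the Lagrange multiplier
        lmid = 0.5 * (l1 + l2)
        new = [min(1.0, min(r + move, max(1e-3, max(r - move, r * (-d / lmid) ** eta))))
               for r, d in zip(rho, dc)]
        if sum(new) / len(new) > volfrac:  # too much material: raise the multiplier
            l1 = lmid
        else:
            l2 = lmid
    return new

# toy 6-element example: elements with larger |dc| (more load-bearing) keep material
rho = oc_update([0.5] * 6, [-5.0, -4.0, -3.0, -0.5, -0.2, -0.1], volfrac=0.5)
assert abs(sum(rho) / len(rho) - 0.5) < 1e-3   # volume constraint met
```

In a full SIMP loop this update alternates with a finite element solve until the densities converge, which is precisely the repetition the learning-based frameworks aim to avoid.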
Although deep learning has many diverse applications and has demonstrated exceptional results in several real-world scenarios, our focus in this paper is the recent application of deep learning to learn the underlying physics of the system. There has been an increased interest in learning physical phenomena with neural networks to reduce the computational requirements and achieve better performance with very little or no data~\citep{pakravan2020solving,zhang2019quantifying,shah2019encoding,teichert2019machine_1,lu2019deepxde,jagtap2019adaptive,pan2019physics,raissi2017physics,bhatnagar2019prediction}. A popular approach relies on modifying the loss function to ensure that a set of physical constraints (boundary conditions) are satisfied. This approach has been especially successful in using deep learning to solve partial differential equations such as Burgers' equation, the Navier-Stokes equations, and the Cahn-Hilliard equation~\citep{shah2019encoding, singh2018physics, lu2019deepxde, jagtap2019adaptive, zhang2019quantifying, pan2019physics}. These approaches help the framework learn about the physical phenomena and make the learning consistent with the underlying physics. At the same time, better performance has been achieved by aligning the neural network architecture with the learned phenomena~\cite{xu2020can}. With this motivation, we propose an algorithmically consistent deep learning framework for structural topology optimization.
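As a toy instance of this loss-modification idea (our own minimal illustration, unrelated to the cited frameworks), one can solve $u''(x)=-\pi^2\sin(\pi x)$ with $u(0)=u(1)=0$ by minimizing a combined objective: the squared PDE residual at collocation points plus a weighted penalty on the boundary conditions. Using a polynomial trial space makes the minimization a linear least-squares problem:

```python
import numpy as np

# Collocation points and a polynomial trial space u(x) = sum_k c_k x^k
xs = np.linspace(0.0, 1.0, 64)
deg = 9
phi   = np.stack([xs ** k for k in range(deg + 1)], axis=1)               # basis values
d2phi = np.stack([k * (k - 1) * xs ** max(k - 2, 0) for k in range(deg + 1)], axis=1)

# Physics rows: u''(x) = -pi^2 sin(pi x) at the collocation points
f = -np.pi ** 2 * np.sin(np.pi * xs)
# Boundary-condition rows, weighted as a penalty: u(0) = u(1) = 0
w = 100.0
bc = w * np.stack([np.zeros(deg + 1), np.ones(deg + 1)], axis=0)
bc[0, 0] = w   # at x = 0 only the constant basis term is nonzero
A = np.vstack([d2phi, bc])
b = np.concatenate([f, [0.0, 0.0]])

c, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimize residual + boundary penalty
u = phi @ c
err = np.max(np.abs(u - np.sin(np.pi * xs)))   # exact solution is sin(pi x)
assert err < 1e-2
```

Neural-network variants replace the polynomial basis with a network and the least-squares solve with gradient descent, but the composite "physics residual plus boundary penalty" loss is the same idea.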
A deep learning framework for structural topology optimization needs to (i) learn the underlying physics for computing the compliance, (ii) learn the topological changes that occur during the optimization process, and (iii) produce results that respect the different geometric constraints and boundary conditions imposed on the domain. To simplify the problem, we first discuss three essential elements that form the backbone of any data-driven approach: (i) the data representation, (ii) training algorithms, and (iii) the network architecture. As mentioned before, aligning the deep-learning framework with existing algorithms can provide better results and improved performance. For the framework to be algorithmically consistent, each of the three elements must be consistent with the classical structural topology optimization algorithm. In this particular instance, we focus on topology optimization using the solid isotropic material with penalization (SIMP~\citep{bendsoe1989optimal}) algorithm. Thus, our proposed framework is algorithmically consistent with SIMP topology optimization.
First, we align the data representation for the specific problem. Structural topology optimization is an iterative process where the design is modified through several iterations until the objective function (total compliance) converges to an optimal value. Further, each element's compliance is used in the sensitivity analysis for updating the element densities at each iteration. Thus, the element compliance is a valid and consistent representation of the geometry compared to other representations (such as voxel densities, strains, etc.) used in current deep learning approaches. Therefore, in the proposed framework, we use the element compliance as the CAD model representation of the geometry, loading, and boundary conditions (as shown in \coloredref{Figure}{fig:overview}). Note that, unlike the use of strain tensor and displacement tensor as proposed by \citet{zhang2019deep}, this representation is compact, leading to better scaling at higher resolutions.
Next, the training and inference pipelines need to be consistent with the classical structural topology optimization pipeline. In our experiments, we observe a non-trivial transformation of the densities from the first iteration to the final converged one. Due to this non-trivial transformation, learning the mapping between the initial topology and the final optimized topology is not a trivial \emph{one-step} learning task. Therefore, in addition to taking the initial compliance and target volume fraction as input and the final optimal density as the target, we use the intermediate densities obtained during data generation to enhance the performance of our proposed framework.
Finally, the framework should simultaneously satisfy two constraints for structural topology optimization: the topological constraint of matching the target volume (often prescribed as a volume fraction or percentage of volume removed) and the physical constraint of minimizing the compliance. While computing the volume fraction is trivial, calculating the compliance involves performing a finite element solve. To avoid this computation, we propose developing a surrogate model for learning the mapping of a given intermediate density to its corresponding compliance.
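After finite element discretization, the compliance referred to above reduces to $C=\mathbf{F}^\top\mathbf{U}$ with $\mathbf{K}\mathbf{U}=\mathbf{F}$, which is exactly the quantity whose repeated evaluation the surrogate model is meant to avoid. A minimal sketch on a toy one-dimensional spring chain (the element stiffnesses are illustrative values, not from the paper):

```python
import numpy as np

def assemble_k(stiff):
    # global stiffness of a fixed-free chain of 1D spring elements
    n = len(stiff)
    K = np.zeros((n, n))
    for e, k in enumerate(stiff):
        if e == 0:
            K[0, 0] += k   # first element connects the clamped node to node 0
        else:
            K[e - 1:e + 1, e - 1:e + 1] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

stiff = [2.0, 1.0, 1.0]              # hypothetical element stiffnesses
F = np.array([0.0, 0.0, 1.0])        # unit load at the free tip
U = np.linalg.solve(assemble_k(stiff), F)   # the expensive step at scale
C = float(F @ U)                     # compliance C = F^T U = U^T K U

# springs in series: tip displacement (= compliance here) equals sum of 1/k_e
assert abs(C - (1 / 2.0 + 1 / 1.0 + 1 / 1.0)) < 1e-12
```

For realistic 3D meshes the linear solve dominates the cost, which is why a learned mapping from intermediate densities to compliance can pay off.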
In summary, we have developed two algorithmically consistent frameworks for structural topology optimization, namely, the Density Sequence (DS) prediction and the Coupled Density and Compliance Sequence (CDCS) prediction. The first approach uses a sequential prediction model to transform the densities without compliance. In the second approach, we add intermediate compliance to train a compliance-predicting surrogate model to improve results. We compare the proposed approaches with the baseline method, Direct Optimal Density (DOD) prediction. DOD prediction is an end-to-end learning approach where the final optimal density is directly predicted using just the initial compliance and the target volume fraction. The DS framework involves two convolutional neural networks (CNNs) for obtaining the final prediction, while the CDCS uses three CNNs iteratively during inference to predict the final optimal density.
In this paper, we develop a scalable, algorithmically consistent, deep-learning framework for 2D and 3D structural topology optimization. The main contributions are: \begin{itemize}[]
\item Two novel algorithmically consistent deep learning based structural topology optimization frameworks.
\item An algorithmically consistent representation for topology optimization using the initial compliance of the design and the target volume fraction.
\item Using intermediate densities and compliance data from the different optimization iterations obtained while generating the dataset to enhance the performance of our framework.
\item Performance comparison of our proposed networks on both 2D and 3D geometries. We also validate and compare the performance of our approaches with the baseline SIMP-based topology optimization results. \end{itemize}
The rest of the paper is arranged as follows. First, we discuss the formulation and related works to this paper in \secref{sec:formulation}. Next, we explain the deep learning methods proposed in our paper in \secref{sec:PCDL}. We cover the details of the data generation process in \secref{sec:data_gen}, which is used as training data for our proposed approaches. In \secref{sec:results}, we show the statistical results from our experiments and demonstrate the performance of our proposed methods on both 2D and 3D structural topology optimization. Finally, we conclude this work with some future directions of research in \secref{sec:conclusions}.
\section{Formulation and Related Work}\label{sec:formulation}
\subsection{Formulation}
Formally, topology optimization can be formulated as: \begin{equation}
\begin{aligned}
\text{minimize:}\; C(U)&\\
\text{subject to:}\; \mathbf{K}\mathbf{U} &= \mathbf{F}\\
g_i(\mathbf{U}) &\leq 0.
\end{aligned} \end{equation} \noindent Here, $C(\mathbf{U})$ refers to the objective function of topology optimization. In the case of structural topology optimization, this is the compliance of the system, \begin{equation} C = \int_{\Omega \in \mathcal{S}} bu \; d\Omega + \int_{\tau\in d\mathcal{S}} tu\; d\tau\; \end{equation} where $b$ represents the body forces, $u$ the displacements, $t$ the surface traction, and $\Omega$ and $\tau$ are the volume and surface representations of the solid. The constraint $g_i(\mathbf{U})$ includes a volume fraction constraint, $g_i = (v/v_0) - v_f$. Since this optimization is performed at every element of the mesh, the combinatorial optimization is computationally intractable. An alternative solution is to represent the topology optimization equations as a function of the density $\rho$ of every element. \begin{equation}\label{eqn:TOFormulation} \begin{aligned}
\text{minimize:}\; C(\rho,\mathbf{U})&\\
\text{subject to:}\; \mathbf{K}(\rho)\mathbf{U} &= \mathbf{F}\\
g_i(\rho,\mathbf{U}) &\leq 0\\
0 < \rho &\leq 1 \end{aligned} \end{equation}
\begin{algorithm}[t!]
\caption{SIMP topology optimization~\citep{bendsoe1989optimal}}\label{Alg:TO}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$\mathcal{S}$, L, BC, $V_0$}
\Output{$D_{fin}$(set of all densities for each element, $\rho$)}
\text{Load design; apply loads and boundary conditions}\\
\text{Initialize: $D_0 \rightarrow V_0/\int_\Omega d\Omega$}\\
\text{Initialize: $ch = \infty$}\\
\While{$ch > threshold$}
{
Assemble global stiffness matrix $\mathbf{K}$ for element stiffness matrix $k_e(\rho_e)$\\
Solve for $\mathbf{U}$, using $\mathbf{K}$, loads (L) and boundary conditions (BC)\\
Compute objective function,
$\mathbf{C} = \mathbf{U}^T\mathbf{K}\mathbf{U} = \sum_{e=1}^{N} \rho^{p} u^T k_e u $ \\
Perform sensitivity analysis, $\frac{\partial{c}}{\partial{\rho_e}} = -p \rho^{(p-1)} u^T k_e u$\\
Update the densities ($D_i$) using an optimality criterion\\
$ch = ||D_i - D_{i-1}||$
} \end{algorithm}
This design problem is relaxed using the SIMP algorithm, where the stiffness for each element is described as $E= E_{min} + \rho^p (E_{max} - E_{min})$. Here, $p$ is the parameter used to penalize intermediate element densities, pushing each element toward $0$ or $1$. A typical SIMP-based topology optimization pipeline is shown in \coloredref{Algorithm}{Alg:TO}. While this is a naive implementation, more sophisticated methods for structural topology optimization, such as level-set methods~\citep{wang2003level} and evolutionary optimization methods~\citep{das2011optimal,xie1993simple}, are also popularly employed. Despite several advancements in structural topology optimization, a common challenge in all these approaches is that they require several iterations of finite element queries to converge on the final density distribution. Different optimization methods result in different, yet comparable, optimal solutions, alluding to the fact that multiple optimal solutions exist for the same topology optimization problem. Deep learning-based methods are a natural fit for accelerating this task, which has been explored previously, as described below.
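The SIMP interpolation and the per-element sensitivity expression from \coloredref{Algorithm}{Alg:TO} can be sketched in a few lines. The following is a minimal NumPy illustration; the default $E_{min}$, $E_{max}$, and $p$ values are placeholders chosen for this sketch, not settings prescribed by the method description above.

```python
import numpy as np

def simp_youngs_modulus(rho, E_min=1e-9, E_max=1.0, p=3):
    """SIMP interpolation: E = E_min + rho^p (E_max - E_min)."""
    return E_min + rho**p * (E_max - E_min)

def element_sensitivity(rho, u, k_e, p=3):
    """Sensitivity of compliance w.r.t. one element density:
    dc/drho_e = -p rho^(p-1) u^T k_e u."""
    return -p * rho**(p - 1) * (u @ k_e @ u)

# Penalization maps intermediate densities to disproportionately low stiffness:
rho = np.array([0.2, 0.5, 1.0])
E = simp_youngs_modulus(rho)
```

With $p=3$, a half-dense element contributes only about an eighth of the full stiffness, which is what drives the optimizer toward near-binary designs.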
\begin{figure*}
\caption{\textbf{Topology optimization pipeline:} The traditional topology optimization performs several iterations of finite element analysis, followed by sensitivity analysis and filtering. Using the filtered densities and compliance, we perform a density update. These iterations are performed several times till the density has converged. The DLTO approach replaces the repetitive performance of finite element analysis using a compliance prediction network and the density update with density prediction network.}
\label{fig:pipelinedlto}
\end{figure*}
\subsection{Deep Learning for Topology Optimization} Several deep learning-based topology optimization frameworks have been proposed~\citep{sosnovik2019neural,banga20183d,yu2019deep,zhang2019deep,nie2020topologygan,chandrasekhar2020tounn, qiyinLin,kollmann2dmetamaterials, rawat2019novel,Yu_2018,HAMDIA201921,shuheidoi, lagaros,oh_2019,baotongLi, diabWAbueidda,saskaiIPMmotor,zhang2020deep, oh2018GANTO,zhou2020,guo2018, leeSeunghye,de2019topology,bujny2018,takahashi2019convolutional, chaoQian2020,jang2020generative,rodriguez2021improved,poma2020optimization}. While we enlist several deep learning based topology optimization works here, several machine learning based methods without deep learning have also been explored in recent years~\citep{mohammadzadeh2015new,sabzalian2019robust,kong2021fixed}. Further, there are several metaheuristics-based topology optimization methods that reduce the computational time~\citep{tejani2018size,alberdi2015connection,gholizadeh2014topology,mortazavi2018comparison}.
Among the deep learning works, \citet{banga20183d} and \citet{sosnovik2019neural} proposed to perform the fine refinement of the design using deep convolutional autoencoders since the fine refinement stage usually requires several finite element iterations during the optimization process. \citet{sosnovik2019neural} used the densities obtained after five iterations of the SIMP-based structural topology optimization as input to a deep learning network that directly predicts the final density. \citet{banga20183d} extend this idea to 3D design geometries, along with an additional input of the boundary conditions, but for a very coarse geometric resolution ($12\times12\times24$). \citet{yu2019deep} developed a framework that takes the input design, boundary conditions, and the prescribed volume fraction and predicts the final target shape. They also create a generative framework where they generate several optimal designs. However, their research was restricted to only one type of boundary condition. A more generic framework to accommodate all possible boundary conditions using this method would require an impractically large dataset.
\citet{zhang2019deep} developed an improved representation of the geometry, loading conditions, and boundary conditions using the strain tensor and displacement tensor as input. They demonstrate this framework using 2D geometries and represent each component of the strain tensor and displacement tensor as a different channel of the 2D image input. Using convolutional neural networks, they predict the final density. While their results are an improvement over earlier methods, this representation is not scalable to 3D. The strain tensor has three more components in addition to the increase in overall data size due to representing the geometry using 3D voxels, leading to several computational challenges. Recently, \citet{chandrasekhar2020tounn} proposed a topology optimization algorithm using neural networks where the neural network is used for identifying the density for each element at each iteration of the optimization process. This approach produces faster convergence and results that are comparable to those from SIMP. However, this approach's main drawback is that some finite element evaluations are still needed (although fewer than SIMP-based structural topology optimization). To the authors' best knowledge, very few researchers consider using compliance and the intermediate densities and compliances to improve the learning of structural topology optimization. Further, most of the implementations and results in the area have only been demonstrated in 2D or very low-resolution 3D geometries. Therefore, a scalable 3D framework for structural topology optimization using algorithmically consistent deep learning approaches is needed.
\section{Algorithmically-Consistent Deep Learning}\label{sec:PCDL}
A deep learning framework is algorithmically consistent if the data representation, training algorithm, and network architecture are all consistent with the underlying computational algorithm that the framework is designed to learn. Our framework is algorithmically consistent with the SIMP topology optimization. We first explain the baseline deep learning approach, which we use to compare our results. We also compare the performance of our proposed frameworks and the baseline against the classical SIMP-based structural topology optimization. After explaining the baseline, we explain the two proposed frameworks, the density sequence (DS) prediction, and the coupled density and compliance sequence (CDCS) prediction. \figref{fig:pipelinedlto} shows how our proposed frameworks are algorithmically consistent with the SIMP topology optimization.
\subsection{Baseline Direct Optimal Density Prediction}\label{sec:dod} U-Nets~\citep{ronneberger2015u,cciccek20163d} have recently proven effective for applications such as semantic segmentation and image reconstruction. Due to this success across several applications, we chose a U-Net for this task. The input to the U-Net is a tuple of two tensors. The first is the initial compliance (represented in the voxel or pixel space); the second is a constant tensor of the same shape as the compliance tensor. Each element of the constant tensor is initialized to the target volume fraction, which is a number in $[0, 1]$. First, a block of convolution and batch normalization layers is applied, and the output is saved for later use in a skip-connection. This intermediate output is then downsampled to a lower resolution for a subsequent block of convolution and batch normalization layers; this is performed twice. During upsampling, the saved outputs of matching dimensions are concatenated with the upsampled output to create the skip-connections, followed by a convolution layer. This process is repeated until the final image shape is reached. At this point, the network applies a final convolution layer to produce the final density. The network architecture is shown in~\figref{fig:directprediction}.
\begin{figure*}
\caption{\textbf{Direct optimal density (DOD) prediction:} This baseline model is used for comparing our proposed frameworks. The input in this approach is the initial compliance for the geometry along with the target volume fraction initialized. Then we use a U-Net architecture for predicting the optimal density.}
\label{fig:directprediction}
\end{figure*}
We preprocess the compliance to transform it to the $[0, 1]$ range. We first take the $log_{10}$ of the compliance and then normalize it by subtracting the minimum value and then dividing by the difference of maximum and minimum values to scale the log values to $[0, 1]$ range, so all the inputs are in the same range. To train the neural network model such that it is robust to the loads applied on the input geometry, we augment the inputs by rotating the input tensor by 90$^\circ$ clockwise and counter-clockwise, 180$^\circ$ around all three axes and by mirroring the tensor along the X-Y plane, X-Z plane and, Y-Z plane. To understand data augmentation visually, we illustrate these augmentation operations on 2D-image in \coloredref{Figure}{fig:data_augmentation}. We threshold the final target density to get a binary density value. The density value of 1 corresponds to the element where the material is present, while the density value of 0 corresponds to the region where the material is absent or removed. We do not use intermediate compliance or intermediate densities to train this network. We use the Adam~\citep{kingma2014adam} optimizer for training, with an adaptive learning rate. To guide the optimizer, we use the binary cross-entropy function to calculate the loss between the predicted and the target density.
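The compliance preprocessing described above, a $log_{10}$ transform followed by min-max normalization to $[0, 1]$, can be sketched as follows. The `eps` guard against zero-valued compliances is our addition, not stated in the text.

```python
import numpy as np

def preprocess_compliance(c, eps=1e-12):
    """Map raw elemental compliance to [0, 1]: take log10, then
    subtract the minimum and divide by the (max - min) range."""
    logc = np.log10(np.maximum(c, eps))  # eps guards against log(0); our choice
    cmin, cmax = logc.min(), logc.max()
    return (logc - cmin) / (cmax - cmin)

c = np.array([1e-3, 1.0, 1e3])
x = preprocess_compliance(c)  # log10 gives [-3, 0, 3] -> [0.0, 0.5, 1.0]
```

This puts every training sample's compliance field on a common scale regardless of load magnitude.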
\begin{figure}
\caption{\textbf{Data Augmentation:} We show augmentation operations on 2D image including 90$^\circ$ clockwise and counter-clockwise rotation, 180$^\circ$ rotation and mirroring it vertically and horizontally. }
\label{fig:data_augmentation}
\end{figure}
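The augmentation operations of \coloredref{Figure}{fig:data_augmentation} correspond to simple array transforms. A minimal 2D sketch using NumPy, applied to a single channel:

```python
import numpy as np

def augment_2d(x):
    """The 2D augmentations described in the text: 90-degree clockwise
    and counter-clockwise rotations, 180-degree rotation, and
    horizontal/vertical mirrors."""
    return {
        "rot90_ccw": np.rot90(x, k=1),
        "rot90_cw":  np.rot90(x, k=-1),
        "rot180":    np.rot90(x, k=2),
        "mirror_h":  np.flip(x, axis=1),  # left-right mirror
        "mirror_v":  np.flip(x, axis=0),  # up-down mirror
    }

x = np.arange(4).reshape(2, 2)
aug = augment_2d(x)
```

For 3D tensors, the same idea extends by rotating/flipping about each axis pair; the augmented compliance and density tensors must be transformed identically so input and target stay aligned.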
\subsection{Density Sequence Prediction} For the data representation to be algorithmically consistent, we learn the structural topology optimization from the compliance of the initial geometry. However, the compliance keeps evolving during the iterations since the densities also change during optimization. Therefore, the mapping between the original compliance and the final density is not trivial and may not directly correlate with the final density. To improve the performance, we develop the framework in two phases, as shown in \coloredref{Figure}{fig:pipelineDS}. The first phase is called the initial density prediction network (IDPN), which predicts the topology's initial density distribution based on the initial per-element compliance obtained for the original geometry. Starting from this initial density, we use the iterative density transformation information available from the topology optimization process to transform the initially proposed density into the final optimized density. We perform this transformation using another network (the density transformation network, DTN). Therefore, using IDPN and DTN, we can predict the final densities for a given initial design and its corresponding original compliances.
The two phases of the Density Sequence Prediction method require two different network architectures, with each performing algorithmically consistent transformations of the given input information to obtain the final optimized shape. The first architecture corresponds to the first phase, where the task is to predict an initial density. The second architecture corresponds to the second phase, where the density obtained from phase 1 is transformed to a final density.
\begin{figure*}
\caption{\textbf{Density sequence (DS) prediction:} In this framework, we perform the task in two phases as shown in \coloredref{Algorithm}{Alg:ds}; phase 1 (left block) and phase 2 (right block). We take the initial compliance and volume fraction initialization in the first phase to predict an initial density map. Using the initial density and the volume fraction initialization, we predict a series of densities similar to the prediction from a SIMP topology optimizer to finally predict the optimal density. The details of the training process are covered in the text.}
\label{fig:pipelineDS}
\end{figure*}
\noindent\textbf{Phase 1: Initial Density Prediction:} As a first phase of the method, the IDPN uses the initial elemental compliances and initialized volume fraction as input and predicts an initial density. We use U-Net~\citep{ronneberger2015u,cciccek20163d} network architecture for this phase. The architecture is similar to the architecture described in \secref{sec:dod} and is shown in \figref{fig:pipelineDS} on the left.
For 2D phase 1 (IDPN), the initial compliance and the volume fraction constraint are represented as a two-channel \textit{``image''}, and the target is a one-channel \textit{``image''} of the element densities obtained after the first iteration of structural topology optimization. For 3D structural topology optimization, the input is a four-dimensional tensor with two 3D inputs concatenated along the fourth axis, and the target is a 3D element density. Data processing of the compliance (as described in \secref{sec:dod}) is necessary for IDPN.
\begin{algorithm}[t!]
\caption{DS: Density sequence inference}\label{Alg:ds}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$\mathcal{S}$, L, BC, $V_0$}
\Output{$D_{k}$, final optimized geometry}
\text{Load design; apply loads and boundary conditions}\\
\text{Initialize: $D_0 \rightarrow V_0/\int_\Omega d\Omega$}\\
Compute $C_0$ using L, BC and $D_0$\\
$D_1 = IDPN(C_0, D_0)$ \tcc{IDPN Inference}
\For{$i=1:k$}
{
$D_{i+1} = DTN(D_{i-1},D_{i})$ \tcc{DTN Inference}
}
return $D_k$ \end{algorithm}
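The control flow of \coloredref{Algorithm}{Alg:ds} is one IDPN call followed by repeated DTN calls on the two most recent densities. A minimal sketch follows; the lambdas standing in for the trained IDPN and DTN are hypothetical toy functions, used purely to exercise the loop structure.

```python
import numpy as np

def ds_inference(c0, d0, idpn, dtn, k=5):
    """Density-sequence inference: IDPN maps the initial compliance and
    density to a first density estimate; DTN then repeatedly transforms
    the pair (D_{i-1}, D_i) into D_{i+1}, as in Algorithm 2."""
    d_prev, d_curr = d0, idpn(c0, d0)
    for _ in range(k):
        d_prev, d_curr = d_curr, dtn(d_prev, d_curr)
    return d_curr

# Toy stand-ins: "IDPN" blends its inputs, "DTN" sharpens toward {0, 1}.
idpn = lambda c, d: 0.5 * (c + d)
dtn = lambda d_prev, d_curr: np.clip(2.0 * d_curr - 0.5, 0.0, 1.0)
d_final = ds_inference(np.ones((4, 4)), np.full((4, 4), 0.4), idpn, dtn)
```

In the real pipeline the two callables are the trained networks, and no compliance is recomputed after the first step, which is what makes DS inference cheap.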
\noindent\textbf{Phase 2: Density transformation:} The training of phase 2 is more involved than phase 1. We train a convolutional neural network (CNN) with long short-term memory (LSTM), enabling learning from temporal data. In phase 2, there is a sequence of density transformations. Given these transformations are non-linear, a short-term history is not sufficient for robust prediction of the transformation. Capturing both long-term and short-term temporal dependencies is one of the salient features of LSTMs. Therefore, we use LSTMs and CNNs (traditionally used for spatial data such as images) to transform the densities. The architecture of the CNN-LSTM used for DTN is shown in \coloredref{Figure}{fig:pipelineDS} on the right.
The CNN-LSTM architecture starts with a set of convolution, max pooling, and batch normalization layers (called the encoder), which transforms the image to a flattened latent-space embedding used by the LSTM. A sequence of LSTM layers is used to obtain a transformed latent layer. A set of deconvolution and upsampling layers (called the decoder) is used to obtain an image (representing the element densities after one iteration of structural topology optimization). The LSTM is unrolled to predict a sequence, enabling back-propagation through time. Thus, during training, the intermediate densities of the structural topology optimization process are loaded as a sequence and processed to obtain the transformed density.
For phase 2 (DTN), the intermediate densities (each represented as a one-channel image) are used for performing the training. However, not all iterations of topology optimization are significant in the learning process. Therefore, we curate the intermediate densities to keep only unique densities (as defined by an $L_2$-norm metric). This curated set of unique densities is used for training DTN. Since DTN only deals with densities, no preprocessing is required. To make the neural network more robust, we implement on-the-fly data augmentation, as discussed in \secref{sec:dod}.
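The curation step can be sketched as a greedy filter that keeps an intermediate density only if it differs sufficiently from the last kept one. The tolerance value below is our choice for illustration; the text does not state the threshold used.

```python
import numpy as np

def curate_densities(densities, tol=1e-2):
    """Keep only intermediate densities that differ from the previously
    kept density by more than `tol` in L2 norm (tol is illustrative)."""
    kept = [densities[0]]
    for d in densities[1:]:
        if np.linalg.norm(d - kept[-1]) > tol:
            kept.append(d)
    return kept

# Nearly identical consecutive iterates collapse to one representative.
seq = [np.full((4, 4), v) for v in (0.5, 0.5001, 0.4, 0.4, 0.1)]
curated = curate_densities(seq)  # keeps the 0.5, 0.4, and 0.1 frames
```

Dropping near-duplicate frames keeps the DTN training sequences short and focused on iterations where the topology actually changes.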
\subsubsection{Training Algorithms} For training IDPN, we use two different loss functions: (i) the mean-squared error between the predicted and target densities and (ii) the mean-squared error between the means of the predicted and target densities. The second loss function ensures that the volume fractions of the target and predicted densities are the same. While training DTN, an additional loss function is added: since the final geometry cannot have densities strictly between $0.0$ and $1.0$ (the material is solid isotropic), the densities should belong to the set $\{0,1\}$. To impose this condition, we use the binary cross-entropy loss function in addition to the two loss functions used for IDPN. We use stochastic gradient descent based optimizers such as Adam~\citep{kingma2014adam} for performing the optimization.
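The combined DTN objective, elementwise MSE, volume-fraction MSE, and binary cross-entropy, can be written compactly as below. The equal weighting of the three terms is an assumption for this sketch; the text does not specify relative weights.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy with clipping for numerical stability."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def dtn_loss(pred, target, w=(1.0, 1.0, 1.0)):
    """Sum of the three DTN losses: elementwise MSE, MSE between mean
    densities (volume-fraction matching), and BCE (pushes densities
    toward {0, 1}). Weights w are assumed equal here."""
    mse = np.mean((pred - target) ** 2)
    vf_mse = (pred.mean() - target.mean()) ** 2
    return w[0] * mse + w[1] * vf_mse + w[2] * bce(pred, target)

loss = dtn_loss(np.array([0.0, 1.0]), np.array([0.0, 1.0]))
```

IDPN training would use only the first two terms under this formulation.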
Once the training is performed, the \emph{learned} parameters for both the networks are joined such that an \emph{end-to-end} inference scheme can be performed. This inference scheme only requires the initial compliance and the volume fraction constraint (input to IDPN). The output of IDPN is used as input to DTN to get the final density without any additional information required. This \emph{end-to-end} scheme makes it applicable to any generic design.
\begin{figure*}
\caption{\textbf{Coupled density and compliance sequence (CDCS) prediction:} In this framework, the initial compliance (see text for more details) and the volume fraction initialization are transformed by an iterative coupled prediction from a density prediction network (DPN) and a compliance prediction network (CPN) as shown in \coloredref{Algorithm}{Alg:cdcs}. Five iterations of this process are performed to obtain the fifth-iteration density, from which the optimal density is predicted using a final density prediction network (FDPN). The details of the training process are covered in the text.}
\label{fig:coupled_inference}
\end{figure*}
\subsection{Coupled Density and Compliance Sequence Prediction}\label{sec:CDCS} Inspired by the iterative SIMP method, we use deep neural networks to develop a coupled density and compliance sequence prediction framework. In our dataset, we observed that the first five density iterations from the SIMP-based topology optimization method underwent more significant transformations than the later iterations (also referred to as coarse and fine refinement by \citet{sosnovik2019neural}). We design three network architectures that use the intermediate compliances and intermediate densities to predict the final optimal density. The first two networks, namely, the compliance prediction network (CPN) and the density prediction network (DPN), feed their outputs as inputs to each other in a coupled interaction, and the third network, the final density prediction network (FDPN), uses the final output of the density prediction network to produce the final optimal density (similar to the approach taken by \citet{sosnovik2019neural}).
\begin{algorithm}[b!]
\caption{CDCS: Coupled density compliance sequence inference}\label{Alg:cdcs}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$\mathcal{S}$, L, BC, $V_0$}
\Output{$D_{fin}$, final optimized geometry}
\text{Load design; apply loads and boundary conditions}\\
\text{Initialize: $D_0 \rightarrow V_0/\int_\Omega d\Omega$}\\
Compute $C_0$ using L, BC and $D_0$\\
\For{$i=0:k$}
{
$D_{i+1} = DPN(D_{i},C_{i})$ \tcc{DPN Inference}
$C_{i+1} = CPN(D_{i+1},C_{0})$ \tcc{CPN Inference}
}
return $D_{fin} = FDPN(D_k)$ \tcc{FDPN Inference} \end{algorithm}
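The coupled loop of \coloredref{Algorithm}{Alg:cdcs} alternates DPN and CPN calls before a single FDPN call. A minimal sketch follows; the lambdas standing in for the three trained networks are hypothetical toy functions, used only to exercise the control flow.

```python
import numpy as np

def cdcs_inference(c0, d0, dpn, cpn, fdpn, k=5):
    """Coupled density/compliance inference: DPN predicts the next
    density from (D_i, C_i); CPN predicts the compliance of that new
    density from (D_{i+1}, C_0); after k rounds, FDPN maps the k-th
    density directly to the final optimal density."""
    d, c = d0, c0
    for _ in range(k):
        d = dpn(d, c)     # next-iteration density
        c = cpn(d, c0)    # compliance of the new density
    return fdpn(d)

# Toy stand-ins just to exercise the loop.
dpn = lambda d, c: np.clip(d + 0.1 * c, 0.0, 1.0)
cpn = lambda d, c0: c0 * (1.0 - d)
fdpn = lambda d: (d > 0.5).astype(float)
out = cdcs_inference(np.ones((4, 4)), np.full((4, 4), 0.3), dpn, cpn, fdpn)
```

Note that CPN always receives the initial compliance $C_0$ alongside the current density, which anchors the coupled iteration to the original loading.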
The compliance prediction network predicts the elemental compliance for a given iteration's density. It uses initial elemental compliance and the current iteration density obtained from the DPN. For CPN, we use Encoder-Decoder architecture. In the encoder, we use blocks of two convolutional layers followed by batch normalization. Similarly, we use an upsampling layer, two convolutional layers, and batch-normalization blocks for the decoder. The encoder encodes the input to the lower resolution latent space, and the decoder then decodes the encoded input to the next elemental compliance.
We use the current iteration elemental compliance and the current iteration density to predict the next iteration density with the density prediction network. We use the U-SE-ResNet~\citep{nie2020topologygan} architecture for the DPN. Adding SE-ResNet~\citep{nie2020topologygan} blocks in the bottleneck region of the U-Net architecture, in addition to the skip connections of U-Net from the encoder to the decoder, builds the U-SE-ResNet. The SE-ResNet block consists of two convolutional layers followed by an SE (Squeeze-and-Excitation) block~\citep{huSE} with a residual skip-connection from the input of the block. The encoder and decoder of U-SE-ResNet are the same as used in the CPN architecture. Refer to \ref{sec:appendix_architecture} for more details on the architecture of U-SE-ResNet.
The final model in this method is FDPN. As mentioned earlier, the elemental density has undergone a significant transformation during the first five iterations. So, we avoid the iterative process to obtain the final density by taking advantage of the neural network. We only use the fifth iteration density to predict the final optimal density directly. For FDPN we implement U-Net~\citep{ronneberger2015u,cciccek20163d} architecture. The encoder and decoder part of the U-Net used here is the same as discussed in CPN architecture.
Compliance is preprocessed before feeding it to the neural networks. We normalize the compliance values to be in the $[0, 1]$ range. The method for normalizing the compliance is explained in detail in \secref{sec:dod}. In addition to this, we perform data augmentation discussed in \secref{sec:dod} for all three networks. More details on architectures mentioned in this section can be found in \ref{sec:appendix_architecture}.
\begin{figure*}
\caption{\textbf{3D data generation pipeline:} Each sample in the dataset is generated using this data generation pipeline. First, we initialize the geometry (a cube with a side length of 1 meter). This geometry is discretized into tetrahedrons to get the mesh. On this mesh, we define three non-collinear nodes to fix the mesh against any rigid body motion. Then we apply randomly generated boundary conditions and loading conditions with different magnitudes and directions.}
\label{fig:datagen}
\end{figure*}
\subsubsection{Training Algorithms} All three networks are trained independently. During the training phase, we use the Adam~\citep{kingma2014adam} optimizer for all three networks. For more efficient training, we use an adaptive learning rate. The mean absolute error loss function is used for CPN, while for DPN and FDPN, the binary cross-entropy loss function is used to predict the densities.
During inference, the first two networks are used iteratively (see \coloredref{Figure}{fig:coupled_inference}). We start with the initial compliance and initial density, which is initialized with a volume fraction value as a tensor with the same shape as the initial compliance tensor. Using the density prediction network, we predict the subsequent iteration's density and feed it as input to the compliance prediction network, producing the compliance corresponding to the new predicted density. This loop is executed five times, so we get the fifth iteration's density prediction at the end of the loop. We use this predicted fifth iteration density as input to the final density prediction network and directly predict the final optimal density.
\section{Data Generation}\label{sec:data_gen}
\subsection{2D Data Generation}
The data required for training the networks is obtained by performing several simulations of topology optimization on different designs and volume fraction constraints. We represent each design using a 2D mesh made up of quadrilateral elements. The nodes of the mesh form a regular grid such that each element represents a square element. With this representation, we can directly convert the elements of the mesh to pixels of an image. Therefore, we represent the geometry as an image such that the pixel intensity values represent the element compliances and the element densities of the 2D mesh.
For training data, we need the raw compliance values, the volume fraction constraint, the element densities obtained during the intermediate iterations of the structural topology optimization process, and the final element densities. We generated 30,141 simulations of the structural topology optimization with different randomly generated load values, loading directions, load locations, and a randomly generated set of nodes in the mesh fixed with zero displacements. We performed each simulation for 150 iterations of SIMP-based structural topology optimization. All the relevant information from each structural topology optimization simulation is stored for use during the training process.
\subsection{3D Data Generation}\label{sec:3Ddatagen} The 3D data used for DLTO is generated using ANSYS Mechanical APDL v19.2. We use a cube of side length 1 meter in the form of a 3D mesh as the initial design domain (see \coloredref{Figure}{fig:datagen}). The mesh created has 31,093 nodes and 154,677 elements, and each element consists of 8 nodes. To ensure we sample a diverse set of topologies from the complete distribution of topologies originating from the cube, we use the several available sets of boundary and load conditions in the ANSYS software, such as Nodal Force, Surface Force, Remote Force, Pressure, Moment, and Displacement. First, we randomly sample three non-collinear nodes on one side of the cube and define zero displacements for these points, so they are fixed. This is necessary to avoid any rigid body motion of the geometry. The next step is to randomly select the load location, which must not be close to the fixed support nodes. The nature of the load (nodal, surface, remote, pressure, or moment), its value, and its direction are sampled randomly. The motivation behind the random sampling is to ensure the generated dataset has a variety of shapes and is independent of the type of load and its magnitude and direction. We employ a rejection sampling strategy to ensure that each sampled topology is unique. We obtained a total of 1500 configurations of load and boundary conditions, and then, by sampling the volume fraction, we generated 13,500 samples. In our dataset, the topology optimization took an average of 13 iterations; the minimum number of iterations is 6 and the maximum is 72; this number depends on several factors such as the mesh resolution, the boundary conditions, and the target volume fraction.
From ANSYS, we store the topology optimization output, the original strain energy, and the intermediate results, all using the starting mesh representation. We then need to convert this mesh representation to a voxel representation for training 3D CNN models. This conversion process first discretizes the axis-aligned bounding box into a regular structured grid of voxels based on a chosen grid size/resolution. For each voxel center, we compute the barycentric coordinates with respect to each tetrahedron in the mesh. Using the barycentric coordinates, we can determine whether the grid point lies inside the tetrahedron. If the grid point is inside a tetrahedron, we interpolate the field values (such as density and strain energy) from the tetrahedron nodes to the voxel centers. Through this process, we obtain the voxel-based representation of the topology optimization data. Each sample's voxelization takes about 5-15 minutes, depending on the resolution and the number of tetrahedral elements. We parallelize this process using GNU parallel to complete it in a few hours (depending on compute nodes' availability). To calculate the element compliance, we multiply the strain energy obtained from ANSYS by the cube of the density ($C = \rho^p\,\mathbf{u}^T k_e \mathbf{u} = \rho^p \cdot SE$, where $p$ is the penalty of the SIMP approach, set to 3 in our data generation process, and $SE$ is the elemental strain energy).
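The barycentric test and interpolation used in the voxelization step can be sketched for a single tetrahedron as follows; this is an illustrative NumPy version, not the actual voxelization code.

```python
import numpy as np

def barycentric_coords(p, tet):
    """Barycentric coordinates of point p w.r.t. a tetrahedron given as
    a (4, 3) array of vertex positions. The point lies inside the
    tetrahedron iff all four coordinates are in [0, 1] (they sum to 1)."""
    # Solve sum_i l_i (v_i - v_3) = p - v_3 for l_0, l_1, l_2.
    T = np.column_stack([tet[i] - tet[3] for i in range(3)])
    l123 = np.linalg.solve(T, p - tet[3])
    return np.append(l123, 1.0 - l123.sum())

def interpolate(p, tet, nodal_values):
    """Interpolate a nodal field (e.g. density or strain energy) to
    point p using the barycentric weights."""
    return barycentric_coords(p, tet) @ nodal_values

# Unit tetrahedron; its centroid-like interior point has equal weights.
tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
lam = barycentric_coords(np.array([0.25, 0.25, 0.25]), tet)
```

In the full pipeline, this inside test and interpolation are run for every voxel center against the candidate tetrahedra, producing the voxel grid of densities and strain energies.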
Once we obtain the voxel-based representation, we perform other preprocessing steps, such as normalizing the compliance by the maximum compliance value and converting the compliances to log scale for better learning. We also perform on-the-fly data augmentation by rotating the model into any of the six possible orientations. Thus, we finally obtain the data for training the neural network.
\section{Results}\label{sec:results}
We split both datasets (i.e., 2D and 3D geometries) into two parts for training the neural networks: training and testing dataset. Out of all data generated, we use 75\% of the topology optimization data for training and the remaining 25\% for testing. We use the testing dataset to evaluate the performance of all three methods. We will discuss the results for 2D and 3D topologies in the following subsections.
\subsection{Results on 2D Topology Optimization}\label{sec:2dresults}
To compare the performance of our proposed methods with the baseline DOD method, we start with the volume fraction (VF) constraint. We compute the predicted volume fraction of the final predicted topology by averaging the density values over the whole design domain. We compute the mean-squared error (MSE) between the predicted VF and the actual VF on the test data. This metric is shown as the MSE of volume fraction in \tabref{tab:compare_2d}. We also plot the histogram of the MSE values for VF from the three methods in \figref{fig:2d_histograms}(a). Further, we plot the correlation between the predicted and actual VF for all three methods in \figref{fig:2D_vf_correlation} and compute Pearson's correlation coefficient between the predicted and actual VF on the 2D test data in \tabref{tab:correlationcoeff_2d}.
\begin{table}[h!]
\setlength\extrarowheight{3pt}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\small
\caption{Comparison of test loss metrics of our three methods on 2D test data.}
\label{tab:compare_2d}
\begin{tabular}{| r | r | r | r | r | r |}
\hline
\multirow{2}{*}{\textbf{Method}} & \textbf{VF} & \textbf{TC} & \multicolumn{3}{|c|}{\textbf{Density}} \\ \cline{2-6}
& \textbf{MSE} & \textbf{MSE} & \textbf{BCE} & \textbf{MAE} & \textbf{MSE} \\
\hline
DOD & 0.0035 & 15.1e+04 & \textbf{0.2354} & 0.1368 & \textbf{0.0737} \\
DS & 0.0068 & \textbf{2.35e+04} & 0.4421 & 0.1826 & 0.1206 \\
CDCS & \textbf{0.0027} & 2.62e+04 & 0.3146 & \textbf{0.1195} & 0.0812\\
\hline
\end{tabular} \end{table}
\begin{figure*}
\caption{Correlation plots between predicted volume fraction and target volume fraction on 2D test data for: (a) DOD, (b) DS, (c) CDCS.}
\label{fig:2D_vf_correlation}
\end{figure*}
\begin{table}[t!]
\setlength\extrarowheight{5pt}
\small
\newcommand\T{\rule{0pt}{2.7ex}}
\newcommand\B{\rule[-1.3ex]{0pt}{0pt}}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Comparison of correlation coefficient (R) for volume fraction and total compliance on 2D test data.}
\label{tab:correlationcoeff_2d}
\begin{tabular}{| r | l | l |}
\hline
\textbf{Method} & \textbf{R for volume fraction} & \textbf{R for total compliance}\\
\hline
DOD & 0.8986 & 0.8403 \\
\hline
DS & 0.6883 & 0.9551 \\
\hline
CDCS & {0.8945} & {0.8926}\\
\hline
\end{tabular} \end{table}
Next, we evaluate the performance of the methods using the physical constraint of topology optimization: total compliance (TC). TC is the SIMP algorithm's objective function value, which it tries to minimize while simultaneously satisfying the volume fraction constraint. We compute and compare the MSE between the predicted and simulated TC values. To determine the TC of the predicted final topology, we use the compliance prediction network (CPN), part of the CDCS framework, to predict the elemental compliance and sum it over the whole design domain. We sum the elemental compliance of the target optimal topology to get the simulated total compliance value; this is the optimal minimum value achieved at the end of the SIMP method. We compare the MSE for TC in \tabref{tab:compare_2d} and plot the histogram of MSE values from all three methods in \figref{fig:2d_histograms}(b). We also compute Pearson's correlation coefficient between the predicted and simulated total compliance values in \tabref{tab:correlationcoeff_2d}; the correlation plots between these two values for each of the three methods are shown in \figref{fig:2D_comp_correlation}.
\begin{figure*}
\caption{Histogram of (a) total volume fraction loss and (b) total compliance loss on the 2D test data.}
\label{fig:2d_histograms}
\end{figure*}
\begin{table*}[t!]
\newcommand\T{\rule{0pt}{2.7ex}}
\newcommand\B{\rule[-1.3ex]{0pt}{0pt}}
\centering
\small
\caption{Statistics on the volume fraction and total compliance loss on 2D test data.}
\label{Tab:2dstats}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Statistics} \B & \multicolumn{3}{c|}{\T\B \textbf{MSE of volume fraction}} & \multicolumn{3}{c|}{ \textbf{MSE of total compliance}}\\
\cline{1-7}
\textbf{Method} &\T\B Min.& Median& Max& Min.& Median& Max\\
\hline
DOD\T & 5.96e-08 & 1.30e-03 & 6.08e-02 & 5.33e-01 & 1.01e+05 & 1.76e+06 \\
DS\T & 6.47e-07 & 2.58e-03 & 1.23e-01 & 7.47e-06 & 1.13e+04 & 5.06e+05\\
CDCS\T & 1.85e-08 & 7.74e-04 & 6.12e-02 & 2.87e-04 & 8.07e+03 & 6.41e+05\\
\hline
\end{tabular}
\end{table*}
We also compare loss metrics such as binary cross-entropy (BCE), mean absolute error (MAE), and mean squared error (MSE) between the density values of the predicted topology and the ground-truth optimal topology. In addition, we perform a statistical analysis of the MSE loss between the predicted and actual values of both the topological constraint (VF) and the physics constraint (TC). We summarize the minimum, median, and maximum MSE values for the 2D test data in \tabref{Tab:2dstats}.
Apart from the numerical analysis, to further assess the performance of our method, we compare visualizations of the predicted final topology, obtained by end-to-end prediction with all three methods, against the ground-truth final optimal topology in \figref{fig:2dvisuals}. Additionally, we compute each sample's total compliance value and report the percentage deviation of the predicted TC from the simulated TC in the visualization. In the ground-truth column, we also show the boundary and load conditions applied for each sample.
We further evaluate the CDCS method by visualizing the evolution of intermediate iteration densities and elemental compliance predicted by the DPN and CPN, respectively. As discussed in \secref{sec:CDCS}, we feed the actual initial and current iteration elemental compliance and the density values to DPN and CPN to predict the next iteration quantities. We visualize this iteration-wise evolution of the topology and its compliance in \figref{fig:DPN2dintermediate} and \figref{fig:CPN2dintermediate}, respectively.
\begin{figure*}
\caption{Correlation plots between predicted total compliance and simulated total compliance on 2D test data for all three methods: (a) DOD, (b) DS, (c) CDCS.}
\label{fig:2D_comp_correlation}
\end{figure*}
\begin{figure*}
\caption{Visualization of test data in 2D: (i) Ground truth final topology shape with fixed supports and load locations (ii) Method 1: Baseline direct optimal density prediction, (iii) Method 2: Density sequence prediction, (iv) Method 3: Coupled density and compliance sequence prediction. The results show the target design and the predicted design.}
\label{fig:2dvisuals}
\end{figure*}
\begin{figure*}
\caption{Visualizations of DPN predicting intermediate iterations density on 2D test data.}
\label{fig:DPN2dintermediate}
\end{figure*}
\begin{figure*}
\caption{Visualizations of CPN predicting intermediate iterations compliance on 2D test data.}
\label{fig:CPN2dintermediate}
\end{figure*}
\subsection{Results on 3D Topologies}\label{sec:3dresults}
We perform a similar set of evaluations on the 3D data as discussed in \secref{sec:2dresults} to assess the performance of the CDCS method and compare it to the DOD method.
We compare the MSE of the volume fraction (VF) and the total compliance (TC) between the predicted and actual final topologies in \tabref{tab:compare_3d}. We plot the histograms of MSE values from DOD and CDCS for both VF and TC in \figref{fig:3d_histograms}. We also compute Pearson's correlation coefficients for the VF and TC values between the predicted and actual final topology using both DOD and CDCS in \tabref{tab:correlationcoeff_3d}, with the corresponding correlation plots in \figref{fig:3D_vf_correlation} and \figref{fig:3D_comp_correlation}, respectively.
We summarize the different loss metrics (BCE, MAE, and MSE) between the predicted and actual topologies for both CDCS and DOD in \tabref{tab:compare_3d}. In \tabref{Tab:3dstats}, we summarize the statistical analysis of the MSE values of both VF and TC on the 3D test data.
\begin{table}[t!]
\setlength\extrarowheight{5pt}
\centering
\small
\caption{Comparison of test loss metrics using DOD and CDCS on 3D test data.}
\label{tab:compare_3d}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{| r | r | r | r | r | r |}
\hline
\multirow{2}{*}{\textbf{Method}} & \textbf{VF} & \textbf{TC} & \multicolumn{3}{|c|}{\textbf{Density}} \\ \cline{2-6}
& \textbf{MSE} & \textbf{MSE} & \textbf{BCE} & \textbf{MAE} & \textbf{MSE} \\
\hline
DOD & 0.0002 & 8.04e+05 & \textbf{0.1008} & \textbf{0.0648} & \textbf{0.0312}\\
CDCS & \textbf{0.0001} & \textbf{3.95e+05} & 0.1965 & 0.0875 & 0.0544\\
\hline
\end{tabular} \end{table}
\begin{table}[t!]
\setlength\extrarowheight{5pt}
\small
\caption{Comparison of correlation coefficient (R) for volume fraction and total compliance on 3D test data.}
\label{tab:correlationcoeff_3d}
\newcommand\T{\rule{0pt}{2.7ex}}
\newcommand\B{\rule[-1.3ex]{0pt}{0pt}}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\begin{tabular}{| r | l | l |}
\hline
\textbf{Method} & \textbf{R for volume fraction} & \textbf{R for total compliance}\\
\hline
DOD & 0.9966 & 0.9139 \\
\hline
CDCS & 0.9947 & 0.9578\\
\hline
\end{tabular} \end{table}
\begin{figure*}
\caption{Correlation plots between predicted volume fraction and target volume fraction on 3D test data for DOD and CDCS.}
\label{fig:3D_vf_correlation}
\end{figure*}
\begin{figure*}
\caption{Histogram of (a) total volume fraction loss and (b) total compliance loss on the 3D test data.}
\label{fig:3d_histograms}
\end{figure*}
\begin{figure*}
\caption{Correlation plots between predicted total compliance and simulated total compliance on 3D test data for DOD and CDCS.}
\label{fig:3D_comp_correlation}
\end{figure*}
\begin{table*}[t!]
\newcommand\T{\rule{0pt}{2.7ex}}
\newcommand\B{\rule[-1.3ex]{0pt}{0pt}}
\centering
\small
\caption{Statistics on the volume fraction and total compliance loss on 3D test data.}
\label{Tab:3dstats}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Statistics} & \multicolumn{3}{c|}{\T\B \textbf{MSE of volume fraction}} & \multicolumn{3}{c|}{ \textbf{MSE of total compliance}}\\
\cline{1-7}
\textbf{Method} &\T\B Min.& Median& Max& Min.& Median& Max\\
\hline
DOD\T & 9.31e-10 & 4.80e-05 & 2.75e-02 & 4.64e-01 & 1.83e+05 & 3.26e+07 \\
CDCS\T & 9.31e-10 & 6.19e-05 & 8.97e-03 & 1.12e-01 & 7.63e+04 & 2.35e+06 \\
\hline
\end{tabular} \end{table*}
\begin{table*}[t!]
\newcommand\T{\rule{0pt}{2.7ex}}
\newcommand\B{\rule[-1.3ex]{0pt}{0pt}}
\centering
\small
\caption{Comparison of different neural network architectures for each task of CDCS on 3D test data.}
\label{Tab:3d_arch_compare}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Method}\B\T & \multicolumn{2}{c|}{\T\B \textbf{CPN}} & \multicolumn{3}{c|}{ \textbf{DPN}} & \multicolumn{3}{c|}{\textbf{FDPN}} \\
\cline{1-9}
\textbf{Architecture} &\T\B MAE & MSE & BCE & MAE & MSE & BCE & MAE & MSE\\
\hline
AE\T & 0.0221 & 0.0009 & 0.2838 & 0.0211 & 0.0014 & 0.1140 & 0.0178 & 0.0026 \\
U-Net\T & 0.0286 & 0.0013 & 0.2810 & 0.0144 & 0.0005 & 0.1152 & 0.0145 & 0.0019 \\
U-SE-ResNet\T & 0.0294 & 0.0016 & 0.3157 & 0.0131 & 0.0006 & 0.1188 & 0.0155 & 0.0021\\
\hline
\end{tabular} \end{table*}
\begin{table}[t!]
\setlength\extrarowheight{5pt}
\centering
\small
\caption{Comparison of average time between traditional SIMP algorithm and deep-learning based DOD and CDCS to obtain one optimized 3D topology.}
\label{tab:compare_time_3d}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{| c | c |}
\hline
\textbf{Method} & \textbf{Time (sec)} \\
\hline
SIMP & 390 \\
DLTO-DOD & 0.233\\
DLTO-CDCS & 0.102 \\
\hline
\end{tabular} \end{table}
As discussed earlier, the CDCS method has three different neural networks dedicated to learning different aspects of structural topology optimization. We compared the performance of three architectures for each task: (i) an Encoder-Decoder architecture, (ii) a U-Net~\citep{ronneberger2015u,cciccek20163d} architecture, and (iii) U-SE-ResNet~\citep{nie2020topologygan}. For CPN, we compare the MAE and MSE metrics, while for DPN and FDPN, we evaluate performance based on the BCE, MAE, and MSE values on the 3D test data. These metric values are summarized in \tabref{Tab:3d_arch_compare}. Based on \tabref{Tab:3d_arch_compare}, we selected the best of the three architectures for each task: the Encoder-Decoder for CPN, U-SE-ResNet for DPN, and U-Net for FDPN.
\begin{figure*}
\caption{Visualization of in-distribution test data in 3D: (i) Initial geometry with the fixed supports and load locations, (ii) Ground truth final topology, (iii) Method 1: Baseline direct optimal density prediction, (iv) Method 3: Coupled density and compliance sequence prediction. The results show the target shape and the predicted shape.}
\label{fig:3dvisuals_insideDistribution}
\end{figure*}
\begin{figure*}
\caption{Visualization of out-of-distribution test data in 3D: (i) Initial geometry with the fixed supports and load locations, (ii) Ground truth final topology, (iii) Method 1: Baseline direct optimal density prediction, (iv) Method 3: Coupled density and compliance sequence prediction. The results show the target shape and the predicted shape.}
\label{fig:3dvisuals_outsideDistribution}
\end{figure*}
We use the marching cubes method to visualize the predicted and actual optimal topology shapes in 3D. As mentioned in \secref{sec:2dresults}, we obtain the predicted final topology via end-to-end prediction. We visualize some in-distribution samples from the test data along with some out-of-distribution samples. As discussed for 3D data generation in \secref{sec:3Ddatagen}, the in-distribution dataset has three nodes with fixed support and one loading condition. We additionally generated a few samples with more than three fixed-support locations and multiple loads, such as four loads (torsional deformation) acting on the topology; we term these out-of-distribution samples. We visualize samples from the in-distribution test data in \figref{fig:3dvisuals_insideDistribution} and the out-of-distribution samples in \figref{fig:3dvisuals_outsideDistribution}. In both figures, the first column shows the initial geometry, fixed-support locations, and load, giving an approximate idea of the final optimal topology. We also calculate the total compliance (TC) value and the percentage deviation of the predicted TC from the simulated TC for each sample and report it under each topology.
\section{Discussion}
In terms of the volume fraction (VF) constraint, comparing the histogram in \figref{fig:2d_histograms}(a) and the MSE values in \tabref{tab:compare_2d}, the CDCS method performs comparably to the DOD method and marginally better than the DS method. We observe comparable Pearson's correlation coefficients for the DOD ($R=0.8986$) and CDCS ($R=0.8945$) methods, while the DS method ($R=0.6883$) performs poorly in satisfying the VF constraint. From \tabref{tab:compare_2d}, we observe an almost $10\times$ lower error when comparing the MSE between the predicted and simulated total compliance for both CDCS and DS against DOD. The histogram in \figref{fig:2d_histograms}(b) shows that CDCS has the largest number of test samples with an MSE value close to zero. Both CDCS and DS predict a TC very close to the simulated optimal minimum value. From \tabref{tab:correlationcoeff_2d}, the total compliance of topologies predicted by DS ($R=0.9551$) and CDCS ($R=0.8926$) is more highly correlated with the simulated total compliance than that of the baseline DOD ($R=0.8403$) approach. The CDCS and DOD methods perform comparably on the different loss metrics used and better than the DS method, as shown in \tabref{tab:compare_2d}. In \tabref{Tab:2dstats}, we see that all three statistics have comparable values for DOD and CDCS and are slightly better than DS for the volume fraction constraint. For the MSE of total compliance on the 2D data, both DS and CDCS perform much better than DOD in all three statistics, with comparable median and maximum values; however, DS attains the lowest minimum value among the three methods.
\figref{fig:2dvisuals} shows that CDCS predicts a final shape significantly closer to the ground truth, with a predicted total compliance value much closer to the actual value. Although in some cases the shapes predicted by DOD and DS are slightly better than those of CDCS, their predicted total compliance values are much higher than the actual values. Further evaluating the CDCS method, the visualizations in \figref{fig:DPN2dintermediate} and \figref{fig:CPN2dintermediate} show that both DPN and CPN are effective at predicting the next-iteration density and compliance values. They also depict the non-trivial transformation of the initial topology shape towards the final optimal shape.
The results show that the CDCS method performs better than the baseline DOD and the DS method on 2D topologies. Although DS satisfies the physics constraint (minimizing the TC) better, it does not satisfy the topological constraint (VF) to the same extent. On the other hand, CDCS accomplishes the best balance in satisfying the volume fraction constraint and achieving a total compliance value close to the actual optimal minimum value. Hence, we only extend the CDCS method to the 3D dataset and compare it with the baseline DOD method.
From \tabref{tab:compare_3d}, we see that the MSE of VF using the CDCS method is $2\times$ lower than using the DOD method. From the histogram plots in \figref{fig:3d_histograms}(a), we infer that more samples attain a minimal MSE of VF using CDCS than DOD. Comparing the correlation plots in \figref{fig:3D_vf_correlation} and Pearson's correlation coefficients in \tabref{tab:correlationcoeff_3d}, we notice that the predicted and actual VF values are highly correlated. Comparing the MSE of total compliance (TC), DOD has a $2\times$ higher error than CDCS, and the histogram in \figref{fig:3d_histograms}(b) shows that most samples have a lower MSE of TC using CDCS. In \tabref{tab:correlationcoeff_3d}, we observe that the TC values predicted by CDCS ($R=0.9578$) are more highly correlated with the actual TC values than those of DOD ($R=0.9139$); we also observe this high correlation in the correlation plots in \figref{fig:3D_comp_correlation}. In terms of MAE and MSE, CDCS and DOD are comparable, while DOD performs slightly better on the BCE values in \tabref{tab:compare_3d}. Overall, as in the 2D case, CDCS achieves the best balance in satisfying both the topological (VF) and physical (TC) constraints on the 3D dataset. \tabref{Tab:3dstats} shows that all three statistics are comparable for the MSE of VF. We see better performance for the MSE of TC: the median value using CDCS is $2\times$ lower than with the DOD method, and the maximum value using CDCS is $15\times$ smaller than the DOD value, which confirms the superior performance of CDCS over the baseline DOD approach in satisfying the physics constraint (TC).
Apart from the numerical analysis, we notice that the CDCS method performs significantly better than the baseline DOD method when we visualize the shapes obtained for the in-distribution test samples in \figref{fig:3dvisuals_insideDistribution}. Even on out-of-distribution samples, which have more fixed supports and loads, CDCS predicts a final topology shape much closer to the actual shape than DOD. The shapes obtained by CDCS are also smoother than the actual ground truth. We further observe that the TC value of the topology predicted by CDCS is much closer to the simulated TC value than that of DOD.
From the numerical analysis, supported by the visualizations on both the 2D and 3D datasets, we conclude that CDCS performs better than the baseline DOD and DS. With the proposed multiple-network setup learning the different steps of SIMP, our method predicts the final optimal topology and its compliance close to the topology simulated by SIMP. Additionally, with DPN and CPN, our method can predict intermediate density and compliance values, respectively. The proposed deep-learning-based methods also perform topology optimization significantly faster, with a speedup of approximately 3900$\times$ (see \tabref{tab:compare_time_3d}).
To get more insights on the network architectures of CPN, DPN, and FDPN used in CDCS, please refer to \appref{sec:appendix_architecture}. For training performance results such as loss curves and histograms of different loss metrics, please see \appref{sec:appendix_lossplots}.
\subsection{Limitations}
While we show better performance of CDCS over the baseline method and a significant computational speedup over traditional SIMP-based topology optimization approaches, our proposed approach has some limitations. The primary limitation is that our framework assumes the initial geometry is a solid cube, to which different loads and boundary conditions are then applied. Naturally, more general designs would not start from a cube but from a more generic initial geometry. This issue can be addressed by augmenting the current dataset with diverse examples having different initial geometries. Further, to capture key features in the initial and final geometry, we need to extend this framework to voxel resolutions beyond $32\times32\times32$. Finally, the data requirement for training is fundamental to this approach. We note, however, that the data-generation cost is amortized over the number of inferences performed: our approach is beneficial if the number of inferences is an order of magnitude higher than the number of samples generated.
\subsection{Future Work} Future work includes 3D topology optimization performed on a generic 3D CAD model. Further, extending our framework to higher resolutions such as $128\times128\times128$ would be useful for more realistic designs with intricate features. Another avenue of future work is adding manufacturability constraints on the fly during inference and the capability of generative design. Finally, approaches to reduce the data requirements for training using information from structural mechanics as priors would be an interesting direction to explore.
\section{Conclusions}\label{sec:conclusions} In this paper, we explore the application of algorithmically consistent deep learning methods for structural topology optimization. We developed two approaches (density sequence and coupled density compliance sequence models), consistent with the physics constraints, topological constraints, and the SIMP topological optimization algorithm. We generated datasets for topology optimization in both 2D and 3D representations and then demonstrated the superior performance of our proposed approach over a direct density-based baseline approach. Finally, we visualize a few anecdotal topology optimization samples to visually compare the three methods with the SIMP-based topology optimization process. We believe that our proposed algorithmically consistent approach for topology optimization provides superior quality results and can considerably speed up the topology optimization process over existing finite-element-based approaches.
{ \small
\appendix \section*{Appendix} Here we discuss the methodology used in this paper in more detail, along with a few mathematical definitions and algorithms.
\section{Training Details}\label{sec:appendix_training} We begin by explaining the training procedure we used for training the different networks for building the three frameworks.
\begin{algorithm}[b!]
\caption{Training Algorithm}
\label{alg:training}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{~Network Architecture}
\textbf{Initialize:}~\text{Weights for all layers, $W_l, (l=1,2,\dots, m$);}\\\text{patience = 0}\\
\textbf{Load Data:}~\text{Load training data $\mathcal{\textbf{D}}$ and validation data $\mathcal{\textbf{D}}_V$}\\
\For{($i=0;i\le num\_epochs; i++$)}
{
\text{Randomly shuffle the data}\\
\text{Split $\mathcal{\textbf{D}}$ to $\mathcal{\textbf{D}}_j, (j=1,2,\dots, n$) mini-batches}\\
\For{$j=1:n$}
{
\text{Predict outputs $\mathcal{O}_j$ for mini-batch $\mathcal{\textbf{D}}_j$}\\
\text{Compute loss $\mathcal{L}(\mathcal{\textbf{D}}_j,\mathcal{O}_j,\{W\})$}\\
\text{Update weights,$\{W\}$ using Adam optimizer}\\
}
\text{Predict validation outputs $\mathcal{O}_V$ for $\mathcal{\textbf{D}}_V$}\\
\text{Compute Validation Loss $\mathcal{L}(\mathcal{\textbf{D}}_V,\mathcal{O}_V,\{W\})$}\\
\If{\text{Avg. Validation Loss not improving}}
{
\text{increment patience}
}
\Else{\text{patience = 0}}
\If{patience $\geq$ 30}
{
Exit\\
}
} \end{algorithm}
\coloredref{Algorithm}{alg:training} provides the general training procedure. We first load the training and validation data. Over several epochs and several mini-batches of the dataset, we compute the loss and update the weights using the SGD optimizer (or one of its variants, such as Adam). In general, we save the weights of the model with the least validation loss. We then run additional epochs, governed by the patience parameter, to check whether the loss reduces further. In \coloredref{Figure}{fig:DSlosses_2d}, we see that at the $450^{th}$ epoch the training and validation losses are very close to each other and the validation loss is minimal. We still run further epochs to ensure that the weights obtained are truly minimal and generalize well. Finally, we stop at the end of the $485^{th}$ epoch because no weights with a lower validation loss are found.
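The early-stopping logic of \coloredref{Algorithm}{alg:training} can be sketched as a plain Python loop. This is a schematic version only: `model_step` stands in for the shuffle/mini-batch/Adam updates of one epoch, and `val_loss_fn` for the validation pass; the real training code and hyperparameters are not shown in the paper.

```python
def train(model_step, val_loss_fn, num_epochs=1000, patience_limit=30):
    """Schematic training loop with patience-based early stopping:
    run mini-batch updates each epoch, then stop once the validation
    loss has not improved for `patience_limit` consecutive epochs."""
    best_loss, patience = float("inf"), 0
    for epoch in range(num_epochs):
        model_step()              # shuffle, split into mini-batches, update weights
        val_loss = val_loss_fn()  # validation pass at the end of the epoch
        if val_loss < best_loss:
            best_loss, patience = val_loss, 0   # improvement: save weights, reset patience
        else:
            patience += 1
        if patience >= patience_limit:
            break                 # early exit, as in Algorithm 1
    return epoch, best_loss
```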
\section{Metrics used for comparison} To establish a baseline for our CDCS framework and to further understand its different elements, we use several statistical metrics, which we detail in this section.
The two major metrics used to compare results are (i) the mean-squared error (MSE) and (ii) the correlation coefficient ($R$). The mean-squared error between two vectors of $N$ values is $$MSE = \frac{1}{N}\sum_{i=1}^{N} (p_{1,i} - p_{2,i})^2,$$ where $p_{1,i}$ and $p_{2,i}$ are the two values being compared. Up to the factor $1/N$, the $MSE$ is the squared $L_2$ norm of the difference vector. Similarly, the mean absolute error ($MAE$) is, up to the same factor, the $L_1$ norm of the difference vector: $$MAE = \frac{1}{N}\sum_{i=1}^{N} |p_{1,i} - p_{2,i}|.$$
The correlation coefficient (more commonly known as the Pearson correlation coefficient, $R$) used to compare results is given by $$R = \frac{cov(x,y)}{\sigma_x\sigma_y},$$ where $cov(x,y)$ is the covariance between $x$ and $y$ and $\sigma$ is the standard deviation. An equivalent formula for computing the correlation coefficient is $$R = \frac{\sum{(x- m_x)(y - m_y)}}{\sqrt{\sum(x- m_x)^2\sum{(y - m_y)^2}}},$$ where $m_x$ and $m_y$ denote the means of the vectors $x$ and $y$.
Apart from these metrics, another important metric we use is the binary cross-entropy (BCE) loss, which is the negative log-likelihood of the predicted values under the target values. Mathematically, $$BCE = -\frac{1}{N}\sum_{i=1}^{N} \left[\, p_{2,i}\log(p_{1,i}) + (1-p_{2,i})\log(1-p_{1,i}) \,\right],$$ where $p_{1,i}$ is the predicted value and $p_{2,i}$ the target value.
Finally, we use accuracy to count the fraction of pixels/voxels classified correctly. For this, we threshold the predicted density values at $0.5$ and count the voxels classified correctly.
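The metrics above translate directly into NumPy. This is a minimal reference implementation under our own naming; the clipping constant in `bce` is an assumption added for numerical stability.

```python
import numpy as np

def mse(p1, p2):
    """Mean-squared error between two arrays."""
    return np.mean((p1 - p2) ** 2)

def mae(p1, p2):
    """Mean absolute error between two arrays."""
    return np.mean(np.abs(p1 - p2))

def pearson_r(x, y):
    """Pearson correlation coefficient via the mean-centered formula."""
    mx, my = x.mean(), y.mean()
    return np.sum((x - mx) * (y - my)) / np.sqrt(
        np.sum((x - mx) ** 2) * np.sum((y - my) ** 2))

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy: negative log-likelihood of the predictions
    under the target densities (eps-clipping is our addition)."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def accuracy(pred, target, thresh=0.5):
    """Fraction of voxels classified correctly after thresholding at 0.5."""
    return np.mean((pred > thresh) == (target > thresh))
```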
\section{Data convergence study} We conducted a data convergence study to confirm that the dataset is sufficient for all the experiments in the main paper. We train the model with different numbers of training samples and evaluate on a separate test dataset; the metrics reported in \tabref{tab:dataconvergence} are obtained by evaluating the trained networks on this test dataset. For brevity, we show this only for the 3D dataset and the baseline DOD framework. We observe that the performance of the network increases with the number of training samples, but saturates after 5000 samples, with very little improvement obtained with 10000 samples. This demonstrates the sufficiency of the dataset.
\begin{table*}[h!]
\setlength\extrarowheight{5pt}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Data convergence study using DOD on 3D test data.}
\label{tab:dataconvergence}
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
\textbf{Number of samples} & \textbf{MSE of VF} & \textbf{MSE of TC} & \textbf{Accuracy} & \textbf{BCE} & \textbf{MAE} & \textbf{MSE} \\
\hline
500 & 0.0010 & 4.76e+06 & 91.90\% & 0.1832 & 0.1185 & 0.0570 \\
1000 & 0.0004 & 3.07e+06 & 93.47\% & 0.1482 & 0.0943 & 0.0461 \\
2500 & 0.0002 & 1.39e+06 & 94.57\% & 0.1241 & 0.0782 & 0.0385 \\
5000 & 0.0002 & 9.82e+05 & 95.38\% & 0.1058 & 0.0672 & 0.0328 \\
10000 & 0.0002 & 8.04e+05 & 95.61\% & 0.1008 & 0.0648 & 0.0312 \\
\hline
\end{tabular} \end{table*}
\section{Performance plots and Histograms}\label{sec:appendix_lossplots}
In this section, we summarize the performance plots from training the neural networks used in the three frameworks for both the 2D and 3D data. \figref{fig:perf_plots_idpn} shows the $L_2$ and $L_1$ loss plots when DOD is used to predict the first, second, fifth, tenth, and final densities of the 2D dataset. In \figref{fig:DSlosses_2d}, the left plot shows the training losses ($L_2$ loss, $L_1$ loss, and $L_2$ loss of VF) of IDPN (Phase 1 of DS), and the right plot shows the training losses ($L_2$ loss and $L_2$ loss of VF) of DPN (Phase 2 of DS). \figref{fig:coupled_losses_2d} shows the training losses of all three networks of the CDCS method on 2D data. Similarly, for the 3D dataset, \figref{fig:onestep_losses_3d} plots the training losses of DOD, and \figref{fig:coupled_losses_3d} shows the loss plots for each of the three networks of CDCS. From these plots, we see the losses decrease as training progresses.
\begin{figure*}
\caption{Performance plots of intermediate density prediction networks while predicting first, second, fifth, tenth intermediate densities and final density. (a) $L_2$ loss (b) $L_1$ loss.}
\label{fig:perf_plots_idpn}
\end{figure*}
\begin{figure*}
\caption{Performance plots of two phases of Density sequence prediction method on 2D data. (a) DS Phase1:IDPN loss (b) DS Phase2: DTN loss.}
\label{fig:DSlosses_2d}
\end{figure*} \begin{figure*}
\caption{Performance plots of (a) Compliance Prediction Network (CPN) and (b) Density Prediction Network (DPN) and (c) Final Density Prediction Network (FDPN). Each plot shows different loss functions used in training with 2D data.}
\label{fig:coupled_losses_2d}
\end{figure*}
We also plot histograms of the different loss metric values (BCE, MAE, and MSE) between the predicted and actual topologies for all methods on the 2D and 3D test datasets in \figref{fig:2d_loss_histograms} and \figref{fig:3d_loss_histograms}, respectively. Based on these histograms, CDCS performs better than DOD and DS on the 2D dataset, while on the 3D dataset CDCS and DOD have comparable performance.
\begin{figure*}
\caption{Performance plots of Direct Optimal Density Prediction. Plot shows different loss functions used in training with 3D data.}
\label{fig:onestep_losses_3d}
\end{figure*}
\begin{figure*}
\caption{Performance plots of (a) Compliance Prediction Network (CPN) and (b) Density Prediction Network (DPN) and (c) Final Density Prediction Network (FDPN). Each plot shows different loss functions used in training with 3D data.}
\label{fig:coupled_losses_3d}
\end{figure*}
\begin{figure*}
\caption{Distribution of BCE, MAE, MSE losses on 2D test data.}
\label{fig:2d_loss_histograms}
\end{figure*}
\begin{figure*}
\caption{Distribution of BCE, MAE, MSE losses on 3D test data.}
\label{fig:3d_loss_histograms}
\end{figure*}
\section{Architectures}\label{sec:appendix_architecture}
In this section, we provide details of the different architectures used in the CDCS method. As mentioned earlier, we implemented an Encoder-Decoder for CPN, a U-SE-ResNet for DPN, and a U-Net for FDPN.
\begin{figure*}
\caption{\textbf{Compliance Prediction Network (CPN) prediction:} This CNN Encoder-Decoder model is used to predict the next iteration compliance.}
\label{fig:cpn_architecture}
\end{figure*}
\begin{figure*}
\caption{\textbf{Final Density Prediction Network (FDPN) prediction:} This U-Net architecture is used to predict the next iteration density.}
\label{fig:fdpn_architecture}
\end{figure*}
\begin{figure*}
\caption{\textbf{Density Prediction Network (DPN) prediction:} This U-SE-ResNet\citep{nie2020topologygan} architecture is used to predict the next iteration density.}
\label{fig:dpn_architecture}
\end{figure*}
The Encoder-Decoder architecture is a simple convolutional neural network (CNN) consisting of two parts: the encoder and the decoder (\figref{fig:cpn_architecture}). The input is passed through the encoder and converted to a lower-dimensional latent space, which the decoder then expands back to the required higher-dimensional output. The encoder is a stack of encoding blocks, each consisting of strided convolutional layers followed by a non-linearity (ReLU) and batch normalization. Similarly, the decoder blocks consist of an up-sampling layer, convolutional layers, a non-linearity (ReLU), and batch normalization. Finally, a last convolution and non-linearity produce the output of the desired shape.
The architecture for FDPN is a U-Net~\citep{ronneberger2015u,cciccek20163d} based network, as shown in \figref{fig:fdpn_architecture}. This architecture is a modified version of the encoder-decoder architecture: as \figref{fig:fdpn_architecture} shows, skip-connections are introduced from the encoder to the decoder at each resolution level. These connections help transfer the encoder's contextual information to the decoder for better localization~\citep{ronneberger2015u}. The encoder and decoder of the U-Net are the same as in the encoder-decoder architecture discussed above.
U-SE-ResNet~\citep{nie2020topologygan} is constructed from a U-Net with the addition of SE-ResNet blocks, as shown in \figref{fig:dpn_architecture}. Each SE-ResNet block is a combination of ResNet and Squeeze-and-Excitation (SE) blocks~\citep{huSE}. These blocks are introduced into the U-Net architecture at the bottleneck region between the encoder and decoder. The SE block enhances the network's performance by recalibrating the channel-wise features, explicitly weighing the inter-dependencies between channels. An SE block consists of a pooling layer, followed by a fully connected (FC) layer and a ReLU transformation, and then another FC layer and a sigmoid transformation. Finally, the output of the sigmoid layer is scaled by multiplying it with the input of the SE block, recovering the shape of that input. In SE-ResNet, the SE block is combined with ResNet to improve performance~\citep{nie2020topologygan} by adding a residual connection from the input of the SE block to its output. This architecture uses the same encoder and decoder as explained earlier.
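To make the squeeze-excite-scale pipeline concrete, here is a minimal pure-Python sketch of a single SE block. The weight matrices are passed in explicitly; biases, the surrounding convolutions, and the function name are our own simplifications, not the paper's implementation:

```python
import math

def se_block(x, w1, w2):
    """Squeeze-and-Excitation sketch.  x: list of C channel maps (each a
    flat list of activations); w1: (C/r x C) and w2: (C x C/r) FC weights."""
    # squeeze: global average pool, one scalar per channel
    z = [sum(ch) / len(ch) for ch in x]
    # excitation: FC -> ReLU -> FC -> sigmoid
    h = [max(0.0, sum(w * v for w, v in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(row, h))))
         for row in w2]
    # scale: recalibrate each channel by its learned weight
    return [[si * v for v in ch] for si, ch in zip(s, x)]
```

In SE-ResNet, the output of such a block is then added back to the block's input through the residual connection.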
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem} \newtheorem*{defn}{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{cor}[theorem]{Corollary}
\begin{center} {\Large A Bijection on Dyck Paths and Its Cycle Structure }
DAVID CALLAN \\ Department of Statistics \\ \vspace*{-1mm} University of Wisconsin-Madison \\ \vspace*{-1mm} 1300 University Ave \\ \vspace*{-1mm} Madison, WI \ 53706-1532 \\ {\bf [email protected]} \\
November 21, 2006 \end{center}
\begin{abstract} The known bijections on Dyck paths are either involutions or have notoriously intractable cycle structure. Here we present a size-preserving bijection on Dyck paths whose cycle structure is amenable to complete analysis. In particular, each cycle has length a power of 2. A new manifestation of the Catalan numbers as labeled forests crops up en route, as does the Pascal matrix mod 2. We use the bijection to show the equivalence of two known manifestations of the Motzkin numbers. Finally, we consider some statistics on the new Catalan manifestation. \end{abstract}
{\Large \textbf{1 \ Introduction} }\quad There are several bijections on Dyck paths in the literature \cite{twobij04,catfine, invol1999,bij1998,ordered,simple2003,don80,acp44,lalanne92,lalanne93,vaille97}, usually introduced to show the equidistribution of statistics: if a bijection sends statistic A to statistic B, then clearly both have the same distribution. Another aspect of such a bijection is its cycle structure considered as a permutation on Dyck paths. Apart from involutions, this question is usually intractable. For example, Donaghey \cite{don80} introduces a bijection, obtains some results on a restricted version, and notes its apparently chaotic behavior in general. In a similar vein, Knuth \cite{acp44} defines a conjugate ($R$) and transpose ($T$), both involutions, on ordered forests, equivalently on Dyck paths, and asks when they commute \cite[Ex.\,17,\,7.2.1.6]{acp44}, equivalently, what are the fixed points of $(RT)^{2}$? This question is still open. (Donaghey's bijection is equivalent to the composition $RT$.)
In this paper, after reviewing Dyck path terminology (\S2), we recursively define a new bijection $F$ on Dyck paths (\S 3) and analyze its cycle structure (\S4,\:\S5). \S 4 treats the restriction of $F$ to paths that avoid the subpath $DUU$, and involves an encounter with the Pascal matrix mod 2. \S 5 generalizes to arbitrary paths. This entails an explicit description of $F$ involving a new manifestation of the Catalan numbers as certain colored forests in which each vertex is labeled with an integer composition. We show that each orbit has length a power of 2, find generating functions for orbit size, and characterize paths with given orbit size in terms of subpath avoidance. In particular, the fixed points of $F$ are those Dyck paths that avoid $DUDD$ and $UUP^{+}DD$ where $P^{+}$ denotes a nonempty Dyck path. \S 6 uses the bijection $F$ to show the equivalence of two known manifestations of the Motzkin numbers. \S7 considers some statistics on the new Catalan manifestation.
{\Large \textbf{2 \ Dyck Path Terminology} }\quad A Dyck path, as usual, is a lattice path of upsteps $U=(1,1)$ and downsteps $D=(1,-1)$, the same number of each, that stays weakly above the horizontal line joining its initial and terminal points (vertices). A peak is an occurrence of $UD$, a valley is a $DU$.
\vspace*{-3mm}
\Einheit=0.6cm \[ \Label\advance\ydim by.25cm{\rightarrow}(-5.2,3) \Label\ensuremath{\mathbf p}\xspace{\uparrow}(-2,1.9) \Label\advance\xdim by-.30cm{ \textrm{{\footnotesize peak upstep}}}(-6.9,3.5) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize valley}}}(-2,1) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize vertex}}}(-2,0.4) \Label\ensuremath{\mathbf p}\xspace{\uparrow}(.4,1.5) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize return}}}(.4,.8) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize downstep}}}(.4,0.2) \SPfad(-7,1),1111\endSPfad \SPfad(1,1),111111\endSPfad \SPfad(-1,1),1\endSPfad \Pfad(-7,1),33344344334344\endPfad \DuennPunkt(-7,1) \DuennPunkt(-6,2) \DuennPunkt(-5,3) \DuennPunkt(-4,4) \DuennPunkt(-3,3) \DuennPunkt(-2,2) \DuennPunkt(-1,3) \DuennPunkt(0,2) \DuennPunkt(1,1) \DuennPunkt(2,2) \DuennPunkt(3,3) \DuennPunkt(4,2) \DuennPunkt(5,3) \DuennPunkt(6,2) \DuennPunkt(7,1) \Label\ensuremath{\mathbf p}\xspace{\uparrow}(4,1) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize ground level}}}(4,.2) \Label\advance\ydim by.25cm{ \textrm{\small A Dyck 7-path with 2 components, 2$DUD$s, and height 3}}(0,-2.5) \]
\vspace*{1mm}
The size (or semilength) of a Dyck path is its number of upsteps and a Dyck path of size $n$ is a Dyck $n$-path. The empty Dyck path (of size 0) is denoted $\epsilon$. The number of Dyck $n$-paths is the Catalan number $C_{n}$, sequence \htmladdnormallink{A000108}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A000108} in \htmladdnormallink{OEIS}{http://www.research.att.com/~njas/sequences/Seis.html} . The height of a vertex in a Dyck path is its vertical height above ground level\ and the height of the path is the maximum height of its vertices. A return downstep is one that returns the path to ground level. A \emph{primitive} Dyck path is one with exactly one return (necessarily at the end). Note that the empty Dyck path $\epsilon$ is not primitive. Its returns split a nonempty Dyck path into one or more primitive Dyck paths, called its \emph{components}. Upsteps and downsteps come in matching pairs: travel due east from an upstep to the first downstep encountered. More precisely, $D_{0}$ is the matching downstep for upstep $U_{0}$ if $D_{0}$ terminates the shortest Dyck subpath that starts with $U_{0}$. We use \ensuremath{\mathcal P}\xspace to denote the set of primitive Dyck paths, $\ensuremath{\mathcal P}\xspace_{n}$ for $n$-paths, $\ensuremath{\mathcal P}\xspace(DUU)$ for those that avoid $DUU$ as a subpath, and $\ensuremath{\mathcal P}\xspace[DUU]$ for those that contain at least one $DUU$. A path $UUUDUDDD$, for example, is abbreviated $U^{3}DUD^{3}$.
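The matching rule just described is easy to state algorithmically. The following short Python sketch (our own notation: paths as strings of `U`/`D`) returns the index of the matching downstep by tracking the height relative to the given upstep:

```python
def match(p, u):
    """Index of the matching downstep for the upstep at index u: the
    first downstep that completes a Dyck subpath starting with that
    upstep (equivalently, the first downstep due east of it)."""
    h = 0
    for j in range(u, len(p)):
        h += 1 if p[j] == 'U' else -1
        if h == 0:
            return j
```

For example, in $U^{3}DUD^{3}$ the first upstep matches the final downstep, while the upstep at index 4 matches the downstep immediately after it.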
{\Large \textbf{3 \ The Bijection} }\quad Define a size-preserving bijection $F$ on Dyck paths recursively as follows. First, $F(\epsilon)=\epsilon$ and for a non-primitive Dyck path $P$ with components $P_{1},P_{2},\ldots,P_{r}\ (r\ge 2)$, $F(P)=F(P_{1})F(P_{2})\ldots F(P_{r})$ (concatenation). This reduces matters to primitive paths. From a consideration of the last vertex at height 3 (if any), every primitive Dyck path $P$ has the form $UQ(UD)^{i}D$ with $i\ge 0$ and $Q$ a Dyck path that is either empty (in case no vertex is at height 3) or ends $DD$; define $F(P)$ by \[ F(P)= \begin{cases}
U^{i+1}F(R)UDD^{i+1} & \textrm{if $Q$ is primitive, say $Q=URD$, and} \\
U^{i+1}F(Q)D^{i+1} & \textrm{if $Q$ is not primitive.} \end{cases} \] Schematically,
\vspace*{-5mm}
\Einheit=0.5cm \[ \Pfad(-15,5),33\endPfad \Pfad(-11,7),434\endPfad \Pfad(-6,6),344\endPfad \Pfad(0,5),3\endPfad \Pfad(5,7),34\endPfad \Pfad(8,6),4\endPfad \SPfad(-8,6),34\endSPfad \SPfad(1,6),3\endSPfad \SPfad(7,7),4\endSPfad \DuennPunkt(-15,5) \DuennPunkt(-14,6) \DuennPunkt(-13,7) \DuennPunkt(-11,7) \DuennPunkt(-10,6) \DuennPunkt(-9,7) \DuennPunkt(-8,6) \DuennPunkt(-7,7) \DuennPunkt(-6,6) \DuennPunkt(-5,7) \DuennPunkt(-4,6) \DuennPunkt(-3,5) \DuennPunkt(0,5) \DuennPunkt(1,6) \DuennPunkt(2,7) \DuennPunkt(5,7) \DuennPunkt(6,8) \DuennPunkt(7,7) \DuennPunkt(8,6) \DuennPunkt(9,5) \Label\advance\ydim by.25cm{\longrightarrow}(-1.5,5.5) \Label\advance\ydim by.25cm{\longrightarrow}(-1.5,0.5) \Label\advance\ydim by.25cm{\textrm{{\small $R$}}}(-12,6.8) \Label\advance\ydim by.25cm{\textrm{{\small $F(R)$}}}(3.5,6.8)
\Label\advance\ydim by.25cm{\textrm{{\small $Q$ non-primitive;}}}(12,1.5) \Label\advance\ydim by.25cm{\textrm{{\small $Q=\epsilon$ or ends $DD$}}}(12,0.5) \Pfad(-14,0),3\endPfad \Pfad(-11,1),34\endPfad \Pfad(-7,1),344\endPfad \Pfad(1,0),3\endPfad \Pfad(7,1),4\endPfad \SPfad(-9,1),34\endSPfad \SPfad(2,1),3\endSPfad \SPfad(6,2),4\endSPfad \Label\advance\ydim by.25cm{\nearrow}(1.2,6.2) \Label\advance\ydim by.25cm{\swarrow}(1.2,6.2) \Label\advance\ydim by.25cm{\searrow}(7.9,6.2) \Label\advance\ydim by.25cm{\nwarrow}(7.9,6.2) \Label\advance\ydim by.25cm{\textrm{{\small $i$}}}(0.9,6.7) \Label\advance\ydim by.25cm{\textrm{{\small $i$}}}(8.2,6.7) \Label\advance\xdim by-.30cm{\longleftarrow}(-8.7,5.4) \Label\advance\xdim by-.30cm{\textrm{{\small ---}}}(-7.7,5.4) \Label\advance\xdim by.30cm{\longrightarrow}(-5.4,5.4) \Label\advance\xdim by-.30cm{\textrm{{\small ---}}}(-5.1,5.4) \Label\advance\ydim by.25cm{\textrm{{\small $i$}}}(-7,5.0) \Label\advance\ydim by.25cm{\textrm{{\small $Q$}}}(-12,0.8) \Label\advance\ydim by.25cm{\textrm{{\small $F(Q)$}}}(4.5,1.8) \DuennPunkt(-14,0) \DuennPunkt(-13,1) \DuennPunkt(-11,1) \DuennPunkt(-10,2) \DuennPunkt(-9,1) \DuennPunkt(-8,2) \DuennPunkt(-7,1) \DuennPunkt(-6,2) \DuennPunkt(-5,1) \DuennPunkt(-4,0) \DuennPunkt(1,0) \DuennPunkt(2,1) \DuennPunkt(3,2) \DuennPunkt(6,2) \DuennPunkt(7,1) \DuennPunkt(8,0) \Label\advance\ydim by.25cm{\nearrow}(2.2,1.2) \Label\advance\ydim by.25cm{\swarrow}(2.2,1.2) \Label\advance\ydim by.25cm{\searrow}(6.9,1.2) \Label\advance\ydim by.25cm{\nwarrow}(6.9,1.2) \Label\advance\ydim by.25cm{\textrm{{\small $i$}}}(1.9,1.7) \Label\advance\ydim by.25cm{\textrm{{\small $i$}}}(7.2,1.7) \Label\advance\xdim by-.30cm{\longleftarrow}(-9.7,.4) \Label\advance\xdim by-.30cm{\textrm{{\small ---}}}(-8.7,.4) \Label\advance\xdim by.30cm{\longrightarrow}(-6.4,.4) \Label\advance\xdim by-.30cm{\textrm{{\small ---}}}(-6.1,.4) \Label\advance\ydim by.25cm{\textrm{{\small $i$}}}(-8,0) \Label\advance\ydim by.25cm{ \textrm{\small definition 
of $F$ on primitive Dyck paths}}(0,-2.5) \] \vspace*{3mm}
Note that $R=\epsilon$ in the top left path duplicates a case of the bottom left path but no matter: both formulas give the same result.
The map $G$, defined as follows, serves as an inverse of $F$ and hence $F$ is indeed a bijection. Again, $G(\epsilon)=\epsilon$ and for a non-primitive Dyck path $P$ with components $P_{1},P_{2},\ldots,P_{r}\ (r\ge 2)$, $G(P)=G(P_{1})G(P_{2})\ldots G(P_{r})$. By considering the lowest valley vertex, every primitive Dyck path has the form $U^{i+1}QD^{i+1}$ with $i\ge 0$ and $Q$ a non-primitive Dyck path ($Q=\epsilon$ in case valley vertices are absent); define $G(P)$ by \[ G(P)= \begin{cases}
U UG(R)D (UD)^{i}D & \textrm{if $Q$ ends $UD$, say $Q=RUD$, and} \\
UG(Q)(UD)^{i}D & \textrm{otherwise.} \end{cases} \]
The bijection $F$ is the identity on Dyck paths of size $\le 3$, except that it interchanges $U^{3}D^{3}$ and $U^{2}DUD^{2}$. Its action on primitive Dyck 4-paths is given in the Figure below.
\Einheit=0.4cm \[ \Label\advance\ydim by.25cm{\longrightarrow}(0,21) \Label\advance\ydim by.25cm{\longrightarrow}(0,16) \Label\advance\ydim by.25cm{\longrightarrow}(0,11) \Label\advance\ydim by.25cm{\longrightarrow}(0,6) \Label\advance\ydim by.25cm{\longrightarrow}(0,1) \Label\advance\ydim by.25cm{\longrightarrow}(0,26) \Label\advance\ydim by.25cm{ \textrm{{\small Dyck path $P$}}}(-5,26) \Label\advance\ydim by.25cm{ \textrm{{\small image $F(P)$}}}(5,26) \SPfad(-9,0),11111111\endSPfad \SPfad(1,0),11111111\endSPfad \SPfad(-9,5),11111111\endSPfad \SPfad(1,5),11111111\endSPfad \SPfad(-9,10),11111111\endSPfad \SPfad(1,10),11111111\endSPfad \SPfad(-9,15),11111111\endSPfad \SPfad(1,15),11111111\endSPfad \SPfad(-9,20),11111111\endSPfad \SPfad(1,20),11111111\endSPfad \Pfad(-9,0),33434344\endPfad \Pfad(1,0),33334444\endPfad \Pfad(-9,5),33433444\endPfad \Pfad(1,5),33433444\endPfad \Pfad(-9,10),33344344\endPfad \Pfad(1,10),33343444\endPfad \Pfad(-9,15),33343444\endPfad \Pfad(1,15),33434344\endPfad \Pfad(-9,20),33334444\endPfad \Pfad(1,20),33344344\endPfad \DuennPunkt(-9,0) \DuennPunkt(-8,1) \DuennPunkt(-7,2) \DuennPunkt(-6,1) \DuennPunkt(-5,2) \DuennPunkt(-4,1) \DuennPunkt(-3,2) \DuennPunkt(-2,1) \DuennPunkt(-1,0) \DuennPunkt(1,0) \DuennPunkt(2,1) \DuennPunkt(3,2) \DuennPunkt(4,3) \DuennPunkt(5,4) \DuennPunkt(6,3) \DuennPunkt(7,2) \DuennPunkt(8,1) \DuennPunkt(9,0) \DuennPunkt(-9,5) \DuennPunkt(-8,6) \DuennPunkt(-7,7) \DuennPunkt(-6,6) \DuennPunkt(-5,7) \DuennPunkt(-4,8) \DuennPunkt(-3,7) \DuennPunkt(-2,6) \DuennPunkt(-1,5) \DuennPunkt(1,5) \DuennPunkt(2,6) \DuennPunkt(3,7) \DuennPunkt(4,6) \DuennPunkt(5,7) \DuennPunkt(6,8) \DuennPunkt(7,7) \DuennPunkt(8,6) \DuennPunkt(9,5) \DuennPunkt(-9,10) \DuennPunkt(-8,11) \DuennPunkt(-7,12) \DuennPunkt(-6,13) \DuennPunkt(-5,12) \DuennPunkt(-4,11) \DuennPunkt(-3,12) \DuennPunkt(-2,11) \DuennPunkt(-1,10) \DuennPunkt(1,10) \DuennPunkt(2,11) \DuennPunkt(3,12) \DuennPunkt(4,13) \DuennPunkt(5,12) \DuennPunkt(6,13) \DuennPunkt(7,12) 
\DuennPunkt(8,11) \DuennPunkt(9,10) \DuennPunkt(-9,15) \DuennPunkt(-8,16) \DuennPunkt(-7,17) \DuennPunkt(-6,18) \DuennPunkt(-5,17) \DuennPunkt(-4,18) \DuennPunkt(-3,17) \DuennPunkt(-2,16) \DuennPunkt(-1,15) \DuennPunkt(1,15) \DuennPunkt(2,16) \DuennPunkt(3,17) \DuennPunkt(4,16) \DuennPunkt(5,17) \DuennPunkt(6,16) \DuennPunkt(7,17) \DuennPunkt(8,16) \DuennPunkt(9,15) \DuennPunkt(-9,20) \DuennPunkt(-8,21) \DuennPunkt(-7,22) \DuennPunkt(-6,23) \DuennPunkt(-5,24) \DuennPunkt(-4,23) \DuennPunkt(-3,22) \DuennPunkt(-2,21) \DuennPunkt(-1,20) \DuennPunkt(1,20) \DuennPunkt(2,21) \DuennPunkt(3,22) \DuennPunkt(4,23) \DuennPunkt(5,22) \DuennPunkt(6,21) \DuennPunkt(7,22) \DuennPunkt(8,21) \DuennPunkt(9,20) \Label\advance\ydim by.25cm{ \textrm{\small action of $F$ on primitive Dyck 4-paths}}(0,-4) \]
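The recursion is easy to mechanize. The following Python sketch (paths as strings of `U`/`D`; function names ours) splits a path into primitive components, peels the trailing $(UD)^{i}$ from the interior of a primitive path, and applies the two cases; it reproduces the table of 4-path images above:

```python
def components(p):
    """Split a Dyck path (string of 'U'/'D') into primitive components."""
    comps, h, start = [], 0, 0
    for i, s in enumerate(p):
        h += 1 if s == 'U' else -1
        if h == 0:                       # a return downstep ends a component
            comps.append(p[start:i + 1])
            start = i + 1
    return comps

def F(p):
    comps = components(p)
    if len(comps) != 1:                  # empty or non-primitive: act componentwise
        return ''.join(F(c) for c in comps)
    inner, i = p[1:-1], 0                # p = U inner D, p primitive
    while inner.endswith('UD'):          # peel the trailing (UD)^i
        inner, i = inner[:-2], i + 1
    q = inner                            # q is empty or ends DD
    if q and len(components(q)) == 1:    # q primitive, q = U r D
        return 'U' * (i + 1) + F(q[1:-1]) + 'UD' + 'D' * (i + 1)
    return 'U' * (i + 1) + F(q) + 'D' * (i + 1)
```

Iterating `F` over all Dyck $n$-paths for small $n$ confirms that it is a size-preserving permutation whose cycle lengths are powers of 2.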
\vspace*{10mm}
{\Large \textbf{4 \ Restriction to \emph{DUU}-avoiding Paths} }\quad To analyze the structure of $F$ a key property, clear by induction, is that it preserves $\#\,DUU$s, in particular, it preserves the property ``path avoids $DUU$''. A $DUU$-avoiding Dyck $n$-path corresponds to a composition $\ensuremath{\mathbf c}\xspace=(c_{1},c_{2},\ldots,c_{h})$ of $n$ via $c_{i}=$ number of $D$s ending at height $h-i,\ i=1,2,\ldots,h$ where $h$ is the height of the path:
\Einheit=0.6cm \[ \SPfad(-10,0),111111111111111111\endSPfad \SPfad(0,3),11111111111\endSPfad \SPfad(5,2),111111\endSPfad \SPfad(8,1),111\endSPfad \Pfad(-10,0),333343434443434434\endPfad \DuennPunkt(-10,0) \DuennPunkt(-9,1) \DuennPunkt(-8,2) \DuennPunkt(-7,3) \DuennPunkt(-6,4) \DuennPunkt(-5,3) \DuennPunkt(-4,4) \DuennPunkt(-3,3) \DuennPunkt(-2,4) \DuennPunkt(-1,3) \DuennPunkt(0,2) \DuennPunkt(1,1) \DuennPunkt(2,2) \DuennPunkt(3,1) \DuennPunkt(4,2) \DuennPunkt(5,1) \DuennPunkt(6,0) \DuennPunkt(7,1) \DuennPunkt(8,0) \Label\advance\ydim by.25cm{ \textrm{\small $DUU$-avoiding path $P$}}(-1,5) \Label\advance\ydim by.25cm{ \textrm{\small \# $D$s at each level}}(10,5) \Label\advance\ydim by.25cm{ \textrm{\small 3}}(10,3) \Label\advance\ydim by.25cm{ \textrm{\small 1}}(10,2) \Label\advance\ydim by.25cm{ \textrm{\small 3}}(10,1) \Label\advance\ydim by.25cm{ \textrm{\small 2}}(10,0) \Label\advance\ydim by.25cm{ \textrm{\small $DUU$-avoiding path $P\quad \leftrightarrow\quad$ composition $(3,1,3,2)$}}(2,-2) \] \vspace*{2mm}
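A Python sketch of this correspondence (our own helper, with paths again as `U`/`D` strings) recovers the composition $(3,1,3,2)$ from the path in the figure:

```python
def path_to_comp(p):
    """Composition of a DUU-avoiding Dyck path: entry i counts the
    downsteps ending at height h - i, where h is the path's height."""
    h, ends, hmax = 0, [], 0
    for s in p:
        h += 1 if s == 'U' else -1
        if s == 'D':
            ends.append(h)               # height at which this D ends
        hmax = max(hmax, h)
    return [ends.count(hmax - i) for i in range(1, hmax + 1)]
```

Note that the entries sum to the size of the path, and primitive paths give exactly the compositions ending in a 1.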
Under this correspondence, $F$ acts on compositions of $n$\,: $F$ is the identity on compositions of length 1, and for $\ensuremath{\mathbf c}\xspace=(c_{i})_{i=1}^{r}$ with $r\ge 2,\ F(\ensuremath{\mathbf c}\xspace)$ is the concatenation of $IncrementLast\big(F(c_{1},\ldots,c_{r-2})\big),\,1^{c_{r-1}-1},\,c_{r}$ where $IncrementLast$ means ``add 1 to the last entry'' and the superscript refers to repetition. In fact, $F$ can be described explicitly on compositions of $n$: \begin{prop}
For a composition \ensuremath{\mathbf c}\xspace of $n$, $F(\ensuremath{\mathbf c}\xspace)$ is given by the following
algorithm. For each entry $c$ in even position measured from
the end $($so the last entry is in position $1)$, replace it by
$c-1\ 1$s and increment its left neighbor; if $c$ is the first entry, so that it has no left neighbor, prepend a $1$ instead.
\label{X} \end{prop} For example, $4\,2\,1\,5\,2\,3 = \overset{6}{4}\,\overset{5}{2}\,\overset{4}{1}\,\overset{3}{5}\, \overset{2}{2}\,\overset{1}{3} \rightarrow 1\ 1^{3}\ 3 \ 1^{0}\ 6 \ 1^{1}\ 3 = 1^{4}\,3\,6\,1\,3$. \qed
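The recursive description of $F$ on compositions translates directly into code. A Python sketch (names ours; the base case uses the convention, forced by the worked example, that incrementing the last entry of the empty composition yields $(1)$):

```python
def F_comp(c):
    """F acting on a composition, given as a list of positive integers."""
    if len(c) <= 1:
        return list(c)
    head = F_comp(c[:-2])
    # IncrementLast, with IncrementLast of the empty composition = (1)
    head = head[:-1] + [head[-1] + 1] if head else [1]
    return head + [1] * (c[-2] - 1) + [c[-1]]
```

Since the last entry is carried over unchanged, $F$ indeed maps compositions ending in a 1 to compositions ending in a 1.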
Primitive $DUU$-avoiding Dyck $n$-paths correspond to compositions of $n$ that end with a 1. Let $\ensuremath{\mathcal C}\xspace_{n}$ denote the set of such compositions. Thus $\vert \ensuremath{\mathcal C}\xspace_{1} \vert = 1$ and for $n\ge 2$, \ $\vert \ensuremath{\mathcal C}\xspace_{n} \vert = 2^{n-2}$ since there are $2^{n-2}$ compositions of $n-1$.
Denote the length of a composition \ensuremath{\mathbf c}\xspace by $\#\ensuremath{\mathbf c}\xspace$. The \emph{size} of \ensuremath{\mathbf c}\xspace is the sum of its entries. The \emph{parity} of \ensuremath{\mathbf c}\xspace is the parity (even/odd) of $\#\ensuremath{\mathbf c}\xspace$. There are two operations on nonempty compositions that increment (that is, increase by 1) the size: $P=$ prepend 1, and $I=$ increment first entry. For example, for $\ensuremath{\mathbf c}\xspace=(4,1,1)$ we have size(\ensuremath{\mathbf c}\xspace) = 6, $\ \#\ensuremath{\mathbf c}\xspace=3,$ the parity of \ensuremath{\mathbf c}\xspace is odd, $P(\ensuremath{\mathbf c}\xspace)=(1,4,1,1),\ I(\ensuremath{\mathbf c}\xspace)=(5,1,1)$. \begin{lemma}
\label{A}
$P$ changes the parity of a composition while $I$ preserves it. \qed \end{lemma} We'll call $P$ and $I$ \emph{augmentation operators} on $\ensuremath{\mathcal C}\xspace_{n}$ and for $A$ an augmentation operator, $A'$ denotes the other one. \begin{lemma}
Let $A$ be an augmentation
operator. On a composition $\ensuremath{\mathbf c}\xspace$ with $\#\ensuremath{\mathbf c}\xspace \ge 2$,
$A \circ F = F \circ A$ if $\,\#\ensuremath{\mathbf c}\xspace$ is odd and $A \circ F =
F \circ A'$ if $\,\#\ensuremath{\mathbf c}\xspace$ is even.
\label{B} \end{lemma} This follows from Proposition \ref{X}. \qed
Using Lemma \ref{B}, an $F$-orbit $(\ensuremath{\mathbf c}\xspace_{1},\ldots,\ensuremath{\mathbf c}\xspace_{m})$ in $\ensuremath{\mathcal C}\xspace_{n}$ together with an augmentation operator $A_{1} \in\{P,I\}$ yields part of an $F$-orbit in $\ensuremath{\mathcal C}\xspace_{n+1}$ via a ``commutative diagram'' as shown: \[ \begin{CD}
\ensuremath{\mathbf c}\xspace_{1} @>F>> \ensuremath{\mathbf c}\xspace_{2} @>F>> \ldots @>F>> \ensuremath{\mathbf c}\xspace_{i} @>F>> \ensuremath{\mathbf c}\xspace_{i+1} @>F>>
\ldots @>F>>\ensuremath{\mathbf c}\xspace_{m} @>F>> \ensuremath{\mathbf c}\xspace_{1} \\
@VVA_{1}V @VVA_{2}V @. @VVA_{i}V @VVA_{i+1}V @.
@VVA_{m}V @VVA_{m+1}V \\
\ensuremath{\mathbf d}\xspace_{1} @>F>> \ensuremath{\mathbf d}\xspace_{2} @>F>> \ldots @>F>> \ensuremath{\mathbf d}\xspace_{i} @>F>> \ensuremath{\mathbf d}\xspace_{i+1} @>F>>
\ldots @>F>>\ensuremath{\mathbf d}\xspace_{m} @>F>> \ensuremath{\mathbf d}\xspace_{m+1} \end{CD} \]
Let $B(\ensuremath{\mathbf c}\xspace_{1},A_{1})$ denote the sequence of compositions $(\ensuremath{\mathbf d}\xspace_{1},\ldots,\ensuremath{\mathbf d}\xspace_{m})$ thus produced. By Lemma \ref{B}, $A_{i+1}= A_{i}$ or $A_{i}'$ according as $\,\#\ensuremath{\mathbf c}\xspace_{i}$ is odd or even ($1\le i \le m$). Hence, if the orbit of $\ensuremath{\mathbf c}\xspace_{1}$ contains an even number of compositions of even parity, then $A_{m+1}=A_{1}$ and so $\ensuremath{\mathbf d}\xspace_{m+1}=\ensuremath{\mathbf d}\xspace_{1}$ and $B(\ensuremath{\mathbf c}\xspace_{1},A_{1})$ is a complete $F$-orbit in $\ensuremath{\mathcal C}\xspace_{n+1}$ for each of $A_{1}=P$ and $A_{1}=I$. On the other hand, if the orbit of $\ensuremath{\mathbf c}\xspace_{1}$ contains an odd number of compositions of even parity, then $A_{m+1}=A_{1}'$ and the commutative diagram will extend for another $m$ squares before completing an orbit in $\ensuremath{\mathcal C}\xspace_{n+1}$, consisting of the concatenation of $B(\ensuremath{\mathbf c}\xspace_{1},P)$ and $B(\ensuremath{\mathbf c}\xspace_{1},I)$, denoted $B(\ensuremath{\mathbf c}\xspace_{1},P,I)$. In the former case orbit size is preserved; in the latter it is doubled.
Our goal here is to generate $F$-orbits recursively and to get induction going, we now need to investigate the parities of the compositions comprising these ``bumped-up'' orbits $B(\ensuremath{\mathbf c}\xspace,A)$ and $B(\ensuremath{\mathbf c}\xspace,P,I)$. A bit sequence is a sequence of 0s and 1s. \textbf{In the sequel all operations on bit sequences are modulo 2}. Let $\ensuremath{\mathbf S}\xspace$ denote the partial sum operator on bit sequences: $\ensuremath{\mathbf S}\xspace\big( (\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{m}) \big) =(\epsilon_{1},\epsilon_{1}+\epsilon_{2},\ldots,\epsilon_{1}+\epsilon_{2}+\ldots+\epsilon_{m})$. Let $\ensuremath{\mathbf e}\xspace_{m}$ denote the all 1s bit sequence of length $m$ and let $\ensuremath{\mathbf e}\xspace$ denote the infinite sequence of 1s. Thus $\ensuremath{\mathbf S}\xspace\ensuremath{\mathbf e}\xspace=(1,0,1,0,1,\ldots)$. Let $P$ denote the infinite matrix whose $i$th row ($i\ge 0$) is $\ensuremath{\mathbf S}\xspace^{i}\ensuremath{\mathbf e}\xspace$ ($\ensuremath{\mathbf S}\xspace^{i}$ denotes the $i$-fold composition of $\ensuremath{\mathbf S}\xspace$). The $(i,j)$ entry $p_{ij}$ of $P$ satisfies $p_{ij}=p_{i-1,j}+p_{i,j-1}$ and hence $P$ is the symmetric Pascal matrix mod 2 with $(i,j)$ entry =$\:\binom{i+j}{i}$ mod 2. The following lemma will be crucial. \begin{lemma}
Fix $k\ge 1$ and let $P_{k}$ denote the $2^{k}\times 2^{k}$ upper
left submatrix of $P$. Then the sum modulo $2$ of row $i$ in $P_{k}$ is $0$
for $0\le i < 2^{k}-1$ and is $1$ for $i=2^{k}-1$.
\label{P} \end{lemma} \textbf{Proof} \quad The sum of row $i$ in $P_{k}$ is, modulo 2, \[ \sum_{j=0}^{2^{k}-1} p_{ij} = \sum_{j=0}^{2^{k}-1}\binom{i+j}{i}=\binom{i+2^{k}}{i+1}=\binom{i+2^{k}}{i+1,2^{k}-1} \] and for $i<2^{k}-1$ there is clearly at least one carry in the addition of $i+1$ and $2^{k}-1$ in base 2 so that, by Kummer's well known criterion, $2\,\vert\,\binom{i+2^{k}}{i+1,2^{k}-1}$ and the sum of row $i$ is 0 (mod 2). On the other hand, for $i=2^{k}-1$ there are no carries, so $2\nmid \binom{i+2^{k}}{i+1,2^{k}-1}$ and the sum of row $i$ is 1 (mod 2). \qed
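Lemma \ref{P} is also easy to check computationally. A Python sketch builds the row sums of $P_{k}$ directly from the binomial-coefficient description of $P$:

```python
from math import comb

def pascal_mod2_row_sums(k):
    """Row sums, mod 2, of the 2^k x 2^k upper-left submatrix of the
    symmetric Pascal matrix mod 2, whose (i, j) entry is C(i+j, i) mod 2."""
    m = 2 ** k
    return [sum(comb(i + j, i) for j in range(m)) % 2 for i in range(m)]
```

For every $k$ tried, all row sums are 0 except the last, which is 1, in agreement with the lemma.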
Now let $p(\ensuremath{\mathbf c}\xspace)$ denote the mod-2 parity of a composition $\ensuremath{\mathbf c}\xspace:\ p(\ensuremath{\mathbf c}\xspace)=1$ if $\,\#\ensuremath{\mathbf c}\xspace$ is odd, $=0$ if $\,\#\ensuremath{\mathbf c}\xspace$ is even. For purposes of addition mod 2, represent the augmentation operators $P$ and $I$ by 0 and 1 respectively so that, for example, $p(A(\ensuremath{\mathbf c}\xspace))=p(\ensuremath{\mathbf c}\xspace)+A+1$ for $A=P$ or $I$ by Lemma \ref{A}. Then the parity of $\ensuremath{\mathbf d}\xspace_{i+1}$ above can be obtained from the following commutative diagram (all addition modulo 2) \[ \begin{CD}
\qquad p(\ensuremath{\mathbf c}\xspace_{i})\qquad @>>> p(\ensuremath{\mathbf c}\xspace_{i+1}) \\
@VVAV @VVp(\ensuremath{\mathbf c}\xspace_{i})+A+1V \\
\qquad \ldots\qquad @>>> p(\ensuremath{\mathbf c}\xspace_{i+1})+p(\ensuremath{\mathbf c}\xspace_{i})+A \end{CD} \] This leads to \begin{lemma}
Let $p_{i}$ denote the parity of $\ensuremath{\mathbf c}\xspace_{i}$ so that
$\ensuremath{\mathbf p}\xspace=(p_{i})_{i=1}^{m}$ is the parity vector for the $F$-orbit
$(\ensuremath{\mathbf c}\xspace_{i})_{i=1}^{m}$ of the composition $\ensuremath{\mathbf c}\xspace_{1}$. Then the parity
vector for $B(\ensuremath{\mathbf c}\xspace,A)$ is
\[
\ensuremath{\mathbf S}\xspace\ensuremath{\mathbf p}\xspace+\ensuremath{\mathbf S}\xspace\ensuremath{\mathbf e}\xspace_{m}+(A+1)\ensuremath{\mathbf e}\xspace_{m}.
\]\qed \end{lemma}
Now we are ready to prove the main result of this section concerning the orbits of $F$ on primitive $DUU$-avoiding Dyck $n$-paths identified with the set $\ensuremath{\mathcal C}\xspace_{n}$ of compositions of $n$ that end with a 1. The parity of an orbit is the sum mod 2 of the parities of the compositions comprising the orbit, in other words, the parity of the total number of entries in all the compositions. \begin{theorem}
For each $n\ge 1$,
\begin{itemize}
\item[$($i\,$)$] all $F$-orbits on $\ensuremath{\mathcal C}\xspace_{n}$ have the same
length and this length is a power of $2$.
\item[$($ii\,$)$] all $F$-orbits on $\ensuremath{\mathcal C}\xspace_{n}$ have the same parity.
\item[$($iii\,$)$] the powers in $($i\,$)$ and the parities
in $($ii\,$)$ are given as
follows:
For $n=1$, the power $($i.e. the exponent$)$ is $0$ and the parity
is $1$.
For $n=2$, the power and parity are both $0$.
As $n$ increases from $2$, the powers remain unchanged and the parity
stays $0$ except that when $n$ hits a number of the form $2^{k}+1$, the
parity becomes $1$, and at the next number, $2^{k}+2$, the power
increases by $1$ and the parity reverts to $0$.
\end{itemize} \end{theorem} \textbf{Proof}\quad We consider orbits generated by the augmentation operators $P$ and $I$. No orbits are missed because all compositions, in particular those ending 1, can be generated from the unique composition of 1 by successive application of $P$ and $I$. The base cases $n=1,2,3$ are clear from the orbits $(1)\to (1),\ (1,1)\to (1,1),\ (2,1) \to (1,1,1)\to (2,1)$. To establish the induction step, suppose given an orbit, orb($\ensuremath{\mathbf c}\xspace$), in $\ensuremath{\mathcal C}\xspace_{2^{k}+1}\ (k\ge 1)$ with parity vector $\ensuremath{\mathbf p}\xspace=(a_{i})_{i=1}^{2^{k}}$ and (total) parity 1. Then the next orbit $B(\ensuremath{\mathbf c}\xspace,P,I)$ has parity vector \[ \ensuremath{\mathbf p}\xspace_{1}=(\ensuremath{\mathbf S}\xspace\, \ensuremath{\mathbf p}\xspace,\ensuremath{\mathbf S}\xspace\, \ensuremath{\mathbf p}\xspace+\ensuremath{\mathbf e}\xspace_{2^{k}})+\ensuremath{\mathbf S}\xspace\,\ensuremath{\mathbf e}\xspace_{2^{k+1}} \] with parity ($\ensuremath{\mathbf S}\xspace\,\ensuremath{\mathbf p}\xspace$'s cancel out) $\underbrace{1+1+\ldots+1}_{2^{k}}+\underbrace{1+0+1+0+\ldots+1+0}_{2^{k+1}}=0$ for $k\ge 1$. Successively ``bump up'' this orbit using $A=\epsilon_{1},\epsilon_{2},\ldots,$ in turn until the parity hits 1 again. 
With Sum$(\ensuremath{\mathbf v}\xspace)$ denoting the sum of the entries in \ensuremath{\mathbf v}\xspace, the successive parity vectors $\ensuremath{\mathbf p}\xspace_{1},\ensuremath{\mathbf p}\xspace_{2},\ldots$ are given by \begin{multline*} \ensuremath{\mathbf p}\xspace_{i}=\big(\ensuremath{\mathbf S}\xspace^{i}\ensuremath{\mathbf p}\xspace,\ensuremath{\mathbf S}\xspace^{i}\ensuremath{\mathbf p}\xspace+\sum_{j=1}^{i-2}\textrm{Sum}(\ensuremath{\mathbf S}\xspace^{j}\ensuremath{\mathbf p}\xspace)\ensuremath{\mathbf S}\xspace^{i-1-j}\ensuremath{\mathbf e}\xspace_{2^{k}} + \ensuremath{\mathbf S}\xspace^{i-1}\ensuremath{\mathbf e}\xspace_{2^{k}}\big) + \\ \ensuremath{\mathbf S}\xspace^{i}\ensuremath{\mathbf e}\xspace_{2^{k+1}} + \ensuremath{\mathbf S}\xspace^{i-1}\ensuremath{\mathbf e}\xspace_{2^{k+1}} +\sum_{j=1}^{i-2}\epsilon_{j}\ensuremath{\mathbf S}\xspace^{i-1-j}\ensuremath{\mathbf e}\xspace_{2^{k+1}} + (\epsilon_{i-1}+1)\ensuremath{\mathbf e}\xspace_{2^{k+1}}. \end{multline*}
Applying Lemma \ref{P} we see that, independent of the $\epsilon_{i}$'s, $\ensuremath{\mathbf p}\xspace_{i}$ has sum 0 for $i<2^{k}-1$ and sum 1 for $i=2^{k}-1$. This establishes the induction step in the theorem. \qed
\begin{cor}
For $n\ge 2$, the length of each $F$-orbit in $\ensuremath{\mathcal P}\xspace_{n}(DUU)$ is $2^{k}$ where $k$
is the number of bits in the base-$2$ expansion of $n-2$.
\label{base2} \end{cor} \textbf{Proof}\quad This is just a restatement of part of the preceding Theorem. \qed
\vspace*{10mm}
{\Large \textbf{5 \ The Orbits of $\mathbf{F}$} }\quad The preceding section analyzed $F$ on $\ensuremath{\mathcal P}\xspace(DUU)$, paths avoiding
$DUU$. Now we consider $F$ on $\ensuremath{\mathcal P}\xspace[DUU]$, the primitive Dyck paths containing a $DUU$. Every $P \in \ensuremath{\mathcal P}\xspace[DUU]$ has the form $AQB$ where \begin{itemize}
\item[(i)] $A$ consists of one or more $U$s
\item[(ii)] $C:=AB \in \ensuremath{\mathcal P}\xspace(DUU)$
\item[(iii)] $Q \notin \ensuremath{\mathcal P}\xspace $ and $Q$ ends $DD$ (and hence $Q$ contains
a $DUU$ at ground level). \end{itemize}
To see this, locate the rightmost of the lowest $DUU$s in $P$, say at height $h$. Then $A=U^{h},\ Q$ starts at step number $h+1$ and extends through the matching downstep of the middle $U$ in this rightmost lowest $DUU$, and $B$ consists of the rest of the path.
\Einheit=0.4cm \[ \Pfad(-15,0),3\endPfad \Pfad(-13,2),33\endPfad \Pfad(-8,4),433\endPfad \Pfad(-2,5),44\endPfad \Pfad(6,3),4\endPfad \Pfad(14,1),4\endPfad \SPfad(-14,1),3\endSPfad \SPfad(-11,4),413\endSPfad \SPfad(-5,5),413\endSPfad \SPfad(13,2),4\endSPfad \DuennPunkt(-15,0) \DuennPunkt(-14,1) \DuennPunkt(-13,2) \DuennPunkt(-12,3) \DuennPunkt(-11,4) \DuennPunkt(-8,4) \DuennPunkt(-7,3) \DuennPunkt(-6,4) \DuennPunkt(-5,5) \DuennPunkt(-2,5) \DuennPunkt(-1,4) \DuennPunkt(0,3) \DuennPunkt(6,3) \DuennPunkt(7,2) \DuennPunkt(13,2) \DuennPunkt(14,1) \DuennPunkt(15,0) \textcolor{red} { \DuennPunkt(1,4) \DuennPunkt(2,3) \DuennPunkt(4,3) \DuennPunkt(5,4) \DuennPunkt(8,3) \DuennPunkt(9,2) \DuennPunkt(11,2) \DuennPunkt(12,3) \Pfad(0,3),34\endPfad \Pfad(4,3),34\endPfad \Pfad(7,2),34\endPfad \Pfad(11,2),34\endPfad \SPfad(2,3),34\endSPfad \SPfad(9,2),34\endSPfad } \textcolor{blue} { \Pfad(-15,0),111111111111111111111111111111\endPfad \Pfad(-15,0),2222222\endPfad \Pfad(-12,0),2222222\endPfad \Pfad(0,0),2222222\endPfad \Pfad(15,0),2222222\endPfad } \Label\advance\ydim by.25cm{\uparrow}(-7,1.7) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $h$}}}(-7,0.8) \Label\ensuremath{\mathbf p}\xspace{\downarrow}(-7,1.2) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $A$}}}(-13.5,6) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $\leftarrow$ matching $\rightarrow$}}}(-3.5,2.7) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $Q$}}}(-6,6) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $B$}}}(7,6) \Label\advance\ydim by.25cm{\textrm{{\footnotesize red $UD$s may be absent}}}(7,4.5) \Label\advance\ydim by.25cm{\textrm{{\small The $AQB$ decomposition of a path containing a $DUU$}}}(0,-3) \] \vspace*{2mm}
Call the path $AB$ the ($DUU$-avoiding) \emph{skeleton} of $P$ and $Q$ the ($DUU$-containing) \emph{body} of $P$. In case $P\in\ensuremath{\mathcal P}\xspace(DUU)$, its skeleton is itself and its body is empty. If the skeleton of $P$ is $UD$, then $P$ is uniquely determined by its skeleton and body. On the other hand, a skeleton of size $\ge 2$ and a nonempty body determine precisely two paths $P$ in $\ensuremath{\mathcal P}\xspace[DUU]$, obtained by inserting the body at either the top or the bottom of the first peak upstep in the skeleton, as illustrated.
\Einheit=0.4cm \[ \Pfad(-17,0),3334344344\endPfad \Pfad(-5,0),33\endPfad \Pfad(-2,2),34344344\endPfad \Pfad(7,0),333\endPfad \Pfad(11,3),4344344\endPfad \SPfad(-17,0),1111111111\endSPfad \SPfad(-5,0),11111111111\endSPfad \SPfad(7,0),11111111111\endSPfad \DuennPunkt(-17,0) \DuennPunkt(-16,1) \DuennPunkt(-15,2) \DuennPunkt(-14,3) \DuennPunkt(-13,2) \DuennPunkt(-12,3) \DuennPunkt(-11,2) \DuennPunkt(-10,1) \DuennPunkt(-9,2) \DuennPunkt(-8,1) \DuennPunkt(-7,0) \DuennPunkt(-5,0) \DuennPunkt(-4,1) \DuennPunkt(-3,2) \DuennPunkt(-2,2) \DuennPunkt(-1,3) \DuennPunkt(0,2) \DuennPunkt(1,3) \DuennPunkt(2,2) \DuennPunkt(3,1) \DuennPunkt(4,2) \DuennPunkt(5,1) \DuennPunkt(6,0) \DuennPunkt(7,0) \DuennPunkt(8,1) \DuennPunkt(9,2) \DuennPunkt(10,3) \DuennPunkt(11,3) \DuennPunkt(12,2) \DuennPunkt(13,3) \DuennPunkt(14,2) \DuennPunkt(15,1) \DuennPunkt(16,2) \DuennPunkt(17,1) \DuennPunkt(18,0) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $S$}}}(-12,-2) \Label\advance\ydim by.25cm{\textrm{{\footnotesize two possible $P$s}}}(6.5,-2) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $B$}}}(-2.5,1.8) \Label\advance\ydim by.25cm{\textrm{{\footnotesize $B$}}}(10.5,2.8) \Label\advance\ydim by.25cm{\textrm{{\small Recapturing a path $P\in \ensuremath{\mathcal P}\xspace[DUU]$ from a skeleton $S$ and body $B$}}}(0,-4) \] \vspace*{2mm}
Thus paths in $\ensuremath{\mathcal P}\xspace[DUU]$ correspond bijectively to triples $(S,B,pos)$ where $S\in\ensuremath{\mathcal P}\xspace(DUU)$ is the skeleton, $B\ne \epsilon$ is the body, and $pos=top$ or $pos=bot$ according as $B$ is positioned at the top or bottom of the first peak upstep in $S$, with the proviso that $pos=top$ if $S=UD$.
In these terms, $F$ can be specified on $\ensuremath{\mathcal P}\xspace[DUU]$ as follows. \begin{prop} \[ F\big( (S,B,pos)\big)= \begin{cases}
(F(S),F(B),\:pos\,) \textrm{ if height$(S)$ is odd, and} \\
(F(S),F(B),\:pos'\,) \textrm{ if height$(S)$ is even.} \end{cases} \] \end{prop} \textbf{Proof}\quad Let $h(P)$ denote the height of the terminal point of the lowest $DUU$ in $P\in \ensuremath{\mathcal P}\xspace[DUU]$. The result clearly holds for $h(P)=1$. If $h(P)\ge 2$, then $P$ has the form $U^{2}Q(UD)^{a}D(UD)^{b}D$ with $a,b\ge 0$ and $Q$ a Dyck path that ends $DD$. So $F(P)=U^{b+1}F(Q)(UD)^{a+1}D^{b+1}$ and $h(Q)=h(P)-2$. These two facts are the basis for a proof by induction that begins as follows. If $h(Q)=0$, then the body of $F(P)$ has position = bottom, while the body of $P$ has position bottom or top according as $a\ge 1$ or $a=0$. In the former case, the skeleton of $P$ has height 3 and position has been preserved, in the latter height 2 and position has been reversed. \qed
Iterating the skeleton-body-position decomposition on each component, a Dyck path has a forest representation as illustrated below. Each vertex represents a skeleton and is labeled with the corresponding composition. When needed, a color ($top$ or $bot$) is also applied to a vertex to capture the position of that skeleton's body.
\Einheit=0.4cm \[ \Pfad(-16,0),33344334443433343433444334344344\endPfad \SPfad(-16,0),11111111111111111111111111111111\endSPfad \DuennPunkt(-16,0) \DuennPunkt(-15,1) \DuennPunkt(-14,2) \DuennPunkt(-13,3) \DuennPunkt(-12,2) \DuennPunkt(-11,1) \DuennPunkt(-10,2) \DuennPunkt(-9,3) \DuennPunkt(-8,2) \DuennPunkt(-7,1) \DuennPunkt(-6,0) \DuennPunkt(-5,1) \DuennPunkt(-4,0) \DuennPunkt(-3,1) \DuennPunkt(-2,2) \DuennPunkt(-1,3) \DuennPunkt(0,2) \DuennPunkt(1,3) \DuennPunkt(2,2) \DuennPunkt(3,3) \DuennPunkt(4,4) \DuennPunkt(5,3) \DuennPunkt(6,2) \DuennPunkt(7,1) \DuennPunkt(8,2) \DuennPunkt(9,3) \DuennPunkt(10,2) \DuennPunkt(11,3) \DuennPunkt(12,2) \DuennPunkt(13,1) \DuennPunkt(14,2) \DuennPunkt(15,1) \DuennPunkt(16,0) \] \begin{center}
\begin{pspicture}(-6,-1.4)(6,3)
\psline(-4,1)(-3,0)(-2,1) \psline(1,2)(2,1)(3,2) \psline(2,2)(2,1)(3,0)(4,1) \rput(-3,0){$\bullet$} \rput(0,0){$\bullet$} \rput(3,0){$\bullet$}
\psdots(-4,1)(-2,1)(2,1)(4,1)(1,2)(2,2)(3,2)
\rput(-4.2,1.2){\textrm{{\footnotesize 11}}} \rput(-1.8,1.2){\textrm{{\footnotesize 11}}} \rput(2.5,1){\textrm{{\footnotesize 1}}}
\rput(0.9,2.2){\textrm{{\footnotesize 1}}} \rput(2,2.3){\textrm{{\footnotesize 1}}} \rput(3.1,2.2){\textrm{{\footnotesize 11}}}
\rput(4.1,1.2){\textrm{{\footnotesize 21}}}
\rput(0,-.3){\textrm{{\footnotesize 1}}} \rput(-3,-.3){\textrm{{\footnotesize 1}}} \rput(3,-.3){\textrm{{\footnotesize \quad 11, bot}}}
\rput(0,-1.3){\textrm{{\small A Dyck path and corresponding LCO forest}}}
\end{pspicture} \end{center} The 3 trees in the forest correspond to the 3 components of the Dyck path. The skeleton of the first component is $UD$ and its body has 2 identical components, each consisting of a skeleton alone, yielding the leftmost tree. The skeleton of the third component is $UUDD$ and its body is positioned at the bottom of its first peak upstep, and so on. Call this forest the LCO (labeled, colored, ordered) forest corresponding to the Dyck path. Here is the precise definition. \begin{defn}
An LCO forest is a labeled, colored, ordered forest such that
\begin{itemize}
\item the underlying forest consists of a list of ordered trees
(a tree may consist of a root only)
\item no vertex has outdegree $1$ $($i.e., exactly one child\,$)$
\item each vertex is labeled with a composition that ends $1$
\item each vertex possessing children and labeled with a composition
of size $\ge 2$ is also colored $top$ or $bot$
\item For each leaf $($i.e. vertex with a parent but no child\,$)$ that
is the rightmost child of its parent, its label composition has
size $\ge 2$. \end{itemize} \end{defn}
The \emph{size} of an LCO forest is the sum of the sizes of its label compositions. The correspondence Dyck path $\leftrightarrow$ LCO forest preserves size, and primitive Dyck paths correspond to one-tree forests. Thus we have
\begin{prop}
The number of LCO forests of size $n$ is the Catalan number
$C_{n}$, as is the number of one-tree LCO forests of size $n+1$. \qed \end{prop}
The $C_{4}=14$ one-tree LCO forests corresponding to primitive Dyck 5-paths are shown, partitioned into $F$-orbits. \Einheit=0.5cm \[ \Label\advance\ydim by.25cm{\rightarrow}(-13,5) \Label\advance\ydim by.25cm{\rightarrow}(-9,5) \Label\advance\ydim by.25cm{\rightarrow}(-5,5) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize $1^{5}$}}}(-15,5) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 221}}}(-11,5) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 311}}}(-7,5) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 41}}}(-3,5) \NormalPunkt(-15,5) \NormalPunkt(-11,5) \NormalPunkt(-7,5) \NormalPunkt(-3,5) \Label\advance\ydim by.25cm{\rightarrow}(13,5) \Label\advance\ydim by.25cm{\rightarrow}(9,5) \Label\advance\ydim by.25cm{\rightarrow}(5,5) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 1211}} }(15,5) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 1121}} }(11,5) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 2111}} }(7,5) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 131}} }(3,5) \NormalPunkt(15,5) \NormalPunkt(11,5) \NormalPunkt(7,5) \NormalPunkt(3,5) \Label\advance\ydim by.25cm{\rightarrow}(-12,0) \Label\advance\ydim by.25cm{\rightarrow}(-0,0) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize 11,\ bot}}}(-15,0) \Label\ensuremath{\mathbf p}\xspace{ \textrm{{\footnotesize 11,\ top}}}(-9,0) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 1}} }(-16,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 11}} }(-14,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 1}} }(-10,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 11}} }(-8,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 1}} }(-4,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 21}} }(-2,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 1}} }(2,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 111}} }(4,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 1}} }(8,1) \Label\advance\ydim 
by.25cm{\textrm{{\footnotesize 1}} }(9,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 11}} }(10,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 11}} }(16,1) \Label\advance\ydim by.25cm{\textrm{{\footnotesize 11}} }(14,1) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 1}}}(-3,0) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 1}}}(3,0) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 1}}}(9,0) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\footnotesize 1}}}(15,0) \textcolor{red} {\Pfad(0,3),2222\endPfad \Pfad(-6,-1),2222\endPfad \Pfad(6,-1),2222\endPfad \Pfad(12,-1),2222\endPfad} \Pfad(-16,1),4\endPfad \Pfad(-15,0),3\endPfad \Pfad(-10,1),4\endPfad \Pfad(-9,0),3\endPfad \Pfad(-4,1),4\endPfad \Pfad(-3,0),3\endPfad \Pfad(2,1),4\endPfad \Pfad(3,0),3\endPfad \Pfad(8,1),4\endPfad \Pfad(9,0),3\endPfad \Pfad(9,0),2\endPfad \Pfad(14,1),4\endPfad \Pfad(15,0),3\endPfad \DuennPunkt(-16,1) \NormalPunkt(-15,0) \DuennPunkt(-14,1) \DuennPunkt(-10,1) \DuennPunkt(-9,0) \DuennPunkt(-8,1) \DuennPunkt(-4,1) \NormalPunkt(-3,0) \DuennPunkt(-2,1) \DuennPunkt(2,1) \NormalPunkt(3,0) \DuennPunkt(4,1) \DuennPunkt(8,1) \DuennPunkt(9,1) \NormalPunkt(9,0) \DuennPunkt(10,1) \DuennPunkt(14,1) \DuennPunkt(16,1) \NormalPunkt(15,0) \Label\ensuremath{\mathbf p}\xspace{\textrm{{\small The LCO one-tree forests of size 5, partitioned into $F$-orbits}} }(0,-2) \] \vspace*{1mm}
We can now give an explicit description of $F$ on Dyck paths identified with LCO forests. On an LCO forest, $F$ acts as follows: \begin{itemize}
\item the underlying list of ordered trees is preserved
\item each label $\ensuremath{\mathbf c}\xspace$ becomes $F(\ensuremath{\mathbf c}\xspace)$ as defined in Prop.\:\ref{X}
\item each color ($top$/$bot$) is preserved or switched according
as the associated label \ensuremath{\mathbf c}\xspace has odd or even length. \end{itemize}
From this description and Cor. \ref{base2}, the size of the $F$-orbit of a Dyck path $P$ can be determined as follows. In the LCO forest for $P$, let $\ell$ denote the maximum size of a leaf label and $i$ the maximum size of an internal (i.e., non-leaf) label (note that an isolated root is an internal vertex). Let $k$ denote the number of bits in the base-2 expansion of $\max\{\ell-2,i-1\}$. Then the $F$-orbit of $P$ has size $2^{k}$.
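In code, the orbit-size rule is a one-liner. The following Python sketch (function and argument names are ours) takes the maximum size $\ell$ of a leaf label and the maximum size $i$ of an internal label and returns the orbit size $2^{k}$.

```python
def orbit_size(max_leaf_label_size, max_internal_label_size):
    """F-orbit size of a Dyck path, read off its LCO forest.

    Arguments: the maximum size l of a leaf label and the maximum size i
    of an internal-vertex label. k is the number of bits in the binary
    expansion of max(l - 2, i - 1), and the orbit has size 2^k.
    """
    m = max(max_leaf_label_size - 2, max_internal_label_size - 1)
    k = max(m, 0).bit_length()  # bit_length() of 0 is 0, giving orbit size 1 (a fixed point)
    return 2 ** k
```

For example, a forest whose largest leaf label has size 2 and whose internal labels all have size 1 yields $k=0$, i.e., a fixed point of $F$.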
It is also possible to specify orbit sizes in terms of subpath avoidance. For Dyck paths $Q$ and $R$, let $Q$ \emph{top} $R$ (resp. $Q$ \emph{bot} $R$) denote the Dyck path obtained by inserting $R$ at the top (resp. bottom) of the first peak upstep in $Q$. Then the $F$-orbit of a Dyck path $P$ has size $\le 2^{k}$ iff $P$ avoids subpaths in the set $\{Q\ top\ R,\ Q \ bot\ R\, :\, R\ne \epsilon,\ Q\in\ensuremath{\mathcal P}\xspace_{i}(DUU),\ 2^{k-1}+1 < i \le 2^{k}+1\}$. For $k\ge 1,$ listing these $Q$s explicitly would give $2^{2^{k}}-2^{2^{k-1}}$ proscribed patterns of the form $Q\ top\ R,\ R\ne \epsilon$ (and the same number of the form $Q\ bot\ R$). For $k=0$, that is, for fixed points of $F$, the proscribed patterns are $UP^{+}UDD$ and $UUP^{+}DD$ with $P^{+}$ a nonempty Dyck path, and avoiding the first of these amounts to avoiding the subpath $DUDD$.
The generating function\ for the number of $F$-orbits of size $\le 2^{k}$ can be found using the ``symbolic'' method \cite{flaj}. With $F_{k}(x),\: G_{k}(x),\: H_{k}(x)$ denoting the respective generating function s for general Dyck paths, primitive Dyck paths, and primitive Dyck paths that end $DD$ ($x$ always marking size), we find \begin{eqnarray*}
F_{k}(x) & = & 1+G_{k}(x)F_{k}(x) \\
G_{k}(x) & = & x
+\frac{x\left(1-(2x)^{2^{k}}\right)}{1-2x}\big(x+(F_{k}(x)-1)H_{k}(x)\big) \\
H_{k}(x) & = & G_{k}(x)-x \end{eqnarray*} leading to \[ F_{k}(x)=\frac{1-a_{k}-\sqrt{1-4x-\frac{\textrm{{\small $a_{k}(2-a_{k})x $}}}{\textrm{{\small $1-x$}}}}}{2x-a_{k}}, \] where $a_{k}=(2x)^{2^{k}+1}$. In this formulation it is clear, as expected, that $\lim_{k \to \infty}F_{k}(x)=\frac{1-\sqrt{1-4x}}{2x}$, the generating function\ for the Catalan numbers. The counting sequence for fixed points of $F$, with generating function\ $F_{0}(x)$, is sequence \htmladdnormallink{A086625}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A086625} in \htmladdnormallink{OEIS}{http://www.research.att.com/~njas/sequences/Seis.html} .
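As a sanity check, the system can be solved numerically by fixed-point iteration on truncated power series. The Python sketch below (names are ours) computes the coefficients of $F_{k}$; for truncation orders at most $2^{k}$ the coefficients agree with the Catalan numbers, as the limit suggests, and $F_{0}$ yields the first terms of the fixed-point sequence.

```python
def series_Fk(k, N):
    """Coefficients of F_k(x) up to degree N-1 by fixed-point iteration on
    F = 1 + G*F, G = x + pre*(x + (F-1)*H), H = G - x,
    where pre = x*(1-(2x)^(2^k))/(1-2x) = sum_{i < 2^k} 2^i x^(i+1)."""
    def mul(a, b):
        c = [0] * N
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    if i + j < N:
                        c[i + j] += ai * bj
        return c

    pre = [0] * N
    for i in range(2 ** k):
        if i + 1 < N:
            pre[i + 1] = 2 ** i
    F, G = [1] + [0] * (N - 1), [0] * N
    for _ in range(2 * N):      # each pass fixes at least one more coefficient
        H = G[:]
        H[1] -= 1               # H = G - x
        Fm1 = F[:]
        Fm1[0] -= 1
        inner = mul(Fm1, H)
        inner[1] += 1           # inner = x + (F-1)*H
        G = mul(pre, inner)
        G[1] += 1               # G = x + pre*inner
        F = mul(G, F)
        F[0] += 1               # F = 1 + G*F
    return F
```

For instance, `series_Fk(3, 8)` returns the first eight Catalan numbers, since $a_{k}$ only affects degrees above the truncation order.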
\vspace*{10mm}
{\Large \textbf{6 \ An Application} }\quad Ordered trees and binary trees are manifestations of the Catalan numbers \htmladdnormallink{A000108}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A000108} . Donaghey \cite{motz77,restricted77} lists several types of restricted trees counted by the Motzkin numbers \htmladdnormallink{A001006}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A001006} . In particular, the following result is implicit in item III\,C of \cite{restricted77}. \begin{prop}
The Motzkin number $M_{n}$ counts right-planted binary trees on
$n+1$ edges with no erasable vertices.
\label{erasable} \end{prop} Here, planted means the root has only one child, and erasable refers to a vertex incident with precisely 2 edges \emph{both of the same slope}---the vertex could then be erased, preserving the slope, to produce a smaller binary tree. The $M_{3}=4$ such trees on 4 edges are shown.
\Einheit=0.6cm \[ \Pfad(-7,2),3\endPfad \Pfad(-7,2),43\endPfad \Pfad(-7,0),3\endPfad \Pfad(-3,0),33\endPfad \Pfad(-3,2),4\endPfad \Pfad(-2,3),4\endPfad \Pfad(1,3),43\endPfad \Pfad(2,2),4\endPfad \Pfad(2,0),3\endPfad \Pfad(5,0),3\endPfad \Pfad(5,2),4\endPfad \Pfad(5,2),3\endPfad \Pfad(5,4),4\endPfad \DuennPunkt(-7,0) \DuennPunkt(-7,2) \DuennPunkt(-6,1) \DuennPunkt(-6,3) \DuennPunkt(-5,2) \DuennPunkt(-3,0) \DuennPunkt(-3,2) \DuennPunkt(-2,1) \DuennPunkt(-2,3) \DuennPunkt(-1,2) \DuennPunkt(1,3) \DuennPunkt(2,0) \DuennPunkt(2,2) \DuennPunkt(3,1) \DuennPunkt(3,3) \DuennPunkt(5,0) \DuennPunkt(5,4) \DuennPunkt(5,2) \DuennPunkt(6,1) \DuennPunkt(6,3) \Label\advance\ydim by.25cm{ \textrm{\small The right-planted binary 4-trees with no erasable vertices}}(0,-2) \]
\vspace*{2mm}
Translated to Dyck paths, Prop. \ref{erasable} is equivalent to \begin{prop}
$M_{n}$ counts Dyck $(n+1)$-paths that end $DD$ and avoid
subpaths $DUDU$ and $UUP^{+}DD$ with $P^{+}$ denoting a nonempty Dyck subpath.
\label{UUXDD} \end{prop} We will use $F$ to give a bijective proof of Prop. \ref{UUXDD} based on the fact \cite{udu} that $M_{n}$ also counts $DUD$-avoiding Dyck $(n+1)$-paths. (Of course, path reversal shows that $\#\,UDU$s and $\#\,DUD$s are equidistributed on Dyck paths.) Define statistics $X$ and $Y$ on Dyck paths by $X=\#\:DUD$s and $Y=\#\:DUDU$s $ +\ \#\:UUP^{+}DD$s + [path ends with $UD$] (Iverson notation) so that the paths in Prop. \ref{UUXDD} are those with $Y=0$. Prop. \ref{UUXDD} then follows from \begin{prop}
On Dyck $n$-paths with $n\ge 2$, $F$ sends the statistic $X$ to
the statistic $Y$. \end{prop} \textbf{Proof}\quad Routine by induction from the recursive definition of $F$. However, using the explicit form of $F$, it is also possible to specify precisely which $DUD$s correspond to each of the three summands in $Y$. For this purpose, given a $DUD$ in a Dyck path $P$, say $D_{1}U_{2}D_{3}$ (subscripts used simply to identify the individual steps), let $\ensuremath{S}\xspace(D_{1}U_{2}D_{3})$ denote the longest Dyck subpath of $P$ containing $D_{1}U_{2}D_{3}$ in its skeleton and let $h$ denote the height at which $D_{1}U_{2}D_{3}$ terminates in $\ensuremath{S}\xspace(D_{1}U_{2}D_{3})$. If $h$ is odd, $D_{1}U_{2}D_{3}$ is immediately followed in $P$ by $D_{4}$ or by $UD_{4}$ (it cannot be followed by $UU$). In either case, let $U_{4}$ be the matching upstep for $D_{4}$. Then the steps $D_{1},U_{2},D_{3},U_{4}$ show up in $F(P)$ as part of a subpath $U_{4}U_{2}P^{+}D_{3}D_{4}$ with $P^{+}$ a Dyck path that ends $D_{1}$. On the other hand, if $h$ is even, $D_{1}U_{2}D_{3}$ either (i) ends the path (here $\ensuremath{S}\xspace(D_{1}U_{2}D_{3})=P$ and $h=0$) or is immediately followed by (ii) $U_{4}$ or (iii) $D$. In case (iii), let $U_{4}$ be the matching upstep. Then $D_{1},U_{2},D_{3},U_{4}$ show up in $F(P)$ as a subpath in that order (cases (ii) and (iii)) or $F(P)$ ends $U_{2}D_{3}$ (case (i)). The details are left to the reader.
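For small $n$, Prop. \ref{UUXDD} can be confirmed by brute force. The Python sketch below (names are ours) enumerates Dyck paths as strings over $\{U,D\}$, tests the two proscribed patterns, and compares the count with the Motzkin recurrence $M_{m}=M_{m-1}+\sum_{k=0}^{m-2}M_{k}M_{m-2-k}$.

```python
def dyck_paths(n):
    """All Dyck paths of semilength n as strings over {'U', 'D'}."""
    out = []
    def rec(s, up, down):
        if up == n and down == n:
            out.append(s)
            return
        if up < n:
            rec(s + 'U', up + 1, down)
        if down < up:
            rec(s + 'D', up, down + 1)
    rec('', 0, 0)
    return out

def is_dyck(s):
    h = 0
    for c in s:
        h += 1 if c == 'U' else -1
        if h < 0:
            return False
    return h == 0

def has_UUPDD(s):
    """Does s contain a subpath UU P+ DD with P+ a nonempty Dyck path?"""
    for i in range(len(s) - 1):
        if s[i:i + 2] == 'UU':
            for j in range(i + 3, len(s) - 1):   # j >= i+3 keeps P+ nonempty
                if s[j:j + 2] == 'DD' and is_dyck(s[i + 2:j]):
                    return True
    return False

def count_avoiders(n):
    """Dyck (n+1)-paths that end DD and avoid DUDU and UU P+ DD."""
    return sum(1 for s in dyck_paths(n + 1)
               if s.endswith('DD') and 'DUDU' not in s and not has_UUPDD(s))

def motzkin(n):
    M = [1] * (n + 1)
    for m in range(1, n + 1):
        M[m] = M[m - 1] + sum(M[k] * M[m - 2 - k] for k in range(m - 1))
    return M[n]
```

For $n=1,\dots,5$ both counts give $1,2,4,9,21$.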
\vspace*{10mm}
{\Large \textbf{7 \ Statistics Suggested by LCO Forests} }\quad There are various natural statistics on LCO forests, some of which give interesting counting results. Here we present two such. First let us count one-tree LCO forests by size of root label. This is equivalent to counting primitive Dyck paths by skeleton size. Recall that the generalized Catalan number sequence $\big(C^{(j)}_{n}\big)_{n\ge 0}$ with $C^{(j)}_{n}:=\frac{j}{2n+j}\binom{2n+j}{n}$ is the $j$-fold convolution of the ordinary Catalan number sequence \htmladdnormallink{A000108}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A000108}. (See \cite{woan} for a nice bijective proof.) And, as noted above, in the skeleton-body-position decomposition of a primitive Dyck path, if the body is nonempty it contains a $DUU$ at (its own) ground level \ and ends $DD$. \begin{lemma}
The number of Dyck $n$-paths that contain a $DUU$ at ground level \ and end $DD$ is $C^{(4)}_{n-3}$. \end{lemma} \textbf{Proof}\quad In such a path, let $U_{0}$ denote the middle $U$ of the \emph{last} $DUU$ at ground level. The path then has the form $AU_{0}BD$ where $A$ and $B$ are arbitrary \emph{nonempty} Dyck paths, counted by $C^{(2)}_{n-1}$. So the desired counting sequence is the convolution of $\big(C^{(2)}_{n-1}\big)$ with itself and, taking the $U_{0}D$ into account, the lemma follows. \qed
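The lemma is easy to confirm by brute force for small $n$; in the sketch below (names are ours), a ground-level $DUU$ is one whose downstep returns the path to height 0.

```python
from math import comb

def dyck_paths(n):
    """All Dyck paths of semilength n as strings over {'U', 'D'}."""
    out = []
    def rec(s, up, down):
        if up == n and down == n:
            out.append(s)
            return
        if up < n:
            rec(s + 'U', up + 1, down)
        if down < up:
            rec(s + 'D', up, down + 1)
    rec('', 0, 0)
    return out

def has_ground_DUU(s):
    """Is there a DUU whose downstep lands at ground level?"""
    h = 0
    for i, c in enumerate(s):
        h += 1 if c == 'U' else -1
        if c == 'D' and h == 0 and s[i + 1:i + 3] == 'UU':
            return True
    return False

def C4(n):
    # generalized Catalan number C^{(4)}_n = (4/(2n+4)) * binom(2n+4, n)
    return 4 * comb(2 * n + 4, n) // (2 * n + 4)

lhs = [sum(1 for s in dyck_paths(n)
           if s.endswith('DD') and has_ground_DUU(s)) for n in range(3, 8)]
rhs = [C4(n - 3) for n in range(3, 8)]
```

Both sides give $1,4,14,48,165$ for $n=3,\dots,7$.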
The number of primitive $DUU$-avoiding Dyck $k$-paths is 1 if $k=1$, and $2^{k-2}$ if $k\ge 2$. But if $k\ge 2$, there are two choices (top/bottom) to insert the body. So the number of primitive Dyck $(n+1)$-paths with skeleton size $k$ is $2^{k-1}C^{(4)}_{n-k-2}$ for $1\le k \le n-2$ and is $2^{n-1}$ for $k=n+1$. Since there are $C_{n}$ primitive Dyck $(n+1)$-paths altogether, we have established the following identity. \begin{prop}
\[ C_{n} = 2^{n-1} + \sum_{k=1}^{n-2}\frac{2^{k}}{n-k}\binom{2n-2k}{n-2-k}. \] \end{prop} \qed
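The identity can also be checked numerically. In the sketch below (names are ours), each summand $2^{k}\binom{2n-2k}{n-2-k}/(n-k)=2^{k-1}C^{(4)}_{n-k-2}$ is an integer, so integer division is exact.

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def rhs_identity(n):
    # 2^(n-1) + sum_{k=1}^{n-2} (2^k / (n-k)) * binom(2n-2k, n-2-k)
    total = 2 ** (n - 1)
    for k in range(1, n - 1):
        total += 2 ** k * comb(2 * n - 2 * k, n - 2 - k) // (n - k)
    return total
```

For example, $n=4$ gives $8+4+2=14=C_{4}$.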
Lastly, turn an LCO forest into an LCO tree by joining all roots to a new root. The purpose of doing this is so that isolated roots in the forest will qualify as leaves in the tree. The symbolic method then yields \begin{prop}
The generating function\ for LCO trees by number of leaves $(x$ marks size, $y$
marks number of leaves\,$)$ is
\[
\frac{1-\sqrt{1-4x\:\frac{\textrm{{\small $1-x$}}}{\textrm{{\small $1-xy$}}}}}{2x}.
\] \end{prop} The first few values are given in the following table.
\[
\begin{array}{c|cccccccc}
n^{\textstyle{\,\backslash \,k}} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline
1& 1 & & & & & & & \\
2& 1 & 1 & & & & & & \\
3& 2 & 2 & 1 & & & & & \\
4& 4 & 6 & 3 & 1 & & & & \\
5& 8 & 17 & 12 & 4 & 1 & & & \\
6& 16 & 46 & 44 & 20 & 5 & 1 & & \\
7& 32 & 120 & 150 & 90 & 30 & 6 & 1 & \\
8& 64 & 304 & 482 & 370 & 160 & 42 & 7 & 1 \\
\end{array} \] \centerline{{\small number of LCO trees of size $n$ with $k$ leaves}}
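The first rows of the table can be reproduced by series expansion. Writing $z=x(1-x)/(1-xy)$, the stated generating function equals $(z/x)\,C(z)$, where $C$ is the Catalan generating function satisfying $C=1+zC^{2}$; the Python sketch below (names are ours) iterates this equation on truncated bivariate series.

```python
N = 7  # track coefficients of x^0 .. x^(N-1)

def mul(a, b):
    """Multiply truncated bivariate series stored as {(x_deg, y_deg): coeff}."""
    c = {}
    for (i, j), u in a.items():
        for (k, l), v in b.items():
            if i + k < N:
                c[(i + k, j + l)] = c.get((i + k, j + l), 0) + u * v
    return {key: v for key, v in c.items() if v}

# z = x(1-x)/(1-xy) = sum_{m>=0} (x^(m+1) - x^(m+2)) y^m, truncated
z = {}
for m in range(N):
    for d, sign in ((m + 1, 1), (m + 2, -1)):
        if d < N:
            z[(d, m)] = z.get((d, m), 0) + sign

# Catalan generating function C(z) via the equation C = 1 + z C^2
C = {(0, 0): 1}
for _ in range(2 * N):
    C = mul(z, mul(C, C))
    C[(0, 0)] = C.get((0, 0), 0) + 1

# the stated GF equals (z/x) * C(z); dividing z by x shifts x-degrees down
T = mul({(i - 1, j): v for (i, j), v in z.items()}, C)

def table_row(n):
    return [T.get((n, k), 0) for k in range(1, n + 1)]
```

Row sums recover the Catalan numbers, consistent with LCO trees of size $n$ being equinumerous with LCO forests of size $n$.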
\end{document}
\begin{document}
\title{One Pass ImageNet}
\begin{abstract}
We present the One Pass ImageNet (OPIN) problem, which aims to study the effectiveness of deep learning in a streaming setting.
ImageNet is a widely known benchmark dataset that has helped drive and evaluate recent advancements in deep learning. Typically, deep learning methods are trained on static data that the models have random access to, using multiple passes over the dataset with a random shuffle at each epoch of training. Such a data access assumption does not hold in many real-world scenarios where massive data is collected from a stream and storing and accessing all the data becomes impractical due to storage costs and privacy concerns. For OPIN, we treat the ImageNet data as arriving sequentially, and there is a limited memory budget to store a small subset of the data. We observe that training a deep network in a single pass with the same training settings used for multi-epoch training results in a huge drop in prediction accuracy. We show that the performance gap can be significantly decreased by paying a small memory cost and utilizing techniques developed for continual learning, despite the fact that OPIN differs from typical continual learning settings. We propose using OPIN to study resource-efficient deep learning.
\end{abstract}
\section{Introduction}
ImageNet \cite{imagenet_cvpr09} is one of the most influential benchmarks that has helped the progress of machine learning research. Tremendous progress has been made over the past decade in terms of models' accuracy on the ImageNet dataset. While improving ImageNet accuracy has been the major focus of the past, little effort has been made to study the resource efficiency of ImageNet supervised learning. Most existing supervised learning methods assume the data is i.i.d., static and pre-existing. They train models with multiple epochs, i.e., multiple passes over the whole dataset. Specifically, a top-performing Residual Neural Network \cite{He2016DeepRL} is trained for 90 epochs; the model needs to review each of the 1.2M examples 90 times. One natural question to ask is whether it is necessary to train a model with so many passes over the whole dataset.
We are also motivated by the fact that real-world data often arrives hourly or daily in a stream, and at a much larger scale. Maintaining all the data in storage can be expensive and is probably unnecessary. Additionally, real-world data sometimes contains private information from human users, which further restricts the possibility of saving all the data into separate storage. Without a pre-recorded dataset, the popular multi-epoch training method becomes impractical in these real-world scenarios.
We propose the One Pass ImageNet (OPIN) problem to study the resource efficiency of deep learning in a streaming setting with constrained data storage, where space complexity is considered an important evaluation metric.
The goal is to develop a system that trains a model where each example is passed to the system only once. There is a small memory budget but no restriction on how the system utilizes its memory; it could store and revisit some examples but not all. Unlike the task-incremental continual learning setting \cite{masana2021classincremental, taskonomycl}, the One-Pass ImageNet problem does not have a special ordering of the data, nor is there a specific distribution shift as in task-free continual learning \cite{cai2021online,aljundi2019taskfree,DBLP:journals/corr/abs-2106-12772}. That is, the data comes from a fixed uniform random order. We use ResNet-50 \cite{He2016DeepRL} in all our experiments, and leave the question of choice of architecture as future work.
We observe that training a ResNet-50 \cite{He2016DeepRL} in a single pass leads to only 30.6\% top-1 accuracy on the validation set, a significant drop from 76.9\% top-1 accuracy obtained from the common 90-epoch training\footnote{The top-1 accuracy is obtained with 1-crop evaluation in ImageNet.}. Inspired by the effectiveness of memory-based continual learning \cite{taskonomycl,buzzega2020rethinking,rolnick2019experience,bangKYHC21}, we propose an error-prioritized replay (EPR) method for One Pass ImageNet. The proposed approach utilizes a priority function based on predictive error. Results show that EPR achieves 65.0\% top-1 accuracy, improving over naive one-pass training by 34.4\%. Although it still performs 11.9\% lower than multi-epoch training in terms of accuracy, EPR shows superior resource efficiency, reducing total gradient update steps by 90\% and total required data storage by 90\%.
We believe OPIN is an important first step that allows us to understand how existing techniques can train models in terms of computation and storage efficiency, although it may not be the most realistic example for the data streaming setting. We hope our results could inspire future research on large scale benchmarks and novel algorithms on resource-efficient supervised learning.
\section{One Pass ImageNet} The One Pass ImageNet problem assumes that examples are sent in mini-batches and do not repeat. The training procedure ends when the whole dataset is revealed. No restriction is applied to how the trainer utilizes its own memory, so a memory buffer that records past examples is allowed. However, the amount of data storage is a major evaluation metric, measuring space efficiency.
We perform our study on a commonly used ImageNet solution: A ResNet-50 \cite{He2016DeepRL} trained over 90 epochs with cosine learning rate and augmented examples. We refer to this method as \textit{Multi-epoch} throughout the paper. The images are preprocessed by resizing to $250\times250$ and then performing augmentation into a size of $224\times224$. During training, the augmentation includes random horizontal flipping and random cropping. At test time, only center cropping is applied to the images.
\begin{table} \setlength{\tabcolsep}{9pt} \caption{Comparing One-Pass methods with Multi-epoch training in accuracy, storage, compute metrics. The naive One-Pass method results in a significant accuracy drop. A priority replay based method achieves better accuracy while maintaining the significant improvements in storage and compute metrics over multi-epoch training. Higher accuracy, lower storage and lower compute are better.} \label{tab:main} \centering \begin{tabular}{ cccccc }
\toprule
& \bf Accuracy (\%) $\uparrow$ & \bf Storage (\%) $\downarrow$ & \bf Compute (\%) $\downarrow$\\
\midrule \bf Multi-epoch (90 epochs) & 76.9 & 100 & 100\\ \bf One-Pass (Naive) & 30.6 & 0 & 1.1 \\ \bf One-Pass (Prioritized Replay) & 65.0 & 10 & 10 \\
\bottomrule \end{tabular} \end{table}
\noindent\textbf{Evaluation metrics.} While the standard ImageNet benchmark focuses on a model's overall accuracy, the One-Pass ImageNet problem aims at studying the learning capability under constrained space and computation, so the problem becomes essentially a multi-objective problem. We propose to evaluate training methods using three major metrics: (1) \textit{accuracy}, represented by the top-1 accuracy in the test set, (2) \textit{space}, represented by total additional data storage needed, and (3) \textit{compute}, represented by the total number of global steps for back-propagation. The space and compute metrics are calculated relative to the multi-epoch training method, i.e., both metrics for the Multi-epoch method are 100\%. The Multi-epoch method needs to save all the data into storage, so the space metric is measured by the size of the data storage divided by the size of the dataset. The Multi-epoch method needs to train a model with 90 epochs (or 100M global steps), so the compute metric is measured by the total number of back-propagation operations divided by 100M.
\noindent\textbf{Naive baseline.} A simple baseline method for the One Pass problem is to train a model with the same training configuration that multi-epoch training uses but with only a single epoch, which we call \textit{Naive One-Pass}. Since each example is seen only once, we replace the random augmentation with center cropping (which is used in the model evaluation) in this Naive baseline. Table \ref{tab:main} shows a comparison between multi-epoch training and naive one-pass training measured in three metrics: accuracy, space and compute. All metrics are in percentage. While Naive One-Pass is significantly worse than multi-epoch in terms of accuracy, its space and compute efficiency are both significantly higher. Naive One-Pass does not need to save any data examples into memory, and since it trains for only one epoch, the total number of training steps is $1/90\approx 1.1\%$ that of multi-epoch training.
\noindent\textbf{Problem Characteristics.} Here we list four properties of the OPIN problem as below: \begin{enumerate}
\item \textit{The cold-start problem}: The model starts from random initialization, so representation learning is challenging in OPIN, especially during the early stage of training.
\item \textit{The forgetting problem}: Each example is passed to the model only once. Even though the data is i.i.d., vanilla supervised learning is likely to incur forgetting of early examples.
\item \textit{A natural ordering of data}: No artificial order of the data is enforced, so the data can be seen as i.i.d., which is different from many existing continual learning benchmarks.
\item \textit{Multiple objectives}: The methods are evaluated using three metrics (accuracy, space and compute), so the goal is to improve all three metrics in a single training method. \end{enumerate}
\section{A Prioritized Replay Baseline} Memory replay is a common approach in continual learning \cite{rolnick2019experience}. Existing works have shown that memory replay is effective in sequential learning \cite{Hsu18_EvalCL, taskonomycl}. As a first investigation into the One Pass ImageNet problem, we study how a replay buffer could improve the overall performance in OPIN.
\subsection{Replay buffer}
A replay buffer is an extra memory that explicitly saves data. This memory usually has a very limited size. At each training step, the received mini-batch of examples is inserted into this replay memory. Since the buffer size is smaller than the whole dataset, the typical solution is to apply the reservoir sampling strategy \cite{reservoirsampling}, where each example is inserted with probability $p(n,m)=m/n$, where $m$ is the memory size and $n$ the total number of seen examples. In order to introduce a bias toward fresh examples\footnote{Similar ideas on encouraging recent examples in reservoir sampling can be found in \cite{biased-reservoir,osborne-etal-2014-exponential}.}, we incorporate a factor into the inclusion probability, i.e., $p(n,m)=\beta m/n$ (we choose $\beta=1.5$), so that more recent examples will be included in the memory.
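A minimal Python sketch of the biased reservoir insertion described above (class and method names are ours; we store plain Python objects, whereas the actual system stores encoded image bytes):

```python
import random

class ReplayBuffer:
    """Reservoir-style buffer; beta > 1 biases retention toward recent examples."""

    def __init__(self, capacity, beta=1.5):
        self.capacity = capacity
        self.beta = beta
        self.data = []
        self.seen = 0          # n: total number of examples observed so far

    def insert(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        elif random.random() < min(1.0, self.beta * self.capacity / self.seen):
            # inclusion probability p(n, m) = beta * m / n, capped at 1
            self.data[random.randrange(self.capacity)] = example

    def sample(self, k):
        return random.sample(self.data, k)
```

With $\beta=1$ this reduces to plain reservoir sampling; $\beta=1.5$ over-represents recent examples at the expense of older ones.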
At each training step, extra examples are sampled from the replay buffer. The model is trained on these examples together with the incoming mini-batch. Existing research on continual learning has shown that uniform sampling from a replay buffer is effective in many cases \cite{taskonomycl}; however, a common intuition is that examples are not equally important when being replayed. The idea of prioritized experience replay \cite{schaul2016prioritized} is to add a priority score to each example and sample from the buffer according to the probability distribution normalized from the priority scores.
In order to apply data augmentation to replay examples, we save JPEG-encoded image bytes into the replay buffer instead of image tensors, which turns out to be more space efficient (3x more examples can be saved under the same memory budget). We found that replaying multiple examples at each step can dramatically improve model accuracy, although with a trade-off in compute. For each step when we receive one mini-batch of images, we replay $k$ mini-batches from the replay buffer, which leads to $k+1$ epochs of compute effectively.
\subsection{Priority function} We study a prioritized replay method that uses the predictive error (loss value) as the priority function, which we call Error-Prioritized Replay (EPR). The priority function is defined as \begin{equation}
P(x,y)=1-\alpha e^{-\ell(x,y;\theta)} \end{equation} where $\ell(x,y;\theta)$ is the loss value given input example $x$, ground truth label $y$ and model parameters $\theta$. The smoothing factor $\alpha$ varies from 0 to 1 such that $\alpha(T)=1-\cos(T/T_\textrm{max})$ where $T$ is the current global step and $T_\textrm{max}$ is the maximum global step. We choose a smoothed $\alpha$ because the model's prediction is not trustworthy at the early stage of training. When the loss function is cross-entropy $\ell(x,y;\theta)=-\log f_y(x;\theta)$, it can also be shown that the priority value is $P(x,y)=1-\alpha f_y(x;\theta)$, which is based on the model's confidence in the ground truth label.
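The priority computation and priority-proportional sampling can be sketched as follows (function names are ours; the $\alpha$ schedule is taken verbatim from the text):

```python
import math
import random

def alpha(T, T_max):
    # smoothing factor schedule from the text: alpha(T) = 1 - cos(T / T_max)
    return 1.0 - math.cos(T / T_max)

def priority(loss, a):
    # P = 1 - a * exp(-loss); for cross-entropy, exp(-loss) is the model's
    # probability on the ground-truth label, so low-confidence examples
    # receive priority close to 1
    return 1.0 - a * math.exp(-loss)

def sample_by_priority(examples, priorities, k):
    # draw k replay examples with probability proportional to priority
    return random.choices(examples, weights=priorities, k=k)
```

Note that `random.choices` normalizes the weights internally, matching the description of sampling from the distribution normalized from the priority scores.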
\subsection{Importance weight} The examples sampled from the prioritized replay buffer change the data distribution. To simplify notation, we omit the label $y$ from the equations in this section. Let $p(x)$ be the distribution of the original data and $q(x)$ the distribution in the replay buffer. The original objective is $\mathbb E_p[\ell(x;\theta)]$. If $k$ mini-batches are sampled from the replay buffer at each step, directly combining replay examples with current examples minimizes $\mathbb E_p[\ell(x;\theta)]+k\mathbb E_q[\ell(x;\theta)]$. To correct the distribution shift, we apply an importance weight $w(x)=p(x)/q(x)$ to each replay example, since $\mathbb E_q[w(x)\ell(x;\theta)]=\mathbb E_p[\ell(x;\theta)]$. Assuming the original distribution is uniform ($p(x)=\text{const}$), the importance weight of each replay example is $w(x)\propto 1/P(x)$, inversely proportional to its priority value. The weights within each mini-batch are normalized to mean 1.
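The weight computation for one replayed mini-batch can be sketched as follows (stdlib-only; the function name is ours):

```python
def importance_weights(priorities):
    # w(x) is proportional to 1 / P(x) under a uniform original distribution;
    # weights are then normalized so the mini-batch has mean weight 1.
    raw = [1.0 / p for p in priorities]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]
```

Examples that were over-sampled (high priority) are thus down-weighted in the loss, and vice versa.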
\section{Experiments}
\subsection{Experimental setup} The model in our study is a 50-layer Residual Neural Network \cite{He2016DeepRL} (ResNet-50). We use cosine learning rate decay for all experiments, with an initial learning rate of 0.1. The model is optimized using stochastic gradient descent with Nesterov momentum 0.9, and the batch size is 128. We evaluate the different approaches with 10 different random orderings of the data.
\begin{table} \setlength{\tabcolsep}{16pt} \caption{One-Pass ImageNet evaluation: computation, memory and accuracy. The effective epoch count is the number of replay steps plus 1. The compute metric is the effective epoch count divided by 90 (i.e., relative to standard 90-epoch training). The multi-epoch results in the last column are obtained by training for the same number of effective epochs with cosine learning rate decay (this, however, requires saving the full dataset to storage).} \label{tab:evaluation} \centering
\begin{tabular}{ c|cccc|c }
\toprule
Effective& \multirow{ 2}{*}{Computation} & \multicolumn{3}{c|}{Top-1 Accuracy by Storage (Prioritized Replay)} & Multi-epoch\\
Epochs&& 1 \% &5 \% &10 \% & 100\% Storage\\
\midrule
2&$2/90\approx2.2\%$ & 44.7 & 45.1 & 45.7 & 46.1 \\
4&$4/90\approx4.4\%$& 55.5 & 57.1 & 57.2 & 59.0\\
6&$6/90\approx6.7\%$ & 58.9 & 61.3 & 62.2 & 64.1\\
9&$9/90=10\%$& 59.3&63.2&65.0&68.2\\
\bottomrule \end{tabular} \end{table}
\subsection{Results} The results are shown in Table \ref{tab:evaluation}. We performed experiments with the number of replay steps being 1, 3, 5 or 8 and the size of the replay buffer being 1\%, 5\% or 10\% of the dataset. The effective epoch count is the number of replay steps $k$ plus 1: upon receiving each new mini-batch, $k$ mini-batches of the same size are sampled from the replay buffer, so effectively $k+1$ back-propagation passes are performed for each new mini-batch. In addition to the One-Pass solutions, we also show the results of multi-epoch training with the same number of effective epochs; its learning rate schedule is adjusted to decay to the minimum at the end of the corresponding epoch. We observe multiple trends from this table: \begin{enumerate}
\item Prioritized replay with 10\% memory achieves performance very close to multi-epoch training under the same computational cost, while the multi-epoch method uses the full dataset and thus requires large data storage. Although it is unknown whether multi-epoch training gives a performance upper bound, we believe it is a strong reference target.
\item A 1\% data storage already gives a strong starting point for prioritized replay: 1\% storage (equivalent to 100 mini-batches) dramatically improves the naive One-Pass performance by 28.7\%. According to the table, the accuracy increase from 1\% to 10\% storage is 1.0\% for 2 epochs, 1.7\% for 4 epochs, 3.3\% for 6 epochs and 5.7\% for 9 epochs.
\item Larger buffers yield larger gains when more replay steps are used. Going from 5\% to 10\% storage, the model accuracy increases by 0.6\% and 0.1\% for replay steps 1 and 3 respectively, while it increases by 0.9\% for replay step 5. The model accuracy saturates quickly if one increases only the storage size or only the number of replay steps; increasing both could potentially yield a much bigger accuracy boost. \end{enumerate}
\subsection{Discussion}
\noindent\textbf{Priority function.} As recent literature suggests \cite{taskonomycl,chaudhry2020using}, we also observe competitive results using vanilla replay (with a uniformly sampled memory) on the One-Pass ImageNet problem. Specifically, a uniform memory leads to 64.7\% top-1 accuracy with 8 replay steps and 10\% storage, slightly worse than the prioritized replay method (65.0\%); the standard error of the mean is 0.07\% for both. Although the improvement is small, we believe the potential of prioritized replay is not fully explored in this problem, and we leave the design of better prioritization schemes as future research.
\noindent\textbf{Importance weights.} To study the effectiveness of importance weights applied to the loss function, we evaluate prioritized replay without importance weights at 1\% storage. With 5 replay steps (6 effective epochs), removing the importance weights results in 58.0\% top-1 accuracy, 0.9\% lower than with them. With 8 replay steps, removing them results in 58.3\% accuracy, 1.0\% lower. This accuracy gap arises because the distribution of examples in the replay buffer differs from that of the evaluation data set.
\noindent\textbf{Priority updating.} An interesting and somewhat surprising result we obtained is that updating the priorities of the examples retrieved from the replay buffer does not lead to better accuracy. An experiment using 5 replay steps and 1\% storage shows that, after an example is replayed from the buffer, immediately updating its priority in the replay buffer using the latest model parameters results in a top-1 accuracy of 58.8\% while we can achieve 58.9\% without updating.
\section{Related Work} The OPIN problem is related to continual learning \cite{DBLP:journals/corr/abs-2106-12772,yin2020sola,borsos2020coresets,isele2018selective,van2020brain,chaudhry2018efficient,rusu2016progressive} and incremental learning \cite{Castro_2018_ECCV}. Existing continual learning methods are often evaluated on benchmarks composed of standard supervised learning datasets such as MNIST and CIFAR \cite{Hsu18_EvalCL,kirkpatrick2017overcoming,farajtabar2020orthogonal}. Recent efforts build benchmarks upon large scale datasets such as ImageNet \cite{wu2019large} and Taskonomy \cite{taskonomycl}. Class-incremental ImageNet \cite{wu2019large} is probably the continual learning benchmark closest to ours: it splits ImageNet into multiple tasks, where each task consists of examples labeled with a certain subset of classes. The data for each task is given to the model all at once, so the model can make multiple passes over the data within a task. OPIN differs in that it has no notion of tasks and the data follows a natural order instead of a manually designed class-incremental ordering.
The OPIN problem is also related to stream data learning \cite{journals/sigkdd/GomesRBBG19,Souza2020ChallengesIB,dieuleveut2016nonparametric} and online learning \cite{10.1109/TIT.2004.833339, 6842642,onlinesgd}. While there have been many benchmarks utilizing real-world streaming data \cite{Souza2020ChallengesIB}, they often have smaller scale, less complex features and a limited number of classes, and their solutions are often limited to linear or convex models. The One-Pass ImageNet problem can also contribute to stream data learning research from the perspective of deep neural architectures and large-scale, high-dimensional data.
\section{Conclusion} We presented the One-Pass ImageNet (OPIN) problem, which aims at studying not only the predictive accuracy but also the space and compute efficiency of a supervised learning algorithm. The problem is motivated by real-world applications that come with a stream of extremely large-scale data, which makes it impractical to save all the data into dedicated storage and iterate over it for many epochs. We proposed an Error-Prioritized Replay baseline for this problem, which achieves 65.0\% top-1 accuracy while reducing the required data storage by 90\% and the total gradient-update compute by 90\%, compared against the popular 90-epoch ImageNet training procedure. We hope OPIN will inspire future research on improving resource efficiency in supervised learning.
\paragraph{Acknowledgement.} The authors would like to thank Razvan Pascanu, Murray Shanahan, Eren Sezener, Jianan Wang, Talfan Evans, Sam Smith, Soham De, Yazhe Li, Amal Rannen-Triki, Sven Gowal, Dong Yin, Arslan Chaudhry, Mehrdad Farajtabar, Timothy Nguyen, Nevena Lazic, Iman Mirzadeh, Timothy Mann and John Maggs for their meaningful discussions and contributions.
\end{document} |
\begin{document}
\begingroup \centering {\LARGE Discrete All-Pay Bidding Games }\\[1.5em] \large Michael Menz\Mark{1}, Justin Wang\Mark{2}, Jiyang Xie\Mark{3}\\[1em] \begin{tabular}{*{3}{>{\centering}p{.3\textwidth}}} \Mark{1}Yale University & \Mark{2}Yale University & \Mark{3}Yale University \tabularnewline \url{[email protected]} & \url{[email protected]} & \url{[email protected]} \end{tabular}\par \endgroup
\begin{abstract} In an all-pay auction, only one bidder wins but all bidders must pay the auctioneer. All-pay bidding games arise from attaching a similar bidding structure to traditional combinatorial games to determine which player moves next. In contrast to the established theory of single-pay bidding games, optimal play involves choosing bids from some probability distribution that will guarantee a minimum probability of winning. In this manner, all-pay bidding games wed the underlying concepts of economic and combinatorial games. We present several results on the structures of optimal strategies in these games. We then give a fast algorithm for computing such strategies for a large class of all-pay bidding games. The methods presented provide a framework for further development of the theory of all-pay bidding games. \end{abstract}
\section{Introduction}
At the conclusion of an all-pay auction, all bidders must pay the bids they submitted, with only the highest bidder receiving the item. With this idea in mind, one can play a variant of a two-player game using an all-pay auction to decide who moves next instead of simply alternating between players. For example, one could play all-pay Tic-Tac-Toe with $100$ chips. Each round both players privately record their bids and then simultaneously reveal them. If player $A$ bids $40$ and his opponent bids $25$, player $A$ would get to choose a square to mark, and the next round of bidding would begin with player $A$ having $85$ chips and player $B$ having $115$ chips. Note that the chips have no value outside the game and only serve to determine who moves; the ultimate goal is still just to get three-in-a-row.
Another variant of the game could have only the player who wins the move pay his/her bid, i.e. deciding who moves next using a first-price auction. These games were first studied formally in the 1980s by Richman, whose work has since been greatly expanded upon. Intuitively, there is less risk in these ``Richman games'' for the player losing the bid. If your opponent bids $100$ for a certain move, it makes no difference whether your bid was $99$ or $0$; all that matters is that your opponent's bid was higher. A surprising consequence of this single-pay structure is that for every state of a game, there exists a ``Richman value'' $v$ for each player that represents the proportion of the total chips that player would need to hold in order to have a deterministic winning strategy. In this situation, the player with the winning strategy can tell her opponent what bid she will be making next without affecting her ability to ultimately win. For zero-sum games, this means that unless a player's chip ratio is exactly $v$, one of the players must have such a winning strategy \citep{LLPU96, LL+99}.
Our objective is to begin the formal study of all-pay bidding games. Returning to the above example where your opponent bids $100$ chips and you are indifferent between bidding $99$ and $0$, it is clear this is no longer true for an all-pay bidding mechanism. You would be very disappointed had you bid $99$, as your opponent would be paying just $1$ chip on net to make a move. Had you bid $0$, though, you might feel pretty good about not moving this current turn, as the $100$ extra chips may make a bigger difference for the rest of the game. Thus, there are at least two bidding scenarios which intuitively seem like very good positions to be in: winning the bid by a relatively small number of chips or losing the bid by a relatively large number of chips. This behavior suggests that, unlike in Richman games, in all-pay bidding games one of the players will not necessarily have a deterministic winning strategy. Instead, players must randomize their bidding in some way. Thus, we must appeal to the concept of mixed bidding strategies in Nash equilibria.
\subsection{A Game of All-Pay Bidding Tic Tac Toe} Before presenting formal definitions and results, we provide a sample all-pay bidding game to illustrate some of the main features of playing these games. Alice and Bob, each with $100$ chips, are playing all-pay bidding Tic-Tac-Toe. Each turn Alice and Bob secretly write down a bid, a whole number less than or equal to their total number of chips. They then reveal their bids and whoever bid more gets to decide who makes the next move. We say a player has \textbf{advantage} if, when players bid the same amount, that player gets to decide who makes the next move. The question of deciding how to assign advantage is one we encountered early on. For our games, we give advantage to the player with more chips, then arbitrarily let Alice have advantage when Alice and Bob have the same number of chips. A number of other mechanisms would also suffice, such as alternating advantage or having a special ``tie-breaking'' chip that grants advantage and is passed each time it is used. Our choice was made in the interest of computational simplicity and to eventually allow extension to real-valued bidding.
\textit{First Move.} Both players have $100$ chips. Alice bids $25$, Bob bids $40$. Bob wins the right to move and plays in the center of the board.
\begin{center} \includegraphics[width=0.7in]{samplegame1.png} \end{center}
\textit{Second Move.} Alice has $115$ chips, Bob has $85$ chips. Alice wants to win this move to keep pace with Bob, but also does not see why it should be worth more than the first, so she only slightly increases her bid to $30$. Bob, thinking that Alice may want to win this move more, is content to let Alice win and collect chips by bidding $0$. Alice wins the right to move and plays in the top-left corner of the board.
\begin{center} \includegraphics[width=0.7in]{samplegame2.png} \end{center}
\textit{Third Move.} Alice has $85$ chips, Bob has $115$ chips. Alice bids $45$, Bob bids $40$, so Alice wins the right to move and plays in the top-center of the board.
\begin{center} \includegraphics[width=0.7in]{samplegame3.png} \end{center}
\textit{Fourth Move.} Alice has $80$ chips, Bob has $120$ chips. Alice is one move away from winning and decides to risk it and bid all of her $80$ chips. Unfortunately for her, Bob has guessed her move and has himself bid $80$ as well. Because Bob has more chips overall, he uses his advantage to win the tie and plays in the top-right corner of the board, blocking Alice's victory and setting himself up for one.
\begin{center} \includegraphics[width=0.7in]{samplegame4.png} \end{center}
\textit{Fifth Move.} Alice has $80$ chips, Bob has $120$ chips. Bob has more chips and is just a move away from winning, so he can bid everything, play in the bottom-left corner and win the game.
In normal Tic-Tac-Toe, both players can guarantee a draw by playing well, but as we see from this example, the result of a game of all-pay Tic-Tac-Toe involves far more chance.
For example, at the fourth move in the above game, Alice could have guessed Bob might bid $80$ and chosen to ``duck'' by bidding $0$. In this case Bob would win the move and play as before, but now the chip counts would be $160$ to $40$ in Alice's favor, and Alice can bid $40$ and then $80$ to win the next two moves and win in the left column. It is easy to see that if a player knows what his opponent will bid at each move, he can win the game easily. Thus, in the vast majority of all-pay bidding games, optimal play cannot be deterministic.
Though we do not return to Tic-Tac-Toe in this paper, it served as a test game for much of our research. Using our results, we built a computer program to play all-pay bidding Tic-Tac-Toe optimally. The program can be played against at \url{http://biddingttt.herokuapp.com}. The theory behind this program, which is not specific to Tic-Tac-Toe, will be the focus of the rest of the paper.
\subsection{Overview of Results}
Our ultimate goal is to characterize the optimal strategies for a general class of all-pay bidding games. The game consists of iterations of both players bidding for the right to move followed by one of the players making a move. In turn, an optimal strategy will also have two parts: the bid strategy and the move strategy. For a given position in the game (e.g. a configuration of the Tic-Tac-Toe board) and chip counts for each of the players (e.g. Alice has $115$ chips, Bob has $85$ chips), the bid strategy must tell players how to best randomize their bets (e.g. Alice bids $0$ chips half the time, $80$ chips half the time) while the move strategy must tell whoever wins the bid the best move to make (e.g. where to play on the Tic-Tac-Toe board).
The problem of determining the move strategy is largely combinatorial in nature and remains similar to its analog in Richman games. We can still represent the space of game states as a directed graph, but there is not always a single best move that each player can make upon winning the bid; the best move can also depend on each player's chip count moving forward.
The focus of our work, then, will be on determining the optimal bidding strategy for any game position and chip counts. Naturally, this should depend on a player's chances of winning in any of the possible subsequent game situations (i.e. after a single move and updated chip counts). For purposes of initial analysis, we will assume that these future winning probabilities are known, and see how the bidding strategy can be determined from this information. Then, by using the recursive nature of the directed graph, we will be able to start from the ``win'' and ``loss'' nodes (where the probability is just $1$ or $0$) to find the optimal bidding strategies and winning probabilities for any game situation. For the rest of this paper, we will often refer to a bidding strategy as just a ``strategy'' when it is clear that the focus is just on the bidding side of the game. Here, a strategy will be a probability vector where the $i$th coordinate corresponds to the probability a player will bid $i$ chips. Further, a Nash equilibrium for a game situation will just be a pair of strategies so that neither player has an incentive to deviate. This means that each player's strategy will maximize his/her minimum probability of ultimately winning from the next turn of the game.
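For intuition, the maximin mixed strategy at a single bidding step can be approximated with any generic zero-sum matrix-game solver. The stdlib-only sketch below uses fictitious play as a stand-in for an exact linear-programming solve; it is not the algorithm developed in this paper, and all names are ours:

```python
def solve_matrix_game(M, iters=20000):
    # Approximate the value and maximin mixed strategies of the zero-sum game
    # with payoff matrix M (row player maximizes) by fictitious play: each
    # player repeatedly best-responds to the opponent's empirical mixture.
    rows, cols = len(M), len(M[0])
    row_counts, col_counts = [0] * rows, [0] * cols
    r, c = 0, 0
    for _ in range(iters):
        row_counts[r] += 1
        col_counts[c] += 1
        r = max(range(rows),
                key=lambda i: sum(M[i][j] * col_counts[j] for j in range(cols)))
        c = min(range(cols),
                key=lambda j: sum(M[i][j] * row_counts[i] for i in range(rows)))
    s_row = [x / iters for x in row_counts]
    s_col = [x / iters for x in col_counts]
    value = sum(s_row[i] * M[i][j] * s_col[j]
                for i in range(rows) for j in range(cols))
    return value, s_row, s_col
```

On matching pennies, for instance, the empirical mixtures approach the uniform equilibrium and the value approaches $1/2$.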
It quickly becomes apparent that a naive recursive algorithm using linear programming is feasible only for games with very few moves. Thus, in the interest of being able to practically calculate the optimal bidding strategies for general games, we prove some structural results on the Nash equilibria. In particular, useful structure arises when we study a particular class of games that we dubbed ``precise'', which roughly speaking are games where having one more chip is strictly better than not. The key result is a surprising relationship between opposing optimal strategies that allows one to immediately write a Nash equilibrium strategy for the player without advantage if given a Nash equilibrium strategy for the player with advantage.
This relationship, which we call the Reverse Theorem (Theorem \ref{reverse_thm}), is a critical step toward the calculation of optimal strategies for precise games. Further, by assigning an arbitrarily small value in the game to each chip, we obtain a precise game that is very similar to the original game. We show that the optimal strategies we can calculate for these new precise games converge to optimal strategies for the possibly imprecise original games. Our theoretical results ultimately culminate in a fast algorithm for computing optimal probabilistic bidding strategies. Together with a move strategy for the combinatorial side of the game, this gives a complete characterization of optimal play for all-pay bidding games.
\section{Strategies in precise games}\label{tools}
Let $G_{a, b}$ denote a single turn of a two-player all-pay bidding game $\mathcal{G}$ where player $A$ is endowed with $a$ chips and player $B$ is endowed with $b$ chips. The underlying combinatorial game $\mathcal{G}$ is a two-player zero-sum game, represented by an acyclic, colored, directed graph with two marked vertices, $\mathcal{A}$ and $\mathcal{B}$. The game begins by placing a token at some starting vertex. At each turn, a player moves the token to an adjacent vertex. Player $A$ wins if the token reaches $\mathcal{A}$ and player $B$ wins if the token reaches $\mathcal{B}$. The graph being colored means that each edge has one of two colors, say red and blue, such that $A$ can only move the token along red edges and $B$ can only move the token along blue edges. To ensure consistency in the bidding strategy from turn to turn, we seek to avoid situations where the winner of a bid can be put in zugzwang, i.e. where it would be better not to move at all. Thus, the winner of the bid, rather than simply moving next, gets to determine who moves next. With this condition, $\mathcal{G}$ can be an asymmetric game where zugzwang is possible, like chess and many other popular two-player games.
The \textbf{payoff}, or value of the game, for player A at $G_{a, b}$ is denoted by $v_A(G_{a, b}) \in [0,1]$ and is equal to the probability that player A wins the game under optimal play. That is, we set $v_A(\mathcal{A}) = 1$ and $v_A(\mathcal{B}) = 0$ and calculate payoffs recursively. Similarly, let $v_B(G_{a, b})$ denote the probability that player $B$ wins the game. Often, when the chip counts or specific combinatorial game are not relevant to the discussion, the payoffs will be shortened to $v_A$ and $v_B$. Note that $v_B = 1-v_A$ as we only study games that cannot end in ties (for the game of Tic-Tac-Toe above, we can arbitrarily let one of the players win all draws).
Thus a \textbf{payoff matrix} for player $A$ in $G_{a, b}$ is denoted by $M_A(G_{a, b})$ and is given by \[ (M_A)_{i,j} = \left\{ \begin{array}{cc} \max(\max_{G' \in \mathcal{S}_A(G)} v_A(G'_{a-j+i, b-i+j}), \min_{G' \in \mathcal{S}_B(G)} v_A(G'_{a-j+i, b-i+j})) & \text{if $A$ wins the bid}\\ \min(\min_{G' \in \mathcal{S}_B(G)} v_A(G'_{a-j+i, b-i+j}), \max_{G' \in \mathcal{S}_A(G)} v_A(G'_{a-j+i, b-i+j})) & \text{if $B$ wins the bid}\end{array}\right. \] where $\mathcal{S}_A(G)$ and $\mathcal{S}_B(G)$ are the sets of game positions that can be moved to from $G$ by $A$ and $B$ respectively. The $(i, j)$ entry corresponds to player $A$'s probability of winning the game after $A$ bids $j$ and $B$ bids $i$. Note that this is well-defined because the game is zero-sum: by moving to the game state that minimizes player $A$'s payoff, player $B$ is simultaneously maximizing his own payoff (and vice-versa). Similarly, let $M_B(G_{a, b})$ denote the payoff matrix for player $B$.
We notice that if player $A$ bids $x$ and player $B$ bids $y$, this is equivalent to player $A$ bidding $x+z$ and player $B$ bidding $y+z$ for any $z$ because the players are paying each other. Thus, we have that payoff matrices are Toeplitz, or diagonal-constant. We will write player $A$'s and player $B$'s payoff matrices for $G_{a, b}$ as \[ \left( \begin{array}{cccc} \alpha_0 & \alpha_1 & \ldots & \alpha_a \\ \alpha_{-1} & \alpha_0 & \ldots & \alpha_{a-1} \\
\vdots & \vdots & \vdots & \vdots \\ \alpha_{-b} & \alpha_{-b+1} & \ldots & \alpha_{-b+a} \end{array} \right) \hspace{1cm}\text{and}\hspace{1cm} \left( \begin{array}{cccc} \beta_0 & \beta_1 & \ldots & \beta_b \\ \beta_{-1} & \beta_0 & \ldots & \beta_{b-1} \\ \vdots & \vdots & \vdots & \vdots \\ \beta_{-a} & \beta_{-a+1} & \ldots & \beta_{-a+b} \end{array} \right)\] respectively.
We pause to consider a simple example. Let the underlying game be one where player $A$ needs to make two moves to win, while player $B$ needs to make only one more move to win. Suppose player $A$ has $5$ chips while player $B$ has $3$ chips. Then we would get the following payoff matrices for players $A$ and $B$, respectively: \[ \left( \begin{array}{cccccc} 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 & 1 & 0\\ 0 & 0 & 0 & 1 & 1 & 1 \end{array} \right) \hspace{1cm}\text{and}\hspace{1cm} \left( \begin{array}{cccc} 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \end{array} \right)\]
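The Toeplitz structure makes payoff matrices cheap to build from the diagonal values alone. A sketch (names ours), checked against player $A$'s matrix in the example above:

```python
def toeplitz_payoff(alpha, rows, cols):
    # Entry (i, j) is alpha[j - i]: the payoff depends only on the difference
    # between player A's bid j and player B's bid i.
    return [[alpha[j - i] for j in range(cols)] for i in range(rows)]
```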
A \textbf{strategy} for player $A$ in $G_{a, b}$ is given by an $(a+1)$-dimensional column vector with non-negative entries that sum to 1. The $i$-th entry of this vector (where we start indexing at $0$) gives the probability that player $A$ will bid $i$ chips. Similarly, a \textbf{strategy} for player $B$ in $G_{a,b}$ is given by a $(b+1)$-dimensional column vector satisfying the same conditions. We denote a Nash equilibrium strategy in the game $G_{a,b}$ by $S_A(G_{a,b})$ for player $A$ and by $S_B(G_{a,b})$ for player $B$. Oftentimes we will not be explicit about the size of these vectors; it should be clear from context.
Note that the $i$th row of $M_A$ corresponds to the payoffs of each of $A$'s pure strategies if her opponent $B$ bids $i$. Letting $A_i$ be the $i$th row of $M_A$, we have $A_i \cdot S_A= a_{i0}(S_A)_0 + \cdots + a_{ia}(S_A)_a= (P_A)_i$, a weighted average of $A$'s pure payoffs when $B$ bids $i$. Thus, $(P_A)_i$ is player $A$'s probability of winning if her strategy is $S_A$ and her opponent purely bids $i$. For example, if we have \[ M_A\cdot S_A= \left( \begin{array}{cc} 1 & \frac{1}{2} \\ 0 & 1 \end{array}\right) \cdot \left(\begin{array}{c} \frac{1}{2} \\ \frac{1}{2} \end{array} \right)= \left(\begin{array}{c} \frac{3}{4} \\ \frac{1}{2} \end{array} \right), \] this means by playing $S_A$, player $A$ wins $\frac{3}{4}$ of the time if player $B$ only bids $0$ and wins $\frac{1}{2}$ of the time if player $B$ only bids $1$.
Now, if player $B$'s strategy is $S_B$, $S_B^TM_AS_A=(S_B)_0(P_A)_0+\cdots+(S_B)_b(P_A)_b$, another weighted average of $A$'s payoffs for each of $B$'s pure strategies. Thus, $S_B^TM_AS_A$ is exactly $A$'s payoff if she plays $S_A$ and her opponent plays $S_B$. $S_A^TM_BS_B$ is $B$'s payoff in the same situation. Continuing with the above example, if we now let $S_B^T=\left(\begin{array}{cc} \frac{1}{2}& \frac{1}{2} \end{array}\right)$ then $S_B^TM_AS_A=\frac{1}{2}\cdot\frac{3}{4}+\frac{1}{2}\cdot\frac{1}{2}=\frac{5}{8}$. So given strategies $S_A$ and $S_B$ for players $A$ and $B$, player $A$ has a $\frac{5}{8}$ probability of winning.
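This bilinear payoff computation can be checked directly on the numeric example above (a sketch; `expected_payoff` is our name):

```python
def expected_payoff(S_B, M_A, S_A):
    # Computes S_B^T . M_A . S_A: player A's winning probability when the
    # players commit to mixed strategies S_A and S_B.
    return sum(S_B[i] * M_A[i][j] * S_A[j]
               for i in range(len(S_B)) for j in range(len(S_A)))
```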
We compile these results in the lemma below.
\begin{lemma}[]\label{basics} Let $ M_A $ and $ M_B $ be payoff matrices for players $ A $ and $ B $, respectively, in $ G_{a, b} $. Then the following statements are true. \begin{itemize} \item[(a)] The diagonals of $ M_A $ and $ M_B $ are constant, i.e. the payoff matrices are Toeplitz.
\item[(b)] Let $\mathbf{1}$ be the appropriately sized matrix whose entries are all 1. Then $ M_B =\mathbf{1} -M_A ^ T $.
\item[(c)] Suppose $ (S_A, S_B) $ is a Nash equilibrium. Then $ (M_B S_B)_i = v_B $ if $ (S_A)_i\neq 0 $ and $ (M_A S_A)_i = v_A $ if $ (S_B)_i\neq 0 $. \end{itemize} \end{lemma}
\noindent This lemma provides the basic structure from which many of our main proofs will follow.
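Part (b) of the lemma can be verified mechanically on the example matrices from the previous section (a sketch; the function name is ours):

```python
def complement_transpose(M_A):
    # Lemma part (b): M_B = 1 - M_A^T, since the game is zero-sum and
    # tie-free, so B wins exactly when A does not.
    rows, cols = len(M_A), len(M_A[0])
    return [[1 - M_A[i][j] for i in range(rows)] for j in range(cols)]
```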
It is clear that $v_A(G_{a+1, b}) \ge v_A(G_{a, b})$, since player $A$ can always bid as if he did not have the extra chip. We now define a class of games pivotal to our analysis in which this inequality is strict. Formally, a game $G$ is called \textbf{precise} if in every successor state to $G$, it is strictly better to have one more chip.
\begin{remark}\label{precise_matrices} We note that precision guarantees a certain strict monotonicity among the entries of the payoff matrices: winning the bid by one fewer chip is always strictly preferable, as is losing it by one more chip. Thus, for the player with advantage, $\alpha_i > \alpha_j$ for $0 \le i < j$ and $\alpha_i > \alpha_j$ for $i < j < 0$. A similar relationship holds for the player without advantage, except that a tied bid is now a loss, so $\beta_0 < \beta_1$ and $\beta_0 < \beta_{-1}$. \end{remark}
\begin{define}
A strategy $S$ has \textbf{length} $\ell=\ell(S)$ if $S_{\ell-1} \ne 0$ and $S_{m} = 0$ for all $m \geq \ell$. A strategy $S$ is \textbf{gap-free} if whenever $S_i \ne 0$ and $S_j \ne 0$, then $S_k \ne 0$ for all $i \le k \le j$. \end{define}
The definition of length encapsulates the observation that unless the game is close to completion, players will never bid a large proportion of their chips. The second definition may seem arbitrary at the moment, but it plays a pivotal role in the following proposition and will greatly simplify the language throughout the paper.
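These definitions translate directly into predicates on a strategy vector (a sketch; names ours):

```python
def length(S):
    # Largest ell with S[ell - 1] != 0; zero for the all-zero vector.
    nonzero = [i for i, s in enumerate(S) if s != 0]
    return nonzero[-1] + 1 if nonzero else 0

def gap_free(S):
    # No zero entry strictly between two nonzero entries.
    nonzero = [i for i, s in enumerate(S) if s != 0]
    if not nonzero:
        return True
    return all(S[k] != 0 for k in range(nonzero[0], nonzero[-1] + 1))
```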
\begin{prop}[]\label{grounded_gapfree} Let $ G_{a, b} $ be precise. Any equilibrium strategy for the player with advantage is gap-free and bids $0$ with nonzero probability, while any equilibrium strategy for the other player is gap-free and bids 1 with nonzero probability. If the player with advantage has an equilibrium strategy of length $\ell $, any equilibrium strategy for the other player has length $\ell $ or $\ell +1 $. \end{prop}
\begin{proof} Suppose without loss of generality that player $ A $ has advantage, and let $ S_A = (s_0,\dots, s_a) $ and $ S_B = (t_0,\dots , t_b) $ be equilibrium strategies for players $ A $ and $ B $ respectively. We claim that if $ i\geq 0 $, \begin{itemize} \item[(i)] $ s_i = 0 $ implies $ t_{i +1} = 0 $, and \item[(ii)] $ t_{i +1} = 0 $ implies $ s_{i +1} = 0 $. \end{itemize} If $ s_i = 0 $ and $ t_{i +1}> 0 $, player $ B $ should alter his strategy so that he bids $ i $ with probability $ t_i+ t_{i +1} $ and $ i +1 $ with probability 0. This saves player $ B $ a chip whenever he would have bid $ i +1 $ without changing any possible outcome of these bids, and all other possibilities are unchanged. By precision, this new strategy is strictly better than $ S_B $ for player $ B $, a contradiction. This proves (i).
If $ t_{i+1} = 0 $ and $ s_{i+1}> 0 $, player $ A $ should alter her strategy so that she bids $ i $ with probability $ s_i+ s_{i +1} $ and $ i +1 $ with probability 0. As in the previous case this new strategy is strictly better for player $ A $, a contradiction, proving (ii).
Together, (i) and (ii) complete the proof except in the case when $ S_B = (1, 0,\dots , 0) $. However, in this case an optimal strategy for player $ A $ is to also bid 0 with probability 1, and it follows that $G_{a, b} $ is not precise. \end{proof}
This characterization of equilibrium strategies is what motivated our restriction to precise games. In the presence of precision, an easily observable, yet highly unexpected relationship between opposing optimal strategies appears. This relationship forms the foundation for the rest of our results.
\begin{define} The \textbf{reverse} of a length $\ell$ strategy $S$ is given by \[ \mathcal{R}(S) = \mathcal{R}((s_0,s_1,\ldots,s_{\ell-1},0,\ldots,0)) = (s_{\ell-1},s_{\ell-2},\ldots,s_0,0,\ldots,0), \] where the number of trailing zeroes will be clear from context. \end{define}
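To make the definition concrete, the reverse operation is easy to compute; the following Python sketch (a hypothetical helper with names of our own choosing, not part of the formal development) flips the leading block of a padded strategy vector and preserves the trailing zeroes:

```python
def reverse(strategy):
    """Return the reverse R(S) of a padded strategy vector.

    `strategy` is a list of bid probabilities padded with trailing
    zeroes; only the leading block up to the last nonzero entry
    (the strategy's length) is flipped, and the padding is kept.
    """
    # length = index one past the last nonzero entry
    length = max((i + 1 for i, s in enumerate(strategy) if s != 0), default=0)
    if length == 0:
        return strategy[:]  # degenerate all-zero input
    return strategy[length - 1::-1] + strategy[length:]
```

Since reversing the leading block twice restores the original vector, $\mathcal{R}(\mathcal{R}(S)) = S$ for any strategy $S$.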
\begin{theorem}[]\label{reverse_thm} Suppose that $ G_{a, b} $ is precise, and that $ S $ is an equilibrium strategy for the player with advantage. Then $\mathcal{ R} (S) $ is an equilibrium strategy for the player without advantage. \end{theorem} \begin{proof} Suppose without loss of generality that player $ A $ has advantage, and $ S = S_A = (s_0,\dots , s_{\ell -1}, 0, \dots, 0) $ has length $ \ell $. By Lemma \ref{basics} and Proposition \ref{grounded_gapfree}, we have
\begin{equation}\label{reverse_thm1} M_A \cdot S_A= \left[ \begin{array}{cccc} \alpha_0 & \alpha_1 & \ldots & \alpha_a \\ \alpha_{-1} & \alpha_0 & \ldots & \alpha_{a-1} \\
\vdots & \vdots & & \vdots \\ \alpha_{-b} & \alpha_{1-b} & \ldots & \alpha_{ a-b} \end{array} \right] \left [ \begin{array}{c} s_0 \\ s_1\\ \vdots \\ s_{\ell-1} \\ 0 \\ \vdots \\ 0\\ \end{array} \right] = \left [ \begin{array}{c} w_0 \\ v_A \\ \vdots \\ v_A \\ w_{\ell} \\ \vdots \\ w_{b} \end{array} \right], \end{equation} where $ w_0, w_{\ell },\dots , w_b\geq v_A $. We claim further that $ w_0 = v_A $.
Suppose for a contradiction that $ w_0 > v_A $. Then if $ S_B $ is an equilibrium strategy for player $ B $, by Lemma \ref{basics} and Proposition \ref{grounded_gapfree} it is of the form $ S_B = (0, t_1, t_2,\dots , t_\ell , 0,\dots , 0) $ where $ t_1> 0 $, but possibly $ t_\ell = 0 $.
When played against $ S_A $, $ S_B $ gives a payoff of $ v_B $. Let $ v_B' $ be player $ B $'s payoff against $ S_A $ when he plays the shifted strategy $ S_B' = (t_1, t_2, \dots , t_\ell , 0,\dots , 0) $. Since $ (S_A, S_B) $ is a Nash equilibrium, $ v_B' \leq v_B $. On the other hand, player $ A $ can guarantee a payoff of $ 1-v_B' $ against $ S_B $ by using the strategy $ S_A' = (0, s_0, s_1,\dots , s_{\ell -1}, 0,\dots, 0) $, since the probability of any given difference in bids occurring is the same in $ (S_A', S_B) $ as in $ (S_A, S_B') $. Therefore $ v_B'\geq v_B $, so $ v_B = v_B' $, whence $S_B^T \cdot M_A \cdot S_A=S_B'^T \cdot M_A\cdot S_A $. Expanding this, we find \begin{equation*} 0\cdot w_0 + t_1\cdot v_A+\cdots+t_{\ell - 1} \cdot v_A + t_{\ell}\cdot w_{\ell} = t_1\cdot w_0+t_2\cdot v_A +\cdots+t_{\ell}\cdot v_A, \end{equation*} which simplifies to $ t_1(w_0 - v_A) = t_{\ell}(w_{\ell} - v_A) $. If $ w_{\ell} > v_A $, then bidding $ \ell $ against $ S_A $ is strictly suboptimal for player $ B $, so $ t_{\ell} = 0 $; since $ t_1 > 0 $, the equation forces $ w_0 = v_A $. If $ w_{\ell} = v_A $, the equation again forces $ w_0 = v_A $. Either way we contradict $ w_0 > v_A $, so $ w_0 = v_A $. Together with \eqref{reverse_thm1}, this gives \begin{equation}\label{reverse_thm2} v_A =\alpha _0 s_0+\cdots +\alpha _{\ell -1} s_{\ell -1} =\cdots =\alpha _{-(\ell - 1)} s_0+\cdots + \alpha_0 s_{\ell -1}. \end{equation} By Lemma \ref{basics} we have $M_B = \mathbf{1} - M_A^T$, so \[ M_B \cdot \mathcal{R}(S_A)= \left[ \begin{array}{cccc}
1-\alpha_0 & 1-\alpha_{-1} & \ldots & 1-\alpha_{-b} \\
1-\alpha_{1} & 1-\alpha_0 & \ldots & 1-\alpha_{1-b} \\
\vdots & \vdots & & \vdots \\
1-\alpha_{a} & 1-\alpha_{a-1} & \ldots & 1-\alpha_{a-b} \end{array} \right] \left[ \begin{array}{c} s_{\ell-1} \\ \vdots \\ s_0 \\ 0 \\ \vdots \\ 0 \end{array} \right]. \] For $ 0\leq i\leq \ell -1 $ we have $(1-\alpha_i)s_{\ell-1}+\cdots+(1-\alpha_{i-\ell+1})s_0=(s_0+\cdots+s_{\ell-1})- (\alpha_{i-\ell+1}s_0+\cdots+\alpha_is_{\ell-1})=1-v_A= v_B$ by equation \eqref{reverse_thm2}. In other words, $\mathcal{ R} (S_A) $ guarantees player $ B $ his highest possible payoff against $ S_A $, so he has no incentive to deviate from $\mathcal{ R} (S_A) $ if player $ A $ uses $ S_A $.
We now show player $A$ has no incentive to deviate from $S_A$ against $\mathcal{R}(S_A)$. If $\ell\le i \leq a$, the payoff for player $B$ if player $ A $ bids $i$ will be $(s_0+\cdots+s_{\ell-1})-(\alpha_{i-\ell+1}s_0+\cdots+\alpha_is_{\ell-1})$. By the formulation of precision in terms of payoff matrices in Remark \ref{precise_matrices}, we have strict inequalities $\alpha_{i-\ell+1}<\alpha_0$ through $\alpha_i<\alpha_{\ell-1}$, so $\alpha_{i-\ell+1}s_0+\cdots+\alpha_is_{\ell-1} < \alpha_0s_0+\cdots+\alpha_{\ell-1}s_{\ell-1} = v_A$. Thus player $A$ loses utility if she bids any amount greater than $\ell -1 $ with positive probability. One also readily sees that if she alters her distribution of bids $ 0,\dots ,\ell -1 $ this will not change her payoff against $\mathcal{R}(S_A)$. It follows that $ (S_A,\mathcal{ R} (S_A)) $ is a Nash equilibrium as claimed. \end{proof}
The Reverse Theorem reveals a strong relationship between the opposing players' strategies. Using it, we can now fully characterize the set of optimal strategies for both players in precise games.
\begin{theorem}[]\label{advantageunique} If $ G_{a, b} $ is precise, the player with advantage has a unique equilibrium strategy. \end{theorem}
\begin{proof} Let player $A$ have advantage. Suppose that $S_A$ and $S_A'$ are distinct equilibrium strategies for player $A$, of lengths $\ell$ and $\ell'$ respectively. By the Reverse Theorem, player $B$ has equilibrium strategies $\mathcal{R}(S_A)$ and $\mathcal{R}(S_A')$ of lengths $\ell$ and $\ell'$ respectively. Suppose $\ell' \ne \ell$, and assume, without loss of generality, that $\ell' > \ell$. Applying Proposition \ref{grounded_gapfree} to $S_A'$, every equilibrium strategy for player $B$ has length $\ell'$ or $\ell'+1$; but $\mathcal{R}(S_A)$ has length $\ell < \ell'$, a contradiction. Thus, $\ell = \ell'$.
Assume, without loss of generality, that $(M_AS_A)_{\ell} \ge (M_AS_A')_{\ell}$. That is, we assume that if $B$ bids $\ell$ against $S_A$ he will do no better than if he were bidding $\ell$ against $S_A'$. It is possible he will do strictly worse, as bidding $\ell$ is not necessarily a part of player $B$'s optimal strategy. Consider the following function: \[ S(x) = S_A' + x(S_A - S_A'). \] We claim that for any $x$ for which $S(x)$ is a valid strategy, $S(x)$ is an optimal strategy. Note that $S(x)$ has entrywise sum of $1$, so $S(x)$ is at least valid for $x \in [0,1]$. Consider: \[ (M_AS(x))_i = (M_AS_A')_i + x(M_AS_A - M_AS_A')_i. \] For $i < \ell$, $(M_AS_A')_i = (M_AS_A)_i = v_A$, so $(M_AS(x))_i = v_A$. For $i = \ell$, $(M_AS_A)_i \geq (M_AS_A')_i$, so $(M_AS(x))_i \geq (M_AS_A')_i \geq v_A$. If player $B$ bids anything greater than $\ell$, he does strictly worse than by bidding $\ell$: he wins the bid in either case and merely forfeits extra chips. Therefore, $S(x)$ guarantees player $A$ a payoff of at least $v_A$, so $S(x)$ is optimal whenever it is valid. Choose the maximal $x^\star$ for which $S(x^\star)$ is valid. Because $S(x)$ has entrywise sum of $1$, it is invalid only if it has a negative entry, so at this maximal $x^\star$ the strategy $S(x^\star)$ has at least one zero entry. Hence $S(x^\star)$ either has length less than $\ell$, has a $0$ in its first entry, or is not gap-free. The latter two are impossible by Proposition \ref{grounded_gapfree}, and the first is impossible since, as shown above, all of player $A$'s equilibrium strategies have the same length. Therefore distinct optimal strategies $S_A$ and $S_A'$ cannot exist. \end{proof}
In most precise games, both players have unique optimal strategies. It is possible, however, to construct a game in which the player without advantage has multiple optimal strategies. We give a characterization of these as well. If $S$ is a strategy, let $(0,S)$ denote the shifted strategy in which every bid of $i$ under $S$ becomes a bid of $i+1$.
\begin{theorem}[]\label{noadvantagestrategies} Let $ G_{a, b} $ be precise and let player $A$ have advantage. The following statements hold: \begin{enumerate}[(1)] \item Player $B$ has a unique strategy of minimal length. This strategy is $\mathcal{R}(S_A)$. \item If Player $B$ has more than one optimal strategy, then another optimal strategy is of the form $(0, \mathcal{R}(S_A))$. \item All other optimal strategies for player $B$ are of the form \[t\mathcal{R}(S_A) + (1-t)(0, \mathcal{R}(S_A)) \hspace{0.5cm} t \in [0,1].\] \end{enumerate} \end{theorem}
\begin{proof}
Throughout this proof we will use a method from the proof of Theorem \ref{advantageunique}. Suppose we have two strategies $P$ and $T$ such that wherever $P$ is non-zero, so is $T$. Then we define \[ E(x) = T + (P-T)x. \] We showed above that $E(x)$ gives an optimal strategy as long as it is valid. If we choose $x^*$ to be maximal so that $E(x^*)$ is valid, then $E(x^*)$ is an optimal strategy with a $0$ in some entry where $T$ was nonzero. Let us call the strategy produced by this method $E(P,T)$.
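Numerically, the maximal $x^*$ in this construction is easy to locate: an entry of $E(x) = T + x(P-T)$ can only be driven to zero where $P_i < T_i$, and entry $i$ vanishes at $x = T_i/(T_i - P_i)$. A minimal Python sketch of $E(P,T)$ (function name ours), under the assumption above that $T$ is nonzero wherever $P$ is:

```python
def extremal_mix(P, T):
    """Compute E(P, T): follow E(x) = T + x*(P - T) out to the largest
    x for which every entry stays nonnegative; the result has a new
    zero in some entry where T was nonzero."""
    # Entry i decreases only when P[i] < T[i]; it hits zero at
    # x = T[i] / (T[i] - P[i]).
    candidates = [t / (t - p) for p, t in zip(P, T) if p < t]
    if not candidates:
        raise ValueError("E(x) stays valid for all x >= 0")
    x_star = min(candidates)
    return [t + x_star * (p - t) for p, t in zip(P, T)]
```

For example, with the hypothetical vectors $P = (0.5, 0.3, 0.2)$ and $T = (0.2, 0.4, 0.4)$, the binding entry is the last one, $x^* = 2$, and $E(P,T) = (0.8, 0.2, 0)$.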
We begin with (1). By the Reverse Theorem, player $B$ has a strategy $\mathcal{R}(S_A)$ which is of the same length as $S_A$. By Proposition \ref{grounded_gapfree}, player $B$ cannot have a strategy shorter than $S_A$. Therefore, $\mathcal{R}(S_A)$ is a strategy of minimal length for player $B$. Suppose $S$ is another strategy of minimal length for player $B$. Then $S^{\ast} = E(S,\mathcal{R}(S_A))$ is either of lesser length, not gap-free, or has a $0$ in the first entry. The first two possibilities are impossible by Proposition \ref{grounded_gapfree}. In the third case, we can apply the same method again to get $E(S, S^{\ast})$, which is either of lesser length, not gap-free, or has $0$'s in the first two entries. Each of these is impossible by Proposition \ref{grounded_gapfree}.
We now proceed to (2). Suppose player $B$ has more than one optimal strategy. By (1), any other optimal strategy must have length greater than that of $\mathcal{R}(S_A)$. Let $\ell$ be the length of $\mathcal{R}(S_A)$. By Proposition \ref{grounded_gapfree}, any other optimal strategy of player $B$ must be of length $\ell + 1$. Let $S$ be such a strategy. Suppose $S_0 \ne 0$. Then we can take $S' = E(\mathcal{R}(S_A),S)$, which must have a $0$ in the first coordinate lest we contradict Proposition \ref{grounded_gapfree}. We must show that $S' = (0, \mathcal{R}(S_A))$. Because $M_B$ is Toeplitz, \[\left(M_B \cdot (0, \mathcal{R}(S_A))\right)_{i+1} = \left(M_B \cdot \mathcal{R}(S_A)\right)_i. \] Therefore $(0, \mathcal{R}(S_A))$ guarantees player $B$ at least his optimal payoff unless player $A$ bids $0$. Suppose that when player $A$ bids $0$, the strategy $(0, \mathcal{R}(S_A))$ gives player $B$ a payoff $v$ strictly less than his optimal payoff $v_B$. Then define a strategy \[ S^\triangle = \frac{S' - c(0, \mathcal{R}(S_A))}{1-c} \] for $c$ sufficiently small so that $S' - c(0, \mathcal{R}(S_A))$ has all nonnegative entries. Then $S^\triangle$ is a valid strategy that guarantees player $B$ his optimal payoff if player $A$ bids anything from $1$ to $\ell+1$. It guarantees player $B$ more than his optimal payoff if player $A$ bids $0$, as \[ \left(M_B \cdot \frac{S' - c(0, \mathcal{R}(S_A))}{1-c}\right)_0 = \frac{1}{1-c} \cdot (v_B - cv) >\frac{1}{1-c} \cdot (v_B - cv_B) = v_B. \] Since player $A$ always bids $0$ with nonzero probability, $S^\triangle$ is a strictly better strategy than $S'$, which is impossible because $S'$ is optimal. Thus, $(0, \mathcal{R}(S_A))$ is an optimal strategy. That it is equal to $S'$ will follow from (3).
Finally we prove (3). $\mathcal{R}(S_A)$ and $(0, \mathcal{R}(S_A))$ are optimal strategies, so any convex combination of the two is optimal. Let $S^\star$ be an optimal strategy for player $B$ that is not a convex combination of the two. Then $S^\star$ must be of length $\ell+1$, so we can take $E((0, \mathcal{R}(S_A)), S^\star)$. This gives a strategy which is either of length $\ell$, is not gap-free, or has multiple $0$'s at the beginning. The latter two possibilities are impossible by Proposition \ref{grounded_gapfree}. $\mathcal{R}(S_A)$ is the unique optimal strategy of length $\ell$, so \[\mathcal{R}(S_A) = (0, \mathcal{R}(S_A)) + x(S^\star -(0, \mathcal{R}(S_A))),\] \[\frac{1}{x}\mathcal{R}(S_A) + \frac{(x-1)}{x}(0, \mathcal{R}(S_A)) = S^\star.\] Note that $\frac{1}{x} + \frac{(x-1)}{x} = 1$, and both coefficients must be nonnegative or else the first or last entry of $S^\star$ would be negative. Thus, $S^\star$ is a convex combination of $\mathcal{R}(S_A)$ and $(0, \mathcal{R}(S_A))$. \end{proof}
\section{Imprecise Games}
\subsection{Adjustments for Precision} In most of the above proofs we assumed that $G_{a,b}$ is a precise game. In many games with small associated graphs, this is not the case. The simplest example is a game where in the associated graph the only directed edge goes to $\mathcal{A}$. Then player $A$ always wins, so the chip counts do not matter whatsoever. Thus, we apply a small adjustment to the payoff matrices for players $A$ and $B$. Pick a small $x>0$. We now define $M_A^x(G_{a,b})$ as \[ M_A^x(G_{a,b}) = M_A(G_{a,b}) + xB_{a,b} \] where $B_{a,b}$ is given by the $(b+1) \times (a+1)$ Toeplitz matrix \[ \left[ \begin{array}{cccc} a & a-1 & \cdots & 0 \\ a+1 & a & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ a+b & a+b-1 & \cdots & b \end{array} \right]. \] Intuitively, we can think of $xB_{a, b}$ as a payoff matrix that gives payoff $x$ for each chip a player has at the end of a turn. $S_A^x(G_{a,b})$ is then given by the strategy that maximizes player $A$'s minimum payoff under $M_A^x(G_{a,b})$. $v_A^x(G_{a,b})$ is this payoff.
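The matrix $B_{a,b}$ can be generated directly from its Toeplitz description, entry $(i,j) = a - j + i$; a short Python sketch (function name ours):

```python
def chip_value_matrix(a, b):
    """Build the (b+1) x (a+1) Toeplitz matrix B_{a,b} whose (i, j)
    entry is a - j + i: row i runs from a + i down to i."""
    return [[a - j + i for j in range(a + 1)] for i in range(b + 1)]
```

For instance, `chip_value_matrix(2, 1)` returns `[[2, 1, 0], [3, 2, 1]]`, matching the displayed pattern above.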
While the payoff no longer corresponds exactly to winning probability, the game $G_{a, b}^x$ is still zero-sum, with total utility $1+(a+b)x$ split between the two players. We generalize our Lemma \ref{basics} to this new game:
\begin{lemma} The game represented by $M_A^x(G_{a,b})$ is precise. \end{lemma}
\begin{proof} Each entry of $M_A^x(G_{a,b})$ represents a successor state of the game where each player has some number of chips. From the way we have defined $B_{a,b}$, for any successor state in which having one more chip provided an equal payoff in $G_{a,b}$, having one more chip will now provide a payoff exactly $x$ greater. \end{proof}
A natural question arising from this adjustment is whether or not it gives a good approximation of the actual payoff for $G_{a,b}$ and the actual Nash equilibria. The following theorem shows that by choosing a small enough $x$, the quantities $M_A^x$, $v_A^x$, and $S_A^x$ can be made arbitrarily close to $M_A$, $v_A$, and some Nash equilibrium strategy $S_A$.
\begin{theorem} \label{convergence} With $S_A$ as described above, \begin{equation} \lim_{x \rightarrow 0} M_A^x(G_{a,b}) = M_A(G_{a,b}),\tag{1}\end{equation} \begin{equation} \lim_{x \rightarrow 0} v_A^x(G_{a,b}) = v_A(G_{a,b}),\tag{2}\end{equation} \begin{equation} \lim_{x \rightarrow 0} S_A^x(G_{a,b}) = S_A(G_{a,b}).\tag{3}\end{equation} \end{theorem}
\begin{proof}[Proof of (1) and (2).] We notice that (1) follows directly from the definition of $M_A^x$: $$\lim_{x \rightarrow 0} M_A^x(G_{a,b}) = \lim_{x \rightarrow 0} (M_A(G_{a,b}) + xB_{a,b}) = M_A(G_{a,b})$$
We now consider (2). We can define three functions: \begin{align*} v_A^x(G_{a,b}) &= \min_i(M_A^x(G_{a,b})\cdot S_A^x(G_{a,b}))_i = g(x)\\
\\ v_A^x(G_{a,b}) &= \min_i(M_A^x(G_{a,b})\cdot S_A^x(G_{a,b}))_i\\
&= \min_i(M_A(G_{a,b})\cdot S_A^x(G_{a,b}) + xB\cdot S_A^x(G_{a,b}))_i\\
&\le \min_i(M_A(G_{a,b})\cdot S_A^x(G_{a,b}))_i + \max_i(xB\cdot S_A^x(G_{a,b}))_i\\
&\le \min_i(M_A(G_{a,b})\cdot S_A(G_{a,b}))_i + \max_i(xB\cdot \mathbf{1})_i\\
&= v_A(G_{a,b}) + \max_i(xB\cdot \mathbf{1})_i = h(x)\\
\\ v_A^x(G_{a,b}) &= \min_i(M_A^x(G_{a,b})\cdot S_A^x(G_{a,b}))_i\\
&\ge \min_i(M_A^x(G_{a,b})\cdot S_A(G_{a,b}))_i = f(x) \end{align*} Notice that for all $x \ge 0$, $f(x) \le g(x) \le h(x)$. We also see that $$\lim_{x \rightarrow 0} f(x) = \lim_{x \rightarrow 0} \min_i(M_A^x(G_{a,b})\cdot S_A(G_{a,b}))_i = \min_i(M_A(G_{a,b})\cdot S_A(G_{a,b}))_i = v_A(G_{a,b})$$ $$\lim_{x \rightarrow 0} h(x) = \lim_{x \rightarrow 0} v_A(G_{a,b}) + \max_i(xB\cdot \mathbf{1})_i = v_A(G_{a,b}) +\max_i(B\cdot \mathbf{1})_i \cdot \lim_{x \rightarrow 0} x = v_A(G_{a,b})$$ Therefore, by the squeeze theorem applied to $f \le g \le h$, \[\lim_{x \rightarrow 0} v_A^x(G_{a,b}) = v_A(G_{a,b}).\qedhere\] \end{proof}
This leaves (3), the proof of which is more nuanced. We must first develop some more theory of all-pay bidding games.
\subsection{Restricted Games} In many bidding games, the random distribution governing optimal play does not involve bidding above some threshold. In a game of Bidding Tic-Tac-Toe where each player begins with 100 chips, a player should not bid 100 on the first turn. By the Reverse Theorem, the two players have optimal strategies of equal length. Suppose in some bidding game $G_{a,b}$, both players have optimal strategies of length $\ell$. Then we can consider the \textbf{restricted} game, $G_{a,b} \mid \ell$, where both players can bid at most $\ell - 1$ on the first turn and play returns to normal thereafter. In such a restricted game players are still able to play the length $\ell$ optimal strategy they would have employed in the original game. Is this strategy still optimal?
\begin{lemma} \label{restrictedgame} If $S_A, S_B$ are optimal length $\ell$ strategies in $G_{a,b}$ that provide the payoffs $v_A$ and $1-v_A$ respectively, then they are optimal in $G_{a,b} \mid \ell$ and provide the same payoffs. \end{lemma}
\begin{proof} $M_A(G_{a,b} \mid \ell)$ is the $\ell \times \ell$ top-left submatrix of $M_A(G_{a,b})$, as the games are identical after the first move. Thus, both players bidding less than $\ell$ in $G_{a,b}$ is equivalent to the players making the same bids in $G_{a,b} \mid \ell$. Thus, $M_A(G_{a,b} \mid \ell) \cdot S_A$ gives the first $\ell$ entries of $M_A(G_{a,b}) \cdot S_A$. The minimum entry of $M_A(G_{a,b}) \cdot S_A$ is $v_A$ so the minimum entry of $M_A(G_{a,b} \mid \ell) \cdot S_A$ is at least $v_A$. Thus, $S_A$ guarantees at least the payoff $v_A$. Using the same logic for $M_B(G_{a,b} \mid \ell)$, we obtain that $S_B$ guarantees a payoff of at least $1-v_A$. The total payoff is exactly $1$ so player $A$ gets payoff $v_A$ and cannot do better and player $B$ gets the payoff $1-v_A$ and cannot do better. \end{proof}
Furthermore, recall that precision is a characteristic of the successor states in a game. The possible successors of a restricted game are a subset of the successors of the normal game. Thus, if a game is precise then its restricted game is also precise. We are now able to state a powerful result for the restricted game that will allow us to prove some important results for general bidding games.
\begin{lemma} \label{preciseinvertible} In a precise game $G_{a,b}$, if $S_A$, an optimal strategy of minimal length, has length $\ell$, then $M_A(G_{a,b} \mid \ell)$ is invertible. \end{lemma}
\begin{proof} Suppose that $y \in \mathbb{R}^\ell$ satisfies $M_A(G_{a,b} \mid \ell) \cdot y = 0$; we will show that $y = 0$, so that $M_A(G_{a,b} \mid \ell)$ is invertible. Define $\bar{y} \in \mathbb{R}^{a+1}$ by $\bar{y}_i = y_i$ for $0 \le i \le \ell-1$ and $\bar{y}_i = 0$ for $i \ge \ell$. Then $M_A(G_{a,b}) \cdot \bar{y}$ is a vector with $0$ in its first $\ell$ entries. In particular, $(M_A(G_{a,b}) \cdot \bar{y})_0 = 0$. $S_A$ has positive entries in its first $\ell$ coordinates, so there exists $c > 0$ such that $S_+ = S_A + c\bar{y}$ and $S_- = S_A - c\bar{y}$ have all nonnegative entries. We note that \[ (M_A(G_{a,b}) \cdot S_+)_i = (M_A(G_{a,b}) \cdot S_A)_i + (M_A(G_{a,b}) \cdot c\bar{y})_i = v_A + 0 = v_A \] \[ (M_A(G_{a,b}) \cdot S_-)_i = (M_A(G_{a,b}) \cdot S_A)_i - (M_A(G_{a,b}) \cdot c\bar{y})_i = v_A - 0 = v_A \] for $0 \le i \le \ell-1$. Suppose the sum of the entries of $S_+$ is less than $1$. Then there exists $k > 1$ such that the sum of the entries of $kS_+$ is equal to $1$, so $kS_+$ is a valid strategy for player $A$ that gives payoff $kv_A > v_A$ against player $B$'s first $\ell$ pure strategies. Thus, against $\mathcal{R}(S_A)$, $kS_+$ is better than $S_A$, so $(S_A, \mathcal{R}(S_A))$ is not a Nash equilibrium, a contradiction. If instead the sum of the entries of $S_+$ is greater than $1$, then the sum of the entries of $S_-$ is less than $1$ and the same argument applies. Finally, suppose the sum of the entries of $S_+$ equals $1$. Then $S_+$ and $S_A$ are both optimal in $G_{a,b} \mid \ell$. But $G_{a,b} \mid \ell$ is precise, so there exists only one optimal strategy of minimal length for either player in $G_{a,b} \mid \ell$, forcing $S_+ = S_A$ and hence $\bar{y} = 0$. Therefore $y$ must equal $0$. \end{proof}
A method for computing optimal strategies for the player with advantage, say player $A$, now becomes apparent. Given the length of the player's unique optimal strategy we can consider the payoff matrix of the restricted game. By the Reverse Theorem, player $B$ has a gap-free strategy of the same length. Then the restricted payoff matrix multiplied by player $A$'s optimal strategy must give a constant vector. The inverse of our restricted payoff matrix multiplied by some non-zero constant vector will therefore give a scalar multiple of player $A$'s optimal strategy.
\begin{theorem} \label{strategyformula} Let player $A$ have advantage. In a precise game $G_{a,b}$ if $S_A$ has length $\ell$ then \[ S_A = \frac{M_A(G_{a,b} \mid \ell)^{-1} \mathbf{1}}{\mathbf{1}^T M_A(G_{a,b} \mid \ell)^{-1} \mathbf{1}} \] \end{theorem}
\begin{proof} As discussed above $M_A(G_{a,b} \mid \ell)^{-1} \cdot \mathbf{1}$ is a scalar multiple of $S_A$. The sum of the entries of $S_A$ is $1$ so we need only divide by the sum of the entries of $M_A(G_{a,b} \mid \ell)^{-1} \cdot \mathbf{1}$. This is given by $\mathbf{1}^T M_A(G_{a,b} \mid \ell)^{-1} \mathbf{1}$. \end{proof}
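As a sanity check on the formula, the computation can be sketched in pure Python. The $2 \times 2$ Toeplitz matrix used below is hypothetical (values $\alpha_{-1} = 0.4$, $\alpha_0 = 0.7$, $\alpha_1 = 0.55$ invented for illustration, not derived from an actual game):

```python
def solve_linear(M, rhs):
    """Solve M x = rhs by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [u - f * v for u, v in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def advantage_strategy(M_restricted):
    """Evaluate S_A = M^{-1} 1 / (1^T M^{-1} 1) from the theorem above."""
    y = solve_linear(M_restricted, [1.0] * len(M_restricted))
    total = sum(y)
    return [v / total for v in y]
```

With the hypothetical restricted matrix $[[0.7, 0.55], [0.4, 0.7]]$, the formula yields $S_A = (1/3,\, 2/3)$, which indeed bids $0$ with nonzero probability and is gap-free.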
This theorem gives an explicit and rapid method for computing optimal strategies for a player with advantage. Combined with the Reverse Theorem, we will be able to develop a method for computing optimal strategies for both players in any simple bidding game. First, we will return to (3) of Theorem \ref{convergence}.
\subsection{Convergence of Strategies} Recall the claim of Theorem \ref{convergence}(3) that as $x \rightarrow 0$, $S_A(G_{a,b}^x) \rightarrow S_A(G_{a,b})$. Theorem \ref{strategyformula} gives even more weight to this claim: together they give a method for approximating optimal strategies for imprecise games via a convergent sequence of strategies for precise games.
We begin by partially extending the invertibility of the restricted payoff matrix to imprecise games. The importance of this result is not immediately obvious, but it will be integral to the proof of part (3) of Theorem \ref{convergence}. For simplicity, we will sometimes write $M_A(G_{a,b} \mid \ell)$ as $M_A(\ell)$ and $M_B(G_{a,b} \mid \ell)$ as $M_B(\ell)$.
\begin{prop} \label{wow} If player $A$ has a length $\ell$ optimal strategy for $M_A^x = M_A(G^x_{a,b})$ then at least one of $M_A(G_{a,b} \mid \ell)$ and $M_B(G_{a,b} \mid \ell) = \mathbf{1} - M_A(G_{a,b} \mid \ell)^T$ is invertible. \end{prop}
\begin{proof} For simplicity, let $M_A = M_A(G_{a,b} \mid \ell)$ and $M_B = M_B(G_{a,b} \mid \ell)$. If $M_A$ is invertible, we are done, so suppose $M_A$ is not invertible. Let $w \ne 0$ be in the nullspace of $M_A$. Because $S_A^x$ is gap-free, there exists $c > 0$ sufficiently small such that $S_A^x \pm cw$ are valid strategies for player $A$. Then \[ M_A^x \cdot (S_A^x \pm cw) = M_A^xS_A^x \pm (M_A + xB)\cdot cw = M_A^xS_A^x \pm cxBw. \] Each successive row in $B$ is 1 greater in each entry than the previous row. Suppose that the sum of the entries of $w$ is equal to $0$. Then, \[ (Bw)_{i+1}=(Bw)_i+(1, \ldots, 1)\cdot w = (Bw)_i \] Thus, $Bw$ is a constant vector. If $Bw = 0$ then \[ M_A^x \cdot w = M_Aw + xBw = 0.\] By Lemma $\ref{preciseinvertible}$, $M_A^x$ is invertible so $Bw$ cannot equal $0$. Therefore, either $S_A^x + cw$ or $S_A^x - cw$ results in a better payoff for player $A$ than $S_A^x$ for $M_A^x$ contradicting the optimality of $S_A^x$. Therefore the sum of the entries of $w$ is not $0$.
We can then let the sum of the entries of $w$ be equal to $1$. Then \[w^T(\mathbf{1}-M_A^T) = (1, \ldots, 1).\] We will return to $w$ momentarily. We can compute that $M_B^x = \mathbf{1}-M_A^T+xB$. Since $S_A^x$ is a Nash equilibrium, \[ (S_A^x)^T \cdot M_B^x = (v, \ldots, v) \] \[ (S_A^x)^T \cdot (\mathbf{1}-M_A(\ell)^T) + x(S_A^x)^TB-(v, \ldots, v) = 0 \] \[ (S_A^x)^T \cdot (\mathbf{1}-M_A(\ell)^T) +(d+x(\ell-1)-v, d+x(\ell-2)-v, \ldots, d-v) = 0 \] where $d=x(S_A^x)^T\cdot(0, 1, \ldots, \ell-1)^T$. Then we can substitute $w$ into the equation: \begin{align*} x(\ell-1, \ldots, 1, 0) &= -(S_A^x)^T\cdot(\mathbf{1}-M_A(\ell)^T)+(v-d)(1, \ldots, 1) \\ &= -(S_A^x)^T\cdot(\mathbf{1}-M_A(\ell)^T)+(v-d)w^T(\mathbf{1}-M_A(\ell)^T)\\ &= (-(S_A^x)^T+(v-d)w^T)(\mathbf{1}-M_A(\ell)^T). \end{align*} Let $r_0=x(-(S_A^x)^T+(v-d)w^T)$ and $r_i=r_0+xiw^T$ so that:
\[ r_i(\mathbf{1}-M_A(\ell)^T)=x(\ell-1+i, \ldots, 1+i, i) \] Let $R$ be an $\ell \times \ell$ matrix with rows $r_0, \ldots, r_{\ell-1}$. Then \[ R(\mathbf{1}-M_A(\ell)^T)=xB. \] Therefore through this seemingly arbitrary construction we obtain that \[ (I_{\ell \times \ell}+R)(\mathbf{1}-M_A(\ell)^T)=\mathbf{1}-M_A(\ell)^T+xB=M_B^x,\] which is invertible by Lemma \ref{preciseinvertible}. An invertible matrix cannot have a singular right factor, so $\mathbf{1}-M_A(\ell)^T$ is invertible. \end{proof}
The last several results have dealt with payoff matrices of restricted games. The payoff matrix of a restricted game is, by definition, dependent on the length of a player's optimal strategy. The following lemma further demonstrates the relevance of the lengths of the players' optimal strategies.
\begin{lemma} \label{weakconvergence} If there exist $\ell_0$ and $x_1 > x_0 \ge 0$ such that $\ell(S_A(G_{a,b}^x)) = \ell_0$ for all $x_0 < x < x_1$, then \[ \lim_{x \rightarrow x_0^+} S_A(G_{a,b}^x) \] exists and is an optimal strategy for $G_{a,b}^{x_0}$. \end{lemma}
\begin{proof} Let $F_{a,b} = G_{a,b}^{x_0}$. We will treat $F_{a,b}$ as imprecise so the proof holds for both precise and imprecise games. By Proposition \ref{wow}, at least one of $M_A = M_A(F_{a,b} \mid \ell_0)$ and $M_B = M_B(F_{a,b} \mid \ell_0)$ is invertible. Suppose $M_A$ is invertible. Then the limit \[ \lim_{x \rightarrow 0} S_A(F_{a,b}^x) = \lim_{x \rightarrow 0} \frac{(M_A^x)^{-1} \mathbf{1}}{\mathbf{1}^T (M_A^x)^{-1} \mathbf{1}} = \frac{(M_A)^{-1} \mathbf{1}}{\mathbf{1}^T (M_A)^{-1} \mathbf{1}} = S\] exists. For every $x > 0$, $S_A^x(F_{a,b})$ is entrywise nonnegative and has entrywise sum of $1$. Thus, $S$ is entrywise nonnegative and also has entrywise sum of $1$. Finally, \[ v_A = \lim_{x\rightarrow 0} v_A^x = \lim_{x \rightarrow 0} \min(M_A^x \cdot S_A^x) = \min(M_A \cdot S). \] Thus $S$ is optimal. If $M_A$ is not invertible, then $M_B$ is invertible. By the Reverse Theorem, $\mathcal{R}(S_A(G_{a,b}^x))$ is an optimal strategy for player $B$ of length $\ell_0$ for all $x_0 < x < x_1$. Therefore, we can apply the same argument as above on player $B$'s side. \end{proof}
While the above lemma's potential power is clear, we have not yet demonstrated that the conditions it requires are met by any games. We need some restrictions on the length of optimal strategies as we adjust chip value in order to effectively use the above results. The next lemma and its corollary give us the necessary structure.
\begin{lemma} \label{openneighborhoods} Let player $A$ have advantage. Let $\ell_0 = \max_{x \in \mathbb{R}_{>0}} \ell(S_A^x)$. The set of $p$ such that $\ell(S_A^p) = \ell_0$ is open in $\mathbb{R}_{>0}$. \end{lemma}
\begin{proof} The length of $S_A^x$ is an integer and is bounded above by $a+1$. Hence $\ell_0$ exists. Pick some $p$ so that $\ell(S_A^p) = \ell_0$. Suppose there exist no $\epsilon, \delta > 0$ such that for all $p' \in N_{\epsilon, \delta}(p) = (p-\epsilon, p+\delta)$, we have $\ell(S_A^{p'}) = \ell_0$. Then we can define a sequence $\{x_k\} \rightarrow p$ by $x_k \in N_{1/k,1/k}(p)$ so that $\ell(S_A^{x_k}) < \ell_0$. There exist only a finite number of possible values for $\ell(S_A^x)$, so there must be at least one $\ell_1 < \ell_0$ such that $\{x_k\}$ has a convergent subsequence $\{x_{a_k}\}$ with $\ell(S_A^{x_{a_k}}) = \ell_1$ for all $k$.
By Theorem \ref{convergence}, \[ \lim_{k \rightarrow \infty} M_A^{x_{a_k}}(G_{a,b} \mid \ell_1 ) = M_A^p(G_{a,b} \mid \ell_1),\] \[ \lim_{k \rightarrow \infty} v_A^{x_{a_k}}(G_{a,b} \mid \ell_1 ) = v_A^p(G_{a,b} \mid \ell_1).\] Then \[ \lim_{k\rightarrow \infty} S_A^{x_{a_k}}= \lim_{k \rightarrow \infty} ((M_A^{x_{a_k}}(\ell_1))^{-1} \cdot (v_A^{x_{a_k}}\mathbf{1}_{\ell_1}))= (M_A^p(\ell_1))^{-1}\cdot (v_A^p\mathbf{1}_{\ell_1}) = S, \] for which we have \[ \min(M_A^p \cdot S) = \lim_{k \rightarrow \infty} \min(M_A^{x_{a_k}} \cdot S_A^{x_{a_k}}) = \lim_{k \rightarrow \infty} v_A^{x_{a_k}} = v_A^p. \] $S$ is then an optimal strategy in $G^p_{a,b}$. $S$ is the limit of length $\ell_1$ strategies so it has length at most $\ell_1$. Therefore $S \ne S_A^p$. The player with advantage has exactly one optimal strategy, so an appropriate open neighborhood must exist. \end{proof}
\begin{lemma} Let $\ell_0$ be as above, and suppose the restricted matrix $M_A = M_A(G_{a,b} \mid \ell_0)$ is invertible. Assume also that $S_A^x$ is of constant length on some interval $(p,q)$. Then there exist vectors $S,T$ such that for all $x\in (p,q)$, \[ S_A^x = S + xT. \] \end{lemma}
\begin{proof} Let \[ S = \frac{M_A^{-1}\mathbf{1}}{\mathbf{1}^TM_A^{-1}\mathbf{1}}. \] Note that $S$ is not necessarily optimal or even a valid strategy. It satisfies two notable properties: the sum of the entries of $S$ is $1$, and $M_AS$ is a constant vector. Consider \[ (M_A^x - M_A)(S_A^x - S) = xB(S_A^x - S).\] $xB(S_A^x - S)$ is a constant vector, as each row in $B$ differs by a vector of all $1$'s from the row above it, and a vector of all $1$'s multiplied by $S_A^x - S$ gives $0$ because both $S_A^x$ and $S$ have entrywise sum of $1$. Let this constant vector be denoted $\mathbf{u}$. Then \[ (M_A^x - M_A)(S_A^x - S) = \mathbf{u} \] \[ M_A^xS_A^x + M_AS - M_AS_A^x - M_A^xS = \mathbf{u} \] \[ M_A^x S_A^x + M_A S - M_A S_A^x - M_AS - xBS = \mathbf{u}\]
Note the $M_A S$ terms cancel, and that $M_A^x S_A^x$ is a constant vector. Thus, because $\mathbf{u}$ is also a constant vector, we know that $M_A S_A^x + xBS$ is a constant vector, which we call $\mathbf{v}$. Then \[ M_AS_A^x + xBS = \mathbf{v} \] \[ S_A^x = (M_A)^{-1}(\mathbf{v} - xBS). \] Note that $M_A^{-1}\mathbf{v}$ is a scalar multiple of $S$. Let this scalar be $c$. We have the relation \[ S_A^x = cS - xM_A^{-1}BS. \] We see that $c$ is a function of $x$, and must be the unique scalar that causes $cS - xM_A^{-1}BS$ to have entrywise sum of $1$. Thus $c$ is given by \[ \sum_{i} (cS - xM_A^{-1}BS)_i = 1 \] \[ c\sum_{i} S_i = 1 + x\sum_{i}(M_A^{-1}BS)_i \] \[ c = 1 + x\sum_{i}(M_A^{-1}BS)_i. \] $\sum_{i}(M_A^{-1}BS)_i$ is a constant independent of $x$ because $M_A$, $B$, and $S$ are. Let it be denoted $r$. Then \[ S_A^x = (1+rx)S - xM_A^{-1}BS= S + x(rS - M_A^{-1}BS). \] $rS - M_A^{-1}BS$ is a vector independent of $x$; denote it by $T$. Thus \[ S_A^x = S + xT. \] Hence, on an $x$-interval on which $S_A^x$ is of constant length, each entry of $S_A^x$ is given by a linear function $S_i + xT_i$ of $x$. \end{proof}
Note that the above lemma does not make use of anything specific to player $A$ or $B$. Thus, it also applies to $S_B^x$ if the necessary conditions hold.
\begin{corollary} \label{lengthstable} Let player $A$ have advantage. Let $\ell_0$ be as above. Then there exists $x_0 > 0$ such that for all $0 < x \le x_0$, $\ell(S_A^x) = \ell_0$. \end{corollary}
\begin{proof} Let $x \in \mathbb{R}_{>0}$ be chosen such that $\ell(S_A^x) = \ell_0$. By Proposition \ref{wow}, at least one of $M_A$ and $M_B$ is invertible. Suppose first that $M_A$ is invertible. By Lemma \ref{openneighborhoods}, there exists an open interval containing $x$ on which $S_A^x$ is of constant length; let $(p,q)$ be the largest such open interval. We are able to apply the above lemma: there exist vectors $S,T$ such that for all $x \in (p,q)$, \[ S_A^x = S + xT. \] In particular, each entry of $S_A^x$ is linear in $x$ on this interval. For $x > 1$, the value of a chip is greater than the value of winning the game, so neither player will ever bid more than $0$. Thus, $q \le 1$. Suppose that $p > 0$. By Lemma \ref{weakconvergence} and the uniqueness of Nash equilibrium strategies for the player with advantage, \[ \lim_{x \rightarrow p} S_A^x = S_A^p, \qquad \lim_{x \rightarrow q} S_A^x = S_A^q. \] If $S_A^p$ or $S_A^q$ has length $\ell_0$, then by Lemma \ref{openneighborhoods} there is an open interval about $p$ or $q$ respectively on which optimal strategies have length $\ell_0$, so $(p,q)$ is not maximal. Thus, both $S_A^p$ and $S_A^q$ must have length less than $\ell_0$. This implies that \[ \lim_{x \rightarrow p} (S_A^x)_{\ell_0 - 1} = 0 = S_{\ell_0-1} + pT_{\ell_0-1}, \] \[ \lim_{x \rightarrow q} (S_A^x)_{\ell_0 - 1} = 0 = S_{\ell_0-1} + qT_{\ell_0-1}.\] A linear function has at most one zero unless its coefficients are both $0$; but $S_{\ell_0-1} + xT_{\ell_0-1} = (S_A^x)_{\ell_0-1} \ne 0$ for $x \in (p,q)$, so this cannot be the case. Therefore $p$ must equal $0$, and $S_A^x$ is of constant length on an interval which has $0$ as an endpoint.
Now suppose that $M_B$ is invertible. We can perform the same operations on the optimal strategy of minimal length $S_B^x$ for player $B$, and then apply the Reverse Theorem to obtain the same result for player $A$. \end{proof}
Given this structure, we can complete our discussion of convergence.
\begin{theorem} Let player $A$ have advantage. Then \[ \lim_{x \rightarrow 0} S_A(G_{a,b}^x) = S_A(G_{a,b}) \] exists and is optimal. \end{theorem}
\begin{proof} By Corollary \ref{lengthstable}, there exists $x_0$ such that for all $0 < x \le x_0$, $\ell(S_A^x) = \ell_0$. These are the necessary conditions to apply Lemma \ref{weakconvergence} which gives the result. \end{proof}
\section{Computing the Optimal Strategy}
Although we have developed results on the structure of optimal bidding in all-pay bidding games, we have yet to fully describe how these optimal strategies can be found. In this section, we put together our results for precise games with our convergence results for imprecise games to give an algorithm to calculate the optimal bidding strategy for any state in an all-pay bidding game.
\subsection{Main Algorithm} Here, we describe the algorithm we developed to quickly calculate an optimal strategy. Our algorithm first assigns to each chip an arbitrarily small but positive value $x$. This adjusted game is precise, so we will be able to take advantage of the structure we have shown for precise games. In particular, we will be able to use Theorem \ref{strategyformula}, which gives a formula for the unique bidding strategy belonging to the player with advantage, in terms of the payoff matrix and optimal length: \[ S_A = \frac{M_A(G_{a,b} \mid \ell)^{-1} \cdot \mathbf{1}}{\mathbf{1}^T \cdot M_A(G_{a,b} \mid \ell)^{-1} \cdot \mathbf{1}} \] From the convergence results in the previous section, the resulting strategy will be able to approximate an optimal strategy for player $A$ in an imprecise game to any desired degree of accuracy. Note that this strategy is not guaranteed to be a unique optimal strategy in the unadjusted game if the unadjusted game is not precise. Once $S_A$ is known, we know by convergence that $\mathcal{R}(S_A)$ will have to be an optimal strategy for player $B$. $(S_A, \mathcal{R}(S_A))$ is then within any desired degree of accuracy of a Nash equilibrium for the unadjusted game.
For now we will assume the payoff matrix is known. Then, to implement Theorem \ref{strategyformula} we just need to invert the appropriate minor of that matrix, multiply by a vector of $1$'s, and rescale so that the entries of the resulting vector sum to $1$. The problem now is to find this optimal length in a precise game where the payoff matrix is given. The next two lemmas will allow us to use binary search to find the optimal length quickly.
\begin{lemma} \label{lengthsearch1} Let the game be precise. Let $\mathbf{1}_k$ be the length-$k$ vector of all $1$'s. Then for all $1 \leq k \leq \ell$, $M_A(k)^{-1} \cdot \mathbf{1}_k$ will have all nonnegative entries. \end{lemma}
\begin{proof} By similar reasoning as in Lemma \ref{preciseinvertible}, we know that $M_A(k)$ will be invertible for all $k \leq \ell$. We naturally consider the game $G_{a, b} \mid k$. Let $S_A^k$ and $S_B^k$ be $A$'s and $B$'s optimal strategies in this game. Note that if $k=\ell$, then by definition of $\ell$, we have that $M_A(\ell)\cdot S_A$ gives a constant (nonnegative) vector, so $M_A(\ell)^{-1} \cdot \mathbf{1}$ will be $S_A$ scaled by $1/v_A$. This will have all nonnegative entries because $S_A$ is a strategy. We can extend this reasoning to when $k<\ell$ if we know that $S_A^k$ still has length $k$, as it must also give some constant payoff, $v_A^k$, in $G_{a, b} \mid k$.
Suppose $S_A^k$ does not have length $k$. Then $S_A^k$ has length $m < k < \ell$. Let $v_A^k$ be the value of $G_{a, b} \mid k$ for player $A$. Suppose that $v_A^k \geq v_A$. Then, we can make a strategy $S_A'$ for player $A$ in $G_{a,b}$ by extending $S_A^k$ to the full game, where $(S_A')_i=(S_A^k)_i$ if $i\leq m-1$, and is $0$ otherwise. Then, note that $(M_A \cdot S_A')_i=v_A^k \geq v_A$ if $i \leq m-1$. Because $m<k$, $m \leq k-1$ where $k-1$ is the maximal number of chips usable in the $G_{a, b} \mid k$ game. Thus, if $i=m$, $(M_A \cdot S_A')_m \geq v_A^k \geq v_A$ by definition of Nash Equilibrium for $G_{a, b} \mid k$. This is $A$'s payoff from $S_A'$ when $B$ purely bids $m$.
But since $A$'s maximal bid in $S_A'$ is $m-1$, if $B$ uses a pure strategy where she bids $i>m$ chips, she will simply win the same bids by more chips, which cannot improve her payoff. Thus, $(M_A \cdot S_A')_i \geq (M_A \cdot S_A')_m \geq v_A^k \geq v_A$. Thus, for all $0 \le i \le \ell-1$, $(M_A \cdot S_A')_i \geq v_A$, so $S_A'$ is a Nash Equilibrium for $G_{a, b}$ as well. But $S_A'$ has length $m < \ell$, so it would have to be distinct from $S_A$ because it has a different length. This cannot be the case as $A$'s optimal strategy is unique. Thus, we have a contradiction and $S_A^k$ cannot have length less than $k$.
Thus, $S_A^k$ has length $k$, so by the same argument as the $k=\ell$ case, all the entries of $M_A(k)^{-1} \cdot \mathbf{1}$ are nonnegative. Note that because none of the above reasoning depended upon player $A$ having advantage, if $v_A^k < v_A$, we can apply the above argument from player $B$'s perspective. \qedhere \end{proof}
\begin{lemma} \label{lengthsearch2} Let the game be precise. Let $\mathbf{1}_k$ be the length-$k$ vector of all $1$'s. Then for all $k>\ell$, either $M_A(k)$ is not invertible or $M_A(k)^{-1} \cdot \mathbf{1}_k$ will have at least one negative entry. \end{lemma}
\begin{proof} Assume $M_A(k)$ is invertible.
We begin by showing there is no valid length-$k$ strategy for player $A$ that produces the same payoff against each of player $B$'s first $k$ pure strategies. Suppose there does exist such a strategy $S$. Let $v$ be the payoff that $S$ produces against player $B$'s first $k$ pure strategies (pure bids from $0$ up to $k-1$). Note that because $S_A$ has length $\ell$ there is a Nash equilibrium strategy $\mathcal{R}(S_A)$ of length $\ell$ for player $B$. We consider three cases: \begin{description}
\item[(1) $v > v_A$]
\\
Since $B$ bids at most $\ell-1$, we only need to consider the first $\ell$ coordinates of $M_A \cdot S_A$ and $M_A \cdot S$.
By our assumption, $v>v_A$ so $S$ is strictly better than $S_A$ against $\mathcal{R}(S_A)$. Thus $S_A$ cannot be a Nash Equilibrium strategy, which is a contradiction.
\item[(2) $v < v_A$]
\\
If $v<v_A$ then let player $B$ use the strategy $\mathcal{R}(S)$. It is easy to verify that $\mathcal{R}(S)$ produces the payoff $1-v > 1-v_A$ against player $A$'s first $k$ pure strategies. Thus, by similar reasoning as in the previous case, $\mathcal{R}(S)$ is strictly better than $\mathcal{R}(S_A)$ against $S_A$, so $\mathcal{R}(S_A)$ cannot be a Nash Equilibrium strategy, which is a contradiction.
\item[(3) $v = v_A$]
\\
If $S=(s_0, \ldots, s_{k-1}, 0, \ldots, 0)^T$, then expanding the first $k$ coordinates of $M_A \cdot S$ results in the equations $\alpha_i s_0 + \cdots + \alpha_{i+k-1}s_{k-1} = v_A$ for $i=0, -1, \ldots, -(k-1)$.
Considering the game from player $B$'s perspective, note that $\mathcal{R}(S)$ gives $B$ a payoff of $1-v_A$ against $A$'s first $k$ pure strategies.
In particular, $B$'s payoff against $A$ bidding $k-1$ will be \[(1-\alpha_{k-1})s_{k-1}+\cdots+(1-\alpha_0)s_0=1-(\alpha_{k-1}s_{k-1}+\cdots+\alpha_0s_0)=1-v_A.\]
$B$'s payoff $x$ against $A$ bidding $k$ will be \[x=1-(\alpha_{k}s_{k-1}+\cdots+\alpha_1s_0).\]
Note that because the game is precise, $\alpha_{k}<\alpha_{k-1}, \ldots, \alpha_1<\alpha_0$, as in each case $A$ wins the bid by one more chip.
Thus, $\alpha_{k-1}s_{k-1}+\cdots+\alpha_0s_0>\alpha_{k}s_{k-1}+\cdots+\alpha_1s_0$, which means
\[1-v_A=1-(\alpha_{k-1}s_{k-1}+\cdots+\alpha_0s_0) < 1-(\alpha_{k}s_{k-1}+\cdots+\alpha_1s_0)=x.\]
Similarly, $B$'s payoff if $A$ bids anything greater than $k$ will be greater than $1-v_A$.
Thus, $\mathcal{R}(S)$ is a Nash equilibrium strategy of length $k$ for player $B$.
Note that if $k$ is at least $\ell+2$, then $\mathcal{R}(S)$ will have length at least $\ell+2$, which is a contradiction, since $S_A$ has length $\ell$.
Since $k > \ell$, this means we must have $k = \ell+1$.
Then, $\mathcal{R}(S)$ is a Nash Equilibrium strategy of length $\ell+1$, so by Theorem \ref{noadvantagestrategies}
it must be of the form $\lambda (s_{\ell-1}, \ldots, s_0, 0, \ldots, 0) + (1-\lambda) (0, s_{\ell-1}, \ldots, s_0, 0, \ldots, 0)$ for some $0 \leq \lambda \leq 1$, where $s_0, \ldots, s_{\ell-1}$ now denote the entries of $S_A$.
In turn, $S$ must be of the form $\lambda (0, s_0, \ldots, s_{\ell-1}, 0, \ldots, 0) + (1-\lambda)(s_0, \ldots, s_{\ell-1}, 0, 0, \ldots, 0)$.
We can now write $M_A(\ell+1) \cdot S = v_A \mathbf{1}_{\ell+1}$ as $M_A(\ell+1) \cdot \big( \lambda (0, s_0, \ldots, s_{\ell-1})^T + (1-\lambda)(s_0, \ldots, s_{\ell-1}, 0)^T \big) = v_A \mathbf{1}_{\ell+1}$, which can be expanded to the equation
\[ \lambda \left( \begin{array}{c} \alpha_1s_0+\cdots+\alpha_{\ell}s_{\ell-1} \\ \alpha_0s_0+\cdots+\alpha_{\ell-1}s_{\ell-1} \\ \vdots \\ \alpha_{-(\ell-1)}s_0+\cdots+\alpha_0s_{\ell-1} \end{array} \right) + (1-\lambda) \left( \begin{array}{c} \alpha_0s_0+\cdots+\alpha_{\ell-1}s_{\ell-1} \\ \vdots \\ \alpha_{-(\ell-1)}s_0+\cdots+\alpha_0s_{\ell-1} \\ \alpha_{-\ell}s_0+\cdots+\alpha_{-1}s_{\ell-1} \end{array} \right) = \left( \begin{array}{c} v_A \\ v_A \\ \vdots \\ v_A \end{array} \right)\]
By considering the first coordinate, we get the equation \[ \lambda(\alpha_1s_0 + \cdots + \alpha_{\ell}s_{\ell-1}) + (1-\lambda)v_A = v_A. \] If $\lambda = 0$, then $S = S_A$, which has length $\ell$ rather than $\ell+1$; hence $\lambda > 0$ and we must have $\alpha_1s_0 + \cdots + \alpha_{\ell}s_{\ell-1}=v_A$ as well. Therefore, \[ \alpha_1s_0 + \cdots + \alpha_{\ell}s_{\ell-1}=\alpha_0s_0 + \cdots + \alpha_{\ell-1}s_{\ell-1}.\] But since the game is precise, the coefficient inequalities are strict: $\alpha_1 < \alpha_0, \ldots, \alpha_{\ell} < \alpha_{\ell-1},$ so \[ \alpha_1s_0 + \cdots + \alpha_{\ell}s_{\ell-1}<\alpha_0s_0 + \cdots + \alpha_{\ell-1}s_{\ell-1} \] because not all the $s_i$'s are $0$. Thus, we have a contradiction, and $k$ cannot be $\ell+1$ either.
Thus, no such strategy $S$ can exist. Since, for invertible $M_A(k)$, an all-nonnegative $M_A(k)^{-1} \cdot \mathbf{1}_k$ is nonzero and could be rescaled into exactly such a strategy, $M_A(k)^{-1} \cdot \mathbf{1}_k$ must have some negative entries. \qedhere \end{description} \end{proof}
We can implement the binary search algorithm as follows. Let the lower bound, $low$, start as $1$. Let the upper bound, $high$, start as $\min(a,b)+1$.
\begin{algorithm}[H] \DontPrintSemicolon \textbf{Function} LSearch($M_A$, $low$, $high$)\; \eIf{$low + 1 = high$}
{return $low$}
{
$k = (low+high)/2$\;
\eIf{$M_A(k)^{-1} \cdot 1_k$ is all nonnegative}{
return LSearch($M_A$, $k$, $high$)\;
}{\tcp{$M_A(k)$ is not invertible or $M_A(k)^{-1} \cdot 1_k$ has a negative entry}
return LSearch($M_A$, $low$, $k$)\;
}
}
\caption{Binary Search For Length} \end{algorithm} By Lemmas \ref{lengthsearch1} and \ref{lengthsearch2}, this algorithm will return the length of the optimal strategy for player $A$. We can then apply our formula to directly compute player $A$'s unique optimal strategy. The reverse of this strategy is an optimal strategy for player $B$. This completes the algorithm. From our results on the convergence of strategies, this algorithm is also able to approximate, with any desired degree of accuracy, optimal strategies for imprecise games.
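To make the search concrete, the following is a minimal NumPy sketch of the length search and the strategy formula of Theorem \ref{strategyformula}. It assumes (as the notation suggests but the text does not pin down) that $M_A(k)$ is the leading $k \times k$ minor of the payoff matrix, and it uses a generic dense solver in place of a Toeplitz-specific one; the function names are ours.

```python
import numpy as np

def optimal_length(M_A, low, high):
    # Binary search of Lemmas `lengthsearch1`/`lengthsearch2`:
    # M_A(k)^{-1} 1_k is all nonnegative iff k <= ell.
    while low + 1 < high:
        k = (low + high) // 2
        try:
            v = np.linalg.solve(M_A[:k, :k], np.ones(k))
            nonneg = bool(np.all(v >= -1e-12))
        except np.linalg.LinAlgError:  # M_A(k) is not invertible
            nonneg = False
        low, high = (k, high) if nonneg else (low, k)
    return low

def optimal_strategy(M_A, low=1, high=None):
    # S_A = M_A(ell)^{-1} 1 / (1^T M_A(ell)^{-1} 1), zero-padded
    n = M_A.shape[0]
    high = n if high is None else high
    ell = optimal_length(M_A, low, high)
    w = np.linalg.solve(M_A[:ell, :ell], np.ones(ell))
    S = np.zeros(n)
    S[:ell] = w / w.sum()
    return ell, S
```

In the paper's setting one would call `optimal_length` with $low = 1$ and $high = \min(a,b)+1$, and replace the dense solve with a Toeplitz solver to obtain the per-step cost discussed in the Complexity subsection.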
\subsection{Recursion on Directed Graphs} So far, our results apply to the strategy for bidding on a single turn in an all-pay bidding game. This assumes some prior knowledge of successor game states that allows the payoff matrix to be already known. Thus, to use our algorithm to compute Nash equilibria for any all-pay bidding game state, we need some way of first finding the payoff matrix. By noting that the values for end states (where one player has already won) can be set as $1$ for a win and $0$ for a loss, we use recursion from the end states of the game to find the payoff matrix for an arbitrary turn.
Consider a combinatorial game $G$ represented as a directed graph $D=(V,E)$ with two vertices marked as $\mathcal{A}$ and $\mathcal{B}$ and a token placed at some vertex of the graph. We can think of each vertex as the starting position of a subgame of $G$. Thus, for player $A$ with $a$ chips, player $B$ with $b$ chips, and the token on vertex $w$, we write the game as $w_{a,b}$. Let $\mathcal{S}(w)$ give all the vertices that can be moved to from $w$.
We can compute $v_A$ as follows: $$v_A(w_{a,b}) = \left\{\begin{array}{cc} 1 & \text{if $w = \mathcal{A}$} \\ 0 & \text{if $w = \mathcal{B}$} \\ S_B^T \cdot X \cdot S_A & \text{otherwise} \end{array}\right.$$ Then if $A$ bids $i$ and $B$ bids $j$ and $A$ makes a move then $A$'s payoff will be \[ A(i,j) = \max_{w' \in \mathcal{S}(w)} v_A(w'_{a-j+i, b-i+j}) \] because $A$ will seek to maximize his probability of winning over all of his possible successor states. If $A$ bids $i$ and $B$ bids $j$ and $B$ makes a move then $A$'s payoff will be \[ B(i,j) = \min_{w' \in \mathcal{S}(w)} v_A(w'_{a-j+i, b-i+j})\] because $B$ will seek to minimize $A$'s probability of winning over all of her possible successor states. Therefore $$X_{i,j} = \left\{\begin{array}{cc} \max\left(A(i,j), B(i,j)\right) & \text{if $i < j$ or $i=j$ and $A$ has advantage} \\ \min\left(A(i,j), B(i,j)\right) & \text{if $i > j$ or $i=j$ and $B$ has advantage} \\ \end{array} \right.$$ as each player considers the best possible scenario if they move and the worst possible scenario if their opponent moves. Then $S_A$, $S_B$ can be computed from this payoff matrix $X$ using our algorithm from before.
Note this allows us to recurse up the directed graph from states $\mathcal{A}$ and $\mathcal{B}$, first with values for those states, then values for the states one move away (i.e. $v$ such that either $\mathcal{A}$ or $\mathcal{B} \in S(v)$), then states two moves away, and so on.
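The recursion for a single state can be sketched as follows; `vA` stands in for the (already computed) values of successor subgames, the chip updates and the $\max/\min$ rule mirror the formulas above, and the function and argument names are ours.

```python
def payoff_matrix(a, b, succ, vA, advantage_A=True):
    # X[i][j]: player A's payoff when A bids i and B bids j.
    # vA(w, i, j): A's value of the subgame with the token on w,
    # A holding i chips and B holding j chips (assumed precomputed).
    X = [[0.0] * (b + 1) for _ in range(a + 1)]
    for i in range(a + 1):
        for j in range(b + 1):
            # best successor for A if A moves; best for B if B moves
            A_ij = max(vA(w, a - j + i, b - i + j) for w in succ)
            B_ij = min(vA(w, a - j + i, b - i + j) for w in succ)
            if i < j or (i == j and advantage_A):
                X[i][j] = max(A_ij, B_ij)
            else:
                X[i][j] = min(A_ij, B_ij)
    return X
```

Seeding $v_A = 1$ at $\mathcal{A}$ and $v_A = 0$ at $\mathcal{B}$, the matrix for each state can be filled in as soon as all of its successors have been evaluated.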
\subsection{Complexity}
An arbitrary $n \times n$ matrix can be inverted in $O(n^3)$ time using the Gauss-Jordan method. There exist, however, many more efficient algorithms specific to Toeplitz matrices. In particular, the Levinson-Trench-Zohar algorithm can solve a Toeplitz system in $O(n^2)$ time \citep{Musicus1988}.
For an $n \times n$ matrix, the binary search algorithm requires $\log(n)$ iterations. Each iteration requires solving one Toeplitz system and scanning one vector for negative values. Thus, the algorithm runs in time on the order of $\log(n) \cdot (O(n^2) + O(n)) = O(\log(n)n^2)$. Thus finding an optimal strategy and corresponding payoff for a given payoff matrix requires time on the order of $O(\log(n)n^2)$.
A simple implementation of our recursive algorithm would take time growing exponentially with the depth of $D$. We can greatly speed up this process by storing each value $v_A(w_{i,j})$ as it is computed. Then, when $v_A(w_{i,j})$ is needed again, it can be looked up rather than recomputed. In the worst case, the program must compute $v_A$ for every possible combination of chips at every vertex. Because the sum of the chips is constant, this requires at most $(a+b)\cdot|V|$ computations. Thus, the entire algorithm runs in time on the order of $O(|V| \cdot \log(n)n^2)$ where $n=a+b$. For comparison, a linear programming algorithm to achieve the same results would require time on the order of $O(|V| \cdot n^{3.5})$ \citep{DBLP}.
\end{document} |
\begin{document}
\title{Federated Compositional Deep AUC Maximization}
\begin{abstract} Federated learning has attracted increasing attention due to the promise of balancing privacy and large-scale learning; numerous approaches have been proposed. However, most existing approaches focus on problems with balanced data, and prediction performance is far from satisfactory for many real-world applications where the number of samples in different classes is highly imbalanced. To address this challenging problem, we developed a novel federated learning method for imbalanced data by directly optimizing the area under curve (AUC) score. In particular, we formulate the AUC maximization problem as a federated compositional minimax optimization problem, develop a local stochastic compositional gradient descent ascent with momentum algorithm, and provide bounds on the computational and communication complexities of our algorithm. To the best of our knowledge, this is the first work to achieve such favorable theoretical results. Finally, extensive experimental results confirm the efficacy of our method. \end{abstract}
\section{Introduction}\label{sec:intro} Federated learning is a paradigm for training a machine learning model across multiple devices without sharing the raw data from each device. Practically, models are trained on each device, and, periodically, model parameters are exchanged between these devices. By not sharing the data itself, federated learning allows private information in the raw data to be preserved to some extent. This property has allowed federated learning to be proposed for numerous real-world computer vision and machine learning tasks.
Currently, one main drawback of existing federated learning methodologies is the assumption of balanced data, where the number of samples across classes is essentially the same. Most real-world data is imbalanced, even highly imbalanced. For example, in the healthcare domain, it is common to encounter problems where the amount of data from one class (e.g., patients with a rare disease) is significantly lower than the other class(es), leading to a distribution that is highly imbalanced. Traditional federated learning methods do not handle such imbalanced data scenarios very well. Specifically, training the classifier typically requires minimizing a classification-error induced loss function (e.g., cross-entropy). As a result, the resulting classifier may excel at classifying the majority, while failing to classify the minority.
To handle imbalanced data classification, the most common approach is to train the classifier by optimizing metrics designed for imbalanced data distributions. For instance, under the single-machine setting, Ying et al.~\cite{ying2016stochastic} proposed to train the classifier by maximizing the Area under the ROC curve (AUC) score. Since the AUC score can be affected by performance on both the majority and minority classes, the classifier is less prone to favoring one class above the rest. Later, \cite{guo2020communication,yuan2021federated} extended this approach to federated learning. However, optimizing the AUC score introduces some new challenges, since maximizing the AUC score requires solving a minimax optimization problem, which is more challenging to optimize than conventional minimization problems. More specifically, when the classifier is a deep neural network, recent work \cite{yuan2021compositional} has demonstrated empirically that training a deep classifier from scratch with the AUC objective function cannot learn discriminative features; the resulting classifier sometimes fails to achieve satisfactory performance. To address this issue, \cite{yuan2021compositional} developed a compositional deep AUC maximization model under the single-machine setting, which combines the AUC loss function and the traditional cross-entropy loss function, leading to a stochastic compositional minimax optimization problem. This compositional deep AUC maximization model is able to learn discriminative features, achieving superior performance over traditional models consistently.
Considering its remarkable performance under the single-machine setting, a natural question is:
\fbox{ \parbox{0.45\textwidth}{ \textbf{How can a compositional deep AUC maximization model be applied to federated learning?} }} The challenge is that the loss function of the compositional model involves \textit{two levels of distributed functions}. Moreover, the stochastic compositional gradient is a \textit{biased} estimation of the full gradient. Therefore, on the algorithmic design side, it is unclear what variables should be communicated when estimating the stochastic compositional gradient. On the theoretical analysis side, it is unclear if the convergence rate can achieve the linear speedup with respect to the number of devices in the presence of a \textit{biased stochastic compositional gradient}, \textit{two levels of distributed functions}, and the \textit{minimax structure of the loss function}.
To address the aforementioned challenges, in this paper, we developed a novel local stochastic compositional gradient descent ascent with momentum (LocalSCGDAM) algorithm for federated compositional deep AUC maximization. In particular, we demonstrated which variables should be communicated to address the issue of two levels of distributed functions. Moreover, for this nonconvex-strongly-concave problem, we established the convergence rate of our algorithm, disclosing how the communication period and the number of devices affect the computation and communication complexities. Specifically, with theoretical guarantees, the communication period can be as large as $O(T^{1/4}/K^{3/4})$ so that our algorithm can achieve $O(1/\sqrt{KT})$ convergence rate and $O(T^{3/4}/K^{3/4})$ communication complexity, where $K$ is the number of devices and $T$ is the number of iterations. To the best of our knowledge, this is the first work to achieve such favorable theoretical results for the federated compositional minimax problem. Finally, we conduct extensive experiments on multiple image classification benchmark datasets, and the experimental results confirm the efficacy of our algorithm.
In summary, we made the following important contributions in our work. \begin{itemize}
\item We developed a novel federated optimization algorithm, which enables compositional deep AUC maximization for federated learning.
\item We established the theoretical convergence rate of our algorithm, demonstrating how it is affected by the communication period and the number of devices.
\item We conducted extensive experiments on multiple imbalanced benchmark datasets, confirming the efficacy of our algorithm. \end{itemize}
\section{Related Work}
\paragraph{Imbalanced Data Classification.} In the field of computer vision, there has been a fair amount of work addressing imbalanced data classification. Instead of using conventional cross-entropy loss functions, which are not suitable for imbalanced datasets, optimizing the AUC score has been proposed. For instance, Ying et al.~\cite{ying2016stochastic} have proposed the minimax loss function to optimize the AUC score for learning linear classifiers. Liu et al.~\cite{liu2019stochastic} extended this minimax method to deep neural networks and developed the nonconvex-strongly-concave loss function. Yuan et al.~\cite{yuan2021compositional} have proposed a compositional training framework for end-to-end deep AUC maximization, which minimizes a compositional loss function, where the outer-level function is an AUC loss, and the inner-level function represents a gradient descent step for minimizing a traditional loss. Based on the empirical results, this compositional approach improved the classification performance by a large margin.
To address stochastic minimax optimization problems, there have been a number of diverse efforts launched in recent years. In particular, numerous stochastic gradient descent ascent (SGDA) algorithms \cite{zhang2020single,lin2020gradient,qiu2020single,yan2020optimal} have been proposed. However, most of them focus on non-compositional optimization problems. On the other hand, to solve compositional optimization problems, existing work \cite{wang2017stochastic,zhang2019composite,yuan2019stochastic} tends to only focus on the minimization problem. Only two recent works \cite{gao2021convergence,yuan2021compositional} studied how to optimize the compositional minimax optimization problem, but they focused on the single-machine setting.
\paragraph{Federated Learning.} In recent years, federated learning has shown promise with several empirical studies in the field of large-scale deep learning~\cite{mcmahan2017communication, povey2014parallel,su2015experiments}. The FedAvg \cite{mcmahan2017communication} algorithm has spawned a number of variants \cite{stich2018local,yu2019parallel,yu2019linear} designed to address the minimization problem. For instance, by maintaining a local momentum, Yu et al.~\cite{yu2019linear} have provided rigorous theoretical studies for the convergence of the local stochastic gradient descent with momentum (LocalSGDM) algorithm. These algorithms are often applied to balanced datasets, and their performance in the imbalanced regime is lacking.
To address minimax optimization for federated learning, Deng et al.~\cite{deng2021local} proposed local stochastic gradient descent ascent algorithms to provably optimize federated minimax problems. However, their theoretical convergence rate was suboptimal and was later improved by \cite{tarzanagh2022fednest}. Still, neither method could achieve a linear speedup with respect to the number of devices. Recently, Sharma et al.~\cite{sharma2022federated} developed the local stochastic gradient descent ascent with momentum (LocalSGDAM) algorithm, whose convergence rate is able to achieve a linear speedup for nonconvex-strongly-concave optimization problems. Guo et al.~\cite{guo2020communication} proposed and analyzed a communication-efficient distributed optimization algorithm (CoDA) for the minimax AUC loss function under the assumption of the PL-condition, which can also achieve a linear speedup in theory. Yuan et al.~\cite{yuan2021federated} extended CoDA to heterogeneous data distributions and established its convergence rate. While these algorithms are designed for the federated minimax optimization problem, none can deal with the federated \textit{compositional} minimax optimization problem.
\section{Preliminaries}\label{sec:math} In this section, we first introduce the compositional deep AUC maximization model under the single-machine setting and then provide the problem setup in federated learning. \subsection{Compositional Deep AUC Maximization}
Training classifiers by optimizing the AUC \cite{hanley1982meaning,herschtal2004optimising} is an effective way to handle highly imbalanced datasets. However, traditional AUC maximization models typically depend on pairwise sample input, limiting their application to large-scale data. Recently, Ying et al.~\cite{ying2016stochastic} formulated the AUC maximization model as a minimax optimization problem, defined as follows: \begin{equation} \label{aucloss}
\begin{aligned}
& \min_{\mathbf{w},\tilde{w}_1,\tilde{w}_2} \max_{\tilde{w}_3} \mathcal{L}_{AUC} (\mathbf{w},\tilde{w}_1,\tilde{w}_2,\tilde{w}_3; a,b) \\
& \triangleq (1-p)(h(\mathbf{w};a)-\tilde{w}_1)^2\mathbb{I}_{[b=1]} \\
& \quad + p(h(\mathbf{w};a)-\tilde{w}_2)^2\mathbb{I}_{[b=-1]}-p(1-p)\tilde{w}_3^2\\
& \quad + 2(1+\tilde{w}_3)(ph(\mathbf{w};a)\mathbb{I}_{[b=-1]}-(1-p)h(\mathbf{w};a)\mathbb{I}_{[b=1]}) \ ,\\
\end{aligned} \end{equation} where $h$ denotes the classifier parameterized by $\mathbf{w}\in \mathbb{R}^d$, $\tilde{w}_1, \tilde{w}_2, \tilde{w}_3 \in \mathbb{R}$ are the parameters for measuring the AUC score, $(a,b)$ denotes the sample's feature and label, $p$ is the prior probability of the positive class, and $\mathbb{I}$ is an indicator function that takes value 1 if the argument is true and 0 otherwise. Such a minimax objective function decouples the dependence on pairwise samples so that it can be applied to large-scale data.
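Averaged over a mini-batch, the pointwise loss in Eq.~(\ref{aucloss}) takes only a few lines of NumPy; the function consumes the classifier outputs $h(\mathbf{w};a)$ directly, and the names below are this sketch's, not those of any reference implementation.

```python
import numpy as np

def auc_loss(preds, labels, w1, w2, w3, p):
    # preds: classifier outputs h(w; a); labels: +1 / -1
    # w1, w2, w3: scalar auxiliary variables; p: positive-class prior
    pos = (labels == 1).astype(float)    # indicator [b = 1]
    neg = (labels == -1).astype(float)   # indicator [b = -1]
    loss = ((1 - p) * (preds - w1) ** 2 * pos
            + p * (preds - w2) ** 2 * neg
            - p * (1 - p) * w3 ** 2
            + 2 * (1 + w3) * (p * preds * neg - (1 - p) * preds * pos))
    return loss.mean()
```

Maximizing over $\tilde{w}_3$ while minimizing over $(\mathbf{w}, \tilde{w}_1, \tilde{w}_2)$ recovers the minimax problem in Eq.~(\ref{aucloss}).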
Since training a deep classifier from scratch with the $\mathcal{L}_{AUC}$ loss function did not yield satisfactory performance, Yuan et al.~\cite{yuan2021compositional} developed the compositional deep AUC maximization model, which is defined as follows: \begin{equation} \label{compositionalauc} \begin{aligned}
& \min_{\tilde{\mathbf{w}},\tilde{w}_1,\tilde{w}_2} \max_{\tilde{w}_3} \mathcal{L}_{AUC} (\tilde{\mathbf{w}},\tilde{w}_1,\tilde{w}_2,\tilde{w}_3; a,b) \\
& s.t. \quad \tilde{\mathbf{w}} = \mathbf{w}-\rho \nabla_{\mathbf{w}} \mathcal{L}_{CE}(\mathbf{w}; a, b) \ . \end{aligned} \end{equation} Here, $\mathcal{L}_{CE}$ denotes the cross-entropy loss function, and $\mathbf{w}-\rho \nabla_{\mathbf{w}} \mathcal{L}_{CE}$ denotes one gradient descent step toward minimizing the cross-entropy loss, where $\rho>0$ is the learning rate. The obtained model parameter $\tilde{\mathbf{w}}$ is then further optimized through the AUC loss function.
By denoting $g(\mathbf{x}) = \mathbf{x} -\rho \Delta (\mathbf{x})$ and $\mathbf{y}=\tilde{w}_3$, where $\mathbf{x}=[\mathbf{w}^T, \tilde{w}_1, \tilde{w}_2]^T$, $\Delta (\mathbf{x}) = [\nabla_{\mathbf{w}} \mathcal{L}_{CE}(\mathbf{w}; a, b)^T, 0, 0]^T$, and $f=\mathcal{L}_{AUC}$, Eq.~(\ref{compositionalauc}) can be represented as a generic compositional minimax optimization problem: \begin{equation}
\min_{\mathbf{x}\in \mathbb{R}^{d_1}}\max_{\mathbf{y}\in \mathbb{R}^{d_2}} f(g(\mathbf{x}), \mathbf{y}) \ , \end{equation} where $g$ is the inner-level function and $f$ is the outer-level function. It is worth noting that when $f$ is a nonlinear function, the stochastic gradient regarding $\mathbf{x}$ is a biased estimation of the full gradient. As such, the stochastic compositional gradient \cite{wang2017stochastic} is typically used to optimize this kind of problem. We will demonstrate how to adapt this compositional minimax optimization problem to federated learning and address the unique challenges.
\subsection{Problem Setup} In this paper, to optimize the federated deep compositional AUC maximization problem, we will concentrate on developing an efficient optimization algorithm to solve the following generic federated stochastic compositional minimax optimization problem:
\begin{equation} \label{loss}
\begin{aligned}
& \min_{\mathbf{x}\in\mathbb{R}^{d_1}} \max_{\mathbf{y}\in \mathbb{R}^{d_2}} \frac{1}{K}\sum_{k=1}^{K} f^{(k)}\Big(\frac{1}{K}\sum_{k'=1}^{K} g^{(k')}(\mathbf{x}), \mathbf{y}\Big) \ ,
\end{aligned}
\end{equation} where $K$ is the number of devices, $g^{(k)}(\cdot)=\mathbb{E}_{\xi \sim \mathcal{D}_g^{(k)}}[g^{(k)}(\cdot;\xi)] \in \mathbb{R}^{d_g}$ denotes the inner-level function for the data distribution $\mathcal{D}_g^{(k)}$ of the $k$-th device, $f^{(k)}(\cdot, \cdot)=\mathbb{E}_{\zeta \sim \mathcal{D}_f^{(k)}}[f^{(k)}(\cdot, \cdot;\zeta)]$ represents the outer-level function for the data distribution $\mathcal{D}_f^{(k)}$ of the $k$-th device. It is worth noting that \textit{both the inner-level function and the outer-level function are distributed on different devices}, which is significantly different from traditional federated learning models. Therefore, we need to design a new federated optimization algorithm to address this unique challenge.
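A toy evaluation of the objective in Eq.~(\ref{loss}) makes the two-level structure explicit: the inner-level functions must be averaged across devices before any device can evaluate its outer-level function, which is why inner-level quantities have to be communicated. The helper below is our own illustrative sketch, with per-device functions passed as plain callables.

```python
import numpy as np

def federated_objective(x, y, g_list, f_list):
    # inner level: average g^k(x) across all K devices first
    g_avg = np.mean([g(x) for g in g_list], axis=0)
    # outer level: each device evaluates f^k at the shared average
    return float(np.mean([f(g_avg, y) for f in f_list]))
```

In the stochastic setting each device only observes noisy evaluations $g^{(k)}(\mathbf{x};\xi)$, which is what makes the stochastic compositional gradient a biased estimate of the full gradient.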
Here, we introduce the commonly-used assumptions from existing work \cite{goodfellow2014explaining,zhang2019composite,yuan2021compositional,gao2021convergence} for investigating the convergence rate of our algorithm. \begin{assumption} \label{assumption_smooth}
The gradient of the outer-level function $f^{(k)}(\cdot, \cdot)$ is $L_f$-Lipschitz continuous where $L_f>0$, i.e.,
\begin{equation}
\begin{aligned}
& \quad \|\nabla_{g} f^{(k)}(\mathbf{z}_1, \mathbf{y}_1)-\nabla_{g} f^{(k)}(\mathbf{z}_2, \mathbf{y}_2)\|^2 \\
& \leq L_f^2\|(\mathbf{z}_1, \mathbf{y}_1)-(\mathbf{z}_2,\mathbf{y}_2)\|^2 \ , \\
& \quad \|\nabla_{\mathbf{y}} f^{(k)}(\mathbf{z}_1, \mathbf{y}_1)-\nabla_{\mathbf{y}} f^{(k)}(\mathbf{z}_2, \mathbf{y}_2)\|^2 \\
& \leq L_f^2\|(\mathbf{z}_1, \mathbf{y}_1)-(\mathbf{z}_2,\mathbf{y}_2)\|^2 \ , \\
\end{aligned}
\end{equation}
hold for $\forall (\mathbf{z}_1, \mathbf{y}_1), (\mathbf{z}_2, \mathbf{y}_2)\in \mathbb{R}^{d_g}\times \mathbb{R}^{d_2}$.
The gradient of the inner-level function $g^{(k)}(\cdot)$ is $L_g$-Lipschitz continuous where $L_g>0$, i.e.,
\begin{equation}
\begin{aligned}
& \|\nabla g^{(k)}(\mathbf{x}_1) - \nabla g^{(k)}(\mathbf{x}_2)\|^2 \leq L_g^2 \|\mathbf{x}_1-\mathbf{x}_2\|^2 \ ,
\end{aligned}
\end{equation}
holds for all $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^{d_1}$.
\end{assumption}
\begin{assumption} \label{assumption_bound_gradient}
The second moment of the stochastic gradient of the outer-level function $f^{(k)}(\cdot, \cdot)$ and the inner-level function $g^{(k)}(\cdot)$ satisfies:
\begin{equation}
\begin{aligned}
& \mathbb{E}[\|\nabla_{g} f^{(k)}(\mathbf{z}, \mathbf{y}; \zeta)\|^2] \leq C_f^2 \ , \\
& \mathbb{E}[\|\nabla_{\mathbf{y}} f^{(k)}(\mathbf{z}, \mathbf{y}; \zeta)\|^2] \leq C_f^2, \\
& \mathbb{E}[\|\nabla g^{(k)}(\mathbf{x}; \xi)\|^2]\leq C_g^2 \ ,
\end{aligned}
\end{equation}
for all $(\mathbf{z}, \mathbf{y}) \in \mathbb{R}^{d_g}\times\mathbb{R}^{d_2}$ and all $\mathbf{x} \in \mathbb{R}^{d_1}$, where $C_f>0$ and $C_g>0$.
Meanwhile, the second moment of the full gradient is assumed to have the same upper bound. \end{assumption}
\begin{assumption} \label{assumption_bound_variance}
The variance of the stochastic gradient of the outer-level function $f^{(k)}(\cdot, \cdot)$ satisfies:
\begin{equation}
\begin{aligned}
& \mathbb{E}[\|\nabla_{g} f^{(k)}(\mathbf{z}, \mathbf{y}; \zeta) - \nabla_{g} f^{(k)}(\mathbf{z}, \mathbf{y})\|^2] \leq \sigma_f^2, \\
& \mathbb{E}[\|\nabla_{\mathbf{y}} f^{(k)}(\mathbf{z}, \mathbf{y}; \zeta) - \nabla_{\mathbf{y}} f^{(k)}(\mathbf{z}, \mathbf{y})\|^2] \leq \sigma_f^2 \ , \\
\end{aligned}
\end{equation}
for all $(\mathbf{z}, \mathbf{y}) \in \mathbb{R}^{d_g}\times\mathbb{R}^{d_2}$, where $\sigma_f>0$.
Additionally, the variance of the stochastic gradient and the stochastic function value of $g^{(k)}(\cdot)$ satisfies:
\begin{equation}
\begin{aligned}
& \mathbb{E}[\|\nabla g^{(k)}(\mathbf{x}; \xi) - \nabla g^{(k)}(\mathbf{x})\|^2] \leq \sigma_{g'}^2 \ ,\\
& \mathbb{E}[\| g^{(k)}(\mathbf{x}; \xi) - g^{(k)}(\mathbf{x})\|^2] \leq \sigma_g^2 \ , \\
\end{aligned}
\end{equation}
for all $\mathbf{x} \in \mathbb{R}^{d_1}$, where $\sigma_g>0$ and $\sigma_{g'}>0$. \end{assumption}
\begin{assumption} \label{assumption_strong}
The outer-level function $f^{(k)}(\mathbf{z}, \mathbf{y})$ is $\mu$-strongly-concave with respect to $\mathbf{y}$ for any fixed $\mathbf{z} \in \mathbb{R}^{d_g}$, where $\mu>0$, i.e.,
\begin{equation}
\begin{aligned}
& f^{(k)}(\mathbf{z}, \mathbf{y}_1) \leq f^{(k)}(\mathbf{z}, \mathbf{y}_2) + \\
& \langle \nabla_{\mathbf{y}} f^{(k)}(\mathbf{z}, \mathbf{y}_2), \mathbf{y}_1 -\mathbf{y}_2\rangle - \frac{\mu}{2}\|\mathbf{y}_1-\mathbf{y}_2\|^2 \ . \\
\end{aligned}
\end{equation} \end{assumption}
\textbf{Notation:} Throughout this paper, $\mathbf{a}_t^{(k)}$ denotes the variable of the $k$-th device in the $t$-th iteration and $\bar{\mathbf{a}}_t=\frac{1}{K}\sum_{k=1}^{K}\mathbf{a}_t^{(k)}$ denotes the corresponding variable averaged across all devices, where $\mathbf{a}$ stands for any of the variables used in this paper. $\mathbf{x}_{*}$ denotes the optimal solution.
\section{Methodology} In this section, we present the details of our algorithm for federated compositional deep AUC maximization.
\begin{algorithm}[h]
\caption{LocalSCGDAM}
\label{alg_dscgdamgp}
\begin{algorithmic}[1]
\REQUIRE $\mathbf{x}_0$, $\mathbf{y}_0$, $\eta\in (0, 1)$, $\gamma_x>0$, $\gamma_y>0$, $\beta_x>0$, $\beta_y>0$, $\alpha>0$, $\alpha\eta \in (0, 1)$, $\beta_x\eta \in (0, 1)$, $\beta_y\eta \in (0, 1)$. \\
All workers conduct the steps below to update $\mathbf{x}$, $\mathbf{y}$.
$\mathbf{x}_{0}^{(k)}=\mathbf{x}_0$ ,
$\mathbf{y}_{0}^{(k)}=\mathbf{y}_0$, \\
$\mathbf{h}_{0}^{(k)}= g^{(k)}(\mathbf{x}_{ 0}^{(k)}; \xi_{ 0}^{(k)})$ , \\
$\mathbf{u}_{ 0}^{(k)}=\nabla g^{(k)}(\mathbf{x}_{ 0}^{(k)}; \xi_{ 0}^{(k)})^T\nabla_{g} f^{(k)}(\mathbf{h}_{0}^{(k)}, \mathbf{y}_{0}^{(k)}; \zeta_{0}^{(k)})$, \quad
$\mathbf{v}_{0}^{(k)}=\nabla_{y} f^{(k)}(\mathbf{h}_{0}^{(k)}, \mathbf{y}_{0}^{(k)}; \zeta_{0}^{(k)})$,
\FOR{$t=0,\cdots, T-1$}
\STATE Update $\mathbf{x}$ and $\mathbf{y}$: \\
$\mathbf{x}_{ t+1}^{(k)}= \mathbf{x}_{ t}^{(k)} -\gamma_x\eta\mathbf{u}_{ t}^{(k)}$ \ , \\
$\mathbf{y}_{ t+1}^{(k)}= \mathbf{y}_{ t}^{(k)} +\gamma_y\eta\mathbf{v}_{ t}^{(k)}$ \ ,
\STATE Estimate the inner-level function: \\
$\mathbf{h}_{ t+1}^{(k)} = (1- \alpha\eta) \mathbf{h}_{ t}^{(k)} + \alpha\eta g^{(k)}(\mathbf{x}_{t+1}^{(k)}; \xi_{ t+1}^{(k)})$, \\
\STATE Update momentum: \\
$\mathbf{u}_{ t+1}^{(k)} = (1-\beta_x\eta)\mathbf{u}_{ t}^{(k)} + \beta_x \eta\nabla g^{(k)}(\mathbf{x}_{ t+1}^{(k)}; \xi_{ t+1}^{(k)})^T\nabla_{g} f^{(k)}(\mathbf{h}_{t+1}^{(k)}, \mathbf{y}_{t+1}^{(k)}; \zeta_{t+1}^{(k)})$,\\
$\mathbf{v}_{ t+1}^{(k)} = (1-\beta_y\eta) \mathbf{v}_{ t}^{(k)} + \beta_y\eta\nabla_{y} f^{(k)}(\mathbf{h}_{t+1}^{(k)}, \mathbf{y}_{t+1}^{(k)}; \zeta_{t+1}^{(k)})$,
\IF {$\text{mod}(t+1, p)==0$}
\STATE
$\mathbf{h}_{t+1}^{(k)}= \bar{\mathbf{h}}_{t+1}\triangleq \frac{1}{K}\sum_{k'=1}^{K}\mathbf{h}_{ t+1}^{(k')}$ , \\
$\mathbf{u}_{t+1}^{(k)}= \bar{\mathbf{u}}_{t+1}\triangleq \frac{1}{K}\sum_{k'=1}^{K}\mathbf{u}_{ t+1}^{(k')}$ , \\
$\mathbf{v}_{t+1}^{(k)}= \bar{\mathbf{v}}_{t+1}\triangleq \frac{1}{K}\sum_{k'=1}^{K}\mathbf{v}_{ t+1}^{(k')}$ , \\
$\mathbf{x}_{t+1}^{(k)}= \bar{\mathbf{x}}_{t+1}\triangleq \frac{1}{K}\sum_{k'=1}^{K}\mathbf{x}_{ t+1}^{(k')}$ , \\
$\mathbf{y}_{t+1}^{(k)}= \bar{\mathbf{y}}_{t+1}\triangleq \frac{1}{K}\sum_{k'=1}^{K}\mathbf{y}_{ t+1}^{(k')}$ , \\
\ENDIF
\ENDFOR
\end{algorithmic} \end{algorithm}
To optimize Eq.~(\ref{loss}), we develop a novel local stochastic compositional gradient descent ascent with momentum algorithm, shown in Algorithm \ref{alg_dscgdamgp}. Generally speaking, in the $t$-th iteration, we employ the local stochastic (compositional) gradient with momentum to update the local model parameters $\mathbf{x}_{t}^{(k)}$ and $\mathbf{y}_{t}^{(k)}$ on the $k$-th device. There exists a unique challenge when computing the local stochastic compositional gradient compared to traditional federated learning models. Specifically, as shown in Eq.~(\ref{loss}), \textit{the objective function depends on the global inner-level function}. However, it is not feasible to communicate the inner-level function at each iteration. To address this challenge, we propose employing the \textit{local} inner-level function to compute the stochastic compositional gradient at each iteration and communicating the estimate of this function periodically to obtain the global inner-level function.
In detail, since the objective function in Eq.~(\ref{loss}) is a compositional function whose stochastic gradient with respect to $\mathbf{x}$, i.e., $\nabla g^{(k)}(\mathbf{x}_{t}^{(k)}; \xi_{ t}^{(k)})^T\nabla_{g} f^{(k)}(g^{(k)}(\mathbf{x}_{ t}^{(k)}; \xi_{ t}^{(k)}), \mathbf{y}_{t}^{(k)}; \zeta_{t}^{(k)})$, is a biased estimate of the full gradient, we employ the stochastic compositional gradient $\nabla g^{(k)}(\mathbf{x}_{ t}^{(k)}; \xi_{ t}^{(k)})^T\nabla_{g} f^{(k)}(\mathbf{h}_{t}^{(k)}, \mathbf{y}_{t}^{(k)}; \zeta_{t}^{(k)})$ to update the model parameter $\mathbf{x}$, where $\mathbf{h}_{t}^{(k)}$ is the moving-average estimate of the inner-level function value $g^{(k)}(\mathbf{x}_{ t}^{(k)}; \xi_{ t}^{(k)})$ on the $k$-th device, defined as follows: \begin{equation}
\mathbf{h}_{ t}^{(k)} = (1- \alpha\eta) \mathbf{h}_{ t-1}^{(k)} + \alpha\eta g^{(k)}(\mathbf{x}_{t}^{(k)}; \xi_{ t}^{(k)}) \ , \end{equation} where $\alpha>0$ and $\eta>0$ are two hyperparameters, and $\alpha\eta\in(0, 1)$. The objective function in Eq.~(\ref{loss}) is not compositional regarding $\mathbf{y}$, thus we can directly leverage its stochastic gradient to perform an update. Then, based on the obtained stochastic (compositional) gradient, we compute the momentum as follows: \begin{equation}
\begin{aligned}
&
\mathbf{u}_{ t}^{(k)} = \beta_x \eta\nabla g^{(k)}(\mathbf{x}_{ t}^{(k)}; \xi_{ t}^{(k)})^T\nabla_{g} f^{(k)}(\mathbf{h}_{t}^{(k)}, \mathbf{y}_{t}^{(k)}; \zeta_{t}^{(k)}) \\
&
\quad \quad \quad + (1-\beta_x\eta)\mathbf{u}_{ t-1}^{(k)}\ , \\
&
\mathbf{v}_{ t}^{(k)} = (1-\beta_y\eta) \mathbf{v}_{ t-1}^{(k)} + \beta_y\eta\nabla_{y} f^{(k)}(\mathbf{h}_{t}^{(k)}, \mathbf{y}_{t}^{(k)}; \zeta_{t}^{(k)}) \ , \\
\end{aligned} \end{equation} where $\beta_x>0$ and $\beta_y>0$ are two hyperparameters, $\beta_x\eta \in (0, 1)$, and $\beta_y\eta \in (0, 1)$. Based on the obtained momentum, each device updates its local model parameters as follows: \begin{equation}
\begin{aligned}
& \mathbf{x}_{ t+1}^{(k)}= \mathbf{x}_{ t}^{(k)} -\gamma_x\eta\mathbf{u}_{ t}^{(k)} \ , \\
& \mathbf{y}_{ t+1}^{(k)}= \mathbf{y}_{ t}^{(k)} +\gamma_y\eta\mathbf{v}_{ t}^{(k)} \ ,
\end{aligned} \end{equation} where $\gamma_x>0$ and $\gamma_y>0$.
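The update rules above can be sketched end-to-end for a single device. The toy linear inner function $g$, the quadratic strongly-concave $f$, the dimensions, and the step sizes below are our illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d_g, d2 = 5, 2, 2                       # toy dimensions (assumptions)
A = rng.standard_normal((d_g, d1))          # toy linear inner model g(x) = Ax

def g_stoch(x):                             # g^(k)(x; xi): noisy inner function
    return A @ x + 0.01 * rng.standard_normal(d_g)

def grad_g(x):                              # Jacobian of the toy inner function
    return A

def grad_f(z, y):                           # (grad_z f, grad_y f) for f = y.z - ||y||^2/2,
    return y, z - y                         # which is 1-strongly-concave in y

# hyperparameters chosen so that eta, alpha*eta, beta*eta lie in (0, 1)
eta, alpha, beta_x, beta_y, gamma_x, gamma_y = 0.5, 1.0, 1.0, 1.0, 0.05, 0.05

x, y = rng.standard_normal(d1), rng.standard_normal(d2)
h = g_stoch(x)                              # h_0: initial inner-function estimate
u = grad_g(x).T @ grad_f(h, y)[0]           # u_0: compositional-gradient momentum
v = grad_f(h, y)[1]                         # v_0: ascent momentum

for t in range(200):
    x = x - gamma_x * eta * u               # descent step on x
    y = y + gamma_y * eta * v               # ascent step on y
    h = (1 - alpha * eta) * h + alpha * eta * g_stoch(x)   # moving average
    gf_z, gf_y = grad_f(h, y)
    u = (1 - beta_x * eta) * u + beta_x * eta * grad_g(x).T @ gf_z
    v = (1 - beta_y * eta) * v + beta_y * eta * gf_y
```

Note that $\mathbf{h}$, not the fresh sample of $g$, enters both gradient estimates, which is what removes the need to communicate the inner-level function at every step.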
As mentioned before, to obtain the global inner-level function, our algorithm periodically communicates the moving-average estimate of the inner-level function, i.e., $\mathbf{h}_{ t+1}^{(k)}$. In particular, every $p$ iterations, i.e., when $\text{mod}(t+1,p)==0$ where $p>1$ is the \textit{communication period}, each device uploads $\mathbf{h}_{t+1}^{(k)}$ to the central server; the central server computes the average of all received variables, which is then broadcast back to all devices: \begin{equation}
\begin{aligned}
\mathbf{h}_{t+1}^{(k)}= \bar{\mathbf{h}}_{t+1}\triangleq \frac{1}{K}\sum_{k'=1}^{K}\mathbf{h}_{ t+1}^{(k')} \ .
\end{aligned} \end{equation} In this way, each device is able to obtain the estimate of the global inner-level function. As for the model parameters and momentum, we employ the same strategy as traditional federated learning methods \cite{sharma2022federated,yu2019linear} to communicate them periodically with the central server, which is shown in Step 6 in Algorithm~\ref{alg_dscgdamgp}.
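The periodic synchronization is a plain average-and-broadcast of each communicated variable; a minimal sketch (the dictionary-of-variables representation of a device's state is an illustrative assumption):

```python
import numpy as np

def sync(states):
    """Average each variable across the K devices and broadcast it back."""
    avg = {name: np.mean([s[name] for s in states], axis=0) for name in states[0]}
    for s in states:
        for name in avg:
            s[name] = avg[name].copy()

# toy check with K = 2 devices, each holding e.g. h and x
states = [{'h': np.array([1.0, 3.0]), 'x': np.array([0.0])},
          {'h': np.array([3.0, 5.0]), 'x': np.array([2.0])}]
sync(states)
print(states[0]['h'], states[1]['x'])   # [2. 4.] [1.]
```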
In summary, we developed a novel local stochastic compositional gradient descent ascent with momentum algorithm for the compositional minimax problem, which shows how to deal with two distributed functions in federated learning. With our algorithm, we can enable federated learning for the compositional deep AUC maximization model, benefiting imbalanced data classification tasks.
\section{Theoretical Analysis} In this section, we provide the convergence rate of our algorithm to show how it is affected by the number of devices and communication period.
To investigate the convergence rate of our algorithm, we first introduce some auxiliary functions as follows: \begin{equation}
\begin{aligned}
& \mathbf{y}_*(\mathbf{x}) = \arg \max_{\mathbf{y}\in \mathbb{R}^{d_2}} \frac{1}{K}\sum_{k=1}^{K}f^{(k)}(\frac{1}{K}\sum_{k'=1}^{K}g^{(k')}(\mathbf{x}), \mathbf{y}) \ , \\
& \Phi(\mathbf{x}) = \frac{1}{K}\sum_{k=1}^{K}f^{(k)}(\frac{1}{K}\sum_{k'=1}^{K}g^{(k')}(\mathbf{x}), \mathbf{y}_*(\mathbf{x})) \ . \ \\
\end{aligned} \end{equation} Then, based on Assumptions~\ref{assumption_smooth}-\ref{assumption_strong}, we can obtain that $\Phi$ is $L_{\Phi}$-smooth, where $L_{\Phi}=\frac{2C_g^2L_f^2}{\mu} + C_fL_g$. The proof can be found in Lemma~2 of Appendix~A. In terms of these auxiliary functions, we establish the convergence rate of our algorithm. \begin{theorem}\label{th}
Given Assumptions~\ref{assumption_smooth}-\ref{assumption_strong}, by setting $\alpha>0$, $\beta_x>0$, $\beta_y>0$, $\eta \leq \min\{\frac{1}{2\gamma_x L_{\Phi}}, \frac{1}{\alpha}, \frac{1}{\beta_x}, \frac{1}{\beta_y}, 1\}$, and
\begin{equation}
\begin{aligned}
& \gamma_x \leq \min\Big\{
\frac{\alpha\mu}{100C_g^2L_f\sqrt{1+6L_f^2}}, \frac{\beta_x}{32\sqrt{C_g^4L_f^2+C_f^2L_g^2}} , \\
& \quad \quad \quad \frac {\beta_y\mu}{144C_g^2L_f^2}, \frac{\sqrt{\alpha}\mu}{24C_g\sqrt{100C_g^2L_f^4+2C_g^2\mu^2}},\frac{\gamma_y\mu^2}{20C_g^2L_f^2}\Big\} \ , \\
& \gamma_y \leq \min\Big\{ \frac{1}{6L_f},
\frac{3\mu\beta_y^2}{400L_f^2}, \frac{3\beta_x^2}{16\mu}\ \Big\} \ , \\
\end{aligned}
\end{equation} Algorithm \ref{alg_dscgdamgp} has the following convergence rate \begin{equation}
\begin{aligned}
& \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \Phi(\bar{\mathbf{x}}_{t})\|^2] \leq \frac{2(\Phi({\mathbf{x}}_{0})-\Phi({\mathbf{x}}_{*}))}{\gamma_x\eta T}
\\
& \quad + \frac{24C_g^2L_f^2}{\gamma_y\eta\mu T} \|{\mathbf{y}}_{0} - \mathbf{y}_{*}({\mathbf{x}}_0)\|^2 + O(\frac{\eta}{ K}) + O(\frac{1}{ \eta T})\\
& \quad + O(p^2\eta^2) + O(p^4\eta^4) + O(p^6\eta^6) \ . \\
\end{aligned}
\end{equation}
\end{theorem}
\begin{remark} In terms of Theorem \ref{th}, for sufficiently large $T$, by setting the learning rate $\eta = O(K^{1/2}/T^{1/2}), p = O(T^{1/4}/K^{3/4})$, Algorithm \ref{alg_dscgdamgp} can achieve $O(1/\sqrt{KT})$ convergence rate, which indicates a linear speedup with respect to the number of devices $K$. In addition, it is straightforward to show that the communication complexity of our algorithm is $T/p=O(K^{3/4}T^{3/4})$. \end{remark}
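As a quick sanity check of the remark, substituting $\eta = \Theta(\sqrt{K/T})$ and $p = \Theta(T^{1/4}/K^{3/4})$ into the bound of Theorem~\ref{th} gives
\begin{equation}
\begin{aligned}
O\Big(\frac{\eta}{K}\Big) = O\Big(\frac{1}{\sqrt{KT}}\Big), \quad O\Big(\frac{1}{\eta T}\Big) = O\Big(\frac{1}{\sqrt{KT}}\Big), \quad O(p^2\eta^2) = O\Big(\frac{\sqrt{T}}{K^{3/2}}\cdot \frac{K}{T}\Big) = O\Big(\frac{1}{\sqrt{KT}}\Big) \ ,
\end{aligned}
\end{equation}
with the $O(p^4\eta^4)$ and $O(p^6\eta^6)$ terms decaying even faster, so every term is $O(1/\sqrt{KT})$; the number of communication rounds is $T/p = O(K^{3/4}T^{3/4})$.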
\section{Experiments} In this section, we present the experimental results to demonstrate the performance of our algorithm.
\begin{figure*}
\caption{Testing performance with AUC score versus the number of iterations when the communication period $p=4$. }
\label{fig:p=4}
\end{figure*}
\begin{figure*}
\caption{Testing performance with AUC score versus the number of iterations when the communication period $p=8$. }
\label{fig:p=8}
\end{figure*}
\begin{table*}[ht]
\setlength{\tabcolsep}{15pt}
\caption{The comparison between the test AUC score of different methods on all datasets. Here, $p$ denotes the communication period. }
\label{result_auc}
\begin{center}
\begin{tabular}{l|c|ccc}
\toprule
{Datasets} & {Methods} & \multicolumn{3}{c} { AUC } \\
\cline{3-5}
& & \ $p=4$ & \ $p=8$ & \ $p=16$ \\
\midrule
{CATvsDOG} &\ \textbf{LocalSCGDAM} & \ \textbf{0.931$\pm$0.001} & \ \textbf{0.930$\pm$0.001} & \ \textbf{0.930$\pm$0.001} \\
& \ CoDA & \ 0.895$\pm$0.000 & \ 0.892$\pm$0.000 & \ 0.883$\pm$0.001 \\
& \ LocalSGDAM & \ 0.899$\pm$0.000 & \ 0.884$\pm$0.000 & \ 0.884$\pm$0.001 \\
& \ LocalSGDM & \ 0.888$\pm$0.001 & \ 0.889$\pm$0.000 & \ 0.887$\pm$0.000 \\
\hline
{CIFAR10} &\ \textbf{LocalSCGDAM} & \ \textbf{0.918$\pm$0.000} & \ \textbf{0.918$\pm$0.000} & \ \textbf{0.916$\pm$0.001} \\
& \ CoDA & \ 0.890$\pm$0.000 & \ 0.886$\pm$0.000 & \ 0.883$\pm$0.000 \\
& \ LocalSGDAM & \ 0.893$\pm$0.000 & \ 0.880$\pm$0.000 & \ 0.880$\pm$0.000 \\
& \ LocalSGDM & \ 0.883$\pm$0.001 & \ 0.871$\pm$0.000 & \ 0.874$\pm$0.000 \\
\hline
{CIFAR100} &\ \textbf{LocalSCGDAM} & \ \textbf{0.710$\pm$0.001} & \ \textbf{0.702$\pm$0.000} & \ \textbf{0.695$\pm$0.000} \\
& \ CoDA & \ 0.694$\pm$0.001 & \ 0.681$\pm$0.001 & \ 0.685$\pm$0.000 \\
& \ LocalSGDAM & \ 0.692$\pm$0.000 & \ 0.694$\pm$0.000 & \ 0.689$\pm$0.001 \\
& \ LocalSGDM & \ 0.675$\pm$0.001 & \ 0.669$\pm$0.000 & \ 0.669$\pm$0.000 \\
\hline
{STL10} &\ \textbf{LocalSCGDAM} & \ \textbf{0.826$\pm$0.000} & \ \textbf{0.816$\pm$0.001} & \ \textbf{0.821$\pm$0.000} \\
& \ CoDA & \ 0.801$\pm$0.000 & \ 0.784$\pm$0.000 & \ 0.783$\pm$0.000 \\
& \ LocalSGDAM & \ 0.792$\pm$0.000 & \ 0.790$\pm$0.000 & \ 0.780$\pm$0.000 \\
& \ LocalSGDM & \ 0.760$\pm$0.001 & \ 0.808$\pm$0.000 & \ 0.757$\pm$0.001 \\
\hline
{FashionMNIST} &\ \textbf{LocalSCGDAM} & \ \textbf{0.982$\pm$0.000} & \ \textbf{0.982$\pm$0.000} & \ \textbf{0.981$\pm$0.000} \\
& \ CoDA & \ 0.976$\pm$0.000 & \ 0.976$\pm$0.000 & \ 0.976$\pm$0.000 \\
& \ LocalSGDAM & \ 0.977$\pm$0.000 & \ 0.977$\pm$0.000 & \ 0.976$\pm$0.000 \\
& \ LocalSGDM & \ 0.963$\pm$0.000 & \ 0.956$\pm$0.000 & \ 0.955$\pm$0.000 \\
\hline
{Melanoma} &\ \textbf{LocalSCGDAM} & \ \textbf{0.873$\pm$0.001} & \ \textbf{0.873$\pm$0.001} & \ \textbf{0.872$\pm$0.000} \\
& \ CoDA & \ 0.734$\pm$0.002 & \ 0.721$\pm$0.000 & \ 0.725$\pm$0.003 \\
& \ LocalSGDAM & \ 0.730$\pm$0.000 & \ 0.729$\pm$0.000 & \ 0.721$\pm$0.003 \\
& \ LocalSGDM & \ 0.774$\pm$0.001 & \ 0.766$\pm$0.001 & \ 0.750$\pm$0.000 \\
\bottomrule
\end{tabular}
\end{center} \end{table*}
\begin{figure*}
\caption{Testing performance with AUC score versus the number of iterations when the communication period $p=16$. }
\label{fig:p=16}
\end{figure*}
\begin{table}[h]
\setlength{\tabcolsep}{4.5pt}
\caption{Description of benchmark datasets. Here, \#pos denotes the number of positive samples, and \#neg denotes the number of negative samples.}
\begin{center}
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2} * {Dataset} & \multicolumn{2}{c|} {Training set} & \multicolumn{2}{c} {Testing set} \\
\cline{2-5}
& \#pos & \#neg & \#pos & \#neg \\
\midrule
CIFAR10 & 2,777 & 25,000 & 5,000 & 5,000\\
\hline
CIFAR100 & 2,777 & 25,000 & 5,000 & 5,000\\
\hline
STL10 & 277 & 2,500 & 8,000 & 8,000 \\
\hline
FashionMNIST & 3,333 & 30,000 & 5,000 & 5,000 \\
\hline
CATvsDOG & 1,112 & 10,016 & 2,516 & 2,888\\
\hline
Melanoma & 868 & 25,670 & 117 & 6,881 \\
\bottomrule
\end{tabular}
\end{center}
\label{dataset} \end{table}
\subsection{Experimental Setup}
\begin{figure*}
\caption{The test AUC score versus the number of iterations when using different imbalance ratios for CATvsDOG. }
\label{fig:imratio}
\end{figure*}
\paragraph{Datasets.} In our experiments, we employ six image classification datasets, including CIFAR10 \cite{krizhevsky2009learning}, CIFAR100 \cite{krizhevsky2009learning}, STL10 \cite{coates2011analysis}, FashionMNIST \cite{xiao2017/online}, CATvsDOG \footnote{\url{https://www.kaggle.com/c/dogs-vs-cats}}, and Melanoma \cite{rotemberg2021patient}. For the first four datasets, following \cite{yuan2021compositional}, we consider the first half of classes to be the positive class, and the second half as the negative class. Then, in order to construct highly imbalanced data, we randomly drop some samples of the positive class in the training set. Specifically, the ratio between positive samples and all samples is set to 0.1. For the two-class dataset, CATvsDOG, we employ the same strategy to construct the imbalanced training data. For these synthetic imbalanced datasets, the testing set is balanced. Melanoma is an intrinsically imbalanced medical image classification dataset, which we do not modify. The details about these benchmark datasets are summarized in Table~\ref{dataset}.
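The positive-class subsampling described above can be sketched as follows (the helper name is ours; the resulting counts match Table~\ref{dataset} for CIFAR10):

```python
import numpy as np

def make_imbalanced(labels, pos_ratio, seed=0):
    """Keep every negative sample and subsample the positives so that
    #pos / (#pos + #neg) is (at most) pos_ratio; returns kept indices."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    n_pos = int(pos_ratio * len(neg) / (1 - pos_ratio))   # floor
    keep = np.concatenate([rng.choice(pos, size=n_pos, replace=False), neg])
    return np.sort(keep)

labels = np.array([1] * 25000 + [0] * 25000)   # e.g. a balanced 50k training split
idx = make_imbalanced(labels, pos_ratio=0.1)
print((labels[idx] == 1).sum(), (labels[idx] == 0).sum())   # 2777 25000
```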
\paragraph{Experimental Settings.} For Melanoma, we use DenseNet121 \cite{huang2017densely} where the dimensionality of the last layer is set to 1 for binary classification. The details for the classifier for FashionMNIST can be found in Appendix~B. For the other datasets, we use ResNet20 \cite{he2016deep}, where the last layer is also set to 1. To demonstrate the performance of our algorithm, we compare it with three state-of-the-art methods: LocalSGDM \cite{yu2019linear}, CoDA \cite{guo2020communication}, LocalSGDAM \cite{sharma2022federated}. For a fair comparison, we use similar learning rates for all algorithms. The details can be found in Appendix~B. We use 4 devices (i.e., GPUs) in our experiment. The batch size on each device is set to 8 for STL10, 16 for Melanoma, and 32 for the others.
\subsection{Experimental Results} In Table~\ref{result_auc}, we report the test AUC score for all methods, where we show the average and variance computed across all devices. Here, the communication period is set to $4$, $8$, and $16$, respectively. It can be observed that our LocalSCGDAM algorithm outperforms all competing methods in all cases. For instance, LocalSCGDAM beats the baseline methods by a large margin on the CATvsDOG dataset for all communication periods. These observations confirm the effectiveness of our algorithm.
In addition, we plot the average AUC score of the test set versus the number of iterations in Figures~\ref{fig:p=4},~\ref{fig:p=8},~\ref{fig:p=16}. It can also be observed that our algorithm outperforms baseline methods consistently, which further confirms the efficacy of our algorithm.
To further demonstrate the performance of our algorithm, we apply these algorithms to datasets with different imbalance ratios. Using the CATvsDOG dataset, we set the imbalance ratio to 0.01, 0.05, and 0.2 to construct three imbalanced training sets. The averaged test AUC score on these three datasets versus the number of iterations is shown in Figure~\ref{fig:imratio}. It can be observed that our algorithm outperforms the competing methods consistently and is robust to large imbalances in the training data. In particular, when the training set is highly imbalanced, e.g., at an imbalance ratio of 0.01, all AUC-based methods outperform the cross-entropy-loss-based method significantly, and our LocalSCGDAM beats the other AUC-based methods by a large margin.
\section{Conclusion} In this paper, we developed a novel local stochastic compositional gradient descent ascent algorithm to solve the federated compositional deep AUC maximization problem. On the theoretical side, we established the convergence rate of our algorithm, which enjoys a linear speedup with respect to the number of devices. On the empirical side, extensive experimental results on multiple imbalanced image classification tasks confirm the effectiveness of our algorithm.
\end{document} |
\begin{document}
\draft
\title{Simulating nonlinear spin models in an ion trap} \author{G.J. Milburn} \address{Isaac Newton Institute, University of Cambridge,\\
and Department of Physics, The University of Queensland, QLD 4072, Australia.} \date{\today} \maketitle \begin{abstract} We show how a conditional displacement of the vibrational mode of trapped ions can be used to simulate nonlinear collective and interacting spin systems, including nonlinear tops and Ising models (a universal two-qubit gate), independent of the vibrational state of the ions. Thus cooling to the vibrational ground state is unnecessary provided the heating rate is not too large. \end{abstract} \pacs{ 42.50.Vk,03.67.Lx, 05.50.+q}
One of the paths leading to the current interest in quantum computation begins with attempts to answer Feynman's question\cite{Feynman}: can quantum physics be efficiently simulated on a classical computer? It is generally believed that the answer is no, although there is no explicit proof of this conjecture. It then follows that a computer operating entirely by quantum means could be more efficient than a classical computer. Our belief in this conjecture stems from a number of algorithms, such as Shor's factorisation algorithm\cite{Shor1994}, which appear to be substantially (even exponentially) more efficient than the classical algorithms. A number of schemes have now been proposed for a quantum computer, and some have been implemented in a very limited way. What kinds of simulations might these schemes enable? A number of investigators have attempted to answer this question\cite{Lloyd96,Abrams97,Lidar97,Boghosian98,Zalka98}. In this paper we consider this question in the context of the ion trap quantum computer model and show that there is a class of nonlinear collective and interacting spin models that can be simulated with current technology.
Nonlinear collective and interacting spin models have long endured as tractable, nonlinear quantum models with wide-ranging relevance. Such models have appeared in nuclear physics\cite{Ring1980}, laser physics\cite{Drummond}, condensed matter physics\cite{Mahan} and of course as a theoretical laboratory to investigate aspects of nonlinear field theories\cite{Kaku}. In many cases, however, the match between model and experiment is only qualitative. In this paper we show how some of these models may be directly simulated on a linear ion trap with individual ion addressing, as in the quantum computation architecture.
The interaction Hamiltonian for N ions interacting with the centre of mass vibrational mode can be controlled by using different kinds of Raman laser pulses. A considerable variety of interactions has already been achieved or proposed \cite{NIST,LANL,James}. Consider first the simplest interaction, one that does not change the vibrational mode of the ions. Each ion is assumed to be driven by a resonant laser field which couples two states, the ground state $|g\rangle$
and an excited state $|e\rangle$. The interaction Hamiltonian is \begin{equation} H_I=-\frac{i}{2}\hbar\sum_{i=1}^N(\Omega_i\sigma^{(i)}_+-\Omega_i^*\sigma^{(i)}_-) \end{equation}
where $\Omega_i$ is the effective Rabi frequency at the i'th ion and we have assumed the dipole and rotating wave approximation as usual. The raising and lowering operators for each ion are defined by $\sigma_-=|g\rangle \langle e|$ and
$\sigma_+=|e\rangle\langle g|$. If we now assume that each ion is driven by an identical field and choose the phase appropriately, the interaction may be written as \begin{equation} H_I=\hbar\Omega \hat{J}_y \label{rotation} \end{equation} where we have used the definition of the collective spin operators, \begin{equation} \hat{J}_\alpha=\sum_{i=1}^N\sigma^{(i)}_\alpha \end{equation} where $\alpha=x,y,z$ and \begin{eqnarray} \sigma^{(i)}_x & = & \frac{1}{2}(\sigma^{(i)}_++\sigma^{(i)}_-)\\ \sigma^{(i)}_y & = & -\frac{i}{2}(\sigma^{(i)}_+-\sigma^{(i)}_-)\\
\sigma^{(i)}_z & = & \frac{1}{2}(|e\rangle\langle e|-|g\rangle\langle g|) \end{eqnarray} The interaction Hamiltonian in Eq \ref{rotation} corresponds to a single collective spin of value $j=N/2$ precessing around the $\hat{J}_y$ direction due to an applied field. By choosing the driving field on each ion to be the same we have imposed a permutation symmetry on the ions, reducing the dimension of the relevant Hilbert space from $2^N$ to $N+1$. The eigenstates of
$\hat{J}_z$ may be taken as a basis in this reduced Hilbert space. In ion trap quantum computers it is more usual to designate the electronic states with a binary number as $|g\rangle=|0\rangle,\ \ |e\rangle=|1\rangle$. The product basis for all N ions is then specified by a single binary string, or the corresponding integer code if the ions can be ordered. Each eigenstate,
$|j,m\rangle_z$, of $\hat{J}_z$ is a degenerate eigenstate of the Hamming weight operator (the sum of the number of ones in a string) on the binary strings labelling the product basis states in the $2^N$ dimensional Hilbert space of all possible binary strings of length N. Collective spin models of this kind were considered many decades ago in quantum optics\cite{Drummond} and are sometimes called Dicke models after the early work on superradiance of Dicke\cite{Dicke}. In much of that work, however, the collective spin underwent an irreversible decay. In the case of an ion trap we can neglect such decays owing to the long lifetimes of the excited states. However, when the electronic and vibrational motion are coupled, heating of the vibrational centre-of-mass mode can induce irreversible dynamics in the collective spin variables.
The natural variable to measure is $\hat{J}_z$ as a direct determination of the state of each ion via shelving techniques will give such a measurement. These measurements are highly efficient, approaching ideal projective measurements. The result of the measurement is a binary string which is an eigenstate of $\hat{J}_z$. Repeating such measurements it is possible to construct the distribution for $\hat{J}_z$ and corresponding averages. Other components may also be measured by first using a collective rotation of the state of the ions.
We now show how to realise nonlinear Hamiltonians using N trapped ions. By appropriate choice of Raman lasers it is possible to realise the conditional displacement operator for the i'th ion\cite{Monroe1996,NIST} \begin{equation} H=-i\hbar(\alpha_ia^\dagger-\alpha_i^* a)\sigma_z^{(i)} \end{equation} If the ion is in the excited (ground) state this Hamiltonian displaces the vibrational mode by a complex amplitude $\alpha_i$ ($-\alpha_i$). In the case of N ions with each driven by identical Raman lasers, the total Hamiltonian is \begin{equation} H=-i\hbar(\alpha a^\dagger-\alpha^* a)\hat{J}_z \end{equation} By an appropriate choice of Raman laser pulse phases we can then implement the following sequence of unitary transformations \begin{equation} U_{NL}=e^{i\kappa_x\hat{X}\hat{J}_z}e^{i\kappa_p\hat{P}\hat{J}_z}e^{-i\kappa_x\hat{X}\hat{J}_z}e^{-i\kappa_p\hat{P}\hat{J}_z} \end{equation} where $\hat{X}=(a+a^\dagger)/\sqrt{2},\ \hat{P}=-i(a-a^\dagger)/\sqrt{2}$. Noting that \begin{equation} e^{i\kappa_p\hat{P}\hat{J}_z} \hat{X}e^{-i\kappa_p\hat{P}\hat{J}_z} =\hat{X}+\kappa_p\hat{J}_z \end{equation} it is easy to see that \begin{equation} U_{NL}=e^{-i\theta\hat{J}_z^2} \label{UNL} \end{equation} where $\theta=\kappa_x\kappa_p$. This is the unitary transformation generated by a nonlinear top Hamiltonian describing precession around the $\hat{J}_z$ axis at a rate dependent on the $z$ component of angular momentum. Such nonlinear tops have appeared in collective nuclear models\cite{Ring1980} and form the basis of a well known quantum chaotic system\cite{Haake91}.
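The cancellation of the vibrational operators can be verified numerically in a truncated Fock space: the group-commutator sequence $e^{i\kappa_x\hat{X}\hat{J}_z}e^{i\kappa_p\hat{P}\hat{J}_z}e^{-i\kappa_x\hat{X}\hat{J}_z}e^{-i\kappa_p\hat{P}\hat{J}_z}$ acts as $e^{-i\kappa_x\kappa_p\hat{J}_z^2}$ on any low-lying vibrational state. The truncation dimension, pulse areas, and the $j=1$ spin below are arbitrary choices for the check:

```python
import numpy as np
from scipy.linalg import expm

d = 60                                        # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, d)), 1)      # annihilation operator
X = (a + a.conj().T) / np.sqrt(2)
P = -1j * (a - a.conj().T) / np.sqrt(2)
Jz = np.diag([1.0, 0.0, -1.0])                # collective spin j = 1 (N = 2 ions)

kx, kp = 0.3, 0.25                            # arbitrary pulse areas

def U(op, k):
    """Conditional displacement exp(i k op Jz) on oscillator (x) spin."""
    return expm(1j * k * np.kron(op, Jz))

U_seq = U(X, kx) @ U(P, kp) @ U(X, -kx) @ U(P, -kp)
U_top = np.kron(np.eye(d), expm(-1j * kx * kp * Jz @ Jz))

# compare the action on |0>_vib (x) an arbitrary spin state
psi = np.kron(np.eye(d)[0], np.array([0.6, 0.48j, 0.64]))
err = np.linalg.norm(U_seq @ psi - U_top @ psi)
print(err)    # limited only by the truncation, i.e. effectively zero
```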
It should be noted that the transformation in Eq(\ref{UNL}) contains no operators that act on the vibrational state. It is thus completely independent of the vibrational state and it does not matter if the vibrational state is cooled to the ground state or not. However Eq(\ref{UNL}) only holds if the heating of the vibrational mode can be neglected over the time it takes to apply the conditional displacement operators. We discuss below what this implies for current experiments.
In itself the unitary transformation in Eq (\ref{UNL}) can generate interesting states. For example if we begin with all the ions in the ground state so that the collective spin state is initially $|j,-j\rangle_z$ and apply laser pulses to each electronic transition according to the Hamiltonian in Eq (\ref{rotation}) for a time $T$ such that $\Omega T=\pi/2$ the collective spin state is just the $\hat{J}_x$ eigenstate $|j,-j\rangle_x$. If we now apply the nonlinear unitary transformation in Eq (\ref{UNL}) so that $\theta=\pi/2$ we find that the system evolves to the highly entangled state \begin{equation}
|+\rangle=\frac{1}{\sqrt{2}}(e^{-i\pi/4}|j,-j\rangle_x+(-1)^je^{i\pi/4}|j,j\rangle_x) \end{equation} Such states have been considered by Bollinger et al.\cite{Bollinger} in the context of high precision frequency measurements, and also by Sanders\cite{Sanders}. They exhibit interference fringes for measurements of $\hat{J}_z$. As noted above, a measurement of $\hat{J}_z$ is easily made simply by reading out the state of each ion using highly efficient fluorescence shelving techniques. This particular nonlinear model is a well known system for studying quantum chaos, as we now discuss.
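For $j=1$ ($N=2$ ions) this cat-state construction is easy to check numerically: after $e^{-i(\pi/2)\hat{J}_z^2}$ the state has weight $1/2$ on each of the two extremal $\hat{J}_x$ eigenstates (the spin value is an arbitrary choice for the check):

```python
import numpy as np
from scipy.linalg import expm

# spin j = 1 (N = 2 ions), |j,m> basis with m = 1, 0, -1
Jz = np.diag([1.0, 0.0, -1.0])
Jx = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]) / np.sqrt(2)

evals, evecs = np.linalg.eigh(Jx)             # ascending eigenvalues -1, 0, +1
lo, hi = evecs[:, 0], evecs[:, -1]            # |j,-j>_x and |j,j>_x

psi = expm(-1j * (np.pi / 2) * Jz @ Jz) @ lo  # theta = pi/2 applied to |j,-j>_x
p_lo = abs(lo.conj() @ psi) ** 2              # weight on |j,-j>_x
p_hi = abs(hi.conj() @ psi) ** 2              # weight on |j,j>_x
print(p_lo, p_hi)                             # 1/2 and 1/2: an equal superposition
```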
The nonlinear top model was introduced by Haake\cite{Haake91,Sanders89} as a system that could exhibit chaos in the classical limit on a compact phase space, and which could be treated quantum mechanically with a finite Hilbert space. This removed the necessity of truncating the Hilbert space and the possibility of thereby introducing spurious quantum features. The nonlinear top is defined by the collective spin Hamiltonian, \begin{equation} H = \frac{\kappa}{2 j \tau} \hat{J}_z^2 + p \hat{J}_y \sum_{n = -\infty}^\infty \delta(t-n\tau), \label{eq.top.hamiltonian} \end{equation} where $\tau$ is the duration between kicks, ${\bf \hat{J}} = (\hat{J}_x, \hat{J}_y, \hat{J}_z)$ is the angular momentum vector, and $\hat{J}^2=j(j+1)$ is a constant of the motion.
As the Hamiltonian is time periodic the appropriate quantum description is via the Floquet operator \begin{equation} U = \exp\left(-i\frac{3}{2 j} \hat{J}_z^2\right) \exp\left(-i\frac{\pi}{2} \hat{J}_y\right), \label{eq.U.top} \end{equation} which takes a state from just before one kick to just before the next, i.e.,
$\left|\psi\right> \longrightarrow U\left|\psi\right>$, where $J_z$ and $J_y$ are the usual angular momentum operators, and $j$ is the angular momentum quantum number. The first exponential, $U_P = \exp\left(-i\frac{3}{2 j} \hat{J}_z^2\right )$, describes the precession about the $z$-axis, and the second, $U_K = \exp\left(-i\frac{\pi}{2} \hat{J}_y\right)$, describes the kick.
The classical dynamics can be reduced to a two dimensional map of points on a sphere of radius $j$ \cite{Sanders89}, and the angular momentum vector can be parameterised in polar coordinates as \begin{equation} {\bf J} = j(\sin\Theta \cos\Phi, \sin\Theta \sin\Phi, \cos \Theta). \end{equation} The first term in the Hamiltonian (\ref{eq.top.hamiltonian}) describes a non-linear precession of the top about the z-axis, and the second term describes periodic kicks around the y-axis. The classical map for $p = \pi /2$ and $\kappa = 3$ has a mixed phase space with periodic elliptical fixed points and chaotic regions.
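The Floquet operator of Eq (\ref{eq.U.top}) is a finite matrix and can be built directly; a sketch (the spin value is an arbitrary choice, and the two factors are exactly $U_P$ and $U_K$ above):

```python
import numpy as np
from scipy.linalg import expm

def spin_matrices(J):
    """Jy and Jz in the |j,m> basis, m = J, J-1, ..., -J."""
    m = np.arange(J, -J - 1, -1.0)
    jp = np.zeros((m.size, m.size))
    for k in range(1, m.size):                # <m+1|J+|m> = sqrt(J(J+1) - m(m+1))
        jp[k - 1, k] = np.sqrt(J * (J + 1) - m[k] * (m[k] + 1))
    return (jp - jp.T) / (2 * 1j), np.diag(m)

J = 10                                        # arbitrary spin for the check
Jy, Jz = spin_matrices(J)
U_P = expm(-1j * 3 / (2 * J) * Jz @ Jz)       # nonlinear precession, kappa = 3
U_K = expm(-1j * np.pi / 2 * Jy)              # kick, p = pi/2
U = U_P @ U_K                                 # one Floquet step, kick applied first

print(np.allclose(U.conj().T @ U, np.eye(2 * J + 1)))   # True: U is unitary
```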
It is now clear that this model can be simulated by the sequence of pulses in Eq.~(\ref{UNL}) with appropriate values for the pulse area, together with a single linear rotation. This presents the possibility of directly testing a number of ideas in the area of quantum chaos, particularly the idea of hypersensitivity to perturbation introduced by Schack and Caves \cite{Caves94}. Of particular interest here is the ability to very precisely simulate the measurement-induced hypersensitivity discussed in \cite{Breslin99}. In that paper the kicked top was subjected to a readout using a single spin that could be prepared in a variety of states. The interaction between the top and the readout spin is described by \begin{equation} U_I = \exp\left(-i \mu \hat{J}_y \sigma_z^{(R)}\right), \end{equation} where we regard one ion as set aside to do the readout and label it with a superscript. It is relatively straightforward to generate this interaction via a pulse sequence of conditional displacements, \begin{equation} U_I=e^{i\kappa_x\hat{X}\sigma_z^{(R)}}e^{i\kappa_p\hat{P}\hat{J}_y}e^{-i\kappa_x\hat{X}\sigma_z^{(R)}}e^{-i\kappa_p\hat{P}\hat{J}_y} \end{equation} with $\mu=\kappa_x\kappa_p$. It is now possible to consider a long sequence of measurements made at the end of each nonlinear kick and record the resulting binary strings of measurement results.
Initial states of the kicked top can easily be prepared as coherent angular momentum states by appropriate linear rotations. In the basis of orthonormal $\hat{J}_z$ eigenstates $\left|j,m\right>$, with $\hat{J}_z\left|j,m\right> = m\left|j,m\right>$ and $\hat{{\bf J}}^2\left|j,m\right> = j(j+1)\left|j,m\right>$, the spin coherent states can be written as a rotation of the collective ground state \cite{Haake91,Arrechi72} through the spherical polar angles $(\theta,\phi)$, \begin{equation}
|\gamma\rangle = \exp\left [i\theta(\hat{J}_x\sin\phi-\hat{J}_y\cos\phi)\right ]|j,-j\rangle \label{eq.coherent} \end{equation} where $\gamma = e^{i\phi} \tan\left(\frac{\theta}{2}\right)$. This can be achieved by identical, appropriately phased pulses on each ion separately. Initial states localised in either the regular or chaotic regions of the classical phase space may thus be easily prepared.
Using a sequence of conditional displacement operators that does distinguish different ions, we can simulate various interacting spin models. As interacting spins are required for general quantum logic gates, these models may be seen as a way to perform quantum logical operations without first cooling the ions to the ground state of some collective vibrational mode.
Suppose, for example, we wish to simulate the interaction of two spins with the Hamiltonian \begin{equation} H_{int}=\hbar\chi\sigma_z^{(1)}\sigma_z^{(2)}. \end{equation} The required pulse sequence is \begin{eqnarray} U_{int} & = & e^{i\kappa_x\hat{X}\sigma_z^{(1)}}e^{i\kappa_p\hat{P}\sigma_z^{(2)}}e^{-i\kappa_x\hat{X}\sigma_z^{(1)}}e^{-i\kappa_p\hat{P}\sigma_z^{(2)}}\\ \nonumber & = & e^{-i\chi\sigma_z^{(1)}\sigma_z^{(2)}}, \end{eqnarray} with $\chi=\kappa_x\kappa_p$.
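This is the exact group-commutator relation $e^{A}e^{B}e^{-A}e^{-B}=e^{[A,B]}$, which holds because $[\hat X,\hat P]=i$ makes $[A,B]$ proportional to $\sigma_z^{(1)}\sigma_z^{(2)}$, which commutes with both exponents. A numerical sanity check of ours in a truncated oscillator basis (truncation is the only source of error, so it is checked on a low-lying state):

```python
import numpy as np

N = 40                                      # oscillator truncation
ad = np.diag(np.sqrt(np.arange(1, N)), -1)  # creation operator a^dagger
a = ad.T                                    # annihilation operator
X = (a + ad) / np.sqrt(2)                   # [X, P] = i (up to truncation)
P = (a - ad) / (np.sqrt(2) * 1j)
sz = np.diag([1.0, -1.0])                   # Pauli z, used for this check
I2, Iosc = np.eye(2), np.eye(N)

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

def U(H):
    """exp(iH) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w)) @ V.conj().T

kx, kp = 0.3, 0.4
A = kron3(kx * X, sz, I2)                   # kappa_x X sigma_z^(1)
B = kron3(kp * P, I2, sz)                   # kappa_p P sigma_z^(2)
U_seq = U(A) @ U(B) @ U(-A) @ U(-B)
U_tgt = U(-kx * kp * kron3(Iosc, sz, sz))   # exp(-i chi sz sz), chi = kx*kp

# compare on a state far from the truncation boundary
fock0 = np.zeros(N); fock0[0] = 1.0
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = np.kron(np.kron(fock0, plus), plus)
assert np.allclose(U_seq @ psi, U_tgt @ psi, atol=1e-8)
```

The agreement is exact in the untruncated space; truncation errors only enter through amplitudes that reach the edge of the Fock basis, which are negligible for these pulse strengths.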
This transformation may be used together with single spin rotations to simulate a two-spin transformation that is one of the universal two-qubit gates for quantum computation. For example, the controlled phase shift operation \begin{equation}
U_{cp}=e^{-i\pi|e\rangle_1\langle e|\otimes|e\rangle_2\langle e|} \end{equation} may be realised, up to an overall phase, with $\chi=\pi$ as \begin{equation} U_{cp}=e^{-i\frac{\pi}{2}\sigma_z^{(1)}}e^{-i\frac{\pi}{2}\sigma_z^{(2)}}U_{int}. \end{equation} Once again this transformation does not depend on the vibrational state, and so long as it is applied faster than the heating rate of the collective vibrational mode it describes an effective interaction between two qubits independent of the vibrational mode.
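As a check of conventions (our assumption: $\sigma_z^{(i)}$ here denotes the spin-$\frac{1}{2}$ operator with eigenvalues $\pm\frac{1}{2}$, the same normalization as the collective $\hat{\bf J}$ operators), the stated product does reproduce the controlled-phase gate up to a global phase:

```python
import numpy as np

sz = np.diag([0.5, -0.5])           # spin-1/2 z operator (assumed convention)
I2 = np.eye(2)

def expd(D):                        # exponential of a diagonal matrix
    return np.diag(np.exp(np.diag(D)))

U_int = expd(-1j * np.pi * np.kron(sz, sz))            # chi = pi
U = expd(-1j * np.pi / 2 * np.kron(sz, I2)) @ \
    expd(-1j * np.pi / 2 * np.kron(I2, sz)) @ U_int

# dividing out the global phase leaves a controlled phase flip; with this
# convention |e> is the s = +1/2 basis state, so the -1 sits in entry (0,0)
assert np.allclose(U / U[3, 3], np.diag([-1.0, 1.0, 1.0, 1.0]))
```

With the Pauli-matrix convention (eigenvalues $\pm 1$) the required angles would instead be a factor of two smaller; only the normalization of $\sigma_z$ changes the bookkeeping, not the physics.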
We have proposed a scheme, based on conditional displacements of a collective vibrational mode, to simulate a variety of nonlinear spin models using a linear ion trap in the quantum computing architecture, without requiring that the collective vibrational mode be cooled to the ground state. The scheme does, however, require that the heating of the collective vibrational mode is negligible over the time of the application of the Raman conditional displacement pulses; it does not matter that the ion heats up between pulses. If the pulses were applied for times comparable to the heating times, the pulse sequences described above would not be described by a product of unitary transformations but rather by completely positive maps that include the unitary part as well as the nonunitary heating part. Such maps provide a means to test various thermodynamic limits of nonlinear spin models and will be discussed in a future paper. In current experiments the heating time is estimated to be of the order of 1 ms, which is much shorter than the theoretically expected values of as long as seconds \cite{NIST}. The source of this heating is unclear, but efforts are under way to eliminate it, so we can expect heating times to eventually become long enough to ignore. In current experiments, however, the sequence of conditional displacements would need to be applied on time scales of less than 1 ms. This is achievable using Raman pulses. We thus conclude that simple collective and interacting spin models with a few spins are within reach of current ion trap quantum computer experiments.
\acknowledgements I would like to thank Daniel James and David Wineland for useful discussions.
\begin{references}
\bibitem{Feynman} R.~P.~Feynman, Int. J. Theor. Phys. {\bf 21}, 467 (1982).
\bibitem{Shor1994} P.~W.~Shor, in {\em Proc. 35th Annual Symposium on the Foundations of Computer Science}, edited by S.~Goldwasser (IEEE Computer Society Press, Los Alamitos, California, 1994), p.~124; see also A.~Ekert and R.~Jozsa, Rev. Mod. Phys. (1996).
\bibitem{Lloyd96} S.~Lloyd, Science {\bf 273}, 1073 (1996).
\bibitem{Abrams97} D.~S.~Abrams and S.~Lloyd, Phys. Rev. Lett. {\bf 79}, 2586 (1997).
\bibitem{Lidar97} D.~Lidar and O.~Biham, Phys. Rev. E {\bf 56}, 3661 (1997).
\bibitem{Boghosian98} B.~Boghosian and W.~Taylor, Phys. Rev. E {\bf 57}, 54 (1998).
\bibitem{Zalka98} C.~Zalka, Proc. R. Soc. Lond. A (1998).
\bibitem{Ring1980} P.~Ring and P.~Schuck, {\em The Nuclear Many-Body Problem} (Springer, New York, 1980), pp.~110--112.
\bibitem{Drummond} P.~D.~Drummond, Phys. Rev. A {\bf 22}, 1179 (1980), and references therein.
\bibitem{Mahan} G.~D.~Mahan, {\em Many-Particle Physics} (Plenum, New York, 1990).
\bibitem{Kaku} M.~Kaku, {\em Quantum Field Theory} (Oxford, New York, 1993), Chap.~17.
\bibitem{NIST} D.~J.~Wineland, C.~Monroe, W.~M.~Itano, D.~Leibfried, B.~E.~King, and D.~M.~Meekhof, ``Experimental issues in coherent quantum-state manipulation of trapped atomic ions,'' J. Res. Natl. Inst. Stand. Technol. {\bf 103}, 259 (1998).
\bibitem{LANL} R.~J.~Hughes, D.~F.~V.~James, J.~J.~Gomez, M.~S.~Gulley, M.~H.~Holzscheiter, P.~G.~Kwiat, S.~K.~Lamoreaux, C.~G.~Peterson, V.~D.~Sandberg, M.~M.~Schauer, C.~M.~Simmons, C.~E.~Thorburn, D.~Tupa, P.~Z.~Wang, and A.~G.~White, Fortschritte der Physik {\bf 46}, 329 (1998).
\bibitem{James} M.~S.~Gulley, A.~White, and D.~F.~V.~James, ``A Raman approach to quantum logic in Calcium-like ions,'' submitted to J. Opt. Soc. Am. B (1999).
\bibitem{Dicke} R.~H.~Dicke, Phys. Rev. {\bf 93}, 99 (1954).
\bibitem{Monroe1996} C.~Monroe, D.~M.~Meekhof, B.~E.~King, and D.~J.~Wineland, Science {\bf 272}, 1131 (1996).
\bibitem{Bollinger} J.~J.~Bollinger, W.~M.~Itano, D.~J.~Wineland, and D.~J.~Heinzen, Phys. Rev. A {\bf 54}, 4649 (1996).
\bibitem{Sanders} B.~C.~Sanders, Phys. Rev. A {\bf 40}, 2417 (1989).
\bibitem{Haake91} F.~Haake, M.~Ku\'{s}, and R.~Scharf, Z. Phys. B {\bf 65}, 381 (1987).
\bibitem{Sanders89} B.~C.~Sanders and G.~J.~Milburn, Z. Phys. B {\bf 77}, 497 (1989).
\bibitem{Caves94} R.~Schack, G.~M.~D'Ariano, and C.~M.~Caves, Phys. Rev. E, 972 (1994).
\bibitem{Breslin99} J.~Breslin and G.~J.~Milburn, Phys. Rev. A {\bf 59}, 1781 (1999).
\bibitem{Arrechi72} F.~T.~Arecchi, E.~Courtens, R.~Gilmore, and H.~Thomas, Phys. Rev. A {\bf 6}, 2211 (1972).
\end{references}
\end{document}
\begin{document}
\begin{abstract} Descending plane partitions, alternating sign matrices, and totally symmetric self-complementary plane partitions are equinumerous combinatorial sets for which no explicit bijection is known. In this paper, we isolate a subset of descending plane partitions counted by the Catalan numbers. The proof follows by constructing a generating tree on these descending plane partitions that has the same structure as the generating tree for 231-avoiding permutations. We hope this result will provide insight on the search for a bijection with alternating sign matrices and/or totally symmetric self-complementary plane partitions, since these also contain Catalan subsets. \end{abstract}
\title{A Catalan Subset of Descending Plane Partitions}
\section{Introduction} Descending plane partitions with largest part at most $n$, alternating sign matrices with $n$ rows and $n$ columns, and totally symmetric self-complementary plane partitions inside a $2n\times 2n\times 2n$ box are each enumerated by the product formula \begin{equation} \label{eq:prod} \prod_{j=0}^{n-1}\frac{(3j+1)!}{(n+j)!}.\tag{$*$} \end{equation} This cries out for a \emph{bijective} explanation, but it has been an outstanding problem for decades to find these missing bijections. See \cite{ANDREWS_DPP,ANDREWS_PPV,book,kuperbergASMpf,MRRANDREWSCONJ,MRRASMDPP,MRR3,RobbinsRumseyDet,ZEILASM} for these enumerations and bijective conjectures. Many papers have been written in an effort to reduce the problem; papers relevant to the descending plane partition portion of the problem include R.~Behrend, P.~Di Francesco, and P.~Zinn-Justin's proofs of multiply-refined enumerations of alternating sign matrices and descending plane partitions \cite{WeightedASMDPP,Behrend} and the second author's bijective map between descending plane partitions with no special parts and permutation matrices \cite{dppandpmbijection}. Descending plane partitions may arguably be the least natural of these three sets of objects, but they are the only one for which a statistic is known whose generating function is a $q$-analogue of (\ref{eq:prod}). This statistic is particularly nice; it is the sum of the entries (conjectured in~\cite{ANDREWS_DPP} and proved in~\cite{MRRANDREWSCONJ}). Descending plane partitions can be seen as non-intersecting lattice paths~\cite{lalonde2003} and also as rhombus tilings of a hexagon with a triangular hole in the middle~\cite{KrattDPP}.
\textbf{Our main result} (Theorem~\ref{lem:catrelat}) isolates a subset of descending plane partitions of order $n$ and proves it to be enumerated by the $n$th Catalan number $C_n = \frac{1}{n+1}\binom{2n}{n}$. This result is a step toward better understanding how descending plane partitions relate to the other objects enumerated by (\ref{eq:prod}), since there are several known Catalan subsets within those sets. These include \emph{link patterns} in \emph{fully-packed loops}, diagonals of \emph{monotone and magog triangles} (equivalently, order ideals within layers of \emph{tetrahedral posets}~\cite{STRIKERPOSET}), and endpoints of certain nests of non-intersecting lattice paths~\cite{DiFrancesco_TSSCPP_qKZ}.
We hope that comparing Catalan descending plane partitions with the known Catalan subsets of alternating sign matrices and totally symmetric self-complementary plane partitions will be helpful in finding these missing bijections. We also remark that we prove Theorem~\ref{lem:catrelat} neither by finding a bijection to typical Catalan objects, such as Dyck paths or triangulations, nor by showing these objects satisfy the Catalan recurrence in the usual way. Rather, we use the generating tree approach of J.~West~\cite{CatTrees}, which we have found to be both lovely and useful, and which deserves to be more widely known.
The paper is organized as follows. In Section~\ref{sec:def}, we first define descending plane partitions. We then define the other objects enumerated by (\ref{eq:prod}) and discuss their known Catalan subsets. In Section~\ref{sec:catdpp}, we identify our Catalan subset of descending plane partitions and state our main result, Theorem~\ref{lem:catrelat}, on the enumeration of this subset. In Section~\ref{sec:gentree}, we give background on the Catalan generating tree and 231-avoiding permutations. In Section~\ref{sec:lemmaproof}, we prove Theorem~\ref{lem:catrelat}. Finally, in Section~\ref{sec:disc}, we discuss some implications of our main theorem.
\section{Definitions and background} In this section, we first state definitions related to descending plane partitions. We then define alternating sign matrices and totally symmetric self-complementary plane partitions and briefly discuss their Catalan subsets.
\label{sec:def} \begin{defn} A \textbf{plane partition} is a finite subset $P$ of the positive integer lattice $\mathbb{N}^3$ such that if $(r,s,t)\in P$ and $(i,j,k)$ satisfies $1 \leq i \leq r$, $1 \leq j \leq s$, and $1 \leq k \leq t$, then $(i,j,k)\in P$ as well. \end{defn} A plane partition may be visualized as a stack of unit cubes pushed into a corner as shown in Figure~\ref{fig:planepartition} below. A plane partition may also be represented as a left- and top-justified array of integers where each row represents one layer of the plane partition and each integer represents the number of unit cubes stacked in that column of the layer. For example, Figure \ref{fig:planepartition}, left, shows a plane partition with its corresponding integer array. Note that the rows and columns of an integer array corresponding to a plane partition must be weakly decreasing.
\begin{defn} A \textbf{shifted plane partition} is an array of integers weakly decreasing in rows and columns such that the beginning of each row is shifted to the right by one position relative to the previous row. That is, it is an integer array as in Figure \ref{fig:jesslabeling} where each row and column is weakly decreasing. A \textbf{strict shifted plane partition} is a shifted plane partition with the additional restriction that each column must be strictly decreasing. \end{defn}
Figure \ref{fig:planepartition}, center, shows a strict shifted plane partition.
\begin{figure}
\caption{Left: A plane partition, visualized as unit cubes in a corner, with its corresponding integer array representation; Center: A strict shifted plane partition; Right: A descending plane partition.}
\label{fig:planepartition}
\end{figure}
Our main objects of study are the following. \begin{defn} \label{def:dpp} A \textbf{descending plane partition} (DPP) is a strict shifted plane partition with the additional restrictions that: \begin{enumerate} \item the number of parts in each row is less than the greatest part in that row, and \item the number of parts in each row is greater than or equal to the largest part in the next row. \end{enumerate} Using the notation in Figure \ref{fig:jesslabeling}, the above conditions are equivalent to: \begin{enumerate} \item $\lambda_{k} - k + 1<a_{k,k}$, and \item $\lambda_{k} - k + 1\geq{a_{k+1,k+1}}$. \end{enumerate} A descending plane partition is \textbf{of order $n$} if its largest part is less than or equal to $n$. \end{defn}
\begin{figure}
\caption{Labeling of positions in shifted, strict shifted, and descending plane partitions.}
\label{fig:jesslabeling}
\end{figure}
An example of a descending plane partition is given in Figure \ref{fig:planepartition}, right. See Figure~\ref{fig:dppsorder3} for the seven descending plane partitions of order 3. \begin{figure}
\caption{The seven descending plane partitions of order 3.}
\label{fig:dppsorder3}
\end{figure}
In the next section, we define our Catalan subset of descending plane partitions, which we prove in Theorem~\ref{lem:catrelat} to be counted by the Catalan numbers. We finish this section with definitions of the other objects enumerated by (\ref{eq:prod}) and a short discussion of their Catalan subsets.
\begin{defn} An \textbf{alternating sign matrix} is a square matrix with entries in $\{0,1,-1\}$, row and column sums equal to 1, and the additional restriction that the nonzero entries alternate in sign across each row and column. \end{defn}
\begin{figure}
\caption{Left: An alternating sign matrix; Center: Its corresponding monotone triangle; Right: Its corresponding fully-packed loop configuration.}
\label{fig:asmexample}
\end{figure}
An example alternating sign matrix can be seen in Figure \ref{fig:asmexample}, left. Note any permutation matrix is also an alternating sign matrix. The seven $3\times3$ alternating sign matrices are given in Figure \ref{fig:sevenasms}.
\begin{figure}
\caption{The seven $3\times3$ alternating sign matrices.}
\label{fig:sevenasms}
\end{figure}
Alternating sign matrices are in bijection with monotone triangles and fully-packed loop configurations. See Figure~\ref{fig:asmexample} for an example; see also~\cite{ProppManyFaces}. Each length $n$ northwest to southeast diagonal of a monotone triangle is a weakly increasing sequence of integers $x_1,x_2,\ldots,x_n$ such that $i\leq x_i\leq n$; these are counted by the $n$th Catalan number $C_n$. Also, each fully-packed loop configuration of order $n$ has an associated link pattern, which is the noncrossing matching on its $2n$ external vertices; these are also counted by $C_n$.
\begin{defn} A plane partition $P$ is \textbf{totally symmetric} if whenever $(i,j,k)\in P$, then all six permutations of $(i,j,k)$ are also in $P$. A plane partition is \textbf{self-complementary} inside its $a\times b\times c$ bounding box if it is equal to its complement in the box, that is, the collection of empty unit cubes in the box has the same shape as the collection of cubes in the plane partition itself. \end{defn}
\begin{figure}
\caption{Left: A totally symmetric self-complementary plane partition inside a $12\times 12\times 12$ box; Center: Its corresponding magog triangle; Right: Its corresponding nest of non-intersecting lattice paths.}
\label{fig:tsscppex}
\end{figure}
An example totally symmetric self-complementary plane partition can be seen in Figure \ref{fig:tsscppex}, left. The seven totally symmetric self-complementary plane partitions inside a $6\times 6\times 6$ box are given in Figure~\ref{ex:tikztsscpp}.
\begin{figure}
\caption{The seven totally symmetric self-complementary plane partitions inside a $6\times 6\times 6$ box.}
\label{ex:tikztsscpp}
\end{figure}
Totally symmetric self-complementary plane partitions inside a $2n\times 2n\times 2n$ box are in bijection with magog triangles and certain non-intersecting lattice paths. See Figure~\ref{fig:tsscppex} for an example; see also~\cite{PermTSSCPP}. Each length $n$ northwest to southeast diagonal of a magog triangle is a sequence of integers $x_1,x_2,\ldots,x_n$ such that $i\leq x_i\leq n$ and $x_{i+1}\leq x_{i}+1$; these are counted by $C_n$. The number of possible sequences of dots and blank spots across the bottom row of the non-intersecting lattice path representation is also counted by $C_n$, via a bijection to Dyck paths.
\section{Catalan descending plane partitions} \label{sec:catdpp} In this section, we identify a subset of descending plane partitions that can be counted by the Catalan numbers and state our main result, Theorem~\ref{lem:catrelat}. We will prove this theorem in Section~\ref{sec:lemmaproof} using the Catalan generating tree, discussed in Section~\ref{sec:gentree}.
\begin{defn} A \textbf{Catalan descending plane partition (Catalan DPP)} is a one-row descending plane partition $a_{1,1} \ a_{1,2} \ \cdots \ a_{1,{\lambda_1}}$ such that each entry satisfies $a_{1,j} \leq a_{1,1} - j + 1$. \end{defn}
The Catalan descending plane partitions of order 4 are shown in Figure \ref{fig:specialdpprows}.
\begin{figure}
\caption{The 14 Catalan descending plane partitions of order 4.}
\label{fig:specialdpprows}
\end{figure}
Our main theorem justifies the naming of these objects. \begin{thm} \label{lem:catrelat} Catalan descending plane partitions of order $n$ are counted by the $n$th Catalan number \[C_n= \displaystyle\frac{1}{n + 1}\displaystyle\binom{2n}{n}.\] \end{thm}
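Theorem~\ref{lem:catrelat} can be confirmed by brute force for small $n$; the following Python sketch (included purely as an illustration, with our own function names) enumerates the one-row arrays satisfying the defining conditions above:

```python
from itertools import product
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def catalan_dpps(n):
    """All Catalan DPPs of order n (as tuples), including the empty one:
    weakly decreasing rows (a1, ..., aL) with a1 <= n, L < a1, and
    a_j <= a1 - j + 1 for every j."""
    rows = [()]
    for a1 in range(1, n + 1):
        for L in range(1, a1):                 # number of parts < largest part
            for rest in product(range(1, a1 + 1), repeat=L - 1):
                row = (a1,) + rest
                weakly_dec = all(row[i] >= row[i + 1] for i in range(L - 1))
                catalan_cond = all(row[i] <= a1 - i for i in range(1, L))
                if weakly_dec and catalan_cond:
                    rows.append(row)
    return rows

assert [len(catalan_dpps(n)) for n in range(1, 7)] == \
       [catalan(n) for n in range(1, 7)]       # 1, 2, 5, 14, 42, 132
```

For $n=4$ this recovers the 14 objects of Figure~\ref{fig:specialdpprows}.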
In our proof of Theorem~\ref{lem:catrelat}, we will represent Catalan descending plane partitions as paths.
\begin{defn} A \textbf{\rowname{} path} of order $n$ is a sequence of entries in $\{-1,1\}$ with at most $n-1$ ones such that the left-to-right partial sums are nonnegative and the total sum is greater than zero (if the sequence is non-empty). \end{defn}
The 14 \rowname{} path{s} of order 4 are given in Figure \ref{fig:words}.
\begin{figure}
\caption{The 14 Catalan DPP paths of order 4, in order corresponding to Figure~\ref{fig:specialdpprows} via the bijection of Lemma~\ref{lem:bijDPPtoDPPpath}.}
\label{fig:words}
\end{figure}
\rowname{} path{s} are in natural bijection with Catalan DPP{s}.
\begin{lemma} \label{lem:bijDPPtoDPPpath} Catalan DPP{s} of order $n$ are in bijection with \rowname{} path{s} of order $n$. \end{lemma}
\begin{proof} The empty Catalan DPP{} $\emptyset$ corresponds to the empty \rowname{} path{} $\emptyset$. For nonempty Catalan DPP{s}, following the convention in the definitions of the previous section, we are interested in the southwest wall projected onto a grid; see Figure~\ref{fig:rowtopath}. This side view of the Catalan DPP{} nicely corresponds with the numeric representation, in that each entry $a_{1,j}$ of the Catalan DPP{} is represented as a set of $a_{1,j}$ boxes aligned with the bottom-left of the graph in sequential order.
Consider the path formed by tracing the upper edge of the projected Catalan DPP{} from upper left to lower right. Representing this path as a sequence of $1$'s and $-1$'s, where each $-1$ corresponds to a move to the right and each $1$ corresponds to a move down, we obtain a word beginning with a $-1$ and ending with a $1$. Note that there cannot be a move down without first a move to the right to define the top of the first set of boxes, and there must be a final descent to finish the Catalan DPP{} on the last entry. Ignoring these first and last entries, we have a valid \rowname{} path. Each $-1$ in a \rowname{} path{} corresponds to an increase in the index $j$ of the entries $a_{1,j}$ of the Catalan DPP, so the requirement of partial sums greater than or equal to zero corresponds to the Catalan DPP{} requirement that $a_{1,j}$ be less than or equal to $a_{1,1} - j + 1$. The requirement that the sum of each \rowname{} path{} be greater than zero ensures that the Catalan DPP{} represented by the path is a valid DPP, with the length of each row less than the largest part in the row. This process is clearly invertible, and is thus a bijection. \end{proof}
See Figure~\ref{fig:rowtopath} for an example of this bijection.
\begin{figure}
\caption{An example of the bijection between a Catalan DPP{} and its \rowname{} path{}.}
\label{fig:rowtopath}
\end{figure}
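The bijection of Lemma~\ref{lem:bijDPPtoDPPpath} is easy to implement; a Python sketch (illustrative only, with our own names, encoding a right step as $-1$ and a down step as $1$ exactly as in the proof):

```python
def dpp_to_path(dpp):
    """Trace the upper edge of the stacked-box picture of a one-row DPP,
    recording -1 for a step right and 1 for a step down, then drop the
    forced leading -1 and trailing 1."""
    if not dpp:
        return ()
    heights = list(dpp) + [0]
    word = []
    for j in range(len(dpp)):
        word.append(-1)                                   # step over column j
        word.extend([1] * (heights[j] - heights[j + 1]))  # drop to next height
    return tuple(word[1:-1])

def path_to_dpp(path):
    """Inverse map: restore the dropped steps, then read off column heights."""
    if not path:
        return ()
    word = (-1,) + tuple(path) + (1,)
    height = sum(1 for s in word if s == 1)   # total down-steps = largest part
    dpp = []
    for s in word:
        if s == -1:
            dpp.append(height)
        else:
            height -= 1
    return tuple(dpp)

# round trip on the five order-3 Catalan DPPs
for d in [(), (2,), (3,), (3, 1), (3, 2)]:
    assert path_to_dpp(dpp_to_path(d)) == d
```

For example, the Catalan DPP $3\ 1$ maps to the path $1\ 1\ {-1}$ and back.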
\section{231-avoiding permutations and the Catalan generating tree} \label{sec:gentree} To prove Theorem \ref{lem:catrelat}, we will use the Catalan generating tree to show that \rowname{} path{s} are in bijection with 231-avoiding permutations, which are known to be counted by $C_n$.
\begin{defn} A \textbf{$\mathbf{231}$-avoiding permutation} is a permutation $\sigma = \sigma_1 \sigma_2 \cdots \sigma_n$ with no triple of indices $\{i,j,k\}$ such that $i<j<k$ and $\sigma_k < \sigma_i < \sigma_j$. \end{defn}
It is well-known that permutations in $\mathfrak{S}_n$ avoiding any fixed pattern $\pi\in\mathfrak{S}_3$ are counted by $C_n$; see, for example, \cite{stanley2015catalan}. We give below a proof for the case $\pi=231$.
\begin{lemma} 231-avoiding permutations in $\mathfrak{S}_n$ are counted by $C_n$. \end{lemma}
\begin{proof} Let $X_n$ denote the number of 231-avoiding permutations in $\mathfrak{S}_n$. It is clear that if $\sigma\in\mathfrak{S}_n$ is a 231-avoiding permutation, then any consecutive subsequence $\sigma_i \cdots \sigma_j$ for $1\leq{i}\leq{j}\leq{n}$ is also a 231-avoiding permutation (when each entry is standardized to fall between $1$ and $j-i+1$ while maintaining relative order). Suppose $\sigma_p=n$. We may break $\sigma$ at position $p$ into permutations $\sigma_1 \cdots \sigma_{p-1}$ and $\sigma_{p+1} \cdots \sigma_{n}$ (allowing the empty permutation), which are still 231-avoiding when standardized. Moreover, every entry of $\sigma_1 \cdots \sigma_{p-1}$ must be smaller than every entry of $\sigma_{p+1} \cdots \sigma_{n}$, since otherwise a 231 pattern would be formed with $\sigma_p=n$ playing the role of the 3. This gives the Catalan recurrence relation $X_{n} = \displaystyle\sum_{p=1}^{n}X_{p-1} X_{n-p}, \ X_0=1$, since we obtain two independent 231-avoiding permutations of lengths $p-1$ and $n-p$. \end{proof}
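The count can also be verified directly; a short Python check, included purely as an illustration:

```python
from itertools import combinations, permutations
from math import comb

def avoids_231(sigma):
    """True iff sigma has no indices i < j < k with sigma_k < sigma_i < sigma_j."""
    return not any(sigma[k] < sigma[i] < sigma[j]
                   for i, j, k in combinations(range(len(sigma)), 3))

counts = [sum(avoids_231(s) for s in permutations(range(1, n + 1)))
          for n in range(1, 7)]
assert counts == [comb(2 * n, n) // (n + 1) for n in range(1, 7)]  # 1, 2, 5, 14, 42, 132
```

The pattern check runs over all triples of positions, which is ample for these small cases.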
In \cite{CatTrees}, J.~West describes a generating tree whose nodes are in bijection with 231-avoiding permutations. The structure of these trees can be described as follows:
\begin{defn} To construct the \textbf{Catalan generating tree}, begin with the first node at level $n = 0$. If a node is at level $n$, we say its children are at level $n+1$, and its parent is at level $n-1$. To determine how many children each node should have, if a node is in the $p$th position from the left among its siblings, it will have $p+1$ children. The convention for the node at level $n=0$ is that it has one child. \end{defn}
\begin{lemma}[\cite{CatTrees}] \label{lem:cattree} The number of nodes at level $n$ in the Catalan generating tree is $C_n$. \end{lemma}
\begin{proof} The nodes of the Catalan generating tree will be populated with 231-avoiding permutations following West's procedure given in \cite{CatTrees}. First, the root (level $n=0$) will be labeled with $\emptyset$. The nodes at level $n$ for $n\geq 1$ will be 231-avoiding permutations of length $n$. The children of a node at level $n$ will be found by inserting $n+1$ into the permutation at all valid locations that maintain the 231-avoiding property, working from left to right. By doing this, we ensure that each node generated is a valid 231-avoiding permutation. We say that $\sigma_i$ is a left-to-right maximum of $\sigma$ if $\sigma_i > \sigma_k$ for all $k<i$. The valid locations for insertion are immediately to the left of a left-to-right maximum or at the end of the permutation. This generates every 231-avoiding permutation for a given $n$. Inserting the new largest entry into the permutation affects the descendants predictably. If a node has $k$ left-to-right maxima, it will have $k+1$ children (the additional child comes from appending the new largest entry to the end of the permutation) that have $1,2,3,\ldots,k+1$ left-to-right maxima, respectively, and so $2,3,4,\ldots,k+2$ children, respectively. This pattern follows exactly the structure of the Catalan generating tree as defined above, and so the nodes at level $n$ of the Catalan generating tree are counted by $C_n$, as desired. \end{proof}
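The child-count rule is easy to simulate; the following Python sketch (our illustration) records each node only by its number of children and confirms the level sizes:

```python
from math import comb

# Each node is recorded by its number of children; a node in position p
# (1-indexed from the left among its siblings) has p + 1 children.
level = [2]                  # level 1: the root's single child, in position 1
sizes = [1, 1]               # levels 0 and 1 each contain one node
for _ in range(8):
    level = [p + 1 for k in level for p in range(1, k + 1)]
    sizes.append(len(level))

catalan = [comb(2 * n, n) // (n + 1) for n in range(10)]
assert sizes == catalan      # 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862
```

This reproduces the level sizes visible in Figure \ref{fig:cattree}.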
The start of the Catalan generating tree is shown in Figure \ref{fig:cattree}, and the correspondence with 231-avoiding permutations can be seen in Figure \ref{fig:permtree}.
\begin{figure}
\caption{The first five levels of the Catalan generating tree described in \cite{CatTrees}. Each node is labeled with the number of children it has.}
\label{fig:cattree}
\end{figure}
\begin{figure}
\caption{The first five levels of the Catalan generating tree with nodes labeled by the 231-avoiding permutations.}
\label{fig:permtree}
\end{figure}
\section{Proof of Theorem \ref{lem:catrelat}}\label{sec:lemmaproof}
We will obtain a bijection between Catalan DPP{s} and 231-avoiding permutations through \rowname{} path{s} by constructing a generating tree with \rowname{} path{s} as the nodes. The $n$th level of this tree will consist of all the \rowname{} path{s} of order $n$; see Figure \ref{fig:pathtree}.
\begin{defn} To construct the \textbf{Catalan DPP path generating tree}, begin with the topmost node, which will be $\emptyset$. To determine the children of a given Catalan DPP path $P$, we have the following three cases.
\begin{itemize} \item \textbf{Case $P=\emptyset$:} We must check to see if $P$ has a parent. If $P$ has a parent, its children will be $\emptyset$ and $1$. If $P$ does not have a parent, it will have one child, $\emptyset$. \item \textbf{Case $P\neq\emptyset$ and $P$ contains a -1:} The rightmost of $P$'s children will be $P$ with an additional pair 1 -1 inserted directly before the first occurrence of -1 in $P$. This produces a valid \rowname{} path{}, since the total sum of the child \rowname{} path{} remains the same, and inserting the consecutive pair 1 -1 never decreases any partial sum. After this first child is generated, the remaining siblings from right to left can be found by shifting the position of the leftmost -1 one position further left, unless doing so would cause the path to no longer be a valid \rowname{} path{} (in other words, unless the -1 would be shifted to the left of the first 1). \item \textbf{Case $P\neq\emptyset$ and $P$ contains no -1:} The rightmost of $P$'s children will be $P$ with an additional 1 inserted at the end; inserting a 1 anywhere in a \rowname{} path{} clearly produces another valid path. The next sibling \rowname{} path{} can be found by appending a -1 to the end of the rightmost child \rowname{} path. The remaining siblings can be found by shifting this -1 left as far as possible, as before. \end{itemize} \end{defn}
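A Python sketch of these child rules (illustrative only; the helper names are ours), together with a brute-force check that level $n$ of the tree consists exactly of the \rowname{} path{s} of order $n$:

```python
from itertools import product
from math import comb

def children(path):
    """Children of a non-root node of the Catalan DPP path generating
    tree, listed right to left as in the definition.  (The root's only
    child is the empty path, handled outside this function.)"""
    if path == ():
        return [(), (1,)]
    path = list(path)
    if -1 not in path:
        kids = [tuple(path + [1])]       # rightmost child: append a 1
        c = path + [1, -1]               # next sibling: also append a -1
        kids.append(tuple(c))
    else:
        i = path.index(-1)               # insert the pair 1 -1 before it
        c = path[:i] + [1, -1] + path[i:]
        kids = [tuple(c)]
    j = c.index(-1)
    while j > 1:                         # shift the leftmost -1 further left,
        c[j - 1], c[j] = c[j], c[j - 1]  # keeping it right of the first 1
        j -= 1
        kids.append(tuple(c))
    return kids

def paths_of_order(n):
    """All Catalan DPP paths of order n, by direct enumeration."""
    out = [()]
    for length in range(1, 2 * n - 2):
        for w in product([1, -1], repeat=length):
            sums = [sum(w[:i + 1]) for i in range(length)]
            if w.count(1) <= n - 1 and min(sums) >= 0 and sums[-1] > 0:
                out.append(w)
    return out

level = [()]                             # level 1 of the tree
for n in range(2, 7):
    level = [kid for node in level for kid in children(node)]
    assert sorted(level) == sorted(paths_of_order(n))     # same set, no repeats
    assert len(level) == comb(2 * n, n) // (n + 1)        # C_n nodes
```

The set comparison checks both parts of the proof below: every path of order $n$ appears at level $n$, and no path appears twice.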
To prove Theorem \ref{lem:catrelat}, we show that the Catalan DPP path generating tree is in bijection with the Catalan generating tree, discussed in the previous section.
\begin{proof}[Proof of Theorem \ref{lem:catrelat}] It is clear that the first three levels of the \rowname{} path{} generating tree are in bijection with the Catalan generating tree, and each leftmost node except the first will always have two children, as the leftmost node is always $\emptyset$. To see how the position of each node relative to its siblings determines how many children it will have, we will consider two sibling nodes, $A$ and $B$, immediately next to each other with $A$ being on the left and $B$ being on the right. To see how $B$ will always have one more child than $A$, there are three cases to consider: \begin{itemize} \item \textbf{Case $A=\emptyset$ and $B=1$:} It is clear that $A$'s children will be 1 and $\emptyset$. $B$'s children from right to left will be 11, 11-1, and 1-11. Thus $B$ has one more child than $A$, as desired. \item \textbf{Case $A,B\neq\emptyset$ and $B$ contains no -1:} It is clear from the way that the tree is constructed that $B$ will have the same number of 1's as $A$. The first of $B$'s children from right to left will be $B$ with an additional 1 appended to it. The second child of $B$ will be the same as the first with an additional -1 appended to it. This second child of $B$ will be the same as the first child of $A$ after removing the last -1 from the child of $A$. Clearly, there will be the same number of remaining siblings generated for each of the cousin nodes, as there are the same positions available for the leftmost -1 to be shifted to. Thus, $B$ has one more child than $A$. \item \textbf{Case both $A$ and $B$ contain a -1:} The leftmost -1 of $A$ will be one position further left than the leftmost -1 of $B$, due to the way $A$ is generated from $B$. Thus, after inserting the 1 -1 to generate the first children of both $A$ and $B$, there will be one additional sibling generated for the first child of $B$ compared to the first child of $A$, so $B$ has one more child than $A$. 
\end{itemize} In all of these cases, $B$ has one more child than $A$ due to its position relative to $A$, just as each node in the Catalan tree has one more child than its closest sibling to the left. This ensures the structure of the \rowname{} path{} tree generated is the same as that of the Catalan generating tree.
To show that this mapping is one-to-one, we will show that each node can only have one unique parent, and so every node must be unique on that level of the \rowname{} path{} generating tree. For any node $C$ in the tree, it is clear that it is distinct from each of its siblings due to the way the tree is constructed. To find the parent of $C$, we must perform the process of constructing the tree in reverse. If $C$ is $\emptyset$, clearly its parent is $\emptyset$. Otherwise if $C$ contains no -1's, its parent is $C$ with one of its 1's removed. There are clearly no duplicate parents possible in this case. If $C$ contains a -1, its parent can be found by removing the leftmost 1 and leftmost -1 from $C$. It is clear that any other node than this could not generate $C$ as a child, as each 1 and -1 removed from $C$ were to the left of any other -1's in $C$. Also, the parent of $C$ found with this method must generate $C$. Thus, each node in each level could only have one unique parent and could not have appeared anywhere else in that level.
To show that every \rowname{} path{} is contained in the \rowname{} path{} generating tree, suppose there exists some Catalan DPP path, $D$, of order $n$, that is not present in the $n$th level of the \rowname{} path{} tree. It is clear that $D$ cannot be $\emptyset$ since $\emptyset$ is present at each level of the \rowname{} path{} tree. Therefore, $D$ contains at least one 1 and less than or equal to $n$ 1's and could contain any legal number of -1's. It is now clear that the method used to find the hypothetical parent of $D$ as above can be validly applied to $D$. Applying this method will always produce a new node that can have this method applied to it, or $\emptyset$ as the parent of $D$. If $D$ has $d$ 1's, it has at most $(d-1)$ -1's. Also, applying the parent-finding method recursively always yields a new node with one less 1 and one less -1 or zero -1's. Applying this parent-finding method $d$ times, we will have found $\emptyset$ as an ancestor of $D$. But if $\emptyset$ is an ancestor of $D$, this would clearly have generated a path to $D$, therefore $D$ is in the \rowname{} path{} generating tree.
Thus, every \rowname{} path{} is present in its appropriate level of the \rowname{} path{} generating tree, and there are no duplicates at any level. The mapping is therefore one-to-one and onto, hence a bijection, so by Lemma~\ref{lem:cattree}, \rowname{} path{s} of order $n$ are counted by $C_{n}$. Applying Lemma~\ref{lem:bijDPPtoDPPpath}, which states that \rowname{} path{s} are in bijection with Catalan DPP{s}, we see that Catalan DPP{s} of order $n$ are also counted by $C_n$. \end{proof}
\begin{figure}
\caption{Left: The first five levels of the \rowname{} path{} generating tree, rotated for easier spacing; Right: The same tree with each node replaced by the corresponding Catalan DPP. The same structure can be seen as in Figure \ref{fig:cattree}.}
\label{fig:pathtree}
\end{figure}
\section{Discussion} \label{sec:disc} Though our proof of Theorem~\ref{lem:catrelat} relied on expressing Catalan DPPs in path form, we can now see the Catalan generating tree structure directly on Catalan DPPs. Given a Catalan DPP $a_{1,1} \ a_{1,2} \ \cdots \ a_{1,\lambda_1}$, construct its children by incrementing $a_{1,1}$ and inserting a new number between the first two numbers in all possible ways. For example, the Catalan DPP 6 4 2 2 has children, from right to left, 7 4 4 2 2, 7 5 4 2 2, and 7 6 4 2 2. A general Catalan DPP $a_{1,1} \ a_{1,2} \ \cdots \ a_{1,\lambda_1}$ will have exactly $a_{1,1}+1-a_{1,2}$ children, which will all be of the form $(a_{1,1}+1) \ x \ a_{1,2} \ \cdots \ a_{1,\lambda_1}$ for all integers $x$ such that $a_{1,1}\geq x\geq a_{1,2}$. See Figure~\ref{fig:pathtree}.
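The child-generation rule above is simple enough to state as code. The following sketch (in Python; the function name and the tuple representation are ours, not the paper's) implements the rule for a Catalan DPP with at least two parts and reproduces the 6 4 2 2 example; the base cases (the empty DPP and one-part DPPs) are not covered by this sketch.

```python
def dpp_children(dpp):
    """Children of a Catalan DPP under the generating-tree rule described
    above: increment the first part and insert a new entry x, with
    a_{1,2} <= x <= a_{1,1}, between the first two parts.

    `dpp` is a tuple of weakly decreasing parts with at least two entries;
    the list is returned in increasing order of x (right to left in the tree).
    """
    a1, a2 = dpp[0], dpp[1]
    return [(a1 + 1, x) + dpp[1:] for x in range(a2, a1 + 1)]
```

For the Catalan DPP 6 4 2 2 this produces the three children 7 4 4 2 2, 7 5 4 2 2, and 7 6 4 2 2, matching the count $a_{1,1}+1-a_{1,2}=3$.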
We hope that comparing the Catalan descending plane partitions presented in this paper with the known Catalan subsets of alternating sign matrices and totally symmetric self-complementary plane partitions (briefly discussed in Section \ref{sec:def}) will be helpful in finding the missing explicit bijections among these three sets of objects enumerated by (\ref{eq:prod}). One challenge is that, in contrast to the Catalan objects on the other two sets, it is not clear how to associate a Catalan DPP to each non-Catalan descending plane partition in a natural way. One would hope to map each DPP to a Catalan DPP such that the distribution over all descending plane partitions of order $n$ would match one of the Catalan distributions on alternating sign matrices or totally symmetric self-complementary plane partitions. This has not been achieved, but it may yet be possible.
\section*{Acknowledgments} The authors thank Kevin Dilks for help with the \url{SageMath} and \LaTeX~code used to produce some of the figures of this paper. We also thank Mi Huynh for directing our attention toward the Catalan generating tree in \cite{stanley2015catalan}. Striker was partially supported by National Security Agency Grant H98230-15-1-0041.
\end{document}
\begin{document}
\title{\bf Explicit Numerical Approximations for McKean-Vlasov Neutral Stochastic Differential Delay Equations} \begin{abstract} This paper studies numerical methods for approximating the solutions of a class of McKean-Vlasov neutral stochastic differential delay equations (MV-NSDDEs) whose drift coefficients grow super-linearly. First, we show that the solution of the MV-NSDDE exists and is unique. Then, we use a stochastic particle method, based on the propagation-of-chaos results connecting the particle system with the original MV-NSDDE, to approximate the law. Furthermore, we construct a tamed Euler-Maruyama numerical scheme for the corresponding particle system and obtain its rate of convergence. Combining propagation of chaos with the convergence rate of the numerical solution to the particle system, we obtain a convergence error between the numerical solution and the exact solution of the original MV-NSDDE in terms of the step size and the number of particles.
\end{abstract} \noindent AMS Subject Classification: 60C30, 60H10, 34K26.
\noindent Keywords: McKean-Vlasov neutral stochastic differential delay equations; Super-linear growth; Particle system; Tamed Euler-Maruyama; Strong convergence
\section{Introduction}\label{s-w} Because their coefficients depend on the current distribution, McKean-Vlasov (MV) stochastic differential equations (SDEs) are also known as distribution dependent (DD) SDEs, described by \begin{align*} \mathrm{d}S(t)=b(t,S(t),\mathcal{L}_{t}^{S})\mathrm{d}t+\sigma(t,S(t),\mathcal{L}_{t}^{S})\mathrm{d}B(t), \end{align*} where~$\mathcal{L}_{t}^{S}$~is the law (or distribution) of~$S(t)$. Pioneering work on MV-SDEs was done by McKean \cite{ MHP1966, MHP1967, MHP1975} in connection with a mathematical foundation of the Boltzmann equation. Under Lipschitz-type conditions, it is known that MV-SDEs arise as the limit of the following interacting particle system \begin{align*} \mathrm{d}S^a(t)=b\bigg(t,S^a(t),\frac{1}{\Xi}\sum^{\Xi}_{j=1}\delta_{S^j(t)}\bigg)\mathrm{d}t +\sigma\bigg(t,S^a(t),\frac{1}{\Xi}\sum^{\Xi}_{j=1}\delta_{S^j(t)}\bigg)\mathrm{d}B^{a}(t),~~~a=1,\cdots,\Xi, \end{align*} where~$\delta_{S^j(t)}$~denotes the Dirac measure at the point~$S^j(t)$, as the particle number~$\Xi$~tends to infinity \cite{MHP1966, MHP1967}. Such results are called propagation of chaos (see, e.g., \cite{LD2018}). MV-SDEs are now widely used as mathematical models in biological systems, financial engineering and physics.
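As a toy illustration of the particle approximation (not the scheme analyzed in this paper), the following Python sketch performs one Euler-Maruyama step of an interacting particle system in which the measure argument enters the coefficients only through the empirical mean, a common special case. All names and the choice of coefficients below are ours.

```python
import math
import random

def particle_step(particles, dt, b, sigma):
    """One Euler-Maruyama step of an interacting particle system.

    The law L_t^S is replaced by the empirical measure of the particles,
    which here enters b and sigma only through the empirical mean `m`.
    Each particle gets its own independent Brownian increment.
    """
    m = sum(particles) / len(particles)  # empirical measure -> empirical mean
    return [x + b(x, m) * dt + sigma(x, m) * math.sqrt(dt) * random.gauss(0.0, 1.0)
            for x in particles]
```

For instance, `b = lambda x, m: -(x - m)` pulls each particle toward the empirical mean, a simple mean-field interaction.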
The theory of MV-SDEs has developed rapidly, including ergodicity \cite{EA2019}, the Feynman-Kac formula \cite{CD2018}, Harnack inequalities \cite{WFY2018} and so on. Recently, the existence of a unique solution for MV-SDEs whose drift coefficients grow super-linearly has attracted much attention \cite{DRG2019,WFY2018}. Since MV-SDEs can rarely be solved explicitly, it is necessary to simulate them by appropriate numerical methods. The explicit Euler-Maruyama (EM) method is standard for approximating the solutions of SDEs with globally Lipschitz coefficients \cite{KPE1992, MGN1995}. However, Hutzenthaler et al. \cite{H2011} gave several counterexamples showing that the moments of order $p$ of the EM numerical solutions diverge to infinity when the coefficients of the SDE grow super-linearly. Since explicit numerical methods are easy to implement and save computational cost, different approaches have been explored to modify the EM method for nonlinear SDEs (see, e.g., \cite{HM2012, LXY2019, SS2016}); in particular, \cite{HM2012} proposed a tamed EM approximation for SDEs whose coefficients are locally Lipschitz continuous.
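To make the taming idea concrete, here is a minimal Python sketch of a tamed EM recursion in the spirit of \cite{HM2012}: the drift increment $b(x)\Delta t$ is replaced by $b(x)\Delta t/(1+\Delta t|b(x)|)$, which is bounded by $1$ in magnitude and so cannot blow up under super-linear drift. This is an illustration for a scalar SDE, not the particle scheme constructed in Section 4; all names are ours.

```python
import math
import random

def tamed_em(x0, drift, diff, dt, n_steps, seed=0):
    """Tamed Euler-Maruyama for a scalar SDE dX = drift(X) dt + diff(X) dB.

    The drift increment is tamed as drift(x)*dt / (1 + dt*|drift(x)|), so each
    deterministic step has magnitude < 1 even for super-linear drift such as
    drift(x) = -x**3, where the plain EM scheme can diverge.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        b = drift(x)
        x += b * dt / (1.0 + dt * abs(b)) + diff(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

With `drift = lambda x: -x**3` and zero diffusion, the iterates decrease monotonically toward zero from any positive start, staying bounded.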
Time lags occur frequently in the real world. Stochastic differential delay equations (SDDEs) and stochastic differential delay equations with a neutral term (NSDDEs) are widely used in many fields to model a broad variety of natural and artificial systems (see, e.g., \cite{BBS2010}). In this article, we are interested in the approximation of MV-NSDDEs whose drift coefficient satisfies a super-linear growth condition. Compared with the approximation of NSDDEs, a numerical method for MV-NSDDEs also needs to approximate the law at each grid point. Recently, a few works have studied numerical methods for MV-SDEs (see, e.g., \cite{AF2002, JX2019, BK20, TD1996, BM1997}). In particular, using the tamed EM scheme and the theory of propagation of chaos, dos Reis et al. \cite{B21, GSG2018} established the strong convergence of the approximate solutions for MV-SDEs, and \cite{RW19} studied the least squares estimator of a class of path-dependent MV-SDEs by adopting a tamed EM algorithm. However, as far as we know, there are few results on the numerical approximations of MV-NSDDEs. Therefore, the main goal of this paper is to establish a strong convergence theory for MV-NSDDEs using the tamed EM method. The neutral-type delay and the law dependence are the key obstacles in the numerical approximation.
Following the ideas of \cite{B21,GSG2018, RW19}, we develop a tamed EM scheme for the particle system associated with the MV-NSDDE, show the strong convergence of the numerical solutions to the exact solutions of the particle system, and further estimate the convergence rate. Finally, using the propagation of chaos result, we obtain the strong convergence for MV-NSDDEs together with its rate.
The rest of this paper is organized as follows. Section 2 introduces some notation and preliminaries. Section 3 establishes the existence of a unique solution for the MV-NSDDE and recalls the propagation-of-chaos result for the particle system. Section 4 constructs the tamed EM scheme for the particle system and proves the moment boundedness of the corresponding approximate solutions; we then estimate the strong convergence rate between the numerical and exact solutions of the particle system, and deduce the strong convergence for MV-NSDDEs. In Section 5, a numerical example is provided to demonstrate the validity of our algorithm.
\section{Notations and Preliminaries}\label{n-p}
Let $(\Omega,~\mathcal{F},~\mathbb{P})$ be a complete probability space with a filtration~$\{\mathcal{F}_{t}\}_{t\geq0}$~satisfying the usual conditions (namely, it is increasing and right continuous, and~${\mathcal{F}}_{0}$~contains all $\mathbb{P}$-null sets). Let $\mathbb{N}=\{1,2,\cdots\}$~and~$d,~m\in\mathbb{N}$. Let~$\{B(t)\}_{t\geq0}$~be a standard~$m$-dimensional Brownian motion on this probability space. Let~$\mathbb{R}_+=\{x\in\mathbb{R}: x\geq0\}$, let~$\mathbb{R}^{d}$~be the~$d$-dimensional Euclidean space, and let $\mathbb{R}^{d\times m}$~be the space of real~$d\times m$ matrices. For~$x\in\mathbb{R}^{d}$, we denote by~$|x|$~its Euclidean norm. For any matrix~$A$, we denote its transpose by $A^{T}$ and define its trace norm by $|A|=\sqrt{\mathrm{trace}(AA^{T})}$. Moreover, for any $x,~y\in \mathbb{R}$, we use the notation~$x\wedge y=\min\{x,y\}$~and~$x\vee y=\max\{x,y\}$.
Let $\emptyset$~denote the empty set and $\inf\emptyset=\infty$. For any~$x\in\mathbb{R}$, we use $\lfloor x\rfloor$~as the integer part of~$x$.
For any $q>0$, let~$L^{q}=L^{q}(\Omega;\mathbb{R}^{d})$ be the space of~$\mathbb{R}^{d}$-valued random variables $Z$ satisfying $\mathbb{E}[|Z(\omega)|^q]<+\infty$. We also let $\mathcal{L}^{Z}$ be the probability law (or distribution) of a random variable~$Z$. Let $\delta_{x}(\cdot)$~denote the Dirac delta measure concentrated at a point~$x\in\mathbb{R}^{d}$ and $\mathcal{P}(\mathbb{R}^{d})$~be a collection of probability measures on~$\mathbb{R}^{d}$. For $q\geq1$, we let $\mathcal{P}_q(\mathbb{R}^{d})$ be a collection of probability measures on~$\mathbb{R}^{d}$ with finite $q$th moments, and define \begin{align*}
W_q(\mu):=\big(\int_{\mathbb{R}^{d}}|x|^q\mu(\mathrm{d}x)\big)^{\frac{1}q},~ ~~~ \forall \mu\in\mathcal{P}_q(\mathbb{R}^{d}). \end{align*}
\begin{lemma}\label{defn2.1} {\rm(Wasserstein distance, \cite[pp.106-107]{VC2008})} Let~$q\geq1$. Define $$
\mathbb{W}_q(\mu,\nu):=\inf_{\pi \in \mathcal{D}(\mu,\nu)}\bigg\{\int_{\mathbb{R}^{d}}|x-y|^q\pi(\mathrm{d}x,\mathrm{d}y)\bigg\}^{\frac{1}q}, ~\mu,\nu\in\mathcal{P}_q(\mathbb{R}^{d}), $$ where~$\mathcal{D}(\mu,\nu)$~denotes the family of all couplings for~$\mu$~and~$\nu$. Then~$\mathbb{W}_q$~is a distance on~$\mathcal{P}_q(\mathbb{R}^{d})$. \end{lemma}
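In dimension one, the infimum in the definition of $\mathbb{W}_q$ can be computed exactly for empirical measures with equally many atoms: the optimal coupling pairs the sorted samples (monotone rearrangement). The following Python helper (its name is ours) illustrates this for the empirical measures $\frac{1}{n}\sum_i\delta_{x_i}$ and $\frac{1}{n}\sum_i\delta_{y_i}$.

```python
def wasserstein_q_empirical(xs, ys, q=2):
    """W_q distance between two empirical measures on the real line with the
    same number of atoms.  In one dimension the optimal coupling in the
    definition of W_q is the monotone one, pairing the i-th smallest x with
    the i-th smallest y, so the infimum over couplings is computed directly.
    """
    assert len(xs) == len(ys)
    n = len(xs)
    return (sum(abs(x - y) ** q for x, y in zip(sorted(xs), sorted(ys))) / n) ** (1.0 / q)
```

Taking all $y_i=0$ (the empirical measure collapses to $\delta_0$) returns the moment functional $W_q$ of the empirical measure, consistent, for $q=2$, with Lemma~\ref{l3.1} below.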
\section{ MV-NSDDEs}\label{s-c}
Throughout this paper, let~$\tau>0$~and~$\mathcal{C}([-\tau,0];\mathbb{R}^{d})$~be the space of continuous functions~$\phi$ from~$[-\tau,0]$~to~$\mathbb{R}^{d}$~and~$\|\phi\|=\sup\limits_{-\tau\leq\theta\leq0}|\phi(\theta)|$. For any~$q\geq 0$, let $L^q_{\mathcal{F}_{0}}([-\tau,0];\mathbb{R}^{d})$ denote the set of~$\mathcal{F}_{0}$-measurable~$\mathcal{C}([-\tau,0];\mathbb{R}^{d})$-valued random variables $\xi$ with $\mathbb{E}[\|\xi\|^q]<\infty$. For any given $T\in(0,\infty)$, consider the MV-NSDDE \begin{equation} \label{eq3.1} \mathrm{d}\Big(S(t)-D(S(t-\tau))\Big)=b(S(t),S(t-\tau),\mathcal{L}_{t}^{S})\mathrm{d}t+\sigma(S(t),S(t-\tau),\mathcal{L}_{t}^{S})\mathrm{d}B(t),~~~t\in[0,T], \end{equation} where $\mathcal{L}_{t}^{S}$ is the law of~$S(t)$. $ D:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d},~b:\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\rightarrow\mathbb{R}^{d},~\sigma:\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\rightarrow\mathbb{R}^{d\times m} $ are Borel-measurable, and~$b,~\sigma$~are continuous on~$\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})$.
Define $S_t(\theta)=S(t+\theta)$, $-\tau\leq \theta\leq 0$. Then $\{S_t\}_{t\in[0,T]}$ is thought of as a $\mathcal{C}([-\tau,0];\mathbb{R}^{d})$-valued stochastic process. Assume that the initial condition satisfies
\begin{equation} \label{eq3.2}
S_{0} =\xi\in L^{p}_{\mathcal{F}_{0}}([-\tau,0];\mathbb{R}^{d}),~~\hbox{and}~~ \mathbb{E}\big[|\xi(t)-\xi(r)|^{p}\big]\leq K_0|t-r|^{\frac{p}{2}},~~~-\tau\leq r,t\leq0, \end{equation} for some $p\geq 2$ and $K_0>0$. For simplicity, unless otherwise stated,~$C$~denotes a positive constant whose value may vary from place to place in this paper. \begin{defn}\label{dfn3.1}$\mathrm{(MV-NSDDE~strong~solution~and~uniqueness)}$~~An~$\mathbb{R}^{d}$-valued stochastic process~$\{S(t)\}_{t\in[-\tau,T]}$~is called a strong solution to~$(\ref{eq3.1})$~if it satisfies the following conditions: \begin{itemize} \item[$(1)$] it is continuous and~$\{S(t)\}_{t\in[0,T]}$~is~$\{\mathcal{F}_{t}\}$-adapted;
\item[$(2)$] $\mathbb{E}\Big[\int_{0}^{T} |b(S(t),S(t-\tau),\mathcal{L}_{t}^{S})|\mathrm{d}t\Big]<+\infty,~\mathbb{E}\Big[\int_{0}^{T}|\sigma(S(t),S(t-\tau),\mathcal{L}_{t}^{S})|^{2}\mathrm{d}t\Big]<+\infty$; \item[$(3)$] $S_0=\xi$,~and for any $t\in[0, T]$, \begin{align*} S(t)-D(S(t-\tau))=&\xi(0)-D(\xi(-\tau))+\int_{0}^{t}b(S(r),S(r-\tau),\mathcal{L}_{r}^{S})\mathrm{d}r \\ &+\int_{0}^{t}\sigma(S(r),S(r-\tau),\mathcal{L}_{r}^{S})\mathrm{d}B(r)~~~ ~ \hbox{a.s.} \end{align*} \end{itemize} We say that the solution~$\{S(t)\}_{t\in[-\tau,T]}$~is unique if for any other solution~$\{\overline{S}(t)\}_{t\in[-\tau,T]}$, $$ \mathbb{P}\big\{S(t)=\overline{S}(t)~\text{for all}~-\tau\leq t\leq T\big\}=1. $$ \end{defn}
To prove the main results of this paper, we prepare several lemmas. \begin{lemma}\label{l3.1}{\rm\cite{DRG2019} }~~For any~$\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})$, $\mathbb{W}_{2}(\mu,\delta_{0})=W_{2}(\mu)$. \end{lemma} \begin{lemma}\label{l3.2}{\rm\cite[pp.362, Theorem 5.8]{CR2018}}~~For~$\{Z_{n}\}_{n\geq1}$~a sequence of independent identically distributed (i.i.d. for short) random variables in~$\mathbb{R}^{d}$~with common distribution~$\mu\in\mathcal{P}(\mathbb{R}^{d})$~and a constant~$ \Xi\in\mathbb{N}$, we define the empirical measure $ \overline{\mu}^{\Xi}:=\frac{1}{\Xi}\sum\limits^{\Xi}_{j=1}\delta_{Z_{j}}. $ If~$\mu\in\mathcal{P}_{q}(\mathbb{R}^{d})$~with~$q>4$, then there exists a constant~$C=C(d,q,W_{q}(\mu))$~such that for each~$\Xi\geq2$, \begin{align*} \mathbb{E}\big[\mathbb{W}^{2}_{2}(\overline{\mu}^{\Xi},\mu)\big]\leq C \left\{ \begin{array}{lll} \Xi^{-1/2},&~~~1\leq d<4,\\ \Xi^{-1/2}\log(\Xi),&~~~d=4,\\ \Xi^{-2/d},&~~~ 4<d. \end{array} \right. \end{align*} \end{lemma}
For the theoretical results of this paper including the existence of a unique solution and convergence of numerical approximation for~$(\ref{eq3.1})$-$(\ref{eq3.2})$, we impose the following assumptions.
\begin{assp}\label{a1} $D(0)=0$. And there exist positive constants~ $\lambda<1$ and $K_i~ (i=1,2,3)$ such that \begin{align*}
&|D(x_{1})-D(x_{2})|\leq\lambda|x_{1}-x_{2}|,~ ~ |b(0,0,\mu)|\vee|\sigma(0,0,\mu)|\leq K_1\big(1+W_{2}(\mu)\big),\\ &(x_{1}-D(y_{1})-x_{2}+D(y_{2}))^{T} \big(b(x_{1},y_{1},\mu_{1})-b(x_{2},y_{2},\mu_{2})\big)\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\leq K_2\big(|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2}+\mathbb{W}^{2}_{2}(\mu_{1},\mu_{2})\big)\\ \intertext{and}
&|\sigma(x_{1},y_{1},\mu_{1})-\sigma(x_{2},y_{2},\mu_{2})|^2
\leq K_3\big(|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2}+\mathbb{W}^{2}_{2}(\mu_{1},\mu_{2})\big) \end{align*} hold for any $x_{1},~x_{2},~y_{1},~y_{2}\in\mathbb{R}^{d}$,~$\mu,~\mu_{1},~\mu_{2}\in\mathcal{P}_{2}(\mathbb{R}^{d})$.
\end{assp} \begin{assp}\label{a2} There exist positive constants $K_4 $~and~$c$~such that $$
|b(x_{1},y_{1},\mu)-b(x_{2},y_{2},\mu)|\leq K_4\big(1+|x_{1}|^{c}+|y_{1}|^{c}+|x_{2}|^{c}+|y_{2}|^{c}\big)\big(|x_{1}-x_{2}|+|y_{1}-y_{2}|\big) $$ holds for any~$x_{1},~x_{2},~y_{1},~y_{2}\in\mathbb{R}^{d}$, and~$\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})$. \end{assp}
\begin{remark}\label{r3.3} Under Assumption~$\ref{a1}$, by virtue of the elementary inequality and Lemma~$\ref{l3.1}$, it is easy to show that \begin{align} \label{eq3.3}
|\sigma(x,y,\mu)|^2&\leq 2\big|\sigma(x,y,\mu)-\sigma(0,0,\delta_{0})\big|^2+2|\sigma(0,0,\delta_{0})|^2\nonumber\\
&\leq 2K_3\big(|x|^2+|y|^2+\mathbb{W}_{2}^2(\mu,\delta_{0})\big)+2K^{2}_1(1+W_{2}(\delta_{0}))^2\nonumber\\
&\leq 2 (K_3+K^{2}_1)\big(1+|x|^2+|y|^2+W_{2}^2(\mu)\big). \end{align}
Additionally, one obtains \begin{align} \label{eq3.4}
|D(x)|=|D(x)-D(0)|\leq\lambda|x|. \end{align} This, together with~$\mathrm{Young}$'s~inequality, implies that \begin{align}\label{eq3.5} \big(x-D(y)\big)^{T}b(x,y,\mu)&=(x-D(y)-0+D(0))^{T}\big(b(x,y,\mu) -b(0,0,\delta_{0})+b(0,0,\delta_{0})\big)\nonumber\\
&\leq K_2 \big( |x|^{2}+|y|^{2}+W^{2}_{2}(\mu) \big)+ (|x|+|D(y)|)|b(0,0,\delta_{0})|\nonumber\\
&\leq(K_1+ K_2) \big( 1+|x|^{2}+|y|^{2}+W^{2}_{2}(\mu) \big). \end{align} Moreover, Assumptions \ref{a1} and \ref{a2} also yield that \begin{align}\label{eq3.6}
|b(x,y,\mu)|&\leq|b(x,y,\mu)-b(0,0,\mu)|+|b(0,0,\mu) | \nonumber\\
&\leq K_{4}\big(1+|x|^{c}+|y|^{c}\big)\big(|x|+|y|\big)+K_{1}(1+W_{2}(\mu)) \nonumber\\
&\leq4(K_{1}+K_{4})\big(1+|x|^{c+1}+|y|^{c+1}+W_{2}(\mu)\big). \end{align} \end{remark}
\begin{lemma}\label{l3.4}{\rm \cite[pp.211, Lemma 4.1]{MX2007}}~~Let~$q>1,~\varepsilon>0$~and~$x,y\in\mathbb{R}$. Then $$
|x+y|^{q}\leq(1+\varepsilon^{\frac{1}{q-1}})^{q-1}(|x|^{q}+\frac{|y|^{q}}{\varepsilon}). $$ Especially for $q=p, ~\varepsilon=( {(1-\lambda)}/{\lambda})^{p-1}$, where $p\geq 2$ and $0<\lambda<1$ are given in \eqref{eq3.2} and Assumption \ref{a1}, respectively, one can derive the following inequality \begin{align}\label{eq3.7}
|x+y|^{p}\leq \frac{1}{\lambda ^{p-1}} |x|^{p}+\frac{1}{(1-\lambda )^{p-1}} |y|^{p} . \end{align} \end{lemma}
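Lemma~\ref{l3.4} and its specialization~$(\ref{eq3.7})$ are elementary enough to spot-check numerically. The following Python snippet (the helper name is ours) evaluates the right-hand side of the lemma and verifies the specialized inequality $|x+y|^{p}\leq \lambda^{1-p}|x|^{p}+(1-\lambda)^{1-p}|y|^{p}$ for $p=2$ on a small grid.

```python
def lemma_bound(x, y, q, eps):
    """Right-hand side of |x+y|^q <= (1+eps**(1/(q-1)))**(q-1)*(|x|**q+|y|**q/eps)."""
    return (1 + eps ** (1.0 / (q - 1))) ** (q - 1) * (abs(x) ** q + abs(y) ** q / eps)

# Spot-check the general bound (q = 2, eps = 1) and the specialization (3.7)
# with p = 2 and lam = 0.3, i.e. eps = ((1-lam)/lam)**(p-1).
lam = 0.3
for x in (-2.0, 0.5, 3.0):
    for y in (-1.5, 0.0, 2.0):
        assert abs(x + y) ** 2 <= lemma_bound(x, y, 2, 1.0) + 1e-12
        assert abs(x + y) ** 2 <= abs(x) ** 2 / lam + abs(y) ** 2 / (1 - lam) + 1e-12
```

For $q=2$, $\varepsilon=1$ the bound reduces to the familiar $(x+y)^2\leq 2(x^2+y^2)$, with equality at $x=y$.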
\subsection{Existence and Uniqueness of Solutions}\label{s3.1} Now we begin to investigate the existence and uniqueness of the exact solution to~$(\ref{eq3.1})$-$(\ref{eq3.2})$. \begin{theorem}\label{th3.5} Under Assumption~$\ref{a1}$, there is a unique strong solution to the equation~$(\ref{eq3.1})$-$(\ref{eq3.2})$. Moreover, for any $T>0$, \begin{align}\label{eq3.8}
\mathbb{E}\big[\sup_{0\leq t\leq T}|S(t)|^{p}\big]\leq V(1+T)e^{V(1+T)T}, \end{align}
where~$V$ is a positive constant depending only on $p,~K_{1},~K_{2},~K_{3}$ and $\mathbb{E}[\|\xi\|^{p}]$. \end{theorem} \begin{proof} We borrow the techniques for MV-SDEs from \cite{WFY2018}~and \cite{HY2020}. The proof, which is rather technical, is divided into three steps.\\ \underline{Step 1.} For~$t\in[0,T]$, we define $$
S^{(0)}(\theta)=\xi(\theta),~~~\theta\in[-\tau,0];~~~~S^{(0)}(t)\equiv\xi(0). $$ Then one observes that~$\mathcal{L}_{t}^{S^{(0)}}\equiv\mathcal{L}^{\xi(0)}$. For~$i\geq1$, let~$S^{(i)}(t) $~solve NSDDE \begin{align} \label{eq3.9}
\mathrm{d}\Big(S^{(i)}(t)-D(S^{(i)}(t-\tau))\Big)=& b(S^{(i)}(t),S^{(i)}(t-\tau),\mathcal{L}_{t}^{S^{(i-1)}})\mathrm{d}t\nonumber\\ &+\sigma(S^{(i)}(t),S^{(i)}(t-\tau),\mathcal{L}_{t}^{S^{(i-1)}})\mathrm{d}B(t),~~~~t\in[0,T], \end{align} and the initial data is given by \begin{align} \label{eq3.10}
S^{(i)} (\theta)=\xi(\theta),~~~\theta\in[-\tau,0], \end{align} where~$\mathcal{L}_{t}^{S^{(i-1)}}$~denotes the law of~$ S^{(i-1)}(t)$. Now, we claim that~$(\ref{eq3.9})$-$(\ref{eq3.10})$~has a unique solution and the solution has the property that $$
\mathbb{E}\big[\sup\limits_{0\leq t\leq T}|S^{(i)}(t)|^{p}\big]<\infty. $$ For $i=1$, it follows from~\cite{JSY2017}~that under Assumption~$\ref{a1}$, the NSDDE~$(\ref{eq3.9})$-$(\ref{eq3.10})$~admits a unique regular solution~$S^{(1)}(t)$. Moreover, by $(\ref{eq3.7})$ and Assumption \ref{a1}, we have \begin{align*}
|S^{(1)}(t)|^{p}&=|D(S^{(1)}(t-\tau))+S^{(1)}(t)-D(S^{(1)}(t-\tau))|^{p}\nonumber\\
&\leq\frac{1}{ \lambda^{p-1}}|D(S^{(1)}(t-\tau))|^{p} +\frac{1}{(1-\lambda)^{p-1}}|S^{(1)}(t)-D(S^{(1)}(t-\tau))|^{p}\nonumber\\ &\leq
\lambda| S^{(1)}(t-\tau) |^{p} +\frac{1}{(1-\lambda)^{p-1}}|S^{(1)}(t)-D(S^{(1)}(t-\tau))|^{p} \end{align*} for any $t\in[0,T]$. This implies that \begin{align*}
\sup_{0\leq r\leq t}|S^{(1)}(r)|^{p}&\leq
\lambda\|\xi\|^{p}+\lambda\sup_{0\leq r\leq t}|S^{(1)}(r)|^{p}+\frac{1}{(1-\lambda)^{p-1}}\sup_{0\leq r\leq t}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p}. \end{align*} Therefore, we have \begin{align} \label{eq3.12}
\sup_{0\leq r\leq t}|S^{(1)}(r)|^{p} &\leq
\frac{\lambda}{1-\lambda}\|\xi\|^{p}+\frac{1}{(1-\lambda)^{p}}\sup_{0\leq r\leq t}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p}. \end{align*} The It\^{o} formula leads to \begin{align*}
|S^{(1)}(t)-D(S^{(1)}(t-\tau))|^{p}\leq |\xi(0)-D(\xi(-\tau))|^{p}+J_{1}(t)+J_{2}(t)+J_{3}(t), \end{align*} where \begin{align*}
J_{1}(t)&=p\int_{0}^{t}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p-2}\\ &~~~~~~~~\times\Big(S^{(1)}(r)-D(S^{(1)}(r-\tau))\Big)^{T}b(S^{(1)}(r),S^{(1)}(r-\tau),\mathcal{L}_{r}^{S^{(0)}})\mathrm{d}r, \end{align*} \begin{align*}
J_{2}(t)&=\frac{p(p-1)}{2}\int_{0}^{t}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p-2}\big|\sigma(S^{(1)}(r),S^{(1)}(r-\tau),\mathcal{L}_{r}^{S^{(0)}})\big|^{2}\mathrm{d}r \end{align*} and \begin{align*}
J_{3}(t)&=p\int_{0}^{t}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p-2}\\ &~~~~~~~~\times\big(S^{(1)}(r)-D(S^{(1)}(r-\tau))\big)^{T}\sigma(S^{(1)}(r),S^{(1)}(r-\tau),\mathcal{L}_{r}^{S^{(0)}})\mathrm{d}B(r). \end{align*} For any positive integer~$N$, we define the stopping time $$
\varsigma^{(1)}_{N}=T\wedge\inf\big\{t\in[0,T]:~|S^{(1)}(t)|\geq N\big\}. $$ Obviously, $\varsigma^{(1)}_{N}\uparrow T$~a.s.~as~$N\rightarrow\infty$. Then, one can write that \begin{align} \label{eq3.14}
&\mathbb{E}\Big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p}\Big]\nonumber\\
&\leq \mathbb{E}\big[\big|\xi(0)-D(\xi(-\tau))\big|^{p}\big]+\mathbb{E}\big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}\big(J_{1}(r)+J_{2}(r)\big)\big]+\mathbb{E}\big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}J_{3}(r)\big]. \end{align} By~$(\ref{eq3.3})$~and~$(\ref{eq3.5})$, we obtain that \begin{align*} &\mathbb{E}\Big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}\big(J_{1}(r)+J_{2}(r)\big)\Big]\nonumber\\
&\leq C\mathbb{E}\Big[\int_{0}^{t\wedge\varsigma^{(1)}_{N}}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p-2}\nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~\times\big(1+|S^{(1)}(r)|^{2}+|S^{(1)}(r-\tau)|^{2}+W_{2}^{2}(\mathcal{L}_{r}^{S^{(0)}})\big)\mathrm{d}r\Big]\nonumber\\
&\leq C\mathbb{E}\Big[\int_{0}^{t\wedge\varsigma^{(1)}_{N}}\big(|S^{(1)}(r)|^{p-2}+|D(S^{(1)}(r-\tau))|^{p-2}\big)\nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~\times\big(1+|S^{(1)}(r)|^{2}+|S^{(1)}(r-\tau)|^{2}+\mathbb{E}[|\xi(0)|^2]\big)\mathrm{d}r\Big]. \end{align*} Using Young's inequality and~$\mathrm{H\ddot{o}lder}$'s inequality, we derive that \begin{align}\label{eq3.15} \mathbb{E}\big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}(J_{1}(r)+J_{2}(r))\big]&\leq C\mathbb{E}\Big[\int_{0}^{t\wedge\varsigma^{(1)}_{N}}
\big(1+|S^{(1)}(r)|^{p}+|S^{(1)}(r-\tau)|^{p}+\mathbb{E}[|\xi(0)|^p]\big)\mathrm{d}r\Big]\nonumber\\
&\leq C\Big(\mathbb{E}\Big[\int_{0}^{t\wedge\varsigma^{(1)}_{N}}\big(|S^{(1)}(r)|^{p}+|S^{(1)}(r-\tau)|^{p}
\big)\mathrm{d}r\Big]+1\Big)\nonumber\\
&\leq C\Big(\mathbb{E}\Big[\int_{0}^{t\wedge\varsigma^{(1)}_{N}}|S^{(1)}(r)|^{p}\mathrm{d}r\Big]
+\int_{-\tau}^{0}\mathbb{E}\big[\|\xi\|^{p}\big]\mathrm{d}r+1\Big) \nonumber\\
&\leq C\Big(\int_{0}^{t}\mathbb{E}\big[\sup_{0\leq v\leq r\wedge\varsigma^{(1)}_{N}}|S^{(1)}(v)|^{p}\big]\mathrm{d}r+1\Big). \end{align} The application of the Burkholder-Davis-Gundy (BDG) inequality, Young's inequality and~$\mathrm{H\ddot{o}lder}$'s inequality yields \begin{align}\label{eq3.16} &\mathbb{E}\big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}J_{3}(r)\big]\nonumber\\
&\leq C\mathbb{E}\Big[\Big(\int_{0}^{t\wedge\varsigma^{(1)}_{N}}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{2p-2}
\big|\sigma(S^{(1)}(r),S^{(1)}(r-\tau),\mathcal{L}_{r}^{S^{(0)}})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\Big]\nonumber\\
&\leq C\mathbb{E}\Big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}\big|S^{(1)}(r)\!-\!D(S^{(1)}(r-\tau))\big|^{p-1}
\big(\int_{0}^{t\wedge\varsigma^{(1)}_{N}}\big|\sigma(S^{(1)}(r),S^{(1)}(r\!-\!\tau),\mathcal{L}_{r}^{S^{(0)}})\big|^{2}\mathrm{d}r\big)^{\frac{1}{2}}\Big]\nonumber\\
&\leq\frac{1}{2}\mathbb{E}\Big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p}\Big]\nonumber\\
&~~~~+C\mathbb{E}\Big[\Big(\int_{0}^{t\wedge\varsigma^{(1)}_{N}}\big(1+|S^{(1)}(r)|^{2}+|S^{(1)}(r-\tau)|^{2}+W^{2}_{2}(\mathcal{L}_{r}^{S^{(0)}})\big)\mathrm{d}r\Big)^{\frac{p}{2}}\Big]\nonumber\\
&\leq\frac{1}{2}\mathbb{E}\Big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}\big|S^{(1)}(r)\!-\!D(S^{(1)}(r\!-\!\tau))\big|^{p}\!\Big]\!+\!C\Big(\int_{0}^{t}\mathbb{E}\Big[\sup_{0\leq v\leq r\wedge\varsigma^{(1)}_{N}}\!|S^{(1)}(v)|^{p}\Big]\mathrm{d}r\!+\!1\Big). \end{align} Inserting~$(\ref{eq3.15})$~and~$(\ref{eq3.16})$ into $(\ref{eq3.14})$ yields \begin{align*}
\mathbb{E}\Big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}\big|S^{(1)}(r)-D(S^{(1)}(r-\tau))\big|^{p}\Big]\leq C \int_{0}^{t}\mathbb{E}\Big[\sup_{0\leq v\leq r\wedge\varsigma^{(1)}_{N}}|S^{(1)}(v)|^{p}\Big]\mathrm{d}r+C , \end{align*} which together with~$(\ref{eq3.12})$~and the Gronwall inequality implies \begin{align*}
\mathbb{E}\big[\sup_{0\leq r\leq t\wedge\varsigma^{(1)}_{N}}|S^{(1)}(r)|^{p}\big]\leq C. \end{align*} In view of Fatou's lemma, we can deduce that \begin{align} \label{eq3.17}
\mathbb{E}\big[\sup_{0\leq r\leq t}|S^{(1)}(r)|^{p}\big]\leq C, ~~~~~0\leq t\leq T, \end{align}
holds for~$i=1$. Next, assume that the assertion $$
\mathbb{E}[\sup\limits_{0\leq t\leq T}|S^{(i)}(t)|^{p}]<\infty $$ holds for~$i=k$. By replacing~$(S^{(1)},\mathcal{L}^{S^{(0)}},S^{(0)})$ with $(S^{(k+1)},\mathcal{L}^{S^{(k)}},S^{(k)})$ in the above argument and repeating the procedure, we obtain that the assertion also holds
for~$i=k+1$.
\underline{Step 2.} We shall prove that both the sequence $\{S^{(i)}(\cdot)\}$ and the laws $\{\mathcal{L}^{S^{(i)}}\}$ converge. To this end, we show that \begin{align}\label{eq3.18}
\mathbb{E}\big[\sup_{0\leq r\leq T_0}|S^{(i+1)}(r)-S^{(i)}(r)|^{p}\big]\leq Ce^{-i},~~~~\hbox{for some} ~ T_0\in (0, T] . \end{align} Due to~$(\ref{eq3.7})$ and Assumption \ref{a1} one observes \begin{align*}
|S^{(i+1)}(t)-S^{(i)}(t)|^{p}
&\leq \lambda\big|S^{(i+1)}(t -\tau)-S^{(i)}(t-\tau)\big|^{p}
+\frac{1}{(1-\lambda)^{p-1}}|\Gamma^{(i)}(t)|^{p} , \end{align*} where $\Gamma^{(i)}(t)= S^{(i+1)}(t)-D(S^{(i+1)}(t-\tau))-S^{(i)}(t)+D(S^{(i)}(t-\tau)) $, which implies \begin{align*}
&\sup_{0\leq r\leq t}\big|S^{(i+1)}(r)-S^{(i)}(r)\big|^{p}\nonumber\\
&\leq\lambda\sup_{-\tau\leq r\leq 0}\big|S^{(i+1)}(r)\!-\!S^{(i)}(r)\big|^{p}\!+\!\lambda\sup_{0\leq r\leq t}\big|S^{(i+1)}(r)\!-\!S^{(i)}(r)\big|^{p}
+\frac{1}{(1-\lambda)^{p-1}}\sup_{0\leq r\leq t}|\Gamma^{(i)}(r)|^{p}. \end{align*} Noting that $S^{(i+1)}(t)$ and $S^{(i)}(t)$ have the same initial value, we derive that \begin{align}\label{eq3.19}
\sup_{0\leq r\leq t}\big|S^{(i+1)}(r)-S^{(i)}(r)\big|^{p}
\leq\frac{1}{(1-\lambda)^{p}}\sup_{0\leq r\leq t}|\Gamma^{(i)}(r)|^{p}. \end{align*} Applying the It\^{o} formula, we have \begin{align}\label{eq3.20}
\mathbb{E}\big[\sup_{0\leq s\leq t} |\Gamma^{(i)}(s)|^{p}\big]\leq H_{1}+H_{2} , \end{align} where \begin{align*}
H_{1}&=p\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r}|\Gamma^{(i)}(v)|^{p-2}\\ &~~~~~~~~\times\Big\{\big(\Gamma^{(i)}(v)\big)^{T}\Big(b(S^{(i+1)}(v),S^{(i+1)}(v\!-\tau),\mathcal{L}_{v}^{S^{(i)}})\!-b(S^{(i)}(v),S^{(i)}(v\!-\tau),\mathcal{L}_{v}^{S^{(i-1)}})\Big)\\
&~~~~~~~~~~~~+\!\frac{p-1}{2}\Big|\sigma(S^{(i+1)}(v)\!,S^{(i+1)}(v\!-\!\tau),\!\mathcal{L}_{v}^{S^{(i)}})
\!-\!\sigma(S^{(i)}(v),S^{(i)}(v\!-\!\tau),\mathcal{L}_{v}^{S^{(i-1)}})\Big|^{2}\Big\}\mathrm{d}v\bigg]\\ \end{align*}
and \begin{align*}
H_{2}&=p\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r}|\Gamma^{(i)}(v)|^{p-2}\big(\Gamma^{(i)}(v)\big)^{T}\\ &~~~~~~~~\times \Big(\sigma(S^{(i+1)}(v),S^{(i+1)}(v-\tau),\mathcal{L}_{v}^{S^{(i)}}) -\sigma(S^{(i)}(v),S^{(i)}(v-\tau),\mathcal{L}_{v}^{S^{(i-1)}})\Big)\mathrm{d}B(v)\bigg]. \end{align*} It follows from Assumption~$\ref{a1}$, Young's inequality and~$\mathrm{H\ddot{o}lder}$'s~inequality that \begin{align}\label{eq3.21}
H_{1}&\leq C\mathbb{E}\bigg[ \int_{0}^{t}|\Gamma^{(i)}(r)|^{p-2}\nonumber\\
&~~~~~~~~\times\Big(\big|\!S^{(i+1)}(r)\!-\!S^{(i)}(r)\big|^{2}\!+\!\big|S^{(i+1)}(r\!-\!\tau)\!-S^{(i)}(r\!-\!\tau)\big|^{2}\!+\!\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{(i)}},\!\mathcal{L}_{r}^{S^{(i-1)}})\Big)\mathrm{d}r\!\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\int_{0}^{t}\bigg\{\Big|\big(S^{(i+1)}(r)\!-\!S^{(i)}(r)\big)\!-\!\big(D(S^{(i+1)}(r\!-\!\tau))\!-\!D(S^{(i)}(r\!-\!\tau))\big)\Big|^{p} \nonumber\\
&~~~~~~~~+\!\Big(\!|S^{(i+1)}(r)\!-\!S^{(i)}(r)|^{2}
\!+\!|S^{(i+1)}(r\!-\!\tau)\!-\!S^{(i)}(r\!-\!\tau)|^{2} \!+\!\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{(i)}},\!\mathcal{L}_{r}^{S^{(i-1)}}) \Big)^{\frac{p}{2}}\!\bigg\}\mathrm{d}r\!\bigg]\nonumber\\
&\leq C\int_{0}^{t}\mathbb{E}\Big[\sup_{0\leq v\leq r}\big|S^{(i+1)}(v)-S^{(i)}(v)\big|^{p}\Big]\mathrm{d}r
+Ct\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S^{(i)}(r)-S^{(i-1)}(r)\big|^{p}\Big] . \end{align} Furthermore, using the BDG inequality one can see that \begin{align}\label{eq3.22}
H_{2} \leq& C\mathbb{E}\Big[\Big(\int_{0}^{t}|\Gamma^{(i)}(r)|^{2p-2}
\big|\sigma(S^{(i+1)}(r),S^{(i+1)}(r-\tau),\mathcal{L}_{r}^{S^{(i)}})\nonumber\\
& ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -\sigma(S^{(i)}(r),S^{(i)}(r-\tau),\mathcal{L}_{r}^{S^{(i-1)}})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\Big]\nonumber\\
\leq &C\mathbb{E}\Big[\sup_{0\leq r\leq t}|\Gamma^{(i)}(r)|^{p-1}
\Big(\int_{0}^{t}
\big|\sigma(S^{(i+1)}(r),S^{(i+1)}(r-\tau),\mathcal{L}_{r}^{S^{(i)}})\nonumber\\
& ~~~~~~~~~~~~~~~~~~~~~~~~~~~-\sigma(S^{(i)}(r),S^{(i)}(r-\tau),\mathcal{L}_{r}^{S^{(i-1)}})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\Big]\nonumber\\
\leq&\frac{1}{2}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{(i)}(r)|^{p}\big] +C\mathbb{E}\Big[\int_{0}^{t}\Big(\big|S^{(i+1)}(r)-S^{(i)}(r)\big|^{2} \nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\big|S^{(i+1)}(r-\tau)-S^{(i)}(r-\tau)\big|^{2}+\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{(i)}},\mathcal{L}_{r}^{S^{(i-1)}})\Big)^{\frac{p}{2}}\mathrm{d}r\Big]\nonumber\\
\leq&\frac{1}{2}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{(i)}(r)|^{p}\big] +C\int_{0}^{t}\mathbb{E}\Big[\sup_{0\leq v\leq r}\big|S^{(i+1)}(v)-S^{(i)}(v)\big|^{p}\Big]\mathrm{d}r
\nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+Ct\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S^{(i)}(r)-S^{(i-1)}(r)\big|^{p}\Big] . \end{align} Substituting~$(\ref{eq3.21})$~and~$(\ref{eq3.22})$~into~$(\ref{eq3.20})$~yields \begin{align}\label{eq3.23}
&\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{(i)}(r)|^{p}\big]\nonumber\\
&\leq C\int_{0}^{t}\mathbb{E}\Big[\sup_{0\leq v\leq r}\big|S^{(i+1)}(v)-S^{(i)}(v)\big|^{p}\Big]\mathrm{d}r+Ct\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S^{(i)}(r)-S^{(i-1)}(r)\big|^{p}\Big]. \end{align} By~$(\ref{eq3.19})$, $(\ref{eq3.23})$~and the Gronwall inequality, we obtain that \begin{align}\label{eq3.24}
\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S^{(i+1)}(r)-S^{(i)}(r)\big|^{p}\Big]\leq Ct e^{Ct}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S^{(i)}(r)-S^{(i-1)}(r)\big|^{p}\Big]. \end{align} Taking~$T_{0}\in (0, T]$~such that~$C T_{0} e^{C T_{0}}\leq e^{-1}$, where~$C$~is the constant in~$(\ref{eq3.24})$, we obtain that \begin{align*}
\mathbb{E}\Big[\sup_{0\leq r\leq T_{0}}\big|S^{(i+1)}(r)-S^{(i)}(r)\big|^{p}\Big]
&\leq e^{-1}\mathbb{E}\Big[\sup_{0\leq r\leq T_{0}}\big|S^{(i)}(r)-S^{(i-1)}(r)\big|^{p}\Big]\nonumber\\ &\leq \cdots \nonumber\\
&\leq e^{-i}\mathbb{E}\Big[\sup_{0\leq r\leq T_{0}}\big|S^{(1)}(r)-S^{(0)}(r)\big|^{p}\Big]\nonumber\\
&\leq e^{-i}2^{p-1}\Big(\mathbb{E}\big[\sup_{0\leq r\leq T_{0}} |S^{(1)}(r)|^{p}\big]+\mathbb{E}[|\xi(0)|^{p}]\Big)\nonumber\\ &\leq Ce^{-i}. \end{align*} Hence~$\{S^{(i)}\}_{i\geq 1}$~is a Cauchy sequence in~$L^{p}(\Omega;C([0,T_{0}];\mathbb{R}^{d}))$, so there is an~$\mathcal{F}_{t}$-adapted continuous process~$\{S(t)\}_{t\in[0,T_{0}]}$~such that \begin{align}\label{eq3.26} \lim_{i\rightarrow\infty}\sup_{0\leq t\leq T_{0}}\mathbb{W}^{p}_{p}(\mathcal{L}_{t}^{S^{(i)}},\mathcal{L}_{t}^{S})
\leq\lim_{i\rightarrow\infty}\mathbb{E}\Big[\sup_{0\leq t\leq T_{0}}\big|S^{(i)}(t)-S(t)\big|^{p}\Big]=0, \end{align}
where~$\mathcal{L}_{t}^{S}$~is the law of~$S(t)$. Moreover, $(\ref{eq3.26})$~implies~$\mathbb{E}[\sup\limits_{0\leq t\leq T_{0}}|S(t)|^{p}]<\infty$. Taking limits on both sides of~$(\ref{eq3.9})$, we derive that~$\mathbb{P}$-$a.s.$ \begin{equation*} \mathrm{d}\Big(S(t)-D(S(t-\tau))\Big)= b(S(t),S(t-\tau),\mathcal{L}_{t}^{S})\mathrm{d}t+\sigma(S(t),S(t-\tau),\mathcal{L}_{t}^{S})\mathrm{d}B(t),~~~t\in[0,T_{0}]. \end{equation*} Taking~$T_{0}$~as the initial time and repeating the previous procedure, one concludes that~$(\ref{eq3.1})$~has a solution~$\{S(t)\}_{t\in[0,T]}$~with the property $$
\mathbb{E}[\sup\limits_{0\leq t\leq T}|S(t)|^{p}]<\infty. $$ It remains to prove uniqueness. Assume that~$\{S(t)\}_{t\in[0,T]}$~and~$\{\overline{S}(t)\}_{t\in[0,T]}$~are two solutions to~$(\ref{eq3.1})$ with the same initial data. Arguing as for~$(\ref{eq3.24})$, one sees that $$
\mathbb{E}\Big[\sup_{0\leq t\leq T}\big|S(t)-\overline{S}(t)\big|^{p}\Big]=0. $$ \underline{Step 3.} We prove that \eqref{eq3.8} holds. Similar to ~$(\ref{eq3.12})$, we have \begin{align} \label{eq3.28}
\sup_{0\leq r\leq t}|S(r)|^{p}\leq\frac{\lambda}{1-\lambda}\|\xi\|^{p}+\frac{1}{(1-\lambda)^{p}}\sup_{0\leq r\leq t}|S(r)-D(S(r-\tau))|^{p}. \end{align} The~It$\widehat{\mathrm{o}}$~formula leads to \begin{align} \label{eq3.29}
&\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S(r)-D(S(r-\tau))\big|^{p}\Big] \leq \mathbb{E}\Big[\big|\xi(0)-D(\xi(-\tau))\big|^{p}\Big]+ \overline{J}_{1}+\overline{J}_{2}, \end{align} where \begin{align*}
\overline{J}_{1}=&p\mathbb{E}\Big[\sup_{0\leq r\leq t}\int_{0}^{r}\big|S(v)-D(S(v-\tau))\big|^{p-2}
\times\Big(\big(S(v)-D(S(v-\tau))\big)^{T}\\
&~~~~~~~~~~~~~~~~\times b(S(v),S(v-\tau),\mathcal{L}_{v}^{S})+\frac{ p-1 }{2} \big|\sigma(S(v),S(v-\tau),\mathcal{L}_{v}^{S})\big|^{2}\Big)\mathrm{d}v\Big]\\ \end{align*} and \begin{align*}
\overline{J}_{2}=&p\mathbb{E}\Big[\sup_{0\leq r\leq t}\int_{0}^{r}\big|S(v)-D(S(v-\tau))\big|^{p-2}\\
&~~~~~~~~~~~~~~~~~~~~~~~~~\times\big(S(v)-D(S(v-\tau))\big)^{T} \sigma(S(v),S(v-\tau),\mathcal{L}_{v}^{S})\mathrm{d}B(v)\Big]. \end{align*} By~$(\ref{eq3.3})$~and~$(\ref{eq3.5})$, we obtain \begin{align}\label{eq3.30}
\overline{J}_{1}&\leq C\mathbb{E}\Big[ \int_{0}^{t}\big|S(r)-D(S(r-\tau))\big|^{p-2}
\Big(1+|S(r)|^{2}+|S(r-\tau)|^{2}+W_{2}^{2}(\mathcal{L}_{r}^{S})\Big)\mathrm{d}r\Big] \nonumber\\
&\leq C\mathbb{E}\Big[\int_{0}^{t}\Big(1+|S(r)|^{p}+|S(r-\tau)|^{p}+W_{p}^{p}(\mathcal{L}_{r}^{S})\Big)\mathrm{d}r\Big]\nonumber\\
&\leq C\int_{0}^{t}\mathbb{E}\Big[\big(1+|S(r)|^{p}+|S(r-\tau)|^{p}\big)\Big]\mathrm{d}r\nonumber\\
&\leq C T+C\int_{0}^{t}\mathbb{E}\big[\sup_{0\leq v\leq r}|S(v)|^{p}\big]\mathrm{d}r+C\int_{0}^{t}\mathbb{E}\big[\sup_{0\leq v\leq r}|S(v-\tau)|^{p}\big]\mathrm{d}r \nonumber\\
&\leq C T\big(1+\mathbb{E}[\|\xi\|^{p}]\big)+C\int_{0}^{t}\mathbb{E}\big[\sup_{0\leq v\leq r}|S(v)|^{p}\big]\mathrm{d}r . \end{align} Using the BDG inequality, Young's inequality and~$\mathrm{H\ddot{o}lder}$'s inequality, we have \begin{align}\label{eq3.31}
\overline{J}_{2}&\leq C\mathbb{E}\Big[\Big(\int_{0}^{t}\big|S(r)-D(S(r-\tau))\big|^{2p-2}
|\sigma(S(r),S(r-\tau),\mathcal{L}_{r}^{S})|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\Big]\nonumber\\
&\leq C\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S(r)-D(S(r-\tau))\big|^{p-1}
\big(\int_{0}^{t}|\sigma(S(r),S(r-\tau),\mathcal{L}_{r}^{S})|^{2}\mathrm{d}r\big)^{\frac{1}{2}}\Big]\nonumber\\
&\leq\frac{1}{2}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S(r)-D(S(r-\tau))\big|^{p}\Big]\nonumber\\
&~~~~+C\mathbb{E}\bigg[\bigg(\int_{0}^{t}\Big(1+|S(r)|^{2}+|S(r-\tau)|^{2}+W^{2}_{2}(\mathcal{L}_{r}^{S})\Big)\mathrm{d}r\bigg)^{\frac{p}{2}}\bigg]\nonumber\\
&\leq\frac{1}{2}\!\mathbb{E}\Big[\!\sup_{0\leq r\leq t}\big|S(r)-\!D(S(r\!-\!\tau))\big|^{p}\Big]
\!+\!C\mathbb{E}\Big[\int_{0}^{t}\Big(1\!+|S(r)|^{p}\!+|S(r-\tau)|^{p}\!+W_{p}^{p}(\mathcal{L}_{r}^{S})\Big)\mathrm{d}r\Big]\nonumber\\
&\leq\frac{1}{2}\mathbb{E}\Big[\!\sup_{0\leq r\leq t}\big|S(r)\!-\!D(S(r\!-\!\tau))\big|^{p}\!\Big]\!+\!CT\big(1\!+\!\mathbb{E}[\|\xi\|^{p}]\big)\!+\!C\int_{0}^{t}\mathbb{E}\big[\sup_{0\leq v\leq r}|S(v)|^{p}\big]\mathrm{d}r. \end{align} Inserting~$(\ref{eq3.30})$~and~$(\ref{eq3.31})$~into~$(\ref{eq3.29})$~yields \begin{align*}
\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S(r)-D(S(r-\tau))\big|^{p}\Big]\leq C\int_{0}^{t}\mathbb{E}\big[\sup_{0\leq v\leq r}|S(v)|^{p}\big]\mathrm{d}r+CT\big(1+\mathbb{E}[\|\xi\|^{p}]\big), \end{align*} which together with~$(\ref{eq3.28})$~and the Gronwall inequality means that \begin{align*}
\mathbb{E}\big[\sup_{0\leq r\leq t}|S(r)|^{p}\big]\leq V(1+T)e^{V(1+T)T}, \end{align*}
where~$V$ denotes a constant depending only on $p,K_{1},K_{2},K_{3}$ and $\mathbb{E}[\|\xi\|^{p}]$.\end{proof}
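The Picard iteration in Step 2 above admits a simple numerical illustration. The sketch below is ours and deliberately simplified: scalar state, the McKean--Vlasov term dropped, one frozen Brownian path, and illustrative Lipschitz coefficients. Each Picard step solves the neutral Euler recursion with drift and diffusion evaluated at the previous iterate, and the successive sup-differences decay rapidly, as in the contraction argument above.

```python
import numpy as np

# Toy pathwise Picard iteration for a *neutral* SDDE (our simplification:
# no McKean-Vlasov term, one fixed Brownian path):
#   d[S(t) - D(S(t - tau))] = b(S(t), S(t - tau)) dt + sig dB(t).
tau, T, dt = 0.5, 1.0, 1e-3
n0, N = round(tau / dt), round(T / dt)
rng = np.random.default_rng(2)
dB = rng.normal(0.0, np.sqrt(dt), N)        # one frozen Brownian path

D = lambda y: 0.2 * y                       # neutral term, contraction 0.2 < 1
b = lambda x, y: -x + 0.5 * np.sin(y)       # illustrative Lipschitz drift
sig = 0.3
xi = np.ones(n0 + 1)                        # constant initial segment on [-tau, 0]

def picard_step(S_prev):
    """One iterate: freeze drift/diffusion at S_prev and solve the resulting
    neutral Euler recursion; row k holds the value at time (k - n0)*dt."""
    S = np.empty(n0 + N + 1)
    S[: n0 + 1] = xi
    for n in range(N):
        x, y = S_prev[n0 + n], S_prev[n]    # previous iterate at t_n, t_n - tau
        S[n0 + n + 1] = (D(S[n + 1]) + S[n0 + n] - D(S[n])
                         + b(x, y) * dt + sig * dB[n])
    return S

S = np.concatenate([xi, np.full(N, xi[-1])])   # zeroth iterate: frozen at xi(0)
diffs = []
for _ in range(10):
    S_new = picard_step(S)
    diffs.append(float(np.max(np.abs(S_new - S))))
    S = S_new
print(diffs)    # successive sup-differences shrink rapidly
```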
\subsection{Stochastic Particle Method}\label{s3.2}
In this subsection, we make use of the stochastic particle method \cite{TD1996, BM1997} to approximate MV-NSDDE~$(\ref{eq3.1})$-$(\ref{eq3.2})$. For any~$ a\in\mathbb{N}$, $\{B^{a}(t)\}_{t\in[0,T]}$~is an $m$-dimensional Brownian motion, and $\{B^{1}(t)\}, \{B^{2}(t)\}, \cdots$~are independent.
Moreover, $\xi^{a}\in L^{p}_{\mathcal{F}_{0}}([-\tau,0];\mathbb{R}^{d})$, the initial segments $\xi^{1}, \xi^{2}, \cdots$ are independent and identically distributed ($i.i.d.$), and $\mathbb{E}[\sup_{-\tau\leq r,t\leq0}|\xi^a(t)-\xi^a(r)|^{p}]\leq K_0|t-r|^{\frac{p}{2}}$.
Let~$\{S^a(t)\}_{t\in[0,T]}$~denote the unique solution to the MV-NSDDE \begin{equation} \label{eq3.32} \mathrm{d}\Big(S^a(t)-D(S^{a}(t-\tau))\Big)=b(S^a(t),S^{a}(t-\tau),\mathcal{L}_{t}^{S^{a}}) \mathrm{d}t+\sigma(S^a(t),S^{a}(t-\tau),\mathcal{L}_{t}^{S^{a}})\mathrm{d}B^{a}(t), \end{equation} with the initial condition~$S^{a}_0 =\xi^{a} $, where~$\mathcal{L}_{t}^{S^{a}}$~denotes the law of~$ S^a(t)$. We can see that $ S^{1}(t), S^{2}(t), \cdots$ are $i.i.d.$ for $t\geq 0$.
For each $\Xi\in\mathbb{N},~1\leq a\leq \Xi$, let~$ S^{a,\Xi}(t) $~be the solution of the NSDDE \begin{align} \label{eq3.33} \mathrm{d}\Big(S^{a,\Xi}(t)-D(S^{a,\Xi}(t-\tau))\Big) =&b(S^{a,\Xi}(t),S^{a,\Xi}(t-\tau),\mathcal{L}_{t}^{S,\Xi})\mathrm{d}t\nonumber\\ &\!+\!\sigma(S^{a,\Xi}(t),S^{a,\Xi}(t\!-\!\tau),\mathcal{L}_{t}^{S,\Xi})\mathrm{d}B^{a}(t),~~~t\in[0,T], \end{align} with the initial condition~$ S^{a,\Xi}_0 =\xi^{a} $, where $\mathcal{L}_{t}^{S,\Xi}(\cdot):=\frac{1}{\Xi}\sum\limits_{j=1}^{\Xi}\delta_{S^{j,\Xi}(t)}(\cdot)$. We now present a path-wise propagation of chaos result for the NSDDEs~$(\ref{eq3.33})$.
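The interacting system above replaces the true law by the empirical measure of the particles, and the estimates below repeatedly compare empirical measures in $\mathbb{W}_{2}$. As a minimal illustration (the function name and sample data are ours): in dimension one, the optimal coupling between two empirical measures with the same number of atoms matches order statistics, so the squared 2-Wasserstein distance can be computed exactly by sorting.

```python
import numpy as np

def empirical_w2_sq_1d(x, y):
    """Exact squared 2-Wasserstein distance between two empirical measures
    with the same number of atoms in d = 1: the optimal coupling matches
    order statistics, so it suffices to sort both samples."""
    xs, ys = np.sort(x), np.sort(y)
    return float(np.mean((xs - ys) ** 2))

rng = np.random.default_rng(0)
Xi = 10_000
cloud = rng.normal(0.0, 1.0, Xi)   # stand-in for particle positions S^{1},...,S^{Xi}
shifted = cloud + 0.5              # the same cloud rigidly shifted by 0.5

print(empirical_w2_sq_1d(cloud, cloud))    # 0.0
print(empirical_w2_sq_1d(cloud, shifted))  # ~0.25: a rigid shift by c costs c^2
```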
\begin{lemma}\label{le3.6} If Assumption~$\ref{a1}$~holds and $p>4$ in \eqref{eq3.2}, then \begin{align*}
\displaystyle\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq t\leq T}|S^a(t)-S^{a,\Xi}(t)|^{2}\big]\leq C\left\{ \begin{array}{lll} \Xi^{-1/2},~~~&1\leq d<4,\\ \Xi^{-1/2}\log(\Xi),~~~&d=4,\\ \Xi^{-2/d},~~~&4<d, \end{array} \right. \end{align*} where~$C$~depends on the constant on the right side of \eqref{eq3.8} but is independent of $\Xi$. \end{lemma} \begin{proof} For any~$1\leq a\leq \Xi$~and~$t\in[0,T]$, using Lemma~$\ref{l3.4}$, it is easy to derive that \begin{align} \label{eq3.34}
\sup_{0\leq r\leq t}|S^{a}(r)-S^{a,\Xi}(r)|^{2}
\leq\frac{1}{(1-\lambda)^{2}}\sup_{0\leq r\leq t}|\Gamma^{a}(r) |^{2}, \end{align} where $ \Gamma^{a}(t)=S^a(t)-D(S^{a}(t-\tau))-S^{a,\Xi}(t)+D(S^{a,\Xi}(t-\tau)). $ It follows from It$\widehat{\mathrm{o}}$~formula that \begin{align} \label{eq3.35}
\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a}(r)|^{2}\big]\leq Q_{1}+Q_{2}, \end{align} where \begin{align*}
Q_{1}=&\mathbb{E}\bigg[\sup_{0\leq r\leq t} \int_{0}^{r}\Big\{2 \big(\Gamma^{a}(v)\big)^{T} \Big(b(S^{a}(v),S^{a}(v-\tau),\mathcal{L}_{v}^{S^{a}})-
b(S^{a,\Xi}(v),S^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{S,\Xi}) \Big) \\
&~~~~~~~~~~~~~~~~~~ +\big|\sigma(S^{a}(v),S^{a}(v-\tau),\mathcal{L}_{v}^{S^{a}})-\sigma(S^{a,\Xi}(v),S^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{S,\Xi})\big|^{2}\Big\}\mathrm{d}v\bigg]\\
\end{align*}
and \begin{align*} Q_{2}=&\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r}\!2\big(\Gamma^{a}(v)\big)^{T}\!\Big(\sigma(S^{a}(v),S^{a}(v\!-\!\tau),\mathcal{L}_{v}^{S^{a}})\!\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-\sigma(S^{a,\Xi}(v),S^{a,\Xi}(v\!-\!\tau),\mathcal{L}_{v}^{S,\Xi})\!\Big)\mathrm{d}B^a(v)\bigg]. \end{align*} By Assumption~$\ref{a1}$~and Young's inequality, we have \begin{align}\label{eq3.36} Q_{1}&\leq C\mathbb{E}\bigg[\int_{0}^{t}
\Big(\big|S^{a}(r)-S^{a,\Xi}(r)\big|^{2}+\big|S^{a}(r-\tau)-S^{a,\Xi}(r-\tau)\big|^{2} +\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\mathcal{L}_{r}^{S,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq C \int_{0}^{t}\mathbb{E}\Big[\sup_{0\leq v\leq r}\big|S^{a}(v)-S^{a,\Xi}(v)\big|^{2}\Big]\mathrm{d}r +C \int_{0}^{t}\mathbb{E}\big[\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\mathcal{L}_{r}^{S,\Xi})\big]\mathrm{d}r. \end{align} Using the~BDG~inequality and~$\mathrm{H\ddot{o}lder}$'s~inequality, we then have \begin{align}\label{eq3.37}
Q_{2}&\leq C\mathbb{E}\bigg[\Big(\int_{0}^{t}|\Gamma^{a}(r)|^{2 }
\big|\sigma(S^{a}(r),S^{a}(r-\tau),\mathcal{L}_{r}^{S^{a}})-\sigma(S^{a,\Xi}(r),S^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{S,\Xi})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\!\sup_{0\leq r\leq t}|\Gamma^{a}(r)| \Big(\int_{0}^{t}
\big|\sigma(S^{a}(r),S^{a}(r\!-\!\tau),\!\mathcal{L}_{r}^{S^{a}})\!-\!\sigma(S^{a,\Xi}(r),S^{a,\Xi}(r\!-\!\tau),\mathcal{L}_{r}^{S,\Xi})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\!\bigg]\nonumber\\
&\leq\frac{1}{2}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a}(r)|^{2}\big]\nonumber\\
&~~~~+C\mathbb{E}\Big[\int_{0}^{t}\Big(\big|S^{a}(r)\!-\!S^{a,\Xi}(r)\big|^{2}
+\big|S^{a}(r-\tau)-S^{a,\Xi}(r-\tau)\big|^{2}+\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\mathcal{L}_{r}^{S,\Xi})\Big)\mathrm{d}r\Big]\nonumber\\
&\leq\frac{1}{2}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a}(r)|^{2}\big]\nonumber\\
&~~~~+C\int_{0}^{t}\mathbb{E}\big[\sup_{0\leq v\leq r}\big|S^{a}(v)-S^{a,\Xi}(v)\big|^{2}\big]\mathrm{d}r+C\int_{0}^{t}\mathbb{E}\Big[\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\mathcal{L}_{r}^{S,\Xi})\Big]\mathrm{d}r. \end{align} Now we introduce another empirical measure constructed from the exact solution to~$(\ref{eq3.32})$~by $$ \mathcal{L}_{r}^{\Xi}( dx)=\frac{1}{\Xi}\sum_{j=1}^{\Xi}\delta_{S^{j}(r)}(dx). $$ One notices \begin{align*} \mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\mathcal{L}_{r}^{S,\Xi})&\leq 2 \big(\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\mathcal{L}_{r}^{\Xi}) +\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{\Xi},\mathcal{L}_{r}^{S,\Xi})\big)\nonumber\\ &= 2 \mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\mathcal{L}_{r}^{\Xi})+
\frac{2}{\Xi}\sum_{j=1}^{\Xi}\big|S^{j}(r)-S^{j,\Xi}(r)\big|^{2} , \end{align*} and \begin{align*}
\mathbb{E}\Big[\frac{1}{\Xi}\sum_{j=1}^{\Xi}\big|S^{j}(r)-S^{j,\Xi}(r)\big|^{2}\Big]
=\mathbb{E}\Big[\big|S^{a}(r)-S^{a,\Xi}(r)\big|^{2}\Big]. \end{align*} Combining the above inequalities yields \begin{align}\label{eq3.38} \int_{0}^{t}\!\mathbb{E}\big[\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\!\mathcal{L}_{r}^{S,\Xi})\big]\mathrm{d}r
\leq 2\!\int_{0}^{t}\!\mathbb{E}\Big[\!\sup_{0\leq v\leq r}\big|S^{a}(v)\!-\!S^{a,\Xi}(v)\big|^{2}\Big]\mathrm{d}r \!+\!2\!\int_{0}^{t}\!\mathbb{E}\big[\!\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\!\mathcal{L}_{r}^{\Xi})\big]\mathrm{d}r. \end{align} This together with~$(\ref{eq3.34})$-$(\ref{eq3.37})$~yields \begin{align*}
\mathbb{E}\big[\sup_{0\leq r\leq t}|S^{a}(r)-S^{a,\Xi}(r)|^{2}\big]&\leq \frac{1}{(1-\lambda)^{2}}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a}(r)|^{2}\big]\nonumber\\
&\leq C\!\int_{0}^{t}\mathbb{E}\Big[\sup_{0\leq v\leq r}\!\big|S^{a}(v)\!-\!S^{a,\Xi}(v)\big|^{2}\Big]\mathrm{d}r \!+\!C\int_{0}^{t}\!\mathbb{E}\big[\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S^{a}},\!\mathcal{L}_{r}^{\Xi})\big]\mathrm{d}r. \end{align*} Therefore, thanks to~$(\ref{eq3.8})$~and Lemma~$\ref{l3.2}$, it follows from the~Gronwall~inequality that \begin{align*}
\mathbb{E}\big[\sup_{0\leq t\leq T}|S^a(t)-S^{a,\Xi}(t)|^{2}\big]\leq C\left\{ \begin{array}{lll} \Xi^{-1/2},~~~&1\leq d<4,\\ \Xi^{-1/2}\log(\Xi),~~~&d=4,\\ \Xi^{-2/d},~~~&4<d. \end{array} \right. \end{align*} \end{proof}
\section{Tamed EM Method of MV-NSDDEs}\label{cov-rates} \def\be{\begin{equation}} \def\ee{\end{equation} In this section we propose an appropriate explicit method for approximating the solution of~$(\ref{eq3.33})$, and then establish the strong convergence of the numerical solutions to the exact solution of~$(\ref{eq3.33})$. By virtue of the propagation of chaos, we further obtain a convergence result for the numerical approximation of the original MV-NSDDE~$(\ref{eq3.32})$.
Since the drift coefficient satisfying Assumption $1$ and Assumption $2$ may grow super-linearly, for any $\triangle\in (0,1\wedge \tau)$ we define an auxiliary function \begin{align}\label{eq4.1} b_{\triangle}(x,y,\mu)
=\frac{b(x,y,\mu)}{1+\triangle^{\alpha}|b(x,y,\mu)|}, \end{align} for~$x,y\in\mathbb{R}^{d}$,~$\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})$~and~$\alpha\in(0,\frac{1}{2}]$. To approximate $(\ref{eq3.33})$ we propose the tamed EM scheme. In addition, we let $\frac{\tau}{\triangle}=: n_0$ be an integer (for example, when $\tau$ and $\triangle$ are rational numbers). For the fixed step size~$\triangle$, define \be\label{eq4.2}\left\{\begin{array}{lcl} U^{a,\Xi}_{n}=\xi^a(t_{n}),~~~~~~~~~~~a=1,\cdots,\Xi,~~n=-n_0,\cdots,0,\\ U^{a,\Xi}_{n+1}-D(U^{a,\Xi}_{n+1-n_0})= U^{a,\Xi}_{n}-D(U^{a,\Xi}_{n-n_0}) +b_{\triangle}(U^{a,\Xi}_{n},U^{a,\Xi}_{n-n_0},\mathcal{L}^{U_n,\Xi} )\triangle\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\sigma(U^{a,\Xi}_{n},U^{a,\Xi}_{n-n_0},\mathcal{L}^{U_n,\Xi} )\triangle B^{a}_{n},~~~~~n\geq 0, \end{array}\right.\ee where $t_{-n_0}=-\tau$, $t_{n}=n\triangle$ for $n>-n_0$, $\mathcal{L}^{U_n,\Xi} =\frac{1}{\Xi}\sum\limits_{j=1}^{\Xi}\delta_{U^{j,\Xi}_{n}}$~and~$\triangle B^{a}_{n}=B^{a}(t_{n+1})-B^{a}(t_{n})$. To proceed, we give the numerical solution of this scheme \begin{align}\label{eq4.3} U^{a,\Xi}(t)=U^{a,\Xi}_{n}, ~~~~ t\in[t_n, t_{n+1}). \end{align} For convenience, we define $\mathcal{L}^{U,\Xi}_{t}=\frac{1}{\Xi}\sum\limits_{j=1}^{\Xi}\delta_{U^{j,\Xi}(t)}$ and $\rho(t)=\lfloor t/\triangle\rfloor\triangle$~for~$t\geq -\tau$. Then one observes~$\mathcal{L}^{U,\Xi}_{t}=\mathcal{L}^{U,\Xi}_{\rho(t)}=\mathcal{L}^{U_n,\Xi}$, for $t\in [t_n, t_{n+1})$. Moreover, we give the auxiliary continuous numerical solution \be \label{eq4.4}\left\{\begin{array}{lll}
\overline{U}^{a,\Xi}(t) = \xi^{a}(t),&~~ -\tau\leq t\leq 0,\\ \overline{U}^{a,\Xi}(t)\!-\!D\big(\!\overline{U}^{a,\Xi}( t\!-\!\tau )\big)\!=&\!\xi^{a}(0)\!-\!D(\xi^{a}(\!-\tau))\!+\!\int^{t}_{0}\!b_{\triangle}(U^{a,\Xi}(r),\!U^{a,\Xi}(r\!-\!\tau ),\!\mathcal{L}_{r}^{U,\Xi})\!\mathrm{d}r\\ &+\!\int^{t}_{0}\!\sigma (U^{a,\Xi}(r),\!U^{a,\Xi}(r\!-\!\tau ),\!\mathcal{L}_{r}^{U,\Xi})\mathrm{d}B^{a}(r),~~~~ t>0. \end{array}\right.\ee It follows from~$(\ref{eq4.4})$~directly that for $t>0$, \begin{align*} &\overline{U}^{a,\Xi}(t)-D\big(\overline{U}^{a,\Xi}( t-\tau )\big)\nonumber\\ &=U^{a,\Xi}_0-D\big(U^{a,\Xi}_{ -n_0}\big)+\sum_{n=0}^{\lfloor t/\triangle\rfloor-1}\int^{t_{n+1}}_{t_n}b_{\triangle}(U^{a,\Xi}_n,U^{a,\Xi}_{n-n_0},\mathcal{L}^{U_n,\Xi})\mathrm{d}r\nonumber\\ &~~~~+\sum_{n=0}^{\lfloor t/\triangle\rfloor-1}\int^{t_{n+1}}_{t_n}\sigma (U^{a,\Xi}_n,U^{a,\Xi}_{n-n_0},\mathcal{L}^{U_n,\Xi})\mathrm{d} B^{a}(r)\nonumber\\ &~~~~+\int^{t}_{\rho(t)}b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\mathrm{d}r+\int^{t}_{\rho(t)}\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\mathrm{d}B^{a}(r)\nonumber\\ &=U^{a,\Xi}(t)-D\big( {U}^{a,\Xi}( t-\tau )\big)+\int^{t}_{\rho(t)}b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\mathrm{d}r\nonumber\\ &~~~~+\int^{t}_{\rho(t)}\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\mathrm{d}B^{a}(r). \end{align*} One observes from the above equation that $$\overline{U}^{a,\Xi}(\rho(t))-D\big(\overline{U}^{a,\Xi}( \rho(t)-\tau )\big) =U^{a,\Xi}(\rho(t))-D\big( {U}^{a,\Xi}( \rho(t)-\tau )\big)=U^{a,\Xi}(t)-D\big( {U}^{a,\Xi}( t-\tau )\big).$$ Solving this difference equation, we arrive at \begin{align}\label{eq4.5}
\overline{U}^{a,\Xi}(\rho(t)) =U^{a,\Xi}(\rho(t)) =U^{a,\Xi}(t)=U^{a,\Xi}_n,~~~~t\in[t_n, t_{n+1}). \end{align} Therefore, it is clearly that~$\overline{U}^{a,\Xi}(t)$~and~$U^{a,\Xi}(t)$~take the same values at the grid points. \begin{remark}\label{r4.1} From~$(\ref{eq4.1})$ and \eqref{eq3.5}, one observes \begin{align}\label{eq4.6}
|b_{\triangle}(x,y,\mu)|\leq \triangle^{-\alpha} \wedge |b(x,y,\mu)|, \end{align}and \begin{align}\label{eq4.7}
\big(x-D(y)\big)^{T}b_{\triangle}(x,y,\mu) \leq(K_1+ K_2) \big( 1+|x|^{2}+|y|^{2}+W^{2}_{2}(\mu) \big). \end{align} \end{remark}
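The scheme $(\ref{eq4.2})$ together with the taming $(\ref{eq4.1})$ is straightforward to implement. The following is a minimal sketch for the scalar case $d=m=1$; the coefficients $D$, $b$ and $\sigma$ are illustrative choices of ours (a super-linear drift with mean-field interaction through the empirical mean), not the paper's.

```python
import numpy as np

# Illustrative scalar coefficients (our choices, not the paper's):
D = lambda y: 0.1 * y                            # neutral term, contraction 0.1 < 1
b = lambda x, y, m: -x**3 + 0.5 * y + 0.25 * m   # super-linear mean-field drift
sigma = lambda x, y, m: 0.5                      # constant diffusion

def tamed_em(xi, tau=1.0, T=2.0, dt=0.01, Xi=500, alpha=0.5, seed=0):
    """Tamed EM scheme (4.2) for the interacting particle system, d = m = 1.
    xi(t) is a deterministic initial segment on [-tau, 0]; row k of U holds
    the Xi particle values U_{k - n0}."""
    rng = np.random.default_rng(seed)
    n0, N = round(tau / dt), round(T / dt)
    U = np.empty((n0 + N + 1, Xi))
    for k in range(n0 + 1):                      # initial data on [-tau, 0]
        U[k] = xi(-tau + k * dt)
    for n in range(N):
        x, y = U[n0 + n], U[n]                   # U_n and the delayed U_{n - n0}
        m = x.mean()                             # empirical measure via its mean
        drift = b(x, y, m)
        bt = drift / (1.0 + dt**alpha * np.abs(drift))   # taming (4.1)
        dBn = rng.normal(0.0, np.sqrt(dt), Xi)
        # (4.2): U_{n+1} - D(U_{n+1-n0}) = U_n - D(U_{n-n0}) + bt*dt + sigma*dB
        U[n0 + n + 1] = D(U[n + 1]) + x - D(y) + bt * dt + sigma(x, y, m) * dBn
    return U[n0:]                                # U_0, ..., U_N

traj = tamed_em(xi=lambda t: 1.0)
print(traj.shape)                                # (201, 500)
```

Note that the taming keeps each drift increment bounded by $\triangle^{1-\alpha}$ even where the cubic drift is large, which is exactly what the moment bounds below exploit.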
\subsection{Moment Estimate}\label{bound} In order to show that the numerical solutions produced by~$(\ref{eq4.2})$ converge to the solution of $(\ref{eq3.33})$, in this subsection we establish the boundedness of the $p$th moment of the numerical solution~$\overline{U}^{a,\Xi}(t)$, $1\leq a\leq \Xi$, defined by~$(\ref{eq4.4})$. \begin{lemma}\label{le4.2} Under Assumption~$\ref{a1}$, $$
\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq t\leq T}|\overline{U}^{a,\Xi}(t)|^{p}\big]\leq C,~~~T\geq0, $$ where, from now on, $C$ denotes a generic positive constant whose value may vary from line to line but is independent of~$\Xi$~and~$\triangle$. \end{lemma} \begin{proof} Let~$T\geq0$. For each~$a=1,\cdots,\Xi$~and each positive integer~$N$, define the~$\mathcal{F}_{t}$-stopping times $$
\eta^{a,\Xi}_{N}=T\wedge\inf\{t\in[0,T]:~|\overline{U}^{a,\Xi}(t)|\geq N\},~~~~\eta^{\Xi}_{N}=\min_{1\leq a\leq \Xi}\eta^{a,\Xi}_{N}, $$ and note that~$\eta^{\Xi}_{N}\uparrow T$~as~$N\rightarrow\infty$. Due to~$(\ref{eq3.7})$ and Assumption \ref{a1}, we know that \begin{align*}
|\overline{U}^{a,\Xi}(t\wedge\eta^{\Xi}_{N})|^{p} \leq\lambda\big|\overline{U}^{a,\Xi}(t\wedge\eta^{\Xi}_{N}-\tau)\big|^{p}+\frac{1}{(1-\lambda)^{p-1}} \big|\overline{U}^{a,\Xi}(t\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(t\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}, \end{align*} which implies \begin{align}\label{eq4.8}
\sup_{0\leq r\leq t}|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})|^{p}
\leq&\lambda\sup_{-\tau\leq r\leq 0}|\overline{U}^{a,\Xi}(r)|^{p}+\lambda\sup_{0\leq r\leq t }|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})|^{p}\nonumber\\
&+\frac{1}{(1-\lambda)^{p-1}}\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}. \end{align} Taking expectation on both sides derives that \begin{align}\label{eq4.9}
&\mathbb{E}\big[\sup_{0\leq r\leq t}|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})|^{p}\big]\nonumber\\
&\leq\frac{\lambda}{1-\lambda}\mathbb{E}\big[\|\xi^{a}\|^{p}\big]+
\frac{1}{(1-\lambda)^{p}}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]. \end{align} By applying the~It$\widehat{\mathrm{o}}$~formula we have \begin{align}\label{eq4.11}
\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})\!-\!D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}\!-\!\tau)\big)\big|^{p}\Big]
\leq\mathbb{E}\big[|\xi^{a}(0)\!-\!D(\xi^{a}(-\tau))|^{p}\big]\!+\!\sum_{l=1}^{3}\widetilde{J}^{a}_{l}, \end{align} where \begin{align*}
\widetilde{J}^{a}_{1} &=p\mathbb{E}\Big[\sup_{0\leq r\leq t} \int_{0}^{r\wedge\eta^{\Xi}_{N}}\big|\overline{U}^{a,\Xi}(v)-D\big(\overline{U}^{a,\Xi}(v-\tau)\big)\big|^{p-2}\nonumber\\ &~~~~~~~~~~\times\Big(\overline{U}^{a,\Xi}(v)-D\big(\overline{U}^{a,\Xi}(v-\tau)\big)-U^{a,\Xi}(v)+D\big(U^{a,\Xi} (v-\tau)\big)\Big)^{T}\\ &~~~~~~~~~~~~~~~~~~\times b_{\triangle}(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}v\Big],\\
\widetilde{J}^{a}_{2} &=p\mathbb{E}\bigg[\sup_{0\leq r\leq t}\bigg\{\int_{0}^{r\wedge\eta^{\Xi}_{N}}
\big|\overline{U}^{a,\Xi}(v)-D\big(\overline{U}^{a,\Xi}(v-\tau)\big)\big|^{p-2}\Big(U^{a,\Xi}(v)-D\big(U^{a,\Xi}(v-\tau)\big)\Big)^{T}\\ &~~~~~~~\times b_{\triangle}(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}v \\ &~~~~~~~~~+\!\frac{ p-1 }{2}\!\int_{0}^{r\wedge\eta^{\Xi}_{N}}
\big|\overline{U}^{a,\Xi}(v)\!-\!D\big(\overline{U}^{a,\Xi}(v\!-\!\tau)\big)\!\big|^{p-2} \big|\sigma(U^{a,\Xi}(v),\!U^{a,\Xi}(v\!-\!\tau),\!\mathcal{L}_{v}^{U,\Xi})\big|^{2}\mathrm{d}v\bigg\}\!\bigg]\\ \end{align*} and \begin{align*} \widetilde{J}^{a}_{3} &
=p\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r\wedge\eta^{\Xi}_{N}}\big|\overline{U}^{a,\Xi}(v)-D\big(\overline{U}^{a,\Xi}(v-\tau)\big)\big|^{p-2} \Big(\overline{U}^{a,\Xi}(v)-D\big(\overline{U}^{a,\Xi}(v-\tau)\big)\Big)^{T}\\ &~~~~~~~~~~\times \sigma(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}B^{a}(v)\bigg]. \end{align*} We begin to compute~$ \widetilde{J}^{a}_{1} $. Due to~$(\ref{eq3.3})$~and~$(\ref{eq4.6})$, using H$\ddot{\hbox{o}}$lder's inequality and Young's inequality, we have \begin{align}\label{eq4.12}
&\widetilde{J}^{a}_{1} \leq p\mathbb{E}\bigg[\int_{0}^{t\wedge\eta^{\Xi}_{N}}\big|\overline{U}^{a,\Xi}(r)-D\big(\overline{U}^{a,\Xi}(r-\tau)\big)\big|^{p-2}\nonumber\\
&~~~~~~\times\Big|\overline{U}^{a,\Xi}(r)-D\big(\overline{U}^{a,\Xi}(r-\tau)\big)-U^{a,\Xi}(r)+D\big(U^{a,\Xi} (r-\tau)\big)\Big|\nonumber\\
&~~~~~~~~~~~~~~\times\big|b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\int_{0}^{t\wedge\eta^{\Xi}_{N}}\big|\overline{U}^{a,\Xi}(r)-D\big(\overline{U}^{a,\Xi}(r-\tau)\big)\big|^{p-2}\nonumber\\
&~~~~\times\Big|\int^{r}_{\rho(r)}\!b_{\triangle}(U^{a,\Xi}(v),\!U^{a,\Xi}(v\!-\!\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}v
\!+\!\int^{r}_{\rho(r)}\sigma(U^{a,\Xi}(v),\!U^{a,\Xi}(v\!-\!\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}B^{a}(v)\Big|\nonumber\\
&~~~~~~~~~~~~~~\times\big|b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p-2}\nonumber\\
&~~~~\times\!\int_{0}^{t\wedge\eta^{\Xi}_{N}}\!\Big|\int^{r}_{\rho(r)}\!b_{\triangle}(U^{a,\Xi}(v),\!U^{a,\Xi}(v\!-\!\tau),\!\mathcal{L}_{v}^{U,\Xi}\!)\mathrm{d}v
\!\!+\!\!\int^{r}_{\rho(r)}\!\sigma(U^{a,\Xi}(v),\!U^{a,\Xi}(v\!-\!\tau),\!\mathcal{L}_{v}^{U,\Xi})\!\mathrm{d}B^{a}(v)\Big|\nonumber\\
&~~~~~~~~~~~~~\times\big|b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|\mathrm{d}r\bigg]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\
&~~~~+C\mathbb{E}\bigg[\int_{0}^{t\wedge\eta^{\Xi}_{N}}\bigg(\big|\int^{r}_{\rho(r)}b_{\triangle}(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}v\big|^{\frac{p}{2}}\nonumber\\
&~~~~~~+\!\big|\int^{r}_{\rho(r)}\!\sigma(U^{a,\Xi}(v),U^{a,\Xi}(v\!-\!\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}B^{a}(v)\big|^{\frac{p}{2}}\bigg)
\big|b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r\!-\!\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{\frac{p}{2}}\mathrm{d}r\!\bigg]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\ &~~~~+C\bigg(\triangle^{p(\frac{1}{2}-\alpha)}+\triangle^{-\frac{\alpha p}{2}}
\mathbb{E}\Big[\int_{0}^{t}\big|\int^{r\wedge\eta^{\Xi}_{N}}_{\rho(r\wedge\eta^{\Xi}_{N})}\sigma(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\mathrm{d}B^{a}(v)\big|^{\frac{p}{2}} \mathrm{d}r\Big]\bigg)\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)|^{p}\Big]\nonumber\\ &~~~~+C\bigg(\triangle^{p(\frac{1}{2}-\alpha)}+\triangle^{-\frac{\alpha p}{2}}
\int_{0}^{t}\mathbb{E}\Big[\Big(\int^{r\wedge\eta^{\Xi}_{N}}_{\rho(r\wedge\eta^{\Xi}_{N})}\big|\sigma(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\big|^{2}\mathrm{d}v\Big)^{\frac{p}{4}} \Big]\mathrm{d}r\bigg)\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\!\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})\!-\!D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}\!-\!\tau)\big)\big|^{p}\Big]\nonumber\\ &~~~~+\!C\bigg(\!\triangle^{p(\frac{1}{2}-\alpha)} \!+\!\triangle^{\frac{p}{2}(\frac{1}{2}-\alpha)}
\!\int_{0}^{t}\!\mathbb{E}\Big[1\!+\!|U^{a,\Xi}(r\wedge\eta^{\Xi}_{N})|^{\frac{p}{2}}\!+\!|U^{a,\Xi}(r\wedge\eta^{\Xi}_{N}\!-\!\tau)|^{\frac{p}{2}}\!+\!W^{\frac{p}{2}}_{2}(\mathcal{L}_{r\wedge\eta^{\Xi}_{N}}^{U,\Xi}) \!\Big]\mathrm{d}r\!\bigg)\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\
&~~~~+C\bigg(\triangle^{p(\frac{1}{2}-\alpha)}+\int_{0}^{t}\mathbb{E}\Big[1+|U^{a,\Xi}(r\wedge\eta^{\Xi}_{N})|^{p}+|U^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)|^{p}+W^{p}_{p}(\mathcal{L}_{r\wedge\eta^{\Xi}_{N}}^{U,\Xi}) \Big]\mathrm{d}r\bigg)\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\ &~~~~+C\Big(\triangle^{p(\frac{1}{2}-\alpha)}
+\int_{0}^{t}\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq v\leq r}|\overline{U}^{a,\Xi}(v\wedge\eta^{\Xi}_{N})|^{p}\big]\mathrm{d}r+1\Big)\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\
&~~~~+C\Big(\int_{0}^{t}\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq v\leq r}|\overline{U}^{a,\Xi}(v\wedge\eta^{\Xi}_{N})|^{p}\big]\mathrm{d}r+1\Big). \end{align} Using~$(\ref{eq3.3})$, $(\ref{eq4.7})$~and~Young's~inequality, we can deduce that \begin{align}\label{eq4.13}
\widetilde{J}^{a}_{2}&\leq C\mathbb{E}\bigg[\int_{0}^{t\wedge\eta^{\Xi}_{N}}\big|\overline{U}^{a,\Xi}(r)-D\big(\overline{U}^{a,\Xi}(r-\tau)\big)\big|^{p-2}\nonumber\\
&~~~~\times \Big(1+|U^{a,\Xi}(r)|^{2}+|U^{a,\Xi}(r-\tau)|^{2}+W^{2}_{2}(\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\Big[\int_{0}^{t\wedge\eta^{\Xi}_{N}}\big(|\overline{U}^{a,\Xi}(r)|^{p}+|\overline{U}^{a,\Xi}(r-\tau)|^{p}\big)\mathrm{d}r\Big]\nonumber\\
&~~~~+C\mathbb{E}\bigg[\int_{0}^{t\wedge\eta^{\Xi}_{N}}\Big(1+|U^{a,\Xi}(r)|^{p}+|U^{a,\Xi}(r-\tau)|^{p} +W^{p}_{2}(\mathcal{L}_{r}^{U,\Xi})\Big) \mathrm{d}r\bigg] \nonumber\\
&\leq C\Big(\int_{0}^{t}\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq v\leq r}|\overline{U}^{a,\Xi}(v\wedge\eta^{\Xi}_{N})|^{p}\big]\mathrm{d}r+1\Big). \end{align} Applying the~BDG~inequality, Young's~inequality and~$\mathrm{H\ddot{o}lder}$'s~inequality we have \begin{align}\label{eq4.14} \widetilde{J}^{a}_{3}
&\leq C\mathbb{E}\bigg[\Big(\int_{0}^{t\wedge\eta^{\Xi}_{N}}\big|\overline{U}^{a,\Xi}(r)-D\big(\overline{U}^{a,\Xi}(r-\tau)\big)\big|^{2p-2}
\big|\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p-1}\nonumber\\ &~~~~~~~~\times\Big(\int_{0}^{t\wedge\eta^{\Xi}_{N}}
\big|\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\bigg]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\
&~~~~+C\mathbb{E}\Big[\int_{0}^{t\wedge\eta^{\Xi}_{N}}\big|\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{p}\mathrm{d}r\Big]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\ &~~~~+C\mathbb{E}\bigg[\int_{0}^{t\wedge\eta^{\Xi}_{N}}
\Big(1+|U^{a,\Xi}(r)|^{p}+|U^{a,\Xi}(r-\tau)|^{p}+W^{p}_{p}(\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\
&~~~~+C\Big(\int_{0}^{t}\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq v\leq r}|\overline{U}^{a,\Xi}(v\wedge\eta^{\Xi}_{N})|^{p}\big]\mathrm{d}r+1\Big). \end{align} Inserting~$(\ref{eq4.12})$-$(\ref{eq4.14})$~into~$(\ref{eq4.11})$, we derive that \begin{align*}
&\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})-D\big(\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N}-\tau)\big)\big|^{p}\Big]\nonumber\\
&\leq C\int_{0}^{t}\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq v\leq r}|\overline{U}^{a,\Xi}(v\wedge\eta^{\Xi}_{N})|^{p}\big]\mathrm{d}r+C. \end{align*} Combining the above inequality with~$(\ref{eq4.9})$~and using the~Gronwall~inequality arrive at \begin{align*}
\sup_{1\leq a\leq \Xi}\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|\overline{U}^{a,\Xi}(r\wedge\eta^{\Xi}_{N})\big|^{p}\Big]\leq C. \end{align*} Therefore, letting $N\rightarrow \infty$, using Fatou's lemma one can see that \begin{align*}
\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq t\leq T}|\overline{U}^{a,\Xi}(t)|^{p}\big]\leq C. \end{align*} \end{proof}
\subsection{Convergence Rate}\label}\def\be{\begin{equation}} \def\ee{\end{equation}{s6} We investigate in this section the convergence rate between the numerical solution and the exact solution. \begin{lemma}\label{le4.3} Under Assumption $\ref{a1}$, for any $T>0$, \begin{align}\label{eq4.15}
\sup_{1\leq a\leq \Xi}\sup_{0\leq n\leq \lfloor T/\triangle\rfloor}\mathbb{E}\big[\sup_{t_{n}\leq t<t_{n+1}}|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}|^{p}\big]\leq C\triangle^{\frac{p}{2}}. \end{align}
\end{lemma} \begin{proof} Let~$\widetilde{N}=\lfloor T/\triangle\rfloor$. For each~$a=1,\ldots,\Xi$ and $t\in[0,T]$, there exists~$n$ with $0\leq n\leq\widetilde{N}$ such that~$t\in[t_{n},t_{n+1})$. Then one observes \begin{align}\label{eq4.16}
&\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-D\big(\overline{U}^{a,\Xi}(t-\tau)\big)-U^{a,\Xi}_{n}+D(U^{a,\Xi}_{n-n_{0}})\big|^{p}\Big]\nonumber\\
&\leq C\mathbb{E}\bigg[\sup_{t_{n}\leq t<t_{n+1}}\Big(\big|b_{\triangle}(U^{a,\Xi}_{n},U^{a,\Xi}_{n-n_{0}},\mathcal{L}^{U_{n},\Xi})(t-t_{n})\big|^{p}\nonumber\\
&~~~~~~~~~~~~~~~~+\big|\sigma(U^{a,\Xi}_{n},U^{a,\Xi}_{n-n_{0}},\mathcal{L}^{U_{n},\Xi})(B^{a}(t)-B^{a}(t_{n}))\big|^{p}\Big) \bigg]\nonumber\\
&\leq C\bigg(\triangle^{(1-\alpha)p}+\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\sigma(U^{a,\Xi}_{n},U^{a,\Xi}_{n-n_{0}},\mathcal{L}^{U_{n},\Xi})(B^{a}(t)-B^{a}(t_{n}))\big|^{p}\Big]\bigg). \end{align} By~Doob's martingale inequality and Lemma~$\ref{le4.2}$, we derive that \begin{align}\label{eq4.17}
&\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\sigma(U^{a,\Xi}_{n},U^{a,\Xi}_{n-n_{0}},\mathcal{L}^{U_{n},\Xi})(B^{a}(t)-B^{a}(t_{n}))\big|^{p}\Big]\nonumber\\
&\leq C\triangle^{\frac{p}{2}}\mathbb{E}\Big[1+|U^{a,\Xi}_{n}|^{p}+|U^{a,\Xi}_{n-n_{0}}|^{p}+W^{p}_{p}(\mathcal{L}^{U_n,\Xi})\Big]\nonumber\\ &\leq C\triangle^{\frac{p}{2}}. \end{align} Therefore, substituting~$(\ref{eq4.17})$~into~$(\ref{eq4.16})$~yields that \begin{align}\label{eq4.18}
\sup_{0\leq n\leq \widetilde{N}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-D\big(\overline{U}^{a,\Xi}(t-\tau)\big)-U^{a,\Xi}_{n}+D(U^{a,\Xi}_{n-n_{0}})\big|^{p}\Big]\leq C\triangle^{\frac{p}{2}}. \end{align} Next, by Lemma~$\ref{l3.4}$, we have \begin{align*}
|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}|^{p}
&\leq \frac{1}{(1-\lambda)^{p-1}} \big|\overline{U}^{a,\Xi}(t)-D\big(\overline{U}^{a,\Xi}(t-\tau)\big)-U^{a,\Xi}_{n}+D(U^{a,\Xi}_{n-n_{0}})\big|^{p}\nonumber\\
&~~~~+\frac{1}{ \lambda^{p-1}} \big|D\big(\overline{U}^{a,\Xi}(t-\tau)\big)-D(U^{a,\Xi}_{n-n_{0}})\big|^{p}, \end{align*} which together with Assumption \ref{a1} implies \begin{align*}
&\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}\big|^{p}\Big]\nonumber\\
&\leq\frac{1}{(1-\lambda)^{p-1}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-D\big(\overline{U}^{a,\Xi}(t-\tau)\big)-U^{a,\Xi}_{n}+D(U^{a,\Xi}_{n-n_{0}})\big|^{p}\Big]\nonumber\\
&~~~~+\lambda\mathbb{E}\big[\sup_{t_{n}\leq t<t_{n+1}}|\overline{U}^{a,\Xi}(t-\tau)-U^{a,\Xi}_{n-n_{0}}|^{p}\big]\nonumber\\
&\leq\frac{1}{(1-\lambda)^{p-1}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-D\big(\overline{U}^{a,\Xi}(t-\tau)\big)-U^{a,\Xi}_{n}+D(U^{a,\Xi}_{n-n_{0}})\big|^{p}\Big]\nonumber\\
&~~~~+\lambda\mathbb{E}\big[\sup_{t_{n-n_{0}}\leq t<t_{n-n_{0}+1}}|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n-n_{0}}|^{p}\big]. \end{align*} According to \eqref{eq4.18} we calculate \begin{align*}
&\sup_{0 \leq n\leq \widetilde{N}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}|^{p}\Big]\nonumber\\
&\leq\frac{1}{(1-\lambda)^{p-1}}\sup_{0 \leq n\leq \widetilde{N}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-D\big(\overline{U}^{a,\Xi}(t-\tau)\big)-U^{a,\Xi}_{n}+D(U^{a,\Xi}_{n-n_{0}})\big|^{p}\Big]\nonumber\\
&~~~~+\lambda\sup_{- n_0\leq n\leq \widetilde{N} -n_0 }\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}\big|^{p}\Big]\nonumber\\
&\leq\frac{C}{(1-\lambda)^{p-1}} \triangle^{\frac{p}{2}}+ \lambda\sup_{- n_0\leq n< 0 }\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}\big|^{p}\Big]\nonumber\\
&~~~~+\lambda\sup_{0\leq n\leq\widetilde{N}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}\big|^{p}\Big]\nonumber\\ &\leq\frac{C}{(1-\lambda)^{p }} \triangle^{\frac{p}{2}}
+\frac{\lambda}{ 1-\lambda}\sup_{- n_0\leq n< 0}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}\big|^{p}\Big]. \end{align*} This yields that \begin{align}\label{eq4.19}
&\sup_{0 \leq n\leq \widetilde{N}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}|^{p}\Big]\nonumber\\ &\leq\frac{C}{(1-\lambda)^{p }} \triangle^{\frac{p}{2}}
+\frac{\lambda}{ 1-\lambda}\sup_{- n_0\leq n< 0}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}\big|^{p}\Big]. \end{align} From \eqref{eq3.2} we can infer that \begin{align}\label{eq4.20}
\sup_{- n_0\leq n<0}\!\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\!\overline{U}^{a,\Xi}(t)\!-\!U^{a,\Xi}_{n}\!\big|^{p}\!\Big]
\leq\sup_{- n_0\leq n< 0 }\!\mathbb{E}\!\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\xi^{a}(t)\!-\!\xi^{a}(t_{n})\big|^{p}\!\Big]\leq K_{0}\triangle^{\frac{p}{2}}. \end{align} Inserting \eqref{eq4.20} into (\ref{eq4.19}) yields \begin{align*}
\sup_{0 \leq n\leq\widetilde{N}}\mathbb{E}\Big[\sup_{t_{n}\leq t<t_{n+1}}\big|\overline{U}^{a,\Xi}(t)-U^{a,\Xi}_{n}\big|^{p}\Big]\leq\Big(\frac{C}{(1-\lambda)^{p }} +\frac{\lambda K_0}{ 1-\lambda }\Big)\triangle^{\frac{p}{2}}. \end{align*} The required result $(\ref{eq4.15})$~follows. \end{proof}
\begin{lemma}\label{le4.4} Under Assumptions $\ref{a1}$~and~$\ref{a2}$ with~$p\geq4(c+1)$, for any~$q\in[2,\frac{p}{2(c+1)}]$, $$
\sup_{1\leq a\leq \Xi}\mathbb{E}\big[\sup_{0\leq t\leq T}|S^{a,\Xi}(t)-\overline{U}^{a,\Xi}(t)|^{q}\big]\leq C\triangle^{\alpha q},~~~T\geq0. $$ \end{lemma} \begin{proof} For each~$a=1,\ldots,\Xi$, $t\in[0,T]$, define $$ \Gamma^{a,\Xi}(t)=S^{a,\Xi}(t)-\overline{U}^{a,\Xi}(t)-D(S^{a,\Xi}(t-\tau))+D\big(\overline{U}^{a,\Xi}(t-\tau)\big). $$ For the fixed $q\in[2,\frac{p}{2(c+1)}]$, due to Lemma \ref{l3.4} and Assumption \ref{a1}, we can derive that \begin{align*}
\sup_{0\leq r\leq t}\big|S^{a,\Xi}(r)\!-\!\overline{U}^{a,\Xi}(r)\big|^{q}&\leq\lambda\sup_{0\leq r\leq t}\big|S^{a,\Xi}(r\!-\!\tau)\!-\!\overline{U}^{a,\Xi}(r\!-\!\tau)\big|^{q}\!+\!\frac{1}{(1-\lambda)^{q-1}}\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\nonumber\\
&\leq\lambda\sup_{-\tau\leq r\leq0}\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q}+\lambda\sup_{0\leq r\leq t}\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q}\nonumber\\
&~~~~+\frac{1}{(1-\lambda)^{q-1}}\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}, \end{align*} which implies \begin{align}\label{eq4.21}
\mathbb{E}\Big[\sup_{0\leq r\leq t}\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q}\Big]\leq\frac{1}{(1-\lambda)^{q}}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]. \end{align} Applying the It\^{o} formula yields \begin{align*}
|\Gamma^{a,\Xi}(t)|^{q}
&\leq q\int_{0}^{t}|\Gamma^{a,\Xi}(r)|^{q-2}\big(\Gamma^{a,\Xi}(r)\big)^{T}\Big\{b(S^{a,\Xi}(r),S^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{S,\Xi})\\ &~~~~~~~~~~~~~~~~~-b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\Big\}\mathrm{d}r\\
&~~~~+\frac{q(q-1)}{2}\int_{0}^{t}|\Gamma^{a,\Xi}(r)|^{q-2}\Big|\sigma(S^{a,\Xi}(r),S^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{S,\Xi})\\
&~~~~~~~~~~~~~~~~~-\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\Big|^{2}\mathrm{d}r\\
&~~~~+q\int_{0}^{t}|\Gamma^{a,\Xi}(r)|^{q-2}\big(\Gamma^{a,\Xi}(r)\big)^{T} \Big\{\sigma(S^{a,\Xi}(r),S^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{S,\Xi})\\ &~~~~~~~~~~~~~~~~~-\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\Big\}\mathrm{d}B^{a}(r). \end{align*} This implies \begin{align}\label{eq4.22}
\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]\leq \sum_{l=1}^5 Q^{a,\Xi}_{l}, \end{align} where \begin{align*}
Q^{a,\Xi}_{1}&=q\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r}|\Gamma^{a,\Xi}(v)|^{q-2}\big(\Gamma^{a,\Xi}(v)\big)^{T}\Big\{b(S^{a,\Xi}(v),S^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{S,\Xi}) \\ &~~~~~~~~~~~~~~~~~~~~~~~~-b(\overline{U}^{a,\Xi}(v),\overline{U}^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi}) \Big\}\mathrm{d}v\bigg],\\
Q^{a,\Xi}_{2}&=q\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r}|\Gamma^{a,\Xi}(v)|^{q-2}\big(\Gamma^{a,\Xi}(v)\big)^{T}\Big\{b(\overline{U}^{a,\Xi}(v),\overline{U}^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\\ &~~~~~~~~~~~~~~~~~~~~~~~~-b(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\Big\}\mathrm{d}v\bigg],\\
Q^{a,\Xi}_{3}&=q\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r}|\Gamma^{a,\Xi}(v)|^{q-2}(\Gamma^{a,\Xi}(v))^{T}
\Big\{b(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\\
&~~~~~~~~~~~~~~~~~~~~~~~~-b_{\triangle}(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\Big\}\mathrm{d}v\bigg],\\
Q^{a,\Xi}_{4}&=\frac{q(q-1)}{2}\mathbb{E}\bigg[\sup_{0\leq r\leq t} \int_{0}^{r}|\Gamma^{a,\Xi}(v)|^{q-2}
\Big|\sigma(S^{a,\Xi}(v),S^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{S,\Xi})\\
&~~~~~~~~~~~~~~~~~~~~~~~~-
\sigma(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\Big|^{2}\mathrm{d}v\bigg]
\end{align*} and \begin{align*}
Q^{a,\Xi}_{5}&=q\mathbb{E}\bigg[\sup_{0\leq r\leq t}\int_{0}^{r} |\Gamma^{a,\Xi}(v)|^{q-2}(\Gamma^{a,\Xi}(v))^{T} \Big\{\sigma(S^{a,\Xi}(v),S^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{S,\Xi})\\ &~~~~~~~~~~~~~~~~~~~~~~~~-\sigma(U^{a,\Xi}(v),U^{a,\Xi}(v-\tau),\mathcal{L}_{v}^{U,\Xi})\Big\}\mathrm{d}B^{a}(v)\bigg].
\end{align*} Then, according to Young's inequality and Lemma~$\ref{le4.3}$, we have \begin{align}\label{eq4.23} Q^{a,\Xi}_{1}
&\leq C\mathbb{E}\bigg[\int^{t}_{0}\Big(\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q-2}+\big|D(S^{a,\Xi}(r-\tau))
-D\big(\overline{U}^{a,\Xi}(r-\tau)\big)\big|^{q-2}\Big)\nonumber\\
&~~~\times\Big(\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{2}+\big|S^{a,\Xi}(r-\tau)-\overline{U}^{a,\Xi}(r-\tau)\big|^{2} +\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S,\Xi},\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\!\int^{t}_{0}\Big(\big|S^{a,\Xi}(r)\!-\!\overline{U}^{a,\Xi}(r)\big|^{q}
\!+\!\big|S^{a,\Xi}(r\!-\!\tau)\!-\!\overline{U}^{a,\Xi}(r-\tau)\big|^{q}\!+\!\mathbb{W}^{q}_{q}(\mathcal{L}_{r}^{S,\Xi},\!\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\!\bigg]\nonumber\\
&\leq C \mathbb{E}\bigg[\int^{t}_{0}\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q}\mathrm{d}r
+\int^{t}_{0}\frac{1}{\Xi}\sum^{\Xi}_{j=1}\big|S^{j,\Xi}(r)-U^{j,\Xi}(r)\big|^{q}\mathrm{d}r\bigg] \nonumber\\
&\leq C\mathbb{E}\bigg[\int^{t}_{0}\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q}\mathrm{d}r
+ \int^{t}_{0}\frac{1}{\Xi}\sum^{\Xi}_{j=1}\big|S^{j,\Xi}(r)-\overline{U}^{j,\Xi}(r)\big|^{q}\mathrm{d}r \nonumber\\
&~~~~~~~~~~~~~+\int^{t}_{0}\frac{1}{\Xi}\sum^{\Xi}_{j=1}\big|\overline{U}^{j,\Xi}(r)-U^{j,\Xi}(r)\big|^{q}\mathrm{d}r\bigg] \nonumber\\
&\leq C \int^{t}_{0}\sup_{1\leq a\leq \Xi}\mathbb{E}\Big[\sup_{0\leq v\leq r}\big|S^{a,\Xi}(v)-\overline{U}^{a,\Xi}(v)\big|^{q}\Big]\mathrm{d}r+C\triangle^{\frac{q}{2}}. \end{align} By \eqref{eq3.2}, Assumption~$\ref{a2}$~and Lemma~$\ref{le4.2}$, we can infer that \begin{align}\label{eq4.24}
Q^{a,\Xi}_{2}&\leq C\mathbb{E}\bigg[\int_{0}^{t}|\Gamma^{a,\Xi}(r)|^{q-1}
\Big(1+|\overline{U}^{a,\Xi}(r)|^{c}+|\overline{U}^{a,\Xi}(r-\tau)|^{c}+\!|U^{a,\Xi}(r)|^{c}+\!|U^{a,\Xi}(r-\tau)|^{c}\Big)\nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~\times\Big(\big|\overline{U}^{a,\Xi}(r)-U^{a,\Xi}(r)\big|+\big|\overline{U}^{a,\Xi}(r-\tau)
-U^{a,\Xi}(r-\tau)\big|\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\!\sup_{0\leq r\leq t}\!|\Gamma^{a,\Xi}(r)|^{q-1}\!\int_{0}^{t}\!\Big(1\!+\!|\overline{U}^{a,\Xi}(r)|^{c}\!+\!|\overline{U}^{a,\Xi}(r\!-\!\tau)|^{c}\!+\!|U^{a,\Xi}(r)|^{c}\!+\!|U^{a,\Xi}(r\!-\!\tau)|^{c}\Big)\nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~\times\Big(\big|\overline{U}^{a,\Xi}(r)-U^{a,\Xi}(r)\big|+\big|\overline{U}^{a,\Xi}(r-\tau)-U^{a,\Xi}(r-\tau)\big|\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq \frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]\nonumber\\
&~~~~+ C\mathbb{E}\bigg[\Big\{\int_{0}^{t}\Big(1+|\overline{U}^{a,\Xi}(r)|^{c}+|\overline{U}^{a,\Xi}(r-\tau)|^{c}
+|U^{a,\Xi}(r)|^{c}+|U^{a,\Xi}(r-\tau)|^{c}\Big)\nonumber\\
&~~~~~~~~~~~~~~~~~~~\times\Big(\big|\overline{U}^{a,\Xi}(r)-U^{a,\Xi}(r)\big|+\big|\overline{U}^{a,\Xi}(r-\tau)-U^{a,\Xi}(r-\tau)\big|\Big)\mathrm{d}r\Big\}^{q}\bigg]\nonumber\\
&\leq \frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big] \nonumber\\
&~~~~+C\int_{0}^{t}\bigg(\mathbb{E}\Big[1+|\overline{U}^{a,\Xi}(r)|^{p}+|\overline{U}^{a,\Xi}(r-\tau)|^{p}+|U^{a,\Xi}(r)|^{p}+|U^{a,\Xi}(r-\tau)|^{p}\Big]\bigg)^{\frac{cq}{p}}\nonumber\\
&~~~~~~~~~~~~~~~~~~ \times\bigg(\mathbb{E}\bigg[\!\Big(\big|\overline{U}^{a,\Xi}(r)\!-\!U^{a,\Xi}(r)\big|
\!+\!\big|\overline{U}^{a,\Xi}(r\!-\!\tau)\!-\!U^{a,\Xi}(r\!-\!\tau)\big|\Big)^{\frac{pq}{p-cq}}\bigg]\bigg)^{\frac{p-cq}{p}}\mathrm{d}r\nonumber\\
&\leq \frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big] \nonumber\\
&~+\!C\bigg\{\!\int_{0}^{t}\!\Big(\!\mathbb{E}\Big[\!\big|\overline{U}^{a,\Xi}(r)\!-\!U^{a,\Xi}(r)\!\big|^{\frac{pq}{p-cq}}\Big]\!\Big)^{\frac{p-cq}{p}}\!\mathrm{d}r\!+\!\sup_{-\tau\leq r\leq0}\!\Big(\!\mathbb{E}\Big[\!\big|\overline{U}^{a,\Xi}(r)\!-\!U^{a,\Xi}(r)\big|^{\frac{pq}{p-cq}}\!\Big]\Big)^{\frac{p-cq}{p}}\!\bigg\}\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\Big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\Big]+C\triangle^{\frac{q}{2}}. \end{align} One notices from~$(\ref{eq3.6})$~and~$(\ref{eq4.1})$~that \begin{align*}
&\mathbb{E}\Big[\int_{0}^{t}\big|b(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})
-b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{q}\mathrm{d}r\Big]\nonumber\\ &\leq\triangle^{\alpha q}\int_{0}^{t}
\mathbb{E}\bigg[\frac{|b(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})|^{2q}}{\Big(1+\triangle^{\alpha}|b(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})|\Big)^{q}}\bigg]\mathrm{d}r\nonumber\\ &\leq C\triangle^{\alpha q}\int_{0}^{t}
\mathbb{E}\bigg[\Big(1+|U^{a,\Xi}(r)|^{c+1}+|U^{a,\Xi}(r-\tau)|^{c+1}+W_{2}(\mathcal{L}_{r}^{U,\Xi})\Big)^{2q}\bigg]\mathrm{d}r\nonumber\\ &\leq C\triangle^{\alpha q}. \end{align*} By Young's inequality and the above inequality, we obtain that \begin{align}\label{eq4.25}
Q^{a,\Xi}_{3}&\leq C\mathbb{E}\Big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q-1}
\int_{0}^{t}\big|b(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\nonumber\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})
\big|\mathrm{d}r\Big]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]\nonumber\\
&~~~~+C\mathbb{E}\Big[\int_{0}^{t}\big|b(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})
-b_{\triangle}(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{q}\mathrm{d}r\Big]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big] + C\triangle^{\alpha q}. \end{align} Due to Assumption~$\ref{a1}$, $(\ref{eq4.23})$ and~$(\ref{eq3.2})$, by applying Young's inequality and the H\"{o}lder inequality, we derive that \begin{align}\label{eq4.26}
Q^{a,\Xi}_{4}&\leq C\mathbb{E}\bigg[\int^{t}_{0}\Big(\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q-2}+\big|D(S^{a,\Xi}(r-\tau))-D\big(\overline{U}^{a,\Xi}(r-\tau)\big)\big|^{q-2}\Big)\nonumber\\
&~~~~~~~~\times\Big(\big|S^{a,\Xi}(r)-U^{a,\Xi}(r)\big|^{2}\!+\!\big|S^{a,\Xi}(r\!-\tau)-U^{a,\Xi}(r\!-\tau)\big|^{2} \!+\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S,\Xi},\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\int^{t}_{0}\Big(\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q}+\big|S^{a,\Xi}(r-\tau)-\overline{U}^{a,\Xi}(r-\tau)\big|^{q}\nonumber\\
&~~~~+\big|S^{a,\Xi}(r)-U^{a,\Xi}(r)\big|^{q}+\big|S^{a,\Xi}(r-\tau)-U^{a,\Xi}(r-\tau)\big|^{q} +\mathbb{W}^{q}_{2}(\mathcal{L}_{r}^{S,\Xi},\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\int^{t}_{0}\Big(\big|S^{a,\Xi}(r)-\overline{U}^{a,\Xi}(r)\big|^{q}+\big|S^{a,\Xi}(r-\tau)-\overline{U}^{a,\Xi}(r-\tau)\big|^{q}\nonumber\\
&~~~~+\big|\overline{U}^{a,\Xi}(r)-U^{a,\Xi}(r)\big|^{q}+\big|\overline{U}^{a,\Xi}(r-\tau)-U^{a,\Xi}(r-\tau)\big|^{q} +\mathbb{W}^{q}_{q}(\mathcal{L}_{r}^{S,\Xi},\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\int^{t}_{0}\Big(\big|S^{a,\Xi}(r)\!-\!\overline{U}^{a,\Xi}(r)\big|^{q}\!+\!\big|S^{a,\Xi}(r\!-\!\tau)\!-\overline{U}^{a,\Xi}(r-\!\tau)\big|^{q} \!+\!\mathbb{W}^{q}_{q}(\mathcal{L}_{r}^{S,\Xi},\mathcal{L}_{r}^{U,\Xi})\!\Big)\mathrm{d}r\!\bigg]\nonumber\\
&~~~~+C\mathbb{E}\Big[\int^{t}_{0}\big|\overline{U}^{a,\Xi}(r)\!-\!U^{a,\Xi}(r)\big|^{q}\mathrm{d}r\Big]
+C\sup_{-\tau\leq r\leq0}\mathbb{E}\Big[\big|\overline{U}^{a,\Xi}(r)-U^{a,\Xi}(r)\big|^{q}\Big] \nonumber\\
&\leq C \int^{t}_{0}\sup_{1\leq a\leq \Xi}\mathbb{E}\Big[\sup_{0\leq v\leq r}\big|S^{a,\Xi}(v)-\overline{U}^{a,\Xi}(v)\big|^{q}\Big]\mathrm{d}r+C\triangle^{\frac{q}{2}}. \end{align} By the~BDG~inequality, we obtain \begin{align}\label{eq4.27} Q^{a,\Xi}_{5}
&\leq C\mathbb{E}\bigg[\!\Big(\int_{0}^{t}\!|\Gamma^{a,\Xi}(r)|^{2q-2}\nonumber\\
&~~~~~~~~~\times\big|\sigma(S^{a,\Xi}(r),\!S^{a,\Xi}(r\!-\!\tau),\mathcal{L}_{r}^{S,\Xi})\!-\!\sigma(U^{a,\Xi}(r),\!U^{a,\Xi}(r\!-\!\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{2} \mathrm{d}r\!\Big)^{\frac{1}{2}}\!\bigg]\nonumber\\
&\leq C\mathbb{E}\bigg[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q-1}\nonumber\\
&~~~~~~~~~\times\Big(\int_{0}^{t}\big|\sigma(S^{a,\Xi}(r),S^{a,\Xi}(r\!-\!\tau),\mathcal{L}_{r}^{S,\Xi})\!-\!\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r\!-\!\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{2}\mathrm{d}r\Big)^{\frac{1}{2}}\bigg]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]\nonumber\\
&~~~~+C\mathbb{E} \bigg[\!\Big(\int_{0}^{t}\big|\sigma(S^{a,\Xi}(r),S^{a,\Xi}(r-\tau),
\mathcal{L}_{r}^{S,\Xi})\!-\!\sigma(U^{a,\Xi}(r),U^{a,\Xi}(r-\tau),\mathcal{L}_{r}^{U,\Xi})\big|^{2} \mathrm{d}r\Big)^{\frac{q}{2}}\!\bigg]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]\nonumber\\
&~~~~+\!C\mathbb{E}\bigg[\!\int_{0}^{t}\!\Big(\big|\!S^{a,\Xi}(r)\!-\!U^{a,\Xi}(r)\!\big|^{2}\!+\!\big|\!S^{a,\Xi}(r\!-\!\tau)\!-\!U^{a,\Xi}(r\!-\!\tau)\big|^{2} \!+\!\mathbb{W}^{2}_{2}(\mathcal{L}_{r}^{S,\Xi},\!\mathcal{L}_{r}^{U,\Xi})\!\Big)^{\frac{q}{2}}\!\mathrm{d}r\!\bigg] \nonumber\\
&\leq\frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]
\!+\!C\mathbb{E}\bigg[\int^{t}_{0}\Big(\big|S^{a,\Xi}(r)\!-\!\overline{U}^{a,\Xi}(r)\big|^{q}\!+\!\big|S^{a,\Xi}(r-\tau)\!-\!\overline{U}^{a,\Xi}(r-\tau)\big|^{q}\nonumber\\
&~~~~+\big|\overline{U}^{a,\Xi}(r)-U^{a,\Xi}(r)\big|^{q}+\big|\overline{U}^{a,\Xi}(r-\tau)-U^{a,\Xi}(r-\tau)\big|^{q} +\mathbb{W}^{q}_{q}(\mathcal{L}_{r}^{S,\Xi},\mathcal{L}_{r}^{U,\Xi})\Big)\mathrm{d}r\bigg]\nonumber\\
&\leq\frac{1}{4}\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]\!+\!C \int^{t}_{0}\sup_{1\leq a\leq \Xi}\mathbb{E}\Big[\sup_{0\leq v\leq r}\big|S^{a,\Xi}(v)\!-\!\overline{U}^{a,\Xi}(v)\big|^{q}\Big]\mathrm{d}r+C\triangle^{\frac{q}{2}} . \end{align} Inserting \eqref{eq4.23}-\eqref{eq4.27} into \eqref{eq4.22} yields
\begin{align}\label{eq4.28} &\mathbb{E}\big[\sup_{0\leq r\leq t}|\Gamma^{a,\Xi}(r)|^{q}\big]\leq C \int^{t}_{0}\sup_{1\leq a\leq \Xi}\mathbb{E}\Big[\sup_{0\leq v\leq r}|S^{a,\Xi}(v)-\overline{U}^{a,\Xi}(v)|^{q}\Big]\mathrm{d}r+C\triangle^{\alpha q} . \end{align} Hence, thanks to~$(\ref{eq4.21})$~and the Gronwall inequality, we get the desired assertion $$
\sup_{1\leq a\leq \Xi}\mathbb{E}\Big[\sup_{0\leq t\leq T}\big|S^{a,\Xi}(t)-\overline{U}^{a,\Xi}(t)\big|^{q}\Big]\leq C\triangle^{\alpha q}. $$ \end{proof}
Finally, the convergence rate between the numerical solution~$U^{a,\Xi}(t)$~and the exact solution of~$(\ref{eq3.33})$ follows immediately from Lemmas~$\ref{le4.3}$ and~$\ref{le4.4}$. \begin{theorem}\label{th4.5} Under Assumptions \ref{a1} and \ref{a2} with~$p\geq4(c+1)$, for any~$q\in[2,\frac{p}{2(c+1)}]$, $$
\sup_{1\leq a\leq \Xi}\sup_{0\leq t\leq T}\mathbb{E}\big[|S^{a,\Xi}(t)-U^{a,\Xi}(t)|^{q}\big]\leq C\triangle^{\alpha q},~T\geq0. $$ \end{theorem}
Choosing $\alpha= {1}/{2}$ in the above theorem gives the optimal convergence rate $1/2$. The overall rate of convergence then follows directly from Lemma~\ref{le3.6}. \begin{theorem}\label{th4.6} Under Assumptions \ref{a1} and \ref{a2} with~$p\geq4(c+1)$, for any~$T\geq0$,~$\Xi\geq2$, \begin{align*}
\displaystyle\sup_{0\leq t\leq T}\mathbb{E}\Big[\big|S^{a}(t)-U^{a,\Xi}(t)\big|^{2}\Big]\leq C\left\{ \begin{array}{lll} \Xi^{-1/2}+\triangle^{2\alpha},~~~&1\leq d<4,\\ \Xi^{-1/2}\log(\Xi)+\triangle^{2\alpha},~~~&d=4,\\ \Xi^{-d/2}+\triangle^{2\alpha},~~~&4<d. \end{array} \right. \end{align*}
\end{theorem}
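The bound in Theorem~\ref{th4.6} is obtained by splitting the error between the McKean--Vlasov solution and the numerical particle approximation through the elementary inequality
\begin{align*}
\mathbb{E}\big[|S^{a}(t)-U^{a,\Xi}(t)|^{2}\big]
\leq 2\,\mathbb{E}\big[|S^{a}(t)-S^{a,\Xi}(t)|^{2}\big]
+2\,\mathbb{E}\big[|S^{a,\Xi}(t)-U^{a,\Xi}(t)|^{2}\big],
\end{align*}
where the first term is the propagation-of-chaos error controlled by Lemma~\ref{le3.6}, and the second is the discretization error controlled by Theorem~\ref{th4.5} with $q=2$.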
\section{Numerical Example} Consider the scalar MV-NSDDE (refer to \cite[pp.609, Remark 2.3]{TC2018}) \begin{align}\label{eq5.1} \mathrm{d}\big(S(t)+\beta S(t-\tau)\big)&=\Big(S(t)-S^{3}(t)+\beta S(t-\tau)-\beta^{3}S^{3}(t-\tau)+\mathbb{E}[S(t)]\Big)\mathrm{d}t\nonumber\\ &~~~~+\big(S(t)+\beta S(t-\tau)\big)\mathrm{d}B(t),~~~~~~~~~~~~~t\geq0. \end{align} The initial data is~$S(t)=\xi(t)=t,$ $t\in[-\tau,0]$. Let~$\tau= {1}/{32}$,~$T=1$~and~$\beta= {1}/{2}$. \begin{figure}\centering\caption{Root mean square approximation error between the exact solution and the tamed EM numerical solution against the stepsize~$\triangle$.}\label{figure1}
\end{figure} By computation, one observes that~$(\ref{eq3.2})$~and Assumptions~$\ref{a1}$,~$\ref{a2}$~hold for all $p\geq2$~with~$c=2$. Thus, by virtue of Theorem \ref{th3.5}, $(\ref{eq5.1})$~has a unique solution with this initial data.
Furthermore, let $\alpha= {1}/{2}$, and fix $\triangle\in(0,1/32)$ such that $n_{0}= {1}/{(32\triangle)}$ is an integer. By Theorem~$\ref{th4.6}$, the approximation solution of the tamed scheme~$(\ref{eq4.2})$ converges in the mean-square sense to the exact solution of $(\ref{eq5.1})$, with an error estimate in terms of the stepsize~$\triangle$. For the numerical experiments, we take the numerical solution with the small stepsize $\triangle=2^{-16}$~as the exact solution $S(\cdot)$ of $(\ref{eq5.1})$.
Let $\Xi=1000$. Figure~$\ref{figure1}$~shows the root mean square approximation error~$\big(\mathbb{E}[|S(T)-U(T)|^{2}]\big)^{\frac{1}{2}}$~between the exact solution~$S(T)$ of $(\ref{eq5.1})$ and the numerical solution~$U(T)$~of the tamed EM scheme~$(\ref{eq4.2})$, as a function of the stepsize~$\triangle\in\{2^{-15},2^{-14},2^{-13},2^{-12},2^{-11}\}$. We observe that the numerical solution~$U(T)$~approximates the exact solution of~$(\ref{eq5.1})$~well, with a convergence rate of almost order~$1/2$. \qed
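The experiment above can be sketched as follows. This is a minimal illustration (not the authors' code): the function name and parameter defaults are ours, $D(u)=-\beta u$, $b$ and $\sigma$ are transcribed from (5.1), the measure argument is replaced by the empirical mean over the $\Xi$ particles, and the drift is tamed as in the scheme (4.2).

```python
import numpy as np

# Illustrative sketch of the tamed EM scheme for the particle system of (5.1).
# D(u) = -beta*u (neutral term), b(x, y, mu) = x - x^3 + beta*y - beta^3*y^3 + mean(mu),
# sigma(x, y) = x + beta*y; the law is approximated by the empirical mean.
def tamed_em(Xi=200, T=1.0, tau=1/32, dt=2**-10, beta=0.5, alpha=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n0 = int(round(tau / dt))            # delay measured in steps
    N = int(round(T / dt))               # number of time steps on [0, T]
    U = np.empty((N + n0 + 1, Xi))       # row k holds all particles at t_{k - n0}
    U[: n0 + 1] = (np.arange(-n0, 1) * dt)[:, None]   # initial data xi(t) = t
    for n in range(n0, n0 + N):
        x, y = U[n], U[n - n0]           # current and delayed states
        b = x - x**3 + beta * y - beta**3 * y**3 + x.mean()
        b_tamed = b / (1.0 + dt**alpha * np.abs(b))    # taming of the drift
        sig = x + beta * y
        dB = rng.normal(0.0, np.sqrt(dt), Xi)
        # neutral update: U_{n+1} - D(U_{n+1-n0}) = U_n - D(U_{n-n0}) + b_tamed*dt + sig*dB
        U[n + 1] = x + beta * y - beta * U[n + 1 - n0] + b_tamed * dt + sig * dB
    return U[n0:]                        # path of all particles on [0, T]

path = tamed_em()
print(path.shape)                        # (N + 1, Xi)
```

Note that the taming factor keeps the scheme fully explicit while preventing the cubic drift from causing moment blow-up, which is the point of using $b_\triangle$ instead of $b$.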
\section*{ Acknowledgements} The research of the first author was supported in part by the National Natural Science Foundation of China (11971096), the Natural Science Foundation of Jilin Province (YDZJ202101ZYTS154), the Education Department of Jilin Province (JJKH20211272KJ), the Fundamental Research Funds for the Central Universities.
\end{document} |
\begin{document}
\title{Sasakian Manifold and $*$-Ricci Tensor}
\author{Venkatesha $\cdot$ Aruna Kumara H}
\address{Department of Mathematics, Kuvempu University,\\
Shankaraghatta - 577 451, Shimoga, Karnataka, INDIA.\\}
\email{[email protected], [email protected]}
\begin{abstract} The purpose of this paper is to study the $*$-Ricci tensor on Sasakian manifolds. Here, $\varphi$-conformally flat and conformally flat $*$-$\eta$-Einstein Sasakian manifolds are studied. Next, we consider the $*$-Ricci symmetric condition on Sasakian manifolds. Finally, we study a special type of metric called a $*$-Ricci soliton on Sasakian manifolds. \end{abstract} \subjclass[2010]{53D10 $\cdot$ 53C25 $\cdot$ 53C15 $\cdot$ 53B21.} \keywords{Sasakian metric $\cdot$ $*$-Ricci tensor $\cdot$ Conformal curvature tensor $\cdot$ $\eta$-Einstein manifold.}
\maketitle
\section{Introduction} The notion of contact geometry has evolved from the mathematical formalism of classical mechanics\cite{Gei}. Two important classes of contact manifolds are $K$-contact manifolds and Sasakian manifolds\cite{DEB}. Sasakian geometry is an odd-dimensional analogue of Kaehler geometry. Sasakian manifolds were first studied by the famous geometer Sasaki\cite{Sasa} in 1960, and since then they have been extensively studied from several points of view in \cite{Chaki,De,Ikw,Olz,Tanno} and the references therein. \par On the other hand, the notion of the $*$-Ricci tensor was first introduced by Tachibana \cite{TS} on almost Hermitian manifolds and further studied by Hamada and Inoguchi \cite{HT} on real hypersurfaces of non-flat complex space forms. \par Motivated by these studies, the present paper is organized as follows: In Section 2, we recall some basic formulas and results concerning Sasakian manifolds and the $*$-Ricci tensor which we will use in later sections. A $\varphi$-conformally flat Sasakian manifold is studied in Section 3, where we obtain some interesting results. Section 4 is devoted to the study of conformally flat $*$-$\eta$-Einstein Sasakian manifolds. In Section 5, we consider $*$-Ricci symmetric Sasakian manifolds and show that a $*$-Ricci symmetric Sasakian manifold is $*$-Ricci flat and, moreover, is an $\eta$-Einstein manifold. In the last section, we study a special type of metric called a $*$-Ricci soliton; here we prove an important result on Sasakian manifolds admitting a $*$-Ricci soliton.
\section{Preliminaries} In this section, we collect some general definitions and basic formulas on contact metric manifolds and Sasakian manifolds which we will use in later sections. We refer to \cite{Ar,Bo,Kus} and the references therein for more details and information about Sasakian geometry. \par A $(2n+1)$-dimensional smooth connected manifold $M$ is called an almost contact manifold if it admits a triple $(\varphi, \xi, \eta)$, where $\varphi$ is a tensor field of type $(1,1)$, $\xi$ is a global vector field and $\eta$ is a 1-form, such that \begin{align} \label{2.1} \varphi^2X=-X+\eta(X)\xi, \qquad \eta(\xi)=1, \qquad \varphi\xi=0, \qquad \eta\circ\varphi=0, \end{align} for all $X\in TM$. If an almost contact manifold $M$ admits a structure $(\varphi, \xi, \eta, g)$, $g$ being a Riemannian metric such that \begin{align} \label{2.2} g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\eta(Y), \end{align} for all $X,Y\in TM$, then $M$ is called an almost contact metric manifold. An almost contact metric manifold $M(\varphi, \xi, \eta, g)$ with $d\eta(X,Y)=\Phi(X,Y)$, $\Phi$ being the fundamental 2-form of $M(\varphi, \xi,\eta,g)$ defined by $\Phi(X,Y)=g(X,\varphi Y)$, is a contact metric manifold, and $g$ is the associated metric. If, in addition, $\xi$ is a Killing vector field (equivalently, $h=\frac{1}{2}L_\xi \varphi=0$, where $L$ denotes Lie differentiation), then the manifold is called a $K$-contact manifold. It is well known \cite{DEB} that if the contact metric structure $(\varphi,\xi,\eta,g)$ is normal, that is, $[\varphi,\varphi]+2d\eta\otimes\xi=0$ holds, then $(\varphi,\xi,\eta,g)$ is Sasakian. An almost contact metric manifold is Sasakian if and only if \begin{align} \label{2.3} (\nabla_X \varphi)Y=g(X,Y)\xi-\eta(Y)X, \end{align} for any vector fields $X, Y$ on $M$, where $\nabla$ is the Levi-Civita connection of $g$. A Sasakian manifold is always a $K$-contact manifold. The converse also holds when the dimension is three, but it may fail in higher dimensions\cite{JB}.
On a Sasakian manifold, the following relations are well known: \begin{align} \label{a2.4}\nabla_X \xi&=-\varphi X,\\ \label{2.4} R(X,Y)\xi&=\eta(Y)X-\eta(X)Y,\\ \label{2.5} R(\xi, X)Y&=g(X,Y)\xi-\eta(Y)X,\\ \label{2.6} Ric(X,\xi)&=2n\eta(X)\qquad (or\,\, Q\xi=2n\xi), \end{align} for all $X,Y\in TM$, where $R, Ric$ and $Q$ denote the curvature tensor, Ricci tensor and Ricci operator, respectively. \par On the other hand, let $M(\varphi,\xi,\eta,g)$ be an almost contact metric manifold with Ricci tensor $Ric$. The $*$-Ricci tensor and $*$-scalar curvature of $M$ are defined, respectively, by \begin{align} \label{2.7}Ric^*(X,Y)=\sum_{i=1}^{2n+1}R(X,e_i,\varphi e_i, \varphi Y),\qquad r^*=\sum_{i=1}^{2n+1}Ric^*(e_i,e_i), \end{align} for all $X,Y\in TM$, where $\{e_1,\ldots,e_{2n+1}\}$ is an orthonormal basis of the tangent space $TM$. By using the first Bianchi identity and \eqref{2.7} we get \begin{align} Ric^*(X,Y)=\frac{1}{2}\sum_{i=1}^{2n+1}g(\varphi R(X,\varphi Y)e_i,e_i). \end{align} An almost contact metric manifold is said to be $*$-Einstein if $Ric^*$ is a constant multiple of the metric $g$. One can see that $Ric^*(X,\xi)=0$ for all $X \in TM$. It should be remarked that $Ric^*$ is not symmetric in general. Thus the $*$-Einstein condition automatically requires a symmetric property of the $*$-Ricci tensor\cite{HT}.
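The vanishing of $Ric^*(X,\xi)$ noted above follows immediately from the definition \eqref{2.7} together with $\varphi\xi=0$ in \eqref{2.1}:
\begin{align*}
Ric^*(X,\xi)=\sum_{i=1}^{2n+1}R(X,e_i,\varphi e_i, \varphi \xi)=0.
\end{align*}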
We now determine the $*$-Ricci tensor on a Sasakian manifold. \begin{Lemma}
In a (2n+1)-dimensional Sasakian manifold $M$, the $*$-Ricci tensor is given by
\begin{align}
\label{2.9} Ric^*(X,Y)=Ric(X,Y)-(2n-1)g(X,Y)-\eta(X)\eta(Y).
\end{align} \end{Lemma} \begin{proof}
In a (2n+1)-dimensional Sasakian manifold $M$, the Ricci tensor $Ric$ satisfies the relation (see page 284, Lemma 5.3 in \cite{YKK}):
\begin{align}
\label{2.10} Ric(X,Y)=\frac{1}{2}\sum_{i=1}^{2n+1}g(\varphi R(X,\varphi Y)e_i,e_i)+(2n-1)g(X,Y)+\eta(X)\eta(Y).
\end{align}
Using the definition of $Ric^*$ in \eqref{2.10}, we obtain \eqref{2.9}. \end{proof} \begin{defi}\label{d2.1} \cite{JTC} An almost contact metric manifold $M$ is said to be weakly $\varphi$-Einstein if
\begin{align*}
Ric^\varphi(X,Y)=\beta g^\varphi(X,Y),\quad X,Y\in TM,
\end{align*}
for some function $\beta$. Here $Ric^\varphi$ denotes the symmetric part of $Ric^*$, that is,
\begin{align*}
Ric^\varphi(X,Y)=\frac{1}{2}\{Ric^*(X,Y)+Ric^*(Y,X)\}, \quad X,Y\in TM,
\end{align*}
we call $Ric^\varphi$ the $\varphi$-Ricci tensor on $M$, and the symmetric tensor $g^\varphi$ is defined by $g^\varphi(X,Y)=g(\varphi X, \varphi Y)$. When $\beta$ is constant, $M$ is said to be $\varphi$-Einstein. \end{defi} \begin{defi}
If the Ricci tensor of a Sasakian manifold $M$ is of the form
\begin{align*}
Ric(X,Y)=\alpha g(X,Y)+\gamma \eta(X)\eta(Y),
\end{align*}
for any vector fields $X, Y$ on $M$, where $\alpha$ and $\gamma$ are constants, then $M$ is called an $\eta$-Einstein manifold. \end{defi} Let $M(\varphi,\xi,\eta,g)$ be a Sasakian $\eta$-Einstein manifold with constants $(\alpha, \gamma)$. Consider a $D$-homothetic Sasakian structure $S = (\varphi',\xi',\eta',g')=(\varphi,a^{-1}\xi,a\eta,ag + a(a-1)\eta\otimes\eta)$. Then $(M,S)$ is also $\eta$-Einstein with constants $\alpha'=\frac{\alpha+2-2a}{a}$ and $\gamma'=2n-\alpha'$ (see Proposition 18 in \cite{Cha}). We remark that the particular value $\alpha=-2$ remains fixed under a $D$-homothetic deformation\cite{AG}. Thus, we state the following definition. \begin{defi}
A Sasakian $\eta$-Einstein manifold with $\alpha=-2$ is said to be $D$-homothetically fixed. \end{defi}
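That the value $\alpha=-2$ is preserved can be checked directly from the transformation rule recalled above: if $\alpha=-2$, then
\begin{align*}
\alpha'=\frac{\alpha+2-2a}{a}=\frac{-2+2-2a}{a}=-2.
\end{align*}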
\section{$\varphi$-conformally Flat Sasakian Manifold} The Weyl conformal curvature tensor\cite{YKK} is defined as a map $C:TM\times TM\times TM \longrightarrow TM$ such that \begin{align} \nonumber C(X,Y)Z=&R(X,Y)Z-\frac{1}{2n-1}\{Ric(Y,Z)X-Ric(X,Z)Y+g(Y,Z)QX\\ \label{3.1} &-g(X,Z)QY\}+\frac{r}{2n(2n-1)}\{g(Y,Z)X-g(X,Z)Y\}, \quad X,Y\in TM. \end{align} In \cite{CAR}, Cabrerizo et al. proved some necessary conditions for a $K$-contact manifold to be $\varphi$-conformally flat. In the following theorem we find a condition for a Sasakian manifold to be $\varphi$-conformally flat. \begin{Th}
If a (2n+1)-dimensional Sasakian manifold $M$ is $\varphi$-conformally flat, then $M$ is $*$-$\eta$-Einstein manifold. Moreover, $M$ is weakly $\varphi$-Einstein. \end{Th} \begin{proof}
It is well known that (see in \cite{CAR}), if a $K$-contact manifold is $\varphi$-conformally flat then we get the following relation:
\begin{align}
\label{3.2} R(\varphi X, \varphi Y, \varphi Z, \varphi W)=\frac{r-4n}{2n(2n-1)}\{g(\varphi Y,\varphi Z)g(\varphi X, \varphi W)-g(\varphi X,\varphi Z)g(\varphi Y, \varphi W)\}.
\end{align}
In a Sasakian manifold, in view of \eqref{2.4} and \eqref{2.5} we can verify that
\begin{align}
\nonumber R(\varphi^2 X, \varphi^2Y,\varphi^2Z,\varphi^2W)=&R(X,Y,Z,W)-g(Y,Z)\eta(X)\eta(W)+g(X,Z)\eta(Y)\eta(W)\\
\label{3.3} &+g(Y,W)\eta(X)\eta(Z)-g(X,W)\eta(Y)\eta(Z),
\end{align}
for all $X,Y,Z,W \in TM$. Replacing $X, Y,Z,W$ by $\varphi X, \varphi Y, \varphi Z, \varphi W$ respectively in \eqref{3.2} and making use of \eqref{2.2} and \eqref{3.3} we get
\begin{align}
\nonumber R(X,Y,Z,W)&=\frac{r-4n}{2n(2n-1)}\{g(Y,Z)g(X,W)-g(X,Z)g(Y,W)\}\\
\nonumber&-\frac{r-2n(2n+1)}{2n(2n-1)}\{g(Y,Z)\eta(X)\eta(W)-g(X,Z)\eta(Y)\eta(W)\\
&+g(X,W)\eta(Y)\eta(Z)-g(Y,W)\eta(X)\eta(Z)\}.
\end{align}
By the definition of $Ric^*$, direct computation yields
\begin{align}
\label{3.5} Ric^*(X,Y)=\sum_{i=1}^{2n+1}R(X,e_i,\varphi e_i, \varphi Y)=\beta g(X,Y)-\beta\eta(X)\eta(Y),
\end{align}
where $\beta=\frac{r-4n}{2n(2n-1)}$, showing that $M$ is $*$-$\eta$-Einstein. Next, in view of \eqref{2.2} we have
\begin{align}
\label{3.6} Ric^*(X,Y)=\frac{r-4n}{2n(2n-1)} g^\varphi(X,Y),
\end{align}
for all $X,Y \in TM$. Hence $Ric^*=Ric^\varphi$, and so $M$ is weakly $\varphi$-Einstein. This completes the proof. \end{proof} Suppose the scalar curvature of the manifold is constant. Then, in view of \eqref{3.6}, we have the following. \begin{Cor}
If a $\varphi$-conformally flat Sasakian manifold has constant scalar curvature, then it is $\varphi$-Einstein. \end{Cor}
In a Sasakian manifold, the $*$-Ricci tensor is given by \eqref{2.9}, and so in view of \eqref{3.5} we state the following. \begin{Cor}
A $\varphi$-conformally flat Sasakian manifold is $\eta$-Einstein. \end{Cor} The notion of $\eta$-parallel Ricci tensor was introduced in the context of Sasakian manifolds by Kon \cite{KON} and is defined by $(\nabla_Z Ric)(\varphi X,\varphi Y)=0$, for all $X,Y,Z \in TM$. Analogously, we define an $\eta$-parallel $*$-Ricci tensor by $(\nabla_Z Ric^*)(\varphi X,\varphi Y)=0$.
Replacing $X$ by $\varphi X$ and $Y$ by $\varphi Y$ in \eqref{3.5}, we obtain $Ric^*(\varphi X,\varphi Y)=\beta g(\varphi X,\varphi Y)$. Now, taking the covariant derivative with respect to $W$, we get $(\nabla_W Ric^*)(\varphi X,\varphi Y)=\frac{dr(W)}{2n(2n-1)} g(\varphi X, \varphi Y)$. Therefore, we have the following. \begin{Cor}
A $(2n+1)$-dimensional $\varphi$-conformally flat Sasakian manifold has $\eta$-parallel $*$-Ricci tensor if and only if the scalar curvature of the
manifold is constant. \end{Cor}
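Before leaving this section, we sketch the trace computation behind \eqref{3.5}; this is only a routine expansion of the computation already given, using a local orthonormal frame $\{e_i\}$, the skew-symmetry of $\varphi$, and $\eta\circ\varphi=0$:

```latex
% Every term of the eta-part of (3.4) contains a factor eta(phi e_i) or
% eta(phi Y) and so vanishes; only the g-part survives:
\begin{align*}
Ric^*(X,Y) &= \sum_{i=1}^{2n+1} R(X,e_i,\varphi e_i,\varphi Y)\\
&= \frac{r-4n}{2n(2n-1)}\sum_{i=1}^{2n+1}
   \{g(e_i,\varphi e_i)g(X,\varphi Y)-g(X,\varphi e_i)g(e_i,\varphi Y)\}\\
&= \frac{r-4n}{2n(2n-1)}\, g(\varphi X,\varphi Y),
\end{align*}
% since \sum_i g(e_i,\varphi e_i) = tr(phi) = 0 and
% \sum_i g(X,\varphi e_i)\,e_i = -\varphi X.  With
% g(phi X, phi Y) = g(X,Y) - eta(X)eta(Y) this is exactly (3.5).
```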
\section{Conformally Flat $*$-$\eta$-Einstein Sasakian Manifold} Suppose $M$ is a conformally flat Sasakian manifold. Then from \eqref{3.1} we have \begin{align} \nonumber R(X,Y)Z=&\frac{1}{2n-1}\{Ric(Y,Z)X-Ric(X,Z)Y+g(Y,Z)QX\\ \label{4.1} &-g(X,Z)QY\}-\frac{r}{2n(2n-1)}\{g(Y,Z)X-g(X,Z)Y\}. \end{align} If we set $Y=Z=\xi$ with $X\perp\xi$, we find $QX=\frac{r-2n}{2n}X$. From this equation, \eqref{4.1} becomes \begin{align} \label{4.2} R(X,Y)Z=\frac{r-4n}{2n(2n-1)}\{g(Y,Z)X-g(X,Z)Y\}. \end{align} By the definition of the $*$-Ricci tensor, direct computation yields \begin{align} \label{4.3} Ric^*(X,Y)=\frac{r-4n}{2n(2n-1)}g(\varphi X, \varphi Y). \end{align} Since $M$ is a conformally flat Sasakian manifold, we have the following equations from the definition of $Ric^*$ and equation \eqref{4.2}: \begin{align*} Ric^*(\varphi Y, \varphi X)&=\sum_{i=1}^{2n+1}R(\varphi Y, e_i, \varphi e_i, \varphi^2X)\\ &=\sum_{i=1}^{2n+1}\{-R(X,\varphi e_i,e_i,\varphi Y)+\eta(X)R(\varphi Y,e_i,\varphi e_i,\xi)\}\\ &=Ric^*(X,Y). \end{align*} If we set $Y=\varphi X$ with $X$ unit, we obtain $Ric^*(\varphi^2X,\varphi X)=Ric^*(X,\varphi X)$, which implies that $Ric^*(X,\varphi X)=0$. Thus, from the definition of $*$-$\eta$-Einstein and \eqref{4.3}, we obtain $a g(X,Y)+b \eta(X)\eta(Y)=\frac{r-4n}{2n(2n-1)}g(\varphi X,\varphi Y)$. If we choose $X=Y=\xi$, we find $a+b=0$. If we set $Y=X\perp \xi$ with $X$ unit, we get \begin{align} \label{4.4} a=\frac{r-4n}{2n(2n-1)}=K(X,\varphi X). \end{align} In \cite{OKU}, the author proved that every conformally flat Sasakian manifold has constant curvature $+1$, that is, $R(X,Y)Z=g(Y,Z)X-g(X,Z)Y$. From this result and \eqref{4.2}, we find $r=2n(2n-1)+4n$. In view of \eqref{4.4}, we obtain $a=1$. Therefore we have the following: \begin{Th}\label{t4.2}
Let $M$ be a $(2n+1)$-dimensional conformally flat Sasakian manifold. If $M$ is $*$-$\eta$-Einstein, then it is of constant curvature $+1$. \end{Th} We know that every Riemannian manifold of constant sectional curvature is locally symmetric. From Theorem \ref{t4.2}, we have \begin{Cor}
A conformally flat $*$-$\eta$-Einstein Sasakian manifold is locally symmetric. \end{Cor} \begin{Th}
A $(2n+1)$-dimensional conformally flat Sasakian manifold is $\varphi$-Einstein. \end{Th} \begin{proof}
The theorem follows from \eqref{4.3} and Definition \ref{d2.1}. \end{proof}
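As a quick consistency check, the scalar curvature forced by constant curvature $+1$ can be computed directly from the formulas of this section; this only expands the computation preceding the theorem:

```latex
% Comparing (4.2) with R(X,Y)Z = g(Y,Z)X - g(X,Z)Y forces the coefficient
% in (4.2) to equal 1:
\[
  \frac{r-4n}{2n(2n-1)} = 1
  \quad\Longrightarrow\quad
  r = 2n(2n-1)+4n = 2n(2n+1),
\]
% which, substituted into (4.4), gives a = 1 and hence b = -a = -1.
```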
\section{$*$-Ricci Semi-symmetric Sasakian Manifold} A contact metric manifold is called Ricci semi-symmetric if $R(X,Y)\cdot Ric=0$, for all $X,Y \in TM$. Analogous to this definition, we define $*$-Ricci semi-symmetric by $R(X,Y)\cdot Ric^*=0$. \begin{Th}
If a $(2n+1)$-dimensional Sasakian manifold $M$ is $*$-Ricci semi-symmetric, then $M$ is $*$-Ricci flat. Moreover, it is an $\eta$-Einstein manifold and
the Ricci tensor can be expressed as
\begin{align*}
Ric(X,Y)=(2n-1)g(X,Y)+\eta(X)\eta(Y).
\end{align*} \end{Th} \begin{proof}
Let us consider a $(2n+1)$-dimensional Sasakian manifold which satisfies the condition $R(X,Y)\cdot Ric^*=0$. Then we have
\begin{align}
\label{5.1} Ric^*(R(X,Y)Z,W)+Ric^*(Z,R(X,Y)W)=0.
\end{align}
Putting $X=Z=\xi$ in \eqref{5.1}, we have
\begin{align}
\label{5.2} Ric^*(R(\xi, Y)\xi,W)+Ric^*(\xi,R(\xi,Y)W)=0.
\end{align}
It is well known that $Ric^*(X,\xi)=0$. Making use of \eqref{2.4} in \eqref{5.2} and by virtue of the last equation, we find
\begin{align}
\label{5.3} Ric^*(Y,W)=0, \qquad Y,W\in TM,
\end{align}
showing that $M$ is $*$-Ricci flat. Moreover, in view of \eqref{5.3} and \eqref{2.9}, we have the required result. \end{proof}
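The key step above is the substitution of the Sasakian curvature identity into \eqref{5.2}. Assuming \eqref{2.4} is the standard Sasakian relation $R(X,Y)\xi=\eta(Y)X-\eta(X)Y$ (so that $R(\xi,Y)\xi=\eta(Y)\xi-Y$), and using $Ric^*(\xi,\cdot)=0$ as in the proof, the computation can be spelled out as follows:

```latex
% Substituting R(xi,Y)xi = eta(Y)xi - Y into (5.2):
\begin{align*}
0 &= Ric^*(R(\xi,Y)\xi,W)+Ric^*(\xi,R(\xi,Y)W)\\
  &= Ric^*(\eta(Y)\xi-Y,\,W)+Ric^*(\xi,R(\xi,Y)W)\\
  &= \eta(Y)\,Ric^*(\xi,W)-Ric^*(Y,W)+0
   \;=\; -\,Ric^*(Y,W),
\end{align*}
% where both Ric^*(xi, .) terms vanish since Ric^*(X,xi) = Ric^*(xi,X) = 0.
```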
\section{Sasakian Manifold Admitting $*$-Ricci Soliton}
\par Ricci flows are intrinsic geometric flows on a Riemannian manifold, whose fixed points are solitons; they were introduced by Hamilton \cite{HRS}. Ricci solitons correspond to self-similar solutions of Hamilton's Ricci flow. They are natural generalizations of Einstein metrics and are defined by \begin{align} \label{R1}(L_Vg)(X,Y)+2Ric(X,Y)+2\lambda g(X,Y)=0, \end{align} for some constant $\lambda$ and a potential vector field $V$. The Ricci soliton is said to be shrinking, steady, or expanding according as $\lambda$ is negative, zero, or positive, respectively. \par The notion of $*$-Ricci soliton was introduced by George and Konstantina \cite{Geo} in 2014, where they essentially modified the definition of Ricci soliton by replacing the Ricci tensor in \eqref{R1} with the $*$-Ricci tensor. Recently, the authors of \cite{Prak} studied $*$-Ricci solitons on para-Sasakian manifolds and obtained several interesting results. A Riemannian metric $g$ on $M$ is called a $*$-Ricci soliton if there is a vector field $V$ such that \begin{align} \label{R2} (L_Vg)(X,Y)+2Ric^*(X,Y)+2\lambda g(X,Y)=0, \end{align} for all vector fields $X,Y$ on $M$. In this section we study this special type of metric, the $*$-Ricci soliton, on Sasakian manifolds. Now we prove the following result: \begin{Th}
If the metric $g$ of a $(2n+1)$-dimensional Sasakian manifold $M(\varphi,\xi,\eta,g)$
is a $*$-Ricci soliton with potential vector field $V$, then (i) $V$ is Jacobi along the geodesics of $\xi$, and (ii) $M$ is an $\eta$-Einstein manifold whose Ricci tensor can be expressed as
\begin{align}
\label{6.1} Ric(X,Y)=\left[2n-1-\frac{\lambda}{2}\right]g(X,Y)+\left[1+\frac{\lambda}{2}\right]\eta(X)\eta(Y).
\end{align}
\end{Th} \begin{proof}
Recall the following commutation formula (see \cite{YK}, page 23):
\begin{align}
(L_V\nabla_X g-\nabla_X L_V g-\nabla_{[V,X]}g)(Y,Z)=-g((L_V\nabla)(X,Y),Z)-g((L_V \nabla)(X,Z),Y),
\end{align}
which holds for all vector fields $X,Y,Z$ on $M$. Since $g$ is parallel with respect to the Levi-Civita connection $\nabla$, the above relation becomes
\begin{align}
\label{6.2}(\nabla_XL_V g)(Y,Z)=g((L_V \nabla)(X,Y),Z)+g((L_V \nabla)(X,Z),Y).
\end{align}
Since $L_V\nabla$ is a symmetric tensor of type $(1,2)$, i.e., $(L_V \nabla)(X,Y)=(L_V\nabla)(Y,X)$, it follows from \eqref{6.2} that
\begin{align}
\label{6.3}g((L_V\nabla)(X,Y),Z)=\frac{1}{2}\{(\nabla_XL_V g)(Y,Z)+(\nabla_YL_V g)(Z,X)-(\nabla_ZL_Vg)(X,Y)\}.
\end{align}
Next, taking the covariant derivative of the $*$-Ricci soliton equation \eqref{R2} along a vector field $X$, we obtain $(\nabla_X L_V g)(Y,Z)=-2(\nabla_X Ric^*)(Y,Z)$. Substituting this relation into \eqref{6.3}, we have
\begin{align}
\label{6.4} g((L_V\nabla)(X,Y),Z)=(\nabla_Z Ric^*)(X,Y)-(\nabla_X Ric^*)(Y,Z)-(\nabla_Y Ric^*)(X,Z).
\end{align}
Again, taking covariant differentiation of \eqref{2.9} with respect to $Z$, we get
\begin{align}
\label{6.5} (\nabla_Z Ric^*)(X,Y)=(\nabla_Z Ric)(X,Y)-\{g(Z,\varphi X)\eta(Y)+g(Z,\varphi Y)\eta(X)\}.
\end{align}
Combining \eqref{6.5} with \eqref{6.4}, we find
\begin{align}
\nonumber g((L_V\nabla)(X,Y),Z)=&(\nabla_Z Ric)(X,Y)-(\nabla_X Ric)(Y,Z)-(\nabla_Y Ric)(X,Z)\\
\label{6.6} &+2g(X,\varphi Z)\eta(Y)+2g(Y,\varphi Z)\eta(X).
\end{align}
In a Sasakian manifold we have the following relations \cite{AG}:
\begin{align}
\label{6.7} \nabla_\xi Q=Q\varphi-\varphi Q=0, \qquad (\nabla_X Q)\xi=Q\varphi X-2n\varphi X.
\end{align}
Replacing $Y$ by $\xi$ in \eqref{6.6} and then using \eqref{6.7} we obtain
\begin{align}
\label{6.8} (L_V \nabla)(X,\xi)=-2Q\varphi X+2(2n-1)\varphi X.
\end{align}
From the above equation, we have
\begin{align}
\label{6.9} (L_V\nabla)(\xi,\xi)=0.
\end{align}
Now, substituting $X=Y=\xi$ in the well-known formula \cite{YK}:
\begin{align*}
(L_V \nabla)(X,Y)=\nabla_X\nabla_Y V-\nabla_{\nabla_X Y}V+R(V,X)Y,
\end{align*}
and then making use of equation \eqref{6.9} we obtain
\begin{align*}
\nabla_\xi\nabla_\xi V+R(V,\xi)\xi=0,
\end{align*}
which proves part (i).
Further, differentiating \eqref{6.8} covariantly along an arbitrary vector field $Y$ on $M$ and then using \eqref{2.3} and the last equation of \eqref{2.6}, we obtain
\begin{align}
\nonumber &(\nabla_Y L_V \nabla)(X,\xi)-(L_V\nabla)(X,\varphi Y)\\
\label{6.10} &=2\{-(\nabla_Y Q)\varphi X-g(X,Y)\xi+\eta(X)QY-(2n-1)\eta(X)Y\}.
\end{align}
According to Yano \cite{YK}, we have the following commutation formula:
\begin{align}
\label{c1} (L_V R)(X,Y)Z=(\nabla_X L_V \nabla)(Y,Z)-(\nabla_YL_V\nabla)(X,Z).
\end{align}
Substituting $\xi$ for $Z$ in the foregoing equation and in view of \eqref{6.10}, we obtain
\begin{align}
\nonumber &(L_V R)(X,Y)\xi-(L_V\nabla)(Y,\varphi X)+(L_V\nabla)(X,\varphi Y)\\
\label{6.11} &=2\{(\nabla_Y Q)\varphi X-(\nabla_X Q)\varphi Y+\eta(Y)QX-\eta(X)QY+(2n-1)(\eta(X)Y-\eta(Y)X)\}.
\end{align}
Replacing $Y$ by $\xi$ in \eqref{6.11} and then using the last equation of \eqref{2.6}, together with \eqref{6.7} and \eqref{6.8}, we have
\begin{align}
\label{6.12} (L_V R)(X,\xi)\xi=4\{QX-\eta(X)\xi-(2n-1)X\}.
\end{align}
Since $Ric^*(X,\xi)=0$, the $*$-Ricci soliton equation \eqref{R2} gives $(L_V g)(X,\xi)+2\lambda\eta(X)=0$, which yields
\begin{align}
\label{6.13} (L_V\eta)(X)-g(X,L_V\xi)+2\lambda\eta(X)=0.
\end{align}
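The identity $\eta(L_V\xi)=\lambda$ used in the next step admits a one-line verification from \eqref{R2} together with $Ric^*(\xi,\xi)=0$:

```latex
% Lie-differentiating g(xi,xi) = 1 along V and using the soliton
% equation (R2) with X = Y = xi:
\begin{align*}
0 = L_V\big(g(\xi,\xi)\big)
  &= (L_Vg)(\xi,\xi)+2g(L_V\xi,\xi)\\
  &= -2Ric^*(\xi,\xi)-2\lambda g(\xi,\xi)+2\eta(L_V\xi)
   = -2\lambda+2\eta(L_V\xi),
\end{align*}
% whence eta(L_V xi) = lambda.
```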
Lie-differentiating $g(\xi,\xi)=1$ along $V$ gives $\eta(L_V \xi)=\lambda$. Next, Lie-differentiating the formula $R(X,\xi)\xi=X-\eta(X)\xi$ along $V$ and using the last equation, we obtain
\begin{align}
\label{6.14} (L_V R)(X,\xi)\xi-g(X,L_V\xi)\xi+2\lambda X=-((L_V\eta)X)\xi.
\end{align}
Combining \eqref{6.14} with \eqref{6.12}, and making use of \eqref{6.13}, we obtain part (ii). This completes the proof. \end{proof} By virtue of \eqref{2.9} and \eqref{6.1}, the $*$-Ricci soliton equation \eqref{R2} takes the form \begin{align} \label{617}(L_V g)(X,Y)=-\lambda\{g(X,Y)+\eta(X)\eta(Y)\}. \end{align} Now, differentiating this equation covariantly along an arbitrary vector field $Z$ on $M$, we have \begin{align} \label{6.17} (\nabla_Z L_V g)(X,Y)=-\lambda\{g(Z,\varphi X)\eta(Y)+g(Z,\varphi Y)\eta(X)\}. \end{align} Substituting this equation into the commutation formula \eqref{6.3}, we find \begin{align} \label{6.18} (L_V\nabla)(X,Y)=\lambda\{\eta(Y)\varphi X+\eta(X)\varphi Y\}. \end{align} Taking the covariant derivative of \eqref{6.18} along a vector field $Z$ and then using \eqref{2.3}, we obtain \begin{align} \nonumber (\nabla_Z L_V\nabla)(X,Y)=&\lambda\{g(Z,\varphi Y)\varphi X+g(Z,\varphi X)\varphi Y+g(X,Z)\eta(Y)\xi\\ \label{6.19} &+g(Y,Z)\eta(X)\xi-2\eta(X)\eta(Y)Z\}. \end{align} Making use of \eqref{6.19} in the commutation formula \eqref{c1}, we have \begin{align} \nonumber (L_V R)(X,Y)Z=&\lambda\{g(X,\varphi Z)\varphi Y+2g(X,\varphi Y)\varphi Z-g(Y,\varphi Z)\varphi X+g(X,Z)\eta(Y)\xi\\ \label{6.21} &-g(Y,Z)\eta(X)\xi+2\eta(X)\eta(Z)Y-2\eta(Y)\eta(Z)X\}. \end{align} Contracting \eqref{6.21} over $X$, we have \begin{align} \label{6.22} (L_V Ric)(Y,Z)=2\lambda\{g(Y,Z)-(2n+1)\eta(Y)\eta(Z)\}. \end{align} On the other hand, taking the Lie derivative of \eqref{6.1} along the vector field $V$ and using \eqref{617}, we obtain \begin{align} \nonumber (L_V Ric)(Y,Z)=&\left[1+\frac{\lambda}{2}\right]\{(L_V\eta)(Y)\eta(Z)+ \eta(Y)(L_V\eta)(Z)\}\\ \label{6.23} &-\lambda\left[2n-1-\frac{\lambda}{2}\right]\{g(Y,Z)+\eta(Y)\eta(Z)\}.
\end{align} Comparison of \eqref{6.22} and \eqref{6.23} gives \begin{align} \nonumber &\left[1+\frac{\lambda}{2}\right]\{(L_V\eta)(Y)\eta(Z)+ \eta(Y)(L_V\eta)(Z)\} -\lambda\left[2n-1-\frac{\lambda}{2}\right]\{g(Y,Z)+\eta(Y)\eta(Z)\}\\ \label{6.24} &=2\lambda\{g(Y,Z)-(2n+1)\eta(Y)\eta(Z)\}. \end{align} Setting $Y=\xi$ in the foregoing equation, we get \begin{align} \label{6.25} \left[1+\frac{\lambda}{2}\right](L_V\eta)(Y)=-\left[\lambda+\frac{\lambda^2}{2}\right]\eta(Y). \end{align} Substituting \eqref{6.25} into \eqref{6.24} and then replacing $Z$ by $\varphi Z$, we obtain \begin{align} \lambda\left[2n+1-\frac{\lambda}{2}\right]g(Y,\varphi Z)=0. \end{align} Since $g(Y,\varphi Z)$ does not vanish identically on $M$, we have either $\lambda=0$ or $\lambda=2(2n+1)$.\\ \textbf{Case I:} If $\lambda=0$, then from \eqref{617} we see that $L_V g=0$, i.e., $V$ is Killing. From \eqref{6.1}, we have \begin{align*} Ric(X,Y)=(2n-1)g(X,Y)+\eta(X)\eta(Y). \end{align*} This shows that $M$ is an $\eta$-Einstein manifold with scalar curvature $r=2n(\alpha+1)=4n^2$.\\ \textbf{Case II:} If $\lambda=2(2n+1)$, then replacing $Y$ by $\varphi Y$ in \eqref{6.25} we obtain $\left[1+\frac{\lambda}{2}\right](L_V\eta)(\varphi Y)=0$. Since $\lambda=2(2n+1)\neq-2$, we must have $(L_V\eta)(\varphi Y)=0$. Replacing $Y$ by $\varphi Y$ in the foregoing equation and then using \eqref{2.1}, we have \begin{align} \label{6.27} (L_V\eta)(Y)=-2(2n+1)\eta(Y). \end{align} This shows that $V$ is a non-strict infinitesimal contact transformation. Now, substituting $Y=\xi$ in \eqref{617} and using \eqref{6.27}, we immediately get $L_V\xi=2(2n+1)\xi$. Using this in the commutation formula (see \cite{YK}, page 23) \begin{align*} L_V\nabla_X\xi-\nabla_XL_V\xi-\nabla_{[V,X]}\xi=(L_V\nabla)(X,\xi), \end{align*} for an arbitrary vector field $X$ on $M$, in view of \eqref{a2.4} and \eqref{6.18}, gives $L_V\varphi=0$.
Thus, the vector field $V$ leaves the structure tensor $\varphi$ invariant. \par On the other hand, using $\lambda=2(2n+1)$ in \eqref{6.1}, it follows that \begin{align*} Ric(X,Y)=-2g(X,Y)+2(n+1)\eta(X)\eta(Y), \end{align*} showing that $M$ is $\eta$-Einstein with $\alpha=-2$. Thus $M$ is $D$-homothetically fixed. In \cite{Cha}, the authors give detailed information on $\eta$-Einstein Sasakian geometry; they show that when $M$ is null (transverse Calabi-Yau), then $(\alpha,\gamma)=(-2, 2n+2)$ (see page 189). We conclude that $M$ is a $D$-homothetically fixed null $\eta$-Einstein manifold. Therefore, we have the following. \begin{Th}
Let $M$ be a $(2n+1)$-dimensional Sasakian manifold. If $M$ admits a $*$-Ricci soliton, then either $V$ is Killing or $M$ is a $D$-homothetically fixed null $\eta$-Einstein manifold. In the first case, $M$ is an $\eta$-Einstein manifold of constant scalar curvature $r=2n(\alpha+1)=4n^2$, and in the second case, $V$ is a non-strict infinitesimal contact transformation and leaves the structure tensor $\varphi$ invariant. \end{Th} Now we prove the following result, which gives some remarks on $*$-Ricci solitons. \begin{Th}
Let $M$ be a $(2n+1)$-dimensional Sasakian manifold admitting a $*$-Ricci soliton with $Q^*\varphi=\varphi Q^*$. Then the soliton vector field $V$ leaves the structure tensor $\varphi$ invariant if and only if $g(\varphi(\nabla_V \varphi)X,Y)=(dv)(X,Y)-(dv)(\varphi X,\varphi Y)-(dv)(X,\xi)\eta(Y)$. \end{Th} \begin{proof}
The $*$-Ricci soliton equation can be written as
\begin{align}
\label{6.28} g(\nabla_XV,Y)+g(\nabla_YV,X)+2Ric^*(X,Y)+2\lambda g(X,Y)=0.
\end{align}
Suppose $v$ is the 1-form metrically equivalent to $V$, given by $v(X) = g(X, V)$ for any vector field $X$. Then the exterior derivative $dv$ of $v$ is given by
\begin{align}
\label{6.29} 2(dv)(X,Y)=g(\nabla_X V,Y)-g(\nabla_YV,X).
\end{align}
Since $dv$ is skew-symmetric, if we define a tensor field $F$ of type $(1,1)$ by
\begin{align}
(dv)(X,Y)=g(X,FY),
\end{align}
then $F$ is skew self-adjoint, i.e., $g(X, FY)=-g(FX, Y)$. Equation \eqref{6.29} takes the form $2g(X,FY)=g(\nabla_X V,Y)-g(\nabla_YV,X)$. Adding this to equation \eqref{6.28} side by side and factoring out $Y$ gives
\begin{align}
\label{6.31}\nabla_XV=-Q^*X-\lambda X-FX,
\end{align}
where $Q^*$ is $*$-Ricci operator. Applying $\varphi$ on \eqref{6.31}, we have
\begin{align}
\label{6.32} \varphi\nabla_XV=-\varphi Q^*X-\lambda\varphi X-\varphi FX.
\end{align}
Next, replacing $X$ by $\varphi X$ in \eqref{6.31}, we obtain
\begin{align}
\label{6.33} \nabla_{\varphi X}V=-Q^*\varphi X-\lambda\varphi X-F\varphi X.
\end{align}
Subtracting \eqref{6.33} from \eqref{6.32}, we have
\begin{align}
\varphi\nabla_X V-\nabla_{\varphi X}V=(Q^*\varphi-\varphi Q^*)X+(F\varphi-\varphi F)X.
\end{align}
By our hypothesis that $\varphi$ commutes with the $*$-Ricci operator $Q^*$, we have
\begin{align}
\label{6.35}\varphi\nabla_X V-\nabla_{\varphi X}V=(F\varphi-\varphi F)X.
\end{align}
Now, we note that
\begin{align*}
(L_V\varphi)X&=L_V\varphi X-\varphi L_VX\\
&=\nabla_V\varphi X-\nabla_{\varphi X}V-\varphi\nabla_VX+\varphi\nabla_XV\\
&=(\nabla_V\varphi)X-\nabla_{\varphi X}V+\varphi\nabla_XV.
\end{align*}
The use of the foregoing equation in \eqref{6.35} gives
\begin{align}
\label{6.36}(L_V\varphi)X-(\nabla_V\varphi)X=(F\varphi-\varphi F)X.
\end{align}
Applying $\varphi$ to both sides of equation \eqref{6.35} and then making use of \eqref{2.1}, \eqref{6.31} and \eqref{6.33}, we find
\begin{align*}
(dv)(\varphi X,\varphi Y)-(dv)(X,Y)+(dv)(X,\xi)\eta(Y)=g(\varphi(F\varphi-\varphi F)X,Y).
\end{align*}
Using \eqref{6.36} in the above equation provides
\begin{align*}
(dv)(\varphi X,\varphi Y)-(dv)(X,Y)+(dv)(X,\xi)\eta(Y)=g(\varphi(L_V\varphi)X-\varphi(\nabla_V\varphi)X, Y).
\end{align*}
This shows that $L_V\varphi=0$ if and only if $g(\varphi(\nabla_V \varphi)X,Y)=(dv)(X,Y)-(dv)(\varphi X,\varphi Y)-(dv)(X,\xi)\eta(Y)$, completing the proof. \end{proof}
\end{document}
\begin{document}
\openup 0.6em
\fontsize{13}{5} \selectfont
\begin{center}\LARGE Continuum Without Non-Block Points
\end{center}
\begin{align*}
\text{\Large Daron Anderson } \qquad \text{\Large Trinity College Dublin. Ireland }
\end{align*}
\begin{align*} \text{\Large [email protected]} \qquad \text{\Large Preprint February 2016}
\end{align*}$ $\\
\begin{center}
\textbf{ \large Abstract}
\end{center}
\noindent
For any composant $E \subset \ensuremath{\mathbb H}^*$ and corresponding near-coherence class $\ensuremath{\mathscr E} \subset \ensuremath{\omega}^*$ we prove the following are equivalent:
(1) $E$ properly contains a dense semicontinuum.
(2) Each countable subset of $E$ is contained in a dense proper semicontinuum of $E$.
(3) Each countable subset of $E$ is disjoint from some dense proper semicontinuum of $E$.
(4) $\ensuremath{\mathscr E}$ has a minimal element in the finite-to-one monotone order of ultrafilters.
(5) $\ensuremath{\mathscr E}$ has a $Q$-point.
A consequence is that NCF is equivalent to $\ensuremath{\mathbb H}^*$ containing no proper dense semicontinuum and no non-block points.
This gives an axiom-contingent answer to a question of the author.
Thus every known continuum has either a proper dense semicontinuum at every point or at no points.
We examine the structure of indecomposable continua for which this fails, and deduce they contain a maximum semicontinuum with dense interior.
\section{Introduction} \noindent
Non-block points are known to always exist in metric continua \cite{B,Leonel01}.
Moreover it follows from Theorem 5 of \cite{Bing01} that every point of a metric continuum is included in a dense proper semicontinuum.
We call a point with this property a coastal point. A coastal continuum is one whose every point is coastal.
The author's investigation of whether non-metric continua are coastal began in \cite{me1}.
The problem was reduced to looking at indecomposable continua.
Specifically it was shown that every non-coastal continuum $X$ admits a proper subcontinuum $K$ such
that the quotient space $X/K$ obtained by treating $K$ as a single point is indecomposable and fails to be coastal
(as a corollary this proves separable continua are coastal).
Since every indecomposable continuum with more than one composant is automatically coastal,
the heart of the problem rests in those indecomposable (necessarily non-metric) continua with exactly one composant.
We henceforth call these \textit{Bellamy continua}, after David Bellamy who constructed the first example in ZFC~\cite{one}.
There are very few examples known. The best-studied candidate is the Stone-\v Cech remainder $\ensuremath{\mathbb H}^*$ of the half-line.
The composant number of $\ensuremath{\mathbb H}^*$ is axiom sensitive, but under the axiom Near Coherence of Filters (NCF) the composant number is exactly one~\cite{NCF2}.
In the first section of this paper, we show under NCF that $\ensuremath{\mathbb H}^*$ has neither coastal nor non-block points.
Thus there consistently exists a non-coastal continuum.
It remains unresolved whether such a continuum can be exhibited without auxiliary axioms.
The only other Bellamy continua of which the author is aware arise from an inverse limit process~\cite{one,smith2,smith1}.
The process in fact yields a continuum with exactly two composants, which are then combined by identifying a point of each.
The nature of this construction ensures that what used to be a composant is still a dense proper semicontinuum,
and so these examples are easily shown to be coastal.
Thus every known Bellamy continuum is either coastal at every point or at none.
One might wonder whether these are the only options.
This question is addressed in the paper's final section, where we show what pathology a partially coastal Bellamy continuum must display.
\section{Notation and Terminology}
\noindent
By a \textit{continuum} we mean a compact connected Hausdorff space.
We do not presume metrisability.
The interior and closure of a subspace $B$ are denoted $B^\circ$ and $\overline {B}$ respectively.
The continuum $X$ is said to be \textit{irreducible} between two points $a,b \in X$ if no proper subcontinuum of $X$ contains the subset $\{a,b\}$.
The topological space $T$ is called \textit{continuumwise connected} if for every two points $a,b \in T$ there exists a continuum $K \subset T$ with $\{a,b\} \subset K$. We also call a continuumwise connected space a \textit{semicontinuum}. Every Hausdorff space is partitioned into maximal continuumwise connected subspaces. These are called the \textit{continuum components}. When $X$ is a continuum and $S \subset X$ a subset, we call $S$ \textit{thick} to mean it is proper and has nonvoid interior.
The point \mbox{$p \in X$} of a continuum is called a \textit{weak cut point} to mean the subspace $X-p$ is not continuumwise connected.
If $a,b \in X$ are in different continuum components of $X-p$ we say that $p$ is \textit{between} $a$ and $b$ and write $[a,p,b]$.
When $X$ is a continuum the \textit{composant} $\ensuremath{\kappa}(p)$ of the point $p \in X$ is the union of all proper subcontinua that include $p$.
Another formulation is that $\ensuremath{\kappa}(p)$ is the set of points $q \in X$ for which $X$ is not irreducible between $p$ and $q$.
For any points $x,p \in X$ we write $\ensuremath{\kappa}(x;p)$ for the continuum component of $x$ in $X-p$.
The point $x \in X$ is called \textit{coastal} to mean that $\ensuremath{\kappa}(x;p)$ is dense for some $p \in X$.
We call $p \in X$ a \textit{non-block point} if $\ensuremath{\kappa}(x;p)$ is dense for some $x \in X$.
From the definition, a continuum has a coastal point if and only if it has a non-block point,
if and only if it contains a dense proper semicontinuum.
Throughout $\ensuremath{\omega}^*$ is the space of nonprincipal ultrafilters on the set $\ensuremath{\omega} = \{0,1,2, \ldots\}$
with topology generated by the sets $\widetilde{D} = \{\ensuremath{\mathcal D} \in \ensuremath{\omega}^* \colon D \in \ensuremath{\mathcal D}\}$ for all subsets $D\subset \ensuremath{\omega}$.
Likewise $\ensuremath{\mathbb H}^*$ is the space of nonprincipal closed ultrafilters on $\ensuremath{\mathbb H} = \{x \in \ensuremath{\mathbb R} \colon x \ge 0\}$
with topology generated by the sets $\widetilde{U} = \{\ensuremath{\mathcal A} \in \ensuremath{\mathbb H}^* \colon A \subset U$ for some $A \in \ensuremath{\mathcal A}\} $
for all open subsets $U \subset \ensuremath{\mathbb H}$.
For background on such spaces the reader is directed to \cite{CS1} and \cite{CSbook}.
$\ensuremath{\mathbb H}^*$ is known to be an \textit{hereditarily unicoherent} continuum.
That is to say any pair of its subcontinua have connected intersection.
Moreover $\ensuremath{\mathbb H}^*$ is \textit{indecomposable}, meaning we cannot write it as the union of two proper subcontinua.
This is equivalent to every proper subcontinuum having void interior.
The composants of an indecomposable continuum are pairwise disjoint.
For any two subsets $A,B \subset \ensuremath{\mathbb H}$ we write $A < B$ to mean $a<b$ for each $a \in A$ and $b \in B$.
By a \textit{simple sequence} we mean a sequence $I_n = [a_n,b_n]$ of closed intervals of $\ensuremath{\mathbb H}$
such that $I_1 < I_2 < I_3 < \ldots$ and the sequence $a_n$ tends to infinity.
Suppose $\ensuremath{\mathbb I} = \{I_1,I_2, \ldots \}$ is a simple sequence.
For each subset $N \subset \ensuremath{\omega}$ define $I_N = \bigcup \{I_n \colon n \in N\}$.
For each $\ensuremath{\mathcal D} \in \ensuremath{\omega}^*$ the set $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} = \bigcap \big \{ \overline {I_D} \colon D \in \ensuremath{\mathcal D} \big \}$ is a subcontinuum of $\ensuremath{\mathbb H}^*$.
These are called \textit{standard subcontinua}.
In case each sequence element is the singleton $\{a_n\}$ the corresponding standard subcontinuum is also a singleton,
called a \textit{standard point}, and we denote it by $a_\ensuremath{\mathcal D}$.
Throughout $\ensuremath{\mathbb I} = \{I_1,I_2, \ldots \}$ and $\ensuremath{\mathbb J} = \{J_1,J_2, \ldots \}$ are simple sequences. Each $I_n = [a_n,b_n]$ and $J_n = [c_n,d_n]$. For any choice of $x_n \in I_n$ the point $x_\ensuremath{\mathcal D}$ is called a regular point of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$. Observe that while every regular point is standard, being regular is a relative notion. It makes no sense to say `$x$ is a regular point', only `$x$ is a regular point of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$'.
Standard subcontinua have been studied under the guise of ultracoproducts of intervals~\cite{PaulSSC}. This perspective makes certain properties more transparent. For example every standard subcontinuum $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ is uniquely irreducible between the regular points $a_\ensuremath{\mathcal D}$ and $b_\ensuremath{\mathcal D}$. We call these the \textit{end points} of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ and denote them by $a$ and $b$ when there is no confusion. The set $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} - \{a,b\}$ is called the interior of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$.
There exists a natural preorder on $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$, where $x \sqsubseteq y$ means $y$ is between $x$ and $b$, or that every subcontinuum of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ that
includes $b$ and $x$ must also include $y$. As per convention we write $x \sqsubset y$ to mean $x \sqsubseteq y$ but $x \ne y$. The equivalence classes of this preorder are linearly ordered and called the \textit{layers} of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$.
Layers are indecomposable subcontinua. The layer of each regular point of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ is a singleton, and the set of these singletons is dense in $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$
in both the topological and order theoretic sense. For points $x,y \in \ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ we write $L^x$ and $L^y$ for their layers, and write such things as $[x,y)$
to mean $ \{z \in \ensuremath{\mathbb I}_\ensuremath{\mathcal D} \colon L^x \sqsubseteq L^z \sqsubset L^y\}$.
We define each $[x,z]$ to be the intersection of all subcontinua that include the points $x,z \in \ensuremath{\mathbb I}_\ensuremath{\mathcal D}$. By hereditary unicoherence each $[x,z]$ is a subcontinuum, called a section of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$. In case $x=x_\ensuremath{\mathcal D}$ and $y=y_\ensuremath{\mathcal D}$ are regular points, $[x,y]$ is just the standard subcontinuum $\ensuremath{\mathbb J}_\ensuremath{\mathcal D}$ where each $J_n= [x_n,y_n]$. By writing $[x,y)$ as the union of all sections $[x,z]$ for $x \sqsubseteq z \sqsubset y$ we see that $[x,y)$ is a semicontinuum.
For any function $g \colon \ensuremath{\omega} \to \ensuremath{\omega}$ and ultrafilter $\ensuremath{\mathcal D}$ on $\ensuremath{\omega}$ define the image
$g(\ensuremath{\mathcal D}) = \{E \subset \ensuremath{\omega} \colon g^{-1}(E) \in \ensuremath{\mathcal D}\}$.
It can be shown $g(\ensuremath{\mathcal D})$ is the ultrafilter generated by $\{g(D) \colon D \in \ensuremath{\mathcal D}\}$.
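A short check that $g(\ensuremath{\mathcal D})$ is indeed an ultrafilter (a routine verification, using only that preimages respect the Boolean operations) runs as follows:

```latex
% For E \subset \omega we have g^{-1}(\omega - E) = \omega - g^{-1}(E), so
\[
E \in g(\ensuremath{\mathcal D})
\iff g^{-1}(E) \in \ensuremath{\mathcal D}
\iff \ensuremath{\omega} - g^{-1}(E) \notin \ensuremath{\mathcal D}
\iff \ensuremath{\omega} - E \notin g(\ensuremath{\mathcal D}),
\]
% and closure under finite intersections follows from
% g^{-1}(E \cap F) = g^{-1}(E) \cap g^{-1}(F).
```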
Suppose $\ensuremath{\mathcal D}$ and $\ensuremath{\mathcal E}$ are ultrafilters and $f \colon \ensuremath{\omega} \to \ensuremath{\omega}$ is a finite-to-one function such that
$f(\ensuremath{\mathcal D}) = \ensuremath{\mathcal E}$. Then we write $\ensuremath{\mathcal E} \lesssim \ensuremath{\mathcal D}$.
If in addition we can choose $f$ to be monotone
we write $\ensuremath{\mathcal E} \le \ensuremath{\mathcal D}$. The equivalence classes of $\le$ are called \textit{shapes} of ultrafilters.
Lemma \ref{example} is the referee's and illustrates how the partition into shapes is strictly finer than the partition
into types.
Two free ultrafilters $\ensuremath{\mathcal D}$ and $\ensuremath{\mathcal E}$ are said to \textit{nearly cohere} if they have a common lower bound relative to $\le$. The principle Near Coherence of Filters (NCF) states that every two free ultrafilters nearly cohere.
Blass and Shelah showed this assertion is consistent relative to ZFC~\cite{NCF3}
and Mioduszewski showed that NCF is equivalent to $\ensuremath{\mathbb H}^*$ being a Bellamy continuum~\cite{MiodComposants}.
Indeed it follows from Section 4 of Blass'~\cite{NCF2} that the following correspondence is a bijection
between the composants of $\ensuremath{\mathbb H}^*$ and the near-coherence classes of $\ensuremath{\omega}^*$:
Given a composant $E \subset \ensuremath{\mathbb H}^*$ we can define the subset $\ensuremath{\mathscr E} = \{\ensuremath{\mathcal D} \in \ensuremath{\omega}^* \colon$ some $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$
is contained in $E\}$ of $\ensuremath{\omega}^*$. Likewise for each near-coherence class $\ensuremath{\mathscr E}\subset \ensuremath{\omega}^*$
we can define the subset $E = \bigcup \{\ensuremath{\mathbb I}_\ensuremath{\mathcal D} \colon \ensuremath{\mathbb I}$ is a simple sequence and $\ensuremath{\mathcal D} \in \ensuremath{\mathscr E}\}$ of $\ensuremath{\mathbb H}^*$.
\section{The Betweenness Structure of $\ensuremath{\mathbb H}^*$}
\noindent
This section establishes some tools concerning the subcontinua of $\ensuremath{\mathbb H}^*$ for later use. Our first tool concerns the representation of standard subcontinua. We would like to define the shape of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ to be the shape of $\ensuremath{\mathcal D}$. To prove that this makes sense we need the following result, which follows from Theorem 2.11 of \cite{CS1} and the proof given there for Theorem 5.3.
\begin{theorem} \label{hart} Suppose that $\ensuremath{\mathbb J}_\ensuremath{\mathcal E} \subset \ensuremath{\mathbb I}_\ensuremath{\mathcal D}$.
Then $\ensuremath{\mathcal D} \le \ensuremath{\mathcal E}$. Moreover if $\ensuremath{\mathcal D} < \ensuremath{\mathcal E}$ as well then $\ensuremath{\mathbb J}_\ensuremath{\mathcal E}$ is contained in a layer of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$.
\end{theorem}
\begin{lemma}
Each standard subcontinuum has a well-defined shape.
\end{lemma}
\begin{proof}
Suppose $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ and $\ensuremath{\mathbb J}_\ensuremath{\mathcal E}$ are two representations of the same standard subcontinuum, that is to say $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}=\ensuremath{\mathbb J}_\ensuremath{\mathcal E}$. Then we have $\ensuremath{\mathbb J}_\ensuremath{\mathcal E} \subset \ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ and Theorem \ref{hart} says $\ensuremath{\mathcal D} \le \ensuremath{\mathcal E}$. Applying the same theorem to the reverse inclusion $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} \subset \ensuremath{\mathbb J}_\ensuremath{\mathcal E}$ shows that $\ensuremath{\mathcal E} \le \ensuremath{\mathcal D}$ as well. This is precisely the definition of $\ensuremath{\mathcal D}$ and $\ensuremath{\mathcal E}$ having the same shape.
\end{proof}
Theorem \ref{hart} relates the $\le$ ordering to the interplay between different standard subcontinua.
It will be helpful to know something about $\le$-minimal elements and hence about shapes of standard subcontinua that are maximal with respect to inclusion; note that the direction of the ordering is reversed here. We mean the standard subcontinua $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ with $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} \subset \ensuremath{\mathbb I}_\ensuremath{\mathcal E}$ only for $\ensuremath{\mathcal D}$ and $\ensuremath{\mathcal E}$ with the same shape.
It turns out the $\le$-minimal elements are already well-studied.
These ultrafilters are called $Q$-points and are usually defined as minimal elements of the $\lesssim$ ordering.
Theorem 9.2 (b) of \cite{uff} can be used to prove the following two characterisations are equivalent.
\begin{definition} We call $\ensuremath{\mathcal D} \in \ensuremath{\omega}^*$ a $Q$-point to mean it satisfies either (and therefore both) of the properties below.
\begin{enumerate}
\item Every finite-to-one function $f \colon \ensuremath{\omega} \to \ensuremath{\omega}$ is constant or bijective when restricted to some element of $\ensuremath{\mathcal D}$.
\item $\ensuremath{\mathcal D}$ is $\lesssim$-minimal. That means $\ensuremath{\mathcal E} \lesssim \ensuremath{\mathcal D}$ implies $\ensuremath{\mathcal D} \lesssim \ensuremath{\mathcal E}$ for all $\ensuremath{\mathcal E} \in \ensuremath{\omega}^*$.
\end{enumerate}
\end{definition}
Condition (1) shows that when $\ensuremath{\mathcal D}$ is a $Q$-point and $f \colon \ensuremath{\omega} \to \ensuremath{\omega}$ is finite-to-one,
then $f(\ensuremath{\mathcal D})$ is either principal or the image of $\ensuremath{\mathcal D}$ under a permutation.
The next lemma proves our assertion that $Q$-points are the $\le$-minimal ultrafilters.
\begin{lemma} \label{leminimal}
The $Q$-points are precisely the $\le$-minimal elements.
\end{lemma}
\begin{proof}
First suppose $\ensuremath{\mathcal D}$ is a $Q$-point and that $\ensuremath{\mathcal E} \le \ensuremath{\mathcal D}$ for some $\ensuremath{\mathcal E} \in \ensuremath{\omega}^*$.
That means $\ensuremath{\mathcal E} = f(\ensuremath{\mathcal D})$ for some $f \colon \ensuremath{\omega} \to \ensuremath{\omega}$ monotone finite-to-one.
There exists an element $D \in \ensuremath{\mathcal D}$ over which $f$ is bijective.
The inverse $f^{-1} \colon f(D) \to D$ is bijective and monotone
and can be extended to a monotone finite-to-one function on $\ensuremath{\omega}$ that maps $\ensuremath{\mathcal E}$ to $\ensuremath{\mathcal D}$.
Therefore $\ensuremath{\mathcal D} \le \ensuremath{\mathcal E}$ as required.
Now let $\ensuremath{\mathcal D}$ be $\le$-minimal.
We will show it is $\lesssim$-minimal as well.
Suppose $\ensuremath{\mathcal E} \lesssim \ensuremath{\mathcal D}$ meaning $\ensuremath{\mathcal E} = f(\ensuremath{\mathcal D})$ where $f \colon \ensuremath{\omega} \to \ensuremath{\omega}$ is finite-to-one.
Lemma 2.3 (2) of \cite{RBOrder} shows how to construct finite-to-one monotone functions $g$ and $h$ with $g(\ensuremath{\mathcal D})=h(\ensuremath{\mathcal E})$.
By definition $\ensuremath{\mathcal E} \ge h(\ensuremath{\mathcal E})$ and $g(\ensuremath{\mathcal D}) \le \ensuremath{\mathcal D}$.
By $\le$-minimality the second inequality implies $g(\ensuremath{\mathcal D}) \ge \ensuremath{\mathcal D}$.
Then we have $\ensuremath{\mathcal E} \ge h(\ensuremath{\mathcal E}) = g(\ensuremath{\mathcal D}) \ge \ensuremath{\mathcal D}$ and therefore $\ensuremath{\mathcal E} \ge \ensuremath{\mathcal D}$ which implies $\ensuremath{\mathcal E} \gtrsim \ensuremath{\mathcal D}$ as required.
\end{proof}
We will use the following result again and again to slightly expand a proper subcontinuum of $\ensuremath{\mathbb H}^*$.
\begin{lemma} \label{bulge}
Each proper subcontinuum $K \subset \ensuremath{\mathbb H}^*$ is contained in $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} - \{a,b\}$ for some standard subcontinuum $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$. Moreover if $\ensuremath{\mathcal E}$ is a $Q$-point in the near-coherence class corresponding to the composant containing $K$ we may assume without loss of generality that $\ensuremath{\mathcal D}=\ensuremath{\mathcal E}$.
\end{lemma}
\begin{proof}
By Theorem 5.1 of \cite{CS1} we know $K$ is included in some standard subcontinuum $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$, where each $I_n = [a_n,b_n]$. For any positive constants $\ensuremath{\varepsilon}_0, \ensuremath{\varepsilon}_1, \ldots$ define the slightly larger intervals $I'_0 = [a_0, b_0 + \ensuremath{\varepsilon}_0]$ and $I'_n = [a_n- \ensuremath{\varepsilon}_n, b_n + \ensuremath{\varepsilon}_n]$ for each $n>0$. The constants $\ensuremath{\varepsilon}_0, \ensuremath{\varepsilon}_1, \ldots$ may be chosen such that $\ensuremath{\mathbb I}' = \{I_0', I_1' , \ldots\}$ is still a simple sequence. Then the end points of $\ensuremath{\mathbb I}'_\ensuremath{\mathcal D}$ are not elements of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ and therefore not elements of $K$, as required.
Suppose $\ensuremath{\mathcal E}$ shares a near-coherence class with $\ensuremath{\mathcal D}$. Then the proof of Theorem 4.1 of \cite{NCF2} shows $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ is contained in some standard subcontinuum $\ensuremath{\mathbb J}_{f(\ensuremath{\mathcal D})} = \ensuremath{\mathbb J}_{f(\ensuremath{\mathcal E})}$ for $f \colon \ensuremath{\omega} \to \ensuremath{\omega}$ monotone finite-to-one. But $\ensuremath{\mathcal E}$ being a $Q$-point implies $f(\ensuremath{\mathcal E})=\ensuremath{\mathcal E}$ and thus $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} \subset \ensuremath{\mathbb J}_\ensuremath{\mathcal E}$. Then we may rename $\ensuremath{\mathbb J}$ to $\ensuremath{\mathbb I}$ and expand each interval slightly as before.
\end{proof}
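The choice of the constants $\ensuremath{\varepsilon}_n$ in the proof above is elementary. The following sketch, with made-up interval data not taken from the paper, checks that taking each $\ensuremath{\varepsilon}_n$ to be a third of the smaller gap to the neighbouring intervals keeps the enlarged intervals pairwise disjoint, so that they again form a simple sequence.

```python
# Illustrative data only: endpoints a[n], b[n] stand in for a simple
# sequence of pairwise disjoint intervals I_n = [a_n, b_n], left to right.
a = [0, 3, 7, 12, 20]
b = [1, 5, 9, 15, 24]

# Take eps_n to be a third of the smaller gap to the neighbouring
# intervals (the first interval only grows to the right, as in the proof).
eps = []
for n in range(len(a)):
    left_gap = a[n] - b[n - 1] if n > 0 else float("inf")
    right_gap = a[n + 1] - b[n] if n + 1 < len(a) else float("inf")
    eps.append(min(left_gap, right_gap) / 3)

a_new = [a[0]] + [a[n] - eps[n] for n in range(1, len(a))]
b_new = [b[n] + eps[n] for n in range(len(b))]

# The enlarged intervals are still pairwise disjoint and each old
# interval sits inside the corresponding new one.
assert all(b_new[n] < a_new[n + 1] for n in range(len(a) - 1))
assert all(a_new[n] <= a[n] and b[n] <= b_new[n] for n in range(len(a)))
```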
We show how the ordering of layers of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ relates to the weak cut point structure of $\ensuremath{\mathbb H}^*$.
\begin{lemma} \label{2}
Suppose $p \in \ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ is not an end point. Then $[a,p)$ and $(p,b]$ are continuum components of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}-p$.
\end{lemma}
\begin{proof}
Since $[a,p)$ and $(p,b]$ are semicontinua each is contained in a continuum component of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}-p$.
Moreover if $a$ and $b$ shared a continuum component of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} -p$ we would have $\{a,b\} \subset R \subset \ensuremath{\mathbb I}_\ensuremath{\mathcal D}-p$ for some subcontinuum $R \subset \ensuremath{\mathbb H}^*$. But this contradicts how $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ is irreducible between its endpoints. Finally observe that, by how layers are defined, every subcontinuum of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ that joins $a$ to an element of $L^p$ must contain $L^p$ and thus $p$. Therefore the continuum component of $a$ is contained in $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} - \big(L^p \cup (p,b]\big) = [a,p)$ as required.
Likewise for $(p,b]$.
\end{proof}
Combining Lemmas \ref{bulge} and \ref{2} gives the following.
\begin{lemma} \label{wcp}
Every point of $\ensuremath{\mathbb H}^*$ is between two points of its composant. In particular suppose $p \in \ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ is not an end point. Then $p$ is between $a$ and $b$.
\end{lemma}
\begin{proof}
Let $p \in \ensuremath{\mathbb H}^*$ be arbitrary. By Lemma \ref{bulge} we have $p \in \ensuremath{\mathbb I}_\ensuremath{\mathcal D} - \{a,b\}$ for some standard subcontinuum $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$. By Lemma \ref{2} we know $p$ is between $a$ and $b$ in $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$. But since $\ensuremath{\mathbb H}^*$ is hereditarily unicoherent this implies that $p$ is between $a$ and $b$ in $\ensuremath{\mathbb H}^*$ as well. Finally observe that since $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} \subset \ensuremath{\mathbb H}^*$ is a proper subcontinuum the points $a$, $b$ and $p$ share a composant.
\end{proof}
Of course if $\ensuremath{\mathbb H}^*$ has more than one composant, every point is readily seen to be a weak cut point. But even then, it is not obvious that every point is between two points in its composant. Nor should we expect this to be true for all indecomposable continua. The result fails for the Knaster Buckethandle when removing its single end point $-$ what is left of its composant even remains arcwise connected.
\begin{figure}
\caption{The end point $p$ does not cut its composant.}
\end{figure}
We finish by giving the referee's example of two ultrafilters $\ensuremath{\mathcal D}$ and $\ensuremath{\mathcal E}$ such that $\ensuremath{\mathcal E} \lesssim \ensuremath{\mathcal D}$
but not $\ensuremath{\mathcal E} \le \ensuremath{\mathcal D}$. This demonstrates how the partition of $\ensuremath{\omega}^*$ into shapes is strictly finer than the partition
into types.
\begin{lemma}\label{example}
The partition of $\ensuremath{\omega}^*$ into shapes is strictly finer than the partition into types.
\end{lemma}
\begin{proof}
Partition $\ensuremath{\omega}$ into intervals $I_0 = \{0\}$ and $I_n = [2^{n-1}, 2^{n})$ for all $n > 0$. Define the filter $\ensuremath{\mathcal F}$ by
letting $F \in \ensuremath{\mathcal F}$ exactly if $\big \{|I_n-F| \colon n \in \ensuremath{\omega} \big \}$ is bounded. Choose $\ensuremath{\mathcal D}$ as any ultrafilter
extending $\ensuremath{\mathcal F}$. Observe each $D \in \ensuremath{\mathcal D}$ contains more than one element of some $I_k$;
otherwise $F = D^c$ is an element of $\ensuremath{\mathcal F}$ because each $|I_n-F|$ is bounded above by $2$,
and this contradicts how $\ensuremath{\mathcal D}$ is a filter containing $D$.
Let the permutation $\sigma$ reverse the order of elements in each $I_n$ and define
$\ensuremath{\mathcal E} = \ensuremath{\sigma}(\ensuremath{\mathcal D})$. Theorem 9.2 (a) of \cite{uff} says $\ensuremath{\mathcal D} \ne \ensuremath{\mathcal E}$ and Theorem 9.3 says $\ensuremath{\mathcal D}$ and $\ensuremath{\mathcal E}$
have the same type. It remains to show they have different shapes.
Suppose $f(\ensuremath{\mathcal D}) = \ensuremath{\mathcal E}$ for some function $f \colon \ensuremath{\omega} \to \ensuremath{\omega}$. By definition of $\ensuremath{\mathcal E}$ this implies $(\ensuremath{\sigma}^{-1}\circ f)(\ensuremath{\mathcal D}) = \ensuremath{\mathcal D}$.
Then Theorem 9.2 (b) of \cite{uff} says $\ensuremath{\sigma}^{-1}\circ f$ is the identity, and hence $f=\ensuremath{\sigma}$,
over some set $D \in \ensuremath{\mathcal D}$. Now let $a,b \in D \cap I_k$ be distinct for some $k \in \ensuremath{\omega}$.
It follows that $f$ reverses the order of $a$ and $b$. Therefore $f$ is not monotone
and we cannot have $\ensuremath{\mathcal E} \le \ensuremath{\mathcal D}$. Therefore the shapes are different.
\end{proof}
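The combinatorial behaviour of the permutation $\ensuremath{\sigma}$ can also be checked mechanically. The following sketch, whose function names are ours and not the paper's, verifies that reversing each block $I_n$ gives an involution of $\ensuremath{\omega}$, and that it reverses the order of any two points sharing a block, so no function agreeing with $\ensuremath{\sigma}$ on such a pair is monotone.

```python
# Sketch of the referee's permutation: omega is partitioned into
# I_0 = {0} and I_n = [2^(n-1), 2^n) for n > 0, and sigma reverses
# the order of the elements inside each block.

def block(m):
    # Return the index n with m in I_n.
    n = 0
    while 2 ** n <= m:
        n += 1
    return n

def sigma(m):
    # Reverse the block containing m.
    if m == 0:
        return 0
    n = block(m)
    lo, hi = 2 ** (n - 1), 2 ** n - 1
    return lo + hi - m

# sigma is an involution, hence a bijection of omega.
assert all(sigma(sigma(m)) == m for m in range(2048))

# sigma reverses any pair inside a common block: here 8 and 11 both
# lie in I_4 = [8, 16), and sigma swaps their order.
assert sigma(8) == 15 and sigma(11) == 12
```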
\section{The Presence of $Q$-Points}
\noindent The number of near-coherence classes of $\ensuremath{\omega}^*$ (and hence composants of $\ensuremath{\mathbb H}^*$) is axiom-sensitive. Likewise for the distribution of $Q$-points in $\ensuremath{\omega}^*$. It is true in ZFC that there always exists a class without $Q$-points \cite{RBOrder}. At the same time it follows from Theorem 9.23 of \cite{CSbook} that under CH there are also $2^{\frak c}$ classes with $Q$-points. Conversely NCF implies there exists a single class and no $Q$-points \cite{NCF1}.
Moreover it was recently shown \cite{FmNCC} that for each $n \in \ensuremath{\mathbb N}$ there may exist exactly $n$ classes with $Q$-points and one class without.
Under the assumption that $\ensuremath{\mathbb H}^*$ has more than one composant, each point is non-block and coastal for trivial reasons: every composant $E \subset \ensuremath{\mathbb H}^*$ is a proper dense semicontinuum that witnesses how each $x \in E$ is coastal and how each $x \notin E$ is non-block. We are interested in whether this is the only reason a point can be non-block or coastal. This leads to the following definition.
\begin{definition}
The subset $P \subset X$ is called a \textit{proper non-block set} to mean that $P$ is contained in some composant $E$ of $X$, and that some continuum component of $E-P$ is dense in $X$. The subset $P \subset X$ is called a \textit{proper coastal set} to mean that $P$ is contained in a dense semicontinuum that is not a composant of $X$. Supposing the singleton $\{p\}$ is a proper non-block (coastal) set we call $p$ a proper non-block (coastal) point.
\end{definition}
It turns out the existence of proper coastal and non-block sets in a composant depends on whether the corresponding near coherence class has a $Q$-point or not. We will examine the two possibilities separately. Henceforth $\ensuremath{\mathcal D}$ is assumed to be a $Q$-point whose near coherence class corresponds to the composant $A \subset \ensuremath{\mathbb H}^*$. We are grateful to the referee for correcting our earlier misconception about this case, and for providing the proof of the following lemma.
\begin{lemma} \label{1}For any $p \in \ensuremath{\mathbb I}_\ensuremath{\mathcal D} - \{a,b\}$ the semicontinua $\ensuremath{\kappa}(b;p)$ and $\ensuremath{\kappa}(a;p)$ are dense.\end{lemma}
\begin{proof}
We only consider $\ensuremath{\kappa}(b;p)$ because the other case is similar. There is a regular point $q$ of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ such that $p\sqsubset q \sqsubset b$. We showed in Lemma \ref{wcp} that $[p,q,b]$, which implies $\ensuremath{\kappa}(b;q) \subset \ensuremath{\kappa}(b;p)$. Therefore it suffices to show $\ensuremath{\kappa}(b;q)$ is dense.
Assuming $\ensuremath{\kappa}(b;q)$ is not dense it must be nowhere dense since $\ensuremath{\mathbb H}^*$ is indecomposable. Then $\overline{\ensuremath{\kappa}(b;q)}$ is a proper subcontinuum and thus by Lemma \ref{bulge} is contained in the interior of some standard subcontinuum $\ensuremath{\mathbb J}_\ensuremath{\mathcal D}$ where each $J_n = [c_n,d_n]$.
It follows that $q$ and $b$ are regular points of $\ensuremath{\mathbb J}_\ensuremath{\mathcal D}$ and moreover $q \sqsubset b \sqsubset d$ in $\ensuremath{\mathbb J}_\ensuremath{\mathcal D}$. But then the interval $[b,d]$ of $\ensuremath{\mathbb J}_\ensuremath{\mathcal D}$ witnesses how $d \in \ensuremath{\kappa}(b;q)$. But by construction $d \notin \ensuremath{\kappa}(b;q)$, a contradiction.
\end{proof}
We can use the semicontinua constructed in Lemma \ref{1} to show any countable subset of $A$ is both proper coastal and proper non-block.
\begin{theorem} \label{Q}
Every countable $P \subset A$ is a proper non-block set and a proper coastal set.
\end{theorem}
\begin{proof}
Let $P = \{p_1,p_2, \ldots\}$. Since all $p_i$ share a composant we can use Lemma \ref{bulge}
to form an increasing chain $K_1 \subset K_2 \subset \ldots$ of standard subcontinua of shape $\ensuremath{\mathcal D}$
such that each $\{p_1,p_2, \ldots, p_n\} \subset K_n$.
By the Baire Category theorem the union $\bigcup K_n$ is proper.
The complement of $\bigcup K_n$ is a nonempty $G_\delta$ set.
Section 1 of \cite{almostPLevy} proves the complement has nonvoid interior.
This implies $\bigcup K_n$ cannot be dense,
and so $\overline {\big ( \bigcup K_n \big )}$ is contained in the interior of some further standard subcontinuum $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$.
Observe that $\ensuremath{\mathbb I}_\ensuremath{\mathcal D} \subset A$.
There exists a regular point $q$ of $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ such that $a \sqsubset x \sqsubset q \sqsubset b$ for each $x \in \overline {\big ( \bigcup K_n \big )}$.
Moreover Lemma \ref{2} implies that $x \notin \ensuremath{\kappa}(b;q)$ and $b \notin \ensuremath{\kappa}(x;q)$. Lemma \ref{1} says the semicontinua $\ensuremath{\kappa}(b;q)$ and $\ensuremath{\kappa}(x;q)=\ensuremath{\kappa}(a;q)$ are both dense.
Moreover the subcontinuum $\overline {\big ( \bigcup K_n \big )}$ witnesses how all $\ensuremath{\kappa}(p_i;q)$ coincide with each other and with $\ensuremath{\kappa}(x;q)$. Therefore $\ensuremath{\kappa}(b;q)$ witnesses how $P$ is a proper non-block set and $\ensuremath{\kappa}(p_1;q)$ witnesses how $P$ is a proper coastal set.
\end{proof}
One can ask whether Theorem \ref{Q} can be strengthened by allowing the set $P$ to have some larger cardinality. In particular
we might look for the least cardinal $\eta(A)$ such that every element of $\{P \subset A \colon |P|<\eta(A)\}$
is a proper coastal set and a proper non-block set.
To see the number $\eta(A)$ is at most $2^{\aleph_0}$ consider the family $P$ of all standard points $a_\ensuremath{\mathcal D}$ for which every term $a_n$ is rational.
The family has cardinality $|\ensuremath{\mathbb Q}^ \ensuremath{\mathbb N}| = {\aleph_0}^{\aleph_0} = 2^{\aleph_0}$ and is easily seen to be dense in $A$.
Therefore $P$ cannot be a proper coastal set. Moreover under the Continuum Hypothesis $2^{\aleph_0}$ is the successor
of $\aleph_0$ and so it is consistent that $\eta(A) = 2^{\aleph_0}$.
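For completeness, the computation $|\ensuremath{\mathbb Q}^{\ensuremath{\mathbb N}}| = 2^{\aleph_0}$ above is the standard squeeze
\[ 2^{\aleph_0} \le {\aleph_0}^{\aleph_0} \le (2^{\aleph_0})^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0} . \]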
\begin{question}
Is it consistent that there exists a composant $A \subset \ensuremath{\mathbb H}^*$ where the corresponding near coherence class has $Q$-points and $\eta(A) < 2^{\aleph_0}$?
\end{question}
\begin{question}
Is it consistent that there exist composants $A,A' \subset \ensuremath{\mathbb H}^*$ where the corresponding near coherence classes have $Q$-points and $\eta(A) \ne \eta(A')$?
\end{question}
Next we will treat the case when the composant $B \subset \ensuremath{\mathbb H}^*$ corresponds to a near coherence class without a $Q$-point.
We remark it is consistent for the two composants $A$ and $B$ to exist simultaneously, for example in the model presented in \cite{FmNCC},
or indeed in any model where the Continuum Hypothesis holds.
The outcome for $B$ is the complete opposite to that for $A$ $-$ the composant $B$ has neither proper coastal points nor proper non-block points. Our main tool to prove this is the following lemma, which is alluded to in the literature $-$ for example in \cite{RBOrder} $-$ but for which we have been unable to find a complete proof.
\begin{lemma} \label{chain}
$B$ is the union of an increasing chain of proper indecomposable subcontinua.
\end{lemma}
\begin{proof}
By Zorn's lemma there exists a maximal increasing chain $\ensuremath{\mathcal P}$ of proper indecomposable subcontinua in $B$.
We claim the union $\bigcup \ensuremath{\mathcal P}$ is dense. For otherwise by Lemma \ref{bulge} we have
$\bigcup \ensuremath{\mathcal P} \subset \ensuremath{\mathbb I}_\ensuremath{\mathcal D} \subset B$ for some standard subcontinuum $ \ensuremath{\mathbb I}_\ensuremath{\mathcal D}$.
Then since $\ensuremath{\mathcal D}$ is not a $Q$-point it is not $\le$-minimal by Lemma \ref{leminimal}.
Therefore we have $\ensuremath{\mathcal E} < \ensuremath{\mathcal D}$ for some $\ensuremath{\mathcal E} \in \ensuremath{\omega}^*$.
It follows from Theorem \ref{hart} that $\ensuremath{\mathbb I}_\ensuremath{\mathcal D}$ is contained in a layer of $\ensuremath{\mathbb I}_\ensuremath{\mathcal E}$.
But that layer is a proper indecomposable subcontinuum and so can be added as the new top element of
$\ensuremath{\mathcal P}$, contradicting how the chain is maximal. We conclude $\bigcup \ensuremath{\mathcal P}$ is dense.
Compactness implies there is some $x \in \bigcap \ensuremath{\mathcal P}$. To prove $\bigcup \ensuremath{\mathcal P} = B$ we take \mbox{$b \in B$} to be arbitrary.
There exists a standard subcontinuum $L$ with $\{x,b\} \subset L$. We have already shown $\bigcup \ensuremath{\mathcal P}$ is dense.
That means some $P \in \ensuremath{\mathcal P}$ is not contained in $L$. By Theorem 5.9 of \cite{CS1} we have $L \subset P$.
This implies $b \in P \subset \bigcup \ensuremath{\mathcal P}$ as required.\end{proof}
\begin{theorem} \label{noQ}
$B$ has no proper coastal points and no proper non-block points.
\end{theorem}
\begin{proof}
Let $\ensuremath{\mathcal P}$ be an increasing chain of proper indecomposable subcontinua with union $B$.
Recall each $P \in \ensuremath{\mathcal P}$ is nowhere-dense. Now let $S \subset B$ be an arbitrary proper semicontinuum.
That means we can fix a point $y \in S$ and write $S = \bigcup_{x \in S} S(x)$
where each $S(x)$ is a subcontinuum containing $\{x,y\}$.
Choose any point $b \in (B-S)$. There exists $P \in \ensuremath{\mathcal P}$ such that $\{b,y\} \subset P$.
The point $b$ witnesses how $P \not \subset S(x)$ and so Theorem 5.9 of \cite{CS1}
implies that $S(x) \subset P$. But since $x \in S$ is arbitrary this implies $S \subset P$.
Therefore $S$ is nowhere-dense.
We conclude that $B$ contains no proper dense semicontinuum.
It follows $B$ has no proper coastal points and therefore no proper non-block points.
\end{proof}
Under NCF there are no $Q$-points. In this case Theorem \ref{noQ} tells us $\ensuremath{\mathbb H}^*$ has no proper non-block points. But NCF is also equivalent to $\ensuremath{\mathbb H}^*$ having exactly one composant, and that implies any non-block points that exist must be proper. So we have the stronger result.
\begin{theorem} \label{big}
(NCF) $\ensuremath{\mathbb H}^*$ lacks coastal points and non-block points.
\end{theorem}
Every separable continuum and {\it a fortiori} every metric continuum has two or more non-block points. The author has asked whether the separability assumption can be dropped. Theorem \ref{big} gives an axiom-contingent answer.
\begin{corollary} \label{con}There consistently exists a continuum without non-block points.\end{corollary}
Whether Corollary \ref{con} can be proved in ZFC alone is currently unresolved.
One possible line-of-attack to the problem is as follows:
Observe that every composant of an hereditarily indecomposable continuum is the union of the same sort of chain as described in Lemma \ref{chain}.
From here a similar proof to Theorem \ref{noQ} shows hereditarily indecomposable continua lack proper non-block points.
Thus an hereditarily indecomposable Bellamy continuum would be an example of a continuum without non-block points.
Smith has found some obstacles to constructing such a beast \cite{SmithHIRemainderProd,SmithHILexProd,SmithHISouslinArcsIL,SmithHISouslinProd}.
But we have so far no reason to believe one exists.
We can combine the main results of this section into two sets of equivalences. The first set looks at each composant separately.
\begin{corollary}
The following are equivalent for any composant $E \subset \ensuremath{\mathbb H}^*$ and corresponding near coherence class $\ensuremath{\mathscr E} \subset \ensuremath{\omega}^*$.
\begin{enumerate}
\item Some point of $E$ is proper non-block (coastal).
\item Every point of $E$ is proper non-block (coastal).
\item Some countable subset of $E$ is proper non-block (coastal).
\item Every countable subset of $E$ is proper non-block (coastal).
\item $\ensuremath{\mathcal E}$ has a $Q$-point.
\item $\ensuremath{\mathcal E}$ has a $\le$-minimal element.
\end{enumerate}
\end{corollary}
The second set looks at $\ensuremath{\mathbb H}^*$ as a whole.
\begin{corollary}
The following are equivalent.
\begin{enumerate}
\item NCF
\item $\ensuremath{\mathbb H}^*$ has exactly one composant
\item $\ensuremath{\mathbb H}^*$ lacks coastal points and non-block points.
\end{enumerate}
\end{corollary}
In particular we have that $-$ regardless of the model $-$ it can only be the case that either every point of $\ensuremath{\mathbb H}^*$ is non-block or none are. This observation motivates the next and final section.
\section{Partially Coastal Bellamy Continua}
\noindent
Thus far every Bellamy continuum has proved to be either coastal at every point or at no points. This section examines the remaining case.
Henceforth $H$ is some fixed Bellamy continuum. We will make frequent use of the fact that every semicontinuum $S \subset H$ is either dense or nowhere-dense.
Under the assumption that there is a coastal point $x \in H$ and a non-coastal point $y \in H$,
this section investigates how badly behaved $H$ must be.
Our description is in terms of thick semicontinua. Recall the semicontinuum $S \subset H$ is called thick to mean it is proper and has nonvoid interior.
Every indecomposable metric continuum has more than one composant and so cannot contain a thick semicontinuum.
It is unknown whether the result generalises $-$ no known Bellamy continuum contains a thick semicontinuum.
Thus the following lemma explains our failure to provide a concrete example for the continuum $H$.
\begin{lemma} \label{yes} $H$ contains a thick semicontinuum. Moreover every dense proper semicontinuum in $H$ is thick.\end{lemma}
\begin{proof} Let $S$ be an arbitrary dense proper semicontinuum. At least one exists to witness how the point $x \in H$ is coastal.
Since $H$ has one composant there is a proper subcontinuum $K \subset H$ with $\{x,y\} \subset K$. Then $K \cup S$ is also a dense semicontinuum.
But since $y$ is non-coastal we must have $K \cup S = H$. Therefore $S$ contains the open set $H-K$ and hence has nonvoid interior.
\end{proof}
\begin{lemma} $H$ has a thick semicontinuum that contains every other thick semicontinuum.\end{lemma}
\begin{proof} Let $S$ be the thick semicontinuum found in Lemma \ref{yes}.
Every other thick semicontinuum $M \subset H$ is dense, and since $S$ has interior this implies $S \cup M$ is a semicontinuum.
Moreover $y \notin M$ since $y$ is non-coastal, and hence $S \cup M$ is proper. It follows that the union of $S$ with all thick semicontinua $M$
is itself a thick semicontinuum, and thus the maximum among the thick semicontinua of $H$.
\end{proof}
Henceforth we will fix $S \subset H$ to be the maximum thick semicontinuum.
\begin{lemma} \label{cc}$S$ is a continuum component of $H-p$ for each $p \in H - S$.\end{lemma}
\begin{proof} We know $S$ is contained in some continuum component $C$ of $H - p$.
But since a continuum component is a semicontinuum this implies $C \subset S $ by maximality of $S$ and
therefore $C = S$. \end{proof}
\begin{lemma} \label{compl} $H - S^\circ$ is a subcontinuum and one of two things happens.
\begin{enumerate}
\item The thick semicontinuum $S$ is open
\item $H-S^\circ$ is indecomposable with more than one composant
\end{enumerate}
\end{lemma}
\begin{proof}
Fix a point $p \in H-S$; one exists because $S$ is proper. Observe that by boundary-bumping $p$ is in the closure of every continuum component of $H-p$.
Therefore any union of continuum components of $H-p$ has connected closure.
Lemma \ref{cc} says $S$ is a continuum component of $H-p$.
Therefore $\overline {(H-p-S)} = H - S^\circ$ is connected and hence a continuum. Call this continuum $B$.
There is a partition $B = A \cup C$ where $A = H-S $ is the complement of $S$ and $C = B \cap S$ consists of the points of $S$ outside its interior.
Since $S$ is proper $A$ is nonempty. If we assume $S$ is not open then $C$ is nonempty as well.
We will demonstrate that $B$ is irreducible between each $a \in A$ and each $c \in C$.
Since $A$ and $C$ form a partition this will imply they are both unions of composants.
In particular $B$ will have two disjoint composants making it indecomposable.
Now suppose $E \subset B$ is a subcontinuum that meets each of $A$ and $C$.
Since $E$ meets $C = B \cap S$ we know that $S \cup E$ is continuumwise connected.
But since $E$ meets $A = H-S$ we know $S \cup E$ is strictly larger than $S$.
By assumption $S$ is a maximal thick semicontinuum. So the only option is that $S \cup E = H$.
In particular $S \cup E$ contains $A = H-S$.
This implies $A \subset E$ and since $E$ is closed $\overline {A} \subset E$. But by definition $\overline {A} = \overline {(H-S)} = H-S^\circ = B$.
We conclude $E = B$ as required.\end{proof}
The previous lemma showed that $H-S^\circ$ is a subcontinuum.
Recall that, since $H$ is indecomposable, every proper subcontinuum of $H$ has void interior, and $H-S^\circ$ is proper because $S$ has nonvoid interior. This gives us the corollary.
\begin{corollary} \label{di} $S$ has dense interior.\end{corollary}
Now we are ready to identify the coastal points of $H$.
\begin{lemma} $H$ is coastal exactly at the points of $S$. \end{lemma}
\begin{proof}
Since $S$ is the maximum dense proper semicontinuum, each of its points is coastal.
Now let $x \in H$ be an arbitrary coastal point. That means $x$ is an element of
some proper dense semicontinuum, which by Lemma \ref{yes} is thick and hence contained in $S$. This implies $x \in S$ as required. \end{proof}
We can summarise the progress made in this section in the following theorem.
\begin{theorem} \label{summary}Suppose $S$ is the set of coastal points of the Bellamy continuum $H$ and $\ensuremath{\varnothing} \ne S \ne H$. Then $S$ is a semicontinuum with dense interior and contains every semicontinuum of $H$ with nonempty interior. Moreover $H-S^\circ$ is a subcontinuum and one of the below holds.
\begin{enumerate}
\item The semicontinuum $S$ is open
\item $H-S^\circ$ is an indecomposable continuum with more than one composant
\end{enumerate}\end{theorem}
We conclude this section with a remark on the three classes of continua. Continua of the first class are coastal at every point $-$ for example all metric or separable continua \cite{me1}. Continua of the second class have no coastal points $-$ for example $\ensuremath{\mathbb H}^*$ under NCF.
Continua of the third (and possibly empty) class are coastal only at the points of some proper subset. However, as Theorem \ref{summary} shows, these continua have the extra property of being \textit{simultaneously coastal}. That means the set of coastal points knits together into a dense proper semicontinuum that simultaneously witnesses the coastal property for each of its points.
\end{document} |
\begin{document}
\newcommand{\refeq}[1]{(\ref{#1})}
\def\widetilde}\newcommand{\HOX}[1]{\marginpar{\footnotesize #1}{\widetilde}\newcommand{\HOX}[1]{\marginpar{\footnotesize #1}} \def\widehat}\def\cal{\mathcal{\widehat}\def\cal{\mathcal} \def \beq {\begin {eqnarray}} \def \eeq {\end {eqnarray}} \def \ba {\begin {eqnarray*}} \def \ea {\end {eqnarray*}} \def \bfo {\ba} \def \efo {\ea} \def\Omega{\Omega} \def{\cal O}{{\mathcal O}}
\newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{Mtheorem}{Theorem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem {adefinition}[theorem]{Assumption} \newtheorem {definition}[theorem]{Definition} \newtheorem {example}[theorem]{Example} \newtheorem {corollary}[theorem]{Corollary} \newtheorem {problem}[theorem]{Problem} \newtheorem {ex}[theorem]{Exercise}
\def \cA {{\cal A}} \def \fg{{\bf g}} \def \cB {{\cal B}} \def \cD {{\cal D}} \def \A {{\cal A}} \def \B {{\cal B}} \def \D {{\cal D}} \def \Disc {{\Bbb D}} \def \tX {{\widetilde X}} \def \Box {\quad $\square$}
\def \bn{\underline n} \def \cW {{\cal W}}
\title{CALDER\'ON'S INVERSE PROBLEM FOR ANISOTROPIC CONDUCTIVITY IN THE PLANE} \author {Kari Astala} \address{Rolf Nevanlinna Institute, University of Helsinki, P.O.~Box~4 (Yliopistonkatu~5), FIN-00014 University of Helsinki, Finland} \email{[email protected], [email protected], [email protected]}
\author{Matti Lassas}
\author{Lassi P\"aiv\"arinta}
\maketitle
\def\fR{{\mathbb R}} \def\p{\partial}
{\bf Abstract:} We study the inverse conductivity problem for an anisotropic conductivity $\sigma\in L^\infty$ in bounded and unbounded domains. We also give applications of the results in the case when the Dirichlet-to-Neumann and Neumann-to-Dirichlet maps are given only on a part of the boundary.
\section{INTRODUCTION} Let us consider the anisotropic conductivity equation in two dimensions \beq\label{conduct} \nabla\cdotp \sigma\nabla u= \sum_{j,k=1}^2 \frac \partial{\partial x^j}\sigma^{jk}(x) \frac \partial{\partial x^k} u &=& 0\hbox{ in } \Omega,\\
u|_{\partial \Omega}&=&\phi.\nonumber \eeq Here
$\Omega\subset{\mathbb R}^2$ is a simply connected domain. The
conductivity $\sigma=[\sigma^{jk}]_{j,k=1}^2$ is a symmetric, positive definite matrix function, and $\phi \in H^{1/2}(\partial \Omega)$ is the prescribed voltage on the boundary. It is then well known that equation (\ref{conduct}) has a unique solution $u\in H^1(\Omega)$.
In the case when $\sigma$ and $\partial \Omega$ are smooth, we can define the voltage-to-current (or Dirichlet-to-Neumann) map by \beq
\Lambda_\sigma(\phi)= Bu|_{\partial \Omega} \eeq where \beq Bu=\nu \cdotp \sigma \nabla u, \eeq
$u\in H^1(\Omega) $ is the solution of (\ref{conduct}), and $\nu$ is the unit normal vector of $\partial\Omega$. Applying the divergence theorem, we have \beq\label{Q_l} Q_{\sigma,\Omega} (\phi):=\int_\Omega \sum_{j,k=1}^2\sigma^{jk}(x)\frac {\partial u}{\partial x^j} \frac {\partial u}{\partial x^k} dx=\int_{\partial \Omega} \Lambda_\sigma(\phi) \phi\, dS, \eeq where $dS$ denotes the arc length on $\partial \Omega$. The quantity $Q_{\sigma,\Omega} (\phi)$ represents the power needed to maintain the potential $\phi$ on $\partial \Omega$. By the symmetry of $\Lambda_\sigma$, knowing $Q_{\sigma,\Omega}$ is equivalent to knowing
$\Lambda_\sigma$. For general $\Omega$ and $\sigma\in L^\infty(\Omega)$, the trace $u|_{\partial \Omega}$ is defined as the equivalence class of $u$ in $H^1(\Omega)/H^1_0(\Omega)$ (see \cite{AP}) and formula (\ref{Q_l}) is used to define the map $\Lambda_\sigma$.
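For illustration, the fact that the quadratic form $Q_{\sigma,\Omega}$ determines $\Lambda_\sigma$ is just polarization: by the symmetry of $\Lambda_\sigma$, \ba \int_{\partial \Omega} \Lambda_\sigma(\phi)\, \psi\, dS=\frac 14\Big( Q_{\sigma,\Omega}(\phi+\psi)-Q_{\sigma,\Omega}(\phi-\psi)\Big) \ea for all $\phi,\psi$, and the bilinear form on the left determines $\Lambda_\sigma$.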
If $F:\Omega\to \Omega,\quad F(x)=(F^1(x),F^2(x))$, is a diffeomorphism with $F|_{\partial \Omega}=\hbox{Identity}$, then by making the change of variables $y=F(x)$ and setting $v=u\circ F^{-1}$ in the first integral in (\ref{Q_l}), we obtain \ba \nabla\cdotp (F_*\sigma)\nabla v=0\quad\hbox{in }\Omega, \ea where \beq\label{cond and metr} (F_*\sigma)^{jk}(y)=\left. \frac 1{\det [\frac {\partial F^j}{\partial x^k}(x)]} \sum_{p,q=1}^2 \frac {\partial F^j}{\partial x^p}(x)
\,\frac {\partial F^k}{\partial x^q}(x) \sigma^{pq}(x)\right|_{x=F^{-1}(y)}, \eeq or \beq\label{5 1/2}
F_*\sigma(y)=\left.\frac 1{J_F(x)} DF(x)\,\sigma(x)\,DF(x)^t\right|_{x=F^{-1}(y)}, \eeq is the push-forward of the conductivity $\sigma$ by $F$. Moreover, since $F$ is identity at $\partial \Omega$, we obtain from (\ref{Q_l}) that \ba \Lambda_{F_*\sigma}=\Lambda_\sigma. \ea Thus, the change of coordinates shows that there is a large
class of conductivities which give rise to the same electrical measurements at the boundary.
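As a concrete illustration, take $\sigma=I$ and any $W^{1,2}$-diffeomorphism $F:\Omega\to\Omega$ with $F|_{\partial\Omega}=\hbox{Identity}$. By (\ref{5 1/2}), \ba (F_*I)(y)=\left.\frac 1{J_F(x)}\,DF(x)\,DF(x)^t\right|_{x=F^{-1}(y)}, \ea which is a genuinely anisotropic conductivity whenever $F$ is not conformal, yet $\Lambda_{F_*I}=\Lambda_I$.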
Here we consider the converse question: if two conductivities have the same Dirichlet-to-Neumann map, can each of them be obtained as the push-forward of the other?
In applied terms, this inverse problem of determining $\sigma$ (or its properties) from $\Lambda_{\sigma}$ is
also known as {\em Electrical Impedance Tomography}. It has been proposed as a valuable diagnostic tool, see \cite{CIN99}.
In the case where $\sigma^{jk}(x)=\sigma(x)\delta^{jk}$, $\sigma(x)\in {\mathbb R}_+$, the conductivity is said to be isotropic. In 1980 A.~Calder\'on
\cite {Cl} proposed that in the isotropic case any bounded conductivity $\sigma(x)$ might be determined solely from the boundary measurements, i.e., from $\Lambda_\sigma$. Recently this has been confirmed in the two-dimensional case (cf. \cite{AP}). When the isotropic $\sigma$ is smoother than just an $L^\infty$-function, the same conclusion is known to hold also in higher dimensions.
The first global uniqueness result was obtained for a $C^\infty$--smooth conductivity in dimension $n\geq 3$ by
J.~Sylvester and G.~Uhlmann in 1987 \cite{SylUhl}. In dimension two A.~Nachman \cite{Nach2D} produced in 1995 a uniqueness result for conductivities with two derivatives. The corresponding algorithm has been successfully implemented and proven to work efficiently even with real data \cite{Siltanen1,Siltanen2}. The reduction of regularity assumptions has since been under active study. In dimension two the optimal $L^\infty$-regularity was obtained in \cite{AP}. In dimension $n\geq 3$ the uniqueness has presently been shown for isotropic conductivities $\sigma \in W^{3/2,\infty}(\Omega)$ in \cite{PPU} and for globally
$C^{1+\varepsilon}$--smooth isotropic conductivities having only co-normal singularities in \cite{GLU1}.
Also, the stability of reconstructions for the inverse conductivity problem has been extensively studied. For these results, see \cite{Al,AlS,BBR}, where the stability results are based on the reconstruction techniques of \cite{BU} in dimension two and those of \cite{Na1} in dimensions $n\geq 3$.
In the anisotropic case, where $\sigma$ is a matrix function and the problem is to recover the conductivity $\sigma$ up to the action of a class of diffeomorphisms, much less is known. In dimensions $n\geq 3$ it is generally known only that piecewise analytic conductivities can be reconstructed (see \cite{KV1,KV2}). For Riemannian manifolds this kind of technique has been generalized in \cite{LeU,LaU,LTU}. In dimension $n=2$ the inverse problem has been considered by J.~Sylvester \cite{Sy1} for $C^3$-conductivities and by Z.~Sun and G.~Uhlmann \cite{SU} for $W^{1,p}$-conductivities. The idea of \cite{Sy1} and \cite{SU} is that under a quasiconformal change of coordinates (cf. \cite{Alf,IM}) any anisotropic conductivity can be changed to an isotropic one, see also Section \ref{Sec: proofs} below. The purpose of this paper is to carry this technique over to the $L^\infty$--smooth case and then use the result of \cite{AP} to obtain uniqueness up to the group of diffeomorphisms.
The advantage of reducing the smoothness assumptions to $L^\infty$ does not lie solely in the fact that many conductivities have jump-type singularities; it also allows us to consider much more complicated singular structures such as porous rocks \cite{Che}. Moreover, it is important that this approach
enables us to consider general diffeomorphisms. Thus anisotropic inverse problems in half-space or exterior domains can be solved simultaneously. This will be considered in Section \ref{sec: con}.
If $\Omega\subset {\mathbb R}^2$ is a bounded domain, it is convenient to consider the class of matrix functions $\sigma=[\sigma^{jk}]$ such that \beq\label{basic ass} [\sigma^{ij}]\in L^\infty(\Omega;{\mathbb R}^{2\times 2}), \quad [\sigma^{ij}]^t=[\sigma^{ij}],\quad C_0^{-1}I \leq [\sigma^{ij}]\leq C_0I \eeq where $C_0>0$. In the sequel, the minimal possible value of $C_0$ is denoted by $C_0(\sigma)$. We use the notation \ba
\Sigma(\Omega)=\{\sigma\in L^{\infty}(\Omega;{\mathbb R}^{2\times 2})& |& \ C_0(\sigma)<\infty\}. \ea Note that it is necessary to require $C_0(\sigma)<\infty$, as otherwise there would be counterexamples showing that even the equivalence class of the conductivity cannot be recovered
\cite{GLU2a,GLU2}.
Our main goal in this paper is to show that an anisotropic $L^\infty$--conductivity can be determined up to a $W^{1,2}$-diffeomorphism:
\begin{Mtheorem}\label{theorem1} Let $\Omega\subset{\mathbb R}^{2}$ be a simply connected bounded domain and $\sigma\in L^{\infty}(\Omega;{\mathbb R}^{2\times 2})$. Suppose that the assumptions (\ref{basic ass}) are valid. Then the Dirichlet-to-Neumann map $\Lambda_{\sigma}$ determines the equivalence class
\ba E_\sigma=\{\sigma_1\in \Sigma(\Omega)& |& \hbox{$\sigma_1 = F_*\sigma$, $F:\Omega\to \Omega$ is a $W^{1,2}$-diffeomorphism and}\\
& &F|_{\partial \Omega}=I\}. \ea \end{Mtheorem}
We prove this result in Section \ref{Sec: proofs}.
Finally, note that the $W^{1,2}$--diffeomorphisms $F$ preserving the class $\Sigma(\Omega)$ are precisely the quasiconformal mappings. Namely, if $\sigma_0\in \Sigma(\Omega)$ and $\sigma_1=F_*( \sigma_0)\in \Sigma(F(\Omega))$ then \beq\label{Kari 8}
\frac 1{C_0}\,DF(x)\, DF(x)^t\leq DF(x)\, \sigma_0(x)\, DF(x)^t\leq C_1J_F(x)I \eeq where $I=[\delta^{ij}]$, and by taking operator norms we obtain \beq\label{Kari 9}
||DF(x)||^2\leq K J_F(x),\quad \hbox{for a.e. }\ x\in \Omega \eeq where $K=C_0C_1<\infty$. Conversely, if (\ref{Kari 9}) holds and $F$ is a $W^{1,2}_{loc}$-homeomorphism, then $F_*\sigma\in \Sigma(F(\Omega))$ whenever $\sigma\in \Sigma(\Omega)$. Furthermore, recall that a map $F:\Omega \to \widetilde \Omega$ is quasiregular if $F\in W^{1,2}_{loc}(\Omega)$ and the condition (\ref{Kari 9}) holds. Moreover, a map $F$ is quasiconformal if it is quasiregular and a $W^{1,2}$--homeomorphism.
\section{CONSEQUENCES AND APPLICATIONS OF THEOREM \ref{theorem1}}\label{sec: con}
Here we consider applications of the diffeomorphism technique to various inverse problems. The formulated results, Theorems \ref{Lem: A1}--\ref{Lem: A3}, are proven in Section \ref{sec: proof of con}.
\subsection{Inverse Problem in the Half Space}
The inverse problem in the half space is of crucial importance in geophysical prospecting, seismological imaging, non-destructive testing, etc. For instance, the imaging of soil was the original motivation of Calder\'on's seminal paper \cite {Cl}. As we can use a diffeomorphism to map the open half space to the unit disc, we can apply the previous result to the half-space case. One should observe that under this deformation even infinitely smooth conductivities can become non-smooth at the boundary (e.g.\ a conductivity oscillating near infinity produces a non-Lipschitz conductivity in the push-forward), and thus the low-regularity result of \cite{AP} is essential for this problem.
Thus, for $\sigma\in \Sigma({\mathbb R}^2_-)$ let us consider the problem \beq\label{conduct D1a}
\nabla\cdotp \sigma\nabla u&=& 0\hbox{ in } {\mathbb R}^2_-=\{(x^1,x^2)\,|\, x^2<0\} ,\\
u|_{\partial {\mathbb R}^2_-}&=&\phi,\label{conduct D1b}\\ u&\in& L^\infty({\mathbb R}^2_-)\label{conduct D1c}. \eeq Notice that here the radiation condition (\ref{conduct D1c}) at infinity is quite simple: we assume just that the potential $u$ does not blow up at infinity. The problem (\ref{conduct D1a})--(\ref{conduct D1c}) is uniquely solvable and as before we can define \ba \Lambda_\sigma: H_{comp}^{1/2}(\partial {\mathbb R}^2_-)\to H^{-1/2}(\partial {\mathbb R}^2_-),\quad
\phi\mapsto \nu\cdotp\sigma\nabla u|_{\partial {\mathbb R}^2_-}. \ea \begin{theorem}\label{Lem: A1} The map $\Lambda_\sigma$
determines the equivalence class
\ba E_{\sigma}=\{\sigma_1\in \Sigma({\mathbb R}^{2}_-)& |& \hbox{$\sigma_1 = F_*\sigma$, $F:{\mathbb R}^2_-\to {\mathbb R}^2_-$ is a $W^{1,2}$-diffeomorphism,}\\
& &F|_{\partial {\mathbb R}^2_-}=I\}. \ea Moreover, each orbit $E_{\sigma}$ contains at most one isotropic conductivity, and consequently if $\sigma$ is known to be isotropic, it is determined uniquely by $\Lambda_ \sigma$. \end{theorem} Note that the natural growth requirement
$ \lim_{|z|\to \infty} |F(z)|=\infty$ follows automatically from the above assumptions on $F$.
\subsection{Inverse Problem in the Exterior Domain}
An inverse problem similar to that of the half space can be considered in an exterior domain, where one wants to find the conductivity in the complement of a bounded simply connected domain. This type of problem is encountered in cases where measurement devices are embedded in an unknown domain.
In the case of $S={\mathbb R}^2\setminus \overline D$,
where $D$ is a bounded Jordan domain, we consider the problem \beq\label{conduct S} \nabla\cdotp \sigma\nabla u&=& 0\quad \hbox{ in } S,\\
u|_{\partial S}&=&\phi\in H^{1/2}(\partial S),\label{conduct S1} \\ u&\in& L^\infty(S).\label{conduct S2a} \eeq Again, the radiation condition (\ref{conduct S2a}) at infinity is only that the solution is uniformly bounded. For this equation we define \ba \Lambda_\sigma: H^{1/2}(\partial S)\to H^{-1/2}(\partial S),\quad
\phi\mapsto \nu\cdotp\sigma \nabla u|_{\partial S}. \ea Surprisingly, the result is different from the half-space case. The reason for this is that the group of diffeomorphisms preserving the data does not fix the point at infinity. More precisely, there are two points $x_0,x_1\in S\cup\{\infty\}$ such that $F(x_0)=\infty$, $F^{-1}(x_1)=\infty$, and $F:S\setminus \{x_0\}\to S\setminus \{x_1\}$. In particular, this means that uniqueness does not hold up to diffeomorphisms mapping the exterior domain to itself.
For convenience, we compactify $S$ by adding one point at infinity, denote $\overline S=S\cup \{\infty\}$, and define $\sigma(\infty)=1$. We say that $F:\overline S\to \overline S$ is a $W^{1,2}$-diffeomorphism if $F$ is a homeomorphism and a $W^{1,2}$-diffeomorphism in the spherical metric \cite{Alf}.
\begin{theorem}\label{Lem: A2} Let $\sigma\in \Sigma(S)$. Then the map $\Lambda_\sigma$
determines the equivalence class
\ba E_{\sigma,S}=\{\sigma_1\in \Sigma(S)& |& \hbox{$\sigma_1 = F_*\sigma$, $F:\overline S\to \overline S$ is a $W^{1,2}$-diffeomorphism,}\\
& &F|_{\partial S}=I\,\}. \ea Moreover, if $\sigma$ is known to be isotropic, it is determined uniquely by $\Lambda_\sigma$. \end{theorem}
\subsection{Data on Part of the Boundary}
In many inverse problems data is measured only on a part of the boundary. For the conductivity equation in dimensions $n\geq 3$ it has been shown that if the measurements are made on a part of the boundary, then the integrals of the unknown conductivity over certain 2-planes can be determined \cite{GU}. In one-dimensional inverse problems partial data is often considered with two different boundary conditions, see e.g.\ \cite{Le,Ma}. For instance, in the inverse spectral problem for a one-dimensional Schr\"odinger operator,
it is known that the spectra corresponding to two different boundary conditions determine the potential uniquely. Here we consider similar results for the two-dimensional conductivity equation, assuming that we know the measurements on a part of the boundary for two different boundary conditions.
Let us consider the conductivity equation with the Dirichlet boundary condition \beq\label{conduct D} \nabla\cdotp \sigma\nabla u&=& 0\hbox{ in } \Omega,\\
u|_{\partial \Omega}&=&\phi\nonumber \eeq and with the Neumann boundary condition \beq\label{conduct N} \nabla\cdotp \sigma\nabla v&=& 0\hbox{ in } \Omega,\\
\nu\cdotp\sigma \nabla v|_{\partial \Omega}&=&\psi,\nonumber \eeq normalized by $\int_{\partial \Omega}v\,dS=0$. Let $\Gamma\subset \partial\Omega$ be open. We denote by $H_0^{s}(\Gamma)$ the space of functions $f\in H^{s}(\partial \Omega)$ that are supported on $\Gamma$ and by
$H^{s}(\Gamma)$ the space of restrictions $f|_\Gamma$ of
$f\in H^{s}(\partial \Omega)$. We define the Dirichlet-to-Neumann map $\Lambda_\Gamma$ and Neumann-to-Dirichlet map $\Sigma_\Gamma$ by \ba & &\Lambda_\Gamma:H_0^{1/2}(\Gamma)\to H^{-1/2}(\Gamma),
\quad \phi\mapsto (\nu\cdotp\sigma \nabla u)|_{\Gamma},\\ & &\Sigma_\Gamma:H_0^{-1/2}(\Gamma)\to H^{1/2}(\Gamma),
\quad \psi\mapsto v|_{\Gamma}. \ea \begin{theorem}\label{Lem: A3} Let $\Gamma\subset \partial\Omega$ be open. Then knowing $\partial \Omega$ and both of the maps $\Lambda_\Gamma$ and $\Sigma_\Gamma$ determines the equivalence class
\ba E_{\sigma,\Gamma}=\{\sigma_1\in \Sigma(\Omega)& |& \hbox{$\sigma_1 = F_*\sigma$, $F:\Omega\to \Omega$ is a $W^{1,2}$-diffeomorphism,}\\
& &F|_{\Gamma}=I\}. \ea Moreover, if $\sigma$ is known to be isotropic, it is determined uniquely by $\Lambda_\Gamma$ and $\Sigma_\Gamma$. \end{theorem}
\section{PROOF OF THEOREM \ref{theorem1}} \label{Sec: proofs}
\subsection{Preliminary Considerations} In the following we identify ${\mathbb R}^2$ and ${\mathbb C}$ by the map $(x^1,x^2)\mapsto x^1+ix^2$ and denote $z=x^1+ix^2$. We use the standard notations \ba \partial_z=\frac 12(\partial_1-i\partial_2),\quad \partial_{\overline z}=\frac 12(\partial_1+i\partial_2), \ea where $\partial_j=\partial/\partial x^j$. Below we consider $\sigma:\Omega\to {\mathbb R}^{2\times 2}$ to be extended to a function $\sigma:{\mathbb C}\to {\mathbb R}^{2\times 2}$ by defining $\sigma(z)=I$ for $z\in {\mathbb C}\setminus \Omega$. In the following, we denote $C_0=C_0(\sigma)$.
For the conductivity $\sigma=\sigma^{jk}$ we define the corresponding Beltrami coefficient (see \cite{Sy1,AP,IM}) \beq\label{mu} \mu_{1}(z)= \frac {-\sigma^{11}(z)+\sigma^{22}(z)-2i\,\sigma^{12}(z)}
{\sigma^{11}(z)+\sigma^{22}(z)+2\sqrt{\det(\sigma(z))}}.
\eeq The coefficient $\mu_{1}(z)$ satisfies $|\mu_{1}(z)|\leq \kappa<1$, with $\kappa$ depending only on $C_0(\sigma)$, and is compactly supported.
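For illustration, if $\sigma^{jk}(z)=\sigma(z)\delta^{jk}$ is isotropic, then $\sigma^{11}=\sigma^{22}$, $\sigma^{12}=0$ and $\det(\sigma(z))^{1/2}=\sigma(z)$, so that (\ref{mu}) gives \ba \mu_1(z)=\frac{-\sigma(z)+\sigma(z)-0}{\sigma(z)+\sigma(z)+2\sigma(z)}=0; \ea in this case no change of coordinates is needed and one may take $F=\hbox{Identity}$ below.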
Next we introduce a $W^{1,2}$-diffeomorphism (not necessarily
preserving the boundary) that transforms the conductivity to an isotropic one.
\begin{lemma}\label{lem: 1} There is a quasiconformal homeomorphism $F:{\mathbb C}\to {\mathbb C}$ such that \beq\label{asympt 1}
F(z)=z+{\cal O}(\frac 1{z})\quad\hbox{as }|z|\to \infty \eeq and such that $F\in W^{1,p}_{loc}({\mathbb C};{\mathbb C})$, $2<p<p(C_0)=\frac {2C_0}{C_0-1}$, for which \beq\label{isotropic} (F_*\sigma)(z)=\widetilde \sigma(z):= \det(\sigma(F^{-1}(z)))^{\frac 12}. \eeq \end{lemma}
\noindent{\bf Proof.}\quad The proof can be found in \cite{Sy1} for $C^3$-smooth conductivities, see also \cite{IM}. Because of varying sign conventions, we
sketch here the proof for the reader's convenience. We need to find a quasiconformal map $F$ such that \beq\label{lassi 1} DF\,\sigma\,DF^t=\sqrt{\det(\sigma)}\, J_F I \eeq where $J_F=\det(DF)$ is the Jacobian of $F$. Denoting by $G=[g_{ij}]_{i,j=1}^2$ the inverse of the matrix $\sigma/\sqrt{\det(\sigma)}$ we see that the claim is equivalent to proving the following:
For any symmetric matrix $G$ with $\det(G)=1$ and $\frac 1 K I \leq G\leq K I$ there exists a quasiconformal map $F$ such that \beq\label{lassi 2} J_F G= DF^t DF. \eeq Next, the non-linear equation (\ref{lassi 2}) can be replaced in complex notation by a linear one. Indeed, if $F=u+iv$ then (\ref{lassi 2}) is equivalent to \beq\label{lassi 3} \nabla v=J G^{-1}\nabla u,\quad \hbox{where }J= \left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right). \eeq This follows readily from the identity \ba DF^tJ=\det(DF)\,J\,(DF)^{-1}=JG^{-1}DF^t \ea where the latter equality uses (\ref{lassi 2}). The matrix $J$ corresponds to the multiplication with the imaginary unit $i$ in complex notation. Denoting by $R= \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right) $ (the matrix corresponding to complex conjugation) we see that (\ref{lassi 3}) is equivalent to \beq\label{lassi 4} \nabla u+J\nabla v=(G-1)(G+1)^{-1}(\nabla u-J\nabla v). \eeq But, $\nabla u+J\nabla v=2\p_{\overline z} F$ and $R (\nabla u-J\nabla v) =2\p_z F$ in complex notation and hence (\ref{lassi 4}) becomes \beq\label{lassi 5} & &\partial_{\overline z} F=\mu_{1}(z)\partial_{z} F \eeq where \ba \mu_{1}=(G-1)(G+1)^{-1}R=(\sqrt {\det \sigma} I-\sigma) (\sqrt {\det \sigma} I+\sigma )^{-1}R. \ea A direct calculation gives \ba \mu_{1} =\frac 1{2+g_{11}+g_{22}}\left(\begin{array}{cc} g_{11}-g_{22} & -2g_{12} \\ 2g_{12} & g_{11}-g_{22} \end{array}\right) \ea which shows that the matrix $\mu_{1}=(G-1)(G+1)^{-1}R$ corresponds to a multiplication operator (in complex notation) by the function \ba \mu_{1}(z)=\frac {g_{11}(z)-g_{22}(z)+2ig_{12}(z)} {2+g_{11}(z)+g_{22}(z)}.
\ea This gives (\ref{mu}) since $G^{-1}=\sigma/\sqrt{\det(\sigma)}$. Since $|\mu_{1}(z)|\leq \kappa <1$ for every $z\in {\mathbb C}$ it is well known by \cite[Thm. V.1, V.2]{Alf} that the equation
(\ref{lassi 5}) with asymptotics \ba F(z)=z+{\cal O}(\frac 1z ),\quad\hbox{as }z\to \infty \ea has a unique (quasiconformal) solution $F$. The fact that
$F\in W^{1,p}_{loc}({\mathbb C};{\mathbb C})$, $2<p<\frac {2C_0}{C_0-1}$ follows from \cite{Astala}. \Box
In this section we denote by $F=F_\sigma$ the diffeomorphism determined by Lemma \ref{lem: 1}. We also denote $\widetilde \Omega=F(\Omega)$ where $F$ is as in Lemma \ref{lem: 1}. Note that (\ref{asympt 1}) implies also that \beq\label{asympt 2}
F^{-1}(z)=z+{\cal O}(\frac 1{|z|})\quad\hbox{as }|z|\to \infty. \eeq
Later we will use the obvious fact that the knowledge of the map $\Lambda_{\sigma}$ is equivalent to the knowledge of the Cauchy data pairs
\ba C_{\sigma}=\{(u|_{\partial \Omega},\nu\cdotp \sigma \nabla u|_{\partial \Omega})\ |\ u\in H^1(\Omega), \ \nabla\cdotp \sigma \nabla u=0\}. \ea
In addition to the anisotropic conductivity equation (\ref{conduct}) we consider the corresponding conductivity equation with isotropic conductivity. For these considerations, we observe that if $u$ satisfies equation (\ref{conduct}) and $\widetilde \sigma$ is as in (\ref{isotropic}), then the function \ba w(x)=u(F^{-1}(x))\in H^1(\widetilde \Omega) \ea satisfies the isotropic conductivity equation \beq\label{EQ 1} & &\nabla\cdotp \widetilde\sigma \nabla w=0\quad \hbox{ in }\widetilde \Omega,\\
& &w|_{\partial \widetilde \Omega}=\phi\circ F^{-1}.\nonumber \eeq Thus, $\widetilde\sigma$ can be considered as a scalar, isotropic $L^\infty$--smooth conductivity $\widetilde\sigma I$. We
also extend the function $\widetilde\sigma:\widetilde \Omega\to {\mathbb R}_+$ to a function $\widetilde\sigma:{\mathbb C}\to {\mathbb R}_+$ by defining $\widetilde\sigma(x)=1$ for $x\in {\mathbb C}\setminus \widetilde \Omega$.
\subsection{Conjugate Functions} While solving the isotropic inverse problem in \cite{AP}, the interplay of the scalar conductivities $\sigma(x)$ and $\frac 1 {\sigma(x)}$ played a crucial role. Motivated by this, we define \ba \widehat \sigma^{jk}(x)=\frac 1{\det(\sigma(x))}\sigma^{jk}(x). \ea Note that for an isotropic conductivity $\widehat \sigma=1/\sigma$.
Let now $F$ be the quasiconformal map defined in Lemma \ref{lem: 1} and $\widetilde\sigma=F_*\sigma$ as in (\ref{isotropic}). We say that
$\widehat w\in H^1(\widetilde \Omega)$ is a $\widetilde\sigma$-harmonic conjugate of $w$ if \beq\label{EQ 2} & &\partial_1\widehat w(z)=-\widetilde\sigma(z)\partial_2 w(z),\\ & &\partial_2\widehat w(z)=\widetilde\sigma(z)\partial_1 w(z)\nonumber \eeq for $z=x^1+ix^2\in {\mathbb C}$. Using $\widehat w$ we define the function $\widehat u$ that we call the $\sigma$-harmonic conjugate of $u$, \ba \widehat u(x)=\widehat w(F(x)). \ea To find the equation governing $\widehat u$, we first observe that
(cf. \cite{AP})
\beq\label{EQ 3} \nabla\cdotp \frac 1 {\widetilde\sigma} \nabla\widehat w=0\quad \hbox{ in }\widetilde \Omega, \eeq and by changing coordinates to $y=F(x)$ we see that
$1/\widetilde\sigma=F_*\widehat \sigma$. These facts imply
\beq\label{EQ 4} \nabla\cdotp \widehat \sigma \nabla\widehat u=0\quad \hbox{ in }\Omega. \eeq Thus $\widehat u$ is the $\widehat\sigma$-harmonic conjugate function of $u$ and we have \beq\label{EQ 5} \nabla \widehat u=J \sigma \nabla u,\quad \nabla u=J\widehat \sigma \nabla \widehat u. \eeq
Since $u$ is a solution of the conductivity equation if and only if $u+c$, $c\in {\mathbb C}$, is a solution, we see from (\ref{EQ 5}) that the Cauchy data pairs $C_{\sigma}$ determine the pairs $C_{\widehat \sigma}$ and vice versa. Thus we obtain, almost for free, that $\Lambda_\sigma$ determines $\Lambda_{\widehat \sigma}$, too.
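To illustrate (\ref{EQ 5}), in the isotropic case $\sigma=I$ we have $\widehat\sigma=I$, and the equations reduce to the Cauchy--Riemann system \ba \partial_1 \widehat u=-\partial_2 u,\quad \partial_2 \widehat u=\partial_1 u, \ea so that $\widehat u$ is the classical harmonic conjugate of $u$ and $u+i\widehat u$ is analytic in $\Omega$.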
Let us next consider the function \beq\label{EQ 6} f(z)=w(z)+i\widehat w(z). \eeq By \cite{AP}, it satisfies the pseudo-analytic equation of the second type, \beq\label{EQ 7} \p_{\overline z} f=\widetilde \mu_{2}\, \overline {\p_z f} \eeq where \beq\label{EQ 7b} \widetilde \mu_{2}(z)=\frac {1-\widetilde\sigma(z)}{1+\widetilde\sigma(z)},\quad
|\widetilde \mu_{2}(z)|\leq \frac {C_0-1}{C_0+1}<1. \eeq Using this Beltrami coefficient, we define $\mu_2=\widetilde \mu_2\circ F$.
We will need the following:
\begin{lemma}\label{lemma21} Let $g=f\circ F$ where $F:\Omega\to \widetilde \Omega$ is a quasiconformal homeomorphism and $f$ is a quasiregular map satisfying \beq\label{L99} \p_{\overline z} f=\widetilde \mu_2 \overline{\p_z f}\quad\ \hbox{and}\quad \p_{\overline z} F=\mu_1 \p_z F, \eeq where $\widetilde \mu_2=\mu_2\circ F^{-1}$, the coefficients satisfy
$|\mu_j|\leq \kappa<1$ for $j=1,2$, and $\mu_2$ is real. Then $g$ is quasiregular and satisfies \beq\label{L100} \p_{\overline z} g= \nu_1{\p_z g}+ \nu_2 \overline{\p_z g}, \eeq
where \beq\label{L101}
\nu_1=\mu_1 \frac {1-\mu_2^2}{1-|\mu_1|^2\mu_2^2} ,\quad\ \hbox{and}\quad
\nu_2=\mu_2 \frac {1-|\mu_1|^2}{1-|\mu_1|^2\mu_2^2}. \eeq Conversely, if $g$ satisfies (\ref{L100}) with
$\nu_2$ real and $|\nu_1|+|\nu_2|\leq \kappa' <1$, then there exist unique $\mu_1$ and $\mu_2$ such that (\ref{L101}) holds and
$f=g\circ F^{-1}$ satisfies (\ref{L99}). \end{lemma}
\noindent{\bf Proof.}\quad We apply the chain rule \ba \partial(f\circ F)&=& (\partial f)\circ F\cdotp\partial F+(\overline \partial f)\circ F\cdotp\overline{\overline \partial F},\\ \overline \partial(f\circ F) &=& (\partial f)\circ F\cdotp\overline \partial F+(\overline \partial f)\circ F\cdotp\overline{ \partial F}, \ea and obtain \ba
\nu_1\p_z g+ \nu_2 \overline{\p_z g}= (\partial f)\circ F\cdotp \partial F \cdotp (\nu_1+\nu_2\mu_1\mu_2)+ \overline {(\partial f)\circ F\cdotp \partial F}\cdotp (\nu_2+\nu_1\overline \mu_1\mu_2) \ea and \ba \p_{\overline z} g= \mu_1 \cdotp (\partial f)\circ F\cdotp \partial F + \mu_2 \cdotp \overline {(\partial f)\circ F\cdotp \partial F}. \ea Hence, if $\mu_1,\mu_2,\nu_1$, and $\nu_2$ are related so that \ba \mu_1=\nu_1+ \nu_2\mu_1\mu_2,\quad \mu_2=\nu_2+ \nu_1\overline \mu_1\mu_2, \ea we see that (\ref{L100}) and (\ref{L101}) are satisfied.
It is not difficult to see that for each $\nu_1$ and $\nu_2$, equation (\ref{L101}) has a unique solution
$\mu_1,\mu_2$ with $|\mu_j|\leq \kappa'<1$, $j=1,2$. Again, the general theory of quasiregular maps \cite{Alf} implies that (\ref{L99}) has a solution and the factorization $g=f\circ F$ holds. \Box
Note that (\ref{L101}) implies that \beq\label{L103}
|\nu_1|+|\nu_2|= \frac {|\mu_1|+|\mu_2|}{1+|\mu_1|\,|\mu_2|} \leq \frac {2\kappa}{1+\kappa^2}<1. \eeq
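Indeed, since $\mu_2$ is real, (\ref{L101}) gives \ba |\nu_1|+|\nu_2|=\frac{|\mu_1|(1-\mu_2^2)+|\mu_2|(1-|\mu_1|^2)}{1-|\mu_1|^2\mu_2^2} =\frac{(|\mu_1|+|\mu_2|)(1-|\mu_1|\,|\mu_2|)}{(1-|\mu_1|\,|\mu_2|)(1+|\mu_1|\,|\mu_2|)}, \ea and the last bound in (\ref{L103}) follows since $(a+b)/(1+ab)$ is increasing in each of $a,b\in [0,1)$.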
Lemma \ref{lemma21} has the following important corollary, which is the main goal of this subsection.
\begin{corollary}\label{cor21} If $u\in H^1(\Omega)$ is a real solution of the conductivity equation (\ref{conduct}), there exists $\widehat u\in H^1(\Omega)$, unique up to a constant, such that $g=u+i\widehat u$ satisfies (\ref{L100}) where \beq\label{L104} \nu_1= \frac {\sigma^{22}-\sigma^{11}-2i\sigma^{12}} {1+\hbox{\rm tr}\, \sigma+\det(\sigma)} ,\quad\ \hbox{and}\quad \nu_2= \frac {1-\det(\sigma)} {1+\hbox{\rm tr}\, \sigma+\det(\sigma)}. \eeq Conversely, if
$\nu_1$ and $\nu_2$, $|\nu_1|+|\nu_2|\leq \kappa' <1$,
are as in Lemma \ref{lemma21}, then there are unique $\sigma$ and $\widehat\sigma$ such that for any solution
$g$ of (\ref{L100}) the functions
$u=\hbox{Re}\, g$ and $\widehat u=\hbox{Im}\, g$ satisfy the conductivity equations \beq\label{cond eq 2} \nabla\cdotp \sigma\nabla u=0,\quad\hbox{and}\quad \nabla\cdotp \widehat \sigma\nabla \widehat u=0. \eeq \end{corollary}
\noindent{\bf Proof.}\quad Since $g=f\circ F$ where $ f=w+i\widehat w$ according to (\ref{EQ 6}), we obtain immediately the existence of $\widehat u=\widehat w\circ F$. Thus we need only to calculate $\nu_1$ and $\nu_2$ in terms of $\sigma$. Note that by (\ref{mu}), \beq\label{L106}
|\mu_1|^2= \frac {\text{tr}\,(\sigma)-2\det(\sigma)^{1/2}} {\text{tr}\,(\sigma)+2\det(\sigma)^{1/2}}. \eeq We recall that \beq\label{L107} \mu_2= \frac {1-\det(\sigma)^{1/2}} {1+\det(\sigma)^{1/2}} \eeq and thus
\ba 1-|\mu_1|^2\mu_2^2= \frac {4(\det(\sigma)^{1/2}\text{tr}\,(\sigma)+ (1+\det(\sigma))\det(\sigma)^{1/2})} {(1+\det(\sigma)^{1/2})^2(\text{tr}\,(\sigma)+ 2\det(\sigma)^{1/2})} \ea which readily yields (\ref{L104}) from (\ref{L101}).
Note that since $\nu_1$ and $\nu_2$ uniquely determine $\mu_1$ and $\mu_2$, by (\ref{L106}) and (\ref{L107}) they also determine $\det (\sigma)$ and $\hbox{tr}(\sigma)$. After observing this, it is clear from (\ref{L104}) that $\sigma$ is uniquely determined by $\nu_1 $ and $\nu_2$. \Box
Now one can write equations (\ref{EQ 5}) in a more explicit form \beq\label{EQ 9}
\tau\cdotp \nabla \widehat u|_{\partial \Omega}=\Lambda_\sigma (u|_{\partial \Omega})
\eeq where $\tau=(-n_2,n_1)$ is a unit tangent vector of $\partial \Omega$ and $n=(n_1,n_2)$ is the unit normal (not to be confused with the coefficients $\nu_1,\nu_2$ above). As $\hbox{Re}\, g|_{\partial \Omega}=u|_{\partial \Omega}$
and $\hbox{Im}\, g|_{\partial \Omega}=\widehat u|_{\partial \Omega}$, we see that
$\Lambda_\sigma$ determines the $\sigma$-Hilbert transform ${\cal H}_\sigma$ defined by \beq\label{EQ 9b} {\cal H}_\sigma&:& H^{1/2}(\partial \Omega)\to H^{1/2}(\partial \Omega)/{\mathbb C},\\ \nonumber
& &\hbox{Re}\, g|_{\partial \Omega}\mapsto \hbox{Im}\, g|_{\partial \Omega}+{\mathbb C}. \eeq Put in other terms, for $u,\widehat u\in H^{1/2}(\partial \Omega)$, $\widehat u={\cal H}_\sigma u$ if and only if the map $g(\xi)=(u+i\widehat u)(\xi),$ $\xi \in \partial \Omega$, extends to $\Omega$ so that (\ref{L100}) is satisfied.
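For the constant conductivity $\sigma\equiv 1$, ${\cal H}_\sigma$ is the classical conjugate-function operator on $\partial \Disc$, acting on Fourier modes by $e^{ik\theta}\mapsto -i\,\hbox{sgn}(k)\,e^{ik\theta}$. The following sketch illustrates this model case numerically (a naive DFT; function name ours):

```python
import cmath, math

def conjugate_function(u):
    """Harmonic conjugate on the unit circle (the sigma = 1 case of
    H_sigma) via the Fourier multiplier e^{ik t} -> -i sign(k) e^{ik t}."""
    N = len(u)
    # naive DFT: c[k] = (1/N) sum_j u_j e^{-2 pi i j k / N}
    c = [sum(u[j] * cmath.exp(-2j * math.pi * j * k / N) for j in range(N)) / N
         for k in range(N)]
    # apply -i sign(k), interpreting frequencies k in (-N/2, N/2]
    for k in range(N):
        kk = k if k <= N // 2 else k - N
        c[k] *= -1j * (0.0 if kk == 0 else math.copysign(1.0, kk))
    # inverse DFT, keeping the real part
    return [sum(c[k] * cmath.exp(2j * math.pi * j * k / N) for k in range(N)).real
            for j in range(N)]
```

For instance, the conjugate of $\cos 3\theta$ is $\sin 3\theta$, which the multiplier reproduces exactly up to rounding.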
Summarizing the previous results, we have \begin{lemma}\label{lem: 2} The Dirichlet-to-Neumann map $\Lambda_\sigma$ determines the maps $\Lambda_{\widehat \sigma}$ and ${\cal H}_\sigma$. \end{lemma}
\subsection{Solutions of Complex Geometrical Optics}
Next we consider exponentially growing solutions, i.e., solutions of complex geometrical optics, introduced by Calder\'on for linearized inverse problems and by Sylvester and Uhlmann for non-linear inverse problems. In our case, we seek solutions $G(z,k)$, $z\in {\mathbb C}\setminus \Omega$, $k\in {\mathbb C}$ satisfying \beq\label{EQ 10} & &\p_{\overline z} G(z,k)=0 \quad \hbox{for }z\in {\mathbb C}\setminus \overline \Omega,\\ \label{EQ 10 asym} & &G(z,k)=e^{ikz}(1+{\cal O}_k(\frac 1{z})),\\ \label{EQ 10 bnd}
& &\hbox{Im}\, G(z,k)|_{z\in \partial \Omega}={\cal H}_\sigma
(\hbox{Re}\, G(z,k)|_{z\in \partial \Omega}). \eeq Here ${\cal O}_k(h(z))$ means a function of $(z,k)$ that satisfies
$|{\cal O}_k(h(z))|\leq C(k)|h(z)|$ for all $z$ with some constant $C(k)$ depending only on $k\in {\mathbb C}$. For the conductivity $\widetilde\sigma$ we consider the corresponding exponentially growing solutions
$W(z,k)$, $z\in {\mathbb C}$, $k\in {\mathbb C}$ where
\beq\label{EQ 11} & &\p_{\overline z} W(z,k)=\widetilde \mu_{2}(z)\overline {\p_z W(z,k)}, \quad \hbox{for }z\in {\mathbb C},\\ \label{EQ 11 b} & &W(z,k)=e^{ikz}(1+{\cal O}_k(\frac 1 z)). \eeq Note that at this stage, $z\mapsto G(z,k)$ is defined only in the exterior domain ${\mathbb C}\setminus \Omega$ but $z\mapsto W(z,k)$ in the whole complex plane. These two solutions are closely related:
\begin{lemma}\label{lem: 3} For all $k\in {\mathbb C}$ we have:
i. The system (\ref{EQ 11}) has a unique solution $W(z,k)$ in $ {\mathbb C}$.\\
ii. The system (\ref{EQ 10}--\ref{EQ 10 bnd}) has a unique solution $G(z,k)$ in ${\mathbb C}\setminus \Omega$.\\
iii. For $z\in {\mathbb C}\setminus \Omega$ we have \beq\label{EQ 11b} G(z,k)=W(F(z),k). \eeq \end{lemma}
\noindent{\bf Proof.}\quad For the claim i. we refer to \cite[Theorem 4.2]{AP}.
Next we consider ii. and iii. simultaneously. Assume that $G(z,k)$ is a solution of (\ref{EQ 10}--\ref{EQ 10 bnd}). By Lemma \ref{lemma21} and boundary condition (\ref{EQ 10 bnd}) we see that equation \ba & &\overline \partial h(z,k) = \nu_1\p_z h+\nu_2 \overline{\p_z h}, \quad \hbox{in }\Omega, \\
& &h(\cdotp,k)|_{\partial \Omega}=G(\cdotp,k)|_{\partial \Omega} \ea has a unique solution where $\nu_1$ and $\nu_2$ are given in (\ref{L104}).
Let \beq\label{extensions} H(z,k)=\begin{cases} G(z,k)&\quad \hbox{for } z\in {\mathbb C}\setminus \Omega\\ h(z,k)&\quad \hbox{for } z\in \Omega\end{cases} \eeq and $\widetilde H(z,k)=H(F^{-1}(z),k)$. Then $\widetilde H(z,k)$ satisfies equations \ba & &\p_{\overline z} \widetilde H(z,k)=0, \quad \hbox{for }z\in {\mathbb C}\setminus \widetilde \Omega,\\ & &\p_{\overline z} \widetilde H(z,k)=\widetilde \mu_{2}(z)\overline {\p_z \widetilde H(z,k)}, \quad \hbox{for }z\in \widetilde \Omega, \ea and the traces from both sides of $\partial\widetilde \Omega$ coincide. Thus $\widetilde H(z,k)$ satisfies the equation in (\ref{EQ 11}). Now (\ref{asympt 2}) and (\ref{EQ 10 asym}) yield that \beq\label{EQ 11c} \widetilde H(z,k)&=&H(F^{-1}(z),k)\\ &=& \nonumber
\hbox{exp}(ikF^{-1}(z))(1+{\cal O}_k(\frac 1{1+|F^{-1}(z)|}))\\ &=& \nonumber
\hbox{exp}(ikz)(1+{\cal O}_k(\frac 1{1+|z|})) \eeq showing that $\widetilde H$ satisfies (\ref{EQ 11}--\ref{EQ 11 b}). Thus
by i., $\widetilde H(z,k)=W(z,k)$. This proves both ii. and iii. \Box
\subsection{Proof of Theorem \ref{theorem1}} As $G(z,k)$ is the unique solution of (\ref{EQ 10}--\ref{EQ 10 bnd}) and the operator appearing in boundary condition
(\ref{EQ 10 bnd}) is known, Lemmata \ref{lem: 2} and \ref{lem: 3} imply the following:
\begin{lemma}\label{lem: 3b} The Dirichlet-to-Neumann map $\Lambda_\sigma$ determines $G(z,k)$, $z\in {\mathbb C}\setminus \Omega$, $k\in {\mathbb C}$. \end{lemma}
Next we use this to find the diffeomorphism $F_\sigma$ outside $\Omega$.
\begin{lemma}\label{lem: 4} The Dirichlet-to-Neumann map $\Lambda_\sigma$ determines the restriction $F_\sigma|_{{\mathbb C}\setminus \Omega}$. \end{lemma}
\noindent{\bf Proof.}\quad By (\ref{EQ 11b}), $G(z,k)=W(F(z),k)$, where $W(z,k)$ is the exponentially growing solution corresponding to the isotropic conductivity $\widetilde\sigma$. Thus by applying the sub-exponential growth results for such solutions, \cite[Lemma 7.1 and Thm. 7.2]{AP}, we have the representation \beq\label{EQ 17} W(z,k)=\hbox{exp}(ik\varphi(z,k)) \eeq where \beq\label{EQ 18}
\lim_{k\to \infty} \sup_{z\in {\mathbb C}}|\varphi(z,k)-z|=0. \eeq As $F(z)=z+{\cal O}(1/z)$ and $G(z,k)=W(F(z),k)$, we have \beq\label{EQ 19} \lim_{k\to \infty} \frac{\log G(z,k)}{ik}= \lim_{k\to \infty} \varphi(F(z),k)=F(z). \eeq By Lemma \ref{lem: 3b} we know the values of the limit (\ref{EQ 19}) for any $z\in {\mathbb C}\setminus\Omega$. Thus the claim is proven. \Box
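As an illustration of (\ref{EQ 17})--(\ref{EQ 19}) in the trivial case $\sigma\equiv 1$, where $F=\hbox{id}$ and $G(z,k)=e^{ikz}$, the logarithm recovers $z$ already at finite $k$. A toy sketch (the principal branch of the logarithm is unambiguous as long as $|\hbox{Im}\,(ikz)|<\pi$; function name ours):

```python
import cmath

def recover_point(z, k):
    """Evaluate log G(z,k) / (ik) for the trivial conductivity sigma = 1,
    where G(z,k) = exp(ikz); by (EQ 19) this should return F(z) = z,
    provided |Im(ikz)| < pi so the principal log introduces no branch shift."""
    G = cmath.exp(1j * k * z)
    return cmath.log(G) / (1j * k)
```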
We are ready to prove Theorem \ref{theorem1}.
\noindent{\bf Proof.}\quad As we know $F|_{{\mathbb C}\setminus \Omega}\in W^{1,p}$, $2<p<p(C_0)$,
we in particular know $\widetilde \Omega={\mathbb C}\setminus (F({\mathbb C}\setminus \Omega)).$ When $u$ is the solution of conductivity equation (\ref{conduct}) with Dirichlet boundary value $\phi$ and $w$ is the solution of (\ref{EQ 1}) with Dirichlet boundary value $\widetilde \phi
=\phi\circ h$, where $h=F^{-1}|_{\partial \widetilde \Omega}$, we see that \beq\label{kaava 1 apu} \int_{\partial \widetilde \Omega}\widetilde \phi \Lambda_{\widetilde\sigma}\widetilde \phi\, dS = Q_{\widetilde\sigma,\widetilde\Omega}(w)= Q_{\sigma,\Omega}(u)=\int_{ \partial \Omega}\phi \Lambda_{\sigma}\phi\, dS. \eeq Here, the second identity is justified by the fact that $F$ is quasiconformal and hence satisfies (\ref{Kari 9}). Since $ \Lambda_{\sigma}$ and $\Lambda_{\widetilde\sigma}$ are symmetric, this implies \beq\label{kaava 1} \int_{\partial \widetilde \Omega}\widetilde \psi \Lambda_{\widetilde\sigma}\widetilde \phi\, dS = \int_{ \partial \Omega}\psi \Lambda_{\sigma}\phi\, dS \eeq for any $\widetilde \psi ,\widetilde \phi\in H^{1/2}(\partial \widetilde \Omega)$ and $ \psi , \phi\in H^{1/2}(\partial \Omega)$ related by $\widetilde \psi= \psi\circ h$ and $\widetilde \phi= \phi\circ h$. Note that
$\phi\in H^{1/2}(\partial \Omega)$ if and only if $\widetilde \phi=\phi \circ h\in H^{1/2}(\partial \widetilde\Omega)$. To see this, extend $\phi$ to a $H^1(\Omega)$ function and after that define $\widetilde \phi$ in the interior of $\widetilde\Omega$ by $\widetilde \phi=\phi\circ F^{-1}$. Now \ba
||\nabla \phi||_{L^2(\Omega)}^2 \sim \int_{\Omega } \nabla \phi\cdotp \sigma \overline {\nabla \phi}\,dx \sim \int_{\widetilde\Omega } \nabla \widetilde\phi\cdotp \widetilde\sigma \overline {\nabla \widetilde\phi}\,dx
\sim ||\nabla \widetilde\phi||_{L^2(\widetilde\Omega)}^2 \ea and hence \ba
||\phi||_{H^{1/2}(\partial \Omega)}^2 \sim
|| \phi||_{H^1(\Omega)}^2 \sim
||\widetilde \phi||_{H^1(\widetilde \Omega)}^2 \sim
||\widetilde \phi||_{H^{1/2}(\partial \widetilde \Omega)}^2. \ea
As we know $F|_{{\mathbb C}\setminus \Omega}$ and $\Lambda_\sigma$, we can find $\Lambda_{\widetilde \sigma}$ using formula (\ref{kaava 1}). By \cite{AP}, the map $\Lambda_{\widetilde \sigma}$ determines uniquely the conductivity $\widetilde\sigma$ on $\widetilde \Omega$ in a constructive manner.
Knowing $\Omega$, $\widetilde \Omega$, and the boundary value $f=F|_{\partial \Omega}$ of the map $F:\Omega\to \widetilde \Omega$, we next construct a sufficiently smooth diffeomorphism $H:\widetilde \Omega\to \Omega$. First, by the Riemann mapping theorem we can map $\Omega$ and $\widetilde \Omega$ to the unit disc $\Disc$ by the conformal maps $R$ and $\widetilde R$, respectively. Now \ba G=\widetilde R\circ F\circ R^{-1}:\Disc\to\Disc \ea
is a quasiconformal map and since we know $R$ and $\widetilde R$, we know the function $g=G|_{\partial \Disc}$ mapping $\partial \Disc$ onto itself. The map $g$ is quasisymmetric (cf. \cite{Alf}) and by the Ahlfors-Beurling extension theorem \cite[Thm. IV.2]{Alf} it has a quasiconformal extension ${\mathcal AB}(g)$ mapping $\overline \Disc$ onto itself. Note that one can obtain ${\mathcal AB}(g)$ from $g$ constructively by an explicit formula
\cite[p. 69]{Alf}. Thus we may find a quasiconformal diffeomorphism $H=R^{-1} \circ[{\mathcal AB}(g)]^{-1}\circ \widetilde R$, $H:\widetilde \Omega\to \Omega$, satisfying $H|_{\partial \widetilde \Omega}= F^{-1}|_{\partial \widetilde \Omega}$.
Combining the above results, we can find $H_*\widetilde\sigma$, which is a representative of the equivalence class $E_\sigma$. \Box
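For orientation, we record the half-plane analogue of the Ahlfors-Beurling extension used above: with one common normalization (constants vary between references), a quasisymmetric $g:{\mathbb R}\to{\mathbb R}$ extends to $u+iv$ with $u(x,y)=\frac 12\int_0^1 (g(x+ty)+g(x-ty))\,dt$ and $v(x,y)=\frac 12\int_0^1 (g(x+ty)-g(x-ty))\,dt$. A numerical sketch (function name ours):

```python
def beurling_ahlfors(g, x, y, n=2000):
    """Ahlfors-Beurling-type extension of a boundary homeomorphism g of
    the real line to the point x + iy in the upper half plane, by midpoint
    quadrature of the averaging formula (one common normalization)."""
    dt = 1.0 / n
    u = v = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        u += 0.5 * (g(x + t * y) + g(x - t * y)) * dt
        v += 0.5 * (g(x + t * y) - g(x - t * y)) * dt
    return complex(u, v)
```

For the identity boundary map $g(s)=s$ this normalization gives $u+iv = x + iy/2$, a quasiconformal (not conformal) self-map of the half plane.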
In the above proof the Riemann mappings cannot be found as explicitly as the Ahlfors-Beurling extension. However, there are numerical packages for the approximate construction of Riemann mappings, see e.g. \cite{webbi}.
\section{PROOFS OF CONSEQUENCES OF MAIN RESULT} \label{sec: proof of con}
Here we give proofs of Theorems \ref{Lem: A1}--\ref{Lem: A3}.
{\bf Proof of Theorem \ref{Lem: A1}.} Let $F:{\mathbb R}_-^2={\mathbb R}+i{\mathbb R}_-\to \Disc $ be the M\"obius transform \ba F(z)=\frac {z+i}{z-i}. \ea Since this map is conformal, we see that $C_0(F_*\sigma)=C_0(\sigma).$ Let $\widetilde \sigma=F_*\sigma$ be the conductivity in $\Disc $. Then $\Lambda_{\widetilde \sigma}\phi$ is determined as in (\ref{kaava 1}) for all $\phi\in C^\infty_0(\partial \Disc \setminus \{1\})$.
Since $\Lambda_{\widetilde \sigma}1=0$ and the functions in ${\mathbb C}\oplus C^\infty_0(\partial \Disc \setminus \{1\})$ are dense in the space $H^{1/2}(\partial \Disc )$, we see that $\Lambda_{\sigma}$ determines the Dirichlet-to-Neumann map $\Lambda_{\widetilde \sigma}$ on $\partial \Disc $. Thus we can find the equivalence class of the conductivity on $\Disc $. Pushing these conductivities forward with $F^{-1}$ to ${\mathbb R}^2_-$, we obtain the claim. \Box
{\bf Proof of Theorem \ref{Lem: A2}.} Let $F:S\to \Disc \setminus \{0\}$ be the conformal map such that \ba \lim_{z\to \infty}F(z)=0. \ea Again, since this map is conformal we have for $\widetilde \sigma=F_*\sigma$ the equality $C_0(\sigma)=C_0(F_*\sigma).$ Moreover, if $u$ is a solution of (\ref{conduct S}), we have that $w=u\circ F^{-1}$ is a solution of \beq\label{conduct S2} \nabla\cdotp \widetilde \sigma\nabla w&=& 0\hbox{ in } \Disc \setminus \{0\}, \\
w|_{\partial \Disc}&=&\phi\circ F^{-1},\nonumber\\ w&\in& L^\infty( \Disc)\nonumber. \eeq Since the set $\{0\}$ has capacity zero in $\Disc $, we
see that $w=W|_{\Disc \setminus \{0\}}$ where \beq\label{conduct S3} \nabla\cdotp \widetilde \sigma\nabla W&=& 0\hbox{ in } \Disc , \\
W|_{\partial \Disc}&=&\phi\circ F^{-1}.\nonumber \eeq Since $F$ can be constructed via the Riemann mapping theorem, we see that $\Lambda_\sigma$ determines $\Lambda_{\widetilde \sigma}$ on $\partial \Disc $ and thus the equivalence class $E_{\widetilde \sigma}$. When $\widetilde F:\Disc \to \Disc $ is a boundary preserving diffeomorphism, we see that $F^{-1}\circ\widetilde F\circ F$ defines a diffeomorphism $\overline S\to \overline S$. Since we have determined the conductivity $\widetilde \sigma$ up to a boundary preserving diffeomorphism, the claim follows easily. \Box
{\bf Proof of Theorem \ref{Lem: A3}.} Let $\Disc\subset {\mathbb C}$ be the unit disc and
$\Disc_+=\{z\in \Disc\ | \ \hbox{Im}\, z>0\}$. Let $F:\Omega\to \Disc_+$ be a Riemann mapping such that \ba \Disc_+\subset {\mathbb R}\times {\mathbb R}_+,\quad F(\Gamma)=\partial \Disc_+\setminus ({\mathbb R}\times \{0\}),\quad F(\partial \Omega \setminus \Gamma)=\partial \Disc_+\cap ({\mathbb R}\times \{0\}). \ea Let $\eta:(x^1,x^2)\mapsto (x^1,-x^2)$ and define $\Disc_-=\eta (\Disc_+)$, and $\widetilde \sigma=F_*\sigma$. Let \ba \widehat \sigma(x)= \begin{cases} \widetilde \sigma(x) &\quad \hbox{for } x\in \Disc_+, \\ (\eta_*\widetilde \sigma)(x) &\quad \hbox{for } x\in \Disc_-.\end{cases} \ea Consider the equation \beq\label{conduct D+-} \nabla\cdotp \widehat \sigma\nabla w&=& 0\hbox{ in } \Disc. \eeq Using formula (\ref{kaava 1}) we see that $F$ and $\Lambda_\Gamma$ determine the corresponding map $\Lambda_{F(\Gamma)}$ for $\widehat \sigma$. Similarly, we can find $\Sigma_{F(\Gamma)}$ for $\widehat \sigma$.
Then $\Lambda_{F(\Gamma)}$ determines the Cauchy data on the boundary for the solutions of (\ref{conduct D+-}) for which $w\in H^1(\Disc)$, $w=-w\circ \eta$. On the other hand,
$\Sigma_{F(\Gamma)}$ determines the Cauchy data on the boundary of the solutions of (\ref{conduct D+-}) for which $w\in H^1(\Disc)$ and $w=w\circ \eta$. Now each solution $w$ of (\ref{conduct D+-}) can be written as a linear combination \ba w(x)=\frac 12(w(x)+w(\eta(x)))+\frac 12(w(x)-w(\eta(x))). \ea Thus the maps $\Lambda_{F(\Gamma)}$ and $\Sigma_{F(\Gamma)}$ together determine $C_{\widehat \sigma}$, and hence we can find $\widehat \sigma$ up to a diffeomorphism. We can choose a representative $\widehat \sigma_0$ of the equivalence class $E_{\widehat \sigma}$ such that
$\widehat \sigma_0= \widehat \sigma_0\circ \eta$. In fact, choosing a symmetric Ahlfors-Beurling extension in the construction given in the proof of Theorem \ref{theorem1}, we obtain such a conductivity. Pushing the conductivity $\widehat \sigma_0$ from $\Disc_+$ to $\Omega$ with $F^{-1}$, we obtain the claim. \Box
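The symmetric/antisymmetric splitting used above is elementary but worth recording; a minimal sketch for a general involution $\eta$ (names ours):

```python
def reflect_split(w, eta):
    """Split w into its even and odd parts under an involution eta
    (eta(eta(x)) = x), as in the decomposition of solutions above:
    w = even + odd, even o eta = even, odd o eta = -odd."""
    even = lambda x: 0.5 * (w(x) + w(eta(x)))
    odd = lambda x: 0.5 * (w(x) - w(eta(x)))
    return even, odd
```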
\end{document}
\begin{document}
\title{A Higher Order Godunov Method for Radiation Hydrodynamics: Radiation Subsystem} \author{Michael Sekora$^{\dagger}$ \\ James Stone$^{\dagger,~\ddagger}$ \\ \textit{{\small Program in Applied and Computational Mathematics$^{\dagger}$}} \\ \textit{{\small Department of Astrophysical Sciences$^{\ddagger}$}} \\ \textit{{\small Princeton University, Princeton, NJ 08540, USA}}} \date{1 December 2008} \maketitle
\begin{abstract} \noindent A higher order Godunov method for the radiation subsystem of radiation hydrodynamics is presented. A key ingredient of the method is the direct coupling of stiff source term effects to the hyperbolic structure of the system of conservation laws; it is composed of a predictor step that is based on Duhamel's principle and a corrector step that is based on Picard iteration. The method is second order accurate in both time and space, unsplit, asymptotically preserving, and uniformly well behaved from the photon free streaming (hyperbolic) limit through the weak equilibrium diffusion (parabolic) limit and to the strong equilibrium diffusion (hyperbolic) limit. Numerical tests demonstrate second order convergence across various parameter regimes. \end{abstract}
\pagestyle{myheadings} \markboth{M. Sekora, J. Stone}{A Higher Order Godunov Method for Radiation Hydrodynamics}
\section{Introduction} \noindent Radiation hydrodynamics is a fluid description of matter (plasma) that absorbs and emits electromagnetic radiation and in so doing modifies its dynamical behavior. The coupling between matter and radiation is significant in many phenomena related to astrophysics and plasma physics, where radiation comprises a major fraction of the internal energy and momentum and provides the dominant transport mechanism. Radiation hydrodynamics governs the physics of radiation driven outflows, supernovae, accretion disks, and inertial confinement fusion \cite{castorbook, mmbook}. Such physics is described mathematically by a nonlinear system of conservation laws that is obtained by taking moments of the Boltzmann and photon transport equations. A key difficulty is choosing the frame of reference in which to take the moments of the photon transport equation. In the comoving and mixed frame approaches, one captures the matter/radiation coupling by adding relativistic source terms correct to $\mathcal{O}(u/c)$ to the right-hand side of the conservation laws, where $u$ is the material flow speed and $c$ is the speed of light. These source terms are stiff because of the variation in time/length scales associated with such problems \cite{mk1982}. This stiffness causes numerical difficulties and makes conventional methods such as operator splitting and the method of lines break down \cite{leveque1, leveque2}. \\
\noindent Previous research in numerically solving radiation hydrodynamical problems was carried out by Castor 1972, Pomraning 1973, Mihalas \& Klein 1982, and Mihalas \& Mihalas 1984 \cite{mmbook, mk1982, castor1972, pomraning1973}. There are a variety of algorithms for radiation hydrodynamics. One of the simplest approaches was developed by Stone, Mihalas, \& Norman 1992 and implemented in the ZEUS code, which was based on operator splitting and Crank-Nicholson finite differencing \cite{stone1992}. Since then, higher order Godunov methods have emerged as a valuable technique for solving hyperbolic conservation laws (e.g., hydrodynamics), particularly when shock capturing and adaptive mesh refinement are important \cite{athena}. However, developing upwind differencing methods for radiation hydrodynamics is a difficult mathematical and computational task. In many cases, Godunov methods for radiation hydrodynamics either: $(i)$ neglect the heterogeneity of weak/strong coupling and solve the system of equations in an extreme limit \cite{dai1, dai2}, $(ii)$ are based on a manufactured limit and solve a new system of equations that attempts to model the full system \cite{jin, buet}, or $(iii)$ use a variation on flux limited diffusion \cite{levermore1981, heracles2007}. None of these approaches treats the full generality of the problem. For example, in a series of papers, Balsara 1999 proposed a Riemann solver for the full system of equations \cite{balsara1999}. However, as pointed out by Lowrie \& Morel 2001, Balsara's method failed to maintain coupling between radiation and matter. Moreover, Lowrie \& Morel were critical of the likelihood of developing a Godunov method for full radiation hydrodynamics \cite{lowrie2001}. \\
\noindent In radiation hydrodynamics, there are three important dynamical scales and each scale is associated with either the material flow (speed of sound), radiation flow (speed of light), or source terms. When the matter-radiation coupling is strong, the source terms define the fastest scale. However, when the matter-radiation coupling is weak, the source terms define the slowest scale. Given such variation, one aims for a scheme that treats the stiff source terms implicitly. Following work by Miniati \& Colella 2007, this paper presents a method that is a higher order modified Godunov scheme that directly couples stiff source term effects to the hyperbolic structure of the system of conservation laws; it is composed of a predictor step that is based on Duhamel's principle and a corrector step that is based on Picard iteration \cite{mc2007}. The method is explicit on the fastest hyperbolic scale (radiation flow) but is unsplit and fully couples matter and radiation with no approximation made to the full system of equations for radiation hydrodynamics. \\
\noindent A challenge for the modified Godunov method is its use of explicit time differencing when there is a large range in the time scales associated with the problem, $c / a_{\infty} \gg 1$ where $a_{\infty}$ is the reference material sound speed. One could have built a fully implicit method that advanced time according to the material flow scale, but a fully implicit approach was not pursued because such methods often have difficulties associated with conditioning, are expensive because of matrix manipulation and inversion, and are usually built into central difference schemes rather than higher order Godunov methods. An explicit method may even outperform an implicit method if one considers applications that have flows where $c / a_{\infty} \lesssim 10$. A modified Godunov method that is explicit on the fastest hyperbolic scale (radiation flow) as well as a hybrid method that incorporates a backward Euler upwinding scheme for the radiation components and the modified Godunov scheme for the material components are under construction for full radiation hydrodynamics. A goal of future research is to directly compare these two methods in various limits for different values of $c / a_{\infty}$.
\section{Radiation Hydrodynamics} \noindent The full system of equations for radiation hydrodynamics in the Eulerian frame that is correct to $\mathcal{O}(1/\mathbb{C})$ is: \begin{equation} \partdif{\rho}{t} + \nabla \cdot \left( \mathbf{m} \right) = 0 , \end{equation} \begin{equation} \partdif{\mathbf{m}}{t} + \nabla \cdot \left( \frac{\mathbf{m} \otimes \mathbf{m}}{\rho} \right) + \nabla p = -\mathbb{P} \left [ -\sigma_t \left( \mathbf{F_r} - \frac{ \mathbf{u}E_r + \mathbf{u} \cdot \mathsf{P_r} }{\mathbb{C}} \right) + \sigma_a \frac{\mathbf{u}}{\mathbb{C}} (T^4 - E_r) \right ] , \end{equation} \begin{equation} \partdif{E}{t} + \nabla \cdot \left( (E+p) \frac{\mathbf{m}}{\rho} \right) = -\mathbb{P} \mathbb{C} \left [ \sigma_a(T^4 - E_r) + (\sigma_a - \sigma_s) \frac{\mathbf{u}}{\mathbb{C}} \cdot \left( \mathbf{F_r} - \frac{ \mathbf{u}E_r + \mathbf{u} \cdot \mathsf{P_r} }{\mathbb{C}} \right) \right ] , \end{equation} \begin{equation} \partdif{E_r}{t} + \mathbb{C} \nabla \cdot \mathbf{F_r} = \mathbb{C} \left [ \sigma_a(T^4 - E_r) + (\sigma_a - \sigma_s) \frac{\mathbf{u}}{\mathbb{C}} \cdot \left( \mathbf{F_r} - \frac{ \mathbf{u}E_r + \mathbf{u} \cdot \mathsf{P_r} }{\mathbb{C}} \right) \right ] \label{eq:rad_sys_e}, \end{equation} \begin{equation} \partdif{ \mathbf{F_r} }{t} + \mathbb{C} \nabla \cdot \mathsf{P_r} = \mathbb{C} \left [ -\sigma_t \left( \mathbf{F_r} - \frac{ \mathbf{u}E_r + \mathbf{u} \cdot \mathsf{P_r} }{\mathbb{C}} \right) + \sigma_a \frac{\mathbf{u}}{\mathbb{C}} (T^4 - E_r) \right ] \label{eq:rad_sys_f} , \end{equation} \begin{equation} \mathsf{P_r} = \mathsf{f} E_r ~~ \textrm{(closure relation)} \label{eq:rad_closure} . \end{equation}
\noindent For the material quantities, $\rho$ is density, $\mathbf{m}$ is momentum, $p$ is pressure, $E$ is total energy density, and $T$ is temperature. For the radiative quantities, $E_r$ is energy density, $\mathbf{F_r}$ is flux, $\mathsf{P_r}$ is pressure, and $\mathsf{f}$ is the variable tensor Eddington factor. In the source terms, $\sigma_a$ is the absorption cross section, $\sigma_s$ is the scattering cross section, and $\sigma_t = \sigma_a + \sigma_s$ is the total cross section. \\
\noindent Following the presentation of Lowrie, Morel, \& Hittinger 1999 and Lowrie \& Morel 2001, the above system of equations has been non-dimensionalized with respect to the material flow scale so that one can compare hydrodynamical and radiative effects as well as identify terms that are $\mathcal{O}(u/c)$. This scaling gives two important parameters: $\mathbb{C} = c / a_{\infty}$, $\mathbb{P} = \frac{a_r T^4_{\infty}}{\rho_{\infty}a^2_{\infty}}$. $\mathbb{C}$ measures relativistic effects while $\mathbb{P}$ measures how radiation affects material dynamics and is proportional to the equilibrium radiation pressure over material pressure. $a_r = \frac{8 \pi^5 k^4}{15 c^3h^3}$ is a radiation constant, $T_{\infty}$ is the reference material temperature, and $\rho_{\infty}$ is the reference material density. \\
\noindent For this system of equations, one has assumed that scattering is isotropic and coherent in the comoving frame, emission is defined by local thermodynamic equilibrium (LTE), and that spectral averages for the cross-sections can be employed (gray approximation). The coupling source terms are given by the modified Mihalas-Klein description \cite{lowrie2001, lowrie1999} which is more general and more accurate than the original Mihalas-Klein source terms \cite{mk1982} because it maintains an important $\mathcal{O}(1 / \mathbb{C}^2)$ term that ensures the correct equilibrium state and relaxation rate to equilibrium \cite{lowrie2001, lowrie1999}. \\
\noindent Before investigating full radiation hydrodynamics, it is useful to examine the radiation subsystem, which is a simpler system that minimizes complexity while maintaining the rich hyperbolic-parabolic behavior associated with the stiff source term conservation laws. This simpler system allows one to develop a reliable and robust numerical method. Consider Equations \ref{eq:rad_sys_e}, \ref{eq:rad_sys_f} for radiation hydrodynamics in one spatial dimension not affected by transverse flow. If one only considers radiative effects and holds the material flow stationary such that $u \rightarrow 0$, then the conservative variables, fluxes, and source terms for the radiation subsystem are given by: \begin{equation} \partdif{E_r}{t} + \mathbb{C} \partdif{F_r}{x} = \mathbb{C} \sigma_a (T^4 - E_r) \label{eq:rad_sub_e} , \end{equation} \begin{equation} \partdif{F_r}{t} + \mathbb{C} f \partdif{E_r}{x} = -\mathbb{C} \sigma_t F_r \label{eq:rad_sub_f} . \end{equation}
\noindent Motivated by the asymptotic analysis of Lowrie, Morel, \& Hittinger 1999 for full radiation hydrodynamics, one investigates the limiting behavior for this simpler system of equations. For non-relativistic flows $1/\mathbb{C} = \mathcal{O}(\epsilon)$, where $\epsilon \ll 1$. Assume that there is a moderate amount of radiation in the flow such that $\mathbb{P} = \mathcal{O}(1)$. Furthermore, assume that scattering effects are small such that $\sigma_s / \sigma_t = \mathcal{O}(\epsilon)$. Lastly, assume that the optical depth can be represented as $\mathcal{L} = \ell_{\textrm{mat}} / \lambda_t = \ell_{\textrm{mat}} ~ \sigma_t$, where $\lambda_t$ is the total mean free path of the photons and $\ell_{\textrm{mat}} = \mathcal{O}(1)$ is the material flow length scale \cite{lowrie1999}. \\
\noindent \textbf{Free Streaming Limit $\sigma_a, \sigma_t \sim \mathcal{O}(\epsilon)$:} In this regime, the right-hand-side of Equations \ref{eq:rad_sub_e} and \ref{eq:rad_sub_f} is negligible such that the system is strictly hyperbolic. $f \rightarrow 1$ and the Jacobian of the quasilinear conservation law has eigenvalues $\pm \mathbb{C}$: \begin{equation} \partdif{E_r}{t} + \mathbb{C} \partdif{F_r}{x} = 0 \label{eq:stream_e} , \end{equation} \begin{equation} \partdif{F_r}{t} + \mathbb{C} \partdif{E_r}{x} = 0 \label{eq:stream_f} . \end{equation}
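Since Equations \ref{eq:stream_e}--\ref{eq:stream_f} decouple into the advected characteristic variables $E_r \pm F_r$, they admit d'Alembert-type solutions $E_r = a(x-\mathbb{C}t)+b(x+\mathbb{C}t)$, $F_r = a(x-\mathbb{C}t)-b(x+\mathbb{C}t)$. A finite-difference residual check of this with sample profiles $a=\sin$, $b=\cos$ (a sketch, not from the paper):

```python
import math

def residuals(C, x, t, h=1e-5):
    """PDE residuals of the free-streaming system at the point (x, t) for
    the d'Alembert solution E = a(x-Ct) + b(x+Ct), F = a(x-Ct) - b(x+Ct),
    evaluated with central differences; both should vanish to O(h^2)."""
    a, b = math.sin, math.cos
    E = lambda x, t: a(x - C * t) + b(x + C * t)
    F = lambda x, t: a(x - C * t) - b(x + C * t)
    d_t = lambda g, x, t: (g(x, t + h) - g(x, t - h)) / (2 * h)
    d_x = lambda g, x, t: (g(x + h, t) - g(x - h, t)) / (2 * h)
    return (d_t(E, x, t) + C * d_x(F, x, t),   # residual of (10)
            d_t(F, x, t) + C * d_x(E, x, t))   # residual of (11)
```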
\noindent \textbf{Weak Equilibrium Diffusion Limit $\sigma_a, \sigma_t \sim \mathcal{O}(1)$ :} One obtains this limit by plugging in $\sigma_a, \sigma_t \sim \mathcal{O}(1)$, matching terms of like order, and combining the resulting equations. From the definition of the equilibrium state, $E_r = T^4$ and $F_r = -\frac{1}{\sigma_t} \partdif{P_r}{x}$. Therefore, the system is parabolic and resembles a diffusion equation, where $f \rightarrow 1/3$: \begin{equation} \partdif{E_r}{t} = \frac{\mathbb{C}}{3 \sigma_t} \partdif{^2 E_r}{x^2} \label{eq:weak_e} , \end{equation} \begin{equation} F_r = -\frac{1}{3 \sigma_t} \partdif{E_r}{x} \label{eq:weak_f} . \end{equation}
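The balance behind Equations \ref{eq:weak_e}--\ref{eq:weak_f} can be made explicit; the following display is a brief sketch of the matching argument:

```latex
% O(C) balance of Eq. (\ref{eq:rad_sub_f}): the time derivative is lower
% order, so
\mathbb{C} f \partdif{E_r}{x} = -\mathbb{C} \sigma_t F_r
\quad\Longrightarrow\quad
F_r = -\frac{f}{\sigma_t} \partdif{E_r}{x} , \qquad f \to \frac{1}{3} .
% Substituting into Eq. (\ref{eq:rad_sub_e}), where E_r = T^4 makes the
% source term vanish at leading order:
\partdif{E_r}{t} = -\mathbb{C} \partdif{F_r}{x}
= \frac{\mathbb{C}}{3 \sigma_t} \partdif{^2 E_r}{x^2} .
```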
\noindent \textbf{Strong Equilibrium Diffusion Limit $\sigma_a, \sigma_t \sim \mathcal{O}(1/\epsilon)$ :} One obtains this limit by plugging in $\sigma_a, \sigma_t \sim \mathcal{O}(1/\epsilon)$ and following the steps outlined for the weak equilibrium diffusion limit. One can consider the system to be hyperbolic, where $f \rightarrow 1/3$ and the Jacobian of the quasilinear conservation law has eigenvalues $\pm \epsilon$: \begin{equation} \partdif{E_r}{t} = 0 \label{eq:strong_e} , \end{equation} \begin{equation} F_r = 0 \label{eq:strong_f} . \end{equation}
\noindent Lowrie, Morel, \& Hittinger 1999 investigated an additional limit for full radiation hydrodynamics, the isothermal regime. This limit has some dynamical properties in common with the weak equilibrium diffusion limit, but its defining characteristic is that the material temperature $T(x,t)$ is constant. When considering the radiation subsystem, there is little difference between the weak equilibrium diffusion and isothermal limits because the material quantities, including the material temperature $T$, do not evolve. $T$ enters the radiation subsystem as a parameter rather than a dynamical quantity.
\section{Higher Order Godunov Method} \noindent In one spatial dimension, systems of conservation laws with source terms have the form: \begin{equation} \partdif{U}{t} + \partdif{F(U)}{x} = S(U) , \end{equation}
\noindent where $U: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}^n$ is an $n$-dimensional vector of conserved quantities. For the radiation subsystem: \begin{equation} U = \left( \begin{array}{c} E_r \\
F_r \end{array} \right) , ~~ F(U) = \left( \begin{array}{c} \mathbb{C} F_r \\
\mathbb{C} f E_r \end{array} \right) , ~~ S(U) = \left( \begin{array}{c} \mathbb{C} S_E \\
\mathbb{C} S_F \end{array} \right) = \left( \begin{array}{c} \mathbb{C} \sigma_a (T^4 - E_r) \\
- \mathbb{C} \sigma_t F_r \end{array} \right) . \nonumber \end{equation}
\noindent The quasilinear form of this system of conservation laws is: \begin{equation} \partdif{U}{t} + A \partdif{U}{x} = S(U) , ~~ A = \partdif{F}{U} = \left( \begin{array}{cc} 0 & \mathbb{C} \\ \mathbb{C} f & 0 \end{array} \right) . \end{equation}
\noindent $A$ has eigenvalues $\lambda = \pm f^{1/2} \mathbb{C}$ as well as right eigenvectors $R$ (stored as columns) and left eigenvectors $L$ (stored as rows): \begin{equation} R = \left( \begin{array}{cc} 1 & 1 \\ - f^{1/2} & f^{1/2} \end{array} \right) , ~~ L = \left( \begin{array}{cc} \frac{1}{2} & -\frac{1}{2} \left( \frac{1}{f} \right)^{1/2} \\ \frac{1}{2} & \frac{1}{2} \left( \frac{1}{f} \right)^{1/2} \end{array} \right) . \end{equation}
\noindent Godunov's method obtains solutions to systems of conservation laws by using characteristic information within the framework of a conservative method: \begin{equation} U^{n+1}_{i} = U^{n}_{i} - \frac{\Delta t}{\Delta x} \left( F_{i+1/2} - F_{i-1/2} \right) + \Delta t S(U^{n}_{i}) . \end{equation}
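The conservative update above amounts to a few array operations per step. The following Python fragment is a minimal sketch of that update for piecewise-constant states; the function names, the argument layout, and the untouched boundary cells are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np

def godunov_step(U, dx, dt, flux, source, riemann):
    """One conservative Godunov update with an explicit source term.

    U       : (n_cells, n_vars) array of cell-averaged conserved quantities
    flux    : evaluates F(U) on an array of states
    source  : evaluates S(U) on an array of states
    riemann : returns the interface state from left/right states
    """
    UL, UR = U[:-1], U[1:]           # piecewise-constant left/right states
    U_face = riemann(UL, UR)         # U_{i+1/2} at interior faces
    F_face = flux(U_face)            # F_{i+1/2} at interior faces
    U_new = U.copy()
    # conservative flux-difference update of the interior cells
    U_new[1:-1] -= (dt / dx) * (F_face[1:] - F_face[:-1])
    U_new += dt * source(U)          # explicit source contribution
    return U_new
```

A constant state with zero source is an exact steady solution of this update, which is a quick consistency check on the flux differencing.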
\noindent Numerical fluxes $F_{i \pm 1/2}$ are obtained by solving the Riemann problem at the cell interfaces with left/right states to get $U_{i \pm 1/2}^{n+1/2}$ and computing $F_{i \pm 1/2} = F(U_{i \pm 1/2}^{n+1/2})$, where $i$ represents the location of a cell center, $i \pm 1/2$ represents the locations of the cell faces to the right and left of $i$, and superscripts represent the time discretization. An HLLE solver (used in this work) or any other approximate Riemann solver may be employed because the Jacobian $\partial F / \partial U$ for the radiation subsystem is a constant-valued matrix and by definition a Roe matrix \cite{leveque1, leveque2, roe1981}. This property also implies that one does not need to transform the system into primitive variables ($\nabla_U W$). The strength of the method presented in this paper is that the spatial reconstruction, eigen-analysis, and cell-centered updating plug directly into conventional Godunov machinery.
\subsection{Predictor Step} \noindent One computes the flux divergence $(\nabla \cdot F)^{n+1/2}$ by using the quasilinear form of the system of conservation laws and the evolution along Lagrangian trajectories: \begin{equation} \frac{DU}{Dt} + A^L \partdif{U}{x} = S(U) , ~~ A^L = A - uI, ~~ \frac{DU}{Dt} = \partdif{U}{t} + \left( u \partdif{}{x} \right) U . \end{equation}
\noindent From the quasilinear form, one derives a system that includes (at least locally in time and state space) the effects of the stiff source terms on the hyperbolic structure. Following the analysis of Miniati \& Colella 2007 and Trebotich et al. 2005 \cite{mc2007, treb2005}, one applies Duhamel's principle to the system of conservation laws, thus giving: \begin{equation} \frac{D U^{\textrm{eff}}}{Dt} = \mathcal{I}_{\dot{S}_n}(\eta) \left( - A^L \partdif{U}{x} + S_n \right) , \end{equation}
\noindent
where $\mathcal{I}_{\dot{S}_n}$ is a propagation operator that projects the dynamics of the stiff source terms onto the hyperbolic structure and $\dot{S}_n = \nabla_U S|_{U_n}$. The subscript $n$ designates time $t=t_n$. Since one is considering a first order accurate predictor step in a second order accurate predictor-corrector method, one chooses $\eta = \Delta t / 2$ and the effective conservation law is: \begin{equation} \frac{D U}{Dt} + \mathcal{I}_{\dot{S}_n}(\Delta t / 2) A^L \partdif{U}{x} = \mathcal{I}_{\dot{S}_n}(\Delta t / 2) S_n , ~ \Rightarrow ~ \partdif{U}{t} + A_{\textrm{eff}} \partdif{U}{x} = \mathcal{I}_{\dot{S}_n}(\Delta t / 2) S_n , \end{equation}
\noindent where $A_{\textrm{eff}} = \mathcal{I}_{\dot{S}_n}(\Delta t / 2) A^L + u I$. In order to compute $\mathcal{I}_{\dot{S}_n}$, one first computes $\dot{S}_n$. Since $\mathbb{C}$, $\sigma_a$, and $\sigma_t$ are constant and one assumes that $\partdif{T}{E_r}, \partdif{T}{F_r} = 0$: \begin{equation} \dot{S_n} = \left( \begin{array}{cc} -\mathbb{C} \sigma_a & 0 \\ 0 & -\mathbb{C} \sigma_t \end{array} \right) . \end{equation}
\noindent $\mathcal{I}_{\dot{S}_n}$ is derived from Duhamel's principle and is given by: \begin{eqnarray} \mathcal{I}_{\dot{S_n}} (\Delta t / 2) & = & \frac{1}{\Delta t/2} \int^{\Delta t/2}_{0} e^{\tau \dot{S_n}} d\tau \\
& = & \left( \begin{array}{cc} \alpha & 0 \\ 0 & \beta \end{array} \right) , ~~~~ \alpha = \frac{1 - e^{-\mathbb{C} \sigma_a \Delta t / 2}}{\mathbb{C} \sigma_a \Delta t / 2} , ~~ \beta = \frac{1 - e^{-\mathbb{C} \sigma_t \Delta t / 2}}{\mathbb{C} \sigma_t \Delta t / 2} . \end{eqnarray}
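Numerically, $\alpha$ and $\beta$ both have the form $\phi(x) = (1 - e^{-x})/x$, which suffers catastrophic cancellation as $x \rightarrow 0$. A hedged sketch of a robust evaluation (the function name and the series-fallback threshold are assumptions made here for illustration):

```python
import math

def propagation_coeffs(C, sigma_a, sigma_t, dt):
    """alpha and beta of the propagation operator I_{S'_n}(dt/2).

    Evaluates phi(x) = (1 - e^{-x}) / x at x = C*sigma*dt/2, with a
    Taylor-series fallback for small x to avoid 0/0 cancellation.
    """
    def phi(x):
        if x < 1e-8:
            return 1.0 - 0.5 * x          # series: 1 - x/2 + x^2/6 - ...
        return -math.expm1(-x) / x        # expm1 keeps accuracy for small x
    return phi(0.5 * C * sigma_a * dt), phi(0.5 * C * sigma_t * dt)
```

The two limits quoted below (free streaming $\alpha, \beta \rightarrow 1$; strong equilibrium diffusion $\alpha, \beta \rightarrow 0$) follow directly from $\phi$.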
\noindent Before applying $\mathcal{I}_{\dot{S_n}}$ to $A^L$, it is important to understand that moving-mesh methods can be accommodated in non-relativistic descriptions of radiation hydrodynamics whenever an Eulerian frame treatment is employed. These methods do not require transformation to the comoving frame \cite{lowrie2001}. Since the non-dimensionalization is associated with the hydrodynamic scale, one can use $u_{\textrm{mesh}} = u$ from Lagrangian hydrodynamic methods. \\
\noindent The effects of the stiff source terms on the hyperbolic structure are accounted for by transforming to a moving-mesh (Lagrangian) frame $A^L = A - uI$, applying the propagation operator $\mathcal{I}_{\dot{S_n}}$ to $A^L$, and transforming back to an Eulerian frame $A_{\textrm{eff}} = \mathcal{I}_{\dot{S_n}} A^L + uI$ \cite{mc2007}. However, because only the radiation subsystem of radiation hydrodynamics is considered, $u_{\textrm{mesh}} = u \rightarrow 0$. Therefore, the effective Jacobian is given by: \begin{equation} A_{\textrm{eff}} = \left( \begin{array}{cc} 0 & \alpha \mathbb{C} \\ \beta f \mathbb{C} & 0 \end{array} \right) \label{eq:a_eff} , \end{equation}
\noindent which has eigenvalues $\lambda_{\textrm{eff}} = \pm (\alpha \beta)^{1/2} f^{1/2} \mathbb{C}$ with the following limits: \begin{eqnarray} \sigma_a, \sigma_t \rightarrow 0 & \Rightarrow & \alpha, \beta \rightarrow 1 ~ \Rightarrow ~ \lambda_{\textrm{eff}} \rightarrow \pm f^{1/2} \mathbb{C}, ~ (\textrm{free streaming}) \nonumber \\ \sigma_a, \sigma_t \rightarrow \infty & \Rightarrow & \alpha, \beta \rightarrow 0 ~ \Rightarrow ~ \lambda_{\textrm{eff}} \rightarrow \pm \epsilon, ~ (\textrm{strong equilibrium diffusion}) \nonumber . \end{eqnarray}
\noindent $A_{\textrm{eff}}$ has right eigenvectors $R_{\textrm{eff}}$ (stored as columns) and left eigenvectors $L_{\textrm{eff}}$ (stored as rows): \begin{equation} R_{\textrm{eff}} = \left( \begin{array}{cc} 1 & 1 \\ -\left ( \frac{\beta f}{\alpha} \right )^{1/2} & \left ( \frac{\beta f}{\alpha} \right )^{1/2} \end{array} \right) , ~~ L_{\textrm{eff}} = \left( \begin{array}{cc} \frac{1}{2} & -\frac{1}{2} \left ( \frac{\alpha}{\beta f} \right )^{1/2} \\ \frac{1}{2} & \frac{1}{2} \left ( \frac{\alpha}{\beta f} \right )^{1/2} \end{array} \right) \label{eq:rl_eff} . \end{equation}
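The consistency of $A_{\textrm{eff}}$ with these eigenvalues and eigenvectors is easy to verify numerically. A small illustrative sketch (function name assumed) that builds the effective Jacobian and its eigen-pair:

```python
import numpy as np

def effective_jacobian(C, f, alpha, beta):
    """A_eff for the radiation subsystem (u -> 0) with its eigen-structure.

    Returns (A_eff, eigenvalues, R_eff with eigenvectors as columns,
    L_eff with eigenvectors as rows).
    """
    A = np.array([[0.0, alpha * C], [beta * f * C, 0.0]])
    lam = np.sqrt(alpha * beta * f) * C          # |lambda_eff|
    s = np.sqrt(beta * f / alpha)                # eigenvector slope
    R = np.array([[1.0, 1.0], [-s, s]])
    L = np.array([[0.5, -0.5 / s], [0.5, 0.5 / s]])
    return A, np.array([-lam, lam]), R, L
```

By construction $L_{\textrm{eff}} R_{\textrm{eff}} = I$ and each column of $R_{\textrm{eff}}$ is an eigenvector of $A_{\textrm{eff}}$, which the accompanying checks confirm for representative $\alpha$, $\beta$.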
\subsection{Corrector Step} \noindent The time discretization for the source term is a single-step, second order accurate scheme based on the ideas from Dutt et al. 2000, Minion 2003, and Miniati \& Colella 2007 \cite{mc2007, dutt2000, minion2003}. Given the system of conservation laws, one aims for a scheme that has an explicit approach for the conservative flux divergence term $\nabla \cdot F$ and an implicit approach for the stiff source term $S(U)$. Therefore, one solves the following collection of ordinary differential equations at each grid point: \begin{equation} \frac{dU}{dt} = S(U) - ( \nabla \cdot F )^{n+1/2}, \end{equation}
\noindent where the time-centered flux divergence term is taken to be a constant source which is obtained from the predictor step. Assuming time $t = t_n$, the initial guess for the solution at the next time step is: \begin{equation}
\hat{U} = U^n + \Delta t (I - \Delta t \nabla_U S(U) |_{U^n})^{-1} (S(U^n) - ( \nabla \cdot F )^{n+1/2}) , \end{equation}
\noindent where: \begin{equation} \left( I - \Delta t \nabla_U S(U) \right) = \left( \begin{array}{cc} 1 + \Delta t \mathbb{C} \sigma_a & 0 \\ 0 & 1 + \Delta t \mathbb{C} \sigma_t \end{array} \right) , \end{equation} \begin{equation} \left( I - \Delta t \nabla_U S(U) \right)^{-1} = \left( \begin{array}{cc} \frac{1}{1 + \Delta t \mathbb{C} \sigma_a} & 0 \\ 0 & \frac{1}{1 + \Delta t \mathbb{C} \sigma_t} \end{array} \right) . \end{equation}
\noindent The error $\epsilon$ is defined as the difference between the initial guess and the solution obtained from the Picard iteration equation where the initial guess was used as a starting value: \begin{equation} \epsilon(\Delta t) = U^n + \frac{\Delta t}{2} \left( S(\hat{U}) + S(U^n) \right) - \Delta t ( \nabla \cdot F )^{n+1/2} - \hat{U} . \end{equation}
\noindent Following Miniati \& Colella 2007, the correction to the initial guess is given by \cite{mc2007}: \begin{equation}
\delta(\Delta t) = \left( I - \Delta t \nabla_U S(U) |_{\hat{U}} \right)^{-1} \epsilon(\Delta t) . \end{equation}
\noindent Therefore, the solution at time $t = t_n + \Delta t$ is: \begin{equation} U^{n+1} = \hat{U} + \delta(\Delta t) \label{eq:update} . \end{equation}
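For the radiation subsystem the guess, error, and correction reduce to a handful of linear-algebra operations per grid point. A hypothetical sketch (the function name is an assumption, and $T^4$ is frozen at $1$ purely for brevity):

```python
import numpy as np

def corrector_update(U, divF, C, sigma_a, sigma_t, dt):
    """Single-step, second order corrector for dU/dt = S(U) - divF.

    U    = (E_r, F_r); S(U) = (C*sigma_a*(T^4 - E_r), -C*sigma_t*F_r)
    with T^4 = 1 held fixed; divF is the time-centered flux divergence
    supplied by the predictor step.
    """
    S = lambda u: np.array([C * sigma_a * (1.0 - u[0]), -C * sigma_t * u[1]])
    J = np.diag([-C * sigma_a, -C * sigma_t])        # grad_U S (constant)
    M = np.linalg.inv(np.eye(2) - dt * J)
    U_hat = U + dt * M @ (S(U) - divF)               # initial guess
    eps = U + 0.5 * dt * (S(U_hat) + S(U)) - dt * divF - U_hat
    delta = M @ eps                                  # correction
    return U_hat + delta
```

Thermal equilibrium $(E_r = T^4$, $F_r = 0)$ is an exact fixed point of this update, and an out-of-equilibrium state relaxes monotonically toward it, as the checks below illustrate.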
\subsection{Stability and Algorithmic Issues}
\noindent The higher order Godunov method satisfies important conditions that are required for numerical stability \cite{mc2007}. First, $\lambda_{\rm{eff}} = \pm (\alpha \beta)^{1/2} f^{1/2} \mathbb{C}$ indicates that the subcharacteristic condition for the characteristic speeds at equilibrium is always satisfied, such that: $\lambda^{-} < \lambda_{\rm{eff}}^{-} < \lambda^0 < \lambda_{\rm{eff}}^{+} < \lambda^{+}$. This condition is necessary for the stability of the system and guarantees that the numerical solution tends to the solution of the equilibrium equation as the relaxation time tends to zero. Second, since the structure of the equations remains consistent with respect to classic Godunov methods, one expects the CFL condition to apply: $\rm{max}(|\lambda^*|) \frac{\Delta t}{\Delta x} \leq 1$, $* = -,0,+$. \\
\noindent Depending upon how one carries out the spatial reconstruction to solve the Riemann problem in Godunov's method, the solution is either first order accurate in space (piecewise constant reconstruction) or second order accurate in space (piecewise linear reconstruction). Piecewise linear reconstruction was employed in this paper, where left/right states (with respect to the cell center) are modified to account for the stiff source term effects \cite{mc2007, colella1990}: \begin{eqnarray} U_{i,\pm}^{n} & = & U_{i}^{n} + \frac{\Delta t}{2} \mathcal{I}_{\dot{S_n}} \left( \frac{\Delta t}{2} \right) S(U_{i}^{n}) + \frac{1}{2} \left( \pm I - \frac{\Delta t}{\Delta x} A_{\textrm{eff}}^{n} \right) P_{\pm}(\Delta U_i) \label{eq:spat_recon} \\ P_{\pm}(\Delta U_i) & = & \sum_{\pm \lambda_k > 0} \left( L_{\textrm{eff}}^{k} \cdot \Delta U_i \right) \cdot R_{\textrm{eff}}^{k} \label{eq:slope} . \end{eqnarray}
\noindent Left/right one-sided slopes as well as cell center slopes are defined for each cell centered quantity $U_i$. A van Leer limiter is applied to these slopes to ensure monotonicity, thus giving the local slope $\Delta U_i$.
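The limited slope can be sketched componentwise. The harmonic-mean form of the van Leer limiter below is one common realization; the function name, the exact limiter form, and the zero slopes assigned to boundary cells are assumptions for illustration rather than the implementation used here:

```python
import numpy as np

def van_leer_slope(u):
    """Monotonized local slopes Delta u_i for cell-centered data u.

    Uses the van Leer (harmonic-mean) limiter on the left/right one-sided
    slopes; returns zero slope at local extrema and at boundary cells.
    """
    du = np.zeros_like(u)
    dl = u[1:-1] - u[:-2]                # left one-sided slope
    dr = u[2:] - u[1:-1]                 # right one-sided slope
    prod = dl * dr
    with np.errstate(divide="ignore", invalid="ignore"):
        du[1:-1] = np.where(prod > 0.0, 2.0 * prod / (dl + dr), 0.0)
    return du
```

On smooth monotone data the limited slope reduces to the centered difference, while at a local extremum it vanishes, which is exactly the monotonicity property the limiter is meant to enforce.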
\section{Numerical Tests} \noindent Four numerical tests spanning a range of mathematical and physical behavior were carried out to gauge the temporal and spatial accuracy of the higher order Godunov method. The numerical solution is compared with the analytic solution where possible. Otherwise, a self-similar comparison is made. Using piecewise constant reconstruction for the left/right states, one can show that the Godunov method reduces to a consistent discretization in each of the limiting cases. \\
\noindent The optical depth $\tau$ is a useful quantity for classifying the limiting behavior of a system that is driven by radiation hydrodynamics: \begin{equation} \tau = \int_{x_{\min}}^{x_{\max}} \sigma_t \, dx = \sigma_t (x_{\max}-x_{\min}) . \end{equation}
\noindent Optically thin/thick regimes are characterized by: \begin{eqnarray} \tau &<& O(1) ~~ (\textrm{optically thin}) \nonumber \\ \tau &>& O(1) ~~ (\textrm{optically thick}) \nonumber . \end{eqnarray}
\noindent In optically thin regimes (free streaming limit), radiation and hydrodynamics decouple such that the resulting dynamics resembles an advection process. In optically thick regimes (weak/strong equilibrium diffusion limit), radiation and hydrodynamics are strongly coupled and the resulting dynamics resembles a diffusion process. \\
\noindent The following definitions for the $n$-norms and convergence rates are used throughout this paper. Given the numerical solution $q^{r}$ at resolution $r$ and the analytic solution $u$, the error at a given point $i$ is: $\epsilon^{r}_{i} = q^{r}_{i} - u$. Likewise, given the numerical solution $q^{r}$ at resolution $r$ and the numerical solution $q^{r+1}$ at the next finer resolution $r+1$ (properly spatially averaged onto the coarser grid), the error resulting from this self-similar comparison at a given point $i$ is: $\epsilon^{r}_{i} = q^{r}_{i} - q^{r+1}_{i}$. The 1-norm and max-norm of the error are: \begin{equation}
L_1 = \sum_i | \epsilon^{r}_{i} | \Delta x^{r} , ~~~~ L_{\max} = \max_i | \epsilon^{r}_{i} | . \end{equation}
\noindent The convergence rate is measured using Richardson extrapolation: \begin{equation} R_n = \frac{ \textrm{ln} \left( L_n(\epsilon^r)/L_n(\epsilon^{r+1}) \right) }{ \textrm{ln} \left( \Delta x^{r}/\Delta x^{r+1} \right) } . \end{equation}
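This measurement is a one-liner; a sketch in Python (function name assumed), with the mesh-refinement ratio defaulting to the factor of 2 used in the tables below:

```python
import math

def convergence_rate(err_coarse, err_fine, refinement=2.0):
    """Richardson-extrapolated rate R_n = ln(L_c / L_f) / ln(dx_c / dx_f)."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

For example, halving $\Delta x$ while the error drops by a factor of 4 yields a measured rate of 2, i.e. second order accuracy.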
\subsection{Exponential Growth/Decay to Thermal Equilibrium} \noindent The first numerical test examines the temporal accuracy of how variables are updated in the corrector step. Given the radiation subsystem and the following initial conditions: \begin{displaymath} E_r^0 = \textrm{constant across space} , ~~ F_r^0 = 0, ~~ T = \textrm{constant across space} , \end{displaymath}
\noindent $F_r \rightarrow 0$ for all time. Therefore, the radiation subsystem reduces to the following ordinary differential equation: \begin{equation} \frac{d E_r}{dt} = \mathbb{C} \sigma_a (T^4-E_r), \end{equation}
\noindent which has the following analytic solution: \begin{equation} E_r = T^4 + (E_r^0 - T^4) \rm{exp}(-\mathbb{C} \sigma_a t) . \end{equation}
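The analytic solution makes the $e$-folding behavior explicit: after one $e$-folding time $1/(\sigma_a \mathbb{C})$, the departure from equilibrium shrinks by exactly a factor of $e$. A small sketch (function name assumed) encoding the formula:

```python
import math

def E_r_analytic(t, E_r0, T, C, sigma_a):
    """Analytic relaxation of E_r toward thermal equilibrium T^4."""
    return T**4 + (E_r0 - T**4) * math.exp(-C * sigma_a * t)
```

The same expression covers both the growth ($E_r^0 < T^4$) and decay ($E_r^0 > T^4$) cases discussed next.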
\noindent For $E_r^0 < T^4$ and $F_r^0 = 0$, one expects exponential growth in $E_r$ until thermal equilibrium $(E_r = T^4)$ is reached. For $E_r^0 > T^4$ and $F_r^0 = 0$, one expects exponential decay in $E_r$ until thermal equilibrium is reached. This numerical test allows one to examine the order of accuracy of the stiff ODE integrator. \\
\noindent \textbf{Parameters:} \begin{equation} \mathbb{C} = 10^{5} , ~ \sigma_{a} = 1 , ~ \sigma_{t} = 2 , ~ f = 1 \nonumber , \end{equation} \begin{equation} N_{cell} = [32, ~ 64, ~ 128, ~ 256] \nonumber , \end{equation} \begin{equation} x_{\min} = 0 , ~ x_{\max} = 1 , ~ \Delta x = \frac{x_{\max}-x_{\min}}{N_{cell}} , ~ CFL = 0.5 , ~ \Delta t = \frac{CFL ~ \Delta x}{f^{1/2} \mathbb{C}} \nonumber , \end{equation} \begin{equation} \textrm{IC for Growth:} ~~ E_r^0 = 1, ~ F_r^0 = 0 , ~ T = 10 \nonumber , \end{equation} \begin{equation} \textrm{IC for Decay:} ~~ E_r^0 = 10^4, ~ F_r^0 = 0 , ~ T = 1 \nonumber . \end{equation}
\begin{figure}
\caption{Exponential growth/decay to thermal equilibrium. $N_{cell}=256$.}
\label{fig:exp}
\end{figure}
\begin{table}[h] \begin{center} \begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}rcccccccc} \hline $N_{cell}$ & $L_1(E^{g}_{r})$ & Rate & $L_{\infty}(E^{g}_{r})$ & Rate & $L_1(E^{d}_{r})$ & Rate & $L_{\infty}(E^{d}_{r})$ & Rate \\ \hline
32 & 1.4E-1 & - & 1.4E-1 & - & 1.4E-1 & - & 1.4E-1 & - \\
64 & 3.7E-2 & 2.0 & 3.7E-2 & 2.0 & 3.7E-2 & 2.0 & 3.7E-2 & 2.0 \\ 128 & 9.3E-3 & 2.0 & 9.3E-3 & 2.0 & 9.3E-3 & 2.0 & 9.3E-3 & 2.0 \\ 256 & 2.3E-3 & 2.0 & 2.3E-3 & 2.0 & 2.3E-3 & 2.0 & 2.3E-3 & 2.0 \\ \hline \label{tbl:exp} \end{tabular*} Table 1: Errors and convergence rates for exponential growth/decay in $E_r$ to thermal equilibrium. Errors were obtained through analytic comparison. $t = 10^{-5} = 1/\sigma_a \mathbb{C}$. \end{center} \end{table}
\noindent From Figure \ref{fig:exp}, one sees that the numerical solution agrees with the analytic solution. In Table 1, the errors and convergence rates are identical for growth and decay. This symmetry illustrates the robustness of the Godunov method. Furthermore, one finds that the method is well behaved and obtains the correct solution with second order accuracy even for stiff values of the $e$-folding time $(\frac{\Delta t}{1 / \sigma_{a} \mathbb{C}} \geq 1)$, although with a significantly larger amplitude in the norm of the error. This result attests to the flexibility of the temporal integrator in the corrector step. \\
\noindent In a similar test, the initial conditions for the radiation energy and flux are zero and the temperature is defined by some spatially varying profile (a Gaussian pulse). As time increases, the radiation energy grows toward $T(x)^4$. Unless the opacity is sufficiently high, the radiation energy approaches but never exactly equals $T(x)^4$: as the solution nears thermal equilibrium, any spatially varying temperature profile continues to diffuse.
\subsection{Free Streaming Limit} \noindent In the free streaming limit, $\tau \ll O(1)$ and the radiation subsystem reduces to Equations \ref{eq:stream_e}, \ref{eq:stream_f}. If one takes an additional temporal and spatial partial derivative of the radiation subsystem in the free streaming limit and subtracts the resulting equations, then one finds two decoupled wave equations that have the following analytic solutions: \begin{eqnarray} E_r(x,t) = E_0(x - f^{1/2} \mathbb{C} t) , \\ F_r(x,t) = F_0(x - f^{1/2} \mathbb{C} t) . \end{eqnarray}
\noindent \textbf{Parameters:} \begin{equation} \mathbb{C} = 10^{5} , ~ \sigma_{a} = 10^{-6} , ~ \sigma_{t} = 10^{-6} , ~ f = 1 , ~ T = 1 , \nonumber \end{equation} \begin{equation} N_{cell} = [32, ~ 64, ~ 128, ~ 256] , \nonumber \end{equation} \begin{equation} x_{\min} = 0 , ~ x_{\max} = 1 , ~ \Delta x = \frac{x_{\max}-x_{\min}}{N_{cell}} , ~ CFL = 0.5 , ~ \Delta t = \frac{CFL ~ \Delta x}{f^{1/2} \mathbb{C}} , \nonumber \end{equation} \begin{equation} \textrm{IC for Gaussian Pulse:} ~~ E_r^0, F_r^0 = \exp \left( -(\nu (x - \mu) )^2 \right) , ~ \nu = 20 , ~ \mu = 0.3 , \nonumber \end{equation} \begin{equation} \textrm{IC for Square Pulse:} ~~ E_r^0, F_r^0 = \left\{ \begin{array}{ll}
1 & 0.2 < x < 0.4 \\
0 & \rm{otherwise} \end{array} \right . \nonumber \end{equation}
\begin{figure}
\caption{ Gaussian pulse in free streaming limit. $t = 4 \times 10^{-6} = 0.4 ~ (x_{\max}-x_{\min}) / \mathbb{C}$.}
\label{fig:stream_gauss}
\caption{ Square pulse in free streaming limit. $t = 4 \times 10^{-6} = 0.4 ~ (x_{\max}-x_{\min}) / \mathbb{C}$.}
\label{fig:stream_sq}
\end{figure}
\begin{table} \begin{center} \begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}rcccccccc} \hline $N_{cell}$ & $L_1(E_{r})$ & Rate & $L_{\infty}(E_{r})$ & Rate & $L_1(F_{r})$ & Rate & $L_{\infty}(F_{r})$ & Rate \\ \hline
32 & 3.8E-2 & - & 3.9E-1 & - & 3.8E-2 & - & 3.9E-1 & - \\
64 & 1.3E-2 & 1.5 & 1.8E-1 & 1.1 & 1.3E-2 & 1.5 & 1.8E-1 & 1.1 \\ 128 & 3.6E-3 & 1.9 & 8.0E-2 & 1.2 & 3.6E-3 & 1.9 & 8.0E-2 & 1.2 \\ 256 & 8.6E-4 & 2.1 & 3.1E-2 & 1.4 & 8.6E-4 & 2.1 & 3.1E-2 & 1.4 \\ \hline \label{tbl:stream_gauss} \end{tabular*} Table 2: Errors and convergence rates for Gaussian pulse in free streaming limit. Errors were obtained through analytic comparison. $t = 4 \times 10^{-6} = 0.4 ~ (x_{\max}-x_{\min}) / \mathbb{C}$. \end{center} \end{table}
\begin{table} \begin{center} \begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}rcccc} \hline $N_{cell}$ & $L_1(E_{r})$ & Rate & $L_1(F_{r})$ & Rate \\ \hline
32 & 6.0E-2 & - & 6.0E-2 & - \\
64 & 4.2E-2 & 0.5 & 4.2E-2 & 0.5 \\ 128 & 2.6E-2 & 0.7 & 2.6E-2 & 0.7 \\ 256 & 1.5E-2 & 0.8 & 1.5E-2 & 0.8 \\ \hline \label{tbl:stream_sq} \end{tabular*} Table 3: Errors and convergence rates for square pulse in free streaming limit. Errors were obtained through analytic comparison. $t = 4 \times 10^{-6} = 0.4 ~ (x_{\max}-x_{\min}) / \mathbb{C}$. \end{center} \end{table}
\noindent Since the Gaussian pulse results from smooth initial data, one expects $R_1=2.0$. However, the square wave results from discontinuous initial data and one expects $R_1 \simeq 0.67$. This claim is true for all second order spatially accurate numerical methods when applied to an advection-type problem $(u_t + a u_x = 0)$ \cite{leveque1}.
\subsection{Weak Equilibrium Diffusion Limit} \noindent In the weak equilibrium diffusion limit, $\tau > O(1)$ and the radiation subsystem reduces to Equations \ref{eq:weak_e}, \ref{eq:weak_f}. The optical depth suggests the range of total opacities for which diffusion is observed: if $\tau = \sigma_t ~ \ell_{\textrm{diff}} > 1$, then one expects diffusive behavior for $\sigma_t > 1 / \ell_{\textrm{diff}}$. Additionally, Equations \ref{eq:weak_e}, \ref{eq:weak_f} set the time scale $t_{\textrm{diff}}$ and length scale $\ell_{\textrm{diff}}$ for diffusion, where $t_{\textrm{diff}} \sim \ell_{\textrm{diff}}^{~2}/D$ and $D = f \mathbb{C} / \sigma_{t}$ for the radiation subsystem. Given a diffusion problem for a Gaussian pulse defined over the entire real line $(u_t - D u_{xx} = 0)$, the analytic solution is given by the method of Green's functions: \begin{equation} u(x,t) = \int_{-\infty}^{\infty} f(\bar{x}) G(x,t;\bar{x},0) \, d\bar{x} = \frac{1}{(4 D t \nu^2 + 1)^{1/2}} \textrm{exp} \left( \frac{-(\nu (x - \mu) )^2}{4 D t \nu^2 + 1} \right) . \label{diff_anal} \end{equation}
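This closed-form solution is easy to encode and sanity-check: at $t=0$ it reduces to the initial pulse, and the total "mass" $\int u \, dx$ is conserved in time. A sketch (function name assumed; $D = f\mathbb{C}/\sigma_t$ as above):

```python
import numpy as np

def gaussian_diffusion(x, t, D, nu=20.0, mu=0.3):
    """Analytic diffusion of the Gaussian pulse exp(-(nu*(x - mu))^2)."""
    w = 4.0 * D * t * nu**2 + 1.0
    return np.exp(-(nu * (x - mu))**2 / w) / np.sqrt(w)
```

The checks below use a simple Riemann sum on a wide domain, so boundary truncation is negligible.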
\noindent \textbf{Parameters:} \begin{equation} \mathbb{C} = 10^{5} , ~ \sigma_{a} = 40 , ~ \sigma_{t} = 40 , ~ f = 1/3 , ~ T^4 = E_r , \nonumber \end{equation} \begin{equation} N_{cell} = [320, ~ 640, ~ 1280, ~ 2560] , \nonumber \end{equation} \begin{equation} x_{\min} = -5 , ~ x_{\max} = 5 , ~ \Delta x = \frac{x_{\max}-x_{\min}}{N_{cell}} , ~ CFL = 0.5 , ~ \Delta t = \frac{CFL ~ \Delta x}{f^{1/2} \mathbb{C}} , \nonumber \end{equation} \begin{equation} \textrm{IC for Gaussian Pulse:} ~~ \left\{ \begin{array}{ll} E_r^0 = \exp \left( -(\nu (x - \mu) )^2 \right) , ~ \nu = 20 , ~ \mu = 0.3 , \\ F_r^0 = -\frac{f}{\sigma_t} \partdif{E_r^0}{x} = \frac{2 f \nu^2 (x-\mu)}{\sigma_t} E_r^0 \end{array} \right . \nonumber \end{equation}
\begin{figure}
\caption{ $E_r$ in weak equilibrium diffusion limit. $t = [0.25, ~ 1, ~ 4, ~ 16, ~ 64] \times 10^{-6}$.}
\label{fig:weak_e}
\caption{ $F_r$ in weak equilibrium diffusion limit. $t = [0.25, ~ 1, ~ 4, ~ 16, ~ 64] \times 10^{-6}$.}
\label{fig:weak_f}
\end{figure}
\begin{table} \begin{center} \begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}rcccccccc} \hline $N_{cell}$ & $L_1(E_{r})$ & Rate & $L_{\infty}(E_{r})$ & Rate & $L_1(F_{r})$ & Rate & $L_{\infty}(F_{r})$ & Rate \\ \hline
320 & 8.9E-3 & - & 4.5E-2 & - & 1.1E-3 & - & 3.7E-3 & - \\
640 & 6.6E-3 & 0.4 & 3.4E-2 & 0.4 & 8.3E-4 & 0.4 & 3.1E-3 & 0.2 \\ 1280 & 3.4E-3 & 1.0 & 1.6E-2 & 1.1 & 4.1E-4 & 1.0 & 1.4E-3 & 1.2 \\ 2560 & 1.6E-3 & 1.1 & 7.1E-3 & 1.1 & 1.9E-4 & 1.1 & 6.0E-4 & 1.2 \\ \hline \label{tbl:weak_th} \end{tabular*} Table 4: Errors and convergence rates for $E_r$, $F_r$ in the weak equilibrium diffusion limit. Time was advanced according to a hyperbolic time step: $\Delta t_{h} = \frac{CFL ~ \Delta x}{f^{1/2} \mathbb{C}}$. Errors were obtained through self-similar comparison. $t = 4 \times 10^{-6}$. \end{center} \end{table}
\begin{table} \begin{center} \begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}rcccccccc} \hline $N_{cell}$ & $L_1(E_{r})$ & Rate & $L_{\infty}(E_{r})$ & Rate & $L_1(F_{r})$ & Rate & $L_{\infty}(F_{r})$ & Rate \\ \hline
320 & 1.7E-2 & - & 8.3E-2 & - & 2.0E-3 & - & 7.9E-3 & - \\
640 & 5.0E-3 & 1.7 & 2.5E-2 & 1.7 & 6.0E-4 & 1.7 & 2.0E-3 & 2.0 \\ 1280 & 1.1E-3 & 2.2 & 5.1E-3 & 2.3 & 1.3E-4 & 2.3 & 3.6E-4 & 2.4 \\ 2560 & 2.5E-4 & 2.1 & 1.2E-3 & 2.1 & 2.8E-5 & 2.2 & 7.4E-5 & 2.3 \\ \hline \label{tbl:weak_tp} \end{tabular*} Table 5: Errors and convergence rates for $E_r$, $F_r$ in the weak equilibrium diffusion limit. Time was advanced according to a parabolic time step: $\Delta t_{p} = \frac{CFL ~ (\Delta x)^2}{2 D}$. Errors were obtained through self-similar comparison. $t = 4 \times 10^{-6}$. \end{center} \end{table}
\noindent One's intuition about diffusive processes is based on considering an infinite domain. So to minimize boundary effects in the numerical calculation, the computational domain and number of grid cells were expanded by a factor of 10. In Figures \ref{fig:weak_e}, \ref{fig:weak_f}, one observes the diffusive behavior expected for this parameter regime. Additionally, the numerical solution compares well with the analytic solution for a diffusion process defined over the entire real line (Equation \ref{diff_anal}). However, diffusive behavior is only a first order approximation to more complicated hyperbolic-parabolic dynamics taking place in radiation hydrodynamics as well as the radiation subsystem. Therefore, one needs to compare the numerical solution self-similarly. In Table 4, one sees first order convergence when a hyperbolic time step $\Delta t_{h} = \frac{CFL ~ \Delta x}{f^{1/2} \mathbb{C}}$ is used; while in Table 5, one sees second order convergence when a parabolic time step $\Delta t_{p} = \frac{CFL ~ (\Delta x)^2}{2 D}$ is used. This difference in the convergence rate results from the temporal accuracy in the numerical solution. In the weak equilibrium diffusion limit, the Godunov method reduces to a forward-time/centered-space discretization of the diffusion equation. Such a discretization requires a parabolic time step $\Delta t \sim (\Delta x)^2$ in order to see second order convergence because the truncation error of the forward-time/centered-space discretization of the diffusion equation is $\mathcal{O}(\Delta t,(\Delta x)^2)$.
\subsection{Strong Equilibrium Diffusion Limit} \noindent In the strong equilibrium diffusion limit, $\tau \gg O(1)$. From Equations \ref{eq:strong_e}, \ref{eq:strong_f}, $F_r \rightarrow 0$ for all time and space while $E_r = E_r^0$. \\
\noindent \textbf{Parameters:} \begin{equation} \mathbb{C} = 10^{5} , ~ \sigma_{a} = 10^6 , ~ \sigma_{t} = 10^6 , ~ f = 1/3 , ~ T^4 = E_r , \nonumber \end{equation} \begin{equation} N_{cell} = [320, ~ 640, ~ 1280, ~ 2560] , \nonumber \end{equation} \begin{equation} x_{\min} = -5 , ~ x_{\max} = 5 , ~ \Delta x = \frac{x_{\max}-x_{\min}}{N_{cell}} , ~ CFL = 0.5 , ~ \Delta t = \frac{CFL ~ \Delta x}{f^{1/2} \mathbb{C}} , \nonumber \end{equation} \begin{equation} \textrm{IC for Gaussian Pulse:} ~~ \left\{ \begin{array}{ll} E_r^0 = \exp \left( -(\nu (x - \mu) )^2 \right) , ~ \nu = 20 , ~ \mu = 0.3 , \\ F_r^0 = -\frac{f}{\sigma_t} \partdif{E_r^0}{x} = \frac{2 f \nu^2 (x-\mu)}{\sigma_t} E_r^0 \end{array} \right . \nonumber \end{equation}
\begin{table} \begin{center} \begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}rcccc} \hline $N_{cell}$ & $L_1(E_{r})$ & Rate & $L_{\infty}(E_{r})$ & Rate \\ \hline
320 & 2.2E-3 & - & 1.8E-2 & - \\
640 & 5.3E-4 & 2.1 & 5.6E-3 & 1.6 \\ 1280 & 1.3E-4 & 2.0 & 1.5E-3 & 1.9 \\ 2560 & 3.3E-5 & 2.0 & 3.8E-4 & 2.0 \\ \hline \label{tbl:strong} \end{tabular*} Table 6: Errors and convergence rates for $E_r$ in the strong equilibrium diffusion limit. Errors were obtained through self-similar comparison. $t = 4 \times 10^{-6}$. \end{center} \end{table}
\noindent In this test, the numerical solution is held fixed at the initial distribution because $\sigma_a, \sigma_t$ are so large. However, if one fixed $\ell_{\textrm{diff}}$ and scaled time according to $t_{\textrm{diff}} \approx \ell_{\textrm{diff}}^{~2}/D = \ell_{\textrm{diff}}^{~2} \sigma_t / (f \mathbb{C})$, then one would observe behavior similar to Figures \ref{fig:weak_e}, \ref{fig:weak_f}. This test illustrates the robustness of the Godunov method to handle very stiff source terms.
\section{Conclusions and Future Work} \noindent This paper presents a Godunov method for the radiation subsystem of radiation hydrodynamics that is second order accurate in both time and space, unsplit, asymptotically preserving, and uniformly well behaved. Moreover, the method employs familiar algorithmic machinery without a significant increase in computational cost. This work is the starting point for developing a Godunov method for full radiation hydrodynamics. The ideas in this paper should easily extend to the full system in one and multiple dimensions using a MUSCL or CTU approach \cite{colella1990}. A modified Godunov method that is explicit on the fastest hyperbolic scale (radiation flow) as well as a hybrid method that incorporates a backward Euler upwinding scheme for the radiation components and the modified Godunov scheme for the material components are under construction for full radiation hydrodynamics. A goal of future research is to directly compare these two methods in various limits for different values of $c / a_{\infty}$. Nevertheless, one expects the modified Godunov method that is explicit on the fastest hyperbolic scale to exhibit second order accuracy for all conservative variables and the hybrid method to exhibit first order accuracy in the radiation variables and second order accuracy in the material variables. Work is also being conducted on applying short characteristic and Monte Carlo methods to solve the photon transport equation and obtain the variable tensor Eddington factors. In the present work, these factors were taken to be constant in their respective limits.
\section*{Acknowledgment} \noindent The authors thank Dr. Phillip Colella for many helpful discussions. MS acknowledges support from the DOE CSGF Program which is provided under grant DE-FG02-97ER25308. JS acknowledges support from grant DE-FG52-06NA26217.
\end{document}
\begin{document}
\title{Birational automorphisms of Severi--Brauer surfaces}

\begin{abstract} We prove that a finite group acting by birational automorphisms of a non-trivial Severi--Brauer surface over a field of characteristic zero contains a normal abelian subgroup of index at most~$3$. Also, we find an explicit bound for orders of such finite groups in the case when the base field contains all roots of~$1$. \end{abstract}
\section{Introduction}
A \emph{Severi--Brauer variety} of dimension $n$ over a field $\mathbb{K}$ is a variety that becomes isomorphic to the projective space of dimension $n$ over the algebraic closure of~$\mathbb{K}$. Such varieties are in one-to-one correspondence with central simple algebras of dimension~$(n+1)^2$ over~$\mathbb{K}$. They have many nice geometric properties. For instance, it is known that a Severi--Brauer variety over $\mathbb{K}$ is isomorphic to the projective space if and only if it has a $\mathbb{K}$-point. We refer the reader to \cite{Artin} and~\cite{Kollar-SB} for other basic facts concerning Severi--Brauer varieties.
Automorphism groups of Severi--Brauer varieties can be described in terms of the corresponding central simple algebras, see Theorem~E on page~266 of~\cite{Chatelet}, or~\mbox{\cite[\S1.6.1]{Artin}}, or \cite[Lemma~4.1]{ShramovVologodsky}. As for the group of birational automorphisms, something is known in the case of surfaces. Namely, let $\mathbb{K}$ be a field of characteristic zero (more generally, one can assume that either $\mathbb{K}$ is perfect, or its characteristic is different from $2$ and~$3$). Let $S$ be a \emph{non-trivial} Severi--Brauer surface over $\mathbb{K}$, i.e. one that is not isomorphic to~$\mathbb{P}^2$. In this case generators for the group $\operatorname{Bir}(S)$ of birational automorphisms of $S$ are known, see \cite{Weinstein}, or~\cite{Weinstein-new}, or~\cite[Theorem~2.6]{Is-UMN}, or Theorem~\ref{theorem:Weinstein} below. Moreover, relations between the generators are known as well, see~\mbox{\cite[\S3]{IskovskikhTregub}}. This may be thought of as an analog of the classical theorem of Noether describing the generators of the group $\operatorname{Bir}(\mathbb{P}^2)$ over an algebraically closed field, and the results concerning relations between them (see \cite{Gizatullin}, \cite{Iskovskikh-SimpleGiz}, \cite{IKT}).
Regarding finite groups acting by automorphisms or birational automorphisms on Severi--Brauer surfaces, the following result is known.
\begin{theorem}[{see \cite[Proposition~1.9(ii),(iii)]{ShramovVologodsky}, \cite[Corollary~1.5]{ShramovVologodsky}}] \label{theorem:ShramovVologodsky} Let $S$ be a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero. Suppose that $\mathbb{K}$ contains all roots of $1$. The following assertions hold. \begin{itemize} \item[(i)] If $G\subset\operatorname{Aut}(S)$ is a finite subgroup, then every non-trivial element of $G$ has order~$3$, and $G$ is a $3$-group of order at most~$27$.
\item[(ii)] There exists a constant $B=B(S)$ such that for any finite subgroup $G\subset\operatorname{Bir}(S)$
one has $|G|\le B$. \end{itemize} \end{theorem}
In this paper we prove the following result, which makes Theorem~\ref{theorem:ShramovVologodsky} more precise.
\begin{theorem}\label{theorem:main} Let $S$ be a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero, and let $G\subset\operatorname{Bir}(S)$ be a finite group. The following assertions hold. \begin{itemize} \item[(i)] The order of $G$ is odd.
\item[(ii)] The group $G$ is either abelian, or contains a normal abelian subgroup of index~$3$.
\item[(iii)] If $\mathbb{K}$ contains all roots of $1$, then $G$ is an abelian $3$-group of order at most~$27$. \end{itemize} \end{theorem}
I do not know if the bounds in Theorem~\ref{theorem:main}(ii),(iii) are optimal. In particular, I am not aware of an example of a finite non-abelian group acting by birational (or biregular, cf.~Proposition~\ref{proposition:SB-Bir-vs-Aut} below) automorphisms on a non-trivial Severi--Brauer surface. Note that it is easy to construct an example of a non-trivial Severi--Brauer surface with an action of a group of order~$9$, see Example~\ref{example:cyclic-algebra} below. In certain cases this is the largest finite subgroup of the automorphism group of a Severi--Brauer surface; see Lemma~\ref{lemma:27}, which improves the result of Theorem~\ref{theorem:ShramovVologodsky}(i). It would be interesting to obtain a complete description of finite groups acting by biregular and birational automorphisms on Severi--Brauer surfaces, cf.~\cite{DI}.
Theorem~\ref{theorem:main}(ii) can be reformulated by saying that the \emph{Jordan constant} (see e.g.~\mbox{\cite[Definition~1.1]{Yasinsky}} for a definition) of the birational automorphism group of a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero is at most $3$. This exhibits a striking difference between the birational geometry of non-trivial Severi--Brauer surfaces and that of the projective plane, since in the latter case the corresponding Jordan constant may be much larger. For instance, if the base field is algebraically closed, the Jordan constant of the group of birational automorphisms of $\mathbb{P}^2$ equals~$7200$, see~\mbox{\cite[Theorem~1.9]{Yasinsky}}. Moreover, by the remark made after Theorem~5.3 in~\cite{Serre2009} the multiplicative analog of this constant for $\mathbb{P}^2$ equals~\mbox{$2^{10}\cdot 3^4\cdot 5^2\cdot 7$} in the case of an algebraically closed field of characteristic zero, while by Theorem~\ref{theorem:main}(ii) the analogous constant for a non-trivial Severi--Brauer surface equals either~$1$ or~$3$.
To prove Theorem~\ref{theorem:main}, we establish the following intermediate result that might be of independent interest (see also Proposition~\ref{proposition:summary} below for a more precise statement).
\begin{proposition}\label{proposition:SB-Bir-vs-Aut} Let $S$ be a non-trivial Severi--Brauer surface over a field of characteristic zero, and let $G\subset\operatorname{Bir}(S)$ be a finite non-abelian group. Then $G$ is conjugate to a subgroup of $\operatorname{Aut}(S)$. \end{proposition}
It would be interesting to find out if Proposition~\ref{proposition:SB-Bir-vs-Aut} holds for finite abelian subgroups of birational automorphism groups of non-trivial Severi--Brauer surfaces.
The plan of the paper is as follows. In~\S\ref{section:birational} we study surfaces that may be birational to a non-trivial Severi--Brauer surface. In~\S\ref{section:G-birational} we study finite groups acting on such surfaces. In~\S\ref{section:bounds} we prove Proposition~\ref{proposition:SB-Bir-vs-Aut} and Theorem~\ref{theorem:main}.
\textbf{Notation and conventions.} Throughout the paper we assume that all varieties are projective. We denote by ${\boldsymbol{\mu}}_n$ the cyclic group of order $n$, and by $\mathfrak{S}_n$ the symmetric group on $n$ letters.
Given a field $\mathbb{K}$, we denote by $\bar{\mathbb{K}}$ its algebraic closure. For a variety $X$ defined over $\mathbb{K}$, we denote by $X_{\bar{\mathbb{K}}}$ its scalar extension to~$\bar{\mathbb{K}}$. By a point of degree $d$ on a variety defined over some field $\mathbb{K}$ we mean a closed point whose residue field is an extension of~$\mathbb{K}$ of degree~$d$; a $\mathbb{K}$-point is a point of degree~$1$.
By a linear system on a variety~$X$ over a field $\mathbb{K}$ we mean a twisted linear subvariety of a linear system on $X_{\bar{\mathbb{K}}}$ defined over~$\mathbb{K}$; thus, a linear system is not a projective space in general, but a Severi--Brauer variety.
By the degree of a subvariety $Z$ of a Severi--Brauer variety $X$ over a field $\mathbb{K}$ we mean the degree of the subvariety $Z_{\bar{\mathbb{K}}}$ of $X_{\bar{\mathbb{K}}}\cong\mathbb{P}^n_{\bar{\mathbb{K}}}$ with respect to the hyperplane class in~$\mathbb{P}^n_{\bar{\mathbb{K}}}$.
For a Severi--Brauer variety $X$ corresponding to the central simple algebra $A$, we denote by~$X^{\mathrm{op}}$ the Severi--Brauer variety corresponding to the algebra opposite to~$A$.
A del Pezzo surface is a smooth surface with an ample anticanonical class. For a del Pezzo surface $S$, by its degree we mean its (anti)canonical degree~$K_S^2$.
Let $X$ be a variety over an algebraically closed field with an action of a group $G$, and let $g$ be an element of $G$. By $\operatorname{Fix}_X(G)$ and $\operatorname{Fix}_X(g)$ we denote the loci of fixed points of the group $G$ and the element $g$ on $X$, respectively.
\textbf{Acknowledgements.} I am grateful to A.\,Trepalin and V.\,Vologodsky for many useful discussions. I was partially supported by the HSE University Basic Research Program, Russian Academic Excellence Project~\mbox{``5-100''}, by the Young Russian Mathematics award, and by the Foundation for the Advancement of Theoretical Physics and Mathematics ``BASIS''.
\section{Birational models of Severi--Brauer surfaces} \label{section:birational}
In this section we study surfaces that may be birational to a non-trivial Severi--Brauer surface.
The following general result is sometimes referred to as the theorem of Lang and Nishimura.
\begin{theorem}[{see e.g.~\cite[Proposition~IV.6.2]{Kollar-RatCurves}}] \label{theorem:Lang-Nishimura} Let $X$ and $Y$ be smooth projective varieties over an arbitrary field $\mathbb{K}$. Suppose that $X$ is birational to $Y$. Then $X$ has a $\mathbb{K}$-point if and only if $Y$ has a $\mathbb{K}$-point. \end{theorem}
\begin{corollary}\label{corollary:Lang-Nishimura} Let $X$ and $Y$ be smooth projective varieties over an arbitrary field $\mathbb{K}$, and let $r$ be a positive integer. Suppose that $X$ is birational to $Y$, and $X$ has a point of degree not divisible by~$r$. Then $Y$ has a point of degree not divisible by~$r$. \end{corollary}
The following result concerning Severi--Brauer surfaces is well-known.
\begin{theorem}[{see e.g.~\cite[Theorem~53(2)]{Kollar-SB}}] \label{theorem:SB-point-degree} Let $S$ be a non-trivial Severi--Brauer surface over an arbitrary field. Then $S$ does not contain points of degree not divisible by $3$. \end{theorem}
\begin{corollary}\label{corollary:many-dP} Let $S$ be a non-trivial Severi--Brauer surface over an arbitrary field. Then $S$ is not birational to any conic bundle, and not birational to any del Pezzo surface of degree different from $3$, $6$, and $9$. \end{corollary}
\begin{proof} Suppose that $S$ is birational to a surface $S'$ with a conic bundle structure~\mbox{$\phi\colon S'\to C$}. Then $C$ is a conic itself, so that $C$ has a point of degree $2$. This implies that the surface $S'$ has a point of degree $2$ or $4$, and by Corollary~\ref{corollary:Lang-Nishimura} the surface~$S$ has a point of degree not divisible by~$3$. This gives a contradiction with Theorem~\ref{theorem:SB-point-degree}.
Now suppose that $S$ is birational to a del Pezzo surface $S'$
of degree $d$ not divisible by $3$. Then the intersection of two general elements of the anticanonical linear system~$|-K_{S'}|$ is an effective zero-cycle of degree $d$ defined over the base field. Thus $S'$ has a point of degree not divisible by $3$. By Corollary~\ref{corollary:Lang-Nishimura} this again gives a contradiction with Theorem~\ref{theorem:SB-point-degree}. \end{proof}
\begin{corollary}\label{corollary:dP-large-Picard-rank} Let $S$ be a non-trivial Severi--Brauer surface over an arbitrary field. Then~$S$ is not birational to any del Pezzo surface of degree $6$ of Picard rank greater than~$2$, and not birational to any del Pezzo surface of degree $3$ of Picard rank greater than~$3$. \end{corollary}
\begin{proof} Suppose that $S$ is birational to a del Pezzo surface $S'$ as above. Then there exists a birational contraction from $S'$ to a del Pezzo surface $S''$ of degree $K_{S''}^2>K_{S'}^2$ and Picard rank~\mbox{$\rk\Pic(S'')=\rk\Pic(S')-1$}.
If $K_{S'}^2=6$ and $\rk\Pic(S')>2$, then $\rk\Pic(S'')>1$, and thus~\mbox{$6<K_{S''}^2<9$}. This gives a contradiction with Corollary~\ref{corollary:many-dP}.
If $K_{S'}^2=3$ and $\rk\Pic(S')>3$, then~$S''$ is either a del Pezzo surface of degree~$6$ with~\mbox{$\rk\Pic(S'')>2$}, which is impossible by the above argument, or a del Pezzo surface of degree not divisible by $3$, which is impossible by Corollary~\ref{corollary:many-dP}. \end{proof}
To proceed we will need the following general fact about non-trivial Severi--Brauer surfaces.
\begin{lemma}\label{lemma:SB-surface-curves} Let $S$ be a non-trivial Severi--Brauer surface over an arbitrary field $\mathbb{K}$, and let~$C$ be a curve on $S$. Then the degree of $C$ is divisible by $3$. \end{lemma}
\begin{proof} It is well-known (see for instance \cite[Exercise~3.3.5(iii)]{GS}) that there exists an exact sequence of groups $$ 1\to\operatorname{Pic}(S)\to\operatorname{Pic}(S_{\bar{\mathbb{K}}})^{\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})}\stackrel{b}\to \operatorname{Br}(\mathbb{K})_3, $$ where $\operatorname{Br}(\mathbb{K})_3$ is the $3$-torsion part of the Brauer group of $\mathbb{K}$. Furthermore, the image of~$b$ is non-trivial since the Severi--Brauer surface $S$ is non-trivial, see for instance \cite[Exercise~3.3.4]{GS}. This means that any line bundle on $S$ has degree divisible by $3$, and the assertion follows. \end{proof}
\begin{remark} For an alternative proof of Lemma~\ref{lemma:SB-surface-curves}, one can consider the image $C'$ of~$C$ under a general automorphism of $S$ (see \cite[Lemma~4.1]{ShramovVologodsky} for a description of the automorphism group of~$S$). Then the zero-cycle $Z=C\cap C'$ is defined over $\mathbb{K}$ and has degree coprime to~$3$, so that the assertion follows from Theorem~\ref{theorem:SB-point-degree}. \end{remark}
Given $d\le 6$ distinct points $P_1,\ldots,P_d$ on $\mathbb{P}^2_{\bar{\mathbb{K}}}$ over an algebraically closed field $\bar{\mathbb{K}}$, we will say that they are \emph{in general position} if no three of them are contained in a line, and all six (in the case $d=6$) are not contained in a conic (cf.~\mbox{\cite[Remark~IV.4.4]{Manin}}). If~$P$ is a point of degree $d$ on a Severi--Brauer surface $S$ over a perfect field~$\mathbb{K}$, we will say that~$P$ is in general position if the $d$ points of the set~\mbox{$P_{\bar{\mathbb{K}}}\subset\mathbb{P}^2_{\bar{\mathbb{K}}}$} are in general position. Note that if $P$ is a point of degree~$d$ in general position, then the blow up of $S$ at $P$ is a del Pezzo surface of degree $9-d$, see for instance~\mbox{\cite[Theorem~IV.2.6]{Manin}}.
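The degree count behind the last claim is immediate: if $\pi\colon\widetilde{S}\to S$ is the blow up of a point $P$ of degree $d$, then over $\bar{\mathbb{K}}$ the morphism $\pi$ blows up the $d$ points of~$P_{\bar{\mathbb{K}}}$, each of which decreases the canonical degree by one, so that $$ K_{\widetilde{S}}^2=K_{S}^2-d=9-d. $$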
\begin{lemma}\label{lemma:SB-general-position} Let $S$ be a non-trivial Severi--Brauer surface over a perfect field~$\mathbb{K}$, and let~\mbox{$P\in S$} be a point of degree $d$. Suppose that $d=3$ or $d=6$. Then $P$ is in general position. \end{lemma}
\begin{proof} Suppose that $d=3$ and $P$ is not in general position. Then the three points of $P_{\bar{\mathbb{K}}}$ are contained in some line $L$ on $\mathbb{P}^2_{\bar{\mathbb{K}}}$. The line $L$ is $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant and thus defined over $\mathbb{K}$, which is impossible by Lemma~\ref{lemma:SB-surface-curves}.
Now suppose that $d=6$ and $P$ is not in general position. Let $P_{\bar{\mathbb{K}}}=\{P_1,\ldots,P_6\}$. If at least four of the points~\mbox{$P_1,\ldots,P_6$} are contained in a line, then a line with such a property is unique, and we again get a contradiction with Lemma~\ref{lemma:SB-surface-curves}.
Assume that no four of the points~\mbox{$P_1,\ldots,P_6$} are contained in a line, but some three of them (say, the points~$P_1$, $P_2$, and $P_3$) are contained in a line which we denote by $L$. The line $L$ is not $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant by Lemma~\ref{lemma:SB-surface-curves}, so that there exists a line $L'\neq L$ that is $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-conjugate to~$L$. If $L'$ does not pass through any of the points $P_1$, $P_2$, and~$P_3$, then $L$ and $L'$ are the only lines in $\mathbb{P}^2_{\bar{\mathbb{K}}}$ that contain three of the points $P_1,\ldots,P_6$. This gives a contradiction with Lemma~\ref{lemma:SB-surface-curves}. Hence, up to relabelling the points, we may assume that $P_3=L\cap L'$, and the points $P_4$ and $P_5$ are contained in~$L'$. Let $L_{ij}$ be the line passing through the points $P_i$ and $P_j$, where $i\in\{1,2\}$ and $j\in\{4,5\}$. Note that $L_{ij}$ are pairwise different, none of them coincides with $L$ or $L'$, and no three of them intersect at one point. If the point $P_6$ is not contained in any of the lines $L_{ij}$, then $L$ and $L'$ are the only lines that contain three of the points $P_1,\ldots,P_6$, which is impossible by Lemma~\ref{lemma:SB-surface-curves}. If $P_6$ is an intersection point of two of the lines $L_{ij}$, then there are exactly four lines that contain three of the points $P_1,\ldots,P_6$, which is also impossible by Lemma~\ref{lemma:SB-surface-curves}. Thus, we see that $P_6$ must be contained in a unique line among $L_{ij}$, say, in $L_{15}$. Now there are exactly three lines that contain three of the points $P_1,\ldots,P_6$, namely, $L$, $L'$, and $L_{15}$. We see that each of the points $P_1$, $P_3$, and $P_5$ is contained in two of these lines, while each of the points $P_2$, $P_4$, and $P_6$ is contained in a unique such line. 
However, the Galois group~\mbox{$\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$} acts transitively on the points~\mbox{$P_1,\ldots, P_6$}, which gives a contradiction.
Therefore, we may assume that the points $P_1,\ldots, P_6$ are contained in some irreducible conic~$C$. Obviously, such a conic is unique, and thus $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant. This again gives a contradiction with Lemma~\ref{lemma:SB-surface-curves}. \end{proof}
Let $S$ be a non-trivial Severi--Brauer surface over a perfect field~$\mathbb{K}$, and let~$A$ be the corresponding central simple algebra. Recall that $S^{\mathrm{op}}$ is the Severi--Brauer surface corresponding to the algebra opposite to~$A$. There are two classes of interesting birational maps from $S$ to $S^{\mathrm{op}}$ which we describe below (cf. \cite[Lemma~3.1]{IskovskikhTregub}, or cases~(c) and~(e) of~\cite[Theorem~2.6(ii)]{Is-UMN}).
Let $P$ be a point of degree $3$ on $S$. Then $P$ is in general position by Lemma~\ref{lemma:SB-general-position}. Blowing up $P$ and blowing down the proper transforms of the three lines on $S_{\bar{\mathbb{K}}}\cong\mathbb{P}^2_{\bar{\mathbb{K}}}$ passing through the pairs of points of $P_{\bar{\mathbb{K}}}$, we obtain a birational map $\tau_P$ to another Severi--Brauer surface $S'$. This map is given by the linear system of conics passing through $P$.
Similarly, let $P$ be a point of degree $6$ on $S$. Then $P$ is in general position by Lemma~\ref{lemma:SB-general-position}. Blowing up $P$ and blowing down the proper transforms of the six conics on~\mbox{$S_{\bar{\mathbb{K}}}\cong\mathbb{P}^2_{\bar{\mathbb{K}}}$} passing through the quintuples of points of $P_{\bar{\mathbb{K}}}$, we obtain a birational map $\eta_P$ to another Severi--Brauer surface $S'$. This map is given by the linear system of quintics singular at~$P$ (or, in other words, at each point of~$P_{\bar{\mathbb{K}}}$).
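As a consistency check on the defining linear systems, write $h$ for the pullback of the class of a line and $e_1,\ldots,e_d$ for the exceptional classes on the blow up of~\mbox{$S_{\bar{\mathbb{K}}}\cong\mathbb{P}^2_{\bar{\mathbb{K}}}$} at the points of~$P_{\bar{\mathbb{K}}}$. The systems defining $\tau_P$ and $\eta_P$ have classes $2h-e_1-e_2-e_3$ and~\mbox{$5h-2(e_1+\ldots+e_6)$}, respectively, and $$ \left(2h-e_1-e_2-e_3\right)^2=4-3=1,\qquad \left(5h-2\sum_{i=1}^{6}e_i\right)^2=25-24=1, $$ as it should be, since in both cases the image of the system on $S'_{\bar{\mathbb{K}}}\cong\mathbb{P}^2_{\bar{\mathbb{K}}}$ is the class of a line.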
In both of the above cases one has $S'\cong S^{\mathrm{op}}$. This follows from a well-known general fact about the degrees of birational maps between Severi--Brauer varieties with given classes, see for instance~\cite[Exercise~3.3.7(iii)]{GS}. For more details on the maps $\tau_P$, see~\mbox{\cite[\S2]{Corn}}.
\begin{remark}\label{remark:Amitsur} It is known that $S$ and $S^{\mathrm{op}}$ are the only Severi--Brauer surfaces birational to~$S$, see for instance~\cite[Exercise~3.3.6(v)]{GS}. \end{remark}
The following theorem was first published in~\cite{Weinstein} (see also~\cite{Weinstein-new}). It can be also obtained as a particular case of a much more general result, see~\cite[Theorem~2.6]{Is-UMN}. We provide a sketch of its proof for the reader's convenience.
\begin{theorem}\label{theorem:Weinstein} Let $S$ be a non-trivial Severi--Brauer surface over a perfect field $\mathbb{K}$, and let $S'$ be a del Pezzo surface over $\mathbb{K}$ with $\rk\Pic(S')=1$. Suppose that $S$ is birational to~$S'$. Then either $S'\cong S$, or $S'\cong S^{\mathrm{op}}$. Moreover, any birational map~\mbox{$\theta\colon S\dasharrow S'$} can be written as a composition $$ \theta=\theta_1\circ\ldots\circ\theta_k, $$ where each of the maps $\theta_i$ is either an automorphism, or a map $\tau_P$ or $\eta_P$ for some point~$P$ of $S$ or~$S^{\mathrm{op}}$. \end{theorem}
\begin{proof}[Sketch of the proof] Let $\theta\colon S\dasharrow S'$ be a birational map, and suppose that $\theta$ is not an isomorphism. Choose a very ample linear system ${\mathscr{L}}'$ on $S'$, and let ${\mathscr{L}}$ be its proper transform on $S$. Then ${\mathscr{L}}$ is a mobile non-empty (and in general incomplete) linear system on $S$. Write $$ {\mathscr{L}}\sim_{\mathbb{Q}} -\mu K_S $$ for some positive rational number $\mu$; note that $3\mu$ is an integer. By the Noether--Fano inequality (see \cite[Lemma~1.3(i)]{Is-UMN}), one has $\operatorname{mult}_{P}({\mathscr{L}})>\mu$ for some point $P$ on $S$. Let~$d$ be the degree of $P$, and let $L_1$ and $L_2$ be two general members of the linear system~${\mathscr{L}}$ (defined over~$\bar{\mathbb{K}}$). We see that $$ 9\mu^2=L_1\cdot L_2\ge d\operatorname{mult}_{P}({\mathscr{L}})^2>d\mu^2, $$ and thus $d<9$. Hence one has $d=3$ or $d=6$ by Theorem~\ref{theorem:SB-point-degree}, and the point $P$ is in general position by Lemma~\ref{lemma:SB-general-position}.
Consider a birational map $\theta_P$ defined as follows: if $d=3$, we let $\theta_P=\tau_P$, and if~\mbox{$d=6$}, we let $\theta_P=\eta_P$. Let $\theta^{(1)}=\theta\circ\theta_P^{-1}$. Let ${\mathscr{L}}_1$ be the proper transform of ${\mathscr{L}}$ (or ${\mathscr{L}}'$) on the surface~$S^{\mathrm{op}}$, and write $$ {\mathscr{L}}_1\sim_{\mathbb{Q}} -\mu_1 K_{S^{\mathrm{op}}} $$ for some positive rational number $\mu_1$ such that~\mbox{$3\mu_1\in\mathbb{Z}$}. Using the information about~$\theta_P$ provided in~\cite[Lemma~3.1]{IskovskikhTregub}, we see that $\mu_1<\mu$. Therefore, applying the same procedure to the surface~\mbox{$S_1=S^{\mathrm{op}}$}, the birational map $\theta^{(1)}$, and the linear system ${\mathscr{L}}_1$ and arguing by induction, we prove the theorem. \end{proof}
A particular case of Theorem~\ref{theorem:Weinstein} is the following result that we will need below.
\begin{corollary}\label{corollary:Weinstein} Let $S$ be a non-trivial Severi--Brauer surface over a perfect field. Then~$S$ is not birational to any del Pezzo surface $S'$ of degree~$3$ or~$6$ with~\mbox{$\rk\Pic(S')=1$}. \end{corollary}
The part of Corollary~\ref{corollary:Weinstein} concerning del Pezzo surfaces of degree $3$ also follows from~\mbox{\cite[Chapter~V]{Manin}}. The part concerning del Pezzo surfaces of degree $6$ can be obtained from \cite[\S2]{IskovskikhTregub}.
Corollaries~\ref{corollary:dP-large-Picard-rank} and \ref{corollary:Weinstein} show that a del Pezzo surface of degree $6$ birational to a non-trivial Severi--Brauer surface must have Picard rank equal to~$2$, and a del Pezzo surface of degree $3$ birational to a non-trivial Severi--Brauer surface must have Picard rank equal to~$2$ or~$3$. In the next section we will obtain further restrictions on such surfaces provided that they are $G$-minimal with respect to some finite group~$G$.
\section{$G$-birational models of Severi--Brauer surfaces} \label{section:G-birational}
In this section we study finite groups acting on surfaces birational to a non-trivial Severi--Brauer surface.
We start with del Pezzo surfaces of degree $6$. Recall that over an algebraically closed field $\bar{\mathbb{K}}$ of characteristic zero a del Pezzo surface of degree $6$ is unique up to isomorphism, and its automorphism group is isomorphic to $(\bar{\mathbb{K}}^*)^2\rtimes (\mathfrak{S}_3\times{\boldsymbol{\mu}}_2)$. More details on this can be found in~\cite[Theorem~8.4.2]{Dolgachev}.
Given a del Pezzo surface $S'$ of degree $6$ over an arbitrary field $\mathbb{K}$ of characteristic zero, we will call $\mathfrak{S}_3\times{\boldsymbol{\mu}}_2$ its \emph{Weyl group}. For every element $\theta\in\operatorname{Aut}(S')$ we will refer to the image of $\theta$ under the composition of the embedding $\operatorname{Aut}(S')\hookrightarrow\operatorname{Aut}(S'_{\bar{\mathbb{K}}})$ with the natural homomorphism $$ \operatorname{Aut}(S'_{\bar{\mathbb{K}}})\to\mathfrak{S}_3\times{\boldsymbol{\mu}}_2 $$ as the image of $\theta$ in the Weyl group. Similarly, we will consider the image of the Galois group $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$ in the Weyl group.
Let $\sigma\colon\mathbb{P}^2\dasharrow\mathbb{P}^2$ be the \emph{standard Cremona involution}, that is, a birational involution acting as $$ (x:y:z)\mapsto\left(\frac{1}{x}:\frac{1}{y}:\frac{1}{z}\right) $$ in some homogeneous coordinates $x$, $y$, and $z$. This involution becomes regular on the del Pezzo surface of degree $6$ obtained as a blow up of $\mathbb{P}^2$ at the points $(1:0:0)$, $(0:1:0)$, and $(0:0:1)$. Let $\hat{\sigma}$ be its image in the Weyl group $\mathfrak{S}_3\times{\boldsymbol{\mu}}_2$. Then $\hat{\sigma}$ is the generator of the center ${\boldsymbol{\mu}}_2$ of the Weyl group. If one thinks about~\mbox{$\mathfrak{S}_3\times{\boldsymbol{\mu}}_2$} as the group of symmetries of a regular hexagon, then $\hat{\sigma}$ is an involution that interchanges the opposite sides of the hexagon or, in other words, a rotation by~$180^\circ$. If $S'$ is a del Pezzo surface of degree $6$ over some field of characteristic zero and $\theta$ is its automorphism, we will say that $\theta$ is \emph{of Cremona type} if its image in the Weyl group coincides with $\hat{\sigma}$.
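Concretely, identifying the Picard lattice of this del Pezzo surface of degree $6$ with~\mbox{$\mathbb{Z} h\oplus\mathbb{Z} e_1\oplus\mathbb{Z} e_2\oplus\mathbb{Z} e_3$}, where $h$ is the pullback of the class of a line and the $e_i$ are the exceptional classes, the involution $\sigma$ acts by $$ \sigma^* h=2h-e_1-e_2-e_3,\qquad \sigma^* e_i=h-e_j-e_k,\quad \{i,j,k\}=\{1,2,3\}, $$ so that it interchanges each exceptional curve $e_i$ with the opposite side $h-e_j-e_k$ of the hexagon of $(-1)$-curves.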
\begin{lemma}\label{lemma:Cremona-type} Let $S'$ be a del Pezzo surface of degree $6$ over a field $\mathbb{K}$ of characteristic zero, and let $\theta$ be its automorphism of Cremona type. Then $\theta$ is an involution that has exactly four fixed points on~$S'_{\bar{\mathbb{K}}}$. \end{lemma}
\begin{proof} Using a birational contraction $S'_{\bar{\mathbb{K}}}\to\mathbb{P}^2_{\bar{\mathbb{K}}}$, consider the automorphism $\theta$ as a birational automorphism of $\mathbb{P}^2_{\bar{\mathbb{K}}}$. Then $\theta$ can be represented as a composition of the standard Cremona involution $\sigma$ with some element of the standard torus acting on $\mathbb{P}^2_{\bar{\mathbb{K}}}$. Thus, one can choose homogeneous coordinates $x$, $y$, and $z$ on $\mathbb{P}^2_{\bar{\mathbb{K}}}$ so that $\theta$ acts as $$ (x:y:z)\mapsto\left(\frac{\alpha}{x}:\frac{\beta}{y}:\frac{\gamma}{z}\right) $$ for some non-zero $\alpha,\beta,\gamma\in\bar{\mathbb{K}}$. Therefore, $\theta$ is conjugate to the involution~$\sigma$ via the automorphism $$ (x:y:z)\mapsto (\sqrt{\alpha}x:\sqrt{\beta}y:\sqrt{\gamma}z). $$ This shows that $\theta$ has the same number of fixed points on $\mathbb{P}^2_{\bar{\mathbb{K}}}$ as $\sigma$, while it is easy to see that the fixed points of the latter are the four points $$ (1:1:1),\quad (-1:1:1), \quad (1:-1:1),\quad (1:1:-1). $$ It remains to notice that a fixed point of $\theta$ on $S'_{\bar{\mathbb{K}}}$ cannot be contained in a $(-1)$-curve, and thus all of them are mapped to fixed points of $\theta$ on~$\mathbb{P}^2_{\bar{\mathbb{K}}}$. \end{proof}
\begin{lemma}\label{lemma:dP6-rk-1} Let $S$ be a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero, and let $S'$ be a del Pezzo surface of degree $6$ over $\mathbb{K}$. Suppose that there exists a finite group $G\subset\operatorname{Aut}(S')$ such that $\rk\Pic(S')^G=1$. Then $S'$ is not birational to $S$. \end{lemma}
\begin{proof} Suppose that $S'$ is birational to $S$. We know from Corollary~\ref{corollary:dP-large-Picard-rank} that~\mbox{$\rk\Pic(S')\le 2$}. Hence by Corollary~\ref{corollary:Weinstein} one has~\mbox{$\rk\Pic(S')=2$}. Since $S'$ does not admit a conic bundle structure by Corollary~\ref{corollary:many-dP}, we see that $S'$ admits a birational contraction to a del Pezzo surface of larger degree. Again by Corollary~\ref{corollary:many-dP}, this means that $S'$ is a blow up of a Severi--Brauer surface at a point of degree~$3$. Hence the image $\Gamma$ of the Galois group~\mbox{$\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$} in the Weyl group~\mbox{$\mathfrak{S}_3\times{\boldsymbol{\mu}}_2$} contains an element of order~$3$.
Since the cone of effective curves on $S'$ has two extremal rays and $\rk\Pic(S')^G=1$, we see that $G$ must contain an element whose image in the Weyl group has order~$2$. On the other hand, the image of $G$ in the Weyl group commutes with~$\Gamma$. Since $\hat{\sigma}$ is the only element of order $2$ in~\mbox{$\mathfrak{S}_3\times{\boldsymbol{\mu}}_2$} that commutes with an element of order~$3$, we conclude that $G$ must contain an element $\theta$ of Cremona type. The element~$\theta$ has exactly $4$ fixed points on $S'_{\bar{\mathbb{K}}}$ by Lemma~\ref{lemma:Cremona-type}. This implies that there is a point of degree not divisible by~$3$ on $S'$. Thus the assertion follows from Corollary~\ref{corollary:Lang-Nishimura} and Theorem~\ref{theorem:SB-point-degree}. \end{proof}
Now we deal with del Pezzo surfaces of degree $3$, i.e. smooth cubic surfaces in $\mathbb{P}^3$. Recall that for a del Pezzo surface $S'$ of degree $3$ over a field $\mathbb{K}$ the action of the groups~\mbox{$\operatorname{Aut}(S')$} and $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$ on $(-1)$-curves on $S'$ defines homomorphisms of these groups to the Weyl group~$\mathrm{W}(\mathrm{E}_6)$. Furthermore, the homomorphism $\operatorname{Aut}(S')\to\mathrm{W}(\mathrm{E}_6)$ is an embedding. The order of the Weyl group~$\mathrm{W}(\mathrm{E}_6)$ equals~$2^7\cdot 3^4\cdot 5$. We refer the reader to~\cite[Theorem~8.2.40]{Dolgachev} and~\cite[Chapter~IV]{Manin} for details.
\begin{lemma}\label{lemma:dP3-not-3-group} Let $S$ be a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero, and let $S'$ be a del Pezzo surface of degree $3$ over $\mathbb{K}$. Suppose that there exists a non-trivial automorphism $g\in\operatorname{Aut}(S')$ such that the order of $g$ is not a power of $3$. Then~$S'$ is not birational to $S$. \end{lemma}
\begin{proof} We may assume that the order $p=\mathrm{ord}(g)$ is prime. Thus one has $p=2$ or~\mbox{$p=5$}, since the order of the Weyl group $\mathrm{W}(\mathrm{E}_6)$ is not divisible by primes greater than~$5$. The action of $g$ on $S'_{\bar{\mathbb{K}}}$ can be of one of the three types listed in \cite[Table~2]{Trepalin-cubic}; in the notation of \cite[Table~2]{Trepalin-cubic} these are types~1, 2, and~6. It is straightforward to check that if $g$ is of type $1$, then the fixed point locus $\operatorname{Fix}_{S'_{\bar{\mathbb{K}}}}(g)$ consists of a smooth elliptic curve and one isolated point; if $g$ is of type $2$, then $\operatorname{Fix}_{S'_{\bar{\mathbb{K}}}}(g)$ consists of a $(-1)$-curve and three isolated points; if $g$ is of type $6$, then $\operatorname{Fix}_{S'_{\bar{\mathbb{K}}}}(g)$ consists of four isolated points. Note that a $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant $(-1)$-curve on $S'$ always contains a $\mathbb{K}$-point, since such a curve is a line in the anticanonical embedding of $S'$. Therefore, in each of the above three cases we find a $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant set of points on $S'_{\bar{\mathbb{K}}}$ of cardinality coprime to~$3$. Thus the assertion follows from Corollary~\ref{corollary:Lang-Nishimura} and Theorem~\ref{theorem:SB-point-degree}. \end{proof}
\begin{corollary}\label{corollary:dP3-rkPic-3} Let $S$ be a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero, and let $S'$ be a del Pezzo surface of degree $3$ over $\mathbb{K}$ birational to $S$. Suppose that there exists a subgroup $G\subset\operatorname{Aut}(S')$ such that $\rk\Pic(S')^G=1$. Then~$\rk\Pic(S')=3$. \end{corollary}
\begin{proof} We know from Corollaries~\ref{corollary:dP-large-Picard-rank} and \ref{corollary:Weinstein} that either $\rk\Pic(S')=2$, or $\rk\Pic(S')=3$. Suppose that $\rk\Pic(S')=2$, so that the cone of effective curves on $S'$ has two extremal rays. Since $\rk\Pic(S')^G=1$, the group $G$ must contain an element of even order. Therefore, the assertion follows from Lemma~\ref{lemma:dP3-not-3-group}. \end{proof}
\begin{corollary}\label{corollary:dP3-3-group} Let $S$ be a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero, and let $S'$ be a del Pezzo surface of degree $3$ over $\mathbb{K}$ birational to $S$. Let~\mbox{$G\subset\operatorname{Aut}(S')$} be a subgroup such that $\rk\Pic(S')^G=1$. Then $G$ is isomorphic to a subgroup of~${\boldsymbol{\mu}}_3^3$. \end{corollary}
\begin{proof} We know from Corollary~\ref{corollary:dP3-rkPic-3} that $\rk\Pic(S')=3$. Hence $S'$ is a blow up of a Severi--Brauer surface at two points of degree $3$. This means that the image $\Gamma$ of the Galois group~\mbox{$\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$} in the Weyl group $\mathrm{W}(\mathrm{E}_6)$ contains an element conjugate to $$ \gamma=(123)(456) $$ in the notation of~\cite[\S4]{Trepalin-cubic}. We may assume that $\Gamma$ contains $\gamma$ itself, so that the image of~$G$ in $\mathrm{W}(\mathrm{E}_6)$ is contained in the centralizer $Z(\gamma)$ of $\gamma$. By~\mbox{\cite[Proposition~4.5]{Trepalin-cubic}} one has $$ Z(\gamma)\cong({\boldsymbol{\mu}}_3^2\rtimes{\boldsymbol{\mu}}_2)\times \mathfrak{S}_3. $$ On the other hand, we know from Lemma~\ref{lemma:dP3-not-3-group} that the order of $G$ is a power of $3$. Since the Sylow $3$-subgroup of $Z(\gamma)$ is isomorphic to ${\boldsymbol{\mu}}_3^3$, the required assertion follows. \end{proof}
\begin{remark} For an alternative proof of Corollary~\ref{corollary:dP3-3-group}, suppose that $G$ is a non-abelian $3$-group. One can notice that in this case the image of $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$ in $\mathrm{W}(\mathrm{E}_6)$ is either trivial, or is generated by an element of type~$\mathrm{A}_2^3$ in the notation of \cite{Carter}. In the former case $\rk\Pic(S')=7$, which is impossible by Corollary~\ref{corollary:dP-large-Picard-rank}. In the latter case $\rk\Pic(S')=1$, which is impossible by Corollary~\ref{corollary:Weinstein}. Thus, $G$ is an abelian $3$-group. The rest is more or less straightforward, cf.~\mbox{\cite[Theorem~6.14]{DI}}. \end{remark}
Let us summarize the results of this section.
\begin{proposition}\label{proposition:summary} Let $S$ be a non-trivial Severi--Brauer surface over a field $\mathbb{K}$ of characteristic zero, and let $G\subset\operatorname{Bir}(S)$ be a finite subgroup. Then $G$ is conjugate either to a subgroup of $\operatorname{Aut}(S)$, or to a subgroup of $\operatorname{Aut}(S^{\mathrm{op}})$, or to a subgroup of $\operatorname{Aut}(S')$, where $S'$ is a del Pezzo surface of degree $3$ over $\mathbb{K}$ birational to $S$ such that~\mbox{$\rk\Pic(S')=3$} and~\mbox{$\rk\Pic(S')^G=1$}. In the latter case $G$ is isomorphic to a subgroup of~${\boldsymbol{\mu}}_3^3$. \end{proposition}
\begin{proof} Regularizing the action of $G$ and running a $G$-Minimal Model Program (see~\mbox{\cite[Theorem~1G]{Iskovskikh80}}), we obtain a $G$-surface $S'$ birational to $S$, such that~$S'$ is either a del Pezzo surface with $\rk\Pic(S')^G=1$, or a conic bundle. The case of a conic bundle is impossible by Corollary~\ref{corollary:many-dP}. Thus by Corollary~\ref{corollary:many-dP} and Lemma~\ref{lemma:dP6-rk-1} we conclude that $S'$ is a del Pezzo surface of degree $9$ or $3$. In the former case $S'$ is a Severi--Brauer surface itself, so that $S'$ is isomorphic either to $S$ or to $S^{\mathrm{op}}$ by Remark~\ref{remark:Amitsur}. In the latter case we have $\rk\Pic(S')=3$ by Corollary~\ref{corollary:dP3-rkPic-3}. Furthermore, in this case~$G$ is isomorphic to a subgroup of~${\boldsymbol{\mu}}_3^3$ by Corollary~\ref{corollary:dP3-3-group}. \end{proof}
I do not know the answer to the following question.
\begin{question} Does there exist an example of a finite abelian group $G$ acting on a smooth cubic surface~$S'$ over a field of characteristic zero, such that $S'$ is birational to a non-trivial Severi--Brauer surface and~\mbox{$\rk\Pic(S')^G=1$}? In other words, does there exist a non-trivial Severi--Brauer surface $S$ over a field of characteristic zero and a finite abelian group~\mbox{$G\subset\operatorname{Bir}(S)$}, such that the action of $G$ can be regularized on some smooth cubic surface, but $G$ is not conjugate to a subgroup of~\mbox{$\operatorname{Aut}(S)$}? \end{question}
\section{Automorphisms of Severi--Brauer surfaces} \label{section:bounds}
In this section we prove Proposition~\ref{proposition:SB-Bir-vs-Aut} and Theorem~\ref{theorem:main}. We start with a couple of simple auxiliary results.
Let ${\mathscr{H}}_3$ denote the Heisenberg group of order $27$; this is the only non-abelian group of order $27$ and exponent $3$. Its center $\operatorname{z}({\mathscr{H}}_3)$ is isomorphic to ${\boldsymbol{\mu}}_3$, and there is a non-split exact sequence $$ 1\to\operatorname{z}({\mathscr{H}}_3)\to{\mathscr{H}}_3\to{\boldsymbol{\mu}}_3^2\to 1. $$ On the other hand, one can also represent ${\mathscr{H}}_3$ as a semi-direct product ${\mathscr{H}}_3\cong{\boldsymbol{\mu}}_3^2\rtimes{\boldsymbol{\mu}}_3$.
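The structure of ${\mathscr{H}}_3$ described above can be verified directly in its standard model as the group of upper unitriangular $3\times 3$ matrices over the field with three elements (this matrix model is a standard fact, not something used later in the paper); the following sketch checks the order, the exponent, non-commutativity, and the order of the center.

```python
from itertools import product

def mul(X, Y):
    """Multiply two 3x3 matrices with entries reduced modulo 3."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(3)) % 3
                       for j in range(3)) for i in range(3))

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
# Upper unitriangular matrices over F_3: a model of the Heisenberg group H_3.
H3 = [((1, a, b), (0, 1, c), (0, 0, 1))
      for a, b, c in product(range(3), repeat=3)]

assert len(H3) == 27                                         # |H_3| = 27
assert all(mul(x, mul(x, x)) == I for x in H3)               # exponent 3
assert any(mul(x, y) != mul(y, x) for x in H3 for y in H3)   # non-abelian
center = [x for x in H3 if all(mul(x, y) == mul(y, x) for y in H3)]
assert len(center) == 3                                      # z(H_3) is mu_3
```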
\begin{lemma}\label{lemma:P2} Let $\bar{\mathbb{K}}$ be an algebraically closed field of characteristic zero, and let $$ G\subset\operatorname{Aut}(\mathbb{P}^2)\cong\operatorname{PGL}_3(\bar{\mathbb{K}}) $$ be a finite subgroup. The following assertions hold. \begin{itemize} \item[(i)] If the order of $G$ is odd, then $G$ is either abelian, or contains a normal abelian subgroup of index~$3$.
\item[(ii)] If the order of $G$ is odd and $G$ is non-trivial, then $G$ contains a subgroup $H$ such that either~\mbox{$|\operatorname{Fix}_{\mathbb{P}^2}(H)|=3$}, or $\operatorname{Fix}_{\mathbb{P}^2}(H)$ has a unique isolated point. Moreover, one can choose $H$ with such a property so that either $H=G$, or $H$ is a normal subgroup of index $3$ in~$G$.
\item[(iii)] If $G\cong{\mathscr{H}}_3$, then $G$ contains an element $g$ such that $\operatorname{Fix}_{\mathbb{P}^2}(g)$ has a unique isolated point.
\item[(iv)] The group $G$ is not isomorphic to ${\boldsymbol{\mu}}_3^3$. \end{itemize} \end{lemma}
\begin{proof} Let $\tilde{G}$ be the preimage of $G$ under the natural projection $$ \pi\colon\operatorname{SL}_3(\bar{\mathbb{K}})\to\operatorname{PGL}_3(\bar{\mathbb{K}}), $$ and let $V\cong\bar{\mathbb{K}}^3$ be the corresponding three-dimensional representation of $\tilde{G}$. We will use the classification of finite subgroups of $\operatorname{PGL}_3(\bar{\mathbb{K}})$, see for instance~\cite[Chapter~V]{Blichfeldt}.
Suppose that the order of $G$ is odd. Then it follows from the classification that either~$\tilde{G}$ is abelian, so that $V$ splits as a sum of three one-dimensional $\tilde{G}$-representations such that not all of them are isomorphic to each other; or $\tilde{G}$ is non-abelian and there exists a surjective homomorphism $\tilde{G}\to{\boldsymbol{\mu}}_3$ whose kernel $\tilde{H}$ is an abelian group, so that $V$ splits as a sum of three one-dimensional $\tilde{H}$-representations, and $\tilde{G}/\tilde{H}\cong{\boldsymbol{\mu}}_3$ transitively permutes these $\tilde{H}$-representations. In other words, the group $G$ cannot be primitive (see~\mbox{\cite[\S60]{Blichfeldt}} for the terminology), and $V$ cannot split as a sum of a one-dimensional and an irreducible two-dimensional $\tilde{G}$-representation. This proves assertion~(i).
If $\tilde{G}$ is abelian, let $\tilde{H}=\tilde{G}$. Thus, if $|G|$ is odd and $G$ is non-trivial, in all possible cases we see that the group $H=\pi(\tilde{H})\subset\operatorname{PGL}_3(\bar{\mathbb{K}})$ is a group with the properties required in assertion~(ii).
Suppose that $G\cong{\mathscr{H}}_3$. Then it follows from the classification (cf.~\cite[6.4]{Borel}) that $$ \tilde{G}\cong{\boldsymbol{\mu}}_3^3\rtimes{\boldsymbol{\mu}}_3, $$ and $\tilde{H}\cong{\boldsymbol{\mu}}_3^3$. The elements of $\tilde{H}$ can be simultaneously diagonalized. Hence the image~\mbox{$H\cong{\boldsymbol{\mu}}_3^2$} of $\tilde{H}$ in $\operatorname{PGL}_3(\bar{\mathbb{K}})$ contains an element $g$ such that $\operatorname{Fix}_{\mathbb{P}^2}(g)$ consists of a line and an isolated point. This proves assertion~(iii).
Assertion~(iv) directly follows from the classification (cf.~\cite[6.4]{Borel}). \end{proof}
Most of our remaining arguments are based on the following observation.
\begin{lemma} \label{lemma:SB-Aut} Let $S$ be a non-trivial Severi--Brauer surface over a field of characteristic zero, and let $G\subset\operatorname{Aut}(S)$ be a finite subgroup. Then the order of $G$ is odd. \end{lemma}
\begin{proof} Suppose that the order of $G$ is even. Then $G$ contains an element $g$ of order $2$. Consider the action of $g$ on $S_{\bar{\mathbb{K}}}\cong\mathbb{P}^2_{\bar{\mathbb{K}}}$. The fixed point locus $\operatorname{Fix}_{S_{\bar{\mathbb{K}}}}(g)$ is a union of a line and a unique isolated point $P$. Since the Galois group $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$ commutes with $g$, the point $P$ is $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant, which is impossible by assumption. \end{proof}
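To illustrate the shape of the fixed point locus used in the proof: over $\bar{\mathbb{K}}$ every element of order $2$ in $\operatorname{PGL}_3(\bar{\mathbb{K}})$ is conjugate to the involution $$ g\colon [x:y:z]\mapsto [x:y:-z], $$ whose fixed point locus in $\mathbb{P}^2_{\bar{\mathbb{K}}}$ is the disjoint union of the line $\{z=0\}$ and the isolated point $P=[0:0:1]$.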
\begin{corollary} \label{corollary:SB-Aut} Let $S$ be a non-trivial Severi--Brauer surface over a field of characteristic zero, and let $G\subset\operatorname{Aut}(S)$ be a finite subgroup. Then \begin{itemize} \item[(i)] the group $G$ is either abelian, or contains a normal abelian subgroup of index~$3$;
\item[(ii)] there exists a $G$-invariant point of degree $3$ on $S$. \end{itemize} \end{corollary}
\begin{proof} By Lemma~\ref{lemma:SB-Aut}, the order of $G$ is odd. The action of $G$ on $S_{\bar{\mathbb{K}}}\cong\mathbb{P}^2_{\bar{\mathbb{K}}}$ gives an embedding $G\subset\operatorname{PGL}_3(\bar{\mathbb{K}})$. By Lemma~\ref{lemma:P2}(i) the group $G$ is either abelian, or contains a normal abelian subgroup of index~$3$. This proves assertion~(i).
To prove assertion~(ii), we may assume that $G$ is non-trivial. By Lemma~\ref{lemma:P2}(ii), the group $G$ contains a subgroup $H$ such that either $|\operatorname{Fix}_{S_{\bar{\mathbb{K}}}}(H)|=3$, or $\operatorname{Fix}_{S_{\bar{\mathbb{K}}}}(H)$ has a unique isolated point; moreover, one can choose $H$ with such a property so that either~$H$ coincides with~$G$, or $H$ is a normal subgroup of index $3$ in~$G$. In any case, $\operatorname{Fix}_{S_{\bar{\mathbb{K}}}}(H)$ cannot have a unique isolated point, because this point would be $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant, which is impossible by assumption. Hence $|\operatorname{Fix}_{S_{\bar{\mathbb{K}}}}(H)|=3$. Since $H$ is a normal subgroup in $G$, the set $\operatorname{Fix}_{S_{\bar{\mathbb{K}}}}(H)$ is $G$-invariant. This gives a $G$-invariant point of degree $3$ on~$S$ and proves assertion~(ii). \end{proof}
\begin{remark} It is interesting to note that the analogs of Lemma~\ref{lemma:SB-Aut} and Corollary~\ref{corollary:SB-Aut} do not hold for Severi--Brauer curves, i.e.\ for conics. For instance, the conic over the field $\mathbb{R}$ of real numbers defined by the equation $$ x^2+y^2+z^2=0 $$ in $\mathbb{P}^2$ with homogeneous coordinates $x$, $y$, and $z$ has no $\mathbb{R}$-points. However, it is acted on by all finite groups embeddable into $\operatorname{PGL}_2(\mathbb{C})$, that is, by cyclic groups, dihedral groups, the tetrahedral group~$\mathfrak{A}_4$, the octahedral group~$\mathfrak{S}_4$, and the icosahedral group~$\mathfrak{A}_5$. From this point of view finite groups acting on non-trivial Severi--Brauer curves are \emph{more} complicated than those acting on non-trivial Severi--Brauer surfaces. It would be interesting to obtain a complete classification of finite groups acting on Severi--Brauer surfaces similarly to what is done for conics in~\cite{Garcia-Armas}. \end{remark}
Recall from~\S\ref{section:birational} that a Severi--Brauer surface $S$ is birational to the surface~$S^{op}$. In particular, the groups $\operatorname{Bir}(S)$ and $\operatorname{Bir}(S^{op})$ are (non-canonically) isomorphic, and the group $\operatorname{Aut}(S^{op})$ is (non-canonically) realized as a subgroup of~\mbox{$\operatorname{Bir}(S)$}. Note also that~\mbox{$\operatorname{Aut}(S^{op})\cong \operatorname{Aut}(S)$}, although these groups are not conjugate in~\mbox{$\operatorname{Bir}(S)$}.
Corollary~\ref{corollary:SB-Aut} has the following geometric consequence.
\begin{corollary}\label{corollary:conjugate} Let $S$ be a non-trivial Severi--Brauer surface over a field of characteristic zero, and let $G\subset\operatorname{Aut}(S^{op})$ be a finite subgroup. Then $G$ is conjugate to a subgroup of~$\operatorname{Aut}(S)$. \end{corollary}
\begin{proof} By Corollary~\ref{corollary:SB-Aut}(ii) applied to $S^{op}$, the group $G$ has an invariant point $P$ of degree $3$ on $S^{op}$, and by Lemma~\ref{lemma:SB-general-position} this point is in general position. Blowing up~$P$ and blowing down the proper transforms of the three lines on $S^{op}_{\bar{\mathbb{K}}}\cong\mathbb{P}^2_{\bar{\mathbb{K}}}$ passing through the pairs of the three points of $P_{\bar{\mathbb{K}}}$, we obtain a (regular) action of $G$ on the surface~$S$ together with a $G$-equivariant birational map $\tau_P\colon S^{op}\dasharrow S$, cf.~\S\ref{section:birational}. This means that $G$ is conjugate to a subgroup of~$\operatorname{Aut}(S)$. \end{proof}
Now we prove Proposition~\ref{proposition:SB-Bir-vs-Aut}.
\begin{proof}[Proof of Proposition~\ref{proposition:SB-Bir-vs-Aut}] We know from Proposition~\ref{proposition:summary} that $G$ is conjugate to a subgroup of $\operatorname{Aut}(S')$, where $S'\cong S$ or $S'\cong S^{op}$. In the former case we are done. In the latter case~$G$ is conjugate to a subgroup of $\operatorname{Aut}(S)$ by Corollary~\ref{corollary:conjugate}. \end{proof}
Similarly to Lemma~\ref{lemma:SB-Aut}, we prove the following.
\begin{lemma} \label{lemma:27} Let $\mathbb{K}$ be a field of characteristic zero that contains all roots of $1$. Let $S$ be a non-trivial Severi--Brauer surface over $\mathbb{K}$, and let $G\subset\operatorname{Aut}(S)$ be a finite subgroup. Then $G$ is isomorphic to a subgroup of~${\boldsymbol{\mu}}_3^2$. \end{lemma} \begin{proof} We know from Theorem~\ref{theorem:ShramovVologodsky}(i)
that every non-trivial element of $G$ has order $3$, and~\mbox{$|G|\le 27$}. Assume that $G$ is not isomorphic to a subgroup of~${\boldsymbol{\mu}}_3^2$. Then either $G\cong{\boldsymbol{\mu}}_3^3$ or~\mbox{$G\cong{\mathscr{H}}_3$}. The former case is impossible by Lemma~\ref{lemma:P2}(iv). Thus, we have $G\cong{\mathscr{H}}_3$. By Lemma~\ref{lemma:P2}(iii) the group $G$ contains an element $g$ such that $\operatorname{Fix}_{S_{\bar{\mathbb{K}}}}(g)$ has a unique isolated point. This latter point must be $\operatorname{Gal}(\bar{\mathbb{K}}/\mathbb{K})$-invariant, which is impossible by assumption. \end{proof}
Finally, we prove our main result.
\begin{proof}[Proof of Theorem~\ref{theorem:main}] Assertion (i) follows from Proposition~\ref{proposition:summary} and Lemma~\ref{lemma:SB-Aut}. Assertion~(ii) follows from Proposition~\ref{proposition:SB-Bir-vs-Aut} and Corollary~\ref{corollary:SB-Aut}(i). Assertion (iii) follows from Proposition~\ref{proposition:summary} and Lemma~\ref{lemma:27}. \end{proof}
I do not know if the bound provided by Theorem~\ref{theorem:main}(iii) (or Lemma~\ref{lemma:27}) is optimal. However, in certain cases it is easy to construct non-trivial Severi--Brauer surfaces with an action of the group~\mbox{${\boldsymbol{\mu}}_3^2$}.
\begin{example}\label{example:cyclic-algebra} Let $\mathbb{K}$ be a field of characteristic different from $3$ that contains a primitive cubic root of unity $\omega$. Let $a, b\in \mathbb{K}$ be elements such that $b$ is not a cube in $\mathbb{K}$, and~$a$ is not contained in the image of the Galois norm for the field extension~\mbox{$\mathbb{K}\subset\mathbb{K}(\sqrt[3]{b})$}. Consider the algebra~$A$ over~$\mathbb{K}$ generated by variables $u$ and $v$ subject to the relations $$ u^3=a,\quad v^3=b,\quad uv=\omega vu. $$ Then $A$ is a central division algebra, see for instance~\mbox{\cite[Exercise~3.1.6(ii),(iv)]{GS}}. One has $\dim A=9$, so that $A$ corresponds to a non-trivial Severi--Brauer surface $S$. Conjugation by $u$ defines an automorphism of order $3$ of $A$ (sending $u$ to $u$ and $v$ to~$\omega v$). Together with conjugation by $v$ it generates a group ${\boldsymbol{\mu}}_3^2$ acting by automorphisms of $A$ and $S$. \end{example}
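As a sanity check on the defining relations of $A$ (not, of course, on the division property, which fails after extending scalars to $\mathbb{C}$), one can realize $u$ and $v$ by explicit $3\times 3$ complex matrices; the sample values of $a$ and $b$ below are illustrative only.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
a, b = 2.0, 5.0                    # illustrative parameters only
y = b ** (1.0 / 3.0)               # a cube root of b

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scale(c, X):
    return [[c * X[i][j] for j in range(3)] for i in range(3)]

def close(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) < tol
               for i in range(3) for j in range(3))

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u = [[0, 0, a], [1, 0, 0], [0, 1, 0]]              # cyclic shift, u^3 = a
v = [[y, 0, 0], [0, w**2 * y, 0], [0, 0, w * y]]   # diagonal, v^3 = b

assert close(mul(u, mul(u, u)), scale(a, I3))      # u^3 = a
assert close(mul(v, mul(v, v)), scale(b, I3))      # v^3 = b
assert close(mul(u, v), scale(w, mul(v, u)))       # uv = w vu
```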
\end{document} |
\begin{document}
\title{Uniform test of algorithmic randomness over a general space}
\author{Peter G\'acs}
\address{Boston University}
\email{[email protected]}
\date{\today}
\begin{abstract} The algorithmic theory of randomness is well developed when the underlying space is the set of finite or infinite sequences and the underlying probability distribution is the uniform distribution or a computable distribution. These restrictions seem artificial. Some progress has been made to extend the theory to arbitrary Bernoulli distributions (by Martin-L\"of), and to arbitrary distributions (by Levin). We recall the main ideas and problems of Levin's theory, and report further progress in the same framework. The issues are the following: {
\settowidth\leftmarginii{\hspace{0.5pc}--}
\begin{enumerate}[--]
\item Allow non-compact spaces (like the space of continuous functions,
underlying the Brown\-ian motion).
\item The uniform test (deficiency of randomness) $\mathbf{d}_{P}(x)$ (depending both on the outcome $x$ and the measure $P$) should be defined in a general and natural way.
\item See which of the old results survive: existence of universal tests, conservation of randomness, expression of tests in terms of description complexity, existence of a universal measure, expression of mutual information as ``deficiency of independence''.
\item The negative of the new randomness test is shown to be a generalization of
complexity in continuous spaces; we show that the addition theorem
survives.
\end{enumerate}
The paper's main contribution is introducing an appropriate framework for studying these questions and related ones (like statistics for a general family of distributions). }
\end{abstract}
\keywords{algorithmic information theory, algorithmic entropy, randomness test, Kolmogorov complexity, description complexity} \subjclass{60A99; 68Q30}
\maketitle
\section{Introduction}
\subsection{Problem statement}
The algorithmic theory of randomness is well developed when the underlying space is the set of finite or infinite sequences and the underlying probability distribution is the uniform distribution or a computable distribution. These restrictions seem artificial. Some progress has been made to extend the theory to arbitrary Bernoulli distributions by Martin-L\"of in~\cite{MLof66art}, and to arbitrary distributions, by Levin in~\cite{LevinRand73,LevinUnif76,LevinRandCons84}. The paper~\cite{HertlingWeihrauchRand98} by Hertling and Weihrauch also works in general spaces, but it is restricted to computable measures. Similarly, Asarin's thesis~\cite{Asarin88} defines randomness for sample paths of the Brownian motion: a fixed random process with computable distribution.
The present paper has been inspired mainly by Levin's early paper~\cite{LevinUnif76} (and the much more elaborate~\cite{LevinRandCons84} that uses different definitions): let us summarize part of the content of~\cite{LevinUnif76}. The notion of a constructive topological space $\mathbf{X}$ and the space of measures over $\mathbf{X}$ is introduced. Then the paper defines the notion of a uniform test. Each test is a lower semicomputable function $(\mu,x) \mapsto f_{\mu}(x)$, satisfying $\int f_{\mu}(x) \mu(dx) \leqslant 1$ for each measure $\mu$. There are also some additional conditions. The main claims are the following.
\begin{enumerate}[\upshape (a)]
\item There is a universal test $\mathbf{t}_{\mu}(x)$, a test such that for each other test $f$ there is a constant $c > 0$ with $f_{\mu}(x) \leqslant c\cdot \mathbf{t}_{\mu}(x)$.
The \df{deficiency of randomness} is defined as $\mathbf{d}_{\mu}(x) = \log\mathbf{t}_{\mu}(x)$.
\item The universal test has some strong properties of ``randomness conservation'': these say, essentially, that a computable mapping or a computable randomized transition does not decrease randomness.
\item There is a measure $M$ with the property that for every outcome $x$ we have $\mathbf{t}_{M}(x) \leqslant 1$. In the present paper, we will call such measures \df{neutral}.
\item\label{i.Levin.semimeasure} Semimeasures (semi-additive measures) are introduced and it is shown that there is a lower semicomputable semimeasure that is neutral (so we can assume that the $M$ introduced above is lower semicomputable).
\item Mutual information $I(x : y)$ is defined with the help of (an appropriate version of) Kolmogorov complexity, between outcomes $x$ and $y$. It is shown that $I(x : y)$ is essentially equal to $\mathbf{d}_{M \times M}(x,y)$. This interprets mutual information as a kind of ``deficiency of independence''.
\end{enumerate} This impressive theory leaves a number of issues unresolved:
\begin{enumerate}[\upshape 1.]
\item The space of outcomes is restricted to be a compact topological space, moreover, a particular compact space: the set of sequences over a finite alphabet (or, implicitly in~\cite{LevinRandCons84}, a compactified infinite alphabet). However, a good deal of modern probability theory happens over spaces that are not even locally compact: for example, in case of the Brownian motion, over the space of continuous functions.
\item The definition of a uniform randomness test includes some conditions (different ones in~\cite{LevinUnif76} and in~\cite{LevinRandCons84}) that seem somewhat arbitrary.
\item No simple expression is known for the general universal test in terms of description complexity. Such expressions are nice to have if they are available.
\end{enumerate}
\subsection{Content of the paper}
The present paper intends to carry out as much of Levin's program as seems possible after removing the restrictions. It leaves a number of questions open, but we feel that they are at least worth formulating. A fairly large part of the paper is devoted to the necessary conceptual machinery. Eventually, this will also allow us to carry further some other initiatives started in the works~\cite{MLof66art} and~\cite{LevinRand73}: the study of tests that test nonrandomness with respect to a whole class of measures (like the Bernoulli measures).
Constructive analysis has been developed by several authors, converging approximately on the same concepts. We will make use of a simplified version of the theory introduced in~\cite{WeihrauchComputAnal00}. As we have not found a constructive measure theory in the literature fitting our purposes, we will develop this theory here, over (constructive) complete separable metric spaces. This generality is well supported by standard results in measure theoretical probability, and is sufficient for constructivizing a large part of current probability theory.
The appendix recalls some of the needed topology, measure theory and constructive analysis. Constructive measure theory is introduced in Section~\ref{s.constr-meas}.
Section~\ref{s.unif-test} introduces uniform randomness tests. It proves the existence of universal uniform tests, under a reasonable assumption about the topology (``recognizable Boolean inclusions''). Then it proves conservation of randomness.
Section~\ref{s.complexity} explores the relation between description (Kolmogorov) complexity and uniform randomness tests. After extending randomness tests over non-normalized measures, its negative logarithm will be seen as a generalized description complexity.
The rest of the section explores the extent to which the old results characterizing a random infinite string by the description complexity of its segments can be extended to the new setting. We will see that the simple formula working for computable measures over infinite sequences does not generalize. However, rather simple formulas are still available in some cases: namely, the discrete case with general measures, and a space allowing a certain natural cell decomposition, in case of computable measures.
Section~\ref{s.neutral} proves Levin's theorem about the existence of a neutral measure, for compact spaces. Then it shows that the result does not generalize to non-compact spaces, not even to the discrete space. It also shows that with our definition of tests, the neutral measure cannot be chosen semicomputable, even in the case of the discrete space with one-point compactification.
Section~\ref{s.rel-entr} takes up the idea of viewing the negative logarithm of a randomness test as generalized description complexity. Calling this notion \df{algorithmic entropy}, this section explores its information-theoretical properties. The main result is a (nontrivial) generalization of the addition theorem of prefix complexity (and, of course, classical entropy) to the new setting.
\subsection{Some history}
Attempts to define randomness rigorously have a long but rather sparse history starting with von Mises and continuing with Wald, Church, Ville. Kolmogorov's work in this area inspired Martin-L\"of whose paper~\cite{MLof66art} introduces the notion of randomness used here.
Description complexity has been introduced independently by Solomonoff, Kolmogorov and Chaitin. Prefix complexity has been introduced independently by Levin and Chaitin. See~\cite{LiViBook97} for a discussion of priorities and contributions. The addition theorem (whose generalization is given here) has been proved first for Kolmogorov complexity, with a logarithmic error term, by Kolmogorov and Levin. For the prefix complexity its present form has been proved jointly by Levin and G\'acs in~\cite{GacsSymm74}, and independently by Chaitin in~\cite{Chaitin75}.
In his PhD thesis, Martin-L\"of also characterized randomness of finite sequences via their complexity. For infinite sequences, complete characterizations of their randomness via the complexity of their segments were given by Levin in~\cite{LevinRand73}, by Schnorr in~\cite{Schnorr73} and in~\cite{Chaitin75} (attributed). Of these, only Levin's result is formulated for general computable measures: the others apply only to coin-tossing. Each of these works uses a different variant of description complexity. Levin uses monotone complexity and the logarithm of the universal semicomputable measure (see~\cite{GacsRel83} for the difficult proof that these two complexities are different). Schnorr uses ``process complexity'' (similar to monotone complexity) and prefix complexity. The work~\cite{GacsExact80} by the present author gives characterizations using the original Kolmogorov complexity (for general computable measures).
Uniform tests over the space of infinite sequences, randomness conservation and neutral measures were introduced in Levin's work~\cite{LevinUnif76}. The present author could not verify every result in that paper (which contains no proofs); he reproduced most of them with a changed definition in~\cite{GacsExact80}. A universal uniform test with yet another definition appeared in~\cite{LevinRandCons84}. In this latter work, ``information conservation'' is a central tool used to derive several results in logic. In the constellation of Levin's concepts, information conservation becomes a special case of randomness conservation. We have not been able to reproduce this exact relation with our definition here.
The work~\cite{GacsBoltzmann94} is based on the observation that Zurek's idea on ``physical'' entropy and the ``cell volume'' approach of physicists to the definition of entropy can be unified: Zurek's entropy can be seen as an approximation of the limit arising in a characterization of a randomness test by complexity. The author discovered in this same paper that the negative logarithm of a general randomness test can be seen as a generalization of complexity. He felt encouraged by the discovery of the generalized addition theorem presented here.
The appearance of other papers in the meantime (including~\cite{HertlingWeihrauchRand98}) convinced the author that there is no accessible and detailed reference work on algorithmic randomness for general measures and general spaces, and a paper like the present one, developing the foundations, is needed. (Asarin's thesis~\cite{Asarin88} does develop the theory of randomness for the Brownian motion. It is a step in our direction in the sense that the space is not compact, but it is all done for a single explicitly given computable measure.)
We do not advocate the uniform randomness test proposed here as necessarily the ``definitive'' test concept. Perhaps a good argument can be found for some additional conditions, similar to the ones introduced by Levin, providing additional structure (like a semicomputable neutral measure) while preserving naturalness and the attractive properties presented here.
\subsection{Notation for the paper}
(Nothing to do with the formal concept of ``notation'', introduced later in the section on constructive analysis.) The sets of natural numbers, integers, rational numbers, real numbers and complex numbers will be denoted respectively by $\mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C}$. The set of nonnegative real numbers will be denoted by $\mathbb{R}_{+}$. The set of real numbers with $-\infty,\infty$ added (with the appropriate topology making it compact) will be denoted by $\ol\mathbb{R}$. We use $\et$ and $\V$ to denote $\min$ and $\max$, further
\[
|x|^{+} = x\V 0,\quad |x|^{-} = |-x|^{+}
\] for real numbers $x$. We partially follow~\cite{WeihrauchComputAnal00}, \cite{BrattkaComputTopStruct03} and~\cite{HertlingWeihrauchRand98}. In particular, adopting the notation of~\cite{WeihrauchComputAnal00}, we denote intervals of the real line as follows (to avoid conflict with the notation $(a,b)$ for a pair of objects):
\[
\clint{a}{b} = \setof{x : a \leqslant x \leqslant b}, \quad
\opint{a}{b} = \setof{x : a < x < b}, \quad \lint{a}{b} = \setof{x : a \leqslant x < b}.
\] If $X$ is a set then $X^{*}$ is the set of all finite strings made up of elements of $X$, including the ``empty string'' $\Lg$. We denote by $X^{\og}$ the set of all infinite sequences of elements of
$X$. If $A$ is a set then $1_{A}(x)$ is its indicator function, defined to be $1$ if $x\in A$ and $0$ otherwise. For a string $x$, its length is $|x|$, and
\[
x^{\leqslant n} = (x(1),\dots,x(n)).
\] The relations
\[
f \lea g,\quad f\lem g
\] mean inequality to within an additive constant and multiplicative constant respectively. The first is equivalent to $f \leqslant g + O(1)$, the second to $f = O(g)$. The relation $f\eqm g$ means $f \lem g$ and $f \gem g$.
Borrowing from~\cite{PollardUsers01}, for a function $f$ and a measure $\mu$, we will use the notation
\[
\mu f = \int f(x)\mu(dx), \quad \mu^{y} f(x, y) = \int f(x,y)\mu(dy).
\]
\section{Constructive measure theory}\label{s.constr-meas}
The basic concepts and results of measure theory are recalled in Section~\ref{s.measures}. For the theory of measures over metric spaces, see Subsection~\ref{ss.measure-metric}. We introduce a certain fixed, enumerated sequence of Lipschitz functions that will be used frequently. Let $\mathcal{F}_{0}$ be the set of functions of the form $g_{u,r,1/n}$ where $u \in D$, $r\in \mathbb{Q}$, $n = 1, 2, \dots$, and
\begin{equation*}
g_{u,r,\eps}(x) = |1 - |d(x, u) - r|^{+}/\eps|^{+}
\end{equation*}
is a continuous function that is $1$ on the ball $B(u,r)$, is $0$ outside $B(u, r+\eps)$, and takes intermediate values in between. Let
\begin{equation}\label{e.bd-Lip-seq}
\mathcal{E} = \{g_{1}, g_{2}, \dots \}
\end{equation}
be the smallest set of functions containing $\mathcal{F}_{0}$ and the constant 1, and closed under $\V$, $\et$ and rational linear combinations. The following construction will prove useful later.
\begin{proposition}\label{p.bd-Lip-set} All bounded continuous functions can be obtained as the limit of an increasing sequence of functions from the enumerated countable set $\mathcal{E}$ of bounded computable Lip\-schitz functions introduced in~\eqref{e.bd-Lip-seq}.
\end{proposition} The proof is routine.
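For concreteness, here is a small numerical sketch of the bump function $g_{u,r,\eps}$ of~\eqref{e.bd-Lip-seq} on the real line, with $d(x,u)=|x-u|$; the particular sample points are illustrative only.

```python
def pos(t):
    """|t|^+ = max(t, 0)."""
    return max(t, 0.0)

def g(u, r, eps, x):
    """g_{u,r,eps}(x) = |1 - |d(x,u) - r|^+ / eps|^+ with d(x,u) = |x-u|."""
    return pos(1.0 - pos(abs(x - u) - r) / eps)

# 1 on the ball B(0,1), 0 outside B(0, 1.5), linear in between:
assert g(0, 1, 0.5, 0.25) == 1.0
assert g(0, 1, 0.5, 1.6) == 0.0
assert abs(g(0, 1, 0.5, 1.25) - 0.5) < 1e-12
```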
\subsection{Space of measures}
Let $\mathbf{X} = (X, d, D, \ag)$ be a computable metric space. In Subsection~\ref{ss.measure-metric}, the space $\mathcal{M}(\mathbf{X})$ of measures over $\mathbf{X}$ is defined, along with a natural enumeration $\nu = \nu_{\mathcal{M}}$ for a subbase $\sg = \sg_{\mathcal{M}}$ of the weak topology. This is a constructive topological space $\mathbf{M}$ which can be metrized by introducing, as in~\ref{sss.Prokh}, the \df{Prokhorov distance} $p(\mu, \nu)$: the infimum of all those $\eps$ for which, for all Borel sets $A$ we have $\mu(A) \leqslant \nu(A^{\eps}) + \eps$, where $A^{\eps} = \setof{x : \exists y\in A\; d(x, y) < \eps}$. Let $D_{\mathbf{M}}$ be the set of those probability measures that are concentrated on finitely many points of $D$ and assign rational values to them. Let $\ag_{\mathbf{M}}$ be a natural enumeration of $D_{\mathbf{M}}$. Then
\begin{equation}\label{e.metric-measures}
(\mathcal{M}, p, D_{\mathbf{M}}, \ag_{\mathbf{M}})
\end{equation} is a computable metric space whose constructive topology is equivalent to $\mathbf{M}$. Let $U=B(x, r)$ be one of the balls in $\mathbf{X}$, where $x\in D_{\mathbf{X}}$, $r \in \mathbb{Q}$. The function $\mu \mapsto \mu(U)$ is typically not computable, not even continuous. For example, if $\mathbf{X}=\mathbb{R}$ and $U$ is the open interval $\opint{0}{1}$, the sequence of probability measures $\dg_{1/n}$ (concentrated on $1/n$) converges to $\dg_{0}$, but $\dg_{1/n}(U)=1$, and $\dg_{0}(U)=0$. The following theorem shows that the situation is better with $\mu \mapsto \mu f$ for computable $f$:
\begin{proposition}\label{p.computable-integral} Let $\mathbf{X} = (X, d, D, \ag)$ be a computable metric space, and let $\mathbf{M} = (\mathcal{M}(\mathbf{X}), \sg, \nu)$ be the effective topological space of probability measures over $\mathbf{X}$. If a function $f : \mathbf{X} \to \mathbb{R}$ is bounded and computable, then $\mu \mapsto \mu f$ is computable.
\end{proposition}
\begin{proof}[Proof sketch] To prove the theorem for bounded Lip\-schitz functions, we can invoke the Strassen coupling theorem~\ref{p.coupling}.
The function $f$ can be obtained as a limit of a computable monotone increasing sequence of computable Lip\-schitz functions $f^{>}_{n}$, and also as a limit of a computable monotone decreasing sequence of computable Lip\-schitz functions $f^{<}_{n}$. In step $n$ of our computation of $\mu f$, we can approximate $\mu f^{>}_{n}$ from above to within $1/n$, and $\mu f^{<}_{n}$ from below to within $1/n$. Let these bounds be $a^{>}_{n}$ and $a^{<}_{n}$. To approximate $\mu f$ to within $\eps$, find a stage $n$ with $a^{>}_{n} - a^{<}_{n} +2/n < \eps$.
\end{proof}
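The sandwich computation in this proof can be imitated numerically. A sketch, assuming the uniform measure on $\clint{0}{1}$ and using the standard inf/sup-convolution formulas for the Lipschitz minorants and majorants (the grid discretization and all names are ours):

```python
def lip_minorant(f, n, grid):
    # f^<_n(x) = inf_y f(y) + n*|x - y|: n-Lipschitz, increases to f
    return lambda x: min(f(y) + n * abs(x - y) for y in grid)

def lip_majorant(f, n, grid):
    # f^>_n(x) = sup_y f(y) - n*|x - y|: n-Lipschitz, decreases to f
    return lambda x: max(f(y) - n * abs(x - y) for y in grid)

def sandwich_mu_f(f, n, m=500):
    """Bracket the integral of f under the uniform measure on [0,1]
    between the integrals of the two n-Lipschitz approximations,
    using the midpoint rule on m points (a discretized stand-in for
    the bounds a^<_n and a^>_n of the proof)."""
    grid = [k / m for k in range(m + 1)]
    pts = [(k + 0.5) / m for k in range(m)]
    lo = sum(lip_minorant(f, n, grid)(x) for x in pts) / m
    hi = sum(lip_majorant(f, n, grid)(x) for x in pts) / m
    return lo, hi
```

For a function that is already $n$-Lipschitz, such as $x^{2}$ with $n \geqslant 2$, the two bounds agree with the true integral up to discretization error.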
\subsection{Computable measures and random transitions}\label{ss.computable-trans} A measure $\mu$ is called \df{computable} if it is a computable element of the space of measures. Let $\{g_{i}\}$ be the set of bounded Lip\-schitz functions over $X$ introduced in~\eqref{e.bd-Lip-seq}.
\begin{proposition}\label{p.computable-meas-crit} Measure $\mu$ is computable if and only if so is the function $i \mapsto \mu g_{i}$.
\end{proposition}
\begin{proof} The ``only if'' part follows from Proposition~\ref{p.computable-integral}. For the ``if'' part, note that in order to trap $\mu$ within some Prokhorov neighborhood of size $\eps$, it is sufficient to compute $\mu g_{i}$ within a small enough $\dg$, for all $i\leqslant n$ for a large enough $n$.
\end{proof}
\begin{example} Let our probability space be the set $\mathbb{R}$ of real numbers with its standard topology. Let $a < b$ be two computable real numbers. Let $\mu$ be the probability distribution with density function
$f(x) = \frac{1}{b-a}1_{\clint{a}{b}}(x)$ (the uniform distribution over the interval $\clint{a}{b}$). Function $f(x)$ is not computable, since it is not even continuous. However, the measure $\mu$ is computable: indeed, $\mu g_{i} = \frac{1}{b-a} \int_{a}^{b} g_{i}(x) dx$ is a computable sequence, hence Proposition~\ref{p.computable-meas-crit} implies that $\mu$ is computable.
\end{example}
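The integrals $\mu g_{i}$ in this example can be approximated to any precision; a short Python sketch (function names are ours) for the uniform distribution on $\clint{a}{b}$, using the midpoint rule:

```python
def mu_g(g, a, b, steps=10000):
    """mu g = (1/(b-a)) * integral_a^b g(x) dx for the uniform
    distribution on [a, b], via the midpoint rule; refining `steps`
    approximates the computable sequence mu g_i arbitrarily well."""
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h / (b - a)
```

For instance, `mu_g(lambda x: x, 0.0, 1.0)` approximates the mean $1/2$ of the uniform distribution on $\clint{0}{1}$.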
The following theorem compensates somewhat for the fact mentioned earlier, that the function $\mu \mapsto \mu(U)$ is generally not computable.
\begin{proposition}
Let $\mu$ be a finite computable measure. Then there is a computable map $h$ with the following property: for every bounded computable function $f$ with $|f| \leqslant 1$ and $\mu(f^{-1}(0))=0$, if $w$ is a name of $f$ then $h(w)$ is the name of a program computing the value $\mu\setof{x: f(x) < 0}$.
\end{proposition}
\begin{proof} Straightforward.
\end{proof}
\begin{remark} Suppose that there is a computable function that for each $i$ computes a Cauchy sequence $j \mapsto m_{i}(j)$ with the property that for
$i < j_{1} < j_{2}$ we have $|m_{i}(j_{1})-m_{i}(j_{2})| < 2^{-j_{1}}$, and that for all $n$, there is a measure $\nu$ with the property that for all $i \leqslant n$, $\nu g_{i} = m_{i}(n)$. Is there a measure $\mu$ with the property that for each $i$ we have $\lim_{j} m_{i}(j) = \mu g_{i}$? Not necessarily, if the space is not compact. For example, let $X = \{1,2,3,\dots\}$ with the discrete topology. The sequences $m_{i}(j) = 0$ for $j > i$ satisfy these conditions, but they converge to the measure 0, not to a probability measure. To guarantee that the sequences $m_{i}(j)$ indeed define a probability measure, one must require explicit progress, for example in terms of the narrowing of Prokhorov neighborhoods.
\end{remark}
Let now $\mathbf{X},\mathbf{Y}$ be computable metric spaces. They give rise to measurable spaces with $\sg$-algebras $\mathcal{A}, \mathcal{B}$ respectively. Let $\Lg = \setof{\lg_{x} : x \in X}$ be a probability kernel from $X$ to $Y$ (as defined in Subsection~\ref{ss.transitions}). Let $\{g_{i}\}$ be the set of bounded Lip\-schitz functions over $Y$ introduced in~\eqref{e.bd-Lip-seq}. To each $g_{i}$, the kernel assigns a (bounded) measurable function
\[
f_{i}(x) = (\Lg g_{i})(x) = \lg_{x}^{y} g_{i}(y).
\] We will call $\Lg$ \df{computable} if so is the assignment $(i, x) \mapsto f_{i}(x)$. In this case, of course, each function $f_{i}(x)$ is continuous. The measure $\Lg^{*}\mu$ is determined by the values $(\Lg^{*}\mu) g_{i} = \mu (\Lg g_{i})$, which are computable from $(i, \mu)$, and so the function $\mu \mapsto \Lg^{*}\mu$ is computable.
\begin{example}\label{x.computable-determ-trans} A computable function $h : X \to Y$ defines an operator $\Lg_{h}$ with $\Lg_{h} g = g \circ h$ (as in Example~\ref{x.determ-trans}). This is a deterministic computable transition, in which $f_{i}(x) = (\Lg_{h} g_{i})(x) = g_{i}(h(x))$ is, of course, computable from $(i,x)$. We define $h^{*}\mu = \Lg_{h}^{*}\mu$.
\end{example}
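For finitely supported measures, the push-forward $h^{*}\mu$ of this example is just a transport of mass; a few lines of Python (names ours) make this concrete:

```python
def pushforward(mu, h):
    """h^* mu for a finitely supported measure mu (dict: point -> mass)
    under a map h: the deterministic transition of the example, where
    (h^* mu)(g) = mu(g o h)."""
    out = {}
    for x, w in mu.items():
        y = h(x)
        out[y] = out.get(y, 0.0) + w
    return out
```

For example, pushing the uniform measure on $\{-1, 1\}$ forward under the absolute value collapses all mass onto the point $1$.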
\subsection{Cells}\label{ss.cells}
As pointed out earlier, it is not convenient to define a measure $\mu$ constructively starting from $\mu(\Gg)$ for open cells $\Gg$. The reason is that no matter how we fix $\Gg$, the function $\mu \mapsto \mu(\Gg)$ is typically not computable. It is better to work with bounded computable functions, since for such a function $f$, the correspondence $\mu \mapsto \mu f$ is computable.
Under some special conditions, we will still get ``sharp'' cells. Let $f$ be a bounded computable function over $\mathbf{X}$, let $\ag_{1}<\dots<\ag_{k}$ be rational numbers, and let $\mu$ be a computable measure with the property that $\mu f^{-1}(\ag_{j})=0$ for all $j$. In this case, we will say that $\ag_{j}$ are \df{regular points} of $f$ with respect to $\mu$. Let $\ag_{0}=-\infty$, $\ag_{k+1}=\infty$, and for $j=0,\dots,k$, let $U_{j} = f^{-1}(\opint{\ag_{j}}{\ag_{j+1}})$. The sequence of disjoint r.e.~open sets $(U_{0},\dots,U_{k})$ will be called the \df{partition generated by} $f,\ag_{1},\dots,\ag_{k}$. (Note that this sequence is not a partition in the sense of $\bigcup_{j}U_{j}=X$, since the boundaries of the sets are left out.) If we have several partitions $(U_{i,0},\dots,U_{i,k_{i}})$, generated by different functions $f_{i}$ ($i=1,\dots,m$) and different regular cutoff sequences $(\ag_{ij}: j=1,\dots,k_{i})$, then we can form a new partition generated by all possible intersections
\[
V_{j_{1},\dots,j_{m}} = U_{1,j_{1}}\cap \dots \cap U_{m,j_{m}}.
\] A partition of this kind will be called a \df{regular partition}. The sets $V_{j_{1},\dots,j_{m}}$ will be called the \df{cells} of this partition.
\begin{proposition}\label{p.reg-partit-meas-cptable} In a regular partition as given above, the values $\mu V_{j_{1},\dots,j_{m}}$ are computable from the names of the functions $f_{i}$ and the cutoff points $\ag_{ij}$.
\end{proposition}
\begin{proof} Straightforward.
\end{proof}
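A one-dimensional illustration of such partitions (all names and the choice of the uniform measure on $\clint{0}{1}$ are ours): the sets $U_{j} = f^{-1}(\opint{\ag_{j}}{\ag_{j+1}})$ and their measures, estimated on a grid.

```python
def partition_measures(f, cutoffs, n=100000):
    """Estimate mu(U_j), U_j = f^{-1}((a_j, a_{j+1})) with a_0 = -inf
    and a_{k+1} = +inf, for the uniform measure on [0,1], approximated
    by a grid of n points. The cutoffs are assumed regular, so grid
    points falling exactly on a boundary f^{-1}(a_j) do not matter."""
    a = [float("-inf")] + sorted(cutoffs) + [float("inf")]
    counts = [0] * (len(a) - 1)
    for i in range(n):
        v = f((i + 0.5) / n)
        for j in range(len(counts)):
            if a[j] < v < a[j + 1]:
                counts[j] += 1
                break
    return [c / n for c in counts]
```

With $f(x)=x$ and cutoffs $1/4, 1/2$, the three cells get measures $1/4, 1/4, 1/2$, as the proposition's computability claim predicts for this concrete case.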
Assume that a computable sequence of functions $b_{1}(x),b_{2}(x),\dots$ over $X$ is given, with the property that for every pair $x_{1},x_{2}\in X$ with $x_{1}\ne x_{2}$, there is a $j$ with $b_{j}(x_{1})\cdot b_{j}(x_{2}) < 0$. Such a sequence will be called a \df{separating sequence}. Let us give the correspondence between the set $\mathbb{B}^{\og}$ of infinite binary sequences and elements of the set
\[
X^{0} = \setof{x\in X: b_{j}(x)\ne 0,\;j=1,2,\dots}.
\] For a binary string $s_{1}\dotsm s_{n} = s\in\mathbb{B}^{*}$, let
\[
\Gg_{s}
\] be the set of those $x \in X$ with the property that for $j=1,\dots,n$, if $s_{j}=0$ then $b_{j}(x) < 0$, otherwise $b_{j}(x)>0$. This correspondence has the following properties.
\begin{enumerate}[(a)]
\item $\Gg_{\Lg}=X$.
\item For each $s\in \mathbb{B}^{*}$, the sets $\Gg_{s0}$ and $\Gg_{s1}$ are disjoint and their union is contained in $\Gg_{s}$.
\item For $x\in X^0$, we have $\{x\} = \bigcap_{x\in\Gg_{s}} \Gg_{s}$.
\end{enumerate} If $s$ has length $n$ then $\Gg_{s}$ will be called a \df{canonical $n$-cell}, or simply canonical cell, or $n$-cell. From now on, whenever $\Gg$ denotes a subset of $X$, it means a canonical cell. We will also use the notation
\[
l(\Gg_{s})=l(s).
The three properties above say that if we restrict ourselves to the set $X^0$ then the canonical cells behave somewhat like binary subintervals: they divide $X^0$ in half, then each half again in half, etc. Moreover, around each point, these canonical cells become ``arbitrarily small'', in some sense (though they may not be a basis of neighborhoods). It is easy to see that if $\Gg_{s_1},\Gg_{s_2}$ are two canonical cells then they are either disjoint or one of them contains the other. If $\Gg_{s_1}\sbs\Gg_{s_2}$ then $s_2$ is a prefix of $s_1$. If, for a moment, we write $\Gg^0_s=\Gg_s\cap X^0$ then we have the disjoint union $\Gg^0_s=\Gg^0_{s0}\cup\Gg^0_{s1}$. For an $n$-element binary string $s$, we will write
\[
\mu(s) = \mu(\Gg_{s}).
\]
Thus, for elements of $X^0$, we can talk about the $n$-th bit $x_n$ of the description of $x$: it is uniquely determined. The $2^n$ cells (some of them possibly empty) of the form $\Gg_s$ for $l(s)=n$ form a partition
\[
\mathcal{P}_n
\]
of $X^0$.
\begin{examples}\label{x.cells}\
\begin{enumerate}[\upshape 1.]
\item If $\mathbf{X}$ is the set of infinite binary sequences with its usual topology, the functions $b_{n}(x) = x_{n}-1/2$ generate the usual cells, and $\mathbf{X}^{0}=\mathbf{X}$.
\item If $\mathbf{X}$ is the interval $\clint{0}{1}$, let $b_{n}(x) = -\sin(2^{n}\pi x)$. Then the cells are open intervals of the form $\opint{k\cdot 2^{-n}}{(k+1)\cdot 2^{-n}}$, and the correspondence between infinite binary strings and elements of $X^0$ is just the usual binary expansion $x = 0.x_{1}x_{2}\dots$.
\end{enumerate}
\end{examples}
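The second example can be checked directly. A Python sketch (the function name is ours) that, following the convention above, sets bit $k$ to $1$ exactly when $b_{k}(x) > 0$ for $b_{k}(x) = -\sin(2^{k}\pi x)$; for $x \in X^{0}$ this recovers the binary expansion of $x$:

```python
import math

def cell_address(x, n):
    """The canonical n-cell containing x in (0,1) for the separating
    sequence b_k(x) = -sin(2^k pi x): bit k is 1 iff b_k(x) > 0."""
    bits = []
    for k in range(1, n + 1):
        b = -math.sin((2 ** k) * math.pi * x)
        if b == 0:
            raise ValueError("x lies on a cell boundary, outside X^0")
        bits.append("1" if b > 0 else "0")
    return "".join(bits)
```

Dyadic rationals lie on cell boundaries and hence outside $X^{0}$; for any other $x$ the address agrees with the binary digits $\lfloor 2^{k} x\rfloor \bmod 2$.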
When we fix canonical cells, we will generally assume that the partition chosen is also ``natural''. The bits $x_1,x_2,\ldots$ could contain information about the point $x$ in decreasing order of importance from a macroscopic point of view. For example, for a container of gas, the first few bits may describe, to a reasonable degree of precision, the amount of gas in the left half of the container, the next few bits may describe the amounts in each quarter, the next few bits may describe the temperature in each half, the next few bits may describe again the amount of gas in each half, but now to more precision, etc.
The following observation will prove useful.
\begin{proposition}\label{p.compact-cell-basis} Suppose that the space $\mathbf{X}$ is compact and we have a separating sequence $b_{i}(x)$ as given above. Then the cells $\Gg_{s}$ form a basis of the space $\mathbf{X}$.
\end{proposition}
\begin{proof} We need to prove that for every ball $B(x,r)$, there is a cell $\Gg_{s}$ with $x\in\Gg_{s}\sbs B(x,r)$. Let $C$ be the complement of $B(x,r)$. For each point $y$ of $C$, there is an $i$ such that $b_{i}(x)\cdot b_{i}(y) < 0$. In this case, let $J^{0} = \setof{z: b_{i}(z) < 0}$, $J^{1} = \setof{z: b_{i}(z) > 0}$. Let $J(y)=J^{p}$ for the $p$ with $y\in J^{p}$. Then $C \sbs \bigcup_{y} J(y)$, and compactness implies that there is a finite sequence $y_{1},\dots,y_{k}$ with $C \sbs \bigcup_{j=1}^{k} J(y_{j})$. Clearly, there is a cell $\Gg_{s}$ with $x \in \Gg_{s} \sbs B(x,r) \xcpt \bigcup_{j=1}^{k} J(y_{j})$.
\end{proof}
\section{Uniform tests}\label{s.unif-test}
\subsection{Universal uniform test}
Let $\mathbf{X} = (X, d, D, \ag)$ be a computable metric space, and let $\mathbf{M} = (\mathcal{M}(\mathbf{X}), \sg, \nu)$ be the constructive topological space of probability measures over $\mathbf{X}$. A \df{randomness test} is a function $f : \mathbf{M} \times \mathbf{X} \to \ol\mathbb{R}$ with the following two properties.
\begin{condition}\label{cnd.test}\
\begin{enumerate}[\upshape 1.]
\item The function $(\mu, x) \mapsto f_{\mu}(x)$ is lower semicomputable. (Then for each $\mu$, the integral $\mu f_{\mu} = \mu^{x} f_{\mu}(x)$ exists.)
\item\label{i.test.integr} $\mu f_{\mu} \leqslant 1$.
\end{enumerate}
\end{condition}
The value $f_{\mu}(x)$ is intended to quantify the nonrandomness of the outcome $x$ with respect to the probability measure $\mu$. The larger the value, the less random $x$ is. Condition~\ref{cnd.test}.\ref{i.test.integr} guarantees, by Markov's inequality, that the $\mu$-probability of those outcomes $x$ with $f_{\mu}(x) \geqslant m$ is at most $1/m$. The definition of tests is in the spirit of Martin-L\"of's tests. The important difference is in the semicomputability condition: instead of restricting the measure $\mu$ to be computable, we require the test to be lower semicomputable also in its argument $\mu$.
Just as with Martin-L\"of's tests, we want to find a universal test; however, we seem to need a condition on the space $\mathbf{X}$. Let us say that a sequence $i \mapsto U_{i}$ of sets has \df{recognizable Boolean inclusions} if the set
\[
\setof{ (S, T) : S, T \txt{ are finite, }
\bigcap_{i \in S} U_{i} \sbs \bigcup_{j \in T} U_{j}}
\] is recursively enumerable. We will say that a computable metric space has recognizable Boolean inclusions if this is true of the enumerated basis consisting of balls of the form $B(x, r)$ where $x \in D$ and $r > 0$ is a rational number.
It is our conviction that the important metric spaces studied in probability theory have recognizable Boolean inclusions, and that proving this in each individual case should not be too difficult. For example, it does not seem difficult to prove this for the space $C\clint{0}{1}$ of Example~\ref{x.cptable-metric-{0}{1}}, with the set of rational piecewise-linear functions chosen as $D$. But, we have not carried out any of this work!
\begin{theorem}\label{t.univ-unif} Suppose that the metric space $\mathbf{X}$ has recognizable Boolean inclusions. Then there is a universal test, that is, a test $\mathbf{t}_{\mu}(x)$ with the property that for every other test $f_{\mu}(x)$ there is a constant $c_{f} > 0$ with $c_{f} f_{\mu}(x) \leqslant \mathbf{t}_{\mu}(x)$.
\end{theorem}
\begin{proof}\ \begin{enumerate}[1.]
\item\label{univ-unif.g'} We will show that there is a mapping that to each name $u$ of a lower semicomputable function $(\mu, x) \mapsto g(\mu, x)$ assigns the name of a lower semicomputable function $g'(\mu, x)$ such that $\mu^{x} g'(\mu,x) \leqslant 1$, and if $g$ is a test then $g'=g$.
To prove the statement, let us represent the space $\mathbf{M}$ rather as
\begin{equation}\label{e.metric-measures-again}
\mathbf{M} = (\mathcal{M}(\mathbf{X}), p, D, \ag_{\mathbf{M}}),
\end{equation} as in~\eqref{e.metric-measures}. Since $g(\mu, x)$ is lower semicomputable, there is a computable sequence of basis elements $U_{i} \sbs \mathbf{M}$ and $V_{i} \sbs \mathbf{X}$ and rational bounds $r_{i}$ such that
\[
g(\mu, x) = \sup_{i}\; r_{i} 1_{U_{i}}(\mu)1_{V_{i}}(x).
\] Let $h_{n}(\mu, x) = \max_{i \leqslant n} r_{i} 1_{U_{i}}(\mu)1_{V_{i}}(x)$. Let us also set $h_{0}(\mu, x) = 0$. Our goal is to show that the condition $\forall \mu\; \mu^{x} h_{n}(\mu, x) \leqslant 1$ is decidable. If this is the case then we will be done. Indeed, we can define $h'_{n}(\mu,x)$ recursively as follows. Let $h'_{0}(\mu, x)=0$. Assume that $h'_{n}(\mu,x)$ has been defined already. If $\forall\mu\; \mu^{x} h_{n+1}(\mu,x) \leqslant 1$ then $h'_{n+1}(\mu,x) = h_{n+1}(\mu,x)$; otherwise, it is $h'_{n}(\mu,x)$. The function $g'(\mu, x) = \sup_{n} h'_{n}(\mu,x)$ clearly satisfies our requirements.
We proceed to prove the decidability of the condition
\begin{equation}\label{e.finite-test-cond}
\forall \mu\; \mu^{x} h_{n}(\mu, x) \leqslant 1.
\end{equation} The basis elements $V_{i}$ can be taken as balls $B(q_{i},\dg_{i})$ for a computable sequence $q_{i} \in D$ and computable sequence of rational numbers $\dg_{i}>0$. Similarly, each basis element $U_{i}$ is a ball $B(\sg_{i}, \eps_{i})$ in the metric space~\eqref{e.metric-measures-again}. Here, using notation~\eqref{e.finite-Prokhorov}, $\sg_{i}$ is a measure concentrated on a finite set $S_{i}$. According to Proposition~\ref{p.simple-Prokhorov-ball}, the ball $U_{i}$ is the set of measures $\mu$ satisfying the inequalities
\[
\mu(A^{\eps_{i}}) > \sg_{i}(A) - \eps_{i}
\] for all $A \sbs S_{i}$. For each $n$, consider the finite set of balls
\[
\mathcal{B}_{n} = \setof{B(q_{i},\dg_{i}) : i \leqslant n} \cup
\setof{B(s, \eps_{i}) : i \leqslant n,\; s \in S_{i}}.
\] Consider all sets of the form
\[
U_{A,B} = \bigcap_{U \in A} U \xcpt \bigcup_{U \in B} U
\] for all pairs of sets $A, B \sbs \mathcal{B}_{n}$. These sets are all finite intersections of balls or complements of balls from the finite set $\mathcal{B}_{n}$ of balls. The space $\mathbf{X}$ has recognizable Boolean inclusions, so it is decidable which of these sets $U_{A,B}$ are nonempty. The condition~\eqref{e.finite-test-cond} can be formulated as a Boolean formula involving linear inequalities with rational coefficients, for the variables $\mu_{A,B}=\mu(U_{A,B})$, for those $A,B$ with $U_{A,B}\ne\emptyset$. The solvability of such a Boolean condition can always be decided.
\item\label{univ-unif.end}
Let us enumerate all lower semicomputable functions $g_{u}(\mu, x)$ for all the names $u$. Without loss of generality, assume these names to be natural numbers, and form the functions $g'_{u}(\mu,x)$ according to the assertion~\ref{univ-unif.g'} above. The function $t = \sum_{u} 2^{-u-1} g'_{u}$ will be the desired universal test. \end{enumerate}
\end{proof}
From now on, when referring to randomness tests, we will always assume that our space $\mathbf{X}$ has recognizable Boolean inclusions and hence has a universal test. We fix a universal test $\mathbf{t}_{\mu}(x)$, and call the function
\[
\mathbf{d}_{\mu}(x) = \log \mathbf{t}_{\mu}(x)
\] the \df{deficiency of randomness} of $x$ with respect to $\mu$. We call an element $x\in X$ \df{random} with respect to $\mu$ if $\mathbf{d}_{\mu}(x) < \infty$.
\begin{remark}\label{r.cond-test} Tests can be generalized to include an arbitrary parameter $y$: we can talk about the universal test
\[
\mathbf{t}_{\mu}(x \mid y),
\] where $y$ comes from some constructive topological space $\mathbf{Y}$. This is a maximal (within a multiplicative constant) lower semicomputable function $(x,y,\mu) \mapsto f(x,y,\mu)$ with the property $\mu^{x} f(x,y,\mu) \leqslant 1$.
\end{remark}
\subsection{Conservation of randomness}
For $i=1,0$, let $\mathbf{X}_{i} = (X_{i}, d_{i}, D_{i}, \ag_{i})$ be computable metric spaces, and let $\mathbf{M}_{i} = (\mathcal{M}(\mathbf{X}_{i}), \sg_{i}, \nu_{i})$ be the effective topological space of probability measures over $\mathbf{X}_{i}$. Let $\Lg$ be a computable probability kernel from $\mathbf{X}_{1}$ to $\mathbf{X}_{0}$ as defined in Subsection~\ref{ss.computable-trans}. In the following theorem, the same notation $\mathbf{d}_{\mu}(x)$ will refer to the deficiency of randomness with respect to two different spaces, $\mathbf{X}_{1}$ and $\mathbf{X}_{0}$, but this should not cause confusion. Let us first spell out the conservation theorem before interpreting it.
\begin{theorem}\label{p.conservation} For a computable probability kernel $\Lg$ from $\mathbf{X}_{1}$ to $\mathbf{X}_{0}$, we have
\begin{equation}\label{e.conservation}
\lg_{x}^{y} \mathbf{t}_{\Lg^{*}\mu}(y) \lem \mathbf{t}_{\mu}(x).
\end{equation}
\end{theorem}
\begin{proof} Let $\mathbf{t}_{\nu}(x)$ be the universal test over $\mathbf{X}_{0}$. The left-hand side of~\eqref{e.conservation} can be written as
\[
u_{\mu} = \Lg \mathbf{t}_{\Lg^{*}\mu}.
\] According to~\eqref{e.Lg-Lg*}, we have $\mu u_{\mu} = (\Lg^{*}\mu) \mathbf{t}_{\Lg^{*}\mu}$ which is $\leqslant 1$ since $\mathbf{t}$ is a test. If we show that $(\mu,x) \mapsto u_{\mu}(x)$ is lower semicomputable then the universality of $\mathbf{t}_{\mu}$ will imply $u_{\mu} \lem \mathbf{t}_{\mu}$.
According to Proposition~\ref{p.lower-semi-as-limit}, as a lower semicomputable function, $\mathbf{t}_{\nu}(y)$ can be written as $\sup_{n} g_{n}(\nu, y)$, where $(g_{n}(\nu, y))$ is a monotone increasing computable sequence of computable functions. We pointed out in Subsection~\ref{ss.computable-trans} that the function $\mu \mapsto \Lg^{*}\mu$ is computable. Therefore the function $(n, \mu, x) \mapsto \lg_{x}^{y} g_{n}(\Lg^{*}\mu, y)$ is also computable. So, $u_{\mu}(x) = \sup_{n} \lg_{x}^{y} g_{n}(\Lg^{*}\mu, y)$ is the supremum of a computable sequence of computable functions and as such, lower semicomputable.
\end{proof}
It is easier to interpret the theorem first in the special case when $\Lg = \Lg_{h}$ for a computable function $h : X_{1} \to X_{0}$, as in Example~\ref{x.computable-determ-trans}. Then the theorem simplifies to the following.
\begin{corollary}\label{c.conservation-determ} For a computable function $h : X_{1} \to X_{0}$, we have $\mathbf{d}_{h^{*}\mu}(h(x)) \lea \mathbf{d}_{\mu}(x)$.
\end{corollary}
Informally, this says that if $x$ is random with respect to $\mu$ in $\mathbf{X}_{1}$ then $h(x)$ is essentially at least as random with respect to the output distribution $h^{*}\mu$ in $\mathbf{X}_{0}$. Decrease in randomness can only be caused by complexity in the definition of the function $h$. It is even easier to interpret the theorem when $\mu$ is defined over a product space $\mathbf{X}_{1}\times \mathbf{X}_{2}$, and $h(x_{1},x_{2}) = x_{1}$ is the projection. The theorem then says, informally, that if the pair $(x_{1},x_{2})$ is random with respect to $\mu$ then $x_{1}$ is random with respect to the marginal $\mu_{1} = h^{*}\mu$ of $\mu$. This is a very natural requirement: why would the throwing-away of the information about $x_{2}$ affect the plausibility of the hypothesis that the outcome $x_{1}$ arose from the distribution $\mu_{1}$?
In the general case of the theorem, concerning random transitions, we cannot bound the randomness of each outcome uniformly. The theorem asserts that the average nonrandomness, as measured by the universal test with respect to the output distribution, does not increase. In logarithmic notation: $\lg_{x}^{y} 2^{\mathbf{d}_{\Lg^{*}\mu}(y)} \lea \mathbf{d}_{\mu}(x)$, or equivalently, $\int 2^{\mathbf{d}_{\Lg^{*}\mu}(y)} \lg_{x}(dy) \lea \mathbf{d}_{\mu}(x)$.
\begin{corollary} Let $\Lg$ be a computable probability kernel from $\mathbf{X}_{1}$ to $\mathbf{X}_{0}$. There is a constant $c$ such that for every $x\in\mathbf{X}_{1}$ and every integer $m > 0$ we have
\[
\lg_{x}\setof{y : \mathbf{d}_{\Lg^{*}\mu}(y) > \mathbf{d}_{\mu}(x) + m + c} \leqslant 2^{-m}.
\]
\end{corollary}
Thus, in a computable random transition, the probability of an increase of randomness deficiency by $m$ units (plus a constant $c$) is less than $2^{-m}$. The constant $c$ comes from the description complexity of the transition $\Lg$.
A randomness conservation result related to Corollary~\ref{c.conservation-determ} was proved in~\cite{HertlingWeihrauchRand98}. There, the measure over the space $\mathbf{X}_{0}$ is not the output measure of the transformation, but is assumed to obey certain inequalities related to the transformation.
\section{Tests and complexity}\label{s.complexity}
\subsection{Description complexity}
\subsubsection{Complexity, semimeasures, algorithmic entropy} Let $X=\Sg^{*}$ for some finite alphabet $\Sg$. For $x \in \Sg^{*}$, let $H(x)$ denote the prefix-free description complexity of the finite sequence $x$ as defined, for example, in~\cite{LiViBook97} (where it is denoted by $K(x)$). For completeness, we give its definition here. Let $A : \{0,1\}^{*} \times \Sg^{*} \to \Sg^{*}$ be a computable (possibly partial) function with the property that if $A(p_{1},y)$ and $A(p_{2},y)$ are defined for two different strings $p_{1}, p_{2}$, then $p_{1}$ is not a prefix of $p_{2}$. Such a function is called a (prefix-free) \df{interpreter}. We denote
\[
H^{A}(x \mid y) = \min_{A(p,y)=x} |p|.
\] One of the most important theorems of description complexity is the following:
\begin{proposition}[Invariance Theorem, see for example~\protect\cite{LiViBook97}] There is an optimal interpreter $T$ with the above property: with it, for every interpreter $A$ there is a constant $c_{A}$ with
\[
H^{T}(x \mid y) \leqslant H^{A}(x \mid y) + c_{A}.
\]
\end{proposition}
We fix an optimal interpreter $T$ and write $H(x \mid y) = H^{T}(x \mid y)$, calling it the conditional complexity of a string $x$ with respect to string $y$. We denote $H(x) = H(x \mid \Lg)$. Let
\[
\mathbf{m}(x) = 2^{-H(x)}.
\] The function $\mathbf{m}(x)$ is lower semicomputable with $\sum_{x} \mathbf{m}(x) \leqslant 1$. Let us call any real function $f(x) \geqslant 0$ over $\Sg^{*}$ with $\sum_{x} f(x) \leqslant 1$ a \df{semimeasure}. The following theorem, known as the Coding Theorem, is an important tool.
\begin{proposition}[Coding Theorem]\label{t.coding} For every lower semicomputable semimeasure $f$ there is a constant $c>0$ with $\mathbf{m}(x) \geqslant c\cdot f(x)$.
\end{proposition}
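The reason $\sum_{x} \mathbf{m}(x) \leqslant 1$ holds is Kraft's inequality: the set of halting programs of a prefix-free interpreter is a prefix-free set of binary strings. A small sketch (function name is ours) that verifies prefix-freeness and computes the Kraft sum:

```python
def kraft_sum(codewords):
    """Kraft sum sum_p 2^{-|p|} of a set of binary codewords; for a
    prefix-free set (such as the domain of an interpreter) the sum is
    at most 1, which is why m(x) = 2^{-H(x)} is a semimeasure."""
    for p in codewords:
        for q in codewords:
            if p != q and q.startswith(p):
                raise ValueError("code is not prefix-free")
    return sum(2.0 ** -len(p) for p in codewords)
```

For instance, the prefix-free code $\{0, 10, 110\}$ has Kraft sum $1/2 + 1/4 + 1/8 = 7/8 \leqslant 1$.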
Because of this theorem, we will say that $\mathbf{m}(x)$ is a \df{universal lower semicomputable semimeasure}. It is possible to turn $\mathbf{m}(x)$ into a measure, by compactifying the discrete space $\Sg^{*}$ into
\[
\ol{\Sg^{*}}=\Sg^{*}\cup\{\infty\}
(as in part~\ref{i.compact.compactify} of Example~\ref{x.compact}; this process makes sense also for a constructive discrete space), and setting $\mathbf{m}(\infty) = 1-\sum_{x\in\Sg^{*}} \mathbf{m}(x)$. The extended measure $\mathbf{m}$ is not quite lower semicomputable since the number $\mathbf{m}(\ol{\Sg^{*}} \xcpt \{0\})$ is not necessarily lower semicomputable.
\begin{remark} A measure $\mu$ is computable over $\ol{\Sg^{*}}$ if and only if the function $x \mapsto \mu(x)$ is computable for $x \in \Sg^{*}$. This property does not imply that the number
\[
1 - \mu(\infty) = \mu(\Sg^{*}) = \sum_{x\in\Sg^{*}} \mu(x)
\]
is computable.
\end{remark}
Let us allow, for a moment, measures $\mu$ that are not probability measures: they may not even be finite. Metric and computability can be extended to this case (see~\cite{Topsoe70}), and the universal test $\mathbf{t}_{\mu}(x)$ can also be generalized. The Coding Theorem and other considerations suggest the introduction of the following notation, for an arbitrary measure $\mu$:
\begin{equation}\label{e.alg-ent}
H_{\mu}(x) = -\mathbf{d}_{\mu}(x) = -\log\mathbf{t}_{\mu}(x).
\end{equation} Then, with $\#$ defined as the counting measure over the discrete set
$\Sg^{*}$ (that is, $\#(S) = |S|$), we have
\[
H(x) \eqa H_{\#}(x).
\] This allows viewing $H_{\mu}(x)$ as a generalization of description complexity: we will call this quantity the \df{algorithmic entropy} of $x$ relative to the measure $\mu$. Generalization to conditional complexity is done using Remark~\ref{r.cond-test}. A reformulation of the definition of tests says that $H_{\mu}(x)$ is minimal (within an additive constant) among the upper semicomputable functions $(\mu, x) \mapsto f_{\mu}(x)$ with $\mu^{x} 2^{-f_{\mu}(x)} \leqslant 1$. The following identity is immediate from the definitions:
\begin{equation}\label{e.Hmu-to-cond}
H_{\mu}(x) = H_{\mu}(x \mid \mu).
\end{equation}
\subsubsection{Computable measures and complexity} It is known that for computable $\mu$, the test $\mathbf{d}_{\mu}(x)$ can be expressed in terms of the description complexity of $x$ (we will prove these expressions below). Assume that $\mathbf{X}$ is the (discrete) space of all binary strings. Then we have \begin{equation}\label{e.test-charac-fin-cpt}
\mathbf{d}_{\mu}(x) = -\log \mu(x) - H(x) + O(H(\mu)). \end{equation} The meaning of this equation is the following. Due to the maximality property of the semimeasure $\mathbf{m}$ that follows from the Coding Theorem~\ref{t.coding} above, the expression $-\log\mu(x)$ is an upper bound (within $O(H(\mu))$) of the complexity $H(x)$, and the nonrandomness of $x$ is measured by the difference between the complexity and this upper bound. See~\cite{ZvLe70} for a first formulation of this general upper bound relation. As a simple example, consider the uniform distribution $\mu$ over the set of binary sequences of length $n$. Conditioning everything on $n$, we obtain
\[
\mathbf{d}_{\mu}(x\mid n) \eqa n - H(x\mid n),
\] that is, the more the description complexity $H(x\mid n)$ of a binary sequence of length $n$ differs from its upper bound $n$, the less random $x$ is.
Assume that $\mathbf{X}$ is the space of infinite binary sequences. Then equation~\eqref{e.test-charac-fin-cpt} must be replaced with \begin{equation}\label{e.test-charac-infin-cpt}
\mathbf{d}_{\mu}(x) =
\sup_{n}\Paren{-\log \mu(x^{\leqslant n}) - H(x^{\leqslant n})} + O(H(\mu)). \end{equation} For the coin-tossing distribution $\mu$, this characterization was first proved by Schnorr, and published in~\cite{Chaitin75}.
\begin{remark}\label{r.monot-compl} It is possible to obtain similar natural characterizations of randomness, using some other natural definitions of description complexity. A universal semicomputable semimeasure $\mathbf{m}_{\Og}$ over the set $\Og$ of infinite sequences was introduced, and a complexity $\KM(x) = -\log\mathbf{m}_{\Og}(x)$ defined in~\cite{ZvLe70}. A so-called ``monotonic complexity'', $\operatorname{Km}(x)$ was introduced, using Turing machines with one-way input and output, in~\cite{LevinRand73}, and a closely related quantity called ``process complexity'' was introduced in~\cite{Schnorr73}. These quantities can also be used in a characterization of randomness similar to~\eqref{e.test-charac-fin-cpt}. The nontrivial fact that the complexities $\KM$ and $\operatorname{Km}$ differ by an unbounded amount was shown in~\cite{GacsRel83}.
\end{remark}
For noncomputable measures, we cannot replace $O(H(\mu))$ in these relations with anything finite, as shown in the following example. Therefore, however attractive and simple, $\exp(-\log \mu(x) - H(x))$ is not a universal uniform test of randomness.
\begin{proposition}\label{p.test-charac-counterexample} There is a measure $\mu$ over the discrete space $\mathbf{X}$ of binary strings such that for each $n$, there is an $x$ with $\mathbf{d}_{\mu}(x) = n - H(n)$ and $-\log \mu(x) - H(x) \lea 0$.
\end{proposition}
\begin{proof} Let us treat the domain of our measure $\mu$ as a set of pairs $(x,y)$. Let $x_{n} = 0^{n}$, for $n=1,2,\dotsc$. For each $n$, let $y_{n}$ be some binary string of length $n$ with the property $H(x_{n}, y_{n}) > n$. Let $\mu(x_{n},y_{n})=2^{-n}$. Then $- \log \mu(x_{n},y_{n}) - H(x_{n},y_{n}) \leqslant n - n = 0$. On the other hand, let $t_{\mu}(x,y)$ be the test that is nonzero only on strings $x$ of the form $x_{n}$:
\[
t_{\mu}(x_{n}, y) = \frac{\mathbf{m}(n)}{\sum_{z \in \mathbb{B}^{n}} \mu(x_{n}, z)}.
\] The form of the definition ensures lower semicomputability, and we also have
\[
\sum_{x,y} \mu(x,y) t_{\mu}(x,y) \leqslant \sum_{n} \mathbf{m}(n) < 1,
\] therefore $t_{\mu}$ is indeed a test. Hence $\mathbf{t}_{\mu}(x,y) \gem t_{\mu}(x,y)$. Taking logarithms, $\mathbf{d}_{\mu}(x_{n}, y_{n}) \gea n - H(n)$.
\end{proof}
The same example implies that it is also not an option, even over discrete
sets, to replace the definition of uniform tests with the \emph{ad hoc}
formula $\exp(-\log\mu(x) - H(x))$:
\begin{proposition} The test defined as $f_{\mu}(x) = \exp(-\log\mu(x) - H(x))$ over discrete spaces $\mathbf{X}$ does not obey the conservation of randomness.
\end{proposition}
\begin{proof} Let us use the example of Proposition~\ref{p.test-charac-counterexample}. Consider the function $\pi : (x,y) \mapsto x$. The image of the measure $\mu$ under the projection is $(\pi\mu)(x) = \sum_{y} \mu(x,y)$. Thus, $(\pi\mu)(x_{n}) = \mu(x_{n},y_{n}) = 2^{-n}$. We have seen $\log f_{\mu}(x_{n},y_{n}) \leqslant 0$. On the other hand,
\[
\log f_{\pi\mu} (\pi(x_{n},y_{n})) = -\log(\pi\mu)(x_{n}) - H(x_{n})
\eqa n - H(n).
\] Thus, the projection $\pi$ takes a random pair $(x_{n},y_{n})$ into an object $x_{n}$ that is very nonrandom (when randomness is measured using the tests $f_{\mu}$).
\end{proof} In the example, we have the abnormal situation that a pair is random but one of its elements is nonrandom. Therefore, even if we did not insist on universality, the test $\exp(-\log\mu(x) - H(x))$ would be unsatisfactory.
Looking into the reasons for the nonconservation in the example, we notice that it could only have happened because the test $f_{\mu}$ is too special. The fact that $-\log (\pi\mu)(x_{n}) - H(x_{n})$ is large should show that the pair $(x_{n},y_{n})$ can be enclosed into the ``simple'' set $\{x_{n}\} \times \mathbf{Y}$ of small probability; unfortunately, this observation is not reflected in $-\log\mu(x,y) - H(x,y)$ when the measure $\mu$ is not computable (it is for computable $\mu$).
\subsubsection{Expressing the uniform test in terms of complexity} It is a natural idea to modify equation~\eqref{e.test-charac-fin-cpt} in such a way that the complexity $H(x)$ is replaced with $H(x \mid \mu)$. However, this expression must be understood properly. The measure $\mu$ (especially, when it is not computable) cannot be described by a finite string; on the other hand, it can be described by infinite strings in many different ways. Clearly, irrelevant information in these infinite strings should be ignored. The notion of representation in computable analysis (see Subsection~\ref{ss.notation-repr}) will solve the problem. An interpreter function should have the property that its output depends only on $\mu$ and not on the sequence representing it. Recall the topological space $\mathbf{M}$ of probability measures over our space $\mathbf{X}$. An interpreter $A : \{0,1\}^{*} \times \mathbf{M} \to \Sg^{*}$ is a computable function that is prefix-free in its first argument. The complexity
\[
H(x \mid \mu)
\] can now be defined in terms of such interpreters, noting that the Invariance Theorem holds as before. To define this complexity in terms of representations, let $\gm_{\mathbf{M}}$ be our chosen representation for the space $\mathbf{M}$ (thus, each measure $\mu$ is represented via all of its Cauchy sequences in the Prokhorov distance). Then we can say that $A$ is an interpreter if it is $(\operatorname{id}, \gm_{\mathbf{M}}, \operatorname{id})$-computable, that is, a certain computable function $B : \{0,1\}^{*} \times \Sg^{\og} \to \Sg^{*}$ realizes $A$: for every $p\in\{0,1\}^{*}$ and every sequence $z$ that is a $\gm_{\mathbf{M}}$-name of a measure $\mu$, we have $B(p, z) = A(p, \mu)$.
\begin{remark} The notion of oracle computation and reducibility in the new sense (where the result is required to be independent of which representation of an object is used) may be worth investigating in other settings as well.
\end{remark}
Let us mention the following easy fact:
\begin{proposition}\label{p.H/mu-computable} If $\mu$ is a computable measure then $H(x \mid \mu) \eqa H(x)$. The constant in $\eqa$ depends on the description complexity of $\mu$.
\end{proposition}
\begin{theorem}\label{t.test-charac-discr} If $\mathbf{X}$ is the discrete space $\Sg^{*}$ then we have
\begin{equation}\label{e.test-charac-discr}
\mathbf{d}_{\mu}(x) \eqa -\log\mu(x) - H(x \mid \mu).
\end{equation}
\end{theorem}
Note that in terms of the algorithmic entropy notation introduced in~\eqref{e.alg-ent}, this theorem can be expressed as
\begin{equation}\label{e.alg-entr-charac}
H_{\mu}(x) \eqa H(x \mid \mu) + \log\mu(x).
\end{equation}
\begin{proof} In exponential notation, equation~\eqref{e.test-charac-discr} can be written as $\mathbf{t}_{\mu}(x) \eqm \mathbf{m}(x \mid \mu)/\mu(x)$. Let us prove $\gem$ first. We will show that the right-hand side of this inequality is a test, and hence $\lem \mathbf{t}_{\mu}(x)$. Indeed, the right-hand side is clearly lower semicomputable in $(x, \mu)$, and when we ``integrate'' it (multiply it by $\mu(x)$ and sum it), its sum is $\leqslant 1$; thus, it is a test.
Let us prove $\lem$ now. The expression $\mathbf{t}_{\mu}(x)\mu(x)$ is clearly lower semicomputable in $(x,\mu)$, and its sum is $\leqslant 1$. Hence, it is $\lea \mathbf{m}(x \mid \mu)$.
\end{proof}
\begin{remark} As mentioned earlier, our theory generalizes to measures that are not probability measures. In this case, equation~\eqref{e.alg-entr-charac} has interesting relations to the quantity called ``physical entropy'' by Zurek in~\cite{ZurekPhR89}; it justifies calling $H_{\mu}(x)$ ``fine-grained algorithmic Boltzmann entropy'' by this author in~\cite{GacsBoltzmann94}.
\end{remark}
For non-discrete spaces, unfortunately, we can only provide less intuitive expressions.
\begin{proposition}\label{t.test-charac} Let $\mathbf{X}=(X, d, D, \ag)$ be a complete computable metric space, and let $\mathcal{E}$ be the enumerated set of bounded Lip\-schitz functions introduced in~\eqref{e.bd-Lip-seq}, but for the space $\mathbf{M}(\mathbf{X}) \times \mathbf{X}$. The uniform test of randomness $\mathbf{t}_{\mu}(x)$ can be expressed as
\begin{equation}\label{e.test-charac}
\mathbf{t}_{\mu}(x) \eqm
\sum_{f \in \mathcal{E}}f(\mu,x)\frac{\mathbf{m}(f \mid \mu)}{\mu^{y} f(\mu, y)}.
\end{equation}
\end{proposition}
\begin{proof} For $\gem$, we will show that the right-hand side of the inequality is a test, and hence $\lem \mathbf{t}_{\mu}(x)$. For simplicity, we skip the notation for the enumeration of $\mathcal{E}$ and treat each element $f$ as its own name. Each term of the sum is clearly lower semicomputable in $(f, x, \mu)$, hence the sum is lower semicomputable in $(x, \mu)$. It remains to show that the $\mu$-integral of the sum is $\leqslant 1$. But the $\mu$-integral of the generic term is $\leqslant \mathbf{m}(f \mid \mu)$, and the sum of these terms is $\leqslant 1$ by the definition of the function $\mathbf{m}(\cdot \mid \cdot)$. Thus, the sum is a test.
For $\lem$, note that $(\mu,x) \mapsto \mathbf{t}_{\mu}(x)$, as a lower semicomputable function, is the supremum of functions in $\mathcal{E}$. Denoting their differences by $f_{i}(\mu,x)$, we have $\mathbf{t}_{\mu}(x) = \sum_{i} f_{i}(\mu,x)$. The test property implies $\sum_{i} \mu^{x} f_{i}(\mu,x) \leqslant 1$. Since the function $(\mu,i) \mapsto \mu^{x} f_{i}(\mu,x)$ is lower semicomputable, this implies $\mu^{x} f_{i}(\mu,x) \lem \mathbf{m}(i \mid \mu)$, and hence
\[
f_{i}(\mu,x) \lem f_{i}(\mu,x) \frac{\mathbf{m}(i \mid \mu)}{\mu^{x} f_{i}(\mu,x)}.
\] It is easy to see that for each $f\in\mathcal{E}$ we have
\[
\sum_{i : f_{i} = f} \mathbf{m}(i \mid \mu) \leqslant \mathbf{m}(f \mid \mu),
\] which leads to~\eqref{e.test-charac}.
\end{proof}
\begin{remark}\label{r.test-charac-lb} If we only want the $\gem$ part of the result, then $\mathcal{E}$ can be replaced with any enumerated computable sequence of bounded computable functions.
\end{remark}
\subsection{Infinite sequences}
In this subsection, we derive a nicer characterization of randomness tests in terms of complexity, in special cases. Let $\mathcal{M}_{R}(X)$ be the set of measures $\mu$ with $\mu(X)=R$.
\begin{theorem}\label{t.defic-charac-cpt-seqs} Let $\mathbf{X}=\mathbb{N}^{\og}$ be the set of infinite sequences of natural numbers, with the product topology. For all computable measures $\mu\in\mathcal{M}_{R}(X)$, for the deficiency of randomness $\mathbf{d}_{\mu}(x)$, we have
\begin{equation}\label{e.defic-charac-seq}
\mathbf{d}_{\mu}(x) \eqa \sup_{n}\Paren{-\log \mu(x^{\leqslant n}) - H(x^{\leqslant n})}.
\end{equation} Here, the constant in $\eqa$ depends on the computable measure $\mu$.
\end{theorem}
We will be able to prove the $\gea$ part of the statement in a more general space, and without assuming computability. Assume that a separating sequence $b_{1},b_{2},\dots$ is given as defined in Subsection~\ref{ss.cells}, along with the set $X^{0}$. For each $x \in X^{0}$, the binary sequence $x_{1},x_{2},\dots$ has been defined. Let
\begin{align*}
\ol\mu(\Gg_{s}) &= R - \sum\setof{\mu(\Gg_{s'}) : l(s)=l(s'),\;s'\ne s}.
\end{align*} Then $(s,\mu)\mapsto \mu(\Gg_{s})$ is lower semicomputable, and $(s,\mu)\mapsto \ol\mu(\Gg_{s})$ is upper semicomputable. Moreover, whenever the functions $b_{i}(x)$ form a regular partition for $\mu$, we have $\ol\mu(\Gg_{s})=\mu(\Gg_{s})$ for all $s$. Let $\mathcal{M}_{R}^{0}(X)$ be the set of those measures $\mu$ in $\mathcal{M}_{R}(X)$ for which $\mu(X \xcpt X^{0})=0$.
\begin{theorem}\label{t.defic-charac-compact} Suppose that the space $\mathbf{X}$ is compact. Then for all computable measures $\mu\in\mathcal{M}_{R}^{0}(\mathbf{X})$, for the deficiency of randomness $\mathbf{d}_{\mu}(x)$, the characterization~\eqref{e.defic-charac-seq} holds.
\end{theorem}
For arbitrary measures and spaces, we can say a little less:
\begin{proposition}\label{p.test-charac-seq-lb} For all measures $\mu\in\mathcal{M}_{R}(X)$, for the deficiency of randomness $\mathbf{d}_{\mu}(x)$, we have
\begin{equation}\label{e.defic-ineq-seq}
\mathbf{d}_{\mu}(x) \gea \sup_{n}\Paren{-\log \ol\mu(x^{\leqslant n}) - H(x^{\leqslant n} \mid \mu)}.
\end{equation}
\end{proposition}
\begin{proof}
Consider the function
\[
f_{\mu}(x) = \sum_{s} 1_{\Gg_{s}}(x) \frac{\mathbf{m}(s \mid \mu)}{\ol\mu(\Gg_{s})}
= \sum_{n} \frac{\mathbf{m}(x^{\leqslant n} \mid \mu)}{\ol\mu(x^{\leqslant n})}
\geqslant \sup_{n} \frac{\mathbf{m}(x^{\leqslant n} \mid \mu)}{\ol\mu(x^{\leqslant n})}.
\] The function $(\mu,x) \mapsto f_{\mu}(x)$ is clearly lower semicomputable and satisfies $\mu^{x} f_{\mu}(x) \leqslant 1$, and hence
\[
\mathbf{d}_{\mu}(x) \gea \log f_{\mu}(x) \gea
\sup_{n}\Paren{-\log\ol\mu(x^{\leqslant n}) - H(x^{\leqslant n} \mid \mu)}.
\]
\end{proof}
\begin{proof}[Proof of Theorem~\protect\ref{t.defic-charac-cpt-seqs}] For binary sequences instead of sequences of natural numbers, the part $\gea$ of the inequality follows directly from Proposition~\ref{p.test-charac-seq-lb}: indeed, see Examples~\ref{x.cells}. For sequences of natural numbers, the proof is completely analogous.
The proof of $\lea$ reproduces the proof of Theorem 5.2 of~\cite{GacsExact80}. The computability of $\mu$ implies that $t(x)=\mathbf{t}_{\mu}(x)$ is lower semicomputable. Let us first replace $t(x)$ with a rougher version:
\[
t'(x) = \max \setof{2^{n} : 2^{n} < \mathbf{t}_{\mu}(x)}.
\] Then $t'(x) \eqm t(x)$, and it takes only values of the form $2^{n}$. It is also lower semicomputable. Let us abbreviate:
\[
1_{y}(x) = 1_{y\mathbb{N}^{\og}}(x),\quad \mu(y) = \mu(y\mathbb{N}^{\og}).
\] For every lower semicomputable function $f$ over $\mathbb{N}^{\og}$, there are computable sequences $y_{i}\in\mathbb{N}^{*}$ and $r_{i}\in\mathbb{Q}$ with $f(x) = \sup_{i} r_{i} 1_{y_{i}}(x)$, with the additional property that if $i<j$ and $1_{y_{i}}(x)=1_{y_{j}}(x)=1$ then $r_{i}<r_{j}$. Since $t'(x)$ only takes values of the form $2^{n}$, there are computable sequences $y_{i} \in \mathbb{N}^{*}$ and $k_{i} \in \mathbb{N}$ with
\[
t'(x) = \sup_{i}\; 2^{k_{i}} 1_{y_{i}}(x)
\eqm \sum_{i} 2^{k_{i}} 1_{y_{i}}(x),
\] with the property that if $i<j$ and $1_{y_{i}}(x)=1_{y_{j}}(x)=1$ then $k_{i}<k_{j}$. The equality $\eqm$ follows easily from the fact that for any finite increasing sequence $n_{1}<n_{2}<\dots$, we have $\sum_{j} 2^{n_{j}} \leqslant 2 \max_{j} 2^{n_{j}}$. Since $\mu t' \lem 1$, we have $\sum_{i} 2^{k_{i}}\mu(y_{i}) \lem 1$. Since the function $i \mapsto 2^{k_{i}}\mu(y_{i})$ is computable, this implies $2^{k_{i}}\mu(y_{i}) \lem \mathbf{m}(i)$, that is $2^{k_{i}} \lem \mathbf{m}(i)/\mu(y_{i})$. Thus,
\[
t(x) \lem \sup_{i}\; 1_{y_{i}}(x)\frac{\mathbf{m}(i)}{\mu(y_{i})}.
\] For $y \in \mathbb{N}^{*}$ we certainly have $H(y) \lea \inf_{i : y_{i} = y} H(i)$, which implies $\sup_{i : y_{i} = y} \mathbf{m}(i) \lem \mathbf{m}(y)$. It follows that
\[
t(x) \lem \sup_{y \in \mathbb{N}^{*}}\; 1_{y}(x)\frac{\mathbf{m}(y)}{\mu(y)}
= \sup_{n} \frac{\mathbf{m}(x^{\leqslant n})}{\mu(x^{\leqslant n})}.
\] Taking logarithms, we obtain the $\lea$ part of the theorem.
\end{proof}
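The dyadic bound invoked in the proof above, $\sum_{j} 2^{n_{j}} \leqslant 2\max_{j} 2^{n_{j}}$ for increasing integers $n_{1}<n_{2}<\dots$, is just the geometric-series estimate $\sum_{j} 2^{n_{j}} \leqslant 2^{n_{k}+1}$. A quick numerical spot-check (a minimal sketch; the sampling parameters are arbitrary):

```python
import random

def dyadic_bound_holds(ns):
    """Check sum_j 2^{n_j} <= 2 * max_j 2^{n_j} for increasing integers ns."""
    return sum(2 ** n for n in ns) <= 2 * 2 ** max(ns)

random.seed(0)
for _ in range(100):
    k = random.randint(1, 10)
    ns = sorted(random.sample(range(0, 40), k))  # strictly increasing n_1 < ... < n_k
    assert dyadic_bound_holds(ns)
```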
\begin{proof}[Proof of Theorem~\protect\ref{t.defic-charac-compact}] The proof of part $\gea$ of the inequality follows directly from Proposition~\ref{p.test-charac-seq-lb}, just as in the proof of Theorem~\ref{t.defic-charac-cpt-seqs}.
The proof of $\lea$ is also similar to the proof of that theorem. The only part that needs to be reproved is the statement that for every lower semicomputable function $f$ over $X$, there are computable sequences $y_{i}\in\mathbb{B}^{*}$ and $q_{i}\in\mathbb{Q}$ with $f(x) = \sup_{i} q_{i} 1_{y_{i}}(x)$. This follows now, since according to Proposition~\ref{p.compact-cell-basis}, the cells $\Gg_{y}$ form a basis of the space $\mathbf{X}$.
\end{proof}
\section{Neutral measure}\label{s.neutral}
Let $\mathbf{t}_{\mu}(x)$ be our universal uniform randomness test. We call a measure $M$ \df{neutral} if $\mathbf{t}_{M}(x) \leqslant 1$ for all $x$. If $M$ is neutral then no experimental outcome $x$ could refute the theory (hypothesis, model) that $M$ is the underlying measure of our experiments. It can be used as an ``a priori probability'', in a Bayesian approach to statistics. Levin's theorem says the following:
\begin{theorem}\label{t.neutral-meas} If the space $\mathbf{X}$ is compact then there is a neutral measure over $\mathbf{X}$.
\end{theorem}
The proof relies on a nontrivial combinatorial fact, Sperner's Lemma, which also underlies the proof of the Brouwer fixpoint theorem. Here is a version of Sperner's Lemma, spelled out in continuous form:
\begin{proposition}[see for
example~\protect\cite{SpanierAlgTop71}]\label{p.Sperner} Let $p_{1},\dots,p_{k}$ be points of some finite-dimensional space $\mathbb{R}^{n}$. Suppose that there are closed sets $F_{1},\dots,F_{k}$ with the property that for every subset $1 \leqslant i_{1} < \dots < i_{j} \leqslant k$ of the indices, the simplex $S(p_{i_{1}},\dots,p_{i_{j}})$ spanned by $p_{i_{1}},\dots,p_{i_{j}}$ is covered by the union $F_{i_{1}} \cup \dots \cup F_{i_{j}}$. Then the intersection $\bigcap_{i} F_{i}$ of all these sets is not empty.
\end{proposition}
The following lemma will also be needed.
\begin{lemma}\label{l.concentr} For every closed set $A \sbs \mathbf{X}$ and measure $\mu$, if $\mu(A)=1$ then there is a point $x\in A$ with $\mathbf{t}_{\mu}(x) \leqslant 1$.
\end{lemma}
\begin{proof} This follows easily from $\mu\, t_{\mu} = \mu^{x} 1_{A}(x)t_{\mu}(x) \leqslant 1$.
\end{proof}
\begin{proof}[Proof of Theorem~\protect\ref{t.neutral-meas}] For every point $x \in \mathbf{X}$, let $F_{x}$ be the set of measures for which $\mathbf{t}_{\mu}(x) \leqslant 1$. If we show that for every finite set of points $x_{1},\dots,x_{k}$, we have
\begin{equation}\label{e.finite-inters}
F_{x_{1}}\cap\dots\cap F_{x_{k}} \ne \emptyset,
\end{equation} then we will be done. Indeed, according to Proposition~\ref{p.measures-compact}, the compactness of $\mathbf{X}$ implies the compactness of the space $\mathcal{M}(\mathbf{X})$ of measures. Therefore if every finite subset of the family $\setof{F_{x} : x \in \mathbf{X}}$ of closed sets has a nonempty intersection, then the whole family has a nonempty intersection: this intersection consists of the neutral measures.
To show~\eqref{e.finite-inters}, let $S(x_{1},\dots,x_{k})$ be the set of probability measures concentrated on $x_{1},\dots,x_{k}$. Lemma~\ref{l.concentr} implies that each such measure belongs to one of the sets $F_{x_{i}}$. Hence $S(x_{1},\dots,x_{k}) \sbs F_{x_{1}} \cup \dots \cup F_{x_{k}}$, and the same holds for every subset of the indices $\{1,\dots,k\}$. Sperner's Lemma~\ref{p.Sperner} implies $F_{x_{1}} \cap \dots \cap F_{x_{k}} \ne\emptyset$.
\end{proof}
When the space is not compact, there are generally no neutral probability measures, as shown by the following example.
\begin{proposition}\label{t.no-neutral} Over the discrete space $\mathbf{X} = \mathbb{N}$ of natural numbers, there is no neutral measure.
\end{proposition}
\begin{proof} It is sufficient to construct a randomness test $t_{\mu}(x)$ with the property that for every measure $\mu$, we have $\sup_{x} t_{\mu}(x) = \infty$. Let
\begin{equation}\label{e.no-neutral}
t_{\mu}(x) = \sup\setof{ k \in \mathbb{N}: \sum_{y<x}\mu(y) > 1-2^{-k}}.
\end{equation} By its construction, this is a lower semicomputable function with $\sup_{x} t_{\mu}(x) = \infty$. It is a test if $\sum_{x}\mu(x)t_{\mu}(x) \leqslant 1$. We have
\[
\sum_{x} \mu(x) t_{\mu}(x) = \sum_{k>0} \sum_{t_{\mu}(x) \geqslant k} \mu(x) < \sum_{k>0} 2^{-k} \leqslant 1.
\]
\end{proof}
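For a concrete computable measure, the test~\eqref{e.no-neutral} is easy to evaluate. In the sketch below (the geometric measure is an invented example), $\mu(y)=2^{-(y+1)}$ gives $\sum_{y<x}\mu(y)=1-2^{-x}$, hence $t_{\mu}(x)=x-1$, which is unbounded, while the test sum $\sum_{x}\mu(x)t_{\mu}(x)$ stays below 1:

```python
from fractions import Fraction

def mu(y):
    """Geometric probability measure on N (illustrative example)."""
    return Fraction(1, 2 ** (y + 1))

def t(x, kmax=100):
    """t_mu(x) = sup{k : sum_{y<x} mu(y) > 1 - 2^{-k}}  (k searched up to kmax)."""
    cdf = sum(mu(y) for y in range(x))          # equals 1 - 2^{-x} exactly
    ks = [k for k in range(kmax) if cdf > 1 - Fraction(1, 2 ** k)]
    return max(ks) if ks else 0

# Here t(x) = x - 1, so sup_x t(x) = infinity,
# while the "integral" sum_x mu(x) t(x) stays <= 1 (truncated at x < 60).
test_sum = sum(mu(x) * t(x) for x in range(60))
```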
Using a similar construction over the space $\mathbb{N}^{\og}$ of infinite sequences of natural numbers, we could show that for every measure $\mu$ there is a sequence $x$ with $\mathbf{t}_{\mu}(x)=\infty$.
Proposition~\ref{t.no-neutral} is a little misleading, since as a locally compact set, $\mathbb{N}$ can be compactified into $\ol\mathbb{N} = \mathbb{N} \cup \{\infty\}$ (as in Part~\ref{i.compact.compactify} of Example~\ref{x.compact}). Theorem~\ref{t.neutral-meas} implies that there is a neutral probability measure $M$ over the compactified space $\ol\mathbb{N}$. Its restriction to $\mathbb{N}$ is, of course, not a probability measure, since it satisfies only $\sum_{x < \infty} M(x) \leqslant 1$. We called these functions \df{semimeasures}.
\begin{remark}\label{r.compactify}\
\begin{enumerate}[\upshape 1.]
\item
It is easy to see that Theorem~\ref{t.test-charac-discr} characterizing randomness in terms of complexity holds also for the space $\ol\mathbb{N}$.
\item The topological space of semimeasures over $\mathbb{N}$ is not compact, and there is no neutral one among them. Its topology is not the same as what we get when we restrict the topology of probability measures over $\ol\mathbb{N}$ to $\mathbb{N}$. The difference is that over $\mathbb{N}$, for example the set of measures $\setof{\mu : \mu(\mathbb{N}) \geqslant 1/2}$ is closed, since $\mathbb{N}$ (as the whole space) is a closed set. But over $\ol\mathbb{N}$, this set is not closed.
\end{enumerate}
\end{remark}
Neutral measures are not too simple, even over $\ol\mathbb{N}$, as the following theorem shows.
\begin{theorem}\label{t.no-upper-semi-neutral} There is no neutral measure over $\ol\mathbb{N}$ that is upper semicomputable over $\mathbb{N}$ or lower semicomputable over $\mathbb{N}$.
\end{theorem}
\begin{proof} Let us assume that $\nu$ is a measure that is upper semicomputable over $\mathbb{N}$. Then the set
\[
\setof{(x,r) : x \in\mathbb{N},\; r\in\mathbb{Q},\; \nu(x) < r}
is recursively enumerable: let $(x_{i},r_{i})$ be a particular enumeration. For each $n$, let $i(n)$ be the first $i$ with $r_{i} < 2^{-n}$, and let $y_{n} = x_{i(n)}$. Then $\nu(y_{n}) < 2^{-n}$, and at the same time $H(y_{n}) \lea H(n)$. As mentioned in Remark~\ref{r.compactify}, Theorem~\ref{t.test-charac-discr} characterizing randomness in terms of complexity holds also for the space $\ol\mathbb{N}$. Thus,
\[
\mathbf{d}_{\nu}(y_{n}) \eqa -\log\nu(y_{n}) - H(y_{n} \mid \nu) \gea n - H(n).
\] Hence $\nu$ is not neutral. Suppose now that $\nu$ is lower semicomputable over $\mathbb{N}$. The proof for this case is longer. We know that $\nu$ is the monotonic limit of a recursive sequence $i\mapsto \nu_{i}(x)$ of recursive semimeasures with rational values $\nu_{i}(x)$. For every $k=0,\dots,2^{n}-2$, let
\begin{align*}
V_{n,k} &= \setof{\mu \in \mathcal{M}(\ol\mathbb{N}) : k\cdot 2^{-n} < \mu(\{0,\dots,2^{n}-1\}) < (k+2)\cdot 2^{-n}}, \\ J &= \setof{(n,k): k\cdot 2^{-n} < \nu(\{0,\dots,2^{n}-1\})}.
\end{align*} The set $J$ is recursively enumerable. Let us define the functions $j:J\to\mathbb{N}$ and $x:J\to\{0,\dots,2^{n}-1\}$ as follows: $j(n,k)$ is the smallest $i$ with $\nu_{i}(\{0,\dots,2^{n}-1\}) > k\cdot 2^{-n}$, and
\[
x_{n,k} = \min\setof{y < 2^{n}: \nu_{j(n,k)}(y) < 2^{-n+1}}.
\] Let us define the function $f_{\mu}(x,n,k)$ as follows. We set $f_{\mu}(x,n,k)=2^{n-2}$ if the following conditions hold:
\begin{enumerate}[(a)]
\item\label{i.no-upper-semi-neutral.mu-global} $\mu \in V_{n,k}$;
\item\label{i.no-upper-semi-neutral.mu-upper} $\mu(x) < 2^{-n+2}$;
\item\label{i.no-upper-semi-neutral.unique}
$(n,k) \in J$ and $x=x_{n,k}$.
\end{enumerate} Otherwise, $f_{\mu}(x,n,k)=0$. Clearly, the function $(\mu,x,n,k) \mapsto f_{\mu}(x,n,k)$ is lower semicomputable. Condition~\eqref{i.no-upper-semi-neutral.mu-upper} implies
\begin{equation}\label{e.no-upper-semi-neutral.n-k-test}
\sum_{y} \mu(y) f_{\mu}(y,n,k) \leqslant
\mu(x_{n,k})f_{\mu}(x_{n,k},n,k) < 2^{-n+2}\cdot 2^{n-2} = 1.
\end{equation} Let us show that $\nu \in V_{n,k}$ implies
\begin{equation}\label{e.found-bad}
f_{\nu}(x_{n,k},n,k) = 2^{n-2}.
\end{equation} Consider $x=x_{n,k}$. Conditions~\eqref{i.no-upper-semi-neutral.mu-global} and~\eqref{i.no-upper-semi-neutral.unique} are satisfied by definition. Let us show that condition~\eqref{i.no-upper-semi-neutral.mu-upper} is also satisfied. Let $j=j(n,k)$. By definition, we have $\nu_{j}(x) < 2^{-n+1}$. Since by definition $\nu_{j}\in V_{n,k}$ and $\nu_{j} \leqslant \nu \in V_{n,k}$, we have
\[
\nu(x) \leqslant \nu_{j}(x) + 2^{-n+1} < 2^{-n+1} + 2^{-n+1} = 2^{-n+2}.
\] Since all three conditions~\eqref{i.no-upper-semi-neutral.mu-global}, \eqref{i.no-upper-semi-neutral.mu-upper} and~\eqref{i.no-upper-semi-neutral.unique} are satisfied, we have shown~\eqref{e.found-bad}. Now we define
\[
g_{\mu}(x) = \sum_{n\geqslant 2}\frac{1}{n(n+1)}\sum_{k}f_{\mu}(x,n,k).
\] Let us prove that $g_{\mu}(x)$ is a uniform test. It is lower semicomputable by definition, so we only need to prove
$\sum_{x} \mu(x) g_{\mu}(x) \leqslant 1$. For this, let $I_{n,\mu} = \setof{k: \mu\in V_{n,k}}$. Clearly by definition, $|I_{n,\mu}|\leqslant 2$. We have, using this last fact and the test property~\eqref{e.no-upper-semi-neutral.n-k-test}:
\[
\sum_{x} \mu(x) g_{\mu}(x) =
\sum_{n\geqslant 2}\frac{1}{n(n+1)}
\sum_{k\in I_{n,\mu}} \sum_{x}\mu(x) f_{\mu}(x,n,k)
\leqslant \sum_{n\geqslant 2}\frac{1}{n(n+1)}\cdot 2 \leqslant 1.
\] Thus, $g_{\mu}(x)$ is a uniform test. If $\nu\in V_{n,k}$ then we have
\[
\mathbf{t}_{\nu}(x_{n,k}) \gem g_{\nu}(x_{n,k}) \geqslant \frac{1}{n(n+1)}f_{\nu}(x_{n,k},n,k) \geqslant
\frac{2^{n-2}}{n(n+1)}.
\] Hence $\nu$ is not neutral.
\end{proof}
\begin{remark} In~\cite{LevinUnif76} and~\cite{LevinRandCons84}, Levin imposed extra conditions on tests which make it possible to find a lower semicomputable neutral semimeasure. A typical (doubtless reasonable) consequence of these conditions would be that if outcome $x$ is random with respect to measures $\mu$ and $\nu$ then it is also random with respect to $(\mu+\nu)/2$.
\end{remark}
\begin{remark} The universal lower semicomputable semimeasure $\mathbf{m}(x)$ has a certain property similar to neutrality. According to Theorem~\ref{t.test-charac-discr}, for every computable measure $\mu$ we have $\mathbf{d}_{\mu}(x) \eqa -\log\mu(x) - H(x)$ (where the constant in $\eqa$ depends on $\mu$). So, for computable measures, the expression
\begin{equation}\label{e.ol-d}
\ol\mathbf{d}_{\mu}(x) = -\log\mu(x) - H(x)
\end{equation} can serve as a reasonable deficiency of randomness. (We will also use the test $\ol{\mathbf{t}} = 2^{\ol{\mathbf{d}}}$.) If we substitute $\mathbf{m}$ for $\mu$ in $\ol\mathbf{d}_{\mu}(x)$, we get 0. This substitution is not justified, of course. The fact that $\mathbf{m}$ is not a probability measure can be remedied, at least over $\mathbb{N}$, using compactification as above, and extending the notion of randomness tests. But the test $\ol\mathbf{d}_{\mu}$ can replace $\mathbf{d}_{\mu}$ only for computable $\mu$, while $\mathbf{m}$ is not computable. Anyway, this is the sense in which all outcomes might be considered random with respect to $\mathbf{m}$, and the heuristic sense in which $\mathbf{m}$ may still be considered ``neutral''.
\end{remark}
\begin{remark} Solomonoff proposed the use of a universal lower semicomputable semimeasure (actually, a closely related structure) for inductive inference in~\cite{Solomonoff64I}. He proved in~\cite{Solomonoff78} that sequences emitted by any computable probability distribution can be predicted well by his scheme. It may be interesting to see whether the same prediction scheme has stronger properties when used with the truly neutral measure $M$ of the present paper.
\end{remark}
\section{Relative entropy}\label{s.rel-entr}
Some properties of description complexity make it a good expression of the idea of individual information content.
\subsection{Entropy} The entropy of a discrete probability distribution $\mu$ is defined as
\[
\mathcal{H}(\mu) = - \sum_{x} \mu(x) \log \mu(x).
\] To generalize entropy to continuous distributions, the \df{relative entropy} is defined as follows. Let $\mu,\nu$ be two measures, where $\mu$ is typically (but not always) a probability measure, and $\nu$ is another measure that can also be a probability measure but most frequently is not. We define the \df{relative entropy} $\mathcal{H}_{\nu}(\mu)$ as follows. If $\mu$ is not absolutely continuous with respect to $\nu$ then $\mathcal{H}_{\nu}(\mu) = -\infty$. Otherwise, writing
\[
\frac{d\mu}{d\nu} = \frac{\mu(dx)}{\nu(dx)} =: f(x)
\] for the (Radon-Nikodym) derivative (density) of $\mu$ with respect to $\nu$, we define
\[
\mathcal{H}_{\nu}(\mu) = - \int \log\frac{d\mu}{d\nu} d\mu
= - \mu^{x} \log\frac{\mu(dx)}{\nu(dx)} = -\nu^{x} f(x) \log f(x).
\] Thus, $\mathcal{H}(\mu) = \mathcal{H}_{\#}(\mu)$ is a special case.
\begin{example}
Let $f(x)$ be a probability density function for the distribution $\mu$ over the real line, and let $\lg$ be the Lebesgue measure there. Then
\[
\mathcal{H}_{\lg}(\mu) = -\int f(x) \log f(x) d x.
\]
\end{example}
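For discrete distributions both quantities are directly computable. In the following minimal sketch (the distributions are invented examples), the entropy of the uniform distribution on four points is 2 bits, and taking $\nu$ to be the counting measure recovers the special case $\mathcal{H}(\mu) = \mathcal{H}_{\#}(\mu)$ mentioned above:

```python
from math import log2

def entropy(mu):
    """H(mu) = - sum_x mu(x) log mu(x)   (discrete, in bits)."""
    return -sum(p * log2(p) for p in mu.values() if p > 0)

def relative_entropy(mu, nu):
    """H_nu(mu) = - sum_x mu(x) log (mu(x)/nu(x)), assuming mu << nu."""
    return -sum(p * log2(p / nu[x]) for x, p in mu.items() if p > 0)

uniform4 = {x: 0.25 for x in "abcd"}   # uniform distribution on 4 points
counting = {x: 1.0 for x in "abcd"}    # counting measure '#'

# Relative entropy with respect to counting measure is plain entropy:
# H_#(uniform4) = H(uniform4) = 2 bits.
```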
In information theory and statistics, when both $\mu$ and $\nu$ are probability measures, then $-\mathcal{H}_{\nu}(\mu)$ is also denoted $D(\mu \parallel \nu)$, and called (after Kullback) the information divergence of the two measures. It is frequently used in the role of a distance between $\mu$ and $\nu$. It is not symmetric and does not obey the triangle inequality, but it is nonnegative. Let us prove the latter property: in our terms, it says that relative entropy is nonpositive when both $\mu$ and $\nu$ are probability measures.
\begin{proposition}\label{p.Kullback-pos} Over a space $\mathbf{X}$, we have
\begin{equation}\label{e.Kullback-pos}
\mathcal{H}_{\nu}(\mu) \leqslant -\mu(X) \log\frac{\mu(X)}{\nu(X)}.
\end{equation}
In particular, if $\mu(X) \geqslant \nu(X)$ then $\mathcal{H}_{\nu}(\mu) \leqslant 0$.
\end{proposition}
\begin{proof} The inequality $- a \ln a \leqslant -a\ln b + b-a$ expresses the concavity of the logarithm function. Substituting $a = f(x)$ and $b = \mu(X)/\nu(X)$ and integrating by $\nu$:
\[
(\ln 2) \mathcal{H}_{\nu}(\mu) =
-\nu^{x} f(x) \ln f(x) \leqslant -\mu(X) \ln\frac{\mu(X)}{\nu(X)}
+ \frac{\mu(X)}{\nu(X)} \nu(X) - \mu(X)
= -\mu(X) \ln\frac{\mu(X)}{\nu(X)},
\] giving~\eqref{e.Kullback-pos}.
\end{proof}
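Proposition~\ref{p.Kullback-pos} can be spot-checked numerically. In the sketch below (the distributions are invented for illustration), $\mu$ is a probability measure and $\nu$ has total mass 2, so the bound is $-\mu(X)\log(\mu(X)/\nu(X)) = 1$; with a probability measure $\nu$, the relative entropy is nonpositive:

```python
from math import log2

def relative_entropy(mu, nu):
    """H_nu(mu) = - sum_x mu(x) log (mu(x)/nu(x)), assuming mu << nu."""
    return -sum(p * log2(p / nu[x]) for x, p in mu.items() if p > 0)

mu = {"a": 0.5, "b": 0.3, "c": 0.2}    # probability measure, mu(X) = 1
nu = {"a": 1.0, "b": 0.5, "c": 0.5}    # measure with nu(X) = 2

# The bound of Proposition p.Kullback-pos: -mu(X) log(mu(X)/nu(X)).
bound = -sum(mu.values()) * log2(sum(mu.values()) / sum(nu.values()))
lhs = relative_entropy(mu, nu)
# Here H_nu(mu) can be positive only because nu(X) > mu(X).
```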
The following theorem generalizes an earlier known theorem stating that over a discrete space, for a computable measure, entropy is within an additive constant the same as ``average complexity'': $\mathcal{H}(\mu) \eqa \mu^{x} H(x)$.
\begin{theorem} Let $\mu$ be a probability measure. Then we have
\begin{equation}\label{e.entropy-less-avg-algentr}
\mathcal{H}_{\nu}(\mu) \leqslant \mu^{x} H_{\nu}(x \mid \mu).
\end{equation} If $X$ is a discrete space then the following estimate also holds:
\begin{equation}\label{e.entropy-gea-avg-algentr}
\mathcal{H}_{\nu}(\mu) \gea \mu^{x} H_{\nu}(x \mid \mu).
\end{equation}
\end{theorem}
\begin{proof} Let $\dg$ be the measure with density $\mathbf{t}_{\nu}(x \mid \mu)$ with respect to $\nu$: $\mathbf{t}_{\nu}(x \mid \mu) = \frac{\dg(dx)}{\nu(dx)}$. Then $\dg(X) \leqslant 1$. It is easy to see from the maximality property of $\mathbf{t}_{\nu}(x \mid \mu)$ that $\mathbf{t}_{\nu}(x \mid \mu) > 0$, therefore according to Proposition~\ref{p.density-props}, we have $\frac{\nu(dx)}{\dg(dx)} = \Paren{\frac{\dg(dx)}{\nu(dx)}}^{-1}$. Using Proposition~\ref{p.density-props} and~\ref{p.Kullback-pos}:
\begin{align*}
\mathcal{H}_{\nu}(\mu) &= - \mu^{x} \log\frac{\mu(dx)}{\nu(dx)}, \\ - \mu^{x} H_{\nu}(x \mid \mu) &= \mu^{x} \log \frac{\dg(dx)}{\nu(dx)}
= - \mu^{x} \log \frac{\nu(dx)}{\dg(dx)}, \\ \mathcal{H}_{\nu}(\mu) - \mu^{x} H_{\nu}(x \mid \mu)
&= - \mu^{x} \log \frac{\mu(dx)}{\dg(dx)}
\leqslant -\mu(X) \log\frac{\mu(X)}{\dg(X)} \leqslant 0.
\end{align*} This proves~\eqref{e.entropy-less-avg-algentr}.
Over a discrete space $\mathbf{X}$, the function $(x,\mu,\nu) \mapsto \frac{\mu(dx)}{\nu(dx)} = \frac{\mu(x)}{\nu(x)}$ is computable, therefore by the maximality property of $\mathbf{t}_{\nu}(x \mid \mu)$ we have $\frac{\mu(dx)}{\nu(dx)} \lem \mathbf{t}_{\nu}(x \mid \mu)$, hence $\mathcal{H}_{\nu}(\mu) = -\mu^{x} \log \frac{\mu(dx)}{\nu(dx)}
\gea \mu^{x} H_{\nu}(x \mid \mu)$.
\end{proof}
\subsection{Addition theorem} The most important information-theoretical property of description complexity is the following theorem (see for example~\cite{LiViBook97}):
\begin{proposition}[Addition Theorem]\label{p.addition} We have $H(x,y) \eqa H(x) + H(y \mid x, H(x))$.
\end{proposition}
Mutual information is defined as $I(x : y) = H(x) + H(y) - H(x,y)$. By the Addition theorem, we have $I(x:y) \eqa H(y) - H(y \mid x,\, H(x)) \eqa H(x) - H(x \mid y,\,H(y))$. The two latter expressions show that in some sense, $I(x:y)$ is the information held in $x$ about $y$ as well as the information held in $y$ about $x$. (The terms $H(x)$, $H(y)$ in the conditions are logarithmic-sized corrections to this idea.) Using~\eqref{e.ol-d}, it is interesting to view mutual information $I(x : y)$ as a deficiency of randomness of the pair $(x,y)$ in terms of the expression $\ol\mathbf{d}_{\mu}$, with respect to $\mathbf{m} \times \mathbf{m}$:
\[
I(x : y) = H(x) + H(y) - H(x,y) = \ol\mathbf{d}_{\mathbf{m} \times \mathbf{m}}(x, y).
\] Taking $\mathbf{m}$ as a kind of ``neutral'' probability, even if it is not quite such, allows us to view $I(x:y)$ as a ``deficiency of independence''. Is it also true that $I(x:y) \eqa \mathbf{d}_{\mathbf{m} \times \mathbf{m}}(x,y)$? This would allow us to deduce, as Levin did, ``information conservation'' laws from randomness conservation laws.\footnote{We cannot use the test $\ol\mathbf{t}_{\mu}$ for this, since---as can be shown easily---it does not obey randomness conservation.}
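Spelling out the identity above step by step may be helpful. By the coding theorem, $H(x) \eqa -\log\mathbf{m}(x)$, so up to additive constants:

```latex
\ol\mathbf{d}_{\mathbf{m}\times\mathbf{m}}(x,y)
  = -\log\bigl(\mathbf{m}(x)\mathbf{m}(y)\bigr) - H(x,y)
  = -\log\mathbf{m}(x) - \log\mathbf{m}(y) - H(x,y)
  \eqa H(x) + H(y) - H(x,y) = I(x:y).
```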
The expression $\mathbf{d}_{\mathbf{m} \times \mathbf{m}}(x,y)$ must be understood again in the sense of compactification, as in Section~\ref{s.neutral}. There seem to be two reasonable ways to compactify the space $\mathbb{N}\times\mathbb{N}$: we either compactify it directly, by adding a symbol $\infty$, or we form the product $\ol\mathbb{N} \times \ol\mathbb{N}$. With either of them, preserving Theorem~\ref{t.test-charac-discr}, we would have to check whether $H(x,y \mid \mathbf{m} \times \mathbf{m}) \eqa H(x,y)$. But knowing the function $\mathbf{m}(x)\times\mathbf{m}(y)$ we know the function $x \mapsto \mathbf{m}(x) \eqm \mathbf{m}(x) \times \mathbf{m}(0)$, hence also the function $(x,y)\mapsto\mathbf{m}(x,y) = \mathbf{m}(\ang{x,y})$, where $\ang{x,y}$ is any fixed computable pairing function. Using this knowledge, it is possible to develop an argument similar to the proof of Theorem~\ref{t.no-upper-semi-neutral}, showing that $H(x,y \mid \mathbf{m} \times \mathbf{m}) \eqa H(x,y)$ does not hold.
\begin{question} Is there a neutral measure $M$ with the property $I(x:y) = \mathbf{d}_{M\times M}(x,y)$? Is this true maybe for all neutral measures $M$? If not, how far apart are the expressions $\mathbf{d}_{M\times M}(x,y)$ and $I(x:y)$ from each other?
\end{question}
The Addition Theorem (Proposition~\ref{p.addition}) can be generalized to the algorithmic entropy $H_{\mu}(x)$ introduced in~\eqref{e.alg-ent} (a somewhat similar generalization appeared in~\cite{VovkVyugin93}). The generalization, defining $H_{\mu,\nu} = H_{\mu\times\nu}$, is
\begin{equation}\label{e.addition-general}
H_{\mu,\nu}(x,y)\eqa
H_\mu(x \mid \nu)+ H_\nu(y \mid x,\; H_\mu(x \mid \nu),\; \mu).
\end{equation} Before proving the general addition theorem, we establish a few useful facts.
\begin{proposition}\label{p.int.H.of.xy}
We have
\[
H_{\mu}(x \mid \nu) \lea -\log \nu^{y} 2^{-H_{\mu,\nu}(x, y)}.
\]
\end{proposition}
\begin{proof} The function $f(x,\mu,\nu)$ on the right-hand side is upper semicomputable by definition, and obeys $\mu^{x}2^{-f(x,\mu,\nu)} \leqslant 1$. Therefore the inequality follows from the minimum property of $H_{\mu}(x)$.
\end{proof}
Let us generalize the minimum property of $H_{\mu}(x)$.
\begin{proposition}\label{p.univ-test-gener}
Let $(x,y,\nu) \mapsto f_{\nu}(x,y)$ be a nonnegative lower semicomputable function with $F_{\nu}(x) = \log \nu^{y} f_{\nu}(x,y)$. Then for all $x$ with $F_{\nu}(x) > -\infty$ we have
\[
H_{\nu}(y \mid x, \flo{F_{\nu}(x)}) \lea -\log f_{\nu}(x,y) + F_{\nu}(x).
\]
\end{proposition}
\begin{proof}
Let us construct a lower semicomputable function $(x,y,m,\nu) \mapsto g_{\nu}(x,y,m)$ for integers $m$ with the property that $\nu^{y} g_{\nu}(x,y,m) \leqslant 2^{-m}$, and for all $x$ with $F_{\nu}(x) \leqslant -m$ we have $g_{\nu}(x,y,m) = f_{\nu}(x,y)$. Such a $g$ can be constructed by watching the approximation of $f$ grow and cutting it off as soon as it would give $F_{\nu}(x) > -m$. Now $(x,y,m,\nu) \mapsto 2^{m} g_{\nu}(x,y,m)$ is a uniform conditional test of $y$ and hence it is $\lem 2^{-H_{\nu}(y \mid x, m)}$. To finish the proof, substitute $-\flo{F_{\nu}(x)}$ for $m$ and rearrange.
\end{proof}
For $z \in \mathbb{N}$, the inequality
\begin{equation}\label{e.H.x.cond.z}
H_{\mu}(x) \lea H(z) + H_{\mu}(x \mid z)
\end{equation}
will be a simple consequence of the general addition theorem. The following lemma, needed in the proof of the theorem, generalizes this inequality somewhat:
\begin{lemma}\label{l.H.x.cond.z}
For a computable function $(y,z) \mapsto f(y,z)$ over $\mathbb{N}$, we have
\[
H_{\mu}(x \mid y) \lea H(z) + H_{\mu}(x \mid f(y,z)).
\]
\end{lemma}
\begin{proof}
The function
\[
(x,y,\mu) \mapsto g_{\mu}(x, y)=\sum_{z} 2^{-H_{\mu}(x \mid f(y,z))-H(z)}
\]
is lower semicomputable, and $\mu^{x} g_{\mu}(x, y) \leqslant \sum_{z} 2^{-H(z)} \leqslant 1$. Hence $g_{\mu}(x, y) \lem 2^{-H_{\mu}(x \mid y)}$. The left-hand side is a sum, hence the inequality holds for each element of the sum: just what we had to prove.
\end{proof}
As mentioned above, the theory generalizes to measures that are not probability measures. Taking $f_{\mu}(x,y)=1$ in Proposition~\ref{p.univ-test-gener} gives the inequality
\[
H_{\mu}(x \mid \flo{\log \mu(X)}) \lea \log\mu(X),
\] with a physical meaning when $\mu$ is the phase space measure. Using~\eqref{e.H.x.cond.z}, this implies
\begin{equation}\label{e.unif.ub}
H_\mu(x)\lea \log\mu(X) + H(\flo{\log\mu(X)}).
\end{equation}
The following simple monotonicity property will be needed:
\begin{lemma}\label{l.mon}
For $i < j$ we have
\[
i + H_\mu(x\mid i) \lea j + H_\mu(x\mid j) .
\] \end{lemma} \begin{proof}
From Lemma~\ref{l.H.x.cond.z}, with $f(i, n)=i + n$ we have
\[
H_{\mu}(x \mid i) - H_{\mu}(x \mid j) \lea H(j-i) \lea j-i.
\] \end{proof}
\begin{theorem}[General addition]\label{t.addition-general} The following inequality holds:
\[
H_{\mu,\nu}(x,y) \eqa
H_{\mu}(x \mid \nu)+ H_{\nu}(y \mid x,\; H_\mu(x \mid \nu),\; \mu).
\]
\end{theorem}
\begin{proof}
To prove the inequality $\lea$, let us define
\[
G_{\mu,\nu}(x,y,m) =\min_{i\geqslant m}\;i + H_{\nu}(y \mid x, i, \mu).
\]
The function $G_{\mu,\nu}(x,y,m)$ is upper semicomputable and increasing in $m$, since as $m$ grows the minimum is taken over a smaller range. Therefore
\[
G_{\mu,\nu}(x,y) = G_{\mu,\nu}(x, y, H_{\mu}(x \mid \nu))
\] is also upper semicomputable since it is obtained by substituting an upper semicomputable function for $m$ in $G_{\mu,\nu}(x,y,m)$. Lemma~\ref{l.mon} implies
\begin{align*}
G_{\mu,\nu}(x,y,m) &\eqa m + H_{\nu}(y \mid x, m, \mu), \\ G_{\mu,\nu}(x,y) &\eqa H_{\mu}(x \mid \nu) +
H_{\nu}(y \mid x, H_{\mu}(x \mid \nu), \mu).
\end{align*}
Now, we have
\begin{align*}
\nu^{y} 2^{-m - H_{\nu}(y \mid x, m, \mu)} &\leqslant 2^{-m}, \\ \nu^{y} 2^{-G_{\mu,\nu}(x,y)} &\lem 2^{-H_{\mu}(x \mid \nu)}.
\end{align*}
Therefore $\mu^{x}\nu^{y} 2^{-G} \lem 1$, implying $H_{\mu,\nu}(x,y) \lea G_{\mu,\nu}(x,y)$ by the minimality property of $H_{\mu,\nu}(x,y)$. This proves the $\lea$ half of our theorem.
To prove the inequality $\gea$, let
\begin{align*}
f_{\nu}(x,y,\mu) &= 2^{-H_{\mu,\nu}(x,y)}, \\ F_{\nu}(x, \mu) &= \log \nu^{y} f_{\nu}(x,y,\mu).
\end{align*}
According to Proposition~\ref{p.univ-test-gener},
\begin{align*}
H_{\nu}(y \mid x,\flo{F},\mu)
&\lea -\log f_{\nu}(x,y,\mu) + F_{\nu}(x, \mu), \\ H_{\mu,\nu}(x,y) &\gea -F + H_{\nu}(y \mid x, \cei{-F}, \mu).
\end{align*}
Proposition~\ref{p.int.H.of.xy} implies $-F_{\nu}(x, \mu) \gea H_{\mu}(x \mid \nu)$. The monotonicity Lemma~\ref{l.mon} now implies the $\gea$ half of the theorem.
\end{proof}
\subsection{Some special cases of the addition theorem; information}
The function $H_{\mu}(\cdot)$ behaves quite differently for different kinds of measures $\mu$. Recall the following property of complexity:
\begin{equation} \label{e.compl.of.fun}
H(f(x)\mid y)\lea H(x\mid g(y)) \lea H(x) .
\end{equation} for any computable functions $f,g$. This implies
\[
H(y)\lea H(x,y).
\]
In contrast, if $\mu$ is a probability measure then
\[
H_{\nu}(y) \gea H_{\mu,\nu}(x, y).
\]
This comes from the fact that $2^{-H_{\nu}(y)}$ is a test for $\mu\times\nu$.
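In detail (a routine verification of this fact): since $\mu$ is a probability measure,

```latex
\mu^{x}\nu^{y}\, 2^{-H_{\nu}(y)} = \mu(X)\,\nu^{y}\, 2^{-H_{\nu}(y)} \leqslant 1,
```

so $2^{-H_{\nu}(y)}$, viewed as a lower semicomputable function of $(x,y)$, is a uniform test for $\mu\times\nu$, and the maximality of the universal test gives $2^{-H_{\nu}(y)} \lem 2^{-H_{\mu,\nu}(x,y)}$.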
Let us explore some of the consequences and meanings of the additivity property. As noted in~\eqref{e.Hmu-to-cond}, the subscript $\mu$ can always be added to the condition: $H_{\mu}(x) \eqa H_{\mu}(x \mid \mu)$. Similarly, we have
\[
H_{\mu,\nu}(x,y) := H_{\mu\times\nu}(x,y)
\eqa H_{\mu\times\nu}(x,y\mid \mu\times\nu)
\eqa H_{\mu\times\nu}(x,y\mid \mu,\nu)
=: H_{\mu,\nu}(x,y\mid \mu,\nu),
\] where only the next-to-last relation requires a new (easy) argument.
Let us assume that $X=Y=\Sg^{*}$, the discrete space of all strings. With general $\mu,\nu$ such that $\mu(x),\nu(x) \ne 0$ for all $x$, using~\eqref{e.alg-entr-charac}, the addition theorem specializes to the ordinary addition theorem, conditioned on $\mu,\nu$:
\[
H(x,y\mid \mu,\nu) \eqa
H(x \mid \mu,\nu)+ H(y \mid x,\; H(x \mid \mu,\nu),\; \mu,\nu).
\] In particular, whenever $\mu,\nu$ are computable, this is just the regular addition theorem.
Just as mutual information was defined above by $I(x : y) = H(x) + H(y) - H(x,y)$, the new addition theorem suggests the more general definition
\[
I_{\mu,\nu}(x : y) = H_{\mu}(x \mid \nu) +
H_{\nu}(y\mid \mu) - H_{\mu,\nu}(x,y).
\] In the discrete case $X=Y=\Sg^{*}$ with everywhere positive $\mu(x),\nu(x)$, this simplifies to
\[
I_{\mu,\nu}(x : y) = H(x \mid \mu,\nu) + H(y\mid \mu,\nu)
- H(x,y \mid \mu,\nu),
\] which is $\eqa I(x:y)$ in case of computable $\mu,\nu$. How different can it be for non-computable $\mu,\nu$?
In the general case, even for computable $\mu,\nu$, it seems worth finding out how much this expression depends on the choice of $\mu,\nu$. Can one arrive at a general, natural definition of mutual information along this path?
\section{Conclusion}
When uniform randomness tests are defined in as general a form as they were here, the theory of information conservation does not fit nicely into the theory of randomness conservation as it did with~\cite{LevinUnif76} and~\cite{LevinRandCons84}. Still, it is worth laying the theory onto broad foundations that, we hope, can serve as a basis for further development.
\appendix
\section{Topological spaces}\label{s.top}
Given two sets $X,Y$, a \df{partial function} $f$ from $X$ to $Y$, defined on a subset of $X$, will be denoted as
\[
f:\sbsq X \to Y.
\]
\subsection{Topology}\label{ss.top}
A \df{topology} on a set $X$ is defined by a class $\tau$ of its subsets called \df{open sets}. It is required that the empty set and $X$ are open, and that arbitrary unions and finite intersections of open sets are open. The pair $(X, \tau)$ is called a \df{topological space}. A topology $\tau'$ on $X$ is called \df{larger}, or \df{finer} than $\tau$ if $\tau' \spsq \tau$. A set is called \df{closed} if its complement is open. A set $B$ is called a \df{neighborhood} of a set $A$ if $B$ contains an open set that contains $A$. We denote by $\Cl{A}, \Int{A}$ the closure (the intersection of all closed sets containing $A$) and the interior of $A$ (the union of all open sets contained in $A$) respectively. Let
\[
\partial A = \Cl{A} \xcpt \Int{A}
\] denote the boundary of set $A$. A \df{base} is a subset $\bg$ of $\tau$ such that every open set is the union of some elements of $\bg$. A \df{neighborhood} of a point is a base element containing it. A \df{base of neighborhoods of a point} $x$ is a set $N$ of neighborhoods of $x$ with the property that each neighborhood of $x$ contains an element of $N$. A \df{subbase} is a subset $\sg$ of $\tau$ such that every open set is the union of finite intersections from $\sg$.
\begin{examples}\label{x.topol}\
\begin{enumerate}[\upshape 1.]
\item\label{i.x.topol.discr} Let $X$ be a set, and let $\bg$ be the set of all one-element subsets of $X$. The topology with base $\bg$ is the \df{discrete topology} of the set $X$. In this topology, every subset of $X$ is open (and closed).
\item\label{i.x.topol.real} Let $X$ be the real line $\mathbb{R}$, and let $\bg_{\mathbb{R}}$ be the set of all open intervals $\opint{a}{b}$. The topology $\tau_{\mathbb{R}}$ obtained from this base is the usual topology of the real line. When we refer to $\mathbb{R}$ as a topological space without qualification, this is the topology we will always have in mind.
\item Let $X = \ol\mathbb{R} = \mathbb{R} \cup \{-\infty,\infty\}$, and let $\bg_{\ol\mathbb{R}}$ consist of all open intervals $\opint{a}{b}$ and in addition of all intervals of the forms $\lint{-\infty}{a}$ and $\rint{a}{\infty}$. It is clear how the space $\ol\mathbb{R}_{+}$ is defined.
\item\label{i.x.topol.real-upper-converg} Let $X$ be the real line $\mathbb{R}$. Let $\bg_{\mathbb{R}}^{>}$ be the set of all open intervals $\opint{-\infty}{b}$. The topology with base $\bg_{\mathbb{R}}^{>}$ is also a topology of the real line, different from the usual one. Similarly, let $\bg_{\mathbb{R}}^{<}$ be the set of all open intervals $\opint{b}{\infty}$.
\item\label{i.x.topol.Cantor} On the space $\Sg^{\og}$, let $\tg_{C} = \setof{A\Sg^{\og} : A \sbsq \Sg^{*}}$ be called the topology of the \df{Cantor space} (over $\Sg$).
\end{enumerate} \end{examples}
A set is called a $G_{\dg}$ set if it is a countable intersection of open sets, and it is an $F_{\sg}$ set if it is a countable union of closed sets.
For two topologies $\tau_{1},\tau_{2}$ over the same set $X$, we define the topology $\tau_{1}\V \tau_{2} = \tau_{1} \cap \tau_{2}$, and $\tau_{1} \et \tau_{2}$ as the smallest topology containing $\tau_{1} \cup \tau_{2}$. In the example topologies of the real numbers above, we have $\tau_{\mathbb{R}} = \tau_{\mathbb{R}}^{<} \et \tau_{\mathbb{R}}^{>}$.
We will always require the topology to have at least the $T_{0}$ property: every point is determined by the class of open sets containing it. This is the weakest of a number of possible separation properties: both topologies of the real line in the example above have it. A stronger such property would be the $T_{2}$ property: a space is called a \df{Hausdorff} space, or $T_{2}$ space, if for every pair of different points $x,y$ there is a pair of disjoint open sets $A,B$ with $x\in A$, $y\in B$. The real line with topology $\tau_{\mathbb{R}}^{>}$ in Example~\ref{x.topol}.\ref{i.x.topol.real-upper-converg} above is not a Hausdorff space. A space is Hausdorff if and only if every open set is the union of closed neighborhoods.
Given two topological spaces $(X_{i}, \tau_{i})$ ($i=1,2$), a function $f :\sbsq X_{1} \to X_{2}$ is called \df{continuous} if for every open set $G \sbs X_{2}$ its inverse image $f^{-1}(G)$ is also open. If the topologies $\tau_{1},\tau_{2}$ are not clear from the context then we will call the function $(\tau_{1}, \tau_{2})$-continuous. Clearly, the property remains the same if we require it only for all elements $G$ of a subbase of $X_{2}$. If there are two continuous functions between $X$ and $Y$ that are inverses of each other then the two spaces are called \df{homeomorphic}. We say that $f$ is continuous \df{at point} $x$ if for every neighborhood $V$ of $f(x)$ there is a neighborhood $U$ of $x$ with $f(U) \sbsq V$. Clearly, $f$ is continuous if and only if it is continuous at each point.
A \df{subspace} of a topological space $(X, \tau)$ is defined by a subset $Y \sbsq X$, and the topology $\tau_{Y} = \setof{G \cap Y : G \in \tau}$, called the \df{induced} topology on $Y$. This is the smallest topology on $Y$ making the identity mapping $x \mapsto x$ continuous. A partial function $f :\sbsq X \to Z$ with $\operatorname{dom}(f) = Y$ is continuous iff $f : Y \to Z$ is continuous.
For two topological spaces $(X_{i}, \tau_{i})$ ($i=1,2$), we define the \df{product topology} on their product $X \times Y$: this is the topology defined by the subbase consisting of all sets $G_{1} \times X_{2}$ and all sets $X_{1} \times G_{2}$ with $G_{i} \in \tau_{i}$. The product topology is the smallest topology making the projection functions $(x,y) \mapsto x$, $(x,y) \mapsto y$ continuous. Given topological spaces $X,Y,Z$ we call a two-argument function $f: X \times Y \to Z$ continuous if it is continuous as a function from the product space $X \times Y$ to $Z$. The product topology is defined similarly over the product $\prod_{i \in I} X_{i}$ of an arbitrary number of spaces, indexed by some index set $I$. We say that a function is $(\tau_{1},\dots,\tau_{n},\mu)$-continuous if it is $(\tau_{1} \times\dots\times\tau_{n},\mu)$-continuous.
\begin{examples}\label{x.prod}\
\begin{enumerate}[\upshape 1.]
\item\label{i.x.prod.real} The space $\mathbb{R} \times \mathbb{R}$ with the product topology has the usual topology of the Euclidean plane.
\item\label{i.x.top-inf-seq} Let $X$ be a set with the discrete topology, and $X^{\og}$ the set of infinite sequences with elements from $X$, with the product topology. A base of this topology is provided by all sets of the form $uX^{\og}$ where $u \in X^{*}$. The elements of this base are closed as well as open. When $X = \{0,1\}$ then this topology is the usual topology of infinite binary sequences.
\end{enumerate}
\end{examples}
A real function $f : X_{1} \to \mathbb{R}$ is called continuous if it is $(\tau_{1}, \tau_{\mathbb{R}})$-continuous. It is called \df{lower semicontinuous} if it is $(\tau_{1}, \tau_{\mathbb{R}}^{<})$-continuous. The definition of upper semicontinuity is similar. Clearly, $f$ is continuous if and only if it is both lower and upper semicontinuous. The requirement of lower semicontinuity of $f$ is that for each $r \in \mathbb{R}$, the set $\setof{x: f(x) > r}$ is open. This can be seen to be equivalent to the requirement that the single set $\setof{(x,r): f(x) > r}$ is open. It is easy to see that the supremum of any set of lower semicontinuous functions is lower semicontinuous.
Let $(X, \tau)$ be a topological space, and $B$ a subset of $X$. An \df{open cover} of $B$ is a family of open sets whose union contains $B$. A subset $K$ of $X$ is said to be \df{compact} if every open cover of $K$ has a finite subcover. Compact sets have many important properties: for example, a continuous function over a compact set is bounded.
\begin{example}\label{x.compact}\
\begin{enumerate}[\upshape 1.]
\item\label{i.compact.compactify}
Every finite discrete space is compact. An infinite discrete space $\mathbf{X} = (X, \tau)$ is not compact, but it can be turned into a compact space $\ol\mathbf{X}$ by adding a new element called $\infty$: let $\ol X = X\cup\{\infty\}$, and $\ol\tau = \tau\cup\setof{\ol X \xcpt A: A \sbs X\txt{ finite }}$. More generally, this simple operation can be performed with every space that is \df{locally compact}, that is, every space in which each point has a compact neighborhood.
\item In a finite-dimensional Euclidean space, every bounded closed set is compact.
\item It is known that if $(\mathbf{X}_{i})_{i\in I}$ is a family of compact spaces then their direct product is also compact.
\end{enumerate}
\end{example}
A subset $K$ of $X$ is said to be \df{sequentially compact} if every sequence in $K$ has a convergent subsequence with limit in $K$. The space is \df{locally compact} if every point has a compact neighborhood.
\subsection{Metric spaces}\label{ss.metric}
In our examples for metric spaces, and later in our treatment of the space of probability measures, we refer to~\cite{BillingsleyConverg68}. A \df{metric space} is given by a set $X$ and a distance function $d : X\times X \to \mathbb{R}_{+}$ satisfying the \df{triangle inequality} $d(x, z) \leqslant d(x, y) + d(y, z)$ and also the property that $d(x,y) = 0$ implies $x = y$. For $r \in\mathbb{R}_{+}$, the sets
\[
B(x,r) = \setof{y : d(x, y) < r},\quad
\ol B(x,r) = \setof{y : d(x, y) \leqslant r}
\] are called the open and closed \df{balls} with radius $r$ and center $x$. A metric space is also a topological space, with the base that is the set of all open balls. Over this space, the distance function $d(x,y)$ is obviously continuous. Each metric space is a Hausdorff space; moreover, it has the following stronger property. For every pair of different points $x,y$ there is a continuous function $f : X \to \mathbb{R}$ with $f(x) \ne f(y)$. (To see this, take $f(z) = d(x, z)$.) This is called the $T_{3}$ property. A metric space is \df{bounded} when $d(x,y)$ has an upper bound on $X$. A topological space is called \df{metrizable} if its topology can be derived from some metric space.
\begin{notation} For an arbitrary set $A$ and point $x$ let \begin{align}\nonumber
d(x, A) &= \inf_{y \in A} d(x,y), \\\label{e.Aeps}
A^{\eps} &= \setof{x : d(x, A) < \eps}.
\end{align} \end{notation}
\begin{examples}\label{x.metric}\
\begin{enumerate}[\upshape 1.]
\item The real line with the distance $d(x,y) = |x - y|$ is a metric space. The topology of this space is the usual topology $\tau_{\mathbb{R}}$ of the real line.
\item The space $\mathbb{R} \times \mathbb{R}$ with the Euclidean distance gives the same topology as the product topology of $\mathbb{R} \times \mathbb{R}$.
\item An arbitrary set $X$ with the distance $d(x,y)=1$ for all pairs $x,y$ of different elements, is a metric space that induces the discrete topology on $X$.
\item\label{i.x.metric-inf-seq} Let $X$ be a bounded metric space, and let $Y = X^{\og}$ be the set of infinite sequences $x = (x_{1}, x_{2}, \dotsc)$ with distance function $d^{\og}(x,y) = \sum_{i} 2^{-i} d(x_{i},y_{i})$. The topology of this space is the same as the product topology defined in Example~\ref{x.prod}.\ref{i.x.top-inf-seq}.
\item\label{i.x.metric-nonsep} Let $X$ be a metric space, and let $Y = X^{\og}$ be the set of infinite bounded sequences $x = (x_{1}, x_{2}, \dotsc)$ with distance function $d(x, y) = \sup_{i} d(x_{i}, y_{i})$.
\item\label{i.x.C(X)} Let $X$ be a metric space, and let $C(X)$ be the set of bounded continuous functions over $X$ with distance function $d'(f, g) = \sup_{x} d(f(x), g(x))$. A special case is $C\clint{0}{1}$ where the interval $\clint{0}{1}$ of real numbers has the usual metric.
\item\label{i.x.l2} Let $l_{2}$ be the set of infinite sequences $x = (x_{1}, x_{2}, \dotsc)$ of real numbers with the property that $\sum_{i} x_{i}^{2} < \infty$. The metric is given by
the distance $d(x, y) = (\sum_{i} |x_{i} - y_{i}|^{2})^{1/2}$.
\end{enumerate}
\end{examples}
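The metric $d^{\og}$ of Example~\ref{x.metric}.\ref{i.x.metric-inf-seq} is easy to experiment with numerically. The following Python sketch (my own illustration; the bounded space is taken to be $\clint{0}{1}$ and the truncation point is an arbitrary choice) approximates $d^{\og}$; since each term satisfies $d(x_{i},y_{i}) \leqslant 1$, truncating after $n$ terms introduces an error of at most $2^{-n}$:

```python
def d_omega(x, y, terms=60):
    """Approximate d^omega(x, y) = sum_i 2^{-i} d(x_i, y_i) for sequences
    given as functions i -> x_i with values in [0, 1], truncating the series."""
    return sum(2.0 ** -i * abs(x(i) - y(i)) for i in range(1, terms + 1))

zero = lambda i: 0.0        # the constant-0 sequence
one  = lambda i: 1.0        # the constant-1 sequence
harm = lambda i: 1.0 / i    # the sequence 1, 1/2, 1/3, ...

# The triangle inequality holds termwise, hence for d_omega as well:
d_omega(zero, one) <= d_omega(zero, harm) + d_omega(harm, one) + 1e-12
```

Two sequences agreeing in their first $n$ coordinates are within $2^{-n}$ of each other, which is why this metric induces the product topology.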
A topological space has the \df{first countability property} if each point has a countable base of neighborhoods. Every metric space has the first countability property since we can restrict ourselves to balls with rational radius. Given a topological space $(X, \tau)$ and a sequence $x = (x_{1}, x_{2}, \dotsc)$ of elements of $X$, we say that $x$ \df{converges} to a point $y$ if for every neighborhood $G$ of $y$ there is a $k$ such that for all $m > k$ we have $x_{m} \in G$. We will write $y = \lim_{n \to \infty} x_{n}$. It is easy to show that if spaces $(X_{i}, \tau_{i})$ $(i=1,2)$ have the first countability property then a function $f : X_{1} \to X_{2}$ is continuous if and only if for every convergent sequence $(x_{n})$ we have $f(\lim_{n} x_{n}) = \lim_{n} f(x_{n})$. A topological space has the \df{second countability property} if the whole space has a countable base. For example, the space $\mathbb{R}$ has the second countability property for all three topologies $\tau_{\mathbb{R}}$, $\tau_{\mathbb{R}}^{<}$, $\tau_{\mathbb{R}}^{>}$. Indeed, we also get a base if instead of taking all intervals, we only take intervals with rational endpoints. On the other hand, the metric space of Example~\ref{x.metric}.\ref{i.x.metric-nonsep} does not have the second countability property. In a topological space $(X, \tau)$, a set $B$ of points is called \df{dense} at a point $x$ if it intersects every neighborhood of $x$. It is called \df{everywhere dense}, or \df{dense}, if it is dense at every point. A metric space is called \df{separable} if it has a countable everywhere dense subset. This property holds if and only if the space as a topological space has the second countability property.
\begin{example}\label{x.Cclint{0}{1}} In Example~\ref{x.metric}.\ref{i.x.C(X)}, for $X=\clint{0}{1}$, we can choose as our everywhere dense set the set of all polynomials with rational coefficients, or alternatively, the set of all piecewise linear functions whose graph has finitely many nodes at rational points.
\end{example}
Let $(X, d)$ be a metric space, and $a = (a_{1}, a_{2},\dotsc)$ an infinite sequence of its elements. The sequence $a$ is called a \df{Cauchy sequence} if $d(a_{i}, a_{j}) \to 0$ as $i,j \to \infty$. A metric space is called \df{complete} if every Cauchy sequence in it has a limit. It is well-known that every metric space can be embedded (as an everywhere dense subspace) into a complete space.
It is easy to see that in a metric space, every closed set is a $G_{\dg}$ set (and every open set is an $F_{\sg}$ set).
\begin{example} Consider the set $D\clint{0}{1}$ of functions over $\clint{0}{1}$ that are right continuous and have left limits everywhere. The book~\cite{BillingsleyConverg68} introduces two different metrics for them: the Skorohod metric $d$ and the $d_{0}$ metric. In both metrics, two functions are close if a slight monotonic continuous deformation of the coordinate makes them uniformly close. But in the $d_{0}$ metric, the slope of the deformation must be close to 1. It is shown that the two metrics give rise to the same topology; however, the space with metric $d$ is not complete, and the space with metric $d_{0}$ is.
\end{example}
Let $(X, d)$ be a metric space. It can be shown that a subset $K$ of $X$ is compact if and only if it is sequentially compact. Also, $K$ is compact if and only if it is closed and for every $\eps$, there is a finite set of $\eps$-balls (balls of radius $\eps$) covering it.
We will develop the theory of randomness over separable complete metric spaces. This is a wide class of spaces encompassing most spaces of practical interest. The theory would be simpler if we restricted it to compact or locally compact spaces; however, some important spaces like $C\clint{0}{1}$ (the set of continuous functions over the interval $\clint{0}{1}$, with the maximum difference as their distance) are not locally compact.
Given metric spaces $X$ and $Y$ and a constant $\bg > 0$, let $\operatorname{Lip}_{\bg}(X,Y)$ denote the set of functions $f : X \to Y$ (called the Lip\-schitz$(\bg)$ functions, or simply Lip\-schitz functions) satisfying
\begin{equation}\label{e.Lipschitz}
d_{Y}(f(x), f(y)) \leqslant \bg d_{X}(x, y).
\end{equation} All these functions are uniformly continuous. Let $\operatorname{Lip}(X) = \operatorname{Lip}(X,\mathbb{R}) = \bigcup_{\bg} \operatorname{Lip}_{\bg}$ be the set of real Lip\-schitz functions over $X$.
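The Lip\-schitz condition~\eqref{e.Lipschitz} can be checked empirically on a finite sample of points; this is of course only a necessary condition, not a proof. A small Python sketch (an illustration, with $X = Y = \mathbb{R}$ and an arbitrary grid of sample points):

```python
def is_lipschitz(f, beta, points):
    """Check |f(u) - f(v)| <= beta * |u - v| on a finite sample of reals.
    A necessary condition for f to be Lipschitz(beta), not a sufficient one."""
    return all(abs(f(u) - f(v)) <= beta * abs(u - v) + 1e-12
               for u in points for v in points)

pts = [i / 100.0 for i in range(101)]            # a grid in [0, 1]
is_lipschitz(lambda t: abs(t - 0.5), 1.0, pts)   # True: slopes are +-1
is_lipschitz(lambda t: t * t, 1.0, pts)          # False: slope reaches 2 on [0,1]
is_lipschitz(lambda t: t * t, 2.0, pts)          # True: |u^2 - v^2| <= 2|u - v|
```

The last two calls illustrate that the Lip\-schitz constant matters: $t \mapsto t^{2}$ is in $\operatorname{Lip}_{2}\clint{0}{1}$ but not in $\operatorname{Lip}_{1}\clint{0}{1}$.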
\section{Measures}\label{s.measures}
For a survey of measure theory, see for example~\cite{PollardUsers01}.
\subsection{Set algebras}
A (Boolean set-) \df{algebra} is a set of subsets of some set $X$ closed under intersection and complement (and then, of course, under union). It is a \df{$\sg$-algebra} if it is also closed under countable intersection (and then, of course, under countable union). A \df{semialgebra} is a set $\mathcal{L}$ of subsets of some set $X$ closed under intersection, with the property that the complement of every element of $\mathcal{L}$ is the disjoint union of a finite number of elements of $\mathcal{L}$. If $\mathcal{L}$ is a semialgebra then the set of finite unions of elements of $\mathcal{L}$ is an algebra.
\begin{examples}\label{x.algebras}\
\begin{enumerate}[\upshape 1.]
\item\label{i.x.algebras.l-closed} The set $\mathcal{L}_{1}$ of left-closed intervals of the line (including intervals of the form $\opint{-\infty}{a}$) is a semialgebra.
\item The set $\mathcal{L}_{2}$ of all intervals of the line (which can be open, closed, left-closed or right-closed), is a semialgebra.
\item\label{i.x.algebras.inf-seq} In the set $\{0,1\}^{\og}$ of infinite 0-1-sequences, the set $\mathcal{L}_{3}$ of all subsets of the form $u\{0,1\}^{\og}$ with $u\in\{0,1\}^{*}$, is a semialgebra.
\item The $\sg$-algebra $\mathcal{B}$ generated by $\mathcal{L}_{1}$, is the same as the one generated by $\mathcal{L}_{2}$, and is also the same as the one generated by the set of all open sets: it is called the family of \df{Borel sets} of the line. The Borel sets of the extended real line $\ol\mathbb{R}$ are defined similarly.
\item Given $\sg$-algebras $\mathcal{A},\mathcal{B}$ in sets $X,Y$, the product $\sg$-algebra $\mathcal{A}\times\mathcal{B}$ in the space $X \times Y$ is the one generated by all elements $A \times Y$ and $X \times B$ for $A\in\mathcal{A}$ and $B\in\mathcal{B}$.
\end{enumerate}
\end{examples}
\subsection{Measures}\label{ss.measures}
A \df{measurable space} is a pair $(X, \mathcal{S})$ where $\mathcal{S}$ is a $\sg$-algebra of sets of $X$. A \df{measure} on a measurable space $(X, \mathcal{S})$ is a function $\mu : \mathcal{S} \to \ol\mathbb{R}_{+}$ that is \df{$\sg$-additive}: this means that for every countable family $A_{1}, A_{2},\dotsc$ of disjoint elements of $\mathcal{S}$ we have $\mu(\bigcup_{i} A_{i}) = \sum_{i} \mu(A_{i})$. A measure $\mu$ is \df{$\sg$-finite} if the whole space is the union of a countable set of subsets whose measure is finite. It is \df{finite} if $\mu(X) < \infty$. It is a \df{probability measure} if $\mu(X) = 1$.
It is important to understand how a measure can be defined in practice. Algebras are generally simpler to grasp constructively than $\sg$-algebras; semialgebras are yet simpler. Suppose that $\mu$ is defined over a semialgebra $\mathcal{L}$ and is additive. Then it can always be uniquely extended to an additive function over the algebra generated by $\mathcal{L}$. The following is an important theorem of measure theory.
\begin{proposition}\label{p.Caratheo-extension} Suppose that a nonnegative set function defined over a semialgebra $\mathcal{L}$ is $\sg$-additive. Then it can be extended uniquely to the $\sg$-algebra generated by $\mathcal{L}$.
\end{proposition}
\begin{examples}\label{x.semialgebra}\
\begin{enumerate}[\upshape 1.]
\item Let $x$ be a point and let $\mu(A) = 1$ if $x \in A$ and $0$ otherwise. In this case, we say that $\mu$ is \df{concentrated} on the point $x$.
\item\label{i.left-cont} Consider the line $\mathbb{R}$, and the semialgebra $\mathcal{L}_{1}$ defined in Example~\ref{x.algebras}.\ref{i.x.algebras.l-closed}. Let $f : \mathbb{R} \to \mathbb{R}$ be a monotonic real function. We define a set function over $\mathcal{L}_{1}$ as follows. Let $\lint{a_{i}}{b_{i}}$, ($i=1,\dots,n$) be a set of disjoint left-closed intervals. Then $\mu(\bigcup_{i} \lint{a_{i}}{b_{i}}) = \sum_{i} f(b_{i}) - f(a_{i})$. It is easy to see that $\mu$ is additive. It is $\sg$-additive if and only if $f$ is left-continuous.
\item\label{i.measure-Cantor} Let $B = \{0,1\}$, and consider the set $B^{\og}$ of infinite 0-1-sequences, and the semialgebra $\mathcal{L}_{3}$ of Example~\ref{x.algebras}.\ref{i.x.algebras.inf-seq}. Let $\mu : B^{*} \to \mathbb{R}^{+}$ be a function. Let us write $\mu(uB^{\og}) = \mu(u)$ for all $u \in B^{*}$. Then it can be shown that the following conditions are equivalent: $\mu$ is $\sg$-additive over $\mathcal{L}_{3}$; it is additive over $\mathcal{L}_{3}$; the equation $\mu(u) = \mu(u0) + \mu(u1)$ holds for all $u \in B^{*}$.
\item The nonnegative linear combination of any finite number of measures is also a measure. In this way, it is easy to construct arbitrary measures concentrated on a finite number of points.
\item Given two measure spaces $(X,\mathcal{A},\mu)$ and $(Y,\mathcal{B},\nu)$ it is possible to define the product measure $\mu\times\nu$ over the measurable space $(X\times Y, \mathcal{A}\times\mathcal{B})$. The definition is required to satisfy
$(\mu\times\nu)(A\times B) = \mu(A)\nu(B)$, and is determined uniquely by this condition. If $\nu$ is a probability measure then, of course,
$\mu(A) = \mu\times\nu(A \times Y)$.
\end{enumerate}
\end{examples}
\begin{remark}\label{r.measure.step-by-step} Example~\ref{x.semialgebra}.\ref{i.measure-Cantor} shows a particularly attractive way to define measures. Keep splitting the values $\mu(u)$ in an arbitrary way into $\mu(u0)$ and $\mu(u1)$, and the resulting values on the semialgebra define a measure. Example~\ref{x.semialgebra}.\ref{i.left-cont} is less attractive, since in the process of defining $\mu$ on all intervals and only keeping track of finite additivity, we may end up with a monotonic function that is not left continuous, and thus with a measure that is not $\sg$-additive. In the subsection on probability measures over a metric space, we will find that even on the real line, there is a way to define measures in a step-by-step manner, and only checking for consistency along the way.
\end{remark}
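The splitting procedure of Remark~\ref{r.measure.step-by-step} is directly programmable. The following Python sketch (an illustration, not part of the source; the splitting ratios are arbitrary parameters) builds $\mu$ on the cylinder sets and exhibits the additivity $\mu(u) = \mu(u0) + \mu(u1)$ of Example~\ref{x.semialgebra}.\ref{i.measure-Cantor}:

```python
from functools import lru_cache

def make_measure(split):
    """Build mu on cylinder sets u{0,1}^omega by step-by-step splitting:
    mu("") = 1, mu(u0) = split(u)*mu(u), mu(u1) = (1 - split(u))*mu(u).
    Additivity mu(u) = mu(u0) + mu(u1) then holds by construction, which by
    the equivalence in the example makes the set function sigma-additive."""
    @lru_cache(maxsize=None)
    def mu(u):
        if u == "":
            return 1.0
        p = split(u[:-1])
        return mu(u[:-1]) * (p if u[-1] == "0" else 1.0 - p)
    return mu

mu = make_measure(lambda u: 0.5)   # constant splitting: coin-flipping measure
mu("010")                          # = 2**-3 = 0.125
```

Any function `split` with values in $[0,1]$ may be used; the uniform choice above gives $\mu(u) = 2^{-|u|}$.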
A \df{probability space} is a triple $(X, \mathcal{S}, P)$ where $(X, \mathcal{S})$ is a measurable space and $P$ is a probability measure over it.
Let $(X_{i}, \mathcal{S}_{i})$ ($i=1,2$) be measurable spaces, and let $f : X_{1} \to X_{2}$ be a mapping. Then $f$ is \df{measurable} if and only if for each element $B$ of $\mathcal{S}_{2}$, its inverse image $f^{-1}(B)$ is in $\mathcal{S}_{1}$. If $\mu_{1}$ is a measure over $(X_{1}, \mathcal{S}_{1})$ then $\mu_{2}$ defined by $\mu_{2}(A) = \mu_{1}(f^{-1}(A))$ is a measure over $X_{2}$ called the measure \df{induced} by $f$.
\subsection{Integral}\label{ss.integral}
A measurable function $f : X \to \mathbb{R}$ is called a \df{step function} if its range is finite. The set of step functions is closed with respect to linear combinations and also with respect to the operations $\et,\V$. Such a set of functions is called a \df{Riesz space}.
Given a step function which takes values $x_{i}$ on sets $A_{i}$, and a finite measure $\mu$, we define
\[
\mu(f) = \mu f = \int f\,d\mu = \int f(x) \mu(d x)
= \sum_{i} x_{i} \mu(A_{i}).
\] This is a linear positive functional on the set of step functions. Moreover, it can be shown that it is continuous on monotonic sequences: if $f_{i} \searrow 0$ then $\mu f_{i} \searrow 0$. The converse can also be shown: Let $\mu$ be a linear positive functional on step functions that is continuous on monotonic sequences. Then the set function $\mu(A) = \mu(1_{A})$ is a finite measure.
\begin{proposition}\label{p.Riesz-extension} Let $\mathcal{E}$ be any Riesz space of functions with the property that $1 \in \mathcal{E}$. Let $\mu$ be a positive linear functional on $\mathcal{E}$ continuous on monotonic sequences, with $\mu 1 = 1$. The functional $\mu$ can be extended to the set $\mathcal{E}_{+}$ of monotonic limits of nonnegative elements of $\mathcal{E}$, by continuity. In case when $\mathcal{E}$ is the set of all step functions, the set $\mathcal{E}_{+}$ is the set of all nonnegative measurable functions.
\end{proposition}
Let us fix a finite measure $\mu$ over a measurable space $(X, \mathcal{S})$. A measurable function $f$ is called \df{integrable} with respect to $\mu$
if $\mu f^{+} < \infty$ and $\mu f^{-} < \infty$, where $f^{+} = f \V 0$ and $f^{-} = (-f) \V 0$ are the positive and negative parts of $f$. In this case, we define $\mu f = \mu f^{+} - \mu f^{-}$. The set of integrable functions is a Riesz space, and the positive linear functional $\mu$ on it is continuous with respect to monotonic sequences. The continuity over monotonic sequences also implies the following \df{dominated convergence theorem}.
\begin{proposition} Suppose that functions $f_{n}$ are integrable and
$|f_{n}| < g$ for some integrable function $g$. Then $f = \lim_{n} f_{n}$ is integrable and $\mu f = \lim_{n} \mu f_{n}$.
\end{proposition}
Two measurable functions $f,g$ are called \df{equivalent} with respect to $\mu$ if $\mu(|f - g|) = 0$.
For two-dimensional integration, the following theorem holds.
\begin{proposition} Suppose that function $f(\cdot,\cdot)$ is integrable over the space $(X\times Y, \mathcal{A}\times\mathcal{B}, \mu\times\nu)$. Then for $\mu$-almost all $x$, the function $f(x,\cdot)$ is integrable over $(Y,\mathcal{B},\nu)$, and the function $x\mapsto\nu^{y}f(x,y)$ is integrable over $(X,\mathcal{A},\mu)$ with $(\mu\times\nu) f = \mu^{x}\nu^{y}f$.
\end{proposition}
\subsection{Density}
Let $\mu, \nu$ be two measures over the same measurable space. We say that $\nu$ is \df{absolutely continuous} with respect to $\mu$, or that $\mu$ \df{dominates} $\nu$, if for each set $A$, $\mu(A) = 0$ implies $\nu(A) = 0$. For finite measures, it can be proved that this condition is equivalent to the condition that for every $\eps > 0$ there is a $\dg > 0$ such that $\mu(A) < \dg$ implies $\nu(A) < \eps$. Every nonnegative integrable function $f$ defines a new measure $\nu$ via the formula $\nu(A) = \mu(f\cdot 1_{A})$. This measure $\nu$ is absolutely continuous with respect to $\mu$. The Radon-Nikodym theorem says that the converse is also true.
\begin{proposition}[Radon-Nikodym theorem] If $\nu$ is dominated by $\mu$ then there is a nonnegative integrable function $f$ such that $\nu(A) = \mu(f \cdot 1_{A})$ for all measurable sets $A$. The function $f$ is defined uniquely to within equivalence with respect to $\mu$.
\end{proposition}
The function $f$ of the Radon-Nikodym Theorem above is called the \df{density} of $\nu$ with respect to $\mu$. We will denote it by
\[
f(x) = \frac{\nu(dx)}{\mu(dx)} = \frac{d\nu}{d\mu}.
\] The following theorem is also standard.
\begin{proposition}\label{p.density-props}\
\begin{enumerate}[(a)]
\item Let $\mu, \nu, \eta$ be measures such that $\eta$ is absolutely continuous with respect to $\mu$ and $\mu$ is absolutely continuous with respect to $\nu$. Then the ``chain rule'' holds:
\begin{equation}\label{e.chain-rule}
\frac{d\eta}{d\nu} = \frac{d\eta}{d\mu} \frac{d\mu}{d\nu}.
\end{equation}
\item If $\frac{\nu(dx)}{\mu(dx)} > 0$ for all $x$ then $\mu$ is also absolutely continuous with respect to $\nu$ and $\frac{\mu(dx)}{\nu(dx)} = \Paren{\frac{\nu(dx)}{\mu(dx)}}^{-1}$.
\end{enumerate}
\end{proposition}
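On a finite set, densities reduce to pointwise ratios of point masses, so the chain rule~\eqref{e.chain-rule} can be verified by direct computation. The following Python sketch does this for illustrative measures (all names and numbers are ours, not from the text):

```python
# Densities of measures on a finite set {0, ..., n-1}, each measure stored
# as a list of point masses. Illustrative sketch, not tied to the text.

def density(nu, mu):
    # d(nu)/d(mu), defined mu-almost everywhere (i.e. wherever mu(x) > 0)
    return [n / m if m > 0 else 0.0 for n, m in zip(nu, mu)]

nu  = [0.2, 0.3, 0.5]                                # base measure, full support
mu  = [x * f for x, f in zip(nu, [1.0, 2.0, 0.4])]   # d(mu)/d(nu) = [1, 2, 0.4]
eta = [x * g for x, g in zip(mu, [0.5, 1.0, 3.0])]   # d(eta)/d(mu) = [0.5, 1, 3]

# chain rule: d(eta)/d(nu) = (d(eta)/d(mu)) * (d(mu)/d(nu)), pointwise
lhs = density(eta, nu)
rhs = [a * b for a, b in zip(density(eta, mu), density(mu, nu))]
```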
Let $\mu, \nu$ be two measures; then both are dominated by some measure $\eta$ (for example by $\eta = \mu + \nu$). Let their densities with respect to $\eta$ be $f$ and $g$. Then we define the \df{total variation distance} of the two measures as
\[
D(\mu, \nu)=\eta(|f - g|).
\] It is independent of the dominating measure $\eta$.
\begin{example} Suppose that the space $X$ can be partitioned into disjoint sets $A,B$ such that $\nu(A)=\mu(B) = 0$. Then $D(\mu, \nu) = \mu(A) + \nu(B) = \mu(X) + \nu(X)$.
\end{example}
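When both measures sit on finitely many points, $D(\mu,\nu)$ is just $\sum_x |\mu(x)-\nu(x)|$, and the independence of the dominating measure $\eta$ can be checked numerically. A Python sketch (the measures are illustrative):

```python
# Total variation distance D(mu, nu) = eta(|f - g|), computed with two
# different dominating measures eta. Illustrative finite-set sketch.

def tv(mu, nu, eta):
    f = [m / e if e > 0 else 0.0 for m, e in zip(mu, eta)]   # d(mu)/d(eta)
    g = [n / e if e > 0 else 0.0 for n, e in zip(nu, eta)]   # d(nu)/d(eta)
    return sum(e * abs(a - b) for e, a, b in zip(eta, f, g))

mu = [0.5, 0.5, 0.0]
nu = [0.0, 0.25, 0.75]
eta1 = [m + n for m, n in zip(mu, nu)]   # eta = mu + nu
eta2 = [2 * e for e in eta1]             # another dominating measure
d1, d2 = tv(mu, nu, eta1), tv(mu, nu, eta2)
```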
\subsection{Random transitions}\label{ss.transitions}
Let $(X, \mathcal{A})$, $(Y, \mathcal{B})$ be two measurable spaces (defined in Subsection~\ref{ss.measures}). We follow the definition given in~\cite{PollardUsers01}. Suppose that a family of probability measures $\Lg = \setof{\lg_{x} : x \in X}$ on $\mathcal{B}$ is given. We call it a \df{probability kernel} (or Markov kernel, or conditional distribution) if the map $x \mapsto \lg_{x} B$ is measurable for each $B \in \mathcal{B}$. When $X,Y$ are finite sets then $\Lg$ is given by a Markov transition matrix. The following theorem shows that $\Lg$ assigns a joint distribution over the space $(X \times Y, \mathcal{A}\times\mathcal{B})$ to each input distribution $\mu$.
\begin{proposition} For each nonnegative $\mathcal{A}\times \mathcal{B}$-measurable function $f$ over $X \times Y$,
\begin{enumerate}[\upshape 1.]
\item the function $y \to f(x,y)$ is $\mathcal{B}$-measurable for each fixed $x$;
\item $x \to \lg_{x}^{y} f(x, y)$ is $\mathcal{A}$-measurable;
\item the integral $f \to \mu^{x} \lg_{x}^{y} f(x, y)$ defines a measure on $\mathcal{A} \times \mathcal{B}$.
\end{enumerate}
\end{proposition}
According to this proposition, given a probability kernel $\Lg$, to each measure $\mu$ over $\mathcal{A}$ corresponds a measure over $\mathcal{A} \times \mathcal{B}$. We will denote its marginal over $\mathcal{B}$ as
\begin{equation}\label{e.Markov-op-meas}
\Lg^{*} \mu.
\end{equation} For every measurable function $g(y)$ over $Y$, we can define the measurable function $f(x) = \lg_{x} g = \lg_{x}^{y} g(y)$: we write
\begin{equation}\label{e.Markov-op-fun}
f = \Lg g.
\end{equation} The operator $\Lg$ is linear, and monotone with $\Lg 1 = 1$. By these definitions, we have
\begin{equation}\label{e.Lg-Lg*}
\mu(\Lg g) = (\Lg^{*}\mu) g.
\end{equation}
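For finite $X$, $Y$ the kernel is a row-stochastic matrix, and identity~\eqref{e.Lg-Lg*} becomes a finite computation. A Python sketch with illustrative numbers:

```python
# A probability kernel over finite sets as a row-stochastic matrix L:
# (L* mu)[y] = sum_x mu[x] L[x][y],  (L g)[x] = sum_y L[x][y] g[y].
# We check the duality mu(L g) = (L* mu) g. Illustrative sketch.

L = [[0.1, 0.6, 0.3],
     [0.5, 0.0, 0.5]]          # lambda_x for x = 0, 1 (rows sum to 1)
mu = [0.4, 0.6]                # input distribution on X
g  = [1.0, -2.0, 0.5]          # bounded function on Y

push = [sum(mu[x] * L[x][y] for x in range(2)) for y in range(3)]  # L* mu
Lg   = [sum(L[x][y] * g[y] for y in range(3)) for x in range(2)]   # L g

lhs = sum(mu[x] * Lg[x] for x in range(2))    # mu(L g)
rhs = sum(push[y] * g[y] for y in range(3))   # (L* mu) g
```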
\begin{example}\label{x.determ-trans} Let $h : X \to Y$ be a measurable function, and let $\lg_{x}$ be the measure $\dg_{h(x)}$ concentrated on the point $h(x)$. This operator, denoted $\Lg_{h}$, is, in fact, a deterministic transition, and we have $\Lg_{h} g = g \circ h$. In this case, we will simplify the notation as follows:
\[
h^{*}\mu = \Lg_{h}^{*}\mu.
\]
\end{example}
\subsection{Probability measures over a metric space}\label{ss.measure-metric}
We follow the exposition of~\cite{BillingsleyConverg68}. Whenever we deal with probability measures on a metric space, we will assume that our metric space is complete and separable (Polish). Let $\mathbf{X} = (X, d)$ be a complete separable metric space. It gives rise to a measurable space, where the measurable sets are the Borel sets of $\mathbf{X}$. It can be shown that, if $A$ is a Borel set and $\mu$ is a finite measure then there are sets $F \sbsq A \sbsq G$ where $F$ is an $F_{\sg}$ set, $G$ is a $G_{\dg}$ set, and $\mu(F) = \mu(G)$. Let $\mathcal{B}$ be a base of open sets closed under intersections. Then it can be shown that $\mu$ is determined by its values on elements of $\mathcal{B}$. The following proposition follows then essentially from Proposition~\ref{p.Caratheo-extension}.
\begin{proposition}\label{p.Caratheo-topol} Let $\mathcal{B}^{*}$ be the set algebra generated by the above base $\mathcal{B}$, and let $\mu$ be any $\sigma$-additive set function on $\mathcal{B}^{*}$ with $\mu(X)=1$. Then $\mu$ can be extended uniquely to a probability measure.
\end{proposition}
We say that a set $A$ is a \df{continuity set} of measure $\mu$ if $\mu(\partial A) = 0$: the boundary of $A$ has measure 0.
\subsubsection{Weak topology} Let
\[
\mathcal{M}(\mathbf{X})
\] be the set of probability measures on the metric space $\mathbf{X}$. Let
\[
\dg_{x}
\] be a probability measure concentrated on point $x$. Let $x_{n}$ be a sequence of points converging to point $x$ but with $x_{n} \ne x$. We would like to say that $\dg_{x_{n}}$ converges to $\dg_{x}$. But the total variation distance $D(\dg_{x_{n}}, \dg_{x})$ is 2 for all $n$. This suggests that the total variation distance is not generally the best way to compare probability measures over a metric space. We say that a sequence of probability measures $\mu_{n}$ over a metric space $(X, d)$ \df{weakly converges} to measure $\mu$ if for all bounded continuous real functions $f$ over $X$ we have $\mu_{n} f \to \mu f$. This \df{topology of weak convergence} $(\mathcal{M}, \tau_{w})$ can be defined using a number of different subbases. The one used in the original definition is the subbase consisting of all sets of the form
\[
A_{f,c} = \setof{\mu : \mu f < c}
\] for bounded continuous functions $f$ and real numbers $c$. We also get a subbase (see for example~\cite{PollardUsers01}) if we restrict ourselves to the set $\operatorname{Lip}(X)$ of Lip\-schitz functions defined in~\eqref{e.Lipschitz}. Another possible subbase giving rise to the same topology consists of all sets of the form
\begin{equation}\label{e.measure-on-open}
B_{G,c} = \setof{\mu : \mu(G) > c}
\end{equation} for open sets $G$ and real numbers $c$. Let us find some countable subbases. Since the space $\mathbf{X}$ is separable, there is a sequence $U_{1}, U_{2}, \dotsc$ of open sets that forms a base. We can restrict the subbase of the space of measures to those sets $B_{G, c}$ where $G$ is the union of a finite number of base elements $U_{i}$ and $c$ is rational. Thus, the space $(\mathcal{M}, \tau_{w})$ itself has the second countability property.
It is more convenient to define a countable subbase using bounded continuous functions $f$, since $\mu \mapsto \mu f$ is continuous on such functions, while $\mu \mapsto \mu U$ is typically not continuous when $U$ is an open set. Let $\mathcal{F}_{0}$ be the set of functions introduced before~\eqref{e.bd-Lip-seq}. Let
\[
\mathcal{F}_{1}
\] be the set of functions $f$ with the property that $f$ is the minimum of a finite number of elements of $\mathcal{F}_{0}$. Note that each element $f$ of $\mathcal{F}_{1}$ is bounded between 0 and 1, and from its definition, we can compute a bound $\bg$ such that $f\in\operatorname{Lip}_{\bg}$.
\begin{proposition}\label{p.Portmanteau} The following conditions are equivalent:
\begin{enumerate}[\upshape 1.]
\item $\mu_{n}$ weakly converges to $\mu$.
\item $\mu_{n} f \to \mu f$ for all $f \in \mathcal{F}_{1}$.
\item For every Borel set $A$, that is a continuity set of $\mu$, we have $\mu_{n}(A) \to \mu(A)$.
\item For every closed set $F$, $\limsup_{n} \mu_{n}(F) \leqslant \mu(F)$.
\item For every open set $G$, $\liminf_{n} \mu_{n}(G) \geqslant \mu(G)$.
\end{enumerate}
\end{proposition}
As a subbase
\begin{equation}\label{e.metric-measure-subbase}
\sg_{\mathcal{M}}
\end{equation} for $\mathcal{M}(\mathbf{X})$, we choose the sets $\setof{\mu : \mu f < r}$ and $\setof{\mu : \mu f > r}$ for all $f \in \mathcal{F}_{1}$ and $r \in \mathbb{Q}$. Let $\mathcal{E}$ be the set of functions introduced in~\eqref{e.bd-Lip-seq}. It is a Riesz space as defined in Subsection~\ref{ss.integral}. A reasoning combining Propositions~\ref{p.Caratheo-extension} and~\ref{p.Riesz-extension} gives the following.
\begin{proposition}\label{p.metric-Riesz-extension} Suppose that a positive linear functional $\mu$ with $\mu 1 = 1$ is defined on $\mathcal{E}$ that is continuous with respect to monotone convergence. Then $\mu$ can be extended uniquely to a probability measure over $\mathbf{X}$ with $\mu f = \int f(x) \mu(dx)$ for all $f \in \mathcal{E}$.
\end{proposition}
\subsubsection{Prokhorov distance}\label{sss.Prokh} The definition of measures in the style of Proposition~\ref{p.metric-Riesz-extension} is not sufficiently constructive. Consider a gradual definition of the measure $\mu$, extending it to more and more elements of $\mathcal{E}$, while keeping the positivity and linearity property. It can happen that the function $\mu$ we end up with in the limit is not continuous with respect to monotone convergence. Let us therefore metrize the space of measures: then an arbitrary measure can be defined as the limit of a Cauchy sequence of simple measures.
One metric that generates the topology of weak convergence is the \df{Prokhorov distance} $p(\mu, \nu)$: the infimum of all those $\eps$ for which, for all Borel sets $A$ we have (using the notation~\eqref{e.Aeps})
\[
\mu(A) \leqslant \nu(A^{\eps}) + \eps.
\] It can be shown that this is a distance and it generates the weak topology. The following result helps visualize this distance:
\begin{proposition}[Coupling Theorem, see~\protect\cite{Strassen65}] \label{p.coupling} Let $\mu,\nu$ be two probability measures over a complete separable metric space $\mathbf{X}$ with $p(\mu, \nu) \leqslant\eps$. Then there is a probability measure $P$ on the space $\mathbf{X} \times \mathbf{X}$ with marginals $\mu$ and $\nu$ such that for a pair of random variables $(\xi,\eta)$ having joint distribution $P$ we have
\[
P\setof{d(\xi,\eta) > \eps} \leqslant \eps.
\]
\end{proposition}
Since this topology has the second countability property, the metric space defined by the distance $p(\cdot,\cdot)$ is separable. This can also be seen directly. Let $S$ be a countable everywhere dense set of points in $X$. Consider the set $\mathcal{M}_{0}(X)$ of those probability measures that are concentrated on finitely many points of $S$ and assign rational values to them. It can be shown that $\mathcal{M}_{0}(X)$ is everywhere dense in the metric space $(\mathcal{M}(X), p)$; so, this space is separable. It can also be shown that $(\mathcal{M}(X), p)$ is complete. Thus, a measure can be given as the limit of a sequence of elements $\mu_{1},\mu_{2},\dots$ of $\mathcal{M}_{0}(X)$, where $p(\mu_{i},\mu_{i+1}) < 2^{-i}$.
The definition of the Prokhorov distance quantifies over all Borel sets. However, in an important simple case, it can be handled efficiently.
\begin{proposition}\label{p.simple-Prokhorov-ball} Assume that measure $\nu$ is concentrated on a finite set of points $S \sbs X$. Then the condition $p(\nu,\mu) < \eps$ is equivalent to the finite set of conditions
\begin{equation}\label{e.finite-Prokhorov}
\mu(A^{\eps}) > \nu(A) - \eps
\end{equation} for all $A \sbs S$.
\end{proposition}
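Proposition~\ref{p.simple-Prokhorov-ball} turns the comparison of a finitely supported $\nu$ with an arbitrary $\mu$ into finitely many checks. A Python sketch on the real line, taking $A^{\eps}$ to be the open $\eps$-neighborhood (the example measures are illustrative):

```python
# Checking p(nu, mu) < eps via the finite set of conditions
# mu(A^eps) > nu(A) - eps for all A contained in the support S of nu,
# on the real line with d(x, y) = |x - y|. Illustrative sketch.
from itertools import combinations

def prokhorov_lt(nu, mu, eps):
    # nu, mu: dicts point -> mass; nu concentrated on finitely many points
    S = [x for x, w in nu.items() if w > 0]
    for k in range(1, len(S) + 1):
        for A in combinations(S, k):
            nu_A = sum(nu[x] for x in A)
            mu_Aeps = sum(w for y, w in mu.items()
                          if min(abs(y - x) for x in A) < eps)
            if not mu_Aeps > nu_A - eps:
                return False
    return True

nu = {0.0: 1.0}                # point mass at 0
mu = {0.0: 0.9, 1.0: 0.1}      # slightly perturbed version of nu
```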
\subsubsection{Relative compactness} A set $\Pi$ of measures in $(\mathcal{M}(X), p)$ is called \df{relatively compact} if every sequence of elements of $\Pi$ contains a convergent subsequence. Relative compactness is an important property for proving convergence of measures. It has a useful characterization. A set $\Pi$ of measures is called \df{tight} if for every $\eps$ there is a compact set $K$ such that $\mu(K) > 1 - \eps$ for all $\mu$ in $\Pi$. Prokhorov's theorem states (under our assumptions of the separability and completeness of $(X, d)$) that a set of measures is relatively compact if and only if it is tight and if and only if its closure is compact in $(\mathcal{M}(X), p)$. In particular, the following fact is known.
\begin{proposition}\label{p.measures-compact} The space $(\mathcal{M}(\mathbf{X}), p)$ of measures is compact if and only if the space $(X, d)$ is compact.
\end{proposition}
So, if $(X, d)$ is not compact then the set of measures is not compact. But still, each measure $\mu$ is ``almost'' concentrated on a compact set. Indeed, the one-element set $\{\mu\}$ is compact and therefore by Prokhorov's theorem tight. Tightness says that for each $\eps$ a mass of size $1-\eps$ of $\mu$ is concentrated on some compact set.
\section{Computable analysis}
If for some finite or infinite sequences $x,y,z,w$, we have $z = wxy$ then we write $w \sqsubseteq z$ ($w$ is a \df{prefix} of $z$) and $x \triangleleft z$. For integers, we will use the tupling functions
\[
\ang{i, j} = \frac{1}{2} (i+j)(i+j+1) + j, \quad \ang{n_{1},\dots,n_{k+1}} = \ang{\ang{n_{1},\dots,n_{k}},n_{k+1}}.
\] Inverses: $\pi_{i}^{k}(n)$.
Unless said otherwise, the alphabet $\Sg$ is always assumed to contain the symbols 0 and 1. After~\cite{WeihrauchComputAnal00}, let us define the \df{wrapping function} $\ig : \Sg^{*} \to \Sg^{*}$ by
\begin{equation}\label{e.ig}
\ig(a_{1}a_{2}\dotsm a_{n}) = 110a_{1}0a_{2}0\dotsm a_{n}011.
\end{equation} Note that
\begin{equation}\label{e.ig-len}
|\ig(x)| = (2 |x| + 5)\V 6.
\end{equation} For strings $x,x_{i} \in \Sg^{*}$, $p, p_{i} \in \Sg^{\og}$, $k \geqslant 1$, appropriate tupling functions $\ang{x_{1},\dots,x_{k}}$, $\ang{x,p}$, $\ang{p,x}$, etc.~can be defined with the help of $\ang{\cdot,\cdot}$ and $\ig(\cdot)$.
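Both the pairing function (in its standard Cantor form) and the wrapping function $\ig$ are straightforward to implement; the sketch below also checks the length formula~\eqref{e.ig-len} for nonempty strings. Python, illustrative:

```python
# Cantor pairing <i, j> = (i+j)(i+j+1)/2 + j with its inverse, and the
# wrapping function iota. Illustrative sketch.

def pair(i, j):
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    # invert the pairing: find the diagonal s = i + j first
    s = 0
    while (s + 1) * (s + 2) // 2 <= n:
        s += 1
    j = n - s * (s + 1) // 2
    return s - j, j

def iota(x):
    # iota(a1 a2 ... an) = 110 a1 0 a2 0 ... an 0 11
    return "110" + "".join(a + "0" for a in x) + "11"
```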
\subsection{Notation and representation}\label{ss.notation-repr}
The concepts of notation and representation, as defined in~\cite{WeihrauchComputAnal00}, allow us to transfer computability properties from some standard spaces to many others. Given a countable set $C$, a \df{notation} of $C$ is a surjective partial mapping $\dg :\sbsq \mathbb{N} \to C$. Given some finite alphabet $\Sg \spsq \{0,1\}$ and an arbitrary set $S$, a \df{representation} of $S$ is a surjective mapping $\chi :\sbsq \Sg^{\og} \to S$. A \df{naming system} is a notation or a representation. Here are some standard naming systems:
\begin{enumerate}[\upshape 1.]
\item $\operatorname{id}$, the identity over $\Sg^{*}$ or $\Sg^{\og}$.
\item $\nu_{\mathbb{N}}$, $\nu_{\mathbb{Z}}$, $\nu_{\mathbb{Q}}$ for the set of natural numbers, integers and rational numbers.
\item $\operatorname{Cf} : \Sg^{\og} \to 2^{\mathbb{N}}$, the \df{characteristic function representation} of sets of natural numbers, is defined by $\operatorname{Cf}(p) = \setof{i : p(i) = 1}$.
\item $\operatorname{En} : \Sg^{\og} \to 2^{\mathbb{N}}$, the \df{enumeration representation} of sets of natural numbers, is defined by $\operatorname{En}(p) = \setof{n \in \mathbb{N} : 110^{n+1}11 \triangleleft p}$.
\item For $\Dg \sbsq \Sg$, $\operatorname{En}_{\Dg} : \Sg^{\og} \to 2^{\Dg^{*}}$, the \df{enumeration representation} of subsets of $\Dg^{*}$, is defined by $\operatorname{En}_{\Dg}(p) = \setof{w \in \Dg^{*} : \ig(w) \triangleleft p}$.
\end{enumerate}
One can define names for all computable functions between spaces that are Cartesian products of terms of the kind $\Sg^{*}$ and $\Sg^{\og}$. Then, the notion of computability can be transferred to other spaces as follows. Let $\dg_{i} : Y_{i} \to X_{i}$, $i=1,0$ be naming systems of the spaces $X_{i}$. Let $f : \sbsq X_{1} \to X_{0}$, $g : \sbsq Y_{1} \to Y_{0}$. We say that function $g$ \df{realizes} function $f$ if
\begin{equation}\label{e.realize}
f(\dg_{1}(y)) = \dg_{0}(g(y))
\end{equation} holds for all $y$ for which the left-hand side is defined. Realization of multi-argument functions is defined similarly. We say that a function $f : X_{1} \times X_{2} \to X_{0}$ is \df{$(\dg_{1},\dg_{2},\dg_{0})$-computable} if there is a computable function $g : \sbsq Y_{1} \times Y_{2} \to Y_{0}$ realizing it. In this case, a name for $f$ is naturally derived from a name of $g$.\footnote{Any function $g$ realizing $f$ via~\eqref{e.realize} automatically has a certain \df{extensivity} property: if $\dg_{1}(y) = \dg_{1}(y')$ then $g(y) = g(y')$.}
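As a toy instance of realization~\eqref{e.realize}, let $\dg_{1}=\dg_{0}$ be the notation of rationals by strings of the form \texttt{p/q}, and let $f$ be squaring on $\mathbb{Q}$; then a string-level $g$ realizing $f$ can be written down directly. Python sketch (the string format is an illustrative choice):

```python
# A name-level function g realizing f(x) = x^2 on the rationals through
# the notation delta: "p/q" -> p/q. Illustrative sketch.
from fractions import Fraction

def delta(name):            # notation of Q by strings "p/q"
    p, q = name.split("/")
    return Fraction(int(p), int(q))

def f(x):                   # abstract function on Q
    return x * x

def g(name):                # realization: maps names to names
    y = f(delta(name))
    return f"{y.numerator}/{y.denominator}"
```

By construction, $f(\dg(y)) = \dg(g(y))$ for every name $y$ in the domain of $\dg$.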
For representations $\xi,\eta$, we write $\xi \leqslant \eta$ if there is a computable function $f :\sbsq \Sg^{\og} \to \Sg^{\og}$ with $\xi(x) = \eta(f(x))$. In words, we say that $\xi$ is \df{reducible} to $\eta$, or that $f$ reduces (translates) $\xi$ to $\eta$. There is a similar definition of reduction for notations. We write $\xi \equiv \eta$ if $\xi \leqslant \eta$ and $\eta \leqslant \xi$.
\subsection{Constructive topological space}
\subsubsection{Definitions} Section~\ref{s.top} gives a review of topological concepts. A \df{constructive topological space} $\mathbf{X} = (X, \sg, \nu)$ is a topological space over a set $X$ with a subbase $\sg$ effectively given as a list $\sg = \{\nu(1),\nu(2),\dots\}$, and having the $T_{0}$ property (thus, every point is determined uniquely by the subset of elements of $\sg$ containing it). By definition, a constructive topological space satisfies the second countability axiom.\footnote{A constructive topological space is an effective topological space as defined in~\cite{WeihrauchComputAnal00}, but, for simplicity we require the notation $\nu$ to be a total function.} We obtain a base
\[
\sg^{\cap}
\] of the space $\mathbf{X}$ by taking all possible finite intersections of elements of $\sg$. It is easy to produce an effective enumeration for $\sg^{\cap}$ from $\nu$. We will denote this enumeration by $\nu^{\cap}$.
The \df{product operation} is defined over constructive topological spaces in the natural way.
\begin{examples}\label{x.constr-topol}\
\begin{enumerate}[\upshape 1.]
\item A discrete topological space, where the underlying set is finite or countably infinite, with a fixed enumeration.
\item\label{i.constr-topol.real} The real line, choosing the base to be the open intervals with rational endpoints with their natural enumeration. Product spaces can be formed to give the Euclidean plane a constructive topology.
\item The real line $\mathbb{R}$, with the subbase $\sg_{\mathbb{R}}^{>}$ defined as the set of all open intervals $\opint{-\infty}{b}$ with rational endpoint $b$. The subbase $\sg_{\mathbb{R}}^{<}$, defined similarly, leads to another topology. These two topologies differ from each other and from the usual one on the real line, and they are not Hausdorff spaces.
\item Let $X$ be a set with a constructive discrete topology, and $X^{\og}$ the set of infinite sequences with elements from $X$, with the product topology: a natural enumerated basis is also easy to define.
\end{enumerate}
\end{examples}
Due to the $T_{0}$ property, every point in our space is determined uniquely by the set of open sets containing it. Thus, there is a representation $\gm_{\mathbf{X}}$ of $\mathbf{X}$ defined as follows. We say that $\gm_{\mathbf{X}}(p) = x$ if $\operatorname{En}_{\Sg}(p) = \setof{w : x \in \nu(w)}$. If $\gm_{\mathbf{X}}(p) = x$ then we say that the infinite sequence $p$ is a \df{complete name} of $x$: it encodes all names of all subbase elements containing $x$. From now on, we will call $\gm_{\mathbf{X}}$ the \df{complete standard representation of the space $\mathbf{X}$}.\footnote{ The book~\cite{WeihrauchComputAnal00} denotes $\gm_{\mathbf{X}}$ as $\dg'_{\mathbf{X}}$ instead. We use $\gm_{\mathbf{X}}$ only, dispensing with the notion of a ``computable'' topological space.}
\subsubsection{Constructive open sets, computable functions} In a constructive topological space $\mathbf{X} = (X, \sg, \nu)$, a set $G \sbsq X$ is called \df{r.e.~open} in set $B$ if there is a r.e.~set $E$ with $G = \bigcup_{w \in E} \nu^{\cap}(w) \cap B$. It is r.e.~open if it is r.e.~open in $X$. In the special kind of spaces in which randomness has been developed until now, constructive open sets have a nice characterization:
\begin{proposition}\label{p.constr-open-nice-charac} Assume that the space $\mathbf{X} = (X, \sg, \nu)$ has the form $Y_{1}\times \dots \times Y_{n}$ where each $Y_{i}$ is either $\Sg^{*}$ or $\Sg^{\og}$. Then a set $G$ is r.e.~open iff it is open and the set $\setof{(w_{1},\dots,w_{n}) : \bigcap_{i}\nu(w_{i}) \sbs G}$ is recursively enumerable.
\end{proposition}
\begin{proof} The proof is not difficult, but it relies on the discrete nature of the space $\Sg^{*}$ and on the fact that the space $\Sg^{\og}$ is compact and its base consists of sets that are open and closed at the same time.
\end{proof}
It is easy to see that if two sets are r.e.~open then so is their union. The above proposition implies that in a space of the form $Y_{1}\times \dots \times Y_{n}$, where each $Y_{i}$ is either $\Sg^{*}$ or $\Sg^{\og}$, the intersection of two r.e.~open sets is also r.e.~open. We will see that this statement holds, more generally, in all computable metric spaces.
Let $\mathbf{X}_{i} = (X_{i}, \sg_{i}, \nu_{i})$ be constructive topological spaces, and let $f : \sbsq X_{1} \to X_{0}$ be a function. As we know, $f$ is continuous iff the inverse image $f^{-1}(G)$ of each open set $G$ is open. Computability is an effective version of continuity: it requires that the inverse image of subbase elements is uniformly constructively open. More precisely, $f :\sbsq X_{1} \to X_{0}$ is \df{computable} if the set
\[
\bigcup_{V \in \sg_{0}^{\cap}} f^{-1}(V) \times \{V\}
\] is a r.e.~open subset of $X_{1} \times \sg_{0}^{\cap}$. Here the base $\sg_{0}^{\cap}$ of $\mathbf{X}_{0}$ is treated as a discrete constructive topological space, with its natural enumeration. This definition depends on the enumerations $\nu_{1},\nu_{0}$. The following theorem (taken from~\cite{WeihrauchComputAnal00}) shows that this computability coincides with the one obtained by transfer via the representations $\gm_{\mathbf{X}_{i}}$.
\begin{proposition}\label{p.hertling-computable} For $i=0,1$, let $\mathbf{X}_{i} = (X_{i}, \sg_{i}, \nu_{i})$ be constructive topological spaces. Then a function $f :\sbsq X_{1} \to X_{0}$ is computable iff it is $(\gm_{\mathbf{X}_{1}},\gm_{\mathbf{X}_{0}})$-computable for the representations $\gm_{\mathbf{X}_{i}}$ defined above.
\end{proposition}
As a name of a computable function, we can use the name of the enumeration algorithm derived from the definition of computability, or the name derivable using this representation theorem.
\begin{remark} As in Proposition~\ref{p.constr-open-nice-charac}, it would be nice to have the following statement, at least for total functions: ``Function $f : X_{1} \to X_{0}$ is computable iff the set
\[
\setof{(v, w) : \nu^{\cap}_{1}(w) \sbs f^{-1}[\nu_{0}(v)] }
\] is recursively enumerable.'' But such a characterization seems to require compactness and possibly more.
\end{remark}
Let us call two spaces $X_{1}$ and $X_{0}$ \df{effectively homeomorphic} if there are computable maps between them that are inverses of each other. In the special case when $X_{0}=X_{1}$, we say that the enumerations of subbases $\nu_{0},\nu_{1}$ are \df{equivalent} if the identity mapping is an effective homeomorphism. This means that there are recursively enumerable sets $F,G$ such that
\[
\nu_{1}(v) = \bigcup_{(v, w) \in F} \nu_{0}^{\cap}(w) \txt{ for all $v$}, \quad
\nu_{0}(w) = \bigcup_{(w, v) \in G} \nu_{1}^{\cap}(v) \txt{ for all $w$}.
\] Lower semicomputability is a constructive version of lower semicontinuity. Let $\mathbf{X} = (X, \sg, \nu)$ be a constructive topological space. A function $f :\sbsq X \to \ol\mathbb{R}_{+}$ is called \df{lower semicomputable} if the set $\setof{(x,r): f(x) > r}$ is r.e.~open. Let $\mathbf{Y} = (\ol\mathbb{R}_{+}, \sg_{\mathbb{R}}^{<}, \nu_{\mathbb{R}}^{<})$ be the effective topological space introduced in Example~\ref{x.constr-topol}.\ref{i.constr-topol.real}, in which $\nu_{\mathbb{R}}^{<}$ is an enumeration of all open intervals of the form $\rint{r}{\infty}$ with rational $r$. It can be seen that $f$ is lower semicomputable iff it is $(\nu,\nu_{\mathbb{R}}^{<})$-computable.
\subsubsection{Computable elements and sequences}\label{sss.computable-elements} Let $\mathbf{U} = (\{0\}, \sg_{0}, \nu_{0})$ be the one-element space turned into a trivial constructive topological space, and let $\mathbf{X} = (X, \sg, \nu)$ be another constructive topological space. We say that an element $x \in X$ is \df{computable} if the function $0 \mapsto x$ is computable. It is easy to see that this is equivalent to the requirement that the set $\setof{u : x \in \nu(u)}$ is recursively enumerable. Let $\mathbf{X}_{j}= (X_{j}, \sg_{j}, \nu_{j})$, for $j=0,1$, be constructive topological spaces. A sequence $f_{i}$, $i=1,2,\dots$ of functions with $f_{i} : X_{1} \to X_{0}$ is a \df{computable sequence of computable functions} if $(i, x) \mapsto f_{i}(x)$ is a computable function. Using the s-m-n theorem of recursion theory, it is easy to see that this statement is equivalent to the statement that there is a recursive function computing from each $i$ a name for the computable function $f_{i}$. The proof of the following statement is not difficult.
\begin{proposition}\label{p.one-arg-cpt} Let $\mathbf{X}_{i} = (X_{i}, \sg_{i}, \nu_{i})$ for $i=1,2,0$ be constructive topological spaces, and let $f: X_{1} \times X_{2} \to X_{0}$, and assume that $x_{1} \in X_{1}$ is a computable element.
\begin{enumerate}[\upshape 1.]
\item If $f$ is computable then $x_{2} \mapsto f(x_{1}, x_{2})$ is also computable.
\item If $\mathbf{X}_{0} = \ol\mathbb{R}$, and $f$ is lower semicomputable then $x_{2} \mapsto f(x_{1}, x_{2})$ is also lower semicomputable.
\end{enumerate}
\end{proposition}
\subsection{Computable metric space}
Following~\cite{BrattkaPresserMetric03}, we define a computable metric space as a tuple $\mathbf{X} = (X, d, D, \ag)$ where $(X,d)$ is a metric space, with a countable dense subset $D$ and an enumeration $\ag$ of $D$. It is assumed that the real function $d(\ag(v),\ag(w))$ is computable. As $x$ runs through elements of $D$ and $r$ through positive rational numbers, we obtain the enumeration of a countable basis $\setof{B(x, r) : x \in D, r\in \mathbb{Q}}$ (of balls of radius $r$ and center $x$) of $\mathbf{X}$, giving rise to a constructive topological space $\tilde\mathbf{X}$. Let us call a sequence $x_{1}, x_{2},\dots$ a \df{Cauchy} sequence if for all $i<j$ we have $d(x_{i},x_{j}) \leqslant 2^{-i}$. To connect to the type-2 theory of computability developed above, the \df{Cauchy-representation} $\dg_{\mathbf{X}}$ of the space can be defined in a natural way. It can be shown that as a representation of $\tilde\mathbf{X}$, it is equivalent to $\gm_{\tilde\mathbf{X}}$: $\dg_{\mathbf{X}} \equiv \gm_{\tilde\mathbf{X}}$.
\begin{example}\label{x.cptable-metric-{0}{1}} Example~\protect\ref{x.Cclint{0}{1}} is a computable metric space, with either of the two (equivalent) choices for an enumerated dense set. \end{example}
Similarly to the definition of a computable sequence of computable functions in~\ref{sss.computable-elements}, we can define the notion of a computable sequence of bounded computable functions, or the computable sequence $f_{i}$ of computable Lip\-schitz functions: the bound and the Lip\-schitz constant of $f_{i}$ are required to be computable from $i$. The following statement shows, in an effective form, that a function is lower semicomputable if and only if it is the supremum of a computable sequence of computable functions.
\begin{proposition}\label{p.lower-semi-as-limit} Let $\mathbf{X}$ be a computable metric space. There is a computable mapping that to each name of a nonnegative lower semicomputable function $f$ assigns a name of a computable sequence of computable bounded Lip\-schitz functions $f_{i}$ whose supremum is $f$.
\end{proposition} \begin{proof}[Proof sketch] Show that $f$ is the supremum of a computable sequence of computable functions $c_{i} 1_{B(u_{i}, r_{i})}$ where $u_{i}\in D$ and $c_{i}, r_{i} > 0$ are rational. Clearly, each indicator function $1_{B(u_{i},r_{i})}$ is the supremum of a computable sequence of computable functions $g_{i,j}$. We have $f = \sup_{n} f_{n}$ where $f_{n} = \max_{i \leqslant n} c_{i} g_{i,n}$. It is easy to see that the bounds on the functions $f_{n}$ are computable from $n$ and that they all are in $\operatorname{Lip}_{\bg_{n}}$ for a $\bg_{n}$ that is computable from $n$.
\end{proof}
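The classical, non-effective core of this proposition is that a nonnegative lower semicontinuous $f$ is the supremum of the Lip\-schitz minorants $f_{n}(x) = \inf_{y}\,(f(y) + n\, d(x,y))$. The following Python sketch illustrates this on a discretized interval for an indicator function (grid, test point and function are all illustrative):

```python
# Lipschitz minorants f_n(x) = inf_y (f(y) + n |x - y|) of a lower
# semicontinuous f, evaluated on a grid. Illustrative sketch with
# f = indicator of the interval (1/2, 1].

grid = [k / 1000 for k in range(1001)]
f = {x: (1.0 if x > 0.5 else 0.0) for x in grid}

def f_n(n, x):
    # n-Lipschitz minorant of f at x, with the inf restricted to the grid
    return min(f[y] + n * abs(x - y) for y in grid)

# at a point of continuity the minorants increase towards f(x) = 1
vals = [f_n(n, 0.6) for n in (1, 10, 100)]
```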
The following is also worth noting.
\begin{proposition} In a computable metric space, the intersection of two r.e.~open sets is r.e.~open.
\end{proposition}
\begin{proof} Let $\bg = \setof{B(x, r) : x \in D, r\in \mathbb{Q}}$ be a basis of our space. For a pair $(x,r)$ with $x \in D$, $r \in \mathbb{Q}$, let
\[
\Gg(x,r) = \setof{(y,s): y\in D,\;s\in \mathbb{Q},\; d(x,y)+s < r}.
\] If $U$ is a r.e.~open set, then there is a r.e.~set $S_{U} \sbs D \times \mathbb{Q}$ with $U = \bigcup_{(x,r) \in S_{U}} B(x,r)$. Let $S'_{U} = \bigcup\setof{\Gg(x,r) : (x,r) \in S_{U}}$, then we have $U = \bigcup_{(x,r) \in S'_{U}} B(x,r)$. Now, it is easy to see
\[
U\cap V = \bigcup_{(x,r) \in S'_{U} \cap S'_{V}} B(x,r).
\]
\end{proof}
\end{document}
\begin{document}
\vspace*{1.5cm} \begin{center} {\Large \bf A rough path over multidimensional fractional Brownian motion with arbitrary Hurst index by Fourier normal ordering } \end{center}
\begin{center} {\bf J\'er\'emie Unterberger} \end{center}
\begin{quote}
\renewcommand{\baselinestretch}{1.0} \footnotesize {Fourier normal ordering \cite{Unt09bis} is a new algorithm to construct explicit rough paths over arbitrary H\"older-continuous multidimensional paths. We apply in this article the Fourier normal ordering algorithm to the construction of
an explicit rough path over multi-dimensional fractional Brownian motion $B$ with arbitrary Hurst index $\alpha$ (in particular, for $\alpha\le 1/4$, which was until now an open problem)
by regularizing the iterated integrals of the analytic approximation of $B$
defined in \cite{Unt08}. The regularization procedure is applied to `Fourier normal ordered' iterated integrals obtained by permuting the order of integration so that innermost integrals have highest Fourier modes. The algebraic properties of this rough path are best understood
using two Hopf algebras: the Hopf algebra of decorated rooted trees \cite{ConKre98} for the multiplicative or Chen property, and the shuffle algebra for the geometric or shuffle property. The rough path lives in Gaussian chaos of integer orders and is shown to have finite moments.
As is well known, the construction of a
rough path is the key to defining a stochastic calculus and to solving stochastic differential equations driven by $B$.
The article \cite{Unt09ter} gives a quick overview of the method.
} \end{quote}
\noindent {\bf Keywords:} fractional Brownian motion, stochastic integrals, rough paths, Hopf algebra of decorated rooted trees
\noindent {\bf Mathematics Subject Classification (2000):} 05C05, 16W30, 60F05, 60G15, 60G18, 60H05
\tableofcontents
\section{Introduction}
The (two-sided) fractional Brownian motion $t\to B_t$, $t\in\mathbb{R}$ (fBm for short) with Hurst exponent $\alpha$, $\alpha\in(0,1)$, defined as the centered Gaussian process with covariance
\begin{equation} {\mathbb{E}}[B_s B_t]={1\over 2} (|s|^{2\alpha}+|t|^{2\alpha}-|t-s|^{2\alpha}), \end{equation} is a natural generalization in the class of Gaussian processes of the usual Brownian motion (which is the case $\alpha={1\over 2}$), in the sense that it exhibits two fundamental properties shared with Brownian motion, namely, it has stationary increments, viz. ${\mathbb{E}}[(B_t-B_s)(B_u-B_v)]={\mathbb{E}}[(B_{t+a}-B_{s+a})(B_{u+a}-B_{v+a})]$ for every $a,s,t,u,v\in\mathbb{R}$, and it is self-similar, viz. \begin{equation} \forall \lambda>0, \quad (B_{\lambda t}, t\in\mathbb{R}) \overset{(law)}{=} (\lambda^{\alpha} B_t, t\in\mathbb{R}). \end{equation} One may also define a $d$-dimensional vector Gaussian process (called: {\it $d$-dimensional fractional Brownian motion}) by setting $B_t=(B_t(1),\ldots,B_t(d))$ where $(B_t(i),t\in\mathbb{R})_{i=1,\ldots,d}$ are $d$ independent (scalar) fractional Brownian motions.
Its theoretical interest lies in particular in the fact that it is (up to normalization) the only Gaussian process satisfying these two properties.
A standard application of Kolmogorov's theorem shows that fBm has a version with $\alpha^-$-H\"older continuous (i.e. $\kappa$-H\"older continuous for every $\kappa<\alpha$) paths. In particular, fBm with small Hurst parameter $\alpha$ is a natural, simple model for continuous but very irregular processes.
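For concreteness, the covariance formula and the self-similarity property can be checked, and fBm sampled exactly on a finite grid, with a short numerical sketch (Python; the helper names \texttt{fbm\_cov} and \texttt{sample\_fbm} are ours, not taken from the references):

```python
import numpy as np

def fbm_cov(s, t, alpha):
    """Covariance E[B_s B_t] of two-sided fBm with Hurst exponent alpha."""
    return 0.5 * (abs(s) ** (2 * alpha) + abs(t) ** (2 * alpha)
                  - abs(t - s) ** (2 * alpha))

def sample_fbm(times, alpha, rng):
    """Exact sampling of fBm on a finite grid via Cholesky factorization."""
    C = np.array([[fbm_cov(s, t, alpha) for t in times] for s in times])
    # a small jitter keeps the factorization stable for close grid points
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(times)))
    return L @ rng.standard_normal(len(times))
```

Cholesky sampling is exact but costs $O(n^3)$; for long grids circulant-embedding methods are the usual choice.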
There has been a widespread interest during the past ten years in constructing a stochastic integration theory with respect to fBm and solving stochastic differential equations driven by fBm, see for instance \cite{LLQ02,GraNouRusVal04,CheNua05,RusVal93,RusVal00}. The multi-dimensional case is very different from the one-dimensional case. When one tries to integrate for instance a stochastic differential equation driven by a two-dimensional fBm $B=(B(1),B(2))$ by using any kind of Picard iteration scheme, one encounters very soon the problem of defining the L\'evy area of $B$ which is the antisymmetric part of ${\cal A}_{ts}:=\int_s^t dB_{t_1}(1) \int_s^{t_1} dB_{t_2}(2)$. This is the simplest occurrence of iterated integrals $\vec{B}^k_{ts}(i_1,\ldots,i_k):=\int_s^t dB_{t_1}(i_1)\ldots \int_s^{t_{k-1}} dB_{t_k}(i_k)$, $i_1,\ldots,i_k\le d$ for $d$-dimensional fBm $B=(B(1),\ldots,B(d))$ which lie at the heart of the rough path theory due to T. Lyons, see \cite{Lyo98,LyoQia02}. An alternative construction has been given by M. Gubinelli in \cite{Gu} under the name of 'algebraic rough path theory', which we now propose to describe briefly.
Assume $\Gamma_t=(\Gamma_t(1),\ldots,\Gamma_t(d))$ is some non-smooth $d$-dimensional path which is $\alpha$-H\"older continuous. Integrals such as $\int f_1(\Gamma_t)d\Gamma_t(1)+\ldots+f_d(\Gamma_t)d\Gamma_t(d)$ do not make sense a priori because $\Gamma$ is not differentiable (Young's integral \cite{Lej} works for $\alpha>{1\over 2}$ but not below). In order to define the integration of a differential form along $\Gamma$, it is enough to define a
{\it geometric rough path} $(\vec{\Gamma}^{1},\ldots,\vec{\Gamma}^{\lfloor 1/\alpha \rfloor})$ lying above $\Gamma$,
where $\lfloor 1/\alpha \rfloor$ denotes the integer part of $1/\alpha$,
where $\vec{\Gamma}^{1}_{ts}=(\delta\Gamma)_{ts}:=\Gamma_t-\Gamma_s$ is the {\it increment} of $\Gamma$ between $s$ and $t$, and each $\vec{\Gamma}^k=(\vec{\Gamma}^k(i_1,\ldots,i_k))_{1\le i_1,\ldots,i_k\le d}$, $k\ge 2$ is a {\it substitute} for the iterated integrals $\int_s^t d\Gamma_{t_1}(i_1)\int_s^{t_1} d\Gamma_{t_2}(i_2) \ldots \int_{s}^{t_{k-1}} d\Gamma_{t_k}(i_k)$ with the following three properties:
\begin{itemize} \item[(i)] ({\it H\"older continuity})
each component of $\vec{\Gamma}^k$ is $k\alpha^-$-H\"older continuous, that is to say, $k\kappa$-H\"older for every $\kappa<\alpha$;
\item[(ii)] ({\it multiplicativity}) letting $\delta\vec{\Gamma}^k_{tus}:= \vec{\Gamma}_{ts}^k-\vec{\Gamma}^k_{tu}-\vec{\Gamma}^k_{us}$, one requires \begin{equation}
\delta\vec{\Gamma}^k_{tus}(i_1,\ldots,i_k) = \sum_{k_1+k_2=k} \vec{\Gamma}_{tu}^{k_1}(i_1,\ldots,i_{k_1}) \vec{\Gamma}_{us}^{k_2}(i_{k_1+1},\ldots,i_k); \label{eq:0:x} \end{equation}
\item[(iii)] ({\it geometricity}) \begin{equation} \vec{\Gamma}^{n_1}_{ts}(i_1,\ldots,i_{n_1}) \vec{\Gamma}^{n_2}_{ts}(j_1,\ldots,j_{n_2})
= \sum_{\vec{k}\in {\mathrm{Sh}}(\vec{i},\vec{j})} \vec{\Gamma}^{n_1+n_2}_{ts}(k_1,\ldots,k_{n_1+n_2}) \label{eq:0:geo} \end{equation} where ${\mathrm{Sh}}(\vec{i},\vec{j})$ is the subset of permutations of $i_1,\ldots,i_{n_1},j_1,\ldots,j_{n_2}$ which do not change the orderings of $(i_1,\ldots,i_{n_1})$ and $(j_1,\ldots,j_{n_2})$. \end{itemize}
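The shuffle set ${\mathrm{Sh}}(\vec{i},\vec{j})$ appearing in the geometricity property can be enumerated directly from its definition; the following sketch (our own helper, assuming nothing beyond the definition above) lists all interleavings preserving the internal order of each word:

```python
from itertools import combinations

def shuffles(a, b):
    """All interleavings of tuples a and b that preserve the internal
    order of each one (the shuffle set Sh(a, b))."""
    n = len(a) + len(b)
    result = []
    for slots in combinations(range(n), len(a)):
        word, ai, bi = [None] * n, iter(a), iter(b)
        slot_set = set(slots)
        for k in range(n):
            word[k] = next(ai) if k in slot_set else next(bi)
        result.append(tuple(word))
    return result
```

For words of lengths $n_1$, $n_2$ there are $\binom{n_1+n_2}{n_1}$ shuffles, listed here with multiplicity when letters repeat.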
The multiplicativity property implies in particular the following identity for the (non anti-symmetrized) L\'evy area: \begin{equation} {\cal A}_{ts}={\cal A}_{tu}+{\cal A}_{us}+ (B_t(1)-B_u(1))(B_u(2)-B_s(2)) \label{eq:0:mult} \end{equation} while the geometric property implies \begin{eqnarray} && \int_s^t dB_{t_1}(1)\int_s^{t_1} dB_{t_2}(2)+\int_s^t dB_{t_2}(2)\int_s^{t_2} dB_{t_1}(1) \nonumber\\ &&= \left(\int_s^t dB_{t_1}(1)\right) \left(\int_s^t dB_{t_2}(2)\right)=(B_t(1)-B_s(1))(B_t(2)-B_s(2)). \nonumber\\ \end{eqnarray}
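The identity (\ref{eq:0:mult}) is elementary for smooth paths and can be verified numerically; a minimal quadrature sketch (function names ours; \texttt{area} computes the non-antisymmetrized area $\int_s^t \gamma_1'(u)\,(\gamma_2(u)-\gamma_2(s))\,du$ for smooth $\gamma_1,\gamma_2$):

```python
import numpy as np

def trap(y, x):
    """Trapezoid rule, kept explicit for NumPy-version independence."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def area(dg1, g2, s, t, n=20001):
    """A_ts = int_s^t g1'(u) (g2(u) - g2(s)) du for smooth paths g1, g2."""
    u = np.linspace(s, t, n)
    return trap(dg1(u) * (g2(u) - g2(s)), u)

# smooth test path: g1 = sin, g2(u) = u^2
dg1, g1 = np.cos, np.sin
g2 = lambda u: u ** 2
s, u0, t = 0.0, 0.4, 1.0
lhs = area(dg1, g2, s, t)
rhs = area(dg1, g2, u0, t) + area(dg1, g2, s, u0) \
      + (g1(t) - g1(u0)) * (g2(u0) - g2(s))
```

Up to quadrature error, \texttt{lhs} and \texttt{rhs} agree, which is exactly (\ref{eq:0:mult}) for this smooth path.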
Then there is a standard procedure which allows one to define, out of these data, iterated integrals of any order and to solve differential equations driven by $\Gamma$.
The multiplicativity property (\ref{eq:0:x}) and the geometric property (\ref{eq:0:geo}) are satisfied by smooth paths, as can be checked by direct computation. So the most natural way to construct such a multiplicative functional is to start from some smooth approximation $\Gamma^{\eta}$, $\eta\overset{>}{\to} 0$ of $\Gamma$ such that each iterated
integral $\vec{\Gamma}^{k,\eta}_{ts}(i_1,\ldots,i_k)$, $k\le \lfloor 1/\alpha \rfloor$ converges in the $k\kappa$-H\"older norm for every $\kappa<\alpha$.
This general scheme has been applied to fBm in a paper by L. Coutin and Z. Qian \cite{CQ02} and later in a paper by the author \cite{Unt08}, using different schemes of approximation of $B$ by $B^{\eta}$ with $\eta\to 0$.
In both cases, the variance of the L\'evy area has been proved to diverge in the limit $\eta\to 0$ when $\alpha\le 1/4$.
The approach developed in \cite{Unt08} makes use of a complex-analytic process $\Gamma$ defined on the upper half-plane $\Pi^+=\{z=x+{\rm i} y\ | \ y>0\}$, called {\it $\Gamma$-process} or better {\it analytic fractional Brownian motion} (afBm for short) \cite{TinUnt08}. Fractional Brownian motion $B_t$ appears as the {\it real part} of the {\it boundary value} of $\Gamma_z$ when ${\rm Im\ } z\overset{>}{\to} 0$. A natural approximation of $B_t$ is then obtained by considering \begin{equation} B_t^{\eta}:=\Gamma_{t+{\rm i}\eta}+\overline{\Gamma_{t+{\rm i}\eta}}=2{\rm Re\ } \Gamma_{t+{\rm i}\eta} \end{equation}
for $\eta\overset{>}{\to} 0.$ We show in subsection 3.1 that $B^{\eta}$ may be written as a Fourier integral,
\begin{equation} B^{\eta}_t= c_{\alpha} \int_{\mathbb{R}} e^{-\eta|\xi|} |\xi|^{{1\over 2}-\alpha}
\frac{e^{{\rm i} t\xi}-1}{{\rm i} \xi} \ W(d\xi) \label{eq:0:B-eta-Fourier} \end{equation} for some constant $c_{\alpha}$, where $(W(\xi),\xi\ge 0)$ is a standard complex Brownian motion extended to $\mathbb{R}$ by setting $W(-\xi)=-\bar{W}(\xi)$, $\xi\ge 0$. When $\eta\to 0$, one retrieves the well-known harmonizable representation of $B$ \cite{SamoTaq}.
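Two consequences of the representation (\ref{eq:0:B-eta-Fourier}) can be checked deterministically at the level of the kernel $f_t(\xi)=e^{-\eta|\xi|}|\xi|^{{1\over 2}-\alpha}(e^{{\rm i}t\xi}-1)/{\rm i}\xi$: the Hermitian symmetry $f_t(-\xi)=\overline{f_t(\xi)}$ (which makes $B^{\eta}$ real-valued), and the fact that $\int|f_t-f_s|^2\,d\xi$, proportional to ${\mathbb{E}}|B^{\eta}_t-B^{\eta}_s|^2$, depends only on $t-s$ (stationary increments). A sketch (names ours; a truncated quadrature stands in for the full integral):

```python
import numpy as np

def kernel(t, xi, alpha, eta):
    """Fourier kernel of B^eta: e^{-eta|xi|} |xi|^{1/2-alpha} (e^{i t xi}-1)/(i xi)."""
    return (np.exp(-eta * np.abs(xi)) * np.abs(xi) ** (0.5 - alpha)
            * (np.exp(1j * t * xi) - 1.0) / (1j * xi))

def increment_energy(s, t, alpha, eta, L=200.0, n=100001):
    """Quadrature of int |f_t - f_s|^2 dxi, proportional to E|B^eta_t - B^eta_s|^2."""
    xi = np.linspace(-L, L, n)
    xi = xi[xi != 0.0]   # integrand is bounded near 0; skip the removable point
    f = kernel(t, xi, alpha, eta) - kernel(s, xi, alpha, eta)
    y = np.abs(f) ** 2
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xi)))
```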
The so-called {\it analytic iterated integrals} $$\int_s^t f_1(z_1) d\Gamma_{z_1}(1)\int_s^{z_1} f_2(z_2)d\Gamma_{z_2}(2)\ldots \int_s^{z_{d-1}} f_d(z_d) d\Gamma_{z_d}(d)$$ (where $f_1,\ldots,f_d$ are analytic functions), defined a priori for $s,t\in\Pi^+$ by integrating over complex paths wholly contained in $\Pi^+$, converge to a finite limit when ${\rm Im\ } s,{\rm Im\ } t\to 0$ \cite{Unt08}, which is the starting point for the construction of a rough path associated to $\Gamma$ \cite{TinUnt08}. The main tool for proving this kind of results is analytic continuation.
Computing iterated integrals associated to $B_t=2\lim_{\eta\to 0} {\rm Re\ }\Gamma_{t+{\rm i}\eta}$
instead of $\Gamma$ yields analytic iterated
integrals, together with mixed integrals such as for instance $\int_s^t d\Gamma_{z_1}(1)\int_s^{z_1} \overline{d\Gamma_{z_2}(2)}$. For these
the analytic continuation method may no longer be applied because Cauchy's formula fails to hold, and the above quantities may be shown to diverge when ${\rm Im\ } s, {\rm Im\ } t\to 0$, see \cite{Unt08,Unt08b}.
Let us explain first how to define a L\'evy area for $B$. Proofs (as well as a sketch of the Fourier normal ordering method for general iterated integrals) may be found in \cite{Unt09ter}.
As mentioned before, the {\it uncorrected area} ${\cal A}^{\eta}_{ts}:=\int_{s}^{t} dB^{\eta}_{u_1}(1)\int_{s}^{u_1} dB^{\eta}_{u_2}(2)$ diverges when $\eta\to 0^+$. The idea is now to find some {\it increment counterterm} $(\delta Z^{\eta})_{ts}=Z^{\eta}_{t}-Z^{\eta}_{s}$ such that the {\it regularized area} ${\cal R}{\cal A}^{\eta}_{ts}:={\cal A}^{\eta}_{ts}-(\delta Z^{\eta})_{ts}$ converges when $\eta\to 0^+$. Note that the multiplicativity property (\ref{eq:0:mult}) holds for ${\cal R}{\cal A}^{\eta}$ as well as for ${\cal A}^{\eta}$ since $(\delta Z^{\eta})_{ts}= (\delta Z^{\eta})_{t u}+(\delta Z^{\eta})_{us}$. This counterterm $Z^{\eta}$ may be found by using a suitable decomposition of ${\cal A}^{\eta}_{ts}$ into the sum of:
-- an {\it increment term}, $(\delta G^{\eta})_{ts}$;
-- a {\it boundary term} denoted by ${\cal A}^{\eta}_{ts}(\partial)$.
The simplest idea one could think of would be to set \begin{equation} (\delta G^{\eta})_{ts}=\int_{s}^{t} dB^{\eta}_{u_1}(1) B^{\eta}_{u_1}(2), \label{eq:0:G1}\end{equation}
and \begin{equation} {\cal A}^{\eta}_{ts}(\partial)=-\int_{s}^{t} dB^{\eta}_{u_1}(1)\ .\ B^{\eta}_{s}(2)=-B^{\eta}_{s}(2)(B^{\eta}_{t}(1)-B^{\eta}_{s}(1)). \label{eq:0:del1} \end{equation} Alternatively, rewriting ${\cal A}^{\eta}_{ts}$ as $\int_{s}^{t} dB^{\eta}_{u_2}(2)
\int_{u_2}^{t} dB^{\eta}_{u_1}(1)$, one may equivalently set
\begin{equation} (\delta G^{\eta})_{ts}=-\int_{s}^{t} dB^{\eta}_{u_2}(2) B^{\eta}_{u_2}(1) \label{eq:0:G2} \end{equation} and \begin{equation} {\cal A}^{\eta}_{ts}(\partial)=\int_{s}^{t} dB^{\eta}_{u_2}(2)\ .\ B^{\eta}_{t}(1)=B^{\eta}_{t}(1)(B^{\eta}_{t}(2)-B^{\eta}_{s}(2)). \label{eq:0:del2} \end{equation}
Now $\delta G^{\eta}$ diverges when $\eta\to 0^+$,
but since it is an increment, it may be discarded (i.e.\ used as a counterterm). The problem is that ${\cal A}^{\eta}_{ts}(\partial)$ converges when $\eta\to 0^+$ in the $\kappa$-H\"older norm for every $\kappa<\alpha$, but not in the $2\kappa$-H\"older norm (which is of course well known and may be seen as the starting point of rough path theory).
It turns out that a slight adaptation of this poor idea gives the solution. Decompose ${\cal A}^{\eta}_{ts}$ into a double integral in the Fourier coordinates $\xi_1,\xi_2$ using (\ref{eq:0:B-eta-Fourier}). Use the first increment/boundary decomposition (\ref{eq:0:G1},\ref{eq:0:del1})
for all indices $|\xi_1|\le |\xi_2|$, and the second one (\ref{eq:0:G2},\ref{eq:0:del2}) if
$|\xi_1|>|\xi_2|$. Then ${\cal A}^{\eta}_{ts}(\partial)$, defined as the sum of two contributions, one coming from (\ref{eq:0:del1}) and the other from (\ref{eq:0:del2}), {\em does converge} in the
$2\kappa$-H\"older norm when $\eta\to 0^+$, for every $\kappa<\alpha$.
As for the increment term $\delta G^{\eta}$,
defined similarly as the sum of two contributions coming from (\ref{eq:0:G1}) and (\ref{eq:0:G2}), it diverges as soon as $\alpha\le 1/4$,
but may be discarded at will. Actually we use in this article a {\it minimal regularization scheme}: only the close-to-diagonal (i.e. $\xi_1/\xi_2\approx -1$)
terms in the double integral defining $\delta G^{\eta}$ make it diverge. Summing over an appropriate subset, e.g. $-\xi_1\not\in[\xi_2/2,2\xi_2]$, yields an increment which converges (for {\it every} $\alpha\in(0,{1\over 2})$)
when $\eta\to 0$ in the $2\kappa$-H\"older norm for every $\kappa<\alpha$.
Let $\alpha<1/4$. As noted in \cite{Unt08b}, the uncorrected L\'evy area ${\cal A}^{\eta}$ of the regularized process $B^{\eta}$ converges in law to a Brownian motion when $\eta\to 0^+$ after a rescaling by the factor $\eta^{{1\over 2}(1-4\alpha)}$. In the latter article, the following question was raised: is it possible to define a counterterm $X^{\eta}$ living on the same probability space as fBm, such that (i) the rescaled process $\eta^{{1\over 2}(1-4\alpha)}X^{\eta}$ converges in law to Brownian motion;
(ii) $(B^{\eta},{\cal A}^{\eta}- X^{\eta})$ is a multiplicative or almost multiplicative functional in the sense of \cite{Lej}, Definition 7.1; (iii) ${\cal A}^{\eta}-X^{\eta}$ converges in the $2\kappa$-H\"older norm for every $\kappa<\alpha$ when $\eta\to 0$? The
counterterm $X^{\eta}:={\cal A}^{\eta}-{\cal R} {\cal A}^{\eta}$ gives a solution to this problem.
The above ideas have a suitable generalization to iterated integrals\\ $\int dB(i_1)\ldots \int dB(i_n)$ of order $n\ge 3$. There is one more difficulty though: decomposing $(B^{\eta})'_{u_j}(i_j)$ into
$c_{\alpha}\int dW_{\xi_j}(i_j) e^{{\rm i} u_j\xi_j} e^{-\eta|\xi_j|} |\xi_j|^{{1\over 2}-\alpha}$,
an extension of the first increment/boundary decomposition (\ref{eq:0:G1}, \ref{eq:0:del1}), together with a suitable regularization scheme, yields the correct H\"older estimate {\em provided} $|\xi_1|\le \ldots\le |\xi_n|$. What should one do then if $|\xi_{\sigma(1)}|\le \ldots\le |\xi_{\sigma(n)}|$ for some permutation $\sigma$
instead? The idea is to permute the order of integration by using Fubini's theorem, and write $\int_{s}^{t}
dB^{\eta}_{u_1}(i_1)\ldots \int_{s}^{u_{n-1}} dB^{\eta}_{u_n}(i_n)$ as some {\em iterated tree integral} $\int dB^{\eta}_{u_1}(i_{\sigma(1)})\ldots \int dB^{\eta}_{u_n} (i_{\sigma(n)})$. The integration domain, in the general case, becomes a little involved, and necessitates the introduction of combinatorial tools on trees, such as admissible cuts for instance. The underlying structures are those of the Hopf algebra of decorated rooted trees \cite{ConKre98,ConKre00}
(as already noted in \cite{Kre99} or \cite{Gu2}), and of the Hopf shuffle algebra \cite{Mur1,Mur2}. The proof of the multiplicative and of the geometric properties for the regularized rough path, as well as the Hopf algebraic reinterpretation, are to be found in \cite{Unt09bis}. The general idea (see subsection 2.5 for more details) is that the fundamental objects are {\em skeleton integrals} (a particular type of tree integrals) defined in subsection 2.1, and that {\em any} regularization of the skeleton integrals (possibly even trivial)
yielding finite quantities with the correct H\"older regularity produces a regularized rough path, which leaves a large degree of arbitrariness in the definition.
The idea of cancelling singularities by iteratively building counterterms originated in the Bogolioubov-Hepp-Parasiuk-Zimmermann (BPHZ) procedure
for renormalizing Feynman diagrams in quantum field theory \cite{Hepp}. Mathematically formalized in terms of Hopf algebras by A. Connes and D. Kreimer, it has been applied during the last decade in a variety of contexts ranging from numerical methods to quantum chromodynamics or multi-zeta functions, see for instance \cite{Kre99,Mur2,Wal00}. We plan to give such a (less arbitrary) construction in the near future (see the discussion at the end of subsection 2.5).
The main result of the paper may be stated as follows.
\begin{Theorem} \label{th:0}
{\it Let $B=(B(1),\ldots,B(d))$ be a $d$-dimensional fBm of Hurst index $\alpha\in(0,1)$, defined via the harmonizable representation, with the associated family of approximations $B^{\eta}$, $\eta>0$ living in the same probability space, see eq. (\ref{eq:0:B-eta-Fourier}). Then there exists a rough path $({\cal R}{\bf B}^{1,\eta}=\delta B^{\eta},\ldots,{\cal R}{\bf B}^{\lfloor 1/\alpha\rfloor,\eta})$ over $B^{\eta}$ $(\eta>0)$, living in the chaos of order $1,\ldots,\lfloor 1/\alpha\rfloor$ of $B$, satisfying properties (ii) (multiplicative property) and (iii) (geometric property) of the Introduction, together with the following estimates:
\begin{itemize} \item[(uniform H\"older estimate)] There exists a constant $C>0$ such that, for every $s,t\in\mathbb{R}$ and $\eta>0$,
$${\mathbb{E}} |{\cal R}{\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_n)|^2 \le C|t-s|^{2n\alpha};$$ \item[(rate of convergence)] there exists a constant $C>0$ such that, for every $s,t\in\mathbb{R}$ and $\eta_1,\eta_2>0$,
$${\mathbb{E}} |{\cal R}{\bf B}^{n,\eta_1}_{ts}(i_1,\ldots,i_n)-{\cal R}{\bf B}^{n,\eta_2}_{ts}(i_1,\ldots,i_n)|^2 \le C|\eta_1-\eta_2|^{2\alpha}.$$ \end{itemize} }
\end{Theorem}
These results imply the existence of an explicit rough path ${\cal R}{\bf B}$ over $B$, obtained as the limit of ${\cal R}{\bf B}^{\eta}$ when $\eta\to 0$.
Here is an outline of the article. We first recall briefly some definitions and preliminary results on algebraic rough path theory in Section 1, which show in particular that Theorem \ref{th:0} implies the convergence of ${\cal R}{\bf B}^{\eta}$ to a rough path ${\cal R}{\bf B}$ over fractional Brownian motion $B$ when $\eta\to 0$. Section 2 is dedicated to tree combinatorics and to the introduction of quite general regularization schemes for the iterated integrals of an arbitrary smooth path $\Gamma$. The proofs of the multiplicative and geometric properties are to be found in \cite{Unt09bis} and are not reproduced here. We apply a suitable regularization scheme to the construction of the regularized rough path ${\cal R}{\bf B}^{\eta}$ in Section 3, and prove the H\"older and rate of convergence estimates of Theorem \ref{th:0} for the iterated integrals ${\cal R}{\bf B}^{n,\eta}(i_1,\ldots,i_n)$ with distinct indices, $i_1\not=\ldots\not=i_n$. We conclude in Section 4 by showing how to extend these results to coinciding indices, and by introducing a new, real-valued, two-dimensional Gaussian process which we call {\it two-dimensional antisymmetric fractional Brownian motion}, to which the above construction extends naturally.
{\bf Notations.} The group of permutations of $\{1,\ldots,n\}$ will be denoted by
$\Sigma_n$. The Fourier transform is ${\cal F}:f\to {\cal F}f(\xi)=\frac{1}{\sqrt{2\pi}}\int f(x) e^{-{\rm i} x\xi} dx$. If $|a|\le C|b|$ for some constant $C$ ($a$ and $b$ depending on some arbitrary set of parameters), then we shall write $|a|\lesssim |b|$.
\section{The analysis of rough paths}
The present section will be very sketchy since the objects and results needed in this work have already been presented in great detail in \cite{TinUnt08}. The foundational paper on the subject of algebraic rough path theory is due to M. Gubinelli \cite{Gu}, see also \cite{Gu2} for more details in the case $\alpha<1/3$. Let us recall briefly the original problem motivating the introduction of rough paths. Let $\Gamma:\mathbb{R}\to\mathbb{R}^d$ be some fixed irregular (i.e. not differentiable) path, say $\kappa$-H\"older, and $f:\mathbb{R}\to\mathbb{R}^d$ some function which is also irregular (mainly because one wants to consider functions $f$ obtained as a composition $g\circ \Gamma$ where $g:\mathbb{R}^d\to\mathbb{R}^d$ is regular).
Can one define the integral $\int f_x d\Gamma_x$ ? The answer depends on the H\"older regularity of $f$ and $\Gamma$. Assuming $f$ is $\gamma$-H\"older with $\kappa+\gamma>1$, then one may define the so-called {\it Young integral} \cite{Lej}
$\int_s^t f_x d\Gamma_x$ as the Riemann sum type limit $\lim_{|\Pi|\to 0} \sum_{t_i\in\Pi}
f_{t_i}(\Gamma_{t_{i+1}}-\Gamma_{t_i})$, where $\Pi=\{s=t_0<\ldots<t_n=t\}$ is a partition of $[s,t]$ with mesh $|\Pi|$ going to $0$. Then the resulting path $Y_t-Y_s:=\int_s^t f_x d\Gamma_x$ has the same regularity as $\Gamma$. If $\kappa+\gamma\le 1$ instead, this is no longer possible in general. One way out of this problem, giving at the same time a coherent way to solve differential equations driven by $\Gamma$, is to define a class of {\it $\Gamma$-controlled paths} ${\cal Q}$, such that the above integration problem may be solved uniquely in this class by a formula generalizing the above Riemann sums, in which formal iterated integrals ${\bf \Gamma}^n(i_1,\ldots,i_n)$ of $\Gamma$ appear as in the Introduction.
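A minimal sketch of the Young integral as a Riemann-sum limit (our own helper; for smooth data the left sums converge at rate $O(1/n)$, which suffices for a sanity check):

```python
import numpy as np

def young_integral(f, gamma, s, t, n=10000):
    """Left Riemann sums along a uniform partition; for f gamma-Hoelder and
    gamma kappa-Hoelder with kappa + gamma > 1 these converge (Young)."""
    ts = np.linspace(s, t, n + 1)
    return float(np.sum(f(ts[:-1]) * np.diff(gamma(ts))))

# smooth sanity check: int_0^1 x d(x^2) = int_0^1 2 x^2 dx = 2/3
val = young_integral(lambda x: x, lambda x: x ** 2, 0.0, 1.0)
```

For genuinely $\kappa$-H\"older drivers with $\kappa+\gamma\le 1$ these sums fail to converge, which is the obstruction discussed above.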
\begin{Definition}[H\"older spaces]
{Let $\kappa\in(0,1)$ and $T>0$ fixed. \begin{itemize}
\item[(i)] Let $C_1^{\kappa}=C_1^{\kappa}([0,T],\mathbb{C})$ be the space of complex-valued $\kappa$-H\"older functions $f$ in one variable with (semi-)norm $||f||_{\kappa}=\sup_{s,t\in[0,T]} \frac{|f(t)-f(s)|}{|t-s|^{\kappa}}.$ \item[(ii)] Let $C_2^{\kappa}=C_2^{\kappa}([0,T],\mathbb{C})$ be the space of complex-valued functions $f=f_{t_1,t_2}$
of two variables vanishing on the diagonal $t_1=t_2$, such that $||f||_{2,\kappa}<\infty$, where
$|| \ .\ ||_{2,\kappa}$ is the following norm:
\begin{equation} ||f||_{2,\kappa}=\sup_{t_1,t_2\in[0,T]} \frac{|f_{t_1,t_2}|}{|t_1-t_2|^{\kappa}}.\end{equation} \item[(iii)] Let $C_3^{\kappa}=C_3^{\kappa}([0,T],\mathbb{C})$ be the space of complex-valued functions $f=f_{t_1,t_2,t_3}$
of three variables vanishing on the subset $\{t_1=t_2\}\cup\{t_2=t_3\}\cup\{t_1=t_3\}$,
such that $||f||_{3,\kappa}<\infty$ for some generalized H\"older semi-norm $|| \ .\ ||_{3,\kappa}$ defined for instance in \cite{Gu}, section 2.1. \end{itemize} }
\end{Definition}
\begin{Definition}[increments]
{\it \begin{itemize} \item[(i)] Let $f$ be a function of one variable: then the increment of $f$, denoted by $\delta f$, is $(\delta f)_{ts}:=f(t)-f(s).$ \item[(ii)] Let $f=f_{ts}$ be a function of two variables: then we define \begin{equation} (\delta f)_{tus}:= f_{ts}- f_{tu}-f_{us}.\end{equation} \end{itemize} Note that $\delta\circ\delta(f)=0$ if $f$ is a function of one variable. } \end{Definition}
Let $\Gamma=(\Gamma(1),\ldots,\Gamma(d)):[0,T]\to\mathbb{R}^d$ be a $\kappa$-H\"older path, and $({\bf\Gamma}^1_{ts}(i_1):=\Gamma_t(i_1)-\Gamma_s(i_1),{\bf\Gamma}^2_{ts}(i_1,i_2),\ldots,{\bf\Gamma}^{\lfloor 1/\kappa\rfloor}_{ts}(i_1,\ldots,i_{\lfloor 1/\kappa\rfloor}) )$ be a rough path lying above $\Gamma$, satisfying properties (i) (H\"older property), (ii) (multiplicativity property) and (iii) (geometricity property) of the Introduction.
\begin{Definition}[controlled paths]
{\it Let $z=(z(1),\ldots,z(d))\in C_1^{\kappa}$ for some $\kappa<\alpha$ and $N=\lfloor 1/\kappa\rfloor+1$. Then $z$ is called a ($\Gamma$-)controlled path if its increments can be decomposed into \begin{equation} \delta z(i)=\sum_{n=1}^{N} \sum_{(i_1,\ldots,i_n)} {\bf \Gamma}^n(i_1,\ldots,i_n).f^n(i_1,\ldots,i_n;i)+g^0(i)\end{equation} for some remainders $g^0(i)\in C_2^{N\kappa}$ and some paths $f^n(i_1,\ldots,i_n;i)\in (C_1^{\kappa})^n$ such that \begin{eqnarray} && \delta f^n(i_1,\ldots,i_n;i)= \nonumber\\ &&\quad \sum_{l=1}^{N-1-n}\sum_{(j_1,\ldots,j_l)} {\bf \Gamma}^l(j_1,\ldots,j_l). f^{l+n}(j_1,\ldots,j_l,i_1,\ldots,i_n;i) +g^n(i_1,\ldots,i_n;i), \quad n=1,\ldots,N \nonumber\\ \end{eqnarray} for some remainder terms $g^n(i_1,\ldots,i_n;i)\in C_2^{(N-n)\kappa}$.
We denote by ${\cal Q}_{\kappa}$ the space of all such paths, and by ${\cal Q}_{\alpha^-}$ the intersection $\cap_{\kappa<\alpha} {\cal Q}_{\kappa}.$ }
\end{Definition}
We may now state the main result.
\begin{Proposition}[see \cite{Gu2}, Theorem 8.5, or \cite{TinUnt08}, Proposition 3.1]
{\it Let $z\in{\cal Q}_{\alpha^-}$. Then the limit
\begin{equation} \int_s^t z_x d\Gamma_x:=\lim_{|\Pi|\to 0} \sum_{k=0}^{n-1} \sum_{i=1}^d \left[ \delta \Gamma_{t_{k+1},t_k}(i) z_{t_k}(i) +\sum_{m=1}^{N-1}\sum_{(i_1,\ldots,i_{m})} {\bf \Gamma}^{m+1}_{t_{k+1},t_k}(i_1,\ldots,i_m,i) f^m_{t_k}(i_1,\ldots,i_m;i)
\right] \end{equation} exists in the space ${\cal Q}_{\alpha^-}$. }
\end{Proposition}
Assume $\Gamma$ is a centered Gaussian process, and $\Gamma^{\eta}$ a family of Gaussian approximations of $\Gamma$ living in its first chaos. Then the Proposition below gives very convenient moment conditions for a family of
rough paths $(\Gamma^{\eta},{\bf\Gamma}^{2,\eta},\ldots,{\bf\Gamma}^{\lfloor 1/\kappa\rfloor,\eta})$ to converge in the right H\"older norms when $\eta\to 0$, thereby
defining a rough path above $\Gamma$.
\begin{Proposition}
{\it Let $\Gamma$ be a $d$-dimensional centered Gaussian process admitting a version with a.s. $\alpha^-$-H\"older paths. Let $N=\lfloor 1/\alpha\rfloor.$ Assume: \begin{enumerate} \item there exists a family $\Gamma^{\eta}$, $\eta\to 0^+$ of Gaussian processes living in the first chaos of $\Gamma$ and an overall constant $C$ such that \begin{itemize}
\item[(i)] \begin{equation} {\mathbb{E}} |\Gamma^{\eta}_t-\Gamma_s^{\eta}|^2 \le C|t-s|^{2\alpha}; \label{eq:1:i} \end{equation}
\item[(ii)] \begin{equation} {\mathbb{E}} |\Gamma^{\eta}_t-\Gamma^{\varepsilon}_t|^2 \le C |\varepsilon-\eta|^{2\alpha}; \label{eq:1:ii} \end{equation} \item[(iii)] $\forall t\in[0,T]$, $\Gamma_t^{\eta}\overset{L^2}{\to} \Gamma_t$ when $\eta\to 0$; \end{itemize} \item there exists a truncated multiplicative functional $({\bf \Gamma}_{ts}^{1,\eta}=\Gamma_t^{\eta}-\Gamma_s^{\eta}, {\bf\Gamma}_{ts}^{2,\eta},\ldots,{\bf\Gamma}_{ts}^{N,\eta})$ lying above $\Gamma^{\eta}$ and living in the $n$-th chaos of $\Gamma$, $n=1\ldots,N$, such that, for every $2\le k\le N$, \begin{itemize}
\item[(i)] \begin{equation} {\mathbb{E}} |{\bf\Gamma}_{ts}^{k,\eta}|^2 \le C|t-s|^{2k\alpha}; \label{eq:1:ibis}\end{equation}
\item[(ii)] \begin{equation} {\mathbb{E}} |{\bf \Gamma}_{ts}^{k,\varepsilon}-{\bf\Gamma}_{ts}^{k,\eta}|^2 \le C|\varepsilon-\eta|^{2\alpha}. \label{eq:1:iibis} \end{equation} \end{itemize} \end{enumerate}
Then $({\bf\Gamma}^{1,\eta},\ldots,{\bf\Gamma}^{N,\eta})$ converges in $L^2(\Omega;C_2^{\kappa}([0,T],\mathbb{R}^d)\times C_2^{2\kappa}([0,T],\mathbb{R}^{d^2}) \times\ldots\times C_2^{N\kappa}([0,T],\mathbb{R}^{d^N}))$ for every $\kappa<\alpha$ to a rough path $({\bf\Gamma}^1,\ldots,{\bf\Gamma}^N)$ lying above $\Gamma$. } \end{Proposition}
{\bf Short proof} (see \cite{TinUnt08}, Lemma 5.1, Lemma 5.2 and Prop. 5.4).
The main ingredient is the Garsia-Rodemich-Rumsey (GRR for short) lemma \cite{Ga} which states that, if $f\in C_2^{\kappa}([0,T],\mathbb{C})$,
\begin{equation} ||f||_{2,\kappa} \le C \left(
||\delta f||_{3,\kappa}+ \left( \int_0^T \int_0^T \frac{|f_{vw}|^{2p}}{|w-v|^{2\kappa p+2}} \ dv\ dw\right)^{1/2p} \right) \end{equation}
for every $p\ge 1$.
Then properties (\ref{eq:1:i},\ref{eq:1:ibis}) imply by using the GRR lemma for $p$ large enough,
Jensen's inequality and the equivalence of $L^p$-norms for processes living in a fixed Gaussian chaos
\begin{equation} {\mathbb{E}} ||{\bf\Gamma}^{k,\eta}||_{2,k\kappa} \lesssim {\mathbb{E}} ||\delta{\bf\Gamma}^{k,\eta}||_{3,k\kappa}+C.\end{equation} By using the multiplicative property (ii) in the Introduction and induction on $k$, ${\mathbb{E}} ||\delta{\bf\Gamma}^{k,\eta}||_{3,k\kappa}$ may in the same way be proved to be bounded by a constant.
On the other hand, properties (\ref{eq:1:i},\ref{eq:1:ii},\ref{eq:1:ibis},\ref{eq:1:iibis}), together with the equivalence of $L^p$-norms, imply (for every $\kappa<\alpha$)
\begin{equation} {\mathbb{E}} |{\bf\Gamma}^{k,\varepsilon}_{ts}-{\bf\Gamma}^{k,\eta}_{ts}|^2 \lesssim |t-s|^{2k\kappa} |\varepsilon-\eta|^{2(\alpha-\kappa)}\end{equation} hence, by the same arguments,
\begin{equation} {\mathbb{E}} ||{\bf\Gamma}^{k,\varepsilon}-{\bf\Gamma}^{k,\eta}||_{2,k\kappa} \lesssim |\varepsilon-\eta|^{\alpha-\kappa} \end{equation} which shows that ${\bf\Gamma}^{k,\varepsilon}$ is a Cauchy sequence in $C_2^{k\kappa}([0,T],\mathbb{R}^{d^k})$.
$\Box$
\section{Tree combinatorics and the Fourier normal ordering method}
\subsection{From iterated integrals to trees}
It was noted a long time ago \cite{But72} that iterated integrals could be encoded by trees. This remark has been exploited in connection with the construction of the rough path solution of (partial, stochastic) differential equations in \cite{Gu2}. The correspondence between trees and iterated integrals goes simply as follows.
\begin{Definition}
A decorated rooted tree (to be drawn growing {\em up}) is a finite tree with a distinguished vertex called {\em root} and edges oriented {\em downwards} (i.e. directed towards the root), such that every vertex carries an integer label.
If ${\mathbb{T}}$ is a decorated rooted tree, we let $V({\mathbb{T}})$ be the set of its vertices (including the root), and $\ell:V({\mathbb{T}})\to\mathbb{N}$ be its vertex labeling.
More generally, a decorated rooted {\em forest} is a finite set of decorated rooted trees. If ${\mathbb{T}}=\{{\mathbb{T}}_1, \ldots,{\mathbb{T}}_l\}$ is a forest, then we shall write ${\mathbb{T}}$ as the formal commutative product ${\mathbb{T}}_1\ldots{\mathbb{T}}_l$.
\end{Definition}
\begin{Definition} \label{def:2:connect}
Let ${\mathbb{T}}$ be a decorated rooted tree.
\begin{itemize} \item Letting $v,w\in V({\mathbb{T}})$, we say that $v$ {\em connects directly to} $w$, and write $v\to w$ or equivalently $w=v^-$, if $(v,w)$ is an edge oriented downwards from $v$ to $w$. (Note that $v^-$ exists and is unique except if $v$ is the root). \item If $v_m\to v_{m-1}\to\ldots\to v_1$, then we shall write $v_m\twoheadrightarrow v_1$, and say that $v_m$ {\em connects to} $v_1$. By definition, all vertices (except the root) connect to the root.
\item Let $(v_1,\ldots,v_{|V({\mathbb{T}})|})$ be an ordering of $V({\mathbb{T}})$. Assume that $\left( v_i\twoheadrightarrow v_j\right)\Rightarrow\left(i>j\right)$ (in particular, $v_1$ is the root). Then we shall say that the ordering is {\em compatible} with the {\em tree partial ordering} defined by $\twoheadrightarrow$. \end{itemize}
\end{Definition}
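The compatibility condition of Definition \ref{def:2:connect} is easy to test mechanically. A brute-force sketch (names ours; by transitivity it suffices to check each vertex against its parent $v^-$):

```python
from itertools import permutations

def is_compatible(order, parent):
    """Check that v -->> w implies v comes after w in the ordering.
    parent maps each non-root vertex to the vertex it connects to."""
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[v] > pos[parent[v]] for v in parent)

def compatible_orderings(vertices, parent):
    """Enumerate all orderings compatible with the tree partial order
    (brute force, fine for the small trees used in examples)."""
    return [p for p in permutations(vertices) if is_compatible(p, parent)]
```

For a trunk tree the tree order is total, so exactly one compatible ordering survives; a root with two children admits two.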
\begin{Definition} \label{def:2:it-int}
\begin{itemize} \item[(i)] Let $\Gamma=(\Gamma(1),\ldots,\Gamma(d))$ be a $d$-dimensional smooth path, and ${\mathbb{T}}$ a decorated rooted tree such that $\ell:V({\mathbb{T}})\to\{1,\ldots,d\}$. Then $I_{{\mathbb{T}}}(\Gamma):\mathbb{R}^2\to\mathbb{R}$ is the iterated integral defined as \begin{equation} [I_{{\mathbb{T}}}(\Gamma)]_{ts}:=\int_s^t d\Gamma_{x_{v_1}}(\ell(v_1))\int_s^{x_{v_2^-}} d\Gamma_{x_{v_2}}(\ell(v_2))
\ldots \int_s^{x_{v^-_{|V({\mathbb{T}})|}}} d\Gamma_{x_{v_{|V({\mathbb{T}})|}}}(\ell(v_{|V({\mathbb{T}})|}))
\end{equation} where $(v_1,\ldots,v_{|V({\mathbb{T}})|})$ is any ordering of $V({\mathbb{T}})$ compatible with the tree partial ordering.
In particular, if ${\mathbb{T}}$ is a trunk tree with $n$ vertices (see Fig. \ref{Fig1}) -- so that the tree ordering is total -- we shall write \begin{equation} I_{{\mathbb{T}}}(\Gamma)=I_n^{\ell}(\Gamma),\end{equation} where \begin{equation} [I_n^{\ell}(\Gamma)]_{ts}:=\int_s^t d\Gamma_{x_1}(\ell(1)) \int_s^{x_1} d\Gamma_{x_2}(\ell(2)) \ldots \int_s^{x_{n-1}} d\Gamma_{x_{n}}(\ell(n)). \end{equation}
\item[(ii)] (generalization) Assume ${\mathbb{T}}$ is a subtree of $\tilde{{\mathbb{T}}}$. Let $\mu$ be a Borel measure on $\mathbb{R}^{\tilde{{\mathbb{T}}}}$. Then
\begin{equation} [I_{\tilde{{\mathbb{T}}}}(\mu)]_{ts}:=\int_s^t \int _s^{x_{v_2^-}}\ldots\int_s^{x_{v^-_{|V({\mathbb{T}})|}}}
\mu(dx_{v_1},\ldots,dx_{v_{|V({\mathbb{T}})|}}) \end{equation} is a measure on $\mathbb{R}^{\tilde{{\mathbb{T}}}\setminus{\mathbb{T}}}$.
\end{itemize}
\end{Definition}
\begin{figure}
\caption{\small{Trunk tree.}}
\label{Fig1}
\end{figure}
Assume ${\mathbb{T}}=\tilde{{\mathbb{T}}}$ so $[I_{\tilde{{\mathbb{T}}}}(\mu)]_{ts}$ is a {\it number}. Then case (i) may be seen as a particular case of case (ii) with $\mu=d\Gamma(\ell(v_1))\otimes\ldots
\otimes d\Gamma(\ell(v_{|V({\mathbb{T}})|}))$. Conversely, case (ii) may be seen as a multilinear extension of case (i), and will turn out to be useful later on for the regularization procedure. Note however that (i) uses the labels of ${\mathbb{T}}$ while (ii) {\it does not}.
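For a trunk tree and a smooth path, $[I_n^{\ell}(\Gamma)]_{ts}$ can be evaluated by iterated cumulative quadrature, integrating from the innermost variable outwards. A sketch (names ours; the derivatives $u\mapsto\Gamma'_u(\ell(j))$ are passed explicitly):

```python
import numpy as np

def trunk_iterated_integral(dgammas, s, t, n=4001):
    """[I_n(Gamma)]_{ts} for a trunk tree: dgammas[0] is the outermost
    derivative u -> Gamma'(l(1))(u), dgammas[-1] the innermost."""
    x = np.linspace(s, t, n)
    F = np.ones(n)                      # innermost 'integrand' starts at 1
    for dg in reversed(dgammas):
        y = dg(x) * F
        # cumulative trapezoid: F(x_k) = int_s^{x_k} y
        F = np.concatenate(([0.0],
                            np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))
    return float(F[-1])

one = lambda u: np.ones_like(u)
# int_s^t dx1 int_s^{x1} dx2 = (t-s)^2/2
v2 = trunk_iterated_integral([one, one], 0.0, 1.0)
# Gamma(1)_u = u^2, Gamma(2)_u = u: int_0^1 2 x1 dx1 int_0^{x1} dx2 = 2/3
v3 = trunk_iterated_integral([lambda u: 2 * u, one], 0.0, 1.0)
```

General tree integrals require one cumulative quadrature per vertex, combined along the tree; the trunk case already illustrates the recursion.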
The above correspondence extends by (multi)linearity to the {\em algebra of decorated rooted trees} which we shall
now introduce.
\begin{Definition}[algebra of decorated rooted trees]
\begin{itemize} \item[(i)] Let $\cal T$ be the free commutative algebra over $\mathbb{R}$ generated by decorated rooted trees. If ${\mathbb{T}}_1,{\mathbb{T}}_2,\ldots {\mathbb{T}}_l$ are (decorated rooted) trees, then the product ${\mathbb{T}}_1\ldots{\mathbb{T}}_l$ is the forest with connected components ${\mathbb{T}}_1,\ldots,{\mathbb{T}}_l$. \item[(ii)] Let ${\mathbb{T}}'=\sum_{l=1}^L m_l {\mathbb{T}}_l\in{\cal T}$, where $m_l\in\mathbb{R}$ and each ${\mathbb{T}}_l={\mathbb{T}}_{l,1}\ldots {\mathbb{T}}_{l,L(l)}$ is a forest with labels in the set $\{1,\ldots,d\}$, and $\Gamma$ be a smooth $d$-dimensional path as above. Then \begin{equation} [I_{{\mathbb{T}}'}(\Gamma)]_{ts}:=\sum_{l=1}^L m_l [I_{{\mathbb{T}}_{l,1}}(\Gamma)]_{ts}
\ldots [I_{{\mathbb{T}}_{l,L(l)}}(\Gamma)]_{ts}. \end{equation} \end{itemize}
\end{Definition}
Let us now rewrite these iterated integrals by using Fourier transform.
\begin{Definition}[formal integral]
Let $f:\mathbb{R}\to\mathbb{R}$ be a smooth, compactly supported function such that ${\cal F}f(0)=0$. Then the formal integral $\int^t f=-\int_t f$ of $f$ is defined as $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} ({\cal F}f)(\xi) \frac{e^{{\rm i} t\xi}}{{\rm i} \xi}\ d\xi.$
\end{Definition}
Formally one may write: \begin{equation} \int^t e^{{\rm i} x\xi} dx=\int_{\pm{\rm i}\infty}^t e^{{\rm i} x\xi} dx=\frac{e^{{\rm i} t\xi}}{{\rm i}\xi} \end{equation} (depending on the sign of $\xi$). The condition ${\cal F}f(0)=0$ prevents possible infra-red divergence when $\xi\to 0$.
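The definition can be checked numerically: if $f=g'$ with $g$ rapidly decreasing (a Gaussian is used below in place of a compactly supported bump), then ${\cal F}f(0)=0$ and the formal integral recovers $g(t)=\int_{-\infty}^t f$. A Python sketch, all discretization parameters being illustrative:

```python
import numpy as np

x, dx = np.linspace(-10.0, 10.0, 2001, retstep=True)
g = np.exp(-x**2)
f = -2.0 * x * g                                  # f = g', so F f(0) = 0

xi, dxi = np.linspace(-20.0, 20.0, 2000, retstep=True)  # even count: grid avoids xi = 0
# F f(xi) = (2 pi)^{-1/2} int f(x) e^{-i x xi} dx
Ff = dx * np.sum(f * np.exp(-1j * np.outer(xi, x)), axis=1) / np.sqrt(2*np.pi)

t = 0.7
# formal integral: (2 pi)^{-1/2} int F f(xi) e^{i t xi} / (i xi) d xi
formal = dxi * np.sum(Ff * np.exp(1j * t * xi) / (1j * xi)) / np.sqrt(2*np.pi)
print(abs(formal.real - np.exp(-t**2)))           # ≈ 0: the formal integral is g(t)
```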
The {\it skeleton integrals} defined below must be understood in a {\em formal} sense because of the possible infra-red divergences.
\begin{Definition}[skeleton integrals]
\begin{itemize} \item[(i)]
Let ${\mathbb{T}}$ be a tree with $\ell:{\mathbb{T}}\to\{1,\ldots,d\}$ and $\Gamma$ be a $d$-dimensional compactly supported, smooth path. Let $(v_1,\ldots, v_{|V({\mathbb{T}})|})$ be any ordering of $V({\mathbb{T}})$ compatible with the tree partial ordering. Then the {\em skeleton integral} of $\Gamma$ along ${\mathbb{T}}$ is by definition \begin{equation} [{\mathrm{Sk}} I_{{\mathbb{T}}}(\Gamma)]_t=\int^t d\Gamma_{x_{v_1}}(\ell(v_1)) \int^{x_{v_2^-}} d\Gamma_{x_{v_2}}(\ell(v_2))
\ldots \int^{x_{v^-_{|V({\mathbb{T}})|}}} d\Gamma_{x_{v_{|V({\mathbb{T}})|}}}(\ell(v_{|V({\mathbb{T}})|})). \end{equation}
\item[(ii)] (multilinear extension, see Definition \ref{def:2:it-int}) Assume ${\mathbb{T}}$ is a subtree of $\tilde{{\mathbb{T}}}$, and $\mu$ a compactly supported Borel measure on $\mathbb{R}^{\tilde{{\mathbb{T}}}}$. Then
\begin{equation} [{\mathrm{Sk}} I_{{\mathbb{T}}}(\mu)]_t=\int^t \int^{x_{v_2^-}}\ldots\int^{x_{v^-_{|V({\mathbb{T}})|}}} \mu(dx_{v_1},\ldots, dx_{v_{|V({\mathbb{T}})|}}) \end{equation} is a measure on $\mathbb{R}^{\tilde{{\mathbb{T}}}\setminus{\mathbb{T}}}$.
\end{itemize}
\end{Definition}
Formally again, $[{\mathrm{Sk}} I_{{\mathbb{T}}}(\Gamma)]_t$ may be seen as $[I_{{\mathbb{T}}}(\Gamma)]_{t,\pm{\rm i}\infty}$. Note that (denoting by $\hat{\mu}$ the partial Fourier transform of $\mu$ with respect to $(x_v)_{v\in V({\mathbb{T}})}$), the following equation holds,
\begin{equation} [{\mathrm{Sk\,I}}_{{\mathbb{T}}}(\mu)]_t=(2\pi)^{-|V({\mathbb{T}})|/2} \langle \hat{\mu}, \left[ {\mathrm{Sk\,I}}_{{\mathbb{T}}}\left( (x_v)_{v\in V({\mathbb{T}})} \to e^{{\rm i} \sum_{v\in V({\mathbb{T}})} x_v \xi_v} \right) \right]_t \rangle. \label{eq:2:reg-sk-it0} \end{equation}
\begin{Lemma} \label{lem:2:SkI}
The following formula holds:
\begin{equation} [{\mathrm{Sk}} I_{{\mathbb{T}}}(\Gamma)]_t=({\rm i}\sqrt{2\pi})^{-|V({\mathbb{T}})|} \int\ldots\int_{\mathbb{R}^{{\mathbb{T}}}} \prod_{v\in V({\mathbb{T}})} d\xi_v \ .\ e^{{\rm i} t\sum_{v\in V({\mathbb{T}})} \xi_v} \frac{\prod_{v\in V({\mathbb{T}})} {\cal F}(\Gamma'(\ell(v)))(\xi_v)}{\prod_{v\in V({\mathbb{T}})} (\xi_v+\sum_{w\twoheadrightarrow v} \xi_w)}.\end{equation}
\end{Lemma}
{\bf Proof.} We use induction on $|V({\mathbb{T}})|$. After stripping the root of ${\mathbb{T}}$ (denoted by $0$) there remains a forest ${\mathbb{T}}'={\mathbb{T}}'_1\ldots{\mathbb{T}}'_J$, whose roots are the vertices directly connected to $0$. Assume \begin{equation} [{\mathrm{Sk}} I_{{\mathbb{T}}'_j}(\Gamma)]_{x_0}=\int\ldots\int \prod_{v\in V({\mathbb{T}}'_j)} d\xi_v \ .\ e^{{\rm i} x_0\sum_{v\in V({\mathbb{T}}'_j)} \xi_v} F_j((\xi_v)_{v\in {\mathbb{T}}'_j}).\end{equation} Note that \begin{equation} {\cal F} \left( \prod_{j=1}^J {\mathrm{Sk}} I_{{\mathbb{T}}'_j}(\Gamma) \right)(\xi)=\int_{\sum_{v\in V({\mathbb{T}})\setminus \{0\}} \xi_v=\xi} \prod_{v\in V({\mathbb{T}})\setminus\{0\}} d\xi_v \prod_{j=1}^J F_j((\xi_v)_{v\in V({\mathbb{T}}'_j)}).\end{equation}
Then \begin{eqnarray} [{\mathrm{Sk}} I_{{\mathbb{T}}}(\Gamma)]_t&=& \int^t d\Gamma_{x_0}(\ell(0)) \prod_{j=1}^J [{\mathrm{Sk}} I_{{\mathbb{T}}'_j}(\Gamma)]_{x_0} \nonumber\\ &=& \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \frac{d\xi}{{\rm i}\xi} e^{{\rm i} t\xi} {\cal F}\left( \Gamma'(\ell(0)) \prod_{j=1}^J {\mathrm{Sk}} I_{{\mathbb{T}}'_j}(\Gamma) \right)(\xi) \nonumber\\ &=& \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} d\xi_0 {\cal F}(\Gamma'(\ell(0)))(\xi_0) e^{{\rm i} t\xi_0} \ .\ \nonumber\\ && \qquad \qquad \int_{-\infty}^{+\infty} \frac{d\xi}{{\rm i}\xi} e^{{\rm i} t(\xi-\xi_0)} \int_{\sum_{v\in V({\mathbb{T}})\setminus\{0\}} \xi_v=\xi-\xi_0} d\xi_v \prod_{j=1}^J F_j((\xi_v)_{v\in V({\mathbb{T}}'_j)}) \nonumber\\ \end{eqnarray} hence the result.
$\Box$
Skeleton integrals are the fundamental objects from which regularized rough paths will be constructed in the next subsections.
\subsection{Coproduct structure and increment-boundary decomposition}
Consider for example the trunk tree ${\mathbb{T}}^{{\mathrm{Id}}_n}$ (see subsection 2.4 for an explanation of the notation) with vertices $n\to n-1\to\ldots\to 1$ and labels $\ell:\{1,\ldots,n\}\to\{1,\ldots,d\}$, and the associated iterated integral (assuming $\Gamma=(\Gamma(1),\ldots,\Gamma(d))$ is a smooth path) \begin{equation} [I_n^{\ell}(\Gamma)]_{ts}=
[I_{{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_{ts}=\int_s^t d\Gamma_{x_1}(\ell(1))\ldots\int_s^{x_{n-1}}d\Gamma_{x_{n}} (\ell(n)).\end{equation}
Cutting ${\mathbb{T}}^{{\mathrm{Id}}_n}$ at some vertex $v\in\{2,\ldots,n\}$ produces two trees, $L_v {\mathbb{T}}^{{\mathrm{Id}}_n}$ ({\em left} or rather {\em bottom}
part of ${\mathbb{T}}^{{\mathrm{Id}}_n}$) and $R_v{\mathbb{T}}^{{\mathrm{Id}}_n}$ ({\em right} or {\em top} part), with respective vertex subsets $\{1,\ldots,v-1\}$ and $\{v,\ldots,n\}$. One should actually see the couple $(L_v{\mathbb{T}}^{{\mathrm{Id}}_n},R_v{\mathbb{T}}^{{\mathrm{Id}}_n})$ as $L_v{\mathbb{T}}^{{\mathrm{Id}}_n}\otimes R_v{\mathbb{T}}^{{\mathrm{Id}}_n}$ sitting in the tensor product algebra ${\cal T}\otimes{\cal T}$. Then multiplicative property (ii) in the Introduction reads \begin{equation} [\delta I_{{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_{tus}=\sum_{v\in V({\mathbb{T}}^{{\mathrm{Id}}_n})\setminus\{1\}} [I_{L_v{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_{tu} [I_{R_v {\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_{us}. \label{eq:treex0} \end{equation}
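For $n=2$ the multiplicative property, eq. (\ref{eq:treex0}), can be verified numerically for any smooth path; the polynomial components below are illustrative choices:

```python
import numpy as np

def quad(y, x):                                  # trapezoid rule on the grid
    return float(np.dot(0.5*(y[1:] + y[:-1]), np.diff(x)))

def I2(t, s, n=20001):
    # [I_{T^{Id_2}}(Gamma)]_{ts} with dGamma(1) = dx, dGamma(2) = 2x dx
    x = np.linspace(s, t, n)
    return quad(x**2 - s**2, x)                  # inner integral: int_s^{x} 2u du

t, u, s = 1.3, 0.8, 0.2
lhs = I2(t, s) - I2(t, u) - I2(u, s)             # [delta I]_{tus}
rhs = (t - u) * (u**2 - s**2)                    # cut at v = 2: bottom x top part
print(abs(lhs - rhs))                            # ≈ 0
```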
On the other hand, one may rewrite $[I_{{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_{ts}$ as the sum of the {\em increment term} \begin{eqnarray} [\delta G]_{ts} & =\int^t d\Gamma_{x_1}(\ell(1))\int^{x_1} d\Gamma_{x_2}(\ell(2)) \ldots\int^{x_{n-1}}d\Gamma_{x_{n}}(\ell(n))\nonumber\\ & \qquad - \int^s d\Gamma_{x_1}(\ell(1))\int^{x_1} d\Gamma_{x_2}(\ell(2)) \ldots\int^{x_{n-1}}d\Gamma_{x_{n}}(\ell(n)) \label{eq:2:incr-term} \nonumber\\ \end{eqnarray} and of the {\em boundary term} \begin{eqnarray} && [I_{{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)(\partial)]_{ts}= -\sum_{n_1+n_2=n} \int_s^t d\Gamma_{x_1}(\ell(1)) \ldots \int_s^{x_{n_1-1}}d\Gamma_{x_{n_1}}(\ell(n_1)) \ .\ \nonumber\\
&& \qquad .\ \int^s d\Gamma_{x_{n_1+1}}(\ell(n_1+1)) \int^{x_{n_1+1}}d\Gamma_{x_{n_1+2}}(\ell(n_1+2)) \ldots \int^{x_{n-1}} d\Gamma_{x_{n}}(\ell(n)). \label{eq:2:bdry-term} \nonumber\\ \end{eqnarray}
The above decomposition is fairly obvious for $n=2$ (see Introduction) and obtained by easy induction for general $n$. Thus (using tree notation this time) \begin{equation} [I_{{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_{ts}=[\delta {\mathrm{Sk}} I_{{\mathbb{T}}^{{\mathrm{Id}}_n}}]_{ts}-\sum_{v\in V({\mathbb{T}}^{{\mathrm{Id}}_n})\setminus\{1\}} [I_{L_v{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_{ts} \ .\ [{\mathrm{Sk}} I_{R_v{\mathbb{T}}^{{\mathrm{Id}}_n}}(\Gamma)]_s. \label{eq:2:*} \end{equation}
The above considerations extend to arbitrary trees (or also forests) as follows.
\begin{Definition}[admissible cuts]
{\it \begin{enumerate} \item
Let ${\mathbb{T}}$ be a tree, with set of vertices $V({\mathbb{T}})$ and root denoted by $0$. If $\vec{v}=(v_1,\ldots,v_J)$, $J\ge 1$ is any totally disconnected subset of $V({\mathbb{T}})\setminus\{0\}$,
i.e. $v_i\not\twoheadrightarrow v_j$ for all $i,j=1,\ldots,J$, then we shall say that $\vec{v}$ is an {\em admissible cut} of ${\mathbb{T}}$, and
write $\vec{v}\models V({\mathbb{T}})$. We let $R_{\vec{v}}{\mathbb{T}}$ be the sub-forest (or sub-tree if $J=1$) obtained by keeping only the vertices above $\vec{v}$, i.e. $V(R_{\vec{v}}{\mathbb{T}})=\vec{v}\cup\{w\in V({\mathbb{T}}):\ \exists j=1,\ldots,J, w\twoheadrightarrow v_j\}$, and $L_{\vec{v}}{\mathbb{T}}$ be the sub-tree obtained by keeping all other vertices.
\item Let ${\mathbb{T}}={\mathbb{T}}_1\ldots{\mathbb{T}}_l$ be a forest, together with its decomposition into trees. Then an admissible cut of ${\mathbb{T}}$ is a disjoint union $\vec{v}_1\cup\ldots\cup\vec{v}_l$, $\vec{v}_i\subset{\mathbb{T}}_i$, where $\vec{v}_i$ is either $\emptyset$, $\{0_i\}$ (root of ${\mathbb{T}}_i$) or an admissible cut of ${\mathbb{T}}_i$. By definition, we let $L_{\vec{v}}{\mathbb{T}}=L_{\vec{v}_1}{\mathbb{T}}_1 \ldots L_{\vec{v}_l}{\mathbb{T}}_l$, $R_{\vec{v}}{\mathbb{T}}= R_{\vec{v}_1}{\mathbb{T}}_1\ldots R_{\vec{v}_l}{\mathbb{T}}_l$ (if $\vec{v}_i=\emptyset$, resp. $\{0_i\}$, then $(L_{\vec{v}_i}{\mathbb{T}}_i,R_{\vec{v}_i}{\mathbb{T}}_i):=({\mathbb{T}}_i,\emptyset)$, resp. $(\emptyset,{\mathbb{T}}_i)$).
We exclude by convention the two trivial cuts $\emptyset\cup\ldots\cup\emptyset $ and $\{0_1\}\cup\ldots\cup\{0_l\}$. \end{enumerate} } \label{def:6:admissible-cut} \end{Definition}
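The cuts of Definition \ref{def:6:admissible-cut}, part 1, are exactly the non-empty antichains of non-root vertices and can be enumerated directly. A Python sketch (the parent-map encoding and the strict-ancestor reading of $\twoheadrightarrow$ are our assumptions):

```python
from itertools import combinations

def ancestors(v, parent):
    out = set()
    while parent[v] is not None:
        v = parent[v]
        out.add(v)
    return out

def admissible_cuts(parent):
    """Non-empty antichains of non-root vertices (no v_i above another v_j)."""
    root = next(v for v in parent if parent[v] is None)
    verts = [v for v in parent if v != root]
    cuts = []
    for r in range(1, len(verts) + 1):
        for c in combinations(verts, r):
            if all(a not in ancestors(b, parent)
                   for a in c for b in c if a != b):
                cuts.append(set(c))
    return cuts

# Trunk tree 3 -> 2 -> 1 (root 1): only the singleton cuts survive
chain = {1: None, 2: 1, 3: 2}
print(admissible_cuts(chain))        # [{2}, {3}]
# Root 1 with incomparable children 2, 3: the cut {2, 3} is also admissible
cherry = {1: None, 2: 1, 3: 1}
print(len(admissible_cuts(cherry)))  # 3
```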
See Fig. \ref{Fig2} and \ref{Fig3}. Defining the coproduct operation $\Delta: {\cal T}\to {\cal T}\otimes {\cal T}$, ${\mathbb{T}}\to e \otimes {\mathbb{T}}+{\mathbb{T}}\otimes e+\sum_{\vec{v}\models V({\mathbb{T}})} L_{\vec{v}}{\mathbb{T}}\otimes R_{\vec{v}}{\mathbb{T}}$ (where $e$ stands for the empty tree, which is the unit of the algebra) yields a coalgebra structure on ${\cal T}$ which makes it (once the antipode -- which we do not need here -- is defined) a Hopf algebra (see articles by A. Connes and D. Kreimer \cite{ConKre98,ConKre00,ConKre01}). The usual convention is to write $\vec{v}=c$ (cut), $L_{\vec{v}}{\mathbb{T}}=R^c ({\mathbb{T}})$ (root part), $R_{\vec{v}}{\mathbb{T}}=P^c ({\mathbb{T}})$ and $\Delta({\mathbb{T}})= e\otimes{\mathbb{T}}+{\mathbb{T}}\otimes e+\sum_c P^c({\mathbb{T}})\otimes R^c({\mathbb{T}})$ (note the inversion of the order of the factors in the tensor product).
\begin{figure}
\caption{\small{Admissible cut.}}
\label{Fig2}
\end{figure}
\begin{figure}
\caption{\small{Non-admissible cut.}}
\label{Fig3}
\end{figure}
Eq. (\ref{eq:treex0}) extends to the general formula (called the {\it tree multiplicative property}), which one can find in \cite{Kre99} or \cite{Gu2}, \begin{equation} [\delta I_{{\mathbb{T}}}(\Gamma)]_{tus}=\sum_{\vec{v}\models V({\mathbb{T}})} [I_{L_{\vec{v}}{\mathbb{T}}}(\Gamma)]_{tu} [I_{R_{\vec{v}} {\mathbb{T}}}(\Gamma)]_{us}, \label{eq:treex} \end{equation} satisfied by any regular path $\Gamma$ for any tree ${\mathbb{T}}$.
Letting formally $s=\pm{\rm i}\infty$ in eq. (\ref{eq:treex}) yields
\begin{equation} [I_{{\mathbb{T}}}(\Gamma)]_{tu}=[\delta {\mathrm{Sk}} I_{{\mathbb{T}}}]_{tu}-\sum_{\vec{v}\models V({\mathbb{T}})} [I_{L_{\vec{v}}{\mathbb{T}}}(\Gamma)]_{tu} \ .\ [{\mathrm{Sk}} I_{R_{\vec{v}}{\mathbb{T}}}(\Gamma)]_u, \label{eq:2:SkI} \end{equation}
which generalizes eq. (\ref{eq:2:*}). Conversely, eq. (\ref{eq:2:SkI}) implies the tree multiplicative property eq. (\ref{eq:treex}), as shown in Lemma \ref{lem:2:reg} below.
\subsection{Regularization procedure}
\begin{Definition}[regularization procedure for skeleton integrals] \label{def:2:reg}
Let $\tilde{{\mathbb{T}}}=\{v_1<\ldots<v_{|\tilde{{\mathbb{T}}}|}\}$ be a tree, ${\mathbb{T}}\subset\tilde{{\mathbb{T}}}$ a subtree, $\mu$ a compactly supported Borel measure on
$\mathbb{R}^{\tilde{{\mathbb{T}}}}$ such that ${\mathrm{supp}}\,\hat{\mu}\subset\{(\xi_1,\ldots,\xi_{|V(\tilde{{\mathbb{T}}})|}) \ | \
|\xi_1|\le \ldots\le |\xi_{|V(\tilde{{\mathbb{T}}})|}|\}$, and $D_{reg}\subset\mathbb{R}^{{\mathbb{T}}}$ a Borel subset.
The (formal) $D_{reg}$-{\em regularized skeleton integral} ${\cal R}{\mathrm{Sk\,I}}_{{\mathbb{T}}}$
is the linear mapping (see eq. (\ref{eq:2:reg-sk-it0}))
\begin{equation} \mu\to [{\cal R}{\mathrm{Sk\,I}}_{{\mathbb{T}}}(\mu)]_{t}=(2\pi)^{-|V({\mathbb{T}})|/2} \langle \hat{\mu}, {\bf 1}_{D_{reg}}(\xi) \ .\ \left[ {\mathrm{Sk\,I}}_{{\mathbb{T}}}\left( (x_v)_{v\in V({\mathbb{T}})} \to e^{{\rm i} \sum_{v\in V({\mathbb{T}})} x_v \xi_v} \right) \right]_t \rangle \label{eq:2:reg-sk-it} \end{equation} where $\hat{\mu}$ is the partial Fourier transform of $\mu$ with respect to $(x_v)_{v\in V({\mathbb{T}})}$.
By convention we impose $D_{reg}=\mathbb{R}$ whenever ${\mathbb{T}}$ is a tree reduced to one vertex.
\end{Definition}
\begin{Lemma}[regularization] \label{lem:2:reg} {\it
Let ${\mathbb{T}}={\mathbb{T}}_1\ldots{\mathbb{T}}_l$ be a forest, together with its tree decomposition. Define by induction on $|V({\mathbb{T}})|$ the regularized integration operator $\left[{\cal R}I_{{\mathbb{T}}}\right]_{ts}$ by \begin{equation} \prod_{j=1}^l \left\{ \left[ \delta {\cal R}{\mathrm{Sk\,I}}_{ {\mathbb{T}}_j}\right]_{ts}- \sum_{\vec{v}\models V({\mathbb{T}}_j)} \left[ {\cal R} I_{L_{\vec{v}}{\mathbb{T}}_j}\right]_{ts} \left[ {\cal R}{\mathrm{Sk\,I}}_{ R_{\vec{v}}{\mathbb{T}}_j}\right]_{s} \right\} \label{eq:6:reg-int} \end{equation}
Then $\left[{\cal R}I_{{\mathbb{T}}}\right]_{ts}$ satisfies the following tree multiplicative property:
\begin{equation} \left[ \delta{\cal R}I_{{\mathbb{T}}}\right]_{t us} =\sum_{\vec{v}\models V({\mathbb{T}})} \left[ {\cal R} I_{L_{\vec{v}} {\mathbb{T}} }
\right]_{t u} \ .\ \left[ {\cal R} I_{R_{\vec{v}} {\mathbb{T}}} \right]_{us}. \label{eq:6:x} \end{equation}
By analogy with eq. (\ref{eq:2:incr-term}, \ref{eq:2:bdry-term}, \ref{eq:2:*}),
$\left[ \delta {\cal R}{\mathrm{Sk\,I}}_{{\mathbb{T}}_j}\right]_{ts}$, resp. $[{\cal R} I_{{\mathbb{T}}_j}(\partial)]_{ts}:= - \sum_{\vec{v}\models V({\mathbb{T}}_j)} \left[ {\cal R} I_{L_{\vec{v}}{\mathbb{T}}_j}\right]_{ts} \left[ {\cal R}{\mathrm{Sk\,I}}_{ R_{\vec{v}}{\mathbb{T}}_j}\right]_{s} $
may be called the {\em increment}, resp. {\em boundary operators} associated to the tree ${\mathbb{T}}_j$. }
\end{Lemma}
{\bf Remark.} By Definition \ref{def:2:reg}, the condition $[{\cal R}I_{{\mathbb{T}}}]_{ts}=[I_{{\mathbb{T}}}]_{ts}$ holds for a tree reduced to one vertex. This implies in the end that one has constructed a rough path over the {\em original} path $\Gamma$.
{\bf Proof.} If the multiplicative property (\ref{eq:6:x}) holds for trees, then it holds automatically for forests since $[{\cal R} I_{{\mathbb{T}}_1\ldots{\mathbb{T}}_l}]_{ts}$ is the product $\prod_{j=1}^l [{\cal R}I_{{\mathbb{T}}_j}]_{ts}$. Hence we may assume that ${\mathbb{T}}$ is a tree, say, with $n$ vertices.
Suppose (by induction) that the above multiplicative property (\ref{eq:6:x}) holds for all trees with $\le n-1$ vertices. Then \begin{eqnarray} && \left[ \delta{\cal R}I_{{\mathbb{T}}}\right]_{t u s} =
\sum_{\vec{v}\models V({\mathbb{T}})} \left( - \left[ \delta {\cal R} I_{L_{\vec{v}}{\mathbb{T}}}\right]_{t u s}
\left[ {\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}}{\mathbb{T}}}\right]_{s}+
\left[ {\cal R} I_{L_{\vec{v}}{\mathbb{T}}}\right]_{t u}
\left[\delta {\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}}{\mathbb{T}}}\right]_{us} \right) \nonumber\\ && = \sum_{\vec{v}\models V({\mathbb{T}})} \sum_{\vec{w}\models V(L_{\vec{v}} {\mathbb{T}})} \left( - \left[ {\cal R} I_{ L_{\vec{w}}\circ L_{\vec{v}}({\mathbb{T}})} \right]_{t u} \left[ {\cal R} I_{R_{\vec{w}}\circ L_{\vec{v}}({\mathbb{T}})} \right]_{us} \left[{\cal R} {\mathrm{Sk\,I}}_{R_{\vec{v}}{\mathbb{T}}}\right]_{s} \right. \nonumber\\ && \left. \qquad \qquad \qquad
+ \left[ {\cal R} I_{L_{\vec{v}}{\mathbb{T}}}\right]_{t u}
\left[ \delta {\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}}{\mathbb{T}}}\right]_{us} \right). \nonumber\\ \end{eqnarray}
Let $\vec{x}=\vec{v}\amalg \vec{w}:=(\vec{v}\cup\vec{w})\setminus\{i\in \vec{v}\cup\vec{w}\ |\
\exists j\in \vec{v}\cup\vec{w},\ i\twoheadrightarrow j\}$. Then one easily proves that
$L_{\vec{w}}\circ L_{\vec{v}}({\mathbb{T}})=L_{\vec{x}}({\mathbb{T}})$, $R_{\vec{v}}({\mathbb{T}})=R_{\vec{v}}\circ R_{\vec{x}}({\mathbb{T}})$ and $R_{\vec{w}}\circ L_{\vec{v}}({\mathbb{T}})= L_{\vec{v}}\circ R_{\vec{x}}({\mathbb{T}})$. Hence \begin{eqnarray} [\delta {\cal R} I_{{\mathbb{T}}}]_{t us}&=& \sum_{\vec{x}\models V({\mathbb{T}})} [{\cal R} I_{L_{\vec{x}}{\mathbb{T}}}]_{t u} \left( -\sum_{\vec{v}\models V(R_{\vec{x}}{\mathbb{T}})} [{\cal R}I_{L_{\vec{v}}(R_{\vec{x}}{\mathbb{T}})}]_{us} [{\cal R} {\mathrm{Sk\,I}}_{R_{\vec{v}}(R_{\vec{x}}{\mathbb{T}})}]_{s} + [\delta {\cal R}{\mathrm{Sk\,I}}_{R_{\vec{x}}{\mathbb{T}}}]_{us}\right) \nonumber\\ &=& \sum_{\vec{x}\models V({\mathbb{T}})} [{\cal R}I_{L_{\vec{x}}{\mathbb{T}}}]_{t u} [{\cal R} I_{R_{\vec{x}}{\mathbb{T}}}]_{us}. \end{eqnarray}
$\Box$
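The purely algebraic content of the Lemma can be tested numerically on the trunk tree with three vertices: {\em any} choice of skeleton functions, multiplicative over forests, yields operators satisfying eq. (\ref{eq:6:x}). A Python sketch with random cubic polynomials standing in for (hypothetical) regularized skeleton integrals:

```python
import numpy as np

# One function f_T per labelled subtree of the trunk tree 3 -> 2 -> 1;
# random cubics suffice, since the Lemma holds for any such choice.
rng = np.random.default_rng(0)
coef = {T: rng.standard_normal(4) for T in ("1", "2", "3", "12", "23", "123")}
def f(T, t):                         # [R Sk I_T]_t
    c = coef[T]
    return c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3

def RI(T, t, s):
    """[R I_T]_{ts} built from eq. (6:reg-int): cuts of the chain are the
    singletons {2}, {3} (for "123") and {2} resp. {3} for the 2-chains."""
    base = f(T, t) - f(T, s)
    if T == "12":
        return base - RI("1", t, s) * f("2", s)
    if T == "23":
        return base - RI("2", t, s) * f("3", s)
    if T == "123":
        return base - RI("1", t, s) * f("23", s) - RI("12", t, s) * f("3", s)
    return base                      # one-vertex trees

t, u, s = 0.9, 0.4, -0.3
lhs = RI("123", t, s) - RI("123", t, u) - RI("123", u, s)
rhs = RI("1", t, u) * RI("23", u, s) + RI("12", t, u) * RI("3", u, s)
print(abs(lhs - rhs))                # ≈ 0: tree multiplicative property
```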
\subsection{Permutation graphs}
Consider now a permutation $\sigma\in\Sigma_n$. Applying Fubini's theorem yields \begin{eqnarray} I_n^{\ell}(\Gamma)&=& \int_s^t d\Gamma_{x_1}(\ell(1))\int_s^{x_1} d\Gamma_{x_2}(\ell(2)) \ldots \int_s^{x_{n-1}} d\Gamma_{x_{n}}(\ell(n)) \nonumber\\ &=& \int_{s_1}^{t_1} d\Gamma_{x_{\sigma(1)}}(\ell(\sigma(1)))\int_{s_2}^{t_2} d\Gamma_{x_{\sigma(2)}} (\ell(\sigma(2))) \ldots \int_{s_{n}}^{t_{n}} d\Gamma_{x_{\sigma(n)}}(\ell(\sigma(n))), \nonumber\\ \label{eq:2:2.8} \end{eqnarray} with $s_1=s$, $t_1=t$ and $s_j\in\{s\}\cup\{x_{\sigma(i)}, i<j\}$,
$t_j\in\{t\}\cup\{x_{\sigma(i)},i<j\}$ $(j\ge 2)$. Now decompose $\int_{s_j}^{t_j} d\Gamma_{x_{\sigma(j)}}(\ell(\sigma(j)))$ into $\left( \int_s^{t_j}-\int_s^{s_j}\right) d\Gamma_{x_{\sigma(j)}}(\ell(\sigma(j)))$ if $s_j\not=s,t_j\not=t$, and $\int_{s_j}^{t} d\Gamma_{x_{\sigma(j)}}(\ell(\sigma(j)))$ into $\left( \int_s^{t}-\int_s^{s_j}\right) d\Gamma_{x_{\sigma(j)}}(\ell(\sigma(j)))$ if $s_j\not=s$. Then $I_n^{\ell}(\Gamma)$ has been rewritten as a sum of terms of the form \begin{equation}\pm \int_s^{\tau_1} d\Gamma_{x_1}(\ell(\sigma(1)))\int_s^{\tau_2}d\Gamma_{x_2}(\ell(\sigma(2))) \ldots\int_s^{\tau_{n}} d\Gamma_{x_{n}}(\ell(\sigma(n))), \label{eq:2:2.9} \end{equation} where $\tau_1=t$ and $\tau_j\in\{t\}\cup\{x_i, i<j\}$, $j=2,\ldots,n$. Note the renaming of variables and vertices from eq. (\ref{eq:2:2.8}) to eq. (\ref{eq:2:2.9}). Encoding each of these expressions by the forest ${\mathbb{T}}$ with set of vertices $V({\mathbb{T}})=\{1,\ldots,n\}$, label function
$\ell\circ\sigma$, roots $\{j=1,\ldots,n\ |\
\tau_j=t\}$, and oriented edges $\{(j,j^-)\ |\ j=2,\ldots,n, \tau_j=x_{j^-}\}$, yields \begin{equation} I_n^{\ell}(\Gamma)=I_{{\mathbb{T}}^{\sigma}}(\Gamma) \end{equation} for some ${\mathbb{T}}^{\sigma}\in{\cal T}$ called {\bf permutation graph associated to $\sigma$}.
Summarizing:
\begin{Lemma}[permutation graphs] \label{lem:2:sigma}
To every permutation $\sigma\in\Sigma_n$ is associated a permutation graph \begin{equation} {\mathbb{T}}^{\sigma}=\sum_{j=1}^{J_{\sigma}} g(\sigma,j) {\mathbb{T}}_j^{\sigma}\in {\cal T},\end{equation} $g(\sigma,j)=\pm 1$, each forest ${\mathbb{T}}_j^{\sigma}$ being provided by construction with a total ordering compatible with its tree structure, image of the ordering $\{v_1<\ldots<v_n\}$ of the trunk tree ${\mathbb{T}}^{{\mathrm{Id}}_n}$ by the permutation $\sigma$. The label function of ${\mathbb{T}}^{\sigma}$ is $\ell\circ\sigma$, where $\ell$ is the original label function of ${\mathbb{T}}^{{\mathrm{Id}}_n}$.
\end{Lemma}
\begin{Example} \label{ex:2:1}
Let $\sigma=\left(\begin{array}{ccc} 1 & 2 & 3\\ 2 & 3 & 1 \end{array}\right)$. Then \begin{eqnarray} && \int_s^t d\Gamma_{x_1}(\ell(1)) \int_s^{x_1} d\Gamma_{x_2}(\ell(2)) \int_s^{x_2} d\Gamma_{x_3}(\ell(3))= \nonumber\\ && \qquad -\int_s^t d\Gamma_{x_2}(\ell(2)) \int_s^{x_2} d\Gamma_{x_3}(\ell(3)) \int_s^{x_2} d\Gamma_{x_1}(\ell(1)) \nonumber\\ && \qquad \qquad + \int_s^t d\Gamma_{x_2}(\ell(2)) \int_s^{x_2} d\Gamma_{x_3}(\ell(3)) \ .\ \int_s^t d\Gamma_{x_1}(\ell(1)).\end{eqnarray} Hence ${\mathbb{T}}^{\sigma}=-{\mathbb{T}}_1^{\sigma}+{\mathbb{T}}_2^{\sigma}$ is the sum of a tree and of a forest with two components (see Fig. \ref{Fig4bis}).
\end{Example}
\begin{figure}
\caption{\small{The permutation graph ${\mathbb{T}}^{\sigma}$ of Example \ref{ex:2:1}.}}
\label{Fig4bis}
\end{figure}
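The identity of Example \ref{ex:2:1} holds for any smooth path and can be checked numerically; a Python sketch with illustrative polynomial components:

```python
import numpy as np

s, t, n = 0.2, 1.1, 40001
x = np.linspace(s, t, n)
def cum(y):                                   # u -> int_s^{u} y on the grid
    return np.concatenate(([0.0], np.cumsum(0.5*(y[1:] + y[:-1]) * np.diff(x))))
def quad(y):                                  # int_s^t y
    return float(np.dot(0.5*(y[1:] + y[:-1]), np.diff(x)))

# dGamma(ell(1)), dGamma(ell(2)), dGamma(ell(3)): arbitrary smooth choices
d1, d2, d3 = np.ones_like(x), 2*x, 3*x**2

lhs = quad(d1 * cum(d2 * cum(d3)))            # nested iterated integral
rhs = -quad(d2 * cum(d3) * cum(d1)) + quad(d2 * cum(d3)) * quad(d1)
print(abs(lhs - rhs))                         # ≈ 0
```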
\subsection{Fourier normal ordering algorithm}
Let $\Gamma=(\Gamma(1),\ldots,\Gamma(d))$ be a compactly supported, smooth path, and ${\bf \Gamma}^n(i_1,\ldots,i_n)$ some iterated integral of $\Gamma$. To regularize ${\bf \Gamma}^n(i_1,\ldots,i_n)$, we shall apply the following algorithm (a priori formal, since skeleton integrals may be infra-red divergent):
\begin{enumerate} \item (Fourier projections) Split the measure $\mu=d\Gamma(i_1)\otimes\ldots\otimes d\Gamma(i_n)$ into
$\sum_{\sigma\in\Sigma_n} {\cal F}^{-1} \left({\bf 1}_{D^{\sigma}}(\xi)\hat{\mu}(\xi) \right)$, where $D^{\sigma}=\{(\xi_1,\ldots,\xi_n)\in\mathbb{R}^n \ |\ |\xi_{\sigma(1)}|
\le \ldots\le |\xi_{\sigma(n)}|\}$, and $\hat{\mu}$ is the Fourier transform of $\mu$. We shall write \begin{equation} \mu^{\sigma}:={\cal F}^{-1} \left({\bf 1}_{D^{\sigma}}.\hat{\mu}\right)\circ\sigma= {\cal F}^{-1} \left( {\bf 1}_{D^{{\mathrm{Id}}_n}} . (\hat{\mu}\circ\sigma)\right); \label{eq:2:mu-sigma} \end{equation}
\item Rewrite $I_n^{\ell} \left({\cal F}^{-1} ({\bf 1}_{D^{\sigma}}.\hat{\mu}) \right)$, where $\ell(j)=i_j$, as $I_{{\mathbb{T}}^{\sigma}}(\mu^{\sigma}):=\sum_{j=1}^{J_{\sigma}} g(\sigma,j)I_{{\mathbb{T}}^{\sigma}_j}(\mu^{\sigma})$, where ${\mathbb{T}}^{\sigma}$ is the permutation graph defined in subsection 2.4;
\item Replace $I_{{\mathbb{T}}^{\sigma}}(\mu^{\sigma})$ with some regularized integral as in Definition \ref{def:2:reg} and Lemma \ref{lem:2:reg}, \begin{equation} {\cal R}I_{{\mathbb{T}}^{\sigma}}(\mu^{\sigma}):=\sum_{j=1}^{J_{\sigma}} g(\sigma,j) {\cal R}I_{{\mathbb{T}}_j^{\sigma}}(\mu^{\sigma});\end{equation}
\item Sum the terms corresponding to all possible permutations, yielding ultimately \begin{equation} {\cal R}{\bf\Gamma}^n(i_1,\ldots,i_n)=\sum_{\sigma\in\Sigma_n} {\cal R}I_{{\mathbb{T}}^{\sigma}}(\mu^{\sigma}).\end{equation}
\end{enumerate}
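Step 1 relies on the fact that the domains $D^{\sigma}$ cover $\mathbb{R}^n$ and overlap only on the null set where two coordinates have equal modulus, so that $\sum_{\sigma}{\bf 1}_{D^{\sigma}}=1$ almost everywhere. A quick numerical illustration in Python:

```python
import numpy as np
from itertools import permutations

# For generic xi, exactly one permutation sorts (|xi_1|, ..., |xi_n|).
n = 4
rng = np.random.default_rng(1)
counts = []
for _ in range(200):
    xi = rng.standard_normal(n)
    counts.append(sum(1 for s in permutations(range(n))
                      if all(abs(xi[s[k]]) <= abs(xi[s[k+1]])
                             for k in range(n - 1))))
print(set(counts))   # {1}: each sample lies in exactly one D^sigma
```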
Explicit formulas for $\Gamma=B^{\eta}$ may be found in the following section.
\begin{Theorem} \cite{Unt09bis}
${\cal R}{\bf\Gamma}$ satisfies the multiplicative (ii) and geometric (iii) properties defined in the Introduction.
\end{Theorem}
The proof given in \cite{Unt09bis} shows actually that {\em any} choice of linear maps $[{\cal R}{\mathrm{Sk}} I_{{\mathbb{T}}}]_t:\mu\to [{\cal R}{\mathrm{Sk}} I_{{\mathbb{T}}}(\mu)]_t$ such that
(i) $[{\cal R}{\mathrm{Sk}} I_{{\mathbb{T}}_1.{\mathbb{T}}_2}(\mu_1\otimes\mu_2)]_t=[{\cal R}{\mathrm{Sk}} I_{{\mathbb{T}}_1}(\mu_1)]_t [{\cal R}{\mathrm{Sk}} I_{{\mathbb{T}}_2}(\mu_2)]_t$ and
(ii) $[{\cal R}{\mathrm{Sk}} I_{{\mathbb{T}}} (f)]_t=[{\mathrm{Sk}} I_{{\mathbb{T}}}(f)]_t=\int^t f(u)\ du$ if ${\mathbb{T}}$ is the trivial tree with one vertex,
yields a regularized rough path over $\Gamma$ if $\Gamma$ is {\em smooth}. Hence our
`cut' Fourier domain construction is somewhat arbitrary, albeit convenient. As already said in the Introduction, it seems natural to look for some more restrictive rules for the regularization; iterated renormalization schemes (such as BPHZ or dimensional regularization) are obvious candidates (work in progress). The question is: is one regularization scheme better than another in any sense? In quantum field theory, all renormalization schemes may be implemented by local counterterms, which amount to a change of the value of the (finite number of) parameters in the functional integral (which are experimentally measurable), and ultimately yield, after resumming the perturbation series, one and the same theory. By contrast, we do not know of any {\em probabilistically motivated} reason to choose a particular regularization scheme here.
\section{Rough path construction for fBm: case of distinct indices}
The strategy is now to choose an appropriate regularization procedure, so that regularized skeleton integrals of $B^{\eta}$ are finite and satisfy the uniform H\"older and convergence rate estimates given in Theorem \ref{th:0}.
\subsection{Analytic approximation of fBm}
Recall that $B$ may be defined via the harmonizable representation \cite{SamoTaq}
\begin{equation} B_t=c_{\alpha} \int_{\mathbb{R}} |\xi|^{{1\over 2}-\alpha} \frac{e^{{\rm i} t\xi}-1}{{\rm i} \xi} \ W(d\xi) \end{equation} where $(W_{\xi},\xi\ge 0)$ is a complex Brownian motion extended to $\mathbb{R}$
by setting $W_{-\xi}=-\overline{W}_{\xi}$ $(\xi\ge 0)$, and $c_{\alpha}={1\over 2} \sqrt{-\frac{\alpha}{\cos(\pi\alpha)\Gamma(-2\alpha)}}$.
We shall use the following approximation of $B$ by a family of centered Gaussian processes $(B^{\eta},\eta>0)$ living in the first chaos of $B$.
\begin{Definition}[approximation $B^{\eta}$]
Let, for $\eta>0$,
\begin{equation} B_t^{\eta}=c_{\alpha} \int_{\mathbb{R}} e^{-\eta|\xi|} |\xi|^{{1\over 2}-\alpha}
\frac{e^{{\rm i} t\xi}-1}{{\rm i} \xi}\ W(d\xi). \end{equation}
\end{Definition}
The process $B^{\eta}$ is easily seen to have a.s. smooth paths. The infinitesimal covariance ${\mathbb{E}} (B^{\eta})'_s (B^{\eta})'_t$ may be computed explicitly using the Fourier transform \cite{Erd54} \begin{equation} {\cal F}K^{',-}_{\eta}(\xi)=\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} K^{',-}_{\eta}(x) e^{-{\rm i} x\xi} dx
=-\frac{\pi\alpha}{2\cos\pi\alpha\Gamma(-2\alpha)} e^{-2\eta |\xi|} |\xi|^{1-2\alpha}
{\bf 1}_{|\xi|>0},\end{equation} where $K^{',-}_{\eta}(s-t):=\frac{\alpha(1-2\alpha)}{2\cos\pi\alpha} (-{\rm i}(s-t)+2\eta)^{2\alpha-2}$. By taking the real part of these expressions, one finds that $B^{\eta}$ has the same law as the analytic approximation of $B$ defined in \cite{Unt08}, namely, $B^{\eta}_t= \Gamma_{t+{\rm i}\eta}+\Gamma_{t-{\rm i}\eta}=2{\rm Re\ }\Gamma_{t+{\rm i}\eta}$, where $\Gamma$ is the analytic fractional Brownian motion (see also \cite{TinUnt08}).
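As a numerical illustration (assuming the normalization ${\mathbb{E}}|W(d\xi)|^2=d\xi$, which fixes the variance formula below), the variance of $B^{\eta}_t$ recovers the fBm scaling $|t|^{2\alpha}$ as $\eta\to 0$:

```python
import numpy as np

# Var B_t^eta = c_alpha^2 int e^{-2 eta|xi|} |xi|^{1-2 alpha} |e^{i t xi}-1|^2 / xi^2 d xi,
# up to the constant c_alpha^2; for small eta the ratio Var(2t)/Var(t)
# should approach 2^{2 alpha}.
def quad(y, x):
    return float(np.dot(0.5*(y[1:] + y[:-1]), np.diff(x)))

alpha, eta = 0.4, 1e-6
xi = np.logspace(-8, 5, 400000)
def var(t):      # even integrand: integrate over xi > 0 and double
    return 2 * quad(np.exp(-2*eta*xi) * xi**(-1 - 2*alpha)
                    * (2 - 2*np.cos(t*xi)), xi)

print(var(2.0) / var(1.0), 2**(2*alpha))   # both ≈ 1.74
```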
\subsection{Choice of the regularization procedure}
Let $\sigma\in\Sigma_n$ be a permutation. Recall (see Lemma \ref{lem:2:sigma}) that the permutation graph ${\mathbb{T}}^{\sigma}$
may be written as a finite sum $\sum_{j=1}^{J_{\sigma}} g(\sigma,j) {\mathbb{T}}^{\sigma}_j$, where each ${\mathbb{T}}^{\sigma}_j$ is a forest which is automatically provided with a total ordering. In the two following subsections, we shall consider regularized tree or skeleton integrals, ${\cal R}I_{{\mathbb{T}}}$ or ${\cal R}{\mathrm{Sk}} I_{{\mathbb{T}}}$, for a forest ${\mathbb{T}}$ which is one of the ${\mathbb{T}}^{\sigma}_j$.
\begin{Definition} \label{def:6:RTreg} {\it Fix $C_{reg}\in(0,1).$ Let, for ${\mathbb{T}}$ with set of vertices $V({\mathbb{T}})=\{v_1<\ldots<v_j\}$,
\begin{equation} \mathbb{R}_{+}^{{\mathbb{T}}}:=\big\{(\xi_{v_1},\ldots,\xi_{v_j})\in\mathbb{R}^{{\mathbb{T}}}\ |\ |\xi_{v_1}|\le\ldots\le|\xi_{v_j}|\big\},\end{equation}
\begin{equation} \mathbb{R}_{reg}^{{\mathbb{T}}}:=\big\{(\xi_{v_1},\ldots,\xi_{v_j})\in\mathbb{R}^{{\mathbb{T}}}_+\ | \ \forall v\in V({\mathbb{T}}),
|\xi_v+\sum_{w\twoheadrightarrow v} \xi_w|>C_{reg}\max\{|\xi_w|;\ w\twoheadrightarrow v\}
\ \big\}, \label{eq:3:RTreg} \end{equation} and ${\cal R}I_{{\mathbb{T}}}$, resp. ${\cal R}{\mathrm{Sk\,I}}_{{\mathbb{T}}}$ be the corresponding $\mathbb{R}^{{\mathbb{T}}}_{reg}$-regularized iterated, resp. skeleton integrals as in subsection 2.3. } \end{Definition}
Condition (\ref{eq:3:RTreg}) ensures that the denominators in the skeleton integrals are not too small (see Lemma \ref{lem:2:SkI}).
The following Lemma (close to arguments used in the study of random Fourier series \cite{Kah}) is fundamental for the estimates of the following subsections.
\begin{Lemma} \label{lem:3:Kah}
\begin{itemize}
\item[(i)] Let $F(u)=\int_{\mathbb{R}} dW_{\xi} a(\xi)e^{{\rm i} u\xi}$, where $|a(\xi)|^2\le C|\xi|^{-1-2\beta}$ for some $0<\beta<1$: then, for every $u_1,u_2\in\mathbb{R}$,
\begin{equation} {\mathbb{E}} |F(u_1)-F(u_2)|^2 \le C' |u_1-u_2|^{2\beta}.\end{equation}
\item[(ii)] Let $\tilde{F}(\eta)=\int_{\mathbb{R}} dW_{\xi} a(\xi)e^{-\eta|\xi|}$ $(\eta>0)$,
where $|a(\xi)|^2\le C|\xi|^{-1-2\beta}$ for some $0<\beta<1$: then, for every $\eta_1,\eta_2\in\mathbb{R}_+$,
\begin{equation} {\mathbb{E}} |\tilde{F}(\eta_1)-\tilde{F}(\eta_2)|^2 \le C' |\eta_1-\eta_2|^{2\beta}.\end{equation}
\end{itemize}
\end{Lemma}
{\bf Proof.} Bound $|e^{{\rm i} u_1\xi}-e^{{\rm i} u_2\xi}|$ by $|u_1-u_2| |\xi|$ for $|\xi|\le\frac{1}{|u_1-u_2|}$ and by $2$ otherwise, and similarly for $|e^{-\eta_1|\xi|}-e^{-\eta_2|\xi|}|$.
Note that the variance integral is infra-red convergent near $\xi=0$.
$\Box$
{\bf Remark:} Unless $|a(\xi)|^2$ is $L^1_{loc}$ near $\xi=0$, only the {\em increments} $F(u_1)-F(u_2)$, $\tilde{F}(\eta_1)-\tilde{F}(\eta_2)$ are well-defined.
\subsection{Estimates for the increment term}
In this paragraph, as in the next one, we consider regularized tree integrals associated to ${\cal R}{\bf B}^{n,\eta}(i_1,\ldots,i_n)$ where $i_1\not=\ldots\not=i_n$ are distinct indices, so that $B(i_1),\ldots,B(i_n)$ are {\it independent}.
\begin{Lemma}[H\"older estimate and rate of convergence] \label{lem:3:increment-Holder-rate}
{\it Let ${\mathbb{T}}={\mathbb{T}}^{\sigma}_j$ for some $j$, and $\alpha<1/|V({\mathbb{T}})|$. \begin{enumerate} \item
The skeleton term \begin{equation} [G^{\eta,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_u= \left[{\cal R}{\mathrm{Sk\,I}}_{{\mathbb{T}}}\left( \left( dB^{\eta}(i_1)\otimes\ldots\otimes dB^{\eta}(i_n) \right)^{\sigma}
\right) \right]_u \end{equation} (see eq. (\ref{eq:2:mu-sigma})) reads
\begin{eqnarray} && [G^{\eta,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_u=(-{\rm i} c_{\alpha})^{|V({\mathbb{T}})|} \int\ldots\int_{(\xi_v)_{v\in V({\mathbb{T}})}\in \mathbb{R}^{{\mathbb{T}}}_{reg}} \prod_{v\in V({\mathbb{T}})} dW_{\xi_v}(i_{\sigma(v)}) \nonumber\\
&& \qquad \qquad \qquad e^{{\rm i} u \sum_{v\in V({\mathbb{T}})} \xi_v} e^{-\eta\sum_{v\in V({\mathbb{T}})} |\xi_v|}
\frac{\prod_{v\in V({\mathbb{T}})} |\xi_v|^{{1\over 2}-\alpha}} {\prod_{v\in V({\mathbb{T}})} \left[ \xi_v+\sum_{w\twoheadrightarrow v} \xi_w\right]}. \label{eq:6:40}\end{eqnarray} \item It satisfies the uniform H\"older estimate:
\begin{equation} {\mathbb{E}} \left| [\delta G^{\eta,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_{ts} \right|^2
\le C|t-s|^{2\alpha|V({\mathbb{T}})|}.\end{equation} \item (rate of convergence): there exists a constant $C>0$ such that, for every $\eta_1, \eta_2>0$ and $s,t\in\mathbb{R}$,
\begin{equation} {\mathbb{E}} \left| [\delta G^{\eta_1,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_{ts}-
[\delta G^{\eta_2,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_{ts}
\right|^2 \le C|\eta_1-\eta_2|^{2\alpha}.\end{equation} \end{enumerate} } \end{Lemma}
{\bf Proof.}
\begin{enumerate}
\item Follows from Lemma \ref{lem:2:SkI} and the definitions of $B^{\eta}$ and of regularized integrals in the previous subsections 2.3 and 3.1.
\item (H\"older estimate)
One may just as well (by multiplying the integral estimates on each tree component) assume ${\mathbb{T}}$ is a tree, i.e. ${\mathbb{T}}$ is connected.
Let $V({\mathbb{T}})=\{v_1<\ldots<v_{|V({\mathbb{T}})|}\}$, so that $|\xi_{v_1}|\le \ldots\le |\xi_{v_{|V({\mathbb{T}})|}}|$.
Since every vertex $v\in V({\mathbb{T}})\setminus\{v_1\}$ connects to the root $v_1$, one has
\begin{equation} |V({\mathbb{T}})|\ .\ |\xi_{v_{|V({\mathbb{T}})|}}|\ge |\xi_{v_1}+\ldots+\xi_{v_{|V({\mathbb{T}})|}}|>C_{reg}|\xi_{v_{|V({\mathbb{T}})|}}|, \end{equation} so that $\xi:=\sum_{v\in V({\mathbb{T}})} \xi_v$
is comparable to $\xi_{v_{|V({\mathbb{T}})|}}$, i.e. belongs to $[C^{-1}\xi_{v_{|V({\mathbb{T}})|}},C\xi_{v_{|V({\mathbb{T}})|}}]$ if $C$ is some large enough positive constant. Write $[G^{\eta,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_u =\int_{\mathbb{R}} e^{{\rm i} u\xi} a(\xi) d\xi$.
Vertices at which two or more branches join are called {\it nodes}, and vertices to which no vertex is connected are called {\it leaves} (see Fig. \ref{Fig5}).
\begin{figure}
\caption{\small{3,4,6 are leaves; 1, 2 and 5 are nodes, 2 and 5 are uppermost; branches are e.g. $Br(2\twoheadrightarrow 1)$ or $Br(6\twoheadrightarrow 1)$.}}
\label{Fig5}
\end{figure}
The set $Br(v_1\twoheadrightarrow v_2)$ of vertices from a leaf or a node $v_1$ to a node $v_2$ (or to the root) is called a {\it branch}
if it does not contain any other node. By convention, $Br(v_1\twoheadrightarrow v_2)$ includes $v_1$ and excludes $v_2$.
Consider an uppermost node $n$, i.e. a node to which no other node is connected, together with the set of leaves $\{w_1<\ldots<w_J\}$ above $n$. Let $p_j=|V(Br(w_j\twoheadrightarrow n))|$. Note that $\left( \frac{|\xi_{n}|^{{1\over 2}-\alpha}}{\xi_n+\sum_{w\twoheadrightarrow n} \xi_w} \right)^2\lesssim
|\xi_{w_J}|^{-1-2\alpha}$. Now we proceed to estimate ${\mathrm{Var}}\ a(\xi)$. On the branch number $j$ from $w_j$ to $n$,
\begin{eqnarray} && \int\ldots\int_{|\xi_v|\le |\xi_{w_j}|, v\in Br(w_j\twoheadrightarrow n)\setminus\{w_j\} }
\left[ \prod_{v\in Br(w_j\twoheadrightarrow n)} \frac{e^{-\eta |\xi_v|} |\xi_v|^{{1\over 2}-\alpha}}{\xi_v+\sum_{w\twoheadrightarrow v} \xi_w} \right]^2 \nonumber\\
&& \qquad \lesssim |\xi_{w_j}|^{-1-2\alpha p_j} \end{eqnarray} and (summing over $\xi_{w_1},\ldots,\xi_{w_{J-1}}$ and over $\xi_n$)
\begin{eqnarray} && |\xi_{w_J}|^{-1-2\alpha p_J} \int_{|\xi_{w_{J-1}}| \le |\xi_{w_J}|} d\xi_{w_{J-1}}
|\xi_{w_{J-1}}|^{-1-2\alpha p_{J-1}} \ \nonumber\\ && \qquad \left( \ldots
\left( \int_{|\xi_{w_1}|\le|\xi_{w_2}|} d\xi_{w_1} |\xi_{w_1}|^{-1-2\alpha p_1} \left(
\int_{|\xi_n| \le |\xi_{w_1}|}
d\xi_n \frac{|\xi_n|^{1-2\alpha}}{\xi_{w_J}^2} \right)\right) \ldots \right) \nonumber\\
&& \lesssim |\xi_{w_J}|^{-(1+2\alpha p_J)+[2-2\alpha(1+p_1+\ldots+p_{J-1})]-2}
=|\xi_{w_J}|^{-1-2\alpha W(n)},\end{eqnarray} where $W(n)=p_1+\ldots+p_J+1=|\{v: v\twoheadrightarrow n\}|+1$ is the {\it weight} of $n$.
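For instance, if the uppermost node $n$ carries $J=2$ branches with $p_1=1$ and $p_2=2$, the successive integrations produce the exponent
\begin{equation} W(n)=p_1+p_2+1=4=|\{v:\ v\twoheadrightarrow n\}|+1, \end{equation}
the shrunk vertex thus accounting for the three vertices strictly above $n$ together with $n$ itself.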
One may then consider the reduced tree ${\mathbb{T}}_n$ obtained by shrinking all vertices above $n$ (including $n$) to {\it one} vertex with weight $W(n)$ and perform the same operations on ${\mathbb{T}}_n$. Repeat this inductively until ${\mathbb{T}}$ is shrunk to one point. In the end, one gets
${\mathrm{Var}} \ a(\xi) \lesssim |\xi_{v_{|V({\mathbb{T}})|}}|^{-1-2\alpha|V({\mathbb{T}})|} \lesssim |\xi|^{-1-2\alpha|V({\mathbb{T}})|}$. Now apply Lemma \ref{lem:3:Kah} (i).
\item (rate of convergence)
Let $X_u^{\eta_1,\eta_2}:=[G^{\eta_1,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_{u}-
[ G^{\eta_2,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_{u}$. Expanding $\prod_{j=1}^{|V({\mathbb{T}})|}
e^{-\eta_1 |\xi_{j}|} -\prod_{j=1}^{|V({\mathbb{T}})|} e^{-\eta_2 |\xi_j|}$ as
\begin{equation} \sum_{j=1}^{|V({\mathbb{T}})|} e^{-\eta_2(|\xi_{v_1}|+\ldots+|\xi_{v_{j-1}}|)}
(e^{-\eta_1|\xi_{v_j}|}-e^{-\eta_2|\xi_{v_j}|}) e^{-\eta_1(|\xi_{v_{j+1}}|+\ldots
+|\xi_{v_{|V({\mathbb{T}})|}}|)} \nonumber\\ \end{equation}
gives $X_u^{\eta_1,\eta_2}$ as a sum, $X_u^{\eta_1,\eta_2}=\sum_{v\in V({\mathbb{T}})} X_u^{\eta_1,\eta_2}(v)$, where $X_u^{\eta_1,\eta_2}(v)=\int d\xi_v b_u(\xi_v)
(e^{-\eta_1 |\xi_v|}-e^{-\eta_2|\xi_v|}) $ is obtained from $[G^{\eta,\sigma}_{{\mathbb{T}}}(i_1,\ldots,i_n)]_{u}$
by replacing $e^{-\eta|\xi_v|}$ with $e^{-\eta_1|\xi_v|}-e^{-\eta_2|\xi_v|}$,
and $e^{-\eta|\xi_w|}$, $w\not=v$ either by
$e^{-\eta_1|\xi_w|}$ or by $e^{-\eta_2|\xi_w|}$. We want to estimate ${\mathrm{Var}}\ b_u(\xi_v)$
uniformly in $u$.
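The expansion of the difference of products used above is the elementary telescoping identity; for $|V({\mathbb{T}})|=2$, writing $a=|\xi_{v_1}|$ and $b=|\xi_{v_2}|$, it reads
\begin{equation} e^{-\eta_1 a} e^{-\eta_1 b}-e^{-\eta_2 a} e^{-\eta_2 b}=\left(e^{-\eta_1 a}-e^{-\eta_2 a}\right) e^{-\eta_1 b}+e^{-\eta_2 a}\left(e^{-\eta_1 b}-e^{-\eta_2 b}\right), \end{equation}
each summand isolating the difference of regularizations at exactly one vertex.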
Fix the value of $\xi_v$ in the computations in the above proof for the H\"older estimate. Let $w_J$ be the maximal leaf above $v$, and $n\twoheadrightarrow v$ be the node just above $v$ if $v$ is not a node, $n=v$ otherwise.
Summing over all nodes above $n$ and taking the variance leads to an expression bounded by $|\xi_{w_J}|^{-1-2\alpha W(n)}$, where $W(n)=|\{w\ :\ w\twoheadrightarrow n\}|+1$ is as before
the weight of $n$. Consider now the corresponding shrunk tree ${\mathbb{T}}_n$. Let ${\mathbb{T}}_n(v)$ be the trunk tree defined by
${\mathbb{T}}_n(v)=\{w\in{\mathbb{T}}_n: w\twoheadrightarrow v \ {\mathrm{or}}\ v\twoheadrightarrow w\}\cup\{v\}$; similarly, let ${\mathbb{T}}(v)$ be the tree defined by ${\mathbb{T}}(v)=\{w\in{\mathbb{T}}: w\twoheadrightarrow v \ {\mathrm{or}}\ v\twoheadrightarrow w\}\cup\{v\}$, so that ${\mathbb{T}}_n(v)$ is the corresponding shrunk tree. Sum over all vertices $w\in {\mathbb{T}}_n(v)\setminus\{v\}$. The variance of the coefficient of $e^{-\eta_1 |\xi_v|}$ is
\begin{eqnarray} S(\xi_v) &\lesssim& \int_{|\xi_n|\ge |\xi_{v}|} d\xi_n |\xi_n|^{-1-2\alpha W(n)}|\xi_n|^{-1-2\alpha}
\int_{|\xi_w|\le |\xi_n|,w\in {\mathbb{T}}_n(v)\setminus\{n,v\}} \nonumber\\ && \qquad \left[
\prod_{w\in{\mathbb{T}}_n(v)\setminus\{n,v\}} d\xi_w \cdot |\xi_n|^{-(1+2\alpha)} \right] \nonumber\\
&\lesssim & \int_{|\xi_n|\ge |\xi_{v}|} d\xi_n |\xi_n|^{-2-2\alpha|{\mathbb{T}}(v)|} \lesssim |\xi_v|^{-1-2\alpha|{\mathbb{T}}(v)|} \end{eqnarray} if $v\not =n$, and
\begin{equation} S(\xi_v)\lesssim |\xi_n|^{-1-2\alpha W(n)}
\int_{|\xi_w|\le |\xi_n|,w\in{\mathbb{T}}_n(v)\setminus\{n\}} \prod_{w\in {\mathbb{T}}_n(v)\setminus\{n\}}
|\xi_n|^{-(1+2\alpha)} \lesssim |\xi_v|^{-1-2\alpha|{\mathbb{T}}(v)|} \end{equation} if $v=n$.
Removing the vertices belonging to ${\mathbb{T}}(v)$ from ${\mathbb{T}}$ leads to a forest which gives a finite contribution to the variance. Hence (by Lemma \ref{lem:3:Kah} (ii)) ${\mathbb{E}}|X_u^{\eta_1,\eta_2}(v)|^2\lesssim
|\eta_1-\eta_2|^{2\alpha|{\mathbb{T}}(v)|}.$
\end{enumerate}
$\Box$
The notion of {\em weight} $W(v)$ {\em of a vertex} $v$ introduced in this proof will be used again in subsections 3.4 and 4.1.
\subsection{Estimates for boundary terms}
Let ${\mathbb{T}}={\mathbb{T}}^{\sigma}_j$ for some $\sigma\in \Sigma_n$, and $i_1\not=\ldots\not=i_n$ as in the previous subsection. By multiplying the estimates on each tree component,
one may just as well assume ${\mathbb{T}}$ is a tree, i.e. is connected.
We shall now prove estimates for the boundary term ${\cal R}I_{{\mathbb{T}}} \left( \left( dB^{\eta}(i_1)\otimes \ldots\otimes dB^{\eta}(i_n) \right)^{\sigma} \right) (\partial)$ associated to ${\mathbb{T}}$ (see Lemma \ref{lem:2:reg}).
\begin{Lemma}
{\it Let ${\mathbb{T}}={\mathbb{T}}^{\sigma}_j$ for some $j$ (so that $n=|V({\mathbb{T}})|$). \begin{enumerate}
\item (H\"older estimate)
The regularized
boundary term ${\cal R}I_{{\mathbb{T}}} \left( \left( dB^{\eta}(i_1)\otimes \ldots\otimes dB^{\eta}(i_n) \right)^{\sigma} \right)(\partial)$ satisfies:
\begin{equation} {\mathbb{E}} \left|\left[ {\cal R} I_{{\mathbb{T}}}\left( \left( dB^{\eta}(i_1)\otimes \ldots\otimes dB^{\eta}(i_n) \right)^{\sigma}
\right)(\partial) \right]_{ts}\right|^2
\le C |t-s|^{2\alpha|V({\mathbb{T}})|} \label{eq:6:boundary-Holder} \end{equation} for a certain constant $C$.
\item (rate of convergence)
There exists a positive constant $C$ such that, for every $\eta_1,\eta_2>0$,
\begin{eqnarray} && {\mathbb{E}} \left| [{\cal R}I_{{\mathbb{T}}}\left( \left( dB^{\eta_1}(i_1)\otimes \ldots\otimes dB^{\eta_1}(i_n) \right)^{\sigma} \right)(\partial)]_{ts} \right.\nonumber\\ && \left. \qquad \qquad -[{\cal R}I_{{\mathbb{T}}}\left( \left( dB^{\eta_2}(i_1)\otimes \ldots\otimes dB^{\eta_2}(i_n)
\right)^{\sigma}
\right)(\partial)]_{ts} \right|^2
\le C|\eta_1-\eta_2|^{2\alpha}. \nonumber\\ \end{eqnarray} \end{enumerate}
\label{lem:3:boundary-Holder-rate} }
\end{Lemma}
{\bf Proof.}
\begin{enumerate}
\item
Apply Lemma \ref{lem:2:reg} repeatedly to ${\mathbb{T}}$: in the end,
$[{\cal R}I_{{\mathbb{T}}}\left( \left( dB^{\eta}(i_1)\otimes \ldots\otimes dB^{\eta}(i_n) \right)^{\sigma} \right)(\partial)]_{ts}$ appears as a sum of `skeleton-type' terms of the form (see Figure \ref{Fig6}) \begin{eqnarray} && A_{ts}:=[\delta{\cal R}{\mathrm{Sk\,I}}_{ L{\mathbb{T}}}]_{ts}\cdot
[{\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}_l}\circ L_{\vec{v}_{l-1}}\circ\ldots\circ L_{\vec{v}_1}({\mathbb{T}})}]_{s} \ldots
[{\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}_2}\circ L_{\vec{v}_1}({\mathbb{T}})}]_{s} [{\cal R}{\mathrm{Sk\,I}}_{ R_{\vec{v}_1}{\mathbb{T}}}]_{s} \nonumber\\ && \qquad \qquad \qquad \qquad \left( \left( dB^{\eta}(i_1)\otimes \ldots\otimes dB^{\eta}(i_n) \right)^{\sigma} \right), \nonumber\\ \label{eq:6:skeleton-type} \end{eqnarray} where $\vec{v}_1=(v_{1,1}<\ldots<v_{1,J_1})\models {\mathbb{T}}$, $\vec{v}_2\models L_{\vec{v}_1}{\mathbb{T}}$, $\ldots$, $\vec{v}_l=(v_{l,1}<\ldots<v_{l,J_l})\models L_{\vec{v}_{l-1}}\circ \ldots \circ L_{\vec{v}_1}({\mathbb{T}})$ and $L{\mathbb{T}}:=L_{\vec{v}_l}\circ\ldots\circ L_{\vec{v}_1}({\mathbb{T}})$. In eq. (\ref{eq:6:skeleton-type}) the forest ${\mathbb{T}}$ has been split into a number of sub-forests, $L{\mathbb{T}}\cup \left( \cup_{j=1}^J {\mathbb{T}}_j \right)$;
we call this splitting {\em the splitting associated to} $A_{ts}$ for further reference.
\underline{First step.}
Let $ [B^{\vec{v}_1}_s [\vec{\xi}]]_u \prod_{j=1}^{J_1} dW_{\xi_{v_{1,j}}} (i_{\sigma(v_{1,j})})$ be the contribution to ${\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}_1}{\mathbb{T}}}$ of all Fourier components such that
$\vec{\xi}=(\xi_{v_{1,1}},\ldots,\xi_{v_{1,J_1}})$, $|\xi_{v_{1,1}}|\le \ldots \le|\xi_{v_{1,J_1}}|$ is fixed. For definiteness (see Definition \ref{def:6:RTreg}), \begin{eqnarray} && \left[{\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}_1}{\mathbb{T}}} \left( \left( dB^{\eta}(i_1)\otimes \ldots\otimes dB^{\eta}(i_n) \right)^{\sigma} \right) \right]_u ((x_v)_{v\in L_{\vec{v}_1}{\mathbb{T}}}) \nonumber\\
&& = \int\ldots\int {\bf 1}_{\mathbb{R}^{V(L_{\vec{v}_1}{\mathbb{T}})\cup\vec{v}_1}_+} \left((\xi_v)_{v\in V(L_{\vec{v}_1}{\mathbb{T}})\cup\vec{v}_1}\right) \left[ B_s^{\vec{v}_1}[\vec{\xi}] \prod_{j=1}^{J_1} dW_{\xi_{v_{1,j}}} (i_{\sigma(v_{1,j})}) \right] \cdot \nonumber\\ && \qquad \left[ \prod_{v\in L_{\vec{v}_1}{\mathbb{T}}} c_{\alpha}
e^{-\eta |\xi_v|} e^{{\rm i} x_v\xi_v} |\xi_v|^{{1\over 2}-\alpha} dW_{\xi_v}(i_{\sigma(v)}) \right].\end{eqnarray}
Then \begin{equation} {\mathrm{Var}} [B_s^{\vec{v}_1}[\vec{\xi}]]_{s} \lesssim
\int\ldots\int \prod_{v\in\vec{v}_1} d\xi_v \left[ |\xi_v|^{-1-2\alpha} \int\ldots\int_{|\xi_w|\ge|\xi_v|,w\in R_v{\mathbb{T}}\setminus\{v\}} \prod_{w\in R_v{\mathbb{T}}\setminus\{v\}} |\xi_w|^{-1-2\alpha} \right],
\end{equation} hence \begin{equation} {\mathrm{Var}} [B_s^{\vec{v}_1}[\vec{\xi}]]_{s} \lesssim \prod_{v\in\vec{v}_1}
|\xi_v|^{-2|V(R_v{\mathbb{T}})|\alpha-1}.\end{equation}
\underline{Second step.}
More generally, let $ B_{s}^{\vec{v}_1,\ldots,\vec{v}_l}[\vec{\xi}]\prod_{j=1}^{J_l} dW_{\xi_{v_{l,j}}}(i_{\sigma(v_{l,j})})$ be the contribution to \begin{equation} [{\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}_l}\circ L_{\vec{v}_{l-1}}\circ\ldots\circ L_{\vec{v}_1}({\mathbb{T}})}]_{s} \ldots
[{\cal R}{\mathrm{Sk\,I}}_{R_{\vec{v}_2}\circ L_{\vec{v}_1}({\mathbb{T}})}]_{s} [{\cal R}{\mathrm{Sk\,I}}_{ R_{\vec{v}_1}{\mathbb{T}}}]_{s} \left( \left( dB^{\eta}(i_1)\otimes \ldots\otimes dB^{\eta}(i_n) \right)^{\sigma} \right) \end{equation}
of all Fourier components such that $\vec{\xi}=(\xi_{v_{l,1}},\ldots,\xi_{v_{l,J_l}})$ is fixed. Then
\begin{equation} {\mathrm{Var}} ( B^{\vec{v}_1,\ldots,\vec{v}_l}_{s}[\vec{\xi}]) \lesssim \prod_{v\in\vec{v}_l} |\xi_v|^{-2|V(R_{v}{\mathbb{T}})|\alpha-1}\end{equation} (proof by induction on $l$).
\underline{Third step.}
Let $V(L{\mathbb{T}})=\{w_1<\ldots<w_{max}\}$. By definition, $A_{ts}=\int_{\mathbb{R}} a_s(\Xi)(e^{{\rm i} \Xi t}-e^{{\rm i} \Xi s})d\Xi$, with \begin{eqnarray} && a_s(\Xi)=\int d\vec{\xi} \int\ldots\int_{( (\xi_w)_{w\in V(L{\mathbb{T}})})\in D_{\vec{\xi}}} \prod_{w\in V(L{\mathbb{T}})} dW_{\xi_w}(i_{\sigma(w)}) \nonumber\\ && \qquad \qquad
\frac{\prod_{w\in V(L{\mathbb{T}})} (-{\rm i} c_{\alpha})e^{-\eta|\xi_w|} |\xi_w|^{{1\over 2}-\alpha}}{\prod_{w\in V(L{\mathbb{T}})} (\xi_w+\sum_{w'\twoheadrightarrow w,w'\in V(L{\mathbb{T}})} \xi_{w'})} B_s^{\vec{v}_1,\ldots, \vec{v}_l}[\vec{\xi}] \nonumber\\ \end{eqnarray} where Fourier components in $D_{\vec{\xi}}$ satisfy in particular the following conditions:
\begin{itemize}
\item $|\xi_w+\sum_{w'\twoheadrightarrow w,w'\in V(L{\mathbb{T}})} \xi_{w'}|>C_{reg} \max\{ |\xi_{w'}|: w'\twoheadrightarrow w, w'\in V(L{\mathbb{T}})\}$; in particular, $\left(\frac{|\xi_w|^{{1\over 2}-\alpha}}{\xi_w+\sum_{w'\twoheadrightarrow w,w'\in V(L{\mathbb{T}})} \xi_{w'}}\right)^2 \lesssim |\xi_w|^{-1-2\alpha}$; \item $\sum_{w\in V(L{\mathbb{T}})} \xi_w=\Xi$;
\item for every $w\in V(L{\mathbb{T}})$, $|\xi_w|\le |\xi_{w_{max}}|$ and $|\xi_w| \le |\xi_v|$ for every $v\in R(w):=\{
v=v_{l,1},\ldots,v_{l,J_l}\ |\ v\to w\}$ (note that $R(w)$ may be empty). See Fig. \ref{Fig6}. \end{itemize}
\begin{figure}\label{Fig6}
\end{figure}
Note that $|\Xi|\lesssim |\xi_{w_{max}}| \lesssim |\Xi|$ since every vertex in $V(L{\mathbb{T}})$ connects to the root (see first lines of the proof of Lemma \ref{lem:3:increment-Holder-rate} (2)).
If $w\in L{\mathbb{T}}$, split $R(w)$ into $R(w)_{>}\cup R(w)_{<}$, where $R(w)_{\gtrless}
:=\{v\in R(w)\ |\ v\gtrless w_{max}\}$. Summing over indices corresponding to vertices in $R{\mathbb{T}}_{>}:=
\{v=v_{l,1},\ldots,v_{l,J_l}\ |\ v>w_{max}\}=\cup_{w\in L{\mathbb{T}}} R(w)_{>}$, one gets (see again proof of Lemma \ref{lem:3:increment-Holder-rate} (2))
\begin{equation} \prod_{v\in R{\mathbb{T}}_>} \int_{|\xi_v|\ge |\Xi|} d\xi_v |\xi_v|^{-2|V(R_v{\mathbb{T}})|\alpha-1} \lesssim |\Xi|^{-2\alpha\sum_{v\in R{\mathbb{T}}_>} |V(R_v{\mathbb{T}})|}.\end{equation}
Let $w\in L{\mathbb{T}}\setminus\{w_{max}\}$ such that $R(w)_<\not=\emptyset$ (note that $R(w_{max})_<=\emptyset$).
Let
$R(w)_< = \{v_{i_1}<\ldots<v_{i_j}\}$. Then (integrating over $(\xi_v), v\in R(w)_<$)
\begin{eqnarray} && |\xi_w|^{-1-2\alpha} \int_{|\xi_{v_{i_1}}|\ge |\xi_w|} d\xi_{v_{i_1}} \int_{|\xi_{v_{i_2}}|\ge|\xi_{v_{i_1}}|} d\xi_{v_{i_2}}
\ldots \int_{|\xi_{v_{i_j}}|\ge|\xi_{v_{i_{j-1}}}|} d\xi_{v_{i_j}} \nonumber\\ &&
|\xi_{v_{i_1}}|^{-2|V(R_{v_{i_1}}{\mathbb{T}})|\alpha-1} \ldots |\xi_{v_{i_j}}|^{-2|V(R_{v_{i_j}}{\mathbb{T}})|\alpha-1}
\lesssim |\xi_w|^{-1-2\alpha(1+ \sum_{v\in R(w)_<} |V(R_v{\mathbb{T}})|)}. \nonumber\\ \end{eqnarray}
In other words, each vertex $w\in L{\mathbb{T}}$ `behaves' as if it had a weight $1+\sum_{v\in R(w)_<} |V(R_v{\mathbb{T}})|.$ Hence (by the same method as in the proof of Lemma \ref{lem:3:increment-Holder-rate} (2))
${\mathrm{Var}}(a_s(\Xi))\lesssim |\Xi|^{-1-2\alpha(|V(L{\mathbb{T}})|+\sum_{v\in R{\mathbb{T}}_<} |V(R_v{\mathbb{T}})|)} \cdot |\Xi|^{-2\alpha\sum_{v\in R{\mathbb{T}}_>} |V(R_v{\mathbb{T}})|}
= |\Xi|^{-1-2\alpha|V({\mathbb{T}})|}.$ Now apply Lemma \ref{lem:3:Kah} (i).
\item Similar to the proof of Lemma \ref{lem:3:increment-Holder-rate} (3). Details are left to the reader. \end{enumerate}
$\Box$
\section{End of proof and final remarks}
\subsection{Estimates: case of coinciding indices}
Our previous estimates for ${\mathbb{E}}|{\cal R}{\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_{n})|^2$ (H\"older estimate) and \\
${\mathbb{E}}|{\cal R}{\bf B}^{n,\eta_1}_{ts}(i_1,\ldots,i_{n})-{\cal R}{\bf B}^{n,\eta_2}_{ts}(i_1,\ldots,i_{n})|^2$ (rate of convergence) with $i_1\not=\ldots\not=i_n$ rest on the independence of the Brownian motions $W(i_1),\ldots,W(i_n)$.
We claim that the same estimates also hold true for ${\mathbb{E}}
|{\cal R}{\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_n)|^2$ and ${\mathbb{E}}|{\cal R}{\bf B}^{n,\eta_1}_{ts}(i_1,\ldots,i_n)-{\cal R}{\bf B}^{n,\eta_2}_{ts}(i_1,\ldots,i_n)|^2$ if some of the indices $(i_1,\ldots,i_n)$ coincide, with the {\it same definition} of the regularization procedure ${\cal R}$. The key lemma for the proof is
\begin{Lemma}[Wick's lemma (see \cite{LeB}, \S 5.1.2 and 9.3.4)]
{\it Let $(X_1,\ldots,X_{n})$ be a centered Gaussian vector. Denote by $X_{i_1}\diamond\ldots\diamond X_{i_k}$ $(1\le i_1,\ldots,i_k\le n)$ or $:X_{i_1}\ldots X_{i_k}:$
the Wick product of $X_{i_1},\ldots,X_{i_k}$ (also called: {\it normal ordering} of the product $X_{i_1}\ldots X_{i_k}$), i.e. the projection of the product $X_{i_1}\ldots X_{i_k}$ onto the $k$-th chaos of the Gaussian space generated by $X_1,\ldots,X_{n}$. Then:
\begin{enumerate}
\item
\begin{eqnarray} && X_1\ldots X_{n} = X_{1}\diamond\ldots\diamond X_{n}
+ \sum_{(i_1,i_2)} {\mathbb{E}}[X_{i_1}X_{i_2}] X_{1}\diamond \ldots \diamond \check{X}_{i_1}\diamond \ldots\diamond \check{X}_{i_2}\diamond\ldots\diamond X_n \nonumber\\ && \qquad + \ldots+ \sum_{(i_1,i_2),\ldots,(i_{2k+1},i_{2k+2})}
{\mathbb{E}}[X_{i_1}X_{i_2}]\ldots {\mathbb{E}}[X_{i_{2k+1}}X_{i_{2k+2}}] \nonumber\\ && \qquad X_1\diamond\ldots\diamond \check{X}_{i_1}\diamond\ldots\diamond \check{X}_{i_2} \diamond\ldots\diamond \check{X}_{i_{2k+1}} \diamond\ldots\diamond \check{X}_{i_{2k+2}} \diamond\ldots\diamond X_n \nonumber\\ && +\ldots, \end{eqnarray} where the sum ranges over all partial pairings of indices $(i_1,i_2),\ldots,(i_{2k+1},i_{2k+2})$ $(1\le k\le \lfloor \frac{n}{2}\rfloor -1)$.
\item For every set of indices $i_1,\ldots,i_{j},i'_1,\ldots,i'_{j}$, \begin{equation} {\mathbb{E}}\left[ (X_{i_1}\diamond\ldots\diamond X_{i_{j}})(X_{i'_1}\diamond\ldots\diamond X_{i'_{j}}) \right] =\sum_{\sigma\in \Sigma_j} \prod_{m=1}^{j} {\mathbb{E}}[ X_{i_m} X_{i'_{\sigma(m)}}]. \end{equation} \end{enumerate} } \label{lem:7:Wick} \end{Lemma}
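As an elementary illustration, taking expectations in part 1 of the lemma for $n=4$ (all Wick products of order $\ge 1$ being centered) yields the classical pairing formula for the fourth moment of a centered Gaussian vector $(X_1,X_2,X_3,X_4)$,
\begin{equation} {\mathbb{E}}[X_1 X_2 X_3 X_4]={\mathbb{E}}[X_1X_2]\,{\mathbb{E}}[X_3X_4]+{\mathbb{E}}[X_1X_3]\,{\mathbb{E}}[X_2X_4]+{\mathbb{E}}[X_1X_4]\,{\mathbb{E}}[X_2X_3]. \end{equation}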
In our case (considering ${\cal R} {\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_n)$) we get a decomposition of the product $dW_{\xi_1}(i_1)\ldots dW_{\xi_{n}}(i_n)$ into $dW_{\xi_1}(i_1)\diamond\ldots\diamond dW_{\xi_n}(i_n)$,
plus the sum over all possible non-trivial pair contractions, schematically $\langle W'_{\xi_j}(i_j) W'_{\xi_{j'}}(i_{j'})\rangle=\delta_0(\xi_j+\xi_{j'}) \delta_{i_j,i_{j'}}.$
Consider first the normal ordering of ${\cal R}{\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_n)$. As in the proof of Lemma 5.10 in \cite{TinUnt08}, let $\Sigma_{\vec{i}}$ be the
`index-fixing' subgroup of $\Sigma_n$ such that $\sigma'\in\Sigma_{\vec{i}}\Longleftrightarrow \forall j=1,\ldots,n, \ i_{\sigma'(j)}=i_j$. Then (by Wick's lemma and the Cauchy-Schwarz inequality): \begin{eqnarray} && {\mathrm{Var}} :{\cal R}{\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_n): =
{\mathbb{E}} \left| :{\cal R}{\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_n): \right|^2 \nonumber\\ && = \sum_{\sigma'\in\Sigma_{\vec{i}}} {\mathbb{E}}\left[ :{\cal R}{\bf B}^{n,\eta}_{ts}(1,\ldots,n): \ :{\cal R}{\bf B}^{n,\eta}_{ts}(\sigma'(1),\ldots,\sigma'(n)): \right] \nonumber\\
&& \le |\Sigma_{\vec{i}}| \cdot {\mathbb{E}} |{\cal R}{\bf B}^{n,\eta}_{ts}(1,\ldots,n)|^2, \label{eq:7:3}\end{eqnarray} hence the H\"older and rate estimates of section 3 also hold for \\ $:{\cal R}{\bf B}^{n,\eta}_{ts}(i_1,\ldots,i_n):$.
One must now prove that the estimates of section 3 hold true for all possible contractions of ${\cal R}{\bf B}^{n,\eta}(i_1,\ldots,i_n)$. Fixing some non-trivial contraction $(j_1,j_2),\ldots, (j_{2l-1},j_{2l})$, $l\ge 1$, results in an expression ${\bf X}^{contr}_{ts}$ belonging to the chaos of order $n-2l$. By necessity, $i_{j_1}=i_{j_2},\ldots,i_{j_{2l-1}}=i_{j_{2l}}$, but it may well be that there are other index coincidences. The same reasoning as in the case of $:{\cal R}{\bf B}_{ts}^{n,\eta}(i_1,\ldots,i_n):$ (see eq. (\ref{eq:7:3})) shows that one may actually assume $i_m\not=i_{m'}$ if $m\not=m'$ and $\{m,m'\}\not=\{j_1,j_2\},\ldots,\{j_{2l-1},j_{2l}\}$. Now (as we shall presently prove) the tree integrals related to the contracted iterated integral ${\bf X}^{contr}_{ts}$ may be estimated by considering the tree integrals related to $\check{\bf X}_{ts}:={\cal R}{\bf B}_{ts}^{n-2l,\eta}(i_1,\ldots,\check{i_{j_1}},\ldots,\check{i_{j_{2l}}},\ldots, i_n)$ (which has the same law as ${\cal R}{\bf B}^{n-2l,\eta}_{ts}(1,\ldots,n-2l)$) and (following the idea introduced in the course of the proof of Lemma \ref{lem:3:increment-Holder-rate}) increasing by one the weight $W$ of some other (possibly coinciding) indices $j'_1,\ldots,j'_{2l}\not= j_1,\ldots, j_{2l}$ -- or, in other words,
`inserting' a factor $|\xi_{j'_1}|^{-2\alpha}\ldots |\xi_{j'_{2l}}|^{-2\alpha}$ in the variance integrals --.
This amounts in the end to increasing the H\"older regularity $(n-2l)\alpha^-$
of $\check{\bf X}_{ts}$ by $2l\alpha$, which gives the expected regularity.
Fix some permutation $\sigma\in\Sigma_n$, and consider the integral over the Fourier domain
$|\xi_{\sigma(1)}|\le \ldots \le |\xi_{\sigma(n)}|$ as in section 2. Change as before the order of integration and the names of the indices so that $dW_{\xi_{\sigma(j)}}(i_j)\to dW_{\xi_j}(i_{\sigma(j)})$;
for convenience, we shall still index the pairing indices as $(j_1,j_2),\ldots,(j_{2l-1},j_{2l})$. We may assume that $|j_{2k-1}-j_{2k}|= 1$, $k=1,\ldots,l$ (otherwise $|\xi_m|=|\xi_{j_{2k-1}}|=
|\xi_{j_{2k}}|$ for $j_{2k-1}<m<j_{2k}$ or $j_{2k}<m<j_{2k-1}$, which corresponds to
a Fourier subdomain of zero Lebesgue measure). In the sequel, we fix $\sigma\in\Sigma_n$ and $(j,j')=(j_{2k-1},j_{2k})$ for some $k$.
Let $\tilde{{\mathbb{T}}}=\tilde{{\mathbb{T}}}_1\ldots\tilde{{\mathbb{T}}}_L$
be a forest appearing in the decomposition of the permutation graph ${\mathbb{T}}^{\sigma}$ as in subsection 2.4. Applying repeatedly Lemma \ref{lem:2:reg} to $\tilde{{\mathbb{T}}}$ leads to a sum of
terms obtained from the contraction of $A_{ts}=A_{ts}(1)\ldots A_{ts}(L)$, with $ A_{ts}(k)= [\delta{\cal R}{\mathrm{Sk\,I}}_{L\tilde{{\mathbb{T}}}_{k}}]_{ts} \prod_j [{\cal R}{\mathrm{Sk\,I}}_{{\mathbb{T}}'_{k,j}}]_{s} \left(\left( \otimes_{v\in V(\tilde{{\mathbb{T}}}_k)} dB^{\eta}(i_v)\right)^{\sigma}\right)$, where $L\tilde{{\mathbb{T}}}_k,{\mathbb{T}}'_{k,1},\ldots, {\mathbb{T}}'_{k,j},\ldots$ are all subtrees appearing in the splitting associated to $A_{ts}(k)$ (see proof of Lemma \ref{lem:3:boundary-Holder-rate}).
Let ${\mathbb{T}}$ be one of the above trees, either $L\tilde{{\mathbb{T}}}_k$ or ${\mathbb{T}}'_{k,j}$. Reconsider the proof of the H\"older
estimate or rate of convergence in Lemma \ref{lem:3:increment-Holder-rate} or Lemma \ref{lem:3:boundary-Holder-rate}. The integrals $\left[ {\mathrm{Sk\,I}} \left( (x_v)_{v\in V({\mathbb{T}})} \to e^{{\rm i} \sum_{v\in V({\mathbb{T}})} x_v \xi_v} \right) \right]_u$
appearing in the definition of the regularized skeleton integrals read ${\rm i}^{-|V({\mathbb{T}})|} \frac{e^{{\rm i} u\sum_{v\in V({\mathbb{T}})} \xi_v}}{ \prod_{v\in V({\mathbb{T}})} (\xi_v+\sum_{w\twoheadrightarrow v} \xi_w)}$ (see Lemma \ref{lem:2:SkI}). After the contractions, one must sum over Fourier indices $(\xi_v)_{v\in V({\mathbb{T}})}$ such that $(\xi_v)_{v\in V({\mathbb{T}})}\in \mathbb{R}^{{\mathbb{T}}}_{reg}$ and $\xi_{j_{2m-1}}=-\xi_{j_{2m}}$ if both $j_{2m-1},j_{2m}\in V({\mathbb{T}})$.
Let $\check{{\mathbb{T}}}$ be the contracted tree obtained by `skipping' $\{j_1,\ldots,j_{2l}\}\cap V({\mathbb{T}})$ while going down the tree ${\mathbb{T}}$
(see Fig. \ref{Fig7}, \ref{Fig8}, \ref{Fig9}).
\begin{figure}\label{Fig7}
\end{figure}
\begin{figure}\label{Fig8}
\end{figure}
\begin{figure}\label{Fig9}
\end{figure}
The denominator $|\xi_v+\sum_{w\in{\mathbb{T}}, w\twoheadrightarrow v} \xi_w|$ is larger (up to a constant) than the denominator $|\xi_v+\sum_{w\in\check{{\mathbb{T}}}, w\twoheadrightarrow v} \xi_w|$ obtained by considering the same term in the contracted tree integral $\check{\bf X}_{ts}$ (namely,
$|\xi_v+\sum_{w\in{\mathbb{T}}, w\twoheadrightarrow v} \xi_w|$ is of the same order as
$\max\{|\xi_w|; w\in{\mathbb{T}},w\twoheadrightarrow v\}\ge \max\{|\xi_w|;w\in\check{{\mathbb{T}}},w\twoheadrightarrow v\}$). Hence ${\mathbb{E}} (A_{ts}^{contr})^2$ may be bounded in the same way as
${\mathbb{E}} A_{ts}^2$ in the proof of Lemma \ref{lem:3:increment-Holder-rate} or Lemma \ref{lem:3:boundary-Holder-rate}, except that each term in the sum over $(\xi_{v},v\in V({\mathbb{T}}),v\not=j_1,\ldots,j_{2l})$ comes with an extra multiplicative pre-factor $S=S((\xi_v),v\in V({\mathbb{T}}),v\not=j_1,\ldots,j_{2l})$ -- due to the sum over $(\xi_{j_m})_{m=1,\ldots,2l}$ -- which may be seen as an `insertion'.
Let us estimate this prefactor. We shall assume for the sake of clarity that there is a single contraction $(j_1,j_2)=(j,j')$ (otherwise the prefactor should be evaluated by contracting each tree in several stages, `skipping' successively
$(j_1,j_2),\ldots,(j_{2l-1},j_{2l})$ by pairs). As already mentioned, $|j-j'|=1$, so that $j$ and $j'$ must be successive vertices if they belong to the same branch of the same tree ${\mathbb{T}}$. Note that, if $j$ and $j'$ are on the same tree,
the Fourier index $\Xi:=\sum_{v\in V({\mathbb{T}})} \xi_v$ (used in the Fourier decomposition of Lemma \ref{lem:3:increment-Holder-rate} or in the third step of Lemma \ref{lem:3:boundary-Holder-rate}) is left unchanged since $\xi_j+\xi_{j'}=0$.
\underline{Case (i)}: $(j,j')$ belong to unconnected branches of the same tree ${\mathbb{T}}$.
This case splits into three different subcases:
\begin{itemize}
\item[(i-a)] neither $j$ nor $j'$ is a leaf. Let $w$, resp. $w'$ be the leaf above $j$, resp. $j'$ of maximal index and assume (without loss of generality) that $|\xi_{w}|\le |\xi_{w'}|$. Then
\begin{eqnarray} && S \lesssim \left( \int_{|\xi_{j}|\le |\xi_{w}|} d\xi_j \frac{|\xi_j|^{1-2\alpha}}{|\xi_{w}\xi_{w'}|} \right)^2
\lesssim \left( \int_{|\xi_{j}|\le |\xi_{w}|} d\xi_j |\xi_{w}|^{-1-2\alpha} \right)^2 \lesssim |\xi_{w}|^{-4\alpha}\nonumber\\ \end{eqnarray} which has the effect of increasing the weight $W(w)$ by $2$.
\item[(i-b)] $j$ is a leaf, $j'$ is not. Let $w'$ be the leaf of maximal index above $j'$. Then
\begin{eqnarray} && S\le \left( \int_{|\xi_{j}|\le |\xi_{w'}|} d\xi_j \frac{ |\xi_{j}|^{1-2\alpha}}{|\xi_{j} \xi_{w'}|} \right)^2
\lesssim \left( \frac{1}{|\xi_{w'}|} \int_{|\xi_{j}|\le |\xi_{w'}|} d\xi_j |\xi_{j}|^{-2\alpha} \right)^2 \lesssim
|\xi_{w'}|^{-4\alpha}. \nonumber\\ \end{eqnarray}
\item[(i-c)] both $j$ and $j'$ are leaves. Let $v$, resp. $v'$ be the vertex below $j$, resp. $j'$, i.e. $j\to v$, $j'\to v'$. Then
\begin{equation} S\lesssim \left( \int_{|\xi_{j}|\ge \max(|\xi_{v}|,|\xi_{v'}|)} d\xi_j |\xi_j|^{-1-2\alpha}
\right)^2\lesssim |\xi_{v}|^{-4\alpha}\end{equation} which has the effect of increasing $W(v)$ by $2$.
\end{itemize}
\underline{Case (ii)}: $(j,j')$ are successive vertices on the same branch of the same tree ${\mathbb{T}}$. Assume (without loss of generality) that $j\to j'$. Then $S=0$ if $j$ is a leaf
(since $\xi_{j'}+\sum_{w\twoheadrightarrow j'}\xi_w=\xi_{j}+\xi_{j'}=0$
and such indices fail to meet the condition defining
$\mathbb{R}^{{\mathbb{T}}}_{reg}$), otherwise $S\lesssim |\xi_w|^{-4\alpha}$ if $w$ is the leaf of maximal index above $j$ (by the same argument as in case (i-a)).
\underline{Case (iii)}: $(j,j')$ belong to two different trees, ${\mathbb{T}}$ and ${\mathbb{T}}'$.
This case is a variant of case (i). Nothing changes compared to case (i) unless (as in the proof of Lemma \ref{lem:3:increment-Holder-rate} or in the 3rd step of Lemma \ref{lem:3:boundary-Holder-rate}) one needs to compute the variance of the coefficient $a(\Xi)$ or $a_s(\Xi)$ of $e^{{\rm i} u\Xi}$ for $\Xi$ fixed. Assume $j$ belongs to the tree ${\mathbb{T}}=L\tilde{{\mathbb{T}}}_{k}$ while $j'$ is on one of the cut trees ${\mathbb{T}}'_{k,1},\ldots,{\mathbb{T}}'_{k,j},\ldots$
Assume first $j$ is not a leaf,
and let $w$ be the leaf above $j$. Then the presence of the extra vertex $j$ modifies the Fourier index $\Xi$ in the Fourier decomposition of $A^{contr}_{ts}(k)$, $A^{contr}_{ts}(k)=\int_{\mathbb{R}} a(\Xi) (e^{{\rm i} \Xi t}-e^{{\rm i} \Xi s}) d\Xi$ or $A^{contr}_{ts}(k)=\int_{\mathbb{R}} a_s(\Xi) (e^{{\rm i} \Xi t}-e^{{\rm i} \Xi s}) d\Xi$,
by a factor which is bounded and bounded away from $0$, hence $S\lesssim |\xi_{w}|^{-4\alpha}$ as in case (i-a).
If $j$ is a leaf as in case (i-b) -- while
$w'$ is as before the leaf of maximal index over $j'$ --, one has: $|\xi_{j}|\lesssim |\Xi|\lesssim |\xi_{j}|$. Hence the sum over $\xi_{j}$ contributes an extra multiplicative pre-factor $S$ to the variance of the coefficient $a(\Xi)$ or $a_s(\Xi)$, of order
\begin{equation} S\lesssim \left( \int_{|\Xi|/2\le |\xi_j|\le 2|\Xi|} d\xi_j
\frac{|\xi_{j}|^{1-2\alpha}}{|\xi_{j}\xi_{w'}|}\right)^2 \lesssim
\left( \int_{|\Xi|/2\le |\xi_j|\le 2|\Xi|}
|\xi_j|^{-1-2\alpha} \right)^2 \lesssim |\Xi|^{-4\alpha},\end{equation} which increases the H\"older index by $2\alpha$ (see Lemma \ref{lem:3:Kah}).
The case when both $j$ and $j'$ belong to left parts $L\tilde{{\mathbb{T}}}_{k}$, $L\tilde{{\mathbb{T}}}_{k'}$
is similar and left to the reader.
$\Box$
This concludes at last the proof of Theorem \ref{th:0}.
\subsection{A remark: about the two-dimensional antisymmetric fBm}
Consider a one-dimensional analytic fractional Brownian motion $\Gamma$ as in \cite{TinUnt08}.
\begin{Definition}
{\it Let $Z_t=(Z_t(1),Z_t(2))=(2{\rm Re\ } \Gamma_t,2{\rm Im\ }\Gamma_t)$, $t\in\mathbb{R}$. We call this new centered Gaussian process indexed by $\mathbb{R}$
the {\em two-dimensional antisymmetric fBm}. }
\end{Definition}
Its paths are a.s. $\alpha^-$-H\"older. The marginal processes $Z(1)$, $Z(2)$ are usual fractional Brownian motions. The covariance between $Z(1)$ and $Z(2)$ reads (see \cite{TinUnt08})
\begin{equation} {\mathrm{Cov}}(Z_s(1),Z_t(2))=-\frac{\tan\pi\alpha}{2}[-{\mathrm{sgn}}(s)|s|^{2\alpha}+{\mathrm{sgn}}(t) |t|^{2\alpha}
-{\mathrm{sgn}}(t-s) |t-s|^{2\alpha}].\end{equation}
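In particular, exchanging $s$ and $t$ in this formula reverses its sign,
\begin{equation} {\mathrm{Cov}}(Z_t(1),Z_s(2))=-{\mathrm{Cov}}(Z_s(1),Z_t(2)), \end{equation}
which accounts for the name {\em antisymmetric}.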
Note that we never used any particular linear combination of the analytic/anti-analytic components of $B$ in the estimates of sections 3 and 4. Hence these also hold for $Z$, which gives for free a rough path over $Z$ satisfying Theorem \ref{th:0} of the Introduction.
\end{document} |
\begin{document}
\title{Exact solution for the interaction of two decaying quantized fields}
\author{L.~Hernández-Sánchez} \affiliation{Instituto Nacional de Astrofísica Óptica y Electrónica, Calle Luis Enrique Erro No. 1\\ Santa María Tonantzintla, Pue., 72840, Mexico} \author{I. Ramos-Prieto} \email[e-mail: ]{[email protected]} \affiliation{Instituto Nacional de Astrofísica Óptica y Electrónica, Calle Luis Enrique Erro No. 1\\ Santa María Tonantzintla, Pue., 72840, Mexico} \author{F. Soto-Eguibar} \affiliation{Instituto Nacional de Astrofísica Óptica y Electrónica, Calle Luis Enrique Erro No. 1\\ Santa María Tonantzintla, Pue., 72840, Mexico} \author{H. M. Moya-Cessa} \affiliation{Instituto Nacional de Astrofísica Óptica y Electrónica, Calle Luis Enrique Erro No. 1\\ Santa María Tonantzintla, Pue., 72840, Mexico} \email[e-mail: ]{[email protected]}
\begin{abstract} We show that the Markovian dynamics of two coupled harmonic oscillators may be analyzed using a Schrödinger equation and an effective non-Hermitian Hamiltonian. This may be achieved by a non-unitary transformation that involves superoperators; such a transformation removes the quantum jump superoperators, which allows us to rewrite the Lindblad master equation as a von Neumann-like equation with an effective non-Hermitian Hamiltonian. The procedure may be generalized to an arbitrary number of interacting fields. Finally, by applying an extra non-unitary transformation, we diagonalize the effective non-Hermitian Hamiltonian to obtain the evolution of any input state in a fully quantum domain. \end{abstract} \date{\today} \maketitle
When a quantum system is influenced by its environment, one of the main tools to describe the Markovian dynamics is the Lindblad master equation~\cite{Breuer_Book}. It is instructive to rewrite the Lindblad master equation in two parts: on the one hand, the terms that preserve the number of excitations, collected in an effective non-Hermitian Hamiltonian, and on the other hand, the remaining terms that describe the quantum jumps of the system (de-excitations); understanding and studying Markovian dynamics in this way is intimately related to the quantum trajectories technique or the Monte-Carlo wave-function method~\cite{Carmichael_Book,Dum_1992,Molmer_1993}. Nevertheless, even if the non-unitary effective part of the equation is solvable, the full solution of the Lindblad master equation is not a trivial task~\cite{Prosen_2012,Torres_2014,Minganti_2019,Arkhipov_2020,Minganti_2020,Teuber_2020}. Despite this, in non-Hermitian or semi-classical approaches, interest in such non-Hermitian systems has grown vastly due to their unusual properties, and hallmarks of these systems are the concepts of parity-time~($\mathcal{PT}$)~symmetry and exceptional points (EPs)~\cite{Bender_1998,ElGanainy_2007,Ruter_2010,Heiss_2012,Kato,MohammadAli_2019,Quiroz_2019,Tschernig_2022}. In addition to the above, in a fully quantum domain, where light behaves non-classically, multi-particle quantum interference phenomena and non-classical states arise~\cite{Lai_1991,Klauck_2019,Longhi_2020}. In this sense, and although the quantum jump superoperators are suppressed in the non-Hermitian Hamiltonian formalism, from our perspective there is a question that must be answered: Is it possible to establish a direct equivalence with the Markovian dynamics through some transformation? This question represents the main motivation of the present work.
On the other hand, since the paraxial equation and the Schrödinger equation are isomorphic under certain conditions (see, for instance,~\cite{Longhi_2009} and references therein), photonic structures or evanescently coupled waveguides are the main platforms for directly observing non-unitary evolution~\cite{ElGanainy_2007,Guo_2009, Ruter_2010,Ornigotti_2014,Zhang_2016}. Significantly, non-Hermitian systems exhibiting $\mathcal{PT}$-symmetry and exceptional points have generated a wide range of applications, e.g., loss-induced transparency, laser mode control, optical detection and unidirectional invisibility, just to name a few~\cite{Guo_2009,Lin_2011,Feng_2017,ElGanainy_2018,MohammadAli_2019,Ozdemir_2019}. Recently, two representative examples of two-photon quantum interference have been considered in photonic systems with passive losses: the Hong-Ou-Mandel dip in a passive $\mathcal{PT}$-symmetric optical directional coupler~\cite{Hong-Ou-Mandel,Klauck_2019}, and a phase transition at the coalescence point~\cite{Longhi_2020}. However, although dissipation processes can be described by coupling one or a set of waveguides to one or more reservoirs at zero temperature~\cite{Klauck_2019}, one hopes to have greater control, a priori, over the photonic platform if the quantum jump superoperators are judiciously removed. In fact, removing these superoperators under some transformation leads to the non-Hermitian Hamiltonian formalism.
In this contribution, we consider two quantized fields experiencing (Markovian) losses. These systems have been used to describe other physical systems of bosonic nature, e.g., states of light traveling along two evanescently coupled waveguides~\cite{Lai_1991,Klauck_2019,Longhi_2020}. By means of two transformations, we demonstrate the following: a) the non-unitary dynamics governed by the Lindblad master equation and by the von Neumann-like equation with an effective non-Hermitian Hamiltonian are equivalent, the quantum jump superoperators being removed by the transformation $e^{\pm(\hat{J}_a+\hat{J}_b)/2}\hat{\rho}$; b) we diagonalize the effective non-Hermitian Hamiltonian to obtain the evolution of any input state in a fully quantum domain. These are the main contributions of this work, because any non-classical state that is constrained to Markovian dynamics can be equivalently described in terms of light states crossing non-Hermitian systems~(e.g., waveguides or directional couplers) with~(experimentally feasible) passive losses, provided that the input and output states are transformed by means of~$\exp\left[\pm(\hat{J}_a+\hat{J}_b)/2\right]\left[\hat{\bullet}\right]$ (see Fig.~\ref{Fig_1}). We would like to stress that such a transformation may be easily generalized to an arbitrary number of interacting and decaying fields.
Let us consider the interaction between two quantized fields subject to Markovian losses, determined by the loss rates $\gamma_a$ and $\gamma_b$, respectively. The Markovian dynamics in the interaction picture for the reduced density matrix $\hat{\rho}$ is governed by the Lindblad master equation~\cite{Breuer_Book,Carmichael_Book} \begin{equation}\label{ME1}
\frac{d\hat{\rho}}{dz} =-\mathrm{i}g\hat{S}\hat{\rho} +\gamma_a\mathcal{L}[\hat{a}]\hat{\rho}+\gamma_b\mathcal{L}[\hat{b}]\hat{\rho}, \end{equation} where $\hat{S}\hat{\rho}=[\hat{a}\hat{b}^\dagger+\hat{a}^\dagger\hat{b},\hat{\rho}]$ determines the interaction of the two fields or light modes, while the second and third terms on the right-hand side are the dissipation or quantum jump superoperators, with $\mathcal{L}\left[\hat{c}\right]\hat{\rho}:=2\hat{c}\hat{\rho}\hat{c}^\dagger-\left(\hat{c}^\dagger\hat{c}\hat{\rho}+\hat{\rho}\hat{c}^\dagger\hat{c}\right)$, where $\hat{c}=\hat{a},\hat{b}$. Here, $\hat{a}$ ($\hat{a}^\dagger$) and $\hat{b}$ ($\hat{b}^\dagger$) are the usual annihilation (creation) operators, and $g$ denotes the coupling strength between the two bosonic field modes.
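As a quick consistency check of the master equation above, one can verify numerically that its generator is trace-preserving. The following sketch is our own illustration, not part of the derivation: the two-mode Fock space is truncated at two total photons (enough for a two-photon input, since the jumps only lower the photon number), and the parameter values are arbitrary.

```python
import math

# Two-mode Fock basis |na, nb> truncated at na + nb <= 2.
basis = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
idx = {s: i for i, s in enumerate(basis)}
dim = len(basis)

def annihilator(mode):  # a (mode 0) or b (mode 1): a|n> = sqrt(n)|n-1>
    M = [[0j] * dim for _ in range(dim)]
    for j, (na, nb) in enumerate(basis):
        n = na if mode == 0 else nb
        if n > 0:
            tgt = (na - 1, nb) if mode == 0 else (na, nb - 1)
            M[idx[tgt]][j] = math.sqrt(n)
    return M

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(dim))
             for j in range(dim)] for i in range(dim)]

def dag(X): return [[X[j][i].conjugate() for j in range(dim)] for i in range(dim)]
def sca(c, X): return [[c * x for x in row] for row in X]
def add(*Ms): return [[sum(M[i][j] for M in Ms) for j in range(dim)] for i in range(dim)]

A, B = annihilator(0), annihilator(1)
# a b+ + a+ b; the lowering operator is applied first, so the truncated
# matrices represent the interaction exactly on this subspace.
K = add(mul(dag(B), A), mul(dag(A), B))
g, ga, gb = 1.0, 0.75, 0.3            # illustrative parameter values

def dissipator(C, r):  # L[c] r = 2 c r c+ - c+c r - r c+c
    Cd = dag(C); CdC = mul(Cd, C)
    return add(sca(2, mul(mul(C, r), Cd)), sca(-1, mul(CdC, r)), sca(-1, mul(r, CdC)))

def lindblad_rhs(r):   # right-hand side of the master equation
    return add(sca(-1j * g, add(mul(K, r), sca(-1, mul(r, K)))),
               sca(ga, dissipator(A, r)), sca(gb, dissipator(B, r)))

rho0 = [[0j] * dim for _ in range(dim)]
rho0[idx[(1, 1)]][idx[(1, 1)]] = 1.0 + 0j          # input |1,1><1,1|
trace_err = abs(sum(lindblad_rhs(rho0)[i][i] for i in range(dim)))
print(trace_err)  # ~0: d tr(rho)/dz = 0
```

The commutator and the dissipators are separately traceless, so the printed residue is pure floating-point noise.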
In order to solve the Lindblad master equation, it is important to note that the superoperator $2\hat{c}\hat{\rho}\hat{c}^\dagger$ prevents us from rewriting or reinterpreting such an equation in terms of a von Neumann-like equation of the form $d\hat{\varrho}/dz =-\mathrm{i}(\hat{H}_{\mathrm{eff}}\hat{\varrho}-\hat{\varrho}\hat{H}_{\mathrm{eff}}^\dagger)$. Despite this, by defining the superoperators $\hat{J}_c\hat{\rho}:=2\hat{c}\hat{\rho}\hat{c}^\dagger$ and $\hat{L}_c\hat{\rho}:=\hat{c}^\dagger\hat{c}\hat{\rho}+\hat{\rho}\hat{c}^\dagger\hat{c}$, it is not difficult to show that $[(\hat{J}_a+\hat{J}_b),\hat{S}]\hat{\rho}=0$ and $[\hat{J}_c,\hat{L}_c]\hat{\rho}=2\hat{J}_c\hat{\rho}$. This simple but decisive finding allows us to perform the transformation \begin{equation}\label{Trho}
\hat{\rho}=\exp \left[-\chi\left(\hat{J}_a+\hat{J}_b\right) \right]\hat{\varrho}, \end{equation} to obtain \begin{equation} \begin{split}
\frac{d\hat{\varrho}}{dz}=&-\mathrm{i}g\hat{S}\hat{\varrho}+\gamma_a\left[\left(1-2\chi\right)\hat{J}_a-\hat{L}_a\right]\hat{\varrho}\\&+\gamma_b\left[\left(1-2\chi\right)\hat{J}_b-\hat{L}_b\right]\hat{\varrho}, \end{split} \end{equation} and by setting $\chi=1/2$, the above equation reads \begin{equation} \label{rho_J} \frac{d\hat{\varrho}}{dz}=-\mathrm{i}\left(\hat{H}_{\mathrm{eff}}\hat{\varrho}-\hat{\varrho}\hat{H}_{\mathrm{eff}}^\dagger\right), \end{equation} where $\hat{H}_{\mathrm{eff}}=-\mathrm{i}\gamma_a\hat{a}^\dagger\hat{a}-\mathrm{i}\gamma_b\hat{b}^\dagger\hat{b}+g\left(\hat{a}\hat{b}^\dagger+\hat{a}^\dagger\hat{b}\right)$ is an effective non-Hermitian Hamiltonian.\\ We can note that the density matrices $\hat{\rho}$ and $\hat{\varrho}$, governed by \eqref{ME1} and \eqref{rho_J}, respectively, evolve in two different ways~\cite{Minganti_2019,Minganti_2020,Arkhipov_2020}. A sudden change in the state of the system due to the term $(\hat{J}_a+\hat{J}_b)\hat{\rho}$ is accounted for in the Lindblad master equation as an average over many quantum trajectories after many experimental realizations or numerical stochastic simulations~\cite{Carmichael_Book,Dum_1992,Molmer_1993}; on the other hand, when the quantum jump superoperators are suppressed, the density matrix undergoes a purely passive dissipation process. Nevertheless, we show that it is possible to establish a direct equivalence between both approaches through the transformation $\hat{\rho} = e^{-(\hat{J}_a+\hat{J}_b)/2}\hat{\varrho}$. This is one of the main contributions of this work.
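The superoperator identities used above, and the resulting jump-free equation, can be checked numerically. The following sketch is our own illustration (not part of the paper's derivation): the Fock space is truncated at two total photons, where $(\hat{J}_a+\hat{J}_b)$ is nilpotent and the exponential series terminates, and the parameter values are arbitrary. It verifies $[(\hat{J}_a+\hat{J}_b),\hat{S}]\hat{\rho}=0$, $[\hat{J}_a,\hat{L}_a]\hat{\rho}=2\hat{J}_a\hat{\rho}$, and that $\hat{\varrho}=e^{+(\hat{J}_a+\hat{J}_b)/2}\hat{\rho}$ obeys the von Neumann-like equation with $\hat{H}_{\mathrm{eff}}$.

```python
import math

# Two-mode Fock basis truncated at total photon number 2 (an assumption
# of this illustration; all operators below keep states in the subspace).
basis = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
idx = {s: i for i, s in enumerate(basis)}
dim = len(basis)

def annihilator(mode):  # a (mode 0) or b (mode 1)
    M = [[0j] * dim for _ in range(dim)]
    for j, (na, nb) in enumerate(basis):
        n = na if mode == 0 else nb
        if n > 0:
            tgt = (na - 1, nb) if mode == 0 else (na, nb - 1)
            M[idx[tgt]][j] = math.sqrt(n)
    return M

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(dim))
             for j in range(dim)] for i in range(dim)]

def dag(X): return [[X[j][i].conjugate() for j in range(dim)] for i in range(dim)]
def sca(c, X): return [[c * x for x in row] for row in X]
def add(*Ms): return [[sum(M[i][j] for M in Ms) for j in range(dim)] for i in range(dim)]
def sub(X, Y): return add(X, sca(-1, Y))
def maxabs(X): return max(abs(x) for row in X for x in row)

A, B = annihilator(0), annihilator(1)
Ad, Bd = dag(A), dag(B)
g, ga, gb = 1.0, 0.75, 0.3                 # illustrative parameter values
K = add(mul(Bd, A), mul(Ad, B))            # a b+ + a+ b (lowering applied first)

def Jc(C, Cd, r): return sca(2, mul(mul(C, r), Cd))   # J_c r = 2 c r c+
def Lc(C, Cd, r):                                      # L_c r = c+c r + r c+c
    CdC = mul(Cd, C)
    return add(mul(CdC, r), mul(r, CdC))
def J(r): return add(Jc(A, Ad, r), Jc(B, Bd, r))
def S(r): return sub(mul(K, r), mul(r, K))             # S r = [a b+ + a+ b, r]

def expJ(r, sign):   # exp[sign (J_a+J_b)/2]; the series terminates (nilpotency)
    out, term = r, r
    for k in (1, 2, 3):
        term = sca(sign * 0.5 / k, J(term))
        out = add(out, term)
    return out

# A generic (deterministic) density matrix rho = M M+ / tr(M M+).
M = [[complex(math.sin(3 * i + j + 1), math.cos(2 * i - j))
      for j in range(dim)] for i in range(dim)]
rho = mul(M, dag(M))
rho = sca(1.0 / sum(rho[i][i] for i in range(dim)).real, rho)

err_JS = maxabs(sub(J(S(rho)), S(J(rho))))                        # [(J_a+J_b), S] = 0
err_JL = maxabs(sub(sub(Jc(A, Ad, Lc(A, Ad, rho)),
                        Lc(A, Ad, Jc(A, Ad, rho))),
                    sca(2, Jc(A, Ad, rho))))                      # [J_a, L_a] = 2 J_a
lindblad = add(sca(-1j * g, S(rho)),
               sca(ga, sub(Jc(A, Ad, rho), Lc(A, Ad, rho))),
               sca(gb, sub(Jc(B, Bd, rho), Lc(B, Bd, rho))))
H = add(sca(-1j * ga, mul(Ad, A)), sca(-1j * gb, mul(Bd, B)), sca(g, K))
varrho = expJ(rho, +1)
err_eq = maxabs(sub(expJ(lindblad, +1),
                    sca(-1j, sub(mul(H, varrho), mul(varrho, dag(H))))))
print(err_JS, err_JL, err_eq)  # all ~0
```

All three residues vanish to machine precision, confirming that the transformation removes the jump terms exactly.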
Now, from \eqref{rho_J}, we can recognize that the evolution of the density matrix is governed by a von Neumann-like equation, which in turn can be interpreted as two Schrödinger equations due to the non-Hermitian nature of $\hat{H}_{\mathrm{eff}}$, i.e., $\mathrm{i}\frac{d}{dz}\ket{\psi_{R}}=\hat{H}_{\mathrm{eff}}\ket{\psi_{R}}$ and $-\mathrm{i}\frac{d}{dz} \bra{\psi_L}=\bra{\psi_L}\hat{H}_{\mathrm{eff}}^\dagger$, with $\hat{\varrho} = \ket{\psi_R}\bra{\psi_L}$. In order to find a diagonal form of $\hat{H}_{\mathrm{eff}}$ (or $\hat{H}_{\mathrm{eff}}^\dagger$), it is useful and instructive to rewrite it as \begin{equation} \begin{split}\label{H_eff}
\hat{H}_{\mathrm{eff}}=&-\mathrm{i}\left[\frac{\gamma}{2}\left(\hat{a}^\dagger\hat{a}+\hat{b}^\dagger\hat{b}\right)+\frac{\Delta}{2}\left(\hat{b}^\dagger\hat{b}-\hat{a}^\dagger\hat{a}\right)\right]
\\& +g\left(\hat{a}\hat{b}^\dagger+\hat{a}^\dagger\hat{b}\right) \end{split} \end{equation} where $\gamma=\gamma_a+\gamma_b$ and $\Delta=\gamma_b-\gamma_a$; this allows us to notice that \begin{equation} \begin{split} \hat{N} &= \hat{a}^\dagger\hat{a}+\hat{b}^\dagger\hat{b},\\ \hat{J}_x&=\frac{1}{2}\left(\hat{a}\hat{b}^\dagger+\hat{a}^\dagger\hat{b}\right),\\ \hat{J}_y&=\frac{\mathrm{i}}{2}\left(\hat{a}^\dagger\hat{b}-\hat{a}\hat{b}^\dagger\right),\\ \hat{J}_z&=\frac{1}{2}\left(\hat{b}^\dagger\hat{b}-\hat{a}^\dagger\hat{a}\right), \end{split} \end{equation} satisfy the commutation rules $[\hat{J}_j,\hat{J}_k]=\mathrm{i}\epsilon_{jkl}\hat{J}_l$ (right-hand rule), and $[\hat{N},\hat{J}_j]=0$ (with $j=x,y,z$). We apply the non-unitary transformation $\mathcal{\hat{R}}= e^{\eta\hat{J}_y}$, namely $\hat{\mathcal{H}}=\mathcal{\hat{R}}^{-1}\hat{H}_{\mathrm{eff}}\mathcal{\hat{R}}$, and use the following identities \begin{equation} \begin{split} \hat{\mathcal{R}}^{-1}\hat{J}_z\hat{\mathcal{R}}&=\cosh(\eta)\hat{J}_z-\mathrm{i}\sinh(\eta)\hat{J}_x,\\ \hat{\mathcal{R}}^{-1}\hat{J}_x\hat{\mathcal{R}}&=\cosh(\eta)\hat{J}_x+\mathrm{i}\sinh(\eta)\hat{J}_z, \end{split} \end{equation} to get the transformed Hamiltonian \begin{equation} \begin{split} \hat{\mathcal{H}}=&-\mathrm{i}\frac{\gamma}{2}\hat{N}+\left[2g\cosh(\eta)-\Delta\sinh(\eta)\right]\hat{J}_x\\&+\mathrm{i}\left[2g\sinh(\eta)-\Delta\cosh(\eta)\right]\hat{J}_z. \end{split} \end{equation} Note that although $\hat{\mathcal{R}}$ is a non-unitary transformation, it preserves the commutation rules, but not the Hermiticity~\cite{Balian_1969}.
Therefore, from the above equation, by choosing $\tanh(\eta)=\Delta/2g$ for $\Delta\leq 2g$ and $\tanh(\eta)=2g/\Delta$ for $\Delta\geq 2g$, the diagonal form of $\hat{\mathcal{H}}$ is \begin{equation}\label{H_diag} \hat{\mathcal{H}}_{\mathrm{diag}}=\frac{1}{2}\begin{cases} -\mathrm{i}\gamma\hat{N}+\omega_{\mathrm{I}}\left(\hat{b}^\dagger\hat{b}-\hat{a}^\dagger\hat{a}\right),&\mbox{if}\quad\Delta\leq 2g,\\ -\mathrm{i}\left[\gamma\hat{N}+\omega_{\mathrm{II}}\left(\hat{b}^\dagger\hat{b}-\hat{a}^\dagger\hat{a}\right)\right],&\mbox{if}\quad\Delta\geq2g,\\ \end{cases} \end{equation} whose eigenvalues in the transformed diagonal basis are \begin{equation}\label{eigenvalores} \lambda_{jk}=\frac{1}{2} \begin{cases} -\mathrm{i}\gamma(j+k)+\omega_\mathrm{I}(k-j),&\mbox{if}\quad\Delta\leq 2g,\\ -\mathrm{i}\left[\gamma(j+k)+\omega_\mathrm{II}(k-j)\right],&\mbox{if}\quad\Delta\geq2g,\\ \end{cases} \end{equation} where $\omega_{\mathrm{I}}=\sqrt{4g^2-\Delta^2}$, and $\omega_{\mathrm{II}}=\sqrt{\Delta^2-4g^2}$.\\ It is important to point out that at the transition point, $\Delta = 2g$, both eigenvalues and eigenvectors coalesce; such singularities are called EPs~\cite{Kato}. Furthermore, the aforementioned singularities are associated with symmetry breaking for $\mathcal{PT}$-symmetric Hamiltonians, i.e., the eigenvalues are real before the coalescence point, and complex after that point~\cite{Bender_1998,Heiss_2012}. As can be seen from \eqref{eigenvalores}, the eigenvalues have both real and imaginary parts before the coalescence point; however, it has been shown that such systems have the same dynamics as the former and are often called quasi-$\mathcal{PT}$-symmetric systems~\cite{Guo_2009,Ornigotti_2014}.
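The eigenvalue structure and the coalescence at $\Delta=2g$ are easy to illustrate in the single-photon sector, where $\hat{H}_{\mathrm{eff}}$ restricted to $\{\ket{1,0},\ket{0,1}\}$ is the $2\times2$ matrix $[[-\mathrm{i}\gamma_a, g],[g,-\mathrm{i}\gamma_b]]$. The sketch below (our own check, with illustrative parameter values) compares its eigenvalues with the $(j,k)=(1,0),(0,1)$ cases of the $\lambda_{jk}$ formula, on both sides of the EP.

```python
import cmath

def eigs_2x2(g, ga, gb):
    """Eigenvalues of H_eff in the single-photon sector {|1,0>, |0,1>},
    i.e., of the 2x2 matrix [[-i*ga, g], [g, -i*gb]]."""
    tr = -1j * (ga + gb)
    det = -ga * gb - g * g
    disc = cmath.sqrt(tr * tr - 4 * det)
    return sorted([(tr + disc) / 2, (tr - disc) / 2],
                  key=lambda z: (round(z.real, 12), round(z.imag, 12)))

def lambda_jk(g, ga, gb):
    """lambda_{jk} formula for (j,k) = (0,1) and (1,0), i.e., j + k = 1."""
    gam, delta = ga + gb, gb - ga
    if abs(delta) <= 2 * g:                      # below the EP
        w = (4 * g * g - delta * delta) ** 0.5   # omega_I
        lam = [(-1j * gam + w) / 2, (-1j * gam - w) / 2]
    else:                                        # above the EP
        w = (delta * delta - 4 * g * g) ** 0.5   # omega_II
        lam = [-1j * (gam + w) / 2, -1j * (gam - w) / 2]
    return sorted(lam, key=lambda z: (round(z.real, 12), round(z.imag, 12)))

# Compare direct diagonalization with the formula, below and above the EP.
below = max(abs(x - y) for x, y in zip(eigs_2x2(1, 0.75, 0), lambda_jk(1, 0.75, 0)))
above = max(abs(x - y) for x, y in zip(eigs_2x2(1, 3.0, 0), lambda_jk(1, 3.0, 0)))
ep = eigs_2x2(1, 2.0, 0)                 # Delta = 2g: the eigenvalues coalesce
ep_gap = abs(ep[0] - ep[1])
print(below, above, ep_gap)  # all ~0
```

Below the EP the two eigenvalues share the same imaginary part $-\gamma/2$ and split along the real axis; above it they become purely imaginary, as in the formula.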
On the other hand, although $\hat{\mathcal{H}}_{\mathrm{diag}}$ and $\hat{H}_{\mathrm{eff}}$ (or $\hat{\mathcal{H}}_{\mathrm{diag}}^\dagger$ and $\hat{H}_{\mathrm{eff}}^\dagger$) share eigenvalues, this is not the case for the eigenvectors. Multiplying $\hat{\mathcal{H}}_{\mathrm{diag}}\ket{j,k} = \lambda_{jk}\ket{j,k}$ from the left by $\hat{\mathcal{R}}$ (or $\bra{j,k}\hat{\mathcal{H}}_{\mathrm{diag}} = \lambda_{jk}\bra{j,k}$ from the right by $\hat{\mathcal{R}}^{-1}$), one can recognize that the right and left eigenvalue equations of $\hat{H}_{\mathrm{eff}}$ are $\hat{H}_{\mathrm{eff}}\hat{\mathcal{R}}\ket{j,k}=\lambda_{jk}\hat{\mathcal{R}}\ket{j,k}$ and $\bra{j,k}\hat{\mathcal{R}}^{-1}\hat{H}_{\mathrm{eff}}=\lambda_{jk}\bra{j,k}\hat{\mathcal{R}}^{-1}$, and in turn the eigenvectors are given by \begin{equation}\label{eta_LR} \begin{split} \ket{\eta_R} &= \hat{\mathcal{R}}\ket{j,k},\\ \bra{\eta_L} &= \bra{j,k}\hat{\mathcal{R}}^{-1}. \end{split} \end{equation}
This is an intrinsic consequence of the non-Hermitian nature of $\hat{H}_{\mathrm{eff}}$: its right (left) eigenvectors are not mutually orthogonal, i.e., in general $\braket{\eta_R|\eta_R'}\neq0$ ($\braket{\eta_L|\eta_L'}\neq0$) for distinct eigenvectors. It is known that $\ket{\eta_R}$ and $\bra{\eta_L}$ form a bi-orthogonal system~\cite{Rosas_2018}, such that $\braket{\eta_L|\eta_R} = \braket{j,k|\hat{\mathcal{R}}^{-1}\hat{\mathcal{R}}|l,m} = \delta_{j,l}\delta_{k,m}$. For example, in the semiclassical approach (non-unitary evolution obtained by dropping the quantum jump superoperators $\hat{J}_c\hat{\rho}$), it has been demonstrated that it is possible to generate high-order photonic exceptional points by exciting non-Hermitian waveguide arrangements with coherent states~\cite{Tschernig_2022}. However, the bi-orthogonal system determined by \eqref{eta_LR} generalizes the propagation of this type of states in photonic systems with passive losses, and their possible application to generating EPs of arbitrary order.
\begin{figure}
\caption{Schematic representation of two coupled waveguides with decay rates $\gamma_a$ and $\gamma_b$, respectively. At $z=0$, the photonic state is transformed via $\hat{\varrho}(0)=\exp\left[(\hat{J}_a+\hat{J}_b)/2\right]\hat{\rho}(0)$, to then cross a passive $\mathcal{PT}$-symmetric optical system~\cite{Ornigotti_2014}, whose evolution as a function of the propagation distance $z$ is determined by $\hat{\varrho}(z)=\hat{U}(z)\hat{\varrho}(0)\hat{U}^{\dagger}(z)$. Finally, before detection, the photonic state is transformed via $\exp\left[-(\hat{J}_a+\hat{J}_b)/2\right]\hat{\varrho}(z)$, in order to obtain the density matrix $\hat{\rho}(z)$, \eqref{rho}.}
\label{Fig_1}
\end{figure} Finally, having established $\hat{\mathcal{H}}_{\mathrm{diag}} = \hat{\mathcal{R}}^{-1}\hat{H}_{\mathrm{eff}}\hat{\mathcal{R}}$, by direct integration of $\mathrm{i}\frac{d}{dz}\ket{\psi_R} = \hat{H}_{\mathrm{eff}}\ket{\psi_R}$, and remembering that $\hat{\rho} = e^{-(\hat{J}_a+\hat{J}_b)/2}\hat{\varrho}$, with $\hat{\varrho} = \ket{\psi_R}\bra{\psi_L}$, the exact solution of the Lindblad master equation \eqref{ME1}, given an initial condition $\hat{\rho}(0)$, is \begin{equation}\label{rho} \hat{\rho}(z)=e^{-(\hat{J}_a+\hat{J}_b)/2}\hat{U}(z)e^{(\hat{J}_a+\hat{J}_b)/2}\hat{\rho}(0)\hat{U}^{\dagger}(z), \end{equation} where \begin{equation}
\hat{U}(z)= e^{\eta\hat{J}_y}e^{-\mathrm{i}\hat{\mathcal{H}}_{\mathrm{diag}}z}e^{-\eta\hat{J}_y} \end{equation} is the non-unitary evolution operator associated with $\hat{H}_{\mathrm{eff}}$. On account of this, the evolution of an input state is determined by three ingredients: by expanding $e^{(\hat{J}_a+\hat{J}_b)/2}$ in a Taylor series and applying the powers of $(\hat{J}_a+\hat{J}_b)/2$ to the input state, we obtain a new state (pure or mixed); this new state then evolves according to $\hat{U}(z)$; finally, the result is transformed with the inverse superoperator $e^{-(\hat{J}_a+\hat{J}_b)/2}$ to obtain the output state. In other words, the dynamics of two coupled fields subject to Markovian conditions is equivalent to two fields or light states crossing a non-Hermitian system with passive losses, provided that the input and output states are transformed according to $e^{\pm(\hat{J}_a+\hat{J}_b)/2}[\hat{\bullet}]$ (see Fig.~\ref{Fig_1}).
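The claimed equivalence can be probed numerically without any diagonalization. In the sketch below (our own illustration; two-photon truncation and parameter values are assumptions), the Lindblad master equation is integrated directly with a Runge-Kutta scheme and, in parallel, the jump-free von Neumann-like equation is integrated for $\hat{\varrho}(0)=e^{+(\hat{J}_a+\hat{J}_b)/2}\hat{\rho}(0)$; transforming back with $e^{-(\hat{J}_a+\hat{J}_b)/2}$ reproduces $\hat{\rho}(z)$.

```python
import math

# Two-mode Fock basis truncated at total photon number 2.
basis = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
idx = {s: i for i, s in enumerate(basis)}
dim = len(basis)

def annihilator(mode):
    M = [[0j] * dim for _ in range(dim)]
    for j, (na, nb) in enumerate(basis):
        n = na if mode == 0 else nb
        if n > 0:
            tgt = (na - 1, nb) if mode == 0 else (na, nb - 1)
            M[idx[tgt]][j] = math.sqrt(n)
    return M

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(dim))
             for j in range(dim)] for i in range(dim)]

def dag(X): return [[X[j][i].conjugate() for j in range(dim)] for i in range(dim)]
def sca(c, X): return [[c * x for x in row] for row in X]
def add(*Ms): return [[sum(M[i][j] for M in Ms) for j in range(dim)] for i in range(dim)]

A, B = annihilator(0), annihilator(1)
Ad, Bd = dag(A), dag(B)
K = add(mul(Bd, A), mul(Ad, B))            # a b+ + a+ b (lowering applied first)
g, ga, gb = 1.0, 0.75, 0.0                 # asymmetric losses (illustrative)

def Jtot(r): return add(sca(2, mul(mul(A, r), Ad)), sca(2, mul(mul(B, r), Bd)))

def expJ(r, sign):                         # exp[sign (J_a+J_b)/2], exact here
    out, term = r, r
    for k in (1, 2, 3):
        term = sca(sign * 0.5 / k, Jtot(term))
        out = add(out, term)
    return out

def lind(r):                               # Lindblad generator
    out = sca(-1j * g, add(mul(K, r), sca(-1, mul(r, K))))
    for gam, C, Cd in ((ga, A, Ad), (gb, B, Bd)):
        CdC = mul(Cd, C)
        out = add(out, sca(gam, add(sca(2, mul(mul(C, r), Cd)),
                                    sca(-1, mul(CdC, r)), sca(-1, mul(r, CdC)))))
    return out

H = add(sca(-1j * ga, mul(Ad, A)), sca(-1j * gb, mul(Bd, B)), sca(g, K))
Hd = dag(H)
def veq(r): return sca(-1j, add(mul(H, r), sca(-1, mul(r, Hd))))   # jump-free

def rk4(r, rhs, dz, steps):
    for _ in range(steps):
        k1 = rhs(r); k2 = rhs(add(r, sca(dz / 2, k1)))
        k3 = rhs(add(r, sca(dz / 2, k2))); k4 = rhs(add(r, sca(dz, k3)))
        r = add(r, sca(dz / 6, add(k1, sca(2, k2), sca(2, k3), k4)))
    return r

rho0 = [[0j] * dim for _ in range(dim)]
rho0[idx[(1, 1)]][idx[(1, 1)]] = 1.0 + 0j
rho_z = rk4(rho0, lind, 2e-3, 150)                       # direct Lindblad evolution
back = expJ(rk4(expJ(rho0, +1), veq, 2e-3, 150), -1)     # jump-free route, mapped back
err = max(abs(rho_z[i][j] - back[i][j]) for i in range(dim) for j in range(dim))
print(err)  # ~0: both routes agree
```

The residual difference is limited only by the integrator step size, illustrating that the jump superoperators can indeed be traded for the boundary transformations.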
In addition, due to the mathematical similarities between paraxial optics and the Schrödinger equation, one of the quintessential platforms for propagating quantum states in non-Hermitian photonic systems at the single-photon level is the system of evanescently coupled waveguides~\cite{ElGanainy_2007,Ruter_2010}. In such systems, the photonic state is affected along the propagation distance due to the balance between gains and losses, giving rise to the concept of $\mathcal{PT}$-symmetry. When only losses (passive losses) are considered, this is known as passive or quasi-$\mathcal{PT}$-symmetry~\cite{Guo_2009,Ornigotti_2014}. Furthermore, under Markovian assumptions, two-photon interference in two evanescently coupled waveguides has recently been demonstrated, where losses are induced through an asymmetric refractive index distribution and modulation along the propagation distance~\cite{Klauck_2019}. However, we have shown that the dynamics of the photon density matrix in such non-Hermitian systems does not require extra modulation along the propagation distance (there are only passive losses, see Fig.~\ref{Fig_1}). Nonetheless, the exchange of energy of non-conservative systems with their surrounding environment, described in terms of \eqref{ME1} or \eqref{rho_J}, is fundamentally different, since the quantum jump superoperators have a great influence on the propagation of non-classical states and a strong impact on the spectral responses of dissipative systems~\cite{Minganti_2019,Arkhipov_2020,Minganti_2020}. Although non-Hermitian $\mathcal{PT}$ systems are the prelude to open quantum systems, a direct equivalence between the two had not been found so far; here, we have shown that both systems are equivalent as long as the photonic density matrix is transformed via \eqref{Trho}.
\begin{figure}
\caption{(a)~Two photons are launched into two waveguides with passive losses. At $z=\pi/4g$, when both waveguides have the same decay rate or the rate is zero, the coincidence rate is zero (black and blue lines). However, when we consider passive losses in only one of the two waveguides, the minimum value of the coincidence rate undergoes a shift (red line). In panel (b), we plot the coincidence rate as a function of the propagation distance and the decay rate $\gamma_a/g$ (with $\gamma_b=0$). For $\gamma_a/g=0.75$, we recover the previous case. As $\gamma_a/g$ increases, we observe that the minimum value of the coincidence rate occurs at smaller distances, and at $z=\pi/4g$, the photons tend to anti-bunch. The analytical results are shown as continuous lines, while the numerical simulations are represented by dots, stars, and triangles.}
\label{Fig_2}
\end{figure} For example, in order to test our analytical results, we investigate two-photon quantum interference in two evanescently coupled waveguides with passive losses. The photonic density matrix $\hat{\rho}(z)$ evolves as a function of the propagation distance, and for two indistinguishable photons $\ket{1,1}$, we can compute the coincidence rate at distance $z$, taking into account first that \begin{equation} \begin{split} e^{(\hat{J}_a+\hat{J}_b)/2}\ket{1,1}\bra{1,1}=& \ket{0,0}\bra{0,0}+\ket{1,1}\bra{1,1}\\&+\ket{0,1}\bra{0,1}+\ket{1,0}\bra{1,0} \end{split} \end{equation} and \begin{equation} \begin{split}
e^{\eta\hat{J}_y}\ket{1,1} &= e^{\eta\hat{J}_y}\hat{a}^\dagger\hat{b}^\dagger e^{-\eta\hat{J}_y}e^{\eta\hat{J}_y}\ket{0,0}\\ &= \cosh (\eta )\ket{1,1}+\frac{\mathrm{i}}{\sqrt{2}} \sinh (\eta)\left[\ket{2,0}-\ket{0,2}\right] \end{split} \end{equation} and the other equivalent operations involved in \eqref{rho}, such that \begin{equation}
\braket{1,1|\hat{\rho}(z)|1,1} =e^{-2\gamma z} \begin{cases} \left[\frac{ 4g^2 \cos(\omega_\mathrm{I}z) - \Delta^2}{\omega_\mathrm{I}^2} \right]^{2},~\mbox{if}~\Delta\leq 2g,\\ \left[ \frac{\Delta^2 - 4g^2 \cosh(\omega_{\mathrm{II}} z)}{\omega^{2}_{\mathrm{II}}}\right]^{2},~\mbox{if}~\Delta\geq 2g. \end{cases} \end{equation} In Fig.~\ref{Fig_2}, we show two complementary results associated with the coincidence rate as a function of propagation distance and the loss rates $\gamma_a$ and $\gamma_b$, respectively. \begin{itemize}
\item (a) For both unitary and non-unitary evolutions, i.e., $\gamma_a=\gamma_b=0$ and $\gamma_a=\gamma_b\neq0$, respectively, the bunching occurs at $z=\pi/4g$. However, when the losses are asymmetric, e.g., $\gamma_a/g=0.75$ and $\gamma_b=0$, the bunching occurs at a smaller distance than in the previous case. This fact has already been reported experimentally in the framework of the Hong-Ou-Mandel dip in~\cite{Klauck_2019}. However, our photonic system exhibiting $\mathcal{PT}$ symmetry is different: in our case, no extra bending modulation is necessary to generate Markovian losses in either of the two waveguides. We show that the Markovian signature is inscribed outside the photonic system exhibiting $\mathcal{PT}$ symmetry, as it suffices to apply the transformation $e^{\pm(\hat{J}_a+\hat{J}_b)/2}[\hat{\bullet}]$ to the input and output channels.
\item (b) As we increase the losses asymmetrically ($\gamma_b=0$), we can notice that photon bunching occurs at smaller and smaller distances relative to $z=\pi/4g$. On the other hand, at the $\mathcal{PT}$ symmetry phase transition point ($\gamma_a/g = 2$), we observe anti-bunching, i.e., the coincidence rate increases and the photons tend to exit separately from each of the channels. In \cite{Longhi_2020}, a unitary transformation that involves rotating the mode basis of the input and output was proposed to verify this fact. \end{itemize}
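The shift of the bunching point can be read off directly from the closed-form coincidence rate (below the EP). The following short sketch (our own check; grid resolution and parameter values are assumptions) evaluates it and locates the minimum for the lossless case versus the asymmetric case $\gamma_a/g=0.75$, $\gamma_b=0$.

```python
import math

def coincidence(z, g, ga, gb):
    """Closed-form coincidence rate <1,1|rho(z)|1,1>, Delta <= 2g branch."""
    gam, delta = ga + gb, gb - ga
    w2 = 4 * g * g - delta * delta            # omega_I squared
    return math.exp(-2 * gam * z) * ((4 * g * g * math.cos(math.sqrt(w2) * z)
                                      - delta * delta) / w2) ** 2

def argmin_z(g, ga, gb, zmax=1.0, n=20000):
    # Brute-force scan of the propagation distance for the minimum.
    zs = [zmax * k / n for k in range(1, n + 1)]
    return min(zs, key=lambda z: coincidence(z, g, ga, gb))

z_sym = argmin_z(1.0, 0.0, 0.0)      # lossless case: minimum at pi/(4g)
z_asym = argmin_z(1.0, 0.75, 0.0)    # asymmetric losses: minimum shifts earlier
print(z_sym, z_asym)                 # z_asym < z_sym ~ pi/4
```

With these values the symmetric minimum sits at $z\approx\pi/4\approx0.785$, while the asymmetric one moves to $z\approx0.771$, in line with the shift discussed in item (a).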
In this work, we solve \eqref{ME1} for any set of parameters $\gamma_j$ (with $j=a,b$) and $g$. This solution determines the evolution of the photonic density matrix $\hat{\rho}(z)$, given by \eqref{rho}, as a function of the propagation distance or time. As a consequence of this solution, we can reinterpret the dynamics of non-classical states in systems exhibiting $\mathcal{PT}$ symmetry. For example, our system exhibiting $\mathcal{PT}$ symmetry requires only passive losses in each of the waveguides (or in the directional coupler), as long as the initial and final conditions are transformed by $e^{\pm(\hat{J}_a+\hat{J}_b)/2}[\hat{\bullet}]$ (see Fig.~\ref{Fig_1}). This means, and it is important to re-emphasize, that non-Hermitian systems are not only a prelude to systems with Markovian losses, but also that it is possible to establish a direct equivalence relation without omitting the quantum jump superoperators, at least for two quantized fields subject to Markovian losses. We believe that this result will allow greater control and experimental feasibility when propagating non-classical states in systems exhibiting some kind of $\mathcal{PT}$ symmetry.
\begin{thebibliography}{32} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{https://doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Breuer}\ and\ \citenamefont
{Petruccione}(2002)}]{Breuer_Book}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-P.}\ \bibnamefont
{Breuer}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Petruccione}},\ }\href@noop {} {\emph {\bibinfo {title} {The theory of open
quantum systems}}}\ (\bibinfo {publisher} {Oxford University Press on
Demand},\ \bibinfo {year} {2002})\BibitemShut {NoStop} \bibitem [{\citenamefont {Carmichael}(1993)}]{Carmichael_Book}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Carmichael}},\ }\href@noop {} {\emph {\bibinfo {title} {An open systems
approach to quantum optics}}}\ (\bibinfo {publisher} {Springer-Verlag},\
\bibinfo {year} {1993})\BibitemShut {NoStop} \bibitem [{\citenamefont {Dum}\ \emph {et~al.}(1992)\citenamefont {Dum},
\citenamefont {Zoller},\ and\ \citenamefont {Ritsch}}]{Dum_1992}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Dum}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ and\
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Ritsch}},\ }\bibfield
{title} {\bibinfo {title} {Monte {C}arlo simulation of the atomic master
equation for spontaneous emission},\ }\href
{https://doi.org/10.1103/PhysRevA.45.4879} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {45}},\ \bibinfo
{pages} {4879} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {M{\o}lmer}\ \emph {et~al.}(1993)\citenamefont
{M{\o}lmer}, \citenamefont {Castin},\ and\ \citenamefont
{Dalibard}}]{Molmer_1993}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{M{\o}lmer}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Castin}},\
and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Dalibard}},\
}\bibfield {title} {\bibinfo {title} {Monte {C}arlo wave-function method in
quantum optics},\ }\href {https://doi.org/10.1364/JOSAB.10.000524} {\bibfield
{journal} {\bibinfo {journal} {J. Opt. Soc. Am. B}\ }\textbf {\bibinfo
{volume} {10}},\ \bibinfo {pages} {524} (\bibinfo {year} {1993})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Prosen}(2012)}]{Prosen_2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Prosen}},\ }\bibfield {title} {\bibinfo {title}
{$\mathbb{P}\mathbb{T}$-symmetric quantum {L}iouvillean dynamics},\ }\href
{https://doi.org/10.1103/PhysRevLett.109.090404} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\
\bibinfo {pages} {090404} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Torres}(2014)}]{Torres_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Torres}},\ }\bibfield {title} {\bibinfo {title} {Closed-form solution of
{L}indblad master equations without gain},\ }\href
{https://doi.org/10.1103/PhysRevA.89.052133} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo
{pages} {52133} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Minganti}\ \emph {et~al.}(2019)\citenamefont
{Minganti}, \citenamefont {Miranowicz}, \citenamefont {Chhajlany},\ and\
\citenamefont {Nori}}]{Minganti_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Minganti}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Miranowicz}}, \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont
{Chhajlany}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Nori}},\ }\bibfield {title} {\bibinfo {title} {Quantum exceptional points
of non-{H}ermitian {H}amiltonians and {L}iouvillians: {T}he effects of
quantum jumps},\ }\href {https://doi.org/10.1103/PhysRevA.100.062131}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {100}},\ \bibinfo {pages} {62131} (\bibinfo {year}
{2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Arkhipov}\ \emph {et~al.}(2020)\citenamefont
{Arkhipov}, \citenamefont {Miranowicz}, \citenamefont {Minganti},\ and\
\citenamefont {Nori}}]{Arkhipov_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~I.}\ \bibnamefont
{Arkhipov}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Miranowicz}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Minganti}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Nori}},\ }\bibfield {title} {\bibinfo {title} {Liouvillian exceptional
points of any order in dissipative linear bosonic systems: Coherence
functions and switching between {$\mathcal{PT}$} and anti-{$\mathcal{PT}$}
symmetries},\ }\href {https://doi.org/10.1103/PhysRevA.102.033715} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{102}},\ \bibinfo {pages} {33715} (\bibinfo {year} {2020})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Minganti}\ \emph {et~al.}(2020)\citenamefont
{Minganti}, \citenamefont {Miranowicz}, \citenamefont {Chhajlany},
\citenamefont {Arkhipov},\ and\ \citenamefont {Nori}}]{Minganti_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Minganti}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Miranowicz}}, \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont
{Chhajlany}}, \bibinfo {author} {\bibfnamefont {I.~I.}\ \bibnamefont
{Arkhipov}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Nori}},\ }\bibfield {title} {\bibinfo {title} {Hybrid-{L}iouvillian
formalism connecting exceptional points of non-{H}ermitian {H}amiltonians and
{L}iouvillians via postselection of quantum trajectories},\ }\href
{https://doi.org/10.1103/PhysRevA.101.062112} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo
{pages} {062112} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Teuber}\ and\ \citenamefont
{Scheel}(2020)}]{Teuber_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Teuber}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Scheel}},\ }\bibfield {title} {\bibinfo {title} {Solving the quantum master
equation of coupled harmonic oscillators with {L}ie-algebra methods},\ }\href
{https://doi.org/10.1103/PhysRevA.101.042124} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo
{pages} {42124} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bender}\ and\ \citenamefont
{Boettcher}(1998)}]{Bender_1998}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Bender}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Boettcher}},\ }\bibfield {title} {\bibinfo {title} {Real {S}pectra in
{N}on-{H}ermitian {H}amiltonians {H}aving {$\mathcal{PT}$} {S}ymmetry},\
}\href {https://doi.org/10.1103/PhysRevLett.80.5243} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {80}},\
\bibinfo {pages} {5243} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {El-Ganainy}\ \emph {et~al.}(2007)\citenamefont
{El-Ganainy}, \citenamefont {Makris}, \citenamefont {Christodoulides},\ and\
\citenamefont {Musslimani}}]{ElGanainy_2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{El-Ganainy}}, \bibinfo {author} {\bibfnamefont {K.~G.}\ \bibnamefont
{Makris}}, \bibinfo {author} {\bibfnamefont {D.~N.}\ \bibnamefont
{Christodoulides}},\ and\ \bibinfo {author} {\bibfnamefont {Z.~H.}\
\bibnamefont {Musslimani}},\ }\bibfield {title} {\bibinfo {title} {Theory of
coupled optical {$\mathcal{PT}$}-symmetric structures},\ }\href
{https://doi.org/10.1364/OL.32.002632} {\bibfield {journal} {\bibinfo
{journal} {Opt. Lett.}\ }\textbf {\bibinfo {volume} {32}},\ \bibinfo {pages}
{2632} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {R{\"u}ter}\ \emph {et~al.}(2010)\citenamefont
{R{\"u}ter}, \citenamefont {Makris}, \citenamefont {El-Ganainy},
\citenamefont {Christodoulides}, \citenamefont {Segev},\ and\ \citenamefont
{Kip}}]{Ruter_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~E.}\ \bibnamefont
{R{\"u}ter}}, \bibinfo {author} {\bibfnamefont {K.~G.}\ \bibnamefont
{Makris}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {El-Ganainy}},
\bibinfo {author} {\bibfnamefont {D.~N.}\ \bibnamefont {Christodoulides}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Segev}},\ and\ \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Kip}},\ }\bibfield {title}
{\bibinfo {title} {Observation of parity--time symmetry in optics},\ }\href
{https://doi.org/10.1038/nphys1515} {\bibfield {journal} {\bibinfo
{journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages}
{192} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Heiss}(2012)}]{Heiss_2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont
{Heiss}},\ }\bibfield {title} {\bibinfo {title} {The physics of exceptional
points},\ }\href {https://doi.org/10.1088/1751-8113/45/44/444016} {\bibfield
{journal} {\bibinfo {journal} {J. Phys. A: Math. Theor.}\ }\textbf {\bibinfo
{volume} {45}},\ \bibinfo {pages} {444016} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kato}(2013)}]{Kato}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Kato}},\ }\href@noop {} {\emph {\bibinfo {title} {Perturbation theory for
linear operators}}},\ Vol.\ \bibinfo {volume} {132}\ (\bibinfo {publisher}
{Springer Science \& Business Media},\ \bibinfo {year} {2013})\BibitemShut
{NoStop} \bibitem [{\citenamefont {Miri}\ and\ \citenamefont
{Al{\`u}}(2019)}]{MohammadAli_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.-A.}\ \bibnamefont
{Miri}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Al{\`u}}},\
}\bibfield {title} {\bibinfo {title} {Exceptional points in optics and
photonics},\ }\href {https://doi.org/10.1126/science.aar7709} {\bibfield
{journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume}
{363}},\ \bibinfo {pages} {eaar7709} (\bibinfo {year} {2019})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Quiroz-Ju{\'a}rez}\ \emph {et~al.}(2019)\citenamefont
{Quiroz-Ju{\'a}rez}, \citenamefont {Perez-Leija}, \citenamefont {Tschernig},
\citenamefont {Rodr{\'\i}guez-Lara}, \citenamefont {Maga{\~n}a-Loaiza},
\citenamefont {Busch}, \citenamefont {Joglekar},\ and\ \citenamefont
{de~Le{\'o}n-Montiel}}]{Quiroz_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Quiroz-Ju{\'a}rez}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Perez-Leija}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Tschernig}}, \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Rodr{\'\i}guez-Lara}}, \bibinfo {author} {\bibfnamefont {O.~S.}\
\bibnamefont {Maga{\~n}a-Loaiza}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Busch}}, \bibinfo {author} {\bibfnamefont {Y.~N.}\
\bibnamefont {Joglekar}},\ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {de~Le{\'o}n-Montiel}},\ }\bibfield {title} {\bibinfo {title}
{Exceptional points of any order in a single, lossy waveguide beam splitter
by photon-number-resolved detection},\ }\href
{https://doi.org/10.1364/PRJ.7.000862} {\bibfield {journal} {\bibinfo
{journal} {Photonics Res.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo
{pages} {862} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tschernig}\ \emph {et~al.}(2022)\citenamefont
{Tschernig}, \citenamefont {Busch}, \citenamefont {Christodoulides},\ and\
\citenamefont {Perez-Leija}}]{Tschernig_2022}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Tschernig}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Busch}},
\bibinfo {author} {\bibfnamefont {D.~N.}\ \bibnamefont {Christodoulides}},\
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Perez-Leija}},\
}\bibfield {title} {\bibinfo {title} {Branching high-order exceptional
points in non-hermitian optical systems},\ }\href
{https://doi.org/10.1002/lpor.202100707} {\bibfield {journal} {\bibinfo
{journal} {Laser Photonics Rev.}\ }\textbf {\bibinfo {volume} {16}},\
\bibinfo {pages} {2100707} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lai}\ \emph {et~al.}(1991)\citenamefont {Lai},
\citenamefont {Buek},\ and\ \citenamefont {Knight}}]{Lai_1991}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~K.}\ \bibnamefont
{Lai}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Bu{\v z}ek}},\ and\
\bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont {Knight}},\ }\bibfield
{title} {\bibinfo {title} {Nonclassical fields in a linear directional
coupler},\ }\href {https://doi.org/10.1103/PhysRevA.43.6323} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{43}},\ \bibinfo {pages} {6323} (\bibinfo {year} {1991})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Klauck}\ \emph {et~al.}(2019)\citenamefont {Klauck},
\citenamefont {Teuber}, \citenamefont {Ornigotti}, \citenamefont {Heinrich},
\citenamefont {Scheel},\ and\ \citenamefont {Szameit}}]{Klauck_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Klauck}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Teuber}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ornigotti}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Heinrich}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Scheel}},\ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Szameit}},\ }\bibfield {title} {\bibinfo
{title} {Observation of {PT}-symmetric quantum interference},\ }\href
{https://doi.org/10.1038/s41566-019-0517-0} {\bibfield {journal} {\bibinfo
{journal} {Nat. Photonics}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo
{pages} {883} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Longhi}(2020)}]{Longhi_2020}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Longhi}},\ }\bibfield {title} {\bibinfo {title} {Quantum statistical
signature of {$\mathcal{PT}$}-symmetry breaking},\ }\href
{https://doi.org/10.1364/OL.386232} {\bibfield {journal} {\bibinfo
{journal} {Opt. Lett.}\ }\textbf {\bibinfo {volume} {45}},\ \bibinfo {pages}
{1591} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Longhi}(2009)}]{Longhi_2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Longhi}},\ }\bibfield {title} {\bibinfo {title} {Quantum-optical analogies
using photonic structures},\ }\href {https://doi.org/10.1002/lpor.200810055}
{\bibfield {journal} {\bibinfo {journal} {Laser Photonics Rev.}\ }\textbf
{\bibinfo {volume} {3}},\ \bibinfo {pages} {243} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Guo}\ \emph {et~al.}(2009)\citenamefont {Guo},
\citenamefont {Salamo}, \citenamefont {Duchesne}, \citenamefont {Morandotti},
\citenamefont {Volatier-Ravat}, \citenamefont {Aimez}, \citenamefont
{Siviloglou},\ and\ \citenamefont {Christodoulides}}]{Guo_2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Guo}}, \bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Salamo}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Duchesne}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Morandotti}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Volatier-Ravat}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Aimez}}, \bibinfo {author} {\bibfnamefont
{G.~A.}\ \bibnamefont {Siviloglou}},\ and\ \bibinfo {author} {\bibfnamefont
{D.~N.}\ \bibnamefont {Christodoulides}},\ }\bibfield {title} {\bibinfo
{title} {Observation of {$\mathcal{PT}$}-symmetry breaking in complex optical
potentials},\ }\href {https://doi.org/10.1103/PhysRevLett.103.093902}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {103}},\ \bibinfo {pages} {93902} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ornigotti}\ and\ \citenamefont
{Szameit}(2014)}]{Ornigotti_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Ornigotti}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Szameit}},\ }\bibfield {title} {\bibinfo {title} {Quasi {$\mathcal{PT}
$}-symmetry in passive photonic lattices},\ }\href
{https://doi.org/10.1088/2040-8978/16/6/065501} {\bibfield {journal}
{\bibinfo {journal} {J. Opt.}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo
{pages} {65501} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2016)\citenamefont {Zhang},
\citenamefont {Zhang}, \citenamefont {Sheng}, \citenamefont {Yang},
\citenamefont {Miri}, \citenamefont {Christodoulides}, \citenamefont {He},
\citenamefont {Zhang},\ and\ \citenamefont {Xiao}}]{Zhang_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Zhang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhang}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Sheng}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Yang}}, \bibinfo {author}
{\bibfnamefont {M.-A.}\ \bibnamefont {Miri}}, \bibinfo {author}
{\bibfnamefont {D.~N.}\ \bibnamefont {Christodoulides}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {He}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Zhang}},\ and\ \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Xiao}},\ }\bibfield {title} {\bibinfo {title}
{Observation of parity-time symmetry in optically induced atomic lattices},\
}\href {https://doi.org/10.1103/PhysRevLett.117.123601} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {117}},\
\bibinfo {pages} {123601} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lin}\ \emph {et~al.}(2011)\citenamefont {Lin},
\citenamefont {Ramezani}, \citenamefont {Eichelkraut}, \citenamefont
{Kottos}, \citenamefont {Cao},\ and\ \citenamefont
{Christodoulides}}]{Lin_2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Lin}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Ramezani}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Eichelkraut}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Kottos}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Cao}},\ and\ \bibinfo {author}
{\bibfnamefont {D.~N.}\ \bibnamefont {Christodoulides}},\ }\bibfield {title}
{\bibinfo {title} {Unidirectional invisibility induced by
{$\mathcal{PT}$}-symmetric periodic structures},\ }\href
{https://doi.org/10.1103/PhysRevLett.106.213901} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\
\bibinfo {pages} {213901} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Feng}\ \emph {et~al.}(2017)\citenamefont {Feng},
\citenamefont {El-Ganainy},\ and\ \citenamefont {Ge}}]{Feng_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Feng}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {El-Ganainy}},\
and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Ge}},\ }\bibfield
{title} {\bibinfo {title} {Non-{H}ermitian photonics based on parity--time
symmetry},\ }\href {https://doi.org/10.1038/s41566-017-0031-1} {\bibfield
{journal} {\bibinfo {journal} {Nat. Photonics}\ }\textbf {\bibinfo {volume}
{11}},\ \bibinfo {pages} {752} (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {El-Ganainy}\ \emph {et~al.}(2018)\citenamefont
{El-Ganainy}, \citenamefont {Makris}, \citenamefont {Khajavikhan},
\citenamefont {Musslimani}, \citenamefont {Rotter},\ and\ \citenamefont
{Christodoulides}}]{ElGanainy_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{El-Ganainy}}, \bibinfo {author} {\bibfnamefont {K.~G.}\ \bibnamefont
{Makris}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Khajavikhan}},
\bibinfo {author} {\bibfnamefont {Z.~H.}\ \bibnamefont {Musslimani}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rotter}},\ and\ \bibinfo
{author} {\bibfnamefont {D.~N.}\ \bibnamefont {Christodoulides}},\ }\bibfield
{title} {\bibinfo {title} {Non-{H}ermitian physics and {PT} symmetry},\
}\href {https://doi.org/10.1038/nphys4323} {\bibfield {journal} {\bibinfo
{journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages}
{11} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{\"O}zdemir}\ \emph {et~al.}(2019)\citenamefont
{{\"O}zdemir}, \citenamefont {Rotter}, \citenamefont {Nori},\ and\
\citenamefont {Yang}}]{Ozdemir_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {{\c S}.~K.}\
\bibnamefont {{\"O}zdemir}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Rotter}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Nori}},\ and\ \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Yang}},\ }\bibfield {title} {\bibinfo {title}
{Parity--time symmetry and exceptional points in photonics},\ }\href
{https://doi.org/10.1038/s41563-019-0304-9} {\bibfield {journal} {\bibinfo
{journal} {Nat. Mater.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages}
{783} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hong}\ \emph {et~al.}(1987)\citenamefont {Hong},
\citenamefont {Ou},\ and\ \citenamefont {Mandel}}]{Hong-Ou-Mandel}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~K.}\ \bibnamefont
{Hong}}, \bibinfo {author} {\bibfnamefont {Z.~Y.}\ \bibnamefont {Ou}},\ and\
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Mandel}},\ }\bibfield
{title} {\bibinfo {title} {Measurement of subpicosecond time intervals
between two photons by interference},\ }\href
{https://doi.org/10.1103/PhysRevLett.59.2044} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo
{pages} {2044} (\bibinfo {year} {1987})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Balian}\ and\ \citenamefont
{Brezin}(1969)}]{Balian_1969}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Balian}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Brezin}},\ }\bibfield {title} {\bibinfo {title} {Nonunitary {B}ogoliubov
transformations and extension of {W}ick{\rq}s theorem},\ }\href
{https://doi.org/10.1007/BF02710281} {\bibfield {journal} {\bibinfo
{journal} {Il Nuovo Cimento B}\ }\textbf {\bibinfo {volume} {64}},\ \bibinfo
{pages} {37} (\bibinfo {year} {1969})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rosas-Ortiz}\ and\ \citenamefont
{Zelaya}(2018)}]{Rosas_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Rosas-Ortiz}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Zelaya}},\ }\bibfield {title} {\bibinfo {title} {Bi-orthogonal approach to
non-{H}ermitian {H}amiltonians with the oscillator spectrum: Generalized
coherent states for nonlinear algebras},\ }\href
{https://doi.org/10.1016/j.aop.2017.10.020} {\bibfield {journal} {\bibinfo
{journal} {Ann. Phys.}\ }\textbf {\bibinfo {volume} {388}},\ \bibinfo {pages}
{26} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \end{thebibliography}
\end{document} |
\begin{document}
\title[Derivation of Computational Formulas for certain class of finite sums]{Derivation of Computational Formulas for certain class of finite sums: Approach to Generating functions arising from $p$-adic integrals and special functions}
\author{Yilmaz Simsek$^*$} \address[Yilmaz Simsek]{Department of Mathematics, Faculty of Science, Akdeniz University, TR-07058 Antalya, Turkey} \email{[email protected]} \thanks{$^*$Corresponding author}
\maketitle
\begin{abstract} The aim of this paper is to construct generating functions for some families of special finite sums with the aid of the Newton-Mercator series, hypergeometric series, and the $p$-adic integral (the Volkenborn integral). By using these generating functions, their functional equations, and their partial derivative equations, we derive many novel computational formulas involving special finite sums of (inverse) binomial coefficients, Bernoulli-type polynomials and numbers, Euler polynomials and numbers, the Stirling numbers, the (alternating) harmonic numbers, the Leibnitz polynomials, and others. Among these formulas, starting from a computational formula that expresses the aforementioned class of finite sums in terms of the Bernoulli numbers and the Stirling numbers of the first kind, we present a computation algorithm and provide some of its special values. Moreover, using the aforementioned special finite sums and combinatorial numbers, we give relations among multiple alternating zeta functions, the Bernoulli polynomials of higher order, and the Euler polynomials of higher order. We also give a decomposition of the multiple Hurwitz zeta functions with the aid of finite sums. Relationships and comparisons between the main results of this article and previously known results are discussed. With the help of the results of this paper, we solve the problem that Charalambides \cite[Exercise 30, p. 273]{Charalambos} gave in his book, and from this solution we derive further new formulas. In addition, solutions of some of the problems raised in \cite{SimsekMTJPAM2020} are also given.
\noindent \textbf{Keywords:} Generating function, Finite sums, Special functions, Special numbers and polynomials, Multiple alternating zeta functions, $p$-adic integral, Computational algorithm \newline \textbf{MSC(2010):} 05A15, 11B68, 11B73, 11B83, 26C05, 11S40, 11S80, 33B15, 65D20, 33F05 \end{abstract}
\section{Introduction, definitions and motivation}
A large number of researchers have studied computational formulas for finite and infinite sums, because it is often not easy to find such formulas for a given finite sum involving special functions, special numbers, special polynomials, or sums of higher powers of binomial coefficients. Many new methods and techniques for evaluating finite sums are still being developed and investigated, in mathematics as well as in other applied sciences. Finite sums and their computational formulas are among the most important mathematical tools used by mathematicians, physicists, engineers, and other scientists. Applications of the generating functions for special numbers and polynomials, and of finite sums with their computational formulas, have also been given by many different methods (\textit{cf}. \cite{Apostol}- \cite{Zave}). With this motivation, by an approach to generating functions arising from $p$-adic integrals and special functions, our purpose in this paper is to develop a computational methodology by deriving computational formulas for a certain class of finite sums. This methodology provides researchers with a variety of methods that can be used in different fields and in many situations.
In this study, we construct generating functions that involve special numbers and polynomials and special finite sums. With the help of these generating functions and their functional equations, some new computational formulas are given for these special finite sums. On the other hand, the main motivation of this paper is to construct and investigate the generating functions, given by Theorem \ref{Theorem1} and Theorem \ref{Theorem2}, with a great many distinct applications, for the following numbers $y(n,\lambda )$, represented by a certain finite sum: \begin{equation} y(n,\lambda )=\sum_{j=0}^{n}\frac{(-1)^{n}}{(j+1)\lambda ^{j+1}\left(
\lambda -1\right) ^{n+1-j}} \label{ynldef} \end{equation} (\textit{cf.} \cite{SimsekMTJPAM2020}).
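For concreteness, the first few values of the numbers $y(n,\lambda )$ can be computed directly from (\ref{ynldef}). A minimal Python sketch in exact rational arithmetic (the helper name \texttt{y} is ours, purely illustrative):

```python
from fractions import Fraction

def y(n, lam):
    """The finite sum (ynldef), in exact rational arithmetic (lam != 0, 1)."""
    lam = Fraction(lam)
    return sum(Fraction((-1)**n) / ((j + 1) * lam**(j + 1) * (lam - 1)**(n + 1 - j))
               for j in range(n + 1))

# for lambda = 2 the factor (lambda - 1)^{n+1-j} equals 1
print(y(0, 2), y(1, 2), y(2, 2))  # → 1/2 -5/8 2/3
```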
The numbers $y(n,\lambda )$ arise from the following zeta type function $ \mathcal{Z}_{1}(s;a,\lambda )$: \begin{equation} \mathcal{Z}_{1}(s;a,\lambda )=\frac{\ln \lambda }{\left( \lambda -1\right)
(\ln a)^{s}}Li_{s}\left( \frac{1}{\lambda -1}\right) +\frac{1}{(\ln a)^{s}} \sum\limits_{n=0}^{\infty }\frac{y(n,\lambda )}{(n+1)^{s}}, \label{1aGZ} \end{equation}
where $\lambda \in \mathbb{C}\setminus \left\{ 0,1\right\} $ ($|\frac{1}{
\lambda -1}|<1$; $\operatorname{Re}(s)>1$), $a\geq 1$, and $s\in \mathbb{C}$ ( \textit{cf}. \cite{SimsekMTJPAM2020}, \cite{SimsekREVISTA}, \cite {SimsekBull2021}). The function $Li_{s}$ appearing on the right-hand side of equation (\ref{1aGZ}) denotes the polylogarithm function: \begin{equation*} Li_{s}(z)=\sum\limits_{n=1}^{\infty }\frac{z ^{n}}{n^{s}}, \end{equation*} (\textit{cf}. \cite{SrivatavaChoi}, \cite{ChoiMTJPAM}).
Our first goal is to construct the following generating function for the numbers $y\left( n,\lambda \right) $ with the aid of the Newton-Mercator series, which is the Taylor series of the logarithm function.
\begin{definition}
Let $\lambda \in \mathbb{R}$ with $\lambda \neq 0,1$. Let $z\in \mathbb{C}$.
The numbers $y\left( n,\lambda \right) $ are defined by the following
generating function:
\begin{equation}
G\left( z,\lambda \right) =\sum\limits_{n=0}^{\infty }\left( 1-\lambda
\right) ^{n+2}y\left( n,\lambda \right) z^{n}. \label{1aG}
\end{equation} \end{definition}
This paper provides many new formulas involving not only the numbers $ y\left( n,\lambda \right) $, special numbers and polynomials, and special finite sums, but also their generating functions. Among others, we list some of the theorems containing these novel formulas below. The proofs of these theorems are given in detail in the following sections.
\begin{theorem}
\label{Theorem1} Let $\lambda \in \mathbb{R}$ with $\lambda \neq 0,1$. Let $
z\in \mathbb{C}$ with $\left\vert \frac{\lambda -1}{\lambda }z\right\vert <1$
. Then we have
\begin{equation}
G\left( z,\lambda \right) =\frac{\ln \left( 1-\frac{\lambda -1}{\lambda }
z\right) }{z(z-1)}. \label{1aG1}
\end{equation} \end{theorem}
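Theorem \ref{Theorem1} can be checked numerically by comparing a truncation of the series (\ref{1aG}) with the closed form (\ref{1aG1}); a small sketch with the illustrative choices $\lambda =3$, $z=0.1$:

```python
import math
from fractions import Fraction

def y(n, lam):
    lam = Fraction(lam)
    return sum(Fraction((-1)**n) / ((j + 1) * lam**(j + 1) * (lam - 1)**(n + 1 - j))
               for j in range(n + 1))

lam, z, N = 3, 0.1, 60  # illustrative choices with |(lam - 1)/lam * z| < 1
series = sum(float((1 - Fraction(lam))**(n + 2) * y(n, lam)) * z**n for n in range(N))
closed = math.log(1 - (lam - 1) / lam * z) / (z * (z - 1))
print(abs(series - closed) < 1e-12)  # → True
```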
To show the power of the function $G\left( z,\lambda \right) $, it and its various applications will be examined in detail in the next sections.
Subsequently, we present an interesting theorem asserting that the function $G\left( z,\lambda \right) $ can be expressed in terms of the hypergeometric function $_{2}F_{1}\left[ \begin{array}{c} 1,1 \\ 2 \end{array} ;\frac{\lambda -1}{\lambda }z\right] $. (Since $_{2}F_{1}\left[ 1,1;2;w\right] =-\ln (1-w)/w$, the statement below is equivalent to Theorem \ref{Theorem1}.)

\begin{theorem}
\label{Theorem2} Let $\lambda \in \mathbb{R}$ with $\lambda \neq 0,1$. Let $
z\in \mathbb{C}$ with $\left\vert \frac{\lambda -1}{\lambda }z\right\vert <1$
. Then we have
\begin{equation}
G\left( z,\lambda \right) =\frac{1-\lambda }{\lambda \left(
z-1\right) }\text{ }_{2}F_{1}\left[
\begin{array}{c}
1,1 \\
2
\end{array}
;\frac{\lambda -1}{\lambda }z\right] . \label{1aGH}
\end{equation} \end{theorem}
Applying Theorem \ref{Theorem1} and Theorem \ref{Theorem2}, we derive many novel computational formulas and relations involving finite sums, special numbers, and special polynomials.
In addition, we present several interesting and noteworthy remarks on these formulas and relations.
By using generating functions and their functional equations, we give some properties of the numbers $y(n,\lambda )$. We show that the numbers $ y(n,\lambda )$ are closely associated with the Bernoulli numbers, the Euler numbers, the harmonic numbers, the alternating harmonic numbers, the Apostol-Bernoulli numbers, the Stirling numbers, the Leibnitz numbers, and special finite sums.
Another important purpose of this paper is to provide solutions to some of the open problems raised by the author in \cite{SimsekMTJPAM2020}, as well as to Exercise 30 given by Charalambides \cite[Exercise 30, p. 273]{Charalambos}, with the aid of the numbers $y(n,\lambda )$ and their generating functions.
Some of our main contributions, derived from Theorem \ref{Theorem1}, Theorem \ref{Theorem2}, and hypergeometric functions, are listed in the following theorems, among other results.
\begin{theorem}
\label{Theorem3}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation}
\mathcal{B}_{m}\left( \lambda \right) =\sum\limits_{n=0}^{m}\left(
n+1\right) !\lambda ^{n+1}y\left( n,\lambda \right) S_{2}(m,n+1),
\label{Abn-1}
\end{equation}
where $\mathcal{B}_{m}\left( \lambda \right) $ and $S_{2}(m,n)$ denote the
Apostol-Bernoulli numbers and the Stirling numbers of the second kind,
respectively. \end{theorem}
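Formula (\ref{Abn-1}) can be verified for small $m$ by computing the Apostol-Bernoulli numbers independently from the defining relation (\ref{Abn}), which yields the recurrence $(\lambda -1)\mathcal{B}_{N}(\lambda )=\delta _{N,1}-\lambda \sum_{v=0}^{N-1}\binom{N}{v}\mathcal{B}_{v}(\lambda )$. A sketch in exact arithmetic (helper names are illustrative):

```python
from fractions import Fraction
from math import comb, factorial

def y(n, lam):
    lam = Fraction(lam)
    return sum(Fraction((-1)**n) / ((j + 1) * lam**(j + 1) * (lam - 1)**(n + 1 - j))
               for j in range(n + 1))

def S2(n, k):  # Stirling numbers of the second kind, standard recurrence
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def apostol_bernoulli(N, theta):
    # coefficients of u/(theta*e^u - 1): (theta-1) B_n = [n == 1] - theta sum C(n,v) B_v
    theta = Fraction(theta)
    B = []
    for n in range(N + 1):
        B.append(((1 if n == 1 else 0)
                  - theta * sum(comb(n, v) * B[v] for v in range(n))) / (theta - 1))
    return B

lam = Fraction(3)
B = apostol_bernoulli(3, lam)
rhs_list = [sum(factorial(n + 1) * lam**(n + 1) * y(n, lam) * S2(m, n + 1)
                for n in range(m + 1)) for m in range(4)]
print(B == rhs_list)  # → True
```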
By applying equation (\ref{1aG1}), we derive the following new relation involving the harmonic numbers $H_{n}$ and the numbers $y\left( n,\lambda \right) $.
\begin{theorem}
\label{Theorem H} Let $n\in \mathbb{N}$. Then we have
\begin{eqnarray}
H_{2n+2}-H_{n+1} &=&-\frac{1}{2\left( n+1\right) }+\left( n+1\right)
\sum\limits_{k=0}^{2n-1}\frac{\lambda ^{k+2}}{2n-k}y\left( k,\lambda \right)
\label{aHY} \\
&&+\left( n+1\right) \left( \lambda -1\right) \sum\limits_{k=0}^{2n}\frac{
\lambda ^{k+1}}{2n+1-k}y\left( k,\lambda \right) . \notag
\end{eqnarray} \end{theorem}
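Identity (\ref{aHY}) can be confirmed in exact rational arithmetic for small $n$ (the summand involves $y(k,\lambda )$, $k$ being the summation index); a sketch with the illustrative choice $\lambda =2$:

```python
from fractions import Fraction

def y(n, lam):
    lam = Fraction(lam)
    return sum(Fraction((-1)**n) / ((j + 1) * lam**(j + 1) * (lam - 1)**(n + 1 - j))
               for j in range(n + 1))

def H(n):  # harmonic number H_n
    return sum(Fraction(1, j) for j in range(1, n + 1))

lam = Fraction(2)
checks = []
for n in range(1, 3):
    lhs = H(2 * n + 2) - H(n + 1)
    rhs = (Fraction(-1, 2 * (n + 1))
           + (n + 1) * sum(lam**(k + 2) * y(k, lam) / (2 * n - k) for k in range(2 * n))
           + (n + 1) * (lam - 1) * sum(lam**(k + 1) * y(k, lam) / (2 * n + 1 - k)
                                       for k in range(2 * n + 1)))
    checks.append((lhs, rhs))
print(all(a == b for a, b in checks))  # → True
```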
\begin{theorem}
\label{Theorem 4} Let $n\in \mathbb{N}$. Then we have
\begin{equation}
\sum_{j=0}^{n}\frac{\zeta _{E}^{(n+1-j)}\left( -m,n+2\right) }{\left(
j+1\right) 2^{n+1-j}}=\sum_{j=0}^{n}\frac{E_{m}^{(n+1-j)}\left( n+2\right) }{
\left( j+1\right) 2^{n+1-j}}, \label{ah5z}
\end{equation}
where $\zeta _{E}^{(d)}\left( s,x\right) $ denotes the multiple alternating
Hurwitz zeta function (multiple Hurwitz-Euler eta function), which is given
by \textup{(\ref{ah5zE})} as follows:
\begin{equation}
\zeta _{E}^{(d)}\left( s,x\right) =2^{d}\sum_{v=0}^{\infty }(-1)^{v}\binom{
v+d-1}{v}\frac{1}{\left( x+v\right) ^{s}} \label{ah5zE}
\end{equation}
(\textit{cf}. \cite{ChoiSrivastavaTJM}, \cite{SrivatavaChoi}). \end{theorem}
Note that proof of Theorem \ref{Theorem 4} will be presented in Section \ref{Section6}.
\begin{theorem}
\label{Theorem 5}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation}
y\left( m,\lambda \right) =\sum_{v=0}^{m}\sum_{n=0}^{v}(-1)^{v-m}\frac{
\left( \lambda -1\right) ^{v-m-1}B_{n}S_{1}(v,n)}{\lambda ^{v+1}v!}.
\label{1aGbs}
\end{equation} \end{theorem}
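Formula (\ref{1aGbs}) can be checked against the definition (\ref{ynldef}) by computing the Bernoulli numbers (with the convention $B_{1}=-\frac{1}{2}$) and the signed Stirling numbers of the first kind from their standard recurrences. A sketch in exact arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def y(n, lam):
    lam = Fraction(lam)
    return sum(Fraction((-1)**n) / ((j + 1) * lam**(j + 1) * (lam - 1)**(n + 1 - j))
               for j in range(n + 1))

def bernoulli(N):  # B_0..B_N from sum_{k=0}^{n} C(n+1,k) B_k = [n == 0]; B_1 = -1/2
    B = []
    for n in range(N + 1):
        B.append((Fraction(1 if n == 0 else 0)
                  - sum(comb(n + 1, k) * B[k] for k in range(n))) / (n + 1))
    return B

def S1(n, k):  # signed Stirling numbers of the first kind
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return S1(n - 1, k - 1) - (n - 1) * S1(n - 1, k)

lam, M = Fraction(2), 3
B = bernoulli(M)
lhs_list = [y(m, lam) for m in range(M + 1)]
# (-1)**(v - m) is rewritten as (-1)**(m - v) (same parity, nonnegative exponent)
rhs_list = [sum((-1)**(m - v) * (lam - 1)**(v - m - 1) * B[n] * S1(v, n)
                / (lam**(v + 1) * factorial(v))
                for v in range(m + 1) for n in range(v + 1)) for m in range(M + 1)]
print(lhs_list == rhs_list)  # → True
```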
We give two different proofs of Theorem \ref{Theorem 5}. The first proof uses generating functions and the functional equation method. The second proof is based on the $p$-adic integral method.
\begin{theorem}
\label{Theorem 6} Let $f(\lambda )$ be an entire function and let $|\lambda |<1$. Then we have
\begin{equation*}
\sum_{v=0}^{\infty }f(v)\lambda ^{v}+\sum_{m=1}^{\infty }\frac{f^{(m-1)}(0)}{
m!}\sum\limits_{n=0}^{m}\left( n+1\right) !\lambda ^{n+1}y\left( n,\lambda
\right) S_{2}(m,n+1)=0.
\end{equation*} \end{theorem}
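As an illustration of Theorem \ref{Theorem 6}, take $f(x)=x$, so that $f(0)=0$, $f^{\prime }(0)=1$, and all higher derivatives vanish; the first sum is $\sum_{v\geq 0}v\lambda ^{v}=\lambda /(1-\lambda )^{2}$, and only the $m=2$ term survives in the second sum. A sketch of this check in exact arithmetic:

```python
from fractions import Fraction
from math import factorial

def y(n, lam):
    lam = Fraction(lam)
    return sum(Fraction((-1)**n) / ((j + 1) * lam**(j + 1) * (lam - 1)**(n + 1 - j))
               for j in range(n + 1))

def S2(n, k):  # Stirling numbers of the second kind
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

lam = Fraction(1, 3)                   # |lambda| < 1, as required by the theorem
first = lam / (1 - lam)**2             # closed form of sum_{v>=0} v * lambda^v
# for f(x) = x only m = 2 contributes, with f'(0)/2! = 1/2
second = Fraction(1, factorial(2)) * sum(
    factorial(n + 1) * lam**(n + 1) * y(n, lam) * S2(2, n + 1) for n in range(3))
print(first + second)  # → 0
```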
\subsection{Preliminaries}
Throughout the paper, we use the following notation and definitions.
Let $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ denote the set of natural numbers, the set of integers, the set of real numbers, and the set of complex numbers, respectively, and let $\mathbb{N}_{0}:=\mathbb{N}\cup \left\{ 0\right\} $. For $z\in \mathbb{C}$ with $z=x+iy$ ($x,y\in \mathbb{R}$), $\operatorname{Re}(z)=x$ and $\operatorname{Im}(z)=y$; moreover, $\ln z$ denotes the principal branch of the many-valued logarithm function, with the imaginary part of $\ln z$ constrained by \begin{equation*} -\pi <\operatorname{Im}(\ln z)\leq \pi . \end{equation*} We also use the convention \begin{equation*} 0^{n}=\left\{ \begin{array}{cc} 1, & n=0 \\ 0, & n\in \mathbb{N}. \end{array} \right. \end{equation*}
The well-known generalized hypergeometric function is given by \begin{equation} _{p}F_{q}\left[ \begin{array}{c} \alpha _{1},...,\alpha _{p} \\ \beta _{1},...,\beta _{q} \end{array} ;z\right] =\sum\limits_{m=0}^{\infty }\left( \frac{\prod\limits_{j=1}^{p}
\left( \alpha _{j}\right) ^{\overline{m}}}{\prod\limits_{j=1}^{q}\left(
\beta _{j}\right) ^{\overline{m}}}\right) \frac{z^{m}}{m!}, \label{hyper} \end{equation} where the above series converges for all $z$ if $p<q+1$, and for $\left\vert z\right\vert <1$ if $p=q+1$. All parameters may have general values, real or complex, except for the $\beta _{j}$, $j=1,2,...,q$, none of which is equal to zero or a negative integer. Here the rising factorial is defined by \begin{equation*} \left( \lambda \right) ^{\overline{v}}=\prod\limits_{j=0}^{v-1}(\lambda +j), \end{equation*} and $\left( \lambda \right) ^{\overline{0}}=1$, where $v\in \mathbb{N}$, $\lambda \in \mathbb{C}$. For the generalized hypergeometric functions and their applications, we refer to (\textit{cf}. \cite {Koepf,simsekJMAA,RT}, \cite{Rao}; and references therein).
The falling factorial is similarly defined by \begin{equation*} \left( \lambda \right) ^{\underline{v}}=\prod\limits_{j=0}^{v-1}(\lambda -j), \end{equation*} and $\left( \lambda \right) ^{\underline{0}}=1$.
The Bernoulli polynomials of higher order, $B_{n}^{(l)}\left( y\right)$, are defined by \begin{equation} F_{B}(u,y;l)=\left( \frac{u}{e^{u}-1}\right) ^{l}e^{yu}=\sum_{n=0}^{\infty }B_{n}^{(l)}\left( y\right) \frac{u^{n}}{n!}, \label{ApostolBern} \end{equation} (\textit{cf}. \cite{balMS,CAC,comtet,KimDahee,Koepf,KucukogluAADM2019,Ozdemir,5Riardon,Roman,Rota,simsekAADM,Simsekfilomat,simsekJMAA,SimsekMTJPM,SimsekMTJPAM2020,1Sofo,SrivatavaChoi}; and references therein).
When $y=0$ in (\ref{ApostolBern}), we have the Bernoulli numbers of order $l$ \begin{equation*} B_{n}^{(l)}=B_{n}^{(l)}\left( 0\right) , \end{equation*} and when $l=1$, we have the Bernoulli numbers \begin{equation*} B_{n}=B_{n}^{(1)}, \end{equation*} (\textit{cf}. \cite{CAC,comtet,KimDahee,Koepf,KucukogluAADM2019,Ozdemir,5Riardon,Roman,Rota,simsekAADM,Simsekfilomat,simsekJMAA,SimsekMTJPM,SimsekMTJPAM2020,1Sofo,SrivatavaChoi}; and references therein).
The Euler polynomials of higher order, $E_{m}^{(l)}(y)$, are defined by
\begin{equation} F_{E}(u,y;l)=\left( \frac{2}{e^{u}+1}\right) ^{l}e^{yu}=\sum_{m=0}^{\infty }E_{m}^{(l)}\left( y\right) \frac{u^{m}}{m!}. \label{Aeuler} \end{equation}
The\ harmonic numbers $H_{n}$ are defined by \begin{equation} F_{1}(u)=\frac{\ln (1-u)}{u-1}=\sum\limits_{n=1}^{\infty }H_{n}u^{n}, \label{aH} \end{equation} where $H_{0}=0$ and $\left\vert u\right\vert <1$ (\textit{cf}. \cite{comtet,5Riardon,1Sofo,SrivatavaChoi,ChoiJIA}).
A relation between the numbers $y\left( n,\frac{1}{2}\right) $ and $H_{n}$ is given as follows: \begin{equation} y\left( n,\frac{1}{2}\right) =2^{n+2}\left( H_{\left[ \frac{n}{2}\right] }-H_{n}+\frac{(-1)^{n+1}}{n+1}\right) , \label{91c} \end{equation} (\textit{cf}. \cite{SimsekREVISTA}).
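Relation (\ref{91c}) is easy to confirm directly from (\ref{ynldef}) in exact rational arithmetic; a sketch:

```python
from fractions import Fraction

def y(n, lam):
    lam = Fraction(lam)
    return sum(Fraction((-1)**n) / ((j + 1) * lam**(j + 1) * (lam - 1)**(n + 1 - j))
               for j in range(n + 1))

def H(n):  # harmonic number H_n, with H_0 = 0
    return sum(Fraction(1, j) for j in range(1, n + 1))

pairs = [(y(n, Fraction(1, 2)),
          2**(n + 2) * (H(n // 2) - H(n) + Fraction((-1)**(n + 1), n + 1)))
         for n in range(8)]
print(all(a == b for a, b in pairs))  # → True
```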
The alternating Harmonic numbers $\mathcal{H}_{n}$ are defined by \begin{equation} F_{2}(u)=\frac{\ln (1+u)}{u-1}=\sum\limits_{n=1}^{\infty }\mathcal{H} _{n}u^{n}, \label{AlH} \end{equation} where $\left\vert u\right\vert <1$ (\textit{cf}. \cite{comtet}, \cite{Guo}, \cite{SimsekREVISTA}, \cite{1Sofo}).
In \cite[Eq. (20)]{SimsekREVISTA}, we showed that the following formula \begin{equation} y\left( n,\frac{1}{2}\right) =2^{n+2}\sum_{j=0}^{n}\frac{(-1)^{j+1}}{j+1} \label{91b} \end{equation} is related to the following well-known alternating harmonic numbers \begin{equation*} \mathcal{H}_{n}=\sum_{j=1}^{n}\frac{(-1)^{j}}{j}=H_{\left[ \frac{n}{2}\right] }-H_{n}, \end{equation*} (\textit{cf}. \cite{comtet}, \cite{Guo}, \cite[Eq. (1.5)]{1Sofo}, \cite[Eq. (20)]{SimsekREVISTA}).
The Stirling numbers of the first kind $S_{1}\left( v,d\right) $ are defined by \begin{equation} F_{s1}(u,d)=\frac{\left( \ln (1+u)\right) ^{d}}{d!}=\sum_{v=0}^{\infty }S_{1}\left(v,d\right) \frac{u^{v}}{v!}, \label{Sitirling1} \end{equation} and \begin{equation} \left( u\right) ^{\underline{d}}=\sum_{j=0}^{d}S_{1}\left( d,j\right) u^{j}, \label{Sitirling1a} \end{equation} (\textit{cf}. \cite{CAC,comtet,KimDahee,Koepf,KucukogluAADM2019,Ozdemir,5Riardon,Roman,Rota,simsekAADM,Simsekfilomat,simsekJMAA,simsekRSCM,SimsekMTJPM,SimsekMTJPAM2020,1Sofo,SrivatavaChoi}; and references therein).
Combining (\ref{Sitirling1}) with the Lagrange inversion formula, a computation formula of the Stirling numbers of the first kind is given by \begin{equation} S_{1}\left( n,k\right) =\sum_{c=0}^{n-k}\sum_{j=0}^{c}(-1)^{j}\binom{c}{j} \binom{n+c-1}{k-1}\binom{2n-k}{n-k-c}\frac{j^{n-k+c}}{c!}, \label{s1C} \end{equation} where $k=0,1,2,\ldots ,n$ and $n\in \mathbb{N}_{0}$ (\textit{cf}. \cite[Eq. (8.21), p. 291]{Charalambos}).
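Formula (\ref{s1C}) can be compared with the standard recurrence $S_{1}(n,k)=S_{1}(n-1,k-1)-(n-1)S_{1}(n-1,k)$ for the signed Stirling numbers of the first kind (with the convention $0^{0}=1$). A sketch, restricted to $1\leq k\leq n$:

```python
from fractions import Fraction
from math import comb, factorial

def S1_rec(n, k):  # signed Stirling numbers of the first kind, standard recurrence
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return S1_rec(n - 1, k - 1) - (n - 1) * S1_rec(n - 1, k)

def S1_formula(n, k):
    # equation (s1C); relies on Python's 0**0 == 1; restricted to 1 <= k <= n,
    # since math.comb rejects the negative argument k - 1 that occurs for k = 0
    return sum(Fraction((-1)**j * comb(c, j) * comb(n + c - 1, k - 1)
                        * comb(2 * n - k, n - k - c) * j**(n - k + c), factorial(c))
               for c in range(n - k + 1) for j in range(c + 1))

ok = all(S1_formula(n, k) == S1_rec(n, k) for n in range(1, 7) for k in range(1, n + 1))
print(ok)  # → True
```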
The Stirling numbers of the second kind $S_{2}\left( v,d\right) $ are defined by \begin{equation} F_{s2}(u,k)=\frac{\left( e^{u}-1\right) ^{d}}{d!}=\sum_{v=0}^{\infty }S_{2}\left( v,d\right) \frac{u^{v}}{v!} \label{S2} \end{equation} (\textit{cf}. \cite{CAC,comtet,KimDahee,Koepf,KucukogluAADM2019,Ozdemir,5Riardon,Roman,Rota,simsekAADM,Simsekfilomat,simsekJMAA,simsekRSCM,SimsekMTJPM,SimsekMTJPAM2020,1Sofo,SrivatavaChoi}; and references therein).
Using (\ref{S2}), a computation formula for the Stirling numbers of the second kind is given by \begin{equation} S_{2}\left( n,k\right) =\frac{1}{k!}\sum_{c=0}^{k}(-1)^{k-c}\binom{k}{c}c^{n}, \label{S2C} \end{equation} where $k=0,1,2,\ldots ,n$ and $n\in \mathbb{N}_{0}$ (\textit{cf}. \cite[Eq. (8.19), p. 289]{Charalambos}).
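Likewise, formula (\ref{S2C}) can be compared with the recurrence $S_{2}(n,k)=kS_{2}(n-1,k)+S_{2}(n-1,k-1)$; a sketch:

```python
from fractions import Fraction
from math import comb, factorial

def S2_rec(n, k):  # standard recurrence S_2(n,k) = k*S_2(n-1,k) + S_2(n-1,k-1)
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * S2_rec(n - 1, k) + S2_rec(n - 1, k - 1)

def S2_formula(n, k):  # equation (S2C); 0**0 == 1 covers the n = k = 0 case
    return sum(Fraction((-1)**(k - c) * comb(k, c) * c**n, factorial(k))
               for c in range(k + 1))

ok = all(S2_formula(n, k) == S2_rec(n, k) for n in range(8) for k in range(n + 1))
print(ok)  # → True
```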
The Apostol-Bernoulli numbers $\mathcal{B}_{v}\left( \theta \right) $ are defined by \begin{equation} F_{A}(u,\theta )=\frac{u}{\theta e^{u}-1}=\sum_{v=0}^{\infty }\mathcal{B} _{v}\left( \theta \right) \frac{u^{v}}{v!} \label{Abn} \end{equation} (\textit{cf}. \cite{Apostol}).
Combining (\ref{Abn}) with (\ref{S2}), a computation formula of the Apostol-Bernoulli numbers $\mathcal{B}_{n}\left( \theta \right) $ is given by \begin{equation} \mathcal{B}_{n}\left( \theta \right) =\frac{n\theta }{\left( \theta
-1\right) ^{n}}\sum_{c=0}^{n-1}(-1)^{c}c!\theta ^{c-1}\left( \theta -1\right) ^{n-1-c}S_{2}(n-1,c) \label{AbnC} \end{equation} (\textit{cf}. \cite{Apostol}).
The Leibnitz polynomials $L_{m}(x)$ are defined by the following generating function \begin{equation} \mathcal{G}_{l}\left( x,u\right) =\frac{\ln \left( 1-u\right) +\ln \left(
1-xu\right) }{\left( 1-u\right) \left( 1-xu\right) -1}=\sum\limits_{m=0}^{
\infty }L_{m}(x)u^{m}, \label{A1} \end{equation}
where $|u|<1$ (\textit{cf}. \cite[Exercise 16, p. 127]{Charalambos}).
Using (\ref{A1}), the polynomials $L_{m}(x)$, whose degree is $m$, are given by \begin{equation*} L_{m}(x):=\sum\limits_{l=0}^{m}\boldsymbol{l}\left( m,l\right) x^{l}, \end{equation*} where $\boldsymbol{l}\left( m,l\right) $\ denotes the Leibnitz numbers, defined by \begin{equation} \boldsymbol{l}\left( m,l\right) =\frac{1}{\left( m+1\right) \binom{m}{l}} \label{ExpLeib} \end{equation} or \begin{equation} \boldsymbol{l}\left( m,l\right) =\sum\limits_{d=0}^{l}\left( -1\right) ^{l-d} \frac{1}{m-d+1}\binom{l}{d}, \label{SumLeib} \end{equation} where $l=0,1,2,\ldots ,m$ and $m\in \mathbb{N}_{0}$ (\textit{cf}. \cite[ Exercise 16, p. 127]{Charalambos}, \cite{SimsekASCM2021}).
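The equality of the two representations (\ref{ExpLeib}) and (\ref{SumLeib}) of the Leibnitz numbers can be verified exactly for small indices; the following Python sketch (our code, exact rational arithmetic) does so:

```python
from fractions import Fraction
from math import comb

def leibnitz_closed(m, l):
    """Leibnitz numbers: l(m, l) = 1 / ((m+1) * C(m, l))."""
    return Fraction(1, (m + 1) * comb(m, l))

def leibnitz_sum(m, l):
    """Leibnitz numbers as the alternating sum over d."""
    return sum(Fraction((-1) ** (l - d) * comb(l, d), m - d + 1) for d in range(l + 1))

# both representations agree for 0 <= l <= m
assert all(leibnitz_closed(m, l) == leibnitz_sum(m, l)
           for m in range(12) for l in range(m + 1))
```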
The Bernstein basis functions are defined by \begin{equation} B_{l}^{m}(x)=\binom{m}{l}x^{l}(1-x)^{m-l} \label{aberns} \end{equation}
Integrating equation (\ref{aberns}) with respect to $x$ from $0$ to $1$, we have \begin{equation*} \frac{1}{\binom{m}{l}}\int\limits_{0}^{1}B_{l}^{m}(x)dx=\sum \limits_{d=0}^{m-l}(-1)^{m-l-d}\binom{m-l}{d}\int\limits_{0}^{1}x^{m-d}dx. \end{equation*} Combining the above equation with (\ref{ExpLeib}), we have \begin{equation*} \boldsymbol{l}\left( m,l\right) =\sum\limits_{d=0}^{m-l}(-1)^{m-l-d}\binom{
m-l}{d}\frac{1}{m-d+1}. \end{equation*}
The numbers $Y_{v}(\lambda )$ are defined by \begin{equation} g(u;\lambda )=\frac{2}{\lambda ^{2}u+\lambda -1}=\sum_{v=0}^{\infty }Y_{v}(\lambda )\frac{u^{v}}{v!}, \label{1aYY} \end{equation} where \begin{equation*} Y_{v}(\lambda )=-\frac{2\left( v!\right) \lambda ^{2v}}{\left( 1-\lambda
\right) ^{v+1}} \end{equation*} (\textit{cf}. \cite[Eq. (2.13)]{SimsekTJM2018}).
The Bernoulli numbers of the second kind (the Cauchy numbers) $b_{v}(0)$ are defined by \begin{equation} F_{b2}(u)=\frac{u}{\ln (1+u)}=\sum_{v=0}^{\infty }b_{v}(0)\frac{u^{v}}{v!}, \label{Be-1t} \end{equation} (\textit{cf}. \cite{comtet}, \cite[p. 116]{Roman}, \cite{SimsekMTJPM}, \cite{SrivatavaChoi}; see also the references cited in each of these earlier works).
The numbers $D_{n}$, the so-called Daehee numbers, are defined by \begin{equation} F_{3}(u)=\frac{\ln (1+u)}{u}=\sum_{n=0}^{\infty }D_{n}\frac{u^{n}}{n!},\quad\text{ }(u\neq 0,\left\vert u\right\vert <1) \label{Da} \end{equation} (\textit{cf}. \cite{KimDahee}).
By combining the Newton-Mercator series with (\ref{Da}), one has the following formula: \begin{equation} D_{n}=(-1)^{n}\frac{n!}{n+1}, \label{D} \end{equation} (\textit{cf}. \cite[p. 117]{CAC}, \cite{KimDahee}, \cite[p. 45, Exercise 19 (b)]{5Riardon}).
The derangement numbers $d_{m}$ are defined by the following generating function: \begin{equation} F_{d}(u)=\frac{e^{-u}}{1-u}=\sum_{m=0}^{\infty }d_{m}\frac{u^{m}}{m!},\quad\text{ }(\left\vert u\right\vert <1) \label{dn} \end{equation} where \begin{equation} d_{m}=\sum\limits_{j=0}^{m}(-1)^{j}\left( m-j\right) !\binom{m}{j} \label{dm} \end{equation} (\textit{cf}. \cite{Carlitz}, \cite{Ma}).
The Fibonacci-type polynomials in two variables are defined by the following generating function: \begin{equation} H\left( t;x,y;k,m,l\right) =\sum_{n=0}^{\infty }\mathcal{G}_{n}\left( x,y;k,m,l\right) t^{n}=\frac{1}{1-x^{k}t-y^{m}t^{m+l}}, \label{GH} \end{equation} where $k,m,l\in \mathbb{N}_{0}$ (\textit{cf}. \cite{ozdemirFilomat}).
Using (\ref{GH}), we have the following explicit formula for the polynomials $\mathcal{G}_{n}\left( x,y;k,m,l\right) $: \begin{equation*} \mathcal{G}_{n}\left( x,y;k,m,l\right) =\sum_{c=0}^{\left[ \frac{n}{m+l}
\right] }\binom{n-c\left( m+l-1\right) }{c}y^{mc}x^{nk-mck-lck}, \end{equation*} where $\left[ a\right] $ is the largest integer $\leq a$ (\textit{cf}. \cite {ozdemirFilomat}, \cite{Ozdemir}).
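The explicit formula above can be checked against the recurrence $\mathcal{G}_{n}=x^{k}\mathcal{G}_{n-1}+y^{m}\mathcal{G}_{n-m-l}$ (with $\mathcal{G}_{0}=1$), which is read off directly from (\ref{GH}); the following Python sketch (our code, with sample integer values of $x$ and $y$) performs this comparison:

```python
from math import comb

def G_explicit(n, x, y, k, m, l):
    """Explicit formula for the Fibonacci-type polynomials G_n(x, y; k, m, l)."""
    return sum(comb(n - c * (m + l - 1), c) * y ** (m * c) * x ** (k * (n - c * (m + l)))
               for c in range(n // (m + l) + 1))

def G_recurrence(n, x, y, k, m, l):
    """Same polynomials via the recurrence read off from 1/(1 - x^k t - y^m t^{m+l})."""
    G = [0] * (n + 1)
    for i in range(n + 1):
        G[i] = ((1 if i == 0 else 0)
                + (x ** k * G[i - 1] if i >= 1 else 0)
                + (y ** m * G[i - (m + l)] if i >= m + l else 0))
    return G[n]

# cross-check for k = m = l = 1 at x = 2, y = 3
assert all(G_explicit(n, 2, 3, 1, 1, 1) == G_recurrence(n, 2, 3, 1, 1, 1)
           for n in range(15))
```

For $x=y=k=m=l=1$ the recurrence reduces to the Fibonacci numbers, which gives a convenient second check.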
We briefly summarize the remaining sections of the article.
In Section \ref{Section2}, we give the solution of Open Problem 1, proposed by the author in \cite[p.57, Open problem 1]{SimsekMTJPAM2020}, about the generating functions of the numbers $y\left( n,\lambda \right) $. We give many properties of these functions.
In Section \ref{Section3}, with the help of generating functions and their functional equations, we give many identities involving the numbers $y(n,\lambda )$, the Bernoulli numbers of the second kind, the harmonic numbers, the alternating harmonic numbers, the Apostol-Bernoulli numbers, the Stirling numbers, the Leibnitz numbers, the Bernoulli numbers, and sums involving higher powers of inverses of binomial coefficients.
In Section \ref{Section4}, we give a computation algorithm for the numbers $y\left( n,\lambda \right) $. We also give some values of the numbers $y\left( n,\lambda \right) $.
In Section \ref{Section5}, we give differential equations for the generating functions and their applications. We give some applications of these equations.
In Section \ref{Section6}, with the aid of the numbers $y\left( n,\lambda \right) $, we give a decomposition of the multiple Hurwitz zeta functions involving the Bernoulli polynomials of higher order.
In Section \ref{Section7}, we give infinite series representations of the numbers $y\left( n,\lambda \right) $ involving entire functions.
In Section \ref{Section8}, we also construct the generating function for the numbers $y\left( n,\lambda \right) $ with the help of the Volkenborn integral on the $p$-adic integers. We give some applications of the $p$-adic integral.
In Section \ref{Section9}, we present conclusions about the results of this paper.
\section{Generating functions for the numbers $y\left( n,\protect\lambda
\right) $} \label{Section2}
In this section, we construct generating functions for the numbers $y\left( n,\lambda \right) $. We give some properties of these functions. By using these functions and their functional equations, we give many new computational formulas and relations for the numbers $y\left( n,\lambda \right) $ and special finite sums.
We also give the solution of the following open problem, which has been proposed in \cite[p.57, Open problem 1]{SimsekMTJPAM2020}:
\textit{What is the generating function for the numbers }$y\left( n,2\right) $ \textit{\ and the numbers }$y\left( n,\lambda \right) ?$
Its answer is given by Theorem \ref{Theorem1}.
\begin{proof}[Proof of Theorem \protect\ref{Theorem1}] Substituting (\ref{ynldef}) into (\ref{1aG}), after
some calculations, we obtain
\begin{equation*}
G\left( z,\lambda \right) =\sum\limits_{n=0}^{\infty }\sum_{j=0}^{n}\frac{1}{
j+1}\left( \frac{\lambda -1}{\lambda }\right) ^{j+1}z^{n}.
\end{equation*}
The following result is obtained by decomposing the above series with the
aid of the Cauchy product of two infinite series.
\begin{equation*}
G\left( z,\lambda \right) =\frac{1}{z}\sum_{n=0}^{\infty }\frac{(-1)^{n+1}}{
n+1}\left( \frac{1-\lambda }{\lambda }z\right)
^{n+1}\sum\limits_{n=0}^{\infty }z^{n}.
\end{equation*}
After combining the above equation with the following well-known
Newton-Mercator series
\begin{equation*}
\ln (1+z)=\sum_{j=0}^{\infty }\frac{(-1)^{j}}{j+1}z^{j+1},\text{ }
(\left\vert z\right\vert <1)
\end{equation*}
and geometric series
\begin{equation*}
\sum\limits_{n=0}^{\infty }z^{n}=\frac{1}{1-z},\text{ }(\left\vert
z\right\vert <1)
\end{equation*}
yields the assertion of Theorem \ref{Theorem1}. \end{proof}
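To illustrate the series manipulations in the proof above, the following Python sketch (our own code, not part of the paper; exact arithmetic via \texttt{fractions}) expands the closed form $\ln \left( 1+\frac{1-\lambda }{\lambda }z\right) /\left( z^{2}-z\right) $ of $G\left( z,\lambda \right) $, consistent with the special cases (\ref{1aG2})--(\ref{1aYn}) below, as a formal power series at the sample value $\lambda =\frac{1}{3}$, and compares its coefficients with the double-sum representation used in the proof:

```python
from fractions import Fraction

lam = Fraction(1, 3)          # sample rational lambda (assumption: lam != 0, 1)
a = (1 - lam) / lam           # argument scaling in ln(1 + a z)
N = 12

# Taylor coefficients of ln(1 + a z): 0, a, -a^2/2, a^3/3, ...
log_coeffs = [Fraction(0)] + [(-1) ** (j - 1) * a ** j / j for j in range(1, N + 2)]

# Divide by z^2 - z = -z(1 - z): negate the partial sums and shift down one power
G_coeffs = []
partial = Fraction(0)
for n in range(N + 1):
    partial += log_coeffs[n + 1]
    G_coeffs.append(-partial)

# Compare with the double-sum representation from the proof of Theorem 1
for n in range(N + 1):
    inner = sum(Fraction(1, j + 1) * ((lam - 1) / lam) ** (j + 1) for j in range(n + 1))
    assert G_coeffs[n] == inner
```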
\begin{proof}[Proof of Theorem \protect\ref{Theorem2}]
Substituting $p=2$, $q=1$, $\alpha _{1}=\alpha _{2}=1$, $\beta _{1}=1$ and $
u=\frac{1-\lambda }{\lambda }z$ into (\ref{hyper}), we obtain
\begin{equation*}
_{2}F_{1}\left[
\begin{array}{c}
1,1 \\
2
\end{array}
;\frac{1-\lambda }{\lambda }z\right] =\sum\limits_{m=0}^{\infty }\frac{
\left( 1\right) ^{\overline{m}}\left( 1\right) ^{\overline{m}}}{\left(
2\right) ^{\overline{m}}m!}\left( \frac{1-\lambda }{\lambda }z\right) ^{m}.
\end{equation*}
Multiplying both sides of the above equation by $\frac{1-\lambda }{\lambda
\left( z-1\right) }$, we get
\begin{equation*}
_{2}F_{1}\left[
\begin{array}{c}
1,1 \\
2
\end{array}
;\frac{1-\lambda }{\lambda }z\right] \frac{1-\lambda }{\lambda \left(
z-1\right) }=\frac{1}{z-1}\sum\limits_{m=0}^{\infty }\frac{1}{m+1}\left(
\frac{1-\lambda }{\lambda }z\right) ^{m+1}.
\end{equation*}
Combining the above equation with (\ref{1aG1}), we obtain
\begin{equation*}
\frac{\left( 1-\lambda \right) z}{\lambda \left( z-1\right) }\text{ }
_{2}F_{1}\left[
\begin{array}{c}
1,1 \\
2
\end{array}
;\frac{1-\lambda }{\lambda }z\right] =G\left( z,\lambda \right) .
\end{equation*}
Thus, the proof of Theorem \ref{Theorem2} is completed. \end{proof}
Some special cases of the generating function, given in (\ref{1aG1}), are given as follows:
Substituting $\lambda =-1$ into (\ref{1aG1}), we have the following generating function for the numbers $y\left( n,-1\right) $: \begin{equation} g_{1}(z)=G\left( z,-1\right) =\frac{\ln (1-2z)}{z^{2}-z}=\sum\limits_{n=0}^{
\infty }2^{n+2}y\left( n,-1\right) z^{n}, \label{1aG2} \end{equation} where $z\neq 0$, $z\neq 1$ and $\left\vert 2z\right\vert <1$.
The function $g_{1}(z)$ is associated with the generating function for the finite sums of powers of inverse binomial coefficients. These relationships will be investigated in detail in the following sections.
Substituting $\lambda =2$ into (\ref{1aG1}), we have the following generating function for the numbers $y\left( n,2\right) $: \begin{equation} g_{2}(z)=G\left( z,2\right) =\frac{\ln (1-\frac{z}{2})}{z^{2}-z} =\sum\limits_{n=0}^{\infty }(-1)^{n}y\left( n,2\right) z^{n}, \label{1aG3} \end{equation} where $z\neq 0$, $z\neq 1$ and $\left\vert z\right\vert <1$.
By using the function $g_{2}(z)$, relationships among the numbers $y\left( n,2\right) $, the Bernoulli numbers, the Stirling numbers, and some special finite sums will be investigated in detail in the following sections.
Substituting $\lambda =\frac{1}{2}$ into (\ref{1aG1}), we have the following generating function for the numbers $y\left( n,\frac{1}{2}\right) $: \begin{equation} g_{3}(z)=G\left( z,\frac{1}{2}\right) =\frac{\ln (1+z)}{z^{2}-z} =\sum\limits_{n=0}^{\infty }\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}\right) z^{n}, \label{1aYn} \end{equation} where $z\neq 0$, $z\neq 1$ and $\left\vert z\right\vert <1$.
By using the function $g_{3}(z)$, relationships among the numbers $y\left( n, \frac{1}{2}\right) $, the Bernoulli numbers, the Stirling numbers, the Harmonic numbers, and some special finite sums will be investigated in detail in the following sections.
\section{Identities derived from generating functions} \label{Section3}
In this section, using generating functions and their functional equations, we give interesting and novel formulas and identities involving the numbers $y(n,\lambda )$, the Bernoulli numbers of the second kind, the harmonic numbers, the alternating harmonic numbers, the Apostol-Bernoulli numbers, the Stirling numbers, the Leibnitz numbers, the Bernoulli numbers, and sums involving higher powers of inverses of binomial coefficients.
We begin this section by giving proofs of some theorems given in Section 1.
\begin{proof}[Proof of Theorem \protect\ref{Theorem H}]
Multiplying both sides of equation (\ref{1aG}) by the function $\ln \left(
1+\frac{\lambda -1}{\lambda }z\right) $ and using the Newton-Mercator
series, we obtain
\begin{equation}
\frac{\ln \left( 1-\frac{\lambda -1}{\lambda }z\right) \ln \left( 1+\frac{
\lambda -1}{\lambda }z\right) }{z(z-1)}=\sum\limits_{n=0}^{\infty
}\sum\limits_{k=0}^{n}(-1)^{n}\frac{\left( 1-\lambda \right) ^{n+3}y\left(
k,\lambda \right) }{\left( n+1-k\right) \lambda ^{n-k+1}}z^{n+1}.
\label{Af-1}
\end{equation}
By using Abel's summation formula and the product of two Newton-Mercator
series, Furdui \cite{Furdui} gave the following formula:
\begin{equation}
\ln \left( 1-y\right) \ln \left( 1+y\right) =\sum\limits_{v=1}^{\infty
}\left( H_{v}-H_{2v}-\frac{1}{2v}\right) \frac{y^{2v}}{v},\text{ }y^{2}<1.
\label{Af2}
\end{equation}
Combining (\ref{Af-1}) with (\ref{Af2}), we get
\begin{eqnarray*}
&&\sum\limits_{m=1}^{\infty }\left( H_{m}-H_{2m}-\frac{1}{2m}\right) \left(
\frac{\lambda -1}{\lambda }\right) ^{2m}\frac{z^{2m}}{m} \\
&=&\sum\limits_{n=3}^{\infty }\sum\limits_{k=0}^{n-3}(-1)^{n+1}\frac{\left(
1-\lambda \right) ^{n}y\left( k,\lambda \right) }{\left( n-k-2\right)
\lambda ^{n-k-2}}z^{n} \\
&&+\sum\limits_{n=2}^{\infty }\sum\limits_{k=0}^{n-2}(-1)^{n+1}\frac{\left(
1-\lambda \right) ^{n+1}y\left( k,\lambda \right) }{\left( n-k-1\right)
\lambda ^{n-k-1}}z^{n}.
\end{eqnarray*}
Therefore
\begin{eqnarray}
&&\sum\limits_{n=1}^{\infty }\left( H_{n}-H_{2n}-\frac{1}{2n}\right) \left(
\frac{\lambda -1}{\lambda }\right) ^{2n}\frac{z^{2n}}{n} \label{1aGi} \\
&=&\sum\limits_{n=1}^{\infty }\sum\limits_{k=0}^{2n-1}\frac{\left( 1-\lambda
\right) ^{2n+2}y\left( k,\lambda \right) }{\left( 2n-k\right) \lambda ^{2n-k}
}z^{2n}+\sum\limits_{n=1}^{\infty }\sum\limits_{k=0}^{2n}\frac{\left(
1-\lambda \right) ^{2n+3}y\left( k,\lambda \right) }{\left( 2n-k+1\right)
\lambda ^{2n-k+1}}z^{2n} \notag \\
&&+\sum\limits_{n=1}^{\infty }\left( \sum\limits_{k=0}^{2n-2}\frac{\left(
1-\lambda \right) ^{2n+1}y\left( k,\lambda \right) }{\left( 2n-k-1\right)
\lambda ^{2n-k-1}}-\sum\limits_{k=0}^{2n-1}\frac{\left( 1-\lambda \right)
^{2n+2}y\left( k,\lambda \right) }{\left( 2n-k\right) \lambda ^{2n-k}}
\right) z^{2n+1}. \notag
\end{eqnarray}
After making some necessary algebraic calculations in the previous equation
and equating the coefficients of $z^{2n}$ on both sides, we arrive at the
desired result. \end{proof}
By using (\ref{1aGi}), we also get the following result:
\begin{corollary}
Let $n\in\mathbb{N}$. Then we have
\begin{equation*}
\sum\limits_{k=0}^{2n-2}\frac{\left( 1-\lambda \right) ^{2n+1}y\left(
k,\lambda \right) }{\left( 2n-k-1\right) \lambda ^{2n-k-1}}
-\sum\limits_{k=0}^{2n-1}\frac{\left( 1-\lambda \right) ^{2n+2}y\left(
k,\lambda \right) }{\left( 2n-k\right) \lambda ^{2n-k}}=0.
\end{equation*} \end{corollary}
We give the following functional equation \begin{equation} -\frac{\lambda -1}{\lambda }\left( x+1-\frac{\lambda -1}{\lambda }z\right) \mathcal{G}_{l}\left( x,\frac{\lambda -1}{\lambda }z\right) =(z-1)G\left( z,\lambda \right) +x(xz-1)G\left( xz,\lambda \right) . \label{A1l} \end{equation} Combining (\ref{A1l}) with (\ref{1aG}) and (\ref{A1}) yields \begin{eqnarray*}
&&-\left( x+1\right) \sum\limits_{n=0}^{\infty }L_{n}(x)\left( \frac{\lambda
-1}{\lambda }\right) ^{n+1}z^{n}+x\sum\limits_{n=1}^{\infty
}L_{n-1}(x)\left( \frac{\lambda -1}{\lambda }\right) ^{n+1}z^{n} \\
&=&-\sum\limits_{n=0}^{\infty }(-1)^{n}\left( \lambda -1\right) ^{n+2}\left(
x^{n+1}+1\right) y\left( n,\lambda \right) z^{n} \\
&&-\sum\limits_{n=0}^{\infty }(-1)^{n}\left( 1-\lambda \right) ^{n+1}\left(
x^{n+1}+1\right) y\left( n-1,\lambda \right) z^{n}. \end{eqnarray*} Comparing the coefficients of $z^{n}$ on both sides of the above equation, we have the following theorem: \begin{theorem}
Let $n\in \mathbb{N}$. Then we have
\begin{equation}
\left( x+1\right) L_{n}(x)-xL_{n-1}(x)=(-1)^{n}\lambda ^{n+1}\left(
x^{n+1}+1\right) \left( \left( \lambda -1\right) y\left( n,\lambda \right)
+y\left( n-1,\lambda \right) \right) . \label{A1l1}
\end{equation} \end{theorem}
We set the following functional equation: \begin{equation*} g_{3}(z)F_{b2}(z)=\frac{1}{z-1}, \end{equation*} where $\left\vert z\right\vert <1$. Combining the above functional equation with (\ref{1aYn}) and (\ref{Be-1t}), we get \begin{equation*} \sum\limits_{n=0}^{\infty }\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}\right) z^{n}\sum_{n=0}^{\infty }b_{n}(0)\frac{z^{n}}{n!}=-\sum_{n=0}^{\infty }z^{n}. \end{equation*} Therefore \begin{equation*} \sum\limits_{n=0}^{\infty }\sum_{j=0}^{n}\frac{1}{2^{j+2}}y\left( j,\frac{1}{
2}\right) \frac{b_{n-j}(0)}{(n-j)!}z^{n}=-\sum_{n=0}^{\infty }z^{n}. \end{equation*} Comparing the coefficients of $z^{n}$ on both sides of the above equation, we have the following theorem:
\begin{theorem}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{equation*}
\sum_{j=0}^{n}\frac{y\left( j,\frac{1}{2}\right) b_{n-j}(0)}{2^{j}(n-j)!}=-4.
\end{equation*} \end{theorem}
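The theorem above can be tested numerically. In the following Python sketch (our helper names; exact rational arithmetic), the coefficients $b_{n}(0)/n!$ of (\ref{Be-1t}) are obtained by the standard reciprocal-series recurrence applied to $\ln (1+u)/u$, and $y\left( n,\frac{1}{2}\right) $ is taken from (\ref{91b}):

```python
from fractions import Fraction

N = 12
# Coefficients of ln(1+u)/u = sum_{k>=0} (-1)^k u^k / (k+1)
a = [Fraction((-1) ** k, k + 1) for k in range(N + 1)]

# Reciprocal series: b2[n] = b_n(0)/n!, the coefficients of u/ln(1+u)
b2 = [Fraction(1)]
for n in range(1, N + 1):
    b2.append(-sum(a[k] * b2[n - k] for k in range(1, n + 1)))

def y_half(n):
    """y(n, 1/2) = 2^{n+2} * sum_{j=0}^{n} (-1)^{j+1}/(j+1)."""
    return 2 ** (n + 2) * sum(Fraction((-1) ** (j + 1), j + 1) for j in range(n + 1))

# sum_{j=0}^{n} y(j,1/2) b_{n-j}(0) / (2^j (n-j)!) == -4 for every n >= 0
for n in range(N + 1):
    s = sum(y_half(j) * b2[n - j] / 2 ** j for j in range(n + 1))
    assert s == -4
```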
We give a decomposition of the generating function $g_{3}(z)$ as follows: \begin{equation*} g_{3}(z)=F_{2}(z)-F_{3}(z). \end{equation*} Combining the above equation with (\ref{Da}) and (\ref{AlH}), we obtain \begin{equation*} \sum\limits_{n=0}^{\infty }\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}\right) z^{n}=\sum\limits_{n=1}^{\infty }\mathcal{H}_{n}z^{n}-\sum_{n=0}^{\infty }D_{n}\frac{z^{n}}{n!}. \end{equation*} Comparing the coefficients of $z^{n}$ on both sides of the above equation, we have the following theorem: \begin{theorem}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{equation}
y\left( n,\frac{1}{2}\right) =\frac{2^{n+2}}{n!}\left( n!\mathcal{H}
_{n}-D_{n}\right) .
\end{equation} \end{theorem}
\begin{corollary}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{equation}
y\left( n,\frac{1}{2}\right) =\frac{2^{n+2}}{n!}\left( n!\mathcal{H}
_{n}-\sum_{j=0}^{n}B_{j}S_{1}(n,j)\right) .
\end{equation} \end{corollary}
\begin{corollary}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{equation}
y\left( n,\frac{1}{2}\right) =2^{n+2}\left( \mathcal{H}_{n}+\frac{(-1)^{n+1}
}{n+1}\right) . \label{1aYn1}
\end{equation} \end{corollary}
\begin{corollary}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{equation}
y\left( n,\frac{1}{2}\right) =2^{n+2}\mathcal{H}_{n+1}.
\end{equation} \end{corollary}
We set \begin{equation*} F_{2}(z)=zg_{3}(z). \end{equation*} By using the above equation, we have \begin{equation*} \sum\limits_{n=1}^{\infty }\mathcal{H}_{n}z^{n}=\sum\limits_{n=0}^{\infty } \frac{1}{2^{n+2}}y\left( n,\frac{1}{2}\right) z^{n+1}. \end{equation*} After some elementary calculations, we arrive at the following result:
\begin{corollary}
Let $n\in \mathbb{N}$. Then, we have
\begin{equation}
\mathcal{H}_{n}=\frac{1}{2^{n+1}}y\left( n-1,\frac{1}{2}\right) .
\label{aHhy}
\end{equation} \end{corollary}
Noting that with the aid of (\ref{1aYn1}), we give a series representation of the function $F_{2}(z)-g_{3}(z)$ as follows: \begin{equation} F_{2}(z)-g_{3}(z)=\sum\limits_{n=0}^{\infty }\frac{(-1)^{n}}{n+1}z^{n}. \label{1aYn2} \end{equation}
By combining (\ref{1aYn}) with (\ref{1aYn2}), we also arrive at (\ref{aHhy}).
It is time to give the first proof of Theorem \ref{Theorem 5}. The following proof is based on generating functions and the functional equation method.
\begin{proof}[The first proof of Theorem \protect\ref{Theorem 5}]
Substituting $l=1$, $u=\ln \left( 1+\frac{1-\lambda }{\lambda }z\right) $, $
y=0$, and $z\neq 0$\ into (\ref{ApostolBern}), after some elementary
calculations, we obtain
\begin{equation}
\frac{\ln \left( 1+\frac{1-\lambda }{\lambda }z\right) }{\frac{1-\lambda }{
\lambda }z}=\sum_{n=0}^{\infty }B_{n}\frac{(\ln \left( 1+\frac{1-\lambda }{
\lambda }z\right) )^{n}}{n!}. \label{d8}
\end{equation}
Combining the above equation with (\ref{1aG}) and (\ref{Sitirling1}), we get
\begin{equation*}
\sum\limits_{m=0}^{\infty }\left( 1-\lambda \right) ^{m+2}y\left( m,\lambda
\right) z^{m}=\frac{1}{z-1}\sum_{m=0}^{\infty
}\sum_{n=0}^{m}B_{n}S_{1}(m,n)\left( \frac{1-\lambda }{\lambda }\right)
^{m+1}\frac{z^{m}}{m!}
\end{equation*}
since $S_{1}(m,n)=0$ if $n>m$. Assuming that $\left\vert z\right\vert <1$,
we obtain
\begin{equation*}
\sum\limits_{m=0}^{\infty }\left( 1-\lambda \right) ^{m+2}y\left( m,\lambda
\right) z^{m}=-\sum_{v=0}^{\infty }z^{v}\sum_{m=0}^{\infty
}\sum_{n=0}^{m}B_{n}S_{1}(m,n)\left( \frac{1-\lambda }{\lambda }\right)
^{m+1}\frac{z^{m}}{m!}.
\end{equation*}
After some elementary calculations, the above equation yields
\begin{equation*}
\sum\limits_{m=0}^{\infty }\left( 1-\lambda \right) ^{m+2}y\left( m,\lambda
\right) z^{m}=-\sum_{m=0}^{\infty }\sum_{v=0}^{m}\sum_{n=0}^{v}\frac{\left(1-\lambda\right) ^{v+1}}{\lambda ^{v+1}}\frac{B_{n}S_{1}(v,n)}{v!}z^{m}.
\end{equation*}
Now equating the coefficients of $z^{m}$ on both sides of the above
equation, we arrive at the desired result. \end{proof}
Substituting $\lambda =\frac{1}{2}$ and $\lambda =2$ into (\ref{1aGbs}), we arrive at the following corollaries, respectively:
\begin{corollary}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation}
y\left( m,\frac{1}{2}\right) =-2^{m+2}\sum_{v=0}^{m}\sum_{n=0}^{v}\frac{
B_{n}S_{1}(v,n)}{v!}. \label{1aGbs1}
\end{equation} \end{corollary}
\begin{corollary}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation*}
y\left( m,2\right) =\sum_{v=0}^{m}\sum_{n=0}^{v}\frac{\left( -1\right)
^{v-m}B_{n}S_{1}(v,n)}{2^{v+1}v!}.
\end{equation*} \end{corollary}
\begin{remark}
By combining the following well-known identity:
\begin{equation}
\sum_{n=0}^{m}B_{n}S_{1}(m,n)=\frac{(-1)^{m}m!}{m+1} \label{a91}
\end{equation}
(\textit{cf}. \cite[p. 117]{CAC}, \cite{KimDahee}, \cite[p. 45, Exercise 19
(b)]{5Riardon}, \cite[Eq. (20)]{SimsekREVISTA}), with (\ref{1aGbs}), we
arrive at (\ref{ynldef}). Note that there are many other proofs of (\ref{a91}). For example, Kim \cite{KimDahee} gave a proof of (\ref{a91}) by using the $p$-adic
invariant integral on the set of $p$-adic integers. Kim denoted the quantity in
(\ref{a91}) by $D_{n}$, the so-called Daehee numbers.
Riordan \cite[p. 45, Exercise 19 (b)]{5Riardon} denoted it by $(b)_{n}$. \end{remark}
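The identity (\ref{a91}) is easy to verify by machine. In the Python sketch below (our code, not part of the paper), the signed Stirling numbers of the first kind are generated by the triangle recurrence $S_{1}(n,k)=S_{1}(n-1,k-1)-(n-1)S_{1}(n-1,k)$ and the Bernoulli numbers by their classical recurrence:

```python
from fractions import Fraction
from math import comb, factorial

N = 12

# Signed Stirling numbers of the first kind via the triangle recurrence
S1 = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
S1[0][0] = Fraction(1)
for n in range(1, N + 1):
    for k in range(n + 1):
        S1[n][k] = (S1[n - 1][k - 1] if k >= 1 else Fraction(0)) - (n - 1) * S1[n - 1][k]

# Bernoulli numbers via sum_{k=0}^{m} C(m+1,k) B_k = 0 for m >= 1, B_0 = 1
B = [Fraction(1)]
for m in range(1, N + 1):
    B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))

# The identity D_m = sum_n B_n S1(m,n) = (-1)^m m! / (m+1)
for m in range(N + 1):
    lhs = sum(B[n] * S1[m][n] for n in range(m + 1))
    assert lhs == Fraction((-1) ** m * factorial(m), m + 1)
```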
With the aid of the equations (\ref{a91}), (\ref{AlH}), (\ref{Da}) and (\ref{D}), we get some interesting formulas involving the Bernoulli numbers, the Stirling numbers of the first kind, the Daehee numbers, and the alternating harmonic numbers.
Combining (\ref{AlH}) with (\ref{Da}), we get \begin{equation*} F_{3}(u)=\frac{u-1}{u}F_{2}(u) \end{equation*} Using the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }D_{n}\frac{u^{n+1}}{n!}=\sum\limits_{n=1}^{\infty } \mathcal{H}_{n}u^{n+1}-\sum\limits_{n=1}^{\infty }\mathcal{H}_{n}u^{n}. \end{equation*} Therefore \begin{equation*} \sum_{n=1}^{\infty }D_{n-1}\frac{u^{n}}{\left( n-1\right) !} =\sum\limits_{n=2}^{\infty }\mathcal{H}_{n-1}u^{n}-\sum\limits_{n=1}^{\infty }\mathcal{H}_{n}u^{n}. \end{equation*} Comparing the coefficients of $u^{n}$ on both sides of the above equation, we arrive at the following relation: \begin{equation*} D_{n-1}=\left( n-1\right) !\left( \mathcal{H}_{n-1}-\mathcal{H}_{n}\right) . \end{equation*} By the above equation and (\ref{D}), we see that \begin{equation*} \mathcal{H}_{n-1}-\mathcal{H}_{n}=\frac{(-1)^{n-1}}{n}. \end{equation*}
Combining (\ref{AlH}) with (\ref{Da}), we also have \begin{equation*} F_{2}(u)=-\frac{u}{1-u}F_{3}(u) \end{equation*} Using the above equation, we obtain \begin{equation*} \sum\limits_{n=1}^{\infty }\mathcal{H}_{n}u^{n}=-u\sum\limits_{n=0}^{\infty }u^{n}\sum_{n=0}^{\infty }D_{n}\frac{u^{n}}{n!}. \end{equation*} Therefore \begin{equation*} \sum\limits_{n=1}^{\infty }\mathcal{H}_{n}u^{n}=-u\sum\limits_{n=0}^{\infty }\sum_{j=0}^{n}D_{j}\frac{u^{n}}{j!}. \end{equation*} Comparing the coefficients of $u^{n}$ on both sides of the above equation, we arrive at the following relation: \begin{equation} \mathcal{H}_{n}=-\sum_{j=0}^{n-1}\frac{D_{j}}{j!}. \label{HarmonicDaehee} \end{equation} Substituting (\ref{D}) into (\ref{HarmonicDaehee}), and using (\ref{a91}), we arrive at the following theorem:
\begin{theorem}
Let $n\in \mathbb{N}$. Then, we have
\begin{equation*}
\mathcal{H}_{n}=-\sum_{j=0}^{n-1}\sum_{v=0}^{j}\frac{B_{v}S_{1}(j,v)}{j!}.
\end{equation*} \end{theorem}
It is time to give the first proof of Theorem \ref{Theorem3}. The following proof is based on generating functions and the functional equation method.
\begin{proof}[Proof of Theorem \protect\ref{Theorem3}]
Putting $z=\frac{\lambda }{1-\lambda }\left( e^{w}-1\right) $ in (\ref{1aG})
and combining with (\ref{1aG1}), (\ref{Abn}) and (\ref{S2}), we obtain
\begin{equation*}
F_{A}(w,\lambda )=\sum\limits_{n=0}^{\infty }\left( n+1\right) !\lambda
^{n+1}y\left( n,\lambda \right) F_{s2}(w,n+1).
\end{equation*}
By using the above equation, we get
\begin{eqnarray*}
\sum_{m=0}^{\infty }\mathcal{B}_{m}\left( \lambda \right) \frac{w^{m}}{m!}
&=&\sum\limits_{n=0}^{\infty }\left( n+1\right) !\lambda ^{n+1}y\left(
n,\lambda \right) \sum_{m=0}^{\infty }S_{2}(m,n+1)\frac{w^{m}}{m!} \\
&=&\sum_{m=0}^{\infty }\sum\limits_{n=0}^{m}\left( n+1\right) !\lambda
^{n+1}y\left( n,\lambda \right) S_{2}(m,n+1)\frac{w^{m}}{m!}.
\end{eqnarray*}
Here we used the fact that $S_{2}(m,n)=0$ if $n>m$. Equating the
coefficients of $\frac{w^{m}}{m!}$ on both sides of the above equation, we
get the desired result. \end{proof}
Combining (\ref{Abn-1}) with (\ref{ynldef}), we arrive at the following corollaries:
\begin{corollary}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation*}
\mathcal{B}_{m}\left( \lambda \right) =\frac{1}{\lambda -1}
\sum\limits_{n=0}^{m}\sum_{j=0}^{n}\frac{(-1)^{n}\left( n+1\right) !}{j+1}
\left( \frac{\lambda }{\lambda -1}\right) ^{n-j}S_{2}(m,n+1).
\end{equation*} \end{corollary}
\begin{corollary}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation*}
\mathcal{B}_{m}\left( \lambda \right) =\frac{1}{1-\lambda }
\sum\limits_{n=0}^{m}\sum_{j=0}^{n}\frac{n+2}{j+1}\left( \frac{\lambda }{
\lambda -1}\right) ^{n-j}D_{n+1}S_{2}(m,n+1).
\end{equation*} \end{corollary}
\begin{corollary}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation*}
\mathcal{B}_{m}\left( \lambda \right) =\frac{1}{1-\lambda }
\sum\limits_{n=0}^{m}\sum_{j=0}^{n}\sum_{k=0}^{n+1}\frac{n+2}{j+1}\left(
\frac{\lambda }{\lambda -1}\right) ^{n-j}B_{k}S_{1}(n+1,k)S_{2}(m,n+1).
\end{equation*} \end{corollary}
Substituting $z=e^{t}-1$ into (\ref{1aYn}), we get \begin{equation*} \frac{t}{\left( e^{t}-2\right) \left( e^{t}-1\right) }=\sum\limits_{n=0}^{
\infty }\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}\right) \left( e^{t}-1\right) ^{n}. \end{equation*}
Combining the above equation with (\ref{Abn}) and (\ref{S2}), we get \begin{equation*} \frac{1}{2t}\sum_{m=0}^{\infty }\mathcal{B}_{m}\left( \frac{1}{2}\right) \frac{t^{m}}{m!}\sum_{m=0}^{\infty }B_{m}\frac{t^{m}}{m!}=\sum_{m=0}^{\infty }\sum\limits_{n=0}^{m}\frac{n!}{2^{n+2}}y\left( n,\frac{1}{2}\right) S_{2}\left( m,n\right) \frac{t^{m}}{m!} \end{equation*} or \begin{equation*} \frac{1}{2}\sum_{m=0}^{\infty }\mathcal{B}_{m}\left( \frac{1}{2}\right) \frac{t^{m}}{m!}=\sum_{m=0}^{\infty }\sum\limits_{n=0}^{m}\frac{(n+1)!}{
2^{n+2}}y\left( n,\frac{1}{2}\right) S_{2}\left( m,n+1\right) \frac{t^{m}}{m!}. \end{equation*} After making some necessary algebraic calculations in the previous equations and equating the coefficients of $\frac{t^{m}}{m!}$, we arrive at the following theorems:
\begin{theorem}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation}
\mathcal{B}_{m}\left( \frac{1}{2}\right) =\sum\limits_{n=0}^{m}\frac{(n+1)!}{
2^{n+1}}y\left( n,\frac{1}{2}\right) S_{2}\left( m,n+1\right) .
\label{1AAeq}
\end{equation} \end{theorem}
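As a numerical check of (\ref{1AAeq}), the sketch below (our own code and helper names) computes the Apostol-Bernoulli numbers $\mathcal{B}_{m}\left( \frac{1}{2}\right) $ from the recurrence $\theta \sum_{k=0}^{m}\binom{m}{k}\mathcal{B}_{k}\left( \theta \right) -\mathcal{B}_{m}\left( \theta \right) =\delta _{m,1}$, which follows from (\ref{Abn}), and compares them with the right-hand side, taking $y\left( n,\frac{1}{2}\right) $ from (\ref{91b}):

```python
from fractions import Fraction
from math import comb, factorial

N = 10

def S2(n, k):
    """Stirling numbers of the second kind via the explicit alternating sum."""
    return sum((-1) ** (k - c) * comb(k, c) * c ** n for c in range(k + 1)) // factorial(k)

def y_half(n):
    """y(n, 1/2) = 2^{n+2} * sum_{j=0}^{n} (-1)^{j+1}/(j+1)."""
    return 2 ** (n + 2) * sum(Fraction((-1) ** (j + 1), j + 1) for j in range(n + 1))

# Apostol-Bernoulli numbers B_m(theta) at theta = 1/2 from the recurrence
# theta * sum_k C(m,k) B_k(theta) - B_m(theta) = [m == 1]
theta = Fraction(1, 2)
AB = []
for m in range(N + 1):
    rhs = (1 if m == 1 else 0) - theta * sum(comb(m, k) * AB[k] for k in range(m))
    AB.append(rhs / (theta - 1))

for m in range(N + 1):
    assert AB[m] == sum(Fraction(factorial(n + 1), 2 ** (n + 1)) * y_half(n) * S2(m, n + 1)
                        for n in range(m + 1))
```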
\begin{theorem}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation*}
\sum_{n=0}^{m}\binom{m}{n}\mathcal{B}_{n}\left( \frac{1}{2}\right)
B_{m-n}=m\sum\limits_{n=0}^{m-1}\frac{n!}{2^{n+1}}y\left( n,\frac{1}{2}
\right) S_{2}\left( m-1,n\right) .
\end{equation*} \end{theorem}
Substituting $u=e^{t}-1$ into (\ref{AlH}), we get \begin{equation*} \frac{1}{2}\sum\limits_{m=0}^{\infty }\mathcal{B}_{m}\left( \frac{1}{2} \right) \frac{t^{m}}{m!}=\sum\limits_{m=0}^{\infty }\sum_{n=0}^{m}\frac{n!}{
m!}\mathcal{H}_{n}S_{2}(m,n)t^{m}. \end{equation*} Combining the above equation with (\ref{1AAeq}), we arrive at the following theorem:
\begin{theorem}
Let $m\in \mathbb{N}_0$. Then, we have
\begin{equation*}
\sum_{n=0}^{m}n!\mathcal{H}_{n}S_{2}(m,n)=\sum\limits_{n=0}^{m}\frac{(n+1)!}{2^{n+2}}
y\left( n,\frac{1}{2}\right) S_{2}\left( m,n+1\right) .
\end{equation*} \end{theorem}
Combining (\ref{1aG1}), (\ref{Da}) and (\ref{dn}), we get the following functional equation: \begin{equation*} \frac{\lambda-1 }{\lambda}F_{3}\left( \frac{1-\lambda }{\lambda }u\right) F_{d}(u)=e^{-u}G(u,\lambda ). \end{equation*}
By using the above equation, we obtain \begin{equation*} \frac{\lambda-1 }{\lambda}\sum_{m=0}^{\infty }d_{m}\frac{u^{m}}{m!} \sum_{n=0}^{\infty }\left( \frac{1-\lambda }{\lambda }\right) ^{n}D_{n}\frac{
u^{n}}{n!}=\sum\limits_{n=0}^{\infty }\sum\limits_{j=0}^{n}\frac{(-1)^{n-j}}{
(n-j)!}\left( 1-\lambda \right) ^{j+2}y\left( j,\lambda \right) u^{n}. \end{equation*} Therefore \begin{equation*} -\sum_{n=0}^{\infty }\sum\limits_{m=0}^{n}\binom{n}{m}d_{m}\left( \frac{
1-\lambda }{\lambda }\right) ^{n-m+1}D_{n-m}\frac{u^{n}}{n!} =\sum\limits_{n=0}^{\infty }\sum\limits_{j=0}^{n}\frac{(-1)^{n-j}}{(n-j)!} \left( 1-\lambda \right) ^{j+2}y\left( j,\lambda \right) u^{n}. \end{equation*} Comparing the coefficients of $u^{n}$ on both sides of the above equation, we have the following theorem: \begin{theorem}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{equation*}
-\sum\limits_{m=0}^{n}\binom{n}{m}\left( \frac{1-\lambda }{\lambda }\right)
^{n-m+1}D_{n-m}d_{m}=\sum\limits_{j=0}^{n}\frac{(-1)^{n-j}}{(n-j)!}\left(
1-\lambda \right) ^{j+2}y\left( j,\lambda \right) .
\end{equation*} \end{theorem}
Using (\ref{D}) and (\ref{dm}) in the above equation, we have \begin{eqnarray*}
&&\sum\limits_{j=0}^{n}\frac{(-1)^{n-j}}{(n-j)!}\left( 1-\lambda \right)
^{j+2}y\left( j,\lambda \right) \\
&=&\sum\limits_{m=0}^{n}\sum\limits_{j=0}^{m}(-1)^{n-m+j+1}\binom{n}{m}
\binom{m}{j}\left( \frac{1-\lambda }{\lambda }\right) ^{n-m+1}\frac{\left(
n-m\right) !\left( m-j\right) !}{n-m+1}. \end{eqnarray*} After some elementary calculations, we also arrive at the following corollary: \begin{corollary}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{eqnarray*}
\sum\limits_{j=0}^{n}\frac{(-1)^{n-j}}{(n-j)!}\left( 1-\lambda \right)
^{j+2}y\left( j,\lambda \right) =\sum\limits_{m=0}^{n}\sum\limits_{j=0}^{m}(-1)^{n-m+j+1}\left( \frac{
1-\lambda }{\lambda }\right) ^{n-m+1}\frac{n!}{\left( n-m+1\right) j!}.
\end{eqnarray*} \end{corollary}
\section{Computation algorithm for the numbers $y\left( n,\protect\lambda \right)$} \label{Section4}
In this section, with the aid of (\ref{1aGbs}), (\ref{s1C}), and the definition of the Bernoulli numbers, we present a computation algorithm (Algorithm \ref{alg:yCalculation} with a procedure called \texttt{COMPUTE$\char`_$y$\char`_$NUMBER}) for the numbers $y\left(m,\lambda \right) $.
\begin{algorithm}[H]
\caption{Let $m$ be nonnegative integer and $\lambda\in \mathbb{C}$. This algorithm includes a procedure called \texttt{COMPUTE$\char`_$y$\char`_$NUMBER} which returns the numbers $y\left( m,\lambda \right) $.}
\label{alg:yCalculation}
\begin{algorithmic}
\Procedure{\texttt{\textbf{COMPUTE$\char`_$y$\char`_$NUMBER}}}{$m$: nonnegative integer, $\lambda$}
\State {$\textbf{Local variables: } v,n,y$}
\State {$v,n,y\leftarrow 0$}
\ForAll {$v$ in $\{0,1,2,\dots,m\}$}
\ForAll {$n$ in $\{0,1,2,\dots,v\}$}
\State $y\leftarrow y+\bigg(\Big($\texttt{Power}$\left(-1, v-m\right)*$\texttt{Power}$\left(\lambda-1, v-m-1\right)*$\texttt{BERNOULLI\_NUM}$\left(n\right)$\par\quad\qquad\qquad\(\hookrightarrow\)\enspace$*$\texttt{STIRLING\_NUM\_FIRST}$\left(v,n\right)\Big)/\Big($\texttt{Power}$\left(\lambda, v+1\right)*$\texttt{Factorial}$\left(v\right)\Big)\bigg)$
\EndFor
\EndFor
\State \textbf{return} {$y$}
\EndProcedure
\end{algorithmic} \end{algorithm}
\begin{remark}
In Algorithm \ref{alg:yCalculation}, the procedure \texttt{BERNOULLI$\char`_$NUM$\left(n\right)$} corresponds to the procedure which gives the $n$-th Bernoulli number. In addition, the procedure \texttt{STIRLING$\char`_$NUM$\char`_$FIRST} corresponds to the procedure which computes the Stirling numbers of the first kind using the formula given in (\ref{s1C}). For details about this procedure, the interested readers may refer to the paper \cite{KucukogluAADM2019}. \end{remark}
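As a cross-check, Algorithm \ref{alg:yCalculation} can be realized directly in Python. The following is a minimal sketch using exact rational arithmetic; it assumes the convention $B_{1}=-1/2$ for the Bernoulli numbers and the signed Stirling numbers of the first kind, and the helper routines here stand in for \texttt{BERNOULLI\_NUM} and \texttt{STIRLING\_NUM\_FIRST}:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_numbers(n):
    # B_0, ..., B_n with the convention B_1 = -1/2
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def stirling1(n_max):
    # table of signed Stirling numbers of the first kind S_1(n, k),
    # via S_1(i, j) = S_1(i-1, j-1) - (i-1) * S_1(i-1, j)
    S = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    S[0][0] = 1
    for i in range(1, n_max + 1):
        for j in range(1, i + 1):
            S[i][j] = S[i - 1][j - 1] - (i - 1) * S[i - 1][j]
    return S

def compute_y_number(m, lam):
    # direct transcription of the double sum in Algorithm 1 (lam != 0, 1)
    lam = Fraction(lam)
    B, S = bernoulli_numbers(m), stirling1(m)
    y = Fraction(0)
    for v in range(m + 1):
        for n in range(v + 1):
            y += (Fraction(-1) ** (v - m) * (lam - 1) ** (v - m - 1)
                  * B[n] * S[v][n]) / (lam ** (v + 1) * factorial(v))
    return y
```

For instance, \texttt{compute\_y\_number(1, 2)} returns $-5/8$, in agreement with the closed form of $y(1,\lambda)$ listed below at $\lambda=2$.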
By using the computation algorithm (Algorithm \ref{alg:yCalculation}), we give some values of the numbers $y\left(m,\lambda \right) $ as follows: \begin{eqnarray*}
y\left(0,\lambda \right)&=&\frac{1}{\lambda\left(\lambda-1 \right)},\\
y\left(1,\lambda \right)&=&\frac{-3\lambda+1}{2\lambda^2\left(\lambda-1 \right)^2},\\
y\left(2,\lambda \right)&=&\frac{11\lambda^2 -7\lambda +2}{6\lambda^3\left(\lambda-1 \right)^3},\\
y\left(3,\lambda \right)&=&\frac{-25\lambda^3 +23\lambda^2 -13\lambda +3}{12\lambda^4\left(\lambda-1 \right)^4},\\
y\left(4,\lambda \right)&=&\frac{137\lambda^4 -163\lambda^3 + 137\lambda^2 -63\lambda +12}{60\lambda^5\left(\lambda-1 \right)^5}, \end{eqnarray*} and so on. Observe that $y\left( n,\lambda \right) $ is a rational function of the variable $\lambda $. The sequence of the leading coefficients of the polynomials in the numerators of the numbers $y\left( n,\lambda \right) $ is given as follows:
\begin{equation*} 1,-3,11,-25,137,-147,1089,-2283,7129,-7381,83711,\ldots \end{equation*} Taking the absolute value of each term of the above sequence, we get the following well-known sequence: \begin{equation*} \left( a(n)\right) _{n=1}^{\infty }=\left\{ 1,3,11,25,137,147,1089,2283,7129,7381,83711,\ldots \right\} . \end{equation*} The sequence $a(n)$ is given by OEIS: A025529 with the following explicit formula (\textit{cf}. \cite{OEIS}): \begin{equation*} a(n)=\operatorname{lcm}(1,2,3,\ldots ,n)H_{n}, \end{equation*} where $n\in \mathbb{N}$ and $H_{n}$ denotes the $n$-th harmonic number.
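This explicit formula is easy to verify numerically; the following sketch (Python standard library only, using \texttt{math.lcm} of Python 3.9+) reproduces the first terms of A025529:

```python
from fractions import Fraction
from math import lcm

def a(n):
    # a(n) = lcm(1, 2, ..., n) * H_n, where H_n is the n-th harmonic number
    H_n = sum(Fraction(1, k) for k in range(1, n + 1))
    return lcm(*range(1, n + 1)) * H_n  # an integer for every n >= 1

# [a(n) for n in range(1, 8)] -> [1, 3, 11, 25, 137, 147, 1089]
```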
\section{Differential equations of the generating functions and their
applications} \label{Section5}
In this section, we give partial derivative equations of the generating functions. By applying these equations, we give many identities and many novel recurrence relations involving the numbers $y(n,\lambda )$, $ \boldsymbol{l}\left( n,0\right) $, and the special finite sums of (inverse) binomial coefficients.
Differentiating equation (\ref{1aG1}) with respect to $z$, we obtain the following partial derivative equation: \begin{equation} \left( z^{2}-z\right) \frac{\partial }{\partial z}\left\{ G\left( z,\lambda \right) \right\} +\left( 2z-1\right) G\left( z,\lambda \right) =\frac{
1-\lambda }{\lambda +(1-\lambda )z}. \label{1aGd1} \end{equation} Differentiating equations (\ref{1aG2}), (\ref{1aG3}), and (\ref{1aYn}) with respect to $z$, we obtain the following partial derivative equations, respectively: \begin{equation} \left( z^{2}-z\right) \frac{d}{dz}\left\{ g_{1}(z)\right\} +\left( 2z-1\right) g_{1}(z)=\frac{2}{2z-1}, \label{1aGd2} \end{equation} \begin{equation} \left( z^{2}-z\right) \frac{d}{dz}\left\{ g_{2}(z)\right\} +\left( 2z-1\right) g_{2}(z)=\frac{1}{z-2}, \label{1aGd3} \end{equation} and \begin{equation} \left( z^{2}-z\right) \frac{d}{dz}\left\{ g_{3}(z)\right\} +\left( 2z-1\right) g_{3}(z)=\frac{1}{z+1}. \label{1aGd4} \end{equation}
\subsection{Recurrence relations derived from PDEs for the generating
functions}
Here, using equations (\ref{1aGd1})-(\ref{1aGd4}), we give recurrence relations and identities involving the numbers $y(n,\lambda )$, $\boldsymbol{
l}\left( n,0\right) $, and the special finite sums of (inverse) binomial coefficients.
\begin{theorem}
The numbers $y(n,\lambda )$ satisfy the following derivative equations:
\begin{equation}
(\lambda -1)\frac{d}{d\lambda }\left\{ y(n,\lambda )\right\}
+(n+2)y(n,\lambda )=(-1)^{n+1}\sum\limits_{k=0}^{n}\frac{(\lambda -1)^{k-n-1}
}{\lambda ^{k+2}}. \label{ynldefQED}
\end{equation}
and
\begin{equation*}
\frac{d}{d\lambda }\left\{ y(n,\lambda )\right\} +\frac{n+2}{\lambda -1}
y(n,\lambda )=\frac{(-1)^{n}}{\lambda }\left( 1-\left( \frac{\lambda }{
\lambda -1}\right) ^{n+1}\right) .
\end{equation*} \end{theorem}
\begin{proof}
Differentiating equation (\ref{1aG1}) with respect to $\lambda $, we obtain
\begin{eqnarray*}
\sum\limits_{n=0}^{\infty }(1-\lambda )^{n+1}\left( (1-\lambda )^{n+1}
\frac{d}{d\lambda }\left\{ y(n,\lambda )\right\} -(n+2)y(n,\lambda )\right)
z^{n} =\frac{\partial }{\partial \lambda }\left\{ \frac{\ln \left( 1-\frac{
\lambda -1}{\lambda }z\right) }{z(z-1)}\right\} .
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\lambda ^{2}\sum\limits_{n=0}^{\infty }(1-\lambda )^{n+1}\left( (1-\lambda
)^{n+1}\frac{d}{d\lambda }\left\{ y(n,\lambda )\right\} -(n+2)y(n,\lambda
)\right) z^{n} =\frac{1}{\left( 1-\frac{\lambda -1}{\lambda }z\right) \left( 1-z\right) }.
\end{eqnarray*}
Assuming that $\left\vert \frac{\lambda -1}{\lambda }z\right\vert <1$ and $|z|<1$, we obtain
\begin{eqnarray*}
\lambda ^{2}\sum\limits_{n=0}^{\infty }(1-\lambda )^{n+1}\left( (1-\lambda
)^{n+1}\frac{d}{d\lambda }\left\{ y(n,\lambda )\right\} -(n+2)y(n,\lambda
)\right) z^{n} =\sum\limits_{n=0}^{\infty }\sum\limits_{k=0}^{n}\frac{(\lambda -1)^{k}}{
\lambda ^{k}}z^{n}.
\end{eqnarray*}
Comparing the coefficients of $z^{n}$ on both sides of the above equation,
we arrive at the desired result. \end{proof}
Combining (\ref{1aG}) with (\ref{1aGd1}), we get \begin{eqnarray*}
&&\left( z^{2}-z\right) \sum\limits_{n=1}^{\infty }n\left( 1-\lambda \right)
^{n+2}y\left( n,\lambda \right) z^{n-1}+\left( 2z-1\right)
\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right) ^{n+2}y\left( n,\lambda
\right) z^{n} \\
&=&\sum\limits_{n=0}^{\infty }(-1)^{n}\left( \frac{1-\lambda }{\lambda }
\right) ^{n+1}z^{n}. \end{eqnarray*} After some calculations in the above equation, and then equating the coefficients of $z^{n}$ on both sides of the final equation, we get \begin{equation} y\left( n-1,\lambda \right) +(\lambda -1)y\left( n,\lambda \right) =\frac{
(-1)^{n}}{(n+1)\lambda ^{n+1}}. \label{1aGa} \end{equation} Combining the above equation with (\ref{a91}), we arrive at a recurrence relation for the numbers $y\left( n,\lambda \right) $ as in the following theorem: \begin{theorem}
Let $n\in \mathbb{N}$. Then we have
\begin{equation*}
y\left( n-1,\lambda \right) +(\lambda -1)y\left( n,\lambda \right) =\frac{1}{
\lambda ^{n+1}n!}\sum_{j=0}^{n}B_{j}S_{1}(n,j).
\end{equation*} \end{theorem}
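The recurrence in this theorem can be checked exactly for small $n$ from the closed forms of $y(0,\lambda),\ldots,y(3,\lambda)$ listed in Section \ref{Section4}. A Python sketch, again assuming $B_{1}=-1/2$ and the signed Stirling numbers $S_{1}(n,j)$:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_numbers(n):
    B = [Fraction(1)]  # convention: B_1 = -1/2
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def stirling1(n_max):
    # signed Stirling numbers of the first kind S_1(n, k)
    S = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    S[0][0] = 1
    for i in range(1, n_max + 1):
        for j in range(1, i + 1):
            S[i][j] = S[i - 1][j - 1] - (i - 1) * S[i - 1][j]
    return S

def y_closed(n, lam):
    # closed forms of y(n, lam) for n = 0..3 from Section 4
    lam = Fraction(lam)
    num = [Fraction(1), -3 * lam + 1, 11 * lam**2 - 7 * lam + 2,
           -25 * lam**3 + 23 * lam**2 - 13 * lam + 3][n]
    return num / ([1, 2, 6, 12][n] * lam ** (n + 1) * (lam - 1) ** (n + 1))

def check(n, lam):
    # y(n-1, lam) + (lam-1) y(n, lam) == (1 / (lam^(n+1) n!)) sum_j B_j S_1(n, j)
    lam = Fraction(lam)
    B, S = bernoulli_numbers(n), stirling1(n)
    rhs = sum(B[j] * S[n][j] for j in range(n + 1)) / (lam ** (n + 1) * factorial(n))
    return y_closed(n - 1, lam) + (lam - 1) * y_closed(n, lam) == rhs
```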
\begin{remark}
Substituting $x=0$ into (\ref{A1l1}) and using the following well-known
identity
\begin{equation*}
L_{n}(0)=\boldsymbol{l}\left( n,0\right) =\frac{1}{n+1},
\end{equation*}
we also arrive at the equation (\ref{1aGa}). \end{remark}
\begin{remark}
By using (\ref{1aG1}) and (\ref{1aG}), we get
\begin{equation*}
\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right) ^{n+2}y\left( n,\lambda
\right) z^{n+1}-\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right)
^{n+2}y\left( n,\lambda \right) z^{n}=\sum\limits_{n=0}^{\infty }\left(
\frac{1-\lambda }{\lambda }\right) ^{n+1}\frac{z^{n}}{n+1}.
\end{equation*}
Equating the coefficients of $z^{n}$ on both sides of the above equation, we
also arrive at the equation (\ref{1aGa}). \end{remark}
Substituting $\lambda =2$ into (\ref{1aGa}), we have \begin{equation} y\left( n-1,2\right) +y\left( n,2\right) =\frac{(-1)^{n}}{\left( n+1\right)
2^{n+1}}. \label{A1l2} \end{equation} Combining the above equation with the following well-known identity \begin{equation*} \frac{(-1)^{n}}{2^{n}}n!=\sum_{j=0}^{n}E_{j}S_{1}(n,j) \end{equation*} (\textit{cf}. \cite{Dkim}), we arrive at the following result:
\begin{corollary}
Let $n\in \mathbb{N}$. Then we have
\begin{equation*}
y\left( n-1,2\right) +y\left( n,2\right) =\frac{1}{2}\sum_{j=0}^{n}\frac{
E_{j}S_{1}(n,j)}{\left( n+1\right) !}.
\end{equation*} \end{corollary}
Combining (\ref{1aGd4}) with (\ref{1aYn}), and assuming that $\left\vert z\right\vert <1$, we get \begin{eqnarray*}
\left( z^{2}-z\right) \sum\limits_{n=1}^{\infty }\frac{1}{2^{n+2}}y\left( n,
\frac{1}{2}\right) z^{n-1} &=&\sum\limits_{n=0}^{\infty
}(-1)^{n}z^{n}+\left( 1-2z\right) \\
&&\times \sum\limits_{n=0}^{\infty }\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}
\right) z^{n}. \end{eqnarray*} After some elementary calculations in the above equation, we obtain \begin{eqnarray*}
&&\sum\limits_{n=1}^{\infty }\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}\right)
z^{n+1}-\sum\limits_{n=1}^{\infty }\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}
\right) z^{n} \\
&=&\sum\limits_{n=0}^{\infty }(-1)^{n}z^{n}+\sum\limits_{n=0}^{\infty }\frac{
1}{2^{n+2}}y\left( n,\frac{1}{2}\right) z^{n}-2\sum\limits_{n=0}^{\infty }
\frac{1}{2^{n+2}}y\left( n,\frac{1}{2}\right) z^{n+1}. \end{eqnarray*} Now equating the coefficients of $z^{n}$ on both sides of the above equation, we arrive at the following corollary: \begin{corollary}
Let $n\in \mathbb{N}$. Then we have
\begin{equation}
2y\left( n-1,\frac{1}{2}\right) -y\left( n,\frac{1}{2}\right) =(-1)^{n}\frac{
2^{n+2}}{n+1}. \label{1aGd4a}
\end{equation} \end{corollary}
\begin{remark}
Substituting $\lambda =\frac{1}{2}$ into (\ref{1aGa}), we also arrive at the equations (\ref{1aGd4}) and (\ref{1aGd4a}). Substituting $\lambda =2$ into (\ref{1aGa}), we also arrive at the equation (\ref{A1l2}). \end{remark}
Substituting $\lambda =-1$ into (\ref{1aGa}), we get the following corollary:
\begin{corollary}
Let $n\in \mathbb{N}$. Then we have
\begin{equation}
y\left( n-1,-1\right) -2y\left( n,-1\right) =-\frac{1}{n+1}. \label{1aGd1a}
\end{equation} \end{corollary}
Combining (\ref{1aGd1a}) with the following well-known formula \begin{equation*} y\left( n,-1\right) =\frac{1}{2(n+1)}\sum\limits_{j=0}^{n}\frac{1}{\binom{n
}{j}} \end{equation*} (\textit{cf}. \cite[Eq. (6.5)]{SimsekMTJPAM2020}), we get the following combinatorial sum:
\begin{corollary}
Let $n\in \mathbb{N}$. Then, we have
\begin{equation}
\sum\limits_{j=0}^{n-1}\frac{1}{\binom{n-1}{j}}=\frac{2n}{n+1}
\sum\limits_{j=0}^{n-1}\frac{1}{\binom{n}{j}}. \label{1aGf1}
\end{equation} \end{corollary}
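A quick exact verification of the combinatorial sum (\ref{1aGf1}) with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    # sum_{j=0}^{n-1} 1 / C(n-1, j)
    return sum(Fraction(1, comb(n - 1, j)) for j in range(n))

def rhs(n):
    # (2n / (n+1)) * sum_{j=0}^{n-1} 1 / C(n, j)
    return Fraction(2 * n, n + 1) * sum(Fraction(1, comb(n, j)) for j in range(n))

# lhs(n) == rhs(n) for every n >= 1; e.g. n = 2 gives 2 on both sides
```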
Multiplying both sides of the equation (\ref{1aGd1a}) by $2(n+1)$, we arrive at the following result:
\begin{corollary}
Let $n\in \mathbb{N}$. Then we have
\begin{equation*}
2(n+1)y\left( n-1,-1\right) -4(n+1)y\left( n,-1\right) =-2.
\end{equation*} \end{corollary}
\section{Decomposition of the multiple Hurwitz zeta functions with the help
of the numbers $y\left( n,\protect\lambda \right) $} \label{Section6}
In \cite{SimsekREVISTA}, by the aid of the numbers $y\left( n,\lambda \right) $, we gave decomposition of the multiple Hurwitz zeta functions in terms of the Bernoulli polynomials of higher order. In this section, by using the same method in \cite{SimsekREVISTA}, we give decomposition of the multiple alternating Hurwitz zeta functions in terms of the Bernoulli polynomials of higher order, the Euler numbers and polynomials of higher order, and the Stirling numbers of the first kind. By combining these decomposition relations, we derive some formulas involving these numbers and polynomials.
It is time to give the proof of Theorem \ref{Theorem 4} as follows:
\begin{proof}[Proof of Theorem \protect\ref{Theorem 4}]
Substituting $\lambda =-e^{-t}$ into (\ref{ynldef}), we get
\begin{equation}
y\left( n,-\frac{1}{e^{t}}\right) =\sum_{j=0}^{n}\frac{e^{t(n+2)}}{
(j+1)\left( e^{t}+1\right) ^{n+1-j}}. \label{aH3a}
\end{equation}
Combining (\ref{aH3a}) with (\ref{Aeuler}), we get
\begin{equation}
y\left( n,-\frac{1}{e^{t}}\right) =\sum_{m=0}^{\infty }\sum_{j=0}^{n}\frac{
E_{m}^{(n+1-j)}\left( n+2\right) }{(j+1)2^{n+1-j}}\frac{t^{m}}{m!}.
\label{aH3}
\end{equation}
Using (\ref{aH3a}), we also get
\begin{equation}
y\left( n,-\frac{1}{e^{t}}\right) =\sum_{j=0}^{n}\frac{1}{(j+1)}
\sum_{v=0}^{\infty }(-1)^{v}\binom{v+n-j}{v}e^{t(v+n+2)}, \label{aH3S}
\end{equation}
where $\left\vert e^{t}\right\vert <1$. Using the Taylor series of $e^{tx}$ in (\ref{aH3S}) yields
\begin{equation}
y\left( n,-\frac{1}{e^{t}}\right) =\sum_{j=0}^{n}\frac{1}{j+1}
\sum_{v=0}^{\infty }\sum_{m=0}^{\infty }(-1)^{v}\binom{v+n-j}{v}(v+n+2)^{m}
\frac{t^{m}}{m!}. \label{aH4}
\end{equation}
After making the required calculations in (\ref{aH3}) and (\ref{aH4}), and equating the coefficients of $\frac{t^{m}}{m!}$ on both sides of these equations, we obtain
\begin{equation}
\sum_{j=0}^{n}\frac{1}{j+1}\left( \sum_{v=0}^{\infty }(-1)^{v}\binom{v+n-j}{v
}(v+n+2)^{m}-\frac{E_{m}^{(n+1-j)}\left( n+2\right) }{2^{n+1-j}}\right) =0.
\label{ah5}
\end{equation}
Therefore, the proof is complete. \end{proof}
\begin{remark}
The well-known multiple Hurwitz-Euler eta function (or the multiple
alternating Hurwitz function), which is given by the equation (\ref{ah5zE}),
can also be represented as follows:
\begin{equation*}
\zeta _{E}^{(d)}\left( s,x\right) =2^{d}\sum_{v_{1},v_{2},\ldots
,v_{d}=0}^{\infty }\frac{(-1)^{v_{1}+v_{2}+\cdots +v_{d}}}{\left(
x+v_{1}+v_{2}+\cdots +v_{d}\right) ^{s}},
\end{equation*}
where $\operatorname{Re}(s)>0$, $d\in \mathbb{N}$ and $x>0$ (\textit{cf}. \cite
{Cangul}, \cite{ChoiSrivastavaTJM}, \cite{Ozden},
\cite{SimsekJdea}, \cite{SrivatavaChoi}). \end{remark}
Combining the equation (\ref{ah5}) with the equation (\ref{ah5zE}), we also arrive at the following theorem.
\begin{theorem}
Let $m,n\in \mathbb{N}_{0}$. Then we have
\begin{eqnarray}
&&\sum_{j=0}^{n}\frac{1}{\left( j+1\right) 2^{n+1-j}}\zeta
_{E}^{(n+1-j)}\left( -m,n+2\right) \label{1AAe} \\
&=&\sum_{j=0}^{n}\sum_{l=0}^{m}\sum_{\underset{l_{1}+l_{2}+\cdots
+l_{n+1-j}=l}{l_{1},l_{2},\cdots ,l_{n+1-j}=0}}^{l}\binom{m}{l}\frac{\left(
n+2\right) ^{m-l}l!E_{l_{1}}E_{l_{2}}\cdots E_{l_{n+1-j}}}{
l_{1}!l_{2}!\cdots l_{n+1-j}!\left( j+1\right) 2^{n+1-j}}. \notag
\end{eqnarray} \end{theorem}
Putting $n=0$ in (\ref{ah5z}), we get \begin{equation*} E_{m+1}(2)=\zeta _{E}(-m,2), \end{equation*} where \begin{equation*} \zeta _{E}(s,x)=\zeta _{E}^{(1)}(s,x)=2\sum\limits_{n=0}^{\infty }\frac{
(-1)^{n}}{(n+x)^{s}} \end{equation*} (\textit{cf}. \cite{Cangul}, \cite{ChoiSrivastavaTJM}, \cite {Ozden}, \cite{SimsekJdea}, \cite{SrivatavaChoi}).
Substituting $n=1$ into (\ref{ah5z}), we obtain \begin{equation*} \zeta _{E}^{(2)}(-m,3)-\zeta _{E}(-m,3)=E_{m}^{(2)}\left( 3\right) -E_{m}\left( 3\right) . \end{equation*} Substituting $n=2$ into (\ref{ah5z}), we also obtain \begin{eqnarray*}
3\zeta _{E}^{(3)}(-m,4)+3\zeta _{E}^{(2)}(-m,4)+8\zeta _{E}(-m,3)=3E_{m}^{(3)}\left( 4\right) +3E_{m}^{(2)}\left( 4\right) +8E_{m}\left(
4\right) . \end{eqnarray*} Substituting $n=3$ into (\ref{aHh}), we also obtain \begin{eqnarray*}
&&15\zeta _{E}^{(5)}(-m,5)+15\zeta _{E}^{(4)}(-m,5)+10\zeta
_{E}^{(3)}(-m,5)+10\zeta _{E}^{(2)}(-m,5)+24\zeta _{E}(-m,5) \\
&=&15E_{m}^{(5)}\left( 5\right) +15E_{m}^{(4)}\left( 5\right)
+10E_{m}^{(3)}\left( 5\right) +10E_{m}^{(2)}\left( 5\right) +24E_{m}\left(
5\right) . \end{eqnarray*}
Since \begin{equation*} \binom{v+n-j}{v}=\binom{v+n-j}{n-j}, \end{equation*} by combining the following well-known identity \begin{equation*} \binom{v+n-j}{n-j}=\frac{1}{(n-j)!}\sum_{c=0}^{n-j}\left\vert S_{1}(n-j,c+1)\right\vert v^{c}, \end{equation*} (\textit{cf}. \cite{Choi1a}, \cite{comtet}, \cite{SrivatavaChoi}) with the equation (\ref{aH4}), we get \begin{equation*} y\left( n,-\frac{1}{e^{t}}\right) =\sum_{j=0}^{n}\sum_{c=0}^{n-j}\sum_{v=0}^{\infty }\sum_{m=0}^{\infty }\frac{
(-1)^{v}(v+n+2)^{m}v^{c}}{\left( j+1\right) \left( n-j\right) !}\left\vert S_{1}(n-j,c+1)\right\vert \frac{t^{m}}{m!}. \end{equation*}
In \cite{SimsekREVISTA}, we gave the following results involving the multiple Hurwitz zeta functions and the Bernoulli polynomials of higher order: \begin{equation} y\left( n,\frac{1}{e^{t}}\right) =\sum_{j=0}^{n}\frac{(-1)^{n}}{(j+1)}\zeta _{n+1-j}\left( -m,n+2\right) \frac{t^{m}}{m!}, \label{aH3a1} \end{equation} where $\zeta _{d}\left( s,x\right) $ denotes the multiple Hurwitz zeta function, for $d\in \mathbb{N}$, which is defined by \begin{equation*} \zeta _{d}\left( s,x\right) =\sum_{v=0}^{\infty }\binom{v+d-1}{v}\frac{1}{
\left( x+v\right) ^{s}}=\sum_{v_{1}=0}^{\infty }\sum_{v_{2}=0}^{\infty }\cdots \sum_{v_{d}=0}^{\infty }\frac{1}{\left( x+v_{1}+v_{2}+\cdots
+v_{d}\right) ^{s}}, \end{equation*} where $\Re (s)>d$. When $d=1$, we have the Hurwitz zeta function \begin{equation*} \zeta (s,x)=\zeta _{1}\left( s,x\right) =\sum_{v=0}^{\infty }\frac{1}{
(x+v)^{s}}, \end{equation*} (\textit{cf}. \cite{Choi1a}, \cite{MSKim1a}, \cite{SimsekJNT}-\cite{Simsek11a}, \cite{SrivatavaChoi}).
It is clear that \begin{equation} \zeta _{d}\left( -m,x\right) =\frac{(-1)^{d}m!B_{m+d}^{(d)}(x)}{(d+m)!} \label{aH3a4a} \end{equation} and \begin{equation} \zeta \left( -m,x\right) =\zeta _{1}\left( -m,x\right) =-\frac{B_{m+1}(x)}{
m+1} \label{aH3a4} \end{equation} where $m\in \mathbb{N}_{0}$ (\textit{cf}. \cite{Choi1a}, \cite{MSKim1a}, \cite{SimsekJNT}-\cite{Simsek11a}, \cite {SrivatavaChoi}).
For $m,n\in \mathbb{N}$, in \cite{SimsekREVISTA} we also derived \begin{equation} \sum_{j=0}^{n}\frac{1}{j+1}\left( (-1)^{n}\zeta _{n+1-j}(-m,n+2)+\frac{
(-1)^{j}B_{m+n+1-j}^{(n+1-j)}\left( n+2\right) }{\binom{m+n+1-j}{n+1-j}
(n+1-j)!}\right) =0. \label{aHh} \end{equation} Substituting $\lambda =-e^{-2t}$ into (\ref{ynldef}), we get \begin{equation} y\left( n,-\frac{1}{e^{2t}}\right) =\sum_{j=0}^{n}\frac{(-1)^{j-1}}{(j+1)} \frac{e^{2t(n+2)}}{\left( e^{2t}-1\right) ^{n+1-j}}. \label{aH3eb} \end{equation} Combining (\ref{aH3eb}) with (\ref{ApostolBern}) and (\ref{Aeuler}), we get \begin{equation} y\left( n,-\frac{1}{e^{2t}}\right) =\sum_{j=0}^{n}\frac{(-1)^{j-1}}{
(j+1)2^{n+1-j}}\sum_{m=0}^{\infty }\frac{(-1)^{j}B_{m+n+1-j}^{(n+1-j)}\left(
n+2\right) }{\binom{m+n+1-j}{n+1-j}(n+1-j)!}\frac{t^{m}}{m!}. \label{aHeb} \end{equation} Combining (\ref{aH3eb}) with (\ref{ApostolBern}), we also get \begin{equation} y\left( n,-\frac{1}{e^{2t}}\right) =\sum_{j=0}^{n}\sum_{m=0}^{\infty }\sum_{c=0}^{m}\frac{(-1)^{j-1}\binom{m}{c}B_{c+n+1-j}^{(n+1-j)}\left(
2n+4\right) E_{m-c}^{(n+1-j)}\left( n+2\right) }{\binom{c+n+1-j}{n+1-j}
(n+1-j)!(j+1)2^{n+1-j}}\frac{t^{m}}{m!}. \label{aH3Be} \end{equation}
Combining (\ref{aHeb}) with (\ref{aH3Be}), we arrive at the following theorem:
\begin{theorem}
Let $n\in \mathbb{N}_0$. Then, we have
\begin{eqnarray*}
&&\sum_{j=0}^{n}\frac{(-1)^{j-1}}{(n+1-j)!(j+1)2^{n+1-j}}\frac{
B_{m+n+1-j}^{(n+1-j)}\left( n+2\right) }{\binom{m+n+1-j}{n+1-j}} \\
&=&\sum_{j=0}^{n}\frac{(-1)^{j-1}}{(n+1-j)!(j+1)2^{n+1-j}}\sum_{c=0}^{m}
\frac{\binom{m}{c}B_{c+n+1-j}^{(n+1-j)}\left( 2n+4\right) E_{m-c}^{(n+1-j)}}{
\binom{c+n+1-j}{n+1-j}}.
\end{eqnarray*} \end{theorem}
\begin{remark}
Many other decompositions are obtained by continuing as above. It is known
that the decomposition of the multiple Hurwitz zeta function is given by
different techniques and methods in the literature. In this paper, we do not
focus on the other kinds of decompositions. \end{remark}
\section{Infinite series representations of the numbers $y\left( n,\protect
\lambda \right) $ on entire functions} \label{Section7}
In this section, we give some formulas containing the numbers $y\left( n,\lambda \right) $ with the help of power series of entire functions. In order to give these formulas, we need the following infinite series representation, which was given by Boyadzhiev \cite{Boyadv1}-\cite{Boyadv2}: \begin{equation*} \sum_{m=0}^{\infty }\frac{f^{(m)}(0)}{m!}h(m)y^{m}=\sum_{m=0}^{\infty }\frac{
h^{(m)}(0)}{m!}\sum\limits_{j=0}^{m}S_{2}(m,j)y^{j}f^{(j)}(y), \end{equation*} where $f$ and $h$ are appropriate functions.
For a large class of entire functions and $|\lambda |<1$, Boyadzhiev \cite {Boyadv2} gave the following novel formula: \begin{equation} \sum_{m=0}^{\infty }h(m)\lambda ^{m}+\sum_{m=1}^{\infty }\frac{h^{(m-1)}(0)}{
m!}\mathcal{B}_{m}(\lambda )=0. \label{Abn-1b} \end{equation}
Combining (\ref{Abn-1}) with (\ref{Abn-1b}), we arrive at the following corollary:
\begin{corollary}
Let $h(\lambda )$ be an entire function and $|\lambda |<1$. Then we have
\begin{equation}
\sum_{v=0}^{\infty }h(v)\lambda ^{v}=-\sum_{m=1}^{\infty }\frac{h^{(m-1)}(0)
}{m!}\sum\limits_{n=0}^{m}\left( n+1\right) !\lambda ^{n+1}y\left( n,\lambda
\right) S_{2}(m,n+1). \label{ps1}
\end{equation} \end{corollary}
Substituting $h(\lambda )=\cos \lambda $ into (\ref{ps1}), with the aid of the Euler formula, for $\left\vert \lambda e^{\pm i}\right\vert <1$, we obtain \begin{equation} \sum_{v=0}^{\infty }\lambda ^{v}\cos (v)=\frac{1}{2}\sum_{v=0}^{\infty }\left( \left( \lambda e^{i}\right) ^{v}+\left( \lambda e^{-i}\right) ^{v}\right) =\frac{1-\lambda \cos 1}{1-2\lambda \cos 1+\lambda ^{2}}. \label{GH1} \end{equation} Combining (\ref{GH1}) with (\ref{ps1}), we have \begin{eqnarray} &&\sum_{m=1}^{\infty }\sum\limits_{n=0}^{2m-1}\frac{(-1)^{m+1}\left(
n+1\right) !S_{2}(2m-1,n+1)}{2(2m-1)!}\lambda ^{n+1}y\left( n,\lambda \right) \label{Gh2} \\ &=&\frac{1-\lambda \cos 1}{1-2\lambda \cos 1+\lambda ^{2}}, \notag \end{eqnarray} where $\left\vert \lambda \right\vert <1$.
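The closed form in (\ref{GH1}) admits a quick floating-point sanity check for a sample value $\lambda=0.3$, truncating the series (the tail beyond $v=200$ is negligible since $|\lambda|<1$):

```python
import math

lam = 0.3  # any |lam| < 1
# left-hand side of (GH1): sum of lam^v * cos(v), truncated
series = sum(lam ** v * math.cos(v) for v in range(200))
# right-hand side of (GH1)
closed = (1 - lam * math.cos(1)) / (1 - 2 * lam * math.cos(1) + lam ** 2)
# series and closed agree to machine precision
```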
Combining (\ref{Gh2}) with (\ref{GH}), we arrive at the following theorem:
\begin{theorem}
\begin{eqnarray*}
&&\sum_{m=1}^{\infty }\sum\limits_{n=0}^{2m-1}\frac{(-1)^{m+1}\left(
n+1\right) !S_{2}(2m-1,n+1)}{2(2m-1)!}\lambda ^{n+1}y\left( n,\lambda \right)
\\
&=&1+\sum_{n=1}^{\infty }\left( \mathcal{G}_{n}(2\cos 1,-1;1,1,1)-\mathcal{G}
_{n-1}(2\cos 1,-1;1,1,1)\cos 1\right) \lambda ^{n}.
\end{eqnarray*} \end{theorem}
Note that a considerable attribute of this formula depends on the function $f$. For instance, if $f$ is a polynomial, then the infinite series on the right-hand side of the equation (\ref{ps1}) reduces to a finite sum. On account of this, one can easily compute the value of the infinite series on the left-hand side of this equation.
The Hurwitz-Lerch zeta function $\Phi (\lambda ,z,b)$ is defined by \begin{equation*} \Phi (\lambda ,z,b)=\sum_{j=0}^{\infty }\frac{\lambda ^{j}}{\left(
j+b\right) ^{z}}, \end{equation*} where $b\in \mathbb{C}\setminus \mathbb{Z}_{0}^{-}$; $\lambda ,z\in \mathbb{C}$ when $\left\vert \lambda \right\vert <1$; $\Re (z)>1$ when $\left\vert \lambda \right\vert =1$ (\textit{cf}. \cite{Apostol}, \cite{SrivatavaChoi}).
The function $\Phi (\lambda ,z,b)$ interpolates the Apostol-Bernoulli polynomials at negative integers, that is \begin{equation} \Phi (\lambda ,1-n,b)=-\frac{1}{n}\mathcal{B}_{n}\left( b;\lambda \right) , \label{abp} \end{equation} where $n\in \mathbb{N}$, $\left\vert \lambda \right\vert <1$ (\textit{cf}. \cite{Apostol}, \cite{Boyadv1}, \cite{Boyadv2}, \cite{SrivatavaChoi}).
Since \begin{equation*} \mathcal{B}_{0}\left( 0;\lambda \right) =0 \end{equation*} and \begin{equation*} \lambda \mathcal{B}_{1}(1;\lambda )=1+\mathcal{B}_{1}(\lambda ) \end{equation*} and for $n\geq 2$, \begin{equation*} \lambda \mathcal{B}_{n}(1;\lambda )=\mathcal{B}_{n}(\lambda ), \end{equation*} (\textit{cf}. \cite{Apostol}), for $b=1$, the equation (\ref{abp}) reduces to the following well-known formula: \begin{eqnarray} \Phi (\lambda ,1-n,1) &=&-\frac{1}{n}\mathcal{B}_{n}\left( 1;\lambda \right) \label{1AAep} \\ &=&-\frac{1}{n\lambda }\mathcal{B}_{n}\left( \lambda \right) , \notag \end{eqnarray} where $n\in \mathbb{N}$ with $n\geq 2$.
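The interpolation formula (\ref{1AAep}) can be tested numerically. The sketch below computes the Apostol-Bernoulli numbers from the recurrence implied by the generating function $t/(\lambda e^{t}-1)$ (so $\mathcal{B}_{0}(\lambda)=0$ for $\lambda\neq 1$; this convention is an assumption of the sketch), and compares $-\frac{1}{n\lambda}\mathcal{B}_{n}(\lambda)$ with a truncation of $\Phi(\lambda,1-n,1)=\sum_{j\geq 0}\lambda^{j}(j+1)^{n-1}$:

```python
from fractions import Fraction
from math import comb

def apostol_bernoulli(n, lam):
    # B_k(lam) from t/(lam*e^t - 1), via
    # lam * sum_{k<=m} C(m,k) B_k(lam) - B_m(lam) = [m == 1]; requires lam != 1
    lam = Fraction(lam)
    B = [Fraction(0)]  # B_0(lam) = 0 for lam != 1
    for m in range(1, n + 1):
        s = sum(comb(m, k) * B[k] for k in range(m))
        B.append(((1 if m == 1 else 0) - lam * s) / (lam - 1))
    return B[n]

lam, n = Fraction(1, 3), 3
exact = -apostol_bernoulli(n, lam) / (n * lam)  # exact value of Phi(1/3, -2, 1)
phi = sum(float(lam) ** j * (j + 1) ** (n - 1) for j in range(300))  # truncation
```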
Substituting $f(z)=z^{m}$ ($m\in \mathbb{N}$) into (\ref{ps1}), and using (\ref{Abn-1}), we arrive at another proof of Theorem \ref{Theorem3}.
\section{Construction of the generating function $G\left( t,\protect\lambda
\right) $ with the help of Volkenborn integral on $p$-adic integers} \label{Section8}
In this section, we present another construction of the generating function $ G\left( t,\lambda \right) $ with the help of Volkenborn integral on $p$-adic integers. We give $p$-adic integral representation of the function $G\left( t,\lambda \right) $. We also give some applications of this integral representation.
Let $\mathbb{Z}_{p}$ denote the set of $p$-adic integers. Also, let $C^{1}$ denote the set of continuously differentiable functions from $\mathbb{Z}_{p}$ to a field with a complete valuation. With the aid of the indefinite sum of a continuous function $f$ on $\mathbb{Z}_{p}$, the Volkenborn integral is given by \begin{equation} \int\limits_{\mathbb{Z}_{p}}f\left( x\right) d\mu _{1}\left( x\right) = \underset{N\rightarrow \infty }{\lim }\frac{1}{p^{N}}\sum_{x=0}^{p^{N}-1}f \left( x\right) , \label{M} \end{equation} where $f\in C^{1}$ on $\mathbb{Z}_{p}\ $and $\mu _{1}\left( x\right) $ denotes the Haar distribution, which is given by \begin{equation*} \mu _{1}\left( x+p^{N}\mathbb{Z}_{p}\right) =\mu _{1}\left( x\right) =\frac{1 }{p^{N}} \end{equation*} (\textit{cf}. \cite{T. Kim}, \cite{KIMjmaa2017}, \cite[Definition 55.1, p.167]{Schikof}, \cite{SimsekMTJPM}, \cite{Volkenborn}; see also the references cited in each of these earlier works).
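The limit in (\ref{M}) is a $p$-adic one. Its convergence can be illustrated numerically for $f(x)=x^{j}$, whose Volkenborn integral is the Bernoulli number $B_{j}$ (with the convention $B_{1}=-1/2$), a fact used later in this section. The sketch below checks that the $p$-adic valuation of the error of the partial sums grows with $N$:

```python
from fractions import Fraction

def volkenborn_partial(j, p, N):
    # (1/p^N) * sum_{x=0}^{p^N - 1} x^j, a partial sum for (M) with f(x) = x^j
    return Fraction(sum(x ** j for x in range(p ** N)), p ** N)

def vp(q, p):
    # p-adic valuation of a nonzero rational q
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num, v = num // p, v + 1
    while den % p == 0:
        den, v = den // p, v - 1
    return v

# for p = 3: vp(volkenborn_partial(1, 3, N) - (-1/2), 3) == N, so the partial
# sums converge 3-adically to B_1 = -1/2 (and similarly to B_2 = 1/6 for j = 2)
```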
\begin{theorem}
Let $\lambda\in \mathbb{Z}_p$. Then, we have
\begin{equation}
G\left( t,\lambda \right) =\frac{1-\lambda }{\lambda\left(
t-1\right) }\int\limits_{\mathbb{Z}_{p}}\left( 1+\frac{1-\lambda}{\lambda }
t\right) ^{x}d\mu _{1}\left( x\right) . \label{pG}
\end{equation} \end{theorem}
\begin{proof} Substituting $f(x;t,\lambda )=\left( 1+\frac{1-\lambda}{\lambda }
t\right) ^{x}$ into the following integral equation, which is given in \cite[
p.169]{Schikof}:
\begin{equation*}
\int\limits_{\mathbb{Z}_{p}}f\left( x+1;t,\lambda \right) d\mu _{1}\left(
x\right) =\int\limits_{\mathbb{Z}_{p}}f\left( x;t,\lambda \right) d\mu
_{1}\left( x\right) +\frac{d}{dx}f(x)\left\vert _{x=0}\right. ,
\end{equation*}
and after some elementary computations, we obtain
\begin{equation*}
G\left( t,\lambda \right) =\frac{1-\lambda }{\lambda\left(
t-1\right) }\sum\limits_{n=0}^{\infty }(-1)^{n}\left(\frac{1-\lambda}{
\lambda }t\right) ^{n}\frac{1}{n+1}.
\end{equation*}
Combining the above equation with the Newton-Mercator series, we arrive at
the desired result. \end{proof}
It is time to give the second proof of Theorem \ref{Theorem 5}. The following proof is associated with the $p$-adic integral method.
\begin{proof}[The second proof of Theorem \protect\ref{Theorem 5}]
Using (\ref{pG}), we get
\begin{equation}
G\left( t,\lambda \right) =\frac{1-\lambda }{\lambda\left(
t-1\right) }\sum\limits_{n=0}^{\infty }(-1)^{n}\left( \frac{\lambda -1}{
\lambda }t\right) ^{n}\frac{1}{n!}\int\limits_{\mathbb{Z}_{p}}\left(
x\right) ^{\underline{n}}d\mu _{1}\left( x\right) . \label{pG2}
\end{equation}
Combining the above equation with (\ref{Sitirling1a}), we obtain
\begin{equation}
G\left( t,\lambda \right) =\frac{1-\lambda }{\lambda\left(
t-1\right) }\sum\limits_{n=0}^{\infty }(-1)^{n}\left( \frac{\lambda -1}{
\lambda }t\right) ^{n}\frac{1}{n!}\sum\limits_{j=0}^{n}S_{1}(n,j)\int
\limits_{\mathbb{Z}_{p}}x^{j}d\mu _{1}\left( x\right) . \label{pG1}
\end{equation}
Combining (\ref{pG1}) with the following well-known formula
\begin{equation*}
B_{j}=\int\limits_{\mathbb{Z}_{p}}x^{j}d\mu _{1}\left( x\right)
\end{equation*}
(\textit{cf}. \cite[ p.171]{Schikof}), and (\ref{1aG}), we have
\begin{equation*}
\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right) ^{n+2}y\left( n,\lambda
\right) t^{n}=\frac{1-\lambda }{\lambda\left( t-1\right) }
\sum\limits_{n=0}^{\infty }\sum\limits_{j=0}^{n}\frac{S_{1}(n,j)B_{j}}{n!}
\left( \frac{1-\lambda }{\lambda }\right) ^{n}t^{n}.
\end{equation*}
Therefore
\begin{equation*}
\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right) ^{n+2}y\left( n,\lambda
\right) t^{n}=\sum\limits_{n=0}^{\infty
}\sum\limits_{d=0}^{n}\sum\limits_{j=0}^{d}(-1)^{d}\frac{S_{1}(d,j)B_{j}}{
d!}\left( \frac{\lambda -1}{\lambda }\right) ^{d+1}t^{n}.
\end{equation*}
Comparing the coefficients of $t^{n}$ on both sides of the above equation,
we arrive at the equation (\ref{1aGbs}). \end{proof}
By using (\ref{pG2}), we obtain \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right) ^{n+2}y\left( n,\lambda \right) t^{n+1}-\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right) ^{n+2}y\left( n,\lambda \right) t^{n} \label{pG3} \\ &=&\sum\limits_{n=0}^{\infty }(-1)^{n+1}\left( \frac{\lambda -1}{\lambda } \right) ^{n+1}\frac{1}{n!}\int\limits_{\mathbb{Z}_{p}}\left( x\right) ^{
\underline{n}}d\mu _{1}\left( x\right) t^{n}. \notag \end{eqnarray} Combining the above equation with the Volkenborn integral in terms of the Mahler coefficients \begin{equation*} \int\limits_{\mathbb{Z}_{p}}\binom{x}{n}d\mu _{1}\left( x\right) =\frac{
(-1)^{n}}{n+1} \end{equation*} (\textit{cf}. \cite[Proposition 55.3, p.168]{Schikof}), we get \begin{eqnarray*}
\sum\limits_{n=0}^{\infty }\left( 1-\lambda \right) ^{n+2}y\left(
n,\lambda \right) t^{n+1}-\sum\limits_{n=0}^{\infty }\left( 1-\lambda
\right) ^{n+2}y\left( n,\lambda \right) t^{n} =\sum\limits_{n=0}^{\infty }\frac{(-1)^{n}}{n+1}\left( \frac{1-\lambda }{
\lambda }\right) ^{n+1}t^{n}. \end{eqnarray*} Comparing the coefficients of $t^{n}$ on both sides of the above equation, we arrive at the equation (\ref{1aGa}).
Combining the following well-known identity, which was proved by Kim et al. \cite{DSkimDaehee}: \begin{equation*} D_{m}=\int\limits_{\mathbb{Z}_{p}}\left( x\right) ^{\underline{m}}d\mu _{1}\left( x\right) , \end{equation*} where $m\in \mathbb{N}_{0}$, with (\ref{pG3}), we get the following result:
\begin{theorem}
Let $n\in \mathbb{N}$. Then, we have
\begin{equation*}
y\left( n-1,\lambda \right) +(\lambda -1)y\left( n,\lambda \right) =\frac{
D_{n}}{\lambda ^{n+1}n!}.
\end{equation*} \end{theorem}
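Since the Daehee numbers have the known closed form $D_{n}=(-1)^{n}n!/(n+1)$, this theorem is consistent with the equation (\ref{1aGa}). A quick exact check against the closed forms of $y(n,\lambda)$ from Section \ref{Section4} (the Daehee closed form is an assumption of this sketch):

```python
from fractions import Fraction
from math import factorial

def daehee(n):
    # known closed form: D_n = (-1)^n * n! / (n + 1)
    return Fraction((-1) ** n * factorial(n), n + 1)

def y_closed(n, lam):
    # closed forms of y(n, lam) for n = 0..3 (Section 4)
    lam = Fraction(lam)
    num = [Fraction(1), -3 * lam + 1, 11 * lam**2 - 7 * lam + 2,
           -25 * lam**3 + 23 * lam**2 - 13 * lam + 3][n]
    return num / ([1, 2, 6, 12][n] * lam ** (n + 1) * (lam - 1) ** (n + 1))

def theorem_holds(n, lam):
    # y(n-1, lam) + (lam-1) y(n, lam) == D_n / (lam^(n+1) n!)
    lam = Fraction(lam)
    return (y_closed(n - 1, lam) + (lam - 1) * y_closed(n, lam)
            == daehee(n) / (lam ** (n + 1) * factorial(n)))
```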
\section{Conclusion} \label{Section9}
In this paper, we have given the solution of Open Problem 1, which was proposed by the author \cite[p.57, Open problem 1]{SimsekMTJPAM2020} about the generating functions of the numbers $y\left( n,\lambda \right) $. We have also given many properties of this function. With the help of generating functions and their functional equations, we have derived many formulas associated with the numbers $y(n,\lambda )$, the Bernoulli numbers of the second kind, the harmonic numbers, the alternating harmonic numbers, the Apostol-Bernoulli numbers, the Stirling numbers, the Leibnitz numbers, the Bernoulli numbers, and sums involving higher powers of inverses of binomial coefficients. Furthermore, we have provided an algorithm to compute the numbers $y\left( n,\lambda \right) $. By using this algorithm, we have also computed some values of the numbers $y\left( n,\lambda \right) $. In addition, we have presented differential equations of the generating functions with their applications. With the aid of the numbers $y\left( n,\lambda \right) $, we have given decompositions of the multiple Hurwitz zeta functions involving the Bernoulli polynomials of higher order. We have also given infinite series representations of the numbers $y\left( n,\lambda \right) $ on entire functions. With the aid of the Volkenborn integral on $p$-adic integers, we have constructed the generating function for the numbers $y\left( n,\lambda \right) $ and given some of their applications.
As a result, the results produced in this article cover a wide range and have the potential to attract the attention of many researchers. In the future, the examination of the properties of the numbers $y\left( n,\lambda \right) $ will continue, and it will be investigated which other numbers and polynomials these numbers are related to.
\end{document} |
\begin{document}
\title[Savage-Hutter Equations]{A Finite Volume Scheme for
Savage-Hutter Equations on Unstructured Grids}
\author[R. Li]{Ruo Li} \address{CAPT, LMAM and School of Mathematical
Sciences, Peking University, Beijing 100871, P.R. China} \email{[email protected]}
\author[X.-H. Zhang]{Xiaohua Zhang} \address{College of Science and
Three Gorges Mathematical Research Center, China Three Gorges
University, Yichang, 443002, China} \email{[email protected]}
\maketitle
\begin{abstract}
A Godunov-type finite volume scheme on unstructured triangular grids
is proposed to numerically solve the Savage-Hutter equations in
curvilinear coordinates. We show by direct observation that the
model is not a Galilean invariant system. At the cell boundary, the
modified Harten-Lax-van Leer (HLL) approximate Riemann solver is
adopted to calculate the numerical flux. The modified HLL flux is
not troubled by the lack of Galilean invariance of the model and it
is helpful to handle discontinuities at free interface. Rigidly the
system is not always a hyperbolic system due to the dependence of
flux on the velocity gradient. Even though, our numerical results
still show quite good agreements to reference solutions. The
simulations for granular avalanche flows with shock waves indicate
that the scheme is applicable.
\textbf{keywords}: granular avalanche flow, Savage-Hutter equations,
finite volume method, Galilean invariant
\end{abstract}
\section{Introduction}
The Savage-Hutter model was first proposed to describe the motion of a finite mass of granular material flowing down a rough incline, in which the granular material was treated as an incompressible continuum \cite{Savage1989, Gray2003, Hutter2005}. It was then extended to 2D and to curvilinear coordinates \cite{Hutter1993, Greve1994, Universit1999}, which makes it applicable to the study of natural disasters such as landslides and debris flows \cite{Paik2015}. For more details on the Savage-Hutter model, we refer to \cite{Hutter2005, Pudasaini2007, Gray2001, Iverson1997, Iverson2001a, Pitman2005}.
Numerical methods for the Savage-Hutter model date back to the work in \cite{Savage1989, Koch1994} using finite difference methods, which are not able to capture shock waves. Later on, whether for the Savage-Hutter model or other granular avalanche flow models, Godunov-type schemes have commonly been used in the literature \cite{Denlinger2001a, Wang2004, Chiou2005, Cui2014, Zhai2015, Vollmoller2004, Rosatti2013}, considering the hyperbolic nature of most of the depth-averaged equations. We note that, strictly speaking, the Savage-Hutter model is not hyperbolic since its flux depends on the gradient of the velocity. More precisely, the dependence is on the signs of the gradients of the velocity components, so that the model is hyperbolic in regions with monotonic velocity components. Some related interesting developments on numerical methods can be found in, e.g., \cite{Tai2002, Nichita2004, Chiou2005, Pelanti2011, Ouyang2013}.
In practical scenarios, landslides, rock avalanches and debris flows usually occur in mountainous areas with complex topography, so that we have to use unstructured grids for numerical simulation. However, there is little work on numerical methods for the Savage-Hutter equations on unstructured grids. This may be due primarily to the lack of Galilean invariance of the model, which is a direct observation on the model as we point out in section 2. In this paper, we are motivated to develop a high resolution and robust numerical model for the Savage-Hutter equations on unstructured grids. Whether the numerical method can provide a correct solution on unstructured grids for a problem without Galilean invariance is a major question here. Meanwhile, the loss of hyperbolicity at the zeros of the velocity gradients may cause some unpredictable behavior in numerical solutions, too.
Additionally, there are still some challenges in numerically solving the strongly convective Savage-Hutter equations on unstructured grids. For example, shock formation is an essential mechanism in granular flows on an inclined surface merging into a horizontal run-out zone or encountering an obstacle, when the velocity becomes subcritical from its supercritical state \cite{Wang2004}. Therefore, numerical efficiency is an important issue in resolving the steep gradients and moving fronts. Noticing that the Savage-Hutter equations and the shallow water equations have some similarities, we basically follow the method in \cite{Deng2013} for shallow water equations. The modified HLL flux is adopted to calculate the numerical flux on the cell boundary. The formulation of the modified HLL flux does not require the flux to be Galilean invariant, so formally we have no difficulty in calculating the numerical flux. The techniques involved to improve efficiency include the MUSCL-Hancock scheme in time discretization, the ENO-type reconstruction and the $h$-adaptive method in spatial discretization.
We apply the numerical scheme to three examples. The first one is a granular dam break problem, and the second one is a granular avalanche flowing down an inclined plane and merging continuously into a horizontal plane. In the third example, a granular mass sliding down an inclined plane and merging into a horizontal run-out zone is simulated, whereby the flow is diverted by an obstacle located on the inclined plane. It is interesting that in these examples the numerical solutions agree with the reference solutions quite well. This indicates that the scheme is applicable to the model, even though the model lacks Galilean invariance and is not always hyperbolic. The reason why the loss of hyperbolicity of the model is not destructive to the numerical method is still under investigation.
The rest of the paper is organized as follows. In section 2, the Savage-Hutter equations in curvilinear coordinates are briefly introduced. We directly show that the system is not Galilean invariant. In section 3, we give the details of our numerical scheme. In section 4 we present the numerical results, and a short conclusion in section 5 closes the paper.
\section{Governing Equations}
There are various forms of the Savage-Hutter equations corresponding to different coordinate systems, and a detailed derivation of these different versions has been given in \cite{Pudasaini2007}. Here, we confine ourselves to one of them. Precisely, the governing equations are built on an orthogonal curvilinear coordinate system (see Fig. \ref{fig:sketch}) given in \cite{Chiou2005}. The curvilinear coordinate system $Oxyz$ is defined on the reference surface, where the $x$-axis is oriented in the downslope direction, the $y$-axis lies in the cross-slope direction of the reference surface and the $z$-axis is normal to them. The downslope inclination angle of the reference surface, $\zeta$, depends only on the downslope coordinate $x$; that is, there is no lateral variation in the $y$-direction, so the $y$-axis is strictly horizontal. The shallow basal topography is defined by its elevation $z = z^{b}(x,y)$ above the curvilinear reference surface. \begin{figure}
\caption{The sketch of curvilinear coordinate for Savage-Hutter
model(reproduced from \cite{Chiou2005}).}
\label{fig:sketch}
\end{figure}
The corresponding non-dimensional Savage-Hutter equations are \begin{equation}
\frac{\partial h}{\partial t} + \frac{\partial}{\partial x}(hu)
+ \frac{\partial}{\partial y}(hv) = 0, \label{mass} \end{equation} \begin{equation}
\frac{\partial(hu)}{\partial t} + \frac{\partial}{\partial x}(hu^2
+ \frac{1}{2}\beta_x h^2) + \frac{\partial}{\partial y}(huv) = hs_x,
\label{momentumx} \end{equation} \begin{equation}
\frac{\partial (hv)}{\partial t} + \frac{\partial}{\partial x}(huv)
+ \frac{\partial}{\partial y}(hv^2 + \frac{1}{2} \beta_y h^2) =
hs_y,
\label{momentumy} \end{equation} where $h$ is the avalanche depth in the $z$ direction, and $\bm{u} = (u, v)$ are the depth-averaged velocity components in the downslope($x$) and cross-slope($y$) directions, respectively. The factors $\beta_x$, $\beta_y$ are defined as \[
\beta_x = \epsilon \cos\zeta K_x, \quad \beta_y = \epsilon \cos\zeta
K_y, \] respectively, where $\epsilon$ is the aspect ratio of the characteristic thickness to the characteristic downslope extent. Here $K_x$, $K_y$ are the earth pressure coefficients in the $x$ and $y$ directions, defined by the Mohr-Coulomb yield criterion. Generally, the earth pressure coefficients are considered in the active or passive state, depending on whether the downslope and cross-slope flows are expanding or contracting \cite{Pirulli2007}. Hutter et al. assumed that $K_x$, $K_y$ link the normal pressures in the $x$, $y$ directions with the overburden pressure and suggested that \cite{Savage1991, Hutter1993} \[
K_{x_{act/pass}}=2\left(1\mp
\sqrt{1-\frac{\cos^2\phi}{\cos^2\delta}}\right)\sec^2\phi - 1, \] \[
K_{y_{act/pass}}=\frac{1}{2}\left(K_x + 1\mp \sqrt{(K_x -1)^2 + 4
\tan^2 \delta}\right), \] where $\phi$ and $\delta$ are the internal and basal Coulomb friction angles, respectively. The subscripts ``act'' and ``pass'' denote the active (``$-$'') and passive (``$+$'') stress states, selected by \[ K_x = \left \{\begin{array}{lr} K_{x_{act}}, & \partial u / \partial x \ge 0, \\ K_{x_{pass}}, & \partial u / \partial x < 0, \end{array} \right. \] \[
K_y = \left \{
\begin{array}{llr}
K_{y_{act}}^{x_{act}}, & \partial u/ \partial x \ge 0,
& \partial v/ \partial y \ge 0, \\ [2mm]
K_{y_{pass}}^{x_{act}}, & \partial u/ \partial x \ge 0,
& \partial v / \partial y < 0, \\ [2mm]
K_{y_{act}}^{x_{pass}}, & \partial u/ \partial x < 0, & \partial
v/ \partial
y \ge 0,
\\ [2mm]
K_{y_{pass}}^{x_{pass}}, & \partial u/ \partial x < 0, & \partial v / \partial y < 0.
\end{array}\right. \] In this model the earth pressure coefficients $K_x$, $K_y$ are assumed to be functions of the velocity gradient.
The terms $s_x$, $s_y$ are the net driving accelerations in the $x$, $y$ directions, respectively, \[
s_x = \sin\zeta - \frac{u}{|\bm{u}|} \tan\delta (\cos\zeta +\lambda
\kappa u^2) - \epsilon \cos\zeta \frac{\partial z_b}{\partial x}, \] \[
s_y = -\frac{v}{|\bm{u}|}\tan \delta (\cos\zeta + \lambda \kappa
u^2) - \epsilon \cos\zeta \frac{\partial z_b}{\partial y}, \]
where $|\bm{u}| = \sqrt{u^2 + v^2}$, $\kappa = -\frac{\partial \zeta}{\partial x}$ is the local curvature of the reference surface, and $\lambda \kappa$ is the local stretching of the curvature.
The two dimensional Savage-Hutter equations \eqref{mass}-\eqref{momentumy} can be collected in a general vector form as \begin{equation}
\frac{\partial \bm{U}}{\partial t} + \frac{\partial \bm{F}(\bm{U})}
{\partial x} + \frac{\partial \bm{G}(\bm{U})}{\partial y} =
\bm{S}(\bm{U}), \label{cs} \end{equation} where $\bm{U}$ denotes the vector of conservative variables, $\bm{\mathcal{F}} = (\bm{F}, \bm{G})$ represent the physical fluxes in the $x$ and $y$ directions, respectively, and $\bm{S}$ is the source term. They are \begin{equation} \begin{aligned}
\bm{U} = \left[
\begin{array}{c}
h \\ hu \\ hv
\end{array}\right], \quad \quad
\bm{F} = \left[
\begin{array}{c}
hu \\ hu^2 + \frac{1}{2}\beta_x h^2 \\ huv
\end{array}\right], \\
\bm{G} = \left[
\begin{array}{c}
hv \\ huv \\ hv^2 + \frac{1}{2}\beta_y h^2
\end{array}\right], \quad\quad
\bm{S}=\left[
\begin{array}{c}
0 \\ hs_x \\ hs_y
\end{array} \right]. \end{aligned} \label{cs2} \end{equation} The flux for \eqref{cs} and \eqref{cs2} along a direction $\bm{n} = (n_x, n_y)$ is $\bm{\mathcal{F}} \cdot \bm{n} = \bm{F}n_x +\bm{G}n_y$, while $\bm{n}$ is a unit vector. The Jacobian of the flux along $\bm{n}$ is given by \[ \bm{J} = \frac{\partial(\bm{F}n_x +\bm{G}n_y )}{\partial \bm{U}} = \left[\begin{array}{ccc} 0 & n_x & n_y \\ (\beta_x h-u^2)n_x-uvn_y & 2un_x + vn_y & un_y\\ -uvn_x +(\beta_yh-v^2)n_y & vn_x & un_x + 2vn_y \end{array}\right], \] where $n_x$ and $n_y$ are the components of the unit vector in the $x$ and $y$ directions, respectively. The three eigenvalues of the flux Jacobian are \[ \begin{aligned}
\lambda_1 & = u_{\bm{n}} -\sqrt{h(\beta_x n_x^2 + \beta_y
n_y^2)}, \\
\lambda_2 & = u_{\bm{n}}, \\
\lambda_3 & = u_{\bm{n}} + \sqrt{h(\beta_x n_x^2 + \beta_y
n_y^2)} \end{aligned} \] where $u_{\bm{n}} = \bm{u} \cdot \bm{n}$.
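As a quick sanity check of the eigenvalue formulas above, the Jacobian can be assembled numerically and its spectrum compared with the closed form. The following Python sketch uses illustrative parameter values, not taken from the paper's test cases.

```python
import numpy as np

# Illustrative state and parameters (assumed, not from the paper's tests)
h, u, v = 2.0, 1.3, -0.4            # depth and depth-averaged velocities
bx, by = 0.8, 0.5                   # beta_x, beta_y (assumed positive)
theta = 0.7
nx, ny = np.cos(theta), np.sin(theta)

# Jacobian of F*nx + G*ny with respect to U = (h, hu, hv), as given above
J = np.array([
    [0.0,                        nx,            ny],
    [(bx*h - u*u)*nx - u*v*ny,   2*u*nx + v*ny, u*ny],
    [-u*v*nx + (by*h - v*v)*ny,  v*nx,          u*nx + 2*v*ny],
])

un = u*nx + v*ny
c = np.sqrt(h*(bx*nx**2 + by*ny**2))
closed_form = np.sort([un - c, un, un + c])

# The numerically computed spectrum matches the closed-form eigenvalues
assert np.allclose(np.sort(np.linalg.eigvals(J).real), closed_form)
```

With $h>0$ and $\beta_x, \beta_y > 0$, the three values are real and distinct, as the discussion below requires.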
For laboratory avalanches, the ranges of $\delta$ and $\phi$ usually make $\beta_x$ and $\beta_y$ greater than zero \cite{Hutter2005, Pudasaini2007}; thus the eigenvalues of the flux Jacobian are all real and distinct if the avalanche depth $h>0$. Therefore the Savage-Hutter system given by \eqref{cs} and \eqref{cs2} is hyperbolic wherever the gradients of the velocity components are not zero.
Clearly, at first glance the Savage-Hutter equations have a very similar mathematical structure to the shallow water equations of hydrodynamics. As a matter of fact, however, the derivation of the Savage-Hutter equations is not the same as that of the shallow water approximation, although both start from the incompressible Navier-Stokes equations. Meanwhile, on account of the jumps in the earth pressure coefficients $K_{x_{act/pass}}$ and $K_{y_{act/pass}}$, the complex source terms $s_x$ and $s_y$, and the free surface at the front and rear margins, it can be quite complicated to develop an appropriate numerical method to solve the Savage-Hutter equations.
It is a direct observation that the Savage-Hutter equations are not Galilean invariant. Precisely, they are not rotationally invariant if $\beta_x \neq \beta_y$. It is well known that the shallow water equations are rotationally invariant. This is an essential difference between the shallow water equations and the Savage-Hutter equations. We state this fact as \begin{theorem}\label{theorem1}
For any unit vector $\bm{n}=(\cos{\theta}, \sin{\theta})$,
$\theta \neq 0$, and all vectors $\bm{U}$, the following equality
\begin{equation}
\cos\theta \bm{F}(\bm{U}) + \sin\theta \bm{G}(\bm{U}) =
\bm{T}^{-1}\bm{F}(\bm{TU}) \label{rotational}
\end{equation}
holds if and only if $\beta_x = \beta_y$, where the rotation matrix
$\bm{T}$ is as
\[
\bm{T}=\left[
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos\theta & \sin\theta \\
0 & -\sin\theta & \cos\theta
\end{array}\right].
\] \end{theorem} \begin{proof} First we calculate $\bm{TU}$. The result is \[
\bm{TU}=\left[\begin{array}{c} h \\
h(u\cos{\theta}+v\sin{\theta})\\
h(v\cos{\theta}-u\sin{\theta})
\end{array}\right]. \] Next we compute $\bm{F}(\bm{TU})$ and obtain \[
\bm{F}(\bm{TU})=\left[\begin{array}{c}
h(u\cos{\theta}+v\sin{\theta}) \\
\frac{h^2\beta_x}{2}+\frac{(hu\cos{\theta}+hv\sin{\theta})^2}{h}\\
\frac{(hv\cos{\theta}-hu\sin{\theta})(hu\cos{\theta}+hv\sin{\theta})}{h}
\end{array}\right]. \] Then we apply the inverse rotation $\bm{T}^{-1}$ to $\bm{F}(\bm{TU})$ and get \begin{equation} \bm{T^{-1}F}(\bm{TU})=\left[\begin{array}{c} h(u\cos{\theta}+v\sin{\theta}) \\ \frac{1}{2}h(2u^2+h\beta_x)\cos{\theta}+huv\sin{\theta} \\ huv\cos{\theta}+\frac{1}{2}h(2v^2+h\beta_x)\sin{\theta} \end{array}\right]. \label{RH} \end{equation} For the left-hand side of Eq. \eqref{rotational}, we have \begin{equation} \cos\theta \bm{F}(\bm{U}) + \sin\theta \bm{G}(\bm{U}) = \left[ \begin{array}{c} h(u\cos{\theta}+v\sin{\theta}) \\ \frac{1}{2}h(2u^2+h\beta_x)\cos{\theta}+huv\sin{\theta} \\ huv\cos{\theta}+\frac{1}{2}h(2v^2+h\beta_y)\sin{\theta} \end{array}\right]. \label{LH} \end{equation} Thus \eqref{RH} equals \eqref{LH} if and only if $\beta_x = \beta_y$. \end{proof}
If the system were rotationally invariant, one could solve a 1D Riemann problem with $(h, h u_{\bm{n}})$, where $u_{\bm{n}} = \bm{u} \cdot \bm{n}$, as variables, obtain a 1D numerical flux $(f_h, f_m)$, and then take $(f_h, f_m \bm{n})$ as the numerical flux along $\bm{n}$ for the 2D problem. This procedure is very popular, but it only works when $\bm{\mathcal{F}} \cdot \bm{n} = \bm{T}^{-1} \bm{F}(\bm{TU})$ holds. The theorem tells us that this equality holds only when $\theta = 0$ or $\beta_x = \beta_y$. In the finite volume method to be investigated, $\bm{n}$ is the unit normal of a cell interface, so $\theta$ cannot always vanish on unstructured grids. Therefore, if $\beta_x \neq \beta_y$, any approximate Riemann solver based on such a procedure is outside the scope of our candidate numerical fluxes.
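The failure of rotational invariance stated in Theorem \ref{theorem1} is easy to reproduce numerically. The Python sketch below checks both directions of the equivalence with an illustrative state and angle (values assumed for the check, not from the paper).

```python
import numpy as np

def F(U, bx):                       # physical flux in x
    h, hu, hv = U
    u, v = hu/h, hv/h
    return np.array([hu, hu*u + 0.5*bx*h*h, hu*v])

def G(U, by):                       # physical flux in y
    h, hu, hv = U
    u, v = hu/h, hv/h
    return np.array([hv, hu*v, hv*v + 0.5*by*h*h])

def rotated_flux(U, bx, theta):     # T^{-1} F(T U)
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    return np.linalg.inv(T) @ F(T @ U, bx)

U = np.array([2.0, 2.6, -0.8])      # (h, hu, hv), hypothetical state
theta = 0.9
c, s = np.cos(theta), np.sin(theta)

# beta_x == beta_y: the identity of Theorem 1 holds
assert np.allclose(c*F(U, 0.7) + s*G(U, 0.7), rotated_flux(U, 0.7, theta))
# beta_x != beta_y: the identity fails
assert not np.allclose(c*F(U, 0.7) + s*G(U, 0.4), rotated_flux(U, 0.7, theta))
```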
\section{Numerical Method} Let us discretize the Savage-Hutter equations \eqref{cs} and \eqref{cs2} on Delaunay triangular grids using a finite volume Godunov-type approach. We adopt the modified HLL approximate Riemann solver to calculate numerical fluxes across cell interfaces. The MUSCL-Hancock method and an ENO-type reconstruction technique are adopted to achieve second-order accuracy in space and time.
\subsection{Finite volume method} Before introducing the cell-centered finite volume method, the entire spatial domain $\Omega$ is subdivided into $N$ triangular cells $\tau_i, i = 1, 2, \cdots N$. In each cell $\tau_i$, the conservative variables of the Savage-Hutter equations are discretized as piecewise linear functions \[
\left.\bm{U}^{n}\right|_{\tau_{i}}(\bm{x})=\bm{U}_{i}^{n}+
\nabla \bm{U}_{i}^{n}\left(\bm{x}-\bm{x}_{i}\right) \] where $\bm{U}_{i}^{n}=\left(h_{i}^{n},(h u)_{i}^{n},(h v)_{i}^{n}\right)^{T}$ is the cell average value, $\nabla \bm{U}_{i}^{n}$ is the slope on $\tau_i$, $\bm{x}_i$ is the barycenter of $\tau_i$, and the superscript $n$ indicates the time level $t^n$.
To introduce the finite volume scheme, Eq. \eqref{cs} is integrated over a triangular cell $\tau_i$ \begin{equation}
\frac{\partial}{\partial t}\int_{\tau_i}{\bm{U}}d\Omega +
\int_{\tau_i}\left(\frac{\partial \bm{F}}{\partial x} +
\frac{\partial \bm{G}}{\partial y} \right) d\Omega
= \int_{\tau_i}{\bm{S}}d\Omega. \label{intcs} \end{equation} Applying Green's theorem, Eq. \eqref{intcs} becomes \[
\frac{\partial}{\partial t}\int_{\tau_i}{\bm{U}}d\Omega +
\int_{\partial\tau_i}{\bm{\mathcal{F}} \cdot\bm{n}}ds
=\int_{\tau_i}{\bm{S}}d\Omega, \] where $\partial\tau_i$ denotes the boundary of the cell $\tau_i$, $\bm{n}$ is the unit outward vector normal to the boundary, and $ds$ is the arc element. The integrand $\bm{\mathcal{F}}\cdot\bm{n}$ is the outward normal flux vector, in which $\bm{\mathcal{F}} = \left[\bm{F}, \bm{G}\right]$. Define the cell average: \[
\bm{U}_i=\frac{1}{|\tau_i|}\int_{\tau_i}{\bm{U}}d\Omega, \]
where $|\tau_i|$ is the area of the cell $\tau_i$. The method achieves second-order accuracy by using the MUSCL-Hancock method which consists of predictor and corrector steps. The procedure of the predictor-corrector Godunov-type method goes as follows:
\textbf{Step 1: Predictor step} \[
\bm{U}_{i}^{n+1/2} = \bm{U}_i - \frac{\Delta
t}{2|\tau_i|}\sum_{j=1}^{3}{\int_{\partial\tau_i,j}{\bm{\mathcal{F}}(\bm{U}_{in}^{n})\cdot
\bm{n}_{j}}}ds + \frac{\Delta t}{2}\bm{S}_{i}^{n}, \] where $\partial \tau_{i,j}$ is the $j$-th edge of $\tau_{i}$ with unit outer normal $\bm{n}_{j}$, and $\bm{S}_{i}$ is the source term discretized by a centered scheme. The flux vector $\bm{\mathcal{F}}(\bm{U}_{in}^{n})$ is calculated at each cell face $\partial \tau_{i,j}$ after ENO-type piecewise linear reconstruction, and $\Delta t$ is the time step.
\textbf{Step 2: Corrector step} \[
\bm{U}_{i}^{n+1} = \bm{U}_{i}^{n} - \frac{\Delta
t}{|\tau_i|}\sum_{j=1}^{3}{\int_{\partial
\tau_i,j}{\bm{\mathcal{F}}_{j}^{*}(\bm{U}_{in}^{n+1/2},\bm{U}_{out}^{n+1/2})\cdot\bm{n}_{j}ds}
} + \Delta t \bm{S}_{i}^{n+1/2}, \] where the numerical flux vector $\bm{\mathcal{F}}^{*}(\bm{U}_{in}^{n+1/2},\bm{U}_{out}^{n+1/2})\cdot\bm{n}$ is evaluated using an approximate Riemann solver at each quadrature point on the cell boundary. In this paper, we use the HLL approximate Riemann solver owing to its simplicity and stable performance at wet/dry interfaces.
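The predictor-corrector update above can be sketched as follows. This is a schematic single-cell version in Python, with the mesh, reconstruction and half-step trace machinery abstracted behind hypothetical arguments (\texttt{edges}, \texttt{flux}); it is not the authors' implementation.

```python
import numpy as np

# Schematic single-cell MUSCL-Hancock update. `edges` holds, per edge,
# (length, outward unit normal, inner trace, outer trace); `flux` is any
# consistent numerical flux; `S` is the source term. Names are illustrative.
def step_cell(U_i, area, edges, flux, S, dt):
    # Predictor: half time step using only the cell's own (inner) traces
    bnd = sum(L*flux(U_in, U_in, n) for (L, n, U_in, U_out) in edges)
    U_half = U_i - 0.5*dt/area*bnd + 0.5*dt*S(U_i)
    # Corrector: full step with the two-sided Riemann flux
    # (in the full scheme the traces here are the half-step values)
    bnd = sum(L*flux(U_in, U_out, n) for (L, n, U_in, U_out) in edges)
    return U_i - dt/area*bnd + dt*S(U_half)

# Quick check on one reference triangle with a constant state: the length-
# weighted outward normals sum to zero, so the state is preserved exactly.
def phys(U, n, bx=0.8, by=0.5):
    h, hu, hv = U
    u, v = hu/h, hv/h
    F = np.array([hu, hu*u + 0.5*bx*h*h, hu*v])
    G = np.array([hv, hu*v, hv*v + 0.5*by*h*h])
    return F*n[0] + G*n[1]

central = lambda UL, UR, n: 0.5*(phys(UL, n) + phys(UR, n))
U = np.array([2.0, 1.0, -0.5])
edges = [(1.0, np.array([0.0, -1.0]), U, U),
         (np.sqrt(2.0), np.array([1.0, 1.0])/np.sqrt(2.0), U, U),
         (1.0, np.array([-1.0, 0.0]), U, U)]
U_new = step_cell(U, 0.5, edges, central, lambda U_: np.zeros(3), 1e-3)
# U_new equals U: a constant state with zero source is preserved
```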
\subsection{HLL approximate Riemann solver} For many hyperbolic systems, approximate Riemann solvers are used to obtain approximate solutions in most practical situations. Since the Savage-Hutter equations and the shallow water equations are very similar, we can directly extend approximate Riemann solvers for the shallow water equations to the Savage-Hutter equations. The HLL approximate Riemann solver offers a simple way of dealing with dry bed situations and of determining the wet/dry front velocities \cite{Fraccarollo2010}. As for the shallow water equations, the numerical interface flux of HLL for the Savage-Hutter equations is computed as follows \[
\bm{\mathcal{F}}^{*}\cdot \bm{n} = \left\{ \begin{array}{lc}
\bm{\mathcal{F}}(\bm{U}_L)\cdot\bm{n}, & \mathrm{if} \quad s_L \ge 0, \\
\dfrac{s_R \bm{\mathcal{F}}(\bm{U}_L)\cdot \bm{n} - s_L
\bm{\mathcal{F}}(\bm{U}_R)\cdot \bm{n} + s_R s_L(\bm{U}_R -
\bm{U}_L)}{s_R - s_L}, & \mathrm{if} \quad s_L \le 0 \le s_R, \\
\bm{\mathcal{F}}(\bm{U}_R)\cdot\bm{n}, & \mathrm{if} \quad s_R \le 0,
\end{array}
\right. \] where $\bm{U}_L = (h_L, (hu)_L, (hv)_L)$ and $\bm{U}_R = (h_R, (hu)_R, (hv)_R)$ are the left and right Riemann states of a local Riemann problem, respectively, and $s_L$ and $s_R$ are estimates of the speeds of the left and right waves. The key to the success of the HLL approach is the availability of estimates for the wave speeds $s_L$ and $s_R$. For the Savage-Hutter equations there are also dry/wet bed cases. Thus, in order to take the dry bed into account, similarly to the shallow water equations, the left and right wave speeds are expressed as \[
s_L = \left \{ \begin{array}{lc}
\min(\bm{u}_L\cdot \bm{n} - \sqrt{c_L h_L},
u_{*} - \sqrt{c_L h_{*}}), & \mathrm{if} \quad h_L, h_R > 0,
\\
\bm{u}_{L}\cdot\bm{n} - \sqrt{c_L h_L}, & \mathrm{if}
\quad
h_R = 0, \\
\bm{u}_{R}\cdot\bm{n} - 2\sqrt{c_R h_R}, & \mathrm{if}
\quad h_L = 0,
\end{array}
\right. \]
\[
s_R = \left \{ \begin{array}{lc}
\max(\bm{u}_R\cdot \bm{n} + \sqrt{c_R h_R},
u_{*} + \sqrt{c_R h_{*}}), & \mathrm{if} \quad h_L, h_R > 0,
\\ [2mm]
\bm{u}_{L}\cdot\bm{n} + 2\sqrt{c_L h_L}, & \mathrm{if}
\quad h_R = 0, \\ [2mm]
\bm{u}_{R}\cdot\bm{n} + \sqrt{c_R h_R}, & \mathrm{if}
\quad
h_L = 0,
\end{array}
\right. \] where $\bm{u}_L = (u_L, v_L)^{T}$, $\bm{u}_R = (u_R, v_R)^{T}$, $c_L = (\beta_x n_x^2 + \beta_y n_y^2)_L$, $c_R = (\beta_x n_x^2 + \beta_y n_y^2)_R$ and \[
\left\{ \begin{array}{lc}
u_{*} = \frac{1}{2}(\bm{u}_L + \bm{u}_R)\cdot \bm{n} +
\sqrt{c_L h_L} - \sqrt{c_R h_R} \\ [2mm]
h_{*} =
\frac{1}{\bm{\beta}\cdot\bm{n}}\left[\frac{1}{2}(\sqrt{c_L h_L}
+ \sqrt{c_R h_R}) + \frac{1}{4}(\bm{u}_L -
\bm{u}_R)\cdot\bm{n}\right]^{2}.
\end{array}
\right. \]
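For concreteness, the modified HLL flux and the wave-speed estimates above can be sketched in Python as follows. The function names are illustrative; the same $\beta_x$, $\beta_y$ are used on both sides, and the $h_*$ normalisation follows the $\bm{\beta}\cdot\bm{n}$ factor above, assumed positive here.

```python
import numpy as np

def phys_flux_n(U, n, bx, by):
    """Physical flux F*nx + G*ny for a state U = (h, hu, hv)."""
    h, hu, hv = U
    if h <= 0.0:                       # dry state carries no flux
        return np.zeros(3)
    u, v = hu/h, hv/h
    F = np.array([hu, hu*u + 0.5*bx*h*h, hu*v])
    G = np.array([hv, hu*v, hv*v + 0.5*by*h*h])
    return F*n[0] + G*n[1]

def hll_flux(UL, UR, n, bx, by):
    nx, ny = n
    c = bx*nx*nx + by*ny*ny            # same beta on both sides here
    beta_n = bx*nx + by*ny             # beta.n, assumed positive
    hL, hR = UL[0], UR[0]
    uL = UL[1:]/hL if hL > 0 else np.zeros(2)
    uR = UR[1:]/hR if hR > 0 else np.zeros(2)
    unL, unR = uL @ n, uR @ n
    if hL > 0 and hR > 0:              # two-sided wet estimates
        ustar = 0.5*(unL + unR) + np.sqrt(c*hL) - np.sqrt(c*hR)
        hstar = (0.5*(np.sqrt(c*hL) + np.sqrt(c*hR))
                 + 0.25*(unL - unR))**2 / beta_n
        sL = min(unL - np.sqrt(c*hL), ustar - np.sqrt(c*hstar))
        sR = max(unR + np.sqrt(c*hR), ustar + np.sqrt(c*hstar))
    elif hR == 0:                      # dry bed on the right
        sL, sR = unL - np.sqrt(c*hL), unL + 2.0*np.sqrt(c*hL)
    else:                              # dry bed on the left
        sL, sR = unR - 2.0*np.sqrt(c*hR), unR + np.sqrt(c*hR)
    FL = phys_flux_n(UL, n, bx, by)
    FR = phys_flux_n(UR, n, bx, by)
    if sL >= 0:
        return FL
    if sR <= 0:
        return FR
    return (sR*FL - sL*FR + sR*sL*(UR - UL))/(sR - sL)
```

By consistency, two equal wet states reproduce the physical flux along $\bm{n}$, and two dry states give a zero flux.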
\subsection{Linear reconstruction} In order to achieve second-order accuracy in space, a simple ENO-type piecewise linear reconstruction is applied to suppress numerical oscillations near steep gradients or discontinuities. This reconstruction method was used by Deng et al. for the shallow water equations \cite{Deng2013}.
\begin{figure}
\caption{A schematic of the ENO-type reconstruction on patch of cells (reproduced from \cite{Deng2013}).}
\label{fig:ENO}
\end{figure}
This part is mainly taken from \cite{Deng2013}. For a variable $\psi=h, hu$ or $hv$, assuming its cell average value $\psi_i$ on $\tau_i$ is given (see Fig. \ref{fig:ENO}), we compute the gradients of $\psi$ on the three triangles $\triangle AiB$, $\triangle BiC$ and $\triangle CiA$, referred to as $(\nabla \psi)_{AiB}$, $(\nabla \psi)_{BiC}$ and $(\nabla \psi)_{CiA}$, and choose the one with the minimal $l^{2}$ norm as $\nabla \psi_i$, that is, \[
\nabla \psi_{i}=\underset{\nabla
\psi}{\operatorname{min}}\left\{\|\nabla \psi\|_{l^{2}}, \nabla
\psi \in\left\{(\nabla \psi)_{A i B},(\nabla \psi)_{B i
C},(\nabla \psi)_{C i A}\right\}\right\} \]
According to the idea of the minmod slope limiter, we set $\nabla \psi_i = 0$ when $\psi_i \geq \max{\{\psi_A, \psi_B, \psi_C\}}$ or $\psi_i \leq \min{\{ \psi_A, \psi_B, \psi_C\}}$. Meanwhile, in the process of reconstructing $h$ we must respect the physical constraint that the avalanche depth $h$ be non-negative at each quadrature point; otherwise $\nabla h_i$ is set to zero.
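The slope selection and limiting just described can be sketched compactly; the names and sample values below are illustrative.

```python
import numpy as np

# ENO-type slope selection: among the candidate gradients computed from
# neighbouring cell averages, keep the one of minimal l2 norm, and set
# it to zero when the cell value is a local extremum (minmod-like limit).
def eno_gradient(psi_i, psi_neighbors, candidate_grads):
    if psi_i >= max(psi_neighbors) or psi_i <= min(psi_neighbors):
        return np.zeros(2)              # local extremum: limit to zero
    norms = [np.linalg.norm(g) for g in candidate_grads]
    return candidate_grads[int(np.argmin(norms))]

grads = [np.array([0.3, -0.1]), np.array([1.2, 0.5]), np.array([0.2, 0.4])]
g = eno_gradient(1.0, [0.8, 1.3, 0.9], grads)   # picks the smallest slope
```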
\subsection{Time stepping} The time-marching formula is explicit, and the time step length $\Delta t$ is determined by the Courant-Friedrichs-Lewy (CFL) condition for stability. For a triangular grid, the implementation of the CFL condition usually expresses the time step length as \[
\Delta t=\mathrm{C_r} \cdot \min _{i} \frac{\Delta
x_{i}}{\sqrt{|\bm{\beta}| h_{i}}+\sqrt{u_{i}^{2}+v_{i}^{2}}},
\quad i = 1, 2, \cdots, N \] where $\mathrm{C_r}$ is the Courant number specified in the range $0<\mathrm{C_r} \leq 1$, and $\mathrm{C_r} = 0.5$ is adopted in our
simulations; $|\bm{\beta}| = \sqrt{\beta_x^2 + \beta_y^2}$, $N$ is the total number of cells, and $\Delta x_i$ is the size of $\tau_i$. For a triangular grid, $\Delta x_i$ is usually taken as the minimum barycenter-to-barycenter distance between $\tau_i$ and its adjacent cells.
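The CFL formula above is straightforward to vectorise over all cells; the arrays below are illustrative placeholders, one entry per triangle.

```python
import numpy as np

# CFL time-step restriction: dt = Cr * min_i dx_i / (sqrt(|beta| h_i) + |u_i|)
def cfl_dt(dx, h, u, v, bx, by, Cr=0.5):
    beta = np.hypot(bx, by)                      # |beta|
    wave = np.sqrt(beta*h) + np.sqrt(u*u + v*v)  # fastest local signal speed
    return Cr*np.min(dx/wave)

dx = np.array([0.05, 0.04, 0.06])   # cell sizes (illustrative)
h  = np.array([1.0, 2.0, 0.5])
u  = np.array([0.3, -1.0, 0.2])
v  = np.array([0.1, 0.4, 0.0])
dt = cfl_dt(dx, h, u, v, bx=0.8, by=0.5)
```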
\section{Numerical Examples}
In this section, three numerical examples are performed to verify the proposed scheme, which is implemented in the C++ programming language based on the adaptive finite element package {\tt AFEPack} \cite{Li2006}. The initial triangular grids were generated by easymesh. Because a posteriori error estimation for the $h$-adaptive method on unstructured grids is not the main goal of this work, we simply apply the following local error indicator suggested in \cite{Deng2013} \[
\mathcal{E}_{\tau}=\mathcal{E}_{\tau}(h)+\mathcal{E}_{\tau}(h
u)+\mathcal{E}_{\tau}(h v) \] where \[
\mathcal{E}_{\tau}(\psi)=|\tau| \int_{\partial \tau} \left(
\frac{1}{\sqrt{|\tau|}}\left|\left[[I_{h} \psi]\right]\right| +
\left| \left[[ \nabla I_{h} \psi ]\right] \right| \right)
\mathrm{d\mathbf{l}}. \]
in which $\tau$ is a cell in the mesh and $|\tau|$ is its area, $\left[[ \cdot \right]] $ denotes the jump of a variable across $\partial \tau$, and $I_h \psi$ is the piecewise linear numerical solution for $\psi = h, hu, hv$, respectively. \subsection{Dam break test cases with exact solution} This dam break problem was originally a one-dimensional problem with an analytical solution, involving a constant bed slope and a Coulomb frictional stress; thus its governing equations can be reduced to \cite{Juez2013, Zhai2015} \[
\left\{ \begin{array}{l}
{\dfrac{\partial h}{\partial t} + \dfrac{\partial (hu)}{\partial x}
= 0 } \\ [2mm]
{\dfrac{\partial (hu)}{\partial t} + \dfrac{\partial (hu^2 + \frac{1}{2} g\cos{\zeta}
h^2)}{\partial x} = -gh\cos{\zeta}(\tan{\delta}-\tan{\zeta})}
\end{array}
\right. \] where $g$ is the gravitational constant. The analytical solution of the granular dam break problem is given in \cite{Zhai2015}: \[
(h, \mathcal{U})=\left\{\begin{array}{ll}{\left(h_{0}, 0\right),}
& {\chi< -c_{0} t} \\
{\left(\frac{h_{0}}{9}\left(2-\frac{\chi}{c_{0} t}\right)^{2},
\frac{2}{3}\left(\frac{\chi}{t}+c_{0}\right)\right),} & {-c_{0} t
\leq \chi \leq 2 c_{0} t} \\ {(0,0),} & {\chi>2 c_{0}
t}\end{array}\right. \] where $c_0 = \sqrt{g h_0 \cos{\zeta}}$ and \[
\chi=x+\frac{1}{2} g \cos \zeta(\tan \delta-\tan \zeta) t^{2}, \] \[
\mathcal{U}=u+g \cos \zeta(\tan \delta-\tan \zeta) t. \] In order to check the performance of the numerical scheme proposed in this work, we modified it to an equivalent two-dimensional problem as in Ref. \cite{Zhai2015}. The computational domain is $[-12.8,12.8] \times[-1.6,1.6]$, and the initial conditions are \[
\left\{\begin{array}{ll}{(h, u, v)=(10,0,0),} & {x<0} \\
{(h, u, v)=(0,0,0),} & {x>0}\end{array}\right. \] and $\zeta=40^{\circ}, \delta = 24.5^{\circ} $.
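The analytical solution quoted above is easy to evaluate pointwise. The sketch below follows the transformed variables $(\chi, \mathcal{U})$ with the test-case angles; $g$ is taken as $9.81$ here purely for illustration.

```python
import numpy as np

# Parameters of the dam-break test case above (g assumed for illustration)
g, h0 = 9.81, 10.0
zeta, delta = np.radians(40.0), np.radians(24.5)
c0 = np.sqrt(g*h0*np.cos(zeta))
a = g*np.cos(zeta)*(np.tan(delta) - np.tan(zeta))  # net slope/friction term

def exact(x, t):
    """Depth h and physical velocity u at position x and time t > 0."""
    chi = x + 0.5*a*t*t                 # transformed coordinate
    if chi < -c0*t:                     # undisturbed reservoir
        h, U = h0, 0.0
    elif chi <= 2.0*c0*t:               # rarefaction fan
        h = h0/9.0*(2.0 - chi/(c0*t))**2
        U = 2.0/3.0*(chi/t + c0)
    else:                               # dry region ahead of the front
        h, U = 0.0, 0.0
    return h, U - a*t                   # u = U - g cos(zeta)(tan d - tan z) t

h_mid, u_mid = exact(0.0, 0.3)          # a point inside the fan
```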
Fig. \ref{fig::1D} plots the computed avalanche depth in comparison with the exact solutions at $t=0.1, 0.2, 0.3, 0.4$ and $0.5$. It can be seen that our numerical solutions agree with the exact solutions very well, which verifies the robustness and validity of our numerical algorithm.
\begin{figure}
\caption{Exact and numerical solutions for the 1D dam break problem at $t=0.0,0.1,0.2,0.3,0.4$ and $0.5$ }
\label{fig::1D}
\end{figure}
\subsection{Avalanche flows slide down an inclined plane and merge
continuously into a horizontal plane} In this section, we simulate an avalanche of finite granular mass sliding down an inclined plane and merging continuously into a horizontal plane, which was considered in \cite{Wang2004} to verify their model's capability.
In order to compare with the results from Refs. \cite{Wang2004, Zhai2015}, all computational parameters are set to the same values. That is, the computational domain is $[0,30]\times[-7, 7]$, and $\epsilon = \lambda = 1$ in our simulations. The inclined section lies in $x \in [0, 17.5]$ and the horizontal region in $x \in [21.5, 30]$, with a smooth transition zone in $x \in [17.5, 21.5]$. The inclination angle is given by \[
\zeta(x)=\left\{\begin{array}{ll}{\zeta_{0},} & {0 \leq x \leq
17.5} \\ {\zeta_{0}\left(1-\frac{x-17.5}{4}\right),} &
{17.5<x<21.5} \\ {0^{\circ},} & {x \geq 21.5} \end{array}\right. \] where $\zeta_{0} = 35^{\circ}$ and $\delta = \phi = 30^{\circ}$. The granular mass is suddenly released at $t=0$ from the hemispherical shell with an initial radius of $r_0 = 1.85$ in dimensionless length units. The center of the cap is initially located at $(x_0, y_0)=(4, 0)$.
Fig. \ref{fig::ex2} shows the avalanche depth contours of the fluid at eight time slices as the avalanche slides on the inclined plane into the horizontal run-out zone. From Fig. \ref{fig::ex2} it can be seen that the avalanche starts to flow along the $x$ and $y$ directions due to gravity, and it flows obviously faster along the downslope direction. At $t=9$ part of the granular mass reaches the horizontal run-out zone and begins to deposit because the basal friction is sufficiently large, while the tail is still accelerating. Here, some small numerical oscillations are visible in the solutions, which were also found in Refs. \cite{Wang2004, Zhai2015} with orthogonal quadrilateral meshes. At $t=12$ a shock wave develops around the end of the transition zone at $x = 21.5$ and a surge wave is created. From $t=12$ to $t=24$, most of the granular mass accumulates at the end of the transition zone and the front of the horizontal zone; meanwhile, an obvious backward surge can be found. At $t=24$ the final deposition of the avalanche is nearly formed and an expansion fan is observed.
On the whole, our numerical results agree with those available from Refs. \cite{Wang2004, Zhai2015}. Fig. \ref{fig::ex2mesh} presents the adaptive meshes for the granular avalanche flowing down the inclined plane and merging continuously into the horizontal plane at $t=6, 12, 18, 24$. It can be seen that the triangular mesh is refined at the free surface where the gradients vary significantly.
\begin{figure}
\caption{Thickness contours of the avalanche at eight different
dimensionless times $t=3, 6, 9, 12, 15, 18, 21, 24$ for the flow
slides down the inclined plane and merging continuously into a
horizontal plane. The transition zone from the inclined plane to
the horizontal plane lies between the two green dashed lines. }
\label{fig::ex2}
\end{figure}
\begin{figure}
\caption{The triangular meshes at times $t=6, 12, 18, 24$ for the
flow slides down the inclined plane and merging continuously into
a horizontal plane.}
\label{fig::ex2mesh}
\end{figure}
\subsection{Granular flow past an obstacle} The study of granular avalanches around obstacles is of particular interest because some important phenomena such as shock waves, expansion fans and granular vacua can be observed \cite{Cui2014}. Meanwhile, setting obstacles is an effective means to control the dynamic process of granular avalanche flow. In this section, we present a simulation of a finite granular mass flowing down an inclined plane and merging into a horizontal run-out zone, with a conical obstacle located on the inclined plane. In addition, in order to compare the cases with and without an obstacle, all computational parameters (such as the computational domain, the inclination angle $\zeta(x)$, $\phi$, $\delta$) are the same as in the above example. A conical obstacle is incorporated into the model; the centre of the obstacle is located at $(13,0)$, with a dimensionless bottom-circle radius of 1 and a dimensionless height $H=1$.
Fig. \ref{fig::ex3} shows a series of snapshots of the thickness contours as the avalanche slides down the inclined plane with a conical obstacle in the avalanche track. It can be seen that the granular flow is the same as that without the obstacle before it reaches the obstacle (e.g. for $t \leq 3$). At $t=6$ the granular avalanche has reached the obstacle; some of the material climbs the obstacle, while the rest divides into two parts and flows around the obstacle downwards. At this moment a stationary zone appears in front of the obstacle, where the avalanche velocity becomes zero, accompanied by a great increase in the avalanche thickness. Meanwhile, behind the obstacle a so-called granular vacuum is formed, which can protect the zone directly behind it from disasters. At $t=9$ the granular mass continues to accelerate until it reaches the horizontal run-out zone, where the basal friction is sufficiently large; this brings the front to rest, while part of the tail accelerates further. At $t=12$ shock waves form on the horizontal run-out zone as the front of the granular mass deposits and the tail still slides with acceleration. From $t=15$ to $t=18$, most of the granular mass has slid down either side of the obstacle, and two expansion fans gradually form on the transition zone. For $t= 21, 24$ two separate and symmetrical deposition masses are formed at the end of the transition zone and the front of the horizontal zone. From $t = 6$ to $t = 24$ there is always a flow-free region directly behind the obstacle, which can be regarded as the protected area in practical applications of avalanche protection. Thus, the obstacle does change the path of the granular avalanche flow and affects its final deposition pattern. The numerical results show that our numerical model is able to capture key qualitative features such as shock waves and the flow-free region. As in the previous example, Fig. 
\ref{fig::ex3mesh} also shows the adaptive mesh moving for the granular avalanche flows down with a obstacle at $ t=6, 12, 18, 24$. \begin{figure}
\caption{Thickness contours of the avalanche at eight different
dimensionless times $t=3, 6, 9, 12, 15, 18, 21, 24$ for a
granular flow past a circular cone located on the inclined
plane and indicated by red solid lines with a maximum
dimensionless height of $H=1$. The transition zone from the
inclined plane to the horizontal plane lies between the two
green dashed lines. }
\label{fig::ex3}
\end{figure}
\begin{figure}
\caption{The triangular meshes at times $t=6, 12, 18, 24$ for the
flow sliding down the inclined plane with a conical obstacle.}
\label{fig::ex3mesh}
\end{figure}
\section{Conclusions} This paper is an attempt to solve the Savage-Hutter equations on unstructured grids. Keeping in mind that the model is not rotationally invariant, it is more than encouraging that our numerical scheme attains numerical results in good agreement with the reference solutions. This indicates that the method is applicable to problems such as granular avalanche flows. The underlying reason why the scheme works so smoothly is still under investigation.
\end{document} |
\begin{document}
\title[Nearest points and DC functions]{Nearest points and delta convex functions in Banach spaces}
\author{Jonathan M. Borwein \and Ohad Giladi} \address{Centre for Computer-assisted Research Mathematics and its Applications (CARMA), School of Mathematical and Physical Sciences, University of Newcastle, Callaghan, NSW 2308, Australia} \email{[email protected], [email protected]}
\begin{abstract}
Given a closed set $C$ in a Banach space $(X, \|\cdot\|)$, a point $x\in X$ is said to have a nearest point in $C$ if there exists $z\in C$ such that $d_C(x) =\|x-z\|$, where $d_C(x)$ is the distance of $x$ from $C$. We briefly survey the problem of determining how large the set of points in $X$ which have nearest points in $C$ is. We then discuss the topic of delta-convex functions and how it relates to finding nearest points. \end{abstract}
\subjclass[2010]{46B10. 41A29}
\maketitle
\section{Nearest points in Banach spaces}
\subsection{Background}\label{sec intro}
Let $(X, \|\cdot\|)$ be a real Banach space, and let $C\subseteq X$ be a non-empty closed set. Given $x\in X$, its distance from $C$ is given by
\[d_C(x) = \inf_{y\in C} \|x-y\|.\]
If there exists $z\in C$ with $d_C(x) = \|x-z\|$, we say that $x$ has a \emph{nearest point} in $C$. Let also \[N(C) = \big\{x\in X: x \text{ has a nearest point in $C$ }\big\}.\] One can then ask questions about the structure of the set $N(C)$. This question has been studied in \cite{Ste63, Lau78, Kon80, Zaj83, BF89, DMP91, Dud04, RZ11, RZ12} to name just a few. More specifically, the following questions are at the heart of this note:
\begin{center} \emph{Given a nonempty closed set $C\subseteq X$, how large is the set $N(C)$? When is it non-empty?} \end{center}
One way to make this precise is to consider sets which are large in the set-theoretic sense, such as dense $G_{\delta}$ sets. We begin with a few definitions.
\begin{defin} If $N(C) = X$, i.e., every point in $X$ has a nearest point in $C$, then $C$ is said to be proximinal. If $N(C)$ contains a dense $G_{\delta}$ set, then $C$ is said to be almost proximinal. \end{defin}
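These definitions can be made concrete in $\mathbb R^2$. The sketch below (our own illustration, not from the cited sources; a finite set stands in for $C$, so nearest points exist by compactness) computes $d_C(x)$ and a nearest point.

```python
import math

# A finite set standing in for the closed set C (illustrative example)
C = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]

def d_C(x):
    """Distance from x to C: the infimum in the definition, here a minimum."""
    return min(math.dist(x, c) for c in C)

def nearest_point(x):
    """A nearest point of x in C; it exists since C is finite, hence compact."""
    return min(C, key=lambda c: math.dist(x, c))

x = (0.5, -1.0)
print(d_C(x))            # sqrt(1.25), attained
print(nearest_point(x))  # (0.0, 0.0)
```

Here $N(C)$ is all of $\mathbb R^2$; the subtlety surveyed below only appears in infinite dimensions.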
In passing we recall that if every point in $X$ has a unique nearest point in $C$, then $C$ is said to be a \emph{Chebyshev set}. It has been conjectured for over half a century that in Hilbert space Chebyshev sets are necessarily convex, but this has only been proven for weakly closed sets \cite{BV10}. See also \cite{FM15} for a recent survey on the topic.
For example, closed convex sets in reflexive spaces are proximinal, as are closed sets in finite dimensional spaces. See \cite{BF89}. One can also consider stronger notions of ``large" sets; see Section \ref{sec porous}. First, we need the following definition.
\begin{defin}
A Banach space is said to be a (sequentially) Kadec space if for each sequence $\{x_n\}$ that converges weakly to $x$ with $\lim \|x_n\| = \|x\|$, $\{x_n\}$ converges to $x$ in norm, i.e.,
\[\lim_{n\to \infty} \|x-x_n\| =0.\] \end{defin}
With the above definitions in hand, the following result holds.
\begin{thm}[Lau \cite{Lau78}, Borwein-Fitzpatrick \cite{BF89}]\label{thm Lau BF} If $X$ is a reflexive Kadec space and $C\subseteq X$ is closed, then $C$ is almost proximinal. \end{thm}
The assumptions on $X$ are in fact necessary.
\begin{thm}[Konjagin \cite{Kon80}]\label{thm Kon} If $X$ is not both Kadec and reflexive, then there exist $C\subseteq X$ closed and $U\subseteq X\setminus C$ open such that no $x\in U$ has a nearest point in $C$. \end{thm}
It is known that under stronger assumptions on $X$ one can obtain stronger results on the set $N(C)$. See Section \ref{sec porous}.
\subsection{Fr\'echet sub-differentiability and nearest points}\label{sec diff}
We begin with a definition.
\begin{defin} Assume that $f: X\to \mathbb R$ is a real-valued function. Then $f$ is said to be Fr\'echet sub-differentiable at $x\in X$ if there exists $x^*\in X^*$ such that \begin{align}\label{cond subdiff}
\liminf_{y\to 0}\frac{f(x+y)-f(x)-x^*(y)}{\|y\|} \ge 0. \end{align} The set of functionals $x^*\in X^*$ satisfying \eqref{cond subdiff} is denoted by $\partial f(x)$. \end{defin}
Sub-differentials have many applications in approximation theory. See for example \cite{BF89, BZ05, BL06, BV10, Pen13}.
One of the connections between sub-differentiability and the nearest point problem was studied in \cite{BF89}. Given $C\subseteq X$ closed, the following modification of a construction of \cite{Lau78} was introduced. \begin{align*} L_n(C) = \Big\{x\in X\setminus C : \exists x^*\in \mathbb S_{X^*} \text{ s.t. }\sup_{\delta>0}~\inf_{z\in C\cap B(x,d_C(x)+\delta)}~x^*(x-z) > \big(1-2^{-n}\big)d_C(x)\Big\}, \end{align*} where $\mathbb S_{X^*}$ denotes the unit sphere of $X^*$. Also, let \begin{align*} L(C) = \bigcap_{n=1}^{\infty}L_n(C). \end{align*} The following is known. \begin{prop}[Borwein-Fitzpatrick \cite{BF89}]\label{prop open} For every $n\in \mathbb N$, $L_n(C)$ is open. In particular, $L(C)$ is $G_{\delta}$. \end{prop} Finally, let \begin{align*} \Omega(C) = & \Big\{ x\in X\setminus C : \exists x^*\in \mathbb S_{X^*}, \text{ s.t. } \forall \epsilon >0, \exists \delta>0, \\ & ~~\quad\inf_{z\in C\cap B(x,d_C(x)+\delta)}x^*(x-z) > \big(1-\epsilon\big)d_C(x)\Big\}. \end{align*}
While $L(C)$ is $G_{\delta}$ by Proposition \ref{prop open}, under the assumption that $X$ is reflexive, the following is known.
\begin{prop}[Borwein-Fitzpatrick \cite{BF89}] If $X$ is reflexive then $\Omega(C) = L(C)$. In particular, $\Omega(C)$ is $G_{\delta}$. \end{prop}
The connection to sub-differentiability is given in the following proposition.
\begin{prop}[Borwein-Fitzpatrick \cite{BF89}] If $x\in X\setminus C$ and $\partial d_C(x) \neq \emptyset$, then $x\in \Omega(C)$. \end{prop}
Also, the following result is known.
\begin{thm}[Borwein-Preiss \cite{BP87}]\label{thm var} If $f$ is lower semicontinuous on a reflexive Banach space, then $f$ is Fr\'echet sub-differentiable on a dense set. \end{thm}
In fact, Theorem \ref{thm var} holds under a weaker assumption. See \cite{BP87, BF89}. Since the distance function is lower semicontinuous, it follows that it is sub-differentiable on a dense subset, and therefore, by the above propositions, $\Omega(C)$ is a dense $G_{\delta}$ set. Thus, in order to prove Theorem \ref{thm Lau BF}, it only remains to show that every $x\in \Omega(C)$ has a nearest point in $C$. Indeed, if $\{z_n\}\subseteq C$ is a minimizing sequence, then by reflexivity and by passing to a subsequence, we may assume that $\{z_n\}$ converges weakly to some $z\in X$. By the definition of $\Omega(C)$, there exists $x^* \in \mathbb S_{X^*}$ such that
\[\|x-z\| \ge x^*(x-z) = \lim_{n\to \infty} x^*(x-z_n) \ge d_C(x) = \lim_{n\to \infty}\|x-z_n\|.\] On the other hand, by weak lower semicontinuity of the norm,
\[\lim_{n\to \infty} \|x-z_n\| \ge \|x-z\|,\]
and so $\|x-z\| = \lim\|x-z_n\|$. Since $\{z_n\}$ converges weakly to $z$, the Kadec property implies that $\{z_n\}$ in fact converges to $z$ in norm. As $C$ is closed, $z\in C$, and thus $z$ is a nearest point. This completes the proof of Theorem \ref{thm Lau BF}.
This scheme of proof from \cite{BF89} shows that differentiation arguments can be used to prove that $N(C)$ is large.
\subsection{Nearest points in non-Kadec spaces}\label{sec non-Kadec} It was previously mentioned that closed convex sets in reflexive spaces are proximinal. It is also known that non-empty ``Swiss cheese" sets (sets whose complement is a union of mutually disjoint open convex sets) in reflexive spaces are almost proximinal \cite{BF89}. These two examples show that for some classes of closed sets, the Kadec property can be removed. Moreover, one can consider another, weaker, way to ``measure" whether a set $C\subseteq X$ has ``many" nearest points: ask whether the set of nearest points in $C$ to points in $X\setminus C$ is dense in the boundary of $C$. Note that if $C$ is almost proximinal, then nearest points are dense in the boundary. The converse, however, is not true. In \cite{BF89} an example of a non-Kadec reflexive space was constructed in which, for every closed set, the set of nearest points is dense in its boundary. The following general question is still open.
\begin{qst}
Let $(X, \|\cdot\|)$ be a reflexive Banach space and $C\subseteq X$ closed. Is the set of nearest points in $C$ to points in $X\setminus C$ dense in its boundary? \end{qst}
Relatedly, if the set $C$ is norm closed and bounded in a space with the \emph{Radon-Nikodym property}, as is the case for reflexive spaces, then $N(C)$ is nonempty and is large enough so that $\overline{\rm conv}\, C = \overline{\rm conv}\, N(C)$ \cite{BF89}.
\subsection{Porosity and nearest points}\label{sec porous}
As was mentioned in subsection \ref{sec diff}, one can consider stronger notions of ``large" sets. One is the following notion.
\begin{defin} A set $S\subseteq X$ is said to be porous if there exists $c\in (0,1)$ such that for every $x\in X$ and every $\epsilon>0$, there is a $y \in B(0,\epsilon)\setminus \{0\}$ such that
\[ B(x+y, c\|y\|) \cap S = \emptyset.\] A set is said to be $\sigma$-porous if it is a countable union of porous sets. Here and in what follows, $B(x,r)$ denotes the closed ball around $x$ with radius $r$. \end{defin}
See \cite{Zaj05, LPT12} for a more detailed discussion of porous sets. It is known that every $\sigma$-porous set is of the first category, i.e., a countable union of nowhere dense sets. Moreover, it is known that the class of $\sigma$-porous sets is a proper sub-class of the class of first category sets. When $X=\mathbb R^n$, one can show that every $\sigma$-porous set has Lebesgue measure zero. This is not the case for every first category set: $\mathbb R$ can be written as a disjoint union of a set of the first category and a set of Lebesgue measure zero. Hence, the notion of porosity gives a stronger notion of large sets: every set whose complement is $\sigma$-porous contains a dense $G_{\delta}$ set.
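As a concrete illustration (our own, not taken from the cited sources), the line $S=\mathbb R\times\{0\}$ is porous in $\mathbb R^2$ with constant $c=1/2$: given $x$ and $\epsilon$, the shift $y=(0,\pm\epsilon/2)$, with the sign chosen to move away from $S$, produces a ball $B(x+y, \|y\|/2)$ disjoint from $S$. A small numerical check of this witness:

```python
import math, random

def porosity_witness(x, eps):
    """Return y with 0 < ||y|| <= eps such that B(x + y, ||y||/2) misses S = the x-axis."""
    sign = 1.0 if x[1] >= 0 else -1.0   # move further away from the axis
    return (0.0, sign * eps / 2)

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    eps = random.uniform(1e-6, 1.0)
    y = porosity_witness(x, eps)
    centre = (x[0] + y[0], x[1] + y[1])
    r = math.hypot(*y) / 2
    # the ball misses S iff the centre's distance to the axis exceeds the radius
    assert abs(centre[1]) > r
```

The same witness works verbatim for any hyperplane in $\mathbb R^n$ after an isometry.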
A Banach space $(X, \|\cdot\|)$ is said to be \emph{uniformly convex} if the function \begin{align}\label{def uni conv}
\delta(\epsilon) = \inf\left\{ 1-\left\|\frac{x+y}2\right\| ~ : ~ x, y\in \mathbb S_X, \|x-y\| \ge \epsilon \right\}, \end{align} is strictly positive whenever $\epsilon>0$. Here $\mathbb S_X$ denotes the unit sphere of $X$. In \cite{DMP91} the following was shown.
\begin{thm}[De Blasi-Myjak-Papini \cite{DMP91}]\label{thm DMP} If $X$ is uniformly convex, then $N(C)$ has a $\sigma$-porous complement. \end{thm}
In fact, \cite{DMP91} proved a stronger result, namely that for every $x$ outside a $\sigma$-porous set, the minimization problem is \emph{well posed}, i.e., there is a unique minimizer to which every minimizing sequence converges. See also \cite{FP91, RZ11, RZ12} for closely related results in this direction.
The proof of Theorem \ref{thm DMP} builds on ideas developed in \cite{Ste63}. However, it would be interesting to know whether one could use differentiation arguments as in Section \ref{sec diff}. This raises the following question:
\begin{qst} Can differentiation arguments be used to give an alternative proof of Theorem \ref{thm DMP}? \end{qst}
More specifically, if one can show that $\partial d_C \neq \emptyset$ outside a $\sigma$-porous set, then by the arguments presented in Section \ref{sec diff}, it would follow that $N(C)$ has a $\sigma$-porous complement. Next, we mention two important results regarding differentiation in Banach spaces.
\begin{thm}[Preiss-Zaj\'i\v{c}ek \cite{PZ84}]\label{thm PZ} If $X$ has a separable dual and $f:X\to \mathbb R$ is continuous and convex, then $f$ is Fr\'echet differentiable outside a $\sigma$-porous set. \end{thm}
See also \cite[Sec. 3.3]{LPT12}. Theorem \ref{thm PZ} implies that if, for example, $d_C$ is a linear combination of convex functions (see more on this in Section \ref{sec DC}), then $N(C)$ has a $\sigma$-porous complement. Also, we have the following.
\begin{thm}[C\'uth-Rmoutil \cite{CR13}]\label{thm CR} If $X$ has a separable dual and $f:X\to \mathbb R$ is Lipschitz, then the set of points where $f$ is Fr\'echet sub-differentiable but not differentiable is $\sigma$-porous. \end{thm}
Since $d_C$ is 1-Lipschitz, the questions of seeking points of sub-differentiability or points of differentiability are similar. Theorem \ref{thm PZ} and Theorem \ref{thm CR} remain true if we consider $f:A\to \mathbb R$ where $A\subseteq X$ is open and convex.
\section{DC functions and DC sets}\label{sec DC}
\subsection{Background}
\begin{defin} A function $f:X\to \mathbb R$ is said to be delta-convex, or DC, if it can be written as a difference of two convex functions on $X$. \end{defin}
This notion was introduced in \cite{Har59} and was later studied by many authors. See for example \cite{KM90, Cep98, Dud01, VZ01, DVZ03, BZ05, Pav05, BB11}. In particular, \cite{BB11} gives a good introduction to this topic. We will discuss here only the parts that are closely related to the nearest point problem.
The following is an important proposition. See for example \cite{VZ89, HPT00} for a proof.
\begin{prop}\label{prop select} If $f_1,\dots,f_k$ are DC functions and $f:X\to \mathbb R$ is continuous with $f(x)\in \big\{f_1(x),\dots,f_k(x)\big\}$ for every $x\in X$, then $f$ is also DC. \end{prop}
The result is true if we replace the domain $X$ by any convex subset.
\subsection{DC functions and nearest points}
Showing that a given function is in fact DC is a powerful tool, as it allows us to use many known results about convex and DC functions. For example, if a function is DC on a Banach space with a separable dual, then by Theorem \ref{thm PZ}, it is differentiable outside a $\sigma$-porous set. In the context of the nearest point problem, if we know that the distance function is DC, then using the scheme presented in Section \ref{sec diff}, it would follow that $N(C)$ has a $\sigma$-porous complement. The same holds if we have a difference of a convex function and, say, a smooth function.
The simplest and best known example is when $(X,\|\cdot\|)$ is a Hilbert space, where we have the following.
\begin{align*}
d_C^2(x) & = \inf_{y\in C}\|x-y\|^2
\\ & = \inf_{y\in C}\Big[\|x\|^2-2\langle x,y\rangle +\|y\|^2\Big]
\\ & = \|x\|^2 - 2\sup_{y\in C}\Big[\langle x,y\rangle - \|y\|^2/2\Big], \end{align*}
and the function $x\mapsto \sup_{y\in C}\Big[\langle x,y\rangle - \|y\|^2/2\Big]$ is convex as a supremum of affine functions. Hence $d_C^2$ is DC on $X$. Moreover, in a Hilbert space we have the following result (see \cite[Sec. 5.3]{BZ05}).
\begin{thm}\label{thm local DC}
If $(X, \|\cdot\|)$ is a Hilbert space, then $d_C$ is locally DC on $X\setminus C$. \end{thm}
\begin{proof}
Fix $y\in C$ and $x_0\in X\setminus C$. It can be shown that if we let $f_y(x) = \|x-y\|$, then $f_y$ satisfies \begin{align*}
\big\|f_y'(x_1)-f_y'(x_2)\big\|_{X^*} \le L_{x_0}\|x_1-x_2\|, ~~ x_1,x_2 \in B_{x_0}, \end{align*} where $L_{x_0} = \frac 4 {d_C(x_0)}$ and $B_{x_0} = B\Big(x_0, \frac 1 2 d_C(x_0)\Big)$. In particular, \begin{align}\label{lip prop} \big(f_y'(x+t_2v)-f_y'(x+t_1v)\big)(v) \le L_{x_0}(t_2-t_1), ~~ v\in \mathbb S_X, t_2> t_1 \ge 0, \end{align}
whenever $x+t_1v, x+t_2v \in B_{x_0}$. Next, the convex function $F(x) = \frac {L_{x_0}} 2 \|x\|^2$ satisfies \begin{align}\label{anti lip hilbert}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge L_{x_0}\|x_1-x_2\|^2, ~~ \forall x_1,x_2\in X. \end{align} In particular \begin{align}\label{anti lip} \big(F'(x+t_2v)-F'(x+t_1v)\big)(v) \ge L_{x_0}(t_2-t_1), ~~ v\in \mathbb S_X, ~t_2>t_1\ge 0. \end{align} Altogether, if $g_y(x) = F(x)-f_y(x)$, then \begin{align*} \big(g_y'(x+t_2v)-g_y'(x+t_1v)\big)(v) \stackrel{\eqref{lip prop}\wedge \eqref{anti lip}}{\ge} 0, ~~ v\in \mathbb S_X, ~t_2>t_1\ge 0, \end{align*} whenever $x+t_1v, x+t_2v \in B_{x_0}$. This implies that $g_y$ is convex on $B_{x_0}$. It then follows that \begin{align*}
d_C(x) & = \frac {L_{x_0}} 2 \|x\|^2 -\sup_{y\in C}\Bigg[~\frac {L_{x_0}} 2 \|x\|^2-\|x-y\|\Bigg] = F(x) - \sup_{y\in C}g_y(x) \end{align*} is DC on $B_{x_0}$. \end{proof}
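The Hilbert-space decomposition of $d_C^2$ displayed before Theorem \ref{thm local DC} is an exact algebraic identity, and is easy to check numerically on a finite point cloud (illustrative Python with our own helper names; the finite set stands in for $C$):

```python
import random

def dot(u, v):
    """Euclidean inner product in R^2."""
    return sum(a * b for a, b in zip(u, v))

def dist2(u, v):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

random.seed(0)
# a finite point cloud standing in for the closed set C
C = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
x = (0.7, -0.3)

d2 = min(dist2(x, y) for y in C)                  # d_C^2(x)
conv = max(dot(x, y) - dot(y, y) / 2 for y in C)  # sup of affine functions of x: convex
# the DC decomposition: d_C^2(x) = ||x||^2 - 2 * conv
assert abs(d2 - (dot(x, x) - 2 * conv)) < 1e-12
```

The identity holds up to floating-point rounding for any choice of $C$ and $x$, since it only rearranges $\|x-y\|^2 = \|x\|^2 - 2\langle x,y\rangle + \|y\|^2$.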
\begin{remark} Even in $\mathbb R^2$ there are sets for which $d_C$ is not DC everywhere (not even locally DC), as was shown in \cite{BB11}. Thus, the most one could hope for is a locally DC function on $X\setminus C$. \end{remark}
Given $q\in (0,1]$, a norm $\|\cdot\|$ is said to be $q$\emph{-H\"older smooth} at a point $x\in X$ if there exists a constant $K_x\in (0,\infty)$ such that for every $y\in \mathbb S_X$ and every $\tau>0$, \begin{align*}
\frac{\|x+\tau y\|}{2} + \frac{\|x-\tau y\|}{2} \le 1+K_x\tau^{1+q}. \end{align*}
If $q=1$ then $(X,\|\cdot\|)$ is said to be \emph{Lipschitz smooth} at $x$. The spaces $L_p$, $p \ge 2$ are known to be Lipschitz smooth, and in general $L_p$, $p>1$, is $s$-H\"older smooth with $s = \min\{1,p-1\}$.
A Banach space is said to be $p$\emph{-uniformly convex} if for every $x,y\in \mathbb S_X$, \begin{align*}
1-\left\|\frac{x+y}{2}\right\| \ge L\|x-y\|^p. \end{align*} Note that this is similar to assuming that $\delta(\epsilon) = L\epsilon^p$ in \eqref{def uni conv}. The spaces $L_p$, $p>1$, are $r$-uniformly convex with $r = \max\{2,p\}$.
One could ask whether the scheme of proof of Theorem \ref{thm local DC} can be used in a more general setting.
\begin{prop}\label{prop Hilbert}
Let $(X, \|\cdot\|)$ be a Banach space, $C\subseteq X$ a closed set, and fix $x_0\in X\setminus C$ and $y\in C$. Assume that there exists $r_0$ such that $f_y(x) = \|x-y\|$ has a Lipschitz derivative on $B(x_0,r_0)$: \begin{align}\label{lip of der}
\big\|f_y'(x_1)-f_y'(x_2)\big\| \le L_{x_0}\|x_1-x_2\|, ~~ x_1,x_2\in B(x_0,r_0). \end{align} Then the norm is Lipschitz smooth on $-y + B(x_0,r_0) = B(x_0-y, r_0)$. If in addition there exists a function $F: X \to \mathbb R$ satisfying \begin{align}\label{strong conv}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge L_{x_0}\|x_1-x_2\|^2, ~~ \forall x_1,x_2\in B(x_0,r_0), \end{align}
then $(X,\|\cdot\|)$ admits an equivalent norm which is 2-uniformly convex. In particular, if $X=L_p$ then $p=2$. \end{prop}
\begin{proof} To prove the first assertion note that \eqref{lip of der} is equivalent to \begin{align*}
\|x-y+h\|+\|x-y-h\|-2\|x-y\| \le L_{x_0}\|h\|^2, ~~ x\in B_{x_0}. \end{align*} See for example \cite[Prop. 2.1]{Fab85}.
To prove the second assertion, note that a function that satisfies \eqref{strong conv} is also known as \emph{strongly convex}: one can show that \eqref{strong conv} is in fact equivalent to the condition \begin{align*}
f\left(\frac{x_1+x_2}{2}\right) \le \frac 1 2f(x_1)+\frac 1 2 f(x_2) - C\|x_1-x_2\|^2, \end{align*} for some constant $C$. See for example \cite[App. A]{SS07}. This implies that there exists an equivalent norm which is 2-uniformly convex (\cite[Thm 5.4.3]{BV10}).
\end{proof}
\begin{remark} From \cite{Ara88} it is known that if $F:X\to \mathbb R$ satisfies \begin{align*}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge L\|x_1-x_2\|^2,
for \emph{all} $x_1,x_2\in X$, and also that $F$ is twice (Fr\'echet) differentiable at one point, then $(X,\|\cdot\|)$ is isomorphic to a Hilbert space. \end{remark}
\begin{remark} If we replace the Lipschitz condition by a H\"older condition \begin{align*}
\big\|f_y'(x_1)-f_y'(x_2)\big\| \le \|x_1-x_2\|^{\beta}, ~~ \beta <1, \end{align*} then in order to follow the same scheme of proof of Theorem \ref{thm local DC}, instead of \eqref{anti lip hilbert}, we would need a function $F$ satisfying \begin{align*}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge \|x_1-x_2\|^{1+\beta}, ~~ x_1,x_2\in B_{x_0}. \end{align*} which implies \begin{align}\label{anti holder}
\big\|F'(x_1)-F'(x_2)\big\| \ge \|x_1-x_2\|^{\beta}, ~~ x_1,x_2\in B_{x_0}. \end{align} If $G= (F')^{-1}$, then we get \begin{align*}
\|Gx_1-Gx_2\| \le \|x_1-x_2\|^{1/\beta}, ~~ x_1,x_2 \in F'(B_{x_0}), \end{align*} which can occur only if $G$ is a constant. Hence \eqref{anti holder} cannot hold and the scheme of proof cannot be used if we replace the Lipschitz condition by a H\"older condition. \end{remark}
\subsection{DC sets, DC representable sets}
\begin{defin} A set $C$ is said to be a DC set if $C=A\setminus B$ where $A,B$ are convex. \end{defin} We can also consider the following class of sets. \begin{defin} A set $C\subseteq X$ is said to be DC representable if there exists a DC function $f:X\to \mathbb R$ such that $C = \big\{x\in X: f(x) \le 0\big\}$. \end{defin} Note that if $C = A\setminus B$ is a DC set, then we can write $C = \Big\{\mathbbm 1_{B}-\mathbbm 1_{A}+1/2 \le 0\Big\}$, where $\mathbbm 1_A$, $\mathbbm 1_B$ are the indicator functions of $A,B$, respectively. Therefore, $C$ is DC representable. Moreover, we have the following.
\begin{thm}[Thach \cite{Th93}] Assume that $X$ and $Y$ are two Banach spaces, and that $T:Y\to X$ is a surjective map with $\mathrm{ker}(T) \neq \{0\}$. Then for any set $M\subseteq X$ there exists a DC representable set $D\subseteq Y$ such that $M= T(D)$. \end{thm}
Also, the following is known. See \cite{HPT00}.
\begin{prop} If $C$ is a DC representable set, then there exist $A,B \subseteq X\oplus \mathbb R$ convex, such that $x\in C \iff (x,x') \in A\setminus B$. \end{prop}
\begin{proof} Write $C = \big\{x\in X : f_1(x)-f_2(x)\le 0\big\}$ with $f_1, f_2$ convex, and define $g_1(x,x') = f_1(x)-x'$, $g_2(x,x') = f_2(x)-x'$. Let $A = \big\{(x,x') : g_1(x,x')\le 0\big\}$, $B = \big\{(x,x'): g_2(x,x') \le 0\big\}$. Then $x\in C \iff (x,x') \in A\setminus B$. \end{proof}
In particular, every DC representable set in $X$ is a projection of a DC set in $X\oplus \mathbb R$. The following theorem was proved in \cite{TK96}.
\begin{thm}[Thach-Konno \cite{TK96}] If $X$ is a reflexive Banach space and $C\subseteq X$ is closed, then $C$ is DC representable. \end{thm}
This raises the following question.
\begin{qst}\label{qst DC} Is it true that for some classes of spaces, e.g. uniformly convex spaces, there exists $\alpha>0$ such that $d_C^{\alpha}$ is locally DC on $X\setminus C$ whenever $C$ is a DC representable set? \end{qst} If the answer to Question \ref{qst DC} is positive, then by the discussion in subsection \ref{sec diff} we could conclude that $N(C)$ has a $\sigma$-porous complement, thus giving an alternative proof of Theorem \ref{thm DMP}. One could also ask Question \ref{qst DC} for DC sets instead of DC representable sets.
To end this note, we discuss some simple cases where DC and DC representable sets can be used to study the nearest point problem.
\begin{prop} Assume that $C = X \setminus \bigcup_{a\in \Lambda}U_a$, where each $U_a$ is an open convex set. Then $d_C$ is locally DC (in fact, locally concave) on $X\setminus C$. \end{prop}
\begin{proof} First, it is shown in \cite[Sec. 3]{BF89} that if $a\in \Lambda$, then $d_{X\setminus U_a}$ is concave on $U_a$. Next, it is also shown in \cite{BF89} that if $x\in U_a$ then $d_{X\setminus U_a}(x) = d_C(x)$. In particular, $d_C$ is concave on $U_a$. \end{proof}
\begin{prop} Assume that $C = A\setminus B$ is a closed DC set, where $A$ is closed and $B$ is open. Then $d_C(x)$ agrees with the convex function $d_A(x)$ whenever $d_{C}(x) \le d_{A\cap B}(x)$. \end{prop}
\begin{proof} Since $A = \big(A\setminus B\big) \cup \big(A\cap B\big)$, we have $$d_A(x) = \min\big\{d_{A\setminus B}(x), d_{A\cap B}(x)\big\} = \min\big\{d_{C}(x), d_{A\cap B}(x)\big\}.$$ Hence, if $d_C(x) \le d_{A\cap B}(x)$, then $d_C(x) = d_A(x)$, and $d_A$ is convex since $A$ is convex. \end{proof}
\begin{prop} Assume that $C$ is a DC representable set, i.e., $C = \big\{x\in X: f_1(x)-f_2(x)\le 0\big\}$, and that $f_2(x) =\max_{1\le i \le m}\varphi_i(x)$, where each $\varphi_i$ is affine. Then $d_C$ is DC on $X$. \end{prop}
\begin{proof} Write \begin{align*} C & = \Big\{x : f_1(x)-f_2(x)\le 0\Big\} \\ &= \Big\{x : f_1(x)-\max_{1\le i \le m}\varphi_i(x) \le 0\Big\} \\ & = \Big\{x : \min_{1\le i \le m}\big(f_1(x)-\varphi_i(x)\big) \le 0\Big\} \\ & = \bigcup_{i=1}^m C_i, \end{align*} where the sets $C_i = \Big\{x~ : ~ f_1(x)-\varphi_i(x) \le 0\Big\}$ are convex. Hence \[d_C(x) = \min_{1\le i \le m}d_{C_i}(x)\] is a minimum of finitely many convex functions, and is therefore a DC function by Proposition \ref{prop select}. \end{proof}
In \cite{Cep98} it was shown that if $X$ is superreflexive, then any Lipschitz map is a uniform limit of DC functions. See also \cite[Sec. 5.1]{BV10}. We have the following simple result.
\begin{prop} If $X$ is separable, then $d_C$ is a limit (not necessarily uniform) of DC functions. \end{prop}
\begin{proof} Since $X$ is separable, so is its subset $C$; let $Q = \{q_1,q_2,\dots\}\subseteq C$ be countable with $\bar Q = C$. We have \begin{align*}
d_C(x) = \inf_{z\in C}\|x-z\| = \inf_{z\in Q}\|x-z\| = \lim_{n\to \infty} \Big[\min_{z\in Q_n}\|x-z\|\Big], \end{align*}
where $Q_n = \{q_1,q_2,\dots,q_n\}$. By Proposition \ref{prop select}, $\min_{z\in Q_n}\|x-z\|$ is a DC function, being a minimum of finitely many convex functions. \end{proof}
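The monotone approximation $\min_{z\in Q_n}\|x-z\| \downarrow d_C(x)$ is easy to visualise numerically. In the sketch below (our own construction, not from the text) a golden-angle rotation supplies a countable dense sequence in the unit circle $C\subseteq\mathbb R^2$:

```python
import math

# C = the unit circle in R^2; q(1), q(2), ... is a dense sequence in C,
# generated by an irrational (golden-ratio) rotation. Names are illustrative.
GOLDEN = 0.6180339887498949

def q(k):
    t = 2 * math.pi * ((k * GOLDEN) % 1.0)
    return (math.cos(t), math.sin(t))

x = (2.0, 0.0)  # d_C(x) = 1, attained at (1, 0)
approx = [min(math.dist(x, q(k)) for k in range(1, n + 1)) for n in (5, 50, 500)]
# approx is non-increasing in n and tends to d_C(x) = 1
```

Each entry of `approx` is a value of the DC function $\min_{z\in Q_n}\|x-z\|$ from the proof, and the sequence decreases toward $d_C(x)$.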
\section{Conclusion} Despite many decades of study, the core questions addressed in this note are still far from settled. We hope that our analysis will encourage others to take up the quest, and also to reconsider the related \emph{Chebyshev problem} \cite{B07,BV10}.
\end{document} |
\begin{document}
\title{Quenched invariance principles for random walks with random conductances}
\begin{abstract} We prove an almost sure invariance principle for a random walker among i.i.d. conductances in ${\mathbb Z}^d$, $d\geq 2$. We assume that the conductances are bounded from above, but we do not require that they be bounded away from zero. \end{abstract}
\section{Introduction}
We consider continuous-time, nearest-neighbor random walks among random (i.i.d.) conductances in ${\mathbb Z}^d$, $d\geq 2$ and prove that they satisfy an almost sure invariance principle.
\subsection{Random walks and environments} For $x, y \in {\mathbb Z}^d$, we write $x \sim y$ if $x$ and $y$ are neighbors in the grid ${\mathbb Z}^d$, and we let ${\mathbb E}_d$ be the set of non-oriented nearest-neighbor pairs $(x,y)$.\\ An {\it environment} is a function $\omega:{\mathbb E}_d\rightarrow [0,+\infty[$. Since edges in ${\mathbb E}_d$ are not oriented, i.e. we identify the edge $(x,y)$ with the reversed edge $(y,x)$, it is implicit in the definition that environments are symmetric, i.e. $\omega(x,y)=\omega(y,x)$ for any pair of neighbors $x$ and $y$. \\ We let $(\tau_z\,,\, z\in{\mathbb Z}^d)$ be the group of transformations of environments defined by $\tau_z\omega(x,y)=\omega(z+x,z+y)$.
We shall always assume that our environments are uniformly bounded from above. Without loss of generality, we may assume that $\omega(x,y)\leq 1$ for any edge. Thus, for the rest of this paper, an environment will rather be a function $\omega:{\mathbb E}_d\rightarrow [0,1]$. We use the notation $\Omega=[0,1]^{{\mathbb E}_d}$ for the set of environments (endowed with the product topology and the corresponding Borel structure). The value of an environment $\omega$ at a given edge is called the {\it conductance}.
Let $\omega\in\Omega$. We are interested in the behavior of the random walk in the environment $\omega$. We denote by $D({\mathbb R}_+,{\mathbb Z}^d)$ the space of c\`ad-l\`ag ${\mathbb Z}^d$-valued functions on ${\mathbb R}_+$ and let $X(t)$, $t\in{\mathbb R}_+$, be the coordinate maps from $D({\mathbb R}_+,{\mathbb Z}^d)$ to ${\mathbb Z}^d$. The space $D({\mathbb R}_+,{\mathbb Z}^d)$ is endowed with the Skorokhod topology, see \cite{kn:Bill} or \cite{kn:JS}. For a given $\omega\in [0,1]^{{\mathbb E}_d}$ and for $x\in{\mathbb Z}^d$, let $P^\omega_x$ be the probability measure on $D({\mathbb R}_+,{\mathbb Z}^d)$ under which the coordinate process is the Markov chain starting at $X(0)=x$ and with generator \begin{eqnarray}\label{int:gen} \LL^\omega f(x)=\frac 1{n^\omega(x)}\sum_{y\sim x} \omega(x,y) (f(y)-f(x))\,, \end{eqnarray} where $n^\omega(x)=\sum_{y\sim x} \omega(x,y)$. If $n^\omega(x)=0$, let $\LL^\omega f(x)=0$ for any function $f$.
The behavior of $X(t)$ under $P^\omega_x$ can be described as follows: starting from the point $x$, the random walker waits for an exponential time of parameter $1$ and then jumps to a neighbor chosen at random according to the probability law $\omega(x,\cdot)/n^\omega(x)$. This procedure is then iterated with independent hopping times.
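The dynamics just described are straightforward to simulate. The following is a minimal sketch (our own, with illustrative names; the uniform law on $[0,1]$ is one admissible choice for the conductance distribution) for $d=2$:

```python
import random

random.seed(1)
NEIGH = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # nearest neighbors in Z^2

# i.i.d. conductances in [0,1], sampled lazily, one value per non-oriented edge
cond = {}
def omega(x, y):
    e = (min(x, y), max(x, y))          # identify (x, y) with (y, x)
    if e not in cond:
        cond[e] = random.random()       # here: uniform conductance law
    return cond[e]

def run_walk(x0, T):
    """Constant-speed walk: wait Exp(1), then jump with law omega(x,.)/n(x)."""
    t, x = 0.0, x0
    while True:
        t += random.expovariate(1.0)
        if t > T:
            return x
        nbrs = [(x[0] + dx, x[1] + dy) for dx, dy in NEIGH]
        w = [omega(x, y) for y in nbrs]
        if sum(w) == 0:
            return x                    # n(x) = 0: the generator vanishes at x
        x = random.choices(nbrs, weights=w)[0]
```

Replacing `random.random()` by a law with an atom at $0$ lets one observe the walk confined to finite clusters in the sub-critical regime discussed below.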
We have allowed environments to take the value $0$, and it is clear from the definition of the random walk that $X$ only travels along edges with positive conductances. This remark motivates the following definition: call a {\it cluster} of the environment $\omega$ a connected component of the graph $({\mathbb Z}^d,\{e\in {\mathbb E}_d\,;\, \omega(e)>0\})$. By construction, our random walker never leaves the cluster of $\omega$ it started from. Since edges are not oriented, the measures with weights $n^\omega(x)$ on the possibly different clusters of $\omega$ are reversible.
\subsection{Random environments} Let $Q$ be a product probability measure on $\Omega$. In other words, we will now pick environments at random, in such a way that the conductances of the different edges form a family of independent identically distributed random variables. $Q$ is of course invariant under the action of $\tau_z$ for any $z\in{\mathbb Z}^d$.
The random variables $({\mathbf 1}_{\omega(e)>0}\,;\, e\in {\mathbb E}_d)$ are independent Bernoulli variables with common parameter $q=Q(\omega(e)>0)$. Depending on the value of $q$, a typical environment chosen w.r.t. $Q$ may or may not have infinite clusters. More precisely, it is known from percolation theory that there is a critical value $p_c$, depending on the dimension $d$, such that for $q<p_c$, $Q$-a.s. all clusters of $\omega$ are finite, and for $q>p_c$, $Q$-a.s. there is a unique infinite cluster. In the first case the random walk is almost surely confined to a finite set and therefore does not satisfy the invariance principle (or satisfies a degenerate version of it with vanishing asymptotic variance). We shall therefore assume that the law $Q$ is {\it super-critical}, i.e. that $$q=Q(\omega(e)>0)>p_c\,.$$ Then the event `the origin belongs to the infinite cluster' has a non-vanishing $Q$-probability and we may define the conditional law: \begin{eqnarray*} Q_0(.)=Q(.\,\vert\, \hbox{$0$ belongs to the infinite cluster})\,. \end{eqnarray*}
\subsection{Annealed results} Part of the analysis of the behavior of random walks in random environments can be done using the {\it point of view of the particle}: we consider the random walk $X$ started at the origin and look at the random process describing the environment shifted by the position of the random walker, i.e. we let $\omega(t)=\tau_{X(t)}\omega$. Thus $(\omega(t)\,,\, t\in{\mathbb R}_+)$ is a random process taking its values in $\Omega$.
Let us also introduce the measure \begin{eqnarray*} {\tilde Q}_0(A)=\frac{\int_A n^\omega(0)\,dQ_0(\omega)}{\int n^\omega(0)\,dQ_0(\omega)}\,. \end{eqnarray*} Observe that ${\tilde Q}_0$ is obviously absolutely continuous with respect to $Q_0$.
We list some of the properties of the process $\omega(.)$ as proved in \cite{kn:DFGW}: \begin{prop} \label{propDeMasi} (Lemmata 4.3 and 4.9 in \cite{kn:DFGW})\\ The random process $\omega(t)$ is Markovian under $P^\omega_0$. The measure ${\tilde Q}_0$ is reversible, invariant and ergodic with respect to $\omega(t)$. \end{prop}
Based on this proposition, the authors of \cite{kn:DFGW} could deduce that the random walk $X(t)$ satisfies the invariance principle {\it in the mean}. Let us define the so-called {\it annealed} semi-direct product measure
\begin{eqnarray*} Q_0.P_x^\omega[\,F(\omega,X(.))\,]=\int P_x^\omega[\,F(\omega,X(.))\,]\,dQ_0(\omega)\,.\end{eqnarray*}
\begin{theo} \label{theoDeMasi} (Annealed invariance principle, \cite{kn:DFGW})\\ Consider a random walk with i.i.d. super-critical conductances. Under $Q_0.P_0^\omega$, the process $(X^\varepsilon(t)=\varepsilon X(\frac t{\varepsilon^2}),t\in{\mathbb R}_+)$ converges in law to a non-degenerate Brownian motion with covariance matrix $\sigma^2Id$ where $\sigma^2$ is positive. \end{theo}
It should be pointed out that the result of \cite{kn:DFGW} is in fact much more general. On the one hand, \cite{kn:DFGW} deals with random walks with unbounded jumps, under a mild second moment condition. On the other hand, a similar annealed invariance principle is proved for any stationary law $Q$, not just for product measures.
The positivity of $\sigma^2$ is not ensured by the general results of \cite{kn:DFGW}, but it can be proved using a comparison with the Bernoulli case, see Remark \ref{rem:positivity}.
\subsection{The almost sure invariance principle} The annealed invariance principle is not enough to give a completely satisfactory description of the long time behavior of the random walk. It is for instance clear that the annealed measure $Q_0.P_0^\omega$ retains all the symmetries of the grid. In particular it is invariant under reflections through hyperplanes passing through the origin. This is no longer true for the law of the random walk in a given environment. Still, one would expect symmetries to be restored on large scales, for a given realization of $\omega$.
Our main result is the following almost sure version of Theorem \ref{theoDeMasi}:
\begin{theo} \label{theorem1} (Quenched invariance principle)\\ Consider a random walk with i.i.d. super-critical conductances. $Q_0$ almost surely, under $P^\omega_0$, the process $(X^\varepsilon(t)=\varepsilon X(\frac t{\varepsilon^2}),t\in{\mathbb R}_+)$ converges in law as $\varepsilon$ tends to $0$ to a non-degenerate Brownian motion with covariance matrix $\sigma^2Id$ where $\sigma^2$ is positive and does not depend on $\omega$. \end{theo}
\subsection{The Bernoulli case and other cases} The main difficulty in proving Theorem \ref{theorem1} is the lack of any assumption giving a lower bound on the values of the conductances. Indeed, if one assumes that the conductances are almost surely bounded from below by a fixed constant, i.e. that there exists $\delta>0$ such that $Q(\omega(e)<\delta)=0$, then the conclusion of Theorem \ref{theorem1} was already proved in \cite{kn:SS} using the classical `corrector approach' adapted from \cite{kn:Ko}.
Another special case recently solved is the Bernoulli case: let us assume that only the values $0$ and $1$ are allowed for the conductances, i.e. $Q$ is a product of Bernoulli measures of parameter $q$. Remember that we assume that we are in the super-critical regime $q>p_c$. An environment can then also be thought of as an (unweighted) random sub-graph of the grid, and our random walk is the simple symmetric random walk on the clusters of the environment, i.e. jumps are performed according to the uniform law on the neighbors of the current position in the graph $\omega$.
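The Bernoulli-case jump rule is thus very simple: at each jump time the walker moves to a uniformly chosen neighbour in the sub-graph. A toy sketch of this rule (the discrete skeleton of the walk, on a small hand-built cluster of our own; an illustration only, not the paper's construction):

```python
import random

def srw_step(x, adj, rng):
    """One jump of the simple symmetric random walk on the sub-graph `adj`:
    move to a uniformly chosen neighbour of the current position x."""
    return rng.choice(adj[x])

# A tiny hand-built cluster: a star with centre 1 and leaves 0, 2, 3.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
rng = random.Random(1)
counts = {y: 0 for y in adj[1]}
for _ in range(30000):
    counts[srw_step(1, adj, rng)] += 1
# Empirically, each neighbour of 1 is chosen with frequency close to 1/3.
```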
In the Bernoulli case, quenched invariance principles have been obtained by various authors in \cite{kn:BB}, \cite{kn:MP} and \cite{kn:SS}. These three works develop different approaches to handle the lack of a positive lower bound for the conductances. They have in common the use of quantitative bounds on the transition probabilities of the random walk. It is indeed known from \cite{kn:Ba} that the kernel of the simple random walk on an infinite percolation cluster satisfies Gaussian bounds. A careful analysis of the proofs shows that a necessary condition to obtain the invariance principle using any of the three approaches in \cite{kn:BB}, \cite{kn:MP} or \cite{kn:SS} is a Poincar\'e inequality of the correct scaling (and in fact \cite{kn:MP} shows that the Poincar\'e inequality is `almost' sufficient). To be more precise, let $A_n$ be the Poincar\'e constant on a box of size $n$ centered at the origin. In other words, $A_n$ is the inverse spectral gap of the operator $\LL^\omega$ restricted to the connected component at the origin of the graph $\omega\cap[-n,n]^d$ and with reflection boundary conditions. Then one needs to know that, $Q_0$ almost surely, \begin{eqnarray} \label{int:poinc} \limsup n^{-2}A_n<\infty\,. \end{eqnarray} Such a statement was originally proved in \cite{kn:MR} for the Bernoulli case.
It turns out that (\ref{int:poinc}) is false in the general case of i.i.d. conductances, even if one assumes that the conductances are always positive. We can choose for instance a product law with a polynomial tail at the origin, i.e. we assume that there exists a positive parameter $\gamma$ such that $Q(\omega(e)\leq a)\sim a^\gamma$ as $a$ tends to $0$. Then it is not difficult to prove that, for small values of $\gamma$, \begin{eqnarray*} \liminf\frac{\log A_n}{\log n}> 2\,.\end{eqnarray*} In \cite{kn:FM}, we considered a slightly different model of symmetric random walks with random conductances with a polynomial tail but non i.i.d. (although with finite range dependency only) and we proved that \begin{eqnarray*} \frac{\log A_n}{\log n}\rightarrow 2\vee\frac d\gamma\,,\end{eqnarray*} showing that, at least in the case $\gamma<d/2$, the Poincar\'e constant is too big to be directly used to prove the diffusive behavior of the random walk; some new ingredient is needed to prove Theorem \ref{theorem1}.
\begin{rmk} In \cite{kn:FM}, we derived annealed estimates on the decay of the return probability of the random walk. More interestingly, in the very recent work \cite{kn:BBHK}, the authors could also obtain quenched bounds on the decay of the return probability for quite general random walks with random conductances. Their results in particular show that anomalous decays do occur in high dimension. In such situations, although the almost sure invariance principle holds, see Theorem \ref{theorem1}, the local CLT fails. \end{rmk}
Our proof of Theorem \ref{theorem1} uses a time change argument that we describe in the next part of the paper. \vskip.5cm
{\it Acknowledgments:} the author would like to thank the referees of the first version of the paper for their careful reading and comments that led to an improvement of the paper.
{\it Note: after this paper was posted on the arXiv, M. Biskup and T. Prescott wrote a preprint with a different proof of Theorem \ref{theorem1}, see \cite{kn:BiPres}. Their approach is based on ideas from \cite{kn:BB}, whereas we prefer to invoke \cite{kn:MP}. They also need a time change argument, as here, and percolation results like Lemma \ref{lem:site''}. }
\vskip 1cm \section{A time changed process} \setcounter{equation}{0} \label{sec:timechange}
In this section, we introduce a time changed process, $X^\xi$, and state an invariance principle for it: Theorem \ref{theorem'}.
Choose a threshold parameter $\xi>0$ such that $Q(\omega(e)\geq\xi)>p_c$. For $Q$ almost any environment $\omega$, the percolation graph $({\mathbb Z}^d,\{ e\in E_d\,;\, \omega(e)\geq\xi\})$ has a unique infinite cluster that we denote by $\CC^\xi(\omega)$.
By construction $\CC^\xi(\omega)$ is a subset of $\CC(\omega)$. We will refer to the connected components of the complement of $\CC^\xi(\omega)$ in $\CC(\omega)$ as {\it holes}. By definition, holes are connected sub-graphs of the grid. Let $\HH^\xi(\omega)$ be the collection of all holes. Note that holes may contain edges $e$ such that $\omega(e)\geq\xi$.
We also define the conditioned measure \begin{eqnarray*} Q_0^\xi(.)=Q(.\,\vert\, 0\in\CC^\xi(\omega))\,.\end{eqnarray*}
Consider the following additive functional of the random walk: \begin{eqnarray*} A^\xi(t)=\int_0^t {\mathbf 1}_{X(s)\in\CC^\xi(\omega)}\,ds\,, \end{eqnarray*} its inverse $(A^\xi)^{-1}(t)=\inf \{s\,;\, A^\xi(s)>t\}$ and define the corresponding time changed process \begin{eqnarray*} \tX(t)=X((A^\xi)^{-1}(t))\,. \end{eqnarray*}
Thus the process $\tX$ is obtained by suppressing in the trajectory of $X$ all the visits to the holes. Note that, unlike $X$, the process $\tX$ may perform long jumps when straddling holes.
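In discrete form the time change is just a deletion of the excursions into the holes. The following sketch is our own illustration (a trajectory stored as (site, holding time) pairs and a hypothetical cluster `C_xi`, not the paper's notation): it computes $A^\xi$ and the trajectory of the time-changed process; note the long jump $0\to 2$ straddling the hole $\{1\}$.

```python
def time_change(traj, C_xi):
    """traj: list of (site, holding_time) pairs describing the piecewise
    constant path X.  Return the trajectory of the time-changed process,
    obtained by deleting every visit to a hole (a site outside C_xi)."""
    return [(x, h) for (x, h) in traj if x in C_xi]

def A_xi(traj, C_xi, t):
    """Additive functional A^xi(t): time spent in C_xi before time t."""
    total, clock = 0.0, 0.0
    for x, h in traj:
        if clock >= t:
            break
        dt = min(h, t - clock)   # portion of this visit before time t
        if x in C_xi:
            total += dt
        clock += h
    return total

C_xi = {0, 2, 3}                 # sites of the xi-cluster; {1} is a hole
traj = [(0, 1.0), (1, 0.5), (2, 2.0), (1, 1.0), (3, 0.5)]
```

Here `time_change(traj, C_xi)` returns `[(0, 1.0), (2, 2.0), (3, 0.5)]`: the visits to the hole site $1$ are suppressed and their durations do not count in $A^\xi$.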
As $X$ performs the random walk in the environment $\omega$, the behavior of the random process $\tX$ is described in the next proposition.
\begin{prop} Assume that the origin belongs to $\CC^\xi(\omega)$. Then, under $P^\omega_0$, the random process $\tX$ is a symmetric Markov process on $\CC^\xi(\omega)$. \end{prop}
The Markov property, which is not difficult to prove, follows from a very general argument about time changed Markov processes. The reversibility of $\tX$ is a consequence of the reversibility of $X$ itself as will be discussed after equation (\ref{2:rates}).
The generator of the process $\tX$ has the form \begin{eqnarray}\label{2:gen'} {\tLLo} f(x)=\frac 1{n^\omega(x)}\sum_{y} \oxi(x,y) (f(y)-f(x))\,, \end{eqnarray} where \begin{eqnarray} \nonumber \frac{\oxi(x,y)}{n^\omega(x)} &=&\lim_{t\rightarrow 0} \frac 1t P_x^\omega(\tX(t)=y)\\ &=&P_x^\omega(\hbox{ $y$ is the next point in $\CC^\xi(\omega)$ visited by the random walk $X$})\,, \label{2:rates} \end{eqnarray} if both $x$ and $y$ belong to $\CC^\xi(\omega)$, and $\oxi(x,y)=0$ otherwise.
The function $\oxi$ is symmetric: $\oxi(x,y)=\oxi(y,x)$, as follows from the reversibility of $X$ and formula (\ref{2:rates}), but it is no longer of nearest-neighbor type, i.e. it might happen that $\oxi(x,y)\not=0$ although $x$ and $y$ are not neighbors. More precisely, one has the following picture: $\oxi(x,y)=0$ unless either $x$ and $y$ are neighbors and $\omega(x,y)\geq \xi$, or there exists a hole, $h$, such that both $x$ and $y$ have neighbors in $h$. (Both conditions may be fulfilled by the same pair $(x,y)$.)
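On a toy configuration this symmetry can be checked by hand: take two cluster points $x,y$ joined both by a direct edge and through a single hole site $h$. First-step analysis gives $\oxi(x,y)=c_{xy}+c_{xh}c_{hy}/(c_{xh}+c_{hy})$: the direct conductance plus the series conductance through the hole, which is indeed symmetric in $x$ and $y$. A sketch with exact rational arithmetic (a hypothetical three-site configuration of our own, not the paper's notation):

```python
from fractions import Fraction as F

def oxi_rate(c_xy, c_xh, c_hy):
    """Effective rate oxi(x, y) for the configuration x - y (direct edge)
    plus x - h - y (detour through the single hole site h): from x the walk
    jumps to a neighbour with probability proportional to the conductance,
    and from h it exits to x or y likewise."""
    n_x = c_xy + c_xh                                 # n(x): total conductance at x
    p_direct = F(c_xy, n_x)                           # jump x -> y
    p_detour = F(c_xh, n_x) * F(c_hy, c_xh + c_hy)    # x -> h, then h -> y
    return n_x * (p_direct + p_detour)                # = c_xy + c_xh*c_hy/(c_xh + c_hy)

r_xy = oxi_rate(1, 2, 3)   # seen from x
r_yx = oxi_rate(1, 3, 2)   # seen from y: roles of the two hole edges swapped
```

Both evaluations give $1+\frac{2\cdot 3}{2+3}=\frac{11}{5}$, and the rate is at least the direct conductance $c_{xy}$, in line with the lower bound (\ref{2:lowbo}).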
Consider a pair of neighboring points $x$ and $y$, both of them belonging to the infinite cluster $\CC^\xi(\omega)$ and such that $\omega(x,y)\geq\xi$; then \begin{eqnarray} \label{2:lowbo} \oxi(x,y)\geq\xi\,.\end{eqnarray} This simple remark will play an important role. It implies, in a sense to be made precise later, that the parts of the trajectory of $\tX$ that consist of nearest-neighbor jumps are similar to what the simple symmetric random walk on $\CC^\xi(\omega)$ does.
Finally observe that the environment $\oxi$ is stationary, i.e. the law of $\oxi$ under $Q$ is invariant with respect to $\tau_z$ for all $z\in{\mathbb Z}^d$, as can be immediately seen from formula (\ref{2:rates}).
\begin{theo} \label{theorem'} (Quenched invariance principle for $\tX$)\\ There exists a value $\xi_0>0$ such that for any $0<\xi\leq\xi_0$ the following holds. For $Q_0$ almost any environment, under $P^\omega_0$, the process $(X^{\xi,\,\varepsilon}(t)=\varepsilon \tX(\frac t{\varepsilon^2}),t\in{\mathbb R}_+)$ converges in law as $\varepsilon$ tends to $0$ to a non-degenerate Brownian motion with covariance matrix $\sigma^2(\xi) Id$ where $\sigma^2(\xi)$ is positive and does not depend on $\omega$. \end{theo}
The proof of Theorem \ref{theorem'} will be given in part \ref{sec:proof}. It very closely mimics the arguments of \cite{kn:MP}. Indeed, one uses the lower bound (\ref{2:lowbo}) to bound the Dirichlet form of the process $\tX$ in terms of the Dirichlet form of the simple symmetric random walk on $\CC^\xi(\omega)$ and thus get the Poincar\'e inequality of the correct order. It is then not difficult to adapt the approach of \cite{kn:MR} and \cite{kn:Ba} to derive the tightness of the family $X^{\xi,\,\varepsilon}$, and the invariance principle follows as in \cite{kn:MP}.
\begin{rmk} \label{rem:positivity} The positivity of $\sigma^2$ in Theorem \ref{theorem1} and the positivity of $\sigma^2(\xi)$ in Theorem \ref{theorem'} can be checked using comparison arguments from \cite{kn:DFGW}. Indeed it follows from the expression of the effective diffusivity, see Theorem 4.5 part (iii) of \cite{kn:DFGW}, and from the discussion on monotonicity in part 3 of \cite{kn:DFGW} that $\sigma^2$ is an increasing function of the probability law $Q$ (up to some multiplicative factor). Therefore, if $Q$ stochastically dominates $Q'$ and the effective diffusivity under $Q'$ is positive, then the effective diffusivity under $Q$ is also positive. Here $Q$ stochastically dominates the law of the environment with conductances $\omega'(e)=\xi{\mathbf 1}_{\omega(e)\geq\xi}$. The random walk in the environment $\omega'$ is the simple random walk on a percolation cluster, which is known to have a positive asymptotic diffusivity, see \cite{kn:Ba} or the references in \cite{kn:MP}. The same argument shows that $\sigma^2(\xi)>0$ for any $\xi$ such that $Q(\omega(e)\geq\xi)>p_c$. \end{rmk}
To derive Theorem \ref{theorem1} from Theorem \ref{theorem'}, we will compare the processes $X$ and $X^\xi$ for small values of $\xi$. The large time asymptotic of the time change $A^\xi$ is easily deduced from the ergodic theorem, as shown in Lemma \ref{lem:ergoA} below, and it implies that the asymptotic variance $\sigma^2(\xi)$ is continuous at $\xi=0$, see Lemma \ref{lem:sigma}.
Let \begin{eqnarray*} c(\xi)={\tilde Q}_0 (0\in\CC^\xi(\omega)) \,. \end{eqnarray*}
\begin{lm} \label{lem:ergoA} \begin{eqnarray*} \frac {A^\xi(t)} t \rightarrow c(\xi)\, \hbox{ $Q_0$ a.s.} \end{eqnarray*} as $t$ tends to $\infty$ and
\begin{eqnarray}\label{2:cxi} c(\xi)\rightarrow 1\,, \end{eqnarray} as $\xi$ tends to $0$. \end{lm} {\it Proof}: remember the notation $\omega(t)=\tau_{X(t-)}\omega$. The additive functional $A^\xi(t)$ can also be written in the form $A^\xi(t)=\int_0^t {\mathbf 1}_{0\in\CC^\xi(\omega(s))}\,ds$.
From Proposition \ref{propDeMasi}, we know that ${\tilde Q}_0$ is an invariant and ergodic measure for the process $\omega(t)=\tau_{X(t-)}\omega$ and that it is absolutely continuous with respect to $Q_0$.
Thus the existence of the limit $\lim_{t\rightarrow +\infty} \frac {A^\xi(t)} t$ follows from the ergodic theorem and the limit is $c(\xi)={\tilde Q}_0(0\in\CC^\xi(\omega))$. To check (\ref{2:cxi}), note that ${\mathbf 1}_{0\in\CC^\xi(\omega)}$ almost surely converges to ${\mathbf 1}_{0\in\CC(\omega)}$ as $\xi$ tends to $0$. Since ${\tilde Q}_0(0\in\CC(\omega))=1$, we get that $c(\xi)$ converges to $1$.
\rule{.2cm}{.2cm}
\begin{lm} \label{lem:sigma} The asymptotic variances $\sigma^2$ in Theorem \ref{theoDeMasi} and $\sigma^2(\xi)$ from Theorem \ref{theorem'}, and the constant $c(\xi)$ from Lemma \ref{lem:ergoA} satisfy the equality \begin{eqnarray}\label{form:variance} c(\xi)\sigma^2(\xi)=\sigma^2\,.\end{eqnarray} As a consequence, $\sigma^2(\xi)$ converges to $\sigma^2$ as $\xi$ tends to $0$. \end{lm}
{\it Proof}: formula (\ref{form:variance}) is deduced from Lemma \ref{lem:ergoA}. One can, for instance, compute the law of the exit times from a large slab for both processes $X$ and $X^\xi$. Let $\tau(r)$ (resp. $\tau^\xi(r)$) be the exit time of $X$ (resp. $X^\xi$) from the set $[-r,r]\times{\mathbb R}^{d-1}$. Under the annealed measure, the Laplace transform of $\tau(r)/r^2$ converges to $E(\exp(-\lambda T/\sigma^2))$ where $T$ is the exit time of $[-1,1]$ by a Brownian motion. This is a consequence of the invariance principle of Theorem \ref{theoDeMasi}. Theorem \ref{theorem'} implies that the Laplace transform of $\tau^\xi(r)/r^2$ converges to $E(\exp(-\lambda T/\sigma^2(\xi)))$. (The convergence holds for $Q_0$ almost any environment and, by dominated convergence, under the annealed measure.) \\ On the other hand, we have $\tau^\xi(r)=A^\xi(\tau(r))$ and therefore Lemma \ref{lem:ergoA} implies that the Laplace transform of $\tau^\xi(r)/r^2$ has the same limit as the Laplace transform of $c(\xi)\tau(r)/r^2$ and therefore converges to $E(\exp(-\lambda c(\xi)T/\sigma^2))$. We deduce from these computations that $$E(\exp(-\lambda c(\xi)T/\sigma^2))=E(\exp(-\lambda T/\sigma^2(\xi)))\,,$$ and, since this is true for any $\lambda\geq 0$, we must have $c(\xi)\sigma^2(\xi)=\sigma^2$. \\ The continuity of $\sigma^2(\xi)$ at $\xi=0$ is ensured by the continuity of $c(\xi)$.
\rule{.2cm}{.2cm}
\vskip 1cm \section{How to deduce Theorem \ref{theorem1} from Theorem \ref{theorem'}} \setcounter{equation}{0} \label{sec:deduce}
We start by stating a percolation lemma that will be useful to control the contribution of the holes to the behavior of the random walk.
\begin{lm} \label{lem:holes} There exists a value $\xi_0>0$ such that for any $0<\xi\leq\xi_0$ the following holds. There exists a constant $a$ such that, $Q$ almost surely, for large enough $n$, the volume of any hole $h\in\HH^\xi(\omega)$ intersecting the box $[-n,n]^d$ is bounded from above by $(\log n)^a$. ($a=7$ would do.) \end{lm}
The proof of Lemma \ref{lem:holes} is postponed to part \ref{sec:perco}.
\subsection{Tightness}
In this section, we derive the tightness of the sequence of processes $X^\varepsilon$ from Theorem \ref{theorem'}.
\begin{lm} \label{lem:tight} Under the assumptions of Theorem \ref{theorem1}, $Q_0$ almost surely, under $P^\omega_0$, the family of processes $(X^\varepsilon(t)=\varepsilon X(\frac t{\varepsilon^2}),t\in{\mathbb R}_+)$ is tight in the Skorokhod topology. \end{lm}
{\it Proof}: we read from \cite{kn:JS}, paragraph 3.26, page 315 that a sequence of processes $x^\varepsilon$ is tight if and only if the following two estimates hold:\\ (i) for any $T$, any $\delta>0$, there exist $\varepsilon_0$ and $K$ such that for any $\varepsilon\leq\varepsilon_0$ \begin{eqnarray} \label{ti1} P(\sup_{t\leq T} \vert x^\varepsilon(t)\vert\geq K)\leq\delta\,, \end{eqnarray} and\\ (ii) for any $T$, any $\delta>0$, any $\eta>0$, there exist $\varepsilon_0$ and $\theta_0$ such that for any $\varepsilon\leq\varepsilon_0$ \begin{eqnarray} \label{ti2} P(\sup_{v\leq u\leq T\,;\, u-v\leq\theta_0} \vert x^\varepsilon(u)-x^\varepsilon(v)\vert>\eta)\leq\delta\,. \end{eqnarray}
Choose $\xi$ as in Theorem \ref{theorem'}. The sequence $X^{\xi,\,\varepsilon}$ converges; therefore it is tight and satisfies (\ref{ti1}) and (\ref{ti2}). By definition, $$X^{\xi,\,\varepsilon}(t)=X^\varepsilon(\varepsilon^2 (A^\xi)^{-1}(\frac t{\varepsilon^2}))\,.$$
{\it Proof of condition (i)}: let us first check that $X^\varepsilon$ satisfies (\ref{ti1}). \\ Assume that $\sup_{t\leq T} \vert X^{\xi,\,\varepsilon}(t)\vert\leq K$. Given $t_0\leq T$, let $x_0=X^\varepsilon(t_0)$, i.e. $X(\frac {t_0}{\varepsilon^2})=\frac 1 \varepsilon {x_0}$, and define $s_0=\varepsilon^2 A^\xi(\frac {t_0}{\varepsilon^2})$. Since $A^\xi(t)\leq t$, we have $s_0\leq t_0$.\\ If $\frac 1 \varepsilon {x_0}$ belongs to $\CC^\xi(\omega)$, then $t_0=\varepsilon^2 (A^\xi)^{-1}(\frac {s_0}{\varepsilon^2})$ and $ X^{\xi,\,\varepsilon}(s_0)=X^\varepsilon(t_0)=x_0$, and therefore $\vert x_0\vert\leq K$. \\ Now suppose that $\frac 1 \varepsilon {x_0}$ does not belong to $\CC^\xi(\omega)$ and let $t_1=\varepsilon^2 (A^\xi)^{-1}(\frac {s_0}{\varepsilon^2})$ and $x_1=X^\varepsilon(t_1)$. Then $t_1\leq t_0$ and $\frac 1 \varepsilon {x_1}$ belongs to $\CC^\xi(\omega)$. The same argument as before shows that $\vert x_1\vert\leq K$. On the other hand, by definition of the time changed process $X^\xi$, $\frac 1\varepsilon {x_1}$ is the last point in $\CC^\xi(\omega)$ visited by $X$ before time $t_0$. Thus $\frac 1 \varepsilon {x_0}$ belongs to a hole on the boundary of which sits $\frac 1 \varepsilon{x_1}$.
It then follows from Lemma \ref{lem:holes} that $$\vert \frac 1 \varepsilon{x_1}-\frac 1 \varepsilon{x_0}\vert \leq (\log\frac K{\varepsilon})^a\,.$$ Thus we have proved that $$\vert x_0\vert\leq K+\varepsilon (\log\frac K{\varepsilon})^a\,.$$ We can choose $\varepsilon_0$ small enough so that $\varepsilon (\log\frac K{\varepsilon})^a\leq K$ and therefore we have $$\sup_{t\leq T} \vert X^{\xi,\,\varepsilon}(t)\vert\leq K \implies \sup_{t\leq T} \vert X^{\varepsilon}(t)\vert\leq 2K\,.$$ Since the sequence $X^{\xi,\,\varepsilon}$ satisfies (\ref{ti1}), the event `$\sup_{t\leq T} \vert X^{\xi,\,\varepsilon}(t)\vert\leq K$' has a large probability; therefore $\sup_{t\leq T} \vert X^{\varepsilon}(t)\vert\leq 2K$ has a large probability and the sequence $X^{\varepsilon}$ satisfies (\ref{ti1}).
{\it Proof of condition (ii)}: as before, we will deduce that the sequence $X^{\varepsilon}$ satisfies (\ref{ti2}) from the fact that the sequence $X^{\xi,\,\varepsilon}$ satisfies (\ref{ti1}) and (\ref{ti2}). Assume that $$\sup_{v\leq u\leq T\,;\, u-v\leq\theta_0} \vert X^{\xi,\,\varepsilon}(u)- X^{\xi,\,\varepsilon}(v)\vert\leq \eta\,.$$ We further assume that $\sup_{t\leq T} \vert X^{\xi,\,\varepsilon}(t)\vert\leq K$.\\ Given $v_0\leq u_0\leq T$ such that $u_0-v_0\leq\theta_0$, let $x_0=X^\varepsilon(u_0)$, $y_0=X^\varepsilon(v_0)$ and define $s_0=\varepsilon^2 A^\xi(\frac {u_0}{\varepsilon^2})$, $t_0=\varepsilon^2 A^\xi(\frac {v_0}{\varepsilon^2})$, $u_1=\varepsilon^2 (A^\xi)^{-1}(\frac {s_0}{\varepsilon^2})$ and $v_1=\varepsilon^2 (A^\xi)^{-1}(\frac {t_0}{\varepsilon^2})$. Also let $x_1=X^\varepsilon(u_1)$, $y_1=X^\varepsilon(v_1)$.\\ Since $A^\xi(t)-A^\xi(s)\leq t-s$ whenever $s\leq t$, we have $t_0\leq s_0\leq T$ and $s_0-t_0\leq\theta_0$. Besides, by definition of $A^\xi$, we have $x_1= X^{\xi,\,\varepsilon}(s_0)$ and $y_1= X^{\xi,\,\varepsilon}(t_0)$. We conclude that $$ \vert x_1-y_1\vert\leq \eta\,.$$ On the other hand, the same argument as in the proof of condition (i) based on Lemma \ref{lem:holes} shows that $$ \vert x_1-x_0\vert+ \vert y_1-y_0\vert\leq 2\varepsilon (\log\frac K{\varepsilon})^a\,.$$ We have proved that $$\sup_{v\leq u\leq T\,;\, u-v\leq\theta_0} \vert X^\varepsilon(u)- X^\varepsilon(v)\vert \leq \eta+2\varepsilon (\log\frac K{\varepsilon})^a\,.$$ Since both events `$\sup_{v\leq u\leq T\,;\, u-v\leq\theta_0} \vert X^{\xi,\,\varepsilon}(u)- X^{\xi,\,\varepsilon}(v)\vert\leq \eta$' and `$\sup_{t\leq T} \vert X^{\xi,\,\varepsilon}(t)\vert\leq K$' have large probabilities, we deduce that the processes $X^\varepsilon$ satisfy condition (ii).
\rule{.2cm}{.2cm}
\subsection{Convergence}
To conclude the derivation of Theorem \ref{theorem1} from Theorem \ref{theorem'}, it only remains to argue that, for any given time $t$, the two random variables $X^\varepsilon(t)$ and $X^{\xi,\,\varepsilon}(t)$ are close to each other in probability.
\begin{lm} \label{lem:conv} Under the assumptions of Theorem \ref{theorem1}, $Q_0$ almost surely, for any $t$, any $\delta>0$ and any $\eta>0$, for small enough $\xi$, \begin{eqnarray*} \limsup_{\varepsilon\rightarrow 0} P^\omega_0 ( \vert X^\varepsilon(t)-X^{\xi,\,\varepsilon}(t)\vert>\eta)\leq\delta\,. \end{eqnarray*} \end{lm}
{\it Proof}: we shall rely on Lemma \ref{lem:ergoA}. If $\vert X^\varepsilon(t)-X^{\xi,\,\varepsilon}(t)\vert>\eta$, then one of the following two events must hold: $$(I)= \{\sup_{\theta c(\xi)t\leq s\leq t} \vert X^{\xi,\,\varepsilon}(s)-X^{\xi,\,\varepsilon}(t)\vert>\frac \eta 2\}\,,$$ $$(II) =\{ \inf_{\theta c(\xi)t\leq s\leq t} \vert X^{\xi,\,\varepsilon}(s)-X^\varepsilon(t)\vert>\frac \eta 2\}\,.$$ Here $\theta$ is a parameter in $]0,1[$.\\ The invariance principle for $X^{\xi,\,\varepsilon}$, see Theorem \ref{theorem'}, implies that the probability of $(I)$ converges as $\varepsilon$ tends to $0$ to the probability $P(\sup_{\theta c(\xi)t\leq s\leq t} \sigma(\xi)\vert B(s)-B(t)\vert>\frac \eta 2)$, where $B$ is a Brownian motion. Since $\sigma(\xi)$ is bounded away from $0$, see Lemma \ref{lem:sigma}, and since $c(\xi)\rightarrow 1$ as $\xi\rightarrow 0$, we deduce that there exists a value for $\theta$ such that \begin{eqnarray}\label{eq:12} \limsup_{\xi \rightarrow 0}\limsup_{\varepsilon\rightarrow 0} P^\omega_0 (I)\leq \delta\,. \end{eqnarray}
We now assume that $\theta$ has been chosen so that (\ref{eq:12}) holds. We shall end the proof of the Lemma by showing that \begin{eqnarray} \label{eq:11} \limsup_{\varepsilon\rightarrow 0} P^\omega_0 (II)=0\,. \end{eqnarray} Since, from the tightness of the processes $X^\varepsilon$, see Lemma \ref{lem:tight}, we have $$\limsup_{\varepsilon\rightarrow 0}P^\omega_0 (\sup_{s\leq t}\vert X^\varepsilon(s)\vert\geq \varepsilon^{-1})=0\,,$$ we will estimate the probability that both events $(II)$ and `$\sup_{s\leq t}\vert X^\varepsilon(s)\vert\leq \varepsilon^{-1}$' hold. \\ Let $u=\varepsilon^2 A^\xi(\frac t{\varepsilon^2})$ and note that $u\leq t$. From Lemma \ref{lem:ergoA}, we know that $u\geq \theta c(\xi) t$ for small enough $\varepsilon$ depending on $\omega$. \\ If $X^\varepsilon(t)$ belongs to $\CC^\xi(\omega)$, then $X^\varepsilon(t)=X^{\xi,\,\varepsilon}(u)$ and therefore $(II)$ does not hold. \\ Otherwise $X^\varepsilon(t)$ belongs to a hole on the boundary of which sits $X^{\xi,\,\varepsilon}(u)$. Using the condition $\sup_{s\leq t}\vert X^\varepsilon(s)\vert\leq \varepsilon^{-1}$ and Lemma \ref{lem:holes}, we get that $$\vert X^\varepsilon(t)-X^{\xi,\,\varepsilon}(u)\vert\leq \varepsilon(\log \frac 1\varepsilon)^a\,.$$ For sufficiently small $\varepsilon$ we have $\varepsilon(\log 1/\varepsilon)^a<\frac \eta 2$ and therefore $(II)$ fails. The proof of (\ref{eq:11}) is complete.
\rule{.2cm}{.2cm}
{\it End of the proof of Theorem \ref{theorem1}}: choose times $0<t_1<...<t_k$. Use Lemma \ref{lem:conv} to deduce that for small enough $\xi$, as $\varepsilon$ tends to $0$, the law of $(X^\varepsilon(t_1),...,X^\varepsilon(t_k))$ comes close to the law of $(X^{\xi,\,\varepsilon}(t_1),...,X^{\xi,\,\varepsilon}(t_k))$, which in turn, according to Theorem \ref{theorem'}, converges to the law of $(\sigma(\xi)B(t_1),...,\sigma(\xi)B(t_k))$, where $B$ is a Brownian motion. We now let $\xi$ tend to $0$: since $\sigma(\xi)$ converges to $\sigma$, see Lemma \ref{lem:sigma}, the limiting law of $(X^\varepsilon(t_1),...,X^\varepsilon(t_k))$ is the law of $(\sigma B(t_1),...,\sigma B(t_k))$, i.e. we have proved that $X^\varepsilon$ converges in law to a Brownian motion with variance $\sigma^2$ in the sense of finite dimensional marginals. The tightness Lemma \ref{lem:tight} implies that the convergence in fact holds in the Skorokhod topology.
\rule{.2cm}{.2cm}
\vskip 1cm \section{Proof of Theorem \ref{theorem'}} \setcounter{equation}{0} \label{sec:proof}
We will outline here a proof of Theorem \ref{theorem'}. Our strategy is quite similar to the one recently used in \cite{kn:MR}, \cite{kn:Ba} and \cite{kn:MP} to study the simple symmetric random walk on a percolation cluster. No new idea is required.
\vskip .5cm {\bf Step 0: notation}
As before, we use the notation $\omega$ to denote a typical environment under the measure $Q$. For a given edge $e\in E_d$ (and a given choice of $\omega$), we define \begin{eqnarray*} \alpha(e)={\mathbf 1}_{\omega(e)>0}\,;\, \alpha'(e)={\mathbf 1}_{\omega(e)\geq\xi}\,. \end{eqnarray*} As in part \ref{sec:timechange}, let $\CC^\xi(\omega)$ be the infinite cluster of the percolation graph $\alpha'$. For $x,y\in\CC^\xi(\omega)$, we define the {\it chemical distance} $d^\xi_\omega(x,y)$ as the minimal number of jumps required for the process $\tX$ to go from $x$ to $y$, see part \ref{sec:deviation}.
We recall the definition of the generator ${\tLLo}$ from formula (\ref{2:gen'}). Since the function $\oxi$ is symmetric, the operator ${\tLLo}$ is reversible with respect to the measure $\mu_\omega=\sum_{z\in\CC^\xi(\omega)} n^\omega(z)\delta_z$.
Let $\CC^{n}(\omega)$ be the connected component of $\CC^\xi(\omega)\cap [-n,n]^d$ that contains the origin. Let $(\tXn(t),\ t \geq 0)$ be the random walk $\tX$ restricted to the set $\CC^n(\omega)$. The definition of $\tXn$ is the same as for $\tX$ except that jumps outside $\CC^n$ are now forbidden. Its Dirichlet form is \begin{eqnarray*} \tEEon(f,f)=\frac 12 \sum_{x\sim y\in \CC^n(\omega)} \oxi(x,y) (f(x)-f(y))^2\,. \end{eqnarray*}
We use the notation $\tau^n$ for the exit time of the process $\tX$ from the box $[-2n+1,2n-1]^d$ i.e. $\tau^n=\inf\{t\,;\, \tX(t)\notin[-2n+1,2n-1]^d\}$.
\vskip .5cm {\bf Step 1: Carne-Varopoulos bound}
The measure $\mu_\omega$ being reversible for the process $\tX$, the transition probabilities satisfy a Carne-Varopoulos bound: \begin{eqnarray*} P^\omega_x(\tX(t)=y)\leq Ce^{-d^\xi_\omega(x,y)^2/(4t)}+e^{-ct}\,, \end{eqnarray*} where $c=\log 4-1$ and $C$ is some constant that depends on $\xi$ and $\omega$. (See \cite{kn:MR}, appendix C.)
By Lemma \ref{lem:deviation}, we can replace the chemical distance $d^\xi_\omega(x,y)$ by the Euclidean distance $\vert x-y\vert$, provided that $x\in[-n,n]^d$ and $n$ is large enough. We get that, $Q_0^\xi$ almost surely, for large enough $n$, for any $x\in[-n,n]^d$ and any $y\in{\mathbb Z}^d$ such that $\vert x-y\vert\geq (\log n)^2$, \begin{eqnarray}\label{eq:carnevaro} P^\omega_x(\tX(t)=y)\leq Ce^{-\frac{\vert x-y \vert^2}{Ct}}+e^{-ct}\,.\end{eqnarray}
The same reasoning as in \cite{kn:MR}, appendix C (using Lemma \ref{lem:deviation} again) then leads to upper bounds for the exit time $\tau^n$: $Q_0^\xi$ almost surely, for large enough $n$, for any $x\in[-n,n]^d$ and any $t$, we have \begin{eqnarray} \label{eq:taun} P_x^\omega[\tau^n\leq t] \leq Ct n^d e^{-\frac{n^2}{Ct}}+e^{-ct}\,. \end{eqnarray} Indeed, let $N(t)$ be the number of jumps the random walk performs until time $t$ and let $\sigma^n$ be the number of jumps of the walk until it exits the box $[-2n+1,2n-1]^d$, so that $\sigma^n=N(\tau^n)$. Note that the process $(N(t)\,,\, t\in{\mathbb R}_+)$ is a Poisson process of rate $1$. With probability larger than $1-e^{-ct}$, we have $N(t)\leq 2t$. If $N(t)\leq 2t$ and $\tau^n\leq t$, then $\sigma^n\leq 2t$ and there are at most $2t$ choices for the value of $\sigma^n$. Let $y$ be the position of the walk at the exit time and let $z$ be the last point visited before exiting. Note that $d^\xi_\omega(z,y)=1$. Due to Lemma \ref{lem:deviation}, we have \begin{eqnarray*} \vert x-y\vert\leq \frac 1{c^-} d^\xi_\omega(x,y) \leq \frac 1{c^-} (d^\xi_\omega(x,z)+1) \leq \frac{c^+}{c^-} (1+\vert x-z\vert)\leq \frac{c^+}{c^-} (1+2n)\,. \end{eqnarray*} Note that our use of Lemma \ref{lem:deviation} here is legitimate. Indeed $\vert x-y\vert$ is of order $n$ and, since $d^\xi_\omega(z,y)=1$, Lemma \ref{lem:holes} implies that $\vert y-z\vert$ is at most of order $(\log n)^7$. Therefore $\vert x-z\vert$ is of order $n$ and thus certainly larger than $(\log n)^2$.
Thus we see that there are at most of order $n^d$ possible choices for $y$. Finally, due to (\ref{eq:carnevaro}), \begin{eqnarray*} P^\omega_x(\tX(s)=y)\leq C e^{-\frac{n^2}{Ct}}\,,\end{eqnarray*} for any $s\leq 2t$, $x\in[-n,n]^d$ and $y\notin [-2n+1,2n-1]^d$. Putting everything together, we get (\ref{eq:taun}).
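Schematically, and after enlarging the constant $C$, the union bound behind (\ref{eq:taun}) reads:

```latex
P_x^\omega[\tau^n\leq t]
\;\leq\; \underbrace{P_x^\omega[N(t)>2t]}_{\leq\, e^{-ct}}
\;+\; \underbrace{2t}_{\hbox{choices for }\sigma^n}\cdot
\underbrace{Cn^d}_{\hbox{choices for }y}\cdot
\underbrace{Ce^{-\frac{n^2}{Ct}}}_{\hbox{by (\ref{eq:carnevaro})}}
\;\leq\; Ctn^d e^{-\frac{n^2}{Ct}}+e^{-ct}\,.
```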
\vskip .5cm {\bf Step 2: Nash inequalities and on-diagonal decay}
\begin{lm}\label{lem:1} For any $\theta>0$, there exists a constant $c_u(\theta)$ such that, $Q_0^\xi$ a.s. for large enough $t$, we have \begin{eqnarray} \label{eq:ondiag}
P_x^{\omega}[\tX(t)=y]
\leq
\frac{c_u(\theta)}{t^{d/2}}\,, \end{eqnarray} for any $x\in \CC^\xi(\omega)$ and $y\in{\mathbb Z}^d$ such that $\vert x\vert\leq t^\theta$. \end{lm}
{\it Proof}:
We use the notation $\alpha'(e)={\mathbf 1}_{\omega(e)\geq\xi}$. Note that the random variables $(\alpha'(e)\,;\, e\in E_d)$ are independent Bernoulli variables with common parameter $Q(\alpha'(e)>0)=Q(\omega(e)\geq\xi)$. Since we have assumed that $Q(\omega(e)\geq\xi)>p_c$, the environment $\alpha'$ is a typical realization of super-critical bond percolation.
The following Nash inequality is proved in \cite{kn:MR}, equation (5): there exists a constant $\beta$ such that $Q_0^\xi$ a.s. for large enough $n$, for any function $f:\CC^n(\omega)\rightarrow{\mathbb R}$ one has \begin{eqnarray*}
\Var(f)^{1+\frac{2}{\varepsilon(n)}}
\leq \beta \,
n^{2(1-\frac{d}{\varepsilon(n)})}\,\EE^{\alpha',n}(f,\,f)\,\|f\|_1^{4/\varepsilon(n)}\,, \end{eqnarray*} where \begin{eqnarray*} \EE^{\alpha',n}(f,f)=\frac 12 \sum_{x\sim y\in \CC^n(\omega)}
\alpha'(x,y) (f(x)-f(y))^2 \,. \end{eqnarray*}
The variance and the $L_1$ norms are computed with respect to the counting measure on $\CC^n(\omega)$ and $\varepsilon(n)=d+2d\frac{\log\log n}{\log n}$. (Note that there is a typo in \cite{kn:MR} where it is claimed that (5) holds for the uniform probability on $\CC^n(\omega)$ instead of the counting measure.)
Inequality (\ref{2:lowbo}) implies that $\alpha'(x,y)\leq \xi^{-1} \oxi(x,y)$. Therefore $\EE^{\alpha',n}$ and $\tEEon$ satisfy the inequality \begin{eqnarray}\label{eq:compDF} \EE^{\alpha',n}(f,f)\leq \frac 1\xi\, \tEEon(f,f)\,. \end{eqnarray}
Using inequality (\ref{eq:compDF}) in the previous Nash inequality, we deduce that there exists a constant $\beta$ (that depends on $\xi$) such that $Q_0^\xi$ a.s. for large enough $n$, for any function $f:\CC^n(\omega)\rightarrow{\mathbb R}$ one has \begin{eqnarray} \label{eq:nash}
\Var(f)^{1+\frac{2}{\varepsilon(n)}}
\leq \beta\,
n^{2(1-\frac{d}{\varepsilon(n)})}\,\tEEon(f,\,f)\,\|f\|_1^{4/\varepsilon(n)}\,. \end{eqnarray}
As shown in \cite{kn:MR} part 4, the Carne-Varopoulos inequality (\ref{eq:carnevaro}), inequality (\ref{eq:taun}) and the Nash inequality (\ref{eq:nash}) can be combined to prove upper bounds on the transition probabilities. We thus obtain that: there exists a constant $c_u$ such that, $Q_0^\xi$ a.s. for large enough $t$, we have \begin{eqnarray} \label{eq:ondiag0}
P_0^{\omega}[\tX(t)=y]
\leq
\frac{c_u}{t^{d/2}}\,, \end{eqnarray} for any $y\in{\mathbb Z}^d$.
Using the translation invariance of $Q$, it is clear that estimate (\ref{eq:ondiag0}) in fact holds if we choose another point $x\in{\mathbb Z}^d$ to play the role of the origin. Thus, for any $x\in{\mathbb Z}^d$, $Q$ a.s. on the set $x\in \CC^\xi(\omega)$, for $t$ larger than some random value $t_0(x)$, we have \begin{eqnarray} \label{eq:ondiag'}
P_x^{\omega}[\tX(t)=y]
\leq
\frac{c_u}{t^{d/2}}\,, \end{eqnarray} for any $y\in{\mathbb Z}^d$.
In order to deduce the Lemma from the upper bound (\ref{eq:ondiag'}), one needs to control the tail of the law of $t_0(0)$. \\ Looking at the proofs in \cite{kn:MR}, one sees that all the error probabilities decay faster than any polynomial. More precisely, the $Q_0^\xi$ probability that inequality (\ref{eq:nash}) fails for some $n\geq n_0$ decays faster than any polynomial in $n_0$. From the proof of Lemma \ref{lem:deviation}, we also know that the $Q_0^\xi$ probability that inequality (\ref{eq:carnevaro}) fails for some $n\geq n_0$ decays faster than any polynomial in $n_0$. As a consequence, a similar bound holds for inequality (\ref{eq:taun}).\\ To deduce error bounds for (\ref{eq:ondiag0}), one then needs to go to part 4 of \cite{kn:MR}. Since the upper bound (\ref{eq:ondiag0}) is deduced from (\ref{eq:carnevaro}), (\ref{eq:taun}) and (\ref{eq:nash}) by choosing $t\log t =b n^2$ for an appropriate constant $b$, we get that $Q_0^\xi(\hbox {inequality (\ref{eq:ondiag0}) fails for some $t\geq t_0$})$ decays faster than any polynomial in $t_0$. By translation invariance, the same holds for (\ref{eq:ondiag'}), i.e. for any $A>0$, there exists $T$ such that \begin{eqnarray*} Q(x\in \CC^\xi(\omega)\,\hbox{and}\,t_0(x)\geq t_0 )\leq t_0^{-A}\,,\end{eqnarray*} for any $t_0>T$. Therefore, \begin{eqnarray*} Q(\exists x\in \CC^\xi(\omega)\,;\, \vert x\vert\leq t_0^\theta \,\hbox{and}\, t_0(x)\geq t_0)\leq t_0^{d\theta-A}\,.\end{eqnarray*} One then chooses $A$ larger than $d\theta+1$ and the Borel-Cantelli lemma gives the end of the proof of (\ref{eq:ondiag}).
\rule{.2cm}{.2cm}
\vskip .5cm {\bf Step 3: exit times estimates and tightness}
We denote with $\tau(x,r)$ the exit time of the random walk from the ball of center $x$ and Euclidean radius $r$.
\begin{lm}\label{lem:2} For any $\theta>0$, there exists a constant $c_e$ such that, $Q_0^\xi$ a.s. for large enough $t$, we have \begin{eqnarray} \label{eq:exit}
P_x^{\omega}[\tau(x,r)< t]
\leq c_e \frac{\sqrt{t}}r\,, \end{eqnarray} for any $x\in {\mathbb Z}^d$ and $r$ such that $\vert x\vert\leq t^\theta$ and $r\leq t^\theta$. \end{lm}
{\it Proof}: the argument is the same as in \cite{kn:Ba}, part 3. We define $$ M_x(t)=E^\omega_x[d^\xi_\omega(x,X^\xi(t))]$$ and $$ Q_x(t)=-E^\omega_x[\log q^\omega_t(x,X^\xi(t))]\,,$$ where $q^\omega_t(x,y)=P^\omega_x(X^\xi(t)=y)/\mu_\omega(x)$. Then, for large enough $t$ and for $\vert x\vert\leq t^\theta$, one has: \begin{eqnarray*} &&Q_x(t)\geq -\log c_u+\frac d 2 \log t\,,\\ &&M_x(t)\geq c_2 \exp(Q_x(t)/d)\,,\\ &&Q_x'(t)\geq \frac 12 (M_x'(t))^2\,. \end{eqnarray*} The first inequality is an immediate consequence of Lemma \ref{lem:1}. The second one is proved as in \cite{kn:Ba}, Lemma 3.3, and the third one as in \cite{kn:Ba}, equation (3.10), using ideas from \cite{kn:Bass} and \cite{kn:Nash}. Note that, in the proof of the second inequality, we used Lemma \ref{lem:deviation} to control the volume growth in the chemical distance $d^\xi_\omega$. One now integrates these inequalities to deduce that \begin{eqnarray}\label{eq:mean} c_1\sqrt t \leq M_x(t)\leq c_2 \sqrt t\,. \end{eqnarray} Once again the proof is the same as in \cite{kn:Ba}, Proposition 3.4. Note that, in the notation of \cite{kn:Ba}, $T_B=\vert x\vert^{1/\theta}$ so that equation (\ref{eq:mean}) holds for $t\geq \frac 1\theta \vert x\vert^{1/\theta}\log\vert x\vert$. The end of the proof is identical to the proof of equation (3.13) in \cite{kn:Ba}.
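For the reader's convenience, here is a sketch of the integration step (the detailed argument is in \cite{kn:Ba}, Proposition 3.4). The first two inequalities immediately give the lower bound in (\ref{eq:mean}), while Cauchy-Schwarz and the third inequality control the increments of $M_x$:

```latex
M_x(t)\;\geq\; c_2\, e^{Q_x(t)/d}\;\geq\; c_2\,(c_u)^{-1/d}\,\sqrt t\,,
\qquad
\big(M_x(t)-M_x(t/2)\big)^2\;\leq\;\frac t2\int_{t/2}^t \big(M_x'(s)\big)^2\,ds
\;\leq\; t\,\big(Q_x(t)-Q_x(t/2)\big)\,.
```

Combining the second display with the bound $Q_x(t)\leq d\log(M_x(t)/c_2)$, which comes from the second inequality above, and iterating over dyadic scales yields the upper bound in (\ref{eq:mean}).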
\rule{.2cm}{.2cm}
\begin{lm}\label{lem:2'} $Q_0^\xi$ a.s. for large enough $t$, we have \begin{eqnarray} \label{eq:exit'}
P_x^{\omega}[\tau(x,r)< t]
\leq 27 (c_e)^3 (\frac{\sqrt{t}}r)^3\,, \end{eqnarray} for any $x\in {\mathbb Z}^d$ and $r$ such that $\vert x\vert\leq t^\theta$ and $r\leq t^\theta$. \end{lm}
{\it Proof}: let $x'=X^\xi(\tau(x,r/3))$, $x''=X^\xi(\tau'(x',r/3))$ where $\tau'(x',r/3)$ is the exit time from the ball of center $x'$ and radius $r/3$ after time $\tau(x,r/3)$ and let $\tau''(x'',r/3)$ be the exit time from the ball of center $x''$ and radius $r/3$ after time $\tau'(x',r/3)$. In order that $\tau(x,r)< t$ under $P_x^{\omega}$, we must have $\tau(x,r/3)< t$, $\tau'(x',r/3)< t$ and $\tau''(x'',r/3)<t$. We can then use Lemma \ref{lem:2} to estimate the probabilities of these $3$ events and conclude that (\ref{eq:exit'}) holds.
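In formulas, the strong Markov property applied at the successive exit times gives:

```latex
P_x^\omega[\tau(x,r)<t]\;\leq\;
P_x^\omega[\tau(x,r/3)<t]\,
\sup_{x'}P_{x'}^\omega[\tau(x',r/3)<t]\,
\sup_{x''}P_{x''}^\omega[\tau(x'',r/3)<t]
\;\leq\;\Big(c_e\,\frac{3\sqrt t}{r}\Big)^3
\;=\;27\,(c_e)^3\,\Big(\frac{\sqrt t}{r}\Big)^3\,.
```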
\rule{.2cm}{.2cm}
\begin{lm}\label{lem:3} For small enough $\xi$, $Q_0$ almost surely, under $P^\omega_0$, the family of processes $(X^{\xi,\varepsilon}(t)=\varepsilon X^\xi(\frac t{\varepsilon^2}),t\in{\mathbb R}_+)$ is tight in the Skorokhod topology (as $\varepsilon$ goes to $0$). \end{lm}
{\it Proof}: we shall prove that, for any $T>0$, for any $\eta>0$ and for small enough $\theta_0$, \begin{eqnarray} \label{eq:ti10} \limsup_\varepsilon \sup_{v\leq T} P^\omega_0(\sup_{u\leq T\,;\, v\leq u\leq v+\theta_0} \vert X^{\xi,\varepsilon}(u)-X^{\xi,\varepsilon}(v)\vert >\eta) \leq 27 (c_e)^3 (\frac{\sqrt {\theta_0}}\eta)^3\,. \end{eqnarray} Indeed inequality (\ref{eq:ti10}) implies that
\begin{eqnarray} \label{eq:ti11}
\limsup_{\theta_0} \frac 1 {\theta_0} \limsup_\varepsilon \sup_{v\leq T}P^\omega_0(\sup_{u\leq T\,;\, v\leq u\leq v+\theta_0} \vert X^{\xi,\varepsilon}(u)-X^{\xi,\varepsilon}(v)\vert >\eta)=0\,.\end{eqnarray} According to Theorem 8.3 in Billingsley's book \cite{kn:Bill}, this last inequality is sufficient to ensure tightness.
We use Lemma \ref{lem:2} with $\theta=1$ to check that \begin{eqnarray*} P^\omega_0(\sup_{t\leq T}\vert X^{\xi,\varepsilon}(t)\vert\geq K) =P^\omega_0(\tau(0,\frac K\varepsilon)\leq\frac T{\varepsilon^2})\leq c_e\frac{\sqrt T}K\,. \end{eqnarray*} (We could use Lemma \ref{lem:2} since $\frac K\varepsilon\leq \frac T{\varepsilon^2}$ for small $\varepsilon$.)
Next choose $\eta>0$ and use Lemma \ref{lem:2'} with $\theta=3$ and the Markov property to get that \begin{eqnarray*} P^\omega_0(\sup_{v\leq u\leq T\,;\, u-v\leq\theta_0} \vert X^{\xi,\varepsilon}(u)-X^{\xi,\varepsilon}(v)\vert >\eta) \leq P^\omega_0(&&\sup_{t\leq T}\vert X^{\xi,\varepsilon}(t)\vert\geq K) \\&&+ \sup_{y\,;\, \vert y\vert\leq K/\varepsilon} P^\omega_y(\tau(y,\frac\eta\varepsilon)\leq \frac{\theta_0}{\varepsilon^2})\,.\end{eqnarray*} If we choose $K$ of order $1/\varepsilon$ and pass to the limit as $\varepsilon$ tends to $0$, then, due to the previous inequality, the contribution of the first term vanishes. As for the second term, by Lemma \ref{lem:2'}, it is bounded by $27 (c_e)^3 (\frac{\sqrt {\theta_0}}\eta)^3$. Note that we could use Lemma \ref{lem:2'} since $\frac K\varepsilon\leq (\frac {\theta_0}{\varepsilon^2})^3$ and $\frac \eta\varepsilon\leq (\frac {\theta_0}{\varepsilon^2})^3$ for small $\varepsilon$. Thus the proof of (\ref{eq:ti10}) is complete.
\rule{.2cm}{.2cm}
\vskip .5cm {\bf Step 4: Poincar\'e inequalities and end of the proof of Theorem \ref{theorem'}}
Applied to a centered function $f$, Nash inequality (\ref{eq:nash}) reads: \begin{eqnarray*}
\Vert f\Vert_2^{2+\frac{4}{\varepsilon(n)}}
\leq \beta\,
n^{2(1-\frac{d}{\varepsilon(n)})}\,\tEEon(f,\,f)\,\|f\|_1^{4/\varepsilon(n)}\,. \end{eqnarray*}
H\"older's inequality implies that $$\Vert f\Vert_1\leq \Vert f\Vert_2 (2n+1)^{d/2}$$ since $\# \CC^n(\omega)\leq (2n+1)^d$. We deduce that any centered function on $\CC^n(\omega)$ satisfies \begin{eqnarray*} \Vert f\Vert_2^2 \leq \beta n^2\,\tEEon(f,\,f)\,, \end{eqnarray*} for some constant $\beta$. Equivalently, any (not necessarily centered) function on $\CC^n(\omega)$ satisfies \begin{eqnarray*} \Var (f)\leq \beta n^2\,\tEEon(f,\,f)\,. \end{eqnarray*}
Thus we have proved the following Poincar\'e inequality on $\CC^n(\omega)$: there is a constant $\beta$ such that, $Q_0^\xi$ a.s. for large enough $n$, for any function $f:\CC^n(\omega)\rightarrow{\mathbb R}$ with $\sum_{x\in \CC^n(\omega)}f(x)=0$, \begin{eqnarray}\label{eq:poinc1} \sum_{x\in \CC^n(\omega)}f(x)^2 \leq \beta n^2\, \sum_{x\sim y\in \CC^n(\omega)}\oxi(x,y) (f(x)-f(y))^2\,. \end{eqnarray}
Our second Poincar\'e inequality is derived from \cite{kn:Ba}, see Definition 1.7, Theorem 2.18, Lemma 2.13 part a) and Proposition 2.17 part b): there exist constants $M<1$ and $\beta$ such that $Q_0^\xi$ a.s. for any $\delta>0$, for large enough $n$, for any $z \in {\mathbb Z}^d$ s.t. $\vert z\vert\le n$ and for any function $f:{\mathbb Z}^d\rightarrow{\mathbb R}$, \begin{eqnarray}\label{eq:poinc2} \sum_{x\in \CC^\xi(\omega)\cap (z+[-M\delta n,M\delta n]^d)}
f(x)^2 \leq \beta \delta^2 n^2\, \sum_{x\sim y\in \CC^\xi(\omega)\cap (z+[-\delta n,\delta n]^d)}
\oxi(x,y) (f(x)-f(y))^2\,. \end{eqnarray} In \cite{kn:Ba}, inequality (\ref{eq:poinc2}) is in fact proved for the Dirichlet form $\EE^{\alpha',n}$ but the comparison inequality (\ref{eq:compDF}) implies that it also holds for the Dirichlet form $\tEEon$.
One can now conclude the proof of the Theorem following the argument in \cite{kn:MP} line by line starting from paragraph 2.2.
\rule{.2cm}{.2cm}
\vskip 1cm \section{Percolation results} \setcounter{equation}{0} \label{sec:perco}
\subsection{Prerequisites on site percolation}
We shall use some properties of site percolation that we state below.
By site percolation of parameter $r$ on ${\mathbb Z}^d$, we mean the product Bernoulli measure of parameter $r$ on the set of maps $\zeta:{\mathbb Z}^d\rightarrow\{0,1\}$. We identify any such map with the sub-graph of the grid whose vertices are the points $x\in{\mathbb Z}^d$ such that $\zeta(x)=1$ and which is equipped with the edges of the grid linking two points $x,y$ such that $\zeta(x)=\zeta(y)=1$.
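For concreteness, a Bernoulli site configuration on a finite box can be sampled as follows (an illustrative Python sketch, not part of the proofs; the function name is ours):

```python
import random

def sample_site_percolation(r, n, seed=0):
    """Sample Bernoulli site percolation of parameter r on the box
    [-n, n]^2 and return the set of open sites (those with zeta(x) = 1)."""
    rng = random.Random(seed)
    return {(i, j)
            for i in range(-n, n + 1)
            for j in range(-n, n + 1)
            if rng.random() < r}
```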
Let $l>1$. Call a sub-set of ${\mathbb Z}^d$ {\it $l$-connected} if it is connected for the graph structure defined by: two points are neighbors when the Euclidean distance between them is less than $l$.
We recall our notation $\vert x-y\vert$ for the Euclidean distance between $x$ and $y$.
A {\it path} is a sequence of vertices of ${\mathbb Z}^d$ such that two successive vertices are neighbors. We mostly consider injective paths. With some abuse of vocabulary, a sequence of vertices of ${\mathbb Z}^d$ in which two successive vertices are at distance not more than $l$ will be called an {\it $l$-nearest-neighbor path}. Let $\pi=(x_0,...,x_k)$ be a sequence of vertices. We define its length $$\vert \pi\vert=\sum_{j=1}^k \vert x_{j-1} -x_j\vert\,,$$ and its cardinality $\#\pi=\#\{x_0,...,x_k\}$. ($\#\pi=k+1$ for an injective path.) When convenient, we identify an injective path with a set (its range).
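The notions just introduced can be made concrete in a short illustrative sketch (again not used in the proofs; the function names are ours). Here `l_connected_components` uses the strict inequality of the definition of $l$-connectedness above, and `path_length` and `path_cardinality` implement $\vert\pi\vert$ and $\#\pi$:

```python
from math import dist  # Euclidean distance (Python >= 3.8)

def l_connected_components(sites, l):
    """Split a finite set of Z^d points into l-connected components:
    two points are neighbors when their Euclidean distance is less than l."""
    sites = list(sites)
    seen, components = set(), []
    for s in sites:
        if s in seen:
            continue
        component, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x in component:
                continue
            component.add(x)
            stack.extend(y for y in sites
                         if y not in component and dist(x, y) < l)
        seen |= component
        components.append(component)
    return components

def path_length(pi):
    """|pi| = sum of Euclidean distances between successive vertices."""
    return sum(dist(pi[j - 1], pi[j]) for j in range(1, len(pi)))

def path_cardinality(pi):
    """#pi = number of distinct vertices visited by pi."""
    return len(set(pi))
```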
\begin{lm}\label{lem:site} Let $l>1$. There exists $p_1>0$ such that for $r<p_1$, almost any realization of site percolation of parameter $r$ has only finite $l$-connected components and, for large enough $n$, any $l$-connected component that intersects the box $[-n,n]^d$ has volume smaller than $(\log n)^{6/5}$. \end{lm}
{\it Proof}: the number of $l$-connected sets that contain a fixed vertex and have volume $m$ is smaller than $e^{a(l)m}$ for some constant $a(l)$, see \cite{kn:G}. Thus the number of $l$-connected sets of volume $m$ that intersect the box $[-n,n]^d$ is smaller than $(2n+1)^d e^{a(l)m}$. But the probability that a given set of volume $m$ contains only open sites is $r^m\leq p_1^m$. We now choose $p_1$ small enough so that $\sum_n\sum_{m\geq (\log n)^{6/5}} (2n+1)^d e^{a(l)m} p_1^m<\infty$ and the Borel-Cantelli lemma yields the conclusion of Lemma \ref{lem:site}.
\rule{.2cm}{.2cm}
As in the case of bond percolation discussed in the introduction, it is well known that for $r$ larger than some critical value, almost any realization of site percolation of parameter $r$ has a unique infinite connected component, the {\it infinite cluster}, which we denote with $\CC$.
\begin{lm}\label{lem:site'} There exists $p_2<1$ such that for $r>p_2$, for almost any realization of site percolation of parameter $r$ and for large enough $n$, any connected component of the complement of the infinite cluster $\CC$ that intersects the box $[-n,n]^d$ has volume smaller than $(\log n)^{5/2}$. \end{lm}
{\it Proof}: let $\zeta$ be a typical realization of site percolation of parameter $r$. We assume that $r$ is above the critical value so that there is a unique infinite cluster, $\CC$. We also assume that $1-r<p_1$ where $p_1$ is the value provided by Lemma \ref{lem:site} for $l=d$.
Let $A$ be a connected component of the complement of $\CC$. Define the {\it interior boundary of $A$}: $\partial_{int}A=\{x\in A\,;\, \exists y\ \hbox{s.t.}\ (x,y)\in {\mathbb E}_d\ \hbox{and}\ y\notin A\}$. It is known that $\partial_{int}A$ is $d$-connected,
see \cite{kn:DP}, Lemma 2.1. By construction any $x\in \partial_{int}A$ satisfies $\zeta(x)=0$. Since the application $x\rightarrow 1-\zeta(x)$ is a typical realization of site percolation of parameter $1-r$ and $1-r<p_1$, as an application of Lemma \ref{lem:site} we get that $\partial_{int}A$ is finite. Because we already know that the complement of $A$ is infinite (since it contains $\CC$), it implies that $A$ itself is finite.
We now assume that $A$ intersects the box $[-n,n]^d$. Choose $n$ large enough so that $\CC\cap [-n,n]^d\not=\emptyset$; then $[-n,n]^d$ is not a sub-set of $A$. Since $A$ intersects $[-n,n]^d$ but does not contain it, $\partial_{int}A$ must intersect $[-n,n]^d$. Applying Lemma \ref{lem:site} again, we get that, for large $n$, the volume of $\partial_{int}A$ is smaller than $(\log n)^{6/5}$. The classical isoperimetric inequality in ${\mathbb Z}^d$ implies that, for any finite connected set $B$, one has $(\#\partial_{int} B)^{d/(d-1)}\geq {\cal I} \# B$ for some constant $\cal I$. Therefore $\# A\leq {\cal I}^{-1} (\log n)^{6d/(5(d-1))}$. Since $6d/(5(d-1))\leq 12/5<5/2$ for $d\geq 2$, the proof is complete.
\rule{.2cm}{.2cm}
\begin{lm}\label{lem:site''} There exist $p_3<1$ and a constant $c_3$ such that for $r>p_3$, for almost any realization of site percolation of parameter $r$ and for large enough $n$, for any two points $x,y$ in the box $[-n,n]^d$ such that $\vert x-y\vert\geq (\log n)^{3/2}$ we have\\ (i) for any injective $d$-nearest-neighbor path $\pi$ from $x$ to $y$, \begin{eqnarray*} \#\{z\in\pi\,;\,\zeta(z)=1\}\geq c_3\vert x-y\vert\,. \end{eqnarray*} (ii) for any injective ($1$-nearest-neighbor) path $\pi$ from $x$ to $y$, \begin{eqnarray*} \#(\CC\cap\pi)\geq c_3\vert x-y\vert\,. \end{eqnarray*} \end{lm}
{\it Proof}: we assume that $r$ is close enough to $1$ so that there is a unique infinite cluster $\CC$. We also assume that $1-r<p_1$, where $p_1$ is the constant appearing in Lemma \ref{lem:site} for $l=1$. Then the complement of $\CC$ only has finite connected components.
Part (i) of the Lemma is proved by a classical Borel-Cantelli argument based on the following simple observations: the number of injective $d$-nearest-neighbor paths $\pi$ from $x$ of cardinality $L$ is bounded by $(c_d)^L$ for some constant $c_d$ that depends on the dimension $d$ only; the probability that a given set of cardinality $L$ contains less than $dc_3L$ sites where $\zeta=1$ is bounded by $\exp(\lambda dc_3 L)(re^{-\lambda}+1-r)^L$ for all $\lambda>0$. We choose $c_3<\frac 1d$ and $\lambda$ such that $c_d e^{-(1-dc_3)\lambda}<1$ and $p_3$ such that $\gamma=c_d e^{\lambda dc_3} (p_3e^{-\lambda}+1-p_3)<1$. Let now $x$ and $y$ be as in the Lemma. Note that any injective $d$-nearest-neighbor path $\pi$ from $x$ to $y$ satisfies $\#\pi\geq \frac 1d \vert x-y\vert\geq \frac 1d (\log n)^{3/2}$. Therefore the probability that there is an injective $d$-nearest-neighbor path $\pi$ from $x$ to $y$ such that $\#\{z\in\pi\,;\,\zeta(z)=1\}<c_3\vert x-y\vert$ is smaller than $\sum_{L\geq \frac 1d (\log n)^{3/2}} \gamma^L$ and the probability that (i) fails for some $x$ and $y$ is smaller than $(2n+1)^{2d}\sum_{L\geq \frac 1d (\log n)^{3/2}} \gamma^L$. Since $\sum_n (2n+1)^{2d} \sum_{L\geq \frac 1d (\log n)^{3/2}} \gamma^L<\infty$, the Borel-Cantelli lemma then yields that, for large enough $n$, part (i) of Lemma \ref{lem:site''} holds.
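The probability bound used above is the exponential Chebyshev inequality, spelled out here for convenience: if $S$ is a fixed set of cardinality $L$ and $Z=\#\{z\in S\,;\,\zeta(z)=1\}$, then $Z$ is a Binomial$(L,r)$ variable and, for any $\lambda>0$,

```latex
P(Z<dc_3L)\;=\;P\big(e^{-\lambda Z}>e^{-\lambda dc_3L}\big)
\;\leq\; e^{\lambda dc_3L}\,E\big[e^{-\lambda Z}\big]
\;=\; e^{\lambda dc_3L}\,\big(re^{-\lambda}+1-r\big)^L\,.
```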
We prove part (ii) by reducing it to an application of part (i). Assume that, for some points $x$ and $y$ as in the Lemma, there exists an injective nearest-neighbor path $\pi$ from $x$ to $y$ such that $\#(\CC\cap\pi)< c_3\vert x-y\vert$. We first modify the path $\pi$ into a $d$-nearest-neighbor path from $x$ to $y$, say $\pi'$, in the following way: the parts of $\pi$ that lie in $\CC$ remain unchanged but the parts of $\pi$ that visit the complement of $\CC$ are modified so that they only visit points where $\zeta=0$. Such a modified path $\pi'$ exists because the interior boundary of a connected component of the complement of $\CC$ is $d$-connected (as we already mentioned in the proof of Lemma \ref{lem:site'}) and only contains points where $\zeta=0$.
Observe that $\CC\cap\pi'=\CC\cap\pi$ and that $\CC\cap\pi'=\{z\in\pi'\,;\,\zeta(z)=1\}$ so that \begin{eqnarray*} \#\{z\in\pi'\,;\,\zeta(z)=1\}< c_3\vert x-y\vert\,. \end{eqnarray*} Next turn $\pi'$ into an injective $d$-nearest-neighbor path, say $\pi''$, by suppressing loops in $\pi'$. Clearly $\{z\in\pi''\,;\,\zeta(z)=1\}\subset\{z\in\pi'\,;\,\zeta(z)=1\}$ and therefore \begin{eqnarray*} \#\{z\in\pi''\,;\,\zeta(z)=1\}< c_3\vert x-y\vert\,, \end{eqnarray*} a contradiction with part (i) of the Lemma.
\rule{.2cm}{.2cm}
\subsection{Proof of Lemma \ref{lem:holes}}
Lemma \ref{lem:holes} only deals with the geometry of percolation clusters, with no reference to random walks. We will restate it as a percolation lemma at the cost of changing our notation a little. In order to make a distinction with a typical realization of an environment, for which we used the notation $\omega$, we will use the letters $\alpha$ or $\alpha'$ to denote typical realizations of percolation graphs. Thus one switches from the notation of the following proof back to the notation of part \ref{sec:deduce} using the following dictionary: \begin{eqnarray*} \alpha(e)={\mathbf 1}_{\omega(e)>0}\,&;&\, \alpha'(e)={\mathbf 1}_{\omega(e)\geq\xi}\\ q=Q(\omega(e)>0)\,&;&\, p=Q(\omega(e)\geq\xi\,\vert\, \omega(e)>0)\,. \end{eqnarray*} This way taking $\xi$ close to $0$ is equivalent to taking $p$ close to $1$.
We rely heavily on renormalization techniques, see Proposition 2.1 in \cite{kn:AP}.
As in the introduction, we identify a sub-graph of ${\mathbb Z}^d$ with an application $\alpha:{\mathbb E}_d\rightarrow\{0,1\}$, writing $\alpha(x,y)=1$ if the edge $(x,y)$ is present in $\alpha$ and $\alpha(x,y)=0$ otherwise. Thus $\AAA=\{0,1\}^{{\mathbb E}_d}$ is identified with the set of sub-graphs of ${\mathbb Z}^d$. Edges pertaining to $\alpha$ are then called {\it open}. Connected components of such a sub-graph will be called {\it clusters}.
Define now $Q$ to be the probability measure on $\{0,1\}^{{\mathbb E}_d}$ under which the random variables $(\alpha(e),\,e \in {\mathbb E}_d)$ are Bernoulli$(q)$ independent variables with \begin{eqnarray*} q>p_c. \end{eqnarray*} Then, $Q$ almost surely, the graph $\alpha$ has a unique infinite cluster denoted with $\CC(\alpha)$.
For a typical realization of the percolation graph under $Q$, say $\alpha$, let $Q^\alpha$ be the law of bond percolation on $\CC(\alpha)$ with parameter $p$. We shall denote $\alpha'$ a typical realization under $Q^\alpha$ i.e. $\alpha'$ is a random subgraph of $\CC(\alpha)$ obtained by keeping (resp. deleting) edges with probability $p$ independently of each other. We always assume that $p$ is close enough to $1$ so that $Q^\alpha$ almost surely there is a unique infinite cluster in $\alpha'$ that we denote $\CC^\alpha(\alpha')$. By construction $\CC^\alpha(\alpha')\subset \CC(\alpha)$. Connected components of the complement of $\CC^\alpha(\alpha')$ in $\CC(\alpha)$ are called {\it holes}.
We now restate Lemma \ref{lem:holes}:\\ {\it there exists $p_0<1$ such that for $p>p_0$, for $Q$ almost any $\alpha$, for $Q^\alpha$ almost any $\alpha'$, for large enough $n$, any hole intersecting the box $[-n,n]^d$ has volume smaller than $(\log n)^a$.}
{\it Renormalization}: let $\alpha$ be a typical realization of percolation under $Q$.
Let $N$ be an integer. We chop ${\mathbb Z}^d$ into a disjoint union of boxes of side length $2N+1$. Say ${\mathbb Z}^d=\cup_{{\mathbf i}\in{\mathbb Z}^d}B_{\mathbf i}$, where $B_{\mathbf i}$ is the box of center $(2N+1){\mathbf i}$. Following \cite{kn:AP}, let $B'_{\mathbf i}$ be the box of center $(2N+1){\mathbf i}$ and side length $\frac 5 2 N +1$. From now on, the word {\it box} will mean one of the boxes $B_{\mathbf i}, {\mathbf i}\in{\mathbb Z}^d$.
We say that a box $B_{\mathbf i}$ is {\it white} if $B_{\mathbf i}$ contains at least one edge from $\alpha$ and the event $R_{\mathbf i}^{(N)}$ in equation (2.9) of \cite{kn:AP} is satisfied. Otherwise, $B_{\mathbf i}$ is a {\it black} box. We recall that the event $R_{\mathbf i}^{(N)}$ is defined by: there is a unique cluster of $\alpha$ in $B'_{\mathbf i}$, say $K_{\mathbf i}$; all open paths contained in $B'_{\mathbf i}$ and of radius larger than $\frac 1 {10} N$ intersect $K_{\mathbf i}$ within $B'_{\mathbf i}$; $K_{\mathbf i}$ is crossing for each subbox $B\subset B'_{\mathbf i}$ of side larger than $\frac 1 {10} N$. See \cite{kn:AP} for details. We call $K_{\mathbf i}$ the {\it crossing cluster } of $\alpha$ in the box $B_{\mathbf i}$. Note the following consequences of this definition.
(Fact i) If $x$ and $y$ belong to the same white box $B_{\mathbf i}$ and both $x$ and $y$ belong to the infinite cluster of $\alpha$, then there is a path in $\CC(\alpha)$ connecting $x$ and $y$ within $B'_{\mathbf i}$.
(Fact ii) Choose two neighboring indices $\mathbf i$ and $\mathbf j$ with $\vert {\mathbf i}-{\mathbf j}\vert=1$ and such that both boxes $B_{\mathbf i}$ and $B_{\mathbf j}$ are white. As before, let $K_{\mathbf i}$ and $K_{\mathbf j}$ be the crossing clusters in $B_{\mathbf i}$ and $B_{\mathbf j}$ respectively. Let $x\in K_{\mathbf i}$ and $y\in K_{\mathbf j}$. Then there exists a path in $\alpha$ connecting $x$ and $y$ within $B'_{\mathbf i}\cup B'_{\mathbf j}$.
The {\it renormalized process} is the random subset of ${\mathbb Z}^d$ obtained by taking the image of the initial percolation model under the map $\phi_N$, see equation (2.11) in \cite{kn:AP}. A site $\mathbf i \in{\mathbb Z}^d$ is thus declared {\it white} if the box $B_{\mathbf i}$ is white.
Let $\mathbf Q$ be the law of the renormalized process. The comparison result of Proposition 2.1 in \cite{kn:AP} states that $\mathbf Q$ stochastically dominates the law of site percolation with parameter $p(N)$ with $p(N)\rightarrow 1$ as $N$ tends to $\infty$.
We now introduce the extra percolation $Q^\alpha$. Let us call {\it grey} a white box $B_{\mathbf i}$ that contains an edge $e\in \CC(\alpha)$ such that $\alpha'(e)=0$. We call {\it pure white} white boxes that are not grey.
Let $\mathbf Q'$ be the law on subsets of the renormalized grid obtained by keeping pure white boxes, and deleting both black and grey boxes. We claim that $\mathbf Q'$ dominates the law of site percolation with parameter $p'(N)=p(N) p^{\,e_N(d)}$, where $e_N(d)$ is the number of edges in a box of side length $2N+1$. (Remember that $p$ is the parameter of $Q^\alpha$.) This claim is a consequence of the three following facts. We already indicated that $\mathbf Q$ stochastically dominates the law of site percolation with parameter $p(N)$. The conditional probability that a box $B_{\mathbf i}$ is pure white given that it is white is larger than or equal to $p^{\,e_N(d)}$. Besides, still under the condition that $B_{\mathbf i}$ is white, the event `$B_{\mathbf i}$ is pure white' is independent of the colors of the other boxes.
We further call {\it immaculate} a pure white box $B_{\mathbf i}$ such that any box $B_{\mathbf j}$ intersecting $B'_{\mathbf i}$ is also pure white. Call $\mathbf Q''$ the law on subsets of the renormalized grid obtained by keeping only immaculate boxes. Since the event `$B_{\mathbf i}$ is immaculate' is an increasing function with respect to the percolation process of pure white boxes, we get that $\mathbf Q''$ stochastically dominates the law of site percolation with parameter $p''(N)=p'(N)^{3^d}$.
{\it End of the proof of Lemma \ref{lem:holes}}: choose $p_0$ and $N$ such that $p''(N)$ is close enough to $1$ so that, $\mathbf Q''$ almost surely, there is an infinite cluster of immaculate boxes that we call ${\mathbb C}$.
For $\mathbf i\in{\mathbb C}$, let $K_{\mathbf i}$ be the crossing cluster in the box $B_{\mathbf i}$ and let $K=\cup_{\mathbf i\in{\mathbb C}}K_{\mathbf i}$. Then $K$ is connected (this follows from the definition of white boxes, see (Fact i) and (Fact ii) above) and infinite (because ${\mathbb C}$ is infinite). Thus we have $K\subset \CC^\alpha(\alpha')$.
Let $A$ be a hole and let $\mathbf A$ be the set of indices $\mathbf i$ such that $B_{\mathbf i}$ intersects $A$. Observe that $\mathbf A$ is connected. We claim that $$\mathbf A\cap{\mathbb C}=\emptyset\,.$$
Indeed, assume there exists $x\in B_{\mathbf i}$ such that $\mathbf i\in {\mathbb C}$ and $x\in A$. By definition $A$ is a subset of $\CC(\alpha)$ and therefore $x\in\CC(\alpha)$. Let $y\in K_{\mathbf i}$, $y\not=x$. As we already noted, $y\in\CC^\alpha(\alpha')$. Since $x\in \CC(\alpha)$ and $y\in\CC(\alpha)$, there is a path, $\pi$, connecting $x$ and $y$ within $B'_{\mathbf i}$, see (Fact i) above. But $B_{\mathbf i}$ is immaculate and therefore $B'_{\mathbf i}$ only contains edges $e$ with $\alpha'(e)=1$. Therefore all edges along the path $\pi$ belong to $\alpha'$, which implies that $x\in\CC^\alpha(\alpha')$. This is in contradiction with the assumption that $x\in A$. We have proved that $\mathbf A\cap{\mathbb C}=\emptyset$.
To conclude the proof of Lemma \ref{lem:holes}, it only remains to choose $p_0$ and $N$ such that $p''(N)\geq p_2$ and apply Lemma \ref{lem:site'}. We deduce that the volume of $\mathbf A$ is bounded by $(\log n)^{5/2}$ and therefore the volume of $A$ is smaller than $(2N+1)^d (\log n)^{5/2}$.
\rule{.2cm}{.2cm}
\subsection{Deviation of the chemical distance} \label{sec:deviation}
We use the same notation as in the preceding section. For given realizations of the percolations $\alpha$ and $\alpha'$, we define the corresponding {\it chemical distance} $d^\alpha_{\alpha'}$ on $\CC^\alpha(\alpha')$: two points $x\not=y$ in $\CC^\alpha(\alpha')$ satisfy $d^\alpha_{\alpha'}(x,y)=1$ if and only if (at least) one of the following two conditions is satisfied: either $x$ and $y$ are neighbors in ${\mathbb Z}^d$ and $\alpha'(x,y)=1$, or both $x$ and $y$ are at the boundary of a hole, i.e. there is a hole $h$ and $x',y'\in h$ such that $x'$ is a neighbor of $x$ and $y'$ is a neighbor of $y$. In general, $d^\alpha_{\alpha'}(x,y)$ is defined as the smallest integer $k$ such that there exists a sequence of points $x_0,...,x_k$ in $\CC^\alpha(\alpha')$ with $x_0=x$, $x_k=y$ and such that $d^\alpha_{\alpha'}(x_j,x_{j+1})=1$ for all $j$.
\begin{lm}\label{lem:deviation} There exists $p_4<1$ such that for $p>p_4$, there exist constants $c^+$ and $c^-$ such that for $Q$ almost any $\alpha$, for $Q^\alpha$ almost any $\alpha'$, for large enough $n$, \begin{eqnarray}\label{eq:deviation} c^- \vert x-y\vert\leq d^\alpha_{\alpha'}(x,y) \leq c^+ \vert x-y\vert\,,\end{eqnarray} for any $x,y\in\CC^\alpha(\alpha')$ such that $x\in[-n,n]^d$ and $\vert x-y\vert\geq (\log n)^2$. \end{lm}
{\it Proof}: let $d^\alpha(x,y)$ be the chemical distance between $x$ and $y$ within $\CC(\alpha)$ i.e. $d^\alpha(x,y)$ is the minimal length of a path from $x$ to $y$, say $\pi$, such that any edge $e\in\pi$ satisfies $\alpha(e)=1$. \\ Applying Theorem 1.1 in \cite{kn:AP} together with the Borel-Cantelli Lemma, we deduce that there exists a constant $c^+$ such that $d^\alpha(x,y) \leq c^+ \vert x-y\vert$ for any $x,y\in\CC(\alpha)$ such that $x\in[-n,n]^d$ and $\vert x-y\vert\geq (\log n)^2$. Since $d^\alpha_{\alpha'}(x,y) \leq d^\alpha(x,y)$, it gives the upper bound in (\ref{eq:deviation}).
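For illustration, the plain chemical distance $d^\alpha$ (the graph distance within the open subgraph, without the hole shortcuts of $d^\alpha_{\alpha'}$) can be computed by breadth-first search. A minimal Python sketch (the names are ours and this is not part of the argument):

```python
from collections import deque

def chemical_distance(open_edges, x, y):
    """Minimal number of open edges on a path joining x to y (the distance
    within the open subgraph), or None if x and y are not connected.
    `open_edges` is a set of frozensets {u, v} of open nearest-neighbor edges."""
    adjacency = {}
    for edge in open_edges:
        u, v = tuple(edge)
        adjacency.setdefault(u, []).append(v)
        adjacency.setdefault(v, []).append(u)
    distance, queue = {x: 0}, deque([x])
    while queue:
        u = queue.popleft()
        if u == y:
            return distance[u]
        for v in adjacency.get(u, []):
            if v not in distance:
                distance[v] = distance[u] + 1
                queue.append(v)
    return None
```

Comparing its output with $\vert x-y\vert$ gives an empirical check of bounds such as $d^\alpha(x,y)\leq c^+\vert x-y\vert$.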
We now give a proof of the lower bound. As for Lemma \ref{lem:holes}, we use a renormalization argument. The notation used below is borrowed from the proof of Lemma \ref{lem:holes}, except that the role of $p_0$ is now played by $p_4$.
We wish to be able to apply Lemma \ref{lem:site''} (ii) to the renormalized site percolation model with law $\mathbf Q''$ (i.e. the percolation model of immaculate boxes): therefore we choose $p_4$ and $N$ such that $p''(N)\geq p_3$ and observe that the event considered in Lemma \ref{lem:site''} (ii) is increasing.
Consider two points $x$ and $y$ as in Lemma \ref{lem:deviation} and let $\pi$ be an injective path from $x$ to $y$ within $\CC(\alpha)$. We shall prove that \begin{eqnarray}\label{eq:5.3.1} \#\EE_\pi\geq c_5\vert x-y\vert\,,\end{eqnarray} where $\EE_\pi=\{(z,z')\,:\, z,z'\in\pi\cap\CC^\alpha(\alpha')\,,\, \alpha'(z,z')=1\}$. By construction of the chemical distance $d^\alpha_{\alpha'}$, (\ref{eq:5.3.1}) implies the lower bound in (\ref{eq:deviation}) with $c^-=c_5$.
Let $\Pi'$ be the sequence of the indices of the boxes $B_{\mathbf i}$ that $\pi$ intersects. At the level of the renormalized grid, $\Pi'$ is a nearest-neighbor path from $\mathbf i_0$ to $\mathbf i_k$ with $x\in B_{\mathbf i_0}$ and $y\in B_{\mathbf i_k}$. Let $\Pi=({\mathbf i}_0,...,{\mathbf i}_k)$ be the injective path obtained by suppressing loops in $\Pi'$. We may, and will, assume that $n$ is large enough so that $\mathbf i_0\not= \mathbf i_k$, and hence $\vert \mathbf i_0-\mathbf i_k \vert$ and $\vert x-y\vert$ are comparable. Applying Lemma \ref{lem:site''} (ii) to $\mathbf Q''$, we get that \begin{eqnarray}\label{eq:5.3.2}\# ({\mathbb C}\cap\Pi)\geq c_3\vert \mathbf i_0-\mathbf i_k \vert \geq c'_3\vert x-y\vert\,,\end{eqnarray} for some constant $c'_3$.
Let ${\mathbf i}\in {\mathbb C}\cap\Pi$ and choose $z\in B_{\mathbf i}\cap\pi$. Since the path $\pi$ is not entirely contained in one box, $\pi$ connects $z$ to some point $z'\notin B_{\mathbf i}$. Since $z'\in\pi$, we also have $z'\in \CC(\alpha)$. By definition of a white box, this implies that $z\in K_{\mathbf i}$. Since ${\mathbf i}\in {\mathbb C}$, it follows that actually $z\in K$ and therefore $z\in \CC^\alpha(\alpha')$. Moreover, since the box $B_{\mathbf i}$ is pure white, we must have $\alpha'=1$ on all the edges of $\pi$ from $z$ to $z'$. In particular $z$ has a neighbor in $\CC(\alpha)$, say $z''$, such that $\alpha'(z,z'')=1$. Therefore $(z,z'')\in\EE_\pi$. We conclude that each index in ${\mathbb C}\cap\Pi$ contributes at least $1$ to $\#\EE_\pi$. Therefore (\ref{eq:5.3.2}) implies that $$ \#\EE_\pi\geq c'_3\vert x-y\vert\,,$$ which proves (\ref{eq:5.3.1}) with $c_5=c'_3$.
\rule{.2cm}{.2cm}
\end{document}
\begin{document}
\title[Weak limit of an immersed surface sequence with bounded Willmore functional] {\bf Weak limit of an immersed surface sequence with bounded Willmore functional} \author[Yuxiang Li]{Yuxiang Li\\ {\small\it Department of Mathematical Sciences},\\ {\small\it Tsinghua University,}\\ {\small\it Beijing 100084, P.R.China.}\\ {\small\it Email: [email protected].}} \date{} \maketitle
\begin{abstract} This paper is an extension of \cite{K-L}. In this paper, we will study the blowup behavior of a surface sequence $\Sigma_k$ immersed in $\mathbb{R}^n$ with bounded Willmore functional and fixed genus $g$. We will prove that we can decompose $\Sigma_k$ into finitely many parts: $$\Sigma_k=\bigcup_{i=1}^m\Sigma_k^i,$$ and find $p_k^i\in \Sigma_k^i$, $\lambda_k^i \in\mathbb{R}$, such that $\frac{\Sigma_k^i-p_k^i} {\lambda_k^i}$ converges locally in the sense of varifolds to a complete branched immersed surface $\Sigma_\infty^i$ with $$\sum_i\int_{\Sigma_\infty^i}K_{\Sigma_\infty^i}=2\pi(2-2g).$$ The basic tool we use in this paper is a generalized convergence theorem of F. H\'elein. \end{abstract}
{{\bf Keywords}: Willmore functional, Bubble tree.}
{{\bf Mathematics Subject Classification}: Primary 58E20, Secondary 35J35.}
\section{Introduction} For an immersed surface $\ f : \Sigma \rightarrow \mathbb{R}^n\ $ the Willmore functional is defined by \begin{displaymath}
W(f) = \frac{1}{4} \int_\Sigma |H_f|^2 d \mu_{f}, \end{displaymath} where $H_f=\Delta_{g_f}f$ denotes the mean curvature vector of $f$, $g_f = f^* g_{euc}$ the pull-back metric and $\mu_f$ the induced area measure on $\Sigma$. This functional first appeared in the papers of Blaschke \cite{Bl} and Thomsen \cite{T}, and was reinvented and popularized by Willmore \cite{W}.
We denote by $\beta_p^n$ the infimum of the Willmore functional over immersed surfaces of genus $p$. We have $\beta_p^n\geq 4\pi$ by the Gauss-Bonnet formula, and $\beta_p^n<8\pi$, as observed independently by Pinkall and Kusner \cite{K}. Willmore conjectured that $\beta_1^n$ is attained by the Clifford torus. This conjecture is still open.
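To fix the normalization of $W$ (a classical computation, stated here only as a sanity check and not taken from the text): for the round sphere $S^2_r\subset\mathbb{R}^3$ of radius $r$ we have $|H|=2/r$ and area $4\pi r^2$, so

```latex
W(S^2_r) \;=\; \frac14\int_{S^2_r}|H|^2\,d\mu
         \;=\; \frac14\cdot\frac{4}{r^2}\cdot 4\pi r^2
         \;=\; 4\pi ,
```

independently of $r$; the round sphere thus attains the lower bound $4\pi$.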
Given a surface sequence with bounded Willmore functional and measure, we are particularly interested in what the limit looks like; in other words, we want to understand the blowup behavior of such a surface sequence. This is important, as blowup occurs almost everywhere in the study of the Willmore functional. For example, if $\Sigma_t$ is a Willmore flow defined on $[0,T)$, then by the $\epsilon$-regularity proved in \cite{K-S}, $\int_{B_\rho\cap \Sigma_t}|A_t|^2<\epsilon$ implies $\|\nabla_{g_t}^mA_t\|_{L^\infty(B_\frac{\rho}{2} \cap \Sigma_t)} <C(m,\rho)$. Then $\Sigma_t$ converges smoothly on any compact subset of $\mathbb{R}^n$ minus the concentration points set, which is defined by $$\mathcal{S}=\{p\in\mathbb{R}^n:\lim_{r\rightarrow 0} \liminf_{t\rightarrow T} \int_{B_r(p)\cap\Sigma_t} |A_t|^2>0\}.$$ So, to gain a good understanding of the Willmore flow, we have to study the behavior, and especially the structure of the bubble trees, of $\Sigma_t$ near the concentration points.
Note that $W(f_k)<C$ implies $\int_{\Sigma}|A_{k}|^2 d\mu_k<C'$. One expects $\|f_k\|_{W^{2,2}}$ to be equivalent to $\int|A_k|^2d\mu_k=\int g_k^{ij}g_k^{km}A_{ik}A_{jm} \sqrt{|g_k|}dx$. However, this is not always true. One reason is that the diffeomorphism group of a surface is extremely big: even when an immersion sequence $f_k$ converges smoothly, we can easily find a diffeomorphism sequence $\phi_k$ such that $f_k\circ \phi_k$ does not converge. Moreover, the Sobolev embedding $W^{2,2q}\hookrightarrow C^1$ fails when $q=1$, so it is impossible to estimate the $L^\infty$ norms of $g^{-1}_k$ and $g_{k}$ via the Sobolev inequalities directly.
To overcome these difficulties, an approximate decomposition lemma was used by L. Simon when he proved the existence of the minimizer \cite{S}. He proved that $\beta_p^n$ can be attained if $p=1$ or \begin{equation}\label{simon} p>1,\,\,\,\, and\,\,\,\, \beta_p^n< \omega_p^n=\min\Big\{4\pi+\sum\limits_{i}(\beta_{p_i}^n-4\pi): \sum\limits_{i} p_i =p,\,1 \leq p_i < p\Big\}. \end{equation} Then Bauer and Kuwert proved that \eqref{simon} is always true, thus $\beta_p^n$ can be attained for any $p$ and $n$ \cite{B-K}. Later, such a technique was extended by W. Minicozzi to get the minimizer of $W$ on Lagrangian tori \cite{M}, by Kuwert-Sch\"atzle to get the minimizer of $W$ in a fixed conformal class \cite{K-S3}, and by Sch\"atzle to get the minimizer of $W$ with boundary condition \cite{Sh}.
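For concreteness, here is the smallest instance of \eqref{simon} (a routine evaluation of the definition of $\omega_p^n$, not an additional claim): for genus $p=2$ the only admissible partition is $p_1=p_2=1$, so

```latex
\omega_2^n \;=\; 4\pi+\big(\beta_1^n-4\pi\big)+\big(\beta_1^n-4\pi\big)
           \;=\; 2\beta_1^n-4\pi ,
```

and condition \eqref{simon} for $p=2$ reads $\beta_2^n<2\beta_1^n-4\pi$.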
In a recent paper \cite{K-L}, we presented a new approach. Given an immersion sequence $f_k$, we consider each $f_k$ as a conformal immersion of $(\Sigma,h_k)$ in $\mathbb{R}^n$, where $h_k$ is the smooth metric with Gaussian curvature $\pm1$ or 0. On the one hand, the conformal diffeomorphism group of $(\Sigma,h_k)$ is very small.
On the other hand, if we write $g_{f_k}=e^{2u_k} g_{euc}$ in an isothermal coordinate system, then we can estimate $\|u_k\|_{L^\infty}$ from the compensated compactness property of $K_{f_k}e^{2u_k}$. Thus we may get an upper bound on $\|f_k\|_{W^{2,2}}$ via the equation $\Delta_{h_k}f_k=H_{f_k}$. However, the compensated compactness only holds when the $L^2$ norm of the second fundamental form is locally small, so blowup analysis is needed here. Our basic tools are the following two results: \begin{thm}\cite{H}\label{Helein} Let $f_k\in W^{2,2}(D,\mathbb{R}^n)$ be a sequence of conformal immersions with induced metrics $(g_k)_{ij} = e^{2u_k} \delta_{ij}$, and assume $$
\int_D |A_{f_k}|^2\,d\mu_{g_k} \leq \gamma < \gamma_n = \begin{cases} 8\pi & \mbox{ for } n = 3,\\ 4\pi & \mbox{ for }n \geq 4. \end{cases} $$ Assume also that $\mu_{g_k}(D) \leq C$ and $f_k(0) = 0$. Then $f_k$ is bounded in $W^{2,2}_{loc}(D,\mathbb{R}^n)$, and there is a subsequence such that one of the following two alternatives holds: \begin{itemize} \item[{\rm (a)}] $u_k$ is bounded in $L^\infty_{loc}(D)$ and $f_k$ converges weakly in $W^{2,2}_{loc}(D,\mathbb{R}^n)$ to a conformal immersion $f \in W^{2,2}_{loc}(D,\mathbb{R}^n)$. \item[{\rm (b)}] $u_k \to - \infty$ and $f_k \to 0$ locally uniformly on $D$. \end{itemize} \end{thm}
\begin{thm}\label{D.K.}\cite{D-K} Let $h_k,h_0$ be smooth Riemannian metrics on a surface $M$, such that $h_k \to h_0$ in $C^{s,\alpha}(M)$, where $s \in \mathbb{N}$, $\alpha \in (0,1)$. Then for each $p \in M$ there exist neighborhoods $U_k, U_0$ and smooth conformal diffeomorphisms $\vartheta_k:D \to U_k$, such that $\vartheta_k \to \vartheta_0$ in $C^{s+1,\alpha}(\overline{D},M)$. \end{thm} A $W^{2,2}$-conformal immersion is defined as follows: \begin{defi}\label{defconformalimmersion} Let $(\Sigma,g)$ be a Riemann surface. A map $f\in W^{2,2}(\Sigma,g,\mathbb{R}^n)$ is called a conformal immersion, if the induced metric $g_{f} = df\otimes df$ is given by $$ g_{f} = e^{2u} g \quad \mbox{ where } u \in L^\infty(\Sigma). $$ For a Riemann surface $\Sigma$ the set of all $W^{2,2}$-conformal immersions is denoted by $W^{2,2}_{{\rm conf}}(\Sigma,g,\mathbb{R}^n)$. When $f\in W^{2,2}_{loc} (\Sigma,g,\mathbb{R}^n)$ and $u\in L^\infty_{loc}(\Sigma)$, we say $f\in W^{2,2}_{conf,loc}(\Sigma,g,\mathbb{R}^n)$. \end{defi}
\begin{rem} F. H\'elein first proved Theorem \ref{Helein} for $\gamma< \frac{8\pi}{3}$ \cite[Theorem 5.1.1]{H}. In \cite{K-L}, we showed that the constant $\gamma_n$ is optimal. \end{rem} \noindent Theorem \ref{Helein} together with Theorem \ref{D.K.} gives the convergence of a $W^{2,2}$-conformal sequence of $(D,h_k)$ in $\mathbb{R}^n$ with $h_k$ converging smoothly to $h_0$.
Then, using the theory of the moduli space of Riemann surfaces, we proved in \cite{K-L} the following
\begin{thm}\label{KL}\cite{K-L} Let $f_k\in W^{2,2}_{conf} (\Sigma,h_k,\mathbb{R}^n)$. If \begin{equation}\label{omega} W(f_k)\leq \left\{\begin{array}{ll}
8\pi-\delta&p=1\\
\min\{8\pi,\omega_p\}-\delta&p>1
\end{array}\right.,\,\,\,\, \delta>0, \end{equation} then the conformal class sequence represented by $h_k$ converges in $\mathcal{M}_p$. \end{thm} In other words, $h_k$ converges to a metric $h_0$ smoothly. This was also proved by T. Rivi\`{e}re \cite{R}. Then, up to M\"obius transformations, $f_k$ converges weakly in $W^{2,2}_{loc}(\Sigma \setminus\{\mbox{finite points}\},h_0)$ to a $W^{2,2}(\Sigma,h_0)$-conformal immersion. In this way, we give a new proof of the existence of a minimizer of the Willmore functional with fixed genus.
\eqref{omega} also hints that it is the degeneration of the complex structure that obstructs the convergence of an immersion sequence with \begin{equation}\label{bmw} \mu(f_k)+W(f_k)<C. \end{equation}
In \cite{C-L}, the Hausdorff limit of $\{f_k\}$ with \eqref{bmw} was studied, using conformal immersions as a tool. We proved that the limit $f_0$ is a conformal branched immersion from a stratified surface $\Sigma_\infty$ into $\mathbb{R}^n$. Briefly speaking,
if $(\Sigma_0,h_0)$ is the limit of $(\Sigma,h_k)$ in $\overline{\mathcal{M}_p}$, then $f_k$ converges weakly in the $W^{2,2}$ sense on any component of $\Sigma_0$ away from the blowup points $$\mathcal{S}(f_k)=\{p\in \Sigma_0:\lim_{r\rightarrow 0}
\liminf_{k\rightarrow+\infty}\int_{B_r(p,h_0)}|A(f_k)|^2d\mu_{f_k}\geq 4\pi\}.$$ Meanwhile, some bubble trees, which consist of $W^{2,2}$ branched conformal immersions of $S^2$ in $\mathbb{R}^n$, will appear. As a corollary, we get the following \begin{pro}\cite{C-L} Let $f_k:\Sigma\rightarrow \mathbb{R}^n$ be a sequence of smooth immersions with \eqref{bmw}. Assume the Hausdorff limit of $f_k(\Sigma)$ is not a union of $W^{2,2}$ branched conformally immersed spheres. Then the complex structure $c_k$ induced by $f_k$ diverges in the moduli space if and only if there is a sequence of closed curves $\gamma_k$, nontrivial in $H^1(\Sigma)$, such that the length of $f_k(\gamma_k)$ converges to 0. \end{pro}
Thus, when the conformal class induced by $f_k$ diverges in the moduli space, topology will be lost. There are two reasons why the topology is lost. One reason is that Theorem \ref{Helein} does not ensure that the limit is an immersion on each component of $\Sigma_0$: if $f_k$ converges to a point on some component, then some topology is carried away. The other reason is that on each collar, which is conformal to $Q(T_k)=S^1\times[-T_k,T_k]$ with $T_k\rightarrow+\infty$, there must exist a sequence $t_k\in[-T_k,T_k]$ such that $f_k(S^1\times\{t_k\})$ shrinks to a point.
It is not easy to calculate how much topology is lost, but it is possible to locate where $\int_\Sigma K_{f_k}d\mu_{f_k}$ is lost. We have to study those bubbles which have nontrivial topology but shrink to points. To this end, we should check whether those conformal immersion sequences which converge to points converge to immersions after rescaling:
\begin{thm}\label{convergence} Let $\Sigma$ be a smooth connected Riemann surface without boundary, and $\Omega_k\subset\subset\Sigma$ be domains with $$\Omega_1\subset \Omega_2\subset\cdots\subset\Omega_k\subset\cdots,\,\,\,\, \bigcup_{i=1}^\infty\Omega_i=\Sigma.$$ Let $\{h_k\}$ be a smooth metric sequence over $\Sigma$ which converges to $h_0$ in $C^\infty_{loc}(\Sigma)$, and $\{f_k\}$ be a conformal immersion sequence of $(\Omega_k,h_k)$ in $\mathbb{R}^n$ satisfying \begin{itemize} \item[{\rm 1)}] $\mathcal{S}(f_k):= \{p\in\Sigma: \lim\limits_{r\rightarrow 0}\liminf\limits_{k\rightarrow+\infty}
\int_{B_r(p,h_0)}|A_{f_k}|^2d\mu_{f_k}\geq 4\pi \}=\emptyset$. \item[{\rm 2)}] $f_k(\Omega_k)$ can be extended to a closed compact immersed surface $\Sigma_k$ with
$$\int_{\Sigma_k}(1+|A_{f_k}|^2)d\mu_{f_k}<\Lambda.$$ \end{itemize} Take a curve $\gamma:[0,1]\rightarrow \Sigma$, and set $\lambda_k=diam\, f_k(\gamma[0,1])$. Then we can find a subsequence of $\frac{f_k -f_k(\gamma(0))}{\lambda_k}$ which converges weakly in $W^{2,2}_{loc}(\Sigma)$ to an
$f_0\in W_{conf,loc}^{2,2}(\Sigma,\mathbb{R}^n)$. Further, we can find an inversion $I(y)=\frac{y-y_0}{|y-y_0|^2}$ with $y_0\notin f_0(\Sigma)$ such that
$$\int_\Sigma(1+|A_{I(f_0)}|^2)d\mu_{I(f_0)}<+\infty.$$ \end{thm}
When $\Sigma$ is a compact closed surface minus finitely many points, $f_0$ may not be compact. However, by the removability of singularities (see Theorem \ref{removal} in Section 2), $I(f_0)$ is a conformal branched immersion. Thus $f_0$ is complete.
\begin{defi} We call $f$ a generalized limit of $f_k$, if we can find a point $x_0\notin \mathcal{S}(f_k)$ and a positive sequence $\lambda_k$ which is equivalent to 1 or tends to 0, such that $\frac{f_k-f_k(x_0)}{\lambda_k}$ converges to $f$ weakly in $W^{2,2}_{loc}(\Sigma\setminus \mathcal{S}(f_k))$. \end{defi}
Obviously, if $f$ and $f'$ are both generalized limits of $f_k$, then $f=\lambda f'+b$ for some $\lambda$ and $b$. We will not distinguish between $f$ and $f'$.
Near the concentration points, we will get some bubbles. The divergence of complex structure also gives us some bubbles. In \cite{C-L}, we only considered the bubbles with $\lambda_k\equiv1$. In this paper, we will study the bubbles with $\lambda_k \rightarrow 0$ which do not appear in the Hausdorff limit. All the bubbles can be considered as conformal branched immersions from $\mathbb{C}$ (or $S^1\times \mathbb{R}$, $S^2$) into $\mathbb{R}^n$.
However, the structures of the bubble trees here are much more complicated than those for harmonic maps. For example, there might exist infinitely many bubbles here; therefore, we should neglect the bubbles which do not carry total Gauss curvature.
\begin{defi} We say a conformal branched immersion of $S^1\times\mathbb{R}$ into $\mathbb{R}^n$ is trivial, if for any $t$ and any $m\in\mathbb{Z}$, $$\int_{S^1\times\{t\}}\kappa\neq 2m\pi+\pi.$$ \end{defi}
The bubble trees constructed in this paper consist of finitely many branches; smaller branches attach to bigger branches, level by level. Each branch consists of nontrivial bubbles, bubbles with concentration, and the first bubble (see the definitions in Section 4). We classify the bubbles into four types: $T_\infty$, $T_0$, $B_\infty$ and $B_0$ (see Definition \ref{typeofbubble}). We will show that a $T_0$ type bubble must follow a $B_\infty$ type bubble, and a $T_\infty$ type bubble must follow a $B_0$ type bubble.
Moreover, we have a total Gauss curvature identity. To state it precisely, we divide into three cases.
{\bf Hyperbolic case (genus$>1$):} Let $\Sigma_0$ be the stable surface in $ \overline{\mathcal{M}}_g$ with nodal points $\mathcal{N}=\{ a_1,\cdots, a_{m}\}$. $\Sigma_0$ is obtained by pinching some curves in a surface to points, thus $\Sigma_0\setminus\mathcal{N}$ can be divided into finitely many components $\Sigma_0^1$, $\cdots$, $\Sigma_0^s$. For each $\Sigma_0^i$, we can extend $\Sigma_0^i$ to a smooth closed Riemann surface $\overline{\Sigma_0^i}$ by adding a point at each puncture. Moreover, the complex structure of $\Sigma_0^i$ can be extended smoothly to a complex structure of $\overline{\Sigma_0^i}$.
We say that $h_0$ is a hyperbolic structure on $\Sigma_0$ if $h_0$ is a smooth complete metric on $\Sigma_0\setminus\mathcal{N}$ with finite volume and Gauss curvature $-1$. We define $\Sigma_{0}(a_j,\delta)$ to be the domain in $\Sigma_0$ which satisfies $$a_j\in \Sigma_0(a_j,\delta),\,\,\,\, and \,\,\,\, injrad_{\Sigma_0\setminus\mathcal{N}}^{h_0}(p)<\delta\,\,\,\, \forall p\in\Sigma_0(a_j,\delta)\setminus\{a_j\}.$$ We set $h_0^i$ to be a smooth metric over $\overline{\Sigma_0^i}$ which is conformal to $h_0$ on $\Sigma_0^i$. We may assume $h_0^i$ has curvature $\pm1$, or curvature $0$ and measure 1.
Now, we let $\Sigma_k$ be a sequence of compact Riemann surfaces of fixed genus $g$ whose metrics $h_k$ have curvature $-1$, such that $\Sigma_k \rightarrow \Sigma_0$ in $\overline{\mathcal{M}_g}$. Then there exists a maximal collection $\Gamma_k = \{\gamma_k^1,\ldots,\gamma_k^{m}\}$ of pairwise disjoint, simple closed geodesics in $\Sigma_k$ with $\ell^j_k = L(\gamma_k^j) \to 0$, such that after passing to a subsequence the following hold: \begin{itemize} \item[{\rm (1)}] There are maps $\varphi_k \in C^0(\Sigma_k,\Sigma_0)$, such that $\varphi_k: \Sigma_k \backslash \Gamma_k \to \Sigma_0 \backslash \mathcal{N}$ is a diffeomorphism and $\varphi_k(\gamma_k^j) = a_j$ for $j = 1,\ldots,m$. \item[{\rm (2)}] For the inverse diffeomorphisms $\psi_k:\Sigma_0 \backslash \mathcal{N} \to \Sigma_k \backslash \Gamma_k$, we have $\psi_k^\ast (h_k) \to h_0$ in $C^\infty_{loc}(\Sigma_0 \backslash \mathcal{N})$. \item[{\rm (3)}] Let $c_k$ be the complex structure over $\Sigma_k$, and $c_0$ be the complex structure over $\Sigma_0\setminus\mathcal{N}$. Then $$\psi_{k}^*(c_k)\rightarrow c_0\,\,\,\, in\,\,\,\, C^\infty_{loc}(\Sigma_0\setminus\mathcal{N}).$$ \item[{\rm (4)}] For each $\gamma_k^j$, there is a collar $U_k^j$ containing $\gamma_k^j$, which is isometric to the cylinder $$Q_k^j=S^1\times(-\frac{\pi^2}{l_k^j},\frac{\pi^2}{l_k^j}),\,\,\,\, with\,\,\,\, metric\,\,\,\, h_k^j=\left(\frac{1}{2\pi\sin(\frac{l_k^j}{2\pi}t+\theta_k)}\right)^2(dt^2+d\theta^2),$$ where $\theta_k=\arctan(\sinh( \frac{l_k^j}{2}))+\frac{\pi}{2}$. Moreover, for any $(\theta,t)\in S^1\times (-\frac{\pi^2}{l_k^j},\frac{\pi^2}{l_k^j})$, we have \begin{equation}\label{injrad} \sinh(injrad_{\Sigma_k}(t,\theta))\sin( \frac{l_k^jt}{2\pi}+\theta_k) =\sinh\frac{l_k^j}{2}. \end{equation} Let $\phi_k^j$ be the isometry between $Q_k^j$ and $U_k^j$. Then $\varphi_k\circ\phi_k^{j}(T_k^j+t,\theta)\cup \varphi_k\circ\phi_k^{j}(-T_k^j+t,\theta)$ converges in $C^\infty_{loc}((-\infty,0)\cup (0,\infty))$ to an isometry from $S^1\times(-\infty,0)\cup S^1\times(0,+\infty)$ to $\Sigma_0(a_j,1)\setminus \{a_j\}$.
\end{itemize}
Items (1) and (2) above can be found in Proposition 5.1 in \cite{Hum}. The main part of (4) is just the Collar Lemma.
Now, we consider a sequence $f_k\in W^{2,2}_{conf} (\Sigma_k,h_k,\mathbb{R}^n)$ with $$\mu(f_k)+W(f_k)<\Lambda.$$ By Theorem \ref{convergence}, on each component $\Sigma_0^i$, $f_k\circ \psi_k$ has a generalized limit $f_0^i\in W^{2,2}_{conf}(\overline{\Sigma_0^i}\setminus A^i,h_0^i,\mathbb{R}^n)$, where $A^i$ is a finite set. We have the following
\begin{thm}\label{main} Let $f^1$, $f^2$, $\cdots$ be all of the non-trivial bubbles of $\{f_k\}$. Then $$\sum_i\int_{\overline{\Sigma_0^i}}K_{f_0^i}d\mu_{f_0^i}+ \sum_i\int_{S^2}K_{f^i}d\mu_{f^i}=2\pi\chi(\Sigma).$$ \end{thm}
{\bf Torus case:} Let $(\Sigma,h_k)=\mathbb{C}/(\pi,z)$, where
$|z|\geq\pi$ and $|\mbox{Re}\,{z}|\leq\frac{\pi}{2}$. We can write $$(\Sigma,h_k)=S^1\times\mathbb{R}/G_k,$$ where $S^1$ is the circle with perimeter 1 and $G_k\cong \mathbb{Z}$ is the transformation group generated by $$(t,\theta)\rightarrow (t+a_k,\theta+\theta_k),\,\,\,\, where\,\,\,\, a_k\geq \sqrt{\pi^2-\theta_k^2},\,\,\,\, and\,\,\,\, \theta_k\in [-\frac{\pi}{2},\frac{\pi}{2}].$$ $(\Sigma,h_k)$ diverges in $\mathcal{M}_1$ if and only if $a_k\rightarrow+\infty$.
Then any $f_k\in W^{2,2}_{conf}(\Sigma,h_k,\mathbb{R}^n)$ can be lifted to a conformal immersion $f_k':S^1\times\mathbb{R} \rightarrow\mathbb{R}^n$ with $$f_k'(t,\theta)=f_k'(t+a_k,\theta+\theta_k).$$ After translating, we may assume that $f_k'(-t+\frac{a_k}{2},\theta)$ and $f_k'(t-\frac{a_k}{2},\theta)$ have no concentrations. We let $\lambda_k=diam\, f_k'(S^1\times\{\frac{a_k}{2}\})$; then $\frac{f_k'(-t+\frac{a_k}{2},\theta)-f_k'(\frac{a_k} {2},0)}{\lambda_k}$ and $\frac{f_k'(t-\frac{a_k}{2},\theta) -f_k'(\frac{a_k}{2},\theta_k)}{\lambda_k}$ will converge to $f_0^1$ and $f_0^2$ respectively in $W^{2,2}_{loc}(S^1\times[0,+\infty))$. Moreover, they can be glued together via $$f_0=\left\{\begin{array}{ll}
f_0^1(-t,\theta)&t\leq 0\\
f_0^2(t,\theta+\theta_0)&t>0,
\end{array}\right.$$ into a conformal immersion of $S^1\times\mathbb{R}$ in $\mathbb{R}^n$, where $\theta_0=\lim\limits_{k\rightarrow+\infty}\theta_k$. Then we have \begin{thm}\label{main2} $$\int_{S^1\times\mathbb{R}}K_{f_0}d\mu_{f_0} +\sum_{i=1}^m\int_{S^1\times\mathbb{R}}K_{f^i}d\mu_i=0,$$ where $f^1$, $\cdots$, $f^m$ are all of the non-trivial bubbles of $f_k'$. \end{thm}
{\bf Sphere case:} When $\Sigma$ is the sphere, we can let $h_k\equiv h_0$. There is no bubble from collars. We have \begin{thm}\label{sphere} Let $f_0$ be the generalized limit of $f_k$. Then $$\int_{S^2}K_{f_0}d\mu_{f_0}+\sum_{i=1}^m\int_{S^1\times\mathbb{R}} K_{f^i}d\mu_{f^i}=4\pi,$$ where $f^1$, $\cdots$, $f^m$ are all of the non-trivial bubbles. \end{thm}
Putting Theorems \ref{main}--\ref{sphere} together, we get the main theorem of this paper, which is a precise version of Theorem \ref{KL}: \begin{thm} Let $\Sigma_k$ be a sequence of surfaces immersed in $\mathbb{R}^n$ with bounded Willmore functional. Assume $g(\Sigma_k)=g$. Then we can decompose $\Sigma_k$ into finitely many parts: $$\Sigma_k=\bigcup_{i=1}^m\Sigma_k^i,\,\,\,\, \Sigma_k^i\cap\Sigma_k^j =\emptyset\,\,\,\, (i\neq j),$$ and find $p_k^i\in \Sigma_k^i$, $\lambda_k^i \in\mathbb{R}$, such that $\frac{\Sigma_k^i-p_k^i} {\lambda_k^i}$ converges locally in the sense of varifolds to a complete branched immersed surface $\Sigma_\infty^i$ with $$\sum_i\int_{\Sigma_\infty^i}K_{\Sigma_\infty^i}=2\pi(2-2g), \,\,\,\, and\,\,\,\, \sum_{i}W(\Sigma_\infty^i)\leq \lim_{k\rightarrow+\infty} W(\Sigma_k).$$
\end{thm}
\begin{rem} Parts of Theorem \ref{sphere} have appeared in \cite{L-L}, where we assumed that $\{f_k\}\subset W^{2,2}_{conf}(D,\mathbb{R}^n)$ and that $f_k$ does not converge to a point. \end{rem}
\section{Preliminary}
\subsection{Hardy estimate} Let $f\in W^{2,2}_{conf}(D,\mathbb{R}^n)$ with $g_f= e^{2u}(dx^1\otimes dx^1+dx^2\otimes dx^2)$ and
$\int_D|A_f|^2<4\pi-\delta$. $f$ induces a Gauss map $$G(f)=e^{-2u}(f_1\wedge f_2):D\rightarrow G(2,n)\hookrightarrow \mathbb{C}\mathbb{P}^{n-1}.$$ Following \cite{M-S}, we define
the map $\Phi(f):\mathbb{C} \to \mathbb{C} P^{n-1}$ by $$ \Phi(f)(z) = \left\{\begin{array}{ll} G(f)(z) & \mbox{ if }z \in D\\ G(f)(\frac{1}{\overline{z}}) & \mbox{ if }z \in \mathbb{C} \backslash \overline{D}. \end{array}\right.$$ Then $\Phi(f)\in W_0^{1,2}(\mathbb{C},\mathbb{C} P^{n-1})$ and $\int_{\mathbb{C}}{\Phi}^*(f) (\omega) =0$, where $\omega$ is the K\"ahler form of $\mathbb{C}\mathbb{P}^{n-1}$. Thus by Corollary 3.5.7 in \cite{M-S}, $\Psi(f)=*\Phi^*(f)(\omega)$ is in Hardy space, and \begin{equation}\label{Psi}
\|\Psi(f)\|_{\mathcal{H}}<C(\delta)\|A_f\|^2_{L^2(D)}. \end{equation} Note that \begin{equation}\label{psi-k}
\Psi(f)|_{D}=K_{f}e^{2u}. \end{equation}
Let $v$ solve the equation $-\Delta v=\Psi(f)$, $v(\infty)=0$. Then we have \begin{equation*}
\|v\|_{L^\infty(\mathbb{R}^2)}+\|\nabla v\|_{L^2(\mathbb{R}^2)}+\|\nabla^2 v\|_{L^1(\mathbb{R}^2)}<
C\|\Psi(f)\|_{\mathcal{H}}. \end{equation*} Noting that $u-v$ is harmonic on $D$, we get \begin{equation}\label{hardy0}
\|u\|_{L^\infty(D_\frac{1}{2})}+\|\nabla u\|_{L^2(D_\frac{1}{2})}+\|\nabla^2 u\|_{L^1(D_\frac{1}{2})}<
C(\|\Psi(f)\|_{\mathcal{H}}+\|u\|_{L^1(D)}). \end{equation}
\subsection{Gauss-Bonnet formula} Let $f\in W^{2,2}_{conf}(\Sigma,g,\mathbb{R}^n)$ with $g_f=e^{2u}g$. Let $\gamma$ be a smooth curve. On $\gamma$, we define \begin{equation}\label{geodesic.curvature} \kappa_{f}=\frac{\partial u}{\partial n}+\kappa_g, \end{equation} where $n$ is the unit normal field along $\gamma$ compatible with $\kappa_g$. By \eqref{hardy0}, $\frac{\partial u}{\partial n}$ is well-defined. In \cite{K-L}, we proved that $u$ satisfies the weak equation $$-\Delta_g u=K_{f}e^{2u}-K_g.$$ Then, for any domain $\Omega$ with smooth boundary, we have the Gauss-Bonnet formula: $$\int_{\partial\Omega}\kappa_f=2\pi\chi(\overline{\Omega})-\int_{\Omega} K_fd\mu_f.$$
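As a quick consistency check of \eqref{geodesic.curvature} (just the flat model case, not a statement from \cite{K-L}): for the standard inclusion of the unit disk $D\subset\mathbb{R}^2\subset\mathbb{R}^n$ we have $u=0$ and $K_f=0$, while the Euclidean geodesic curvature of $\partial D$ is $\kappa_g=1$, so

```latex
\int_{\partial D}\kappa_f
  \;=\; \int_{\partial D}\Big(\frac{\partial u}{\partial n}+\kappa_g\Big)\,ds
  \;=\; 0 + 2\pi
  \;=\; 2\pi\chi(\overline{D}) - \int_{D}K_f\,d\mu_f ,
```

in agreement with the classical Gauss-Bonnet theorem.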
\subsection{Convergence of $\int K_{f_k}d\mu_{f_k}$}
By \eqref{Psi}, \eqref{psi-k}, \eqref{hardy0} and Theorem \ref{Helein}, we have:
\begin{lem}\label{measureconvergence} Let $f_k$ be a conformal sequence from $D$ into $\mathbb{R}^n$ with $g_{f_k}=e^{2u_k}g_0$ and
$\int_D|A_{f_k}|^2d\mu_{f_k}\leq \gamma<4\pi$, which converges to $f_0$ weakly. We assume $f_0$ is not a point map, and $g_{f_0}=e^{2u_0}g_0$. Then we can find a subsequence, such that \begin{equation}\label{hardy} K_{f_k}d\mu_{f_k} \rightharpoonup K_{f_0}d\mu_{f_0}\,\,\,\, over \,\,\,\, D_\frac{1}{2},\,\,\,\, \mbox{in distribution,} \end{equation} and \begin{equation*} u_k\rightharpoonup u_0,\,\,\,\, in\,\,\,\, W^{1,2}(D_\frac{1}{2}). \end{equation*} \end{lem}
We will use the following \begin{cor} Let $f_k$ be a conformal sequence of $D\setminus D_\frac{1}{2}$ in $\mathbb{R}^n$, which converges to $f_0\in W^{2,2}_{{conf},loc}(D\setminus D_\frac{1}{2},\mathbb{R}^n)$. For any $t\in(\frac{1}{2},1)$ with $\partial D_t\cap \mathcal{S}(f_k) =\emptyset$, we have $$\lim_{k\rightarrow+\infty}\int_{\partial D_t}\kappa_{f_k} ds_k= \int_{\partial D_t}\kappa_{f_0}ds_0.$$ \end{cor}
\proof Take $s\in (t,1)$, such that $\mathcal{S}(f_k)\cap \overline{D_s\setminus D_t}=\emptyset$. Let $g_{f_k}=e^{2u_k}g_0$ and $\varphi\in C^\infty_0(D_s)$, which is 1 on $D_t$. Then we have $$-\int_{\partial D_t}\frac{\partial u_k}{\partial r}ds=- \int_{D_s\setminus D_t}\nabla u_k\nabla\varphi d\sigma +\int_{D_s\setminus D_t}\varphi K_ke^{2u_k}d\mu_{f_k},$$ and the right-hand side will converge to $$- \int_{D_s\setminus D_t}\nabla u_0\nabla\varphi d\sigma+\int_{D_s\setminus D_t}\varphi K_0e^{2u_0}d\mu_{f_0},\,\,\,\, as\,\,\,\, k\rightarrow+\infty.$$ Then we get $$-\int_{\partial D_t}\frac{\partial u_k}{\partial r}ds \rightarrow -\int_{\partial D_t}\frac{\partial u_0}{\partial r}ds.$$ By \eqref{geodesic.curvature} we get $$\int_{\partial D_t}\kappa_k\rightarrow \int_{\partial D_t}\kappa_0.$$ $
\Box$\\
\subsection{Removability of singularity} We have the following \begin{thm}\label{removal}\cite{K-L} Suppose that $f\in W^{2,2}_{{conf},loc}(D\backslash \{0\},\mathbb{R}^n)$ satisfies $$
\int_D |A_f|^2\,d\mu_g < \infty \quad \mbox{ and } \quad \mu_g(D) < \infty, $$ where $g_{ij} = e^{2u} \delta_{ij}$ is the induced metric. Then $f \in W^{2,2}(D,\mathbb{R}^n)$ and we have \begin{eqnarray*}
u(z) & = & m\log |z|+ \omega(z) \quad \mbox{ where } m\geq 0,\, m\in \mathbb{Z},\,\omega \in C^0 \cap W^{1,2}(D),\\ -\Delta u & = & -2m\pi \delta_0+K_g e^{2u} \quad \mbox{ in }D. \end{eqnarray*} The multiplicity of the immersion at $f(0)$ is given by $$ \theta^2\big(f(\mu_g \llcorner D_\sigma(0)),f(0)\big) = m+1 \quad \mbox{ for any small } \sigma > 0. $$ Moreover, we have \begin{equation}\label{kappa2} \lim_{t\rightarrow 0}\int_{\partial D_t}\kappa_{f} ds_f=2\pi (m+1). \end{equation} \end{thm}
\proof We only prove \eqref{kappa2}. For the proof of the other parts of the theorem, one can refer to \cite{K-L}.
Observe that
$$\Big|\int_{\partial D_t}\frac{\partial u} {\partial r}-\int_{\partial D_{t'}}\frac{\partial u}
{\partial r}\Big|=\Big|\int_{D_t\setminus D_{t'}}
K\,d\mu\Big|\rightarrow 0$$ as $t, t'\rightarrow 0$. Then $\lim\limits_{t\rightarrow 0}\int_{\partial D_t}\frac{\partial u} {\partial r}$ exists.
Since $\omega\in W^{1,2}(D)$, we can find $t_k\in [2^{-k-1},2^{-k}]$ such that
$$(2^{-k}-2^{-k-1})\int_{\partial D_{t_k}}\Big|
\frac{\partial \omega}{\partial r}\Big| =\int_{2^{-k-1}}^{2^{-k}}\Big(\int_{\partial D_t}
\Big|\frac{\partial \omega}{\partial r}\Big|\Big)dt\leq C\|\nabla \omega\|_{L^2 (D_{2^{-k}})}2^{-k},$$ which implies that $\int_{\partial D_{t_k}}\frac{\partial \omega}{\partial r} \rightarrow 0$. Then we get $\int_{\partial D_{t_k}}\frac{\partial u} {\partial r}\rightarrow 2\pi m$, which implies that $$\lim_{t\rightarrow 0} \int_{\partial D_{t}}\frac{\partial u} {\partial r}= 2\pi m.$$ Together with $\int_{\partial D_t}\kappa_g\,ds=2\pi$ and \eqref{geodesic.curvature}, this gives \eqref{kappa2}.
$
\Box$\\
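For the reader's convenience, the two boundary contributions in \eqref{kappa2} can be checked directly from $u=m\log|z|+\omega$ (a routine computation, implicit in the proof above):

```latex
% Logarithmic part: on |z| = t, the outward radial derivative of m log r is m/t, so
\int_{\partial D_t}\frac{\partial}{\partial r}\big(m\log r\big)\,ds
  \;=\; \frac{m}{t}\cdot 2\pi t \;=\; 2\pi m ,
% while the Euclidean geodesic curvature of the circle \partial D_t is 1/t, so
\int_{\partial D_t}\kappa_g\,ds \;=\; \frac{1}{t}\cdot 2\pi t \;=\; 2\pi .
% The two contributions sum to 2\pi(m+1), the constant appearing in (kappa2).
```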
\begin{rem} In the proof of Theorem \ref{removal} in \cite{K-L}, we get that \begin{equation}\label{isolate}
\lim_{z\rightarrow 0}\frac{|f(z)-f(0)|}{|z|^{m+1}}=\frac{e^{\omega(0)}}{m+1}. \end{equation} \end{rem}
We give the following definition: \begin{defi} \label{defbranchedimmersion} A map $f\in W^{2,2}(\Sigma,\mathbb{R}^n)$ is called a $W^{2,2}$-branched conformal immersion, if we can find finitely many points $p_1$, $\cdots$, $p_m$, such that $f\in W^{2,2}_{conf,loc}(\Sigma\setminus\{p_1,\cdots,p_m\})$, and $$
\mu(f)<+\infty,\,\,\,\, \int_{\Sigma}|A_f|^2d\mu_f<+\infty. $$ \end{defi}
For the behavior at infinity of complete conformally parameterized surfaces, we have the following
\begin{thm}\label{removal2} Suppose that $f\in W^{2,2}_{{conf},loc}(\mathbb{C}\setminus D_R,\mathbb{R}^n)$ with $$
\int_{\mathbb{C}\setminus D_R} |A_f|^2\,d\mu_{g} < \infty, $$ where $g_{ij} = e^{2u} \delta_{ij}$ is the induced metric. We assume $f(\mathbb{C}\setminus D_{2R})$ is complete. Then we have \begin{equation*}
u(z) = m\log |z|+ \omega(z) \quad \mbox{ where } m\geq 0,\, m\in \mathbb{Z},\,\omega \in W^{1,2}(\mathbb{C}\setminus D_{2R}). \end{equation*} Moreover, we have \begin{equation}\label{kappa3} \lim_{t\rightarrow +\infty}\int_{\partial D_t}\kappa_{f} ds_f=2\pi (m+1). \end{equation}
\end{thm}
The proof of \eqref{kappa3} is similar to that of \eqref{kappa2}. The other parts of the proof can be found in \cite{M-S}. Though M\"uller-\v{S}ver\'ak's result was stated for smooth surfaces, it is easy to check that their proof also holds for $W^{2,2}$ conformal immersions.
\section{Proof of Theorem \ref{convergence}} We first prove the following \begin{lem}\label{convergence2} Let $(\Sigma,h_k)$ be smooth Riemann surfaces, where $h_k$ converges to $h_0$ in $C^\infty_{loc}(\Sigma)$. Let $\{f_k\}\subset W^{2,2}_{{conf},loc}(\Sigma,h_k,\mathbb{R}^n)$ with $$\mathcal{S}(f_k)=\{p\in\Sigma: \lim_{r\rightarrow 0}
\liminf_{k\rightarrow+\infty}\int_{B_r(p,h_0)}|A_{f_k}|^2d\mu_{f_k}\geq 4\pi\} =\emptyset.$$ Then $f_k$ converges in $W^{2,2}_{loc}(\Sigma,h_0,\mathbb{R}^n)$ to a point or an $f_0\in W^{2,2}_{conf,loc}(\Sigma,h_0,\mathbb{R}^n)$. \end{lem}
\proof Let $g_{f_k}=e^{2u_k}h_k$. We only need to prove the following statement: for any $p\in\Sigma$, we can find a neighborhood $V$ which is independent of $\{f_k\}$, such that $f_k$ converges weakly to $f_0$ in $W^{2,2}(V,h_0)$. Moreover,
$\|u_k\|_{L^\infty(V)}<C$ if and only if $f_0\in W^{2,2}_{conf} (V,\mathbb{R}^n)$, and $u_k\rightarrow-\infty$ uniformly if and only if $f_0$ is a point map.
Now we prove this statement: Given a point $p$, we choose
$U_k$, $U_0$, $\vartheta_k$, $\vartheta_0$
as in Theorem \ref{D.K.}. Set $\vartheta_k^*(h_k)=e^{2v_k}g_0$, where $g_0=(dx^1)^2+(dx^2)^2$. We may assume $v_k\rightarrow v_0$ in $C^\infty_{loc}(D)$.
Let $\hat{f}_k=f_k(\vartheta_k)$, which is a map from $D$ into $\mathbb{R}^n$. It is easy to check that $\hat{f}_k\in W^{2,2}_{conf}(D,\mathbb{R}^n)$ and $g_{\hat{f}_k}=e^{2u_k+2v_k}g_0$. By Theorem \ref{Helein}, we can assume that $\hat{f}_k$ converges to $\hat{f}_0$ weakly in $W^{2,2}(D_\frac{3}{4})$. Moreover, $\hat{f}_0$ is a point when $u_k+v_k\rightarrow-\infty$ uniformly on $D_\frac{3}{4}$, and a conformal immersion when $\sup_{k}
\|u_k+v_k\|_{L^\infty(D_\frac{3}{4})}<+\infty$.
Let $V=\vartheta_0(D_\frac{1}{2})$. Since $\vartheta_k$ converges to $\vartheta_0$, $\vartheta_k^{-1}(V)\subset D_\frac{3}{4}$ for any sufficiently large $k$ and $f_k=\hat{f}_k(\vartheta_k^{-1})$ converges to $f_0=\hat{f}_0 (\vartheta_0^{-1})$ weakly in $W^{2,2}(V,h_0)$. Moreover,
$f_0$ is a conformal immersion when $\|u_k\|_{L^\infty(V)}<C$, and a point when $u_k\rightarrow-\infty$ uniformly in $V$.
$
\Box$\\
{\it The proof of Theorem \ref{convergence}:} When $f_k$ converges to a conformal immersion weakly, the result is obvious. Now we assume that $f_k$ converges to a point. For this case,
$\lambda_k \rightarrow 0$.
Put $f_k'=\frac{f_k-f_k(\gamma(0))}{\lambda_k}$, $\Sigma_k'=\frac{\Sigma_k-f_k(\gamma(0))}{\lambda_k}$. We have two cases:
\noindent Case 1: $diam(f_k')<C$. Letting $\rho$ in inequality (1.3) in \cite{S} tend to infinity, we get $\frac{\mu(\Sigma_k'\cap B_\sigma(\gamma(0)))}{\sigma^2}\leq C$ for any $\sigma>0$; hence we get $\mu(f_k')<C$ by taking $\sigma=diam(f_k')$. Then Lemma \ref{convergence2} shows that $f_k'$ converges weakly in $W^{2,2}_{loc}(\Sigma,h_0)$. Since $diam\, f_k' (\gamma)=1$, the weak limit is not a point.
\noindent Case 2: $diam(f_k')\rightarrow +\infty$. We take a point $y_0\in\mathbb{R}^n$ and a constant $\delta>0$, s.t. $$B_\delta(y_0)\cap \Sigma_k'=\emptyset,\,\,\,\, \forall k.$$
Let $I(y)=\frac{y-y_0}{|y-y_0|^2}$, and $$f_k''=I(f_k'),\,\,\,\, \Sigma_k''=I(\Sigma_k').$$ By the conformal invariance of the Willmore functional \cite{C,W}, we have
$$\int_{\Sigma_k''}|A_{\Sigma_k''}|^2d\mu_{\Sigma_k''}
=\int_{\Sigma_k}|A_{\Sigma_k}|^2d\mu_{\Sigma_k}<\Lambda.$$ Since $\Sigma_k''\subset B_\frac{1}{\delta}(0)$, also by (1.3) in \cite{S}, we get $\mu(f_k'')<C$. Thus $f_k''$ converges weakly in $W^{2,2}_{loc}(\Sigma\setminus \mathcal{S}(f_k''),h_0)$.
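The conformal invariance used here can be seen by a standard computation, recorded for the reader's convenience: pointwise $|A|^2=4|H|^2-2K$ by the Gauss equation, so for a closed surface $\Sigma'$,
$$\int_{\Sigma'}|A|^2d\mu=4\int_{\Sigma'}|H|^2d\mu-2\int_{\Sigma'}K\,d\mu
=4\int_{\Sigma'}|H|^2d\mu-4\pi\chi(\Sigma'),$$
and both the Willmore energy $\int_{\Sigma'}|H|^2d\mu$ and the Euler characteristic $\chi(\Sigma')$ are invariant under an inversion whose center does not lie on the surface.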
Next, we prove that $f_k''$ does not converge to a point. If $f_k''$ converged to a point in $W^{2,2}_{loc}(\Sigma\setminus \mathcal{S}(f_k''))$, then the limit would be $0$, since $diam\,(f_k')$ converges to $+\infty$. By the definition of $f_k''$, we can find a $\delta_0>0$ such that $f_k''(\gamma)\cap B_{\delta_0}(0)=\emptyset$. Thus for any $p\in \gamma([0,1]) \setminus \mathcal{S}(f_k'')$, $f_k''$ will not converge to $0$, a contradiction.
Then we only need to prove that $f_k'$ converges weakly in $W^{2,2}_{loc}(\Sigma,h_0,\mathbb{R}^n)$. Let $f_0''$ be the limit of $f_k''$. By Theorem \ref{removal}, $f_0''$ is a branched immersion of $\Sigma$ in $\mathbb{R}^n$. Let $\mathcal{S}^*=f_0^{''-1}(\{0\})$. By \eqref{isolate}, $\mathcal{S}^*$ consists of isolated points.
First, we prove that for any $\Omega\subset\subset\Sigma\setminus (\mathcal{S}^*\cup\mathcal{S}(\{f_k''\}))$, $f_k'$ converges weakly in $W^{2,2}(\Omega,h_0,\mathbb{R}^n)$: Since $f_0''$ is continuous on $\bar{\Omega}$, we may assume $dist(0,f_0''(\Omega))>\delta>0$. Then $dist(0,f_k''(\Omega))>\frac{\delta}{2}$ when $k$ is sufficiently large. Noting that $f_k'
=\frac{f_k''}{|f_k''|^2}+y_0$, we get that $f_k'$ converges weakly in $W^{2,2}(\Omega,h_0,\mathbb{R}^n)$.
Next, we prove that for each $p\in \mathcal{S}^*\cup\mathcal{S}(\{f_k''\})$, $f_k'$ also converges in a neighborhood of $p$. We use the notation $U_k$, $U_0$, $\vartheta_k$ and $\vartheta_0$ with $\vartheta_k(0)=p$ again. We only need to prove that $\hat{f}_k'=f_k'(\vartheta_k)$ converges weakly in $W^{2,2}(D_\frac{1}{2})$.
Let $g_{\hat{f}_k'}=e^{2\hat{u}_k'}(dx^2+dy^2)$. Since $\hat{f}_k'\in W^{2,2}_{conf}
(D_{4r})$ with $\int_{D_{4r}}|A_{\hat{f}_k'}|^2d\mu_{\hat{f}_k'}<4\pi$ when $r$ is sufficiently small and $k$ sufficiently large, by the arguments in subsection 2.1, we can find a $v_k$ solving the equation
$$-\Delta v_k=K_{\hat{f}_k'}e^{2\hat{u}_k'},\,\,\,\, z\in D_r\,\,\,\, and\,\,\,\, \|v_k\|_{L^\infty(D_r)}<C.$$ Since $f_k'$ converges to a conformal immersion in $D_{4r}\setminus D_{\frac{1}{4}r}$, by Theorem \ref{Helein}, we may assume that
$\|\hat{u}_k'\|_{L^\infty(D_{2r}\setminus D_r)}<C$.
Then $\hat{u}_k'-v_k$ is a harmonic function on $D_r$ with
$\|\hat{u}_k'-v_k\|_{L^\infty(\partial D_{r})}<C$; hence we get $\|\hat{u}_k'-v_k\|_{L^\infty(D_{r})}<C$
by the Maximum Principle. Thus, $\|\hat{u}_k'\|_{L^\infty(D_{r})}<C$, which implies $\|\nabla \hat{f}_k'\|_{L^\infty(D_{r})}<C$. By the equation $\Delta \hat{f}_k'=e^{2\hat{u}_k'}H_{\hat{f}_k'}$, and the fact that $\|e^{2\hat{u}_k'}H_{\hat{f}_k'}\|_{L^2 (D_{r})}^2\leq
e^{2\|\hat{u}_k'\|_{L^\infty}}\int_{D_{r}}|H_{\hat{f}_k'}|^2d\mu_{{\hat{f}_k'}}$, we get $\|\nabla{\hat{f}_k'}\|_{W^{1,2}(D_{\frac{r}{2}})}<C$. Recalling that $\hat{f}_k'$ converges in $C^0(D_r\setminus D_\frac{r}{2})$, we complete the proof.
$
\Box$\\
\begin{rem} In fact, we proved that $\mathcal{S}^*=\emptyset$. \end{rem}
\section{Analysis of the neck} For a sequence of conformal immersions from a surface into $\mathbb{R}^n$ whose conformal classes degenerate, the blow-up comes from concentrations and collars. Both cases can be reduced to the blow-up analysis of a sequence of conformal immersions of $S^1\times[0,T_k]$ in $\mathbb{R}^n$ with $T_k\rightarrow+\infty$. So we first analyze the blow-up procedure on long cylinders without concentrations.
\subsection{Classification of bubbles of a simple sequence over an infinite cylinder}
Let $f_k$ be an immersion sequence of $S^1\times [0,T_k]$ in $\mathbb{R}^n$ with $T_k\rightarrow+\infty$. We say that $f_k$ has a concentration if we can find a sequence $\{(\theta_k,t_k)\}\subset S^1\times [0,T_k]$, such that $$\lim_{r\rightarrow 0}\liminf_{k\rightarrow+\infty}\int_{D_r(\theta_k,t_k)}
|A_{f_k}|^2d\mu_{f_k}\geq 4\pi.$$ We say $\{f_k\}$ is simple if: \begin{itemize} \item[{\rm 1)}] $f_k$ has no concentration; \item[{\rm 2)}] $f_k(S^1\times[0,T_k])$ can be extended to a compact closed immersed surface $\Sigma_k$ with
$$\int_{\Sigma_k}(1+|A_{f_k}|^2)d\mu_{f_k}<\Lambda.$$ \end{itemize}
When $\{f_k\}$ is simple, we say $f_0$ is a bubble of $f_k$, if we can find a sequence $\{t_k\}\subset [0,T_k]$ with $$t_k\rightarrow+\infty,\,\,\,\, and\,\,\,\, T_k-t_k\rightarrow+ \infty,$$ such that $f_0$ is a generalized limit of $f_k(\theta,t_k+t)$. If $f_0$ is nontrivial, we call it a nontrivial bubble.
For convenience, we call the generalized limits of $f_k(\theta,t+T_k)$ and $f_k(\theta,t)$ the top and the bottom respectively. Note that the top and the bottom are in $W^{2,2}_{conf}(S^1\times(-\infty, 0])$ and $W^{2,2}_{conf}(S^1\times[0,+\infty))$ respectively.
\begin{defi} Let $f^1$ and $f^2$ be two bubbles which are limits of $f_k(\theta,t+t_k^1)$ and $f_k(\theta,t+t_k^2)$ respectively. We say these two bubbles are the same, if
$$\sup_k|t_k^1-t_k^2|<+\infty.$$ When $f^1$ and $f^2$ are not the same, we say $f^1$ is in front of $f^2$ (or $f^2$ is behind $f^1$) if $t_k^1<t_k^2$. We say $f^2$ follows $f^1$, if $f^2$ is behind $f^1$ and there are no non-trivial bubbles between $f^1$ and $f^2$. \end{defi}
Obviously, the bubbles in this section must be in $W^{2,2}_{conf}(S^1\times\mathbb{R})$, and must be one of the following: \begin{itemize} \item[1).] $S^2$-type, i.e. $I(f^0)(S^1\times\{\pm\infty\}) \neq 0$; \item[2).] Catenoid-type, i.e. $I(f^0)(S^1\times\{\pm\infty\})=0$; \item[3).] Plane-type, i.e. one and only one of $I(f^0)(S^1\times\{\infty\})$, $I(f^0)(S^1\times\{-\infty\})$ is 0, \end{itemize}
where $I(y)=\frac{y-y_0}{|y-y_0|^2}$, $y_0\notin f^0(S^1\times\mathbb{R})$.
We give another classification of bubbles: \begin{defi}\label{typeofbubble} We say that a bubble $f^0$ is of \begin{itemize} \item[] type $T_{\infty}$ if $diam\, f^0(S^1\times\{+\infty\})=+\infty$; type $T_0$ if $diam\, f^0(S^1\times\{+\infty\})=0$; \item[] type $B_{\infty}$ if $diam\, f^0(S^1\times\{-\infty\})=+\infty$; type $B_0$ if $diam\, f^0(S^1\times\{-\infty\})=0$. \end{itemize} \end{defi}
We say that $f_k$ has $m$ non-trivial bubbles if we cannot find an ($m+1$)-th non-trivial bubble for any subsequence of $f_k$.
\begin{rem} Let $f^0$ be a bubble. By \eqref{kappa2} and \eqref{kappa3}, $$\lim_{t\rightarrow+\infty}\int_{S^1\times\{t\}}\kappa_{f^0} =2m^+\pi,\,\,\,\, and\,\,\,\, \lim_{t\rightarrow-\infty}\int_{S^1\times\{t\}}\kappa_{f^0} =2m^-\pi$$ for some $m^+$ and $m^-\in\mathbb{Z}$. Then, if
$f^0$ is trivial,
$\int_{S^1\times \mathbb{R}}K_{f^0}d\mu_{f^0}=0$. Thus both $S^2$-type bubbles and catenoid-type bubbles are non-trivial. \end{rem}
\begin{rem} It is easy to check that $\mu(f^0)<+\infty$ implies that $f^0$ is a sphere-type bubble and is of type $(B_0,T_0)$. \end{rem}
\begin{rem} If $f^{0'}$ is a catenoid-type bubble, then it is of type $(B_\infty,T_\infty)$; if $f^{0'}$ is a plane-type bubble, then it is of type $(B_\infty,T_0)$ or $(B_0,T_\infty)$. \end{rem}
First, we study the case that $f_k$ has no bubbles. Basically, we want to show that after scaling, the image of $f_k$ will converge to a topological disk.
\begin{lem} If $f_k$ has no bubbles, then $$\frac{diam\, f_k(S^1\times \{1\})}{diam\, f_k(S^1\times\{T_k-1\})}\rightarrow 0\,\,\,\, or\,\,\,\, +\infty.$$ \end{lem}
\proof Assume this lemma is not true. Then we may assume $\frac{diam\, f_k(S^1\times \{1\})}{diam\, f_k(S^1\times\{T_k-1\})}\rightarrow\lambda\in (0,+\infty)$. Let $\lambda_k=diam\, f_k(S^1\times\{1\})$. By Lemma \ref{convergence2}, $\frac{f_k(\theta,t)-f_k(0,1)}{\lambda_k}$ converges to $f^B$ weakly in $W^{2,2}_{loc} (S^1\times(0,+\infty))$, and $\frac{f_k(\theta,t+T_k)-f_k(0,T_k-1)}{\lambda_k}$ converges to $f^T$ weakly in $W^{2,2}_{loc} (S^1\times(-\infty,0))$ respectively.
When $diam\, f^B(S^1\times \{+\infty\})=0$, we let $\delta_k$ and $t_k$ be defined by $$\delta_k=diam\, f_k(S^1\times\{t_k\})= \inf_{t\in [1,T_k-1]}diam\, f_k(S^1\times\{t\}).$$ Obviously, $\delta_k\rightarrow 0$, and
$t_k\rightarrow+\infty$, $T_k-t_k \rightarrow+\infty$. Then $\frac{f_k(\theta,t+t_k)-f_k(0,t_k)}{\delta_k}$ will converge to a non-trivial bubble, a contradiction.
When $diam f^B(S^1\times \{+\infty\})=+\infty$,
we set $\delta_k'$ and $t_k'$ to be defined by $$\delta_k'=diam\, f_k(S^1\times\{t_k'\})= \sup_{t\in [1,T_k-1]}diam\, f_k(S^1\times\{t\}),$$ then we can also get a bubble. $
\Box$\\
Now we assume $f_k$ has no bubbles, and $\frac{diam\, f_k(S^1\times \{1\})}{diam\, f_k(S^1\times\{T_k-1\})}\rightarrow +\infty$. Let $\lambda_k=diam\, f_k(S^1\times\{1\})$. The bottom $f^B$ is the weak limit of $f_k'=\frac{f_k(\theta,t)-f_k(0,1)}{\lambda_k}$. Let $\phi$ be the conformal diffeomorphism from $D\setminus\{0\}$ to $S^1\times[0,+\infty)$. Then $f^B\circ\phi$ is an immersion of $D$ in $\mathbb{R}^n$, possibly with a branch point at $0$. Moreover, by the arguments in \cite{C-L} or in \cite{C}, we have $$f^B(\phi(0))=\lim_{t\rightarrow+\infty}\lim_{k\rightarrow+\infty} f_k'(\theta,T_k-t).$$ Since $diam\, f_k'(S^1\times\{T_k-1\})\rightarrow 0$, $f_k'(\theta,T_k-t)$ converges to a point; hence the Hausdorff limit of $f_k'(S^1\times(0,T_k))$ is a branched conformal immersion of $D$.
\begin{rem} In fact, the above results and arguments hold for a sequence
$\{f_k\}$ which has neither $S^2$-type nor catenoid-type bubbles.\\ \end{rem}
Next, we show how to find all the bubbles when $\{f_k\}$ has bubbles. We need the following simple lemma: \begin{lem}\label{interval} After passing to a subsequence, we can find $0=d_k^0<d_k^1<\cdots<d_k^l=T_k$, where $l\leq\frac{\Lambda}{4\pi}$, such that $$d_k^i-d_k^{i-1}\rightarrow+\infty,\,\,\,\, i=1,\cdots,l\,\,\,\, and
\int_{S^1\times\{d_k^i\}}\kappa_k=2m_i\pi+\pi,\,\,\,\, m_i\in\mathbb{Z},\,\,\,\,
i=1,\cdots,l-1, $$ and
$$\lim_{T\rightarrow+\infty}\sup_{t\in [d_k^{i-1}+T, d_k^i-T]}\left|
\int_{S^1\times \{t\}}\kappa_k-\int_{S^1\times\{d_k^{i-1}+T\}}\kappa_k\right|< \pi.$$ \end{lem}
\proof Let $\Lambda<4m\pi$. We prove the lemma by induction on $m$.
We first prove it is true for $m=1$. Let $$\lim_{t\rightarrow+\infty} \lim_{k\rightarrow+\infty}\int_{S^1\times\{t\}}\kappa_k=2m_1\pi,\,\,\,\, \lim_{t\rightarrow+\infty} \lim_{k\rightarrow+\infty}\int_{S^1\times\{T_k-t\}}\kappa_k=2m_2\pi,$$ where $m_1$ and $m_2$ are integers. Thus, we can find $T$, such that
$$\left|\int_{S^1\times\{T\}}\kappa_k-2m_1\pi\right|<\epsilon,\,\,\,\, and \,\,\,\, \left|\int_{S^1\times\{T_k-T\}}\kappa_k-2m_2\pi\right|<\epsilon$$ when $k$ is sufficiently large. Take a $t_0\in (T,T_k-T)$, such that
$$\int_{S^1\times[T,t_0]}|A_{f_k}|^2<2\pi,\,\,\,\,
\int_{S^1\times[t_0,T_k-T]}|A_{f_k}|^2\leq 2\pi.$$ By Gauss-Bonnet,
$$\left|\int_{S^1\times\{t\}}\kappa_k-\int_{S^1\times\{T\}}\kappa_k\right|
\leq \int_{S^1\times[T,t]}|K_{f_k}|d\mu_{f_k}
\leq \frac{1}{2}\int_{S^1\times[T,t_0]}|A_{f_k}|^2d\mu_{f_k}<\pi, \,\,\,\,\forall t\in(T,t_0),$$
$$\left|\int_{S^1\times\{t\}}\kappa_k-\int_{S^1\times\{T_k-T\}}\kappa_k\right| \leq
\frac{1}{2}\int_{S^1\times[t_0,T_k-T]}|A_{f_k}|^2d\mu_{f_k}<\pi, \,\,\,\,\forall t\in(t_0,T_k-T).$$ Thus, we can take $\epsilon$ to be so small that $\int_{S^1\times\{t\}}\kappa_k \neq 2i\pi$ for any $i\in\mathbb{Z}$ and $t\in (T,T_k-T)$.
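Here we have used the pointwise bound $|K_{f_k}|\leq\frac{1}{2}|A_{f_k}|^2$, which follows from the Gauss equation: in a local orthonormal frame,
$$K=\langle A_{11},A_{22}\rangle-|A_{12}|^2,\,\,\,\, \mbox{so}\,\,\,\, |K|\leq\frac{1}{2}\left(|A_{11}|^2+|A_{22}|^2\right)+|A_{12}|^2=\frac{1}{2}|A|^2.$$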
Now, we assume the result is true for $m$, and prove it is also true for $m+1$. We have two cases.
Case 1: there is a sequence $\{t_k\}$, such that $t_k\rightarrow+\infty$, $T_k-t_k\rightarrow+\infty$, and $\int_{S^1\times\{t_k\}}\kappa_k=2m_k\pi+\pi$ for some $m_k\in\mathbb{Z}$. For this case, we let $f_k'=\frac{f_k(t+t_k,\theta)-f_k(t_k,0)}{\lambda_k}$, which converges weakly to $f_0'$, where $\lambda_k=diam\, f_k(S^1\times\{t_k\})$. Then by Gauss-Bonnet
$$\int_{S^1\times\mathbb{R}}|K_{f_0'}|\geq
\left|\int_{S^1\times(0,+\infty)}K_{f_0'}\right|
+\left|\int_{S^1\times(-\infty,0)}K_{f_0'}\right|\geq 2\pi.$$
Thus, $\int_{S^1\times\mathbb{R}}|A_{f_0'}|^2\geq 4\pi$. We can find $T$, such that
$$\int_{S^1\times[0,t_k-T]}|A_{f_k}|^2<4(m-1)\pi,\,\,\,\, and\,\,\,\, \int_{S^1\times[t_k+T,T_k]}|A_{f_k}|^2<4(m-1)\pi$$ when $k$ is sufficiently large. Thus, we can use induction on $[0,t_k-T]$ to get $0=\bar{d}_k^0< \bar{d}_k^1<\cdots<\bar{d}_k^{\bar{l}}=t_k-T$, and on $[t_k+T,T_k]$ to get $t_k+T=\tilde{d}_k^0<\cdots< \tilde{d}_k^{\tilde{l}}=T_k$. We can set $$d_k^i=\left\{\begin{array}{ll}
\bar{d}_k^i&i<\bar{l}\\
t_k&i=\bar{l}\\
\tilde{d}_k^{i-\bar{l}}&i>\bar{l}
\end{array}\right.$$ Then, we complete the proof.
$
\Box$\\
Set $f_k^i=\frac{f_k(t+d_k^i,\theta)-f_k(d_k^i,0)}{ diam\, f_k(S^1\times\{d_k^i\})}$, and assume $f_k^i\rightharpoonup f^i$. Since it is easy to check that $$\lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty} \int_{S^1\times\{d_k^i+T\}}\kappa_k =\lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty} \int_{S^1\times\{d_k^{i+1}-T\}}\kappa_k,$$ we get $$\lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty}\int_{S^1\times[d_k^i+T, d_k^{i+1}-T]}K_{f_k}=0.$$
\begin{rem} In fact, we can get that for any $t_k<t_k'$ with $$t_k-d_k^i\rightarrow+\infty, \,\,\,\, and\,\,\,\, d_k^{i+1}-t_k'\rightarrow+\infty,$$ we have $$\lim_{k\rightarrow+\infty}\int_{S^1\times[t_k, t_k']}K_{f_k}=0.$$ \end{rem}
Hence, we get
\begin{pro}\label{simple} Let $f_k$ be a simple sequence on $S^1\times[0,T_k]$. Then after passing to a subsequence, $f_k$ has finitely many bubbles. Moreover, we have $$ \lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty} \int_{S^1\times [T,T_k-T]}K_{f_k}d\mu_{f_k}=\sum_{i=1}^m \int_{S^1\times\mathbb{R}}K_{f^i}d\mu_{f^i},$$ where $f^1$, $\cdots$, $f^m$ are all of the bubbles. \end{pro}
Next, we prove a property of the order of the bubbles.
\begin{thm} Let $f^1$, $f^2$ be two bubbles with $f^2$ behind $f^1$. Then
1). If $f^1$ and $f^2$ are of type $T_0$ and $B_0$ respectively, then there is at least one catenoid-type bubble between them.
2). If $f^1$ and $f^2$ are of type $T_\infty$ and $B_\infty$ respectively, then there is at least one $S^2$-type bubble between $f^1$ and $f^{2}$. \end{thm}
\proof 1). Suppose $\frac{f_k(\theta,t_k^1+t)-f_k(0,t_k^1)} {diam\, f_k(S^1\times\{t_k^1\})}\rightharpoonup f^1$, and $\frac{f_k(\theta,t_k^{2}+t)-f_k(0,t_k^{2})} {diam\, f_k(S^1\times\{t_k^2\})}\rightharpoonup f^{2}$.
Let $t_k'$ be defined by \begin{equation}\label{infdiam} diam\, f_k(S^1\times\{t_k'\})=\inf\{diam\, f_k(S^1\times\{t\}):t\in[t_k^1+T,t_k^{2}-T] \}, \end{equation} where $T$ is sufficiently large. Since $f^1$ is of type $T_0$ and $f^{2}$ of type $B_0$, we get $$\lim_{t\rightarrow+\infty}diam\, f^1(S^1\times\{t\})=0,\,\,\,\, and\,\,\,\, \lim_{t\rightarrow-\infty}diam\, f^{2}(S^1\times\{t\})=0.$$ Then, we have $$t_k'-t_k^1\rightarrow+\infty,\,\,\,\, t_k^{2}-t_k' \rightarrow+\infty.$$ If we set $f_k'(t)=\frac{f_k(\theta,t_k'+t)- f_k(0,t_k')}{diam\, f_k(S^1\times\{t_k'\})}$, then $f_k'$ will converge to a bubble $f'$ with $$diam\, f'(S^1\times \{0\}) =\inf \{diam\, f'(S^1\times\{t\}):t\in \mathbb{R} \}=1.$$ Thus, $f'$ is a catenoid type bubble.
2). If we replace \eqref{infdiam} with \begin{equation*} diam\, f_k(S^1\times\{t_k'\})=\sup\{diam\, f_k(S^1\times\{t\}):t\in[t_k^1+T,t_k^{2}-T] \}, \end{equation*} we will get 2).
$
\Box$\\
The structure of the bubble tree of a simple sequence is clear now: {\it The $S^2$-type bubbles stand in a line, with a unique catenoid-type bubble between two neighboring $S^2$-type bubbles. There might exist plane-type bubbles between the neighboring $S^2$-type and catenoid-type bubbles. A $T_0$-type bubble must follow a $B_\infty$-type bubble, and a $T_\infty$-type bubble must follow a $B_0$-type bubble.}
\subsection{Bubble trees for a sequence of immersed $D$}
In this subsection, we will consider a conformal immersion sequence $f_k: D\rightarrow \mathbb{R}^n$ with $\mathcal{S}(f_k)=\{0\}$. We assume that $f_k(D)$ can be extended to a closed embedded surface $\Sigma_k$ with
$$\int_{\Sigma_k}(1+|A_{\Sigma_k}|^2)d\mu<\Lambda.$$
Take $z_k$ and $r_k$, s.t. \begin{equation}\label{top}
\int_{D_{r_k}(z_k)}|A_{f_k}|^2d\mu_{f_k}=4\pi-\epsilon, \end{equation}
and $\int_{D_r(z)}|A_{f_k}|^2d\mu_{f_k}<4\pi-\epsilon$ for any $r<r_k$ and $D_{r}(z)\subset D_\frac{1}{2}$, where $\epsilon$ is sufficiently small.
We set $f_k'=f_k(z_k+r_kz)-f_k(z_k)$. Then $\mathcal{S}(f_k',D_L)=\emptyset$ for any $L$. Thus, we can find $\lambda_k$, s.t. $\frac{f_k'(z)}{\lambda_k}$ converges weakly to $f^F$ which is a conformal immersion of $\mathbb{C}$ in $\mathbb{R}^n$. We call $f^F$ the first bubble of $f_k$ at the concentration point $0$.
It will be convenient to make a conformal change of the domain. Let $(r, \theta)$ be the polar coordinates centered at $z_k$. Let $\varphi_k:S^1\times\mathbb{R}^1\rightarrow\mathbb{R}^2$ be the mapping given by $$r=e^{-t},\,\,\,\,\theta=\theta.$$ Then $$\varphi_k^*(dx^1\otimes dx^1+dx^2\otimes dx^2)= e^{-2t}(dt^2+d\theta^2).$$ Thus $f_k\circ\varphi_k$ can be considered as a conformal immersion of $S^1\times [0,+\infty)$ in $\mathbb{R}^n$. For simplicity, we will also denote $f_k\circ\varphi_k$ by $f_k$.
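Explicitly, since $dx^1\otimes dx^1+dx^2\otimes dx^2=dr^2+r^2d\theta^2$ in polar coordinates and $dr=-e^{-t}dt$, we have
$$\varphi_k^*(dx^1\otimes dx^1+dx^2\otimes dx^2)=e^{-2t}dt^2+e^{-2t}d\theta^2=e^{-2t}(dt^2+d\theta^2),$$
which is conformal to the flat cylinder metric $dt^2+d\theta^2$; hence conformality is preserved under this change of coordinates.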
Set $T_k=-\log r_k$. Similarly to Lemma \ref{interval}, we have \begin{lem}\label{interval2} There are $s_k^0=0<s_k^1<s_k^2< \cdots< s_k^l=T_k$, such that $l\leq \frac{\Lambda}{4\pi}$ and
1). $\int_{S^1\times(s_k^i-1,s_k^i+1)}|A_{f_k}|^2\geq 4\pi$;
2). $\lim\limits_{T\rightarrow +\infty}\lim\limits_{ k\rightarrow+\infty}\sup\limits_{t\in [s_k^i+T,s_k^{i+1}-T]}
\int_{S^1\times(t-1,t+1)}|A_{f_k}|^2<4\pi$.
\end{lem} Let $f_k^i=f_k(\theta,s_k^i+t)$. A generalized limit of $f_k^i$ is called a bubble with concentration (which may be trivial). These are $W^{2,2}$-conformal immersions of $S^1\times\mathbb{R}$ with finitely many branch points and finite $L^2$ norm of the second fundamental form. However, if we neglect the concentration points, we can still define the types $T_\infty$, $T_0$, $B_{\infty}$, and $B_0$ for them.
Obviously, we can find a $T'$, such that $f_k$ is simple on $S^1\times[s_k^{i}+T',s_k^{i+1}-T']$. Note that the top of $f_k$ on $S^1\times[s_k^i+T',s_k^{i+1}-T']$ is just a part of a generalized limit of $f_k^{i+1}$, and the bottom of $f_k$ on $S^1\times[s_k^i+T',s_k^{i+1}-T']$ is just a part of a generalized limit of $f_k^{i}$. We call the union of the nontrivial bubbles of $f_k$ on each $[s_k^i,s_k^{i+1}]$, the generalized limits of the $f_k^i$, and $f^F$
the first level of the bubble tree. By Proposition \ref{simple}, we have $$\begin{array}{lll} \lim\limits_{r\rightarrow 0}\lim\limits_{ k\rightarrow+\infty}\displaystyle{\int}_{D_r}K_{f_k}&=& \sum\limits_{i=1}^{l}\lim\limits_{T\rightarrow+\infty} \lim\limits_{k\rightarrow+\infty}\displaystyle{\int}_{S^1\times [s_k^i-T'-T,s_k^i+T'+T]} K_{f_k^i}\\[2.0ex] &&+ \sum\limits_{i=0}^l\lim\limits_{T\rightarrow+\infty} \lim\limits_{k\rightarrow+\infty}\displaystyle{\int}_{S^1\times [s_k^i+T'+T,s_k^{i+1}-T'-T]} K_{f_k^i}\\[2.0ex] &=&\sum\limits_{(t,\theta)\in \mathcal{S}(\{f_k^i\})}\lim\limits_{\rho\rightarrow 0}\lim\limits_{k\rightarrow+\infty} \displaystyle{\int}_{B_\rho(t,\theta)}K_{f_k^i}+ \sum_j\int_{S^1\times\mathbb{R}}K_{f^j}, \end{array}$$ where $\{f^j\}$ are all the bubbles of the first level.
Next, at each concentration point of $\{f_k^i\}$, we get the first level of $\{f_k^i\}$. We usually call them the second level of bubble trees. Such a construction will stop after finite steps.
\begin{lem}\label{identity1}After passing to a subsequence, $f_k$ has finitely many non-trivial bubbles. Moreover, for any $r<1$ $$\lim_{k\rightarrow+\infty} \int_{D_r}K_{f_k}d\mu_{f_k}=\int_{D_r}K_{f^0}d\mu_{f^0}+ \sum_{i=1}^m\int_{S^1\times\mathbb{R}}K_{f^i}d\mu_{f^i},$$ where $f^0$ is the generalized limit of $f_k$, and $f^1$, $f^2$, $\cdots$, $f^m$ are all of the non-trivial bubbles. \end{lem}
\subsection{Immersion sequences of the cylinder which are not simple}
Now we assume $f_k$ is not simple on $S^1\times[0,T_k]$. We also assume $f_k(S^1\times[0,T_k])$ can be extended to a closed immersed surface $\Sigma_k$ with
$$\int_{\Sigma_k}(1+|A_{\Sigma_k}|^2)d\mu<\Lambda.$$ Moreover, we assume $f_k(t,\theta)$ and $f_k(T_k+t,\theta)$ have no concentration.
Then we still have Lemma \ref{interval2}. The other properties are the same as those of the immersion of $D$. Moreover, we have $$\lim_{k\rightarrow+\infty} \int_{S^1\times[0,T_k]}K_{f_k}d\mu_{f_k}=\int_{S^1\times[0,+\infty)}K_{f^B} d\mu_{f^B}+ \int_{S^1\times(-\infty,0]}K_{f^T}d\mu_{f^T}+\sum_{i=1}^m\int_{S^1\times\mathbb{R}}K_{f^i}d\mu_{f^i},$$ where $f^1$, $\cdots$, $f^m$ are all of the nontrivial bubbles.
\section{Proof of Theorem \ref{main}} Since Theorem \ref{main2} can be deduced directly from subsection 4.3, and Theorem \ref{sphere} can be deduced directly from subsection 4.2,
we only prove Theorem \ref{main}.
{\it Proof of Theorem \ref{main}:} Take a curve $\gamma_i\subset\Sigma_0^i\setminus\mathcal{S}(\{f_k\circ \psi_k\})$ with $\gamma_i(0)=p_i$. We set $\lambda_k^i=diam\,f_k\circ\psi_k(\gamma_i)$, and $\tilde{f}_k^i=\frac{f_k\circ\psi_k-f_k\circ\psi_k(p_i)}{\lambda_k^i}$, which is a mapping from $\Sigma_0^i$ into $\mathbb{R}^n$. It is easy to check that $\tilde{f}_k^i\in W^{2,2}_{{conf},loc} (\Sigma_0^i,\psi_k^{\ast}(h_k),\mathbb{R}^n)$.
Given a point $p\in\Sigma_0^i$, we choose
$U_k$, $U_0$, $\vartheta_k$, $\vartheta_0$ as in Theorem \ref{D.K.}. Let $\hat{f}_k^i=\tilde{f}_k^i(\vartheta_k)$, which is a map from $D$ into $\mathbb{R}^n$. Let $V=\vartheta_0(D_\frac{1}{2})$. Since $\vartheta_k$ converges to $\vartheta_0$, $\vartheta_k^{-1}(V)\subset D_\frac{3}{4}$ for any sufficiently large $k$.
When $p$ is not a concentration point, by Lemma \ref{measureconvergence}, for any $\varphi$ with $supp\varphi\subset\subset V$, we have $$\int_{V}\varphi K_{\tilde{f}_k^i}d\mu_{\tilde{f}_k^i} =\int_{D_\frac{3}{4}}\varphi(\vartheta_k)K_{\hat{f}_k^i} d\mu_{\hat{f}_k^i}\rightarrow \int_{D_\frac{3}{4}}\varphi(\vartheta_0) K_{\hat{f}_0^i}=\int_V\varphi K_{f_0^i}d\mu_{f_0^i}.$$ When $p$ is a concentration point, by Lemma \ref{identity1}, we get $$\int_{V}\varphi K_{\tilde{f}_k^i}d\mu_{\tilde{f}_k^i} \rightarrow \int_V\varphi K_{f_0^i}d\mu_{f_0^i}+ \varphi(p)\sum_j\int_{S^1\times \mathbb{R}}K_{f^i_j}d\mu_{f^i_j},$$ where $\{f^i_j\}$ is the set of nontrivial bubbles of $\hat{f}_k^i$ at $p$.
Next, we consider the convergence of $f_k$ at the collars. Let $a^j$ be the intersection of $\overline{\Sigma_0^i}$ and $\overline{\Sigma_0^{i'}}$. We set $\check{f}_k^j=f_k(\phi_k^j)$, and $T_k^j =\frac{\pi^2}{l_k^j}-T$. We may choose $T$ to be sufficiently large such that $\check{f}_k^j(T_k^j-t,\theta)$ and $\check{f}_k^j (-T_k^j+t,\theta)$ have no blowup point. Then $\check{f}_k^j$ satisfies the conditions in subsection 2.4, so the convergence of $\check{f}_k^j$ is clear. Since $$\check{f}_k^j=f_k\circ\phi_k^j=f_k\circ\psi_k\circ(\varphi_k\circ\phi_k^j)= \tilde{f}_k(\varphi_k\circ\phi_k^j),$$ the images of the limits of $\check{f}_k^j(T_k^j-t,\theta)$ and $\check{f}_k^j (-T_k^j+t,\theta)$ are parts of the images of $\tilde{f}_0^i$ and $\tilde{f}_0^{i'}$. Then we have $$\lim_{\delta\rightarrow 0} \lim_{k\rightarrow+\infty}\int_{\Sigma_0(\delta,a^j)} K_{f_k}=\sum_{i'}\int_{S^1\times\mathbb{R}}K_{f^{i'}},$$ where the $f^{i'}$ are all the nontrivial bubbles of $\check{f}_k^j$.
$
\Box$\\
\section{A remark about trivial bubbles}
The methods in section 4 can also be used to find all bubbles with $\|A\|_{L^2}\geq\epsilon_0$ for a fixed $\epsilon_0>0$. We only consider a simple sequence $f_k$ on $S^1\times[0,T_k]$ here.
Let $t_k$ be a sequence with $t_k, T_k-t_k\rightarrow \infty$, and let $\lambda_k=diam\, f_k(S^1\times\{t_k\})$, such that $\frac{f_k(t+t_k,\theta)-f_k(t_k,0)}{\lambda_k}$ converges to an $f_0\in W^{2,2}(S^1\times\mathbb{R},\mathbb{R}^n)$ with
$\int_{S^1\times\mathbb{R}}|A_{f_0}|^2\geq\epsilon_0^2$. Take $T$, such that $\int_{S^1\times[-T,T]}
|A_{f_0}|^2\geq\frac{\epsilon_0^2}{2}$. We consider
the convergence on $S^1\times [0,t_k-T]$
and $S^1\times[t_k+T,T_k]$ respectively. In this way, we can find out all the bubbles.
\end{document} |
\begin{document}
\definecolor{red}{rgb}{1,0,0}
\title{Do Large Number of Parties Enforce Monogamy in All Quantum Correlations?}
\author{Asutosh Kumar, R. Prabhu, Aditi Sen(De), and Ujjwal Sen}
\affiliation{Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India}
\begin{abstract} Monogamy is a non-classical property that restricts the sharability of quantum correlation among the constituents of a multipartite quantum system. Quantum correlations may satisfy or violate monogamy for
quantum states. Here we provide evidence that almost all pure quantum states of systems consisting of a large number of subsystems are monogamous with respect to all quantum correlation measures of both the entanglement-separability and the information-theoretic paradigms, indicating that the volume of the monogamous pure quantum states increases with an increasing number of parties. Nonetheless, we identify important classes of pure states that remain non-monogamous with respect to quantum discord and quantum work-deficit, irrespective of the number of qubits. We find conditions for which a given quantum correlation measure satisfies vis-\`a-vis violates monogamy.
\end{abstract}
\maketitle
\section{Introduction}
Correlations, classical as well as quantum, present in quantum systems play a significant role in quantum physics.
Differentiating quantum correlations \cite{Horodecki09, ModiDiscord} from classical ones has attracted a lot of attention, since it has been established that quantum correlations can be a useful resource for quantum information protocols, including those in quantum communication and possibly quantum computation. Moreover, quantum correlation turns out to be an effective tool to detect cooperative quantum phenomena in many-body physics \cite{Lewenstein07, Amico08}. Current technological developments make it possible to detect quantum correlations in the laboratory in several physical systems, like photons \cite{XYZphoton}, ions \cite{XYZtrapion}, and optical lattices \cite{Treutlein}, by using techniques like Bell inequalities \cite{inequality}, tomography \cite{tomography}, and entanglement witnesses \cite{witness}.
In the case of bipartite systems, quantum correlations can broadly be categorized into two groups: entanglement measures and information-theoretic ones. Entanglement measures include, to name a few, entanglement of formation \cite{eof}, distillable entanglement \cite{eof,distillable}, logarithmic negativity \cite{VidalWerner}, relative entropy of entanglement \cite{VPRK}, etc., while quantum discord \cite{hv,oz}, quantum work-deficit \cite{workdeficit} are information-theoretic ones.
Monogamy forms a connecting theme in the exquisite variety in the space of quantum correlation measures. In simple terms, monogamy dictates that if two quantum systems are highly quantum correlated with respect to some quantum correlation measure, then they cannot be significantly quantum correlated with a third party with respect to that measure. It should be noted that monogamy is a non-classical concept, and classical correlations can violate monogamy to the maximal extent. These qualitative statements have been quantified \cite{ekert,measent,CKW}, and while quantum correlations, in general, are expected to be qualitatively monogamous, they may violate the quantitative relation for some states. In particular, while the square of the concurrence is monogamous for all multiqubit systems and the square of the negativity is monogamous for all three-qubit pure states \cite{CKW,negativitysquare,Osborne,monogamyN}, entanglement of formation, logarithmic negativity, quantum discord, and quantum work-deficit are known to violate monogamy already for three-qubit pure states \cite{amaderdiscord,giorgi,Fanchini,Dagmar,Salini}.
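Quantitatively, a bipartite quantum correlation measure $\mathcal{Q}$ is said to be monogamous for an $n$-party state $\rho_{A_1A_2\ldots A_n}$ if
$$\mathcal{Q}(\rho_{A_1A_2})+\mathcal{Q}(\rho_{A_1A_3})+\cdots+\mathcal{Q}(\rho_{A_1A_n})\leq \mathcal{Q}(\rho_{A_1:A_2\ldots A_n}),$$
with $A_1$ as the nodal observer. The Coffman-Kundu-Wootters result corresponds to this relation with $\mathcal{Q}$ chosen as the square of the concurrence \cite{CKW}.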
Along with being a way to provide structure to the space of quantum correlation measures, the concept of monogamy is crucially important for the security of quantum cryptography \cite{cryptoreview}. Moreover, monogamy relations also provide a method for quantifying quantum correlations in multiparty scenarios, where it is usually difficult to conceptualize and compute quantum correlations \cite{ekert,measent,CKW,negativitysquare,Osborne,monogamyN,amaderdiscord,giorgi,Fanchini,Dagmar,Salini,MaheshExp}.
In this paper, we address the question of monogamy of bipartite quantum correlations for quantum states of an arbitrary number of parties. We prove a result that identifies properties of a bipartite quantum correlation measure
which are sufficient for that measure to follow the monogamy relation. In particular, we show that if entanglement of formation is monogamous for a pure quantum state of an arbitrary number of parties, any bipartite ``good'' entanglement measure is also monogamous for that state. Furthermore, we perform
numerical simulations
with randomly chosen pure multiqubit quantum states over the uniform Haar measure, which clearly indicate that all the computable bipartite quantum correlation measures for almost all states become (quantitatively) monogamous with the increase in the number of parties.
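To illustrate the kind of Haar-random numerical check involved, the following is a minimal sketch (not the simulation code used for the results reported here) that verifies the monogamy of the squared concurrence, i.e. the Coffman-Kundu-Wootters tangle relation, on Haar-random three-qubit pure states; normalized complex Gaussian vectors are Haar-distributed pure states.

```python
import numpy as np

# Pauli-y and the two-qubit spin-flip operator used in Wootters' formula
sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0.0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def random_pure_state(rng, dim=8):
    """Normalized complex Gaussian vector = Haar-random pure state."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def ckw_sides(psi):
    """Return (C^2_{A:BC}, C^2_{AB} + C^2_{AC}) for a 3-qubit pure state."""
    t = psi.reshape(2, 2, 2)
    rho_A = np.einsum('ijk,ljk->il', t, t.conj())            # trace out B, C
    rho_AB = np.einsum('ijk,lmk->ijlm', t, t.conj()).reshape(4, 4)  # trace out C
    rho_AC = np.einsum('ijk,ljm->iklm', t, t.conj()).reshape(4, 4)  # trace out B
    tangle_A_BC = 2.0 * (1.0 - np.trace(rho_A @ rho_A).real)  # C^2 across A:BC
    return tangle_A_BC, concurrence(rho_AB) ** 2 + concurrence(rho_AC) ** 2

rng = np.random.default_rng(0)
for _ in range(200):
    lhs, rhs = ckw_sides(random_pure_state(rng))
    assert lhs + 1e-9 >= rhs  # CKW: C^2_{A:BC} >= C^2_{AB} + C^2_{AC}
```

Here qubit $A$ plays the role of the nodal observer; since the CKW inequality is a theorem for all three-qubit pure states, the loop serves only as a sanity check of the implementation. An analogous loop with a measure such as quantum discord in place of the squared concurrence is what reveals the monogamy violations discussed below.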
Combining the analytical results with the numerical simulations, it follows that, for example, the relative entropy of entanglement and distillable entanglement, both non-computable but crucially important quantum information quantities, are (quantitatively) monogamous for almost all pure states of four or more qubits. We also show that there are classes of multiparty pure quantum states that are non-monogamous for an arbitrary number of parties for certain quantum correlations. These classes, which have zero Haar volume and hence are not covered in the random Haar searches, include the multiparty $W$ \cite{Wref}, the Dicke states \cite{dickestates}, and the symmetric states, and the corresponding quantum correlations are quantum discord and quantum work-deficit. We provide sufficient conditions for a multiparty quantum state to be non-monogamous. Precisely, we show that the multiqubit states with vanishing tangle \cite{CKW} violate the monogamy relation for quantum discord with a certain nodal observer, provided the sum of the unmeasured conditional entropies of the nodal observer, conditioned on the non-nodal observers, is negative.
The rest of the paper is organized as follows. In Sec. \ref{sec:QCmeas}, we give the definitions of the quantum correlation measures chosen for our study.
In Sec. \ref{sec:monogamy}, we recapitulate the concept of monogamy for an arbitrary quantum correlation measure and numerically establish that all quantum correlation measures eventually become monogamous for almost all pure states with the increase in the number of parties. The zero Haar volume regions containing non-monogamous states are identified in Sec. \ref{sec:analytic} where we also find sufficient conditions for violation of monogamy for given multisite quantum states. We present a conclusion in Sec. \ref{sec:conclusion}.
\section{Quantum Correlation Measures} \label{sec:QCmeas}
Over the years, many quantum correlation measures have been proposed to quantify and characterize quantum correlations in quantum systems consisting of two subsystems \cite{Horodecki09,ModiDiscord}.
The general properties that such bipartite measures should exhibit have been extensively studied.
The sharing of bipartite quantum correlations among the subsystems of a multiparticle system plays an important role in this regard. In particular, it has been realized that it is important to understand the monogamy properties of these measures. We will take up this issue in the succeeding sections. In this section, we present brief definitions of the quantum correlation measures that we use throughout the paper.
Bipartite quantum correlation measures fall into two broad paradigms: (1) entanglement-separability measures and (2) information-theoretic quantum correlation measures. In this section, we define a few quantum correlation measures from both the paradigms.
\subsection{Measures of entanglement-separability paradigm} \label{Sec:esmeas}
Here we consider quantum correlation measures that vanish for separable states and,
on average, do not increase under local quantum operations and classical communication (LOCC). Such quantum correlation measures belong to the entanglement-separability paradigm. We consider three quantum correlation measures, viz., entanglement of formation, concurrence, and logarithmic negativity, within this paradigm.
\subsubsection{Entanglement of formation and concurrence} \label{Sec:eof}
The entanglement of formation (EoF) \cite{concurrence} of a bipartite quantum state is the average number of singlets, \(\frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)\), that is required to prepare a single copy of the state by LOCC, assuming that the EoF of a pure bipartite state is given by its local von Neumann entropy. Here \(|0\rangle\) and \(|1\rangle\) form an orthonormal qubit basis. For an arbitrary two-qubit state, \(\rho_{AB}\), there exists a closed form of the entanglement of formation \cite{concurrence}, in terms of the concurrence, ${\cal C}$, as \begin{equation} \mathcal{E}(\rho_{AB})=h\left(\frac{1+\sqrt{1-\mathcal{C}^2(\rho_{AB})}}{2}\right), \end{equation} where $h(x)=-x\log_2x-(1-x)\log_2(1-x)$ is the Shannon (binary) entropy. Note that EoF is a concave function of $\mathcal{C}^2$, lies between 0 and 1, and vanishes for separable states.
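The closed form above is elementary to evaluate. A minimal Python sketch (the function names are ours, introduced only for illustration):

```python
import numpy as np

def binary_entropy(x):
    """Shannon binary entropy h(x) in bits, with h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def eof_from_concurrence(C):
    """EoF of a two-qubit state with concurrence C, via the closed form."""
    return binary_entropy((1.0 + np.sqrt(1.0 - C ** 2)) / 2.0)
```

A maximally entangled state ($\mathcal{C}=1$) gives one ebit, while $\mathcal{C}=0$ gives zero, in line with the properties noted above.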
Concurrence can also be used to quantify entanglement for all two-qubit states \cite{concurrence}. For any two-qubit state, \(\rho_{AB}\), the concurrence is given by ${\cal C}(\rho_{AB})=\mbox{max}\{0,\lambda_1-\lambda_2-\lambda_3-\lambda_4\}$, where the $\lambda_i$'s are the square roots of the eigenvalues of $\rho_{AB}\tilde{\rho}_{AB}$ in decreasing order and
$\tilde{\rho}_{AB}=(\sigma_y\otimes\sigma_y)\rho_{AB}^*(\sigma_y\otimes\sigma_y)$, with the complex conjugation being taken in the computational basis, and $\sigma_y$ being the Pauli spin matrix. For pure two-qubit states, $|\psi_{AB}\rangle$, the concurrence is given by $2\sqrt{\textrm{det} \rho_A}$, where $\rho_A$ is the subsystem density matrix obtained by tracing out the $B$-part of $|\psi_{AB}\rangle$. For pure states in $2\otimes d$, the concurrence is again given by $2\sqrt{\textrm{det} \rho_A}$, due to the Schmidt decomposition. For mixed states in $2\otimes d$, one can use the convex roof extension to calculate the same.
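The Wootters formula translates directly into code. A minimal Python sketch (function names ours), assuming the state is given as a $4\times4$ density matrix:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4 array)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy  # spin flip; conjugation in computational basis
    # lambda_i: square roots of the eigenvalues of rho * rho_tilde, decreasing
    lams = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

# The singlet has concurrence 1; a product state such as |00> has concurrence 0.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_singlet = np.outer(singlet, singlet.conj())
```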
\subsubsection{Logarithmic negativity} \label{entanglement}
Logarithmic negativity (LN) \cite{VidalWerner} is another measure belonging to the entanglement-separability paradigm. It is defined in terms of the negativity, \({\cal N}(\rho_{AB})\), of a bipartite state \(\rho_{AB}\), which is the absolute value of the sum of the negative eigenvalues of \(\rho_{AB}^{T_{A}}\),
where \(\rho_{AB}^{T_{A}}\) denotes the partial transpose of \(\rho_{AB}\) with respect to the \(A\)-part \cite{Peres_Horodecki}. The negativity can be expressed as
\begin{equation}
{\cal N}(\rho_{AB})=\frac{\|\rho_{AB}^{T_A}\|_1-1}{2},
\end{equation}
where $\|M\|_1 \equiv \mbox{tr}\sqrt{M^\dag M}$ is the trace-norm of the matrix $M$. The logarithmic negativity is defined as \begin{equation} E_{\cal N}(\rho_{AB}) = \log_2 [2 {\cal N}(\rho_{AB}) + 1]. \label{eq:LN} \end{equation}
For two-qubit states, a strictly positive LN implies that the state is entangled and distillable \cite{Peres_Horodecki, Horodecki_distillable}, whereas
a vanishing LN implies that the state is separable \cite{Peres_Horodecki}.
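Since the negativity only requires the spectrum of the partial transpose, both quantities are simple to compute. A minimal Python sketch (helper names ours), written for two qubits by default:

```python
import numpy as np

def negativity(rho, dA=2, dB=2):
    """Negativity: |sum of negative eigenvalues| of the partial transpose on A."""
    r = rho.reshape(dA, dB, dA, dB)
    rho_ta = r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)  # transpose A indices
    evals = np.linalg.eigvalsh(rho_ta)
    return float(sum(-e for e in evals if e < 0))

def log_negativity(rho, dA=2, dB=2):
    """E_N = log2(2 N + 1), i.e., log2 of the trace norm of the partial transpose."""
    return float(np.log2(2.0 * negativity(rho, dA, dB) + 1.0))

# The Bell state (|00> + |11>)/sqrt(2) has N = 1/2 and E_N = 1.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
```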
\subsection{Information-theoretic quantum correlation measures}
In this subsection, we will briefly describe two measures of quantum correlation chosen from the information-theoretic perspective. Although many of the quantum information protocols are assisted by quantum entanglement, there are several protocols for which presence of entanglement is not required \cite{nlwe,KnillLaflamme,Animesh,others,MLSM}. Information-theoretic quantum correlation measures may potentially explain such phenomena.
Unlike the entanglement-based quantum correlation measures above, no closed-form expressions are known for the information-theoretic measures. However, in many cases, it is possible to calculate them numerically.
\subsubsection{Quantum discord} \label{discord}
Quantum discord for a bipartite state $\rho_{AB}$ is defined as the difference between the total correlation and the classical correlation of the state. The total correlation is defined as the quantum mutual information of \(\rho_{AB}\), which is given by \cite{qmi} (see also \cite{Cerf, GROIS}) \begin{equation} \label{qmi} \mathcal{I}(\rho_{AB})= S(\rho_A)+ S(\rho_B)- S(\rho_{AB}), \end{equation} where $S(\varrho)= - \mbox{tr} (\varrho \log_2 \varrho)$ is the von Neumann entropy of the quantum state \(\varrho\). The classical correlation is based on the conditional entropy, and is defined as \begin{equation} \label{eq:classical}
{\cal J}^{\leftarrow}(\rho_{AB}) = S(\rho_A) - S(\rho_{A|B}). \end{equation} Here, \begin{equation}
S(\rho_{A|B}) = \min_{\{B_i\}} \sum_i p_i S(\rho_{A|i}) \end{equation} is the conditional entropy of \(\rho_{AB}\), conditioned on a measurement performed by \(B\) with a rank-one projection-valued measurement \(\{B_i\}\), producing the states
\(\rho_{A|i} = \frac{1}{p_i} \mbox{tr}_B[(\mathbb{I}_A \otimes B_i) \rho_{AB} (\mathbb{I}_A \otimes B_i)]\), with probability \(p_i = \mbox{tr}_{AB}[(\mathbb{I}_A \otimes B_i) \rho_{AB} (\mathbb{I}_A \otimes B_i)]\). \(\mathbb{I}_A\) is the identity operator on the Hilbert space of \(A\). Hence the discord can be calculated as \cite{hv,oz} \begin{equation} \label{eq:discord} {\cal D}^{\leftarrow}(\rho_{AB})= {\cal I}(\rho_{AB}) - {\cal J}^{\leftarrow}(\rho_{AB}). \end{equation} Here, the superscript ``$\leftarrow$" on ${\cal J}^{\leftarrow}(\rho_{AB})$ and ${\cal D}^{\leftarrow}(\rho_{AB})$ indicates that the measurement is performed on the subsystem $B$ of the state $\rho_{AB}$. Similarly, if the measurement is performed on the subsystem $A$ of the state $\rho_{AB}$, one can define a quantum discord as
\begin{equation} \label{eq:discordA} {\cal D}^{\rightarrow}(\rho_{AB})= {\cal I}(\rho_{AB}) - {\cal J}^{\rightarrow}(\rho_{AB}). \end{equation}
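The above definition lends itself to direct numerical evaluation for two-qubit states: one parameterizes the rank-one projective measurement on $B$ by Bloch angles and minimizes the measured conditional entropy. The sketch below (all names ours) is one simple way to do this in Python, with no claim of optimality:

```python
import numpy as np
from scipy.optimize import minimize

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace_B(rho):
    """Trace out qubit B from a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

def ptrace_A(rho):
    """Trace out qubit A from a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

def measured_cond_entropy(rho, theta, phi):
    """S(rho_{A|B}) for the projective measurement along (theta, phi) on B."""
    v = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    v_perp = np.array([-np.conj(v[1]), np.conj(v[0])])
    s = 0.0
    for vec in (v, v_perp):
        B = np.outer(vec, vec.conj())            # rank-one projector B_i
        M = np.kron(np.eye(2), B)
        sub = M @ rho @ M
        p = np.real(np.trace(sub))
        if p > 1e-12:
            s += p * vn_entropy(ptrace_B(sub / p))   # p_i S(rho_{A|i})
    return s

def discord(rho, n_starts=10, seed=0):
    """D^{<-}(rho): discord of a two-qubit state, measurement on B."""
    rho_A, rho_B = ptrace_B(rho), ptrace_A(rho)
    mutual = vn_entropy(rho_A) + vn_entropy(rho_B) - vn_entropy(rho)
    rng = np.random.default_rng(seed)
    best = min(
        minimize(lambda x: measured_cond_entropy(rho, x[0], x[1]),
                 rng.uniform([0, 0], [np.pi, 2 * np.pi]),
                 method="Nelder-Mead").fun
        for _ in range(n_starts)
    )
    classical = vn_entropy(rho_A) - best
    return mutual - classical
```

For the Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$ this procedure yields ${\cal D}^{\leftarrow}=1$, and it yields $0$ for product states.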
\subsubsection{Quantum work-deficit} \label{sec:workdeficit}
Quantum work-deficit \cite{workdeficit} is a quantum correlation measure also belonging to the information-theoretic paradigm. It is defined as the difference between the number of pure qubits that can be extracted under global operations and the number of pure product qubits that can be extracted under local operations, in closed systems for which addition of ancillary pure states is not allowed.
The number of pure qubits that can be extracted from $\rho_{AB}$ by ``closed global operations'' (CGO) is given by \[I_G (\rho_{AB})= N - S(\rho_{AB}),\]
where $N = \log_2 (\dim {\cal H})$. Here, CGO consist of any sequence of unitary operations and dephasings of the given state $\rho_{AB}$ by a set of projectors $\{P_i\}$, i.e., $\rho_{AB} \rightarrow \sum_i P_i \rho_{AB} P_i$, where $P_iP_j = \delta_{ij} P_i$, $\sum_i P_i = \mathbb{I}$, with $\mathbb{I}$ being the identity operator on the Hilbert space ${\cal H}$ on which $\rho_{AB}$ is defined.
The number of qubits that can be extracted from a bipartite quantum state $\rho_{AB}$ under ``closed local operations and classical communication'' (CLOCC) is given by \begin{equation} I_L(\rho_{AB}) = N - \inf_{\Lambda \in CLOCC} [S(\rho{'}_A) + S(\rho{'}_B)], \end{equation} where $S(\rho{'}_A) = S(\mbox{tr}_B (\Lambda (\rho_{AB})))$ and $S(\rho{'}_B) = S(\mbox{tr}_A (\Lambda (\rho_{AB})))$. Here CLOCC consists of local unitaries, local dephasings, and sending a dephased subsystem from one party to another.
The quantum work-deficit is the difference between the work, $I_G (\rho_{AB})$, extractable by CGO, and that by CLOCC, $I_L (\rho_{AB})$:
\begin{equation}
\Delta(\rho_{AB}) = I_G(\rho_{AB}) - I_L(\rho_{AB}). \end{equation} Since it is inefficient to compute this quantity for arbitrary states, we restrict our analysis to CLOCC protocols in which the measurement is performed on only one of the subsystems. One can show that this one-way work-deficit is the same as quantum discord for bipartite states with maximally mixed marginals.
\section{Status of Monogamy of Quantum Correlations for Arbitrary Number of Parties} \label{sec:monogamy}
In this section, we begin by formally introducing the concept of monogamy and the relations that a quantum correlation measure must satisfy, for it to be monogamous for a given quantum state of an arbitrary number of parties. We then
investigate the extent to which bipartite quantum correlation measures satisfy the (quantitative) monogamy relation.
\subsection{Monogamy of quantum correlations} \label{MonoQC}
Let ${\cal Q}$ be a bipartite quantum correlation measure. An \(n\)-party quantum state, $\rho_{12\cdots n}$, is said to be (quantitatively) monogamous under the quantum correlation measure ${\cal Q}$ if it satisfies the inequality \cite{CKW} \begin{equation}
{\cal Q}(\rho_{12})+{\cal Q}(\rho_{13})+\cdots+{\cal Q}(\rho_{1n})\leq{\cal Q}(\rho_{1(2\cdots n)}). \end{equation} Otherwise, it is non-monogamous. Here, $\rho_{12}=\mbox{tr}_{3\ldots n}(\rho_{12\cdots n})$, etc. and ${\cal Q}(\rho_{1(2\cdots n)})$ denotes the quantum correlation ${\cal Q}$ of $\rho_{12\cdots n}$ in the $1:2\cdots n$ bipartition. We will also denote ${\cal Q}(\rho_{12})$, etc. as ${\cal Q}_{12}$, etc. and ${\cal Q}(\rho_{1(2\cdots n)})$ as ${\cal Q}_{1(2\cdots n)}$. The party ``\(1\)" can be referred to as the ``nodal" observer. In this respect, one can define the ``\({\cal Q}\)-monogamy score" \cite{manab} for the \(n\)-party state, $\rho_{12\cdots n}$, as \begin{equation} \label{eq:monoscore} \delta_{{\cal Q}} = {\cal Q}_{1(2\cdots n)} - \sum _{i=2}^{n} {\cal Q}_{1i}. \end{equation} Non-negativity of \(\delta_{{\cal Q}}\) for all quantum states implies monogamy of \({\cal Q}\). For instance, the square of the concurrence has been shown to be monogamous \cite{CKW, Osborne} for all quantum states for an arbitrary number of qubits. However, there exist other measures like entanglement of formation, quantum discord, and quantum work-deficit which are known to be non-monogamous, examined primarily for pure three-qubit states \cite{measent,amaderdiscord, giorgi, Salini}. See also \cite{Fanchini,Dagmar,GMS,RajagopalRendell,usha,FanchiniEof,Sandersrev, mono13}.
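As an illustration of the monogamy score in Eq. (\ref{eq:monoscore}), the following Python sketch (all names ours) evaluates $\delta_{{\cal C}^2}$, i.e., the tangle, for three-qubit pure states; it reproduces the known values $\delta_{{\cal C}^2}=1$ for the GHZ state and $\delta_{{\cal C}^2}=0$ for the $W$ state:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lams = np.sort(np.sqrt(np.abs(
        np.linalg.eigvals(rho @ (yy @ rho.conj() @ yy)).real)))[::-1]
    return max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

def tangle_score(psi):
    """delta_{C^2} = C^2_{1(23)} - C^2_{12} - C^2_{13} for a 3-qubit pure state."""
    t = psi.reshape(2, 2, 2)
    rho1 = np.einsum('abc,dbc->ad', t, t.conj())
    rho12 = np.einsum('abc,dec->abde', t, t.conj()).reshape(4, 4)
    rho13 = np.einsum('abc,dbe->acde', t, t.conj()).reshape(4, 4)
    c2_1_rest = 4.0 * np.real(np.linalg.det(rho1))  # C = 2 sqrt(det rho_1)
    return c2_1_rest - concurrence(rho12) ** 2 - concurrence(rho13) ** 2

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)   # (|000> + |111>)/sqrt(2)
w3 = np.zeros(8); w3[1] = w3[2] = w3[4] = 1 / np.sqrt(3)
```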
\subsection{Appearance of monogamy as generic in large systems} \label{subsec:monoarb}
In most of the previous works on monogamy, the status of the monogamy relation for different quantum correlation measures has been considered only for three-qubit pure states. Exceptions include the square of the concurrence, which is proven to be monogamous for any number of qubits \cite{Osborne}. Here we address the question of monogamy for an arbitrary number of parties for an arbitrary bipartite quantum correlation measure. Before presenting the general results, let us first consider the numerical data that we obtain for different bipartite measures, which strongly indicate that the results for tripartite systems cannot be generalized to systems with a large number of parties. In particular, numerical simulations indicate that \emph{all} quantum correlation measures become monogamous for almost all multiparty pure quantum states of a relatively moderate number of parties. Monogamous quantum states are known to be useful in several applications, including quantum cryptography. The relatively moderate number of parties -- five
-- ensures that the results are relevant to the experimental techniques currently available, both with photons and with massive particles \cite{XYZphoton,XYZtrapion,Treutlein}. The status of the monogamy relation, as obtained via the numerical simulations, for all computable entanglement measures is summarized in Table \ref{table:ent-mono-percent}. It is clear from Table \ref{table:ent-mono-percent} that several entanglement measures which are non-monogamous for three-qubit pure states become monogamous when one increases the number of parties by a relatively moderate amount. Some of the results from Table \ref{table:ent-mono-percent} are also depicted in Fig. \ref{fig-HistoPlot_EntMeas}.
\begin{table}[ht] \centering \begin{tabular}{cccccccc} \hline $n$ & $\delta_{\cal C}$ & $\delta_{\cal E}$ & $\delta_{{\cal E}^2}$ & $\delta_{\cal N}$ & $\delta_{{\cal N}^2}$ & $\delta_{E_{\cal N}}$ & $\delta_{E^2_{\cal N}}$ \\[0.5ex] \hline 3 & 60.2 & 93.3 & 100 & 91.186 & 100 & 68.916 & 100\\[0.5ex] 4 & 99.6 & 100 & 100 & 99.995 & 100 & 99.665 & 100\\[0.5ex] 5 & 100 & 100 & 100 & 100 & 100 & 100 & 100\\[1ex]
\hline \end{tabular} \caption{Monogamy percentage table for entanglement measures. We randomly choose $10^5$ pure quantum states uniformly according to the Haar measure over the complex $2^n$-dimensional Hilbert space for each $n$, for $n=3,4,\ \textrm{and}\ 5$. Here $n$, therefore, denotes the number of qubits forming the system from which the pure quantum states are chosen. $\delta_{\cal C}, \delta_{\cal E}, \delta_{\cal N}, \delta_{E_{\cal N}}$ respectively denote the monogamy scores of concurrence, EoF, negativity and logarithmic negativity, while $\delta_{{\cal E}^2}, \delta_{{\cal N}^2}, \delta_{E_{\cal N}^2}$ are the monogamy scores of the squares of these measures. The numbers shown are percentages of the randomly chosen states that are monogamous for that case.
}\label{table:ent-mono-percent} \end{table}
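The Haar-uniform sampling underlying Table \ref{table:ent-mono-percent} can be implemented with a standard trick: a vector of i.i.d. complex Gaussian entries, once normalized, is uniformly distributed on the unit sphere of the Hilbert space. A minimal Python sketch (names ours):

```python
import numpy as np

def random_haar_pure_state(n_qubits, rng=None):
    """Pure state drawn uniformly (Haar) from the 2**n_qubits-dim Hilbert space."""
    rng = np.random.default_rng(rng)
    d = 2 ** n_qubits
    z = rng.normal(size=d) + 1j * rng.normal(size=d)  # i.i.d. complex Gaussians
    return z / np.linalg.norm(z)

psi = random_haar_pure_state(3, rng=0)                # a random 3-qubit pure state
```

Sampling $10^5$ such states and evaluating a monogamy score on each reproduces the kind of statistics reported in the table.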
Before moving to the other quantum correlation measures, let us prove a sufficient condition for an arbitrary ``good'' entanglement measure to be monogamous for a given state.
\noindent\textbf{Theorem 1: (Monogamy for given states.)} \textit{If entanglement of formation is monogamous for a pure quantum state of an arbitrary number of parties, any bipartite ``good" entanglement measure is also monogamous for that state.}\\ \textbf{Remark.} In Ref. \cite{horodecki-limit}, bipartite entanglement measures satisfying certain reasonable axioms were referred to as ``good" measures, and were shown to be bounded above by the entanglement of formation and equal to the local von Neumann entropy for pure bipartite states \cite{measent}. Here, we slightly broaden the definition, and call an entanglement measure ``good" if it is bounded above by the entanglement of formation and is equal to the local von Neumann entropy for pure bipartite states.
\noindent\texttt{Proof.} Let $|\psi_{12\cdots n}\rangle$ be a multipartite quantum state consisting of $n$ parties and $\delta_{\cal E}$, the monogamy score of entanglement of formation, be non-negative for the $n$-party state. Consider now any ``good" bipartite entanglement measure ${\cal Q}$.
Therefore, when entanglement of formation is monogamous, we have \begin{equation}
\mathcal{Q}(|\psi_{1(2\cdots n)}\rangle)= S(\rho^{\psi}_1)= \mathcal{E}(|\psi_{1(2\cdots n)}\rangle)\geq \sum_{j=2}^n\mathcal{E}_{1j} \geq \sum_{j=2}^n\mathcal{Q}_{1j}
\label{theorem}
\end{equation}
where the equalities hold because ``good'' measures reduce to the local von Neumann entropy on pure bipartite states, the first inequality is the assumed monogamy of the entanglement of formation, and the second inequality holds because $\mathcal{Q}\leq\mathcal{E}$ for ``good'' measures. (Here $\rho^{\psi}_1=\mbox{tr}_{2\cdots n}|\psi\rangle\langle \psi|$.) Hence the proof.
$\blacksquare$
\begin{figure}
\caption{(Color online) Percentage bar-diagram for monogamy scores of entanglement measures for pure $n$-party states. $10^5$ random pure quantum states are generated for each $n$. All notations are the same as for Table \ref{table:ent-mono-percent}.}
\label{fig-HistoPlot_EntMeas}
\end{figure}
Table \ref{table:ent-mono-percent} shows that entanglement of formation is monogamous for almost all pure states of four qubits. Utilizing this result along with that from Theorem 1, we obtain that relative entropy of entanglement, regularized relative entropy of entanglement \cite{regREE}, entanglement cost \cite{eof,Entcost}, distillable entanglement, all of which are not generally computable, are monogamous for almost all pure states of four or more qubits.
Just like for the entanglement measures, and as displayed in Table \ref{table:info-mono-percent}, the percentages of randomly chosen pure states satisfying monogamy increase also for all information-theoretic quantum correlation measures with the increase in the number of parties (see also Fig. \ref{fig-HistoPlot_DisWDscore}). Note here that the square of quantum discord (precisely ${{\cal D}^{\leftarrow}}^2$) was shown to be monogamous \cite{discordsquare}
for three-qubit pure states. For ease of notation, we often denote $\delta_{{{\cal D}^{\rightarrow}}^2}$, $\delta_{\vartriangle^{\leftarrow}}$, etc., as $\delta_{{\cal D}^2}^{\rightarrow}$, $\delta_{\vartriangle}^{\leftarrow}$, etc., respectively. It is to be noted here that uniform Haar searches may tend to become inefficient when we consider a large number of parties (cf. \cite{AWinter}). However, they are efficient for the few-qubit systems that we consider here, especially for $n=3$ and $n=4$.
\begin{table}[ht] \centering \begin{tabular}{ccccccccc} \hline $n$ & $\delta_{\cal D}^{\rightarrow}$ & $\delta_{{\cal D}^2}^{\rightarrow}$ & $\delta_{\cal D}^{\leftarrow}$ & $\delta_{{\cal D}^2}^{\leftarrow}$ & $\delta_{\vartriangle}^{\rightarrow}$ & $\delta_{{\vartriangle}^2}^{\rightarrow}$ & $\delta_{\vartriangle}^{\leftarrow}$ & $\delta_{{\vartriangle}^2}^{\leftarrow}$\\[0.5ex] \hline 3 & 90.5 & 100 & 93.28 & 100 & 56.29 & 88.10 & 57.77 & 89.56\\[0.5ex] 4 & 99.997 & 100 & 99.99 & 100 & 94.27 & 99.99 & 97.63 & 100\\[0.5ex] 5 & 100 & 100 & 100 & 100 & 99.98 & 100 & 99.99 & 100 \\[1ex]
\hline \end{tabular} \caption{Percentage table for quantum states satisfying the monogamy relation for information-theoretic paradigm measures. We randomly chose $10^5$ pure quantum states uniformly according to the Haar measure over the complex $2^n$-dimensional Hilbert space.
The numbers shown are percentages of the randomly chosen states that are monogamous for that case. }\label{table:info-mono-percent} \end{table}
\begin{figure}
\caption{(Color online) Percentage bar-diagram for monogamy scores of information-theoretic quantum correlation measures for pure $n$-party states. $10^5$ random pure quantum states are generated for each $n$. All notations are the same as in Table \ref{table:info-mono-percent}.}
\label{fig-HistoPlot_DisWDscore}
\end{figure}
Let us now specify certain sets of properties that are sufficient for a quantum correlation measure to satisfy the monogamy relation for an arbitrary number of parties in arbitrary dimensions, when it is monogamous for a smaller number of parties \cite{Osborne}.
Let us consider an \(n\)-party pure state, \( |\psi_{12\ldots n}\rangle\), with each party of dimension \(d\), which we partition in such a way that the resulting state is always tripartite. In this case, we have the following theorem. \\
\noindent\textbf{Theorem 2: (Monogamy for given measures.)} \textit{If \({\cal Q}\) is monogamous for all tripartite quantum states in \(d \otimes d \otimes d_C \) where \(d_C =d^m,\ m\leq n-2\), then \({\cal Q}\) is monogamous for pure quantum states of the $n$ parties.
}
\noindent\texttt{Proof.}
Suppose that the dimension of the third party is \(d^{n-2}\), consisting of \((n-2)\) parties, say \(3, \ldots, n\), each being of dimension \(d\). The monogamy of such a \(3\)-party state, \(|\psi_{1:2:3 \ldots n}\rangle\), implies that \begin{eqnarray} \label{eq:recursion}
{\cal Q}(|\psi_{1(23\ldots n)}\rangle) &\geq& {\cal Q}(\rho^{\psi}_{12}) + {\cal Q}(\rho^{\psi}_{1(3\ldots n)}) \nonumber \\ &\geq & {\cal Q}(\rho^{\psi}_{12}) + {\cal Q}(\rho^{\psi}_{13}) + {\cal Q}(\rho^{\psi}_{1(4\ldots n)}) \nonumber \\ &\geq & {\cal Q}(\rho^{\psi}_{12}) + {\cal Q}(\rho^{\psi}_{13}) + \sum_{k=4}^{n}{\cal Q}(\rho^{\psi}_{1k}), \end{eqnarray} where the second inequality is obtained by applying the monogamy relation for the tripartite state \(\rho^{\psi}_{1:3:4\ldots n}\), with the third party having \((n-3)\) parties, and the third inequality is by applying such monogamy relations recursively. Here, the local density matrices are denoted by $\rho^{\psi}$ with the appropriate suffixes determined by the parties that are not traced out.
$\blacksquare$
\noindent\textbf{Theorem 3:} \textit{If \({\cal Q}\) is monogamous for all tripartite pure quantum states in \(d \otimes d \otimes d_C \) where \(d_C =d^m,\ m\leq n-2\), then \({\cal Q}\) is monogamous for all quantum states, pure or mixed, of the $n$ parties, provided \({\cal Q}\) is convex, and \({\cal Q}\) for mixed states is defined through the convex roof approach.
}
\noindent\texttt{Proof.}
Consider a mixed state $\rho_{123\ldots n}$ in the tripartition $1:2:3\ldots n$ and let $\{p_i,\, |\psi^i_{1(2\ldots n)}\rangle\}$ be the optimal decomposition that attains the convex roof of ${\cal Q}(\rho_{1(2\ldots n)})$. Therefore, \begin{eqnarray}
{\cal Q}(\rho_{1(23\ldots n)}) &=& {\cal Q}\left(\sum_i p_i|\psi^i_{1(2\ldots n)}\rangle\langle\psi^i_{1(2\ldots n)}|\right) \nonumber \\
&=& \sum_ip_i{\cal Q}(|\psi^i_{1(2\ldots n)}\rangle). \end{eqnarray} Due to the assumed monogamy over pure states, we have \begin{eqnarray} {\cal Q}(\rho_{1(2\ldots n)}) &\geq & \sum_i p_i \left({\cal Q}(\rho^{\psi^i}_{12})+{\cal Q}(\rho^{\psi^i}_{1(3\ldots n)})\right) \nonumber \\ &=&\sum_i p_i {\cal Q}(\rho^{\psi^i}_{12}) + \sum_i p_i {\cal Q}(\rho^{\psi^i}_{1(3\ldots n)}), \end{eqnarray} which, due to convexity of ${\cal Q}$, reduces to \begin{eqnarray} {\cal Q}(\rho_{1(2\ldots n)}) &\geq & {\cal Q}(\rho_{12}) + {\cal Q}(\rho_{1(3\ldots n)}). \end{eqnarray} The result now follows by applying this argument recursively, as in the proof of Theorem 2, together with the convexity of ${\cal Q}$.
$\blacksquare$
\section{A zero measure class of non-monogamous states} \label{sec:analytic}
In the preceding section, we presented evidence that almost all multiparty states for even a moderate number of parties are monogamous with respect to all quantum correlation measures. The qualification ``almost'' is important and necessary, firstly because the uniform Haar searches do not take into account violations of the corresponding property (monogamy, here) on hypersurfaces (more generally, on sets of zero Haar measure). Secondly, and more constructively, we identify
a class of multiparty states that are non-monogamous with respect to information-theoretic quantum correlation measures for an arbitrary number of parties. We begin by deriving an analytic relation, which will subsequently help us to identify the class of states.
For an arbitrary tripartite quantum state $\rho_{123}$, we have the relation \cite{Koashi-Winter} \begin{equation}
{\cal S}(\rho_{1|3}) + {\cal D}^{\leftarrow}(\rho_{13}) \geq \mathcal{E}(\rho_{12}), \end{equation}
where ${\cal S}(\rho_{1|3}) = S(\rho_{13})-S(\rho_3)$ is the ``unmeasured conditional entropy" of $\rho_{13}$ conditioned on the party 3, and $\rho_{13},\, \rho_{12}$, and $\rho_{3}$ are local density matrices of $\rho_{123}$ of the corresponding parties. Let us now consider an $n$-party pure state $|\psi_{12\cdots n}\rangle$. Applying the above relation for an arbitrary tripartite partition of $|\psi_{12\cdots n}\rangle$,
we obtain \begin{equation}
{\cal S}(\rho^{\psi}_{1|j}) + {\cal D}^{\leftarrow}(\rho^{\psi}_{1j}) \geq \mathcal{E}(\rho^{\psi}_{1i}), \end{equation}
with $i\ne j$, and where the superscript $\psi$ on the local density matrices indicates that they are obtained by tracing out the requisite parties from the state $|\psi_{12\ldots n}\rangle$. If we choose $i=j+1$ for all $j$ except $j=n$, choose $i=2$ for $j=n$, and sum over $j$, we have
\begin{equation}
\sum_{j=2}^n\mathcal{E}(\rho^{\psi}_{1j}) \leq \sum_{j=2}^n\left(\mathcal{S}(\rho^{\psi}_{1|j})+{\cal D}^{\leftarrow}(\rho^{\psi}_{1j})\right).
\label{eq:eofdisc} \end{equation}
We now use the above inequality to prove a theorem. The tangle of a multiqubit quantum state $\rho_{12\ldots n}$ is denoted by $\tau(\rho_{12\ldots n})$ and is defined as $ \mathcal{C}^2(\rho_{1(2\cdots n)})-\sum_{j=2}^n\mathcal{C}^2(\rho_{1j})$.
\noindent\textbf{Theorem 4:} {\em Multiparty pure states with vanishing tangle violate the monogamy relation for quantum discord with a certain nodal observer, provided the sum of the unmeasured conditional entropies, conditioned on all non-nodal observers, is negative. }
\noindent\texttt{Proof.} Let us consider an $n$-party state $|\psi_{12\cdots n}\rangle$ for which the tangle vanishes, i.e., the state saturates the monogamy relation for ${\cal C}^2$. Hence
$\sum_{j=2}^n\mathcal{C}^2(\rho^{\psi}_{1j})= \mathcal{C}^2(|\psi_{1(2\cdots n)}\rangle)$. Since the EoF is a concave function of the squared concurrence that vanishes at $\mathcal{C}^2=0$, and is therefore subadditive,
we have \begin{equation}
\sum_{j=2}^n\mathcal{E}(\rho^{\psi}_{1j}) \geq \mathcal{E}(|\psi_{1(2\cdots n)}\rangle). \end{equation}
Since $\mathcal{E}(|\psi_{1(2\cdots n)}\rangle)= S(\rho^{\psi}_1)=\mathcal{D}^{\leftarrow}(|\psi_{1(2\cdots n)}\rangle)$, by using the inequality in (\ref{eq:eofdisc}),
we have \begin{eqnarray}
&&\sum_{j=2}^n\left({\cal S}(\rho^{\psi}_{1|j})+{\cal D}^{\leftarrow}(\rho^{\psi}_{1j})\right) \geq \sum_{j=2}^n\mathcal{E}(\rho^{\psi}_{1j}) \nonumber\\
&&\geq \mathcal{E}(|\psi_{1(2\cdots n)}\rangle)= S(\rho^{\psi}_1)=\mathcal{D}^{\leftarrow}(|\psi_{1(2\cdots n)}\rangle). \end{eqnarray} Therefore, the discord monogamy score has the following bound: \begin{equation}
\delta_{\cal D}^{\leftarrow}=\mathcal{D}^{\leftarrow}(|\psi_{1(2\cdots n)}\rangle)-\sum_{j=2}^n{\cal D}^{\leftarrow}(\rho^{\psi}_{1j})
\leq \sum_{j=2}^n {\cal S}(\rho^{\psi}_{1|j}).
\label{eq:discordpolygamy} \end{equation}
Hence, if a state with vanishing tangle additionally satisfies $\sum_{j=2}^n\mathcal{S}(\rho^{\psi}_{1|j}) <0$, then quantum discord is non-monogamous for that state.
$\blacksquare$
The non-monogamy of discord for three-qubit $W$ states \cite{amaderdiscord,giorgi} is a special case of Theorem 4.
It can be easily checked that the $n$-qubit W state, $|W_n\rangle$ \cite{Wref}, given by \begin{equation}
|W_n\rangle = \frac{1}{\sqrt{n}}(|0\ldots1\rangle + \ldots + |1\ldots0\rangle), \end{equation} remains non-monogamous with respect to quantum discord for an arbitrary number of parties.
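The hypothesis of Theorem 4, the vanishing of the tangle for $|W_n\rangle$ at every $n$, can be verified directly. A minimal Python sketch (all names ours):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lams = np.sort(np.sqrt(np.abs(
        np.linalg.eigvals(rho @ (yy @ rho.conj() @ yy)).real)))[::-1]
    return max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

def w_state(n):
    """|W_n>: equal superposition of all single-excitation basis states."""
    psi = np.zeros(2 ** n)
    for k in range(n):
        psi[1 << k] = 1.0
    return psi / np.sqrt(n)

def tangle(psi, n):
    """tau = C^2_{1(2...n)} - sum_j C^2_{1j} for an n-qubit pure state."""
    t = psi.reshape((2,) * n)
    rest = list(range(1, n))
    rho1 = np.tensordot(t, t.conj(), axes=(rest, rest))
    tau = 4.0 * np.real(np.linalg.det(rho1))   # C^2 in the 1 : rest bipartition
    for j in range(1, n):
        tr_axes = [a for a in range(n) if a not in (0, j)]
        rho1j = np.tensordot(t, t.conj(), axes=(tr_axes, tr_axes)).reshape(4, 4)
        tau -= concurrence(rho1j) ** 2
    return tau
```

Running `tangle(w_state(n), n)` returns a numerical zero for every tested $n$.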
Let us now discuss some further classes of states which remain non-monogamous for quantum discord for an arbitrary number of parties.
Towards that end, consider the Dicke state \cite{dickestates} \begin{equation}
|W_{n}^r\rangle=\frac{1}{\sqrt{\binom{n}{r}}} \sum_{\textrm{permuts}}|\underbrace{00...0}_{n-r}\underbrace{11...1}_{r}\rangle,
\label{eq:Dicke} \end{equation}
where $\sum_{\textrm{permuts}}$ represents the unnormalized equal superposition over all $\binom{n}{r}$ distinct arrangements of $(n-r)$ $|0\rangle$'s and $r$ $|1\rangle$'s.
We now examine the discord score of the above state.
Since the Dicke state is permutationally invariant with respect to its subsystems, the optimization involved in computing the quantum discord of the two-qubit reduced density matrices can be performed analytically \cite{Chen}. Hence, an analytic expression for the discord score of the Dicke states
can be obtained and is given by
\begin{equation}
\delta_{\cal D}^{\leftarrow}(|W_{n}^r\rangle)=S_1-(n-1)\Big(S_2- S_{12}+H(\{\lambda_{\pm}\})\Big), \end{equation} where \begin{eqnarray}
S_1&=&-\frac{r}{n}\log_2 \frac{r}{n}-(1-\frac{r}{n})\log_2 (1-\frac{r}{n}), \nonumber \\
S_2&=&-(a+b)\log_2 (a+b)-(b+c)\log_2 (b+c), \nonumber \\
S_{12}&=&-a\log_2 a-2b\log_2 2b-c\log_2 c, \nonumber \\
\lambda_{\pm}&=&(1 \pm \sqrt{1-4(ab+bc+ca)})/2, \end{eqnarray}
with $a=\frac{(n-r)(n-r-1)}{n(n-1)}$, $b=\frac{r(n-r)}{n(n-1)}$, and $c=\frac{r(r-1)}{n(n-1)}$. Note here that the tangle vanishes for the Dicke states with $r=1$. However, it is non-vanishing for $r\neq 1$, and hence Theorem 4 cannot be applied to the Dicke states with $r\neq 1$. The quantum discord and work-deficit scores of the Dicke states for various choices of $n$ are plotted against the number of excitations, $r$, in Fig. \ref{fig-JMstate_DisSco}. For comparison, the tangle of the Dicke states against $r$ for different $n$ is plotted in Fig. \ref{fig-JMstate_EntMeas}.
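The closed-form expressions above are straightforward to evaluate numerically. The following Python sketch (function names ours) implements them; for the $W$ states ($r=1$) it returns negative discord scores, consistent with their non-monogamy:

```python
import numpy as np

def shannon(probs):
    """Shannon entropy (bits) of a probability vector, ignoring zero entries."""
    return float(-sum(p * np.log2(p) for p in probs if p > 1e-12))

def dicke_discord_score(n, r):
    """delta_D^{<-}(|W_n^r>) from the closed-form expressions in the text."""
    a = (n - r) * (n - r - 1) / (n * (n - 1))
    b = r * (n - r) / (n * (n - 1))
    c = r * (r - 1) / (n * (n - 1))
    S1 = shannon([r / n, 1 - r / n])
    S2 = shannon([a + b, b + c])
    S12 = shannon([a, 2 * b, c])     # -a log a - 2b log 2b - c log c
    lam_plus = (1 + np.sqrt(1 - 4 * (a * b + b * c + c * a))) / 2
    H = shannon([lam_plus, 1 - lam_plus])
    return S1 - (n - 1) * (S2 - S12 + H)
```

Note that $\{a, 2b, c\}$ sums to unity, as a marginal spectrum must.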
Although
the Dicke states, for arbitrary $r$ and $n$, are non-monogamous with respect to discord and work-deficit,
the generalized Dicke states given by \begin{equation}
|W_{n}^r(\alpha_i^r)\rangle= \sum_i \alpha^r_i\, \mathcal{P}_i|\underbrace{00...0}_{n-r}\underbrace{11...1}_r\rangle,
\label{eq:genDicke} \end{equation}
where the $\mathcal{P}_i$ run over the $\binom{n}{r}$ distinct permutations of the qubits, with the normalization $\sum_i |\alpha^r_i|^2=1$, for $r>1$,
are largely monogamous. In Table \ref{table:dicke-mono-percent}, we list the percentages of randomly chosen generalized Dicke states of Eq. (\ref{eq:genDicke}) that have positive quantum discord and quantum work-deficit scores, for $n=3,4,5,6$ qubits and some excitations, $r$.
\begin{table}[ht] \centering \begin{tabular}{c c c c c c} \hline $n$ & 3 & 4 & 5 & 6 & 6 \\[0.5ex] $r$ & 1 & 2 & 2 & 2 & 3 \\[0.5ex] \hline \\[0.5ex] $\delta_{\cal D}^{\rightarrow}$ & 0 & 94.86 & 99.29 & 99.80 & 100 \\[1ex]
$\delta_{\vartriangle}^{\rightarrow}$ & 27.19 & 66.77 & 71.46 & 72.01 & 88.25 \\[1ex] \hline \end{tabular} \caption{Percentages of the generalized Dicke states (see Eq. (\ref{eq:genDicke})) that satisfy monogamy for quantum discord and quantum work-deficit, for $10^5$ randomly chosen pure states in each case, chosen according to the Haar measure over the corresponding space.}
\label{table:dicke-mono-percent} \end{table}
\begin{figure}
\caption{(Color online) Top panel: Non-monogamy of Dicke states
with respect to quantum discord. For fixed $n\ (>6)$, $\delta_{\cal D}^{\leftarrow}$ decreases with increasing $r$ and it decays exponentially for large $n$. Bottom panel: Non-monogamy of Dicke states with respect to quantum work-deficit. For fixed $n\ (>6)$, the trajectory of $\delta_{\vartriangle}^{\leftarrow}$ resembles an anharmonic potential well. It decays with increasing $r$ (up to $r \leq \left[\frac{n}{4}\right]$) and rises in the interval $\left[\frac{n}{4}\right]< r \leq \left[\frac{n}{2}\right]$. In both panels, the horizontal axes are in terms of the number of excitations in the corresponding Dicke state, while the vertical axis in the top panel is in bits and that in the bottom panel is in qubits.
\label{fig-JMstate_DisSco}
\end{figure}
\begin{figure}
\caption{(Color online) Tangle $(\tau)$
for Dicke states
is plotted against the number of excitations, $r$, present in the $n$-party system. The tangle vanishes for all Dicke states with $r=1$ (i.e., for the $W$ states) and remains positive for Dicke states with $r>1$. Tangle is measured in ebits.}
\label{fig-JMstate_EntMeas}
\end{figure}
As a final example, we now consider a general $n$-qubit symmetric state, given by
$|\psi_{GS}\rangle=\sum_{r=0}^na_r |W_n^r\rangle$.
We generate $10^5$ states randomly in the space of $n$-qubit symmetric states for $n=3,4,5,6$, uniformly according to the Haar measure. The monogamy scores for quantum discord and quantum work-deficit are calculated. The percentages of states which turn out to be monogamous in the different cases are depicted in Table \ref{table:gensymmet-mono-percent}. \begin{table}[ht] \centering \begin{tabular}{c c c c c} \hline $n$ & 3 & 4 & 5 & 6 \\[0.5ex] \hline \\[0.5ex] $\delta_{\cal D}^{\rightarrow}$ & 97.47 & 98.37 & 86.12 & 49.71 \\[1ex] $\delta_{\cal D}^{\leftarrow}$ & 97.15 & 97.69 & 86.77 & 64.35 \\[1ex] \hline \\[0.5ex] $\delta_{\vartriangle}^{\rightarrow}$ & 81.40 & 81.49 & 56.41 & 26.40 \\[1ex] $\delta_{\vartriangle}^{\leftarrow}$ & 78.97 & 77.77 & 61.19 & 41.48 \\[1ex] \hline \end{tabular} \caption{Percentage table for the $n$-qubit symmetric states that satisfy monogamy for quantum discord and quantum work-deficit, for $10^5$ randomly chosen pure states, for $n=3,4,5,6$.
} \label{table:gensymmet-mono-percent} \end{table} Comparing now with Tables \ref{table:ent-mono-percent} and \ref{table:info-mono-percent}, where general quantum states (not necessarily symmetric) in the same multiqubit spaces were considered, we find that, in drastic contrast to those cases, the frequency of states which satisfy monogamy actually decreases with increasing number of qubits in the symmetric case. We have performed a finite-size scaling analysis, by assuming that almost all symmetric states become non-monogamous for sufficiently large $n$. We find log-log scaling, with the scaling law being $$p_n \approx p_c + n^{-\alpha},$$ where $p_n$ is the percentage of symmetric $n$-qubit states that are monogamous with respect to a given measure, ${\mathcal Q}$, $p_c \equiv p_{n \rightarrow \infty}$ (assumed to be vanishing), and $\alpha$ is the critical exponent of the scaling law. Based on the percentages obtained in the Haar searches, we calculated the critical exponents, which are displayed in Table \ref{table:gensymmet-mono-critiexponent}. \begin{table}[ht] \centering \begin{tabular}{cc} \hline ${\mathcal Q}$ & $\alpha$ \\ \hline $\delta_{\cal D}^{\rightarrow}$ & 0.8715 \\ $\delta_{\cal D}^{\leftarrow}$ & 0.5523\\ \hline $\delta_{\vartriangle}^{\rightarrow}$ & 1.5351 \\ $\delta_{\vartriangle}^{\leftarrow}$ & 0.8960\\ \hline \end{tabular} \caption{The critical exponents of the scaling law of monogamous states among the symmetric states, for quantum discord and quantum work-deficit.
} \label{table:gensymmet-mono-critiexponent} \end{table} It should be noted here that all the classes of pure states considered in this section fall in a set of zero Haar measure in the space of all pure quantum states, for a given $n$-qubit space. This is also true for all symmetric pure states. It is plausible that symmetric mixed states form a non-zero, perhaps fast-decaying, volume of monogamous multiparty quantum states within the space of all quantum states, for large systems.
\section{Conclusion} \label{sec:conclusion}
In quantum communication protocols, in particular in quantum key sharing, it is desirable to detect and control external noise like eavesdropping. In this case, the concept of monogamy comes as a savior, as it does not allow an arbitrary sharing of quantum correlation among subsystems of a larger system. Thus identifying and quantifying which quantum correlation measures are monogamous for the given states, and under what conditions, become extremely important. It is well-known that bipartite quantum correlation measures are, in general, not monogamous for arbitrary tripartite pure states.
We have shown that a quantum correlation measure which is
non-monogamous for a substantial section of tripartite quantum states becomes monogamous for almost all quantum states of $n$-party systems, with $n$ being only slightly higher than 3. We have also identified sets of zero Haar measure in the space of all multiparty quantum states that remain non-monogamous for an arbitrary number of parties. Apart from providing an understanding of the structure of the space of quantum correlation measures, and of their relation to the underlying space of multiparty quantum states, our results may shed more light on the methods for choosing quantum systems for secure quantum information protocols, especially in large quantum networks.
\begin{acknowledgments} We acknowledge discussions with Arul Lakshminarayan, M. S. Ramkarthik, and Andreas Winter. RP acknowledges an INSPIRE-faculty position at the Harish-Chandra Research Institute (HRI) from the Department of Science and Technology, Government of India. We acknowledge computations performed at the cluster computing facility at HRI.
\end{acknowledgments}
\end{document}
\begin{document}
\title{Explicit monomial expansions of the generating series for connection coefficients} \begin{abstract} This paper is devoted to the explicit computation of generating series for the connection coefficients of two commutative subalgebras of the group algebra of the symmetric group, the class algebra and the double coset algebra. As shown by Hanlon, Stanley and Stembridge (1992), these series give the spectral distribution of some random matrices that are of interest to statisticians. Morales and Vassilieva (2009, 2011) found explicit formulas for these generating series in terms of monomial symmetric functions by introducing a bijection between partitioned hypermaps on (locally) orientable surfaces and some decorated forests and trees. By purely algebraic means, we recover the formula for the class algebra and provide a new, simpler formula for the double coset algebra. As a salient ingredient, we derive a new explicit expression for zonal polynomials indexed by partitions of type $[a,b,1^{n-a-b}]$. \end{abstract}
\section{Introduction} \label{sec : intro}
For an integer $n$, we denote by $S_n$ the symmetric group on $n$ elements and by $\lambda = (\lambda_1, \lambda_{2},\ldots,\lambda_p) \vdash n$ an integer partition of $n$ with $\ell(\lambda) = p$ parts sorted in decreasing order. We define as well $Aut_\lambda = \prod_i m_i(\lambda)!$ and $z_\lambda =\prod_i i^{m_i(\lambda)}m_i(\lambda)!$, where $m_i(\lambda)$ is the number of parts in $\lambda$ equal to $i$. Let $m_\lambda(x)$ be the monomial symmetric function indexed by $\lambda$ on indeterminate $x$, and $p_\lambda(x)$ and $s_\lambda(x)$ the power sum and Schur symmetric functions, respectively. Let $C_\lambda$ be the conjugacy class of $S_n$ containing the permutations of cycle type $\lambda$. The cardinality of the conjugacy classes is given by $|C_\lambda| = n!/z_\lambda$. Additionally, $B_n$ is the hyperoctahedral group (i.e.\ the centralizer of $f_\star = (12)(34)\ldots(2n-1\,2n)$ in $S_{2n}$). We denote by $K_\lambda$ the double coset of $B_n$ in $S_{2n}$ consisting of all the permutations $\omega$ of $S_{2n}$ such that $f_\star \circ\omega\circ f_\star\circ\omega^{-1}$ has cycle type $(\lambda_1,\lambda_1, \lambda_{2},\lambda_{2},\ldots,\lambda_p,\lambda_p)$. We have $|B_n| = 2^nn!$ and $|K_\lambda| = |B_n|^2/(2^{\ell(\lambda)}z_\lambda)$. By abuse of notation, let $C_\lambda$ (resp. $K_\lambda$) also represent the formal sum of its elements in the group algebra $\mathbb{C} S_{n}$ (resp. $\mathbb{C} S_{2n}$). Then, $\{C_\lambda, \lambda \vdash n\}$ (resp. $\{K_\lambda, \lambda \vdash n\}$) forms a basis of the class algebra (resp. double coset algebra). 
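As a quick sanity check of these definitions, the identity $|C_\lambda| = n!/z_\lambda$ can be verified by exhaustive enumeration for small $n$. The following Python sketch (not part of the original development; function names are ours) does so for $n \leq 5$.

```python
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_type(p):
    """Cycle type of a permutation of {0,...,n-1}, as a decreasing tuple."""
    seen, parts = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            parts.append(length)
    return tuple(sorted(parts, reverse=True))

def z(lam):
    """z_lambda = prod_i i^{m_i} m_i!, with m_i the multiplicity of i in lambda."""
    out = 1
    for i, mi in Counter(lam).items():
        out *= i**mi * factorial(mi)
    return out

for n in range(1, 6):
    counts = Counter(cycle_type(p) for p in permutations(range(n)))
    assert all(c == factorial(n) // z(lam) for lam, c in counts.items())
```

The assertion checks, for every partition $\lambda \vdash n$, that the number of permutations of cycle type $\lambda$ equals $n!/z_\lambda$.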
For $\lambda$, $\mu$, $\nu \vdash n$, we focus on the connection coefficients $c^\nu_{\lambda\mu}$ and $b^\nu_{\lambda\mu}$ that can be defined formally by: \begin{equation} c^\nu_{\lambda\mu} = [C_\nu]C_\lambda C_\mu, \;\;\;\;\; b^\nu_{\lambda\mu} = [K_\nu]K_\lambda K_\mu \end{equation} From a combinatorial point of view, $c^\nu_{\lambda\mu}$ is the number of ways to write a given permutation $\gamma_\nu$ of $C_\nu$ as the ordered product of two permutations $\alpha\circ\beta$, where $\alpha$ is in $C_\lambda$ and $\beta$ is in $C_\mu$. Similarly, $b^\nu_{\lambda\mu}$ counts the number of ordered factorizations of a given element in $K_\nu$ into two permutations of $K_\lambda$ and $K_\mu$. Although these coefficients are intractable in the general case, Morales and Vassilieva \cite{MV09} found an explicit expression of the generating series for the $c^\nu_{\lambda\mu}$ in the special case $\nu = (n)$ in terms of monomial symmetric functions.
\begin{thm}[Morales and Vassilieva, 2009] \label{thm : mv09} \begin{equation} \frac{1} {n}\sum_{\lambda, \mu \vdash n} {c_{\lambda\mu}^n} p_{\lambda}(x)p_{\mu}(y) = \sum_{\lambda, \mu \vdash n} \frac{(n-\ell(\lambda))!(n-\ell(\mu))!}{(n+1-\ell(\lambda)-\ell(\mu))!}m_{\lambda}(x)m_{\mu}(y) \end{equation}
\end{thm}
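For small $n$, the coefficients $c^{\nu}_{\lambda\mu}$ can be computed directly from their combinatorial description: fix a representative $\gamma_\nu$ of $C_\nu$ and, for each $\alpha\in C_\lambda$, test whether $\beta=\alpha^{-1}\circ\gamma_\nu$ has cycle type $\mu$. A brute-force Python sketch (our own naming, not part of the original development), which also checks the obvious identity $\sum_{\lambda,\mu}c^{(n)}_{\lambda\mu}=n!$:

```python
from itertools import permutations
from math import factorial

def cycle_type(p):
    """Cycle type of a permutation of {0,...,n-1}, as a decreasing tuple."""
    seen, parts = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            parts.append(length)
    return tuple(sorted(parts, reverse=True))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def compose(a, b):
    """(a o b)(i) = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(a)))

def conn(gamma, lam, mu):
    """c^nu_{lam,mu} computed for the fixed representative gamma of C_nu."""
    return sum(1 for a in permutations(range(len(gamma)))
               if cycle_type(a) == lam
               and cycle_type(compose(inverse(a), gamma)) == mu)

n = 4
gamma = tuple(range(1, n)) + (0,)          # an n-cycle, so nu = (n)
partitions = set(map(cycle_type, permutations(range(n))))
# alpha determines beta = alpha^{-1} o gamma, hence the total count is n!
assert sum(conn(gamma, lam, mu)
           for lam in partitions for mu in partitions) == factorial(n)
```

Since $\beta$ is determined by $\alpha$, summing over all pairs of cycle types recovers the $n!$ choices of $\alpha$.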
In \cite{FV10}, Feray and Vassilieva found an interesting expression when we set $\mu = (n)$ as well:
\begin{thm}[Feray and Vassilieva, 2010] \label{thm : fv10} \begin{equation} \frac{1} {n!}\sum_{\lambda \vdash n} {c_{\lambda,n}^n} p_{\lambda}(x) = \sum_{\lambda\vdash n} \frac{m_{\lambda}(x)}{n+1-\ell(\lambda)} \end{equation} \end{thm}
Both of these works rely on purely bijective arguments involving the theory of hypermaps on orientable surfaces. As shown in section \ref{sec : class}, these formulas can be recovered in a very simple way using some known relations between the connection coefficients, the characters of the symmetric group and the classical bases of the symmetric functions.\\ \indent The evaluation of the generating series for the $b^\nu_{\lambda\mu}$ is much more complicated. Morales and Vassilieva (in \cite{MV11}) found the first explicit formula in terms of monomial symmetric functions for the case $\nu = (n)$, thanks to a bijection between partitioned hypermaps in locally orientable surfaces and some decorated forests. Using an algebraic method analogous to the class algebra case and a new formula for zonal polynomials on near hooks, sections \ref{sec : doublecoset}, \ref{sec : zon poly} and \ref{sec : results} prove a new, much simpler expression. To state it, we need to introduce a few more notations. If $x$ and $y$ are non-negative integers, we define: \begin{equation} \GenBin{x}{y} = \frac{\binom{x}{y}^2}{\binom{2x}{2y}}, \;\;\;\;\; \VarGenBin{x}{y} = \frac{\binom{x}{y}^2}{\binom{2x+1}{2y}} \end{equation} \noindent if $x \geq y$ and $0$ otherwise. We also define the rational functions: \begin{align} &R(x,y,z,t,w) = \frac{(2x+w)(2y+w)(2z+w-1)(2t+w-1)}{(2x+w-1)(2y+w+1)(2z+w-2)(2t+w)}\\ &r_n(x,y) = 2n\frac{(n+x-y+1)(n+y-x)(n-x-y)!(2x-1)!!(2y-2)!!}{(-1)^{n+1-x-y}(n+x-y)(n+y-x-1)(2(x-y)+1)}
\end{align}
\noindent where the latter definition of $r_n(x,y)$ is valid for $y\geq1$. Additionally, we define $r_n(n,0) = (2n-1)!!$. We have:
\begin{thm}[Main result] \label{thm : main} \begin{align} \nonumber \frac{1} {2^nn!}&\sum_{\lambda,\mu \vdash n} {b_{\lambda,\mu}^n} p_{\lambda}(x)p_{\mu}(y) = \sum_{\lambda, \mu \vdash n}m_\lambda(x)m_\mu(y)\sum_{\substack{a,b\\ \{a^1_i,b^1_i,c^1_i\} \in C_{a,b}^{\lambda}\\ \{a^2_i,b^2_i,c^2_i\} \in C_{a,b}^{\mu}}}r_n(a,b)\times\\ &\prod_{\substack{1\leq i \leq n\\1\leq k\leq 2}}\GenBin{\overline{a}^k_{i-1}-\overline{b}^k_{i-1}}{a^k_i}\VarGenBin{\overline{a}^k_{i-1}-\overline{b}^k_{i}}{b^k_i}{R(\overline{a}^k_{i},\overline{a}^k_{i-1},\overline{b}^k_{i},\overline{b}^k_{i-1},\overline{c}^k_{i-1})}^{c^k_i} \end{align} \noindent where the sum runs over pairs of integers $(a,b)$ such that $(a,b,1^{n-a-b})$ is a partition and $C_{a,b}^{\rho}$ is the set of non negative integer sequences $\{a_i,b_i,c_i\}_{1\leq i\leq n}$ with (assume $\rho_i = 0$ for $i>\ell(\rho)$) : \footnotesize \begin{flalign*} &\sum_i a_i = a,\;\;\;\;\sum_i b_i = b,\;\;\;\; a_i+b_i \begin{cases} \in \{\rho_i, \rho_i-1\}&\mbox{ for }i<\ell(\rho)-1\\= \rho_i&\mbox{ otherwise}\end{cases},\;\;\;\; c_i = \rho_i-a_i-b_i
\end{flalign*}
\normalsize
Moreover, $\sum_{i=1}^{j} b_i < b$ if $\sum_{i=1}^{j-1} c_i < c=n-a-b$, and we set $\overline{x}^k_i=x-\sum_{j=1}^{i}x^k_j\; (x\in\{a,b,c\})$. \end{thm}
The formula of theorem \ref{thm : main} does not have some of the nice properties of the one in \cite{MV11}. It is not obvious that the coefficients are integers (the summands are rational numbers) and the $r_n(a,b)$ have alternating signs. However, the summation runs over far fewer parameters and the computation of the coefficients is more efficient, especially (see section \ref{sec : results}) when $\lambda$ or $\mu$ has a small number of parts.
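All the building blocks of theorem \ref{thm : main} are rational numbers and can be implemented exactly. A minimal Python sketch (our own function names, not part of the original development), using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def genbin(x, y):
    """binom(x,y)^2 / binom(2x,2y), and 0 when x < y."""
    if x < y:
        return Fraction(0)
    return Fraction(comb(x, y) ** 2, comb(2 * x, 2 * y))

def vargenbin(x, y):
    """binom(x,y)^2 / binom(2x+1,2y), and 0 when x < y."""
    if x < y:
        return Fraction(0)
    return Fraction(comb(x, y) ** 2, comb(2 * x + 1, 2 * y))

def double_factorial(k):
    """k!! with the convention (-1)!! = 0!! = 1."""
    out = 1
    while k > 1:
        out *= k
        k -= 2
    return out

def r_n_special(n):
    """The special value r_n(n, 0) = (2n-1)!!."""
    return double_factorial(2 * n - 1)

assert genbin(2, 3) == 0                    # vanishes when x < y
assert vargenbin(1, 1) == Fraction(1, 3)    # value used later in the text
assert r_n_special(3) == 15
```

Working with `Fraction` keeps all intermediate summands exact, which matters since the individual summands of theorem \ref{thm : main} are rational even though the final coefficients are integers.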
\section{Connection coefficients of the class algebra} \label{sec : class} Let $\chi^\lambda$ be the irreducible character of $S_n$ indexed by $\lambda$ and $\chi^\lambda_\mu$ its value at any element of $C_\mu$. Denote by $f^\lambda$ the degree $\chi^\lambda_{1^n}$ of $\chi^\lambda$. Following Biane \cite{B04}, we start with the expression: \begin{equation} c^\nu_{\lambda,\mu}= \frac{n!}{z_\lambda z_\mu}\sum_{\alpha \vdash n}\frac{\chi^\alpha_\lambda\chi^\alpha_\mu\chi^\alpha_\nu}{f^\alpha} \end{equation} which becomes much simpler when $\nu=(n)$, as $\chi^\alpha_{(n)} = (-1)^{a}$ if $\alpha = (n-a,1^a)$ and $0$ otherwise. Furthermore, $f^{(n-a,1^a)}=\binom{n-1}{a}$. We have: \begin{equation} \label{eq : con} c^n_{\lambda,\mu} = \frac{n}{z_\lambda z_\mu}\sum_{a=0}^{n-1}(-1)^a(n-1-a)!a!\chi^{(n-a,1^a)}_\lambda\chi^{(n-a,1^a)}_\mu \end{equation} Then, following \cite{B04}, the generating series are equal to: \begin{align} \nonumber \frac{1}{n}&\sum_{\lambda,\mu \vdash n}c^n_{\lambda,\mu}p_\lambda(x)p_\mu(y) =\\
&\sum_{a=0}^{n-1}(-1)^a(n-1-a)!a!\sum_{\lambda,\mu \vdash n}z_\lambda^{-1}\chi^{(n-a,1^a)}_\lambda p_\lambda(x) z_\mu^{-1}\chi^{(n-a,1^a)}_\mu p_\mu(y) \end{align} and simplify, as $s_\lambda = \sum_\mu z_\mu^{-1}\chi^{\lambda}_\mu p_\mu$ (see e.g. \cite{M}): \begin{equation} \frac{1}{n}\sum_{\lambda,\mu \vdash n}c^n_{\lambda,\mu}p_\lambda(x)p_\mu(y) = \sum_{a=0}^{n-1}(-1)^a(n-1-a)!a!s_{(n-a,1^a)}(x)s_{(n-a,1^a)}(y) \end{equation} In order to recover theorem \ref{thm : mv09}, we need to express the Schur functions in terms of monomials. The transition between the two bases is performed thanks to the Kostka numbers: $s_\lambda = \sum_\mu K_{\lambda,\mu} m_\mu$. But obviously (see \cite[I.6. ex 1]{M}) $K_{(n-a,1^a)\lambda} = \binom{\ell(\lambda)-1}{a}$: \begin{align} \label{eq : bin} \nonumber &\frac{1}{n}\sum_{\lambda,\mu \vdash n}c^n_{\lambda,\mu}p_\lambda(x)p_\mu(y) =\\ \nonumber &\sum_{\lambda,\mu \vdash n}\sum_{a=0}^{n-1}(-1)^a(n-1-a)!a!\binom{\ell(\lambda)-1}{a}\binom{\ell(\mu)-1}{a}m_\lambda(x)m_\mu(y)=\\ & \sum_{\lambda,\mu \vdash n}(\ell(\lambda)-1)!(n-\ell(\lambda))!\sum_{a=0}^{n-1}(-1)^a\binom{n-1-a}{\ell(\lambda)-1-a}\binom{\ell(\mu)-1}{a}m_\lambda(x)m_\mu(y) \end{align} Finally, using the classical formulas for binomial coefficient summation (see e.g. \cite{K}), we obtain: \begin{align} \nonumber \frac{1}{n}\sum_{\lambda,\mu \vdash n}c^n_{\lambda,\mu}p_\lambda(x)p_\mu(y) &= \sum_{\lambda,\mu \vdash n}(\ell(\lambda)-1)!(n-\ell(\lambda))!\binom{n-\ell(\mu)}{\ell(\lambda)-1}m_\lambda(x)m_\mu(y)\\ &= \sum_{\lambda,\mu \vdash n}\frac{(n-\ell(\lambda))!(n-\ell(\mu))!}{(n+1-\ell(\lambda)-\ell(\mu))!}m_\lambda(x)m_\mu(y) \end{align} \begin{rem}
The coefficient of $m_\lambda(x)m_n(y)$ in the unnormalized series $\sum_{\lambda,\mu \vdash n}c^n_{\lambda,\mu}p_\lambda(x)p_\mu(y)$ is $n!$. This can be shown directly, as $[m_n]p_\mu = 1$ and $\sum_\mu c^n_{\lambda \mu} = |C_\lambda| = n!/z_\lambda$: \begin{equation} [m_n(y)]\left(\sum_{\lambda,\mu \vdash n}c^n_{\lambda,\mu}p_\lambda(x)p_\mu(y)\right)= n!\sum_{\lambda \vdash n}z_\lambda^{-1}p_\lambda(x) =n!s_n(x)=n! \sum_{\lambda \vdash n}m_\lambda(x) \end{equation} \end{rem}
\indent We can recover the formula of theorem \ref{thm : fv10} in a very similar fashion. Equation \ref{eq : con} becomes: \begin{equation} c^n_{\lambda,n} = \frac{1}{z_\lambda}\sum_{a=0}^{n-1}(n-1-a)!a!\chi^{(n-a,1^a)}_\lambda \end{equation} An analogous development gives: \begin{align} \sum_{\lambda \vdash n}c^n_{\lambda,n}p_\lambda(x) &= \sum_{\lambda \vdash n}(\ell(\lambda)-1)!(n-\ell(\lambda))!\sum_{a=0}^{n-1}\binom{n-1-a}{\ell(\lambda)-1-a}m_\lambda(x)\\ &= \sum_{\lambda \vdash n}(\ell(\lambda)-1)!(n-\ell(\lambda))!\binom{n}{n-\ell(\lambda)+1}m_\lambda(x)\\ &=\sum_{\lambda \vdash n}\frac{n!}{n-\ell(\lambda)+1}m_\lambda(x) \end{align}
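This expansion can be verified by brute force for small $n$: compute $\sum_\lambda c^n_{\lambda,n}\,p_\lambda$ in $n$ variables and read off the monomial coefficients. A Python sketch under our own naming (not part of the original development; here $n=4$):

```python
from itertools import permutations
from math import factorial
from collections import defaultdict

def cycle_type(p):
    seen, parts = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            parts.append(length)
    return tuple(sorted(parts, reverse=True))

def poly_mul(f, g):
    """Multiply polynomials given as {exponent tuple: coefficient}."""
    h = defaultdict(int)
    for ea, ca in f.items():
        for eb, cb in g.items():
            h[tuple(x + y for x, y in zip(ea, eb))] += ca * cb
    return dict(h)

def p_lambda(lam, m):
    """The power sum p_lambda expanded in m variables."""
    f = {(0,) * m: 1}
    for part in lam:
        f = poly_mul(f, {tuple(part if i == j else 0 for i in range(m)): 1
                         for j in range(m)})
    return f

n = 4
gamma = tuple(range(1, n)) + (0,)                  # a fixed n-cycle
lhs = defaultdict(int)
for a in permutations(range(n)):
    inv = [0] * n
    for i, v in enumerate(a):
        inv[v] = i
    beta = tuple(inv[gamma[i]] for i in range(n))  # beta = alpha^{-1} o gamma
    if cycle_type(beta) == (n,):                   # keep alpha iff beta is an n-cycle
        for e, c in p_lambda(cycle_type(a), n).items():
            lhs[e] += c

# compare with sum_lambda n!/(n+1-l(lambda)) m_lambda, checking one
# representative monomial (sorted exponents) per partition lambda
reps = [e for e in lhs if sorted(e, reverse=True) == list(e)]
assert len(reps) == 5                              # number of partitions of 4
for e in reps:
    lam = tuple(x for x in e if x)
    assert lhs[e] == factorial(n) // (n + 1 - len(lam))
```

The loop accumulates $\sum_\lambda c^n_{\lambda,n}\,p_\lambda(x_1,\ldots,x_4)$ and the assertions match each monomial coefficient against $n!/(n+1-\ell(\lambda))$.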
\section{Connection coefficients of the double coset algebra} \label{sec : doublecoset} Given a partition $\lambda$ and a box $s$ in the Young diagram of $\lambda$, let $l'(s),l(s),$ $a(s),a'(s)$ be the number of boxes to the north, south, east, west of $s$ respectively. These statistics are called {\bf co-leglength, leglength, armlength, co-armlength} respectively. We also set: \begin{align} c_{\lambda}= \prod_{s\in \lambda} (2a(s) + l(s) + 1), \;\;\;\;\;\;\; c'_{\lambda}= \prod_{s\in \lambda} (2(1+a(s)) + l(s)) \end{align}
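These statistics and the quantities $c_\lambda$, $c'_\lambda$ are easy to compute directly from the diagram. The following Python sketch (our own function names, not part of the original development) also verifies, on small examples, the classical identity $H_{2\lambda} = c_\lambda c'_\lambda$ used in the next paragraph, where $H_{2\lambda}$ denotes the product of the hook lengths of $2\lambda$.

```python
from math import prod

def arm(lam, i, j):
    """a(s): number of boxes strictly east of the box s = (i, j), 0-indexed."""
    return lam[i] - j - 1

def leg(lam, i, j):
    """l(s): number of boxes strictly south of s."""
    return sum(1 for k in range(i + 1, len(lam)) if lam[k] > j)

def boxes(lam):
    return [(i, j) for i in range(len(lam)) for j in range(lam[i])]

def c_lam(lam):
    return prod(2 * arm(lam, i, j) + leg(lam, i, j) + 1 for i, j in boxes(lam))

def c_prime_lam(lam):
    return prod(2 * (1 + arm(lam, i, j)) + leg(lam, i, j) for i, j in boxes(lam))

def hook_product(lam):
    """Product of the hook lengths of lam."""
    return prod(arm(lam, i, j) + leg(lam, i, j) + 1 for i, j in boxes(lam))

# H_{2 lam} = c_lam * c'_lam on a few small partitions
for lam in [(1,), (2,), (2, 1), (3, 1), (2, 2), (3, 2, 1)]:
    assert hook_product(tuple(2 * p for p in lam)) == c_lam(lam) * c_prime_lam(lam)
```

For instance, $\lambda=(2,1)$ gives $c_\lambda = 4$, $c'_\lambda = 20$ and $H_{(4,2)} = 80$.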
Let $\varphi^{\lambda}_\mu=\sum_{w\in K_{\mu}} \chi^{2\lambda}_w$ with $2\lambda=(2\lambda_1,2\lambda_2,\ldots,2\lambda_p)$. Then $| K_\mu |^{-1} \varphi^{\lambda}_\mu$ is the value of the zonal spherical function indexed by $\lambda$ of the Gelfand pair $(S_{2n}, B_n)$ at the elements of the double coset $K_\mu$. We also define $H_{2\lambda}$ as the product of the hook lengths of the partition $2\lambda$; we have $H_{2\lambda} = c_{\lambda}c'_{\lambda}$ \cite[VII.2 eq. (2.19)]{M}. As shown in \cite{HSS92}, the connection coefficients of the double coset algebra verify the relation: \begin{equation} \label{conncoeff}
b_{\lambda,\mu}^{\nu} = \frac{1}{|K_{\nu}|} \sum_{\beta \vdash n} \frac{\varphi^{\beta}_\lambda \varphi^{\beta}_\mu\varphi^{\beta}_\nu}{H_{2\beta}} . \end{equation} Getting back to the generating series yields: \begin{equation}
\sum_{\lambda,\mu \vdash n} {b_{\lambda,\mu}^\nu} p_{\lambda}(x)p_{\mu}(y) = \frac{|B_n|^2}{|K_{\nu}|} \sum_{\beta \vdash n}\frac{\varphi^{\beta}_\nu}{H_{2\beta}}Z_{\beta}(x)Z_{\beta}(y) . \end{equation}
where $Z_{\beta}(x) = |B_n|^{-1}\sum_\lambda\varphi^{\beta}_\lambda p_{\lambda}(x)$ are the {\bf zonal polynomials} \cite[ch. VII]{M}. As an alternative definition, the $Z_{\beta}(x)$ are special cases of the integral form of the Jack symmetric functions (or polynomials) $J_\beta(x,\alpha)$ with parameter $\alpha = 2$. As described in \cite[VI. ch. 10]{M}, there exist two other classical normalizations of the Jack polynomials. Using notation consistent with \cite{M}, we define $Q_{\lambda}=Z_{\lambda}/c'_{\lambda}$ and $P_{\lambda}=Z_{\lambda}/c_{\lambda}$. The generating series simplifies in the case $\nu = (n)$ since \cite[VII.2 ex 2(c)]{M}: \begin{equation} \label{macex}
\varphi^{\lambda}_{(n)} = \frac{|K_{(n)}|}{|B_{n-1}|}\prod_{s\in \lambda} (2a'(s)-l'(s)) \end{equation} where the product omits the square $(1,1)$. As such, $\varphi^{\lambda}_{(n)}=0$ if $\lambda \supset (2^3)$ {\em i.e.} if $\lambda$ is not a near-hook of the form $(a,b,1^{n-a-b})$. Finally, the generating series reduce to: \begin{align} \label{eq gen series}
\nonumber \frac{1}{|B_n|}\sum_{\lambda,\mu \vdash n} &{b_{\lambda,\mu}^n} p_{\lambda}(x)p_{\mu}(y) =\\
&\frac{|B_n|}{|K_{(n)}|} \sum_{a,b}\varphi^{(a,b,1^{n-a-b})}_{(n)}P_{(a,b,1^{n-a-b})}(x)Q_{(a,b,1^{n-a-b})}(y) \end{align} The proof of theorem \ref{thm : main} relies on the evaluation of the expansion in the monomial basis of the zonal polynomials indexed by near hooks, which is done in the following section.
\section{Zonal polynomials on near hooks} \label{sec : zon poly} Zonal polynomials are special cases of Jack symmetric functions, which are in turn special cases of the Macdonald polynomials \cite[VI]{M}. While the latter symmetric functions have been heavily studied over the past years (see e.g. \cite{LS03}), no simple expansion in terms of monomial functions is known, except in the special case of a single part partition (see Stanley \cite{S}). We show that such an expression can be found in the case of zonal polynomials indexed by near hooks. We start with the general combinatorial formula of Macdonald \cite[VI. eq. (7.13) and (7.13')]{M}: \begin{eqnarray} Q_\lambda = \sum_{\mu}m_\mu\sum_{\substack{shape(T)=\lambda,\\ type(T)=\mu}}\phi_{T}, \;\;\;\;\;\;\;\;\;\; P_\lambda = \sum_{\mu}m_\mu\sum_{\substack{shape(T)=\lambda,\\ type(T)=\mu}}\psi_{T} \end{eqnarray}
\noindent where the internal sums run over all (column strict) tableaux $T$. As usual, a tableau of shape $\lambda$ and type $\mu$ with $\ell(\mu) = p$ can be seen as a sequence of partitions $(\lambda^{(1)},\ldots,\lambda^{(p)})$ such that $\lambda^{(1)}\subset\lambda^{(2)}\subset\ldots\subset\lambda^{(p)}=\lambda$ and each $\lambda^{(i+1)}/\lambda^{(i)}$ (as well as $\lambda^{(1)}$) is a horizontal strip with $|\lambda^{(i+1)}/\lambda^{(i)}| = \mu_{p-i}$ boxes filled with integer $i+1$. In what follows, we assume $\lambda^{(0)}$ to be the empty partition to simplify notation.
\begin{exm} \label{ex : tableau} The following tableau has shape $\lambda = (6,3,1,1)$, type $\mu = (3,2,2,2,2)$ and a sequence $\lambda^{(5)}= \lambda$, $\lambda^{(4)} = (4,3,1)$, $\lambda^{(3)} = (3,2,1)$, $\lambda^{(2)} = (3,1)$, $\lambda^{(1)} = (2)$. $$ \vartableau{ 1 & 1 & 2 & 4 & 5 &5\\ 2 & 3 & 4\\ 3\\ 5} $$ \end{exm}
\noindent Following (7.11) of \cite{M}: \begin{equation} \phi_{T} = \prod_{i=0}^{p-1}\phi_{\lambda^{(i+1)}/\lambda^{(i)}}, \;\;\;\;\;\;\; \psi_{T} = \prod_{i=0}^{p-1}\psi_{\lambda^{(i+1)}/\lambda^{(i)}} \end{equation} The analytic formulation of $\phi_{\lambda/\mu}$ and $\psi_{\lambda/\mu}$ is given by \cite[VI. 7. ex 2.(a)]{M} for the general case of Macdonald polynomials. We use the formulation given by Okounkov and Olshanski in \cite{OO97} (equation (6.2)), specific to the Jack symmetric functions: \begin{equation} \psi_{\lambda/\mu} = \prod_{1\leq k \leq l \leq \ell(\mu)}\frac{(\mu_k-\lambda_{k+1}+\theta(l-k)+1)_{\theta-1}(\lambda_k-\mu_{k}+\theta(l-k)+1)_{\theta-1}}{(\lambda_k-\lambda_{l+1}+\theta(l-k)+1)_{\theta-1}(\mu_k-\mu_{l}+\theta(l-k)+1)_{\theta-1}} \end{equation} where $(t)_r=\Gamma(t+r)/\Gamma(t)$ for any numbers $t$ and $r$. This formula holds in the case of zonal polynomials by setting $\theta=1/2$. Similarly, \begin{equation} \label{formula : phi} \phi_{\lambda/\mu} = \prod_{1\leq k \leq l \leq \ell(\lambda)}\frac{(\mu_k-\lambda_{k+1}+\theta(l-k)+1)_{\theta-1}(\lambda_k-\mu_{k}+\theta(l-k)+1)_{\theta-1}}{(\lambda_k-\lambda_{l}+\theta(l-k)+1)_{\theta-1}(\mu_k-\mu_{l+1}+\theta(l-k)+1)_{\theta-1}} \end{equation} Given a tableau of shape $(a,b,1^c)$ of type $\mu = (\mu_1,\mu_{2},\ldots, \mu_p)$, we denote by $a_i$ (resp. $b_i$, $c_i$) the number of boxes of the first line (resp. second line, column) filled with integer $p-i+1$. This notation is illustrated by figure \ref{tableau}.
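Since the evaluations below only involve the Pochhammer symbol at half-integer arguments, the required closed forms can be checked numerically. A short Python sketch (not part of the original development; names are ours):

```python
from math import gamma, factorial, pi, isclose

def poch(t, r):
    """Pochhammer symbol (t)_r = Gamma(t + r) / Gamma(t)."""
    return gamma(t + r) / gamma(t)

for t in range(1, 8):
    # (t+1)_{-1/2} = sqrt(pi) * (2t)! / (t!^2 * 4^t)
    assert isclose(poch(t + 1, -0.5),
                   pi**0.5 * factorial(2 * t) / (factorial(t)**2 * 4**t))
    # (t+3/2)_{-1/2} = t!^2 * 2^(2t+1) / (sqrt(pi) * (2t+1)!)
    assert isclose(poch(t + 1.5, -0.5),
                   factorial(t)**2 * 2**(2 * t + 1)
                   / (pi**0.5 * factorial(2 * t + 1)))
    # (t)_{-1/2} / (t+1)_{-1/2} = 2t / (2t - 1)
    assert isclose(poch(t, -0.5) / poch(t + 1, -0.5), 2 * t / (2 * t - 1))
```

The $\sqrt{\pi}$ factors cancel within each of the ratios appearing in the evaluation of $\phi_{\lambda/\mu}$, so only the binomial and double-factorial parts survive.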
\begin{figure}
\caption{Definition of the $(a_i,b_i,c_i)$ on a tableau of shape $(a,b,1^c)$}
\label{tableau}
\end{figure}
\noindent Using the notations of section \ref{sec : intro}, we obviously have: \begin{align*} &\lambda^{(p-i)} = (\overline{a}_i,\overline{b}_i,1^{\overline{c}_i}), \;\;\;\;\;\; a_i+b_i+c_i = \mu_i \\ &a_i\leq \overline{a}_{i-1}-\overline{b}_{i-1}, \;\;\;\;\;\;\;\;\;\;\; b_i\leq \overline{b}_{i-1}-1 \mbox{ if } \overline{c}_{i} > 0\\ &b_p = c_p = c_{p-1} = 0, \;\;\;\;\;\;\; c_i \in \{0,1\} \end{align*}
\begin{exm} The tableau of example \ref{ex : tableau} is of shape $(6,3,1^2)$ and satisfies $(a_1, b_1, c_1) = (2,0,1)$, $(a_2, b_2, c_2) = (1,1,0)$, $(a_3, b_3, c_3) = (0,1,1)$, $(a_4, b_4, c_4) = (1,1,0)$, $(a_5, b_5, c_5) = (2,0,0)$. \end{exm} Given a tableau of shape $(a,b,1^c)$ with filling described by $(a_i,b_i,c_i)_{1\leq i\leq p}$, we apply formula \ref{formula : phi} to $\phi_{\lambda^{(p-i+1)}/\lambda^{(p-i)}}$. The following are the non-$1$ contributions to the product:
\begin{itemize} \item[(i)] for $k=l=1$, the factor in equation \ref{formula : phi} is $\frac{(a_i+1)_{-1/2}(\overline{a}_{i-1}-\overline{b}_{i-1}-a_i+1)_{-1/2}}{(1)_{-1/2}(\overline{a}_{i-1}-\overline{b}_{i-1}-a_i+b_i+1)_{-1/2}}$ \item[(ii)] for $k=1$, $l=2$: $\frac{(\overline{a}_{i-1}-\overline{b}_{i-1}+b_i+3/2)_{-1/2}}{(\overline{a}_{i-1}-\overline{b}_{i-1}+3/2)_{-1/2}}$ \item[(iii)] for $k=l=2$: $\frac{(b_i+1)_{-1/2}}{(1)_{-1/2}}$ \end{itemize} \noindent Additionally, if $c_i=1$ we have: \begin{itemize} \item[(iv)] $k=1$, $l =\overline{c}_{i-1}+1$: $\frac{(\overline{a}_{i-1}-a_i+\overline{c}_{i-1}/2)_{-1/2}}{(\overline{a}_{i-1}-a_i+\overline{c}_{i-1}/2+1)_{-1/2}}$ \item[(v)] $k=1$, $l =\overline{c}_{i-1}+2$: $\frac{(\overline{a}_{i-1}+(\overline{c}_{i-1}+1)/2+1)_{-1/2}}{(\overline{a}_{i-1}+(\overline{c}_{i-1}+1)/2)_{-1/2}}$ \item[(vi)] $k=2$, $l =\overline{c}_{i-1}+1$: $\frac{(\overline{b}_{i-1}-b_i+(\overline{c}_{i-1}-1)/2)_{-1/2}}{(\overline{b}_{i-1}-b_i+(\overline{c}_{i-1}-1)/2+1)_{-1/2}}$ \item[(vii)] $k=2$, $l =\overline{c}_{i-1}+2$: $\frac{(\overline{b}_{i-1}+\overline{c}_{i-1}/2+1)_{-1/2}}{(\overline{b}_{i-1}+\overline{c}_{i-1}/2)_{-1/2}}$ \item[(viii)] $k>2$, $l =\overline{c}_{i-1}+1$: $\frac{((\overline{c}_{i-1}+1-k)/2+1)_{-1/2}}{((\overline{c}_{i-1}+1-k)/2+2)_{-1/2}}$ \item[(ix)] $k>2$, $l =\overline{c}_{i-1}+2$: $\frac{((\overline{c}_{i-1}+2-k)/2+2)_{-1/2}}{((\overline{c}_{i-1}+2-k)/2+1)_{-1/2}}$ \end{itemize} As $(t+1)_{-1/2} = \sqrt{\pi}(2t)!/(t!^22^{2t})$ and $(t+3/2)_{-1/2} = t!^22^{2t+1}/(\sqrt{\pi}(2t+1)!)$, (i)$\times$(ii)$\times$(iii) gives: \begin{equation} \frac{1}{2^{2a_i-2b_i}}\GenBin{\overline{a}_{i-1}-\overline{b}_{i-1}}{a_i}\VarGenBin{\overline{a}_{i-1}-\overline{b}_{i}}{b_i}\frac{\binom{2\overline{a}_{i-1}-2\overline{b}_{i-1}}{\overline{a}_{i-1}-\overline{b}_{i-1}}}{\binom{2\overline{a}_{i}-2\overline{b}_{i}}{\overline{a}_{i}-\overline{b}_{i}}} \end{equation} \noindent Then, as $(t)_{-1/2}/(t+1)_{-1/2} = 2t/(2t-1)$, combining (iv), (v), (vi) and (vii) yields $1$ if $c_i=0$ and \begin{equation} \label{eq : R} 
\frac{2\overline{a}_{i}+\overline{c}_{i-1}}{2\overline{a}_{i}+\overline{c}_{i-1}-1}\frac{2\overline{a}_{i-1}+\overline{c}_{i-1}}{2\overline{a}_{i-1}+\overline{c}_{i-1}+1}\frac{2\overline{b}_{i}+\overline{c}_{i-1}-1}{2\overline{b}_{i}+\overline{c}_{i-1}-2}\frac{2\overline{b}_{i-1}+\overline{c}_{i-1}-1}{2\overline{b}_{i-1}+\overline{c}_{i-1}} \end{equation} \noindent otherwise. For all $k>2$ the combination of (viii) and (ix) can be written as \begin{equation} \prod_{k>2}^{\overline{c}_{i-1}+1}\frac{\overline{c}_{i-1}+3-k}{\overline{c}_{i-1}+3-(k+1)}\prod_{k>2}^{\overline{c}_{i-1}+2}\frac{\overline{c}_{i-1}+4-(k+1)}{\overline{c}_{i-1}+4-k}=\frac{\overline{c}_{i-1}}{\overline{c}_{i-1}+1} =\frac{\overline{c}_{i}+1}{\overline{c}_{i-1}+1} \end{equation} \noindent The last ratio holds in the general case ($c_i \in \{0,1\}$). Putting everything together and multiplying the factors for $i=1\ldots p$ gives the formula:
\begin{thm}[Zonal polynomials on near hooks] \label{thm : Zonal} \begin{align} \nonumber &Q_{a,b,1^c}= \frac{\binom{2a-2b}{a-b}}{4^{a-b}(1+c)}\times\\ &\sum_{\substack{\lambda \vdash a+b+c\\\{a_i,b_i,c_i\}}}m_\lambda\prod_{i}\GenBin{\overline{a}_{i-1}-\overline{b}_{i-1}}{a_i}\VarGenBin{\overline{a}_{i-1}-\overline{b}_{i}}{b_i}{R(\overline{a}_{i},\overline{a}_{i-1},\overline{b}_{i},\overline{b}_{i-1},\overline{c}_{i-1})}^{c_i} \end{align} \noindent Similarly, we find: \begin{align} \nonumber &P_{a,b,1^c}= f(a,b,c)\times\\ & \sum_{\substack{\lambda \vdash a+b+c\\\{a_i,b_i,c_i\}}}m_\lambda\prod_{i}\GenBin{\overline{a}_{i-1}-\overline{b}_{i-1}}{a_i}\VarGenBin{\overline{a}_{i-1}-\overline{b}_{i}}{b_i}{R(\overline{a}_{i},\overline{a}_{i-1},\overline{b}_{i},\overline{b}_{i-1},\overline{c}_{i-1})}^{c_i} \end{align} \footnotesize where the inner sums run over the sequences $\{a_i,b_i,c_i\}$ describing a tableau of shape $(a,b,1^c)$ and type $\lambda$, and $f(a,b,c)=\begin{cases} (2a+c+1)(2b+c)/\left ((2a+c)(2b+c-1)\VarGenBin{a-1}{b-1}\right) \mbox{ if } b\neq 0\\ 1 \;\;\;\mbox{ otherwise}\end{cases}$ \normalsize \end{thm} \begin{exm} When $b=c=0$, we get $P_{n}= \sum_{\lambda \vdash n} \GenBin{n}{\lambda}m_\lambda$ and $Q_{n}=\frac{1}{4^{n}}\binom{2n}{n} \sum_{\lambda \vdash n} \GenBin{n}{\lambda}m_\lambda$, which is equivalent to the formula of Stanley in \cite{S} (the extension of {\footnotesize$\GenBin{x}{y}$} to multinomial coefficients is straightforward). \end{exm} \begin{exm} Only the following tableau of shape $(a,b,1^{c})$ yields a nonzero contribution to the coefficient of $m_{a,b,1^{c}}$ in $P_{a,b,1^{c}}$: \footnotesize $$ \tableau{ 1 &p\hspace{-1mm}-\hspace{-1.4mm}1& \ldots &p\hspace{-1mm}-\hspace{-1.4mm}1&p\hspace{-1mm}-\hspace{-1.4mm}1&p& \ldots&p & p\\ 2 &p&\ldots & p & p\\ 3\\ :\\ p\hspace{-1mm}-\hspace{-1.4mm}2\\ p\hspace{-1mm}-\hspace{-1.4mm}1\\ p} $$ \normalsize where we have $(a-b)$ boxes filled with $p$ on the first line, $(b-1)$ on the second and $1$ at the bottom of the column. Then $(b-1)$ boxes filled with $p-1$ on the first line and $1$ in the column. 
Finally, the column is labeled from bottom to top with $p-2,p-3,\ldots,1$. When all the $c_i$'s are equal to $1$, expression \ref{eq : R} reads:
\begin{equation} \frac{2\overline{a}_{i}+\overline{c}_{i}+1}{2\overline{a}_{i}+\overline{c}_{i}}\frac{2\overline{a}_{i-1}+\overline{c}_{i-1}}{2\overline{a}_{i-1}+\overline{c}_{i-1}+1}\frac{2\overline{b}_{i}+\overline{c}_{i}}{2\overline{b}_{i}+\overline{c}_{i}-1}\frac{2\overline{b}_{i-1}+\overline{c}_{i-1}-1}{2\overline{b}_{i-1}+\overline{c}_{i-1}} \end{equation} \noindent and we have \begin{equation} \prod_i{R(\overline{a}_{i},\overline{a}_{i-1},\overline{b}_{i},\overline{b}_{i-1},\overline{c}_{i-1})}^{c_i} = \frac{3}{2}\times\frac{2a+c}{2a+c+1}\times\frac{2}{1}\times\frac{2b+c-1}{2b+c} \end{equation} The other contributing factors read \begin{align} \nonumber \GenBin{a-b}{a-b}\VarGenBin{a-1}{b-1}\GenBin{b-1}{b-1}&\VarGenBin{b-1}{0}\GenBin{0}{0}\VarGenBin{1}{1}\GenBin{1}{1}\VarGenBin{1}{0} \\
&=\frac{1}{3}\VarGenBin{a-1}{b-1} \end{align} where we use the fact that $\VarGenBin{1}{1} = 1/3$. Putting everything together yields the classical property: \begin{equation} [m_{a,b,1^{c}}]P_{a,b,1^{c}}=1 \end{equation} \end{exm}
\section{Proof of main theorem, examples and further results} \label{sec : results} The proof of theorem \ref{thm : main} follows immediately from equation \ref{eq gen series}, theorem \ref{thm : Zonal} and the final remark: \begin{equation} \prod_{s\in (a,b,1^c)} (2a'(s)-l'(s)) = \begin{cases}(-1)^{c+1}(c+1)!(2a-2)!!(2b-3)!!\mbox{ if } b>0\\ (2a-2)!! \mbox{ otherwise}\end{cases} \end{equation} Now we study some particular coefficients of the generating series in theorem \ref{thm : main} and emphasize the link with the alternative formula in \cite{MV11}.\\ \subsection{Coefficient of $m_\lambda(x)m_n(y)$} As a first example, we notice that only the near hooks $(a,b,1^c)$ that have at most $\min(\ell(\lambda),\ell(\mu))$ parts contribute to the coefficient of $m_\lambda(x)m_\mu(y)$ in theorem \ref{thm : main}. If either $\lambda$ or $\mu$ is the single-part partition $(n)$, then only the one-row tableaux of length $n$ contribute to the coefficient in the generating series. It is straightforward from theorem \ref{thm : main} that \begin{equation} [m_n(x)m_n(y)]\left (\frac{1} {2^nn!}\sum_{\lambda,\mu \vdash n} {b_{\lambda,\mu}^n} p_{\lambda}(x)p_{\mu}(y)\right ) = r_n(n,0) = (2n-1)!! \end{equation} From the perspective of the combinatorial interpretation in \cite{MV11}, this is an obvious result, as the coefficient of $m_n(x)m_n(y)$ is the number of pairings on a set of size $2n$. More interestingly, noticing that \begin{equation} \GenBin{n}{\lambda} = \binom{n}{\lambda}\frac{(2\lambda-1)!!}{(2n-1)!!} \end{equation} \noindent where $(2\lambda-1)!! = \prod_i(2\lambda_i-1)!!$, we get: \begin{equation} \label{eq : mlamn} [m_\lambda(x)m_n(y)]\left (\frac{1} {2^nn!}\sum_{\lambda,\mu \vdash n} {b_{\lambda,\mu}^n} p_{\lambda}(x)p_{\mu}(y)\right ) = \binom{n}{\lambda}(2\lambda-1)!! 
\end{equation} This result is not obvious to derive from the formula in \cite{MV11} obtained by Lagrange inversion, but it can be proved with the combinatorial interpretation in terms of some decorated bicolored forests with a single white vertex. The exact definition of these forests is given in \cite[Def. 2.10]{MV11}. We briefly recall that they are composed of white and black (internal or root) vertices. The descendants of a given vertex are composed of edges (linking a white vertex and a black one), thorns and loops. Additionally, there is a bijection between thorns connected to white vertices and thorns connected to black vertices, and a mapping of the loops on white (resp. black) vertices to the set of black (resp. white) vertices. In \cite{MV11}, the authors show that $(2n-1)!! =F_n$ is the number of two-vertex (one white and one black) forests of size $n$ (see figure \ref{forests} for examples). \begin{figure}
\caption{Two examples of two-vertex forests for $n=7$ (left) and $8$ (right). Letters depict the bijection between thorns}
\label{forests}
\end{figure} But a forest with one white (root) vertex and $\ell(\lambda)$ black vertices of degree distribution $\lambda$ ($F_\lambda$ denotes the number of such forests) can be seen as an $\ell(\lambda)$-tuple of two-vertex forests of sizes $\lambda_i$. The $i$-th forest is composed of the $i$-th black vertex with its descendants and one white vertex with a subset of the original white vertex's descendants, containing (i) the edge linking the white vertex and the $i$-th black vertex, (ii) the thorns in bijection with the thorns of the $i$-th black vertex, (iii) the loops mapped to the $i$-th black vertex. The construction is bijective if we distinguish in the initial forest the black vertices with the same degree ($Aut_\lambda$ ways to do it) and we keep track in the tuple of forests of the initial positions of the descendants of the white vertices within the initial forest ($\binom{n}{\lambda}$ possible choices). We get: \begin{equation} Aut_\lambda F_\lambda = \binom{n}{\lambda}\prod_i F_{\lambda_i} =\binom{n}{\lambda}(2\lambda-1)!! \end{equation} \begin{figure}
\caption{Splitting a forest of black degree distribution $\lambda$ into a $\ell(\lambda)$-tuple of two vertex forests for $\lambda = (4^3,1)$. Greek letters depict the mapping on the sets of loops connected to the white vertex}
\label{forestsplit}
\end{figure} Finally, according to \cite{MV11}, $Aut_\lambda F_\lambda$ is equal to the desired coefficient in (\ref{eq : mlamn}). \begin{rem} Using the main formula of \cite{MV11} we have shown: \begin{equation} \sum_{Q,Q'}\prod_{i,j}\frac{2^{Q'_{ij}-2j(Q_{ij}+Q'_{ij})}}{Q_{ij}!Q'_{ij}!}{\binom{i-1}{j,j}}^{Q_{ij}}{\binom{i-1}{j,j-1}}^{Q'_{ij}} = \frac{(2\lambda-1)!!}{\lambda!Aut_\lambda} \end{equation} where the sum runs over matrices $Q$ and $Q'$ with $m_i(\lambda)=\sum_{j \geq 0}Q_{ij} + Q'_{ij}$. \end{rem} \begin{rem} If we admit the expansion of $Z_n$ in terms of monomials, we can directly show (\ref{eq : mlamn}) as:
\begin{align}
\nonumber {|B_n|}^{-1}\sum_{\lambda \vdash n}\left ( \sum_{\mu \vdash n}{b_{\lambda,\mu}^n}\right) p_{\lambda}(x)&= {|B_n|}^{-1}\sum_{\lambda \vdash n}|K_\lambda|p_{\lambda}(x) =Z_n(x)\\
&= \sum_{\lambda \vdash n}\binom{n}{\lambda}(2\lambda-1)!!m_\lambda(x) \end{align}
\end{rem} \subsection{Coefficient of $m_{n-p,1^{p}}(x)m_{n-p,1^{p}}(y)$} The number $F_{(n-p,1^p),(n-p,1^p)}$ of forests with $p+1$ white and $p+1$ black vertices, both of degree distribution $(n-p,1^{p})$, can be easily obtained from the number of two-vertex forests $F_{n-2p}$. We assume $2p\leq n-1$; it is easy to show that the coefficient equals $0$ otherwise. Two cases occur: either the white vertex with degree $n-p$ is the root and there are $\binom{n-p}{p}\times\binom{n-p-1}{p}$ ways to add the black and the white descendants of degree $1$, or the root is a white vertex of degree $1$ and there are $\binom{n-p-1}{p-1}\times\binom{n-p-1}{p}$ ways to add the remaining white vertices and the $p$ black vertices of degree $1$. We have: \begin{equation} F_{(n-p,1^p),(n-p,1^p)} = F_{n-2p}\binom{n-p-1}{p}\left [\binom{n-p}{p}+\binom{n-p-1}{p-1}\right ] \end{equation} As a result, we obtain: \begin{align} \nonumber [m_{n-p,1^{p}}(x)m_{n-p,1^{p}}(y)]&\left (\frac{1} {2^nn!}\sum_{\lambda,\mu \vdash n} {b_{\lambda,\mu}^n} p_{\lambda}(x)p_{\mu}(y)\right )\\ &\nonumber = Aut_{n-p,1^{p}}^2F_{(n-p,1^p),(n-p,1^p)}\\ &= n(n-2p)\left(\frac{(n-p-1)!}{(n-2p)!}\right)^2(2n-4p-1)!! \end{align} We check this result with the formula of theorem \ref{thm : main} for the special cases $p\in\{1,2\}$. For $p=1$, two tableaux \begin{equation} \overbrace{\vartableau{1 & 2&... & 2&2 }}^{n}\;\;\mbox{ and }\;\;\overbrace{\vartableau{1 & 2&... & 2&2\\2 }}^{n-1} \end{equation} contribute to the coefficient with respective contributions $n^2(2n-5)!!(2n-3)/(2n-1)$ and $-2n(2n-5)!!(n-1)/(2n-1)$. Adding them gives the desired result $n(n-2)(2n-5)!!$. 
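The $p=1$ computation above is easy to verify numerically. The following sketch (plain Python with exact rational arithmetic; the function names are ours) checks that the two tableau contributions sum to the closed form $n(n-2)(2n-5)!!$, which is also what the general formula $n(n-2p)\left((n-p-1)!/(n-2p)!\right)^2(2n-4p-1)!!$ gives at $p=1$.

```python
from fractions import Fraction
from math import prod

def dfact(k):
    """Double factorial k!!, with the convention k!! = 1 for k <= 0."""
    return prod(range(k, 0, -2)) if k > 0 else 1

def closed_form_p1(n):
    # n(n-2)(2n-5)!!, the stated coefficient for p = 1
    return n * (n - 2) * dfact(2 * n - 5)

def tableau_sum_p1(n):
    # contributions of the two tableaux for p = 1
    t1 = Fraction(n**2 * dfact(2 * n - 5) * (2 * n - 3), 2 * n - 1)
    t2 = Fraction(-2 * n * dfact(2 * n - 5) * (n - 1), 2 * n - 1)
    return t1 + t2

for n in range(3, 20):
    assert tableau_sum_p1(n) == closed_form_p1(n)
```

Exact fractions are used so that the cancellation of the $(2n-1)$ denominators is checked symbolically rather than up to floating-point error.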
In the case $p=2$, the following tableaux contribute: \begin{equation} \overbrace{\vartableau{1&2&3&...&3&3}}^{n},\;\;\;\; \overbrace{\vartableau{1&2&3&...&3\\3}}^{n-1},\;\;\;\; \overbrace{\vartableau{1&3&3&...&3\\2}}^{n-1},\;\;\;\; \overbrace{\vartableau{1&2&3&...&3\\3&3}}^{n-2},\;\;\;\overbrace{\vartableau{1&3&3&...&3\\2\\3}}^{n-2} \end{equation} The second and third tableaux are the two possible fillings for the shape $a=n-1$ and $b=1$, and we have to consider the cross contributions (one filling for $\lambda$ and the other for $\mu$). Combining the contributions of tableaux 2 and 3, the sum reads: \footnotesize \begin{align} \nonumber &\frac{n^2(n-1)^2(2n-9)!!(2n-7)(2n-5)}{(2n-1)(2n-3)}-\frac{2n(2n^2-6n+3)^2(2n-9)!!(2n-7)}{(n-1)(2n-1)(2n-5)}\\ \nonumber&-\frac{8n(n-2)^2(n-3)^2(2n-9)!!}{3(2n-5)(2n-3)}+\frac{2n(2n-7)(2n-3)(2n-9)!!}{3(n-1)} \end{align} \normalsize Finally, we have the desired result: $n(n-4)(n-3)^2(2n-9)!!$. \normalsize \subsection{Generating series for $b_{\lambda n}^n$} As a final application we compute the generating series for $b_{\lambda n}^n$ in a similar fashion as in sections \ref{sec : doublecoset} and \ref{sec : zon poly}: \begin{equation}
\Pi_n = \frac{1}{|B_n|}\sum_{\lambda \vdash n} {b_{\lambda n}^n} p_{\lambda} = \frac{1}{|K_{(n)}|} \sum_{a,b}\frac{\left (\varphi^{(a,b,1^{n-a-b})}_{(n)}\right )^2}{c'_{a,b,1^{n-a-b}}}P_{(a,b,1^{n-a-b})} \end{equation} Using the same notation as in theorem \ref{thm : main}, we find: \begin{thm}[Generating series for $b_{\lambda n}^n$] \begin{align} \nonumber &\Pi_n = \sum_{\substack{\lambda \vdash n \\ a,b \\ a_i,b_i,c_i}}r'_n(a,b)\\ &\times \prod_{1\leq i \leq n}\GenBin{\overline{a}_{i-1}-\overline{b}_{i-1}}{a_i}\VarGenBin{\overline{a}_{i-1}-\overline{b}_{i}}{b_i}{R(\overline{a}_{i},\overline{a}_{i-1},\overline{b}_{i},\overline{b}_{i-1},\overline{c}_{i-1})}^{c_i}m_\lambda \end{align} with $r'_n(n,0) = (2n-2)!!$ and $r'_n(x,y) = 2n\frac{(n+1-x-y)!(2x-2)!!(2y-3)!!}{(n+x-y)(n+y-x-1)}\; (y>0)$ \end{thm} Contrary to $r_n$, the coefficients $r'_n$ do not alternate in sign. This leaves room for asymptotic evaluation. \begin{exm} The following table gives the values of some coefficients in the monomial expansion of $\Pi_n$.\\ \footnotesize
\begin{tabular}{|c|c|c|} \hline $\lambda$&$(n)$&$(n-1,1)$\\ \hline $[m_\lambda]\Pi_n$&$(2n-2)!!$&$n(2n-4)!!$\\ \hline $\lambda$&$(n-2,1,1)$&$(n-3,1,1,1)$\\ \hline $[m_\lambda]\Pi_n$&$n(n-1)(2n-6)!!$&$n(n-1)(n-2)(2n-8)!!$\\ \hline $\lambda$&$(1^n)$&$(n-2,2)$\\ \hline $[m_\lambda]\Pi_n$&$n!$&$n(2n-6)!!(3n-5)/2$\\ \hline $\lambda$&$(n-3,3)$&$(n-4,4)$\\ \hline $[m_\lambda]\Pi_n$&$n(2n-8)!!(5n^2-21n+20)/2$&$n(2n-10)!!(35n^3-270n^2+649n-486)/8$\\ \hline \end{tabular} \end{exm} \normalsize
\end{document} |
\begin{document}
\twocolumn[ \icmltitle{Appendix for ``Outlier-Robust Optimal Transport''} \icmlsetsymbol{equal}{*}
\begin{icmlauthorlist} \icmlauthor{Debarghya Mukherjee}{to} \icmlauthor{Aritra Guha}{to} \icmlauthor{Justin Solomon}{goo} \icmlauthor{Yuekai Sun}{to} \icmlauthor{Mikhail Yurochkin}{ed} \end{icmlauthorlist}
\icmlaffiliation{to}{Department of Statistics, University of Michigan} \icmlaffiliation{goo}{MIT CSAIL, MIT-IBM Watson AI Lab} \icmlaffiliation{ed}{IBM Research, MIT-IBM Watson AI Lab}
\icmlcorrespondingauthor{Debarghya Mukherjee}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in ]
\printAffiliationsAndNotice{}
\makeatletter \newcommand*{\addFileDependency}[1]{
\typeout{(#1)}
\@addtofilelist{#1}
\IfFileExists{#1}{}{\typeout{No file #1.}} } \makeatother
\newcommand*{\myexternaldocument}[1]{
\externaldocument{#1}
\addFileDependency{#1.tex}
\addFileDependency{#1.aux} }
\myexternaldocument{main}
\appendix
\section{Proof of Theorem \ref{thm:main_thm}} \label{sec:proofs}
In the proofs, $a \wedge b$ denotes $\min\{a, b\}$ for any $a, b \in \reals$. \subsection{Proof of the discrete version} \label{sec:proof_discrete} \begin{proof} Define a matrix $\Pi$ as: $$ \Pi(i,j) = \begin{cases} 0, & \text{if } C(i, j) > 2 \lambda \\ \Pi^*_2(i, j), & \text{otherwise} \end{cases} $$ Also define $s \in \mathbb{R}^n$ and $t \in \mathbb{R}^m$ as: $$ s^*_1(i) = -\sum_{j=1}^m \Pi^*_2(i, j) \mathds{1}_{C(i, j) > 2 \lambda} $$ and similarly define: $$ t^*_1(j) = \sum_{i=1}^n \Pi^*_2(i, j) \mathds{1}_{C(i, j) > 2 \lambda} $$ These vectors correspond to the row sums and the column sums of the entries of the optimal transport plan of Formulation 2 at which the cost function exceeds $2 \lambda$. Note that the entries of the optimal transport plan at the coordinates of the cost matrix where the cost is greater than $2\lambda$ contribute to the objective value only via their sum; hence any different arrangement of these transition probabilities with the same sum gives the same objective value.
Based on this $\Pi$, we now construct a feasible solution of Formulation 1 following Algorithm \ref{algo:f1-f2}: $$ \Pi^*_1 = \begin{bmatrix} \mathbf{0} & \Pi \\ \mathbf{0} & \diag(t^*_1) \end{bmatrix} $$ The row sums of $\Pi^*_1$ are: \[ \Pi^*_1 \mathbf{1}= \begin{bmatrix} \mu_n + s^*_1 \\ t^*_1 \end{bmatrix} \] and it is immediate from the construction that the column sums of $\Pi^*_1$ are $\nu_m$. Also as: $$ -\sum_{i=1}^n s^*_1(i) = \sum_{j=1}^m t^*_1(j) = \sum_{(i, j): C_{i, j} > 2 \lambda} \Pi^*_2(i, j) $$ and $s^*_1 \preceq 0, t^*_1 \succeq 0$, we have: $$ \mathbf{1}^{\top}(\mu_n + s^*_1 + t^*_1) = \mathbf{1}^{\top}p = 1 \,. $$ Therefore $(\Pi^*_1, s^*_1, t^*_1)$ is a feasible solution of Formulation 1. Now suppose this is not an optimal solution. Pick an optimal solution $(\tilde \Pi, \tilde s, \tilde t)$ of Formulation 1 so that: $$
\langle C_{aug}, \tilde \Pi \rangle + \lambda \left[\|\tilde s\|_1 + \|\tilde t\|_1\right] < \langle C_{aug}, \Pi^*_1 \rangle + \lambda \left[\|s^*_1\|_1 + \|t^*_1\|_1\right] $$ The following two lemmas provide some structural properties of any optimal solution of Formulation 1:
\begin{lemma} \label{lem:R1-structure} Suppose $\Pi^*_1, s^*_1, t^*_1$ form an optimal solution of Formulation 1. Divide $\Pi^*_1$ into four parts corresponding to the augmentation as in Algorithm \ref{algo:f1-f2}: $$ \Pi^*_1 = \begin{bmatrix} \Pi^*_{1, 11} & \Pi^*_{1, 12} \\ \Pi^*_{1,21} & \Pi^*_{1, 22} \end{bmatrix} $$ Then we have $\Pi^*_{1, 11} = \Pi^*_{1, 21} = \mathbf{0}$ and $\Pi^*_{1, 22}$ is a diagonal matrix. \end{lemma}
\begin{lemma} \label{lem:f2_characterization} If $\Pi^*_1, s^*_1, t^*_1$ is an optimal solution of Formulation 1 then: \begin{enumerate}
\item If $C_{i,j} > 2 \lambda$ then $\Pi^*_1(i, j) = 0$.
\item If for some $i$ we have $C_{i, j} < 2 \lambda$ for all $1 \le j \le m$, then $s^*_1(i) = 0$.
\item If for some $j$ we have $C_{i, j} < 2 \lambda$ for all $1 \le i \le n$, then $t^*_1(j) = 0$.
\item If $C_{i, j} < 2 \lambda$ then $s^*_1(i) t^*_1(j) = 0$. \end{enumerate} \end{lemma}
We provide the proofs in the next subsection. By Lemma \ref{lem:R1-structure} we can assume without loss of generality: $$ \tilde \Pi = \begin{bmatrix} \mathbf{0} & \tilde \Pi_{12} \\ \mathbf{0} & \diag(\tilde t) \end{bmatrix} $$ Now based on $\left(\tilde \Pi, \tilde s, \tilde t\right)$ we create a feasible solution namely $\Pi^*_{2, new}$ of Formulation 2 as follows: Define the set of indices $\{i_1, \cdots, i_k\}$ and $\{j_1, \dots, j_l\}$ as: $$ \tilde s_{i_1}, \tilde s_{i_2}, \dots, \tilde s_{i_k} > 0 \ \ \ \ \text{and} \ \ \ \tilde t_{j_1}, \tilde t_{j_2}, \dots, \tilde t_{j_l} > 0 \,. $$
Then by part (4) of Lemma \ref{lem:f2_characterization} we have $C_{i_\alpha, j_\beta} > 2 \lambda$ for $\alpha \in \{1, \dots, k\}$ and $\beta \in \{1, \dots, l\}$. Also by part (1) of Lemma \ref{lem:f2_characterization} the value of the transport plan at these coordinates is 0. Now distribute the mass of the slack variables in these coordinates such that the marginals of the new transport plan become exactly $\mu_n$ and $\nu_m$. This new transport plan is our $\Pi^*_{2, new}$. Recall that $\|\tilde s\|_1 = \| \tilde t\|_1$. Hence the regularizer value decreases by $2 \lambda \|\tilde s\|_1$ and the cost value increases by exactly $2 \lambda \|\tilde s\|_1$, as the cost is truncated. Hence we have: \begin{align*}
\langle C_{\lambda}, \Pi^*_{2,new} \rangle & = \langle C_{aug}, \tilde \Pi \rangle + \lambda \left[ \|\tilde s \|_1 + \|\tilde t\|_1\right] \\
& <\langle C_{aug}, \Pi^*_1 \rangle + \lambda \left[\|s^*_1\|_1 + \|t^*_1\|_1\right] \\
& = \langle C_{\lambda}, \Pi^*_2 \rangle \end{align*} which is a contradiction, as $\Pi^*_2$ is the optimal solution of Formulation 2. This completes the proof for the discrete part.
\end{proof}
\subsection{Proof of equivalence for the two-sided formulation} \label{sec:proof_two_sided} Here we prove that our two-sided formulation, i.e.\ Formulation 3 \eqref{eq:F_3}, is equivalent to Formulation 1 \eqref{eq:robot1-d} in the discrete case. Towards that end, we introduce another auxiliary formulation and show that both Formulation 1 and Formulation 3 are equivalent to the following auxiliary formulation of the problem.
\textbf{Formulation 4:} \begin{equation} \begin{aligned}
& \min\nolimits_{\Pi\in\reals^{m\times n},s_1\in\reals^m, s_2 \in \reals^n} & & \langle C,\Pi\rangle + \lambda \left[\|s_1\|_1 + \|s_2\|_1\right] \\ & \subjectto & & \Pi1_n = p + s_1 \\ & & & \Pi^T1_m = q + s_2 \\ & & & \Pi \succeq 0
\end{aligned}. \label{eq:F_1} \end{equation} First we show that Formulation 1 and Formulation 4 are equivalent in the sense that they have the same optimal objective value. \begin{theorem} \label{thm:f12} Suppose $C$ is a cost function such that $C(x, x) = 0$. Then Formulation 1 and Formulation 4 have the same optimal objective value. \end{theorem} \begin{proof} Towards that end, we show that given optimal variables of one formulation, we can obtain optimal variables of the other formulation with the same objective value. Before going into details we need the following lemma whose proof is provided in Appendix B: \begin{lemma} \label{lem:negativity} Suppose $\Pi^*_{4}, s^*_{4, 1}, s^*_{4, 2}$ are the optimal variables of Formulation 4. Then $s^*_{4, 1} \preceq 0$ and $s^*_{4, 2} \preceq 0$. \end{lemma}
\noindent Now we prove that the optimal values of Formulation 1 and Formulation 4 are the same. Let $(\Pi^*_{1}, s^*_{1,1}, t^*_{1,1})$ be an optimal solution of Formulation 1. Then we claim that $(\Pi^*_{1}, s^*_{1,1}, t^*_{1,1})$ is also an optimal solution of Formulation 4. Clearly it is a feasible solution of Formulation 4. Suppose it is not optimal, i.e.\ there exists another optimal solution $(\tilde \Pi_{4}, \tilde s_{4, 1}, \tilde s_{4, 2})$ such that: $$
\langle C, \tilde \Pi_{4} \rangle + \lambda(\|\tilde s_{4, 1}\|_1 + \|\tilde s_{4, 2}\|_1) < \langle C, \Pi^*_{1, 12} \rangle + \lambda(\|s^*_{1, 1}\|_1 + \|t^*_{1, 1}\|_1) $$ Now based on $(\tilde \Pi_{4}, \tilde s_{4, 1}, \tilde s_{4, 2})$ we construct a feasible solution of Formulation 1 as follows: $$ \tilde \Pi_{1} = \begin{bmatrix} \mathbf{0} & \tilde \Pi_{4} \\ \mathbf{0} & -\diag(\tilde s_{4, 2}) \end{bmatrix} $$ Note that we proved in Lemma \ref{lem:negativity} that $\tilde s_{4, 2} \preceq 0$, hence we have $\tilde \Pi_{1} \succeq 0$. Now as the column sums of $\tilde \Pi_{4}$ are $q + \tilde s_{4, 2}$, the column sums of $\tilde \Pi_{1}$ are $[\mathbf{0} \ \ q^{\top}]^{\top}$ and the row sums are $[(p+\tilde s_{4, 1})^{\top} \ \ -\tilde s_{4, 2}^{\top}]^{\top}$. Hence we take $\tilde s_{1, 1} = \tilde s_{4, 1}$ and $\tilde s_{1, 2} = -\tilde s_{4, 2}$. Then it follows: \begin{align*}
& \langle C_{aug}, \tilde \Pi_{1} \rangle + \lambda \left[\|\tilde s_{1 , 1}\|_1 + \|\tilde s_{1, 2} \|_1\right] \\
& = \langle C, \tilde \Pi_{4} \rangle + \lambda \left[\|\tilde s_{4 , 1}\|_1 + \|\tilde s_{4, 2} \|_1\right] \\
& < \langle C, \Pi^*_{1, 12} \rangle + \lambda \left[\| s^*_{1, 1}\|_1 + \|t^*_{1, 1}\|_1\right] \\
& = \langle C_{aug}, \Pi^*_{1} \rangle + \lambda \left[\| s^*_{1, 1}\|_1 + \|t^*_{1, 1}\|_1\right] \end{align*} This is a contradiction, as we assumed $(\Pi^*_{1}, s^*_{1, 1}, t^*_{1, 1})$ is an optimal solution of Formulation 1. Therefore we conclude that $(\Pi^*_{1}, s^*_{1,1}, t^*_{1,1})$ is also an optimal solution of Formulation 4, which further implies that Formulation 1 and Formulation 4 have the same optimal value. This completes the proof of the theorem. \end{proof} \begin{theorem} \label{thm:f13} The optimal objective values of Formulation 3 and Formulation 4 are the same. \end{theorem} \begin{proof} As in the proof of Theorem \ref{thm:f12}, we first prove a couple of lemmas. \begin{lemma} \label{lem:struct_f3} Any optimal transport plan $\Pi^*_{3}$ of Formulation 3 has the following structure: If we write, \[ \Pi^*_{3} = \begin{bmatrix} \Pi^*_{3, 11} & \Pi^*_{3, 12} \\ \Pi^*_{3, 21} & \Pi^*_{3, 22} \end{bmatrix} \] then $\Pi^*_{3, 11}$ and $\Pi^*_{3, 22}$ are diagonal matrices and $\Pi^*_{3, 21} = \mathbf{0}$. \end{lemma}
\begin{lemma} \label{lem:negativity_2} If $s^*_{3, 1}, t^*_{3, 1}, s^*_{3, 2}, t^*_{3, 2}$ are the four optimal slack variables in Formulation 3, then $s^*_{3, 1}, t^*_{3, 1} \preceq 0$ and $s^*_{3, 2}, t^*_{3, 2} \succeq 0$. \end{lemma} \begin{proof} The line of argument is the same as in the proof of Lemma \ref{lem:negativity}. \end{proof}
Next we establish equivalence. Suppose $(\Pi^*_{3}, s^*_{3, 1}, t^*_{3, 1}, s^*_{3, 2}, t^*_{3, 2})$ are optimal values of Formulation 3. We claim that $(\Pi^*_{3, 12}, s^*_{3,1} - s^*_{3,2}, t^*_{3, 1} - t^*_{3, 2})$ forms an optimal solution of Formulation 4. The objective value is then also the same, since $s^*_{3, 1} \preceq 0, s^*_{3, 2} \succeq 0$ (Lemma \ref{lem:negativity_2}) implies $\|s^*_{3, 1} - s^*_{3, 2}\|_1 = \|s^*_{3, 1}\|_1 + \|s^*_{3, 2}\|_1$ and similarly $t^*_{3, 1} \preceq 0, t^*_{3, 2} \succeq 0$ implies $\|t^*_{3, 1} - t^*_{3, 2}\|_1 = \|t^*_{3, 1}\|_1 + \|t^*_{3, 2}\|_1$. Feasibility is immediate. For optimality, we again argue by contradiction. Suppose they are not optimal, and let $(\tilde \Pi_{4}, \tilde s_{4, 1}, \tilde s_{4, 2})$ be an optimal triplet of Formulation 4. Now construct another feasible solution of Formulation 3 as follows: Set $\tilde s_{3, 2} = \tilde t_{3, 2} = 0, \tilde s_{3, 1} = \tilde s_{4, 1} $ and $\tilde t_{3, 1} = \tilde s_{4, 2}$. Set the matrix as: \[ \tilde \Pi_{3} = \begin{bmatrix} \mathbf{0} & \tilde \Pi_{4} \\ \mathbf{0} & -\diag(\tilde s_{4, 2}) \end{bmatrix} \] Then it follows that $\left(\tilde \Pi_{3}, \tilde s_{3, 1}, \tilde s_{3, 2}, \tilde t_{3, 1}, \tilde t_{3, 2}\right)$ is a feasible solution of Formulation 3. Finally we have: \begin{align*}
& \langle C_{aug}, \tilde \Pi_{3} \rangle + \lambda \left[\|\tilde s_{3, 1}\|_1 + \| \tilde s_{3, 2} \|_1 + \| \tilde t_{3, 1} \|_1 + \| \tilde t_{3, 2} \|_1 \right] \\
& = \langle C_{aug}, \tilde \Pi_{3} \rangle + \lambda \left[\|\tilde s_{4, 1}\|_1 + \| \tilde s_{4, 2} \|_1 \right] \\
& = \langle C, \tilde \Pi_{4} \rangle + \lambda \left[\|\tilde s_{4, 1}\|_1 + \| \tilde s_{4, 2} \|_1 \right] \\
& < \langle C, \Pi^*_{3, 12} \rangle + \lambda \left[\|s^*_{3,1} - s^*_{3,2}\|_1 + \|t^*_{3, 1} - t^*_{3, 2}\|_1 \right] \\
& = \langle C_{aug}, \Pi^*_{3} \rangle + \lambda \left[\|s^*_{3, 1}\|_1 + \|s^*_{3, 2}\|_1 + \|t^*_{3, 1}\|_1 + \|t^*_{3, 2}\|_1 \right] \end{align*} This contradicts the optimality of $(\Pi^*_{3}, s^*_{3, 1}, s^*_{3, 2}, t^*_{3, 1}, t^*_{3, 2})$. This completes the proof. \end{proof}
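By Theorem \ref{thm:f12} combined with the discrete equivalence $F_1 = F_2$ proved above, the optimal value of Formulation 4 should coincide with that of Formulation 2, i.e.\ optimal transport under the truncated cost $\min(C, 2\lambda)$. The following sketch checks this numerically on a small random instance with Euclidean cost; the instance sizes, the seed, and the use of \texttt{scipy.optimize.linprog} are our own illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, lam = 4, 5, 0.15
X, Y = rng.random((m, 2)), rng.random((n, 2))
C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # metric cost
p = rng.random(m); p /= p.sum()
q = rng.random(n); q /= q.sum()

def truncated_ot(C, p, q, lam):
    """Formulation 2: OT with cost min(C, 2*lam) and exact marginals."""
    m, n = C.shape
    A = np.zeros((m + n, m * n))
    for i in range(m):
        A[i, i*n:(i+1)*n] = 1              # row-sum constraints
    for j in range(n):
        A[m + j, j::n] = 1                 # column-sum constraints
    res = linprog(np.minimum(C, 2 * lam).ravel(), A_eq=A,
                  b_eq=np.concatenate([p, q]))
    return res.fun

def formulation4(C, p, q, lam):
    """min <C,Pi> + lam(|s1|_1 + |s2|_1) s.t. Pi 1 = p + s1, Pi^T 1 = q + s2."""
    m, n = C.shape
    nv = m * n + 2 * m + 2 * n             # Pi, then s1 = u1 - v1, s2 = u2 - v2
    cost = np.concatenate([C.ravel(), lam * np.ones(2 * m + 2 * n)])
    A = np.zeros((m + n, nv))
    for i in range(m):
        A[i, i*n:(i+1)*n] = 1
        A[i, m*n + i] = -1                 # -u1_i
        A[i, m*n + m + i] = 1              # +v1_i
    for j in range(n):
        A[m + j, j:m*n:n] = 1
        A[m + j, m*n + 2*m + j] = -1       # -u2_j
        A[m + j, m*n + 2*m + n + j] = 1    # +v2_j
    res = linprog(cost, A_eq=A, b_eq=np.concatenate([p, q]))
    return res.fun

assert abs(truncated_ot(C, p, q, lam) - formulation4(C, p, q, lam)) < 1e-7
```

The split $s = u - v$ with $u, v \succeq 0$ is the standard LP reformulation of the $\ell_1$ penalty; with $\lambda = 0.15$ and points in the unit square, the truncation at $2\lambda$ is active for many entries, so the check is non-trivial.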
\subsection{Proof of continuous version}
\begin{proof}
In this proof we denote by $F_1$ the optimization problem of \eqref{eq:robot1-cts} and by $F_2$ the optimization problem \eqref{eq:robot2-cts}. Let $\mu, \nu$ be two absolutely continuous measures on $\mathbb{R}^d$. Moreover, we assume $c(x,y)=\|x-y\|$ for some norm $\|\cdot\|$ on $\mathbb{R}^d$. We assume that $\int \|x\| \nu(\mathrm{d}x), \int \|x\| \mu(\mathrm{d}x) < \infty$.
\textbf{Step 1:} Let $K_{\epsilon}$ be a compact set such that $\mu(K_{\epsilon}), \nu(K_{\epsilon}) > 1-\epsilon$ and $\int_{K_{\epsilon}^C} \|x\|\mu(\mathrm{d}x), \int_{K_{\epsilon}^C} \|x\|\nu(\mathrm{d}x) < \epsilon$; such a set exists since $\mu$ and $\nu$ have finite first moments.
Also, let $\tilde{K}_\epsilon=\{x_1,\dots,x_{n_\epsilon}\}$ be a maximal $\epsilon$-packing set of $K_{\epsilon}$. Starting from $\tilde{K}_\epsilon$, define $\{S_1,\dots, S_{n_{\epsilon}}\}$ as a mutually disjoint covering of $K_{\epsilon}$ with internal points $x_1,\dots, x_{n_{\epsilon}}$ respectively, so that Diam$(S_i) \leq 2\epsilon$. With $p_i=\int_{S_i} \mu(\mathrm{d}x)$, $q_i=\int_{S_i} \nu(\mathrm{d}x)$ for $i=1,\dots, n_{\epsilon}$, $p_0=\int_{K_{\epsilon}^C} \mu(\mathrm{d}x)$, $q_0=\int_{K_{\epsilon}^C} \nu(\mathrm{d}x)$ and $x_0=0 \in \mathbb{R}^d$, define \begin{eqnarray} \mu_{\epsilon}&=&\sum_0^{n_{\epsilon}}p_i \delta_{x_i} \nonumber \\ \nu_{\epsilon}&=&\sum_0^{n_{\epsilon}}q_i \delta_{x_i} \nonumber \end{eqnarray}
A coupling $Q$ between two probability distributions is a joint distribution with marginals as the given two distributions. The Wasserstein distance between two distributions $P_1$ and $P_2$ is defined as: \begin{eqnarray}
W_1(P_1,P_2)= \inf_{Q\in \mathscr{Q}(P_1,P_2)} \int Q(x,y)\|x-y\|\mathrm{d}x\mathrm{d}y, \end{eqnarray} where $\mathscr{Q}(P_1,P_2)$ is the collection of all couplings of $P_1$ and $P_2$.
Define $Q(x,y)= (\mathbbm{1}_{x=x_0, y \in K_{\epsilon}^C}+ \sum_{i=1}^{n_\epsilon} \mathbbm{1}_{x=x_i, y \in S_i})\mu(\mathrm{d}y)$. Then $Q$ is a coupling between $\mu$ and $\mu_{\epsilon}$.
Therefore, clearly,
\begin{eqnarray}
W_1(\mu,\mu_{\epsilon})\leq \int_{K_{\epsilon}^C} \|x\|\mu(\mathrm{d}x) +2\epsilon \left(\sum_{i=1}^{n_{\epsilon}} p_i\right) \leq 3 \epsilon
\end{eqnarray}
Similarly, $W_1(\nu,\nu_{\epsilon}) \leq 3 \epsilon$. Therefore $\lim_{\epsilon \to 0} W_1(\nu,\nu_{\epsilon})=0$.
Moreover, $W_1(\mu,\nu)=\lim_{\epsilon \to 0} W_1(\mu_{\epsilon},\nu_{\epsilon})$, as $W_1(\mu_{\epsilon},\nu_{\epsilon})-6 \epsilon \leq W_1(\mu,\nu) \leq W_1(\mu_{\epsilon},\nu_{\epsilon})+6 \epsilon$ by triangle inequality.
\textbf{Step 2:}
Let $S$ be an arbitrary measure with $\|S\|_{ \mathrm{TV}}= 2\gamma$, so that $\mu+S$ is a probability measure with $\int \|x\| (\mu+S) (\mathrm{d}x) <\infty$. Also, let us define $\epsilon_n=2^{-(n+1)}$.
Let $S=S^+-S^-$, where $S^+$ and $S^-$ are positive measures on $\bbR^d$. Then, $\|S^-\|_{ \mathrm{TV}}=\|S^+\|_{ \mathrm{TV}} =\gamma $.
Clearly $(\mu-S^{-})/(1-\gamma),\mu, \nu,S^+/\gamma$ are tight probability measures. So we can construct compact sets $K_{\epsilon_n}^{(1)}$, as in Step 1, to approximate all four measures. Without loss of generality we assume that $0 \in K_{\epsilon_n}^{(1)}$ for all $n$. Moreover, we can also construct approximate measures $(\mu-S^-)_n=((\mu-S^-)/(1-\gamma))_{\epsilon_n}$ and $(S^+)_n= (S^+/\gamma)_{\epsilon_n}$ defined as in Step 1. $\mu_n=\mu_{\epsilon_n},\nu_n= \nu_{\epsilon_n}$ are defined similarly. All four of the measures have support points in $K_{\epsilon_n}^{(1)}$.
Next, we define $(\mu+S)_n=\gamma(S^+)_n +(1-\gamma)(\mu-S^-)_n$. Then by the construction, from~\citep{Villani-09}, $\lim_{n \to \infty}W_1((\mu+S)_n,\mu+S) = 0$ and thus $\lim_{n \to \infty}W_1((\mu+S)_n,\nu_n) = W_1(\mu+S,\nu)$. Therefore we can define a signed measure $S_n=(\mu+S)_n- \mu_n$. Moreover, \begin{align} \label{eq:STV}
\|S_n\|_{ \mathrm{TV}} & \leq \gamma\|(S^+)_n\|_{ \mathrm{TV}} +\|(1-\gamma)(\mu-S^-)_n-\mu_n\|_{ \mathrm{TV}} \\
& = 2\gamma= \|S\|_{ \mathrm{TV}} \end{align}
Note that $\mu_n,\nu_n,(\mu+S)_n$ put masses (sometimes zero masses) on a common set of support points given by $\tilde{K}_{\epsilon_n}^{(1)} \subset K_{\epsilon_n}^{(1)}$.
The sets $\tilde{K}_{\epsilon_n}^{(1)}$ are defined sequentially so that $\tilde{K}_{\epsilon_{n+1}}^{(1)}$ is a refinement of $\tilde{K}_{\epsilon_n}^{(1)}$. This is easily achieved with the choice of $\epsilon_n$ above.
Consider $\tilde{s}_n,\Pi_n$ such that \begin{eqnarray} \label{eq: argmin s}
F_1(\mu_n,\nu_n)=\int\|x-y\|\Pi_n(\mathrm{d}x\mathrm{d}y) +\lambda\|\tilde{s}_n\|_{ \mathrm{TV}} \end{eqnarray}
Since $\mu_n,\nu_n$ are discrete, the proof of the discrete part gives $F_1(\mu_n,\nu_n)= F_2(\mu_n,\nu_n)$.
Since $\min\{\|x-y\|,2\lambda\}$ is a metric whenever $\|x-y\|$ is, it is easy to check that $F_2(\mu,\nu)=\lim_n F_2(\mu_n,\nu_n)=\lim_n F_1(\mu_n,\nu_n)$.
Moreover, by construction, $F_1(\mu_n,\nu_n) \leq \int\|x-y\|\Pi(\mathrm{d}x\mathrm{d}y) +\lambda\|S_n\|_{ \mathrm{TV}}$ for any arbitrary coupling $\Pi$ of $\mu$ and $\mu+S$. Also $W_1(\mu_n,\mu) \to 0$ and $W_1((\mu+S)_n, \mu+S) \to 0$.
Thus, combining the above result with \eqref{eq:STV}, we get
\begin{eqnarray}
\lim_n F_1(\mu_n,\nu_n) \leq \int\|x-y\|\tilde{\Pi}(\mathrm{d}x\mathrm{d}y) +\lambda\|S\|_{ \mathrm{TV}} \nonumber
\end{eqnarray}
for any coupling $\tilde{\Pi}$ of $\mu$ and $\mu+S$.
Therefore, $F_2(\mu,\nu) \leq F_1(\mu,\nu)$.
\textbf{Step 3:} Consider $\tilde{s}_n$ defined in \eqref{eq: argmin s}. As $\tilde{s}_n$ has support in the compact sets $K_{\epsilon_n}^{(1)}$ defined in Step 2, the measures $\{\mu_n+\tilde{s}_n\}_{n \geq 1}$ are tight.
Therefore, by Prokhorov's theorem (the equivalence of sequential compactness and tightness for a collection of measures), there exists a probability measure
$\mu \oplus s$ and a subsequence $\{n_k\}_{k \geq 1}$ such that $\mu_{n_k}+\tilde{s}_{n_k}$ converges weakly to $\mu \oplus s$. Moreover, by construction $\lim_{R\to \infty} \limsup_{n \to \infty} \bigintss_{\|x\|>R} \|x\| (\mu_n +\nu_n)(\text{d}x) =0$ and so $\lim_{R\to \infty} \limsup_{n \to \infty} \bigintss_{\|x\|>R} \|x\|(\mu_n + \tilde{s}_n)(\text{d}x) =0$.
Thus, by Definition 6.8 part (iii) and Theorem 6.9 of~\citet{Villani-09}, $W_1(\mu_{n_k}+\tilde{s}_{n_k}, \mu\oplus s)\to 0 $. Moreover, $W_1(\mu_{n_k},\mu)\to 0$. Therefore $\|\tilde{s}_{n_k}\|_{ \mathrm{TV}} \to \|\mu \oplus s -\mu\|_{ \mathrm{TV}}$. Thus, $W_1(\mu_{n_k}+\tilde{s}_{n_k}, \nu_{n_k}) + \lambda\|\tilde{s}_{n_k}\|_{ \mathrm{TV}} \to W_1(\mu \oplus s,\nu)+\lambda\|\mu \oplus s -\mu\|_{ \mathrm{TV}}$. But by the proof of the discrete part,
$W_1(\mu_{n_k}+\tilde{s}_{n_k}, \nu_{n_k}) + \lambda\|\tilde{s}_{n_k}\|_{ \mathrm{TV}} = F_1(\mu_{n_k},\nu_{n_k}) =F_2(\mu_{n_k},\nu_{n_k}) \to F_2(\mu, \nu)$. Therefore, with $s=\mu \oplus s -\mu$, $ W_1(\mu + s,\nu)+\lambda\|s\|_{ \mathrm{TV}}= F_2(\mu,\nu)$.
Therefore, $F_2(\mu,\nu)=\limsup_{n \to \infty} F_1(\mu_n,\nu_n) \geq F_1(\mu,\nu)$. Thus the equality holds.
\end{proof}
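The argument above repeatedly uses the fact that $\min\{\|x-y\|, 2\lambda\}$ is again a metric whenever $\|\cdot\|$ is a norm. A quick randomized check of the triangle inequality for this truncated cost (the dimension, seed, and value of $\lambda$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.25

def d_trunc(x, y):
    # truncated cost min(||x - y||, 2*lam)
    return min(np.linalg.norm(x - y), 2 * lam)

# symmetry and d(x, x) = 0 are immediate; check the triangle inequality
for _ in range(10_000):
    x, y, z = rng.normal(size=(3, 4))
    assert d_trunc(x, z) <= d_trunc(x, y) + d_trunc(y, z) + 1e-12
```

The inequality in fact holds exactly: if either $\min\{a,2\lambda\}$ or $\min\{b,2\lambda\}$ equals $2\lambda$ the right-hand side is at least $2\lambda$, and otherwise it equals $a+b \geq d(x,z)$.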
\section{Proof of Theorem \ref{thm:bound}} \label{sec:theorem_bnd} \begin{proof} The proof is immediate from Formulation 1. Recall that Formulation 1 can be restructured as: $$
\mathrm{ROBOT}(\tilde \mu, \nu) = \inf_{P} \left\{\mathrm{OT}(P, \nu) + \lambda \|P - \tilde \mu\|_{ \mathrm{TV}}\right\} \,. $$ where the infimum is taken over all measures dominated by some common measure $\sigma$ (with respect to which $\mu, \mu_c, \nu$ are dominated). Hence, $$
\mathrm{ROBOT}(\tilde \mu, \nu) \le \mathrm{OT}(P, \nu) + \lambda \|P - \tilde \mu\|_{ \mathrm{TV}} $$ for any particular choice of $P$. Taking $P = \mu$ we get that \begin{align*}
\mathrm{ROBOT}(\tilde \mu, \nu) & \le \mathrm{OT}(\mu, \nu) + \lambda \|\mu - \tilde \mu\|_{ \mathrm{TV}} \\
& = \mathrm{OT}(\mu, \nu) + \lambda \eps \|\mu - \mu_c\|_{ \mathrm{TV}} \end{align*}
Taking $P = \nu$ we get $ \mathrm{ROBOT}(\tilde \mu, \nu) \le \lambda \|\nu - \tilde \mu\|_{ \mathrm{TV}}$ and finally taking $P = \tilde \mu$ we get $ \mathrm{ROBOT}(\tilde \mu, \nu) \le \mathrm{OT}(\tilde \mu, \nu)$. This completes the proof. \end{proof}
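Using the discrete equivalence above (ROBOT equals OT under the truncated cost $\min(C, 2\lambda)$), the three upper bounds can be checked numerically. The sketch below assumes a common support for all measures and uses the $\ell_1$ norm for the TV term, as in the discrete formulations; sizes, seed, $\lambda$, and $\eps$ are our illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def ot(C, p, q):
    """Exact-marginal discrete OT value via linear programming."""
    m, n = C.shape
    A = np.zeros((m + n, m * n))
    for i in range(m):
        A[i, i*n:(i+1)*n] = 1              # row marginals
    for j in range(n):
        A[m + j, j::n] = 1                 # column marginals
    return linprog(C.ravel(), A_eq=A, b_eq=np.concatenate([p, q])).fun

rng = np.random.default_rng(2)
k, lam, eps = 6, 0.2, 0.1
Z = rng.random((k, 2))
C = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)  # zero diagonal
mu = rng.random(k); mu /= mu.sum()
mu_c = rng.random(k); mu_c /= mu_c.sum()   # contamination
nu = rng.random(k); nu /= nu.sum()
mu_tilde = (1 - eps) * mu + eps * mu_c     # contaminated source

# ROBOT computed through the truncated-cost formulation
robot = ot(np.minimum(C, 2 * lam), mu_tilde, nu)

# the three bounds from the proof (P = mu, P = nu, P = mu_tilde)
assert robot <= ot(C, mu, nu) + lam * eps * np.abs(mu - mu_c).sum() + 1e-9
assert robot <= lam * np.abs(nu - mu_tilde).sum() + 1e-9
assert robot <= ot(C, mu_tilde, nu) + 1e-9
```

Each assertion instantiates one choice of $P$ in the restructured infimum, matching the three cases in the proof.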
\section{Proof of Lemma \ref{lem:entropy-f2-f1}} \label{sec:lemma_sinkhorn} As defined in the main text, let $\Pi^*_2$ be the optimal solution of~\eqref{eq:robot2-d} and $\Pi^*_{2, \alpha}$ be the optimal solution of \eqref{eq:robot2-d-entropy}. Then by Proposition 4.1 from~\citet{peyre2018Computational} we conclude: \begin{equation}
\label{eq:cuturi}
\Pi^*_{2, \alpha} \overset{\alpha \to 0}{\longrightarrow} \Pi^*_2 \,. \end{equation} Now we have defined $\left(\Pi^*_{1, \alpha}, \bs^*_{1, \alpha}\right)$ as the \emph{approximate} solution of \eqref{eq:robot1-d} obtained via Algorithm \ref{algo:f1-f2} from $\Pi^*_{2, \alpha}$. Note that we can think of Algorithm \ref{algo:f1-f2} as a map from $ \reals^{m \times n}$ to $\reals^{(m+n) \times (m + n)} \times \reals^{m}$. Define this map as $F$: $$ F \colon \Pi_2 \mapsto (\Pi_1, \bs_1) $$ Hence, by our notation, $\left(\Pi^*_{1, \alpha}, \bs^*_{1, \alpha}\right) = F(\Pi^*_{2, \alpha})$ and $\left(\Pi^*_{1}, \bs^*_{1}\right) = F(\Pi^*_2)$. Now if we show that $F$ is a continuous map, then by the continuous mapping theorem, it is also immediate from \eqref{eq:cuturi} that: \begin{align*}
F(\Pi^*_{2, \alpha}) \overset{\alpha \to 0}{\longrightarrow} F(\Pi^*_2) \,. \end{align*} which implies: \begin{align*}
\Pi^*_{1, \alpha} & \overset{\alpha \to 0}{\longrightarrow} \Pi^*_1 \\
\bs^*_{1, \alpha} & \overset{\alpha \to 0}{\longrightarrow} \bs^*_1 \,. \end{align*} which will complete the proof. Therefore all we need to show is that $F$ is a continuous map. Towards that direction, first fix a sequence of matrices $\{\bar \Pi_{2, i}\}_{i \in \bbN} \to \bar \Pi_2$. Define $F(\bar \Pi_{2, i}) = \left(\bar \Pi_{1, i}, \bar \bs_{1, i}\right)$ and $F(\bar \Pi_{2}) = \left(\bar \Pi_{1}, \bar \bs_{1}\right)$. By Steps 3--5 of Algorithm \ref{algo:f1-f2}, we obtain $\bar \Pi_{1, i}$ by first setting $\bar \Pi_{1, i, 12} = \bar \Pi_{2, i}$ and, for each of the columns of $\bar \Pi_{1, i, 12}$, dumping the sum of its entries for which the cost is $> 2 \lambda$ onto the diagonal of $\bar \Pi_{1, i, 22}$. Also, all entries of the first $n$ columns of $\bar \Pi_{1, i}$ are $0$. In Step 6 of Algorithm \ref{algo:f1-f2}, we obtain $\bs_{1, i}$ by taking the negative of the sum of the elements of each row of $\bar \Pi_{1, i, 12}$ for which the cost is $> 2\lambda$. Note that these operations (Steps 3--6 of Algorithm \ref{algo:f1-f2}) are continuous. Therefore we conclude: \begin{enumerate}
\item $0 = \bar \Pi_{1, i, 11} \to \bar \Pi_{1, 11} = 0 \,.$
\item $0 = \bar \Pi_{1, i, 21} \to \bar \Pi_{1, 21} = 0 \,.$
\item $\bar \Pi_{1, i, 12} = \bar \Pi_{2, i} \odot \mathds{1}_{\cI^c} \to \bar \Pi_{2} \odot \mathds{1}_{\cI^c} = \bar \Pi_{1, 12} \,.$
\item \begin{align*}
\bar \Pi_{1, i, 22} & = \diag\left(\mathbf{1}^{\top}\left(\bar \Pi_{2, i} \odot \mathds{1}_{\cI}\right)\right) \\
& \to \diag\left(\mathbf{1}^{\top}\left(\bar \Pi_{2} \odot \mathds{1}_{\cI}\right)\right) \\
& = \bar \Pi_{1, 22} \,.
\end{align*}
\item \begin{align*} \bs_{1, i} & = -\left(\bar \Pi_{2, i} \odot \mathds{1}_{\cI}\right)\mathbf{1} \\ & \to -\left(\bar \Pi_2 \odot \mathds{1}_{\cI}\right)\mathbf{1} = \bs_1 \,. \end{align*} \end{enumerate} where $A \odot B$ denotes the Hadamard product (element-wise multiplication) between two matrices. Hence we have established: \begin{align*} F(\bar \Pi_{2, i}) & = \left(\bar \Pi_{1, i}, \bar \bs_{1, i}\right) \\ & \overset{i \to \infty}{\longrightarrow} \left(\bar \Pi_{1}, \bar \bs_{1}\right) \\ & = F(\bar \Pi_2) \,. \end{align*} This completes the proof of continuity of $F$.
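The map $F$ described above can be written out explicitly. The sketch below (function names are ours, not from the paper's code) implements the block formulas $\bar\Pi_{1,12} = \bar\Pi_2 \odot \mathds{1}_{\cI^c}$, $\bar\Pi_{1,22} = \diag(\mathbf{1}^\top(\bar\Pi_2 \odot \mathds{1}_{\cI}))$, $\bs_1 = -(\bar\Pi_2 \odot \mathds{1}_{\cI})\mathbf{1}$, and checks the marginal bookkeeping on a toy coupling.

```python
import numpy as np

def f_map(Pi2, C, lam):
    """Steps 3-6 of the algorithm: map a plan Pi2 to (Pi1, s1).

    I = {(i, j): C_ij > 2*lam}; mass of Pi2 on I is dumped onto the
    diagonal of the lower-right block, and s1 collects its row sums
    with a negative sign.
    """
    m, n = Pi2.shape
    on_I = C > 2 * lam
    Pi12 = np.where(on_I, 0.0, Pi2)              # Pi2 on I^c
    Pi22 = np.diag((Pi2 * on_I).sum(axis=0))     # diag of column sums over I
    s1 = -(Pi2 * on_I).sum(axis=1)               # negative row sums over I
    Pi1 = np.block([[np.zeros((m, m)), Pi12],
                    [np.zeros((n, m)), Pi22]])
    return Pi1, s1

rng = np.random.default_rng(3)
m, n, lam = 3, 4, 0.2
p = rng.random(m); p /= p.sum()
q = rng.random(n); q /= q.sum()
Pi2 = np.outer(p, q)                             # a valid coupling of (p, q)
C = rng.random((m, n))
Pi1, s1 = f_map(Pi2, C, lam)

# column sums of Pi1 reproduce (0, q); row sums give (p + s1, t)
assert np.allclose(Pi1.sum(axis=0)[:m], 0)
assert np.allclose(Pi1.sum(axis=0)[m:], q)
assert np.allclose(Pi1.sum(axis=1)[:m], p + s1)
```

Every operation is built from masking, sums, and block assembly, which makes the continuity claimed in the proof evident entry by entry.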
\section{Proof of auxiliary lemmas}
\subsection{Proof of Lemma \ref{lem:R1-structure}}
\begin{proof} The fact that $\Pi^*_{1, 11} = \Pi^*_{1, 21} = \mathbf{0}$ follows from the fact that $\Pi^*_1 \succeq 0$ and $\Pi^*_1\mathbf{1} = \bQ$. To prove that $\Pi^*_{1, 22}$ is diagonal, we use the fact that any diagonal entry of the cost matrix is $0$. Now suppose $\Pi^*_{1, 22}$ is not diagonal. Then define a matrix $\hat \Pi$ as follows: set $\hat \Pi_{11} = \hat \Pi_{21} = \mathbf{0}$, $\hat \Pi_{12} = \Pi^*_{1, 12}$ and: \[ \hat \Pi_{22}(i, j) = \begin{cases} \sum_{k=1}^m \Pi^*_{1, 22}(k, i), & \text{if } j = i \\ 0, & \text{if } j \neq i \end{cases} \] Also define $\hat s = s^*_1$ and $\hat t$ as $\hat t(i) = \hat \Pi_{22}(i, i)$. Then clearly $(\hat \Pi, \hat s, \hat t)$ is a feasible solution of Formulation 1. Note that: $$
\|\hat t\|_1 = 1^{\top}\hat \Pi_{22} 1 = 1^{\top}\Pi^*_{1, 22} 1 = \|t^*_1\|_1 $$ and by our construction $\langle C_{aug}, \hat \Pi \rangle < \langle C_{aug}, \Pi^*_1 \rangle$. Hence $(\hat \Pi, \hat s, \hat t)$ reduces the value of the objective function of Formulation 1 which is a contradiction. This completes the proof. \end{proof}
\subsection{Proof of Lemma \ref{lem:f2_characterization}} \begin{proof} \begin{enumerate}
\item Suppose $\Pi^*_1(i, j) > 0$. Then move this mass to $s^*_1(j)$ and set $\Pi^*_1(i, j)$ to $0$. In this way $\langle C_{aug}, \Pi^*_1 \rangle$ decreases by more than $2 \lambda \Pi^*_1(i, j)$, while the regularizer value increases by at most $2 \lambda \Pi^*_1(i, j)$, resulting in an overall reduction of the objective value, which leads to a contradiction.
\item Suppose each entry of the $i^{th}$ row of $C$ is $< 2 \lambda$. Then if $s^*_1(i) > 0$, we can distribute this mass along the $i^{th}$ row: write $s^*_1(i) = a_1 + a_2 + \dots + a_m$ with $t^*_1(j) \ge a_j$, add $a_j$ to $\Pi^*_1(i, j)$, and reduce $t^*_1$ as: $$ t^*_1(j) \leftarrow t^*_1(j) - a_j $$ Hence the value $\langle C_{aug}, \Pi^*_1 \rangle$ increases by less than $2\lambda s^*_1(i)$, but the value of the regularizer decreases by $2 \lambda s^*_1(i)$, resulting in an overall decrease of the value of the objective function. \item Same as the proof of part (2), interchanging rows and columns in the argument. \item Suppose not. Then choose $\eps < s^*_1(i) \wedge t^*_1(j)$ and add $\eps$ to $\Pi^*_1(i, j)$. Hence the cost function value $\langle C_{aug}, \Pi^*_1 \rangle$ increases by $ < 2\lambda \eps$ but the regularizer value decreases by $2 \lambda \eps$, resulting in an overall decrease of the objective function. \end{enumerate} \end{proof}
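The bookkeeping behind part (1) is a one-line computation, which can be sanity-checked numerically; the sketch below uses hypothetical values for $\lambda$, the cost entry and the transported mass (none of them from the paper):

```python
# Sanity check of the mass-moving step in part (1): for a cost entry
# C_ij > 2*lam, removing mass pi_ij from the plan and absorbing it in the
# slack changes the objective by (2*lam - C_ij) * pi_ij < 0.
# All three values below are hypothetical.
lam, C_ij, pi_ij = 1.0, 5.0, 0.3   # threshold 2*lam = 2 < C_ij

delta_cost = -C_ij * pi_ij         # transport term decreases by C_ij * pi_ij
delta_reg = 2 * lam * pi_ij        # regularizer grows by at most 2*lam*pi_ij
delta = delta_cost + delta_reg
print(delta)                       # (2*lam - C_ij) * pi_ij = -0.9 < 0
```

The net change is negative exactly because the cost entry exceeds the threshold $2\lambda$.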
\subsection{Proof of Lemma \ref{lem:negativity}} \begin{proof} For notational simplicity we drop the subscript $4$, as we will only deal with the solution of Formulation 4 and there is no ambiguity. We prove the Lemma by contradiction. Suppose $s^*_{1, i} > 0$. Then we show that one can construct another solution $(\tilde \Pi, \tilde s_1, \tilde s_2)$ of Formulation 4 with a lower objective value. To construct this new solution, set: \[ \tilde s_{1, j} = \begin{cases} s^*_{1, j}, & \text{if } j \neq i \\ 0, & \text{if } j = i \end{cases} \] To change the transport plan, we only modify the $i^{th}$ row of $\Pi^*$: we subtract $a_1, a_2, \dots, a_n \ge 0$ from the entries of the $i^{th}$ row of $\Pi^*$ in such a way that none of them becomes negative. Hence the column sums change, i.e.\ the value of $\tilde s_2$ becomes: \[ \tilde s_{2, j} = s^*_{2, j} - a_j \ \ \ \forall 1 \le j \le n \,. \] Now clearly from our construction: $$ \langle C, \tilde \Pi \rangle \le \langle C, \Pi^* \rangle $$
For the regularization part, note that, as we only reduced the $i^{th}$ element of $s^*_1$, we have $\|\tilde s_1\|_1 = \|s^*_1\|_1 - s^*_{1, i}$. By the triangle inequality,
$$\|\tilde s_2 \|_1 \le \|s^*_2 \|_1 + \sum_j a_j = \|s^*_2 \|_1 + s^*_{1, i} $$ by the construction of the $a_j$'s, as $a_j \ge 0$ and $\sum_j a_j = s^*_{1, i}$. Hence we have: $$
\|\tilde s_1\|_1 + \|\tilde s_2\|_1 \le \|s^*_1\|_1 - s^*_{1, i} + \|s^*_2\|_1 + s^*_{1, i} = \|s^*_1\|_1 + \|s^*_2\|_1 \,. $$ Hence the value corresponding to the regularizer does not increase either. This completes the proof. \end{proof}
\subsection{Proof of Lemma \ref{lem:struct_f3}} \begin{proof} We prove this lemma by contradiction. Suppose $\Pi^*_{3}$ does not have the structure mentioned in the statement of the Lemma. Construct another transport plan $\tilde \Pi_{3}$ for Formulation 3 as follows: keep $\tilde \Pi_{3, 12} = \Pi^*_{3, 12}$ and set $\tilde \Pi_{3, 21} = \mathbf{0}$. Construct the other parts as: \begin{align*} & \hspace{-1em}\tilde \Pi_{3, 11}(i,j) = \\ & \begin{cases} \sum_{k=1}^{m} \Pi^*_{3, 11}(i, k) + \sum_{k=1}^{n} \Pi^*_{3, 21}(k, i), & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases} \end{align*} and \[ \tilde \Pi_{3, 22}(i,j) = \begin{cases} \sum_{k=1}^{n} \Pi^*_{3, 22}(k, i), & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases} \] It is immediate from the construction that: $$ \langle C_{aug}, \tilde \Pi_{3} \rangle \le \langle C_{aug}, \Pi^*_{3} \rangle $$ As for the regularization term: note that by our construction $\tilde s_4$ is the same as $s_4^*$, since the column sums of $\tilde \Pi_{3, 22}$ coincide with those of $\Pi^*_{3, 22}$. For the other three: $$ \tilde s_3(i) = \tilde \Pi_{3, 11}(i, i) = \sum_{k=1}^{m} \Pi^*_{3, 11}(i, k) + \sum_{k=1}^{n} \Pi^*_{3, 21}(k, i) $$ $$ \tilde s_2(i) = \tilde \Pi_{3, 22}(i,i) = \sum_{k=1}^{n} \Pi^*_{3, 22}(k, i) $$ and hence by construction: $$
\|\tilde s_2 \|_1 = \mathbf{1}^{\top} \Pi^*_{3, 22}\mathbf{1} = \|s^*_2\|_1 - \mathbf{1}^{\top} \Pi^*_{3, 21}\mathbf{1}\,. $$ $$
\|\tilde s_3 \|_1 = \mathbf{1}^{\top} \Pi^*_{3, 11}\mathbf{1} + \mathbf{1}^{\top} \Pi^*_{3, 21}\mathbf{1} = \|s^*_3\|_1 $$
And also by our construction, $\tilde s_1 = s^*_1 + c$ where $c = (\Pi^*_{3, 21})^{\top}\mathbf{1}$. As a consequence we have $\|c\|_1 = \mathbf{1}^{\top} \Pi^*_{3, 21}\mathbf{1}$. Then it follows: \begin{align*}
\sum_{i=1}^4 \|\tilde s_i\|_1 & = \|s^*_1 + c\|_1 + \|s^*_2\|_1 - \mathbf{1}^{\top} \Pi^*_{3, 21}\mathbf{1} + \|s^*_3\|_1 + \|s^*_4\|_1 \\
& \le \sum_{i=1}^4 \|s^*_i\|_1 + \|c\|_1 - \mathbf{1}^{\top} \Pi^*_{3, 21}\mathbf{1} \\
& = \sum_{i=1}^4 \|s^*_i\|_1 \end{align*} So the objective value is reduced overall, contradicting the optimality of $\Pi^*_3$. This completes the proof. \end{proof}
\section{Change of support of outliers with respect to $\lambda$} For any $\lambda$, define the set $\cI_{\lambda} = \{(i, j): C_{i, j} > 2\lambda\}$, i.e.\ $\cI_\lambda$ collects the entries whose cost exceeds the threshold $2\lambda$. As before, we define $C_\lambda$ to be the truncated cost $C \wedge 2\lambda$. Denote by $\pi_\lambda$ the optimal transport plan with respect to $C_\lambda$ and the marginal measures $\mu, \nu$. Borrowing our notation from the previous theorems, we define a ``slack vector'' $\bs_\lambda$ as: $$ \bs_\lambda(i)= \sum_{j=1}^n \pi_{\lambda}(i, j) \mathds{1}_{C(i, j) > 2\lambda} = \sum_{j: (i, j) \in \cI_\lambda}\pi_\lambda(i, j) \,. $$ We call an observation $i_0$ an outlier if $\bs_\lambda(i_0) > 0$. It is immediate that for any $\lambda_1 < \lambda_2$, $\cI_{\lambda_1} \supseteq \cI_{\lambda_2}$. Our goal is to establish the following theorem:
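On a toy instance these objects are straightforward to compute. The sketch below (our own illustration, not code from the paper; the helper `slack_vector` and the $3\times 3$ example are assumptions) solves the transport problem for the truncated cost $C \wedge 2\lambda$ as a small linear program and reads off $\bs_\lambda$:

```python
import numpy as np
from scipy.optimize import linprog

def slack_vector(C, mu, nu, lam):
    """Optimal plan for the truncated cost C ∧ 2*lam, then
    s_lam(i) = plan mass sent across entries with C(i, j) > 2*lam."""
    m, n = C.shape
    C_lam = np.minimum(C, 2 * lam)
    # equality constraints: row sums equal mu, column sums equal nu
    A_eq = np.vstack([np.kron(np.eye(m), np.ones(n)),
                      np.kron(np.ones(m), np.eye(n))])
    b_eq = np.concatenate([mu, nu])
    res = linprog(C_lam.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    pi = res.x.reshape(m, n)
    return (pi * (C > 2 * lam)).sum(axis=1)

C = np.array([[0., 1., 9.], [1., 0., 1.], [9., 1., 0.]])
mu = nu = np.ones(3) / 3
print(slack_vector(C, mu, nu, lam=0.5))   # diagonal plan is optimal: no slack
```

Here the truncated-optimal plan concentrates on the zero-cost diagonal, so no mass crosses entries above the threshold and $\bs_\lambda$ vanishes; marginals that force mass through the large entries make $\bs_\lambda$ positive.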
\begin{theorem} For any $\lambda_1 < \lambda_2$, if $\bs_{\lambda_2}(i_0) > 0$, then $\bs_{\lambda_1}(i_0) > 0$, i.e. if a point is selected as outlier for larger $\lambda$, then it is also selected as outlier for smaller $\lambda$. \end{theorem}
\begin{proof} Fix $\lambda_1 < \lambda_2$. Note that for any $\pi \in \Pi(\mu, \nu)$ we have: \begin{align*}
\langle C_{\lambda_2} - C_{\lambda_1}, \pi \rangle & = \sum_{(i, j) \in \cI_{\lambda_1} \cap \cI_{\lambda_2}^c} (C(i, j) - 2\lambda_1) \pi(i, j) \\
& \qquad \qquad + 2(\lambda_2 - \lambda_1)\sum_{(i, j) \in \cI_{\lambda_2}} \pi(i, j) \\
& := T_1(\pi) + T_2(\pi) \,. \end{align*} Now as $\pi_{\lambda_2}$ is optimal with respect to $C_{\lambda_2}$ and $\pi_{\lambda_1}$ is optimal with respect to $C_{\lambda_1}$ we have: \begin{align*}
& \langle C_{\lambda_1}, \pi_{\lambda_2} \rangle + T_1(\pi_{\lambda_2}) + T_2(\pi_{\lambda_2}) \\
& = \langle C_{\lambda_2} , \pi_{\lambda_2} \rangle \\
& \le \langle C_{\lambda_2} , \pi_{\lambda_1} \rangle \\
& = \langle C_{\lambda_1}, \pi_{\lambda_1} \rangle + T_1(\pi_{\lambda_1}) + T_2(\pi_{\lambda_1}) \\
& \le \langle C_{\lambda_1}, \pi_{\lambda_2} \rangle + T_1(\pi_{\lambda_1}) + T_2(\pi_{\lambda_1}) \end{align*} Therefore we have: \begin{equation}
\label{eq:ineq_1_bound}
T_1(\pi_{\lambda_2}) + T_2(\pi_{\lambda_2}) \le T_1(\pi_{\lambda_1}) + T_2(\pi_{\lambda_1}) \,. \end{equation} This inequality alone is, however, not sufficient. Note that we can further decompose $T_1$ (and similarly $T_2$) row-wise as: \begin{align*} T_{1, i}(\pi) & = \sum_{j : (i, j) \in \cI_{\lambda_1} \cap \cI_{\lambda_2}^c} \left(C_{i, j} - 2\lambda_1\right) \pi(i, j) \\ T_{2, i}(\pi) &= 2(\lambda_2 - \lambda_1)\sum_{j : (i, j) \in \cI_{\lambda_2}} \pi(i, j) \,. \end{align*} Hence we have: $$ T_1(\pi) = \sum_{i=1}^n T_{1, i}(\pi), \ \ \ T_2(\pi)= \sum_{i=1}^n T_{2, i}(\pi) \,. $$ In \eqref{eq:ineq_1_bound} we have established that $T_1(\pi_{\lambda_2}) + T_2(\pi_{\lambda_2}) \le T_1(\pi_{\lambda_1}) + T_2(\pi_{\lambda_1})$. If, in addition, we can show that: \begin{equation}
\label{eq:ineq_2_bound}
T_{1, i_0}(\pi_{\lambda_1}) + T_{2, i_0}(\pi_{\lambda_1}) = 0 \implies T_{2, i_0}(\pi_{\lambda_2}) = 0 \,. \end{equation} holds for every index $1 \le i_0 \le n$, then we are done. Indeed, suppose $\bs_{\lambda_1}(i_0) = 0$. Then $$ T_{1, i_0}(\pi_{\lambda_1}) + T_{2, i_0}(\pi_{\lambda_1}) = 0 \,. $$ By \eqref{eq:ineq_2_bound} this in turn implies $$ T_{2,i_0}(\pi_{\lambda_2}) = 0 \,, $$ i.e. $\bs_{\lambda_2}(i_0) = 0$.
To prove this, we will use the following lemma. \begin{lemma} \label{lem:cost_2dim} Suppose $C$ is a $2 \times 2$ cost matrix with pairwise distinct entries: $$ C = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} $$ If $C_{22} > 2\lambda_2$ and $C_{21} < 2 \lambda_1$, then the following two inequalities cannot hold simultaneously: \begin{align*}
C^{\lambda_2}_{11} + C^{\lambda_2}_{22} & \le C^{\lambda_2}_{12} + C^{\lambda_2}_{21} \,,\\
C^{\lambda_1}_{12} + C^{\lambda_1}_{21} & \le C^{\lambda_1}_{11} + C^{\lambda_1}_{22} \,. \end{align*} \end{lemma} Now suppose $i_2$ is an outlier with respect to $\lambda_2$ but not with respect to $\lambda_1$. Then there exist $j_1 \neq j_2$ with $$ C_{i_2, j_1} < 2\lambda_1, \quad C_{i_2, j_2} > 2\lambda_2 $$ such that $\pi^{\lambda_2}_{i_2, j_2} > 0$ and $\pi^{\lambda_1}_{i_2, j_1} > 0$.
\paragraph{Case 1: }Now assume that we can find $i_1 \neq i_2$ such that $\pi^{\lambda_2}_{i_1, j_1} > 0$ and $\pi^{\lambda_1}_{i_1, j_2} > 0$.
Then $(i_1, j_2), (i_2, j_1) \in \supp(\pi^{\lambda_1})$ and $(i_1, j_1), (i_2, j_2) \in \supp(\pi^{\lambda_2})$. Hence from c-cyclical monotonicity properties of the support of the optimal transport plan we have for $\pi^{\lambda_1}$: $$
C^{\lambda_1}_{i_1, j_2} + C^{\lambda_1}_{i_2, j_1} \le C^{\lambda_1}_{i_1, j_1} + C^{\lambda_1}_{i_2, j_2} \,, $$ and for $\pi^{\lambda_2}$: $$ C^{\lambda_2}_{i_1, j_1} + C^{\lambda_2}_{i_2, j_2} \le C^{\lambda_2}_{i_1, j_2} + C^{\lambda_2}_{i_2, j_1} \,, $$ which contradicts Lemma \ref{lem:cost_2dim}. This settles Case 1.
\paragraph{Case 2 (still to be proved):} It remains to consider the other case: there does not exist any row $i_1 \neq i_2$ such that $\pi^{\lambda_2}_{i_1, j_1} > 0$ and $\pi^{\lambda_1}_{i_1, j_2} > 0$ hold simultaneously. This means that the columns $j_1$ and $j_2$ are orthogonal, i.e.\ $\langle \pi^{\lambda_2}_{:, j_1}, \pi^{\lambda_1}_{:, j_2}\rangle = 0$. \end{proof}
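The two-point c-cyclical monotonicity property invoked in Case 1 is easy to verify numerically for the support of an optimal plan; a brute-force sketch on a random instance (our own illustration, assuming scipy is available):

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m = n = 4
C = rng.random((m, n))
mu = np.ones(m) / m
nu = np.ones(n) / n

# solve the discrete optimal transport problem as a linear program
A_eq = np.vstack([np.kron(np.eye(m), np.ones(n)),
                  np.kron(np.ones(m), np.eye(n))])
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
              bounds=(0, None))
pi = res.x.reshape(m, n)

supp = [(i, j) for i, j in product(range(m), range(n)) if pi[i, j] > 1e-9]
# swapping the columns of any two support entries must not lower the cost
for (i1, j1), (i2, j2) in product(supp, supp):
    assert C[i1, j1] + C[i2, j2] <= C[i1, j2] + C[i2, j1] + 1e-7
print("c-cyclical monotonicity holds on all", len(supp) ** 2, "pairs")
```

If the inequality failed for some pair, moving a small amount of mass between the two entries would strictly lower the transport cost, contradicting optimality.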
\subsection{Proof of Lemma \ref{lem:cost_2dim}} As $C_{21} < 2\lambda_1$ and $C_{22} > 2\lambda_2$, we can rewrite the inequalities in Lemma \ref{lem:cost_2dim} as: \begin{align}
\label{eq:cost_ineq_1} C^{\lambda_2}_{11} + 2\lambda_2 & \le C^{\lambda_2}_{12} + C_{21} \,,\\
\label{eq:cost_ineq_2} C^{\lambda_1}_{12} + C_{21} & \le C^{\lambda_1}_{11} + 2\lambda_1 \,. \end{align} Now as we have assumed $C_{21} < 2\lambda_1$, from \eqref{eq:cost_ineq_1} we obtain: \begin{align}
& 2\lambda_1 > C^{\lambda_2}_{11} - C^{\lambda_2}_{12} + 2\lambda_2 \notag \\
\iff & 2(\lambda_2 - \lambda_1) < C^{\lambda_2}_{12} - C^{\lambda_2}_{11} \,. \end{align} Hence $C^{\lambda_2}_{12} - C^{\lambda_2}_{11} > 0$, which implies $C_{11} < C_{12}$ and $C_{11} < 2\lambda_2$, and also that $C_{11}$ and $C_{12}$ cannot both lie within $(2\lambda_1, 2\lambda_2)$. We divide the rest of the proof into four cases: \paragraph{Case 1:} Assume $2\lambda_1 < C_{11} < 2\lambda_2, C_{12} > 2\lambda_2$. In this case from \eqref{eq:cost_ineq_1} we have: \begin{align*}
C_{11} + 2\lambda_2 & \le 2\lambda_2 + C_{21} \end{align*} i.e.\ $C_{21} \ge C_{11}$, which is not possible as $C_{11} > 2\lambda_1$ and $C_{21} < 2\lambda_1$.
\paragraph{Case 2:} Assume $C_{11} < 2\lambda_1, C_{12} > 2\lambda_2$. Then from \eqref{eq:cost_ineq_2} we have $C_{21} \le C_{11}$ and from \eqref{eq:cost_ineq_1} we have: $C_{11} \le C_{21}$ which cannot occur simultaneously.
\paragraph{Case 3:} Assume $C_{11} < 2\lambda_1$ and $2\lambda_1 < C_{12} < 2\lambda_2$. Then from \eqref{eq:cost_ineq_1} and \eqref{eq:cost_ineq_2} we have respectively: \begin{align*}
C_{11} + 2\lambda_2 & \le C_{12} + C_{21} \,,\\
2\lambda_1 + C_{21} & \le C_{11} + 2\lambda_1 \end{align*} From the second inequality we have $C_{21} \le C_{11}$, which, substituted into the first inequality, yields: $$ C_{11} + 2\lambda_2 \le C_{12} + C_{11} \implies C_{12} \ge 2\lambda_2 \,, $$ which is a contradiction.
\paragraph{Case 4: } Assume $C_{11} < 2\lambda_1$ and $C_{12} < 2\lambda_1$. Then \eqref{eq:cost_ineq_1} yields: \begin{align}
C_{11} + 2\lambda_2 & \le C_{12} + C_{21} \notag \\
\label{eq:cost_ineq_3} \implies C_{21} & \ge C_{11} - C_{12} + 2\lambda_2 \,. \end{align} Also from \eqref{eq:cost_ineq_2} we have: \begin{align}
C_{12} + C_{21} & \le C_{11} + 2\lambda_1 \notag \\
\label{eq:cost_ineq_4} \implies C_{21} & \le C_{11} - C_{12} + 2\lambda_1 \,. \end{align} From \eqref{eq:cost_ineq_3} and \eqref{eq:cost_ineq_4} we have: $$ C_{11} - C_{12} + 2\lambda_2 \le C_{11} - C_{12} + 2\lambda_1 \,, $$ i.e.\ $\lambda_2 \le \lambda_1$, which is a contradiction. This completes the proof.
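The case analysis above can also be confirmed by brute force: sampling random $2\times 2$ cost matrices satisfying the hypotheses of the lemma and checking that the two inequalities never hold together (our own sketch, independent of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
trunc = lambda c, lam: min(c, 2 * lam)   # truncation c ∧ 2*lambda

ok = True
for _ in range(10000):
    l1, l2 = np.sort(rng.uniform(0.1, 2.0, size=2))   # lambda_1 < lambda_2
    C11, C12 = rng.uniform(0.0, 6.0, size=2)
    C21 = rng.uniform(0.0, 2 * l1)                    # C_21 < 2*lambda_1
    C22 = rng.uniform(2 * l2, 8.0)                    # C_22 > 2*lambda_2
    ineq1 = trunc(C11, l2) + trunc(C22, l2) <= trunc(C12, l2) + trunc(C21, l2)
    ineq2 = trunc(C12, l1) + trunc(C21, l1) <= trunc(C11, l1) + trunc(C22, l1)
    if ineq1 and ineq2:
        ok = False
print("no counterexample found" if ok else "counterexample found!")
```

With continuous random draws the degenerate ties excluded by the "pairwise distinct entries" assumption occur with probability zero.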
\begin{table*}[ht!]
\caption{Robust mean estimation with GANs using different distribution divergences. True mean is $\eta_0 = \mathbf{0}_5$; sample size $n=1000$; contamination proportion $\eps=0.2$. We report results over 30 experiment restarts.}
\label{table:robogan_cauchy} \begin{center} \begin{tabular}{lccc}
\toprule
Contamination & JS Loss & SH Loss & ROBOT\\ \midrule
$\text{Cauchy}(0.1 \cdot \mathbf{1_5}, I_5)$ & 0.2 $\pm$ 0.06 & \textbf{0.17} $\pm$ 0.04 & \textbf{0.17} $\pm$ 0.05 \\
$\text{Cauchy}(0.5 \cdot \mathbf{1_5}, I_5)$ & 0.3 $\pm$ 0.07 & 0.26 $\pm$ 0.05 & \textbf{0.25} $\pm$ 0.05 \\
$\text{Cauchy}(1 \cdot \mathbf{1_5}, I_5)$ & 0.45 $\pm$ 0.14 & 0.37 $\pm$ 0.06 & \textbf{0.36} $\pm$ 0.07 \\
$\text{Cauchy}(2 \cdot \mathbf{1_5}, I_5)$ & 0.39 $\pm$ 0.3 & 0.26 $\pm$ 0.06 & \textbf{0.2} $\pm$ 0.07 \\ \bottomrule \end{tabular} \end{center}
\end{table*}
\section{Robust mean experiment with Cauchy distribution} \label{sec:robot_cauchy} In this section we present our results for robust mean estimation with the generative distribution $g_{\theta}(x) = x + \theta$ where $x \sim \text{Cauchy}(0, 1)$. As in Subsection \ref{sec:robust_mean_est}, we assume that we have observations $\{x_1, \dots, x_n\}$ from a contaminated distribution $(1 - \eps) \ \text{Cauchy}(\eta_0, 1) + \eps \ \text{Cauchy}(\eta_1, 1)$. For our experiments we take $\eta_0 = \mathbf{0}_5$ and vary $\eta_1 \in \left\{0.1 \cdot \mathbf{1_5}, 0.5 \cdot \mathbf{1_5}, 1 \cdot \mathbf{1_5}, 2 \cdot \mathbf{1_5} \right\}$ along with $\eps = 0.2$. We compare our method with \citet{wu2020minimax} and the results are presented in Table \ref{table:robogan_cauchy}.
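The contamination model is simple to simulate; a sketch of the sampler (our own illustration, with $\eta_1 = 0.5 \cdot \mathbf{1}_5$ shown; the GAN training loop itself is omitted):

```python
import numpy as np

# Sample from (1 - eps) * Cauchy(eta0, I_5) + eps * Cauchy(eta1, I_5)
# with the parameter values used in the text.
rng = np.random.default_rng(42)
n, d, eps = 1000, 5, 0.2
eta0 = np.zeros(d)
eta1 = 0.5 * np.ones(d)

contaminated = rng.random(n) < eps        # which rows come from Cauchy(eta1, I)
x = eta0 + rng.standard_cauchy((n, d))    # inlier draws, shifted by eta0
x[contaminated] += eta1 - eta0            # shift the contaminated rows to eta1
print(x.shape, contaminated.mean())
```

Note that the Cauchy distribution has no finite mean, which is what makes this a stress test for moment-based estimators.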
\end{document} |
\begin{document}
\title{Existence of nonparametric solutions for a capillary problem in warped products} \author{Jorge H. Lira, Gabriela A. Wanderley} \maketitle
\begin{abstract} We prove that there exist solutions for a non-parametric capillary problem in a wide class of Riemannian manifolds endowed with a Killing vector field. In other terms, we prove the existence of Killing graphs with prescribed mean curvature and prescribed contact angle along its boundary. These results may be useful for modelling stationary hypersurfaces under the influence of a non-homogeneous gravitational field defined over an arbitrary Riemannian manifold.
\noindent {\bf MSC:} 53C42, 53C21.
\noindent {\bf Keywords:} capillary, mean curvature, Killing graphs. \end{abstract}
\section{Introduction}
Let $M$ be an $(n+1)$-dimensional Riemannian manifold endowed with a Killing vector field $Y$. Suppose that the distribution orthogonal to $Y$ is of constant rank and integrable. Given an integral leaf $P$ of that distribution, let $\Omega\subset P$ be a bounded domain with regular boundary $\Gamma =\partial\Omega$. We suppose for simplicity that $Y$ is complete. In this case, let $\vartheta: \mathbb{R}\times \bar\Omega \to M$ be the flow generated by $Y$ with initial values in $\bar\Omega$. In geometric terms, the ambient manifold is a warped product $M = P\times_{1/\sqrt{\gamma}} \mathbb{R}$ where $\gamma = 1/|Y|^2$.
The Killing graph of a differentiable function $u:\bar\Omega\to \mathbb{R}$ is the hypersurface $\Sigma \subset M$ parametrized by the map \[ X(x)=\vartheta(u(x),x), \quad x\in\bar\Omega. \] The Killing cylinder $K$ over $\Gamma$ is in turn defined by \begin{equation} K=\{\vartheta(s,x): s\in \mathbb{R}, \, x \in \Gamma\}. \end{equation} The height function with respect to the leaf $P$ is measured by the arc length parameter $\varsigma$ of the flow lines of $Y$, that is, \[ \varsigma=\frac{1}{\sqrt\gamma}s. \] With these notations fixed, we are able to formulate a capillary problem in this geometric context which models stationary graphs under a gravity force whose intensity depends on the point in space. More precisely, given a \emph{gravitational potential} $\Psi \in C^{1,\alpha}(\bar\Omega \times \mathbb{R})$ we define the functional \begin{equation} \mathcal{A}[u] = \int_\Sigma \bigg(1+\int_0^{u/\sqrt\gamma}\Psi(x, s(\varsigma)) \,\textrm{d}\varsigma\bigg)\textrm{d}\Sigma. \end{equation} The volume element $\textrm{d}\Sigma$ of $\Sigma$ is given by \[
\frac{1}{\sqrt\gamma}\sqrt{\gamma+|\nabla u|^2}\,\textrm{d}\sigma, \] where $\textrm{d}\sigma$ is the volume element in $P$.
The first variation formula of this functional may be deduced as follows. Given an arbitrary function $v\in C^\infty_c(\Omega)$ we compute \begin{eqnarray*}
& & \frac{d}{d\tau}\Big|_{\tau=0}\mathcal{A}[u+\tau v] =\int_\Omega \bigg(\frac{1}{\sqrt\gamma}\frac{\langle \nabla u, \nabla v\rangle}{\sqrt{\gamma+|\nabla u|^2}} + \frac{1}{\sqrt\gamma}\Psi (x, u(x)) v\bigg) \sqrt{\sigma}\textrm{d}x\\ & & \,\, = \int_\Omega \bigg(\textrm{div}\Big(\frac{1}{\sqrt\gamma}\frac{\nabla u}{W}v\Big) - \textrm{div}\Big(\frac{1}{\sqrt\gamma}\frac{\nabla u}{W}\Big) v + \frac{1}{\sqrt\gamma}\Psi (x, u(x)) v\bigg) \sqrt{\sigma}\textrm{d}x \\ & & \,\, = -\int_\Omega \bigg(\frac{1}{\sqrt\gamma}\textrm{div}\Big(\frac{\nabla u}{W}\Big) - \frac{1}{\sqrt\gamma}\langle \frac{\nabla \gamma}{2\gamma}, \frac{\nabla u}{W}\rangle -\frac{1}{\sqrt\gamma}\Psi (x, u(x)) \bigg) v \sqrt{\sigma}\textrm{d}x, \end{eqnarray*} where $\sqrt\sigma \textrm{d}x$ is the volume element $\textrm{d}\sigma$ expressed in terms of local coordinates in $P$ and $W = \sqrt{\gamma+|\nabla u|^2}$; the integral of the divergence term vanishes since $v$ has compact support. The differential operators $\textrm{div}$ and $\nabla$ are respectively the divergence and gradient in $P$ with respect to the metric induced from $M$.
We conclude that stationary functions satisfy the capillary-type equation \begin{equation} \label{capillary} \textrm{div}\Big(\frac{\nabla u}{W}\Big) - \langle \frac{\nabla \gamma}{2\gamma}, \frac{\nabla u}{W}\rangle = \Psi. \end{equation} Notice that a Neumann boundary condition arises naturally from this variational setting: given a $C^{2,\alpha}$ function $\Phi:K \to (-1,1)$, we impose the following prescribed angle condition \begin{equation} \label{neumann-condition} \langle N, \nu\rangle = \Phi \end{equation} along $\partial\Sigma$, where \begin{equation} N = \frac{1}{W}\big(\gamma Y - \vartheta_* \nabla u\big) \end{equation} is the unit normal vector field along $\Sigma$ satisfying $\langle N, Y\rangle >0$ and $\nu$ is the unit normal vector field along $K$ pointing inwards the Killing cylinder over $\Omega$.
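In the Euclidean case $\gamma \equiv 1$ the second term on the left of (\ref{capillary}) vanishes and the equation reduces to the classical mean curvature operator $\textrm{div}(\nabla u/W)$ with $W = \sqrt{1+|\nabla u|^2}$. As a numerical sanity check (our own sketch, not part of the paper): for the hemisphere $u=\sqrt{R^2-x^2-y^2}$ one has $\nabla u/W = -(x, y)/R$, so the operator equals the constant $-2/R$, and a finite-difference evaluation reproduces this:

```python
import numpy as np

R, h = 2.0, 1e-3
# small grid around the interior point (0.5, 0.3) of the disc of radius R
x = 0.5 + h * np.arange(-5, 6)
y = 0.3 + h * np.arange(-5, 6)
X, Y = np.meshgrid(x, y, indexing="ij")
U = np.sqrt(R**2 - X**2 - Y**2)          # hemisphere graph

ux, uy = np.gradient(U, h, h)            # central differences for grad u
W = np.sqrt(1 + ux**2 + uy**2)
div = np.gradient(ux / W, h, axis=0) + np.gradient(uy / W, h, axis=1)
print(div[5, 5])                         # close to -2/R = -1.0
```

The sign reflects the choice of unit normal with $\langle N, Y\rangle > 0$.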
Equation (\ref{capillary}) is the prescribed mean curvature equation for Killing graphs. A general existence result for solutions of the Dirichlet problem for this equation may be found in \cite{DHL}. There the authors used local perturbations of the Killing cylinders as barriers for obtaining height and gradient estimates. However this kind of barrier is not suitable for obtaining \emph{a priori} estimates for solutions of Neumann problems. For that reason we now consider local perturbations of the graph itself, adapted from Korevaar's original approach in \cite{korevaar} and its extension by M. Calle and L. Shahriyari \cite{calle}.
Following \cite{calle} and \cite{korevaar} we suppose that the data $\Psi$ and $\Phi$ satisfy \begin{itemize}
\item[i.] $|\Psi|+|\bar\nabla\Psi|\le C_\Psi$ in $\bar\Omega\times \mathbb{R}$, \item[ii.] $\langle \bar\nabla \Psi, Y\rangle \ge \beta>0$ in $\bar\Omega\times \mathbb{R}$, \item[iii.] $\langle \bar\nabla\Phi, Y\rangle \le 0$, \item[iv.] $(1-\Phi^2)\ge \beta'$,
\item[v.] $|\Phi|_2\le C_\Phi$ in $K$, \end{itemize} for some positive constants $C_\Psi, C_\Phi, \beta$ and $\beta'$, where $\bar\nabla$ denotes the Riemannian connection in $M$. Assumption ($ii$) is classically referred to as the \emph{positive gravity} condition. Even in the Euclidean space, it seems to be an essential assumption in order to obtain \emph{a priori} height estimates. A very geometric discussion about this issue may be found in \cite{concus-finn}. Condition ($iii$) is the same as in \cite{calle} and \cite{korevaar} since in those references $N$ is chosen in such a way that $\langle N, Y\rangle >0$.
The main result of this paper is the following.
\begin{theorem} \label{main} Let $\Omega$ be a bounded $C^{3,\alpha}$ domain in $P$.
Suppose that $\Psi\in C^{1,\alpha}(\bar\Omega\times\mathbb{R})$ and $\Phi\in C^{2,\alpha}(K)$ with $|\Phi|\le 1$ satisfy conditions {\rm (i)-(v)} above. Then there exists a unique solution $u\in C^{3,\alpha}(\bar\Omega)$ of the capillary problem {\rm (\ref{capillary})-(\ref{neumann-condition})}. \end{theorem}
We observe that $\Psi=nH$, where $H$ is the mean curvature of $\Sigma$ calculated with respect to $N$. Therefore Theorem \ref{main} establishes the existence of Killing graphs with prescribed mean curvature $\Psi$ and prescribed contact angle with $K$ along the boundary. Since the Riemannian product $P\times \mathbb{R}$ corresponds to the particular case where $\gamma=1$, our result extends the main existence theorem in \cite{calle}. Space forms constitute other important examples of the kind of warped products we are considering. In particular, we encompass the case of Killing graphs over totally geodesic hypersurfaces in the hyperbolic space $\mathbb{H}^{n+1}$.
In Section \ref{section-height}, we prove \emph{a priori} height estimates for solutions of (\ref{capillary})-(\ref{neumann-condition}) based on Uraltseva's method as presented in \cite{uraltseva}. These height estimates are one of the main steps for using the well-known Continuity Method in order to prove Theorem \ref{main}. At this respect, we refer the reader to the classical references \cite{concus-finn}, \cite{gerhardt} and \cite{spruck-simon}.
Section \ref{section-gradient} contains the proof of interior and boundary gradient estimates. There we follow closely a method due to N. Korevaar \cite{korevaar} for graphs in the Euclidean spaces and extended by M. Calle and L. Shahriyari \cite{calle} for Riemannian products. Finally the classical Continuity Method is applied to (\ref{capillary})-(\ref{neumann-condition}) in Section \ref{section-proof} for proving the existence result.
\section{Height estimates} \label{section-height}
In this section, we use a technique developed by N. Uraltseva \cite{uraltseva} (see also \cite{uraltseva-book} and \cite{GT} for classical references on the subject) in order to obtain a height estimate for solutions of the capillary problem (\ref{capillary})-(\ref{neumann-condition}). This estimate requires the \emph{positive gravity} assumption ($ii$) stated in the Introduction.
\begin{proposition} Denote \begin{equation} \beta = \inf_{\Omega\times \mathbb{R}}\langle \bar\nabla \Psi, Y\rangle \end{equation} and \begin{equation} \mu = \sup_\Omega \Psi(x,0). \end{equation} Suppose that $\beta >0$. Then any solution $u$ of (\ref{capillary})-(\ref{neumann-condition}) satisfies \begin{equation}
|u(x)|\le \frac{\sup_\Omega |Y|}{\inf_\Omega |Y|}\frac{\mu}{\beta} \end{equation} for all $x\in \bar\Omega$. \end{proposition}
\noindent \emph{Proof.} Fix an arbitrary real number $k$ with \begin{equation*}
k > \frac{\sup_\Omega |Y|}{\inf_\Omega |Y|}\frac{\mu}{\beta}. \end{equation*} Suppose that the superlevel set \begin{equation*} \Omega_k = \{x\in \Omega: u(x)>k\} \end{equation*} has a nonzero Lebesgue measure. Define $u_k:\Omega \to \mathbb{R}$ as \begin{equation*} u_k(x) = \max\{u(x)-k,0\}. \end{equation*} From the variational formulation we have \begin{eqnarray*}
0 &=&\int_{\Omega_k} \bigg(\frac{1}{\sqrt\gamma}\frac{\langle \nabla u, \nabla u_k\rangle}{\sqrt{\gamma+|\nabla u|^2}} + \frac{1}{\sqrt\gamma}\Psi (x, u(x)) u_k\bigg) \sqrt{\sigma}\textrm{d}x\\
&=& \int_{\Omega_k} \bigg(\frac{1}{\sqrt\gamma}\frac{|\nabla u|^2}{W} +\frac{1}{\sqrt\gamma} \Psi (x, u(x)) (u-k)\bigg) \sqrt{\sigma}\textrm{d}x\\ & = & \int_{\Omega_k} \bigg(\frac{1}{\sqrt\gamma}\frac{W^2-\gamma}{W} +\frac{1}{\sqrt\gamma} \Psi (x, u(x)) (u-k)\bigg) \sqrt{\sigma}\textrm{d}x \\ &= & \int_{\Omega_k} \bigg(\frac{W}{\sqrt\gamma}-\frac{\sqrt\gamma}{W} + \frac{1}{\sqrt\gamma}\Psi (x, u(x)) (u-k)\bigg) \sqrt{\sigma}\textrm{d}x . \end{eqnarray*} However \begin{equation*} \Psi(x,u(x)) = \Psi(x,0) +\int_0^{u(x)} \frac{\partial \Psi}{\partial s}\textrm{d}s \ge -\mu +\beta u(x). \end{equation*} Since $\frac{\sqrt{\gamma}}{W}\leq 1$ we conclude that \begin{eqnarray*}
|\Omega_k|-|\Omega_k|-\mu\int_{\Omega_k}\frac{1}{\sqrt{\gamma}}(u-k)+\beta\int_{\Omega_k}\frac{1}{\sqrt{\gamma}}u(u-k)\le 0. \end{eqnarray*} Hence we have \begin{eqnarray*} \beta\int_{\Omega_k}\frac{1}{\sqrt{\gamma}}u(u-k) \le \mu\int_{\Omega_k}\frac{1}{\sqrt{\gamma}}(u-k).\nonumber \end{eqnarray*} It follows that \begin{eqnarray*}
\beta k \inf_\Omega |Y| \int_{\Omega_k}(u-k) \le \mu\sup_\Omega |Y|\int_{\Omega_k}(u-k)\nonumber \end{eqnarray*}
Since $|\Omega_k|\neq 0$ we have \[
k \le \frac{\sup_\Omega |Y|}{\inf_\Omega |Y|}\frac{\mu}{\beta}, \]
which contradicts the choice of $k$. We conclude that $|\Omega_k|=0$ for all $k > \frac{\sup_\Omega |Y|}{\inf_\Omega |Y|}\frac{\mu}{\beta}$. This implies that \[
u(x)\le \frac{\sup_\Omega |Y|}{\inf_\Omega |Y|}\frac{\mu}{\beta}, \] for all $x\in \bar\Omega$. A lower estimate may be deduced in a similar way. This finishes the proof of the Proposition. $
\square$
\begin{remark} The construction of geometric barriers similar to those ones in \cite{concus-finn} is also possible at least in the case where $P$ is endowed with a rotationally invariant metric and $\Omega$ is contained in a normal neighborhood of a pole of $P$. \end{remark}
\section{Gradient estimates} \label{section-gradient}
Let $\Omega'$ be a subset of $\Omega$ and let \begin{equation} \Sigma'= \{\vartheta(u(x),x): x\in \Omega'\}\subset \Sigma \end{equation}
be the graph of $u|_{\Omega'}$. Let $\mathcal{O}$ be an open subset in $M$ containing $\Sigma'$. We consider a vector field $Z\in \Gamma(TM)$ with bounded $C^2$ norm and supported in $\mathcal{O}$. Hence there exists $\varepsilon>0$ such that the local flow $\Xi:(-\varepsilon, \varepsilon)\times \mathcal{O}\to M$ generated by $Z$ is well-defined. We also suppose that \begin{equation} \label{Zboundary} \langle Z(y), \nu (y)\rangle = 0, \end{equation} for any $y\in K\cap\mathcal{O}$. This implies that the flow line of $Z$ passing through a point $y\in K\cap\mathcal{O}$ is entirely contained in $K$.
We define a variation of $\Sigma$ by a one-parameter family of hypersurfaces $\Sigma_\tau$, $\tau \in (-\varepsilon, \varepsilon)$, parameterized by $X_\tau:\bar\Omega\to M$ where \begin{equation} \label{perturbation} X_\tau (x) = \Xi(\tau, \vartheta(u(x),x)), \quad x\in \bar\Omega. \end{equation} It follows from the Implicit Function Theorem that there exist $\Omega_\tau \subset P$ and $u_\tau:\bar\Omega_\tau\to \mathbb{R}$ such that $\Sigma_\tau$ is the graph of $u_\tau$. Moreover, (\ref{Zboundary}) implies that $\Omega_\tau\subset\Omega$.
Hence given a point $y\in \Sigma$, denote $y_\tau = \Xi(\tau, y)\in \Sigma_\tau$. It follows that there exists $x_\tau\in \Omega_\tau$ such that $y_\tau= \vartheta(u_\tau(x_\tau), x_\tau)$. Then we denote by $\hat y_\tau = \vartheta(u(x_\tau), x_\tau)$ the point in $\Sigma$ in the flow line of $Y$ passing through $y_\tau$. The vertical separation between $y_\tau$ and $\hat y_\tau$ is by definition the function $s(y,\tau)=u_\tau(x_\tau)- u(x_\tau)$.
\begin{lemma}\label{lema1} For any $\tau\in (-\varepsilon, \varepsilon)$, let $A_\tau$ and $H_\tau$ be, respectively, the Weingarten map and the mean curvature of the hypersurface $\Sigma_\tau$ calculated with respect to the unit normal vector field $N_\tau$ along $\Sigma_\tau$ which satisfies $\langle N_\tau, Y\rangle >0$. Denote $H=H_0$ and $A=A_0$. If $\zeta\in C^\infty(\mathcal{O})$ and $T\in \Gamma(T\mathcal{O})$ are defined by \begin{equation} Z = \zeta N_\tau + T \end{equation} with $\langle T, N_\tau\rangle=0$ then \begin{itemize} \item[i.]
$\frac{\partial s}{\partial\tau}\big|_{\tau=0} = \langle Z, N\rangle W.$ \item[ii.]
$\bar{\nabla}_Z N\big|_{\tau=0} = -AT-\nabla^{\Sigma}\zeta$ \item[iii.]
$\frac{\partial H}{\partial\tau}\big|_{\tau=0}=\Delta_\Sigma\zeta+(|A|^2+{\rm Ric}_M(N,N))\zeta+\langle\bar\nabla \Psi, Z\rangle,$ \end{itemize}
where $W=\langle Y, N_\tau\rangle^{-1}=(\gamma+|\nabla u_\tau|^2)^{1/2}$. The operators $\nabla^\Sigma$ and $\Delta_\Sigma$ are, respectively, the intrinsic gradient operator and the Laplace-Beltrami operator in $\Sigma$ with respect to the induced metric. Moreover, $\bar\nabla$ and ${\rm Ric}_M$ denote, respectively, the Riemannian covariant derivative and the Ricci tensor in $M$. \end{lemma}
\noindent \textit{Proof.} (i) Let $(x^i)_{i=1}^n$ be a set of local coordinates in $\Omega\subset P$. Differentiating (\ref{perturbation}) with respect to $\tau$ we obtain \begin{eqnarray*}
X_{\tau*}\frac{\partial}{\partial\tau} = Z|_{X_\tau} = \zeta N_\tau + T \end{eqnarray*} On the other hand differentiating both sides of \[ X_\tau(x) =\vartheta(u_\tau(x_\tau), x_\tau) \] with respect to $\tau$ we have \begin{eqnarray*} X_{\tau*}\frac{\partial}{\partial\tau} &=&\Big( \frac{\partial u_\tau}{\partial \tau}+\frac{\partial u_\tau}{\partial x^i}\frac{\partial x_\tau^i}{\partial \tau}\Big)\vartheta_* Y +\frac{\partial x_\tau^i}{\partial \tau} \vartheta_* \frac{\partial}{\partial x^i}\\ & = & \frac{\partial u_\tau}{\partial \tau}\vartheta_* Y+\frac{\partial x_\tau^i}{\partial \tau}\Big(\vartheta_* \frac{\partial}{\partial x^i}+\frac{\partial u_\tau}{\partial x^i}\vartheta_* Y\Big) \end{eqnarray*} Since the term between parenthesis after the second equality is a tangent vector field in $\Sigma_\tau$ we conclude that \begin{eqnarray*} \frac{\partial u_\tau}{\partial \tau}\langle Y, N_\tau\rangle = \langle X_{\tau*}\frac{\partial}{\partial\tau}, N_\tau\rangle = \zeta \end{eqnarray*} from what follows that \[ \frac{\partial u_\tau}{\partial \tau} = \zeta W \] and \begin{eqnarray} \frac{\partial s}{\partial\tau} = \frac{\partial }{\partial\tau} (u_{\tau}-u) = \frac{\partial u_{\tau}}{\partial\tau} = \zeta W.\nonumber \end{eqnarray}
\noindent (ii) Now we have \begin{eqnarray} & & \langle\bar{\nabla}_{Z}N_\tau,X_*\partial_i\rangle = -\langle N_\tau,\bar{\nabla}_{Z}X_*\partial_i\rangle= -\langle N_\tau,\bar{\nabla}_{X_*\partial_i} Z\rangle= -\langle N_\tau,\bar{\nabla}_{X_*\partial_i} (\zeta N+T)\rangle\nonumber\\ & & \,\, = -\langle N_\tau,\bar{\nabla}_{X_*\partial_i} T\rangle-\langle N_\tau,\bar{\nabla}_{X_*\partial_i} \zeta N_\tau\rangle= -\langle A_\tau T, X_*\partial_i\rangle- \langle\nabla^{\Sigma}\zeta, X_*\partial_i\rangle,\nonumber \end{eqnarray} for any $1\le i\le n$. It follows that \[ \bar{\nabla}_Z N = -AT-\nabla^{\Sigma}\zeta. \]
\noindent (iii) This is a well-known formula whose proof may be found at a number of references (see, for instance, \cite{gerhardt-book}).
$\square$
For further reference, we point out that the Comparison Principle \cite{GT}, when applied to (\ref{capillary})-(\ref{neumann-condition}), may be stated in geometric terms as follows. Fix $\tau$ and let $x\in \bar\Omega'$ be a point of maximal vertical separation $s(\cdot, \tau)$. If $x$ is an interior point we have \[ \nabla u_\tau (x,\tau) -\nabla u(x) = \nabla s (x,\tau) = 0, \] which implies that the graphs of the functions $u_\tau$ and $u+s(x,\tau)$ are tangent at their common point $y_\tau =\vartheta(u_\tau(x), x)$. Since the graph of $u+s(x, \tau)$ is obtained from $\Sigma$ only by a translation along the flow lines of $Y$, we conclude that the mean curvatures of these two graphs are the same at corresponding points. Since the graph of $u+s(x,\tau)$ is locally above the graph of $u_\tau$, we conclude that \begin{equation} \label{comparison-int} H(\hat y_\tau)\ge H_\tau (y_\tau). \end{equation} If $x\in \partial\Omega\subset \partial\Omega'$ we have \[
\langle \nabla u_\tau, \nu\rangle|_{x} - \langle \nabla u, \nu\rangle|_x = \langle \nabla s, \nu\rangle \le 0 \] since $\nu$ points toward $\Omega$. This implies that \begin{equation} \label{comparison-bdry}
\langle N_\tau, \nu\rangle|_{y_\tau} \ge \langle N, \nu\rangle|_{\hat y_\tau}. \end{equation}
\subsection{Interior gradient estimate} \label{section-int}
\begin{proposition}\label{interior} Let $B_R(x_0)\subset \Omega$ where $R<{\rm inj}P$. Then there exists a constant $C>0$ depending on $\beta, C_\Psi, \Omega$ and $K$ such that \begin{equation}
|\nabla u(x)|\le C\frac{R^2}{R^2 -d^2(x)}, \end{equation} where $d={\rm dist}(x_0, x)$ in $P$. \end{proposition}
\noindent \emph{Proof.} Fix $\Omega'= B_R(x_0)\subset \Omega$. We consider the vector field $Z$ given by \begin{equation} \label{Zint} Z=\zeta N, \end{equation} where $\zeta$ is a function to be defined later. Fixing $\tau\in [0, \varepsilon)$, let $x\in B_R(x_0)$ be a point where the vertical separation $s(\cdot, \tau)$ attains a maximum value.
If $y=\vartheta(u(x), x)$ it follows that \begin{equation}
H_\tau (y_\tau) - H_0(y) = \frac{\partial H_\tau}{\partial\tau}\Big|_{\tau=0}\tau + o(\tau). \end{equation} However the Comparison Principle implies that $H_0(\hat y_\tau)\ge H_\tau (y_\tau)$. Using Lemma \ref{lema1} ($iii$) we conclude that \begin{eqnarray*}
H_0(\hat y_\tau)- H_0(y) \ge \frac{\partial H_\tau}{\partial\tau}\Big|_{\tau=0}\tau + o(\tau)= (\Delta_\Sigma\zeta+ |A|^2\zeta + \textrm{Ric}_M(N,N)\zeta)\tau + o(\tau). \end{eqnarray*} Since $\hat y_\tau = \vartheta (-s(y,\tau), y_\tau)$ we have \begin{eqnarray} \label{dd}
\frac{d\hat y_\tau}{d\tau}\Big|_{\tau=0} =-\frac{ds}{d\tau}\vartheta_{*}\frac{\partial}{\partial s}+\frac{\partial y_\tau^i}{\partial\tau}\vartheta_{*}\frac{\partial}{\partial x^{i}}
=- \frac{ds}{d\tau} Y + \frac{d y_\tau}{d\tau}\Big|_{\tau=0}=-\frac{ds}{d\tau} Y + Z(y). \end{eqnarray} Hence using Lemma \ref{lema1} ($i$) and (\ref{Zint}) we have \begin{equation} \label{dtau}
\frac{d\hat y_\tau}{d\tau}\Big|_{\tau=0}=-\zeta WY+\zeta N. \end{equation} On the other hand, there exists a smooth curve $\xi: (-\varepsilon, \varepsilon)\to T_yM$ such that \[ \hat y_\tau = \exp_y \xi(\tau) \] for each $\tau\in (-\varepsilon, \varepsilon)$. Hence we have \begin{eqnarray}
\frac{d\hat y_\tau}{d\tau}\Big|_{\tau=0} =\xi'(0).\nonumber \end{eqnarray} With a slight abuse of notation we denote $\Psi(s,x)$ by $\Psi(y)$ where $y=\vartheta(s,x)$. It follows that \begin{equation*}
H_0(\hat y_\tau)- H_0(y) = \Psi(u(x_\tau), x_\tau) - \Psi(u(x), x) = \Psi(\exp_y \xi(\tau))-\Psi(y)= \langle \bar\nabla\Psi|_y, \xi'(0)\rangle \tau + o(\tau). \end{equation*} However \begin{eqnarray} \langle\bar\nabla\Psi, \xi'(0)\rangle =\zeta \langle \bar\nabla\Psi, N-WY \rangle= -\zeta W\frac{\partial\Psi}{\partial s}+\zeta\langle \bar\nabla\Psi, N\rangle. \end{eqnarray} We conclude that \begin{equation*}
-\zeta W\frac{\partial\Psi}{\partial s}\tau+\zeta\langle \bar\nabla\Psi, N\rangle \tau + o(\tau) \ge (\Delta_\Sigma\zeta+ |A|^2\zeta + \textrm{Ric}_M(N,N)\zeta)\tau + o(\tau). \end{equation*} Suppose that \begin{equation}
W(x) > \frac{C+|\bar\nabla\Psi|}{\beta} \end{equation} for a constant $C>0$ to be chosen later. Hence we have \begin{equation*} (\Delta_\Sigma\zeta+ \textrm{Ric}_M(N,N)\zeta)\tau + C\zeta \tau \le o(\tau). \end{equation*} Following \cite{calle} and \cite{korevaar} we choose \[ \zeta = 1-\frac{d^2}{R^2}, \] where $d=\textrm{dist}(x_0, \cdot)$. It follows that \begin{eqnarray*} \nabla^\Sigma\zeta = -\frac{2d}{R^2}\nabla^\Sigma d \end{eqnarray*} and \begin{eqnarray*}
\Delta_\Sigma \zeta = -\frac{2d}{R^2}\Delta_\Sigma d -\frac{2}{R^2}|\nabla^\Sigma d|^2. \end{eqnarray*} However, using the fact that $P$ is totally geodesic and that $[Y,\bar\nabla d]=0$, we have \begin{eqnarray} & & \Delta_{\Sigma}d=\Delta_M d-\langle\bar\nabla_{N}\bar\nabla d,N\rangle + nH\langle\bar\nabla d,N\rangle\nonumber\\ & &\,\, = \Delta_P d -\langle \nabla_{\frac{\nabla u}{W}} \nabla d, \frac{\nabla u}{W}\rangle -\gamma^2\langle Y, N\rangle^2 \langle \bar\nabla_Y \bar\nabla d, Y\rangle+ nH\langle\bar\nabla d,N\rangle.\nonumber \end{eqnarray} Let $\pi:M\to P$ be the projection defined by $\pi(\vartheta(s,x))=x$. Then \[ \pi_* N = -\frac{\nabla u}{W}. \] We denote \[ \pi_*N^\perp = \pi_* N -\langle \pi_* N, \nabla d\rangle\nabla d. \] If $\mathcal{A}_d$ and $\mathcal{H}_d$ denote, respectively, the Weingarten map and the mean curvature of the geodesic ball $B_d(x_0)$ in $P$ we conclude that \begin{eqnarray} & & \Delta_{\Sigma}d= n\mathcal{H}_d -\langle \mathcal{A}_d(\pi_* N^\perp), \pi_*N^\perp\rangle +\gamma\langle Y, N\rangle^2 \kappa+ nH\langle\bar\nabla d,N\rangle, \nonumber \end{eqnarray} where \[ \kappa = -\gamma\langle \bar\nabla_Y \bar\nabla d, Y\rangle \] is the principal curvature of the Killing cylinder over $B_d(x_0)$ relative to the principal direction $Y$. Therefore we have \[
|\Delta_\Sigma d|\le C_1(C_\Psi, \sup_{B_R(x_0)}(\mathcal{H}_d+\kappa), \sup_{B_R(x_0)}\gamma ) \] in $B_R(x_0)$. Hence setting \[ C_2 = \sup_{B_R(x_0)}\textrm{Ric}_M \] we fix \begin{equation} \label{C}
C =\max\{2(C_1+C_2), \sup_{\mathbb{R}\times\Omega} |\bar\nabla \Psi|\}. \end{equation} With this choice we conclude that \[ C\zeta \le \frac{o(\tau)}{\tau}, \] a contradiction. This implies that \begin{equation}
W(x) \le \frac{C+|\bar\nabla \Psi|}{\beta}. \end{equation} However \[ \zeta(z) W(z) + o(\tau) = s(X(z), \tau) \le s(X(x), \tau) = \zeta(x) W(x)+o(\tau), \] for any $z\in B_R(x_0)$. It follows that \[
W(z) \le \frac{R^2-d^2(z)}{R^2-d^2(x)} W(x) + o(\tau) \le \frac{R^2}{R^2-d^2(x)} \frac{C+|\bar\nabla \Psi|}{\beta}+o(\tau) \le \widetilde C \frac{R^2}{R^2-d^2(x)}, \] for $\varepsilon>0$ sufficiently small. This finishes the proof of the proposition.
$\square$
\begin{remark} \label{sphere} If $\Omega$ satisfies the interior sphere condition for a uniform radius $R>0$ we conclude that \begin{equation} W(x)\le \frac{C}{d_\Gamma(x)}, \end{equation} for $x\in \Omega$, where $d_\Gamma(x) ={\rm dist}(x, \Gamma)$. \end{remark}
\subsection{Boundary gradient estimates}
Now we establish boundary gradient estimates using another local perturbation of the graph, which this time also has tangential components.
\begin{proposition}\label{boundary} Let $x_0\in P$ and $R>0$ such that $3R <{\rm inj}P$. Denote by $\Omega'$ the subdomain $\Omega \cap B_{2R}(x_0)$. Then there exists a positive constant $C=C(R, \beta, \beta', C_\Psi, C_\Phi, \Omega, K)$ such that \begin{equation} W(x) \le C, \end{equation} for all $x\in \overline\Omega'$. \end{proposition}
\noindent \emph{Proof.} Now we consider the subdomain $\Omega'=\Omega\cap B_{2R}(x_0)$. We define \begin{equation} Z = \eta N + X, \end{equation} where \[ \eta = \alpha_0 v + \alpha_1 d_\Gamma \]
and $\alpha_0$ and $\alpha_1$ are positive constants to be chosen and $d_\Gamma$ is a smooth extension of the distance function $\textrm{dist}(\,\cdot\, , \Gamma)$ to $\Omega'$ with $|\nabla d_\Gamma|\le 1$ and \[ v =4R^2-d^2, \] where $d=\textrm{dist}(x_0, \cdot)$. Moreover \[ X = \alpha_0\Phi (v\nu-d_\Gamma\nabla v). \] In this case we have \begin{eqnarray*} \zeta = \eta +\langle X, N\rangle = \alpha_0 v + \alpha_1 d_\Gamma + \alpha_0\Phi (v\langle N, \nu\rangle-d_\Gamma\langle N, \nabla v\rangle). \end{eqnarray*} Fixing $\tau\in [0,\varepsilon)$, let $x\in\bar\Omega'$ be a point where the maximal vertical separation between $\Sigma$ and $\Sigma_\tau$ is attained. We first suppose that $x\in \textrm{int}(\partial\Omega'\cap \partial\Omega)$. In this case, denoting $y_\tau =\vartheta (u_\tau(x), x)\in \Sigma_\tau$ and $\hat y_\tau=\vartheta(u(x), x)\in \Sigma$, it follows from the Comparison Principle that \begin{equation}
\langle N_\tau, \nu\rangle|_{y_\tau}\ge \langle N, \nu\rangle|_{\hat y_\tau}. \end{equation}
Notice that $\hat y_\tau \in \partial\Sigma$. Moreover since $Z|_{K\cap\mathcal{O}}$ is tangent to $K$ there exists $y\in \partial\Sigma$ such that \[ y = \Xi (-\tau, y_\tau). \] We claim that \begin{equation} \label{der-1}
|\langle \bar\nabla \langle N_\tau, \nu\rangle, \frac{dy_\tau}{d\tau}\big|_{\tau=0}\rangle| \le \alpha_1 (1-\Phi^2) +\widetilde C\alpha_0 \end{equation} for some positive constant $\widetilde C=C(C_\Phi, K, \Omega, R)$.
Hence (\ref{neumann-condition}) implies that \begin{eqnarray*}
\langle N, \nu\rangle|_{\hat y_\tau} - \langle N, \nu\rangle|_{y} = \Phi(\hat y_\tau) - \Phi(y) = \tau \langle \bar\nabla \Phi, \frac{d\hat y_\tau}{d\tau}\big|_{\tau=0}\rangle+ o(\tau). \end{eqnarray*} Therefore \begin{eqnarray*}
\langle N, \nu\rangle|_{y_\tau} - \langle N, \nu\rangle|_{y} \ge \tau \langle \bar\nabla \Phi, \frac{d\hat y_\tau}{d\tau}\big|_{\tau=0}\rangle+ o(\tau). \end{eqnarray*} On the other hand we have \begin{eqnarray*}
\langle N, \nu\rangle|_{y_\tau} - \langle N, \nu\rangle|_{y} = \tau \langle \bar\nabla \langle N, \nu\rangle, \frac{dy_\tau}{d\tau}\big|_{\tau=0}\rangle+ o(\tau). \end{eqnarray*} We conclude that \begin{equation*} \label{ineq-fund}
\tau \langle \bar\nabla \langle N, \nu\rangle, \frac{dy_\tau}{d\tau}\big|_{\tau=0}\rangle \ge \tau \langle \bar\nabla \Phi, \frac{d\hat y_\tau}{d\tau}\big|_{\tau=0}\rangle+ o(\tau). \end{equation*} Hence we have \begin{equation*} \label{ineq-fund}
\alpha_1 (1-\Phi^2)\tau +\widetilde C\alpha_0\tau \ge \tau \langle \bar\nabla \Phi, \frac{d\hat y_\tau}{d\tau}\big|_{\tau=0}\rangle+ o(\tau). \end{equation*} It follows from (\ref{dd}) that \begin{equation*} \label{ineq-fund2} \alpha_1 (1-\Phi^2) +\widetilde C\alpha_0 \ge -\zeta W \langle \bar\nabla \Phi,Y\rangle+ \zeta\langle \bar\nabla \Phi, N\rangle+ o(\tau)/\tau. \end{equation*} Since \[ \langle \bar\nabla\Phi, Y\rangle =\frac{\partial\Phi}{\partial s}\le 0 \] we conclude that \begin{equation} \label{ineq-fund3} W(x) \le C(C_\Phi, \beta', K, \Omega, R). \end{equation} We now prove the claim. For that, observe that Lemma \ref{lema1} ($ii$) implies that \begin{eqnarray*}
& &\langle N, \nu\rangle|_{y_\tau} - \langle N, \nu\rangle|_{y} = \tau \frac{\partial}{\partial\tau}\Big|_{\tau=0}\langle N_\tau, \nu\rangle|_{y_\tau} + o(\tau) \\
& & \,\, = \tau (\langle N, \bar\nabla_Z \nu\rangle|_y-\langle AT+\nabla^\Sigma \zeta, \nu\rangle|_y)+o(\tau). \end{eqnarray*}
Since $Z|_y\in T_y K$ it follows that \begin{eqnarray*}
\langle N, \nu\rangle|_{y_\tau} - \langle N, \nu\rangle|_{y} = -\tau (\langle A_K Z, N\rangle|_y+\langle AT+\nabla^\Sigma \zeta, \nu\rangle|_y)+o(\tau), \end{eqnarray*} where $A_K$ is the Weingarten map of $K$ with respect to $\nu$. We conclude that \begin{equation} \label{ineq222-2}
-\tau (\langle A_K Z, N\rangle|_y+\langle AT+\nabla^\Sigma \zeta, \nu\rangle|_y) \ge \tau \langle \bar\nabla \Phi, \frac{d\hat y_\tau}{d\tau}\big|_{\tau=0}\rangle+ o(\tau) \end{equation} where \[ \nu^T = \nu-\langle N, \nu\rangle N. \]
We have \begin{eqnarray*} \langle \nabla^{\Sigma}\zeta+AT, \nu^T\rangle = \alpha_0 \langle \nabla v, \nu^T\rangle+\alpha_1\langle\nabla^{\Sigma}d_\Gamma, \nu^T\rangle+\langle\nabla^{\Sigma}\langle X,N\rangle, \nu^T\rangle + \langle AT, \nu^T\rangle. \end{eqnarray*} We compute \begin{eqnarray*} & & \langle \nabla^{\Sigma}\langle X,N\rangle,\nu^T\rangle =\alpha_0 (v\langle N, \nu\rangle -d_\Gamma\langle N, \nabla v\rangle) \langle \bar\nabla\Phi, \nu^T\rangle \\ & & \,\,\,\, + \alpha_0 \Phi \big(\langle \nabla v, \nu^T\rangle \langle N, \nu\rangle + v (\langle\bar\nabla_{\nu^T} N, \nu\rangle+\langle N, \bar\nabla_{\nu^T}\nu\rangle) - \langle \nabla d_\Gamma, \nu^T\rangle \langle N, \nabla v\rangle\\ & & \,\,\,\, - d_\Gamma (\langle \bar\nabla_{\nu^T} N, \nabla v\rangle+\langle N, \bar\nabla_{\nu^T}\nabla v\rangle)\big). \end{eqnarray*} Hence we have at $y$ that \begin{eqnarray*} & & \langle \nabla^{\Sigma}\langle X,N\rangle,\nu^T\rangle =\alpha_0 (v\Phi -d_\Gamma\langle N, \nabla v\rangle) \langle \bar\nabla\Phi, \nu^T\rangle \\ & & \,\,\,\, + \alpha_0 \Phi \big(\langle \nabla v, \nu^T\rangle \Phi + v (-\langle A\nu^T, \nu^T\rangle+\langle N, \bar\nabla_{\nu}\nu\rangle-\langle N,\nu\rangle\langle N, \bar\nabla_{N}\nu\rangle) \\ & & \,\,\,\,-\langle \nu, \nu^T\rangle \langle N, \nabla v\rangle - d_\Gamma (-\langle A\nu^T, \nabla v\rangle+\langle N, \bar\nabla_{\nu}\nabla v\rangle-\langle N,\nu\rangle\langle N, \bar\nabla_{N}\nabla v\rangle)\big). 
\end{eqnarray*} Therefore we have \begin{eqnarray*} & & \langle \nabla^{\Sigma}\langle X,N\rangle,\nu^T\rangle =\alpha_0 (v\Phi -d_\Gamma\langle N, \nabla v\rangle) \langle \bar\nabla\Phi, \nu^T\rangle \\ & & \,\,\,\, +\alpha_0 \Phi \big(\langle \nabla v, \nu^T\rangle \Phi - v (\langle A\nu^T, \nu^T\rangle+\langle N,\nu\rangle\langle N, \bar\nabla_{N}\nu\rangle) \\ & & \,\,\,\,-\langle \nu, \nu^T\rangle \langle N, \nabla v\rangle + d_\Gamma (\langle A\nu^T, \nabla v\rangle-\langle N, \bar\nabla_{\nu}\nabla v\rangle+\langle N,\nu\rangle\langle N, \bar\nabla_{N}\nabla v\rangle)\big). \end{eqnarray*} It follows that \begin{eqnarray*} & & \langle \nabla^{\Sigma}\zeta + AT, \nu^T\rangle =\langle AT, \nu^T\rangle+\alpha_0 \langle \nabla v, \nu^T\rangle+\alpha_1\langle\nu, \nu^T\rangle \\ & & \,\,\,\, +\alpha_0 (v\Phi -d_\Gamma\langle N, \nabla v\rangle) \langle \bar\nabla\Phi, \nu^T\rangle \\ & & \,\,\,\, +\alpha_0 \Phi \big(\langle \nabla v, \nu^T\rangle \Phi - v (\langle A\nu^T, \nu^T\rangle+\langle N,\nu\rangle\langle N, \bar\nabla_{N}\nu\rangle) \\ & & \,\,\,\,-\langle \nu, \nu^T\rangle \langle N, \nabla v\rangle + d_\Gamma (\langle A\nu^T, \nabla v\rangle-\langle N, \bar\nabla_{\nu}\nabla v\rangle+\langle N,\nu\rangle\langle N, \bar\nabla_{N}\nabla v\rangle)\big). \end{eqnarray*} However \[ \langle AT, \nu^T\rangle= \langle A\nu^T, X\rangle =\alpha_0 \Phi v\langle A\nu^T, \nu^T\rangle -\alpha_0 \Phi d_\Gamma\langle A\nu^T, \nabla v\rangle. 
\] Hence we have \begin{eqnarray*} & & \langle \nabla^{\Sigma}\zeta + AT, \nu^T\rangle =\alpha_0 \langle \nabla v, \nu^T\rangle+\alpha_1\langle\nu, \nu^T\rangle +\alpha_0 (v\Phi -d_\Gamma\langle N, \nabla v\rangle) \langle \bar\nabla\Phi, \nu^T\rangle \\ & & \,\,\,\, +\alpha_0 \Phi \big(\langle \nabla v, \nu^T\rangle \Phi - v \Phi\langle N, \bar\nabla_{N}\nu\rangle -\langle \nu, \nu^T\rangle \langle N, \nabla v\rangle\\ & & \,\,\,\, - d_\Gamma (\langle N, \bar\nabla_{\nu}\nabla v\rangle-\langle N,\nu\rangle\langle N, \bar\nabla_{N}\nabla v\rangle)\big). \end{eqnarray*} Since $d_\Gamma(y)=0$ we have \begin{eqnarray*} & & \langle \nabla^{\Sigma}\zeta + AT, \nu^T\rangle =\alpha_0 \langle \nabla v, \nu^T\rangle+\alpha_1\langle\nu, \nu^T\rangle +\alpha_0 v\Phi \langle \bar\nabla\Phi, \nu^T\rangle \\ & & \,\,\,\, +\alpha_0 \Phi \big(\langle \nabla v, \nu^T\rangle \Phi - v \Phi\langle N, \bar\nabla_{N}\nu\rangle -\langle \nu, \nu^T\rangle \langle N, \nabla v\rangle\big). \end{eqnarray*} Rearranging terms we obtain \begin{eqnarray*} & & \langle \nabla^{\Sigma}\zeta + AT, \nu^T\rangle =\alpha_1(1-\langle N, \nu\rangle^2)+\alpha_0 \langle \nabla v, \nu^T\rangle (1+\Phi^2) +\alpha_0 v\Phi \langle \bar\nabla\Phi, \nu^T\rangle \\ & & \,\,\,\, -\alpha_0 \Phi \big(v \Phi\langle N, \bar\nabla_{N}\nu\rangle +(1-\langle N, \nu\rangle^2) \langle N, \nabla v\rangle\big). \end{eqnarray*} Therefore there exists a constant $C=C(\Phi, K, \Omega, R)$ such that \begin{equation} \label{est-1}
|\langle \nabla^{\Sigma}\zeta + AT, \nu^T\rangle|\le \alpha_1 (1-\Phi^2) +C\alpha_0. \end{equation} Since $d_\Gamma(y)=0$ it holds that \[
|\langle A_K Z, N\rangle| \le |A_K| |Z|\le |A_K|(\eta +|X|) \le 4R^2\alpha_0|A_K|(1+\Phi), \] from which we conclude that \begin{equation}
|\langle \bar\nabla\langle N_\tau, \nu\rangle, \frac{dy_\tau}{d\tau}\big|_{\tau=0}\rangle| \le \alpha_1 (1-\Phi^2) +\widetilde C\alpha_0 \end{equation} for some constant $\widetilde C(C_\Phi, K, \Omega, R)>0$.
Now we suppose that $x\in \overline{\partial\Omega'\cap \Omega}$. In this case, we have $v(x)=0$. Then $\eta=\alpha_1 d_\Gamma$ and \[ X=-\alpha_0 \Phi d_\Gamma\nabla v \] at $x$. Thus \[ \zeta = \eta + \langle X,N\rangle = \alpha_1 d_\Gamma +2\alpha_0 \Phi dd_\Gamma \langle \nabla d, N\rangle. \] Moreover we have \[ W(x) \le \frac{C}{d_\Gamma(x)} \] (see Remark \ref{sphere}). It follows that \begin{eqnarray} \zeta W \le C(\alpha_1 +2\alpha_0 \Phi d\langle \nabla d, N\rangle)\le C(\alpha_1 +4R\alpha_0 \Phi). \end{eqnarray} We conclude that \begin{eqnarray} W(x) \le C(C_\Phi, K, \Omega, R). \end{eqnarray} Now we consider the case where $x$ is an interior point of $\Omega'$. In this case we have \begin{eqnarray*} & &\Delta_\Sigma\zeta =\alpha_0 \Delta_\Sigma v + \alpha_1 \Delta_\Sigma d_\Gamma + \alpha_0\Delta_\Sigma\Phi (v\langle N, \nu\rangle-d_\Gamma\langle N, \nabla v\rangle)\\ & & \,\, +\alpha_0\Phi (\Delta_\Sigma v\langle N, \nu\rangle + v\Delta_\Sigma\langle N, \nu\rangle+2\langle \nabla^\Sigma v, \nabla^\Sigma\langle N, \nu\rangle\rangle-\Delta_\Sigma d_\Gamma\langle N, \nabla v\rangle-d_\Gamma\Delta_\Sigma\langle N, \nabla v\rangle\\ & &\,\,\,\, -2\langle \nabla^\Sigma d_\Gamma, \nabla^\Sigma \langle N, \nabla v\rangle\rangle)\\ & & \,\,\,\, +2\alpha_0 \langle \nabla^\Sigma \Phi, \nabla^\Sigma v \langle N, \nu\rangle+v \nabla^\Sigma\langle N, \nu\rangle-\nabla^\Sigma d_\Gamma\langle N, \nabla v\rangle-d_\Gamma\nabla^\Sigma\langle N, \nabla v\rangle\rangle. \end{eqnarray*} Notice that given an arbitrary vector field $U$ along $\Sigma$ we have \begin{equation*} \langle \nabla^\Sigma \langle N, U\rangle, V\rangle = -\langle AU^T, V\rangle +\langle N, \bar\nabla_V U\rangle, \end{equation*} for any $V\in \Gamma(T\Sigma)$. Here, $U^T$ denotes the tangential component of $U$. Hence, using the Codazzi equation, we obtain \begin{eqnarray*}
\Delta_\Sigma\langle N, U\rangle \le \langle \bar\nabla (nH), U^T\rangle +\textrm{Ric}_M (U^T, N) + C|A| \end{eqnarray*} for a constant $C$ depending on $\bar\nabla U$ and $\bar\nabla^2 U$. Hence using (\ref{capillary}) we conclude that \begin{eqnarray}
\Delta_\Sigma\langle N, U\rangle \le \langle \bar\nabla \Psi, U^T\rangle +\widetilde C|A| \end{eqnarray} where $\widetilde C$ is a positive constant depending on $\bar\nabla U, \bar\nabla^2 U$ and $\textrm{Ric}_M$.
We also have \begin{eqnarray*} \Delta_\Sigma d_\Gamma & = & \Delta_P d_\Gamma +\gamma \langle\bar\nabla_Y\bar\nabla d, Y\rangle -\langle \bar\nabla_N \bar\nabla d_\Gamma, N \rangle +nH\langle \bar\nabla d_\Gamma, N\rangle \\ & \le & C_0\Psi + C_1, \end{eqnarray*} where $C_0 $ and $C_1$ are positive constants depending on the second fundamental form of the Killing cylinders over the equidistant sets $d_\Gamma = \delta$ for small values of $\delta$. Similar estimates also hold for $\Delta_\Sigma d$ and then for $\Delta_\Sigma v$.
We conclude that \begin{equation}
\Delta_\Sigma \zeta \ge -\widetilde C_0 - \widetilde C_1 |A|, \end{equation}
where $\widetilde C_0$ and $\widetilde C_1$ are positive constants depending on $\Omega$, $K$, $\textrm{Ric}_M$, $|\Phi|_2$.
Now proceeding similarly as in the proof of Proposition \ref{interior}, we observe that Lemma \ref{lema1} ($iii$) and the Comparison Principle yield \begin{eqnarray*}
H_0(\hat y_\tau)- H_0(y) \ge \frac{\partial H_\tau}{\partial\tau}\Big|_{\tau=0}\tau + o(\tau)= (\Delta_\Sigma\zeta+ |A|^2\zeta + \textrm{Ric}_M(N,N)\zeta)\tau +\tau\langle \bar\nabla\Psi, T\rangle+ o(\tau). \end{eqnarray*} However \begin{equation*}
H_0(\hat y_\tau)- H_0(y) = \langle \bar\nabla\Psi|_y, \xi'(0)\rangle \tau + o(\tau). \end{equation*} Using (\ref{dd}) we have \begin{eqnarray*} \langle\bar\nabla\Psi, \xi'(0)\rangle =\langle \bar\nabla\Psi, Z-\zeta WY \rangle= \langle \bar\nabla\Psi, Z\rangle-\zeta W\frac{\partial\Psi}{\partial s}. \end{eqnarray*} We conclude that \begin{equation*}
-\zeta W\frac{\partial\Psi}{\partial s}\tau+\zeta\langle \bar\nabla\Psi, N\rangle \tau + o(\tau) \ge (\Delta_\Sigma\zeta+ |A|^2\zeta + \textrm{Ric}_M(N,N)\zeta)\tau + o(\tau). \end{equation*} Suppose that \begin{equation}
W > \frac{C+|\bar\nabla\Psi|}{\beta} \end{equation} for a constant $C>0$ as in (\ref{C}). Hence we have \begin{equation*}
(\Delta_\Sigma\zeta+|A|^2\zeta+ \textrm{Ric}_M(N,N)\zeta)\tau + C\zeta \tau \le o(\tau). \end{equation*} We conclude that \begin{equation*}
- C_0 - C_1 |A| + C_2 |A|^2 +C\le \frac{o(\tau)}{\tau}, \end{equation*} a contradiction. It follows from this contradiction that \begin{equation}
W(x) \le \frac{C+|\bar\nabla\Psi|}{\beta}. \end{equation} Now, proceeding as at the end of the proof of Proposition \ref{interior}, we use the estimate for $W(x)$ in each of the three cases to obtain an estimate for $W$ in $\Omega'$. This finishes the proof of the proposition. $
\square$
\section{Proof of Theorem \ref{main}} \label{section-proof}
We use the classical Continuity Method to prove Theorem \ref{main}. For details, we refer the reader to \cite{gerhardt} and \cite{uraltseva-book}. For any $\tau\in [0,1]$ we consider the Neumann boundary problem $\mathcal{N}_\tau$ of finding $u\in C^{3,\alpha}(\bar\Omega)$ such that \begin{eqnarray} & & \mathcal{F}[\tau, x,u,\nabla u, \nabla^2 u] = 0,\\ & & \langle \frac{\nabla u}{W}, \nu\rangle + \tau \Phi=0, \end{eqnarray} where $\mathcal{F}$ is the quasilinear elliptic operator defined by \begin{eqnarray} \mathcal{F}[\tau, x,u,\nabla u, \nabla^2 u]= \textrm{div}\bigg(\frac{\nabla u}{W}\bigg) - \langle \frac{\nabla \gamma}{2\gamma}, \frac{\nabla u}{W}\rangle -\tau\Psi. \end{eqnarray} Since the coefficients of the first and second order terms do not depend on $u$ it follows that \begin{equation} \label{implicit} \frac{\partial\mathcal{F}}{\partial u}= -\tau\frac{\partial\Psi}{\partial u} \le -\tau\beta <0. \end{equation} We define $\mathcal{I}\subset [0,1]$ as the subset of values of $\tau\in [0,1]$ for which the Neumann boundary problem $\mathcal{N}_\tau$ has a solution. Since $u=0$ is a solution for $\mathcal{N}_0$, it follows that $\mathcal{I}\neq \emptyset$. Moreover, the Implicit Function Theorem (see \cite{GT}, Chapter 17) implies that $\mathcal{I}$ is open in view of (\ref{implicit}). Finally, the height and gradient \emph{a priori} estimates we obtained in Sections \ref{section-height} and \ref{section-gradient} are independent of $\tau\in [0,1]$. This implies that (\ref{capillary}) is uniformly elliptic. Moreover, we may assure the existence of some $\alpha_0 \in (0,1)$ for which there exists a constant $C>0$ independent of $\tau$ such that \[
|u_\tau|_{1,\alpha_0,\bar\Omega}\le C. \] Redefine $\alpha = \alpha_0$. Combining this fact with Schauder elliptic estimates and the compact embedding of $C^{3,\alpha_0}(\bar\Omega)$ into $C^3(\bar\Omega)$, we conclude that $\mathcal{I}$ is closed. It follows that $\mathcal{I}=[0,1]$.
The uniqueness follows from the Comparison Principle for elliptic PDEs. We point out that a more general uniqueness statement, comparing a nonparametric solution with a general hypersurface with the same mean curvature and contact angle at corresponding points, is also valid. It is a consequence of a flux formula coming from the existence of a Killing vector field in $M$. We refer the reader to \cite{DHL} for further details.
This finishes the proof of Theorem \ref{main}.
\noindent Jorge H. Lira\\ Gabriela A. Wanderley\\ Departamento de Matem\'atica \\ Universidade Federal do Cear\'a\\ Campus do Pici, Bloco 914\\ Fortaleza, Cear\'a\\ Brazil\\ 60455-760
\end{document}
\begin{document}
\begin{center}
{\bf{\LARGE{Online Nonsubmodular Minimization with Delayed \\ [.2cm] Costs: From Full Information to Bandit Feedback}}}
\vspace*{.2in} {\large{ \begin{tabular}{c} Tianyi Lin$^{\star, \diamond}$ \and Aldo Pacchiano$^{\star, \ddagger}$ \and Yaodong Yu$^{\star, \diamond}$ \and Michael I. Jordan$^{\diamond, \dagger}$ \\ \end{tabular} }}
\vspace*{.2in}
\begin{tabular}{c} Department of Electrical Engineering and Computer Sciences$^\diamond$ \\ Department of Statistics$^\dagger$ \\ University of California, Berkeley \\ Microsoft Research, NYC$^\ddagger$ \end{tabular}
\vspace*{.2in}
\today
\vspace*{.2in}
\begin{abstract} Motivated by applications to online learning in sparse estimation and Bayesian optimization, we consider the problem of online unconstrained nonsubmodular minimization with delayed costs in both full information and bandit feedback settings. In contrast to previous works on online unconstrained submodular minimization, we focus on a class of nonsubmodular functions with special structure, and prove regret guarantees for several variants of the online and approximate online bandit gradient descent algorithms in static and delayed scenarios. We derive bounds for the agent's regret in the full information and bandit feedback setting, even if the delay between choosing a decision and receiving the incurred cost is unbounded. Key to our approach is the notion of $(\alpha, \beta)$-regret and the extension of the generic convex relaxation model from~\citet{El-2020-Optimal}, the analysis of which is of independent interest. We conduct several simulation studies to demonstrate the efficacy of our algorithms. \end{abstract} \let\thefootnote\relax\footnotetext{$^\star$ Tianyi Lin, Aldo Pacchiano and Yaodong Yu contributed equally to this work.} \end{center}
\section{Introduction} With machine learning systems increasingly being deployed in real-world settings, there is an urgent need for online learning algorithms that can minimize cumulative costs over the long run, even in the face of complete uncertainty about future outcomes. There exists a myriad of works that deal with this setting, most prominently in the area of online learning and bandits~\citep{Cesa-2006-Prediction,Lattimore-2020-Bandit}. The majority of this literature deals with problems where the decisions are taken from either a small set (such as in the multi-armed bandit framework~\citep{Auer-2002-Using}) or a continuous decision space (as in linear bandits~\citep{Auer-2002-Using,Dani-2008-Stochastic}); in the case where the decision set is combinatorial in nature, the response is often assumed to maintain a simple functional relationship with the input (e.g., linear~\citep{Cesa-2012-Combinatorial}).
In this paper, we depart from these assumptions and explore what we believe is a more realistic type of model for the setting where the actions can be encoded as selecting a subset of a universe of size $n$. We study a sequential interaction between an agent and the world that takes place in rounds. At the beginning of round $t$, the agent chooses a \emph{subset} $S^t \subseteq [n]$ (e.g., selecting the set of products in a factory~\citep{Mccormick-2005-Submodular}), after which the agent suffers a cost $f_t(S^t)$, where $f_t$ is an $\alpha$-weakly DR-submodular and $\beta$-weakly DL-supermodular function~\citep{Lehmann-2006-Combinatorial}. The agent may then receive extra information about $f_t$ as feedback: in the full information setting the agent observes the whole function $f_t$, while in the bandit feedback scenario the agent receives no information about $f_t$ beyond the value $f_t(S^t)$. The standard metric for measuring an online learning algorithm is \textit{regret}~\citep{Blum-2007-External}: the regret at time $T$ is the difference between $\sum_{t=1}^T f_t(S^t)$, the total cost incurred by the algorithm, and $\min_{S \subseteq [n]} \sum_{t=1}^T f_t(S)$, the total cost of the best fixed action in hindsight. A \textit{no-regret} learning algorithm is one that achieves sublinear regret (as a function of $T$). Many no-regret learning algorithms have been developed based on the online convex optimization toolbox~\citep{Zinkevich-2003-Online, Kalai-2005-Efficient, Shalev-2006-Convex, Hazan-2007-Logarithmic, Shalev-2011-Online, Arora-2012-Multiplicative, Hazan-2016-Introduction}, many of which achieve minimax-optimal regret bounds for different cost functions, even when these are produced by the world in an adversarial fashion.
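To make this benchmark concrete, here is a small self-contained sketch (ours, not taken from the paper; all names are illustrative) that computes the regret of a sequence of subset choices against the best fixed subset in hindsight by brute force over all $2^n$ subsets, which is feasible only for small $n$:

```python
from itertools import chain, combinations

def all_subsets(n):
    """All subsets of {0, ..., n-1}, as frozensets."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(range(n), r) for r in range(n + 1))]

def regret(n, costs, chosen):
    """Regret of a sequence of subset choices against the best fixed subset.

    costs:  list of set functions f_t, each mapping a frozenset to a real.
    chosen: list of frozensets S^t selected by the learner, one per round.
    """
    incurred = sum(f(S) for f, S in zip(costs, chosen))
    best_fixed = min(sum(f(S) for f in costs) for S in all_subsets(n))
    return incurred - best_fixed

# Toy example with modular (hence submodular) costs f_t(S) = sum of weights.
weights = [[1.0, -2.0, 0.5], [0.5, 1.0, -1.0]]   # one weight vector per round
costs = [(lambda S, w=w: sum(w[i] for i in S)) for w in weights]
chosen = [frozenset({0}), frozenset({0})]        # the learner always picks {0}
print(regret(3, costs, chosen))                  # incurred 1.5, best fixed {1,2} at -1.5, so 3.0
```

An online algorithm is no-regret precisely when this quantity grows sublinearly in the number of rounds.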
However, many online decision-making problems remain open, for example when the decision space is discrete and large (e.g., exponential in the number of problem parameters) and the cost functions are nonlinear~\citep{Hazan-2012-Online}.
To the best of our knowledge, \citet{Hazan-2012-Online} were the first to investigate non-parametric online learning in combinatorial domains by considering the setting where the costs $f_t$ are all submodular functions. In this formulation the decision space is the set of all subsets of a set of $n$ elements, and the cost functions are \textit{submodular}. They provided no-regret algorithms for both the full information and bandit settings. Their chief innovation was to propose a computationally efficient algorithm for online submodular learning that resolved the exponential computational and statistical dependence on $n$ suffered by all previous approaches~\citep{Hazan-2012-Online}. These results served as a catalyst for a rich and expanding research area~\citep{Streeter-2008-Online, Jegelka-2011-Online, Buchbinder-2014-Online, Chen-2018-Online, Roughgarden-2018-Optimal, Chen-2018-Projection, Cardoso-2019-Differentially, Anari-2019-Structured, Harvey-2020-Improved, Thang-2021-Online, Matsuoka-2021-Tracking}.
Even though submodularity can be used to model several important cost functions that arise in machine learning problems~\citep{Boykov-2001-Fast, Boykov-2004-Experimental, Narasimhan-2005-Q, Bach-2010-Structured}, it is an insufficient assumption for many other applications where the cost functions do not satisfy submodularity, e.g., structured sparse learning~\citep{El-2015-Totally}, batch Bayesian optimization~\citep{Gonzalez-2016-Batch, Bogunovic-2016-Truncated}, Bayesian A-optimal experimental design~\citep{Bian-2017-Guarantees}, column subset selection~\citep{Sviridenko-2017-Optimal} and so on. In this work we aim to fill in this gap. In view of all this, we consider the following question: \begin{center} \textbf{Can we design online learning algorithms when the cost functions are nonsubmodular?} \end{center} This paper provides an affirmative answer to this question by demonstrating that the online/bandit approximate gradient descent algorithms can be directly extended from online submodular minimization~\citep{Hazan-2012-Online} to online nonsubmodular minimization when each cost function $f_t$ satisfies the regularity condition in~\citet{El-2020-Optimal}.
Moreover, in online decision-making there is often a significant delay between decision and feedback. This delay obscures the relationship between an agent's decisions and the feedback they generate~\citep{Quanrud-2015-Online, Heliou-2020-Gradient}. For example, a click on an ad can be observed within seconds of the ad being displayed, but the corresponding sale can take hours or days to occur. We extend all of our algorithms to the delayed feedback setting by leveraging a pooling strategy recently introduced by~\citet{Heliou-2020-Gradient} into the framework of online/bandit approximate gradient descent.
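The pooling idea can be sketched in a few lines. The following is our own minimal illustration on a continuous relaxation over $[0,1]^n$ with projected gradient steps (function names and the feedback schedule are hypothetical; this is not the paper's algorithm): feedback that arrives late is simply buffered, and all gradients received in round $t$ are applied together.

```python
import numpy as np

def delayed_pgd(grads, arrivals, n, rounds, step=0.1):
    """Projected gradient descent on [0,1]^n with delayed, pooled feedback.

    grads:    dict round -> gradient (np.ndarray) generated in that round.
    arrivals: dict round -> list of earlier rounds whose feedback arrives now.
    """
    x = np.full(n, 0.5)                       # start at the center of the cube
    for t in range(rounds):
        # Pool every gradient that becomes available in round t ...
        pooled = sum((grads[s] for s in arrivals.get(t, [])), np.zeros(n))
        # ... take one step with the pooled gradient, then project back.
        x = np.clip(x - step * pooled, 0.0, 1.0)
    return x

# Feedback from rounds 0 and 1 only arrives in round 2.
g = {0: np.array([1.0, -1.0]), 1: np.array([1.0, 1.0])}
arr = {2: [0, 1]}
print(delayed_pgd(g, arr, n=2, rounds=3))     # one pooled update: [0.3 0.5]
```

The point of the analysis is that, as long as the delays grow slowly enough, the pooled steps behave like slightly stale gradient steps and the regret guarantees degrade gracefully.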
\paragraph{Contribution.} First, we introduce a new notion of $(\alpha, \beta)$-regret which allows for analyzing no-regret online learning algorithms when the loss functions are nonsubmodular. We then propose two randomized algorithms, for the full-information and bandit feedback settings respectively, with regret bounds both in expectation and with high probability. We then combine the aforementioned algorithms with the pooling strategy of~\citet{Heliou-2020-Gradient} and prove that the resulting algorithms are no-regret even when the delays are unbounded (cf. Assumption~\ref{Assumption:delay}). Specifically, when the delay $d_t$ satisfies $d_t = o(t^{\gamma})$, we establish an $O(\sqrt{nT^{1+\gamma}})$ regret bound in the full-information setting and an $O(nT^{\frac{2+\gamma}{3}})$ regret bound in the bandit feedback setting. To our knowledge, this is the first theoretical guarantee for no-regret learning in online nonsubmodular minimization with delayed costs. Experimental results on sparse learning with synthetic data confirm our theoretical findings.
It is worth comparing our results with those in existing works~\citep{El-2020-Optimal, Hazan-2012-Online, Heliou-2020-Gradient}. First of all, the results concerning online nonsubmodular minimization are not a straightforward consequence of~\citet{El-2020-Optimal}. Indeed, it is natural yet nontrivial to identify the notion of $(\alpha,\beta)$-regret under which formal guarantees can be established for the nonsubmodular case. This notion has not appeared before and constitutes, we believe, a novel and interesting conceptual contribution. Further, our results provide the first theoretical guarantee for no-regret learning in online and bandit nonsubmodular minimization and generalize the results in~\citet{Hazan-2012-Online}. Even though the online and bandit learning algorithms and regret analysis are similar in spirit to those of~\citet{Hazan-2012-Online}, the proof techniques are different since we need to deal with the nonsubmodular case with $(\alpha,\beta)$-regret. Finally, we are not aware of any results on online and bandit combinatorial optimization with delayed costs. \citet{Heliou-2020-Gradient} focused on gradient-free game-theoretic learning with delayed costs where the action sets are \textit{continuous} and \textit{bounded}; thus, their results do not imply ours. The only component the two works share is the pooling strategy, which has been a common algorithmic ingredient for handling delays. Even though the pooling strategy is crucial to our delayed algorithms, considerable effort is required to combine the components properly and to prove the $(\alpha,\beta)$-regret bounds of our new algorithms.
\paragraph{Notation.} We let $[n]$ be the set $\{1, 2, \ldots, n\}$ and $\mathbb{R}_+^n$ be the set of all vectors in $\mathbb{R}^n$ with nonnegative components. We denote by $2^{[n]}$ the set of all subsets of $[n]$. For a set $S \subseteq [n]$, we let $\chi_S \in \{0, 1\}^n$ be the characteristic vector satisfying $\chi_S(i) = 1$ for each $i \in S$ and $\chi_S(i) = 0$ for each $i \notin S$. For a function $f: 2^{[n]} \mapsto \mathbb{R}$, we denote the marginal gain of adding an element $i$ to $S$ by $f(i \mid S) = f(S \cup \{i\}) - f(S)$. In addition, $f$ is normalized if $f(\emptyset) = 0$ and nondecreasing if $f(A) \leq f(B)$ for all $A \subseteq B$. For a vector $x \in \mathbb{R}^n$, we write $\|x\|$ for its Euclidean norm and $x_i$ for its $i$-th entry. We denote the support set of $x$ by $\textnormal{supp}(x) = \{i \in [n]: x_i \neq 0\}$ and, by abuse of notation, we let $x$ define a set function $x(S) = \sum_{i \in S} x_i$. We let $P_S$ be the projection onto a closed set $S$ and $\textnormal{dist}(x, S) = \inf_{y \in S} \|x - y\|$ be the distance between $x$ and $S$. A pair of parameters $(\alpha, \beta) \in \mathbb{R}_+ \times \mathbb{R}_+$ in the regret refers to the approximation factors of the corresponding offline setting. Lastly, $a = O(b(\alpha, \beta, n, T))$ refers to an upper bound $a \leq C \cdot b(\alpha, \beta, n, T)$ where $C > 0$ is independent of $\alpha$, $\beta$, $n$ and $T$.
\section{Related Work} Offline nonsubmodular optimization with different notions of approximate submodularity has recently received a lot of attention. Most research has focused on the maximization of nonsubmodular set functions, which has emerged as an important paradigm for studying real-world applications~\citep{Das-2011-Submodular, Horel-2016-Maximization, Chen-2018-Weakly, Kuhnle-2018-Fast, Hassidim-2018-Optimization, Elenberg-2018-Restricted, Harshaw-2019-Submodular}. In contrast, we are aware of relatively few investigations into the minimization of nonsubmodular set functions. An interesting example is the ratio problem~\citep{Bai-2016-Algorithms}, where the objective function to be minimized is the ratio of two set functions and is thus nonsubmodular in general. Note that the ratio problem does not admit a constant-factor approximation even when the two set functions are submodular~\citep{Svitkina-2011-Submodular}. However, if the objective function to be minimized is approximately modular with bounded curvature, optimal approximation algorithms exist even in the presence of constraint sets~\citep{Iyer-2013-Curvature}. Another typical example is the minimization of the difference of two submodular functions, for which approximation algorithms were proposed in~\citet{Iyer-2012-Algorithms} and~\citet{Kawahara-2015-Approximate}, but without any approximation guarantee. Very recently,~\citet{El-2020-Optimal} provided a comprehensive treatment of optimal approximation guarantees for minimizing nonsubmodular set functions, characterized by how close the function is to being submodular. Our work is close to theirs and our results can be interpreted as an extension of~\citet{El-2020-Optimal} to online learning with delayed feedback.
Another line of relevant work comes from the online learning literature and focuses on no-regret algorithms in different settings with delayed costs. In the context of online convex optimization,~\citet{Quanrud-2015-Online} proposed an extension of online gradient descent (OGD) in which the agent performs a batched gradient update the moment gradients are received, and proved that OGD achieves a regret bound of $O(\sqrt{T + D_T})$ where $D_T$ is the total delay over a horizon $T$. However, their batch-update approach cannot be extended to bandit convex optimization since it does not work with stochastic estimates of the received gradient information (or when attempting to infer such information from realized costs). This issue was raised by~\citet{Zhou-2017-Countering} and recently resolved by~\citet{Heliou-2020-Gradient}, who proposed a new pooling strategy based on a priority queue. The effect of delay was also discussed in the multi-armed bandit (MAB) literature under different assumptions~\citep{Joulani-2013-Online, Joulani-2016-Delay, Vernade-2017-Stochastic, Pike-2018-Bandits, Thune-2019-Nonstochastic, Bistritz-2019-Online, Zhou-2019-Learning, Zimmert-2020-Optimal, Gyorgy-2021-Adapting}. In particular,~\citet{Thune-2019-Nonstochastic} proved a regret bound in adversarial MABs in terms of the cumulative delay and~\citet{Gyorgy-2021-Adapting} studied adaptive tuning to delays and data in this setting. Further,~\citet{Joulani-2016-Delay} and~\citet{Zimmert-2020-Optimal} also investigated adaptive tuning to the unknown sum of delays, while~\citet{Bistritz-2019-Online} and~\citet{Zhou-2019-Learning} gave further results in adversarial and linear contextual bandits respectively. However, the algorithms developed in the aforementioned works have little to do with online nonsubmodular minimization with delayed costs.
\section{Preliminaries and Technical Background}\label{sec:prelim} We present the basic setup for minimizing structured nonsubmodular functions, including motivating examples and the convex relaxation based on the Lov\'{a}sz extension. We then extend the offline setting to the online setting and define the $(\alpha, \beta)$-regret, which is important to the subsequent analysis.
\subsection{Structured nonsubmodular function} Minimizing a set function $f: 2^{[n]} \mapsto \mathbb{R}$ is NP-hard in general but can be solved exactly for \textit{submodular} $f$ in polynomial time~\citep{Iwata-2003-Faster, Grotschel-2012-Geometric, Lee-2015-Faster} and in strongly polynomial time~\citep{Schrijver-2000-Combinatorial, Iwata-2001-Combinatorial, Iwata-2009-Simple, Orlin-2009-Faster, Lee-2015-Faster}. More specifically, $f$ is submodular if it satisfies the following diminishing returns (DR) property, \begin{equation}\label{def:submodular} f(i \mid A) \geq f(i \mid B), \quad \textnormal{for all } A \subseteq B, \ i \in [n] \setminus B. \end{equation} Further, $f$ is modular if the inequality in Eq.~\eqref{def:submodular} holds with equality and is supermodular if \begin{equation*} f(i \mid B) \geq f(i \mid A), \quad \textnormal{for all } A \subseteq B, \ i \in [n] \setminus B. \end{equation*} Relaxing these inequalities yields the notions of weak DR-submodularity/supermodularity, which were introduced by~\citet{Lehmann-2006-Combinatorial} and revisited in the machine learning literature~\citep{Bian-2017-Guarantees}. Formally, we have \begin{definition} A set function $f: 2^{[n]} \mapsto \mathbb{R}$ is $\alpha$-weakly DR-submodular with $\alpha > 0$ if \begin{equation*} f(i \mid A) \geq \alpha f(i \mid B), \quad \textnormal{for all } A \subseteq B, \ i \in [n] \setminus B. \end{equation*} Similarly, $f$ is $\beta$-weakly DR-supermodular with $\beta > 0$ if \begin{equation*} f(i \mid B) \geq \beta f(i \mid A), \quad \textnormal{for all } A \subseteq B, \ i \in [n] \setminus B. \end{equation*} We say that $f$ is $(\alpha, \beta)$-weakly DR-modular if both of the above inequalities hold. \end{definition} The above notions of weak DR-submodularity (or weak DR-supermodularity) generalize the notions of submodularity (or supermodularity); indeed, $f$ is submodular (or supermodular) if and only if $\alpha = 1$ (or $\beta = 1$).
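As a quick illustration (a sketch of our own, not from the paper), the largest admissible $\alpha$ for a small nondecreasing set function can be found by brute force over all pairs $A \subseteq B$; the toy function below, a modular part plus a small supermodular bonus, is a hypothetical example:

```python
from itertools import combinations

def subsets(ground):
    # enumerate all subsets of an iterable as frozensets
    items = list(ground)
    for r in range(len(items) + 1):
        yield from (frozenset(c) for c in combinations(items, r))

def weak_dr_alpha(f, n):
    """Largest alpha with f(i | A) >= alpha * f(i | B) over all A <= B, i not in B
    (assumes f is nondecreasing, so all marginal gains are nonnegative)."""
    ground = set(range(1, n + 1))
    alpha = 1.0
    for B in subsets(ground):
        for A in subsets(B):
            for i in ground - B:
                gain_A = f(A | {i}) - f(A)
                gain_B = f(B | {i}) - f(B)
                if gain_B > 0:
                    alpha = min(alpha, gain_A / gain_B)
    return alpha

# toy function: modular part plus a small supermodular "bonus" term
eps = 0.5
f = lambda S: len(S) + (eps if {1, 2} <= S else 0.0)
print(weak_dr_alpha(f, 3))  # 1 / (1 + eps) = 2/3
```

The bonus term makes adding element $2$ more valuable once $1$ is present, which is exactly what drives $\alpha$ below $1$.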
They are also special cases of more general notions of weak submodularity (or weak supermodularity)~\citep{Das-2011-Submodular} and we refer to~\citet[Proposition~1]{Bogunovic-2018-Robust} and~\citet[Proposition~8]{El-2018-Combinatorial} for the details. For an overview of the approximate submodularity, we refer to~\citet[Section~6]{Bian-2017-Guarantees} and~\citet[Figure~1]{El-2020-Optimal}. In addition, the parameters $1 - \alpha$ and $1 - \beta$ are referred to as \textit{generalized inverse curvature} and \textit{generalized curvature} respectively~\citep{Bian-2017-Guarantees, Bogunovic-2018-Robust} and can be interpreted as the extension of inverse curvature and curvature~\citep{Conforti-1984-Submodular} for submodular and supermodular functions. Intuitively, these parameters quantify how far the function $f$ is from being a submodular (or supermodular) function.
Recently,~\citet{El-2020-Optimal} proposed and studied the problem of minimizing a class of structured nonsubmodular functions of the form \begin{equation}\label{prob:offline} \min_{S \subseteq [n]} \ f(S) := \bar{f}(S) - \ushort{f}(S), \end{equation} where $\bar{f}$ and $\ushort{f}$ are both normalized (i.e., $\bar{f}(\emptyset) = \ushort{f}(\emptyset) = 0$)\footnote{In general, we can let $\bar{f}(S) \leftarrow \bar{f}(S) - \bar{f}(\emptyset)$ and $\ushort{f}(S) \leftarrow \ushort{f}(S) - \ushort{f}(\emptyset)$ which will not change the minimization problem.} and nondecreasing, $\bar{f}$ is $\alpha$-weakly DR-submodular and $\ushort{f}$ is $\beta$-weakly DR-supermodular. Note that the problem in Eq.~\eqref{prob:offline} is challenging; indeed, $f$ is in general neither weakly DR-submodular nor weakly DR-supermodular since weak DR-submodularity (or weak DR-supermodularity) is only defined for monotone functions.
It is worth mentioning that Eq.~\eqref{prob:offline} is not merely a theoretical construct but encompasses a wide range of applications. We present two typical examples that can be formulated in the form of Eq.~\eqref{prob:offline} and refer to~\citet[Section~4]{El-2020-Optimal} for more details.
\begin{example}[Structured Sparse Learning]\label{def:SSL} We aim to estimate a sparse parameter vector whose support satisfies a particular structure and commonly formulate such problems as $\min_{x \in \mathbb{R}^n} \ell(x) + \lambda f(\textnormal{supp}(x))$, where $\ell: \mathbb{R}^n \mapsto \mathbb{R}$ is a loss function and $f: 2^{[n]} \mapsto \mathbb{R}$ is a set function favoring the desirable supports. Existing approaches such as~\citet{Bach-2010-Structured} proposed to replace the discrete regularization function $f(\textnormal{supp}(x))$ by its closest convex relaxation, which is computationally tractable only when $f$ is submodular. However, this problem is often better modeled by a nonsubmodular regularizer in practice~\citep{El-2015-Totally}. An alternative formulation of structured sparse learning problems is \begin{equation}\label{prob:SSL} \min_{S \subseteq [n]} \lambda f(S) - h(S), \end{equation} where $h(S) = \ell(0) - \min_{\textnormal{supp}(x) \subseteq S} \ell(x)$. Note that Eq.~\eqref{prob:SSL} can be reformulated into the form of Eq.~\eqref{prob:offline} under certain conditions; indeed, $h$ is a normalized and nondecreasing function and~\citet[Proposition~5]{El-2020-Optimal} have shown that $h$ is weakly DR-modular if $\ell$ is smooth, strongly convex and generated from random data. Examples of weakly DR-submodular regularizers $f$ include the ones used in time series and cancer diagnosis~\citep{Rapaport-2008-Classification} and healthcare~\citep{Sakaue-2019-Greedy}. \end{example} \begin{example}[Batch Bayesian Optimization]
We aim to optimize an unknown expensive-to-evaluate noisy function $\ell$ with as few batches of function evaluations as possible. The evaluation points are chosen to maximize an acquisition function -- the variance reduction function~\citep{Gonzalez-2016-Batch} -- subject to a cardinality constraint. Maximizing the variance reduction may be phrased as a special instance of the problems in Eq.~\eqref{prob:offline} in the form of $\min_{S \subseteq [n]} \lambda |S| - G(S)$, where $G: 2^{[n]} \mapsto \mathbb{R}$ is the variance reduction function defined accordingly, and~\citet[Proposition~6]{El-2020-Optimal} have shown that it is also nondecreasing and weakly DR-modular. This formulation allows one to include nonlinear costs with (weakly) decreasing marginal costs (economies of scale), with applications in sensor placement. \end{example}
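To make the reduction in Example~\ref{def:SSL} concrete, here is a minimal sketch (a toy instance of our own, not from the paper) that evaluates $h(S) = \ell(0) - \min_{\textnormal{supp}(x) \subseteq S} \ell(x)$ for a least-squares loss by solving the restricted problem in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)

def h(S, A, b):
    """h(S) = l(0) - min_{supp(x) in S} l(x) for the least-squares loss
    l(x) = 0.5 * ||A x - b||^2, solved on the restricted support S."""
    l0 = 0.5 * float(b @ b)                   # l(0) = 0.5 * ||b||^2
    cols = sorted(S)
    if not cols:
        return 0.0                            # h is normalized: h(empty) = 0
    AS = A[:, cols]
    xS, *_ = np.linalg.lstsq(AS, b, rcond=None)
    return l0 - 0.5 * float(np.linalg.norm(AS @ xS - b) ** 2)

A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
# h is nondecreasing: a larger support can only reduce the restricted loss
print(h(set(), A, b), h({0, 2}, A, b) <= h({0, 1, 2}, A, b) + 1e-9)
```

The monotonicity check reflects the property the text relies on; the weak DR-modularity of $h$ is a statistical statement from~\citet[Proposition~5]{El-2020-Optimal} and is not verified here.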
\subsection{Convex relaxation based on the Lov\'{a}sz extension} The Lov\'{a}sz extension~\citep{Lovasz-1983-Submodular} is a tool commonly used for minimizing a submodular set function $f: 2^{[n]} \mapsto \mathbb{R}$. It is a continuous interpolation of $f$ on the unit hypercube $[0, 1]^n$ and can be minimized efficiently since it is \textit{convex} if and only if $f$ is submodular. Moreover, the minima of the Lov\'{a}sz extension recover the minima of $f$.
Before the formal argument, we define a maximal chain of $[n]$; that is, $\{A_0, \ldots, A_n\}$ is a maximal chain if $\emptyset = A_0 \subsetneq A_1 \subsetneq \ldots \subsetneq A_n = [n]$. Formally, we have \begin{definition}\label{def:Lovasz} Given a submodular function $f$, the Lov\'{a}sz extension is the function $f_L: [0, 1]^n \mapsto \mathbb{R}$ given by $f_L(x) = \sum_{i=0}^n \lambda_i f(A_i)$ where $\{A_0, \ldots, A_n\}$ is a maximal chain\footnote{Both the chain and the set of $\lambda_i$ may depend on the input $x$.} of $[n]$ such that $\sum_{i=0}^n \lambda_i \chi_{A_i} = x$ and $\sum_{i=0}^n \lambda_i = 1$, where $\chi_{A_i}(j) = 1$ for all $j \in A_i$ and $\chi_{A_i}(j) = 0$ for all $j \notin A_i$. \end{definition} Even though Definition~\ref{def:Lovasz} implies that $f_L(\chi_S) = f(S)$ for all $S \subseteq [n]$, it does not specify how to find the chain or the coefficients. The discussion below defines the Lov\'{a}sz extension in an equivalent way that is more amenable to computing the subgradient of $f_L$.
Let $x = (x_1, x_2, \ldots, x_n) \in [0, 1]^n$ and let $\pi: [n] \mapsto [n]$ be the sorting permutation of $\{x_1, x_2, \ldots, x_n\}$, where $\pi(i) = j$ means that $x_j$ is the $i$-th largest element. By definition, we have $1 \geq x_{\pi(1)} \geq \ldots \geq x_{\pi(n)} \geq 0$, and we let $x_{\pi(0)} = 1$ and $x_{\pi(n+1)} = 0$ for simplicity. Then, we set $\lambda_i = x_{\pi(i)} - x_{\pi(i+1)}$ for all $0 \leq i \leq n$ and let $A_0 = \emptyset$ and $A_i = \{\pi(1), \ldots, \pi(i)\}$ for all $i \in [n]$. Since $\chi_{A_0} = 0$ and $\chi_{A_i} = \chi_{A_{i-1}} + e_{\pi(i)}$ for $i \in [n]$, we have \begin{eqnarray*} \lefteqn{\sum_{i=0}^n \lambda_i \chi_{A_i} = \sum_{i=1}^n (x_{\pi(i)} - x_{\pi(i+1)})(\chi_{A_{i-1}} + e_{\pi(i)})} \\ & = & \sum_{i=1}^n e_{\pi(i)}\sum_{j=i}^n (x_{\pi(j)} - x_{\pi(j+1)}) = x. \end{eqnarray*} As such, we obtain that $f_L(x) = \sum_{i=1}^n x_{\pi(i)} f(\pi(i) \mid A_{i-1})$ where $x_{\pi(1)} \geq x_{\pi(2)} \geq \ldots \geq x_{\pi(n)}$ are the sorted entries in decreasing order, $A_0 = \emptyset$ and $A_i = \{\pi(1), \ldots, \pi(i)\}$ for all $i \in [n]$. Then, the classical results~\citep{Edmonds-2003-Submodular, Fujishige-2005-Submodular} imply that a subgradient $g$ of $f_L$ at any $x \in [0, 1]^n$ can be computed by simply sorting the entries in decreasing order and taking \begin{equation} g_{\pi(i)} = f(A_i) - f(A_{i-1}), \textnormal{ for all } i \in [n]. \end{equation} Since $f_L$ is convex if and only if $f$ is submodular, the convex optimization toolbox applies. Recently,~\citet{El-2020-Optimal} have shown that a similar idea can be extended to the nonsubmodular optimization problem in Eq.~\eqref{prob:offline}.
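The sorting characterization translates directly into code. The following sketch (with a toy graph-cut function of our own choosing as a sanity check) computes $f_L(x) = g^\top x$ together with the subgradient $g$:

```python
import numpy as np

def lovasz_subgradient(f, x):
    """Value f_L(x) and a subgradient g of the Lovasz extension at x via the
    sorting characterization: g_{pi(i)} = f(A_i) - f(A_{i-1})."""
    x = np.asarray(x, dtype=float)
    pi = np.argsort(-x, kind="stable")         # indices sorted by decreasing x
    g = np.zeros(len(x))
    chain, prev = frozenset(), f(frozenset())  # A_0 = empty set
    for i in pi:
        chain = chain | {int(i)}               # A_i = A_{i-1} + {pi(i)}
        val = f(chain)
        g[int(i)] = val - prev
        prev = val
    return float(g @ x), g

# sanity check on the cut function of the path graph 0-1-2 (submodular)
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
fL, g = lovasz_subgradient(cut, [1.0, 0.0, 1.0])
print(fL)  # f_L(chi_S) = cut(S) = 2 for S = {0, 2}
```

At the characteristic vector of $S = \{0, 2\}$ the extension agrees with the set function, matching $f_L(\chi_S) = f(S)$ from Definition~\ref{def:Lovasz}.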
More specifically, we can define the convex closure $f_C$ for any nonsubmodular function $f$; indeed, $f_C: [0, 1]^n \mapsto \mathbb{R}$ is the point-wise largest convex function that lower bounds $f$. By definition, $f_C$ is the \textit{tightest} convex extension of $f$ and $\min_{S \subseteq [n]} f(S) = \min_{x \in [0, 1]^n} f_C(x)$. In general, it is NP-hard to evaluate and optimize $f_C$~\citep{Vondrak-2007-Submodularity}. Fortunately,~\citet{El-2020-Optimal} demonstrated that the Lov\'{a}sz extension $f_L$ approximates $f_C$ and that the vector computed using the approach of~\citet{Edmonds-2003-Submodular} and~\citet{Fujishige-2005-Submodular} approximates a subgradient of $f_C$. We summarize their results in the following proposition and provide the proofs in Appendix~\ref{app:structure} for completeness. \begin{proposition}\label{Prop:structure} Focusing on Eq.~\eqref{prob:offline}, we let $x \in [0, 1]^n$ with $x_{\pi(1)} \geq \ldots \geq x_{\pi(n)}$ and set $A_0 = \emptyset$, $A_i = \{\pi(1), \ldots, \pi(i)\}$ and $g_{\pi(i)} = f(A_i) - f(A_{i-1})$ for all $i \in [n]$. Then, we have \begin{equation}\label{Prop:structure-first} f_L(x) = g^\top x \geq f_C(x), \end{equation} and \begin{equation}\label{Prop:structure-second} g(A) = \sum_{i \in A} g_i \leq \tfrac{1}{\alpha}\bar{f}(A) - \beta\ushort{f}(A), \textnormal{ for all } A \subseteq [n], \end{equation} and \begin{equation}\label{Prop:structure-third} g^\top z \leq \tfrac{1}{\alpha}\bar{f}_C(z) + \beta (-\ushort{f})_C(z), \textnormal{ for all } z \in [0, 1]^n. \end{equation} \end{proposition} Proposition~\ref{Prop:structure} highlights how $f_L$ approximates $f_C$; indeed, we see from Eq.~\eqref{Prop:structure-first} and Eq.~\eqref{Prop:structure-third} that $f_C(x) \leq f_L(x) \leq \tfrac{1}{\alpha}\bar{f}_C(x) + \beta (-\ushort{f})_C(x)$ for all $x \in [0, 1]^n$.
As such, it gives the key insight for analyzing the offline algorithms in~\citet{El-2020-Optimal} and will play an important role in the subsequent analysis of our paper.
\subsection{Online nonsubmodular minimization} We consider online nonsubmodular minimization, which extends the offline problem in Eq.~\eqref{prob:offline} to the online setting. In particular, an adversary first chooses structured nonsubmodular functions $f_1, f_2, \ldots, f_T: 2^{[n]} \mapsto \mathbb{R}$ given by \begin{equation}\label{prob:online}\small f_t(S) := \bar{f}_t(S) - \ushort{f}_t(S), \textnormal{ for all } S \subseteq [n], \ t \in [T], \end{equation} where $\bar{f}_t$ and $\ushort{f}_t$ are normalized and nondecreasing, $\bar{f}_t$ is $\alpha$-weakly DR-submodular and $\ushort{f}_t$ is $\beta$-weakly DR-supermodular. In each round $t = 1, 2,\ldots, T$, the agent chooses $S^t$ and observes the incurred loss $f_t(S^t)$ after committing to her decision. Over the horizon, the agent aims to minimize the regret -- the difference between $\sum_{t=1}^T f_t(S^t)$ and the loss at the best fixed solution in hindsight, i.e., $S_\star^T = \mathop{\rm argmin}_{S \subseteq [n]} \sum_{t=1}^T f_t(S)$ -- which is defined by\footnote{If the sets $S^t$ are chosen by a randomized algorithm, we consider the expected regret over the randomness.} \begin{equation}\label{eq:regret}\small R(T) = \sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T f_t(S_\star^T). \end{equation} An algorithm is \textit{no-regret} if $R(T)/T \rightarrow 0$ as $T \rightarrow +\infty$ and \textit{efficient} if it computes each decision set $S^t$ in polynomial time. In this context, the standard regret is appropriate only when the minimization of a known cost, i.e., $\min_{S \subseteq [n]} f(S)$, can be solved exactly. However, the optimization problem in Eq.~\eqref{prob:offline} with nonsubmodular costs is NP-hard to approximate within any multiplicative constant factor~\citep{Iyer-2012-Algorithms, Trevisan-2014-Inapproximability}. Thus, it is necessary to consider a bicriteria-like approximation guarantee with the factors $\alpha, \beta > 0$, as~\citet{El-2020-Optimal} suggested.
In particular, $(\alpha, \beta)$ are bounds on the quality of a solution $S$ returned by a given offline algorithm compared to the optimal solution $S_\star$; that is, $f(S) \leq \frac{1}{\alpha}\bar{f}(S_\star) - \beta \ushort{f}(S_\star)$. Such a bicriteria-like approximation is optimal:~\citet[Theorem~2]{El-2020-Optimal} have shown that no algorithm with a subexponential number of value queries can improve on it in the oracle model.
Our goal is to analyze \textit{online approximate gradient descent algorithm and its bandit variant} for online nonsubmodular minimization. Let $(\alpha, \beta)$ be the approximation factors attained by an offline algorithm that solves $\min_{S \subseteq [n]} f(S)$ for a known nonsubmodular function $f$ in Eq.~\eqref{prob:offline}. The \textit{$(\alpha, \beta)$-regret} compares to the best solution that can be expected in polynomial time and is defined by \begin{equation}\label{eq:alpha-beta-regret}\small R_{\alpha, \beta}(T) = \sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T (\tfrac{1}{\alpha}\bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T)), \end{equation} where $S_\star^T = \mathop{\rm argmin}_{S \subseteq [n]} \sum_{t=1}^T f_t(S)$. It is analogous to the $\alpha$-regret which is widely used in online constrained submodular minimization~\citep{Jegelka-2011-Online} and online submodular maximization~\citep{Streeter-2008-Online}.
As mentioned before, we consider algorithmic design in both the \textit{full information} and \textit{bandit feedback} settings. In the former, the agent has unlimited access to a value oracle for $f_t(\cdot)$ after choosing $S^t$ in each round $t$. In the latter, the agent only observes the loss incurred at the point she has chosen in each round $t$, i.e., $f_t(S^t)$, and receives no other information.
\section{Online Approximation Algorithm}\label{sec:standard} We analyze the online approximate gradient descent algorithm and its bandit variant for regret minimization when the nonsubmodular cost functions are of the form of Eq.~\eqref{prob:online}. Due to space limits, we defer the proofs to Appendices~\ref{app:OGD} and~\ref{app:BGD}. \begin{algorithm}[!t] \caption{Online Approximate Gradient Descent}\label{alg:OGD} \begin{algorithmic}[1] \STATE \textbf{Initialization:} the point $x^1 \in [0, 1]^n$ and the stepsize $\eta > 0$; \FOR{$t = 1, 2, \ldots$} \STATE Let $x_{\pi(1)}^t \geq \ldots \geq x_{\pi(n)}^t$ be the sorted entries in decreasing order with $A_i^t = \{\pi(1), \ldots, \pi(i)\}$ for all $i \in [n]$ and $A_0^t = \emptyset$. Let $x_{\pi(0)}^t = 1$ and $x_{\pi(n+1)}^t = 0$. \STATE Let $\lambda_i^t = x_{\pi(i)}^t - x_{\pi(i+1)}^t$ for all $0 \leq i \leq n$. \STATE Sample $S^t$ from the distribution $\mathbb{P}(S^t = A_i^t) = \lambda_i^t$ for all $0 \leq i \leq n$ and observe the new loss function $f_t$. \STATE Compute $g_{\pi(i)}^t = f_t(A_i^t) - f_t(A_{i-1}^t)$ for all $i \in [n]$. \STATE Compute $x^{t+1} = P_{[0, 1]^n}(x^t - \eta g^t)$. \ENDFOR \end{algorithmic} \end{algorithm}
\subsection{Full information setting} Let $[0, 1]^n$ be the unit hypercube; the cost function on $[0, 1]^n$ corresponding to $f_t$ is the convex closure $(f_t)_C$ of $f_t$. Equipped with Proposition~\ref{Prop:structure}, we can compute approximate subgradients of $(f_t)_C$ so that online gradient descent~\citep{Zinkevich-2003-Online} is applicable.
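One round of this scheme can be sketched as follows (a hedged sketch in Python with a toy modular loss of our own choosing; the chain, sampling distribution and projection follow the description above):

```python
import numpy as np

rng = np.random.default_rng(0)

def ogd_round(f, x, eta):
    """One round of the full-information scheme: sample S^t from the chain
    distribution encoded by x, then take a projected subgradient step."""
    n = len(x)
    pi = np.argsort(-x, kind="stable")               # decreasing order of entries
    xs = np.concatenate(([1.0], x[pi], [0.0]))
    lam = xs[:-1] - xs[1:]                           # lam_i = x_{pi(i)} - x_{pi(i+1)}
    chains = [frozenset(int(j) for j in pi[:i]) for i in range(n + 1)]  # A_0..A_n
    S = chains[rng.choice(n + 1, p=lam)]             # P(S^t = A_i) = lam_i
    vals = [f(A) for A in chains]
    g = np.zeros(n)
    for i in range(1, n + 1):
        g[pi[i - 1]] = vals[i] - vals[i - 1]         # g_{pi(i)} = f(A_i) - f(A_{i-1})
    x_next = np.clip(x - eta * g, 0.0, 1.0)          # projection onto [0, 1]^n
    return S, x_next

f = lambda S: float(len(S))                          # toy modular loss
S, x1 = ogd_round(f, np.array([0.9, 0.2, 0.5]), eta=0.1)
print(x1)  # every marginal is 1, so x1 = [0.8, 0.1, 0.4]
```

Note that the sampling distribution is built from $x^t$ alone, before $f_t$ is revealed, which mirrors the online validity of the scheme.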
This leads to Algorithm~\ref{alg:OGD}, which performs a one-step projected gradient descent that yields $x^t$ and then samples $S^t$ from the distribution $\lambda^t$ over $\{A_i^t\}_{i= 0}^n$ encoded by $x^t$. It is worth mentioning that $\lambda_i^t = x_{\pi(i)}^t - x_{\pi(i+1)}^t$ for all $0 \leq i \leq n$, so $\lambda^t$ is completely independent of $f_t$. This guarantees that Algorithm~\ref{alg:OGD} is valid in the online setting since $f_t$ is realized after the decision maker chooses $S^t$. One of the advantages of Algorithm~\ref{alg:OGD} is that it does not require the values of $\alpha$ and $\beta$, which can be hard to compute in practice. We summarize our results for Algorithm~\ref{alg:OGD} in the following theorem. \begin{theorem}\label{Theorem:OGD} Suppose the adversary chooses nonsubmodular functions in Eq.~\eqref{prob:online} satisfying $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$. Fixing $T \geq 1$ and letting $\eta = \frac{\sqrt{n}}{L\sqrt{T}}$ in Algorithm~\ref{alg:OGD}, we have ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(\sqrt{nT})$ and $R_{\alpha, \beta}(T) = O(\sqrt{nT} + \sqrt{T\log(1/\delta)})$ with probability $1 - \delta$. \end{theorem} \begin{remark} Theorem~\ref{Theorem:OGD} demonstrates that Algorithm~\ref{alg:OGD} is regret-optimal for our setting; indeed, our setting includes online unconstrained submodular minimization as a special case where the $(\alpha, \beta)$-regret becomes the standard regret in Eq.~\eqref{eq:regret}, and~\citet{Hazan-2012-Online} show that Algorithm~\ref{alg:OGD} is optimal up to constants in that case. Our theoretical result also extends the results of~\citet{Hazan-2012-Online} from submodular cost functions to the nonsubmodular cost functions in Eq.~\eqref{prob:online} using the $(\alpha, \beta)$-regret instead of the standard regret in Eq.~\eqref{eq:regret}.
\end{remark} \begin{algorithm}[!t] \caption{Bandit Approximate Gradient Descent}\label{alg:BGD} \begin{algorithmic}[1] \STATE \textbf{Initialization:} the point $x^1 \in [0, 1]^n$, the stepsize $\eta > 0$ and the exploration probability $\mu \in (0,1)$. \FOR{$t = 1, 2, \ldots, T$} \STATE Let $x_{\pi(1)}^t \geq \ldots \geq x_{\pi(n)}^t$ be the sorted entries in decreasing order with $A_i^t = \{\pi(1), \ldots, \pi(i)\}$ for all $i \in [n]$ and $A_0^t = \emptyset$. Let $x_{\pi(0)}^t = 1$ and $x_{\pi(n+1)}^t = 0$. \STATE Let $\lambda_i^t = x_{\pi(i)}^t - x_{\pi(i+1)}^t$ for all $0 \leq i \leq n$. \STATE Sample $S^t$ from the distribution $\mathbb{P}(S^t = A_i^t) = (1 - \mu)\lambda_i^t + \frac{\mu}{n+1}$ for all $0 \leq i \leq n$ and observe the loss $f_t(S^t)$. \STATE Compute $\hat{f}_i^t = \frac{\mathbf{1}(S^t = A^t_i)}{(1-\mu)\lambda_i^t + \mu/(n+1)}f_t(S^t)$ for all $0 \leq i \leq n$. \STATE Compute $\hat{g}^t_{\pi(i)} = \hat{f}_i^t - \hat{f}_{i-1}^t$ for all $i \in [n]$. \STATE Compute $x^{t+1} = P_{[0, 1]^n}(x^t - \eta \hat{g}^t)$. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Bandit feedback setting} In contrast with the full-information setting, the agent only observes the loss function $f_t$ at her action $S^t$, i.e., $f_t(S^t)$, in the bandit feedback setting. This is a more challenging setup since the agent never has full access to the loss function $f_t$ at round $t$.
Despite the bandit feedback, we can compute an unbiased estimator of the gradient $g^t$ in Algorithm~\ref{alg:OGD} using the technique of importance weighting and thereby implement a stochastic version of Algorithm~\ref{alg:OGD}. More specifically, we notice that $\hat{f}_i^t = \frac{\textbf{1}(S^t = A_i^t)}{\lambda_i^t} f_t(S^t)$ is unbiased for estimating $f_t(A_i^t)$ for all $0 \leq i \leq n$. Thus, $\hat{g}_{\pi(i)}^t = \hat{f}_i^t - \hat{f}_{i-1}^t$ for all $i \in [n]$ gives us an unbiased estimator of the gradient $g^t$. However, the variance of the estimator $\hat{g}^t$ can be undesirably large since the values of $\lambda_i^t$ may be arbitrarily small.
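A quick Monte-Carlo sanity check of this importance-weighted estimator (on a toy chain and distribution of our own choosing, not from the paper) confirms the unbiasedness:

```python
import numpy as np

rng = np.random.default_rng(1)

# chain A_0, A_1, A_2 with sampling probabilities lam_i = P(S = A_i)
chains = [frozenset(), frozenset({0}), frozenset({0, 1})]
lam = np.array([0.5, 0.3, 0.2])
f_vals = np.array([float(len(A) ** 2) for A in chains])   # toy loss f(A_i) = |A_i|^2

# hat f_i = 1(S = A_i) / lam_i * f(S); average over many independent rounds
rounds = 200000
js = rng.choice(len(chains), size=rounds, p=lam)          # sampled chain indices
counts = np.bincount(js, minlength=len(chains))
est = counts * (f_vals / lam) / rounds
print(est)  # approaches f_vals = [0, 1, 4]; variance grows as lam_i shrinks
```

The per-round estimate scales as $f_t(A_i)/\lambda_i$, which is why small $\lambda_i$ inflates the variance and motivates the mixture sampling discussed next.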
To resolve this issue, we can sample $S^t$ from a mixture distribution that combines (with probability $1-\mu$) samples from $\lambda^t$ and (with probability $\mu$) samples from the uniform distribution over $\{A_i^t\}_{i= 0}^n$. This guarantees that the variance of $\hat{f}_i^t$ is upper bounded by $O(n^2/\mu)$. A similar idea was employed in~\citet{Hazan-2012-Online} for online submodular minimization. We then conduct a careful analysis of the estimators $\hat{g}^t$ that takes the scale of the variance into account. Note that our analysis differs from the standard analysis in~\citet{Flaxman-2005-Online}, which is too coarse for our setting and yields a worse regret of $O(T^{3/4})$ compared to our result in the following theorem. \begin{theorem}\label{Theorem:BGD} Suppose the adversary chooses nonsubmodular functions $f_t$ in Eq.~\eqref{prob:online} satisfying $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$. Fixing $T \geq 1$ and letting $(\eta, \mu) = (\frac{1}{LT^{2/3}}, \frac{n}{T^{1/3}})$ in Algorithm~\ref{alg:BGD}, we have ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(nT^{\frac{2}{3}})$ and $R_{\alpha, \beta}(T) = O(nT^{\frac{2}{3}} + \sqrt{n\log(1/\delta)}T^{\frac{2}{3}})$ with probability $1-\delta$. \end{theorem} \begin{remark} Theorem~\ref{Theorem:BGD} demonstrates that Algorithm~\ref{alg:BGD} is no-regret for our setting even when only bandit feedback is available, further extending the results of~\citet{Hazan-2012-Online} from submodular cost functions to the nonsubmodular cost functions in Eq.~\eqref{prob:online} using the $(\alpha, \beta)$-regret instead of the standard regret in Eq.~\eqref{eq:regret}. \end{remark}
\section{Online Delayed Approximation Algorithm}\label{sec:delay} We investigate Algorithms~\ref{alg:OGD} and~\ref{alg:BGD} for regret minimization when there is a delay between choosing an action and receiving the incurred cost, and this delay can be unbounded.
\subsection{The general framework} The general online learning framework with delays that we consider can be represented as follows. In each round $t = 1, \ldots, T$, the agent chooses the decision $S^t \subseteq [n]$ and this generates a loss $f_t(S^t)$. Simultaneously, $S^t$ triggers a \textit{delay} $d_t \geq 0$ which determines the round $t + d_t$ at which the information about $f_t$ will be received. Finally, at round $t$ the agent receives the information about $f_s$ for all previous rounds $s \in \mathcal{R}_t = \{s: s + d_s = t\}$.
The above model has been stated in an abstract way as the basis for the regret analysis. The information received about each $f_s$ depends on whether the setting is full information or bandit feedback. Our blanket assumption on the stream of delays is: \begin{assumption}\label{Assumption:delay} The delays satisfy $d_t = o(t^\gamma)$ for some $\gamma < 1$. \end{assumption} Assumption~\ref{Assumption:delay} is not artificial: long delays are observed in practice~\citep{Chapelle-2014-Modeling}; indeed, data statistics from a real-time bidding company suggested that more than $10\%$ of the conversions were $\geq 2$ weeks old. More specifically,~\citet{Chapelle-2014-Modeling} showed that the delays in online advertising have long-tail distributions when conditioning on context and feature variables available to the advertiser, thus justifying the existence of unbounded delays. Note that Assumption~\ref{Assumption:delay} is mild and the delays can even be adversarial as in~\citet{Quanrud-2015-Online}. \begin{algorithm}[!t] \caption{Delayed Online Approximate Gradient Descent}\label{alg:DOGD} \begin{algorithmic}[1] \STATE \textbf{Initialization:} the point $x^1 \in [0, 1]^n$ and the stepsizes $\eta_t > 0$; $\mathcal{P}_0 \leftarrow \emptyset$ and $g^\infty = 0$. \FOR{$t = 1, 2, \ldots$} \STATE Let $x_{\pi(1)}^t \geq \ldots \geq x_{\pi(n)}^t$ be the sorted entries in decreasing order with $A_i^t = \{\pi(1), \ldots, \pi(i)\}$ for all $i \in [n]$ and $A_0^t = \emptyset$. Let $x_{\pi(0)}^t = 1$ and $x_{\pi(n+1)}^t = 0$. \STATE Let $\lambda_i^t = x_{\pi(i)}^t - x_{\pi(i+1)}^t$ for all $0 \leq i \leq n$. \STATE Sample $S^t$ from the distribution $\mathbb{P}(S^t = A_i^t) = \lambda_i^t$ for $0 \leq i \leq n$ and observe the new loss function $f_t$. \STATE Compute $g_{\pi(i)}^t = f_t(A_i^t) - f_t(A_{i-1}^t)$ for all $i \in [n]$ and then trigger a delay $d_t \geq 0$.
\STATE Let $\mathcal{R}_t = \{s: s + d_s = t\}$ and $\mathcal{P}_t \leftarrow \mathcal{P}_{t-1} \cup \mathcal{R}_t$. Take $q_t = \min \mathcal{P}_t$ and set $\mathcal{P}_t \leftarrow \mathcal{P}_t \setminus \{q_t\}$. \STATE Compute $x^{t+1}$ using Eq.~\eqref{def:DOGD_main}. \ENDFOR \end{algorithmic} \end{algorithm}
\subsection{Full information setting} At round $t$, the agent receives the loss function $f_s(\cdot)$ for each $s \in \mathcal{R}_t = \{s: s + d_s = t\}$ after committing her decision, i.e., gets to observe $f_s(A_i^s)$ for all $s \in \mathcal{R}_t$ and all $0 \leq i \leq n$. To let Algorithm~\ref{alg:OGD} handle these delays, the first thing to note is that the set $\mathcal{R}_t$ received at a given round might be empty, i.e., we could have $\mathcal{R}_t = \emptyset$ for some $t \geq 1$. Following the pooling strategy in~\citet{Heliou-2020-Gradient}, we assume that, as information is received over time, the agent adds it to an information pool $\mathcal{P}_t$ and then uses the oldest information available in the pool (where ``oldest'' refers to the time at which the information was generated).
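The pooling strategy can be sketched as a priority queue keyed by the round at which feedback was generated (a hedged sketch; the delay sequence below is a toy example of our own):

```python
import heapq

def pooled_updates(delays):
    """Sketch of the pooling strategy of Heliou et al. (2020): feedback from
    round s arrives at round s + d_s; at each round the oldest unused piece
    of feedback in the pool (if any) is consumed. Returns, per round t, the
    round whose feedback is used (None when the pool is empty)."""
    T = len(delays)
    arrivals = {}                      # t -> rounds whose feedback lands at t
    for s, d in enumerate(delays, start=1):
        arrivals.setdefault(s + d, []).append(s)
    pool, used = [], []
    for t in range(1, T + 1):
        for s in arrivals.get(t, []):
            heapq.heappush(pool, s)    # the batch R_t joins the pool
        used.append(heapq.heappop(pool) if pool else None)  # oldest round first
    return used

# rounds 1..5 with delays d = (2, 0, 3, 0, 0); round 3's feedback never
# arrives within the horizon, and round 1's pool is empty
print(pooled_updates([2, 0, 3, 0, 0]))  # [None, 2, 1, 4, 5]
```

The empty-pool rounds correspond exactly to the first case of the delayed update rule, where the iterate is left unchanged.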
Since no information is available at $t = 0$, we have $\mathcal{P}_0 = \emptyset$ and update the agent's information pool recursively: $\mathcal{P}_t = \mathcal{P}_{t-1} \cup \mathcal{R}_t \setminus \{q_t\}$ where $q_t = \min(\mathcal{P}_{t-1} \cup \mathcal{R}_t)$ denotes the oldest round from which the agent has unused information at round $t$. As~\citet{Heliou-2020-Gradient} pointed out, this scheme can be seen as a priority queue where $\{f_s(\cdot), s \in \mathcal{R}_t\}$ arrives at time $t$ and is assigned in order; subsequently, the oldest information is used first. An important issue that arises in the above computation is that the agent's information pool $\mathcal{P}_t$ may well be empty at time $t$ (e.g., if we have $d_1 > 0$ at time $t=1$). Following the convention that $\inf \emptyset = +\infty$, we set $q_t = +\infty$ and $g^\infty = 0$ (since it is impossible to have information at time $t=+\infty$). Under this convention, the computation of a new iterate $x^{t+1}$ at time $t$ can be written more explicitly as follows, \begin{equation}\label{def:DOGD_main} x^{t+1} = \left\{ \begin{array}{ll} x^t & \textnormal{if } \mathcal{P}_t = \emptyset, \\ P_{[0, 1]^n}(x^t - \eta_t g^{q_t}), & \textnormal{otherwise}. \end{array} \right. \end{equation} We present a delayed variant of Algorithm~\ref{alg:OGD} in Algorithm~\ref{alg:DOGD}. There is no information aggregation here, but the updates of $x^{t+1}$ follow the pooling policy induced by a priority queue. We summarize our results in the following theorem. \begin{theorem}\label{Theorem:DOGD} Suppose the adversary chooses nonsubmodular functions in Eq.~\eqref{prob:online} satisfying $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ and let the delays satisfy Assumption~\ref{Assumption:delay}.
Fixing $T \geq 1$ and letting $\eta_t = \frac{\sqrt{n}}{L\sqrt{t^{1+\gamma}}}$ in Algorithm~\ref{alg:DOGD}, we have ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(\sqrt{nT^{1+\gamma}})$ and $R_{\alpha, \beta}(T) = O(\sqrt{nT^{1+\gamma}} + \sqrt{T\log(1/\delta)})$ with probability $1 - \delta$. \end{theorem} \begin{remark} Theorem~\ref{Theorem:DOGD} demonstrates that Algorithm~\ref{alg:DOGD} is no-regret if Assumption~\ref{Assumption:delay} holds. To our knowledge, this is the first theoretical guarantee for no-regret learning in online nonsubmodular minimization with delayed costs; it also complements similar results for online convex optimization with delayed costs~\citep{Quanrud-2015-Online}. \end{remark} \begin{algorithm}[!t] \caption{Delayed Bandit Approximate Gradient Descent}\label{alg:DBGD} \begin{algorithmic}[1] \STATE \textbf{Initialization:} the point $x^1 \in [0, 1]^n$ and the stepsize $\eta_t > 0$; $\mathcal{P}_0 \leftarrow \emptyset$ and $f_\infty = 0$; the exploration probability $\mu_t \in (0, 1)$. \FOR{$t = 1, 2, \ldots$} \STATE Let $x_{\pi(1)}^t \geq \ldots \geq x_{\pi(n)}^t$ be the entries of $x^t$ sorted in decreasing order, with $A_i^t = \{\pi(1), \ldots, \pi(i)\}$ for all $i \in [n]$ and $A_0^t = \emptyset$. Let $x_{\pi(0)}^t = 1$ and $x_{\pi(n+1)}^t = 0$. \STATE Let $\lambda_i^t = x_{\pi(i)}^t - x_{\pi(i+1)}^t$ for all $0 \leq i \leq n$. \STATE Sample $S^t$ from the distribution $\mathbb{P}(S^t = A_i^t) = (1-\mu_t)\lambda_i^t + \frac{\mu_t}{n+1}$ for $0 \leq i \leq n$ and observe the loss $f_t(S^t)$. \STATE Compute $\hat{f}_i^t = \frac{\textbf{1}(S^t = A_i^t)}{(1-\mu_t)\lambda_i^t + \mu_t/(n+1)}f_t(S^t)$. \STATE Compute $\hat{g}^t_{\pi(i)} = \hat{f}_i^t - \hat{f}_{i-1}^t$ for all $i \in [n]$; the adversary then triggers a delay $d_t \geq 0$. \STATE Let $\mathcal{R}_t = \{s: s + d_s = t\}$ and $\mathcal{P}_t \leftarrow \mathcal{P}_{t-1} \cup \mathcal{R}_t$. Take $q_t = \min \mathcal{P}_t$ and set $\mathcal{P}_t \leftarrow \mathcal{P}_t \setminus \{q_t\}$. 
\STATE Compute $x^{t+1}$ using Eq.~\eqref{def:DBGD_main}. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Bandit feedback setting} As in the previous section, we make use of an unbiased estimator $\hat{g}$ of the gradient for the bandit feedback setting. However, we only receive the old estimator $\hat{g}^{q_t}$ at round $t$ due to the delay $d_t$. Following the same reasoning as in the full information setting, the computation of a new iterate $x^{t+1}$ at time $t$ can be written more explicitly as follows: \begin{equation}\label{def:DBGD_main} x^{t+1} = \left\{ \begin{array}{ll} x^t, & \textnormal{if } \mathcal{P}_t = \emptyset, \\ P_{[0, 1]^n}(x^t - \eta_t \hat{g}^{q_t}), & \textnormal{otherwise}. \end{array} \right. \end{equation} Algorithm~\ref{alg:DBGD} follows the same template as Algorithm~\ref{alg:DOGD} but substitutes the exact gradient with the gradient estimator. We summarize our results in the following theorem. \begin{theorem}\label{Theorem:DBGD} Suppose the adversary chooses nonsubmodular functions in Eq.~\eqref{prob:online} satisfying $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ and let the delays satisfy Assumption~\ref{Assumption:delay}. Fixing $T \geq 1$ and letting $(\eta_t, \mu_t) = (\frac{1}{Lt^{(2+\gamma)/3}}, \frac{n}{t^{(1-\gamma)/3}})$ in Algorithm~\ref{alg:DBGD}, we have ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(n T^{\frac{2+\gamma}{3}})$ and $R_{\alpha, \beta}(T) = O(n T^{\frac{2+\gamma}{3}} + \sqrt{n\log(1/\delta)}T^{\frac{4-\gamma}{6}})$ with probability $1 - \delta$. \end{theorem} \begin{remark} Theorem~\ref{Theorem:DBGD} demonstrates that Algorithm~\ref{alg:DBGD} attains a regret of $nT^{\frac{2+\gamma}{3}}$, which is worse than the $\sqrt{nT^{1+\gamma}}$ bound for Algorithm~\ref{alg:DOGD} and reduces to the $nT^{\frac{2}{3}}$ bound of Algorithm~\ref{alg:BGD} when $\gamma = 0$. 
Since $\gamma < 1$ is assumed, Algorithm~\ref{alg:DBGD} is, to our knowledge, the first no-regret bandit learning algorithm for online nonsubmodular minimization with delayed costs. \end{remark}
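To illustrate the inverse-propensity estimates $\hat{f}_i^t$ in Algorithm~\ref{alg:DBGD}, the sketch below computes their exact expectation by enumerating the $n+1$ outcomes $S^t = A_i^t$; the helper name and the specific numbers are ours, chosen only to verify unbiasedness, i.e., ${\mathbb{E}}[\hat{f}_i^t \mid x^t] = f_t(A_i^t)$:

```python
import numpy as np

def estimator_mean(f_values, lam, mu):
    """Exact expectation of the inverse-propensity estimates
    hat{f}_i = 1(S = A_i) f(A_i) / ((1 - mu) lam_i + mu / (n + 1)),
    obtained by enumerating the n + 1 outcomes S = A_i.

    f_values[i] = f(A_i) and lam[i] = lambda_i for i = 0, ..., n."""
    n = len(lam) - 1
    probs = (1 - mu) * np.asarray(lam) + mu / (n + 1)
    # Outcome S = A_i has probability probs[i]; on that event the i-th
    # estimate equals f(A_i) / probs[i] and every other estimate is 0,
    # so the expectation of hat{f}_i is probs[i] * f(A_i) / probs[i] = f(A_i).
    return probs * (np.asarray(f_values) / probs)

lam = [0.1, 0.3, 0.2, 0.4]        # Lovász weights, summing to 1 (n = 3)
f_values = [0.0, 1.5, -0.5, 2.0]  # f(A_0), ..., f(A_n)
mean = estimator_mean(f_values, lam, mu=0.2)
```

The mixing weight $\mu_t$ keeps every outcome probability at least $\mu_t/(n+1)$, which bounds the variance of the estimates at the price of a small exploration bias in the sampling distribution.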
\section{Experiments}\label{sec:exp} We conduct numerical experiments on structured sparse learning problems and include Algorithms~\ref{alg:OGD}--\ref{alg:DBGD}, which we refer to as OAGD, BAGD, DOAGD, and DBAGD, respectively. All the experiments are implemented in Python 3.7 on a machine with a 2.6 GHz Intel Core i7 and 16GB of memory. For all our experiments, we set the total number of rounds $T=10,000$, the dimension $d=10$, the number of samples per round $n=100$, and the sparsity parameter $k=2$. For OAGD and DOAGD, we set the default step size $\eta_{\text{o}}=\sqrt{n}/(L\sqrt{T})$ (as prescribed by Theorem~\ref{Theorem:OGD}). For BAGD and DBAGD, we set the default step size $\eta_{\text{b}}=1/(LT^{{2}/{3}})$ (as prescribed by Theorem~\ref{Theorem:BGD}).
Our goal is to estimate the sparse vector $x^\star \in \mathbb{R}^d$ using the structured nonsubmodular model (see Example~\ref{def:SSL}). Following the setup in~\citet{El-2020-Optimal}, we let the function $f^r$ be the regularization in Eq.~\eqref{prob:SSL} such that $f(S) = f^r(S) = \max(S)-\min(S)+1$ for all $S\neq \emptyset$ and $f^r(\emptyset)=0$. We generate the true solution $x^\star \in \mathbb{R}^d$ with $k$ consecutive ones and the other $d-k$ entries equal to zero. We define the function $h_t(S)$ for round $t$ as follows: let $y_t = A_t x^\star + \epsilon_t$, where each row of $A_t \in \mathbb{R}^{n \times d}$ is an i.i.d.\ Gaussian vector and each entry of $\epsilon_t \in \mathbb{R}^n$ is sampled from a normal distribution with standard deviation equal to $0.01$. Then, we define the square loss $\ell_t(x) = \|A_t x-y_t\|^2_2$ and let $h_t(S) = \ell_t(0) - \min_{\textnormal{supp}(x) \subseteq S} \ell_t(x)$. We consider constant delays in our experiments, i.e., $d_t \leq d$ for all $t \geq 1$, where $d > 0$ is a constant.
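A minimal sketch of this data-generating process (helper names are ours; the restricted minimization in $h_t$ is solved by least squares over the columns of $A_t$ indexed by $S$):

```python
import numpy as np

def h_t(A, y, S):
    """h_t(S) = l_t(0) - min_{supp(x) ⊆ S} l_t(x) for the square loss
    l_t(x) = ||A x - y||^2, computed by least squares restricted to
    the columns of A indexed by S."""
    if not S:
        return 0.0
    base = np.linalg.norm(y) ** 2          # l_t(0)
    cols = sorted(S)
    x_S, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    return base - np.linalg.norm(A[:, cols] @ x_S - y) ** 2

# One round of synthetic data, matching the configuration above.
rng = np.random.default_rng(0)
d, n, k = 10, 100, 2
x_star = np.zeros(d)
x_star[:k] = 1.0                           # k consecutive ones
A = rng.standard_normal((n, d))
y = A @ x_star + 0.01 * rng.standard_normal(n)
```

Note that enlarging $S$ can only shrink the restricted minimum of $\ell_t$, so $h_t$ is nondecreasing by construction.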
Figure~\ref{fig:main} summarizes some of the experimental results. Indeed, we see from Figure~\ref{fig:main-a} that larger delays lead to worse regret in the full-information setting, which confirms Theorems~\ref{Theorem:OGD} and~\ref{Theorem:DOGD}. The result in Figure~\ref{fig:main-b} demonstrates a similar phenomenon for the bandit feedback setting, which confirms Theorems~\ref{Theorem:BGD} and~\ref{Theorem:DBGD}. Further, Figure~\ref{fig:main-c} demonstrates the effects of bandit feedback and delay simultaneously; indeed, OAGD and DOAGD perform better than BAGD and DBAGD since the regret increases when only bandit feedback is available. We implement all the algorithms with varying step sizes and summarize the results in Figures~\ref{fig:appendix-1} and~\ref{fig:appendix-2}. In the former, we use step sizes $\eta_{\text{o}}/2=\sqrt{n}/(2L\sqrt{T})$ for OAGD and DOAGD and $\eta_{\text{b}}/2=1/(2LT^{{2}/{3}})$ for BAGD and DBAGD. In the latter, we use step sizes $\eta_{\text{o}}/5=\sqrt{n}/(5L\sqrt{T})$ for OAGD and DOAGD, and $\eta_{\text{b}}/5=1/(5LT^{{2}/{3}})$ for BAGD and DBAGD. Figures~\ref{fig:main}--\ref{fig:appendix-2} demonstrate that our proposed algorithms are not sensitive to the choice of step size. \begin{figure*}
\caption{Comparison of our algorithms on sparse learning with delayed costs. In (a) and (b), we examine the effect of delay in the full-information and bandit settings respectively where the maximum delay $d \in \{500, 1000, 2000\}$. In (c), we examine the effect of bandit feedback by comparing the online algorithm with its bandit version where the maximum delay $d = 500$.}
\label{fig:main-a}
\label{fig:main-b}
\label{fig:main-c}
\label{fig:main}
\end{figure*} \begin{figure*}\label{fig:appendix-1-a}
\label{fig:appendix-1-b}
\label{fig:appendix-1-c}
\label{fig:appendix-1}
\end{figure*} \begin{figure*}\label{fig:appendix-2-a}
\label{fig:appendix-2-b}
\label{fig:appendix-2-c}
\label{fig:appendix-2}
\end{figure*}
\section{Concluding Remarks}\label{sec:conclu} This paper studied online nonsubmodular minimization with special structure through the lens of the $(\alpha, \beta)$-regret and an extension of the generic convex relaxation model. We proved that the online approximate gradient descent algorithm and its bandit variant, adapted to the convex relaxation model, achieve $(\alpha, \beta)$-regret bounds of $O(\sqrt{nT})$ and $O(nT^{\frac{2}{3}})$, respectively. We also investigated delayed variants of the two algorithms and proved new regret bounds in which the delays can even be unbounded. More specifically, if the delays satisfy $d_t=o(t^{\gamma})$ with $\gamma < 1$, our proposed algorithms achieve regret bounds of $O(\sqrt{nT^{1+\gamma}})$ and $O(nT^{\frac{2+\gamma}{3}})$ in the full-information and bandit settings, respectively. Simulation studies validate our theoretical findings in practice.
\section{Acknowledgments} This work was supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764 and by the Vannevar Bush Faculty Fellowship program under grant number N00014-21-1-2941. The work of Michael Jordan is also partially supported by NSF Grant IIS-1901252.
\appendix \section{Proof of Proposition~\ref{Prop:structure}}\label{app:structure} Recall that \begin{equation}\label{inequality:structure-main} f_C(x) = \max_{g \in \mathbb{R}^n, \rho \in \mathbb{R}} \left\{g^\top x + \rho: g(A) + \rho \leq f(A), \forall A \subseteq [n]\right\}. \end{equation} First, we prove Eq.~\eqref{Prop:structure-first} using the definition of $f_L$ and Eq.~\eqref{inequality:structure-main}. Indeed, we have $f_L(x) = g^\top x$, where $x \in [0, 1]^n$ satisfies $x_{\pi(1)} \geq \ldots \geq x_{\pi(n)}$ and $g_{\pi(i)} = f(\pi(i) \mid A_{i-1})$ for all $i \in [n]$. Then, it suffices to show that $g^\top x \geq \tilde{g}^\top x + \tilde{\rho}$ for any $(\tilde{g}, \tilde{\rho})$ satisfying $\tilde{g}(A) + \tilde{\rho} \leq f(A)$ for all $A \subseteq [n]$. We have \begin{eqnarray*} \lefteqn{g^\top x - (\tilde{g}^\top x + \tilde{\rho}) = \sum_{i=1}^n x_{\pi(i)}\left(f(\pi(i) \mid A_{i-1}) - \tilde{g}_{\pi(i)}\right) - \tilde{\rho}} \\ & = & \sum_{i=1}^{n-1} \left(x_{\pi(i)} - x_{\pi(i+1)}\right)\left(f(A_i) - \tilde{g}(A_i)\right) + x_{\pi(n)}\left(f([n]) - \tilde{g}([n])\right) - \tilde{\rho}. \end{eqnarray*} Since $\tilde{g}(A) + \tilde{\rho} \leq f(A)$ for all $A \subseteq [n]$, we have \begin{equation*} f([n]) - \tilde{g}([n]) \geq \tilde{\rho}, \quad f(A_i) - \tilde{g}(A_i) \geq \tilde{\rho}, \quad \textnormal{for all } i \in [n]. \end{equation*} Putting these pieces together with $x_{\pi(1)} \geq \ldots \geq x_{\pi(n)}$ yields that \begin{equation*} g^\top x - (\tilde{g}^\top x + \tilde{\rho}) \geq \sum_{i=1}^{n-1} \left(x_{\pi(i)} - x_{\pi(i+1)}\right)\tilde{\rho} + x_{\pi(n)}\tilde{\rho} - \tilde{\rho} = (x_{\pi(1)} - 1)\tilde{\rho}. \end{equation*} Since $x \in [0,1]^n$, we have $x_{\pi(1)} \leq 1$. Since $\tilde{g}(A) + \tilde{\rho} \leq f(A)$ for all $A \subseteq [n]$ and $f(\emptyset) = 0$, we derive by letting $A = \emptyset$ that $\tilde{\rho} \leq f(\emptyset) - \tilde{g}(\emptyset) \leq 0$. This implies the desired result.
Further, we prove Eq.~\eqref{Prop:structure-second} using the definition of weak DR-submodularity. Indeed, we have $g(A) = \sum_{i \in A} g_i$. Since $g_{\pi(i)} = f(\pi(i) \mid A_{i-1})$ for all $i \in [n]$, we have \begin{equation*} g(A) = \sum_{\pi(i) \in A} f(\pi(i) \mid A_{i-1}) = \sum_{\pi(i) \in A} \left(\bar{f}(\pi(i) \mid A_{i-1}) - \ushort{f}(\pi(i) \mid A_{i-1})\right). \end{equation*} Since $\bar{f}$ is $\alpha$-weakly DR-submodular, $\ushort{f}$ is $\beta$-weakly DR-supermodular and $A \cap A_{i-1} \subseteq A_{i-1}$, we have \begin{equation}\label{inequality:structure-DR} \bar{f}(\pi(i) \mid A \cap A_{i-1}) \geq \alpha \bar{f}(\pi(i) \mid A_{i-1}), \quad \ushort{f}(\pi(i) \mid A_{i-1}) \geq \beta\ushort{f}(\pi(i) \mid A \cap A_{i-1}). \end{equation} Putting these pieces together yields that \begin{equation*} g(A) \leq \sum_{\pi(i) \in A} \left(\tfrac{1}{\alpha}\bar{f}(\pi(i) \mid A \cap A_{i-1}) - \beta\ushort{f}(\pi(i) \mid A \cap A_{i-1})\right). \end{equation*} Then, we have \begin{eqnarray*} g(A) & \leq & \tfrac{1}{\alpha} \left(\sum_{i=1}^n \left(\bar{f}(A \cap A_i) - \bar{f}(A \cap A_{i-1})\right)\right) - \beta \left(\sum_{i=1}^n \left(\ushort{f}(A \cap A_i) - \ushort{f}(A \cap A_{i-1})\right)\right) \\ & = & \tfrac{1}{\alpha}\bar{f}(A) - \beta\ushort{f}(A), \quad \textnormal{for all } A \subseteq [n]. \end{eqnarray*} This implies the desired result.
Finally, we prove Eq.~\eqref{Prop:structure-third} using Eq.~\eqref{inequality:structure-main}. Indeed, we have $g = \bar{g} - \ushort{g}$ where $\bar{g}_{\pi(i)} = \bar{f}(\pi(i) \mid A_{i-1})$ and $\ushort{g}_{\pi(i)} = \ushort{f}(\pi(i) \mid A_{i-1})$ for all $i \in [n]$. For any $A \subseteq [n]$, we obtain by using Eq.~\eqref{inequality:structure-DR} that \begin{align*} \bar{g}(A) & \leq \tfrac{1}{\alpha}\left(\sum_{\pi(i) \in A} \bar{f}(\pi(i) \mid A \cap A_{i-1}) \right) = \tfrac{1}{\alpha}\left(\sum_{i=1}^n \left(\bar{f}(A \cap A_i) - \bar{f}(A \cap A_{i-1})\right)\right) = \tfrac{1}{\alpha}\bar{f}(A), \\ -\ushort{g}(A) & \leq -\beta\left(\sum_{\pi(i) \in A} \ushort{f}(\pi(i) \mid A \cap A_{i-1}) \right) = -\beta\left(\sum_{i=1}^n \left(\ushort{f}(A \cap A_i) - \ushort{f}(A \cap A_{i-1})\right)\right) = -\beta\ushort{f}(A). \end{align*} Equivalently, we have $\alpha \bar{g}(A) + 0 \leq \bar{f}(A)$ and $\tfrac{1}{\beta}(-\ushort{g}(A)) + 0 \leq -\ushort{f}(A)$ for any $A \subseteq [n]$. Using Eq.~\eqref{inequality:structure-main}, we have \begin{eqnarray*} \alpha \bar{g}^\top z + 0 \leq \bar{f}_C(z), \quad \tfrac{1}{\beta}(-\ushort{g})^\top z + 0 \leq (-\ushort{f})_C(z), \quad \textnormal{for all } z \in [0, 1]^n. \end{eqnarray*} Since $g = \bar{g} - \ushort{g}$ and $\alpha, \beta > 0$, we have $g^\top z \leq \tfrac{1}{\alpha}\bar{f}_C(z) + \beta(-\ushort{f})_C(z)$. This implies the desired result. \section{Regret Analysis for Algorithm~\ref{alg:OGD}}\label{app:OGD} In this section, we present several technical lemmas for analyzing the regret minimization property of Algorithm~\ref{alg:OGD}. We also give the missing proof of Theorem~\ref{Theorem:OGD}.
\subsection{Technical lemmas} We provide two technical lemmas for Algorithm~\ref{alg:OGD}. The first lemma gives a bound on the vector $g^t$ and the difference between $x^t$ and any fixed $x \in [0, 1]^n$. \begin{lemma}\label{Lemma:OGD-first}
Suppose that the iterates $\{x^t\}_{t \geq 1}$ and the vectors $\{g^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:OGD}, let $x \in [0, 1]^n$ be fixed, and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$ with both $\bar{f}_t$ and $\ushort{f}_t$ nondecreasing. Then, we have $\|x^t - x\| \leq \sqrt{n}$ and $\|g^t\| \leq L$ for all $t \geq 1$. \end{lemma} \begin{proof}
Since $x^t \in [0, 1]^n$ and $x \in [0, 1]^n$ is fixed, we have $\|x^t - x\| \leq \sqrt{\sum_{i=1}^n 1} = \sqrt{n}$ for all $t \geq 1$. By the definition of $g^t$, we have $g_{\pi(i)}^t = f_t(A_i^t) - f_t(A_{i-1}^t)$ for all $i \in [n]$ where $A_i^t = \{\pi(1), \ldots, \pi(i)\}$ for all $i \in [n]$. Then, we have \begin{equation*}
\|g^t\| \leq \sum_{i=1}^n |f_t(A_i^t) - f_t(A_{i-1}^t)| \leq \sum_{i=1}^n |\bar{f}_t(A_i^t) - \bar{f}_t(A_{i-1}^t)| + \sum_{i=1}^n |\ushort{f}_t(A_i^t) - \ushort{f}_t(A_{i-1}^t)|. \end{equation*} Since $\bar{f}_t$ and $\ushort{f}_t$ are both normalized and non-decreasing, we have \begin{equation*}
\sum_{i=1}^n |\bar{f}_t(A_i^t) - \bar{f}_t(A_{i-1}^t)| + \sum_{i=1}^n |\ushort{f}_t(A_i^t) - \ushort{f}_t(A_{i-1}^t)| = \bar{f}_t([n]) + \ushort{f}_t([n]) \leq L. \end{equation*}
Putting these pieces together yields that $\|g^t\| \leq L$ for all $t = 1, 2, \ldots, T$. \end{proof} The second lemma gives a key inequality for analyzing Algorithm~\ref{alg:OGD}. \begin{lemma}\label{Lemma:OGD-second} Suppose that the iterates $\{x^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:OGD} and $x \in [0, 1]^n$ and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$. Then, we have \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[(f_t)_L(x^t)] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \tfrac{n}{2\eta} + \tfrac{\eta L^2 T}{2}. \end{equation*} \end{lemma} \begin{proof} Since $x^{t+1} = P_{[0, 1]^n}(x^t - \eta g^t)$, we have \begin{equation*} (x - x^{t+1})^\top(x^{t+1} - x^t + \eta g^t) \geq 0, \quad \textnormal{for all } x \in [0, 1]^n. \end{equation*} Rearranging the above inequality and using the fact that $\eta > 0$, we have \begin{equation}\label{inequality:OGD-first}
(x^{t+1} - x)^\top g^t \leq \frac{1}{\eta}(x - x^{t+1})^\top(x^{t+1} - x^t) = \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2 - \|x^t - x^{t+1}\|^2\right). \end{equation} Using Young's inequality, we have \begin{equation}\label{inequality:OGD-second}
(x^t - x^{t+1})^\top g^t \leq \tfrac{1}{2\eta}\|x^t - x^{t+1}\|^2 + \tfrac{\eta}{2}\|g^t\|^2. \end{equation} Combining Eq.~\eqref{inequality:OGD-first} and Eq.~\eqref{inequality:OGD-second} yields that \begin{equation*}
(x^t - x)^\top g^t \leq \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta}{2}\|g^t\|^2. \end{equation*} Since $f_t = \bar{f}_t - \ushort{f}_t$ where $\bar{f}_t$ and $\ushort{f}_t$ are both non-decreasing, $\bar{f}_t$ is $\alpha$-weakly DR-submodular and $\ushort{f}_t$ is $\beta$-weakly DR-supermodular, Proposition~\ref{Prop:structure} implies that \begin{equation*} (x^t - x)^\top g^t \geq (f_t)_L(x^t) - \left(\tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right). \end{equation*}
By Lemma~\ref{Lemma:OGD-first}, we have $\|g^t\| \leq L$ for all $t = 1, 2, \ldots, T$. Then, we have \begin{equation*}
(f_t)_L(x^t) \leq \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x) + \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta L^2}{2}. \end{equation*}
Summing up the above inequality over $t = 1, 2, \ldots, T$ and using $\|x^1 - x\| \leq \sqrt{n}$ (cf. Lemma~\ref{Lemma:OGD-first}), we have \begin{equation*} \sum_{t=1}^T (f_t)_L(x^t) \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \tfrac{n}{2\eta} + \tfrac{\eta L^2 T}{2}. \end{equation*} Taking the expectation of both sides yields the desired inequality. \end{proof}
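For concreteness, the projection $P_{[0,1]^n}$ in the update $x^{t+1} = P_{[0, 1]^n}(x^t - \eta g^t)$ acts coordinatewise on the box, so a single step reduces to clipping; a minimal sketch (the helper name is ours):

```python
import numpy as np

def ogd_step(x, g, eta):
    """One projected gradient step on the box [0, 1]^n: the Euclidean
    projection onto a box is coordinatewise, so it reduces to clipping
    each entry of x - eta * g into [0, 1]."""
    return np.clip(x - eta * g, 0.0, 1.0)

x = np.array([0.2, 0.9, 0.5])
g = np.array([-1.0, 2.0, 0.0])
x_next = ogd_step(x, g, eta=0.3)  # approximately [0.5, 0.3, 0.5]
```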
\subsection{Proof of Theorem~\ref{Theorem:OGD}} By the definition of the Lov\'{a}sz extension, we have \begin{equation*} (f_t)_L(x^t) = \sum_{i = 1}^{n-1} (x_{\pi(i)}^t - x_{\pi(i+1)}^t)f_t(A_i^t) + (1 - x_{\pi(1)}^t)f_t(A_0^t) + x_{\pi(n)}^t f_t(A_n^t) = \sum_{i=0}^n \lambda_i^t f_t(A_i^t). \end{equation*} By the sampling scheme in Algorithm~\ref{alg:OGD}, we have ${\mathbb{E}}[f_t(S^t) \mid x^t] = (f_t)_L(x^t)$, which implies that ${\mathbb{E}}[f_t(S^t)] = {\mathbb{E}}[(f_t)_L(x^t)]$. The convex closure of a set function $f$ agrees with $f$ at all integer points~\citep[Page~4, Proposition~3.3]{Dughmi-2009-Submodular}. Letting $S_\star^T = \mathop{\rm argmin}_{S \subseteq [n]} \sum_{t=1}^T f_t(S)$, the indicator vector $\chi_{S_\star^T}$ is an integer point and \begin{equation*} (\bar{f}_t)_C(\chi_{S_\star^T}) = \bar{f}_t(S_\star^T), \quad (-\ushort{f}_t)_C(\chi_{S_\star^T}) = - \ushort{f}_t(S_\star^T), \end{equation*} which implies that \begin{equation*} \tfrac{1}{\alpha}(\bar{f}_t)_C(\chi_{S_\star^T}) + \beta(-\ushort{f}_t)_C(\chi_{S_\star^T}) = \tfrac{1}{\alpha}\bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T). \end{equation*} Putting these pieces together and letting $x = \chi_{S_\star^T}$ in the inequality of Lemma~\ref{Lemma:OGD-second} yields that \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[f_t(S^t)] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha} \bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T)\right) + \tfrac{n}{2\eta} + \tfrac{\eta L^2 T}{2}. \end{equation*} Plugging the choice of $\eta = \frac{\sqrt{n}}{L\sqrt{T}}$ into the above inequality yields that ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(\sqrt{nT})$ as desired.
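The identity between the two expressions for $(f_t)_L(x^t)$ above (the marginal form $g^\top x$ with $g_{\pi(i)} = f_t(A_i^t) - f_t(A_{i-1}^t)$ and the rounding form $\sum_{i=0}^n \lambda_i^t f_t(A_i^t)$) can be checked numerically for any normalized set function; the sketch below evaluates both for the range function used in our experiments (helper names are ours):

```python
import numpy as np

def lovasz_two_ways(f, x):
    """Evaluate the Lovász extension of a normalized set function f
    (f(empty) = 0) at x in [0, 1]^n in two equivalent ways:
      (i)  g^T x with g_{pi(i)} = f(A_i) - f(A_{i-1})   (marginal form),
      (ii) sum_i lam_i f(A_i) with lam_i = x_(i) - x_(i+1),
           x_(0) = 1, x_(n+1) = 0                       (rounding form)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    order = np.argsort(-x)                   # decreasing sort: pi
    chain = [set()]                          # A_0, A_1, ..., A_n
    for i in order:
        chain.append(chain[-1] | {int(i)})
    vals = np.array([f(frozenset(A)) for A in chain])
    marginal = float(np.sum(x[order] * (vals[1:] - vals[:-1])))   # (i)
    xs = np.concatenate(([1.0], x[order], [0.0]))
    lam = xs[:-1] - xs[1:]                   # lam_0, ..., lam_n
    rounding = float(lam @ vals)             # (ii)
    return marginal, rounding
```

The two forms agree exactly whenever $f(\emptyset) = 0$, by the Abel summation used in the proof of Eq.~\eqref{Prop:structure-first}.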
We proceed to derive a high probability bound using a concentration inequality. In particular, we review the Hoeffding inequality~\citep{Hoeffding-1963-Probability} and refer to~\citet[Appendix A]{Cesa-2006-Prediction} for a proof. The following proposition is a restatement of~\citet[Corollary~A.1]{Cesa-2006-Prediction}. \begin{proposition}\label{Prop:Hoeffding} Let $X_1, \ldots, X_n$ be independent real-valued random variables such that for each $i = 1, \ldots, n$, there exist some $a_i \leq b_i$ such that $\mathbb{P}(a_i \leq X_i \leq b_i) = 1$. Then for every $\epsilon > 0$, we have \begin{eqnarray*} \mathbb{P}\left(\sum_{i=1}^n X_i - {\mathbb{E}}\left[\sum_{i=1}^n X_i\right] > \epsilon\right) & \leq & \exp\left(-\frac{2\epsilon^2}{\sum_{i=1}^n (b_i - a_i)^2}\right), \\ \mathbb{P}\left(\sum_{i=1}^n X_i - {\mathbb{E}}\left[\sum_{i=1}^n X_i\right] < -\epsilon\right) & \leq & \exp\left(-\frac{2\epsilon^2}{\sum_{i=1}^n (b_i - a_i)^2}\right). \end{eqnarray*} \end{proposition} Since the sequence of points $x^1, x^2, \ldots, x^T$ is obtained by deterministic gradient descent steps, this sequence is purely deterministic. Since each $S^t$ is obtained by independent randomized rounding at the point $x^t$, the random variables $X_t = f_t(S^t)$ are independent. By the definition of $f_t$, we have \begin{equation*}
|X_t| = |\bar{f}_t(S^t) - \ushort{f}_t(S^t)| \leq \bar{f}_t(S^t) + \ushort{f}_t(S^t). \end{equation*} Since $\bar{f}_t$ and $\ushort{f}_t$ are non-decreasing and $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$, we have $\mathbb{P}(-L \leq X_t \leq L) = 1$ for all $t \geq 1$. Then, by Proposition~\ref{Prop:Hoeffding}, we have \begin{equation*} \mathbb{P}\left(\sum_{t=1}^T f_t(S^t) - {\mathbb{E}}\left[\sum_{t=1}^T f_t(S^t)\right] > \epsilon\right) \leq \exp\left(-\frac{\epsilon^2}{2TL^2}\right). \end{equation*} Equivalently, we have $\sum_{t=1}^T f_t(S^t) - {\mathbb{E}}[\sum_{t=1}^T f_t(S^t)] \leq L\sqrt{2T\log(1/\delta)}$ with probability at least $1 - \delta$. This together with ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(\sqrt{nT})$ yields that $R_{\alpha, \beta}(T) = O(\sqrt{nT} + \sqrt{T\log(1/\delta)})$ with probability at least $1 - \delta$, as desired. \section{Regret Analysis for Algorithm~\ref{alg:BGD}}\label{app:BGD} In this section, we present several technical lemmas for analyzing the regret minimization property of Algorithm~\ref{alg:BGD}. We also give the missing proof of Theorem~\ref{Theorem:BGD}.
\subsection{Technical lemmas} We provide several technical lemmas for Algorithm~\ref{alg:BGD}. The first lemma is analogous to Lemma~\ref{Lemma:OGD-first} and gives a bound on the vector $\hat{g}^t$ (in expectation) and the difference between $x^t$ and any fixed $x \in [0, 1]^n$. \begin{lemma}\label{Lemma:BGD-first}
Suppose that the iterates $\{x^t\}_{t \geq 1}$ and the vectors $\{\hat{g}^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:BGD}, let $x \in [0, 1]^n$ be fixed, and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$ with both $\bar{f}_t$ and $\ushort{f}_t$ nondecreasing. Then, we have $\|x^t - x\| \leq \sqrt{n}$ for all $t \geq 1$ and \begin{equation*}
{\mathbb{E}}[\hat{g}^t \mid x^t] = g^t, \quad {\mathbb{E}}[\|\hat{g}^t\|^2 \mid x^t] \leq \tfrac{8n^2L^2}{\mu}, \quad \|\hat{g}^t\|^2 \leq \tfrac{2(n+1)^2L^2}{\mu^2}. \end{equation*} where we have $g_{\pi(i)}^t = f_t(A_i^t) - f_t(A_{i-1}^t)$ for all $i \in [n]$. \end{lemma} \begin{proof}
Using the same argument as in Lemma~\ref{Lemma:OGD-first}, we have $\|x^t - x\| \leq \sqrt{n}$ for all $t \geq 1$. By the definition of $\hat{g}^t$, we have \begin{equation*} \hat{g}^t_{\pi(i)} = \left(\tfrac{\textbf{1}(S^t = A^t_i)}{(1-\mu)\lambda_i^t + \frac{\mu}{n+1}} - \tfrac{\textbf{1}(S^t = A^t_{i-1})}{(1-\mu)\lambda_{i-1}^t + \frac{\mu}{n+1}}\right)f_t(S^t), \quad \textnormal{for all } i \in [n]. \end{equation*} This together with the sampling scheme for $S^t$ implies that \begin{equation*} {\mathbb{E}}[\hat{g}^t_{\pi(i)} \mid x^t] = f_t(A_i^t) - f_t(A_{i-1}^t), \quad \textnormal{for all } i \in [n], \end{equation*} Since $g_{\pi(i)}^t = f_t(A_i^t) - f_t(A_{i-1}^t)$ for all $i \in [n]$, we have ${\mathbb{E}}[\hat{g}^t \mid x^t] = g^t$. Since $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$ and $\bar{f}_t$ and $\ushort{f}_t$ are both normalized and non-decreasing, we have \begin{equation*}
{\mathbb{E}}[\|\hat{g}^t\|^2 \mid x^t] \leq \sum_{i=0}^n \tfrac{2(f_t(A_i^t))^2}{(1-\mu)\lambda_i^t + \frac{\mu}{n+1}} \leq \tfrac{2(n+1)^2L^2}{\mu} \leq \tfrac{8n^2L^2}{\mu}. \end{equation*} Further, letting $S^t = A_{i_t}^t$ in round $t$, we can apply the same argument and obtain that \begin{equation*}
\|\hat{g}^t\|^2 \leq 2\left(\tfrac{f_t(A_{i_t}^t)}{(1-\mu)\lambda_{i_t}^t + \frac{\mu}{n+1}}\right)^2 \leq \tfrac{2(n+1)^2L^2}{\mu^2}. \end{equation*} This completes the proof. \end{proof} The second lemma is analogous to Lemma~\ref{Lemma:OGD-second} and gives a key inequality for analyzing Algorithm~\ref{alg:BGD}. \begin{lemma}\label{Lemma:BGD-second} Suppose that the iterates $\{x^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:BGD} and $x \in [0, 1]^n$ and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$. Then, we have \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[(f_t)_L(x^t)] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \tfrac{n}{2\eta} + \tfrac{4n^2L^2\eta T}{\mu}. \end{equation*} \end{lemma} \begin{proof} Using the same argument as in Lemma~\ref{Lemma:OGD-second}, we have \begin{equation*}
(x^t - x)^\top \hat{g}^t \leq \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta}{2}\|\hat{g}^t\|^2. \end{equation*}
By Lemma~\ref{Lemma:BGD-first}, we have ${\mathbb{E}}[\hat{g}^t \mid x^t] = g^t$ and ${\mathbb{E}}[\|\hat{g}^t\|^2 \mid x^t] \leq \tfrac{8n^2L^2}{\mu}$ for all $t \geq 1$. This implies that \begin{equation*}
(x^t - x)^\top g^t \leq \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - {\mathbb{E}}[\|x - x^{t+1}\|^2 \mid x^t]\right) + \tfrac{4n^2 L^2\eta}{\mu}. \end{equation*} Since $f_t = \bar{f}_t - \ushort{f}_t$ where $\bar{f}_t$ and $\ushort{f}_t$ are both non-decreasing, $\bar{f}_t$ is $\alpha$-weakly DR-submodular and $\ushort{f}_t$ is $\beta$-weakly DR-supermodular, Proposition~\ref{Prop:structure} implies that \begin{equation*} (x^t - x)^\top g^t \geq (f_t)_L(x^t) - \left(\tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right). \end{equation*}
By Lemma~\ref{Lemma:OGD-first}, we have $\|g^t\| \leq L$ for all $t = 1, 2, \ldots, T$. Then, we have \begin{equation*}
(f_t)_L(x^t) \leq \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x) + \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - {\mathbb{E}}[\|x - x^{t+1}\|^2 \mid x^t]\right) + \tfrac{4n^2L^2\eta}{\mu}. \end{equation*} Taking the expectation of both sides and summing up the resulting inequality over $t = 1, 2, \ldots, T$, we have \begin{equation*}
\sum_{t=1}^T {\mathbb{E}}[(f_t)_L(x^t)] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \tfrac{1}{2\eta}\|x - x^1\|^2 + \tfrac{4n^2L^2\eta T}{\mu}. \end{equation*}
Using $\|x^1 - x\| \leq \sqrt{n}$ (cf. Lemma~\ref{Lemma:BGD-first}) yields the desired inequality. \end{proof} To prove the high probability bound, we require the following concentration inequality. In particular, we review the Bernstein inequality for martingales~\citep{Freedman-1975-Tail} and refer to~\citet[Appendix A]{Cesa-2006-Prediction} for a proof. The following proposition is a consequence of~\citet[Lemma~A.8]{Cesa-2006-Prediction}. \begin{proposition}\label{Prop:Bernstein}
Let $X_1, \ldots, X_n$ be a bounded martingale difference sequence with respect to the filtration $\mathcal{F} = (\mathcal{F}_i)_{1 \leq i \leq n}$ such that $|X_i| \leq K$ for each $i = 1, \ldots, n$. We also assume that ${\mathbb{E}}[X_{i+1}^2 \mid \mathcal{F}_i] \leq V$ for each $i = 1, \ldots, n-1$. Then for every $\delta > 0$, we have \begin{equation*}
\mathbb{P}\left(\left|\sum_{i=1}^n X_i - {\mathbb{E}}[X_i \mid \mathcal{F}_{i-1}]\right| > \sqrt{2TV\log(1/\delta)} + \tfrac{\sqrt{2}}{3}K\log(1/\delta)\right) \leq \delta. \end{equation*} \end{proposition} Then we provide our last lemma, which generalizes Lemma~\ref{Lemma:BGD-second} and is used to derive the high-probability bounds. \begin{lemma}\label{Lemma:BGD-third} Suppose that the iterates $\{x^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:BGD} with $\mu = \frac{n}{T^{1/3}}$ and $x \in [0, 1]^n$ and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$. Fix a sufficiently small $\delta \in (0, 1)$ and let $T > \log^{\frac{3}{2}}(1/\delta)$. Then, we have \begin{equation*}
(x^t - x)^\top \hat{g}^t \leq \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta}{2}\|\hat{g}^t\|^2, \end{equation*} and \begin{equation*} (x^t - x)^\top g^t \geq (f_t)_L(x^t) - \left(\tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right). \end{equation*} For simplicity, we define $e_t = \hat{g}^t - g^t$. Then, we have \begin{equation}\label{inequality:BGD-first}
(f_t)_L(x^t) - \left(\tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) \leq (x - x^t)^\top e_t + \tfrac{1}{2\eta}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta}{2}\|\hat{g}^t\|^2 \end{equation}
Summing up Eq.~\eqref{inequality:BGD-first} over $t = 1, 2, \ldots, T$ and using $\|x^1 - x\| \leq \sqrt{n}$ and ${\mathbb{E}}[\|\hat{g}^t\|^2 \mid x^t] \leq \tfrac{8n^2L^2}{\mu}$ for all $t \geq 1$ (cf. Lemma~\ref{Lemma:BGD-first}), we have \begin{eqnarray*}
\sum_{t=1}^T (f_t)_L(x^t) & \leq & \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x) + (x - x^t)^\top e_t + \tfrac{\eta}{2}(\|\hat{g}^t \|^2 - {\mathbb{E}}[\|\hat{g}^t \|^2 \mid x^t])\right) \\ & & + \tfrac{n}{2\eta} + \tfrac{4n^2L^2\eta T}{\mu}. \end{eqnarray*} The convex closure of a set function $f$ agrees with $f$ at all integer points~\citep[Page~4, Proposition~3.3]{Dughmi-2009-Submodular}. For any $S \subseteq [n]$, we have $(\bar{f}_t)_C(\chi_S) = \bar{f}_t(S)$ and $(-\ushort{f}_t)_C(\chi_S) = -\ushort{f}_t(S)$, which implies that \begin{equation*} \tfrac{1}{\alpha}(\bar{f}_t)_C(\chi_S) + \beta(-\ushort{f}_t)_C(\chi_S) = \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S). \end{equation*} Letting $x = \chi_S$, we have \begin{equation}\label{inequality:BGD-second}
\sum_{t=1}^T (f_t)_L(x^t) \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{n}{2\eta} + \tfrac{4n^2L^2\eta T}{\mu} + \underbrace{\sum_{t=1}^T (\chi_S - x^t)^\top e_t}_{\textbf{I}} + \tfrac{\eta}{2} \underbrace{\left(\sum_{t=1}^T \|\hat{g}^t \|^2 - {\mathbb{E}}[\|\hat{g}^t \|^2 \mid x^t]\right)}_{\textbf{II}}. \end{equation} In what follows, we prove high probability bounds for the terms \textbf{I} and \textbf{II} in the above inequality. \paragraph{Bounding \textbf{I}.} Consider the random variables $X_t = (x^t)^\top\hat{g}^t$ for $1 \leq t \leq T$, which are adapted to the natural filtration generated by the iterates $\{x^t\}_{t \geq 1}$. By Lemma~\ref{Lemma:BGD-first} and H\"{o}lder's inequality, we have \begin{equation*}
|X_t| \leq \|\hat{g}^t\|_1\|x^t\|_\infty \leq 2\left|\tfrac{f_t(A_{i_t}^t)}{(1-\mu)\lambda_{i_t}^t + \frac{\mu}{n+1}}\right| \leq \tfrac{2(n+1)L}{\mu} \end{equation*}
Since $\mu = \frac{n}{T^{1/3}}$, we have $|X_t| \leq 4LT^{\frac{1}{3}}$. Further, we have \begin{equation*}
{\mathbb{E}}[X_t^2 \mid x_t] \leq {\mathbb{E}}[\|\hat{g}^t\|_1^2\|x^t\|_\infty^2 \mid x_t] = \sum_{i=0}^n \tfrac{4(f_t(A_i^t))^2}{(1-\mu)\lambda_i^t + \frac{\mu}{n+1}} \leq \tfrac{2(n+1)^2L^2}{\mu} \leq 8nL^2 T^{\frac{1}{3}}. \end{equation*} Since ${\mathbb{E}}[\hat{g}^t \mid x^t] = g^t$ and $e_t = \hat{g}^t - g^t$, Proposition~\ref{Prop:Bernstein} implies that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T (x^t)^\top e_t\right| > 4LT^{\frac{2}{3}}\sqrt{n\log(1/\delta)} + 2LT^{\frac{1}{3}}\log(1/\delta)\right) \leq \delta. \end{equation*} Since $T > \log^{\frac{3}{2}}(1/\delta)$, we have $T^{\frac{2}{3}}\sqrt{\log(1/\delta)} \geq T^{\frac{1}{3}}\log(1/\delta)$. This implies that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T (x^t)^\top e_t\right| > 6LT^{\frac{2}{3}}\sqrt{n\log(1/\delta)} \right) \leq \delta. \end{equation*} Similarly, we fix a set $S \subseteq [n]$ and consider the random variable $X_t = (\chi_S)^\top\hat{g}^t$ for all $1 \leq t \leq T$ that are adapted to the natural filtration generated by the iterates $\{x_t\}_{t \geq 1}$. By repeating the above argument with $\frac{\delta}{2^n}$, we have \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T (\chi_S)^\top e_t\right| > 6LT^{\frac{2}{3}}\sqrt{n\log(2^n/\delta)} \right) \leq \tfrac{\delta}{2^n}. \end{equation*} By taking a union bound over the $2^n$ choices of $S$, we obtain that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T (\chi_S)^\top e_t\right| > 6LT^{\frac{2}{3}}\sqrt{n\log(2^n/\delta)} \right) \leq \delta, \quad \textnormal{for any } S \subseteq [n]. \end{equation*} Since $\sqrt{n\log(2^n/\delta)} \leq \sqrt{n^2 + n\log(1/\delta)}$, we have $\textbf{I} \leq 12LT^{\frac{2}{3}}\sqrt{n^2 + n\log(1/\delta)}$ with probability at least $1 - 2\delta$.
\paragraph{Bounding \textbf{II}.} Consider the random variables $X_t = \|\hat{g}^t\|^2$ for all $1 \leq t \leq T$ that are adapted to the natural filtration generated by the iterates $\{x^t\}_{t \geq 1}$. By Lemma~\ref{Lemma:BGD-first}, we have $|X_t| \leq \tfrac{2(n+1)^2L^2}{\mu^2}$. Since $\mu = \frac{n}{T^{1/3}}$, we have $|X_t| \leq 8L^2 T^{2/3}$. Further, we have \begin{equation*} {\mathbb{E}}[X_t^2 \mid x_t] \leq \sum_{i=0}^n \tfrac{2(f_t(A_i^t))^4}{((1-\mu)\lambda_i^t + \frac{\mu}{n+1})^3} \leq \tfrac{2(n+1)^4L^4}{\mu^3} \leq 32nL^4 T. \end{equation*} Applying Proposition~\ref{Prop:Bernstein}, we have \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T \|\hat{g}^t\|^2 - {\mathbb{E}}[\|\hat{g}^t\|^2 \mid x^t]\right| > 8L^2T\sqrt{n\log(1/\delta)} + 4L^2T^{\frac{2}{3}}\log(1/\delta)\right) \leq \delta. \end{equation*} Since $T > \log^{\frac{3}{2}}(1/\delta)$, we have $T\sqrt{\log(1/\delta)} \geq T^{\frac{2}{3}}\log(1/\delta)$. This implies that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T \|\hat{g}^t\|^2 - {\mathbb{E}}[\|\hat{g}^t\|^2 \mid x^t]\right| > 12L^2T\sqrt{n\log(1/\delta)} \right) \leq \delta. \end{equation*} Therefore, we conclude that $\textbf{II} \leq 12L^2T\sqrt{n\log(1/\delta)}$ with probability at least $1 - \delta$.
Putting these pieces together with Eq.~\eqref{inequality:BGD-second} yields that \begin{equation*} \sum_{t=1}^T (f_t)_L(x^t) \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{n}{2\eta} + \tfrac{4n^2L^2\eta T}{\mu} + 12LT^{\frac{2}{3}}\sqrt{n^2 + n\log(1/\delta)} + 6\eta L^2T\sqrt{n\log(1/\delta)}, \end{equation*} with probability at least $1 - 3\delta$. \end{proof}
\subsection{Proof of Theorem~\ref{Theorem:BGD}} By the definition of the Lov\'{a}sz extension and $\lambda^t$, we have \begin{equation*} (f_t)_L(x^t) = \sum_{i = 1}^{n-1} (x_{\pi(i)}^t - x_{\pi(i+1)}^t)f_t(A_i^t) + (1 - x_{\pi(1)}^t)f_t(A_0^t) + x_{\pi(n)}^t f_t(A_n^t) = \sum_{i=0}^n \lambda_i^t f_t(A_i^t). \end{equation*} By the update formula of $S^t$, we have \begin{equation*}
{\mathbb{E}}[f_t(S^t) \mid x^t] - (f_t)_L(x^t) = \mu\sum_{i=0}^n \left(\frac{1}{n+1} - \lambda_i^t\right)f_t(A_i^t) \leq \mu\sum_{i=0}^n \left(\frac{1}{n+1} + \lambda_i^t\right)|f_t(A_i^t)|. \end{equation*} Since $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$ and $\bar{f}_t$ and $\ushort{f}_t$ are both normalized and non-decreasing, we have \begin{equation}\label{inequality:BGD-third} {\mathbb{E}}[f_t(S^t) \mid x^t] - (f_t)_L(x^t) \leq L\mu\sum_{i=0}^n \left(\frac{1}{n+1} + \lambda_i^t\right) = 2L\mu. \end{equation} which implies that \begin{equation*} {\mathbb{E}}[f_t(S^t)] - {\mathbb{E}}[(f_t)_L(x^t)] \leq 2L\mu. \end{equation*} Using the same argument as in Theorem~\ref{Theorem:OGD}, we have \begin{equation*} \tfrac{1}{\alpha}(\bar{f}_t)_C(\chi_{S_\star^T}) + \beta(-\ushort{f}_t)_C(\chi_{S_\star^T}) = \tfrac{1}{\alpha}f_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T), \quad \textnormal{where } S_\star^T = \mathop{\rm argmin}_{S \subseteq [n]} \sum_{t=1}^T f_t(S). \end{equation*} Putting these pieces together and letting $x = \chi_{S_\star^T}$ in the inequality of Lemma~\ref{Lemma:BGD-second} yields that \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[f_t(S^t)] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha} \bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T)\right) + \tfrac{n}{2\eta} + \tfrac{4n^2L^2\eta T}{\mu} + 2LT\mu. \end{equation*} Plugging the choice of $\eta = \frac{1}{LT^{2/3}}$ and $\mu = \frac{n}{T^{1/3}}$ into the above inequality yields that ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(nT^{\frac{2}{3}})$ as desired.
We proceed to derive a high probability bound using Lemma~\ref{Lemma:BGD-third}. Indeed, we first consider the case of $T < 2\log^{\frac{3}{2}}(1/\delta)$. Since $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$, we have \begin{equation*} R_{\alpha, \beta}(T) \leq \sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T (\tfrac{1}{\alpha}\bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T)) \leq \left(1 + \tfrac{1}{\alpha} + \beta\right)LT = O(T^{\frac{2}{3}}\sqrt{\log(1/\delta)}). \end{equation*} For the case of $T \geq 2\log^{\frac{3}{2}}(1/\delta)$, we obtain by combining Lemma~\ref{Lemma:BGD-third} with Eq.~\eqref{inequality:BGD-third} that \begin{eqnarray*} \lefteqn{\sum_{t=1}^T {\mathbb{E}}[f_t(S^t) \mid x^t] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right)} \\ & & + \tfrac{n}{2\eta} + 2nLT^{\frac{2}{3}} + 4nL^2\eta T^{\frac{4}{3}} + 12LT^{\frac{2}{3}}\sqrt{n^2 + n\log(1/\delta)} + 6\eta L^2T\sqrt{n\log(1/\delta)}, \end{eqnarray*}
with probability at least $1 - 3\delta$. Then, it suffices to bound the term $\sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T {\mathbb{E}}[f_t(S^t) \mid x^t]$ using Proposition~\ref{Prop:Bernstein}. Consider the random variables $X_t = f_t(S^t)$ for all $1 \leq t \leq T$ that are adapted to the natural filtration generated by the iterates $\{x^t\}_{t \geq 1}$. Since $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$, we have $|X_t| \leq L$. Further, we have ${\mathbb{E}}[X_t^2 \mid x_t] \leq L^2$. Applying Proposition~\ref{Prop:Bernstein}, we have \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T f_t(S^t) - {\mathbb{E}}[f_t(S^t) \mid x^t]\right| > L\sqrt{2T\log(1/\delta)} + \tfrac{L}{2}\log(1/\delta)\right) \leq \delta. \end{equation*} Since $T > \log^{\frac{3}{2}}(1/\delta)$, we have $\sqrt{2T\log(1/\delta)} \geq \frac{1}{2}\log(1/\delta)$. This implies that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T f_t(S^t) - {\mathbb{E}}[f_t(S^t) \mid x^t]\right| > 3L\sqrt{T\log(1/\delta)} \right) \leq \delta. \end{equation*} Therefore, we conclude that $\sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T {\mathbb{E}}[f_t(S^t) \mid x^t] \leq 3L\sqrt{T\log(1/\delta)}$ with probability at least $1 - \delta$. Putting these pieces together yields that \begin{eqnarray*} \lefteqn{\sum_{t=1}^T f_t(S^t) \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{n}{2\eta} + 3L\sqrt{T\log(1/\delta)} + 2nLT^{\frac{2}{3}}} \\ & & + 4nL^2\eta T^{\frac{4}{3}} + 12nLT^{\frac{2}{3}} + 12LT^{\frac{2}{3}}\sqrt{n\log(1/\delta)} + 6\eta L^2T\sqrt{n\log(1/\delta)}, \end{eqnarray*} with probability at least $1 - 4\delta$. Plugging the choice of $\eta = \frac{1}{LT^{2/3}}$ yields that \begin{equation*} \sum_{t=1}^T f_t(S^t) \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{37}{2}nLT^{\frac{2}{3}} + 21LT^{\frac{2}{3}}\sqrt{n\log(1/\delta)}, \end{equation*} with probability at least $1 - 4\delta$. Letting $S = S_\star^T = \mathop{\rm argmin}_{S \subseteq [n]} \sum_{t=1}^T f_t(S)$ and changing $\delta$ to $\frac{\delta}{4}$ yields that $R_{\alpha, \beta}(T) = O(nT^{\frac{2}{3}} + \sqrt{n\log(1/\delta)}T^{\frac{2}{3}})$ with probability at least $1 - \delta$ as desired. \section{Regret Analysis for Algorithm~\ref{alg:DOGD}}\label{app:DOGD} In this section, we present several technical lemmas for analyzing the regret minimization property of Algorithm~\ref{alg:DOGD}. We also give the missing proofs of Theorem~\ref{Theorem:DOGD}.
\subsection{Technical lemmas} We provide one technical lemma for Algorithm~\ref{alg:DOGD} which is analogues to Lemma~\ref{Lemma:OGD-second}. It gives a key inequality for analyzing the regret minimization property of Algorithm~\ref{alg:DOGD}. Note that the results in Lemma~\ref{Lemma:OGD-first} still hold true for the iterates $\{x^t\}_{t \geq 1}$ and $\{g^t\}_{t \geq 1}$ generated by Algorithm~\ref{alg:DOGD}. \begin{lemma}\label{Lemma:DOGD-first} Suppose that the iterates $\{x^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:DOGD} and $x \in [0, 1]^n$ and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$. Then, we have \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[(f_t)_L(x^t)] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \tfrac{n}{2\eta_{\bar{T}}} + \tfrac{L^2}{2}\left(\sum_{t=1}^{\bar{T}} \eta_t\right) + L^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \eta_s\right), \end{equation*} where $\bar{T} > 0$ in the above inequality satisfies that $q_{\bar{T}} = T$. \end{lemma} \begin{proof} Using the same argument as in Lemma~\ref{Lemma:OGD-second}, we have \begin{equation*}
(x^t - x)^\top g^{q_t} \leq \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta_t}{2}\|g^{q_t}\|^2. \end{equation*} Since $f_t = \bar{f}_t - \ushort{f}_t$ where $\bar{f}_t$ and $\ushort{f}_t$ are both normalized and non-decreasing, $\bar{f}_t$ is $\alpha$-weakly DR-submodular and $\ushort{f}_t$ is $\beta$-weakly DR-supermodular, Proposition~\ref{Prop:structure} implies that \begin{equation*} (x^{q_t} - x)^\top g^{q_t} \geq (f_{q_t})_L(x^{q_t}) - \left(\tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x)\right). \end{equation*}
By Lemma~\ref{Lemma:OGD-first}, we have $\|g^t\| \leq L$ for all $t \geq 1$. Then, we have \begin{equation}\label{inequality:DOGD-first}
(f_{q_t})_L(x^{q_t}) \leq \tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x) + \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + L\|x^{q_t} - x^t\| + \tfrac{\eta_t L^2}{2}. \end{equation} Further, we have \begin{equation}\label{inequality:DOGD-second}
\|x^{q_t} - x^t\| \leq \sum_{s=q_t}^{t-1} \eta_s \|g^s\| \leq L\left(\sum_{s=q_t}^{t-1} \eta_s\right). \end{equation} Plugging Eq.~\eqref{inequality:DOGD-second} into Eq.~\eqref{inequality:DOGD-first} yields that \begin{equation}\label{inequality:DOGD-third}
(f_{q_t})_L(x^{q_t}) \leq \tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x) + \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta_t L^2}{2} + L^2\left(\sum_{s=q_t}^{t-1} \eta_s\right). \end{equation}
For a fixed horizon $T \geq 1$, we have $q_{\bar{T}} = T$ for some $\bar{T} \geq T$. Then, by summing up Eq.~\eqref{inequality:DOGD-third} over $t = 1, 2, \ldots, \bar{T}$ and using $\|x^t - x\| \leq \sqrt{n}$ for all $t \geq 1$ (cf. Lemma~\ref{Lemma:OGD-first}) and that $\{\eta_t\}_{t \geq 1}$ is nonincreasing, we have \begin{equation*} \sum_{t=1}^{\bar{T}} (f_{q_t})_L(x^{q_t}) \leq \left(\sum_{t=1}^{\bar{T}} \tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x)\right) + \tfrac{n}{2\eta_{\bar{T}}} + \tfrac{L^2}{2}\left(\sum_{t=1}^{\bar{T}} \eta_t\right) + L^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \eta_s\right). \end{equation*} Since $q_{\bar{T}} = T$ and our pooling policy is induced by a priority queue (note that $f_{q_t} = \bar{f}_{q_t} = \ushort{f}_{q_t} = 0$ if $\mathcal{P}_t = \emptyset$), we have \begin{eqnarray*} \sum_{t=1}^{\bar{T}} (f_{q_t})_L(x^{q_t}) & = & \sum_{t=1}^T (f_t)_L(x^t), \\ \sum_{t=1}^{\bar{T}} \tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x) & = & \sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x). \end{eqnarray*} Therefore, we conclude that \begin{equation*} \sum_{t=1}^T (f_t)_L(x^t) \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \tfrac{n}{2\eta_T} + \tfrac{L^2}{2}\left(\sum_{t=1}^{\bar{T}} \eta_t\right) + L^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \eta_s\right). \end{equation*} Taking the expectation of both sides yields the desired inequality. \end{proof}
\subsection{Proof of Theorem~\ref{Theorem:DOGD}} By~\citet[Corollary~1]{Heliou-2020-Gradient}, we have $t - q_t = o(t^\gamma)$ under Assumption~\ref{Assumption:delay}; in particular, we have $t - q_t = o(t)$ and $q_t = \Theta(t)$. Since $q_{\bar{T}} = T$, we have $T = \Theta(\bar{T})$ which implies that $\bar{T} = \Theta(T)$. Recall that $\eta_t = \frac{\sqrt{n}}{L\sqrt{t^{1+\gamma}}}$, we have \begin{eqnarray*} \tfrac{n}{2\eta_{\bar{T}}} & = & \tfrac{L\sqrt{n\bar{T}^{1+\gamma}}}{2} \ = \ O(\sqrt{nT^{1+\gamma}}), \\ \tfrac{L^2}{2}\left(\sum_{t=1}^{\bar{T}} \eta_t\right) & = & \tfrac{\sqrt{n}L}{2}\left(\sum_{t=1}^{\bar{T}} \tfrac{1}{\sqrt{t^{1+\gamma}}}\right) \ \leq \ \tfrac{\sqrt{n}L}{1-\gamma}\sqrt{\bar{T}^{1-\gamma}} \ = \ O(\sqrt{nT^{1-\gamma}}), \\ L^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \eta_s\right) & \leq & L^2\left(\sum_{t=1}^{\bar{T}} (t - q_t)\eta_{q_t}\right) \ = \ O\left(\sqrt{n}L\sum_{t=1}^{\bar{T}} \tfrac{1}{\sqrt{t^{1-\gamma}}}\right) \ = \ O(L\sqrt{n\bar{T}^{1+\gamma}}) \ = \ O(\sqrt{nT^{1+\gamma}}), \end{eqnarray*} Putting these pieces together with Lemma~\ref{Lemma:DOGD-first} yields that \begin{equation}\label{inequality:DOGD-fourth} \sum_{t=1}^T {\mathbb{E}}[(f_t)_L(x^t)] - \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) = O(\sqrt{nT^{1+\gamma}}). \end{equation} By the definition of the Lov\'{a}sz extension, we have \begin{equation*} (f_t)_L(x^t) = \sum_{i = 1}^{n-1} (x_{\pi(i)}^t - x_{\pi(i+1)}^t)f_t(A_i^t) + (1 - x_{\pi(1)}^t)f_t(A_0^t) + x_{\pi(n)}^t f_t(A_n^t). \end{equation*} By the update formula, we have ${\mathbb{E}}[f_t(S^t) \mid x^t] = (f_t)_L(x^t)$ which implies that ${\mathbb{E}}[f_t(S^t)] = {\mathbb{E}}[(f_t)_L(x^t)]$. Further, by using the same argument as in Theorem~\ref{Theorem:OGD}, we have \begin{equation*} \tfrac{1}{\alpha}(\bar{f}_t)_C(\chi_{S_\star^T}) + \beta(-\ushort{f}_t)_C(\chi_{S_\star^T}) = \tfrac{1}{\alpha}\bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T). 
\end{equation*} Putting these pieces together and letting $x = \chi_{S_\star^T}$ in Eq.~\eqref{inequality:DOGD-fourth} yields that \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[f_t(S^t)] - \left(\sum_{t=1}^T \tfrac{1}{\alpha} \bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T)\right) = O(\sqrt{nT^{1+\gamma}}). \end{equation*} which implies that ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(\sqrt{nT^{1+\gamma}})$ as desired.
We proceed to derive a high probability bound using the concentration inequality in Proposition~\ref{Prop:Hoeffding}. Indeed, we have \begin{equation*} \mathbb{P}\left(\sum_{i=1}^n f_t(S^t) - {\mathbb{E}}\left[\sum_{i=1}^n f_t(S^t)\right] > \epsilon\right) \leq \exp\left(-\frac{\epsilon^2}{2nL^2}\right). \end{equation*} Equivalently, we have $\sum_{i=1}^n f_t(S^t) - {\mathbb{E}}[\sum_{i=1}^n f_t(S^t)] \leq L\sqrt{2T\log(1/\delta)}$ with probability at least $1 - \delta$. This together with ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(\sqrt{nT^{1+\gamma}})$ yields that $R_{\alpha, \beta}(T) = O(\sqrt{nT^{1+\gamma}} + \sqrt{T\log(1/\delta)})$ with probability at least $1 - \delta$. \section{Regret Analysis for Algorithm~\ref{alg:DBGD}}\label{app:DBGD} In this section, we present several technical lemmas for analyzing the regret minimization property of Algorithm~\ref{alg:DBGD}. We also give the missing proofs of Theorem~\ref{Theorem:DBGD}.
\subsection{Technical lemmas} We provide two technical lemmas for Algorithm~\ref{alg:DBGD} which are analogues to Lemma~\ref{Lemma:BGD-second} and~\ref{Lemma:BGD-third}. It gives a key inequality for analyzing the regret minimization property of Algorithm~\ref{alg:DOGD}. Note that the results in Lemma~\ref{Lemma:BGD-first} still hold true for the iterates $\{x^t\}_{t \geq 1}$ and $\{\hat{g}^t\}_{t \geq 1}$ generated by Algorithm~\ref{alg:DBGD}. \begin{lemma}\label{Lemma:DBGD-first} Suppose that the iterates $\{x^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:DBGD} and $x \in [0, 1]^n$ and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$. Then, we have \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[(f_t)_L(x^t)] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \tfrac{n}{2\eta_T} + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right) + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right), \end{equation*} where $\bar{T} > 0$ in the above inequality satisfies that $q_{\bar{T}} = T$. \end{lemma} \begin{proof} Using the same argument as in Lemma~\ref{Lemma:OGD-second}, we have \begin{equation*}
(x^t - x)^\top \hat{g}^{q_t} \leq \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta_t}{2}\|\hat{g}^{q_t}\|^2. \end{equation*}
Since our pooling policy is induced by a priority queue, $\hat{g}^{q_t}$ has never been used before updating $x^{t+1}$. Thus, we have ${\mathbb{E}}[\hat{g}^{q_t} \mid x^t] = {\mathbb{E}}[\hat{g}^{q_t} \mid x^{q_t}]$ and ${\mathbb{E}}[\|\hat{g}^{q_t}\|^2 \mid x^t] = {\mathbb{E}}[\|\hat{g}^{q_t}\|^2 \mid x^{q_t}]$. By Lemma~\ref{Lemma:BGD-first}, we have ${\mathbb{E}}[\hat{g}^{q_t} \mid x^{q_t}] = g^{q_t}$ and ${\mathbb{E}}[\|\hat{g}^{q_t}\|^2 \mid x^{q_t}] \leq \tfrac{8n^2L^2}{\mu_{q_t}}$ for all $t \geq 1$. Putting these pieces together yields that \begin{equation*}
(x^t - x)^\top g^{q_t} \leq \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - {\mathbb{E}}[\|x - x^{t+1}\|^2 \mid x^t]\right) + \tfrac{4n^2 L^2\eta_t}{\mu_{q_t}}. \end{equation*} Since $f_t = \bar{f}_t - \ushort{f}_t$ where $\bar{f}_t$ and $\ushort{f}_t$ are both normalized and non-decreasing, $\bar{f}_t$ is $\alpha$-weakly DR-submodular and $\ushort{f}_t$ is $\beta$-weakly DR-supermodular, Proposition~\ref{Prop:structure} implies that \begin{equation*} (x^{q_t} - x)^\top g^{q_t} \geq (f_{q_t})_L(x^{q_t}) - \left(\tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x)\right). \end{equation*}
By Lemma~\ref{Lemma:OGD-first}, we have $\|g^t\| \leq L$ for all $t \geq 1$. Then, we have \begin{equation}\label{inequality:DBGD-first}
(f_{q_t})_L(x^{q_t}) \leq \tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x) + \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - {\mathbb{E}}[\|x - x^{t+1}\|^2 \mid x^t]\right) + L\|x^{q_t} - x^t\| + \tfrac{4n^2 L^2\eta_t}{\mu_{q_t}}. \end{equation} Further, by Lemma~\ref{Lemma:BGD-first}, we have \begin{equation}\label{inequality:DBGD-second}
\|x^{q_t} - x^t\| \leq \sum_{s=q_t}^{t-1} \eta_s \|\hat{g}^s\| \leq 2(n+1)L\left(\sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right). \end{equation} Plugging Eq.~\eqref{inequality:DBGD-second} into Eq.~\eqref{inequality:DBGD-first} yields that \begin{eqnarray*} \lefteqn{(f_{q_t})_L(x^{q_t}) \leq \tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x)} \\
& & + \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - {\mathbb{E}}[\|x - x^{t+1}\|^2 \mid x^t]\right) + \tfrac{4n^2 L^2\eta_t}{\mu_{q_t}} + 4nL^2\left(\sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right). \end{eqnarray*} By using the same argument as in Lemma~\ref{Lemma:DOGD-first}, we have \begin{eqnarray*}
\sum_{t=1}^T (f_t)_L(x^t) & \leq & \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \sum_{t=1}^{\bar{T}} \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - {\mathbb{E}}[\|x - x^{t+1}\|^2 \mid x^t]\right) \\ & & + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right) + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right). \end{eqnarray*}
Taking the expectation of both sides of the above inequality and using $\|x^t - x\| \leq \sqrt{n}$ for all $t \geq 1$ (cf. Lemma~\ref{Lemma:BGD-first}) and that $\{\eta_t\}_{t \geq 1}$ is nonincreasing yields the desired inequality. \end{proof} Then, we provide our second lemma which significantly generalizes Lemma~\ref{Lemma:DBGD-first} for deriving the high-probability bounds. \begin{lemma}\label{Lemma:DBGD-second} Suppose that the iterates $\{x^t\}_{t \geq 1}$ are generated by Algorithm~\ref{alg:DBGD} with $\eta_t = \frac{1}{Lt^{(2+\gamma)/3}}$, $\mu_t = \frac{n}{t^{(1-\gamma)/3}}$ and $x \in [0, 1]^n$ and let $f_t = \bar{f}_t - \ushort{f}_t$ satisfy that $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$. Fixing a sufficiently small $\delta \in (0, 1)$ and letting $T > \log^{\frac{3}{2 + \gamma}}(1/\delta)$. Then, we have \begin{eqnarray*} \sum_{t=1}^T (f_t)_L(x^t) & \leq & \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{n}{2\eta_{\bar{T}}} + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right) + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right) \\ & & + 12L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n^2 + n\log(1/\delta)} + 6L\sqrt{n\bar{T}\log(1/\delta)}, \end{eqnarray*} with probability at least $1 - 3\delta$ where $\bar{T} > 0$ in the above inequality satisfies that $q_{\bar{T}} = T$. \end{lemma} \begin{proof} Using the same argument as in Lemma~\ref{Lemma:DBGD-first}, we have \begin{equation*}
(x^t - x)^\top \hat{g}^{q_t} \leq \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta_t}{2}\|\hat{g}^{q_t}\|^2, \end{equation*} and \begin{equation*} (x^{q_t} - x)^\top g^{q_t} \geq (f_{q_t})_L(x^{q_t}) - \left(\tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x)\right). \end{equation*}
For simplicity, we define $e_t = \hat{g}^t - g^t$. By Lemma~\ref{Lemma:OGD-first}, we have $\|g^t\| \leq L$ for all $t \geq 1$. Then, we have \begin{eqnarray}\label{inequality:DBGD-third} \lefteqn{(f_{q_t})_L(x^{q_t}) - \left(\tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x)\right)} \\
& \leq & (x - x^t)^\top e_{q_t} + \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + L\|x^{q_t} - x^t\| + \tfrac{\eta_t}{2}\|\hat{g}^{q_t}\|^2. \nonumber \end{eqnarray} Plugging Eq.~\eqref{inequality:DBGD-second} into Eq.~\eqref{inequality:DBGD-third} yields that \begin{eqnarray*} \lefteqn{(f_{q_t})_L(x^{q_t}) - \left(\tfrac{1}{\alpha}(\bar{f}_{q_t})_C(x) + \beta(-\ushort{f}_{q_t})_C(x)\right)} \\
& \leq & (x - x^t)^\top e_{q_t} + \tfrac{1}{2\eta_t}\left(\|x - x^t\|^2 - \|x - x^{t+1}\|^2\right) + \tfrac{\eta_t}{2}(\|\hat{g}^t \|^2 - {\mathbb{E}}[\|\hat{g}^t \|^2 \mid x^t]) + \tfrac{4n^2 L^2\eta_t}{\mu_{q_t}} + 4nL^2\left(\sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right). \end{eqnarray*} By using the same argument as in Lemma~\ref{Lemma:DOGD-first}, we have \begin{eqnarray*}
\sum_{t=1}^T (f_t)_L(x^t) & \leq & \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) + \sum_{t=1}^{\bar{T}} (x - x^t)^\top e_{q_t} + \sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{2}(\|\hat{g}^t \|^2 - {\mathbb{E}}[\|\hat{g}^t \|^2 \mid x^t]) \\ & & + \tfrac{n}{2\eta_{\bar{T}}} + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right) + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right). \end{eqnarray*} By the definition of the convex closure, we obtain that the convex closure of a set function $f$ agrees with $f$ on all the integer points~\citep[Page~4, Proposition~3.3]{Dughmi-2009-Submodular}. Letting $S \subseteq [n]$, we have $(\bar{f}_t)_C(\chi_S) = f_t(S)$ and $(-\ushort{f}_t)_C(\chi_S) = - \beta\ushort{f}_t(S)$ which implies that \begin{equation*} \tfrac{1}{\alpha}(\bar{f}_t)_C(\chi_S) + \beta(-\ushort{f}_t)_C(\chi_S) = \tfrac{1}{\alpha}f_t(S) - \beta\ushort{f}_t(S). \end{equation*} Letting $x = \chi_S$, we have \begin{eqnarray}\label{inequality:DBGD-fourth}
\sum_{t=1}^T (f_t)_L(x^t) & \leq & \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \underbrace{\sum_{t=1}^{\bar{T}} (\chi_S - x^t)^\top e_{q_t}}_{\textbf{I}} + \underbrace{\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{2}(\|\hat{g}^t \|^2 - {\mathbb{E}}[\|\hat{g}^t \|^2 \mid x^t])\right)}_{\textbf{II}} \nonumber \\ & & + \tfrac{n}{2\eta_{\bar{T}}} + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right) + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right). \end{eqnarray} In what follows, we prove the high probability bounds for the terms \textbf{I} and \textbf{II} in the above inequality. \paragraph{Bounding \textbf{I}.} Consider the random variables $X_t = (x^t)^\top\hat{g}^{q_t}$ for all $1 \leq t \leq \bar{T}$ that are adapted to the natural filtration generated by the iterates $\{x_t\}_{t \geq 1}$. By Lemma~\ref{Lemma:BGD-first} and the H\"{o}lder's inequality, we have \begin{equation*}
|X_t| \leq \|\hat{g}^{q_t}\|_1\|x^t\|_\infty \leq \tfrac{2(n+1)L}{\mu_t}. \end{equation*}
Since $\mu = \frac{n}{t^{(1-\gamma)/3}}$, we have $|X_t| \leq 4L\bar{T}^{\frac{1-\gamma}{3}}$ for all $1 \leq t \leq \bar{T}$. Further, we have \begin{equation*}
{\mathbb{E}}[X_t^2 \mid x_t] \leq {\mathbb{E}}[\|\hat{g}^t\|_1^2\|x^t\|_\infty^2 \mid x_t] \leq \tfrac{2(n+1)^2L^2}{\mu_t} \leq 8nL^2 \bar{T}^{\frac{1-\gamma}{3}}. \end{equation*} Since ${\mathbb{E}}[\hat{g}^{q_t} \mid x^t] = g^{q_t}$ and $e_{q_t} = \hat{g}^{q_t} - g^{q_t}$, Proposition~\ref{Prop:Bernstein} implies that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^{\bar{T}} (x^t)^\top e_{q_t}\right| > 4L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n\log(1/\delta)} + 2L\bar{T}^{\frac{1-\gamma}{3}}\log(1/\delta)\right) \leq \delta. \end{equation*} Since $\bar{T} \geq T > \log^{\frac{3}{2+\gamma}}(1/\delta)$, we have $\bar{T}^{\frac{4-\gamma}{6}}\sqrt{\log(1/\delta)} \geq \bar{T}^{\frac{1-\gamma}{3}}\log(1/\delta)$. This implies that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^{\bar{T}} (x^t)^\top e_{q_t}\right| > 6L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n\log(1/\delta)} \right) \leq \delta. \end{equation*} Similarly, we fix a set $S \subseteq [n]$ and consider the random variable $X_t = (\chi_S)^\top\hat{g}^t$ for all $1 \leq t \leq \bar{T}$ that are adapted to the natural filtration generated by the iterates $\{x_t\}_{t \geq 1}$. By repeating the above argument with $\frac{\delta}{2^n}$, we have \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^{\bar{T}} (\chi_S)^\top e_{q_t}\right| > 6L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n\log(2^n/\delta)} \right) \leq \tfrac{\delta}{2^n}. \end{equation*} By taking a union bound over the $2^n$ choices of $S$, we obtain that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^{\bar{T}} (\chi_S)^\top e_{q_t}\right| > 6L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n\log(2^n/\delta)} \right) \leq \delta, \quad \textnormal{for any } S \subseteq [n]. \end{equation*} Since $\sqrt{n\log(2^n/\delta)} \leq \sqrt{n^2 + n\log(1/\delta)}$, we have $\textbf{I} \leq 12L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n^2 + n\log(1/\delta)}$ with probability at least $1 - 2\delta$.
\paragraph{Bounding \textbf{II}.} Consider the random variables $X_t = \frac{\eta_t}{2}\|\hat{g}^{q_t}\|^2$ for all $1 \leq t \leq \bar{T}$ that are adapted to the natural filtration generated by the iterates $\{x^t\}_{t \geq 1}$. By Lemma~\ref{Lemma:BGD-first}, we have $|X_t| \leq \tfrac{(n+1)^2L^2\eta_t}{\mu_t^2}$. Since $\eta_t = \frac{1}{Lt^{(2+\gamma)/3}}$ and $\mu_t = \frac{n}{t^{(1-\gamma)/3}}$, we have $|X_t| \leq 4L$. Further, we have \begin{equation*} {\mathbb{E}}[X_t^2 \mid x_t] \leq \tfrac{(n+1)^4L^4\eta_t^2}{2\mu_t^3} \leq 8nL^2. \end{equation*} Applying Proposition~\ref{Prop:Bernstein}, we have \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{2}(\|\hat{g}^{q_t}\|^2 - {\mathbb{E}}[\|\hat{g}^{q_t}\|^2 \mid x^t])\right| > 4L\sqrt{n\bar{T}\log(1/\delta)} + 2L\log(1/\delta)\right) \leq \delta. \end{equation*} Since $\bar{T} \geq T > \log^{\frac{3}{2+\gamma}}(1/\delta)$, we have $\sqrt{\bar{T}\log(1/\delta)} \geq \log(1/\delta)$. This implies that \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{2}(\|\hat{g}^{q_t}\|^2 - {\mathbb{E}}[\|\hat{g}^{q_t}\|^2 \mid x^t])\right| > 6L\sqrt{n\bar{T}\log(1/\delta)} \right) \leq \delta. \end{equation*} Therefore, we conclude that $\textbf{II} \leq 6L\sqrt{n\bar{T}\log(1/\delta)}$ with probability at least $1 - \delta$. Putting these pieces together with Eq.~\eqref{inequality:DBGD-fourth} yields that \begin{eqnarray*} \sum_{t=1}^T (f_t)_L(x^t) & \leq & \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{n}{2\eta_{\bar{T}}} + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right) + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right) \\ & & + 12L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n^2 + n\log(1/\delta)} + 6L\sqrt{n\bar{T}\log(1/\delta)}, \end{eqnarray*} with probability at least $1 - 3\delta$. \end{proof}
\subsection{Proof of Theorem~\ref{Theorem:DBGD}} By~\citet[Corollary~1]{Heliou-2020-Gradient}, we have $t - q_t = o(t^\gamma)$ under Assumption~\ref{Assumption:delay}; in particular, we have $t - q_t = o(t)$ and $q_t = \Theta(t)$. Since $q_{\bar{T}} = T$, we have $T = \Theta(\bar{T})$ which implies that $\bar{T} = \Theta(T)$. Recall that $\eta_t = \frac{1}{Lt^{(2+\gamma)/3}}$ and $\mu_t = \frac{n}{t^{(1-\gamma)/3}}$, we have \begin{eqnarray*} \tfrac{n}{2\eta_{\bar{T}}} & = & \tfrac{nL\bar{T}^{\frac{2+\gamma}{3}}}{2} \ = \ O(nT^{\frac{2+\gamma}{3}}), \\ 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right) & = & 4nL\left(\sum_{t=1}^{\bar{T}} \tfrac{(q_t)^{\frac{1-\gamma}{3}}}{t^{\frac{2+\gamma}{3}}}\right) \ = \ O\left(nL\sum_{t=1}^{\bar{T}} \tfrac{1}{t^{\frac{1+2\gamma}{3}}}\right) \ = \ O(nL\bar{T}^{\frac{2-2\gamma}{3}}) \ = \ O(nT^{\frac{2-2\gamma}{3}}), \\ 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right) & \leq & 4L\left(\sum_{t=1}^{\bar{T}} (t - q_t)\tfrac{\eta_{q_t}}{\mu_t}\right) \ = \ O\left(L\sum_{t=1}^{\bar{T}} \tfrac{1}{t^{\frac{1-\gamma}{3}}}\right) \ = \ O(L\bar{T}^{\frac{2+\gamma}{3}}) \ = \ O(T^{\frac{2+\gamma}{3}}), \end{eqnarray*} Putting these pieces together with Lemma~\ref{Lemma:DBGD-first} yields that \begin{equation}\label{inequality:DBGD-fifth} \sum_{t=1}^T {\mathbb{E}}[(f_t)_L(x^t)] - \left(\sum_{t=1}^T \tfrac{1}{\alpha}(\bar{f}_t)_C(x) + \beta(-\ushort{f}_t)_C(x)\right) = O(nT^{\frac{2+\gamma}{3}}). \end{equation} By using the similar argument as in Theorem~\ref{Theorem:BGD}, we have \begin{equation}\label{inequality:DBGD-sixth} {\mathbb{E}}[f_t(S^t) \mid x^t] - (f_t)_L(x^t) \leq L\mu_t\sum_{i=0}^n \left(\tfrac{1}{n+1} + \lambda_i^t\right) = 2L\mu_t. \end{equation} which implies that \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[f_t(S^t)] - {\mathbb{E}}[(f_t)_L(x^t)] \ \leq \ 2L \sum_{t=1}^T\mu_t = O(nT^{\frac{2+\gamma}{3}}). 
\end{equation*} Using the same argument as in Theorem~\ref{Theorem:OGD}, we have \begin{equation*} \tfrac{1}{\alpha}(\bar{f}_t)_C(\chi_{S_\star^T}) + \beta(-\ushort{f}_t)_C(\chi_{S_\star^T}) = \tfrac{1}{\alpha}f_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T), \quad \textnormal{where } S_\star^T = \mathop{\rm argmin}_{S \subseteq [n]} \sum_{t=1}^T f_t(S). \end{equation*} Putting these pieces together and letting $x = \chi_{S_\star^T}$ in Eq.~\eqref{inequality:DBGD-fifth} yields that \begin{equation*} \sum_{t=1}^T {\mathbb{E}}[f_t(S^t)] - \left(\sum_{t=1}^T \tfrac{1}{\alpha} \bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T)\right) = O(nT^{\frac{2+\gamma}{3}}). \end{equation*} which implies that ${\mathbb{E}}[R_{\alpha, \beta}(T)] = O(nT^{\frac{2+\gamma}{3}})$ as desired.
We proceed to derive a high probability bound using Lemma~\ref{Lemma:DBGD-second}. Indeed, we first consider the case of $T < 2\log^{\frac{3}{2+\gamma}}(1/\delta)$. Since $f_t = \bar{f}_t - \ushort{f}_t$ satisfies $\bar{f}_t([n]) + \ushort{f}_t([n]) \leq L$ for all $t \geq 1$, we have \begin{equation*} R_{\alpha, \beta}(T) \leq \sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T (\tfrac{1}{\alpha}\bar{f}_t(S_\star^T) - \beta\ushort{f}_t(S_\star^T)) \leq \left(1 + \tfrac{1}{\alpha} + \beta\right)LT = O(T^{\frac{4-\gamma}{6}}\sqrt{\log(1/\delta)}). \end{equation*} For the case of $T \geq 2\log^{\frac{3}{2+\gamma}}(1/\delta)$, we obtain by combining Lemma~\ref{Lemma:DBGD-second} with Eq.~\eqref{inequality:DBGD-sixth} that \begin{eqnarray*} \lefteqn{\sum_{t=1}^T {\mathbb{E}}[f_t(S^t) \mid x^t] \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{n}{2\eta_{\bar{T}}} + 2L\left(\sum_{t=1}^T\mu_t\right) + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right)} \\ & & + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right) + 12L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n^2 + n\log(1/\delta)} + 6L\sqrt{n\bar{T}\log(1/\delta)}, \end{eqnarray*} with probability at least $1 - 3\delta$. Then, it suffices to bound the term $\sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T {\mathbb{E}}[f_t(S^t) \mid x^t]$ using Proposition~\ref{Prop:Bernstein}. By using the same argument as in Theorem~\ref{Theorem:BGD}, we have \begin{equation*}
\mathbb{P}\left(\left|\sum_{t=1}^T f_t(S^t) - {\mathbb{E}}[f_t(S^t) \mid x^t]\right| > 3L\sqrt{T\log(1/\delta)} \right) \leq \delta, \end{equation*} which implies that $\sum_{t=1}^T f_t(S^t) - \sum_{t=1}^T {\mathbb{E}}[f_t(S^t) \mid x^t] \leq 3L\sqrt{T\log(1/\delta)}$ with probability at least $1 - \delta$. Putting these pieces together yields that \begin{eqnarray*} \lefteqn{\sum_{t=1}^T f_t(S^t) \leq \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) + \tfrac{n}{2\eta_{\bar{T}}} + 2L\left(\sum_{t=1}^T\mu_t\right) + 4n^2L^2\left(\sum_{t=1}^{\bar{T}} \tfrac{\eta_t}{\mu_{q_t}}\right)} \\ & & + 4nL^2\left(\sum_{t=1}^{\bar{T}} \sum_{s=q_t}^{t-1} \tfrac{\eta_s}{\mu_s}\right) + 3L\sqrt{T\log(1/\delta)} + 12L\bar{T}^{\frac{4 - \gamma}{6}}\sqrt{n^2 + n\log(1/\delta)} + 6L\sqrt{n\bar{T}\log(1/\delta)}, \end{eqnarray*} with probability at least $1 - 4\delta$. Plugging the choices of $\eta_t = \frac{1}{Lt^{(2+\gamma)/3}}$ and $\mu_t = \frac{n}{t^{(1-\gamma)/3}}$ and $\bar{T} = \Theta(T)$ yields that \begin{equation*} \sum_{t=1}^T f_t(S^t) - \left(\sum_{t=1}^T \tfrac{1}{\alpha}\bar{f}_t(S) - \beta\ushort{f}_t(S)\right) = O\left(nT^{\frac{2+\gamma}{3}} + \sqrt{n\log(1/\delta)}T^{\frac{4-\gamma}{6}}\right), \end{equation*} with probability at least $1 - 4\delta$. Letting $S = S_\star^T = \mathop{\rm argmin}_{S \subseteq [n]} \sum_{t=1}^T f_t(S)$ and changing $\delta$ to $\frac{\delta}{4}$ yields that $R_{\alpha, \beta}(T) = O(nT^{\frac{2+\gamma}{3}} + \sqrt{n\log(1/\delta)}T^{\frac{4-\gamma}{6}})$ with probability at least $1 - \delta$ as desired.
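For completeness, the elementary estimate behind the small-$T$ case above ($T < 2\log^{\frac{3}{2+\gamma}}(1/\delta)$) can be expanded as follows (our rewriting of that step, not verbatim from the proof):

```latex
% Expanded form of the bound used when $T < 2\log^{\frac{3}{2+\gamma}}(1/\delta)$:
\begin{align*}
T \;=\; T^{\frac{4-\gamma}{6}}\,T^{\frac{2+\gamma}{6}}
  \;\leq\; T^{\frac{4-\gamma}{6}}
     \Big(2\log^{\frac{3}{2+\gamma}}(1/\delta)\Big)^{\frac{2+\gamma}{6}}
  \;=\; 2^{\frac{2+\gamma}{6}}\,T^{\frac{4-\gamma}{6}}\sqrt{\log(1/\delta)}\,,
\end{align*}
% since $\big(\log^{\frac{3}{2+\gamma}}(1/\delta)\big)^{\frac{2+\gamma}{6}}
% = \log^{\frac{1}{2}}(1/\delta)$; hence
% $(1 + \frac{1}{\alpha} + \beta)LT = O(T^{\frac{4-\gamma}{6}}\sqrt{\log(1/\delta)})$.
```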
\end{document} |
\begin{document}
\begin{abstract} We study the regularity of minimizers for a variant of the soap bubble cluster problem: \begin{align*}
\min \sum_{\ell=0}^N c_{\ell} P( S_\ell)\,, \end{align*}
where $c_\ell>0$, among partitions $\{S_0,\dots,S_N,G\}$ of $\mathbb{R}^2$ satisfying $|G|\leq \delta$ and an area constraint on each $S_\ell$ for $1\leq \ell \leq N$. If $\delta>0$, we prove that for any minimizer, each $\partial S_{\ell}$ is $C^{1,1}$ and consists of finitely many curves of constant curvature. Any such curve contained in $\partial S_{\ell} \cap \partial S_{m}$ or $\partial S_\ell \cap \partial G$ can only terminate at a point in $\partial G \cap \partial S_\ell \cap \partial S_{m}$ at which $G$ has a cusp. We also analyze a similar problem on the unit ball $B$ with a trace constraint instead of an area constraint and obtain analogous regularity up to $\partial B$. Finally, in the case of equal coefficients $c_\ell$, we completely characterize minimizers on the ball for small $\delta$: they are perturbations of minimizers for $\delta=0$ in which the triple junction singularities, including those possibly on $\partial B$, are ``wetted" by $G$. \end{abstract}
\maketitle
\setcounter{tocdepth}{2}
\section{Introduction} \subsection{Overview} A classical problem in the calculus of variations is the soap bubble cluster problem, which entails finding the configuration, or cluster, of least area separating $N$ regions with prescribed volumes, known as chambers. Various generalizations have been studied extensively as well and may involve different coefficients penalizing the interfaces between pairs of regions (the immiscible fluids problem) or anisotropic energies. The existence of minimal clusters and almost everywhere regularity for a wide class of problems of this type were obtained by Almgren in the foundational work \cite{Alm76}. The types of singularities present in minimizers in the physical dimensions are described by Plateau's laws, which were verified in $\mathbb{R}^3$ by Taylor \cite{Taylor}. In the plane, regions in a minimizing cluster are bounded by finitely many arcs of constant curvature meeting at $120^{\circ}$ angles \cite{Morgan94}. We refer to the book \cite{MorganBook} for further discussion on the literature for soap bubble clusters. \par In this article we study the interaction of the regularity/singularities of 2D soap bubbles with other physical properties such as thickness. Soap bubbles are generally modeled as surfaces, or ``dry" soap bubbles. This framework is quite natural for certain questions, e.g. singularity analysis as observed above, but it does not capture features related to thickness or volume of the soap. Issues such as which other types of singularities can be stabilized by ``wetting" the film \cite{Hutzler,WeairePhelan} require the addition of a small volume parameter to the model corresponding to the enclosed liquid; see for example \cite{BrakkeMorgan,Brakke05}. 
In the context of least-area surfaces with fixed boundary (Plateau problem), the authors in \cite{MSS,KinMagStu22,KMS2,KMS3} have formulated a soap film capillarity model that selects surface tension energy minimizers enclosing a small volume and spanning a given wire frame. The analysis of minimizers is challenging, for example due to the higher multiplicity surfaces that arise if the thin film ``collapses." \par Here we approach these issues through the regularity analysis of minimizers of a version of the planar minimal cluster problem. In the model, there are $N$ chambers of fixed area (the soap bubbles) and an exterior chamber whose perimeters are penalized, and there is also an un-penalized region $G$ of small area at most $\delta>0$. This region may be thought of as the ``wet" part of the soap film where soap accumulates (see Remarks \ref{interpretation remark}-\ref{remark on soft constraint} and \ref{wetting remark}). Our first main result, Theorem \ref{main regularity theorem delta positive all of space}, is a sharp regularity result for minimizers: each of the $N$ chambers as well as the exterior chamber have $C^{1,1}$ boundary, while $\partial G$ is regular away from finitely many cusps. In particular, each bubble is regular despite the fact that the bubbles in the $\delta\to 0$ limit may exhibit singularities. We also study a related problem on the ball in which the area constraints on the chambers are replaced by boundary conditions on the circle and prove a similar theorem up to the boundary (Theorem \ref{main regularity theorem delta positive}). As a consequence, in Theorem \ref{resolution for small delta corollary}, we completely resolve minimizers on the ball for small $\delta$ in terms of minimizers for the limiting ``dry" problem: near each triple junction singularity of the limiting minimizer, there is a component of $G$ ``wetting" the
singularity and bounded by three circular arcs meeting in cusps inside the ball and corners or cusps at the boundary; see Figure \ref{perturbation figure}.
\begin{figure}
\caption{On the left is a minimizing cluster $\mathcal{S}^0$ for the $\delta=0$ problem on the ball with chambers $S_\ell^0$. On the right is a minimizer $\mathcal{S}^\delta$ for small $\delta$, with $|G^\delta|=\delta$. Near the triple junctions of $\mathcal{S}^0$, $\partial G^\delta$ consists of three circular arcs meeting in cusps; see Theorem \ref{resolution for small delta corollary}. }
\label{perturbation figure}
\end{figure}
\subsection{Statement of the problem} For an $(N+2)$-tuple $\mathcal{S}=(S_0,S_1,\dots, S_N, G)$ of disjoint sets of finite perimeter partitioning $\mathbb{R}^2$ ($N\geq 2$), called a cluster, we study minimizers of the energy \begin{align}\notag
\mathcal{F}(\mathcal{S}):= \sum_{\ell=0}^N c_{\ell} P( S_\ell)\,,\qquad c_\ell>0\quad \forall 0 \leq \ell \leq N\,, \end{align} among two admissible classes. First, we consider the problem on all of space \begin{align}\label{all of space problem}
\inf_{\mathcal{S} \in \mathcal{A}_\delta^{\mathbf{m}}} \mathcal{F}(\mathcal{S})\,, \end{align} where the admissible class $\mathcal{A}_\delta^{\mathbf{m}}$ consists of all clusters satisfying \begin{align}\label{lebesgue constraint}
|G|=|\mathbb{R}^2 \setminus \cup_{\ell=0}^N S_\ell| \leq \delta \end{align}
and, for some fixed $\mathbf{m}\in (0,\infty)^N$, $(|S_1|,\dots,|S_N|)=\mathbf{m}$. We also consider a related problem on the unit ball $B=\{(x,y):x^2+y^2<1\}$. We study the minimizers of \begin{align}\label{ball problem}
\inf_{\mathcal{S} \in \mathcal{A}_\delta^h} \mathcal{F}(\mathcal{S})\,, \end{align} where $\mathcal{A}_\delta^h$ consists of all clusters such that, for fixed $h\in BV(\partial B;\{1,\dots,N\})$, \begin{align}\label{trace constraint}
S_\ell \cap \partial B = \{x\in \partial B : h(x) = \ell \}\textup{ for $1\leq \ell \leq N$ in the sense of traces}\,, \end{align} $S_0=\mathbb{R}^2\setminus B$ is the exterior chamber, and $G$ satisfies \eqref{lebesgue constraint}. We remark that since $\mathcal{A}_\delta^{\mathbf{m}}\subset \mathcal{A}_{\delta'}^{\mathbf{m}}$ and $\mathcal{A}_\delta^h \subset \mathcal{A}_{\delta'}^h$ if $\delta<\delta'$, the minimum energy decreases in $\delta$ for both \eqref{all of space problem} and \eqref{ball problem}. \par The main energetic mechanism at work in \eqref{all of space problem} that distinguishes it from the classical minimal cluster problem is that the set $G$ prohibits the creation of corners in the chambers $S_\ell$. If $r\ll 1$, the amount of perimeter saved by smoothing out a corner of $S_\ell$ in $B_r(x)$ using the set $G$ scales like $r$, and this can be accomplished while simultaneously preserving the area constraint by fixing areas elsewhere with cost $\approx r^2$ \cite[VI.10-12]{Alm76}. On the other hand, the regularizing effect of $G$ only extends to the other chambers and not to its own boundary since its perimeter is not penalized.
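The scaling heuristic above can be made concrete with a model computation (an illustration under the simplifying assumption of a straight corner; the constants are ours, not taken from \cite{Alm76}):

```latex
% Model computation: suppose $S_\ell$ has a straight corner at $x$ with
% half-opening angle $\theta \in (0,\pi/2)$. Replacing the two sides of the
% corner inside $B_r(x)$ (total length $2r$) by the chord joining their exit
% points (length $2r\sin\theta$), and assigning the cut-off piece to $G$, gives
\begin{align*}
  \text{perimeter saved} \;=\; 2r - 2r\sin\theta \;=\; 2r(1-\sin\theta)\,,
  \qquad
  \text{area given to } G \;\leq\; \pi r^2\,.
\end{align*}
% Restoring the area constraint elsewhere costs $O(r^2)$ perimeter, which for
% $r \ll 1$ is negligible against the $\Theta(r)$ gain.
```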
\subsection{Main results} We obtain optimal regularity results for minimizers of \eqref{all of space problem} and \eqref{ball problem}. In addition, for the problem with equal weights $c_\ell$, we completely resolve minimizers of \eqref{ball problem} for small $\delta>0$ in terms of minimizers for $\delta=0$. In the following theorems and throughout the paper, the term ``arc of constant curvature" may refer to either a single circle arc or a straight line segment.
\begin{theorem}[Regularity on $\mathbb{R}^2$ for $\delta>0$]\label{main regularity theorem delta positive all of space}
If $\mathcal{S}^\delta$ is a minimizer for $\mathcal{F}$ among $\mathcal{A}_\delta^{\mathbf{m}}$ for $\delta>0$, then $\partial S_\ell^\delta$ is $C^{1,1}$ for each $\ell$, and there exists $\kappa^\delta_{\ell m}$ such that each $\partial S_\ell^\delta \cap \partial S_m^\delta$ is a finite union of arcs of constant curvature $\kappa_{\ell m}^\delta$ that can only terminate at a point in $\partial S_\ell^\delta \cap \partial S_m^\delta \cap \partial G^\delta$. Referring to those points in $\partial S_\ell^\delta \cap \partial S_m^\delta \cap \partial G^\delta$ as cusp points, there exist $\kappa_\ell^\delta$ for $0\leq \ell \leq N$ such that
$\partial S_\ell^\delta \cap \partial G^\delta$ is a finite union of arcs of constant curvature $\kappa_\ell^\delta$, each of which can only terminate at a cusp point where $\partial S_\ell^\delta\cap \partial G^\delta $ and $\partial S_m^\delta\cap \partial G^\delta $ meet a component of $\partial S_\ell^\delta \cap \partial S_m^\delta$ tangentially. \end{theorem}
\begin{remark}[Interpretation of $G^\delta$]\label{interpretation remark} For the case $c_\ell =1$, a possible reformulation of \eqref{all of space problem} that views the interfaces as thin regions of liquid rather than surfaces is \begin{align}\label{thin film cluster formulation}
\inf\{ \mathcal{F}(\mathcal{S}) : \mathcal{S}\in \mathcal{A}_\delta^{\mathbf{m}},\, \mbox{$S_\ell$ open $\forall \ell$, $\mathrm{cl}\, S_\ell \cap \mathrm{cl}\, S_m=\emptyset$ $\forall \ell \neq m$} \}\,. \end{align} This is because if $\mathcal{S}$ belongs to this class, then each bubble $S_\ell$ for $1\leq \ell \leq N$ must be separated from the others and the exterior chamber $S_0$ by the soap $G$, and $\mathcal{F}(\mathcal{S}) = P(G)$, which is the energy of the soap coming from surface tension. Theorem \ref{main regularity theorem delta positive all of space} allows for a straightforward construction showing that in fact, \eqref{all of space problem} and \eqref{thin film cluster formulation} are equivalent, in that a minimizer for \eqref{all of space problem} can be approximated in energy by clusters in the smaller class \eqref{thin film cluster formulation}. Therefore, for a minimizer $\mathcal{S}^\delta$ of \eqref{all of space problem}, $G^\delta$ can be understood as the ``wet" part of the interfaces between bubbles where soap accumulates in the limit of a minimizing sequence for \eqref{thin film cluster formulation}, as opposed to $\partial S_\ell^\delta \cap \partial S_m^\delta$ which is the ``dry" part; see Figure \ref{double bubble}. \end{remark}
\begin{remark}[Constraint on $G^\delta$]\label{remark on soft constraint}
We have incorporated $G^\delta$ with a soft constraint $|G^\delta|\leq \delta$ rather than a hard constraint $|G^\delta|=\delta$ to allow the minimizers to ``select" the area of $G^\delta$. A consequence of Theorem \ref{main regularity theorem delta positive all of space} is that if some minimizer $\mathcal{S}^0$ of \eqref{all of space problem} for $\delta=0$ has a singularity, then every minimizer $\mathcal{S}^\delta$ for given $\delta>0$ satisfies $|G^\delta|>0$. Indeed, if $|G^\delta|=0$, then $\mathcal{F}(\mathcal{S}^0)\leq \mathcal{F}(\mathcal{S}^\delta)=\inf_{\mathcal{A}_\delta^{\mathbf{m}}}\mathcal{F}$, so that $\mathcal{S}^0$ is minimal among $\mathcal{A}_\delta^{\mathbf{m}}$ and the regularity in Theorem \ref{main regularity theorem delta positive all of space} for $\mathcal{S}^0$ yields a contradiction. As we prove in Theorem \ref{resolution for small delta corollary}, the minimizer on the ball for small $\delta$ and equal coefficients saturates the inequality $|G^\delta|\leq \delta$, and we suspect this should hold in generality for \eqref{all of space problem} and \eqref{ball problem} with small $\delta$. \end{remark}
\begin{figure}\label{double bubble}
\end{figure}
We turn now to our results regarding the problem \eqref{ball problem} on the ball. Here regularity holds up to the boundary $\partial B$, at which $G^\delta$ may have corners, rather than cusps, at jump points of $h$.
\begin{theorem}[Regularity on the Ball for $\delta>0$]\label{main regularity theorem delta positive}
If $\mathcal{S}^\delta$ is a minimizer for $\mathcal{F}$ among $\mathcal{A}_\delta^h$ for $\delta>0$, then for $\ell,m>0$, $\partial S_\ell^\delta$ is $C^{1,1}$ except at jump points of $h$, and $\partial S_\ell^\delta \cap \partial S_m^\delta \cap B$ is a finite union of line segments terminating on $\partial B$ at a jump point of $h$ between $\ell$ and $m$ or at a point in $\partial S_\ell^\delta \cap \partial S_m^\delta \cap \partial G^\delta \cap B$. Referring to those points in $\partial S_\ell^\delta \cap \partial S_m^\delta \cap \partial G^\delta \cap B$ and $\partial S_\ell^\delta \cap \partial S_m^\delta \cap \partial G^\delta \cap \partial B$ as cusp and corner points, respectively, there exist $\kappa_\ell^\delta$ for $1\leq \ell\leq N$ such that \begin{align}\label{curvature condition}
c_1 \kappa_1^\delta=c_2 \kappa_2^\delta=\cdots =c_N \kappa_N^\delta \end{align} and $\partial S_\ell^\delta \cap \partial G^\delta$ consists of a finite union of arcs of constant curvature $\kappa_\ell^\delta$, each of whose two endpoints are either a cusp point in $B$ or a corner point in $\partial B$ at a jump point of $h$. Furthermore, at cusp points, $\partial S_\ell^\delta\cap \partial G^\delta $ and $\partial S_m^\delta\cap \partial G^\delta $ meet a segment of $\partial S_\ell^\delta \cap \partial S_m^\delta$ tangentially. Finally, any connected component of $S_\ell^\delta$ for $1\leq \ell \leq N$ is convex. \end{theorem}
\begin{remark}
In the case of equal weights $c_\ell=1$, Theorems \ref{main regularity theorem delta positive all of space} and \ref{main regularity theorem delta positive} can be found in \cite{BrakkeMorgan}; see also the paper \cite{MH} for methods of existence and regularity. \end{remark}
To state our asymptotic resolution theorem on the ball, we require some knowledge of the regularity for minimizers of the $\delta=0$ problem. In the general immiscible fluids problem, there may be singular points where more than three chambers meet; see \cite[Figure 1.1]{Chan}, \cite[Figure 7]{FGMMNPY}. Since we are interested in triple junction singularities, below is a description of the behavior of minimizers on the ball in some cases where all singularities are triple junctions.
\begin{theorem}[Regularity on the Ball for $\delta=0$]\label{main regularity theorem delta zero}
If $N=3$ or $c_\ell=1$ for $0\leq \ell \leq N$ and $\mathcal{S}^0$ is a minimizer for $\mathcal{F}$ among $\mathcal{A}_0^h$, then every connected component of $\partial S_\ell^0 \cap \partial S_m^0 \cap B$ for non-zero $\ell$ and $m$ is a line segment terminating at an interior triple junction $x\in \partial S_\ell^0 \cap \partial S_m^0 \cap \partial S_n^0 \cap B$, at $x\in \partial S_{\ell}^0 \cap \partial S_m^0 \cap \partial B$ which is a jump point of $h$, or at a boundary triple junction $x\in \partial S_{\ell}^0 \cap \partial S_m^0 \cap \partial S_n^0\cap \partial B$ which is a jump point of $h$. Moreover, for each triple $\{\ell,m,n\}$ of distinct non-zero indices there exist angles $\theta_\ell$, $\theta_m$, $\theta_n$ satisfying \begin{align}\label{classical angle conditions intro}
\frac{\sin \theta_\ell}{c_{m}+c_n}=\frac{\sin \theta_m}{c_{\ell}+c_n}=\frac{\sin \theta_n}{c_{\ell}+c_m} \end{align} such that if $x\in B$ is an interior triple junction between $S_\ell^0$, $S_m^0$, and $S_n^0$, then there exists $r_x>0$ such that $S_\ell^0\cap B_{r_x}(x)$ is a circular sector determined by $\theta_\ell$, and similarly for $m$, $n$. Finally, any connected component of $S_\ell^0$ for $1\leq \ell\leq N$ is convex. \end{theorem}
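As a quick numerical illustration of \eqref{classical angle conditions intro} (not part of the proof; the variable names are ours): for equal coefficients, the sine-law condition together with $\theta_\ell + \theta_m + \theta_n = 2\pi$ is satisfied by the classical $120^{\circ}$ angles of Plateau's laws.

```python
import math

# Numerical check (illustrative): for equal coefficients c_l = c_m = c_n = 1,
# the condition sin(theta_l)/(c_m + c_n) = sin(theta_m)/(c_l + c_n)
#                                        = sin(theta_n)/(c_l + c_m),
# together with theta_l + theta_m + theta_n = 2*pi, holds for the classical
# 120-degree triple-junction angles.
c = {"l": 1.0, "m": 1.0, "n": 1.0}
theta = {k: 2 * math.pi / 3 for k in c}  # 120 degrees at each chamber

assert abs(sum(theta.values()) - 2 * math.pi) < 1e-12
ratios = [
    math.sin(theta["l"]) / (c["m"] + c["n"]),
    math.sin(theta["m"]) / (c["l"] + c["n"]),
    math.sin(theta["n"]) / (c["l"] + c["m"]),
]
assert max(ratios) - min(ratios) < 1e-12  # all three ratios coincide
```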
\begin{remark}
The proof of Theorem \ref{main regularity theorem delta zero} also applies when $N>3$ or $c_\ell$ are merely positive to show that the interfaces of a minimizer are finitely many segments meeting at isolated points. For the immiscible fluids problem on the ball, this has been observed in \cite[Corollary 4.6]{Morgan98}; see also \cite{White85}. Therefore, one may prove Theorem \ref{main regularity theorem delta zero} by classifying the possible tangent cones if $N=3$ or $c_\ell=1$ (Theorem \ref{zero delta classification theorem}). Since the proof of Theorem \ref{main regularity theorem delta positive}, which is in the language of sets of finite perimeter, can be easily modified to include a full proof of Theorem \ref{main regularity theorem delta zero}, we provide these arguments for completeness and as an alternative to the approach in \cite{Morgan98} via rectifiable chains. \end{remark}
\noindent Our last main result is a complete resolution of minimizers on the ball for small $\delta$ and equal weights.
\begin{theorem}[Resolution for Small $\delta$ on the Ball]\label{resolution for small delta corollary}
Suppose that $c_\ell=1$ for $0\leq \ell\leq N$ and $h\in BV(\partial B;\{1,\dots,N\})$. Then there exists $\delta_0>0$, a function $f(\delta)\to 0$ as $\delta \to 0$, and $r>0$, all depending on $h$, such that if $0<\delta<\delta_0$ and $\mathcal{S}^\delta$ is a minimizer in \eqref{ball problem}, then $|G^\delta|=\delta$ and there exists a minimizer $\mathcal{S}^0$ among $\mathcal{A}_{0}^h$ such that \begin{align}\label{hausdorff convergence of chambers in corollary}
\max \big\{ \sup_{x\in S_\ell^{\delta}} {\rm dist} (x,S_\ell^0)\,,\,\sup_{x\in S_\ell^{0}} {\rm dist} (x,S_\ell^{\delta}) \big\} \leq f(\delta)\quad \textit{for }1\leq \ell \leq N \end{align} and, denoting by $\Sigma$ the set of interior and boundary triple junctions of $\mathcal{S}^0$, \begin{align}\label{hausdorff convergence of remnants in corollary}
\max \big\{ \sup_{x\in G^{\delta}} {\rm dist} (x,\Sigma)\,,\,\sup_{x\in \Sigma} {\rm dist} (x,G^{\delta}) \big\} \leq f(\delta) \end{align} and for each $x\in \Sigma$, $B_{r}(x) \cap \partial G^{\delta}$ consists of three circle arcs of curvature $\kappa=\kappa(\mathcal{S}^\delta)$. \end{theorem}
\begin{remark}[Wetting of Singularities]\label{wetting remark}
For the soap bubble capillarity analogue of \eqref{thin film cluster formulation} on $B$, \begin{align}\label{thin film ball formulation}
\inf\{ \mathcal{F}(\mathcal{S}) : \mathcal{S}\in \mathcal{A}_\delta^{h},\, \mbox{$S_\ell$ open, $\mathrm{cl}\, S_\ell \cap \mathrm{cl}\, S_m\subset \{x\in \partial B: \mbox{$h$ jumps between
}\ell, m\}$} \}\,, \end{align} we may also use Theorem \ref{main regularity theorem delta positive} to approximate a minimizer in \eqref{ball problem} by a sequence satisfying the restrictions in \eqref{thin film ball formulation}. Therefore, if $\delta>0$ is small, a minimizing sequence for \eqref{thin film ball formulation} converges to a minimizer $\mathcal{S}^\delta$ of \eqref{ball problem}, which in turn is close to a minimizer $\mathcal{S}^0$ for the $\delta=0$ problem. Furthermore, by Theorem \ref{resolution for small delta corollary}, if $\delta<\delta_0$ and the weights $c_\ell$ are equal, each singularity of $\mathcal{S}^0$ is ``wetted" by a component of $G^\delta$ bounded by three circular arcs; see Figure \ref{perturbation figure}. Also, \eqref{hausdorff convergence of remnants in corollary} shows that $\Sigma$ coincides with the set of accumulation points of the ``wet" regions $G^\delta$ as $\delta \to 0$. In the context of the Plateau problem in $\mathbb{R}^2$, this equivalence has been conjectured in \cite[Remark 1.7]{KMS3}. \end{remark}
\begin{remark}[Triple Junctions for Vector Allen-Cahn]
Theorem \ref{main regularity theorem delta positive} is used in a construction by \'{E}. Sandier and P. Sternberg of an entire solution $U:\mathbb{R}^2\to \mathbb{R}^2$ to the system $\Delta U = \nabla_u W(U) $ for a triple-well potential $W$ without symmetry assumptions on the potential \cite{SS}. \end{remark}
\subsection{Idea of proof} The outline to prove Theorems \ref{main regularity theorem delta positive all of space} and \ref{main regularity theorem delta positive} can be summarized in two main steps: first, classifying the possible blow-ups at any interfacial point of a minimizer $\mathcal{S}^\delta$; and second, using one of the (a priori non-unique) blow-ups at such a point $x$ to resolve $\mathcal{S}^\delta$ in a small neighborhood of $x$. To demonstrate the ideas, we describe these steps for a minimizer $\mathcal{S}^\delta$ for the problem \eqref{all of space problem} on $\mathbb{R}^2$ at $x=0$. For the classification of blow-ups, we use a blow-up version of the observation below \eqref{trace constraint} to show that no blow-up of any chamber $S_\ell^\delta$ can be anything other than a halfspace. This of course differs from the usual blow-ups in two-dimensional cluster problems, in which three or more chambers can meet at a point. \par Armed now with a list of the possible blow-ups at $0$, which we do not yet know are unique, we must use them to resolve the minimizer in a small neighborhood of $0$. In the case that there exists a blow-up coming from $G^\delta$ and a single chamber $S_\ell^\delta$, lower area density estimates on the remaining chambers imply that in a small ball $B_r(0)$, $S_{\ell'}^\delta \cap B_r(0)=\emptyset$ for $\ell \neq \ell'$, so that $\partial S_\ell^\delta \cap B_r(0)$ is regular by the classical theory for volume-constrained perimeter minimizers. The main hurdle is when the blow-up at $0$ is two halfspaces coming from $S_{\ell_i}^\delta$ for $i=1,2$. In the classical regularity theory for planar clusters (see \cite[Section 11]{White} or \cite[Corollary 4.8]{Leo01}), this would imply that on $B_r(0)$, the interface must be an arc of constant curvature separating each $S_{\ell_i}^\delta \cap B_r(0)$. Here, there is the possibility that $0\in \partial G^\delta$ but $G^\delta$ has density $0$ at $0$. 
This behavior cannot be detected at the blow-up level, although one suspects the interfaces near $0$ should be two ordered graphs over a common line which coincide at $0$ and possibly elsewhere as well. To prove this and thus complete the local resolution, we use the convergence along a sequence of blow-ups to a pair of halfspaces and the density estimates on the other chambers to locate a small rectangle $Q=[-r,r]\times [-r,r]$ such that $Q\subset S_{\ell_1}^\delta \cup S_{\ell_2}^\delta \cup G^\delta$ and $\partial Q \cap \partial S_{\ell_i}^\delta =\{(-r,a_i), (r,b_i)\}$ for some $a_1\leq a_2$ and $b_1\leq b_2$. At this point, since we have the desired graphicality on $\partial Q$, we can combine a symmetrization inequality for sets which are graphical on the boundary of a cube (Lemma \ref{symmetrization lemma}), the minimality of $\mathcal{S}^\delta$, and the necessary conditions for equality in Lemma \ref{symmetrization lemma} to conclude that $\partial S_{\ell_i}^\delta \cap Q$ are two ordered graphs.
\subsection{Organization of the paper} In Section \ref{sec:notation and prelim}, we recall some preliminary facts. Next, we prove the existence of minimizers in Section \ref{sec:existence of minmizers}. Section \ref{sec:existence of blowup cones} contains the proof of the existence and classification of blow-up cones at any interfacial point. In Sections \ref{sec: proofs of main regularity theorems} and \ref{sec:proof of delta 0 theorem}, we prove Theorems \ref{main regularity theorem delta positive all of space} and \ref{main regularity theorem delta positive} and Theorem \ref{main regularity theorem delta zero}, respectively. Finally, in Section \ref{sec:resolution for small delta}, we prove Theorem \ref{resolution for small delta corollary}. \subsection{Acknowledgments} This work was supported by the NSF grant RTG-DMS 1840314. I am grateful to \'{E}tienne Sandier and Peter Sternberg for several discussions during the completion of this work and to Frank Morgan for valuable comments on the literature for such problems.
\section{Notation and Preliminaries}\label{sec:notation and prelim} \subsection{Notation}
Throughout the paper, $B_r(x)=\{y\in \mathbb{R}^2:|y-x|<r\}$. When $x=0$, we set $B_R:=B_R(0)$ and $B=B_1(0)$. Also, for any Borel measurable $U$, we set \begin{align*}
\mathcal{F}(\mathcal{S};U) = \sum_{\ell=0}^N c_\ell P(S_\ell;U)\,. \end{align*} We will use the notation $E^{(t)}$ for the points of Lebesgue density $t\in [0,1]$. \par We remark that since $h\in BV(\partial B; \{1,\dots,N\})$, there exists a partition of $\partial B$ into $N$ pairwise disjoint sets $\{A_1,\dots, A_N\}$ such that $h = \sum_{\ell=1}^N \ell \,1_{A_\ell}$, and each $A_\ell$ is a finite union of pairwise disjoint arcs: \begin{align}\label{arc definition} A_\ell := \cup_{i=1}^{I_\ell}a_i^\ell\,. \end{align} For each $1\leq \ell \leq N$ and $1\leq i \leq I_\ell$, we let \begin{align}\label{chord def} c_i^\ell \end{align} be the chord that shares endpoints with $a_i^\ell$. Finally, we call \begin{align}\label{circular segments} C_i^\ell \end{align} the open circular segments (regions bounded by an arc and its corresponding chord) corresponding to the pair $(a_i^\ell, c_i^\ell)$.
\subsection{Preliminaries} Regarding the functional $\mathcal{F}$, we observe that when $\delta=0$, \begin{align}\notag
\mathcal{F}(\mathcal{S}) = \sum_{0\leq \ell< m\leq N} c_{\ell m}\mathcal{H}^1(\partial^* S_\ell \cap \partial^* S_m)\,, \end{align} where $c_{\ell m } :=c_\ell + c_m$, and, since $c_{\ell i} + c_{i m} - c_{\ell m} = 2c_i$, the positivity of $c_\ell$ for $1\leq \ell \leq N$ is equivalent to the strict triangle inequalities \begin{align}\label{strict triangle ineq}
c_{\ell m}< c_{\ell i} + c_{i m}\quad \forall \ell\neq m\neq i \neq \ell\,.
\end{align} We also note that for any $h\in BV(\partial B;\{1,\dots,N\})$, the energy of any cluster $\mathcal{S}$ satisfying the boundary condition \eqref{trace constraint} can be decomposed as \begin{align}\label{energy independent of boundary}
\mathcal{F}(\mathcal{S}) = 2\pi c_0 +\sum_{\ell=1}^N c_\ell \mathcal{H}^1(A_\ell)+ \sum_{\ell=1}^N c_\ell P(S_\ell;B) =: C(h) + \mathcal{F}(\mathcal{S};B)\,, \end{align} where $C(h)$ is a constant independent of $\mathcal{S}$. Therefore, minimizing $\mathcal{F}$ among $\mathcal{A}_\delta^h$ for any $\delta>0$ is equivalent to minimizing $\mathcal{F}(\cdot;B)$, so we will often ignore the boundary term for the problem on the ball.\par We now recall some facts regarding sets of finite perimeter. Unless otherwise stated, we will always adhere to the convention that among the Lebesgue representatives of a given set of finite perimeter $E$, we are considering one that satisfies \cite[Proposition 12.19]{Mag12} \begin{align}\label{boundary convention 1}
{\rm spt} \,\mathcal{H}^1 \mres \partial^* E = \partial E \end{align} and \begin{align}\label{boundary convention 2}
\partial E= \{x:0<|E \cap B_r(x)|<\pi r^2\,\,\forall r>0 \}\,. \end{align}
\noindent We will need some facts regarding slicing sets of finite perimeter by lines or circles.
\begin{lemma}[Slicing sets of finite perimeter]\label{slicing lemma}
Let $u(x)=x\cdot \nu$ for some $\nu\in \mathbb{S}^1$ or $u(x)=|x-y|$ for some $y\in \mathbb{R}^2$, and, for any set $A$, let $A_t$ denote $A\cap \{u=t\}$. Suppose that $E\subset \mathbb{R}^2$ is a set of finite perimeter. \begin{enumerate}[label=(\roman*)] \item For every $t\in \mathbb{R}$, there exist traces $E^+_t$, $E^-_t\subset \{u=t \}$ such that \begin{align}\label{trace difference}
\int_{\{u=t\}}|\mathbf{1}_{E^+_t}-\mathbf{1}_{E^-_t}|\,d\mathcal{H}^1 = P(E; \{u=t \})\,. \end{align}
\item Letting $S=\{x:x\cdot \nu^\perp \in [a,b]\}$ for compact $[a,b]$ when $u=x\cdot \nu$ or $S = \mathbb{R}^2$ when $u=|x-y|$, \begin{align}\label{convergence of trace integrals}
\lim_{s\downarrow t} \int_{\{u=s\}\cap S} \mathbf{1}_{E^-_s}\,d\mathcal{H}^1 = \int_{\{u=t\}\cap S} \mathbf{1}_{E^+_t}\,d\mathcal{H}^1\,. \end{align} \item For almost every $t\in \mathbb{R}$, $E_t^+=E_t^-=E_t$ up to an $\mathcal{H}^1$-null set, $E_t$ is a set of finite perimeter in $\{u=t\}$, and \begin{align}\label{equivalence of boundaries}
\mathcal{H}^{0}((\partial^* E)_t \Delta \partial^*_{\{u=t\}}E_t)=0\,. \end{align} \end{enumerate} \end{lemma} \begin{proof}
The first item can be found in \cite[(2.15)]{Giu84}. We prove the second item when $u=\vec{e}_1 \cdot x$; the proof with any other $\nu$ or when $u=|x-y|$ is similar. By the divergence theorem \cite[Theorem 2.10]{Giu84}, \begin{align}\notag
0 &= \int_{(t,s)\times (a,b) \cap E}\mathrm{div}\, \vec{e}_1 \\ \notag
&=\int_{\{u=s\}\cap S} \mathbf{1}_{E^-_s}\,d\mathcal{H}^1 - \int_{\{u=t\}\cap S} \mathbf{1}_{E^+_t}\,d\mathcal{H}^1 + \int_{\partial^* E \cap (t,s)\times (a,b)} \vec{e}_1 \cdot \nu_{E}\,d\mathcal{H}^1 \\ \notag
&\qquad + \int_{\partial^* (E \cap(t,s)\times (a,b)) \cap (t,s)\times\{a,b\}} \vec{e}_1 \cdot \nu_{E \cap(t,s)\times (a,b)}\,d\mathcal{H}^1\,. \end{align} Now the last term on the right hand side is bounded by $2(s-t)$ and vanishes as $s\to t$. Also, the third term on the right hand side is bounded by $P(E;(t,s)\times (a,b))$, which vanishes as $s\to t$ since $(t,s)\times (a,b)$ is a decreasing family of bounded open sets whose intersection is empty and $B\to P(E; B)$ is a Radon measure. The limit \eqref{convergence of trace integrals} follows from letting $s$ decrease to $t$. \par Moving on to $(iii)$, we recall that for $\mathcal{H}^1$-a.e. $x\in \{u=t\}\cap E_t^+$, \begin{align}\label{averaging prop of traces}
1=\lim_{r\to 0}\frac{|B_r(x) \cap E \cap \{u>t \}|}{\pi r^2/2} \end{align} and similarly for $E_t^-$ \cite[2.13]{Giu84}. Next, by \eqref{trace difference},
\begin{align}\label{agreement of plus minus}
\mathcal{H}^1(E_t^+\Delta E_t^-)=0 \quad\textup{ if }P(E;\{u=t\})=0\,,
\end{align} which holds for all but at most countably many $t$. Now, for any $x \in \{ u=t\}$ that is also a Lebesgue point of $E$, \begin{align}\label{equalities at Lebesgue points}
1= \lim_{r\to 0}\frac{|B_r(x) \cap E|}{\pi r^2} = \lim_{r\to 0}\frac{|B_r(x) \cap E \cap \{u>t \}|}{\pi r^2/2}=\lim_{r\to 0}\frac{|B_r(x) \cap E \cap \{u<t \}|}{\pi r^2/2}\,. \end{align} Since $\mathcal{L}^2$-a.e. $x\in E$ is a Lebesgue point, we conclude from \eqref{averaging prop of traces}, \eqref{agreement of plus minus}, and \eqref{equalities at Lebesgue points} that $\mathcal{H}^1(E_t \Delta E_t^\pm)=0$ for $\mathcal{H}^1$-a.e. $t$. Lastly, \eqref{equivalence of boundaries} when slicing by lines can be found in \cite[Theorem 18.11]{Mag12} for example. The case of slicing by circles follows from the case of lines and the fact that smooth diffeomorphisms preserve reduced boundaries \cite[Lemma A.1]{KinMagStu22}. \end{proof}
\noindent We will use the following fact regarding the intersection of a set of finite perimeter with a convex set.
\begin{lemma}\label{convexity lemma}
If $E$ is a bounded set of finite perimeter and $K$ is a convex set, then \begin{align}\notag
P(E \cap K) \leq P(E)\,, \end{align}
with equality if and only if $|E\setminus K|=0 $. \end{lemma}
\begin{proof}
The argument is based on the facts that the intersection of such $E$ with a halfspace $H$ decreases perimeter (with equality if and only if $|E\setminus H|=0$) and any convex set is an intersection of halfspaces. We omit the details. \end{proof}
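\noindent For the interested reader, one way to sketch the halfspace step, say for $H=\{x\cdot \nu\leq t\}$ with $t$ such that $P(E;\{x\cdot\nu=t\})=0$ and $E_t^+=E_t^-$ (which holds for a.e. $t$): by Lemma \ref{slicing lemma} and the divergence theorem applied to the constant field $\nu$ on $E\cap\{x\cdot\nu>t\}$, \begin{align*}
P(E\cap H) &= P(E;\{x\cdot\nu<t\}) + \mathcal{H}^1(E_t^-)\,,\\
\mathcal{H}^1(E_t^+)&=\int_{\partial^* E\cap \{x\cdot\nu>t\}} \nu\cdot \nu_E\,d\mathcal{H}^1 \;\leq\; P(E;\{x\cdot\nu>t\})\,,
\end{align*} so that $P(E\cap H)\leq P(E)$; equality forces $\nu_E=\nu$ $\mathcal{H}^1$-a.e. on $\partial^* E \cap \{x\cdot\nu>t\}$, which, since $E$ is bounded, forces $|E\setminus H|=0$. The general statement then follows since any convex $K$ is an intersection of halfspaces.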
\noindent Our last preliminary regarding sets of finite perimeter is a symmetrization inequality, which, for convenience, we state in the setting in which it will be employed later.
\begin{lemma}\label{symmetrization lemma}
Let $Q'=[t_1,t_2]\times [-1,1]$. Suppose that $E\subset Q'$ is a set of finite perimeter such that $(t_1,t_2)\times (-1, -1/4)\subset E^{(1)}\subset (t_1,t_2)\times (-1,1/4)$ and, for some $a_1,a_2\in [-1/4,1/4]$, \begin{align}\label{trace assumption}
E_{t_1}^+=[-1,a_1]\,,\quad E_{t_2}^-=[-1,a_2]\quad\textit{up to $\mathcal{H}^1$-null sets}\,, \end{align}
where $E_{t_1}^+$, $E_{t_2}^-$, viewed as subsets of $\mathbb{R}$, are the traces from the right and left, respectively, slicing by $u(x)=x\cdot e_1$. Then the set $E^h = \{(x_1,x_2):-1 \leq x_2 \leq \mathcal{H}^1(E_{x_1})-1 \}$ satisfies $|E^h|=|E|$, \begin{align}\label{preserves traces}
(E^h)^+_{t_1}=[-1,a_1]\,,\quad (E^h)^-_{t_2}=[-1,a_2]\quad\textit{up to $\mathcal{H}^1$-null sets} \end{align} and \begin{align}\label{symmetrization inequality}
P(E^h;\mathrm{int}\,Q') \leq P(E;\mathrm{int}\,Q')\,. \end{align} Moreover, if equality holds in \eqref{symmetrization inequality}, then for every $t\in (t_1,t_2)$, $(E^{(1)})_t$ is an interval. \end{lemma}
\begin{remark}
The superscript $h$ is for ``hypograph." \end{remark}
\begin{figure}
\caption{Both the sets $E$ and $E^h$ have the same trace on $\partial Q'$, and $P(E^h;\mathrm{int}\,Q')<P(E;\mathrm{int}\,Q')$ because $E$ has vertical slices which are not intervals.}
\label{symm figure}
\end{figure}
\begin{proof}
The preservation of area $|E^h|=|E|$ is immediate by Fubini's theorem, so we begin with the first equality in \eqref{preserves traces}, and the second is analogous. We recall from \eqref{averaging prop of traces} that for $\mathcal{H}^1$-a.e. $x\in \{t_1\}\times [-1,1]\cap (E^h)_{t_1}^+$, \begin{align}\label{averaging prop of traces 2}
1=\lim_{r\to 0}\frac{|B_r(x) \cap E^h \cap Q'|}{\pi r^2/2}\,. \end{align} From this property and the fact that the vertical slices of $E^h$ are intervals of height at least $3/4$, it follows that $(E^h)_{t_1}^+$ is $\mathcal{H}^1$-equivalent to an interval $[-1,a]$ for some $a\geq -1/4$. Furthermore, $a=a_1$ is a consequence of \eqref{convergence of trace integrals} and the fact that the rearrangement $E^h$ preserves the $\mathcal{H}^1$-measure of each vertical slice: \begin{align*}
1+a_1 = \int_{\{t_1\}\times [-1,1]} \mathbf{1}_{E_{t_1}^+}\,d\mathcal{H}^1 &= \lim_{s\downarrow t_1}\int_{\{s\}\times [-1,1]} \mathbf{1}_{E_{s}^-}\,d\mathcal{H}^1\\
&=\lim_{s\downarrow t_1}\int_{\{s\}\times [-1,1]} \mathbf{1}_{(E^h)_{s}^-}\,d\mathcal{H}^1=\int_{\{t_1\}\times [-1,1]} \mathbf{1}_{(E^h)_{t_1}^+}\,d\mathcal{H}^1=1+a\,. \end{align*} \par Moving on to \eqref{symmetrization inequality}, consider the set $E^r$, the reflection of $E$ over $\{x_2=-1\}$, and $G=E \cup E^r$. We denote by the superscript $s$ the Steiner symmetrization of a set over $\{x_2=-1\}$. We note that \begin{align}\notag
G^s \cap Q' = E^h\,. \end{align} Since $(t_1,t_2)\times (-1, -1/4)\subset E^{(1)}\subset (t_1,t_2)\times (-1,1/4)$ and Steiner symmetrizing decreases perimeter, we therefore have \begin{align}\notag
P(E; \mathrm{int}\,Q') = \frac{P(G;\{x_1\in (t_1,t_2)\})}{2} \geq \frac{P(G^s;\{x_1\in (t_1,t_2)\})}{2}=P(G^s;\mathrm{int}\,Q')=P(E^h;\mathrm{int}\,Q')\,, \end{align} which is \eqref{symmetrization inequality}. Furthermore, equality can only hold if almost every vertical slice of $G$ is an interval, which in turn implies that $E_t$ is an interval for almost every $t\in (t_1,t_2)$. By \cite[Lemma 4.12]{Fus04}, every slice $(E^{(1)})_t$ is an interval. \end{proof}
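\noindent One can check the first equality in the preceding display as follows (the same reasoning applies verbatim with $G^s$ and $E^h$ in place of $G$ and $E$): since $(t_1,t_2)\times(-1,-1/4)\subset E^{(1)}$, a neighborhood of the reflection axis $\{x_2=-1\}$ in the strip $\{x_1\in(t_1,t_2)\}$ consists of points of density one for $G$, so $\mathcal{H}^1(\partial^* G\cap \{x_2=-1\}\cap\{x_1\in(t_1,t_2)\})=0$, and by reflection symmetry \begin{align*}
P(G;\{x_1\in (t_1,t_2)\}) = P(G;\{x_1\in(t_1,t_2)\,,\,x_2>-1\}) + P(G;\{x_1\in(t_1,t_2)\,,\,x_2<-1\}) = 2P(E;\mathrm{int}\,Q')\,,
\end{align*} where the last equality uses $E^{(1)}\subset (t_1,t_2)\times(-1,1/4)$.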
We conclude the preliminaries with a lemma regarding the convergence of convex sets.
\begin{lemma}\label{convex sets convergence lemma}
If $\{C_n\}$ is a sequence of equibounded, compact, and convex sets in $\mathbb{R}^n$, then there exists compact and convex $C \subset\mathbb{R}^n$ such that $\mathbf{1}_{C_n}\to \mathbf{1}_C$ almost everywhere and \begin{align}\label{hausdorff convergence of convex sets}
\max \big\{ \sup_{x\in C_n} {\rm dist} (x,C)\,,\,\sup_{x\in C} {\rm dist} (x,C_n) \big\} \to 0 \,. \end{align} \end{lemma}
\begin{proof} By the Arzel\'{a}-Ascoli Theorem, there exists a compact set $C\subset\mathbb{R}^n$ such that ${\rm dist}(\cdot, C_n)\to {\rm dist}(\cdot,C)$ uniformly. Therefore, $C_n\to C$ in the Kuratowski sense \cite[Section 2]{FFLM}, $C$ is convex, and $\mathbf{1}_{C_n}\to \mathbf{1}_C$ almost everywhere \cite[Remark 2.1]{FFLM}. Since $C_n$ are equibounded and $C$ is compact, the Kuratowski convergence is equivalent to Hausdorff convergence, which is \eqref{hausdorff convergence of convex sets}.\end{proof}
\section{Existence of Minimizers}\label{sec:existence of minmizers}
First we establish the existence of minimizers for the problem \eqref{ball problem} on the ball. A byproduct of the proof is a description of minimizers on each of the circular segments from \eqref{circular segments}; see Fig. \ref{polygon figure}.
\begin{theorem}[Existence on the ball]\label{existence theorem} For any $\delta \geq 0$ and $h\in BV(\partial B;\{1,\dots,N\})$, there exists a minimizer of $\mathcal{F}$ among the class $\mathcal{A}_\delta^h$. Moreover, any minimizer $\mathcal{S}^\delta$ for $\delta \geq 0$ satisfies \begin{align}\label{circular segment containment} \cup_{i=1}^{I_\ell} C_i^\ell \subset S_\ell^{(1)}\quad\textit{for each }1\leq \ell \leq N\,. \end{align} \end{theorem}
\begin{proof}The proof is divided into two steps. The closed convex sets
\begin{align*}
K_\ell:= \mathrm{cl}\, (B \setminus (\cup_{i=1}^{I_\ell} C_i^\ell ))\,, \quad 1\leq \ell \leq N
\end{align*} will be used throughout.\par
\noindent\textit{Step one}: First we show that given any $\mathcal{S}\in \mathcal{A}_\delta^h$, the cluster $\tilde{\mathcal{S}}$ defined via \begin{align*}
\tilde{S}_\ell &:= \Big(S_\ell \cap \bigcap_{j\neq \ell} K_j \Big)\cup \bigcup_{i=1}^{I_\ell} C_i^\ell \quad 1\leq \ell \leq N\,,\qquad \tilde{S}_0=B^c\,, \qquad \tilde{G}=(\tilde{S}_0\cup \dots \cup \tilde{S}_N)^c
\end{align*} satisfies $\tilde{\mathcal{S}}\in \mathcal{A}_\delta^h$ and \begin{align}\label{less energy}
\mathcal{F}(\tilde{\mathcal{S}}) \leq \mathcal{F} (\mathcal{S})\,, \end{align} \begin{figure}
\caption{Here $h$ jumps at $x_i$, $1\leq i\leq 5$, and $\{h=1\}$ on the arcs $a_1^1$ and $a_2^1$. Equation \eqref{circular segment containment} states that for any minimizer $\mathcal{S}$ with this boundary data, $S_1^{(1)}$ must contain the regions bounded by $a_j^1$ and the chords $c_j^1$ for $j=1,2$.}
\label{polygon figure}
\end{figure}
with equality if and only if \begin{align}\label{containment} \cup_{i=1}^{I_\ell} C_i^\ell \subset S_\ell^{(1)}\quad\quad \forall 1\leq \ell \leq N\,. \end{align}
The proof relies on Lemma \ref{convexity lemma}, which states that if $E$ is a set of finite perimeter, $|E|<\infty$, and $K$ is a closed convex set, then $E \cap K$ is a set of finite perimeter and \begin{align}\label{convexity inequality}
P(E \cap K) \leq P(E)\,, \end{align}
with equality if and only if $|E\setminus K|=0$.
For given $\mathcal{S} \in \mathcal{A}_\delta^h$, let us first consider the cluster $\mathcal{S}'$, where \begin{align*}
S_1' := S_1 \cup \bigcup_{i=1}^{I_1} C_i^1\,,\,\, S_\ell' := S_\ell \cap K_1\,,\,\, 2\leq \ell \leq N\qquad S'_0=B^c\,, \qquad G'=(S_0'\cup \dots \cup S_N')^c\,. \end{align*} By the trace condition \eqref{trace constraint} and the definition of $S_\ell'$, \begin{align}\label{trace constraint 2}
S_\ell' \cap \partial B = \{x\in \partial B : h(x) = \ell \}\textup{ for $1\leq \ell \leq N$ in the sense of traces}\,. \end{align} Also, since $ G'= B \setminus \cup_\ell S_\ell'$ satisfies \begin{align*}
| G'| = |(B\cap K_1) \setminus \cup_\ell S_\ell| \leq |B \setminus \cup_\ell S_\ell|\leq \delta\,, \end{align*} we have \begin{equation}\label{remains in A delta} \mathcal{S}' \in \mathcal{A}_\delta^h\,. \end{equation} Now for $2\leq \ell\leq N$, we use \eqref{convexity inequality} to estimate \begin{align}\label{23 estimate}
c_\ell P(S_\ell)
\geq c_\ell P(S_\ell \cap K_1) =
c_\ell P(S_\ell')\,. \end{align} For $\ell=1$, we first recall the fact for any set of finite perimeter $E$, \begin{equation}\label{complement equality} P(E;B) = P(E^c ; B)\,. \end{equation} Applying \eqref{complement equality} with $S_1$, then \eqref{trace constraint}, \eqref{convexity inequality}, and \eqref{trace constraint 2}, and finally \eqref{complement equality} with $S_1'$, we find that \begin{align}\notag
P(S_1;B) &= P((\cup_{\ell=2}^N S_\ell \cup G) ; B) \\ \notag
&= P(\cup_{\ell=2}^N S_\ell \cup G) - \mathcal{H}^1(\cup_{\ell=2}^N A_\ell) \\ \notag
&\geq P((\cup_{\ell=2}^N S_\ell \cup G) \cap K_1) - \mathcal{H}^1(\cup_{\ell=2}^N A_\ell) \\ \notag
&= P(\cup_{\ell=2}^N S_\ell' \cup G'; B) \\ \label{1 estimate}
&= P(S_1' ; B)\,. \end{align} Adding $\mathcal{H}^1(A_\ell)$ to \eqref{1 estimate}, multiplying by $c_1$, and combining with \eqref{23 estimate} gives \begin{align}\label{first projection inequality}
\mathcal{F}(\mathcal{S})=\sum_{\ell=0}^N c_\ell P(S_\ell) \geq \sum_{\ell=0}^N c_\ell P(S_\ell')=\mathcal{F} (\mathcal{S}')\,, \end{align} and so we have a new cluster $\mathcal{S}'$, belonging to $\mathcal{A}_\delta^h$ by \eqref{remains in A delta}, that satisfies \begin{align}\label{first step containment}
\cup_{i=1}^{I_1} C_i^1 \subset (S_1')^{(1)}\,. \end{align} Repeating this argument for $2\leq\ell\leq N$ yields $\tilde{\mathcal{S}}\in\mathcal{A}_\delta^h$ satisfying \eqref{less energy} as desired. Turning now towards the proof that equality in \eqref{less energy} implies \eqref{containment}, we prove that the containment for $\ell=1$ in \eqref{containment} is entailed by equality; the other $N-1$ implications are analogous. If \eqref{less energy} holds as an equality, then \eqref{23 estimate} and \eqref{1 estimate} must hold as equalities as well. But by the characterization of equality in \eqref{convexity inequality}, this can only hold if $(\cup_{\ell=2}^N S_\ell \cup G) \cap K_1 = \cup_{\ell=2}^N S_\ell \cup G$, which yields the first containment in \eqref{containment}.\par Finally, let us also remark that an immediate consequence of this step is that if a minimizer of $\mathcal{F}$ exists among $\mathcal{A}_\delta^h$, then \eqref{circular segment containment} must hold. It remains then to prove the existence of a minimizer.\par
\noindent\textit{Step two}: Let $\{\mathcal{S}^m\}_m$ be a minimizing sequence of clusters for $\mathcal{F}$ among $\mathcal{A}_\delta^h$ (the infimum is finite). Due to the results of step one, we can modify our minimizing sequence so that \begin{align}\label{segment containment for existence}
\cup_{i=1}^{I_\ell} C_i^\ell \subset (S_{\ell}^m)^{(1)}\quad \forall m\,,\,\, \forall 1\leq \ell \leq N \end{align} while also preserving the asymptotic minimality of the sequence. By compactness in $BV$ and \eqref{segment containment for existence}, after taking a subsequence, we obtain a limiting cluster $\mathcal{S}$ that satisfies the trace condition \eqref{trace constraint}, and, by lower-semicontinuity in $BV$, minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$.
\end{proof}
\begin{remark}[Existence of minimizer for a functional with boundary energy]
One might also consider minimizing the energy \begin{align*}
\mathcal{F}(\mathcal{S};B) + \sum_{m=1}^N\sum_{\ell\neq m} (c_\ell + c_m)\mathcal{H}^1(\partial^* S_\ell \cap \{h=m\})\,, \end{align*} which penalizes deviations from $h$ rather than enforcing a strict trace condition, among the class \begin{align}\notag
\{(S_0,\dots,S_N,G) : |S_\ell \cap S_m|=0 \textup{ if $\ell \neq m$, }|G|\leq \delta,\, B^c = S_0\}\,. \end{align} For this problem, the same convexity-based argument as in step one of the proof of Theorem \ref{existence theorem} shows that in fact, minimizers exist and attain the boundary values $h$ $\mathcal{H}^1$-a.e. on $\partial B$. When $\delta=0$, this problem arises as the $\Gamma$-limit of a Modica-Mortola problem with Dirichlet condition \cite{SS}. \end{remark}
Next, we prove existence for the problem on all of space. Since we are in the plane, the proof utilizes the observation that perimeter and diameter scale the same in $\mathbb{R}^2$. Existence should also hold in $\mathbb{R}^n$ for $n\geq 3$ using the techniques of \cite{Alm76}.
\begin{theorem}[Existence on $\mathbb{R}^2$]\label{existence on all of space}
For any $\mathbf{m}\in (0,\infty)^N$, there exists $R=R(\mathbf{m})$ such that for all $\delta \geq 0$, there exists a minimizer of $\mathcal{F}$ among the class $\mathcal{A}_\delta^{\mathbf{m}}$ satisfying $\mathbb{R}^2\setminus B_R \subset S_0$. \end{theorem}
\begin{proof} Let $\{\mathcal{S}^j\}_j\subset \mathcal{A}_\delta^{\mathbf{m}}$ be a minimizing sequence with remnants $G^j$. The existence of a minimizer is straightforward if we can find $R>0$ such that, up to modifications preserving the asymptotic minimality, $B_R^c \subset S_0^j$ for each $j$. We introduce the sets of finite perimeter $E_j=\cup_{\ell=1}^N S_\ell^j \cup G^j$, which satisfy
$P(E_j)\leq \max\{c_\ell^{-1}\}\mathcal{F}(\mathcal{S}^j)$ and $\partial^* E_j \subset \cup_{\ell=1}^N \partial^* S_\ell^j$. Decomposing $E_j$ into its indecomposable components $\{E^j_k\}_{k=1}^\infty$ \cite[Theorem 1]{AMMN01}, we have $\mathcal{H}^1(\partial^* E_k^j \cap \partial^* E_{k'}^j)=0$ for $k\neq k'$. Therefore, for the clusters $\mathcal{S}_k^j=((E_k^j)^c, S_1^j \cap E_k^j,\dots,S_N^j \cap E_k^j,G^j \cap E_k^j )$, \begin{align}\notag
\mathcal{F}(\mathcal{S}^j) = \sum_{k=1}^\infty \mathcal{F}(\mathcal{S}_k^j)\,. \end{align} Furthermore, by the indecomposability of any $E_k^j$, there exists $x_k^j\in \mathbb{R}^2$ such that \begin{align}\notag
(G^j \cap E_k^j)\cup \cup_{\ell=1}^N S_\ell^j \cap E_k^j \subset E_k^j \subset B_{P(E_k^j)}(x_k^j)\,. \end{align} By the uniform energy bound along the minimizing sequence and this containment, for any $j$, we may translate each $\mathcal{S}_k^j$ so that the resulting sequence of clusters satisfies $B_R^c \subset S_0^j$. Finally, we note that $R\leq 2\max\{c_\ell^{-1}\}\inf_{\mathcal{A}_\delta^{\mathbf{m}}}\mathcal{F}$, and since that infimum is bounded independently of $\delta$, it depends only on $\mathbf{m}$. \end{proof}
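Let us indicate, as a sketch, the planar diameter bound behind the containment $E_k^j \subset B_{P(E_k^j)}(x_k^j)$. For an indecomposable bounded set of finite perimeter $E$ and any direction $\nu\in\mathbb{S}^1$, a.e. nonempty slice $E_t=E\cap\{x\cdot\nu=t\}$ has at least two boundary points, while slicing as in Lemma \ref{slicing lemma} and the coarea formula give \begin{align*}
2\,\mathcal{H}^1\big(\{t:\mathcal{H}^1(E_t)>0\}\big) \leq \int_{\mathbb{R}} \mathcal{H}^0\big(\partial^*_{\{x\cdot\nu=t\}} E_t\big)\,dt = \int_{\partial^* E} \sqrt{1-(\nu\cdot\nu_E)^2}\,d\mathcal{H}^1 \leq P(E)\,.
\end{align*} Since indecomposability makes $\{t:\mathcal{H}^1(E_t)>0\}$ an interval up to a null set, the essential width of $E$ in every direction is at most $P(E)/2$, and the containment in a ball of radius $P(E)$ follows.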
\section{Existence and Classification of Blow-up Cones}\label{sec:existence of blowup cones} In this section, we prove the existence of blow-up cones for minimizers and classify the possibilities. Since the proofs are mostly modified versions of standard arguments, we will often be brief in this section and describe the main ideas and adjustments. Also, we do not include any arguments for the case $\mathcal{A}_0^{\mathbf{m}}$ as that regularity is known in $\mathbb{R}^2$ \cite{White85,Morgan98}.
\subsection{Perimeter-almost minimizing clusters} Lemma \ref{lambda r nought lemma} allows us to test minimality of $\mathcal{S}^\delta$ against competitors that do not satisfy the constraint required for membership in $\mathcal{A}_\delta^h$ or $\mathcal{A}_\delta^{\mathbf{m}}$. \begin{lemma}\label{lambda r nought lemma} If $\mathcal{S}^\delta$ is a minimizer for $\mathcal{F}$, then there exist $r_0>0$ and $0\leq \Lambda<\infty$, both depending on $\mathcal{S}^\delta$, with the following property: \begin{enumerate}[label=(\roman*)] \item if $\delta>0$, $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$, $\mathcal{S}'$ satisfies the trace condition \eqref{trace constraint}, and $S_\ell^\delta \Delta S_\ell' \subset B_r(x)$ for $r<r_0$ and $1\leq \ell \leq N$, then, setting $G^\delta=B\setminus \cup_{\ell}S_\ell^\delta$ and $G'=B\setminus \cup_{\ell}S_\ell'$, \begin{align}\label{lambda r nought inequality}
\mathcal{F}(\mathcal{S}^\delta) \leq \mathcal{F}(\mathcal{S}') + \Lambda\big| |G^\delta| - |G'|\big|\,; \end{align} \item if $\delta > 0$, $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^{\mathbf{m}}$ and $\mathcal{S}'$ satisfies $S_\ell^\delta \Delta S_\ell' \subset B_r(x)$ for $r<r_0$ and $1\leq \ell \leq N$, then \begin{align}\label{lambda r nought inequality volume version}
\mathcal{F}(\mathcal{S}^\delta) \leq \mathcal{F}(\mathcal{S}') + \Lambda\sum_{\ell=1}^N \big| |S_\ell^\delta|-|S_\ell'| \big|\,. \end{align} \end{enumerate} \end{lemma}
\begin{proof} For $(i)$, since we do not have to fix the areas of each chamber but only the remnant set, the proof is an application of the standard volume-fixing variations construction for sets of finite perimeter along the lines of \cite[Lemma 17.21 and Example 21.3]{Mag12}. For $(ii)$, we use volume-fixing variations idea for clusters originating in \cite[VI.10-12]{Alm76}. More specifically, by considering the $(N+1)$-cluster $(S_0^\delta,\dots,S_N^\delta,G^\delta)$, \eqref{lambda r nought inequality volume version} follows directly from using \cite[Equations (29.80)-(29.82)]{Mag12} on this $(N+1)$-cluster to modify $\mathcal{S}'$ so that its energy may be tested against $\mathcal{S}^\delta$.\end{proof}
\subsection{Preliminary regularity when $\delta>0$} Density estimates and regularity along $(G^\delta)^{1/2}\cap (S_\ell^\delta)^{1/2}$ can be derived from Lemma \ref{lambda r nought lemma}.
\begin{lemma}[Infiltration Lemma for $\delta>0$]\label{infiltration lemma}
If $\mathcal{S}^\delta$ is a minimizer for $\mathcal{F}$ among $\mathcal{A}_\delta^h$ or $\mathcal{A}_\delta^{\mathbf{m}}$ for $\delta> 0$, then there exist constants $\varepsilon_0>0$ and $r_*>0$ with the following property:
\par
if $x\in \mathrm{cl}\, B$ when $\mathcal{S}^\delta \in \mathcal{A}_\delta^h$ or $x\in \mathbb{R}^2$ when $\mathcal{S}^\delta \in \mathcal{A}_\delta^{\mathbf{m}}$, $r<r_*$, $0\leq \ell \leq N$, and \begin{align}\label{small density assumption lemma}
|S_\ell^\delta \cap B_r(x)| \leq \varepsilon_0 r^2\,, \end{align} then \begin{align}\label{no infiltration equation}
|S_\ell^\delta \cap B_{r/4}(x)| =0\,. \end{align} \end{lemma}
\begin{proof}We prove the lemma in the $\mathcal{A}_\delta^h$ case in steps one and two. The case of $\mathcal{A}_\delta^{\mathbf{m}}$ is the same except that one uses \eqref{lambda r nought inequality volume version} instead of \eqref{lambda r nought inequality} when testing minimality in \eqref{on the way to differential inequality} below. \par
\noindent\textit{Step one}: In the first step, we show that there exists $\varepsilon(h)>0$ such that if $x\in \mathrm{cl}\, B$, $r<1$, and \begin{align}\label{small density assumption lemma step one}
|S_\ell^\delta \cap B_r(x)| \leq \varepsilon r^2\quad \textup{ for some $1\leq \ell \leq N$}\,, \end{align} for a minimizer among $\mathcal{A}_\delta^h$, then \begin{align}\label{doesnt affect trace}
B_{r/2}(x) \cap \{ h=\ell\}=\emptyset\,. \end{align}
If $B_r(x) \cap \partial B = \emptyset$, \eqref{doesnt affect trace} is immediate, so we may as well assume in addition that \begin{align}\label{step one infiltration assumptions}
B_r(x) \cap \partial B \neq \emptyset \,. \end{align}
In order to choose $\varepsilon$, we recall the inclusion \eqref{circular segment containment} from Theorem \ref{existence theorem}, which allows us to pick $\varepsilon$ small enough (independent of $\delta$ or the particular minimizer) so that if $y\in \{h=\ell\}$, then \begin{align}\label{circular segment density assumption}
\inf_{0<r<1} \frac{|S_\ell^\delta \cap B_r(y)|}{r^2}> 4\varepsilon\,. \end{align} Now if $B_r(x)$ satisfies \eqref{small density assumption lemma step one}-\eqref{step one infiltration assumptions}, we claim that \begin{equation}\label{no S ell trace inside smaller ball}
B_{r/2}(x)\cap\{h=\ell\} =\emptyset\,, \end{equation} which is \eqref{doesnt affect trace}. Indeed, if \eqref{no S ell trace inside smaller ball} did not hold, then we could find $y\in B_{r/2}(x)$ such that $h(y)=\ell$, in which case by \eqref{circular segment density assumption}, \begin{align}\label{too much density inside small ball}
\frac{|S_\ell^\delta \cap B_{r/2}(y)|}{r^2/4} > 4\varepsilon\,. \end{align}
But $B_{r/2}(y) \subset B_{r}(x)$, so that \eqref{too much density inside small ball} implies $|S_\ell^\delta \cap B_r(x)| > \varepsilon r^2$, which contradicts our assumption \eqref{small density assumption lemma step one}. \par
\noindent\textit{Step two}: Let $\varepsilon_0<\varepsilon$ and $r_* < 1$ be positive constants to be specified later, and suppose that \eqref{small density assumption lemma} holds for some $1\leq \ell\leq N$ and $x\in \mathrm{cl}\, B$ with $r<r_*$. We set $m(r)=|S_{\ell}^\delta \cap B_r(x)|$, so that for almost every $r$, the coarea formula gives \begin{align}\label{cons of coarea}
m'(r) = \mathcal{H}^1((S_{\ell}^\delta)^{(1)} \cap \partial B_r(x)) = \mathcal{H}^1((S_{\ell}^\delta)^{(1)} \cap \partial B_r(x) \cap B)\,. \end{align} By the conclusion \eqref{doesnt affect trace} of step one, \begin{align}\label{doesnt affect trace step two}
B_{r/2}(x) \cap \{ h=\ell\}=\emptyset\,. \end{align} Therefore, for $s< r/2$, \begin{align}\label{trace still satisfied}
(S_{\ell}^\delta \setminus B_s(x))\cap \partial B = \{x\in \partial B : h(x) = \ell \}\quad\textup{in the sense of traces}\,. \end{align} In particular, removing $B_s(x)$ from $S_{\ell}^\delta$ does not disturb the trace condition \eqref{trace constraint}. Then we may apply \eqref{lambda r nought inequality} from Lemma \ref{lambda r nought lemma}, yielding for almost every $s<r/2$ \begin{align}\notag
\mathcal{F}(\mathcal{S}^\delta)
&\leq \mathcal{F}(B^c,S_1^\delta,\dots ,S_{\ell}^\delta \setminus B_s(x), \dots,S_N^\delta, G^\delta \cup (S_\ell^\delta \cap B_s(x))) + \Lambda |S_{\ell}^\delta \cap B_s(x)| \\ \label{on the way to differential inequality}
&= \mathcal{F}(\mathcal{S}^\delta) - c_\ell P(S_\ell^\delta;B_s(x))+ c_\ell\mathcal{H}^1((S_\ell^\delta)^{(1)} \cap \partial B_s(x))+ \Lambda |S_\ell^\delta \cap B_s(x)|\,; \end{align} in the second line we have used the formula $$ P(S_{\ell}^\delta \setminus B_s(x);B)=P(S_{\ell}^\delta; B\setminus \mathrm{cl}\, B_s(x)) + \mathcal{H}^1((S_{\ell}^\delta)^{(1)} \cap \partial B_s(x))\,, $$ which holds for all but those countably many $s$ with $\mathcal{H}^1(\partial^* S_{\ell}^\delta\cap \partial B_s(x))>0$. After rearranging \eqref{on the way to differential inequality} and using the isoperimetric inequality to obtain \begin{align}\notag
2c_{\ell}\pi^{1/2}m(s)^{1/2} \leq 2c_{\ell}m'(s) + \Lambda m(s)\,, \end{align} we may reabsorb the term $\Lambda m(s)$ onto the left hand side and integrate to obtain the requisite decay on $m$.
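To indicate one admissible way to conclude (the constants here are illustrative, not optimized): since $m$ is nondecreasing and $m(r)\leq \varepsilon_0 r^2 \leq \varepsilon_0 r_*^2$, we may bound $\Lambda m(s) \leq \Lambda \varepsilon_0^{1/2} r_*\, m(s)^{1/2}$, so if $r_*$ is chosen with $\Lambda \varepsilon_0^{1/2} r_* \leq \pi^{1/2}\min_\ell c_\ell$, then \begin{align}\notag
c_\ell \pi^{1/2} m(s)^{1/2} \leq 2 c_\ell\, m'(s)\,,\qquad\text{that is}\qquad \big(m^{1/2}\big)'(s) \geq \frac{\pi^{1/2}}{4}\quad\text{for a.e. } s<r/2 \text{ with } m(s)>0\,. \end{align} If $m(r/4)>0$, integrating over $(r/4,r/2)$ yields $\varepsilon_0^{1/2} r \geq m(r/2)^{1/2}\geq \pi^{1/2} r/16$, which is impossible once $\varepsilon_0<\pi/256$; hence $m(r/4)=0$, which is \eqref{no infiltration equation}.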
\end{proof}
\begin{corollary}[Regularity along $(G^\delta)^{1/2} \cap (S_\ell^\delta)^{1/2}$]\label{reduced boundary regularity delta positive} If $\delta>0$ and, for a minimizer $\mathcal{S}^\delta\in \mathcal{A}_\delta^h$ or $\mathcal{S}^\delta\in \mathcal{A}_\delta^{\mathbf{m}}$ and a point $x\in B$ or $x\in \mathbb{R}^2$, respectively, there exist $r_j\to 0$ and $\ell$ such that \begin{align}\label{sees 1/2}
1=\lim_{j\to \infty}\frac{|(G^\delta \cup S_\ell^\delta)\cap B_{r_j}(x)|}{\pi r_j^2}\,, \end{align} then for large $j$, $\partial G^\delta \cap \partial S_\ell^\delta \cap B_{r_j}(x)$ is an arc of constant curvature and $S_{\ell'}^\delta \cap B_{r_j}(x) = \emptyset$ for $\ell'\neq \ell$. \end{corollary}
\begin{proof} By our assumption \eqref{sees 1/2} and the infiltration lemma, for some $j$ large enough, $B_{r_j}(x) \subset S_\ell^\delta \cup G^\delta$, in which case the classical regularity theory for volume-constrained minimizers of perimeter gives the conclusion. \end{proof}
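One way to phrase the constant-curvature conclusion quantitatively (a standard first-variation sketch, with $\lambda$ an unspecified Lagrange multiplier): inside a ball $B_{r_j}(x)\subset\!\subset S_\ell^\delta \cup G^\delta \cup (\partial S_\ell^\delta\cap\partial G^\delta)$, the set $S_\ell^\delta$ is a volume-constrained local minimizer of $c_\ell P(\,\cdot\,)$, so there exists $\lambda\in\mathbb{R}$ with \begin{align}\notag
c_\ell \int_{\partial^* S_\ell^\delta \cap B_{r_j}(x)} \mathrm{div}^{\partial S_\ell^\delta} X\, d\mathcal{H}^1 = \lambda \int_{\partial^* S_\ell^\delta\cap B_{r_j}(x)} X\cdot\nu_{S_\ell^\delta}\,d\mathcal{H}^1 \qquad \forall\, X\in C_c^1(B_{r_j}(x);\mathbb{R}^2)\,, \end{align} that is, the interface has constant distributional curvature $\lambda/c_\ell$.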
\begin{corollary}[Density Estimates]\label{density corollary}
If $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$ or $\mathcal{A}_\delta^{\mathbf{m}}$ for some $\delta> 0$, then there exist $0<\alpha_1,\, \alpha_2< 1$ and $r_{**}>0$ such that if $x\in \partial S_\ell^\delta$, then for all $r<r_{**}$, \begin{align}\label{area density equation}
\alpha_1 \pi r^2 \leq |S_\ell^\delta \cap B_r(x)| &\leq (1-\alpha_1)\pi r^2 \\ \label{perimeter density equation}
P(S_\ell^\delta;B_r(x)) &\leq \alpha_2r\,. \end{align} Also, $\mathcal{H}^1(\partial S_\ell^\delta\setminus \partial^* S_\ell^\delta)=0$ and each $(S_\ell^\delta)^{(1)}$ and $(G^{\delta})^{(1)}$ is open and satisfies \eqref{boundary convention 1}-\eqref{boundary convention 2}. \end{corollary}
\begin{proof}
We consider the case $\mathcal{S}^\delta \in \mathcal{A}_\delta^h$ and $1\leq \ell \leq N$, and the other cases are similar. First we prove the lower bound in \eqref{area density equation}. Let $x\in \partial S_\ell^\delta$. Then by our convention \eqref{boundary convention 1}-\eqref{boundary convention 2} regarding topological boundaries, \begin{align}\notag
|S_\ell^\delta \cap B_r(x)| > 0 \quad\textup{for all $r>0$}\,. \end{align}
By the infiltration lemma, the lower area density bound follows with $\alpha_1=\varepsilon_0$ and $r_{**}=r_*$.
\par
For the upper area bound, let us choose $r_{**}\leq r_*$ such that \begin{align}\label{scale invariant small}
\Lambda r_{**} \leq 1\,. \end{align} We claim that for any $x\in \partial S_\ell^\delta$ and $r<r_{**}$, \begin{align}\label{upper area bound when delta positive}
|S_\ell^\delta \cap B_r(x)| \leq \max\left\{\pi - \varepsilon_0,c_* \right\} r^2 \end{align} for a constant $c_*<\pi$ to be specified shortly. Suppose that this were not the case. Then by the smoothness of $\partial B$ and the containment of $S_\ell^\delta$ in $B$, \begin{align}\label{far from boundary 2}
{\rm dist} (x, \partial B ) \geq c(B) r \end{align} for some constant $c(B)<1/2$ depending on $B$, so that $B_{c(B)r/2}(x)\subset B$. Also, by the infiltration lemma, $S_{\ell'}^\delta \cap B_{r/4}(x)=\emptyset$ for $\ell' \neq \ell$.
These two facts combined imply that $B_{c(B)r/2}(x) \subset\!\subset S_\ell^\delta \cup G^\delta$. By Lemma \ref{lambda r nought lemma}, $S_\ell^\delta$ is a $(\Lambda, r_{**})$-minimizer of perimeter in $B_{c(B)r/2}(x)$ with $\Lambda r_{**}\leq 1$ by \eqref{scale invariant small}. Then the density estimates \cite[Theorem 21.11]{Mag12} for these minimizers give \begin{align}
|S_\ell^\delta \cap B_{c(B)r/2}(x)| \leq \frac{15\pi}{64}c(B)^2r^2\,. \end{align} By choosing $c_*$ close enough to $\pi$, we have a contradiction: indeed, $|S_\ell^\delta \cap B_{c(B)r/2}(x)| \geq |S_\ell^\delta \cap B_r(x)| - |B_r(x)\setminus B_{c(B)r/2}(x)| > \big(c_* - \pi + \tfrac{\pi}{4}c(B)^2\big)r^2$, which exceeds $\frac{15\pi}{64}c(B)^2r^2$ as soon as $c_* > \pi - \frac{\pi}{64}c(B)^2$. The perimeter bound \eqref{perimeter density equation} follows from a comparison construction which we omit, and the mild regularity on $\partial S_\ell^\delta$ follows from our normalization \eqref{boundary convention 1}-\eqref{boundary convention 2}, the area bounds, and Federer's theorem \cite[4.5.11]{Fed69}.\end{proof}
\begin{remark}[Lebesgue representatives]\label{lebesgue representative remark}
For the rest of the paper, we will always assume that we are considering the open set $(S_\ell^\delta)^{(1)}$ or $(G^\delta)^{(1)}$ as the Lebesgue representative of $S_\ell^\delta$ or $G^\delta$. \end{remark}
\subsection{Preliminary regularity when $\delta=0$}
The following infiltration (or ``elimination") lemma for a minimizer among $\mathcal{A}_0^{\mathbf{m}}$ is due to \cite[Theorem 3.1]{Leo01} and can be adapted easily to the problem on the ball; the reader may also consult \cite[Section 11]{White} for a similar statement.
\begin{lemma}[Infiltration Lemma for $\delta=0$]\label{infiltration lemma 2}
If $\mathcal{S}^0$ is a minimizer for $\mathcal{F}$ among $\mathcal{A}_0^h$, then there exist constants $\varepsilon_0>0$ and $r_*>0$ with the following property:
\par
if $x\in \mathbb{R}^2$, $r<r_*$, and $0\leq \ell_0< \ell_1\leq N$ are such that \begin{align}\label{small density assumption lemma 2}
|B_r(x) \setminus (S_{\ell_0}^0\cup S_{\ell_1}^0)| \leq \varepsilon_0 r^2\,, \end{align} then \begin{align}\label{no infiltration equation 2}
|(S_{\ell_0}^0\cup S_{\ell_1}^0) \cap B_{r/4}(x)| =0\,. \end{align} \end{lemma}
\begin{proof} Repeating the argument from step one of Lemma \ref{infiltration lemma}, there exists $\varepsilon(h)>0$ such that if $x\in \mathrm{cl}\, B$, $r<1$, and \begin{align}\label{small density assumption lemma step one 2}
|S_\ell^0 \cap B_r(x)| \leq \varepsilon r^2\quad \textup{ for some $\ell\in\{0,\dots, N\}\setminus \{\ell_0,\ell_1\}$}\,, \end{align} for a minimizer among $\mathcal{A}_0^h$, then \begin{align}\label{doesnt affect trace 2}
B_{r/2}(x) \cap \{ h=\ell\}=\emptyset\,. \end{align} In particular, by using Lemma \ref{lambda r nought lemma}, we may compare the minimality of $\mathcal{S}^0$ against competitors constructed by donating $B_s(x)\setminus (S_{\ell_0}^0 \cup S_{\ell_1}^0)$ to $S_{\ell_0}^0$ or $S_{\ell_1}^0$. The remainder of the argument is the same as in \cite{Leo01}. \end{proof}
The next two results may be proved as in Corollary \ref{reduced boundary regularity delta positive} and Corollary \ref{density corollary}.
\begin{corollary}[Regularity along $(S_\ell^0)^{1/2} \cap (S_{\ell'}^0)^{1/2}$]\label{reduced boundary regularity delta 0} If $\mathcal{S}^0$ is a minimizer among $\mathcal{A}_0^h$ and $x\in (S_\ell^0)^{1/2} \cap (S_{\ell'}^0)^{1/2}$ for $\ell,\ell'\in \{1,\dots,N\}$, then in a neighborhood of $x$, every other chamber is empty and $\partial S_\ell^0 \cap \partial S_{\ell'}^0$ is a segment. \end{corollary}
\begin{lemma}[Upper Area and Perimeter Bounds]\label{upper perimeter bounds delta 0}
If $\mathcal{S}^0$ minimizes $\mathcal{F}$ among $\mathcal{A}_0^h$, then there exist $\alpha_3>0$, $\alpha_4<1$, and $r_3>0$ such that \begin{align} \label{perimeter density equation 2}
\mathcal{F}(\mathcal{S}^0;B_r(x)) &\leq \alpha_3r\quad \forall\, r>0\,,\quad x\in \mathbb{R}^2\,, \end{align} and \begin{align}\label{upper area bound delta zero}
|B_r(x) \cap S_\ell^0| \leq \alpha_4 \pi r^2 \quad \forall x\in \partial S_{\ell}^0\,,\quad r<r_3\,. \end{align} \end{lemma}
\subsection{Monotonicity formula} This is the last technical tool necessary for obtaining blow-up cones. \begin{theorem}[Monotonicity Formula]\label{mon formula}
If $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$ for $\delta\geq 0$ or $\mathcal{A}_\delta^{\mathbf{m}}$ for $\delta> 0$, then there exists $\Lambda_0\geq 0$ such that for every $x\in \mathbb{R}^2$ there is $r_x>0$ with
\begin{align}\label{mon equation}
\sum_{\ell=0}^N\frac{c_\ell}{2}
&\int_{\partial^* S_\ell^\delta \cap (B_r(x) \setminus B_s(x))}\frac{((y-x)\cdot \nu_{S_\ell^\delta})^2}{|y-x|^3} \,d\mathcal{H}^1(y)\leq \frac{\mathcal{F}(\mathcal{S}^\delta;B_r(x))}{r} - \frac{\mathcal{F}(\mathcal{S}^\delta;B_s(x))}{s} +\Lambda_0 (r-s)
\end{align} for any $0<s<r<r_x$. \end{theorem}
\begin{proof} We consider the case $\mathcal{S}^\delta$ is minimal among $\mathcal{A}_\delta^h$ and $x\in \partial B$ is a jump point of $h$; the other cases are simpler since the trace constraint may be avoided. First, we observe that by the smoothness of the circle, there exists $\Lambda'>0$ such that \begin{align}\notag
\sum_{\ell=0}^N\frac{c_\ell}{2}
\int_{\partial^* S_\ell^\delta \cap (B_r(x) \setminus B_s(x))\cap \partial B}\frac{((y-x)\cdot \nu_{S_\ell^\delta})^2}{|y-x|^3} \,d\mathcal{H}^1(y)&\leq \frac{\mathcal{F}(\mathcal{S}^\delta;B_r(x) \cap \partial B)}{r} - \frac{\mathcal{F}(\mathcal{S}^\delta;B_s(x)\cap \partial B)}{s} \\ \notag
&\qquad +\Lambda'\pi (r-s) \quad \forall\, 0<s<r<r_x \end{align} for some $r_x$; that is, we have the desired monotonicity for the energy along $\partial B$. For the remainder of the proof, we therefore focus on the energy inside $B$. We define the increasing function \begin{align}\label{p definition}
p(r) := \sum_{\ell=1}^N c_\ell P(S_\ell^\delta ; B_r(x)\cap B) = \mathcal{F}(\mathcal{S}^\delta;B_r(x) \cap B) \end{align} where, since it will be clear from context, we have suppressed the dependence of $p$ on $x$. The proof requires two steps: first, deriving a differential inequality for $p$ using comparison with cones (see \eqref{testing against cones}), and second, integrating and employing a slicing argument. The computations in the second step are the same as those in the proof of the monotonicity formula for almost minimizing integer rectifiable currents \cite[Proposition 2.1]{DeLSpaSpo17}, so we omit them. \par We prove that given a jump point $x\in \partial B$ of $h$, there exists $r_{x}>0$ such that \begin{align}\label{testing against cones}
\frac{p(r)}{r^2} \leq \frac{1}{r}\sum_{\ell=1}^N c_\ell \mathcal{H}^0(\partial^* S_\ell^\delta \cap \partial B_r(x) \cap B)+ \Lambda \pi \quad\textup{ for a.e. }r<r_x\,, \end{align} where $\Lambda$ is from Lemma \ref{lambda r nought lemma}. As mentioned above, the monotonicity formula can then be derived from \eqref{testing against cones}.
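Since the integration step is only cited above, we record a sketch of how \eqref{testing against cones} produces a monotone quantity. We assume, as follows from the slicing argument, that $p'(r)\geq \sum_{\ell=1}^N c_\ell\,\mathcal{H}^0(\partial^* S_\ell^\delta \cap \partial B_r(x)\cap B)$ for almost every $r<r_x$; the squared-normal term on the left of \eqref{mon equation} then comes from tracking the deficit in this inequality, as in \cite[Proposition 2.1]{DeLSpaSpo17}.

```latex
% Sketch (under the slicing lower bound on p'(r) stated above):
% combining \eqref{testing against cones} with the bound on p'(r),
\begin{align*}
\frac{d}{dr}\Big(\frac{p(r)}{r}+\Lambda\pi r\Big)
&=\frac{p'(r)}{r}-\frac{p(r)}{r^2}+\Lambda\pi\\
&\geq \frac{1}{r}\Big(p'(r)-\sum_{\ell=1}^N c_\ell\,
\mathcal{H}^0(\partial^* S_\ell^\delta\cap \partial B_r(x)\cap B)\Big)\;\geq\;0
\quad\textup{for a.e. }r<r_x\,,
\end{align*}
% so r -> p(r)/r + \Lambda\pi r is nondecreasing on (0,r_x), which is
% \eqref{mon equation} without the squared-normal term on the left.
```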
For concreteness, suppose that $h$ jumps between $1$ and $2$ at $x$. Then, recalling \eqref{chord def}, there are chords $c_i^1$ and $c_j^2$ connecting $x$ to the nearest jump points on either side and corresponding circular segments $C_i^1$ and $C_j^2$. Let $0<r_x<r_0$ be small enough that $\mathrm{cl}\, B_{r_x}(x)$ intersects no circular segments from \eqref{circular segments} other than those two. By the inclusion \eqref{circular segment containment} for the minimizer and our choice of $r_x$, for every $r<r_x$, \begin{align}\label{circular segment containment on small circle}
C_i^1 \cap \mathrm{cl}\, B_r(x) \subset S_1^\delta\,,\quad C_j^2 \cap \mathrm{cl}\, B_r(x) \subset S_2^\delta\,,\quad\textup{and}\quad \partial B_r(x) \cap \partial B \subset \partial C_i^1 \cup \partial C_j^2 \,. \end{align} For $r<r_x$ to be specified momentarily, we consider the cluster $\tilde{\mathcal{S}}$ defined by \begin{align}\notag
\tilde{S}_1 &= (S_1^\delta \setminus \mathrm{cl}\, B_r(x)) \cup \{y\in B_r(x) \setminus C_i^1 : x + r(y-x)/|y-x| \in S_1^\delta\} \cup C_i^1\,, \\ \notag
\tilde{S}_2 &= (S_2^\delta \setminus \mathrm{cl}\, B_r(x)) \cup \{y\in B_r(x) \setminus C_j^2 : x + r(y-x)/|y-x| \in S_2^\delta\} \cup C_j^2\,, \\ \notag
\tilde{S}_\ell &= (S_\ell^\delta \setminus \mathrm{cl}\, B_r(x)) \cup \{y\in B_r(x) : x + r(y-x)/|y-x| \in S_\ell^\delta\}\,,\quad 3\leq \ell \leq N\,. \end{align} Note that by \eqref{circular segment containment on small circle}, each $\partial \tilde{S}_\ell \cap B_r(x)$ consists of radii of $B_r(x)$ contained in $B_r(x) \setminus (C_i^1 \cup C_j^2)$. Then by Lemma \ref{slicing lemma}, for almost every $r<r_x$, each $\tilde{S}_\ell$ is a set of finite perimeter and \begin{align}\notag
\sum_{\ell=1}^N c_\ell P(\tilde{S}_\ell; B )&= \sum_{\ell=1}^N c_\ell P(\tilde{S}_\ell; B_r(x) \cap B) + \sum_{\ell=1}^N c_\ell P(\tilde{S}_\ell; B \setminus \mathrm{cl}\, B_r(x)) \\ \label{cone perimeter}
&= r \sum_{\ell=1}^N c_\ell \mathcal{H}^0(\partial^* S_\ell^\delta \cap \partial B_r(x) \cap B) + \sum_{\ell=1}^N c_\ell P(S_\ell^\delta; B \setminus \mathrm{cl}\, B_r(x))\,. \end{align} Also, by \eqref{circular segment containment on small circle} and our definition of $\tilde{\mathcal{S}}$, the cluster $\tilde{\mathcal{S}}$ satisfies the trace condition \eqref{trace constraint}. If we set $\tilde{G} = B \setminus (\cup_\ell \tilde{S}_\ell)$, then we can plug \eqref{cone perimeter} into the comparison inequality \eqref{lambda r nought inequality} from Lemma \ref{lambda r nought lemma} and cancel like terms, yielding \begin{align} \notag
p(r)=\sum_{\ell=1}^N c_\ell P(S_\ell^\delta; B \cap B_r(x))
\leq r \sum_{\ell=1}^N c_\ell \mathcal{H}^0(\partial^* S_\ell^\delta \cap \partial B_r(x) \cap B) + \Lambda \pi r^2 \quad\textup{for a.e. $r<r_x$}\,. \end{align} This is precisely \eqref{testing against cones} multiplied by $r^2$. \end{proof}
\subsection{Existence of blow-up cones}
The monotonicity formula allows us to identify blow-up minimal cones at interfacial points of a minimizer. It will be convenient to identify interfacial points for minimizers among $\mathcal{A}_\delta^{\mathbf{m}}$ with interfacial points in $B$ for minimizers among $\mathcal{A}_\delta^h$, since, at the level of blow-ups, the behavior is the same.
\begin{definition}[Interior and boundary interface points]\label{int bound int pt def}
If $\mathcal{S}^\delta$ is minimal among $\mathcal{A}_\delta^h$ and $x\in B \cap \partial S_\ell^\delta$ or $\mathcal{S}^\delta$ is minimal among $\mathcal{A}_\delta^{\mathbf{m}}$ and $x\in \partial S_\ell^\delta$ for some $\ell$, we say $x$ is an {\bf interior interface point}. If $\mathcal{S}^\delta$ is minimal among $\mathcal{A}_\delta^h$ and $x\in \partial B$, we call $x$ a {\bf boundary interface point}. \end{definition}
The blow-ups at a boundary interface point will be minimal in a halfspace among competitors satisfying a constraint coming from the trace condition \eqref{trace constraint} and the inclusion \eqref{circular segment containment} from Theorem \ref{existence theorem}.
\begin{definition}[Admissible blow-ups at jump points of $h$]\label{boundary blow up definition} Let $x\in \partial B$ be a jump point of $h$, and let $C_i^{\ell_0}$ and $C_j^{\ell_1}$ be the circular segments from \eqref{circular segments} meeting at $x$. Let \begin{align}\label{circular segment blowup}
C_\infty^{\ell_0}=\bigcup_{\lambda >0} \lambda (C_i^{\ell_0}-x)\,,\quad C_\infty^{\ell_1}=\bigcup_{\lambda >0} \lambda (C_j^{\ell_1}-x) \end{align} be the blow-ups of the convex sets $C_i^{\ell_0}$ and $C_j^{\ell_1}$ at their common boundary point $x$. We define \begin{align}\label{ax definition}
\mathcal{A}_x:=\{\mathcal{S} : \forall \ell \neq 0,\,S_\ell \subset \{y \cdot x < 0 \}=S_0^c\,,\,\,S_\ell \cap S_{\ell'}=\emptyset \textup{ if }\ell\neq \ell'\,,\,\,C_\infty^{\ell_k}\subset S_{\ell_k}\textup{ for }k=0,1\}\,. \end{align} \end{definition}
\begin{theorem}[Existence of Blow-up Cones]\label{existence of cones} If $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$ for some $\delta \geq 0$ or among $\mathcal{A}_\delta^{\mathbf{m}}$ for $\delta>0$, then for any sequence $r_j \to 0$, there exists a subsequence $r_{j_k}\to 0$ and cluster $\mathcal{S}=(S_0,\dots,S_N,G)$ partitioning $\mathbb{R}^2$, satisfying the following properties: \begin{enumerate}[label=(\roman*)] \item $(S_\ell^\delta - x)/r_{j_k} \overset{L^1_{\rm loc}}{\to}
S_\ell$ for each $0\leq \ell\leq N$;
\item $\mathcal{H}^1\mres (\partial S_\ell^\delta - x)/r_{j_k}\weakstar \mathcal{H}^1 \mres \partial S_\ell$ for each $0\leq \ell\leq N$; \item $S_\ell$ is an open cone for each $0\leq \ell \leq N$; \item if $x$ is an interior interface point and $\tilde{\mathcal{S}}$ is such that for $0\leq \ell \leq N$, $\tilde{S}_\ell\Delta S_\ell \subset\!\subset B_R$ and, for the problem on the ball, $S_0=\emptyset$, then \begin{align}\label{interior cone is minimal} \mathcal{F}(\mathcal{S}; B_R) \leq \mathcal{F}(\tilde{\mathcal{S}}; B_R); \end{align} \item if $x\in \partial S_{\ell_0}^\delta \cap \partial B$ is not a jump point of $h$, then $S_{\ell_0} = \{y: y\cdot x < 0 \}$ and $S_0=\{y:y\cdot x > 0\}$; \item if $x\in \partial S_{\ell_0}^\delta \cap \partial B$ is a jump point of $h$, then $\mathcal{S} \in \mathcal{A}_x$ and if $\tilde{\mathcal{S}}\in \mathcal{A}_x$ is such that for $0\leq \ell \leq N$, $\tilde{S}_\ell\Delta S_\ell \subset\!\subset B_R$, then \begin{align}\label{boundary cone is minimal} \mathcal{F}(\mathcal{S}; B_R ) \leq \mathcal{F}(\tilde{\mathcal{S}}; B_R )\,. \end{align} \end{enumerate} \end{theorem}
\begin{proof}
When $x$ is a boundary interface point and is not a jump point of $h$, then $S_{\ell_0}^\delta \cap B \cap B_{r_x}(x)= B \cap B_{r_x}(x)$ for some $r_x>0$ by \eqref{circular segment containment} from Theorem \ref{existence theorem}. In this case, items $(i)$-$(iii)$ and $(v)$ are trivial. Also, the case of interior interface points is essentially a simpler version of the argument when $x\in \partial S_{\ell_0}^\delta \cap \partial B$ is a jump point of $h$. Therefore, for the rest of the proof, we focus on items $(i)$-$(iii)$ and $(vi)$ when $x\in \partial B \cap \partial S_{\ell_0}^\delta$ is a jump point of $h$.
\par The upper perimeter bounds from Corollary \ref{density corollary} or Lemma \ref{upper perimeter bounds delta 0} and compactness in $BV$ give the existence of $r_{j_k}\to 0$ and $\mathcal{S}$ such that the convergence in $(i)$ holds. In addition, this compactness gives \begin{align}\label{convergence of vector measures}
\mu_k^\ell:= (\nu_{ S_\ell^\delta } \mathcal{H}^{1} \mres \partial S_\ell^\delta - x)/r_{j_k}\weakstar \nu_{S_\ell }\mathcal{H}^1 \mres \partial^* S_\ell=: \mu_\ell \quad \forall 0\leq \ell \leq N\,. \end{align} It is easy to check from the inclusion \eqref{circular segment containment} that $\mathcal{S} \in \mathcal{A}_x$. We now discuss in order \eqref{boundary cone is minimal}, $(ii)$, and $(iii)$. The proofs of \eqref{boundary cone is minimal} and $(ii)$ are standard compactness arguments that proceed mutatis mutandis as the proof of the compactness theorem for $(\Lambda,r_0)$-perimeter minimizers \cite[Theorem 21.14]{Mag12}. Finally, $(iii)$ follows from the monotonicity formula \eqref{mon equation}, which implies that $\mathcal{F}(\mathcal{S};B_r)/r$ is constant in $r$, and the characterization of cones \cite[Proposition 28.8]{Mag12}. \end{proof}
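For the reader's convenience, here is a sketch of how the monotonicity formula enters item $(iii)$; the scaling identity below is simply the dilation behavior of the functional in the plane.

```latex
% By 1-homogeneity of F under dilations in R^2 and item (ii),
\begin{align*}
\frac{\mathcal{F}(\mathcal{S};B_r)}{r}
=\lim_{k\to\infty}\frac{\mathcal{F}\big(\mathcal{S}^\delta;B_{r\,r_{j_k}}(x)\big)}{r\,r_{j_k}}
\quad\textup{for a.e. } r>0\,,
\end{align*}
% and \eqref{mon equation}, applied at the scales s = r r_{j_k} with error
% \Lambda_0 r r_{j_k} -> 0, shows the limit is independent of r. Passing
% \eqref{mon equation} itself to the limit (now with vanishing error term)
% forces (y . \nu_{S_\ell})^2 = 0 for H^1-a.e. y on each \partial^* S_\ell,
% i.e. the normal is orthogonal to the radial direction, which is the
% characterization of cones in [Mag12, Proposition 28.8].
```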
\subsection{Classification of blow-up cones}\label{sec: classification of blowup cones}
We classify the possible blow-up cones for a minimizer using the terminology set forth in Definition \ref{int bound int pt def}.
\begin{theorem}[Classification of Blow-up Cones for $\delta>0$]\label{positive delta classification theorem}
If $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$ or $\mathcal{A}_\delta^{\mathbf{m}}$ for some $\delta > 0$, and $\mathcal{S}$ is a blow-up cluster for $x\in \partial S_{\ell_0}^\delta$ and some $r_{j}\to 0$, then exactly one of the following is true: \begin{enumerate}[label=(\roman*)] \item $x\in \partial S_{\ell_0}^\delta$ is an interior interface point and $S_{\ell_0}=\{y\cdot \nu_{S_{\ell_0}^\delta}(x)<0\}$, $G = \mathbb{R}^2 \setminus S_{\ell_0}$; \item $x\in \partial S_{\ell_0}^\delta$ is an interior interface point, $S_{\ell_0}=\{y\cdot \nu <0\}$ for some $\nu \in \mathbb{S}^1$, and $S_{\ell_1}=\mathbb{R}^2 \setminus S_{\ell_0}$ for some $\ell_1\neq \ell_0$; \item $x\in \partial S_{\ell_0}^\delta \cap \partial B$ is a boundary interface point and jump point of $h$, $S_{\ell_0}=\{y\cdot \nu <0,\, y \cdot x < 0\}$, and $S_{\ell_1}=\{y\cdot \nu >0,\, y \cdot x < 0\}$ for some $\nu \in \mathbb{S}^1$ and $\ell_1 \neq \ell_0$; \item $x\in \partial S_{\ell_0}^\delta \cap \partial B$ is a boundary interface point and jump point of $h$, $S_{\ell_0}=\{y\cdot \nu_0 <0,\, y \cdot x < 0\}$, $S_{\ell_1}=\{y\cdot \nu_1 >0,\, y \cdot x < 0\}$, $S_0=\{y\cdot x >0\}$, and $G= (S_0 \cup S_{\ell_0}\cup S_{\ell_1})^c$ for some $\nu_0,\, \nu_1 \in \mathbb{S}^1$ and $\ell_1 \neq \ell_0$; \item $x\in \partial S_{\ell_0}^\delta \cap \partial B$ is a boundary interface point, not a jump point of $h$, $S_{\ell_0}=\{y\cdot x <0\}=S_0^c$.
\end{enumerate} \end{theorem}
\begin{proof}[Proof of Theorem \ref{positive delta classification theorem}] \textit{Step one}: In this step we consider an interior interface point $x$ and show that either $(i)$ or $(ii)$ holds. First, we note that since $x\in \partial S_{\ell_0}^\delta$ and the density estimates \eqref{area density equation} pass to the blow-up limit, $S_{\ell_0}\neq \emptyset$ and $S_{\ell_0} \neq\mathbb{R}^2$, so $\mathcal{S}$ is non-trivial. We claim that no non-empty connected component of any chamber $S_\ell$, $0\leq \ell \leq N$, can be anything other than a halfspace; from this claim it follows that $(i)$ or $(ii)$ holds. Indeed, suppose that there were such a component $C$, say of $S_1$, defined by an angle $\theta\neq \pi$ with $\partial C \cap \partial B=\{c_1,c_2\}$. Let $K$ be the convex hull of $c_1$, $c_2$, and $0$. If $\theta<\pi$, then the triangle inequality implies that the cluster $\mathcal{S}'=(S_0,S_1 \setminus K, S_2,\dots, S_N, G\cup K)$ satisfies $\mathcal{F}(\mathcal{S}';B_2)<\mathcal{F}(\mathcal{S};B_2)$, contradicting the minimality property \eqref{interior cone is minimal}. On the other hand, if $\theta>\pi$, then the cluster $\mathcal{S}'=(S_0\setminus K,S_1 \cup K, S_2 \setminus K, \dots, S_N \setminus K, G\setminus K)$ also contradicts \eqref{interior cone is minimal} due to the triangle inequality. \par
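Both comparisons in step one rest on the same elementary estimate, which we record as a sketch (here $\theta$ is the opening angle of the component $C$, $c_1,c_2\in\partial B$ are as above, and the bookkeeping of the weights $c_\ell$ is as in the proof):

```latex
% The two radii [0,c_1] and [0,c_2] bounding C inside B_1 have total
% length 2, while the chord of K = conv{c_1, c_2, 0} replacing them has
% length (independently of whether \theta < \pi or \theta > \pi)
\begin{align*}
|c_1-c_2|\,=\,2\sin\big(\tfrac{\theta}{2}\big)\,<\,2
\qquad\textup{for every }\theta\in(0,2\pi)\setminus\{\pi\}\,,
\end{align*}
% since sin((2\pi-\theta)/2) = sin(\theta/2). This strict length gain is
% what makes both competitors strictly better and produces the
% contradiction with \eqref{interior cone is minimal}.
```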
\noindent\textit{Step two}: Moving on to the case of a boundary interface point, we begin by observing that $(v)$ is trivial by \eqref{circular segment containment} when $x$ is not a jump point of $h$. If $x$ is a jump point of $h$, say between $h=1$ and $h=2$, then $S_0=\{y\cdot x > 0\}$, and $\{S_1,\dots,S_N,G\}$ partition $S_0^c$. Now the same argument as in the previous step using the triangle inequality shows that $S_\ell=\emptyset$ for $3 \leq \ell \leq N$ and $S_1$ and $S_2$ each only have one connected component bordering $S_0$. It follows that either $(iii)$ or $(iv)$ holds.\end{proof}
\begin{corollary}[Regularity for $\partial G^{\delta}$ away from $(G^\delta)^{(0)}$]\label{no termination corollary} If $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$ or $\mathcal{A}_\delta^{\mathbf{m}}$ for some $\delta > 0$ and $x$ is an interior interface point such that \begin{align}\label{limsup assumption}
\limsup_{r\to 0} \frac{|G^{\delta} \cap B_r(x)|}{\pi r^2} > 0\,, \end{align} then there exist $r_x>0$ and $\ell$ such that $\partial G^{\delta}\cap B_{r_x}(x)$ is an arc of constant curvature dividing $B_{r_x}(x)$ into $G^{\delta} \cap B_{r_x}(x)$ and $S_{\ell}^\delta \cap B_{r_x}(x)$. \end{corollary}
\begin{proof}
If $r_j\to 0$ is a sequence achieving the limit superior in \eqref{limsup assumption}, then any subsequential blow-up at $x$ must be characterized by case $(i)$ of Theorem \ref{positive delta classification theorem}. The desired conclusion now follows from Corollary \ref{reduced boundary regularity delta positive}. \end{proof}
Lastly, we classify blow-up cones for $\delta=0$ when either $N=3$ or the weights are equal.
\begin{theorem}[Classification of Blow-up Cones for $\delta=0$ on the Ball]\label{zero delta classification theorem}
If $N=3$ or $c_\ell=1$ for $0\leq \ell \leq N$, $\mathcal{S}^0$ minimizes $\mathcal{F}$ among $\mathcal{A}_0^h$, and $\mathcal{S}$ is a blow-up cluster at an interface point $x$, then exactly one of the following is true: \begin{enumerate}[label=(\roman*)] \item $x\in \partial^* S_{\ell_0}^0 \cap \partial^* S_{\ell_1}^0$ is an interior interface point and $S_{\ell_0}=\{y:y\cdot \nu_{S_{\ell_0}^0}(x)<0\}$, $S_{\ell_1} = \mathbb{R}^2 \setminus S_{\ell_0}$; \item $x$ is an interior interface point, and the non-empty chambers of $\mathcal{S}$ are three connected cones $S_{\ell_i}$, $i=0,1,2$, with vertex at the origin satisfying \begin{align}\label{classical angle conditions}
\frac{\sin \theta_{\ell_0}}{c_{\ell_1}+c_{\ell_2}}=\frac{\sin \theta_{\ell_1}}{c_{\ell_0}+c_{\ell_2}}=\frac{\sin \theta_{\ell_2}}{c_{\ell_0}+c_{\ell_1}} \end{align} where $\theta_{\ell_i}=\mathcal{H}^1(S_{\ell_i} \cap \partial B)$; \item $x\in \partial S_{\ell_0}^0 \cap \partial B$ is not a jump point of $h$, and $S_{\ell_0} = \{y: y\cdot x <0 \}=S_0^c$; \item $x\in \partial S_{\ell_0}^0 \cap \partial B$ is a jump point of $h$, $S_{\ell_0}=\{y:y\cdot \nu <0,\, y \cdot x < 0\}$, and $S_{\ell_1}=\{y:y\cdot \nu >0,\, y \cdot x < 0\}$ for some $\nu \in \mathbb{S}^1$ and $\ell_1 \neq \ell_0$, and $S_0=\{y:y\cdot x >0\}$; \item $x\in \partial S_{\ell_0}^0 \cap \partial B$ is a jump point of $h$, and the non-empty chambers of $\mathcal{S}\in\mathcal{A}_x$ are $S_0=\{y:y\cdot x >0\}$ and three connected cones $S_{\ell_i}$, $i=0,1,2$, partitioning $S_0^c$.
\end{enumerate} \end{theorem}
\begin{proof} We begin with the observation that no blow-up at $x$ can consist of a single chamber. Since $x$ is an interface point, it belongs to $\partial S_{\ell}^0$ for some $\ell$, and by our normalization \eqref{boundary convention 1}-\eqref{boundary convention 2} for reduced and topological boundaries, $x\in {\rm spt} \,\mathcal{H}^1\mres \partial^* S_{\ell}^0$. If a blow-up limit at $x$ consisted of a single chamber $S_{\ell'}$, then the $L^1$ convergence, the upper area bound \eqref{upper area bound delta zero}, and the infiltration lemma would imply that $x\in \mathrm{int}\, S_{\ell'}^0$, contradicting $x\in {\rm spt} \,\mathcal{H}^1 \mres \partial^* S_{\ell}^0$. Therefore, there are at least two chambers in the blow-up cluster at $x$. \par Next, we claim that when $N=3$ or $c_\ell=1$ for all $\ell$, there cannot be four or more non-empty connected components of chambers of $\mathcal{S}$ comprising $\mathbb{R}^2$ if the blow-up is at an interior interface point, or comprising $\{y:y\cdot x<0\}$ at a boundary interface point. If $N=3$ and this were the case, then there must be some $S_\ell$, say $S_1$, which has two connected components $C_1$ and $C_2$ separated by a circular sector $C_3$ with $\partial C_3 \cap \partial B =\{c,c'\}$ and ${\rm dist}(c,c')<2$. We set $K$ to be the convex hull of $0$, $c$, and $c'$ and define the cluster $\mathcal{S}'=(S_0, S_1 \cup K, S_2\setminus K, S_3\setminus K,\emptyset)$. Note $\mathcal{S}'\in \mathcal{A}_x$ when $x$ is a boundary interface point. Then the triangle inequality implies that $\mathcal{F}(\mathcal{S}';B_2)<\mathcal{F}(\mathcal{S};B_2)$, which contradicts the minimality condition \eqref{interior cone is minimal} or \eqref{boundary cone is minimal}. 
For the case when $c_\ell=1$ for all $\ell$, if there were more than three connected components, there must be some component $C\subset S_\ell$ with $\mathcal{H}^1(C \cap \partial B_1)<2\pi/3$, and, when $x$ is a boundary interface point, $\partial C \cap \{y\cdot x=0\}=\{0\}$. Then the construction in \cite[Proposition 30.9]{Mag12}, in which triangular portions of $C$ near $0$ are allotted to the neighboring chambers, allows us to construct a competitor (belonging to $\mathcal{A}_x$ if required) that contradicts the minimality \eqref{interior cone is minimal} or \eqref{boundary cone is minimal}. \par We may now conclude the proof. If $x$ is an interior interface point, then there are either two or three distinct connected chambers in the blow-up at $x$. Similarly to the previous theorem, the triangle inequality implies that if there are two, they are both halfspaces. If there are three, the angle conditions \eqref{classical angle conditions} follow from a first variation argument. If $x$ is a boundary interface point, then $(iii)$ holds by \eqref{circular segment containment} if $x$ is not a jump point of $h$. If $x$ is a jump point of $h$, then $\{y\cdot x < 0\}$ is partitioned into either two or three connected cones. The former is case $(iv)$, and the latter is case $(v)$. \end{proof}
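As a consistency check (not needed for the proof), in the equal-weight case $c_\ell\equiv 1$ the angle conditions \eqref{classical angle conditions} reduce to the familiar $120^\circ$ condition for the standard triple junction:

```latex
% With c_\ell = 1, \eqref{classical angle conditions} reads
% sin\theta_{\ell_0} = sin\theta_{\ell_1} = sin\theta_{\ell_2}, while the
% three angles sum to 2\pi. The common value of the sines is positive:
% it cannot be 0 (angles equal to \pi sum to 3\pi), nor negative (all
% angles in (\pi,2\pi) sum to more than 2\pi). So each angle lies in
% (0,\pi), and \theta_{\ell_i}+\theta_{\ell_j}=\pi would force the third
% angle to equal \pi, which is excluded. Hence all three are equal:
\begin{align*}
\theta_{\ell_0}=\theta_{\ell_1}=\theta_{\ell_2}=\frac{2\pi}{3}\,,
\end{align*}
% i.e. the three interfaces of the blow-up meet at 120 degrees.
```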
\section{Proof of Theorem \ref{main regularity theorem delta positive}}\label{sec: proofs of main regularity theorems}
To streamline the statement below, the terminology ``arc of constant curvature'' includes segments in addition to circle arcs.
\begin{theorem}[Interior Resolution for $\delta>0$]\label{interior resolution theorem delta positive}
If $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$ or $\mathcal{A}_\delta^{\mathbf{m}}$ for some $\delta>0$ and $x\in \partial S_{\ell_0}^\delta$ is an interior interface point, then there exists $r_x>0$ such that exactly one of the following is true: \begin{enumerate}[label=(\roman*)] \item $S_{\ell'}^\delta \cap B_{r_x}(x)=\emptyset$ for $\ell' \neq \ell_0$ and $\partial S_{\ell_0}^\delta \cap B_{r_x}(x)$ is an arc of constant curvature separating $S_{\ell_0}^\delta \cap B_{r_x}(x)$ and $G^\delta \cap B_{r_x}(x)$; \item $\partial S_{\ell_0}^\delta \cap B_{r_x}(x)$ is an arc of constant curvature separating $B_{r_x}(x)$ into $S_{\ell_0}^\delta \cap B_{r_x}(x)$ and $S_{\ell'}^\delta \cap B_{r_x}(x)$ for some $\ell'\neq \ell_0$; \item there exist circle arcs $a_1$ and $a_2$ meeting tangentially at $x$ such that \begin{align}\notag
\partial S_{\ell_0}^\delta\cap \partial G^\delta \cap B_{r_x}(x)= a_1\,,\quad \partial S_{\ell'}^\delta\cap \partial G^\delta \cap B_{r_x}(x)= a_2\,, \quad \partial S_{\ell_0}^\delta \cap \partial S_{\ell'}^\delta \cap B_{r_x}(x)=\{x\}\,; \end{align}
\item there exist circle arcs $a_1$ and $a_2$ meeting in a cusp at $x$ and an arc $a_3$ of constant curvature reaching the cusp tangentially at $x$, such that \begin{align}\notag
\partial S_{\ell_0}^\delta\cap \partial G^\delta \cap B_{r_x}(x)= a_1\,,\quad \partial S_{\ell'}^\delta\cap \partial G^\delta \cap B_{r_x}(x)= a_2\,, \quad \partial S_{\ell_0}^\delta \cap \partial S_{\ell'}^\delta \cap B_{r_x}(x)=a_3\,. \end{align} \end{enumerate} \end{theorem}
\begin{proof} Let us assume for simplicity that $x$ is the origin; the proof at any other point is similar. \par
\noindent\textit{Step zero}: If $0\notin \partial S_{\ell'}^\delta$ for all $\ell'\neq \ell_0$, then by the density estimates \eqref{area density equation}, $B_{r_0}\cap S_{\ell'}^\delta =\emptyset$ for some $r_0$ and all $\ell'\neq \ell_0$. From the classification of blow-ups in Theorem \ref{positive delta classification theorem}, $(i)$ must hold at $0$. \par
\noindent\textit{Step one}: For the rest of the proof, we assume instead that for some $\ell'\neq \ell_0$, $0\in \partial S_{\ell'}^\delta$. By Theorem \ref{positive delta classification theorem} and the fact that the density estimates \eqref{area density equation} pass to all blow-up limits, we are in case $(ii)$ of that theorem: any possible blow-up limit at $0$ is a pair of halfspaces coming from $S_{\ell_0}^\delta$ and $S_{\ell'}^\delta$. In this step we identify a rectangle $Q'$ small enough such that $S_{\ell_0}^\delta \cap Q'$ and $S_{\ell'}^\delta \cap Q'$ are a hypograph and epigraph, respectively, over a common axis. \par Let us fix $r_j\to 0$ such that applying Theorem \ref{positive delta classification theorem} and rotating if necessary, we obtain \begin{align}\label{L1 convergence blowup}
S_{\ell_0}^\delta/r_j \overset{L^1_{\rm loc}}{\to} \mathbb{H}^-:= \{y: y\cdot e_2 < 0 \}\,,&\quad S_{\ell'}^\delta/r_j \overset{L^1_{\rm loc}}{\to} \mathbb{H}^+:= \{y: y\cdot e_2 > 0 \}\,, \\ \label{blow-up weak star}
\mathcal{H}^1 \mres \partial S_{\ell_0}/r_j\,, \,\mathcal{H}^1 \mres \partial &S_{\ell'}/r_j \weakstar\mathcal{H}^1 \mres \partial \mathbb{H}^+\,. \end{align} Set \begin{align}\notag
Q = [-1,1]\times [-1,1]\,. \end{align} We note that for all $r<r_{**}/r_j$, \begin{align}\label{area density equation blowup}
\alpha_1 \pi r^2 \leq |S_\ell^\delta/r_j \cap B_r(y)|\leq (1-\alpha_1)\pi r^2 \quad \textup{if }y\in \partial S_\ell^\delta/r_j \textup{ for }\ell = \ell_0 \textup{ or } \ell' \end{align} due to \eqref{area density equation}. Also due to \eqref{area density equation} and \eqref{L1 convergence blowup}, \begin{align}\label{no third chamber}
S_{\ell}^\delta \cap B_{r_j}= \emptyset\quad \forall \ell \notin \{\ell',\ell_0\}\,, \quad\textup{for large }j; \end{align} we may assume by restricting to the tail that \eqref{no third chamber} holds for all $j$. Next, a standard argument utilizing \eqref{L1 convergence blowup} and \eqref{area density equation blowup} implies that there exists $J\in \mathbb{N}$ such that for all $j\geq J$, \begin{align}\label{boundary trapped}
(\partial S_{\ell_0}^\delta/r_j \cup \partial S_{\ell'}^\delta/ r_j) \cap Q \subset [-1,1]\times [-1/4,1/4]\,. \end{align} Now for almost every $t\in [-1,1]$, by Lemma \ref{slicing lemma}, the vertical slices (viewed as subsets of $\mathbb{R}$) $$ (S_{\ell_0}^\delta/r_j)_t:=S_{\ell_0}^\delta/r_j \cap Q \cap \{y:y\cdot e_1=t \}\,,\quad (S_{\ell'}^\delta/r_j)_t:=S_{\ell'}^\delta/r_j\cap Q \cap \{y:y\cdot e_1=t \} $$ are one-dimensional sets of finite perimeter and, by \eqref{boundary trapped} and \cite[Proposition 14.5]{Mag12}, \begin{align}\notag
2c_{\ell_0} + 2c_{\ell'}&\leq \int_{-1}^1 c_{\ell_0}P((S_{\ell_0}^\delta/r_j)_t;(-1,1))+c_{\ell'}P((S_{\ell'}^\delta/r_j)_t;(-1,1)) \,dt\\ \label{slicing perimeter estimate}
&\leq c_{\ell_0}P(S_{\ell_0}^\delta/r_j;\mathrm{int}\,Q) + c_{\ell'}P(S_{\ell'}^\delta/r_j;\mathrm{int}\,Q)\,. \end{align} Since $\mathcal{H}^1(\partial \mathbb{H}^+ \cap \partial Q)=0$, \eqref{blow-up weak star} implies that \begin{align}\label{weak star estimate}
\lim_{j\to \infty }c_{\ell_0}P(S_{\ell_0}^\delta/r_j;\mathrm{int}\,Q) + c_{\ell'}P(S_{\ell'}^\delta/r_j;\mathrm{int}\,Q) = 2c_{\ell_0} + 2c_{\ell'}\,. \end{align} Together, \eqref{boundary trapped}-\eqref{weak star estimate} and Lemma \ref{slicing lemma} allow us to identify $j$ as large as we like (to be specified further shortly) and $-1<t_1< t_2< 1 $ such that for $i=1,2$, \begin{align}\label{one perimeter} P((S_{\ell_0}^\delta/r_j)_{t_i};(-1,1))&=1=P((S_{\ell'}^\delta/r_j)_{t_i} ;(-1,1))\,, \\ \notag
0&=\int_{(-1,1)}|\mathbf{1}_{(S_{\ell_0}^\delta/r_j)^+_{t_i}}-\mathbf{1}_{(S_{\ell_0}^\delta/r_j)^-_{t_i}}|+|\mathbf{1}_{(S_{\ell_0}^\delta/r_j)^+_{t_i}}- \mathbf{1}_{(S_{\ell_0}^\delta/r_j)_{t_i}}|\,d\mathcal{H}^1\\ \label{traces agree}
&=\int_{(-1,1)}|\mathbf{1}_{(S_{\ell'}^\delta/r_j)^+_{t_i}}-\mathbf{1}_{(S_{\ell'}^\delta/r_j)^-_{t_i}}|+|\mathbf{1}_{(S_{\ell'}^\delta/r_j)^+_{t_i}}- \mathbf{1}_{(S_{\ell'}^\delta/r_j)_{t_i}}|\,d\mathcal{H}^1\,, \end{align} where here and in the rest of the argument, the minus and plus superscripts denote left and right traces along $\{y\cdot e_1=t_i\}$ (again viewed as subsets of $\mathbb{R}$). From \eqref{boundary trapped} and \eqref{one perimeter}-\eqref{traces agree}, we deduce that there exist $-1/4\leq a_1 \leq b_1 \leq 1/4$ and $-1/4\leq a_2 \leq b_2 \leq 1/4$ such that
\begin{align}\label{left and right traces}
\mathcal{H}^1((S_{\ell_0}^\delta/r_j)^\pm_{t_i} \Delta [-1, a_i])=&0= \mathcal{H}^1((S_{\ell'}^\delta/r_j)^\pm_{t_i} \Delta [b_i, 1]) \quad\textup{ for }i=1,2\,.
\end{align} Let us call $Q' = [t_1,t_2]\times [-1,1]$.
Since it will be useful later, we record the equality \begin{align}\label{no perimeter on boundary cube}
\mathcal{F}(\mathcal{S}^\delta) = \mathcal{F}(\mathcal{S}^\delta;\mathbb{R}^2\setminus r_jQ') + c_{\ell_0}P(S_{\ell_0}^\delta; \mathrm{int}\, r_jQ') + c_{\ell'}P(S_{\ell'}^\delta; \mathrm{int}\, r_jQ')\,, \end{align} which follows from \eqref{no third chamber}, \eqref{traces agree}, and Lemma \ref{slicing lemma}. \par Using the explicit description given by \eqref{boundary trapped} and \eqref{left and right traces}, we now identify a variational problem on $Q'$ for which our minimal partition must be optimal. We consider the minimization problem \begin{align}\notag
\inf_{\mathcal{A}_{Q'}} c_{\ell_0}P(A;\mathrm{int}\, Q') + c_{\ell'}P(B;\mathrm{int}\, Q')\,, \end{align} where \begin{align}\notag
\mathcal{A}_{Q'}&:=\{(A,B):A, B\subset Q',\, \restr{A}{\partial Q'}=S_{\ell_0}^\delta/r_j,\, \restr{B}{\partial Q'}=S^\delta_{\ell'}/r_j\textup{ in the trace sense},\, \\ \notag
&\quad\qquad |A\cap B|=0,\,|A\cap Q'|=|(S_{\ell_0}^\delta/r_j)\cap Q'|,\,|B\cap Q'|=|(S_{\ell'}^\delta/r_j)\cap Q'|\}\,. \end{align} By the area constraint on elements in the class $\mathcal{A}_{Q'}$ and $\mathcal{S}^\delta \in \mathcal{A}_\delta^h$ or $\mathcal{S}^\delta \in \mathcal{A}_\delta^{\mathbf{m}}$, any $\mathcal{S}$ given by \begin{align}\notag
S_{\ell_0} = (S_{\ell_0}^\delta\setminus r_jQ')\cup r_j(A \cap Q')\,, \quad S_{\ell'}= (S_{\ell'}^\delta\setminus r_jQ')\cup r_j(B \cap Q')\,, \quad
S_{\ell}= S_{\ell}^\delta\quad \textup{for }\ell\notin\{\ell_0,\ell'\}\,, \end{align}
satisfies $|\mathbb{R}^2 \setminus \cup_\ell S_\ell| \leq \delta$ in the former case and $|\mathbb{R}^2 \setminus \cup_\ell S_\ell| \leq \delta$ and $(|S_1|,\dots,|S_N|)=\mathbf{m}$ in the latter. Also, once $r_j$ is small enough, if $\mathcal{S}^\delta\in \mathcal{A}_\delta^h$, then $\mathcal{S}$ satisfies the trace condition \eqref{trace constraint} as well. Therefore, $\mathcal{S}\in \mathcal{A}_\delta^h$ or $\mathcal{S}\in \mathcal{A}_\delta^{\mathbf{m}}$, so we can compare \begin{align}\notag \mathcal{F}(\mathcal{S}^\delta)\hspace{.15cm} &\hspace{-.25cm}\overset{\eqref{no perimeter on boundary cube}}{=} \mathcal{F}(\mathcal{S}^\delta;\mathbb{R}^2\setminus r_jQ') + c_{\ell_0}P(S_{\ell_0}^\delta; \mathrm{int}\, r_jQ') + c_{\ell'}P(S_{\ell'}^\delta; \mathrm{int}\, r_jQ') \\ \notag &\leq \mathcal{F}(\mathcal{S})= \mathcal{F}(\mathcal{S}^\delta;\mathbb{R}^2\setminus r_jQ') + r_j c_{\ell_0}P(A; \mathrm{int}\, Q') + r_j c_{\ell'}P(B; \mathrm{int}\, Q')\,, \end{align} where in the last equality we have used the trace condition on $\mathcal{A}_{Q'}$ and the formula \eqref{trace difference} for computing $\mathcal{F}(\cdot;\partial Q')$. Discarding identical terms and rescaling according to the identity $P(E;\mathrm{int}\, r_jQ')=r_j\,P(E/r_j;\mathrm{int}\, Q')$, this inequality yields \begin{align}\label{minimality on Q'}
c_{\ell_0}P(S_{\ell_0}^\delta/r_j; \mathrm{int}\, Q') + c_{\ell'}P(S_{\ell'}^\delta/r_j; \mathrm{int}\, Q') \leq c_{\ell_0}P(A; \mathrm{int}\, Q') + c_{\ell'}P(B; \mathrm{int}\, Q')\,, \end{align} where $(A,B)\in \mathcal{A}_{Q'}$ is arbitrary. Simply put, our minimal partition must be minimal on $r_j Q'$ among competitors with the same traces and equal areas of all chambers. \par We now test \eqref{minimality on Q'} with a well-chosen competitor based on symmetrization. Let \begin{align}\notag
A=\{(x_1,x_2):-1\leq x_2 \leq \mathcal{H}^1((S_{\ell_0}^\delta/r_j)_{x_1})-1\} \,,\quad B= \{(x_1,x_2): 1\geq x_2 \geq 1-\mathcal{H}^1((S_{\ell'}^\delta/r_j)_{x_1})\}\,. \end{align} In the notation set forth in Lemma \ref{symmetrization lemma}, \begin{align}\notag
A= (S_{\ell_0}^\delta/r_j)^h\,,\quad B= -(-S_{\ell'}^\delta/r_j)^h\,. \end{align} By \eqref{left and right traces} and \eqref{boundary trapped}, the assumptions of Lemma \ref{symmetrization lemma} are satisfied by $S_{\ell_0}^\delta/r_j$ and $ - S_{\ell'}^\delta/r_j$. Then the conclusions of that lemma imply that $(A,B) \in \mathcal{A}_{Q'}$, so \eqref{minimality on Q'} holds.
However, \eqref{symmetrization inequality} also gives \begin{align}\label{reverse minimality on Q'}
c_{\ell_0}P(S_{\ell_0}^\delta/r_j; \mathrm{int}\, Q') + c_{\ell'}P(-S_{\ell'}^\delta/r_j; \mathrm{int}\, Q') \geq c_{\ell_0}P(A; \mathrm{int}\, Q') + c_{\ell'}P(-B; \mathrm{int}\, Q')\,, \end{align} so that in fact there is equality. But according to Lemma \ref{symmetrization lemma}, every vertical slice of $(S_{\ell_0}^\delta/r_j)\cap Q'$ and $(-S_{\ell'}^\delta/r_j)\cap Q'$ must therefore be an interval with one endpoint at $-1$. This is precisely what we set out to prove in this step. \par
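As an aside on the derivation of \eqref{minimality on Q'}: the ``rescaling'' used there is simply the one-homogeneity of perimeter under dilations of the plane. A minimal sketch, in the notation of this step:

```latex
% One-homogeneity of perimeter under dilations of R^2: for a set of finite
% perimeter E and a scale r > 0,
\begin{align*}
P(E/r;\, \mathrm{int}\, Q') \;=\; \frac{1}{r}\, P\big(E;\, \mathrm{int}\, rQ'\big)\,.
\end{align*}
% Applying this with E = S_{\ell_0}^\delta, S_{\ell'}^\delta and r = r_j,
% discarding the terms common to both sides of the energy comparison above,
% and dividing by r_j yields exactly \eqref{minimality on Q'}.
```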
\noindent\textit{Step two}: Here we prove that for the open set $G^{\delta}$ (see Remark \ref{lebesgue representative remark}), the set \begin{align}\notag
\mathcal{I}:=\{t\in [r_jt_1/2,r_jt_2/2]: (G^\delta \cap r_jQ')_t = \emptyset \} \end{align} is a closed interval. $\mathcal{I}$ is closed since the projection of the open set $G^\delta \cap r_j Q'$ onto the $x_1$-axis is open, so we only need to prove that it is an interval. First, we claim that for any rectangle $R'=(T_1,T_2)\times [-r_j,r_j]$ with $(T_1,T_2)\subset \mathcal{I}^c$, \begin{align}\label{graphicality claim} \mbox{$\partial S_{\ell_0}^\delta \cap R'$ and $\partial S_{\ell'}^\delta \cap R'$ are graphs of functions $F_0$ and $F'$} \end{align} with $F_0<F'$, over the $x_1$-axis, of constant curvature and with no vertical tangent lines in $R'$. To see this, first note that for any $(a,b)\subset\!\subset (T_1,T_2)$, $ \partial S_{\ell_0}^\delta \cap ((a,b) \times [-r_j,r_j])$ and $\partial S_{\ell'}^\delta \cap ((a,b) \times [-r_j,r_j])$ must be at positive distance from each other by the definition of $\mathcal{I}^c$. Then a first variation argument implies that each has constant mean curvature in the distributional sense, and a graph over $(a,b)$ with constant distributional mean curvature must be a single arc of constant curvature with no vertical tangent lines in the interior. Letting $(a,b)$ exhaust $(T_1,T_2)$, we have proven the claim. \par Suppose for contradiction that there exist $T_i\in \mathcal{I}$, $i=1,2$, such that $(T_1,T_2)\subset \mathcal{I}^c$. Set $R=(T_1,T_2)\times [-r_j,r_j]$. Now $F_0$ and $F'$ extend continuously to $T_1$ and $T_2$ with $F_0(T_i) \leq F'(T_i)$ for each $i$. In fact $F_0(T_i)=F'(T_i)$. If instead we had for example $F_0(T_1)<F'(T_1)$, then $G^\delta$ would contain a rectangle $(t,T_1)\times (c,d)$ for some $t<T_1$ and $c<d$, which would imply that $G^\delta$ has positive density at $(T_1, F_0(T_1))$ and $(T_1,F'(T_1))$. 
By Corollary \ref{no termination corollary}, $\partial G^\delta \cap \partial S_{\ell_0}^\delta$ is a single arc of constant curvature in a neighborhood $N$ of $(T_1,F_0(T_1))$, which, since $T_1\in \mathcal{I}$, has a vertical tangent line at $(T_1,F_0(T_1))$. Therefore, $ \partial S_{\ell_0}^\delta \cap N \cap R$ is either a vertical segment or a circle arc with a vertical tangent line at $(T_1,F_0(T_1))$, and both of these scenarios contradict \eqref{graphicality claim}. So we have $F_0(T_i)=F'(T_i)$, and thus $(T_i,F_0(T_i))\in \partial S_{\ell_0}^\delta \cap \partial S_{\ell'}^\delta \cap \partial G^\delta$. As a consequence, by Corollary \ref{no termination corollary}, $G^\delta$ must have density $0$ at $(T_i,F_0(T_i))$, which means that the graphs of $F_0$ and $F'$ meet tangentially at $T_i$. But the only way for two circle arcs to meet tangentially at two common points is if they are the same arc, that is $F_0=F'$, a contradiction of $F_0<F'$. We have thus shown that $\mathcal{I}$ is a closed interval.
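The elementary planar fact invoked at the end of the preceding argument (two circle arcs tangent at two common points coincide) can be verified directly; a minimal sketch, with the degenerate case of straight segments being analogous:

```latex
% Suppose two circles with centers c_1, c_2 and radii \rho_1, \rho_2 are
% tangent at two distinct points p and q. Tangency at p means the tangent
% lines agree there, so the radii p - c_1 and p - c_2 are parallel and
% c_1, c_2, p are collinear; likewise c_1, c_2, q are collinear. If
% c_1 \neq c_2, then p and q both lie on the line L through the centers.
% But circle i meets L in exactly the antipodal pair c_i \pm \rho_i u
% (u a unit vector along L), whose midpoint is c_i, so
\begin{align*}
c_1 \;=\; \tfrac{1}{2}(p+q) \;=\; c_2\,,
\end{align*}
% a contradiction. Hence c_1 = c_2, and two concentric circles sharing the
% point p have equal radii, i.e. they are the same circle.
```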
\par
\noindent\textit{Step three}: Finally we may finish the proof. We note that by our assumption $0 \in \partial S_{\ell'}^\delta \cap \partial S_{\ell_0}^\delta$ (see the beginning of step one), $0\in \mathcal{I}$. Now if $0\in \mathrm{int}\, \mathcal{I}$, then $|G^\delta \cap B_{r}(0)|=0$ for some small $r$, and we have $(ii)$. If $\{0\}=\mathcal{I}$, then by the same argument as at the beginning of the previous step, we know that $\partial S_{\ell_0}^\delta/r_j \cap (Q' \setminus \{0\})$ and $\partial S_{\ell'}^\delta/r_j \cap (Q'\setminus \{0\})$ each consist of two circle arcs of equal curvature meeting at the origin. Furthermore, since the blow-up of $G^\delta$ is empty at $0$, we see that all four of these arcs must meet tangentially at the origin, so that $(iii)$ holds. Lastly, if $\mathrm{int}\,\mathcal{I}\neq \emptyset$ and $0\in \partial \mathcal{I}$, the combined arguments of the previous two cases imply that $(iv)$ holds.\end{proof}
\begin{theorem}[Boundary Resolution for $\delta>0$]\label{boundary resolution theorem for positive delta}
If $\mathcal{S}^\delta$ minimizes $\mathcal{F}$ among $\mathcal{A}_\delta^h$ for some $\delta>0$ and $x\in \partial S_{\ell_0}^\delta \cap \partial B$, then there exists $r_x>0$ such that exactly one of the following is true: \begin{enumerate}[label=(\roman*)] \item $x$ is not a jump point of $h$ and $B_{r_x}(x) \cap B = B_{r_x}(x) \cap S_{\ell_0}^\delta$; \item $x$ is a jump point of $h$ and $\partial S_{\ell_0}^\delta \cap B_{r_x}(x)$ is a line segment separating $B_{r_x}(x) \cap B$ into $S_{\ell_0}^\delta \cap B_{r_x}(x) \cap B$ and $S_{\ell'}^\delta \cap B_{r_x}(x) \cap B$ for some $\ell'\neq \ell_0$; \item $x$ is a jump point of $h$, and there exist circle arcs $a_1$ and $a_2$ meeting at $x$ such that \begin{align}\notag
\partial S_{\ell_0}^\delta\cap \partial G^\delta \cap B_{r_x}(x)= a_1\,,\quad \partial S_{\ell'}^\delta\cap \partial G^\delta \cap B_{r_x}(x)= a_2\,, \quad \partial S_{\ell_0}^\delta \cap \partial S_{\ell'}^\delta \cap B_{r_x}(x)=\{x\}\,. \end{align}
\end{enumerate} \end{theorem}
\begin{proof}
Let us assume for simplicity that $x=\vec{e}_1$. The proof at any other point in $\partial B$ is the same. \par
\noindent\textit{Step zero}: If $\vec{e}_1\in \partial B$ is not a jump point of $h$, then by the inclusion \eqref{circular segment containment} from Theorem \ref{existence theorem}, $(i)$ holds. \par
\noindent\textit{Step one}: For the rest of the proof, we assume that $\vec{e}_1$ is a jump point of $h$. By Theorem \ref{positive delta classification theorem}, there exists $\ell'\neq \ell_0$ such that any blow-up at $\vec{e}_1$ consists of the blow-up chambers $S_{\ell_0}$, $S_{\ell'}$, each of which is the intersection of a halfspace with $\{y:y\cdot \vec{e}_1<0\}$, $S_0=\{y:y\cdot \vec{e}_1>0\}$, and $G = \mathbb{R}^2\setminus (S_0\cup S_{\ell_0}\cup S_{\ell'})$ is a possibly empty connected cone contained in $\{y:y\cdot \vec{e}_1<0\}$. In this step we argue that on a small rectangle $Q'$ with $0\in \partial Q'$, $( S_{\ell_0}^\delta-\vec{e}_1)/r_j \cap Q'$ and $( S_{\ell'}^\delta - \vec{e}_1)/r_j \cap Q'$ are the hypograph and epigraph, respectively, of two functions over $\{y\cdot \vec{e}_1=0\}$. \par Let us choose $r_j \to 0$ such that by Theorem \ref{positive delta classification theorem}, we have a blow-up limit belonging to $\mathcal{A}_{\vec{e}_1}$. By the density estimates \eqref{area density equation}, $B_{r_j}(x) \subset S_{\ell_0}^\delta \cup S_{\ell'}^\delta \cup G^\delta \cup S_0$ for all large enough $j$, so we can ignore the other chambers. Also, for convenience, by the containment \eqref{circular segment containment} of the circular segments in $S_{\ell_0}^\delta$ and $S_{\ell'}^\delta$ from Theorem \ref{existence theorem}, we extend $S_{\ell_0}^\delta$ and $S_{\ell'}^\delta$ on $\{y:y\cdot \vec{e}_1<1\}$ so that for all large $j$, $$ \{y:y\cdot \vec{e}_1 = 1\}\cap B_{r_j}(\vec{e}_1) \subset \partial S_{\ell_0}^\delta \cup \partial S_{\ell'}^\delta $$ rather than $$ \partial B\cap B_{r_j}(\vec{e}_1) \subset \partial S_{\ell_0}^\delta \cup \partial S_{\ell'}^\delta\,; $$ this allows us to work on a rectangle along the sequence of blow-ups rather than $(B - \vec{e}_1)/r_j$. Now due to the inclusion \eqref{circular segment containment} from Theorem \ref{existence theorem}, there exists a rectangle \begin{align}\notag
Q = [T,0]\times [-1,1] \end{align} such that for all large $j$, up to interchanging the labels $\ell_0$ and $\ell'$, in the trace sense, \begin{align}\notag
(\{T\}\times [-1, -1/2]) \cup ([T,0]\times \{-1\}) \cup (\{0\}\times [-1,0]) &\subset ( S_{\ell_0}^\delta - \vec{e}_1)/r_j\,, \\ \notag
(\{T\}\times [1/2, 1]) \cup ([T,0]\times \{1\}) \cup (\{0\}\times [0,1]) &\subset ( S_{\ell'}^\delta - \vec{e}_1)/r_j\,. \end{align} Then a slicing argument similar to the one leading to \eqref{left and right traces} implies that for some large $j$, there exist $-1/2\leq a_1\leq a_2 \leq 1/2$ and $t\in [T, 0)$ such that, in the trace sense, \begin{align}\notag
(\{t\}\times [-1, a_1]) \cup ([t,0]\times \{-1\}) \cup (\{0\}\times [-1,0]) &= ( S_{\ell_0}^\delta - \vec{e}_1)/r_j \\ \notag
(\{t\}\times [a_2, 1]) \cup ([t,0]\times \{1\}) \cup (\{0\}\times [0,1]) &= ( S_{\ell'}^\delta - \vec{e}_1)/r_j\,. \end{align} Given this explicit description on the boundary of $Q':=[t,0]\times [-1,1]$, the same argument as in the proof of Theorem \ref{interior resolution theorem delta positive} gives the claim of this step. \par
\noindent\textit{Step two}: We may finally finish the proof of Theorem \ref{boundary resolution theorem for positive delta}. By the same argument as in the previous theorem, the set \begin{align}\notag
\mathcal{I}:=\{s\in [t,0]: ((G^{\delta}-\vec{e}_1)/r_j \cap Q')_s=\emptyset \} \end{align} is a closed interval. Furthermore, since $\vec{e}_1$ is a jump point of $h$, $\mathcal{I}$ contains $0$. If $\mathrm{int}\, \mathcal{I} \neq \emptyset$, we immediately see that $(ii)$ holds. On the other hand, if $\mathcal{I}=\{0\}$, then the vertical slices of $(G^\delta-\vec{e}_1)/r_j \cap Q'$ are non-empty for all $s\in (t,0)$. Again the same argument as in the previous theorem shows that $(iii)$ holds. \end{proof}
\begin{proof}[Proof of Theorem \ref{main regularity theorem delta positive}] At any $x\in \mathrm{cl}\, B$, Theorems \ref{interior resolution theorem delta positive} and \ref{boundary resolution theorem for positive delta} yield the existence of $r_x>0$ such that either $x$ is an interior point of some $S_\ell^\delta$ or of $G^\delta$, or the minimizer is described on $B_{r_x}(x)$ by one of the options listed in those theorems. By the enumeration of possible local resolutions in those theorems, we see that $\partial S_\ell^\delta \cap B$ is $C^{1,1}$ as desired, since it is analytic except where two arcs of constant curvature intersect tangentially. Now if $x$ and $y$ are both in $\partial G^\delta \cap \partial S_\ell^\delta$ for some $\ell\geq 1$, then one of Theorem \ref{interior resolution theorem delta positive}.$(i)$, $(iii)$, or $(iv)$ or Theorem \ref{boundary resolution theorem for positive delta}.$(iii)$ holds on $B_{r_x}(x)$ and $B_{r_y}(y)$; in particular, each of $\partial G^\delta \cap \partial S_\ell^\delta \cap B_{r_x}(x)$ and $\partial G^\delta \cap \partial S_\ell^\delta \cap B_{r_y}(y)$ is an arc of constant curvature. A first variation argument then gives \eqref{curvature condition} if $G^\delta \neq \emptyset$. Also, by the compactness of $\mathrm{cl}\, B$ and the interior resolution theorem, there are only finitely many arcs in $\partial G^\delta \cap \partial S_\ell^\delta$. We note that $H_{S_\ell^\delta}$ cannot be negative along $\partial^* S_\ell^\delta \cap \partial^* G^\delta$, since local variations which decrease the area of $G^\delta$ are admissible. A similar argument based on the interior local resolution result implies that if $\mathcal{H}^1(\partial S_\ell^\delta \cap \partial S_m^\delta)>0$ for $\ell,m\geq 1$, then $\partial S_{\ell}^\delta \cap \partial S_{m}^\delta$ is composed of finitely many straight line segments. 
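For the reader's convenience, here is a minimal sketch of the standard first-variation computation behind the constancy of the curvature along arcs of $\partial G^\delta \cap \partial S_\ell^\delta$ (the compatibility between different arcs is the content of \eqref{curvature condition}); we assume for simplicity that the arc is smooth:

```latex
% Let \nu be the outer unit normal of S_\ell^\delta along an arc
% \gamma \subset \partial S_\ell^\delta \cap \partial G^\delta, and deform
% \gamma with normal velocity \zeta\nu, \zeta \in C_c^\infty(\gamma). Then,
% with H_{S_\ell^\delta} computed with respect to \nu,
\begin{align*}
\frac{d}{dt}\Big|_{t=0} \mathcal{H}^1(\gamma_t)
  \;=\; \int_\gamma H_{S_\ell^\delta}\,\zeta \, d\mathcal{H}^1\,,
\qquad
\frac{d}{dt}\Big|_{t=0} \big|G^\delta_t\big|
  \;=\; -\int_\gamma \zeta \, d\mathcal{H}^1\,.
\end{align*}
% Stationarity of the energy subject to the area constraint on G^\delta
% therefore forces H_{S_\ell^\delta} to equal a constant (a Lagrange
% multiplier) along each such arc.
```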
We have thus decomposed each such $\partial S_\ell^\delta \cap \partial S_m^\delta$ and $\partial G^\delta \cap \partial S_\ell^\delta$ into finitely many line segments and arcs of constant curvature, respectively. \par Moving on to showing that each connected component, say $C$, of $S_\ell^\delta$ for $1\leq \ell \leq N$ is convex, consider any $x\in \partial C$. By Theorems \ref{interior resolution theorem delta positive} and \ref{boundary resolution theorem for positive delta} and the fact that $H_{S_\ell^\delta}\geq 0$ along $\partial^* S_\ell^\delta \cap \partial^* G^\delta$, $C \cap B_{r_x}(x)$ must be convex. Since $\partial C$ consists of a finite number of segments and circular arcs and $C$ is connected, the convexity of $C$ follows from this local convexity. To finish proving the theorem, it remains to determine the ways in which these line segments and arcs may terminate. We note that each component of $\partial G^\delta$ must terminate. If one did not, then by Corollary \ref{no termination corollary}, it would form a circle contained in $\partial S_\ell^\delta \cap \partial G^\delta$. This configuration cannot be minimal, however, since that component of $G^\delta$ could be added to $S_\ell^\delta$ to decrease the energy. Suppose then that one of these components terminates at $x$. By applying the local resolution at $x$, either Theorem \ref{interior resolution theorem delta positive}.$(iv)$ holds if $x\in B$, or item $(ii)$ or $(iii)$ from Theorem \ref{boundary resolution theorem for positive delta} holds, in which case $x\in \partial B$ is a jump point of $h$. This yields the desired conclusion.\end{proof}
\begin{proof}[Proof of Theorem \ref{main regularity theorem delta positive all of space}]
The proof is similar to the proof of Theorem \ref{main regularity theorem delta positive}. Since every interface point is an interior interface point, determining the ways in which arcs may terminate proceeds as in the case $x\in B$ in that theorem.
\end{proof}
\section{Proof of Theorem \ref{main regularity theorem delta zero}}\label{sec:proof of delta 0 theorem}
\begin{proof}[Proof of Theorem \ref{main regularity theorem delta zero}]
\textit{Step one}: First, we show that the set $\Sigma$ of interior triple junctions, or more precisely the set \begin{align}\notag
\Sigma := \{x\in B : \textup{$\exists$ a blow-up at $x$ given by $(ii)$ from Theorem \ref{zero delta classification theorem}}\} \end{align} does not have any accumulation points in $\mathrm{cl}\, B$. Suppose for contradiction that $\{x_k\}$ is a sequence of such points accumulating at $x\in \mathrm{cl}\, B$. By restricting to a subsequence and relabeling the chambers, we can assume that the three chambers in the blow-ups at each $x_k$ are $S^0_\ell$ for $1\leq \ell \leq 3$. In both cases $x\in B$ and $x\in \partial B$, the argument is similar (and follows classical arguments, e.g. \cite[Theorem 30.7]{Mag12}), so we consider only the case where $x\in \partial B$. If $\{x_k\}\subset \Sigma$ and $x_k \to x\in \partial B$, then by \eqref{circular segment containment}, $x\in \partial B$ is a jump point of $h$. We claim that up to a subsequence which we do not notate, \begin{align}\label{local L1 blowup conv}
\frac{S_\ell^0-x}{|x-x_k|} \to S_{\ell}\quad\textup{ locally in $L^1$ for $\ell=1,2,3$} \end{align} for a blow-up cluster $\mathcal{S}$ of the form from item $(v)$ in Theorem \ref{zero delta classification theorem}. To see this, we first note that by our assumption on $x_k$, \begin{align}\label{in all three boundaries}
x\in \partial S_1^0 \cap \partial S_2^0 \cap \partial S_3^0\,. \end{align} This inclusion rules out item $(iv)$ from Theorem \ref{zero delta classification theorem}, and so the blow-up cluster is three connected cones partitioning $\{y:y\cdot x<0\}$. Up to a further subsequence, we may assume that \begin{align}\notag
\frac{x_k - x}{|x_k - x|}\to \nu \in \{y:y\cdot x < 0\}\,, \end{align}
where we have used \eqref{circular segment containment} to preclude the possibility that $x_k$ approaches $x$ tangentially. Now for some $r>0$ and some $\ell_0\in \{1,2,3\}$, say $\ell_0=1$, the description of the blow-up cluster implies that $B_r(\nu) \subset S_2\cup S_3$. Combined with the $L^1$ convergence \eqref{local L1 blowup conv} and the infiltration lemma, we conclude that $B_{|x_k-x|r/4}(x_k)\subset S_2^0 \cup S_3^0$ for large enough $k$, which is in direct conflict with $x_k \in \Sigma$. We have thus proven that $\Sigma$ has no accumulation points in $\mathrm{cl}\, B$; in particular, it is finite. \par
\noindent\textit{Step two}: We finally conclude the proof of Theorem \ref{main regularity theorem delta zero}. For any $x\in (B\setminus \Sigma) \cap \partial S_{\ell_0}^0$, Theorem \ref{zero delta classification theorem} and the infiltration lemma imply that $x\in \partial^* S_{\ell_0}^0 \cap \partial^* S_{\ell_1}^0$ for some $\ell_1\neq \ell_0$. In turn, by Corollary \ref{reduced boundary regularity delta 0}, there exists $r_x>0$ such that $B_{r_x}(x) \cap \partial S_{\ell_0}^0$ is a diameter of $B_{r_x}(x)$. Recalling from Corollary \ref{density corollary} that $\mathcal{H}^1(\partial S_{\ell_0}^0 \setminus \partial^* S_{\ell_0}^0)=0$, we may thus decompose $\partial S_{\ell_0}^0$ into countably many line segments, each of which must terminate at a point in the finite set $\Sigma$ or at a jump point of $h$. Therefore, $\partial S_{\ell_0}^0$ consists of finitely many line segments. The remainder of Theorem \ref{main regularity theorem delta zero} now follows directly from this fact and the classification of blow-ups in Theorem \ref{zero delta classification theorem}, items $(ii)$, $(iv)$, and $(v)$. Indeed, since the interfaces are a finite number of line segments, at $x\in \Sigma$ or $x\in \partial B$ which is a jump point of $h$, the blow-up is unique, and the minimal partition $\mathcal{S}^0$ must coincide with the blow-up on a neighborhood of $x$. The convexity of connected components of $S_\ell^0$ for $1\leq \ell \leq N$ follows as in the $\delta>0$ case. \end{proof}
\section{Resolution for small $\delta$ on the ball}\label{sec:resolution for small delta}
\begin{proof}[Proof of Theorem \ref{resolution for small delta corollary}] \textit{Step zero}: We begin by reducing the statement of the theorem to one phrased in terms of a sequence of minimizers $\{\mathcal{S}^{\delta_j}\}$. More precisely, to prove Theorem \ref{resolution for small delta corollary}, we claim it is enough to consider a sequence $\{\mathcal{S}^{\delta_j}\}$ of minimizers for $\delta_j\to 0$ and show that up to a subsequence, there exists a minimizer $\mathcal{S}^0$ among $\mathcal{A}_{0}^h$ with singular set $\Sigma$ such that \begin{align}\label{hausdorff convergence of chambers in proof}
\max \big\{ \sup_{x\in S_\ell^{\delta_j}} {\rm dist} (x,S_\ell^0)\,,\,\sup_{x\in S_\ell^{0}} {\rm dist} (x,S_\ell^{\delta_j}) \big\} &\to 0\quad \textit{for }1\leq \ell \leq N \\
\label{hausdorff convergence of remnants in proof}
\max \big\{ \sup_{x\in G^{\delta_j}} {\rm dist} (x,\Sigma)\,,\,\sup_{x\in \Sigma} {\rm dist} (x,G^{\delta_j}) \big\} &\to 0 \,, \end{align}
and, for some $r>0$, all large enough $j$, and each $x\in \Sigma$, $B_{r}(x) \cap \partial G^{\delta_j}$ consists of three circle arcs of curvature $\kappa_{j}$, with total area $|G^{\delta_j}|=\delta_j$. To see why this is sufficient, note that if Theorem \ref{resolution for small delta corollary} were false, then there would be some sequence $\delta_j\to 0$ with minimizers $\mathcal{S}^{\delta_j}$ among $\mathcal{A}_{\delta_j}^h$ such that for any subsequence and choice of minimizer $\mathcal{S}^0$ among $\mathcal{A}_0^h$, at least one of \eqref{hausdorff convergence of chambers in corollary}-\eqref{hausdorff convergence of remnants in corollary} or the asymptotic resolution near singularities of $\mathcal{S}^0$ did not hold.
But this would contradict the subsequential claim above.
\par We point out that if we knew that $\partial G^\delta$ is described near singularities by three circle arcs for small $\delta$, the saturation of the area inequality $|G^\delta|\leq \delta$ follows from the facts that $\partial G^\delta$ has negative mean curvature away from its cusps and increasing the area of $G^\delta$ is admissible if $|G^\delta|<\delta$. Therefore, the rest of the proof is divided into steps proving \eqref{hausdorff convergence of chambers in proof}-\eqref{hausdorff convergence of remnants in proof} and the asymptotic resolution near singular points. First, we prove that due to $c_\ell=1$ for $1\leq \ell \leq N$, there are no ``islands'' inside $B$. Second, we extract a minimizer $\mathcal{S}^0$ for $\mathcal{A}_0^h$ from a minimizing (sub-)sequence $\mathcal{S}^{\delta_j}$ with $\delta_j \to 0$ and prove \eqref{hausdorff convergence of chambers in proof}. There are then two cases. In the first, we suppose that the set of triple junctions $\Sigma$ is empty and show that $G^{\delta_j}=\emptyset$ for large $j$, so that \eqref{hausdorff convergence of remnants in proof} is trivial. In the other case, we assume that $\Sigma \neq \emptyset$ and prove \eqref{hausdorff convergence of remnants in proof} and the final resolution near singularities of the limiting cluster. \par
\noindent\textit{Step one}: Let $\mathcal{S}^\delta$ be a minimizer for $\delta>0$. We claim that for any connected component $C$ of any chamber $S_\ell^\delta$ with $1\leq \ell \leq N$, $\partial C \cap \{h=\ell\}\neq \emptyset$. Suppose that this were not the case for some $C\subset S_\ell^\delta$. Then in fact, $\mathrm{cl}\, C \subset B$, since by Theorem \ref{boundary resolution theorem for positive delta} and \eqref{circular segment containment}, the only components that can intersect $\partial B$ are those bordering $\partial B$ along an arc where $h=\ell$. By Theorem \ref{interior resolution theorem delta positive}, $\partial C$ is $C^{1,1}$ since it is contained in $B$. If $\mathcal{H}^1(\partial C \cap \partial S_{\ell'}^\delta)>0$ for some $\ell'$, then since all $c_\ell$ are equal, removing $C$ from $S_\ell^\delta$ and adding it to $S_{\ell'}^\delta$ contradicts the minimality of $\mathcal{S}^\delta$. So it must be the case that $\partial C \subset \partial G^\delta$ except for possibly finitely many points. We translate $C$ if necessary until it intersects $\partial G^\delta \cap \partial C'$ for a connected component $C'\neq C$ of some $S_{\ell'}^\delta$ at $y\in B$, which does not increase the energy. Creating a new minimal cluster $\tilde{\mathcal{S}}$ by adding $C$ to $S_{\ell'}^\delta$ and removing it from $S_\ell^\delta$ gives a contradiction. This is because by Corollary \ref{density corollary}, $y\in (\tilde{S}_{\ell'}^\delta)^{(1)}$ implies that $y\in\mathrm{int}\,(\tilde{S}_{\ell'}^\delta)^{(1)}$, and so $\mathcal{F}(\tilde{\mathcal{S}};B_r(y))=0$ for some $r>0$, against the minimality of $\mathcal{S}^\delta$.
\par We note that as a consequence, the total number of connected components in $\mathcal{S}^\delta$ is bounded in terms of the number of jumps of $h$, and in addition the area of any connected component is bounded from below by the area of the smallest circular segment from \eqref{circular segments}. \par
\noindent\textit{Step two}: Here we identify our subsequence, limiting minimizer among $\mathcal{A}_0^h$, and prove \eqref{hausdorff convergence of chambers in proof}. Let us decompose each $S_{\ell}^{\delta_j}$ into its open connected components \begin{align}
S_\ell^{\delta_j} = \cup_{i=1}^{N_\ell^{j}} C_i^{\ell,j}\,, \end{align}
where by the previous step, $N_\ell^j\leq N_\ell(h)$ for all $j$ and $|C_i^{\ell,j}|\geq C(h)$ for all $j$ and $i$. Up to a subsequence which we do not notate, we may assume therefore that for each $1\leq \ell\leq N$, \begin{align}\label{number and area bound equation}
N_\ell^j = M_\ell\leq N_\ell(h)\quad \textup{and}\quad |C_i^{\ell,j}|\geq C(h)\quad \forall j \quad\textup{and} \quad i\in \{1,\dots, M_\ell\}\,. \end{align} Since \begin{align}\label{ordering of infimums} \min_{\mathcal{A}_{\delta_j}^h}\mathcal{F} \leq \min_{\mathcal{A}_{0}^h}\mathcal{F}\quad \forall j\,, \end{align} up to a further subsequence, the compactness for sets of finite perimeter and \eqref{number and area bound equation} yield a partition $\{C_i^\ell\}_{\ell,i}$ of $B$, with no trivial elements thanks to \eqref{number and area bound equation}, such that \begin{align}\label{almost everywhere convergence}
\mathbf{1}_{C_i^{\ell,j}} &\to \mathbf{1}_{C_i^\ell}\quad\textup{a.e. }\quad\textup{and} \\ \label{lsc delta to 0}
\liminf_{j\to \infty}\mathcal{F}(\mathcal{S}^{\delta_j};B)= \liminf_{j\to \infty}&\sum_{\ell=1}^N\sum_{i=1}^{M_\ell} P(C_i^{\ell,j};B) \geq \sum_{\ell=1}^N\sum_{i=1}^{M_\ell} P(C_i^{\ell};B)\,. \end{align} Actually, by Lemma \ref{convex sets convergence lemma}, we may assume that each $\mathrm{cl}\, C_i^\ell$ is compact and convex, $C_i^\ell$ is open, and, for each $1\leq \ell\leq N$, \begin{align}\label{hausdorff convergence of components delta to 0}
\max \big\{ \sup_{x\in C_i^\ell} {\rm dist} (x,C_{i}^{\ell,j})\,,\,\sup_{x\in C_i^{\ell,j}} {\rm dist} (x,C_i^\ell) \big\} \to 0 \quad \forall 1\leq i \leq M_\ell\,. \end{align} We claim that the cluster \begin{align}\notag \mathcal{S}^0=(\mathbb{R}^2\setminus B,S_1^0,\dots,S_N^0, \emptyset)=\Big(\mathbb{R}^2\setminus B,\bigcup_{i=1}^{M_1}C_i^1,\dots,\bigcup_{i=1}^{M_N}C_i^N,\emptyset\Big) \end{align} of $B$ is minimal for $\mathcal{F}$ on $\mathcal{A}_0^h$. It belongs to $\mathcal{A}_0^h$ by the inclusion \eqref{circular segment containment} for each $j$ and by $\delta_j\to 0$. For minimality, we use \eqref{ordering of infimums} and \eqref{lsc delta to 0} to write \begin{align}\label{minimality of thing}
\min_{\mathcal{S}\in\mathcal{A}_0^h}\mathcal{F}(\mathcal{S};B) \geq \sum_{\ell=1}^N\liminf_{j\to \infty}\sum_{i=1}^{M_\ell} P(C_i^{\ell,j};B) \geq \sum_{\ell=1}^N\sum_{i=1}^{M_\ell} P(C_i^{\ell};B) \geq \sum_{\ell=1}^N P(S_\ell^0;B)\,. \end{align} This proves $\mathcal{S}^0$ is minimal. The Hausdorff convergence \eqref{hausdorff convergence of chambers in proof} follows from \eqref{hausdorff convergence of components delta to 0}. \par We note that by the minimality of $\mathcal{S}^0$, \eqref{minimality of thing} must be an equality, so that in turn \begin{align}\label{perimeter equality}
\sum_{i=1}^{M_\ell} P(C_i^{\ell};B) = P(S_\ell^0;B)\quad\forall 1\leq \ell\leq N\,. \end{align} Now each $C_i^\ell$ is open and convex; in particular, they are all indecomposable sets of finite perimeter. This indecomposability and \eqref{perimeter equality} allow us to conclude from \cite[Theorem 1]{AMMN01} that $\{C_i^\ell\}_{i}$ is the unique decomposition of $S_\ell^0$ into pairwise disjoint indecomposable sets such that \eqref{perimeter equality} holds. Also, by Theorem \ref{main regularity theorem delta zero}, each $(S_\ell^0)^{(1)}$ is an open set whose boundary is smooth away from finitely many points. By \cite[Theorem 2]{AMMN01}, which states that for an open set with $\mathcal{H}^1$-equivalent topological and measure theoretic boundaries (e.g.\ $(S_\ell^0)^{(1)}$) the decompositions into open connected components and maximal indecomposable components coincide, we conclude that the connected components of $(S_\ell^0)^{(1)}$ are $\{C_i^\ell\}_{i=1}^{M_\ell}$, and $S_\ell^0=(S_\ell^0)^{(1)}$. We have in fact shown in \eqref{hausdorff convergence of components delta to 0} that the individual connected components of $S_\ell^{\delta_j}$ converge in the Hausdorff sense to the connected components of $S_\ell^0$ for each $\ell$. \par
\noindent\textit{Step three}: In this step, we suppose that $\Sigma=\emptyset$ and show that $G^{\delta_j}=\emptyset$ for large $j$, which finishes the proof in this case. If $\Sigma=\emptyset$, then every component of $\partial S_\ell^0 \cap \partial S_{\ell'}^0$ is a segment which, by Theorem \ref{main regularity theorem delta zero}, can only terminate at a pair of jump points of $h$ which are not boundary triple junctions. Therefore, every connected component of a chamber $S_\ell^0$ is the convex hull of some finite number of arcs on $\partial B$ contained in $\{h=\ell\}$. Now for large $j$, by the Hausdorff convergence in step two and the containment \eqref{circular segment containment}, given any connected component $C$ of a chamber of $\mathcal{S}^{\delta_j}$ there exists a connected component $C'$ of a chamber of $\mathcal{S}^0$ such that $\partial C \cap \partial B = \partial C' \cap \partial B$. Since every connected component of every chamber is convex for $\delta \geq 0$, we see that in fact $C=C'$. So the minimal partition $\mathcal{S}^{\delta_j}$ coincides with $\mathcal{S}^0$ for all large $j$ when there are no triple junctions of $\mathcal{S}^0$. \par
\noindent\textit{Step four}: For the rest of the proof, we assume that $\Sigma \neq \emptyset$. In this step, we show that \begin{align}\label{diverging curvatures} G^{\delta_j}\neq \emptyset \quad\textup{for all $j$}\quad\textup{and}\quad \kappa_{j}\to \infty\,. \end{align} Assume for contradiction that $G^{\delta_j}= \emptyset$ for some $j$. Then $\mathcal{S}^{\delta_j}$ is minimal for $\mathcal{F}$ among $\mathcal{A}_0^h$, so $\mathcal{F}(\mathcal{S}^{\delta_j})= \mathcal{F}(\mathcal{S}^0)$ and $\mathcal{S}^0$ is minimal among $\mathcal{A}_{\delta_j}^h$, too. But this is impossible, since $\Sigma\neq \emptyset$ and Theorem \ref{main regularity theorem delta positive} precludes the presence of interior or boundary triple junctions for minimizers when $\delta>0$. Moving on to showing that $ \kappa_{j}\to \infty$, we fix $y\in \Sigma$. Let us assume that $y\in \partial B$ is a jump point of $h$ between $h=1$ and $h=2$ with $S_3^0$ being the third chamber in the triple junction, since the case when $y\in B$ is easier. By the containment \eqref{circular segment containment} of the neighboring circular segments in $S_1^{\delta_j}$ and $S_2^{\delta_j}$, there exists a small $r>0$ such that for all $j$ and $3\leq \ell \leq N$, $\partial S_\ell^{\delta_j} \cap B_r(y) \subset B$. In particular, $\partial S_3^{\delta_j} \cap B_r(y)$ is $C^{1,1}$ by Theorem \ref{main regularity theorem delta positive}. Furthermore, since $S_3^{\delta_j}$ converges as $j\to \infty$ to a set with a corner in $B_r(y)$, the $C^{1,1}$ norms of $\partial S_3^{\delta_j}$ must be blowing up on that ball. These norms are controlled in terms of $\kappa_j$, and so $\kappa_{j}\to \infty$. \par
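The last implication (convergence to a corner forces the curvatures to diverge) can be sketched as follows, assuming the stated control of the $C^{1,1}$ norms in terms of $\kappa_j$:

```latex
% If \kappa_j \le M along a subsequence, then the unit tangent vectors
% \tau_j of the curves \partial S_3^{\delta_j} \cap B_r(y) are Lipschitz
% in arclength with constant at most M,
\begin{align*}
|\tau_j(s_1) - \tau_j(s_2)| \;\leq\; M\,|s_1 - s_2|\,,
\end{align*}
% so by Arzela--Ascoli a further subsequence of these curves converges in
% C^1 to a C^{1,1} limit curve, which has a tangent line at every point.
% This is incompatible with the Hausdorff convergence of S_3^{\delta_j} to
% S_3^0, whose boundary has a corner in B_r(y). Hence \kappa_j \to \infty.
```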
\noindent\textit{Step five}: In the next two steps, we prove \eqref{hausdorff convergence of remnants in proof}. Here we show that \begin{align}\label{first half of hausdorff convergence}
\sup_{x\in G^{\delta_j}} {\rm dist} (x, \Sigma) \to 0\,. \end{align} Suppose for contradiction that \eqref{first half of hausdorff convergence} did not hold. Then, up to a subsequence, we could choose $r>0$ and $y_j \in \mathrm{cl}\,\, G^{\delta_j}$ such that $$ y_j \to y\in \mathrm{cl}\, B \setminus \cup_{z\in \Sigma}B_r(z)\,. $$ Let us assume that $y=\vec{e}_1\in \partial B$; we will point out the difference in the $y\in B$ argument when the moment arises. We note that $y$ must be a jump point of $h$, say between $h=1$ and $h=2$, due to \eqref{circular segment containment}. Furthermore, by Theorem \ref{main regularity theorem delta zero} and $y\notin \Sigma$, there exists $r'>0$ such that $$ B_{r'}(y) \cap B \subset \mathrm{cl}\,\, S_1^0 \cup \mathrm{cl}\,\, S_2^0\,. $$ In particular, ${\rm dist}(y,S_\ell^0)>r'/2$ for $3\leq \ell \leq N$. Therefore, due to \eqref{hausdorff convergence of chambers in proof}, ${\rm dist} (y, S_\ell^{\delta_j})\geq r'/2$ for large enough $j$. Also by \eqref{circular segment containment} applied to $S_1^{\delta_j}$ and $S_2^{\delta_j}$ and the convexity of connected components of those sets, we may choose small $\varepsilon_1$ and $\varepsilon_2$ such that on the rectangle \begin{align*}
R = [1-\varepsilon_1,1]\times [-\varepsilon_2,\varepsilon_2] \subset B_{r'/2}(y)\,, \end{align*} $\partial S_1^{\delta_j} \cap R \cap B$ and $\partial S_2^{\delta_j}\cap R \cap B$ are graphs of functions $f_1^j$ and $f_2^j$ over the $\vec{e}_1$-axis for all $j$. Relabeling if necessary, we may take \begin{align}\notag
-\varepsilon_2 \leq f_1^j \leq f_2^j \leq \varepsilon_2\quad \textup{and}\quad (f_1^j)''\leq 0\,,\,\, (f_2^j)'' \geq 0\,. \end{align} It is at this point that in the case $y\in B$, we instead appeal to the Hausdorff convergence \eqref{hausdorff convergence of chambers in proof} and the convexity of the components of $S_\ell^{\delta_j}$ to conclude that graphicality holds. Now the set \begin{align}\notag
\mathcal{I}_j=\{t\in [1-\varepsilon_1,1]:f_1^j=f_2^j\} \end{align} is a non-empty interval by the convexity of connected components of the chambers and the fact that $f_1^j(1)=0=f_2^j(1)$. In addition, for each $i=1,2$ and large $j$, \begin{align}\notag
\textup{$f_i^j([1-\varepsilon_1,1]\setminus \mathcal{I}_j )$ is a graph of constant curvature $\kappa_{j}$} \end{align} since $f_1^j<f_2^j$ implies that $(t,f_i^j(t))\in \partial G^{\delta_j}$. Since a graph of constant curvature $\kappa_j$ can be defined over an interval of length at most $2\kappa_j^{-1}$ and $\kappa_j\to \infty$, we deduce that $\mathcal{H}^1(\mathcal{I}_j)\to \varepsilon_1$. Since $1\in \mathcal{I}_j$ for all $j$ and $G_j \cap \mathrm{int}\,\mathcal{I}_j \times [-\varepsilon_2,\varepsilon_2]=\emptyset$, we conclude that $G^{\delta_j}$ stays at positive distance from $y=\vec{e}_1$, which is a contradiction. We have thus proved \eqref{first half of hausdorff convergence}. \par
\mathcal{I}_j=\{t\in [1-\varepsilon_1,1]:f_1^j=f_2^j\} \end{align} is a non-empty interval by the convexity of connected components of the chambers and the fact that $f_1^j(1)=0=f_2^j(1)$. In addition, for each $i=1,2$ and large $j$, \begin{align}\notag
\textup{the graph of $f_i^j$ over $[1-\varepsilon_1,1]\setminus \mathcal{I}_j$ has constant curvature $\kappa_{j}$} \end{align} since $f_1^j(t)<f_2^j(t)$ implies that $(t,f_i^j(t))\in \partial G^{\delta_j}$. Since a graph of constant curvature $\kappa_j$ can be defined over an interval of length at most $2\kappa_j^{-1}$ and $\kappa_j\to \infty$, we deduce that $\mathcal{H}^1(\mathcal{I}_j)\to \varepsilon_1$. Since $1\in \mathcal{I}_j$ for all $j$ and $G^{\delta_j} \cap \left(\mathrm{int}\,\mathcal{I}_j \times [-\varepsilon_2,\varepsilon_2]\right)=\emptyset$, we conclude that $G^{\delta_j}$ stays at positive distance from $y=\vec{e}_1$, which is a contradiction. We have thus proved \eqref{first half of hausdorff convergence}. \par
\noindent\textit{Step six}: In this step, we prove the other half of \eqref{hausdorff convergence of remnants in proof}, namely \begin{align}\label{second half of hausdorff convergence}
\sup_{x\in \Sigma} {\rm dist} (x, G^{\delta_j}) \to 0\,. \end{align} Fix $x\in \Sigma$, say a triple junction between $S_1^0$, $S_2^0$, and $S_3^0$. By \eqref{hausdorff convergence of chambers in proof} and the definition of $\Sigma$, there exists $r_0>0$ such that given $r<r_0$, there exists $J(r)$ such that \begin{align}\label{ball sees all three} B_r(x) \cap S_\ell^{\delta_j}\neq \emptyset\quad\textup{ for $\ell=1,2,3$ and $j\geq J(r)$}\,. \end{align} Furthermore, by decreasing $r_0$ if necessary when $x\in \partial B \cap \Sigma$ is a jump point of $h$, the boundary condition \eqref{trace constraint} and absence of triple junctions for $\delta>0$ allow us to choose $1\leq \ell\leq 3$ such that \begin{align}\label{no boundary on boundary 2} \partial S_\ell^{\delta_j} \cap \partial B \cap B_{r_0}(x)=\emptyset\quad\textup{for all $j$}\,. \end{align} Now \eqref{ball sees all three} and \eqref{no boundary on boundary 2} imply that $\partial S_\ell^{\delta_j} \cap B_r(x) \subset B$ and is also non-empty for $j\geq J(r)$. Since Theorem \ref{main regularity theorem delta positive} implies that line segments in $\partial S_\ell^{\delta_j}$ can only terminate inside $B$ at interior cusp points in $\partial G^{\delta_j}$ and $S_\ell^{\delta_j}\cap B_{r}(x)$ converges to a sector with angle strictly less than $\pi$, we find that $G^{\delta_j}\cap B_r(x)\neq \emptyset$ for all $j\geq J(r)$. Letting $r\to 0$ gives \eqref{second half of hausdorff convergence}. \par
\noindent\textit{Step seven}: Finally, under the assumption that $\Sigma=\{x_1,\dots, x_P\}\neq \emptyset$, we show that for large enough $j$, $G^{\delta_j}$ consists of $P$ connected components, each of which is determined by three circle arcs contained in $\partial S_{\ell_i}^{\delta_j}\cap \partial G^{\delta_j}$ for the three indices $\ell_i$, $i=1,2,3$, meeting at the corresponding triple junction. We fix $x\in \Sigma$ which is a triple junction between the first three chambers, so there is some $B_{2r}(x)$ such that $B_{2r}(x)\cap S_\ell^0$ consists of exactly one connected component $C_\ell$ of $S_\ell^0$ for each $1\leq \ell \leq 3$ (and $S_\ell^0 \cap B_{2r}(x)=\emptyset$ for $\ell\geq 4$). Up to decreasing $r$, we may also assume that \begin{align}\label{no other sigma}
( \Sigma\setminus \{x\} )\cap \mathrm{cl}\, B_{2r}(x)=\emptyset\,. \end{align} Recalling from step two (see \eqref{hausdorff convergence of components delta to 0} and the last paragraph) that the connected components of $S_\ell^{\delta_j}$ converge in the Hausdorff sense to those of $S_\ell^0$, for $j$ large enough, we must have \begin{align}\label{one component in the ball}
B_r(x) \cap S_\ell^{\delta_j} = B_r(x) \cap C_\ell^j \neq \emptyset\quad 1\leq \ell \leq 3 \end{align} for a single connected component $C_\ell^j$, and, due to \eqref{hausdorff convergence of remnants in proof} and \eqref{no other sigma}, \begin{align}\label{G only near x}
\mathrm{cl}\, G^{\delta_j} \cap \mathrm{cl}\, B_r(x) \subset B_{r/4}(x)\,. \end{align} Now $\partial G^{\delta_j} \cap B_r(x)$ consists of finitely many circle arcs and has negative mean curvature (with respect to the outward normal $\nu_{G^{\delta_j}}$) along these arcs away from cusps. We claim that for $j$ large, there are precisely three such arcs, one bordering each $S_\ell^{\delta_j}$ for $1\leq \ell \leq 3$ and together bounding one connected component of $G^{\delta_j}$. There must be at least three arcs, since an open set bounded by two circle arcs has corners rather than cusps. To finish the proof, it suffices to show that there cannot be more than two distinct arcs belonging to $\partial G^{\delta_j} \cap \partial S_\ell^{\delta_j} \cap B_{r/4}(x)$ for a single $\ell\in \{1,2,3\}$. If there were, then $\partial S_\ell^{\delta_j}\cap B_r(x)$ would contain at least three distinct segments, because with only two, each of which has one endpoint outside of $B_{r}(x)$ according to \eqref{one component in the ball}-\eqref{G only near x}, one cannot resolve three cusp points as dictated by Theorem \ref{main regularity theorem delta positive}. As a consequence, there exists $\ell'\neq \ell$ such that up to a subsequence, for large $j$, there are two distinct segments, $L_1$ and $L_2$, both belonging to $\partial S_\ell^{\delta_j} \cap \partial S_{\ell'}^{\delta_j} \cap B_r(x)$ and separated by at least one circle arc. It is therefore the case that $L_1$ and $L_2$ are not collinear. Also by \eqref{one component in the ball}, there is only a single convex component $C_{\ell'}^{j}$ of $S_{\ell'}^{\delta_j}$ containing $S_{\ell'}^{\delta_j}\cap B_r(x)$. Therefore, $L_1 \cup L_2\subset \partial C_{\ell}^j \cap \partial C_{\ell'}^j$. But this is impossible: since a planar convex set lies on one side of any tangent line, $\partial C_{\ell}^j$ and $\partial C_{\ell'}^j$ cannot share two non-collinear segments.\end{proof}
\end{document}
\begin{document}
\title{Asymptotically Optimal Knockoff Statistics via the Masked Likelihood Ratio}
\begin{abstract}
This paper introduces a class of asymptotically most powerful knockoff statistics based on a simple principle: that we should prioritize variables in order of our ability to distinguish them from their knockoffs. Our contribution is threefold. First, we argue that feature statistics should estimate ``oracle masked likelihood ratios," which are Neyman-Pearson statistics for discriminating between features and knockoffs using partially observed (masked) data. Second, we introduce the masked likelihood ratio (MLR) statistic, a knockoff statistic that estimates the oracle MLR. We show that MLR statistics are asymptotically average-case optimal, i.e., they maximize the expected number of discoveries made by knockoffs when averaging over a user-specified prior on unknown parameters. Our optimality result places no explicit restrictions on the problem dimensions or the unknown relationship between the response and covariates; instead, we assume a ``local dependence" condition which depends only on simple quantities that can be calculated from the data. Third, in simulations and three real data applications, we show that MLR statistics outperform state-of-the-art feature statistics, including in settings where the prior is highly misspecified. We implement MLR statistics in the open-source python package \texttt{knockpy}; our implementation is often (although not always) faster than computing a cross-validated lasso. \end{abstract}
\section{Introduction}
Given a design matrix $\bX = (\bX_1, \dots, \bX_p) \in \R^{n \times p}$ and a response vector $\by \in \R^n$, the task of \textit{controlled feature selection} is, informally, to discover features that influence $\by$ while controlling the false discovery rate (FDR). Knockoffs \citep{fxknock, mxknockoffs2018} is a powerful method for performing this statistical task. Informally, knockoffs are fake variables $\widetilde{\mathbf{X}} \in \R^{n \times p}$ which act as negative controls for the features $\bX$. Remarkably, employing knockoff variables allows analysts to use nearly any machine learning model or test statistic, often known as a ``feature statistic" or ``knockoff statistic," to discover important features while controlling the FDR in finite samples. As a result, knockoffs has become quite popular in a wide variety of settings, including analysis of genetic studies, financial data, clinical trials, and more \citep{genehunting2017, knockoffzoom2019, challet2021, sechidis2021predictiveknockoffs}.
The flexibility of knockoffs has inspired the development of a wide variety of feature statistics, based on penalized regression coefficients, sparse Bayesian models, random forests, neural networks, and more (see, e.g., \cite{fxknock, mxknockoffs2018, knockoffsmass2018, deeppink2018}). These feature statistics not only reflect different modeling assumptions, but more fundamentally, they estimate different quantities, including coefficient sizes, Bayesian posterior inclusion probabilities, and various other measures of variable importance. Yet to our knowledge, there has been relatively little theoretical comparison of these methods, in large part because analyzing the power of knockoffs can be very technically challenging (see Section \ref{subsec::literature} for discussion). Our work aims to fill this gap: in particular, we consider the question of designing provably optimal knockoff statistics.
\subsection{Contribution and overview of results}\label{subsec::contribution}
This paper develops a class of feature statistics called \textit{masked likelihood ratio (MLR)} statistics which are asymptotically optimal, computationally efficient, and powerful in a variety of practical settings. In particular, our contribution is threefold.\footnote{For brevity, in this section, we use the notation of model-X knockoffs to outline our results: analogous results for fixed-X knockoffs are available throughout the paper and appendix.}
\textbf{1. Conceptual contribution: selecting the estimand.} Existing knockoff feature statistics measure many different proxies for variable importance, ranging from regression coefficients to posterior inclusion probabilities. We argue that in place of estimating variable importances, knockoff statistics should instead estimate quantities (or estimands) known as ``masked likelihood ratios," which are statistics that optimally distinguish between a feature $\bX_j$ and its knockoff $\widetilde{\mathbf{X}}_j$. In contrast, other knockoff methods suffer by incorrectly prioritizing features $\bX_j$ that are predictive of $\by$ but are nearly indistinguishable from their knockoffs $\widetilde{\mathbf{X}}_j$.
In particular, we reformulate knockoffs as a guessing game on \textit{masked data} $D = (\by, \{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p)$. After observing $D$, the analyst must (i) guess the value of $\bX_j$ from the unordered pair $\{\bX_j, \widetilde{\mathbf{X}}_j\}$ and (ii) arbitrarily reorder the features. The analyst earns the right to make $k$ discoveries if approximately $k$ of their first $(1+q)k$ guesses are correct (according to the order they specify), where $q$ is the FDR level. We show that to maximize the expected number of discoveries, an asymptotically optimal strategy is to (i) guess the value of $\bX_j$ using a Neyman-Pearson statistic and (ii) order the features by the probability that each guess is correct. In the traditional language of knockoffs, this corresponds to using the Neyman-Pearson test statistic for testing $H_0 : \bX_j = \widetilde{\mathbf{x}}_j$ against $H_a : \bX_j = \bx_j$ as the feature statistic, where $\bx_j$ (resp. $\widetilde{\mathbf{x}}_j$) is the observed value of the $j$th feature (resp. knockoff) vector. We refer to this as the \textit{oracle masked likelihood ratio}, shown below:
\begin{equation}\label{eq::oraclemlr}
\mathrm{MLR}_j^{\mathrm{oracle}} \defeq \log\left(\frac{L_{\theta}(\bX_j = \bx_j \mid D)}{L_{\theta}(\bX_j = \widetilde{\mathbf{x}}_j \mid D)}\right). \end{equation} Above, $L_{\theta}(\bX_j =\bx \mid D)$ is the likelihood of observing $\bX_j = \bx \in \R^n$ conditional on the masked data $D$, and $\theta$ are any unknown parameters, such as coefficients in a generalized linear model (GLM) or parameters in a random forest or neural network. To aid intuition, observe that swapping $\bx_j$ and $\widetilde{\mathbf{x}}_j$ flips the sign of $\mathrm{MLR}_j^{\mathrm{oracle}}$, and thus $\mathrm{MLR}_j^{\mathrm{oracle}}$ is a valid knockoff statistic (as reviewed in Section \ref{subsec::knockreview}). We also note that \cite{katsevichmx2020} previously argued the \textit{unmasked} likelihood had certain (weaker) optimality properties: we reconcile these results in Sections \ref{subsec::literature} and \ref{subsec::mlr}.
\textbf{2. Theoretical contribution: optimality of masked likelihood ratio (MLR) statistics}. The exact optimal knockoff statistics (\ref{eq::oraclemlr}) depend on $\theta$, which is unknown by assumption. Thus, we settle for statistics which are average-case optimal over a user-specified prior $\pi$ on the unknown parameters $\theta$. Marginalizing over $\theta$ yields the \textit{masked likelihood ratio (MLR) statistic}: \begin{equation}\label{eq::mlr}
W_j\opt \defeq \log\left(\frac{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_j = \bx_j \mid D)\right]}{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_j = \widetilde{\mathbf{x}}_j \mid D)\right]}\right). \end{equation}
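In practice, the two prior expectations in this ratio can be approximated by Monte Carlo. The following is a minimal, hypothetical numpy sketch (function names ours; this is not the \texttt{knockpy} implementation): it assumes the analyst can evaluate the masked log-likelihood $\log L_{\theta}(\bX_j = \bx \mid D)$ at prior draws $\theta_1,\dots,\theta_M \sim \pi$, and combines the draws with a log-sum-exp trick for numerical stability.

```python
import numpy as np

def mlr_statistic(loglik_feature, loglik_knockoff):
    """Monte Carlo estimate of
        W_j = log E_pi[L_theta(X_j = x_j | D)] - log E_pi[L_theta(X_j = x_tilde_j | D)].

    loglik_feature[m] and loglik_knockoff[m] are the masked log-likelihoods
    log L_theta(X_j = x_j | D) and log L_theta(X_j = x_tilde_j | D),
    each evaluated at the m-th prior draw theta_m ~ pi.
    """
    def log_mean_exp(a):
        a = np.asarray(a, dtype=float)
        m = np.max(a)  # shift by the max for numerical stability
        return m + np.log(np.mean(np.exp(a - m)))

    return log_mean_exp(loglik_feature) - log_mean_exp(loglik_knockoff)
```

Note that swapping the two arguments flips the sign of the output, mirroring the sign-flip property that valid knockoff statistics must satisfy.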
Our main theoretical result (Theorem \ref{thm::avgopt}) shows that MLR statistics are asymptotically average-case optimal over $\pi$, i.e., they maximize the expected number of discoveries. The proof is involved, but our result allows for arbitrarily high-dimensional asymptotic regimes and allows the likelihood to take any form---in particular, we do not assume $\by \mid \bX$ follows a linear model. Instead, the key assumption we make is that the signs of the MLR statistics satisfy a local dependency condition, similar in flavor to dependency conditions assumed on p-values in the multiple testing literature \citep{genovese2004, storey2004, ferreira2006, farcomeni2007}. Crucially, our local dependency condition only depends on simple quantities which one can compute from the data, so it can be roughly diagnosed using the data at hand.
\textbf{3. Methodological contribution: practical and powerful MLR statistics.} Our third contribution is to demonstrate via simulations and three real data analyses that MLR statistics are powerful in practical settings. {In particular, our theory only shows that MLR statistics are \textit{average-case} optimal over a prior on the distribution of $y \mid X$ (although the analyst may choose any prior), so we aim to show that MLR statistics can be powerful even when the prior is highly misspecified}.\footnote{Of course, knockoffs provably control the FDR even when the prior is misspecified.} \begin{itemize}[topsep=0pt]
\item We develop concrete instantiations of MLR statistics based on (relatively) uninformative priors, including versions developed for use in Gaussian linear models, generalized additive models, and binary generalized linear models. We also develop efficient software in python to fit MLR feature statistics; in many cases, our implementation is faster than fitting a cross-validated lasso.
\item In extensive empirical studies, we show that MLR statistics outperform other state-of-the-art feature statistics, often by wide margins, including in settings where the prior is highly misspecified. Indeed, MLR statistics often nearly match the performance of the oracle procedure in Equation (\ref{eq::oraclemlr}), showing that the misspecified prior has little effect. Furthermore, in settings where $\by$ has a highly nonlinear relationship with $\bX$, MLR statistics also outperform feature statistics based on black-box machine learning algorithms.
\item We replicate three knockoff-based analyses of drug resistance \citep{fxknock}, financial factor selection \citep{challet2021}, and graphical model discovery for gene expression data \citep{nodewiseknock}, and we show that MLR statistics (using an uninformative prior) make up to an order of magnitude more discoveries than lasso-based competitors. \end{itemize} Overall, our results suggest that MLR statistics can substantially increase the power of knockoffs.
\subsection{Notation}
Let $\bX \in \R^{n \times p}$ and $\by \in \R^{n}$ denote the design matrix and response vector in a feature selection problem with $n$ data points and $p$ features. In settings where the design matrix is random, we let $\bx_1,\dots, \bx_p \in \R^{n}$ denote the observed values of the columns of the design matrix. For convenience, we let the non-bold versions $X = (X_1, \dots, X_p) \in \R^p$ and $Y \in \R$ denote the features and response for an arbitrary single observation. For any integer $k$, define $[k] \defeq \{1, \dots, k\}$. For any matrix $M \in \R^{m \times k}$ and any $J \subset [k]$, let $M_J$ denote the columns of $M$ corresponding to the indices in $J$. Similarly, $M_{-J}$ denotes the columns of $M$ which do not appear in $J$, and let $M_{-j}$ denote all columns except column $j \in [k]$. For any matrices $M_1 \in \R^{n \times k_1}, M_2 \in \R^{n \times k_2}$, we define $[M_1, M_2] \in \R^{n \times (k_1 + k_2)}$ to be the column-wise concatenation of $M_1, M_2$. Let $[M_1, M_2]_{\swap{j}}$ denote the matrix $[M_1, M_2]$ but with the $j$th column of $M_1$ and $M_2$ swapped: similarly, $[M_1, M_2]_{\swap{J}}$ swaps all columns $j \in J$ of $M_1$ and $M_2$. Let $I_n$ denote the $n \times n$ identity. Furthermore, for any vector $x \in \R^n$, we let $\bar x_k \defeq \frac{1}{k} \sum_{i=1}^k x_i$ be the sample mean of the first $k$ elements of $x$. Additionally, for $x \in \R^n$ and a permutation $\kappa : [n] \to [n]$, $\kappa(x)$ denotes the coordinates of $x$ permuted according to $\kappa$, so that $\kappa(x)_i = x_{\kappa(i)}$.
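To make the swap notation concrete, here is a small numpy sketch (function name ours) of the operator $[M_1, M_2]_{\swap{J}}$:

```python
import numpy as np

def swap(M1, M2, J):
    """Return [M1, M2]_{swap(J)}: the column-wise concatenation of M1 and M2
    with the columns indexed by J exchanged between the two blocks."""
    J = list(J)
    A, B = M1.copy(), M2.copy()
    A[:, J] = M2[:, J]
    B[:, J] = M1[:, J]
    return np.hstack([A, B])
```

For example, with $p = 2$, swapping column $0$ of $M_1 = (1\ 2)$ and $M_2 = (3\ 4)$ yields $(3\ 2\ 1\ 4)$.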
\subsection{Review of knockoffs}\label{subsec::knockreview}
We start with a review of model-X (MX) knockoffs \citep{mxknockoffs2018}, which tests the hypotheses $H_j : \bX_j \Perp \by \mid \bX_{-j}$ assuming only that the data are i.i.d. and that the distribution of $\bX$ is known. Applying MX knockoffs typically requires three steps.
\textit{Step 1: constructing knockoffs.} Valid MX knockoffs $\widetilde{\mathbf{X}}$ must satisfy two properties. First, the columns of $\bX$ and $\widetilde{\mathbf{X}}$ must be \textit{pairwise exchangeable} in the sense that $[\bX, \widetilde{\mathbf{X}}]_{\swap{j}} \disteq [\bX, \widetilde{\mathbf{X}}]$ for each $j \in [p]$. Second, we require that $\widetilde{\mathbf{X}} \Perp \by \mid \bX$, which holds, e.g., if one constructs $\widetilde{\mathbf{X}}$ without looking at $\by$. Informally, these constraints guarantee that $\bX_j, \widetilde{\mathbf{X}}_j$ are ``indistinguishable" under $H_j$. Sampling knockoffs can be challenging, but this problem is well studied (see, e.g., \cite{metro2019}).
\textit{Step 2: fitting feature statistics.} Next, one can use almost any machine learning algorithm to fit feature importances $Z = z([\bX, \widetilde{\mathbf{X}}], \by) \in \R^{2p}$, where $Z_j$ and $Z_{j+p}$ heuristically measure the ``importance" of $\bX_j$ and $\widetilde{\mathbf{X}}_j$ in predicting $Y$. The only restriction on $z$ is that swapping the positions of $\bX_j$ and $\widetilde{\mathbf{X}}_j$ must also swap the feature importances $Z_j$ and $Z_{j+p}$, and swapping the positions of $\bX_i$ and $\widetilde{\mathbf{X}}_i$ does not change $Z_j$ or $Z_{j+p}$ for $i \ne j$. This restriction is satisfied by most machine learning algorithms, such as the lasso or various neural networks \citep{deeppink2018}.
Given $Z$, we define the \textit{feature statistics} $W = w([\bX, \widetilde{\mathbf{X}}], \by) \in \R^p$ via $W_j = f(Z_j, Z_{j+p})$, where $f$ is any ``antisymmetric" function, meaning $f(x,y) = - f(y,x)$. For example, the lasso coefficient difference (LCD) statistic sets $W_j = |Z_j| - |Z_{j+p}|$, where $Z_j$ and $Z_{j+p}$ are coefficients from a lasso fit on $[\bX, \widetilde{\mathbf{X}}], \by$. Intuitively, when $W_j$ has a positive sign, this suggests that $\bX_j$ is more important than $\widetilde{\mathbf{X}}_j$ in predicting $\by$ and thus is evidence against the null. Indeed, Steps 1 and 2 guarantee that the signs of the null $W_j$ are distributed as i.i.d. fair coin flips.
\textit{Step 3: make rejections.} Define the data-dependent threshold $T \defeq \inf\left\{t > 0: \frac{\#\{j : W_j \le -t \} + 1}{\#\{j : W_j \ge t \} \vee 1} \le q \right\}$.\footnote{By convention we define the infimum of the empty set to be $\infty$.} Then, reject $\hat S = \{j : W_j \ge T\}$, which guarantees finite-sample FDR control at level $q \in (0,1)$. It is important to note that knockoffs only makes rejections when the feature statistics with the largest absolute values are consistently positive.
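The rejection step can be sketched in a few lines of numpy (a minimal illustration, not the \texttt{knockpy} implementation; as is conventional, the denominator is floored at one, so the threshold condition fails whenever no statistic exceeds $t$):

```python
import numpy as np

def knockoff_threshold(W, q):
    """Data-dependent threshold T = inf{t > 0 : (1 + #{j : W_j <= -t}) / (#{j : W_j >= t} v 1) <= q}.

    Candidate thresholds are the absolute values of the nonzero statistics;
    returns np.inf if no candidate achieves the bound (infimum of the empty set).
    """
    W = np.asarray(W, dtype=float)
    for t in np.sort(np.abs(W[W != 0])):
        num_neg = np.sum(W <= -t) + 1
        num_pos = max(np.sum(W >= t), 1)
        if num_neg / num_pos <= q:
            return t
    return np.inf

def knockoff_rejections(W, q):
    """Rejection set S = {j : W_j >= T}, guaranteeing FDR control at level q."""
    return np.where(np.asarray(W) >= knockoff_threshold(W, q))[0]
```

Running this on a toy vector with many large positive statistics, e.g. `W = [5, 4, 3, 2, 1, -0.5]` with `q = 0.5`, rejects the five positive coordinates, while a vector of only negative statistics yields no rejections.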
Lastly, we also briefly describe fixed-X knockoffs \citep{fxknock}, which does not require knowledge of the distribution of $X$ and controls the FDR assuming the Gaussian linear model $\by \mid \bX \sim \mcN(\bX \beta, \sigma^2 I_n)$. In particular, FX knockoffs do not need to satisfy the constraints in Step 1: instead, $\widetilde{\mathbf{X}}$ must satisfy (i) $\widetilde{\mathbf{X}}^T \widetilde{\mathbf{X}} = \bX^T \bX$ and (ii) $\widetilde{\mathbf{X}}^T \bX = \bX^T \bX - S$, for some diagonal matrix $S$ satisfying $2 \bX^T \bX \succ S$. The other difference between FX and MX knockoffs is that the feature importances $Z$ can depend on $\by$ only through $[\bX, \widetilde{\mathbf{X}}]^T \by$, which permits the use of many machine learning models, but not all (for example, it prohibits the use of cross-validation). Other than these differences, fitting FX knockoffs uses the same three steps outlined above.
\subsection{Related literature}\label{subsec::literature}
The knockoffs literature contains a vast number of proposed feature statistics, which can (roughly) be separated into three categories. First, some of the most common feature statistics are those based on penalized regression coefficients, notably the lasso signed maximum (LSM) and lasso coefficient difference (LCD) statistics of \cite{fxknock}. Indeed, these lasso-based statistics are often used in applied work \citep{knockoffzoom2019} and have received a great deal of theoretical attention \citep{weinstein2017, rank2017, ke2020, weinstein2020power, wang2021power}. Perhaps surprisingly, we argue in this paper that many of these statistics are estimating the wrong quantity, leading to substantially reduced power. Second, some previous works have introduced Bayesian knockoff statistics (see, e.g., \cite{mxknockoffs2018, RenAdaptiveKnockoffs2020}). MLR statistics have a Bayesian flavor, but they take a different form than previous statistics. Furthermore, our motivation differs from those of previous works: the real innovation of MLR statistics is to attempt to estimate a (masked) likelihood ratio, and we only use a Bayesian framework to appropriately quantify our uncertainty about nuisance parameters, as discussed in Section \ref{subsec::mlr}. In contrast, previous works largely motivated Bayesian statistics as a way to incorporate prior information \citep{mxknockoffs2018} or side information about which features are non-null \citep{RenAdaptiveKnockoffs2020}. That said, it is worth noting that an important special case of MLR statistics is similar to the ``BVS" statistics from \cite{mxknockoffs2018}, as we discuss in Section \ref{subsec::computation}. Third and finally, many feature statistics take advantage of ``black-box" machine learning methods to assign variable importances (see, e.g., \cite{deeppink2018, knockoffsmass2018}). 
Although MLR statistics are most easily computable when the analyst specifies a parametric model, our implementation of MLR statistics based on regression splines performs favorably compared to ``black-box" feature statistics in Section \ref{sec::sims}.
Previous analyses of the power of knockoffs have largely focused on showing that coefficient-difference feature statistics are consistent under regularity conditions on $\bX$ \citep{liurigollet2019, rank2017, mrcknock2022} or explicitly quantifying the power of coefficient-difference feature statistics assuming $\bX$ has i.i.d. Gaussian entries \citep{weinstein2017, weinstein2020power, wang2021power}. \cite{ke2020} also derive a phase diagram for LCD statistics assuming $\bX$ is blockwise orthogonal with a blocksize of $2$. Our work has a different goal, which is to show that MLR statistics are asymptotically optimal, with particular focus on nontrivial settings where the asymptotic power of MLR statistics is strictly between $0$ and $1$. Furthermore, our analysis places no explicit restrictions on the conditional distribution of $\by \mid \bX$, the design matrix $\bX$, or the dimensionality of the problem, in contrast to the aforementioned works, which exclusively focus on the case where $\by \mid \bX$ follows a Gaussian linear model. Instead, the key assumption we make is that the signs of the MLR statistics satisfy a local dependency condition, similar to dependency conditions assumed on p-values in the multiple testing literature \citep{genovese2004, storey2004, ferreira2006, farcomeni2007}. However, our proof technique is novel and specific to knockoffs.
Our theory is close in spirit to that of \cite{whiteout2021}, who developed knockoff$\star$, an oracle statistic for FX knockoffs which provably maximizes power in finite samples. Indeed, the oracle masked likelihood ratio statistics are equivalent to knockoff$\star$ when the conditional distribution of $\by \mid \bX$ follows a Gaussian linear model and $\widetilde{\mathbf{X}}$ are fixed-X knockoffs. To our knowledge, the only other work which attempts to prove optimality guarantees for knockoff statistics is \cite{katsevichmx2020}, who showed that using the likelihood as the feature statistic maximizes $\P(W_j > 0)$ against a point alternative. We see our work as building on that of \cite{katsevichmx2020}, as MLR statistics also have this property (averaging over the prior), although we go farther and show that MLR statistics maximize the number of discoveries of the overall knockoffs procedure. Another key difference is that unmasked likelihood statistics technically violate the pairwise exchangeability condition required of knockoff statistics (see Appendix \ref{appendix::unmasked_lr}), whereas working with the masked likelihood guarantees FDR control. Lastly, we note that the oracle procedures derived in \cite{whiteout2021} and \cite{katsevichmx2020} cannot be used in practice since they depend on unknown parameters---to our knowledge, MLR statistics are the first usable knockoff statistics with explicit (if asymptotic) optimality guarantees on their power.
\subsection{Outline}
The rest of the paper proceeds as follows. In Section \ref{sec::mlr}, we prove an equivalent formulation of the knockoffs methodology, introduce MLR statistics, and present the main theoretical results of the paper. We also give concrete suggestions on the choice of prior and discuss how to compute MLR statistics efficiently. In Section \ref{sec::sims}, we present simulations comparing the power of MLR statistics to common feature statistics from the literature. In Section \ref{sec::data}, we apply MLR statistics to three real datasets previously analyzed using knockoffs. Finally, Section \ref{sec::discussion} concludes with a discussion of future directions.
\section{Introducing masked likelihood ratio statistics}\label{sec::mlr}
\subsection{A motivating example from an HIV drug resistance dataset}\label{subsec::badlasso}
Before proving our core results, we first explain intuitively why standard lasso-based feature statistics are sub-optimal, using the HIV drug resistance dataset from \cite{rhee2006data} as an illustrative example. This dataset has been used as a benchmark in several papers about knockoffs, e.g., \cite{fxknock, deepknockoffs2018}, and we will perform a more complete analysis of this dataset in Section \ref{sec::data}. For now, we note only that the design matrix $\bX$ consists of genotype data from $n \approx 750$ HIV samples, the response $\by$ measures the resistance of each sample to a drug (in this case Indinavir), and we apply knockoffs to discover which genetic variants affect drug resistance---note our analysis exactly mimics that of \cite{fxknock}. As notation, let $(\hat \beta^{(\lambda)}, \tilde{\beta}^{(\lambda)}) \in \R^{2p}$ denote the estimated lasso coefficients fit on $[\bX, \widetilde{\mathbf{X}}]$ and $\by$ with regularization parameter $\lambda$. Furthermore, let $\hat \lambda_j$ (resp. $\tilde{\lambda}_j$) denote the largest value of $\lambda$ such that $\hat \beta^{(\lambda)}_j \ne 0$ (resp. $\tilde{\beta}^{(\lambda)}_j \ne 0$), i.e., the point at which the variable enters the lasso path. Then the LCD and LSM statistics are defined as: \begin{equation}\label{eq::lassodef}
W_j^{\mathrm{LCD}} = |\hat\beta_j^{(\lambda)}| - |\tilde{\beta}_j^{(\lambda)}|, \,\,\,\,\,\,\,\,\,\, W_j^{\mathrm{LSM}} = \sign(\hat \lambda_j - \tilde{\lambda}_j) \max(\hat \lambda_j, \tilde{\lambda}_j). \end{equation}
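Both statistics are straightforward to compute once the lasso outputs are in hand; a minimal numpy sketch (function names ours; the coefficient and entry-$\lambda$ vectors are assumed to come from a lasso fit on $[\bX, \widetilde{\mathbf{X}}]$ and $\by$):

```python
import numpy as np

def lcd(beta_hat, beta_tilde):
    """Lasso coefficient difference: W_j = |beta_hat_j| - |beta_tilde_j|,
    where beta_hat / beta_tilde are the fitted lasso coefficients of the
    features and their knockoffs at a fixed regularization level."""
    return np.abs(beta_hat) - np.abs(beta_tilde)

def lsm(lam_hat, lam_tilde):
    """Lasso signed maximum:
    W_j = sign(lam_hat_j - lam_tilde_j) * max(lam_hat_j, lam_tilde_j),
    where lam_hat_j (resp. lam_tilde_j) is the lambda at which feature j
    (resp. its knockoff) enters the lasso path."""
    return np.sign(lam_hat - lam_tilde) * np.maximum(lam_hat, lam_tilde)
```

Both functions are antisymmetric in the required sense: exchanging a feature's inputs with its knockoff's inputs flips the sign of $W_j$.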
\begin{figure}
\caption{We plot the first $50$ LCD, LSM, and MLR feature statistics sorted in descending order of absolute value when applied to the HIV drug resistance dataset for the drug Indinavir (IDV). The data-dependent threshold is shown by the black line. This figure shows that when $\bX$ is correlated, LCD and LSM statistics make few discoveries because they occasionally yield highly negative $W$-statistics for highly predictive variables which have low-quality knockoffs, such as the ``P90.M" variant, as described in Section \ref{subsec::badlasso}. In contrast, MLR statistics deprioritize the P90.M variant. For visualization, we apply a monotone transformation to $\{W_j\}$ such that $|W_j| \le 1$, which (provably) does not change the performance of knockoffs. See Appendix \ref{appendix::realdata} for further details and corresponding plots for the other fifteen drugs in the dataset.}
\label{fig::wstatplot}
\end{figure}
As a thought experiment, imagine that we observe a covariate $\bX_j$ which appears to significantly influence $\by$: however, due to high correlations within $\bX$, we must create a knockoff $\widetilde{\mathbf{X}}_j$ which is highly correlated with $\bX_j$. For example, the ``P90.M" variant in the HIV dataset is extremely predictive of resistance to Indinavir (IDV), as its OLS t-statistic is $\approx 8.95$, but the P90.M variant is $> 99\%$ correlated with its knockoff. In this situation, the lasso may select $\widetilde{\mathbf{X}}_j$ instead of $\bX_j$, and since the lasso induces sparsity, it is unlikely to select \textit{both} $\bX_j$ and $\widetilde{\mathbf{X}}_j$ because they are highly correlated. As a result, $W_j^\mathrm{LCD}$ and $W_j^\mathrm{LSM}$ will both have large absolute values, since $\bX_j$ appears significant, but $W_j^\mathrm{LCD}$ and $W_j^\mathrm{LSM}$ will have a reasonably high probability of being negative, since $\bX_j \approx \widetilde{\mathbf{X}}_j$. Indeed, the LCD and LSM statistics for the P90.M variant have, respectively, the largest and third largest absolute values among all genetic variants, but both statistics are negative because the lasso selected the knockoff instead of the feature. This is a serious problem, since as shown in Figure \ref{fig::wstatplot}, knockoffs can only make discoveries when the feature statistics $W_j$ with the largest absolute values are consistently positive. Indeed, even a few highly negative $W$-statistics can prevent knockoffs from making \textit{any} discoveries.
That said, this problem is largely avoidable. If $\cor(X_j,\widetilde{X}_j)$ is very large, so that $W_j$ may be highly negative, we can ``deprioritize" $W_j$ by lowering its absolute value, allowing us to potentially discover feature $j$ while reducing the risk that it prevents the procedure from making many discoveries. As shown by Figure \ref{fig::wstatplot}, this is exactly what MLR statistics do for the P90.M variant. Overall, while it is important to rank the $W_j$ by some notion of the signal strength, the signal strength for knockoffs is not fully captured by the estimated coefficients and also depends on the (known) dependence structure among $[\bX, \widetilde{\mathbf{X}}]$. In the next two sections, we argue that the masked likelihood ratio is the ``correct" measure of signal strength.
\subsection{Knockoffs as inference on masked data}\label{subsec::maskeddata}
In the previous section, we argued that to maximize power, $W_j$ should have a large absolute value if and only if $\P(W_j > 0)$ is large.
This raises the question: how can we know when $\P(W_j > 0)$ is large? Indeed, we will need to estimate $\P(W_j > 0)$ from the data, but we cannot use \textit{all} the data for this purpose: for example, we cannot directly observe $\sign(W)$ and use it to adjust $|W|$ without violating FDR control. To make this precise, we reformulate knockoffs as inference on \textit{masked data}, as defined below.
\begin{definition} Suppose we observe data $\bX, \by$, knockoffs $\widetilde{\mathbf{X}}$, and independent random noise $U$. ($U$ may be used to fit a randomized feature statistic.) The \emph{masked data} $D$ is defined as \begin{equation}\label{eq::maskeddata}
D = \begin{cases}
(\by, \{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p, U) & \text{ for model-X knockoffs} \\
(\bX, \widetilde{\mathbf{X}}, \{\bX_j^T \by, \widetilde{\mathbf{X}}_j^T \by\}_{j=1}^p, U) & \text{ for fixed-X knockoffs.}
\end{cases} \end{equation} \end{definition}
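To make the model-X case concrete, the following sketch (our own illustration in Python, with made-up array names) builds the unordered pairs $\{\bX_j, \widetilde{\mathbf{X}}_j\}$ by randomly swapping columns and discarding the swap indicators. An analyst who sees only the pairs cannot tell which column of each pair is the original feature:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))          # original features
X_knock = rng.normal(size=(n, p))    # stand-in for valid knockoffs

# Masked data: for each j, keep only the *unordered* pair {X_j, X_knock_j}.
# We represent this by randomly swapping the two columns; the swap
# indicators are hidden from the analyst.
swap = rng.integers(0, 2, size=p).astype(bool)
pair_a = np.where(swap, X_knock, X)  # one element of each pair
pair_b = np.where(swap, X, X_knock)  # the other element

# Only someone who knows `swap` can recover X -- guessing which element of
# each pair is the true X_j is the guessing game described in the text.
recovered = np.where(swap, pair_b, pair_a)
```
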
As we will see, the masked data $D$ is all of the data we are ``allowed" to use when fitting a feature statistic $W$, and knockoffs will be powerful precisely when we can distinguish many of the non-null variables from their knockoffs using $D$. For example, in the model-X case, $D$ contains $\by$ and the unordered pairs of vectors $\{\bX_j, \widetilde{\mathbf{X}}_j\}$, and Proposition \ref{prop::mxdistinguish} tells us that ensuring $W_j$ is positive is equivalent to identifying $\bX_j$ from $\{\bX_j, \widetilde{\mathbf{X}}_j\}$.\footnote{Throughout the paper, we only consider $W$-statistics which are nonzero with probability one, because one can provably increase the power of knockoffs by ensuring that each coordinate of $W$ is nonzero.}
\begin{proposition}\label{prop::mxdistinguish} Let $\widetilde{\mathbf{X}}$ be model-X knockoffs such that $\bX_j \ne \widetilde{\mathbf{X}}_j$ for $j \in [p]$. Then $W = w([\bX, \widetilde{\mathbf{X}}], \by)$ is a valid feature statistic if and only if \begin{enumerate}[topsep=0pt]
\setlength{\parskip}{0pt}
\setlength{\itemsep}{0pt plus 1pt}
\item $|W|$ is a function of the masked data $D$
\item there exist \emph{distinguishing} functions $\widehat{\bX}_j = g_j(D)$ such that $W_j > 0$ if and only if $\widehat{\bX}_j = \bX_j$. \end{enumerate} \end{proposition}
Proposition \ref{prop::mxdistinguish} reformulates knockoffs as a guessing game, where ensuring $W_j > 0$ is equivalent to identifying $\bX_j$ from $\by$ and $\{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p$. (Note that $\widehat{\bX}_j$ is perhaps an unusual estimator, as it is a vector but can only take one of two values, namely $\widehat{\bX}_j \in \{\bX_j, \widetilde{\mathbf{X}}_j\}$.) If our ``guess" is right, meaning $\widehat{\bX}_j = \bX_j$, then we are rewarded and $W_j > 0$; otherwise $W_j < 0$. Furthermore, to avoid highly negative $W$-statistics, this suggests that we should only assign $W_j$ a large absolute value when we are confident that our ``guess" $\widehat{\bX}_j$ is correct. We discuss more implications of this result in the next section; for now, we obtain an analogous result for fixed-X knockoffs (similar to a result from \cite{whiteout2021}) by substituting $\{\bX_j^T \by, \widetilde{\mathbf{X}}_j^T \by\}$ for $\{\bX_j, \widetilde{\mathbf{X}}_j\}$.
\begin{proposition}\label{prop::fxdistinguish} Let $\widetilde{\mathbf{X}}$ be fixed-X knockoffs satisfying $\bX_j^T \by \ne \widetilde{\mathbf{X}}_j^T \by$ for $j \in [p]$. Then $W = w([\bX, \widetilde{\mathbf{X}}], \by)$ is a valid feature statistic if and only if \begin{enumerate}[topsep=0pt]
\setlength{\parskip}{0pt}
\setlength{\itemsep}{0pt plus 1pt}
\item $|W|$ is a function of the masked data $D$
\item there exist \emph{distinguishing} functions $g_j(D)$ such that $W_j > 0$ if and only if $\bX_j^T \by = g_j(D)$. \end{enumerate} \end{proposition}
Note that Propositions \ref{prop::mxdistinguish} and \ref{prop::fxdistinguish} hold for the original definition of knockoffs in \cite{fxknock, mxknockoffs2018}: however, one could slightly extend the definition of knockoffs and augment $D$ to contain any random variable which is independent of those specified in Equation (\ref{eq::maskeddata}). For example, in the fixed-X case, one can also include $\hat \sigma^2 = \|(I_n - H) \by\|_2^2$ where $H$ is the projection matrix of $[\bX, \widetilde{\mathbf{X}}]$ \citep{altsign2017, whiteout2021}. Our proofs for the rest of the paper also apply to such extensions.
\subsection{Introducing masked likelihood ratio (MLR) statistics}\label{subsec::mlr}
In this section, we introduce masked likelihood ratio (MLR) statistics. For brevity, this subsection focuses on the case of model-X knockoffs; the analogous results for fixed-X knockoffs merely replace $\{\bX_j, \widetilde{\mathbf{X}}_j\}$ with $\{\bX_j^T \by, \widetilde{\mathbf{X}}_j^T \by\}$, as in Definition \ref{def::mlr}. To build intuition, recall that Proposition \ref{prop::mxdistinguish} reformulates model-X knockoffs as the following guessing game. After observing the masked data $D = (\by, \{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p)$, the analyst must do the following. \begin{enumerate}[topsep=0pt]
\setlength\itemsep{0pt}
\item Step 1: Using $D$, the analyst produces ``guesses" $\widehat{\bX}_j$ of the true value of $\bX_j$ for each $j \in [p]$.
\item Step 2: The analyst may arbitrarily reorder the hypotheses.
\item Step 3: The analyst now sequentially checks which guesses $\widehat{\bX}_j$ are correct (in the order they specified). To make discoveries, let $k$ be the maximum number such that $\ceil{(k+1)/(1+q)}$ of the first $k$ guesses were correct. Then, the analyst can reject the null hypotheses corresponding to their first $\ceil{(k+1)/(1+q)}$ correct guesses while provably controlling the FDR at level $q$. \end{enumerate}
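As an illustration, Step 3 can be transcribed directly into code. This is a sketch of the literal rule stated above (production knockoff implementations phrase the same accounting as a threshold on the $W$-statistics instead):

```python
import math

def seqstep_discoveries(correct, q):
    """Number of rejections in Step 3: `correct` lists, in the analyst's
    chosen order, whether each guess X_hat_j was right. We find the largest
    k such that at least ceil((k+1)/(1+q)) of the first k guesses were
    correct, and reject that many hypotheses (0 if no such k exists)."""
    best, n_correct = 0, 0
    for k, c in enumerate(correct, start=1):
        n_correct += bool(c)
        threshold = math.ceil((k + 1) / (1 + q))
        if n_correct >= threshold:
            best = threshold
    return best
```

For example, ten correct guesses in a row at $q = 0.5$ yield eight discoveries, while all-wrong guesses yield none.
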
Note that in the traditional language of knockoffs, Step 1 determines $\sign(W)$, Step 2 corresponds to choosing the absolute values $|W|$, and Step $3$ is simply a description of the SeqStep procedure. However, if we assume for a moment that the analyst has oracle knowledge of the masked likelihood $L_{\theta}(D)$, then this reformulation suggests a very intuitive strategy to maximize the number of discoveries: \begin{enumerate}[topsep=0pt]
\setlength\itemsep{0pt}
\item In Step 1, the analyst should guess the value $\widehat{\bX}_j \in \{\bx_j, \widetilde{\mathbf{x}}_j\}$ which maximizes the likelihood of observing the masked data, denoted $L_{\theta}(D)$. Indeed, this is a standard binary decision problem, so this choice of $\widehat{\bX}_j$ maximizes the chance that each guess is correct. Equivalently, this maximizes $\P(W_j > 0 \mid D)$ since $\P(W_j > 0 \mid D) = \P(\widehat{\bX}_j = \bX_j \mid D)$.
\item In Step 2, the analyst should rank the hypotheses by the likelihood that each guess $\widehat{\bX}_j$ is correct. This ensures that for each $k$, the first $k$ ranked hypotheses contain as many ``good guesses" as possible, which thus maximizes the number of discoveries in Step 3. Equivalently, in the traditional notation of knockoffs, this means that $|W_j| > |W_i|$ if and only if $\P(W_j > 0 \mid D) > \P(W_i > 0 \mid D)$. When this is true, we say $\{|W_j|\}_{j=1}^p$ has the same order as $\{\P(W_j > 0 \mid D)\}_{j=1}^p$. \end{enumerate}
Both of these criteria are achieved by the Neyman-Pearson test statistic which tests $H_0 : \bX_j = \widetilde{\mathbf{x}}_j$ against the alternative $H_a : \bX_j = \bx_j$ using $D$. We refer to this statistic as the (oracle) masked likelihood ratio statistic, defined below: \begin{equation}
\mathrm{MLR}_j^{\mathrm{oracle}} = \log\left(\frac{L_{\theta}(\bX_j = \bx_j \mid D)}{L_{\theta}(\bX_j = \widetilde{\mathbf{x}}_j \mid D)}\right), \tag{\ref{eq::oraclemlr}} \end{equation} where $L_{\theta}(\bX_j = \bx \mid D)$ is the likelihood of observing $\bX_j = \bx$ conditional on $D$, and $\theta$ are any unknown parameters that affect the likelihood, such as the coefficients in a GLM or the parameters in a neural network. In a moment, we will verify that this achieves the criteria specified above, and in Section \ref{subsec::avgopt}, we will also show that for a fixed distribution of $(\bX, \by)$, choosing $\mathrm{MLR}_j^{\mathrm{oracle}}$ as the test statistic asymptotically maximizes the expected number of discoveries under mild regularity conditions.
Unfortunately, $\mathrm{MLR}_j^{\mathrm{oracle}}$ cannot be used in practice, since it depends on $\theta$, which is unknown. A heuristic solution would be to replace $\theta$ with an estimator $\hat \theta$ of $\theta$, but in our investigations we found that this ``plug-in" approach performed poorly since it does not account for uncertainty about $\theta$. Indeed, the high-dimensional settings in which knockoffs are most popular are precisely the settings in which we have significant uncertainty about the value of $\theta$. Thus, the ``plug-in" approach is not appropriate---and to make matters worse, the masked likelihood $L_{\theta}(D)$ is sometimes multimodal and non-convex (see Appendix \ref{subsec::noplugin} for discussion), making it challenging to get a useful point estimate of $\theta$.
Instead, to appropriately account for uncertainty over $\theta$, we recommend a Bayesian approach. Let $\pi$ be any user-specified prior over the unknown parameters $\theta$---in Section \ref{sec::sims}, we will give several choices of uninformative priors which perform well even when they are misspecified. We will settle for feature statistics which are \textit{average-case} optimal over $\pi$, which is perhaps the best we could hope for, since the optimal statistics for fixed $\theta$ are effectively uncomputable. Note that while MLR statistics are only average-case optimal, we emphasize that this in no way compromises their (pointwise) ability to control the FDR, which is always guaranteed by the knockoff procedure. To quote \cite{RenAdaptiveKnockoffs2020}, ``wrong models [or priors] do not hurt FDR control!"
Given such a prior $\pi$, we define MLR statistics below.
\begin{definition}[MLR statistics]\label{def::mlr} Suppose we observe data $\by, \bX$ with knockoffs $\widetilde{\mathbf{X}}$. Let $D$ be the masked data as in Equation (\ref{eq::maskeddata}). For any prior $\theta \sim \pi$ on the masked likelihood $L_{\theta}(D)$, we define the model-X masked likelihood ratio (MLR) statistic by marginalizing over $\theta$: \begin{equation}
W_j\opt \defeq
\log\left(\frac{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_j = \bx_j \mid D)\right]}{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_j = \widetilde{\mathbf{x}}_j \mid D)\right]}\right) \text{ for model-X knockoffs.}\footnote{In the edge case where the masked likelihood ratio is exactly zero, we set $W_j\opt \simind \Unif(\{-\epsilon, \epsilon\})$ where $\epsilon$ is chosen such that $\epsilon < |W_k\opt|$ for each $k$ such that the MLR is nonzero.}
\end{equation} The fixed-X MLR statistic is analogous but replaces $\{\bX_j, \widetilde{\mathbf{X}}_j\}$ with $\{\bX_j^T \by, \widetilde{\mathbf{X}}_j^T \by\}$: in this case, \begin{align}
W_j\opt &\defeq \log\left(\frac{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_j^T \by = \bx_j^T \by \mid D)\right]}{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_j^T \by = \widetilde{\mathbf{x}}_j^T \by \mid D)\right]}\right) \text{ for fixed-X knockoffs.} \end{align} \end{definition} Note that we can easily extend this definition to apply to \textit{group} knockoffs, but for brevity, we defer this extension to Appendix \ref{appendix::groupmlr}.
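To illustrate Definition \ref{def::mlr}, here is a toy Monte Carlo computation of a model-X MLR statistic for a single feature ($p = 1$), assuming a Gaussian linear model with $\sigma^2 = 1$ and prior $\beta \sim \mcN(0,1)$. By pairwise exchangeability, the pair probabilities cancel from the likelihood ratio, so only the response likelihoods matter; all names and the plain Monte Carlo average over prior draws are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)                 # true feature
y = 1.0 * x + rng.normal(size=n)       # y | x ~ N(beta * x, 1), beta = 1
x_knock = rng.normal(size=n)           # stand-in knockoff, independent of y

def log_lik(col, beta):
    # log L_beta(y | X = col) up to an additive constant
    # (constants cancel in the likelihood ratio)
    return -0.5 * np.sum((y - beta * col) ** 2)

def log_mean_exp(v):
    # numerically stable log of the average of exp(v)
    m = np.max(v)
    return m + np.log(np.mean(np.exp(v - m)))

betas = rng.normal(size=2000)          # draws from the prior pi = N(0, 1)
ll_x = np.array([log_lik(x, b) for b in betas])
ll_k = np.array([log_lik(x_knock, b) for b in betas])
W = log_mean_exp(ll_x) - log_mean_exp(ll_k)   # masked likelihood ratio
```

Since $x$ truly drives $y$ here, $W$ comes out positive: the statistic bets that $x$, not its knockoff, is the real feature.
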
The MLR statistic is also closely related to the Neyman-Pearson test statistic which (in the model-X case) tests $H_0 : \bX_j = \widetilde{\mathbf{x}}_j$ against the alternative $H_a : \bX_j = \bx_j$, except now, these null and alternative hypotheses implicitly marginalize over the unknown parameters $\theta$ according to $\pi$. As a result, we can verify that averaging over $\pi$, MLR statistics achieve the intuitive optimality criteria outlined at the beginning of this section. Note that the proposition below also applies to the oracle MLR statistics, since $\mathrm{MLR}_j^{\mathrm{oracle}} = W_j\opt$ in the special case where $\pi$ is a point mass on some fixed $\theta$.
\begin{proposition}\label{prop::bestsigns} Given data $\by, \bX$ and knockoffs $\widetilde{\mathbf{X}}$, let $W\opt$ be the MLR statistics and let $W$ be any other valid knockoff feature statistic. Then, \begin{equation}\label{eq::bestsigns}
\P(W_j\opt > 0 \mid D) \ge \P(W_j > 0 \mid D). \end{equation}
Furthermore, $\{|W_j\opt|\}_{j=1}^p$ has the same order as $\{\P(W_j\opt > 0 \mid D)\}_{j=1}^p$. More precisely, \begin{equation}\label{eq::bestmags}
\P(W_j\opt > 0 \mid D) = \frac{\exp(|W_j\opt|)}{1 + \exp(|W_j\opt|)}. \end{equation} These results hold over the posterior distribution $(\by, \bX, \theta) \mid D$. \begin{proof} First, we prove Equation (\ref{eq::bestsigns}). Recall by Proposition \ref{prop::mxdistinguish} that there exists a function $\widehat{\bX}_j$ of $D$ such that $W_j\opt > 0$ if and only if $\widehat{\bX}_j = \bX_j$. Standard theory for binary decision problems tells us that $W_j\opt$ maximizes $\P(W_j > 0 \mid D)$ if $\widehat{\bX}_j = \argmax_{\bx \in \{\bx_j, \widetilde{\mathbf{x}}_j\}} \P(\bX_j = \bx \mid D)$, where the probability is taken over $(\by, \bX, \theta) \mid D$. However, by the tower property, $\P(\bX_j = \bx \mid D) = \E_{\theta \sim \pi}\left[L_{\theta}(\bX_j = \bx \mid D) \right]$, and thus by construction $W_j\opt > 0$ if and only if $\P(\bX_j = \bx_j \mid D) > \P(\bX_j = \widetilde{\mathbf{x}}_j \mid D)$. This proves that $\widehat{\bX}_j = \argmax_{\bx \in \{\bx_j, \widetilde{\mathbf{x}}_j\}} \P(\bX_j = \bx \mid D)$, which thus proves Equation (\ref{eq::bestsigns}).
To prove Equation (\ref{eq::bestmags}), observe $\E_{\theta \sim \pi}[L_{\theta}(\bX_j = \bx_j \mid D)] = \P(\bX_j = \bx_j \mid D) = 1 - \P(\bX_j = \widetilde{\mathbf{x}}_j \mid D)$, so \begin{equation*}
|W_j\opt| = \log\left(\frac{\max_{\bx \in \{\bx_j, \widetilde{\mathbf{x}}_j\}} \P(\bX_j = \bx \mid D)}{1 - \max_{\bx \in \{\bx_j, \widetilde{\mathbf{x}}_j\}} \P(\bX_j = \bx \mid D)}\right) = \log\left(\frac{\P(W_j\opt > 0 \mid D)}{1-\P(W_j\opt > 0 \mid D)}\right), \end{equation*} where the second step uses the fact that $\P(W_j\opt > 0 \mid D) = \P\left(\bX_j = \widehat{\bX}_j \mid D \right)$ for $\widehat{\bX}_j$ as defined above. This completes the proof for model-X knockoffs; the proof in the fixed-X case is analogous. \end{proof} \end{proposition}
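As a quick sanity check on Equation (\ref{eq::bestmags}), note it is just the logit/sigmoid pair: if $p_j = \P(W_j\opt > 0 \mid D) \ge 1/2$ and $W_j\opt = \log(p_j/(1-p_j))$, then $\exp(|W_j\opt|)/(1+\exp(|W_j\opt|))$ recovers $p_j$, and larger $p_j$ means larger $|W_j\opt|$ (the probabilities below are arbitrary toy values):

```python
import numpy as np

# Hypothetical conditional probabilities p_j = P(W_j > 0 | D), all >= 1/2.
p_vals = np.array([0.55, 0.7, 0.9, 0.99])
W = np.log(p_vals / (1 - p_vals))       # log-odds, so W >= 0 and |W| = W
recovered = np.exp(np.abs(W)) / (1 + np.exp(np.abs(W)))
```
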
Proposition \ref{prop::bestsigns} tells us that MLR statistics avoid the problem identified in Section \ref{subsec::badlasso}. In particular, if $\bX_j$ appears highly significant but $\widetilde{\mathbf{X}}_j$ is nearly indistinguishable from $\bX_j$, the absolute value of $W_j\opt$ will still be quite small, since $\bx_j \approx \widetilde{\mathbf{x}}_j$ intuitively implies $L_{\theta}(\bX_j = \bx_j \mid D) \approx L_{\theta}(\bX_j = \widetilde{\mathbf{x}}_j \mid D)$. As a result, MLR statistics are explicitly designed to avoid situations where $W_j\opt$ is highly negative, as confirmed by Figure \ref{fig::wstatplot}. Indeed, Equation (\ref{eq::bestmags}) tells us that the absolute values $|W_j\opt|$ have the same order as $\P(W_j\opt > 0 \mid D)$, so MLR statistics will always prioritize the hypotheses ``correctly."
Lastly, we pause to make two connections to the literature. First, Proposition \ref{prop::bestsigns} suggests a similarity between MLR statistics and a proposal in \cite{RenAdaptiveKnockoffs2020} to rank the hypotheses by $\P(W_j > 0 \mid |W_j|)$ (see their footnote 8) when using an ``adaptive" extension of knockoffs. MLR statistics appear to satisfy a similar property, although we condition on \textit{all} of the masked data $D$, leading to higher power. That said, this initial similarity is perhaps misleading, since \cite{RenAdaptiveKnockoffs2020} do not propose a feature statistic per se; rather, they provide an extension of SeqStep that can be wrapped around any predefined feature statistic, including MLR statistics, and the power of this extension depends to a large extent on the power of the initial feature statistic. Indeed, when \cite{RenAdaptiveKnockoffs2020} write ``$\P(W_j > 0 \mid |W_j|)$," they are referring to a different probability space than we are, as they are referring to predicted probabilities from an auxiliary (heuristic) Bayesian model. For brevity, we defer the details of this comparison to Appendix \ref{appendix::adaknock}; for now, we emphasize that their contribution is both distinct from and complementary to ours.
Second, Proposition \ref{prop::bestsigns} is similar to Theorem $5$ of \cite{katsevichmx2020}, who show that the \textit{unmasked} likelihood statistic maximizes $\P(W_j > 0)$; indeed, we see our work as building on theirs. However, there are two important differences. To start, the unmasked likelihood statistic does not satisfy joint pairwise exchangeability even though it is marginally symmetric under the null (see Appendix \ref{appendix::unmasked_lr}): in contrast, MLR statistics provably control the FDR. Second and more importantly, unlike in \cite{katsevichmx2020}, MLR statistics are associated with guarantees on their \textit{magnitudes}, allowing us to show much stronger theoretical results, as we do in the next section.
\subsection{MLR statistics are asymptotically optimal}\label{subsec::avgopt}
We now show that MLR statistics asymptotically maximize the expected number of discoveries. Indeed, the result in Proposition \ref{prop::bestsigns} is strong enough that one could be forgiven for thinking that MLR statistics are trivially optimal in finite samples, since MLR statistics simultaneously maximize $\P(W_j > 0 \mid D)$ for every $j$ and rank the hypotheses exactly in order of $\P(W_j > 0 \mid D)$. This intuition is correct when the vector $\sign(W\opt)$ has conditionally independent entries given $D$, as stated below. (Note that the following proposition is a generalization of Proposition 2 in \cite{whiteout2021}, and the proofs of these propositions are effectively identical.)
\begin{proposition}\label{prop::exactopt} When $\{\I(W_j\opt > 0)\}_{j=1}^p$ are conditionally independent given $D$, then the MLR statistics $W\opt$ maximize the expected number of discoveries in finite samples. Furthermore, if $\by \mid \bX \sim \mcN(\bX \beta, \sigma^2 I_n)$ and $\pi$ is a point mass on the true value of $\beta$ and $\sigma^2$, then $\{\I(W_j\opt > 0)\}_{j=1}^p$ are conditionally independent given $D$ whenever $\bX$ are fixed-X knockoffs or Gaussian conditional model-X knockoffs \citep{condknock2019}. \end{proposition}
However, absent independence conditions, MLR statistics are not always finite-sample optimal, because it is sometimes possible to exploit dependencies among the coordinates of $\sign(W\opt)$ to slightly improve power. That said, as we discuss in Appendix \ref{subsec::finitesampleopt}, to improve power even slightly usually requires pathological dependencies, making it hard to imagine that accounting for dependencies can substantially increase power in realistic settings. More formally, we now show that MLR statistics are asymptotically optimal under mild (and realistic) regularity conditions on the dependence of $\sign(W) \mid D$.
To this end, consider any asymptotic regime where we observe $\bX^{(n)} \in \R^{n \times p_n}, \by^{(n)} \in \R^{n}$ and construct knockoffs $\widetilde{\mathbf{X}}^{(n)}$. For each $n$, let $L(\by^{(n)}; \bX^{(n)}, \theta^{(n)})$ denote the likelihood of $\by^{(n)}$ given $\bX^{(n)}$ and (unknown) parameters $\theta^{(n)}$ and let $\pi^{(n)}$ be a prior distribution on $\theta^{(n)}$. Let $D^{(n)}$ denote the masked data for knockoffs as defined in Section \ref{subsec::maskeddata}. We will analyze the limiting \textit{empirical power} of a sequence of feature statistics $W\upn = w_n([\bX\upn, \widetilde{\mathbf{X}}\upn], \by\upn)$, defined as the expected number of discoveries normalized by the expected number of non-nulls $s\upn$. Formally, let $S\upn(q)$ denote the rejection set of $W\upn$ when controlling the FDR at level $q$. Then we define \begin{equation}\label{eq::powerdef}
\widetilde{\mathrm{Power}}_q(w_n) = \frac{\E[|S\upn(q)|]}{s\upn}, \end{equation}
where the expectation in the numerator is over $\by\upn, \bX\upn, \theta\upn$ given $D$. This notion of power is slightly unconventional, as it counts the number of discoveries instead of the number of \textit{true} discoveries. We make two remarks about this. First, in Appendix \ref{subsec::tpr}, we introduce a modification of MLR statistics and give a proof sketch that this modification maximizes the expected number of \textit{true} discoveries: however, computing this modification is prohibitively expensive. Since MLR statistics perform very well anyway, the difference between these procedures likely does not justify the cost in computation, and thus we choose to analyze $\widetilde{\mathrm{Power}}$ as stated above. Second, very heuristically, we might hope that there is not too much difference between these metrics of power anyway. Indeed, let $T\upn(q)$ and $V\upn(q)$ count the number of true and false discoveries, and define the true positive rate $\mathrm{TPR} = \frac{\E[T\upn(q)]}{s\upn}$ and the marginal FDR $\mathrm{mFDR} \defeq \frac{\E[V\upn(q)]}{\E[|S\upn(q)|]}$. Then we can write $\widetilde{\mathrm{Power}} = \mathrm{TPR}/(1-\mathrm{mFDR})$: since knockoffs provably control the (non-marginal) FDR at level $q$, we might hope intuitively that $\widetilde{\mathrm{Power}}$ is a good proxy for the TPR. That said, this is certainly not a formal argument, especially since some knockoff feature statistics will be conservative. Thus, we use the tilde above $\widetilde{\mathrm{Power}}$ to mark the difference between our measure of power and the conventional one.
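For intuition about these definitions, a quick numerical check with hypothetical expected counts (toy numbers of our own) shows how $\widetilde{\mathrm{Power}}$, TPR, and mFDR fit together:

```python
# Toy expected counts: s non-nulls, ET = E[T(q)] true discoveries,
# EV = E[V(q)] false discoveries.
s, ET, EV = 40.0, 30.0, 5.0
ES = ET + EV                 # E[|S(q)|], expected total discoveries
power_tilde = ES / s         # E[|S(q)|] / s
tpr = ET / s                 # E[T(q)] / s
mfdr = EV / ES               # E[V(q)] / E[|S(q)|]
# From these definitions, E[|S(q)|] = E[T(q)] / (1 - mFDR),
# so power_tilde equals tpr / (1 - mfdr).
```
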
As stated so far, this is a completely general asymptotic regime: we have made no assumptions whatsoever about the distribution of $\by\upn, \bX\upn, \pi\upn$, or the form of the likelihood. To show that MLR statistics maximize the expected number of discoveries, the main condition we need is that conditional on the masked data $D\upn$, the signs of $W\opt$ are not too strongly dependent, and thus the successive averages of $\sign(W\opt)$ converge to their conditional means. Intuitively, we should expect this condition to hold in most settings, since knockoffs are designed to ensure that $\sign(W)$ are conditionally independent for null variables, and, as we shall see in Section \ref{sec::sims}, $\sign(W)$ are usually only very weakly conditionally dependent even for non-null variables (see also Appendix \ref{appendix::dependencediscussion} for more discussion). Furthermore, we note that the concept of a local dependence condition has appeared before in the multiple testing literature \citep{genovese2004, storey2004, ferreira2006, farcomeni2007}, suggesting that this assumption is plausible. (That said, our proof technique is novel.) We will give further justification for this assumption in a moment---however, we first state our main theoretical result.
\begin{theorem}\label{thm::avgopt} Consider any arbitrarily high-dimensional asymptotic regime where we observe data $\bX^{(n)} \in \R^{n \times p_n}, \by^{(n)} \in \R^{n}$ and construct knockoffs $\widetilde{\mathbf{X}}^{(n)}$ with $D^{(n)}$ denoting the masked data. Let $W^{\opt} = w_n\opt([\bX^{(n)}, \widetilde{\mathbf{X}}^{(n)}], \by^{(n)})$ denote the MLR statistics with respect to a prior $\pi\upn$ on the parameters $\theta\upn$. Let $W = w_n([\bX\upn, \widetilde{\mathbf{X}}\upn], \by\upn)$ denote any other sequence of feature statistics.
Assume only the following conditions. First, assume that $\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt)$ and $\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n)$ exist for each $q \in (0,1)$. Second, assume that $s\upn$, the expected number of non-nulls, grows faster than $\log(p_n)^5$, i.e., $\lim_{n \to \infty} \frac{s\upn}{\log(p_n)^5} = \infty$. Finally, assume that conditional on $D\upn$, the covariance between the signs of $W\opt$ decays exponentially. That is, there exist constants $C\ge 0, \rho \in (0,1)$ such that
\begin{equation}\label{eq::expcovdecay}
|\cov(\I(W\opt_i > 0), \I(W\opt_j > 0) \mid D\upn)| \le C \rho^{|i-j|}. \end{equation} Then for all but countably many $q \in (0,1)$, \begin{equation}
\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt) \ge \lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n). \end{equation} \end{theorem}
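Since the covariances in Equation (\ref{eq::expcovdecay}) involve only $\sign(W\opt)$ given $D$, the condition can be probed empirically from posterior sign samples. Below is a sketch (the function name and the toy data, independent random signs, are our own):

```python
import numpy as np

def check_exp_cov_decay(sign_samples, C, rho):
    """Empirically check Equation (eq::expcovdecay): given sampled sign
    vectors (n_samples x p) drawn conditional on D, test
    |cov(I(W_i > 0), I(W_j > 0))| <= C * rho**|i - j| for all i, j."""
    ind = (sign_samples > 0).astype(float)
    cov = np.cov(ind, rowvar=False)
    p = cov.shape[0]
    i, j = np.indices((p, p))
    bound = C * rho ** np.abs(i - j)
    return bool(np.all(np.abs(cov) <= bound + 1e-12))

# Toy use: independent signs easily satisfy an exponential-decay bound.
rng = np.random.default_rng(0)
samples = rng.choice([-1.0, 1.0], size=(5000, 8))
ok = check_exp_cov_decay(samples, C=0.5, rho=0.9)
```
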
Theorem \ref{thm::avgopt} tells us that MLR statistics asymptotically maximize the expected number of discoveries without placing any explicit restrictions on the relationship between $\by$ and $\bX$ or the dimensionality. Indeed, the first two assumptions of Theorem \ref{thm::avgopt} are quite weak: the first assumption merely guarantees that the limiting powers we aim to study actually exist, and the second is only a mild restriction on the sparsity regime. Indeed, our setting allows for many previously studied sparsity regimes, such as a polynomial sparsity model \citep{donohojin2004, ke2020} and the linear sparsity regime \citep{weinstein2017}. Nonetheless, these assumptions can be substantially weakened at the cost of a more technical theorem statement, as discussed in Appendix \ref{subsec::assumptions}.
Equation (\ref{eq::expcovdecay}), which asks that $\sign(W\opt)$ is only locally dependent, is a stronger assumption. That said, there are a few reasons to think it is not too strong. First, as discussed previously in this section, knockoffs are designed to ensure that $\sign(W\opt)$ are conditionally independent, and although this is not exactly true in the posterior, we show empirically in Section \ref{sec::sims} that $\sign(W\opt) \mid D$ are usually only very weakly dependent (and we discuss this further in Appendix \ref{appendix::dependencediscussion}). Second, this is roughly a \textit{checkable} condition. Indeed, computing MLR statistics usually requires sampling from the joint distribution of $\sign(W\opt)$ conditional on $D$. This means that we can actually inspect $\cov(\sign(W\opt) \mid D)$, allowing us to assess the conditional dependence among $\sign(W\opt)$. Furthermore, we expect the theorem's conclusion to be robust to mild violations of Equation (\ref{eq::expcovdecay}), since from a theoretical perspective, the conclusion only requires that the successive averages of $\sign(W\opt)$ obey a law of large numbers conditional on $D$ (see Appendix \ref{subsec::assumptions} for discussion). And lastly, in some restricted settings, we can prove the local dependence condition. For example, Equation (\ref{eq::expcovdecay}) holds in the special case where $\by, \bX$ follow a Gaussian linear model and $\bX^T \bX$ is block-diagonal, similar to the setting of \cite{ke2020}, as stated below.
\begin{proposition}\label{prop::unhappy} Suppose $\by \mid \bX \sim \mcN(\bX \beta, \sigma^2 I_n)$ and $\bX^T \bX$ is a block-diagonal matrix with maximum block size $M \in \N$. Suppose $\pi$ is any prior such that the coordinates of $\beta$ are a priori independent and $\sigma^2$ is a known constant. Then if $\widetilde{\mathbf{X}}$ are either fixed-X knockoffs or conditional Gaussian model-X knockoffs \citep{condknock2019}, the coordinates of $\sign(W\opt)$ are $M$-dependent conditional on $D$, implying that Equation (\ref{eq::expcovdecay}) holds, e.g., with $C = 2^M$ and $\rho = \frac{1}{2}$. \end{proposition}
The block-diagonal assumption in Proposition \ref{prop::unhappy} is certainly quite restrictive due to technical challenges in analyzing the posterior distribution of $\sign(W\opt) \mid D$, but we expect that Equation (\ref{eq::expcovdecay}) should hold much more generally, as discussed earlier and as we verify empirically in Section \ref{sec::sims}. Thus, perhaps the weakest aspect of Theorem \ref{thm::avgopt} is its conclusion: MLR statistics are only average-case optimal with respect to a user-specified prior $\pi$ on the parameters $\theta$. If the true parameters $\theta$ are very different from the average case specified by the prior, MLR statistics may not perform well. With this motivation, in the next section, we suggest practical choices of $\pi$ which performed well empirically, even when they were very poorly specified.
\subsection{Computing MLR statistics}\label{subsec::computation}
\subsubsection{General strategy}
In this section, we discuss how to compute $W_j\opt = \log\left(\frac{\E_{\theta \sim \pi}[L_{\theta}(\bX_j = \bx_j \mid D)]}{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_j = \widetilde{\mathbf{x}}_j \mid D) \right]}\right)$ under the assumption that for any fixed value of $\theta$, we can compute $L_{\theta}(\by \mid \bX)$. For example, if one assumes $\by_i \mid \bX_i \simind \mcN(f_{\theta}(\bX_i), \sigma^2)$ for some class of functions $\{f_{\theta} : \theta \in \Theta\}$, then $L_{\theta}(\by \mid \bX) = \prod_{i=1}^n \frac{1}{\sigma}\varphi\left(\frac{\by_i - f_{\theta}(\bX_i)}{\sigma}\right)$ where $\varphi$ is the standard Gaussian PDF. The challenge of computing $W_j\opt$ is that we must marginalize over both $\theta$ \textit{and} the unknown values of $\bX_{-j}$, as we only observe the unordered pairs $\{\bx_{j'}, \widetilde{\mathbf{x}}_{j'}\}$ for $j' \ne j$.
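For concreteness, the Gaussian case can be coded directly (a minimal helper of our own, with $f_\theta$ passed in as a function):

```python
import numpy as np

def gaussian_log_lik(y, X, f_theta, sigma=1.0):
    """log L_theta(y | X) when y_i | X_i ~ N(f_theta(X_i), sigma^2),
    i.e., the log of the product of Gaussian densities of the residuals."""
    resid = (y - f_theta(X)) / sigma
    return np.sum(-0.5 * resid ** 2) - len(y) * np.log(sigma * np.sqrt(2 * np.pi))
```
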
This suggests a general approach based on Gibbs sampling, as described by Algorithm \ref{alg::mlrgibbs}. Informally, let $\theta_j$ denote the coordinates of $\theta$ which determine whether or not $\bX_j$ is non-null: for example, in a linear regression where $\by \mid \bX \sim \mcN(\bX \beta, \sigma^2 I_n)$, $\theta_j$ corresponds to $\beta_j$. The algorithm is then as follows. First, after initializing values for $\theta$ and $\bX$, we iteratively resample $\bX_j \mid \by, \bX_{-j}, \theta, D$ and $\theta_j \mid \by, \bX, \theta_{-j}$. Resampling $\theta_j$ can be done using any off-the-shelf Bayesian Gibbs sampler, since this step is identical to a typical Bayesian regression problem. Resampling $\bX_j \mid \by, \bX_{-j}, \theta, D$ is also straightforward, since conditional on $D$, $\bX_j$ must take one of the two values $\{\bx_j, \widetilde{\mathbf{x}}_j\}$. In particular, \begin{align}
\frac{\P(\bX_j = \bx_j \mid \by, \bX_{-j}, \theta, D)}{\P(\bX_j = \widetilde{\mathbf{x}}_j \mid \by, \bX_{-j}, \theta, D)}
&=
\frac{\pi(\theta) \P(\bX_j = \bx_j, \widetilde{\mathbf{X}}_j = \widetilde{\mathbf{x}}_j \mid \bX_{-j}, \widetilde{\mathbf{X}}_{-j}) L_{\theta}(\by \mid \bX_j = \bx_j, \bX_{-j})}{\pi(\theta) \P(\bX_j = \widetilde{\mathbf{x}}_j, \widetilde{\mathbf{X}}_j = \bx_j \mid \bX_{-j}, \widetilde{\mathbf{X}}_{-j}) L_{\theta}(\by \mid \bX_j = \widetilde{\mathbf{x}}_j, \bX_{-j})} \\
&=
\frac{L_{\theta}(\by \mid \bX_j = \bx_j, \bX_{-j})}{L_{\theta}(\by \mid \bX_j = \widetilde{\mathbf{x}}_j, \bX_{-j})} \label{eq::resamplexj}, \end{align} where the first step uses the fact that $\widetilde{\mathbf{X}} \Perp \by \mid \bX$ and the second step uses the pairwise exchangeability of $\{\bX_j, \widetilde{\mathbf{X}}_j\}$. The resulting ``meta-algorithm" is summarized below in Algorithm \ref{alg::mlrgibbs}. Of course, there are many natural variants of Algorithm \ref{alg::mlrgibbs}: in particular, our implementations modify lines 4 and 5 of Algorithm \ref{alg::mlrgibbs} to marginalize over the possible values of $\theta_j$ when resampling $X_j$, which helps the Markov chain converge more quickly.
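In a Gaussian linear model, for example, the update in Equation (\ref{eq::resamplexj}) reduces to a sigmoid of the log-likelihood difference $\eta_j$; here is a sketch with our own function names:

```python
import numpy as np

def resample_xj_prob(y, X, j, x_j, xk_j, beta, sigma=1.0):
    """Probability that X_j = x_j in the Gibbs update of Equation
    (eq::resamplexj), for y | X ~ N(X beta, sigma^2 I). The current value
    of column j in X is overwritten, so it does not matter."""
    def log_lik(col):
        Xtmp = X.copy()
        Xtmp[:, j] = col
        resid = y - Xtmp @ beta
        return -0.5 * np.sum(resid ** 2) / sigma ** 2
    eta = log_lik(x_j) - log_lik(xk_j)   # log-likelihood difference eta_j
    return 1.0 / (1.0 + np.exp(-eta))    # sigmoid(eta_j)
```

When $\beta_j = 0$, both assignments give identical likelihoods and the update is a fair coin flip, matching the exchangeability of null features.
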
\begin{algorithm}[h!] \caption{Gibbs sampling meta-algorithm to compute MLR statistics.}\label{alg::mlrgibbs} \textbf{Input:}\, Masked data $D = (\by, \{\bx_j, \widetilde{\mathbf{x}}_j\}_{j=1}^p)$, a likelihood $L_{\theta}(\by \mid \bX)$ and a prior $\pi$ on $\theta$. \begin{algorithmic}[1]
\State Initialize $\bX_j \simiid \Unif(\{\bx_j, \widetilde{\mathbf{x}}_j\})$ and $\theta \sim \pi$.
\For {$i=1,2,\dots,n_{\mathrm{sample}}$}
\For {$j=1,\dots,p$}:
\State Set $\eta_j^{(i)} = \log\left(L_{\theta}(\by \mid \bX_{-j}, \bX_j = \bx_j)\right) - \log\left(L_{\theta}(\by \mid \bX_{-j}, \bX_j = \widetilde{\mathbf{x}}_j)\right)$.
\State Sample $\bX_j \mid \bX_{-j}, \by, \theta, D$, using Equation (\ref{eq::resamplexj}) and $\eta_j^{(i)}$.
\State Sample $\theta_j \mid \by, \bX, \theta_{-j}$.
\EndFor
\State Resample any other unknown parameters (e.g., $\sigma^2$ in linear regression).
\EndFor
\State For each $j$, return $W_j\opt = \log\left(\sum_{i=1}^{n_{\mathrm{sample}}} \frac{\exp(\eta_j^{(i)})}{1 + \exp(\eta_j^{(i)})}\right) - \log\left(\sum_{i=1}^{n_{\mathrm{sample}}} \frac{1}{1 + \exp(\eta_j^{(i)})}\right)$.
\end{algorithmic} \end{algorithm}
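To make the resampling step concrete, the following sketch implements the $\bX_j$-resampling step and the final aggregation of Algorithm \ref{alg::mlrgibbs} for a Gaussian linear likelihood. For brevity, $\theta = (\beta, \sigma^2)$ is held fixed here, whereas the full meta-algorithm also resamples $\theta_j$; the function names and the fixed-$\theta$ simplification are illustrative, not taken from our reference implementation.

```python
import numpy as np

def log_lik(y, X, beta, sigma2):
    # Gaussian linear-model log-likelihood, up to additive constants:
    # log N(y; X beta, sigma2 * I)
    resid = y - X @ beta
    return -0.5 * resid @ resid / sigma2

def gibbs_mlr(y, x, xtil, beta, sigma2, n_sample=200, seed=0):
    """Resample X_j | y, X_{-j}, theta, D and aggregate the eta's into W.
    x, xtil are (n, p) arrays holding the two candidate values {x_j, xtil_j};
    theta = (beta, sigma2) is held fixed in this simplified sketch."""
    rng = np.random.default_rng(seed)
    n, p = x.shape
    # Line 1: initialize X_j ~ Unif({x_j, xtil_j})
    X = x.copy()
    for j in np.nonzero(rng.integers(0, 2, size=p))[0]:
        X[:, j] = xtil[:, j]
    etas = np.empty((n_sample, p))
    for i in range(n_sample):
        for j in range(p):
            X[:, j] = x[:, j]
            l1 = log_lik(y, X, beta, sigma2)
            X[:, j] = xtil[:, j]
            l0 = log_lik(y, X, beta, sigma2)
            # eta_j = log-likelihood ratio from Equation (resamplexj),
            # clipped for numerical stability
            eta = float(np.clip(l1 - l0, -30.0, 30.0))
            p1 = 1.0 / (1.0 + np.exp(-eta))  # P(X_j = x_j | y, X_{-j}, theta, D)
            X[:, j] = x[:, j] if rng.random() < p1 else xtil[:, j]
            etas[i, j] = eta
    # W_j = log(sum_i p1_ij) - log(sum_i (1 - p1_ij)), as in line 8
    p1s = 1.0 / (1.0 + np.exp(-etas))
    return np.log(p1s.sum(axis=0)) - np.log((1.0 - p1s).sum(axis=0))
```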
In a moment, we will briefly describe a few particular modeling choices which performed well in a variety of settings. Before doing this, it is important to note that in Gaussian linear models where there is a sparse ``spike-and-slab" prior on the coefficients $\beta$, Algorithm \ref{alg::mlrgibbs} is similar in flavor to the ``Bayesian Variable Selection" (BVS) feature statistic from \cite{mxknockoffs2018}, although there are substantial differences in the Gibbs sampler and some differences in the final estimand. Broadly, we see our work as complementary to theirs; however, aside from technical details, a main difference is that \cite{mxknockoffs2018} seemed to argue that the main advantage of BVS was to incorporate accurate prior information. In contrast, we argue that a main advantage of MLR statistics is that they are estimating the right \textit{estimand}, and thus MLR statistics can be very powerful even when using misspecified priors (see Section \ref{sec::sims}).
\subsubsection{Sparse priors for generalized additive models and binary GLMs}\label{subsubsec::gams}
We start by considering a generalized additive model of the form \begin{equation} Y \mid X \sim \mcN\left(\sum_{j=1}^p \phi_j(X_j)^T \beta^{(j)}, \sigma^2\right) \end{equation} where $\phi_j : \R \to \R^{p_j}$ is a prespecified set of basis functions and $\beta^{(j)} \in \R^{p_j}$ are linear coefficients. For example, in a Gaussian linear model, $\phi_j$ is the identity function so $\E[Y \mid X] = X^T \beta$. More generally, in Section \ref{sec::sims}, we will take $\phi_j(\cdot)$ to be the basis representation of $d$-degree regression splines with $K$ knots (see \cite{esl2001} for a review). We picked this model because it can flexibly model nonlinear relationships between $\by$ and $\bX$ while also allowing efficient computation of MLR statistics.
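To illustrate the generalized additive form, the sketch below builds a simple polynomial basis in place of the regression splines used in our experiments; the function names are illustrative, and any fixed basis $\phi_j$ fits the same model.

```python
import numpy as np

def poly_basis(x, degree=3):
    # phi_j(x) = (x, x^2, ..., x^degree); a stand-in for the spline
    # basis, since any fixed basis fits the same additive form
    return np.column_stack([x ** d for d in range(1, degree + 1)])

def gam_mean(X, betas, degree=3):
    # E[y | X] = sum_j phi_j(X_j)^T beta^(j)
    return sum(poly_basis(X[:, j], degree) @ betas[j]
               for j in range(X.shape[1]))
```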
For the prior, we assume $\beta^{(j)} = 0 \in \R^{p_j}$ a priori with probability $p_0$, and otherwise $\beta^{(j)} \sim \mcN(0, \tau^2 I_{p_j})$. This group-sparse prior is effectively a ``two-groups" model, as $\bX_j$ is null if and only if $\beta^{(j)} = 0$. Since the sparsity $p_0$ and signal size $\tau^2$ are typically not known a priori, we use the conjugate hyperpriors $\tau^2 \sim \invGamma(a_{\tau}, b_{\tau}), \sigma^2 \sim \invGamma(a_{\sigma}, b_{\sigma}) \text{ and } p_0 \sim \Beta(a_0, b_0).$ As we will see in Section \ref{sec::sims}, using these hyperpriors will allow us to adaptively estimate the sparsity level. Using standard techniques for sampling from ``spike-and-slab" models \citep{mcculloch1997}, we can compute MLR statistics in at most $O(n_{\mathrm{iter}} n p)$ operations (assuming that the dimensionality of $\phi_j$ is fixed). This complexity is comparable to the cost of fitting the LASSO, which is roughly $O(n_{\mathrm{iter}} n p)$ using coordinate descent or $O(n p^2)$ using the LARS algorithm \citep{efron2004}, and it is usually faster than the cost of computing Gaussian model-X or fixed-X knockoffs (which is usually $O(n p^2 + p^3)$). Please see Appendix \ref{appendix::gibbs} for a detailed derivation of the Gibbs updates for this model.
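The hierarchical prior above can be sketched as a single joint draw; the hyperparameter defaults below are illustrative placeholders, not the values used in our experiments.

```python
import numpy as np

def draw_prior(p, p_j, a_tau=2.0, b_tau=1.0, a0=1.0, b0=1.0, seed=0):
    """One joint draw from the hierarchical prior of this section:
    p0 ~ Beta(a0, b0), tau^2 ~ InvGamma(a_tau, b_tau), and each group
    beta^(j) is 0 with probability p0, else N(0, tau^2 I_{p_j})."""
    rng = np.random.default_rng(seed)
    p0 = rng.beta(a0, b0)
    # InvGamma(a, b) draw via 1 / Gamma(shape=a, scale=1/b)
    tau2 = 1.0 / rng.gamma(a_tau, 1.0 / b_tau)
    betas = [np.zeros(p_j) if rng.random() < p0
             else rng.normal(0.0, np.sqrt(tau2), size=p_j)
             for _ in range(p)]
    return p0, tau2, betas
```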
Lastly, we can easily extend this algorithm to binary responses. In particular, using standard data augmentation strategies developed in \cite{albertchib1993, polygamma2013}, we can compute Gibbs updates in the same computational complexity when $\P(Y = 1 \mid X) = s\left(\sum_{j=1}^p \phi_j(X_j)^T \beta^{(j)} \right)$, where $s$ is either the probit or logistic link function. See Appendix \ref{appendix::gibbs} for further details.
\section{Simulations}\label{sec::sims}
In this section, we show via simulations that MLR statistics are powerful in a variety of settings. At the outset, we emphasize that in every simulation in this section, MLR statistics do \textit{not} have accurate prior information: indeed, we use exactly the same hyperpriors to compute MLR statistics in every single plot. Furthermore, we consider settings where $\bX$ is highly correlated in order to test whether MLR statistics perform well even when the ``local dependence" assumption from Theorem \ref{thm::avgopt} may not hold. Nonetheless, the MLR statistics perform very well, suggesting that they are robust to misspecification of the prior and strong dependencies in $\bX$.
Throughout this section, we control the FDR at level $q=0.05$ unless otherwise specified. All plots have two standard-deviation error bars, although sometimes the bars are so small they are not visible. In each plot in this section, knockoffs provably control the FDR, so we only plot power. All simulation code is available at \url{https://github.com/amspector100/mlr_knockoff_paper}.
\subsection{Gaussian linear models}\label{subsec::linear_sims}
\begin{figure}
\caption{Power of MLR, LCD, and LSM statistics in a sparse Gaussian linear model with $p=500$ and $50$ non-nulls. Note that when computing fixed-X knockoffs, MLR statistics almost exactly match the power of the oracle procedure which provably upper bounds the power of any knockoff feature statistic. For MX knockoffs, MLR statistics are slightly less powerful than the oracle, although they are still very powerful compared to the lasso-based statistics. Note that perhaps surprisingly, the power of knockoffs can be roughly constant in $n$ in the ``SDP" setting: this is because SDP knockoffs sometimes have identifiability issues \citep{mrcknock2022}. Also, note that the ``LCD" and ``LSM" curves completely overlap in two of the bottom panels, where both methods have zero power. See Appendix \ref{appendix::simdetails} for precise simulation details.}
\label{fig::linear_model}
\end{figure}
In this section, we assume $\by \mid \bX \sim \mcN(\bX \beta, I_n)$ for sparse $\beta$. We draw $\bX \sim \mcN(0, \Sigma)$ for two choices of $\Sigma$: by default, we take $\Sigma$ to correspond to a highly correlated nonstationary AR(1) process, inspired by real design matrices in genetic studies. However, we also analyze a setting where $\Sigma$ is $80\%$ sparse, with the $20\%$ nonzero entries chosen uniformly at random as in an Erd\H{o}s--R\'enyi (ER) random graph, so in this setting $\bX$ does not exhibit local dependencies. We also compute both ``SDP" and ``MVR" knockoffs \citep{mrcknock2022, mxknockoffs2018} to show that MLR statistics perform well in both cases. Please see Appendix \ref{appendix::simdetails} for specific simulation details.
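For concreteness, a nonstationary AR(1) process with unit variances has covariance $\Sigma_{ij} = \prod_{k=i}^{j-1} \rho_k$ for $i < j$, where $\rho_k$ is the correlation between adjacent coordinates. The sketch below constructs such a matrix from a vector of adjacent correlations; the distribution of the $\rho_k$ is a modeling choice (see Appendix \ref{appendix::simdetails} for the exact simulation settings).

```python
import numpy as np

def nonstationary_ar1_cov(rhos):
    """Covariance of a nonstationary AR(1) process with unit variances:
    Sigma[i, j] = prod_{k=i}^{j-1} rhos[k] for i < j, with rhos in (0, 1)."""
    # cumulative log-products give Sigma[i, j] = exp(c[j] - c[i]) for i < j
    c = np.concatenate([[0.0], np.cumsum(np.log(rhos))])
    Sigma = np.exp(c[None, :] - c[:, None])
    # keep the i < j products on both sides of the diagonal
    return np.minimum(Sigma, Sigma.T)
```

Rows of $\bX$ can then be drawn with `np.random.default_rng().multivariate_normal`.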
We compare four types of feature statistics. First, we compute MLR statistics using the identity basis functions as specified in Section \ref{subsec::computation}; this is our default choice, and in plots, ``MLR" refers to this version of MLR statistics. Second, we compute the ``gold-standard" LCD and LSM statistics as described in Section \ref{subsec::badlasso}. Lastly, we compute the oracle MLR statistics---note that for fixed-X knockoffs, the oracle MLR statistics are equivalent to the ``knockoff$\star$" procedure from \cite{whiteout2021}, which is a provable finite-sample upper-bound on the power of \textit{any} knockoff feature statistic. Figure \ref{fig::linear_model} shows the results while varying $n$ in both a low-dimensional setting (using fixed-X knockoffs) and high-dimensional setting (using MX knockoffs). It shows that MLR statistics are substantially more powerful than the lasso-based statistics and, in the fixed-X case, MLR statistics almost perfectly match the power of the oracle procedure. Indeed, this result holds even for the ``Erd\H{o}s--R\'enyi" covariance matrix, where $\bX$ exhibits strong non-local dependencies (in contrast to the theoretical assumptions in Theorem \ref{thm::avgopt}). Furthermore, Figure \ref{fig::comptime} shows that MLR statistics are quite efficient, often faster than computing a cross-validated lasso in the model-X case, and comparable to the cost of computing knockoffs in the fixed-X case.
\begin{figure}
\caption{This figure shows the computation time for various feature statistics in the same setting as Figure \ref{fig::linear_model}, as well as the cost of computing knockoffs. It shows that MLR statistics are competitive with state-of-the-art feature statistics (in the model-X case) or comparable to the cost of computing knockoffs (in the fixed-X case).}
\label{fig::comptime}
\end{figure}
Next, we analyze the performance of MLR statistics in a setting where the prior is badly misspecified. In Figure \ref{fig::misspec}, we vary the sparsity (proportion of non-nulls) between $5\%$ and $40\%$, and we consider settings where (i) the non-null coefficients are heavy-tailed and drawn as i.i.d. Laplace random variables and (ii) the non-nulls are ``light-tailed" and drawn as i.i.d. $\Unif([-1/2, -1/4] \cup [1/4, 1/2])$ random variables. In both settings, the MLR prior assumes the non-null coefficients are i.i.d. $\mcN(0, \tau^2)$. Nonetheless, as shown by Figure \ref{fig::misspec}, MLR statistics still consistently outperform the lasso-based statistics and nearly match the performance of the oracle. This result is of particular interest in the fixed-X setting, where one cannot use cross-validation to adaptively tune hyperparameters to the sparsity level---in contrast, MLR statistics match the power of the oracle at every sparsity level.
\begin{figure}
\caption{This figure shows the power of MLR, LCD, and LSM statistics when varying the sparsity level and drawing the non-null coefficients from a heavy-tailed (Laplace) and light-tailed (Uniform) distribution, with $p=500$ and $n=1250$. The setting is otherwise identical to the AR1 setting from Figure \ref{fig::linear_model}. It shows that the MLR statistics perform well despite using the same (misspecified) prior in every setting.}
\label{fig::misspec}
\end{figure}
Lastly, we verify that the local dependence condition assumed in Theorem \ref{thm::avgopt} holds empirically. We consider the AR(1) setting but modify the parameters so that $\bX$ is extremely highly correlated, with adjacent correlations drawn as i.i.d. $\Beta(50, 1)$ variables. We also consider a setting where $\bX$ is equicorrelated with correlation $95\%$. In both settings, Figure \ref{fig::dependence} shows that the covariance matrix $\cov(\I(W\opt > 0) \mid D)$ has entries which decay off the main diagonal---in fact, the maximum off-diagonal covariance in both examples is $0.07$. As per Section \ref{subsec::avgopt}, these results may not be too surprising, since knockoffs were designed to ensure that the null coordinates of $\sign(W)$ are exactly independent conditional on $(D, \theta)$ regardless of the correlation structure of $\bX$. Empirically, the non-null coordinates also appear to be roughly uncorrelated (in Figure \ref{fig::dependence}, every variable is non-null); we offer some intuition for this in Appendix \ref{appendix::dependencediscussion}, although we cannot perfectly explain this result.
\begin{figure}
\caption{In the AR(1) and equicorrelated settings, this plot shows both the correlation matrix of $\bX$ as well as the conditional covariance of signs of the MLR statistic $W\opt$, computed using the Gibbs sampler for the MLR statistic using model-X knockoffs. It shows that even when $\bX$ is very highly correlated, the signs of $W\opt$ are only locally dependent. Note in this plot, every feature is non-null and the power of knockoffs is $53\%$ and $10\%$ for the AR(1) and equicorrelated settings, respectively.}
\label{fig::dependence}
\end{figure}
\subsection{Generalized additive models}
We now consider generalized additive models, where $Y \mid X \sim \mcN(h(X)^T \beta, 1)$ for some non-linear function $h$ applied element-wise to $X$. We run simulations in the AR(1) setting from Section \ref{subsec::linear_sims} with four choices of $h$: $h(x) = \sin(x), h(x) = \cos(x), h(x) = x^2, h(x) = x^3$. We compare six feature statistics: the linear MLR statistics, MLR statistics based on cubic regression splines with one knot, the LCD, a random forest with swap importances as in \cite{knockoffsmass2018}, DeepPINK \citep{deeppink2018}, a feature statistic based on a feedforward neural network, and the oracle MLR statistics. We note that this setting is much more challenging than the linear regression setting, since these feature statistics must learn (or approximate) the function $h$ in addition to estimating the coefficients $\beta$. For this reason, our simulations in this section are low-dimensional with $n > p$, and we should not expect any feature statistic to match the performance of the oracle MLR statistics.
\begin{figure}
\caption{This plot shows the power of MLR statistics and competitors in generalized additive models where $Y \mid X \sim \mcN(h(X)^T \beta, 1)$, for $h$ applied elementwise to $X$. The x-facets show the choice of $h$, and the y-facets show results for both MVR and SDP knockoffs; note $p=200$ and there are $60$ non-nulls. For this plot only, we choose $q=0.1$ because several of the competitor statistics made almost no discoveries at $q=0.05$. See Appendix \ref{appendix::simdetails} for the corresponding plot with $q=0.05$. Note that in the top-most ``cubic" panel, the MLR (splines) statistic has $100\%$ power and completely overlaps with the oracle.}
\label{fig::nonlin}
\end{figure}
Figure \ref{fig::nonlin} shows that the ``MLR (splines)" feature statistic uniformly outperforms every other feature statistic, often by wide margins. Note that the linear MLR and LCD statistics are powerless in the $\cos$ and quadratic settings, since in this case $h$ is an even function and thus the non-null features have no linear relationship with the response. However, in the $\sin$ and cubic settings, the linear MLR statistics outperform the LCD statistics, suggesting that even when the linear model is misspecified, linear MLR statistics can be powerful as long as there is some linear effect.
\subsection{Logistic regression}\label{subsec::logit_sims}
Lastly, we consider the setting of logistic regression, so $Y \mid X \sim \Bern(s(X^T \beta))$ where $s$ is the sigmoid function. We run the same simulation setting as Figure \ref{fig::linear_model}, except that now $Y$ is binary and we consider low-dimensional settings, since inference in logistic regression is generally more challenging than in linear regression. The results, shown in Figure \ref{fig::logistic}, indicate that MLR statistics generally perform better than the LCD, although there is a substantial gap between the performances of the MLR statistics and the oracle MLR statistics. That said, it is worth noting that the MLR statistics take roughly three times longer to compute than the LCD in this setting due to the data-augmentation step in the Gibbs sampler for binary regression (see Appendix \ref{appendix::gibbs}). It may be possible to speed up the Gibbs sampler, but we leave this possibility to future work.
\begin{figure}
\caption{This plot shows the power of MLR statistics compared to the cross-validated LCD in logistic regression, with $p=500$, $50$ non-nulls, and $n$ varied between $1500$ and $4500$. The setting is otherwise identical to that of Figure \ref{fig::linear_model}.}
\label{fig::logistic}
\end{figure}
\section{Real applications}\label{sec::data}
In this section, we apply MLR statistics to three real datasets which have been previously analyzed using knockoffs. In each case, MLR statistics have comparable or higher power than competitor statistics. All code and data are available at \url{https://github.com/amspector100/mlr_knockoff_paper}.
\subsection{HIV drug resistance}
We begin with the HIV drug resistance dataset from \cite{rhee2006data}, which was previously analyzed using knockoffs by, e.g., \cite{fxknock}. The dataset consists of genotype data from roughly $750$ HIV samples as well as drug resistance measurements for $16$ different drugs, and the goal of our analysis is to discover genetic variants which affect drug resistance for each of the drugs. An advantage of this dataset is that \cite{rhee2005corroborate} published treatment-selected mutation panels for exactly this setting, so we can check whether any discoveries made by knockoffs are corroborated by this separate analysis.
We preprocess and model the data in exactly the same way as \cite{fxknock}, and following \cite{fxknock}, we apply fixed-X knockoffs with LCD, LSM, and MLR statistics and FDR level $q=0.05$. For both MVR and SDP knockoffs, Figure \ref{fig::hiv} shows the total number of discoveries made by each statistic, stratified by whether each discovery is corroborated by \cite{rhee2005corroborate}. For SDP knockoffs, the MLR statistics make nearly an order of magnitude more discoveries than the competitor methods with a comparable corroboration rate. For MVR knockoffs, all three statistics perform roughly equally well, although MLR statistics make slightly more discoveries with a slightly higher corroboration rate. Overall, in this setting, MLR statistics are competitive with and sometimes substantially outperform the lasso-based statistics. See Appendix \ref{appendix::realdata} for further details and specific results for each drug.
\begin{figure}
\caption{This figure shows the total number of discoveries made by the LCD, LSM, and MLR feature statistics in the HIV drug resistance dataset from \cite{rhee2006data}, summed across all $16$ drugs.}
\label{fig::hiv}
\end{figure}
\subsection{Financial factor selection}\label{subsec::fundrep}
Next, we consider a ``fund replication" dataset inspired by \cite{challet2021}. In finance, analysts often aim to select a few key factors which drive the performance of an asset such as an index fund or hedge fund. \cite{challet2021} applied knockoffs to factor selection, and as a benchmark, they applied fixed-X knockoffs with the LCD to test which US equities explained the performance of an index fund for the energy sector (ticker name XLE). Since the XLE index fund is essentially a weighted combination of a known list of US equities, \cite{challet2021} were able to tell whether each discovery made by knockoffs was a true or false positive.
We perform the same analysis for ten index funds describing key sectors of the US economy, including index funds for energy (XLE), technology (XLK), real estate (XLRE), and more (see Appendix \ref{appendix::realdata} for the full list). In our analysis, $\by$ is the daily log return of the index fund and $\bX$ consists of the daily log returns of each stock in the S\&P 500 since $2013$, so $p \approx 500$ and $n \approx 2300$, although for several of the index funds, fewer than ten years of data are available. We compute fixed-X knockoffs using both MVR and SDP knockoffs and apply LCD, LSM, and MLR statistics. Figure \ref{fig::fund_rep} shows the total number of true and false discoveries summed across all ten index funds with FDR level $q=0.05$. In particular, Figure \ref{fig::fund_rep} shows that MLR statistics make substantially more discoveries than either the LCD or LSM statistics: indeed, MLR statistics make $35\%$ and $78\%$ more discoveries than the LCD statistic for MVR and SDP knockoffs (respectively), and more than five times as many discoveries as the LSM statistic. We also note that the FDP (averaged across all ten index funds) is well below $5\%$ for each method analyzed in Figure \ref{fig::fund_rep}, as shown in Appendix \ref{appendix::realdata}. Thus, MLR statistics substantially outperform the lasso-based statistics in this analysis.
\begin{figure}
\caption{This figure shows the total number of discoveries made by each method in the fund replication dataset inspired by \cite{challet2021}, summed across all ten index funds. See Appendix \ref{appendix::realdata} for a table showing that the average FDP for each method is below the nominal level of $q=0.05$.}
\label{fig::fund_rep}
\end{figure}
\subsection{Graphical model discovery for gene networks}
Lastly, we consider the problem of recovering a gene network from single-cell RNAseq data. Our analysis is inspired by that of \cite{nodewiseknock}, who analyze a single-cell RNAseq dataset from \cite{zheng2017ggmdata} by modeling the gene expression log-counts as a Gaussian graphical model (see \cite{nodewiseknock} for justification of the Gaussian assumption and the precise data processing steps). In particular, \cite{nodewiseknock} developed an extension of fixed-X knockoffs to detect edges in Gaussian graphical models while controlling the false discovery rate across discovered edges, and they applied this to the RNAseq data for the $50$ genes with the highest variance from \cite{zheng2017ggmdata}. Note that the ground truth is not available in this setting, so following \cite{nodewiseknock}, we only evaluate methods based on the number of discoveries they make.
We analyze the same dataset using the same model, but we instead compare the performance of LCD, LSM, and MLR statistics for both MVR and SDP fixed-X knockoffs. Since no method made any discoveries at level $q=0.05$, we plot the number of discoveries as a function of $q$, which we vary between $0$ and $0.5$. Figure \ref{fig::ggm} shows the results: the MLR statistics make the most discoveries for nearly every value of $q$, although often by a relatively small margin. That said, for small values of $q$, the LSM statistic performs poorly, and for large values of $q$, the LCD statistic performs poorly, whereas the MLR statistic is the most consistently powerful.
\begin{figure}
\caption{This figure shows the number of discoveries made by LCD, LSM, and MLR statistics when used to detect edges in a Gaussian graphical model for gene expression data, as in \cite{nodewiseknock}.}
\label{fig::ggm}
\end{figure}
\section{Discussion}\label{sec::discussion}
This paper introduced masked likelihood ratio statistics, a class of knockoff statistics which are provably asymptotically optimal under mild regularity conditions. We show in simulations and three data applications that in addition to having appealing theoretical properties, MLR statistics are efficient and powerful in practical applications. However, our work leaves open several possible directions for future research.
\begin{itemize}[topsep=0pt]
\item Our theory shows that MLR statistics are asymptotically \textit{average-case} optimal over a user-specified prior on the unknown parameters. However, it might be worthwhile to develop \textit{minimax-optimal} knockoff-statistics, e.g., by computing a ``least-favorable" prior.
\item A limitation of our theory is that it requires a ``local dependency" condition which is challenging to verify analytically, although our local dependency condition can be diagnosed using the data at hand. It might be interesting to investigate (i) precisely when this local dependency condition holds and (ii) whether MLR statistics are still in any way optimal when it fails to hold.
\item Our paper only considers a few instantiations of MLR statistics designed for linear models, generalized additive models, and binary generalized linear models. However, other classes of MLR statistics could be more powerful or more computationally efficient. For example, it might be possible to fit a MLR statistic based on regression trees using priors from the literature on Bayesian additive regression trees \citep{bart2010}. Similarly, it is not clear how to extend the MLR statistics in Section \ref{subsec::computation} to allow them to detect interactions between features without substantially increasing the computational burden. Lastly, it might be worthwhile to improve the efficiency of MLR statistics in the case where $\by$ is binary. We leave such questions to future work. \end{itemize}
\appendix
\section{Main proofs and interpretation}\label{appendix::proofs}
In this section, we prove the main results of the paper. We also offer a few additional discussions of these results.
\subsection{Knockoffs as inference on masked data}
We start by proving Propositions \ref{prop::mxdistinguish}, \ref{prop::fxdistinguish}, and a related corollary which will be useful when proving Theorem \ref{thm::avgopt}. It may be helpful to recall that throughout, we only consider feature statistics $W$ which are nonzero with probability one, since other feature statistics are provably inadmissible (one can always make more discoveries by adding a tiny amount of noise to the entries of $W$ which are zeros).
\begingroup \def\ref{prop::unhappy}{\ref{prop::mxdistinguish}} \begin{proposition} Let $\widetilde{\mathbf{X}}$ be model-X knockoffs such that $\bX_j \ne \widetilde{\mathbf{X}}_j$ for $j \in [p]$. Then $W = w([\bX, \widetilde{\mathbf{X}}], \by)$ is a valid feature statistic if and only if \begin{enumerate}[topsep=0pt]
\setlength{\parskip}{0pt}
\setlength{\itemsep}{0pt plus 1pt}
\item $|W|$ is a function of the masked data $D$
\item there exist \emph{distinguishing} functions $\widehat{\bX}_j = g_j(D)$ such that $W_j > 0$ if and only if $\widehat{\bX}_j = \bX_j$. \end{enumerate}
\begin{proof} \underline{Forward direction}: Suppose $W$ is a valid feature statistic; we will now show conditions (i) and (ii). To show (i), note that observing $\{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p$ is equivalent to observing $[\bX, \widetilde{\mathbf{X}}]_{\swap{J}}$ for some unobserved $J \subset [p]$ chosen uniformly at random, since we can assign each of $\bX_j, \widetilde{\mathbf{X}}_j$ to the two slots uniformly at random. Define $[\bX^{(1)}, \bX^{(2)}] \defeq [\bX, \widetilde{\mathbf{X}}]_{\swap{J}}$ and let $W' = w([\bX^{(1)}, \bX^{(2)}], \by)$. Then by the swap invariance property of knockoffs, we have that $|W| = |W'|$, and $W'$ is a function of $D$: thus, $|W|$ is a function of $D$ as desired.
To show (ii), we construct $g_j$ as follows: \begin{equation*}
\widehat{\bX}_j \defeq g_j(\by, \{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p) \defeq g_j(\by, [\bX,\widetilde{\mathbf{X}}]_{\swap{J}}) = \begin{cases}
\bX_j^{(1)} & W_j' > 0 \\
\bX_j^{(2)} & W_j' < 0.
\end{cases} \end{equation*} To show $g_j$ is well-defined (does not depend on $J$), it suffices to show $\widehat{\bX}_j = \bX_j$ if and only if $W_j > 0$, which also shows (ii). There are two cases. In the first case, if $j \not \in J$, then $\bX_j^{(1)} = \bX_j$ by definition of $\bX^{(1)}$ and also $W_j' = W_j$ by the ``flip-sign" property of $w$. Thus $\widehat{\bX}_j = \bX_j^{(1)} = \bX_j$ if and only if $W_j > 0$. The second case is analogous: if $j \in J$, then $W_j' = - W_j$, so $\widehat{\bX}_j = \bX_j^{(2)} = \bX_j$ if and only if $W_j > 0$. In both cases, $W_j > 0$ if and only if $\widehat{\bX}_j = \bX_j$, proving (ii).
\underline{Backwards direction}: To show $W = w([\bX, \widetilde{\mathbf{X}}], \by)$ is a valid feature statistic, it suffices to show the flip-sign property, namely that $W' \defeq w([\bX, \widetilde{\mathbf{X}}]_{\swap{J}}, \by) = -1_{J} \odot W$, where $\odot$ denotes elementwise multiplication and $-1_{J}$ is the vector of all ones but with negative ones at the indices in $J$. To do this, note that $D$ is invariant to swaps of $\bX$ and $\widetilde{\mathbf{X}}$, so $|W| = |W'|$ because by assumption $|W|, |W'|$ are functions of $D$. Furthermore, for any $j \in [p]$, we have that $W_j > 0$ if and only if $\widehat{\bX}_j = \bX_j$; however, since $\widehat{\bX}_j$ is also a function of $D$, we have that $\sign(W_j) = \sign(W'_j)$ if and only if $j \not \in J$. This completes the proof. \end{proof} \end{proposition} \addtocounter{proposition}{-1} \endgroup
The proof of Proposition \ref{prop::fxdistinguish} is identical to the proof of Proposition \ref{prop::mxdistinguish}, so we omit it for brevity. However, the following corollary of Propositions \ref{prop::mxdistinguish} and \ref{prop::fxdistinguish} will be important when proving Theorem \ref{thm::avgopt}.
\begin{corollary}\label{cor::determdiff} Let $W, W'$ be two knockoff feature statistics. Then in the same setting as Propositions \ref{prop::mxdistinguish} and \ref{prop::fxdistinguish}, the event $\sign(W_j) = \sign(W_j')$ is a deterministic function of the masked data $D$. \begin{proof} We give the proof for the model-X case; the fixed-X case is analogous. By Proposition \ref{prop::mxdistinguish}, there exist deterministic functions $\widehat{\bX}_j = g_j(D)$, $\widehat{\bX}_j' = g_j'(D)$ such that $W_j > 0 \Leftrightarrow \widehat{\bX}_j = \bX_j$ and $W_j' > 0 \Leftrightarrow \widehat{\bX}_j' = \bX_j$. (Note this does not exclude the possibility of randomized feature statistics since $D$ includes auxiliary random noise.) Since $\widehat{\bX}_j, \widehat{\bX}_j'$ must take one of exactly two distinct values, this implies that \begin{equation*}
\sign(W_j) = \sign(W_j') \Leftrightarrow \widehat{\bX}_j = \widehat{\bX}_j' \Leftrightarrow g_j(D) = g_j'(D), \end{equation*} where we note that the right-most expression is clearly a deterministic function of $D$. This completes the proof. \end{proof} \end{corollary}
\subsection{How far from optimality are MLR statistics in finite samples?}\label{subsec::finitesampleopt}
Our main result (Theorem \ref{thm::avgopt}) shows that MLR statistics \textit{asymptotically} maximize the number of discoveries made by knockoffs. However, before rigorously proving Theorem \ref{thm::avgopt}, we give an intuitive argument that even in finite samples, MLR statistics are likely nearly optimal.
Recall from Section \ref{subsec::mlr} that in finite samples, MLR statistics (i) maximize $\P(W_j\opt > 0 \mid D)$ for each $j \in [p]$ and (ii) ensure that the absolute values of the feature statistics $\{|W_j\opt|\}_{j=1}^p$ have the same order as the probabilities $\{\P(W_j\opt > 0 \mid D)\}_{j=1}^p$. As discussed previously, this strategy is exactly optimal when the vector of signs $\sign(W\opt)$ are conditionally independent given $D$, but in general, it is possible to exploit conditional dependencies among the coordinates of $\sign(W\opt)$ to slightly improve power. However, we argue below that it is challenging to even slightly improve power without unrealistically strong dependencies.
To see why this is the case, consider a very simple setting with $p=6$ features where we aim to control the FDR at level $q=0.2$, so knockoffs will make discoveries exactly when the five $W$-statistics with the largest absolute values all have positive signs. Suppose that $W_1, \dots, W_5 \mid D$ are perfectly correlated and symmetric, so $\P(W_1 > 0 \mid D) = \dots = \P(W_5 > 0 \mid D) = 70\%$, and $W_6 \Perp W_{1:5} \mid D$ satisfies $\P(W_6 > 0 \mid D) = 90\%$. Since $W_6$ has the highest chance of being positive, MLR statistics will assign it the highest absolute value, in which case knockoffs will make discoveries with probability $70\% \cdot 90\% = 63\%$. However, in this example, knockoffs will be more powerful if we ensure that $W_1, \dots, W_5$ have the five largest absolute values, since their signs are perfectly correlated and thus $\P(W_1 > 0, \dots, W_5 > 0) = 70\% > 63\%$.\footnote{Note that there is nothing special about positive correlations in this example: one can also find similar examples where negative correlations among $W\opt$ can be prioritized to slightly increase power.}
This example has two properties which shed light on the more general situation. First, to even get a slight improvement in power required extremely strong dependencies among $\sign(W\opt)$, which is not realistic: in Figure \ref{fig::dependence}, we see empirically that $\sign(W\opt)$ appear to be almost completely conditionally uncorrelated even when $\bX$ is extremely highly correlated. Thus, although it may be possible to slightly improve power by exploiting dependencies among $\sign(W\opt)$, the \textit{magnitude} of the improvement in power is likely to be small. Second, the reason that it is possible to exploit dependencies to improve power is because there is a ``hard" threshold where knockoffs can only make any discoveries if it makes at least 5 discoveries, and exploiting conditional correlations among the vector $\sign(W\opt)$ can slightly improve the probability that we reach that initial threshold. However, this ``threshold" phenomenon is less important in situations where knockoffs are guaranteed to make at least a few discoveries; thus, if the number of discoveries grows with $p$, this effect should be relatively small asymptotically.
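The claimed probabilities in the example above can be checked by direct Monte Carlo simulation; the variable names below are our own.

```python
import numpy as np

# Monte Carlo check of the p = 6, q = 0.2 example: W_1..W_5 share one
# sign that is positive w.p. 0.7, and W_6 is independent, positive w.p. 0.9.
rng = np.random.default_rng(0)
n_mc = 100_000
s15 = rng.random(n_mc) < 0.7   # common positive-sign event for W_1, ..., W_5
s6 = rng.random(n_mc) < 0.9    # positive-sign event for W_6

# Discoveries occur iff the five largest-|W| statistics are all positive.
# MLR ordering (|W_6| largest): need W_6 and the common sign positive.
power_mlr = np.mean(s6 & s15)
# Alternative ordering (W_1, ..., W_5 get the five largest |W|).
power_alt = np.mean(s15)
```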
\subsection{Proof of Theorem \ref{thm::avgopt}}\label{subsec::mainproof}
There are two main ideas behind Theorem \ref{thm::avgopt}. First, for any feature statistic $W$, we will compare the power of $W$ to the power of a ``soft" version of the SeqStep procedure, which depends only on the conditional expectation of $\sign(W)$ instead of the realized values of $\sign(W)$. Roughly speaking, if the coordinates of $\sign(W)$ obey a strong law of large numbers, the power of SeqStep and the power of the ``soft" version of SeqStep will be the same asymptotically. Second, we will show that MLR statistics $W\opt$ exactly maximize the power of the ``soft" version of the SeqStep procedure. Taken together, these two results imply that MLR statistics are asymptotically optimal.
To make this precise, for a feature statistic $W$, let $\mathrm{sorted}(W)$ denote $W$ sorted in decreasing order of its absolute values, and let $R = \I(\mathrm{sorted}(W) > 0) \in \{0,1\}^p$ be the vector indicating where $\mathrm{sorted}(W)$ has positive entries. The number of discoveries made by knockoffs only depends on $R$. Indeed, for any vector $\eta \in [0,1]^p$ and any desired FDR level $q \in (0,1)$, define \begin{equation}\label{eq::psidef}
\psi_q(\eta) \defeq \max_{k \in [p]} \left\{k : \frac{k-k \bar \eta_k + 1}{k \bar \eta_k} \le q\right\} \text{ and } \tau_q(\eta) = \left\lceil \frac{\psi_q(\eta)+1}{1+q}\right\rceil, \end{equation} where by convention we set $\frac{x}{0} = \infty$ for any $x \in \R_{> 0}$. It turns out that knockoffs makes exactly $\tau_q(R)$ discoveries. For brevity, we refer the reader to Lemma B.3 of \cite{mrcknock2022} for a formal proof of this: however, to see this intuitively, note that $k - k \bar R_k$ (resp. $k \bar R_k$) counts the number of negative (resp. positive) entries in the first $k$ coordinates of $\mathrm{sorted}(W)$, so this definition lines up with the definition of the data-dependent threshold in Section \ref{subsec::knockreview}.
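For concreteness, $\psi_q$ and $\tau_q$ can be computed in a single pass over the partial sums of $\eta$. The sketch below is our own illustration; in particular, the convention that an empty maximum yields zero discoveries is an edge-case assumption made here, not something stated in Equation (\ref{eq::psidef}):

```python
import math

def psi_tau(eta, q):
    """Compute psi_q(eta) and tau_q(eta) from the display above.

    `eta` is a vector in [0,1]^p ordered by decreasing |W|.  We treat an
    empty maximum as psi = 0 and return tau = 0 in that case (an assumed
    convention for the edge case with no valid threshold).
    """
    psi, partial_sum = 0, 0.0
    for k in range(1, len(eta) + 1):
        partial_sum += eta[k - 1]          # equals k * bar(eta)_k
        negatives = k - partial_sum + 1    # k - k * bar(eta)_k + 1
        if partial_sum > 0 and negatives / partial_sum <= q:
            psi = k
    tau = math.ceil((psi + 1) / (1 + q)) if psi > 0 else 0
    return psi, tau
```

For instance, `psi_tau([1]*10, 0.2)` returns `(10, 10)`, while `psi_tau([1, 1, 1, 1, 0], 0.2)` returns `(0, 0)`, since a single negative sign among five statistics already exceeds the $q = 0.2$ budget.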
Now, let $\delta \defeq \E[R \mid D] \in [0,1]^p$ be the conditional expectation of $R$ given masked data $D$ (defined in Equation \ref{eq::maskeddata}). The ``soft" version of SeqStep simply applies the functions $\psi_q$ and $\tau_q$ to the conditional expectation $\delta$ instead of the realized indicators $R$. Intuitively speaking, our goal will be to apply a law of large numbers to show the following asymptotic result: \begin{equation*}
|\tau_q(\delta) - \tau_q(R)| = o_p\left(\# \text{ of non-nulls}\right). \end{equation*} Once we have shown this, it will be straightforward to show that MLR statistics are asymptotically optimal, since MLR statistics maximize $\tau_q(\delta)$ in finite samples.
We now begin to prove Theorem \ref{thm::avgopt} in earnest. In particular, the following pair of lemmas tells us that if $\bar R_k$ converges to $\bar \delta_k$ uniformly in $k$, then $\tau_q(\delta) \approx \tau_q(R)$.
\begin{lemma}\label{lem::ddt2slln} Let $W = w([\bX, \widetilde{\mathbf{X}}], \by)$ be any feature statistic with $R, \delta, \psi_q, \tau_q$ as defined earlier. Fix any $k_0 \in [p]$ and sufficiently small $\epsilon > 0$ such that $\eta \defeq 3(1+q)\epsilon < q$. Define the event \begin{equation*}
A_{k_0,\epsilon} = \left\{\max_{k_0 \le k \le p} |\bar R_k - \bar \delta_k| \le \epsilon\right\}. \end{equation*} Then on the event $A_{k_0,\epsilon}$, we have that \begin{equation}\label{eq::bound_onA}
\frac{1}{1+3\epsilon} \tau_{q-\eta}(R) - k_0 - 1 \le \tau_q(\delta) \le (1+3\epsilon) \tau_{q+\eta}(R) + k_0 + 1. \end{equation} This implies that \begin{align}
|\tau_q(R) - \tau_q(\delta)|
&\le
p \I(A_{k_0,\epsilon}^c) + \big[\tau_{q+\eta}(R) - \tau_{q-\eta}(R)\big] + k_0 + 1 + 3 \epsilon \tau_{q+\eta}(R). \label{eq::ddt2slln} \end{align} \begin{proof} Note the proof is entirely algebraic (there is no probabilistic content). We proceed in two steps, first showing Equation (\ref{eq::bound_onA}), then Equation (\ref{eq::ddt2slln}).
\underline{Step 1}: We now prove Equation (\ref{eq::bound_onA}). To start, define the sets \begin{equation*}
\mcR = \left\{k \in [p] : \frac{k - k \bar R_k + 1}{k \bar R_k} \le q + \eta \right\} \text{ and } \mcD = \left\{k \in [p] : \frac{k - k \bar \delta_k + 1}{k \bar \delta_k} \le q \right\} \end{equation*} and recall that by definition $\psi_{q + \eta}(R) = \max(\mcR)$, $\psi_{q}(\delta) = \max(\mcD)$. To analyze the difference between these quantities, fix any $k \in \mcD \setminus \mcR$. Then by definition of $\mcD$ and $\mcR$, we know \begin{align*}
\frac{k - k \bar \delta_k + 1}{k \bar \delta_k} \le q < q + \eta < \frac{k - k \bar R_k + 1}{k \bar R_k}.
\end{align*} However, Lemma \ref{lem::alg} (proved in a moment) tells us that this implies the following algebraic inequality: \begin{equation*}
\bar \delta_k - \bar R_k \ge \frac{\eta}{3(1+q)} = \frac{3(1+q)\epsilon}{3(1+q)} = \epsilon. \end{equation*} However, on the event $A_{k_0, \epsilon}$ this cannot occur for any $k \ge k_0$. Therefore, on the event $A_{k_0, \epsilon}$, $\mcD \setminus \mcR \subset \{1, \dots, k_0-1\}$. This implies that \begin{equation}\label{eq::psidif1}
\psi_q(\delta) - \psi_{q+\eta}(R) = \max(\mcD) - \max(\mcR) \le \begin{cases}
0 & \max(\mcD) \ge k_0 \\
k_0 - 1 & \max(\mcD) < k_0.
\end{cases} \end{equation} We can combine these conditions by writing that $\psi_q(\delta) - \psi_{q+\eta}(R) \le k_0 - 1$. Using the definition of $\tau_q(\cdot)$, we conclude \begin{align*}
\tau_q(\delta) - \tau_{q+\eta}(R)
&=
\ceil{\frac{\psi_q(\delta)+1}{1+q}} - \ceil{\frac{\psi_{q+\eta}(R)+1}{1+q+\eta}} \\
&\le
1 + \frac{\psi_q(\delta)+1}{1+q} - \frac{\psi_{q+\eta}(R)+1}{1+q+\eta} \\
&\le
2 + \frac{\psi_q(\delta) - \psi_{q+\eta}(R)}{1+q} + \left(\frac{1}{1+q} - \frac{1}{1+q+\eta}\right) \psi_{q+\eta}(R) \\
&=
2 + \frac{\psi_q(\delta) - \psi_{q+\eta}(R)}{1+q} + \frac{3 \epsilon}{1+q+\eta} \psi_{q+\eta}(R)& \text{ by def. of $\eta$}\\
&\le
2 + \frac{k_0 - 1}{1+q} + \frac{3 \epsilon}{1+q+\eta} \psi_{q+\eta}(R)&\text{ by Eq. (\ref{eq::psidif1})}\\
&\le
k_0 + 1 + 3 \epsilon \tau_{q+\eta}(R) & \text{ by def. of $\tau_{q+\eta}(R)$.} \end{align*} This proves the upper bound, namely that $\tau_q(\delta) \le (1+3\epsilon) \tau_{q+\eta}(R) + k_0 + 1$. To prove the lower bound, note that we can swap the roles of $R$ and $\delta$ and apply the upper bound to $q' = q - \eta$. Then if we take $\eta' = 3(1+q') \epsilon < \eta < 1$, applying the upper bound yields \begin{equation*}
\tau_{q'}(R) \le (1+3\epsilon) \tau_{q'+\eta'}(\delta) + k_0 + 1. \end{equation*} Observe that $\tau_q(\cdot)$ is nondecreasing in $q$, and since $q' = q - \eta$ and $\eta' < \eta$, we have that (1) $\tau_{q-\eta}(R) = \tau_{q'}(R)$ and (2) $q' + \eta' = q - \eta + \eta' < q$. Therefore, by monotonicity, we conclude \begin{equation*}
\tau_{q-\eta}(R) \le \tau_{q'}(R) \le (1+3\epsilon) \tau_{q'+\eta'}(\delta) + k_0 + 1 \le (1+3\epsilon) \tau_{q}(\delta) + k_0 + 1. \end{equation*} This implies the lower bound $\frac{1}{1+3\epsilon} \tau_{q-\eta}(R) - k_0 -1 \le \tau_q(\delta)$.
\underline{Step 2}: Now, we show Equation (\ref{eq::ddt2slln}) follows from Equation (\ref{eq::bound_onA}). To see this, we consider the two cases where $\tau_q(\delta) \ge \tau_q(R)$ and vice versa and apply Equation (\ref{eq::bound_onA}). In particular, on the event $A_{k_0, \epsilon}$, then: \begin{align*}
|\tau_q(\delta) - \tau_q(R)|
&=
\begin{cases}
\tau_q(\delta) - \tau_q(R) & \tau_q(\delta) \ge \tau_q(R) \\
\tau_q(R) - \tau_q(\delta) & \tau_q(\delta) \le \tau_q(R)
\end{cases} \\
&\le
\begin{cases}
(1+3\epsilon) \tau_{q+\eta}(R) + k_0 + 1 - \tau_q(R) & \tau_q(\delta) \ge \tau_q(R) \\
\tau_q(R) - \frac{1}{1+3\epsilon} \tau_{q-\eta}(R) + k_0 + 1 & \tau_q(\delta) \le \tau_q(R)
\end{cases} & \text{ by Eq. (\ref{eq::bound_onA})}\\
&=
k_0 + 1 + \begin{cases}
\tau_{q+\eta}(R) - \tau_q(R) + 3\epsilon \tau_{q+\eta}(R) & \tau_q(\delta) \ge \tau_q(R) \\
\tau_q(R) - \tau_{q-\eta}(R) + \frac{3\epsilon}{1+3\epsilon} \tau_{q-\eta}(R)& \tau_q(\delta) \le \tau_q(R)
\end{cases} \\
&\le k_0 + 1 + \tau_{q+\eta}(R) - \tau_{q-\eta}(R) + 3 \epsilon \tau_{q+\eta}(R), \end{align*}
where the last line follows because $\tau_q(R)$ is monotone in $q$. This implies Equation (\ref{eq::ddt2slln}), since on the event $A_{k_0, \epsilon}^c$ we trivially have $|\tau_q(\delta) - \tau_q(R)| \le p$ because $\tau_q(\delta), \tau_q(R) \in [p]$. \end{proof} \end{lemma}
The following lemma proves the simple algebraic inequality used in the proof of Lemma \ref{lem::ddt2slln}.
\begin{lemma}\label{lem::alg} For any $x, y \in [0,1]$, $k \in \N,$ and any $\gamma \in (0,1)$, suppose that $\frac{1+k-kx}{kx} \le q < q + \gamma \le \frac{1+k-ky}{ky}$. Then \begin{equation*}
x - y \ge \frac{\gamma}{(1+q)(1+q+\gamma)} \ge \frac{\gamma}{3(1+q)}. \end{equation*} \begin{proof} By assumption, $x \ne 0$, since otherwise $\frac{1+k-kx}{kx} = \infty$ by convention. For $x > 0$, we have that \begin{equation}\label{eq::algxeq}
\frac{1+k-kx}{kx} \le q \implies 1 + k - kx \le kqx \implies x \ge \frac{k+1}{k(1+q)}. \end{equation} Now, there are two cases. If $y = 0$, the inequality holds trivially: \begin{equation*}
x - y = x \ge \frac{k+1}{k} \cdot \frac{1}{1+q} \ge \frac{\gamma}{3(1+q)}. \end{equation*} Alternatively, if $y > 0$, we observe similarly to before that \begin{equation}\label{eq::algyeq}
\frac{1+k-ky}{ky} \ge q+\gamma \implies y \le \frac{k+1}{k(1+q+\gamma)}. \end{equation} Combining Equations (\ref{eq::algxeq})--(\ref{eq::algyeq}) yields the result: \begin{align*}
x - y
&\ge
\frac{k+1}{k} \left(\frac{1}{1+q} - \frac{1}{1+q+\gamma}\right)
=
\frac{k+1}{k} \frac{\gamma}{(1+q+\gamma)(1+q)}
\ge
\frac{\gamma}{3(1+q)}. \end{align*} \end{proof} \end{lemma}
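Although the proof is purely algebraic, the inequality is easy to stress-test numerically. The following sketch (our own illustration) samples random $(k, x, y, \gamma)$ and checks the conclusion whenever the hypothesis of the lemma holds:

```python
import random

# Randomized sanity check of Lemma alg: whenever (1 + k - kx)/(kx) <= q and
# (1 + k - ky)/(ky) >= q + gamma, the gap x - y is at least
# gamma / ((1+q)(1+q+gamma)) >= gamma / (3(1+q)).
random.seed(1)
q, n_checked = 0.2, 0
for _ in range(20_000):
    k = random.randint(1, 50)
    gamma = random.uniform(0.01, 0.79)  # keep q + gamma < 1
    x, y = random.random(), random.random()
    fx = (1 + k - k * x) / (k * x) if x > 0 else float("inf")
    fy = (1 + k - k * y) / (k * y) if y > 0 else float("inf")
    if fx <= q and fy >= q + gamma:  # hypothesis of the lemma
        assert x - y >= gamma / ((1 + q) * (1 + q + gamma)) - 1e-12
        assert x - y >= gamma / (3 * (1 + q)) - 1e-12
        n_checked += 1
print(n_checked)
```

With this seed, well over a thousand of the random draws satisfy the hypothesis, and the conclusion holds in every case.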
We are now ready to prove Theorem \ref{thm::avgopt}. As a reminder, we consider an asymptotic regime with data $\bX^{(n)} \in \R^{n \times p_n}, \by^{(n)} \in \R^{n}$ and knockoffs $\widetilde{\mathbf{X}}^{(n)}$, where the likelihood is denoted by $L(\by\upn; \bX\upn, \theta\upn)$ and $\pi\upn$ is a prior on the unknown parameters $\theta\upn$. We let $D^{(n)}$ denote the masked data for knockoffs as defined in Section \ref{subsec::maskeddata} and let $s\upn$ denote the expected number of non-nulls under $\pi\upn$. We will analyze the limiting \textit{empirical power} of feature statistics $W\upn = w_n([\bX\upn, \widetilde{\mathbf{X}}\upn], \by\upn)$ with rejection set $S\upn(q)$, defined as the expected number of discoveries divided by the expected number of non-nulls: \begin{equation}
\widetilde{\mathrm{Power}}_q(w_n) = \frac{\E[|S\upn(q)|]}{s\upn}. \tag{\ref{eq::powerdef}} \end{equation}
For convenience, we restate Theorem \ref{thm::avgopt} and then prove it.
\begingroup \def\ref{thm::avgopt}{\ref{thm::avgopt}} \begin{theorem} For each $n$, let $W^{\opt} = w_n\opt([\bX^{(n)}, \widetilde{\mathbf{X}}^{(n)}], \by^{(n)})$ denote the MLR statistics with respect to $\pi\upn$ and let $W = w_n([\bX\upn, \widetilde{\mathbf{X}}\upn], \by\upn)$ denote any other sequence of feature statistics. Let $(\by\upn, \bX\upn, \theta\upn)$ be any sequence such that the following three conditions hold: \begin{itemize}
\item Assume $\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt)$ and $\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n)$ exist for each $q \in (0,1)$.
\item The expected number of non-nulls grows faster than $\log(p_n)^4$. Formally, assume that for some $\gamma > 0$, $\lim_{n \to \infty} \frac{s\upn}{\log(p_n)^{4+\gamma}} = \infty$.
\item Assume that conditional on $D\upn$, the covariance between the signs of $W\opt$ decays exponentially. That is, there exist constants $C\ge 0, \rho \in (0,1)$ such that \begin{equation}
|\cov(\I(W\opt_i > 0), \I(W\opt_j > 0) \mid D\upn)| \le C \rho^{|i-j|}. \tag{\ref{eq::expcovdecay}} \end{equation} \end{itemize}
Then for all but countably many $q \in (0,1)$, \begin{equation}
\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt) \ge \lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n). \end{equation} \begin{proof} The proof proceeds in three main steps: however, we begin by introducing some notation and outlining the strategy of the proof. Following Lemma \ref{lem::ddt2slln}, let $R = \I(\mathrm{sorted}(W) > 0)$ and $R\opt = \I(\mathrm{sorted}(W\opt) > 0)$, and let $\delta = \E[R \mid D\upn]$ and $\delta\opt = \E[R\opt \mid D\upn]$ be their conditional expectations. (Note that $W, R, \delta, W\opt, R\opt$ and $\delta\opt$ all change with $n$---however, we omit this dependency to lighten the notation.) As in Equation (\ref{eq::psidef}), we can write the number of discoveries made by $W$ and $W\opt$ as a function of $R$ and $R\opt$, respectively, so: \begin{equation*}
\widetilde{\mathrm{Power}}_q(w_n\opt) - \widetilde{\mathrm{Power}}_q(w_n) = \frac{\E\left[\tau_q(R\opt)\right]}{s\upn} - \frac{\E\left[\tau_q(R)\right]}{s\upn}. \end{equation*} We will show that the limit of this quantity is nonnegative, and the main idea is to make the approximations $\tau_q(R\opt) \approx \tau_q(\delta\opt)$ and $\tau_q(R) \approx \tau_q(\delta)$. In particular, we can decompose \begin{align}
\widetilde{\mathrm{Power}}_q(w_n\opt) - \widetilde{\mathrm{Power}}_q(w_n)
&=
\frac{\E\left[\tau_q(R\opt) - \tau_q(R)\right]}{s\upn} \nonumber \\
&=
\frac{\E[\tau_q(R\opt) - \tau_q(\delta\opt)]}{s\upn} + \frac{\E[\tau_q(\delta\opt) - \tau_q(\delta)]}{s\upn} + \frac{\E\left[\tau_q(\delta) - \tau_q(R)\right]}{s\upn} \nonumber \\
&\ge
\frac{\E[\tau_q(\delta\opt) - \tau_q(\delta)]}{s\upn} - \frac{\E|\tau_q(R\opt) - \tau_q(\delta\opt)|}{s\upn} - \frac{\E|\tau_q(\delta) - \tau_q(R)|}{s\upn}. \label{eq::mainstrategy} \end{align} In particular, Step 1 of the proof is to show that $\tau_q(\delta\opt) \ge \tau_q(\delta)$ holds deterministically, for fixed $n$. This implies that the first term of Equation (\ref{eq::mainstrategy}) is nonnegative for fixed $n$. In Step 2, we show that as $n \to \infty$, the second and third terms of Equation (\ref{eq::mainstrategy}) vanish. In Step 3, we combine these results and take limits to yield the final result.
\underline{Step 1}: In this step, we show that $\tau_q(\delta\opt) \ge \tau_q(\delta)$ holds deterministically for fixed $n$. To do this, it suffices to show that $\bar \delta\opt_k \ge \bar \delta_k$ for each $k \in [p_n]$. To see this, recall that $\tau_q(\delta\opt)$ and $\tau_q(\delta)$ are increasing functions of $\psi_q(\delta\opt)$ and $\psi_q(\delta)$, as defined below: \begin{equation}\label{eq::taudefreminder}
\psi_q(\delta\opt) = \max_{k \in [p_n]} \left\{k : \frac{k - k \bar \delta\opt_k + 1}{k \bar \delta\opt_k} \le q \right\} \text{ and } \psi_q(\delta) = \max_{k \in [p_n]} \left\{k : \frac{k-k\bar\delta_k + 1}{k \bar \delta_k} \le q \right\}. \end{equation} Since the function $\gamma \mapsto \frac{k-k\gamma+1}{k\gamma}$ is decreasing in $\gamma$, $\bar\delta\opt_k \ge \bar \delta_k$ implies that $\frac{k - k \bar \delta\opt_k + 1}{k \bar \delta\opt_k} \le \frac{k - k \bar \delta_k + 1}{k \bar \delta_k}$ for each $k$, and therefore $\psi_q(\delta\opt) \ge \psi_q(\delta)$, which implies $\tau_q(\delta\opt) \ge \tau_q(\delta)$. Thus, it suffices to show that $\bar\delta\opt_k \ge \bar \delta_k$ holds for each $k$.
Intuitively, it makes sense that $\bar \delta\opt_k \ge \bar \delta_k$ for each $k$, since $W\opt$ maximizes $\P(W_j\opt > 0 \mid D)$ coordinate-wise and chooses $W\opt$ so that $\delta\opt$ is sorted in decreasing order. To prove this formally, we first argue that conditional on $D\upn$, $R\opt$ is a deterministic function of $R$. Recall that according to Corollary \ref{cor::determdiff}, the event $\sign(W_j) \ne \sign(W_j\opt)$ is completely determined by the masked data $D\upn$. Furthermore, since $R$ and $R\opt$ are random permutations of the vectors $\I(W > 0)$ and $\I(W\opt > 0)$ where the random permutations only depend on $|W|$ and $|W\opt|$, this implies there exists a random vector $\xi \in \{0,1\}^{p_n}$ and a random permutation $\sigma \in S_{p_n}$ such that $R\opt = \xi \oplus \sigma(R)$ and $\xi, \sigma$ are deterministic conditional on $D\upn$. (Note that $\oplus$ denotes the XOR function, so $1 \oplus 1 = 0$, $1 \oplus 0 = 1$, and $0 \oplus 0 = 0$.) The intuition here is that following Proposition \ref{prop::mxdistinguish}, fitting a feature statistic $W$ is equivalent to observing $D\upn$, assigning an ordering to the features, and then guessing which one of $\{\bX_j,\widetilde{\mathbf{X}}_j\}$ is the true feature and which is a knockoff, where $W_j > 0$ if and only if this ``guess" is correct. Since these decisions are made as deterministic functions of $D\upn$, $W\opt$ can only differ from $W$ in that it sometimes makes different guesses, flipping the sign of $W$ (as represented by $\xi$), and is in a different order (as represented by $\sigma$).
Now, since $\xi$ and $\sigma$ are deterministic functions of $D\upn$, this implies that \begin{equation*}
\delta\opt_i = \E[R_i\opt \mid D\upn] = \E[\xi_i \oplus R_{\sigma(i)} \mid D\upn] = \begin{cases}
1 - \delta_{\sigma(i)} & \xi_i = 1\\
\delta_{\sigma(i)} & \xi_i = 0.
\end{cases} \end{equation*} However, by definition, $\E[R_i\opt \mid D\upn] = \P(\mathrm{sorted}(W\opt)_i > 0 \mid D\upn)$, and Proposition \ref{prop::bestsigns} tells us that $\P(W\opt_i > 0 \mid D\upn) \ge 0.5$ for all $i \in [p_n]$: since the ordering of $W\opt$ is deterministic conditional on $D\upn$, this also implies $\P(\mathrm{sorted}(W\opt)_i > 0 \mid D\upn) \ge 0.5$. Therefore, $\delta\opt_i \ge 0.5$ and thus $\delta\opt_i \ge \delta_{\sigma(i)}$ for each $i \in [p_n]$. Additionally, by construction $W\opt$ ensures that $\delta\opt_1 \ge \delta\opt_2 \ge \dots \ge \delta\opt_{p_n}$. If $\delta_{(1)}, \dots, \delta_{(p_n)}$ are the order statistics of $\delta$ in decreasing order, this implies that $\delta\opt_i \ge \delta_{(i)}$ for all $i$. Therefore, \begin{equation*}
\bar \delta\opt_k = \frac{1}{k} \sum_{i=1}^k \delta\opt_i \ge \frac{1}{k} \sum_{i=1}^k \delta_{(i)} \ge \frac{1}{k} \sum_{i=1}^k \delta_i = \bar \delta_k. \end{equation*} By the previous analysis, this proves that $\tau_q(\delta\opt) \ge \tau_q(\delta)$.
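The case formula $\delta\opt_i = \E[\xi_i \oplus R_{\sigma(i)} \mid D\upn]$ above can be sanity-checked by exhaustive enumeration. In the sketch below (our own illustration), the coordinates of $R$ are taken to be independent purely to make the toy joint distribution easy to enumerate; the values of $\delta$, $\sigma$, and $\xi$ are hypothetical:

```python
import itertools

# Verify: if xi, sigma are fixed and R_j ~ Bernoulli(delta_j) independently,
# then E[xi_i XOR R_{sigma(i)}] equals 1 - delta_{sigma(i)} when xi_i = 1
# and delta_{sigma(i)} when xi_i = 0.  (Toy check with p = 3.)
delta = [0.9, 0.6, 0.5]
sigma = [2, 0, 1]
xi = [1, 0, 1]

p = len(delta)
expect = [0.0] * p
for bits in itertools.product([0, 1], repeat=p):
    prob = 1.0
    for j, b in enumerate(bits):
        prob *= delta[j] if b == 1 else 1 - delta[j]
    for i in range(p):
        expect[i] += prob * (xi[i] ^ bits[sigma[i]])

for i in range(p):
    target = 1 - delta[sigma[i]] if xi[i] == 1 else delta[sigma[i]]
    assert abs(expect[i] - target) < 1e-12
print([round(e, 3) for e in expect])
```

Here the enumeration recovers $(0.5, 0.9, 0.4)$, exactly the values predicted by the case formula.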
\underline{Step 2}: In this step, we show that $\frac{\E|\tau_q(\delta\opt) - \tau_q(R\opt)|}{s\upn} \to 0$ for all but countably many $q \in (0,1)$, as well as the analogous result for $R$ and $\delta$. We first prove the result for $R\opt$ and $\delta\opt$, and in particular, for any fixed $v > 0$, we will show that $\limsup_{n \to \infty} \frac{\E|\tau_q(\delta\opt) - \tau_q(R\opt)|}{s\upn} \le v$. Since we will show this for arbitrary $v > 0$, this implies $\frac{\E|\tau_q(\delta\opt) - \tau_q(R\opt)|}{s\upn} \to 0$.
We begin by applying Lemma \ref{lem::ddt2slln}. In particular, fix any $k_n \in [p_n]$, any $\epsilon > 0$, and define \begin{equation*}
A_n = \left\{\max_{k_n \le k \le p_n} |\bar R_k\opt - \bar \delta_k\opt| \le \epsilon\right\}. \end{equation*} Then by Lemma \ref{lem::ddt2slln}, \begin{equation*}
|\tau_q(R\opt) - \tau_q(\delta\opt)| \le p_n \I(A_n^c) + \tau_{q+\eta}(R\opt) - \tau_{q-\eta}(R\opt) + k_n + 1 + 3 \epsilon \tau_{q+\eta}(R\opt), \end{equation*} where $\eta = 3(1+q)\epsilon$. Therefore, \begin{equation}
\frac{\E|\tau_q(R\opt) - \tau_q(\delta\opt)|}{s\upn} \le \frac{p_n \P(A_n^c)}{s\upn} + \frac{k_n + 1}{s\upn} + \frac{\E[\tau_{q+\eta}(R\opt)] - \E[\tau_{q-\eta}(R\opt)]}{s\upn} + \frac{3 \epsilon \E[\tau_{q+\eta}(R\opt)]}{s\upn}. \end{equation} We now analyze these terms in order: while doing so, we will choose a sequence $\{k_n\}$ and constant $\epsilon > 0$ which guarantee the desired result. Note that eventually, our choice of $\epsilon$ will depend on $q$, so the convergence is not necessarily uniform, but that does not pose a problem for our proof.
\textit{First term}: To start, we will first apply a finite-sample concentration result to bound $\P(A_n^c)$. In particular, we show in Corollary \ref{cor::expdecay} that if $X_1, \dots, X_n$ are mean-zero, $[-1,1]$-valued random variables satisfying the exponential decay condition from Equation (\ref{eq::expcovdecay}), then there exists a universal constant $C' > 0$ depending only on $C$ and $\rho$ such that \begin{equation}\label{eq::permconcv1}
\P\left(\max_{n_0 \le i \le n} |\bar X_i| \ge t \right) \le n \exp(-C' t^{2 } n_0^{1/4}). \end{equation} Furthermore, Corollary \ref{cor::expdecay} shows that this result holds even if we permute $X_1, \dots, X_n$ according to some arbitrary \textit{fixed} permutation $\sigma$. Now, observe that conditional on $D\upn$, $R_j\opt - \delta_j\opt$ is a zero-mean, $[-1,1]$-valued random variable which is a fixed permutation of $\I(W\opt > 0)$ minus its (conditional) expectation. Since $\I(W\opt > 0)$ obeys the conditional exponential decay condition in Equation (\ref{eq::expcovdecay}), we can apply Corollary \ref{cor::expdecay} to $R_j\opt - \delta_j\opt$: \begin{equation}\label{eq::permconcapp}
\P(A_n^c \mid D\upn) \le p_n \exp(-C' \epsilon^{2 } k_n^{1/4}) \end{equation} which implies by the tower property that $p_n \P(A_n^c) \le p_n^2 \exp(-C' \epsilon^{2 } k_n^{1/4})$. Now, suppose we take \begin{equation*}
k_n = \ceil{\log(p_n)^{4 + \gamma}}. \end{equation*} Then observe that $\epsilon$ is fixed, so as $n \to \infty$, $k_n^{1/4} \epsilon^{2 } = \Omega(\log(p_n)^{1+\gamma/4})$. Thus \begin{equation*}
\log(p_n \P(A_n^c)) \le 2 \log(p_n) - \Omega\left(\log(p_n)^{1 + \gamma/4}\right) \to - \infty. \end{equation*} Therefore, for this choice of $k_n$, we have shown the stronger result that $p_n \P(A_n^c) \to 0$. Of course, this implies $\frac{p_n \P(A_n^c)}{s\upn} \to 0$ as well.
\textit{Second term}: This term is easy, as we assume in the statement that $\frac{k_n}{s\upn} \sim \frac{\log(p_n)^{4 + \gamma}}{s\upn} \to 0$.
\textit{Third term}: We will now show that for all but countably many $q \in (0,1)$, for any sufficiently small $\epsilon$, $\limsup_{n \to \infty} \frac{\E[\tau_{q+\eta}(R\opt)] - \E[\tau_{q-\eta}(R\opt)]}{s\upn} \le v/2$ for any fixed $v > 0$.
To do this, recall by assumption that for all $q \in (0,1)$, we have that $\lim_{n \to \infty} \frac{\E[\tau_q(R\opt)]}{s\upn}$ exists and converges to some (extended) real number $L(q)$. Furthermore, we show in Lemma \ref{lem::pferbound} that $L(q)$ is always finite---this is intuitively a consequence of the fact that knockoffs controls the false discovery rate, and thus the number of discoveries cannot exceed the number of non-nulls by more than a constant factor. Importantly, since $\tau_q(R\opt)$ is increasing in $q$, the function $L(q)$ is increasing in $q$ for all $q \in (0,1)$: therefore, it is continuous on $(0,1)$ except on a countable set.
Supposing that $q$ is a continuity point of $L(q)$, there exists some $\beta > 0$ such that $|q-q'| \le \beta \implies |L(q) - L(q')| \le v/4$. Take $\epsilon$ to be any positive constant such that $\epsilon \le \frac{\beta}{3(1+q)}$ and thus $\eta \le \beta$. Then we conclude \begin{align*}
\limsup_{n \to \infty} \frac{\E[\tau_{q+\eta}(R\opt)] - \E[\tau_{q-\eta}(R\opt)]}{s\upn}
&=
L(q+\eta) - L(q-\eta) & \text{ because } \frac{\E[\tau_{q}(R\opt)]}{s\upn} \to L(q) \text{ pointwise} \\
&\le
\frac{v}{2}. & \text{ by continuity} \end{align*}
\textit{Fourth term}: We now show that for all but countably many $q \in (0,1)$, for any sufficiently small $\epsilon$, $\lim_{n \to \infty} \frac{3 \epsilon \E[\tau_{q+\eta}(R\opt)]}{s\upn} = 3 \epsilon L(q+\eta) \le v / 2$. However, this is simple, since Lemma \ref{lem::pferbound} tells us that $L(q)$ is finite and continuous except at countably many points. Thus, we can take $\epsilon$ sufficiently small so that $L(q+\eta) = L(q+3(1+q)\epsilon) \le L(q) + 1$, and then also take $\epsilon < \frac{v}{6( L(q) + 1)}$ so that $3 \epsilon L(q+ \eta) \le v/2$.
Combining the results for all four terms, we see the following: for each $v > 0$, there exist a sequence $\{k_n\}$ and a constant $\epsilon$ guaranteeing that \begin{align*}
\limsup_{n \to \infty} \frac{\E|\tau_q(R\opt) - \tau_q(\delta\opt)|}{s\upn} \le v. \end{align*}
Since this holds for all $v > 0$, we conclude $\lim_{n \to \infty} \frac{\E|\tau_q(R\opt) - \tau_q(\delta\opt)|}{s\upn} = 0$ as desired.
Lastly in this step, we need to show the same result for $R$ and $\delta$ in place of $R\opt$ and $\delta\opt$. However, the proof for $R$ and $\delta$ is identical to the proof for $R\opt$ and $\delta\opt$. The one subtlety worth mentioning is that we do not directly assume the exponential decay condition in Equation (\ref{eq::expcovdecay}) for $W$. However, as we argued in Step 1, we can write $\I(W > 0) = \xi \oplus \I(W\opt > 0)$ for some random vector $\xi \in \{0,1\}^{p_n}$ which is a deterministic function of $D\upn$. As a result, we have that \begin{align*}
|\cov(\I(W_i > 0), \I(W_j > 0) \mid D\upn)|
=
|\cov(\I(W\opt_i > 0), \I(W\opt_j > 0) \mid D\upn)|
\le C \rho^{|i-j|}. \end{align*}
Thus, we also conclude that $\lim_{n \to \infty} \frac{\E|\tau_q(R) - \tau_q(\delta)|}{s\upn} = 0$.
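The sign-flip argument above can also be checked numerically: flipping an indicator by a fixed (that is, $D\upn$-measurable) $\xi$ negates the corresponding covariances but leaves their absolute values unchanged. A minimal sketch with hypothetical correlated sign indicators:

```python
import numpy as np

# Flipping indicators with a *fixed* xi only changes the sign of pairwise
# covariances, so |cov| -- and hence the exponential-decay bound --
# transfers from W_opt to W.  Toy numeric check with correlation 0.6:
rng = np.random.default_rng(0)
z = rng.standard_normal((50_000, 2)) @ np.array([[1.0, 0.6], [0.0, 0.8]])
a = (z[:, 0] > 0).astype(float)    # stand-in for I(W*_i > 0)
b = (z[:, 1] > 0).astype(float)    # stand-in for I(W*_j > 0)
cov_orig = np.cov(a, b)[0, 1]
cov_flip = np.cov(1 - a, b)[0, 1]  # flip the i-th indicator (xi_i = 1)
print(round(cov_orig, 4), round(cov_flip, 4))
assert abs(cov_orig + cov_flip) < 1e-10
```

The two sample covariances are exact negatives of each other, so any bound on the absolute covariance of the signs of $W\opt$ applies verbatim to the signs of $W$.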
\underline{Step 3: Finishing the proof}. Recall Equation (\ref{eq::mainstrategy}), which states that \begin{align}
\widetilde{\mathrm{Power}}_q(w_n\opt) - \widetilde{\mathrm{Power}}_q(w_n)
&\ge
\frac{\E[\tau_q(\delta\opt) - \tau_q(\delta)]}{s\upn} - \frac{\E|\tau_q(R\opt) - \tau_q(\delta\opt)|}{s\upn} - \frac{\E|\tau_q(\delta) - \tau_q(R)|}{s\upn}. \tag{\ref{eq::mainstrategy}} \end{align} In Step 1, we showed that $\tau_q(\delta\opt) \ge \tau_q(\delta)$ for fixed $n$. Furthermore, in Step 2, we showed that the second two terms vanish asymptotically. As a result, we take limits and conclude \begin{equation*}
\liminf_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt) - \widetilde{\mathrm{Power}}_q(w_n) \ge 0. \end{equation*} Furthermore, since we assume that the limits $\lim_{n\to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt), \lim_{n\to \infty} \widetilde{\mathrm{Power}}_q(w_n)$ exist, this implies that \begin{equation*}
\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt) - \lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n) \ge 0. \end{equation*} This concludes the proof.
\end{proof} \end{theorem} \endgroup
\subsection{Relaxing the assumptions in Theorem \ref{thm::avgopt}}\label{subsec::assumptions}
In this section, we discuss a few ways to relax the assumptions in Theorem \ref{thm::avgopt}.
First, we can easily relax the assumption that the limits $L(q) \defeq \lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n)$ and $L\opt(q) \defeq \lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n\opt)$ exist for each $q \in (0,1)$. Indeed, the proof of Theorem \ref{thm::avgopt} only uses this assumption to argue that
there exists a sequence $\eta_n \to 0$ such that $L(q+\eta_n) \to L(q), L(q-\eta_n) \to L(q)$ (and similarly for $L\opt(q)$). Thus, we do not need the limits $L(q)$ to exist for every $q \in (0,1)$: in contrast, the result of Theorem \ref{thm::avgopt} will hold (e.g.) for any $q$ such that $L(\cdot), L\opt(\cdot)$ are continuous at $q$. Intuitively, this means that the result in Theorem \ref{thm::avgopt} holds except at points $q$ which delineate a ``phase transition," where the power of knockoffs jumps in a discontinuous fashion as $q$ increases.
Second, it is important to note that the precise form of the local dependency condition (\ref{eq::expcovdecay}) is not crucial. Indeed, the proof of Theorem \ref{thm::avgopt} only uses this condition to show that the partial sums of $\I(W\opt > 0)$ converge to their conditional mean given $D$. To be precise, fix any permutation $\kappa : [p] \to [p]$ and let $R = \I(\kappa(W\opt) > 0)$ where $\kappa(W\opt)$ permutes $W\opt$ according to $\kappa$. Let $\delta = \E[R \mid D]$. Then the proof of Theorem \ref{thm::avgopt} will go through exactly as written if we replace Equation (\ref{eq::expcovdecay}) with the following condition: \begin{equation}\label{eq::stronglawcondition}
\P\left(\max_{k_n \le k \le p_n} |\bar R_k - \bar \delta_k| \ge \epsilon \mid D \right) = o(p_n^{-1}) \text{ for every fixed } \epsilon > 0, \end{equation} where $k_n$ is some sequence satisfying $k_n \to \infty$ and $\frac{k_n}{s\upn} \to 0$.
The upshot is this: under any condition where each permutation of $\I(W\opt > 0)$ obeys a certain strong law of large numbers, we should expect Theorem \ref{thm::avgopt} to hold. Although it is perhaps slightly unusual to require that a strong law holds for any fixed permutation of a vector, usually there is a ``worst-case" permutation where if Equation (\ref{eq::stronglawcondition}) holds for some choice of $\kappa$, then it holds for every choice of $\kappa$. For example, in Corollary \ref{cor::expdecay}, we show that if the exponential decay condition holds, then it suffices to show Equation (\ref{eq::stronglawcondition}) in the case where $\kappa$ is the identity permutation, since the identity permutation places the most correlated coordinates of $W\opt$ next to each other.
\subsection{Maximizing the expected number of true discoveries}\label{subsec::tpr}
One weakness of Theorem \ref{thm::avgopt} is that it shows that MLR statistics maximize the (normalized) expected number of discoveries, which is not exactly the same as maximizing the expected number of \textit{true} discoveries. In this section, we introduce a modification of MLR statistics and give a proof sketch that they maximize the expected number of true discoveries. However, computing these modified MLR statistics is extremely computationally expensive, so we prefer to use the MLR statistics defined in the paper, as they perform quite well anyway.
Throughout this section, we use the notation introduced in Section \ref{subsec::mainproof}. As a reminder, for any feature statistic $W$, let $R = \I(\mathrm{sorted}(W) > 0)$, let $\delta = \E[R \mid D]$, and let $\psi_q(\cdot)$ be as defined in Equation (\ref{eq::psidef}) so that MLR statistics make $\tau_q(R) = \ceil{\frac{\psi_q(R) + 1}{1+q}}$ discoveries. The key idea behind the proof of Theorem \ref{thm::avgopt} is to observe that: \begin{enumerate}[topsep=0pt]
\item $\tau_q(R)$ only depends on the successive partial averages of $R$, denoted $\bar R_k = \frac{1}{k} \sum_{i=1}^k R_i$.
\item As $p \to \infty$, $\bar R_k \toas \bar \delta_k$ under suitable assumptions. Thus, $\tau_q(R) \approx \tau_q(\delta)$.
\item If $R\opt = \I(\mathrm{sorted}(W\opt) > 0)$ are MLR statistics with $\delta\opt = \E[R\opt \mid D]$, then $R\opt$ is asymptotically optimal because $\tau_q(\delta\opt) \ge \tau_q(\delta)$ holds in finite samples for any choice of $\delta$. In particular, this holds because MLR statistics ensure $\delta\opt$ is in descending order. \end{enumerate}
To show a similar result for the number of \textit{true} discoveries, we can now effectively repeat the three steps used in the proof of Theorem \ref{thm::avgopt}. To do this, let $I_j$ be the indicator that the feature corresponding to the $j$th coordinate of $R$ is non-null, and let $B_j = \I(I_j = 1, R_j = 1)$ be the indicator that $\mathrm{sorted}(W)_j > 0$ \textit{and} that the corresponding feature is non-null. Let $b = \E[B \mid D]$. Then:
\begin{enumerate}[topsep=0pt]
\item Let $T_q(R, B)$ denote the number of \textit{true} discoveries. We claim that $T_q(R, B)$ is a function of the successive partial means of $R$ and $B$. To see this, recall that the knockoffs procedure applied to $W$ will make $\tau_q(R)$ discoveries, and in particular it will make discoveries corresponding to any of the first $\psi_q(R)$ coordinates of $R$ which are positive. Therefore,
\begin{equation*}
T_q(R, B) = \sum_{j=1}^{\psi_q(R)} B_j = \psi_q(R) \cdot \frac{1}{\psi_q(R)} \sum_{j=1}^{\psi_q(R)} B_j.
\end{equation*}
Since $\psi_q(R)$ only depends on the successive averages of $R$ and the second term is itself a successive partial average of $\{B_j\}$, this finishes the first step.
\item The second step in the ``proof sketch" is to show that as $p \to \infty$, $\bar B_k \toas \bar b_k, \bar R_k \toas \bar \delta_k$ and therefore $T_q(R, B) \approx T_q(\delta, b)$. This can be done using the same techniques as in Theorem \ref{thm::avgopt}, although it requires an extra assumption that $B$ also obeys the local dependency condition (\ref{eq::expcovdecay}). However, just like the original local dependency condition, this condition also only depends on the posterior of $B$, so it can be diagnosed using the data at hand.
\item To complete the proof, we need to define a modified MLR statistic $W'$ with corresponding $R', \delta', b'$ such that $T_q(\delta', b') \ge T_q(\delta, b)$ holds in finite samples for any other feature statistic $W$. It is easy to see that $W'$ must have the same \textit{signs} as the original MLR statistics $W\opt$, since the signs of $W\opt$ maximize $\delta\opt$ and $b\opt$ coordinatewise. However, the \textit{absolute values} of $W'$ may differ from those of $W\opt$, since it is not always true that sorting $\delta$ in decreasing order maximizes $T_q(\delta, b)$. Since changing the absolute values of $W\opt$ merely permutes $b\opt$ and $\delta\opt$, the modified MLR statistic must solve the following combinatorial optimization problem:
\begin{equation}\label{eq::modifiedmlr}
\kappa\opt = \argmax_{\kappa : [p] \to [p]} T_q(\kappa(\delta\opt), \kappa(b\opt)) = \argmax_{\kappa : [p] \to [p]} \sum_{j=1}^{\psi_q(\kappa(\delta\opt))} b_{\kappa(j)}\opt.
\end{equation}
Having found this optimal choice of $\kappa\opt$, we can construct the modified MLR statistics by setting $W' \approx W\opt$, except we change the magnitudes of $W'$ so that the ordering implied by its absolute values agrees with the one implied by $\kappa\opt$, i.e., $\E[\I(\mathrm{sorted}(W') > 0)\mid D] = \kappa\opt(\delta\opt)$. \end{enumerate}
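The counting argument in the first step above is easy to make concrete. Below is a minimal sketch (our own illustration, not the paper's implementation) of $\psi_q$ and $T_q(R, B)$, under our reading of the definitions: $\psi_q(R)$ is the largest $k$ such that the running FDP estimate $(\#\text{negatives} + 1)/\#\text{positives}$ among the first $k$ coordinates is at most $q$.

```python
import numpy as np

def psi_q(R, q):
    """Largest k such that (#negatives among first k + 1) <= q * (#positives
    among first k); returns 0 if no such k exists. R holds the indicators
    I(sorted(W) > 0), ordered by decreasing |W| (our reading of psi_q)."""
    R = np.asarray(R, dtype=float)
    k = np.arange(1, len(R) + 1)
    pos = np.cumsum(R)          # positives among the first k coordinates
    neg = k - pos               # negatives among the first k coordinates
    ok = np.nonzero((neg + 1) <= q * pos)[0]
    return int(ok[-1] + 1) if len(ok) else 0

def T_q(R, B, q):
    """Number of true discoveries: the sum of B over the first psi_q(R)
    coordinates, matching T_q(R, B) = sum_{j=1}^{psi_q(R)} B_j."""
    return int(np.sum(B[:psi_q(R, q)]))
```

Note that $T_q(R, B)$ depends on $R$ only through its successive partial means, exactly as claimed in the first step.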
Intuitively, we do not expect the modified MLR statistics $W'$ to look very different from $W$. To see this, note that in the special case where $\delta\opt$ has the same order as $b\opt$, MLR statistics exactly maximize $T_q(\delta\opt, b\opt)$. Usually, $\delta\opt$ will have a very similar order to $b\opt$, because \begin{equation*}
\delta_j\opt = \P(\mathrm{sorted}(W\opt)_j > 0 \mid D) \text{ and } b_j\opt = \P(\mathrm{sorted}(W\opt)_j > 0, I_j = 1 \mid D). \end{equation*} Indeed, the pairwise exchangeability of knockoffs guarantees that $\delta_j\opt$ can only be large whenever $I_j = 1$ with high probability, so $\delta_j\opt$ and $b_j\opt$ are extremely closely related. Intuitively, this makes sense: a knockoff $W$-statistic will only be highly likely to be positive if feature $j$ has a large posterior probability of being non-null. This suggests that MLR statistics will approximately maximize the expected number of true discoveries, even though we did not prove this rigorously.
\subsection{Verifying the local dependence assumption in a simple setting}\label{subsec::unhappy}
We now prove Proposition \ref{prop::unhappy}, which verifies the local dependency condition (\ref{eq::expcovdecay}) in the setting where $\bX^T \bX$ is block-diagonal and $\sigma^2$ is known.
\begingroup \def\ref{prop::unhappy}{\ref{prop::unhappy}} \begin{proposition} Suppose $\by \mid \bX \sim \mcN(\bX \beta, \sigma^2 I_n)$ and $\bX^T \bX$ is a block-diagonal matrix with maximum block size $M \in \N$. Suppose $\pi$ is any prior such that the coordinates of $\beta$ are a priori independent and $\sigma^2$ is a known constant. Then if $\widetilde{\mathbf{X}}$ are either fixed-X knockoffs or conditional Gaussian model-X knockoffs \citep{condknock2019}, the coordinates of $\sign(W\opt)$ are $M$-dependent conditional on $D$, implying that Equation (\ref{eq::expcovdecay}) holds, e.g., with $C = 2^M$ and $\rho = \frac{1}{2}$. \end{proposition}
\begin{proof} Define $R \defeq \I(W > 0)$. We will prove the stronger result that if $J_1, \dots, J_m \subset [p]$ are a partition of $[p]$ corresponding to the blocks of $\bX^T \bX$, then $R_{J_1}, \dots, R_{J_m}$ are jointly independent conditional on $D$. As notation, suppose without loss of generality that $J_1, \dots, J_m$ are contiguous subsets and $\bX^T \bX = \diag{\Sigma_1, \dots, \Sigma_m}$ for $\Sigma_i \in \R^{|J_i| \times |J_i|}$.
We give the proof for model-X knockoffs; the proof for fixed-X knockoffs is quite similar. Recall by Proposition \ref{prop::mxdistinguish} that we can write $R_j = \I(W_j > 0) = \I(\bX_j = \widehat{\bX}_j)$ where $\widehat{\bX}_j$ is a function of the masked data $D$. Therefore, to show $R_{J_1}, \dots, R_{J_m}$ are independent conditional on $D$, it suffices to show $\bX_{J_1}, \dots, \bX_{J_m}$ are conditionally independent given $D$. To do this, it will first be useful to note that for any value of $\bX$ which is consistent with $D$ (in the sense that $\bX_j \in \{\bx_j, \widetilde{\mathbf{x}}_j\}$ for each $j$), we have that \begin{align*}
L_{\beta,\sigma}(\by \mid \bX)
&\propto
\exp\left(-\frac{1}{2 \sigma^2}||\by - \bX \beta||_2^2 \right) \\
&\propto
\exp\left(\frac{2 \beta^T \bX^T \by - \beta^T \bX^T \bX \beta}{2 \sigma^2}\right) \\
&\propto
\prod_{i=1}^m \exp\left(\frac{2 \beta_{J_i}^T \bX_{J_i}^T \by - \beta_{J_i}^T \Sigma_i \beta_{J_i}}{2 \sigma^2}\right). \end{align*} A subtle but important observation in the calculation above is that we can verify that $\bX^T \bX = \diag{\Sigma_1, \dots, \Sigma_m}$ having only observed $D$ without observing $\bX$. Indeed, this follows because for conditional Gaussian MX knockoffs, $\widetilde{\mathbf{X}}^T \widetilde{\mathbf{X}} = \bX^T \bX$ and $\widetilde{\mathbf{X}}^T \bX$ only differs from $\bX^T \bX$ on the main diagonal (just like in the fixed-X case). With this observation in mind, let $p(\cdot \mid \cdot)$ denote an arbitrary conditional density, and observe \begin{align*}
p(\bX \mid D)
&\propto
p(\bX, \by \mid \{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p) \\
&=
p(\bX \mid \{\bX_j, \widetilde{\mathbf{X}}_j\}_{j=1}^p)
p(\by \mid \bX) \,\,\,\,\,\,\,\,\,\, \text{ since } \by \Perp \widetilde{\mathbf{X}} \mid \bX \\
&=
\frac{1}{2^p} p(\by \mid \bX) \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \text{ by pairwise exchangeability} \\
&\propto
\int_{\beta} p(\beta) p(\by \mid \bX, \beta) d\beta \\
&\propto
\int_{\beta} \prod_{i=1}^m p(\beta_{J_i}) \exp\left(\frac{- \beta_{J_i}^T \Sigma_i \beta_{J_i}}{2 \sigma^2} \right) \exp\left(\frac{\beta_{J_i}^T \bX_{J_i}^T \by}{\sigma^2} \right) d\beta \\
&\propto
\int_{\beta_{J_1}} \dots \int_{\beta_{J_m}} \prod_{i=1}^m p(\beta_{J_i}) \exp\left(\frac{- \beta_{J_i}^T \Sigma_i \beta_{J_i}}{2 \sigma^2} \right) \exp\left(\frac{\beta_{J_i}^T \bX_{J_i}^T \by}{\sigma^2} \right) d\beta_{J_1} \, d\beta_{J_2} \dots \, d\beta_{J_m}. \end{align*} At this point, we can iteratively pull out parts of the product. In particular, define the following function: \begin{equation*}
q_i(\bX_{J_i}) \defeq \int_{\beta_{J_i}} p(\beta_{J_i}) \exp\left(\frac{- \beta_{J_i}^T \Sigma_i \beta_{J_i}}{2 \sigma^2} \right) \exp\left(\frac{\beta_{J_i}^T \bX_{J_i}^T \by}{\sigma^2} \right) d \beta_{J_i}. \end{equation*} Since $\by, \sigma^2$ and $\Sigma_i$ are fixed, $q_i(\bX_{J_i})$ is a deterministic function of $\bX_{J_i}$ that does not depend on $\beta_{-J_i}$. Therefore, we can iteratively integrate as below: \begin{align*}
p(\bX \mid D)
&\propto
\int_{\beta_{J_1}} \dots \int_{\beta_{J_m}} \prod_{i=1}^m p(\beta_{J_i}) \exp\left(\frac{- \beta_{J_i}^T \Sigma_i \beta_{J_i}}{2 \sigma^2} \right) \exp\left(\frac{\beta_{J_i}^T \bX_{J_i}^T \by}{\sigma^2} \right) d\beta_{J_1} \, d\beta_{J_2} \dots \, d\beta_{J_m} \\
&=
\int_{\beta_{J_1}} \dots \int_{\beta_{J_{m-1}}} \prod_{i=1}^{m-1} p(\beta_{J_i}) \exp\left(\frac{- \beta_{J_i}^T \Sigma_i \beta_{J_i}}{2 \sigma^2} \right) \exp\left(\frac{\beta_{J_i}^T \bX_{J_i}^T \by}{\sigma^2} \right) q_m(\bX_{J_m}) d\beta_{J_1} \, d\beta_{J_2} \dots \, d\beta_{J_{m-1}} \\
&=
\prod_{i=1}^m q_i(\bX_{J_i}). \end{align*} This shows that $\bX_{J_1}, \dots, \bX_{J_m} \mid D$ are jointly (conditionally) independent since their density factors, thus completing the proof. For fixed-X knockoffs, the proof is very similar as one can show that the density of $p(\bX^T \by \mid D)$ factors into blocks. \end{proof} \endgroup
\subsection{Intuition for the local dependency condition and Figure \ref{fig::dependence}}\label{appendix::dependencediscussion}
In Figure \ref{fig::dependence}, we see that even when $\bX$ is very highly correlated, $\cov(\sign(W\opt) \mid D)$ looks similar to a diagonal matrix, indicating that the local dependency condition (\ref{eq::expcovdecay}) holds very well empirically. The empirical result is striking and may be a bit surprising initially, so in this section we offer some explanation.
For the purposes of building intuition, suppose that we are fitting model-X knockoffs and $\by \mid \bX \sim \mcN(\bX \beta, \sigma^2 I_n)$ with a spike-and-slab prior $\pi$. After observing $D = (\by, \{\bx_j, \widetilde{\mathbf{x}}_j\}_{j=1}^p)$, we can sample from the posterior distribution $\sign(W\opt) \mid D$ via the following Gibbs sampling approach: \begin{enumerate}
\item For each $j \in [p]$, initialize $\beta_j$ and $\bX_j$ to some value.
\item For $j \in [p]$:
\begin{enumerate}[(a)]
\item Sample from $\bX_j \mid \bX_{-j}, \beta_{-j}, D$. It may be helpful to recall $\bX_j \in \{\bx_j, \widetilde{\mathbf{x}}_j\}$.
\item Sample from $\beta_j \mid \bX, \beta_{-j}, D$.
\end{enumerate}
\item Repeat step 2 $n_{\mathrm{iter}}$ times. \end{enumerate}
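The steps above can be sketched numerically. The following is a toy implementation of this Gibbs sampler (our own code under a Gaussian linear model with a simple spike-and-slab prior; all variable names, defaults, and the conjugate update are our own assumptions, not the paper's implementation):

```python
import numpy as np

def gibbs_sign_probs(x, xk, y, sigma2=1.0, tau2=1.0, p0=0.5,
                     n_iter=500, burn=100, seed=0):
    """Estimate P(X_j = x_j | D) = P(W_j* > 0 | D) by Gibbs sampling.
    x, xk: (n, p) arrays of features and knockoffs; y: response."""
    rng = np.random.default_rng(seed)
    n, p = x.shape
    keep_real = np.ones(p, dtype=bool)        # step 1: initialize X_j = x_j
    X = x.copy()
    beta = np.zeros(p)
    counts = np.zeros(p)
    for it in range(n_iter):
        for j in range(p):                    # step 2
            r = y - X @ beta + X[:, j] * beta[j]   # residual excluding j
            # (a) sample X_j from {x_j, xk_j} given beta_{-j} and X_{-j}
            ll = [-0.5 * np.sum((r - c * beta[j]) ** 2) / sigma2
                  for c in (x[:, j], xk[:, j])]
            pr_real = 1.0 / (1.0 + np.exp(np.clip(ll[1] - ll[0], -50, 50)))
            keep_real[j] = rng.random() < pr_real
            X[:, j] = x[:, j] if keep_real[j] else xk[:, j]
            # (b) conjugate spike-and-slab update for beta_j | X, beta_{-j}
            z = X[:, j]
            v = 1.0 / (z @ z / sigma2 + 1.0 / tau2)
            m = v * (z @ r) / sigma2
            log_bf = 0.5 * np.log(v / tau2) + 0.5 * m * m / v  # slab vs. spike
            p_slab = 1.0 / (1.0 + p0 / (1.0 - p0)
                            * np.exp(np.clip(-log_bf, -50, 50)))
            beta[j] = rng.normal(m, np.sqrt(v)) if rng.random() < p_slab else 0.0
        if it >= burn:
            counts += keep_real
    return counts / (n_iter - burn)
```

On synthetic data with one strong non-null feature, the returned probability for that feature is close to one, while null features hover near $1/2$, matching the intuition described below.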
Now, recall that $W_j\opt > 0$ if and only if the Gibbs sampler chooses $\bX_j = \bx_j$ more often than $\bX_j = \widetilde{\mathbf{x}}_j$ in the long run. Thus, to analyze $\cov(\sign(W_j\opt), \sign(W_i\opt) \mid D)$, we must ask the following question: for fixed $i,j$ such that $i \ne j$, does the choice of $\bX_i \in \{\bx_i, \widetilde{\mathbf{x}}_i\}$ strongly affect how we resample $\bX_j$?
To answer this question, the following key fact is useful. At step 2(a), for any $j$, let $r_j = \by - \bX_{-j} \beta_{-j}$ be the residuals excluding feature $j$ for the current value of $\bX_{-j}$ and $\beta_{-j}$ in the Gibbs sampler. Then standard calculations show that $\P(\bX_j = \bx_j \mid D, \bX_{-j}, \beta_{-j})$ only depends on $\bX_{-j}$ and $\beta_{-j}$ through the following inner products:
\begin{equation*}
\alpha_j \defeq \bx_j^T r_j = \bx_j^T \by - \sum_{i \ne j} \bx_j^T \bX_i \beta_i \end{equation*} \begin{equation*}
\tilde{\alpha}_j \defeq \widetilde{\mathbf{x}}_j^T r_j = \widetilde{\mathbf{x}}_j^T \by - \sum_{i \ne j} \widetilde{\mathbf{x}}_j^T \bX_i \beta_i \end{equation*} Thus, the question we must answer is: how does the choice of $\bX_i \in \{\bx_i, \widetilde{\mathbf{x}}_i\}$ affect the value of $\alpha_j, \tilde{\alpha}_j$? Heuristically, the answer is ``not very much," since $\bX_i$ only appears above through inner products of the form $\bx_j^T \bX_i$ and $\widetilde{\mathbf{x}}_j^T \bX_i$, and by definition of the knockoffs we know that $\bx_j^T \bx_i \approx \bx_j^T \widetilde{\mathbf{x}}_i$ and $\widetilde{\mathbf{x}}_j^T \bx_i \approx \widetilde{\mathbf{x}}_j^T \widetilde{\mathbf{x}}_i$. Indeed, for fixed-X knockoffs, we know that this actually holds exactly, and for model-X knockoffs, the law of large numbers should ensure that these approximations are very accurate.
The main way that the choice of $\bX_i$ can significantly influence the choice of $\bX_j$ is by changing the value of $\beta_i$. In general, we expect this effect to be rather small: in many highly-correlated settings, $\bX_i$ and $\widetilde{\mathbf{X}}_i$ are necessarily highly correlated themselves, so choosing $\bX_i$ or $\widetilde{\mathbf{X}}_i$ should not affect the value of $\beta_i$ too much. That said, there are a few known pathological settings where the choice between $\bX_i$ and $\widetilde{\mathbf{X}}_i$ does substantially change the value of $\beta_i$ \citep{altsign2017, mrcknock2022}, and in these settings, the coordinates of $\sign(W\opt)$ may be strongly conditionally dependent. The good news is that using MVR knockoffs instead of SDP knockoffs should ameliorate this problem (see \cite{mrcknock2022}).
Overall, we recognize that this explanation is purely heuristic and does not fully explain the results in Figure \ref{fig::dependence}. However, we do hope that it provides some intuitive insight. A more rigorous theoretical analysis of $\cov(\sign(W\opt) \mid D)$ would be interesting; however, we leave this to future work.
\section{Technical proofs}\label{appendix::techproofs}
\subsection{Key concentration results}
The proof of Theorem \ref{thm::avgopt} relies on the fact that the successive averages of the vector $\I(\mathrm{sorted}(W) > 0) \in \R^p$ converge uniformly to their conditional expectation given the masked data $D\upn$. In this section, we give a brief proof of this result, which is essentially an application of Theorem 1 from \cite{doukhan2007}. For convenience, we first restate a special case of this theorem (namely, the case where the random variables in question are bounded and we have bounds on pairwise correlations) before proving the corollary we use in Theorem \ref{thm::avgopt}.
\begin{theorem}[\cite{doukhan2007}]\label{thm::doukhan2007} Suppose that $X_1, \dots, X_n$ are mean-zero random variables taking values in $[-1,1]$ such that $\var\left(\sum_{i=1}^n X_i\right) \le C_0 n$ for a constant $C_0 > 0$. Let $L_1, L_2 < \infty$ be constants such that for any $i \le j$, \begin{equation*}
|\cov(X_i, X_j)| \le 4\varphi(j-i) \end{equation*} where $\{\varphi(k)\}_{k \in \N}$ is a nonincreasing sequence satisfying \begin{equation*}
\sum_{s=0}^{\infty} (s+1)^k \varphi(s) \le L_1 L_2^k k! \text{ for all } k \ge 0. \end{equation*} Then for all $t \in (0,1)$, there exists a constant $C_1 > 0$ depending only on $C_0, L_1$ and $L_2$ such that \begin{equation*}
\P\left(\bar X_n \ge t \right) \le \exp\left(-\frac{t^2 n^2}{C_0 n + C_1 t^{7/4} n^{7/4}}\right) \le \exp\left(-C' t^{2 } n^{1/4}\right), \end{equation*} where $C'$ is a universal constant only depending on $C_0, L_1, L_2$. \end{theorem} If we take $\varphi(s) = C \rho^s$, this yields the following corollary.
\begin{corollary}\label{cor::expdecay} Suppose that $X_1, \dots, X_n$ are mean-zero random variables taking values in $[-1,1]$. Suppose that for some $C \ge 0, \rho \in (0,1)$, the sequence satisfies \begin{equation}\label{eq::expdecay}
|\cov(X_i, X_j)| \le C \rho^{|i-j|}. \end{equation} Then there exists a universal constant $C'$ depending only on $C$ and $\rho$ such that \begin{equation}\label{eq::expconc}
\P(\bar X_n \ge t) \le \exp\left(-C' t^{2 } n^{1/4}\right). \end{equation} Furthermore, let $\pi : [n] \to [n]$ be any permutation. For $k \le n$, define $\bar X_k^{(\pi)} \defeq \frac{1}{k} \sum_{i=1}^k X_{\pi(i)}$ to be the sample mean of the first $k$ random variables after permuting $(X_1, \dots, X_n)$ according to $\pi$. Then for any $n_0 \in \N, t \ge 0$, \begin{equation}\label{eq::permconc}
\sup_{\pi \in S_n} \P\left(\max_{n_0 \le k \le n} |\bar X_k^{(\pi)}| \ge t \right) \le n \exp(-C' t^{2 } n_0^{1/4}), \end{equation} where $S_n$ is the symmetric group.
\begin{proof} The proof of Equation (\ref{eq::expconc}) follows from an observation of \cite{doukhan2007}, where we note $\varphi(s) = C \exp(-a s)$ for $a = - \log(\rho)$. Then \begin{equation*}
\sum_{s=0}^{\infty} (s+1)^k \exp(-as) \le \sum_{s=0}^{\infty} \prod_{i=1}^k (s+i) \exp(-as) = \frac{d^k}{dp^k} \left(\frac{1}{1-p}\right) \bigg|_{p=\exp(-a)} = \frac{k!}{(1-\exp(-a))^{k+1}}. \end{equation*} As a result, $\sum_{s=0}^{\infty} (s+1)^k \varphi(s) \le \frac{C}{1-\exp(-a)} \left(\frac{1}{1-\exp(-a)}\right)^k k!$, so we take $L_1 = \frac{C}{1-\exp(-a)}$ and $L_2 = \frac{1}{1-\exp(-a)}$. Lastly, we observe that employing another geometric series argument, \begin{equation*}
\var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \cov(X_i, X_j) \le \sum_{i=1}^n C \sum_{j=1}^n \rho^{|i-j|} \le \frac{2Cn}{1-\rho}. \end{equation*}
Thus, we take $C_0 = \frac{2C}{1-\rho}$ and apply Theorem \ref{thm::doukhan2007}, which yields the first result. To prove Equation (\ref{eq::permconc}), the main idea is that we can apply Equation (\ref{eq::expconc}) to each sample mean $|\bar X_k^{(\pi)}|$, at which point Equation (\ref{eq::permconc}) follows from a union bound.
To prove this, note that if we rearrange $(X_{\pi(1)}, \dots, X_{\pi(k)})$ into their ``original order," then these variables satisfy the condition in Equation (\ref{eq::expdecay}). Formally, let $A = \{\pi(1), \dots,\pi(k)\}$ and let $\nu : A \to A$ be the permutation such that $\nu(\pi(i)) > \nu(\pi(j))$ if and only if $i > j$, for $i,j \in [k]$. Then define $Y_i = X_{\nu(\pi(i))}$ for $i \in [k]$, and note that \begin{equation*}
|\cov(Y_i, Y_j)| = |\cov(X_{\nu(\pi(i))}, X_{\nu(\pi(j))})| \le C \rho^{|\nu(\pi(i)) - \nu(\pi(j))|} \le C \rho^{|i-j|}, \end{equation*}
where in the last step, $|i-j| \le |\nu(\pi(i)) - \nu(\pi(j))|$ follows by construction of $\nu$.
This analysis implies by Equation (\ref{eq::expconc}) that for any $\pi \in S_n$, \begin{equation*}
\P\left(\max_{n_0 \le k \le n} |\bar X_k^{(\pi)}| \ge t \right) \le \sum_{k=n_0}^n \P(|\bar X_k^{(\pi)}| \ge t) \le \sum_{k=n_0}^n \exp(-C' t^{2 } k^{1/4}) \le n \exp(-C' t^{2 } n_0^{1/4}). \end{equation*} This completes the proof. \end{proof} \end{corollary}
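The conclusion of the corollary is easy to check in simulation. The sketch below (our own illustration; the AR(1) construction and all constants are our choices) builds a mean-zero $\pm 1$ sequence whose covariances decay geometrically and verifies that all partial means beyond $n_0$ are uniformly small:

```python
import numpy as np

def ar1_signs(n, rho, rng):
    """Mean-zero ±1 sequence with |cov(S_i, S_j)| <= rho^|i-j|: the signs of
    a stationary Gaussian AR(1) process. By the arcsine law, the sign
    covariance at lag l is (2/pi) * arcsin(rho^l) <= rho^l."""
    z = rng.normal(size=n)          # fresh innovations, z[0] ~ N(0, 1)
    for t in range(1, n):
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho ** 2) * z[t]
    return np.sign(z)

rng = np.random.default_rng(0)
n, n0, rho = 100_000, 1_000, 0.5
s = ar1_signs(n, rho, rng)
# uniform convergence of the partial means, as in the corollary
partial_means = np.cumsum(s) / np.arange(1, n + 1)
worst = float(np.max(np.abs(partial_means[n0 - 1:])))
```

Here `worst` plays the role of $\max_{n_0 \le k \le n} |\bar X_k|$ (for the identity permutation) and is small even though adjacent signs are substantially correlated.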
\subsection{Bounds on the expected number of false discoveries}\label{subsec::pferbound}
The proof of Theorem \ref{thm::avgopt} relied on the fact that $\lim_{n \to \infty} \widetilde{\mathrm{Power}}_q(w_n)$ is finite whenever it exists. This is a simple consequence of the following Lemma, proved below.
\begin{lemma}\label{lem::pferbound} For any data-generating process $(\bX, \by)$, any valid knockoffs $\widetilde{\mathbf{X}}$, any prior $\pi$, and any valid knockoff statistic $W = w([\bX, \widetilde{\mathbf{X}}], \by)$, $\widetilde{\mathrm{Power}}_q(w) \le 1 + C(q)$ where $C(q)$ is a finite constant depending only on $q$.
\begin{proof} Fix any $q > 0$. If $L_{\theta}(\by \mid \bX)$ is the likelihood, we will prove this conditionally on $\theta$. In particular, let $\mcH_0(\theta)$ denote the set of null features, let $S$ be the rejection set and let $M(\theta) = p - |\mcH_0(\theta)|$ denote the number of non-nulls. If $V = |S \cap \mcH_0(\theta)|$ is the number of false discoveries, it suffices to show \begin{equation}\label{eq::pferbound}
\E\left[V \mid \theta \right] \le C(q) M(\theta). \end{equation}
Proving Equation (\ref{eq::pferbound}) proves the claim because it implies by the tower property that $\E[V] \le C(q) \E[M(\theta)]$. Therefore, since $|S| \le V + M(\theta)$, Equation (\ref{eq::pferbound}) implies \begin{equation*}
\widetilde{\mathrm{Power}}_q(w) = \frac{\E[|S|]}{\E[M(\theta)]} \le 1 + \frac{\E[V]}{\E[M(\theta)]} \le 1 + C(q). \end{equation*} Thus, it suffices to prove (\ref{eq::pferbound}). The rest of the proof proceeds conditionally on $\theta$, so we are essentially in the fully frequentist setting. Thus, for the rest of the proof, we will abbreviate $M(\theta)$ as $M$. We will also assume the ``worst-case" values for the non-null coordinates of $W_j$: in particular, let $W'$ denote $W$ but with all of the non-null coordinates replaced with the value $\infty$, and let $V'$ be the number of false discoveries made when applying SeqStep to $W'$. These are the ``worst-case" values in the sense that $V' \ge V$ deterministically (see \cite{mrcknock2022}, Lemma B.4), so it suffices to show that $\E[V'] \le C(q) M$.
As notation, let $U = \sign(\mathrm{sorted}(W'))$ denote the signs of $W'$ when sorted in descending order of absolute value. Following the notation in Equation (\ref{eq::psidef}), let $\psi(U) = \max \left\{k : \frac{k - k \bar U_k + 1}{k \bar U_k} \le q\right\}$, where $\bar U_k = \frac{1}{k} \sum_{i=1}^{\min(k,p)} U_i$. This ensures that $\tau \defeq \ceil{\frac{\psi(U)+1}{1+q}} \le \psi(U)$ is the number of discoveries made by knockoffs (\cite{mrcknock2022}, Lemma B.3). To prove the lemma, we observe that it suffices to show $\E[\psi(U)] \le C(q) M$. To do this, let $K = \ceil{\frac{M+1}{1+q}}$ and fix any integer $c > 0$. Observe that \begin{align*}
\E[\psi(U)]
&\le
cK \P(\psi(U) \le c K) + \sum_{k=cK}^{\infty} k \P(\psi(U) = k) \\
&\le
cK + \sum_{k=cK}^{\infty} k \P\left(\Bin(k-M, 1/2) \ge \ceil{\frac{k+1}{1+q}}-M\right).
\end{align*} where the second line follows because the event $\psi(U) = k$ implies that at least $\ceil{\frac{k+1}{1+q}}$ of the first $k$ coordinates of $U$ are positive. Crucially, the knockoff flip-sign property guarantees that conditional on $\theta$, the null coordinates of $U_j'$ are i.i.d. random signs conditional on the values of the non-null coordinates of $U'$. Thus, doing simple arithmetic, in the first $k$ coordinates of $U$, there are $k-M$ null i.i.d. signs, of which at least $\ceil{\frac{k+1}{1+q}} - M$ must be positive, yielding the expression above. At this point, we will apply Hoeffding's inequality. In particular, choose any $c > \frac{1}{2} \left(\frac{1}{1+q} - \frac{1}{2}\right)^{-1}$, which ensures that for $k \ge cK \ge cM$, $\frac{k-M}{2} \le \ceil{\frac{k+1}{1+q}}-M$. Indeed, to verify this, observe \begin{equation*}
\frac{k+1}{1+q} - M - \frac{k-M}{2} \ge k \left(\frac{1}{1+q} - \frac{1}{2} \right) - \frac{M}{2} \ge c M \left(\frac{1}{1+q} - \frac{1}{2} \right) - \frac{M}{2} \ge 0 \end{equation*} where the last inequality follows by the choice of $c$. Having chosen this $c$, we may apply Hoeffding's inequality for $k \ge cK$. Indeed, applying the previous line, if $\alpha_q = \frac{1}{1+q} - \frac{1}{2}$, then Hoeffding's inequality yields \begin{align*}
\E[\psi(U)]
&\le
cK + \sum_{k=cK}^{\infty} k \exp\left(- 2\left(k \alpha_q - \frac{M}{2}\right)^2\right) \le cK + \sum_{\ell=0}^{\infty} (\ell + cK) \exp\left(-2\ell^2 \alpha_q^2\right).
\end{align*} Note that the sums $\sum_{\ell=0}^{\infty} \ell \exp(-2\ell^2\alpha_q^2)$ and $\sum_{\ell=0}^{\infty} \exp(-2\ell^2 \alpha_q^2)$ are both convergent. As a result, $\E[\psi(U)]$ is bounded by a constant multiple of $cK \sim \frac{c}{1+q} M$, where the constant depends on $q$ but nothing else. Since $\psi(U) \ge \tau \ge V' \ge V$ as previously argued, this completes the proof. \end{proof} \end{lemma}
\section{Additional comparison to prior work}\label{appendix::priorwork}
\subsection{Comparison to the unmasked likelihood ratio}\label{appendix::unmasked_lr}
In this section, we compare MLR statistics to the earlier \textit{unmasked} likelihood statistic introduced by \cite{katsevichmx2020}, which this work builds upon. The upshot is that unmasked likelihood statistics give the most powerful ``binary $p$-values,'' as shown by \cite{katsevichmx2020}, but do not yield jointly valid knockoff feature statistics in the sense required for the FDR control proof in \cite{fxknock} and \cite{mxknockoffs2018}.
In particular, we call a statistic $T_j([\bX, \widetilde{\mathbf{X}}], \by)$ a \textit{marginally symmetric knockoff statistic} if $T_j$ satisfies $T_j([\bX, \widetilde{\mathbf{X}}]_{\swap{j}}, \by) = - T_j([\bX, \widetilde{\mathbf{X}}], \by)$. Under the null, $T_j$ is symmetrically distributed about zero, so the quantity $p_j = \frac{1}{2} + \frac{1}{2} \I(T_j \le 0)$ is a valid \textit{``binary $p$-value"} which only takes values in $\{1/2, 1\}$. Theorem 5 of \cite{katsevichmx2020} shows that for any marginally symmetric knockoff statistic, $\P(p_j = 1/2) = \P(T_j > 0)$ is maximized if $T_j > 0 \Leftrightarrow L(\by \mid \bX_j=\bx_j, \bX_{-j}) > L(\by \mid \bX_j=\widetilde{\mathbf{x}}_j, \bX_{-j})$. As such, one might initially hope to use the unmasked likelihood ratio as a knockoff statistic: \begin{equation*}
W_j^{\mathrm{unmasked}} = \log\left(\frac{L_{\theta}(\by \mid \bX_j = \bx_j, \bX_{-j})}{L_{\theta}(\by \mid \bX_j = \widetilde{\mathbf{x}}_j, \bX_{-j})}\right). \end{equation*} However, a marginally symmetric knockoff statistic is not necessarily a valid knockoff feature statistic, which must satisfy the following stronger property \citep{fxknock, mxknockoffs2018}: \begin{equation*}
W_j([\bX, \widetilde{\mathbf{X}}]_{\swap{J}}, \by) = \begin{cases}
W_j([\bX, \widetilde{\mathbf{X}}], \by) & j \not \in J \\
- W_j([\bX, \widetilde{\mathbf{X}}], \by) & j \in J,
\end{cases} \end{equation*} for any $J \subset [p]$. This flip-sign property guarantees that the signs of the null coordinates of $W$ are \textit{jointly} i.i.d. and symmetric. However, the unmasked likelihood statistic does not satisfy this property, as changing the value of $\bX_i$ for $i \ne j$ will often change the value of the likelihood $L_{\theta}(\by \mid \bX_j = \bx_j, \bX_{-j})$.
\subsection{Comparison to the adaptive knockoff filter}\label{appendix::adaknock}
In this section, we compare our methodological contribution, MLR statistics, to the adaptive knockoff filter described in \cite{RenAdaptiveKnockoffs2020}, namely their approach based on Bayesian modeling. The main point is that although MLR statistics and the procedure from \cite{RenAdaptiveKnockoffs2020} have some intuitive similarities, the procedures are quite different and in fact complementary, since one could use the Bayesian adaptive knockoff filter from \cite{RenAdaptiveKnockoffs2020} in combination with MLR statistics.
As review, recall from Section \ref{subsec::mlr} that valid knockoff feature statistics $W$ as initially defined by \cite{fxknock, mxknockoffs2018} must ensure that $|W|$ is a function of the masked data $D$, and thus $|W|$ cannot explicitly depend on $\sign(W)$. (It is also important to remember that $|W|$ determines the order and ``prioritization" of the SeqStep hypothesis testing procedure.) The key innovation of \cite{RenAdaptiveKnockoffs2020} is to relax this restriction: in particular, they define a procedure where the analyst sequentially reveals the signs of $\sign(W)$ in reverse order of their prioritization, and after each sign is revealed, the analyst may arbitrarily reorder the remaining hypotheses in the SeqStep. The advantage of this approach is that revealing the sign of (e.g.) $W_1$ may reveal information that can be used to more accurately prioritize the hypotheses while still guaranteeing provable FDR control.
This raises the question: how should the analyst reorder the hypotheses after each coordinate of $\sign(W)$ is revealed? One proposal from \cite{RenAdaptiveKnockoffs2020} is to introduce an auxiliary Bayesian model for the relationship between $\sign(W)$ and $|W_j|$ (the authors also discuss the use of additional side information, although for brevity we do not discuss this here). For example, \cite{RenAdaptiveKnockoffs2020} suggest using a two-groups model where \begin{equation}\label{eq::adaknockmodel}
H_j \simind \Bern(p_j) \text{ and } W_j \mid H_j \sim \begin{cases} \mcP_1(W_j) & H_j = 1 \\ \mcP_0(W_j) & H_j = 0. \end{cases} \end{equation} Above, $H_j$ is the indicator of whether the $j$th hypothesis is non-null, and $\mcP_1$ and $\mcP_0$ are (e.g.) unknown parametric distributions that the analyst fits as they observe $\sign(W)$. With this notation, the proposal from \cite{RenAdaptiveKnockoffs2020} can be roughly summarized as follows: \begin{enumerate}
\item Fit an initial feature statistic $W$, such as an LCD statistic, and observe $|W|$.
\item Fit an initial version of the model in Equation (\ref{eq::adaknockmodel}) and use it to compute $\gamma_j \defeq \P(W_j > 0, H_j = 1 \mid |W_j|)$.
\item Observe $\sign(W_j)$ for $j = \argmin_j \{\gamma_j : \sign(W_j) \text{ has not yet been observed }\}$.
\item Using $\sign(W_j)$, update the model in Equation (\ref{eq::adaknockmodel}), update $\{\gamma_j\}_{j=1}^p$, and return to Step 3.
\item Terminate when all of $\sign(W)$ has been revealed, at which point $\sign(W)$ is passed to SeqStep in the reverse of the order that the signs were revealed. \end{enumerate}
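The reveal loop above can be sketched as follows (hypothetical code, not the authors' implementation; `gamma_fn` stands in for the model-based estimate of $\P(W_j > 0, H_j = 1 \mid |W_j|)$ and its refitting in Step 4):

```python
import numpy as np

def adaptive_reveal_order(W, gamma_fn):
    """Sketch of the adaptive knockoffs reveal loop. gamma_fn(|W_j|,
    revealed_signs) scores each unrevealed hypothesis; the least promising
    hypothesis is revealed first, so SeqStep sees the most promising first."""
    p = len(W)
    unrevealed = set(range(p))
    revealed_signs = {}                  # j -> sign(W_j), as revealed so far
    reveal_order = []
    while unrevealed:
        # steps 2 and 4: (re)score every unrevealed hypothesis
        gammas = {j: gamma_fn(abs(W[j]), revealed_signs) for j in unrevealed}
        j = min(gammas, key=gammas.get)  # step 3: reveal the least promising
        revealed_signs[j] = np.sign(W[j])
        unrevealed.remove(j)
        reveal_order.append(j)
    # step 5: SeqStep processes hypotheses in the reverse of the reveal order
    return reveal_order[::-1]
```

For instance, with the naive score `gamma_fn = lambda a, hist: a` (prioritize purely by $|W_j|$), the loop reduces to ordinary knockoffs: SeqStep sees the hypotheses in descending order of $|W_j|$.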
Note that in Step 3, the reason \cite{RenAdaptiveKnockoffs2020} choose $j$ to be the index minimizing $\P(W_j > 0, H_j = 1 \mid |W_j|)$ is that in Step 5, SeqStep observes $\sign(W)$ in reverse order of the analyst. Thus, the analyst should observe the least important hypotheses first so that SeqStep can observe the most important hypotheses first.
The main similarity between this procedure and MLR statistics is that both procedures, roughly speaking, attempt to prioritize the hypotheses according to $\P(W_j > 0)$, although we condition on the full masked data to maximize power. That said, there are two important differences. First, when \cite{RenAdaptiveKnockoffs2020} write ``$\P(W_j > 0, H_j = 1 \mid |W_j|)$," they are taking the probability over the auxiliary (heuristic) Bayesian model (\ref{eq::adaknockmodel}). Note that this Bayesian model is not the same as the underlying model for the data $\by$ and $\bX$; in fact, Equation (\ref{eq::adaknockmodel}) defines a new probability space where the random variables $\bX$ and $\by$ are not defined. In contrast, when we say that MLR statistics order the hypotheses in decreasing order of $\P(W_j > 0 \mid D)$, we are referring to the posterior distribution of $(\by, \bX, \theta) \mid D$ with respect to the true data generating process (by assumption). Thus, despite their initial similarity, these procedures are quite different.
The second and more important difference is that the procedure above is not a feature statistic, per se; rather, it is an extension of SeqStep that wraps on top of any initial feature statistic. This ``adaptive knockoffs" procedure augments the power of any feature statistic, although if the initial feature statistic $W$ has many negative signs to begin with or its absolute values $|W|$ are truly uninformative of its signs, the procedure may still be powerless. Since MLR statistics have provable optimality guarantees, e.g., they maximize $\P(W_j > 0 \mid D)$, one might expect that using MLR statistics in place of a lasso statistic could improve the power of the adaptive knockoff filter.
\section{Methodological and computational details for MLR statistics}\label{appendix::gibbs}
\subsection{Why not use a plug-in estimator?}\label{subsec::noplugin}
Since the oracle MLR statistics depend on any unknown parameters $\theta$ which affect the likelihood, one natural choice of feature statistic would be to ``plug in" an estimator $\hat \theta$ in place of $\theta$, where $\hat \theta$ (e.g.) maximizes the regularized masked likelihood $L_{\theta}(D)$. In particular, we define the plug-in MLR statistic as \begin{equation*}
W_j \defeq \log\left(\frac{L_{\hat \theta}(\bX_j = \bx_j \mid D)}{L_{\hat \theta}(\bX_j = \widetilde{\mathbf{x}}_j \mid D)}\right) \text{ for } \hat\theta=\argmax_{\theta} L_{\theta}(D) + \mcP(\theta) \end{equation*} where $\mcP$ is some penalty on $\theta$. In our (unpublished) explorations, we found that the plug-in statistic performed reasonably well, but not nearly as well as the MLR statistics defined in the paper. As discussed in Section \ref{subsec::mlr}, our understanding of this phenomenon is that the plug-in estimator does not account for uncertainty in $\theta$, leading to reduced power. To make matters worse, $L_{\theta}(D)$ is sometimes non-convex. Intuitively, this is because the estimated value of $\theta$ depends very much on the value of $\bX_j$, which is not known when we only observe $D$. For example, if $\by \mid \bX \sim \mcN(\bX \beta, I_n)$, we should expect the estimate of $\beta_j$ to be different in the two settings where (i) $\bX_j = \bx_j$ and (ii) $\bX_j = \widetilde{\mathbf{x}}_j$ \citep{altsign2017, mrcknock2022}, which causes $\theta \mapsto L_{\theta}(D)$ to be multimodal. This has two consequences: \begin{enumerate}
\item Computing $\hat \theta = \argmax_{\theta} L_{\theta}(D)$ is likely to be computationally expensive. That said, this challenge is not insurmountable, since we could use the EM algorithm to account for the fact that we do not know the value of $\bX$ (in the model-X case) or $\bX^T \by$ (in the fixed-X case).
\item This exacerbates the aforementioned problem where the plug-in estimator does not appropriately account for our uncertainty about $\theta$, since there are many local maxima in the likelihood landscape $\theta \mapsto L_{\theta}(D)$ and the plug-in estimate only accounts for the highest peak. \end{enumerate}
As a simple example, we show that $L_{\theta}(D)$ is often non-convex when $\widetilde{\mathbf{X}}$ are fixed-X knockoffs and $\by \mid \bX \sim \mcN(\bX \beta, I_n)$. In this setting, the masked likelihood has a particularly simple form: in general, exactly computing the masked likelihood requires summing over the $2^p$ possible values of $\bX^T \by$, but in the fixed-X case, it can be shown \citep{whiteout2021} that observing the masked data $D$ is equivalent to observing $[\bX, \widetilde{\mathbf{X}}]$ and the random variables $(|\tilde{\beta}|, \xi)$, where \begin{equation}\label{eq::whiteout}
\tilde{\beta} = S^{-1} (\bX - \widetilde{\mathbf{X}})^T \by \text{ and } \xi = \frac{1}{2} (\bX + \widetilde{\mathbf{X}})^T \by, \end{equation} where $S \defeq \bX^T \bX - \widetilde{\mathbf{X}}^T \bX \succ 0$ is diagonal. Both $\tilde{\beta}$ and $\xi$ are Gaussian, so we can easily derive the masked log-likelihood in this setting. Indeed, let $A = \bX^T \bX - S/2$. Then ignoring additive constants, \begin{equation}\label{eq::fxmaskedliklihood}
\log L_{\beta}(D) = \beta^T \xi - \frac{1}{2} \beta^T A \beta - \sum_{j=1}^p \left[\frac{S_{j,j} \beta_j^2}{4} - \log \cosh\left(\frac{1}{2} \beta_j |\tilde{\beta}_j| S_{j,j}\right) \right]. \end{equation} The function $x \mapsto ax^2 - \log \cosh(x)$ is convex for $a \ge \frac{1}{2}$, so the sum in (\ref{eq::fxmaskedliklihood}) is convex---and hence $\log L_{\beta}(D)$ is concave---if $\tilde{\beta}_j^2 \le \frac{2}{S_{j,j}}$ for all $j$. This is extremely unlikely to hold for every $j$, since $\tilde{\beta}_j \simind \mcN\left(\beta_j, 2 S_{j,j}^{-1} \right)$. Thus, even in the simple setting of fixed-X knockoffs, we should expect $\log L_{\beta}(D)$ to be highly multimodal.
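To make this concrete, the following one-dimensional sketch numerically checks concavity of the masked log-likelihood on a grid (our illustration, not from the original analysis; the constants $\xi$, $A$, $S$, and $|\tilde{\beta}|$ are hypothetical, and the per-coordinate $|\tilde{\beta}_j|$ term follows from $\tilde{\beta}_j \simind \mcN(\beta_j, 2 S_{j,j}^{-1})$):

```python
import numpy as np

def masked_loglik(beta, xi, A, S, beta_tilde):
    # One-dimensional fixed-X masked log-likelihood, up to additive constants:
    # log L = beta*xi - A*beta^2/2 - [S*beta^2/4 - log cosh(beta*|beta_tilde|*S/2)]
    return (beta * xi
            - 0.5 * A * beta ** 2
            - (S * beta ** 2 / 4
               - np.log(np.cosh(0.5 * beta * np.abs(beta_tilde) * S))))

grid = np.linspace(-3, 3, 601)

def concave_on_grid(beta_tilde):
    # a concave function has non-positive second differences on any grid
    vals = masked_loglik(grid, xi=0.5, A=1.0, S=1.0, beta_tilde=beta_tilde)
    return bool(np.all(np.diff(vals, 2) <= 1e-9))

print(concave_on_grid(0.1))  # small |beta_tilde|: concave
print(concave_on_grid(5.0))  # large |beta_tilde|: non-concave near the origin
```

A small realization of $|\tilde{\beta}|$ leaves the landscape concave, while a large one creates a convex region around the origin, matching the discussion above.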
\subsection{Gibbs sampling updates for MLR statistics}\label{subsec::gibbs}
In this section, we derive the Gibbs sampling updates for the class of MLR statistics defined in Section \ref{subsubsec::gams}. First, for convenience, we restate the model and the choice of $\pi$.
\subsubsection{Model and prior}
First, we consider the model-X case. For each $j \in [p]$, let $\phi_j(\bX_j) \in \R^{n \times p_j}$ denote a matrix of prespecified basis functions applied to $\bX_j$. We assume the following additive model: \begin{equation*}
\by \mid \bX, \beta, \sigma^2 \sim \mcN\left(\sum_{j=1}^p \phi_j(\bX_j) \beta^{(j)}, \sigma^2 I_n \right) \end{equation*} with the following prior on $\beta^{(j)} \in \R^{p_j}$: \begin{equation*}
\beta^{(j)} \simind \begin{cases}
0 \in \R^{p_j} & \text{w.p. } p_0 \\
\mcN(0, \tau^2 I_{p_j}) & \text{w.p. } 1-p_0.
\end{cases} \end{equation*} with the usual hyperpriors \begin{equation*}
\tau^2 \sim \invGamma(a_{\tau}, b_{\tau}), \sigma^2 \sim \invGamma(a_{\sigma}, b_{\sigma}) \text{ and } p_0 \sim \Beta(a_0, b_0). \end{equation*}
This is effectively a \textit{group} spike-and-slab prior on $\beta^{(j)}$ which ensures group sparsity of $\beta^{(j)}$, meaning that either the whole vector equals zero or the whole vector is nonzero. We use this group spike-and-slab prior for two reasons. First, it reflects the intuition that $\phi_j$ is meant to represent only a single feature and thus $\beta^{(j)}$ will likely be entirely sparse (if $\bX_j$ is truly null) or entirely non-sparse. Second, and more importantly, the group sparsity will substantially improve computational efficiency in the Gibbs sampler.
Lastly, for the fixed-X case, we assume exactly the same model but with the basis functions $\phi_j(\cdot)$ chosen to be the identity. Thus, this model is a typical spike-and-slab Gaussian linear model in the fixed-X case \citep{mcculloch1997}. It is worth noting that our implementation for the fixed-X case actually uses a slightly more general Gaussian mixture model as the prior on $\beta_j$, with density $p(\beta_j) = \sum_{k=0}^m p_k \mcN(\beta_j; 0, \tau_k^2)$, where $\tau_0 = 0$ yields a point mass at zero, $\tau_k \simind \invGamma(a_k, b_k)$ for $k \ge 1$, and $(p_0, \dots, p_m) \sim \Dir(\alpha)$. However, for brevity, we only derive the Gibbs updates for the case of two mixture components.
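For intuition, here is a minimal sketch (ours; the polynomial basis and all hyperparameter values are hypothetical) of drawing from the additive group spike-and-slab model above:

```python
import numpy as np

def sample_from_model(n, p, p_j, p0=0.9, tau2=1.0, sigma2=1.0, seed=0):
    """Draw (beta, X, y) from the additive group spike-and-slab model.
    The basis phi_j here is a hypothetical polynomial basis of dimension p_j."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    phi = np.concatenate(
        [X[:, [j]] ** np.arange(1, p_j + 1) for j in range(p)], axis=1)
    gamma = rng.random(p) > p0                       # group j active w.p. 1 - p0
    beta = np.where(np.repeat(gamma, p_j),
                    rng.normal(0.0, np.sqrt(tau2), p * p_j), 0.0)
    y = phi @ beta + rng.normal(0.0, np.sqrt(sigma2), n)
    return beta, X, y

beta, X, y = sample_from_model(n=50, p=10, p_j=3)
groups = beta.reshape(10, 3)
# group sparsity: each beta^{(j)} is entirely zero or entirely nonzero
print(all((g == 0).all() or (g != 0).all() for g in groups))
```

By construction, each $\beta^{(j)}$ is either identically zero or entirely nonzero, which is exactly the group sparsity the prior is designed to induce.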
\subsubsection{Gibbs sampling updates}\label{subsubsec::gibbs}
Following Section \ref{subsec::computation}, we now review the details of the MLR Gibbs sampler which samples from the posterior of $(\bX, \beta)$ given the masked data $D = \{\by, \{\bx_j, \widetilde{\mathbf{x}}_j\}_{j=1}^p\}$.\footnote{This is a standard derivation, but we review it here for the reader's convenience.} As notation, let $\beta$ denote the concatenation of $\{\beta^{(j)}\}_{j=1}^p$, let $\beta^{(\mathrm{-}j)}$ denote all of the coordinates of $\beta$ except those of $\beta^{(j)}$, let $\gamma_j$ denote the indicator that $\beta^{(j)} \ne 0$, and let $\phi(\bX) \in \R^{n \times \sum_j p_j}$ denote all of the basis functions concatenated together. Also note that although this section mostly uses the language of model-X knockoffs, when the basis functions $\phi_j(\cdot)$ are the identity, the Gibbs updates we are about to describe satisfy the sufficiency property required for fixed-X statistics, and indeed the resulting Gibbs sampler is actually a valid implementation of the fixed-X MLR statistic.
To improve the convergence of the Gibbs sampler, we slightly modify the meta-algorithm in Algorithm \ref{alg::mlrgibbs} to marginalize over the value of $\beta^{(j)}$ when resampling $\bX_j$. To be precise, this means that instead of sampling $\bX_j \mid \bX_{-j}, \beta, \sigma^2$, we sample $\bX_j \mid \bX_{-j}, \beta\negupj, D$. We derive this update in three steps, and along the way we derive the update for $\beta^{(j)} \mid \bX, \beta^{(\mathrm{-}j)}, D$.
\underline{Step 1}: First, we derive the update for $\gamma_j \mid \bX, \beta\negupj, D$. Observe \begin{align*}
\frac{\P(\gamma_j = 0 \mid \bX, \beta^{(\mathrm{-}j)}, D)}{\P(\gamma_j = 1 \mid \bX, \beta^{(\mathrm{-}j)}, D)}
&=
\frac{p_0 p(\by \mid \bX, \beta\negupj, \beta^{(j)} = 0)}{(1-p_0) p(\by \mid \bX, \beta\negupj, \beta^{(j)} \ne 0)}. \end{align*} Analyzing the numerator is easy, as the model specifies that if we let $\mathbf{r} = \by - \phi(\bX_{-j}) \beta^{(\mathrm{-}j)}$, then \begin{equation*}
p(\by \mid \bX, \beta\negupj, \beta^{(j)} = 0) \propto \det(\sigma^2 I_n)^{-1/2} \exp\left(-\frac{1}{2 \sigma^2} \|\mathbf{r}\|_2^2 \right). \end{equation*} For the denominator, observe that $\mathbf{r}, \beta^{(j)} \mid \bX, \beta^{(\mathrm{-}j)}, \beta^{(j)} \ne 0$ is jointly Gaussian: in particular, \begin{equation}\label{eq::joint_r_betaj}
(\beta^{(j)}, \mathbf{r}) \mid \bX, \beta^{(\mathrm{-}j)}, \beta^{(j)} \ne 0 \sim \mcN\left(0, \begin{bmatrix} \tau^2 I_{p_j} & \tau^2 \phi_j(\bX_j)^T \\ \tau^2 \phi_j(\bX_j) & \tau^2 \phi_j(\bX_j) \phi_j(\bX_j)^T + \sigma^2 I_n \end{bmatrix} \right). \end{equation} To lighten notation, let $Q_j \defeq I_{p_j} + \frac{\tau^2}{\sigma^2} \phi_j(\bX_j)^T \phi_j(\bX_j)$. Using the above expression plus the Woodbury identity applied to the density of $\by \mid \bX, \beta^{(\mathrm{-}j)}, \beta^{(j)} \ne 0$, we conclude \begin{equation*}
\frac{\P(\gamma_j = 0 \mid \bX, \beta^{(\mathrm{-}j)}, D)}{\P(\gamma_j = 1 \mid \bX, \beta^{(\mathrm{-}j)}, D)} = \frac{p_0}{1-p_0} \det(Q_j)^{1/2} \exp\left(- \frac{\tau^2}{2 \sigma^4} \mathbf{r}^T \phi_j(\bX_j) Q_j^{-1} \phi_j(\bX_j)^T \mathbf{r} \right). \end{equation*} Since $Q_j$ is a $p_j \times p_j$ matrix, this quantity can be computed relatively efficiently.
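A direct implementation of this Step 1 odds ratio might look as follows (our sketch; variable names are hypothetical). Note that only the small $p_j \times p_j$ matrix $Q_j$ ever needs to be factored:

```python
import numpy as np

def gamma_odds(r, phi_j, p0, tau2, sigma2):
    """Posterior odds P(gamma_j = 0 | ...) / P(gamma_j = 1 | ...) from Step 1.
    r is the residual y - phi(X_{-j}) beta^{(-j)}; phi_j is n x p_j."""
    p_j = phi_j.shape[1]
    Q = np.eye(p_j) + (tau2 / sigma2) * phi_j.T @ phi_j     # only p_j x p_j
    quad = r @ phi_j @ np.linalg.solve(Q, phi_j.T @ r)
    return (p0 / (1 - p0)) * np.sqrt(np.linalg.det(Q)) \
        * np.exp(-tau2 / (2 * sigma2 ** 2) * quad)

rng = np.random.default_rng(0)
phi_j = rng.normal(size=(100, 2))
aligned = phi_j @ np.array([1.0, -1.0])        # residual explained by phi_j
odds_null = gamma_odds(rng.normal(size=100), phi_j, 0.9, 1.0, 1.0)
odds_signal = gamma_odds(5 * aligned, phi_j, 0.9, 1.0, 1.0)
print(odds_signal < odds_null)
```

As expected, a residual strongly aligned with the columns of $\phi_j$ drives the odds toward $\gamma_j = 1$.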
\underline{Step 2}: Next, we derive the distribution of $\beta^{(j)} \mid \by, \bX, \beta^{(\mathrm{-}j)}, \gamma_j$. Of course, the case where $\gamma_j = 0$ is trivial, since then $\beta^{(j)} = 0$ by definition. In the alternative case, note from Equation (\ref{eq::joint_r_betaj}) and standard Gaussian conditioning that \begin{equation*}
\beta^{(j)} \mid \by, \bX, \beta^{(\mathrm{-}j)}, \gamma_j = 1 \sim \mcN\left(\frac{\tau^2}{\sigma^2} \phi_j^T \mathbf{r} - \frac{\tau^4}{\sigma^4} \phi_j^T \phi_j Q_j^{-1} \phi_j^T \mathbf{r}, \tau^2 I_{p_j} - \frac{\tau^4}{\sigma^2} \phi_j^T \phi_j + \frac{\tau^6}{\sigma^4} \phi_j^T \phi_j Q_j^{-1} \phi_j^T \phi_j \right), \end{equation*} where above, we use $\phi_j$ as shorthand for $\phi_j(\bX_j)$ to lighten notation.
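As a sanity check (ours), the mean and covariance above should agree with the standard Gaussian conjugate posterior, whose covariance is $(\phi_j^T \phi_j/\sigma^2 + I_{p_j}/\tau^2)^{-1}$ and whose mean is that covariance times $\phi_j^T \mathbf{r}/\sigma^2$. The following sketch verifies this numerically:

```python
import numpy as np

def beta_j_posterior(r, phi_j, tau2, sigma2):
    """Mean and covariance of beta^{(j)} | gamma_j = 1 from Step 2 (sketch)."""
    p_j = phi_j.shape[1]
    G = phi_j.T @ phi_j
    Qinv = np.linalg.inv(np.eye(p_j) + (tau2 / sigma2) * G)
    mean = (tau2 / sigma2) * phi_j.T @ r \
        - (tau2 / sigma2) ** 2 * G @ Qinv @ (phi_j.T @ r)
    cov = tau2 * np.eye(p_j) - (tau2 ** 2 / sigma2) * G \
        + (tau2 ** 3 / sigma2 ** 2) * G @ Qinv @ G
    return mean, cov

# compare against the standard conjugate posterior on random inputs
rng = np.random.default_rng(1)
phi_j, r, tau2, sigma2 = rng.normal(size=(30, 3)), rng.normal(size=30), 0.7, 2.0
mean, cov = beta_j_posterior(r, phi_j, tau2, sigma2)
cov_std = np.linalg.inv(phi_j.T @ phi_j / sigma2 + np.eye(3) / tau2)
print(np.allclose(cov, cov_std), np.allclose(mean, cov_std @ phi_j.T @ r / sigma2))
```

The Woodbury form avoids ever inverting an $n \times n$ matrix, which is the point of the derivation.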
\underline{Step 3}: Lastly, we derive the update for $\bX_j$ given $\bX_{-j}, \beta^{(\mathrm{-}j)}, D$. In particular, for any vector $\bx$, let $\kappa(\bx) \defeq \P(\gamma_j = 0 \mid \bX_j = \bx, \bX_{-j}, \beta^{(\mathrm{-}j)}, D)$. Then by the law of total probability and the same Woodbury calculations as before, \begin{align*}
\P(\bX_j = \bx \mid \bX_{-j}, \beta^{(\mathrm{-}j)}, D)
\propto&
p(\by \mid \bX_j = \bx, \bX_{-j}, \beta\negupj) \\
&=
\kappa(\bx) p(\by \mid \bX_j = \bx, \bX_{-j}, \beta\negupj, \beta^{(j)} = 0) \\
&+
(1-\kappa(\bx)) p(\by \mid \bX_j = \bx, \bX_{-j}, \beta\negupj, \beta^{(j)} \ne 0) \\
&\propto
\kappa(\bx) \exp\left(-\frac{1}{2 \sigma^2} \|\mathbf{r}\|_2^2 \right) \\
&+
(1-\kappa(\bx)) \det(Q_j(\bx))^{-1/2} \exp\left(-\frac{1}{2 \sigma^2} \|\mathbf{r}\|_2^2 + \frac{\tau^2}{2\sigma^4} \mathbf{r}^T \phi_j(\bx) Q_j(\bx)^{-1} \phi_j(\bx)^T \mathbf{r} \right) \\
&\propto
\kappa(\bx) + (1-\kappa(\bx)) \det(Q_j(\bx))^{-1/2} \exp\left(\frac{\tau^2}{2\sigma^4} \mathbf{r}^T \phi_j(\bx) Q_j(\bx)^{-1} \phi_j(\bx)^T \mathbf{r}\right) \end{align*} where above $Q_j(\bx) = I_{p_j} + \frac{\tau^2}{\sigma^2} \phi_j(\bx)^T \phi_j(\bx)$ as before.
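Putting Step 3 together, a sketch of the resulting discrete update over the two candidate columns (ours; argument names are hypothetical) is:

```python
import numpy as np

def xj_update_probs(r, candidates, kappas, tau2, sigma2, basis):
    """Normalized probabilities for X_j over the candidates {x_j, x_j_tilde}
    (Step 3 sketch); kappas[i] = P(gamma_j = 0 | X_j = candidates[i], ...)."""
    w = []
    for x, kappa in zip(candidates, kappas):
        phi = basis(x)
        p_j = phi.shape[1]
        Q = np.eye(p_j) + (tau2 / sigma2) * phi.T @ phi
        quad = r @ phi @ np.linalg.solve(Q, phi.T @ r)
        w.append(kappa + (1 - kappa) * np.linalg.det(Q) ** -0.5
                 * np.exp(tau2 / (2 * sigma2 ** 2) * quad))
    w = np.array(w)
    return w / w.sum()

rng = np.random.default_rng(2)
x, x_tilde, r = rng.normal(size=20), rng.normal(size=20), rng.normal(size=20)
probs = xj_update_probs(r, [x, x_tilde], [0.5, 0.5], 1.0, 1.0,
                        basis=lambda v: v[:, None])   # identity basis
print(probs)
```

One then draws $\bX_j$ from this two-point distribution; the common factor $\exp(-\|\mathbf{r}\|_2^2/2\sigma^2)$ cancels in the normalization.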
The only other sampling steps required in the Gibbs sampler are to sample from the conditional distributions of $\sigma^2, \tau^2$ and $p_0$; however, this is straightforward since we use conjugate hyperpriors for each of these parameters.
\subsubsection{Extension to binary regression}
We can easily extend the Gibbs sampler in the preceding section to handle the case where the response is binary via data augmentation. Indeed, let us start by considering the case of probit regression, which means we observe $\bz = \I(\by \ge 0) \in \{0,1\}^n$ instead of the continuous outcome $\by$. Following \cite{albertchib1993}, we note that the distribution of $\by \mid \bz, \bX, \beta$ is truncated normal, namely \begin{equation}\label{eq::dataaugment}
\by_i \mid \bz, \bX, \beta \simind \begin{cases}
\TruncNorm(\mu_i, \sigma^2; (0,\infty)) & \bz_i = 1 \\
\TruncNorm(\mu_i, \sigma^2; (-\infty, 0)) & \bz_i = 0,
\end{cases} \end{equation} where $\mu = \phi(\bX) \beta = \E[\by \mid \bX, \beta]$. Thus, when we observe a binary response $\bz$ instead of the continuous response $\by$, we can employ the same Gibbs sampler as in Section \ref{subsubsec::gibbs} except that after updating $\beta^{(j)} \mid \bX, \beta^{(\mathrm{-}j)}, \by$, we resample the latent variables $\by$ according to Equation (\ref{eq::dataaugment}), which takes $O(n)$ computation per iteration (since we can continuously update the value of $\mu$ whenever we update $\bX$ or $\beta$ in $O(n)$ operations as well). As a result, the computational complexity of this algorithm is the same as that of the algorithm in Section \ref{subsubsec::gibbs}. A similar formulation based on PolyGamma random variables is available for the case of logistic regression (see \cite{polygamma2013}).
\section{MLR statistics for group knockoffs}\label{appendix::groupmlr}
In this section, we describe how MLR statistics extend to the setting of group knockoffs \citep{daibarber2016}. In particular, for a partition $G_1, \dots, G_m$ of $[p]$, group knockoffs allow analysts to test the \textit{group} null hypotheses $H_{G_j} : X_{G_j} \Perp Y \mid X_{-G_j}$, which can be useful in settings where $\bX$ is highly correlated and there is not enough data to discover individual null variables. Specifically, knockoffs $\widetilde{\mathbf{X}}$ are model-X \textit{group} knockoffs if they satisfy the \textit{group} pairwise-exchangeability condition $[\bX, \widetilde{\mathbf{X}}]_{\swap{G_j}} \disteq [\bX, \widetilde{\mathbf{X}}]$ for each $j \in [m]$. Similarly, $\widetilde{\mathbf{X}}$ are fixed-X group knockoffs if (i) $\bX^T \bX = \widetilde{\mathbf{X}}^T \widetilde{\mathbf{X}}$ and (ii) $S = \bX^T \bX - \widetilde{\mathbf{X}}^T \bX$ is block-diagonal, where the blocks correspond to groups $G_1, \dots, G_m$. Given group knockoffs, one computes a single knockoff feature statistic for each group.
MLR statistics extend naturally to the group knockoff setting because we can treat each group of features $X_{G_j}$ as a single compound feature. In particular, the masked data for group knockoffs is \begin{equation}\label{eq::groupmaskeddata}
D = \begin{cases}
(\by, \{\bX_{G_j}, \widetilde{\mathbf{X}}_{G_j}\}_{j=1}^m) & \text{ for model-X knockoffs} \\
(\bX, \widetilde{\mathbf{X}}, \{\bX_{G_j}^T \by, \widetilde{\mathbf{X}}_{G_j}^T \by\}_{j=1}^m) & \text{ for fixed-X knockoffs,}
\end{cases} \end{equation} and the corresponding MLR statistics are \begin{equation*}
W_j\opt = \log\left(\frac{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_{G_j} = \bx_{G_j} \mid D)\right]}{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_{G_j} = \widetilde{\mathbf{x}}_{G_j} \mid D)\right]}\right) \text{ for model-X knockoffs} \end{equation*} \begin{equation*}
W_j\opt = \log\left(\frac{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_{G_j}^T \by = \bx_{G_j}^T \by \mid D)\right]}{\E_{\theta \sim \pi}\left[L_{\theta}(\bX_{G_j}^T \by = \widetilde{\mathbf{x}}_{G_j}^T \by \mid D)\right]}\right) \text{ for fixed-X knockoffs.} \end{equation*} Throughout the paper, we have proved several optimality properties of MLR statistics, and if we treat $X_{G_j}$ as a single compound feature, all of these theoretical results (namely Proposition \ref{prop::bestsigns} and Theorem \ref{thm::avgopt}) immediately apply to group MLR statistics as well.
To compute group MLR statistics, we can use exactly the same Gibbs sampling strategy as in Section \ref{subsec::gibbs}---indeed, one can just treat $X_{G_j}$ as a basis representation of a single compound feature and use exactly the same equations as derived previously.
\section{Additional details for the simulations}\label{appendix::simdetails}
\begin{figure}
\caption{This plot is identical to Figure \ref{fig::nonlin} except it shows the results for $q=0.05$.}
\label{fig::nonlin2}
\end{figure}
In this section, we describe the simulation settings in Section \ref{sec::sims}, and we also give the corresponding plot to Figure \ref{fig::nonlin} which shows the results when $q=0.05$. To start, we describe the simulation settings for each plot.
\begin{enumerate}
\item \underline{Sampling $\bX$}: We sample $\bX \sim \mcN(0, \Sigma)$ in all simulations, with two choices of $\Sigma$. First, in the ``AR(1)" setting, we take $\Sigma$ to correspond to a nonstationary AR(1) Gaussian Markov chain, so $\bX$ has i.i.d. rows satisfying $X_j \mid X_1, \dots, X_{j-1} \sim \mcN(\rho_j X_{j-1}, 1)$ with $\rho_j = \min(0.99, b_j)$ for $b_j \iid \Beta(5,1)$. Note that the AR(1) setting is the default used in any plot where the covariance matrix is not specified. Second, in the ``ErdosRenyi" (ER) setting, we sample a random matrix $V$ such that $80\%$ of its off-diagonal entries (selected uniformly at random) are equal to zero; for the remaining entries, we sample $V_{ij} \iid \Unif((-1, -0.1) \cup (0.1, 1))$. To ensure the final covariance matrix is positive definite, we set $\Sigma = (V + V^T) + (0.1 + |\lambda_{\min}(V + V^T)|) I_p$ and then rescale $\Sigma$ to be a correlation matrix.
\item \underline{Sampling $\beta$}: Unless otherwise specified in the plot, we randomly choose $s=10\%$ of the entries of $\beta$ to be nonzero and sample the nonzero entries as i.i.d. $\Unif([-\tau, -\tau/2] \cup [\tau/2, \tau])$ random variables with $\tau = 0.5$ by default. The exceptions are: (1) in Figure \ref{fig::misspec}, we set $\tau = 0.3$, vary $s$ between $0.05$ and $0.4$ as shown in the plot, and in some panels sample the non-null coefficients as $\Laplace(\tau)$ random variables, (2) in Figure \ref{fig::nonlin} we take $\tau = 2$ and $s=0.3$, and (3) in Figure \ref{fig::logistic} we take $\tau = 1$.
\item \underline{Sampling $\by$}: Throughout we sample $\by \mid \bX \sim \mcN(\bX \beta, I_n)$, with only two exceptions. First, in Figure \ref{fig::nonlin}, we sample $\by \mid \bX \sim \mcN(h(\bX) \beta, I_n)$ where $h$ is a nonlinear function applied elementwise to $\bX$, for $h(x)=\sin(x), h(x)=\cos(x), h(x)=x^2$ and $h(x)=x^3$. Second, in Figure \ref{fig::logistic}, $\by$ is binary and $\P(Y = 1 \mid X) = \frac{\exp(X^T \beta)}{1 + \exp(X^T \beta)}$.
\item \underline{Sampling knockoffs}: We sample MVR and SDP Gaussian knockoffs using the default parameters from \texttt{knockpy} version 1.3, both in the fixed-X and model-X case. Note that in the model-X case, we use the true covariance matrix $\Sigma$ to sample knockoffs, thus guaranteeing finite-sample FDR control.
\item \underline{Fitting feature statistics}: We fit the following types of feature statistics throughout the simulations: LCD statistics, LSM statistics, a random forest with swap importances \citep{knockoffsmass2018}, DeepPINK \citep{deeppink2018}, MLR statistics (linear variant), MLR statistics with splines, and the MLR oracle. In all cases we use the default hyperparameters from \texttt{knockpy} version 1.3, and we do not adjust the hyperparameters, so that the MLR statistics do not have well-specified priors. The exception is that the MLR oracle has access to the underlying data-generating process and the true coefficients $\beta$, which is why it is an ``oracle." \end{enumerate}
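For concreteness, the ``ErdosRenyi'' covariance construction in item 1 can be sketched as follows (our reimplementation; parameter names are hypothetical, and we shift the spectrum by $|\lambda_{\min}|$ to guarantee positive definiteness):

```python
import numpy as np

def erdos_renyi_cov(p, zero_frac=0.8, seed=0):
    """Sketch of the 'ErdosRenyi' covariance construction from item 1."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(0.1, 1.0, (p, p)) * rng.choice([-1.0, 1.0], (p, p))
    V[rng.random((p, p)) < zero_frac] = 0.0       # ~80% of entries zeroed
    np.fill_diagonal(V, 0.0)
    Sigma = V + V.T
    # shift the spectrum so the matrix is positive definite ...
    lam_min = np.linalg.eigvalsh(Sigma).min()
    Sigma += (0.1 + max(0.0, -lam_min)) * np.eye(p)
    # ... then rescale to a correlation matrix
    d = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(d, d)

Sigma = erdos_renyi_cov(50)
print(np.allclose(np.diag(Sigma), 1.0), np.linalg.eigvalsh(Sigma).min() > 0)
```

The rescaling to a correlation matrix preserves positive definiteness, so the result is a valid covariance for sampling $\bX$.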
Now, recall that in Figure \ref{fig::nonlin}, we showed the results for $q=0.1$ because several competitor feature statistics made no discoveries at $q=0.05$. Figure \ref{fig::nonlin2} is the corresponding plot for $q=0.05$.
\section{Additional results for the real data applications}\label{appendix::realdata}
\subsection{HIV drug resistance}\label{appendix::hiv}
For the HIV drug resistance application, Figures \ref{fig::hiv_pi_sdp} and \ref{fig::hiv_pi_mvr} show the same results as in Figure \ref{fig::wstatplot} but for all drugs in the protease inhibitor (PI) class; broadly, they show that MLR statistics have higher power because they ensure that the feature statistics with high absolute values are consistently positive, as discussed in Section \ref{subsec::badlasso}. Note that in these plots, for the lasso-based statistics, we plot the normalized statistics $\frac{W_j}{\max_i|W_i|}$ so that the absolute value of each statistic is less than one. Similarly, for the MLR statistics, instead of directly plotting the masked likelihood ratio as per Equation (\ref{eq::mlr}), we plot \begin{equation*}
W_j^{\star\star} \defeq 2\left(\logit^{-1}(|W_j\opt|) - 0.5\right) = 2\left(\P(W_j\opt > 0 \mid D) - 0.5\right) \end{equation*} because we find this quantity easier to interpret than a log likelihood ratio. In particular, $W_j^{\star\star} = 0$ if and only if $W_j\opt$ is equally likely to be positive or negative, and $W_j^{\star\star} = 1$ when $W_j\opt$ is guaranteed to be positive, at least according to the approximate posterior.
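The transformation above is straightforward to compute (a sketch of our normalization; the function name is hypothetical):

```python
import numpy as np

def w_star_star(w):
    """Map the MLR statistic W_j to W_j** = 2*(logit^{-1}(|W_j|) - 0.5)."""
    return 2.0 * (1.0 / (1.0 + np.exp(-np.abs(w))) - 0.5)

print(w_star_star(0.0))   # 0.0: positive and negative signs equally likely
print(round(w_star_star(4.0), 3))
```

The map is monotone in $|W_j\opt|$ and takes values in $[0, 1)$, so it preserves the ordering of the statistics while being readable as a (shifted, rescaled) posterior sign probability.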
Additionally, Figures \ref{fig::hiv_sdp} and \ref{fig::hiv_mvr} show the number of discoveries made by each feature statistic for SDP and MVR knockoffs, respectively, stratified by the drug in question. Note that the specific data analysis is identical to that of \cite{fxknock} and \cite{dbh2020} other than the choice of feature statistic---see either of those papers or \url{https://github.com/amspector100/mlr_knockoff_paper} for more details.
\begin{figure}
\caption{The same as Figure \ref{fig::wstatplot}, except this shows results for drugs in the PI class using SDP knockoffs. It shows that MLR statistics with large absolute values have consistently positive signs, whereas this is not always true for LCD and LSM statistics.}
\label{fig::hiv_pi_sdp}
\end{figure}
\begin{figure}
\caption{The same as Figure \ref{fig::wstatplot}, except this shows results for drugs in the PI class using MVR knockoffs. It shows that MLR statistics with large absolute values have consistently positive signs, whereas this is not always true for LCD and LSM statistics.}
\label{fig::hiv_pi_mvr}
\end{figure}
\begin{figure}
\caption{This figure shows the number of discoveries made by each feature statistic for each drug in the HIV drug resistance dataset.}
\label{fig::hiv_sdp}
\end{figure}
\begin{figure}
\caption{This figure shows the number of discoveries made by each feature statistic for each drug in the HIV drug resistance dataset.}
\label{fig::hiv_mvr}
\end{figure}
\subsection{Financial factor selection}
We now present a few additional details for the financial factor selection analysis from Section \ref{subsec::fundrep}. First, we list the ten index funds we analyze, which are: XLB (materials), XLC (communication services), XLE (energy), XLF (financials), XLK (information technology), XLP (consumer staples), XLRE (real estate), XLU (utilities), XLV (health care), and XLY (consumer discretionary). Second, for each feature statistic, Table \ref{tab::fundfdp} shows the average realized FDP across all ten analyses---as desired, the average FDP for each method is lower than the nominal level of $q=0.05$. \begin{table}
\centering
\begin{tabular}{|l|l|r|}
\hline
Knockoff Type & Feature Stat. & Average FDP \\ \hline
MVR & LCD & 0.013636 \\ \hline
& LSM & 0.004545 \\ \hline
& MLR & 0.038571 \\ \hline
SDP & LCD & 0.000000 \\ \hline
& LSM & 0.035000 \\ \hline
& MLR & 0.039002 \\ \hline
\end{tabular}
\caption{This table shows the average FDP, defined above, for each method in the financial factor selection analysis from Section \ref{subsec::fundrep}.}
\label{tab::fundfdp} \end{table}
\end{document} |
\begin{document}
\title{The Dual Minkowski Problem for Negative Indices
}
\author{Yiming Zhao }
\institute{Yiming Zhao \at
Department of Mathematics, Tandon School of Engineering, New York University, NY, USA\\
\email{[email protected]} }
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract} Recently, the duals of Federer's curvature measures, called dual curvature measures, were discovered by Huang, Lutwak, Yang \& Zhang \cite{HLYZ}. In the same paper, they posed the dual Minkowski problem, the characterization problem for dual curvature measures, and proved existence results when the index, $q$, is in $(0,n)$. The dual Minkowski problem includes the Aleksandrov problem ($q=0$) and the logarithmic Minkowski problem ($q=n$) as special cases. In the current work, a complete solution to the dual Minkowski problem whenever $q<0$, including both existence and uniqueness, is presented. \keywords{the dual Minkowski problem \and dual curvature measures \and Monge-Amp\`{e}re equation \and the dual Brunn-Minkowski theory}
\subclass{52A40} \end{abstract}
\section{Introduction} \label{section introduction} Quermassintegrals, which include volume and surface area as special cases, are among the most fundamental geometric invariants in the Brunn-Minkowski theory of convex bodies (compact, convex subsets of $\mathbb{R}^n$ with non-empty interiors). Denote by $\mathcal{K}_o^n$ the set of all convex bodies with the origin in their interiors. For $i=1,\cdots, n$, the $(n-i)$-th quermassintegral $W_{n-i}(K)$ of $K\in \mathcal{K}_o^n$ can be seen as the mean of the projection areas of $K$: \begin{equation} \label{eq intro quermassintegral}
W_{n-i}(K)=\frac{\omega_n}{\omega_i}\int_{G(n,i)}\mathcal{H}^i(K|\xi)d\xi, \end{equation}
where the integration is with respect to the Haar measure on the Grassmannian $G(n,i)$ containing all $i$ dimensional subspaces of $\mathbb{R}^n$. Here $K|\xi$ is the image of the orthogonal projection of $K$ onto $\xi \in G(n,i)$, $\mathcal{H}^i$ is the $i$ dimensional Hausdorff measure and $\omega_j$ is the $j$ dimensional volume of the unit ball in $\mathbb{R}^j$ for each positive integer $j$. Three families of geometric measures can be viewed as differentials of quermassintegrals: area measures, (Federer's) curvature measures, and $L_p$ surface area measures. If one replaces the orthogonal projection in \eqref{eq intro quermassintegral} by intersection, the fundamental geometric functionals in the dual Brunn-Minkowski theory will appear. The $(n-i)$-th dual quermassintegral $\widetilde{W}_{n-i}(K)$ can be defined by \begin{equation} \label{eq intro dual quermassintegral} \widetilde{W}_{n-i}(K)=\frac{\omega_n}{\omega_i}\int_{G(n,i)}\mathcal{H}^i(K\cap \xi)d\xi. \end{equation} Compare \eqref{eq intro quermassintegral} with \eqref{eq intro dual quermassintegral}. As opposed to quermassintegrals which capture boundary information about convex bodies, dual quermassintegrals, the fundamental geometric functionals in the dual Brunn-Minkowski theory, encode interior properties of convex bodies. Arising from dual quermassintegrals is a new family of geometric measures, dual curvature measures $\widetilde{C}_q(K,\cdot)$ for $q\in \mathbb{R}$, discovered by Huang, Lutwak, Yang \& Zhang (Huang-LYZ) in their groundbreaking work \cite{HLYZ}. This new family of measures miraculously connects well-known measures like Aleksandrov's integral curvature and the cone volume measure.
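As a simple illustration (ours): for the unit ball $B$, every section $B\cap \xi$ with $\xi\in G(n,i)$ is an $i$ dimensional unit ball, so \eqref{eq intro dual quermassintegral} gives
\begin{equation*}
\widetilde{W}_{n-i}(B)=\frac{\omega_n}{\omega_i}\int_{G(n,i)}\mathcal{H}^i(B\cap \xi)\,d\xi=\frac{\omega_n}{\omega_i}\,\omega_i=\omega_n,
\end{equation*}
since the Haar measure on $G(n,i)$ is normalized to have total mass $1$; the same computation applied to \eqref{eq intro quermassintegral} shows $W_{n-i}(B)=\omega_n$ as well, so the two families of functionals agree on balls.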
Huang-LYZ \cite{HLYZ} asked for the conditions on a given Borel measure $\mu$ necessary and sufficient for it to be exactly the $q$-th dual curvature measure of a convex body. This problem is called \emph{the dual Minkowski problem}. The Aleksandrov problem and the logarithmic Minkowski problem are important special cases of the dual Minkowski problem. In the case when $\mu$ has $f$ as its density, solving the dual Minkowski problem is equivalent to solving the following Monge-Amp\`{e}re type equation on the unit sphere $S^{n-1}$: \begin{equation*}
\frac{1}{n}h(v)|\nabla_{S^{n-1}}h(v)+h(v)v|^{q-n}\text{det}(h_{ij}(v)+h(v)\delta_{ij})=f(v), \end{equation*} where $\nabla_{S^{n-1}}h$ is the gradient of $h$ on $S^{n-1}$, $h_{ij}$ is the Hessian of $h$ with respect to an orthonormal frame on $S^{n-1}$, and $\delta_{ij}$ is the Kronecker delta. Huang-LYZ \cite{HLYZ} considered the dual Minkowski problem for $q\in (0,n)$ when the given measure is even and gave a sufficient condition that ensures the existence of a solution. The uniqueness of the solution remains open (except when $q=0$). Very recently, for the critical integer cases $q=1,2,\cdots, n-1$, a better sufficient condition for the existence part of the dual Minkowski problem when the given measure is even was given by the author in \cite{YZ}. Independently and simultaneously, the same condition was presented by B\"{o}r\"{o}czky, Henk \& Pollehn in \cite{BH} and shown by them to be necessary. In the current work, we consider the dual Minkowski problem for the case when $q<0$. A complete solution in this case, including existence and uniqueness, will be presented.
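To see the role of the exponent $q$ concretely, consider the simplest case where $f\equiv c>0$ is constant. The ansatz $h\equiv r$, i.e., $K$ a centered ball of radius $r$, has $\nabla_{S^{n-1}}h=0$ and $h_{ij}+h\delta_{ij}=r\delta_{ij}$, so the equation reduces to
\begin{equation*}
\frac{1}{n}\,r\cdot r^{q-n}\cdot r^{n-1}=\frac{r^q}{n}=c, \qquad \text{that is,} \qquad r=(nc)^{1/q}.
\end{equation*}
In particular, for every $q\neq 0$ a constant density is attained by a centered ball; when $q<0$, the radius decreases as $c$ increases.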
The family, $L_p$ surface area measures $S_p(K,\cdot)$ for $p\in \mathbb{R}$, introduced by Lutwak \cite{MR1231704,MR1378681}, is the family of fundamental geometric measures in the $L_p$ Brunn-Minkowski theory and has appeared in a growing number of works, see, for example, Haberl \cite{MR2966660}, Haberl \& Parapatits \cite{MR3194492,MR3176613}, Haberl \& Schuster \cite{MR2545028}, and LYZ \cite{MR1863023,MR2142136}. When $p=0$, $L_p$ surface area measure is best known as the cone volume measure and is being intensively studied, see, for example, Barthe, Gu\'{e}don, Mendelson \& Naor \cite{MR2123199}, B\"{o}r\"{o}czky-LYZ \cite{MR2964630,BLYZ,MR3316972}, B\"{o}r\"{o}czky \& Henk \cite{MR3415694}, Henk \& Linke \cite{MR3148545}, Ludwig \& Reitzner \cite{MR2680490}, Stancu \cite{MR1901250,MR2019226}, Zhu \cite{MR3228445}, and Zou \& Xiong \cite{MR3255458}.
The characterization problem for $L_p$ surface area measure is called the $L_p$ Minkowski problem, which asks for necessary and sufficient condition(s) on the given measure so that it is exactly the $L_p$ surface area measure of a convex body. The solution to the $L_p$ Minkowski problem when $p>1$ was given by Chou \& Wang \cite{MR2254308}. See also Chen \cite{MR2204749}, Lutwak \cite{MR1231704}, LYZ \cite{MR2067123}, and Zhu \cite{MR3352764}. The solution to the $L_p$ Minkowski problem has proven to be essential in establishing different analytic affine isoperimetric inequalities (see, for example, Haberl \& Schuster \cite{MR2530600}, LYZ \cite{MR1987375}, Wang \cite{MR2927377}, and Zhang \cite{MR1776095}). Two major unsolved cases of the $L_p$ Minkowski problem are: when $p=-n$, the centro-affine Minkowski problem (see Zhu \cite{MR3356071}) and when $p=0$, the logarithmic Minkowski problem. For the logarithmic Minkowski problem, a necessary and sufficient condition has been given by B\"{o}r\"{o}czky-LYZ \cite{MR2964630} in the even case to ensure the existence of a solution. When the measure is arbitrary, different efforts have been made by B\"{o}r\"{o}czky, Heged\H{u}s \& Zhu \cite{Boroczky20062015}, Stancu \cite{MR1901250,MR2019226}, and Zhu \cite{MR3228445}. In both cases, the uniqueness part of the logarithmic Minkowski problem has proven to be extremely difficult and still remains open. See B{\"o}r{\"o}czky-LYZ \cite{MR2964630} and Stancu \cite{MR1901250,MR2019226} for some progress in the planar case. The logarithmic Minkowski problem has strong connections with whether a measure has an affine isotropic image (B{\"o}r{\"o}czky-LYZ \cite{MR3316972}) and curvature flows (Andrews \cite{MR1714339,MR1949167}).
The first curvature measure $C_0(K,\cdot)$ is also known as Aleksandrov's integral curvature. Its characterization problem is called the Aleksandrov problem whose solution was given by Aleksandrov using a topological argument, namely, his Mapping Lemma \cite{MR0007625}. A very interesting new approach to solving the Aleksandrov problem was presented by Oliker \cite{MR2332603}. Aleksandrov's integral curvature (the Aleksandrov problem resp.) and the cone volume measure (the logarithmic Minkowski problem resp.) were never thought to be connected until the recent remarkable work \cite{HLYZ} by Huang-LYZ. They discovered a family of geometric measures, dual curvature measures $\widetilde{C}_q(K,\cdot)$ for $q\in \mathbb{R}$, that can be viewed as differentials of dual quermassintegrals. Readers are recommended to see Section 3 in \cite{HLYZ} for the detailed construction of dual curvature measures. The family of dual curvature measures serves as a bridge linking Aleksandrov's integral curvature measure and the cone volume measure. When $q=0$, $0$-th dual curvature measure is (up to a constant) the same as Aleksandrov's integral curvature for the polar body. When $q=n$, $n$-th dual curvature measure is (up to a constant) equal to the cone volume measure. Huang-LYZ posed the characterization problem for dual curvature measures:
\textbf{The dual Minkowski problem} \cite{HLYZ}:\textit{ Given a finite Borel measure $\mu$ on $S^{n-1}$ and $q\in \mathbb{R}$, find the necessary and sufficient condition(s) on $\mu$ so that there exists a convex body $K$ containing the origin in its interior and $\mu(\cdot) = \widetilde{C}_q(K,\cdot)$. }
The dual Minkowski problem contains two special cases: when $q=0$, it becomes the Aleksandrov problem; when $q=n$, it becomes the logarithmic Minkowski problem.
In this paper, we will consider the dual Minkowski problem for the case $q<0$. A complete solution, including the existence and the uniqueness parts, will be presented. Note that our results hold not only in the even case, but also when the given measure is arbitrary. In particular, the main theorems in this paper are:
\begin{customthm}{(Existence part of the dual Minkowski problem for negative indices)} Suppose $q<0$ and $\mu$ is a non-zero finite Borel measure on $S^{n-1}$. There exists a convex body $K\subset \mathbb{R}^n$ that contains the origin in its interior, such that $\mu(\cdot)=\widetilde{C}_q(K,\cdot)$ if and only if $\mu$ is not concentrated in any closed hemisphere. \end{customthm} \begin{customthm}{(Uniqueness part of the dual Minkowski problem for negative indices)} Suppose $q<0$ and $K,L$ are two convex bodies that contain the origin in their interiors. If $\widetilde{C}_q(K,\cdot) = \widetilde{C}_q(L,\cdot)$, then $K=L$. \end{customthm}
Dual curvature measures and dual Minkowski problem are concepts belonging to the dual Brunn-Minkowski theory initiated by Lutwak (see Schneider \cite{schneider2014}). The theory started out by replacing support functions by radial functions, and mixed volumes by dual mixed volumes. The dual Brunn-Minkowski theory has been most effective in dealing with questions related to intersections, while the Brunn-Minkowski theory has been most helpful in answering questions related to projections. One of the major triumphs of the dual Brunn-Minkowski theory is tackling the famous Busemann-Petty problem, see Gardner \cite{MR1298719}, Gardner, Koldobsky \& Schlumprecht \cite{MR1689343}, Lutwak \cite{MR963487}, and Zhang \cite{MR1689339}. Although the duality demonstrated in the dual Brunn-Minkowski theory is only heuristic, it has provided numerous strikingly similar (formally), yet significant concepts and results, see, for example, Gardner \cite{MR2353261}, Gardner, Hug \& Weil \cite{MR3120744}, Haberl \cite{MR2397461}, Haberl \& Ludwig \cite{MR2250020}, and Zhang \cite{MR1443203}. Also see Gardner \cite{MR2251886} and Schneider \cite{schneider2014} for a detailed account.
Recall that dual quermassintegrals are the means of the intersection areas of convex bodies (see \eqref{eq intro dual quermassintegral}). Define the normalized $(n-i)$-th dual quermassintegral $\bar{W}_{n-i}(K)$ of $K\in \mathcal{K}_o^n$ to be \begin{equation*} \bar{W}_{n-i}(K)=\left(\frac{1}{\omega_n}\widetilde{W}_{n-i}(K)\right)^\frac{1}{i}. \end{equation*} Both dual quermassintegrals and normalized dual quermassintegrals can be naturally extended to real indices, see \eqref{eq dual quermassintegral}, \eqref{eq normalized dual quermassintegral 1}, and \eqref{eq normalized dual quermassintegral 2}. For each $q\in \mathbb{R}$, the $q$-th dual curvature measure, denoted by $\widetilde{C}_q(K,\cdot)$, of a convex body $K$ containing the origin in its interior, may be defined to be the unique Borel measure on $S^{n-1}$ such that \begin{equation} \label{eq intro variational formula}
\left.\frac{d}{dt}\log \bar{W}_{n-q}([K,f]_t)\right|_{t=0}=\frac{1}{\widetilde{W}_{n-q}(K)}\int_{S^{n-1}}f(v)d\widetilde{C}_q(K,v), \end{equation} holds for each continuous $f:S^{n-1}\rightarrow \mathbb{R}$. Here $[K,f]_t$ is the \emph{logarithmic Wulff shape} generated by $K$ and $f$, i.e., \begin{equation*} [K,f]_t=\{x\in \mathbb{R}^n:x\cdot v\leq h_K(v)e^{tf(v)},\forall v\in S^{n-1}\}. \end{equation*} Of critical importance is the fact that dual curvature measures are valuations, i.e., $$\widetilde{C}_q(K,\cdot)+\widetilde{C}_q(L,\cdot)= \widetilde{C}_q(K\cup L,\cdot)+\widetilde{C}_q(K\cap L,\cdot),$$ for each $K,L\in \mathcal{K}_o^n$ such that $K\cup L\in \mathcal{K}_o^n$. See, e.g., Haberl \cite{MR2966660}, Haberl \& Ludwig \cite{MR2250020}, Haberl \& Parapatits \cite{MR3194492,MR3176613}, Ludwig \cite{MR2159706,MR2772547}, Ludwig \& Reitzner \cite{MR2680490}, Schuster \cite{MR2435426,MR2668553}, Schuster \& Wannerer \cite{MR2846354} and the references therein for important valuations and their characterizations in the theory of convex bodies.
The current paper aims to give a complete solution, including existence and uniqueness, to the dual Minkowski problem for negative indices.
\section{Preliminaries} \label{section preliminaries} \subsection{Basics regarding convex bodies} Books such as \cite{MR2251886} and \cite{schneider2014} often serve as good references for the theory of convex bodies.
We will mainly be working in $\mathbb{R}^n$ equipped with the usual Euclidean norm $|\cdot|$. The standard inner product will be written as $x\cdot y$ for vectors $x,y\in \mathbb{R}^n$. The standard $n$-dimensional unit ball will be denoted by $B_n$ and its volume by $\omega_n$. Write $S^{n-1}$ for the boundary of $B_n$ and recall that its surface area is $n\omega_n$. We will use $C(S^{n-1})$ to denote the space of continuous functions on $S^{n-1}$ with the usual max norm; i.e., $||f||=\max\{|f(u)|:u\in S^{n-1}\}$. We will also write $C^+(S^{n-1})$ for the set of positive continuous functions on $S^{n-1}$. For a given measure $\mu$, we will use $|\mu|$ for its total measure.
A subset $K$ of $\mathbb{R}^n$ is called a \emph{convex body} if it is a compact convex set with non-empty interior. The set of all convex bodies that contain the origin in the interior is denoted by $\mathcal{K}_o^n$. The boundary of $K$ will be denoted by $\partial K$.
Associated to each convex body $K\in \mathcal{K}_o^n$ is the support function $h_K:S^{n-1}\rightarrow \mathbb{R}$ given by \begin{equation*} h_K(v)=\max \{v\cdot x: x\in K\}, \end{equation*} for each $v\in S^{n-1}$. It is easy to see that $h_K$ is a continuous function and $h_K>0$. Hence, the support function $h_K$ is bounded away from $0$.
Another function that can be associated to a convex body $K\in \mathcal{K}_o^n$ is the radial function $\rho_K$. Define $\rho_K:S^{n-1}\rightarrow \mathbb{R}$ by \begin{equation*} \rho_K(u)=\max\{\lambda>0:\lambda u\in K\}, \end{equation*} for each $u\in S^{n-1}$. Again, it can be seen that $\rho_K$ is a continuous function and $\rho_K>0$. Hence, the radial function $\rho_K$ is bounded away from $0$.
The set $\mathcal{K}_o^n$ can be endowed with two metrics: the Hausdorff metric is such that the distance between $K,L\in \mathcal{K}_o^n$ is $||h_K-h_L||$; the radial metric is such that the distance between $K,L\in \mathcal{K}_o^n$ is $||\rho_K-\rho_L||$. Note that the two metrics are equivalent, i.e., if $K\in \mathcal{K}_o^n$ and $K_1,K_2,\cdots\in \mathcal{K}_o^n$, then \begin{equation*} h_{K_i} \rightarrow h_K \text{ uniformly} \qquad\text{ if and only if } \qquad\rho_{K_i} \rightarrow \rho_K \text{ uniformly.} \end{equation*} Thus, we may write $K_i\rightarrow K$ without specifying which metric is in use.
For each $K\in \mathcal{K}_o^n$, we can define the polar body $K^*$ by \begin{equation*} K^*=\{x\in \mathbb{R}^n:x\cdot y\leq 1, \text{ for all }y\in K\}. \end{equation*} Note that $K^*\in \mathcal{K}_o^n$ and by its definition, we have \begin{equation*} \rho_K = 1/h_{K^*} \qquad\text{ and } \qquad h_K=1/\rho_{K^*}. \end{equation*} We will also be using the fact that if $K\in \mathcal{K}_o^n$ and $K_1,K_2,\cdots\in \mathcal{K}_o^n$, then \begin{equation} \label{eq polar convergence} K_i\rightarrow K \qquad\text{ if and only if } \qquad K_i^*\rightarrow K^*. \end{equation}
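For the reader's convenience, we record why these reciprocal relations hold. By the definition of the polar body,
\begin{equation*}
\rho_{K^*}(v)=\max\{\lambda>0:\lambda v\cdot y\leq 1 \text{ for all } y\in K\}=\frac{1}{\max\{v\cdot y:y\in K\}}=\frac{1}{h_K(v)},
\end{equation*}
for each $v\in S^{n-1}$, which gives $h_K=1/\rho_{K^*}$; replacing $K$ by $K^*$ and using $K^{**}=K$ gives $\rho_K=1/h_{K^*}$.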
For each $f\in C^+(S^{n-1})$, define $[f] \in \mathcal{K}_o^n$ to be the Wulff shape generated by $f$, i.e., \begin{equation*} [f]= \{x\in \mathbb{R}^n: x\cdot v \leq f(v) \text{ for all } v\in S^{n-1}\}. \end{equation*} Obviously, one has \begin{equation} \label{radial of convex hull} h_{[f]}\leq f, \end{equation} and \begin{equation} \label{radial of convex hull of radial} [ h_K] = K, \end{equation} for each $K\in \mathcal{K}_o^n$. Suppose $K\in \mathcal{K}_o^n$ and $g\in C(S^{n-1})$. When $\delta>0$ is small enough, we can define $h_t\in C^+(S^{n-1})$ by \begin{equation} \label{logarithmic convex hull} \log h_t(v)=\log h_K(v)+tg(v), \end{equation} for each $v\in S^{n-1}$ and $t\in (-\delta,\delta)$. We will usually write $[K,g,t]$ for the convex body $[h_t]$.
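A simple instance of \eqref{logarithmic convex hull} that is worth keeping in mind: taking $g\equiv 1$ gives $h_t=e^th_K$, and hence, by \eqref{radial of convex hull of radial} and the fact that $[cf]=c[f]$ for $c>0$,
\begin{equation*}
[K,1,t]=[e^th_K]=e^t[h_K]=e^tK,
\end{equation*}
so the logarithmic family $[K,g,t]$ contains the dilates of $K$ as a special case.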
Let us assume $K\in \mathcal{K}_o^n$. The \emph{supporting hyperplane} $P(K,v)$ of $K$ for each $v\in S^{n-1}$ is given by \begin{equation*} P(K,v)=\{x\in \mathbb{R}^n:x\cdot v = h_K(v)\}. \end{equation*} At each boundary point $x\in \partial K$, a unit vector $v$ is said to be an \emph{outer unit normal} of $K$ at $x\in \partial K$ if $P(K,v)$ passes through $x$.
For a subset $\omega\subset S^{n-1}$, the \emph{radial Gauss image}, $\vec{\alpha}_K(\omega)$, of $K$ at $\omega$, is the set of all outer unit normals of $K$ at points in $\{\rho_K(u)u:u\in \omega\}$. When $\omega=\{u\}$, we usually write $\vec{\alpha}_K(u)$ instead of $\vec{\alpha}_K(\{u\})$. Let $\omega_K\subset S^{n-1}$ be the set consisting of all $u\in S^{n-1}$ such that the set $\vec{\alpha}_K(u)$ contains more than a single element. It can be shown that the set $\omega_K$ is of spherical Lebesgue measure $0$ (see Theorem 2.2.5 in \cite{schneider2014}). The \emph{radial Gauss map}, $\alpha_K:S^{n-1}\setminus \omega_K\rightarrow S^{n-1}$, is the map that takes each $u\in S^{n-1}\setminus \omega_K$ to the unique element in $\vec{\alpha}_K(u)$.
Similarly, for each $\eta\subset S^{n-1}$, the \emph{reverse radial Gauss image}, $\vec{\alpha}^*_K(\eta)$, of $K$ at $\eta$, is the set of all radial directions $u\in S^{n-1}$ such that the boundary point $\rho_K(u)u$ has at least one element in $\eta$ as its outer unit normal, i.e., $\vec{\alpha}^*_K(\eta)=\{u\in S^{n-1}:\vec{\alpha}_K(u)\cap\eta \neq \emptyset\}$. When $\eta = \{v\}$, we usually write $\vec{\alpha}_K^*(v)$ instead of $\vec{\alpha}_K^*(\{v\})$. Let $\eta_K\subset S^{n-1}$ be the set consisting of all $v\in S^{n-1}$ such that the set $\vec{\alpha}_K^*(v)$ contains more than a single element. The set $\eta_K$ is of spherical Lebesgue measure $0$ (see Theorem 2.2.11 in \cite{schneider2014}). The \emph{reverse radial Gauss map}, $\alpha_K^*:S^{n-1}\setminus \eta_K \rightarrow S^{n-1}$, is the map that takes each $v\in S^{n-1}\setminus \eta_K$ to the unique element in $\vec{\alpha}_K^*(v)$.
A detailed description of the radial Gauss map and the reverse radial Gauss map, together with a list of their properties, can be found in Section 2.2 of \cite{HLYZ}.
While central to the Brunn-Minkowski theory are quermassintegrals, the geometric functionals central to the dual Brunn-Minkowski theory are \emph{dual quermassintegrals}. Let $q\in \mathbb{R}$ and $K\in \mathcal{K}_o^n$, the $(n-q)$-th dual quermassintegral $\widetilde{W}_{n-q}(K)$ may be defined by \begin{equation} \label{eq dual quermassintegral} \widetilde{W}_{n-q}(K)= \frac{1}{n}\int_{S^{n-1}}\rho_K^q(u)du. \end{equation} For real $q\neq 0$, the \emph{normalized dual quermassintegral} $\bar{W}_{n-q}(K)$ is given by \begin{equation} \label{eq normalized dual quermassintegral 1} \bar{W}_{n-q}(K)=\left(\frac{1}{n\omega_n}\int_{S^{n-1}}\rho_K^q(u)du\right)^{\frac{1}{q}}, \end{equation} and for $q=0$, by \begin{equation} \label{eq normalized dual quermassintegral 2} \bar{W}_n(K)= \exp\left(\frac{1}{n\omega_n}\int_{S^{n-1}}\log \rho_K(u)du\right). \end{equation} We will write \begin{equation*} \widetilde{V}_q(K)= \widetilde{W}_{n-q}(K)\qquad \text{ and }\qquad \bar{V}_q(K) = \bar{W}_{n-q}(K). \end{equation*} The functionals $\widetilde{V}_q$ and $\bar{V}_q$ are called the \emph{$q$-th dual volume} and the \emph{normalized $q$-th dual volume}, respectively.
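As a basic example illustrating the normalization, let $K=rB_n$ with $r>0$. Since $\rho_{rB_n}\equiv r$, definitions \eqref{eq dual quermassintegral}, \eqref{eq normalized dual quermassintegral 1}, and \eqref{eq normalized dual quermassintegral 2} give
\begin{equation*}
\widetilde{W}_{n-q}(rB_n)=\frac{1}{n}\int_{S^{n-1}}r^q\,du=\omega_nr^q
\qquad\text{ and }\qquad
\bar{W}_{n-q}(rB_n)=r,
\end{equation*}
for every $q\in\mathbb{R}$. In particular, the normalized dual quermassintegrals are homogeneous of degree $1$.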
\subsection{Dual curvature measures and the dual Minkowski problem} \label{sec dual curvature measures} For quick later references, we gather here some facts about dual curvature measures and the dual Minkowski problem.
While curvature measures can be viewed as differentials of quermassintegrals, they can also be defined by considering \emph{local parallel sets} (see the construction in Chapter 4 of \cite{schneider2014}). In the spirit of \emph{conceptual duality}, Huang-LYZ \cite{HLYZ} were able to discover dual curvature measures $\widetilde{C}_q(K,\cdot)$ for $q\in \mathbb{R}$ using what they call \emph{local dual parallel bodies}. See Section 3 in \cite{HLYZ} for a detailed construction of dual curvature measures. This new family of geometric measures can also be viewed as differentials of dual quermassintegrals (see \eqref{eq intro variational formula}) and thus warrants being called ``dual'' curvature measures. Dual curvature measures have the following integral representation. For each Borel set $\eta\subset S^{n-1}$ and $q\in \mathbb{R}$, the dual curvature measure $\widetilde{C}_q(K,\cdot)$ of $K\in\mathcal{K}_o^n$ can be defined by \begin{equation} \label{eq curvature measure integral representation} \widetilde{C}_q(K,\eta)=\frac{1}{n}\int_{\vec{\alpha}_K^*(\eta)}\rho_K^q(u)du. \end{equation} The total measure of $\widetilde{C}_q(K,\cdot)$ is equal to the $(n-q)$-th dual quermassintegral, i.e., \begin{equation} \widetilde{W}_{n-q}(K)=\widetilde{C}_q(K,S^{n-1}). \end{equation} It is not hard to see that $\widetilde{C}_q$ is homogeneous of degree $q$. That is, \begin{equation*} \widetilde{C}_q(\lambda K,\cdot)=\lambda^q\widetilde{C}_q(K,\cdot), \end{equation*} for each $\lambda>0$. From the integral representation \eqref{eq curvature measure integral representation} and Lemma 2.5 in \cite{HLYZ}, it is not hard to see that the $0$-th dual curvature measure is (up to a constant) equal to Aleksandrov's integral curvature of the polar body and the $n$-th dual curvature measure is (up to a constant) equal to the cone volume measure.
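For the reader's convenience, here is the short computation behind the homogeneity: if $\lambda>0$, then $\rho_{\lambda K}=\lambda\rho_K$, while dilation does not change outer unit normals, so that $\vec{\alpha}_{\lambda K}^*=\vec{\alpha}_K^*$. Hence, by \eqref{eq curvature measure integral representation},
\begin{equation*}
\widetilde{C}_q(\lambda K,\eta)=\frac{1}{n}\int_{\vec{\alpha}_{\lambda K}^*(\eta)}\rho_{\lambda K}^q(u)du=\frac{\lambda^q}{n}\int_{\vec{\alpha}_K^*(\eta)}\rho_K^q(u)du=\lambda^q\widetilde{C}_q(K,\eta),
\end{equation*}
for each Borel set $\eta\subset S^{n-1}$.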
When $K$ has smooth boundary with everywhere positive Gauss curvature, the dual curvature measure $\widetilde{C}_q(K,\cdot)$ of $K$ is absolutely continuous with respect to the spherical Lebesgue measure and the density is given (in terms of the support function of $K$) by \begin{equation*}
d\widetilde{C}_q(K,v)=\frac{1}{n}h_K(v)|\nabla h_K(v)|^{q-n}\det(h_{ij}(v)+h_K(v)\delta_{ij})dv, \end{equation*} where $(h_{ij})$ is the Hessian matrix of $h_K$ on $S^{n-1}$ with respect to an orthonormal basis. When $K$ is a polytope with outer unit normals $\{v_1,\cdots, v_m\}$, the dual curvature measure $\widetilde{C}_{q}(K,\cdot)$ is a discrete measure concentrated on $\{v_1,\cdots,v_m\}$ and is given by \begin{equation*} \widetilde{C}_q(K,\cdot)= \sum_{i=1}^m c_i\delta_{v_i}, \end{equation*} where $\delta_{v_i}$ is the Dirac measure concentrated at $v_i$ and \begin{equation*} c_i = \frac{1}{n} \int_{\vec{\alpha}_K^*(v_i)}\rho_K^q(u)du. \end{equation*} See \cite{HLYZ} for details.
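As a sanity check of \eqref{eq curvature measure integral representation}, consider $K=rB_n$. Every boundary point $ru$ has $u$ as its unique outer unit normal, so $\vec{\alpha}_{rB_n}^*(\eta)=\eta$ and
\begin{equation*}
\widetilde{C}_q(rB_n,\eta)=\frac{1}{n}\int_\eta r^q\,du=\frac{r^q}{n}\,\mathcal{H}^{n-1}(\eta);
\end{equation*}
that is, the dual curvature measures of a centered ball are constant multiples of the spherical Lebesgue measure, in agreement with the homogeneity of $\widetilde{C}_q$ and with $\widetilde{C}_q(rB_n,S^{n-1})=\omega_nr^q=\widetilde{W}_{n-q}(rB_n)$.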
It is natural to look for requirements on a given measure so that it becomes the $q$-th dual curvature measure of a convex body $K$ for a given $q\in \mathbb{R}$. The characterization problem for dual curvature measures, called the dual Minkowski problem, was posed in \cite{HLYZ}:
\textbf{The dual Minkowski problem:}\textit{ Given a finite Borel measure $\mu$ on $S^{n-1}$ and $q\in \mathbb{R}$, find the necessary and sufficient condition(s) on $\mu$ so that it becomes the $q$-th dual curvature measure of a convex body $K\in \mathcal{K}_o^n$. }
When $q=0$, the dual Minkowski problem is the same as the Aleksandrov problem. Both the existence and the uniqueness of the solution to the Aleksandrov problem were given by Aleksandrov \cite{MR0007625}. See also Oliker \cite{MR2332603} for an intriguing new approach and its connection to optimal transport. When $q=n$, the dual Minkowski problem becomes the logarithmic Minkowski problem, whose complete solution remains open. When restricted to the even case, the existence part of the logarithmic Minkowski problem was established by B{\"o}r{\"o}czky-LYZ \cite{BLYZ}. For non-even cases, Stancu \cite{MR1901250,MR2019226} studied the problem in the planar case. Zhu \cite{MR3228445} gave a sufficient condition when the given measure is discrete, but not necessarily even. B\"{o}r\"{o}czky, Heged\H{u}s, \& Zhu \cite{Boroczky20062015} later found a sufficient condition for the discrete case, which includes \cite{BLYZ} (in the discrete case) and \cite{MR3228445} as special cases. See also Chou \& Wang \cite{MR2254308} for the case when the given measure has a positive density. The uniqueness part of the logarithmic Minkowski problem, in general, remains open (see B{\"o}r{\"o}czky-LYZ \cite{MR2964630} and Stancu \cite{MR1901250,MR2019226} for results in the planar case).
When $0<q<n$, the dual Minkowski problem was considered in Huang-LYZ \cite{HLYZ}. Restricting to the class of even measures and origin-symmetric convex bodies, they found a sufficient condition that would guarantee the existence of a solution to the dual Minkowski problem.
\begin{thm}[\cite{HLYZ}] Suppose $\mu$ is a non-zero finite even Borel measure on $S^{n-1}$ and $q\in (0,n]$. If the measure $\mu$ satisfies: \begin{enumerate}[(1)] \item when $q\in [1,n]$, \begin{equation} \label{eq tmp 0000}
\frac{\mu(S^{n-1}\cap \xi_{n-i})}{|\mu|}<1-\frac{i(q-1)}{(n-1)q}, \end{equation} for all $i<n$ and all $(n-i)$-dimensional subspaces $\xi_{n-i}\subset \mathbb{R}^n$; \item when $q\in (0,1)$, \begin{equation*}
\frac{\mu(S^{n-1}\cap \xi_{n-1})}{|\mu|}<1, \end{equation*} for all $(n-1)$-dimensional subspaces $\xi_{n-1}\subset\mathbb{R}^n$, \end{enumerate} then there exists an origin-symmetric convex body $K$ in $\mathbb{R}^n$ such that \begin{equation*} \widetilde{C}_q(K,\cdot)=\mu(\cdot). \end{equation*} \end{thm}
In the following, the dual Minkowski problem when $q<0$ will be considered.
\section{The Optimization Problem} \label{section optimization problem} The first step towards solving various kinds of Minkowski problems using the variational method is to properly convert the original problem into an optimization problem whose Euler-Lagrange equation implies that the given measure is equal to the geometric measure (under investigation) of an optimizer. The associated optimization problem for the dual Minkowski problem was posed in \cite{HLYZ}. It was also established there that, for even measures, a solution to the optimization problem leads to a solution to the dual Minkowski problem. Note that the proof works essentially in the same way even if the measure is not even. For the sake of completeness, we shall first describe the optimization problem and then give a short account of how a solution to the optimization problem leads to a solution to the dual Minkowski problem.
Suppose $\mu$ is a non-zero finite Borel measure on $S^{n-1}$. Since we are dealing with the dual Minkowski problem for negative indices, we may restrict our attention to $q\neq 0$. Define $\Phi:C^+(S^{n-1})\rightarrow \mathbb{R}$ by letting \begin{equation} \label{definition of Phi}
\Phi(h)=-\frac{1}{|\mu|}\int_{S^{n-1}}\log h(v)d\mu(v)+\log \bar{V}_q([h]), \end{equation} for every $h\in C^+(S^{n-1})$. Note that the functional $\Phi$ is homogeneous of degree $0$; i.e., \begin{equation} \label{eq_homogeneity of Phi} \Phi(ch)=\Phi(h), \end{equation} for all $c>0$.
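The homogeneity \eqref{eq_homogeneity of Phi} follows from a direct computation: since $[ch]=c[h]$ and $\bar{V}_q$ is homogeneous of degree $1$,
\begin{equation*}
\Phi(ch)=-\frac{1}{|\mu|}\int_{S^{n-1}}\big(\log c+\log h(v)\big)d\mu(v)+\log c+\log\bar{V}_q([h])=\Phi(h),
\end{equation*}
the two $\log c$ terms cancelling because $\frac{1}{|\mu|}\int_{S^{n-1}}\log c\,d\mu(v)=\log c$.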
When $q=n$, the functional $\Phi$ becomes \begin{equation*}
\Phi(h)= - \frac{1}{|\mu|}\int_{S^{n-1}}\log h(v)d\mu(v)+\frac{1}{n}\log(\text{vol}([h])/\omega_n) \end{equation*} and is a key component in solving the even logarithmic Minkowski problem in B{\"o}r{\"o}czky-LYZ \cite{BLYZ}.
When $q=0$, the functional $\Phi$ becomes \begin{equation*}
\Phi(h) = -\frac{1}{|\mu|}\int_{S^{n-1}}\log h(v)d\mu (v)+\frac{1}{n\omega_n}\int_{S^{n-1}}\log \rho_{[h]}(u)du \end{equation*} and in a slightly different form was studied in Oliker \cite{MR2332603}.
\textbf{The optimization problem (I):} \begin{equation*}
\sup\{\Phi(h):\widetilde{V}_q([h])=|\mu|,h\in C^+(S^{n-1})\}. \end{equation*}
Note that for each $h\in C^+(S^{n-1})$, by \eqref{radial of convex hull} and \eqref{radial of convex hull of radial}, \begin{equation*} \Phi(h)\leq \Phi(h_{[h]}) \text{ and } \widetilde{V}_q([h]) = \widetilde{V}_q([h_{[h]}]). \end{equation*} Thus, in the search for a maximizer, we may restrict our attention to the set of all support functions. That is, $h_{Q_0}$ is a maximizer of the optimization problem (I) if and only if $Q_0$ is a maximizer of the following optimization problem:
\textbf{The optimization problem (II):} \begin{equation*}
\sup\{\Phi_\mu(K):\widetilde{V}_q(K)=|\mu|,K\in \mathcal{K}_o^n\}, \end{equation*} where $\Phi_\mu:\mathcal{K}_o^n\rightarrow \mathbb{R}$ is defined by letting \begin{equation*}
\Phi_\mu(K)=-\frac{1}{|\mu|}\int_{S^{n-1}}\log h_K(v)d\mu(v)+\log\bar{V}_q(K), \end{equation*} for each $K\in \mathcal{K}_o^n$. Note that on $\mathcal{K}_o^n$, the functional $\Phi_\mu$ is continuous with respect to the Hausdorff metric.
One potential difficulty in obtaining the Euler-Lagrange equation for the optimization problem (I) or (II) is that taking the differential of $\Phi$ or $\Phi_\mu$, more specifically the functional $\log \bar{V}_q([h])$, can be hard. The following variational formula was established in \cite{HLYZ} (see Theorem 4.5): \begin{equation} \label{variational formula}
\left.\frac{d}{dt} \log \bar{V}_q([K,g,t])\right|_{t=0}=\frac{1}{\widetilde{V}_q(K)}\int_{S^{n-1}}g(v)d\widetilde{C}_q(K,v), \end{equation} where $[K,g,t]$ is defined in \eqref{logarithmic convex hull} and $g$ is an arbitrary continuous function on $S^{n-1}$. With the help of \eqref{variational formula}, we may now show how a maximizer to the optimization (I), or equivalently (II), will lead to a solution to the dual Minkowski problem.
Suppose $Q_0$ is a maximizer to (II), or equivalently $h_{Q_0}$ is a maximizer to (I); i.e., $\widetilde{V}_q(Q_0)=|\mu|$ and \begin{equation*}
\Phi(h_{Q_0})=\sup\{\Phi(h):\widetilde{V}_q([h])=|\mu|,h\in C^+(S^{n-1})\}. \end{equation*} By \eqref{eq_homogeneity of Phi}, \begin{equation} \label{eq_tmp_optimization problem 1} \Phi(h_{Q_0})\geq \Phi(h), \end{equation} for each $h\in C^+(S^{n-1})$. Let $g:S^{n-1}\rightarrow \mathbb{R}$ be an arbitrary continuous function. For $\delta>0$ small enough and $t\in (-\delta,\delta)$, define $h_t:S^{n-1}\rightarrow \mathbb{R}$ by \begin{equation*} h_t = h_{Q_0}e^{tg}. \end{equation*}
By \eqref{eq_tmp_optimization problem 1}, \eqref{definition of Phi}, \eqref{variational formula}, and the fact that $\widetilde{V}_q(Q_0)=|\mu|$, \begin{equation*} \begin{aligned}
0 &= \left.\frac{d}{dt}\Phi(h_t)\right|_{t=0}\\
&= \left.\frac{d}{dt}\left(-\frac{1}{|\mu|}\int_{S^{n-1}}\big(\log h_{Q_0}(v)+tg(v)\big)d\mu(v)+\log \bar{V}_q([h_t])\right)\right|_{t=0}\\
&=-\frac{1}{|\mu|}\int_{S^{n-1}}g(v)d\mu(v)+\frac{1}{\widetilde{V}_q(Q_0)}\int_{S^{n-1}}g(v)d\widetilde{C}_q(Q_0,v)\\
&=\frac{1}{|\mu|}\left(-\int_{S^{n-1}}g(v)d\mu(v)+\int_{S^{n-1}}g(v)d\widetilde{C}_q(Q_0,v)\right). \end{aligned} \end{equation*} Since this holds for every $g\in C(S^{n-1})$, we have \begin{equation*} \mu(\cdot) = \widetilde{C}_q(Q_0,\cdot). \end{equation*}
Thus, we have \begin{lem} \label{lemma optimization problem}
Suppose $q<0$ and $\mu$ is a non-zero finite Borel measure. Assume $Q_0\in \mathcal{K}_o^n$. If $\widetilde{V}_q(Q_0)=|\mu|$ and \begin{equation*}
\Phi_\mu(Q_0)=\sup\{\Phi_\mu(K):\widetilde{V}_q(K)=|\mu|,K\in \mathcal{K}_o^n\}, \end{equation*} then \begin{equation*} \mu(\cdot)=\widetilde{C}_q(Q_0,\cdot). \end{equation*} \end{lem}
Note that the above lemma remains valid for other values of $q$ as well. But since only the dual Minkowski problem for negative indices is considered here, we choose to state the lemma only for $q<0$.
\section{Solving the Optimization Problem} This section is dedicated to showing that the optimization problem (II) has a maximizer when the given measure $\mu$ is not concentrated in any closed hemisphere. This, together with Lemma \ref{lemma optimization problem}, immediately implies that the dual Minkowski problem when $q<0$ has a solution.
The following lemma provides a uniform bound on the polar bodies of convex bodies with fixed $q$-th dual volume. \begin{lem} \label{lemma boundedness} Suppose $q<0$ and $c>0$. Assume $K\in \mathcal{K}_o^n$. If \begin{equation*} \widetilde{V}_q(K)=\frac{1}{n}\int_{S^{n-1}}\rho_K^{q}(u)du = c, \end{equation*} then there exists $M=M(c)>0$ such that \begin{equation*} K^*\subset M B_n. \end{equation*} \end{lem} \begin{proof} First we note that by the rotational invariance of the spherical Lebesgue measure, the integral \begin{equation*} \int_{S^{n-1}}(u\cdot v)_+^{-q}du \end{equation*} is independent of the choice of $v\in S^{n-1}$. Here $(u\cdot v)_+=\max\{u\cdot v,0\}$. Since the spherical Lebesgue measure is not concentrated in any closed hemisphere, \begin{equation} \label{value of an integral} m_0 := \int_{S^{n-1}}(u\cdot v)_+^{-q}du >0. \end{equation}
Let $v_0\in S^{n-1}$ be such that \begin{equation*} \rho_{K^*}(v_0)= \max_{v\in S^{n-1}}\rho_{K^*}(v). \end{equation*} By definition of the support function and the fact that $K\in \mathcal{K}_o^n$, \begin{equation*} h_{K^*}(u)\geq (u\cdot v_0)_+\rho_{K^*}(v_0). \end{equation*} This, the fact that $q<0$, and \eqref{value of an integral} imply \begin{equation*} \begin{aligned} c = \widetilde{V}_q(K)&=\frac{1}{n}\int_{S^{n-1}}\rho_K^{q}(u)du\\
&=\frac{1}{n}\int_{S^{n-1}}h_{K^*}^{-q}(u)du\\
&\geq \frac{1}{n}\int_{S^{n-1}}(u\cdot v_0)_+^{-q}\rho_{K^*}^{-q}(v_0)du\\
&=\frac{1}{n} m_0\rho_{K^*}^{-q}(v_0). \end{aligned} \end{equation*} This implies that \begin{equation*} \rho_{K^*}(v_0)\leq \left(\frac{nc}{m_0}\right)^{-\frac{1}{q}}. \end{equation*} By the choice of $v_0$, we may choose $M=\left(\frac{nc}{m_0}\right)^{-\frac{1}{q}}$ and thus \begin{equation*} K^*\subset MB_n. \end{equation*} \qed \end{proof}
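Although only the positivity of $m_0$ is needed above, the constant in \eqref{value of an integral} can be computed explicitly; we sketch the standard calculation in spherical coordinates with the polar angle $\theta$ measured from $v$ (here $B(\cdot,\cdot)$ denotes the Beta function):
\begin{equation*}
m_0=\int_{S^{n-1}}(u\cdot v)_+^{-q}du=(n-1)\omega_{n-1}\int_0^{\pi/2}\cos^{-q}\theta\,\sin^{n-2}\theta\,d\theta=\frac{(n-1)\omega_{n-1}}{2}\,B\!\left(\frac{1-q}{2},\frac{n-1}{2}\right),
\end{equation*}
which in turn makes the bound $M=\left(nc/m_0\right)^{-1/q}$ explicit.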
The next lemma will solve the optimization problem (II). \begin{lem} \label{existence of maximizer}
Suppose $q<0$ and $\mu$ is a non-zero finite Borel measure. If $\mu$ is not concentrated in any closed hemisphere, then there exists $Q_0\in \mathcal{K}_o^n$ with $\widetilde{V}_q(Q_0) = |\mu|$ and \begin{equation*}
\Phi_\mu(Q_0)=\sup\{\Phi_\mu(K):\widetilde{V}_q(K)=|\mu| \text{ and } K\in \mathcal{K}_o^n\}. \end{equation*} \end{lem} \begin{proof}
Suppose $\{Q_i\}\subset \mathcal{K}_o^n$ is a maximizing sequence; i.e., $\widetilde{V}_q(Q_i)= |\mu|$ and \begin{equation} \label{Q_i is a maximizing sequence}
\lim_{i\rightarrow \infty}\Phi_\mu(Q_i)=\sup\{\Phi_\mu(K):\widetilde{V}_q(K)=|\mu| \text{ and }K\in \mathcal{K}_o^n\}. \end{equation}
By Lemma \ref{lemma boundedness}, there exists $M=M(|\mu|)>0$ such that \begin{equation} \label{bound for Q_i} Q_i^*\subset MB_n. \end{equation} By Blaschke's selection theorem, we may assume (by taking subsequence) that there exists a compact convex set $K_0\subset \mathbb{R}^n$ such that \begin{equation*} Q_i^*\rightarrow K_0. \end{equation*}
We note that if $o\in \text{int }K_0$, then we are done. Indeed, we may take $Q_0=K_0^*$. To see why this works, we may use the continuity of $\Phi_\mu$ and $\widetilde{V}_q$, and \eqref{eq polar convergence} to conclude that \begin{equation*}
\widetilde{V}_q(Q_0)= \widetilde{V}_q(K_0^*)=\lim_{i\rightarrow \infty}\widetilde{V}_q(Q_i)=|\mu|, \end{equation*} and \begin{equation*}
\Phi_\mu(Q_0)=\Phi_\mu(K_0^*)=\lim_{i\rightarrow \infty}\Phi_\mu(Q_i)=\sup \{\Phi_\mu(K):\widetilde{V}_q(K)=|\mu| \text{ and }K\in \mathcal{K}_o^n\}. \end{equation*}
Let us now show that $o\in \text{int } K_0$. Assume otherwise, i.e., $o\in \partial K_0$. Hence there exists $u_0\in S^{n-1}$ such that $h_{K_0}(u_0)=0$. Since $Q_i^*$ converges to $K_0$ in the Hausdorff metric, we have \begin{equation} \label{support function goes to 0} \lim_{i\rightarrow \infty} h_{Q_i^*}(u_0)= h_{K_0}(u_0)= 0. \end{equation} For $0<\delta<1$, define \begin{equation*} \omega_\delta(u_0) = \{v\in S^{n-1}:v\cdot u_0>\delta\}. \end{equation*} For each $v\in \omega_\delta(u_0)$, \begin{equation*} h_{Q_i^*}(u_0)\geq (v\cdot u_0)\rho_{Q_i^*}(v)> \delta \rho_{Q_i^*}(v), \end{equation*} which implies \begin{equation*} \rho_{Q_i^*}(v)< h_{Q_i^*}(u_0)/\delta. \end{equation*} This, together with \eqref{support function goes to 0}, shows that $\rho_{Q_i^*}$ converges to $0$ uniformly on $\omega_\delta(u_0)$.
By the monotone convergence theorem and the fact that $\mu$ is a finite measure that is not concentrated in any closed hemisphere, we have \begin{equation*} \lim_{\delta\rightarrow 0} \mu\left(\omega_\delta(u_0)\right)= \mu\left(\{v\in S^{n-1}:v\cdot u_0>0\}\right)>0. \end{equation*} This implies the existence of $\delta_0>0$ such that \begin{equation} \label{positive measure} \mu\left(\omega_{\delta_0}(u_0)\right)>0. \end{equation} Hence, by \eqref{bound for Q_i}, \eqref{positive measure}, and the fact that $\rho_{Q_i^*}$ converges to $0$ uniformly on $\omega_{\delta_0}(u_0)$, \begin{equation*} \begin{aligned}
&\Phi_\mu(Q_i) \\=&-\frac{1}{|\mu|}\int_{\omega_{\delta_0}(u_0)}\log h_{Q_i}(v)d\mu(v)-\frac{1}{|\mu|}\int_{S^{n-1}\setminus \omega_{\delta_0}(u_0)}\log h_{Q_i}(v)d\mu(v)+\frac{1}{q}\log \frac{|\mu|}{\omega_n}\\
=&\frac{1}{|\mu|}\int_{\omega_{\delta_0}(u_0)}\log \rho_{Q_i^*}(v)d\mu(v)+\frac{1}{|\mu|}\int_{S^{n-1}\setminus \omega_{\delta_0}(u_0)}\log \rho_{Q_i^*}(v)d\mu(v)+\frac{1}{q}\log \frac{|\mu|}{\omega_n}\\
\leq & \frac{1}{|\mu|}\int_{\omega_{\delta_0}(u_0)}\log \rho_{Q_i^*}(v)d\mu(v)+\frac{1}{|\mu|}\mu(S^{n-1}\setminus \omega_{\delta_0}(u_0))\log M +\frac{1}{q}\log \frac{|\mu|}{\omega_n}\\ \rightarrow& -\infty, \end{aligned} \end{equation*} as $i\rightarrow \infty$. This is clearly a contradiction to $\{Q_i\}$ being a maximizing sequence.\qed \end{proof}
Lemmas \ref{lemma optimization problem} and \ref{existence of maximizer} immediately give the following theorem. \begin{thm} Suppose $q<0$ and $\mu$ is a non-zero finite Borel measure on $S^{n-1}$. There exists $K\in \mathcal{K}_o^n$ such that $\widetilde{C}_q(K,\cdot)=\mu(\cdot)$ if and only if $\mu$ is not concentrated in any closed hemisphere. \end{thm} \begin{proof} The only if part is obvious, while the if part follows from Lemmas \ref{lemma optimization problem} and \ref{existence of maximizer}.\qed \end{proof}
\section{Uniqueness} For the dual Minkowski problem with $q<0$, not only does a solution exist, but the solution is also unique. The primary goal of this section is to establish this fact.
The following lemma is needed.
\begin{lem} \label{lemma uniqueness} Suppose $Q_1,Q_2\in \mathcal{K}_o^n$. If the following sets \begin{equation*} \eta_1=\{v\in S^{n-1}:h_{Q_1}(v)>h_{Q_2}(v)\}, \end{equation*} \begin{equation*} \eta_2=\{v\in S^{n-1}:h_{Q_1}(v)<h_{Q_2}(v)\}, \end{equation*} \begin{equation*} \eta_0=\{v\in S^{n-1}:h_{Q_1}(v)=h_{Q_2}(v)\}, \end{equation*} are non-empty, then the following statements are true: \begin{enumerate}[(a)] \item \label{(1)}If $u\in \vec{\alpha}_{Q_1}^*(\eta_1)$, then $\rho_{Q_1}(u)>\rho_{Q_2}(u)$; \item \label{(2)}If $u\in \vec{\alpha}_{Q_2}^*(\eta_2\cup \eta_0)$, then $\rho_{Q_2}(u)\geq \rho_{Q_1}(u)$; \item \label{(3)}$\vec{\alpha}_{Q_1}^*(\eta_1)\subset \vec{\alpha}_{Q_2}^*(\eta_1)$; \item \label{(4)}$\mathcal{H}^{n-1}(\vec{\alpha}_{Q_2}^*(\eta_1))>0$ and $\mathcal{H}^{n-1}(\vec{\alpha}_{Q_1}^*(\eta_2))>0$. \end{enumerate} \end{lem} \begin{proof}
\begin{enumerate}[(a)] \item We prove by contradiction. Assume that $\rho_{Q_1}(u)\leq \rho_{Q_2}(u)$. Since $u\in \vec{\alpha}_{Q_1}^*(\eta_1)$, there exists $v_0\in \eta_1$ such that $u\cdot v_0>0$ and $\rho_{Q_1}(u)u\cdot v_0=h_{Q_1}(v_0)$. Hence, \begin{equation*} h_{Q_1}(v_0)=\rho_{Q_1}(u)u\cdot v_0\leq \rho_{Q_2}(u)u\cdot v_0\leq h_{Q_2}(v_0). \end{equation*} This is a contradiction to the fact that $v_0\in \eta_1$. \item We again prove by contradiction. Assume $\rho_{Q_1}(u)>\rho_{Q_2}(u)$. Since $u\in \vec{\alpha}_{Q_2}^*(\eta_2\cup \eta_0)$, there exists $v_0\in \eta_2\cup\eta_0$ such that $u\cdot v_0>0$ and $\rho_{Q_2}(u)u\cdot v_0 = h_{Q_2}(v_0)$. Hence, \begin{equation*} h_{Q_1}(v_0)\geq \rho_{Q_1}(u)u\cdot v_0>\rho_{Q_2}(u)u\cdot v_0=h_{Q_2}(v_0). \end{equation*} This is a contradiction to the fact that $v_0\in \eta_2\cup \eta_0$. \item Suppose $u\in S^{n-1}$ is such that $u\in \vec{\alpha}_{Q_1}^*(\eta_1)$ but $u\notin \vec{\alpha}_{Q_2}^*(\eta_1)$. Then $u\in \vec{\alpha}_{Q_1}^*(\eta_1)\cap \vec{\alpha}_{Q_2}^*(\eta_2\cup \eta_0)$. Then \eqref{(1)} and \eqref{(2)} provide a contradiction. \item By symmetry, we only need to show $\mathcal{H}^{n-1}(\vec{\alpha}_{Q_2}^*(\eta_1))>0$. Suppose $$\mathcal{H}^{n-1}(\vec{\alpha}_{Q_2}^*(\eta_1))=0.$$ Then by \eqref{(2)}, \begin{equation} \label{eq local 02} \rho_{Q_2}(u)\geq \rho_{Q_1}(u), \end{equation} for $\mathcal{H}^{n-1}$ almost all $u\in S^{n-1}$. By continuity of radial functions, \eqref{eq local 02} is valid for all $u\in S^{n-1}$. This implies $Q_1\subset Q_2$, which is a contradiction to $\eta_1$ being non-empty.\qed \end{enumerate} \end{proof}
The following theorem establishes the uniqueness of the solution to the dual Minkowski problem for negative $q$.
\begin{thm} Assume $q<0$ and $K,L\in \mathcal{K}_o^n$. If $\widetilde{C}_q(K,\cdot)=\widetilde{C}_q(L,\cdot)$, then $K=L$. \end{thm} \begin{proof} By homogeneity of $\widetilde{C}_q$, it suffices to show that $K$ is a dilate of $L$. Assume not. Then there exists $\lambda>0$ and $K'=\lambda K$ such that \begin{equation*} \begin{aligned} \eta'&=\{v\in S^{n-1}:h_{K'}(v)>h_L(v)\},\\ \eta &=\{v\in S^{n-1}:h_{K'}(v)<h_L(v)\},\\ \eta_0 &=\{v\in S^{n-1}:h_{K'}(v)=h_L(v)\}, \end{aligned} \end{equation*} are non-empty.
Lemma \ref{lemma uniqueness}\eqref{(4)}, together with the definition of $\widetilde{C}_q$, shows that $\widetilde{C}_q(L,\eta')>0$. This, in turn, implies that \begin{equation} \label{eq local 1000} \widetilde{C}_q(K,\eta')>0. \end{equation} This implies that \begin{equation} \label{eq local 1001} \mathcal{H}^{n-1}(\vec{\alpha}_{K'}^*(\eta'))=\mathcal{H}^{n-1}(\vec{\alpha}_K^*(\eta'))>0. \end{equation}
By Lemma \ref{lemma uniqueness}\eqref{(3)}, Lemma \ref{lemma uniqueness}\eqref{(1)}, the fact that $q$ is negative, \eqref{eq local 1001}, and the homogeneity of $\widetilde{C}_q$, we have \begin{equation} \label{eq local 03} \begin{aligned} \widetilde{C}_q(K,\eta')&=\widetilde{C}_q(L,\eta')\\ &=\frac{1}{n}\int_{\vec{\alpha}_L^*(\eta')}\rho_L^q(u)du\\ &\geq \frac{1}{n}\int_{\vec{\alpha}_{K'}^*(\eta')}\rho_L^q(u)du\\ &>\frac{1}{n}\int_{\vec{\alpha}_{K'}^*(\eta')}\rho_{K'}^q(u)du\\ &=\widetilde{C}_q(K',\eta')\\ &=\lambda^q\widetilde{C}_q(K,\eta'). \end{aligned} \end{equation}
Hence \eqref{eq local 1000} and \eqref{eq local 03} imply that \begin{equation} \label{eq local 05} \lambda^q<1. \end{equation}
Similarly, Lemma \ref{lemma uniqueness}\eqref{(4)}, together with the definition of $\widetilde{C}_q$, shows that $$\widetilde{C}_q(K',\eta)>0.$$ This, in turn, implies that \begin{equation} \label{eq local 1002} \widetilde{C}_q(K,\eta)>0. \end{equation} Thus $\widetilde{C}_q(L,\eta)>0$, which implies that \begin{equation} \label{eq local 1003} \mathcal{H}^{n-1}(\vec{\alpha}_L^*(\eta))>0. \end{equation}
By Lemma \ref{lemma uniqueness}\eqref{(1)}, the fact that $q$ is negative, \eqref{eq local 1003}, Lemma \ref{lemma uniqueness}\eqref{(3)}, and the homogeneity of $\widetilde{C}_q$, \begin{equation} \label{eq local 04} \begin{aligned} \widetilde{C}_q(K,\eta)&=\widetilde{C}_q(L,\eta)\\ &=\frac{1}{n}\int_{\vec{\alpha}_L^*(\eta)}\rho_L^q(u)du\\ &<\frac{1}{n}\int_{\vec{\alpha}_L^*(\eta)}\rho_{K'}^{q}(u)du\\ &\leq \frac{1}{n}\int_{\vec{\alpha}_{K'}^*(\eta)}\rho_{K'}^q(u)du\\ &=\widetilde{C}_q(K',\eta)\\ &=\lambda^q\widetilde{C}_q(K,\eta). \end{aligned} \end{equation}
Hence \eqref{eq local 1002} and \eqref{eq local 04} imply that \begin{equation} \label{eq local 06} \lambda^q>1. \end{equation}
But \eqref{eq local 05} contradicts \eqref{eq local 06}.\qed \end{proof}
\end{document}
\begin{document}
\title[Rational Approximants to Algebraic Functions]{Weighted Extremal Domains and Best Rational Approximation}
\author[L. Baratchart]{Laurent Baratchart} \address{INRIA, Project APICS, 2004 route des Lucioles --- BP 93, 06902 Sophia-Antipolis, France} \email{[email protected]}
\author[H. Stahl]{Herbert Stahl} \address{Beuth Hochschule/FB II; Luxemburger Str. 10; D-13353 Berlin, Germany} \email{[email protected]}
\author[M. Yattselev]{Maxim Yattselev} \address{Corresponding author, Department of Mathematics, University of Oregon, Eugene, OR, 97403, USA} \email{[email protected]}
\thanks{The research of the first and third authors was partially supported by the ANR project ``AHPI'' (ANR-07-BLAN-0247-01). The research of the second author was supported by the Deutsche Forschungsgemeinschaft (AZ: STA 299/13-1).}
\begin{abstract} Let $f$ be holomorphically continuable over the complex plane except for finitely many branch points contained in the unit disk. We prove that best rational approximants to $f$ of degree $n$, in the $L^2$-sense on the unit circle, have poles that asymptotically distribute according to the equilibrium measure on the compact set outside of which $f$ is single-valued and which has minimal Green capacity in the disk among all such sets. This provides us with $n$-th root asymptotics of the approximation error. By conformal mapping, we deduce further estimates in approximation by rational or meromorphic functions to $f$ in the $L^2$-sense on more general Jordan curves encompassing the branch points. The key to these approximation-theoretic results is a characterization of extremal domains of holomorphy for $f$ in the sense of a weighted logarithmic potential, which is the technical core of the paper. \end{abstract}
\subjclass[2000]{42C05, 41A20, 41A21}
\keywords{rational approximation, meromorphic approximation, extremal domains, weak asymptotics, non-Hermitian orthogonality.}
\maketitle
\section*{List of Symbols}
\begin{flushleft} \begin{tabular}{ p{2cm} l } {\bf Sets}:& \\ $\overline\C$ & extended complex plane\\ $T$ & Jordan curve with exterior domain $O$ and interior domain $G$\\ $\T$ & unit circle with exterior domain $\Om$ and interior domain $\D$\\ $E_f$ & set of the branch points of $f$\\ $K^*$ & reflected set $\{z:1/\bar z\in K\}$\\ $\K$ & set of minimal condenser capacity in $\pk_f(G)$\\ $\Gamma_\nu$ and $D_\nu$ & minimal set for Problem $(f,\nu)$ and its complement in $\overline\C$ \\ $(\Gamma)_\epsilon$ & $\{z\in\D:\dist(z,\Gamma)<\epsilon\}$\\ $\gamma^u$ & image of a set $\gamma$ under $1/(\cdot - u)$\\ \end{tabular}
\begin{tabular}{p{2cm} l} {\bf Collections}: &\\ $\pk_f(G)$ & admissible sets for $f\in\alg(G)$, $\pk_f=\pk_f(\D)$\\ $\pd_f$ & admissible sets in $\pk_f$ comprised of a finite number of continua\\ $\Lambda(F)$ & probability measures on $F$\\ \end{tabular}
\begin{tabular}{p{2cm} l} {\bf Spaces}:&\\ $\poly_n$ & algebraic polynomials of degree at most $n$\\ $\mpoly_n(G)$ & monic algebraic polynomials of degree $n$ with $n$ zeros in $G$, $\mpoly_n=\mpoly_n(\D)$\\ $\rat_n(G)$ & $\rat_n(G):=\poly_{n-1}\mpoly_n^{-1}(G)$, $\rat_n=\rat_n(\D)$\\ $\alg(G)$ & holomorphic functions in $\overline\C$ except for branch-type singularities in $G$\\
$L^p(T)$ & classical $L^p$ spaces, $p<\infty$, with respect to arclength on $T$ and the norm $\|\cdot\|_{p,T}$\\
$\|\cdot\|_K$ & supremum norm on a set $K$\\ $E^2(G)$ & Smirnov class of holomorphic functions in $G$ with $L^2$ traces on $T$\\ $E_n^2(G)$ & $E_n^2(G):=E^2(G)\mpoly_n^{-1}(G)$\\ $H^2$ & classical Hardy space of holomorphic functions in $\D$ with $L^2$ traces on $\T$\\ $H_n^2$ & $H_n^2:=H^2\mpoly_n^{-1}$\\ \end{tabular}
\begin{tabular}{p{2cm} l} {\bf Measures}:&\\ $\omega^*$ & reflected measure, $\omega^*(B)=\omega(B^*)$\\ $\widehat\omega$ or $\widetilde\omega$ & balayage of $\omega$, $\supp(\omega)\subset D$, onto $\partial D$\\ $\omega_F$ & equilibrium distribution on $F$\\ $\omega_{F,\psi}$ & weighted equilibrium distribution on $F$ in the field $\psi$\\ $\omega_{(F,E)}$ & Green equilibrium distribution on $F$ relative to $\overline\C\setminus E$\\ \end{tabular}
\begin{tabular}{p{2cm} l} {\bf Capacities}:&\\ $\cp(K)$ & logarithmic capacity of $K$\\ $\cp_\nu(K)$ & $\nu$-capacity of $K$\\ $\cp(E,F)$ & capacity of the condenser $(E,F)$\\ \end{tabular}
\begin{tabular}{p{2cm} l} {\bf Energies}:&\\ $I[\omega]$ & logarithmic energy of $\omega$\\ $I_\psi[\omega]$ & weighted logarithmic energy of $\omega$ in the field $\psi$\\ $I_D[\omega]$ & Green energy of $\omega$ relative to $D$\\ $I_\nu[K]$ & $\nu$-energy of a set $K$\\ $\di_D(u,v)$ & Dirichlet integral of functions $u,v$ in a domain $D$\\ \end{tabular}
\begin{tabular}{p{2cm} l} {\bf Potentials}:&\\ $V^\omega$ & logarithmic potential of $\omega$\\ $V_*^\omega$ & spherical logarithmic potential of $\omega$\\ $U^\nu$ & spherically normalized logarithmic potential of $\nu^*$\\ $V_D^\omega$ & Green potential of $\omega$ relative to $D$\\ $g_D(\cdot,u)$ & Green function for $D$ with pole at $u$\\ \end{tabular}
\begin{tabular}{p{2cm} l} {\bf Constants}: &\\ $c(\psi;F)$ & modified Robin constant, $c(\psi;F)=I_\psi[\omega_{F,\psi}]-\int\psi d\omega_{F,\psi}$\\ $c(\nu;D)$ & is equal to $\int g_D(z,\infty)d\nu(z)$ if $D$ is unbounded and to $0$ otherwise\\ \end{tabular}
\end{flushleft}
\section{Introduction} \label{sec:intro}
Approximation theory in the complex domain has undergone striking developments in recent years that have given new impetus to this classical subject. After the solution to the Gonchar conjecture \cite{Par86,Pr93} and the achievement of weak asymptotics in Pad\'e approximation \cite{St85,St86,GRakh87} came the disproof of the Baker-Gammel-Wills conjecture \cite{Lub03,Bus02}, and the Riemann-Hilbert approach to the limiting behavior of orthogonal polynomials \cite{DKMLVZ99a,KMcLVAV04}, which opened the way to unprecedented strong asymptotics in rational interpolation \cite{ApKVA08,Ap02,BY10} (see \cite{Deift,KamvissisMcLaughlinMiller} for other applications of this powerful device). Meanwhile, the spectral approach to meromorphic approximation \cite{AAK71}, already instrumental in \cite{Par86}, has produced sharp converse theorems in rational approximation and fueled engineering applications to control systems and signal processing \cite{GL84,Peller,Nikolskii,Partington2}.
In most investigations involving non-Hermitian orthogonal polynomials and rational interpolation, a central role has been played by certain geometric extremal problems from logarithmic potential theory, close in spirit to the Lavrentiev type \cite{Kuz80}, that were introduced in \cite{St85b}. On the one hand, their solution produces systems of arcs over which non-Hermitian orthogonal polynomials can be analyzed; on the other hand, such polynomials are precisely the denominators of rational interpolants to functions that may be expressed as Cauchy integrals over this system of arcs, the interpolation points being chosen in close relation with the latter.
One issue now facing the theory is to extend to \emph{best} rational or meromorphic approximants of prescribed degree to a given function the knowledge that was gained about rational interpolants. Optimality may of course be given various meanings. However, in view of the connections with interpolation theory pointed out in \cite{Lev69,BSW96,BS02}, and granted their relevance to spectral theory, the modeling of signals and systems, as well as inverse problems \cite{Antoulas,FournierLeblond03,HannanDeistler,Nikolskii2,BMSW06,Isakov,Regalia}, it is natural to consider foremost best approximants in Hardy classes.
The main interest there attaches to the behavior of the poles, whose determination is the non-convex and most difficult part of the problem. The first obstacle to applying interpolation theory in this context is that it is unclear whether best approximants of a given degree should interpolate the function at enough points, and even if they do, these interpolation points are no longer parameters to be chosen adequately in order to produce convergence but rather unknown quantities implicitly determined by the optimality property. The present paper deals with $H^2$-best rational approximation in the complement of the unit disk, for which maximal interpolation is known to take place; it thus remains in this case to locate the interpolation points. This we do asymptotically, when the degree of the approximant grows large, for functions whose singularities consist of finitely many poles and branch points in the disk. More precisely, we prove that the normalized (probability) counting measures of the poles of the approximants converge, in the weak star sense, to the equilibrium distribution of the continuum of minimal Green capacity, in the disk, outside of which the approximated function is single-valued. By conformal mapping, the result carries over to best meromorphic approximants with a prescribed number of poles, in the $L^2$-sense on a Jordan curve encompassing the poles and branch points. We also estimate the approximation error in the $n$-th root sense, which turns out to be the same as in uniform approximation for the functions under consideration. Note that $H^2$-best rational approximants on the disk are of fundamental importance in stochastic identification \cite{HannanDeistler} and that functions with branch points arise naturally in inverse source and potential problems \cite{BAbHL05,BLM06}, so the result may be regarded as a prototypical case of the above-mentioned program.
The paper is organized as follows. In Sections~\ref{sec:ral2} and~\ref{sec:ra}, we fix the terminology and recall some known facts about $H^2$-best rational approximants and sets of minimal condenser capacity, before stating our main results (Theorems~\ref{thm:L2T} and~\ref{thm:convcap}) along with some corollaries. We set up in Section~\ref{sec:min} a weighted version of the extremal potential problem introduced in \cite{St85b} ({\it cf.} Definition~\ref{df:minset}) and stress its main features. Namely, a solution exists uniquely and can be characterized, among continua outside of which the approximated function is single-valued, as a system of arcs possessing the so-called $S$-property in the field generated by the weight ({\it cf.} Definition~\ref{df:sym} and Theorem~\ref{thm:minset}). Section~\ref{sec:pade} is a brief introduction to multipoint Pad\'e interpolants, of which $H^2$-best rational approximants are a particular case. Section~\ref{sec:proofs} contains the proofs of all the results: first we establish Theorem~\ref{thm:minset}, which is the technical core of the paper, using compactness properties of the Hausdorff metric together with the {\it a priori} geometric estimate of Lemma~\ref{lem:1pr} to prove existence; the $S$-property is obtained by showing the local equivalence of our weighted extremal problem with one of minimal condenser capacity (Lemma~\ref{lem:1sym}); uniqueness then follows from a variational argument using Dirichlet integrals (Lemma~\ref{lem:1e}). After Theorem~\ref{thm:minset} is established, the proof of Theorem~\ref{thm:convcap} is not too difficult. 
We choose as weight (minus) the potential of a limit point of the normalized counting measures of the interpolation points of the approximants and, since we now know that a compact set of minimal weighted capacity exists and that it possesses the $S$-property, we can adapt results from \cite{GRakh87} to the effect that the normalized counting measures of the poles of the approximants converge to the weighted equilibrium distribution on this system of arcs. To see that this is nothing but the Green equilibrium distribution, we appeal to the fact that poles and interpolation points are reflected from each other across the unit circle in $H^2$-best rational approximation. The results carry over to more general domains as in Theorem~\ref{thm:L2T} by a conformal mapping (Theorem~\ref{cor:L2T}). The appendix in Section~\ref{sec:pt} gathers some technical results from logarithmic potential theory that are needed throughout the paper.
\section{Rational Approximation in $L^2$} \label{sec:ral2}
In this work we are concerned with rational approximation of functions analytic at infinity having multi-valued meromorphic continuation to the entire complex plane deprived of a finite number of points. The approximation will be understood in the $L^2$-norm on a rectifiable Jordan curve encompassing all the singularities of the approximated function. Namely, let $T$ be such a curve. Let further $G$ and $O$ be the interior and exterior domains of $T$, respectively, i.e., the bounded and unbounded components of the complement of $T$ in the extended complex plane~$\overline\C$. We denote by $L^2(T)$ the space of square-summable functions on $T$ endowed with the usual norm \[
\|f\|_{2,T}^2 := \int_T |f|^2ds, \] where $ds$ is the arclength differential. Set $\poly_n$ to be the space of algebraic polynomials of degree at most $n$ and $\mpoly_n(G)$ to be its subset consisting of monic polynomials with $n$ zeros in $G$. Define \begin{equation} \label{eq:rat} \rat_n(G) := \left\{\frac{p(z)}{q(z)}=\frac{p_{n-1}z^{n-1}+p_{n-2}z^{n-2}+\cdots+p_0}{z^n+q_{n-1}z^{n-1}+\cdots+q_0}:~ p\in\poly_{n-1},~q\in\mpoly_n(G)\right\}. \end{equation} That is, $\rat_n(G)$ is the set of rational functions with at most $n$ poles that are holomorphic in some neighborhood of $\overline O$ and vanish at infinity. Let $f$ be a function holomorphic at infinity and vanishing there (vanishing at infinity is a normalization required for convenience only). We say that $f$ belongs to the class $\alg(G)$ if {\it \begin{itemize} \item [(i)] $f$ admits holomorphic and single-valued continuation from infinity to an open neighborhood of $\overline O$; \item[(ii)] $f$ admits meromorphic, possibly multi-valued, continuation along any arc in $\overline G\setminus E_f$ starting from $T$, where $E_f$ is a finite set of points in $G$; \item[(iii)] $E_f$ is non-empty and the meromorphic continuation of $f$ from infinity has a branch point at each element of $E_f$. \end{itemize} }
The primary example of functions in $\alg(G)$ is that of algebraic functions. Every algebraic function $f$ naturally defines a Riemann surface. Fixing a branch of $f$ at infinity is equivalent to selecting a sheet of this covering surface. If all the branch points and poles of $f$ on this sheet lie above $G$, the function $f$ belongs to $\alg(G)$. Other functions in $\alg(G)$ are those of the form $g\circ\log(l_1/l_2)+r$, where $g$ is entire and $l_1,l_2\in\mpoly_m(G)$ while $r\in\rat_k(G)$ for some $m,k\in\N$. However, $\alg(G)$ is defined in such a way that it contains no function in $\rat_n(G)$, $n\in\N$, in order to avoid degenerate cases.
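As a concrete illustration of the simplest case of the above family (with $g=\mathrm{id}$, $m=1$, and $r=0$), take two distinct points $a,b\in G$ and set
\begin{equation*}
f(z)\;=\;\log\frac{z-a}{z-b},
\end{equation*}
choosing the branch that vanishes at infinity, which is possible since $(z-a)/(z-b)\to1$ as $z\to\infty$. This branch is holomorphic and single-valued in a neighborhood of $\overline O$, continues analytically along any arc avoiding $a$ and $b$, and picks up the increment $\pm2\pi i$ upon continuation once around $a$ or $b$ alone. Hence $f\in\alg(G)$ with $E_f=\{a,b\}$, and any cut joining $a$ to $b$ within $G$ restores single-valuedness.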
With the above notation, the goal of this section is to describe the asymptotic behavior of \begin{equation} \label{eq:L2besterror}
\rho_{n,2}(f,T) := \inf\left\{\|f-r\|_{2,T}:~r\in\rat_n(G)\right\}, \quad f\in\alg(G). \end{equation} This problem is, in fact, a variation of a classical question in Chebyshev (uniform) rational approximation of holomorphic functions where it is required to describe the asymptotic behavior of \[
\rho_{n,\infty}(f,T) := \inf\left\{\|f-r\|_T:~r\in\rat_n(G)\right\}, \]
where $\|\cdot\|_T$ is the supremum norm on $T$. The theory behind Chebyshev approximation is rather well established, while its $L^2$-counterpart, which naturally arises in system identification and control theory \cite{B_CMFT99} and serves as a method to approach inverse source problems \cite{BAbHL05,BLM06,BMSW06}, is not as well developed. In particular, it follows from the techniques of rational interpolation devised by Walsh \cite{Walsh} that \begin{equation} \label{eq:limsup} \limsup_{n\to\infty}\rho_{n,\infty}^{1/n}(f,T) \leq \exp\left\{-\frac{1}{\cp(K,T)}\right\} \end{equation} for any function $f$ holomorphic outside of $K\subset G$, where $\cp(K,T)$ is the condenser capacity (Section~\ref{sss:cc}) of a set $K$ contained in a domain $G$ relative to this domain\footnote{In Section~\ref{sec:pt} the authors provide a concise but self-contained account of logarithmic potential theory. The reader may want to consult this section to become accustomed to the notation employed for capacities, energies, potentials, and equilibrium measures.}. On the other hand, it was conjectured by Gonchar and proved by Parf\"enov \cite[Sec. 5]{Par86} on simply connected domains, and later by Prokhorov \cite{Pr93} in full generality, that \begin{equation} \label{eq:liminf} \liminf_{n\to\infty}\rho_{n,\infty}^{1/2n}(f,T) \leq \exp\left\{-\frac{1}{\cp(K,T)}\right\}. \end{equation} Notice that only the $n$-th root is taken in \eqref{eq:limsup} while \eqref{eq:liminf} provides asymptotics for the $2n$-th root. Observe also that there are many compact sets $K$ which make a given $f\in\alg(G)$ single-valued in their complement. Hence, \eqref{eq:limsup} and \eqref{eq:liminf} can be sharpened by taking the infimum over $K$ on the right-hand side of both inequalities. To explore this fact we need the following definition.
\begin{df} \label{df:admiss} We say that a compact $K\subset G$ is admissible for $f\in\alg(G)$ if $\overline\C\setminus K$ is connected and $f$ has meromorphic and single-valued extension there. The collection of all admissible sets for $f$ we denote by $\pk_f(G)$. \end{df}
As equations \eqref{eq:limsup} and \eqref{eq:liminf} suggest and Theorem \ref{thm:L2T} below shows, the relevant admissible set in rational approximation to $f\in\alg(G)$ is the set of \emph{minimal condenser capacity} \cite{St85,St85b,St86,St89} relative to $G$:
\begin{df} \label{df:dmcc} Let $f\in\alg(G)$. A compact $\K\in\pk_f(G)$ is said to be a set of minimal condenser capacity for $f$ if \begin{itemize} \item [(i)] $\cp(\K,T)\leq\cp(K,T)$ for any $K\in\pk_f(G)$;
\item [(ii)] $\K\subset K$ for any $K\in\pk_f(G)$ such that $\cp(K,T)=\cp(\K,T)$. \end{itemize} \end{df} It follows from the properties of condenser capacity that $\cp(\K,T)=\cp(T,\K)=\cp(\overline O,\K)$ since $\K$ has connected complement that contains $T$ by Definition~\ref{df:admiss}. In other words, the set $\K$ can be seen as the complement of the ``largest'' (in terms of capacity) domain containing $\overline O$ on which $f$ is single-valued and meromorphic. In fact, this is exactly the point of view taken up in \cite{St85,St85b,St86,St89}. It is known that such a set always exists, is unique, and has, in fact, a rather special structure. To describe it, we need the following definition.
\begin{df} \label{df:elmin} We say that a set $K\in\pk_f(G)$ is a smooth cut for $f$ if $K=E_0\cup E_1\cup\bigcup \gamma_j$, where $\bigcup \gamma_j$ is a finite union of open analytic arcs, $E_0\subseteq E_f$ and each point in $E_0$ is the endpoint of exactly one $\gamma_j$, while $E_1$ is a finite set of points each element of which is the endpoint of at least three arcs $\gamma_j$. Moreover, we assume that across each arc $\gamma_j$ the jump of $f$ is not identically zero. \end{df}
Let us informally explain the motivation behind Definition~\ref{df:elmin}. In order to make $f$ single-valued, it is intuitively clear that one needs to choose a proper system of cuts joining certain points in $E_f$ so that one cannot encircle these points nor access the remaining ones without crossing the cut. It is then plausible that the geometrically ``smallest'' system of cuts consists of Jordan arcs. In the latter situation, the set $E_1$ consists of the points of intersection of these arcs. Thus, each element of $E_1$ serves as an endpoint for at least three arcs since two arcs meeting at a point are considered to be one. In Definition~\ref{df:elmin} we also impose that the arcs be analytic. It turns out that the set of minimal condenser capacity (Theorem~\hyperref[thm:S]{S}) as well as minimal sets from Section~\ref{sec:min} (Theorem~\ref{thm:minset}) have exactly this structure. It is possible for $E_0$ to be a proper subset of $E_f$. This can happen when some of the branch points of $f$ lie above $G$ but on different sheets of the Riemann surface associated with $f$ that cannot be accessed without crossing the considered system of cuts.
The following is known about the set $\K$ (Definition~\ref{df:dmcc}) \cite[Thm. 1 and 2]{St85} and \cite[Thm.~1]{St85b}.
\begin{stahl} \label{thm:S} Let $f\in\alg(G)$. Then $\K$, the set of minimal condenser capacity for $f$, exists and is unique. Moreover, it is a smooth cut for $f$ and \begin{equation} \label{eq:mincondcap} \frac{\partial}{\partial\n^+}V_{\overline\C\setminus\K}^{\omega_{(T,\K)}} = \frac{\partial}{\partial\n^-} V_{\overline\C\setminus\K}^{\omega_{(T,\K)}} \quad \mbox{on} \quad \bigcup \gamma_j, \end{equation} where $\partial/\partial\n^\pm$ are the partial derivatives\footnote{\label{hacross}Since the arcs $\gamma_j$ are analytic and the potential $V_{\overline\C\setminus\K}^{\omega_{(T,\K)}}$ is identically zero on them, $V_{\overline\C\setminus\K}^{\omega_{(T,\K)}}$ can be harmonically continued across each $\gamma_j$ by reflection. Hence, the partial derivatives in \eqref{eq:mincondcap} exist and are continuous.} with respect to the one-sided normals on each $\gamma_j$, $V_{\overline\C\setminus\K}^{\omega_{(T,\K)}}$ is the Green potential of $\omega_{(T,\K)}$ relative to $\overline\C\setminus\K$, and $\omega_{(T,\K)}$ is the Green equilibrium distribution on $T$ relative to $\overline\C\setminus\K$ (Section~\ref{sss:cc}). \end{stahl}
Note that \eqref{eq:mincondcap} is independent of the orientation chosen on $\gamma_j$ to define $\partial/\partial\n^\pm$. Property \eqref{eq:mincondcap} turns out to be more beneficial than Definition~\ref{df:dmcc} in the sense that all the forthcoming proofs use only \eqref{eq:mincondcap}. However, one does not achieve greater generality by relinquishing the connection to the condenser capacity and considering \eqref{eq:mincondcap} by itself as this property uniquely characterizes $\K$. Indeed, the following theorem is proved in Section~\ref{ss:53}.
\begin{thm} \label{thm:dmcc} The set of minimal condenser capacity for $f\in\alg(G)$ is uniquely characterized as a smooth cut for $f$ that satisfies \eqref{eq:mincondcap}. \end{thm}
With all the necessary definitions at hand, we can state the following result. \begin{thm} \label{thm:L2T} Let $T$ be a rectifiable Jordan curve with interior domain $G$ and exterior domain $O$. If $f\in\alg(G)$, then \begin{equation} \label{eq:exactrate} \lim_{n\to\infty} \rho_{n,2}^{1/2n}(f,T) = \lim_{n\to\infty} \rho_{n,\infty}^{1/2n}(f,T) = \exp\left\{-\frac{1}{\cp(\K,T)}\right\}, \end{equation} where $\K$ is the set of minimal condenser capacity for $f$. \end{thm}
The second equality in \eqref{eq:exactrate} follows from \cite[Thm $1^\prime$]{GRakh87}, where a larger class of functions than $\alg(G)$ is considered (see Theorem~\hyperref[thm:GR]{GR} in Section \ref{ss:52}). To prove the first equality, we appeal to another type of approximation, namely, \emph{meromorphic} approximation in $L^2$-norm on $T$, for which asymptotics of the error and the poles are obtained below. This type of approximation turns out to be useful in certain inverse source problems \cite{BLM06,LPadRigZgh08,uClLMPap}. Observe that $|T|^{1/p-1/2}\|h\|_{2,T}\leq \|h\|_{p,T}\leq|T|^{1/p}\|h\|_T$ for any $p\in(2,\infty)$ and any bounded function $h$ on $T$ by the H\"older inequality, where $\|\cdot\|_{p,T}$ is the usual $p$-norm on $T$ with respect to $ds$ and $|T|$ is the arclength of $T$. Thus, Theorem~\ref{thm:L2T} implies that \eqref{eq:exactrate} holds for $L^p(T)$-best rational approximants as well when $p\in(2,\infty)$. In fact, as Vilmos Totik pointed out to the authors \cite{Vilmos}, with a different method of proof Theorem~\ref{thm:L2T} can be extended to include the full range $p\in[1,\infty]$.
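For completeness, the two bounds just invoked follow from H\"older's inequality applied with the exponents $p/2$ and $p/(p-2)$:
\begin{equation*}
\|h\|_{2,T}^2=\int_T|h|^2\,ds\leq\left(\int_T|h|^p\,ds\right)^{2/p}|T|^{1-2/p}=\|h\|_{p,T}^2\,|T|^{1-2/p},
\end{equation*}
which rearranges to $|T|^{1/p-1/2}\|h\|_{2,T}\leq\|h\|_{p,T}$, while trivially $\|h\|_{p,T}^p=\int_T|h|^p\,ds\leq\|h\|_T^p\,|T|$, i.e., $\|h\|_{p,T}\leq|T|^{1/p}\|h\|_T$.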
The best meromorphic approximants just mentioned are defined as follows. Denote by $E^2(G)$ the Smirnov class\footnote{A function $h$ belongs to $E^2(G)$ if $h$ is holomorphic in $G$ and there exists a sequence of rectifiable Jordan curves, say $\{T_n\}$, whose interior domains exhaust $G$, such that $\|h\|_{2,T_n}\leq\const$ independently of $n$.} for $G$ \cite[Sec. 10.1]{Duren}. It is known that functions in $E^2(G)$ have non-tangential boundary values a.e. on $T$, and the traces thus formed belong to $L^2(T)$. Now, put $E_n^2(G):=E^2(G)\mpoly^{-1}_n(G)$ to be the set of meromorphic functions in $G$ with at most $n$ poles there and square-summable traces on $T$. It is known \cite[Sec. 5]{BMSW06} that for each $n\in\N$ there exists $g_n\in E_n^2(G)$ such that \[
\|f-g_n\|_{2,T} = \inf\left\{\|f-g\|_{2,T}:~g\in E_n^2(G)\right\}. \] That is, $g_n$ is a best meromorphic approximant for $f$ in the $L^2$-norm on $T$. \begin{thm} \label{cor:L2T} Let $T$ be a rectifiable Jordan curve with interior domain $G$ and exterior domain $O$. If $f\in\alg(G)$, then \begin{equation} \label{ErrorBestMer}
|f-g_n|^{1/2n} \cic \exp\left\{V_G^{\omega_{(\K,T)}}-\frac{1}{\cp(\K,T)}\right\} \quad \mbox{in} \quad G\setminus\K, \end{equation} where the functions $g_n\in E^2_n(G)$ are best meromorphic approximants to $f$ in the $L^2$-norm on $T$, $\K$ is the set of minimal condenser capacity for $f$ in $G$, $\omega_{(\K,T)}$ is the Green equilibrium distribution on $\K$ relative to $G$, and $\cic$ denotes convergence in capacity (see Section~\ref{sss:lc}). Moreover, the counting measures of the poles of $g_n$ converge weak$^*$ to $\omega_{(\K,T)}$. \end{thm}
\section{$\bar H_0^2$-Rational Approximation} \label{sec:ra}
To prove Theorems~\ref{thm:L2T} and~\ref{cor:L2T}, we derive a stronger result in the model case where $G$ is the unit disk, $\D$. The strengthening comes from the facts that, in this case, $L^2$-best meromorphic approximants specialize to $L^2$-best rational approximants and that the latter also turn out to be interpolants. In fact, we consider not only best rational approximants but also critical points in rational approximation.
Let $\T$ be the unit circle and set for brevity $L^2:=L^2(\T)$. Denote by $H^2\subset L^2$ the Hardy space of functions whose Fourier coefficients with strictly negative indices are zero. The space $H^2$ can be described as the set of traces of holomorphic functions in the unit disk whose square-means on concentric circles centered at zero are uniformly bounded above\footnote{Each such function has non-tangential boundary values almost everywhere on $\T$ and can be recovered from these boundary values by means of the Cauchy or Poisson integral.} \cite{Duren}. Further, denote by $\bar H_0^2$ the orthogonal complement of $H^2$ in $L^2$, $L^2=H^2\oplus\bar H_0^2$, with respect to the standard scalar product \[
\langle f,g \rangle := \int_\T f(\tau)\overline{g(\tau)}|d\tau|, \quad f,g\in L^2. \]
From the viewpoint of analytic function theory, $\bar H_0^2$ can be regarded as a space of traces of functions holomorphic in $\Om:=\overline\C\setminus\overline\D$ and vanishing at infinity whose square-means on concentric circles centered at zero (this time with radii greater than 1) are uniformly bounded above. In what follows, we denote by $\|\cdot\|_2$ the norm on $L^2$ induced by the scalar product $\langle\cdot,\cdot\rangle$. In fact, $\|\cdot\|_2$ is a norm on $H^2$ and $\bar H_0^2$ as well.
We set $\mpoly_n:=\mpoly_n(\D)$ and $\rat_n:=\rat_n(\D)$. Observe that $\rat_n$ is the set of rational functions of degree at most $n$ belonging to $\bar H^2_0$. With the above notation, consider the following $\bar H_0^2$-rational approximation problem:
\noindent
{\it Given $f\in\bar H_0^2$ and $n\in\N$, minimize $\|f-r_n\|_2$ over all $r\in\rat_n$.}
\noindent It is well-known (see \cite[Prop. 3.1]{B06} for the proof and an extensive bibliography on the subject) that this minimum is always attained while any minimizing rational function, also called {\it a best rational approximant} to $f$, lies in $\rat_n\setminus\rat_{n-1}$ unless $f\in\rat_{n-1}$.
Best rational approximants are part of the larger class of \emph{critical points} in $\bar H_0^2$-rational approximation. From the computational viewpoint, critical points are as important as best approximants since a numerical search is more likely to yield a locally best rather than a best approximant. For fixed $f\in\bar H^2_0$, critical points can be defined as follows. Set \begin{equation} \label{eq:ef} \begin{array}{rll} \eba_{f,n}:\poly_{n-1}\times\mpoly_n &\to& [0,\infty) \\
(p,q) &\mapsto& \|f-p/q\|_2^2. \end{array} \end{equation} In other words, $\eba_{f,n}$ is the squared error of approximation of $f$ by $r=p/q$ in $\rat_n$. We topologically identify $\poly_{n-1}\times\mpoly_n$ with an open subset of $\C^{2n}$ with coordinates $p_j$ and $q_k$, $j,k\in\{0,\ldots,n-1\}$ (see \eqref{eq:rat}). Then a pair of polynomials $(p_c,q_c)\in\poly_{n-1}\times\mpoly_n$, identified with a vector in $\C^{2n}$, is said to be a {\it critical pair of order} $n$ if all the partial derivatives of $\eba_{f,n}$ vanish at $(p_c,q_c)$. Correspondingly, a rational function $r_c\in\rat_n$ is a {\it critical point of order} $n$ if it can be written as the ratio $r_c=p_c/q_c$ of a critical pair $(p_c,q_c)$ in $\poly_{n-1}\times\mpoly_n$. A particular example of a critical point is a {\it locally best approximant}. That is, a rational function $r_l=p_l/q_l$ associated with a pair $(p_l,q_l)\in\poly_{n-1}\times\mpoly_n$ such that $\eba_{f,n}(p_l,q_l)\leq\eba_{f,n}(p,q)$ for all pairs $(p,q)$ in some neighborhood of $(p_l,q_l)$ in $\poly_{n-1}\times\mpoly_n$. We call a critical point of order $n$ \emph{irreducible} if it belongs to $\rat_n\setminus\rat_{n-1}$. As we have already mentioned, best approximants, as well as local minima, are always irreducible critical points unless $f\in\rat_{n-1}$. In general there may be other critical points, reducible or irreducible, which are saddles or maxima. In fact, giving amenable conditions for uniqueness of a critical point is a fairly open problem of great practical importance; see \cite{B_CMFT99,BSW96,BY_RTOPAT10} and the bibliography therein.
One of the most important properties of critical points is the fact that they are ``maximal'' rational interpolants. More precisely, if $f\in\bar H^2_0$ and $r_n$ is an irreducible critical point of order $n$, then \emph{$r_n$ interpolates $f$ at the reflection ($z\mapsto1/\bar z$) of each pole of $r_n$ with order twice the multiplicity of that pole} \cite{Lev69}, \cite[Prop. 2]{BY_RTOPAT10}, which is the maximal number of interpolation conditions ({\it i.e., $2n$}) that can be imposed in general on a rational function of type $(n-1,n)$ (i.e., the ratio of a polynomial of degree $n-1$ by a polynomial of degree~$n$).
With all the definitions at hand, we are ready to state our main results concerning the behavior of critical points in $\bar H_0^2$-rational approximation for functions in $\alg(\D)$, which will be proven in Section~\ref{ss:53}. \begin{thm} \label{thm:convcap} Let $f\in\alg(\D)$ and $\{r_n\}_{n\in\N}$ be a sequence of irreducible critical points in $\bar H_0^2$-rational approximation for $f$. Further, let $\K$ be the set of minimal condenser capacity for $f$. Then the normalized counting measures\footnote{The normalized counting measure of poles/zeros of a given function is a probability measure having equal point masses at each pole/zero of the function counting multiplicity.} of the poles of $r_n$ converge weak$^*$ to the Green equilibrium distribution on $\K$ relative to $\D$, $\ged$. Moreover, it holds that \begin{equation} \label{eq:Convergence1}
|(f-r_n)|^{1/2n} \cic \exp\left\{-V_{\overline\C\setminus\K}^{\omega^*_{(\K,\T)}}\right\} \quad \mbox{in} \quad \overline\C\setminus(\K\cup \K^*), \end{equation} where $\K^*$ and $\omega_{(\K,\T)}^*$ are the reflections\footnote{For every set $K$ we define the reflected set $K^*$ as $K^*:=\{z:~1/\bar z\in K\}$. If $\omega$ is a Borel measure in $\overline\C$, then $\omega^*$ is a measure such that $\omega^*(B)=\omega(B^*)$ for every Borel set $B$.} of $\K$ and $\ged$ across $\T$, respectively, and $\cic$ denotes convergence in capacity. In addition, it holds that \begin{equation} \label{eq:Convergence2}
\limsup_{n\to\infty}|(f-r_n)(z)|^{1/2n} \leq \exp\left\{-V_{\overline\C\setminus\K}^{\omega^*_{(\K,\T)}}(z)\right\} \end{equation} uniformly for $z\in\overline\Om$. \end{thm}
Using the fact that the Hardy space $H^2$ is orthogonal to $\bar H_0^2$, one can show that $L^2$-best meromorphic approximants discussed in Theorem~\ref{cor:L2T} specialize to $L^2$-best rational approximants when $G=\D$ (see the proof of Theorem~\ref{cor:L2T}). Moreover, it is shown in Lemma~\ref{lem:pt} in Section~\ref{sec:pt} that $\displaystyle -V_{\overline\C\setminus\K}^{\omega_{(\K,\T)}^*} \equiv V_\D^{\omega_{(\K,\T)}}-1/\cp(\K,\T)$ in $\overline\D$. So, formula \eqref{ErrorBestMer} is, in fact, a generalization of \eqref{eq:Convergence1}, but only in $G\setminus\K$. Lemma~\ref{lem:pt} also implies that $V_{\overline\C\setminus\K}^{\omega_{(\K,\T)}^*}\equiv1/\cp(\K,\T)$ on $\T$. In particular, the following corollary to Theorem \ref{thm:convcap} can be stated.
\begin{cor} \label{cor:normconv} Let $f$, $\{r_n\}$, and $\K$ be as in Theorem \ref{thm:convcap}. Then \begin{equation} \label{eq:Convergence3}
\lim_{n\to\infty} \|f-r_n\|_2^{1/2n} = \lim_{n\to\infty}\|f-r_n\|_\T^{1/2n} = \exp\left\{-\frac{1}{\cp(\K,\T)}\right\}, \end{equation}
where $\|\cdot\|_\T$ stands for the supremum norm on $\T$. \end{cor} Observe that Corollary~\ref{cor:normconv} strengthens Theorem~\ref{thm:L2T} in the case when $T=\T$. Indeed, \eqref{eq:Convergence3} combined with \eqref{eq:exactrate} implies that the critical points in ${\bar H}_0^2$-rational approximation also provide the best rate of uniform approximation in the $n$-th root sense for $f$ on $\overline\Om$.
\section{Domains of Minimal Weighted Capacity} \label{sec:min}
Our approach to Theorem~\ref{thm:convcap} lies in exploiting the interpolation properties of the critical points in $\bar H^2_0$-rational approximation. To this end we first study the behavior of rational interpolants with predetermined interpolation points (Theorem~\ref{thm:mpa} in Section~\ref{sec:pade}). However, before we are able to touch upon the subject of rational interpolation proper, we need to identify the corresponding minimal sets. These sets are the main object of investigation in this section.
Let $\nu$ be a probability Borel measure supported in $\overline\D$. We set \begin{equation} \label{eq:unu}
U^\nu(z) := -\int\log|1-z\bar u|d\nu(u). \end{equation} The function $U^\nu$ is simply the spherically normalized logarithmic potential of $\nu^*$, the reflection of $\nu$ across $\T$ (see \eqref{eq:sphpot}). Hence, it is a harmonic function outside of $\supp(\nu^*)$, in particular, in $\D$. Considering $-U^\nu$ as an {\it external field} acting on non-polar compact subsets of $\D$, we define the weighted capacity in the usual manner (Section~\ref{sss:wc}). Namely, for such a set $K\subset\D$, we define the $\nu$-capacity of $K$ by \begin{equation} \label{eq:wcap} \cp_\nu(K) := \exp\left\{-I_\nu[K]\right\}, \quad I_\nu[K] := \min_\omega\left(I[\omega]-2\int U^\nu d\omega\right), \end{equation} where the minimum is taken over all probability Borel measures $\omega$ supported on $K$ (see Section~\ref{sss:lc} for the definition of energy $I[\cdot]$). Clearly, $U^{\delta_0}\equiv0$ and therefore $\cp_{\delta_0}(\cdot)$ is simply the classical logarithmic capacity (Section~\ref{sss:lc}), where $\delta_0$ is the Dirac delta at the origin.
The purpose of this section is to extend results in \cite{St85, St85b} obtained for $\nu=\delta_0$. To this end, we introduce the notion of a \emph{minimal set} in a weighted context. This generalization is the key that enables us to adapt the results of \cite{GRakh87} to the present situation, and its study constitutes the technical core of the paper. For simplicity, we put $\pk_f:=\pk_f(\D)$.
\begin{df} \label{df:minset} Let $\nu$ be a probability Borel measure supported in $\overline\D$. A compact $\Gamma_\nu\in\pk_f$, $f\in\alg(\D)$, is said to be a minimal set for Problem $(f,\nu)$ if \begin{itemize} \item [(i)] $\cp_\nu(\Gamma_\nu)\leq\cp_\nu(K)$ for any $K\in\pk_f$;
\item [(ii)] $\Gamma_\nu\subset\Gamma$ for any $\Gamma\in\pk_f$ such that $\cp_\nu(\Gamma)=\cp_\nu(\Gamma_\nu)$. \end{itemize} \end{df}
The set $\Gamma_\nu$ will turn out to have geometric properties similar to those of minimal condenser capacity sets (Definition \ref{df:dmcc}). This motivates the following definition.
\begin{df} \label{df:sym} A compact $\Gamma\in\pk_f$ is said to be symmetric with respect to a Borel measure $\omega$, $\supp(\omega)\cap\Gamma=\varnothing$, if $\Gamma$ is a smooth cut for $f$ (Definition \ref{df:elmin}) and \begin{equation} \label{eq:GreenPotSym} \frac{\partial}{\partial\n^+}V_{\overline\C\setminus\Gamma}^\omega = \frac{\partial}{\partial\n^-}V_{\overline\C\setminus\Gamma}^\omega \quad \mbox{on} \quad \bigcup \gamma_j, \end{equation} where $\partial/\partial\n^\pm$ are the partial derivatives with respect to the one-sided normals on each side of $\gamma_j$ and $\displaystyle V_{\overline\C\setminus\Gamma}^\omega$ is the Green potential of $\omega$ relative to $\overline\C\setminus\Gamma$. \end{df}
Definition \ref{df:sym} is given in the spirit of \cite{St85b} and thus appears to be different from the \emph{S-property} defined in \cite{GRakh87}. Namely, a compact $\Gamma\subset\D$ having the structure of a smooth cut is said to possess the S-property in the field $\psi$, assumed to be harmonic in some neighborhood of $\Gamma$, if \begin{equation} \label{eq:sproperty} \frac{\partial (V^{\omega_{\Gamma,\psi}}+\psi)}{\partial\n^+} = \frac{\partial (V^{\omega_{\Gamma,\psi}}+\psi)}{\partial\n^-}, \quad \mbox{q. e. on} \quad \bigcup \gamma_j, \end{equation} where $\omega_{\Gamma,\psi}$ is the weighted equilibrium distribution on $\Gamma$ in the field $\psi$ and the normal derivatives exist at every tame point of $\supp(\omega_{\Gamma,\psi})$ (see Section \ref{ss:52}). It follows from \eqref{eq:whnustar} and \eqref{eq:toRemind1} that $\Gamma$ has the S-property in the field $-U^\nu$ if and only if it is symmetric with respect to $\nu^*$, taking into account that $V^{\omega_{\Gamma,-U^\nu}}-U^\nu$ is constant on the arcs $\gamma_j$ which are regular (see Section~\ref{dubalai}) hence the normal derivatives exist at every point. This reconciles Definition \ref{df:sym} with the one given in \cite{GRakh87} in the setting of our work.
The symmetry property \eqref{eq:GreenPotSym} entails that $V_{\overline\C\setminus\Gamma}^\omega$ has a very special structure.
\begin{prop} \label{prop:minset} Let $\Gamma = E_0\cup E_1\cup\bigcup\gamma_j$ and $V_{\overline\C\setminus\Gamma}^\omega$ be as in Definitions~\ref{df:elmin} and~\ref{df:sym}. Then the arcs $\gamma_j$ possess definite tangents at their endpoints. The tangents to the arcs ending at $e\in E_1$ (there are at least three by definition of a smooth cut) are equiangular. Further, set \begin{equation} \label{eq:funh}
H_{\omega,\Gamma} := \partial_z V_{\overline\C\setminus\Gamma}^\omega, \quad \partial_z := ( \partial_x-i\partial_y)/2. \end{equation} Then $H_{\omega,\Gamma}$ is holomorphic in $\overline\C\setminus(\Gamma\cup\supp(\omega))$ and has continuous boundary values from each side of every $\gamma_j$ that satisfy $H_{\omega,\Gamma}^+ = -H_{\omega,\Gamma}^-$ on each $\gamma_j$. Moreover, $H_{\omega,\Gamma}^2$ is a meromorphic function in $\overline{\C}\setminus\mbox{supp}(\omega)$ that has a simple pole at each element of $E_0$ and a zero at each element $e$ of $E_1$ whose order is equal to the number of arcs $\gamma_j$ having $e$ as endpoint minus 2. \end{prop}
The following theorem is the main result of this section and is a weighted generalization of \cite[Thm. 1 and 2]{St85} and \cite[Thm.~1]{St85b} for functions in $\alg(\D)$.
\begin{thm} \label{thm:minset}
Let $f\in\alg(\D)$ and $\nu$ be a probability Borel measure supported in $\overline\D$. Then a minimal set for Problem $(f,\nu)$, say $\Gamma_\nu$, exists, is unique, and is contained in $\overline\D_r$, $r:=\max_{z\in E_f}|z|$. Moreover, $\Gamma\in\pk_f$ is minimal if and only if it is symmetric with respect to $\nu^*$. \end{thm}
The proof of Theorem~\ref{thm:minset} is carried out in Section~\ref{ss:51} and the proof of Proposition~\ref{prop:minset} is presented in Section~\ref{ss:prop}.
\section{Multipoint Pad\'e Approximation} \label{sec:pade}
In this section, we state a result that yields complete information on the $n$-th root behavior of rational interpolants to functions in $\alg(\D)$. It is essentially a consequence of both Theorem~\ref{thm:minset} and Theorem~4 in \cite{GRakh87} on the behavior of multipoint Pad\'e approximants to functions analytic off a symmetric contour, whose proof plays an essential role here.
Classically, diagonal multipoint Pad\'e approximants to $f$ are rational functions of type $(n,n)$ that interpolate $f$ at a prescribed system of $2n+1$ points. However, when the approximated function is holomorphic at infinity, as is the case when $f\in\alg(\D)$, it is customary to place at least one interpolation point there. More precisely, let $\E:=\{E_n\}_{n\in\N}$ be a triangular scheme of points in $\overline\C\setminus E_f$, that is, each $E_n$ consists of $2n$ not necessarily distinct nor finite points contained in $\overline\C\setminus E_f$, and let $v_n$ be the monic polynomial with zeros at the finite points of $E_n$.
\begin{df} \label{df:pade} Given $f\in\alg(\D)$ and a triangular scheme $\E$, the $n$-th diagonal Pad\'e approximant to $f$ associated with $\E$ is the unique rational function $\Pi_n=p_n/q_n$ satisfying: \begin{itemize} \item $\deg p_n\leq n$, $\deg q_n\leq n$, and $q_n\not\equiv0$;
\item $\left(q_n(z)f(z)-p_n(z)\right)/v_n(z)$ has analytic (multi-valued) extension to $\overline\C\setminus E_f$;
\item $\left(q_n(z)f(z)-p_n(z)\right)/v_n(z)=O\left(1/z^{n+1}\right)$ as $z\to\infty$. \end{itemize} \end{df}
Multipoint Pad\'e approximants always exist since the conditions for $p_n$ and $q_n$ amount to solving a system of $2n+1$ homogeneous linear equations with $2n+2$ unknown coefficients, no solution of which can be such that $q_n\equiv0$ (we may thus assume that $q_n$ is monic); note that the required interpolation at infinity is entailed by the last condition and therefore $\Pi_n$ is, in fact, of type $(n-1,n)$.
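To make the count explicit, assume for simplicity that all the points of $E_n$ are finite (so that $\deg v_n=2n$) and that a fixed branch of $f$ is analytic near each of them. Writing $p_n=\sum_{j=0}^na_jz^j$ and $q_n=\sum_{k=0}^nb_kz^k$, the second condition of Definition~\ref{df:pade} amounts to the $2n$ interpolation conditions \[ (q_nf-p_n)^{(i)}(e)=0, \qquad 0\leq i<m(e), \quad e\in E_n, \] where $m(e)$ is the multiplicity of $e$ in $E_n$, each of which is homogeneous and linear in the $2n+2$ coefficients $a_j,b_k$. Moreover, since $q_nf-p_n=O(z^n)$ as $z\to\infty$, the quotient $(q_nf-p_n)/v_n$ is automatically $O(z^{-n})$ there, and the third condition contributes exactly one more homogeneous linear equation, namely the vanishing of the coefficient of $z^{-n}$ in the expansion of the quotient at infinity; this accounts for the $2n+1$ equations mentioned above.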
We define {\it the support of} $\E$ as $\supp(\E):=\cap_{n\in\N}\overline{\cup_{k\geq n}E_k}$. Clearly, $\supp(\E)$ contains the support of any weak$^*$ limit point of the normalized counting measures of points in $E_n$ (see Section~\ref{sss:wsccic}). We say that a Borel measure $\omega$ is the \emph{asymptotic distribution for $\E$} if the normalized counting measures of points in $E_n$ converge to $\omega$ in the weak$^*$ sense.
\begin{thm} \label{thm:mpa} Let $f\in\alg(\D)$ and $\nu$ be a probability Borel measure supported in $\overline\D$. Further, let $\E$ be a triangular scheme of points, $\supp(\E)\subset\overline\Om$, with asymptotic distribution $\nu^*$. Then \begin{equation} \label{eq:mpa}
\displaystyle |f-\Pi_n|^{1/2n}\cic\exp\left\{-V_{D_\nu}^{\nu^*}\right\} \quad \mbox{in} \quad D_\nu\setminus \supp(\nu^*), \quad D_\nu =\overline\C\setminus\Gamma_\nu, \end{equation} where $\Pi_n$ are the diagonal Pad\'e approximants to $f$ associated with $\E$ and $\Gamma_\nu$ is the minimal set for Problem $(f,\nu)$. It also holds that the normalized counting measures of poles of $\Pi_n$ converge weak$^*$ to $\widehat\nu^*$, the balayage (Section~\ref{ss:balayage}) of $\nu^*$ onto $\Gamma_\nu$ relative to $D_\nu$. In particular, the poles of $\Pi_n$ tend to $\Gamma_\nu$ in full proportion. \end{thm}
\section{Proofs} \label{sec:proofs}
\subsection{Proof of Theorem \ref{thm:minset}} \label{ss:51}
In this section we prove Theorem \ref{thm:minset} in several steps that are organized as separate lemmas.
Denote by $\pd_f$ the subset of $\pk_f$ comprised of those admissible sets that are unions of a finite number of disjoint continua each of which contains at least two points of $E_f$. In particular, each member of $\pd_f$ is a regular set \cite[Thm. 4.2.1]{Ransford} and $\cp(\Gamma_1\setminus(\Gamma_1\cap\Gamma_2))>0$ when $\Gamma_1\neq\Gamma_2$, $\Gamma_1,\Gamma_2\in\pd_f$ (if $\Gamma_1\neq\Gamma_2$, there exists a continuum $\gamma\subset\Gamma_1\setminus(\Gamma_1\cap\Gamma_2)$; as any continuum has positive capacity \cite[Thm. 5.3.2]{Ransford}, the claim follows). Considering $\pd_f$ instead of $\pk_f$ makes the forthcoming analysis simpler but does not alter the original problem as the following lemma shows.
\begin{lem} \label{lem:1haus} It holds that $\displaystyle \inf_{\Gamma\in\pd_f}\cp_\nu(\Gamma) = \inf_{K\in\pk_f}\cp_\nu(K)$. \end{lem} \begin{proof} Pick $K\in\pk_f$ and let $\mathcal{O}$ be the collection of all domains containing $\overline{\C}\setminus K$ to which $f$ extends meromorphically. The set $\mathcal{O}$ is nonempty as it contains $\overline{\C}\setminus K$, it is partially ordered by inclusion, and any totally ordered subset $\{O_\alpha\}$ has an upper bound, {\it e.g.} $\cup_\alpha O_\alpha$. Therefore, by Zorn's lemma \cite[App. 2, Cor.2.5]{Lang}, $\mathcal{O}$ has a maximal element, say $O$.
Put $F=\overline{\C}\setminus O$. With a slight abuse of notation, we still denote by $f$ its meromorphic continuation to $\overline{\C}\setminus F$. Note that a point in $E_f$ is either ``inactive'' (i.e., is not a branch point for that branch of $f$ that we consider over $\overline{\C}\setminus F$) or belongs to $F$.
If $F$ is not connected, there are two bounded disjoint open sets $V_1$, $V_2$ such that $(V_1\cup V_2)\cap F=F$ and, for $j=1,2$, $\partial V_j\cap F=\varnothing$, $V_j\cap F\neq\varnothing$. If $V_j$ contains only one connected component of $F$, we do not refine it further. Otherwise, there are two disjoint open sets $V_{j,1}, V_{j,2} \subset V_{j}$ such that $(V_{j,1}\cup V_{j,2})\cap F=V_{j}\cap F$ and, for $k=1,2$, $\partial V_{j,k}\cap F=\varnothing$, $V_{j,k}\cap F\neq\varnothing$. Iterating this process, we obtain successive generations of finite covers of $F$ by bounded disjoint open sets, each element of which contains at least one connected component of $F$ and has boundary that does not meet $F$. The process stops if $F$ has finitely many components, and then the resulting open sets separate them. Otherwise the process can continue indefinitely and, if $C_1,\ldots,C_N$ are the finitely many connected components of $F$ that meet $E_f$, at least one open set of the $(N+1)$-st generation contains no $C_j$. In any case, if $F$ has more than $N$ connected components, there is a bounded open set $V$, containing at least one connected component of $F$ and no point of $E_f\cap F$, such that $\partial V\cap F=\varnothing$.
Let $A$ be the unbounded connected component of $\overline{\C}\setminus V$ and $A_1,\ldots,A_L$ those bounded components of $\overline{\C}\setminus V$, if any, that contain some $C_j$ (if $L=0$ this is the empty collection). Since $O=\overline{\C}\setminus F$ is connected, each $\partial A_\ell$ can be connected to $\partial A$ by a closed arc $\gamma_{\ell}\subset O$. Then $W:=V\setminus \cup_{\ell}\gamma_{\ell}$ is open with $\partial W\cap F=\varnothing$, it contains at least one connected component of $F$, and no bounded component of its complement meets $E_f\cap F$. Let $X$ be the unbounded connected component of $\overline{\C}\setminus W$ and put $U:=\overline{\C}\setminus X$. The set $U$ is open, simply connected, and $\partial U\subset \partial W$ is compact and does not meet $F$. Moreover, since it is equal to the union of $W$ and all the bounded components of $\overline{\C}\setminus W$, $U$ does not meet $E_f\cap F$.
Now, $f$ is defined and meromorphic in a neighborhood of $\partial U\subset O$, and meromorphically continuable along any path in $U$ since the latter contains no point of $E_f\cap F$. Since $U$ is simply connected, $f$ extends meromorphically to $O\cup U$ by the monodromy theorem. However the latter set is a domain which strictly contains $O$ since $U$ contains $W$ and thus at least one connected component of $F$. This contradicts the maximality of $O$ and shows that $F$ consists precisely of $N$ connected components, namely $C_1,\ldots,C_N$. Moreover, if $\Gamma_j$ is a Jordan curve encompassing $C_j$ and no other $C_\ell$, then by what precedes $f$ must be single-valued along $\Gamma_j$, which is impossible if $C_j\cap E_f$ is a single point by property (iii) in the definition of the class $\alg(G)$. Therefore $F\in \pd_f$ and since $F\subset K$ it holds that $\cp_\nu (F)\leq\cp_\nu(K)$. This completes the proof. \end{proof}
For any $\Gamma\in\pd_f$ and $\epsilon>0$, set $(\Gamma)_\epsilon:=\{z\in\D:\dist(z,\Gamma)<\epsilon\}$. We endow $\pd_f$ with the Hausdorff metric, i.e., \[ d_H(\Gamma_1,\Gamma_2) := \inf\{\epsilon:\Gamma_1\subset(\Gamma_2)_\epsilon,\Gamma_2\subset(\Gamma_1)_\epsilon\}. \] By standard properties of the Hausdorff distance \cite[Sec. 3.16]{Dieudonne}, $\clos_{d_H}(\pd_f)$, the closure of $\pd_f$ in the $d_H$-metric, is a compact metric space. Observe that taking a $d_H$-limit cannot increase the number of connected components since any two components of the limit set have disjoint $\epsilon$-neighborhoods. That is, the $d_H$-limit of a sequence of compact sets having fewer than $N$ connected components has in turn fewer than $N$ connected components. Moreover, each component of the $d_H$-limit of a sequence of compact sets $E_n$ is the $d_H$-limit of a sequence of unions of components from $E_n$. Thus, each element of $\clos_{d_H}(\pd_f)$ still consists of a finite number of continua each containing at least two points from $E_f$ but possibly with multiply connected complement. However, the polynomial convex hull of such a set, that is, the union of the set with the bounded components of its complement, again belongs to $\pd_f$ unless the set touches $\T$.
\begin{lem} \label{lem:1cont} Let $\pd\subset\pd_f$ be such that each element of $\clos_{d_H}(\pd)$ is contained in $\D$. Then the functional $I_\nu[\cdot]$ is finite and continuous on $\clos_{d_H}(\pd)$. \end{lem} \begin{proof} Let $\Gamma_0\in\clos_{d_H}(\pd)$ be fixed. Set $\epsilon_0:=\dist(\Gamma_0,\T)/4>0$ and define \begin{equation} \label{eq:nbhd} \mathcal{N}_{\epsilon_0}(\Gamma_0) := \left\{\Gamma\in\clos_{d_H}(\pd):~d_H(\Gamma_0,\Gamma)<\epsilon_0\right\}. \end{equation} Then it holds that $\dist((\Gamma)_\epsilon,\T) \geq 2\epsilon_0$ for any $\Gamma\in\mathcal{N}_{\epsilon_0}(\Gamma_0)$ and $\epsilon\leq\epsilon_0$. Thus, the closure of each such $(\Gamma)_\epsilon$ is at least $\epsilon_0$ away from $\T_{1-\epsilon_0}$.
Let $\Gamma\in\mathcal{N}_{\epsilon_0}(\Gamma_0)$ and set $\epsilon:=d_H(\Gamma_0,\Gamma)$. Denote by $D_0$ and $D$ the unbounded components of the complements of $\Gamma_0$ and $\Gamma$, respectively. It follows from \eqref{eq:weightgreen} that $I_\nu[\Gamma_0]$ is finite and that \[ I_\nu[\Gamma] - I_\nu[\Gamma_0] = \iint \left(g_D(z,u)-g_{D_0}(z,u)\right)d\widetilde\nu^*(u)d\widetilde\nu^*(z), \] where $\widetilde\nu^*$ is the balayage of $\nu^*$ onto $\T_{1-\epsilon_0}$. Since $\Gamma\subset\overline{(\Gamma_0)_\epsilon}$ and $\Gamma_0\subset\overline{(\Gamma)_\epsilon}$, $g_D(\cdot,u)-g_{D_0}(\cdot,u)$ is a harmonic function in $G:=\overline\C\setminus(\overline{(\Gamma)_\epsilon}\cap\overline{(\Gamma_0)_\epsilon})$ for each $u\in G$ by the first claim in Section~\ref{ss:gp} (recall that we agreed to continue $g_{D_0}(\cdot,u)$ and $g_D(\cdot,u)$ by zero outside of the closures of $D_0$ and $D$, respectively). Thus, since Green functions are non-negative, we get from the maximum principle for harmonic functions and the fact that $\widetilde\nu^*$ is a unit measure that \begin{eqnarray}
\left|I_\nu[\Gamma] - I_\nu[\Gamma_0]\right| &\leq& \max_{u\in\T_{1-\epsilon_0}} \max_{z\in\partial G}|g_D(z,u)-g_{D_0}(z,u)| \nonumber \\ \label{eq:gfbound} {} &<& \max_{u\in\T_{1-\epsilon_0}}\left(\max_{z\in\partial(\Gamma)_\epsilon}g_D(z,u) + \max_{z\in\partial(\Gamma_0)_\epsilon}g_{D_0}(z,u)\right). \end{eqnarray}
Let $\gamma$ be any connected component of $\Gamma$ and $G_\gamma$ be the unbounded component of its complement. Observe that $(\Gamma)_\epsilon=\cup_\gamma(\gamma)_\epsilon$, where the union is taken over the (finitely many) components of $\Gamma$. Since $D\subset G_\gamma$, we get that \begin{equation} \label{eq:twogreen} g_D(z,u) \leq g_{G_\gamma}(z,u) \end{equation} for any $u\in D$ and $z\in G_\gamma\setminus\{u\}$ by the maximum principle.
Set $\delta:=\sqrt{2\epsilon/\cp(\gamma)}$ and $L$ to be the $\log(1+\delta)$-level line of $g_{G_\gamma}(\cdot,\infty)$. As $G_\gamma$ is simply connected, $L$ is a smooth Jordan curve.\footnote{By conformal invariance of Green functions it is enough to check it for $G_\gamma=\Om$ in which case it is obvious.} Since $\gamma$ is a continuum, it is well-known that $\cp(\gamma)\geq\diam(\gamma)/4$ \cite[Thm. 5.3.2]{Ransford}. Recall also that $\gamma$ contains at least two points from $E_f$. Thus, $\diam(\gamma)$ is bounded from below by the minimal distance between the algebraic singularities of $f$. Hence, we can assume without loss of generality that $\delta\leq1$. We claim that $\dist(\gamma,L)\geq\epsilon$ and postpone the proof of this claim until the end of this lemma. The claim immediately implies that $(\gamma)_\epsilon$ is contained in the bounded component of the complement of $L$ and that \begin{equation} \label{eq:maxgreen} \max_{z\in\partial(\gamma)_\epsilon} g_{G_\gamma}(z,\infty)\leq\log(1+\delta) \leq \delta. \end{equation}
It follows from the conformal invariance of the Green function \cite[Thm. 4.4.2]{Ransford} and can be readily verified using the characteristic properties that $g_{G_\gamma}(z,u)=g_{G_\gamma^u}(1/(z-u),\infty)$, where $G_\gamma^u$ is the image of $G_\gamma$ under the map $1/(\cdot-u)$. It is also simple to compute that \begin{equation} \label{eq:neweps} \dist(\gamma^u,\partial(\gamma)_\epsilon^u) \leq \frac{\epsilon}{\dist(u,\gamma)\dist(u,\partial(\gamma)_\epsilon)} \leq\frac{\epsilon}{\epsilon_0^2}, \quad u\in\T_{1-\epsilon_0}, \end{equation} by the remark after \eqref{eq:nbhd}, where $\gamma^u$ and $(\gamma)_\epsilon^u$ have the obvious meaning. So, combining \eqref{eq:neweps} with \eqref{eq:maxgreen} applied to $\gamma^u$, we deduce that \begin{equation} \label{eq:maxgreenu} \max_{z\in\partial(\gamma)_\epsilon}g_{G_\gamma}(z,u) = \max_{z\in\partial(\gamma)_\epsilon^u}g_{G_\gamma^u}(z,\infty) \leq \max_{z\in\partial(\gamma^u)_{\epsilon/\epsilon_0^2}}g_{G_\gamma^u}(z,\infty) \leq \delta_u, \quad u\in\T_{1-\epsilon_0}, \end{equation} where we put $\delta_u:=\sqrt{2\epsilon/(\epsilon_0^2\cp(\gamma^u))}$.
As we already mentioned, $\cp(\gamma)\geq\diam(\gamma)/4$. Hence, it holds that \begin{equation} \label{eq:belowcap}
\min_{u\in\T_{1-\epsilon_0}}\cp(\gamma^u) \geq \frac14\min_{u\in\T_{1-\epsilon_0}}\max_{z,w\in\gamma}\left|\frac{1}{z-u}-\frac{1}{w-u}\right| \geq \frac{\diam(\gamma)}{16}. \end{equation} Gathering together \eqref{eq:twogreen}, \eqref{eq:maxgreenu}, and \eqref{eq:belowcap}, we derive that \[ \max_{u\in\T_{1-\epsilon_0}}\max_{z\in\partial(\Gamma)_\epsilon}g_D(z,u) \leq \max_\gamma \frac{4}{\epsilon_0}\sqrt{\frac{2\epsilon}{\diam(\gamma)}}, \] where $\gamma$ ranges over all components of $\Gamma$. Recall that each component of $\Gamma$ contains at least two points from $E_f$. Thus, $1/\diam(\gamma)$ is bounded above by a constant that depends only on $f$.
Arguing in a similar fashion for $\Gamma_0$, we obtain from \eqref{eq:gfbound} that \[
|I_\nu[\Gamma] - I_\nu[\Gamma_0]| \leq \frac{\const}{\epsilon_0} \sqrt{d_H(\Gamma,\Gamma_0)} \quad \mbox{for any} \quad \Gamma\in\mathcal{N}_{\epsilon_0}(\Gamma_0), \] where $\const$ is a constant depending only on $f$. This finishes the proof of the lemma granted we prove the claim made before \eqref{eq:maxgreen}.
It was claimed that for a continuum $\gamma$ and the $\log(1+\delta)$-level line $L$ of $g_{G_\gamma}(\cdot,\infty)$, $\delta\leq1$, it holds that \begin{equation} \label{eq:rakhper} \dist(\gamma,L) \geq \frac{\delta^2\cp(\gamma)}{2}, \end{equation} where $G_\gamma$ is the unbounded component of the complement of $\gamma$. Inequality \eqref{eq:rakhper} was proved in \cite[Lem. 1]{uPerevRakh}, however, this work was never published and the authors felt compelled to reproduce this lemma here.
Let $\Phi$ be a conformal map of $\Om$ onto $G_\gamma$, $\Phi(\infty)=\infty$. It is well-known that $|\Phi(z)z^{-1}|\to\cp(\gamma)$ as $z\to\infty$ and that $g_{G_\gamma}(\cdot,\infty)=\log|\Phi^{-1}|$, where $\Phi^{-1}$ is the inverse of $\Phi$ (that is, a conformal map of $G_\gamma$ onto $\Om$, $\Phi^{-1}(\infty)=\infty$). Then it follows from \cite[Thm. IV.2.1]{Goluzin} that \begin{equation} \label{eq:goluzin}
|\Phi^\prime(z)| \geq \cp(\gamma)\left(1-\frac{1}{|z|^2}\right), \quad z\in\Om. \end{equation}
Let $z_1\in\gamma$ and $z_2\in L$ be such that $\dist(\gamma,L)=|z_1-z_2|$. Denote by $[z_1,z_2]$ the segment joining $z_1$ and $z_2$. Observe that $\Phi^{-1}$ maps the annular domain bounded by $\gamma$ and $L$ onto the annulus $\{z:1<|z|<1+\delta\}$. Denote by $S$ the intersection of $\Phi^{-1}((z_1,z_2))$ with this annulus. Clearly, the projection $z\mapsto|z|$ of $S$ onto the positive real axis covers the interval $(1,1+\delta)$. Then \begin{eqnarray}
\dist(\gamma,L) &=& \int_{(z_1,z_2)}|dz| = \int_{\Phi^{-1}((z_1,z_2))}|\Phi^\prime(z)||dz| \geq \cp(\gamma)\int_{\Phi^{-1}((z_1,z_2))}\left(1-\frac{1}{|z|^2}\right)|dz| \nonumber \\
{} &\geq& \cp(\gamma)\int_{S}\left(1-\frac{1}{|z|^2}\right)|dz| \geq \cp(\gamma)\int_{(1,1+\delta)}\left(1-\frac{1}{|z|^2}\right)|dz| = \frac{\delta^2\cp(\gamma)}{1+\delta}, \nonumber \end{eqnarray} where we used \eqref{eq:goluzin}. This proves \eqref{eq:rakhper} since it is assumed that $\delta\leq1$. \end{proof}
Set $\pr_\rho(\cdot)$ to be the radial projection onto $\overline\D_\rho$, i.e., $\pr_\rho(z)=z$ if $|z|\leq\rho$ and $\pr_\rho(z)=\rho z/|z|$ if $\rho<|z|<\infty$. Put further $\pr_\rho(K):=\{\pr_\rho(z):z\in K\}$. In the following lemma we show that $\pr_\rho$ can only increase the value of $I_\nu[\cdot]$.
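For orientation, in the unweighted case $\nu=\delta_0$ this monotonicity is elementary: $\pr_\rho$ is the metric projection onto the convex set $\overline\D_\rho$ and is therefore nonexpansive, \[ |\pr_\rho(z_1)-\pr_\rho(z_2)| \leq |z_1-z_2|, \qquad z_1,z_2\in\C, \] so all pairwise distances, and with them the transfinite diameter and hence the logarithmic capacity, can only decrease under $\pr_\rho$. The point of the next lemma is that this monotonicity persists in the presence of the external field $-U^\nu$.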
\begin{lem} \label{lem:1pr}
Let $\Gamma\in\pd_f$ and $\rho\in[r,1)$, $r=\max_{z\in E_f}|z|$. Then $\pr_\rho(\Gamma)\in\pd_f$ and $\cp_\nu(\pr_\rho(\Gamma))\leq\cp_\nu(\Gamma)$. \end{lem} \begin{proof} As $E_f\subset\overline\D_r$, $f$ naturally extends along any ray $t\xi$, $\xi\in\T$, $t\in(r,\infty)$. Thus, the germ $f$ has a representative which is single-valued and meromorphic outside of $\pr_\rho(\Gamma)$. It is also true that $\pr_\rho$ is a continuous map on $\C$ and therefore cannot disconnect the components of $\Gamma$ although it may merge some of them. Thus, $\pr_\rho(\Gamma)\in\pd_f$.
Set $w=\exp\{U^\nu\}$ and \[
\delta_m^w(\Gamma) := \sup_{z_1,\ldots,z_m\in\Gamma}\left[\prod_{1\leq j<i\leq m}|z_i-z_j|w(z_i)w(z_j)\right]^{2/(m(m-1))}. \] It is known \cite[Thm. III.1.3]{SaffTotik} that $\delta_m^w(\Gamma)\to\cp_\nu(\Gamma)$ as $m\to\infty$. Thus, it is enough to obtain that $\delta_m^w(\pr_\rho(\Gamma))\leq\delta_m^w(\Gamma)$ holds for any $m$. In turn, it is sufficient to show that \begin{equation} \label{eq:leq}
|\pr_\rho(z_1)-\pr_\rho(z_2)|w(\pr_\rho(z_1))w(\pr_\rho(z_2)) \leq |z_1-z_2|w(z_1)w(z_2) \end{equation} for any $z_1,z_2\in\D$.
Assume for the moment that $\nu=\delta_u$ for some $u\in\overline\D$, i.e., $w(z)=1/|1-z\bar u|$. It can be readily seen that it is enough to consider only two cases: $|z_1|\leq\rho$, $|z_2|=x>\rho$ and $|z_1|=|z_2|=x>\rho$. In the former situation, \eqref{eq:leq} will follow upon showing that \[
l_1(x) := \frac{x^2+|z_1|^2-2x|z_1|\cos\phi}{1+x^2|u|^2-2x|u|\cos\psi} \]
is an increasing function on $(|z_1|,1/|u|)$ for any choice of $\phi$ and $\psi$. Since \begin{eqnarray}
l_1^\prime(x) &=& 2\frac{x(1-|u|^2|z_1|^2)-|z_1|\cos\phi(1-x^2|u|^2)-|u|\cos\psi(x^2-|z_1|^2)}{(1+x^2|u|^2-2x|u|\cos\psi)^2} \nonumber \\
{} &>& 2\frac{(1-|u||z_1|)(1-x|u|)(x-|z_1|)}{(1+x|u|)^4} > 0, \nonumber \end{eqnarray}
$l_1$ is indeed strictly increasing on $(|z_1|,1/|u|)$. In the latter case, \eqref{eq:leq} is equivalent to showing that \[
l_2(x) := (1/x+x|u|^2-2|u|\cos\phi)(1/x+x|u|^2-2|u|\cos\psi) \]
is a decreasing function on $(\rho,1/|u|)$ for any choice of $\phi$ and $\psi$. This is true since \[
l_2^\prime(x) = 2(|u|^2-1/x^2)(1/x+x|u|^2-|u|(\cos\phi+\cos\psi)) < 0. \] Thus, we verified \eqref{eq:leq} for $\nu=\delta_u$.
In the general case it holds that \[
|z_1-z_2|w(z_1)w(z_2) = \exp\left\{\int\log\frac{|z_1-z_2|}{|1-z_1\bar u||1-z_2\bar u|}d\nu(u)\right\}. \] As the kernel on the right-hand side of the equality above gets smaller when $z_j$ is replaced by $\pr_\rho(z_j)$, $j=1,2$, by what precedes, the validity of \eqref{eq:leq} follows. \end{proof}
Combining Lemmas \ref{lem:1haus}--\ref{lem:1pr}, we obtain the existence of minimal sets.
\begin{lem} \label{lem:1b}
A minimal set $\Gamma_\nu$ exists and is contained in $\overline\D_r$, $r=\max\{|z|:~z\in E_f\}$. \end{lem} \begin{proof} By Lemma \ref{lem:1haus}, it is enough to consider only the sets in $\pd_f$. Let $\{\Gamma_n\}\subset\pd_f$ be a maximizing sequence for $I_\nu[\cdot]$ (minimizing sequence for the $\nu$-capacity), that is, $I_\nu[\Gamma_n]$ tends to $\sup_{\Gamma\in\pd_f}I_\nu[\Gamma]$ as $n\to\infty$. Then it follows from Lemma \ref{lem:1pr} that $\{\pr_r(\Gamma_n)\}$ is another maximizing sequence for $I_\nu[\cdot]$ in $\pd_f$, and $\pr_r(\Gamma_n)\in\pd_r:=\{\Gamma\in\pd_f:~\Gamma\subseteq\overline\D_r\}$. As $\clos_{d_H}(\pd_r)$ is a compact metric space, there exists at least one limit point of $\{\pr_r(\Gamma_n)\}$ in $\clos_{d_H}(\pd_r)$, say $\Gamma_0$, and $\Gamma_0\subset\overline\D_r$. Since $I_\nu[\cdot]$ is continuous on $\clos_{d_H}(\pd_r)$ by Lemma \ref{lem:1cont}, $I_\nu[\Gamma_0]=\sup_{\Gamma\in\pd_f}I_\nu[\Gamma]$. Finally, as the polynomial convex hull of $\Gamma_0$, say $\Gamma_0^\prime$, belongs to $\pd_f$ and since $I_\nu[\Gamma_0]=I_\nu[\Gamma_0^\prime]$ (see Section~\ref{sss:wcf}), we may put $\Gamma_\nu=\Gamma_0^\prime$. \end{proof}
To continue with our analysis we need the following theorem \cite[Thm. 3.1]{Kuz80}. It describes the continuum of minimal condenser capacity connecting finitely many given points as a union of closures of the \emph{non-closed negative critical trajectories} of a quadratic differential. Recall that a negative trajectory of the quadratic differential $q(z)dz^2$ is a maximally continued arc along which $q(z)dz^2<0$; the trajectory is called critical if it ends at a zero or a pole of $q(z)$ \cite{Kuz80,Pommerenke2}.
\begin{kuzmina} \label{thm:K} Let $A=\{a_1,\ldots,a_m\}\subset\D$ be a set of $m\geq2$ distinct points. Then there exists a unique continuum $K_0$, $A\subset K_0\subset\D$, such that \[ \cp(K_0,\T) \leq \cp(K,\T) \] for any other continuum $K$ with $A\subset K\subset\D$. Moreover, there exist $m-2$ points $b_1,\ldots,b_{m-2}\in\D$ such that $K_0$ is the union of the closures of the non-closed negative critical trajectories of the quadratic differential \[ q(z)dz^2, \quad q(z) := \frac{(z-b_1)\cdot\ldots\cdot(z-b_{m-2})(1-\bar b_1z)\cdot\ldots\cdot(1-\bar b_{m-2}z)}{(z-a_1)\cdot\ldots\cdot(z-a_m)(1-\bar a_1 z)\cdot\ldots\cdot(1-\bar a_mz)}, \] contained in $\D$. There exist only finitely many such trajectories. Furthermore, the equilibrium potential $V_\D^{\omega_{(K_0,\T)}}$ satisfies $\displaystyle\left(2\partial_zV_\D^{\omega_{(K_0,\T)}}(z)\right)^2 = q(z)$, $z\in\overline\D$. \end{kuzmina}
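For orientation, consider the simplest instance of Theorem~\hyperref[thm:K]{K}, worked out here purely for illustration (the particular points are ours and this case is not used in the sequel). Take $m=2$ with $a_1=-a$, $a_2=a$, $0<a<1$. Then there are no points $b_j$ and
\[
q(z) = \frac{1}{(z-a)(z+a)(1-az)(1+az)} = \frac{1}{(z^2-a^2)(1-a^2z^2)}.
\]
On the open segment $(-a,a)$ one has $z^2-a^2<0$ and $1-a^2z^2>0$, so $q(z)dz^2<0$ for real $dz$; this segment is therefore a non-closed negative critical trajectory ending at the simple poles $\pm a$, and $K_0=[-a,a]$ is the hyperbolic geodesic arc joining the two points.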
The last equation in Theorem~\hyperref[thm:K]{K} should be understood as follows. The left-hand side of this equality is defined in $\D\setminus K_0$ and represents a holomorphic function there, which coincides with $q$ on its domain of definition. As $K_0$ has no interior because critical trajectories are analytic arcs with limiting tangents at their endpoints \cite{Pommerenke2}, the equality on the whole set $\overline\D$ is obtained by continuity. Note also that $\D\setminus K_0$ is connected by the unicity claimed in Theorem \ref{thm:K}, for the polynomial convex hull of $K_0$ has the same Green capacity as $K_0$ ({\it cf.} Section~\ref{sss:cc}). Moreover, it follows from the local theory of quadratic differentials that each $b_j$ is the endpoint of at least three arcs of $K_0$ (because $b_j$ is a zero of $q(z)$) and that each $a_j$ is the endpoint of exactly one arc of $K_0$ (because $a_j$ is a simple pole of $q(z)$).
Having Theorem~\hyperref[thm:K]{K} at hand, we are ready to describe the structure of a minimal set $ \Gamma_\nu$.
\begin{lem} \label{lem:1sym} A minimal set $\Gamma_\nu$ is symmetric \textnormal{(}Definition \ref{df:sym}\textnormal{)} with respect to $\nu^*$. \end{lem} \begin{proof} Let $\widetilde\nu^*$ be the balayage of $\nu^*$ onto $\T_\rho$ with $\rho<1$ but large enough to contain $\Gamma_\nu$ in the interior of $\D_\rho$. Let $\gamma$ be any of the continua constituting $\Gamma_\nu$. Clearly $V:=V_{D_\nu}^{\widetilde\nu^*}$, where $D_\nu=\overline\C\setminus\Gamma_\nu$, is harmonic in $D_\nu\setminus\T_\rho$ and extends continuously to the zero function on $\Gamma_\nu$ since $\Gamma_\nu$ is a regular set. Moreover, by Sard's theorem on regular values \cite[Sec. 1.7]{GuilleminPollack} there exists $\delta>0$ arbitrarily small such that $\Omega$, the component of $\{z:V(z)<\delta\}$ containing $\gamma$, is itself contained in $\D_\rho$ and its boundary is an analytic Jordan curve, say $L$. Let $\phi$ be a conformal map of $\Omega$ onto $\D$. Set $\widetilde\gamma:=\phi^{-1}(\widetilde K)$, where $\widetilde K$ is the continuum of minimal condenser capacity\footnote{In other words, if we put $\phi(E_f\cap\gamma)=\{p_1,\ldots,p_m\}$ and $g(z):=1/\sqrt[m]{\prod(z-p_j)}$, then $\widetilde K$ is the set of minimal condenser capacity for $g$ as defined in Definition \ref{df:dmcc}.} for $\phi(E_f\cap\gamma)$. Our immediate goal is to show that $\gamma=\widetilde\gamma$.
Assume to the contrary that $\gamma\neq\widetilde\gamma$, i.e., $\phi(\gamma)=:K\neq\widetilde K$, and therefore \begin{equation} \label{eq:asump} \cp(\widetilde K,\T)<\cp(K,\T). \end{equation}
Set \begin{equation} \label{eq:dftV} \widetilde V := \left\{ \begin{array}{ll} \delta\cp(\widetilde K,\T)\left[V_{\overline\C\setminus\widetilde K}^{\omega_{(\T,\widetilde K)}}\circ\phi\right], & z\in \overline\Omega,
\\ V, & z\notin\overline\Omega, \end{array} \right. \end{equation} where $\omega_{(\T,\widetilde K)}$ is the Green equilibrium distribution on $\T$ relative to $\overline\C\setminus\widetilde K$. The functions $V$ and $\widetilde V$ are continuous in $\overline\Omega$ and equal to $\delta$ on $L$. Furthermore, they are harmonic in $\Omega\setminus\gamma$ and $\Omega\setminus\widetilde\gamma$ and equal to zero on $\gamma$ and $\widetilde\gamma$, respectively. Then it follows from Lemma~\ref{lem:nder1} and the conformal invariance of the condenser capacity \eqref{eq:coninv} that \begin{equation} \label{eq:normalV} \frac{1}{2\pi}\int_L\frac{\partial V}{\partial\n}ds = -\delta\cp(K,\T) \quad \mbox{and} \quad \frac{1}{2\pi}\int_L\frac{\partial\widetilde V}{\partial\n}ds = -\delta\cp(\widetilde K,\T), \end{equation} where $\partial/\partial\n$ stands for the partial derivative with respect to the inner normal on $L$. (In Lemma~\ref{lem:nder1}, $L$ should be contained within the domain of harmonicity of $V$ and $\widetilde V$. As $V$ and $\widetilde V$ are constant on $L$, they can be harmonically continued across by reflection. Thus, Lemma~\ref{lem:nder1} does apply.) Moreover, $\widetilde V-V_{\widetilde D}^{\widetilde\nu^*}$ is a continuous function on $\overline\C$ that is harmonic in $\widetilde D\setminus L$ by the first claim in Section~\ref{ss:gp}, where $\widetilde D:=(D_\nu\cup\gamma)\setminus\widetilde\gamma$, and is identically zero on $\Gamma:=\overline\C\setminus\widetilde D$. 
Thus, we can apply Lemma~\ref{lem:nder0} with $\widetilde V-V_{\widetilde D}^{\widetilde\nu^*}$ and $\widetilde D$ (smoothness properties of $\widetilde V-V_{\widetilde D}^{\widetilde\nu^*}$ follow from the fact that $\widetilde V$ can be harmonically continued across $L$), which states that \begin{equation} \label{eq:reprtV} \widetilde V = V_{\widetilde D}^{\widetilde\nu^*-\sigma}, \quad d\sigma := \frac{1}{2\pi}\frac{\partial(\widetilde V-V)}{\partial\n}ds, \end{equation} where $\sigma$ is a finite signed measure supported on $L$ (observe that the outer and inner normal derivatives of $V_{\widetilde D}^{\widetilde\nu^*}$ on $L$ are opposite to each other as $V_{\widetilde D}^{\widetilde\nu^*}$ is harmonic across $L$ and therefore they do not contribute to the density of $\sigma$; due to the same reasoning the outer normal derivative of $\widetilde V$ is equal to minus the inner normal derivative of $V$ by \eqref{eq:dftV}). Hence, one can easily deduce from \eqref{eq:normalV} and \eqref{eq:asump} that \begin{equation} \label{eq:positivetotal} \sigma(L) = \delta\left(\cp(K,\T)-\cp(\widetilde K,\T)\right)>0. \end{equation}
Since the components of $\Gamma_\nu$ and $\Gamma$ contain exactly the same branch points of $f$ and $\Gamma$ has connected complement (for $D_\nu$ is connected and so is $\overline{\C}\setminus\widetilde{\gamma}$ because $\D\setminus\widetilde{K}$ is connected), it follows that $\Gamma\in\pd_f$ by the monodromy theorem. Moreover, we obtain from \eqref{eq:weightgreen}, \eqref{eq:dftV}, and \eqref{eq:reprtV} that \[ I_\nu[\Gamma] - I_\nu[\Gamma_\nu] = I_{\widetilde D}[\widetilde\nu^*] - I_{D_\nu}[\widetilde\nu^*] = \int\left(V_{\widetilde D}^{\widetilde\nu^*} - V\right)d\widetilde\nu^* = \int V_{\widetilde D}^\sigma d\widetilde\nu^* \] since $\supp(\widetilde\nu^*)\cap\overline\Omega=\varnothing$. Further, applying the Fubini-Tonelli theorem and using \eqref{eq:reprtV} once more, we get that \[ I_\nu[\Gamma] - I_\nu[\Gamma_\nu] = \int V_{\widetilde D}^{\widetilde\nu^*}d\sigma = \int\widetilde Vd\sigma + I_{\widetilde D}[\sigma] = \delta\sigma(L) + I_{\widetilde D}[\sigma] > 0 \] by \eqref{eq:positivetotal} and since the Green energy of a signed compactly supported measure of finite Green energy is positive by \cite[Thm. II.5.6]{SaffTotik}. However, the last inequality clearly contradicts the fact that $I_\nu[\Gamma_\nu]$ is maximal among all sets in $\pd_f$ and therefore $\gamma=\widetilde\gamma$. Hence, $K = \widetilde K=\phi(\gamma)$ and $\widetilde V=V$.
Observe now that by Theorem~\hyperref[thm:K]{K} stated just before this lemma and the remarks thereafter, the set $K$ consists of a finite number of open analytic arcs and their endpoints. These fall into two classes $a_1,\ldots,a_m$ and $b_1,\ldots,b_{m-2}$, members of the first class being endpoints of exactly one arc and members of the second class being endpoints of at least three arcs. Thus, the same is true for $\gamma$. Moreover, the jump of $f$ across any open arc $C\subset\gamma$ cannot vanish, for otherwise excising out this arc would leave us with an admissible compact set $\Gamma^\prime\subset\Gamma_\nu$ of strictly smaller $\nu$-capacity since $\omega_{\Gamma_\nu,-U^\nu}(C)>0$ by \eqref{eq:whnustar} and the properties of balayage at regular points (see Section~\ref{sss:wcf}). Hence $\Gamma_\nu$ is a smooth cut (Definition \ref{df:elmin}). Finally, we have that \[
\frac{\partial V}{\partial\n_\gamma^\pm} = \delta\cp(\phi(\gamma),\T)\left(\frac{\partial}{\partial\n_K^\pm} V_{\overline\C\setminus K}^{\omega_{(\T,K)}}\right)|\phi^\prime| \] by \eqref{eq:dftV} and the conformality of $\phi$, where $\partial/\partial\n_\gamma^\pm$ and $\partial/\partial\n_K^\pm$ are the partial derivatives with respect to the one-sided normals at the smooth points of $\gamma$ and $K$, respectively. Thus, it holds that \[ \frac{\partial V}{\partial\n_\gamma^-} = \frac{\partial V}{\partial\n_\gamma^+} \] on the open arcs constituting $\gamma$ since the corresponding property holds for $V_{\overline\C\setminus K}^{\omega_{(\T,K)}}$ by \eqref{eq:mincondcap}. As $\gamma$ was an arbitrary continuum from $\Gamma_\nu$, we see that all the requirements of Definition~\ref{df:sym} are fulfilled. \end{proof}
To finish the proof of Theorem~\ref{thm:minset}, it only remains to show uniqueness of $\Gamma_\nu$, which is achieved through the following lemma:
\begin{lem} \label{lem:1e} $\Gamma_\nu$ is uniquely characterized as a compact set symmetric with respect to $\nu^*$. \end{lem} \begin{proof} Let $\Gamma_s\in\pd_f$ be symmetric with respect to $\nu^*$ and $\Gamma_\nu$ be any set of minimal capacity for Problem $(f,\nu)$. Such a set exists by Lemma \ref{lem:1b} and it is symmetric by Lemma \ref{lem:1sym}. Suppose to the contrary that $\Gamma_s\neq\Gamma_\nu$, that is, \begin{equation} \label{assumption} \Gamma_s\cap (\C\setminus \Gamma_\nu)\neq\varnothing \end{equation} ($\Gamma_s$ cannot be a strict subset of $\Gamma_\nu$ for it would have strictly smaller $\nu$-capacity as pointed out in the proof of Lemma \ref{lem:1sym}). We want to show that \eqref{assumption} leads to \begin{equation} \label{contradiction} I_\nu[\Gamma_s] - I_\nu[\Gamma_\nu] > 0. \end{equation} Clearly, \eqref{contradiction} is impossible by the very definition of $\Gamma_\nu$ and therefore the lemma will be proven.
By the very definition of symmetry (Definition \ref{df:sym}), $\Gamma_\nu$ and $\Gamma_s$ are smooth cuts for $f$. In particular, $\overline{\C}\setminus\Gamma_\nu$, $\overline{\C}\setminus\Gamma_s$ are connected and we have a decomposition of the form \[ \Gamma_s = E_0^s\cup E_1^s \cup \bigcup \gamma_j^s \quad \mbox{and} \quad \Gamma_\nu = E_0^\nu\cup E_1^\nu \cup \bigcup \gamma_j^\nu, \] where $E_0^s, E_0^\nu\subseteq E_f$, $\gamma_j^\nu,\gamma_j^s$ are open analytic arcs, and each element of $E_0^s, E_0^\nu$ is an endpoint of exactly one arc from $\bigcup \gamma_j^s$, $\bigcup \gamma_j^\nu$ while $E_1^s,E_1^\nu$ are finite sets of points, each element of which serves as an endpoint for at least three arcs from $\bigcup \gamma_j^s$, $\bigcup \gamma_j^\nu$, respectively. Moreover, the continuations of $f$ from infinity that are meromorphic outside of $\Gamma_s$ and $\Gamma_\nu$, say $f_s$ and $f_\nu$, are such that the jumps $f_s^+-f_s^-$ and $f_\nu^+-f_\nu^-$ do not vanish on any subset with a limit point of $\bigcup\gamma_j^s$ and $\bigcup\gamma_j^\nu$, respectively. Note that $\Gamma_s\cap\Gamma_\nu\neq\varnothing$, for otherwise $\overline{\C}\setminus(\Gamma_\nu\cup\Gamma_s)$ would be connected, so $f$ could be continued analytically over $(\overline{\C}\setminus\Gamma_\nu)\cup (\overline{\C}\setminus\Gamma_s) =\overline{\C}$ and it would be identically zero by our normalization.
Write $\Gamma_s =\Gamma_s^1\cup\Gamma_s^2$ and $\Gamma_\nu =\Gamma_\nu^1\cup\Gamma_\nu^2$, where $\Gamma_s^k$ (resp. $\Gamma_\nu^k$) are compact disjoint sets such that each connected component of $\Gamma_s^1$ (resp. $\Gamma_\nu^1$) has nonempty intersection with $\Gamma_\nu$ (resp. $\Gamma_s$) while $\Gamma_s^2\cap\Gamma_\nu=\Gamma_\nu^2\cap\Gamma_s=\varnothing$.
Now, put, for brevity, $D_\nu:=\overline\C\setminus\Gamma_\nu$ and $D_s:=\overline\C\setminus\Gamma_s$. Denote further by $\Omega$ the unbounded component of $D_\nu\cap D_s$. Then \begin{equation} \label{commonpoint} \overline{\Omega}\cap E_0^s \cap \Gamma_s^1 = \overline{\Omega}\cap E_0^\nu \cap \Gamma_\nu^1. \end{equation} Indeed, assume that there exists $e\in(\overline{\Omega}\cap E_0^s \cap \Gamma_s^1)\setminus(E_0^\nu \cap \Gamma_\nu^1)$ and let $\gamma^s_e$ be the arc in the union $\bigcup\gamma_j^s$ that has $e$ as one of the endpoints. By our assumption there is an open disk $W$ centered at $e$ such that $W\cap\Gamma_s=\{e\}\cup (W\cap \gamma^s_e)$ and $W\cap\Gamma_\nu=\varnothing$. Thus $W\setminus(\{e\}\cup\gamma^s_e)\subset D_\nu\cap D_s$. Anticipating the proof of Proposition~\ref{prop:minset} in Section~\ref{ss:prop} (which is independent of the present proof), $\gamma^s_e$ has a well-defined tangent at $e$ so we can shrink $W$ to ensure that $\partial W\cap\gamma^s_e$ is a single point. Then $W\setminus(\{e\}\cup\gamma^s_e)$ is connected, hence contained in a single connected component of $D_\nu\cap D_s$, which is necessarily $\Omega$ since $e\in\overline{\Omega}$. As $f_s$ and $f_\nu$ coincide on $\Omega$ and $f_\nu$ is meromorphic in $W$, $f_s$ has identically zero jump on $\gamma^s_e\cap W$, which is impossible by the definition of a smooth cut. Consequently, the left-hand side of \eqref{commonpoint} is included in the right-hand side, and the opposite inclusion can be shown similarly.
\begin{figure}
\caption{\small A particular example of $\Gamma_s$ (solid lines) and $\Gamma_\nu$ (dashed lines). Black dots represent branch points (the black dots within the big gray disk are branch points of $f$ that lie on sheets of the Riemann surface other than the one we fixed). The white area on the figure represents the domain $\Omega$.}
\label{fig:TwoCuts}
\end{figure}
Next, observe that \begin{equation} \label{Gammas2} \Gamma_s^2\cap\overline\Omega = \varnothing. \end{equation} Indeed, since $\partial\Omega\subset\Gamma_s\cup\Gamma_\nu$ and $\Gamma_s^2$, $\Gamma^1_s\cup\Gamma_\nu$ are disjoint compact sets, a connected component of $\partial\Omega$ that meets $\Gamma_s^2$ is contained in it. If $z\in\Gamma_s^2\cap\partial\Omega$ lies on $\gamma_j^s$, then by analyticity of the latter each sufficiently small disk $D_z$ centered at $z$ is cut out by $\gamma_j^s\cap D_z$ into two connected components included in $D_\nu\cap D_s$, and of necessity one of them is contained in $\Omega$. Hence $\gamma_j^s\cap D_z$ is contained in $\partial\Omega$, and in turn so is the entire arc $\gamma_j^s$ by connectedness. Thus every component of $\Gamma_s^2\cap\partial\Omega$ consists of a union of arcs $\gamma_j^s$ connecting at their endpoints. Because $\Gamma_s^2$ has no loop, one of them has an endpoint $z_1\in E_0^s\cup E_1^s$ belonging to no other arc. If $z_1\in E_0^s$, reasoning as we did to prove \eqref{commonpoint} leads to the absurd conclusion that $f_s$ has zero jump across the initial arc. If $z_1\in E_1^s$, anticipating the proof of Proposition~\ref{prop:minset} once again, each sufficiently small disk $D_{z_1}$ centered at $z_1$ is cut out by $\Gamma_s^2\cap D_{z_1}$ into curvilinear sectors included in $D_\nu\cap D_s$, and of necessity one of them is contained in $\Omega$, whence at least two adjacent arcs $\gamma_j^s$ emanating from $z_1$ are included in $\partial\Omega$. This contradicts the fact that $z_1$ belongs to exactly one arc of the hypothesized component of $\Gamma_s^2\cap\partial\Omega$, and proves \eqref{Gammas2}.
Finally, set \[ \Gamma_s^3 := \left[\Gamma_s^1\setminus\left(\partial\Omega\setminus E_1^s\right)\right]\cap D_\nu \quad \mbox{and} \quad \Gamma_s^4 := \left[\Gamma_s^1\cap\bigcup\gamma_j^s\right]\cap\partial\Omega\cap D_\nu. \] Clearly \begin{equation} \label{Gammas3} \left(\Gamma_s^3\setminus E_1^s\right)\cap\overline\Omega = \varnothing. \end{equation} Moreover, observing that any two arcs $\gamma_j^s$, $\gamma_k^\nu$ either coincide or meet in a (possibly empty) discrete set and arguing as we did to prove \eqref{Gammas2}, we see that $\left[\Gamma_s^1\cap\bigcup\gamma_j^s\right]\cap\partial\Omega$ consists of subarcs of arcs $\gamma_j^s$ whose endpoints either belong to some intersection $\gamma_j^s\cap\gamma_k^\nu$ (in which case they contain this endpoint) or else lie in $E_0^s\cup E_1^s$ (in which case they do not contain this endpoint). Thus $\Gamma_s^4$ consists of open analytic arcs $\widetilde{\gamma}_\ell^s$ contained in $\partial\Omega\cap\bigcup\gamma_j^s$ and disjoint from $\Gamma_\nu$. Hence for any $z\in\Gamma_s^4$, say $z\in \widetilde{\gamma}_\ell^s$, and any disk $D_z$ centered at $z$ of small enough radius it holds that $D_z\cap\partial\Omega=D_z\cap\widetilde{\gamma}_\ell^s$ and that $D_z\setminus\widetilde{\gamma}_\ell^s$ has exactly two connected components: \begin{equation} \label{Gammas4} D_z\cap\Omega \neq \varnothing \quad \mbox{and} \quad D_z\cap\left(\overline\C\setminus\overline\Omega\right)\neq\varnothing \end{equation} for if $z\in \widetilde{\gamma}_\ell^s$ were such that $D_z\setminus\widetilde{\gamma}_\ell^s\subset\Omega$, the jump of $f_s$ across $\widetilde{\gamma}_\ell^s$ would be zero as the jump of $f_\nu$ is zero there and $f_s=f_\nu$ in $\Omega$ (see Figure~\ref{fig:TwoCuts}).
As usual, denote by $\widetilde\nu^*$ the balayage of $\nu^*$ onto $\T_\rho$ with $\rho \in (r, 1)$ but large enough so that $\Gamma_s$ and $\Gamma_\nu$ are contained in the interior of $\D_\rho$ (see Lemma~\ref{lem:1b} for the definition of $r$). Then, according to \eqref{eq:weightgreen} and \eqref{eq:di1}, it holds that \begin{equation} \label{eq:apprstep2} I_\nu[\Gamma_s]-I_\nu[\Gamma_\nu] = I_{D_s}[\widetilde\nu^*] - I_{D_\nu}[\widetilde\nu^*] = \di_{D_s}(V_s) - \di_{D_\nu}(V_\nu), \end{equation} where $V_s:=V_{D_s}^{\widetilde\nu^*}$ and $V_\nu:=V_{D_\nu}^{\widetilde\nu^*}$. Indeed, as $\widetilde\nu^*$ has finite energy (see Section~\ref{BP}), the Dirichlet integrals of $V_s$ and $V_\nu$ in the considered domains (see Section~\ref{DI}) are well-defined by Proposition~\ref{prop:minset}, which is proven later but independently of the results in this section.
Set $D:=D_\nu\setminus(\Gamma_s^2\cup\Gamma_s^3)$. Since $\left[\Gamma_s^1\setminus\left(\partial\Omega\setminus E_1^s\right)\right]$ consists of piecewise smooth arcs in $\Gamma_s^1$ whose endpoints either belong to this arc (if they lie in $E_1^s$), or to $E_0^s\cap\Gamma_s^1$ (hence also to $\Gamma_\nu$ by \eqref{commonpoint}), or else to some intersection $\gamma_j^s\cap\gamma_k^\nu$ (in which case they belong to $\Gamma_\nu$ again), we see that $D$ is an open set. As $V_\nu$ is harmonic across $\Gamma_s^2\cup\Gamma_s^3$ and $V_s$ is harmonic across $\Gamma_\nu\setminus\Gamma_s$, we get from \eqref{ajoutbord} that \begin{equation} \label{Dirichlet1} \di_{D_\nu}(V_\nu) = \di_D(V_\nu) \quad \mbox{and} \quad \di_{D_s}(V_s) = \di_{D\setminus\Gamma_s^4}(V_s) \end{equation} since $D_s\setminus\Gamma_\nu=D_\nu\setminus\Gamma_s=D\setminus\Gamma_s^4$, by inspection using \eqref{commonpoint}.
Now, recall that $\Gamma_s$ has no interior and $V_s\equiv0$ on $\Gamma_s$, that is, $V_s$ is defined in the whole complex plane. So, we can define a function on $\C$ by putting \begin{equation} \label{defVt} \widetilde V := \left\{ \begin{array}{rl} V_s, & \mbox{in} \quad \Omega,
\\ -V_s, & \mbox{otherwise}. \end{array} \right. \end{equation} We claim that $\widetilde V$ is superharmonic in $D$ and harmonic in $D\setminus\T_\rho$. Indeed, it is clearly harmonic in $D\setminus(\Gamma_s^4\cup\T_\rho)=(D_s\cap D_\nu)\setminus\T_\rho$ and superharmonic in a neighborhood of $\T_\rho\subset\Omega$ where its weak Laplacian is $-2\pi\widetilde\nu^*$, which is a negative measure. Moreover, $\Gamma_s^4$ is a collection of open analytic arcs such that $\partial V_s/\partial\n^+=\partial V_s/\partial\n^-$ by the symmetry of $\Gamma_s$, where $\n^\pm$ are the two one-sided normals on each subarc of $\Gamma_s^4$. This equality of normal derivatives means that $V_s$ can be continued harmonically across each subarc of $\Gamma_s^4$ by $-V_s$. Hence, \eqref{Gammas4} and the definition of $\widetilde V$ yield that it is harmonic across $\Gamma_s^4$, thereby proving the claim. Thus, using \eqref{decompositivity} (applied with $D^\prime=\Omega$) and \eqref{ajoutbord}, we obtain \begin{equation} \label{Dirichlet2} \di_{D\setminus\Gamma_s^4}(V_s) = \di_{D\setminus\Gamma_s^4}(\widetilde V) = \di_D(\widetilde V) \end{equation} hence combining \eqref{eq:apprstep2}, \eqref{Dirichlet1}, and \eqref{Dirichlet2}, we see that \begin{equation} \label{contr1} I_\nu[\Gamma_s]-I_\nu[\Gamma_\nu] = \di_D(\widetilde V) - \di_D(V_\nu). \end{equation}
By the first claim in Section~\ref{ss:gp}, it holds that $h:=\widetilde V-V_\nu$ is harmonic in $D$. Observe that $h$ is not a constant function, for it tends to zero at each point of $\Gamma_s\cap\Gamma_\nu\subset\partial D$ whereas it tends to a strictly negative value at each point of $\Gamma_s\cap D_\nu\subset\overline{D}$, which is nonempty by \eqref{assumption}. Then \begin{equation} \label{contr2} \di_D(\widetilde V) = \di_D(V_\nu) + \di_D(h) + 2\di_D(V_\nu,h). \end{equation} Now, $V_\nu\equiv0$ on $\Gamma_\nu$ and it is harmonic across $\Gamma_s^2\cup\Gamma_s^3$, hence \[ \frac{\partial h}{\partial \n^+}+\frac{\partial h}{\partial \n^-} = \frac{\partial {\widetilde V}}{\partial \n^+}+\frac{\partial {\widetilde V}}{\partial \n^-} \quad \mbox{on} \quad \Gamma_s^2\cup\Gamma_s^3. \] Consequently, we get from \eqref{eq:greenformula}, since $\widetilde V=-V_s$ in the neighborhood of $\Gamma_s^2\cup\Gamma_s^3$ by \eqref{Gammas2} and \eqref{Gammas3}, that \begin{equation} \label{contr3} \di_D(V_\nu,h) = -\int_{\Gamma_s^2\cup\Gamma_s^3}V_\nu\left(\frac{\partial \widetilde V}{\partial \n^+}+\frac{\partial \widetilde V}{\partial \n^-}\right)\frac{ds}{2\pi} = \int_{\Gamma_s^2\cup\Gamma_s^3}V_\nu\left(\frac{\partial V_s}{\partial \n^+}+\frac{\partial V_s}{\partial \n^-}\right)\frac{ds}{2\pi}\geq0 \end{equation} because $V_\nu$ is nonnegative while $\partial V_s/\partial\n^+$, $\partial V_s/\partial\n^-$ are also nonnegative on $\Gamma_s^2\cup\Gamma_s^3$ as $V_s\geq0$ vanishes there. Altogether, we obtain from \eqref{contr1}, \eqref{contr2}, and \eqref{contr3} that \[ I_\nu[\Gamma_s]-I_\nu[\Gamma_\nu] \geq \di_D(h) > 0 \] by \eqref{positivity} and since $h=\widetilde V-V_\nu$ is a non-constant harmonic function in $D$. This shows \eqref{contradiction} and finishes the proof of the lemma. \end{proof}
\subsection{Proof of Proposition~\ref{prop:minset}} \label{ss:prop}
It is well known that $H_{\omega,\Gamma}$ is holomorphic in the domain of harmonicity of $V_{\overline\C\setminus\Gamma}^\omega$, that is, in $\overline\C\setminus(\Gamma\cup\supp(\omega))$. It is also clear that $H_{\omega,\Gamma}^\pm$ exist smoothly on each $\gamma_j$ since $V_{\overline\C\setminus\Gamma}^\omega$ can be harmonically continued across each side of $\gamma_j$.
Denote by $\n_t^\pm$ the one-sided unit normals at $t\in\bigcup\gamma_j$ and by $\tau_t$ the unit tangent pointing in the positive direction. Let further $n^\pm(t)$ be the unimodular complex numbers corresponding to vectors $\n^\pm_t$. Then the complex number corresponding to $\tau_t$ is $\mp in^\pm(t)$ and it can be readily verified that \[ \frac{\partial V_{\overline\C\setminus\Gamma}^\omega}{\partial\n^\pm_t} = 2\re\left(n^\pm(t)H_{\omega,\Gamma}^\pm(t)\right) \quad \mbox{and} \quad \frac{\partial \left(V_{\overline\C\setminus\Gamma}^\omega\right)^\pm}{\partial\tau_t} = \mp2\im\left(n^\pm(t)H_{\omega,\Gamma}^\pm(t)\right). \] As $\left(V_{\overline\C\setminus\Gamma}^\omega\right)^\pm\equiv0$ on $\Gamma$, the tangential derivatives above are identically zero; therefore $n^\pm H_{\omega,\Gamma}^\pm$ is real on $\Gamma$. Moreover, since $n^+=-n^-$ and by the symmetry property \eqref{eq:GreenPotSym}, it holds that $H_{\omega,\Gamma}^+=-H_{\omega,\Gamma}^-$ on $\bigcup\gamma_j$. Hence, $H_{\omega,\Gamma}^2$ is holomorphic in $\overline\C\setminus(E_0\cup E_1\cup\supp(\omega))$. Since $E_0\cup E_1$ consists of isolated points around which $H_{\omega,\Gamma}^2$ is holomorphic, each $e\in E_0\cup E_1$ is either a pole, a removable singularity, or an essential one. As $H_{\omega,\Gamma}$ is holomorphic on a two-sheeted Riemann surface above the point, it cannot have an essential singularity since its primitive has bounded real part $\pm V_{\overline\C\setminus\Gamma}^\omega$. Now, by repeating the arguments in \cite[Sec. 8.2]{Pommerenke2}, we deduce that $(z-e)^{j_e-2}H^2_{\omega,\Gamma}(z)$ is holomorphic and non-vanishing in some neighborhood of $e$, where $j_e$ is the number of arcs $\gamma_j$ having $e$ as an endpoint, that the tangents at $e$ to these arcs exist, and that they are equiangular if $j_e>1$.
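The first displayed identity in the argument above can be checked directly; here we assume, consistently with that display, that $H_{\omega,\Gamma}$ stands for $\partial_z V_{\overline\C\setminus\Gamma}^\omega$ at the point considered. Writing $\partial_z=\frac12(\partial_x-i\partial_y)$, the derivative of $V:=V_{\overline\C\setminus\Gamma}^\omega$ along the direction of a unit vector represented by the unimodular number $u$ is
\[
\frac{\partial V}{\partial u} = \nabla V\cdot u = \re\bigl((\partial_x V - i\,\partial_y V)\,u\bigr) = 2\re\bigl(u\,\partial_z V\bigr),
\]
and taking $u=n^\pm(t)$ yields the formula for the one-sided normal derivatives; the tangential formula follows in the same way with $u=\mp in^\pm(t)$.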
\subsection{Proof of Theorem \ref{thm:mpa}} \label{ss:52}
The following theorem \cite[Thm. 3]{GRakh87} and its proof are essential in establishing Theorem~\ref{thm:mpa}. Before stating this result, we remind the reader that a polynomial $v$ is said to be spherically normalized if it has the form \begin{equation} \label{eq:sphnp}
v(z) = \prod_{v(e)=0,~|e|\leq1}(z-e)\prod_{v(e)=0,~|e|>1}(1-z/e). \end{equation} We also recall from \cite{GRakh87} the notions of a \emph{tame set} and a \emph{tame point} of a set. A point $z$ belonging to a compact set $\Gamma$ is called tame, if there is a disk centered at $z$ whose intersection with $\Gamma$ is an analytic arc. A compact set $\Gamma$ is called tame, if $\Gamma$ is non-polar and quasi-every point of $\Gamma$ is tame.
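As a simple illustration of \eqref{eq:sphnp} (the particular numbers are ours), a polynomial with zeros $e_1=\frac12$ and $e_2=3$ has spherically normalized form
\[
v(z) = \Bigl(z-\tfrac12\Bigr)\Bigl(1-\tfrac{z}{3}\Bigr),
\]
which differs from the monic polynomial $(z-\frac12)(z-3)$ with the same zeros by the constant factor $-\frac13$. This normalization keeps the coefficients bounded when some zeros tend to infinity, since the factor $(1-z/e)$ tends to $1$ as $e\to\infty$.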
A tame compact set $\Gamma$ is said to have the S-property in the field $\psi$, assumed to be harmonic in some neighborhood of $\Gamma$, if $\supp(\omega_{\Gamma,\psi})$ forms a tame set as well, every tame point of $\supp(\omega_{\Gamma,\psi})$ is also a tame point of $\Gamma$, and the equality in \eqref{eq:sproperty} holds at each tame point of $\supp(\omega_{\Gamma,\psi})$.
Whenever the tame compact set $\Gamma$ has connected complement in a simply connected region $G\supset\Gamma$ and $g$ is holomorphic in $G\setminus\Gamma$, we write $\oint_\Gamma g(t)\,dt$ for the contour integral of $g$ over some (hence any) system of curves encompassing $\Gamma$ once in $G$ in the positive direction. Likewise, the Cauchy integral $\oint_\Gamma g(t)/(z-t)\,dt$ can be defined at any $z\in\overline{\C}\setminus\Gamma$ by choosing the previous system of curves in such a way that it separates $z$ from $\Gamma$.
If $g$ has limits from each side at tame points of $\Gamma$, and if these limits are integrable with respect to linear measure on $\Gamma$, then the previous integrals may well be rewritten as integrals on $\Gamma$ with $g$ replaced by its jump across $\Gamma$. However, this is \emph{not} what is meant by the notation $\oint_\Gamma$.
\begin{gonchar} \label{thm:GR} Let $G\subset\D$ be a simply connected domain and $\Gamma\subset G$ be a tame compact set with connected complement. Let also $g$ be holomorphic in $G\setminus\Gamma$ and have continuous limits on $\Gamma$ from each side in the neighborhood of every tame point, whose jump across $\Gamma$ is non-vanishing q.e. Further, let $\{\Psi_n\}$ be a sequence of functions that satisfy: \begin{enumerate}
\item $\Psi_n$ is holomorphic in $G$ and $-\frac{1}{2n}\log|\Psi_n| \to\psi$ locally uniformly there, where $\psi$ is harmonic in $G$; \item $\Gamma$ possesses the S-property in the field $\psi$ \textnormal{(}see \eqref{eq:sproperty}\textnormal{)}. \end{enumerate} Then, if the polynomials $q_n$, $\deg(q_n)\leq n$, satisfy the orthogonality relations\footnote{Note that the orthogonality in \eqref{eq:orthorel} is non-Hermitian, that is, no conjugation is involved.} \begin{equation} \label{eq:orthorel} \oint_\Gamma q_n(t)l_{n-1}(t)\Psi_n(t)g(t)dt = 0, \quad \mbox{for any} \quad l_{n-1}\in\poly_{n-1}, \end{equation} then $\mu_n\cws\omega_{\Gamma,\psi}$, where $\mu_n$ is the normalized counting measure of zeros of $q_n$. Moreover, if the polynomials $q_n$ are spherically normalized, it holds that \begin{equation} \label{eq:funan1}
\left|A_n(z)\right|^{1/2n} \cic \exp\{-c(\psi;\Gamma)\} \quad \mbox{in} \quad \overline\C\setminus\Gamma, \end{equation} where $c(\psi;\Gamma)$ is the modified Robin constant \textnormal{(}Section~\ref{sss:wc}\textnormal{)}, and \begin{equation} \label{eq:funan2} A_n(z) := \oint_\Gamma q_n^2(t)\frac{(\Psi_ng)(t)dt}{z-t} = \frac{q_n(z)}{l_n(z)}\oint_\Gamma (l_nq_n)(t)\frac{(\Psi_ng)(t)dt}{z-t}, \end{equation} where $l_n$ can be any\footnote{The fact that we can pick an arbitrary polynomial $l_n$ for this integral representation of $A_n$ is a simple consequence of orthogonality relations \eqref{eq:orthorel}.} nonzero polynomial of degree at most $n$. \end{gonchar} \begin{proof}[Proof of Theorem \ref{thm:mpa}]
Let $E_n$ be the sets constituting the interpolation scheme $\E$. Set $\Psi_n$ to be the reciprocal of the spherically normalized polynomial with zeros at the finite elements of $E_n$, i.e., $\Psi_n=1/\tilde v_n$, where $\tilde v_n$ is the spherical renormalization of $v_n$ (see Definition~\ref{df:pade} and \eqref{eq:sphnp}). Then the functions $\Psi_n$ are holomorphic and non-vanishing in $\C\setminus\supp(\E)$ (in particular, in $\D$), $\frac{1}{2n}\log|\Psi_n| \cic U^\nu$ in $\overline\C\setminus\supp(\nu^*)$ by Lemma~\ref{lem:cwssimcic}, and this convergence is locally uniform in $\D$ by definition of the asymptotic distribution and since $\log1/|z-t|$ is continuous on a neighborhood of $\supp(\E)$ for fixed $z\in\D$. As $U^{\nu}$ is harmonic in $\D$, requirement (1) of Theorem~\hyperref[thm:GR]{GR} is fulfilled with $G=\D$ and $\psi=-U^\nu$. Further, it follows from Theorem~\ref{thm:minset} that $\Gamma_\nu$ is a symmetric set. In particular, it is a smooth cut, hence it is tame with tame points $\cup_j\gamma_j$. Moreover, since $\Gamma_\nu$ is regular, we have that $\supp(\omega_{\Gamma,\psi})=\Gamma_\nu$ by \eqref{eq:weqpot} and properties of balayage (Section \ref{dubalai}). Thus, by the remark after Definition~\ref{df:sym}, symmetry implies that $\Gamma_\nu$ possesses the S-property in the field $-U^\nu$ and therefore requirement (2) of Theorem~\hyperref[thm:GR]{GR} is also fulfilled. Let now $Q$, $\deg(Q)=:m$, be a fixed polynomial such that the only singularities of $Qf$ in $\D$ belong to $E_f$. Then $Qf$ is holomorphic and single-valued in $\D\setminus\Gamma_\nu$, it extends continuously from each side on $\cup\gamma_j$, and has a jump there which is continuous and non-vanishing except possibly at countably many points. All the requirements of Theorem~\hyperref[thm:GR]{GR} are then fulfilled with $g=Qf$.
Let $L\subset\D$ be a smooth Jordan curve that separates $\Gamma_\nu$ and the poles of $f$ (if any) from $\E$. Denote by $q_n$ the spherically normalized denominators of the multipoint Pad\'e approximants to $f$ associated with $\E$. It is a standard consequence of Definition \ref{df:pade} (see {\it e.g.} \cite[sec. 1.5.1]{GRakh87}) that \begin{equation} \label{eq:proof1} \int_{L} z^jq_n(z)\Psi_n(z)f(z)dz = 0, \quad j\in\{0,\ldots,n-1\}. \end{equation}
Clearly, relations \eqref{eq:proof1} imply that \begin{equation}
\label{eq:proof2}
\oint_{\Gamma_\nu} (lq_n\Psi_nfQ)(t)dt = 0, \quad \deg(l) < n-m. \end{equation}
Equations \eqref{eq:proof2} differ from \eqref{eq:orthorel} only in the reduction of the degree of polynomials $l$ by a constant $m$. However, to derive the first conclusion of Theorem~\hyperref[thm:GR]{GR}, namely that $\mu_n\cws\omega_{\Gamma,\psi}$, orthogonality relations \eqref{eq:orthorel} are used solely when applied to a specially constructed sequence $\{l_n\}$ such that $l_n=l_{n,1}l_{n,2}$, where $\deg(l_{n,1})\leq n\theta$, $\theta<1$, and $\deg(l_{n,2})=o(n)$ as $n\to\infty$ (see the proof of \cite[Thm. 3]{GRakh87} in between equations (27) and (28)). Thus, the proof is still applicable in our situation, to the effect that the normalized counting measures of the zeros of $q_n$ converge weak$^*$ to $\widehat\nu^*=\omega_{\Gamma_\nu,-U^\nu}$, see \eqref{eq:whnustar}.
For each $n\in\N$, let $q_{n,m}$, $\deg(q_{n,m})=n-m$, be a divisor of $q_n$. Observe that the polynomials $q_{n,m}$ have exactly the same asymptotic zero distribution in the weak$^*$ sense as the polynomials $q_n$. Put
\begin{equation}
\label{eq:proof3}
A_{n,m}(z) := \oint (q_{n,m}q_n)(t)\frac{(\Psi_nfQ)(t)dt}{z-t}, \quad z\in D_\nu.
\end{equation} Due to orthogonality relations \eqref{eq:proof2}, $A_{n,m}$ can be equivalently rewritten as
\begin{equation}
\label{eq:proof4}
A_{n,m}(z) = \frac{q_{n,m}(z)}{l_{n-m}(z)}\oint (l_{n-m}q_n)(t)\frac{(\Psi_n fQ)(t)dt}{z-t}, \quad z\in D_\nu,
\end{equation}
where $l_{n-m}$ is an arbitrary polynomial of degree at most $n-m$. Formulae \eqref{eq:proof3} and \eqref{eq:proof4} differ from \eqref{eq:funan2} in the same manner as orthogonality relations \eqref{eq:proof2} differ from those in \eqref{eq:orthorel}. Examination of the proof of \cite[Thm. 3]{GRakh87} (see the discussion there between equations (33) and (37)) shows that limit \eqref{eq:funan1} is proved using expression \eqref{eq:funan2} for $A_n$ with a choice of polynomials $l_n$ that satisfy some set of \emph{asymptotic} requirements and can be chosen to have the degree $n-m$. Hence it still holds that \begin{equation} \label{eq:proof6}
|A_{n,m}(z)|^{1/2n}\cic\exp\left\{-c(-U^\nu;\Gamma_\nu)\right\} \quad \mbox{in} \quad D_\nu. \end{equation}
Finally, using the Hermite interpolation formula as in \cite[Lem. 6.1.2]{StahlTotik}, the error of approximation admits the representation
\begin{equation} \label{eq:proof5} (f-\Pi_n)(z) = \frac{A_{n,m}(z)}{(q_{n,m}q_nQ\Psi_n)(z)}, \quad z\in D_\nu. \end{equation}
From Lemma~\ref{lem:cwssimcic} we know that $\log(1/|q_n|)/n\cic V_*^{\widehat{\nu^*}}=V^{\widehat{\nu^*}}$ in $D_\nu$, since ordinary and spherically normalized potentials coincide for measures supported in $\overline{\D}$. This fact together with \eqref{eq:proof6} and \eqref{eq:proof5} easily yields that \[
|f-\Pi_n|^{1/2n} \cic \exp\left\{-c(-U^\nu;\Gamma_\nu)+V^{\widehat\nu^*}-U^\nu\right\} \quad \mbox{in} \quad D_\nu\setminus\supp(\nu^*). \] Therefore, \eqref{eq:mpa} follows from \eqref{eq:toRemind1} and the fact that $U^\nu=V_*^{\nu^*}$ by the remark at the beginning of Section~\ref{sss:wcf}. \end{proof}
\subsection{Proof of Theorem~\ref{thm:dmcc}, Theorem~\ref{thm:convcap}, Corollary~\ref{cor:normconv}, Theorem~\ref{cor:L2T}, and Theorem~\ref{thm:L2T}} \label{ss:53}
\begin{proof}[Proof of Theorem \ref{thm:dmcc}] Let $\Gamma\in\pk_f(G)$ be a smooth cut for $f$ that satisfies \eqref{eq:mincondcap} and $\Theta$ be a conformal map of $\D$ onto $G$. Set $K:=\Theta^{-1}(\Gamma)$. Then we get from the conformal invariance of the condenser capacity (see \eqref{eq:coninv}) and the maximum principle for harmonic functions that \[ \cp(\Gamma,T) = \cp(K,\T) \quad \mbox{and} \quad V_{\overline\C\setminus K}^{\omega_{(K,\T)}} = V_{\overline\C\setminus\Gamma}^{\omega_{(\Gamma,T)}}\circ\Theta \quad \mbox{in} \quad \D. \] As $\Theta$ is conformal in $\D$, it can be readily verified that $V_{\overline\C\setminus K}^{\omega_{(K,\T)}}$ satisfies \eqref{eq:mincondcap} as well (naturally, on $K$). Univalence of $\Theta$ also implies that the continuation properties of $(f\circ\Theta)(\Theta^\prime)^{1/2}$ in $\D$ are exactly the same as those of $f$ in $G$. Moreover, this is also true for $f_\Theta$, the orthogonal projection of $(f\circ\Theta)(\Theta^\prime)^{1/2}$ from $L^2$ onto $\bar H_0^2$ (see Section~\ref{sec:ra}). Indeed, $f_\Theta$ is holomorphic in $\Om$ by its very definition and can be continued analytically across $\T$ by $(f\circ\Theta)(\Theta^\prime)^{1/2}$ minus the orthogonal projection of the latter from $L^2$ onto $H^2$, which is holomorphic in $\D$ by definition. Thus, $f_\Theta\in\alg(\D)$ and $\Gamma\in\pk_f(G)$ if and only if $K\in\pk_{f_\Theta}$. Therefore, it is enough to consider only the case $G=\D$.
Let $\Gamma\in\pk_f$ be a smooth cut for $f$ that satisfies \eqref{eq:mincondcap} and $\K$ be the set of minimal condenser capacity ({\it cf.} Theorem \ref{thm:S}). We must prove that $\Gamma=\K$. Set, for brevity, $D_\Gamma:=\overline\C\setminus\Gamma$, $V_\Gamma:=V_{D_\Gamma}^{\omega_{(\T,\Gamma)}}$, $D_\K:=\overline\C\setminus\K$, $V_\K:=V_{D_\K}^{\omega_{(\T,\K)}}$, and $\Omega$ to be the unbounded component of $D_\K\cap D_\Gamma$. Let also $f_{D_\Gamma}$ and $f_{D_\K}$ indicate the meromorphic branches of $f$ in $D_\Gamma$ and $D_\K$, respectively. Arguing as we did to prove \eqref{Gammas2}, we see that no connected component of $\partial\Omega$ can lie entirely in $\Gamma\setminus \K$ (resp. $\K\setminus\Gamma$), since otherwise the jump of $f_{D_\Gamma}$ (resp. $f_{D_\K}$) across some subarc of $\Gamma$ (resp. $\K$) would vanish. Hence, by connectedness, \begin{equation} \label{eq:gammanotk1} \Gamma\cap \K\cap\partial\Omega\neq\varnothing. \end{equation} First, we deal with the special situation where $\omega_{(\T,\K)}=\omega_{(\T,\Gamma)}$. Then $V_\Gamma-V_\K$ is harmonic in $\Omega$ by the first claim in Section~\ref{ss:gp}. As both potentials are constant in $\Om\subset\Omega$, we get that $V_\Gamma=V_\K+\const$ in $\Omega$. Since $\K$ and $\Gamma$ are regular sets, the potentials $V_\Gamma$ and $V_\K$ extend continuously to $\partial\Omega$ and vanish at $\partial\Omega\cap\Gamma\cap\K$, which is non-empty by \eqref{eq:gammanotk1}. Thus, equality of the equilibrium measures means that $V_\Gamma\equiv V_\K$ in $\overline{\Omega}$. However, because $V_\Gamma$ (resp. $V_\K$) vanishes precisely on $\Gamma$ (resp. $\K$), this is possible only if $\partial\Omega\subset\Gamma\cap\K$. Taking complements in $\overline{\C}$, we conclude that $D_\Gamma\cup D_\K$, which is connected and contains $\infty$, does not meet $\partial\Omega$. Therefore $D_\Gamma\cup D_\K\subset\Omega\subset D_\Gamma\cap D_\K$, hence $D_\Gamma= D_\K$ and thus $\Gamma=\K$, as desired.
In the rest of the proof we assume for a contradiction that $\Gamma\neq\K$. Then $\omega_{(\T,\K)}\neq\omega_{(\T,\Gamma)}$ in view of what precedes, and therefore \begin{equation} \label{eq:pr1-2} \di_{D_\K}(V_\K) = I_{D_\K}\left[\omega_{(\T,\K)}\right] < I_{D_\K}\left[\omega_{(\T,\Gamma)}\right] = \di_{D_\K}(V_{D_\K}^{\omega_{(\T,\Gamma)}}) \end{equation} by \eqref{eq:di1} and since the Green equilibrium measure is the unique minimizer of the Green energy.
The argument now follows the lines of the proof of Lemma \ref{lem:1e}. Namely, we write \[ \Gamma= E_0^\Gamma\cup E_1^\Gamma \cup \bigcup \gamma_j^\Gamma, \qquad \K = E_0^\K\cup E_1^\K \cup \bigcup \gamma_j^\K, \] and we define the sets $\Gamma^1$, $\Gamma^2$, $\Gamma^3$, $\Gamma^4$ like we did in that proof for $\Gamma^1_s$, $\Gamma_s^2$, $\Gamma_s^3$, $\Gamma_s^4$, upon replacing $D_s$ by $D_\Gamma$, $D_\nu$ by $D_\K$, $E_j^s$ by $E_j^\Gamma$, $E_j^\nu$ by $E_j^\K$, $\gamma_j^s$ by $\gamma_j^\Gamma$ and $\gamma_j^\nu$ by $\gamma_j^\K$. The same reasoning that led us to \eqref{Gammas2} and \eqref{Gammas3} yields \begin{equation} \label{GammaG} \Gamma^2\cap\overline\Omega = \varnothing,\qquad \left(\Gamma^3\setminus E_1^\Gamma\right)\cap\overline\Omega = \varnothing. \end{equation} Subsequently we set $D:=D_\K\setminus(\Gamma^2\cup\Gamma^3)$ and we prove in the same way that it is an open set satisfying \begin{equation} \label{dirichlet0G} \di_{D_\K}(V_{D_\K}^{\omega_{(\T,\Gamma)}}) = \di_D(V_{D_\K}^{\omega_{(\T,\Gamma)}}) \quad \mbox{and} \quad \di_{D_\Gamma}(V_\Gamma) = \di_{D\setminus\Gamma^4}(V_\Gamma) \end{equation} (compare \eqref{Dirichlet1}). Defining $\widetilde V$ as in \eqref{defVt} with $V_s$ replaced by $V_\Gamma$, and using the symmetry of $\Gamma$ (that is, \eqref{eq:mincondcap} with $\Gamma$ instead of $\K$, which allows us to continue $V_\Gamma$ harmonically by $-V_\Gamma$ across each arc $\gamma_j^\Gamma$) we find that $\widetilde V$ is harmonic in $D\setminus\T$, superharmonic in $D$, and that \begin{equation} \label{dirichlet1G} \di_{D_\Gamma\setminus\Gamma^4}(V_\Gamma) = \di_D(\widetilde V) \end{equation} (compare \eqref{Dirichlet2}). Next, we set $h:=\widetilde V-V_{D_\K}^{\omega_{(\T,\Gamma)}}$, which is harmonic in $D$ by the first claim in Section~\ref{ss:gp}, since $h=V_\Gamma-V_{D_\K}^{\omega_{(\T,\Gamma)}}$ in $\Omega\supset\T$.
Because $\widetilde V=-V_\Gamma$ in a neighborhood of $\Gamma^2\cup\Gamma^3$ by \eqref{GammaG}, the same computation as in \eqref{contr3} gives \[ \di_D(V_{D_\K}^{\omega_{(\T,\Gamma)}},h)\geq0, \] so we get from \eqref{eq:di1}, \eqref{dirichlet0G}, \eqref{dirichlet1G}, \eqref{positivity} and \eqref{eq:pr1-2} that \begin{eqnarray} I_{D_\Gamma}[\omega_{(\T,\Gamma)}] &=& \di_{D_\Gamma}(V_\Gamma) = \di_D(\widetilde V) = \di_D(V_{D_\K}^{\omega_{(\T,\Gamma)}} + h)\nonumber\\ &=&\di_D(V_{D_\K}^{\omega_{(\T,\Gamma)}}) + 2\di_D(V_{D_\K}^{\omega_{(\T,\Gamma)}},h) + \di_D(h) \nonumber \\ \label{eq:pr1-3} {} &\geq& \di_{D_\K}(V_{D_\K}^{\omega_{(\T,\Gamma)}}) + \di_D(h) > \di_{D_\K}(V_\K) = I_{D_\K}[\omega_{(\T,\K)}]. \end{eqnarray} However, it holds that \[ I_{D_\K}[\omega_{(\T,\K)}]=1/\cp(\K,\T) \quad \mbox{and} \quad I_{D_\Gamma}[\omega_{(\T,\Gamma)}]=1/\cp(\Gamma,\T) \] by \eqref{eq:exchange}. Thus, \eqref{eq:pr1-3} yields that $\cp(\Gamma,\T)<\cp(\K,\T)$, which is impossible by the very definition of $\K$. This contradiction finishes the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:convcap}] Let $\{r_n\}$ be a sequence of irreducible critical points for $f$. Further, let $\nu_n$ be the normalized counting measure of the poles of $r_n$ and $\nu$ be a weak$^*$ limit point of $\{\nu_n\}$, i.e., $\nu_n\cws\nu$, $n\in\N_1\subset\N$. Recall that all the poles of $r_n$ are contained in $\D$ and therefore $\supp(\nu)\subseteq\overline\D$.
By Theorem \ref{thm:minset}, there exists a unique minimal set $\Gamma_\nu$ for Problem $(f,\nu)$. Let $Z_n$ be the set of poles of $r_n$, where each pole appears with twice its multiplicity. As mentioned in Section~\ref{sec:ra}, each $r_n$ interpolates $f$ at the points of $Z_n^*$, counting multiplicity. Hence, $\{r_n\}_{n\in\N_1}$ is the sequence of multipoint Pad\'e approximants associated with the triangular scheme $\E=\{Z_n^*\}_{n\in\N_1}$ that has asymptotic distribution $\nu^*$, where $\nu^*$ is the reflection of $\nu$ across $\T$. So, according to Theorem~\ref{thm:mpa} (applied along subsequences), it holds that $\nu = \widehat\nu^*$, $\supp(\nu)=\Gamma_\nu$, i.e., $\nu$ is the balayage of its own reflection across $\T$ relative to $D_\nu$.
Applying Lemma~\ref{lem:pt}, we deduce that $\nu$ is the Green equilibrium distribution on $\Gamma_\nu$ relative to $\D$, that is, $\nu=\omega_{(\Gamma_\nu,\T)}$, and $\widetilde\nu$, the balayage of $\nu$ onto $\T$, is the Green equilibrium distribution on $\T$ relative to $D_\nu$, that is, $\widetilde\nu=\omega_{(\T,\Gamma_\nu)}$. Moreover, Lemma~\ref{lem:pt} yields that $V_{D_\nu}^{\nu^*}=V_{D_\nu}^{\widetilde\nu}$ in $\D$ and therefore $V_{D_\nu}^{\widetilde\nu}$ enjoys symmetry property \eqref{eq:mincondcap} by Theorem~\ref{thm:minset}. Hence, we get from Theorem~\ref{thm:dmcc} that $\Gamma_\nu=\K$, the set of minimal condenser capacity for $f$, and that $\nu=\ged$. Since $\nu$ was an arbitrary limit point of $\{\nu_n\}$, we have that $\nu_n\cws\ged$ as $n\to\infty$. Finally, observe that \eqref{eq:Convergence1} is a direct consequence of Theorem~\ref{thm:mpa}.
To prove \eqref{eq:Convergence2}, we need to go back to representation \eqref{eq:proof5}, where $q_n\in\mpoly_n$ is the denominator of an irreducible critical point $r_n$ and $q_{n,m}$, $\deg(q_{n,m})=n-m$, is an arbitrary divisor of $q_n$, while $\Psi_n=1/\widetilde q_n^2$ with $\widetilde q_n(z)=z^n\overline{q_n(1/\bar z)}$.
Denote by $b_n$ the Blaschke product $q_n/\widetilde q_n$. It is easy to check that $b_n(z)\overline{b_n(1/\bar z)}\equiv1$: indeed, the definition of $\widetilde q_n$ gives $\overline{q_n(1/\bar z)}=z^{-n}\widetilde q_n(z)$ and $\overline{\widetilde q_n(1/\bar z)}=z^{-n}q_n(z)$, whence $\overline{b_n(1/\bar z)}=1/b_n(z)$. Thus, \eqref{eq:proof5} yields that
\begin{equation} \label{eq:4step} (f-r_n)(z) = \overline{b^2_n(1/\bar z)}(l_{n,m}A_{n,m}/Q)(z), \quad z\in\overline\Om, \end{equation} where $l_{n,m}$ is the polynomial of degree $m$ such that $q_n=q_{n,m}l_{n,m}$. Choose $\epsilon>0$ so small that $\K\subset\D_{1-\epsilon}$ (see Theorem \ref{thm:minset}). As $l_{n,m}$ is an arbitrary divisor of $q_n$ of degree $m$, we can choose it to have zeros only in $\D_{1-\epsilon}$ for all $n$ large enough (this is possible since the zeros of $q_n$ approach $\K$ in full proportion). Then it holds that \begin{equation} \label{eq:1step}
\lim_{n\to\infty}|l_{n,m}/Q|^{1/2n} = 1 \end{equation} uniformly on $\overline\Om$. Further, by \eqref{eq:Convergence1} and the last claim of Lemma~\ref{lem:pt}, we have that \begin{equation} \label{eq:5step}
|f-r_n|^{1/2n} \cic \exp\left\{-\frac{1}{\cp(\K,\T)}\right\} \quad \mbox{on} \quad \T. \end{equation} As any Blaschke product is unimodular on the unit circle, we deduce from \eqref{eq:4step}--\eqref{eq:5step} with the help of \eqref{eq:proof6} ({\it i.e.,} $A_{n,m}$ goes to a constant) that \[
|A_{n,m}|^{1/2n} \cic \exp\left\{-\frac{1}{\cp(\K,\T)}\right\} \quad \mbox{in} \quad \overline\C\setminus\K. \] Then we get from Lemma~\ref{lem:cic} that \begin{equation} \label{eq:2step}
\limsup_{n\to\infty} |A_{n,m}|^{1/2n} \leq \exp\left\{-\frac{1}{\cp(\K,\T)}\right\} \end{equation} uniformly on closed subsets of $\overline\C\setminus\K$, in particular, uniformly on $\overline\Om$. Denote by $q_{n,\epsilon}$ the monic polynomial whose zeros are those of $q_n$ lying in $\D_{1-\epsilon}$. Put $n_\epsilon:=\deg(q_{n,\epsilon})$, $\widetilde q_{n,\epsilon}(z)=z^{n_\epsilon}\overline{q_{n,\epsilon}(1/\bar z)}$, and let $\nu_{n,\epsilon}$ be the normalized counting measure of the zeros of $q_{n,\epsilon}$. As $\nu_n\cws\ged$, it is easy to see that $n_{\epsilon}/n\to 1$ and that $\nu_{n,\epsilon}\cws\ged$ when $n\to+\infty$. Thus, by the principle of descent (Section~\ref{sss:wsccic}), it holds that \begin{equation} \label{estqeps}
\limsup_{n\to\infty}|q_{n,\epsilon}|^{1/n} =
\limsup_{n\to\infty}|q_{n,\epsilon}|^{1/{n_\epsilon}} \leq \exp\left\{-V^{\ged}\right\}, \end{equation}
locally uniformly in $\C$. In another connection, since $\log|1-z\bar{u}|$ is continuous for $(z,u)\in \D_{1/(1-\epsilon)}\times\D_{1-\epsilon}$, it follows easily from the weak$^*$ convergence of $\nu_{n,\epsilon}$ that \begin{equation} \label{estqteps}
\lim_{n\to\infty}|\widetilde q_{n,\epsilon}(z)|^{1/n} =
\lim_{n\to\infty}|\widetilde q_{n,\epsilon}(z)|^{1/{n_\epsilon}}
=\exp\left\{\int\log|1-z\bar{u}|d\ged(u)\right\}, \end{equation}
uniformly in $\overline{\D}$. Put $b_{n,\epsilon}:=q_{n,\epsilon}/\widetilde{q}_{n,\epsilon}$. Since the Green function of $\D$ with pole at $u$ is given by $\log|(1-z\bar u)/(z-u)|$, we deduce from \eqref{estqeps}, \eqref{estqteps}, and a simple majorization that \[
\limsup_{n\to\infty}|b_n|^{1/n} \leq \limsup_{n\to\infty}|b_{n,\epsilon}|^{1/n} \leq \exp\left\{-V_\D^{\ged}\right\} \]
uniformly in $\overline\D$. Besides, the Green function of $\Om$ is still given by $\log|(1-z\bar u)/(z-u)|$, hence $V_\D^\omega(1/\bar z)=V_\Om^{\omega^*}(z)$, $z\in\Om$, where $\omega$ is any measure supported in $\D$. Thus, we derive that \begin{equation} \label{eq:3step}
\limsup_{n\to\infty}|b_n^2(1/\bar z)|^{1/2n} \leq \exp\left\{-V_\Om^{\ged^*}(z)\right\} \end{equation} holds uniformly on $\overline\Om$. Combining \eqref{eq:4step}--\eqref{eq:3step}, we deduce that \[
\limsup_{n\to\infty}|f-r_n|^{1/2n} \leq \exp\left\{-\frac{1}{\cp(\K,\T)}-V_\Om^{\ged^*}\right\} \] uniformly on $\overline\Om$. This finishes the proof of the theorem since $V_{\overline\C\setminus\K}^{\ged^*}=\frac{1}{\cp(\K,\T)}+V_\Om^{\ged^*}$ in $\overline\Om$ by Lemma~\ref{lem:pt}, the maximum principle for harmonic functions applied in $\Om$, and the fact that the difference of two Green potentials of the same measure but on different domains is harmonic in a neighborhood of the support of that measure by the first claim in Section~\ref{ss:gp}. \end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:normconv}]
It follows from \eqref{eq:Convergence2} and Lemma~\ref{lem:pt} that \[
\limsup_{n\to\infty}\|f-r_n\|_\T^{1/2n} \leq \exp\left\{-\frac{1}{\cp(\K,\T)}\right\}. \] On the other hand, by \eqref{eq:Convergence1} and the very definition of convergence in capacity, we have for any $\epsilon>0$ small enough that \[
|f-r_n|>\left(\exp\left\{-\frac{1}{\cp(\K,\T)}\right\} - \epsilon\right)^{2n} \quad \mbox{on} \quad \T\setminus S_{n,\epsilon}, \]
where $\cp(S_{n,\epsilon})\to0$ as $n\to\infty$. In particular, it means that $|S_{n,\epsilon}|\to0$ by \cite[Thm. 5.3.2(d)]{Ransford}, where $|S_{n,\epsilon}|$ is the arclength measure of $S_{n,\epsilon}$. Hence, we have that \begin{eqnarray}
\liminf_{n\to\infty}\|f-r_n\|_2^{1/2n} &\geq& \lim_{n\to\infty}\left(\frac{|\T\setminus S_{n,\epsilon}|}{2\pi}\right)^{1/4n}\left(\exp\left\{-\frac{1}{\cp(\K,\T)}\right\}-\epsilon\right) \nonumber \\ {} &=& \exp\left\{-\frac{1}{\cp(\K,\T)}\right\}-\epsilon. \nonumber \end{eqnarray}
As $\epsilon$ was arbitrary and since $\|f-r_n\|_2\leq2\pi\|f-r_n\|_\T$, this finishes the proof of the corollary. \end{proof}
\begin{proof}[Proof of Theorem~\ref{cor:L2T}] Let $\Theta$ be the conformal map of $\D$ onto $G$. Observe that $\Theta^\prime$ is a holomorphic function in $\D$ with integrable trace on $\T$ since $T$ is rectifiable \cite[Thm.~3.12]{Duren}, and that $\Theta$ extends in a continuous manner to $\T$ where it is absolutely continuous. Hence, $(f\circ\Theta)(\Theta^\prime)^{1/2}\in L^2$. Moreover, $g$ lies in $E_n^2(G)$ if and only if $(g\circ\Theta)(\Theta')^{1/2}$ lies in $H_n^2:=H^2\mpoly^{-1}_n$. Indeed, denote by $E^\infty(G)$ the space of bounded holomorphic functions in $G$ and set $E_n^\infty(G):=E^\infty(G)\mpoly^{-1}_n(G)$. It is clear that $g\in E_n^\infty(G)$ if and only if it is meromorphic in $G$ and bounded outside a compact subset thereof. This makes it obvious that $g\in E_n^\infty(G)$ if and only if $g\circ\Theta\in H_n^\infty:=H^\infty\mpoly^{-1}_n$, where $H^\infty$ is the space of bounded holomorphic functions in $\D$. It is also easy to see that $E_n^2(G)=E^2(G)E_n^\infty(G)$. Since it is known that $g\in E^2(G)$ if and only if $(g\circ\Theta)(\Theta')^{1/2}\in H^2$ \cite[corollary to Thm.~10.1]{Duren}, the claim follows. Notice also that $g_n$ is a best approximant for $f$ from $E_n^2(G)$ if and only if $(g_n\circ\Theta)(\Theta')^{1/2}$ is a best approximant for $(f\circ\Theta)(\Theta')^{1/2}$ from $H_n^2$. This is immediate from the change of variable formula, namely, \[
\|f-g_n\|_{2,T}^2=\int_{\T}|f\circ\Theta-g_n\circ\Theta|^2|\Theta^\prime|d\theta = \|(f\circ\Theta)(\Theta^\prime)^{1/2}-(g_n\circ\Theta)(\Theta^\prime)^{1/2}\|_2^2, \]
where we used the fact that $|d\Theta(e^{i\theta})| =|\Theta'(e^{i\theta})| d\theta$ a.e. on $\T$ \cite[Thm.~3.11]{Duren}.
Now, let $g_n$ be a best meromorphic approximant for $f$ from $E^2_n(G)$. As $L^2=H^2\oplus\bar H^2_0$, it holds that $(g_n\circ\Theta)(\Theta^\prime)^{1/2}=g_n^++r_n$ and $(f\circ\Theta)(\Theta^\prime)^{1/2}=f^++f^-$, where $g_n^+,f^+\in H^2$ and $r_n,f^-\in\bar H^2_0$. Moreover, it can be easily checked that $r_n\in\rat_n$ and, as explained at the beginning of the proof of Theorem~\ref{thm:dmcc}, that $f^-\in\alg(\D)$. Since by Parseval's relation \[
\|(f\circ\Theta)(\Theta^\prime)^{1/2}-(g_n\circ\Theta)(\Theta^\prime)^{1/2}\|_2^2 = \|f^+-g_n^+\|_2^2+\|f^--r_n\|_2^2, \] we immediately deduce that $g_n^+=f^+$ and that $r_n$ is an $\bar H^2_0$-best rational approximant for $f^-$. Moreover, by the conformal invariance of the condenser capacity (see \eqref{eq:coninv}), $\cp(\K,T)=\cp(\Theta^{-1}(\K),\T)$. It is also easy to verify that $K\in\pk_f(G)$ if and only if $\Theta^{-1}(K)\in\pk_{f^-}(\D)$. Hence, we deduce from Theorem~\ref{thm:convcap} and the remark thereafter that \[
|f^--r_n|^{1/2n} \cic \exp\left\{V_\D^{\omega_{(\Theta^{-1}(\K),\T)}}-\frac{1}{\cp(\Theta^{-1}(\K),\T)}\right\} \quad \mbox{in} \quad \D\setminus\Theta^{-1}(\K). \] The result then follows from the conformal invariance of the Green equilibrium measures, Green capacity, and Green potentials and the fact that, since $\Theta$ is locally Lipschitz-continuous in $\D$, it cannot locally increase the capacity by more than a multiplicative constant \cite[Thm. 5.3.1]{Ransford}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:L2T}] By Theorem~\hyperref[thm:S]{S} and decomposition \eqref{eq:toRemind}, the set $\K$ of minimal condenser capacity for $f$ is a smooth cut, hence a tame compact set with tame points $\cup\gamma_j$, such that \[ \frac{\partial}{\partial\n^+}V^{\widehat\omega_{(T,\K)}-\omega_{(T,\K)}} = \frac{\partial}{\partial\n^-} V^{\widehat\omega_{(T,\K)}-\omega_{(T,\K)}} \quad \mbox{on} \quad \bigcup \gamma_j, \] where $\widehat\omega_{(T,\K)}$ is the balayage of $\omega_{(T,\K)}$ onto $\K$. As $\widehat\omega_{(T,\K)}$ is the weighted equilibrium distribution on $\K$ in the field $V^{-\omega_{(T,\K)}}$ (see \eqref{eq:weqpot}), the set $\K$ possesses the S-property in the sense of \eqref{eq:sproperty}. If $f$ is holomorphic in $\overline{\C}\setminus\K$ and since it extends continuously from both sides on each $\gamma_j$ with a jump that can vanish in at most countably many points, we get from \cite[Thm. 1$^\prime$]{GRakh87} that \begin{equation} \label{eq:upperrho} \lim_{n\to\infty}\rho_{n,\infty}^{1/2n}(f,T) = \exp\left\{-\frac{1}{\cp(\K,T)}\right\}. \end{equation}
However, Theorem~1$^\prime$ in \cite{GRakh87} is obtained as an application of Theorem~\hyperref[thm:GR]{GR}. Since the latter also holds for functions in $\alg(G)$, that is, those that are meromorphic in $\overline{\C}\setminus\K$ (see the explanation in the proof of Theorem~\ref{thm:mpa}), \eqref{eq:upperrho} is valid for these functions as well. As $\rho_{n,2}(f,T)\leq |T|^{1/2}\rho_{n,\infty}(f,T)$, where $|T|$ is the arclength of $T$, we get from \eqref{eq:upperrho} that \[ \limsup_{n\to\infty}\rho_{n,2}^{1/2n}(f,T) \leq \exp\left\{-\frac{1}{\cp(\K,T)}\right\}. \]
On the other hand, let $g_n$ be a best meromorphic approximant for $f$ from $E_n^2(G)$ as in Theorem~\ref{cor:L2T}. Using the same notation, it was shown there that $((f-g_n)\circ\Theta)(\Theta^\prime)^{1/2}=(f^--r_n)$, where $r_n$ is a best $\bar H_0^2$-rational approximant for $f^-$ from $\rat_n$. Hence, we deduce from the chain of equalities \[
\|f-g_n\|_{2,T} = \|(f\circ\Theta)(\Theta^\prime)^{1/2}-(g_n\circ\Theta)(\Theta^\prime)^{1/2}\|_2 = \|f^--r_n\|_2 \] and Corollary~\ref{cor:normconv} that \[
\lim_{n\to\infty}\|f-g_n\|_{2,T}^{1/2n} = \exp\left\{-\frac{1}{\cp(\Theta^{-1}(\K),\T)}\right\} = \exp\left\{-\frac{1}{\cp(\K,T)}\right\}. \]
As $\rho_{n,2}(f,T)\geq\|f-g_n\|_{2,T}$ by the very definition of $g_n$ and the inclusion $\rat_n(G)\subset E^2_n(G)$, the lower bound for the limit inferior of $\rho_{n,2}^{1/2n}(f,T)$ follows. \end{proof}
\section{Some Potential Theory} \label{sec:pt}
Below we give a brief account of logarithmic potential theory that was used extensively throughout the paper. We refer the reader to the monographs \cite{Ransford,SaffTotik} for a thorough treatment.
\subsection{Capacities}
In this section we introduce logarithmic, weighted, and condenser capacities.
\subsubsection{Logarithmic Capacity} \label{sss:lc} The {\it logarithmic potential} of a finite positive measure $\omega$, compactly supported in $\C$, is defined by \[
V^\omega(z):=-\int\log|z-u|d\omega(u), \quad z\in\C. \] The function $V^\omega$ is superharmonic with values in $(-\infty,+\infty]$ and is not identically $+\infty$. The {\it logarithmic energy} of $\omega$ is defined by \[
I[\omega]:=\int V^\omega(z)d\omega(z)=-\iint\log|z-u|d\omega(u)d\omega(z). \] As $V^\omega$ is bounded below on $\supp(\omega)$, it follows that $I[\omega]\in(-\infty,+\infty]$.
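For instance, if $\omega$ is the normalized arclength measure on the circle $|z|=r$, then the mean value property of harmonic functions yields
\[
V^\omega(z) = \left\{
\begin{array}{ll}
\log(1/r), & |z|\leq r, \\
\log(1/|z|), & |z|>r,
\end{array}
\right.
\]
and therefore $I[\omega]=\log(1/r)$. Note that this energy is negative when $r>1$, which explains why $I[\omega]$ is only known a priori to lie in $(-\infty,+\infty]$.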
Let $F\subset \C$ be compact and $\Lm(F)$ denote the set of all probability measures supported on $F$. If the logarithmic energy of every measure in $\Lm(F)$ is infinite, we say that $F$ is {\it polar}. Otherwise, there exists a unique $\omega_F\in\Lm(F)$ that minimizes the logarithmic energy over all measures in $\Lm(F)$. This measure is called the {\it equilibrium distribution} on $F$ and it is known that $\omega_F$ is supported on the outer boundary of $F$, i.e., the boundary of the unbounded component of the complement of $F$. Hence, if $K$ and $F$ are two compact sets with identical outer boundaries, then $\omega_K=\omega_F$.
The {\it logarithmic capacity}, or simply the capacity, of $F$ is defined as \[ \cp(F)=\exp\{-I[\omega_F]\}. \] By definition, the capacity of an arbitrary subset of $\C$ is the {\it supremum} of the capacities of its compact subsets. We agree that the capacity of a polar set is zero. It follows readily from what precedes that the capacity of a compact set is equal to the capacity of its outer boundary.
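For example, the equilibrium distribution on the closed disk $\overline\D_r$ is the normalized arclength measure on the circle $|z|=r$, whose energy is $\log(1/r)$, so that $\cp(\overline\D_r)=r$; likewise, it is classical that
\[
\cp([a,b]) = \frac{b-a}{4}
\]
for a real segment $[a,b]$ (see, e.g., \cite{Ransford}). The first computation also illustrates that only the outer boundary matters: the disk $\overline\D_r$ and the circle $|z|=r$ have the same capacity.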
We say that a property holds \emph{quasi everywhere (q.e.)} if it holds everywhere except on a set of zero capacity. We also say that a sequence of functions $\{h_n\}$ converges {\it in capacity} to a function $h$, $h_n\cic h$, on a compact set $K$ if for any $\epsilon>0$ it holds that \[
\lim_{n\to\infty} \cp\left(\left\{z\in K:~ |h_n(z)-h(z)| \geq \epsilon\right\}\right) = 0. \] Moreover, we say that the sequence $\{h_n\}$ converges in capacity to $h$ in a domain $D$ if it converges in capacity on each compact subset of $D$. In the case of an unbounded domain, $h_n\cic h$ around infinity if $h_n(1/\cdot)\cic h(1/\cdot)$ around the origin.
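Convergence in capacity is genuinely weaker than locally uniform convergence because it tolerates wandering poles. For instance, if $z_n\in K$ and $h_n(z):=1/\big(n(z-z_n)\big)$, then the set $\{z\in K:\,|h_n(z)|\geq\epsilon\}$ is contained in the disk $|z-z_n|\leq1/(n\epsilon)$, whose capacity $1/(n\epsilon)$ tends to zero; hence $h_n\cic0$ on $K$ even though each $h_n$ has a pole on $K$. This is the typical situation encountered with the errors of rational and Pad\'e approximants, whose spurious poles may wander in the domain of convergence.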
When the support of $\omega$ is unbounded, it is easier to consider $V_*^\omega$, the spherical logarithmic potential of $\omega$, i.e., \begin{equation} \label{eq:sphpot} V_*^\omega(z) = \int k(z,u)d\omega(u), \quad k(z,u) = -\left\{ \begin{array}{ll}
\log|z-u|, & \mbox{if} ~ |u|\leq1,
\\
\log|1-z/u|, & \mbox{if} ~ |u|>1. \end{array} \right. \end{equation} The advantages of dealing with the spherical logarithmic potential shall become apparent later in this section.
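Observe that for $|u|>1$ one has $\log|1-z/u|=\log|z-u|-\log|u|$, so that
\[
k(z,u) = -\log|z-u|+\log|u|, \quad |u|>1;
\]
in other words, the spherical kernel differs from the logarithmic one only by a renormalizing constant depending on $u$, and $k(z,u)\to0$ as $u\to\infty$ for fixed $z$. This makes $V_*^\omega$ well suited to measures with unbounded support, while $V_*^\omega=V^\omega$ whenever $\supp(\omega)\subseteq\overline\D$.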
\subsubsection{Weighted Capacity} \label{sss:wc}
Let $F$ be a non-polar compact set and $\psi$ be a lower semi-continuous function on $F$ such that $\psi<\infty$ on a non-polar subset of $F$. For any measure $\omega\in\Lm(F)$, we define the weighted energy\footnote{Logarithmic energy with an external field is called weighted as it turns out to be an important object in the study of weighted polynomial approximation \cite[Ch. VI]{SaffTotik}.} of $\omega$ by \[ I_\psi[\omega] := I[\omega] + 2\int\psi d\omega. \] Then there exists a unique measure $\omega_{F,\psi}$, the \emph{weighted equilibrium distribution} on $F$, that minimizes $I_\psi[\omega]$ among all measures in $\Lm(F)$ \cite[Thm. I.1.3]{SaffTotik}. Clearly, $\omega_{F,\psi}=\omega_F$ when $\psi\equiv0$.
The measure $\omega_{F,\psi}$ admits the following characterization \cite[Thm. I.3.3]{SaffTotik}. Let $\omega$ be a positive Borel measure with compact support and finite energy such that $V^\omega+\psi$ is constant q.e. on $\supp(\omega)$ and at least as large as this constant q.e. on $F$. Then $\omega=\omega_{F,\psi}$. The value of $V^\omega+\psi$ q.e. on $\supp(\omega_{F,\psi})$ is called the \emph{modified Robin constant} and it can be expressed as \begin{equation} \label{eq:robinconst} c(\psi;F) = I_\psi[\omega_{F,\psi}]-\int\psi d\omega_{F,\psi} = I[\omega_{F,\psi}]+\int\psi d\omega_{F,\psi}. \end{equation} The \emph{weighted capacity} of $F$ is defined as $\displaystyle \cp_\psi(F) = \exp\left\{-I_\psi[\omega_{F,\psi}]\right\}$.
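As a simple sanity check, let $\psi\equiv c$ be constant on $F$. Then $I_\psi[\omega]=I[\omega]+2c$ for every $\omega\in\Lm(F)$, so that $\omega_{F,\psi}=\omega_F$, while \eqref{eq:robinconst} gives
\[
c(\psi;F) = I[\omega_F]+c \quad \mbox{and} \quad \cp_\psi(F) = e^{-2c}\cp(F).
\]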
\subsubsection{Condenser Capacity} \label{sss:cc}
Let now $D$ be a domain with non-polar boundary and $g_D(\cdot,u)$ be the Green function for $D$ with pole at $u\in D$. That is, the unique function such that \begin{itemize} \item [(i)] $g_D(z,u)$ is a positive harmonic function in $D\setminus\{u\}$, which is bounded outside each neighborhood of $u$;
\item [(ii)] $\displaystyle g_D(z,u) + \left\{\begin{array}{ll}
-\log|z|, & \mbox{if}\quad u=\infty, \\
\log|z-u|, & \mbox{if}\quad u\neq\infty, \end{array}\right.$ is bounded near $u$;
\item [(iii)] $\displaystyle \lim_{z\to\xi, \; z\in D}g_D(z,u)=0$ for quasi every $\xi\in\partial D$. \end{itemize} For definiteness, we set $g_D(z,u)=0$ for any $z\in\overline\C\setminus\overline D$, $u\in D$. Thus, $g_D(z,u)$ is defined throughout the whole extended complex plane.
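For example, if $D$ is the complement of the closed unit disk, then $g_D(z,\infty)=\log|z|$, while for $D=\D$ one has
\[
g_\D(z,u) = \log\left|\frac{1-z\bar u}{z-u}\right|, \quad z,u\in\D,
\]
as both formulae are readily checked against properties (i)--(iii). The latter expression was used in the proof of Theorem~\ref{thm:convcap}.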
It is known that $g_D(z,u)=g_D(u,z)$, $z,u\in D$, and that the subset of $\partial D$ for which (iii) holds does not depend on $u$. Points of continuity of $g_D(\cdot,u)$ on $\partial D$ are called {\it regular}, other points on $\partial D$ are called irregular; the latter form an $F_\sigma$ polar set (in particular, a totally disconnected set). When $F$ is compact and non-polar, we define regular points of $F$ as points of continuity of $g_D(\cdot,\infty)$, where $D$ is the unbounded component of the complement of $F$. In particular, all the interior points of $F$ are regular, i.e., the irregular points of $F$ are contained in the outer boundary of $F$, that is, $\partial D$. We call $F$ \emph{regular} if all the points of $F$ are regular.
It is useful to notice that for a compact non-polar set $F$ the uniqueness of the Green function implies that \begin{equation} \label{eq:EqPotGreenF} g_{\overline\C\setminus F} (z,\infty) \equiv -\log\cp(F) - V^{\omega_F}(z), \quad z\in\overline\C\setminus F, \end{equation} by property (ii) in the definition of the Green function and the characterization of the equilibrium potential (see explanation before \eqref{eq:robinconst}).
In analogy to the logarithmic case, one can define the {\it Green potential} and the {\it Green energy} of a positive measure $\omega$ supported in a domain $D$ as \[ V_D^\omega(z):=\int g_D(z,u)d\omega(u) \quad \mbox{and} \quad I_D[\omega] := \iint g_D(z,w)d\omega(z)d\omega(w). \] Exactly as in the logarithmic case, if $E$ is a non-polar compact subset of $D$, there exists a unique measure $\omega_{(E,\partial D)}\in\Lm(E)$ that minimizes the Green energy among all measures in $\Lm(E)$. This measure is called the {\it Green equilibrium distribution} on $E$ relative to $D$. The \emph{condenser capacity of $E$} relative to $D$ is defined as \[ \cp(E,\partial D) := 1/I_D[\omega_{(E,\partial D)}]. \] It is known that the Green potential of the Green equilibrium distribution satisfies \begin{equation} \label{eq:GreenEqual} V_D^{\omega_{(E,\partial D)}}(z) = \frac{1}{\cp(E,\partial D)}, \quad \mbox{for q.e.} \quad z\in E. \end{equation} Moreover, the equality in (\ref{eq:GreenEqual}) holds at all the regular points of $E$. Furthermore, it is known that $\omega_{(E,\partial D)}$ is supported on the outer boundary of $E$. That is, \begin{equation} \label{eq:outerboundary} \omega_{(E,\partial D)}=\omega_{(\partial\Omega,\partial D)}, \end{equation} where $\Omega$ is the unbounded component of the complement of $E$.
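For instance, take $D=\D$ and $E=\overline\D_r$, $0<r<1$. If $\omega$ is the normalized arclength measure on the circle $|z|=r$, then integrating $g_\D(z,u)=\log|(1-z\bar u)/(z-u)|$ against $\omega$ and using the mean value property gives $V_\D^\omega(z)=\log(1/r)$ for every $z\in E$, so that $\omega=\omega_{(E,\T)}$ by the characterization of Green equilibrium measures through their potentials \cite[Thm. II.5.12]{SaffTotik}. Consequently,
\[
\cp\left(\overline\D_r,\T\right) = \frac{1}{\log(1/r)},
\]
which tends to $0$ as $r\to0$ and to $\infty$ as $r\to1$, in accordance with the intuition that the capacity of a condenser grows as its plates approach each other.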
Let $F$ be a non-polar compact set, $D$ any component of the complement of $F$, and $E$ a non-polar subset of $D$. Then we define $\omega_{(E,F)}$ and $\cp(E,F)$ as $\omega_{(E,\partial D)}$ and $\cp(E,\partial D)$, respectively. It is known that \begin{equation} \label{eq:exchange} \cp(E,F)=\cp(F,E), \end{equation} where $F$ and $E$ are two disjoint compact sets with connected complements. That is, the condenser capacity is symmetric with respect to its entries and only the outer boundary of a compact plays a role in calculating the condenser capacity.
As in the logarithmic case, the Green equilibrium measure can be characterized by the properties of its potential. Namely, if $\omega$ has finite Green energy, $\supp(\omega)\subseteq E$, $V_D^\omega$ is constant q.e. on $\supp(\omega)$ and is at least as large as this constant q.e. on $E$, then $\omega=\omega_{(E,\partial D)}$ \cite[Thm. II.5.12]{SaffTotik}. Using this characterization and the conformal invariance of the Green function, one can see that the condenser capacity is also conformally invariant. In other words, it holds that \begin{equation} \label{eq:coninv} \cp(E,\partial D) = \cp(\phi(E),\partial\phi(D)), \end{equation} where $\phi$ is a conformal map of $D$ onto its image.
\subsection{Balayage} \label{ss:balayage}
In this section we introduce the notion of balayage of a measure and describe some of its properties.
\subsubsection{Harmonic Measure} Let $D$ be a domain with compact boundary $\partial D$ of positive capacity and $\{\omega_z\}_{z\in D}$, be the harmonic measure for $D$. That is, $\{\omega_z\}_{z\in D}$ is the collection of probability Borel measures on $\partial D$ such that for any bounded Borel function $f$ on $\partial D$ the function \[ P_Df(z) := \int fd\omega_z, \quad z\in D, \] is harmonic \cite[Thm. 4.3.3]{Ransford} and $\lim_{z\to x}P_Df(z) = f(x)$ for any regular point $x\in\partial D$ at which $f$ is continuous \cite[Thm. 4.1.5]{Ransford}.
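For the unit disk this construction is explicit: $\omega_z$ is the Poisson measure, $d\omega_z(e^{i\theta})=\frac{1}{2\pi}\frac{1-|z|^2}{|e^{i\theta}-z|^2}\,d\theta$. The following sketch (added for illustration) verifies numerically that $P_Df$ then reproduces a function whose harmonic extension is known in closed form:

```python
import cmath
import math

def poisson_extension(f, z, N=512):
    # P_D f(z) = (1/2pi) * integral of f(e^{it}) (1 - |z|^2)/|e^{it} - z|^2 dt
    # (harmonic measure of the unit disk = Poisson kernel)
    total = 0.0
    for k in range(N):
        u = cmath.exp(2j * math.pi * k / N)
        total += f(u) * (1 - abs(z) ** 2) / abs(u - z) ** 2
    return total / N

f = lambda u: (u ** 2).real   # boundary trace of the harmonic function Re(z^2)
z = 0.3 + 0.4j
Pf = poisson_extension(f, z)
print(Pf, (z ** 2).real)
```

The computed value matches $\mathrm{Re}(z^2)$, as it must, since $P_Df$ is the unique bounded harmonic function with these boundary values.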
The generalized minimum principle \cite[Thm. I.2.4]{SaffTotik} says that if $u$ is superharmonic, bounded below, and $\liminf_{z\to x,z\in D}u(z)\geq m$ for q.e. $x\in\partial D$, then $u>m$ in $D$ unless $u$ is a constant. This immediately implies that \begin{equation} \label{eq:equalharmonic} P_Dh=h \end{equation} for any $h$ which is bounded and harmonic in $D$ and extends continuously to q.e. point of $\partial D$.
For $z\in\C$ and $z\neq w\in D\setminus\{\infty\}$, set \begin{equation} \label{eq:funhd} h_D(z,w) := \left\{ \begin{array}{l}
\log|z-w|+g_D(z,w), \quad \mbox{if} ~ D ~ \mbox{is bounded},
\\
\log|z-w|+g_D(z,w)-g_D(z,\infty)-g_D(w,\infty), \quad \mbox{otherwise}. \end{array} \right. \end{equation}
Observe that by the properties of the Green function, $h_D(z,\cdot)$ is harmonic at $z$. Moreover, it can be computed using \eqref{eq:EqPotGreenF} that $\lim_{|w|\to\infty}h_D(z,w)=\log\cp(\partial D)$ when $D$ is unbounded. Therefore, $h_D(z,w)$ is defined for all $w\in D$ and $z\in\C\cup D$. Moreover, for each $w\in D$, the function $h_D(\cdot,w)$ is bounded and harmonic in $D$ and extends continuously to every regular point of $\partial D$. It is also easy to see that $h_D(z,w)=h_D(w,z)$ for $z,w\in D$. Hence, we deduce from \eqref{eq:equalharmonic} that \begin{equation} \label{eq:harmoniclog} h_D(z,w) = \left\{ \begin{array}{l}
P_D(\log|z-\cdot|)(w), \quad \mbox{if} ~ D ~ \mbox{is bounded},
\\
P_D(\log|z-\cdot|-g_D(z,\infty))(w), \quad \mbox{otherwise}, \end{array} \right. \end{equation} $z\in(\C\cup D)\setminus\partial D$, for $w\in D$ and all regular $w\in\partial D$.
\subsubsection{Balayage} \label{dubalai} Let $\nu$ be a finite Borel measure supported in $D$. The \emph{balayage} of $\nu$, denoted by $\widehat\nu$, is a Borel measure on $\partial D$ defined by \begin{equation} \label{eq:balayage} \widehat\nu(B) := \int\omega_t(B)d\nu(t) \end{equation} for any Borel set $B\subset\partial D$. Since $\omega_z(\partial D)=1$, the total mass of $\widehat\nu$ is equal to the total mass of $\nu$. Moreover, it follows immediately from \eqref{eq:balayage} that $\widehat\delta_z=\omega_z$, $z\in D$. In particular, if $D$ is unbounded, $\widehat\delta_\infty=\omega_\infty=\omega_{\partial D}$ (for the last equality see \cite[Thm. 4.3.14]{Ransford}). In other words, $\widehat\delta_\infty$ is the logarithmic equilibrium distribution on $\partial D$.
It is a straightforward consequence of \eqref{eq:balayage} that \begin{equation} \label{eq:sweep} \int fd\widehat\nu = \int P_Dfd\nu \end{equation} for any bounded Borel function $f$ on $\partial D$. Thus, we can conclude from \eqref{eq:equalharmonic} and \eqref{eq:sweep} that \begin{equation} \label{balayageh} \int hd\widehat\nu=\int hd\nu \end{equation} for any function $h$ which is bounded and harmonic in $D$ and extends continuously to q.e. point of $\partial D$.
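For instance (an added illustration): for $D=\D$ and $\nu=\delta_a$, $a\in\D$, the balayage $\widehat\delta_a=\omega_a$ is the Poisson measure at $a$, and applying \eqref{balayageh} to $h(u)=\log|z-u|$, which is bounded and harmonic in $\D$ for fixed $|z|>1$, gives $V^{\widehat\delta_a}(z)=-\log|z-a|$ outside the closed disk. A numerical sketch of this identity:

```python
import cmath
import math

def balayage_potential(a, z, N=1024):
    # logarithmic potential of the balayage of delta_a onto the unit circle,
    # i.e. of the Poisson measure d w_a = (1/2pi)(1 - |a|^2)/|e^{it} - a|^2 dt
    total = 0.0
    for k in range(N):
        u = cmath.exp(2j * math.pi * k / N)
        total += -math.log(abs(z - u)) * (1 - abs(a) ** 2) / abs(u - a) ** 2
    return total / N

a, z = 0.5 + 0.2j, 2.0 - 1.0j   # a inside the disk, z outside
pot = balayage_potential(a, z)
print(pot, -math.log(abs(z - a)))
```

The two printed values coincide, illustrating that the potentials of $\nu$ and $\widehat\nu$ agree off $\overline D$.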
Assume now that $x\in\partial D$ is a regular point and $W$ an open neighborhood of $x$ in $\partial D$. Let further $f\geq0$ be a continuous function on $\partial D$ which is supported in $W$ and such that $f(x)>0$. Since $P_Df(z)\to f(x)$ when $D\ni z\to x$, we see from \eqref{eq:sweep} that $\widehat\nu(W)>0$. In particular, $\partial D\setminus\supp(\widehat\nu)$ is polar.
Let $D^\prime$ be a domain with non-polar compact boundary such that $\overline D\subset D^\prime$ and let $\{\omega_z^\prime\}_{z\in D^\prime}$ be the harmonic measure for $D^\prime$. For any Borel set $B\subset\partial D^\prime$ it holds that $\omega_z^\prime(B)$ is a harmonic function in $D$ with continuous boundary values on $\partial D$. Thus, \[ \int\omega_z^\prime(B)d\widehat\nu(z) = \int\omega_z^\prime(B)d\nu(z) \] by \eqref{balayageh}. This immediately implies that \begin{equation} \label{eq:balinsteps} \widetilde\nu=\widetilde{\widehat\nu}, \end{equation} where $\widetilde\nu$ is the balayage of $\nu$ onto $\partial D^\prime$. In other words, balayage can be done step by step.
\subsubsection{Balayage and Potentials} \label{BP} It readily follows from \eqref{eq:funhd}, \eqref{eq:harmoniclog}, and \eqref{balayageh} that \begin{equation} \label{eq:inthd} \int h_D(z,w)d\nu(w) = \left\{ \begin{array}{l} -V^{\widehat\nu}(z), \quad \mbox{if} ~ D ~ \mbox{is bounded},
\\ -V^{\widehat\nu}(z)-g_D(z,\infty), \quad \mbox{otherwise}, \end{array} \right. \quad z\in (\C\cup D)\setminus\partial D. \end{equation} Clearly, the left-hand side of \eqref{eq:inthd} extends continuously to q.e. $z\in\partial D$. Thus, the same is true for the right-hand side. In particular, this means that $V^{\widehat\nu}$ is bounded on $\partial D$ and continuous q.e. on $\partial D$. Hence, $\widehat\nu$ has finite energy.
In the case when $\nu$ is compactly supported in $D$, formula \eqref{eq:inthd} has even more useful consequences. Namely, it holds that \begin{equation} \label{eq:toRemind} V_D^\nu(z) = V^{\nu-\widehat\nu}(z) + c(\nu;D), \quad z\in\overline\C, \end{equation} where $c(\nu;D)=\int g_D(z,\infty)d\nu(z)$ if $D$ is unbounded and $c(\nu;D)=0$ otherwise, and where we used a continuity argument to extend \eqref{eq:toRemind} to every $z\in\overline\C$. This, in turn, yields that \begin{equation} \label{eq:equalBal} V^{\widehat\nu}(z) = V^\nu(z)+c(\nu;D) \quad \mbox{for q.e.} \quad z\in\C\setminus D, \end{equation} where equality holds for all $z\in\C\setminus\overline D$ and also at all regular points of $\partial D$. Moreover, employing the characterization of weighted equilibrium measures, we obtain from \eqref{eq:equalBal} that \begin{equation} \label{eq:weqpot} \widehat\nu = \omega_{\partial D,-V^\nu} \quad \mbox{and} \quad c(-V^\nu;\partial D) = c(\nu;D). \end{equation}
If a measure $\nu$ is not compactly supported, the logarithmic potential of $\nu$ may not be defined. However, representations similar to \eqref{eq:toRemind}--\eqref{eq:weqpot} can be obtained using the spherical logarithmic potentials. Indeed, it follows from \eqref{eq:inthd} that \begin{eqnarray}
(V_*^\nu-V^{\widehat\nu} - V_D^\nu)(z) &=& \int \left[k(z,u)+\log|z-u|-g_D(u,\infty)\right]d\nu(u) \nonumber \\
{} &=& \int_{|u|>1}\left[\log|u|-g_D(u,\infty)\right]d\nu(u)-\int_{|u|\leq1}g_D(u,\infty)d\nu(u). \nonumber \end{eqnarray} As the right-hand side of the chain of the equalities above is a finite constant and $V_D^\nu$ vanishes quasi everywhere on $\partial D$, we deduce as in \eqref{eq:toRemind}--\eqref{eq:weqpot} that this constant is $-c(-V_*^\nu;\partial D)$ and that \begin{equation} \label{eq:weqspot} \widehat\nu = \omega_{\partial D,-V_*^\nu}. \end{equation} Moreover, it holds that \begin{equation} \label{eq:toRemind1} V_D^\nu(z) = V_*^\nu(z) - V^{\widehat\nu}(z) + c(-V_*^\nu;\partial D), \quad z\in\overline\C. \end{equation}
Let now $D$ be a bounded domain and $K$ be a compact non-polar subset of $D$. If $E\subseteq K$ is also non-polar and compact, then \begin{equation} \label{eq:energies}
|I_D[\omega_E]-I[\omega_E]| \leq \max_{z\in K,u\in\partial D}|\log|z-u|| \end{equation} by integrating both sides of \eqref{eq:toRemind} against $\omega_E$ with $\nu=\omega_E$. This, in particular, yields that \begin{equation} \label{eq:capgreencap}
\left|\frac{1}{\cp(E,\partial D)}+\log\cp(E)\right| \leq \max_{z\in K,u\in\partial D}|\log|z-u||. \end{equation}
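For concentric disks, \eqref{eq:capgreencap} can be verified in closed form (a numerical illustration added here, assuming the standard Green function $g(z,u)=\log|R^2-z\bar u|-\log R-\log|z-u|$ of the disk $|z|<R$): with $E=K=\overline\D_r$ inside $D=\{|z|<R\}$, one has $1/\cp(E,\partial D)=\log(R/r)$ and $\cp(E)=r$, so the left-hand side equals $|\log R|$:

```python
import cmath
import math

def green_disk_R(z, u, R):
    # Green function of |z| < R: g(z, u) = log|R^2 - z*conj(u)| - log R - log|z - u|
    return math.log(abs(R * R - z * u.conjugate())) - math.log(R) - math.log(abs(z - u))

R, r, N = 2.0, 0.5, 2000
nodes = [r * cmath.exp(2j * math.pi * k / N) for k in range(N)]

# Green equilibrium potential of E = {|z| <= r} evaluated at the center
# equals 1/cp(E, bd D) = log(R/r)
inv_cp = sum(green_disk_R(0j, u, R) for u in nodes) / N
lhs = abs(inv_cp + math.log(r))                    # |1/cp(E, bd D) + log cp(E)|
rhs = max(abs(math.log(R - r)), math.log(R + r))   # max over |z|<=r, |u|=R of |log|z-u||
print(inv_cp, math.log(R / r), lhs, rhs)
```

Here the bound holds with room to spare: $|\log 2|\approx0.69$ against $\log 2.5\approx0.92$.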
\subsubsection{Weighted Capacity in the Field $-U^\nu$} \label{sss:wcf}
Let $\nu$ be a probability Borel measure supported in $\overline\D$, $K\subset\D_r$, $r<1$, be a compact non-polar set, and $D$ be the unbounded component of the complement of $K$. Further, let $U^\nu(z)=-\int\log|1-z\bar u|d\nu(u)$ as defined in \eqref{eq:unu}. It is immediate to see that $U^\nu=V_*^{\nu^*}$, where, as usual, $\nu^*$ is the reflection of $\nu$ across $\T$. In particular, it follows from \eqref{eq:weqspot}, \eqref{eq:toRemind1}, and the characterization of the weighted equilibrium distribution that \begin{equation} \label{eq:whnustar} \widehat\nu^* = \omega_{K,-U^\nu}, \end{equation} where $\widehat\nu^*$ is the balayage of $\nu^*$ onto $\partial D$ relative to $D$. Thus, $\omega_{K,-U^\nu}$ is supported on the outer boundary of $K$ and remains the same for all sets whose outer boundaries coincide up to a polar set. In another connection, it holds that \[
U^\nu(z) = -\int\log|1-z/u|d\nu^*(u) = -\int\log|1-z/u|d\widetilde\nu^*(u) = V^{\widetilde\nu^*}(z) - V^{\widetilde\nu^*}(0) \]
for any $z\in\D_r$ by \eqref{balayageh} and harmonicity of $\log|1-z/u|$ as a function of $u\in\D_r^*$, where $\widetilde\nu^*$ is the balayage of $\nu^*$ onto $\T_r$. It is also true that $\widehat\nu^* = \widehat{\widetilde{\nu^*}}$ by \eqref{eq:balinsteps}. Thus, \[ I_\nu[K] = I[\widehat\nu^*] - 2\int V^{\widetilde\nu^*}d\widehat\nu^* + 2V^{\widetilde\nu^*}(0), \] where $I_\nu[K]$ was defined\footnote{In \eqref{eq:wcap} we slightly changed the notation comparing to Section~\ref{sss:wc}. Clearly, $I_\nu[\cdot]$ and $\cp_\nu(\cdot)$ should be $I_{-U^\nu}[\cdot]$ and $\cp_{-U^\nu}(\cdot)$. Even though this change is slightly ambiguous, it greatly alleviates the notation throughout the paper.} in \eqref{eq:wcap}. Using the harmonicity of $V^{\widehat\nu^*}+g_D(\cdot,\infty)$ in $D$ and continuity at regular points of $\partial D$, \eqref{balayageh}, the Fubini-Tonelli theorem, and \eqref{eq:toRemind}, we obtain that \begin{eqnarray} I_\nu[K] &=& \int\left(V^{\widehat\nu^*}(z) + g_D(z,\infty)\right)d\widehat\nu^*(z) - 2\int V^{\widehat\nu^*}d\widetilde\nu^* + 2V^{\widetilde\nu^*}(0) \nonumber \\ {} &=& \int\left(V_D^{\widetilde\nu^*}(z) - V^{\widetilde\nu^*}(z) - c(\widetilde\nu^*;D) + g_D(z,\infty)\right)d\widetilde\nu^*(z) + 2V^{\widetilde\nu^*}(0) \nonumber \\ \label{eq:weightgreen} {} &=& I_D[\widetilde\nu^*] - I[\widetilde\nu^*] + 2V^{\widetilde\nu^*}(0). \end{eqnarray} Equation \eqref{eq:weightgreen}, in particular, means that the problem of maximizing $I_\nu[\cdot]$ among the sets in $\D_r$ is equivalent to the problem of maximizing the Green energy of $\widetilde\nu^*$ among the domains with boundary in $\D_r$.
\subsubsection{Weak$^*$ Convergence and Convergence in Capacity} \label{sss:wsccic}
By a theorem of F. Riesz, the space of complex continuous functions on $\overline{\C}$, endowed with the {\it sup} norm, has as its dual the space of complex measures on $\overline{\C}$, normed by the total variation mass (the so-called strong topology for measures). We say that a sequence of Borel measures $\{\omega_n\}$ on $\overline{\C}$ converges \emph{weak$^*$} to a Borel measure $\omega$ if $\int fd\omega_n\to\int fd\omega$ for every complex continuous function $f$ on $\overline{\C}$. By the Banach-Alaoglu theorem, any bounded sequence of measures has a subsequence that converges in the weak$^*$ sense. Conversely, by the Banach-Steinhaus theorem, a weak$^*$ convergent sequence is bounded.
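A standard example separating the two topologies (added for illustration): the point masses $\delta_{1/n}$ converge weak$^*$ to $\delta_0$, since $f(1/n)\to f(0)$ for every continuous $f$, yet $\|\delta_{1/n}-\delta_0\|=2$ in total variation for every $n$, so there is no strong convergence. Their logarithmic potentials nevertheless converge, but only away from $\supp(\delta_0)=\{0\}$:

```python
import math

# weak* convergence: integrals of a fixed continuous test function converge
f = lambda x: math.cos(x) + x ** 2
gaps = [abs(f(1.0 / n) - f(0.0)) for n in (1, 10, 100, 1000)]

# the potentials -log|z - 1/n| -> -log|z| pointwise for z != 0
z = 0.5
pot_gap = abs(-math.log(abs(z - 1e-3)) + math.log(abs(z)))
print(gaps, pot_gap)
```

This is exactly the phenomenon addressed by Lemma \ref{lem:cwssimcic} below: weak$^*$ convergence yields convergence of potentials in capacity only outside the support of the limit measure.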
We shall denote weak$^*$ convergence by the symbol $\cws$. Weak$^*$ convergence of measures implies some convergence properties of logarithmic and spherical logarithmic potentials, which we mention below.
The following statement is known as the \emph{Principle of Descent} \cite[Thm. I.6.8]{SaffTotik}. Let $\{\omega_n\}$ be a sequence of probability measures all having support in a fixed compact set. Suppose that $\omega_n\cws\omega$ and $z_n\to z$, $z_n,z\in\C$. Then \[ V^\omega(z) \leq \liminf_{n\to\infty}V^{\omega_n}(z_n) \quad \mbox{and} \quad I[\omega] \leq \liminf_{n\to\infty}I[\omega_n]. \] Weak$^*$ convergence of measures entails some convergence in capacity of their spherical potentials. This is stated rather informally in \cite[Sec. 3 and 4]{GRakh87}, but the result is slightly subtle because, as examples show, convergence in capacity generally occurs outside the support of the limiting measure only. A precise statement is as follows.
\begin{lem} \label{lem:cwssimcic} Let $\{\omega_n\}$ be a sequence of positive Borel measures such that $\omega_n\cws\omega$. Then $V_*^{\omega_n}\cic V_*^\omega$ in $\overline\C\setminus\supp(\omega)$. In particular, if $\omega$ is the zero measure, then the spherically normalized potentials $V_*^{\omega_n}$ converge to zero in capacity in the whole extended complex plane. \end{lem} \begin{proof} Suppose first that $\omega_n$ converges weak$^*$ to the zero measure. Then the convergence is actually strong. Assume moreover that the measures $\omega_n$ are supported on a fixed compact set $K\subset\C$. Let $G$ be a simply connected domain that contains $K$, $L$ be a Jordan curve that contains the closure of $G$ in its interior, and $D$ be a bounded simply connected domain that contains $L$. Fix $\epsilon>0$ and define $E_n:=\{z\in D:~V_D^{\omega_n}(z)>\epsilon\}$. By superharmonicity of $V_D^{\omega_n}$ the set $E_n$ is open, and we can assume $E_n\subset G$ by taking $n$ large enough. If $E_n$ is empty then $\cp(E_n)=0$; otherwise let $E\subset E_n$ be a non-polar compact set. Then the Green equilibrium potential $V_D^{\omega_{(E,\partial D)}}$ is bounded above by $1/\cp(E,\partial D)$ \cite[Thm. 5.11]{SaffTotik}, which is finite. Hence $h:=V_D^{\omega_n}-\epsilon\cp(E,\partial D) V_D^{\omega_{(E,\partial D)}}$ is superharmonic and bounded below in $D\setminus E$, with $\liminf h(z)\geq0$ as $z$ tends to $\partial E\cup\partial D$. By the minimum principle, we thus have \[ V_D^{\omega_n} \geq \epsilon\cp(E,\partial D) V_D^{\omega_{(E,\partial D)}} \quad \mbox{in} \quad D\setminus E. \] Set \[ m := \min_{u\in\overline G}\min_{z\in L}g_D(z,u)>0. \] Clearly, $V_D^{\omega_{(E,\partial D)}}(z)>m$, $z\in L$, thus \[ V_D^{\omega_n} \geq \epsilon m\cp(E,\partial D) \quad \mbox{on} \quad L. \] Hence, in view of \eqref{eq:capgreencap} applied with $K=\overline{G}$, we get \[ -\log\cp(E_n)=-\sup_{E\subset E_n}\log\cp(E) \geq \frac{\epsilon m}{\sup_L V_D^{\omega_n}}-C \]
where $C$ is independent of $n$. Using the uniform convergence to 0 of $V_D^{\omega_n}$ on $L$, we get that $\cp(E_n)\to0$ as $n\to\infty$, that is, $V_D^{\omega_n}\cic0$ in $D$. Let, as usual, $\widehat\omega_n$ be the balayage of $\omega_n$ onto $\partial D$. Since $|\widehat\omega_n|=|\omega_n|\to0$ as $n\to\infty$, we have that $V^{\widehat\omega_n}\to0$ locally uniformly in $D$. Combining this fact with \eqref{eq:toRemind}, we get that $V^{\omega_n}\cic0$ in $D$. Let $u$ be an arbitrary point in $G$. Then $\{V^{\omega_n}+|\omega_n|\log|\cdot-u|\}$ is a sequence of harmonic functions in $\overline\C\setminus G$. It is easy to see that this sequence converges uniformly to 0 there. As $|\omega_n|\log|\cdot-u|\cic0$ in $\overline\C$, we deduce that $V^{\omega_n}\cic0$ in the whole extended complex plane and so does $V_*^{\omega_n}=V^{\omega_n}+\int\log^+|u|d\omega_n$ ({\it cf.} \eqref{eq:sphpot}) since $\supp(\omega_n)\subset K$.
Next, let $\{\omega_n\}$ be an arbitrary sequence of positive measures that converges to the zero measure. As the restriction ${\omega_n}_{|{\overline{\D}}}$ converges to zero, we may assume by the first part of the proof that $\supp(\omega_n)\subset \overline{\Om}$. It can be easily seen from the definition of the spherical potential \eqref{eq:sphpot} that \begin{equation} \label{refsper}
V_*^{\omega_n}(1/z) = V_*^{\tilde\omega_n}(z)+|\omega_n|\log|z|, \quad z\in\C\setminus\{0\}, \end{equation} where $\tilde\omega_n$ is the reciprocal measure of $\omega_n$, i.e., $\tilde\omega_n(B)=\omega_n(\{z:1/z\in B\})$ for any Borel set $B$.
Clearly $\tilde\omega_n\to0$ and $\supp(\tilde\omega_n)\subset\overline{\D}$, thus from the first part of the proof we get $V_*^{\tilde\omega_n}\cic0$. Since $|\omega_n|\to0$, we also see by inspection that
$|\omega_n|\log|z|\cic0$. Therefore, by \eqref{refsper}, we obtain that $V_*^{\omega_n}(1/z)\cic0$ which is equivalent to $V_*^{\omega_n}\cic0$.
Let now $\{\omega_n\}$ be a sequence of positive measures converging weak$^*$ to some Borel measure $\omega\neq0$. If $\supp(\omega)=\overline{\C}$, there is nothing to prove. Otherwise, for each $\varepsilon>0$ we set $F_\varepsilon:=\{z\in\overline{\C}:~d_c(z,\supp(\omega))\geq\varepsilon\}$, where $d_c$ is the chordal distance on the Riemann sphere. Pick a continuous function $f$, with $0\leq f\leq1$, which is identically $1$ on $F_\varepsilon$ and supported in $F_{\varepsilon/2}$. By the positivity of $\omega_n$ and its weak$^*$ convergence to $\omega$, we get \[ 0\leq \lim_{n\to+\infty}\omega_n(F_\varepsilon) \leq \lim_{n\to+\infty}\int f\,d\omega_n=\int f \,d\omega =0. \]
From this, it follows easily that if $\varepsilon_n\to0$ slowly enough, then the restriction $\omega_n^1:={\omega_n}_{|F_{\varepsilon_n}}$ converges strongly to the zero measure. Therefore $V_*^{\omega^1_n}\cic0$ in $\overline\C$ by the previous part of the proof. Now, put $\omega^2_n:=\omega_n-\omega_n^1={\omega_n}_{|\overline{\C}\setminus F_{\varepsilon_n}}$. For fixed $z\in \C\setminus\supp(\omega)$, the function $k(z,u)$ from \eqref{eq:sphpot} is continuous on a neighborhood of $\overline{\C}\setminus F_{\varepsilon_n}$ for all $n$ large enough. Redefining $k(z,u)$ near $z$ to make it continuous does not change its integral against $\omega$
nor $\omega_n^2$, therefore $V_*^\omega(z)-V_*^{\omega_n^2}(z)\to0$ as $n\to+\infty$ since $\omega^2_n\cws\omega$. Moreover, it is straightforward to check from the boundedness of $|\omega^2_n|$ that the convergence is locally uniform with respect to $z\in\C$. Finally, if $\supp(\omega)$ is bounded, we observe that when $z\to\infty$ \[
V_*^\omega(z)-V_*^{\omega_n^2}(z)\sim\log|z|\left(\omega_n^2(\overline{\C}) - \omega(\overline{\C})\right)+\int\log^+|u|\,d\omega - \int \log^+|u| \,d\omega_n^2 \] which goes to zero in capacity since $\omega_n^2(\overline{\C})\to \omega(\overline{\C})$ and $\log^+$ is continuous in a neighborhood of both $\supp(\omega)$ and $\supp(\omega^2_n)$ for $n$ large enough. This finishes the proof of the lemma. \end{proof}
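The reflection identity \eqref{refsper} used above can be checked directly on a unit point mass (an added illustration, using $V_*^\nu=V^\nu+\int\log^+|u|\,d\nu$, {\it cf.} \eqref{eq:sphpot}): for $\omega=\delta_a$ the reciprocal measure is $\delta_{1/a}$ and $|\omega|=1$, so both sides are elementary functions of $z$:

```python
import math

def Vstar_point(a, z):
    # spherical potential of the unit point mass at a: V_*(z) = -log|z - a| + log^+|a|
    return -math.log(abs(z - a)) + max(math.log(abs(a)), 0.0)

# check V_*^{delta_a}(1/z) = V_*^{delta_{1/a}}(z) + log|z| at several sample points
checks = []
for a in (2.0 + 1.0j, 0.3 - 0.4j):
    for z in (0.7 + 0.2j, 1.5 - 2.0j):
        lhs = Vstar_point(a, 1.0 / z)
        rhs = Vstar_point(1.0 / a, z) + math.log(abs(z))
        checks.append(abs(lhs - rhs))
print(checks)
```

All discrepancies are at the level of floating-point rounding, confirming \eqref{refsper} for point masses; the general case follows by integration.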
The following lemma is needed for the proof of Theorem~\ref{thm:convcap}.
\begin{lem} \label{lem:cic}
Let $D$ be a domain in $\overline\C$ and $\{A_n\}$ be a sequence of holomorphic functions in $D$ such that $|A_n|^{1/n}\cic c$ in $D$ as $n\to\infty$ for some constant $c$. Then $\limsup_{n\to\infty}|A_n|^{1/n} \leq c$ uniformly on closed subsets of $D$. \end{lem} \begin{proof}
By the maximum principle, it is enough to consider only compact subsets of $D$ and therefore it is sufficient to consider closed disks. Let $z\in D$ and $x>0$ be such that the closure of $D_{3x}:=\{w:|w-z|<3x\}$ is contained in $D$. We shall show that $\limsup_{n\to\infty}\|A_n\|_{\overline D_x}^{1/n}\leq c$.
Fix $\epsilon>0$. As $|A_n|^{1/n}\cic c$ on $\overline D_{3x}\setminus D_{2x}$, there exists $y_n\in(2x,3x)$ such that $|A_n|^{1/n}\leq c+\epsilon$ on $L_n:=\{w:|w-z|=y_n\}$ for all $n$ large enough. Indeed, define $S_n:=\{w\in \overline D_{3x}\setminus D_{2x}:||A_n(w)|^{1/n}-c|>\epsilon\}$. By the definition of convergence in capacity, we have that $\cp(S_n)\to0$ as $n\to\infty$. Further, define $S_n^\prime:=\{|w-z|:w\in S_n\}\subset[2x,3x]$. Since the mapping $w\mapsto|w-z|$ is contractive, $\cp(S_n^\prime)\leq\cp(S_n)$ by \cite[Thm. 5.3.1]{Ransford} and therefore $\cp(S_n^\prime)\to0$ as $n\to\infty$. The latter {\it a fortiori} implies that $|S_n^\prime|\to0$ as $n\to\infty$ by \cite[Thm. 5.3.2(c)]{Ransford}, where $|S_n^\prime|$ is the Lebesgue measure of $S_n^\prime$. Thus, $y_n$ with the claimed properties always exists for all $n$ large enough. Using the Cauchy integral formula, we get that \[
\|A_n\|_{\overline D_x}^{1/n} \leq \left(\frac{\max_{L_n}|A_n|}{2\pi x}\right)^{1/n} \leq \frac{c+\epsilon}{\sqrt[n]{2\pi x}}. \] As $x$ is fixed and $\epsilon$ is arbitrary, the claim of the lemma follows. \end{proof}
\subsection{Green Potentials} \label{ss:gp} In this section, we prove some facts about Green potentials that we used throughout the paper. We start from the following useful fact.
Let $D_1$ and $D_2$ be two domains with non-polar boundary and $\omega$ be a Borel measure supported in $D_1\cap D_2$. \emph{Then $V_{D_1}^\omega-V_{D_2}^\omega$ is harmonic in $D_1\cap D_2$.}
Clearly, the claim needs to be verified only on the support of $\omega$, since off $\supp(\omega)$ both Green potentials are harmonic. Moreover, by the conformal invariance of Green potentials, it suffices to consider measures with compact support. Denote by $\widehat\omega$ and $\widetilde\omega$ the balayages of $\omega$ onto $\partial D_1$ and $\partial D_2$, respectively. Since $V_{D_1}^\omega=V^{\omega-\widehat\omega}+c(\omega;D_1)$ and $V_{D_2}^\omega=V^{\omega-\widetilde\omega}+c(\omega;D_2)$ by \eqref{eq:toRemind} and $V^{\widehat\omega}$ and $V^{\widetilde\omega}$ are harmonic on $\supp(\omega)$, it follows that $V_{D_1}^\omega-V_{D_2}^\omega=V^{\widetilde\omega-\widehat\omega}+c(\omega;D_1)-c(\omega;D_2)$ is also harmonic there.
\subsubsection{Normal derivatives} Throughout this section, $\partial/\partial\n_i$ (resp. $\partial/\partial\n_o$) will stand for the partial derivative with respect to the inner (resp. outer) normal on the corresponding curve.
\begin{lem} \label{lem:nder0} Let $L$ be a $C^1$-smooth Jordan curve in a domain $D$ and $V$ be a continuous function in $D$. If $V$ is harmonic in $D\setminus L$, extends continuously to the zero function on $\partial D$ and to $C^1$-smooth functions on each side of $L$, then $V=-V_D^\sigma$, where $\sigma$ is a signed Borel measure on $L$ given by \[ d\sigma = \frac{1}{2\pi}\left(\frac{\partial V}{\partial\n_i}+\frac{\partial V}{\partial\n_o}\right)ds \] and $ds$ is the arclength differential on $L$. \end{lem} \begin{proof} As discussed just before this section, the distributional Laplacian of $-V_D^\sigma$ in $D$ is equal to $2\pi\sigma$. Thus, according to Weyl's Lemma and the fact that $V=V_D^\sigma\equiv0$ on $\partial D$, we only need to show that $\Delta V=2\pi\sigma$. By the very definition of the distributional Laplacian, it holds that \begin{equation} \label{eq:weakL1} \iint_D \phi\Delta V dm_2 = \iint_D V\Delta\phi dm_2 = \iint_{O}V\Delta\phi dm_2 + \iint_{D\setminus\overline O} V\Delta\phi dm_2, \end{equation} for any infinitely smooth function $\phi$ compactly supported in $D$, where $O$ is the interior domain of $L$ and $dm_2$ is the area measure. According to Green's formula (see \eqref{eq:greenformula} further below) it holds that \begin{equation} \label{eq:weakL2} \iint_{O}V\Delta\phi dm_2 = \iint_{O}\Delta V\phi dm_2 + \int_L\left(\phi\frac{\partial V}{\partial\n_i}-V\frac{\partial\phi}{\partial\n_i}\right)ds = \int_L\left(\phi\frac{\partial V}{\partial\n_i}-V\frac{\partial\phi}{\partial\n_i}\right)ds \end{equation} as $V$ is harmonic in $O$. Analogously, we get that \begin{equation} \label{eq:weakL3} \iint_{D\setminus\overline O} V\Delta\phi dm_2 = \int_L\left(\phi\frac{\partial V}{\partial\n_o}-V\frac{\partial\phi}{\partial\n_o}\right)ds, \end{equation} where we also used the fact that $\phi\equiv0$ in some neighborhood of $\partial D$. 
Combining \eqref{eq:weakL2} and \eqref{eq:weakL3} with \eqref{eq:weakL1} and observing that $\partial\phi/\partial\n_i=-\partial\phi/\partial\n_o$ yields \[ \iint_D \phi\Delta V dm_2 = \int_L \phi\left(\frac{\partial V}{\partial\n_i}+\frac{\partial V}{\partial\n_o}\right) ds = 2\pi\int_L\phi d\sigma. \] That is, $\Delta V=2\pi\sigma$, which finishes the proof of the lemma. \end{proof}
\begin{lem} \label{lem:nder1} Let $F$ be a regular compact set and $G$ a simply connected neighborhood of $F$. Let also $V$ be a continuous function in $G$ that is harmonic in $G\setminus F$ and is identically zero on $F$. If $L$ is an analytic Jordan curve in $G$ such that $V\equiv\delta>0$ on $L$, then \[ \frac{1}{2\pi}\int_L \frac{\partial V}{\partial\n_i} ds = -\delta\cp(F\cap\Omega,L), \] where $\Omega$ is the inner domain of $L$. \end{lem} \begin{proof} It follows immediately from the maximum principle for harmonic functions, applied in $\Omega\setminus F$, that $V = \delta\cp(F\cap\Omega,L) V_D^\omega$ in $\overline\Omega$, where $D:=\overline\C\setminus(F\cap\Omega)$ and $\omega:=\omega_{(L,F\cap\Omega)}$. Thus, it is sufficient to show that \begin{equation} \label{eq:nder4} \frac{1}{2\pi}\int_L \frac{\partial V_D^\omega}{\partial\n_i} ds = -1. \end{equation} Observe that $V_D^\omega$ can be reflected harmonically across $L$ by the assumption on $V$ and therefore normal inner derivative of $V_D^\omega$ does exist at each point of $L$. According to \eqref{eq:toRemind}, it holds that \begin{equation} \label{eq:nder0} \frac{1}{2\pi}\int_L \frac{\partial V_D^\omega}{\partial\n_i} ds = \frac{1}{2\pi}\int_L \frac{\partial V^{\omega-\widehat\omega}}{\partial\n_i} ds, \end{equation} where $\widehat\omega$ is the balayage of $\omega$ onto $F\cap\Omega$. By Gauss' theorem \cite[Thm. II.1.1]{SaffTotik}, it is true that \begin{equation} \label{eq:nder1} \frac{1}{2\pi}\int_L \frac{\partial V^{\widehat\omega}}{\partial\n_i} ds = \widehat\omega(\Omega) = \widehat\omega(F\cap\Omega) = 1. 
\end{equation} Since $V_D^\omega\equiv1/\cp(L,F\cap\Omega)$ outside of $\Omega$ and $V^{\widehat\omega}$ is harmonic across $L$, we get from \eqref{eq:nder1} and the analog of \eqref{eq:nder0} with $\partial\n_i$ replaced by $\partial\n_o$, that \begin{equation} \label{eq:nder2} \frac{1}{2\pi}\int_L \frac{\partial V^\omega}{\partial\n_o} ds = \frac{1}{2\pi}\int_L \frac{\partial V^{\widehat\omega}}{\partial\n_o} ds = -\frac{1}{2\pi}\int_L \frac{\partial V^{\widehat\omega}}{\partial\n_i} ds = -1. \end{equation} As $\partial V^\omega/\partial\n_i$ and $\partial V^\omega/\partial\n_o$ are smooth on $L$ by \eqref{eq:toRemind}, in particular, Lipschitz smooth, we obtain from \cite[Thm. II.1.5]{SaffTotik} that \[ d\omega = -\frac{1}{2\pi}\left(\frac{\partial V^\omega}{\partial\n_i}+\frac{\partial V^\omega}{\partial\n_o}\right)ds \] and therefore \begin{equation} \label{eq:nder3} \frac{1}{2\pi}\int_L \frac{\partial V^\omega}{\partial\n_i} ds = -\omega(L)-\frac{1}{2\pi}\int_L \frac{\partial V^\omega}{\partial\n_o} ds = 0 \end{equation} by \eqref{eq:nder2}. Finally, by plugging \eqref{eq:nder1} and \eqref{eq:nder3} into \eqref{eq:nder0}, we see the validity of \eqref{eq:nder4}. Hence, the lemma follows. \end{proof}
\subsubsection{Reflected sets} \label{sss:rs} In the course of the proof of Theorem \ref{thm:convcap}, we used the conclusions of Lemma \ref{lem:pt} below. It has to do with the specific geometry of the disk, and we could not find an appropriate reference for it in the literature.
\begin{lem} \label{lem:pt} Let $E\subset\D$ be a compact set of positive capacity with connected complement $D$, and $E^*$ stand for the reflection of $E$ across $\T$. Further, let $\omega\in\Lm(E)$ be such that $\omega=\widehat\omega^*$, where $\omega^*$ is the reflection of $\omega$ across $\T$ and $\widehat\omega^*$ is the balayage of $\omega^*$ onto $E$. Then $\omega=\omega_{(E,\T)}$ and $\widetilde\omega=\omega_{(\T,E)}$, where $\widetilde\omega$ is the balayage of $\omega$ onto $\T$ relative to $\D$. Moreover, it holds that $V_D^{\omega^*}=V_D^{\widetilde\omega}=1/\cp(E,\T)-V_\D^\omega$ in $\overline\D$. \end{lem}
\begin{proof} Denote by $\widetilde\omega$ and $\widetilde\omega^*$ the balayages of $\omega$ onto $\T$ relative to $\D$ and of $\omega^*$ onto $\T$ relative to $\Om$, respectively. It holds that $\widetilde\omega=\widetilde\omega^*$. Indeed, since $g_{\Om}(z,\infty)=\log|z|$, we get from \eqref{eq:inthd} for $z\in\T$ that \begin{eqnarray}
V^{\widetilde\omega^*}(z) &=& \int\left[\log|t|-\log|z-t|\right]d\omega^*(t) = \int\left[-\log|u|-\log|z-1/\bar u|\right]d\omega(u) \nonumber \\
{} &=& -\int\log|1-z\bar u|d\omega(u) = V^\omega(z) = V^{\widetilde\omega}(z), \nonumber \end{eqnarray} where we used the fact that $z=1/\bar z$ for $z\in\T$ and \eqref{eq:equalBal} applied to $\omega$. Since both measures, $\widetilde\omega$ and $\widetilde\omega^*$, have finite energy, the uniqueness theorem \cite[Thm. II.4.6]{SaffTotik} yields that $\widetilde\omega=\widetilde\omega^*$.
By \eqref{eq:toRemind}, we have that $V_D^{\widetilde\omega} = V^{\widetilde\omega-\widehat{\widetilde\omega}} + c(\widetilde\omega;D)$. Since $\widehat{\widetilde\omega} = \widehat{\widetilde\omega^*} = \widehat\omega^* = \omega$ by the equality $\widetilde\omega=\widetilde\omega^*$, \eqref{eq:balinsteps}, and the hypotheses of the lemma, it holds that \begin{equation} \label{eq:eqpots} V_D^{\widetilde\omega}(z) = V^{\widetilde\omega-\omega}(z)+c(\widetilde\omega;D) = c(\widetilde\omega;D) - V_\D^\omega(z), \quad z\in\overline\C, \end{equation} where we used \eqref{eq:toRemind} once more. Hence, $V_\D^\omega=c(\widetilde\omega;D)$ q.e. on $E$ and the unique characterization of the Green equilibrium distribution implies that $\omega=\omega_{(E,\T)}$ and $c(\widetilde\omega;D)=1/\cp(E,\T)$. Moreover, it also holds that $V_D^{\widetilde\omega}=c(\widetilde\omega;D)=1/\cp(E,\T)$ in $\overline\Om$ and therefore $\widetilde\omega=\omega_{(\T,E)}$, again by the characterization of the Green equilibrium distribution.
The first part of the last statement of the lemma is independent of the geometry of the reflected sets and follows easily from \eqref{balayageh} and the fact that for any $z\in\D$ the function $g_D(z,u)$ is a harmonic function of $u\in\Om$ continuous on $\T$. The second part was shown in \eqref{eq:eqpots}. \end{proof}
\subsection{Dirichlet Integrals} \label{DI} Let $D$ be a domain with compact boundary comprised of finitely many analytic arcs that possess tangents at the endpoints. In this section we only consider functions continuous on $\overline D$ whose weak ({\it i.e.,} distributional) Laplacian in $D$ is a signed measure supported in $D$ with total variation of finite Green energy, and whose gradient, which is smooth off the support of the Laplacian, extends continuously to $\partial D$ except perhaps at the corners where its norm grows at most like the reciprocal of the square root of the distance to the corner. These can be written as a sum of a Green potential of a signed measure as above and a harmonic function whose boundary behavior has the smoothness just prescribed above. By Proposition \ref{prop:minset}, the results apply for instance to $V_{\overline\C\setminus\Gamma}^\omega$ on $\overline{\C}\setminus\Gamma$ as soon as $\omega$ has finite energy.
Let $u$ and $v$ be two such functions. We define the Dirichlet integral of $u$ and $v$ in $D$ by \begin{equation} \label{eq:greenformula} \di_D(u,v) = -\frac{1}{2\pi}\iint_Du\Delta vdm_2 - \frac{1}{2\pi}\int_{\partial D}u\frac{\partial v}{\partial\n}ds, \end{equation}
where $\Delta v$ is the weak Laplacian of $v$ and $\partial/\partial\n$ is the partial derivative with respect to the inner normal on $\partial D$. The Dirichlet integral is well-defined since the measure $|\Delta v|$ has finite Green energy and is supported in $D$ while $|u\partial v/\partial\n|$ is integrable on $\partial D$. Moreover, it holds that \begin{equation} \label{symmetry} \di_D(u,v) = \di_D(v,u). \end{equation} Indeed, this follows from Fubini's theorem if $u$ and $v$ are both Green potentials and from Green's formula when they are both harmonic. Thus, we only need to check \eqref{symmetry} when $v$ is harmonic and $u$ is a Green potential. In this case $\di_D(u,v)=0$, since $\Delta v=0$ in $D$ and $u$ vanishes on $\partial D$; thus, it remains to prove that $\di_D(v,u)=0$ as well. Let $a$ be a point in the support of $\Delta u$ and $\varepsilon>0$ be a regular value of $g_D(\cdot,a)$ which is so small that the open set $A :=\{z\in D:~ g_D(z,a)<\varepsilon\}$ does not intersect the support of $\Delta u$. By our choice of $\varepsilon$, the boundary of $A$ consists of $\partial D$ and a finite union of closed smooth Jordan curves. Write $v=v_1+v_2$ for some $C^\infty$-smooth functions $v_1, v_2$ such that the support of $v_2$ is included in $A$ (hence $v_2$ is identically zero in a neighborhood of $\overline{D\setminus A}$ where the closure is taken with respect to $D$) while the support of $v_1$ is compact in $D$. Such a decomposition is easily constructed using a smooth partition of unity subordinated to the open covering of $D$ consisting of $A$ and $\{z\in D:~ g_D(z,a)>\varepsilon/2\}$.
By the definition of the weak Laplacian we have that \begin{eqnarray} \di_D(v,u) &=& -\frac{1}{2\pi}\iint_Dv_1\Delta udm_2 - \frac{1}{2\pi}\int_{\partial D}v_2\frac{\partial u}{\partial\n}ds\nonumber \\ {} &=& -\frac{1}{2\pi}\iint_Du\Delta v_1dm_2 - \frac{1}{2\pi}\int_{\partial D}v_2\frac{\partial u}{\partial\n}ds \nonumber \\ {} &=& - \frac{1}{2\pi}\iint_Du\Delta v_1dm_2 -\frac{1}{2\pi}\iint_Du\Delta v_2dm_2 = 0, \nonumber \end{eqnarray} where we used Green's formula \begin{equation} \label{greensformula} \iint_{A}(v_2\Delta u-u\Delta v_2)dm_2 = \int_{\partial A}\left(u\frac{\partial v_2}{\partial\n}-v_2\frac{\partial u}{\partial \n}\right)ds. \end{equation}
Note that if $\gamma\subset D$ is an analytic arc which is closed in $D$ and $u,v$ are harmonic across $\gamma$, then \begin{equation} \label{ajoutbord} \di_{D}(u,v)=\di_{D\setminus\gamma}(u,v) \end{equation} because the rightmost integral in (\ref{eq:greenformula}) vanishes on $\gamma$ as the normal derivatives of $v$ from each side of $\gamma$ have opposite signs.
Observe also that if $\nu$ is a positive Borel measure supported in $D$ with finite Green's energy then $\Delta V_D^\nu=-2\pi \nu$ by Weyl's lemma (see Section~\ref{ss:gp}) and so by \eqref{eq:greenformula} \begin{equation} \label{eq:di1} \di_D(V_D^\nu) := \di_D(V_D^\nu,V_D^\nu) = I_D[\nu]. \end{equation} Finally, if $v$ is harmonic in $D$, it follows from the divergence theorem that \begin{equation} \label{positivity}
\di_D(v) = \iint_{D}\|\nabla v\|^2dm_2, \end{equation} which is the usual definition for Dirichlet integrals. In particular, if $D^\prime\subset D$ is a subdomain with the same smoothness as $D$, and if we assume that $\supp\,\Delta v\subset D^\prime$, we get from \eqref{ajoutbord} and \eqref{positivity} that \begin{equation} \label{decompositivity}
\di_D(v) = \di_{D^\prime}(v)+\di_{D\setminus \overline{D^\prime}}(v) = \di_{D^\prime}(v)+\iint_{D\setminus\overline{ D^\prime}}\|\nabla v\|^2dm_2. \end{equation}
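As a consistency check, \eqref{eq:di1} also follows by direct substitution in \eqref{eq:greenformula}: under our standing assumptions $V_D^\nu$ vanishes on $\partial D$, so the boundary term drops out and, since $\Delta V_D^\nu=-2\pi\nu$,
\[
\di_D(V_D^\nu) \,=\, -\frac{1}{2\pi}\iint_D V_D^\nu\,\Delta V_D^\nu\,dm_2 \,=\, \iint_D V_D^\nu\,d\nu \,=\, I_D[\nu].
\]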
\section{Numerical Experiments} \label{sec:numer}
In order to numerically construct rational approximants, we first compute the truncated Fourier series of the approximated function (the resulting rational functions are polynomials in $1/z$ that converge to the initial function in the Wiener norm) and then use {\it Endymion} software (it uses the same algorithm as the previous version \emph{Hyperion} \cite{rGr}) to compute critical points of given degree $n$. The numerical procedure in Endymion is a descent algorithm followed by a quasi-Newton iteration that uses a compactification of the set $\rat_n$ whose boundary consists of $n$ copies of $\rat_{n-1}$ and $n(n-1)/2$ copies of $\rat_{n-2}$ \cite{BCarO91}. This allows one to generate several initial conditions leading to a critical point. If the sampling of the boundary gets sufficiently refined, the best approximant will be attained. In practice, however, one cannot be absolutely sure that the sampling was fine enough. This is why we speak below of rational approximants and do not claim they are best rational approximants. They are, however, irreducible critical points, up to numerical precision.
In the numerical experiments below we approximate functions given by \[ f_1(z) = \frac{1}{\sqrt[4]{(z-z_1)(z-z_2)(z-z_3)(z-z_4)}} + \frac{1}{z-z_1}, \] where $z_1=0.6+0.3i$, $z_2=-0.8+0.1i$, $z_3=-0.4+0.8i$, $z_4=0.6-0.6i$, and $z_5=-0.6-0.6i$; and \[ f_2(z) = \frac{1}{\sqrt[3]{(z-z_1)(z-z_2)(z-z_3)}} + \frac{1}{\sqrt{(z-z_4)(z-z_5)}}, \] where $z_1=0.6+0.5i$, $z_2=-0.1+0.2i$, $z_3=-0.2+0.7i$, $z_4=-0.4-0.4i$, and $z_5=0.1-0.6i$. We take the branch of each function such that $\lim_{z\to\infty}zf_j(z)=2$, $j=1,2$, and use first 100 Fourier coefficients for each function.
\begin{figure}
\caption{\small The poles of rational approximant to $f_1$ of degree 12 (left) and superimposed poles of rational approximants to $f_1$ of degrees 12 and 16 (right).}
\end{figure}
\begin{figure}
\caption{\small The poles of rational approximant to $f_2$ of degree 16 (left) and superimposed poles of rational approximants to $f_2$ of degrees 13 and 16 (right).}
\end{figure}
On the figures diamonds depict the branch points of $f_j$, $j=1,2$, and disks denote the poles of the corresponding approximants.
\end{document} |
\begin{document}
\title{SLPerf: a Unified Framework for Benchmarking Split Learning}
\begin{abstract}
Data privacy concerns have made centralized training on data scattered across silos infeasible, leading to the need for collaborative learning frameworks. To address this, two prominent frameworks have emerged: federated learning (FL) and split learning (SL). While FL has established various benchmark frameworks and research libraries, SL currently lacks a unified library despite its diversity in terms of label sharing, model aggregation, and cut-layer choice. This lack of standardization makes comparing SL paradigms difficult. To address this, we propose SLPerf, a unified research framework and open research library for SL, and conduct extensive experiments on four widely-used datasets under both IID and Non-IID data settings. Our contributions include a comprehensive survey of recently proposed SL paradigms, a detailed benchmark comparison of different SL paradigms in different situations, and rich engineering take-away messages and research insights for improving SL paradigms. SLPerf can facilitate SL algorithm development and fair performance comparisons. The code is available at \url{https://github.com/Rainysponge/Split-learning-Attacks}. \end{abstract}
\section{Introduction} Deep learning has strong application value and potential in various fields such as computer vision \cite{Krizhevsky2012ImageNet}, disease diagnosis \cite{8086133}, financial fraud detection \cite{DBLP:journals/corr/abs-2107-13673}, and malware detection \cite{DBLP:journals/corr/WangGZXGL16}. However, a large amount of high-quality labeled data is often required to train an effective deep-learning model \cite{CHERVENAK2000187}. Unfortunately, data are usually scattered across different silos or edge devices. Directly collecting them for training in a centralized fashion will inevitably introduce privacy issues. For instance, exchanging sensitive raw patient data between healthcare providers for machine learning could lead to serious privacy concerns \cite{HUANG2019103291,DBLP:journals/corr/LiuGNDKBVTNCHPS17}, which would no longer be permitted with the enforcement of data privacy laws \cite{Alessandro2013The}. Various collaborative learning methods have been proposed to facilitate joint model training without compromising data privacy \cite{LSDG2,DBLP:journals/corr/abs-1812-03288}, such as federated learning (FL) \cite{2016Communication} and split learning (SL) \cite{DBLP:journals/corr/abs-1810-06060,DBLP:journals/corr/abs-1912-12115}. FL enables collaboratively training a shared model while keeping all the raw data locally and exchanging only gradients. However, it often requires clients to have sufficient computational power for local training. To address the challenge of training heavy deep learning models on resource-constrained IoT devices, such as smartphones, wearables, or sensors, split learning is introduced. It splits the whole network into two parts that are computed by different participants (i.e., clients and a server), and necessary information is transmitted between them, including the data of the cut layer from clients and the gradients from the server.
Various benchmark frameworks and research libraries have been provided for FL, such as FedML~\cite{DBLP:journals/corr/abs-2007-13518}, PySyft~\cite{DBLP:journals/corr/abs-1811-04017}, Flower~\cite{FLOWER}, and LEAF~\cite{DBLP:journals/corr/abs-1812-01097}. However, unlike FL, SL has no unified library for researchers yet. As a result, despite significant progress made in SL research, several critical limitations remain to be addressed.
\begin{itemize}
\item \textbf{Lack of support for different SL paradigms}. SL has a high degree of diversity in terms of label sharing, model aggregation, etc. However, the lack of a comprehensive SL research library forces researchers to reinvent the wheel when comparing existing SL paradigms.
\item \textbf{Lack of a standardized SL evaluation benchmark}. The lack of a standardized SL evaluation benchmark makes it difficult to compare different paradigms fairly. With more paradigms being proposed, some studies train on CV datasets, while others train on sequential/time-series data \cite{DBLP:journals/corr/abs-2003-12365}. Dataset partitioning also varies, such as dividing based on labels or Dirichlet distribution in Non-IID settings.
\end{itemize}
In this work, we systematically survey and compare different SL paradigms. We specifically classify these paradigms across various dimensions and present a unified research framework called SLPerf for benchmarking. To comprehensively verify the effectiveness of different SL paradigms, we conduct extensive experiments on four widely used datasets covering a wide range of tasks including computer vision, medical data analysis, tabular data, and drug discovery, under both IID and Non-IID data settings. By analyzing these empirical results, we provide rich engineering take-away messages for guiding ML engineers to choose appropriate SL paradigms according to their specific applications. We also provide research insights for improving the robustness and efficiency of current SL paradigms. Compared to previous works, our study compares the performance of different SL paradigms with various data partitioning, providing a multi-dimensional perspective for evaluating SL paradigms. Our contributions are summarized as follows: \begin{itemize} \item \textbf{Unified framework for different SL paradigms.} We introduce our framework SLPerf, an open research library and benchmark to facilitate SL algorithm development and fair performance comparisons. \item \textbf{Comprehensive survey of the recently proposed SL paradigms.} We conduct a survey of recently proposed SL paradigms and classify them according to their characteristics. We also provide suggestions for which scenarios are suitable for each SL paradigm. \item \textbf{Detailed comparison of different SL paradigms.}
Prior research lacks detailed experimental comparisons of different SL paradigms. In this paper, we conduct experiments to compare different SL paradigms in different situations and provide a benchmark for SL. \item \textbf{Applying SL to graph learning.} Sharing raw drug data is often not possible due to business concerns, making it challenging to build ML models. Our paper proposes using SL paradigms to solve this problem and has shown promising results on the Ogbg-molhiv dataset.
\end{itemize} \begin{table*}[htbp] \caption{The comparison of SL paradigms.} \label{The comparison of Split learning paradigms.}
\begin{center} \begin{small} \begin{sc}
\begin{tabular}{ c|lcc} \toprule & {\upshape Paradigms} &{\upshape Reduce Communication Cost} & {\upshape Label Protection}\\ \midrule \multirow{4}{*}{\upshape Model-Split-Only}&{\upshape Vanilla\cite{DBLP:journals/corr/abs-1810-06060}} & $\times$ & $\times$\\ &{\upshape U-shape \cite{DBLP:journals/corr/abs-1810-06060}}& $\times$ & $\surd$ \\ &{\upshape PSL\cite{2020Privacy}} & $\times$& $\times$\\ &{\upshape AsyncSL\cite{2021Communication}} & $\surd$& $\times$\\ \midrule \multirow{6}{*}{\upshape Weight Aggregation-based}&{\upshape SplitFed\cite{SplitFed}} & $\times$ & $\times$\\ &FSL\cite{9582171} & $\surd$ & $\times$\\ &{\upshape FeSTA\cite{FeSVT}} & $\times$& $\surd$\\ &{\upshape C3-SL\cite{3C_SL}}& $\surd$ & $\times$ \\ &{\upshape HSFL\cite{857534b2836744119e61880381fa08e8_HFSL}}&$\surd$ & $\times$ \\ &{\upshape CPSL\cite{2022Split_CPSL}}&$\surd$ & $\times$ \\ \midrule \multirow{3}{*}{\upshape Intermediate Data Aggregation-based}&{\upshape SGLR\cite{2021Server}} & $\times$ &$\times$\\ &{\upshape LocFedMix-SL\cite{10.1145/3485447.3512153}}& $\times$& $\times$\\ &{\upshape CutMixSL \cite{10.1145/3485447.3512153}} & $\surd$& $\times$\\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \end{table*}
\begin{figure}
\caption{Overview of our proposed framework SLPerf.}
\label{fig:framework}
\end{figure}
\section{Framework}
We propose SLPerf, an open research library for SL that offers various interfaces and methods to rapidly develop SL algorithms while enabling fair performance comparisons across different configurations. SLPerf supports various SL training configurations, including dataset partitioning methods, numbers of clients, and paradigms. It also provides standardized benchmarks with explicit evaluation metrics to validate baseline results, facilitating fair performance comparisons. Moreover, our framework includes standardized implementations of multiple SL paradigms, allowing users to familiarize themselves with our framework's API.
Figure~\ref{fig:framework} provides an overview of our framework. We divide the framework into three modules: High-level SL API, Low-level SL API, and Tools. When conducting SL experiments, users call functions from the High-level SL API by providing a configuration file that includes the selection of the paradigm, dataset, and data partitioning method. Then, using the corresponding factory, functions from the Low-level SL API are called to generate client and server objects for training. Once the training is completed, the experimental results are recorded in a log file that contains details such as accuracy, AUC, communication cost, and other pertinent information. Users can visualize and analyze the data using the code provided in the Tools module. If users need to develop their own SL paradigms, they can utilize the interfaces provided in the Low-level SL API to customize them. By overloading certain methods, such as determining whether to aggregate model weights, they can define the necessary communication information.
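To make this workflow concrete, the following is a minimal, hypothetical sketch of a configuration-driven run in the style described above; the names (\texttt{run\_experiment}, the registry decorator, the configuration keys) are illustrative assumptions, not SLPerf's actual API.

```python
# Hypothetical sketch of a config-driven SL experiment runner.
# All names here are illustrative assumptions, not SLPerf's real API.

REGISTRY = {}

def register(name):
    """Factory registration: map a paradigm name to its trainer class."""
    def deco(cls):
        REGISTRY[name] = cls
        return cls
    return deco

@register("vanilla")
class VanillaTrainer:
    def __init__(self, cfg):
        self.cfg = cfg

    def run(self):
        # A real trainer would build clients/server and train here;
        # we only return a toy "log" to show the data flow.
        return {"paradigm": "vanilla",
                "clients": self.cfg["num_clients"],
                "metric": "accuracy"}

def run_experiment(cfg):
    """High-level API: look up the paradigm named in the config,
    build the trainer via the factory, and return its logged results."""
    trainer = REGISTRY[cfg["paradigm"]](cfg)
    return trainer.run()

config = {"paradigm": "vanilla", "dataset": "mnist",
          "partition": "dirichlet", "alpha": 0.1, "num_clients": 3}
log = run_experiment(config)
```

A user-defined paradigm would simply subclass a trainer and register it under a new name, which mirrors the overloading mechanism described above.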
\section{Paradigm} \label{Paradigm section}
In this section, we present an overview of various split learning (SL) paradigms. We conduct a comprehensive comparison of these paradigms in Table~\ref{The comparison of Split learning paradigms.}, which lists mainstream SL paradigms and compares them based on various features. We categorize paradigms into three groups based on whether intermediate data or model weights are aggregated during training.
The “model-split-only” group partitions the model architecture across participants without data or weight aggregation. The “weight aggregation-based” group uses techniques like FedAvg \cite{2016Communication} to aggregate model weights from different devices. The “intermediate data aggregation-based” group aggregates intermediate data like gradients or smashed data. This categorization provides a clear understanding of SL approaches for researchers and practitioners.
\subsection{Model-Split-Only} \begin{figure}
\caption{The overview of the Vanilla SL paradigms: (a) Vanilla (centralized) (b) Vanilla (peer-to-peer) }
\label{fig:VanillaSL: VanillaSL}
\label{fig:VanillaSL: VanillaSL2}
\label{fig:VanillaSL}
\end{figure}
The Vanilla Split Learning paradigm \cite{DBLP:journals/corr/abs-1810-06060} divides the original model into two parts: the server model and the client model. The client trains the model locally using its own data and then transmits the “Smashed Data” to the server for updating the server model. The server sends the backward-propagated gradients back to the client for updating the client model. Each client must wait for the previous client to complete a training round before interacting with the server. This paradigm has two setups: (1) centralized, where the client uploads model-weight-containing files to a server or trusted third party, and the next client downloads the weights and trains the model, as shown in Figure~\ref{fig:VanillaSL: VanillaSL}; (2) peer-to-peer (P2P), where the server sends the previously trained client’s address to the current client for transferring the weights, as shown in Figure~\ref{fig:VanillaSL: VanillaSL2}.
However, the Vanilla paradigm has a label leakage issue which is addressed by the U-shape paradigm \cite{DBLP:journals/corr/abs-1810-06060}. The U-shape paradigm divides the original model into three parts: the head and tail on the client-side and the body on the server-side. The client only sends smashed data to the server, which forwards it for forward propagation and sends computed results back to the client. The tail calculates loss and gradient, updates with backpropagation, and sends the gradient to the server. The server updates the body and sends the backpropagation result to the client for updating the head.
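The forward/backward exchange of the Vanilla paradigm can be sketched in a few lines of numpy. This is an illustrative toy (one linear layer per side, MSE loss, labels available to the server), not the implementation used in any of the cited papers: only the smashed data and its gradient cross the split.

```python
# Toy sketch of one Vanilla SL round: the client computes smashed data,
# the server finishes the forward pass and returns only the gradient of
# the smashed data. Model/shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))          # client's private minibatch
y = rng.normal(size=(8, 1))          # labels (shared in Vanilla SL)
W_client = rng.normal(size=(4, 3))   # client-side layer
W_server = rng.normal(size=(3, 1))   # server-side layer
lr = 0.01

# Client: forward to the cut layer, send smashed data to the server.
smashed = X @ W_client

# Server: finish forward pass, compute loss, backpropagate,
# and send back only the gradient w.r.t. the smashed data.
pred = smashed @ W_server
loss = ((pred - y) ** 2).mean()
g_pred = 2 * (pred - y) / len(y)
g_W_server = smashed.T @ g_pred
g_smashed = g_pred @ W_server.T      # the only gradient the client sees

# Each side updates its own half of the model.
W_server -= lr * g_W_server
W_client -= lr * (X.T @ g_smashed)

new_loss = (((X @ W_client) @ W_server - y) ** 2).mean()
```

Note that raw data $X$ never leaves the client and the server weights never leave the server; in the U-shape variant the loss computation would move back to the client so that $y$ stays private as well.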
\begin{figure}
\caption{The overview of Model-Split-Only paradigms: (a) U-shape (b) PSL}
\label{fig:USHAPE: USHAPE_SL}
\label{fig:parallelSL: parallelSL}
\label{fig:USHAPE}
\end{figure} The synchronous training properties of Vanilla SL may result in significant delays, which can be mitigated through parallel training. To address this issue, the Parallel Split Learning (PSL) \cite{2020Privacy} paradigm has been proposed, as shown in Figure~\ref{fig:parallelSL: parallelSL}. In PSL, clients initiate forward propagation in parallel and transmit the smashed data $z_i$ (where $i$ denotes the index of the client node) to the server. Upon completion of forward and backward propagation, the server sends back the computed gradient $g_i$ derived from $z_i$ to the $i$-th client, reducing the overall training time.
To reduce the communication cost, the Split Learning using Asynchronous Training (AsyncSL) \cite{2021Communication} paradigm has been developed. It is a loss-based asynchronous training scheme that updates the client-side model less frequently and only sends/receives smashed data/gradients in selected epochs, depending on one of three states.
In state A, clients send smashed data to the server and receive the gradient from the server. In state B, the client only sends smashed data to the server, and the server does not send the gradient back. In state C, no information is exchanged between clients and the server. Also aiming to reduce the communication cost, the circular convolution-based batch-wise compression for SL (C3-SL) \cite{DBLP:journals/corr/abs-2207-12397} is a paradigm combining circular convolution with SL. Clients use circular convolution \cite{Plate1995Holographic} to compress multiple features into a single compressed feature, which is decoded on the server side through circular correlation \cite{Plate1995Holographic}.
\begin{figure}
\caption{The overview of Weight Aggregation-based SL paradigms: (a) SplitFed (b) FeSTA.}
\label{fig:AVERAGE: splitFed}
\label{fig:USHAPE: multiTask}
\label{fig:AVERAGE}
\end{figure} \subsection{Weight Aggregation-based}
Weight Aggregation-based SL paradigms train a shared client model by aggregating locally trained models from clients. During the training process, some or all of the clients send their models to the server or a trusted third party for model weight aggregation at a certain stage; examples include SplitFed~\cite{SplitFed}, FSL~\cite{9582171}, and HSFL~\cite{857534b2836744119e61880381fa08e8_HFSL}. Weight Aggregation-based SL can handle resource-limited clients such as those found in IoT and is advantageous for addressing Non-IID data scenarios.
In SplitFed~\cite{SplitFed}, the model is split into two parts and trained in a similar way to PSL. After each client completes one round of training, its local model weights are transmitted to the server, which aggregates the local models using the FedAvg aggregation method. The process is illustrated in Figure~\ref{fig:AVERAGE: splitFed}. Additionally, the authors propose a variant, SplitFedv2~\cite{SplitFed}, in which clients are trained linearly in order, rather than in parallel. In Federated Split Learning (FSL) \cite{9582171}, each client trains its local model in accordance with the SL paradigm in conjunction with its corresponding Edge Server, instead of interacting with a central server. After the completion of one epoch, the model weights of the Edge Servers are aggregated via the central server.
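The FedAvg step used by these paradigms can be sketched as a sample-weighted average of each parameter across clients; the helper below is an illustrative toy, not the code of SplitFed or FSL.

```python
# Toy FedAvg aggregation: average each named parameter across clients,
# weighted by the number of local samples. Names are illustrative.
import numpy as np

def fedavg(client_weights, num_samples):
    """client_weights: list of dicts {param_name: ndarray};
    num_samples: list of local dataset sizes, one per client."""
    total = sum(num_samples)
    avg = {}
    for name in client_weights[0]:
        avg[name] = sum(w[name] * (n / total)
                        for w, n in zip(client_weights, num_samples))
    return avg

# Two toy clients; client 2 holds three times as much data as client 1.
w1 = {"conv": np.ones((2, 2)), "fc": np.zeros(3)}
w2 = {"conv": 3 * np.ones((2, 2)), "fc": np.ones(3)}
global_w = fedavg([w1, w2], num_samples=[100, 300])
# "conv" becomes 0.25*1 + 0.75*3 = 2.5; "fc" becomes 0.75
```

In SplitFed this averaging is applied only to the client-side halves of the model after each round, while the server-side half is updated directly by the server.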
FeSTA \cite{FeSVT} is a paradigm that utilizes the Vision Transformer and the concept of U-shape paradigms. In this paradigm, clients train various models and are divided into groups depending on their tasks and trained in parallel. After each training epoch, clients send the model head and tail parameters to the server, which aggregates the model parameters for the same task and returns them to the clients using techniques such as FedAvg. This paradigm is suitable for training models for multiple similar tasks as it has a shared Transformer body.
Hybrid Split and Federated Learning (HSFL) \cite{857534b2836744119e61880381fa08e8_HFSL} and Cluster-based Parallel SL (CPSL) \cite{2022Split_CPSL} are paradigms designed to address the practical constraints of power, bandwidth, and channel quality typically present in client devices. Both methods utilize greedy algorithms to optimize their problems.
\begin{figure}
\caption{The overview of Intermediate Data Aggregation-based SL paradigms: (a) SLGR (b) CutMixSL.}
\label{fig:AVERAGE: SLGR}
\label{fig:communication_cost: MIXer}
\label{fig:communication_cost}
\end{figure} \subsection{Intermediate Data Aggregation-based} Unlike model weight aggregation, some SL paradigms aggregate intermediate data, such as gradients and smashed data, from different clients to enable the exchange of local data information and minimize communication costs.
In SGLR \cite{2021Server}, the server aggregates the local gradients of a subset of clients rather than the model parameters. This approach reduces the amount of information shared among clients and eliminates the client decoupling problem. The process is shown in Figure~\ref{fig:AVERAGE: SLGR}. In addition, inspired by existing learning-rate acceleration work \cite{DBLP:journals/corr/GoyalDGNWKTJH17}, the learning rate is separated into two parts, and the server model’s learning rate is accelerated to address the Server-Side Large Effective Batch Problem \cite{2021Server}.
In LocFedMix-SL~\cite{10.1145/3485447.3512153}, the server mixes up the smashed data from different clients to create new smashed data for forward propagation. This smashed-data mixing increases the effective batch size at the model's top layers to match its higher effective learning rate. Each client has an auxiliary network designed to maximize the mutual information between the smashed data and the input data, which makes the relationship between the smashed data and the model input tighter, thereby improving model performance and training speed. In CutMixSL \cite{2022Visual}, the authors combined SL with ViT and proposed a new type of data called CutSmashed. As shown in Figure~\ref{fig:communication_cost: MIXer}, each client creates its own CutSmashed data by randomly masking a portion of its original smashed data. Then, clients send this partial data to a Mixer, which combines them and sends the result to the server for training. This method reduces communication costs, as clients only need to send a portion of their data, and the server can use multi-casting to send back the gradient. CutSmashed data also increases data privacy, as adversaries find it difficult to reconstruct the untransmitted regions.
\section{Evaluation} \begin{table*}[htbp] \caption{The overview of different SL paradigms on different settings. Centralized learning results are provided in bold.} \label{summary list}
\begin{center} \begin{small} \begin{sc} \resizebox{\textwidth}{13mm}{
\begin{tabular}{c|ccc|ccc|ccc|cc} \toprule \multirow{2}{*}{\diagbox{paradigm}{$\alpha$}}
& \multicolumn{3}{c}{MNIST(Acc = \textbf{0.9843})} & \multicolumn{3}{c}{CIFAR-10(Acc = \textbf{0.9114})}& \multicolumn{3}{c}{Adult(Acc = \textbf{0.8173})}& \multicolumn{2}{c}{molhiv(AUC = \textbf{0.7472})} \\ \cline{2-12} &
0 & 0.10&
\makecell[c]{$+\infty$ \\ (IID)}
& $0$ & $0.10$ & \makecell[c]{$+\infty$ (IID)}& $0$ & 0.1 & \makecell[c]{$+\infty$ (IID)} & \makecell[c]{$+\infty$ (IID)} & 0.10\\ \midrule {\upshape SplitFed} & 0.9082 & 0.9768 & 0.9882 &0.6155&0.7781&0.9089 &0.7217&0.7832&0.8027&0.7374&0.6524\\ {\upshape PSL} & 0.3486& 0.6764& 0.9713 &0.4023&0.6484&0.9139 & 0.5319 &0.6042&0.8163&0.7341& 0.7053 \\ {\upshape Vanilla} & 0.1826&0.7288& 0.9792 & 0.3424&0.3534&0.9008 & 0.5171&0.5612&0.7979&0.7428&0.7004\\ {\upshape U-shape} & 0.3297&0.7019&0.9728& 0.3691& 0.3513& 0.9067 & 0.5239 & 0.5957&0.8078&-&-\\ {\upshape SGLR} & 0.3312&0.8603&0.9734&0.4588&0.6765&0.8954& 0.5953 &0.6101&0.7932&-&-\\ \bottomrule \end{tabular} } \end{sc} \end{small} \end{center} \end{table*} As the above sections show, our framework implements the current mainstream SL paradigms and supports typical data splitting schemes including IID and Non-IID. In this part, we provide comprehensive empirical studies of these paradigms on a wide range of tasks including computer vision, medical data analysis, tabular data, and drug discovery. By summarizing and analyzing these empirical results, we provide rich engineering take-away messages for guiding ML engineers to choose appropriate SL paradigms according to their own applications. We also provide research insights for improving current SL paradigms along the dimensions of robustness, efficiency, and privacy protection. In summary, we first introduce the evaluation setup in Section \ref{sec:Evaluation setup}. Then we demonstrate empirical experimental results and provide detailed analysis in Section \ref{sec:Basic analysis}.
\subsection{Evaluation setup} \label{sec:Evaluation setup} We introduce the basic evaluation setup used in this paper including dataset/model setup, evaluation criteria, and run-time environment. \subsubsection{Dataset and model setup}
To evaluate our framework, we conducted extensive experiments using various models and datasets. For all experiments, we trained the models using SGD with a fixed learning rate of 0.01 on minibatches of size 64.
We evaluated different SL paradigms on four commonly-used datasets, namely, MNIST, CIFAR-10, UCI Adult, and Ogbg-molhiv, which cover a wide range of data types, including images, tabular data, and graphs.
\nosection{MNIST}
The MNIST dataset \cite{1990Handwritten} is a collection of handwritten digits from postal codes, with 70,000 samples divided into 60,000 training and 10,000 testing samples. The digits are centered on a 28x28 grid, and the goal is to classify them into 0-9. We trained the LeNet model on this dataset and split the network layers after the 2D MaxPool layer.
\nosection{CIFAR-10}
The CIFAR-10 dataset comprises 60,000 32x32 RGB color images with 10 exclusive class labels, divided into 50,000 training samples and 10,000 testing samples. The task is to classify objects in the images.
ResNet56 \cite{ResNet} is used for evaluation, with the model split at the first ReLU activation layer.
\nosection{Ogbg-molhiv}
The ogbg-molhiv dataset contains molecular samples and labels indicating their ability to inhibit HIV replication. This dataset is used to model a graph learning task, where the molecular data is represented as a graph with atoms as nodes and edges between them.
This task is an example of molecular property prediction, which is crucial in the AI-drug domain.
\nosection{Adult Data Set} The UCI Adult dataset~\cite{osti_421279} is a classic data mining benchmark with 48,842 instances from the UC Irvine repository \footnote{{\tt http://archive.ics.uci.edu\\/ml/datasets/Census+Income }}. The dataset has 14 features such as country, age, and education. The goal of this binary classification task is to predict whether income exceeds $50K/yr$ based on the census information. We use a network consisting of a fully connected (FC) layer, a ReLU activation layer, and another FC layer, which is split at the first ReLU activation layer.
\subsection{Analysis} \label{sec:Basic analysis} In this section, we demonstrate our empirical results and try to analyze them from the following dimensions: \begin{itemize}
\item Comparison among different SL paradigms mentioned in Table~\ref{The comparison of Split learning paradigms.}. We provide the results of different SL paradigms when training on IID and Non-IID settings.
\item Comparison with typical FL paradigm FedAvg \cite{2016Communication}.
\item The comparison of the communication costs of different SL paradigms.
\item Comparison with on-device local training. \end{itemize}
\begin{figure*}
\caption{The performance of three SL paradigms on different $\alpha$ when the number of clients is 3 on the MNIST and CIFAR-10 datasets.}
\label{fig:Dirichlet MNIST:PSL}
\label{fig:Dirichlet MNIST:SFL}
\label{fig:Dirichlet MNIST:Vanilla}
\label{fig:Dirichlet MNIST:SGLR}
\label{fig:Dirichlet cifar:PSL}
\label{fig:Dirichlet cifar:SFL}
\label{fig:Dirichlet cifar:Vanilla}
\label{fig:Dirichlet cifar:SGLR}
\label{fig:Dirichlet MNIST}
\end{figure*}
\subsubsection{Comparisons in the IID setting} \label{sec: analysis of IID} First, we investigate the performance of different SL paradigms in the IID setting. The overall results are shown in Table~\ref{summary list}. For most of the datasets we studied in this paper, all SL paradigms achieve similar accuracy, comparable with the centralized one. For example, on the CIFAR-10 dataset with 3 clients, the Vanilla SL paradigm achieves an accuracy of 90.08$\%$ while centralized training achieves 91.14$\%$. Similarly, the commonly-used U-shape paradigm nearly matches the accuracy of centralized training. For the graph dataset, SL paradigms also achieve promising results; for example, Vanilla achieves an AUC of 0.7428.
\subsubsection{Comparisons in the Non-IID setting} \label{sec: analysis of Non IID} In practice, the Non-IID setting is more frequently encountered than the IID setting. To simulate such Non-IID settings, we employ label-based splitting for the MNIST and CIFAR-10 datasets. For UCI-Adult dataset, we use attribute-based splitting based on relationship. Following prior work, we first sample $p_{i,k}$ from Dirichlet distribution Dir($\alpha$) and then assign $p_{i,k}$ proportion of the samples of class/attribute $k$ to client $i$, where we set $\alpha$ from 0.05 to 0.3 to adjust the level of data heterogeneity in our experiments. The smaller the $\alpha$, the larger the data heterogeneity. When $\alpha = 0$, samples from different clients have non-overlap label/attribute spaces. For example, the MNIST dataset with 10 labels will be divided into five clients in this way, and each client will be assigned samples corresponding to two labels.
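The label-based Dirichlet split described above can be sketched as follows. The function and parameter names are illustrative, and the sketch assumes $\alpha>0$ (the $\alpha=0$ case corresponds to assigning disjoint label sets directly).

```python
# Toy label-based Dirichlet partition: for each class k, draw
# p_{.,k} ~ Dir(alpha * 1) over clients and give each client that
# fraction of class-k samples. Names are illustrative.
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for k in np.unique(labels):
        idx_k = np.flatnonzero(labels == k)
        rng.shuffle(idx_k)
        p = rng.dirichlet(alpha * np.ones(num_clients))
        # Cut points proportional to p: every sample of class k goes
        # to exactly one client, so the split is a true partition.
        cuts = (np.cumsum(p)[:-1] * len(idx_k)).astype(int)
        for i, part in enumerate(np.split(idx_k, cuts)):
            client_idx[i].extend(part.tolist())
    return client_idx

labels = np.repeat(np.arange(10), 100)   # toy dataset: 10 classes x 100
parts = dirichlet_partition(labels, num_clients=3, alpha=0.1)
```

With a small $\alpha$ such as 0.1 most of each class lands on one client, reproducing the high data heterogeneity discussed in this section; large $\alpha$ approaches the IID split.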
We follow the evaluation settings in Section \ref{sec:Evaluation setup} to discuss the performance of paradigms on Non-IID data. \\
The overall results are shown in Table~\ref{summary list}. In this table, we select two representative values of $\alpha$ to demonstrate the results of different paradigms on different datasets (i.e., $\alpha=0$ and $\alpha=0.1$). The client number is set to $3$ for all results in this table (more results with other client numbers are shown in the appendix). We summarize the following observations from the results: \begin{itemize}
\item
SL paradigms struggle to generalize to Non-IID settings, especially when data heterogeneity is high (small $\alpha$). Table~\ref{summary list} shows that most paradigms perform poorly on CIFAR-10 when $\alpha=0$, with Vanilla achieving only 34.24$\%$ accuracy. When samples are not strictly divided by labels/attributes (e.g., $\alpha=0.1$), the accuracy of most SL paradigms is still worse than that of centralized learning.
\item
SplitFed demonstrates higher robustness and higher performance compared to other methods. Figure~\ref{fig:Dirichlet MNIST:SFL} shows that SplitFed's performance remains stable when $\alpha$ is small (e.g., $\alpha=0.10$), achieving an accuracy of $97.68\%$. However, the accuracy of Vanilla and PSL is low when $\alpha$ is small, as shown in Figure~\ref{fig:Dirichlet MNIST:Vanilla} and Figure~\ref{fig:Dirichlet MNIST:PSL}. Similar findings are shown in Figures~\ref{fig:Dirichlet cifar:PSL}--\ref{fig:Dirichlet cifar:SGLR} on the CIFAR-10 dataset. SplitFed allows clients to share information about their models, while SGLR shares information by averaging the local gradients of the cut layer, achieving better performance than other paradigms except SplitFed when $\alpha$ is small, e.g., an accuracy of $67.65\%$ when $\alpha=0.1$.
\item Vanilla performs worse than the other paradigms and, in some cases, even worse than local training. As shown in Table~\ref{summary list}, Vanilla achieves only $18.26\%$ accuracy when $\alpha=0$. Vanilla's training pattern prevents the model from converging in these cases.
\end{itemize}
\begin{figure}
\caption{The performance of FedAvg with different $\alpha$ on the MNIST and CIFAR-10 datasets. In (a) and (b) the number of clients is 3; in (c) and (d) it is 5.}
\label{fig:fedavg Dirichlet:MNIST}
\label{fig:fedavg Dirichlet:CIFAR10}
\label{fig:fedavg Dirichlet:MNIST_5}
\label{fig:fedavg Dirichlet:CIFAR10_5}
\label{fig:fedavg Dirichlet}
\end{figure}
\subsubsection{Comparison with typical FL paradigm}
\label{sec: compare with FL}
In this section, we compare the performance of FedAvg with that of SL paradigms on the MNIST and CIFAR-10 datasets. The experimental results of SL paradigms are shown in Figure~\ref{fig:Dirichlet MNIST}, and those of FedAvg are shown in Figure~\ref{fig:fedavg Dirichlet}.
First, we observe that on both datasets SplitFed always converges faster than FedAvg under the same data partition method, as shown in Figure~\ref{fig:Dirichlet cifar:SFL} and Figure~\ref{fig:fedavg Dirichlet:CIFAR10}.
Second, FedAvg performs better than most SL paradigms in the Non-IID setting when the dataset and model are simple, such as the MNIST dataset with LeNet-5. For example, as illustrated in Figures~\ref{fig:Dirichlet MNIST:PSL}--\ref{fig:Dirichlet MNIST:SGLR} and Figure~\ref{fig:fedavg Dirichlet:MNIST}, when $\alpha$ is small, FedAvg achieves higher accuracy than most SL paradigms, with the exception of SplitFed.
However, when the data and models become more complex, FedAvg learns much more slowly than the SL paradigms. For instance, on the CIFAR-10 dataset, most SL paradigms achieve better accuracy than FedAvg, as shown in Figure~\ref{fig:fedavg Dirichlet:CIFAR10} and Figures~\ref{fig:Dirichlet cifar:PSL}--\ref{fig:Dirichlet cifar:SGLR}.
\subsubsection{The communication cost of different paradigms} \label{sec: communication cost}
We compare the communication cost of PSL, SplitFed, and AsyncSL, representing intermediate-data aggregation, model aggregation, and reduced communication, respectively. The communication cost is computed from the size of the messages transferred between the clients and the server. We set the $loss_{threshold}$ \cite{2021Communication} in AsyncSL to 0.10. We split the networks at different layers to compare the impact of the choice of the cut layer. In Table~\ref{communication cost MNIST}, \emph{small} denotes a client-side network with one convolution layer and \emph{large} one with two convolution layers.
\begin{table}[t] \caption{The communication cost of training per epoch on MNIST when the number of clients is 3.} \label{communication cost MNIST} \vskip 1pt \begin{center} \begin{small} \begin{sc} \begin{tabular}{ccccc} \toprule \multirow{2}{*} {paradigm} & \multicolumn{2}{c}{small(MB)} & \multicolumn{2}{c}{large(MB)} \\ \cline{2-3} \cline{4-5}
&send & receive&send & receive\\ \makecell[c]{\upshape SplitFed \\ \upshape half data} & \makecell[c]{18.33\\ 9.36}& \makecell[c]{197.59\\ 98.86} & \makecell[c]{4.04\\3.97} &\makecell[c]{58.70\\29.40} \\ \cline{1-5} \makecell[c]{\upshape PSL \\ \upshape half data} & \makecell[c]{ 17.86\\8.79 } &\makecell[c]{197.59\\ 98.89 } & \makecell[c]{3.93\\ 3.88 } & \makecell[c]{58.65\\29.42} \\ \cline{1-5} \makecell[c]{\upshape AsyncSL\\ \upshape half data } & \makecell[c]{4.58\\ 2.43 } &\makecell[c]{28.24\\ 15.43}& \makecell[c]{1.84\\ 1.03} & \makecell[c]{8.39\\ 4.32}\\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \end{table}
\begin{itemize}
\item Dataset size affects communication cost. Table~\ref{communication cost MNIST} shows that the communication cost decreases significantly when only half of the dataset is used.
\item Different SL paradigms incur different communication costs. For example, SplitFed sends more data than PSL because clients in SplitFed must also transmit their local model parameters to the server. However, when the model is more complex or the number of clients is much larger, the communication cost of transmitting models cannot be ignored \cite{DBLP:journals/corr/abs-1812-03288}.
\item The choice of the cut layer affects the communication cost. Table~\ref{communication cost MNIST} shows that, when training on the MNIST dataset, the communication cost decreases significantly when clients hold two convolution layers instead of one. This is because the choice of the cut layer determines the size of the smashed data and the corresponding gradients.
\end{itemize}
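The scaling behavior behind the bullet points above can be sketched with a back-of-the-envelope traffic model. This is a hedged sketch under our own simplifying assumptions (float32 values, cut-layer gradients the same size as the activations, model upload only for SplitFed-style paradigms; names are illustrative); it does not reproduce the send/receive asymmetry of the measured values in Table~\ref{communication cost MNIST}, which include additional traffic, but it shows how dataset size and cut-layer shape scale the cost:

```python
def per_epoch_cost_mb(n_samples, cut_shape, n_params_client=0,
                      rounds=1, bytes_per_val=4):
    """Estimate one client's per-epoch traffic in MB.

    Upload: cut-layer activations for every sample, plus (for
    SplitFed-style paradigms) the client sub-model once per round.
    Download: gradients at the cut layer, assumed here to be the
    same size as the activations.
    """
    n_act_vals = n_samples
    for d in cut_shape:
        n_act_vals *= d
    act_bytes = n_act_vals * bytes_per_val
    model_bytes = n_params_client * bytes_per_val * rounds
    send_mb = (act_bytes + model_bytes) / 2**20
    recv_mb = act_bytes / 2**20
    return send_mb, recv_mb
```

Halving `n_samples` halves both directions, matching the "half data" rows, and a deeper client-side cut with a smaller activation map shrinks `cut_shape` and therefore the cost.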
\subsubsection{Comparison with on-device local training} \label{Local training section} One of the primary objectives of SL is to enhance each client's model performance without requiring the exchange of raw data. This section therefore compares the performance of SL with on-device local training. We include an extra set of experiments to demonstrate the effectiveness of on-device local training: each client follows the same data partitioning process as in the other SL experiments and uses its incomplete dataset to train a local model without any communication.
\begin{figure}
\caption{The effect of client number and $\alpha$ on model accuracy.}
\label{fig:LC CN}
\label{fig:LC alpha}
\label{fig:LC}
\end{figure} \nosection{IID settings} The results of local training and two SL paradigms under IID settings are presented in Figure~\ref{fig:LC CN}. On the MNIST dataset, there is little difference among the three algorithms, with only a small drop in local-training accuracy at 10 clients. On the CIFAR-10 dataset, however, local training shows a significant drop, reaching an accuracy of only $71.47\%$ at 10 clients, probably due to the complexity of the learning task: unlike handwritten characters, CIFAR-10 contains real-world objects, which necessitates a sufficient amount of training data. Both SL paradigms show promising results, indicating that they can improve each client's model performance without exchanging the original data.
\nosection{Non-IID settings} The performance of local training and two SL paradigms under varying $\alpha$ in the Non-IID setting is presented in Figure~\ref{fig:LC alpha}. On the MNIST dataset, the performance of both Vanilla and local training decreases as $\alpha$ decreases, but the overall performance of Vanilla remains higher than that of local training; when $\alpha$ is 0, Vanilla's accuracy drops to a level similar to local training. On the CIFAR-10 dataset, Vanilla's performance is lower than expected: when $\alpha$ is below 0.5, it is less accurate than local training.
SplitFed performs well on both datasets, but on the CIFAR-10 dataset there is a large drop in accuracy when $\alpha$ is below 0.1. In the most extreme Non-IID setting (i.e., $\alpha=0$), SplitFed achieves an accuracy of 64.73$\%$. SplitFed also has difficulty converging in this case, but overall its performance is still better than that of the other two paradigms. SplitFed can mitigate this problem by increasing the frequency of aggregation.
In general, Vanilla can help clients improve performance when data is evenly distributed. However, when the task is complex and the Non-IID distribution is extreme, we recommend using SplitFed to help local devices improve performance.
\section{Conclusion} In this paper, we propose SLPerf, an open research library and benchmark designed for SL researchers to facilitate the development of SL paradigms. We summarize and categorize existing SL paradigms and compare their performance on four widely-used datasets under both IID and Non-IID data settings. Our experiments demonstrate the importance of investigating how SL can perform better under Non-IID data settings and determining the optimal layer for model splitting.
\appendix \section*{Acknowledgments} This work was supported by the National Natural Science Foundation of China under Grant No. 62202170.
\end{document}
\begin{document}
\title{Representations associated to small nilpotent orbits for complex Spin Groups}
\begin{abstract} This paper provides a comparison between the $K$-structure of unipotent representations and regular sections of bundles on nilpotent orbits for complex groups of type $D$. Precisely, let $G_0=Spin(2n,{\mathbb{C}})$ be the complex Spin group viewed as a real group, and let $K\cong G_0$ be the complexification of the maximal compact subgroup of $G_0$.
We compute the $K$-spectra of the regular functions on some small nilpotent orbits ${\mathcal{O}}$ transforming according to characters $\psi$ of $C_{K}({\mathcal{O}})$ trivial on the connected component of the identity $C_{K}({\mathcal{O}})^0$. We then match them with the ${K}$-types of the genuine unipotent representations attached to ${\mathcal{O}}$ (\ie those representations which do not factor through $SO(2n,\mathbb C)$).
\end{abstract}
\author{Dan Barbasch}
\address[D. Barbasch]{Department of Mathematics\\
Cornell University\\Ithaca, NY 14850, U.S.A.}
\email{[email protected]} \thanks{D. Barbasch was supported by an NSA grant}
\author{Wan-Yu Tsai} \address[Wan-Yu Tsai]{Institute of Mathematics, Academia Sinica, 6F, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, TAIWAN} \email{[email protected]}
\maketitle
\section{Introduction}
Let $G_0\subset G$ be the real points of a complex linear reductive algebraic group $G$ with Lie algebra $\mathfrak g_0$ and maximal compact subgroup $K_0$. Let $\mathfrak g_0=\mathfrak k_0+\mathfrak s_0$ be the Cartan decomposition, and $\mathfrak g=\mathfrak k+\mathfrak s$ be the complexification. Let $K$ be the complexification of $K_0.$
\begin{definition} Let ${\mathcal{O}}:= K\cdot e\subset \mathfrak s$. We say that an irreducible admissible representation $\Xi$ is associated to ${\mathcal{O}},$ if ${\mathcal{O}}$ occurs with nonzero multiplicity in the associated cycle in the sense of \cite{V2}.
An irreducible module $\Xi$ of $G_0$ is called unipotent associated to a nilpotent orbit ${\mathcal{O}}\subset \mathfrak s$ and infinitesimal character $\la_{{\mathcal{O}}}$, if it satisfies \begin{description} \item[1] It is associated to ${\mathcal{O}}$ and its annihilator
$Ann_{U(\mathfrak g)}\Xi$ is the unique maximal primitive ideal with
infinitesimal character $\la_{{\mathcal{O}}}$, \item[2] $\Xi$ is unitary. \end{description} Denote by ${\mathcal{U}} _{G_0}({\mathcal{O}},\la_{{\mathcal{O}}})$ the set of unipotent representations of $G_0$ associated to ${\mathcal{O}}$ and $\la_{{\mathcal{O}}}$.
\end{definition}
Let $C_K({\mathcal{O}}):= C_K(e)$ denote the centralizer of $e$ in $K$, and let $A_K({\mathcal{O}}):=C_K({\mathcal{O}})/C_K({\mathcal{O}})^0$ be the component group. Assume that $G_0$ is connected, and is a complex group viewed as a real Lie group. In this case $G\cong G_0\times G_0,$ and $K\cong G_0$ as complex groups. Furthermore $\mathfrak s\cong\mathfrak g_0$ as complex vector spaces, and the action of $K$ is the adjoint action. In this case it is conjectured that there exists an infinitesimal character $\la_{{\mathcal{O}}}$ such that, in addition, \begin{description} \item[3] There is a 1-1 correspondence $\psi\in
\widehat{ A_K({\mathcal{O}})}\longleftrightarrow \Xi({\mathcal{O}},\psi)\in
{\mathcal{U}}_{G_0}({\mathcal{O}},\la_{{\mathcal{O}}})$ satisfying the additional condition $$ \Xi({\mathcal{O}},\psi)\mid_{K}\cong R({\mathcal{O}},\psi), $$ \end{description} where \begin{equation}\label{def-reg-fun} \begin{aligned} R({\mathcal{O}}, \psi) &= {\mathrm{Ind}}_{C_{K}(e)} ^{K} (\psi) \\ &= \{f: K\to V_{\psi} \mid f(gx) =\psi(x) f(g) \ \forall g\in K, \ x\in C_K (e)\} \end{aligned} \end{equation} is the space of regular functions on ${\mathcal{O}}$ transforming according to $\psi$. Therefore, $R({\mathcal{O}},\psi)$ carries a $K$-representation.
\begin{comment} {\clrr As pointed out in \cite{BWT}, this conjecture cannot be valid
for real groups. For complex groups the codimension greater than one
of ${\mathcal{O}}$ in ${\overline{\mathcal{O}}}$ is satisfied, so the conjecture is plausible. } \end{comment} Conjectural parameters $\la_{\mathcal{O}}$ satisfying the conditions above are studied in \cite{B}, along with results establishing the validity of this conjecture for large classes of nilpotent orbits in the classical complex groups. Such parameters $\la_{\mathcal{O}}$ are available for the exceptional groups as well, \cite{B} for $F_4$, and to appear elsewhere for type $E.$
{ This conjecture cannot be valid for all nilpotent orbits in the case of real groups; the intersection of a complex nilpotent orbit with $\mathfrak s$ consists of several components. $R({\mathcal{O}},\psi)$ can be the same for different components, whereas the representations with associated variety containing a given component have drastically different $K$-structures. Examples can be found in \cite{V1}. As explained in \cite{V1} Chapter 7 and \cite{V2} Theorem 4.11, if the codimension of the orbit ${\mathcal{O}}$ is $\ge 2,$ then $\Xi\mid_K=R({\mathcal{O}},\phi)-Y$ with $\phi$ an algebraic representation, and $Y$ an $S(\mathfrak g/\mathfrak k)$-module supported on orbits of strictly smaller dimension. The orbits ${\mathcal{O}}$ under consideration in this paper have codimension $\ge 2.$ Even when $\codim{\mathcal{O}} \ge 2,$ (\eg the case of the minimal orbit in certain real forms of type $D,$) many examples are known where there are no representations with associated variety ${\mathcal{O}}$ or any real form of its complexification. }
In this paper we investigate this conjecture for \textit{small} orbits in the complex case by techniques different from those in \cite{B}; the paper \cite{BTs} investigates the analogue for the real $Spin$ groups. For an orbit to be \textit{small} we require that $$ [\mu : R({\mathcal{O}}, \psi)]\le c_{{\mathcal{O}}}, $$ \ie that the multiplicity of any $\mu\in \widehat{K}$ be uniformly bounded. This puts a restriction on $\dim{\mathcal{O}}$:
\begin{equation}\label{eq:dim-cond}
\dim {\mathcal{O}} \leq \text{rank}(\mathfrak k) + |\Delta ^+(\mathfrak k,\mathfrak t)|, \end{equation} where $\mathfrak t\subset \mathfrak k$ is a Cartan subalgebra, and $\Delta^+(\mathfrak k,\mathfrak t)$ is a positive system. The reason for this restriction is as follows. Let $(\Pi,X)$ be an admissible representation of $G_0$, and $\mu$ be the highest weight of a representation $(\pi,V)\in \widehat{K}$ which is dominant for $\Delta^+(\mathfrak k,\mathfrak t)$. Assume that $\dim{\mathrm{Hom}}_K[\pi,\Pi]\le C$, and that $\Pi$ has associated variety ${\overline{\mathcal{O}}}$ (\cf \cite{V2}). Then $$ \dim\{ v\ :\ v\in X \text{ belongs to an isotypic component with } ||\mu||\le t\}\le Ct^{|\Delta^+(\mathfrak k,\mathfrak t)|+\dim \mathfrak t}. $$
The dimension of $(\pi,V)$ grows like $t^{|\Delta^+(\mathfrak k,\mathfrak t)|}$, the number of representations with highest weight $||\mu||\le t$ grows like $t^{\dim\mathfrak t},$ and the multiplicities are assumed uniformly bounded. On the other hand, considerations involving primitive ideals imply that the dimension of this set grows like $t^{\dim G\cdot e/2}$ with $e\in{\mathcal{O}},$ and half the dimension of (the complex orbit) $G\cdot e$ is the dimension of the ($K$-orbit) $K\cdot e\in\mathfrak s.$ In the case of type $D,$ condition (\ref{eq:dim-cond}) coincides with being spherical, see \cite{P}. Since we only deal with characters of $C_{K}({\mathcal{O}})$, multiplicity $\le 1$ is guaranteed.
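For type $D_n$ the right-hand side of (\ref{eq:dim-cond}) can be evaluated explicitly; the following routine check (ours, included for the reader's convenience) shows that the bound is $n^2$:

```latex
\mathrm{rank}(\mathfrak k)+|\Delta^+(\mathfrak k,\mathfrak t)|
  \;=\; n+\#\{\epsilon_i\pm\epsilon_j \mid 1\le i<j\le n\}
  \;=\; n+2\binom{n}{2} \;=\; n^2,
  \qquad \mathfrak k\cong\mathfrak{so}(2n,\mathbb C).
```

This is the bound $\dim{\mathcal{O}}\le n^2$ used to select the orbits in Section \ref{ss:orbits}.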
In the case of the complex groups of type $D_n$, we consider $G_0=Spin(2n,{\mathbb{C}})$ viewed as a real group, so $K\cong G_0$ is the complexification of the maximal compact subgroup $K_0= Spin(2n)$ of $G_0$.
In Section 2 we list all small nilpotent orbits satisfying (\ref{eq:dim-cond}) and describe the component groups of their centralizers. In Section 3, we compute $R({\mathcal{O}},\psi)$ for each ${\mathcal{O}}$ in Section~\ref{ss:orbits} and each $\psi\in \widehat{A_{K}({\mathcal{O}})}$. In Section 4 we associate to each ${\mathcal{O}}$ an infinitesimal character $\la_{{\mathcal{O}}}$ following \cite{B}.
In fact, ${\mathcal{O}}$ is the minimal orbit which can be the associated variety of a $(\mathfrak g, K)$-module with infinitesimal character $(\la_L, \la_R)$, with $\la_L$ and $\la_R$ both conjugate to $\la_{{\mathcal{O}}}$. We make a complete list of the irreducible modules ${\overline{X}}(\la_L,\la_R)$ (in terms of the Langlands classification) which are attached to ${\mathcal{O}}.$ Then we match the $K$-structure of these representations with $R({\mathcal{O}},\psi)$. This establishes the conjecture stated at the beginning of the introduction. The following theorem summarizes this. \begin{theorem} With notation as above, view $G_0={Spin}(2n,{\mathbb{C}})$ as a real group. The $K$-structure of each representation in ${\mathcal{U}}_{G_0}({\mathcal{O}},\la_{{\mathcal{O}}})$ is calculated explicitly and matches the $K$-structure of $R({\mathcal{O}},\psi)$ with $\psi \in \widehat{A_{K}({\mathcal{O}})}$. That is, there is a 1-1 correspondence $\psi\in
\widehat { A_{{K}}({\mathcal{O}})}\longleftrightarrow \Xi({\mathcal{O}},\psi)\in
{\mathcal{U}}_{ G _0 }({\mathcal{O}},\la_{\mathcal{O}})$ satisfying $$ \Xi({\mathcal{O}},\psi)\mid_{ K}\cong R({\mathcal{O}},\psi). $$ \end{theorem}
For the case $O(2n,\mathbb C)$ (rather than $Spin(2n,\mathbb C)$), the $K$-structure of the representations studied in this paper was considered earlier in \cite{McG} and \cite{BP1}.
\section{Preliminaries} \subsection{Nilpotent Orbits} \label{ss:orbits} The complex nilpotent orbits of type $D_n$ are parametrized by partitions of $2n$ in which even parts occur with even multiplicity, with labels $I, II$ in the \textit{very even} case (see \cite{CM}). The small nilpotent orbits satisfying (\ref{eq:dim-cond}) are those ${\mathcal{O}}$ with $\dim {\mathcal{O}}\le n^2$.
We list them as the following four cases:
$$ \begin{aligned} Case\ 1:\ &n=2p&&\quad {\mathcal{O}}=[3\ 2^{n-2} \ 1]&& \dim {\mathcal{O}} = n^2 \\ Case\ 2:\ &n=2p \ \text{ or } \ 2p+1 && \quad {\mathcal{O}} = [ 3\ 2^{2k} \ 1^{2n-4k-3}] \ \ 0\le k \le p -1 &&\dim {\mathcal{O}} =4nk-4k^2+4n-8k-4\\ Case\ 3:\ &n=2p &&\quad{\mathcal{O}}=[2^n]_{I,II} && \dim {\mathcal{O}} = n^2-n \\ Case\ 4:\ &n=2p \ \text{ or } \ 2p+1 &&\quad {\mathcal{O}}=[ 2^{2k} \ 1^{2n-4k}] \ \ 0\le k < n/2&& \dim {\mathcal{O}} = 4nk-4k^2-2k\\ \end{aligned} $$
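As a consistency check (our computation, not in the original), Case 1 is the boundary instance of Case 2: for $n=2p$ and $k=p-1$ the partition $[3\ 2^{2k}\ 1^{2n-4k-3}]$ becomes $[3\ 2^{n-2}\ 1]$, and the dimension formula specializes accordingly:

```latex
4nk-4k^2+4n-8k-4\,\Big|_{\,n=2p,\;k=p-1}
  =(8p^2-8p)-(4p^2-8p+4)+8p-(8p-8)-4
  =4p^2=n^2 .
```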
Note that these are the orbits listed in \cite{McG}. The proof of the next Proposition, and the details about the nature of the component groups, are in Section \ref{ss:clifford}.
\begin{prop} (Corollary \ref{c:cgp})
\begin{description}
\item[Case 1] If ${\mathcal{O}}=[3\ 2^{2p-2}\ 1],$ then $A_{{K}}({\mathcal{O}})\cong \mathbb Z_2\times\mathbb Z_2$. \item[Case 2] If ${\mathcal{O}}=[3\ 2^{2k}\ 1^{2n-4k-3}]$ with $2n-4k-3>1,$ then $A_{{K}}({\mathcal{O}})\cong \mathbb Z_2.$ \item[Case 3] If ${\mathcal{O}}=[2^{2p}]_{I,II}$, then $A_{{K}} ({\mathcal{O}}) \cong\mathbb Z_2.$ \item[Case 4] If ${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}]$ with $2k<n,$ then
$A_{{K}} ({\mathcal{O}}) \cong 1.$
\end{description} In all cases $C_{K}({\mathcal{O}})=Z( K)\cdot C_{ K}({\mathcal{O}})^0.$
\end{prop}
\section{Regular Sections} We use the notation introduced in Sections 1 and 2. We compute the centralizers needed for $R({\mathcal{O}},\psi)$ in $\mathfrak k$ and in ${K}$. We use the standard roots and basis for $\mathfrak{so}(2n,{\mathbb{C}}).$ A basis for the Cartan subalgebra is given by $H({\epsilon}_i)$, the root vectors are $X(\pm{\epsilon}_i\pm{\epsilon}_j)$. Realizations in terms of the Clifford algebra and explicit calculations are in Section \ref{ss:clifford}.
Let $e$ be a representative of the orbit ${\mathcal{O}}$, and let $\{e,h,f\}$ be the corresponding Lie triple. Let \begin{itemize} \item $C_{\mathfrak k} (h)_i$ be the $i$-eigenspace of $ad(h)$ in $\mathfrak k$, \item $C_{\mathfrak k} (e)_i$ be the $i$-eigenspace of $ad(h)$ in the
centralizer of $e$ in $\mathfrak k$, \item $C_\mathfrak k (h)^+:= \sum \limits _{i>0} C_\mathfrak k (h) _i$, and $C_\mathfrak k (e)^+:= \sum \limits _{i>0} C_\mathfrak k (e) _i.$ \end{itemize}
\begin{comment} \begin{table}[h] \caption{$D_n$, $n=2p$} \label{nilp-orbit1} \begin{center}
\begin{tabular} {c| c| c| c | c |c} ${\mathcal{O}}$& $[3 \ 2^{n-2} \ 1 ]$& $[ 3\ 2^{2k} \ 1^{2n-4k-3}]$ & $[2^n]_I $ & $ [2^n]_{II} $ & $[2^{2k} \ 1^{2n-4k}] $ \\ && $0\le k \le p-1$&&& $0\le k \le p -1$ \\ \hline $\dim {\mathcal{O}}$& $n^2 $ & $4nk-4k^2+4n-8k-4$& $n^2-n$& $n^2-n$& $4nk-4k^2-2k$\\\hline $A_K({\mathcal{O}})$& ${\mathbb Z}_2\times {\mathbb Z}_2$&${\mathbb Z}_2$&${\mathbb Z} _2$& ${\mathbb Z} _2$& 1 \end{tabular} \end{center} \end{table}
\begin{table}[h] \caption{$D_n$, $n=2p+1$} \label{nilp-orbit2} \begin{center}
\begin{tabular}{c|c|c} ${\mathcal{O}}$& $[3\ 2^{2k} \ 1^{2n-4k-3} ]$ & $[ 2^{2k} \ 1^{2n-4k}]$ \\ & $0\le k \le p-1$& $0\le k\le p$ \\\hline $\dim {\mathcal{O}}$& $4nk-4k^2+4n-8k-4$ & $4nk-4k^2-2k$\\\hline $A_K({\mathcal{O}})$& ${\mathbb Z}_2$ & 1 \end{tabular} \end{center} \end{table} \end{comment}
\subsection{} We describe the centralizer for ${\mathcal{O}}= [ 3 \ 2^{2k}\ 1^{2n-4k-3}]$ in detail. These are Cases 1 and 2.
Representatives for $e$ and $h$ are $$ \begin{aligned} e&= X({\epsilon}_1-{\epsilon}_{2k+2}) +X({\epsilon}_1+{\epsilon}_{2k+2}) + \sum \limits_{2\le i\le 2k+1} X({\epsilon}_i +{\epsilon} _{k+i})\\ h&= 2 H({\epsilon}_1)+\sum\limits_{2\le i\le 2k+1} H({\epsilon}_i) = H(2, \underset{2k}{\underbrace{1,\dots,1}}, \underset{n-1-2k}{\underbrace{0, \dots, 0}}). \end{aligned} $$
Then \begin{equation} \begin{aligned} C_{\mathfrak k} (h)_0 &= \mathfrak{gl} (1)\times \mathfrak{gl} (2k)\times { \mathfrak{so}(2n-2-4k)}, \\ C_{\mathfrak k} (h)_1 &= Span\{ X({\epsilon}_1 - {\epsilon}_i) , \ X({\epsilon}_i \pm {\epsilon} _j ) , \ 2\le i\le 2k+1<j\le n\}, \\ C_{\mathfrak k} (h)_2 &= Span \{ X( {\epsilon} _1\pm {\epsilon} _j), \ X({\epsilon}_i+ {\epsilon} _l), 2\le i \neq l \le 2k+1<j \le n \}, \\ C_{\mathfrak k} (h)_3 &= Span \{ X( {\epsilon}_1 +{\epsilon} _i ), \ 2\le i\le 2k+1\}. \end{aligned} \end{equation}
Similarly
\begin{equation} \begin{aligned} C_{\mathfrak k}(e)_0 &\cong \mathfrak{sp}(2k)\times{ {\mathfrak{so}(2n-3-4k)}},\\ C_{\mathfrak k} (e)_1 &= Span\{ X( {\epsilon}_1 - {\epsilon}_i ) - X({\epsilon} _{k+i } \pm {\epsilon}_{2k+2}) , \ X({\epsilon} _1 -{\epsilon} _{k+i})- X({\epsilon} _i \pm{\epsilon} _{2k+2}), \ 2\le i \le k+1, \\ & X({{\epsilon}_j }\pm {\epsilon}_l ) , \ 2\le j\le 2k+1, \ 2k+3\le l \le n\},\\ C_{\mathfrak k} (e)_2 &= C_{\mathfrak k} (h)_2 , \\ C_{\mathfrak k} (e)_3 &= C_{\mathfrak k} (h)_3. \end{aligned} \end{equation}
We denote by $\chi$ the trivial character of $C_{\mathfrak k}(e)$. A representation of ${K}$ will be denoted by its highest weight: $$
V=V(a_1,\dots, a_n), \quad a_1\ge \dots \ge a_{n-1}\ge |a_n|, $$ with all $a_i\in {\mathbb Z}$ or all $a_i\in {\mathbb Z}+1/2$.
We will compute \begin{equation} \label{eq:mreal} {\mathrm{Hom}}_{C_{\mathfrak k}(e)}[V^*, \chi]= {\mathrm{Hom}}_{C_{\mathfrak k}(e)_0}\left [V^*/(C_{\mathfrak k}(e)^+V^*), \chi \right ]:= \left (V^*/(C_{\mathfrak k}(e)^+V^*\right )^\chi. \end{equation}
\subsection{Case 1.} $n=2p$, ${\mathcal{O}}=[3\ 2^{n-2}\ 1]$.
In this case $C_{\mathfrak k}(h)_0 = \mathfrak{gl}(1)\times \mathfrak{gl} (n-2)\times \mathfrak{so}(2), C_{\mathfrak k}(e)_0 =\mathfrak{sp}(n-2)$.
Consider the parabolic $\mathfrak p = \mathfrak l +\mathfrak n$ determined by $h$, \begin{equation} \label{eq:ell} \begin{aligned} \mathfrak l &= C_{\mathfrak k}(h)_0 \cong \mathfrak{gl}(1)\times \mathfrak{gl}(n-2)\times \mathfrak{so}(2), \\ \mathfrak n &= C_{\mathfrak k}(h)^+. \end{aligned} \end{equation} We denote by $V^*$ the dual of $V$. Since $n=2p,$ $V^*\cong V.$ Then $V^*$ is a quotient of a generalized Verma module $M(\la)=U(\mathfrak k)\otimes _{U(\overline{\mathfrak p} )} F(\la)$, where $\la$ is a weight of $V^*$ which is dominant for $\overline{\mathfrak p}$. This is $$ \la = (-a_1;-a_{n-1} , \dots, -a_2; -a_n ). $$ The $;$ denotes the fact that this is a (highest) weight of $\mathfrak l\cong \mathfrak{gl}(1)\times \mathfrak{gl}(n-2)\times \mathfrak{so}(2)$.
We choose the standard positive root system $\triangle^+ (\mathfrak l)$ for $\mathfrak l$. { As a $C_\mathfrak k(e)_0$-module, $$ \mathfrak n =C_{\mathfrak
k}(e)^+\oplus \mathfrak n ^{\perp}, $$ where we can choose $\mathfrak n ^{\perp}=Span \{ X({\epsilon}_1-{\epsilon}_{j} ) , \ 2\le j\le n-1 \}$. This complement is $\mathfrak l$-invariant.
It restricts to the standard module of $C_{\mathfrak k}(e)_0=\mathfrak{sp}(n-2).$ }
The generalized Bernstein-Gelfand-Gelfand resolution is: \begin{equation}
\label{eq:bgg:cx} 0 \dots \longrightarrow \bigoplus _{w\in W^+, \ \ell(w)=k}M(w\cdot\la) \longrightarrow\dots \longrightarrow \bigoplus _{w\in W^+, \ \ell(w)=1}M(w\cdot\la) \longrightarrow M(\la) \longrightarrow V^* \longrightarrow 0, \end{equation} with $w\cdot \la:= w(\la+\rho(\mathfrak k))-\rho(\mathfrak k)$, and $w\in W^+$, the $W(\mathfrak l)$-coset representatives that make $w\cdot \la$ dominant for $\Delta^+(\mathfrak l).$ This is a free $C_{\mathfrak k}(e)^+$-resolution so we can compute cohomology by considering \begin{equation}
\label{eq:coh} 0 \dots \longrightarrow \bigoplus _{w\in W^+, \ \ell(w)=k} \overline{M(w\cdot\la)} \longrightarrow\dots \longrightarrow \bigoplus _{w\in W^+, \ \ell(w)=1}\overline{M(w\cdot\la)} \longrightarrow \overline{M(\la)} \longrightarrow \overline{V^*} \longrightarrow 0, \end{equation} where $\overline{X}$ denotes $X/[C_{\mathfrak k}(e)^+ X]$.
Note that in the sequences, $M(w\cdot\la)\cong S(\mathfrak n)\otimes_{\mathbb C} F(w\cdot\la)$ and $\overline{M(w\cdot\la)}\cong S(\mathfrak n^{\perp})\otimes_{\mathbb C} F(w\cdot\la)$. As an $\mathfrak l$-module, $\mathfrak n^{\perp}$ has highest weight $(1;0, \dots,0,-1 ;0)$. Then $S^k(\mathfrak n^\perp)\cong F(k;0,\dots ,0 ,-k;0)$ as an $\mathfrak l$-module.
Let $\mu:=(-\alpha_1 ; -\alpha_{n-1} , \dots, -\alpha_2 ; -\alpha_n)$ be the highest weight of an $\mathfrak l$-module. By the Littlewood-Richardson rule, \begin{equation}
\label{eq:tensorproduct} S^k(\mathfrak n ^{\perp})\otimes F_{\mu}=\sum V(-\alpha_1 +k; -\alpha_{n-1} -k_{n-1} , \dots, -\alpha_3-k_3, -\alpha_2-k_2;-\alpha_n ). \end{equation} The sum is taken over
$$\{k_i\ | \ k_i\ge 0,\ \sum k_i =k, \ 0\le k_i\le \alpha_{i-1}-\alpha_{i}, \ 3\le i \le n-1\}.$$
\begin{lemma}\label{le:Sn}
${\mathrm{Hom}}_{C_{\mathfrak k}(e)_0}[S^k (\mathfrak n ^{\perp}) \otimes F_{\mu} : \chi] \neq 0$ for every $\mu$. The multiplicity is 1. \begin{proof} Since $(\mathfrak{gl}(n-2),\mathfrak{sp}(n-2))$ is a hermitian symmetric pair, Helgason's theorem implies that a composition factor in $S(\mathfrak n^\perp)\otimes F_{\mu}$ admits $C_{\mathfrak k}(e)_0$-fixed vectors only if $$ -\alpha_{n-1}-k_{n-1}=-\alpha_{n-2}-k_{n-2},\ -\alpha_{n-3}- k_{n-3}=-\alpha_{n-4}-k_{n-4},\dots ,-\alpha_3-k_3=-\alpha_2-k_2. $$
The conditions ${ 0\le k_i\le \alpha_{i-1}-\alpha_{i} } $ imply \begin{equation} \begin{aligned}
k_{n-2} =0, &\quad k_{n-1} = \alpha_{n-2}-\alpha_{n-1}, \\
\hspace*{4em} \vdots\\
k_4=0,& \quad k_5 =\alpha_4-\alpha_5, \\
k_2=0, & \quad k_3=\alpha_2-\alpha_3.
\end{aligned} \end{equation}
Therefore, given $\mu$, the weight of the $C_{\mathfrak k}(e)_0$-fixed vector in $S(\mathfrak n ^{\perp})\otimes F_{\mu}$ is $$ (-\alpha_1+\alpha_2-\alpha_3+\alpha_4-\alpha_5+\dots+\alpha_{n-2}-\alpha_{n-1} ; -\alpha_{n-2},-\alpha_{n-2}, \dots, -\alpha_2,\alpha_2; -\alpha_n), $$ and the multiplicity is 1.
\end{proof} \end{lemma}
\begin{cor} For every $V(a_1,\dots,a_n)\in \widehat{{K}}$, ${\mathrm{Hom}}_{C_{\mathfrak k}(e)} [V,\chi]= 0$ or 1. The action of $\ad h$ is $-2\sum\limits_{1\le i \le p} a_{2i-1}$. \begin{proof} The first statement follows from Lemma \ref{le:Sn} and the surjection $$ \overline{M(\la)}\cong S(\mathfrak n^{\perp}) \otimes_{{\mathbb{C}}} F(\la)\longrightarrow \overline{V^*}\longrightarrow 0. $$ The action of $\ad h$ is computed from the module \begin{eqnarray}\label{eq:fix-vec} V(-a_1+k; -a_{n-2}, -a_{n-2} , \dots, -a_2,-a_2 ; -a_n) \end{eqnarray} with $k=a_2-a_3+a_4-a_5+\dots+a_{n-2}-a_{n-1}$. The value is $-2\sum\limits_{1\le i \le p} a_{2i-1}$. \end{proof} \end{cor}
\subsubsection*{$\mathbf{\ell(w)=1}$} To show that the weights in (\ref{eq:fix-vec}) actually occur, it is enough to show that these weights do not occur in the term in the BGG resolution (\ref{eq:coh}) with $\ell(w)=1$.
We calculate $w\cdot\la:$ $$ \rho=\rho(\mathfrak k)=(-(n-1) ; -1, -2, \dots, -(n-2); 0) $$ is dominant for $\overline{\mathfrak p}$, and $$
\la +\rho = (-a_1-n+1; -a_{n-1}-1, -a_{n-2} -2 , \dots, -a_2-n+2; -a_n).
$$
There are three elements $w\in W^+$ of length 1. They are the left
$W(\mathfrak l)$-cosets of $$ w_1=s_{{\epsilon}_1- { {\epsilon}_{n-1} }}, \ w_2= s_{ { {\epsilon}_{2}}-{\epsilon}_n}, w_3=s_{ { {\epsilon}
_{2} } +{\epsilon} _n}. $$ So \begin{equation}
\begin{aligned}
w_1\cdot \la &= { (-a_2 +1 ; -a_{n-1}, -a_{n-2}, \dots, -a_4, -a_3,-a_1-1 ; -a_n)},\\
w_2\cdot \la&= { (-a_1; -a_n+1, -a_{n-2}, -a_{n-3}, \dots, -a_3, -a_2; -a_{n-1}-1 )} ,\\
w_3\cdot\la &= { (-a_1; a_n +1, -a_{n-2},-a_{n-3} ,\dots, -a_3,-a_2;a_{n-1}+1)}. \end{aligned}
\end{equation}
\begin{lemma} For all $\la$, $\dim {\mathrm{Hom}}_{C_{\mathfrak k}(e)}[\overline{M(w_i\cdot \la)} , \chi ]=1$. The eigenvalues of $\ad h$ are different from $-2\sum \limits_{1\le i\le p}a_{2i-1}$ for each $w_i$.
\begin{proof}
The $\mathfrak{sp}(n-2)$-fixed weights coming from $S(\mathfrak n^\perp)\otimes F(w_i\cdot \la)$, $i=1, 2, 3$, are
\begin{equation}\label{eq:spn-2} { \begin{aligned}
w_1&&\longleftrightarrow& \mbox{ \footnotesize $( a_1-a_2-a_3 +a_4 -a_5+\dots +a_{n-2}-a_{n-1} +2 ; -a_{n-2},-a_{n-2}, \dots , -a_4,-a_4, -a_2,-a_2; -a_n)$ },\\
w_2 &&\longleftrightarrow & \mbox{ \footnotesize $ (-a_1+a_2-a_3+\dots +a_{n-4}-a_{n-3}+a_{n-2}-a_n+1; -a_{n-2}, -a_{n-2}, \dots,-a_4, -a_4,-a_2,-a_2;-a_{n-1}-1)$ },\\
w_3 &&\longleftrightarrow& \mbox{ \footnotesize $(-a_1+a_2- a_3+\dots + a_{n-4}-a_{n-3}+a_{n-2}+a_n+1 ; -a_{n-2} , -a_{n-2}, \dots,-a_4,-a_4, -a_2, -a_2; a_{n-1}-1)$}. \end{aligned}}
\end{equation}
The negatives of the weights of $h$ are
{ \begin{equation}
\label{eq:wth}
\begin{aligned} &w_0=1&&\longleftrightarrow &&2(a_1+a_3+\dots +a_{n-1}),\\ &w_1&&\longleftrightarrow &&2(a_2+a_3+a_5 \dots +a_{n-1}+1),\\ &w_2&&\longleftrightarrow &&2(a_1+a_3+\dots +a_{n-3} +a_{n}-1),\\ &w_3&&\longleftrightarrow &&2(a_1+a_3+\dots +a_{n-3}-a_n -1).
\end{aligned} \end{equation} } The last three weights are not equal to the first one. This completes the proof. \end{proof} \end{lemma}
\begin{theorem}\label{th:reg} Every representation $V(a_1,\dots,a_n)$ has $C_{\mathfrak k}(e)$-fixed vectors, and the multiplicity is 1. We write $C_K({\mathcal{O}}):= C_K(e)$.
In summary, $$ \text{Ind}_{C_{{K}}({\mathcal{O}})^0} ^{{K}} (Triv)=\bigoplus _{V(a_1,\dots,a_n) \in \widehat{{K}} } V(a_1,\dots,a_n). $$
\end{theorem}
Theorem \ref{th:reg} can be
interpreted as computing regular functions on the universal cover
$\widetilde{\mathcal{O}}$ of
${\mathcal{O}}$ transforming trivially under $C_{\mathfrak k}(e)_0$. We decompose it further: \begin{eqnarray}\label{eq:decomp} R(\widetilde{{\mathcal{O}}}, Triv):= {\mathrm{Ind}}_{C_{{ K} } ({\mathcal{O}})^0} ^{ K} (Triv) = {\mathrm{Ind}} _{C_{{K}} ({\mathcal{O}})} ^{{K}} \left
[{\mathrm{Ind}}_{C_{ {K}} ({\mathcal{O}})^0} ^{ C_{ {K}} ({\mathcal{O}}) } (Triv) \right ]. \end{eqnarray} The inner induced module splits into \begin{equation} \label{eq:char} {\mathrm{Ind}}_{C_{ {K}} ({\mathcal{O}})^0} ^{ C_{ {K}} ({\mathcal{O}}) } (Triv)=\sum\psi \end{equation} where $\psi $ are the irreducible representations of $C_{ K}({\mathcal{O}})$ trivial on $C_{ K}({\mathcal{O}})^0.$ Thus, the sum in (\ref{eq:char}) is taken over $\widehat{A_K({\mathcal{O}})}$.
Then \begin{equation}\label{sum-reg-fun} R(\widetilde{{\mathcal{O}}}, Triv) =\text{Ind}_{C_{ K} ({\mathcal{O}})^0} ^{ K}(Triv)=\sum \limits_{\psi \in \widehat{A_K({\mathcal{O}})} } R({\mathcal{O}},\psi). \end{equation}
We will decompose $R({\mathcal{O}},\psi)$ explicitly as a representation of ${K}$.
\begin{lemma}
Let $\mu_i$, $1\le i \le 4$, be the following $K$-types parametrized by their highest weights: \begin{eqnarray*} &\mu_1= (0, \dots, 0), \mu _2 = (1,0,\dots ,0), \\ &\mu_3= (\frac{1}{2},\dots,\frac{1}{2} ), \mu_4= (\frac{1}{2},\dots,\frac{1}{2},-\frac{1}{2} ). \end{eqnarray*} Let $\psi_i $ be the restriction of the highest weight of $\mu _i$ to $C_{{K}}({\mathcal{O}})$, respectively. Then \begin{eqnarray*} \text{Ind}_{C_{{K}}({\mathcal{O}})^0} ^{ C_{{K}}({\mathcal{O}}) } (Triv)=\sum \limits_{i=1} ^4 \psi _i. \end{eqnarray*}
\end{lemma}
\begin{prop} \label{p:regfun1} The induced representation (\ref{sum-reg-fun}) decomposes as $$ \text{Ind} _{C_{ K} ({\mathcal{O}}) } ^{ K} (Triv) =\sum _{i=1} ^4 R({\mathcal{O}},\psi_i) $$ where \begin{eqnarray*}
R({\mathcal{O}}, \psi _1)&= \text{Ind} _{C_{ K} ({\mathcal{O}}) } ^{ K}(\psi_1)=\bigoplus V(a_1,\dots,a_n)&\quad \text{ with } a_i\in {\mathbb Z}, \ \sum a_i \in 2{\mathbb Z}, \\ R({\mathcal{O}}, \psi _2)&= \text{Ind} _{C_{ K} ({\mathcal{O}}) } ^{ K} (\psi_2)=\bigoplus V(a_1,\dots,a_n)&\quad \text{ with } a_i\in {\mathbb Z}, \sum a_i \in 2{\mathbb Z}+1 , \\
R({\mathcal{O}}, \psi _3)&= \text{Ind} _{C_{ K} ({\mathcal{O}}) } ^{ K}(\psi_3)=\bigoplus V(a_1,\dots,a_n) &\quad \text{ with } a_i\in {\mathbb Z}+1/2,\sum a_i \in 2{\mathbb Z}+p,\\
R({\mathcal{O}}, \psi _4)&= \text{Ind} _{C_{ K} ({\mathcal{O}}) } ^{ K} (\psi_4)=\bigoplus V(a_1,\dots,a_n)&\quad \text{ with } a_i\in {\mathbb Z}+1/2, \sum a_i \in 2{\mathbb Z}+p+1. \end{eqnarray*}
\end{prop}
\subsection{Case 2} ${\mathcal{O}}=[3\ 2^{2k} \ 1^{2n-4k-3}],$ $0\le k \le p-1$.
\subsubsection*{} Consider the parabolic $\mathfrak p = \mathfrak l +\mathfrak n$ determined by $h$: \begin{equation*} \begin{aligned} \mathfrak l &= C_{\mathfrak k}(h)_0 \cong \mathfrak{gl}(1)\times \mathfrak{gl}(2k)\times {\mathfrak{so}(2n-2-4k)}, \\ \mathfrak n &= C_{\mathfrak k}(h)^+. \end{aligned} \end{equation*} In this section, let ${\epsilon}=-1$ when $n$ is even; ${\epsilon}=1$ when $n$ is odd. The dual of $V,$ denoted $V^*$, has lowest weight $({\epsilon} a_n,-a_{n-1},\dots, -a_2, -a_1)$. It is therefore a quotient of a generalized Verma module $M(\la)=U(\mathfrak k)\otimes _{U(\overline{\mathfrak p} )} F(\la)$, where $\la$
is dominant for $\overline{\mathfrak p}$, and dominant for the standard
positive system for $\mathfrak l:$ $$ {\la = (-a_1; \underset{2k}{\underbrace{ -a_{2k+1} , \dots, -a_3,
-a_2}}; \underset{n-1-2k}{\underbrace{ a_{2k+2}, \dots, a_{n-1}, {\epsilon} a_n}}).} $$ $\mathfrak n =C_{\mathfrak k}(e)^+\oplus \mathfrak n ^{\perp}$ as a module for $C_{\mathfrak
k}(e)_0$. A basis for $\mathfrak n^\perp\subset C_\mathfrak k(h)_1$ is given by $$ { \{ X({\epsilon}_1{ -} {\epsilon} _i) \}}, \quad 2\le i\le 2k+1. $$ This is the standard representation of $\mathfrak{sp}(2k),$ trivial for {$\mathfrak{so}(2n-4-4k).$} We write {its highest weight as} $$ {(1;0,\dots ,0,-1;0,\dots ,0)}. $$ We can now repeat the argument for the case $k=p;$ there is an added constraint that ${a_{2k+3}}=\dots =a_n=0$ because the representation with highest weight $(a_{2k+2},\dots ,a_{n-1},{\epsilon} a_n)$ of $\mathfrak{so}(2n-2-4k)$ must have fixed vectors for ${\mathfrak{so}(2n-3-4k)}.$
Then the next theorem follows.
\begin{theorem}
A representation $V(a_1,\dots,a_n)$ has $C_{\mathfrak k }(e)$ fixed vectors if and only if $$a_{2k+3}=\dots=a_n=0,$$ and the multiplicity is 1. In summary, $$ \text{Ind}_{C_{ K}({\mathcal{O}})^0}^{ K} (Triv)=\bigoplus V(a_1,\dots, a_{2k+2},0\dots,0), \quad \text{ with } a_1\ge \dots \ge a_{2k+2}\ge 0, \ a_i\in {\mathbb Z}. $$ \end{theorem}
As in (\ref{sum-reg-fun}), we decompose $\text{Ind}_{C_{ K}({\mathcal{O}})^0}^{ K} (Triv)$ further into a sum of $R({\mathcal{O}},\psi)$ with $\psi\in\widehat{A_K({\mathcal{O}})}$.
\begin{lemma}
Let $\mu_1, \mu_2 $ be the following ${K}$-types parametrized by their highest weights: \begin{eqnarray*} \mu_1= (0, \dots, 0), \mu _2 = (1,0,\dots ,0). \end{eqnarray*} Let $\psi_i $ be the restriction of the highest weight of $\mu _i$ to $C_{K}({\mathcal{O}})$, respectively. Then \begin{eqnarray*} \text{Ind}_{C_{ K} ({\mathcal{O}})^0} ^{ C_{ K}({\mathcal{O}}) } (Triv)=\psi_1 +\psi _2. \end{eqnarray*}
\end{lemma}
\begin{prop}\label{p:regfun2} The induced representation (\ref{sum-reg-fun}) decomposes as $$ \text{Ind} _{C_{ K} ({\mathcal{O}}) ^0} ^{ K} (Triv) = R({\mathcal{O}},\psi_1) + R({\mathcal{O}},\psi_2) $$ where \begin{eqnarray*}
R({\mathcal{O}}, \psi _1)&= \text{Ind} _{C_{K} ({\mathcal{O}}) } ^{ K}(\psi_1)=\bigoplus V(a_1,\dots,a_{2k+2},0,\dots,0)&\quad \text{ with } a_i\in {\mathbb Z}, \ \sum a_i \in 2{\mathbb Z}, \\ R({\mathcal{O}}, \psi _2)&= \text{Ind} _{C_{ K} ({\mathcal{O}}) } ^{ K} (\psi_2)=\bigoplus V(a_1,\dots,a_{2k+2},0,\dots,0)&\quad \text{ with } a_i\in {\mathbb Z}, \sum a_i \in 2{\mathbb Z}+1. \\ \end{eqnarray*}
\end{prop}
\subsection{} Now we treat ${\mathcal{O}}=[2^{2k} \ 1^{2n-4k}]$ with $0\le k\le p$. These are Cases 3 and 4. When $k=p$ (and hence $n=2p$), there are two orbits, labeled $I$ and $II$. The computation is similar to, and easier than, the previous two cases. We state the results for $R(\widetilde{{\mathcal{O}}},Triv)$ as follows.
\begin{theorem}\
\begin{description} \item[Case 3] For $k=p$, so $n=2p,$ $$ \begin{aligned} &{\mathcal{O}} _I =[ 2^n]_I,\qquad &R(\widetilde{{\mathcal{O}}_I},Triv)= \text{Ind}_{C_{ K}({\mathcal{O}}_{I}) ^0} ^{ K}(Triv)=\bigoplus V(a_1, a_1, a_3, a_3, \dots, a_{n-1}, a_{n-1}),\\ &{\mathcal{O}} _{II}=[ 2^n]_{II},\qquad &R(\widetilde{{\mathcal{O}}_{II}},Triv)=\text{Ind}_{C_{ K}({\mathcal{O}}_{II}) ^0} ^{ K}(Triv)=\bigoplus V(a_1, a_1, a_3, a_3, \dots,
a_{n-1}, -a_{n-1}).
\end{aligned}
$$ satisfying $a_1\ge a_3\ge \dots \ge a_{n-1}\ge 0$, with the $a_i$ either all in ${\mathbb Z}$ or all in ${\mathbb Z}+\frac12$. \item[Case 4] {For $k\le p-1$}, $$ {\mathcal{O}}= [2^{2k}\ 1^{2n-4k}],\qquad R(\widetilde{{\mathcal{O}}},Triv)=\text{Ind}_{C_{ K}({\mathcal{O}}) ^0} ^{ K}(Triv)= {\bigoplus V(a_1, a_1, a_3, a_3, \dots, a_{2k-1}, a_{2k-1}, 0,\dots, 0)}, $$ satisfying $a_1\ge a_3\ge \dots \ge a_{2k-1}\ge 0$ and $a_i\in{\mathbb Z}$. \end{description} \begin{proof}
We treat the case $n=2p$ and ${k\le p-1};$ $n=2p+1$ is similar. {A representative of ${\mathcal{O}}$ is}
$e=X({\epsilon}_1+{\epsilon}_{2})+\dots +X({\epsilon}_{2k-1}+{\epsilon}_{2k})$, and the corresponding middle element in the Lie
triple is $h=H(\underset{2k}{\underbrace{{1,\dots
,1}}},\underset{n-2k}{\underbrace{{0,\dots ,0}}})$. Thus \begin{equation}
\label{eq:ch_i}
\begin{aligned} &C_{\mathfrak k}(h)_0=\mathfrak{gl}({2k})\times \mathfrak{so}(2n-{4k})\\ &C_{\mathfrak k}(h)_1=Span\{ X({\epsilon}_i\pm {\epsilon}_j)\}\qquad {1\le i \le 2k< j\le n},\\ &C_{\mathfrak k}(h)_2=Span\{ X({\epsilon}_l+{\epsilon}_m)\}\qquad 1\le l\ne m\le {2k}.
\end{aligned} \end{equation} and \begin{equation}
\begin{aligned} &C_{\mathfrak k}(e)_0=\mathfrak{sp}(2k)\times \mathfrak{so}(2n-4k)\\ &C_{\mathfrak k}(e)_1=C_{\mathfrak k}(h)_1,\\ &C_{\mathfrak k}(e)_2=C_{\mathfrak k}(h)_2.
\end{aligned} \label{eq:cei} \end{equation} As before, let $\mathfrak p=\mathfrak l+\mathfrak n$ be the parabolic subalgebra determined by $h,$ and $V=V(a_1,\dots ,a_n)$ be an irreducible representation of $K$. Since we assumed $n=2p,$ $V=V^*.$ In this case $C_{\mathfrak k}(e)^+=\mathfrak n,$ so Kostant's theorem implies $V/[C_{\mathfrak k}(e)^+V]=V_{\mathfrak l}(a_1,\dots a_{2k};a_{2k+1},\dots ,a_n)$ as a $\mathfrak{gl}(2k)\times \mathfrak{so}(2n-4k)$-module. Since we want $\mathfrak{sp}(2k)\times \mathfrak{so}(2n-4k)$-fixed vectors, $a_{2k+1}=\dots =a_n=0,$ and Helgason's theorem implies $a_1=a_2,a_3=a_4,\dots ,a_{2k-1}=a_{2k}$.
{When $n=2p$ and ${\mathcal{O}}=[2^n]_{I,II}$, the calculations are similar to the case $k\le p-1.$ The choices for $I,II$ are
$$
\begin{aligned} & e_I=X({\epsilon}_1-{\epsilon}_2)+X({\epsilon}_3-{\epsilon}_4)+\dots+X({\epsilon}_{n-1}-{\epsilon}_n)\quad &&h_I=H(1,\dots,1),\\ &e_{II}=X({\epsilon}_1-{\epsilon}_2)+X({\epsilon}_3-{\epsilon}_4)+\dots +X({\epsilon}_{n-3}-{\epsilon}_{n-2})+X({\epsilon}_{n-1}+{\epsilon}_n),\quad &&h_{II}=H(1,\dots,1, -1). \end{aligned} $$ These orbits are induced from the two nonconjugate maximal parabolic subalgebras with $\mathfrak{gl}(n)$ as Levi components, and $R(\widetilde{{\mathcal{O}}_{I,II}},Triv)$ are just the induced modules from the trivial representation on the Levi component. } \end{proof} \end{theorem}
We aim at decomposing $R(\widetilde{{\mathcal{O}}},Triv) =\sum R({\mathcal{O}},\psi)$ with $\psi\in \widehat{A_K({\mathcal{O}})}$ as before.
\begin{lemma}\ \begin{description}
\item[Case 3] { $n=2p$, ${\mathcal{O}} = [2^n]_{I,II}$}. Let $\mu_1$, $\mu_2$, $\nu_1$, $\nu_2$ be: \begin{eqnarray*} &\mu_1= (1, \dots, 1), \mu _2 = (\frac{1}{2},\dots,\frac{1}{2}), \\ &\nu_1= (1,\dots,1,-1), \nu_2= (\frac{1}{2},\dots,\frac{1}{2},-\frac{1}{2} ). \end{eqnarray*} Let $\psi_i $ be the restriction of the highest weight of $\mu _i$ to $C_{K}(e)$, and $\phi _i$ the restriction of the highest weight of $\nu_i$, respectively. Then \begin{eqnarray*} \text{Ind}_{C_K ({\mathcal{O}}_I)^0} ^{ C_K ({\mathcal{O}}_I) } (Triv)&=&\psi_1 +\psi _2,\\ \text{Ind}_{C_K ({\mathcal{O}}_{II})^0} ^{ C_K ({\mathcal{O}}_{II}) } (Triv)&=&\phi_1 +\phi _2. \end{eqnarray*} The $\psi_i,\phi_i$ are viewed as elements of $\widehat{A_K({\mathcal{O}}_{I,II})}$; $\psi_1$ and $\phi_1$ are $Triv,$ while $\psi_2,\phi_2$ are $Sgn.$
\item[Case 4] { ${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}]$, $0\le k \le p-1$}. \begin{eqnarray*} \text{Ind}_{C_K ({\mathcal{O}})^0} ^{ C_K({\mathcal{O}}) } (Triv)=Triv. \end{eqnarray*}
\end{description}
\end{lemma}
Then we are able to split up $R(\widetilde{{\mathcal{O}}},Triv)$ as a sum of $R({\mathcal{O}},\psi)$ as in (\ref{sum-reg-fun}).
\begin{prop} \label{p:regfun3}\
\begin{description} \item[Case 3] { $n=2p$, ${\mathcal{O}} = [2^n]_{I,II}$}: $R(\widetilde{{\mathcal{O}}}_{I, II},Triv) = R({\mathcal{O}}_{I,II},Triv) + R({\mathcal{O}}_{I,II}, Sgn)$ with \begin{eqnarray*}
R({\mathcal{O}}_I, Triv)&=& \text{Ind} _{C_{ K}({\mathcal{O}} _I) } ^{ K} (Triv)=\bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},a_{n-1}), \quad \text{ with } a_i\in{\mathbb Z}, \\ R({\mathcal{O}}_I, Sgn)&= &\text{Ind} _{C_{ K} ({\mathcal{O}} _I) } ^{ K}(Sgn)=\bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},a_{n-1}), \quad \text{ with } a_i\in{\mathbb Z} +1/2, \\ R({\mathcal{O}}_{II}, Triv)&=&\text{Ind} _{C_{ K}({\mathcal{O}} _{II}) } ^{ K}(Triv)=\bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},-a_{n-1}), \quad \text{ with } a_i\in{\mathbb Z}, \\ R({\mathcal{O}}_{II}, Sgn)&=&\text{Ind} _{C_{ K}({\mathcal{O}} _{II}) } ^{ K}(Sgn)=\bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},-a_{n-1}), \quad \text{ with } a_i\in{\mathbb Z} +1/2, \end{eqnarray*} satisfying $a_1\ge a_3 \ge \dots\ge a_{n-1}\ge 0$.
\item[Case 4] { ${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}]$, $0\le k \le p-1$}:
\begin{eqnarray*}
R(\widetilde{{\mathcal{O}}},Triv)=R({\mathcal{O}},Triv)= \text{Ind} _{C_{ K} ({\mathcal{O}})} ^{ K} (Triv)= \bigoplus V(a_1,a_1,a_3,a_3,\dots, a_{2k-1},a_{2k-1},0,\dots,0), \ \text{ with } a_i\in {\mathbb Z},
\end{eqnarray*}
satisfying $a_1\ge a_3 \ge \dots\ge a_{2k-1}\ge 0$. \end{description} \end{prop}
\section{Representations with small support}
\subsection{Langlands Classification} Let $G$ be a complex linear algebraic reductive group viewed as a real Lie group. Let $\theta$ be a Cartan involution with fixed point group $K.$ Let $B=HN\subset G$ be a Borel subgroup containing a fixed $\theta$-stable Cartan subgroup $H=TA$, with $$ \begin{aligned} &T=\{ h\in H\ \mid \ \theta(h)=h\},\\ &A=\{h\in H\ \mid \ \theta(h)=h^{-1}\}. \end{aligned} $$
The Langlands classification is as follows. Let $\chi\in\widehat H.$ Denote by $$ X(\chi):=Ind_B^G[\chi\otimes \one]_{K\text{-finite}} $$ the corresponding admissible standard module (Harish-Chandra induction). Let $(\mu,\nu)$ be the differentials of $\chi\mid_T$ and $\chi\mid_A$ respectively. Let $\la_L=(\mu+\nu)/2$ and $\la_R=(\mu-\nu)/2$. We write $X(\mu,\nu)=X(\la_L,\la_R)=X(\chi).$ \begin{theorem}\
\label{t:langlands}
\begin{enumerate}
\item $X(\mu,\nu)$ has a unique irreducible subquotient denoted
$\overline{X}(\mu,\nu)$ which contains the $K$-type with extremal
weight $\mu$ occurring with multiplicity one in $X(\mu,\nu).$ \item $\overline{X}(\mu,\nu)$ is the unique irreducible quotient when
$\langle Re\nu,\al\rangle >0$ for all $\alpha\in\Delta(\mathfrak n,\mathfrak h),$ and
the unique irreducible submodule when $\langle Re\nu,\al\rangle <0$.
\item $\overline{X}(\mu,\nu)\cong\overline{X}(\mu',\nu')$ if and only if there
is $w\in W$ such that $w\mu=\mu', w\nu=\nu'.$ Similarly for
$(\la_L,\la_R).$
\end{enumerate} \end{theorem} Assume $\la_L,\ \la_R$ are both dominant integral. Write $F(\la)$ for the finite dimensional representation of $G$ with infinitesimal character $\la$. Then ${\overline{X}}(\la_L,-\la_R)$ is the finite dimensional representation $F(\la_L)\otimes {F(-w_0\la_R)}$, where $w_0\in W$ is the long Weyl group element. The lowest $K$-type has extremal weight $\la_L-\la_R$. Weyl's character formula implies $$ {\overline{X}}(\la_L,-\la_R)=\sum \limits _{w\in W} {\epsilon}(w)X(\la_L,-w\la_R). $$
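As a minimal illustration of the last character identity, take $G=SL(2,{\mathbb{C}})$, so that $W=\{1,s\}$ with $s\la_R=-\la_R$ and ${\epsilon}(s)=-1$; the sum then has exactly two terms:

```latex
\[
{\overline{X}}(\la_L,-\la_R)
  =\sum_{w\in W}{\epsilon}(w)\,X(\la_L,-w\la_R)
  =X(\la_L,-\la_R)-X(\la_L,\la_R),
\]
```

one term per Weyl group element, with signs given by ${\epsilon}(w)$.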
\subsubsection*{} For the remainder of this section we change notation as follows. We write
$(\widetilde{G}, \widetilde{K})=(Spin (2n,{\mathbb{C}}), Spin(2n) )$ and $(G,K)=(SO(2n,{\mathbb{C}}) , SO(2n))$.
\subsection{Infinitesimal characters} \label{s:infchar}
From \cite{B}, we can associate to each ${\mathcal{O}}$ in Section \ref{ss:orbits} an infinitesimal character $\la_{{\mathcal{O}}}$: namely, ${\mathcal{O}}$ is the minimal orbit that can occur as the associated variety of a $(\mathfrak g, K)$-module with infinitesimal character $(\la_L, \la_R)$, where $\la_L$ and $\la_R$ are both conjugate to $\la_{{\mathcal{O}}}$.
The $\la_{{\mathcal{O}}}$ are listed below.
\begin{description} \item[Case 1] {$n=2p$, ${\mathcal{O}}=[3 \ 2^{n-2} \ 1]$,} $$\la_{{\mathcal{O}}} =\rho/2= (p-\frac{1}{2}, \dots,\frac{3}{2}, \frac{1}{2}\mid p-1,\dots, 1, 0).$$
\item[Case 2] { ${\mathcal{O}}=[3\ 2^{2k} \ 1^{2n-4k-3}]$, $0\le k\le p-1$,} $$ \la_{{\mathcal{O}}}=( k+\frac{1}{2}, \dots,\frac{3}{2}, \frac{1}{2}\mid n-k-2,\dots, 1, 0).$$
\item[Case 3] {$n=2p$, ${\mathcal{O}}_{I,II}=[2^n]_{I, II}$,} \begin{eqnarray*} \la _{{\mathcal{O}} _I}&=& \left ( \frac{2n-1}{4} ,\frac{2n-5}{4} , \dots, \frac{-(2n-7)}{4} , \frac{ -(2n-3)}{4} \right),\\ \la _{{\mathcal{O}}_{II}} &=& \left ( \frac{2n-1}{4} ,\frac{2n-5}{4} , \dots, \frac{-(2n-7)}{4} , \frac{ (2n-3)}{4} \right). \end{eqnarray*}
\item[Case 4] { ${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}]$, $0\le k \le p-1$,} $$ \la_{{\mathcal{O}}}= (k,k-1,\dots, 1 ; n-k-1, \dots, 1, 0). $$ \end{description}
Notice that the infinitesimal characters in Cases 1 and 2 are nonintegral. For instance, in Case 1, $\la_{{\mathcal{O}}}=\rho/2$, where $\rho$ is the half sum of the positive roots of type $D_{2p}$. The integral root system is of type $D_p\times D_p$. The notation $\mid$ separates the coordinates of the two $D_p$ factors.
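For concreteness, in Case 1 with $p=2$ (so $n=4$), one has $\rho=(3,2,1,0)$ for $D_4$, and one checks directly:

```latex
\[
\la_{{\mathcal{O}}}=\rho/2=\tfrac12\,(3,2,1,0)
  =\big(\tfrac32,\,1,\,\tfrac12,\,0\big)
  \;\rightsquigarrow\;
  \big(\tfrac32,\,\tfrac12\;\big|\;1,\,0\big),
\]
```

after grouping the half-integer and the integer coordinates into the two $D_2$ factors of the integral root system, in agreement with the displayed formula $(p-\frac{1}{2},\dots,\frac{1}{2}\mid p-1,\dots,0)$ for $p=2$.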
\subsection{} \label{ss:rep} We define the following irreducible modules in terms of Langlands classification:
\begin{description} \item[Case 1] {$n=2p$, ${\mathcal{O}}=[3 \ 2^{n-2} \ 1]$}. \begin{enumerate} \item[(i)] $\Xi_1 = {\overline{X}} (\la_{{\mathcal{O}}},-\la_{{\mathcal{O}}})$; \item[(ii)] $\Xi_2= {\overline{X}} (\la_{{\mathcal{O}}}, -w_1\la_{{\mathcal{O}}} )$, where $w_1\la_{{\mathcal{O}}}= (p-\frac{1}{2}, \dots, \frac{3}{2}, -\frac{1}{2} \mid p-1,\dots, 1, 0)$; \item[(iii)] $\Xi_3= {\overline{X}} (\la_{{\mathcal{O}}}, -w_2\la_{{\mathcal{O}}})$, where $w_2\la_{{\mathcal{O}}}= (p-1,\dots, 1, 0 \mid p-\frac{1}{2}, \dots, \frac{3}{2}, \frac{1}{2} )$; \item[(iv)] $\Xi_4= {\overline{X}} (\la_{{\mathcal{O}}},-w_3 \la_{{\mathcal{O}}})$, where $w_3\la_{{\mathcal{O}}}=(p-1,\dots, 1, 0 \mid p-\frac{1}{2}, \dots, \frac{3}{2}, -\frac{1}{2} )$. \end{enumerate}
\item[Case 2] {${\mathcal{O}}=[3\ 2^{2k} \ 1^{2n-4k-3}]$, $0\le k\le p-1$}. \begin{enumerate} \item [(i)] $\Xi_1={\overline{X}}(\la_{{\mathcal{O}}},-\la_{{\mathcal{O}}})$; \item [(ii)]$\Xi_2= {\overline{X}}(\la_{{\mathcal{O}}},-w_1\la_{{\mathcal{O}}}),\ w_1\la_{{\mathcal{O}}}= ( k+\frac{1}{2}, \dots,\frac{3}{2}, \frac{1}{2}\mid n-k-2,\dots, 1, 0 ).$ \end{enumerate}
\item[Case 3] {$n=2p$, ${\mathcal{O}}_{I,II}=[2^n]_{I, II}$}. \begin{enumerate} \item [(i)] $\Xi_{I} ={\overline{X}}(\la_ {{\mathcal{O}}_I} , -\la_{{\mathcal{O}}_{I} } )$; \item [(i$'$)] $\Xi '_{I} ={\overline{X}}(\la _{{\mathcal{O}}_{I}} , -w\la _{{\mathcal{O}}_{I}})$, { $w\la _{{\mathcal{O}} _I}=\left(\frac{2n-3}{4},\dots ,-\frac{2n-1}{4}\right)$}; \item [(ii)] $\Xi_{II} ={\overline{X}}(\la_ {{\mathcal{O}}_{II}} , -\la_{{\mathcal{O}}_{II} } )$; \item [(ii$'$)] $\Xi '_{II} ={\overline{X}}(\la _{{\mathcal{O}}_{II}}, -w\la_{{\mathcal{O}}_{II}})$, { $w\la _{{\mathcal{O}} _{II}}=\left(\frac{2n-3}{4},\dots ,-\frac{2n-5}{4},\frac{2n-1}{4}\right)$}; \end{enumerate} \item[Case 4] {${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}]$, $0\le k \le p-1$}. \begin{enumerate} \item[(i)] $\Xi = {\overline{X}}(\la_{\mathcal{O}}, -\la_{\mathcal{O}}) $. \end{enumerate} \end{description}
\begin{remark} The representations introduced above form the set ${\mathcal{U}}_{\widetilde{G}}({\mathcal{O}},\la_{{\mathcal{O}}})$. \end{remark}
\subsubsection*{Notation} \label{ss:notation} We write $F(\la)$ for the finite dimensional representation of the appropriate $SO$ or $Spin$ group with infinitesimal character $\la$; write $V(\mu)$ for the finite dimensional representation of the appropriate $SO$ or $Spin$ group with highest weight $\mu$.
\subsection{$\widetilde{K}$-structure} We compute the $\widetilde{K}$-types of each representation listed in \ref{ss:rep}.
\subsubsection*{Case 1}
The arguments are refinements of those in \cite{McG}.
Let $\widetilde{H}$ be the image of $Spin(2p,{\mathbb{C}})\times Spin(2p,\mathbb C)$ in $Spin(4p,\mathbb C)$, and $\widetilde{U}$ the image of the maximal compact subgroup $Spin(2p)\times Spin(2p)$ in $\widetilde{K}$. Irreducible representations of $\widetilde{U}$ can be viewed as $Spin(2p)\times Spin(2p)$-representations such that $\pm(I,I)$ acts trivially.
Cases (i) and (ii) factor to representations of $SO(2n,{\mathbb{C}}),$ while (iii) and (iv) are genuine for $Spin(2n,{\mathbb{C}}).$
The Kazhdan-Lusztig conjectures for nonintegral infinitesimal character, together with Weyl's formula for the character of a finite dimensional module, imply that \begin{equation}
\label{eq:charfla} \overline{X}(\rho/2, -w_i\rho/2)=\sum_{w\in W(D_p\times D_p)} {\epsilon}(w) X(\rho/2,-ww_i\rho/2), \end{equation} since $W(\la_{{\mathcal{O}}})=W(D_p\times D_p)$.
Restricting (\ref{eq:charfla}) to $\widetilde{K},$ and using Frobenius reciprocity, we get \begin{equation} \label{eq:ind-K-U} \overline{X}(\rho/2, - w_i\rho/2)\mid_{\widetilde{K}}=Ind_{\widetilde U }^{\widetilde{K}} [F_1(\rho/2) {\otimes} F_2(-w_i\rho/2)], \end{equation} where $F_{1,2}$ are finite dimensional representations of the two factors $Spin(2p,\mathbb C)\times Spin(2p,\mathbb C)$ with infinitesimal character $\rho/2$ and $-w_i\rho/2$, respectively. The terms $[F_1(\rho/2) {\otimes } F_2(-w_i\rho/2)]$ are \begin{description} \item[(i)] $V (1/2,\dots ,1/2)\otimes V(1/2,\dots ,1/2) \boxtimes V(0,\dots ,0)\otimes V(0,\dots
,0)$, \item[(ii)] $V(1/2,\dots ,-1/2)\otimes V(1/2,\dots ,1/2) \boxtimes V(0,\dots
,0)\otimes V(0,\dots ,0)$, \item[(iii)] $V(1/2,\dots ,1/2)\otimes V(0,\dots ,0)\boxtimes V(0,\dots
,0)\otimes V(1/2,\dots ,1/2)$, \item[(iv)] $V(1/2,\dots ,1/2)\otimes V(0,\dots ,0)\boxtimes V(0,\dots
,0)\otimes V(1/2,\dots ,-1/2)$ \end{description} as $Spin(n)\times Spin(n)$-representations (see \ref{ss:notation} for the notation).
\begin{lemma} \label{pinrep} Let $SPIN_+ =V(\frac{1}{2},\dots,\frac{1}{2}),$ and $SPIN_- = V(\frac{1}{2},\dots,\frac{1}{2}, -\frac{1}{2}) \in \widehat{Spin(n)}$. Then \begin{equation} \begin{aligned} SPIN _+ \otimes SPIN_+ &= \bigoplus \limits _{0\le k\le [\frac{p}{2}]} V(\underset{2k}{\underbrace{1\dots 1}}, \underset{p-2k}{\underbrace{0\dots 0}} ) , \\ SPIN_+\otimes SPIN_- &= \bigoplus \limits _{0\le k\le [\frac{p-1}{2}]} V(\underset{2k+1}{\underbrace{1\dots 1}}, \underset{p-2k-1}{\underbrace{0\dots 0}} ). \end{aligned} \end{equation} \end{lemma}
\begin{proof}
The proof is straightforward. \end{proof}
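As a sanity check of Lemma \ref{pinrep}, take $p=2$, i.e. $Spin(4)$, where $\mathfrak{so}(4)\cong\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ and $\dim SPIN_\pm=2$; one computes directly:

```latex
\[
\begin{aligned}
SPIN_+\otimes SPIN_+ &= V(0,0)\oplus V(1,1), &&\dim:\ 2\cdot 2=1+3,\\
SPIN_+\otimes SPIN_- &= V(1,0), &&\dim:\ 2\cdot 2=4,
\end{aligned}
\]
```

consistent with the ranges $0\le k\le[\frac{p}{2}]$ and $0\le k\le[\frac{p-1}{2}]$ in the lemma.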
Lemma \ref{pinrep} implies that (\ref{eq:ind-K-U}) becomes
\begin{equation}\label{ind-fine1} \begin{aligned} (i)\ &{\overline{X}} (\rho/2,- \rho/2)\mid_{\widetilde{K}} &=\text{Ind} ^{\widetilde{K}} _{\widetilde U} &\left [\bigoplus \limits _{0\le k \le [\frac{p}{2}] } V(\underset{2k}{\underbrace{1,\dots,1}},0,\dots,0 )\boxtimes V(0,\dots,0) \right ] \\ (ii)\ &{\overline{X}} (\rho/2,- w_1\rho/2)\mid_{\widetilde{K}} & =\text{Ind} ^{\widetilde{K}} _{\widetilde U} &\left [\bigoplus \limits _{0\le k \le [\frac{p-1}{2}] } V(\underset{2k+1}{\underbrace{1,\dots,1}},0,\dots,0 )\boxtimes V(0,\dots,0) \right ] \\ (iii)\ &{\overline{X}} (\rho/2, -w_2\rho/2)\mid_{\widetilde{K}} &=\text{Ind} ^{\widetilde{K}}_{\widetilde U} &\left [V(1/2,\dots ,1/2)\boxtimes V(1/2,\dots ,1/2)\right] \\ (iv)\ &{\overline{X}} (\rho/2, -w_3\rho/2)\mid_{\widetilde{K}} &=\text{Ind} ^{\widetilde{K}}_{ \widetilde U} &\left [V(1/2,\dots ,1/2)\boxtimes V(1/2,\dots ,-1/2)\right]. \end{aligned} \end{equation}
\begin{prop}\label{p:n2p}
\begin{equation}
\begin{aligned}
&{\overline{X}} (\rho/2,-\rho/2) |_{\widetilde K}= \bigoplus V(a_1,\dots ,a_n),\quad \text{ with } a_i\in\mathbb Z, \ \sum a_i\in 2 \mathbb Z, \\
&{\overline{X}} (\rho/2, -w_1 \rho/2) |_{\widetilde K} = \bigoplus V(a_1,\dots,a_n),\quad \text{ with } a_i\in\mathbb Z, \ \sum a_i\in 2 \mathbb Z+1,\\
&{\overline{X}} (\rho/2,-w_2\rho/2) |_{\widetilde K}= \bigoplus V(a_1,\dots, a_n),\quad \text{ with } a_i\in\mathbb Z+1/2, \ \sum a_i\in 2 \mathbb Z+p,\\
&{\overline{X}} (\rho/2,-w_3\rho/2) |_{\widetilde K} =\bigoplus V(a_1,\dots, a_n),\quad \text{ with } a_i\in\mathbb Z+1/2, \ \sum a_i\in 2 \mathbb Z+p+1. \end{aligned} \end{equation} \end{prop} \begin{proof} In the first two cases we can substitute $\big({G}^{split},K^{split}\big):=\big(SO(2p,2p),S[O(2p)\times O(2p)]\big)$ for $\big(\widetilde{K},\widetilde U\big),$ and $\big(Spin(2p,2p),Spin(2p)\times Spin(2p)/\{\pm (I,I)\}\big)$ for the last two cases. The problem of computing the $\widetilde{K}$-structure of $\overline{X}$ reduces to finding the finite dimensional representations of $\widetilde{G}^{split}$ which contain factors of $F(\rho/2)\otimes F(-w_i\rho/2).$ Any finite dimensional representation of $\widetilde{G}^{split}$ is a Langlands quotient of a principal series. Principal series have fine lowest $K$-types (see \cite{V}). Let ${M}A$ be a split Cartan subgroup of $\widetilde{G}^{split}.$ A principal series is parametrized by a pair $(\delta,\nu)\in\widehat{M}\times\widehat{A}.$ The $\delta$ are called fine, and each fine ${K^{split}}$-type $\mu$ is a direct sum of a Weyl group orbit of a fine $\delta.$ This implies that the multiplicities in (\ref{ind-fine1}) are all one, and all the finite dimensional representations occur in $(i),(ii),(iii),(iv)$. The four formulas correspond to the various orbits of the $\delta.$ \end{proof}
\subsubsection*{Case 2: ${\mathcal{O}}=[3\ 2^{2k}\ 1^{2n-4k-3}]$, $0\le k \le p-1$}
Recall that $$\la_{{\mathcal{O}}}= ( k+\frac{1}{2}, \dots,\frac{3}{2}, \frac{1}{2}\mid n-k-2,\dots, 1, 0),$$ and the integral system is $D_k\times D_{n-k}.$
The irreducible modules are of the form $\overline{X}(\la_L,-w\la_R)$ such that $\la_{{\mathcal{O}}}$ is dominant, $w_i\la_{{\mathcal{O}}}$ is antidominant for $D_{k}\times D_{n-k},$ and they factor to $SO(2n,\mathbb C).$ These representations are listed in \ref{ss:rep}.
\subsubsection*{} We need to work with the real form $\big(SO(r,s), S[O(r)\times O(s)]\big)$. A representation of $O(r)$, $r=2m+\eta$ with $\eta=0$ or $1$, will be denoted by $V(a_1,\dots,a_m; {\epsilon})$, with ${\epsilon}=\pm 1,1/2$ according to Weyl's convention, and $a_1\ge a_2\ge \dots\ge a_m\ge 0.$ If $a_m=0,$ there are two inequivalent representations with this highest weight, one for ${\epsilon}=1$, one for ${\epsilon}=-1.$ Each restricts irreducibly to $SO(r)$ as the representation $V(a_1,\dots,a_m)\in \widehat{SO(r)}$.
When $a_m\neq 0$, there is a unique representation with this highest weight, ${\epsilon}=1/2$ or ${\epsilon}$ is suppressed altogether. The restriction of this representation to $SO(r)$ is a sum of two representations $V(a_1,\dots,a_m)$ and $V(a_1,\dots,a_{m-1},-a_m)$.
Representations of $Pin(s)$ are parametrized in the same way, except that the decreasing sequence $a_1\ge \dots \ge a_m\ge 0$ is allowed to consist of half-integers.
Representations of $S[O(r)\times O(s)]$ are parametrized by restrictions of $V(a;{\epsilon}_1)\boxtimes V(b; {\epsilon}_2)$ with the following equivalences: \begin{enumerate} \item If one of ${\epsilon}_i=\frac{1}{2}$, say, ${\epsilon}_1 =\frac{1}{2}$, then $V(a;{\epsilon}_1)\boxtimes V(b;{\epsilon}_2)=V(a'; \delta_1)\boxtimes V(b' ; \delta_2)$ if and only if $a=a', b=b', {\epsilon} _1=\delta _1, {\epsilon}_2= \delta_2$. \item If ${\epsilon}_1, {\epsilon}_2, \delta_1, \delta_2\in \{\pm 1\}$, then $V(a;{\epsilon}_1)\boxtimes V(b;{\epsilon}_2)=V(a'; \delta_1)\boxtimes V(b' ; \delta_2)$ iff $a=a', b=b', {\epsilon}_1{\epsilon}_2=\delta_1\delta_2$. \end{enumerate}
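For example, a direct instantiation of equivalence (2) above:

```latex
\[
V(a;+1)\boxtimes V(b;-1)\;=\;V(a;-1)\boxtimes V(b;+1),
\qquad\text{since }{\epsilon}_1{\epsilon}_2=\delta_1\delta_2=-1,
\]
\[
\text{while }V(a;+1)\boxtimes V(b;+1)\;\neq\;V(a;+1)\boxtimes V(b;-1),
\qquad\text{since }{\epsilon}_1{\epsilon}_2=1\neq -1=\delta_1\delta_2.
\]
```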
\begin{lemma} \label{pin-rep-1} Let $PIN =V(\frac{1}{2},\dots, \frac{1}{2}) \in \widehat{Pin(s)}$, $s=2m+\eta$ with $\eta=0$ or 1. Then \begin{equation} PIN\otimes PIN =\sum _{\ell=0} ^{m-1} V(\underset{\ell}{\underbrace{1\dots 1}}, \underset{m-\ell}{\underbrace{0\dots 0}} ; {\epsilon} ) + V(1,\dots,1;1/2), \end{equation} where the sum is over ${\epsilon}=1$ and $-1$. \begin{proof} Omitted. \end{proof}
\end{lemma}
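As a dimension check of Lemma \ref{pin-rep-1}, take $s=4$ (so $m=2$ and $\dim PIN=2^m=4$); identifying the summands as $O(4)$-representations gives:

```latex
\[
\underbrace{1+1}_{\ell=0,\ {\epsilon}=\pm 1}
\;+\;\underbrace{4+4}_{\ell=1,\ {\epsilon}=\pm 1}
\;+\;\underbrace{6}_{V(1,1;1/2)\,=\,\Lambda^2{\mathbb{C}}^4}
\;=\;16\;=\;\dim\big(PIN\otimes PIN\big).
\]
```

Here $V(1,0;\pm 1)$ is the standard representation of $O(4)$ and its twist by $\det$, and $V(1,1;1/2)$ restricts to $SO(4)$ as the sum of the self-dual and anti-self-dual $3$-dimensional pieces of $\Lambda^2{\mathbb{C}}^4$.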
\subsubsection*{}
We will use the groups $U=S[O(2k)\times O(2n-2k)]\subset K=SO(2n)$. Again, the representations that we want are in \ref{ss:rep}.
As before, \begin{equation}
\label{eq:cf2} \overline{X}(\la_{{\mathcal{O}}}, -w_i\la_{\mathcal{O}})=\sum_{w\in W(D_{k}\times D_{n-k})} {{\epsilon}(w)}X(\la_{\mathcal{O}}, - ww_i\la_{\mathcal{O}}). \end{equation} {Restricting to $K,$ and using Frobenius reciprocity, (\ref{eq:cf2}) implies \begin{equation} \label{ind-K-U-2} \overline{X}(\la_{\mathcal{O}}, - w_i\la_{\mathcal{O}})\mid_{K }=\text{Ind}_{U }^{K} [F_1(\la_{{\mathcal{O}}}) {\otimes} F_2(-w_i\la_{\mathcal{O}})]. \end{equation} }
The terms $[F_1(\la_{\mathcal{O}})\otimes F_2(-w_i\la_{\mathcal{O}})]$ are \begin{description} \item[(i)] $V(1/2,\dots ,1/2)\otimes V(0,\dots ,0)\boxtimes V(1/2,\dots
,1/2)\otimes V(0,\dots ,0)$, \item[(ii)] $V(1/2,\dots ,1/2,-1/2)\otimes V(0,\dots ,0)\boxtimes V(1/2,\dots ,
1/2,-1/2)\otimes V(0,\dots ,0)$. \end{description}
\ \begin{lemma} \begin{equation}\label{ind-fine6} \begin{aligned} {\overline{X}} (\la_{\mathcal{O}}, -\la_{\mathcal{O}}) =\text{Ind} ^{K} _{U} \Big [ & \sum \limits _{0\le 2\ell \le k } V(\underset{2\ell}{\underbrace{1,\dots,1}},0,\dots,0 ;1 )\boxtimes V(0,\dots,0;1)\\ &+ \sum \limits _{0\le 2\ell \le k} V(\underset{2\ell}{\underbrace{1,\dots,1}},0,\dots,0 ;1 )\boxtimes V(0,\dots,0;-1)\Big],\\ {\overline{X}} (\la_{\mathcal{O}}, -w_1\la_{\mathcal{O}}) =\text{Ind} ^{K} _U \Big [ & \sum \limits _{0\le 2\ell+1 \le k } V(\underset{2\ell+1}{\underbrace{1,\dots,1}},0,\dots,0 ;1 )\boxtimes V(0,\dots,0;1) \\ &+ \sum \limits _{0\le 2\ell+1 \le k} V(\underset{2\ell+1}{\underbrace{1,\dots,1}},0,\dots,0 ;1 )\boxtimes V(0,\dots,0;-1) \Big]. \end{aligned} \end{equation} \end{lemma} \begin{proof} This follows from Lemma \ref{pin-rep-1}. \end{proof} \begin{prop}
\begin{equation}\label{eq:ktypeskn}
\begin{aligned}
& {\overline{X}} (\la_{\mathcal{O}},-\la_{\mathcal{O}})|_{\widetilde{K}} = \bigoplus V(a_1,\dots,a_{k}, 0,\dots ,0),\quad \text{ with } \ a_i\in \mathbb Z, \ \sum a_i\in 2{\mathbb Z},\\
&{\overline{X}} (\la_{\mathcal{O}}, -w_1\la_{\mathcal{O}} )|_{\widetilde{K}} = \bigoplus V(a_1,\dots,a_{k},0,\dots ,0),\quad \text{ with } \ a_i\in \mathbb Z,\ \sum a_i\in 2{\mathbb Z} +1.
\end{aligned}
\end{equation} \end{prop} \begin{proof} The proof is almost identical to that of Proposition \ref{p:n2p}. When $k=p-1$,
the group $\widetilde{G^{split}}$ in the proof of Proposition \ref{p:n2p} is replaced by $G^{qs}=SO(2p,2p+2)$ and $\widetilde{U}$ is replaced by $U=S[O(2p)\times O(2p+2)].$ When $k<p-1$, the group $\widetilde{G^{split}}$ is replaced by $G^{k,n-k}=SO(2k,2n-2k)$ and $\widetilde{U}$ is replaced by $U= S[O(2k)\times O(2n-2k)].$ We follow \cite{V}. The $K$-types $\mu$ in (\ref{ind-fine6}) have $\mathfrak q(\la_L)$ the $\theta$-stable parabolic $\mathfrak q=\mathfrak l+\mathfrak u$ determined by $\xi=(0,\dots ,0;\underset{n-2k-2}{\underbrace{1,\dots ,1}},0,\dots ,0)$. The Levi component is $S[O(2k)\times O(2k+2)].$ The resulting $\mu_L=\mu-2\rho(\mathfrak u\cap\mathfrak s)$ are fine $U\cap L$-types. A bottom layer argument reduces the proof to the quasisplit case $n=2p+1$. \end{proof}
\subsubsection*{Cases 3,4} We use the infinitesimal characters in \ref{s:infchar} and the representations are from \ref{ss:rep} again.
In Case 4, ${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}]$ with $k<p$. There is a unique irreducible representation with associated support ${\mathcal{O}}$, and it is spherical. It is a special unipotent representation with character given in \cite{BV}.
When {$n=2p$ and $k=p$}, there are two nilpotent orbits ${\mathcal{O}}_{I,II} = [2^n]_{I,II}$. The representations $\Xi_{I,II}$ in \ref{ss:rep} are spherical (hence not genuine) representations, one for each of ${\mathcal{O}}_{I,II}$. The two representations are induced irreducibly from the trivial representation of the parabolic subgroups with Levi components $GL(n)_{I,II}.$ On the other hand, the representations $\Xi'_{I,II}$ are induced irreducibly from the character $Det ^{1/2}$ of the parabolic subgroups with Levi components $GL(n)_{I,II}.$ All of these representations are unitary.
\begin{prop}
\label{p:kstruct3}
The $\widetilde K$-types of these representations are:
\begin{description}
\item[Case 3] ${\mathcal{O}}_{I,II}=[2^{2p}]_{I,II}:\ $
\begin{equation}
\begin{aligned}
\Xi_I |_{\widetilde{K}} & =& \bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},a_{n-1}) & \text{ with } a_i\in {\mathbb Z}, \\
\Xi '_I |_{\widetilde{K}} & =& \bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},a_{n-1}) & \text{ with } a_i\in {\mathbb Z} +1/2,\\
\Xi_{II}|_{\widetilde{K}} & =& \bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},-a_{n-1}) & \text{ with } a_i\in {\mathbb Z}, \\
\Xi '_{II}|_{\widetilde{K}} & =& \bigoplus V(a_1,a_1,a_3,a_3,\dots,a_{n-1},-a_{n-1}) & \text{ with } a_i\in {\mathbb Z} +1/2,
\end{aligned}
\end{equation} satisfying $a_1\ge a_3\ge \dots \ge a_{n-1}\ge 0$.
\item[Case 4] ${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}],\ 0\le k< n/2:\ $
$$\Xi | _{\widetilde{K}}= \bigoplus V(a_1,a_1,\dots,
a_k,a_k,0,\dots ,0), \ \text{ with } \ a_i\in {\mathbb Z},$$
satisfying $a_1\ge a_2\ge \dots \ge a_k\ge 0.$
\end{description}
\begin{proof} These are well known. The cases $[2^{n}]_{I,II}$ follow by Helgason's theorem since $(D_{n},A_{n-1})$ is a symmetric pair (for the real form $SO^*(2n)$). They also follow by the method outlined below for the other cases.
For $2k<n,$ the methods outlined in \cite{BP2} combined with \cite{B} give the answer; the representations are $\Theta$-lifts of the trivial representation of $Sp(2k,{\mathbb{C}}).$ More precisely, ${\overline{X}}(\la_{{\mathcal{O}}},-\la_{\mathcal{O}})$ is $\Omega/[\mathfrak{sp}(2k,{\mathbb{C}})\Omega]$ where $\Omega$ is the oscillator representation for the pair $O(2n,{\mathbb{C}})\times Sp(2k,{\mathbb{C}})$. The $K$-structure can then be computed using seesaw pairs, namely $\Omega$ is also the oscillator representation for the pair $O(2n)\times Sp(4k,{\mathbb R})$. \end{proof} \end{prop}
\subsection{} We resume the notation used in Section 3. Let $(G_0, K) = (Spin (2n,{\mathbb{C}}), Spin(2n,{\mathbb{C}}))$.
By comparing Propositions \ref{p:regfun1}, \ref{p:regfun2}, \ref{p:regfun3}
and the $K$-structure of representations listed in this section, we have
the following matchup.
\begin{description}
\item[Case 1] $\Xi_i |_{ K} = R({\mathcal{O}},\psi_i), \ 1\le i \le 4$;
\item[Case 2] $\Xi_i |_{ K} = R({\mathcal{O}},\psi_i), \ i=1,2$;
\item[Case 3] $\Xi_I |_{ K} =R({\mathcal{O}}_I, Triv)$, $\Xi ' _I |_{ K} =R({\mathcal{O}}_I, Sgn)$, \\ \hspace*{2.5em} $\Xi_{II}|_{ K} =R({\mathcal{O}}_{II}, Triv)$, $\Xi '_{II} |_{ K} =R({\mathcal{O}}_{II}, Sgn)$;
\item[Case 4] $\Xi |_{ K} = R({\mathcal{O}} , Triv)$. \end{description}
The following theorem is then immediate.
\begin{theorem}\label{t:main} Retain the notation above. Let $ G_0={Spin}(2n,{\mathbb{C}})$ be viewed as a real group. The $K$-structure of each representation in ${\mathcal{U}}_{G_0}({\mathcal{O}},\la_{{\mathcal{O}}})$ is calculated explicitly, and matches the $K$-structure of $R({\mathcal{O}},\psi)$ with $\psi \in \widehat{A_{ K}({\mathcal{O}})}$. That is, there is a 1-1 correspondence $\psi\in
\widehat { A_{{K}}({\mathcal{O}})}\longleftrightarrow \Xi({\mathcal{O}},\psi)\in
{\mathcal{U}}_{ G_0}({\mathcal{O}},\la_{\mathcal{O}})$ satisfying $$ \Xi({\mathcal{O}},\psi)\mid_{ K}\cong R({\mathcal{O}},\psi). $$ \end{theorem}
\section{Clifford algebras and Spin groups} \label{ss:clifford} Since the main interest is in the case of $Spin(V),$ the simply connected groups of type $D,$ we realize everything in the context of the Clifford algebra.
\subsection{} Let $(V,Q)$ be a quadratic space of even dimension $2n$, with a basis $\{e_i,f_i\}$ with $1\le i\le n,$ satisfying $Q(e_i,f_j)=\delta_{ij},$ $Q(e_i,e_j)=Q(f_i,f_j)=0$. Occasionally we will replace $e_j,f_j$ by two orthogonal vectors $v_j,w_j$ satisfying $Q(v_j,v_j)=Q(w_j,w_j)=1,$ and orthogonal to the $e_i,f_i$ for $i\ne j.$ Precisely they will satisfy $v_j=(e_j+f_j)/\sqrt{2}$ and $w_j=(e_j-f_j)/(i\sqrt{2})$ (where $i:=\sqrt{-1},$ not an index). Let $C(V)$ be the Clifford algebra generated by $V$ subject to the relation $xy+yx=2Q(x,y)$ for $x,y\in V$, equipped with the automorphism $\al$ defined by $\al(x_1\cdots x_r)=(-1)^r x_1\cdots x_r$ and the anti-automorphism $\star$ given by $(x_1\cdots x_r)^\star=(-1)^r x_r\cdots x_1. $ The double cover of $O(V)$ is $$ Pin(V):=\{ x\in C(V)\ \mid\ x\cdot x^\star=1,\ \al(x)Vx^\star\subset V\}. $$ The double cover $Spin(V)$ of $SO(V)$ is given by the elements of $Pin(V)$ which lie in $C(V)^{even},$ \ie $\disp{Spin(V):=Pin(V)\cap C(V)^{even}}.$ For $Spin,$ $\al$ can be suppressed from the notation since it is the identity.
The action of $Pin(V)$ on $V$ is given by $\rho(x)v=\al(x)vx^*.$ The element $-I\in SO(V)$ is covered by \begin{equation} \label{eq:-spin} \pm \Ep_{2n}=\pm i^{n-1}vw\prod_{1\le j\le n-1} [1-e_jf_j]=\pm i^n\prod_{1\le j\le n} [1-e_jf_j]. \end{equation} These elements satisfy $$ \Ep_{2n}^2= \begin{cases}
+Id &\text{ if } n\in 2\mathbb Z,\\
-Id&\text { otherwise.} \end{cases} $$ The center of $Spin(V)$ is $$ Z(Spin(V))=\{\pm I, \pm\Ep_{2n}\}\cong \begin{cases} \mathbb Z_2\times \mathbb Z_2 &\text{ if }n \text{ is even,}\\ \mathbb Z_4 &\text{ if } n \text{ is odd}. \end{cases} $$ The Lie algebra of $Pin(V)$ as well as $Spin(V)$ is formed of the elements of even degree $\le 2$ satisfying $$ x+x^\star=0. $$ The adjoint action is {$\ad x(y)=xy-yx$}. A Cartan subalgebra and the root vectors corresponding to the usual basis in Weyl normal form are formed of the elements
\begin{equation} \begin{aligned} \label{eq:liea} &(1-e_if_i)/2&&\longleftrightarrow &&H({\epsilon}_i)\\ &e_ie_j/2&&\longleftrightarrow &&X(-{\epsilon}_i-{\epsilon}_j),\\ &e_if_j/2&&\longleftrightarrow &&X(-{\epsilon}_i+{\epsilon}_j),\\ &{f_i} f_j/2&&\longleftrightarrow &&X({\epsilon}_i+{\epsilon}_j). \end{aligned} \end{equation} \begin{comment} \subsubsection*{Root Structure}\ We use $1\le i\le p$ and $1\le j\le q-1$ consistently. We give a realization of the Lie algebra for $Spin(2p+1,2q-1)$. The case $Spin(2p,2q)$, is (essentially) obtained by suppressing the short roots.
\begin{tabular}{ll} {Compact} &Noncompact\\ &\\ ${\mathfrak t}=\{(1-e_if_i), (1-e_{p+j}f_{p+j})\}$ & $\mathfrak a=\{v^+v^-\}$\\ $h({\epsilon}_i),\ h({\epsilon}_{p+j})$ &$h({\epsilon}_{p+q})$\\ $f_iv^+, e_iv^+, f_{p+j}v^-, e_{p+j}v^-$ &$f_iv^-, e_iv^-, v^+f_{p+j}, v^+e_{p+j}$\\ $X({\epsilon}_i)_c, X(-{\epsilon}_{i})_c, X({\epsilon}_{p+j})_c, X(-{\epsilon}_{p+j})_c$&$X({\epsilon}_i)_{n}, X(-{\epsilon}_i)_{n} X({\epsilon}_{p+j})_{n}, X(-{\epsilon}_{-p+j})_{n}$\\ $f_if_l, f_ie_l, e_ie_l, e_if_l $& $f_if_{p+j}, f_ie_{p+j}, e_ie_{p+l}, e_if_{p+l}$\\ $f_{p+j}f_{p+m},f_{p+j}e_{p+m}, e_{p+j}f_{p+m}, e_{p+j}e_{p+m}$& \\ $X({\epsilon}_i+{\epsilon}_l),X({\epsilon}_i-{\epsilon}_l), X(-{\epsilon}_i-{\epsilon}_l), X(-{\epsilon}_i+{\epsilon}_l)$& $X({\epsilon}_i+{\epsilon}_{p+j}), X({\epsilon}_i-{\epsilon}_{p+j})$,\\ & $X(-{\epsilon}_i-{\epsilon}_{p+j}), X(-{\epsilon}_i+{\epsilon}_{p+j})$,\\ $X({\epsilon}_{p+j}+{\epsilon}_{p+m}), X({\epsilon}_{p+j}-{\epsilon}_{p+m}),$ &\\ $X(-{\epsilon}_{p+j}-{\epsilon}_{p+m}), X(-{\epsilon}_{p+j}+{\epsilon}_{p+m}).$& \end{tabular} \end{comment}
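As a consistency check, the sign of $\Ep_{2n}^2$ can be computed directly from the defining relations: since $e_j^2=f_j^2=0$ and $e_jf_j+f_je_j=2,$
$$ (1-e_jf_j)^2=1-2e_jf_j+e_j(2-e_jf_j)f_j=1, $$
and the factors $1-e_jf_j$ commute for distinct $j,$ so
$$ \Ep_{2n}^2=(i^{n})^2\prod_{1\le j\le n}(1-e_jf_j)^2=(-1)^{n}\, Id, $$
in agreement with the case distinction above.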
\subsection{Nilpotent Orbits}
We write $\widetilde{K}=Spin(V)=Spin (2n,{\mathbb{C}}) $, $K=SO(V)=SO(2n,{\mathbb{C}})$. A nilpotent orbit of an element $e$ will have Jordan blocks denoted by \begin{equation} \label{eq:blocks} \begin{aligned} &e_1\longrightarrow e_2\longrightarrow\dots \longrightarrow e_k\longrightarrow v\longrightarrow -f_k\longrightarrow f_{k-1}\longrightarrow {-f_{k-2} \longrightarrow} \dots \longrightarrow \pm f_1\longrightarrow 0\\ &\begin{matrix} &e_1\longrightarrow &e_2&\longrightarrow&\dots &\longrightarrow &e_{2\ell}\longrightarrow 0\\ &f_{2\ell}\longrightarrow &-f_{2\ell-1}&\longrightarrow&\dots &\longrightarrow &-f_1\longrightarrow 0 \end{matrix} \end{aligned} \end{equation} with the conventions about the $e_i,f_j,v$ as before. There is an even number of odd-sized blocks, and any two blocks of equal odd size $2k+1$ can be replaced by a pair of blocks of the same form as the even ones. A realization of the odd block is given by $ \displaystyle\frac{1}{2}\left ( {\sum \limits _{i=1} ^{k-1} }e_{i+1}f_i +vf_k\right ),$ and a realization of the even blocks by $\dpfr \left( {\sum \limits _{i=1} ^{2\ell-1}}e_{i+1}f_{i}\right).$ When there are only even blocks, there are two orbits; one block of the form $\big(\sum_{1\le i< \ell-1} e_{i+1}f_{i}+e_\ell f _{\ell-1}\big)/2$ is replaced by $\big(\sum_{1\le i< \ell-1} e_{i+1}f_{i}+ f_\ell f_{\ell-1}\big)/2.$
The centralizer of $e$ in $\mathfrak{so}(V)$ has Levi component isomorphic to a product of $\mathfrak{so}(r_{2k+1})$ and $\mathfrak{sp}(2r_{2\ell})$ where $r_j$ is the number of blocks of size $j.$ The centralizer of $e$ in $SO(V)$ has Levi component $\prod Sp(2r_{2\ell})\times S[\prod O(r_{2k+1})]$. For each odd sized block define \begin{equation}
\label{eq:epsilon} \Ep_{2k+1}=i^{k}v\prod (1-e_jf_j). \end{equation} This is an element in $Pin(V),$ and acts by $-Id$ on the block. Even products of $\pm \Ep_{2k+1}$ belong to $Spin(V),$ and represent the connected components of $C_{\widetilde{K}}(e).$ \begin{prop} Let $m$ be the number of distinct odd blocks. Then $$A_K({\mathcal{O}}) \cong \begin{cases} {\mathbb Z} _2 ^{m-1} & \mbox{ if } m > 0 \\ 1 & \mbox{ if } m=0. \end{cases} $$ Furthermore, \begin{enumerate}
\item If $e$ has an odd block of size $2k+1$ with $r_{2k+1}>1,$ then $A_{\widetilde{K}}({\mathcal{O}})\cong A_K({\mathcal{O}}).$
\item If all $r_{2k+1}\le 1,$ then there is an exact sequence $$ 1\longrightarrow \{\pm I\}\longrightarrow A_{\widetilde{K}}({\mathcal{O}})\longrightarrow A_K({\mathcal{O}})\longrightarrow 1. $$ \end{enumerate} \end{prop} \begin{proof}
Assume that there is an ${r_{2k+1}>1}.$ Let $$ \begin{matrix} &e_1&\rightarrow &\dots &\rightarrow &e_{2k+1}&\rightarrow 0\\ &f_{2k+1}&\rightarrow&\dots &\rightarrow &-f_1 &\rightarrow 0 \end{matrix} $$ be two of the blocks. In the Clifford algebra this element is $e=(e_2f_1+\dots +e_{2k+1}f_{2k})/2.$ The element ${\sum \limits _{ j=1} ^{2k+1}}(1-e_{j}f_{j})$ in the Lie algebra commutes with $e$. So its exponential \begin{equation}\label{path1} \prod \exp\big( i\theta(1-e_{j}f_{j})/2\big)= \prod [\cos\theta/2 + i\sin\theta/2 (1-e_{j}f_{j})] \end{equation} also commutes with $e.$ {At $\theta=0$, the element in (\ref{path1}) is $I$; at $\theta =2\pi$, it is $-I$.} Thus $-I$ lies in the identity component of the centralizer $C_{\widetilde{K}}(e)$ (when $r_{2k+1}>1$), hence represents the trivial element of $A_{\widetilde{K}}({\mathcal{O}})$, and therefore $A_{\widetilde{K}}({\mathcal{O}})=A_K({\mathcal{O}}).$
Assume there are no blocks of odd size. Then $C_K({\mathcal{O}}) {\cong \prod Sp (2r_{2\ell})}$ is simply connected, so $C_{\widetilde{K}}({\mathcal{O}})\cong C_K({\mathcal{O}})\times\{\pm I\}.$ { Therefore $A_{\widetilde{K}} ({\mathcal{O}}) \cong {\mathbb Z} _2$.}
Assume there are {$m$ distinct odd blocks with $m\in 2{\mathbb Z} _{>0}$ and $r_{2k_1+1}=\cdots =r_{2k_m +1}=1.$ In this case, $C_K({\mathcal{O}})\cong \prod Sp(2r_{2\ell}) \times S[ \underset{m}{\underbrace{O(1) \times \cdots \times O(1)}} ]$, and hence $A_{K} ({\mathcal{O}})\cong {\mathbb Z}_2^{m-1}$.
Even products of $\{ \pm \Ep _{2k_j +1} \}$ are representatives of the elements of $A_{\widetilde{K} }({\mathcal{O}})$.} They satisfy
$$ \Ep_{2k+1}\cdot\Ep_{2\ell+1}= \begin{cases} -\Ep_{2\ell +1}\cdot\Ep_{2k+1} &k\ne \ell,\\ { (-1)^kI} &k=\ell. \end{cases} $$ \end{proof}
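The last relations can be checked directly in the Clifford algebra: $v^2=Q(v,v)=1$ and $v$ commutes with each even factor $1-e_jf_j,$ so
$$ \Ep_{2k+1}^2=(i^{k})^2\, v^2\prod (1-e_jf_j)^2=(-1)^{k}I, $$
using $(1-e_jf_j)^2=1;$ for $k\ne\ell,$ the elements $\Ep_{2k+1}$ and $\Ep_{2\ell+1}$ are odd elements built from mutually orthogonal subspaces of $V,$ hence they anticommute.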
\begin{cor} \ \label{c:cgp}
\begin{enumerate}
\item If ${\mathcal{O}}=[3\ 2^{n-2}\ 1],$ then {$A_{\widetilde{K}}({\mathcal{O}})\cong \mathbb Z_2\times\mathbb Z_2=\{\pm\Ep_3\cdot\Ep_1,\pm I\}$}. \item If ${\mathcal{O}}=[3\ 2^{2k}\ 1^{2n-4k-3}]$ with $2n-4k-3>1,$ then $A_{\widetilde{K}}({\mathcal{O}})\cong \mathbb Z_2.$ \item If ${\mathcal{O}}=[2^{n}]_{I,II}$ ($n$ even), then $A_{\widetilde{K}} ({\mathcal{O}}) \cong\mathbb Z_2.$ \item If ${\mathcal{O}}=[2^{2k}\ 1^{2n-4k}]$ with $2k<n,$ then
$A_{\widetilde{K}} ({\mathcal{O}}) \cong 1.$
\end{enumerate} In all cases $C_{\widetilde K}({\mathcal{O}})=Z(\widetilde K)\cdot C_{\widetilde K}({\mathcal{O}})^0.$
\end{cor}
\end{document} |
\begin{document}
\title{The EPR Argument \\ in a Relational Interpretation \\ of Quantum Mechanics} \author{Federico Laudisa\\ Department of Philosophy, University of Florence, \\ Via Bolognese 52, 50139 Florence, Italy} \maketitle \date{} \begin{abstract} \noindent It is shown that in the Rovelli {\it relational} interpretation of quantum mechanics, in which the notion of absolute or observer independent state is rejected, the conclusion of the ordinary EPR argument turns out to be frame-dependent, provided the conditions of the original argument are suitably adapted to the new interpretation. The consequences of this result for the `peaceful coexistence' of quantum mechanics and special relativity are briefly discussed. \end{abstract}
\section{Introduction}
The controversial nature of observation in quantum mechanics has been at the heart of the debates on the foundations of the theory since its early days. Unlike the situation in classical theories, the problem of how and where we should localize the boundary between systems that observe and systems that are observed is not merely a practical one, nor has there been a widespread consensus on whether there is some fundamental difference between the two classes of systems. The emphasis on the reference to an appropriate observational context, in order for most properties of quantum mechanical systems to be meaningful, was, for example, the focus of Bohr's reflections on the foundations of quantum mechanics, and the attention to this level of description has been inherited, to a certain extent, even by those interpretations of the theory that sought to go well beyond the Copenhagen standpoint.
Although these controversies deal primarily with the long-studied measurement problem, we are naturally led to ask ourselves whether a deeper emphasis on the role of the observer might suggest new directions also about nonlocality. The celebrated argument of Einstein, Podolsky and Rosen (EPR) for the incompleteness of quantum mechanics turns essentially on the possibility for an observer of predicting the result of a measurement performed in a spacetime region that is supposed to be isolated from the region where the observer is localized. The quantum mechanical description of a typical EPR state, however, prevents us from conceiving that result as simply revealing a {\it preexisting} property, so that the upshot of the argument is the alternative between completeness and locality: by assuming the former we must give up the latter, and vice versa. \footnote{Clearly both options are available here since we restrict our attention to the EPR argument. But after the Bell theorem, it is a widespread opinion that the only viable option for ordinary quantum mechanics is in fact the first.} It is then rather natural to ask whether, and to what extent, taking due account of the fact that quantum predictions are the predictions {\it of a given observer} significantly affects the structure and the significance of the argument.
The ordinary EPR argument is formulated in nonrelativistic quantum mechanics, whose symmetry group is the Galilei group. Therefore, an obvious form of observer dependence that must be taken into account is the frame dependence that must enter into the description when the whole framework of the EPR argument is embedded into the spacetime of special relativity. It can be shown that a relativistic EPR argument still works and, as a consequence, events pertaining to a quantum system may `depend' on events pertaining to a different quantum system localized in a space-like separated region. \footnote{Clearly there is no consensus on what a reasonable interpretation of this `dependence' might be, but a thorough discussion of this point is beyond the scope of the present paper.} However, in the relativistic EPR argument, special attention must be paid to the limitations placed by this generalization on the attribution of properties to subsystems of a composite system: the lesson of a relativistic treatment of the EPR argument lies in the caution one must take when making assumptions on what an observer knows about the class of quantum mechanical events taking place in the absolute elsewhere of the observer himself. Such observer dependence is essentially linked to the space-like separation between the regions of the measurements and prevents one from using a property-attributing language without qualifications; an analysis of these limitations shows that the relativistic EPR argument does not in fact support the widespread claim that nonlocality involves an {\it instantaneous} creation of properties at a distance. \footnote{See (\cite{GG}, \cite{Ghirardi}); for an analysis of the relevance of this fact to the status of superluminal causation in quantum mechanics, see \cite{Laudisa}.}
It must be stressed that this argument implies by itself no definite position about the existence of such influences {\it in the physical world}: it involves only the {\it logical} compatibility between the idea of action-at-a-distance and the special relativistic account of the spacetime structure. More generally, what the argument is meant to point out is the necessity of a shift to the relativistic regime, in order to rigorously assess whether some frequently stated claims about the metaphysical consequences of the EPR argument are really consistent with spacetime physics.
But according to a recent {\it relational} interpretation of quantum mechanics, advanced by Carlo Rovelli (\cite{Rov}), we need not shift to a fully relativistic quantum theory to find a fundamental form of observer dependence. In this interpretation, the very notion of state of a physical system should be considered meaningless unless it is understood as relative to another physical system, which temporarily plays the role of the observer: when dealing with concrete physical systems, it is the specification of such an observer that allows the ascription of a state to a system to make sense, so that, to a certain extent, from a relational point of view the selection of such observers features in quantum theory as the specification of a frame of reference does in relativity theory. In the following section the relational interpretation will be sketched, whereas in section 3 a relational analysis of the (relativistic) EPR argument will show that whether the ordinary conclusion of the argument - quantum mechanics is either incomplete or nonlocal - holds or not with respect to a given observer depends on the frame of reference of the latter. In the final section we will briefly discuss the consequences of this analysis for the so-called `peaceful coexistence thesis' concerning the relation between quantum theory and relativity theory.
\section{Relational quantum mechanics and the observer dependence of states}
In the relational interpretation of quantum mechanics, the relativistic frame dependence of an observer's predictions is not the only source of observer dependence in quantum mechanics: a fundamental form of observer dependence is detected already in {\it non-relativistic} quantum mechanics, one that concerns the very definability of a physical state. In the relational interpretation, the notion of an absolute or observer-independent state of a system is rejected: it would make no sense to talk about a state of a physical system without referring to an observer, with respect to which that state is defined (\cite{Rov}). Although the analysis of its assumptions and its consequences is developed within non-relativistic quantum mechanics, this claim is somehow reminiscent of the Einsteinian operational critique of the absolute notion of simultaneity for distant observers, and the main idea underlying the interpretation is put forward by analyzing the different accounts that two observers give of the same sequence of events in a typical quantum measurement process.
Let us consider a system $S$ and a physical quantity $Q$ that can be measured on $S$. We assume that the possible measurement results are two, $q_1$ and $q_2$ (for simplicity, the spectrum of $Q$ is assumed to be non-degenerate). The premeasurement state of $S$ at a time $t_1$ can then be written as $\alpha_1\phi_{q_1}+\alpha_2\phi_{q_2},$ with $\alpha_1,\alpha_2$ complex numbers such that $\vert\alpha_1\vert^2+\vert\alpha_2\vert^2=1.$ If we suppose that a measurement of $Q$ is performed and that the result is $q_2,$ then according to ordinary quantum mechanics the state of $S$ at a postmeasurement time $t_2$ is given by $\phi_{q_2}.$ Let $O$ be the observer performing the measurement; the sequence that $O$ observes is then \begin{equation} \underbrace{\alpha_1\phi_{q_1}+\alpha_2\phi_{q_2}}_{t_1}\,\, \Longrightarrow\,\,\,\underbrace{\phi_{q_2}}_{t_2} \label{eq:sequence-O} \end{equation} \noindent Let us now consider how a second observer $O'$ might describe this same measurement process, concerning in this case the {\it composite} system $S+O.$ We denote by $\psi_{init}$ the premeasurement state of $O$ and by $\psi_{q_1}$ and $\psi_{q_2}$ respectively the eigenstates of the pointer observable (namely the states that correspond to recording the $Q$-measurement results $q_1,q_2$). The premeasurement state of $S+O$ at $t_1,$ belonging to the tensor product Hilbert space $\cal{H}_S\otimes\cal{H}_O$ of the Hilbert spaces $\cal{H}_S$ and $\cal{H}_O$ associated to $S$ and $O$ respectively, is then expressed as \begin{equation} \psi_{init}\otimes(\alpha_1\phi_{q_1}+\alpha_2\phi_{q_2}); \label{eq:S+O-t1} \end{equation} if $O'$ performs no measurement, by linearity (\ref{eq:S+O-t1}) at $t_2$ becomes \begin{equation} \alpha_1\psi_{q_1}\otimes\phi_{q_1}+\alpha_2\psi_{q_2}\otimes\phi_{q_2}. 
\label{eq:S+O-t2} \end{equation} The measurement process, as described by $O',$ is then given by the sequence \begin{equation} \underbrace{\psi_{init}\otimes(\alpha_1\phi_{q_1}+\alpha_2\phi_{q_2})}_{t_1} \,\, \Longrightarrow \,\, \underbrace{\alpha_1\psi_{q_1}\otimes\phi_{q_1}+\alpha_2\psi_{q_2}\otimes\phi_{q_2}}_{t_2}. \label{eq:sequence-O'} \end{equation} \noindent So far $O'$ may only claim that the states of $S$ and the states of $O$ are suitably correlated. But let us suppose now that $O'$, at a time $t_3>t_2$, performs on $S$ a measurement of $Q$. Since we are dealing with the two different observers $O$ and $O'$, it is natural to ask what general consistency condition we should assume to hold for the different descriptions that $O$ and $O'$ might give of the measurement process. In the present situation, such a condition can be the following: if $S$ at a time $t$ is in an eigenstate $\phi_q$ of an observable $Q$ relative to $O$, an observer $O'$ who measures $Q$ on $S$ at $t'>t$ (with no intermediate measurements, between $t$ and $t'$, of observables that are not compatible with $Q$) will find the eigenvalue belonging to $\phi_q$: therefore $S$ will be in the state $\phi_q$ also relative to $O'$ (we will return later to the general form that this consistency condition assumes). Therefore, the state of $S+O$ at $t_3$ relative to $O'$ will be $\psi_{q_2}\otimes\phi_{q_2}\,,$ since at $t_2$ the state of $S$ relative to $O$ had been reduced to $\phi_{q_2}.$ If we now compare (\ref{eq:sequence-O}) and (\ref{eq:sequence-O'}), we see that $O$ and $O'$ give a different account of the same sequence of events in the above described measurement: at $t_2$, $O$ attributes to $S$ the state $\phi_{q_2},$ whereas $O',$ who views $S$ as a subsystem of $S+O,$ attributes to $S$ the state $\alpha_1\phi_{q_1}+\alpha_2\phi_{q_2}.$
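To make the difference between the two accounts quantitative, note that in the state attributed by $O'$ at $t_2$ in (\ref{eq:sequence-O'}), the Born rule assigns to the configuration in which $O$ has recorded $q_2$ the probability $$ \vert\langle\psi_{q_2}\otimes\phi_{q_2}\,,\ \alpha_1\psi_{q_1}\otimes\phi_{q_1}+\alpha_2\psi_{q_2}\otimes\phi_{q_2}\rangle\vert^2=\vert\alpha_2\vert^2, $$ whereas relative to $O$ the state at $t_2$ is $\phi_{q_2}$ and that outcome is certain: the two descriptions genuinely differ, although, from the relational standpoint, neither is incorrect.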
The most general assumption of the relational interpretation of quantum mechanics can then be summarized as follows: the circumstance that even a simple quantum mechanical process like the measurement above can be described in different ways by different observers suggests that this relational aspect might be not an accident, but a fundamental property of quantum mechanics. In addition to this, the relational interpretation makes a pair of further assumptions, concerning respectively the universality and the completeness of quantum mechanics.
\noindent 1. {\it All physical systems are equivalent.}
\noindent No specific assumption is made concerning the systems that are supposed to act as observers, except that they must satisfy the laws of quantum mechanics: being an observer is not a property fixed once and for all of a privileged class of physical systems, permanently identifiable as `observation systems' and clearly separable from the rest of physical systems (\cite{Rov}, p.1644), nor is it implicitly assumed that such observation systems are conscious entities.
\noindent 2. {\it Relational quantum mechanics is a complete physical theory.}
\noindent The general circumstance that different observers may give different accounts of the same processes is no sign of any fundamental incompleteness of quantum mechanics, but is simply the consequence of a relational meta-assumption, according to which there is no `absolute' or `from-outside' point of view from which we might evaluate the states of a physical system or the value of quantities measurable on that system in those states. ``Quantum mechanics can therefore be viewed as a theory about the states of systems and values of physical quantities relative to other systems. [...] If the notion of observer-independent description of the world is unphysical, a complete description of the world is exhausted by the relevant information that systems have about each other.'' (\cite{Rov}, p. 1650).
One might be tempted to describe the situation above simply by saying that the difference between $O$ and $O'$ is that $O$ knows at $t_2$ what the state of $S$ is whereas $O'$ does not, and it is for this reason that $O'$ attributes to $S$ a superposition state, namely an informationally `incomplete' state. The problem with this position, however, is that it implicitly assumes the `absolute' viewpoint on states of physical systems, namely just the viewpoint that the relational interpretation urges us to reject as implausible. Moreover, on the basis of the above account, one might draw a general distinction between a {\it description} and an {\it observation} of a system $S$ by an observer $O$: in the former case, a `description' of $S$ involves no interaction between $O$ and $S$ {\it at the time in which $O$ describes $S,$} although it is still necessarily based on some {\it prior} interaction between $S$ and other systems. On the other hand, we may say that $O$ `observes' $S$ when $O$ actually measures some relevant physical quantity on $S$: it is clear that, in this case, there is an interaction between $S$ and $O$ that occurs exactly when $O$ is said to `observe' $S.$
If we return to our specific example, we might consider the `description' that $O'$ gives of $S$ {\it at a given $t$} in terms of correlation properties of the system $O+S$ as the maximal amount of information on the measurement process involving $S$ and $O$ that is available to $O'$ in absence of interaction {\it at $t$} between $O'$ and the composite system $O+S.$ So let us recall the sequence (\ref{eq:sequence-O'}) of events. $O'$ is supposed to perform no measurement in the time interval $[t_1,t_2]$ and thus can `describe' the state of $S$ at $t_2$ only through some observable $C_{(O,S)}$ (defined on $\cal{H}_S\otimes\cal{H}_O$), that tests whether $O$ has correctly recorded the result of the measurement on $S.$ The eigenvalues of $C_{(O,S)}$ are simply $1$ and $0$. The states $\psi_{q_1}\otimes\phi_{q_1},$ $\psi_{q_2}\otimes\phi_{q_2}$ turn out to be eigenstates belonging to the eigenvalue $1$ - the record was correct - whereas the states $\psi_{q_1}\otimes\phi_{q_2},$ $\psi_{q_2}\otimes\phi_{q_1}$ turn out to be eigenstates belonging to the eigenvalue $0$ - the record was incorrect, namely \begin{eqnarray*} C_{(O,S)}\,\psi_{q_1}\otimes\phi_{q_1}=\psi_{q_1}\otimes\phi_{q_1}, & & C_{(O,S)}\psi_{q_1}\otimes\phi_{q_2} = 0\\ C_{(O,S)}\,\psi_{q_2}\otimes\phi_{q_2}=\psi_{q_2}\otimes\phi_{q_2}, & & C_{(O,S)}\psi_{q_2}\otimes\phi_{q_1}=0.\\ \end{eqnarray*} On the basis of the above account, the consistency requirement that we mentioned earlier, and that concerns the relation between different descriptions of the same event given by different observers, appears rather natural: it can be expressed as the requirement that if the only information available to $O'$ is that $O$ has measured $Q$ on $S$ but the result is unknown, the results that $O'$ obtains by performing a $C_{(O,S)}$-measurement and a $Q$-measurement must be correlated (\cite{Rov}, pp. 1650-2).
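One concrete realization of $C_{(O,S)},$ consistent with the above eigenvalue equations, is the projector onto the `correct record' subspace, $$ C_{(O,S)}=P_{\psi_{q_1}\otimes\phi_{q_1}}+P_{\psi_{q_2}\otimes\phi_{q_2}}, $$ where $P_\chi$ denotes the orthogonal projector onto the ray spanned by $\chi.$ Its expectation value in the state $\alpha_1\psi_{q_1}\otimes\phi_{q_1}+\alpha_2\psi_{q_2}\otimes\phi_{q_2}$ equals $\vert\alpha_1\vert^2+\vert\alpha_2\vert^2=1$: without knowing the outcome, $O'$ can nevertheless predict with certainty that the record of $O$ is correct.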
\section{A relational analysis of the EPR argument}
An ideal place to look at to see how a relational approach might change the `absolute' view of quantum mechanical states is just the framework in which the EPR argument is usually developed (\cite{EPR}, \cite{Bohm}).
The physical framework common to all variants of this argument involves a two-particle system, whose subsystems interact for a short time and then separate. The original formulation of the EPR argument takes into account a pair of quantities for each particle, in such a way that the members of each pair are mutually incompatible. We will consider the usual non-relativistic Bohm version of the original argument, formulated as a spin correlation experiment, and in a simplified form that deals with just one quantity for each particle (\cite{Redhead}). The assumptions of the argument will be slightly rephrased as compared to the widespread formulation, but this rephrasing does not substantially affect the argument.
\begin{enumerate}
\item {\sc Reality}
\noindent If, without interacting with a physical system $S,$ we can predict with certainty - or with probability one - the result $q$ of the measurement of a quantity $Q$ performed at time $t$ on $S,$ then at a time $t'$ immediately after $t,$ there exists a property - associated with $q$ and denoted by $[q]$ - that is actually satisfied by $S$: any such property is said to be an {\it objective} property of $S.$
\item {\sc Completeness}
\noindent Any physical theory $T$ describing a physical system $S$ accounts for every objective property of $S.$
\item {\sc Locality}
\noindent No objective property of a physical system $S$ in a state $s$ can be influenced by measurements performed at a distance on a different physical system.
\item {\sc Adequacy}
\noindent The statistical predictions of quantum mechanics are correct. \end{enumerate}
\noindent A word of comment on the formulation of the Reality condition is in order. As is expected, the notion of an objective property $[q]$ of a physical system $S$ is equivalent to the notion of $S$ having $[q]$ no matter whether $Q$ is measured on $S$ or not. Clearly $S$ may have non-objective properties such as, for instance, `correlation' properties: certain possible values of a quantity $R$ measurable on $S$ are correlated to possible values of a quantity pertaining to the pointer of an apparatus that is supposed to measure $R$ on $S.$ Obviously $S$ cannot be said to have such properties independently of any measurement (actual or not) of $R.$
The experimental situation considered (often called an EPR-Bohm correlation experiment) involves a system $S_1+S_2$ of two spin-1/2 particles prepared at the source in the singlet state. If we focus only on the spin part, this state of $S_1+S_2$ can be written for any spatial direction $x$ as \begin{equation} {1\over\sqrt 2}[\phi_{1,x}(+)\otimes\phi_{2,x}(-)\,-\, \phi_{1,x}(-)\otimes\phi_{2,x}(+)]\,, \label{eq:singlet} \end{equation} where: \begin{itemize} \item $\phi_{i,x}(\pm)$ is the eigenvector of the operator $\sigma_{i,x},$ representing spin up or down along the direction $x$ for the particle $i=1,2;$ \item $\phi_{1,x}(+)\otimes\phi_{2,x}(-)$ and $\phi_{1,x}(-)\otimes\phi_{2,x}(+)$ belong to the tensor product ${\cal H}_1\otimes{\cal H}_2$ of the Hilbert spaces ${\cal H}_1$ and ${\cal H}_2$ associated respectively to subsystems $S_1$ and $S_2$. \end{itemize} $S_1$ and $S_2$ are supposed to fly off in opposite directions; spin measurements are supposed to be performed when $S_1$ and $S_2$ occupy two widely separated spacetime regions $R_1$ and $R_2,$ respectively. It follows from (\ref{eq:singlet}) and Adequacy that if on measurement we find spin up along the direction $x$ for the particle $S_1,$ the probability of finding spin down along the same direction $x$ for the particle $S_2$ equals $1$: it is usual to say that $S_1$ and $S_2$ are strictly anticorrelated.
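The strict anticorrelation can be read off from (\ref{eq:singlet}) directly: since $\phi_{1,x}(+)\otimes\phi_{2,x}(+)$ is orthogonal to both terms of the singlet state, $$ P(+,+)=\Big\vert\Big\langle \phi_{1,x}(+)\otimes\phi_{2,x}(+)\,,\ {1\over\sqrt 2}[\phi_{1,x}(+)\otimes\phi_{2,x}(-)-\phi_{1,x}(-)\otimes\phi_{2,x}(+)]\Big\rangle\Big\vert^2=0, $$ and likewise $P(-,-)=0,$ while $P(+,-)=P(-,+)={1\over 2}.$ Hence the conditional probability of finding spin down for $S_2,$ given spin up for $S_1$ along the same direction, is $({1\over 2})/({1\over 2})=1.$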
Let us suppose now that, for a given direction $z,$ we measure $\sigma_{1,z}$ at time $t_1$ and we find $-1.$ Adequacy then allows us - via anticorrelation - to predict with probability one the result of the measurement of $\sigma_{2,z}$ for any time $t_2$ immediately after $t_1,$ namely $+1.$ Then, according to Reality, there exists an objective property $[+1]$ of $S_2$ at $t_2.$ By Locality, $[+1]$ was an objective property of $S_2$ also at a time $t_0 < t_1,$ since otherwise it would have been `created' instantaneously by the act of performing a spin measurement on $S_1.$ However, at time $t_0$ the state of $S_2$ is a mixture, namely ${1\over 2}(P_{\phi_{2,x}(+)}+ P_{\phi_{2,x}(-)})$, since the entangled state (\ref{eq:singlet}), although it is a pure state of $S_1+S_2,$ uniquely determines the states of the subsystems as mixed states. Thus $S_2$ is shown to satisfy an objective property in a state which is not an eigenstate of $\sigma_{2,z}.$ However, all that quantum mechanics is able to predict is the satisfaction of properties such as $[+1]$ in eigenstates of $\sigma_{2,z}$: quantum mechanics then turns out to be incomplete, since there exist objective properties that are provably satisfied by a system described by quantum mechanics but that cannot be described in quantum mechanical terms. From a strictly logical point of view, the conclusion of the argument can be rephrased as the statement that the conjunction of Reality, Completeness, Locality and Adequacy leads to a contradiction.
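The mixed state of $S_2$ invoked in the argument can be obtained explicitly by partial trace: tracing out $S_1$ from the projector onto the singlet state (\ref{eq:singlet}) gives $$ \rho_2={1\over 2}\big(P_{\phi_{2,x}(+)}+P_{\phi_{2,x}(-)}\big)={1\over 2}I $$ for every direction $x,$ so that the state of $S_2$ at $t_0$ is maximally mixed and, in particular, is not the projector onto an eigenstate of $\sigma_{2,z}.$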
In the framework of ordinary quantum mechanics, Reality and Adequacy cannot be called into question: whereas the latter simply assumes that the probabilistic statements of quantum mechanics are reliable, without the former no quantum system could ever satisfy objective properties, not even a property such as having a certain value for a given quantity one is going to measure, when the system is prepared in an eigenstate belonging to that (eigen)value of that quantity.\footnote{See for instance the clear discussion in \cite{BLM} on objectivity and non-objectivity of properties in quantum mechanics.} The alternative then reduces to the choice between Completeness and Locality: by assuming Completeness, we then turn the above EPR argument into a nonlocality argument. A relativistic formulation of this argument can also be given: although the different geometry of spacetime must be taken into account, the only generalization lies in adapting the Locality condition, to the effect that objective properties of a physical system cannot be influenced by measurements performed in space-like separated regions on a different physical system (\cite{GG}).
Let us turn now to a relational analysis of the argument. In a relational approach to the EPR argument, we have to modify its main conditions accordingly (Adequacy needs no modification), basically by relativizing the objectivity of properties to given observers. The new versions might read as follows:
\begin{enumerate}
\item[$1'.$] {\sc Reality$^*$}
\noindent If an observer $O$, without interacting with a physical system $S,$ can predict with certainty (or at any rate with probability one) at time $t$ the value $q$ of a physical quantity $Q$ measurable on $S$ in a state $s,$ then, at a time $t'$ immediately after $t,$ $q$ corresponds to a property of $S$ that is objective relative to $O.$
\item[$2'.$] {\sc Completeness$^*$}
\noindent Any physical theory $T$ describing a physical system $S$ accounts for every property of $S$ that is objective relative to some observer.
\item[$3'.$] {\sc R-Locality$^*$}
\noindent No property of a physical system $S$ that is objective relative to some observer can be influenced by measurements performed in space-like separated regions on a different physical system. \end{enumerate}
\noindent Once the above weaker sense of objectivity is defined, Completeness$^*$ is little more than a rephrasing of the original condition, whereas R-Locality$^*$ guarantees that no property that is non-objective (in the weaker sense of objective as relative to a given observer) can be turned into an objective one (still in the weaker sense) simply by means of operations performed in space-like separated regions (at this stage, already the relativistic version of the Locality condition in the ordinary EPR argument is assumed).
So far the relational versions of the original EPR conditions have been introduced: but are they sufficient in order to derive the same conclusion drawn by the original argument? We are interested in the states of the subsystems $S_1$ and $S_2$ and the values of spin in those states when $S_1$ and $S_2,$ which initially interact for a short time and then fly off in opposite directions, are localized in space-like separated regions $R_1$ and $R_2$ respectively. According to the relational interpretation the reference to the state of a physical system is meaningful only relative to some observer. So let us introduce two observers, $O_1$ for $S_1$ and $O_2$ for $S_2,$ who, after coexisting in the spacetime region of the source for the short time of the interaction, each follow their respective subsystem. Initially, at time $t_0$, $O_1$ and $O_2$ agree on the state (\ref{eq:singlet}). But when, after leaving the source, $S_1$ and $S_2$ are subject to measurement, the spacetime regions in which the measurements are supposed to take place are mutually isolated. Then we suppose that, for a given direction $z,$ the observer $O_1$ measures $\sigma_{1,z}$ on $S_1$ at time $t_1 > t_0$ and finds $-1.$ Now the strict spin value anticorrelation built into the state (\ref{eq:singlet}) allows $O_1$ to predict with certainty the spin value for $S_2$ without interacting with it.
Specifically, Adequacy allows $O_1$ to predict with probability one the value of $\sigma_{2,z}$ on $S_2$ for any time $t_2$ immediately after $t_1,$ namely $+1.$ Then, according to Reality$^*$, there exists a property $[+1]$ of $S_2$ that is objective {\it relative to} $O_1$ at $t_2.$ By R-Locality$^*$, $[+1]$ was an objective property of $S_2$ relative to $O_1$ also at a time $t_0 < t_1,$ since otherwise it would have been created by the act of performing a spin measurement on the space-like separated system $S_1.$ Then $O_1$ may backtrack at $t_0$ the property $[+1]$ or, equivalently, the {\it pure} state $\phi_{2,z}(+).$ At this point the non-relational argument proceeds by pointing out that at $t_0$ the state of $S_2$ as determined by (\ref{eq:singlet}) was the mixture ${1\over 2}(P_{\phi_{2,x}(+)}+ P_{\phi_{2,x}(-)})$: since this kind of state cannot possibly display a property like $[+1],$ the charge of incompleteness for quantum mechanics follows. From a relational viewpoint, however, this conclusion does {\it not} follow: in fact, we are comparing two states relative to {\it different} observers. Since the regions $R_1$ and $R_2$ of the measurements are space-like separated, there are frames in which the measurement performed by $O_1$ precedes the measurement performed by $O_2$. Let us suppose that in one of these frames $O_1$ performs a spin measurement on $S_1$ along the direction $z$ at a time $t_1$ and finds $-1$; then, at $t_2$ immediately after $t_1$, $S_1$ is in the state $\phi_{1,z}(-)$ relative to $O_1$, and the latter is allowed to attribute to $S_2$ the pure state $\phi_{2,z}(+)$ at $t_2$, corresponding to a spin value $+1$. Thus there is a property $[+1]$ of $S_2$ that is objective relative to $O_1$; hence, according to R-Locality$^*$, $O_1$ should be allowed to backtrack at $t_0<t_1$ that same property $[+1]$ - satisfied by $S_2$ in $\phi_{2,z}(+)$ - but in this case he would derive incompleteness, since at that time the state of $S_2$ is a mixture.
At time $t_0$, however, $O_2$ has not yet performed any spin measurement: therefore $O_2$ is still allowed to attribute to $S_1$ and $S_2$ only the mixtures ${1\over 2}(P_{\phi_{1,x}(+)}+ P_{\phi_{1,x}(-)})$ and ${1\over 2}(P_{\phi_{2,x}(+)}+ P_{\phi_{2,x}(-)})$ respectively, and there is no matter of fact as to which observer `is right'. Until $O_2$ performs a measurement, $O_2$ may then describe the measurement by $O_1$ simply as the establishment of a correlation between $O_1$ and $S_1$.
The conclusion to be drawn is then that the question: {\it when is an observer allowed to claim, via the EPR argument, that quantum mechanics is either incomplete or nonlocal?} has a frame-dependent answer. Let us denote by $M_O^t(\sigma, S,\rho,R)$ the event that an observer $O$ performs at time $t$ a measurement of a physical quantity $\sigma$ on a system $S$ in the state $\rho$ in the region $R$. If we set \begin{eqnarray*} M_R & := & M_{O_1}^{t^R}(\sigma_{1,x}, S_1,{1\over 2}(P_{\phi_{1,x}(+)}+ P_{\phi_{1,x}(-)}),R_1)\\ M_L & := & M_{O_2}^{t^L}(\sigma_{2,x}, S_2,{1\over 2}(P_{\phi_{2,x}(+)}+ P_{\phi_{2,x}(-)}),R_2), \end{eqnarray*} the three possible cases are:
\noindent (a) $M_R$ precedes $M_L$;
\noindent (b) $M_R$ follows $M_L$;
\noindent (c) $M_R$ and $M_L$ are simultaneous.
In case (a), the EPR argument works for $O_1$ but not for $O_2$, namely quantum mechanics is either incomplete or nonlocal relative to $O_1$ but not relative to $O_2$, whereas the situation is reversed in case (b). Finally, in case (c), what the EPR argument implies is nothing but the outcome-outcome dependence built into the correlations intrinsic to EPR entangled states (\cite{Shimony1}). However, until $O_1$ and $O_2$ interact, they cannot compare their respective state attributions, so that, up to the interaction time, each observer can again claim either the incompleteness or the nonlocality of quantum mechanics only relative to himself.
\section{Final discussion}
As we recalled above, the EPR incompleteness argument for ordinary quantum mechanics can be turned into a nonlocality argument, which appears to threaten the mutual compatibility at a fundamental level of quantum theory and relativity theory. The received view has been that, however relevant nonlocality may be from a foundational viewpoint, the conflict engendered by it is not as deep as it seems. There would be in fact a `peaceful coexistence' between the two theories, since locality in quantum mechanics would be recovered at the statistical level and in any case - it is argued - nonlocal correlations are uncontrollable, so that no superluminal transmission of information is allowed.\footnote{See for instance \cite{Eberhard}, \cite{GRW} and \cite{Shimony1}. For dissenting views, one can see \cite{CJ} and \cite{Muller}.}
Such a view, however, has several drawbacks: first of all, it rests on highly controversial notions such as the `controllability' (or `uncontrollability') of information, on the vagueness of establishing exactly when an `influence' becomes a bit of information, on whether relativity theory prohibits superluminal exchanges of information but not superluminal travel of influences, and on similar subtleties. On the other hand, the very inapplicability of the ordinary EPR argument in relational quantum mechanics prevents this kind of controversies and provides a new way to interpret the peaceful coexistence thesis, since in the relational interpretation no compelling alternative between completeness and locality via the EPR argument can be derived in a frame-independent way.
The observer dependence that affects the EPR argument in a relational approach to quantum mechanics might also imply a different view of the hidden variables program. Most hidden variable theories do essentially two things. First, on the basis of the EPR argument's conclusion they {\it assume} the incompleteness of quantum mechanics as a starting point; second, they introduce the hypothesis of a set of `complete' states of a classical sort, averaging over which gives predictions that are supposed to agree with the quantum mechanical ones (see for instance \cite{CS}, \cite{Redhead}). If however, in a relational approach to quantum mechanics, the conditions of Reality$^*$, Completeness$^*$, R-Locality$^*$ and Adequacy need not clash from one frame of reference to the other, the attempt of `completing' quantum mechanics by introducing hidden variables turns out to be unmotivated from a relational point of view, and so does any nonlocality argument that has this attempt as a premise. Moreover, the hypothetical complete states of hidden variables theories are conceived themselves as observer-{\it in}dependent, so that an absolute view of the states of physical systems would be reintroduced, albeit at the level of the hidden variables.
The issue of whether statistical correlations across space-like separated regions are a real threat to a peaceful coexistence between quantum theory and relativity theory is being more and more investigated also in the framework of algebraic quantum field theory (AQFT): in fact a suitable form of Bell inequality - an inequality that in different formulations has been extensively studied with reference to the nonlocality issue in non-relativistic quantum mechanics (\cite{Bell}) - has been shown to be violated also in AQFT (see for instance \cite{SW} and \cite{S}). The questions of whether a relational interpretation of AQFT might be developed, and of what such interpretation might have to say about the violation of the Bell inequality in AQFT, appear worth investigating for several reasons. First, an absolute view of quantum states seems in principle precluded in AQFT. In the algebraic framework for quantum field theory, local algebras are axiomatically associated with specific (open bounded) regions of the Minkowski spacetime: the elements of the algebras are represented as the observables that can be measured in the spacetime region to which the algebra is associated. The states, represented as suitable expectation functionals on the given algebra, encode the statistics of all possible measurements of the observables in the algebra and thus inherit from the latter the feature of being defined not globally but with respect to a particular spacetime region. Second, the derivation of a suitable Bell inequality and the analysis of its violation refer neither to unspecified hidden variables nor to the need of introducing them. Finally, the locality that the violation of the suitable Bell inequality might call into question is directly motivated by relativistic constraints and is not a hypothetical condition satisfied only by hidden variables, and imposed over and above a theory that is intrinsically nonlocal such as ordinary quantum mechanics.
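For reference, in its most familiar (CHSH) form for non-relativistic quantum mechanics the Bell inequality bounds a combination of expectation values of spin correlations,
\begin{displaymath}
|\,E(a,b)+E(a,b')+E(a',b)-E(a',b')\,| \;\le\; 2\,,
\end{displaymath}
where $E(a,b)$ denotes the expectation of the product of the two spin outcomes along the directions $a$ and $b$; the singlet state (\ref{eq:singlet}) violates this bound up to the value $2\sqrt 2$ for suitable choices of the directions $a,a',b,b'.$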
\noindent {\bf Acknowledgements}
\noindent I am deeply grateful to Carlo Rovelli for an interesting correspondence on an earlier version of this paper: his insightful comments greatly helped me to clarify a number of relevant points, although I am of course solely responsible for the arguments defended here. This work was completed during a visit to the Department of History and Philosophy of Science of the E\"otv\"os University in Budapest. I wish to thank in particular Mikl\'os R\'edei and L\'aszl\'o Szab\'o for their support and their hospitality. This work is supported by a NATO CNR Outreach Fellowship.
\end{document} |